Basic Tensor Creation:
Question: Write a PyTorch code snippet to create a 3×3 tensor of random numbers and multiply it by 2.
Why: To ensure you understand tensor creation, basic arithmetic, and using built‑in functions.
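A minimal sketch of one answer, using torch.rand for uniform random values (torch.randn would give normally distributed ones instead):

```python
import torch

# 3x3 tensor of uniform random numbers in [0, 1), then scale by 2
t = torch.rand(3, 3)
result = t * 2
print(result)
```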
Tensor from Python Data Structures:
Question: How do you create a tensor from a list of lists and compute its element‑wise square?
Why: To practice converting Python lists to tensors and applying element‑wise operations.
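One way to answer, sketched with an arbitrary 2×2 example list:

```python
import torch

# torch.tensor copies a nested Python list into a tensor
data = [[1, 2], [3, 4]]
t = torch.tensor(data)
squared = t ** 2          # element-wise; torch.square(t) is equivalent
print(squared)            # tensor([[ 1,  4], [ 9, 16]])
```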
Tensor Indexing & Slicing:
Question: Write code to slice a given 4×4 tensor to extract the middle 2×2 sub-tensor.
Why: It reinforces your ability to index and manipulate tensor subsets.
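A sketch of one answer; "middle" is taken here to mean rows 1–2 and columns 1–2 of the 0-indexed tensor:

```python
import torch

t = torch.arange(16).reshape(4, 4)
middle = t[1:3, 1:3]      # rows 1..2, columns 1..2 (end index exclusive)
print(middle)             # tensor([[ 5,  6], [ 9, 10]])
```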
Broadcasting in PyTorch:
Question: Demonstrate broadcasting by adding a 1×3 tensor to a 4×3 tensor.
Why: To understand how PyTorch automatically expands dimensions during operations.
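A minimal demonstration; the values are arbitrary:

```python
import torch

a = torch.ones(4, 3)
b = torch.tensor([[10.0, 20.0, 30.0]])   # shape (1, 3)
# b's size-1 dimension is implicitly expanded to 4 rows before adding
print(a + b)                             # result has shape (4, 3)
```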
Matrix Multiplication:
Question: Write a snippet that computes the matrix product of two tensors using torch.matmul.
Why: To become comfortable with linear algebra operations that are foundational in deep learning.
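One possible snippet; the shapes are chosen only to show how the inner dimensions must match:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
c = torch.matmul(a, b)    # (2x3) @ (3x4) -> (2x4); a @ b is equivalent
print(c.shape)            # torch.Size([2, 4])
```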
Understanding Data Types:
Question: How do you change a tensor’s data type (e.g., from float32 to int64) and why might this be important?
Why: It’s critical to know how data types affect model computations and performance.
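A sketch of the casting part of the answer:

```python
import torch

f = torch.tensor([1.7, 2.3], dtype=torch.float32)
i = f.to(torch.int64)     # or f.long(); fractional parts are truncated
print(i, i.dtype)         # tensor([1, 2]) torch.int64
```

As for why it matters: the dtype determines numeric precision, memory footprint, and what an operation will accept; index tensors passed to embedding or gather operations, for instance, must be int64.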
Gradient Computation Basics:
Question: Write a code snippet to compute the gradient of a simple scalar function (e.g., f(x)=x²) using torch.autograd.
Why: To understand how PyTorch tracks operations and computes gradients automatically.
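A minimal sketch for f(x) = x² at x = 3, where the analytic gradient 2x equals 6:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
f = x ** 2
f.backward()              # autograd walks the recorded graph
print(x.grad)             # tensor(6.)
```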
Disabling Gradients:
Question: How would you disable gradient calculations during inference using PyTorch? Write an example.
Why: This practice is essential for saving memory and speeding up inference.
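One common pattern, sketched with a throwaway linear layer (torch.inference_mode() is a newer alternative with the same intent):

```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

with torch.no_grad():     # no graph is recorded inside this block
    y = model(x)
print(y.requires_grad)    # False
```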
Custom Autograd Function:
Question: Create a custom PyTorch autograd Function for a simple operation (for example, a custom square function) that implements both forward and backward passes.
Why: To deepen your understanding of the autograd system and custom gradient computation.
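A sketch of one possible custom square function; the class name is an arbitrary choice:

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)    # stash x for use in backward
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x  # chain rule: d(x^2)/dx = 2x

x = torch.tensor(3.0, requires_grad=True)
Square.apply(x).backward()
print(x.grad)                       # tensor(6.)
```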
In-place vs. Out-of-place Operations:
Question: Explain the difference between in-place and out-of-place tensor operations and demonstrate with a code example.
Why: To learn how in-place operations can affect gradient tracking and memory usage.
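A small illustration; note the trailing-underscore naming convention for in-place ops:

```python
import torch

a = torch.ones(3)
b = a.add(1)      # out-of-place: returns a new tensor, a is unchanged
a.add_(1)         # in-place: mutates a's storage directly
print(a, b)       # both print tensor([2., 2., 2.])

# Caveat: in-place ops can overwrite values autograd still needs,
# in which case backward() raises a runtime error.
```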
Simple Linear Model:
Question: Implement a linear regression model using nn.Module in PyTorch.
Why: To practice building a custom model and understand module structure.
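One minimal sketch; the feature sizes are placeholders:

```python
import torch
import torch.nn as nn

class LinearRegression(nn.Module):
    def __init__(self, in_features: int, out_features: int = 1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)  # y = xW^T + b

    def forward(self, x):
        return self.linear(x)

model = LinearRegression(in_features=2)
print(model(torch.randn(5, 2)).shape)   # torch.Size([5, 1])
```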
Feedforward Neural Network:
Question: Code a two-layer MLP for a simple classification task and explain each component.
Why: It builds your skills in structuring multi-layer networks.
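A sketch of one such network; the MNIST-like sizes (784 inputs, 10 classes) are assumptions, not requirements:

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=128, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)       # input -> hidden
        self.act = nn.ReLU()                           # non-linearity
        self.fc2 = nn.Linear(hidden_dim, num_classes)  # hidden -> logits

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))

logits = MLP()(torch.randn(32, 784))
print(logits.shape)       # torch.Size([32, 10])
```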
Activation Functions:
Question: Write a custom PyTorch module that applies ReLU or GELU activation and explain why non-linearities are essential.
Why: To appreciate the role of activation functions in deep networks.
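One way to sketch it; the module name and the string switch are arbitrary choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Activation(nn.Module):
    def __init__(self, kind: str = "relu"):
        super().__init__()
        self.kind = kind

    def forward(self, x):
        # Without a non-linearity, stacked linear layers collapse
        # into one linear map, so depth would add no expressive power.
        return F.relu(x) if self.kind == "relu" else F.gelu(x)

x = torch.tensor([-1.0, 0.0, 1.0])
print(Activation("relu")(x))
print(Activation("gelu")(x))
```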