We are introducing the "Backward Pass". This is where automatic differentiation truly shines: it lets a neural field iteratively learn the shape of our target reference.
On the right, we have a split-screen A/B comparison. The two differentiation operators at play are summarized below:
| Operator | Mathematical Operation | Use Case in Neural Rendering |
|---|---|---|
| fwd_diff | Jacobian-vector product | Calculating surface normals from SDFs |
| bwd_diff | Vector-Jacobian product | Training scene weights via gradient descent |
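The distinction between the two rows can be sketched in plain Python. This is an illustrative analogue, not the demo's actual `fwd_diff`/`bwd_diff` implementation: we pick a hypothetical toy function `f` and write its Jacobian by hand, so the JVP and VJP are just matrix products from opposite sides.

```python
import math

def f(x0, x1):
    # Toy differentiable function: R^2 -> R^2
    return (x0 * x1, math.sin(x0))

def jacobian(x0, x1):
    # Analytic Jacobian of f at (x0, x1)
    return [[x1, x0],
            [math.cos(x0), 0.0]]

def jvp(x0, x1, v):
    # Forward mode (like fwd_diff): Jacobian-vector product J @ v.
    # Pushes an input tangent v forward to an output tangent.
    J = jacobian(x0, x1)
    return [J[0][0] * v[0] + J[0][1] * v[1],
            J[1][0] * v[0] + J[1][1] * v[1]]

def vjp(x0, x1, u):
    # Reverse mode (like bwd_diff): vector-Jacobian product u^T @ J.
    # Pulls an output cotangent u back to input gradients.
    J = jacobian(x0, x1)
    return [u[0] * J[0][0] + u[1] * J[1][0],
            u[0] * J[0][1] + u[1] * J[1][1]]
```

One JVP with `v` along an input axis gives one column of the Jacobian (handy for SDF normals, where inputs are few); one VJP with `u` along an output axis gives one row (handy for training, where the loss is a single output and the weights are many).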
Loss is computed as the Mean Squared Error (MSE) between the right pane (the guess) and the left pane (the target). Press Play to run the optimization loop, and adjust the Learning Rate to see how step size affects the speed of convergence!
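The loop behind the Play button can be sketched as follows. This is a hypothetical stand-alone Python analogue under simplifying assumptions: "images" are flat lists of pixel values, and the MSE gradient is written out by hand where the demo's `bwd_diff` would supply it automatically.

```python
def mse(guess, target):
    # Mean Squared Error between two equal-length pixel lists
    return sum((g - t) ** 2 for g, t in zip(guess, target)) / len(guess)

def mse_grad(guess, target):
    # Gradient of MSE w.r.t. each guess pixel: 2 * (g - t) / N
    n = len(guess)
    return [2.0 * (g - t) / n for g, t in zip(guess, target)]

def optimize(guess, target, learning_rate, steps):
    # Gradient descent: step each pixel against its gradient
    guess = list(guess)
    for _ in range(steps):
        grad = mse_grad(guess, target)  # bwd_diff's role in the demo
        guess = [g - learning_rate * d for g, d in zip(guess, grad)]
    return guess

target = [0.2, 0.8, 0.5]
fitted = optimize([0.0, 0.0, 0.0], target, learning_rate=0.5, steps=200)
```

With this toy setup, a small learning rate converges slowly, while one large enough to make the per-step multiplier exceed 1 in magnitude overshoots and diverges, which is exactly the behavior the slider exposes.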