4. AutoDiff & Optimization

Differentiable Rendering and Optimization

Here we introduce the backward pass. This is where automatic differentiation truly shines, letting a neural field iteratively learn the shape of a target reference.

On the right, we have a split-screen A/B Comparison:

Operator  | Mathematical Operation  | Use Case in Neural Rendering
fwd_diff  | Jacobian-vector product | Computing surface normals from SDFs
bwd_diff  | Vector-Jacobian product | Training scene weights via gradient descent
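The two operators above correspond to the two modes of automatic differentiation. As a sketch (in JAX rather than the demo's own language), the unit-sphere SDF below is a hypothetical stand-in: forward mode recovers the SDF gradient, i.e. the surface normal, while reverse mode pulls a scalar sensitivity back to all inputs, which is what training needs.

```python
import jax
import jax.numpy as jnp

# Hypothetical signed distance function: a unit sphere at the origin.
def sdf(p):
    return jnp.linalg.norm(p) - 1.0

p = jnp.array([0.0, 0.0, 2.0])

# Forward mode (Jacobian-vector product): sensitivity along one input direction.
# The SDF's gradient is the surface normal, so jax.grad (built on these products)
# recovers the normal directly.
normal = jax.grad(sdf)(p)

# Reverse mode (vector-Jacobian product): pull a scalar output sensitivity back
# through the function to every input at once, as gradient descent requires.
value, vjp_fn = jax.vjp(sdf, p)
(grad_p,) = vjp_fn(1.0)
```

For a scalar-valued function like an SDF the two routes agree; reverse mode wins when one output (a loss) depends on many weights, forward mode when few inputs feed many outputs.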

The loss is the mean squared error (MSE) between the right image (the guess) and the left image (the target). Press Play to run the optimization loop, and adjust the Learning Rate to see how step size affects the speed of convergence.
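The loop behind the Play button can be sketched in a few lines. This is a toy stand-in, not the demo's code: `render` is a hypothetical one-weight differentiable "renderer", and the target is chosen so the true weight is 0.5.

```python
import jax
import jax.numpy as jnp

# Toy "ground truth" image (stand-in for the left panel).
target = jnp.array([0.2, 0.5, 0.9])

def render(w):
    # Hypothetical differentiable renderer: one weight scales a fixed pattern.
    return w * jnp.array([0.4, 1.0, 1.8])

def mse_loss(w):
    # MSE between the guess (right panel) and the target (left panel).
    return jnp.mean((render(w) - target) ** 2)

learning_rate = 0.1
w = 0.0
for epoch in range(200):
    # Reverse mode gives loss and d(loss)/dw in one backward pass.
    loss, grad = jax.value_and_grad(mse_loss)(w)
    w = w - learning_rate * grad  # gradient-descent step
```

A larger learning rate takes bigger steps and converges in fewer epochs, up to the point where the updates overshoot and the loss oscillates or diverges.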

[Interactive viewer: "Ground Truth Target" (left) vs. "Differentiable Neural Guess" (right), with a live Epoch and MSE Loss readout. Press Animate to start.]

Real-Time Loss Curve

Plotting the error rate visually validates training progress: as the weights learn, the error drops rapidly at first, then flattens as the model converges near zero.
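That drop-then-plateau shape is easy to see by recording the loss each epoch. A minimal sketch, using a hypothetical quadratic loss in place of the rendering MSE:

```python
import jax

# Hypothetical stand-in for the rendering MSE: minimized at w = 0.5.
def loss_fn(w):
    return (w - 0.5) ** 2

w, lr = 0.0, 0.2
history = []
for epoch in range(50):
    loss, grad = jax.value_and_grad(loss_fn)(w)
    history.append(float(loss))  # one point on the real-time loss curve
    w -= lr * grad

# Early epochs cut the error fastest; later epochs barely move it,
# which is the flattening the curve shows near convergence.
```

Plotting `history` on a log scale makes the behavior clearest: geometric convergence appears as a straight descending line.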
