
3. Neural Radiance Fields

Neural Implicit Representations

Traditional rendering uses discrete data structures—like a 2D grid of pixels for an image, or a 3D grid of voxels for a volume. A Neural Implicit Representation discards the grid entirely. Instead, a neural network (a Multi-Layer Perceptron, or MLP) acts as a mathematical function that you can query at any continuous point in space.

You pass in a continuous spatial coordinate (x, y, z), and the network outputs the density and color at that exact point. Because the function can be evaluated anywhere, resolution is effectively unlimited, without the massive memory overhead of a dense 3D voxel grid.
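The idea above can be sketched as a tiny MLP queried at arbitrary coordinates. This is a minimal illustration, not the actual NeRF architecture: the layer sizes, random weights, and function names are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer: (x, y, z) -> 64 units -> (density, r, g, b).
W1 = rng.normal(0.0, 0.5, (3, 64))
b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.5, (64, 4))
b2 = np.zeros(4)

def query(point):
    """Evaluate the implicit field at any continuous (x, y, z)."""
    h = np.maximum(0.0, point @ W1 + b1)      # ReLU hidden layer
    out = h @ W2 + b2
    density = np.log1p(np.exp(out[0]))        # softplus keeps density >= 0
    color = 1.0 / (1.0 + np.exp(-out[1:]))    # sigmoid keeps RGB in [0, 1]
    return density, color

# No grid: any coordinate is a valid query, at arbitrary precision.
sigma, rgb = query(np.array([0.123456, -0.5, 0.999]))
```

Note that the network itself is the scene representation: its weights, not a grid, store the geometry and appearance.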

The Problem (Spectral Bias): Neural networks are fundamentally biased toward learning smooth, low-frequency functions. If you feed raw, linear coordinates directly into an MLP, it struggles to represent sharp edges and fine detail, resulting in blurry, blobby outputs.

The A/B Comparison

To fix this spectral bias, we use Fourier Features (Positional Encoding). Instead of passing the raw coordinate x, we preprocess it into a high-dimensional array of sine and cosine waves at exponentially increasing frequencies: [sin(x), cos(x), sin(2x), cos(2x), sin(4x), cos(4x), ...].
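A minimal sketch of this encoding, following the doubling-frequency pattern written above (the original NeRF paper additionally scales the argument by π, omitted here for simplicity):

```python
import numpy as np

def positional_encoding(p, num_bands=4):
    """Expand each coordinate into sin/cos pairs at doubling frequencies:
    [sin(p), cos(p), sin(2p), cos(2p), sin(4p), cos(4p), ...]."""
    feats = []
    for i in range(num_bands):
        freq = 2.0 ** i
        feats.append(np.sin(freq * p))
        feats.append(np.cos(freq * p))
    return np.concatenate(feats)

x = np.array([0.7])
gamma = positional_encoding(x, num_bands=4)  # 1 coordinate -> 8 features
```

The high-frequency bands give the downstream MLP ready-made oscillations, so it no longer has to synthesize sharp transitions from a smooth input.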

The Positional Encoding Slider

The slider controls how many high-frequency sine/cosine bands are fed into the network's input layer:
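One concrete effect of that slider is on the width of the network's input layer. A small sketch of the arithmetic, assuming each of the 3 coordinates expands into one (sin, cos) pair per band, optionally kept alongside the raw coordinate (whether the raw coordinate is included is an implementation choice, assumed here):

```python
def encoded_dim(in_dim=3, num_bands=4, include_raw=True):
    """Width of the feature vector fed to the MLP: each input coordinate
    contributes one (sin, cos) pair per frequency band, plus optionally
    the raw coordinate itself."""
    return in_dim * 2 * num_bands + (in_dim if include_raw else 0)

# Sliding the band count from 0 to 10 widens the input layer:
widths = [encoded_dim(3, L) for L in range(11)]  # 3, 9, 15, ..., 63
```

With zero bands the network sees only the raw (x, y, z); each extra band adds six more input features.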

[Interactive A/B comparison: Raw Coordinate MLP vs. Fourier Features (Pos Encoding)]