The Slang Code
[Differentiable]
vector<float, N> smoothStep<let N : int>(vector<float, N> x, vector<float, N> minval, vector<float, N> maxval)
{
    vector<float, N> y = clamp((x - minval) / (maxval - minval), 0.f, 1.f);
    return y * y * (3.f - 2.f * y);
}

[Differentiable]
float smoothStep(float x, float minval, float maxval)
{
    float y = clamp((x - minval) / (maxval - minval), 0.f, 1.f);
    return y * y * (3.f - 2.f * y);
}
1. The Problem: Hard Clamp vs Smooth Clamp

When training a model (such as a Gaussian Splat), parameters are updated by gradient descent. If a parameter drifts out of its valid range, a hard clamp (clamp(x, 0, 1)) fixes the value but makes the gradient zero outside the range, so the optimiser has no signal to bring it back. smoothStep solves this with a smooth polynomial that keeps the output in range while letting the gradient taper gradually to zero at the boundaries instead of dropping off abruptly.
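The contrast can be seen numerically. This is a minimal Python sketch (not the Slang source) that compares finite-difference gradients of a hard clamp and of smoothStep near the boundary:

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def smoothstep(x, minval, maxval):
    t = clamp((x - minval) / (maxval - minval), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def numerical_grad(f, x, h=1e-6):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

# Inside the range both functions pass gradient through, but the hard
# clamp's gradient stays at 1 right up to the edge and then drops to 0,
# while smoothStep's gradient tapers smoothly toward 0.
for x in (0.5, 0.95, 1.05):
    g_clamp = numerical_grad(lambda v: clamp(v, 0.0, 1.0), x)
    g_smooth = numerical_grad(lambda v: smoothstep(v, 0.0, 1.0), x)
    print(f"x={x:.2f}  d(clamp)/dx={g_clamp:.3f}  d(smoothStep)/dx={g_smooth:.3f}")
```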

Interactive demo: drag the vertical cursor and the range sliders to compare the two (e.g. x = 0.30 gives clamp → 0.30 and smoothStep → 0.216 over the range [0, 1]).
2. The Formula Step by Step

Step 1 — Normalise:

t = clamp((x - minval) / (maxval - minval), 0, 1)

This maps the input range to [0, 1]. Outside the range, it clamps to 0 or 1.
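The normalisation can be checked with a quick Python sketch; the range [2, 6] here is an arbitrary example, not from the original page:

```python
def normalize(x, minval, maxval):
    t = (x - minval) / (maxval - minval)
    return max(0.0, min(1.0, t))  # clamp to [0, 1]

print(normalize(3.0, 2.0, 6.0))  # (3 - 2) / (6 - 2) = 0.25
print(normalize(8.0, 2.0, 6.0))  # above the range, clamps to 1.0
```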

Step 2 — Hermite polynomial:

output = t² · (3 - 2t)

This is the cubic Hermite basis function H₁(t) = 3t² − 2t³. It satisfies H(0) = 0, H(1) = 1, and H′(0) = H′(1) = 0, so the curve starts and ends flat.

The derivative:

d(output)/dt = 6t(1 - t)

This is maximised at t = 0.5 (steepest slope in the middle) and exactly 0 at t = 0 and t = 1.
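Both claims can be verified numerically. A small Python check (a sketch, not the Slang code) compares the analytic derivative 6t(1 − t) against finite differences and confirms the peak slope of 1.5 at t = 0.5:

```python
def hermite(t):
    # the cubic Hermite basis on [0, 1]: t^2 * (3 - 2t)
    return t * t * (3.0 - 2.0 * t)

def analytic_dt(t):
    return 6.0 * t * (1.0 - t)

h = 1e-6
for t in (0.1, 0.5, 0.9):
    numeric = (hermite(t + h) - hermite(t - h)) / (2 * h)
    assert abs(numeric - analytic_dt(t)) < 1e-4

# Steepest slope at the midpoint: 6 * 0.5 * 0.5 = 1.5
print(analytic_dt(0.5))  # 1.5
# Exactly zero at the boundaries
print(analytic_dt(0.0), analytic_dt(1.0))  # 0.0 0.0
```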

Interactive demo: drag the cursor to see each step of the computation (e.g. at x = 0.30: Step 1, t = clamp((0.30 - 0) / (1 - 0), 0, 1) = 0.300; Step 2, output = 0.300² × (3 - 2×0.300) = 0.216).
3. The Derivative: Why Gradients Don't Vanish

The derivative of smoothStep with respect to the input x is:

d(smoothStep)/dx = 6t(1-t) / (maxval - minval)    where t = clamp(...)

This is a bell-shaped curve: zero at the boundaries, maximum at the midpoint. Crucially, the gradient does not fall off a cliff at the edges of the range; it tapers smoothly to zero, so a parameter near a boundary still receives a useful (if small) update.
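The 1/(maxval − minval) factor comes from the chain rule through the normalisation step. A Python sketch with a width-2 range (values chosen for illustration) shows the peak slope halving accordingly:

```python
def smoothstep(x, minval, maxval):
    t = max(0.0, min(1.0, (x - minval) / (maxval - minval)))
    return t * t * (3.0 - 2.0 * t)

def d_smoothstep_dx(x, minval, maxval):
    # chain rule: d(output)/dt * dt/dx = 6t(1-t) / (maxval - minval)
    t = max(0.0, min(1.0, (x - minval) / (maxval - minval)))
    return 6.0 * t * (1.0 - t) / (maxval - minval)

# With a range of width 2 the peak slope halves: 1.5 / 2 = 0.75
print(d_smoothstep_dx(1.0, 0.0, 2.0))   # midpoint: 0.75
print(d_smoothstep_dx(0.0, 0.0, 2.0))   # boundary: 0.0
print(d_smoothstep_dx(-0.5, 0.0, 2.0))  # outside: 0.0, approached smoothly
```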

Compare to hard clamp: its derivative is 1 everywhere inside the range and drops discontinuously to 0 at the boundaries, so a parameter that leaves the range receives no warning and no gradient.

Interactive demo: drag the cursor to see values and toggle curves with the buttons (readout at x = 0.30: smoothStep = 0.216, derivative = 0.882).
4. [Differentiable] and Automatic Differentiation

The [Differentiable] attribute in Slang tells the compiler that this function can be automatically differentiated — the compiler generates the derivative code for you. This is essential for training Gaussian Splats via gradient descent.

When smoothStep is used inside a loss function, the compiler generates forward- and reverse-mode derivative code for it and propagates gradients through the chain rule back to the parameters.

Without [Differentiable], you'd have to implement the derivative manually and risk errors.

[Differentiable]
float computeOpacity(float rawParam)
{
    return smoothStep(rawParam, 0.f, 1.f);
}

// The compiler generates this derivative automatically:
// d(rawParam) += d(opacity) * 6t(1-t)   where t = clamp(rawParam, 0, 1)
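What that generated backward pass computes can be mimicked by hand. This Python mock-up (not actual Slang output) implements the derivative from the comment above and checks it against finite differences:

```python
def compute_opacity(raw):
    # forward pass: smoothStep with minval = 0, maxval = 1
    t = max(0.0, min(1.0, raw))
    return t * t * (3.0 - 2.0 * t)

def compute_opacity_bwd(raw, d_opacity):
    # hand-written analogue of the compiler-generated derivative:
    # d(rawParam) += d(opacity) * 6t(1-t)
    t = max(0.0, min(1.0, raw))
    return d_opacity * 6.0 * t * (1.0 - t)

# chain-rule check against a central finite difference
h = 1e-6
raw = 0.3
numeric = (compute_opacity(raw + h) - compute_opacity(raw - h)) / (2 * h)
assert abs(compute_opacity_bwd(raw, 1.0) - numeric) < 1e-4
```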
Animated demo: gradient descent, with the parameter converging to the target (Step 0 readout: x = -0.300, output = 0.000, loss = 0.490; sliders at 0.050, 0.70, -0.30).
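The descent loop can be sketched in Python. The target (0.70) and learning rate (0.05) are read off the demo and are assumptions; note that at the demo's start of x = -0.30 the smoothStep gradient is exactly zero, so this sketch starts just inside the range instead:

```python
def smoothstep(x):
    t = max(0.0, min(1.0, x))
    return t * t * (3.0 - 2.0 * t)

def d_smoothstep(x):
    t = max(0.0, min(1.0, x))
    return 6.0 * t * (1.0 - t)

target = 0.70  # assumed target output, from the demo readout
lr = 0.05      # assumed learning rate
x = 0.10       # start just inside the range (at x = -0.30 the gradient is 0)

for step in range(500):
    out = smoothstep(x)
    loss = (out - target) ** 2
    grad = 2.0 * (out - target) * d_smoothstep(x)  # chain rule through the loss
    x -= lr * grad

print(f"x = {x:.3f}, output = {smoothstep(x):.3f}, loss = {loss:.6f}")
```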
5. The Vector Variant

The generic version vector<float, N> smoothStep<let N : int>(...) applies smoothStep component-wise to a vector. In Slang, <let N : int> is a generic integer parameter — the function works for float2, float3, float4 without code duplication.
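The component-wise behaviour can be sketched in Python (a stand-in for the generic Slang overload, not a translation of it); the channel values match the demo below:

```python
def smoothstep_scalar(x, minval, maxval):
    t = max(0.0, min(1.0, (x - minval) / (maxval - minval)))
    return t * t * (3.0 - 2.0 * t)

def smoothstep_vec(xs, minval, maxval):
    # component-wise, like the generic vector<float, N> overload
    return [smoothstep_scalar(x, minval, maxval) for x in xs]

rgb = [0.80, 0.30, -0.10]  # per-channel values, as in the demo
print([round(v, 3) for v in smoothstep_vec(rgb, 0.0, 1.0)])
# → [0.896, 0.216, 0.0]
```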

Use cases in Gaussian Splatting include applying a per-channel S-curve to RGB colour parameters and keeping per-splat opacity in [0, 1] while preserving a usable gradient.

Interactive demo: per-channel S-curves, with a dot marking each channel's current position (channels at 0.80, 0.30, -0.10), shown before smoothStep (hard clamped) and after smoothStep.