Merging Dimensions

LiDAR provides sparse but highly accurate 3D points ($x, y, z$); cameras provide dense 2D color information at pixel coordinates ($u, v$). To colorize a point cloud or project a depth map into the image, we must solve for the extrinsic transform between the two sensors.

The Projection Equation

A 3D point in the LiDAR frame, written in homogeneous coordinates as $P_L = [x, y, z, 1]^\top$, projects to pixel coordinates $(u, v)$ via:

$s \, [u, v, 1]^\top = K \, [R \mid t] \, P_L$

Here $K$ is the $3 \times 3$ intrinsic matrix (focal lengths and principal point), $[R \mid t]$ is the $3 \times 4$ extrinsic matrix (rotation and translation from the LiDAR frame to the camera frame), and $s$ is the depth scale that is eliminated by the perspective divide.
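The projection equation can be sketched in a few lines of numpy. The intrinsics and extrinsics below are illustrative placeholders, not values from any real calibration:

```python
import numpy as np

# Hypothetical intrinsics: focal lengths fx, fy and principal point cx, cy.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # rotation, LiDAR frame -> camera frame (assumed aligned here)
t = np.array([0.1, 0.0, 0.0])  # translation in meters (made-up offset)

def project(p_lidar):
    """Project a 3D LiDAR point (x, y, z) to pixel coordinates (u, v)."""
    p_cam = R @ p_lidar + t    # apply the extrinsics: [R | t] P_L
    uvw = K @ p_cam            # apply the intrinsics
    s = uvw[2]                 # depth along the optical axis (the scale s)
    return uvw[:2] / s         # perspective divide -> (u, v)

print(project(np.array([0.0, 0.0, 5.0])))  # a point 5 m in front of the sensor
```

A point on the optical axis lands near the principal point, shifted slightly by the translation; points with $s \le 0$ lie behind the camera and should be culled before the divide.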

Calibration Targets

We typically use calibration boards with high-contrast patterns, such as checkerboards. The LiDAR detects the physical plane of the board, while the camera detects its visual edges and corners. Registering these two detections against each other is the key to sub-millimeter extrinsic precision.
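On the LiDAR side, detecting the board usually starts with a least-squares plane fit to the returns that hit it. A minimal sketch using SVD, with a synthetic point set standing in for real board returns:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane n . x = d to an Nx3 array of points.

    Returns the unit normal n and offset d. The normal is the direction
    of least variance, i.e. the last right-singular vector of the
    centered points.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, n @ centroid

# Synthetic board: points on the plane z = 2 (a real pipeline would first
# segment the board returns out of the full scan).
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(200, 2))
board_points = np.column_stack([xy, np.full(200, 2.0)])

normal, offset = fit_plane(board_points)
```

The recovered plane can then be compared against the checkerboard plane estimated from the camera image; the extrinsics are the transform that brings the two into agreement across several board poses.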

Figure: adjust the extrinsics until the LiDAR rings align with the checkerboard.