Is LiDAR a gimmick?

Yes, it is a gimmick, but not completely.

The way it works is that light at a specific wavelength is projected onto the bed/part. The camera, sitting at a known angle, takes a picture and analyzes the pixels to find the sharpest edges of the spot. Since the camera's angle relative to the laser is known, simple trigonometry (laser triangulation) yields the displacement between the light point and the camera sensor. The claimed resolution is based on the pixel pitch of the camera sensor combined with the sharpness of the light on the bed/part, plus a fuzz factor. If they were using more expensive lasers, the wavelength would also be considered as part of the algorithm. As the bed is raised and lowered, or material is added or subtracted, the position of the laser spot on the bed changes relative to the camera sensor. Basically, it's a poor man's interferometer.
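A back-of-the-envelope sketch of that triangulation geometry. All of the numbers here (pixel pitch, focal length, standoff distance, camera angle) are made up for illustration, not taken from any real printer's calibration:

```python
import math

def height_from_pixel_shift(pixel_shift, pixel_pitch_um, focal_length_mm,
                            standoff_mm, camera_angle_deg):
    """Estimate surface height change from how far the laser spot moved
    on the camera sensor.

    Assumed geometry (illustrative only): the laser points straight down
    and the camera views the spot at camera_angle_deg from vertical. A
    height change dz moves the spot sideways by dz * tan(angle) in the
    scene, and the lens maps that onto the sensor with magnification
    focal_length / standoff.
    """
    # lateral shift of the spot on the sensor, in mm
    sensor_shift_mm = pixel_shift * pixel_pitch_um / 1000.0
    # undo the lens magnification to get the shift in the scene
    scene_shift_mm = sensor_shift_mm * standoff_mm / focal_length_mm
    # convert the lateral scene shift back into a height change
    return scene_shift_mm / math.tan(math.radians(camera_angle_deg))

# One whole-pixel shift on a 3 um-pitch sensor, 4 mm lens,
# 70 mm standoff, camera at 45 degrees:
dz = height_from_pixel_shift(1, 3.0, 4.0, 70.0, 45.0)
print(f"{dz * 1000:.1f} microns per pixel")  # ~52.5 um per whole pixel
```

Note what falls out of the arithmetic: a whole pixel corresponds to tens of microns here, so any claim of single-digit-micron resolution depends on sub-pixel edge interpolation of that "sharpest edge," which is exactly where the sharpness of the light and the fuzz factor come in.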

By detecting variations in displacement, it is possible to tell when there is a defect in the layer compared with a reference. A correction algorithm may then be applied to attempt to correct the defect, assuming the defect mode is known.
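The compare-against-reference step can be sketched in a few lines. The tolerance value and the height samples below are invented for the example; a real implementation would work over a 2D height map, not a single scan line:

```python
def find_layer_defects(measured, reference, tolerance_mm=0.05):
    """Flag points where a measured scan line deviates from the reference.

    measured and reference are equal-length lists of height samples in mm;
    tolerance_mm is an illustrative threshold. Returns (index, deviation)
    pairs for every out-of-tolerance sample.
    """
    defects = []
    for i, (m, r) in enumerate(zip(measured, reference)):
        deviation = m - r
        if abs(deviation) > tolerance_mm:
            defects.append((i, deviation))
    return defects

reference = [0.20, 0.20, 0.20, 0.20]          # expected 0.2 mm layer
scan      = [0.20, 0.13, 0.21, 0.28]          # a dip and a blob
print(find_layer_defects(scan, reference))    # flags indices 1 and 3
```

Note that the output only says *where* and *by how much* the layer deviates. Deciding *why* it deviated, and therefore what correction to apply, is the hard part the next paragraph gets at.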

So, the gimmick is in believing that the printer can move 7 microns in any direction reliably, and in being able to identify the correct defect mode so the appropriate correction is applied. As more experienced 3D printing folks know, a layer defect can be caused by several different factors, not only axis displacement.

There is plenty of prior art in this realm. One example is the DRSX860 photomask laser repair system produced by Quantronix and Control Laser Corporation, which uses a much more expensive displacement sensor to detect angstrom-level material thickness on a lithography photomask and correct for deviations using that data.