When you’re evaluating a measurement device for an application, the first rule of thumb is that the device’s accuracy should be 10x better than your tolerance (tolerance being the total allowable error). So if your tolerance is ±50µm (100µm total), your measurement device should have an accuracy of 10µm or better.
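As a quick sanity check, here’s a minimal Python sketch of that rule of thumb (the function name and the ±50µm example are just for illustration):

```python
def required_device_accuracy(total_tolerance_um: float, ratio: float = 10.0) -> float:
    """Worst acceptable device accuracy for a given total allowable error (rule of thumb: 10x better)."""
    return total_tolerance_um / ratio

# +-50 um tolerance band = 100 um total allowable error
print(required_device_accuracy(100.0))  # 10.0 -> look for a device with 10 um accuracy or better
```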
Two terms that are commonly used interchangeably are accuracy and precision, but they are not the same thing:

Accuracy is the ability to measure true values. It is a function of a device’s resolution, linearity error, and temperature error.
Precision (also used interchangeably with repeatability) is the ability to perform consistent measurements. You can use offsets to achieve accuracy if the device is highly repeatable.
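To show what I mean by using an offset, here’s a minimal sketch of a simple offset calibration, assuming a repeatable device that reads with a constant bias (the reference value and readings are made up for illustration):

```python
# Offset calibration: average several readings of a known reference,
# then subtract that constant bias from future measurements.
reference_mm = 5.000                         # known artifact size (illustrative)
readings_mm = [5.012, 5.011, 5.013, 5.012]   # repeatable, but biased high

offset_mm = reference_mm - sum(readings_mm) / len(readings_mm)

def corrected(measurement_mm: float) -> float:
    return measurement_mm + offset_mm

print(round(corrected(5.012), 3))  # ~5.0 once the constant bias is removed
```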
To calculate a device’s accuracy, start with three specifications:
- Resolution: the smallest measurement change the device can detect. Typically this is a function of the CMOS sensor’s pixel density and the optics. If you think of the CMOS like a ladder, the spacing between each rung is its resolution: light falls on one rung or the other, and you can’t resolve anything finer than that.
- Linearity (% F.S.): the consistency of measurements over the entire measurement range. Linearity is a function of the device’s signal-processing electronics. It’s typically specified as a % error of the full scale of the measurement range. You can minimize this error by decreasing the measurement range, or use a calibration function to correct for it.
- Temperature characteristics (% F.S./°C): the maximum measurement error that occurs when the temperature of the sensor head changes by one degree. This is a function of the expansion and contraction of components within the device (optics, CMOS, etc.).
% of F.S. (full scale) refers to the entire measurement range of the device. If a device has a ±5mm measurement range, its full scale is 10mm. The linearity and temperature errors are a function of this range.
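To make the % F.S. idea concrete, here’s a small sketch converting a spec given in % of full scale into an absolute error for a ±5mm device (the function name is mine; the numbers match the worked example further down):

```python
def fs_error_um(percent_fs: float, full_scale_mm: float) -> float:
    """Convert a spec given in % of full scale to an absolute error in microns."""
    return (percent_fs / 100.0) * full_scale_mm * 1000.0  # mm -> um

full_scale_mm = 10.0  # +-5 mm range -> 10 mm full scale
print(fs_error_um(0.1, full_scale_mm))   # 0.1 % F.S.  -> 10.0 um of linearity error
print(fs_error_um(0.01, full_scale_mm))  # 0.01 % F.S. -> 1.0 um of drift per deg C
```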


This is an example of the linearity effect demonstrated on a micrometer:

To calculate accuracy, you will add the following:
Resolution + Linearity error + Temperature error = Accuracy
If a measurement device application has the following specifications:
Measurement range: ±5mm (10mm F.S.)
Resolution: 10µm
Linearity: 0.1 % F.S.
Temperature characteristics: 0.01 % F.S./°C
Ambient operating temperature range: 20°C to 30°C (10°C variation)
You can calculate accuracy:
(10µm resolution) + (0.001 linearity error × 10mm measurement range) + (0.0001 temperature characteristics × 10mm measurement range × 10°C variation) =
10µm + 10µm + 10µm = 30µm
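Putting the same numbers into a short Python sketch (the function and variable names are just mine, to show the arithmetic):

```python
def estimated_accuracy_um(resolution_um: float,
                          linearity_pct_fs: float,
                          temp_pct_fs_per_c: float,
                          full_scale_mm: float,
                          temp_variation_c: float) -> float:
    """Worst-case accuracy: resolution + linearity error + temperature error."""
    fs_um = full_scale_mm * 1000.0
    linearity_um = (linearity_pct_fs / 100.0) * fs_um
    temperature_um = (temp_pct_fs_per_c / 100.0) * fs_um * temp_variation_c
    return resolution_um + linearity_um + temperature_um

# +-5 mm range (10 mm F.S.), 10 um resolution, 0.1 % F.S. linearity,
# 0.01 % F.S./degC over a 10 degC ambient swing
print(estimated_accuracy_um(10.0, 0.1, 0.01, 10.0, 10.0))  # 30.0 um
```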
The final common consideration when calculating the control system’s accuracy is analog signal noise. If you’re sending measurement data digitally via serial or TCP, there won’t be any analog signal noise, but your controls response time will be slower since the signal is converted to a digital value before it’s sent out.
The fastest way to capture measurement data is with analog signals: voltage (-10 to 10V, 0 to 5V, 1 to 5V) or current (4 to 20mA) outputs. The longer the cable from the device to your controller, the more electrical noise you introduce; the shorter the cable, the less noise.
Calculating this noise error ratio is beyond my expertise, but I do know that current output is the preferred analog signal for minimizing noise error. However, I believe current also has a slower response time, since the sensor’s voltage signal is converted to current in the device, sent to the controller, and converted back into voltage to be processed.
Most of these response time delays will be minimal and, for most applications, not worth worrying about.
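If you do go the analog route, converting the raw signal back to engineering units is just a linear scaling. A minimal sketch, assuming the 4 to 20mA output is mapped linearly across the ±5mm range (the endpoint mapping is an assumption, so check your device’s manual):

```python
def current_to_position_mm(current_ma: float,
                           range_min_mm: float = -5.0,
                           range_max_mm: float = 5.0,
                           i_min_ma: float = 4.0,
                           i_max_ma: float = 20.0) -> float:
    """Linearly scale a 4-20 mA signal to a position (assumes 4 mA = -5 mm, 20 mA = +5 mm)."""
    fraction = (current_ma - i_min_ma) / (i_max_ma - i_min_ma)
    return range_min_mm + fraction * (range_max_mm - range_min_mm)

print(current_to_position_mm(12.0))  # 12 mA -> 0.0 mm (mid-range)
```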
I hope this was helpful!