It’s tough to trust measurements from an instrument when you don’t have a clear understanding of how its sensitivity and accuracy are derived, and infrared cameras often fall into this category. Additionally, discussions of infrared camera measurement accuracy typically involve complex terms and jargon that can be confusing and misleading. This can ultimately prompt some researchers to avoid these tools altogether. However, by doing so, they miss out on the potential advantages of thermal measurement for R&D applications. In the following discussion, we strip away the technical terms and explain measurement uncertainty in plain language, providing you with a foundation that will help you understand IR camera calibration and accuracy.
Camera Accuracy Specs and the Uncertainty Equation
You’ll notice that most IR camera data sheets show an accuracy specification such as ±2 °C or ±2% of the reading. This specification is the result of a widely used uncertainty analysis technique called Root-Sum-of-Squares, or RSS. The idea is to calculate the partial error for each variable in the temperature measurement equation, square each error term, add them all together, and take the square root. While this procedure sounds complex, it’s fairly straightforward. Determining the partial errors themselves, on the other hand, can be tricky.
“Partial errors” can result from one of several variables in the typical IR camera temperature measurement equation:
- Reflected ambient temperature
- Atmosphere temperature
- Camera response
- Calibrator (blackbody) temperature
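The RSS combination described above can be sketched in a few lines of code. The partial-error values below are purely illustrative assumptions, not figures from any particular camera’s data sheet:

```python
import math

# Hypothetical partial errors (in °C), one per variable in the
# temperature measurement equation. Illustrative values only.
partial_errors_c = {
    "reflected_ambient_temperature": 1.0,
    "atmosphere_temperature": 0.3,
    "camera_response": 1.5,
    "blackbody_calibrator_temperature": 0.5,
}

def rss_uncertainty(errors):
    """Root-Sum-of-Squares: square each partial error, sum, take the square root."""
    return math.sqrt(sum(e ** 2 for e in errors))

total = rss_uncertainty(partial_errors_c.values())
print(f"Combined RSS uncertainty: ±{total:.2f} °C")  # ±1.89 °C for these values
```

Note how the largest terms dominate after squaring: here the camera response error of 1.5 °C contributes most of the combined ±1.89 °C, which is why reducing the single biggest partial error matters more than trimming the small ones.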