Bullseye! The Key to Optimum Performance
What is the difference between accuracy, repeatability, and resolution and why does it matter?
- Accuracy:
  - How close a result is to the actual true measurement.
  - Often expressed as a margin of error above or below the point being aimed at.
- Repeatability:
  - Delivers identical results in conditions that are unchanged.
  - In a sense, it is the tightness of the “grouping” from multiple attempts.
- What you should aim for in a product:
  - Gets closest to the mark aimed at.
  - Does so reliably and repetitively.
- Resolution:
  - The smallest increment that can be measured.
  - The smallest movement that can be detected.
  - The minimum registerable difference in a scale of measurement.
In an ideal world, the reading your process instrumentation gives you would be perfectly correct, with no deviation of any kind. Unfortunately, this is not the case, and the margin of error inherent in measurement must always be identified, accounted for, and minimized where possible.
Many terms are thrown around interchangeably, and often incorrectly, when it comes to talking about how accurate your instrument’s results really are. Three of the most common are accuracy, repeatability, and resolution. Let’s visit each in detail and clarify its unique meaning.
Accuracy is the most commonly used of the three terms, and also the one most often misused. Accuracy is how close your instrument comes to giving you the exact value that exists in the process at that moment. It is commonly expressed as a margin of error above or below the reading the instrument is showing. For example, say your magmeter is showing a flow of 1 GPM with an accuracy of ±10%. Because of the inherent deviation, the true flow is most likely not exactly 1 GPM; rather, it falls somewhere between 0.9 GPM and 1.1 GPM. This is accuracy: applied to the value displayed by the meter, it gives you the range the actual value falls within.
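The arithmetic above can be sketched in a few lines of Python. The `accuracy_range` helper and its percent-of-reading convention are assumptions made for this illustration, not a standard API:

```python
def accuracy_range(reading, accuracy_pct):
    """Given a displayed reading and an accuracy spec of +/- accuracy_pct
    percent of reading, return the (low, high) interval the true value
    most likely falls within."""
    margin = reading * accuracy_pct / 100.0
    return reading - margin, reading + margin

# Magmeter showing 1 GPM with +/-10% accuracy:
low, high = accuracy_range(1.0, 10)
print(low, high)  # 0.9 1.1
```

Note that some instruments specify accuracy as a percentage of full scale rather than of reading, which changes the computed margin; check the spec sheet for which convention applies.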
Repeatability is when close-to-identical results are produced over multiple measurements taken under unchanged conditions. In essence, it is the ability of the instrument to “group” its results. A highly repeatable instrument is not necessarily an accurate one. For example, a pressure gauge could consistently read 5 PSI high every single time a measurement is taken. It is wrong, but it is wrong by the same amount every time. This is where calibration can come into play: once that consistent offset from the actual pressure is identified and accounted for, a highly repeatable instrument can be turned into a highly accurate one.
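A minimal sketch of that idea, using hypothetical gauge readings against a known true pressure of 100 PSI (the numbers are invented for illustration): the spread of the readings measures repeatability, while the consistent offset is the bias that calibration removes.

```python
from statistics import mean, pstdev

TRUE_PRESSURE = 100.0  # PSI, known reference value (assumed for this example)

# Hypothetical readings from a gauge that reads consistently ~5 PSI high:
readings = [105.1, 104.9, 105.0, 105.2, 104.8]

spread = pstdev(readings)             # tightness of the grouping (repeatability)
bias = mean(readings) - TRUE_PRESSURE # consistent offset from the true value

# Calibration: subtract the known bias to recover accurate readings.
calibrated = [r - bias for r in readings]
```

Here the spread is small (the gauge is highly repeatable) and the bias is about 5 PSI; after subtracting it, the calibrated readings cluster around the true 100 PSI.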
Resolution is the smallest increment that can be measured by an instrument; in a sense, it is the smallest division of whatever scale is being used. For example, the resolution of a pressure transmitter could be 0.1 PSI or 1.0 PSI. How does this play into accuracy? While the importance of resolution may not seem as obvious as that of accuracy and repeatability, it does come into play. Imagine that you have a process that demands readings down to the tenth of a PSI to operate correctly. If you install an instrument that can only give you a reading to the nearest 1 PSI, that instrument will not deliver enough resolution for you to know the true value, as it is essentially rounding up or down. So, in a sense, an instrument with a resolution of 1 PSI will not be fine enough for your process, even though its readings may be accurate within that resolution.
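The rounding effect can be sketched as follows; the `quantize` helper is an assumption for this example, modeling an instrument that snaps the true process value to the nearest displayable increment:

```python
def quantize(true_value, resolution):
    """Model an instrument display: snap the true process value to the
    nearest increment the instrument can show (its resolution)."""
    return round(true_value / resolution) * resolution

# With 1 PSI resolution, tenths of a PSI are lost to rounding:
print(quantize(10.4, 1.0))   # 10.0
print(quantize(10.6, 1.0))   # 11.0

# With 0.1 PSI resolution, the tenths the process requires are preserved:
print(quantize(10.44, 0.1))  # ~10.4
```

Two true values that differ by several tenths of a PSI produce the same 1 PSI reading, which is exactly why a coarse-resolution instrument cannot serve a process that needs tenth-of-a-PSI control.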
So, in choosing an instrument for your application, it is important to consider accuracy, repeatability, and resolution.