Laboratory Quality Control

Error Rates at the POC

A new study in Clinical Chemistry investigated the error rates for Point-of-Care (POC) devices.

Can you guess what the error rates were?

Here’s an adapted table from the study. We’ve added a column with the Sigma-metric, using the short-term table:

| Test type | # tests | # defects | % defects / total tests | Sigma-metric |
|---|---|---|---|---|
| Blood gas/electrolytes | 22,687 | 118 | 0.52% | 4.1 |
| Blood gas/electrolytes/troponin I | 5,809 | 10 | 0.17% | 4.5 |
| Pregnancy | 8,879 | 14 | 0.158% | 4.5 |
| Glucose | 303,389 | 71 | 0.02% | 5 |
| Drugs of abuse | 247 | 1 | 0.4% | 4.2 |
| HbA1c | 1,236 | 8 | 0.65% | 4 |
| Urinalysis | 64,370 | 2 | 0.0025% | 5.6 |
| Blood ketones | 1,087 | 0 | 0% | >6 |

[The names of the methods are given in the study but omitted here. This isn't about specific manufacturers, but about POC devices as a group.]
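The Sigma-metric column can be reproduced from the defect rates alone, assuming the usual short-term convention: take the normal quantile of the yield (1 minus the defect fraction) and add the 1.5-sigma shift. A minimal sketch (the function name and the explicit 1.5 shift are our illustration, not code from the study):

```python
# Sketch: short-term Sigma-metric from an observed defect rate.
# Assumes the conventional 1.5-sigma long-term shift is added back
# to the normal quantile of the yield.
from statistics import NormalDist

def sigma_metric(defects: int, total: int, shift: float = 1.5) -> float:
    """Short-term Sigma-metric for an observed defect rate."""
    p = defects / total                # fraction defective
    if p == 0:
        return float("inf")           # zero observed defects -> ">6" in the table
    return NormalDist().inv_cdf(1 - p) + shift

# Example: blood gas/electrolytes, 0.52% defects
print(round(sigma_metric(118, 22_687), 1))   # prints 4.1
```

Running this on each row of the table recovers the published Sigma-metrics to one decimal place, e.g. glucose (71 defects in 303,389 tests) comes out at 5.0.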

To some, these metrics may look great. To others, they may look bad. But before we reach any conclusions, we should take into account a note from the authors. As you probably noticed, there is great variation in the error rates across the different POC tests. Is ketone analysis really defect free? Are glucose POC devices really better than HbA1c devices?

“For example, it is noteworthy that only 2 quality errors were logged (a defect rate of 0.0025%) for dipstick urinalysis (the second highest volume POCT test), both of which related to unsatisfactory EQA performance and were logged by the POCT officer rather than the clinical user. We speculate that dipstick urine testing may be regarded as a less critical POCT test by clinical users, since patients will generally proceed to a battery of further blood and quantitative urine tests that may be regarded as more definitive. This might result in a greater degree of tolerance toward errors in dipstick urinalysis compared with other tests such as blood gas analysis, which are likely to have a more immediate impact on patient management decisions.”

In other words, a reporting bias is at work in the error rates of this study. Users may "feel" that more errors are acceptable for methods they deem less critical, so fewer errors get reported. This is sometimes called "what you see is what you look for." Thus, it's hard to know which of these error rates are well grounded in reality, and which might be artificially low (or high). The authors conclude:

“[I]t is likely that the quality error rate measured here for POCT represents an underreporting of the true rate…. [S]ome errors were either not identified or not logged.”

So, if we take these numbers as underestimates, the error rates for POC devices are significant. This is a double blow, since we have by now seen many studies showing that the devices themselves suffer from diminished analytical capability (for all the claims of "lab accurate results," there are plenty of devices on the market with less precise and less accurate performance than core laboratory instruments).

A study like this won’t halt the marketplace stampede to POC. There’s real money to be made in making POC devices, and healthcare clinicians still haven’t woken up to the fact that getting a fast result isn’t helpful if the number is wrong.

Again, it’s worth reading the whole study (subscription required): http://www.clinchem.org

