Near-patient testing: when the operator becomes your evidence gap

Hatem Rabeh, MD, MSc Ing

A glucose meter gets approved for professional use in hospitals. The manufacturer submits analytical data showing excellent accuracy. Two years later, a PMCF study reveals that nurses using it in the emergency department see a 15% outlier rate. The manufacturer defends the device. The data was correct. But the evidence was incomplete. The operator was never in scope.

This happens more often than it should.

Near-patient testing devices, point-of-care diagnostics, self-tests. These are IVDs that operate outside the controlled environment of a central laboratory. The operator changes. The setting changes. The workflow changes. And when the operator changes, the performance evidence must reflect that change.

Yet many clinical performance evaluation files still treat the operator as neutral. As if training alone will close the gap between a laboratory technician and a home user. As if professional use and lay use are just a matter of instructions.

They are not.

Why the operator is not a variable you can assume away

In a central laboratory, the operator is trained, supervised, and works with standardized protocols. The workflow is consistent. The sample handling is controlled. Errors still happen, but they are traceable and correctable.

In near-patient settings, none of that applies.

A nurse in a busy emergency department may run a D-dimer test between two critical patients. A diabetic patient performs a glucose test at home, possibly with wet hands, possibly in poor lighting, possibly while distracted. A pharmacist conducts a rapid antigen test on someone who self-collected a nasal swab incorrectly.

The device might perform perfectly in the hands of a trained operator. But if the intended user is not a trained operator, that performance data is not representative.
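To make that concrete, here is a minimal sketch of what a stratified look at the opening example might involve: comparing outlier rates between the operator group that generated the design-stage evidence and the operator group that actually uses the device. All counts are hypothetical, and what counts as an "outlier" is whatever acceptance criterion your performance evaluation plan defines (for glucose meters, typically the ISO 15197 accuracy limits).

```python
# Illustrative sketch only: compare outlier rates between operator groups
# using hypothetical counts. "Outlier" means a result outside the acceptance
# zone defined in the performance evaluation plan.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical data: (outliers, total results) per operator group
groups = {
    "lab technicians (analytical study)": (6, 300),
    "ED nurses (PMCF study)": (45, 300),
}

for name, (outliers, n) in groups.items():
    rate = outliers / n
    lo, hi = wilson_ci(outliers, n)
    print(f"{name}: outlier rate {rate:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

If the two intervals do not even overlap, the design-stage data and the real-world data are describing two different devices in practice, which is exactly the gap an assessor will ask you to explain.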

Common Deficiency
Clinical performance files that rely exclusively on analytical validation studies conducted by laboratory professionals, then claim the device is safe and effective for lay users based on a usability study that only assessed interface comprehension, not actual performance outcomes in the hands of intended users.

This is not a theoretical concern. This is what assessors look for. And this is what causes deficiencies during Notified Body reviews.

What IVDR Annex I actually requires

IVDR Annex I, General Safety and Performance Requirements, is explicit. The device must achieve its intended performance in the hands of the intended users, under the conditions of use specified by the manufacturer.

Not just in ideal conditions. Not just with trained operators. With the actual operators, in the actual settings, under the actual constraints they will face.

MDCG 2022-2 reinforces this. Clinical performance must be demonstrated for the target population and the intended use environment. If the device is intended for self-testing, the evidence must include self-testers. If it is intended for point-of-care use by non-laboratory personnel, the evidence must include those personnel.

Manufacturers sometimes try to argue that a usability validation covers this. It does not. Usability validation assesses whether users can operate the device without catastrophic errors. Clinical performance evaluation assesses whether the device, when operated by those users, still delivers the claimed performance.

These are not the same question.
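A rough illustration of the difference, with invented numbers: a usability study would report task completion and use errors, while a clinical performance evaluation reports diagnostic accuracy against the comparator method once the test has actually been run by each operator group. The sketch below stratifies sensitivity and specificity by operator for a hypothetical qualitative test.

```python
# Illustrative sketch only: the same rapid test evaluated against the
# comparator method, stratified by operator group. All counts are hypothetical.

def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Clinical sensitivity and specificity from a 2x2 table vs the comparator."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "n": tp + fp + fn + tn,
    }

strata = {
    "laboratory professionals": diagnostic_performance(tp=92, fp=3, fn=8, tn=197),
    "self-testing lay users":   diagnostic_performance(tp=81, fp=9, fn=19, tn=191),
}

for group, perf in strata.items():
    print(f"{group}: sensitivity {perf['sensitivity']:.1%}, "
          f"specificity {perf['specificity']:.1%} (n={perf['n']})")
```

Both groups may have completed the test without a single use error, and the second row can still fail the performance claim. That is the question usability validation does not answer.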

What this means for your clinical performance file

If your device is intended for near-patient testing, your evidence strategy must address the operator from the beginning.

First, define your intended user clearly. Not vaguely. Not as a generic "healthcare professional" when the real operator is an emergency nurse under time pressure, a pharmacist, or a patient testing at home.
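One illustrative way to keep that definition explicit and traceable is to record it as a structured user profile that every performance study in the plan must map back to. The sketch below is only that, an illustration under that assumption; the field names and example values are hypothetical, not an IVDR-mandated format.

```python
# Illustrative sketch only: intended-user profiles as structured records,
# so the performance evaluation plan can state which profile each study covered.
from dataclasses import dataclass, field

@dataclass
class IntendedUser:
    role: str                  # e.g. "lay self-tester", "ED nurse", "pharmacist"
    training_assumed: str      # e.g. "instructions for use only"
    use_environment: str       # e.g. "home", "emergency department", "pharmacy"
    key_constraints: list[str] = field(default_factory=list)

intended_users = [
    IntendedUser(
        role="lay self-tester",
        training_assumed="instructions for use only",
        use_environment="home",
        key_constraints=["no supervision", "variable lighting", "self-collected sample"],
    ),
    IntendedUser(
        role="ED nurse",
        training_assumed="short in-service training",
        use_environment="emergency department",
        key_constraints=["time pressure", "frequent interruptions"],
    ),
]

for user in intended_users:
    print(f"{user.role} | {user.use_environment} | training: {user.training_assumed}")
```

Each performance study should then declare which of these profiles it actually enrolled, so any remaining evidence gap stays visible instead of being assumed away.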

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report). For IVDs, IVDR Annex I and MDCG 2022-2 are the core references.

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.

Peace, Hatem

Your Clinical Evaluation Partner

Follow me for more insights and practical advice.

Deepen Your Knowledge

Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under MDR 2017/745.