Why IVD performance studies fail clinical evaluation review
I reviewed a clinical performance evaluation report for an IVD last month. Fifty pages of sensitivity and specificity data. Proper sample size calculations. Statistical analysis tables. The Notified Body rejected it in the first round. The reason? The manufacturer never demonstrated clinical performance.
In This Article
- What the IVDR actually requires for IVD clinical performance
- When analytical data is sufficient
- What clinical performance evaluation requires
- The structure reviewers expect to see
- Why equivalence does not solve clinical performance
- PMCF as part of clinical performance evaluation
- What happens when clinical performance evaluation is incomplete
- Final thought
This is not unusual. The confusion between analytical performance and clinical performance runs deep in the IVD sector. Manufacturers think clinical performance evaluation is about showing the device works accurately in the lab. It is not. Clinical performance is about what happens when the result reaches the physician and changes patient management.
Most performance studies for IVDs are designed to satisfy analytical validation requirements. They show precision, accuracy, linearity, limit of detection. All necessary. All critical for technical documentation. None of it answers the clinical performance question.
The clinical performance question is different: Does the information provided by this device improve patient outcomes or clinical decision-making in the intended use population?
What the IVDR actually requires for IVD clinical performance
IVDR Annex XIII lays out the performance evaluation requirements for IVDs. Part A requires manufacturers to demonstrate conformity through clinical evidence, and it specifies that the performance evaluation must establish the scientific validity, analytical performance, and clinical performance of the device.
Three separate elements. Not interchangeable.
Scientific validity means the association between the measurand and the clinical condition is scientifically established. Analytical performance means the device accurately detects or measures the target. Clinical performance means the device achieves its intended purpose in clinical practice.
Most IVD files I see treat these as a single validation exercise. They are not. Each requires different evidence. Each requires different study design. Each answers a different question.
A common mistake: submitting analytical performance data as proof of clinical performance. Reviewers consistently reject it. Showing your device measures HbA1c accurately does not demonstrate it improves diabetes management decisions.
The disconnect happens because analytical studies are familiar territory. Laboratories know how to run them. Quality teams know how to document them. Clinical performance studies require clinical endpoints, patient populations, and outcome measures. Different expertise. Different methods.
When analytical data is sufficient
There are cases where extensive clinical performance studies are not required. But the conditions are specific and the justification must be explicit.
If the measurand is well-established, the link to clinical decision-making is documented in clinical guidelines, and equivalent devices have existing clinical data, then analytical performance may sufficiently demonstrate clinical performance.
Example: A new glucose meter. Glucose measurement for diabetes management is scientifically validated. Clinical guidelines define how glucose values guide insulin dosing. Equivalent devices have decades of clinical use data. In this case, showing analytical equivalence to a predicate device plus analytical performance studies may be sufficient.
But you must document why clinical performance studies are not needed. That justification is part of the clinical performance evaluation. Simply skipping clinical studies without justification creates a critical gap.
And the justification must be defensible. I have seen manufacturers claim well-established measurand status for novel biomarkers. The Notified Body disagreed. If the clinical decision algorithm is not in published guidelines, if the interpretation of results is still evolving, if the target population has unique characteristics, then analytical data alone will not suffice.
What clinical performance evaluation requires
For most IVDs, especially those introducing new measurands, new clinical applications, or targeting specific patient subgroups, clinical performance evaluation requires outcome data.
This means studies that show what happens when the device is used in clinical practice. Not just whether the device gives accurate results, but whether those results lead to appropriate clinical actions and improved outcomes.
The study design depends on the intended purpose. Diagnostic IVDs need diagnostic accuracy studies with clinical reference standards. Monitoring IVDs need studies showing the results guide treatment adjustments appropriately. Screening IVDs need studies demonstrating the device identifies the target condition in the intended population.
The clinical performance endpoint must align with the intended purpose claim. If the device is intended to guide treatment decisions, the study must show treatment decisions were appropriately guided. If the device is intended to rule out disease, the study must show disease was correctly ruled out in the target population.
Here is where the confusion deepens. Manufacturers ask: Do we need a clinical trial? Do we need prospective studies? Can we use literature data?
The answer is: it depends on what evidence exists and what gaps remain.
If published studies demonstrate clinical performance for the same measurand, same intended use, same target population, and same clinical decision context, then a literature-based evaluation may be sufficient. But the studies must address clinical performance, not just analytical validation.
If such studies do not exist, you need to generate clinical performance data. That may be a prospective study. It may be a retrospective analysis of clinical outcomes. It may be a post-market clinical follow-up study. The key requirement is demonstrating the device performs its intended purpose in clinical use.
The structure reviewers expect to see
When a Notified Body opens your clinical performance evaluation report, they look for a clear logical flow. Not just data presentation. A structured argument.
First, define the clinical performance endpoints based on the intended purpose. What clinical question does the device answer? What decision does it inform? What outcome does it enable?
Second, identify the evidence required to demonstrate those endpoints. What study types would answer the clinical performance question? What data sources are available?
Third, appraise the available evidence. What studies exist? Do they address the clinical performance question? Are they methodologically sound? Do they apply to your intended use population?
Fourth, analyze the evidence for your device specifically. If equivalent devices have clinical performance data, show equivalence. If you conducted performance studies, present clinical outcomes. If gaps exist, acknowledge them and address them through PMCF.
This is the clinical evaluation logic applied to IVD performance. The same structured approach we use for therapeutic devices. Many IVD manufacturers skip this structure and jump straight to data tables.
Reviewers see it immediately. The report has data but no evaluation. It shows test results but does not demonstrate clinical performance.
Why equivalence does not solve clinical performance
I have seen manufacturers attempt to shortcut clinical performance evaluation by claiming equivalence to a predicate device with existing clinical data.
The reasoning goes: Our device is analytically equivalent to Device X. Device X has clinical performance data. Therefore our device has demonstrated clinical performance.
This fails in most cases.
Equivalence for IVDs requires demonstrating comparable analytical performance, same measurand, same specimen type, same measurement principle, and same clinical interpretation. Even with full equivalence, you must show the clinical performance data for the predicate applies to your device and your intended use.
If the predicate device is used in a different clinical setting, or your device targets a different patient population, or your intended use includes additional clinical claims, then the predicate’s clinical data may not cover your clinical performance requirements.
Equivalence allows you to reference existing data. It does not replace clinical performance evaluation. You still must demonstrate how that data supports your specific clinical performance claims.
A common mistake: claiming equivalence without analyzing whether the predicate's clinical data addresses the same clinical performance question. If your device adds a new intended use or targets a different population, analytical equivalence does not transfer clinical performance evidence.
PMCF as part of clinical performance evaluation
For many IVDs, especially those with novel applications or limited prior clinical data, PMCF becomes essential to the clinical performance evaluation.
This is not a backup plan. It is not what you do when initial evidence is weak. It is a planned component of demonstrating clinical performance throughout the device lifecycle.
The PMCF plan must define how you will collect clinical performance data post-market. Not just complaints and adverse events. Actual outcome data showing the device performs its intended clinical purpose.
For a diagnostic IVD, this might mean tracking how often the test result led to correct diagnosis versus missed or delayed diagnosis. For a monitoring IVD, this might mean tracking how test results correlated with treatment adjustments and patient outcomes.
The PMCF design must generate data that answers the clinical performance question. Many PMCF plans I review are generic surveillance activities. They monitor device performance but do not assess clinical outcomes.
That approach does not satisfy clinical performance evaluation requirements. The data collected must be usable to confirm or update your clinical performance claims.
What happens when clinical performance evaluation is incomplete
Notified Bodies are increasingly focused on clinical performance for IVDs. The scrutiny has intensified under the IVDR. The questions are more specific. The deficiencies are documented more carefully.
An incomplete clinical performance evaluation leads to major non-conformities. Not just requests for more data. Rejections that require fundamental redesign of the clinical evaluation strategy.
I have seen manufacturers respond by adding more analytical studies. More precision data. More linearity curves. Reviewers reject the response because it still does not address clinical performance.
The resolution requires going back to the intended purpose and asking: What clinical evidence demonstrates this device achieves that purpose? Then building the evaluation to answer that question.
Sometimes this means conducting new studies. Sometimes it means reanalyzing existing clinical data. Sometimes it means narrowing the intended purpose to match available evidence.
None of these are quick fixes. Clinical performance evaluation for IVDs requires the same rigor and planning as clinical evaluation for therapeutic devices. The measurand is different. The clinical question is different. The evaluation logic is the same.
The quality of your clinical performance evaluation is determined before you write the report. It is determined when you design your performance studies and define what evidence you will generate. If the studies do not address clinical performance, no amount of report writing will fix the gap.
Final thought
The trend in IVD regulation is clear. Clinical performance evaluation is no longer a documentation exercise. It is a demonstration that the information provided by the device serves its intended clinical purpose.
Analytical performance remains essential. It proves the device works as a measurement instrument. But clinical performance proves the device works as a medical tool.
Manufacturers who understand this distinction early build better studies, generate appropriate evidence, and write evaluations that pass review. Those who treat clinical performance as an extended analytical validation continue to face rejections and major gaps.
The question to ask during development is not: Does our device give accurate results? The question is: Does the clinical information from our device improve patient care in the way we claim?
Answer that question with structured evidence and you have a clinical performance evaluation.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/746 on in vitro diagnostic medical devices (IVDR), Annex XIII
– MDCG 2022-2: Guidance on general principles of clinical evidence for in vitro diagnostic medical devices (IVDs)
– MDCG 2020-16: Guidance on classification rules for in vitro diagnostic medical devices under Regulation (EU) 2017/746