Clinical Performance vs Clinical Evaluation: What Actually Changes
I reviewed an IVDR clinical performance evaluation last month that was essentially a copy-paste of a medical device clinical evaluation report. Same structure. Same data hierarchy. Same clinical benefit arguments. The Notified Body rejected it in the preliminary review. The reason? The manufacturer treated diagnostic accuracy as if it were clinical safety.
In This Article
- The Core Difference: From Benefit-Risk to Performance and Clinical Validity
- What IVDR Article 56 Actually Requires
- How the Evaluation Structure Changes
- The Problem with Equivalence Claims Under IVDR
- Clinical Performance Studies: When They Are Unavoidable
- Post-Market Clinical Follow-Up for IVDs
- What This Means for Your Submission
This happens more often than it should. Manufacturers with years of MDR experience assume IVDR clinical performance evaluation is just the same process with different devices. It is not.
The shift from device clinical evaluation to in vitro diagnostic clinical performance evaluation is not cosmetic. It changes what you evaluate, how you demonstrate performance, and how you connect your device to clinical decision-making.
Understanding this shift is not optional. The Notified Bodies reviewing your submission know the difference. Your assessors know the difference. And if your clinical performance evaluation reads like a device CER, you will hear about it.
The Core Difference: From Benefit-Risk to Performance and Clinical Validity
Under MDR, you evaluate clinical benefit against clinical risk. Your medical device must perform its intended purpose safely and achieve the benefit claimed by the manufacturer. The clinical evaluation report centers on demonstrating that balance.
Under IVDR, the framework changes. You evaluate analytical performance, scientific validity, and clinical performance. These are sequential requirements, not alternatives.
Analytical performance means your IVD produces accurate and reliable results under specified conditions. Scientific validity means the measured parameter is associated with a clinical condition or physiological state. Clinical performance means the IVD achieves its intended purpose in the target population and clinical setting.
This is not the same conversation as benefit-risk. You are not weighing harms against improvements. You are demonstrating that your diagnostic output leads to correct clinical decisions.
Clinical performance evaluation is not about what your device does to the patient. It is about what clinicians do with your device’s output. The IVD does not treat. It informs. Your evaluation must reflect that role.
What IVDR Article 56 Actually Requires
IVDR Article 56 describes the obligation to demonstrate clinical performance through clinical evidence. It does not prescribe a structure, but it signals what the evidence must cover.
You must establish that the IVD achieves its intended purpose. That means showing the device generates results that support clinical decisions in real use. You must provide scientific validity for the parameters measured. And you must demonstrate analytical and clinical performance in the target population.
This is where most manufacturers stumble. They confuse analytical performance studies with clinical performance evidence. They assume sensitivity and specificity data alone satisfy IVDR requirements. They do not.
Analytical performance tells you the device works in the lab. Clinical performance tells you the device works in clinical practice. These are not interchangeable.
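To put numbers on this distinction, here is an illustrative sketch (the figures are hypothetical, not from any submission): a test with identical analytical accuracy delivers very different predictive value depending on the prevalence in the intended-use population, which is exactly why lab metrics alone cannot stand in for clinical performance.

```python
# Illustrative sketch: the same sensitivity/specificity yields very
# different predictive values depending on disease prevalence in the
# intended-use population (straight Bayes' theorem).

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a binary test at a given prevalence."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# A test with 95% sensitivity and 95% specificity:
for prev in (0.30, 0.01):  # e.g. symptomatic cohort vs. population screening
    ppv, npv = predictive_values(0.95, 0.95, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.2%}")
```

At 30% prevalence the positive predictive value is about 89%; at 1% prevalence it collapses to about 16%, even though nothing about the assay changed. That gap is the difference between "works in the lab" and "supports correct clinical decisions".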
The Role of MDCG 2022-2
MDCG 2022-2 clarifies expectations for clinical evidence and performance evaluation under IVDR. It provides guidance on planning, conducting, and documenting clinical performance studies. It also describes how manufacturers should demonstrate scientific validity and clinical performance in their technical documentation.
What stands out in the guidance is the emphasis on clinical decision-making. The Notified Body wants to see how your IVD output affects diagnosis, prognosis, monitoring, or treatment decisions. If your clinical performance evaluation does not connect diagnostic output to clinical action, it is incomplete.
This is not implied. It is explicit in MDCG 2022-2. Your clinical evidence must demonstrate that using the IVD results in correct clinical decisions compared to not using it or using an alternative method.
Manufacturers present receiver operating characteristic curves, analytical sensitivity, and limit of detection data as if these constitute clinical performance evidence. They do not. These are inputs to the evaluation, not the conclusion. Clinical performance requires evidence of real-world use and clinical decision impact.
How the Evaluation Structure Changes
The structure of a medical device clinical evaluation report follows a benefit-risk logic. You identify intended use, define clinical benefit, characterize risks, analyze clinical data, and conclude whether benefits outweigh risks.
The structure of an IVDR clinical performance evaluation follows a different path. You must demonstrate analytical performance first, then scientific validity, then clinical performance. The logic is sequential, not balanced.
Start with analytical performance. Show that your device measures what it claims to measure with acceptable accuracy, precision, and reliability. Reference your analytical performance studies. Document limits of detection, measuring intervals, and interference data.
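As a concrete example of the kind of analytical data this step documents, here is a sketch of the classical parametric limit-of-blank / limit-of-detection calculation in the style of CLSI EP17 (the replicate values are made up for demonstration; your own protocol and acceptance criteria govern the real study):

```python
import statistics

# Illustrative sketch, CLSI EP17-style parametric approach.
# All replicate values below are hypothetical.

def limit_of_blank(blank_replicates):
    """LoB = mean of blanks + 1.645 * SD of blanks (parametric 95th percentile)."""
    return statistics.mean(blank_replicates) + 1.645 * statistics.stdev(blank_replicates)

def limit_of_detection(lob, low_sample_replicates):
    """LoD = LoB + 1.645 * SD of a low-concentration sample."""
    return lob + 1.645 * statistics.stdev(low_sample_replicates)

blanks = [0.1, 0.2, 0.0, 0.15, 0.1, 0.05]   # hypothetical blank measurements
low = [0.9, 1.1, 1.0, 0.8, 1.2, 1.0]        # hypothetical low-analyte sample

lob = limit_of_blank(blanks)
lod = limit_of_detection(lob, low)
print(f"LoB = {lob:.3f}, LoD = {lod:.3f}")
```

Note that nothing in this calculation says anything about patients or clinical decisions. That is the point: it belongs in the analytical performance section, not as a substitute for the two steps that follow.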
Then move to scientific validity. Demonstrate that the biomarker, analyte, or parameter you measure is scientifically linked to the clinical condition. This is where literature review becomes critical. You must show that the association between your measurand and the target condition is established in the scientific community.
Finally, address clinical performance. Show that your IVD, when used in the intended population and setting, produces results that lead to correct clinical decisions. This requires clinical performance studies, real-world data, or equivalence to a device with demonstrated clinical performance.
This is not the same flow as a device CER. Trying to force IVDR evidence into an MDR template creates confusion and increases the risk of rejection.
The Problem with Equivalence Claims Under IVDR
Equivalence is more restrictive under IVDR than under MDR. For medical devices, you can claim equivalence based on technical, biological, and clinical characteristics. The bar is high, but the pathway exists.
For IVDs, equivalence is narrower. You must demonstrate that your device and the comparator device have the same intended purpose, same measurand, same technology, and same clinical performance. Any deviation breaks equivalence.
This matters because many manufacturers attempt to avoid clinical performance studies by claiming equivalence to predicate devices. But the predicate must have established clinical performance through its own studies or literature. If the comparator lacks documented clinical performance evidence, equivalence does not reduce your burden. It transfers it.
I see this regularly in submissions. The manufacturer identifies a comparator device, shows analytical similarity, and assumes clinical performance is covered. The Notified Body asks for the comparator’s clinical performance data. The manufacturer cannot provide it. The equivalence claim collapses.
Equivalence under IVDR is not a shortcut. It is a bridge to existing clinical performance evidence. If that evidence does not exist or is not accessible, you must generate your own clinical performance data.
Clinical Performance Studies: When They Are Unavoidable
Some manufacturers assume clinical performance studies are only required for novel IVDs or high-risk devices. This is incorrect.
You need a clinical performance study when existing data cannot demonstrate that your IVD achieves its intended purpose in the target population. This applies regardless of risk class if the scientific literature does not support your claims or if your device uses a new measurement principle.
MDCG 2022-2 describes acceptable alternatives to clinical performance studies, including literature-based evidence and equivalence. But these alternatives only work when robust data already exists. If you are measuring a novel biomarker, targeting a different population, or claiming superiority over existing methods, you will need a study.
The study design must reflect real-world clinical use. It must include the target population. It must compare your IVD to a reference method or clinical outcome. And it must show that clinicians using your device make correct decisions at an acceptable rate.
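A common way to summarize agreement with a reference method is positive and negative percent agreement with confidence intervals. Here is a minimal sketch with hypothetical counts (the acceptance criteria and statistical plan for a real study come from your performance evaluation plan, not from this snippet):

```python
import math

# Illustrative sketch (hypothetical counts): candidate IVD vs. reference
# method, summarized as PPA/NPA with 95% Wilson score intervals.

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

tp, fn = 92, 8      # reference-positive specimens: candidate pos / neg
tn, fp = 185, 15    # reference-negative specimens: candidate neg / pos

ppa = tp / (tp + fn)
npa = tn / (tn + fp)
print(f"PPA = {ppa:.1%}, 95% CI = {wilson_ci(tp, tp + fn)}")
print(f"NPA = {npa:.1%}, 95% CI = {wilson_ci(tn, tn + fp)}")
```

The interval matters as much as the point estimate: a Notified Body will look at whether the lower confidence bound, not just the observed agreement, clears your predefined acceptance criterion.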
This is more complex than an analytical validation study. It requires clinical sites, ethical approvals, patient recruitment, and outcome tracking. Plan for this early if your clinical performance evaluation cannot rely on existing evidence.
Post-Market Clinical Follow-Up for IVDs
PMCF under MDR has an IVDR counterpart with a different name and a different focus: post-market performance follow-up (PMPF, IVDR Annex XIII Part B). For medical devices, PMCF monitors safety and performance over time. For IVDs, PMPF focuses on confirming that clinical performance remains valid as clinical practice evolves.
This means your PMPF plan must track how clinicians use your device, how diagnostic accuracy holds up in real-world populations, and whether clinical decision-making remains appropriate.
If new clinical guidelines change how your measurand is interpreted, your clinical performance may no longer be valid even if analytical performance remains stable. Your PMPF plan must detect this shift and trigger an evaluation update.
Many manufacturers underestimate this. They design follow-up plans that track complaints and device malfunctions. That is post-market surveillance, not post-market performance follow-up. PMPF for IVDs must include data on diagnostic accuracy, clinical outcomes, and decision impact in actual use.
PMPF plans that only track safety incidents miss the point. The plan must monitor whether the IVD continues to support correct clinical decisions as populations, practices, and guidelines evolve.
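To make the "detect the shift" requirement concrete, here is a deliberately simple sketch (all figures hypothetical, and the acceptance criterion is an assumed value from a performance evaluation plan): recompute diagnostic agreement on a fixed schedule and flag any period that falls below the criterion, which is the trigger for an evaluation update.

```python
# Illustrative sketch (hypothetical data): quarterly re-check of diagnostic
# agreement against a predefined acceptance criterion. A breach triggers a
# review of the performance evaluation.

ACCEPTANCE_PPA = 0.90  # assumed criterion from the performance evaluation plan

# (quarter, reference-positive cases tested, of which the IVD was also positive)
quarterly = [
    ("2024-Q1", 120, 112),
    ("2024-Q2", 110, 101),
    ("2024-Q3", 130, 110),  # e.g. a practice or population shift degrades agreement
]

for quarter, ref_pos, agree in quarterly:
    ppa = agree / ref_pos
    status = "OK" if ppa >= ACCEPTANCE_PPA else "REVIEW: update performance evaluation"
    print(f"{quarter}: PPA {ppa:.1%} -> {status}")
```

A real plan would stratify by site and population and use proper statistical process control, but even this toy loop does something complaint logs cannot: it watches the clinical decision metric itself.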
What This Means for Your Submission
If you are preparing an IVDR clinical performance evaluation, start by separating it from your MDR mindset. Do not reuse the same structure. Do not assume the same data hierarchy.
Map your evidence to analytical performance, scientific validity, and clinical performance separately. Make the connections explicit. Show how analytical data supports your measurement claims, how literature supports your scientific validity, and how clinical data demonstrates real-world decision impact.
If you claim equivalence, document the comparator’s clinical performance evidence in detail. Do not assume the Notified Body will accept a reference without supporting data.
If you rely on literature, ensure it covers your target population and clinical setting. Generic biomarker studies are not enough. You need evidence that links your specific measurand to clinical decisions in your intended use.
And if your device introduces a novel measurement, a new population, or a claim that existing literature does not support, prepare for a clinical performance study. No equivalence claim will substitute for missing clinical evidence.
The shift from device clinical evaluation to IVD clinical performance evaluation is real. The manufacturers who recognize this early avoid the rejection cycle. The ones who treat it as a formality spend months revising submissions that should have been structured correctly from the start.
Next in this series, I will address how to structure analytical performance data so it actually supports your clinical performance claims. Most submissions present this data in isolation. That is a mistake.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report). For IVDs, the key references are IVDR Article 56 and MDCG 2022-2.
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.
– Regulation (EU) 2017/746 (IVDR) Article 56
– MDCG 2022-2: Guidance on general principles of clinical evidence for In Vitro Diagnostic medical devices (IVDs)
Deepen Your Knowledge
Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.