Analytical vs Clinical: The Line That Breaks Your IVD File
I once reviewed an IVD submission where the entire clinical evidence section was built on analytical validation studies. The manufacturer believed they had proven clinical performance. The Notified Body stopped the review within days. The file confused two fundamentally different things: what the device measures and what that measurement means for patient care.
Under IVDR, this distinction is not semantic. It is structural. Analytical performance and clinical performance serve different purposes, require different evidence, and answer different questions. Yet the confusion between them remains one of the most common reasons IVD files stall during assessment.
This is the fifth part of the IVD Clinical Performance series. Here, we examine why the distinction matters, how it shapes your evidence strategy, and what happens when manufacturers blur the line.
What Analytical Performance Actually Proves
Analytical performance is about the device itself. It answers technical questions: Does the device detect what it claims to detect? How accurately? How precisely? Under what conditions?
You demonstrate this through validation studies. Limit of detection. Limit of quantification. Accuracy. Precision. Analytical specificity and sensitivity. Linearity. Interference. Stability.
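For the detection limits in particular, the arithmetic is worth seeing once. Below is a minimal sketch of the classical parametric estimate of limit of blank and limit of detection, in the spirit of CLSI EP17. The replicate values are invented, and a real study uses many more replicates, reagent lots, and days; this is illustration, not a validation protocol.

```python
import statistics

# Hypothetical replicate measurements — illustrative only, not real device data.
blank_replicates = [0.02, 0.05, 0.03, 0.04, 0.01, 0.03, 0.06, 0.02]      # blank samples
low_conc_replicates = [0.11, 0.14, 0.09, 0.12, 0.13, 0.10, 0.15, 0.12]   # low-concentration samples

# Classical parametric estimates (95th percentile via the 1.645 z-factor):
# LoB = mean of blanks + 1.645 * SD of blanks
# LoD = LoB + 1.645 * SD of low-concentration samples
lob = statistics.mean(blank_replicates) + 1.645 * statistics.stdev(blank_replicates)
lod = lob + 1.645 * statistics.stdev(low_conc_replicates)

print(f"LoB ≈ {lob:.3f}, LoD ≈ {lod:.3f}")
```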
These studies are essential. They establish that the device functions as intended. But they do not establish clinical value.
A device can have excellent analytical performance and zero clinical utility. It can measure a biomarker with high precision, but if that biomarker has no established link to diagnosis or patient management, the measurement is clinically meaningless.
Analytical validation tells you the device works. Clinical performance tells you the device matters. IVDR Annex I requires both, but only clinical performance demonstrates the clinical benefit that regulators ultimately evaluate.
What Clinical Performance Actually Requires
Clinical performance is about patients. It answers a different set of questions: Does the test result influence clinical decisions? Does it improve diagnostic accuracy? Does it affect patient outcomes?
This is where IVDR Annex I, Chapter II, Section 9.1 applies. Clinical performance must be demonstrated through clinical evidence that shows the device achieves its intended clinical benefit.
MDCG 2022-2 clarifies this. Clinical evidence for IVDs must establish that the device, when used as intended, provides clinically relevant information. That means showing the connection between the analytical output and clinical decision-making or patient management.
For example, a glucose meter’s analytical performance is its ability to measure glucose concentration accurately. Its clinical performance is whether it enables patients to manage their diabetes effectively and avoid adverse events.
One is a laboratory metric. The other is a health outcome.
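To make the analytical half of that example concrete, here is a minimal sketch of the kind of acceptance check ISO 15197:2013 applies to blood glucose meters. The paired values and the helper function are hypothetical; a real system-accuracy study involves far more samples, lots, and conditions than this illustrates.

```python
def within_iso15197_accuracy(meter: float, reference: float) -> bool:
    """Illustrative check of the ISO 15197:2013 system-accuracy criterion
    for one paired measurement, in mg/dL. Hypothetical sketch, not a
    substitute for the full standard."""
    if reference < 100:
        return abs(meter - reference) <= 15            # ±15 mg/dL below 100 mg/dL
    return abs(meter - reference) <= 0.15 * reference  # ±15 % at or above 100 mg/dL

# Made-up (meter, reference) pairs — illustration only.
pairs = [(92, 88), (110, 118), (250, 230)]
share_ok = sum(within_iso15197_accuracy(m, r) for m, r in pairs) / len(pairs)
print(f"{share_ok:.0%} of results meet the criterion (the standard requires ≥ 95 %)")
```

Note what this checks and what it does not: it says nothing about whether patients dosing insulin on these readings avoid hypoglycemia. That is the clinical performance question.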
The Evidence Gap Manufacturers Miss
Most manufacturers handle analytical validation well. It is familiar ground. The methods are standardized. The endpoints are clear.
Clinical performance is less comfortable. It requires clinical studies. Real-world data. Literature that links test results to patient outcomes. Many manufacturers assume that analytical validation is enough, especially for devices that detect well-established biomarkers.
It is not.
Even for routine biomarkers, you must show that your device’s output supports clinical decisions. This often requires bridging data: studies that connect your analytical results to clinical interpretation and use.
Manufacturers submit analytical validation reports as proof of clinical performance, and reviewers reject the file because the clinical link is missing. The analytical data is accepted but insufficient on its own. The file cannot proceed without clinical evidence.
How the Confusion Happens
The confusion between analytical and clinical performance often starts with internal misalignment. The R&D team focuses on technical validation. The regulatory team assumes clinical performance is implied. The clinical affairs team, if consulted late, scrambles to find post-market data.
This is especially common in three situations:
1. The biomarker is well-established.
Manufacturers assume that because the clinical utility of a biomarker (like HbA1c or troponin) is well-documented, their device automatically inherits that utility. Not true. You must still show that your device’s specific measurement contributes to clinical decisions.
2. The device is a minor design change.
A new version of an existing device may have similar analytical performance. But clinical performance data often cannot be directly transferred. You need evidence that the new version performs clinically as intended, especially if the intended use has changed or the user population differs.
3. Equivalence is claimed.
Claiming equivalence to another device already on the market does not exempt you from demonstrating clinical performance. Even if analytical performance is equivalent, you must show that your device achieves the same clinical outcomes in its intended context.
Where Reviewers Look
When a Notified Body or Competent Authority reviews your file, they do not stop at analytical validation. They look for the clinical bridge.
They ask: Where is the evidence that this measurement matters? Where are the clinical studies? What literature supports the clinical interpretation? How does this device fit into the clinical pathway?
If your file answers these questions only with analytical data, the review stalls. You get a deficiency. The clock stops. You scramble to gather clinical evidence that should have been planned from the start.
Analytical performance is a prerequisite. Clinical performance is the regulatory endpoint. Your evidence strategy must address both, in sequence, with clear connections between them.
Structuring Your Evidence Strategy
The distinction between analytical and clinical performance should shape your evidence plan from the beginning. Here is how to structure it:
Step 1: Define the clinical claim clearly.
What clinical decision does your device support? What patient population? What care setting? This defines what clinical performance you must demonstrate.
Step 2: Establish analytical performance first.
Conduct validation studies that prove your device measures accurately and reliably. Document these in your technical file. But label them correctly: analytical validation, not clinical evidence.
Step 3: Build the clinical bridge.
Identify the evidence that connects your analytical output to clinical use. This may include:
– Clinical studies that use your device in real settings
– Literature that links the biomarker to clinical outcomes
– Post-market data showing how clinicians interpret and act on your results
– Expert opinion or guidelines that support the clinical utility of the measurement
Step 4: Document the connection explicitly.
In your clinical evaluation report, make the distinction clear. Summarize analytical performance. Then present the clinical evidence separately. Show how one supports the other, but do not conflate them.
This structure aligns with MDCG 2022-2 and makes your file easier to review. It also reduces the risk of deficiencies related to insufficient clinical evidence.
What This Looks Like in Practice
Consider a point-of-care troponin assay. The manufacturer conducts analytical validation: limit of detection, precision, linearity, interference studies. All pass. The device measures troponin accurately.
But clinical performance requires more. The manufacturer must show:
– That the troponin levels detected by the device correlate with myocardial infarction diagnosis
– That clinicians can interpret the results in emergency settings
– That the device contributes to timely clinical decisions
– That patient outcomes are not compromised by using this device instead of a lab-based assay
This requires clinical studies, comparative data, and literature review. The analytical validation alone does not answer these questions.
Suppose the file includes only the analytical validation and a brief literature review on troponin’s role in MI diagnosis. There is no data showing that this specific device supports clinical decisions in the intended setting. The gap between analytical measurement and clinical use is not bridged, and the file is incomplete.
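For a sense of what the missing numbers would even look like: IVDR Annex I, Section 9.1(b) expresses clinical performance in terms such as diagnostic sensitivity, diagnostic specificity, and predictive values. A toy calculation against an adjudicated MI diagnosis, with entirely invented figures, might look like this:

```python
# Hypothetical 2x2 results: assay calls versus adjudicated MI diagnosis.
# All counts are invented, purely to show the metrics involved.
tp, fp, fn, tn = 88, 12, 7, 193

sensitivity = tp / (tp + fn)   # diagnostic (clinical) sensitivity
specificity = tn / (tn + fp)   # diagnostic (clinical) specificity
ppv = tp / (tp + fp)           # positive predictive value in this cohort
npv = tn / (tn + fn)           # negative predictive value in this cohort

print(f"Sens {sensitivity:.1%}, Spec {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
```

None of these values can be derived from precision or linearity studies. They require patients, a clinical reference standard, and the intended-use setting.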
Implications for PMCF
The distinction also matters for post-market clinical follow-up. PMCF for IVDs must monitor clinical performance, not just analytical performance.
Quality control data and proficiency testing results are useful. They confirm analytical performance over time. But they do not replace clinical PMCF.
Your PMCF plan should include mechanisms to gather evidence on how the device performs in real clinical use. Are results being interpreted correctly? Are clinical decisions appropriate? Are there unexpected patterns in false positives or negatives?
This requires active surveillance, not just passive data collection. It often involves user surveys, case reviews, or registry participation. The goal is to confirm that the clinical benefit identified during pre-market evaluation persists in routine practice.
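As a sketch of what “active” can mean in practice, the snippet below flags a drifting false-positive rate in confirmed-outcome feedback. The window size, threshold, and feedback mechanism are all hypothetical design choices for illustration, not a prescribed PMCF method.

```python
from collections import deque

# Hypothetical PMCF signal check: rolling false-positive rate among
# results with a confirmed clinical outcome fed back from users.
WINDOW, THRESHOLD = 200, 0.08   # illustrative values, not regulatory limits
recent = deque(maxlen=WINDOW)   # keeps only the most recent WINDOW outcomes

def record_confirmed_result(device_positive: bool, condition_present: bool) -> None:
    """Record one confirmed case and flag if the false-positive rate drifts."""
    recent.append(device_positive and not condition_present)  # was it a false positive?
    fp_rate = sum(recent) / len(recent)
    if len(recent) == WINDOW and fp_rate > THRESHOLD:
        print(f"PMCF signal: false-positive rate {fp_rate:.1%} exceeds {THRESHOLD:.0%}")
```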
When Analytical Changes Trigger Clinical Reassessment
If you modify your device in ways that affect analytical performance—new reagents, different calibration, updated algorithms—you must reassess clinical performance as well.
Even if the analytical performance improves, the clinical impact may change. A more sensitive assay might detect borderline cases that were previously negative. This could alter clinical interpretation and patient management.
The change requires updated clinical evidence. You cannot assume that better analytical performance automatically translates to better clinical performance. The connection must be demonstrated, not inferred.
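A toy illustration of the reclassification effect: the same set of measured values, classified under an old and a new decision threshold. All numbers are invented; the point is that an analytical improvement can silently move patients across the clinical decision line.

```python
# Hypothetical patient values (ng/mL) classified under two cutoffs.
values = [0.010, 0.014, 0.019, 0.026, 0.040]   # made-up results
OLD_CUTOFF, NEW_CUTOFF = 0.020, 0.012          # illustrative thresholds only

for v in values:
    old = "positive" if v >= OLD_CUTOFF else "negative"
    new = "positive" if v >= NEW_CUTOFF else "negative"
    flag = "  <- reclassified" if old != new else ""
    print(f"{v:.3f} ng/mL: old={old}, new={new}{flag}")
```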
Why This Distinction Protects Your Timeline
Understanding the difference between analytical and clinical performance does more than satisfy regulatory requirements. It protects your project timeline.
When manufacturers confuse the two, they discover the gap late, often during Notified Body review. At that point, generating clinical evidence takes months. The submission stalls, market launch is delayed, and the business case erodes.
When you plan for both from the start, you control the timeline. You know what evidence you need and when. You allocate resources appropriately. You avoid last-minute scrambles for clinical studies or literature that should have been gathered earlier.
The distinction also clarifies roles. R&D handles analytical validation. Clinical affairs handles clinical evidence. Regulatory coordinates both. Each team knows what it is responsible for. The handoffs are clear.
The line between analytical and clinical performance is not bureaucratic. It is logical. It reflects the difference between a device that works and a device that matters. Your evidence strategy must respect that difference.
Final Observation
Analytical performance tells you the device is reliable. Clinical performance tells you the device is relevant. Both are required under IVDR, but they are not interchangeable.
The manufacturers who succeed under IVDR are the ones who see this distinction early, plan for both types of evidence, and present them clearly in their files. The ones who struggle are the ones who assume analytical validation is enough.
It never is.
In the next part of this series, we will examine how to structure clinical evidence when your biomarker is novel or lacks established clinical utility. That is where the regulatory complexity deepens and the evidence strategy must be even more deliberate.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report). For IVDs specifically, IVDR Annex I and MDCG 2022-2 are the primary references.
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.
References
– Regulation (EU) 2017/746 (IVDR), Annex I, Chapter II, Section 9.1
– MDCG 2022-2: Guidance on clinical evidence needed for IVDs under IVDR
Deepen Your Knowledge
Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.