Companion diagnostics under IVDR: the evidence burden nobody expects
A pharmaceutical company submits its companion diagnostic for review. The clinical performance study looks solid. The device works. But the Notified Body stops at one question: where is the evidence that the therapeutic decision actually improves patient outcomes? That question changes everything.
Most manufacturers think companion diagnostics are about analytical performance and clinical sensitivity. They are not wrong. But they are incomplete.
Under IVDR Article 2(7), a companion diagnostic is defined as a device essential for the safe and effective use of a corresponding medicinal product. That single word, essential, carries more regulatory weight than most teams realize.
It does not mean the device performs well. It means the therapeutic decision it informs must be validated. And that validation sits at the intersection of two regulatory frameworks: IVDR and the medicinal product regulation. This is where the evidence burden multiplies.
What makes a companion diagnostic different
A companion diagnostic is not just an IVD that happens to relate to a drug. It is a device whose result directly determines whether a patient receives a specific therapy.
MDCG 2020-16 makes this distinction clear. The device must be shown to be necessary for patient selection, dose adjustment, or treatment monitoring in a way that affects clinical outcomes.
This creates a unique evidence requirement. You are not only proving that your device detects a biomarker. You are proving that detecting this biomarker, at this cutoff, using this method, leads to better therapeutic decisions.
And better decisions must translate into measurable clinical benefit.
The clinical performance evaluation of a companion diagnostic must demonstrate not only that the device works, but that the therapeutic strategy guided by the device improves patient outcomes compared to alternative approaches.
The evidence manufacturers typically prepare
Most submissions I review follow a predictable pattern. They include analytical studies showing accuracy, precision, and reproducibility. They include clinical sensitivity and specificity data against a reference method. They may even include concordance studies with existing assays.
All of this is necessary. None of it is sufficient.
What is missing is the link between device output and therapeutic benefit. Manufacturers assume this link is established by the pharmaceutical company. After all, the drug was approved based on clinical trials. The biomarker was part of those trials. Why repeat the work?
Because the device under review was not the device used in those trials.
This is the gap that Notified Bodies focus on. You are not asking for approval of a biomarker concept. You are asking for approval of a specific device that measures that biomarker in a specific way. The question is whether your measurement, with your cutoffs, aligns with the evidence that supported the drug approval.
What reviewers look for
Reviewers want to see bridging evidence. They want to see that your device produces results that are clinically equivalent to the device or method used in the pivotal therapeutic trials.
If the drug trials used a central laboratory assay, and your device is a point-of-care test, you must show that patient classification does not change. If cutoffs differ, you must justify why the therapeutic decision remains valid.
This requires head-to-head comparison studies. Not just analytical concordance. Clinical concordance. Same samples. Same patient population. Same decision points.
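The agreement analysis behind such a head-to-head study can be sketched in a few lines. The example below is illustrative only (the function name, sample data, and use of the trial assay as the comparator are my assumptions, not anything prescribed by IVDR): it computes overall, positive, and negative percent agreement (OPA, PPA, NPA) between the trial assay's calls and the candidate device's calls on the same samples.

```python
# Hypothetical sketch: agreement between a candidate companion diagnostic
# and the trial assay on the same samples. Data are illustrative.

def agreement_metrics(trial, candidate):
    """Compute OPA, PPA, and NPA for paired binary classifications.

    trial, candidate: sequences of booleans (True = biomarker-positive),
    paired by sample. PPA and NPA are computed relative to the trial assay.
    """
    pairs = list(zip(trial, candidate))
    both_pos = sum(1 for t, c in pairs if t and c)
    both_neg = sum(1 for t, c in pairs if not t and not c)
    opa = (both_pos + both_neg) / len(pairs)
    ppa = both_pos / sum(1 for t, _ in pairs if t)
    npa = both_neg / sum(1 for t, _ in pairs if not t)
    return opa, ppa, npa

# Eight paired samples, one discordant call (sample 3).
trial     = [True, True, True, False, False, False, True, False]
candidate = [True, True, False, False, False, False, True, False]
opa, ppa, npa = agreement_metrics(trial, candidate)
print(f"OPA={opa:.2f}  PPA={ppa:.2f}  NPA={npa:.2f}")
```

In a real submission the sample size, acceptance criteria, and confidence intervals would of course be pre-specified in the study protocol; this sketch only shows the shape of the calculation.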
And if your device changes the testing workflow, turnaround time, or sample type, you must address whether these changes affect real-world therapeutic outcomes.
A common mistake: relying on published literature from drug trials without demonstrating that your specific device produces equivalent classification in the same clinical context. Reviewers will ask: how do you know your cutoff is the right one?
The regulatory coordination problem
Companion diagnostics live in two regulatory worlds. The device must meet IVDR requirements. The medicinal product must meet pharmaceutical regulations. But the two approval processes are not synchronized.
In practice, this creates timing and evidence challenges. The drug developer controls the clinical trial data. The device manufacturer controls the technical file. Neither may have full visibility into what the other is submitting.
IVDR Article 56 requires that the clinical evidence for the companion diagnostic be evaluated in relation to the corresponding medicinal product. This means the device manufacturer must obtain access to clinical outcome data from the drug trials.
This is not always straightforward. Drug companies may be reluctant to share proprietary data. Contractual agreements must be in place. Data formats may not align. And the device manufacturer must interpret therapeutic trial results in the context of device performance.
What this means for the clinical performance report
Your clinical performance report cannot be written in isolation. It must reference the therapeutic evidence explicitly. It must show that the patient population in your device studies matches the population in the drug trials. It must demonstrate that your measurement method does not introduce classification errors that would change therapeutic outcomes.
This requires more than a literature review. It requires active collaboration with the pharmaceutical partner. It requires access to patient-level data or at least summary data that allows comparison of decision thresholds.
And it requires a clear statement of the intended use that aligns with the approved drug indication. If the drug is approved for a specific line of therapy in a specific patient subgroup, your device must be validated for that exact use.
The clinical performance report for a companion diagnostic is not a standalone document. It must be read alongside the Summary of Product Characteristics (SmPC) of the corresponding medicinal product and demonstrate alignment in patient population, decision thresholds, and clinical claims.
When existing clinical data is not enough
There are situations where reliance on the drug developer’s data is insufficient. This happens when your device introduces a methodological change that was not present in the original trials.
For example, if the drug trials used a qualitative immunohistochemistry assay and your device uses quantitative PCR, you cannot assume equivalence. The biological target may be the same, but the measurement principle is different. Cutoffs may not translate directly.
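One way to explore whether a cutoff translates across methods is to scan candidate thresholds on the new quantitative scale and measure agreement with the classification produced by the original trial assay. The sketch below is purely illustrative (the function, the grid of candidate cutoffs, and the paired data are my assumptions): a real bridging exercise would need pre-specified statistics and, as the text notes, often clinical outcome data as well.

```python
# Hypothetical sketch: deriving a candidate cutoff for a new quantitative
# method by maximizing agreement with the trial assay's calls on the same
# samples. Values and calls are illustrative.

def best_cutoff(values, trial_positive):
    """Scan each observed value as a candidate cutoff; return the
    (cutoff, agreement fraction) pair with the highest agreement with
    the trial assay's positive/negative classification."""
    best = None
    for cut in sorted(set(values)):
        calls = [v >= cut for v in values]
        agree = sum(c == t for c, t in zip(calls, trial_positive))
        frac = agree / len(values)
        if best is None or frac > best[1]:
            best = (cut, frac)
    return best

values         = [0.2, 0.4, 0.9, 1.1, 1.6, 2.3, 2.8, 3.5]  # new method's scale
trial_positive = [False, False, False, True, True, True, True, True]
cutoff, agreement = best_cutoff(values, trial_positive)
print(f"candidate cutoff={cutoff}, agreement={agreement:.2f}")
```

Note that an optimized cutoff like this is only a starting hypothesis; it still has to be locked down prospectively and tied back to therapeutic outcomes, which is exactly the evidence gap discussed above.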
In these cases, you must generate your own clinical evidence. This often means prospective studies that evaluate therapeutic outcomes based on your device’s classification.
This is expensive. It is time-consuming. And it is often unexpected by manufacturers who assumed the drug approval would carry the burden.
The role of retrospective analysis
One approach is retrospective analysis of archived samples from the drug trials. If samples were banked, they can be tested with your device. This allows direct comparison of patient classification.
But this only works if the samples are available, if they were stored properly, and if the stability of the biomarker is documented. In many cases, archived samples do not exist or cannot be used.
When retrospective data is not available, prospective studies become necessary. This is the evidence burden that nobody expects. And it changes the timeline and cost profile of the submission entirely.
A common mistake: assuming that a new measurement technology can be validated purely through analytical bridging studies. Reviewers increasingly request clinical outcome data when the methodology differs from the trials that established therapeutic benefit.
The post-market reality
Even after approval, the evidence burden does not end. Companion diagnostics require ongoing performance monitoring that tracks not just device failure rates, but clinical decision quality.
This means your PMPF plan must include mechanisms to detect classification errors, changes in biomarker prevalence, and real-world deviations from the intended use.
If the corresponding drug undergoes label changes, your device may require reassessment. If new therapeutic data emerges that changes the decision threshold, your cutoffs may need revision.
This is rarely captured in initial PMPF plans. Most plans focus on device performance metrics. They do not track therapeutic outcomes. And when a Notified Body asks how you will monitor whether patient classification remains clinically valid, the answer is often unclear.
What should be in your PMPF plan
For a companion diagnostic, the PMPF plan must include access to treatment outcome data or at least aggregate data that shows whether patients classified by the device are experiencing expected therapeutic responses.
This may require agreements with the pharmaceutical partner. It may require participation in registries or post-authorization studies for the drug.
Without this, you cannot demonstrate ongoing clinical validity. And without ongoing clinical validity, you cannot maintain conformity with IVDR Article 56.
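One concrete PMPF mechanism is a periodic check that the real-world biomarker-positive rate has not drifted from the prevalence seen in the pivotal trials, since a drift can signal classification problems or use outside the intended population. The sketch below is a minimal illustration under my own assumptions (the function name, the normal-approximation z-test, and the numbers are all invented for the example):

```python
# Hypothetical sketch: a PMPF-style drift check comparing the observed
# biomarker-positive rate against the prevalence seen in the pivotal
# trials. Normal approximation to the binomial; numbers illustrative.
import math

def prevalence_drift(n_tested, n_positive, expected_rate, z_limit=3.0):
    """Return (z, flagged): z-score of the observed positive rate against
    the expected rate, and whether it exceeds the review threshold."""
    observed = n_positive / n_tested
    se = math.sqrt(expected_rate * (1 - expected_rate) / n_tested)
    z = (observed - expected_rate) / se
    return z, abs(z) > z_limit

# Trials saw ~30% biomarker-positive; last quarter: 500 tested, 210 positive.
z, flagged = prevalence_drift(500, 210, 0.30)
print(f"z={z:.2f}, flag for review: {flagged}")
```

A flagged result would not prove a problem; it would trigger the kind of investigation into classification validity that the PMPF plan should describe.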
What to do now
If you are developing a companion diagnostic, start with the regulatory coordination. Establish data-sharing agreements with the pharmaceutical partner early. Understand what data you will have access to and what gaps remain.
Build your clinical performance evaluation around the therapeutic evidence. Show explicitly how your device aligns with the patient population, decision thresholds, and clinical outcomes from the drug trials.
If your device introduces methodological differences, plan for bridging studies or prospective validation. Do not assume analytical concordance will be sufficient.
And structure your PMPF plan to monitor clinical validity, not just device performance. This requires tracking how real-world use aligns with the validated therapeutic strategy.
The evidence burden for companion diagnostics is heavier than most teams anticipate. But it is predictable. And it is manageable if you understand what reviewers are really asking for.
They are not asking whether your device works. They are asking whether the therapeutic decisions it informs are supported by clinical evidence. That distinction defines the entire submission.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report). For companion diagnostics under IVDR, see also IVDR Article 56 and MDCG 2020-16.
✌ Peace, Hatem
Your Clinical Evaluation Partner
– Regulation (EU) 2017/746 (IVDR), Article 56
– MDCG 2020-16: Guidance on Classification Rules for in vitro Diagnostic Medical Devices under Regulation (EU) 2017/746
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under Regulation (EU) 2017/745.