Clinical Data Definition MDR: What Counts as Clinical Evidence
I reviewed a clinical evaluation report last month where the manufacturer listed twenty publications as clinical data. Only three were actually usable under MDR. The rest were preclinical studies, in vitro tests, and opinion pieces. The Notified Body rejected the entire equivalence claim. This happens more often than you think.
In This Article
- What MDR Article 2(48) Actually Says
- The Three Recognized Sources of Clinical Data
- What Reviewers Actually Look For
- The Gray Zone: Post-Market Surveillance and Registries
- Why This Matters for Your Clinical Evaluation
- How to Build a Defensible Clinical Dataset
- What Happens When You Get It Wrong
- Final Thoughts
The MDR shifted the definition of clinical data in ways that many manufacturers still misunderstand. What counted as supporting evidence under the MDD often fails under MDR scrutiny. The difference is not subtle. It affects your equivalence claims, your state-of-the-art (SOTA) analysis, and your entire clinical evaluation structure.
Understanding what qualifies as clinical data under MDR is not an academic exercise. It determines whether your technical file survives review or gets suspended mid-certification.
What MDR Article 2(48) Actually Says
MDR Article 2(48) defines clinical data as information concerning safety or performance obtained from the use of a device. This sounds broad. It is not.
The regulation specifies the sources: clinical investigations of the device itself, clinical investigations or other studies of a demonstrably equivalent device reported in scientific literature, peer-reviewed reports of other clinical experience with the device or an equivalent device, and clinically relevant information from post-market surveillance, in particular post-market clinical follow-up. Each of these sources has qualification criteria that most submissions overlook.
Clinical data must relate to the clinical use of the device. Not bench testing. Not simulations. Not biocompatibility panels run in isolation. The data must demonstrate how the device performs or behaves in contact with human tissue or in clinical application.
Clinical data under MDR is limited to evidence generated from human use. Preclinical and in vitro studies support the clinical evaluation but do not count as clinical data themselves.
This distinction matters because many manufacturers still pad their clinical evaluation reports with benchtop studies and animal trials, then wonder why reviewers reject the dataset as insufficient.
The Three Recognized Sources of Clinical Data
MDR Article 2(48), read with MDCG 2020-6, identifies three acceptable sources of clinical data: clinical investigations conducted on the device, relevant scientific literature, and data from equivalent devices when equivalence is properly demonstrated in line with MDCG 2020-5.
Each source has its own qualification process. You cannot substitute one for another without meeting the specific criteria.
Clinical Investigations
This is data you generate yourself. Prospective studies. Clinical trials. Observational studies conducted under defined protocols with ethical approval and proper consent.
Clinical investigation data is the strongest form of clinical evidence under MDR. It directly addresses your device, your intended use, and your target population. No inference. No extrapolation. No equivalence assumptions.
The problem is cost and timeline. Running a clinical trial for a Class IIa device is often impractical. That is why manufacturers rely on literature and equivalence. But that reliance must be justified.
Scientific Literature
Published studies count as clinical data if they meet specific quality and relevance criteria. The literature must describe devices similar to yours used in comparable clinical settings. The study design must be robust. The endpoints must align with your intended use.
MEDDEV 2.7/1 Rev 4 remains the accepted methodology for appraising scientific literature. It requires systematic search strategies, critical analysis of methodology, and transparent documentation of inclusion and exclusion decisions.
What does not count: in vitro studies published in peer-reviewed journals. Animal studies. Bench testing. Computational models. These may support your technical rationale, but they are not clinical data.
Manufacturers frequently cite laboratory studies as clinical evidence because they appear in medical journals. Reviewers reject these citations immediately. Publication does not convert preclinical work into clinical data.
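The screening logic described above can be sketched as a simple appraisal record. This is an illustrative model only: the field names, criteria, and citations are hypothetical, not terms mandated by MDR or any MDCG guidance.

```python
from dataclasses import dataclass

# Illustrative screening record for a literature appraisal file.
# Fields and criteria are hypothetical examples, not regulatory terms.
@dataclass
class LiteratureRecord:
    citation: str
    study_type: str           # e.g. "cohort", "RCT", "in vitro", "animal"
    human_use: bool           # data generated from clinical use in humans
    comparable_setting: bool  # clinical setting comparable to intended use
    endpoints_relevant: bool  # endpoints align with claimed performance

    def qualifies_as_clinical_data(self) -> tuple[bool, str]:
        """Return (decision, documented reason) for the appraisal file."""
        if not self.human_use:
            return False, "Preclinical (bench/in vitro/animal): supportive only"
        if not self.comparable_setting:
            return False, "Clinical setting not comparable to intended use"
        if not self.endpoints_relevant:
            return False, "Endpoints do not address claimed performance"
        return True, "Include: human clinical data, relevant setting and endpoints"

# Hypothetical citations: one qualifying cohort study, one in vitro study
papers = [
    LiteratureRecord("Smith 2021", "cohort", True, True, True),
    LiteratureRecord("Lee 2019", "in vitro", False, False, False),
]
for p in papers:
    included, reason = p.qualifies_as_clinical_data()
    print(p.citation, included, reason)
```

The point of the sketch is the first check: publication status never appears in the decision, only whether the data came from human clinical use.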
Data from Equivalent Devices
If you demonstrate equivalence to another device under MDR Annex XIV, you can leverage the clinical data of that equivalent device. But equivalence is not similarity. It is not "close enough." It is not "within the same product family."
Annex XIV requires you to demonstrate equivalence across technical, biological, and clinical characteristics, such that there is no clinically significant difference in safety or clinical performance. If you cannot demonstrate all three, you cannot rely on the equivalent device's clinical data.
Most equivalence claims fail on biological or clinical characteristics. The devices may share the same materials and similar design, but differ in contact duration, tissue type, or clinical indication. That difference invalidates the equivalence claim.
When equivalence fails, the clinical data from the other device becomes irrelevant to your submission. You are left without sufficient clinical evidence.
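The all-or-nothing nature of an equivalence claim can be illustrated with a minimal check. The comparison values below are hypothetical; the three characteristic names paraphrase MDR Annex XIV, but the data structure is an assumption for illustration only.

```python
# Hypothetical side-by-side equivalence comparison. A single failed
# characteristic invalidates the whole claim, per the logic in Annex XIV.
def equivalence_claim_valid(comparison: dict[str, bool]) -> bool:
    """All three characteristics must hold; one failure voids the claim."""
    required = ("technical", "biological", "clinical")
    return all(comparison.get(c, False) for c in required)

candidate = {
    "technical": True,    # similar design, specifications, conditions of use
    "biological": True,   # same materials contacting the same tissues
    "clinical": False,    # e.g. different indication or contact duration
}
print(equivalence_claim_valid(candidate))  # False: clinical difference voids it
```

Note that two passing characteristics contribute nothing once the third fails, which mirrors the text above: a difference in contact duration, tissue type, or indication alone invalidates the claim.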
What Reviewers Actually Look For
Notified Bodies and competent authorities do not just count publications. They assess the dataset for relevance, quality, and sufficiency.
Relevance means the data addresses your device’s specific risks, intended use, and target population. A study on a similar device used in a different clinical setting may be high quality but irrelevant to your submission.
Quality refers to study design, sample size, methodology, and bias control. Observational studies with undefined protocols and uncontrolled variables get flagged. Case reports and opinion pieces do not qualify as clinical evidence.
Sufficiency means the dataset covers all identified risks and claims. You cannot ignore a significant risk because no literature exists for it. You must either generate new data or justify the gap with technical rationale and risk management.
A clinical evaluation with fifty irrelevant publications is weaker than one with three highly relevant studies. Reviewers prioritize depth and applicability over volume.
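A sufficiency review of the kind described above is essentially a coverage check: every identified risk and every clinical claim needs at least one relevant piece of clinical data, or an explicit justification. The risks, claims, and citations below are invented examples.

```python
# Illustrative sufficiency check: map each identified risk and clinical
# claim to supporting clinical data, then surface uncovered items.
# All entries are hypothetical examples.
risks_and_claims = [
    "thrombosis risk",
    "pain reduction claim",
    "migration risk",
]

evidence_map = {
    "thrombosis risk": ["Smith 2021 cohort"],
    "pain reduction claim": ["equivalent device registry data"],
    # "migration risk" has no clinical data yet
}

gaps = [item for item in risks_and_claims if not evidence_map.get(item)]
print(gaps)  # items needing new data or a documented justification
```

Each item in `gaps` must then be closed the way the text describes: generate new data, or justify the gap through risk management and a PMCF plan. Volume of citations never enters the check, only coverage.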
The Gray Zone: Post-Market Surveillance and Registries
MDR Article 2(48) includes post-market clinical follow-up and registry data as potential sources of clinical data. This creates confusion because PMCF is often retrospective and uncontrolled.
PMCF data qualifies as clinical data when it is systematically collected, analyzed according to a defined plan, and addresses specific clinical questions. Ad hoc complaint logs and vague customer feedback do not meet this standard.
Registry data can be valuable if the registry tracks relevant clinical outcomes for devices similar to yours. But the registry must be credible, the data must be accessible, and the population must align with your intended use.
In practice, most manufacturers cannot rely on registries at the time of initial submission. The data is either unavailable or not specific enough. PMCF becomes more useful during recertification and periodic safety updates.
Why This Matters for Your Clinical Evaluation
The clinical data you include determines whether your clinical evaluation report demonstrates conformity with MDR Annex I. If your dataset is weak, your entire evaluation collapses.
Reviewers will challenge every non-clinical citation. They will question every equivalence assumption. They will flag every gap between your claims and your evidence.
I have seen submissions with comprehensive technical files, strong risk management, and robust quality systems still get rejected because the clinical data definition was misunderstood. The manufacturer thought biocompatibility data counted as clinical evidence. It does not.
Another common mistake is citing preclinical studies to support clinical claims. You can reference those studies in your technical rationale, but they cannot appear in your clinical data section. Mixing the two tells the reviewer that you do not understand the regulatory framework.
Manufacturers list bench tests, material characterization, and sterilization validation studies under clinical data. This immediately undermines credibility and triggers deeper scrutiny of the entire submission.
How to Build a Defensible Clinical Dataset
Start by identifying what clinical questions need answers. What are the significant residual risks? What are the clinical claims? What does the state of the art require you to demonstrate?
Then map each question to a data source. Can it be answered with literature? Do you need clinical investigation? Can equivalence apply if properly justified?
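The mapping step above can be made concrete as a small planning table: each clinical question is assigned a data source and a rationale up front, so gaps surface before the Notified Body review rather than during it. The questions, sources, and justifications here are hypothetical examples.

```python
# Hypothetical planning table linking clinical questions to data sources.
# Source categories follow the three routes discussed in this article,
# plus PMCF for post-market collection.
allowed_sources = {"clinical_investigation", "literature", "equivalence", "pmcf"}

plan = [
    {"question": "Does the device achieve the claimed pain reduction?",
     "source": "literature",
     "justification": "comparable published cohorts exist"},
    {"question": "Is long-term implant safety maintained beyond 2 years?",
     "source": "pmcf",
     "justification": "no pre-market data obtainable in a feasible timeframe"},
]

for entry in plan:
    # Every question needs a recognized source and a documented rationale.
    assert entry["source"] in allowed_sources, entry
    assert entry["justification"], "every source choice needs a rationale"
print("all clinical questions mapped to a data source")
```

A table like this also makes the PMCF commitments explicit, which supports the gap justifications discussed below.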
Conduct your literature search according to a documented protocol; MEDDEV 2.7/1 Rev 4 remains the accepted methodology. Document your search terms, databases, inclusion criteria, and appraisal methodology. Do not cherry-pick favorable studies. Include contradictory findings and explain how they affect your conclusions.
If you rely on equivalence, document it rigorously. Provide side-by-side comparisons of technical specifications, material properties, and clinical use. If any characteristic differs, explain why it does not affect clinical performance. If you cannot explain it, equivalence is not valid.
For gaps in the dataset, acknowledge them openly. Explain why the gap exists. Describe how risk management addresses the uncertainty. Outline your PMCF plan to collect the missing data post-market.
Transparency builds trust. Trying to hide a weak dataset with volume or vague language does the opposite.
What Happens When You Get It Wrong
If your clinical dataset is insufficient, the Notified Body will issue a non-conformity. You will need to provide additional data or justify the gap. If you cannot, the certification process stalls.
In some cases, you can address the deficiency with a better literature search or refined equivalence justification. In other cases, you will need to conduct a clinical investigation. That adds months or years to your timeline.
I have seen manufacturers lose market access because they could not generate sufficient clinical data within a reasonable timeframe. The product was technically sound. The risk management was solid. But the clinical evidence was missing, and there was no fast path to generate it.
The earlier you understand what counts as clinical data, the earlier you can plan how to obtain it. Waiting until the Notified Body review to discover the problem is too late.
Clinical data planning should begin at the design phase, not during technical file compilation. The device you design determines what evidence you can realistically collect.
Final Thoughts
The definition of clinical data under MDR is narrower and more demanding than most manufacturers expect. It excludes preclinical work. It requires systematic appraisal of literature. It sets high bars for equivalence.
Understanding this definition is not optional. It is the foundation of your clinical evaluation. If you misinterpret it, every downstream decision in your submission is compromised.
Take the time to map your clinical data sources correctly. Challenge your assumptions about what qualifies. And when in doubt, focus on data derived from human use in clinical settings. That is what the regulation requires. That is what reviewers will demand.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
– MDR 2017/745: Article 2(48), Annex XIV
– MDCG 2020-5: Clinical Evaluation (Equivalence)
– MDCG 2020-13: Clinical Evaluation Assessment Report Template
– MEDDEV 2.7/1 Rev 4: Clinical Evaluation Guide for Manufacturers and Notified Bodies