Real-World Data Is Not Real-World Evidence—And Your CER Depends On It
I have seen manufacturers submit clinical evaluation reports with hundreds of pages of real-world data—registry results, observational studies, post-market surveillance summaries. The documents look comprehensive. The references are abundant. But when the Notified Body reviewer writes back, the deficiency is clear: “Insufficient evidence to support the clinical claims.” The problem? They submitted data, not evidence.
In This Article
- What Real-World Data Actually Represents
- What Transforms Data Into Evidence
- Why This Distinction Matters For Clinical Evaluation Reports
- The Role of Real-World Evidence in Equivalence Demonstrations
- Real-World Evidence and PMCF Strategy
- Practical Implications for Your Next Submission
- Why Notified Bodies Focus on This Distinction
- Looking Ahead
This distinction causes more delays in technical documentation reviews than almost any other misunderstanding. And it is not about semantics. It is about whether you can actually use what you collected to demonstrate safety and performance under MDR.
The confusion is understandable. Both terms appear throughout MDCG guidance. Both are central to post-market surveillance and clinical evaluation updates. But they represent fundamentally different stages in the regulatory reasoning process.
Real-world data is raw material. Real-world evidence is the conclusion you draw from it—but only after you have assessed its quality, relevance, and applicability to your device and claims.
What Real-World Data Actually Represents
Real-world data refers to information collected outside the controlled environment of a clinical investigation. It includes registry data, insurance claims, electronic health records, post-market surveillance reports, scientific literature, and PMCF studies.
Under MDR Article 61 and Annex XIV Part A, manufacturers must conduct ongoing clinical evaluation throughout the lifecycle of the device. Real-world data is one of the primary sources for this evaluation.
But here is what I observe in submissions: manufacturers treat real-world data as if its existence alone satisfies the requirement. They include registry publications, cite observational studies, and reference post-market data streams. The assumption is that more data equals stronger evidence.
That assumption fails every time it reaches a competent reviewer.
Manufacturers append real-world studies to their clinical evaluation report without critically appraising the data quality, patient population comparability, or outcome measurement consistency. The Notified Body flags this as “data presented but not transformed into evidence.”
Real-world data becomes useful only after you apply a structured appraisal process. You must assess the study design, the population characteristics, the follow-up duration, the endpoint definitions, and the risk of bias.
Until you do that, it remains data. And data alone does not demonstrate anything.
What Transforms Data Into Evidence
Real-world evidence emerges when you appraise real-world data and determine that it is sufficiently valid, relevant, and applicable to support a specific claim about your device.
MDCG 2020-6 on sufficient clinical evidence describes this transformation implicitly. The document emphasizes that clinical data must be critically appraised before it can contribute to the demonstration of safety and performance.
This is where most manufacturers stop too early. They collect the data. They reference the study. But they do not complete the analysis that converts observation into evidence.
The appraisal requires three layers of assessment:
First, methodological quality. Is the study design appropriate for the research question? Are the endpoints clinically meaningful? Is there adequate follow-up? Are there confounding factors that were not controlled?
Second, relevance. Does the study population match your intended users and patient groups? Are the clinical conditions comparable? Are the device usage patterns similar?
Third, applicability. Can the findings from this study be reasonably applied to your device, considering differences in design, materials, technology, or intended use?
If any of these layers fail, the data cannot contribute to your evidence base—at least not without significant qualification or supplementary data.
Real-world evidence is not a category of data source. It is a conclusion you reach after methodologically appraising real-world data and determining it supports your claims.
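To make the three-layer appraisal concrete, you can think of it as a structured record that every data source must pass through before it counts toward your evidence base. The sketch below is purely illustrative, not a regulatory template; the field names, the example source, and its limitations are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SourceAppraisal:
    """Illustrative appraisal record for one real-world data source."""
    source: str               # citation or registry name (hypothetical here)
    quality_ok: bool          # design, endpoints, follow-up, bias controlled
    relevance_ok: bool        # population, condition, usage patterns match
    applicability_ok: bool    # findings transfer despite design/material differences
    limitations: list = field(default_factory=list)

    def contributes_as_evidence(self) -> bool:
        # Data becomes evidence only when all three layers pass appraisal.
        return self.quality_ok and self.relevance_ok and self.applicability_ok

# Hypothetical example: good study, wrong applicability.
appraisal = SourceAppraisal(
    source="Registry study X (hypothetical)",
    quality_ok=True,
    relevance_ok=True,
    applicability_ok=False,
    limitations=["different anatomical site", "shorter follow-up"],
)
print(appraisal.contributes_as_evidence())  # False: fails the applicability layer
```

The point of the structure is the gate at the end: no source reaches "evidence" status by existing in the bibliography; it gets there only by passing every layer, with its limitations documented alongside.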
Why This Distinction Matters For Clinical Evaluation Reports
The structure of your clinical evaluation report should reflect this distinction. Under Annex XIV Part A, you must present clinical data and then appraise it to demonstrate conformity with the relevant general safety and performance requirements.
When I review CERs, I see a recurring pattern. Section 6 on clinical data identification lists dozens of studies. Section 7 on appraisal of clinical data simply repeats the abstracts or summarizes the results without critical analysis.
This structure fails the MDR requirement. You have presented data. You have not generated evidence.
What reviewers expect is a clear appraisal for each study or data source. They want to see you assess quality, relevance, and applicability. They want to see you acknowledge limitations. They want to see you explain whether and how each source contributes to your demonstration of safety and performance.
Without that analysis, you have not completed the clinical evaluation. You have compiled a literature review.
The Role of Real-World Evidence in Equivalence Demonstrations
This distinction becomes even more critical when you rely on equivalence under MDCG 2020-5. The equivalence route allows you to use clinical data from an equivalent device to support your own device’s clinical evaluation.
But equivalence is not established by showing that similar devices exist in the market. It requires technical, biological, and clinical equivalence—and real-world data from the equivalent device must be critically appraised before it becomes evidence supporting your device.
I have reviewed equivalence claims where the manufacturer cites real-world studies of predicate devices. The studies show favorable outcomes. The device characteristics appear similar.
But the appraisal is missing. There is no analysis of whether the patient populations are comparable. There is no discussion of differences in surgical technique, anatomical site, or device preparation. There is no acknowledgment of differences in follow-up duration or endpoint definitions.
The result? The Notified Body does not accept the equivalence claim. Not because the data is bad, but because it was never transformed into evidence through proper appraisal.
Manufacturers assume that citing real-world studies of equivalent devices automatically constitutes evidence for their own device. Without appraisal and explicit discussion of applicability, the equivalence claim collapses.
Real-World Evidence and PMCF Strategy
The distinction also shapes how you design and interpret your post-market clinical follow-up plan. Under MDR Article 61(11) and Annex XIV Part B, PMCF is not optional. It is a continuous requirement aimed at confirming safety and performance and identifying emerging risks.
Many manufacturers design PMCF plans that focus on data collection—registry participation, periodic literature searches, complaint analysis. But the plan must also describe how that data will be appraised and transformed into evidence.
What will you do with the registry data once you collect it? How will you assess whether it remains relevant to your current design? How will you identify and manage bias in observational data?
If your PMCF plan describes data collection without describing evidence generation, it is incomplete. Reviewers will flag this during technical documentation assessment or surveillance audits.
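One way to audit your own PMCF plan against this expectation is to pair every collection activity with its appraisal step and flag any stream that has none. The activities and appraisal wording below are illustrative assumptions, not prescribed plan content.

```python
# Illustrative PMCF plan: each data-collection activity paired with the
# appraisal step that turns its output into evidence (wording assumed).
PMCF_PLAN = {
    "registry participation": "assess population match and endpoint consistency annually",
    "literature search": "critically appraise new studies against device and claims",
    "complaint analysis": "evaluate signal strength and bias before drawing trends",
}

def incomplete_activities(plan: dict) -> list:
    """Return activities that collect data but name no appraisal step."""
    return [activity for activity, appraisal in plan.items() if not appraisal]

print(incomplete_activities(PMCF_PLAN))  # [] when every stream has an appraisal step
```

A plan where this check returns an empty list at least describes evidence generation for every stream; a plan with bare collection activities is the incomplete pattern reviewers flag.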
Practical Implications for Your Next Submission
When you prepare your next clinical evaluation report or respond to a deficiency letter, apply this framework:
For each data source you cite, include a structured appraisal section. Use a consistent template that addresses methodological quality, relevance to your device and claims, and applicability considering design or population differences.
For equivalence claims, do not just show that similar devices exist. Appraise the clinical data from those devices and explicitly discuss why it can be applied to yours.
For PMCF plans, describe not only what data you will collect but how you will appraise it and integrate findings into your ongoing clinical evaluation.
For periodic safety update reports (PSURs), demonstrate that you have appraised new real-world data since the last update and explain whether it changes your benefit-risk determination.
This approach does not require more data. It requires more rigor in how you analyze and present the data you already have.
Reviewers do not expect perfect data. They expect transparent appraisal. When you acknowledge limitations and explain how you accounted for them, you build credibility.
Why Notified Bodies Focus on This Distinction
Notified Bodies are not being difficult when they insist on this distinction. They are fulfilling their obligation under the MDR conformity assessment procedures, including the clinical evaluation assessment required of them in Annex VII, to verify that clinical evaluation has been performed in accordance with Annex XIV.
Annex XIV requires appraisal of clinical data. It requires demonstration of conformity with safety and performance requirements. It requires analysis of the benefit-risk profile.
None of these can be accomplished with unappraised data. You need evidence—data that has been evaluated and determined to support specific conclusions.
When a reviewer asks for more evidence, they are often not asking for more studies. They are asking you to complete the appraisal process for the data you already presented.
This is why deficiency letters sometimes feel frustrating. You submitted extensive clinical data. But the reviewer did not see the analytical work that transforms data into evidence.
The solution is not to collect more data. The solution is to demonstrate that you critically appraised what you have and that it supports your claims.
Looking Ahead
As enforcement of MDR continues to tighten, expect more scrutiny of how you appraise and present real-world data. Notified Bodies and competent authorities are increasingly focused on the quality of clinical evaluation, not just its volume.
The manufacturers who succeed are those who understand that clinical evaluation is an analytical process, not a data compilation exercise.
Real-world data will always be central to lifecycle clinical evaluation. But it becomes useful only when you transform it into evidence through rigorous, transparent appraisal.
That transformation is not automatic. It requires methodological discipline, critical thinking, and clear documentation.
And when done correctly, it turns your clinical evaluation report from a collection of references into a credible demonstration of safety and performance.
Which is exactly what the regulation requires.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when there are changes to the device or its intended purpose. As a minimum cadence, it is updated at least annually for class III and implantable devices, and for lower-risk devices at the interval justified in your post-market surveillance plan.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for the claims made, a poorly structured state-of-the-art (SOTA) analysis, a missing gap analysis, and the lack of a clear benefit-risk determination. Structure and logical flow matter as much as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
References
– MDR 2017/745 Article 61: Clinical evaluation
– MDR 2017/745 Annex XIV: Clinical evaluation and post-market clinical follow-up
– MDCG 2020-5: Clinical Evaluation – Equivalence
– MDCG 2020-6: Regulation (EU) 2017/745: Sufficient clinical evidence for legacy devices
– MDCG 2020-13: Clinical evaluation assessment report template
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the full clinical evaluation process under Regulation (EU) 2017/745.