RWD vs RWE Under MDR: Why Confusion Costs You Months
I reviewed a clinical evaluation report last month where the manufacturer claimed they were generating Real World Evidence through their PMCF study. The Notified Body rejected it immediately. The reason? They were collecting Real World Data, not generating Real World Evidence. The manufacturer lost four months because they confused two concepts that regulators treat as fundamentally different under the MDR.
In This Article
- What Real World Data Actually Means Under MDR
- What Real World Evidence Requires
- Why This Distinction Matters For Your CER
- The Practical Implication For PMCF Planning
- How Notified Bodies Assess This Distinction
- The Role of Study Design in Evidence Generation
- Integration Into Your Clinical Evaluation
- Practical Steps To Implement This Correctly
- What This Means For Your Next Submission
This confusion appears in roughly one of every three CERs I review. Manufacturers use the terms interchangeably. They write PMCF plans that promise Real World Evidence but deliver only Real World Data. They present data collection as if it automatically becomes evidence.
It does not.
Under MDR, this distinction matters more than many manufacturers realize. The difference determines whether your PMCF plan is acceptable, whether your clinical data set is sufficient, and whether your CER conclusions are defensible. Let me show you what actually separates these two concepts in regulatory practice.
What Real World Data Actually Means Under MDR
Real World Data is information. It is raw material collected from sources outside traditional clinical investigations. It comes from registries, electronic health records, claims databases, patient surveys, and post-market surveillance systems.
Article 61 of MDR does not use the term Real World Data explicitly. But the regulation requires manufacturers to have a system for collecting and analyzing information from the post-market phase. That information is Real World Data.
The critical point: RWD is descriptive. It tells you what happened. It does not tell you why it happened, whether your device caused it, or what it means for safety and performance.
Real World Data is observational information from routine clinical practice. It becomes useful only when transformed through systematic analysis into evidence that answers specific clinical questions.
When I see a PMCF plan that lists data collection activities without defining the analytical framework, I know the manufacturer has not understood this distinction. They are building a data warehouse without a blueprint for how that data will generate evidence.
What Real World Evidence Requires
Real World Evidence is the product of systematic analysis applied to Real World Data. It is what you get when you take RWD and apply rigorous methodology to answer a specific clinical question.
The transformation from data to evidence requires three elements that are often missing in PMCF plans:
First, a defined clinical question. Not a vague objective like “collect safety data.” A precise question that can be answered: Does the device perform as intended in patients with comorbidity X? What is the real-world complication rate compared to the clinical investigation population?
Second, an appropriate analytical method. You must define how you will analyze the data before you collect it. What statistical approach will you use? How will you control for confounders? What comparison group will you use?
Third, a structured evaluation. The analysis must follow a protocol. The results must be interpreted against predefined success criteria. The conclusions must address the original clinical question.
Without these three elements, you have data. You do not have evidence.
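The three elements above can be sketched as a simple data model. This is an illustration, not a regulatory template: the class and field names are my own, and a real PMCF plan would carry far more detail for each element.

```python
from dataclasses import dataclass

@dataclass
class EvidencePlanItem:
    """One planned transformation of RWD into RWE (names are illustrative)."""
    clinical_question: str    # precise, answerable question
    data_sources: list        # registries, EHR extracts, surveys, ...
    analytical_method: str    # e.g. "comparative analysis vs. matched controls"
    success_criteria: str     # predefined threshold for interpreting the answer

    def is_evidence_ready(self) -> bool:
        """Data collection alone is not enough: the question, the method,
        and the success criteria must all be predefined."""
        return all([self.clinical_question, self.data_sources,
                    self.analytical_method, self.success_criteria])

# A plan item with all three elements defined (hypothetical content):
item = EvidencePlanItem(
    clinical_question="What is the 2-year revision rate in patients with comorbidity X?",
    data_sources=["national implant registry"],
    analytical_method="survival analysis with predefined censoring rules",
    success_criteria="upper 95% CI bound of revision rate below 5%",
)
print(item.is_evidence_ready())  # True: this item can generate evidence
```

A plan item that only names a data source, with empty method and criteria, fails the same check. That failing check is exactly the gap a Notified Body reviewer flags.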
Manufacturers describe data collection activities in their PMCF plan but fail to define the analytical methodology that will transform that data into evidence. Notified Bodies reject these plans because they cannot assess whether the planned activities will generate sufficient evidence.
Why This Distinction Matters For Your CER
The Clinical Evaluation Report under MDR must demonstrate sufficient clinical evidence. Evidence, not data.
I have seen manufacturers present pages of registry data in their CER. They show tables of procedure numbers, patient demographics, and basic outcome frequencies. Then they conclude the device is safe and performs as intended.
This approach fails under MDR scrutiny.
MDCG 2020-6 on sufficient clinical evidence is clear. You must demonstrate that your clinical data set adequately addresses the claims you make. This requires evidence that has been systematically analyzed to answer specific questions about safety and performance.
Here is what happens in a typical review cycle:
The manufacturer submits a CER with extensive RWD from registries and literature. The Notified Body asks: “How does this data demonstrate that your specific device performs as intended in your target population?” The manufacturer cannot answer because they presented data without the analytical framework that would transform it into evidence.
The gap is methodological, not informational.
The Practical Implication For PMCF Planning
When you write your PMCF plan, you must move beyond data collection objectives. You must define what evidence you need to generate and how you will generate it.
Start with your residual clinical questions. These come from gaps in your pre-market clinical data set. They might relate to long-term performance, rare complications, use in subpopulations, or real-world effectiveness compared to controlled investigation settings.
For each residual question, define the evidence requirement. What specific information would answer this question? What level of certainty do you need? What would constitute sufficient evidence?
Then design the analytical approach. Will you use comparative analysis against a control group? Survival analysis for long-term outcomes? Subgroup analysis for specific populations? Define this before you start collecting data.
Your PMCF plan should read like a research protocol, not a data collection checklist. Define your clinical questions, your analytical methods, and your success criteria before you begin. This is how you plan for evidence generation, not just data accumulation.
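As a minimal sketch of what "define the analytical approach before collecting data" means in practice, here is a comparative analysis of complication rates with a predefined non-inferiority margin. All numbers are hypothetical, and the normal-approximation confidence interval is a simplification; a real protocol would justify the statistical method and margin.

```python
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """95% CI for the difference in complication rates (normal approximation)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts: registry cohort vs. pre-market investigation cohort.
diff, (ci_low, ci_high) = risk_difference_ci(events_a=18, n_a=600,
                                             events_b=9, n_b=300)
margin = 0.03  # non-inferiority margin, predefined in the protocol (illustrative)
print(f"risk difference {diff:.3f}, 95% CI ({ci_low:.3f}, {ci_high:.3f})")
print("non-inferior" if ci_high < margin else "inconclusive")
```

The point is not the arithmetic. It is that the comparison group, the statistic, and the margin are all fixed in the plan before the first registry record is extracted.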
I worked with a manufacturer last year who revised their entire PMCF strategy using this approach. Their original plan listed five data sources they would monitor. The revised plan defined three specific clinical questions, described the analytical methodology for each, and specified how the results would be integrated into the next CER update.
The Notified Body approved the revised plan in one review cycle.
How Notified Bodies Assess This Distinction
Notified Body reviewers are trained to identify whether you understand the difference between data and evidence. They look for specific indicators in your documentation.
In your PMCF plan, they check whether you have defined analytical methods. If you only describe data collection, they will raise a non-conformity. They want to see that you have planned for evidence generation.
In your CER, they assess whether your conclusions are supported by evidence or just accompanied by data. A conclusion that says “the device is safe based on registry data showing low complication rates” is insufficient. A conclusion that says “comparative analysis of registry data against matched controls demonstrates non-inferiority in complication rates, supporting the safety profile” is evidence-based.
The difference is in the analytical framework.
CER authors present RWD tables and figures without explaining the analytical process used to generate conclusions. Reviewers cannot assess the validity of conclusions when the analytical methodology is not transparent.
One Notified Body reviewer told me: “I need to see the path from data to conclusion. If that path is not visible, I cannot accept the conclusion as evidence-based.”
That statement captures the regulatory expectation perfectly.
The Role of Study Design in Evidence Generation
Real World Data does not come with built-in study design. That is why it requires careful planning to generate valid evidence.
When you use data from registries or electronic health records, you face methodological challenges that do not exist in prospective clinical investigations. Selection bias, missing data, confounding variables, and lack of standardization all threaten the validity of your analysis.
Evidence generation from RWD requires you to address these challenges explicitly. You must define your inclusion criteria, your data quality requirements, your approach to missing data, and your method for controlling confounders.
This is why I emphasize protocol development for PMCF activities. Even when you are using existing data sources, you need a protocol that defines how you will extract, analyze, and interpret that data to generate evidence.
Without a protocol, you are making analytical decisions on the fly. Those decisions become invisible in the final report. Reviewers cannot assess whether your conclusions are valid because they cannot see your methodology.
Integration Into Your Clinical Evaluation
The ultimate purpose of generating Real World Evidence is to strengthen your clinical evaluation. This happens through your CER updates and your ongoing benefit-risk assessment.
MDR Article 61(11) requires you to update your clinical evaluation throughout the lifecycle of your device. Real World Evidence is the primary input for these updates. But only if it is actually evidence.
When you update your CER, you should be able to point to specific clinical questions that have been answered by your PMCF activities. You should demonstrate how new evidence has refined your understanding of safety and performance. You should show how evidence from real-world use confirms or modifies conclusions from pre-market data.
If your CER update just adds new data tables without changing or strengthening your conclusions, you have not generated meaningful evidence. You have accumulated information without transforming it into knowledge.
Each CER update should demonstrate how new Real World Evidence has strengthened or refined your clinical evaluation. If the update only shows that you collected more data without generating new insights, you have not fulfilled the MDR requirement for ongoing clinical evaluation.
Practical Steps To Implement This Correctly
First, audit your current PMCF plan. Does it define clinical questions or just data collection activities? If it is the latter, revise it to include specific research questions and analytical methods.
Second, review your data sources. For each source of Real World Data, define what evidence you intend to generate from it. What questions will this data answer? What analysis will you perform?
Third, develop analytical protocols for your major PMCF activities. These do not need to be as detailed as clinical investigation protocols, but they must define your methodology clearly enough that a reviewer can assess its validity.
Fourth, train your team on this distinction. Make sure everyone involved in PMCF and clinical evaluation understands that collecting data is the first step, not the final deliverable.
Fifth, review your CER template. Does it force you to explain the analytical process for each data source? Does it require you to connect evidence to specific clinical conclusions? If not, revise the template to make the evidence generation process visible.
What This Means For Your Next Submission
The next time you prepare a CER or a PMCF plan, ask yourself: Am I describing data or demonstrating evidence?
If you are listing data sources and showing tables, you are describing data. If you are explaining analytical methods and connecting results to clinical conclusions through systematic evaluation, you are demonstrating evidence.
Notified Bodies increasingly focus on this distinction. They understand that MDR requires evidence-based clinical evaluation, not data-rich documentation.
The manufacturers who adapt their approach now will avoid the review cycles that come from treating these concepts as interchangeable. The manufacturers who continue to confuse them will keep receiving the same deficiencies: insufficient evidence, inadequate analytical methodology, unclear connection between data and conclusions.
The choice is methodological, not philosophical. Generate evidence, not just data. Your regulatory timeline depends on it.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
References
– MDR 2017/745, Article 61
– MDCG 2020-6: Sufficient Clinical Evidence for Legacy Devices
– MDCG 2020-7: Post-Market Clinical Follow-up (PMCF) Plan Template
– MDCG 2020-13: Clinical Evaluation Assessment Report Template