Real World Data Is Not Real World Evidence in PMCF

Written by Hatem Rabeh, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

I see PMCF reports that claim they have collected Real World Evidence when all they have is scattered data. The confusion between Real World Data and Real World Evidence is not merely semantic. It is the reason why many PMCF activities fail to meet MDR requirements and why Notified Bodies reject reports that look complete on the surface.

This confusion creates a predictable pattern. Manufacturers collect data points, archive them, and believe they have fulfilled their post-market surveillance obligations. When the Notified Body or competent authority reviews the file, they find a collection of information but no evidence. The difference is not subtle.

Let me be clear about what each term means in regulatory practice, why the distinction matters, and how the misunderstanding translates into deficiencies during audits.

What Real World Data Actually Means

Real World Data, or RWD, refers to data collected outside the controlled environment of clinical investigations. It comes from sources that reflect routine use of the device in everyday clinical practice. This includes patient registries, electronic health records, claims databases, device registries, and structured data collection from hospitals and clinics.

In the context of PMCF under MDR 2017/745, RWD typically includes clinical outcomes data, device performance indicators, user feedback, complaint records, and procedural documentation. It is raw material. It is information that reflects what happens when the device is used in real conditions, by real users, on real patients.

The key characteristic of RWD is that it is not inherently analytical. It is observational. It reflects events and outcomes, but it does not yet answer questions about safety, performance, or clinical benefit.

Key Insight
Real World Data is the input. It is what you collect. It becomes useful only when you process it, contextualize it, and interpret it against your clinical evaluation plan.

Most manufacturers understand this conceptually. But in practice, I see PMCF plans that list data sources without explaining how those sources will be used to generate evidence. The plan describes what will be collected, but not how the data will be analyzed, interpreted, or integrated into the clinical evaluation.

This is where the problem starts.

What Real World Evidence Actually Requires

Real World Evidence, or RWE, is the product of analysis. It is the clinical evidence derived from RWD through systematic evaluation. It answers specific questions about the safety and performance of the device. It is contextualized, interpreted, and tied to the clinical evaluation objectives.

According to MDR Article 61 and the requirements for PMCF outlined in Annex XIV Part B, PMCF must generate evidence that confirms the safety and performance of the device throughout its lifecycle. Evidence is not data. Evidence is what the data shows when you apply a methodology, compare it to predefined acceptability criteria, and interpret it in the context of the state of the art.

This is the regulatory expectation. PMCF is not a passive data collection exercise. It is an active evidence generation process.

When a Notified Body reviews a PMCF report, they are looking for evidence that specific clinical questions have been answered. They want to see that residual risks have been monitored, that performance claims have been verified, that emerging safety signals have been evaluated, and that the clinical benefit remains favorable in routine use.

Common Deficiency
PMCF reports that present data tables, charts, and summaries without linking them to predefined clinical questions or acceptability criteria. The data is there, but the evidence is not.

The absence of this linkage is the most frequent deficiency I encounter. The manufacturer presents complaint rates, procedural outcomes, and device performance metrics. But there is no analysis of whether these outcomes are acceptable, how they compare to the state of the art, or what they mean for the residual risk profile.

Without this analysis, you have data. You do not have evidence.

Why the Distinction Matters in Practice

The regulatory framework does not ask for data collection. It asks for evidence generation. This distinction shapes every aspect of how PMCF must be planned, executed, and reported.

If your PMCF plan defines data sources but does not define the analysis methodology, you are planning for data collection. If your PMCF report presents data summaries but does not interpret them against predefined criteria, you are reporting data, not evidence.

Notified Bodies and competent authorities have become increasingly strict on this point. They reject PMCF reports that look complete but lack analytical depth. They issue non-conformities that point specifically to the absence of evidence generation, even when data collection is extensive.

This is not a documentation issue. It is a conceptual issue. It reflects a misunderstanding of what PMCF is supposed to achieve under MDR.

Let me give you a concrete example from a recent audit I observed. The manufacturer had collected three years of complaint data, device return data, and procedural outcome data from multiple hospitals. The PMCF report included detailed tables showing complication rates, user feedback scores, and device survival rates.

The Notified Body rejected the report. The reason was simple. The data was there, but the analysis was not. There were no predefined acceptability criteria. There was no comparison to the clinical evaluation conclusions. There was no interpretation of whether the observed complication rate was consistent with the benefit-risk profile established in the clinical evaluation report.

The manufacturer had spent significant resources collecting data. But they had not generated evidence.

How to Move from Data to Evidence

The transition from RWD to RWE requires a structured approach. It begins with the PMCF plan and carries through to the PMCF evaluation report and ultimately to the periodic update of the clinical evaluation report.

Your PMCF plan must define the clinical questions that need to be answered. These questions should be derived from the residual risks, performance claims, and clinical benefit assertions in your clinical evaluation report. Each question must have predefined acceptability criteria based on the state of the art, clinical literature, and your device’s intended benefit-risk profile.
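
To make this concrete, here is a minimal sketch of what a predefined clinical question can look like when captured in structured form. Everything in it is hypothetical, my own illustration rather than a template from the MDR or any MDCG document; the point is only that the question, the metric, the criterion, and the data sources are fixed before collection begins.

```python
# Hypothetical sketch: a PMCF clinical question with its predefined
# acceptability criterion captured in structured form. All names and
# values are illustrative, not drawn from MDR or MDCG templates.
from dataclasses import dataclass

@dataclass(frozen=True)
class ClinicalQuestion:
    question: str                  # derived from a CER residual risk or claim
    outcome_metric: str            # what the collected RWD must measure
    acceptance_threshold: float    # predefined, justified by the state of the art
    data_sources: tuple[str, ...]  # where the RWD will come from

COMPLICATION_QUESTION = ClinicalQuestion(
    question=("Does the complication rate in routine use remain consistent "
              "with the benefit-risk profile established in the CER?"),
    outcome_metric="complications per procedure",
    acceptance_threshold=0.05,     # hypothetical 5% ceiling from the SOTA review
    data_sources=("device registry", "complaint records"),
)
```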

When you collect RWD, you must have a methodology for analyzing it. This means statistical methods where applicable, qualitative interpretation frameworks where relevant, and comparison benchmarks drawn from the state-of-the-art literature review.
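
Where the outcome is a simple event rate, the analysis methodology can be as plain as an exact binomial comparison against the predefined threshold. The sketch below continues the hypothetical example above, with invented counts; scipy's binomtest is one reasonable tool, not a prescribed method.

```python
# Minimal sketch, continuing the hypothetical example: test an observed
# complication rate against the predefined 5% acceptability criterion.
from scipy.stats import binomtest

observed_events = 12   # hypothetical PMCF counts
procedures = 400
criterion = 0.05       # predefined in the PMCF plan, not chosen post hoc

# One-sided question: is the true rate above the acceptability criterion?
result = binomtest(observed_events, procedures, p=criterion,
                   alternative="greater")
ci = result.proportion_ci(confidence_level=0.95, method="exact")

print(f"Observed rate: {observed_events / procedures:.1%} "
      f"(95% CI {ci.low:.1%} to {ci.high:.1%})")

# The conclusion, not the table, is the evidence: it feeds back into the
# clinical evaluation update and, if adverse, into risk management.
if ci.high <= criterion:
    print("Upper confidence bound within criterion: performance confirmed.")
else:
    print("Criterion not demonstrably met: escalate to the clinical "
          "evaluation and risk management review.")
```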

Your PMCF evaluation report must show that you applied this methodology. It must interpret the data in the context of the predefined questions. It must conclude whether the safety and performance of the device remain acceptable, whether the benefit-risk ratio is still favorable, and whether any new information requires updating the clinical evaluation or risk management.

Key Insight
Evidence generation is a loop. The clinical evaluation defines the questions. The PMCF plan defines how data will answer those questions. The PMCF report interprets the data and feeds conclusions back into the clinical evaluation update.

When this loop is closed correctly, you are generating evidence. When any step is missing, you are only collecting data.

What Reviewers Actually Look For

Notified Bodies and competent authorities do not assess PMCF based on the volume of data. They assess it based on the quality of evidence. This means they look for three things.

First, they look for alignment between the PMCF objectives and the clinical evaluation conclusions. If your clinical evaluation identified specific residual risks or made specific performance claims, your PMCF must show how you monitored those risks and verified those claims.

Second, they look for predefined acceptability criteria. If you present a complication rate of three percent, they want to know why three percent is acceptable. What does the state of the art show? What did your clinical evaluation conclude about acceptable risk levels? If this context is missing, the data is meaningless.

Third, they look for interpretation. Did you analyze trends? Did you compare outcomes across subpopulations or indications? Did you identify any emerging signals? Did you conclude whether the device still meets its intended performance and safety profile?

These are the markers of evidence generation. Data alone cannot answer these questions.
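
As a hypothetical illustration of the third marker, a subpopulation comparison can be a single documented test rather than pages of tables. Fisher's exact test is one defensible choice for small event counts; the indications and counts below are invented.

```python
# Hypothetical sketch: checking for an emerging signal across two
# subpopulations (invented indications and counts).
from scipy.stats import fisher_exact

indication_a = (4, 196)    # complications / uneventful procedures
indication_b = (11, 189)

_, p_value = fisher_exact([list(indication_a), list(indication_b)])

print(f"Fisher exact p-value: {p_value:.3f}")
if p_value < 0.05:
    print("Potential subpopulation signal: investigate, document in the "
          "PMCF evaluation report, and assess the impact on the CER.")
else:
    print("No detectable difference at this sample size; record the "
          "analysis so the reviewer can see it was performed.")
```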

I have seen PMCF reports with hundreds of pages of data tables that fail this assessment. I have also seen concise reports of thirty pages that clearly demonstrate evidence generation. The difference is not length. It is analytical rigor.

The Role of MDCG Guidance

The MDCG guidance documents reinforce this distinction. MDCG 2020-7 on the PMCF plan and MDCG 2020-8 on the PMCF evaluation report both emphasize that PMCF must generate clinical evidence confirming ongoing compliance with MDR safety and performance requirements. The message is consistent: PMCF is not simply surveillance. It is evidence generation.

MDCG 2020-13, the clinical evaluation assessment report template used by Notified Body reviewers, approaches the same point from the assessor's side: data from PMCF activities counts as clinical evidence only once it has been evaluated and interpreted. It also shows how PMCF findings are expected to be integrated into the clinical evaluation update and how new information must be assessed for its impact on the benefit-risk determination.

The regulatory expectation is clear. PMCF is an evidence-generating process that feeds directly into the lifecycle maintenance of the clinical evaluation. It is not a parallel activity. It is not an archive of data points. It is the mechanism through which you demonstrate that your clinical evaluation conclusions remain valid over time.

Common Deficiency
PMCF plans that describe data collection activities but do not define how the data will be evaluated or what clinical questions will be answered. This leads to reports that present data without generating evidence.

If your PMCF plan does not define the analysis framework, your PMCF activities will not generate evidence. If your PMCF report does not interpret the data against predefined criteria, you are submitting data, not evidence. And that is what gets rejected.

What This Means for Your Next Submission

Before you finalize your next PMCF plan or PMCF evaluation report, ask yourself these questions. Does your plan define specific clinical questions that need to be answered? Does it define acceptability criteria for each outcome you will measure? Does it describe the methodology you will use to analyze the data?

Does your PMCF report interpret the data in the context of those predefined questions? Does it compare outcomes to acceptability criteria and to the state of the art? Does it conclude whether the safety and performance of the device remain acceptable? Does it identify any findings that require action in the clinical evaluation or risk management?

If the answer to any of these questions is no, you are reporting data. You are not generating evidence.

The distinction is not academic. It is the difference between a compliant PMCF system and one that will not survive regulatory review. It is the difference between a clinical evaluation that evolves with real-world findings and one that becomes outdated the moment the device reaches the market.

Real World Data is what you collect. Real World Evidence is what you prove. The MDR requires the latter. Make sure your PMCF system delivers it.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and otherwise at the frequency defined in the post-market surveillance plan, which for higher-risk devices is typically at least annual.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data to support claims, a poorly structured state-of-the-art (SOTA) analysis, a missing gap analysis, and the lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-7 (PMCF Plan Template), MDCG 2020-8 (PMCF Evaluation Report Template), and MDCG 2020-13 (Clinical Evaluation Assessment Report Template).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– MDR 2017/745, Article 61 and Annex XIV Part B
– MDCG 2020-5, Clinical Evaluation – Equivalence
– MDCG 2020-6, Sufficient Clinical Evidence for Legacy Devices
– MDCG 2020-7, Post-Market Clinical Follow-up (PMCF) Plan Template
– MDCG 2020-8, Post-Market Clinical Follow-up (PMCF) Evaluation Report Template
– MDCG 2020-13, Clinical Evaluation Assessment Report Template