Why Your Registry Data Won’t Pass Clinical Evaluation Review
A manufacturer submits a clinical evaluation report supported by registry data from 12,000 patients. The Notified Body flags it as non-compliant. The data exists. The numbers are real. But the evidence is inadmissible. This happens more often than you think.
Registry data seems like the ideal source for post-market clinical evidence. Large patient populations. Real-world conditions. Long-term follow-up. On paper, it addresses everything reviewers ask for.
But when I review clinical evaluation reports that rely on registry data, I see the same pattern. The manufacturer lists the registry. They cite patient numbers. They reference publications. And the Notified Body rejects it.
The problem is not the registry itself. The problem is that most manufacturers treat registry data as if all registries are created equal. They are not.
Under MDR Annex XIV Part A and the requirements in Article 61, clinical evidence must be sufficient. Sufficient means the data must be adequate in quality, scope, and clinical relevance. Registry data can meet this standard. But only if the registry meets specific structural and methodological criteria.
Most manufacturers skip this assessment. They assume that publication in a peer-reviewed journal validates the data. It does not. Journal publication shows that the analysis passed peer review. It does not show that the underlying data structure meets regulatory standards for clinical evaluation.
What Makes Registry Data Different
Registry data is observational. It is collected during routine clinical practice. This creates both an advantage and a risk.
The advantage is that the data reflects real-world use. No selection bias from controlled trial inclusion criteria. No artificial follow-up schedules. The device is used as it would be used in the market.
The risk is that the data was not collected with regulatory submission in mind. Variables may be inconsistently recorded. Follow-up may be incomplete. Key adverse events may not be captured with the granularity required for MDR compliance.
This matters because MDCG 2020-6 on sufficient clinical evidence explicitly states that data quality affects the weight assigned to the evidence. Poor quality data, even from large registries, receives minimal weight in the evaluation.
Registry data is not automatically admissible because it is published. Admissibility depends on whether the registry structure allows verification of device safety and performance at the level of detail required by MDR.
So what makes a registry admissible?
The Five Criteria for Admissible Registry Data
When I assess whether registry data can support a clinical evaluation, I look at five structural criteria. These are not explicitly listed in a single MDCG document, but they emerge from the combination of MDR Annex XIV requirements and the practical reality of what reviewers accept.
1. Device-Level Traceability
The registry must allow identification of the specific device model used in each patient. Not the device category. Not the device family. The exact model and version.
This is where most general registries fail. A national hip replacement registry may record the procedure and the manufacturer. But if it does not record the specific catalog number and lot identifier, you cannot link outcomes to your device.
Without device-level traceability, the data is not specific. And MDR requires device-specific evidence.
2. Adverse Event Capture with Sufficient Granularity
The registry must capture adverse events in enough detail to allow comparison with your risk management file. This means not just recording that a complication occurred, but recording the nature, severity, causality assessment, and timing of the event.
Many registries record only major adverse events. Revisions. Deaths. Re-interventions. But they do not capture minor complications, device malfunctions, or near-misses.
If your risk analysis includes risks that the registry does not systematically capture, the registry cannot be used to demonstrate control of those risks.
Manufacturers reference registry data for residual risk acceptability without verifying that the registry captures the specific risks listed in their risk management file. The Notified Body flags this as a gap in clinical evidence.
3. Follow-Up Completeness and Duration
The registry must have structured follow-up protocols and documented rates of patient retention. If 30% of patients are lost to follow-up after one year, the data quality degrades.
This is especially critical for long-term implants. A registry may report excellent outcomes at one year. But if only 40% of patients have five-year follow-up, the long-term safety profile remains unclear.
MDR requires lifetime evidence for implantable devices. A registry with poor long-term retention cannot fulfill this requirement.
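The retention problem described above is easy to quantify before you cite the registry. Here is a minimal sketch; the timepoints, patient counts, and the 80% internal threshold are illustrative assumptions, not values prescribed by MDR or any MDCG document.

```python
# Illustrative sketch: quantify follow-up completeness per timepoint.
# All figures and the 80% threshold are assumptions for illustration.

def retention_rate(enrolled: int, followed_up: int) -> float:
    """Fraction of enrolled patients with a completed visit at a timepoint."""
    if enrolled <= 0:
        raise ValueError("enrolled must be positive")
    return followed_up / enrolled

# Hypothetical registry figures: strong 1-year retention,
# weak 5-year retention -- the pattern described above.
timepoints = {"1y": (12000, 11100), "5y": (12000, 4800)}

for label, (enrolled, followed) in timepoints.items():
    rate = retention_rate(enrolled, followed)
    flag = "OK" if rate >= 0.80 else "REVIEW"  # assumed internal threshold
    print(f"{label}: {rate:.1%} retained -> {flag}")
```

Running this kind of check against the registry's own published retention figures tells you early whether the long-term data can carry the weight your CER assigns to it.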
4. Independent Data Verification and Audit Trails
The registry must have a quality assurance process. This includes source data verification, monitoring for data completeness, and audit trails for corrections.
Many registries rely on voluntary reporting by clinicians. Data entry is inconsistent. Missing fields are common. No one verifies that the recorded outcome matches the patient chart.
When the Notified Body asks how the manufacturer verified the accuracy of the registry data, most manufacturers have no answer. They relied on the published paper. But the paper does not include the audit trail.
Without verification, the data is not reliable. And unreliable data is inadmissible.
5. Ethical and Regulatory Compliance for Data Use
The registry must have obtained informed consent or ethical approval for secondary use of the data. If the registry was designed for quality improvement, using the data for regulatory submission may require additional approvals.
This is a legal issue, not a clinical one. But it affects admissibility. If the manufacturer cannot demonstrate legal access to the data for regulatory purposes, the Notified Body will not accept it.
I have seen manufacturers reference registry data published in journals, then discover they do not have permission to use that data in their CER. Publication rights do not confer the right to use the data in a regulatory submission.
How to Assess a Registry Before You Cite It
Before you include registry data in your clinical evaluation, you need to conduct a structured assessment. This is not a quick literature review. It is a formal evaluation of whether the registry meets MDR evidence standards.
The assessment should include:
Step 1: Obtain the registry protocol. Not the publication. The actual study protocol that defines data collection methods, variables, follow-up schedule, and quality assurance procedures.
If the protocol is not available, the registry is not transparent. And transparency is a minimum requirement for admissible evidence.
Step 2: Map registry variables to your clinical evaluation plan. List the clinical endpoints in your CER. Check whether the registry captures each endpoint with sufficient detail.
If there are gaps, document them. Explain why the missing variables do not affect the conclusions. Or find supplementary data sources.
Step 3: Request data quality metrics. Ask the registry administrators for loss-to-follow-up rates, missing data percentages, and source verification procedures.
If they cannot provide this information, the registry lacks the infrastructure for regulatory-grade evidence.
Step 4: Verify patient population alignment. Compare the registry inclusion criteria to your intended use population. If the registry excludes high-risk patients or limits enrollment to specialized centers, the data may not reflect your real-world use.
Misalignment reduces the relevance of the evidence. And relevance is one of the three pillars of sufficient clinical evidence under MDR.
Step 5: Confirm data access rights. Ensure you have legal and ethical authorization to reference the data in your submission. Document this authorization in your CER.
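Step 2, the endpoint mapping, can be sketched as a simple set comparison between your CER endpoints and the registry's data dictionary. All endpoint and variable names below are hypothetical placeholders.

```python
# Minimal sketch of Step 2: map CER endpoints to registry variables
# and document the gaps. All names are hypothetical examples.

cer_endpoints = {
    "revision_rate",
    "device_malfunction",
    "infection",
    "pain_score_12m",
}

registry_variables = {
    "revision_rate",
    "infection",
    "mortality",
}

covered = sorted(cer_endpoints & registry_variables)
gaps = sorted(cer_endpoints - registry_variables)

print(f"Covered endpoints: {covered}")
print(f"Gaps to document or supplement: {gaps}")
```

The output of this comparison is exactly what the CER needs to state: which endpoints the registry covers, and which gaps must be justified or filled from other sources.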
Assessing registry admissibility is a formal step in clinical evaluation planning. It is not something you do after the CER is written. It must happen during evidence gap analysis so you know whether the registry can fill the gaps.
What Reviewers Actually Check
When a Notified Body reviews a CER that includes registry data, they do not just check whether the data exists. They assess whether the data is usable.
Here is what they look for:
Can the manufacturer verify the data? If the manufacturer has no access to the raw data or audit trail, the Notified Body cannot assess reliability. They will request verification or reject the evidence.
Does the registry capture device-specific outcomes? If the registry groups multiple devices together, the outcomes cannot be attributed to your device. The evidence is not specific.
Are follow-up rates acceptable? If more than 20% of patients are lost to follow-up, the Notified Body will question the validity of long-term conclusions. They may request supplementary data or a revised PMCF plan.
Does the registry align with the intended use? If your device is used in high-risk patients but the registry excluded high-risk patients, the evidence does not cover your population. This is a gap.
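The four reviewer questions above can double as a pre-submission screen. The sketch below encodes them as a checklist; the 20% loss-to-follow-up threshold comes from the text above, while the function structure and field names are illustrative assumptions.

```python
# Hedged sketch: the four reviewer questions as a pre-submission screen.
# The 20% threshold is taken from the discussion above; everything else
# is an illustrative assumption.

def screen_registry(has_raw_data_access: bool,
                    device_specific_outcomes: bool,
                    loss_to_follow_up: float,
                    population_matches_intended_use: bool) -> list:
    """Return likely Notified Body objections (empty list = none found)."""
    objections = []
    if not has_raw_data_access:
        objections.append("no access to raw data or audit trail")
    if not device_specific_outcomes:
        objections.append("outcomes not attributable to the specific device")
    if loss_to_follow_up > 0.20:
        objections.append("loss to follow-up above 20%")
    if not population_matches_intended_use:
        objections.append("registry population does not cover intended use")
    return objections

# Example: pooled device outcomes and 30% loss to follow-up.
print(screen_registry(True, False, 0.30, True))
```

If the list comes back non-empty, those are the exact points to address in the CER before the Notified Body raises them.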
These are not theoretical concerns. These are the questions that appear in Notified Body review reports. And if the CER does not address them, the submission stalls.
Manufacturers include registry data in the CER appendix without a structured appraisal of data quality. The Notified Body flags this as insufficient justification for weighting the evidence. The manufacturer must then conduct a retrospective appraisal, which delays the review.
When Registry Data Is Not Enough
Even high-quality registry data may not be sufficient on its own. MDR requires a body of evidence. That body must include data from multiple sources, analyzed together.
A registry may provide excellent long-term outcomes data. But it may not include bench testing results, usability validation, or biocompatibility evidence. Those must come from other sources.
Registry data is one component of the evidence base. It is not a substitute for the entire clinical evaluation.
The other limitation is that registries are retrospective. They capture what already happened. They do not address emerging risks or new indications.
If your device has a design change, the registry data applies to the previous version. You need additional evidence to demonstrate that the change does not introduce new risks.
If you are expanding the intended use to a new patient population, the registry may not include that population. You need targeted clinical data for the new indication.
This is why PMCF planning is critical. Registry data can support PMCF objectives, but only if the registry is actively collecting data on your current device version in your current target population.
If the registry is passive, or if your device represents a small subset of the registry, the data flow may be too slow to detect early signals. You need an active PMCF strategy alongside registry participation.
Practical Steps for Using Registry Data in Your CER
If you plan to use registry data, treat it as a formal data source. That means applying the same rigor you would apply to a clinical investigation.
Document the registry structure in your CER. Include the protocol, data dictionary, quality assurance procedures, and ethical approvals. Do not just cite the publication.
Appraise the data quality explicitly. Use a structured appraisal tool. Explain the strengths and limitations. Assign a weighting based on relevance and reliability.
Map registry outcomes to your risk-benefit analysis. Show how the registry data addresses specific risks and clinical claims. Do not assume the reviewer will make the connection.
Identify gaps. If the registry does not capture certain endpoints, say so. Explain how you will fill those gaps through other data sources or PMCF activities.
Maintain access to updates. If the registry is ongoing, commit to reviewing updated data during PMS. Include this in your PMCF plan.
This level of documentation takes time. But it is what separates admissible evidence from non-admissible evidence.
The Bottom Line
Registry data can be powerful evidence. But only if the registry was designed with sufficient structure, quality assurance, and device-level traceability.
Most registries were not designed for regulatory submissions. They were designed for quality improvement or academic research. That does not make them useless. But it does mean you cannot assume they meet MDR evidence standards.
Before you cite a registry in your CER, assess it. Formally. Document the assessment. And be prepared to explain to the Notified Body why the data is reliable, relevant, and sufficient.
If you cannot do that, the registry data is not admissible. And your clinical evaluation has a gap.
This is not about being difficult. It is about understanding that clinical evidence under MDR is not just about finding data. It is about demonstrating that the data is fit for regulatory purpose.
Registry data can be fit for purpose. But you have to prove it.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745 (MDR), Annex XIV Part A
– MDR Article 61: Clinical Evaluation
– MDCG 2020-6: Sufficient Clinical Evidence for Legacy Devices
– MDCG 2020-13: Clinical Evaluation Assessment Report Template