Beyond PubMed: The Clinical Data Most Manufacturers Overlook
I have reviewed clinical evaluation reports where the manufacturer spent months collecting published literature, only to realize during the Notified Body review that they had overlooked their most valuable clinical data sources. The data existed. It was accessible. But they never thought to include it because they assumed clinical evaluation meant literature review.
This misconception costs time, creates deficiencies, and weakens the entire clinical evidence package.
The MDR is explicit. Article 2(48) defines clinical data as information generated not only from clinical investigations of the device, but also from studies reported in scientific literature, from published clinical experience, and from post-market surveillance, in particular PMCF. Yet many manufacturers still treat published literature as the default, sometimes the only source they systematically collect.
The problem is not that literature is unimportant. The problem is that it is often insufficient, especially for devices with low publication rates, niche indications, or incremental design changes.
The MDR does not rank clinical data sources by prestige. It ranks them by relevance, quality, and applicability to your specific device.
When a Notified Body reviewer opens your clinical evaluation report, they are not counting how many PubMed articles you cited. They are assessing whether your clinical evidence covers the full scope of the intended purpose, the target population, and the claimed performance.
If your device is used in a specific clinical setting that is poorly represented in the published literature, the reviewer will notice. If your device has a unique feature that no published study addresses, the gap will be visible.
And this is where the other sources become essential.
What the Regulation Actually Says
MDR Article 61(1) requires that confirmation of conformity be based on clinical data providing sufficient clinical evidence. Annex XIV Part A reinforces this by requiring manufacturers to identify all available clinical data relevant to the device and its intended purpose, as well as any gaps in the clinical evidence.
MDCG 2020-6 clarifies this further. Its hierarchy of clinical evidence covers not only clinical investigations and scientific literature but also registry data, complaints and vigilance data, proactive post-market surveillance, and other documented clinical experience.
The regulation does not say literature first, others if needed. It says sufficient clinical evidence. The burden is on the manufacturer to demonstrate sufficiency.
In practice, this means you must identify what data exists, assess its relevance and quality, and integrate it into your evaluation. If published literature does not cover a key aspect of your device, you need other sources.
But here is what happens. Many manufacturers follow a routine. They run the literature search. They appraise the articles. They write the report. They submit.
Then the Notified Body asks: what about the data from your predicate device? What about the PMCF data from the previous version? What about the complaints and vigilance reports you have been collecting for three years?
Manufacturers conduct exhaustive literature reviews but fail to include their own post-market data, clinical experience records, or registry data, even when these sources are more relevant than published studies.
The Sources You Already Have
Let me walk through the sources that are often overlooked.
Post-Market Surveillance Data
If your device has been on the market, you have been collecting PMS data. Complaints, adverse events, feedback from users, performance metrics. This is clinical data.
It reflects real-world use. It includes your actual patient population. It covers your actual device, not a similar one described in a journal article from five years ago.
Yet many clinical evaluation reports treat PMS data as a separate annex, something to reference but not to integrate. The Notified Body will ask why this data is not part of the clinical evidence base.
The answer should not be because we did not think it belonged there.
Clinical Experience and Registries
If your device is used in clinical practice, there is experience. If it is used in specific centers, there may be internal audits, case series, or outcome tracking.
If your device is part of a registry, national or institutional, that data exists. It may not be published. It may not have a DOI. But it is clinical data.
MDCG 2020-6 explicitly lists registries and clinical experience as valid sources. The challenge is access and documentation.
When I review a CER and see no mention of registries, I ask: did you check? Did you contact the relevant clinical societies? Did you reach out to key centers?
Often the answer is no. The assumption was that if it is not in PubMed, it does not exist.
Data from Equivalent or Similar Devices
If you are relying on equivalence, the clinical data from the equivalent device is part of your evidence base. This includes the manufacturer’s own data, published or unpublished, if accessible.
But here is where it becomes complicated. If the equivalent device is from another manufacturer, you may not have direct access to their PMS data, their internal studies, their vigilance reports.
You are limited to what is publicly available. And if that data is sparse, your equivalence claim weakens. For implantable and class III devices, Article 61(5) goes further: claiming equivalence to another manufacturer's device requires a contract granting full, ongoing access to their technical documentation.
This is why equivalence-based strategies require careful planning. You must know what data exists before you commit to the equivalence pathway.
Unpublished Studies and Technical Reports
Your company may have conducted usability studies, biocompatibility testing, bench testing with clinical endpoints, or feasibility studies that were never published.
These are clinical data sources if they generate evidence about safety or performance in a clinical context.
I have seen manufacturers exclude their own usability study from the clinical evaluation because it was not peer-reviewed. But usability data is clinical data. If it demonstrates safe and effective use by the intended users, it belongs in the evaluation.
The key is documentation. The study must be well-designed, conducted under defined conditions, and reported with sufficient detail to allow appraisal.
Unpublished does not mean invalid. The criterion is quality and relevance, not publication status.
How Notified Bodies Assess These Sources
The Notified Body reviewer does not apply different standards to different sources. They apply the same questions: Is this data relevant? Is it reliable? Does it support the claims?
For published literature, relevance and reliability are assessed through standard appraisal criteria. For other sources, the same logic applies, but the documentation requirements differ.
If you include PMS data, the reviewer will want to see how that data was collected, how complaints were categorized, how adverse events were investigated. They will assess whether the data is representative and whether the analysis is sound.
If you include registry data, they will want to know the registry’s methodology, the patient population, the follow-up duration, the completeness of reporting.
If you include unpublished studies, they will assess the study design, the controls, the endpoints, the statistical methods.
The burden is on you to present these sources in a way that allows appraisal. This means structured reporting, clear methodology, and transparent limitations.
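One way to make that appraisal possible is to force every source, published or not, through the same structured record before it enters the evaluation. Here is a minimal sketch in Python, purely illustrative; the field names and the example study are my assumptions, not a regulatory template:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceSource:
    """One clinical data source, captured with the fields a reviewer
    needs to appraise it, regardless of publication status."""
    name: str                    # e.g. "2022 PMS complaint analysis"
    source_type: str             # literature / pms / registry / unpublished_study
    collection_method: str       # how the data was generated and gathered
    population: str              # who the data covers, and how representative
    endpoints: str               # what was measured, over what follow-up
    limitations: list[str] = field(default_factory=list)

    def is_appraisable(self) -> bool:
        # A source missing its methodology or population cannot be
        # appraised, so it should not silently enter the evidence base.
        return bool(self.collection_method and self.population)

# Hypothetical example: an internal usability study, never published,
# still fully appraisable because the record is complete.
usability = EvidenceSource(
    name="Summative usability study, device v2.1",
    source_type="unpublished_study",
    collection_method="Simulated-use study, 15 intended users, IEC 62366-1 protocol",
    population="Intended users: ICU nurses, representative of the EU market",
    endpoints="Use errors on critical tasks; task completion rates",
    limitations=["Simulated environment, not clinical use"],
)
assert usability.is_appraisable()
```

The point is not the code; it is that a reviewer can apply the same appraisal questions to this record as to any journal article.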
The Problem of Access and Documentation
One of the reasons manufacturers avoid these sources is access. Published literature is easy to find. Registry data is not. Clinical experience is even harder to document systematically.
But difficulty is not a justification for omission. If the data exists and is relevant, the regulatory expectation is that you pursued it.
I have worked with manufacturers who contacted clinical societies, who requested registry data, who reached out to key opinion leaders. Not all requests succeeded. But the documentation of the attempt matters.
When the Notified Body asks why you did not include a specific registry, the answer should be we contacted them, requested access, and were denied. Not we did not know it existed.
A recurring deficiency: manufacturers state that no additional data was available without documenting any search for registries, clinical experience, or post-market sources. Notified Bodies expect evidence of due diligence.
Integrating Multiple Sources
The clinical evaluation is not a collection of separate data sets. It is an integrated analysis.
This means you do not present literature in one section, PMS data in another, and registry data in a third. You analyze them together, weighting their contributions based on relevance and quality.
For example, if your literature search yields studies on similar devices but not your exact model, and your PMS data shows real-world performance with your device, the PMS data may be more relevant for certain claims.
If a registry includes a subset of patients using your device, that data may fill a gap in the published evidence regarding long-term outcomes.
The integration is what the Notified Body is looking for. They want to see that you assessed all available evidence, not just the easiest to access.
Weighting and Transparency
Different sources carry different weights. A well-designed RCT published in a high-impact journal carries more weight than a single-center case series. But an RCT on a different device may be less relevant than your own PMCF data on your device.
The clinical evaluation must explain how you weighted the evidence. This requires transparency about limitations, gaps, and assumptions.
If you excluded a data source, explain why. If you included a source with known limitations, acknowledge them and explain how you accounted for them in your conclusions.
This transparency builds trust. It shows that you understand the evidence base, not just that you collected references.
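The MDR prescribes no scoring formula, so treat the following as a hypothetical illustration of the transparency point, not a method. The scale, the criteria names, and the example sources are all invented; the value lies in recording, for every source, the weight you gave it and why, including the sources you excluded:

```python
from dataclasses import dataclass

@dataclass
class WeightingDecision:
    source: str
    relevance: int   # 1 (different device/population) .. 3 (your device, your population)
    quality: int     # 1 (poorly documented) .. 3 (well designed and reported)
    included: bool
    rationale: str   # the sentence the reviewer will actually read

decisions = [
    WeightingDecision("RCT on competitor device", relevance=1, quality=3, included=True,
                      rationale="High quality but different device; supports state of the art only"),
    WeightingDecision("Own PMCF survey 2023", relevance=3, quality=2, included=True,
                      rationale="Covers our device and population; response rate is a known limitation"),
    WeightingDecision("National registry X", relevance=3, quality=3, included=False,
                      rationale="Access requested; denied by registry holder (correspondence on file)"),
]

# The audit trail the Notified Body wants: every decision, with its reason.
for d in decisions:
    status = "included" if d.included else "excluded"
    print(f"{d.source}: {status} (relevance {d.relevance}/3, quality {d.quality}/3) - {d.rationale}")
```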
Practical Implications
For manufacturers preparing a clinical evaluation, this means expanding the data collection process beyond the literature search.
Start by mapping all potential sources. What PMS data do you have? What registries are relevant? What clinical experience has been documented?
Then assess access. What can you obtain? What requires requests or agreements? What is publicly available?
Document your search. If you request data and do not receive it, document the request and the response. If a registry does not cover your device, document that finding.
This creates a defensible evidence base. You may not have every source, but you can demonstrate that you looked.
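One lightweight way to keep that mapping defensible is a simple inventory maintained alongside the CER. A sketch under my own assumptions about which fields are worth tracking; nothing here is a prescribed format:

```python
from dataclasses import dataclass
from enum import Enum

class Access(Enum):
    PUBLIC = "publicly available"
    HELD = "already held in-house"
    REQUESTED = "request sent, awaiting response"
    DENIED = "request denied (keep the correspondence)"
    NOT_APPLICABLE = "source does not cover this device"

@dataclass
class DataSource:
    name: str
    kind: str                # pms / registry / clinical_experience / literature / unpublished
    access: Access
    evidence_of_search: str  # where the attempt is documented

# Hypothetical entries showing the three states that matter:
# obtained, pending, and a documented dead end.
inventory = [
    DataSource("Complaint and vigilance database", "pms", Access.HELD,
               "PMS report 2021-2024, section 4"),
    DataSource("National implant registry", "registry", Access.REQUESTED,
               "Email to registry office, archived in CER annex"),
    DataSource("Society outcome audit", "clinical_experience", Access.DENIED,
               "Written refusal archived with the CER"),
]

for s in inventory:
    print(f"{s.name} [{s.kind}]: {s.access.value} - {s.evidence_of_search}")
```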
For clinical evaluation report writers, this means structuring the report to accommodate multiple sources. The appraisal section must include criteria for non-published data. The analysis section must integrate sources rather than list them.
For regulatory and clinical affairs teams, this means earlier planning. You cannot wait until the literature review is complete to realize that the literature is insufficient. You need to identify the gaps early and plan how to fill them.
The clinical evaluation is not a retrospective exercise. It is a continuous process of evidence collection, appraisal, and integration. Plan your sources early.
What Comes Next
The regulatory environment is moving toward more rigorous clinical evidence requirements. PMCF is becoming a central pillar of ongoing clinical evaluation. Equivalence claims are under tighter scrutiny.
Manufacturers who rely solely on published literature will face increasing challenges. The expectation is that you use all available data, that you pursue relevant sources, and that you integrate evidence into a coherent clinical evaluation.
This is not a trend. This is the MDR framework as written.
If your current approach is literature-first, consider expanding it. Look at your PMS data. Check for relevant registries. Document clinical experience. These sources are not supplementary. They are part of the core evidence base.
The next time you plan a clinical evaluation, ask not just what literature is available, but what data exists. The answer will shape the strength of your submission.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at the intervals defined in the PMS plan. For class III and implantable devices, the PMCF evaluation report, and with it the CER, must be updated at least annually.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– Regulation (EU) 2017/745 (MDR), Article 2(48), Article 61, and Annex XIV
– MDCG 2020-5: Clinical Evaluation – Equivalence. A Guide for Manufacturers and Notified Bodies
– MDCG 2020-6: Clinical Evidence Needed for Legacy Devices. A Guide for Manufacturers and Notified Bodies