Your Literature Search Looks Thorough. But Is It Defensible?
I see the same pattern in almost every clinical evaluation review. The literature search section contains hundreds of references, detailed flowcharts, and multiple database queries. It looks impressive. Then a single question from the Notified Body reveals that the entire foundation is unstable. The search strategy cannot be reproduced, the inclusion logic shifts between paragraphs, and critical data was excluded without documentation. The manufacturer now faces months of rework.
Literature review is not about volume. It is about method. Under MDR requirements, your literature search must be systematic, reproducible, and scientifically sound. This is stated directly in MDCG 2020-6 and reinforced in Article 61 of MDR 2017/745.
But what does systematic actually mean when you are under pressure to submit, when databases return thousands of hits, and when your device sits at the edge of multiple clinical specialties?
Let me walk you through what I observe when literature reviews fail, and what a defensible methodology actually requires.
Why Most Literature Searches Fail Regulatory Review
The failure is rarely obvious at first glance. The CER includes search strings, database names, and date ranges. It looks complete.
Then the Notified Body asks how you handled duplicate references across databases. Or why a highly cited study was excluded. Or how the search strategy connects to your specific intended use.
The manufacturer cannot answer. Because the search was not designed to be scrutinized. It was designed to find enough references to fill the report.
This is the fundamental error. A literature search under MDR is not a research activity that ends when you find sufficient data. It is an audit trail that must withstand external challenge.
Your literature search is not judged by what you found. It is judged by whether someone else, following your documented method, would find the same results. That is what reproducibility means in this context.
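That standard can be made concrete. Decisions like duplicate handling across databases (one of the Notified Body questions above) should be rule-based, not ad hoc. Below is a minimal Python sketch of rule-based de-duplication; the record fields (`doi`, `title`, `source`) are assumptions for illustration, since real database exports (RIS, CSV) would need parsing first.

```python
# Minimal sketch of rule-based de-duplication across database exports.
# Record fields (doi, title, source) are hypothetical; real exports
# (e.g. RIS or CSV files) need parsing before this step.

def normalize_title(title: str) -> str:
    """Lowercase and strip punctuation so near-identical titles match."""
    return "".join(
        ch for ch in title.lower() if ch.isalnum() or ch.isspace()
    ).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first occurrence; key on DOI, fall back to normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"doi": "10.1000/xyz1", "title": "Stent Safety Study", "source": "PubMed"},
    {"doi": "10.1000/xyz1", "title": "Stent safety study.", "source": "Embase"},
    {"doi": None, "title": "Stent Safety Study", "source": "Cochrane"},
]
# The Cochrane record survives because its key (title) differs from the
# DOI key of the PubMed record -- a real pipeline would cross-check both.
print(len(deduplicate(records)))  # 2
```

The point is not this particular keying scheme. The point is that whatever scheme you use, it is written down, and a second reviewer applying it would remove exactly the same records.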
The Three Pillars of a Defensible Literature Search
Every compliant literature review rests on three foundations. Miss one, and the entire structure becomes questionable.
Protocol Definition Before Execution
The search protocol must be written before you run the first query. Not after. Not during. Before.
This means documenting your search question, your databases, your time frame, your language restrictions, and your planned approach to study selection. All of this goes into a protocol document that becomes part of your technical documentation.
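One way to keep the protocol honestly prospective is to capture those elements as structured data, dated before the first query runs. The sketch below uses entirely hypothetical values; it is an illustration of the idea, not a template mandated by MDR or MDCG guidance.

```python
from datetime import date

# Sketch of a pre-specified search protocol captured as structured data.
# Every value below is a hypothetical placeholder for illustration.
search_protocol = {
    "protocol_date": date(2024, 1, 15),  # must predate the first query
    "search_question": (
        "Safety and performance of device X in adults with "
        "condition Y, per the intended use in the IFU"
    ),
    "databases": ["PubMed/MEDLINE", "Embase", "Cochrane Library"],
    "time_frame": {"from": "2004-01-01", "to": "2024-01-15"},
    "languages": ["English", "German"],
    "study_selection": (
        "Two independent reviewers screen title/abstract, then full "
        "text; disagreements resolved by a third reviewer."
    ),
}

for field, value in search_protocol.items():
    print(f"{field}: {value}")
```

A file like this, version-controlled with its date, is hard to fake retrospectively. Past-tense justifications in a prose protocol are not.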
I have reviewed submissions where the protocol was clearly written after the search was complete. The language is past tense. The justifications reference studies that were found, not criteria that existed beforehand. Reviewers notice this immediately.
Why does it matter? Because a retrospective protocol allows you to shape the method around convenient results. That is not science. That is rationalization.
Systematic Database Selection and Justification
You must explain which databases you searched and why those databases are appropriate for your device and clinical context.
For most medical devices, this means at minimum: PubMed/MEDLINE, Embase, and Cochrane Library. Depending on your specialty, you may need additional sources like IEEE Xplore for engineering literature, specific national registries, or manufacturer databases like MAUDE.
But the critical part is justification. Why did you choose these databases? What clinical or technical domains do they cover? If you excluded a major database, what is your reasoning?
I see many CERs that simply list databases without context. This raises immediate questions. Did you understand what each database contains? Or did you follow a template without thinking?
A common mistake: using only PubMed for devices with significant engineering components or biomaterial interactions. PubMed indexes clinical journals well but misses critical engineering and materials science literature. Your search scope must match your device characteristics, not just your comfort zone.
Transparent Inclusion and Exclusion Criteria
Every study you review must pass through a documented filter. That filter is your inclusion and exclusion criteria, and it must be defined in advance.
These criteria should address: study design, patient population, intervention or device characteristics, outcomes measured, publication type, and language.
The criteria must be specific enough to guide consistent decisions, but flexible enough to capture relevant evidence. This is harder than it sounds.
For example, if your device is used across multiple age groups, do you exclude pediatric studies? Do you exclude animal studies entirely, or do you consider them for specific safety endpoints?
These decisions must be made before you start screening references. And they must be documented with rationale.
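Pre-specified criteria can also be expressed as an explicit, repeatable filter, so that every exclusion carries a documented reason. The sketch below is one possible shape; the field names, study designs, and the animal-study rule are hypothetical examples, not required criteria.

```python
# Sketch of inclusion/exclusion criteria applied as an explicit filter.
# Field names, design lists, and the animal-study rule are hypothetical.

INCLUDED_DESIGNS = {"RCT", "cohort", "case series"}
EXCLUDED_PUBLICATION_TYPES = {"editorial", "letter", "conference abstract"}

def screen(record: dict) -> tuple[bool, str]:
    """Return (include?, reason) so every decision is documented."""
    if record.get("publication_type") in EXCLUDED_PUBLICATION_TYPES:
        return False, "excluded publication type"
    if record.get("design") not in INCLUDED_DESIGNS:
        return False, f"study design '{record.get('design')}' not pre-specified"
    if record.get("population") == "animal" and record.get("endpoint") != "safety":
        # Example rule: animal data considered only for safety endpoints
        return False, "animal study outside pre-specified safety endpoints"
    return True, "meets inclusion criteria"

decision, reason = screen({
    "design": "RCT",
    "publication_type": "journal article",
    "population": "human",
    "endpoint": "performance",
})
print(decision, reason)  # True meets inclusion criteria
```

Whether or not you automate screening, this is the level of precision your written criteria should support: a yes/no decision with a traceable reason for every reference.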
Building the Search Strategy: From Clinical Question to Search String
The search strategy is where method becomes practice. This is where most errors happen, because this is where you translate clinical concepts into database language.
Start With Your Clinical Question
Your search must be guided by a clear clinical question derived from your device’s intended use and the scope of your clinical evaluation.
This is not a general question like "Is the device safe?" It is a focused question derived from the intended purpose: which patients, which intervention or device, compared against what, measured by which outcomes, over what time frame.
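Once the clinical question is fixed, translating it into database syntax can be made mechanical: synonyms are OR-combined within each concept, and the concepts are AND-combined across. The device, population, and outcome terms below are hypothetical, and real strings must follow each database's own syntax (a PubMed-style string is shown here).

```python
# Sketch: translating clinical concepts into a boolean search string.
# Terms are hypothetical; each database has its own field tags and
# truncation syntax, so strings must be adapted per database.

concepts = {
    "device": ['"coronary stent"', "stent*"],             # intervention
    "population": ['"coronary artery disease"', "CAD"],   # patient group
    "outcomes": ["restenosis", '"adverse event*"', "safety"],
}

def build_query(concepts: dict) -> str:
    """OR synonyms within a concept, AND across concepts."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
    return " AND ".join(blocks)

print(build_query(concepts))
```

Keeping the concept table separate from the generated string also gives you the documentation a reviewer will ask for: which clinical concept each term belongs to, and why it is there.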
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.