The Literature Review That Took Three Attempts
The literature search protocol looked solid on paper. Comprehensive databases, clear inclusion criteria, systematic approach. The first submission came back with a major deficiency. The revised version generated another round of questions. By the third attempt, we finally understood what the Notified Body was actually looking for.
I see this pattern repeat across different clients, different devices, different reviewers. The literature review gets rejected not because teams lack diligence, but because they misunderstand what makes a search strategy defensible under MDR requirements.
The technical execution of the search is often fine. The problem lies in how the strategy is justified, documented, and connected to the actual clinical evaluation objectives.
What the First Submission Looked Like
The team had conducted a comprehensive search across MEDLINE, Embase, and Cochrane. They used relevant MeSH terms. They documented their search strings. They screened hundreds of abstracts and extracted dozens of studies.
The literature review section ran to forty pages. It included detailed summaries of selected publications. Everything seemed thorough.
The Notified Body issued a major deficiency within two weeks.
The core issue was not about what was found. It was about how the search boundaries were defined and why certain limitations were considered acceptable.
The Deficiency Statement
The reviewer noted that the search strategy did not adequately justify the exclusion of certain study types. The clinical evaluation relied heavily on case series and registry data, but the literature search had not systematically sought randomized controlled trials.
More critically, the justification for date limits was insufficient. The search covered the last ten years, but no explanation addressed whether relevant safety signals or performance data existed in earlier literature.
The team had assumed that recent data was inherently more relevant. The reviewer did not accept this assumption.
A common mistake is assuming that time limits or study type restrictions are self-evident. Every boundary in your search strategy must be explicitly justified in relation to your device’s risk profile and intended purpose.
Why the Second Attempt Failed
The team responded by expanding the search. They removed the ten-year limit and added more databases. They included conference abstracts and regulatory databases.
This generated significantly more results. The literature review section grew to seventy pages.
The Notified Body came back with another deficiency. This time the issue was different but related.
The expanded search had identified several older studies reporting complications with similar devices. These studies were mentioned in the results summary but not adequately addressed in the appraisal section.
The reviewer asked: Why were these safety signals not integrated into the risk assessment? How did the manufacturer determine that these historical issues were not applicable to the current device?
The team had collected more data but had not connected it to the clinical evaluation’s reasoning structure.
The Real Problem Emerges
The issue was not about search completeness. It was about search intent and how the results feed into the clinical evaluation’s core questions.
Under MDR Article 61 and Annex XIV, the clinical evaluation must demonstrate sufficient clinical evidence. The literature review is not an academic exercise. It is a regulatory tool for establishing what is known about your device and similar devices.
Every search decision must be traceable to a clinical evaluation objective. Every limitation must be justified by the evaluation’s scope and the device’s risk characteristics.
If you restrict your search to recent publications, you must explain why older safety data is not relevant. If you exclude certain study designs, you must justify why the available evidence levels are sufficient for your device class and intended claims.
The literature search protocol must be designed backwards from your clinical evaluation questions, not forwards from database availability. Start with what you need to demonstrate, then build the search strategy that can defensibly answer those questions.
What Changed in the Third Attempt
Before revising the search protocol again, we restructured how the literature review connected to the clinical evaluation report.
We started by listing the specific clinical questions that required literature evidence. For this particular device, the key questions included:
– What is the current standard of care for the target condition?
– What clinical performance levels are achieved by equivalent devices?
– What are the known safety concerns with this device category?
– Are there specific patient populations where risks differ?
Each question required different types of evidence and different search strategies.
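The one-protocol-per-question structure can be sketched as a simple data model. Everything below is illustrative: the question texts mirror the list above, but the databases, date limits, and evidence types are assumptions, not the actual submission's protocol.

```python
from dataclasses import dataclass

@dataclass
class SearchStrategy:
    """One search protocol, scoped to a single clinical question."""
    question: str
    databases: tuple
    date_limit: str        # "none" means no restriction (rationale documented elsewhere)
    evidence_sought: tuple

strategies = [
    SearchStrategy(
        question="What is the current standard of care for the target condition?",
        databases=("MEDLINE", "Embase", "Cochrane"),
        date_limit="last 10 years",
        evidence_sought=("clinical practice guidelines", "systematic reviews"),
    ),
    SearchStrategy(
        question="What are the known safety concerns with this device category?",
        databases=("MEDLINE", "Embase"),
        date_limit="none",  # safety signals may predate recent literature
        evidence_sought=("RCTs", "case series", "registry data"),
    ),
]

# Each question gets its own strategy; no single search covers them all.
assert len({s.question for s in strategies}) == len(strategies)
```

The point of the structure is that boundaries differ per question: the safety search carries no date limit precisely because older complications matter, while the standard-of-care search can legitimately be narrower.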
Separating State of the Art from Equivalence Literature
One critical change was recognizing that the state of the art review and the device-specific literature review serve different purposes.
The state of the art review, a concept elaborated in MDCG 2020-6, requires understanding current medical knowledge and available treatment options. This search must be broad enough to capture clinical practice guidelines, systematic reviews, and key clinical studies that define standards of care.
The device-specific literature review focuses on your device and equivalent devices. This search must be precise enough to identify all relevant safety and performance data but justified in how you define equivalence and similarity.
In the first two attempts, these were mixed together. The search strategy tried to do both simultaneously, which created ambiguity about what was being sought and why.
In the third attempt, we separated them. Each had distinct search protocols with distinct justifications.
Documenting the Decision Trail
The other major change was documentation structure.
Rather than simply presenting search strings and results, we documented the reasoning behind each strategic choice:
– Why these databases were selected and why others were excluded
– Why these date limits were appropriate for each clinical question
– Why certain publication types were prioritized or excluded
– How language restrictions were justified given the device’s market history
– Why certain search terms were included and what clinical concepts they represented
This was not about adding volume. It was about making the logic visible.
When a reviewer sees that you excluded publications before 2010, they need to see your reasoning. If earlier publications contain relevant safety data, your exclusion is indefensible. If earlier publications predate significant technological changes that make them irrelevant, document that rationale.
Documentation is not about describing what you did. It is about demonstrating why your approach was sufficient to identify the evidence needed for your specific clinical evaluation objectives.
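One way to keep the logic visible is to record the decision trail as structured entries, each pairing a search boundary with its rationale. The specific decisions and rationales below are illustrative assumptions, not the actual submission's content.

```python
# Each search boundary travels with its documented justification.
decision_trail = [
    {"decision": "No date limit applied to safety-signal searches",
     "rationale": "Older literature may contain relevant safety data"},
    {"decision": "Search restricted to English and German publications",
     "rationale": "Device marketed only in English- and German-speaking regions"},
    {"decision": "Animal studies excluded from performance searches",
     "rationale": "Clinical evidence required to support performance claims"},
]

def unjustified(trail):
    """Return any decision that lacks a documented rationale."""
    return [e["decision"] for e in trail if not e.get("rationale", "").strip()]

# A completeness check before submission: every boundary must be justified.
assert unjustified(decision_trail) == []
```

A check like this is trivial, but it encodes the reviewer's actual test: not "what did you search" but "can every boundary be defended on the record".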
The Handling of Negative or Inconclusive Results
One aspect that nearly derailed the third submission involved how we handled studies with negative findings.
The expanded search had identified several papers reporting complications or suboptimal outcomes with devices in the same category. In the second attempt, these were summarized but not deeply analyzed.
The reviewer’s question was direct: Do these findings represent residual risks that your device shares, or has your design addressed these issues?
This is where the literature review must connect to risk management.
Every negative finding or safety signal in the literature must be explicitly addressed. Either the risk applies to your device and must be managed, or you have evidence that your device differs in ways that eliminate or reduce that risk.
In the third attempt, we created a specific section mapping literature-identified risks to the risk management file. Each safety signal from the literature was traced to either:
– An existing risk assessment entry with documented mitigation
– A design difference that eliminates the hazardous situation
– A clinical difference in intended use or patient population
This satisfied the reviewer because it demonstrated that literature findings informed device development and risk control, not just documentation.
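The mapping section can be sketched as a traceability table: every literature-identified signal carries exactly one of the three dispositions above plus a traceable reference. All signal names, record IDs, and dispositions below are hypothetical examples.

```python
# The three acceptable dispositions for a literature-identified safety signal.
DISPOSITIONS = {"risk_file_entry", "design_difference", "clinical_difference"}

literature_signals = {
    "device migration reported in older case series": {
        "disposition": "risk_file_entry",
        "reference": "RMF-042",  # hypothetical risk management file entry
    },
    "coating degradation in first-generation devices": {
        "disposition": "design_difference",
        "reference": "DCR-017",  # hypothetical design change record
    },
    "higher complication rate in a specific patient population": {
        "disposition": "clinical_difference",
        "reference": "IFU restriction on the affected population",
    },
}

def untraced(signals):
    """Return signals lacking a valid disposition or a traceable reference."""
    return [name for name, entry in signals.items()
            if entry.get("disposition") not in DISPOSITIONS
            or not entry.get("reference", "").strip()]

# No signal may be left without a documented disposition.
assert untraced(literature_signals) == []
```

The gap list is what a reviewer is effectively compiling by hand: any signal that maps to none of the three dispositions is an open question in the submission.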
What This Means for Reviewers’ Expectations
Notified Bodies are increasingly focused on whether manufacturers actually use literature to inform decisions or simply collect it for compliance.
The question behind their questions is: Did this literature search contribute to device safety, or was it a post-hoc documentation exercise?
If your literature review identifies safety concerns but your risk file was finalized before the review, the timing creates doubt.
If your review finds performance benchmarks but your IFU claims are not calibrated to those benchmarks, the disconnect raises questions.
The literature review must be part of the device development and evaluation process, not a separate document prepared for submission.
A common red flag is a literature review conducted after the clinical evaluation conclusions were already drafted. Reviewers can identify this from internal inconsistencies, missing connections between literature findings and risk assessments, or claims that are not benchmarked against published data.
Lessons That Apply Beyond This Case
Three attempts taught us what should have been clear from MDCG 2020-13: the literature review is not a standalone document.
It is a component of clinical evidence generation that must be designed to answer specific questions relevant to your device’s safety and performance demonstration.
Several principles emerged:
Design search strategies question by question. Do not create one comprehensive search and hope it covers everything. Different clinical questions require different search boundaries and different types of publications.
Justify every limitation explicitly. Date ranges, language restrictions, study type filters, database selections—all must have documented rationales that connect to your device’s characteristics and evaluation needs.
Separate state of the art from device-specific searches. These serve different regulatory purposes and require different levels of breadth and precision.
Map literature findings to risk management. Every safety signal must be traceable to either a risk control measure or a justified reason why it does not apply to your device.
Document the decision trail, not just the results. Reviewers need to see why your approach was sufficient, not just what you found.
The Role of Clinical Evaluation Planning
What became clear through this process is that literature search strategies should be defined during clinical evaluation planning, not during report writing.
The Clinical Evaluation Plan should specify:
– What clinical questions require literature evidence
– What types of studies would constitute sufficient evidence for each question
– What search boundaries are justified given the device’s risk class and claims
– How literature findings will be integrated into risk management and design validation
When the search protocol is derived from the CEP, the connection between search strategy and evaluation objectives becomes explicit.
When the search is conducted ad hoc during report preparation, these connections must be reverse-engineered, which creates the documentation gaps that lead to deficiencies.
What Acceptance Finally Required
The third attempt succeeded not because we found different publications, but because we demonstrated that our search strategy was sufficient to identify the evidence needed for our specific clinical evaluation.
We showed why our boundaries were appropriate for the device class and intended use.
We demonstrated that literature findings informed risk management and design decisions.
We connected state of the art understanding to clinical evaluation conclusions.
The reviewer accepted the submission because the literature review was integrated into the clinical evaluation’s logic, not appended to it.
This is the standard that MDR Annex XIV Part A requires and that MDCG 2020-13 clarifies. Literature searches must be systematic, but more importantly, they must be sufficient and appropriately scoped for the evaluation’s objectives.
Sufficiency is always context-dependent. It depends on your device’s risks, your clinical claims, and what evidence already exists. There is no universal search protocol that works for every device.
This is why template-based literature reviews generate deficiencies. They optimize for comprehensiveness rather than relevance and justification.
A defensible literature search is not the most comprehensive search. It is the search whose scope, limitations, and execution can be justified in relation to your specific device’s clinical evaluation needs.
Final Reflection
Three attempts cost time, delayed certification, and created pressure across multiple departments. But the process clarified something fundamental about MDR clinical evaluation work.
The regulatory documents are not asking you to prove you searched everywhere. They are asking you to demonstrate that you searched in ways that would identify evidence relevant to your device’s safety and performance.
This is a different standard. It requires clinical judgment about what evidence matters and regulatory judgment about what limitations are defensible.
If I were conducting this literature review today, I would start by asking: What could exist in the literature that would change my clinical evaluation conclusions? Then I would design a search strategy capable of finding that evidence.
That is the question reviewers are asking when they assess your search protocol. Not whether you followed best practices, but whether your approach was sufficient to support the clinical evaluation you are submitting.
The literature review is not a document. It is a process for ensuring that your clinical evaluation is informed by what is known, not just by what you tested.
Get the process right, and the documentation follows.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under Regulation (EU) 2017/745 (MDR) that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on post-market clinical follow-up (PMCF) findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and otherwise at intervals proportionate to risk class: at least annually for class III and implantable devices, and typically every two to five years for lower-risk devices, as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745 (MDR), Annex XIV Part A
– MDCG 2020-5 Rev.1: Clinical evaluation – Equivalence
– MDCG 2020-13: Clinical evaluation assessment report template
Related Resources
– State of the Art (SOTA) Analysis under EU MDR
– Complete Guide to Clinical Evaluation under EU MDR