What reviewers actually check in your literature search protocol
I have seen literature search strategies rejected even when they returned hundreds of relevant studies. The issue was never the number of results. It was that the strategy could not be defended as systematic, transparent, and reproducible. Notified Bodies and competent authorities do not just count your references. They verify whether your search was designed to capture all relevant data.
The literature search is the foundation of every clinical evaluation. Yet it remains one of the most frequently cited deficiencies in MDR submissions. The reason is simple: manufacturers often treat the search as a task to complete, not as a protocol to defend.
Under MDR Article 61 and Annex XIV Part A, the clinical evaluation must be based on a comprehensive review of clinical data. That comprehensiveness is proven through the literature search strategy. The documentation must show that the search was designed to identify all relevant clinical evidence, not just the studies that support a favorable conclusion.
What reviewers verify is not whether you found studies. They verify whether your method was capable of finding all studies that should have been considered.
The search protocol as a legal document
A literature search strategy is not a narrative description. It is a reproducible protocol that another evaluator could execute and arrive at the same set of results.
This is where most documentation fails. Manufacturers describe what they did in general terms, but they do not document it with the precision required for verification.
The CER states: “A literature search was performed in PubMed and Embase using relevant keywords related to the device and its clinical application.” No search strings. No filters. No date ranges. No explanation of how keywords were selected. This is not a protocol. It is a claim.
Reviewers need to see the exact search strings used in each database. They need to see how those strings were constructed. They need to understand why certain terms were included and others excluded.
Without this, the search cannot be reproduced. And if it cannot be reproduced, it cannot be verified.
Database selection and justification
The choice of databases is not arbitrary. It must be justified based on the type of device, the clinical field, and the expected location of relevant evidence.
PubMed and Embase are the standard starting points, and the Cochrane Library is a common addition. But for certain devices, other databases are necessary. If your device is used mainly in a specific region, regional databases may be required. If it relates to rehabilitation or quality of life, sources like CINAHL or PsycINFO may be relevant.
What reviewers check is whether the manufacturer considered the scope of available evidence and selected databases accordingly.
Justification is not optional. The CER should explain why each database was included and why others were excluded. If you searched only PubMed, explain why that was sufficient. If you added specialized databases, explain what additional evidence they were expected to capture.
The same applies to grey literature. Clinical trial registries, regulatory databases, and manufacturer repositories are part of the evidence base. Reviewers verify whether these sources were considered and, if not, why they were deemed unnecessary.
Search string construction and logic
The search string is the technical core of the strategy. It determines what is captured and what is missed.
Reviewers verify whether the search string was built logically. They check the combination of Boolean operators, the use of truncation and wildcards, and the inclusion of MeSH terms or Emtree descriptors where applicable.
A common deficiency is the use of overly narrow search strings that exclude relevant variations. For example, searching only for the proprietary device name without including generic terms, synonyms, or related procedures.
A CER for a vascular stent searches only for the device brand name. No terms for “stent,” “endovascular,” “angioplasty,” or the specific indication. The result: only a handful of studies are identified, and all are sponsored by the manufacturer. This does not demonstrate comprehensiveness. It demonstrates a search designed to avoid inconvenient data.
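By contrast, a defensible string for that same stent example combines the brand name with generic device terms, procedure terms, and the indication. The string below is illustrative only: the brand name is hypothetical, the field tags are PubMed syntax, and a real string must follow from the protocol's own keyword rationale.

```
("AcmeFlow"[tiab]
 OR "stents"[MeSH Terms] OR stent*[tiab]
 OR "endovascular procedures"[MeSH Terms] OR angioplast*[tiab])
AND
("peripheral arterial disease"[MeSH Terms] OR claudication[tiab])
```

Because the string does not restrict on outcomes, it retrieves complication and adverse-event reports alongside efficacy studies, which supports the balance requirement discussed below.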
Reviewers also check for balance. If the search includes only terms related to efficacy and benefits, but excludes terms related to complications or adverse events, that is a red flag. The search must be designed to capture both favorable and unfavorable evidence.
This is not about creating a biased search. It is about demonstrating that the manufacturer did not create a biased search.
Filters, limits, and their justification
Filters and limits are necessary. But every restriction must be justified.
If the search is limited by publication date, the CER must explain why. If it is limited by language, the justification must be clear. If certain study types are excluded, the rationale must be provided.
Reviewers verify whether these restrictions are reasonable or whether they were applied to reduce the volume of inconvenient evidence.
A common issue is the exclusion of older literature without justification. Manufacturers assume that recent studies are more relevant, but this assumption must be defended. If the device has been on the market for years, older studies may contain critical safety data. Excluding them without explanation creates a gap in the evidence base.
Every filter or limit should be described in the protocol and justified in the report. If the search was restricted to English-language publications, explain why non-English sources were considered unlikely to contain relevant clinical data. If animal studies were excluded, explain why human data alone is sufficient for the evaluation.
Reviewers do not expect every possible study to be included. They expect the exclusions to be transparent and defensible.
Documentation of search execution
The protocol describes what should be done. The execution documentation proves that it was done.
This includes the exact search strings entered into each database, the date of the search, the number of results retrieved, and any alerts or error messages encountered during execution.
Many CERs skip this step. They document the protocol but not the execution. When reviewers ask for verification, the manufacturer cannot provide it because the searches were not logged.
The CER references a literature search but provides no evidence that it was performed as described. No screenshots. No export files. No search history. The reviewer cannot verify that the stated protocol was followed. This leads to a request for a new search, delaying the entire submission.
Best practice is to document each search with screenshots or exported search histories. Include the number of hits for each string. Include the date and database version. This creates a complete audit trail.
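The audit trail can be as simple as a structured log appended at the moment each search is run. The sketch below is one minimal way to do this, assuming nothing beyond the Python standard library; the field names, file path, and the example search strings are illustrative, not a prescribed format.

```python
import csv
from datetime import date

# One row per search string per database. Field names are illustrative.
LOG_FIELDS = ["run_date", "database", "search_string", "hits", "notes"]

def log_search(path, database, search_string, hits, notes=""):
    """Append one executed search to the audit-trail CSV."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # empty file: write the header first
            writer.writeheader()
        writer.writerow({
            "run_date": date.today().isoformat(),
            "database": database,
            "search_string": search_string,
            "hits": hits,
            "notes": notes,
        })

# Example: record two searches from the same session.
log_search("search_log.csv", "PubMed", '"stents"[MeSH Terms] OR stent*[tiab]', 412)
log_search("search_log.csv", "Embase", "'stent'/exp AND 'angioplasty'/exp", 537)
```

A log like this, kept alongside the exported search histories, answers the reviewer's "prove it was run as described" question without reconstruction after the fact.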
It also protects the manufacturer. If the Notified Body questions whether certain studies were missed, the documented search proves what was retrievable at the time of execution.
The selection and screening process
Once the search is executed, the next step is screening. Reviewers verify that the screening process was systematic and transparent.
The CER should document how many records were retrieved, how many were duplicates, how many were excluded at title and abstract screening, and how many proceeded to full-text review.
Each exclusion decision should be based on predefined criteria. If studies were excluded because they did not meet the population, intervention, comparator, or outcome criteria, those criteria must be stated upfront in the protocol.
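Fixing the exclusion reason codes in the protocol before screening begins also makes the flow counts trivially auditable. A minimal sketch, with hypothetical reason codes and record fields (the screening judgments themselves are made by human reviewers; this only keeps the tally honest):

```python
from collections import Counter

# Reason codes fixed up front in the protocol, never invented during screening.
EXCLUSION_CRITERIA = {
    "E1": "wrong population",
    "E2": "wrong intervention/device",
    "E3": "no clinical outcomes reported",
}

def tally(records):
    """records: list of dicts with an 'exclude' reason code or None."""
    counts = Counter(r["exclude"] for r in records if r["exclude"])
    included = sum(1 for r in records if r["exclude"] is None)
    return counts, included

screened = [
    {"id": "PMID-1", "exclude": None},
    {"id": "PMID-2", "exclude": "E1"},
    {"id": "PMID-3", "exclude": "E3"},
    {"id": "PMID-4", "exclude": None},
]
counts, included = tally(screened)
# Reported as: 4 retrieved, 2 excluded (E1: 1, E3: 1), 2 to full-text review.
```

The point is not the tooling but the discipline: every excluded record carries a predefined reason code, so the counts in the CER can be regenerated from the screening log on demand.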
Reviewers check for consistency. If the protocol states that case reports will be excluded, but several case reports appear in the final reference list, that is a discrepancy. If the protocol states that only studies on the exact device will be included, but the CER relies on studies of similar devices, that is a contradiction.
The screening criteria should align with the clinical evaluation plan. If the plan defines equivalence criteria, the screening criteria should reflect those boundaries. If the plan includes devices from the same generic group, the search should be designed to capture that group.
Many deficiencies arise because the search strategy and the clinical evaluation plan were developed in isolation. They do not match. Reviewers notice.
Updates and ongoing surveillance
The initial search is not the end. Under MDR Article 61(11) and Annex XIV Part B, clinical evaluation is a continuous process. The literature search must be updated regularly.
Reviewers verify whether the CER includes a plan for updating the search. They check whether previous updates were performed as planned. They verify whether newly identified studies were incorporated into the evaluation.
A common issue is that manufacturers perform an initial search during the CER preparation, but they do not update it before submission. If six months have passed, significant new evidence may have been published. The absence of an update raises questions about the completeness of the data.
The CER is dated June 2024, but the literature search was last performed in December 2023. No update is documented. No explanation is provided. The reviewer cannot know whether the evaluation reflects the current state of knowledge. This creates a deficiency.
Best practice is to perform a final update search immediately before submission. Document it. Include the results in the CER. If no new relevant studies are found, state that. If new studies are found, incorporate them.
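A currency check can even be built into the CER sign-off checklist. The sketch below assumes a six-month window, matching the scenario above; the actual threshold is an assumption and should come from your clinical evaluation plan and Notified Body expectations.

```python
from datetime import date

# Assumed threshold: roughly six months. Set per your CE plan / NB expectations.
MAX_SEARCH_AGE_DAYS = 183

def update_due(last_search: date, cer_date: date) -> bool:
    """True if the literature search should be re-run before signing the CER."""
    return (cer_date - last_search).days > MAX_SEARCH_AGE_DAYS

# The deficiency example above: search run Dec 2023, CER dated Jun 2024.
print(update_due(date(2023, 12, 15), date(2024, 6, 30)))  # -> True
```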
Transparency as the standard
What reviewers verify is transparency. They verify that the manufacturer documented the search in enough detail that it could be reproduced, challenged, and defended.
This is not about perfection. No search strategy is perfect. But it must be transparent.
When deficiencies arise, they rarely result from genuine gaps in the literature. They result from gaps in the documentation. The manufacturer performed a reasonable search but did not document it with the rigor required for regulatory verification.
The solution is not to perform more searches. The solution is to document the searches as if they will be audited. Because they will be.
Treat the literature search strategy as a technical document that will be reviewed by someone who does not assume you acted in good faith. Provide enough detail that the reviewer can verify your work without contacting you. Anticipate their questions. Answer them in the documentation.
This approach eliminates most deficiencies. It shifts the conversation from whether the search was adequate to whether the evidence supports the conclusions. That is where the conversation should be.
The literature search is not an administrative step. It is the proof that your clinical evaluation is based on all available evidence, not just the evidence you wanted to find.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745, Article 61 and Annex XIV
– MDCG 2020-13 on Clinical Evaluation Assessment Report Template
Related Resources
Read our complete guide: State of the Art (SOTA) Analysis under EU MDR
Or explore Complete Guide to Clinical Evaluation under EU MDR