Why Your Literature Search Gets Rejected Before Chapter One

Written by HATEM RABEH, MD, MSc Ing
Your Clinical Evaluation Expert and Partner

I see the same deficiency in almost every clinical evaluation report that gets sent back: the literature search protocol fails basic reproducibility requirements. The problem is not the execution. The problem is that most protocols were never designed to pass an audit.

When a Notified Body or regulatory assessor opens your clinical evaluation report, they do not start by reading your clinical data analysis. They start by checking whether your literature search can be independently replicated. If it cannot, the entire evidence base becomes questionable.

This is not a theoretical concern. This is the first gate your CER must pass.

The Reproducibility Standard Nobody Explains

MDR Annex XIV Part A requires that clinical evaluation is based on a systematic literature review. The word “systematic” has a precise regulatory meaning: another qualified person, following your documented protocol, should arrive at the same set of publications you identified.

Most literature search protocols fail this test not because they lack search strings or databases. They fail because critical decision points are left undocumented.

During an audit, the assessor will check three things in order:

First: Can I understand exactly what you searched for, where, and when?
Second: Can I understand how you decided what to include or exclude?
Third: Can I verify that your decisions were applied consistently?

If any of these three checks fails, your entire clinical evidence structure is considered unreliable.

Common Deficiency
Literature search protocols that list databases and keywords but do not document the exact search strings, date ranges, filters applied, or the sequence of searches performed. The assessor cannot replicate the search, so they cannot verify the evidence base.

What Happens Before You Write the Protocol

Most teams start by opening PubMed and typing keywords related to their device. This approach guarantees a deficient protocol.

A literature search protocol must answer a predefined clinical question. That question comes from your scoping process. Before you design any search, you must define what clinical claim you are supporting and what type of evidence would be relevant to that claim.

If your device is a bone screw for spinal fusion, your clinical question is not “What is published about bone screws?” It is “What is the clinical performance and safety of bone screws used in posterior spinal fusion in adult patients with degenerative conditions?”

The specificity of your question determines the specificity of your search.

But here is where most protocols break down: they define the question, then immediately jump to search execution without documenting the logic that connects the two.

The Missing Link: Eligibility Criteria as Part of the Protocol

Your protocol must state, before the search begins, what types of studies will be considered eligible for inclusion. This is not something you decide later during screening. This is something you define now, in writing, with justification.

Eligibility criteria should address:

Population: Which patient groups are relevant?
Intervention: Which devices, procedures, or techniques are comparable?
Comparator: What is the reference standard or alternative treatment?
Outcome: What clinical endpoints matter for your claims?
Study design: What evidence types are acceptable?

These are your PICOS criteria. They must be documented in the protocol before the search is executed.

Key Insight
Eligibility criteria are not a post-search filter. They are part of the protocol design. If you define them after seeing the search results, your process is no longer systematic.
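One practical way to enforce "defined before the search, never changed after" is to record the PICOS criteria as an immutable data structure in whatever tooling supports your protocol. The sketch below is purely illustrative (the class and field values are hypothetical, not part of any MDCG template); it shows the criteria being fixed at protocol time so they cannot be quietly adjusted after the results come in.

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: criteria cannot be mutated after protocol approval
class PICOSCriteria:
    population: str
    intervention: str
    comparator: str
    outcome: str
    study_designs: tuple  # acceptable evidence types, e.g. ("RCT", "prospective cohort")


# Defined in the protocol, in writing, before any search is executed
criteria = PICOSCriteria(
    population="Adults with degenerative spinal conditions",
    intervention="Bone screws used in posterior spinal fusion",
    comparator="Alternative fixation or fusion techniques",
    outcome="Fusion rate, revision rate, device-related adverse events",
    study_designs=("RCT", "prospective cohort", "registry study"),
)
```

Because the dataclass is frozen, any later attempt to rewrite a criterion raises an error instead of silently changing the protocol, which mirrors the regulatory expectation that post-hoc changes go through a documented amendment.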

Documenting the Search Strategy in Auditable Terms

Once your clinical question and eligibility criteria are defined, you design the search strategy. This is where the protocol must become technically precise.

An auditable search strategy documents:

Which databases were searched (PubMed, Embase, Cochrane, others)
The exact date and time of each search
The complete search string used in each database, including Boolean operators, MeSH terms, and field tags
Any filters applied (publication date, language, study type)
The number of results returned by each search
How duplicate records were identified and removed

Most deficiencies occur because teams document only the final combined search string without showing how it was built or tested.
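The execution record for each search can be captured in a simple structured log. This is a minimal sketch, not a prescribed format: the function name, fields, and example search string are all hypothetical, but the fields map one-to-one to the auditable elements listed above (database, timestamp, full string, filters, results count).

```python
import json
from datetime import datetime, timezone


def record_search(database, search_string, filters, results_count):
    """Return an auditable record of one search execution (illustrative only)."""
    return {
        "database": database,
        "executed_at": datetime.now(timezone.utc).isoformat(),  # exact date/time of execution
        "search_string": search_string,  # complete string, incl. Boolean operators and field tags
        "filters": filters,              # e.g. publication date range, language, study type
        "results_count": results_count,  # number of records returned by this search
    }


entry = record_search(
    database="PubMed",
    search_string='("bone screw"[MeSH] OR "pedicle screw") AND "spinal fusion"[MeSH]',
    filters={"publication_date": "2014/01/01-2024/12/31", "language": "English"},
    results_count=412,
)
print(json.dumps(entry, indent=2))  # appended to the protocol execution log
```

One record per search, per database, per iteration: that is what lets an assessor replay your work line by line.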

Why Search Strings Must Be Iterative and Documented

A single search string is rarely sufficient. You typically run pilot searches, refine terms based on sensitivity and specificity, and combine multiple searches to capture the full scope of relevant evidence.

The protocol must document this iterative process. If you tested three variations of a search string before selecting the final one, that must be documented with justification for why the final version was chosen.

If you combined searches from multiple databases, the protocol must explain how results were merged and de-duplicated.
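The merge-and-de-duplicate step is one of the easiest places to lose traceability, because the same study often appears in PubMed and Embase under slightly different metadata. The sketch below illustrates one possible rule (DOI when available, otherwise a normalized title/year key); the rule itself is an assumption for illustration, and whatever rule you actually use must be stated in the protocol.

```python
def deduplicate(records):
    """Merge records from multiple databases, removing duplicates.

    Uses the DOI as the identity key when available, otherwise a
    normalized (title, year) pair. Returns the unique records plus
    the number removed, which feeds the PRISMA flow diagram.
    """
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or (rec["title"].strip().lower(), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    removed = len(records) - len(unique)
    return unique, removed


pubmed = [{"doi": "10.1000/a", "title": "Study A", "year": 2021},
          {"doi": None, "title": "Study B", "year": 2020}]
embase = [{"doi": "10.1000/a", "title": "Study A", "year": 2021},   # duplicate by DOI
          {"doi": None, "title": "study b ", "year": 2020}]         # duplicate by title/year
unique, removed = deduplicate(pubmed + embase)
print(len(unique), removed)  # 2 unique records, 2 duplicates removed
```

Whatever the implementation, the protocol must state the identity rule and the count of duplicates removed, because both numbers reappear in the PRISMA diagram.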

This level of detail is not bureaucracy. It is the only way to demonstrate that your search was designed to minimize bias and maximize reproducibility.

Common Deficiency
Protocols that state “A comprehensive search was conducted in PubMed using relevant keywords” without providing the actual search string, date, or results count. This is not reproducible and will be rejected.

Screening and Selection: The Audit Trail

After the search is executed, you screen the results in two stages: title/abstract screening, then full-text review. Both stages must be documented with a clear audit trail.

The protocol must specify:

How many reviewers will screen each record
How disagreements between reviewers will be resolved
How the reasons for exclusion will be categorized and recorded
What tool or system will be used to document the screening process

If you exclude a study, the reason must be recorded and linked to a predefined exclusion criterion. You cannot exclude a study because “it did not seem relevant.” You exclude it because it does not meet eligibility criterion X, which was defined in the protocol.

During an audit, assessors will select excluded studies at random and check whether the exclusion reason is consistent with the protocol. If they find inconsistencies, they will question the reliability of your entire selection process.
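That consistency check is easy to enforce mechanically: refuse to record any exclusion whose reason is not one of the protocol's predefined criteria. The sketch below is illustrative only (the criterion codes and their wording are hypothetical examples, not a standard taxonomy).

```python
# Predefined exclusion criteria from the protocol (codes and wording are hypothetical)
EXCLUSION_CRITERIA = {
    "E1": "Population outside scope (non-adult or non-degenerative)",
    "E2": "Intervention not comparable to the subject device",
    "E3": "No relevant clinical outcome reported",
    "E4": "Ineligible study design (e.g. narrative review, case report)",
}

audit_trail = []


def exclude(record_id, criterion_code, reviewer):
    """Record an exclusion; refuse any reason not predefined in the protocol."""
    if criterion_code not in EXCLUSION_CRITERIA:
        raise ValueError(f"{criterion_code!r} is not a predefined exclusion criterion")
    audit_trail.append({
        "record": record_id,
        "criterion": criterion_code,
        "reason": EXCLUSION_CRITERIA[criterion_code],
        "reviewer": reviewer,
    })


exclude("PMID:12345678", "E4", reviewer="Reviewer 1")
# exclude("PMID:87654321", "seemed irrelevant", "Reviewer 2")  # would raise ValueError
```

"It did not seem relevant" simply cannot enter the trail, which is exactly the property an assessor is sampling for.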

The PRISMA Flow Diagram Is Not Optional

MDCG 2020-5 references the PRISMA statement as best practice for reporting systematic reviews. A PRISMA flow diagram shows the number of records at each stage of the search and screening process, with reasons for exclusions.

This diagram is not a nice-to-have. It is the visual audit trail that proves your process was systematic.

The numbers in the PRISMA diagram must match the numbers documented in your protocol execution record. If they do not, you will be asked to explain the discrepancy.

Key Insight
The PRISMA flow diagram is the single most scrutinized element of a literature search during a regulatory review. It must be internally consistent and traceable to the documented protocol.
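The internal consistency of the diagram is pure arithmetic, so it can be checked before submission rather than discovered by the assessor. The sketch below verifies a simplified flow (identified → duplicates removed → screened → full-text assessed → included); the function and its example counts are illustrative, and a real diagram may have additional branches such as records not retrieved.

```python
def check_prisma(identified, duplicates_removed, screened,
                 excluded_title_abstract, full_text_assessed,
                 excluded_full_text, included):
    """Verify the arithmetic of a simplified PRISMA flow diagram.

    Returns a list of inconsistencies; an empty list means the counts add up.
    """
    problems = []
    if identified - duplicates_removed != screened:
        problems.append("records screened must equal identified minus duplicates removed")
    if screened - excluded_title_abstract != full_text_assessed:
        problems.append("full-text count must equal screened minus title/abstract exclusions")
    if full_text_assessed - excluded_full_text != included:
        problems.append("included count must equal full-text assessed minus full-text exclusions")
    return problems


issues = check_prisma(identified=812, duplicates_removed=97, screened=715,
                      excluded_title_abstract=640, full_text_assessed=75,
                      excluded_full_text=52, included=23)
print(issues)  # [] -- these example counts are internally consistent
```

Running a check like this against the numbers in your execution record is a five-minute task that removes one of the most common audit findings.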

Version Control and Protocol Amendments

Literature searches are often updated during the lifecycle of a device, especially for PMCF or when new clinical data becomes available. Each update must follow a documented protocol amendment process.

If you change the search strategy, eligibility criteria, or databases used, that change must be documented with justification. You cannot simply re-run a search with different parameters and present it as part of the same systematic process.

Protocol amendments should be versioned and approved before implementation. The clinical evaluation report must clearly state which version of the protocol was used for which search iteration.

This is critical for devices that remain on the market for years. An assessor reviewing your updated CER will check whether the literature search updates were conducted systematically or whether they were ad hoc responses to findings.

What Assessors Look For

When I review a literature search protocol, I check whether I could hand it to a colleague and have them execute the same search independently. If I cannot, the protocol is deficient.

Assessors look for the same thing. They want to see:

A clear clinical question
Predefined eligibility criteria with justification
Complete, reproducible search strings with execution dates
Documented screening process with reasons for exclusions
A PRISMA flow diagram that matches the documented numbers
Version control if the search was updated

If any of these elements are missing or inconsistent, the protocol fails the reproducibility test. The consequence is not just a minor finding. The consequence is that your clinical evidence base is considered unreliable, and your CER cannot be accepted.

Common Deficiency
Literature searches conducted by external consultants without full documentation of the search process. The CER includes the results but not the reproducible protocol. This fails audit requirements.

Building Protocols That Survive Scrutiny

A literature search protocol is not a formality. It is the foundation of your clinical evidence structure. If it is deficient, everything built on top of it becomes questionable.

The protocol must be written before the search is executed. It must be precise enough that another qualified person can replicate your work. It must document every decision point and every reason for inclusion or exclusion.

This level of rigor is not optional under the MDR. It is the regulatory expectation.

Most deficiencies I see are not caused by incompetence. They are caused by teams underestimating how thoroughly the protocol will be scrutinized. They treat it as a summary of what they did, rather than a prospective plan for what they will do.

The difference matters. One passes audit. The other does not.

If your literature search protocol cannot be independently replicated, your CER is already vulnerable. The clinical data you present, no matter how strong, rests on an unreliable foundation.

That is the part most teams realize too late.

Peace,
Hatem

Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Annex XIV Part A
– MDCG 2020-5: Clinical Evaluation – Equivalence
– MDCG 2020-13: Clinical Evaluation Assessment Report Template