When Literature Is Sparse: Alternative Evidence Strategies
You have a novel device. The literature search returns twelve papers. Half are not relevant. Three are case reports. Two conflict with each other. The Notified Body asks how you will demonstrate safety and performance. This is not a hypothetical scenario. This is where many manufacturers find themselves when innovation moves faster than publication.
In This Article
- Why Literature Scarcity Happens
- What the Regulation Actually Requires
- Alternative Evidence Strategy One: Clinical Experience Data
- Alternative Evidence Strategy Two: Bench and Animal Data
- Alternative Evidence Strategy Three: Equivalence to a Predicate Device
- Alternative Evidence Strategy Four: PMCF as Evidence Generation
- How to Document Your Alternative Evidence Strategy
- When Clinical Investigation Becomes Necessary
- Final Reflection
The expectation under MDR is clear. Clinical evaluation must be based on sufficient clinical evidence. But what does sufficient mean when the literature barely exists?
I see manufacturers freeze at this point. They assume that sparse literature automatically disqualifies their clinical evaluation. They assume the only path forward is a clinical investigation. Some even consider abandoning the project.
That is not how the regulation works. That is not how reviewers reason. But it does require you to understand what alternative evidence strategies are acceptable, how to structure them, and what rigor they demand.
Why Literature Scarcity Happens
Sparse literature is not always a sign of weak science. It can reflect genuine innovation. It can reflect a narrow indication. It can reflect a device that solves a problem without generating academic interest.
I worked on a device for a rare surgical complication. The condition affects fewer than two thousand patients per year in Europe. The current standard of care is poorly documented. Most surgeons manage it based on experience, not protocols.
We conducted a comprehensive literature search. We found six relevant publications. None were randomized trials. Two were retrospective case series. One was a survey of surgeon opinions.
This was not a failure of methodology. This was the reality of the clinical landscape. The question was not whether we could find more literature. The question was how to build a valid clinical evaluation with what existed.
Sparse literature does not automatically trigger the need for a clinical investigation. It triggers the need for a documented strategy that explains how you will generate sufficient evidence through alternative means.
What the Regulation Actually Requires
MDR Article 61 requires sufficient clinical evidence to demonstrate safety and performance. It does not prescribe a single path. Annex XIV Part A describes clinical evaluation as a systematic process that appraises available clinical data.
The key word is available. If randomized controlled trials do not exist, they are not available. If published case series are limited, that is what is available. The regulation acknowledges this.
MDCG 2020-6 reinforces this flexibility. It recognizes a hierarchy of clinical evidence sources and requires that the evidence appraised be relevant to the device. When literature is insufficient, the manufacturer must demonstrate safety and performance through other means. This includes clinical investigations, but also structured clinical experience, post-market data from equivalent devices, and, where justified, surrogate endpoints.
The flexibility exists. But it is not a free pass. You must explain your reasoning. You must justify your choices. You must show that your alternative strategy is scientifically sound.
What Reviewers Expect to See
When literature is sparse, the Notified Body expects a clear acknowledgment in the clinical evaluation report. They expect a documented gap analysis. They expect a plan for how those gaps will be addressed.
I have reviewed submissions where manufacturers mention sparse literature in one sentence and then move on. No analysis. No strategy. No justification.
That approach fails every time. Not because the literature is sparse. But because the manufacturer did not demonstrate that they understand what the sparse literature means for their evaluation.
A common pitfall is stating that literature is limited without analyzing why it is limited, what specific gaps remain, and how those gaps affect the demonstration of safety and performance. Reviewers interpret this as a lack of understanding, not a lack of data.
Alternative Evidence Strategy One: Clinical Experience Data
Clinical experience data is underused. Many manufacturers think it does not count as clinical evidence. That is incorrect.
Clinical experience includes documented use of your device or equivalent devices in clinical practice. It includes surgeon feedback. It includes adverse event data from vigilance systems. It includes real-world performance observations.
This data becomes valid clinical evidence when it is systematically collected, documented, and analyzed. You cannot rely on anecdotal reports. You cannot reference vague clinical impressions. But you can structure a process to capture clinical experience in a way that meets regulatory standards.
How to Structure Clinical Experience Data
First, define what clinical experience you will collect. Be specific. If you are collecting surgeon feedback, define what questions you will ask. If you are tracking device performance, define what metrics you will measure.
Second, establish a collection method. This can be structured interviews. This can be case report forms. This can be device use logs. The method must be consistent and reproducible.
Third, document the data. Every observation. Every case. Every incident. This documentation must be traceable and verifiable.
Fourth, analyze the data as you would analyze literature. Look for trends. Look for signals. Look for performance indicators. Summarize the findings in your clinical evaluation report.
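To make those four steps concrete, here is a minimal Python sketch of a structured clinical experience record and the kind of aggregate summary a reviewer expects. The field names and metrics are illustrative assumptions, not a prescribed format; map them to your own case report forms and post-market surveillance procedures.

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClinicalExperienceRecord:
    """One documented device use. Fields are illustrative, not mandated."""
    site_id: str
    case_date: date
    indication: str
    device_performed_as_intended: bool
    complications: list[str] = field(default_factory=list)
    surgeon_feedback: str = ""

def summarize(records: list[ClinicalExperienceRecord]) -> dict:
    """Aggregate the records the way you would appraise a case series:
    establish the denominator first, then look for signals."""
    total = len(records)
    performed = sum(r.device_performed_as_intended for r in records)
    complication_counts = Counter(c for r in records for c in r.complications)
    return {
        "total_cases": total,
        "performance_rate": performed / total if total else None,
        "complication_counts": dict(complication_counts),
    }

records = [
    ClinicalExperienceRecord("site-01", date(2024, 3, 5),
                             "rare surgical complication", True),
    ClinicalExperienceRecord("site-02", date(2024, 4, 12),
                             "rare surgical complication", True,
                             complications=["transient inflammation"]),
]
print(summarize(records))
```

The point of the sketch is the discipline, not the tooling: fixed fields, a denominator, and a reproducible summary.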
I worked with a manufacturer who had twelve clinical sites using their device off-label before CE marking. They collected structured feedback from each site. They documented every use case. They tracked complications and outcomes.
This data became a core part of their clinical evaluation. It was not literature. But it was systematic. It was documented. It was analyzable. The Notified Body accepted it because the methodology was sound.
Clinical experience data is valid clinical evidence when the collection, documentation, and analysis meet the same standards you would apply to published studies. The issue is not the source. The issue is the rigor.
Alternative Evidence Strategy Two: Bench and Animal Data
Bench testing and animal studies are not clinical evidence. But they can support clinical evaluation when clinical data is limited.
The key is integration. You cannot present bench data in isolation and expect it to compensate for absent clinical data. But you can use bench data to support the biological and mechanical rationale for clinical performance.
When Bench Data Adds Value
Bench data is most useful when it directly correlates to a clinical endpoint. If your device delivers a specific force, and that force corresponds to a therapeutic effect, bench testing that validates force delivery supports your clinical evaluation.
I see this done poorly more often than I see it done well. Manufacturers include pages of bench test results with no explanation of how those results relate to clinical performance. The data sits there, disconnected.
The solution is explicit linkage. For each bench test, state what clinical question it addresses. State what clinical risk it mitigates. State what performance claim it supports.
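As a sketch of what explicit linkage can look like, the structure below pairs each bench test with the clinical question, risk, and claim it maps to. The test names and claims are hypothetical placeholders, not from any real submission.

```python
# Hypothetical traceability entries: one row per bench test, each tied
# to the clinical question it addresses, the risk it mitigates, and
# the performance claim it supports.
BENCH_TO_CLINICAL_LINKAGE = [
    {
        "bench_test": "Force delivery validation (n=30 units)",
        "clinical_question": "Does the device deliver the force "
                             "associated with the therapeutic effect?",
        "risk_mitigated": "Under-delivery leading to treatment failure",
        "claim_supported": "Consistent therapeutic force delivery",
    },
    {
        "bench_test": "Fatigue testing to 5x expected duty cycles",
        "clinical_question": "Will the device remain intact over "
                             "its intended service life?",
        "risk_mitigated": "Mechanical failure in situ",
        "claim_supported": "Durability over the stated use period",
    },
]
```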
Animal data follows the same principle. It is not clinical data. But if the animal model is relevant, and the endpoints are translatable, the data can support the biological plausibility of your device.
The challenge is demonstrating relevance. You must explain why the animal model approximates human physiology. You must explain why the endpoints in the animal study predict human outcomes. You must acknowledge the limitations.
A common pitfall is including bench or animal data in the clinical evaluation report without explaining its relevance to clinical performance. Reviewers see this as padding, not evidence. Relevance must be argued, not assumed.
Alternative Evidence Strategy Three: Equivalence to a Predicate Device
If your device is equivalent to a device with established clinical evidence, you can leverage that evidence. This is standard practice. But when literature on your own device is sparse, equivalence becomes even more critical.
The problem is that equivalence claims often collapse under scrutiny. The manufacturer claims equivalence. The reviewer disagrees. The submission stalls.
Equivalence requires three demonstrations under MDR Annex XIV: technical equivalence, biological equivalence, and clinical equivalence. All three must be established with objective data. MDCG 2020-5 sets out how Notified Bodies assess each one.
When literature is sparse, the equivalence analysis must be more detailed, not less. You cannot afford ambiguity. You must show that every technical difference has been evaluated. You must show that every material difference has been justified. You must show that the clinical use is truly comparable.
What Makes Equivalence Credible
Credible equivalence is specific. It addresses dimensions. It addresses materials. It addresses mechanisms of action. It addresses patient populations. It addresses intended uses.
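One way to force that specificity is a characteristic-by-characteristic comparison across the three equivalence dimensions. The sketch below is illustrative; the devices, values, and justifications are invented.

```python
# Hypothetical rows for a characteristic-by-characteristic equivalence
# table covering the technical, biological, and clinical dimensions.
EQUIVALENCE_TABLE = [
    {
        "dimension": "technical",
        "characteristic": "Outer diameter",
        "subject_device": "2.1 mm",
        "predicate_device": "2.0 mm",
        "difference": True,
        "justification": "Bench testing showed the 0.1 mm difference "
                         "does not affect deliverability or tissue trauma",
    },
    {
        "dimension": "biological",
        "characteristic": "Patient-contacting material",
        "subject_device": "Pt-Ir alloy",
        "predicate_device": "Pt-Ir alloy",
        "difference": False,
        "justification": "Identical material, same contact duration",
    },
]

# The claim is only as strong as its weakest unjustified difference.
unjustified = [row["characteristic"] for row in EQUIVALENCE_TABLE
               if row["difference"] and not row["justification"]]
assert not unjustified, f"Unjustified differences: {unjustified}"
```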
I reviewed a submission where the manufacturer claimed equivalence to a predicate device. The devices had the same general function. But the materials were different. The design was different. The mechanism was different.
The manufacturer argued that these differences were minor. The Notified Body disagreed. The equivalence claim was rejected.
The issue was not that differences existed. The issue was that the differences were not analyzed. The manufacturer assumed equivalence. They did not demonstrate it.
When literature is sparse, your equivalence demonstration becomes the foundation of your clinical evaluation. It must be rigorous. It must be detailed. It must anticipate objections.
When your own device has limited literature, equivalence to a predicate device is not a shortcut. It is a strategy that demands even greater rigor because you are substituting one device’s evidence for another. Every difference must be justified.
Alternative Evidence Strategy Four: PMCF as Evidence Generation
Post-market clinical follow-up is not just a post-market obligation. It is an evidence generation tool. When literature is sparse at the time of submission, PMCF becomes the mechanism to fill gaps over time.
But this requires a specific approach. Your PMCF plan must explicitly identify the evidence gaps. It must define what data will be collected. It must specify endpoints. It must commit to timelines.
How PMCF Functions as Alternative Evidence
PMCF becomes alternative evidence when it is designed to answer specific clinical questions. Not general surveillance. Not vague performance monitoring. Specific, pre-defined questions with measurable endpoints.
I worked on a device where the literature supported the mechanism of action but not the long-term durability. We designed a PMCF study to track durability at six-month intervals over three years. We defined failure as a specific mechanical event. We committed to reporting interim results.
The Notified Body accepted the submission because the PMCF plan was not generic. It was targeted. It addressed the identified gap. It committed to generating the missing data.
This is different from a standard PMCF plan. A standard plan monitors general safety and performance. An alternative evidence PMCF plan generates data to answer unresolved clinical questions.
The distinction matters. Reviewers can see the difference. If your PMCF plan reads like a template, they will question whether it actually addresses the evidence gaps. If it reads like a study protocol, they will recognize that you are using PMCF strategically.
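Here is a minimal sketch, using the durability example above, of what one targeted PMCF objective might look like when written down as structured data. The field names and values are assumptions for illustration, not a template from any guidance.

```python
from dataclasses import dataclass

@dataclass
class PMCFObjective:
    """One targeted PMCF objective. Field names are assumptions."""
    evidence_gap: str
    clinical_question: str
    endpoint: str            # pre-defined and measurable
    failure_definition: str  # what counts as a negative outcome
    follow_up_schedule: str  # committed, not open-ended
    reporting: str

durability = PMCFObjective(
    evidence_gap="No published long-term durability data",
    clinical_question="Does the device remain mechanically intact "
                      "over three years of clinical use?",
    endpoint="Freedom from mechanical failure at 36 months",
    failure_definition="A specific, pre-defined mechanical event",
    follow_up_schedule="Assessments at six-month intervals for three years",
    reporting="Interim results reported at each interval",
)
```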
A common pitfall is using a generic PMCF plan when literature is sparse. The plan describes general monitoring but does not define how it will generate the specific evidence needed to address the identified gaps. Reviewers see this as avoidance, not strategy.
How to Document Your Alternative Evidence Strategy
The alternative evidence strategy must be documented in your clinical evaluation report. It must be explicit. It must be justified. It must be transparent.
Start with a gap analysis. State what evidence exists. State what evidence is missing. State why it is missing.
Then, for each gap, state how it will be addressed. Will you use clinical experience data? Will you leverage equivalence? Will you generate data through PMCF? Will you conduct a clinical investigation?
For each approach, provide justification. Explain why the approach is appropriate. Explain why it is sufficient. Explain what limitations remain.
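A hypothetical gap-analysis entry, following the structure just described: one row per gap, pairing the gap with its strategy, justification, and residual limitation. All content is invented for illustration.

```python
# One illustrative gap-analysis row. Each identified gap is paired
# with the strategy that addresses it and the justification a
# reviewer will look for.
GAP_ANALYSIS = [
    {
        "gap": "No published data on long-term durability",
        "why_missing": "Device class too new for long-term series",
        "impact": "Benefit-risk over full service life unconfirmed",
        "strategy": "Targeted PMCF study with a 36-month endpoint",
        "justification": "Short-term safety is established; residual "
                         "risk is acceptable per the risk analysis",
        "residual_limitation": "No pre-market long-term data",
    },
]
```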
What Justification Looks Like
Justification is not assertion. It is reasoning. It is explanation. It is anticipation of objections.
If you are relying on clinical experience data, justify why structured clinical experience is appropriate for this device and this indication. Explain what makes the data reliable. Explain what makes it representative.
If you are relying on equivalence, justify why the predicate device’s evidence applies to your device. Explain what makes the devices comparable. Explain what differences exist and why they do not invalidate the equivalence.
If you are relying on PMCF, justify why post-market data generation is acceptable for this gap. Explain why pre-market data is not required. Explain what timelines are appropriate.
This documentation takes effort. It takes thought. But it is the difference between a submission that moves forward and a submission that stalls.
Your alternative evidence strategy must be documented with the same rigor you would apply to a clinical investigation protocol. The absence of literature does not reduce the documentation burden. It increases it.
When Clinical Investigation Becomes Necessary
Sometimes, alternative strategies are not sufficient. Sometimes, the only path forward is a clinical investigation.
This is true when the device addresses a critical safety concern that cannot be extrapolated from other data. This is true when the mechanism of action is unproven. This is true when the risk-benefit assessment cannot be established without direct clinical evidence.
The decision is not about literature volume. It is about whether sufficient evidence can be generated through alternative means. If the answer is no, a clinical investigation is required.
I do not recommend avoiding this conclusion. I have seen manufacturers delay for years, trying to force alternative strategies when a clinical investigation was clearly needed. The delay does not help. It creates more problems.
If a clinical investigation is necessary, the path is clear. Design the study. Obtain approvals. Execute the study. Generate the data. This is not failure. This is the appropriate regulatory path for devices that require direct clinical evidence.
Final Reflection
Sparse literature is a challenge. It is not a barrier. The regulation provides flexibility. But flexibility is not leniency. It is an opportunity to demonstrate thoughtful, scientifically sound reasoning.
The manufacturers who succeed are the ones who acknowledge the gaps, explain the strategy, justify the approach, and commit to generating data where needed. The manufacturers who fail are the ones who pretend the gaps do not matter or assume the Notified Body will accept weak justifications.
Reviewers do not expect perfection. They expect rigor. They expect transparency. They expect that you understand what you know and what you do not know.
When literature is sparse, your alternative evidence strategy is not a workaround. It is the core of your clinical evaluation. Treat it accordingly.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
– MDR 2017/745, Article 61 and Annex XIV
– MDCG 2020-5: Clinical Evaluation – Equivalence
– MDCG 2020-6: Sufficient Clinical Evidence
– MDCG 2020-13: Clinical Evaluation Assessment Report Template