When your clinical study fails – salvaging evidence for the CER

Written by Hatem Rabeh, MD, MSc Ing
Your Clinical Evaluation Expert and Partner

The investigator calls you on Friday afternoon. Patient recruitment stalled at 40%. Two sites dropped out. The steering committee just voted to terminate the study early. Your pivotal trial, the one you promised the Notified Body, is dead. And your CER submission deadline is eight weeks away.

This scenario plays out more often than manufacturers admit publicly. Clinical studies fail. Not because of poor science necessarily, but because of recruitment challenges, site performance issues, budget constraints, or external factors like a pandemic shutting down enrollment.

The instinct is panic. You built your entire clinical strategy around this study. The protocol is referenced throughout your CER. Your risk-benefit analysis assumes this data will close critical gaps. Now what?

Here is what I observe in practice: most manufacturers freeze when a study fails. They treat the collapsed trial as a binary outcome—success or failure. They either scramble to restart the study with modifications or they abandon the evidence entirely and rewrite their clinical strategy from scratch.

Both responses waste opportunity.

The regulatory reality of failed studies

First, clarify what “failed” actually means in regulatory terms.

A study that terminates early is not automatically worthless. A study that misses its primary endpoint is not automatically inadmissible. A study with incomplete enrollment is not automatically excluded from your CER.

The MDR does not require perfect studies. Article 61 and Annex XIV require sufficient clinical evidence to demonstrate safety and performance. MDCG 2020-6 emphasizes that clinical evidence is evaluated cumulatively, not in isolation.

What matters is whether the data generated—even from a terminated or underpowered study—contributes meaningfully to your clinical evaluation. Whether it addresses specific safety concerns. Whether it provides insight into device performance in real conditions. Whether it fills gaps that literature cannot address.

The Notified Body does not score your CER based on how many studies went perfectly. They assess whether the totality of your evidence supports your claims and adequately characterizes risk.

Key Insight
A terminated study with partial data can still contribute to your clinical evidence base if you analyze what was collected and integrate it transparently into your appraisal.

What can be salvaged from a failed study

When a study collapses, your first action is forensic analysis. What data actually exists? What was collected before termination? What can be analyzed?

I have reviewed CERs where manufacturers salvaged substantial value from studies that never reached their primary endpoint:

Safety data from enrolled subjects. Even if your study enrolled only 40 patients instead of 120, those 40 patients generated safety data. Adverse events were documented. Follow-up visits occurred. Device malfunctions were recorded or ruled out. This safety profile, even from a smaller cohort, informs your risk analysis.

Performance observations. If your device was used in those 40 cases, you have real-world performance data. Procedure times. Technical success rates. Learning curves. User feedback. These observations may not power a statistical claim, but they provide qualitative and quantitative insight.

Subgroup data that addresses specific concerns. Perhaps your study terminated early, but you successfully enrolled the high-risk subgroup that your risk management file flagged as a concern. That subset may provide the exact evidence the Notified Body needs to assess risk in vulnerable populations.

Comparative insights against your literature review. Even underpowered data allows comparison. Are your complication rates consistent with published rates for similar devices? Are your performance outcomes within expected ranges? This contextual alignment strengthens your literature-based claims (a short illustrative calculation follows at the end of this section).

Process learnings that inform PMCF design. A failed study often reveals practical challenges—patient identification difficulties, endpoint measurement issues, site capability gaps. These learnings directly improve your PMCF plan. You demonstrate to the Notified Body that you learned from the failure and adapted your post-market strategy accordingly.

The salvage process is not about pretending the study succeeded. It is about extracting every piece of legitimate evidence that contributes to understanding your device’s clinical profile.
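To make the comparison against literature concrete, here is a minimal sketch of the kind of calculation involved, assuming scipy is available: an exact (Clopper-Pearson) confidence interval around a complication rate observed in a small, prematurely terminated cohort, checked against a pooled rate from the literature review. The numbers used here (3 complications in 40 enrolled subjects, a 6% pooled literature rate) are illustrative assumptions, not data from any real study.

```python
# Illustrative sketch (hypothetical numbers): compare a complication rate from a
# terminated study against a pooled rate taken from the systematic literature review.
from scipy.stats import beta


def clopper_pearson(events: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact (Clopper-Pearson) two-sided confidence interval for a proportion."""
    lower = 0.0 if events == 0 else beta.ppf(alpha / 2, events, n - events + 1)
    upper = 1.0 if events == n else beta.ppf(1 - alpha / 2, events + 1, n - events)
    return lower, upper


# Hypothetical salvaged cohort: 40 enrolled subjects, 3 device-related complications.
events, enrolled = 3, 40
observed_rate = events / enrolled
ci_low, ci_high = clopper_pearson(events, enrolled)

# Hypothetical pooled complication rate from the literature review.
literature_rate = 0.06

print(f"Observed rate: {observed_rate:.1%} (95% CI {ci_low:.1%} to {ci_high:.1%})")
print(f"Literature rate of {literature_rate:.1%} falls inside the CI: "
      f"{ci_low <= literature_rate <= ci_high}")
```

The wide interval is itself the message: a salvaged cohort of this size cannot prove a rate on its own, but it can show that the observed safety profile is consistent with the published state of the art, which is exactly how such data should be framed in the appraisal.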

How to present salvaged data in the CER

Transparency is not optional. It is strategic.

When you include data from a terminated or failed study in your CER, the worst approach is to bury the context or present it as if everything went according to plan. Reviewers will discover the truth during document review or audit. When they do, they question your entire appraisal’s integrity.

Instead, present salvaged data with full transparency:

Describe what happened. State clearly in your CER that the study terminated early, missed its enrollment target, or failed to meet its primary endpoint. Explain why. External factors? Recruitment challenges? Interim analysis decision?

Specify what data was collected. Define exactly what evidence exists. How many subjects? What follow-up duration? Which endpoints were assessed? What remains incomplete?

Analyze the data for what it can support. Do not claim the data proves something it cannot prove. If your study was underpowered for efficacy, state that clearly. But analyze what the data does show—safety signals, performance trends, consistency with literature, identification of subgroup risks.

Integrate the data into your cumulative appraisal. Position the salvaged data as one component of your total evidence base. Show how it complements your literature review, your equivalent device data, your real-world evidence. The cumulative weight matters more than any single study’s outcome.

Address the limitations explicitly. Acknowledge what the terminated study cannot conclude. Then explain how your overall clinical evidence strategy compensates for those limitations—through additional literature, through PMCF commitments, through extended labeling precautions.

Common Deficiency
Manufacturers hide study failures in the appendix or omit them entirely from the CER, hoping the Notified Body will not notice. This always backfires during review or audit when the registered study is discovered.

Notified Bodies respect transparent handling of setbacks. They distrust attempts to obscure failure. Your credibility increases when you acknowledge problems and demonstrate how you adapted.

When salvaged data is not enough

Honesty also means recognizing when salvaged data cannot support your clinical claims.

If your study was your only planned source of data for a critical safety concern or performance claim, and it collapsed before generating meaningful evidence, you have a gap. Salvaging partial data may reduce the gap, but not eliminate it.

In these situations, you face difficult decisions:

Can literature close the gap? Revisit your systematic literature review. Can you identify studies with similar devices, similar populations, or similar use conditions that address the concern your failed study was meant to resolve?

Can equivalence data substitute? If you have an equivalent device with clinical history, can that data address the gap? This only works if equivalence is valid and well-demonstrated.

Can you modify claims or indications? Sometimes the most pragmatic response is narrowing your intended purpose or adding precautions until you generate the missing evidence through PMCF.

Can PMCF be accelerated? If the gap is not immediately life-threatening but requires evidence, can you design a rapid PMCF study to collect the data post-market? Some Notified Bodies accept this approach with interim risk mitigation measures.

Do you need to restart or redesign the study? In some cases, yes. If the evidence is critical for market access and no alternative exists, restarting the study with a revised protocol or different sites may be unavoidable. But this is a last resort, not a first response.

The key question is proportionality. How critical is the missing evidence to your benefit-risk profile? How significant is the clinical concern? Can you demonstrate safety through other means while collecting the ideal data post-market?

The Notified Body conversation

When your study fails, communicate with your Notified Body early.

Do not wait until they discover the terminated study during CER review. Reach out proactively. Explain what happened. Present your plan for handling the evidence gap—whether through salvaged data, alternative evidence sources, or modified claims.

This conversation reveals how the Notified Body will assess your situation. Some will accept partial data if your cumulative evidence is strong. Some will require additional commitments in your PMCF plan. Some will ask for claim modifications.

What they will not accept is surprise. Discovering a failed study during formal review, with no prior discussion and no clear mitigation strategy, triggers scrutiny of your entire clinical evaluation process. It raises questions about your planning, your risk assessment, your transparency.

Early communication also allows you to negotiate. If the Notified Body considers the salvaged data insufficient, you can discuss alternatives before you finalize the CER. You can adjust your clinical strategy while you still have time to adapt your submission.

In my experience, Notified Bodies are pragmatic about study failures when manufacturers handle them professionally. Clinical research is uncertain. Recruitment is unpredictable. External factors intervene. What matters is how you respond—with transparency, with alternative evidence, with adaptive planning.

What this means for your clinical strategy

The deeper lesson from failed studies is strategic redundancy.

If your entire clinical evidence strategy depends on a single pivotal study, you are vulnerable. One recruitment problem, one site closure, one interim analysis decision, and your entire CER collapses.

Robust clinical strategies build redundancy from the beginning:

Multiple evidence sources. Literature, real-world data, equivalent device data, and clinical investigations should all contribute. No single source should be make-or-break.

Phased study designs. Consider pilot studies or feasibility studies before committing to large pivotal trials. Early small studies provide preliminary data that supports your CER even if the larger study encounters problems.

Adaptive protocols. Design studies with interim analyses and adaptive features that allow you to respond to recruitment or performance issues without total study failure.

PMCF as primary strategy. For many devices, well-designed PMCF studies provide more reliable evidence than pre-market trials. You control site selection, you adapt in real time, and you build evidence continuously post-market.

Evidence generation before you need it. Start literature reviews early. Collect real-world data from initial cases. Build your evidence base before you face regulatory deadlines. This creates buffer when problems occur.

A failed study should stress-test your clinical strategy, not break it.

The mindset shift required

When I work with manufacturers recovering from failed studies, the biggest obstacle is not technical. It is psychological.

Teams treat the failed study as a personal failure or a catastrophic setback. They lose confidence in their clinical strategy. They question their entire regulatory approach. They spiral into indecision or reactive panic.

The mindset shift required is this: clinical evidence generation is iterative, not linear.

Studies fail. Data disappoints. Plans change. This is normal in clinical research. The manufacturers who succeed are not the ones who never encounter setbacks. They are the ones who adapt when setbacks occur, who extract value from partial data, who pivot their strategy without losing momentum.

A failed study is information. It tells you what recruitment challenges exist in your market. It reveals which endpoints are measurable and which are theoretical. It exposes gaps in your risk assessment. It tests your clinical strategy under real conditions.

Use that information. Salvage what can be salvaged. Adapt what must be adapted. Build the next phase of evidence generation with the lessons you learned.

Your CER is not a monument to perfect execution. It is a living document that reflects your cumulative understanding of your device’s clinical profile. A well-handled failure, transparently presented and strategically addressed, demonstrates maturity in your clinical evaluation process.

And that maturity is exactly what Notified Bodies assess when they review your submission.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when there are changes to the device or its intended purpose. As a minimum, follow the frequency defined in your clinical evaluation plan: at least annually for class III and implantable devices, and typically every two to five years for lower-risk devices, in line with your post-market surveillance cycle.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured state-of-the-art (SOTA) analysis, missing gap analysis, and lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex XIV
– MDCG 2020-5: Clinical evaluation – Assessment of equivalence of medical devices
– MDCG 2020-13: Clinical evaluation assessment report template

Deepen Your Knowledge

Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the clinical evaluation process under Regulation (EU) 2017/745.