Why your clinical evaluation already failed before the first patient

Written by Hatem Rabeh, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

I review clinical evaluation reports where manufacturers invested months drafting a 200-page document, only to discover their device classification was wrong, their equivalence pathway closed before they began, or their clinical data gaps too wide to bridge. The report becomes a ritual performed too late. The real question is not how to write a clinical evaluation. It is when to start building it.

Most manufacturers treat clinical evaluation as a documentation phase that begins at design freeze. They have a prototype. They have performance data. Now they need the clinical evaluation report to complete the technical file.

This sequence guarantees deficiencies.

Because by the time you sit down to write the clinical evaluation report, the decisions that determine whether your clinical evidence is adequate or inadequate have already been made. The device design is fixed. The intended purpose is locked. The claims are defined. And the clinical data you have is the clinical data you must work with.

Under MDR Article 61 and Annex XIV, clinical evaluation is not a report. It is a process. A continuous process that begins at concept and runs through the entire lifecycle of the device. When you start this process determines whether your evidence base will support conformity or whether you will face reconstruction late in development.

The Planning Blind Spot

I see technical files where the first documented clinical evaluation activity is the literature search protocol, dated six months before submission. The team runs the search, finds 40 relevant papers, analyzes them, and concludes the evidence supports safety and performance.

Then the Notified Body asks: Why did you choose this equivalence device? Why not the other model that shares more characteristics? What data supports your claim that tissue response is equivalent when the materials differ?

The answer is often silence. Because the equivalence decision was not clinical. It was convenient. Someone looked at what was cleared in the past and assumed similarity was enough. No one evaluated whether the clinical data for that device would actually address the risks of the new design.

Common Deficiency
Clinical evaluation planning starts after device design is complete. The team inherits a fixed design and must retrofit clinical evidence to justify it. When gaps appear, the options are limited: redesign the device or generate new clinical data. Both are expensive. Both delay submission.

The root cause is simple. Clinical evaluation was not integrated into development. It was appended to development.

When clinical evaluation starts too late, you lose the ability to make evidence-informed design decisions. You cannot adjust intended purpose to match available data. You cannot select materials or design features that align with a stronger equivalence pathway. You cannot identify data gaps while there is still time to generate data efficiently.

The Correct Starting Point

Clinical evaluation planning must begin at the concept phase. Before you finalize design inputs. Before you lock the intended purpose. Before you commit to a regulatory pathway.

This does not mean writing the clinical evaluation report early. It means asking clinical evaluation questions early.

What are the clinical risks associated with this device concept? What clinical data exists for similar devices? Is there a viable equivalence pathway, or will we need clinical investigations? What claims can be supported by existing data, and which claims will require new data? How does our choice of materials, design features, or intended patient population affect the availability of clinical evidence?

These are not documentation questions. These are development questions. And they directly affect feasibility, timeline, and cost.

Key Insight
Clinical evaluation planning is a design input. The results should influence device specifications, material selection, intended purpose definition, and regulatory strategy. If clinical evaluation does not shape development, it becomes a post-hoc justification exercise that rarely satisfies reviewers.

I worked on a project where the manufacturer planned a Class IIb implantable sensor. Early clinical evaluation planning revealed that equivalent devices had only short-term clinical data, and long-term safety evidence was sparse. The team faced a choice: proceed with the original design and commit to a multi-year clinical investigation, or modify the intended duration of use to align with available evidence.

They modified the design. The device became a short-term sensor. This decision was possible because clinical evaluation planning happened before design freeze. The evidence gap was identified when there was still flexibility to respond.

That is the value of starting early. You preserve options.

What Early Clinical Evaluation Planning Looks Like

At the concept stage, clinical evaluation planning is not a full literature review. It is a preliminary assessment. The goal is to understand the evidence landscape and identify major risks or gaps that could affect feasibility.

This preliminary assessment should answer:

1. What devices are sufficiently similar to support an equivalence claim?
2. What clinical data exists for those devices?
3. Are there known clinical risks or failure modes we must address?
4. What claims can likely be supported by literature alone, and which will require additional data?

The output is not a report. It is a clinical data strategy. A living document that informs design decisions, risk management, and regulatory planning.
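For teams that prefer structure over free text, the strategy can even be kept as structured data. Here is a minimal sketch in Python: the fields mirror the four questions above, and every name is illustrative, not a prescribed MDR template.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the field names are assumptions, not an MDR
# template. The point is that the strategy is structured and reviewable,
# so it can be updated and queried at every development milestone.

@dataclass
class EvidenceGap:
    description: str       # e.g. "no safety data beyond 30 days of use"
    planned_source: str    # e.g. "PMCF study", "clinical investigation"

@dataclass
class ClinicalDataStrategy:
    equivalence_candidates: list[str]          # question 1
    available_clinical_data: list[str]         # question 2: key data sources
    known_clinical_risks: list[str]            # question 3
    claims_supported_by_literature: list[str]  # question 4
    claims_needing_new_data: list[str]         # question 4
    gaps: list[EvidenceGap] = field(default_factory=list)
    last_reviewed_at: str = "concept phase"
```

Reviewing and revising a record like this at each stage gate is exactly the "living document" behavior the strategy requires.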

This strategy should be reviewed and updated at each development milestone. As design matures, the clinical evaluation questions become more specific. By the time you reach design verification, the clinical evaluation plan should be detailed enough to define exactly what literature will be searched, what equivalence arguments will be made, and what data gaps will be addressed through PMCF or clinical investigations.

Key Insight
Clinical evaluation planning is iterative. It does not happen once. It evolves as the device design evolves. Each design decision should be evaluated for its clinical evidence implications. Each new risk identified in risk management should trigger a review of clinical evidence adequacy.

The Link Between Risk Management and Clinical Evaluation

One of the clearest signals that clinical evaluation started too late is a disconnect between the risk management file and the clinical evaluation report.

The risk management file identifies 30 clinical hazards. The clinical evaluation report discusses device performance and surgical outcomes, but it does not systematically address those hazards. There is no clear mapping showing which clinical data supports which risk control measures.

This happens when clinical evaluation and risk management run on parallel tracks. They should be integrated from the start.

Every clinical risk identified in ISO 14971 risk management should have a corresponding section in the clinical evaluation. For each risk, the clinical evaluation must demonstrate that the residual risk is acceptable based on clinical data. If the risk is mitigated by design, the clinical evaluation should show that the design feature performs as intended in clinical use. If the risk is mitigated by instructions, the clinical evaluation should show that users can follow those instructions effectively.

This integration only works if clinical evaluation planning begins when risk management begins. If you wait until the risk analysis is finalized, you inherit a list of risks you must address with whatever clinical data you already have. If you start early, you can identify which risks require clinical evidence and plan data generation accordingly.
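To make the linkage concrete, here is a minimal sketch of the traceability check described above, assuming the risk management file and the clinical evidence items can be exported as simple records. The IDs, hazards, and field names are invented for illustration.

```python
# Hypothetical exports: in practice these would come from your risk
# management and clinical evaluation tooling.
risk_file = [
    {"hazard_id": "H-001", "hazard": "implant migration"},
    {"hazard_id": "H-002", "hazard": "local tissue reaction"},
    {"hazard_id": "H-003", "hazard": "device fracture under load"},
]

clinical_evidence = [
    {"evidence_id": "E-12", "addresses": ["H-001"], "source": "literature"},
    {"evidence_id": "E-17", "addresses": ["H-002"], "source": "equivalence data"},
]

def unaddressed_hazards(risks, evidence):
    """Return every hazard with no clinical evidence mapped to it."""
    covered = {hid for item in evidence for hid in item["addresses"]}
    return [r for r in risks if r["hazard_id"] not in covered]

for risk in unaddressed_hazards(risk_file, clinical_evidence):
    print(f"No clinical evidence mapped to {risk['hazard_id']}: {risk['hazard']}")
# -> flags H-003, a gap to close while there is still time to generate data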

Common Deficiency
Clinical evaluation reports that treat safety as a general section with a few adverse event tables. No systematic linkage to the risk management file. No demonstration that each identified clinical risk has been evaluated and found acceptable based on data. Notified Bodies will issue non-conformities on this disconnect.

When Does the Clinical Evaluation Report Get Written?

The clinical evaluation report itself is typically written late in development, after design verification and often during design validation. By this stage, the clinical evaluation plan should already define the structure of the report, the literature to be reviewed, the equivalence demonstration, and the data gap analysis.

Writing the report becomes execution of the plan, not creation of the plan.

If you are making strategic decisions while drafting the report—deciding which equivalence device to use, determining whether to pursue a literature-based pathway or a clinical investigation, defining which claims are supportable—you have started too late.

The report should document decisions that were already made and evidence that was already gathered. It should synthesize what is known and identify what remains to be monitored through PMCF.

I have seen teams spend three months writing a clinical evaluation report, only to realize during internal review that their equivalence claim is weak. They have to restart. They have to identify a different equivalent device. They have to redo the literature review. Three months of effort discarded because the strategic questions were not answered before the writing began.

This is avoidable. The effort you invest in clinical evaluation planning during early development directly reduces the effort and risk during report writing.

PMCF and the Lifecycle Perspective

MDR requires that clinical evaluation be continuous throughout the lifecycle of the device. This means PMCF planning must also begin early.

The PMCF plan is not something you write after the clinical evaluation report is finished. It is part of the clinical evaluation plan. The data gaps you identify during development planning become the objectives of your PMCF activities.

If you identify a gap related to long-term performance, your PMCF plan must define how you will collect long-term data post-market. If you identify a gap related to performance in a specific patient subgroup, your PMCF must define how you will monitor outcomes in that subgroup.
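Continuing the earlier sketch, deriving PMCF objectives from the identified gaps can be this direct. The gap texts and methods are again illustrative assumptions, not MDR-prescribed wording.

```python
# Each evidence gap identified during development becomes an explicit
# PMCF objective with a planned data source.
evidence_gaps = {
    "long-term performance beyond 12 months": "registry analysis",
    "outcomes in patients with compromised bone quality": "PMCF study",
}

pmcf_plan = [
    {"objective": f"Collect post-market data on: {gap}", "method": method}
    for gap, method in evidence_gaps.items()
]

for item in pmcf_plan:
    print(f"{item['objective']} -> {item['method']}")
```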

This planning cannot happen at the last minute. PMCF methods take time to establish. If you plan to use registry data, you need to identify relevant registries and establish data access agreements. If you plan to conduct a PMCF study, you need to design the protocol, obtain ethics approval, and recruit sites. These activities have long lead times.

When clinical evaluation planning starts early, PMCF planning is already in motion by the time you submit for CE marking. You are not scrambling to design a PMCF plan in response to Notified Body questions. You are implementing a plan that was built into development from the start.

Key Insight
PMCF is not a post-market add-on. It is the lifecycle extension of your pre-market clinical evaluation. The gaps you identify early define your PMCF objectives. The data you collect post-market feeds back into clinical evaluation updates. This is the continuous cycle MDR requires.

The Cost of Starting Late

Delaying clinical evaluation planning has measurable consequences.

First, you lose the ability to make evidence-informed design decisions. You commit to a design before knowing if clinical evidence supports it. This increases the risk of late-stage redesign or expensive clinical investigations that could have been avoided.

Second, you increase the risk of submission delays. If clinical evidence gaps are discovered during report writing, there is no time to address them. You either submit with acknowledged gaps and weak PMCF plans, or you delay submission to generate data.

Third, you increase the risk of non-conformities. Notified Bodies evaluate whether your clinical evidence is sufficient and whether your evaluation process was systematic. If your clinical evaluation looks like it was constructed after the fact to justify a predetermined design, reviewers will challenge it.

I have seen submissions delayed by six months or more because clinical evaluation was started too late. The report was written. The literature review was done. But the fundamental questions—Is this equivalence claim defensible? Is this intended purpose supported?—were never asked when there was still time to adjust the strategy.

By contrast, the projects that move smoothly through review are the ones where clinical evaluation was integrated from the beginning. The evidence strategy was clear. The gaps were known and planned for. The report was a synthesis of a structured process, not a rescue effort.

Practical First Steps

If you are beginning a new device development project, start with a preliminary clinical evaluation assessment. Before finalizing design inputs, answer these questions:

What similar devices exist on the market? What clinical data is publicly available for those devices? What are the known clinical risks for this device type? What claims do we intend to make, and what level of evidence will those claims require?

Capture your findings in a clinical data strategy document. Update this document at each stage gate. Use it to inform design decisions, regulatory pathway selection, and resource planning.

Integrate clinical evaluation planning into your design control process. Make it a standard input to design reviews. Ensure that clinical evaluation questions are asked before decisions are finalized, not after.

Link clinical evaluation planning directly to risk management. For each clinical risk, define what clinical evidence is needed and how it will be obtained. Build this into your verification and validation plans.

These steps do not require additional resources. They require earlier engagement. They require treating clinical evaluation as a development activity, not a documentation activity.

Closing

Clinical evaluation does not begin when you start writing the report. It begins when you start designing the device. The decisions that determine whether your evidence is adequate are made long before the first draft.

Start clinical evaluation planning at the concept stage. Use it to shape development. Integrate it with risk management. Build your PMCF strategy in parallel. When you finally sit down to write the clinical evaluation report, you should already know exactly what the report will say and exactly what evidence supports it.

That is how you avoid the late-stage surprises that derail submissions. That is how you build a clinical evaluation that stands up to review.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when the device or its intended purpose changes, and at a minimum at the interval defined in your post-market surveillance plan (at least annually for class III and implantable devices).

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex XIV
– MDCG 2020-13: Clinical Evaluation Assessment Report Template
– ISO 14971: Application of risk management to medical devices