Your clinical evaluation strategy starts before the first sketch


Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert And Partner


Most teams start thinking about clinical evaluation when the device is nearly finished. They gather studies, draft a report, and hope it holds. But by then, the foundational decisions are locked. The choice of intended use, the definition of the patient population, the claims made in the technical file—all of these shape what evidence you will need, what comparators you can use, and what endpoints matter. If those early choices were made without clinical evaluation in mind, you are not just delayed. You are structurally misaligned.

I have seen this pattern in nearly every audit where the clinical evaluation is struggling. The device works. The team is competent. But the clinical strategy was never built into the design process. It was added later, like a compliance afterthought.

The result is predictable. The literature search reveals that your intended use does not match the available clinical data. Your equivalence claim depends on features that are not clinically relevant. Your risk profile demands clinical data you cannot generate in time. And now you are rewriting sections of the technical file, renegotiating timelines, and trying to retrofit a strategy that should have been there from day one.

This is not about starting earlier. It is about integrating clinical evaluation into the design decisions themselves.

What MDCG 2020-6 actually requires

MDCG 2020-6 is titled “Clinical evidence needed for medical devices previously CE marked under Directives 93/42/EEC or 90/385/EEC.” But the principles it lays out apply to any device under MDR. The guidance makes clear that clinical evaluation is not a final report. It is a continuous activity that starts during development and continues throughout the lifecycle.

Article 61(1) of MDR states that manufacturers must plan, conduct, and document clinical evaluations for every device. This is not about gathering data at the end. It is about designing a strategy that aligns with the device from the beginning.

MDCG 2020-6 reinforces this by describing clinical evaluation as a methodologically sound ongoing procedure. It requires you to define your clinical development plan early, adjust it as the device evolves, and ensure that every claim you make is supported by evidence.

But here is what trips up most teams: they treat the clinical evaluation plan as a document to complete, not as a decision-making framework. They write it, submit it, and then ignore it during development. When the device changes, the plan does not. When new risks emerge, the evidence strategy stays static. By the time you submit, the plan and the device have diverged.

Common Deficiency
The clinical evaluation plan is written after the technical file is nearly complete. The intended use, claims, and risk analysis are already locked. The plan becomes a description of what evidence exists, not a strategy for what evidence is needed.

The three decisions that define your clinical strategy

Your clinical evaluation strategy depends on three foundational choices. These are made early in development, often before anyone thinks about the CER. But they determine everything that follows.

Intended use and clinical claims

This is the first anchor. Your intended use defines what the device is supposed to do, for whom, and in what clinical context. It sounds simple, but I see teams define their intended use based on what the device can do, not on what clinical evidence supports.

If your intended use is too broad, you will need evidence for multiple indications, patient populations, and clinical settings. If it is too narrow, you may limit your market unnecessarily. But the worst case is when your intended use does not align with the available clinical data.

For example, you design a device for acute settings, but the published literature focuses on chronic use. Or you claim it is for a general patient population, but the studies you reference were done in high-risk subgroups. Now your equivalence argument does not hold. Your literature review is weak. And your Notified Body asks why your claims are not supported.

The clinical evaluation strategy starts here: define the intended use with the evidence in mind. Not after you write the technical file. Before you finalize the design inputs.

Equivalence or clinical investigation

The second decision is whether you will rely on equivalence to a device with existing clinical data, or whether you will generate new clinical data through investigations.

This is not a simple choice. Equivalence requires that your device is clinically, technically, and biologically equivalent to a comparator with sufficient clinical evidence. That comparator must meet MDR standards for clinical evaluation. And the equivalence must be demonstrated, not assumed.

Most teams underestimate what equivalence requires. They identify a predicate device, list similar features, and move on. But equivalence is not about similarity. It is about demonstrating that the clinical performance and safety profile will be the same.

If the devices differ in material, design, operating principle, or clinical claims, equivalence becomes difficult to justify. You may need bridging data. You may need clinical investigations. And if the comparator device does not have robust clinical evidence under MDR, your entire strategy collapses.

This decision must be made early. Because if you assume equivalence and later discover it does not hold, you are forced into clinical investigations late in the process. That delays certification, increases costs, and often requires design changes you cannot afford.

Key Insight
Equivalence is a clinical argument, not a technical comparison. It must be supported by evidence that the clinical outcomes will be the same. If you cannot defend this with data, equivalence is not your pathway.
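The all-or-nothing nature of the three equivalence pillars can be sketched as a simple pre-screen. This is an illustrative sketch only: the field names and boolean simplification are my assumptions, and a real equivalence assessment under MDCG 2020-5 is a documented clinical argument, not a checklist.

```python
from dataclasses import dataclass

# Hypothetical pre-screen for the three equivalence pillars (MDCG 2020-5).
# Field names and the yes/no simplification are illustrative assumptions.

@dataclass
class EquivalenceAssessment:
    clinical: bool    # same clinical condition, purpose, site, population, user
    technical: bool   # similar design, specifications, principles of operation
    biological: bool  # same materials in contact with the same tissues or fluids
    comparator_has_mdr_grade_data: bool  # comparator's clinical evidence is itself sufficient

    def pathway_viable(self) -> bool:
        # Equivalence must hold on ALL three pillars, and the comparator's
        # clinical data must meet MDR standards; one failure rules it out.
        return all([self.clinical, self.technical, self.biological,
                    self.comparator_has_mdr_grade_data])

assessment = EquivalenceAssessment(
    clinical=True, technical=True, biological=False,
    comparator_has_mdr_grade_data=True,
)
print(assessment.pathway_viable())  # a single failed pillar rules equivalence out
```

The point of the sketch is the `all(...)`: there is no partial credit. If any pillar fails, you are in bridging-data or clinical-investigation territory, and that decision should be made before the design is locked.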

Risk profile and evidence requirements

The third decision is how your risk classification and risk profile shape your evidence strategy. Higher-risk devices require more robust clinical data. But risk is not just about classification. It is about the specific hazards associated with your device, the severity of harm if things go wrong, and the vulnerability of the patient population.

MDCG 2020-6 emphasizes that the level of clinical evidence must be proportionate to the risk. For Class III devices or implants, you will likely need clinical investigations. For Class IIb devices, the requirement depends on novelty, risk profile, and available data. For Class IIa and Class I devices, literature and equivalence may suffice—but only if the risks are well understood and the data is sufficient.

The mistake is treating this as a regulatory checkbox. Teams assume that because their device is Class IIa, a literature review is enough. But if your device addresses a new indication, or if the failure mode is severe, the Notified Body will expect more.

Your clinical strategy must account for this from the start. If your risk analysis identifies hazards that are not well covered in the literature, plan for additional data. If your device is used in a vulnerable population, anticipate the need for post-market surveillance or PMCF studies. Do not wait for the Notified Body to tell you this is insufficient.
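As a rough internal planning aid, the proportionality logic above can be sketched as a lookup. The branches and labels below are hypothetical illustrations of the principle, not regulatory requirements; the actual evidence level is agreed case by case with your Notified Body.

```python
# Illustrative sketch only: maps device class and risk drivers to the kind of
# clinical evidence a Notified Body is likely to expect. Thresholds and labels
# are assumptions for illustration, not a statement of MDR requirements.

def expected_evidence_level(device_class: str, novel_indication: bool,
                            severe_failure_mode: bool) -> str:
    if device_class in ("III", "implantable"):
        # Class III and implants generally need clinical investigations,
        # with narrow exceptions under Article 61.
        return "clinical investigation (narrow exceptions under Art. 61)"
    if device_class == "IIb" and (novel_indication or severe_failure_mode):
        return "clinical investigation or substantial bridging data"
    if novel_indication or severe_failure_mode:
        # Even lower classes need more than a literature review when the
        # indication is new or the failure mode is severe.
        return "literature plus device-specific data; plan PMCF early"
    return "literature review and/or equivalence, if data is sufficient"

print(expected_evidence_level("IIa", novel_indication=True, severe_failure_mode=False))
```

Notice that class alone never decides the output: novelty and failure-mode severity can push a Class IIa device past the literature-review baseline, which is exactly the checkbox mistake described above.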

Clinical Evaluation Strategy Integration

1. Define intended use
2. Map evidence requirements
3. Choose pathway
4. Integrate with design
5. Document strategy

How to integrate clinical evaluation into design control

Most teams run clinical evaluation in parallel to design, not integrated with it. The engineers work on the device. The clinical team works on the report. They meet occasionally, exchange documents, and hope it aligns.

This does not work under MDR. Clinical evaluation is part of design control. It informs design inputs, validates design outputs, and drives risk management decisions. If it is not embedded in your development process, it becomes reactive instead of strategic.

Here is what integration looks like in practice.

During design inputs, you define the intended use with clinical evidence in mind. You ask: what data exists for this indication? What endpoints matter clinically? What comparators are available? You do not finalize the intended use until you know the evidence strategy is viable.

During design outputs, you verify that your technical specifications support the clinical claims. If you claim improved accuracy, your verification testing must demonstrate that. If you claim faster recovery, your risk-benefit analysis must reflect that. The clinical evaluation plan should reference these outputs and explain how they support the clinical argument.

During risk management, you use the clinical data to inform residual risk evaluation. If your risk analysis identifies a hazard, your clinical evidence should show that the benefit outweighs that risk in the intended population. If it does not, you either mitigate the risk further or adjust your intended use.

This is not about more documentation. It is about making clinical evaluation a decision-making tool, not a compliance task.

Key Insight
Clinical evaluation is not something you add to the technical file. It is something that shapes the technical file. If your design decisions are made without considering the clinical evidence strategy, you are building misalignment into the project.

The role of the clinical evaluation plan

The clinical evaluation plan is required by MDR. But most teams treat it as a formality. They write it late, copy sections from templates, and file it away. Then they wonder why the Notified Body questions their evidence strategy.

A real clinical evaluation plan is a living document. It defines your evidence strategy, explains your rationale, and guides your decisions throughout development. It should answer these questions clearly:

– What clinical data is required to support your intended use and claims?
– What is your pathway: equivalence, clinical investigation, or a combination?
– What literature will you search, and how will you appraise it?
– What endpoints matter clinically, and how will you demonstrate them?
– What gaps exist, and how will you address them post-market?

If your plan does not answer these questions, it is not a strategy. It is a placeholder.
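One way to keep the plan honest is to treat those five questions as required fields and flag the ones left blank before each review. The keys and dictionary format below are illustrative assumptions, not a prescribed plan structure.

```python
# Minimal sketch: represent the clinical evaluation plan as a structured object
# and flag unanswered strategy questions. The keys mirror the five questions in
# the text; the dict format is an assumption for illustration only.

REQUIRED_ANSWERS = [
    "data_required_for_claims",
    "pathway",                 # equivalence, investigation, or a combination
    "literature_search_and_appraisal",
    "clinical_endpoints",
    "gap_handling_post_market",
]

def plan_gaps(plan: dict) -> list[str]:
    """Return the strategy questions the plan leaves blank or unanswered."""
    return [key for key in REQUIRED_ANSWERS if not plan.get(key)]

draft_plan = {
    "data_required_for_claims": "outcomes for acute use in the general population",
    "pathway": "equivalence to comparator X",  # hypothetical comparator
    "clinical_endpoints": "",                  # empty: still a placeholder
}
print(plan_gaps(draft_plan))
```

Running a check like this at every design review, rather than once before submission, is what turns the plan from a placeholder into a living document.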

The plan should be written early—before you lock the intended use, before you commit to equivalence, before you finalize the design. And it should be updated as the device evolves. If the design changes, the plan changes. If new risks emerge, the plan changes. If the literature reveals gaps, the plan adapts.

Notified Bodies review the clinical evaluation plan during the initial assessment. If it is vague, incomplete, or inconsistent with the technical file, they will issue deficiencies. And those deficiencies are not quick fixes. They reveal that your clinical strategy was never solid.

Evidence Pathway Decision Factors

– Clinical data available
– Equivalence demonstrable
– Risk level acceptable
– Timeline feasible

What happens when you skip this step

I have worked with teams who built excellent devices but could not certify them because the clinical evaluation strategy was broken. The device worked. The risk management was thorough. But the intended use did not match the available evidence. The equivalence claim could not be defended. The literature search revealed gaps they could not fill.

The cost of fixing this late is enormous. You rewrite sections of the technical file. You commission new studies. You delay market entry by months or years. And in some cases, you realize that the intended use you built the device for is not achievable under MDR without data you cannot generate.

This is not about regulatory burden. It is about structural misalignment. If you do not build your clinical evaluation strategy from day one, you are hoping that the evidence will fit your device. And hope is not a strategy.

Common Deficiency
The clinical evaluation plan is written after the device is designed, the technical file is drafted, and the submission is being prepared. At that point, the plan describes what evidence exists, not what evidence is needed. The strategy is reactive, not proactive.

Moving forward

Building your clinical evaluation strategy from day one is not about starting the CER earlier. It is about embedding clinical thinking into every design decision. It is about aligning your intended use, your equivalence pathway, and your risk profile with the evidence before you commit to the design.

This requires discipline. It requires coordination between clinical, regulatory, and engineering teams. And it requires treating the clinical evaluation plan as a decision-making tool, not a compliance document.

But when you do this, the CER becomes straightforward. You are not scrambling to justify claims you already made. You are documenting a strategy you built from the beginning. The evidence aligns. The argument is clear. And the Notified Body sees a coherent submission that reflects planning, not improvisation.

In the next post, we will look at how to structure the CER itself—how to organize the evidence, how to present the argument, and how to avoid the structural deficiencies that delay approvals.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, a poorly structured state-of-the-art (SOTA) analysis, missing gap analysis, and lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-7 (PMCF Plan Template), MDCG 2020-8 (PMCF Evaluation Report Template), and MDCG 2020-13 (CEAR Template).




Peace, Hatem

Clinical Evaluation Expert for Medical Devices

Follow me for more insights and practical advice.

References:
– Regulation (EU) 2017/745 (MDR), Article 61
– MDCG 2020-6: Clinical evidence needed for medical devices previously CE marked under Directives 93/42/EEC or 90/385/EEC