Most CEPs Fail Because They Answer the Wrong Questions
You can have all the clinical data in the world. But if your Clinical Evaluation Plan doesn’t frame the right evaluation questions from the start, your CER will struggle at every step. I see this pattern across submissions: manufacturers treating the CEP as a formality, then realizing too late that the entire evaluation logic was misaligned with MDR expectations.
In This Article
- Why the CEP Determines Everything That Follows
- What Must Be in the CEP Before You Start Evaluation
- The Evaluation Questions That Anchor the Entire Plan
- What Reviewers Actually Check in the CEP
- How to Structure the CEP So It Passes First Review
- The Connection Between CEP and CER
- Final Thoughts on Getting the CEP Right
The Clinical Evaluation Plan is not a courtesy document.
It’s the strategic blueprint that determines whether your clinical evaluation will pass or fail review. Yet most manufacturers approach it backward. They collect data first, then try to organize it into a plan that justifies what they already have.
Reviewers see through this immediately.
A properly constructed CEP under MDR defines the scope, the acceptance criteria, the evaluation pathway, and the justification logic before you start collecting evidence. It’s the framework that makes your CER defensible. Without it, you’re building on unstable ground.
Why the CEP Determines Everything That Follows
The Clinical Evaluation Plan is your commitment to the Notified Body about how you will demonstrate safety and performance.
It must be in place before the clinical evaluation itself. That's not administrative preference. That's MDR Article 61 and Annex XIV Part A requiring a documented, reasoned approach to clinical evidence.
When the CEP is vague, generic, or copied from templates, the entire evaluation becomes reactive. You end up searching for data to fill gaps you didn’t anticipate. You miss critical endpoints. You overlook device-specific risks that should have been addressed from the beginning.
But here’s what really happens in reviews.
Notified Bodies don’t just check if you have a CEP. They assess whether your plan logically supports the claims you’re making. They look for alignment between your intended use, your risk profile, your clinical data strategy, and your evaluation endpoints.
If those pieces don’t connect, the entire submission weakens.
A typical example: manufacturers list data sources without explaining why those sources are sufficient to address the specific risks and claims of their device. The CEP becomes a data inventory instead of an evaluation strategy.
What Must Be in the CEP Before You Start Evaluation
The CEP is not a summary of what you did. It’s a prospective declaration of what you will do and why that approach is sufficient.
MDR Annex XIV Part A outlines the structure. But what matters in practice is how you define the evaluation scope and justify your choices.
Device Description and Intended Use
You need a precise description of the device, including technical characteristics, intended purpose, patient population, indication, and any claimed benefits.
This isn’t copy-paste from the IFU. It’s the clinical perspective on what the device does and who it’s for. This section anchors everything else.
If your device description is too broad or vague, your evaluation will lose focus.
Clinical Claims and Performance Endpoints
State explicitly what clinical outcomes you claim the device achieves.
For each claim, define measurable performance endpoints. These endpoints must align with the intended use and address the specific benefits you’re asserting.
Reviewers will check this alignment repeatedly. If your performance endpoints don’t match your claims, your evidence won’t support your conclusions.
Risk Profile and Safety Concerns
Identify the device-specific risks based on your risk management file.
For each significant risk, explain how the clinical evaluation will address it. This means defining what type of data is needed, what acceptance criteria apply, and how you will demonstrate that residual risks are acceptable.
This is where many CEPs fail. They list risks but don’t connect them to the evaluation strategy.
The CEP must show that your clinical evaluation is risk-driven. Every major risk should map to a specific evaluation question and a defined data requirement.
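One way to make that risk-to-question mapping explicit and checkable is to maintain it as a simple traceability structure. The sketch below is purely illustrative: the device, risks, questions, and data requirements are hypothetical, and nothing in MDR prescribes this format.

```python
from dataclasses import dataclass

@dataclass
class RiskMapping:
    risk: str                 # hazard taken from the risk management file
    evaluation_question: str  # the clinical question that addresses it
    data_requirement: str     # evidence needed to answer the question

# Hypothetical entries for an imagined wound-care device
traceability = [
    RiskMapping(
        risk="Delayed wound healing",
        evaluation_question="Does the device achieve healing times "
                            "comparable to standard care?",
        data_requirement="Comparative studies with >= 12-week follow-up",
    ),
    RiskMapping(
        risk="Local skin irritation",
        evaluation_question="Is the rate of dermal adverse events "
                            "acceptable in the intended population?",
        data_requirement="Adverse event rates from PMCF and literature",
    ),
]

def unmapped_risks(risk_file: list, mappings: list) -> list:
    """Return risks from the risk management file with no evaluation question."""
    mapped = {m.risk for m in mappings}
    return [r for r in risk_file if r not in mapped]
```

A check like `unmapped_risks` surfaces exactly the failure described above: a risk listed in the risk management file that never made it into the evaluation strategy.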
Equivalence or Clinical Investigation Pathway
This is the most scrutinized decision in the CEP.
You must justify whether you will rely on equivalence, clinical investigation data, or a combination. That justification must be explicit and defendable.
If you claim equivalence, you need to outline the criteria you'll use to demonstrate technical, biological, and clinical equivalence as defined in MDCG 2020-5. You need to explain why equivalence is appropriate for your device and your risk profile.
If you plan a clinical investigation, explain the study design, the sample size rationale, the endpoints, and the timeline.
This decision cannot be made lightly. And it cannot be reversed halfway through the evaluation without reworking the entire plan and the evidence strategy built on it.
Literature Search Strategy
Define your search protocol upfront.
Specify databases, search terms, inclusion and exclusion criteria, and appraisal methodology. Explain how you will handle data gaps or conflicting evidence.
The literature search is not a fishing expedition. It’s a structured process designed to retrieve relevant clinical data for your specific device and claims.
If your search strategy is too narrow, you’ll miss critical evidence. If it’s too broad, you’ll drown in irrelevant studies and waste time justifying exclusions.
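Recording the search protocol as structured data before the search begins is one practical way to keep it explicit and auditable. This is a minimal sketch with hypothetical databases, terms, and thresholds, not a regulatory template:

```python
# Hypothetical search protocol (illustrative values only)
SEARCH_PROTOCOL = {
    "version": "1.0",
    "databases": ["PubMed", "Embase", "Cochrane Library"],
    "search_terms": ["device name", "intended-use synonyms", "comparator terms"],
    "inclusion": [
        "human clinical data",
        "intended patient population",
        "outcomes matching defined endpoints",
    ],
    "exclusion": [
        "in vitro or animal studies",
        "case reports with n < 5",   # threshold is an example; set your own
        "non-peer-reviewed sources",
    ],
    "appraisal_method": "predefined quality and relevance scoring",
}

def is_protocol_complete(protocol: dict) -> bool:
    """Check that every element of the search strategy is defined up front."""
    required = {"databases", "search_terms", "inclusion",
                "exclusion", "appraisal_method"}
    return required <= protocol.keys() and all(protocol[k] for k in required)
```

The point of the completeness check is the same as the point of the section: every element of the strategy exists before the first query is run, so exclusions are justified by the protocol rather than invented afterward.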
Data Appraisal and Acceptance Criteria
This is where many CEPs become too generic.
You need to define what makes clinical data sufficient and relevant for your device. That means setting acceptance thresholds for study quality, patient population similarity, follow-up duration, and outcome measures.
These criteria must be defined before you start appraising data. Otherwise, you risk selecting evidence that supports your conclusion rather than evidence that objectively addresses your evaluation questions.
Notified Bodies watch for this bias carefully.
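Fixing the thresholds before appraisal begins can be made concrete by writing them down as data and applying them uniformly. The field names and numbers below are hypothetical, chosen only to illustrate the discipline:

```python
# Hypothetical acceptance thresholds, fixed before appraisal begins
ACCEPTANCE = {
    "min_quality_score": 3,    # e.g. on a predefined 1-5 appraisal scale
    "min_followup_weeks": 12,  # follow-up duration relevant to the endpoints
}

def meets_criteria(study: dict) -> bool:
    """Apply the predefined thresholds identically to every retrieved study."""
    return (
        study["quality_score"] >= ACCEPTANCE["min_quality_score"]
        and study["population_match"]  # population matches the intended use
        and study["followup_weeks"] >= ACCEPTANCE["min_followup_weeks"]
    )
```

Because the same function runs on every study, no study can be waved through on criteria invented after its results were seen, which is precisely the bias reviewers look for.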
The Evaluation Questions That Anchor the Entire Plan
Here’s the part most manufacturers overlook.
The CEP must formulate specific clinical evaluation questions that guide the entire process. These questions should be framed around safety, performance, benefit-risk balance, and undesirable side effects.
For example:
- Does the device achieve the claimed reduction in healing time compared to standard care?
- Are the documented adverse events acceptable given the severity of the condition being treated?
- Is the device safe for use in the intended patient population, including vulnerable subgroups?
These questions must be answerable with the data you plan to collect. If your evaluation questions are too vague, your conclusions will lack precision. If they don’t match your claims, your CER will fail logical consistency checks.
Good evaluation questions create a roadmap. Every piece of clinical data you collect should answer one or more of these questions.
Your evaluation questions are not rhetorical. They are the specific clinical uncertainties your evaluation must resolve to demonstrate conformity with MDR General Safety and Performance Requirements.
What Reviewers Actually Check in the CEP
When a Notified Body reviews your CEP, they’re not checking completeness of sections. They’re assessing logical coherence.
Does the plan show that you understand your device’s risk profile?
Does it demonstrate that you’ve thought through what clinical evidence is necessary and sufficient?
Does it connect your data strategy to your regulatory claims?
I’ve reviewed CEPs where every section was filled out, but the logic was circular. The device description didn’t match the intended use. The performance endpoints didn’t address the claims. The equivalence justification referenced devices with different risk profiles.
Reviewers reject these plans not because they’re incomplete, but because they reveal a fundamental misunderstanding of what clinical evaluation means under MDR.
Another common issue: manufacturers treat the CEP as static.
But the CEP should evolve. When new risks emerge, when literature searches reveal unexpected findings, when PMCF data contradicts assumptions—the CEP must be updated to reflect these changes.
Failure to update the CEP signals that you’re not actively managing your clinical evaluation process.
Too often, the CEP remains unchanged for years despite an evolving state of the art, new clinical data, or design modifications. This creates a disconnect between the plan and the evaluation actually being performed.
How to Structure the CEP So It Passes First Review
Start with clarity of purpose.
The CEP must answer one fundamental question: What do we need to demonstrate clinically to prove this device is safe and performs as intended?
From there, every section builds logically.
Your device description defines what you’re evaluating. Your clinical claims define what you need to prove. Your risk profile defines what safety concerns must be addressed. Your data strategy defines how you will collect the evidence. Your appraisal criteria define how you will judge that evidence.
Each section should reference the others. The CEP should read like a coherent argument, not a collection of independent sections.
When drafting the CEP, anticipate reviewer questions:
- Why is this data source relevant to your device?
- Why is this endpoint appropriate for your claim?
- Why is equivalence justified for your risk class?
- Why is this sample size sufficient?
If your CEP doesn’t preemptively answer these questions, you’ll face them in the review.
Another practical point: version control.
The CEP should be a living document with a clear version history. When you update the plan, document what changed and why. This shows active management and reinforces that your clinical evaluation is a continuous process, not a one-time exercise.
The Connection Between CEP and CER
The Clinical Evaluation Report is the execution of the Clinical Evaluation Plan.
If the CEP is weak, the CER will struggle. If the CEP asked the wrong questions, the CER will provide the wrong answers. If the CEP didn’t define acceptance criteria, the CER will lack a defensible basis for conclusions.
I’ve seen manufacturers spend months writing a CER, only to have it rejected because the underlying CEP was flawed. At that point, fixing the problem means rewriting both documents.
That’s why the CEP matters so much.
It’s not bureaucracy. It’s the foundation. If you get the CEP right, the CER becomes a logical progression of evidence, analysis, and conclusions. If you get the CEP wrong, the CER becomes an exercise in justification and retrofitting.
Notified Bodies can tell the difference.
A well-constructed CEP makes the CER easier to write, easier to defend, and more likely to pass review. The time you invest in the plan directly reduces risk downstream.
Final Thoughts on Getting the CEP Right
Most deficiencies in clinical evaluation start with the plan.
Not because manufacturers are careless, but because the CEP is often treated as a formality. It’s written to satisfy a checklist rather than to guide a meaningful evaluation process.
But the CEP is where you demonstrate clinical thinking. It’s where you show the Notified Body that you understand your device, your risks, and your evidence requirements.
When the plan is clear, logical, and aligned with MDR expectations, the evaluation that follows becomes defensible. When the plan is vague or misaligned, everything downstream becomes harder.
So before you start collecting data, before you draft your CER, before you submit anything—get the CEP right.
It’s the one document that determines whether everything else holds together.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and otherwise at the frequency defined in the post-market surveillance plan. For class III and implantable devices, MDR Article 61(11) requires updates at least annually.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745 (MDR), Article 61 and Annex XIV Part A
– MDCG 2020-5 Rev.1: Clinical Evaluation – Equivalence
– MDCG 2020-13: Clinical Evaluation Assessment Report Template





