The Clinical Evaluation Plan: Where Most Projects Go Wrong
I review clinical evaluation plans weekly. Most get rejected not because they lack detail, but because they answer the wrong questions. The plan is approved internally, sent to the Notified Body, and comes back with deficiencies that reveal a fundamental misunderstanding of what the plan is supposed to do. This happens more often than you think.
The Clinical Evaluation Plan is not a formality. It is not a summary of what you intend to include in the clinical evaluation report. It is not a literature search protocol.
It is the documented reasoning for how you will prove safety and performance under MDR Article 61 and Annex XIV. Everything that follows depends on how well you establish that reasoning upfront.
When the plan is weak, the entire clinical evaluation collapses under scrutiny. And the weakness is rarely obvious until someone who knows what to look for opens the file.
What the Plan Actually Needs to Establish
The Clinical Evaluation Plan must answer one question clearly: How will you generate sufficient clinical evidence to demonstrate compliance with the general safety and performance requirements?
That means defining your clinical strategy. What pathway are you taking? Clinical data from your device, equivalence, or a combination? What literature will you use? What post-market data do you need?
Most plans describe activities without explaining the logic that connects them to the safety and performance requirements. You list a literature search. You mention PMCF. You reference standards. But you do not explain how these elements together will build the evidence base needed for conformity.
The plan describes what will be done but not why it is sufficient. There is no reasoning that connects the planned activities to the specific safety and performance claims of the device.
The reviewer needs to see that you understand what evidence is required and that your plan is designed to generate or gather that evidence. Without that connection, the plan is procedural, not strategic.
The Equivalence Trap
If you claim equivalence, the Clinical Evaluation Plan must define how equivalence will be demonstrated. This is where I see the most serious gaps.
You identify an equivalent device. You state that clinical data from that device will be used. But you do not explain how you will demonstrate equivalence across the three characteristics defined in MDCG 2020-5 and Annex XIV: technical, biological, and clinical.
You do not define what clinical data from the equivalent device is needed. You do not explain how you will access it. You do not state what happens if the data is insufficient or if equivalence cannot be fully justified.
The plan assumes equivalence is straightforward. It is not.
Equivalence is not a shortcut. It is a detailed scientific argument that must be justified step by step. The plan must explain how each element of that argument will be addressed.
If you cannot show in the plan how you will prove equivalence, you will not be able to justify it in the report. And the Notified Body will raise this before you even submit the clinical evaluation report.
Literature Search: More Than a Protocol
The literature search is part of the plan, but it is not the plan itself. Yet many Clinical Evaluation Plans read like extended literature search protocols.
The plan must explain what role the literature will play in your clinical evaluation. Are you using it to establish state of the art? To identify risks? To support equivalence? To supplement your own clinical data?
If the literature is your primary source of clinical evidence, the plan must explain why that is acceptable. What makes the available literature sufficient? What gaps exist? How will those gaps be addressed?
Most plans skip this reasoning. They describe the search strategy, the databases, the inclusion criteria. But they do not explain why the expected results will be adequate for demonstrating safety and performance.
The literature search protocol is included, but there is no assessment of whether the literature is expected to provide sufficient clinical evidence. The plan does not address what happens if the search yields limited data.
A strong plan states upfront what the literature needs to show and what you will do if it does not. That signals to the reviewer that you have thought through the limitations of your evidence base before starting the evaluation.
PMCF: Planned or Reactive?
Under MDR, post-market clinical follow-up is required for every device unless the manufacturer documents a justification for why it is not applicable. The Clinical Evaluation Plan must define the PMCF strategy clearly.
But in most plans, PMCF is mentioned generically. You state that a PMCF plan will be developed. You mention periodic safety update reports. You reference ongoing surveillance. But you do not define what clinical questions the PMCF is designed to answer.
The PMCF is not separate from the clinical evaluation. It is the continuation of it. The plan must explain what uncertainties remain after the pre-market evaluation and how PMCF will address them.
If you are relying on equivalence, what will you monitor to confirm that performance in real-world use matches the equivalent device? If you have limited clinical data, what endpoints will PMCF track? If there are identified risks, how will PMCF provide ongoing evidence of acceptability?
The PMCF strategy must be defined in the Clinical Evaluation Plan, not deferred entirely to the separate PMCF plan document. The two are linked, and the reviewer needs to see that linkage upfront.
When the plan does not define the PMCF strategy clearly, it signals that PMCF is being treated as a compliance checkbox rather than a clinical tool. That creates problems later when the Notified Body reviews the PMCF plan and finds no connection to the pre-market evaluation.
Appraisal Criteria: The Missing Element
The Clinical Evaluation Plan must define how you will appraise the clinical data. This is one of the most overlooked requirements.
Appraisal is not the same as data extraction. It is the critical assessment of whether the data is relevant, reliable, and sufficient for your device.
Most plans mention that data will be appraised according to established methodologies. But they do not define the criteria. What makes a study relevant to your device? What level of evidence is required? How will you handle conflicting data? What thresholds will you use to determine sufficiency?
Without predefined appraisal criteria, the clinical evaluation becomes subjective. The reviewer cannot assess whether your conclusions are justified because there is no framework for how you evaluated the evidence.
The plan states that data will be appraised but does not define the appraisal criteria. The clinical evaluation report then presents conclusions without a transparent methodology for how those conclusions were reached.
This is a recurring deficiency in audits. The appraisal criteria must be defined upfront in the plan, not invented during the evaluation. That is how you demonstrate objectivity and rigor.
The Clinical Evaluation Plan Is a Contract
When you submit the Clinical Evaluation Plan to the Notified Body, you are making a commitment. You are saying: This is how I will prove safety and performance.
If the plan is approved and you later deviate from it without justification, you create a compliance issue. If the plan is vague and you fill in the gaps later, the reviewer will question whether your clinical evaluation was truly planned or assembled reactively.
The plan must be specific enough that someone reading it can assess whether your approach is sound before you execute it. That level of specificity is what most plans lack.
I see plans that could describe almost any device in the same class. There is nothing device-specific. Nothing that explains why this particular clinical strategy is appropriate for this particular device with these particular claims.
A strong Clinical Evaluation Plan is device-specific and claim-specific. It explains the logic of the clinical strategy in a way that makes the plan non-transferable to another device.
That is the standard you should aim for. If your plan could be used for a similar device with minor edits, it is not specific enough.
What Happens When the Plan Is Weak
A weak Clinical Evaluation Plan does not stop the project. It creates delays and rework further downstream.
The clinical evaluation report is written based on the plan. If the plan was vague, the report will reflect that vagueness. The Notified Body raises deficiencies. You revise the report. But the underlying issue is that the strategy was never clearly defined.
Now you are in a cycle of reactive fixes. The plan is revised. The report is revised. The PMCF plan is revised. Each revision takes weeks. The certification timeline extends. And the root cause was a weak plan at the beginning.
I have seen projects lose six months because the Clinical Evaluation Plan was approved internally without critical review. Everyone assumed it was sufficient because it followed a template. But templates are not strategies.
The time you invest in developing a strong plan upfront is the time you save in avoiding deficiencies later. That is not theory. That is what happens in real submissions.
Final Thought
The Clinical Evaluation Plan is where you prove that you understand what clinical evidence is required and how you will generate it. It is not a formality. It is not a checklist. It is the foundation of your entire clinical evaluation.
Most projects go wrong here because the plan is written to satisfy a requirement rather than to guide the clinical evaluation. The difference is visible to anyone who reviews the plan critically.
If you are developing a Clinical Evaluation Plan now, ask yourself: Does this plan explain why this clinical strategy is sufficient for this device? If not, the plan is not ready.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, following PMCF activities, and when the device or its intended purpose changes. Beyond that, the update frequency must be justified by the device's risk profile: at least annually for high-risk devices such as class III and implantable devices, and typically every two to five years for lower-risk, well-established devices.
What causes CER rejection by Notified Bodies?
Common reasons include an inadequate equivalence demonstration, insufficient clinical data to support the claims, a poorly structured state-of-the-art (SOTA) analysis, a missing gap analysis, and the lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745, Article 61 and Annex XIV
– MDCG 2020-5, Clinical Evaluation – Equivalence
– MDCG 2020-13, Clinical Evaluation Assessment Report Template