The Evidence Hierarchy That Keeps Tripping Up Your CER
Last month, I reviewed a clinical evaluation report where the manufacturer dedicated 40 pages to laboratory testing and animal studies, then dismissed the single available clinical study in two paragraphs. When the Notified Body rejected the file, the manufacturer was genuinely confused. They believed they had followed the proper evidence hierarchy. They had not.
This is not an isolated case. I see this pattern repeatedly. Manufacturers invest enormous effort documenting bench testing, biocompatibility studies, and performance data. Then they treat clinical evidence as supplementary material.
The confusion comes from a fundamental misreading of what MDCG 2020-6 actually says about evidence hierarchy.
What MDCG 2020-6 Actually Establishes
MDCG 2020-6 exists to clarify what “sufficient clinical evidence” means under MDR Article 61(1). The document establishes a hierarchy, but not the one most manufacturers think they are following.
The hierarchy is not about volume. It is not about which data type you should document most extensively. It is about which evidence carries the most weight when demonstrating conformity with General Safety and Performance Requirements.
Here is where the misreading happens.
Manufacturers see that MDCG 2020-6 lists different data types: clinical data from the device itself, clinical data from equivalent devices, preclinical data, bench testing, literature on similar technologies. They interpret this as a checklist. They think: “If I include all of these, my evidence base is sufficient.”
This is wrong.
Sufficiency is not about including all data types. It is about whether the evidence you present can actually demonstrate that the device is safe and performs as intended in the clinical context where it will be used.
The hierarchy MDCG 2020-6 describes is a hierarchy of relevance. Clinical data from your device, used in the target population, under conditions of normal use, is the most relevant evidence. Everything else is context or support.
When you flip this around, when you treat preclinical data as the foundation and clinical data as optional decoration, you create a file that cannot withstand review.
The Real Hierarchy: Relevance, Not Volume
Let me be explicit about what the guidance establishes.
At the top of the hierarchy sits clinical data from the subject device. This means data generated from your device, in humans, under conditions that reflect intended use. This is the only evidence that directly demonstrates clinical safety and performance.
If you have this data, it forms the core of your clinical evaluation. All other data types support, contextualize, or help interpret this clinical data.
If you do not have this data, you must justify why equivalence to another device is valid, and then you rely on clinical data from that equivalent device. The equivalence demonstration becomes critical. Your preclinical and bench data now serve to support the equivalence claim.
If you have neither direct clinical data nor a valid equivalence claim, you are in a position where clinical evidence is insufficient by definition. You may have mountains of bench data. You may have animal studies. You may have biocompatibility reports and performance testing. None of this, alone, constitutes sufficient clinical evidence.
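The relevance ordering described above can be sketched as a small decision helper. This is purely illustrative: the names (`EvidenceTier`, `can_anchor_sufficiency`) are mine, not from the guidance, but the sketch captures the core rule that preclinical data alone can never anchor a sufficiency claim.

```python
from enum import IntEnum

class EvidenceTier(IntEnum):
    """Relevance hierarchy; a higher value means more relevant (illustrative only)."""
    PRECLINICAL_ONLY = 1            # bench, animal, biocompatibility data
    EQUIVALENT_DEVICE_CLINICAL = 2  # clinical data relied on via an equivalence claim
    SUBJECT_DEVICE_CLINICAL = 3     # clinical data from the device itself, target population

def can_anchor_sufficiency(tier: EvidenceTier, equivalence_valid: bool = False) -> bool:
    """Can this tier serve as the primary evidence for 'sufficient clinical evidence'?

    Preclinical data alone never can; equivalent-device clinical data only
    counts if the equivalence claim itself is valid.
    """
    if tier is EvidenceTier.SUBJECT_DEVICE_CLINICAL:
        return True
    if tier is EvidenceTier.EQUIVALENT_DEVICE_CLINICAL:
        return equivalence_valid
    return False  # PRECLINICAL_ONLY: insufficient by definition
```

Note that the function deliberately has no "volume" parameter: adding more bench reports never changes the outcome, which is exactly the point of the hierarchy.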
Manufacturers present extensive preclinical datasets and assume this compensates for weak or absent clinical data. Notified Bodies reject these files because preclinical data, no matter how comprehensive, cannot demonstrate clinical safety and performance on its own.
Here is the practical consequence: when a Notified Body reviews your CER, they start by looking for clinical data. If they do not find it, or if they find it buried under fifty pages of bench testing, they immediately flag a gap.
The question they ask is not: “Did you do enough testing?” The question is: “Can you demonstrate clinical safety and performance with the evidence you have?”
If the answer depends entirely on extrapolation from preclinical data, the answer is no.
Why Manufacturers Get This Wrong
The confusion has roots in how medical device development historically worked under the Medical Device Directive. Many manufacturers built their quality systems and documentation practices in an era where extensive preclinical testing and a single clinical investigation (often small and non-pivotal) were enough to demonstrate conformity.
Under MDR, this no longer works. The threshold for sufficient clinical evidence has risen significantly. But internal processes, templates, and mental models have not always caught up.
I see manufacturers who structure their CERs like this: Section 1 is device description. Section 2 is preclinical data (extensive). Section 3 is bench testing (extensive). Section 4 is biocompatibility. Section 5 is literature review (often irrelevant). Section 6 is “clinical data” (two paragraphs summarizing a single study or stating that no clinical investigation is planned).
This structure reveals the underlying assumption: clinical evaluation is about documenting everything we tested, with clinical data as one item on the list.
But MDCG 2020-6 describes a different logic. Clinical evaluation is about answering whether the device is safe and performs as intended in clinical use. Preclinical data informs this question. It does not answer it.
When I review files structured the wrong way, I often find that the preclinical sections are exhaustive and well-referenced. The clinical section is thin, generic, and unsupported. The appraisal is missing. The connection between what was tested in the lab and what will happen in the patient is never made explicit.
Notified Bodies see this immediately.
What This Means for Your Appraisal
The most common place this misreading causes failure is in the appraisal. Manufacturers document data, but they do not appraise it according to its relevance.
Appraisal means you assess the quality, relevance, and weight of each piece of evidence. You explain how it contributes to demonstrating safety and performance. You identify gaps. You explain how you address those gaps.
If you treat all data types as equally relevant, your appraisal loses structure. You end up with statements like: “Bench testing shows the device meets performance specifications. Biocompatibility testing shows no adverse tissue reaction. Literature review shows the technology is well-established.”
These statements may all be true. But they do not appraise clinical evidence. They describe preclinical results.
A proper appraisal following the MDCG 2020-6 hierarchy looks different. It starts with clinical data. It assesses whether that data is sufficient to demonstrate safety and performance for the intended purpose, in the target population, considering the risk profile.
If clinical data is limited, the appraisal explains why. It explains what equivalence claim or what preclinical-to-clinical extrapolation is being used. It explains the limitations of that approach. It explains what post-market data will be collected to address the limitations.
The appraisal is where the hierarchy becomes visible. If your appraisal gives equal weight to bench data and clinical data, you are not following the hierarchy. Clinical data must be appraised first and most critically. Everything else is appraised in relation to it.
This is also where PMCF planning connects. If your clinical data is limited, your PMCF plan must address that limitation. MDCG 2020-6 is explicit: insufficient clinical evidence at the time of initial conformity assessment must be compensated by a robust PMCF plan.
But I see manufacturers who write PMCF plans that are generic and disconnected from the actual evidence gaps. They plan surveys and complaint monitoring. They do not plan studies that generate the clinical data they are missing.
This happens because they never correctly identified the gap in the first place. They thought they had sufficient evidence because they had extensive preclinical data.
How to Apply the Hierarchy Correctly
Start your clinical evaluation with a clear question: What clinical data do I have from this device, used in the target population, under normal conditions?
If the answer is “none” or “limited,” you immediately know you have a gap. This gap must be addressed through equivalence or through a plan to generate data.
Then assess your preclinical data in relation to that gap. Ask: Does this preclinical data support an equivalence claim? Does it allow reasonable extrapolation to clinical outcomes? Does it help predict clinical safety and performance, or does it only demonstrate technical function?
When you write your appraisal, structure it according to the hierarchy. Address clinical data first. Explain its scope and limitations. Then explain how preclinical data supports or contextualizes the clinical data. Then explain what remains uncertain and how PMCF will address it.
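As a rough illustration of that ordering, here is a sketch that sorts appraisal items so clinical data always leads. The `AppraisalItem` structure and the ranking keys are hypothetical, invented for this example; they are not terms from MDCG 2020-6.

```python
from dataclasses import dataclass

# Hypothetical relevance ranking: lower rank = appraised first.
RELEVANCE_RANK = {
    "subject_device_clinical": 0,
    "equivalent_device_clinical": 1,
    "literature_similar_devices": 2,
    "preclinical_bench": 3,
}

@dataclass
class AppraisalItem:
    source: str       # one of the keys in RELEVANCE_RANK
    description: str

def order_for_appraisal(items):
    """Sort evidence so the most clinically relevant data is appraised first."""
    return sorted(items, key=lambda item: RELEVANCE_RANK[item.source])
```

For example, a file holding `AppraisalItem("preclinical_bench", "fatigue testing")` and `AppraisalItem("subject_device_clinical", "pivotal study")` would have the pivotal study surface first, mirroring the structure a reviewer expects to see.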
This structure makes your reasoning visible. It shows the Notified Body that you understand what evidence matters most. It shows that you are not hiding behind preclinical testing.
Manufacturers assume that demonstrating technical performance in the lab is equivalent to demonstrating clinical performance. Reviewers reject this logic. Clinical performance must be demonstrated in clinical use, not inferred from bench testing.
If you are using equivalence, apply the hierarchy to your equivalence data. The clinical data from the equivalent device becomes your primary evidence. Your own preclinical data is used to demonstrate that your device is sufficiently similar to rely on that clinical data.
If your equivalence claim depends on stating that both devices passed the same bench test, you do not have a valid equivalence claim. You have two devices that meet a technical standard. This does not demonstrate clinical equivalence.
What Notified Bodies Look For
When a Notified Body reviews your CER against MDCG 2020-6, they assess whether you have correctly applied the evidence hierarchy.
They look for clinical data. If it is present, they assess its relevance, quality, and sufficiency. If it is absent or limited, they look for a clear explanation and a credible plan to address the gap.
They do not look at your preclinical data volume and say: “This is thorough, so the clinical evidence is sufficient.” They look at preclinical data and ask: “How does this support the clinical claim?”
If your CER does not make that connection explicit, if it presents preclinical data as standalone proof of safety and performance, the file will be rejected.
I have seen manufacturers surprised by this. They invested significant resources in testing. They documented everything. They met standards. But they never demonstrated clinical safety and performance.
The rejection often includes language like: “The clinical evaluation does not provide sufficient clinical evidence as per MDR Annex XIV and MDCG 2020-6. The manufacturer relies primarily on preclinical data without adequate justification for the absence of clinical data.”
This is not a technicality. It is a fundamental failure to apply the correct evidence hierarchy.
Fixing Files That Got It Wrong
If you are revising a CER after this type of rejection, the fix is not to add more preclinical data. The fix is to restructure the appraisal around clinical evidence.
If you have clinical data, move it to the front. Make it the core of your argument. Use preclinical data to support interpretation, not to substitute for clinical evidence.
If you do not have clinical data, acknowledge this explicitly. Explain whether you are relying on equivalence or planning to generate data through PMCF. Justify the approach. Explain the limitations. Explain the risk management.
Do not try to argue that extensive preclinical testing compensates for absent clinical data. It does not. MDCG 2020-6 is clear on this.
If your device is low-risk, if the technology is well-established, if equivalence is valid, then the absence of new clinical data may be acceptable. But you must demonstrate why. You must show that existing clinical knowledge, combined with your preclinical data, is sufficient for the specific claims you are making.
This is a reasoned argument, not a documentation exercise.
The Bigger Picture
The evidence hierarchy in MDCG 2020-6 reflects a regulatory philosophy. The regulation is focused on clinical outcomes. It is focused on whether devices are safe and effective when used by practitioners on patients in real conditions.
Laboratory performance and preclinical testing are necessary. But they are not sufficient. They inform clinical evaluation. They do not replace it.
When manufacturers misread the hierarchy, they produce files that look comprehensive but lack clinical substance. These files take months to compile. They pass internal review. They fail Notified Body review.
The way to avoid this is to start with the hierarchy in mind. Before you write the CER, assess your evidence according to relevance. Identify what clinical data you have. Identify what gaps remain. Structure your evaluation around clinical evidence, not around what you tested most extensively.
This changes how you allocate effort. It changes how you plan studies. It changes how you write appraisals.
It also changes how you communicate with Notified Bodies. When your CER clearly applies the correct hierarchy, when it addresses clinical evidence first and appraises preclinical data in relation to clinical claims, reviewers see that you understand the regulation.
They may still ask questions. They may still request more data. But the foundation is solid.
Files that ignore the hierarchy, that treat preclinical data as the core and clinical data as optional, fail before the discussion even begins.
The evidence hierarchy is not a suggestion. It is the logical structure the regulation demands. Apply it explicitly in your CER. Make it visible in your appraisal. Show that clinical evidence is your foundation, and everything else supports it.
If you are preparing a submission, review your CER against this logic. Ask whether a reviewer, reading your appraisal, will immediately see clinical data as the primary evidence. If the answer is no, restructure.
This is not about rewriting the entire document. It is about reordering the logic. It is about making the hierarchy explicit.
Most of the data you need is probably already in your file. It is just presented in the wrong order, with the wrong emphasis.
Fix the structure. Fix the appraisal. The rejection you avoid is worth the effort.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– Regulation (EU) 2017/745 (MDR), Article 61, Annex XIV
– MDCG 2020-6: Regulation (EU) 2017/745: Clinical evidence needed for medical devices previously CE marked under Directives 93/42/EEC or 90/385/EEC (legacy devices)
Related Resources
Read our complete guide: Clinical Evaluation Report (CER) under EU MDR
Or explore the Complete Guide to Clinical Evaluation under EU MDR