The clinical evidence summary: where reviewers form their first judgment
Most manufacturers bury their evidence summary deep in Annex II. By the time the reviewer finds it, they have already formed an opinion. And if that opinion is doubt, every page that follows will be read through a lens of skepticism.
The clinical evidence summary is not a formality. It is not a recap placed at the end for convenience. It is the first real test of whether your clinical evaluation holds together.
Reviewers know this. They turn to it before reading the full clinical evaluation report. They use it to assess completeness, logic, and credibility. If the summary is vague, inconsistent, or incomplete, the rest of the documentation will be scrutinized with heightened suspicion.
Yet in most technical files, the clinical evidence summary is treated as an afterthought. Copied from earlier sections. Written by someone who did not conduct the evaluation. Assembled in the final days before submission.
That approach costs manufacturers months in review cycles.
What the clinical evidence summary actually is
According to MDR Annex II Section 4 and MDCG 2020-6, the clinical evidence summary must provide a standalone overview of the clinical data supporting the safety and performance of the device. It is not a brief introduction. It is not a table of contents.
It must allow a reviewer to understand three things without reading the full CER:
- What clinical data exists for the device
- How that data addresses the intended purpose and risks
- What conclusions can be drawn about safety and performance
The summary is placed in Section 4 of the technical documentation. It precedes the clinical evaluation report. This is intentional. The regulation expects the reviewer to read the summary before diving into the full evaluation.
This changes the function of the summary. It is not a recap. It is a filter. It determines how the reviewer approaches the rest of the file.
The clinical evidence summary is read first. That means it shapes the reviewer’s expectations before they open the CER. If the summary is unclear, every ambiguity in the CER will be interpreted negatively.
Why reviewers read the summary before the CER
Reviewers work under time constraints. A technical file for a Class IIb or Class III device can contain thousands of pages. They cannot afford to read everything in sequence and hope the logic emerges.
Instead, they scan for coherence. They look for red flags. They form a working hypothesis about whether the file is compliant.
The clinical evidence summary is where that hypothesis forms.
If the summary is well-structured, specific, and backed by clear data sources, the reviewer proceeds with the assumption that the manufacturer understands the regulation. If the summary is vague, generic, or evasive, they proceed with skepticism.
This is not subjective. Reviewers are trained to identify gaps in logic, missing data, and weak justifications. The summary reveals those weaknesses faster than any other section.
A common deficiency I see: manufacturers write the summary as if the reviewer already believes the device is safe. They describe conclusions without explaining how those conclusions were reached. This creates an immediate credibility gap.
The summary must be self-sufficient
The reviewer should not need to cross-reference the CER to understand your clinical evidence. They should not need to flip back and forth between documents to follow the logic.
If the summary forces them to do that, it fails.
A self-sufficient summary includes:
- The scope of the clinical evaluation (device description, intended purpose, claims)
- The data sources used (clinical investigations, literature, PMCF, equivalence data)
- The adequacy of that data for addressing risks and intended use
- The conclusions on safety and performance, with supporting rationale
Each of these elements must be described with enough specificity that the reviewer can assess whether the evaluation is complete.
Manufacturers write: “The clinical data demonstrates that the device is safe and performs as intended.” The reviewer asks: What data? Which risks? What endpoints? The summary provides the claim but not the evidence.
What makes a clinical evidence summary credible
Credibility in the summary comes from specificity. Not length. Not repetition. Specificity.
A credible summary tells the reviewer exactly what data was evaluated and how it supports each claim.
Specify the data sources with precision
Do not write: “A literature review was conducted.”
Write: “A systematic literature review was conducted in accordance with MDCG 2020-6, covering databases X, Y, Z, with search terms aligned to the intended purpose. The search identified 127 publications, of which 18 were deemed relevant after appraisal. These studies included data from 3,422 patients over a follow-up period of 12 to 36 months.”
The difference is not cosmetic. The first version signals uncertainty. The second version signals control.
Reviewers are trained to notice this. If you cannot describe your data sources with precision in the summary, they assume you do not have a clear understanding of your evidence base.
Link data to specific claims and risks
Every intended use claim and every identified risk must be addressed by clinical data. The summary must show that linkage explicitly.
For example, if your device is intended for long-term implantation, the summary must state which studies provide data on long-term outcomes, over what duration, and for how many patients.
If your risk analysis identifies a risk of infection, the summary must state which data sources address infection rates, how those rates compare to acceptable thresholds, and whether any mitigation measures are in place.
This is where manufacturers lose reviewers. They describe the data. They describe the risks. But they do not connect them.
The reviewer is left to infer the connection. And if the connection is not obvious, they will assume it does not exist.
The clinical evidence summary is not a summary of the CER. It is a demonstration that your clinical data adequately covers your claims and risks. That demonstration must be explicit.
State conclusions with supporting rationale
Do not write: “The device meets the requirements for safety and performance.”
Write: “The clinical data demonstrates that the device achieves the intended performance with infection rates below 2%, consistent with the state of the art for similar devices. No unanticipated serious adverse events were identified in the clinical investigation (n=150) or post-market surveillance (n=2,340 cumulative device-years). The benefit-risk profile is favorable for the intended patient population.”
The second version gives the reviewer something to assess. It includes numbers, benchmarks, and context. It does not ask the reviewer to trust. It gives them the basis to verify.
Where manufacturers go wrong
The most common mistake is treating the clinical evidence summary as a regulatory checkbox. Manufacturers assume that because the summary is short, it requires less effort.
The opposite is true. Writing a clear, specific, and credible summary requires more precision than writing the full CER. Every sentence must carry weight. Every claim must be justified.
Generic language that applies to any device
I see summaries that could describe any device in the same class. They use phrases like “extensive clinical data” and “well-established technology” without defining what that means.
Reviewers read hundreds of technical files. They recognize generic language immediately. It signals that the manufacturer does not have a deep understanding of their own evidence.
No quantitative data in the summary
Some manufacturers avoid numbers in the summary. They worry that including specific data will invite scrutiny.
But the absence of numbers invites more scrutiny. It suggests the data does not support the claims.
If you conducted a clinical investigation with 150 patients, say so. If your literature review identified 18 relevant studies, say so. If your PMCF includes 2,340 device-years of follow-up, say so.
Quantitative data gives the reviewer confidence that you have done the work.
Disconnection between the summary and the CER
The summary and the CER must align. If the summary states that the device was evaluated through equivalence to a predicate device, the CER must provide a detailed equivalence demonstration.
If the summary states that clinical investigations were conducted, the CER must include the protocols, results, and analysis.
Reviewers check for alignment. If they find inconsistencies, they assume the evaluation is incomplete.
The clinical evidence summary states: “The device is supported by clinical investigations and literature data.” The CER only includes a literature review. The clinical investigation is referenced but not included. The reviewer issues a deficiency requesting the missing data.
How to structure the summary for clarity
A well-structured clinical evidence summary follows a logical sequence:
1. Device and scope: What is being evaluated, for what purpose, and for which patient population.
2. Data sources: What clinical data exists, from what sources, covering how many patients or device-years.
3. Adequacy of data: How the data addresses the intended use, claims, and identified risks.
4. Conclusions: What the data demonstrates about safety and performance, with supporting rationale.
This structure mirrors the logic of the clinical evaluation. It allows the reviewer to follow the reasoning without guesswork.
Each section should be concise but specific. The goal is not to repeat the CER. The goal is to give the reviewer enough information to assess whether the evaluation is complete before they read the full report.
The consequence of a weak summary
If the clinical evidence summary is unclear, the reviewer will read the CER with the assumption that the evaluation is incomplete. Every ambiguity will be flagged. Every gap will be questioned.
This does not mean the evaluation is wrong. It means the reviewer does not trust it yet. And earning that trust after losing it is far harder than establishing it from the start.
A strong summary sets the tone. It tells the reviewer that the manufacturer understands the regulation, has conducted a thorough evaluation, and can communicate their findings clearly.
A weak summary does the opposite. It signals uncertainty, lack of rigor, or incomplete work.
The difference is not subjective. It shows up in review timelines, deficiency letters, and approval outcomes.
The clinical evidence summary is the first judgment. Make it count.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References:
– Regulation (EU) 2017/745 (MDR), Annex II, Section 4
– MDCG 2020-6: Guidance on sufficient clinical evidence for legacy devices