Why your CER conclusion reads like a compliance checkbox
I once reviewed a CER where the conclusion section was copied across three different devices with only the product name changed. The manufacturer was surprised when the Notified Body rejected it. They had followed a template. They had checked all the boxes. But they had missed the entire point of what a conclusion must do.
In This Article
- What the MDR actually requires from the conclusion
- The structure of a conclusion that survives review
- What makes a conclusion unconvincing
- The role of the evaluator’s judgment
- The connection to PMCF planning
- What reviewers look for in the conclusion
- The difference between a summary and a conclusion
- How to write a conclusion that survives review
- Why this matters more than manufacturers realize
The conclusion section of a Clinical Evaluation Report is not a summary. It is not a restatement of what was already said. It is the reasoned answer to the fundamental question the MDR requires you to address: does the clinical evidence demonstrate that the device meets its intended purpose with an acceptable benefit-risk profile?
Most conclusion sections fail because they are written as if they are the last task on a checklist. They restate findings. They list compliance statements. They avoid clear reasoning. And when a Notified Body reviewer reaches that section, they find no actual conclusion at all.
What the MDR actually requires from the conclusion
Under Annex XIV Part A of the MDR, the clinical evaluation must include a clear statement on the benefit-risk determination, the acceptability of residual risks, and whether the device meets its intended purpose. The conclusion is where this determination is explicitly stated and justified.
This is not a formality. Article 61(1) of the MDR requires that clinical evidence demonstrates safety and performance. The conclusion is the section where you state whether that requirement has been met, and on what basis.
MDCG 2020-6 on sufficient clinical evidence reinforces this. The conclusion must reflect whether the clinical data package is sufficient to support the intended use and expected lifetime of the device. It must address gaps, residual uncertainties, and how those will be managed through PMCF.
But here is what happens in practice.
Most conclusions are written as if the goal is to avoid saying anything that could be challenged. They are vague. They use passive constructions. They state that the device “appears” to be safe, or that clinical data “supports” the claims, without ever clearly stating the basis for that support.
Consider a conclusion that states only: “The clinical data reviewed demonstrate that Device X is safe and performs as intended.” This tells the reviewer nothing. What data? What comparators? What risks were considered? What uncertainties remain?
A convincing conclusion does not avoid complexity. It acknowledges it and explains how the determination was made despite it.
The structure of a conclusion that survives review
A strong conclusion section follows a clear internal logic. It does not jump to the final determination. It builds toward it.
First, it restates the scope of the evaluation clearly. What was the intended purpose? What were the clinical claims? What patient population and conditions of use were considered?
This is not repetition. It is framing. The reviewer needs to see that the conclusion is addressing the exact scope that was defined at the beginning of the CER. Any mismatch here signals a problem.
Second, it summarizes the evidence base that was used to make the determination. Not a list of all studies. A clear statement of what types of evidence were available, how they were weighted, and what their combined contribution was.
For example: clinical data from equivalent devices, bridged through an equivalence demonstration. Literature data on specific surgical techniques. PMCF data from earlier device generations. Each category is acknowledged with its role in the overall appraisal.
Third, it addresses the benefit-risk profile explicitly. Not just a statement that benefits outweigh risks. A structured explanation of what the benefits are, what the risks are, and how the balance was assessed in the context of the intended use.
This is where many conclusions fail. They state the result without showing the reasoning.
A convincing conclusion explicitly states what level of evidence was available for each clinical claim and how gaps or limitations were addressed through clinical risk management and PMCF planning.
Fourth, it addresses residual uncertainties and how they will be managed. No clinical evaluation is perfect. No dataset is complete. The question is whether the gaps are acceptable and whether there is a plan to address them.
A conclusion that pretends there are no gaps is less credible than one that names them clearly and explains why they do not prevent a positive determination.
What makes a conclusion unconvincing
The clearest sign of a weak conclusion is that it could be written without reading the rest of the CER. If the conclusion is generic enough to apply to any device in the same category, it is not a conclusion. It is a template.
Another signal is passive language. “Clinical evidence was reviewed.” “The device was found to be acceptable.” These constructions obscure who made the determination and on what basis.
A third issue is the absence of specific references. A strong conclusion references specific sections of the CER where the supporting analysis is found. It connects the determination to the data.
For example: “The equivalence analysis in Section 4.2 demonstrated that the equivalent device shares the same technical, biological, and clinical characteristics. The literature appraisal in Section 5 identified no additional safety concerns. The benefit-risk determination in Section 7 concluded that residual risks are acceptable in the context of the intended surgical use.”
This is not about length. It is about traceability. The reviewer should be able to verify every statement in the conclusion by following the references back to the analysis.
What often happens instead is that the conclusion introduces new reasoning that was not developed earlier in the CER. This creates doubt. If the reasoning was sound, why was it not presented in the body of the report?
The role of the evaluator’s judgment
One thing that distinguishes a convincing conclusion from a generic one is the presence of the evaluator’s professional judgment. The conclusion is not just a summary of data. It is an expert determination based on that data.
This does not mean opinion. It means applying clinical reasoning to the evidence and explaining how that reasoning led to the conclusion.
For example, if there is limited clinical data on a specific subpopulation, the evaluator might explain why data from the general population is sufficient given the device’s mechanism of action and the similarity of the anatomical conditions.
Or if there is a known risk that appears in the literature, the evaluator might explain why the risk is controlled through device design or labeling, and why the residual risk is acceptable compared to the clinical benefit.
This reasoning should be transparent. The reviewer may disagree with the judgment, but they should be able to follow the logic.
The conclusion section is where the evaluator’s clinical expertise is most visible. It is not about stating facts. It is about interpreting evidence in context and making a defensible determination.
What weakens this is when the conclusion reads as if it was written by someone who did not perform the evaluation. The language is too cautious. The statements are hedged. There is no clear authorship.
A Notified Body reviewer can sense this. They are looking for confidence backed by reasoning, not caution backed by ambiguity.
The connection to PMCF planning
A strong conclusion does not end with the current determination. It connects to what comes next. It identifies what needs to be monitored, what questions remain open, and how PMCF will address them.
This is required by MDCG 2020-7 on PMCF planning. The PMCF plan must be based on the gaps and uncertainties identified in the clinical evaluation. The conclusion section is where those gaps are formally acknowledged.
For example, if the literature data is limited to short-term outcomes, the conclusion should state that long-term performance will be monitored through PMCF. If there is limited data on a specific patient subgroup, the PMCF plan should include targeted surveillance for that group.
What makes a conclusion unconvincing is when it states that the evidence is sufficient without acknowledging any limitations, and then the PMCF plan lists multiple objectives that suggest the evidence was not sufficient after all.
The two documents must align. The conclusion should prepare the reader for the PMCF plan, not contradict it.
What reviewers look for in the conclusion
When a Notified Body reviewer reads the conclusion, they are checking several things. First, does it answer the question that the CER was supposed to address? Is there a clear statement on whether the device meets its intended purpose with an acceptable benefit-risk profile?
Second, is the reasoning traceable? Can the reviewer follow the logic back to the data and analysis presented earlier in the report?
Third, are the limitations acknowledged? Does the conclusion reflect an honest appraisal, or is it overstating what the data can support?
Fourth, is the conclusion specific to this device, or is it generic? Could it have been written for any similar device without reading this particular CER?
And fifth, does the conclusion connect to the clinical risk management and PMCF planning? Are there gaps that were identified but not addressed?
Consider a conclusion that states the device is safe and effective but does not explain how that determination was reached given the specific evidence available for this device. The reviewer is left to guess at the reasoning, which almost always leads to requests for clarification or rejection.
These are not abstract criteria. These are the questions that get asked during audits and reviews. If the conclusion does not address them proactively, the review process will force them to be addressed retroactively.
The difference between a summary and a conclusion
One of the most common mistakes is treating the conclusion section as a summary. A summary restates what was said. A conclusion makes a determination based on what was said.
A summary says: “Literature data was reviewed. Clinical data from the predicate device was analyzed. Risks were assessed.”
A conclusion says: “Based on the literature appraisal and equivalence analysis, the clinical evidence is sufficient to demonstrate that the device meets its intended purpose. The identified risks are controlled through design and labeling. The residual benefit-risk profile is acceptable for the intended use. The following areas will be monitored through PMCF.”
The difference is the presence of a determination and the reasoning that supports it.
This is why the conclusion section cannot be written by copying paragraphs from earlier in the CER. It requires synthesis. It requires the evaluator to step back from the details and state what the evidence means when taken together.
How to write a conclusion that survives review
Start by asking: what is the single question this CER was supposed to answer? Write that question down. The conclusion must provide a direct answer to that question.
Then outline the reasoning that leads to that answer. What evidence was used? How was it weighted? What were the key considerations in the benefit-risk assessment?
Next, acknowledge the limitations. What was not available? What assumptions were made? How will the gaps be addressed?
Finally, connect to the PMCF plan. What will be monitored? Why? How does that monitoring address the uncertainties identified in the conclusion?
Write the conclusion in active voice. Name the evaluator. State the determination clearly. Reference the specific sections of the CER where the supporting analysis is found.
And test it. If someone read only the conclusion, would they understand what determination was made and why? If not, the conclusion is not complete.
This does not mean the conclusion must carry the full analysis. It should state the determination and the essential reasoning on its own, while remaining anchored in the body of the CER: it signals what was determined and points to where the detailed evidence and appraisal are found.
The conclusion is not the place to introduce new data or new reasoning. It is the place to synthesize what has already been presented and state what it means.
Why this matters more than manufacturers realize
The conclusion section is often written last. It is treated as the final administrative step before submission. But it is the first place a reviewer goes to understand what the CER is claiming.
If the conclusion is weak, the reviewer’s expectation is set. They will approach the rest of the document with skepticism. They will look for gaps. They will question the reasoning.
If the conclusion is strong, the reviewer’s expectation is different. They will look to verify the reasoning, not to challenge it. The review becomes a process of confirmation rather than interrogation.
This is not about manipulation. It is about clarity. A strong conclusion makes the reviewer’s job easier. It states clearly what is being claimed and where the support for that claim can be found.
A weak conclusion forces the reviewer to reconstruct the reasoning themselves. And if they reconstruct it differently than the manufacturer intended, the result is deficiencies and delays.
The conclusion is not a formality. It is the culmination of the entire clinical evaluation. It is where the evidence, the reasoning, and the determination come together into a single defensible statement.
Most manufacturers do not fail because they lack data. They fail because they cannot clearly state what the data means. The conclusion section is where that failure becomes visible.
And it is also where the solution lies. A clear, specific, reasoned conclusion changes the trajectory of the entire review process.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when there are changes to the device or its intended purpose. As a minimum, it should be updated at least annually for class III and implantable devices, and every two to five years for lower-risk devices, with the interval justified in the post-market surveillance plan.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Peace, Hatem
Your Clinical Evaluation Partner
– MDR 2017/745 Article 61 and Annex XIV Part A
– MDCG 2020-6 on Sufficient Clinical Evidence
– MDCG 2020-7 on Post-Market Clinical Follow-up