Your Annex II is complete. But does it answer the real question?
I review technical documentation files where every Annex II section is filled. Every subsection has content. Every requirement appears addressed. Yet the Notified Body comes back with fundamental questions about device characterization, clinical context, or risk justification. The file looks complete, but it does not answer what matters.
Annex II is not a form to complete. It is a logic structure designed to show how your device works, why it is safe, and how clinical evidence supports every claim you make. When manufacturers treat it as a checklist, they produce documentation that satisfies format but fails function.
The structure exists to guide reviewers through a reasoning path. Section by section, the file should build understanding. Device description leads to intended purpose. Intended purpose defines risk analysis. Risk analysis informs clinical evaluation scope. Clinical evaluation justifies performance claims. Everything connects.
When this logic breaks, even a complete file becomes unreadable. Reviewers cannot follow the thread. They ask basic questions because the answers, though present somewhere in the file, are not where reasoning demands them.
Why Annex II Structure Exists
MDR Annex II replaced the older annexes from the directives with a more demanding framework. The change was not administrative. It reflects how regulators now expect manufacturers to think.
The structure forces you to establish device identity before discussing performance. It requires risk characterization before clinical claims. It demands post-market evidence planning before concluding your evaluation. This sequence is not arbitrary.
Each section prepares the ground for what follows. If your device description in Section 1 is vague, your benefit-risk analysis in Section 5 will lack foundation. If your risk analysis in Section 3 misses clinical risks, your clinical evaluation in Section 4 will not address them. The structure is a reasoning chain.
Annex II is not documentation storage. It is a demonstration of regulatory reasoning. Every section must support the sections that follow.
This is why format compliance is not enough. You can fill every subsection and still fail to show how your device justification holds together. Reviewers read Annex II as an argument, not as a collection of documents.
What Goes Where and Why It Matters
Section 1: Device Description and Specification
This section must answer a basic question: what exactly is this device? Not in marketing terms. Not in general categories. Specifically.
I see files where device description spans three pages but still leaves core questions unanswered. What are the critical components? What is the mechanism of action? What are the materials in contact with tissue? What differentiates this device from similar products?
The specification subsection should define every parameter that matters for safety and performance. Dimensional tolerances, material grades, software version control, sterilization validation parameters. If a parameter appears later in risk analysis or clinical evaluation, it must be specified here first.
A common gap: the device description uses commercial language without defining critical characteristics. Reviewers cannot assess equivalence, risk, or clinical claims without a precise device definition.
Why does this matter? Because everything downstream depends on device identity. Equivalence claims require detailed comparison. Risk analysis requires component-level understanding. Clinical evaluation scope follows from intended purpose, which follows from device characteristics.
If Section 1 is unclear, the entire file becomes unstable.
Section 2: Information Supplied by the Manufacturer
This section documents what you tell users. Instructions for use, labeling, promotional materials, training content. It is not just about compliance with label regulations.
What you claim in your IFU defines what you must demonstrate in clinical evaluation. If your label states the device improves healing time, your clinical evidence must support that claim with data. If you claim ease of use, you need usability validation. Every claim creates an evidence obligation.
I have seen files where marketing claims in promotional materials go beyond what the clinical evaluation supports. This creates a gap that becomes visible during review. The Notified Body will ask: where is the evidence for this claim? If it is not in Section 4, you have a problem.
Section 2 also includes warnings and contraindications. These must align with your risk analysis in Section 3. If a risk is identified but not communicated to users, you have a justification gap. If a contraindication is listed but not justified by clinical evidence, you have an overstatement.
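The alignment described above can be checked mechanically. Here is a minimal sketch, in Python, of that cross-check; every risk ID and IFU statement name below is hypothetical, invented for illustration only:

```python
# Hypothetical mapping: which IFU statement (Section 2) addresses
# which user-relevant residual risk from the risk analysis (Section 3).
risk_to_ifu = {
    "R-01 tissue irritation": "Warning 3",
    "R-02 device migration": "Warning 5",
    "R-07 nickel allergy": None,  # identified, but never communicated to users
}

# All warnings and contraindications actually present in the IFU.
ifu_statements = {"Warning 3", "Warning 5", "Contraindication 1"}

# Gap type 1: a risk identified in Section 3 with no user communication.
uncommunicated = [risk for risk, stmt in risk_to_ifu.items() if stmt is None]

# Gap type 2: an IFU statement with no justifying risk (an overstatement).
justified = {stmt for stmt in risk_to_ifu.values() if stmt is not None}
unjustified = ifu_statements - justified

print("Risks not communicated to users:", uncommunicated)
print("IFU statements without risk justification:", sorted(unjustified))
```

Both gap types surface here: R-07 is the justification gap (risk identified but not communicated), and Contraindication 1 is the overstatement (listed but not traceable to any identified risk).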
Section 3: Design and Manufacturing Information
This is where you show how the device is made and how manufacturing controls ensure consistency. Design history, manufacturing process, quality controls, verification and validation activities.
Risk management documentation sits here. ISO 14971 compliance, risk analysis files, risk control measures, residual risk evaluation. This section must show not only that risks were identified, but that they were controlled and that residual risks are acceptable given the benefits.
Here is where logic gaps appear most often. A risk is identified in the risk analysis. A control measure is described. But the verification of that control measure is missing or weak. Or the residual risk is stated as acceptable without showing the benefit side of the equation.
Your risk management file must feed directly into your clinical evaluation. Every clinical risk identified must be addressed by clinical data. Every benefit claim must be weighed against those risks.
Design verification and validation results also belong here. Bench testing, biocompatibility, electrical safety, software validation, usability studies. These results will be cited in Section 4 when you build your clinical evidence profile.
If the link between Section 3 and Section 4 is weak, reviewers will not see how your technical testing supports your clinical claims. They will ask for additional clinical data, even if your technical evidence is strong, because the connection was not made explicit.
Section 4: Clinical Evaluation
This is not just a summary of your clinical evaluation report. This section must present the clinical evidence strategy, the evaluation itself, and the conclusions in a way that is coherent with everything that came before.
Your clinical evaluation scope must reflect the intended purpose from Section 1. Your clinical claims must match what you state in Section 2. Your clinical risk assessment must address the risks identified in Section 3. Your evidence base must justify the residual risks as acceptable.
The most frequent gap I encounter here: clinical evaluation is prepared as a standalone report by someone who did not fully engage with the rest of the technical documentation. Claims are made without checking against the IFU. Risks are assessed without referencing the risk management file. Literature is summarized without connecting it to device-specific characteristics.
The result is a complete clinical evaluation report that does not integrate with the technical file. It answers the wrong questions. It misses the logic thread.
A common gap: the clinical evaluation report is written independently of the technical file. Claims, risks, and evidence do not align with the device description, IFU, or risk analysis.
Section 4 must also include your post-market clinical follow-up plan. This is not a formality. Your PMCF plan must address evidence gaps identified during the evaluation. If your pre-market data is limited, PMCF becomes your commitment to close those gaps. Reviewers will check whether your PMCF objectives target real uncertainties or just repeat what you already know.
Section 5: Benefit-Risk Analysis and Risk Management
This section brings together the risk analysis from Section 3 and the clinical evaluation from Section 4 to show that benefits outweigh risks for the intended patient population.
It is not enough to list benefits and list risks. You must weigh them. This requires clinical judgment. For each significant residual risk, you must show which clinical benefit justifies accepting that risk. For each claimed benefit, you must show that it is supported by sufficient evidence and that it is meaningful to patients.
I have reviewed files where benefit-risk conclusions are generic. The analysis states that benefits outweigh risks without specifying which benefits justify which risks. This does not satisfy MDR expectations.
The benefit-risk determination must be specific, evidence-based, and contextual. Different patient populations may have different risk-benefit profiles. A device acceptable for end-stage disease may not be acceptable for early intervention. Your analysis must address this.
Section 6: Product Verification and Validation
This section documents final verification that the manufactured device meets all specifications and that validation confirms the device performs as intended in the clinical context.
Process validation, sterilization validation, software validation, package integrity, shelf-life studies. These must be complete before you claim conformity. Validation protocols, results, and acceptance criteria must align with the specifications in Section 1.
A common issue: validation is ongoing or incomplete at the time of submission. Manufacturers include protocols but not results, or preliminary results that do not cover the full specification range. This weakens the entire conformity claim.
Validation must also confirm that risk control measures are effective. If your risk analysis identified contamination risk and your control measure is validated sterilization, the validation report in Section 6 must confirm that sterilization achieves the required sterility assurance level under worst-case conditions.
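For the sterilization example, the underlying arithmetic follows first-order inactivation kinetics: each D-value of exposure reduces the microbial population tenfold. A brief sketch, using purely illustrative numbers (no real validation data):

```python
import math

def surviving_log10(bioburden, d_value_min, exposure_min):
    """Log10 of the surviving population after exposure.

    Assumes first-order inactivation kinetics: each D-value
    of exposure time reduces the population by one log10.
    """
    return math.log10(bioburden) - exposure_min / d_value_min

# Illustrative worst-case inputs: starting bioburden of 10^6 CFU,
# D-value of 1.5 min, 18-minute exposure.
sal_exponent = surviving_log10(bioburden=1e6, d_value_min=1.5, exposure_min=18.0)

print(f"log10 of surviving population: {sal_exponent:.1f}")  # 6 - 12 = -6.0
print("Meets SAL 10^-6:", sal_exponent <= -6.0)              # True
```

With these numbers the cycle just reaches a sterility assurance level of 10^-6; the validation report would need to show this under worst-case load, temperature, and humidity conditions, not nominal ones.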
How Sections Connect
The power of Annex II structure is not in individual sections. It is in how sections build on each other.
Device description defines what you are regulating. Labeling defines what you claim. Risk analysis identifies what could go wrong. Clinical evaluation shows that benefits justify accepting residual risks. Benefit-risk analysis integrates risk and clinical data. Validation confirms you can manufacture what you designed.
When one section is weak, the entire chain weakens. When sections are not cross-referenced, reviewers cannot follow the logic. When content is duplicated instead of referenced, inconsistencies appear.
Strong technical documentation has internal coherence. Reviewers can trace a claim from IFU back through clinical evidence to device characteristics and forward to risk justification. Every statement has a thread.
This is what separates adequate files from strong files. Adequate files contain the required content. Strong files show the required reasoning.
What This Means in Practice
When you prepare technical documentation, start by mapping the logic. What do you claim? What evidence supports it? What risks does it create? How do you control those risks? What residual uncertainty remains? How will post-market data address it?
Once you have this map, you can place content in the right sections and build the cross-references that show coherence. Without the map, you will fill sections but miss connections.
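The logic map described above can be captured as a simple data structure and checked for broken links before submission. A sketch follows; every document ID, claim, and objective in it is hypothetical:

```python
# Hypothetical logic map for a single claim, tracing the chain:
# claim -> evidence -> risks -> controls -> residual uncertainty -> PMCF.
logic_map = {
    "claim": "Reduces wound healing time",
    "stated_in": "IFU (Annex II Section 2)",
    "evidence": ["Clinical investigation CI-001", "Literature review LR-2023"],
    "risks_created": ["R-03 delayed adverse event detection"],
    "risk_controls": {"R-03 delayed adverse event detection": "PMCF survey PS-01"},
    "residual_uncertainty": "Long-term outcomes beyond 12 months",
    "pmcf_objective": "Collect 24-month follow-up data",
}

def gaps(entry):
    """Return the broken links a reviewer would flag for one claim."""
    found = []
    if not entry["evidence"]:
        found.append("claim has no supporting evidence (Section 4 gap)")
    for risk in entry["risks_created"]:
        if risk not in entry["risk_controls"]:
            found.append(f"{risk}: no control or PMCF coverage")
    if entry["residual_uncertainty"] and not entry["pmcf_objective"]:
        found.append("residual uncertainty with no PMCF objective")
    return found

print(gaps(logic_map) or "chain is closed for this claim")
```

Run over every claim in the file, a check like this makes the cross-references explicit before a Notified Body reviewer has to hunt for them.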
Reviewers from Notified Bodies read hundreds of files. They can spot logic gaps quickly. A missing cross-reference between risk analysis and clinical evaluation. A claim in the IFU not supported in Section 4. A validation result that does not match the specification. These gaps signal that the manufacturer did not think through the file as a unified argument.
This is why format compliance is not sufficient. You can have every required subsection and still produce a file that does not convince. The structure is there to guide reasoning. If you do not use it for that purpose, you are working against yourself.
In the next part of this series, I will address how to define device scope and intended purpose in a way that actually supports clinical evaluation. Because that is where logic chains often break first.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan Template), and MDCG 2020-8 (PMCF Evaluation Report Template).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
– Regulation (EU) 2017/745 (MDR)
– MDR Annex II: Technical Documentation
Deepen Your Knowledge
Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.