Structuring the CER for AI Medical Software
Your evidence is collected and appraised. Now you must present it in a CER that reviewers can follow. For AI software, the structure must address specific requirements that traditional device CERs do not cover. The wrong structure buries your strongest evidence. The right structure makes your case obvious.
In This Article
The Clinical Evaluation Report for AI software must demonstrate conformity with relevant General Safety and Performance Requirements through clinical evidence. MDR Annex XIV Part A establishes the framework. MEDDEV 2.7/1 rev 4 provides structure guidance. MDCG 2020-1 adds software-specific requirements. Your CER must address all three.
Recommended CER Structure
A well-structured CER for AI software contains thirteen sections:
1. Administrative Information. Device identification, manufacturer details, version numbers, document control information, and declarations.
2. Device Description. Technical description, intended purpose, and measurable claims. For AI software, include algorithm type, input and output specifications, and user interface description (a minimal sketch of these fields follows this list).
3. Regulatory Framework. Applicable standards, guidance documents, and classification rationale. Reference MDR, MDCG 2020-1, and any harmonized standards applied.
4. State of the Art. Clinical context, current alternatives, benchmark performance, and evidence gaps. This section establishes the context for evaluating your evidence.
5. Clinical Evaluation Plan Summary. Overview of the evaluation strategy, acceptance criteria, and evidence sources. Reference the full CEP document.
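To make Section 2 concrete, here is a minimal sketch of the AI-specific fields worth capturing in the device description. The field names and values are illustrative, not a required schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIDeviceDescription:
    """Illustrative record of AI-specific device description fields (Section 2)."""
    device_name: str
    software_version: str      # the exact version this CER evaluates
    algorithm_type: str        # e.g. "CNN classifier, locked model"
    intended_purpose: str
    input_specification: str   # modality, format, acquisition constraints
    output_specification: str  # what the user sees and how it is expressed
    measurable_claims: list[str] = field(default_factory=list)

# Hypothetical example device
description = AIDeviceDescription(
    device_name="Example Triage Software",
    software_version="2.1.0",
    algorithm_type="CNN classifier, locked model",
    intended_purpose="Flag suspected findings on chest X-rays for reader prioritization",
    input_specification="DICOM chest X-ray, PA view",
    output_specification="Binary flag plus confidence score shown in the worklist",
    measurable_claims=["Sensitivity >= 0.90 for finding X in the intended population"],
)
```

The point is not the format but the completeness: every field here maps to something a reviewer will look for.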
Reviewers check that your CER structure demonstrates clear traceability from claims through evidence to conclusions. Each section should build on the previous one.
Evidence Sections
6. Clinical Data Identification. Systematic description of how clinical data was identified. Literature search protocol, database queries, internal data sources, and screening results.
7. Quality Appraisal. Assessment of each data source for scientific validity and device relevance. Present your appraisal matrix with clear criteria and consistent application.
8. Performance Data Analysis. Results organized by claim and by pillar. For each claim, present the evidence, the analysis, and the conclusion against acceptance criteria (a worked sketch follows this list). This is the heart of your CER.
9. Equivalence Justification. If claiming equivalence, demonstrate technical, biological, and clinical equivalence with full access to the equivalent device's data. For AI software this is rarely straightforward: technical equivalence means comparing algorithms, training data, and performance characteristics that a competitor will rarely disclose.
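As promised for Section 8, a minimal sketch of what "conclusion against acceptance criteria" means in practice. The claim, numbers, and pass convention are hypothetical; the point is that each claim is checked against a pre-specified threshold, typically using the confidence interval bound rather than the point estimate:

```python
# Hypothetical check of one performance claim against its acceptance criterion.
# Convention assumed here: the claim passes only if the lower bound of the
# two-sided 95% CI clears the threshold, not just the point estimate.

claim = {
    "id": "CL-01",
    "statement": "Sensitivity >= 0.90 for finding X",  # hypothetical claim
    "threshold": 0.90,
    "point_estimate": 0.93,
    "ci_lower": 0.905,                # lower bound of the 95% CI
    "evidence": ["STUDY-001", "LIT-014"],  # traceable evidence IDs
}

passed = claim["ci_lower"] >= claim["threshold"]
print(f'{claim["id"]}: {"PASS" if passed else "FAIL"} '
      f'(CI lower {claim["ci_lower"]} vs threshold {claim["threshold"]})')
```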
AI-Specific Requirements
Three-Pillar Organization. Organize performance evidence under valid clinical association, analytical performance, and clinical performance. Each pillar should have separate acceptance criteria and evidence presentation, as sketched after this list.
Transparency Documentation. Include information about training data, validation methodology, and known limitations. Reviewers expect to see how the algorithm was developed and what its boundaries are.
Version Control. Address how evidence applies to the specific software version under evaluation. Describe the update validation process.
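One way to keep the three pillars from blurring together is to record a separate acceptance criterion and evidence list per pillar. A sketch, with hypothetical criteria and evidence IDs:

```python
# Hypothetical per-pillar structure for organizing performance evidence.
# Each pillar carries its own acceptance criterion and its own evidence,
# so a reviewer can assess the pillars independently.

pillars = {
    "valid_clinical_association": {
        "criterion": "Output is clinically accepted as associated with the target condition",
        "evidence": ["LIT-002", "LIT-007"],   # e.g. guideline and literature support
    },
    "analytical_performance": {
        "criterion": "Sensitivity >= 0.90 and specificity >= 0.85 on an independent test set",
        "evidence": ["STUDY-001"],
    },
    "clinical_performance": {
        "criterion": "Reduced time-to-diagnosis versus standard workflow",
        "evidence": ["STUDY-002", "PMCF-001"],
    },
}

for name, pillar in pillars.items():
    print(f'{name}: {len(pillar["evidence"])} evidence source(s) '
          f'against criterion "{pillar["criterion"]}"')
```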
A common pitfall: CERs that present evidence without a clear connection to specific claims. Every dataset and every study must trace to acceptance criteria and GSPR requirements.
Integration Sections
10. PMS and PMCF Strategy. How ongoing evidence collection will update and validate the clinical evaluation. Connect PMCF objectives to identified gaps. Describe drift monitoring and update triggers (see the sketch after this list).
11. Benefit-Risk Evaluation. Quantified benefits compared to characterized risks, with explicit reasoning for why benefits outweigh risks in the context of alternatives.
12. Conformity Conclusions. Statement of conformity with each relevant GSPR, supported by specific evidence references.
13. Supporting Annexes. Full literature search protocol, appraisal tables, study reports, and detailed data tables.
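For the drift monitoring and update triggers in Section 10, here is a minimal sketch of how a trigger might be defined. The metric, tolerance, and actions are placeholders; what matters is that triggers are pre-specified rather than decided after drift appears:

```python
# Hypothetical drift-monitoring trigger: compare a monitored post-market
# performance metric against the value claimed in the CER, and escalate
# when the drop exceeds a pre-specified tolerance.

CLAIMED_SENSITIVITY = 0.90   # value established in the CER (hypothetical)
DRIFT_TOLERANCE = 0.03       # pre-specified acceptable degradation

def check_drift(observed_sensitivity: float) -> str:
    """Return the pre-specified action for an observed post-market metric."""
    drop = CLAIMED_SENSITIVITY - observed_sensitivity
    if drop <= 0:
        return "no action: performance at or above claim"
    if drop <= DRIFT_TOLERANCE:
        return "monitor: within tolerance, document in PMS report"
    return "trigger: investigate, assess impact on CER conclusions, consider update"

print(check_drift(0.91))  # no action
print(check_drift(0.88))  # monitor
print(check_drift(0.84))  # trigger
```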
Key Tables
Claims-to-Evidence Traceability Matrix. Shows each claim, its acceptance criteria, the supporting evidence, and the conclusion, so reviewers can trace any claim to its proof (an illustrative fragment follows below).
Clinical Evidence Inventory. Lists all evidence sources with appraisal scores, relevance ratings, and contribution to conclusions.
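An illustrative fragment of the traceability matrix as structured data, with hypothetical IDs. However you format the table in the CER itself, each row should let a reviewer walk from claim to criterion to evidence to conclusion without leaving the row:

```python
# Hypothetical rows of a claims-to-evidence traceability matrix, plus a
# completeness check that no claim lacks evidence or a conclusion.

matrix = [
    {"claim": "CL-01", "criterion": "Sensitivity >= 0.90",
     "evidence": ["STUDY-001", "LIT-014"], "conclusion": "met"},
    {"claim": "CL-02", "criterion": "Reader time reduced vs standard workflow",
     "evidence": ["STUDY-002"], "conclusion": "met"},
    {"claim": "CL-03", "criterion": "Performance maintained across scanner vendors",
     "evidence": [], "conclusion": None},   # gap: close it or withdraw the claim
]

gaps = [row["claim"] for row in matrix
        if not row["evidence"] or row["conclusion"] is None]
print("claims lacking evidence or conclusion:", gaps or "none")
```

A gap like CL-03 is exactly what a reviewer will find first. Better to find it yourself and either close it or drop the claim.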
In the next post, we cover how to write performance claims that reviewers will accept.
Peace,
Hatem
Your Clinical Evaluation Partner
Frequently Asked Questions
How long should an AI software CER be?
Length depends on complexity, but structure matters more than length. A well-organized 50-page CER with clear traceability is better than a 200-page document where reviewers cannot find evidence for specific claims.
Should I include all study reports in the CER?
Include summaries in the main text and full reports as annexes. Reviewers need to see key results quickly but must be able to verify details if needed.
Part 5 of 6. Next: Performance Claims and Ongoing Monitoring for AI Software
References
– MDR 2017/745, Annex XIV Part A
– MEDDEV 2.7/1 Rev. 4
– MDCG 2020-1: Guidance on Clinical Evaluation (MDR) / Performance Evaluation (IVDR) of Medical Device Software
– MDCG 2020-13: Clinical Evaluation Assessment Report (CEAR) Template