Why clinical investigation reports fail Notified Body reviews
I see clinical investigation reports rejected not because the study was poorly executed, but because the report structure doesn’t match what reviewers expect. The data exists. The conclusions are sound. But the document fails to answer the questions in the sequence reviewers need.
In This Article
- What MDR Actually Requires
- The Sequential Logic Reviewers Follow
- What the Opening Sections Must Contain
- Methodology Section: The Make-or-Break Point
- Results Section: Structure Drives Clarity
- Discussion Section: Where Context Returns
- Conclusions Section: Direct and Traceable
- Appendices and Supporting Documentation
- Why Structure Determines Outcome
This is not about writing skills. This is about understanding how a reviewer works through your clinical investigation report during a conformity assessment.
When a Notified Body opens your clinical investigation report, they are not reading it like a journal article. They are checking whether the document allows them to assess compliance with MDR Annex XV and whether the results support your clinical evaluation. If they cannot find critical information quickly, or if the logic jumps around, the document gets flagged.
The structure matters because it controls the flow of reasoning. A well-structured report guides the reviewer through your methodology, your data, your analysis, and your conclusions without forcing them to search backward for missing pieces.
What MDR Actually Requires
MDR Annex XV, Section 2.3.3 lays out the content requirements for clinical investigation reports. It is not a suggestion. It is the minimum that must be present.
The regulation specifies that the report must include objectives, methodology, results, discussion, and conclusions. But what trips teams up is the assumption that listing these sections is enough. It is not.
Reviewers expect the report to demonstrate that the investigation was conducted according to the approved clinical investigation plan, that deviations are justified, and that the results are presented in a way that allows independent assessment of safety and performance.
The structure must allow a reviewer to trace the line from study objective to conclusion without hunting for information across scattered sections.
If they have to flip back and forth, the report fails the basic test of clarity. And clarity is not a courtesy. It is a compliance requirement.
The Sequential Logic Reviewers Follow
Reviewers do not read the report from start to finish and then form an opinion. They move through a checklist of questions. Each section must answer specific questions before they move to the next.
The opening sections must establish what you were trying to prove and how you planned to prove it. If the objectives are vague or the methodology is incomplete, the reviewer stops. Nothing that comes after will matter if the foundation is not solid.
Then they look at how subjects were selected, how data was collected, and whether the protocol was followed. Deviations happen. That is normal. But if deviations are not documented and justified, the reviewer assumes the investigation was not controlled.
Next comes the results section. This is where I see the most structural failures. Teams dump tables and figures without context. They present data without linking it back to the objectives. The reviewer is left to guess which result addresses which objective.
A common failure is presenting raw data without interpretation in the results section and attempting interpretation only in the discussion. This forces the reviewer to mentally reconstruct the logic.
The discussion section must interpret the results in the context of the clinical evaluation. If the discussion does not connect the investigation results to the state of the art, to the risk management file, and to the intended use, the reviewer will flag it. The investigation does not exist in isolation.
Finally, the conclusions must be clear, direct, and traceable to the data. Vague statements like “the device performed as expected” are not conclusions. They are placeholders.
What the Opening Sections Must Contain
The title page and executive summary are not formalities. They set expectations.
The title page must identify the device, the sponsor, the investigation sites, and the report version. If the device description is incomplete or the version is unclear, the reviewer flags it immediately. They need to know they are reviewing the correct document for the correct device.
The executive summary must provide a standalone overview. This means objectives, methodology, key results, and conclusions in two pages maximum. The reviewer should be able to understand what you did and what you found without reading further.
If the executive summary is missing or is just a copy-paste of the introduction, you have already created friction. The reviewer now has to work harder than necessary.
The introduction and background must position the investigation within the broader clinical evaluation. Why was this investigation needed? What gaps in clinical data was it meant to fill? If this context is missing, the reviewer will question whether the investigation was justified in the first place.
Methodology Section: The Make-or-Break Point
This is where most reports either earn trust or lose it.
The methodology section must describe the investigation design, the subject selection criteria, the data collection methods, and the statistical analysis plan. It must also reference the approved clinical investigation plan and note any deviations.
Reviewers check whether the methodology aligns with the objectives. If you stated that the objective was to assess performance in a specific patient population, but the inclusion criteria do not match that population, the investigation is flawed.
I have seen reports where the methodology section is written generically, as if copied from a template. The reviewer notices. They look for specificity. They look for decisions that were made for this device, for this population, for this clinical question.
The methodology section must demonstrate that the investigation was designed to answer the clinical question. Generic descriptions signal that the team did not think critically about study design.
Deviations from the approved clinical investigation plan must be documented in this section. If you changed the sample size, if you modified the follow-up schedule, if you adjusted the endpoints, you must explain why and demonstrate that the deviation did not compromise the validity of the investigation.
Undocumented deviations are not minor issues. They are red flags that suggest poor oversight or inadequate monitoring.
Results Section: Structure Drives Clarity
The results section must present data in the same sequence as the objectives were stated.
If objective one was to assess safety, the first results should address safety. If objective two was to assess performance, the next results should address performance. This seems obvious, but it is frequently ignored.
Each table and figure must have a clear title and a brief interpretation. The reviewer should not have to decode what the data means. You must tell them.
Tables that present raw numbers without context are useless. If you present adverse events, you must also present the severity, the causality assessment, and the outcome. If you present performance metrics, you must present them against the predefined success criteria.
A related failure is presenting results without linking them to the success criteria defined in the clinical investigation plan. The reviewer is left to infer whether the results meet expectations.
Statistical analysis must be clearly described. If you used specific tests, state which tests and why. If you calculated confidence intervals, present them. If the analysis was descriptive, state that explicitly.
Missing or incomplete statistical reporting signals that the analysis was not rigorous. Reviewers will question whether the conclusions are supported.
Discussion Section: Where Context Returns
The discussion section is not a repetition of the results. It is an interpretation.
You must explain what the results mean in the context of the clinical evaluation. How do the results compare to the state of the art? How do they address the identified risks? How do they support the benefit-risk determination?
If the investigation revealed unexpected findings, you must discuss them. If certain objectives were not fully met, you must explain why and what the implications are.
Reviewers look for honest discussion. If you gloss over negative findings or unexpected results, they notice. And they lose confidence in your entire submission.
The discussion must also address limitations. Every investigation has limitations. Acknowledging them demonstrates that you understand the boundaries of your data.
Discussing limitations is not a weakness. It is a demonstration of scientific rigor. Reviewers expect it.
If your investigation had a small sample size, if the follow-up was short, if certain subgroups were underrepresented, state it. Then explain how these limitations were considered in your conclusions.
Conclusions Section: Direct and Traceable
The conclusions must be direct statements that answer the objectives.
If objective one was to assess safety in a specific population, the conclusion must state whether safety was demonstrated in that population. If objective two was to confirm performance against a predefined criterion, the conclusion must state whether that criterion was met.
Vague or hedged conclusions fail. Statements like “the device appears to be safe” or “performance was generally acceptable” are not conclusions. They are evasions.
Reviewers expect you to commit. If the investigation demonstrated safety and performance, say so. If it raised concerns, say so. If further investigation is needed, say so.
The conclusions must also state whether the investigation results support the intended use and the claims made in the clinical evaluation report. This is the bridge between the investigation and the broader conformity assessment.
Appendices and Supporting Documentation
Appendices are not optional extras. They provide the evidence that supports the main report.
You must include the approved clinical investigation plan, the informed consent form, the case report forms, and the ethics committee approvals. If these documents are missing, the reviewer cannot verify that the investigation was conducted properly.
Appendices should also include detailed statistical outputs, investigator CVs, and monitoring reports. These documents allow the reviewer to assess the credibility of the investigation.
If appendices are incomplete or poorly organized, the reviewer will flag it. They need to be able to find supporting documents quickly.
A frequent failure is submitting an investigation report without key supporting documents in the appendices. The reviewer cannot assess compliance without them.
Why Structure Determines Outcome
The structure of your clinical investigation report is not about aesthetics. It is about logic.
A well-structured report allows a reviewer to assess your investigation efficiently and confidently. It demonstrates that you understand what the investigation was meant to achieve and that you can present the evidence clearly.
A poorly structured report forces the reviewer to work harder. It creates doubt. It signals that the team may not fully understand what they were doing or why.
Every deficiency I have seen in clinical investigation reports traces back to one of two issues: missing content or illogical structure. Both are avoidable.
The content requirements are defined in MDR Annex XV. The logical structure is defined by the questions reviewers ask. If you build your report to answer those questions in sequence, the structure will be correct.
This is not about writing talent. It is about understanding the conformity assessment process and respecting the reviewer’s need for clarity.
When you submit a clinical investigation report, you are not telling a story. You are presenting evidence. The structure must support that purpose.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745 (MDR), Annex XV
– MDCG 2020-7: Post-market clinical follow-up (PMCF) plan template
– ISO 14155:2020: Clinical investigation of medical devices for human subjects – Good clinical practice