What Notified Bodies Actually Look For in Technical Documentation
I’ve seen manufacturers spend months preparing technical documentation, only to receive a deficiency letter within the first review cycle. The submission looked complete on paper. Every section was filled. Every template followed. Yet the Notified Body stopped at the first major claim and asked: “Where is the evidence that supports this?” The file wasn’t missing documents. It was missing the logic that connects them.
In This Article
- The First Question: Does the Manufacturer Understand Their Device?
- Design and Manufacturing: Beyond Process Descriptions
- Risk Management: The Central Thread
- Clinical Evaluation: The Evidence Behind Every Claim
- Post-Market Surveillance and PMCF: Proof of Commitment
- Labeling and Instructions for Use: Consistency Check
- What Ties Everything Together
- Practical Implications for Your Submission
This happens more often than it should. Teams prepare technical files by checking boxes. They compile documents. They fill tables. They follow templates provided by consultants or internal quality systems. But when the Notified Body opens the file, they’re not looking for completeness in the administrative sense. They’re looking for coherence. They’re asking whether the file demonstrates that the manufacturer understands their own device, its risks, and the evidence needed to support every claim made.
The gap between what manufacturers think is required and what assessors actually evaluate is where most deficiencies emerge. Let me walk you through what Notified Bodies are really looking at when they open your technical documentation.
The First Question: Does the Manufacturer Understand Their Device?
Before diving into clinical data or risk management files, the assessor starts with the basics. They read the device description. They review the intended purpose. They check the claims. And they ask themselves: Does this manufacturer know what they’re submitting?
This might sound obvious, but it’s where many files begin to unravel. The intended purpose is vague. The claims are broader than the evidence supports. The device description lacks enough detail for someone unfamiliar with the technology to understand what it actually does.
Assessors notice when the intended purpose in the technical file doesn’t match the one in the clinical evaluation report. They notice when the instructions for use describe a different patient population than the risk management file. These aren’t minor administrative inconsistencies. They signal that the file was assembled in pieces without a central logic.
Notified Bodies assess coherence before compliance. If the fundamental descriptions don’t align across documents, the rest of the file becomes unreliable in their eyes.
Design and Manufacturing: Beyond Process Descriptions
The design and manufacturing information isn’t just about showing you have an ISO 13485 system. Assessors want to see that design controls were applied to this specific device. That design verification and validation activities match the risks identified. That the manufacturing process controls account for the critical parameters that affect safety and performance.
I’ve reviewed files where the design verification section listed generic tests with pass/fail results but no explanation of why those tests were chosen. The file showed that testing happened. It didn’t show that the manufacturer understood which characteristics needed verification based on the device’s risks and intended use.
Assessors look for traceability. They want to see that design inputs came from user needs and risk analysis. That design outputs translate into specifications. That verification methods are justified. That validation confirms the device meets user needs in the intended environment.
When this traceability is missing, the file feels like a collection of isolated activities rather than a controlled development process.
Risk Management: The Central Thread
The risk management file is not a standalone document. It’s the thread that connects everything else. Assessors use it to evaluate whether the manufacturer’s decision-making throughout the file is rational.
They check whether identified hazards align with the device type and intended use. Whether risk estimates are justified. Whether risk control measures are appropriate and verifiable. Whether residual risks are acceptable and communicated.
But more importantly, they check whether risk management informed other parts of the file. Did the identified risks drive the design verification tests? Do the clinical data address the clinical risks? Does the post-market surveillance plan monitor the residual risks?
A common deficiency is risk management files that exist in isolation from clinical evaluation and design verification. The risks identified don’t appear to influence what evidence is collected or what is monitored post-market.
I’ve seen files where the risk analysis identified biological risks from material contact, but the biocompatibility section provided only generic ISO 10993 testing with no justification for which tests were relevant to the identified risks. The documents existed. The connection didn’t.
Assessors also look at benefit-risk analysis with increasing scrutiny. Under MDR, demonstrating that benefits outweigh risks requires more than a summary statement. It requires clinical data that quantifies benefits and a transparent discussion of how residual risks were weighed against those benefits for the target population.
Clinical Evaluation: The Evidence Behind Every Claim
When assessors turn to the clinical evaluation, they’re not reading it as a standalone report. They’re checking whether it supports what the rest of the file claims.
The intended purpose says the device is for a specific indication. Do the clinical data demonstrate safety and performance for that indication? The risk management file identifies certain clinical risks. Does the clinical evaluation provide evidence that those risks are acceptable? The labeling makes comparative claims. Where is the clinical data that substantiates those comparisons?
Assessors read clinical evaluations with a critical eye for logic gaps. They notice when the literature search strategy was too narrow to capture relevant data. When equivalence claims are made without proper demonstration. When appraisal of clinical data is superficial. When conclusions overstate what the evidence actually shows.
They particularly scrutinize equivalence claims. Demonstrating equivalence under MDR requires more than showing two devices have similar technical characteristics. It requires demonstrating that clinical performance and safety can reasonably be expected to be equivalent. Many manufacturers underestimate what this requires.
Assessors don’t accept equivalence based on technical similarity alone. They expect a detailed comparison of clinical, biological, and technical characteristics, with justification for why any differences don’t affect safety and performance.
I’ve seen equivalence claims rejected where the devices had the same materials and similar design but different coating processes. The manufacturer assumed the coating difference was minor. The assessor asked for evidence that the coating didn’t affect biological response or clinical performance. That evidence didn’t exist.
Assessors also evaluate whether the clinical evaluation reflects current knowledge. The State of the Art analysis should demonstrate awareness of alternative treatments, recent literature, and evolving standards of care. A clinical evaluation that references only decade-old studies or ignores recent safety signals raises questions about whether the manufacturer is monitoring relevant knowledge.
Post-Market Surveillance and PMCF: Proof of Commitment
The PMS plan and PMCF plan are often treated as formalities. Manufacturers write them to satisfy the requirement, with little intention of implementing them rigorously. Assessors recognize this.
They look for specificity. What data will be collected? From which sources? How often? What triggers a safety review? What constitutes a trend worth investigating? Plans that provide only generic statements about monitoring complaints and reviewing literature don’t demonstrate a real commitment to post-market vigilance.
Assessors check whether PMCF objectives are tied to uncertainties in the clinical evaluation. If the pre-market data has limitations, the PMCF plan should address those limitations. If equivalence was claimed, PMCF should confirm that clinical performance aligns with expectations. If long-term safety is uncertain, PMCF should generate that data.
A recurring deficiency is PMCF plans that list generic activities without connecting them to specific knowledge gaps or residual risks. The plan exists but doesn’t address what actually needs to be confirmed post-market.
I’ve reviewed PMCF plans for implantable devices where long-term performance was a known uncertainty, yet the plan only committed to passive complaint monitoring. No registry participation. No follow-up studies. No active data collection. The assessor flagged this immediately. The uncertainty was acknowledged in the clinical evaluation but not addressed in the PMCF plan.
Labeling and Instructions for Use: Consistency Check
Assessors cross-reference the IFU against every other part of the file. Does the intended purpose in the IFU match the technical file? Are the indications supported by clinical data? Are contraindications consistent with identified risks? Are warnings adequate given the residual risks?
They look for omissions. If the risk management file identifies a risk that requires user precautions, those precautions must appear in the IFU. If the clinical evaluation shows the device is less effective in a subpopulation, the IFU should reflect this.
They also evaluate whether the IFU is usable. Is the information clear? Are warnings prominent? Are instructions complete enough for the intended user to use the device safely?
This isn’t about perfect writing. It’s about ensuring the end user receives the information they need to make informed decisions and use the device correctly.
What Ties Everything Together
What Notified Bodies are ultimately looking for is rational decision-making supported by evidence. Every claim should be traceable to data. Every risk control measure should be verified. Every design choice should be justified. Every post-market activity should address an uncertainty.
The technical documentation isn’t a collection of required documents. It’s a demonstration of how the manufacturer reasoned through the development, validation, and lifecycle management of their device.
When the file is coherent, assessors can follow the logic. They might challenge specific evidence or ask for more data, but they see that the manufacturer has a structured approach. When the file is incoherent, even minor questions become major deficiencies because the assessor can’t trust that the manufacturer has control over their own process.
Notified Bodies assess whether the manufacturer demonstrates rational control over their device lifecycle. Evidence of process is not enough. They need to see evidence of understanding.
Practical Implications for Your Submission
Before submitting your technical documentation, ask yourself the same questions the assessor will ask. Does every claim have supporting evidence? Are the documents consistent with each other? Does the risk management inform the clinical evaluation and vice versa? Is the PMCF plan specific enough to generate meaningful data?
Walk through the file as if you’re encountering the device for the first time. Can you follow the logic? Can you see why each decision was made? If you find gaps in your own review, the assessor will find them too.
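As a rough illustration of this self-review, the cross-document consistency questions can be imagined as a simple checklist script. This is a sketch only, not a regulatory tool: the document names and wording below are hypothetical, and real intended-purpose statements need expert comparison, not string matching.

```python
# Hypothetical pre-submission check: compare the intended-purpose statement
# recorded for each document in the technical file against the master
# statement in the device description. All names here are illustrative.

def check_intended_purpose(docs: dict[str, str]) -> list[str]:
    """Return the documents whose intended-purpose text deviates
    from the device description's master statement."""
    master = docs["Device Description"].strip().lower()
    return [
        name for name, text in docs.items()
        if text.strip().lower() != master
    ]

file_sections = {
    "Device Description": "Temporary support of wound closure in adults",
    "Clinical Evaluation Report": "Temporary support of wound closure in adults",
    "Instructions for Use": "Support of wound closure in all patients",  # broader claim
}

mismatches = check_intended_purpose(file_sections)
print(mismatches)  # flags the IFU's broader wording for review
```

In practice the comparison is semantic, not literal, but the principle is the same: any document whose core statements drift from the master description should be reconciled before the assessor finds the gap.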
The goal isn’t perfection. The goal is coherence. A file that demonstrates the manufacturer knows their device, understands the risks, has gathered appropriate evidence, and has a plan to manage uncertainties over the device’s lifecycle.
That’s what Notified Bodies are looking for. Not box-checking. Not template completion. Coherence and evidence-based reasoning.
When manufacturers approach technical documentation with this mindset, the submission becomes stronger. The review process becomes more efficient. And the deficiency letters become shorter.
Because the file finally demonstrates what it was always supposed to demonstrate: that the manufacturer has the device under control.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
References
– Regulation (EU) 2017/745 (MDR), Annex II and Annex III
– MDCG 2020-13: Clinical evaluation assessment report template (CEAR)
– MDCG 2022-4: Clinical investigation and evaluation for certain class IIb and class III devices
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the clinical evaluation process under Regulation (EU) 2017/745.





