Your technical file looks complete—until the reviewer starts asking questions
I’ve seen it happen during live reviews. The assessor opens the clinical evaluation report, then the risk management file, then the design verification protocols. The documents exist. The content is there. But the connections between them? Missing. And that’s when the questions begin.
The issue is rarely that documents are absent. Most manufacturers deliver the sections. But reviewers don’t read your technical file linearly. They follow logical pathways. They ask how a claim in one section is supported in another. They trace a risk from the FMEA through mitigation strategy to clinical data. They verify that what you concluded in your clinical evaluation is reflected in your instructions for use.
If those connections are not explicit, the file becomes a collection of isolated chapters. And reviewers interpret that as incomplete work.
Why traceability matters more than completeness
MDR Annex II defines the structure of the technical documentation. It lists the sections you must provide. But it also requires that the file demonstrates how those sections interact. The regulation expects coherence, not just compliance with a checklist.
When you submit a claim that your device reduces infection risk, that claim must appear in multiple places. It should be in your intended purpose. It must be evaluated in your clinical evaluation. It should be supported by design verification data. It will influence your risk analysis. It will appear in your labeling.
The problem arises when those mentions are disconnected. The clinical evaluation states the claim. The risk file addresses a different version of it. The verification protocol tests something adjacent but not identical. The IFU uses language that doesn’t match any of them.
Reviewers see this as fragmentation. And fragmentation raises doubt about whether the manufacturer truly understands their own device.
Traceability is not about referencing documents. It’s about demonstrating logical consistency across the entire technical file. Reviewers follow reasoning, not references.
Where cross-referencing breaks down
The most common failure point is between the clinical evaluation and the design verification data. Manufacturers often write the clinical evaluation as a standalone document. They summarize clinical data, analyze equivalence or clinical investigations, and conclude that the device is safe and effective.
But they don’t connect those conclusions back to the specific technical characteristics that were verified in the design file. The clinical evaluation mentions performance metrics. The verification protocols test different parameters. The language doesn’t align. The thresholds don’t match.
When a reviewer checks this, they can’t confirm that what was clinically evaluated is the same as what was technically verified. That creates a gap. And gaps require clarification requests.
The risk management disconnect
Another weak point is the relationship between clinical data and risk analysis. Your risk management file should reflect clinical risks. But often, the FMEA focuses on technical failure modes without addressing clinical consequences that were identified in the literature or in post-market data.
For example, your clinical evaluation identifies a specific adverse event reported in similar devices. That event should appear in your risk file. It should be analyzed. Mitigation measures should be defined. And those measures should be traceable to design controls or labeling.
When this chain is broken, reviewers question whether clinical findings actually informed your risk management. It looks like two parallel processes that never converged.
The intended purpose ambiguity
Your intended purpose statement is the foundation of the technical file. Every section should align with it. But I’ve reviewed files where the intended purpose in the clinical evaluation differs slightly from the version in the risk file. Or the IFU describes a broader application than what was clinically evaluated.
These inconsistencies are not always intentional. They emerge during document updates when one section gets revised but others don’t follow. Still, reviewers interpret them as lack of control. And that affects their confidence in the entire submission.
The intended purpose statement varies across sections. The clinical evaluation evaluates one version, the risk file addresses another, and the IFU presents a third. This signals process fragmentation, not just documentation error.
What reviewers actually trace
Understanding what reviewers look for helps you structure your cross-referencing correctly. They don’t trace everything. They focus on logical connections that validate your conclusions.
Claims to evidence
Every performance or safety claim must be supported. If you claim equivalence to a predicate device, the technical comparison must be traceable to specific design features in your verification file. The clinical evaluation must reference those same features.
If you claim clinical benefits, those benefits must be quantified or described in your clinical data section. They must be reflected in your intended purpose. They must be compared against identified risks.
Reviewers will open the clinical evaluation, identify a claim, and then search for supporting evidence in the appropriate technical sections. If they can’t find it quickly, they flag it.
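You can rehearse this reviewer behavior internally before submission. The sketch below, in Python, shows the shape of such a check; the claim IDs, document names, and fields are all hypothetical placeholders for whatever your own register contains:

```python
# Hypothetical claim-to-evidence register; all IDs and references are illustrative.
claims = {
    "CL-001": {
        "text": "Device reduces infection risk",
        "evidence": ["CER section 7.2", "Verification report VR-014"],
    },
    "CL-002": {
        "text": "Equivalent to predicate device X",
        "evidence": [],  # no supporting reference recorded yet
    },
}

# Surface every claim a reviewer could not trace to evidence.
for claim_id, claim in claims.items():
    if not claim["evidence"]:
        print(f"GAP: {claim_id} ('{claim['text']}') has no supporting evidence")
```

If your own review finds the gap first, you close it with evidence or drop the claim. If the reviewer finds it, you answer a clarification request.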
Risks to mitigations
Each identified risk should be traceable to a mitigation strategy. And that strategy should appear in the relevant technical section. If the mitigation is design-based, it should link to design specifications and verification results. If it’s information-based, it should connect to labeling and IFU content.
Reviewers verify that risks are not just listed but actually addressed. If a risk is mentioned in the FMEA but no mitigation is documented elsewhere in the file, that’s a gap.
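Expressed as a quick internal check, the same logic looks like this. This is a minimal sketch, assuming a simple risk register where each entry records its mitigation type and its pointers; every field name and ID is hypothetical:

```python
# Hypothetical risk register; field names and IDs are illustrative.
risks = [
    {"id": "R-07", "mitigation": "design", "spec": "DS-3.1", "verification": "VR-014"},
    {"id": "R-12", "mitigation": "information", "ifu_section": None},  # broken chain
]

for risk in risks:
    if risk["mitigation"] == "design":
        # Design-based mitigations need both a specification and a verification result.
        if not (risk.get("spec") and risk.get("verification")):
            print(f"GAP: {risk['id']} lacks a traceable design control")
    elif risk["mitigation"] == "information":
        # Information-based mitigations must point at concrete IFU or labeling content.
        if not risk.get("ifu_section"):
            print(f"GAP: {risk['id']} lacks a traceable IFU or labeling reference")
```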
Post-market data to file updates
Your PMCF data should feed back into the technical file. If you identify new risks or performance issues post-market, those findings must be reflected in updated risk analysis, updated clinical evaluation, or revised labeling.
Reviewers check whether your post-market surveillance actually influences your technical documentation. If PMCF reports exist but the rest of the file remains unchanged for years, that suggests the feedback loop is broken.
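One simple way to audit this feedback loop is to compare revision dates. Here is a sketch, assuming your document register can tell you when each file was last revised; the dates and document names are invented:

```python
from datetime import date

# Hypothetical dates; in practice, pull these from your document register.
pmcf_report_date = date(2024, 6, 30)
downstream_revisions = {
    "Clinical evaluation report": date(2023, 1, 15),
    "Risk management file": date(2024, 9, 1),
    "Instructions for use": date(2022, 11, 3),
}

# Any downstream document older than the latest PMCF report deserves attention:
# either update it, or record why the PMCF findings required no change.
for doc, revised in downstream_revisions.items():
    if revised < pmcf_report_date:
        print(f"CHECK: {doc} not revised since the PMCF report of {pmcf_report_date}")
```

A stale date is not automatically a finding. An undocumented stale date is exactly what makes a reviewer suspect the loop is broken.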
How to build traceability that works
Effective cross-referencing is not about adding reference numbers everywhere. It’s about structuring your file so that logical pathways are explicit.
Start with a traceability matrix
A traceability matrix maps relationships between sections. It shows which claims are supported by which evidence. It links risks to mitigations. It connects clinical findings to design controls.
This matrix doesn’t need to be submitted as a standalone document. But it should exist internally and guide how you write each section. When you draft your clinical evaluation, the matrix tells you which verification protocols to reference. When you update the risk file, it shows which sections of the clinical evaluation are affected.
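Internally, the matrix can be as plain as a table with one row per claim. Here is a minimal sketch, assuming a row-per-claim model; every identifier is hypothetical:

```python
# Minimal internal traceability matrix; all identifiers are illustrative.
matrix = [
    {"claim": "CL-001", "risk": "R-07", "verification": "VR-014",
     "cer_section": "7.2", "ifu_section": "4.1"},
    {"claim": "CL-003", "risk": "R-12", "verification": "VR-021",
     "cer_section": "7.4", "ifu_section": "4.3"},
]

def cer_sections_affected_by(risk_id: str) -> list[str]:
    """When a risk entry changes, list the CER sections that must be rechecked."""
    return [row["cer_section"] for row in matrix if row["risk"] == risk_id]

print(cer_sections_affected_by("R-07"))  # -> ['7.2']
```

A spreadsheet does the same job. What matters is that the relationships are recorded once and queried, not reconstructed from memory at every update.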
Use consistent terminology
One of the simplest but most overlooked practices is linguistic consistency. If you describe a device characteristic one way in the clinical evaluation, use exactly the same wording in the risk file, the verification protocols, and the IFU. Every synonym forces the reviewer to decide whether two phrases refer to the same thing, and every such decision is an opportunity for doubt.
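A terminology register makes this checkable. Below is a small sketch, assuming you maintain one canonical term per characteristic plus the stray variants you have seen creep in; all terms and file names are invented examples:

```python
# Hypothetical terminology register: one canonical term, known stray variants.
canonical = "antimicrobial coating"
variants = ["antibacterial layer", "bacteriostatic surface"]

# Maps each document name to its extracted text; contents are illustrative.
documents = {
    "CER.txt": "The antimicrobial coating was evaluated against ...",
    "RiskFile.txt": "The bacteriostatic surface mitigates risk R-07 ...",
}

for name, text in documents.items():
    lowered = text.lower()
    for variant in variants:
        if variant in lowered:
            print(f"INCONSISTENT: {name} uses '{variant}' instead of '{canonical}'")
```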
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report). MDR Annex II itself remains the primary reference for the structure of the technical documentation.
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.
Building a complete technical file requires investment across multiple disciplines. Understand the financial picture in our guide to CE marking costs for medical devices.
Deepen Your Knowledge
Read our Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the clinical evaluation process under MDR 2017/745.