The traceability gap that fails audits before clinical review
I have seen technical files rejected not because the clinical evidence was weak, but because the auditor could not trace how clinical requirements shaped the design. The device worked. The clinical evaluation existed. But the documented link between clinical needs and design decisions was invisible. That single gap can stop a submission.
The problem is not that companies ignore clinical requirements. Most don’t. They conduct risk assessments. They define intended use. They identify hazards. They collect clinical data.
But when an auditor opens the technical documentation, they look for something very specific: a documented chain that shows how clinical requirements entered the design process, influenced design decisions, and were verified through design outputs.
If that chain is not visible, the rest does not matter.
What auditors actually check
Auditors do not start by reading your clinical evaluation report. They start with your design control records. They look at your design and development plan. They examine your design input documents.
The first question is always: where are the clinical requirements?
Not marketing requirements. Not user preferences. Clinical requirements. These are the characteristics your device must have to be safe and effective for the intended clinical use.
ISO 13485 (clause 7.3.3) requires that design inputs include applicable regulatory requirements and the outputs of risk management. But it also requires functional, performance, usability, and safety characteristics derived from the intended use.
That last part is clinical. It comes from understanding the clinical context, the patient population, the clinical endpoints, and the benefit-risk profile you are aiming for.
Clinical requirements are not just safety limits. They include performance characteristics that determine whether the device achieves its intended clinical benefit. If those are not documented as design inputs, your traceability is broken from the start.
I have reviewed files where the clinical evaluation described excellent performance data, but the design input specification mentioned only dimensional tolerances and material properties. No clinical performance metric. No clinical endpoint. No link to the indication.
When the auditor asks where the clinical requirement is documented, the answer is often: it is implied. Or it is in the clinical evaluation report.
That is not enough. The clinical requirement must be an explicit design input.
The invisible translation layer
There is a translation step that many teams skip. You have a clinical need. You have a device concept. But how did one become the other?
For example, if your device is intended to reduce post-operative infection rates, that clinical goal must translate into specific design requirements. What biocompatibility level? What surface characteristics? What cleaning and sterilization resistance?
If your device is intended to improve diagnostic accuracy in a specific patient population, that clinical goal must translate into sensitivity and specificity targets, measurement range, and performance under specific physiological conditions.
Most teams do this work. But they do not document the reasoning. The design input list shows the final requirements. It does not show where they came from or why those values were chosen.
Auditors ask: why is the sensitivity threshold set at 95%? Why is the measurement range 10 to 200 units? Why is the contact duration limited to 30 minutes?
If the answer is not documented, the traceability is incomplete.
Design input specifications list technical parameters without explaining their clinical justification. The link between clinical need and technical requirement exists in the minds of the development team but is not captured in the documentation.
Where the chain must be visible
Traceability is not a single document. It is a documented trail across multiple interconnected records.
The chain starts with the intended use and indication for use. These must be defined early and documented clearly. They are clinical statements. They describe the clinical purpose of the device.
From the intended use, you derive clinical requirements. These go into your design input specification. Each clinical requirement should reference the intended use or the clinical rationale that justifies it.
From the design inputs, you develop design outputs. These are the specifications, drawings, software code, and test protocols that define your device. Each design output should trace back to at least one design input.
Then you verify that the design outputs meet the design inputs. Your verification testing must show that the device achieves the clinical performance requirements you documented at the input stage.
Finally, you validate that the device meets user needs and intended uses in the actual use environment. This includes clinical validation, which confirms that the device achieves the intended clinical benefit.
At every step, the link must be documented. Not implied. Not assumed. Documented.
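To make that chain concrete, here is a minimal sketch of how the records and their links could be captured, assuming a simple in-house structure. The field names (clinical_rationale, traces_to, and so on) are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class ClinicalRequirement:
    """A design input derived from the intended use."""
    req_id: str               # e.g. "DI-012"
    statement: str            # the requirement itself
    clinical_rationale: str   # why this value: intended use, CER finding, risk ID
    source_refs: list[str] = field(default_factory=list)  # e.g. ["CER section 6.2"]

@dataclass
class DesignOutput:
    out_id: str               # e.g. "DO-045"
    description: str
    traces_to: list[str] = field(default_factory=list)    # design input IDs it addresses

@dataclass
class VerificationTest:
    test_id: str              # e.g. "TP-078"
    verifies: list[str] = field(default_factory=list)     # design input IDs it verifies
```

The point is not the tooling. It is that every record carries an explicit reference to the record it came from, so the link lives in the file rather than in anyone's memory.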
The traceability matrix problem
Many companies create a traceability matrix. They list design inputs in one column, design outputs in another, verification tests in a third, and validation activities in a fourth.
This is useful. But it is not enough if the matrix only shows links between document numbers.
A meaningful traceability matrix shows the logical relationship. It shows that design input DI-012, which specifies minimum diagnostic sensitivity of 95%, is addressed by design output DO-045, which is the algorithm specification, and verified by test protocol TP-078, which measures sensitivity against a reference standard.
If the matrix only says DI-012 links to DO-045 links to TP-078, the auditor still has to open all three documents to confirm the relationship. That is acceptable, but it slows the review.
If the matrix includes a brief description of how the link is established, the review is faster and the logic is clearer.
The traceability matrix is not just an index. It is a logical map that shows how clinical needs flow through design decisions into verified and validated outputs. If the logic is not visible in the matrix, the auditor must reconstruct it from individual documents.
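As an illustration, a matrix row that carries the logic alongside the IDs might look like the sketch below. The column names are hypothetical; the IDs are the ones from the example above.

```python
# One traceability matrix row that records the logic, not just the IDs.
matrix = [
    {
        "design_input": "DI-012",   # minimum diagnostic sensitivity of 95%
        "design_output": "DO-045",  # algorithm specification
        "verification": "TP-078",   # sensitivity measured against a reference standard
        "link_rationale": (
            "DO-045 implements the classification algorithm whose sensitivity "
            "is set by DI-012; TP-078 measures that sensitivity against the "
            "reference standard defined in the clinical evaluation."
        ),
    },
]

# Flag rows where a reviewer would have to open all three documents
# to reconstruct the relationship.
for row in matrix:
    if not row.get("link_rationale", "").strip():
        print(f"Missing rationale: {row['design_input']}")
```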
The clinical evaluation connection
Here is where many files break down. The clinical evaluation report exists. The design control records exist. But they do not reference each other.
The clinical evaluation should inform the design inputs. If your literature review shows that diagnostic sensitivity below 90% is clinically insufficient for early detection, that finding should be referenced in the design input document that sets your sensitivity target at 95%.
If your clinical evaluation identifies a specific risk, such as false negatives in a particular patient subgroup, that risk should appear in your risk management file and drive a design requirement for performance verification in that subgroup.
Auditors look for this connection. They ask: where in your design inputs do you see the clinical evidence being used?
If the answer is nowhere, it suggests that the design process and the clinical evaluation process ran in parallel without interaction. That is a quality system failure under ISO 13485.
Design changes and clinical updates
The connection must also work in the other direction. When you make a design change, you must evaluate its clinical impact.
If you change a material, does that affect biocompatibility? If you change a measurement algorithm, does that affect diagnostic performance? If you change the user interface, does that affect usability in the clinical setting?
These questions require clinical judgment. The answers should be documented in your design change records and, where necessary, reflected in updates to the clinical evaluation.
MDR Annex II requires that the technical documentation demonstrate conformity with the general safety and performance requirements. That demonstration depends on showing that clinical requirements were identified, translated into design requirements, implemented in the design, verified through testing, and validated in clinical use.
If any link in that chain is missing, the demonstration is incomplete.
Design changes are processed through the change control system without evaluating whether the change affects clinical performance or requires an update to the clinical evaluation. The quality system treats design control and clinical evaluation as separate processes.
What this means in practice
When I work with a development team, the first thing I ask for is the design input specification. Then I ask: which of these inputs are clinically driven?
If the answer is none, or if the clinical inputs are buried in general safety requirements without explanation, we have work to do.
The next question is: where is the justification for each clinical requirement documented?
If the justification is in someone’s memory or in an old email thread, it needs to be formalized. Either in the design input document itself, or in a linked clinical rationale document, or in the clinical evaluation report with clear references from the design inputs.
Then I trace forward. For each clinical design input, where is the corresponding design output? Where is the verification test? Where is the validation evidence?
If the verification test exists but does not reference the clinical requirement it is verifying, the traceability is weak. If the validation study measures something different from what the design input specified, the traceability is broken.
This sounds tedious. It is. But it is also the foundation of regulatory compliance under ISO 13485 and MDR.
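Much of that forward trace can also be checked mechanically. A rough sketch, assuming the matrix has been exported into simple dictionaries (the structures are illustrative, not a required format): flag any clinically driven input that no output traces to, or that no verification test cites.

```python
def audit_forward_trace(inputs, outputs, tests):
    """Flag clinical design inputs with broken forward traceability.

    inputs:  {input_id: {"clinical": bool}}
    outputs: {output_id: set of input_ids it traces to}
    tests:   {test_id: set of input_ids it verifies}
    """
    findings = []
    for req_id, meta in inputs.items():
        if not meta.get("clinical"):
            continue  # this sketch audits only the clinically driven inputs
        if not any(req_id in traced for traced in outputs.values()):
            findings.append(f"{req_id}: no design output traces to this input")
        if not any(req_id in cited for cited in tests.values()):
            findings.append(f"{req_id}: no verification test references this input")
    return findings

# Hypothetical data: the test protocol exists but never cites DI-012.
print(audit_forward_trace(
    inputs={"DI-012": {"clinical": True}},
    outputs={"DO-045": {"DI-012"}},
    tests={"TP-078": set()},
))
# -> ['DI-012: no verification test references this input']
```

This is exactly the weak trace described above: the test exists, but the reference to the clinical requirement does not.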
The role of the clinical affairs team
Clinical affairs should not wait until the clinical evaluation report phase. They should be involved from the design input stage.
When design inputs are being defined, someone with clinical expertise should review them and ask: are these clinically appropriate? Are these the right performance targets for the intended use? Are we missing a clinical requirement?
When design outputs are being finalized, clinical affairs should confirm that the outputs address the clinical needs identified at the input stage.
When verification testing is planned, clinical affairs should confirm that the test protocols will generate data relevant to the clinical evaluation.
This is not micromanagement. It is integration. It ensures that the clinical perspective shapes the design process, not just the post-market documentation.
Clinical affairs involvement should start at design inputs, not at clinical evaluation drafting. Early involvement ensures that clinical requirements are identified, documented, and traced through the entire design process.
Why auditors focus on this
Auditors focus on traceability because it reveals whether your quality system is actually functioning as designed.
If traceability is strong, it shows that your organization has a disciplined process for translating user needs into product features, verifying that those features work, and validating that the product meets its intended use.
If traceability is weak, it suggests that design decisions are being made without documented justification, that testing is not systematically linked to requirements, and that the final product may not reliably meet its intended clinical purpose.
Traceability is not bureaucracy. It is evidence of control.
When an auditor cannot trace a clinical requirement through to a verified design output, they cannot confirm that your device will consistently deliver the claimed clinical benefit. That is a conformity issue.
When a Notified Body reviewer opens your technical documentation and cannot follow the logic from intended use to design requirements to test results, they cannot assess whether your device meets the general safety and performance requirements. That is a submission issue.
Building the habit
The way to fix this is not to create more documents. It is to change how the team works.
When you write a design input, ask yourself: where did this requirement come from? Document the answer.
When you create a design output, ask yourself: which input does this address? Document the link.
When you write a test protocol, ask yourself: which requirement am I verifying? Document the reference.
When you update the clinical evaluation, ask yourself: which design inputs and design changes are relevant? Document the connection.
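One lightweight way to build the habit is to make provenance a required field, so a design input cannot even be recorded without its origin. A minimal sketch with hypothetical field names, reusing the sensitivity example from earlier:

```python
from dataclasses import dataclass

@dataclass
class DesignInputRecord:
    """A design input that cannot be recorded without its origin."""
    req_id: str
    statement: str
    derived_from: str  # e.g. "Intended use section 2.1" or "CER literature review"

    def __post_init__(self):
        if not self.derived_from.strip():
            raise ValueError(
                f"{self.req_id}: document where this requirement came from."
            )

# Accepted: the reasoning is captured the moment the input is written.
DesignInputRecord(
    "DI-012",
    "Diagnostic sensitivity >= 95%",
    "CER literature review: sensitivity below 90% is clinically "
    "insufficient for early detection.",
)
```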
This is not extra work. It is making the reasoning visible. The reasoning already exists. You are just capturing it in real time instead of trying to reconstruct it later.
Teams that do this from the start have clean technical files. Teams that skip it spend months during submission trying to reverse-engineer the logic and fill the gaps.
Traceability is treated as a documentation task to complete before submission, rather than a real-time practice during design and development. This leads to gaps, inconsistencies, and retrospective justifications that auditors easily identify.
What comes next
Traceability does not stop at design verification. It continues through validation, post-market surveillance, and clinical evaluation updates.
When you collect post-market clinical data, that data should feed back into your understanding of whether the device is meeting its clinical requirements in real-world use.
When you update your clinical evaluation based on new evidence, you should assess whether any design inputs need to be revised.
When you process a complaint or a vigilance report, you should trace back to the relevant design requirements and verify that they are still appropriate.
This is the clinical evaluation lifecycle. It is also the design control lifecycle. They are not separate. They are the same process viewed from different angles.
If your quality system treats them as separate, your traceability will always be fragile.
In the next part of this series, we will look at how risk management integrates with clinical evaluation, and why auditors check whether your benefit-risk analysis is actually based on the risks you identified and the benefits you verified.
The principle is the same: documented linkage. Real integration. Visible logic.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.
References
– Regulation (EU) 2017/745 (MDR), Annex II
– ISO 13485:2016, Medical devices — Quality management systems — Requirements for regulatory purposes
– MDR Article 10 (General obligations of manufacturers)
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the clinical evaluation process under Regulation (EU) 2017/745.