When accessory evidence exists only in the system file
A manufacturer submits a clinical evaluation report for a new catheter. The device is intended to be used with an established system already on the market. The CER contains 250 pages. Nearly all clinical data describes outcomes with the complete system. Standalone evidence for the catheter itself? Three lines in one study’s methods section.
This happens more often than it should. The accessory device receives a separate CE mark. It has its own technical file. Its own clinical evaluation report. But the clinical evidence supporting it is embedded entirely within the parent system’s documentation.
The question Notified Bodies ask is simple: What is the clinical evidence for this device, independent of the system it connects to?
If the answer is unclear, the submission stalls.
The regulatory expectation for accessory devices
MDR Article 2(2) defines an accessory as an article which, whilst not itself a medical device, is intended by its manufacturer to be used together with a medical device to specifically enable that device to be used in accordance with its intended purpose, or to specifically and directly assist its medical functionality. And under Article 1(4), accessories are treated as devices in their own right for the purposes of the Regulation.
The definition is precise. The accessory is not the parent device. It has a distinct intended purpose within the system. It has its own design, its own risks, its own interaction with the patient or user.
And because of that, it requires its own clinical evaluation under Annex XIV.
Manufacturers sometimes assume that if the accessory is always used within a system, the system-level evidence is sufficient. That the clinical performance of the system inherently validates the accessory.
That assumption fails in most reviews.
A typical deficient submission looks like this: the CER for the accessory references only system-level clinical studies. No data isolates the accessory’s contribution to safety or performance. Reviewers cannot identify which risks or claims are substantiated specifically for the accessory.
The reasoning is straightforward. The accessory contributes specific functions to the system. It may introduce specific risks. Contact with tissue, fluid pathways, mechanical force transmission, imaging quality, drug delivery precision—each of these depends on accessory design.
If the clinical data does not allow reviewers to assess those contributions, the evaluation is incomplete.
When system-level evidence is appropriate
There are scenarios where system-level evidence legitimately supports an accessory’s clinical evaluation. But they require careful structuring.
If the accessory has been evaluated within the system-level studies, and those studies report outcomes that reflect the accessory’s specific role, that data can be used. But it must be explicitly extracted and presented.
For example, consider a catheter used in an ablation system. If the clinical study protocol specifies that catheter model, if adverse events related to catheter navigation or tissue contact are documented separately, if imaging or procedural outcomes are linked to catheter performance—then that evidence supports the catheter.
But the CER must make that linkage explicit. The reader should not have to infer which parts of the system study apply to the accessory.
If the study describes only the system’s overall success rate, complication rate, or clinical benefit, without isolating the accessory’s contribution, the evidence does not fulfill Annex XIV requirements for that accessory.
The role of design and risk analysis
The clinical evaluation must address the risks identified in the accessory’s own risk management file. If those risks are mitigated by design features specific to the accessory, the clinical data must show that those features perform as intended in clinical use.
This is where many system-level studies fall short. They report system outcomes. They do not report outcomes attributable to the accessory’s specific design characteristics.
For instance, if a catheter has a novel tip geometry intended to reduce vessel trauma, the clinical evidence must show that this design feature achieves that outcome. If the parent system study does not assess or report tip-related trauma separately, the evidence gap remains.
System-level evidence can support an accessory only if it allows reviewers to trace outcomes back to the accessory’s specific design, risks, and intended purpose. If that traceability is absent, the evidence does not fulfill the requirement.
When standalone evidence is required
Some accessories cannot rely on system-level evidence at all. This is especially true when the accessory introduces novel features, new materials, or new patient contact characteristics not present in predicate versions.
If the accessory is the first of its kind within the system, or if it changes the risk profile compared to previous accessories used with the same parent device, standalone evidence becomes necessary.
Standalone evidence does not always mean new clinical studies. It can include:
– Bench testing that simulates clinical use conditions for the accessory alone
– Literature data on similar accessories with equivalent design features
– Post-market data from earlier-generation accessories with comparable intended purpose and risk profile
But the evidence must address the accessory’s specific claims and risks. It cannot be entirely absorbed into system documentation.
The equivalence pathway and accessories
Some manufacturers attempt to claim equivalence between their accessory and a predicate accessory already supported by system-level clinical data.
This can work, but only if the equivalence demonstration is robust. The technical and biological characteristics of both accessories must be shown to be equivalent. The clinical data supporting the predicate accessory must be clearly identified and applicable.
MDCG 2020-5 sets out the requirements for equivalence. The demonstration must show equivalence across three dimensions:
– Technical: similar design, used under similar conditions of use, with similar specifications and properties
– Biological: the same materials or substances in contact with the same human tissues or body fluids
– Clinical: the same clinical condition and intended purpose, in a similar population, with similar relevant critical performance
In addition, the predicate device’s clinical data must itself be adequate, accessible, and applicable to the device under evaluation.
If the predicate accessory’s clinical evidence is embedded within a system study, the same traceability issue applies. The equivalence claim must show that the system study data applies specifically to the predicate accessory, and therefore to the new accessory by equivalence.
If that chain of reasoning is not documented clearly, the equivalence claim fails.
A related deficiency pattern: the manufacturer claims equivalence to a predicate accessory but never demonstrates that the predicate itself has standalone clinical evidence. The entire equivalence chain then rests on system-level data that never isolated the accessory’s performance.
What reviewers look for
When I review a CER for an accessory device, I look for three things immediately:
1. Is there a clear description of what the accessory does within the system, and what risks it introduces independently?
2. Is there clinical or performance data that addresses those specific functions and risks?
3. Can I trace each safety and performance claim back to evidence that applies to this accessory, not just the system as a whole?
If any of these elements is missing, the evaluation does not meet the standard.
The problem is not that system-level evidence is unusable. The problem is that manufacturers often present it without the necessary extraction, analysis, and justification.
A well-constructed CER for an accessory will contain a section that explicitly maps system-level study outcomes to the accessory’s design features and risk mitigations. It will show which adverse events are attributable to the accessory. It will reference specific data points, tables, or study sections that isolate the accessory’s contribution.
This requires more than copying the system CER. It requires clinical and regulatory judgment.
The consequence of inadequate separation
If the accessory’s clinical evaluation is not sufficiently independent, the Notified Body will issue a deficiency. The manufacturer will be asked to provide additional data or justification.
In some cases, this leads to requests for new clinical investigations. In others, it leads to re-analysis of existing data, or expanded literature reviews.
Either way, it delays conformity assessment. It increases cost and resource burden. And it reflects a gap in clinical evaluation planning that could have been addressed earlier.
The better approach is to plan the accessory’s clinical evaluation from the beginning as an evaluation that is distinct from, though related to, the system’s.
If the accessory will be supported by system-level studies, design those studies to capture accessory-specific endpoints. If the accessory will rely on equivalence, ensure the predicate has traceable standalone evidence.
This is a design-phase decision, not a submission-phase correction.
Practical guidance for accessory CERs
Here is what works in practice:
Start with the accessory’s risk file. Identify which risks are unique to the accessory versus the system. Use this as the foundation for your evidence needs.
Map each claim and risk to specific evidence. If a claim is supported by system-level data, document exactly which part of that data applies to the accessory and why.
Do not assume the reader will make the connection. Spell it out. Reference study sections, table rows, adverse event categories. Make the traceability explicit.
If you are using equivalence, document the predicate accessory’s evidence base first. Show that it has standalone support before claiming your device is equivalent to it.
Consider whether your accessory introduces any novel feature. If it does, system-level evidence alone will not be sufficient. Plan for additional bench, literature, or clinical data.
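The claim-to-evidence mapping described above can be sketched as a simple traceability structure: each accessory-specific claim or risk points to the evidence items that substantiate it, and anything left unmapped is a gap to resolve before submission. This is an illustrative sketch only — the claim names, study references, and helper function are hypothetical, and a real CER would maintain this mapping in a controlled document rather than code:

```python
# Illustrative traceability matrix for an accessory CER.
# All claim and evidence identifiers below are hypothetical examples.

claims = {
    "Tip geometry reduces vessel trauma": [
        "Bench test TR-102 (simulated vessel model)",
        "Study X, Table 4 (vessel injury adverse events)",
    ],
    "Navigation performance within ablation system": [
        "Study X, Section 6.2 (catheter-specific endpoints)",
    ],
    "Biocompatibility of tip coating": [],  # nothing mapped yet -> gap
}

def find_gaps(traceability):
    """Return claims with no accessory-specific evidence mapped to them."""
    return [claim for claim, evidence in traceability.items() if not evidence]

for gap in find_gaps(claims):
    print(f"GAP: no accessory-specific evidence for: {gap}")
# prints: GAP: no accessory-specific evidence for: Biocompatibility of tip coating
```

The point of the exercise is not the tooling but the discipline: every claim either resolves to a named data point that applies to the accessory, or it is flagged as a gap before a reviewer flags it for you.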
The strength of an accessory CER lies in its ability to isolate and justify the accessory’s contribution to the system’s clinical performance. This isolation must be documented, not assumed.
Final reflection
The regulatory framework does not exempt accessories from clinical evaluation requirements simply because they function within a larger system. The requirement under Annex XIV applies to each device according to its own intended purpose and risk profile.
This means accessory manufacturers must think carefully about how their evidence base is structured. System-level studies can contribute, but only if they allow for clear, traceable conclusions about the accessory itself.
When that traceability is absent, the evaluation is incomplete. And incomplete evaluations do not pass conformity assessment.
The solution is not always new studies. Often, it is better extraction, better analysis, and better documentation of what the existing data actually shows about the accessory.
But that requires planning. It requires clinical judgment. And it requires a clear understanding of what reviewers need to see in order to conclude that the accessory is safe and performs as intended.
If you are preparing a CER for an accessory device, ask yourself: Can a reviewer, reading only this CER, understand what clinical evidence supports this device specifically?
If the answer is unclear, the work is not done.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– MDR 2017/745 Article 2(2), Annex XIV
– MDCG 2020-5 Clinical Evaluation – Equivalence