Why Your Software Accessory Might Need a Full Clinical Evaluation
I reviewed a submission last year where a manufacturer presented a mobile app for post-procedure patient monitoring. They included a one-page document stating it was “just an accessory” and therefore exempt from clinical evaluation. The Notified Body stopped the review immediately. The manufacturer lost four months rebuilding documentation they thought was optional.
In This Article
- The Regulatory Definition Creates Confusion
- Classification Drives Clinical Evidence Depth
- The Clinical Evaluation Must Address Software-Specific Risks
- Equivalence Strategies Often Fail for Software Accessories
- What Clinical Data Is Actually Needed
- PMCF Planning for Software Accessories
- What Happens During Notified Body Review
- Practical Steps Forward
- Final Thoughts
This happens more often than it should. Software accessories occupy a regulatory grey zone that many teams misunderstand. The term “accessory” suggests something minor, secondary, or less critical. But under MDR, accessories are devices. And devices require clinical evaluation.
The confusion starts with how teams approach classification and clinical evidence planning. Software that processes, analyzes, or displays data for clinical decisions carries clinical risk. That risk does not disappear because the software is labeled an accessory to another device.
The Regulatory Definition Creates Confusion
MDR Article 2(2) defines an accessory as an article which, whilst not being a medical device itself, is intended by its manufacturer to be used together with one or several particular medical devices to specifically enable those devices to be used in accordance with their intended purpose, or to specifically and directly assist their medical functionality.
The definition sounds straightforward. But it does not reduce clinical evaluation requirements. An accessory is classified independently according to Annex VIII. That classification determines the depth of clinical evidence required.
I see teams assume that because their software supports another device, the clinical evaluation can be minimal or derivative. That assumption breaks down during Notified Body review. The clinical evaluation must address the specific risks and performance claims of the accessory itself, not just reference the primary device.
Manufacturers cite the clinical data of the primary device and claim it covers the accessory. Reviewers reject this because the accessory introduces distinct functions, interfaces, and failure modes that require independent evaluation.
Classification Drives Clinical Evidence Depth
Software accessories typically fall under Rule 11 of Annex VIII. If the software provides information used to take decisions with diagnostic or therapeutic purposes, it is at least Class IIa. If those decisions could cause a serious deterioration of health, it becomes Class IIb; if they could cause death or an irreversible deterioration, it becomes Class III. Software that monitors vital physiological parameters where variations could put the patient in immediate danger is also Class IIb.
Classification determines the conformity assessment route and the Notified Body involvement. It also defines the scope and rigor expected in the clinical evaluation report.
A Class IIa software accessory that calculates dosage recommendations needs clinical data demonstrating that those calculations are accurate, that the interface is usable in real clinical settings, and that errors do not lead to patient harm. You cannot skip this by pointing to the primary device’s clinical evaluation.
I worked with a team developing software to optimize imaging parameters for an ultrasound system. They classified it as Class I based on incorrect interpretation of Rule 11. During review, the Notified Body reclassified it as Class IIa. The clinical evaluation report they had prepared was insufficient. They had to conduct a usability study and collect post-market data retrospectively.
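Misclassifications like that one are usually avoidable. As a rough sanity check, the decision logic of Rule 11 can be sketched as a simple function. This is a simplified illustration with my own shorthand for the inputs, not a substitute for applying the rule and MDCG 2019-11 in full or for consulting your Notified Body.

```python
def rule_11_class(informs_decisions: bool,
                  worst_decision_impact: str,
                  monitors_physiology: bool,
                  vital_parameter_variation_is_dangerous: bool) -> str:
    """Simplified sketch of MDR Annex VIII Rule 11 for software.

    worst_decision_impact: "death_or_irreversible", "serious_deterioration",
    or "other". Illustrative only; read the rule itself before relying on it.
    """
    if informs_decisions:
        if worst_decision_impact == "death_or_irreversible":
            return "III"
        if worst_decision_impact == "serious_deterioration":
            return "IIb"
        return "IIa"
    if monitors_physiology:
        # Monitoring vital physiological parameters where variations could
        # put the patient in immediate danger pushes the software to IIb.
        return "IIb" if vital_parameter_variation_is_dangerous else "IIa"
    return "I"  # all other software

# The imaging-parameter software above informs therapeutic decisions,
# so it lands at Class IIa even though it "only" supports another device.
print(rule_11_class(True, "other", False, False))  # -> "IIa"
```

Even this crude sketch shows that "accessory" never appears as an input. The class is driven entirely by what the software itself does.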
The Clinical Evaluation Must Address Software-Specific Risks
Software accessories introduce risks that differ from hardware devices. Algorithm errors, data handling failures, integration issues, and user interface misinterpretations are not covered by clinical data from the primary device.
The clinical evaluation must analyze these risks explicitly. This means going beyond general software validation. You need clinical evidence showing that the software performs safely and effectively when used in the intended clinical environment with the intended user population.
Clinical evaluation of software accessories must demonstrate that the software itself meets its intended purpose without introducing unacceptable risk. This requires data on algorithmic performance, user interaction, and clinical outcome impact, not just device integration testing.
Consider a software accessory that analyzes ECG waveforms and highlights segments for physician review. The clinical evaluation must show that the analysis algorithm correctly identifies relevant segments, that physicians can interpret the highlighted data accurately, and that the software does not cause diagnostic errors through false positives or false negatives.
This requires clinical data. Not just verification and validation reports. You need studies demonstrating clinical performance in real use conditions.
Equivalence Strategies Often Fail for Software Accessories
Equivalence under MDR requires demonstrating that your device is equivalent to another device for which sufficient clinical evidence already exists. Equivalence must cover technical, biological, and clinical characteristics.
For software accessories, equivalence is difficult to establish. Software algorithms are unique. Even if the intended purpose is similar to another product, the implementation, user interface, integration, and error handling differ.
I reviewed an equivalence claim where the manufacturer compared their blood glucose monitoring app to a competitor’s app. They argued both apps displayed glucose readings and sent alerts. But the apps used different algorithms for trend prediction, different alert thresholds, and different user interfaces. The equivalence claim was rejected because the clinical characteristics were not equivalent.
Equivalence for software requires demonstrating that the algorithms, data processing, and clinical decision support mechanisms are sufficiently similar. In most cases, manufacturers cannot demonstrate this. They must generate their own clinical data.
What Clinical Data Is Actually Needed
The clinical evaluation must answer whether the software accessory is safe and performs as intended. This requires evidence on several levels.
First, algorithmic performance. If the software processes or analyzes data, you need data showing accuracy, sensitivity, specificity, or other relevant performance metrics. This data must come from real clinical samples or realistic simulated data that reflects the intended use population.
Second, usability in clinical context. Software is used by clinicians or patients. The clinical evaluation must address whether the interface, workflows, and outputs are usable without causing errors. This often requires human factors studies with representative users.
Third, clinical outcome impact. Does the software improve diagnostic accuracy, treatment effectiveness, or patient safety? Or does it introduce new risks such as alert fatigue, misinterpretation, or workflow disruption?
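To make the algorithmic performance point concrete, here is a minimal sketch of the kind of quantitative summary that sits behind claims of sensitivity and specificity. The function and the example counts are illustrative only; a real analysis also needs confidence intervals, pre-specified acceptance criteria, and a justified sample size drawn from the intended use population.

```python
def performance_summary(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Basic clinical performance metrics from a 2x2 confusion matrix
    (algorithm output vs. the clinical reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),           # true positive rate
        "specificity": tn / (tn + fp),           # true negative rate
        "ppv": tp / (tp + fp),                   # positive predictive value
        "npv": tn / (tn + fn),                   # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative numbers only: 200 annotated ECG segments reviewed against
# an expert reference standard.
print(performance_summary(tp=82, fp=9, tn=101, fn=8))
```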
Manufacturers present verification and validation testing as clinical evidence. V&V shows the software works as designed, but does not show the software is clinically safe and effective. Reviewers require clinical performance data from real use or simulated clinical conditions.
For lower-risk accessories, literature data and post-market surveillance from similar devices may suffice. For higher-risk accessories, clinical investigations or post-market clinical follow-up become necessary.
PMCF Planning for Software Accessories
Post-market clinical follow-up is required under MDR unless its omission can be specifically justified and documented. Software accessories are no exception.
PMCF for software must address how the software performs in real-world clinical use. This means tracking clinical outcomes, usage patterns, error rates, and user feedback.
I worked with a manufacturer whose PMCF plan for a diagnostic support app consisted only of complaint monitoring. The Notified Body required active data collection on diagnostic accuracy in post-market use. They had to implement a registry to collect real-world performance data.
PMCF should be built into the product from the beginning. Software can log usage data, track outcomes, and collect user feedback automatically if designed with PMCF in mind. Retrofitting PMCF after market entry is more difficult and less effective.
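As one possible pattern, the sketch below shows a structured PMCF event that a software accessory could record at the point of use and later aggregate into the PMCF evaluation report. The field names and event types are hypothetical, the storage is just a local JSON-lines file, and any real implementation would also have to address data protection and traceability to the PMCF plan.

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PMCFEvent:
    """Hypothetical structured record for post-market clinical follow-up."""
    event_type: str        # e.g. "analysis_run", "alert_shown", "user_override"
    software_version: str
    outcome: str           # e.g. "accepted", "overridden", "error"
    device_id: str         # pseudonymised site/unit identifier, not patient data
    timestamp: str = ""
    event_id: str = ""

    def __post_init__(self):
        self.timestamp = self.timestamp or datetime.now(timezone.utc).isoformat()
        self.event_id = self.event_id or str(uuid.uuid4())

def log_pmcf_event(event: PMCFEvent, sink) -> None:
    """Append one event as a JSON line to the given writable sink."""
    sink.write(json.dumps(asdict(event)) + "\n")

# Usage sketch: record that a clinician overrode a suggested finding.
with open("pmcf_events.jsonl", "a") as f:
    log_pmcf_event(PMCFEvent("user_override", "1.4.2", "overridden", "site-07-unit-12"), f)
```

Designing the record format alongside the PMCF plan means the data you collect actually answers the questions the plan asks, instead of whatever the application happened to log.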
What Happens During Notified Body Review
Notified Bodies review clinical evaluations for software accessories with the same rigor as other devices. They look for clear intended purpose, justified classification, identified clinical risks, and sufficient clinical evidence addressing those risks.
Common questions from reviewers include:
- Does the clinical evaluation address the software’s specific functions and failure modes?
- Is there clinical data showing performance in real or simulated clinical use?
- How does the manufacturer monitor clinical performance post-market?
- If equivalence is claimed, is it technically and clinically justified?
When the clinical evaluation does not answer these questions, the manufacturer receives deficiency letters. The review stalls until the gaps are closed.
Notified Bodies treat software accessories as independent devices. Referencing the primary device’s clinical evaluation is not sufficient. The accessory must have its own clinical evaluation addressing its own risks and performance claims.
Practical Steps Forward
If you are developing or maintaining a software accessory, start with correct classification. Do not assume the accessory is Class I because it supports another device. Apply Annex VIII Rule 11 carefully and consult with your Notified Body early.
Plan your clinical evaluation based on that classification. Identify the clinical risks specific to the software. Determine what clinical evidence is needed to demonstrate safety and performance.
Do not rely on equivalence unless you can demonstrate true technical and clinical similarity. In most cases, generating your own clinical data is faster and more defensible.
Build PMCF into the product design. Plan how you will collect real-world performance data from the start. This makes post-market obligations manageable and provides valuable evidence for future updates and submissions.
Engage your Notified Body early in the process. Clarify classification, discuss clinical evidence expectations, and align on the clinical evaluation approach before drafting the report.
Final Thoughts
Software accessories are devices. They have clinical risk. They require clinical evaluation. The fact that they support another device does not reduce these requirements.
Teams that approach software accessory clinical evaluation seriously from the beginning avoid delays and deficiencies. Teams that assume minimal requirements often face significant rework during Notified Body review.
The question is not whether your software accessory needs clinical evaluation. The question is whether your clinical evaluation addresses the specific risks and performance claims of that software in a way that satisfies MDR requirements and Notified Body expectations.
If you cannot answer that question clearly, you have work to do.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
References
- MDR 2017/745, Article 2(2) – Definition of accessories
- MDR 2017/745, Annex VIII – Classification rules
- MDR 2017/745, Annex XIV – Clinical evaluation requirements
- MDCG 2020-1 – Guidance on Clinical Evaluation of Medical Device Software
- MDCG 2019-11 – Guidance on Qualification and Classification of Software





