When a Software Update Becomes a New Device Under MDR

Hatem Rabeh, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

The software is upgraded. The manufacturer calls it version 2.3. The clinical team asks if they need new clinical data. The answer is almost always: it depends on what actually changed. And that answer, vague as it sounds, is where most deficiencies start.

I have seen manufacturers release multiple software versions, each time adding new features, expanding indications, or modifying the algorithm. Each time, the clinical evaluation is barely touched. A paragraph is added. A line in the CER states that the update does not affect the clinical performance or safety profile. No new evidence is presented. No structured justification is provided.

Then the Notified Body asks: how did you determine that the change does not require additional clinical evidence? And the file goes silent.

This is not a theoretical problem. It is common. It reflects a fundamental misunderstanding of how MDR approaches software modifications and when clinical re-evaluation becomes necessary.

The Core Issue: Software as a Living Device

Medical device software is rarely static. Updates are frequent. Some are corrections. Some are enhancements. Some fundamentally change the function, the algorithm, or the clinical claim.

Under MDR, each significant change that affects safety or performance triggers a reassessment of clinical evidence. This is not optional. Article 61 and Annex XIV make it clear: manufacturers must continuously update the clinical evaluation throughout the lifecycle of the device.

But what counts as significant? What requires new clinical data versus what can be addressed through risk analysis and literature alone?

That is where clarity breaks down.

Key Insight
A software update is not automatically a minor change just because it carries an incremental version number. What matters is whether the change affects clinical safety, performance, or the benefit-risk profile.

When Does a Software Update Require New Clinical Evidence?

The decision process starts with understanding what changed.

If the update corrects a bug that had no clinical impact, the clinical evaluation may only need documentation of the correction and confirmation that no new risk was introduced.

If the update modifies the algorithm, adds a new diagnostic feature, expands the intended use, or changes the user interface in a way that could affect clinical decision-making, then the clinical evaluation must be reopened. And new evidence may be required.

This is not about formalities. It is about demonstrating that the modified device still meets its safety and performance requirements under the new conditions.

The Question That Should Trigger Action

Here is the test I use: if the software update changes what the device does, how it does it, or who uses it for what purpose, then the clinical evidence base must be reconsidered.

If the answer is unclear, the default position should be conservative: treat the update as requiring clinical justification until proven otherwise.
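The trigger test above, including the conservative default, can be sketched as a simple decision helper. This is purely illustrative: the class, field names, and function are hypothetical, not part of any MDR tool or guidance.

```python
# Illustrative sketch of the three-question trigger test described above.
# All names are hypothetical; this is not a regulatory instrument.

from dataclasses import dataclass


@dataclass
class SoftwareChange:
    changes_function: bool     # does the update change WHAT the device does?
    changes_mechanism: bool    # does it change HOW it does it (e.g. the algorithm)?
    changes_use_context: bool  # does it change WHO uses it, for WHAT purpose?
    impact_clear: bool = True  # is the clinical impact well understood?


def requires_clinical_reassessment(change: SoftwareChange) -> bool:
    """Conservative default: an unclear impact is treated as significant."""
    if not change.impact_clear:
        return True
    return (change.changes_function
            or change.changes_mechanism
            or change.changes_use_context)


# A pure bug fix with no clinical impact and a well-understood effect:
print(requires_clinical_reassessment(SoftwareChange(False, False, False)))
# prints False -> document the decision and move on
```

The point of the sketch is the default branch: when the impact is unclear, the function returns True before any other question is asked, mirroring the conservative position described above.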

Common Deficiency
Manufacturers often rely on the risk management file alone to justify that no new clinical evidence is needed. Risk management is necessary but not sufficient. Clinical evaluation must independently assess whether the clinical data still supports the modified device.

The Role of Risk Management in the Decision

Risk management informs the clinical evaluation, but it does not replace it.

When a software update is made, the manufacturer must update the risk analysis. New hazards may be introduced. Existing hazards may be mitigated or worsened. This is standard practice.

But the clinical evaluation goes further. It asks: does the existing clinical evidence still demonstrate that the device is safe and performs as intended, given the modification? Or does the change introduce a gap in the evidence base?

For example, if a new algorithm is introduced to improve diagnostic accuracy, the clinical evaluation must address whether the claim of improved accuracy is supported by clinical data. A risk analysis that concludes the change is low-risk does not answer that question.

The Notified Body Perspective

From the reviewer’s perspective, the absence of clinical justification for a significant software update is a red flag.

It suggests that the manufacturer has not fully considered the implications of the change. It raises doubt about whether the clinical evaluation process is truly continuous, as required by MDR.

Notified Bodies will ask: where is the evidence that the modified device still meets the general safety and performance requirements? Where is the rationale for concluding that no new clinical data was necessary?

If those answers are not clearly documented in the clinical evaluation, the submission stalls.

Types of Software Changes and Their Clinical Implications

Let me break this down more concretely.

Bug Fixes and Minor Corrections

If the update corrects a defect that had no clinical consequence, the clinical evaluation may only need to document the correction and confirm through risk analysis that no new risk was introduced.

The key is documentation. The clinical evaluation should state clearly what was corrected, why it had no clinical impact, and what evidence supports that conclusion.

Performance Enhancements

If the update claims to improve performance—faster processing, better accuracy, enhanced usability—then the clinical evaluation must address whether that claim is substantiated.

Even if the core function remains the same, a claim of improved performance is a clinical claim. It requires evidence.

Literature may suffice if the enhancement is based on a well-established technical principle. But often, new clinical data or at least new performance data is needed.

New Features or Expanded Indications

This is where the requirement for new evidence becomes clearest.

If the software now performs a new function, supports a new clinical indication, or is used in a new patient population, the existing clinical evidence base may not cover the change.

The clinical evaluation must identify the gap and determine what evidence is needed to close it. This might be new clinical studies, additional literature, or real-world data from post-market surveillance.

Key Insight
The clinical evaluation must explicitly state whether the existing evidence base is still applicable after the software update, or whether new evidence is required. Silence on this point is not acceptable under MDR.

Algorithm Modifications

This is the most complex case.

If the algorithm that drives the clinical output is modified—whether through manual reprogramming or through machine learning updates—the clinical evaluation must assess whether the modified algorithm still performs as intended.

For machine learning algorithms, this becomes particularly challenging. If the algorithm adapts based on new data, how do you ensure that its clinical performance remains within acceptable limits?

MDCG 2019-11 on software qualification and classification provides some guidance, but the clinical evaluation must still address whether the clinical evidence supports the modified algorithm’s performance claims.
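The four change categories above can be summarised as a simple lookup from change type to the typical clinical-evaluation action. The mapping is a condensed restatement of this section, not an exhaustive or authoritative decision table.

```python
# Hypothetical summary of the change categories discussed above and the
# typical clinical-evaluation action each one calls for. Illustrative only.

TYPICAL_ACTION: dict[str, str] = {
    "bug_fix_no_clinical_impact": "Document the correction; confirm via risk analysis that no new risk was introduced",
    "performance_enhancement":    "Substantiate the claim with evidence (literature or new performance/clinical data)",
    "new_feature_or_indication":  "Run a gap analysis; new clinical evidence is likely required",
    "algorithm_modification":     "Reassess whether existing evidence supports the modified algorithm; reopen the CER",
}

for change_type, action in TYPICAL_ACTION.items():
    print(f"{change_type}: {action}")
```

In practice, the category boundaries blur (an enhancement may also modify the algorithm), so the table is a starting point for the documented rationale, not a substitute for it.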

The Documentation Challenge

The problem I see most often is not that manufacturers fail to collect evidence. It is that they fail to document the rationale for their decisions.

The clinical evaluation report should include a dedicated section on software updates. Each update should be listed, with a clear explanation of what changed and whether new clinical evidence was required.

If new evidence was not required, the rationale must be documented. This includes reference to the risk analysis, the unchanged intended use, and the applicability of existing clinical data.

If new evidence was collected, it must be integrated into the clinical evaluation and appraised like any other evidence.

This is not bureaucracy. It is the demonstration of a continuous evaluation process, which is the foundation of MDR’s approach to device lifecycle management.

Common Deficiency
Software updates are mentioned in the technical file but not reflected in the clinical evaluation report. The CER remains unchanged across multiple versions, with no explanation of how each update was assessed. This creates a gap that reviewers will not overlook.

The Practical Workflow

Here is how I recommend structuring the process.

When a software update is planned, the clinical team should be involved early. Before the update is finalized, a preliminary assessment should be made: does this change affect the clinical claims, the safety profile, or the performance characteristics?

If yes, the clinical evaluation must be updated. The scope of the update depends on the nature of the change.

If no, the decision should be documented with reference to the risk analysis and the unchanged intended use.

After the update is released, the clinical evaluation report should be revised to include the new version, the justification for the decision, and any new evidence that was collected.

This should happen before the next Notified Body review, not during it.

The Role of PMCF

Post-market clinical follow-up is particularly important for software devices.

Because software updates are frequent, PMCF provides a mechanism to continuously verify that the device performs as intended in real-world use.

If the PMCF plan is well-designed, it can provide evidence that supports the clinical evaluation of software updates. For example, real-world performance data can demonstrate that a new feature performs safely and effectively without requiring a new clinical study.

But this only works if the PMCF plan is structured to capture relevant data and if the data is analyzed and integrated into the clinical evaluation.

When Equivalence Is No Longer Valid

One scenario that often gets missed: a software update that breaks an equivalence claim.

If the original device relied on equivalence to a predicate device, and the update changes a key characteristic, the equivalence may no longer hold.

For example, if the algorithm is modified, the device may no longer be technically equivalent to the predicate. If a new feature is added, the intended use may no longer be the same.

In these cases, the clinical evaluation must be rebuilt. New evidence is required, because the equivalence pathway is no longer available.

This is a significant regulatory event, and it is often underestimated.

Key Insight
A software update can invalidate an equivalence claim. If the update changes the technical or clinical characteristics that formed the basis of equivalence, the clinical evaluation must be reassessed from the ground up.

What Notified Bodies Expect to See

Notified Bodies expect to see a clear trail.

They want to see that each software update was assessed for its clinical impact. They want to see the rationale for the decision on whether new evidence was needed. They want to see that the clinical evaluation report reflects the current version of the device, not an outdated one.

They also want to see that the manufacturer has a process in place for managing software updates from a clinical evaluation perspective. This process should be described in the quality management system and reflected in the technical documentation.

If the process is missing, or if the documentation is inconsistent, the review will stall. And the manufacturer will be asked to go back and rebuild the rationale for every update, which is far more work than maintaining the documentation in real time.

The Strategic View

Software updates are a normal part of device lifecycle. They should not be treated as surprises.

The clinical evaluation process should anticipate updates and be structured to accommodate them. This means building flexibility into the clinical evaluation plan, ensuring that PMCF captures relevant data, and maintaining clear communication between the development team and the regulatory team.

When this is done well, software updates become manageable regulatory events. When it is not, they become sources of delay and deficiency.

The difference is not in the complexity of the software. It is in the clarity of the process.

The version number changes, but the clinical responsibility does not. Each update must be clinically justified. Each modification must be reflected in the clinical evaluation. And each claim must be supported by evidence.

That is not a burden. It is the standard.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle as new clinical data, including PMCF findings, becomes available.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).


References:
– MDR 2017/745 Article 61, Annex XIV
– MDCG 2020-13: Clinical Evaluation Assessment Report Template
– MDCG 2019-11: Guidance on Qualification and Classification of Software

Deepen Your Knowledge

Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the full clinical evaluation process under MDR 2017/745.