When Software Talks to Devices: The Interoperability Trap
The clinical evaluation looked solid. The software worked perfectly in isolation. Then the Notified Body asked one simple question: ‘How do you know it works with all the devices it connects to?’ The manufacturer had no documentation covering interoperability failures. The submission stopped there.
I see this scenario repeatedly. Manufacturers invest heavily in clinical evaluation of their software’s core functionality. They document algorithms, clinical benefits, and user workflows in detail. Then they treat interoperability as a technical afterthought.
It never works out that way.
When software communicates with other medical devices, monitors, or health information systems, that interface is not just a technical connection. It is a clinical risk pathway. And regulatory reviewers know exactly where to look for missing evidence.
Why Interoperability Is a Clinical Question
Under MDR Article 2(1), a medical device is defined by its intended purpose. If your software is designed to receive data from another device, process it, and produce clinical outputs, then the entire chain matters.
The clinical evaluation cannot stop at the software boundary. It must address what happens when data comes in corrupted, delayed, or formatted incorrectly. It must consider what happens when the connected device operates outside its specifications.
MDCG 2020-1 makes this explicit when discussing clinical evaluation scope. The performance and safety profile must cover all aspects of the device’s operation, including interfaces that affect clinical decision-making.
Yet most clinical evaluation reports I review treat interoperability as a validation checkbox. They cite IEC 62304 compliance and move on. That works for software documentation. It does not work for clinical evaluation.
A typical deficiency finding: the clinical evaluation describes the software’s intended use without documenting which specific devices or systems it will interface with, what data formats it accepts, or how interface failures could affect patient outcomes.
The Regulatory Expectation Gap
When you submit a clinical evaluation for software with interoperability functions, reviewers expect to see three layers of evidence.
First, they want a clear description of the interoperability architecture. Not a technical specification document. A clinical description of what flows in, what flows out, and what clinical decisions depend on that flow.
Second, they want hazard analysis that explicitly covers interface risks. What happens if the connected glucose meter sends values in mmol/L when the software expects mg/dL? What happens if the ECG monitor sends incomplete waveforms? (A minimal guard for the first scenario is sketched at the end of this section.)
Third, and this is where most files fail, they want clinical evidence that the software performs safely across the range of devices it claims to support.
This is not about theoretical compatibility. It is about documented performance.
If your software claims to work with ‘CE-marked pulse oximeters,’ your clinical evaluation must either test with representative models or justify why performance does not vary across that device category. Generic claims create documentation gaps.
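To make the glucose-meter scenario from the hazard-analysis point above concrete, here is a minimal sketch of the kind of defensive check an interface risk analysis might call for. The function name, unit strings, and plausibility range are illustrative assumptions for this sketch, not requirements from any standard.

```python
# Illustrative guard against unit mismatch on an incoming glucose value.
# The plausibility range and error handling are assumptions for this sketch.

MGDL_PER_MMOLL = 18.016  # approximate conversion factor for glucose

def normalize_glucose_mgdl(value: float, unit: str) -> float:
    """Return the glucose value in mg/dL, refusing to guess on unknown units."""
    if unit == "mg/dL":
        mgdl = value
    elif unit == "mmol/L":
        mgdl = value * MGDL_PER_MMOLL
    else:
        # Unknown or missing unit: reject rather than feed a wrong value
        # into clinical logic downstream.
        raise ValueError(f"unsupported glucose unit: {unit!r}")

    # Plausibility check: out-of-range values are flagged for review instead
    # of being displayed or used (range chosen for illustration only).
    if not 10.0 <= mgdl <= 1500.0:
        raise ValueError(f"glucose value {mgdl:.1f} mg/dL outside plausible range")
    return mgdl
```

The point is not the conversion itself. It is that the behaviour on mismatch, rejection and escalation rather than silent acceptance, is a documented risk control the clinical evaluation can point to.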
Where the Evidence Comes From
Now the difficult part. How do you generate clinical evidence for interoperability when you cannot test every possible device combination?
Start with risk classification. Not all interfaces carry the same clinical weight.
If your software receives diagnostic images for automated analysis, the interface is high risk. Image corruption, format misinterpretation, or metadata loss directly affects clinical output. You need specific evidence showing that the software handles variations correctly.
If your software logs activity data from fitness wearables for wellness monitoring, the interface carries lower clinical risk. You still need evidence, but the scope can be more focused.
The clinical evaluation should categorize interfaces by clinical impact and then document evidence proportionate to that impact.
For high-risk interfaces, you need validation studies showing the software performs correctly across representative device models. This means testing with actual devices, not simulated data streams.
For lower-risk interfaces, you may rely on standards compliance and error-handling verification, combined with post-market surveillance to detect field issues.
But here is what many manufacturers miss: you cannot decide this internally and expect reviewers to agree. The clinical evaluation must justify the approach.
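One way to make that justification transparent is an explicit interface risk register that ties each interface to its clinical impact and the evidence planned for it. The sketch below is illustrative only; the tiers and field names are my assumptions, not anything prescribed by the MDR or MDCG guidance.

```python
from dataclasses import dataclass, field
from enum import Enum

class ClinicalImpact(Enum):
    HIGH = "high"          # interface failure can directly alter a clinical decision
    MODERATE = "moderate"
    LOW = "low"            # e.g. wellness data with no diagnostic claim

@dataclass
class InterfaceEntry:
    name: str                       # e.g. "pulse oximeter SpO2 feed"
    claimed_scope: str              # specific models or a justified category
    data_exchanged: str
    clinical_impact: ClinicalImpact
    planned_evidence: list[str] = field(default_factory=list)

# Example entry with placeholder content.
register = [
    InterfaceEntry(
        name="Pulse oximeter SpO2 feed",
        claimed_scope="Representative CE-marked pulse oximeter models",
        data_exchanged="SpO2, pulse rate",
        clinical_impact=ClinicalImpact.HIGH,
        planned_evidence=[
            "validation with representative devices",
            "malformed-input and boundary-value testing",
            "PMCF tracking of interface errors",
        ],
    ),
]
```

The value is traceability: a reviewer can see, interface by interface, why the planned evidence is proportionate.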
The Standards Are Not Enough
IEC 62304 requires you to document software interfaces and verify their implementation. That is necessary but not sufficient.
HL7, DICOM, the Bluetooth Medical Device Profile, and the IEEE 11073 family of standards all establish communication protocols. Compliance proves technical interoperability. It does not prove clinical safety.
I have reviewed files where manufacturers listed ten compatibility standards and assumed that covered interoperability risks. The Notified Body response was always the same: ‘Show us the clinical evidence that the software functions safely when these interfaces operate at the edge of the standard’s specification.’
That question exposes the gap. Standards define nominal operation. Clinical evaluation must address degraded operation, unexpected input patterns, and device behavior outside the manufacturer’s direct control.
A typical deficiency finding: the clinical evaluation states ‘Software complies with the HL7 FHIR standard’ without documenting how errors, incomplete messages, or non-conformant implementations are detected and managed to prevent clinical harm.
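As an illustration of what ‘detected and managed’ can look like at the interface boundary, here is a minimal conformance gate for an incoming FHIR Observation using only the Python standard library. The set of required fields and the reject-on-failure behaviour are assumptions for this sketch, not obligations from the FHIR specification.

```python
import json

# Minimal set of paths this sketch treats as required before clinical use.
REQUIRED_PATHS = [
    ("code", "coding"),
    ("valueQuantity", "value"),
    ("valueQuantity", "unit"),
]

def gate_observation(raw: bytes) -> dict | None:
    """Return the parsed Observation if usable, otherwise None.

    Malformed or incomplete messages never reach clinical processing; a real
    system would also quarantine them for review and PMCF trending.
    """
    try:
        obs = json.loads(raw)
    except json.JSONDecodeError:
        return None  # not valid JSON at all

    if not isinstance(obs, dict) or obs.get("resourceType") != "Observation":
        return None  # unexpected payload or resource type

    for path in REQUIRED_PATHS:
        node = obs
        for key in path:
            if not isinstance(node, dict) or key not in node:
                return None  # incomplete message: reject rather than guess
            node = node[key]
    return obs
```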
What Interoperability Evidence Looks Like
In practical terms, clinical evaluation of interoperability requires three documentation elements.
First, an interoperability risk analysis integrated into the overall risk management file. This analysis identifies which connected devices or systems fall within the intended use, which interface failures could occur, and how those failures could affect clinical use.
Second, validation reports showing that the software handles both conformant and non-conformant inputs correctly. This includes testing with devices that send boundary values, malformed packets, or unexpected data types.
If your software claims to work with ‘all CE-marked blood pressure monitors,’ you need representative testing. Not all devices. Representative coverage based on communication protocols, data formats, and known device behaviors.
Third, post-market surveillance structured to detect interoperability issues in real use. You cannot test every combination pre-market. PMCF must fill that gap.
This means field data collection should specifically track which devices are connected, which interface errors occur, and whether any clinical decisions were affected.
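A minimal sketch of what one such field record might capture is shown below. The field names, device model, and firmware strings are illustrative assumptions; a real implementation would align with the PMCF plan and applicable data-protection requirements.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class InterfaceEvent:
    """One interoperability-relevant event captured for PMCF trending."""
    timestamp: datetime
    device_model: str               # as reported by the connected device, if available
    device_firmware: str
    interface: str                  # e.g. "HL7 FHIR inbound", "BLE vitals stream"
    error_type: str                 # e.g. "unit mismatch", "incomplete message"
    data_rejected: bool             # was the data kept out of clinical logic?
    clinical_output_affected: bool  # did any displayed result or alert change?

# Hypothetical example record.
event = InterfaceEvent(
    timestamp=datetime.now(timezone.utc),
    device_model="ExampleMonitor X2",
    device_firmware="3.0.1",
    interface="HL7 FHIR inbound",
    error_type="missing valueQuantity.unit",
    data_rejected=True,
    clinical_output_affected=False,
)
```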
The Update Problem
Here is where interoperability creates ongoing clinical evaluation work.
When a connected device updates its firmware or a health information system changes its API version, your software’s safety profile may change.
If you claim broad compatibility, you inherit responsibility for monitoring those changes. The clinical evaluation must describe how you track compatibility over time and what triggers a re-evaluation.
Most manufacturers discover this during their first PMCF cycle. They release software that works with Device A version 2.1. Six months later, Device A updates to version 3.0 with new data fields. Users start seeing errors.
The question from the Notified Body: ‘How does your clinical evaluation process account for changes in connected device behavior?’
If the answer is not already documented in the CER, you have a gap.
Interoperability is not a one-time validation. It is a continuous surveillance requirement. Your PMCF plan must include specific methods for detecting compatibility issues as connected devices evolve.
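One way to operationalise that surveillance is to keep the validated device and firmware combinations in a machine-readable list and flag anything outside it, so a connected-device update triggers the documented re-evaluation path rather than passing silently. A minimal sketch, with illustrative device names and versions:

```python
# Validated combinations would come from the interoperability validation
# reports; the entries below are illustrative placeholders.
VALIDATED_FIRMWARE = {
    "DeviceA": {"2.0", "2.1"},
    "DeviceB": {"1.4"},
}

def compatibility_status(model: str, firmware: str) -> str:
    """Classify a connected device against the validated compatibility list."""
    validated = VALIDATED_FIRMWARE.get(model)
    if validated is None:
        return "unknown device: outside claimed interoperability scope"
    if firmware not in validated:
        # This is the Device A 2.1 -> 3.0 scenario above: the finding should
        # feed change evaluation and, where needed, a CER update.
        return "untested firmware: trigger interoperability re-evaluation"
    return "validated combination"

print(compatibility_status("DeviceA", "3.0"))
```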
Equivalence and Interoperability
If you are using equivalence for part of your clinical evaluation, interoperability creates additional complexity.
You cannot claim equivalence with another software device and then declare broader interoperability scope. If your predicate device only works with three specific monitor models, your equivalence argument does not extend to compatibility with ten other models.
I see this attempted frequently. A manufacturer finds equivalent software with good clinical data, claims equivalence for core functionality, then adds extensive interoperability features without generating supporting evidence.
Reviewers reject this immediately. Equivalence requires comparable technical characteristics and clinical use. Expanding the range of connected devices changes both.
If interoperability scope differs from your equivalent device, you need clinical data covering that difference.
Documentation Strategy
When building your clinical evaluation for software with interoperability functions, structure the CER to address interfaces explicitly.
Include a dedicated section describing the interoperability architecture from a clinical perspective. Which devices connect, what clinical data flows, and what decisions depend on that data.
Then show how the risk analysis covers interface failures. Not generic software risks. Specific scenarios where connected device behavior could compromise clinical safety or performance.
Then present the evidence. Validation testing, clinical studies if applicable, and PMCF plans that monitor real-world compatibility.
Make the logic transparent. The reviewer should see how you determined which device combinations to test, how you validated performance, and how you will detect issues post-market.
If you cannot test certain combinations pre-market, state that explicitly and justify your PMCF approach. Reviewers accept reasonable limitations if you demonstrate awareness and mitigation strategy.
What they do not accept is silence. If your software connects to other devices and the CER does not address interoperability risks, you have a major deficiency.
A typical deficiency finding: interoperability is mentioned in the technical documentation but absent from the clinical evaluation report. The CER evaluates software performance without addressing what happens when connected devices behave unexpectedly.
The Coming Scrutiny
As health systems become more integrated, interoperability scrutiny will increase. Notified Bodies are already asking more detailed questions. Regulators see interoperability failures contributing to patient harm.
The manufacturers who prepare now, who build clinical evaluation processes that genuinely address interface risks, will move faster through review cycles.
Those who treat interoperability as purely technical will face repeated deficiencies and expensive evidence generation late in development.
This is not a future risk. It is happening in current submissions. The question is whether you address it proactively or reactively.
When software talks to devices, every conversation is a clinical event. Your clinical evaluation must prove you understand what can go wrong in those conversations and that you have evidence showing your software responds safely.
That evidence gap closes with documentation strategy, not technical assurances.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report). For software specifically, MDCG 2020-1 and IEC 62304 are also directly relevant.
References
– MDR 2017/745, Article 2(1) and Article 61
– MDCG 2020-1 (Clinical Evaluation of Medical Device Software)
– IEC 62304 (Medical Device Software Life Cycle Processes)
Deepen Your Knowledge
Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation requirements under MDR 2017/745.





