When Usability Testing Stops Being Just Usability

Written by Hatem Rabeh, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

I reviewed a CER last month where the manufacturer had completed full IEC 62366 compliance testing, documented every use error scenario, validated the user interface, and passed the summative evaluation. The Notified Body still issued a major nonconformity on clinical evidence. Why? Because none of that usability work appeared in the clinical evaluation. The gap between human factors engineering and clinical evidence is not a documentation problem. It’s a conceptual misunderstanding of what clinical evaluation actually requires under MDR.

Most manufacturers treat IEC 62366 as an engineering deliverable and clinical evaluation as a separate medical deliverable. That separation creates a blind spot. Under MDR Article 61 and Annex XIV, clinical evaluation must address safety and performance across the device lifecycle, and usability directly determines both.

When a user cannot identify the correct insertion depth on a catheter, that is not just a design issue. It is a clinical risk that translates into vessel perforation, hemorrhage, or failed procedures. When a software interface presents unclear dosing calculations, the resulting errors are adverse events with clinical consequences.

Yet usability reports rarely make it into clinical evaluation reports with the clinical framing that Notified Bodies expect.

Why IEC 62366 Data Is Clinical Evidence

IEC 62366 defines usability as the characteristic of the user interface that establishes effectiveness, efficiency, ease of learning, and user satisfaction. These are not abstract qualities. They translate directly into clinical outcomes.

Under MDR, clinical data means information on safety or performance obtained from the use of a device. MDCG 2020-6 clarifies that clinical data includes real-world evidence, post-market data, and usability evaluations when those evaluations address safety and performance in clinical use.

This is the shift that many manufacturers miss. Usability is not just about compliance with a standard. It is about demonstrating that the device performs safely and effectively when real users operate it in real conditions.

Key Insight
Usability testing generates clinical evidence when it demonstrates that intended users can operate the device safely and effectively under realistic conditions. The same test that validates your user interface can support your clinical claims if framed correctly.

The challenge is that usability engineers and clinical evaluators speak different languages. Usability engineers focus on task success rates, time on task, and error frequencies. Clinical evaluators focus on hazards, adverse events, and benefit-risk balance.

The translation between these perspectives does not happen automatically. It requires deliberate integration.

Where the Integration Breaks Down

I see three recurring gaps when reviewing CERs and technical files.

Gap One: Usability Data Stays in Risk Management Files

Manufacturers complete formative evaluations, identify use errors, implement design changes, and conduct summative validation. All of that work stays in the IEC 62366 usability engineering file. The clinical evaluator receives a summary statement that usability testing was completed and passed.

That is not clinical integration. That is a cross-reference.

The clinical evaluator needs to understand which use errors were observed, which hazards they could trigger, how design mitigations reduced risk, and what residual risks remain. Without that detail, the CER cannot address whether the device is safe and effective in the hands of real users.

Gap Two: Formative Testing Is Treated as Preliminary Work

Formative usability testing is often dismissed as early-stage design validation. But formative testing reveals how users actually interact with the device before the design is locked. It exposes misunderstandings, workarounds, and cognitive errors that may persist even after design improvements.

That information is critical for clinical evaluation. It shows what can go wrong and why. It informs the post-market surveillance strategy. It helps the clinical evaluator anticipate which adverse events might appear in real-world use.

If formative findings never reach the CER, the clinical evaluator is working blind.

Common Deficiency
Notified Bodies frequently issue findings when CERs claim safe and effective use but provide no usability data to support that claim. The absence of usability integration is interpreted as incomplete clinical evidence, not as a separate compliance track.

Gap Three: Summative Testing Is Framed as Pass/Fail

Summative usability validation is designed to confirm that the device can be used safely and effectively by the intended users in the intended environment. That is a clinical claim.

But most CERs present summative results as a binary outcome. The device passed. No critical use errors were observed. Testing complete.

That framing wastes the clinical value of the data. The clinical evaluator should know which tasks were tested, how users performed, where hesitation or confusion occurred, and how performance compared to the acceptable risk threshold defined in the usability plan.

The summative report is not just validation evidence. It is clinical performance data.

How to Integrate Usability Data into Clinical Evaluation

Integration is not about copying usability reports into the appendix of the CER. It is about translating usability findings into clinical language and using that translation to support clinical conclusions.

Step One: Map Use Errors to Clinical Hazards

Every use error identified in formative or summative testing should be traced to a clinical hazard in the risk management file. That traceability allows the clinical evaluator to assess which hazards have been tested, which mitigations have been validated, and which risks remain.

If a use error does not link to a clinical hazard, either the risk analysis is incomplete or the usability testing is addressing non-critical tasks. Either situation needs resolution.
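The traceability check described above can be sketched as a simple cross-check between the usability file and the risk management file. This is a minimal illustration, not a validated tool: all identifiers (UE-01, HAZ-07, and so on) are hypothetical examples, and a real implementation would read from the actual ISO 14971 risk file and usability engineering file.

```python
# Hypothetical use errors observed in formative/summative testing,
# each linked (or not) to a hazard ID from the risk analysis.
use_errors = {
    "UE-01": {"description": "Wrong insertion depth selected", "hazard": "HAZ-07"},
    "UE-02": {"description": "Dose field misread",             "hazard": "HAZ-12"},
    "UE-03": {"description": "Luer cap left in place",         "hazard": None},
}

# Hypothetical hazards defined in the risk management file.
risk_file_hazards = {"HAZ-07", "HAZ-12", "HAZ-15"}

def unlinked_use_errors(errors, hazards):
    """Return use-error IDs that do not trace to a documented hazard."""
    return sorted(
        ue_id for ue_id, ue in errors.items()
        if ue["hazard"] not in hazards
    )

def untested_hazards(errors, hazards):
    """Return hazard IDs with no supporting usability evidence."""
    tested = {ue["hazard"] for ue in errors.values()}
    return sorted(hazards - tested)

print(unlinked_use_errors(use_errors, risk_file_hazards))  # ['UE-03']
print(untested_hazards(use_errors, risk_file_hazards))     # ['HAZ-15']
```

Both outputs demand action: an unlinked use error means the risk analysis is incomplete, and an untested hazard means the usability program has not validated the corresponding mitigation.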

Step Two: Present Usability Results as Clinical Performance

Reframe usability metrics in clinical terms. Instead of reporting that 95% of users completed the task successfully, report that 95% of users were able to deliver the intended therapy without error. Instead of stating that no critical use errors were observed, state that no use errors resulting in patient harm were observed during validation.

This is not marketing language. It is aligning the usability data with the clinical claims in the CER.
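The reframing above can even be made mechanical. The sketch below shows one way to generate a clinically worded CER statement from raw task results; the task name, numbers, and outcome phrasing are illustrative assumptions, not template language from any standard.

```python
def clinical_statement(task, successes, participants, clinical_outcome):
    """Express a task success rate in terms of the clinical outcome it supports."""
    rate = 100 * successes / participants
    return (f"{rate:.0f}% of users ({successes}/{participants}) "
            f"{clinical_outcome} during summative validation of '{task}'.")

# Hypothetical summative result: 19 of 20 participants succeeded.
print(clinical_statement(
    task="Prepare and deliver bolus dose",
    successes=19, participants=20,
    clinical_outcome="delivered the intended therapy without error",
))
# 95% of users (19/20) delivered the intended therapy without error
# during summative validation of 'Prepare and deliver bolus dose'.
```

The point is not the automation; it is that every usability metric should arrive in the CER already expressed as a clinical outcome.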

Step Three: Address Residual Use-Related Risks

No usability testing eliminates all use errors. The clinical evaluator must address residual risks that remain after design mitigations and user training. These residual risks inform the benefit-risk analysis and the post-market surveillance strategy.

If the CER does not acknowledge residual use-related risks, the Notified Body will ask why. The absence of risk discussion signals either incomplete analysis or incomplete integration.

Key Insight
The most effective CERs treat usability validation as a clinical study. The objectives, methods, results, and conclusions are presented with the same rigor as any other clinical investigation. The standard being followed is IEC 62366, but the output is clinical evidence.

What This Means for PMCF

Post-market clinical follow-up must address whether the device continues to perform safely and effectively in real-world use. That includes monitoring use-related adverse events, near misses, and user complaints that suggest usability issues.

If usability was not integrated into the pre-market CER, the PMCF plan will lack the baseline to detect use-related performance drift. You cannot monitor what you have not measured.

PMCF is often where use-related issues surface. Users deviate from instructions. Workloads increase. Lighting conditions vary. Fatigue and distraction introduce errors that did not appear in controlled testing. The PMCF plan must be designed to capture these patterns, and the periodic safety update reports must analyze them as clinical data.

This is why usability integration is not just a pre-market requirement. It is the foundation for post-market surveillance of device performance.

The Notified Body Perspective

When I review CERs during audits, I look for consistency between the clinical claims and the evidence base. If the CER claims that the device is safe and effective for the intended users, I expect to see usability data that supports that claim.

If the usability data is absent or superficially referenced, the clinical evaluation is incomplete. The manufacturer has validated compliance with IEC 62366, but has not demonstrated clinical safety and performance in real-world use.

That distinction matters. Compliance is necessary but not sufficient. Clinical evaluation requires evidence that the device works as intended in the hands of the people who will use it.

Most deficiencies in this area are not about the quality of usability testing. They are about the failure to present that testing as clinical evidence.

How to Implement This Practically

Manufacturers do not need to redesign their usability programs. They need to change how usability data flows into clinical evaluation.

Start by including the clinical evaluator in usability planning. The clinical evaluator should review the test protocols, understand which tasks are being validated, and confirm that critical use scenarios are covered.

During testing, document observations in clinical terms. When a user hesitates or makes an error, describe the potential clinical consequence, not just the task failure.

In the CER, dedicate a section to use-related clinical evidence. Present the usability validation results as performance data. Discuss how observed use patterns support or challenge the clinical claims.

In the PMCF plan, define specific indicators for use-related performance. Monitor adverse events, complaints, and training requests that suggest usability issues in the field.
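One way to make such an indicator concrete is to compare the field rate of use-related events against the pre-market usability baseline. The sketch below assumes a simple rate-per-1,000-uses metric and an arbitrary two-times-baseline alert threshold; both are illustrative choices, not regulatory limits, and any real PMCF plan would justify its own metric and threshold.

```python
def use_error_rate(events, device_uses):
    """Use-related events per 1,000 device uses."""
    return 1000 * events / device_uses

# Hypothetical baseline from summative validation:
# 1 critical use error observed across 500 simulated uses.
baseline = use_error_rate(1, 500)  # 2.0 per 1,000 uses

def drift_detected(field_events, field_uses, baseline, factor=2.0):
    """Flag when the field rate exceeds the baseline by a chosen factor."""
    return use_error_rate(field_events, field_uses) > factor * baseline

print(drift_detected(9, 2000, baseline))  # True:  4.5 > 4.0 per 1,000 uses
print(drift_detected(6, 2000, baseline))  # False: 3.0 <= 4.0 per 1,000 uses
```

Without a pre-market baseline like this, the first `True` has nothing to be compared against, which is exactly the "you cannot monitor what you have not measured" problem.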

This is not additional work. It is repurposing work that is already being done.

Common Deficiency
Manufacturers often update the CER to reference usability testing after a Notified Body finding. That reactive approach delays certification and signals weak process integration. Build usability into the CER structure from the start.

What Comes Next

Usability is one dimension of real-world device performance. The next part of this series addresses how training requirements, user competence, and real-world deviations from intended use affect clinical evidence.

Understanding where your device will actually be used and by whom is not a human factors question alone. It is a clinical question that shapes the entire evidence base.

If your CER assumes ideal use conditions that do not match clinical reality, you are not managing risk. You are deferring it.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– MDR 2017/745 Article 61 and Annex XIV
– IEC 62366-1:2015 Medical devices – Part 1: Application of usability engineering to medical devices
– MDCG 2020-6 Regulation (EU) 2017/745: Sufficient clinical evidence for legacy devices

Deepen Your Knowledge

Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.