Why healthcare professional feedback isn’t clinical evidence yet

Written by Hatem Rabeh, MD, MSc Ing
Your Clinical Evaluation Expert and Partner

I see manufacturers collecting feedback from hundreds of surgeons, nurses, and specialists. Structured forms. High satisfaction scores. Detailed comments about usability and performance. Then they reference this data in the clinical evaluation report as supporting evidence for safety and performance. The Notified Body flags it immediately. Not because the feedback lacks value, but because opinions and clinical evidence are not the same thing.

The confusion is understandable. Healthcare professionals use the device. They observe outcomes. They have clinical judgment. Surely their feedback contributes to demonstrating conformity with general safety and performance requirements.

It can contribute. But not automatically. And not without a structured framework that transforms subjective feedback into data that meets the standards of clinical evidence under MDR 2017/745.

This distinction matters because manufacturers often invest significant resources collecting professional feedback, then discover during review that it cannot be used as intended. The data exists. The insights are valuable. But the evidentiary weight is insufficient.

What MDR Actually Requires

Article 61 of MDR 2017/745 establishes that clinical evaluation must be based on clinical data. Article 2(48) defines clinical data as information concerning safety or performance that is generated from the use of a device.

The regulation distinguishes between different types of clinical data:

Clinical investigation data from studies conducted under Annex XV. Published literature that meets appraisal standards. And other relevant clinical data, which includes experience from post-market surveillance and user feedback when properly collected and analyzed.

Healthcare professional feedback can qualify as clinical data. But only when it meets specific conditions regarding collection methodology, documentation, and analysis depth.

MDCG 2020-6 provides guidance on what constitutes sufficient clinical evidence, and MDCG 2020-13 shows how Notified Body reviewers assess it. The threshold is not about volume of feedback. It is about demonstrating that the data collection process produces reliable, verifiable information that can support conformity assessment.

Key Insight
Healthcare professional feedback becomes clinical evidence only when the collection process is designed to generate objective, reproducible data that addresses specific safety and performance claims. Satisfaction surveys and general comments do not meet this threshold.

Where the Gap Appears

Most healthcare professional feedback programs are designed as quality management activities or customer satisfaction initiatives. They collect valuable information for product improvement. They identify usability concerns. They reveal how the device performs in diverse clinical settings.

But they are not designed as clinical data collection systems.

I review clinical evaluation reports where manufacturers reference feedback from 200 healthcare professionals. The submission includes summary statistics showing high satisfaction rates and positive comments about ease of use.

Then I look at the underlying methodology. The feedback forms asked open-ended questions about user experience. The responses were categorized by the manufacturer. No predefined clinical endpoints. No systematic assessment of adverse events. No comparison against specified performance criteria.

The data tells us that professionals generally like the device. It does not tell us whether the device meets specific safety and performance requirements with adequate certainty.

That distinction determines whether the data can support claims in the clinical evaluation.

Common Deficiency
Manufacturers collect feedback using general questionnaires about user satisfaction and device performance. When asked to demonstrate how this data addresses specific GSPRs in Annex I, they cannot establish the connection. The feedback exists. The evidentiary link does not.

What Makes Feedback Evidentiary

The transformation from opinion to evidence requires intentional design of the feedback collection system.

First, the questions must be structured to address specific clinical endpoints or safety parameters. Not “How would you rate device performance?” but “During the procedure, did the device maintain specified parameter X within the claimed range?”

The difference is between subjective assessment and objective observation of defined characteristics.

Second, the feedback must include systematic collection of adverse events and device deficiencies. Not just asking “Did you experience problems?” but implementing a structured reporting framework that captures incidents according to predefined severity categories and classification systems.

Third, the analysis must go beyond descriptive statistics. It must evaluate the feedback against specified acceptance criteria. What percentage of observations must confirm performance claims? How are conflicting observations investigated? When does feedback trigger additional clinical investigation?

Fourth, the documentation must demonstrate traceability. Who provided the feedback? When? In what clinical context? How were responses verified? What quality controls ensure data integrity?

These elements distinguish a feedback program that generates clinical evidence from one that collects opinions.
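These four elements can be made concrete as a record structure. The Python sketch below shows one hypothetical shape for a single feedback observation that carries endpoint linkage, adverse event classification, and the traceability fields a reviewer would need; the field names and severity scale are illustrative assumptions, not prescribed by MDR or any MDCG guidance.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class Severity(Enum):
    """Illustrative severity scale for captured incidents."""
    NONE = 0
    MINOR = 1
    SERIOUS = 2

@dataclass
class FeedbackObservation:
    """One structured observation tied to a predefined clinical endpoint."""
    respondent_id: str            # verified healthcare professional identifier
    qualification: str            # e.g. specialty and experience with the device type
    use_date: date                # when the device was actually used
    report_date: date             # when the feedback was submitted
    endpoint_id: str              # predefined endpoint the observation addresses
    observed_value: float         # the objective measurement, not a rating
    within_claimed_range: bool    # evaluated against a predefined criterion
    adverse_event: Severity = Severity.NONE
    comment: Optional[str] = None

    def is_contemporaneous(self, max_days: int = 7) -> bool:
        """Flag retrospective recollections for reduced evidentiary weight."""
        return (self.report_date - self.use_date).days <= max_days
```

A record like this makes each observation verifiable on its own: which endpoint it addresses, what was objectively observed, and whether the report was contemporaneous or a recollection.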

The PMCF Connection

Healthcare professional feedback programs often form part of post-market clinical follow-up under Article 61(11) and Annex XIV, Part B. The PMCF plan describes how clinical data will be collected after the device is on the market.

This creates an opportunity. When the PMCF plan specifies that professional feedback will address defined clinical questions, and the feedback system is designed accordingly, the resulting data has evidentiary weight.

But this requires that the PMCF plan actually defines those clinical questions. Many PMCF plans state that the manufacturer will collect user feedback. They describe survey distribution. They mention feedback forms.

What they do not do is specify which safety and performance requirements will be evaluated through this feedback. They do not define the clinical endpoints the feedback must address. They do not establish analysis methods or acceptance criteria.

When the PMCF plan lacks this specificity, the resulting feedback cannot fulfill its intended evidentiary role.

I see this pattern regularly. The manufacturer implements the PMCF plan. Feedback is collected. The annual PMCF evaluation report summarizes positive responses. Then during clinical evaluation update, the question arises: How does this feedback demonstrate continued conformity with GSPRs?

The connection cannot be made retroactively. It must be designed into the system from the start.

Key Insight
Healthcare professional feedback programs integrated into PMCF can generate clinical evidence, but only when the PMCF plan explicitly links feedback collection to specific clinical questions that address GSPRs. The evidentiary framework must exist before data collection begins.

Reviewers Look for Structure

When I assess whether healthcare professional feedback constitutes adequate clinical evidence, I look at the collection protocol first. Not the results. The protocol.

Does it define what will be measured? Does it specify how observations will be documented? Does it establish quality controls for data collection? Does it describe analysis methods?

If the protocol exists and demonstrates these elements, the resulting data can support clinical evaluation. If the protocol is absent or describes only general feedback collection, the data has limited evidentiary value regardless of how positive the responses are.

Notified Body reviewers apply similar logic. They assess whether the data collection system is fit for purpose, that purpose being the generation of clinical evidence that supports conformity assessment.

This means the feedback program must be designed with regulatory requirements in mind. Not designed as a quality initiative that happens to produce information referenced in the clinical evaluation.

The distinction appears subtle but produces different outcomes. A quality-focused feedback program optimizes for actionable insights about product improvement. An evidence-generating feedback program optimizes for data that can be evaluated against regulatory standards.

Both are valuable. But they serve different functions and require different designs.

The Documentation Trail

Even when feedback is collected through a structured protocol, documentation determines whether it can be used as clinical evidence.

I see manufacturers with well-designed feedback systems that cannot demonstrate data integrity because the documentation trail is incomplete. Who verified that the healthcare professional actually used the device? How was the feedback form completed? Was it a contemporaneous observation or a retrospective recollection?

These questions matter because clinical evidence must be verifiable. The reviewer must be able to trace the data back to its source and assess its reliability.

This requires maintaining records that demonstrate:

The healthcare professional’s qualifications and experience with the device type. The clinical context in which the device was used. The timeframe between device use and feedback submission. Any relationship between the healthcare professional and the manufacturer that might affect objectivity.

Without these elements, the feedback is unverifiable. It may be accurate. It may reflect genuine clinical experience. But it cannot be confirmed, which limits its evidentiary weight.

The documentation also must show how the manufacturer handled inconsistent or negative feedback. When 95% of responses are positive but 5% report problems, how were those problems investigated? Were they device deficiencies? User errors? Misapplication?

The investigation and resolution of negative feedback often provides stronger evidence than the positive responses. It demonstrates that the system captures problems and that the manufacturer responds systematically.
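The acceptance-criteria logic described above can be sketched in a few lines. In this hypothetical Python example, each observation either confirms or deviates from a performance claim, the confirmation rate is tested against a predefined threshold, and every deviation is flagged for root-cause investigation; the 95% threshold and the function name are assumptions for illustration, not values from any guidance document.

```python
def evaluate_endpoint(observations: list[bool], acceptance_rate: float = 0.95) -> dict:
    """Evaluate structured feedback against a predefined acceptance criterion.

    observations: True = claim confirmed, False = deviation reported.
    """
    n = len(observations)
    confirmed = sum(observations)
    rate = confirmed / n
    # Every deviation must be investigated and classified
    # (device deficiency, user error, or misapplication) before sign-off.
    deviations = [i for i, ok in enumerate(observations) if not ok]
    return {
        "n": n,
        "confirmation_rate": rate,
        "criterion_met": rate >= acceptance_rate,
        "to_investigate": deviations,
    }

# 190 confirmations and 10 deviations out of 200 responses:
result = evaluate_endpoint([True] * 190 + [False] * 10)
```

The point of the structure is the last field: positive statistics alone prove little, while a documented investigation path for each flagged case is what reviewers can actually verify.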

Common Deficiency
Manufacturers present summary statistics from healthcare professional feedback without maintaining detailed records of individual responses, verification of respondent qualifications, or investigation of negative findings. Reviewers cannot verify the data, rendering it unusable as clinical evidence.

Integration into Clinical Evaluation

When healthcare professional feedback is properly collected and documented, integration into the clinical evaluation requires explicit connection to specific safety and performance requirements.

This is not done by including a section that summarizes positive feedback. It is done by referencing specific feedback data points as evidence supporting particular claims.

For example, if the clinical evaluation asserts that the device enables procedures to be completed within a specified time range, and healthcare professional feedback includes structured observations of procedure duration across multiple users and cases, that feedback supports the claim.

But only if the feedback was collected using a standardized method. Only if the procedure duration was defined consistently. Only if the data can be verified.

The integration requires showing:

Which GSPR or clinical claim the feedback addresses. How the feedback collection method was designed to evaluate that specific requirement. What the data shows. What the limitations and uncertainties are. How the feedback relates to other clinical data sources.

This level of integration is possible only when the feedback program was designed with these connections in mind.
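The traceability described above, from a feedback endpoint to the GSPR and claim it supports, can be sketched as an explicit mapping. In this hypothetical Python fragment, a claim about procedure duration is linked to its GSPR, collection method, and acceptance criterion, and recorded durations are evaluated against the claimed range; every identifier, range, and threshold is illustrative.

```python
# Hypothetical traceability map: each feedback endpoint states which claim
# and GSPR it addresses, how the data was collected, and the acceptance criterion.
CLAIM_MAP = {
    "EP-02": {
        "claim": "Procedure completed within the claimed 20-40 minute range",
        "gspr": "Annex I, GSPR 1 (performance as intended)",
        "method": "Timed observation recorded on a standardized form at point of use",
        "acceptance": ">= 90% of recorded procedures within the claimed range",
    },
}

def supports_claim(durations_min: list[float],
                   low: float = 20.0, high: float = 40.0,
                   threshold: float = 0.90) -> bool:
    """Check whether recorded procedure durations meet the acceptance criterion."""
    in_range = sum(low <= d <= high for d in durations_min)
    return in_range / len(durations_min) >= threshold
```

The map itself is the evidentiary link the text describes: without it, a pile of duration measurements is just data; with it, each measurement is traceable to the requirement it was collected to evaluate.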

When Opinions Remain Opinions

Not all healthcare professional feedback needs to become clinical evidence. Some feedback serves other legitimate purposes.

User satisfaction information supports quality management. General comments about device handling inform design improvements. Observations about clinical workflow integration guide training programs.

This information is valuable. It should be collected. But it does not need to meet the standards of clinical evidence because it is not being used to demonstrate conformity with regulatory requirements.

The problem arises when manufacturers collect feedback for these purposes, then attempt to reference it in the clinical evaluation as supporting evidence without having designed the collection system to generate evidentiary-grade data.

The solution is not to eliminate quality-focused feedback. It is to distinguish between feedback programs designed for different purposes and to implement appropriate methodology for each.

If the feedback will support clinical evaluation, design the program as a clinical data collection system. If it will support quality management, design it as a quality initiative.

Trying to serve both purposes with a single general feedback form usually serves neither purpose well.

Building the Right System

Designing a healthcare professional feedback program that generates clinical evidence requires starting with the clinical evaluation needs.

What are the key safety and performance claims? What data is needed to support those claims? What can healthcare professionals observe and report reliably? What cannot be assessed through professional feedback and requires other data sources?

The answers to these questions define the feedback program design.

Then the protocol is developed. Questions are formulated to elicit specific observations rather than general opinions. Data collection forms are created with clear instructions. Analysis methods are defined with acceptance criteria.

Training is provided to healthcare professionals who will provide feedback, explaining what observations are needed and how to document them accurately.

Quality controls are implemented to ensure data integrity. Verification steps confirm that feedback comes from qualified professionals with actual device experience.

The result is a system that produces data meeting the definition of clinical evidence under MDR.

This requires more effort than distributing satisfaction surveys. But it produces data that actually serves the intended purpose.

Key Insight
Effective healthcare professional feedback programs start with defining what clinical questions must be answered, then designing the data collection system to answer those specific questions with verifiable data. The program design follows from the evidentiary needs, not the other way around.

The Review Perspective

When I review a clinical evaluation that references healthcare professional feedback, I am looking for evidence that the manufacturer understands the difference between collecting opinions and generating clinical evidence.

The presence of a structured protocol signals this understanding. The connection between feedback data and specific clinical claims confirms it. The documentation trail and quality controls demonstrate it.

What raises concerns is when feedback is presented as evidence without this supporting structure. When summary statistics substitute for detailed analysis. When positive responses are highlighted while negative findings are dismissed without investigation.

These patterns suggest that the feedback program was designed primarily for marketing or quality purposes, then repurposed for regulatory use without the necessary methodological rigor.

The feedback may still be valuable for its original purpose. But it cannot carry the evidentiary weight the clinical evaluation requires.

Notified Bodies reach similar conclusions. The question they ask is whether the data collection system was fit for generating clinical evidence. If not, the data cannot support the clinical evaluation regardless of volume or positivity of responses.

This is not arbitrary strictness. It reflects the regulatory standard that clinical evaluation must be based on data that reliably demonstrates safety and performance.

Moving Forward

Healthcare professional feedback will continue to be a valuable data source for post-market clinical follow-up and clinical evaluation. The challenge is implementing collection systems that transform subjective observations into objective evidence.

This requires intentional program design. It requires documentation rigor. It requires integration of the feedback system into the overall clinical evaluation strategy.

Manufacturers who invest in building properly structured feedback programs create a sustainable source of clinical evidence that supports ongoing conformity assessment.

Those who continue collecting general feedback and hoping it can be used as clinical evidence will continue encountering deficiencies during review.

The difference is not in what healthcare professionals observe. It is in how that observation is structured, documented, and analyzed.

The opinions have value. The evidence requires work.

But that work produces data that actually supports what the clinical evaluation needs to demonstrate. And that makes the investment worthwhile.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when there are changes to the device or its intended purpose. For class III and implantable devices, updates are required at least annually as part of post-market surveillance; for lower-risk classes, the interval must be defined and justified.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for the claims made, a poorly structured state-of-the-art (SOTA) analysis, a missing gap analysis, and lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– MDR 2017/745 Article 61 (Clinical evaluation)
– MDR 2017/745 Annex XIV (Clinical evaluation and post-market clinical follow-up)
– MDCG 2020-13 (Clinical evaluation assessment report template)