Most PMCF surveys collect data that cannot be analyzed

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert and Partner


I reviewed a PMCF survey last month that collected responses from 87 users across three countries. The data looked impressive in volume. But when the manufacturer tried to extract clinical conclusions, they discovered that almost none of it could be used for the clinical evaluation report. The questions were too vague. The scales were inconsistent. The clinical endpoints were buried under usability opinions.

This is not an isolated case. Most PMCF surveys I review suffer from the same structural problems. They collect information, but not data. They generate responses, but not evidence that satisfies MDR requirements.

The problem is not effort. Manufacturers invest time in designing questionnaires, recruiting participants, and processing responses. The problem is that the survey was not built to generate the specific evidence that Article 61 and Annex XIV Part B demand.

Why surveys fail at the analysis stage

A survey becomes usable when you can extract conclusions that feed directly into your clinical evaluation. That means the data must be structured, quantifiable, and linked to defined clinical endpoints.

Most surveys fail because they were designed backward. The manufacturer starts with general questions about satisfaction or experience, then tries to interpret the responses as clinical evidence. By that point, it is too late.

The responses are subjective. The scales are not validated. The clinical relevance is unclear. When the clinical evaluator sits down to write the CER update, they find themselves with pages of opinions but no objective data they can analyze against safety and performance criteria.

Common Deficiency
Surveys that ask “How satisfied are you with the device?” or “Would you recommend this device?” without defining what aspect of clinical performance is being measured. These questions generate sentiment, not clinical data.

Start with the clinical endpoints

A usable PMCF survey begins with a clear definition of what you need to confirm. That definition comes from your clinical evaluation report and your PMCF plan.

What residual risks are you monitoring? What performance claims need post-market confirmation? What safety concerns require ongoing surveillance? These are your endpoints.

Each survey question should trace back to one of these endpoints. If a question does not contribute to a defined clinical conclusion, it does not belong in the survey.

This is not about restricting the survey. It is about discipline. When you know exactly what evidence you need, the survey design becomes straightforward.

Key Insight
Your PMCF survey should not explore what users think generally. It should measure specific parameters that your clinical evaluation identified as requiring post-market data. Every question is a measurement instrument.

Design questions that generate quantifiable data

The difference between a useful question and a useless one often comes down to how the response can be processed.

Compare these two approaches:

Weak question: “How do you feel about the ease of use of the device?”
Strong question: “How many times in the last 30 days did you require assistance from another person to operate the device?”

The first generates an opinion. The second generates a frequency that you can aggregate, compare, and analyze.

When you design questions, ask yourself: Can I calculate a percentage from this? Can I compare it to a baseline? Can I detect a trend over time?

If the answer is no, the question needs to be restructured.
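To make the contrast concrete, here is a minimal sketch of what a structured, frequency-based response enables. The response values and the baseline rate are hypothetical, invented for illustration:

```python
# Hypothetical answers to the structured question:
# "How many times in the last 30 days did you require assistance
#  from another person to operate the device?"
responses = [0, 0, 2, 1, 0, 5, 0, 3, 0, 1]

# Structured numeric answers aggregate directly into a percentage.
n = len(responses)
needed_assistance = sum(1 for r in responses if r > 0)
rate = needed_assistance / n
print(f"{needed_assistance}/{n} users ({rate:.0%}) needed assistance at least once")

# The same data supports trend detection against an earlier survey wave.
previous_rate = 0.30  # assumed baseline from a prior PMCF cycle
print("Trend vs. baseline:", "improving" if rate < previous_rate else "not improving")
```

An opinion question ("How do you feel about ease of use?") supports none of these operations: there is no percentage, no baseline comparison, and no trend.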

Use validated scales where they exist

For some clinical domains, validated scales already exist: pain scales, functional outcome scores, quality-of-life instruments. If your device impacts a domain where a validated tool is available, use it.

This does two things. It makes your data comparable to published literature. And it removes subjectivity from the measurement.

I have seen manufacturers reinvent assessment methods when established tools were already available. The result is data that cannot be benchmarked and credibility that suffers in front of reviewers.

Structure the response options carefully

How you ask the question is only half the issue. How you allow the respondent to answer determines whether the data is analyzable.

Avoid open text fields unless you have a plan to code and categorize the responses systematically. Open text is valuable for capturing unexpected findings, but it cannot be your primary data source.

Use structured response formats. Binary yes/no for presence or absence. Likert scales for degree or frequency, but define what each level means clinically. Numerical ranges for quantifiable parameters.

The structure must be consistent across all respondents. If one clinician interprets “moderate pain” differently from another, your data loses meaning.

Common Deficiency
Mixing response formats within the same survey, such as combining binary responses, five-point scales, and ten-point scales without clinical justification. This creates data sets that cannot be aggregated or compared meaningfully.

Define your denominator before you launch

One of the most overlooked aspects of survey design is the denominator problem. You will collect responses from a subset of users. But to calculate rates, frequencies, or percentages, you need to know what population those responses represent.

If 10 users report a complication, is that 10 out of 50 surveyed users? Or 10 out of 500 devices in the field? The clinical significance is completely different.

Your survey design must account for this from the beginning. You need a clear definition of the target population, the sampling method, and the expected response rate.

Without this, you end up with numerators but no denominators. You cannot calculate incidence. You cannot assess statistical significance. The data remains descriptive at best.
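The arithmetic behind the denominator problem is simple but worth making explicit. This sketch uses the hypothetical counts from the example above and a normal-approximation confidence interval; the numbers are illustrative, not from any real file:

```python
import math

# Hypothetical counts illustrating the denominator problem.
complications_reported = 10
surveyed_users = 50     # the population the survey actually sampled
devices_in_field = 500  # NOT a valid denominator for survey responses

survey_rate = complications_reported / surveyed_users
naive_field_rate = complications_reported / devices_in_field
print(f"Rate among respondents: {survey_rate:.1%}")        # 20.0%
print(f"Misleading field-wide rate: {naive_field_rate:.1%}")  # 2.0%

# With a defined denominator you can also quantify uncertainty
# (95% CI, normal approximation; small samples may need exact methods).
se = math.sqrt(survey_rate * (1 - survey_rate) / surveyed_users)
lo, hi = survey_rate - 1.96 * se, survey_rate + 1.96 * se
print(f"95% CI: {lo:.1%} to {hi:.1%}")
```

The same numerator yields a tenfold difference in apparent incidence depending on the denominator, which is exactly why the target population and sampling method must be fixed before launch.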

Plan for non-response bias

Not everyone will respond. The ones who do respond may not represent the full user population. Users who experienced problems are more likely to respond than those with uneventful use.

This is non-response bias, and it can distort your conclusions.

Address this by tracking who responds and who does not. If possible, collect basic demographic or usage data from non-respondents so you can assess whether your respondent group is representative.

When you report the survey results in your CER, acknowledge the response rate and discuss any potential bias. Reviewers will look for this. If it is missing, they will question the validity of your conclusions.
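One simple way to operationalize this check is to compare the composition of respondents against the full target population, stratum by stratum. The usage categories and counts below are hypothetical:

```python
# Hypothetical usage profiles for the full target population and the respondents.
population_usage = {"daily": 300, "weekly": 150, "occasional": 50}  # 500 users
respondent_usage = {"daily": 20, "weekly": 15, "occasional": 15}    # 50 responses

pop_total = sum(population_usage.values())
resp_total = sum(respondent_usage.values())
print(f"Response rate: {resp_total / pop_total:.0%}")

# A stratum heavily over-represented among respondents flags potential
# non-response bias (the 1.5x factor is an illustrative threshold).
for group in population_usage:
    pop_share = population_usage[group] / pop_total
    resp_share = respondent_usage[group] / resp_total
    flag = "  <-- over-represented" if resp_share > pop_share * 1.5 else ""
    print(f"{group}: population {pop_share:.0%}, respondents {resp_share:.0%}{flag}")
```

In this invented example, occasional users make up 10% of the population but 30% of respondents, which is exactly the kind of imbalance the CER discussion of limitations should address.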

Pilot the survey before full deployment

I have seen surveys launched to hundreds of users, only to discover after data collection that a key question was ambiguous or that the response format did not capture what was intended.

By that point, the data is compromised. You cannot go back and re-survey the same population.

Pilot testing prevents this. Run the survey with a small group first. Five to ten respondents are often enough to identify problems with question clarity, response logic, or data export formats.

Ask the pilot participants to think aloud as they complete the survey. Where do they hesitate? What do they find confusing? What assumptions are they making?

Adjust the survey based on this feedback before full deployment.

Key Insight
Pilot testing is not about perfecting the survey aesthetically. It is about confirming that each question generates the type of data you planned to analyze. Test the data export and analysis process, not just the survey interface.

Link survey data to other PMCF sources

A survey should not stand alone. It is one data source within your broader PMCF system.

The most credible PMCF reports combine survey data with registry data, complaint data, and published literature. When these sources converge on the same conclusion, the evidence becomes stronger.

Design your survey so that the parameters you measure can be compared to what you collect elsewhere. If your complaint system tracks device malfunctions by type, your survey should use the same categorization. If your registry captures usage frequency, your survey questions should align with those metrics.

This alignment allows you to cross-validate findings. If survey respondents report low complication rates but your complaints show frequent issues, you have a discrepancy that needs investigation.

Notified Bodies look for this kind of triangulation. A survey that confirms what other data sources show is far more credible than one that exists in isolation.
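When the survey reuses the complaint system's categorization, the cross-check described above can be done mechanically. A minimal sketch with invented category names and counts:

```python
# Hypothetical aligned taxonomies: the survey uses the same malfunction
# categories as the complaint system, so the two sources are comparable.
complaint_counts = {"connector_failure": 12, "display_error": 3, "battery": 1}
survey_counts    = {"connector_failure": 2,  "display_error": 4, "battery": 0}

# Flag categories where the two sources diverge (threshold is illustrative;
# a real plan would pre-specify it and normalize for exposure).
for category in complaint_counts:
    c = complaint_counts[category]
    s = survey_counts.get(category, 0)
    if abs(c - s) >= 5:
        print(f"Discrepancy in '{category}': complaints={c}, survey={s}")
```

Without the shared categorization, this comparison is impossible and each source can only be reported in isolation.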

Prepare the analysis plan before launching

Many manufacturers treat survey analysis as something to figure out after data collection. This is a mistake.

Your analysis plan should be written before the first survey is sent. It should specify which statistical methods you will use, how you will handle missing data, what comparisons you will make, and what thresholds will trigger further investigation.

This is not bureaucracy. It is scientific discipline.

If you define your analysis approach in advance, you avoid the risk of data dredging. You avoid the temptation to interpret results in whatever way makes the device look favorable.

Reviewers understand this risk. If they suspect that your analysis was shaped by the results rather than planned in advance, your entire PMCF report loses credibility.

Common Deficiency
Presenting survey results without a pre-specified analysis plan. This makes it impossible for a reviewer to assess whether the conclusions are objective or whether the analysis was adjusted to fit a preferred narrative.
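A pre-specified plan can be concrete enough to encode. This sketch shows the idea, with hypothetical thresholds that, in practice, would come from the risk analysis and PMCF plan, fixed before any data are collected:

```python
# Hypothetical pre-specified analysis plan, written before launch:
# thresholds come from the PMCF plan, not from the collected data.
ANALYSIS_PLAN = {
    "complication_rate_max": 0.05,  # assumed acceptance threshold from risk analysis
    "min_response_rate": 0.20,      # below this, flag results as non-representative
}

def evaluate(complications: int, respondents: int, invited: int) -> list:
    """Apply the pre-specified thresholds; return triggered investigations."""
    findings = []
    if respondents / invited < ANALYSIS_PLAN["min_response_rate"]:
        findings.append("response rate below plan: assess non-response bias")
    if complications / respondents > ANALYSIS_PLAN["complication_rate_max"]:
        findings.append("complication rate exceeds threshold: escalate to risk management")
    return findings

print(evaluate(complications=6, respondents=80, invited=500))
```

Because the thresholds exist before the data do, a reviewer can verify that the conclusions follow from the plan rather than the other way around.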

Document everything in the PMCF report

When you write up the survey results in your PMCF report or CER update, transparency is critical.

Describe the survey design. Explain the rationale for each question. Report the response rate and discuss any limitations. Present the raw data, not just the interpreted conclusions.

If you made any post-hoc adjustments to the analysis, explain why and justify the approach.

Reviewers will scrutinize this section. They are trained to spot gaps, inconsistencies, and signs of selective reporting. The more transparent you are, the more credible your conclusions.

A well-documented survey can strengthen your clinical evaluation significantly. A poorly documented one raises questions about everything else in your file.

What usable data actually looks like

When a PMCF survey is designed correctly, the data integrates smoothly into the clinical evaluation. You can calculate rates. You can compare to baselines. You can detect trends. You can update your benefit-risk assessment with objective evidence.

The survey becomes a measurement system, not a feedback collection exercise.

This requires upfront planning. It requires clarity about what you need to prove. It requires discipline in question design and response structure.

But when you do this work properly, the survey data is not just usable. It is defensible. And in a regulatory environment where every claim must be justified, that distinction matters.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured state-of-the-art (SOTA) analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 Article 61 (Clinical Evaluation and Post-Market Clinical Follow-up)
– Regulation (EU) 2017/745 Annex XIV Part B (Post-Market Clinical Follow-up)
– MDCG 2020-7 Post-Market Clinical Follow-up (PMCF) Evaluation Report Template

PMCF surveys are one of several data collection methods. Learn all the approaches in our guide on PMCF plans and reports under MDR.

Related Resources

Read our complete guide to PMCF under EU MDR: PMCF Plan & Report under EU MDR

Or explore Complete Guide to Clinical Evaluation under EU MDR