Why your multi-site PMCF collapses under data inconsistency

Written by Hatem Rabeh, MD, MSc Ing
Your Clinical Evaluation Expert and Partner

You designed a multi-site PMCF study across four hospitals. Six months in, you realize the data is unusable. One site recorded complications differently. Another defined follow-up intervals its own way. A third never documented half of its adverse events. Now you are sitting with incomplete datasets, no statistical power, and a Notified Body waiting for your annual update.

This is not a rare scenario. It happens in most multi-site PMCF studies that rely on routine clinical practice without strict harmonization protocols.

The problem is not the hospitals. The problem is the assumption that clinical teams will naturally align on data collection when no one has enforced a standardized approach from day one.

Under MDR Article 61 and Annex XIV Part B, PMCF must generate data that can be analyzed and interpreted. If your data is inconsistent across sites, it cannot serve its regulatory purpose. It becomes noise instead of evidence.

MDCG 2020-7 emphasizes that PMCF studies must be methodologically sound. That means harmonized definitions, unified data capture tools, and continuous monitoring of compliance across all sites. Without harmonization, you are running separate uncoordinated activities under the label of a single study.

Where harmonization breaks down

Most multi-site PMCF studies start with good intentions. You define endpoints. You write a protocol. You submit it to ethics committees and get approval.

But the protocol often stays abstract. It describes what should be collected, but it does not enforce how it should be collected in practice.

Then reality sets in.

One hospital uses paper case report forms. Another uses its electronic health record system with custom fields. A third relies on a research nurse who interprets the protocol in her own way.

The result is fragmentation. Every site collects data in its own format, at its own rhythm, and with its own interpretation of what qualifies as a reportable event.

Common Deficiency
PMCF protocols that define endpoints but fail to standardize data capture tools, adverse event classification, and follow-up schedules across all participating sites.

By the time you attempt to pool the data, you realize it cannot be merged. Variables are named differently. Timepoints do not align. Adverse events are coded using different terminologies. Some sites report device deficiencies. Others do not even track them.

Now you face a choice: either invest in additional reconciliation work to merge the datasets retrospectively, or discard parts of the data and lose statistical power.

Neither option satisfies the Notified Body.
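
To make the first option concrete: reconciliation means writing a mapping layer, site by site, before pooling is even possible. The sketch below assumes three sites that exported the same adverse event concept in three different shapes; every variable name and value is hypothetical, and a real study involves dozens of variables per site.

```python
import pandas as pd

# Three hypothetical site exports: the same concept, three encodings.
site_a = pd.DataFrame({"pat_id": [101], "complication": ["infection"]})
site_b = pd.DataFrame({"PatientID": [201], "AE_TERM": ["INF"]})
site_c = pd.DataFrame({"id": [301], "event": ["wound infection (minor)"]})

# Each site needs its own column map and term map before pooling is possible.
COLUMN_MAPS = {
    "site_a": {"pat_id": "subject_id", "complication": "ae_term"},
    "site_b": {"PatientID": "subject_id", "AE_TERM": "ae_term"},
    "site_c": {"id": "subject_id", "event": "ae_term"},
}
TERM_MAPS = {
    "site_b": {"INF": "infection"},
    "site_c": {"wound infection (minor)": "infection"},
}

frames = []
for name, df in [("site_a", site_a), ("site_b", site_b), ("site_c", site_c)]:
    df = df.rename(columns=COLUMN_MAPS[name])
    df["ae_term"] = df["ae_term"].replace(TERM_MAPS.get(name, {}))
    df["site"] = name
    frames.append(df)

pooled = pd.concat(frames, ignore_index=True)
print(pooled)  # three rows, one schema -- only after per-site mapping work
```

Multiply this by every variable and every site, and each mapping decision becomes a clinical judgment you must be able to defend in the report.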

Why standardization must be enforced before first enrollment

Harmonization is not something you add later. It must be embedded in the design phase.

This means creating one unified case report form and deploying it, unchanged, at every site. Every investigator must classify adverse events using the same coding system. Every follow-up visit must occur at the same predefined intervals.

If you allow flexibility at the site level, you will get inconsistency in return.

I see this repeatedly in PMCF audits. Sponsors write detailed statistical analysis plans but never enforce the data capture structure needed to execute those plans. They assume that trained clinicians will naturally document events in a comparable way.

They do not.

Clinicians prioritize patient care over regulatory documentation. Unless you make data capture as simple and structured as possible, it will drift. Every site will develop its own habits. Every investigator will interpret the protocol slightly differently.

That drift kills your dataset.

Key Insight
Harmonization requires identical case report forms, unified adverse event coding, standardized follow-up intervals, and continuous data quality monitoring. Without these, you are running separate uncoordinated activities, not a single study.

The protocol should specify the exact tools used for data capture. It should define the exact classification system for adverse events. It should mandate the exact timing of follow-up visits. And it should require that every site uses the same terminology and format when documenting outcomes.
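
One practical way to enforce this is to maintain the minimal dataset as a machine-readable data dictionary from which every site's capture tool is built. The sketch below is illustrative only: the field names, value sets, the choice of MedDRA as the coding system, and the visit windows are assumptions, not prescriptions.

```python
# Illustrative data dictionary for a unified CRF. All field names,
# value sets, coding choices, and windows are hypothetical examples.
CRF_DICTIONARY = {
    "subject_id":        {"type": "str",  "required": True},
    "implant_date":      {"type": "date", "required": True},
    "ae_term":           {"type": "code", "required": False,
                          "coding_system": "MedDRA",   # one system, every site
                          "coding_version": "26.1"},
    "ae_seriousness":    {"type": "enum", "required": False,
                          "values": ["serious", "non-serious"]},
    "device_deficiency": {"type": "bool", "required": True},
}

# Follow-up visits: identical nominal timepoints and tolerance windows
# at every site, expressed in days post-implant.
FOLLOW_UP_SCHEDULE = {
    "30d":  {"nominal": 30,  "window": (23, 37)},
    "6mo":  {"nominal": 180, "window": (166, 194)},
    "12mo": {"nominal": 365, "window": (337, 393)},
}
```

Kept under version control, a dictionary like this turns "every site uses the same form" from a promise in the protocol into a property of the system.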

If the protocol is vague, execution will be chaotic.

Training investigators is not optional

Even with a standardized case report form, investigators will not automatically understand what you need.

You must train them.

This is where many sponsors underestimate the workload. They send the protocol by email and assume investigators will read it carefully. They hold one introductory meeting and consider the training complete.

In practice, investigators are busy. They skim the protocol. They focus on patient enrollment and clinical care. They do not internalize the data capture requirements unless you make those requirements explicit and repetitive.

Training should be live, interactive, and case-based. Walk through real scenarios. Show examples of how to classify adverse events. Demonstrate how to complete the case report form correctly. Explain what happens if data is missing or inconsistent.

And then follow up. Conduct site visits. Review the first few cases captured at each site. Provide feedback immediately when deviations occur.

If you do not monitor compliance early, deviations will compound. By the time you notice the problem, months of data may already be compromised.

Common Deficiency
Sponsors who conduct one training session and never follow up with site monitoring or data quality checks during the first months of enrollment.

MDCG 2020-7 does not explicitly mandate site training, but it does require methodological rigor. A study that produces inconsistent data lacks rigor, and that reflects directly on the quality of your clinical evaluation report.

Notified Bodies see this. When data inconsistency is evident in the PMCF report, they question whether the study was executed with proper oversight.

Building feedback loops into the study design

Harmonization is not a one-time setup. It requires continuous monitoring.

You need feedback loops that detect deviations early and correct them before they spread.

This means setting up data quality checks that run automatically. Every time a case report form is submitted, the system should flag missing fields, inconsistent entries, or outlier values.
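
Here is a minimal sketch of such a check in Python, with illustrative rules and field names; a real electronic data capture system expresses the same logic as configurable edit checks.

```python
REQUIRED_FIELDS = ("subject_id", "implant_date", "device_deficiency")
FOLLOW_UP_WINDOWS = {"30d": (23, 37), "6mo": (166, 194), "12mo": (337, 393)}

def check_crf_record(record: dict) -> list[str]:
    """Return quality flags for one submitted CRF record (illustrative rules)."""
    flags = []

    # 1. Missing mandatory fields.
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            flags.append(f"missing required field: {field}")

    # 2. Internal inconsistency: seriousness graded without an AE term.
    if record.get("ae_seriousness") and not record.get("ae_term"):
        flags.append("ae_seriousness recorded without an ae_term")

    # 3. Follow-up visit outside its protocol-defined window.
    visit, days = record.get("visit"), record.get("days_post_implant")
    if visit in FOLLOW_UP_WINDOWS and days is not None:
        lo, hi = FOLLOW_UP_WINDOWS[visit]
        if not lo <= days <= hi:
            flags.append(f"visit {visit} at day {days}, outside window {lo}-{hi}")

    return flags

# A record that should raise two flags: missing deficiency status,
# and a 30-day visit performed on day 52.
print(check_crf_record({
    "subject_id": "S-014",
    "implant_date": "2024-03-01",
    "visit": "30d",
    "days_post_implant": 52,
}))
```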

Someone on your team must review those flags immediately. If a site is repeatedly making the same mistake, you intervene with targeted retraining. If a data field is consistently misunderstood, you clarify the protocol or update the case report form instructions.

Without these feedback loops, errors accumulate silently. By the time you lock the database for analysis, the dataset is already damaged.

I have reviewed PMCF datasets where entire sites had to be excluded from analysis because their data was too inconsistent to merge. That is not just a statistical problem. It is a regulatory problem.

If your study was designed to achieve a certain sample size, and you lose 30% of your data due to quality issues, you no longer have the statistical power you claimed in the protocol. The conclusions you draw become questionable.
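
A back-of-the-envelope illustration with statsmodels, using assumed numbers (a two-arm comparison, standardized effect size 0.4, alpha 0.05, and 100 subjects per arm as planned):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# All parameters are illustrative assumptions, not values from any real study.
planned = analysis.power(effect_size=0.4, nobs1=100, alpha=0.05)
degraded = analysis.power(effect_size=0.4, nobs1=70, alpha=0.05)  # 30% lost

print(f"planned power:    {planned:.2f}")   # ~0.80
print(f"after 30% loss:   {degraded:.2f}")  # ~0.66
```

Under these assumptions, a study planned at roughly 80% power falls to roughly 66%. The power claim in the protocol no longer holds, and any reviewer can recompute these numbers.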

And when the Notified Body reviews your PMCF report, they see that gap.

Key Insight
Continuous data quality monitoring is essential. Automatic flagging of missing or inconsistent entries, combined with immediate site-level feedback, prevents silent error accumulation and preserves dataset integrity.

Aligning with routine clinical practice without sacrificing rigor

One of the appeals of registry-based PMCF is that it integrates into routine clinical practice. You do not impose a strict clinical trial structure. You collect data as patients are treated normally.

But this flexibility introduces risk.

If you rely entirely on clinical routine without any standardization layer, you will inherit all the variability that exists in real-world practice. Different hospitals document differently. Different clinicians have different thresholds for reporting complications.

The solution is not to turn your PMCF study into a rigid clinical trial. The solution is to add a thin layer of standardization on top of routine practice.

Define a minimal dataset that every site must capture in a uniform way. Keep it focused. Do not ask for 200 variables. Ask for the 15 variables that matter for your safety and performance evaluation.

Then provide simple tools that make data capture efficient. If clinicians have to spend 20 minutes per patient filling out forms, they will not comply. If you give them a one-page structured form that takes three minutes, compliance improves dramatically.

Simplicity drives compliance. Complexity drives deviation.

And when deviations multiply across sites, your data becomes unusable.

What reviewers check when they assess your multi-site PMCF

When a Notified Body reviews your PMCF report, they do not just look at the results. They look at the methodology.

They ask whether the data collection was consistent across sites. They check whether follow-up intervals were standardized. They examine whether adverse events were classified using a unified system.

If your report shows obvious inconsistencies, they will question the validity of your conclusions. If you pooled data from sites that used different definitions, they will ask how you ensured comparability.

And if you cannot answer those questions convincingly, they will issue a deficiency.

I see this often in clinical evaluation assessments. The sponsor presents pooled data from multiple sites, but the report does not explain how harmonization was enforced. The Notified Body asks for site-level summaries. The sponsor provides them, and the differences become obvious.

At that point, the study loses credibility.

Common Deficiency
PMCF reports that present pooled results without documenting how data harmonization was ensured across participating sites.

Your PMCF report should explicitly describe the harmonization measures you implemented. Include the standardized case report form in the appendix. Document the training sessions you conducted. Show the data quality checks you ran during the study.

This transparency demonstrates methodological rigor. It shows that you anticipated the risk of inconsistency and took steps to mitigate it.

And it makes the reviewer’s job easier. They do not have to guess whether your data is reliable. You have already shown them that it is.

Final thought

Multi-site PMCF studies offer scale and real-world relevance. But they introduce complexity that must be managed actively.

If you do not enforce harmonization from the start, you will lose data quality. And without data quality, your PMCF study cannot fulfill its regulatory purpose.

Standardize your case report forms. Train your investigators properly. Monitor data quality continuously. And document every step in your PMCF report.

That is how you build a multi-site study that survives regulatory scrutiny.

In the next part, I will address something even more neglected: how to handle unexpected safety signals that emerge mid-study and what your PMCF protocol must include to respond appropriately.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).


References:
– MDR 2017/745 Article 61 and Annex XIV Part B
– MDCG 2020-7: Post-Market Clinical Follow-up (PMCF) Plan Template

Multi-site challenges are best addressed within a robust PMCF framework. See our full guide on PMCF plans and reports under MDR.

Related Resources

Read our complete guide to PMCF under EU MDR: PMCF Plan & Report under EU MDR

Or explore Complete Guide to Clinical Evaluation under EU MDR