Running CE and UKCA in parallel: Why your evidence splits fail

Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert And Partner


I reviewed a submission last month where the manufacturer maintained separate clinical evaluation reports for CE and UKCA. Same device. Same clinical data. Two different conclusions about safety. The Notified Body flagged it immediately. The question wasn’t about format or translation. It was about fundamental coherence in clinical reasoning.

This is more common than it should be. When manufacturers commit to both CE marking under MDR and UKCA certification, they often treat them as separate compliance tracks. Two files. Two teams sometimes. Two sets of documentation that evolve independently.

The logic seems sound at first. Different regulations, different authorities, different submission portals. But clinical evidence doesn’t respect administrative boundaries. And when the same device generates conflicting clinical conclusions across jurisdictions, reviewers notice.

The problem isn’t usually intentional. It emerges from parallel workflows, different reviewers, and small interpretive choices that compound over time. But the regulatory consequence is the same: loss of credibility and extended review cycles.

The regulatory landscape for parallel submissions

Since Brexit, medical device manufacturers serving both EU and UK markets face dual compliance. MDR 2017/745 governs CE marking. UK MDR 2002 governs UKCA marking. On paper, the requirements look similar. Both demand clinical evaluation according to established principles. Both reference state of the art. Both require post-market clinical follow-up.

But similarity in structure doesn't mean identity in interpretation. The UK MDR 2002 retained much of the original Medical Device Directive framework, amended after Brexit rather than rebuilt around the MDR. The transition creates subtle differences in how certain requirements are interpreted and enforced.

More importantly, even where requirements align, you’re dealing with different Notified Bodies or Approved Bodies, different reviewers, and different institutional expectations about what constitutes adequate clinical evidence.

Key Insight
The regulations may converge on paper, but review cultures don’t. What satisfies one body’s clinical reviewer may not satisfy another’s, even when both are technically correct.

This creates a temptation to customize. To shape the clinical evaluation slightly differently for each submission. To emphasize different studies. To draw conclusions that align with perceived expectations.

That’s where coherence breaks down.

Where evidence management fractures

The split usually begins innocently. You prepare the CE submission first because the EU market is larger. The clinical evaluation report goes through multiple iterations with the Notified Body. You refine arguments, add appraisal tables, adjust the state of the art analysis.

Then you turn to UKCA. The base clinical data is the same. The device hasn’t changed. But now you’re working with a different Approved Body. Their initial feedback questions your equivalence rationale differently. They want more emphasis on certain endpoints. They interpret the device classification slightly differently.

So you adjust. You rewrite sections. You restructure the argument. You add studies that weren’t in the CE version. You reweight conclusions.

At this point, you have two clinical evaluation reports that are supposed to describe the same device using the same evidence base, but now they tell slightly different stories.

Common Deficiency
Manufacturers treat clinical evaluation as a compliance document rather than a scientific conclusion. They adjust conclusions based on perceived regulatory expectations rather than maintaining a single coherent clinical position.

This isn’t just a documentation problem. It’s a scientific integrity problem. If the same evidence leads to different safety conclusions depending on which market you’re addressing, something fundamental is wrong with your clinical evaluation process.

The coherence principle in clinical evidence

Clinical evaluation is ultimately a scientific judgment about whether the device’s benefits outweigh its risks when used as intended. That judgment should not change based on administrative jurisdiction.

The evidence base is what it is. The device performs the same way regardless of whether it’s sold in Frankfurt or Manchester. The state of the art doesn’t shift at the border. Patient safety considerations are universal.

This means your core clinical conclusions must remain stable across submissions. The benefit-risk determination should be identical. The identification of residual risks should align. The clinical safety and performance assessment should reach the same verdict.

What can legitimately differ are regulatory interpretations of that evidence. How you classify the device under different frameworks. How you map claims to specific regulatory requirements. How you structure PMCF activities within different surveillance systems.

But the underlying clinical position must be singular and defensible everywhere.

Managing a unified evidence base

This requires deliberate structure. You need a master clinical evaluation that serves as the scientific foundation for both submissions. This isn’t a copy-paste document. It’s the authoritative clinical position from which both regulatory versions derive.

Start by establishing your clinical evaluation independently of regulatory formatting. Conduct your literature searches. Appraise your data. Analyze equivalence or clinical investigations. Determine benefit-risk. Identify gaps and PMCF needs. Reach conclusions.

Document this in a comprehensive clinical evaluation report that answers the scientific questions first, before addressing specific regulatory structures. This becomes your reference document.

Then create jurisdiction-specific versions that reference this master CER but adapt structure and presentation to meet CE and UKCA formatting expectations. The clinical conclusions remain identical. The evidence tables are the same. The benefit-risk determination doesn’t change.

What changes is how you organize the information, what regulatory language you use, and how you cross-reference specific articles in MDR versus UK MDR.

Key Insight
Think of it as one clinical truth expressed in two regulatory dialects. The science doesn’t change. The presentation adapts.
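As a minimal sketch of that idea, in Python and with entirely hypothetical names: the clinical conclusions live in one immutable master object, and each jurisdiction-specific view reads from it rather than restating it. This is an illustration of the structure, not a tool recommendation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MasterCER:
    """The single authoritative clinical position; never edited per jurisdiction."""
    device: str
    evidence_base: tuple[str, ...]        # appraised studies, identical everywhere
    benefit_risk_conclusion: str          # one verdict, stated once
    residual_risks: tuple[str, ...]

@dataclass
class JurisdictionView:
    """A regulatory 'dialect': structure and cross-references only."""
    framework: str                        # e.g. "MDR 2017/745" or "UK MDR 2002"
    article_map: dict[str, str]           # clinical topic -> regulatory clause
    master: MasterCER

    def render_conclusion(self) -> str:
        # The conclusion is read from the master, never restated locally,
        # so the derived reports cannot drift apart.
        return f"[{self.framework}] {self.master.benefit_risk_conclusion}"

master = MasterCER(
    device="Device X",
    evidence_base=("Study A", "Study B"),
    benefit_risk_conclusion="Benefits outweigh risks for the intended use.",
    residual_risks=("Rare local irritation",),
)
ce = JurisdictionView("MDR 2017/745", {"clinical evaluation": "Article 61"}, master)
ukca = JurisdictionView("UK MDR 2002", {"clinical evaluation": "Part II"}, master)
assert ce.master is ukca.master  # one clinical truth, two regulatory dialects
```

The frozen master object is the design point: neither derived report can quietly edit the benefit-risk conclusion without the change being visible everywhere.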

Handling divergent feedback

The real test comes when your Notified Body and Approved Body provide conflicting feedback. This happens more often than manufacturers expect, even when requirements are aligned.

One body questions your equivalence rationale. The other accepts it but wants more post-market data. One requests additional bench testing. The other wants clinical investigation data instead.

The instinct is to satisfy each reviewer independently. Give each body what they’re asking for in their specific submission. But this creates divergence.

Instead, evaluate the feedback clinically first. Is the concern scientifically valid? Does it reveal a gap in your evidence that genuinely affects safety conclusions? If yes, address it in your master CER and update both submissions accordingly.

If the feedback reflects interpretation differences rather than evidence gaps, engage both bodies in a unified discussion. Explain your clinical position. Show that you’re maintaining the same scientific standard across jurisdictions. Demonstrate coherence.

Most experienced reviewers respect this approach. They understand that clinical evidence should be consistent. They don’t expect you to reach different conclusions for administrative convenience.

What they won’t accept is evidence that shape-shifts depending on who’s reviewing it.

Version control and traceability

Managing parallel submissions requires strict version control. Not just document revision numbers, but clear traceability of clinical decisions across both tracks.

When you update clinical evidence for one submission, that update must flow through to the other. When you add a new clinical study to address CE feedback, it must appear in the UKCA file with the same appraisal and the same implications for benefit-risk.

This sounds obvious, but I’ve seen it fail repeatedly. Updates happen in one file. The other file gets overlooked during submission prep. Reviewers spot the inconsistency. The question becomes: did you hide data, or are you just disorganized? Neither answer is good.

Implement a change control process specifically for clinical evidence. Any change to clinical data, appraisal, or conclusions must be logged and assessed for impact across all active submissions. This includes CE, UKCA, and any other jurisdictions you’re pursuing.
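As a minimal sketch of what such a change control record could look like, assuming a simple in-house log rather than any particular QMS tool (all names here are hypothetical):

```python
from dataclasses import dataclass, field
from datetime import date

ACTIVE_SUBMISSIONS = ("CE", "UKCA")  # extend for any other jurisdictions

@dataclass
class EvidenceChange:
    """One logged change to clinical data, appraisal, or conclusions."""
    change_id: str
    description: str
    logged_on: date
    # Impact must be assessed per active submission, not just the one
    # that prompted the change.
    impact_assessed: dict[str, bool] = field(
        default_factory=lambda: {s: False for s in ACTIVE_SUBMISSIONS}
    )

    def ready_to_close(self) -> bool:
        return all(self.impact_assessed.values())

change = EvidenceChange("CC-042", "New study added after NB feedback", date.today())
change.impact_assessed["CE"] = True
assert not change.ready_to_close()  # UKCA impact assessment still pending
```

The rule the sketch encodes is the one that matters: a change is never closed while any active submission remains unassessed for its impact.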

Common Deficiency
Manufacturers maintain separate document control systems for CE and UKCA files. Clinical evidence updates in one system don’t trigger reviews in the other, creating version drift and inconsistent safety conclusions.

PMCF coordination across jurisdictions

Post-market clinical follow-up presents its own coordination challenge. MDR Article 61 and Annex XIV Part B establish PMCF as a continuous process that updates the clinical evaluation. UK MDR has equivalent requirements. But the implementation details differ slightly.

The underlying PMCF plan should be identical. You’re collecting the same data about the same device to answer the same clinical questions. The methods don’t change based on where the device is sold.

What differs is how you report that data to different authorities and how you integrate it into periodic safety updates or surveillance reports under each system.

Don’t create separate PMCF studies for CE and UKCA. Create one comprehensive PMCF program that feeds both regulatory systems. Structure your data collection so it satisfies both sets of requirements simultaneously.

When you update your clinical evaluation based on PMCF findings, that update must apply equally to both submissions. New safety signals don’t respect borders. Clinical performance trends are universal. Your revised benefit-risk assessment should be consistent.

When divergence is legitimate

There are situations where CE and UKCA submissions can legitimately differ. These are limited and specific.

You might have different intended uses in different markets based on reimbursement or clinical practice patterns. This can affect clinical claims and therefore the scope of clinical evidence you present, though the underlying safety data remains the same.

You might face different device classifications based on specific rule interpretations. This changes the regulatory pathway but shouldn't change clinical conclusions about safety and performance.

You might have market-specific labeling driven by language, measurement units, or local standards. This affects claims and instructions but not fundamental clinical evidence.

Even in these cases, the core clinical evaluation must remain coherent. What you’re varying is regulatory framing, not scientific truth.

The reviewer’s perspective

When I review parallel submissions or audit technical files, the first thing I check is clinical coherence. If the manufacturer has both CE and UKCA documentation available, I compare clinical conclusions.

Any divergence triggers deeper investigation. Why does the benefit-risk section emphasize different risks in different reports? Why does one equivalence claim reference studies that the other doesn’t mention? Why does PMCF data appear in one jurisdiction’s update but not the other’s?

Sometimes there are good explanations. Different submission timing. Phased updates. Legitimate scope differences. But often, the explanation is just poor coordination.

And when that happens, it raises questions about the manufacturer’s clinical evaluation process as a whole. If they can’t maintain consistent conclusions across similar regulatory frameworks, how reliable is their clinical methodology? What other gaps might exist?

This is why coherence matters. It’s not about satisfying a checklist. It’s about demonstrating that your clinical evaluation is grounded in science rather than shaped by regulatory convenience.

Building the right structure

The solution isn’t to make CE and UKCA submissions identical in format. It’s to make them identical in scientific substance while adapting regulatory presentation.

Maintain one master clinical evaluation that represents your authoritative position. Derive jurisdiction-specific versions from that master. Implement change control that ensures updates propagate consistently. Coordinate PMCF as a unified program. Engage reviewers with transparency about your approach.

This takes more discipline upfront. You need clear processes. You need communication between regulatory teams. You need reviewers who understand both clinical evaluation and regulatory strategy.

But it prevents the fractures that create deficiencies later. It maintains credibility with Notified Bodies and Approved Bodies. And it ensures that your clinical conclusions remain defensible everywhere your device is used.

Because ultimately, clinical evidence isn’t about satisfying different authorities. It’s about understanding whether your device is safe and performs as intended. That answer shouldn’t change depending on who’s asking the question.

Next week, I’ll address how post-market surveillance data flows back into clinical evaluation across multiple jurisdictions, and why manufacturers struggle to close that loop effectively.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at defined intervals under the post-market surveillance plan (at least annually for class III and implantable devices).

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 61
– UK Medical Devices Regulations 2002 (as amended)

Deepen Your Knowledge

Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of the requirements of Regulation (EU) 2017/745.