When your device becomes a tool: the clinical evaluation gap
A sponsor submits a pivotal trial protocol that uses a CE-marked diagnostic device as a trial tool. The ethics committee asks: where is the clinical evaluation for this use? The sponsor freezes. The device has a CER, yes. But for this specific investigational context? Nothing. This happens more often than it should.
In This Article
- The regulatory position: MDR Annex XV and the investigational context
- What goes wrong in practice
- Why this matters more under MDR
- What a clinical evaluation for a trial tool should include
- Who is responsible?
- When ethics committees and competent authorities push back
- What this means for your next trial
- Closing
The confusion is real and widespread. A device gets CE-marked under MDR. It has a clinical evaluation report. It has a technical file. Then it becomes part of a clinical trial protocol, not as the investigational device, but as a tool to measure outcomes, monitor patients, or support decision-making.
Suddenly, the regulatory ground shifts. The device is no longer just a marketed product. It is now part of the investigational framework. And that creates a question most sponsors overlook until an ethics committee or competent authority raises it: does this device need its own clinical evaluation for this trial context?
The answer is not always straightforward. But the consequences of getting it wrong are.
The regulatory position: MDR Annex XV and the investigational context
Under MDR Article 62 and Annex XV, clinical investigations must be supported by a clinical evaluation of the investigational device. That is clear. But what about devices used as tools within the trial?
The regulation does not explicitly address this scenario. However, the principles embedded in Annex I (General Safety and Performance Requirements) and the broader requirements of clinical evaluation under Article 61 do apply whenever a device is used in a manner that could affect patient safety or data validity.
When a CE-marked device is used in a clinical trial as a measurement tool, monitor, or diagnostic reference, it becomes part of the clinical evidence generation process. Its performance directly impacts the trial outcomes. If that device fails, misclassifies, or introduces bias, the entire clinical dataset can be compromised.
A device used as a tool in a trial is not neutral. It is an active component of the evidence generation system. If its clinical performance is not evaluated for that specific use, you are building evidence on an unvalidated foundation.
This is where many sponsors stumble. They assume that because the device is CE-marked, no further clinical evaluation is needed. But CE marking reflects conformity for a defined intended use. Using that device in a trial setting, often with a different patient population, different endpoints, or different operating conditions, is a different use context.
What goes wrong in practice
I see three recurring patterns in submissions and audit findings.
First, the sponsor includes the CE-marked device in the trial protocol without any clinical evaluation documentation. The investigator brochure mentions it. The protocol describes how it will be used. But there is no analysis of whether the device is suitable for this trial population, whether its measurement accuracy is adequate for the endpoints, or whether its failure modes could compromise safety.
The ethics committee asks for justification. The sponsor responds with the CE certificate and the manufacturer’s instructions for use. That is not a clinical evaluation. That is a product specification.
A common mistake is providing a CE certificate and IFU as evidence of clinical suitability for a trial context. Notified Bodies and ethics committees increasingly reject this approach. They want a clinical evaluation specific to the investigational use.
Second, the sponsor commissions a literature search but stops there. They compile a list of studies where the device was used. They attach it to the investigational device dossier. But they do not appraise the data. They do not assess whether the clinical evidence supports the specific trial design, the specific patient population, or the specific measurement requirements.
This is a clinical data compilation, not a clinical evaluation. The difference matters.
Third, and this is the most dangerous pattern, the sponsor assumes the device manufacturer is responsible for this evaluation. They request documentation from the manufacturer. The manufacturer provides a general CER. The sponsor includes it in the submission without reviewing whether it addresses the trial-specific questions.
Then the competent authority raises concerns. The sponsor points to the manufacturer. The manufacturer points back to the sponsor. Meanwhile, the trial is on hold.
Why this matters more under MDR
Pre-MDR, many competent authorities and ethics committees did not scrutinize devices used as tools. The focus was on the investigational device itself. But MDR has shifted the regulatory culture toward a more evidence-based, risk-focused approach.
Annex XV Section 3.2 requires the investigational device dossier to include clinical evidence supporting the investigation. The MDCG has clarified in multiple guidance documents that clinical evaluation is not optional. It is a continuous, evidence-based process that must be tailored to the specific use context.
When a device is used in a trial, that is a specific use context. If the device measures a primary endpoint, its clinical performance directly determines whether the investigational device meets its objectives. If it monitors safety, its failure could result in unreported adverse events.
Competent authorities are now asking: where is the clinical evaluation for this tool? They are not satisfied with general statements. They want to see an analysis of the clinical data, an appraisal of the literature, a risk-benefit assessment for this specific use.
The shift is not arbitrary. It reflects a deeper understanding that clinical evidence quality depends on the quality of the tools used to generate it. If you cannot demonstrate that your measurement device is clinically valid for your trial, you cannot claim your trial data is reliable.
What a clinical evaluation for a trial tool should include
This is not about creating a full CER equivalent to what a manufacturer produces for market authorization. But it is also not about copying and pasting an existing CER without adaptation.
The clinical evaluation for a device used as a trial tool should address the following:
First, the intended use in the trial context. What is the device being used for? What measurements does it provide? What clinical decisions depend on those measurements? This must be explicit. A general IFU is not enough.
Second, the relevant clinical data. What studies exist that demonstrate the device performs adequately in a similar population, with similar endpoints, under similar conditions? This requires a targeted literature search and appraisal. Not every publication is relevant. The appraisal must focus on comparability.
Third, the performance characteristics. What is the accuracy, precision, sensitivity, specificity, or other relevant performance metric for this device? Is that performance adequate for the trial design? If the primary endpoint depends on a measurement with wide variability, how does that affect statistical power?
Fourth, the risk analysis. What happens if the device fails, malfunctions, or produces erroneous data? How would that affect patient safety? How would it affect data validity? What mitigation measures are in place?
Fifth, the justification. Based on the clinical data and risk analysis, is this device suitable for this trial? If there are limitations, are they acceptable? Are they disclosed in the protocol and the informed consent?
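The third point, on measurement variability and statistical power, can be sketched numerically. The figures below are purely hypothetical and the function is an illustrative assumption, not part of any MDR guidance: it uses the standard normal-approximation sample-size formula for comparing two means, with the device's measurement error added in quadrature to the biological variability.

```python
from math import ceil, sqrt

def n_per_arm(delta, sd_biological, sd_measurement,
              z_alpha=1.96, z_beta=0.84):
    """Approximate subjects per arm for a two-sample comparison of means
    (two-sided alpha = 0.05, 80% power, normal approximation).
    Device measurement error inflates the total observed variability."""
    sd_total = sqrt(sd_biological**2 + sd_measurement**2)
    return ceil(2 * ((z_alpha + z_beta) ** 2) * (sd_total / delta) ** 2)

# Hypothetical trial: a 5-unit treatment effect, biological SD of 10.
print(n_per_arm(delta=5, sd_biological=10, sd_measurement=0))  # ideal device -> 63
print(n_per_arm(delta=5, sd_biological=10, sd_measurement=6))  # noisy device -> 86
```

In this sketch, a measurement SD of 6 on top of a biological SD of 10 raises the required sample size from 63 to 86 per arm, roughly a third more subjects. That is the kind of quantitative argument the clinical evaluation should make explicit when the primary endpoint depends on a device measurement.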
A common mistake is treating this evaluation as a formality. Sponsors often produce a short document stating that the device is CE-marked and therefore suitable. That is not a clinical evaluation. It is an assertion without evidence.
Who is responsible?
This is where the confusion deepens. The device manufacturer has a legal obligation to maintain a clinical evaluation for their marketed device. But that evaluation is for the intended use as defined in the IFU and the technical file.
The trial sponsor is responsible for demonstrating that the investigational device and all associated tools are appropriate for the trial. That includes devices used to measure endpoints, monitor safety, or support clinical decisions.
In practice, the sponsor should request clinical data from the manufacturer. The manufacturer should provide it. But the sponsor must then evaluate whether that data supports the trial-specific use. If it does not, the sponsor must either commission additional evaluation, choose a different device, or adjust the trial design.
This is not about transferring liability. It is about clarity. The manufacturer knows the device. The sponsor knows the trial. Both must collaborate to ensure the clinical evaluation covers the actual use.
When ethics committees and competent authorities push back
I have seen submissions rejected because the clinical evaluation for a trial tool was missing or inadequate. The rejection is often phrased as a request for additional information, but the underlying message is clear: you have not demonstrated that this device is suitable for this use.
The sponsor then scrambles. They ask the manufacturer for more documentation. The manufacturer provides a standard CER. The sponsor submits it. The authority responds: this does not address our question.
The problem is that the standard CER addresses the general intended use. It does not address the trial-specific questions: Is the device accurate enough for this endpoint? Is it safe in this population? What happens if it fails during the trial?
The solution is not more documentation. It is the right documentation. A targeted clinical evaluation that directly addresses the trial context.
When a competent authority or ethics committee asks for a clinical evaluation of a trial tool, they are not looking for a CE certificate. They are looking for a reasoned argument, supported by clinical data, that demonstrates the device is fit for purpose in this trial.
What this means for your next trial
If you are designing a clinical trial and planning to use CE-marked devices as tools, start with this question: do I have a clinical evaluation that justifies the use of this device in this trial context?
If the answer is no, you have a gap. That gap will surface during submission review, during ethics committee assessment, or during competent authority evaluation. It is better to address it early.
Work with the device manufacturer. Request clinical data. Request post-market surveillance data. Request performance data specific to the population and use case in your trial.
Then appraise that data. Do not assume it is sufficient. Evaluate whether it supports your trial design. If it does, document that evaluation. If it does not, either find additional data or reconsider the device choice.
This is not about creating extra paperwork. It is about building a solid foundation for your clinical evidence. If your trial tools are not clinically validated for your trial, your trial data is built on assumptions, not evidence.
Closing
The distinction between a CE-marked device and a clinically evaluated trial tool is subtle but critical. One reflects conformity for a defined use. The other reflects suitability for a specific investigational context.
Under MDR, that distinction is becoming a regulatory reality. Competent authorities and ethics committees are asking for clinical evaluations of trial tools. They are not satisfied with CE certificates. They want evidence.
If you are preparing a trial submission, do not wait for the question. Anticipate it. Conduct the evaluation. Document the rationale. Show that you have thought through the clinical performance of every device that touches your trial data.
Because when your device becomes a tool, it stops being neutral. It becomes part of the evidence generation system. And that system must be validated, just like everything else.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, and when the device or its intended purpose changes. Update frequency should be proportionate to risk: at least annually for class III and implantable devices, and typically every two to five years for lower-risk devices, as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
Need Expert Help with Your Clinical Evaluation?
Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.
– MDR 2017/745 Article 61, Article 62, Annex I, Annex XV
– MDCG 2020-6: Clinical evidence for legacy devices
– MDCG 2020-13: Clinical evaluation assessment report template
Related Resources
Read our complete guide to CER under EU MDR: Clinical Evaluation Report (CER) under EU MDR
Or explore Complete Guide to Clinical Evaluation under EU MDR