Meet the Expert: Pharmacoepidemiology

As part of our Meet the Expert series, Strategy Director Steven Robery interviews Dorothee B. Bartels, our expert in real-world data and evidence.

Tell us a bit more about your background and role.

Thank you very much for having me. I'm an epidemiologist by training, and I did my PhD and my professorship in perinatal epidemiology before I moved into pharmacoepidemiology and industry. The field of pharmacoepidemiology has evolved significantly over the last decades. It started in the post-marketing and safety space, and now it's moving earlier and earlier into early clinical development. My personal passion nowadays is really to combine established pharmacoepidemiological methods with AI and machine learning.

Briefly, we hear a lot about real-world data and real-world evidence. What's the difference?

There's no globally consistent definition of real-world data and real-world evidence. In a nutshell, real-world data are data collected from routine clinical care, sometimes also defined, e.g. by the EMA, as data collected outside of clinical trials. Real-world evidence is generated from those real-world data with scientifically sound methods.

In the broader context of drug development, why is your role so important?

I think we all know that drug development is still super resource intensive. It takes 10 to 15 years, it costs one to two billion, and around 90% of assets fail. So you want to speed up your clinical trial development and make it more efficient. Complementary data and evidence can be a key pillar in making this process more efficient and de-risking the expensive clinical trial programmes. Furthermore, along the value chain, Phase IV studies and post-marketing requirements can nowadays often be addressed with real-world data, so there is also a lot of resource saving.

Pharmacoepidemiologists can provide these complementary insights and evidence along the whole value chain, and real-world data and real-world evidence can be an enabler to bring the right treatments to the right patients earlier, and thus have a significant impact on patients and public health. That's what it's all about, right? We want to get treatments as fast as possible to the right patients.

When should we be thinking about real-world evidence and real-world data?

It's Phase I, Phase II. Oftentimes, the argument against this is, “oh, but we don't know whether the asset will fail. So is the investment worth it?” But the value it can bring is very high, and the resources needed for real-world data are low in comparison to clinical trials. If you think you can significantly de-risk your Phase II/III trials, then I would recommend investing earlier rather than later.

Are there any common themes you find emerging biotech companies are not thinking about when designing their clinical trials from a real-world data perspective?

Companies’ biggest challenge is probably awareness of real-world data use: how can colleagues in development be cognisant of when real-world data can be helpful? In Phase I/II you want to understand the natural history of the disease and the patient journey, and initially you may think “it's only serving internal Phase III protocol optimisation, better defining inclusion/exclusion criteria”. But it may also become the basis for an external control arm. There is real hype around externally controlled studies, but a lot of expectation management is also needed. Regulators still consider it an emergency design, so externally controlled studies will only be considered if a comparative study is not feasible or not ethical.

When using data for an external control arm, it is crucial that the information is both fit-for-purpose and of high quality. Additionally, you must ensure that this data can be shared with regulatory bodies, keeping the ultimate objective in mind throughout the process.  

Real-world data are also super important to better understand the safety profile of new assets. For example, evaluating background rates: what is the risk profile of the target population without exposure to the new asset? This can be very important for avoiding development delays and/or speeding up label negotiations. You may also want to validate new endpoints really early on, for example for payers and reimbursement, which is becoming more and more important.

In summary, one should start to think as early as possible about real-world data and evidence needs across the value chain and across all stakeholders. Emerging biotechs may tend to go step by step and say “OK, we will think about it when we have reached the next milestone”, and then you miss the opportunity to build a consistent, coherent, efficient, fast real-world data strategy, let’s call it a real-world evidence engine.

That's a good segue. What do you find companies often get wrong without expert input? And how can this impact drug development?

I think companies may not know what they don't know: how can they identify opportunities where real-world data/evidence can support, enhance, and de-risk the clinical trial programme? So really, the key is to involve an expert early on.

Oftentimes, the real-world data questions are addressed in a siloed way, without connecting the dots, which results in missed opportunities to use synergies and in the risk of generating inconsistent data. One also tends to go for the low-hanging fruit, “oh hey, let's fix this fast and get the insights, we need them quickly because time is everything”, but you pay for it later on. It is therefore important to have experts with deep knowledge of how to identify fit-for-purpose data; you also need to understand the underlying healthcare system to design the study correctly.


I tend to say, “Look, this is the magic of randomization. By chance you will have comparable groups, but in real-world data it's very different. We always have heterogeneous groups, and we need to be careful not to compare apples to oranges and to engage super early with the relevant stakeholders.” Regulators offer this early engagement, payers offer this, so that you have clarity up front about what is and is not acceptable.
The impact can be huge. For example, if you plan an external control arm in a rare disease or in oncology, and you do your natural history study to get data for the external control arm thinking, “OK, I can discuss it later with the regulators”, you may fail because the data are not fit for purpose or of sufficient quality, or you did not evaluate enough appropriate data sources, etc. In the worst-case scenario, you then have to go back and repeat the whole study, be it Phase II or Phase III, and then the impact is really super huge.

So I guess it's very important to make sure you have these discussions with regulators if you're considering using an external control arm.

Yes, super early. The earlier, the better. And listen to the regulators, don’t ignore their comments!

Do you have any examples that you can share where you think this is particularly relevant and important in work that you've done in the past?

I think nowadays even regulators themselves publish examples from assets they have reviewed which failed, where data were not acceptable or were only partly acceptable. Usually one distinguishes between supportive and substantial real-world evidence. For example, if you provide background rates from a real-world data study to evaluate a potential signal from a clinical trial, that would be supportive real-world evidence.


Substantial evidence could be, for example, in a rare disease indication, using data from an international registry with high data quality to form an external control arm in a pivotal trial; that can lead to regulatory approval. So that's where real-world evidence contributed substantially to the approval of a drug. There are also examples out there where historical controls from claims data or electronic medical records were used as an external control arm, and regulators pushed back given low data quality, non-standardized assessments of exposures and outcomes, potential selection bias, changes in standard of care, etc. So they did not accept it.

Finally, what novel approaches are you seeing in real-world evidence and real-world data? And how can they be implemented in drug development?

For me, one of the most impressive developments over the last years is the growing acceptance of real-world data/evidence by regulators. I would say it's really exploding; the number of guidelines published and issued by the FDA is good evidence of this. Novel areas also include the plethora of data sources, tokenisation, and data linkages, which bring challenges in selecting fit-for-purpose data for each specific research question.

Then there's improved science for causal inference. A few years ago, people were still saying that without randomization you can't draw any causal conclusions. This has now changed.

The analytical methods that are now required are transparent and reproducible, which is also super important for building trust in real-world data/evidence.

Real-world data can also enable what I call the 360° view of a patient, combining different data sources. So, you may combine omics data with phenotype data from electronic health records and as such better understand patient population subgroups and better define your target patient population, which again helps reduce trial failures.
 

Finally, I personally hope that the combination of established epidemiological methods with ML/AI will speed up insights and evidence generation. You also need properly defined real-world data cohorts for ML training, so close collaboration is needed. But if it works out, I think we can enhance drug development and surveillance significantly.

Curious about how pharmacoepidemiology and/or real-world evidence fits into your strategy? Get in touch to learn how our experts can give your drug the best chance of success in clinical trials.
