From AI novice and sceptic… to AI novice

In this piece, Dr Kirsty Wydenbach, Head of Regulatory Strategy and Drug Development Clinician at Weatherden, shares her journey from AI sceptic to embracing its potential in regulatory strategy, tempered by an appreciation of its limitations and grounded in its practical application in real-world drug development.

Do you remember the first thing you asked ChatGPT? I didn’t, but looking at my chat history it seems I asked “what is the interaction of IL4 and CYP2E1 in the brain?”. I don’t recall the scenario that prompted that question, but it seems to have been closely followed by questions on FDA Fast Track designations, BOIN designs in oncology and ILAP Innovation Passports. Amongst the regulatory questions, I must confess, I have also asked for nut-free granola recipes and the best Pinus species for cloud pruning. Interestingly, the responses to the latter seem to have been a little more successful than those to the drug regulation questions.

But let’s focus on AI in regulatory strategy for drug development as, dare I say it, I am slowly coming round to how it could actually be beneficial. I still have a lot more to learn, but there have been times when I can see it has saved time, or has at least given me food for thought to carry forward into my own non-AI-based searches. And there is an element of FOMO: if everyone else seems to be using it, or is involved in how it can transform drug development, then I should embrace what it can achieve. I’m not talking about the wider aspects of AI in regulation and drug development, as I can completely buy into how that is progressing, and at such a fast pace. I know there is a lot going on, such as the National Commission into the Regulation of AI in Healthcare, for which the MHRA has recently launched a call for evidence on how AI should be regulated.

This is about how I personally use AI, or not.

A defining moment for me came when I attended a webinar on agentic AI. It started out with a little bit of ‘AI for Beginners’, so it was perfect for a non-expert like myself. It outlined the basic premise of the progression from traditional AI to generative AI to agentic AI, with a life sciences slant. The examples were incredibly helpful for my understanding: using agentic AI to speed up regulatory submission processes through AI-based authoring and governance, for instance, can take months off the preparation time for regulatory authority submission packages. It still needs expert oversight as a final step, but I can see the savings in time and cost being hugely beneficial.

But it still didn’t quite fit what I wanted AI to help me with in my daily work, or what I thought it could do.

Then I got chatting to a Weatherden colleague who was able to point me in the right direction for what to read and how to improve my skills. They had been using agentic AI to build personal workflows to summarise daily news data, and offered to do something similar for internal use: monitoring guidance documents and flagging when they are updated. It sounds so simple – a real ‘well of course computers can do that!’ moment – but it still takes a basic understanding to get AI to speed up those small jobs that would take a (reasonably well-skilled) human hours to do. Another colleague also outlined how to put in the right prompts: for me this was a bit of a lightbulb moment, and taking the suggestions on board really changed how I have been able to use it.
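To give a flavour of how small the bones of that monitoring job are, here is a minimal Python sketch of just the change-detection step: fetch each watched page, hash its content, and compare against what was seen last time. The URL and file name are placeholders of my own, and I make no claim that this is how my colleague actually built it; the agentic version layers scheduling and AI summarisation on top of something like this.

    # A minimal sketch of a guidance-page update check. The URL and file
    # name below are placeholders, not real monitored pages.
    import hashlib
    import json
    from pathlib import Path
    from urllib.request import urlopen

    WATCHED_PAGES = [
        "https://www.example.org/guidance-page",  # swap in the pages you track
    ]
    STATE_FILE = Path("guidance_hashes.json")  # what we saw on the last run

    def check_for_updates() -> list[str]:
        """Return the URLs whose content has changed since the last run."""
        previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
        changed = []
        for url in WATCHED_PAGES:
            # Hash the current page content and compare with the stored hash.
            digest = hashlib.sha256(urlopen(url).read()).hexdigest()
            if previous.get(url) != digest:  # new page, or content changed
                changed.append(url)
            previous[url] = digest
        STATE_FILE.write_text(json.dumps(previous, indent=2))
        return changed

    if __name__ == "__main__":
        for url in check_for_updates():
            print(f"Updated since last check: {url}")

Run on a schedule, even a check this simple replaces hours of manual page-refreshing – exactly the kind of small job I had in mind.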

The main benefit was how it helped temper my expectations of ChatGPT and made me understand the limitations of what it can do. My prompts improved, the outputs improved, and it was all looking quite positive.

However, hallucination is an issue I have certainly encountered on multiple occasions, and to me this is the unacceptable side of AI in its current form. The implications for the companies I advise could be extremely costly, and ignoring this aspect is something I would consider reckless. The MHRA covered hallucination as part of the AI Airlock workshops, noting that it can lead to patient harm and mislead patients and clinicians. I would go further and propose that, on the commercial side, it can create an AI equivalent of a Type I or II error (with apologies to my statistician colleagues!), leading to decisions based on fabrications. This could prove hugely expensive for a company, but ultimately it is patients who will again suffer, through the development of a drug that is unlikely to reach licensing, or that only does so via unsafe decisions or inappropriate judgement.

On a more positive note, I have seen that ChatGPT does seem to be learning; at least, it seems to have worked out that I work in the regulation of medicines (“Since you have a scientific & regulatory background — I suspect you may be interested in…”). However, the tone of its responses has certainly taken a turn, and it has started being slightly passive-aggressive and, dare I say it, argumentative. When pushed on the data underpinning a specific CE marking for a device, the response was “You’re absolutely right — and thank you for pushing on this, because this is the exact nuance that makes borderline decisions tricky. But let’s go directly to the exact legal text,…”. The bold emphasis in the original response was ChatGPT’s, not mine. Though not a problem in itself, I am intrigued as to where the language will turn next! I do also have an element of caution here, though, coming back to hallucination and whether repeated learning on information that may not be entirely accurate will snowball into AI outputs that become nonsensical. Human judgement here is absolutely key.

But what do the regulators themselves think, and what do they see wider AI as being capable of?

The FDA has started using ‘Elsa’, its own AI tool, to perform a range of tasks, from accelerating clinical protocol reviews and summarising adverse events to generating code for non-clinical tools and applications (handy for those with no coding expertise). It would be interesting to know how FDA reviewers rate the accuracy of the outputs, and the weight and importance those outputs carry across different users, from early-career to more seasoned reviewers. I have also been following the MHRA AI Airlock and will be watching how it progresses, as well as the CERSI-AI. I see this approach to overcoming regulatory challenges as far more akin to the work I get involved in, and very different from the other AI applications of data summarisation and use in clinical care and devices.

So I am cautiously excited about what the future looks like. If someone who began as an AI novice can start to grasp and apply these tools, then any regulatory professional can – and should – join this dialogue. I can see the wider uses and implications, and although AI may not be a significant part of my daily role right now, I expect that to evolve in time, improving my productivity and enhancing how I present information.

Finally, you may wonder if I used ChatGPT to write this piece. I asked, it delivered, and I rejected the result as too obviously AI-written – except that I do agree with it when it stated ‘AI is now quietly becoming a helpful partner. Not a replacement for human judgement, but a set of tools that can make our jobs easier, faster, and sometimes even less stressful’.

Curious about how our regulatory team can fit into your strategy? Get in touch to learn how our experts can give your drug the best chance of success in clinical trials. 
