AI-driven healthcare has the power to solve pressing health problems, but it must be underpinned by international regulation, say Vincent Buscemi and Daniel Morris, partners at Bevan Brittan LLP
AI and data-driven technology continues to revolutionise healthcare at a staggering pace, offering an enormous opportunity to shape and future-proof health systems that could be more affordable, sustainable and equitable.
While countries in the Global North grapple with ageing populations, chronic health problems and lifestyle-driven diseases, the Global South faces issues of affordability and accessibility of healthcare, alongside the double burden of higher rates of both infectious and noncommunicable diseases. AI and data-driven healthcare has the potential to help solve some of the most pressing health problems in both hemispheres.
Governments around the world are backing the development of this technology with significant public investment, while VC and PE firms are also looking to the potential for returns.
The opportunities
While AI technology’s biggest impact has been in the diagnostic specialties, such as radiology and pathology – specialties where AI can sift through and prioritise medical imaging, detecting patterns and identifying high-risk cases – we are also seeing significant developments in the tech-bio field. These are manifested particularly in AI-led drug discovery, where there is potential for massive efficiencies in time and cost, through savings on the years of lab work and hard chemistry that conventional drug discovery entails.
In due course, AI will also revolutionise surgery, providing live decision-making support to surgeons while operating. In short, AI technology is going to radically alter almost every aspect of healthcare and the way that it is delivered, whether that’s in diagnostics, therapeutics, or medical decision making. It will also fundamentally change the landscape of medical education and training, staffing and workforce and healthcare administration.
The risks/blockers
Speed of adoption is as important as whether we adopt. Move too fast and we risk getting it wrong, causing harm and losing public trust; move too slowly and we risk falling behind and seeing innovation stymied and investment redirected, such as to jurisdictions where technology-led change might be embraced faster.
The balance is between opportunity and risk. And crucial to getting this balance right will be the following:
a. Public trust in AI: addressing concerns about patient safety, digital discrimination, algorithmic bias, privacy and data security
b. Professional/medical education, training and buy-in
c. Regulation and standards – different jurisdictions are approaching AI-driven healthcare in a variety of ways: the UK approach, for example, is pro-innovation, light-touch and principles-based, utilising the existing regulatory machinery; whereas the European Union’s AI Act is more prescriptive and legislative, taking a risk-based approach to regulation.
d. Preventing unintended/unforeseen consequences: how do we prevent AI tech being used in ways it was never intended? This kind of subversion was clearly seen during COVID, when clinicians used WhatsApp to communicate and exchange patient data because it enabled more effective information-sharing, even though it wasn’t secure.
e. Insurance and indemnity arrangements: how do developers and purchasers insure against the risk of AI-caused adverse outcomes; is there adequate coverage and how are such products to be priced?
f. Ensuring there are bright lines of legal responsibility when AI health tech causes harm: does liability sit with the tech companies/developers, the purchasers or the clinicians using it?
g. Understanding the role of human oversight of AI: is that a good or bad thing and does it always ensure better outcomes?
h. Recognising that harm can arise not only from adopting AI-driven healthcare but also from failing to adopt it: if AI can analyse many thousands of images/scans in a timescale that a human radiologist would take years to get through, is it acceptable not to adopt while patients are in the meantime dying from pathologies that the technology could diagnose more quickly? Is this a breach of the duty of care?
The challenges
But these risks are where the fundamental dilemma lies. Take radiology for example. The great promise of AI is that it does the grunt work and frees up radiologists for more patient-facing and care roles. Even with human oversight, it’s clear that the aims of AI are altruistically beneficial in enabling better allocation of human resources across the board.
Equally, we know of examples where AI has been shown to be insufficient, such as where it has identified incorrect anatomy or picked up on anomalies that were later shown to be artefacts following human oversight at MDT review.
So we’re back to the tried and tested method of medical training via textbook, lecture and hands-on learning. But could the solution be down to language? Hippocratic AI, a company headed up by Munjal Shah building a safety-focused large language model that could power a range of healthcare bots, explores this hypothesis. The company has just secured Series A funding of $120m – a huge investment. Testing its model in healthcare settings involves passing certifications, training with human feedback and testing for what the company calls “bedside manner.”
Inevitably, the regulatory landscape has to catch up with all this rapid innovation. In the run-up to the Bletchley Park summit, the first global AI safety summit, held in November 2023, the UK Deputy Prime Minister Oliver Dowden stressed that any regulation must be created in parallel with developers and requires buy-in from the national health systems rather than innovators. The resulting Bletchley Declaration saw 28 countries and the European Union commit to managing the risks of AI through regulation, legislation and international co-operation.
The international commitment to AI safety is particularly important given that much of AI remains an opaque process that is not easily understood. From an investment point of view, it is much easier to invest in a drug that has spent six years in development and has been peer reviewed. The lack of hard science behind AI innovations has created nervousness among investors – hence Munjal Shah and Hippocratic AI’s focus on recognisable certifications and human feedback.
Whatever wizardry AI might achieve, and whether we consider it a partner, a rival or a threat, it will fundamentally change healthcare delivery across the globe. But we must never forget the centrality of the patient, or end service user, who has to be at the core of every aspect of the debate, the deployment and the decisions made about AI-driven healthcare.
