The challenge of our era is to bend the power of AI to serve humanity. And of all the areas of human life, the one in which AI has the most potential and the most danger is healthcare. The infinite complexity of human biology, and the life-altering stakes of the subject, assure that. 

Yet due to these two opposing forces – of opportunity and threat – this revolution will be slower in healthcare than in generic AI. A chatbot on your desktop, like the one I chose not to use in writing this piece, is rightly less regulated, and so faster to deploy. And for all its intricacies, language is less sophisticated than a human cell. 

Adoption will be slower not because health data is somehow special – privacy matters across many domains – but because of the regulated nature of healthcare. Proof is required, up to a high standard, before technology can be applied. We will increasingly face a dilemma between two strong demands. First, the demand to follow the scientific method: to use the best techniques known to improve outcomes. And second, to keep a human in the loop: to ensure that accountability to a human decision maker is central to the patient experience. Both of these principles have served medicine well for more than a century. 

Yet now they are set to come increasingly into conflict. AI decision-assisting tools are growing in reliability and usefulness. While they merely assist decisions, no dilemma arises. But imagine a case in which an AI decision is provably better than a human one. Even then, it may be reasonable to say that a human clinician should make the final call. This already happens in rare disease identification in primary care, for example, where a large language model can find connections in the data that may elude a GP's decades-old memory of a condition seen only once. There the technology helps, no doubt. 

But what if it is proven, as can easily be imagined, that a human layer on top of an AI engine is less effective, not more, at delivering good outcomes? The scientific method says cut out the clinician. But the ethicist may say keep them in. 

Such dilemmas lie before us, and soon. 

This is not the first dilemma to require us to rethink not just how we deliver healthcare, but how we think of the role of the clinician. As a technologist, then public servant, and ultimately Health Secretary, I have seen progress over many decades. 

A decade ago, there was no consensus even on the adoption of technology, and caution about the deployment of data was pre-eminent. 

Then for a while, health data was regarded as somehow different to other sensitive personal data, for example, and health-specific systems were required to hold it. Yet privacy matters in many domains, and health systems are now moving away from such a narrow view. 

Now adoption is widespread, and across the medical spectrum, from core clinical work to research and policy, the demand is for better technology, quite rightly. 

So in contemplating the next decade, it is not enough simply to say that the pace of change is now the slowest it will be. We have become used to a world in flux. Now I think we need to consider what the impact of technology will bring, and how we can shape it for the common good. 

Drawing on my experience, I have three guiding principles to offer. 

First, the future of healthcare will be determined at the intersection of data, research and clinical practice. All three are vital. The application of AI to health offers the greatest chance to improve healthy lifespan since the great public health interventions of the nineteenth and twentieth centuries.

Second, as technology moves from deterministic to probabilistic analysis, so the mindset of the clinician, instilled from the very first year of medical school, will have to adapt fundamentally too: to become more modest and more dynamic. Like good legal advice, a clinical diagnosis should be about likelihoods and confidence levels, not arrogant false certainty. 
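The probabilistic mindset can be made concrete with Bayes' theorem: even a positive result from an accurate test yields an updated likelihood, not certainty. A minimal illustrative sketch in Python (the prevalence and test-accuracy figures below are hypothetical, chosen only to show the arithmetic, and are not drawn from this article):

```python
def posterior(prior, sensitivity, specificity):
    """Probability of disease given a positive test result (Bayes' theorem)."""
    true_pos = sensitivity * prior                  # P(positive and diseased)
    false_pos = (1 - specificity) * (1 - prior)     # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# Hypothetical figures: 1% prevalence, 90% sensitivity, 95% specificity.
p = posterior(prior=0.01, sensitivity=0.90, specificity=0.95)
print(f"P(disease | positive test) = {p:.1%}")     # about 15.4%
```

With these figures, a positive result from a seemingly reliable test still leaves the diagnosis far more likely to be wrong than right, precisely the gap between likelihood and false certainty described above.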

That mindset shift leads to the third principle. While medical shibboleths fall, holding on to the core principles of science will be both harder and more important. Just one example: clinical trial methodology can be radically improved with the high-quality use of data, as we did, for example, in approving the Covid vaccines. But ensuring that new processes enhance our confidence in the result – in both a mathematical and an emotional sense – is more important than ever.

Face up to the myriad challenges, and the opportunity is bright. We stand on the cusp of great breakthroughs; we have the chance radically to reduce suffering and improve health. The power of the technology at our fingertips is immense. We all need to think hard about how to put that immense power to the best use for humanity. 

The Rt Hon Matt Hancock

Author

