
A bioethicist and a professor of medicine on regulating AI in health care


The artificial intelligence (AI) sensation ChatGPT, and rivals such as BLOOM and Stable Diffusion, are generative-AI tools aimed at consumers. ChatGPT has caused particular delight since it first appeared in November 2022. But more specialised AI is already used widely in medical settings, including in radiology, cardiology and ophthalmology. Major developments are in the pipeline. Med-PaLM, a large language model built by researchers at Google and DeepMind, both owned by Alphabet, has 540bn parameters trained on data sets spanning professional medical exams, medical research and consumer health-care queries. Such technology means our societies now need to consider how doctors and AI can best work together, and how medical roles will change as a consequence.


The benefits of health AI could be vast. Examples include more precise diagnosis using imaging technology, the automated early diagnosis of diseases through analysis of health and non-health data (such as a person’s online-search history or phone-handling data) and the immediate generation of clinical plans for a patient. AI could also make care cheaper by enabling new ways to assess diabetes or heart-disease risk, such as scanning retinas rather than administering numerous blood tests. And it has the potential to alleviate some of the problems left by covid-19, including drooping productivity in health services and backlogs in testing and care, which plague health systems around the world.

For all the promise of AI in medicine, a clear regime is badly needed to regulate it and the liabilities it presents. Patients must be protected from the risks of incorrect diagnoses, the misuse of personal data and biased algorithms. They should also be shielded from the possible depersonalisation of health care if machines cannot offer the sort of empathy and compassion found at the core of good medical practice. At the same time, regulators everywhere face thorny issues. Legislation will have to keep pace with technological developments, which is not happening at present. It will also need to take account of the dynamic nature of algorithms, which learn and change over time. To help, regulators should keep three principles in mind: co-ordination, adaptation and accountability.

First, there is an urgent need to co-ordinate expertise internationally to fill the governance vacuum. AI tools will be used in more and more countries, so regulators should start co-operating with each other now. Regulators proved during the pandemic that they can move together and at pace. This form of collaboration should become the norm and build on the existing global architecture, such as the International Coalition of Medicines Regulatory Authorities, which supports regulators working on scientific issues.

Second, governance approaches must be adaptable. In the pre-licensing phase, regulatory sandboxes (where companies test products or services under a regulator’s supervision) would help develop the agility that is needed. They can be used to determine what can and ought to be done to ensure product safety, for example. But a variety of concerns, including uncertainty about the legal responsibilities of businesses that participate in sandboxes, means this approach is not used as often as it should be. So the first step would be to clarify the rights and obligations of those participating in sandboxes. For reassurance, sandboxes should be used alongside the “rolling-review” market-authorisation process that was pioneered for vaccines during the pandemic. This involves completing the assessment of a promising therapy in the shortest possible time by reviewing packages of data on a staggered basis.

The performance of AI systems should also be continuously assessed after a product has gone to market. That would prevent health services from getting locked into flawed patterns and unfair outcomes that disadvantage particular groups of people. America’s Food and Drug Administration (FDA) has made a start by drawing up specific rules that take into account the potential of algorithms to learn after they have been approved. These would allow AI products to be updated automatically over time, provided manufacturers present a well-understood protocol for how a product’s algorithm can change and then test those changes to ensure the product remains safe and effective. This would ensure transparency for users and advance real-world performance-monitoring pilots.

Third, new business and investment models are needed for co-operation between technology providers and health-care systems. The former want to develop products; the latter manage and analyse troves of high-resolution data. Partnerships are inevitable and have been tried in the past, with some notable failures. IBM Watson, a computing system launched with great fanfare as a “moonshot” to help improve medical care and support doctors in making more accurate diagnoses, has come and gone. Numerous hurdles, including an inability to integrate with electronic health-record data, poor clinical utility and the misalignment of expectations between doctors and technologists, proved fatal. A partnership between DeepMind and the Royal Free Hospital in London caused controversy: the company gained access to 1.6m NHS patient records without patients’ knowledge, and the case ended up in court.

What we have learned from these examples is that the success of such partnerships will depend on clear commitments to transparency and public accountability. This will require not only clarity on what can be achieved for consumers and companies by different business models, but also constant engagement—with doctors, patients, hospitals and many other groups. Regulators need to be open about the deals that tech companies will make with health-care systems, and how the sharing of benefits and responsibilities will work. The trick will be aligning the incentives of all involved.

Good AI governance should bolster both business and consumer protection, but it will require flexibility and agility. It took decades for awareness of climate change to translate into real action, and we are still not doing enough. Given the pace of innovation, we cannot afford a similarly pedestrian pace on AI.

Effy Vayena is the founding professor of the Health Ethics and Policy Lab at ETH Zurich, a Swiss university. Andrew Morris is the director of Health Data Research UK, a scientific institute.

© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com
