
Untested AI-enabled LLMs may harm patients, erode trust: WHO

Hasty adoption of untested AI large language model tools may harm patients, erode trust, and undermine long-term benefits, the WHO has warned.

HQ Team

May 17, 2023: Hasty adoption of untested artificial intelligence (AI) large language model (LLM) tools may harm patients, erode trust, and undermine the technology’s long-term benefits, the World Health Organization (WHO) stated.

The tools must protect and promote human well-being, human safety, and autonomy, and preserve public health, according to an emailed statement from the WHO.

LLMs include some of the most rapidly expanding platforms, such as ChatGPT, Bard, and BERT, which imitate understanding, processing, and producing human communication.

The WHO is concerned that the data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness.

LLMs generate responses that can appear authoritative and plausible to an end user yet may be completely incorrect or contain serious errors, especially on health-related matters, the global body stated.

Sensitive data

The models may be trained on data for which consent was not given for such use, and LLMs may not protect the sensitive data, including health data, that users provide to an application to generate a response.

The AI-based models can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content. 

Policy-makers must ensure patient safety and protection while technology firms work to commercialize LLMs, the WHO stated.

Consistent caution

“While WHO is enthusiastic about the appropriate use of technologies, including LLMs, to support health-care professionals, patients, researchers, and scientists, there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs.”

To protect people’s health and reduce inequity, the risks must be examined carefully when LLMs are used to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings, according to the WHO.

The WHO wants these concerns addressed, and clear evidence of benefit demonstrated, before LLMs see widespread use in routine health care and medicine, whether by individuals, care providers, or health system administrators and policy-makers.

“WHO reiterates the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health.”

WHO’s Florence

In October last year, the WHO, with support from the Qatar Ministry of Health, unveiled the AI-powered WHO Digital Health Worker, Florence version 2.0.

Florence offers its services on health topics in seven languages. It can share advice on mental health, offer tips to de-stress, and provide guidance on eating right, being more active, and quitting tobacco and e-cigarettes.

The digital health worker also provides information on COVID-19 vaccines and is available in English, Arabic, French, Spanish, Chinese, Hindi, and Russian.
