Health disinformation and generative AI

Government and industry action on generative AI is urgently needed to protect health and wellbeing, say Flinders University medical researchers.

Rapidly evolving generative AI, the cutting-edge domain prized for its capacity to create text, images and video, was used in the study to test how false information about health and medical issues might be created and spread.

The team attempted to create disinformation about vaping and vaccines using generative AI tools for text, image and video creation. In just over an hour, they produced more than 100 misleading blog posts, 20 deceptive images, and a convincing deep-fake video; the material could also be adapted into more than 40 languages.

Bradley Menz, the study's first author, said the findings raise serious concerns, drawing on prior examples of health disinformation campaigns that have led to fear, confusion and harm.

β€œThe implications are clear: society currently stands at the cusp of an AI revolution, yet in its implementation governments must enforce regulations to minimise the risk of malicious use of these tools to mislead the community,” said Mr Menz.
