
Artificial intelligence programs such as ChatGPT could be used to motivate people to make healthier choices, but they currently veer away from best practice, new Flinders University research has found.
“Rates of chronic diseases are increasing worldwide, putting pressure on our healthcare systems, yet those same systems lack the capacity to help address the issue,” says lead researcher Dr Candice Oster, Research Fellow in Flinders’ Caring Futures Institute.
“Artificial intelligence chatbots offer a potentially accessible, cost-effective tool for supporting people to undertake health behaviour change to address lifestyle risk factors for chronic diseases, but very little evidence exists for their capability.”

Using simulated patients, the team tested the ability of GPT-4, the model underpinning ChatGPT, to deliver a common type of health coaching called ‘Motivational Interviewing’, an evidence-based counselling method that helps people identify their own motivation to change.
The conversations were then analysed using a popular tool for assessing Motivational Interviewing skills, with the findings to be presented at this week’s Health Innovation and Community Conference in Melbourne.
“The key to Motivational Interviewing is that it works through a four-step process starting with rapport building and understanding the desired goal, before moving on to evoking a person’s own motivation to change and developing a plan together,” says Dr Oster.
“We know that simply telling people what to do and how to change doesn’t work; they need to want to change first.”
Initially, GPT demonstrated some ability to deliver Motivational Interviewing, including complex reflections, affirmations, and seeking collaboration. It was also effective in avoiding confrontation.
However, the team found the AI eventually progressed to lengthy bouts of ‘telling’ and other inappropriate interactions, including long bulleted lists of suggested actions, attempts to end the conversation, and doubling down when the simulated patients reacted negatively to the advice.
“These initial results show there is potential for GPT to effectively support people through behaviour change but there are areas for improvement,” says Dr Oster.
“In this type of coaching, people often react to being told what to do as they feel that someone is trying to limit or control their choices.
“GPT wasn’t able to steer clear of that, highlighting areas that could be considered for model augmentation to improve AI’s capabilities.”
Results of the study will be presented at the Australasian Institute of Digital Health’s Health Innovation and Community Conference, being held in Melbourne on August 18–20. The research was funded by a Flinders Foundation Seed Grant.