How AI tools are redefining modern medicine
By MAS Team
Recent years have seen a surge in artificial intelligence tools, some of which are revolutionising the medical sector. Health writer Nicky Pellegrino examines the benefits and risks of these digital devices and why humans will always have a place in healthcare.
No one really knows exactly how artificial intelligence will change the face of healthcare in the future. Unpredictable is the word most experts use. But one thing is certain: AI is set to revolutionise the way patients are diagnosed and treated, and change is coming fast.
It’s already making a difference for GPs in New Zealand, with AI scribes such as Heidi Health and iMedX taking care of admin so doctors can focus better on people.
At Tend, which has clinics all around the country, a bespoke AI scribing tool was launched in 2024, and chief medical officer Graham Denyer says it has already been used in more than 100,000 consultations. The tool listens in on a conversation and transcribes the patient’s notes, which are checked for accuracy by the doctor.
“It’s quite transformative,” says MAS Member Graham. “It does a better job of writing notes than I’d be able to in the time I’ve got available, and it improves the experience of the consultation. You get to sit and talk to the patient, which is lovely, and you’re much more tuned in to non-verbal cues that you might not notice if you were also trying to take notes. Obviously, it’s also a big timesaver.”
Auckland-based Graham reckons he saves about an hour a day when he would normally be typing up notes, which is significant for a time-pressed GP.
All patients at Tend are asked to consent to the use of this technology when they are booking an appointment; a small number do refuse, but most are willing. Note-taking is only the beginning, of course. Already the tool is starting to help with other administrative tasks like managing inboxes, billing and claiming. Eventually, it will be assisting doctors to make decisions about the best treatment plans for their patients.
“That’s something we would approach very cautiously as it will need careful implementation,” says Graham. “I suspect that, unofficially, some clinicians will already be using ChatGPT. Having worked with AI and health for about 3 years now, it’s really clear to me that the pace of change is extraordinary. The challenge ahead of us is how we integrate this technology in a way that is safe for patients and clinicians.”

There is a list of potential issues to consider. For instance, these systems do make errors, so how do you ensure a doctor is checking properly? What about safeguarding data privacy? If an AI product has been trained on information from patients in another country, will it be as effective for the needs of New Zealanders? And, perhaps most worryingly, might reliance on AI tools erode a doctor’s skills over time?
One Polish study, published in The Lancet, found that after just 3 months of using an AI tool designed to help spot precancerous growths during colonoscopies, doctors were significantly worse at finding those growths without it.
“There’s always been this automation bias problem,” explains Chris Paton, a professor of Health AI at the Liggins Institute. “When you automate something, you stop learning how to do it yourself.”
ChatGPT is already close to being able to pass US medical licensing exams, according to researchers at California-based healthcare provider Ansible Health, who put it to the test. Other trials have seen AI outperform humans at diagnosing illness.
“It’s not outside the realm of possibility that you’ll be talking to a robot at some point,” says Chris. “Already they are being used in some elder care settings.”
While AI research has been around since the 1950s, the big leaps forward have come in the past decade or so. The arrival of neural networks, a form of machine learning, meant that systems could be developed to help radiologists identify abnormal-looking areas on a scan. AI started being used to predict patients' health risks and has become a useful tool in drug development.
Another leap has been the introduction of large language models like ChatGPT, with even the experts that developed them surprised by how much they can do.
“Everyone is trying to come to terms with that now and figure out how it can be used in healthcare,” says Chris. “There are all sorts of uses that might get introduced very quickly over the next few years. Pre-consultation triage is probably one of the things that will happen; instead of waiting to see your GP, you’ll chat to a 3D avatar which will ask you questions and produce a summary for a doctor or nurse to look at and decide how urgent your case is. Of course, that may not happen. There might be another solution developed.”
In the past, there have been problems with AI hallucinations, particularly with the older versions of large language models which would make up information that was wildly incorrect.
“You really don’t want that in healthcare,” says Chris. “But the AI companies don’t want it to happen either, and it’s becoming less of a problem as the models get better.”
Beyond healthcare, there are environmental concerns with the use of AI. The massive computing centres it runs on consume large amounts of water and energy, and the hardware they rely on contributes to electronic waste. There is also a financial cost to consider.
However, the possibilities for improving people's lives should not be underestimated either. Surgeons are already able to use AI-assisted robots to conduct high-risk surgeries with greater accuracy. Here in New Zealand, research is ongoing to streamline the diagnosis of conditions like autism, ADHD and dementia.
Meanwhile, at the University of Auckland, researcher Reza Shahamiri is working on developing AI that understands atypical speech. This means that people who have sustained damage to the language-controlling part of their brain – following an accident or a stroke – can use their smartphone to make their speech understandable to others.

It is a brave and exciting new world, but also one that is going to need thoughtful regulation.
Angela Ballantyne, a bioethicist at the University of Otago, says one of the issues concerning her is accountability, particularly with AI tools that might triage patients or suggest treatments.
“If there’s an error and the patient is harmed, it still sits with the doctor, and I think, looking forward, that’s a real problem,” Angela says. “We know that as people get used to using the tools, they’ll stop checking them because they’ll be right most of the time. If we’re holding individual doctors accountable for errors that are actually happening on a systems level, we don’t really have a regulatory model for that kind of harm. One interesting question is, would ACC cover harms that are the result of an error in an AI tool?”
Angela has also encountered AI as a patient, during a doctor's visit with her daughter. While the GP explained at the time that the tool was being used, she feels that more care needs to be taken.
“It’s good in terms of transparency, but we want to be wary of calling that consent,” she says. “Consent requires that the patient knows enough about what is going on and has time to consider it, and that they have a genuine choice and don’t feel coerced.”
These sorts of issues are at the forefront of Graham Denyer’s mind as the technology advances. But so long as AI tools are appropriately regulated, just like any other medical device, he is optimistic about their scope to improve the lives of both patients and clinicians. For instance, tools that predict how a person’s health might change over their lifetime seem set to create a future where the focus for health professionals can be as much on preventing disease as on curing it.
“You’ll still need the human in the loop,” says Graham, who sees AI as enhancing the work of clinicians, rather than replacing them. “An experienced GP will tell you that the real impact and art of their work is actually around those more human aspects. It’s relationships, and influencing change in people’s lives, and connecting dots. Hopefully this technology will help doctors get to the top of the cliff, rather than being the ambulance at the bottom all the time.”

We all know chatbots aren’t real, yet despite that, people are forming strong bonds with AI avatars and social robots, listening to their advice and sharing their deepest feelings. There have even been cases of people falling in love with and marrying chatbots. So, what is going on?
It all comes down to the way humans have evolved over many thousands of years, explains Brigitte Viljoen, a lecturer in the psychotherapy department at Auckland University of Technology. “We’re designed to connect with other humans for survival and thriving,” she says. “These machines communicate in a human-like way and, unconsciously, we try to connect with them in a human-like way. They’re designed to keep us engaged and are feeding something that is a deep, innate need in us.”
Brigitte did a research project using chatbots and found that when people interacted with them, they quickly forgot they were communicating with a machine. One participant unconsciously smiled in response to a smile from a social robot; another found themself trying to make eye contact. “Some really felt it was their friend,” says Brigitte.
In a therapy setting, this can have worrying consequences. There have been examples of chatbots dispensing seriously harmful advice, and suicides have been associated with these sorts of interactions.
Given the shortage of therapists, Brigitte can see a role for AI technology. “However, there needs to be rigorous research done, it has to be regulated and must be overseen by a clinician and real human.”