Health NZ staff told to stop using ChatGPT to write clinical notes
Recorded: March 26, 2026, 4:02 a.m.
10:31 am today

Kate Green, Health Correspondent
kate.green@rnz.co.nz

Health NZ (HNZ) says staff have been caught using free AI tools like ChatGPT and Gemini to write clinical notes, a move it says could result in formal disciplinary action.

Copyright © 2026, Radio New Zealand
Health New Zealand (HNZ) has banned staff from using free, publicly available artificial intelligence (AI) tools, after clinicians were found drafting clinical notes with platforms such as ChatGPT, Gemini, and Claude. The directive, set out in a memo to Mental Health and Addiction Services staff in the Rotorua Lakes district, cites concerns about data security, patient privacy, and accountability, all critical considerations in healthcare. The memo states that use of these tools breaches established protocols and could trigger formal disciplinary action.

The ban stems from HNZ's broader AI policy, which requires all AI tools to be registered with the Health NZ National Artificial Intelligence and Algorithm Expert Advisory Group (NAIAEAG). Registered tools include Heidi, an AI scribe currently being piloted in emergency departments, underscoring HNZ's strategic interest in AI-driven solutions. Sonny Taite, HNZ's director of digital innovation and AI, says free AI tools present unacceptable risks, and that any exceptions are assessed case by case.

Despite the prohibition, HNZ is actively deploying approved AI, most notably through the Heidi trial in Hawke's Bay Hospital's emergency department. Emergency doctors there estimate Heidi saves up to ten minutes per patient by streamlining documentation and freeing up clinician time. The extent of unauthorised AI use within HNZ remains unclear, and the organisation has not said whether any disciplinary action has been taken.
Public Service Association (PSA) national secretary Fleur Fitzsimons criticised the memo's tone, calling it a "warning shot" that could stifle innovation and discourage staff from seeking help. She argued that the pressure clinicians are under, evident in their turning to AI, calls for investment in proper training and approved tools rather than punitive measures. She also pointed to HNZ's concurrent cuts to digital systems and IT support, warning that staff are being left to improvise with unsupported technologies. In her view, the agency must address why staff resort to unapproved tools instead of simply imposing discipline.

The episode reflects a wider national debate over AI in healthcare, particularly around data security and ethics. The government's rollout of approved AI scribes such as Heidi, alongside the ban on free AI tools, shows HNZ attempting to balance innovation against regulatory requirements and risk. The ongoing evaluation of Heidi and its planned expansion to all hospitals signal HNZ's commitment to pursuing AI's benefits while safeguarding patient privacy and data security.