AI no substitute for robust and inclusive public services

Analysis: Soon, users of some New Zealand government websites will be able to interact with GovGPT, a form of artificial intelligence often referred to as a conversational agent, which guides users to information, answers queries, and can operate in multiple languages. GovGPT will first be tested with users seeking information and support for small businesses, with the eventual ambition of a wider roll-out across government services.
This type of system can be an effective tool for providing timely information, and for showing service providers what users most want to know. Innovation in access to public information and services is welcome, but are there ethical and other risks? And where should investment in improving access best be targeted?
A central consideration in any technology investment process (and certainly in AI-based innovation) is awareness of “techno-solutionism” – a mindset that values the technology itself over the problem it is meant to solve. Before investing, we need to reflect thoroughly on the problem that actually needs solving.
Crucially, we need to consider the information seekers who most need support. This form of AI is a useful tool, but not a substitute for equitable access to public information. Barriers to accessing this information are most acute among those on the wrong side of the digital divide, such as some older people, those in rural areas who lack internet connectivity, Pasifika and Māori, and those for whom disability or language create communication difficulties.
Whether GovGPT can meet the information needs of different groups remains to be seen. As Hetan Shah, head of the British Academy, recently observed: “During a period of tight public finances, digitisation can sometimes be seen as a code word for cost-cutting: let’s lose the messy interactions with citizens and get them to deal with us through an app.”
Error rates and consequent liability are also significant concerns. Of course, all types of advice are subject to errors, and human interactions with government services and information are never error-free. But even when a system operates on publicly available information, or is ring-fenced to particular datasets, flawed information remains a considerable risk. A system is only as good as the information it is trained on and can access. If a government website or repository contains out-of-date material, broken links, or conflicting information in multiple places, that is what will be served up.
Delivery of correct information in multiple languages is also only as reliable as the translation infrastructure that supports the system. Smaller languages, such as Pacific languages, may not be sufficiently supported. Deficiencies in GovGPT’s pronunciation of te reo Māori have already been a barrier to implementation.
Though AI systems are constantly improving in accuracy, caution should be exercised in relying on them for decisions with significant financial, health, or legal consequences. Systematic errors and AI “hallucinations” – false information – have been regularly observed, even where the system operates on an entity’s own information.
A recent decision of the British Columbia Civil Resolution Tribunal shows how a similar system’s error had real-life consequences. The case involved an Air Canada passenger who booked a flight to attend his grandmother’s funeral. In the process, he used the services of the airline’s AI chatbot. The chatbot provided information about refunds in compassionate circumstances, which led the passenger to understand he would be eligible for a partial refund. However, the chatbot failed to mention, or point him to, the vital condition that claims could not be made after travel was completed.
When the passenger put in a refund claim after his flight, Air Canada refused it. The airline attempted to shift liability for the incorrect information onto the chatbot itself, claiming somewhat improbably that it was a separate legal entity. Unsurprisingly, this argument didn’t wash: the tribunal held that the company was responsible for all information on its website, whether provided by a chatbot or not.
It is often the most vulnerable people, and those in the most stressful and confusing circumstances, who have the most need for reliable and simple access to information about available services or entitlements. AI is a tool, but it is not a substitute for robust and inclusive public services.
