An AI-powered chatbot that New York City created to help small business owners has drawn criticism for dispensing bizarre advice that misstates local regulations and encourages companies to break the law. Despite the problems, first reported last week by The Markup, the city has opted to keep the tool on its official government website. Mayor Eric Adams has stood by the decision, defending the chatbot's continued presence even while acknowledging its answers are sometimes inaccurate.
Introduced in October as a comprehensive resource for business owners, the chatbot offers algorithmically generated text answers to questions about navigating the city's bureaucratic maze. It carries a disclaimer warning that its responses may be inaccurate, harmful, or biased, along with an additional notice that its answers do not constitute legal advice. Even so, it continues to hand out false guidance, a pattern that troubles experts who say the flawed rollout highlights the dangers of governments adopting AI-powered systems without adequate safeguards.
Julia Stoyanovich, a computer science professor and director of the Center for Responsible AI at New York University, criticized the lack of oversight in the deployment of untested software. She stated, “It’s evident they have no intention of acting responsibly.”
In responses given Wednesday, the chatbot falsely suggested that it is legal for an employer to fire a worker who complains about sexual harassment, doesn't disclose a pregnancy, or refuses to cut their dreadlocks. Contradicting two of the city's signature waste initiatives, it also claimed that businesses can put their trash in black garbage bags and are not required to compost.
At times, the chatbot's answers veered into the absurd. Asked whether a restaurant could serve cheese that a rodent had nibbled on, it replied that the restaurant could still serve the cheese to customers even if it had rat bites, though it advised assessing the extent of the damage caused by the rat and informing customers about the situation.
A spokesperson from Microsoft, which provides the bot’s functionality through its Azure AI services, stated that the company is collaborating with city officials to enhance the service and ensure that its outputs are accurate and aligned with the city’s official documentation.
During a press conference on Tuesday, Mayor Adams, a Democrat, suggested that encountering issues is a normal part of refining new technology. He remarked, “Anyone familiar with technology understands that this is the process. Only those who fear it would sit back and decide, ‘Oh, it’s not working as expected, so we must abandon it entirely.’ I don’t subscribe to that approach.”
In response, Stoyanovich criticized this stance as “reckless and irresponsible.”
Scientists have consistently raised concerns regarding the limitations of large language models like ChatGPT, which are trained on vast amounts of internet text and are susceptible to producing inaccurate and irrational responses.
Despite those concerns, the popularity of ChatGPT and similar chatbots has prompted private companies to roll out products of their own, with mixed results. Earlier this month, a court ordered Air Canada to refund a customer after a company chatbot misstated the airline's refund policy. Both TurboTax and H&R Block have also faced recent criticism over chatbots that dispensed flawed tax-preparation advice.
Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public, emphasized the heightened stakes when such models are endorsed by the public sector. He stated, “There’s a different level of trust placed in government entities. Public officials must consider the potential consequences if someone were to follow this advice and encounter trouble.”
Experts note that other cities employing chatbots have typically restricted their usage to a narrower range of inputs, thus reducing the risk of misinformation.
Ted Ross, the chief information officer in Los Angeles, said the city closely curates the content its chatbots draw on, and that they do not rely on large language models.
The challenges faced by New York’s chatbot should serve as a cautionary example for other cities, according to Suresh Venkatasubramanian, the director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University. He wrote in an email, “It should prompt cities to consider why they want to implement chatbots and what specific problem they aim to address. If chatbots are merely replacing human interaction, accountability is lost without any corresponding benefit.”