Three days after Amazon announced its AI chatbot Q, some employees are sounding alarms about accuracy and privacy issues. Q is “experiencing severe hallucinations and leaking confidential data,” including the location of AWS data centers, internal discount programs, and unreleased features, according to leaked documents obtained by Platformer.
An employee marked the incident as “sev 2,” meaning an incident severe enough to warrant paging engineers at night and making them work through the weekend to fix it.
Q’s early woes come at a time when Amazon is working to fight the perception that Microsoft, Google, and other tech companies have surpassed it in the race to build tools and infrastructure that take advantage of generative artificial intelligence. In September, the company announced it would invest up to $4 billion in AI startup Anthropic. On Tuesday, at its annual Amazon Web Services developer conference, it announced Q — arguably the highest-profile release in the series of new AI initiatives the company unveiled this week.
In a statement, Amazon played down the significance of the employee discussions.
“Some employees are sharing feedback through internal channels and ticketing systems, which is standard practice at Amazon,” a spokesperson said. “No security issue was identified as a result of that feedback. We appreciate all of the feedback we’ve already received and will continue to tune Q as it transitions from being a product in preview to being generally available.”
Q, which is now available in a free preview, was presented as a kind of enterprise-software version of ChatGPT. Initially, it will be able to answer developers’ questions about AWS, edit source code, and cite sources, Amazon executives said onstage this week. It will compete with similar tools from Microsoft and Google but be priced lower than rivals’ offerings, at least to start.
In unveiling Q, executives promoted it as more secure than consumer-grade tools like ChatGPT.
Adam Selipsky, CEO of Amazon Web Services, told the New York Times that companies “had banned these A.I. assistants from the enterprise because of the security and privacy concerns.” In response, the Times reported, “Amazon built Q to be more secure and private than a consumer chatbot.”
An internal document about Q’s hallucinations and wrong answers notes that “Amazon Q can hallucinate and return harmful or inappropriate responses. For example, Amazon Q might return out of date security information that could put customer accounts at risk.” The risks outlined in the document are typical of large language models, all of which return incorrect or inappropriate responses at least some of the time — though they sit awkwardly alongside Amazon’s pitch of Q as a more secure alternative.