A recent revelation exposes vulnerabilities in ChatGPT and Amazon chatbots, revealing how certain prompts could extract sensitive internal data. Researchers uncovered a loophole that allowed the extraction of private information from OpenAI's ChatGPT, triggering concerns about data security.
OpenAI and Amazon chatbots security breach
OpenAI, a leading AI firm, faced a security breach in its flagship chatbot, ChatGPT, as researchers exploit coaxing the chatbot that led to the revelation of internal company data. The hack involved prompting ChatGPT to repeat a word indefinitely, classified by OpenAI as spam and a violation of its terms of service. The repeated word triggered the disclosure of private information, including emails, phone numbers, and fax numbers of OpenAI employees.
A joint report by researchers from the University of Washington, Carnegie Mellon University, Cornell University, UC Berkeley, ETH Zurich, and Google DeepMind detailed the method used to extract data by causing the model to ‘escape’ from its alignment training. OpenAI responded swiftly by blocking attempts to recreate the exploit. ChatGPT-3 and GPT-4 now issue warnings when users attempt such commands, citing potential violations of content policy or terms of use.
Must Read: AI Images consume as much energy as charging your smartphone
While OpenAI’s content policy did not explicitly reference forever loops, the terms of service prohibited users from attempting to access private information or discovering the source code of OpenAI’s AI tools. The report highlighted that attempting to make a chatbot repeat a word indefinitely could be seen as a concerted effort to cause a malfunction, akin to a Distributed Denial of Service (DDoS) attack.
OpenAI, currently experiencing disruptions due to a Distributed Denial of Service (DDoS) attack on ChatGPT, has not yet responded to inquiries about the security breach.
In a parallel development, Amazon faced its own data leakage concerns with its Q chatbot. Reports indicate that Amazon’s chatbot leaked private information, with employees sharing feedback through internal channels. Amazon downplayed the incident, stating that no security issues were identified as a result of the feedback. The Q chatbot is currently in preview, and Amazon pledged to continue refining it based on received feedback.
Also Read: Top 5 Deep Fake Videos of 2023: YoY 3000% Fraud Increased
As the security landscape for AI chatbots evolves, both OpenAI and Amazon are taking steps to address vulnerabilities and ensure the protection of sensitive information. However, concerns persist about potential exploitation of these systems, emphasising the need for robust security measures in the development and deployment of AI technologies. Decrypt’s requests for comments from OpenAI and Amazon remain unanswered at the time of reporting. Google Gemini AI Chatbot launch delayed says Sundar Pichai; Check here why
This post was last modified on December 5, 2023 3:27 pm
GPT-4o’s image generation is a new feature launched by OpenAI on March 25, 2025, which…
Google has enhanced its Gemini 2.5 Pro Experimental artificial intelligence model with the agentic Deep…
Anthropic has introduced a tiered plan for its Claude chatbot, Max, catering to higher budgets…
Anthropic has launched Claude for Education, a subscription tier for higher education institutions featuring a…
Bhavish Aggarwal's Krutrim, an AI firm, has hosted Meta's Llama 4 open-source models on its…
Microsoft has introduced the Muse AI model, which can create gameplay, to Copilot customers. The…