Technocrats, coders and general users across the world have reported that OpenAI's ChatGPT has become 'lazy'. Users are raising concerns about ChatGPT's prompt processing time, and paid ChatGPT 4 subscribers have published several posts on social media platforms highlighting its slow responses to prompts.
Several posts on social media platforms indicate that ChatGPT refuses to complete tasks or provides minimal effort in its responses, sometimes even resorting to sass. This sudden shift in behaviour has sparked speculation about intentional changes made by OpenAI.
Reddit forums and OpenAI's developer platforms are flooded with complaints about ChatGPT's slowness in processing prompts, especially for coding. Instead of providing comprehensive code, ChatGPT now offers snippets and directs users to complete the task themselves. This behaviour creates problems for many users, including paid subscribers, in getting the desired result. The prompt response time is also longer than in previous use cases.
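For developers hitting truncated code replies through the API, one workaround widely discussed on the forums is to state the expected completeness explicitly in the request. The sketch below only assembles a chat-completion payload in the style of the OpenAI Python SDK; the model name, instruction wording and temperature are illustrative assumptions, not an official fix from OpenAI:

```python
# Sketch: nudging GPT-4 toward complete code with an explicit system
# instruction. This builds the request payload only; actually sending it
# requires the `openai` package and an API key. All wording is illustrative.

def build_request(task: str) -> dict:
    """Assemble a chat-completion payload that asks for full, runnable code."""
    system_msg = (
        "You are a coding assistant. Always return complete, runnable code. "
        "Do not abbreviate with placeholders such as '# rest of the code here'."
    )
    return {
        "model": "gpt-4",  # assumed model identifier
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": task},
        ],
        "temperature": 0.2,  # lower temperature for more deterministic code
    }

payload = build_request("Write a function that parses a CSV file.")
```

Whether such prompt-level instructions fully counteract the reported behaviour is anecdotal; OpenAI's own statement (quoted below) says the cause is still being investigated.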
Some users suspect OpenAI deliberately modified ChatGPT to prioritize efficiency over detailed responses. AI systems like ChatGPT require substantial computing power, making extensive answers costly. This theory suggests OpenAI might seek a more economical solution, sacrificing user experience for resource optimization.
What are the major concerns raised about ChatGPT's laziness?
The current, degraded performance of OpenAI ChatGPT can be summarized in the following pointers.
- OpenAI confirmed it was investigating reports of GPT-4 performance degradation via @ChatGPTapp on X
- The company also explained the complexities of training chat models and their impact on AI behaviour
- Google Reviews highlighted mixed experiences with GPT-4, account issues, and an allegedly volatile work environment.
What is OpenAI's statement on ChatGPT's slow processing:
OpenAI acknowledged the user feedback on Twitter/X, expressing surprise at the perceived change in ChatGPT's behaviour. They confirmed no recent model updates and emphasized their focus on resolving the issue.
The company said: "We've heard all your feedback about GPT4 getting lazier! we haven't updated the model since Nov 11th, and this certainly isn't intentional. model behavior can be unpredictable, and we're looking into fixing it". The company also recently announced that it is improving its AI model through online A/B tests.
ChatGPT Slowness Complaints on OpenAI’s community forums:
Several OpenAI community managers have raised the concern on forums to seek responses from experts and participants. Many of the requests come from coders working on API integrations.

What are the limitations of OpenAI ChatGPT:
The officially claimed ChatGPT limitations are listed below. For a detailed reading, click here.
- ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging because, during RL training, there's currently no source of truth
- Training the model to be more cautious causes it to decline questions that it can answer correctly
- Supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows
- ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
- The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization
- Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, the current models usually guess what the user intended
- Despite efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour.