OpenAI recently introduced CriticGPT to find GPT-4's mistakes. As per the official blog, CriticGPT is a step towards evaluating outputs from advanced AI systems, which can be difficult for people to rate without better tools. The GPT-4 series of models, which powers ChatGPT, is aligned to be helpful and interactive through "Reinforcement Learning from Human Feedback" (RLHF).
Read on to learn how CriticGPT works and what its current limitations are.
What is CriticGPT?
CriticGPT is a model based on GPT-4 that writes critiques of ChatGPT responses to help human trainers spot mistakes during RLHF. With its help, trainers write more comprehensive critiques than they do on their own, while producing fewer hallucinations than critiques written by the model alone. According to the OpenAI blog, “As we make advances in reasoning and model behaviour, ChatGPT becomes more accurate and its mistakes become more subtle. This can make it hard for AI trainers to spot inaccuracies when they do occur, making the comparison task that powers RLHF much harder. This is a fundamental limitation of RLHF, and it may make it increasingly difficult to align models as they gradually become more knowledgeable than any person that could provide feedback.”
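To make the "comparison task that powers RLHF" concrete, here is a minimal sketch of the pairwise-preference records that RLHF pipelines typically collect from trainers. The class and field names are illustrative assumptions, not OpenAI's actual data format.

```python
from dataclasses import dataclass

@dataclass
class ComparisonRecord:
    """One RLHF comparison: a trainer picks the better of two responses."""
    prompt: str
    response_a: str
    response_b: str
    preferred: str  # "a" or "b", chosen by the human trainer

def to_reward_training_pair(rec: ComparisonRecord):
    """Split a comparison into (chosen, rejected) for reward-model training."""
    if rec.preferred == "a":
        return rec.response_a, rec.response_b
    return rec.response_b, rec.response_a
```

As model answers improve and mistakes grow subtler, the trainer's choice of `preferred` becomes harder to make reliably, which is exactly the gap CriticGPT is meant to narrow.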
This is where CriticGPT enters the picture: it is trained to write critiques that highlight inaccuracies in ChatGPT's answers.
How Does CriticGPT Work?
OpenAI's LLM critics are auto-regressive Transformer policies similar to InstructGPT and ChatGPT. They are trained or prompted to accept a (question, answer) pair as input and output a plain-text "critique" that points out potential problems in the answer. The critiques follow a particular format: comments are attached to quotes from the answer, and a single critique can contain multiple such quotes, each with a comment about one problem.
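The quote-plus-comment structure described above can be sketched as a small data model. This is a hedged illustration of the format, assuming hypothetical class names (`Comment`, `Critique`) and a plain-text rendering; it is not OpenAI's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    quote: str    # exact excerpt from the answer being criticized
    problem: str  # description of the potential issue in that excerpt

@dataclass
class Critique:
    question: str
    answer: str
    comments: list = field(default_factory=list)  # list of Comment

    def validate(self) -> bool:
        # Each quoted span must actually appear verbatim in the answer.
        return all(c.quote in self.answer for c in self.comments)

    def render(self) -> str:
        # Plain-text critique: each quote followed by its attached comment.
        lines = []
        for c in self.comments:
            lines.append(f'> "{c.quote}"\n  Problem: {c.problem}')
        return "\n".join(lines)
```

For example, a critique of a factually wrong answer might quote the erroneous span and attach a correction; `validate()` catches critiques that quote text not present in the answer, a simple check against one class of hallucinated critique.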
What are the limitations of CriticGPT?
CriticGPT's suggestions are not always correct, but OpenAI finds that they help trainers catch many more problems with model-written answers than they would without AI help. The limitations of CriticGPT, as per OpenAI, are:
- We trained CriticGPT on ChatGPT answers that are quite short. To supervise the agents of the future, we will need to develop methods that can help trainers understand long and complex tasks.
- Models still hallucinate, and sometimes trainers make labelling mistakes after seeing those hallucinations.
- Sometimes real-world mistakes can be spread across many parts of an answer. Our work focuses on errors that can be pointed out in one place, but in the future, we must also tackle dispersed errors.
- CriticGPT can only help so much: if a task or response is extremely complex, even an expert with model help may not correctly evaluate it.
To align increasingly complex AI systems, we will need better tools. CriticGPT is just a first step: model-written critiques promise to help humans produce better RLHF data for GPT-4. Hence, OpenAI plans to scale this work further and put it into practice.