AI startup OpenAI has released the GPT-4o System Card, a report that outlines the safety measures the company carried out before releasing GPT-4o. GPT-4o, where the “o” stands for “omni”, was released to the public in May this year. Before releasing a large language model like GPT, it is standard procedure to examine and evaluate the model for potential risks and safety concerns. These evaluations are typically carried out by a group of red teamers, or security experts.
OpenAI has been battling security and privacy allegations for quite some time. In July, an anonymous source reported that OpenAI had rushed through its safety tests to meet the launch date. “We basically failed the test,” the source said.
What is an OpenAI System Card?
An OpenAI System Card is a report that provides a detailed look at a specific AI model’s capabilities, limitations, and, most importantly, the safety measures implemented during its development and deployment. It offers insight into the model’s behavior and the steps taken to mitigate potential risks.
According to the System Card, OpenAI’s latest flagship model, GPT-4o, was rated as having a “medium” risk. OpenAI conducted a thorough evaluation of GPT-4o’s text, vision, and audio capabilities across four risk-assessment categories: cybersecurity, biological threats, persuasion, and model autonomy.
Overall, three of the four risk categories—cybersecurity, biological threats, and model autonomy—were rated as low risk. The only category with a higher risk rating was persuasion.
The GPT-4o System Card is not the first system card released by OpenAI. The startup has previously published similar reports for GPT-4, GPT-4 with vision, and DALL-E 3.
How Does GPT-4o Follow AI Safety Measures?
Some of the key safety measures for GPT-4o include:
- External Red Teaming: According to the System Card, OpenAI engaged over 100 external red teamers from 29 different countries to stress-test the model for potential vulnerabilities and risks. It was carried out in four phases. External red teaming covered categories “that spanned violative & disallowed content (illegal erotic content, violence, self-harm, etc), mis/disinformation, bias, ungrounded inferences, sensitive trait attribution, private information, geolocation, person identification, emotional perception and anthropomorphism risks, fraudulent behavior and impersonation, copyright, natural science capabilities, and multilingual observations.”
- Preparedness Framework Evaluation: The model was assessed against a framework evaluating risks in cybersecurity, biological threats, persuasion, and model autonomy. GPT-4o scored low in three categories and medium in persuasion. After reviewing the Preparedness evaluations, the company’s Safety Advisory Group recommended classifying the LLM as borderline medium risk for persuasion and low risk in all other areas. Since the overall risk score is based on the highest risk category, GPT-4o’s overall risk was classified as medium (a minimal sketch of this scoring rule appears after this list).
- Risk Identification, Assessment, and Mitigation: OpenAI identified key risk areas such as unauthorized voice generation, speaker identification, ungrounded inference, and generation of explicit or violent content. The company then developed specific mitigations and implemented them to address these risks.
- Continuous Evaluation: Facing criticism of its safety standards from its own employees as well as the US Senate, the company continues to monitor and evaluate its model’s performance and safety. However, the company needs to be more transparent about its training data as well as its safety testing.
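The “overall risk equals the highest category risk” rule described above is simple enough to show in a few lines of code. The sketch below is purely illustrative: the category labels and ratings are taken from the System Card, but the code is a hypothetical stand-in and is not OpenAI’s implementation of the Preparedness Framework.

```python
# Minimal sketch of the scoring rule: the overall risk is the worst rating
# across the four Preparedness categories. Illustrative only; not OpenAI's code.

RISK_LEVELS = ["low", "medium", "high", "critical"]  # ascending severity

def overall_risk(category_scores: dict) -> str:
    """Return the highest (worst) risk level across all categories."""
    return max(category_scores.values(), key=RISK_LEVELS.index)

gpt4o_scores = {
    "cybersecurity": "low",
    "biological threats": "low",
    "persuasion": "medium",   # borderline medium per the Safety Advisory Group
    "model autonomy": "low",
}

print(overall_risk(gpt4o_scores))  # -> "medium"
```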
Some of the prominent safety measures that OpenAI has implemented in GPT-4o are:
- Preventing Unauthorized Voice Generation: OpenAI has restricted the model to a set of pre-selected voices, a decision that followed the dispute with Scarlett Johansson over a voice resembling hers. It now uses a classifier to detect deviations from these approved voices.
- Protecting Privacy: GPT-4o is trained to refuse requests for speaker identification based on voice input. However, it still complies with requests to identify famous personalities “associated with famous quotes.”
- Mitigating Bias: The model is designed to avoid making unfounded inferences about individuals and to provide safe responses to requests for sensitive trait attribution.
- Blocking Harmful Content: The company has also put filters in place to prevent the generation of violent, erotic, or otherwise harmful content. Its moderation classifier runs over text transcriptions of audio prompts and blocks the output if the prompt contains explicit or violent language (a sketch of this kind of pipeline follows the list).
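To make the “transcribe, then moderate” flow concrete, here is a minimal sketch of such a pipeline. The keyword-based classifier and the function names (transcribe_audio, is_explicit_or_violent, handle_audio_prompt) are hypothetical placeholders; this is not OpenAI’s internal pipeline or classifier.

```python
# Illustrative sketch: transcribe the audio prompt, run a text classifier over
# the transcript, and block the reply if explicit or violent language is found.
# The toy word list stands in for a learned moderation classifier.

BLOCKED_TERMS = {"kill", "gore", "explicit"}  # toy list for illustration only

def transcribe_audio(audio: bytes) -> str:
    # Stand-in for a speech-to-text step; a real system would call an ASR model.
    return audio.decode("utf-8", errors="ignore")

def is_explicit_or_violent(transcript: str) -> bool:
    # Stand-in for a moderation classifier over the text transcription.
    return any(term in transcript.lower() for term in BLOCKED_TERMS)

def handle_audio_prompt(audio: bytes, generate_reply) -> str:
    transcript = transcribe_audio(audio)
    if is_explicit_or_violent(transcript):
        return "Request blocked: the prompt violates the content policy."
    return generate_reply(transcript)

# Example usage with a dummy reply function:
print(handle_audio_prompt(b"tell me a story", lambda t: f"Sure, here is a story about {t!r}."))
```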
The Bottom Line
As stated above, OpenAI needs to be more transparent about its training data as well as its safety testing. The company has been called out numerous times over safety concerns, including during the turmoil around co-founder and CEO Sam Altman’s brief ouster in 2023.
Moreover, the company is reportedly developing a highly capable multimodal AI model ahead of the United States presidential election. We have already discussed how AI models like GPT can pose a serious threat to democratic elections and electoral processes; these models can easily spread misinformation and influence public opinion.
It is crucial for OpenAI, as well as other firms operating in this domain, to address these concerns and prioritize the safety and ethical implications of their technology. Transparency and accountability are key to ensuring that AI models are not used to perpetuate harm or misinformation.