
What is the OpenAI System Card and How Does GPT-4o Follow AI Safety Measures?

OpenAI has released the System Card for its latest flagship model, GPT-4o. The report provides a detailed look into the model’s capabilities, limitations, and, most importantly, the safety measures implemented during its development and deployment.

AI startup OpenAI has released the GPT-4o System Card, a report that outlines the safety measures the company carried out before the release of GPT-4o. GPT-4o, where the “o” stands for “omni”, was released to the public in May this year. Before releasing a large language model like GPT, it is standard procedure to examine and evaluate the model for potential risks or safety concerns. These evaluations are typically carried out by a group of red teamers, or security experts.

OpenAI has been battling security and privacy allegations for quite some time now. In July, an anonymous source reported that OpenAI had rushed through its safety tests to meet the launch date. “We basically failed the test,” the source said.


What is the OpenAI System Card?

The OpenAI System Card is a report that provides a detailed look into a specific AI model’s capabilities, limitations, and most importantly, the safety measures implemented during its development and deployment. It provides insights into the model’s behavior and the steps taken to mitigate potential risks. 

According to the System Card, OpenAI’s latest flagship model GPT-4o was rated as having a “medium” risk. OpenAI conducted a thorough evaluation of GPT-4o’s text, vision, and audio capabilities across four risk-assessment categories: cybersecurity, biological threats, persuasion, and model autonomy.

Overall, three of the four risk categories—cybersecurity, biological threats, and model autonomy—were rated as low risk. The only category with a higher risk rating was persuasion.

The GPT-4o System Card is not the first system card released by OpenAI. The startup earlier released similar reports for GPT-4, GPT-4 with vision, and DALL-E 3. 


How Does GPT-4o Follow AI Safety Measures?

Some of the key safety measures for GPT-4o include:   

  • External Red Teaming: According to the System Card, OpenAI engaged over 100 external red teamers from 29 different countries to stress-test the model for potential vulnerabilities and risks. It was carried out in four phases. External red teaming covered categories “that spanned violative & disallowed content (illegal erotic content, violence, self-harm, etc), mis/disinformation, bias, ungrounded inferences, sensitive trait attribution, private information, geolocation, person identification, emotional perception and anthropomorphism risks, fraudulent behavior and impersonation, copyright, natural science capabilities, and multilingual observations.”
  • Preparedness Framework Evaluation: The model was assessed against a framework evaluating risks in cybersecurity, biological threats, persuasion, and model autonomy. GPT-4o scored low in three categories and medium in persuasion. After reviewing the Preparedness evaluations, the company’s Safety Advisory Group recommended classifying the LLM as borderline medium risk for persuasion and low risk in all other areas. Since the overall risk score is based on the highest risk category, GPT-4o’s overall risk was classified as medium (see the sketch after this list).
  • Risk Identification, Assessment, and Mitigation: OpenAI identified key risk areas such as unauthorized voice generation, speaker identification, ungrounded inference, and generation of explicit or violent content. The company then developed specific mitigations and implemented them to address these risks.
  • Continuous Evaluation: Facing criticism of its safety standards from its own employees as well as the US Senate, the company continues to monitor and evaluate its model’s performance and safety. However, the company needs to be more transparent about its model training data as well as its safety testing.
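
The Preparedness Framework’s scoring rule is simple: a model’s overall rating is the highest rating across the tracked categories. Here is a minimal, hypothetical Python sketch of that rule; the category names and ratings come from the System Card, while the function and variable names are illustrative, not OpenAI’s actual code.

```python
# Hypothetical sketch of the Preparedness Framework's overall-score rule:
# the model's overall risk is its highest per-category risk rating.
RISK_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def overall_risk(category_ratings: dict) -> str:
    """Return the highest rating among the per-category risk ratings."""
    return max(category_ratings.values(), key=RISK_ORDER.get)

gpt4o_ratings = {
    "cybersecurity": "low",
    "biological threats": "low",
    "persuasion": "medium",  # borderline medium, per the System Card
    "model autonomy": "low",
}

print(overall_risk(gpt4o_ratings))  # -> medium
```

Because persuasion alone was rated medium, that single category sets GPT-4o’s overall rating.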

Some of the prominent safety measures that OpenAI has implemented in GPT-4o are:

  • Preventing Unauthorized Voice Generation: OpenAI has restricted the model to a set of pre-selected voices, following the Scarlett Johansson voice controversy. It now uses a classifier to detect deviations from these approved voices.
  • Protecting Privacy: GPT-4o is trained to refuse requests for speaker identification based on voice input. However, it still complies with requests to identify famous personalities “associated with famous quotes.”
  • Mitigating Bias: The model is designed to avoid making unfounded inferences about individuals and to provide safe responses to requests for sensitive trait attribution.
  • Blocking Harmful Content: The company has also placed filters to prevent the generation of violent, erotic, or otherwise harmful content. Its moderation classifier runs over text transcriptions of audio prompts and blocks the output if the prompt contains explicit or violent language (a minimal sketch of such a pipeline follows this list).
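
OpenAI has not published its internal classifier, but the described flow — transcribe the audio prompt, then screen the transcript — can be sketched with the public OpenAI Python SDK. The Whisper and Moderation endpoints below are real API surface; the pipeline itself is an illustrative assumption, not OpenAI’s production system, and the file path is a placeholder.

```python
# A minimal sketch of a transcribe-then-moderate pipeline, assuming the
# public OpenAI Python SDK. OpenAI's internal production classifier is not
# public; Whisper and the Moderation endpoint stand in for it here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def audio_prompt_is_blocked(audio_path: str) -> bool:
    """Transcribe an audio prompt, then flag it if moderation detects
    violent or sexually explicit content in the transcript."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    result = client.moderations.create(input=transcript.text).results[0]
    return result.flagged and (
        result.categories.violence or result.categories.sexual
    )

# Usage: check the prompt before any response is generated.
if audio_prompt_is_blocked("prompt.wav"):  # placeholder path
    print("Blocked: prompt contains explicit or violent language.")
```

A production system would likely also screen the model’s own audio output, not just the prompt transcript.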


The Bottom Line

As stated above, OpenAI needs to be more transparent about its model training data as well as its safety testing. The company has been called out numerous times over safety concerns, including during co-founder and CEO Sam Altman’s brief ouster in 2023.

Moreover, the company is reportedly developing a highly capable multimodal AI model just ahead of the United States presidential election. We have already discussed how AI models like GPT can pose a severe threat to democratic elections and electoral processes. These models can easily amplify misinformation and influence public opinion.

It is crucial for OpenAI as well as other firms operating in this domain to address these concerns and prioritize the safety and ethical implications of their technology. Transparency and accountability are key in ensuring that their AI models are not used to perpetuate harm or misinformation. 



