OpenAI and the US Artificial Intelligence Safety Institute (US AISI), a division of the US Department of Commerce's National Institute of Standards and Technology (NIST), signed a Memorandum of Understanding (MOU) last week in an unprecedented move.
The partnership seeks to advance OpenAI's commitment to safety, transparency, and human-centric innovation at this pivotal point in the AI revolution by creating a framework that allows the US AI Safety Institute to test and evaluate upcoming models before they are released to the general public. Anthropic has signed a similar agreement as well.
OpenAI CEO Sam Altman took to X (formerly Twitter) to emphasize the importance of the collaboration. "We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models," Altman stated, stressing the significance of such testing happening at the national level: "The US must keep taking the lead!"
Why the US AI Safety Institute?
Elizabeth Kelly, Director of the US AI Safety Institute, has facilitated numerous strategic alliances and has been a vocal advocate for safety in AI development. "We are excited to start our technical collaborations with Anthropic and OpenAI to advance the science of AI safety now that these agreements are in place," she said in a statement.
Under the Biden-Harris administration, the US AI Safety Institute was established in 2023 to assist in creating testing protocols for safe AI innovation in the US.
Through programs like this one, the United States may set the standard for broader voluntary adoption of AI safety practices. Anthropic, an OpenAI competitor, has already worked with government bodies such as the UK's Artificial Intelligence Safety Institute (AISI) to test its models before deployment. It would be interesting to see whether OpenAI collaborates with them as well.
Why is it important?
Dario Amodei, CEO of Anthropic, has drawn attention to the impact poorly managed AI systems can have on democracy, underscoring why the US AI Safety Institute (US AISI) is necessary.
According to Amodei, for AI to properly serve democratic institutions, it must align with human values and ethics. Anthropic, OpenAI, and the US AISI are working together in response to the growing power of AI, which, if left unchecked, could outstrip the capabilities of national governments and economies. Through this alliance, safety requirements, pre-deployment testing, and AI regulation are intended to prevent misuse, especially in politically sensitive areas such as elections.
"It's imperative, in my opinion, that we deliver these services well. It strengthens democracy overall, and if we don't give them, it erodes democracy itself," according to Amodei.