According to a press release on Friday, Apple has voluntarily committed to building AI that is safe, secure, and trustworthy. Apple Intelligence, the company’s generative AI offering, will soon be integrated into its core products, bringing generative AI to Apple’s 2 billion users.
Apple joins fifteen other tech giants, including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, that pledged in July 2023 to abide by the White House’s guidelines for developing generative AI.
Apple has not yet disclosed how deeply AI will be integrated into iOS. During WWDC in June, however, the company made clear that it is fully committed to generative AI, beginning with a partnership that brings ChatGPT to the iPhone.
As a frequent target of federal regulators, Apple likely wants to signal early that it is willing to follow the White House’s AI rules, perhaps in an effort to build goodwill before any future AI-related regulatory conflicts arise.
But how much teeth do these voluntary commitments have? Not much, though they are a start. The White House calls them a “first step” toward Apple and the fifteen other AI companies building AI that is safe, secure, and trustworthy.
The second step was President Biden’s AI executive order from October, and further proposals to regulate AI models are currently under consideration in both federal and state legislatures.
Under the pledge, AI companies commit to red-teaming AI models (acting as an adversarial hacker to probe a system’s safeguards) before public release and to sharing the results of those tests publicly. The White House also asks AI companies to keep the weights of unreleased AI models confidential.
Apple and the other companies agree to work on AI model weights in secure environments, limiting access to as few employees as possible. Finally, the companies agree to develop content-labeling tools, such as watermarking, to help users distinguish AI-generated content from material that is not AI-generated.
Separately, the Department of Commerce says it will soon release a report on the potential benefits, risks, and implications of open-source foundation models. Open-source AI is becoming a contentious regulatory battleground.
For safety reasons, some groups would like to restrict access to the model weights of highly capable AI models. Doing so, however, could severely limit the ecosystem of AI startups and research. The White House’s position on this issue could have a significant impact on the AI industry as a whole.
According to the White House, federal agencies have made significant progress on the obligations set out in the October executive order. So far, agencies have made over 200 AI-focused hires, granted computational resources to more than 80 research teams, and released multiple frameworks for AI development (the government likes frameworks).