
Stanford Foundation Model Transparency Index (FMTI) Says Top AI Companies Fail Transparency Test

The Stanford Foundation Model Transparency Index (FMTI), released on October 18, 2023, levels serious criticism at the world's top Artificial Intelligence (AI) companies, including Meta, BigScience, OpenAI, Stability AI, Google, Anthropic, Cohere, AI21 Labs, Inflection, and Amazon. The FMTI research covers only these 10 most prominent AI companies. The findings claim that the foundation models built by these top companies are becoming less transparent, and that researchers, policymakers, and the public find it very difficult to understand how these models work, as well as their limitations and impact.

In the FMTI research report, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) at the California-based university says that these prominent AI companies are becoming less transparent even as their models become more powerful.


Rishi Bommasani, Society Lead at the Center for Research on Foundation Models (CRFM) within Stanford HAI, said that companies in the foundation model space are becoming less transparent. Citing an example, he said, “OpenAI, which has the word open right in its name, has clearly stated that it will not be transparent about most aspects of its flagship model, GPT-4.”

What are the key findings of the Foundation Model Transparency Index (FMTI)?

Transparency is the core concern of the FMTI research. The report argues that developers must increase the transparency of existing foundation models in order to prepare better models for the future, and that greater transparency would leave downstream developers with less ambiguity about what they are building on. Current AI models must use transparency to improve trust, safety, and reliability, so that misinformation can be addressed seriously and without major repercussions.

  1. The top-scoring model scores only 54 out of 100: No major foundation model developer is close to providing adequate transparency, revealing a fundamental lack of transparency in the AI industry.
  2. The mean score is just 37%: Yet, 82 of the indicators are satisfied by at least one developer, meaning that developers can significantly improve transparency by adopting best practices from their competitors.
  3. Open foundation model developers lead the way: Two of the three open foundation model developers get the two highest scores. Both allow their model weights to be downloaded. Stability AI, the third open foundation model developer, is a close fourth, behind OpenAI.

Download the Foundation Model Transparency Index (FMTI) here.

When the team scored 10 major foundation model companies using their 100-point index, they found plenty of room for improvement: The highest scores, which ranged from 47 to 54, aren’t worth crowing about, while the lowest score bottoms out at 12. “This is a pretty clear indication of how these companies compare to their competitors, and we hope will motivate them to improve their transparency,” Bommasani says.

What are the top indicators of the Foundation Model Transparency Index (FMTI)?

While conducting the research, the team considered 100 indicators that comprehensively characterize transparency for foundation model developers. These 100 FMTI indicators are categorised into three broad categories – Upstream, Model, and Downstream.

  • Upstream: These indicators specify the ingredients and processes involved in building a foundation model, such as the computational resources, data, and labor used to build it.
  • Model: The model indicators specify the properties and function of the foundation model, such as the model’s architecture and capabilities.
  • Downstream: The downstream indicators specify how the foundation model is distributed and used, such as the model’s impact on users, any updates to the model, and the policies that govern its use.


What is the Methodology of the Foundation Model Transparency Index (FMTI)?

The methodology adopted for the FMTI findings includes selecting targets, gathering information, scoring each company based on the findings, and recording the companies’ responses to the research findings for validation. The FMTI methodology is outlined below.

  • Targets: 10 major foundation model developers, selected on the basis of their influence, heterogeneity, and status as established companies. Each company is assessed on the basis of its most salient and capable foundation model.
  • Information gathering: The researchers systematically gathered information made publicly available by each developer as of September 15, 2023.
  • Initial scoring: For each AI developer, two researchers independently scored every indicator, and any disagreements were resolved through discussion.
  • Feedback and company response: The researchers shared the initial scores with leaders at each AI company for their views and clarification before finalising the research findings.
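The scoring described above can be sketched in a few lines of Python. Note that the indicator names and scores below are illustrative placeholders, not the published FMTI data: each developer is scored on 100 binary indicators, and the overall score is simply the count of indicators satisfied.

```python
from statistics import mean

def fmti_score(indicators: dict) -> int:
    """Overall score = number of satisfied indicators (out of 100 in the real index)."""
    return sum(indicators.values())

# Toy example with four illustrative (hypothetical) indicators.
example = {
    "data sources disclosed": True,
    "compute disclosed": False,
    "model architecture disclosed": True,
    "usage policy published": True,
}
print(fmti_score(example))  # 3

# The report's mean is computed the same way across all 10 developers;
# these per-developer totals are made up for illustration.
toy_scores = {"DevA": 54, "DevB": 37, "DevC": 12}
print(round(mean(toy_scores.values()), 1))  # 34.3
```

This also makes the second finding concrete: since 82 of the 100 indicators are satisfied by at least one developer, any developer could raise its own count substantially by adopting practices already in use elsewhere.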

Who is involved in the development of the Foundation Model Transparency Index (FMTI)?

The 2023 Foundation Model Transparency Index was created by a group of eight AI researchers from Stanford University’s Center for Research on Foundation Models (CRFM) and Institute on Human-Centered Artificial Intelligence (HAI), MIT Media Lab, and Princeton University’s Center for Information Technology Policy. The shared interest that brought the group together is improving the transparency of foundation models. 


Bommasani observes that consumers of digital technologies have consistently grappled with a lack of transparency. This has manifested in various ways, such as misleading advertisements and pricing on the web, ambiguous pay structures in ride-hailing services, deceptive design tactics that coerce users into unintended purchases, and a plethora of transparency challenges concerning content regulation. These issues have given rise to a substantial network of misinformation and disinformation on social media platforms. Bommasani warns that as transparency in commercial FMs diminishes, we risk encountering parallel threats to consumer safety.


This post was last modified on October 23, 2023 12:42 pm

Françoise

Francoise Hardy, a digital content creator and tech integration specialist with over 10 years of experience, is known for his deep knowledge of AI, ML, Data Science, Robotics, and Neural Networks. He began his career with a passion for emerging technologies, leading to innovative solutions and digital transformation in various businesses. Francoise's expertise extends to the ethical aspects of technology, advocating for responsible usage. Recognized by his peers, he is a sought-after speaker and writer in the tech industry. His commitment to advancing technology for societal benefit defines his career.
