The Stanford Foundation Model Transparency Index (FMTI), released on October 18, 2023, levels serious criticism at the world's top artificial intelligence (AI) companies, including Meta, BigScience, OpenAI, Stability AI, Google, Anthropic, Cohere, AI21 Labs, Inflection, and Amazon. The FMTI research covers only these 10 most prominent AI companies. Its findings claim that the flagship models of these top companies are becoming less transparent, and that researchers, policymakers, and the public find it very difficult to understand how these models work, as well as their limitations and impact.
In the FMTI research report, Stanford's Institute for Human-Centered Artificial Intelligence (HAI), based at the California university, says that these prominent AI companies are becoming less transparent even as their models grow more powerful.
Rishi Bommasani, Society Lead at the Center for Research on Foundation Models (CRFM) within Stanford HAI, said that companies in the foundation model space are becoming less transparent. Citing an example, he said, “OpenAI, which has the word open right in its name, has clearly stated that it will not be transparent about most aspects of its flagship model, GPT-4.”
What are the key findings of the Foundation Model Transparency Index (FMTI):
Transparency is the central theme of the FMTI research. The index argues that developers must increase transparency around existing foundation models in order to build better models in the future. In addition, the research indicates that downstream developers need clearer, less ambiguous information about the models they build on. Greater transparency in current AI models would improve trust, safety, and reliability, so that problems such as misinformation can be dealt with seriously and without major repercussions.
- The top-scoring model scores only 54 out of 100: No major foundation model developer is close to providing adequate transparency, revealing a fundamental lack of transparency in the AI industry.
- The mean score is just 37%: Yet, 82 of the indicators are satisfied by at least one developer, meaning that developers can significantly improve transparency by adopting best practices from their competitors.
- Open foundation model developers lead the way: Two of the three open foundation model developers get the two highest scores. Both allow their model weights to be downloaded. Stability AI, the third open foundation model developer, is a close fourth, behind OpenAI.
When the team scored 10 major foundation model companies using their 100-point index, they found plenty of room for improvement: The highest scores, which ranged from 47 to 54, aren’t worth crowing about, while the lowest score bottoms out at 12. “This is a pretty clear indication of how these companies compare to their competitors, and we hope will motivate them to improve their transparency,” Bommasani says.
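To make the scoring concrete, here is a minimal sketch in Python of how a 100-indicator index like this can be tallied, assuming each indicator is scored as a simple pass/fail. The developer names, indicator names, and results below are hypothetical illustrations, not the actual FMTI data or rubric.

```python
# Minimal sketch: a developer's transparency score is the number of
# binary (pass/fail) indicators it satisfies, out of 100.
# All names and values below are hypothetical.
from statistics import mean

def transparency_score(indicators: dict[str, bool]) -> int:
    """Score = count of satisfied indicators (max 100)."""
    return sum(indicators.values())

# Hypothetical indicator results for two fictional developers.
developers = {
    "DeveloperA": {f"indicator_{i}": i % 2 == 0 for i in range(100)},  # 50 satisfied
    "DeveloperB": {f"indicator_{i}": i % 4 == 0 for i in range(100)},  # 25 satisfied
}

scores = {name: transparency_score(inds) for name, inds in developers.items()}
print(scores)                                    # {'DeveloperA': 50, 'DeveloperB': 25}
print(f"mean: {mean(scores.values()):.0f}/100")  # mean: 38/100
```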
What are the top indicators of the Foundation Model Transparency Index (FMTI):
While conducting the research, the team considered 100 indicators that comprehensively characterize transparency for foundation model developers. These 100 FMTI indicators are grouped into three broad categories, Upstream, Model, and Downstream, summarized in the table below (a short code sketch of the taxonomy follows it).
| Category | What the indicators cover |
| --- | --- |
| Upstream | The ingredients and processes involved in building a foundation model, such as the computational resources, data, and labor used. |
| Model | The properties and function of the foundation model itself, such as the model's architecture and capabilities. |
| Downstream | How the foundation model is distributed and used, such as the model's impact on users, any updates to the model, and the policies that govern its use. |
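To make the three-way taxonomy concrete, the sketch below models it as a simple mapping from category to indicator names. The individual indicator names are hypothetical placeholders, not the actual FMTI indicators.

```python
# The FMTI's three indicator categories, modeled as a mapping from
# category to example indicator names (hypothetical placeholders).
INDICATOR_CATEGORIES: dict[str, list[str]] = {
    "upstream": [    # ingredients and processes used to build the model
        "data_sources_disclosed",
        "compute_resources_disclosed",
        "labor_practices_disclosed",
    ],
    "model": [       # properties and function of the model itself
        "architecture_described",
        "capabilities_documented",
    ],
    "downstream": [  # how the model is distributed and used
        "usage_policy_published",
        "user_impact_reported",
    ],
}

# In the real index, the three categories together cover 100 indicators.
total = sum(len(names) for names in INDICATOR_CATEGORIES.values())
print(f"{total} example indicators across {len(INDICATOR_CATEGORIES)} categories")
```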
What is the Methodology of the Foundation Model Transparency Index (FMTI):
The methodology adopted for the FMTI includes selecting target companies, gathering information, assigning scores based on the findings, and recording each company's response to the findings for validation. The FMTI methodology is outlined below.
- Targets: 10 major foundation model developers were selected on the basis of their influence, heterogeneity, and status as established companies. Each company is assessed on its most salient and capable foundation model.
- Information gathering: The researchers systematically gathered information made publicly available by each developer as of September 15, 2023.
- Initial scoring: For each AI developer, two researchers independently scored every indicator, and any disagreements were resolved through discussion (a sketch of this step appears after this list).
- Feedback and company response: The researchers shared the initial scores with leaders at each AI company for their views and clarification before finalizing the research findings.
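As a rough illustration of the initial scoring step above, the sketch below merges two raters' binary scores and flags disagreements for discussion. The function name, indicator names, and ratings are hypothetical, not taken from the FMTI's actual process or code.

```python
# Sketch of dual-rater scoring: agreements are accepted directly;
# disagreements are flagged to be resolved through discussion.
# Indicator names and ratings are hypothetical.

def reconcile(rater_a: dict[str, bool],
              rater_b: dict[str, bool]) -> tuple[dict[str, bool], list[str]]:
    """Merge two raters' binary scores; collect indicators needing discussion."""
    agreed, disputed = {}, []
    for indicator in rater_a:
        if rater_a[indicator] == rater_b[indicator]:
            agreed[indicator] = rater_a[indicator]
        else:
            disputed.append(indicator)
    return agreed, disputed

rater_a = {"data_disclosed": True, "compute_disclosed": False}
rater_b = {"data_disclosed": True, "compute_disclosed": True}
agreed, disputed = reconcile(rater_a, rater_b)
print(agreed)    # {'data_disclosed': True}
print(disputed)  # ['compute_disclosed'] -> resolve through discussion
```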
Who is involved in the development of the Foundation Model Transparency Index (FMTI)
The 2023 Foundation Model Transparency Index was created by a group of eight AI researchers from Stanford University's Center for Research on Foundation Models (CRFM) and Institute for Human-Centered Artificial Intelligence (HAI), MIT Media Lab, and Princeton University's Center for Information Technology Policy. The shared interest that brought the group together is improving the transparency of foundation models.
Bommasani observes that consumers of digital technologies have consistently grappled with a lack of transparency. This has manifested in various ways, such as misleading advertisements and pricing on the web, ambiguous pay structures in ride-hailing services, deceptive design tactics that coerce users into unintended purchases, and a plethora of transparency challenges concerning content regulation. These issues have given rise to a substantial network of misinformation and disinformation on social media platforms. Bommasani warns that as transparency in commercial foundation models diminishes, we risk encountering parallel threats to consumer safety.