Stanford Foundation Model Transparency Index (FMTI)
The Stanford Foundation Model Transparency Index (FMTI), released on October 18, 2023, levels serious criticism at ten of the world's leading artificial intelligence (AI) companies: Meta, BigScience, OpenAI, Stability AI, Google, Anthropic, Cohere, AI21 Labs, Inflection, and Amazon. The FMTI research covers only these ten most prominent AI companies. Its findings claim that the foundation models used by these companies are becoming less transparent, and that researchers, policymakers, and the public consequently find it very difficult to understand how these models work, as well as their limitations and impact.
In the FMTI research report, Stanford's Institute for Human-Centered Artificial Intelligence (HAI), based at the California university, says that these prominent AI companies are becoming less transparent even as their models become more powerful.
Rishi Bommasani, Society Lead at the Center for Research on Foundation Models (CRFM) within Stanford HAI, said that companies in the foundation model space are becoming less transparent. Citing an example, he said, “OpenAI, which has the word open right in its name, has clearly stated that it will not be transparent about most aspects of its flagship model, GPT-4.”
What are the key findings of the Foundation Model Transparency Index (FMTI)?
Transparency is central to the FMTI research. The report argues that foundation model developers must increase transparency about their existing models in order to lay the groundwork for better models in the future. It also indicates that downstream developers need transparency, with less ambiguity, about the models they build on. Greater transparency in current AI models would improve trust, safety, and reliability, so that misinformation can be addressed seriously and without major repercussions.
Download the Foundation Model Transparency Index (FMTI) here.
When the team scored 10 major foundation model companies using their 100-point index, they found plenty of room for improvement: The highest scores, which ranged from 47 to 54, aren’t worth crowing about, while the lowest score bottoms out at 12. “This is a pretty clear indication of how these companies compare to their competitors, and we hope will motivate them to improve their transparency,” Bommasani says.
What are the top indicators of the Foundation Model Transparency Index (FMTI)?
While conducting the research, the team considered 100 indicators that comprehensively characterize transparency for foundation model developers. These 100 FMTI indicators are categorised into three broad categories – Upstream, Model and Downstream.
| Category | Description |
| --- | --- |
| Upstream | These indicators specify the ingredients and processes involved in building a foundation model, such as the computational resources, data, and labor used. |
| Model | These indicators specify the properties and function of the foundation model, such as the model's architecture and capabilities. |
| Downstream | These indicators specify how the foundation model is distributed and used, such as the model's impact on users, any updates to the model, and the policies that govern its use. |
What is the methodology of the Foundation Model Transparency Index (FMTI)?
The methodology adopted for the FMTI includes selecting the target companies, gathering information, scoring each company based on the findings, and recording the companies' responses to the research findings for validation.
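The scoring step described above can be sketched in code. The following is an illustrative sketch only: the indicator names and data below are hypothetical stand-ins, not the actual FMTI indicators, and the real index scores 100 binary indicators (one point each) rather than the handful shown here.

```python
# Hypothetical sketch of an FMTI-style transparency score.
# The real index uses 100 binary indicators across three categories;
# the indicator names below are invented for illustration.

INDICATORS = {
    "upstream": [
        "data sources disclosed",
        "compute disclosed",
        "labor practices disclosed",
    ],
    "model": [
        "architecture disclosed",
        "capabilities documented",
    ],
    "downstream": [
        "usage policy published",
        "update policy published",
    ],
}


def score_developer(satisfied: set) -> int:
    """Award one point per satisfied binary indicator, summed over all categories."""
    return sum(
        1
        for indicators in INDICATORS.values()
        for indicator in indicators
        if indicator in satisfied
    )


# A developer disclosing three of the seven hypothetical indicators:
example = {
    "data sources disclosed",
    "architecture disclosed",
    "usage policy published",
}
print(score_developer(example))  # 3 out of a possible 7 here (100 in the real index)
```

With 100 such binary indicators, this simple count directly yields the 0–100 scores reported in the index, where the top companies landed between 47 and 54.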
Who is involved in the development of the Foundation Model Transparency Index (FMTI)?
The 2023 Foundation Model Transparency Index was created by a group of eight AI researchers from Stanford University's Center for Research on Foundation Models (CRFM) and Institute for Human-Centered Artificial Intelligence (HAI), the MIT Media Lab, and Princeton University's Center for Information Technology Policy. The shared interest that brought the group together is improving the transparency of foundation models.
Bommasani observes that consumers of digital technologies have consistently grappled with a lack of transparency. This has manifested in various ways, such as misleading advertisements and pricing on the web, ambiguous pay structures in ride-hailing services, deceptive design tactics that coerce users into unintended purchases, and a host of transparency challenges concerning content moderation. These issues have given rise to a substantial network of misinformation and disinformation on social media platforms. Bommasani warns that as transparency in commercial foundation models diminishes, we risk encountering parallel threats to consumer safety.
This post was last modified on October 23, 2023 12:42 pm