On the first day of its Redacted conference in Bangkok, Thailand, Near Protocol revealed a bold proposal to create the world’s largest open-source artificial intelligence model. At 1.4 trillion parameters, it would be roughly 3.5 times larger than Meta’s open-source Llama model.
The model will be developed by hundreds of contributors through competitive, crowdsourced research and development on the new Near AI Research Hub. Starting Nov. 10, participants can join the training of a small model with 500 million parameters.
What’s Next?
Only the top contributors will advance to work on progressively larger and more sophisticated models as the project expands in scope and complexity across seven models. To compensate contributors and fund continuous updates as the technology advances, the models will be commercialized while maintaining anonymity through encrypted Trusted Execution Environments.
Illia Polosukhin, a co-founder of Near Protocol, said at the Redacted conference in Bangkok that token sales would cover the costly training and computation.
“It’s a lot, costing about $160 million, but in reality, it’s raiseable money in cryptocurrency,” he stated. Polosukhin went on to say: “After all the deductions made using this mechanism, the tokenholders receive their money back. We have a business plan, a method for making money off of it, a method for raising funds, and a method for completing the loop. Thus, individuals can also put their money back into the following model.”
Past and Future
Near is one of the few cryptocurrency projects capable of carrying out such a bold endeavor. Co-founder Alex Skidanov worked at OpenAI before the launch of its game-changing model in late 2022, and Polosukhin was one of the authors of the pioneering transformer research paper that gave rise to ChatGPT. Skidanov, who now leads Near AI, acknowledged that it is a huge job with a significant obstacle to overcome.
How does decentralized AI address privacy concerns?
The project would require “tens of thousands of GPUs in one place” to train such a massive model, which is not ideal. And because all of the distributed training methods currently in use demand extremely fast interconnects, using a decentralized network of compute “would require a new technology that doesn’t exist today.” However, a new study from DeepMind indicates it may be achievable, he added.
Although he hasn’t discussed it with existing initiatives such as the Artificial Superintelligence Alliance, Polosukhin said he would be happy to explore any overlaps. He argued that decentralized AI technology must prevail for the benefit of all.
“In the present and most likely the future, this technology is the most significant. And the truth is that we will essentially follow the company’s instructions if AI is under its control,” he added.
“There is no decentralization at that point if one corporation is handling all AI and, in effect, the entire economy. The only way Web3 remains philosophically important is if we create AI that adheres to the same standards.”
Edward Snowden, a conference guest speaker, made the point clear with a terrifying illustration of how centralized AI might turn the entire planet into a massive surveillance state.
He also discussed the necessity of civil rights on the internet and the importance of acknowledging that “the only way to preserve our digital sovereignty is to create our systems that are enforced through math, and there are legitimate limits on their authority to regulate.”