A Beijing-based team of researchers has created what it describes as the world’s first entirely optical artificial intelligence chip, claiming major gains in performance and energy efficiency.
Professors Fang Lu and Dai Qionghai led a team at Tsinghua University that developed the Taichi-II chip; their work was published in the journal Nature on Wednesday.
It is a significant improvement over their previous Taichi device, which the researchers claimed in April to have surpassed the energy efficiency of Nvidia’s H100 GPU by a factor of more than a thousand.
The original Taichi chip still relied on electronic computers for artificial intelligence training. Taichi-II, the team claims, can now carry out both training and modelling entirely with light, which has increased its efficiency and improved its performance.
According to the report, the increase represents a significant advancement for optical computing and has the potential to move the technology beyond theoretical concepts to large-scale experimental applications. It can also help meet the growing need for computational capacity that consumes less energy.
It might also offer an alternative in light of US restrictions on China’s access to the most powerful GPUs for artificial intelligence training.
The report found that Taichi-II outperformed its predecessor in several respects.
It improved the accuracy of classification tasks by 40 per cent and sped up the training of optical networks with millions of parameters by an order of magnitude.
In complex-scenario imaging, Taichi-II’s low-light energy efficiency increased by six orders of magnitude.
According to Fang, traditional optical artificial intelligence approaches usually trained an electronic artificial neural network model offline on electronic computing systems, then mapped it onto a light-based photonic architecture.
“It is impossible to precisely model a general optical system due to system imperfections and the complexity of light-wave propagation; there is always a mismatch between the offline model and the real system,” Fang said.
To get around these obstacles, the team devised a technique that performs most machine learning operations in parallel, running the compute-intensive training process directly on the optical chip itself. They dubbed this approach fully forward mode, or FFM, learning.
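The paper's actual FFM algorithm runs on optical hardware and is not reproduced here. But the core idea of training with forward passes alone, with no backward pass through a model of the system, can be illustrated in software with a stand-in technique: simultaneous perturbation stochastic approximation (SPSA), which estimates gradients from two forward evaluations. Everything below (the toy linear model, data, and hyperparameters) is a hypothetical sketch for intuition only, not the researchers' method.

```python
import numpy as np

# Sketch of forward-only training via SPSA (a stand-in for, not the
# paper's, FFM learning). Only forward evaluations of the loss are used;
# there is no backpropagation step.
rng = np.random.default_rng(0)

def forward(w, X):
    # Stand-in for an optical forward pass: a simple linear model.
    return X @ w

def loss(w, X, y):
    return np.mean((forward(w, X) - y) ** 2)

# Toy regression data with known ground-truth weights.
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w

w = np.zeros(4)
a, c = 0.05, 0.01  # step size and perturbation scale (arbitrary choices)
for step in range(3000):
    delta = rng.choice([-1.0, 1.0], size=w.shape)  # random +/-1 perturbation
    # Two forward evaluations give a directional gradient estimate.
    g = (loss(w + c * delta, X, y) - loss(w - c * delta, X, y)) / (2 * c) * delta
    w -= a * g

print(np.round(w, 2))  # recovered weights, close to true_w
```

The point of the sketch is that the parameters converge using nothing but forward evaluations of the system, which is the property that makes in-situ training on physical hardware attractive: the real device, imperfections and all, supplies the forward pass.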
Doctoral student and study lead author Xue Zhiwei stated, “This architecture supports large-scale network training and enables high-precision training.”
Using readily available high-speed optical modulators and detectors, FFM learning could accelerate training beyond what GPUs achieve.
According to Fang’s research, “these chips will serve as the basis for optical computing power in the future, enabling the construction of AI models.”