Apple Visual Intelligence
A few days ago, Apple announced the latest iPhone 16 series, the Apple Watch Series 10, Visual Intelligence, and more. The newest iPhone models feature larger screens, faster processors, improved cameras, a range of new colors, and a dedicated Camera Control button that triggers Visual Intelligence. They also ship with a long list of features that are not exactly new, Visual Intelligence included.
As a result, people are divided over the announcements: some are ecstatic that Apple is stepping up its game, while others point out that these features have existed for years and that the company is only implementing them now.
Either way, Visual Intelligence, one of the features arriving with Apple Intelligence later this year, is drawing plenty of attention, largely because of its similarities to Google Lens, which has been around since 2017.
So, let’s see what Apple’s Visual Intelligence is and how it compares to Google Lens.
Apple’s Visual Intelligence is a built-in “personal intelligence” system that allows you to quickly access information about objects or places by taking a picture. This system is designed to process images efficiently, both on-device and using dedicated Apple silicon servers.
To use the feature, click and hold the Camera Control button, then point the iPhone's camera at whatever you want to know more about.
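Apple has not exposed a public API for Visual Intelligence itself, but the kind of on-device image recognition it relies on can be sketched with Apple's existing Vision framework. The snippet below is a minimal illustration, not Apple's actual implementation; the function name `classify(imageURL:)` and the 0.3 confidence cutoff are arbitrary choices for the example.

```swift
import Foundation
import Vision

// A rough sketch of on-device image classification using Apple's public
// Vision framework. This only illustrates the idea of recognizing what is
// in a photo locally; it is not the Visual Intelligence feature itself.
func classify(imageURL: URL) throws -> [(label: String, confidence: Float)] {
    let request = VNClassifyImageRequest()              // built-in image classifier
    let handler = VNImageRequestHandler(url: imageURL)  // runs entirely on device
    try handler.perform([request])

    let observations = request.results ?? []
    // Keep only reasonably confident labels (e.g. "dog", "plant", "monument").
    return observations
        .filter { $0.confidence > 0.3 }
        .map { (label: $0.identifier, confidence: $0.confidence) }
}
```

Calling `classify(imageURL:)` on a photo returns a list of labels with confidence scores, which is roughly the building block that visual search features layer richer results on top of.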
While iPhone lovers are singing praises of Apple for this innovative feature, others are quick to point out that this ‘innovative’ feature is basically Google Lens but Apple.
Google Lens is an image recognition technology developed by Google. To use it, you simply point your camera at an object or take a photo. Google Lens can then identify objects in the image, copy and translate text, search the web for visually similar images, and surface other relevant information. It was first released on October 4, 2017.
Both Apple's Visual Intelligence and Google Lens are image recognition technologies, but there are some notable differences. Visual Intelligence is built into the iPhone 16 lineup, is triggered from the Camera Control button, and processes images on-device or on Apple's own servers. Google Lens, by contrast, has had seven years to mature, works across Android and iOS, and ties directly into Google Search and the rest of Google's ecosystem.
Strictly speaking, there is no winner in this debate. Apple's Visual Intelligence likely offers stronger privacy protection, and it is the better fit if you are deeply invested in the Apple ecosystem or prioritize on-device processing.
Google Lens, on the other hand, provides broader, more mature features and integrates tightly with the Google ecosystem, which makes it the better choice if you want versatility and comprehensive visual search.