Apple Visual Intelligence
A few days ago, Apple Inc. announced the latest iPhone 16 series, the Watch Series 10, Visual Intelligence, and more. The newest iPhone models feature larger screens, faster processors, improved camera capabilities, a variety of new colors, and a convenient Camera Control button for Visual Intelligence. The newer iPhones come with plenty of features, but many of them are not exactly new, Visual Intelligence included.
As a result, people are divided over the new announcements. Some are ecstatic that Apple is stepping up its game, while others say these features have existed for years and the company is only implementing them now.
Anyway, Visual Intelligence, one of the features arriving with Apple Intelligence later this year, is drawing a lot of attention, especially because of its similarities with Google Lens, which has been around since 2017.
So, let’s see what Apple’s Visual Intelligence is and how it compares to Google Lens.
Apple’s Visual Intelligence is a built-in “personal intelligence” system that allows you to quickly access information about objects or places by taking a picture. This system is designed to process images efficiently, both on-device and using dedicated Apple silicon servers.
To use this feature, press and hold the Camera Control button, then point the phone's camera at whatever you would like to know about.
While iPhone lovers are singing praises of Apple for this innovative feature, others are quick to point out that this ‘innovative’ feature is basically Google Lens but Apple.
Google Lens is an image recognition technology developed by Google, released on October 4, 2017. To use it, you simply point your camera at an object or take a photo. Google Lens can then identify objects in the picture, copy and translate text, browse the web for similar images, and provide other relevant information.
Both Apple’s Visual Intelligence and Google Lens are image recognition technologies. However, there are subtle differences between the two. Here is how they differ:
Strictly speaking, there is no winner in this debate. Apple's Visual Intelligence may offer stronger privacy protection, and it is the better fit if you are deeply invested in the Apple ecosystem or prioritize on-device processing.
Google Lens, however, provides broader, more mature features and integrates well into the Google ecosystem. This is ideal if you want versatility and comprehensive visual search capabilities.
This post was last modified on September 13, 2024 6:04 am