Apple’s Visual Intelligence is a built-in "personal intelligence" system that allows you to quickly access information about objects or places by taking a picture. While iPhone lovers are singing praises of Apple for this innovative feature, others are quick to point out that this ‘innovative’ feature is basically Google Lens but Apple.
Apple Visual Intelligence
A few days ago, Apple Inc. announced the latest iPhone 16 series, the Apple Watch Series 10, Visual Intelligence, and more. The newest iPhone models feature larger screens, faster processors, improved camera capabilities, a variety of new colors, and a convenient Camera Control button for Visual Intelligence. The new iPhones also come with a lot of features that are definitely not new, including Visual Intelligence.
As a result, people are divided over the new announcements: some are ecstatic that Apple is stepping up its game, while others point out that these features have existed for years and the company is only implementing them now.
Anyway, Visual Intelligence, one of the features arriving with Apple Intelligence later this year, is drawing a lot of attention, especially because of its similarities with Google Lens, which has been around since 2017.
So, let’s see what Apple’s Visual Intelligence is and how it compares to Google Lens.
Visual Intelligence is designed to process images efficiently, both on-device and on dedicated Apple silicon servers.
To use this feature, press and hold the Camera Control button, then point the phone's camera at whatever you want to learn about.
Google Lens is an image recognition technology developed by Google. To use it, you simply point your camera at an object or take a photo. Google Lens can then identify objects in the picture, copy and translate text, search the web for similar images, and surface other relevant information. It was released on October 4, 2017.
Both Apple's Visual Intelligence and Google Lens are image recognition technologies, but there are notable differences between the two.
Strictly speaking, there is no winner in this debate. Apple's Visual Intelligence may offer stronger privacy protections, and it is the better choice if you are deeply invested in the Apple ecosystem or prioritize on-device processing.
Google Lens, however, provides broader, more mature features and integrates well into the Google ecosystem. This is ideal if you want versatility and comprehensive visual search capabilities.
This post was last modified on September 13, 2024 6:04 am