
Envision Launches Ask Envision: A Virtual Visual Assistant Built on ChatGPT Technology


Envision, which launched in 2021, can recognize faces, objects, and colors, and can even describe scenes around the user. It can also translate any type of text into over 60 languages. Today the company announced the launch of Ask Envision, a virtual visual assistant built on ChatGPT by OpenAI, for all editions of its smart glasses.

The new feature adds a layer of utility and context to text scanned by the Envision Glasses, making it easier for users to access information and enjoy greater independence. Ask Envision can deliver more in-depth information or answer questions about any piece of scanned text. With a single tap within the Scan Text feature, users scan a text, pose their questions, and hear the answers spoken aloud by the glasses in the language of their choice.
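Envision has not published how Ask Envision works internally, but the flow it describes — scanned text becomes context for a ChatGPT-style model, and the user's question is answered against that context — follows a common pattern. The sketch below is purely illustrative: the `build_prompt` helper and the example text are assumptions, not Envision's actual code.

```python
# Illustrative sketch of the scan-then-ask pattern described above.
# OCR output from the glasses is packaged as context for a chat model,
# and the user's question is sent alongside it.

def build_prompt(scanned_text: str, question: str) -> list[dict]:
    """Package scanned text and a user question as chat messages."""
    return [
        {
            "role": "system",
            "content": "Answer questions using only the scanned text below.\n\n"
            + scanned_text,
        },
        {"role": "user", "content": question},
    ]

# A real assistant would send these messages to a chat-completion API
# and speak the reply through the glasses' text-to-speech engine.
messages = build_prompt(
    "Ingredients: flour, sugar, eggs. Bake at 180C for 25 minutes.",
    "What temperature should I use?",
)
print(messages[1]["content"])
```

In practice the model's reply would then be routed to the glasses' text-to-speech engine in the user's chosen language, matching the one-tap experience the article describes.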

Ask Envision is now available to all Envision Glasses customers, along with several other key features. These include Ally Video Calling, which lets users make hands-free video calls to chosen family and friends directly from the smart glasses for personal support and visual interpretation, and Aira Integration, which lets users call Aira agents, who provide professional human-to-human visual interpreting 24/7, directly from their Envision Glasses.

Ask Envision

Scene Recognition provides the user with a visual interpretation of the scene seen through the glasses, while Facial Recognition enables Envision users to teach their smart glasses to recognize faces, allowing the glasses to tell the user who is in the room or at a location when a known face is recognized.

Envision Glasses also come with Instant Text, Scan Text, and Batch Scan, enabling users to instantly read short text such as signs and labels, and to have longer texts such as letters and documents read aloud with Scan Text.

Layout Detection provides a more realistic reading environment by deciphering the document layout and giving verbal guidance to the user. Envision Glasses also have Enhanced Offline Language Capabilities, recognizing four additional Asian languages accurately when offline, and Bank Note/Currency Recognition, recognizing banknotes in over 100 currencies.

Document Guidance for Accurate Capture removes the frustration of taking multiple images to capture a document’s complete text, and optimized Optical Character Recognition has significantly improved image capture and interpretation accuracy by processing tens of millions of data points across the Envision Glasses and apps.

Julie Nguyen
Julie is the visionary founder of SNAP TASTE and a dynamic force in global storytelling, innovation and creative leadership. She is a respected member of the Harvard Business Review Advisory Council and serves as a judge for the CES Innovation Awards (2024, 2025 and 2026), where she contributes thought leadership on the intersections of business, culture and breakthrough technologies. As Managing Director, she also oversees the Fine Art, Digital Art, Portfolios and Marketing departments, ensuring the brand’s strategic vision and creative direction are realized across disciplines. Her immersive reporting has brought audiences behind the scenes of global milestones such as the FIFA World Cup Qatar 2022, Expo 2020 Dubai, CES, D23 Expo, and the Milano Monza Motor Show, offering exclusive access to moments that define contemporary culture. An accomplished film critic and editorial voice, Julie is also recognized for her compelling reviews of National Geographic documentaries and other cinematic works. Her ability to combine analytical depth with narrative finesse inspires audiences seeking intelligent, meaningful, and globally relevant content. With a multidisciplinary perspective that bridges art, technology, and culture, Julie continues to shape the dialogue on how storytelling and innovation converge to influence the way we experience the world.
