
Google Unveils Virtual Try-On for Apparel with Different Body Shapes


Google announced virtual try-on for apparel, a new feature that shows users what clothes look like on real models with different body shapes and sizes, capturing details such as how the clothing drapes, folds, clings, stretches, and wrinkles. To accomplish this, Google’s shopping AI researchers developed a new generative AI model that creates lifelike depictions of clothing on individuals.

The feature addresses a real gap: apparel is one of the most-searched shopping categories, yet many online shoppers struggle to envision how clothes will look on them before buying. A survey found that 42 percent of online shoppers feel that model images do not represent them, and 59 percent are dissatisfied when an item bought online looks different on them than they expected. Google’s virtual try-on tool on Search now lets users judge whether a particular piece of clothing suits them before making a purchase.

The virtual try-on for apparel feature displays how clothes appear on a variety of real models. The process involves Google’s new generative AI model, which takes a single clothing image and accurately portrays how it would drape, fold, cling, stretch, and form wrinkles and shadows on a diverse range of real models in different poses. The selection of models encompasses various sizes, ranging from XXS to 4XL, representing different skin tones (according to the Monk Skin Tone Scale), body shapes, ethnicities, and hair types.

To understand how the model works, it helps to first explain diffusion. Diffusion gradually adds noise to an image until it becomes unrecognizable; a model is then trained to reverse the process, removing the noise step by step until a clean image emerges. Text-to-image models such as Imagen pair diffusion with text from a large language model (LLM) to generate a realistic image based solely on the entered text.
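The forward half of that process can be sketched in a few lines of NumPy. This is a minimal illustration of the standard noising formula used in diffusion models, not Google's actual implementation; the schedule values and image are toy stand-ins.

```python
import numpy as np

def add_noise(image, t, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Forward diffusion: blend the image with Gaussian noise at step t.

    Uses the closed form x_t = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps,
    where a_bar is the cumulative product of (1 - beta) up to step t.
    As t grows, the signal coefficient shrinks toward zero and the
    image becomes pure noise -- "unrecognizable," as described above.
    """
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = np.random.default_rng(0).normal(size=image.shape)
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

image = np.ones((8, 8))                     # toy "image" of constant pixels
slightly_noisy = add_noise(image, t=10)     # still close to the original
very_noisy = add_noise(image, t=999)        # dominated by noise
```

Generation runs this in reverse: a trained network starts from pure noise and removes it step by step, producing a new image rather than reconstructing the original.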

Google Virtual Try-On

Inspired by Imagen, Google tackled virtual try-on (VTO) with a twist on diffusion. Instead of conditioning the diffusion process on text, Google uses a pair of images: a garment image and a person image. Each image is fed into its own neural network, specifically a U-net, and the two networks exchange information through a process called “cross-attention.” This exchange produces the desired output: a photorealistic image of the person wearing the garment. This combination of image-based diffusion and cross-attention is Google’s new AI model for virtual try-on.
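The cross-attention step described above can be sketched with a toy NumPy example. This is an illustrative single-head attention computation, not Google's model: the weight matrices are random stand-ins for learned parameters, and the feature shapes are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(person_feats, garment_feats, d_k=16, seed=0):
    """Toy cross-attention: the person stream queries the garment stream.

    Queries come from one U-net's features (the person) while keys and
    values come from the other (the garment), so each person position
    can "look up" the garment regions relevant to it. Random matrices
    stand in for the learned projections of a trained model.
    """
    rng = np.random.default_rng(seed)
    d = person_feats.shape[-1]
    W_q = rng.normal(size=(d, d_k))
    W_k = rng.normal(size=(d, d_k))
    W_v = rng.normal(size=(d, d_k))
    Q = person_feats @ W_q                       # (n_person, d_k)
    K = garment_feats @ W_k                      # (n_garment, d_k)
    V = garment_feats @ W_v                      # (n_garment, d_k)
    weights = softmax(Q @ K.T / np.sqrt(d_k))    # (n_person, n_garment)
    return weights @ V                           # garment info per person position

person = np.random.default_rng(1).normal(size=(64, 32))   # 64 spatial positions
garment = np.random.default_rng(2).normal(size=(48, 32))  # 48 spatial positions
out = cross_attention(person, garment)                    # shape (64, 16)
```

The key design point is the asymmetry: the person branch asks the questions and the garment branch supplies the answers, which is how garment texture and structure get routed to the right places on the body during denoising.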

As of now, shoppers in the United States have the opportunity to virtually try on women’s tops from various brands available on Google, including Anthropologie, Everlane, H&M, and LOFT. By tapping on products marked with the “Try On” badge on Search, users can select the model that resonates most with them.

This technology works in conjunction with the Shopping Graph, an extensive database encompassing products and sellers, allowing it to scale and accommodate more brands and items in the future. Keep an eye out for additional options in the virtual try-on for apparel feature, including the upcoming launch of men’s tops later this year.

To assist shoppers in finding the perfect piece, Google has introduced new guided refinements. Leveraging machine learning and novel visual matching algorithms, users can fine-tune their product search using inputs such as color, style, and pattern. Unlike shopping in a physical store, this feature grants users access to options from various online retailers across the web. The guided refinements can be found within the product listings and are currently available for tops, with the potential for expansion to other categories in the future.

Julie Nguyen
Julie is the visionary founder of SNAP TASTE and a dynamic force in global storytelling, innovation and creative leadership. She is a respected member of the Harvard Business Review Advisory Council and serves as a judge for the CES Innovation Awards (2024, 2025 and 2026), where she contributes thought leadership on the intersections of business, culture and breakthrough technologies. As Managing Director, she also oversees the Fine Art, Digital Art, Portfolios and Marketing departments, ensuring the brand’s strategic vision and creative direction are realized across disciplines. Her immersive reporting has brought audiences behind the scenes of global milestones such as the FIFA World Cup Qatar 2022, Expo 2020 Dubai, CES, D23 Expo, and the Milano Monza Motor Show, offering exclusive access to moments that define contemporary culture. An accomplished film critic and editorial voice, Julie is also recognized for her compelling reviews of National Geographic documentaries and other cinematic works. Her ability to combine analytical depth with narrative finesse inspires audiences seeking intelligent, meaningful, and globally relevant content. With a multidisciplinary perspective that bridges art, technology, and culture, Julie continues to shape the dialogue on how storytelling and innovation converge to influence the way we experience the world.

