The future of Augmented Reality for selling complex products

In a previous article, I outlined five great examples of how Augmented Reality helps consumers make purchase decisions. For many types of complex products, consumers don’t have the knowledge or experience to make an important decision they won’t regret.

Augmented Reality works well for shopping today because it gives consumers a greater ability to connect to things they are considering for their home, clothing or accessories to wear, or even makeup. Manufacturers and retailers can deliver more meaningful experiences with AR by helping consumers see products in scale, evaluate aesthetic choices compared to what they have now, get up close details from any angle, and more.

In this article, I cover some specific features that will take Augmented Reality even further in the future—extending reality to make dreaming, planning, and selecting experiences for marketing/shopping even better.

What Augmented Reality is capable of today

Before explaining what’s coming next, here’s a simple explanation for how Augmented Reality works. (For more details on how Augmented Reality does its thing, read this article with examples by Josh Jacob.)

AR allows you to look at the real world through your smartphone camera, while the software augments it with digital information.

You’re seeing a live feed of your world with 3D objects on top. In simplest terms, the software keeps track of where your floors and walls are, and so as you move your phone, the virtual objects appear to stay in the same physical locations.
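The idea that virtual objects “stay put” as you move comes down to tracking the camera’s pose each frame and re-projecting every anchored point through it. Here’s a minimal conceptual sketch of that re-projection using a simple pinhole camera model (this is illustrative math only, not the ARKit or ARCore API; the function name and numbers are made up for the example):

```python
import math

def project(point, cam_pos, cam_yaw, focal=800, cx=320, cy=240):
    """Project a fixed world-space point into a camera whose pose changes.
    Simple pinhole model: move into the camera's frame, rotate by its
    heading, then perspective-divide onto the screen."""
    # Translate the point into the camera's coordinate frame
    x, y, z = (point[0] - cam_pos[0], point[1] - cam_pos[1], point[2] - cam_pos[2])
    # Rotate by the camera's yaw (heading) around the vertical axis
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    xr, zr = c * x + s * z, -s * x + c * z
    if zr <= 0:
        return None  # the point is behind the camera
    return (cx + focal * xr / zr, cy - focal * y / zr)

# A virtual chair anchored 3 m in front of the starting camera position
anchor = (0.0, 0.0, 3.0)

# Frame 1: camera at the origin, looking straight ahead
print(project(anchor, (0.0, 0.0, 0.0), 0.0))  # screen center: (320.0, 240.0)
# Frame 2: camera has stepped 0.5 m to the right. The anchor's world
# position never changes, so its on-screen position shifts left,
# which is exactly what preserves the illusion.
print(project(anchor, (0.5, 0.0, 0.0), 0.0))
```

Real AR frameworks estimate the camera pose by fusing camera imagery with motion sensors (visual-inertial odometry), but the re-projection step each frame is conceptually this.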

Today, developers have standard frameworks to build on in the form of Apple’s ARKit and Google’s ARCore, so device owners can experience AR with no special effort. And it’s estimated that there are 1 billion AR-compatible phones in use today.

So AR today comes with some “baked-in” capabilities:

  • Managing a same-scale “virtual world,” built by understanding the world around you, and continuously calculated as you move.
  • Understanding flat surfaces (like tables, floors, and walls), so you can set a chair on the floor, put a mixer on the kitchen counter, or hang a TV on the wall.
  • Displaying 3D objects in scale, so you shouldn’t have to do anything to make the illusion work.
  • Letting people move objects around as they see fit, and keep them where they were placed (as long as you don’t move around too much).
  • Applying a realistic brightness and simulating shadows — with ambient light estimation, as the lights brighten, object surfaces can brighten too.
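That last capability, ambient light estimation, is easy to picture as a shading step: the framework reports how bright (and warm or cool) the room looks, and the renderer scales the virtual object’s base color accordingly. A toy sketch, with made-up values rather than any real framework API:

```python
def shade(albedo_rgb, ambient_intensity, color_cast=(1.0, 1.0, 1.0)):
    """Ambient light estimation, sketched: scale a virtual object's base
    colour by the intensity (and colour cast) estimated from the live
    camera feed, so the object brightens and dims with the real room."""
    return tuple(min(255, round(c * ambient_intensity * t))
                 for c, t in zip(albedo_rgb, color_cast))

red_chair = (200, 40, 40)
print(shade(red_chair, 1.0))  # normally lit room -> (200, 40, 40)
print(shade(red_chair, 0.4))  # lights dimmed    -> (80, 16, 16)
```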

What’s next: new capabilities will further extend reality and make AR even better

For all you can do right now, AR’s future is really exciting. There’s major innovation being worked on by Apple, Google, and many other developers to extend the capabilities. Reality-enhancing experiences will be more immersive and relevant over time.

Enter the new terminology of XR (Extended Reality or Cross Reality), and MR (Mixed Reality). These are the acronyms you’ll hear more about, and for the purposes of delivering dreaming, planning, and selecting experiences for marketing/shopping, it really doesn’t matter which term is used.

You can use the term XR today to mean anything in between AR and VR.

So, what should you expect next? Below I’ve introduced five new functionalities that will make using AR even better for all types of consumer uses, whether for gaming or shopping for complex products.

✅ Diminished Reality — Hiding real things that are in the way

Where AR giveth, “diminished reality” taketh away. For many products, the item you’re replacing is often still in place: furniture, rugs, window treatments, and faucets, for example. Seeing a virtual product layered on top of the old one breaks the illusion.

Virtual faucet over real faucet.

Enter Diminished Reality: where the system can remove undesired objects from the scene in real time.

This has been done pretty successfully with still images that don’t have too much going on; you might know it as Photoshop’s “Content-Aware Fill” feature. The general technique is called “inpainting”: repairing or reconstructing missing content from its surroundings.
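To make the idea concrete, here’s the simplest possible form of inpainting: repeatedly averaging each masked pixel with its neighbors until the hole smooths over. Real tools like Content-Aware Fill use far more sophisticated patch matching; this naive diffusion sketch (all values invented for illustration) just shows why a plain background is easy and a busy scene is hard:

```python
def inpaint(img, mask, iters=200):
    """Naive diffusion inpainting: repeatedly replace each masked pixel
    with the average of its in-bounds 4-neighbours until the hole fills
    smoothly from its surroundings."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]
    # Seed the unknown pixels with a neutral value
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                img[y][x] = 0.5
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    nbrs = [img[ny][nx]
                            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                            if 0 <= ny < h and 0 <= nx < w]
                    img[y][x] = sum(nbrs) / len(nbrs)
    return img

# A flat grey "wall" (value 0.4) with an unwanted object (value 1.0) in the middle
img  = [[1.0 if 1 <= y <= 2 and 1 <= x <= 2 else 0.4 for x in range(5)] for y in range(5)]
mask = [[img[y][x] == 1.0 for x in range(5)] for y in range(5)]
out = inpaint(img, mask)
print(round(out[1][1], 2))  # the hole converges toward the surrounding 0.4
```

Diminished reality has to do this on live video, from a moving camera, in real time, which is why it remains so much harder than the still-image case.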

Here’s an example from Jan Herling and Wolfgang Broll, from work done in 2010 at the Ilmenau University of Technology, Department of Virtual Worlds / Digital Games. They started a company called Fayteq that was acquired by Facebook in 2017. It’s impressive, but remains difficult to achieve.

✅ Occlusion — Keeping real things in front (hiding 3D things behind)

Imagine considering new furniture, and placing a 3D chair into your real room, which has an existing couch and some tables. Depending on where you stand, the 3D chair should be partially hidden by other objects in the room.

Today, 3D objects (and their shadows) are always drawn in front of everything else in the camera feed.

The bottom image is simulated (the real chair should hide the virtual one).

This is actually a difficult problem to solve in real time (the software needs a per-pixel sense of how far away real surfaces are, and not all hardware works the same), but a lot of people are working on it.
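At its core, occlusion is a per-pixel depth comparison: if the real surface behind a pixel is closer to the camera than the virtual object, the camera pixel wins. A minimal sketch of that decision, assuming the hard part (estimating real-world depth per pixel) has already been done:

```python
def composite_pixel(camera_rgb, real_depth, virtual_rgb, virtual_depth):
    """Per-pixel occlusion test: draw the virtual object only where it is
    closer to the camera than the real surface at that pixel."""
    if virtual_rgb is not None and virtual_depth < real_depth:
        return virtual_rgb   # virtual object is in front: show it
    return camera_rgb        # real object is in front: it occludes the virtual one

# A virtual chair at 2.0 m, standing partly behind a real couch at 1.5 m
print(composite_pixel("camera", 1.5, "chair", 2.0))  # couch pixel wins
print(composite_pixel("camera", 3.0, "chair", 2.0))  # open floor behind: chair wins
```

The comparison itself is trivial; the difficulty is producing a reliable `real_depth` for every pixel from a phone camera in real time.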

A related capability: collision with real objects.
Solving occlusion is related to “3D scanning” the objects in a room and building a simple model of them. With greater knowledge of physical objects, you’ll also see features like bouncing a virtual ball off a real chair. For product selection experiences, collision detection could let a virtual chair automatically drop in next to (not on top of) a real couch, helping the user position objects more logically and realistically.
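The “drop in next to the couch” behavior can be sketched with nothing more than bounding-box overlap tests on the floor plane. This is a toy version with invented names and dimensions, assuming the room’s real objects have already been scanned into simple boxes:

```python
def overlaps(a, b):
    """Axis-aligned bounding-box overlap test on the floor plane.
    Each box is (x, z, width, depth) in metres."""
    ax, az, aw, ad = a
    bx, bz, bw, bd = b
    return ax < bx + bw and bx < ax + aw and az < bz + bd and bz < az + ad

def drop_next_to(new_box, placed):
    """Slide a newly placed object along x until it no longer intersects
    any scanned real object: a toy version of "snap beside the couch"."""
    x, z, w, d = new_box
    while any(overlaps((x, z, w, d), p) for p in placed):
        x += 0.05  # nudge 5 cm at a time
    return (x, z, w, d)

couch = (0.0, 0.0, 2.0, 0.9)              # a scanned real couch
chair = drop_next_to((1.0, 0.2, 0.6, 0.6), [couch])
print(chair)  # the chair ends up beside, not inside, the couch
```

A real system would resolve collisions in full 3D with physics, but the principle is the same: once real objects have geometry, virtual ones can respect it.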

✅ Co-creation and shared experiences

Think about playing a multi-person game—it’s easy to imagine having several people gathered together with their own devices while seeing the same virtual world appear in relation to where they are standing.

But beyond games, there are practical applications: accurately positioning objects for everyone, letting multiple people interact or work together on a project, picking up where you left off, having others peer into your world, and even taking your 3D objects to some other physical place.

Apple’s ARKit 2 introduced some features for multi-user AR experiences (in this example, two people are playing a game together, with graphics positioned relative to each player).

This also has huge potential for product repair and support, where someone might be on location while another person “looks on” through a remote device, seeing live video from the first person’s camera.

So with shopping, imagine a homeowner laying out a room full of furniture, then later visiting a showroom to see the products in real life. There, a consultant might help review their selections and make adjustments, perhaps on a high-resolution display, in a “holodeck-type” room, or through other ways to interact. Back at home, the customer could see the results. The homeowner might even request remote help from an expert: a designer could make changes while seeing the customer’s room live.

✅ Working on big products and large-scale surfaces, especially outside

One of the issues with AR software is the range at which the software and hardware can examine the world and build the 3D world map. AR is incredible at room scale, or where the user takes extra steps to move about and map a larger area.

So, imagine wanting to change a large part of your home’s exterior. You can stand just off the porch and position a new door or window, but if you step back to the curb, today’s AR software and hardware can’t “see” the whole house.

But it’s possible to visualize large-scale objects, or even a whole 3D home or addition; it just takes more work. The example video below takes this idea to the extreme: the Realar platform is aimed at creating full 3D homes that users can experience (presumably on an empty lot). Keep in mind that users are handed the experience by a builder rather than creating the model themselves. You’ll have to imagine a future experience in which a user who knows their house’s measurements creates a large model to try products on.

You might be wondering how this relates to Google’s new AR feature for Maps, which can provide directional arrows and other position details. The difference is one of scale and accuracy: Google relies on looser, GPS-level knowledge of the world, not the fraction-of-an-inch positioning that AR provides when done by the individual user.

✅ Intelligent identification of real objects/spaces and areas — and putting this information to use

Some features to come in the near future will provide people with a more immersive or simpler experience by automatically preparing a scene, or making some decisions for them. These will involve machine learning / artificial intelligence to better understand what’s in front of the camera.

Smarter object placement: First, most experiences today have the user place an object roughly into their room, then move it around or rotate it into place. If the system could better understand what is in the room, items could be placed more intelligently. Imagine a person shopping for a table setting: instead of placing items onto the table one by one, the system could virtually set the entire table, because it recognizes the table, its shape, and where the chairs are. Or a homeowner shopping for window treatments wouldn’t have to move the virtual curtains over the windows, because the system would know where the windows are.
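Once a scene-understanding step has labeled a surface as “a table” with a known size, the table-setting example becomes simple geometry. A hypothetical sketch (the function, dimensions, and seat layout are all invented for illustration):

```python
def auto_set_table(table_center, table_width, table_depth, seats=4):
    """Given a recognised table (centre + size on the floor plane), return
    world-space positions for one place setting per seat, each a little
    way in from the table's edge. Assumes an earlier scene-understanding
    step has already labelled the plane as a table."""
    cx, cz = table_center
    inset = 0.15  # plates sit 15 cm in from the table edge
    positions = [
        ("plate", (cx, cz - table_depth / 2 + inset)),  # near side
        ("plate", (cx, cz + table_depth / 2 - inset)),  # far side
    ]
    if seats >= 4:
        positions += [
            ("plate", (cx - table_width / 2 + inset, cz)),  # left end
            ("plate", (cx + table_width / 2 - inset, cz)),  # right end
        ]
    return positions

# A recognised 1.8 m x 0.9 m dining table centred at the room origin
for item, pos in auto_set_table((0.0, 0.0), 1.8, 0.9):
    print(item, pos)
```

The hard part, again, is the recognition, not the placement: everything above is trivial once the system knows “that plane is a table this big.”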

Smarter, more automated preparation for covered surfaces: Today’s AR applications that help people shop for tile, wood floors, or wall coverings begin by having the user designate where they want the product to go. For example, the Graham and Brown wallpaper app has the user define walls and then cut out any openings before trying different wallpaper samples. As the shopper tries out wallpaper, these holes are maintained and the illusion holds.

Example user interface having the user define a flat surface, and then “erase” openings.

What will make these types of shopping experiences more engaging is automating this “masking” step: an application that can examine the real world, make a good guess at the full surface of a wall, and automatically cut holes for doorways and windows.

Cambrian is a startup developing intelligent AR features to automatically identify areas of interest, saving the user the time and effort of defining these areas themselves.

Related to the occlusion feature above, imagine an application where the floor surface can be automatically located and scaled, and even work around furniture that is sitting on the existing floor in real time.

AR will continue to enhance customer purchase journeys

AR today shouldn’t be taken for granted — it’s capable of delivering amazing experiences we only dreamed of just a few years ago, and can really assist and inform shoppers who need more information and confidence to make high-ticket purchases.

And the capabilities identified here will certainly further extend reality and deliver experiences that are more immersive and meaningful. Manufacturers should be seriously considering the ways they can incorporate 3D technology into their marketing and sales experiences. Do you want to know how to proceed? At the conclusion of my previous article, I laid out some recommendations for “what you need to do to get started with an Augmented Reality strategy.”
