Interactions in a Post-COVID World: The Ultimate Guide to Touchless Technology

--

Over the past year, consumers have become more concerned than ever with physical interactions, hygiene, and the availability of touchless technology in public spaces.

According to McKinsey's consumer sentiment research, touchless purchases and physical distancing rank among consumers' top two priorities when deciding to make a purchase in-store.

These concerns are well founded: research shows that the concentration of bacteria found on a public self-serve kiosk can be 2,000 times higher than on a toilet seat.

To address consumer concerns and industry challenges, we have developed this resource to act as a guide to touchless technology as a solution for end-users.

Key Points

In this guide, we will discuss interaction challenges in a post-COVID world, including a definition of touchless technology, and how best to implement it as a solution.

Part A Will Cover the Trend Toward Zero UI Touchless Technology

  1. Touchscreens and Shared Devices: Hygiene Concerns — How can touchless technology solve these challenges?
  2. Trending Towards a Zero UI (User Interface): Voice Recognition, Gesture Recognition, and Biometrics as Touchless Interaction Models.
  3. Limitations to Voice and Biometrics: When Do You Need Gesture-Based Touchless Interactions?

Part B Will Cover Guidelines to Implementing Touchless Interaction Technology

We will look at use cases and user environment considerations for touchless gesture technology and guidelines to choosing the right touchless technology provider for each scenario.

  1. RGB vs. Time-of-Flight Cameras
  2. Hand Tracking vs. Gesture Recognition: Different Approaches to Touchless Technology
  3. Hardware and Software Bundles vs. a Hardware-Agnostic Solution

Part C Will Cover User Experience Considerations

  1. UI feedback, best practices, the inclusion of technology partners, and accessibility.

Part A — Touchless Technology: Moving Towards Zero UI

1. Touchscreens and Shared Devices: Hygiene Concerns

Touchscreens and shared devices such as kiosks, self-service counters and interactive displays are present in our everyday lives. In a post-COVID world where hygiene is of utmost importance, however, each of these interactions is a concern for consumers.

Touchless technology offers seamless integration as a hygiene-conscious solution.

Let’s look at a few examples of touch-based interactions that can be replaced with touch-free interactions.

Examples of Touch-Based Interactions

Touchless controls can be implemented into a wide range of touchscreens, shared devices, and even non-digital interfaces:

Public Spaces

  • Payment terminals
  • Phones
  • ATMs
  • Wayfinding interfaces
  • Interactive displays for public use

Workplace

  • Light switches
  • Elevator signals
  • Door handles

Transportation

  • Ticket and self-service kiosks
  • Parking meters
  • Bus fares
  • Baggage claim

Retail

  • Wayfinding
  • Checkout kiosk
  • In-store virtual try-on experience

Restaurants & Hospitality

  • Check-in
  • Order
  • Special requests

Example of gesture recognition

Touch-Interaction Hygiene Challenges

Many existing measures, such as sanitizing interfaces between uses, applying anti-bacterial sprays, offering hand sanitizer, and using UV radiation, are not easily scalable and require costly supervision.

This poses a challenge for interface and digital display manufacturers and users of those displays such as retail brands, transportation services companies, medical device operators, and many more industries where touch-based controls have traditionally been used.

Touchless Technology as a Solution for Hygienic Interactions

The one-stop solution for hygienic interactions is touchless technology, such as voice control, remote mobile app-based interactions, biometrics, and gesture controls, collectively forming Zero User Interfaces, which we will cover in the next section.

2. Trending Towards Zero UI (User Interface)

With the breakthroughs in image recognition and natural language processing, powered by advanced computer vision and machine learning, we are heading towards what is called “Zero UI” (user interface).

Touchless technology can be implemented in digital interfaces

Zero UI: Touch-Free Control Interfaces

Zero UI is a control interface that enables users to interact with technology through voice, gestures, and biometrics such as facial recognition.

It’s also a part of “invisible technology,” interfaces that are so integrated into our day-to-day lives that users rarely consider the technology behind the interaction or experience.

Smart devices, IoT sensors, smart appliances, smart TVs, smart assistants and consumer robotics are predominant examples of devices in which Zero UI is becoming increasingly integrated. Control interfaces include natural interaction modes such as voice or gestures.

Multimodal Zero UI: Interacting with Smart Devices Through Voice, Gestures, or Biometrics

The end-user solution for universal safe and seamless interactions calls for a multimodal Zero UI. This presents options for interaction such as voice and gesture controls, and biometric interactions, often co-existing with touchscreen interfaces.

Presenting users with the right Zero UI solution in each use case will improve service, brand image and accessibility by giving users options that they are comfortable with based on intuition and ability.

3. Limitations to Voice Control and Biometric Interactions, and Benefits of Touchless Gesture Technology

Voice controls may not function in noisy environments such as public spaces or construction sites.

In the case of biometrics, such as face recognition, these have limited functionalities in terms of user controls and inputs, and raise data privacy concerns.

Why should brands and manufacturers consider implementing touchless gesture controls?

  • They work in multimodal user interfaces in various environmental conditions
  • They are not as intrusive as biometric controls
  • Their universal comprehension as interactions works across languages and cultures for a global user base. In applications such as retail centers and airports, this significantly improves the UX at scale.

When should you consider implementing gesture-based interactions?

Loud and noisy spaces such as construction sites, plants and factories where:

  • Voice controls are hampered by noise
  • Workers need precise interactions without taking off gloves or gear

Quiet environments where voice interaction would disrupt the silence

Where removing gear is not safe: Manufacturing, building sites or medical spaces

  • When employees wear protective glasses or helmets, complicating biometric, audio or touch interactions

Where users need accessibility

  • Voice recognition can be unreliable for some groups (speakers with accents, the elderly, and women tend to experience higher error rates than other users)

Part B — Guidelines to Implementing Touchless Interaction Technology

In this section, we will look at various offerings of hand tracking and gesture recognition technology. This includes guidelines to choosing the right technology.

  • First, we will look at which hardware is most appropriate for your company's business objective, focusing on the two most common sensor types: RGB cameras and Time-of-Flight (ToF) cameras
  • Next, we will look at the differences between gesture recognition and hand tracking technologies
  • Lastly, we will compare hardware and software bundles with hardware-agnostic solutions

1. Hardware Considerations: Comparing RGB and Time-of-Flight Cameras

Choosing the right technology provider involves careful consideration as touchless technology solutions are dependent on a device’s sensors and camera(s), and therefore rely on the OEM’s (original equipment manufacturer’s) specifications.

Do you need a specific camera to implement gesture controls?

Factors such as existing hardware, budget, and user experience will determine integration specifications.

Depending on a device’s onboard camera and sensors, some may require a custom gesture control solution.

Other technology providers will offer hardware agnostic solutions, able to adapt to hardware configurations, including onboard monochrome cameras.

In this section, we take a look at the two most common cameras embedded on smart displays: RGB and Time-of-Flight.

Touchless interface

Leveraging RGB to Create Engaging Interactive Experiences

Where it works best

  • Well suited for self-service kiosks that invite the user into a fun and interactive experience in shopping or entertainment centers.

How does it work

  • RGB cameras capture the same wavelengths of light that the human eye is sensitive to: red, green, and blue.

Pros

  • Common due to their accessible price; found on most smartphones
  • Allow for consumer-friendly features such as autofocus and blur
  • High-resolution images

Cons

  • Color sensors are intrinsically slower, as they need to process three times the amount of data that a monochrome sensor processes, resulting in a lower frame rate.
  • More sensitive to light and shade, which makes segmentation more challenging than when using IR sensors
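The data-volume difference behind this lower frame rate can be sketched with quick arithmetic (the resolution and byte sizes here are illustrative, not any specific camera's specification):

```python
# Sketch: why an RGB sensor moves ~3x the data of a monochrome sensor
# at the same resolution. Numbers are illustrative assumptions.

def frame_bytes(width: int, height: int, channels: int, bytes_per_channel: int = 1) -> int:
    """Raw bytes needed for one uncompressed frame."""
    return width * height * channels * bytes_per_channel

rgb = frame_bytes(1920, 1080, channels=3)   # red, green, blue planes
mono = frame_bytes(1920, 1080, channels=1)  # single luminance channel

print(rgb // mono)  # -> 3: triple the data to read out and process per frame
```

All else being equal, tripling the per-frame data budget means fewer frames can be processed per second, which is the frame-rate penalty noted above.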

RGB cameras can be complemented by a depth sensor, augmenting the traditional image information with depth data.

RGB vs ToF image — Source

Time of Flight: High-End Camera for Precision Use-Cases

Where it works best

  • Environments where high precision interactions are used

How does it work

  • ToF cameras calculate distances between the camera and the object by projecting an artificial light (laser or LED) onto the object and measuring the ‘time’ it takes for the light to travel out and back.
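The distance calculation itself is simple arithmetic on the measured round-trip time; a minimal sketch (the example timing value is illustrative):

```python
# Sketch: how a ToF camera converts round-trip light travel time to distance.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance = (speed of light x round-trip time) / 2,
    halved because the light travels to the object and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# Light returning after ~6.67 nanoseconds => object roughly 1 metre away.
print(round(tof_distance(6.67e-9), 3))
```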

Pros

  • More compact than other setups like stereo vision
  • Captures more details than other cameras at a faster pace

Cons

  • Multiple ToF cameras operating in the same space can interfere with one another, disturbing another user's experience
  • Background light can mislead the depth measurement
  • More expensive than RGB or monochrome cameras at scale

2. Hand Tracking Versus Gesture Recognition: Different Approaches to Touchless Technology

Above we discussed implementation considerations of the various types of sensors used to capture gestures.

Regardless of the hardware used to capture images, touchless gesture technologies can vary: from simple gesture recognition to advanced hand-tracking technology.

While it is commonly thought that hand tracking technology is the same as gesture recognition technology, they are substantially different approaches.

Gesture-Based Recognition Technology: A Low Battery-Consumption Option for Basic Touchless Interactions

Gesture-based recognition technologies outline a hand pose and match it to a predefined form.

Most gesture recognition technologies train machine learning models to recognize the form of gesture created by the hand.

Simply, algorithms are trained to identify a hand pose or gesture.

Hand segmentation with contour extraction. Some machine learning algorithms use a shape (contour extraction) to identify hands. — Source
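As a rough illustration of the matching step (not any vendor's actual algorithm; the gesture names and the five-element finger-extension descriptors are invented for this example):

```python
# Illustrative sketch: matching an extracted hand-pose descriptor against
# predefined gesture templates with a nearest-neighbour rule.
import math

TEMPLATES = {
    "open_palm": [1.0, 1.0, 1.0, 1.0, 1.0],  # one value per extended finger
    "fist":      [0.0, 0.0, 0.0, 0.0, 0.0],
    "thumbs_up": [1.0, 0.0, 0.0, 0.0, 0.0],
}

def match_gesture(descriptor, threshold=0.5):
    """Return the closest template, or None if nothing is close enough
    (guarding against the false positives mentioned above)."""
    best_name, best_dist = None, float("inf")
    for name, template in TEMPLATES.items():
        dist = math.dist(descriptor, template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(match_gesture([0.9, 0.1, 0.0, 0.1, 0.0]))  # -> thumbs_up
print(match_gesture([0.5, 0.5, 0.5, 0.5, 0.5]))  # ambiguous pose -> None
```

The rejection threshold is what separates "this pose is close enough to a known gesture" from "no match": too loose and false positives rise, too strict and users with slightly varying poses experience friction, the two cons listed below.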

Applications

  • Practical for situations where a limited number of gestures are needed for simple interactions.
  • Example: a binary request such as: validate or cancel, and previous or next page

Pros

  • Low battery consumption

Cons

  • Not practical for advanced or dynamic user interactions
  • The software may have difficulty recognizing gestures that vary slightly from the predetermined gesture pose.
  • Higher risk of false positives
  • Higher risk of user friction

Devices

  • Recommended for devices that require low battery consumption

Tracking-First Gesture Recognition Technology: Universal and Highly Accurate Gesture Recognition for Seamless Interactions

Tracking-first gesture recognition technology is significantly different from gesture-based recognition technology.

Tracking-first gesture recognition technology detects the hand, and then places it in a bounding box. At this point, the recognition technology identifies key points of interest, usually located on the knuckles and palm of a hand.

Depending on the software provider, three to twenty-one points of interest are tracked on a hand continuously over the duration of the interaction.

Gestures are recognized when the points tracked in space create a certain combination, making this approach more precise than gesture-based recognition technology.
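As a toy illustration of how a tracked-point combination becomes a recognized gesture (the 21-landmark indexing follows a common hand-tracking convention, but the coordinate values and threshold here are invented):

```python
# Minimal sketch of a tracking-first pipeline step: given per-frame hand
# landmarks (21 (x, y) points), recognise a "pinch" when the thumb tip and
# index fingertip come close together.
import math

THUMB_TIP, INDEX_TIP = 4, 8  # indices in the widely used 21-landmark layout

def is_pinch(landmarks, threshold=0.05):
    """Landmarks are normalised (x, y) coordinates in [0, 1]."""
    return math.dist(landmarks[THUMB_TIP], landmarks[INDEX_TIP]) < threshold

# 21 dummy landmarks; only the two fingertips matter for this check.
frame = [(0.0, 0.0)] * 21
frame[THUMB_TIP] = (0.50, 0.50)
frame[INDEX_TIP] = (0.51, 0.50)  # fingertips 0.01 apart -> pinch
print(is_pinch(frame))  # -> True
```

Because the decision is made on continuously tracked point positions rather than a single matched silhouette, small per-user variations in hand shape do not break recognition.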

Clay AIR tracking-first approach

Applications

  • High performance and safety critical use cases.
  • Automotive, training, and simulations where highly accurate interactions are integral.

Pros

  • Advanced gesture recognition for complex interactions.
  • Highly accurate, even when gestures differ slightly between users.
  • Self-learning, adapts to users.

Cons

  • In some cases, higher battery consumption (requires more computational power).

Devices

  • Self-service kiosks, check-in booths, touchless check-in, wayfinding kiosks, in-car gesture controls, retail, hospitality and service interaction displays and interfaces.

3. Hardware and Software Bundles vs. a Hardware-Agnostic Solution

A hardware and software bundle may offer a compelling package upfront in terms of initial deployment and pricing.

In the long term, co-dependencies between the hardware and software and other providers may result in a provider lock-in that is harmful to long-term flexibility and growth.

On the other hand, a hardware agnostic solution means that gesture control models can be implemented with a variety of cameras and multimodal interfaces.

It is a more reliable solution to deploy as a cohesive experience across multiple devices. This is far more cost-effective in the long run and highly adaptable.
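In code, hardware agnosticism typically means the gesture engine depends on a minimal camera abstraction rather than on one vendor's sensor API; a sketch with invented class names:

```python
# Sketch of what "hardware agnostic" can mean in practice: the gesture engine
# consumes any sensor that satisfies a small Camera interface.
from abc import ABC, abstractmethod

class Camera(ABC):
    """Any sensor (RGB, monochrome, ToF) that can yield a raw frame."""
    @abstractmethod
    def read_frame(self) -> bytes: ...

class RGBCamera(Camera):
    def read_frame(self) -> bytes:
        return b"\x00" * (640 * 480 * 3)   # stub: raw RGB frame

class ToFCamera(Camera):
    def read_frame(self) -> bytes:
        return b"\x00" * (320 * 240 * 2)   # stub: 16-bit depth frame

class GestureEngine:
    def __init__(self, camera: Camera):
        self.camera = camera  # works with any Camera implementation

    def process(self) -> int:
        return len(self.camera.read_frame())

# The same engine runs unchanged on either sensor:
print(GestureEngine(RGBCamera()).process())  # -> 921600
print(GestureEngine(ToFCamera()).process())  # -> 153600
```

Swapping sensors then means adding one adapter class, not rewriting the gesture models, which is the flexibility argument made above.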

Part C — User Experience Considerations: From Provider Selection to UI design

In this section, we will consider the user experience in relation to the technology used for touchless gesture controls, inclusivity and accessibility design considerations, and the importance of choice for multimodal interactions.

Minimize the User’s Effort

Technology partners play a key role in designing the user experience. A track record in human-machine interaction design, user experience design guideline standards and best practices for usability are important to evaluate ahead of time to optimize the impact of a solution’s implementation.

This will reduce points of friction during user interactions, and boost the overall rating of the experience.

UX capabilities to look for when choosing your technology partner include:

  • Guidelines on visual feedback, cues and signals to guide a user through a new interface
  • Awareness and guidelines on the cultural meanings of hand poses and gestures
  • Inclusive technology and UX

Inclusive and Accessible by Design

Hand tracking and gesture recognition technology is advancing to accommodate individuals who may be uncomfortable with touch or voice interactions.

All hand colors and sizes, hands with or without accessories, and hands belonging to elderly or young individuals should be equally recognized and understood.

This ability comes down to the practice and approach of a technology provider. Inclusivity and accessibility are deeply linked with machine learning and the data used. As such, a technology provider must work with a diverse range of data to create a solution that works for a diverse body of users.

Multimodal Interactions: Catering to Consumer Preferences

As discussed above, in multimodal touchless control solutions, voice and gesture controls can complement one another. In specific environments, a certain combination of multimodal touchless solutions will provide the best result.

Choice also provides the consumer with their preferred means of interactivity, creating a meaningful experience to consumers who feel their needs have been addressed with a personalized solution.
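The environment-based guidance from Part A can be sketched as a simple default-mode selection rule (the thresholds and mode names are assumptions for illustration):

```python
# Illustrative sketch of multimodal selection: picking the touchless
# interaction mode best suited to the environment, following the guidance
# above (gestures for noisy spaces, gloved users, or quiet zones).

def pick_mode(noise_db: float, user_wears_gloves: bool, quiet_zone: bool) -> str:
    if user_wears_gloves:
        return "gesture"  # precise input without removing gear
    if noise_db > 70:
        return "gesture"  # voice recognition degrades in loud spaces
    if quiet_zone:
        return "gesture"  # voice would disrupt the silence
    return "voice"        # otherwise, voice is a natural default

print(pick_mode(noise_db=85, user_wears_gloves=False, quiet_zone=False))  # -> gesture
print(pick_mode(noise_db=45, user_wears_gloves=False, quiet_zone=False))  # -> voice
```

In a real deployment this would be a default rather than a lock-in: the user keeps the choice of modality, as the paragraph above argues.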

Summary

In summary, interactions in a post-COVID world are leaning towards Zero UI, accelerated by hygiene considerations.

Touchless gesture control technology is optimal within a multimodal control interface, and addresses consumer concerns about data protection, privacy, and ease of use.

When it comes to choosing and implementing gesture-based interactions, many factors are to be taken into consideration:

  • What is your existing technology stack? Do you need retrofitting?
  • Is the technology partner compatible with your hardware?
  • What technology is your partner offering? Is the solution power-efficient, accurate and versatile?
  • Is the technology provider offering advice and support on the user experience design?
  • What about post-implementation improvement?

With any additional questions on these guidelines, don’t hesitate to contact our team here.

--

Thomas has a background in Science, Arts & Leadership. His passion for immersive experiences led him to co-found Clay AIR in 2015, acquired by Qualcomm in 2021.