
A Google Patent points to their Soli Motion Technology being used with Future Chromebooks and Wearables

Google introduced their Soli technology in the Pixel 4 smartphone, describing it as a miniature radar that understands human motions at various scales: from the tap of your finger to the movements of your body. Patently Apple covered the technology in a number of reports, including one titled “Google is reportedly set to introduce a new In-Air Gesturing System for the Pixel 4,” along with a few patents (01 and 02). Tom’s Guide reports that the Pixel 5 smartphone has dropped Soli, but Google has said the technology will return.

Of course, Google hasn’t publicly stated where Soli will return, though Tom’s Guide speculates that it could be used in future products from their Nest division.

Then again, Patently Apple has discovered a Google patent published in Q4 which suggests that Soli could be used with future Chromebooks and with Pixel and/or other Wear OS-based Android watches.

Google’s patent begins by noting that computing devices such as desktop and laptop computers have various user interfaces that allow users to interact with them. However, interactions through these interfaces can be inconvenient or unnatural at times, such as when trying to manipulate a three-dimensional object on screen using a keyboard and mouse.

Google’s patent covers technology for detecting user gestures, namely gestures a user makes in order to interact with a computing device.

Computing devices with limited sensors, such as a laptop with a single front-facing camera, may collect and analyze image data to detect a gesture from the user. For example, the gesture may be a hand swipe or rotation corresponding to a user command, such as scrolling down or rotating the display.

However, such cameras may not capture sufficient image data to accurately detect a gesture. For instance, all or portions of the gesture may occur too quickly for a camera with a relatively slow frame rate to keep up. Further, since many cameras provide little, if any, depth information, a typical laptop camera may struggle to detect complex gestures. To address these issues, a system may be configured to use data from sensors external to the system for gesture detection.

In this regard, the system may include one or more visual sensors configured to collect image data, and one or more processors configured to analyze the image data in combination with data from external sensors.
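
To make that arrangement concrete, below is a minimal Python sketch of the kind of pipeline the patent describes: camera frames on one side, wearable motion samples on the other, and a processor that fuses the two. All of the names and types here (Frame, ImuSample, GestureDetector) are illustrative assumptions, not terms from the patent.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float   # seconds, on the laptop's clock
    pixels: list       # image data from the front-facing camera

@dataclass
class ImuSample:
    timestamp: float   # seconds, on the smartwatch's clock
    accel: tuple       # (ax, ay, az) in m/s^2 from the accelerometer
    gyro: tuple        # (gx, gy, gz) in rad/s from the gyroscope

class GestureDetector:
    """Hypothetical fusion of low-rate camera frames with
    high-rate motion data from a wearable."""

    def detect(self, frames: list, imu: list) -> str:
        # 1. Estimate coarse hand motion from the image frames.
        # 2. Align the wearable's IMU samples to the camera timeline.
        # 3. Correlate the two signals and classify the gesture.
        raise NotImplementedError  # sketch only
```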

As a specific example, the system may be a laptop computer, where the one or more visual sensors may be a single front-facing camera provided on the laptop computer. Examples of external sensors may include various sensors provided in one or more wearable devices worn by the user, such as a smartwatch or a head-mountable device.

The processors may receive image data from the one or more visual sensors capturing a motion that the user makes as a gesture. For example, the image data may include a series of frames taken by the laptop’s front-facing camera that capture the motion of the user’s hand.
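
The patent summary doesn’t spell out how those frames are analyzed. As one simple, hypothetical illustration (not Google’s method), a coarse hand trajectory can be recovered by tracking the centroid of pixels that change between consecutive grayscale frames:

```python
import numpy as np

def frame_motion(prev: np.ndarray, curr: np.ndarray, threshold: int = 30):
    """Return the centroid (row, col) of pixels that changed between two
    consecutive grayscale frames, or None if almost nothing moved."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > threshold
    if diff.sum() < 100:   # too little change to be a hand gesture
        return None
    rows, cols = np.nonzero(diff)
    return rows.mean(), cols.mean()
```

Tracking that centroid across a series of frames yields a rough 2-D trajectory of the hand, but only at the camera’s frame rate and with no depth information, which is exactly the limitation the patent goes on to address.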

However, the image data may lack sufficient precision to fully capture all of the relevant information embodied in the motion, whether because of a slow camera frame rate or a lack of depth information.

As such, the processors may also receive motion data from one or more wearable devices worn by the user. For instance, the motion data may include inertial measurements taken by the IMU of a smartwatch from the smartwatch’s perspective, where each measurement may be associated with a timestamp provided by a clock on the smartwatch. For example, the inertial measurements may include acceleration measurements from an accelerometer in the smartwatch.

As another example, the inertial measurements may include rotation or orientation measurements from a gyroscope of the smartwatch.
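
Before those accelerometer and gyroscope readings can be combined with the camera frames, they have to be brought onto a common timeline, since the watch and the laptop each stamp data with their own clock. A minimal sketch, assuming the clock offset between the two devices is already known (estimating that offset is a separate problem):

```python
import numpy as np

def align_imu_to_frames(frame_times, imu_times, imu_values, clock_offset=0.0):
    """Linearly interpolate one IMU channel at each camera frame time.

    frame_times  -- frame timestamps on the laptop clock (seconds)
    imu_times    -- increasing sample timestamps on the watch clock (seconds)
    imu_values   -- one reading per timestamp (e.g. acceleration magnitude)
    clock_offset -- assumed watch-minus-laptop clock offset (seconds)
    """
    # Shift the watch timestamps onto the laptop's timeline, then resample.
    watch_on_laptop_clock = np.asarray(imu_times, dtype=float) - clock_offset
    return np.interp(frame_times, watch_on_laptop_clock, imu_values)
```

Since a typical smartwatch IMU samples at 100 Hz or more while a laptop camera may deliver only around 30 frames per second, the wearable data can fill in the fine-grained motion the camera misses between frames.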

Google’s patent FIG. 5 below illustrates an example of detecting gestures using signal strength measurements; FIG. 9 shows an example flow chart of steps that may be performed by one or more processors. Gestures may be detected based on the recognized portion of the user’s body and on one or more correlations between the image data and the received motion data.
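
One plausible reading of those correlations is a check that the motion the camera sees and the motion the watch feels actually agree, so that a gesture is attributed to the hand wearing the watch. A hedged sketch using a plain Pearson correlation (a stand-in, not necessarily what the patent claims):

```python
import numpy as np

def motion_correlation(image_speed, imu_speed):
    """Pearson correlation between per-frame hand speed derived from the
    camera and IMU-derived speed resampled at the same frame times."""
    a = np.asarray(image_speed, dtype=float)
    b = np.asarray(imu_speed, dtype=float)
    if a.std() == 0 or b.std() == 0:
        return 0.0   # a flat signal carries no correlation information
    return float(np.corrcoef(a, b)[0, 1])
```

If the correlation is high, the hand on camera is very likely the one wearing the watch, and the high-rate IMU data can be trusted to refine the gesture; if it is low, the motion probably came from someone or something else.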

In the next round of patent figures, Google’s patent FIG. 4 below illustrates an example of detecting gestures using inertial measurements; FIG. 6 illustrates an example of detecting gestures using audio data; and FIG. 8 illustrates an example of detecting gestures using sensor data from multiple wearable devices, including smartglasses.

(Google patent FIGS. 4, 6 and 8)

Google’s patent was filed in Q2 2019 and published last month by the U.S. Patent Office.

