G06V20/20

Hands-Free Crowd Sourced Indoor Navigation System and Method for Guiding Blind and Visually Impaired Persons

The present invention discloses an indoor Electronic Traveling Aid (ETA) system for blind and visually impaired (BVI) people. The system comprises a headband, an intuitive tactile display with electromyographic (EMG) feedback, a controller, and server-based methods corresponding to three operation modalities. In the 1st modality, sighted users mark routes, map navigational directions, and create semantic comments for BVIs. This route information is continuously collected and estimated on ETA servers. In the 2nd modality, BVIs choose routes from the servers and are thereby supplied with real-time navigational guidance. An EMG interface is also used, in which the user's facial muscles are enabled to send commands to the ETA system. In the 3rd modality, BVIs receive real-time audio guidance in complex or unforeseen situations: the ETA provides a crowd-assisted interface and real-time sensory (e.g., video) data, and crowd-assistants analyze the situation and help the BVI navigate.

APPARATUS OF SELECTING VIDEO CONTENT FOR AUGMENTED REALITY, USER TERMINAL AND METHOD OF PROVIDING VIDEO CONTENT FOR AUGMENTED REALITY
20230051112 · 2023-02-16 ·

A video content selecting apparatus for augmented reality is provided. The apparatus includes a communication interface; and an operation processor configured to perform: (a) collect a plurality of video contents through the Internet; (b) extract feature information and metadata for each of the plurality of video contents, and generate a hash value corresponding to the feature information by using a predetermined hashing function; (c) manage a database to include at least the hash value and the metadata of each of the plurality of video contents; (d) receive object information corresponding to an object in a real-world environment from a user terminal through the communication interface; (e) search the database based on the object information and select a video content corresponding to the object information from among the plurality of video contents; and (f) transmit the metadata of the selected video content to the user terminal through the communication interface.
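Steps (b) through (e) of the abstract amount to a hash-indexed lookup: features are hashed with a predetermined function at ingest time, and a query is answered by searching the database for a matching record. The sketch below illustrates that flow in Python under simplifying assumptions not in the abstract: features and object information are plain strings, SHA-256 stands in for the predetermined hashing function, and object information is assumed to map to the same feature representation so the search reduces to recomputing the hash.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class VideoRecord:
    hash_value: str
    metadata: dict

class VideoContentSelector:
    """Minimal sketch of steps (b)-(e): hash extracted feature
    information and look records up by object information."""

    def __init__(self):
        self.database = {}  # hash value -> VideoRecord

    def _hash(self, feature_info: str) -> str:
        # Stand-in for the abstract's "predetermined hashing function".
        return hashlib.sha256(feature_info.encode()).hexdigest()

    def ingest(self, feature_info: str, metadata: dict) -> str:
        # Steps (b)-(c): generate the hash value and store it with metadata.
        h = self._hash(feature_info)
        self.database[h] = VideoRecord(h, metadata)
        return h

    def select(self, object_info: str):
        # Steps (d)-(f): search by object information; return the metadata
        # of the selected video content, or None if nothing matches.
        record = self.database.get(self._hash(object_info))
        return record.metadata if record else None
```

In a real system the search would compare perceptual feature hashes approximately rather than by exact equality; exact lookup keeps the sketch short.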

ONE-TOUCH SPATIAL EXPERIENCE WITH FILTERS FOR AR/VR APPLICATIONS
20230049175 · 2023-02-16 ·

A method to assess user condition for wearable devices using electromagnetic sensors is provided. The method includes receiving a signal from an electromagnetic sensor, the signal being indicative of a health condition of a user of a wearable device, selecting a salient attribute from the signal, and determining, based on the salient attribute, the health condition of the user of the wearable device. A non-transitory, computer-readable medium storing instructions which, when executed by a processor, cause a system to perform the above method, and the system, are also provided.
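The claimed method is: receive a signal, select a salient attribute, and determine the condition from that attribute. A minimal sketch, where both the choice of attribute (peak magnitude) and the threshold are illustrative assumptions rather than anything specified by the abstract:

```python
def assess_condition(signal, threshold=0.8):
    """Sketch: select a salient attribute from an electromagnetic-sensor
    signal and map it to a coarse condition label. The attribute (peak
    magnitude) and the threshold are illustrative assumptions."""
    salient = max(abs(s) for s in signal)  # salient attribute: peak magnitude
    return "alert" if salient > threshold else "normal"
```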

INFORMATION PROCESSING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND INFORMATION PROCESSING METHOD

An information processing apparatus includes a processor configured to: acquire a captured image of an object; specify a first area of the object in the captured image, the first area being an area occupied by a work target that is a target to be worked on; process the captured image to make a second area other than the first area invisible to generate a processed image; in response to a change in the first area with a deformation of the work target, apply a deformation area instead of the first area to make a second area obtained by the application invisible to generate a processed image, the deformation area being an area defined by a pre-registered shape of the work target after deformation; and transmit the processed image.
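The core image-processing step, making every pixel outside the first area invisible, is a straightforward masking operation. A sketch with NumPy, assuming the first area is given as a boolean mask (the abstract does not specify the area representation):

```python
import numpy as np

def mask_outside_work_area(image: np.ndarray, first_area: np.ndarray) -> np.ndarray:
    """Sketch of the masking step: keep the first area (the work target)
    visible and blank the second area (everything else). `first_area` is a
    boolean mask with the same height and width as the image."""
    processed = image.copy()
    processed[~first_area] = 0  # make the second area invisible
    return processed
```

Handling the deformation case would simply swap in a pre-registered mask for the deformed shape before calling the same function.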

Messaging system with augmented reality makeup
11580682 · 2023-02-14 ·

Systems, methods, and computer readable media for messaging system with augmented reality (AR) makeup are presented. Methods include processing a first image to extract a makeup portion of the first image, the makeup portion representing the makeup from the first image and training a neural network to process images of people to add AR makeup representing the makeup from the first image. The methods may further include receiving, via a messaging application implemented by one or more processors of a user device, input that indicates a selection to add the AR makeup to a second image of a second person. The methods may further include processing the second image with the neural network to add the AR makeup to the second image and causing the second image with the AR makeup to be displayed on a display device of the user device.
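The patent trains a neural network to transfer the extracted makeup onto a second person's image. As a much simpler stand-in for that network, the compositing idea can be illustrated with a per-pixel alpha blend of an extracted makeup layer onto the second image; the function below is only that naive substitute, not the claimed method:

```python
import numpy as np

def composite_makeup(face: np.ndarray, makeup: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Naive stand-in for the trained network: alpha-blend an extracted
    makeup layer onto a second face image. `alpha` holds per-pixel opacity
    of the makeup portion (0 = no makeup, 1 = fully opaque)."""
    a = alpha[..., None].astype(float)  # broadcast opacity over color channels
    return (a * makeup + (1.0 - a) * face).astype(face.dtype)
```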

Virtual and augmented reality signatures

A method implemented on a visual computing device to authenticate one or more users includes receiving a first three-dimensional pattern from a user. The first three-dimensional pattern is sent to a server computer. At a time of user authentication, a second three-dimensional pattern is received from the user. The second three-dimensional pattern is sent to the server computer. An indication is received from the server computer as to whether the first three-dimensional pattern matches the second three-dimensional pattern within a margin of error. When the first three-dimensional pattern matches the second three-dimensional pattern within the margin of error, the user is authenticated at the server computer. When the first three-dimensional pattern does not match the second three-dimensional pattern within the margin of error, the user is prevented from being authenticated at the server computer.
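The server-side comparison reduces to a distance test between two recorded patterns with a tolerance. A sketch under assumptions the abstract leaves open: each three-dimensional pattern is an ordered list of (x, y, z) points, and the patterns match when the mean point-to-point distance falls within the margin of error. Both the metric and the margin value are illustrative.

```python
import math

def patterns_match(first, second, margin=0.1):
    """Sketch of the server-side check: accept when the mean Euclidean
    distance between corresponding points of the two three-dimensional
    patterns is within the margin of error."""
    if len(first) != len(second):
        return False
    total = sum(math.dist(p, q) for p, q in zip(first, second))
    return total / len(first) <= margin
```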

System and method for an augmented reality goal assistant

A method for an augmented reality goal assistant is described. The method includes detecting an object associated with a behavioral goal of a user. The method also includes altering an appearance of the object based on the behavioral goal of the user. The method further includes displaying the altered appearance of the detected object on an augmented reality headset, such that the altered appearance of the detected object is modified based on the behavioral goal of the user.

Adaptive model updates for dynamic and static scenes

In one embodiment, a computing system may update a first 3D model of a region of an environment based on comparisons between the first 3D model and first depth measurements of the region generated during a first time period. The computing system may determine that the region is static by comparing the first 3D model to second depth measurements of the region generated during a second time period. The computing system may, in response to determining that the region is static, detect whether the region changed after the second time period based on comparisons between a second 3D model of the region and third depth measurements of the region generated after the second time period, the second 3D model having a lower resolution than the first 3D model. The computing system may, in response to detecting a change in the region, update the first 3D model of the region.
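The key decision in this scheme is whether a region counts as static, which then determines whether the cheap low-resolution second model or the full-resolution first model is used. A sketch of that test, where the per-pixel tolerance and the agreement fraction are illustrative assumptions not given in the abstract:

```python
import numpy as np

def region_is_static(model_depth, measured_depth, tol=0.01, fraction=0.99):
    """Sketch of the static-region test: compare the stored model's depth
    against new depth measurements and call the region static when nearly
    all samples agree within a tolerance."""
    agree = np.abs(model_depth - measured_depth) <= tol
    return np.mean(agree) >= fraction

def choose_model(static: bool) -> str:
    # Static regions are monitored with the low-resolution second model;
    # dynamic regions keep updating the full-resolution first model.
    return "low-res second model" if static else "full-res first model"
```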

3D user interface depth forgiveness
11579747 · 2023-02-14 ·

A head-worn device system includes one or more cameras, one or more display devices and one or more processors. The system also includes a memory storing instructions that, when executed by the one or more processors, configure the system to generate a virtual object, generate a virtual object collider for the virtual object, determine a conic collider for the virtual object, provide the virtual object to a user, detect a landmark on the user's hand in the real world, generate a landmark collider for the landmark, and determine a selection of the virtual object by the user based on detecting a collision of the landmark collider with both the conic collider and the virtual object collider.
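The "depth forgiveness" idea is that selection needs a hit on both colliders: the cone (which extends the selection region along a ray, forgiving depth error) and the object's own collider. A geometric sketch under simplifying assumptions not in the abstract: the landmark collider is treated as a point, the virtual object collider as a sphere, and the conic collider as an infinite cone from an apex along a unit axis.

```python
import math

def sphere_hit(center, radius, point):
    # Landmark (point) vs. spherical virtual-object collider.
    return math.dist(center, point) <= radius

def cone_hit(apex, axis, half_angle, point):
    """Conic collider test: is the point inside the cone opening from
    `apex` along unit vector `axis` with the given half-angle?"""
    v = [p - a for p, a in zip(point, apex)]
    length = math.sqrt(sum(c * c for c in v))
    if length == 0:
        return True  # the apex itself lies in the cone
    cos_theta = sum(c * d for c, d in zip(v, axis)) / length
    return cos_theta >= math.cos(half_angle)

def selected(landmark, obj_center, obj_radius, apex, axis, half_angle):
    # Selection requires a collision with both the conic collider and the
    # virtual object collider, as in the abstract.
    return (cone_hit(apex, axis, half_angle, landmark)
            and sphere_hit(obj_center, obj_radius, landmark))
```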