RENDERING INFORMATION IN A GAZE TRACKING DEVICE ON CONTROLLABLE DEVICES IN A FIELD OF VIEW TO REMOTELY CONTROL

Provided are a computer program product, system, and method for rendering information in a gaze tracking device on controllable devices in a field of view to remotely control. A determination is made of a field of view from the gaze tracking device of a user based on a user position. Devices in the field of view that the user is capable of remotely controlling are determined, to be rendered in the gaze tracking device. An augmented reality representation of information on the determined devices is rendered in a view of the gaze tracking device. User controls are received to remotely control a target device comprising one of the determined devices for which information is rendered in the gaze tracking device. The received user controls are transmitted to the target device to control the target device.
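The core filtering step, deciding which controllable devices fall inside the user's current field of view, amounts to a view-cone test against the gaze direction. The sketch below is a hypothetical illustration; the device-dictionary layout, the `controllable` flag, and the `half_fov_deg` default are assumptions, not taken from the abstract:

```python
import math

def devices_in_field_of_view(user_pos, gaze_dir, devices, half_fov_deg=30.0):
    """Return names of controllable devices within the user's view cone."""
    gx, gy, gz = gaze_dir
    gnorm = math.sqrt(gx * gx + gy * gy + gz * gz)
    visible = []
    for dev in devices:
        # Vector from the user's position to the device.
        dx = dev["pos"][0] - user_pos[0]
        dy = dev["pos"][1] - user_pos[1]
        dz = dev["pos"][2] - user_pos[2]
        dnorm = math.sqrt(dx * dx + dy * dy + dz * dz)
        if dnorm == 0:
            continue
        # Angle between the gaze direction and the device direction.
        cos_angle = (dx * gx + dy * gy + dz * gz) / (dnorm * gnorm)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if angle <= half_fov_deg and dev.get("controllable", False):
            visible.append(dev["name"])
    return visible
```

A renderer would then draw the augmented reality overlay only for the returned device names.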

DIGITAL ASSISTANT REFERENCE RESOLUTION

Systems and processes for operating a digital assistant are provided. An example process for performing a task includes, at an electronic device having one or more processors and memory, receiving a spoken input including a request, receiving an image input including a plurality of objects, selecting a reference resolution module of a plurality of reference resolution modules based on the request and the image input, determining, with the selected reference resolution module, whether the request references a first object of the plurality of objects based on at least the spoken input, and in accordance with a determination that the request references the first object of the plurality of objects, determining a response to the request including information about the first object.
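As an illustration of the module-selection idea, the sketch below tries a list of reference resolution modules in turn and answers from the first one that finds a referent among the image objects. The `KeywordResolver` class, the object dictionaries, and the response wording are all hypothetical stand-ins:

```python
class KeywordResolver:
    """Toy reference resolver: matches words in the spoken request
    against the labels of objects detected in the image input."""

    def resolve(self, spoken_request, objects):
        words = spoken_request.lower().split()
        for obj in objects:
            if obj["label"].lower() in words:
                return obj
        return None

def respond(spoken_request, objects, resolvers):
    """Try each resolution module; answer about the first resolved object."""
    for resolver in resolvers:
        referent = resolver.resolve(spoken_request, objects)
        if referent is not None:
            return f"The {referent['label']} is {referent['info']}."
    return "I couldn't find what you're referring to."
```

A real system would select among resolvers based on both the request and the image, e.g. preferring a pointing-gesture resolver when a deictic phrase like "that one" is detected.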

METHOD AND SYSTEM FOR GAZE-BASED CONTROL OF MIXED REALITY CONTENT
20230048185 · 2023-02-16 ·

Systems and methods are presented for discovering and positioning content into augmented reality space. A method includes forming a three-dimensional (3D) map of surroundings of a user of an augmented reality (AR) head mounted display (HMD); determining a depth-wise location of a gaze point of a user based on eye gaze direction and eye vergence; determining a visual guidance line pathway in the 3D map; guiding an action of the user along the visual guidance line pathway at one or more identified focal points; and rendering a mixed reality (MR) object along the visual guidance line pathway at a location corresponding to a direction of the user’s gaze.
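The depth-wise gaze location can be recovered from vergence geometry: for an interpupillary distance d and a vergence angle θ between the two eyes' gaze rays, the fixation distance is approximately z ≈ (d/2)·cot(θ/2). A minimal sketch under the assumptions of symmetric vergence and a known interpupillary distance:

```python
import math

def gaze_depth_from_vergence(ipd_m, vergence_deg):
    """Estimate fixation depth (meters) from interpupillary distance
    and the vergence angle between the two eyes' gaze rays."""
    half = math.radians(vergence_deg) / 2.0
    if half <= 0:
        return float("inf")  # parallel gaze rays: fixation at infinity
    return (ipd_m / 2.0) / math.tan(half)
```

Combined with the 2D gaze direction, this depth estimate places the gaze point in the 3D map, which is what lets the MR object be rendered at the correct location along the guidance pathway.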

PAIN MEDICATION MANAGEMENT SYSTEM
20230051342 · 2023-02-16 ·

A pain management system for treating pain and/or detecting potential drug abuse in a patient suffering from pain, the system comprising at least one human machine interface (HMI), operable to acquire data generated by a patient responsive to pain that the patient experiences and data responsive to the patient's intake of a drug for controlling the pain; at least one processor operable to process the pain and drug intake data to generate a pain control regimen; and at least one communication interface operable to support communications between an attending medical professional and the at least one HMI and/or the processor to enable the attending medical professional to access the pain and drug intake data and the pain control regimen.

SYSTEM AND METHOD FOR ENHANCING VISUAL ACUITY

A head wearable display system comprising a target object detection module receiving multiple image pixels of a first portion and a second portion of a target object, and the corresponding depths; a first light emitter emitting multiple first-eye light signals to display a first-eye virtual image of the first portion and the second portion of the target object for a viewer; a first light direction modifier for respectively varying a light direction of each of the multiple first-eye light signals emitted from the first light emitter; a first collimator; and a first combiner for redirecting and converging the multiple first-eye light signals towards a first eye of the viewer. The first-eye virtual image of the first portion of the target object in a first field of view has a greater number of the multiple first-eye light signals per degree than that of the first-eye virtual image of the second portion of the target object in a second field of view.
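The key claim is an angular-resolution difference: more light signals per degree in the first field of view (e.g. a narrow foveal region) than in the second (a wider peripheral region). A toy calculation, with all numbers invented for illustration:

```python
def signals_per_degree(num_signals, region_fov_deg):
    """Angular density of light signals across a region's field of view."""
    return num_signals / region_fov_deg

# Hypothetical allocation: a dense, narrow first portion versus a
# sparser, wider second portion of the same virtual image.
foveal_density = signals_per_degree(600, 10.0)      # first field of view
peripheral_density = signals_per_degree(800, 40.0)  # second field of view
```

The second region can carry more signals in total and still have lower density per degree, which is the condition the claim turns on.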

In-Vehicle Speech Interaction Method and Device
20230048330 · 2023-02-16 ·

An in-vehicle speech interaction method and a device are provided. The method includes: obtaining user speech information; determining a user instruction based on the user speech information; determining, based on the user instruction, whether response content to the user instruction is privacy-related; and determining, based on whether the response content is privacy-related, whether to output the response content in a privacy protection mode, to protect privacy from being leaked.
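The decision described, outputting normally or in a privacy protection mode depending on whether the response content is privacy-related, reduces to a small policy check. Everything below (the topic set, the mode names, the passenger flag) is an assumed sketch, not the patented method:

```python
# Hypothetical set of response topics treated as privacy-related.
PRIVACY_SENSITIVE_TOPICS = {"messages", "contacts", "health", "calendar", "location"}

def plan_response(response_topic, passengers_present):
    """Decide how to deliver response content: openly, or in a
    privacy protection mode (e.g. driver's screen instead of cabin speakers)."""
    if response_topic in PRIVACY_SENSITIVE_TOPICS and passengers_present:
        return "privacy_mode"
    return "normal_output"
```

The interesting design question is the second input: privacy protection only needs to engage when someone other than the requesting user could overhear the response.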

DETECTING AND RESPONDING TO LIGHT SOURCE FAILURE

In various examples, a head-mountable display (“HMD”) may include a light source to emit light across a target region of a wearer, a light sensor, and a circuitry operably coupled with the light source and the light sensor. The circuitry may operate the light source to periodically emit light across the light sensor. Based on a determination that a time interval since the circuitry last received a signal from the light sensor satisfies a threshold, the circuitry may trigger a remedial action to cause the light source to cease emission of light across the target region of the wearer.
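The circuitry's behavior is essentially a watchdog timer: if no signal has arrived from the light sensor within a threshold interval, emission is ceased. A software sketch of that logic; the class name and the injectable clock are assumptions made for testability:

```python
import time

class LightSourceWatchdog:
    """Trips a remedial action when the light sensor goes silent too long."""

    def __init__(self, threshold_s, now=time.monotonic):
        self.threshold_s = threshold_s
        self.now = now
        self.last_signal = self.now()

    def on_sensor_signal(self):
        """Record that the sensor saw the periodically emitted light."""
        self.last_signal = self.now()

    def should_cease_emission(self):
        """True once the interval since the last sensor signal meets the threshold."""
        return (self.now() - self.last_signal) >= self.threshold_s
```

In hardware this would gate the light source driver directly; the sketch only models the timing decision.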

METHOD AND APPARATUS FOR IDENTIFYING OBJECT OF INTEREST OF USER
20230046258 · 2023-02-16 ·

The present disclosure relates to methods and apparatuses for identifying an object of interest of a user. One example method includes obtaining information about a line-of-sight-gazed region of the user and an environment image corresponding to the user, obtaining information about a first gaze region of the user in the environment image based on the environment image, where the first gaze region is used to indicate a sensitive region determined by using a physical feature of a human body, and obtaining a target gaze region of the user based on the information about the line-of-sight-gazed region and the information about the first gaze region. The target gaze region is used to indicate a region in which a target object gazed at by the user in the environment image is located.
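One plausible reading of the fusion step is an overlap test between the line-of-sight-gazed region and the body-feature sensitive region, keeping the overlap as the target gaze region. The sketch below uses axis-aligned boxes; the box representation and the fall-back rule for disjoint regions are assumptions, not details from the abstract:

```python
def intersect_boxes(a, b):
    """Intersection of two (x1, y1, x2, y2) boxes, or None if disjoint."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2, y2)

def target_gaze_region(line_of_sight_box, sensitive_box):
    """Fuse the line-of-sight region with the sensitive region; fall back
    to the raw line-of-sight estimate when the two do not overlap."""
    overlap = intersect_boxes(line_of_sight_box, sensitive_box)
    return overlap if overlap is not None else line_of_sight_box
```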

IMAGE GAZE CORRECTION METHOD, APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT

An image gaze correction method, apparatus, electronic device, computer-readable storage medium, and computer program product related to the field of artificial intelligence technologies are provided. The image gaze correction method includes: acquiring an eye image from an image; performing feature extraction processing on the eye image to obtain feature information of the eye image; performing, based on the feature information and a target gaze direction, gaze correction processing on the eye image to obtain an initially corrected eye image and an eye contour mask; performing, by using the eye contour mask, adjustment processing on the initially corrected eye image to obtain a corrected eye image; and generating a gaze corrected image based on the corrected eye image.
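The mask-based adjustment step can be read as a per-pixel blend: inside the eye contour the corrected pixels are kept, outside it the original pixels are restored. A minimal sketch on nested lists standing in for a single image channel; the blend formula m·c + (1−m)·o is a common convention assumed here, not quoted from the abstract:

```python
def blend_with_mask(original, corrected, mask):
    """Per-pixel blend: mask=1 keeps the gaze-corrected pixel,
    mask=0 restores the original pixel, fractions interpolate."""
    return [
        [m * c + (1.0 - m) * o for o, c, m in zip(orow, crow, mrow)]
        for orow, crow, mrow in zip(original, corrected, mask)
    ]
```

This is why the pipeline produces an eye contour mask alongside the initially corrected eye image: the mask confines the correction to the eye region before the result is pasted back into the full image.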

VIRTUAL CONTENT EXPERIENCE SYSTEM AND CONTROL METHOD FOR SAME
20230052104 · 2023-02-16 ·

Disclosed is a virtual content experience system. In the virtual content experience system, a central server for driving the system includes: a content conversion unit which converts two-dimensional image content, received by means of a data transmission and reception unit or input by a user, into a stereoscopic image; a motion information generation unit which recognizes text information extracted from the two-dimensional image content and converts the text information into motion information; a content playback control unit which is provided to transmit the motion information to a motion information management unit provided in a virtual reality experience chair, or to receive start information and end information about the motion information from the motion information management unit to generate and change control information for controlling whether to provide new two-dimensional image content; and a display unit for displaying the content conversion unit, and the motion information or control information.