G06F3/0304

Systems, methods, and graphical user interfaces for updating display of a device relative to a user's body

An electronic device, while the electronic device is worn over a predefined portion of the user's body, displays, via a display generation component arranged on the electronic device opposite the predefined portion of the user's body, a graphical representation of an exterior view of a body part that corresponds to the predefined portion of the user's body. The electronic device detects a change in position of the electronic device with respect to the predefined portion of the user's body. In response to detecting the change in position, the electronic device modifies the graphical representation of the exterior view of the body part that corresponds to the predefined portion of the user's body in accordance with the detected change in position of the electronic device with respect to the predefined portion of the user's body.
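The update rule lends itself to a short sketch: treat the displayed exterior view as a window into a stored texture of the covered body part, and shift that window opposite to the device's detected movement so the rendered body part appears anchored to the body rather than to the device. All class and parameter names below are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class Offset:
    dx_mm: float  # device displacement along the arm, in millimeters
    dy_mm: float  # device displacement around the arm, in millimeters

class BodyTextureDisplay:
    """Shows a crop of a stored exterior-view texture of the covered body part."""

    def __init__(self, texture_px_per_mm: float = 4.0):
        self.px_per_mm = texture_px_per_mm
        self.crop_origin = [0.0, 0.0]  # top-left of displayed window, in texture px

    def on_position_change(self, offset: Offset) -> None:
        # Shift the displayed window opposite to the device's movement so the
        # rendered body part appears fixed to the body, not to the device.
        self.crop_origin[0] -= offset.dx_mm * self.px_per_mm
        self.crop_origin[1] -= offset.dy_mm * self.px_per_mm

display = BodyTextureDisplay()
display.on_position_change(Offset(dx_mm=2.5, dy_mm=-1.0))
print(display.crop_origin)  # [-10.0, 4.0]
```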

Realistic virtual/augmented/mixed reality viewing and interactions

The present invention discloses systems and methods for viewing and interacting with a virtual reality (VR), augmented reality (AR), or mixed reality (MR) environment. More specifically, the systems and methods allow the user to interact with aspects of such realities, including virtual items presented within such environments, by manipulating a control device that has an inside-out camera mounted on board. The apparatus or system uses two distinct representations, including a reduced representation, to determine the pose of the control device, and uses these representations to compute an interactive pose portion of the control device for interacting with the virtual item. The reduced representation is consonant with a constrained motion of the control device.
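To illustrate how a reduced representation can be consonant with constrained motion, the sketch below assumes the control device is confined to a tabletop, so a 3-DOF planar pose suffices; the planar constraint, the lifting to a full pose, and the pointing-ray form of the interactive pose portion are all assumptions made for illustration.

```python
import math

def reduced_to_full_pose(x: float, y: float, yaw: float) -> dict:
    """Lift a 3-DOF planar pose (device confined to a tabletop) into a
    full 6-DOF pose: fixed height, no roll or pitch."""
    return {"t": (x, y, 0.0), "rpy": (0.0, 0.0, yaw)}

def interactive_ray(pose: dict):
    """Interactive pose portion: a pointing ray derived from the pose,
    usable for selecting and manipulating a virtual item."""
    _, _, yaw = pose["rpy"]
    direction = (math.cos(yaw), math.sin(yaw), 0.0)
    return pose["t"], direction

origin, direction = interactive_ray(reduced_to_full_pose(0.2, 0.1, math.pi / 4))
print(origin, direction)  # (0.2, 0.1, 0.0) (0.707..., 0.707..., 0.0)
```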

Navigation device capable of estimating contamination and denoising image frame
11582413 · 2023-02-14

There is provided an optical navigation device including an image sensor and a processing unit. The image sensor outputs successive image frames. The processing unit calculates a contamination level and a motion signal based on filtered image frames, and determines whether to update a fixed pattern noise (FPN) stored in a frame buffer according to a level of FPN subtraction, the calculated contamination level, and the calculated motion signal, so as to optimize the update of the fixed pattern noise.
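A minimal sketch of such a gated update is given below, assuming the FPN buffer is refreshed only from clean, static frames and blended with an exponential moving average; the thresholds, the blending policy, and all names are illustrative assumptions rather than details from the patent.

```python
import numpy as np

def maybe_update_fpn(frame: np.ndarray, fpn: np.ndarray,
                     contamination: float, motion: float,
                     subtraction_level: float,
                     max_contamination: float = 0.2,
                     max_motion: float = 0.5,
                     min_subtraction: float = 0.8,
                     alpha: float = 0.05) -> np.ndarray:
    """Return the (possibly updated) FPN frame buffer."""
    # Learn FPN only from clean, static frames, and only when the current
    # FPN subtraction is performing poorly enough to warrant a refresh.
    if (contamination < max_contamination and
            motion < max_motion and
            subtraction_level < min_subtraction):
        # Exponential moving average keeps the stored pattern stable.
        fpn = (1 - alpha) * fpn + alpha * frame
    return fpn

fpn = np.zeros((32, 32))
frame = np.random.default_rng(0).normal(size=(32, 32))
fpn = maybe_update_fpn(frame, fpn, contamination=0.05, motion=0.1,
                       subtraction_level=0.6)
```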

CONTACTLESS TOUCH INPUT SYSTEM

A proximity sensor including: light emitters and light detectors mounted on a circuit board; two stacked lenses positioned above the emitters and the detectors, namely an extruded cylindrical lens and a Fresnel lens array, wherein each emitter projects light through the two lenses along a common projection plane, wherein a reflective object located in the projection plane reflects light from one or more emitters to one or more detectors, and wherein each emitter-detector pair, when synchronously activated, generates a greatest detection signal at the activated detector when the reflective object is located at a specific 2D location in the projection plane corresponding to that pair; and a processor that sequentially activates the emitters, synchronously co-activates one or more detectors, and identifies a location of the object in the projection plane based on the amounts of light detected by the detector of each synchronously activated emitter-detector pair.
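One plausible reading of the localization step is a signal-weighted average over the calibrated hotspot location of each emitter-detector pair, sketched below; the hotspot map, the weighting scheme, and the function names are assumptions made for illustration.

```python
def locate(signals, hotspots):
    """signals: {(emitter, detector): detected light amount}
    hotspots: {(emitter, detector): (x, y)}, the 2D location in the
    projection plane where that pair's detection signal peaks."""
    total = sum(signals.values())
    if total == 0:
        return None  # no reflective object in the projection plane
    x = sum(s * hotspots[pair][0] for pair, s in signals.items()) / total
    y = sum(s * hotspots[pair][1] for pair, s in signals.items()) / total
    return (x, y)

signals = {(0, 0): 0.1, (0, 1): 0.7, (1, 1): 0.2}
hotspots = {(0, 0): (10, 40), (0, 1): (25, 40), (1, 1): (40, 40)}
print(locate(signals, hotspots))  # (26.5, 40.0)
```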

TRACKING SYSTEM, TRACKING METHOD AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
20230041519 · 2023-02-09

A tracking method, for tracking an object based on computer vision, includes the following steps. A series of images is captured by a tracking camera. A first position of a trackable device is tracked within the images. An object is recognized around the first position in the images. In response to the object being recognized, a second position of the object is tracked in the images.
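The two-stage flow can be sketched as below, with detect_device, recognize_near, and make_tracker standing in for whatever detector and tracker a concrete system would use; all three names are assumptions, not from the publication.

```python
def track_stream(frames, detect_device, recognize_near, make_tracker):
    """Yield (device_position, object_position) pairs once an object has
    been recognized around the tracked device."""
    object_tracker = None
    for frame in frames:
        first_pos = detect_device(frame)            # first position: trackable device
        if object_tracker is None and first_pos is not None:
            roi = recognize_near(frame, first_pos)  # recognize an object around it
            if roi is not None:
                object_tracker = make_tracker(frame, roi)
        if object_tracker is not None:
            second_pos = object_tracker(frame)      # second position: the object
            yield first_pos, second_pos
```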

METHOD AND DEVICE FOR OPERATING A LASER UNIT AS A FUNCTION OF A DETECTED STATE OF AN OBJECT, AND LASER DEVICE
20230044259 · 2023-02-09

A method for operating a laser unit as a function of a detected state of an object. The method includes: outputting a light beam having a light beam intensity, using the laser unit, during a first time period and a second time period; receiving at least one reflected partial beam having a partial beam intensity during the first and second time periods; making the light beam and the partial beam interfere with each other in the first and second time periods to obtain a first interference parameter for the first time period and a second interference parameter for the second time period; ascertaining the state of the object from the first and second interference parameters; and changing an operating state of the laser unit as a function of the ascertained state of the object.
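A hedged sketch of the comparison step follows: if the interference parameter differs materially between the two time periods, the object's state is taken to have changed and the laser's operating state is adjusted. The threshold and the power-reduction policy are illustrative assumptions; the patent does not specify them.

```python
def update_laser(interference_t1: float, interference_t2: float,
                 intensity: float, change_threshold: float = 0.15) -> float:
    """Return a new light-beam intensity as a function of the object state."""
    state_changed = abs(interference_t2 - interference_t1) > change_threshold
    if state_changed:
        return intensity * 0.1  # e.g., reduce power when the object's state changes
    return intensity            # otherwise keep the operating state unchanged

print(update_laser(1.00, 1.30, intensity=5.0))  # 0.5
```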

Devices, systems and methods for predicting gaze-related parameters using a neural network
11556741 · 2023-01-17

A method for creating and updating a database is disclosed. In one example, the method includes presenting a first stimulus to a first user wearing a head-wearable device and, when the first user is expected to respond to the first stimulus or expected to have responded to the first stimulus, using a first camera of the head-wearable device to generate a first left image of at least a portion of the left eye of the first user and using a second camera of the head-wearable device to generate a first right image of at least a portion of the right eye of the first user. A data connection is established between the head-wearable device and the database. A first dataset is generated comprising the first left image, the first right image, and a first representation of a gaze-related parameter, the first representation being correlated with the first stimulus, and the first dataset is added to the database.
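The dataset-assembly step might look like the sketch below, where one record pairs the left and right eye images with the stimulus-correlated representation of the gaze-related parameter; all field and class names are assumptions, not from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class GazeSample:
    left_image: bytes            # first left image (left eye, first camera)
    right_image: bytes           # first right image (right eye, second camera)
    gaze_parameter: Tuple[float, float]  # representation correlated with the stimulus
    stimulus_id: str

@dataclass
class GazeDatabase:
    samples: List[GazeSample] = field(default_factory=list)

    def add(self, sample: GazeSample) -> None:
        self.samples.append(sample)

db = GazeDatabase()
db.add(GazeSample(b"<left.png>", b"<right.png>",
                  gaze_parameter=(0.4, 0.6), stimulus_id="dot_17"))
print(len(db.samples))  # 1
```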

Mobile terminal and control method therefor

The present invention relates to a device and a control method therefor. More specifically, the device comprises: a memory for storing at least one command; a depth camera for capturing at least one hand of a user; a display module; and a controller for controlling the memory, the depth camera, and the display module. The controller controls the depth camera so as to capture the at least one hand of the user, and controls the display module so as to output visual feedback that changes on the basis of the captured hand.
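As a toy example of visual feedback that changes with the captured hand, the sketch below maps a thumb-index pinch distance measured from depth data to the size of an on-screen cursor; the pinch metric and the mapping are illustrative assumptions.

```python
def feedback_radius(thumb_tip, index_tip, max_radius: float = 60.0) -> float:
    """Map a thumb-index pinch distance (mm, from depth data) to a cursor radius."""
    dist = sum((a - b) ** 2 for a, b in zip(thumb_tip, index_tip)) ** 0.5
    return min(max_radius, dist)  # feedback shrinks as the pinch closes

print(feedback_radius((0, 0, 300), (30, 40, 300)))  # 50.0
```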

System and method to convert two-dimensional video into three-dimensional extended reality content

System and method are provided to detect objects in a scene frame of two-dimensional (2D) video using image processing and determine object image coordinates of the detected objects in the scene frame. The system and method deploy a virtual camera in a three-dimensional (3D) environment to create a virtual image frame in the environment and generate a floor in the environment in a plane below the virtual camera. The system and method adjust the virtual camera to change its height and angle relative to the virtual image frame. The system and method generate an extended reality (XR) coordinate location relative to the floor for placing the detected object in the environment. The XR coordinate location is the point where a ray, cast from the virtual camera through the point in the virtual image frame corresponding to the object's image coordinates, intersects the floor.
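The placement step reduces to a ray-plane intersection. The sketch below assumes a pinhole camera model with the floor at y = 0; the camera parameters and numbers are illustrative, not from the patent.

```python
import numpy as np

def xr_location(cam_pos, pixel, image_size, fov_deg, down_tilt_deg):
    """Intersect a camera ray through `pixel` with the floor plane y = 0."""
    h, w = image_size
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels
    # Ray in camera space: +z forward, +y up, origin at the camera.
    ray = np.array([pixel[0] - w / 2, h / 2 - pixel[1], f], dtype=float)
    ray /= np.linalg.norm(ray)
    # Tilt the camera downward by down_tilt_deg (rotation about the x-axis).
    p = np.radians(down_tilt_deg)
    rot = np.array([[1, 0, 0],
                    [0, np.cos(p), -np.sin(p)],
                    [0, np.sin(p), np.cos(p)]])
    ray_world = rot @ ray
    if ray_world[1] >= 0:
        return None                     # ray never reaches the floor
    t = -cam_pos[1] / ray_world[1]      # solve cam_pos.y + t * ray.y = 0
    return cam_pos + t * ray_world      # XR coordinate location on the floor

# Camera 1.6 m above the floor, tilted 10 degrees down, object low in the frame.
print(xr_location(np.array([0.0, 1.6, 0.0]), (640, 500), (720, 1280), 60.0, 10.0))
```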

Enhanced graphical user interface for voice communications
11574633 · 2023-02-07

Enhanced graphical user interfaces for transcription of audio and video messages are disclosed. Audio data may be transcribed, and the transcription may include emphasized words and/or punctuation corresponding to emphasis of user speech. Additionally, the transcription may be translated into a second language. A message spoken by a user depicted in one or more images of video data may also be transcribed and provided to one or more devices.