G06F3/013

Radio frequency sensing in a television environment

Techniques are provided for performing radio frequency (RF) sensing to determine the viewing status of a television user. This can be used to determine user behavior during the playback of content (e.g., whether a user is watching the content), which can be used as a data point for determining the user's level of interest in the content. Using the status of the television user, embodiments can provide additional or alternative functionality, such as powering down and/or powering up the television. Furthermore, RF sensing may be performed by existing television hardware, such as a Wi-Fi transceiver, and may therefore provide RF sensing functionality to a television with little or no added cost.
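One common approach to RF presence sensing (not necessarily the one claimed here) is to look at how much the received signal fluctuates: a moving viewer perturbs the channel, while an empty room yields a nearly constant reading. A minimal sketch, with made-up sample values and an invented threshold:

```python
# Illustrative sketch only: infer viewing status from variance in
# hypothetical Wi-Fi channel amplitude samples. The threshold is invented.
def viewer_status(csi_amplitudes, motion_threshold=0.5):
    """Classify viewer presence from RF channel amplitude samples."""
    n = len(csi_amplitudes)
    mean = sum(csi_amplitudes) / n
    variance = sum((a - mean) ** 2 for a in csi_amplitudes) / n
    # High variance suggests motion near the TV; low variance, a static room.
    return "present" if variance > motion_threshold else "absent"

samples_empty_room = [1.0, 1.01, 0.99, 1.0, 1.02]
samples_person_moving = [1.0, 2.5, 0.3, 1.8, 0.6]
print(viewer_status(samples_empty_room))      # absent
print(viewer_status(samples_person_moving))   # present
```

A real implementation would operate on channel state information from the TV's Wi-Fi chipset rather than a plain amplitude list, but the gating decision (e.g., power the panel down when no viewer is detected) can hang off a classifier of this shape.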

Augmented reality placement for user feedback

Methods and systems are provided for generating augmented reality (AR) scenes that include one or more artificial intelligence elements (AIEs) rendered as visual objects in the AR scenes. The method includes generating an AR scene for rendering on a display; the AR scene includes a real-world space and virtual objects projected into the real-world space. The method includes analyzing a field of view into the AR scene; the analyzing is configured to detect an action by a hand of the user reaching into the AR scene. The method includes generating one or more AIEs rendered as virtual objects in the AR scene, each AIE being configured to provide a dynamic interface that is selectable by a gesture of the user's hand. In one embodiment, each of the AIEs is rendered proximate to a real-world object present in the real-world space; the real-world object is located in the direction in which the user's hand is detected to be reaching when the user makes the action.
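The placement step above amounts to picking the real-world object that best matches the hand's reach direction. A hedged sketch, assuming tracked object positions and a hand ray are already available (all names and coordinates are illustrative):

```python
import math

# Illustrative sketch: choose the real-world object whose direction from the
# hand most closely matches the reach direction, so an AIE can be rendered
# next to it. Object names and positions are invented.
def object_in_reach_direction(hand_origin, reach_dir, objects):
    """Return the object name whose bearing best matches reach_dir."""
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    d = normalize(reach_dir)
    best_name, best_dot = None, -2.0
    for name, pos in objects.items():
        to_obj = normalize(tuple(p - o for p, o in zip(pos, hand_origin)))
        dot = sum(a * b for a, b in zip(d, to_obj))  # cosine similarity
        if dot > best_dot:
            best_name, best_dot = name, dot
    return best_name

objects = {"lamp": (1.0, 0.0, 0.0), "mug": (0.0, 1.0, 0.0)}
print(object_in_reach_direction((0, 0, 0), (0.9, 0.1, 0.0), objects))  # lamp
```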

Systems, methods, and media for displaying interactive augmented reality presentations

Systems, methods, and media for displaying interactive augmented reality presentations are provided. In some embodiments, a system comprises: a plurality of head mounted displays, a first head mounted display comprising a transparent display; and at least one processor, wherein the at least one processor is programmed to: determine that a first physical location of a plurality of physical locations in a physical environment of the first head mounted display is located closest to the first head mounted display; receive first content comprising a first three dimensional model; receive second content comprising a second three dimensional model; present, using the transparent display, a first view of the first three dimensional model at a first time; and present, using the transparent display, a first view of the second three dimensional model at a second time subsequent to the first time, based on one or more instructions received from a server.
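The "closest physical location" determination is a nearest-neighbor query over known station positions. A minimal sketch, with invented station names and coordinates:

```python
import math

# Illustrative sketch: find which of several physical locations (e.g.,
# presentation stations) is nearest the headset. Values are hypothetical.
def closest_location(hmd_position, locations):
    """Return the name of the physical location nearest the headset."""
    return min(locations,
               key=lambda name: math.dist(hmd_position, locations[name]))

stations = {"station_a": (0.0, 0.0), "station_b": (4.0, 3.0)}
print(closest_location((1.0, 1.0), stations))  # station_a
```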

Mid-air volumetric visualization movement compensation

A wearable computing device generates a volumetric visualization at a first position that is located in a three-dimensional space. The wearable computing device includes a volumetric source configured to create the volumetric visualization. The wearable computing device includes one or more sensors configured to determine movement of the wearable computing device. A movement of the wearable computing device is identified by the wearable computing device. Based on the movement, the wearable computing device adjusts the volumetric source.
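In the simplest reading, compensating for device movement means offsetting the volumetric source by the inverse of the measured motion so the visualization stays anchored in world space. A hedged sketch with hypothetical displacement values:

```python
# Illustrative sketch: keep a mid-air visualization anchored in world space
# by shifting the volumetric source opposite to the device's movement.
def compensate(source_offset, device_movement):
    """Shift the source offset by the inverse of the device movement."""
    return tuple(s - m for s, m in zip(source_offset, device_movement))

offset = (0.0, 0.0, 1.0)                      # visualization 1 m ahead
offset = compensate(offset, (0.2, 0.0, 0.1))  # wearer moved right and forward
print(offset)                                 # source pulled back and left
```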

Eye image selection
11579694 · 2023-02-14

Systems and methods for eye image set selection, eye image collection, and eye image combination are described. Embodiments of the systems and methods for eye image set selection can include comparing a determined image quality metric with an image quality threshold to identify an eye image passing an image quality threshold, and selecting, from a plurality of eye images, a set of eye images that passes the image quality threshold.
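The selection step described above is a threshold filter over a per-image quality metric. A minimal sketch, assuming a quality score is already computed per image (the "sharpness" values below are made up):

```python
# Illustrative sketch: keep only eye images whose quality metric passes a
# threshold. The metric and scores here are hypothetical.
def select_eye_images(images, quality_metric, threshold):
    """Return the subset of images passing the image quality threshold."""
    return [img for img in images if quality_metric(img) >= threshold]

images = [{"id": 1, "sharpness": 0.9},
          {"id": 2, "sharpness": 0.4},
          {"id": 3, "sharpness": 0.7}]
selected = select_eye_images(images, lambda i: i["sharpness"], 0.6)
print([i["id"] for i in selected])  # [1, 3]
```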

Apparatus and method for displaying contents on an augmented reality device

A system for displaying contents on an augmented reality (AR) device comprises a capturing module configured to capture a field of view of a user, a recording module configured to record the captured field of view, a user input controller configured to track a vision of the user towards one or more objects, and a server. The server comprises a determination module, an identifier, and an analyser. The determination module is configured to determine at least one object of interest. The identifier is configured to identify a frame containing disappearance of the determined object of interest. The analyser is configured to analyse the identified frame based on at least one disappearance of the object of interest, and generate analysed data. A display module is configured to display content of the object of interest on the AR device.
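Identifying the frame containing the object's disappearance can be sketched as a scan over the recorded frames for the first one in which the tracked object no longer appears. The frame contents below are invented:

```python
# Illustrative sketch: find the first recorded frame in which a tracked
# object of interest no longer appears. Frame data is hypothetical.
def disappearance_frame(frames, object_id):
    """Return the index of the first frame missing object_id, else None."""
    for i, visible_objects in enumerate(frames):
        if object_id not in visible_objects:
            return i
    return None  # the object never disappeared from view

frames = [{"keys", "wallet"}, {"keys", "wallet"}, {"wallet"}, {"wallet"}]
print(disappearance_frame(frames, "keys"))  # 2
```

The identified frame could then be analysed (e.g., for the object's last known surroundings) and the result surfaced on the AR display.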

Augmented reality for vehicle operations

A method includes saving in-flight data from an aircraft during a simulated training exercise, wherein the in-flight data includes geospatial locations of the aircraft, positional attitudes of the aircraft, and head positions of a pilot operating the aircraft; saving simulation data relating to a simulated virtual object presented to the pilot as augmented reality content in-flight, wherein the virtual object was programmed to interact with the aircraft during the simulated training exercise; and representing the in-flight data from the aircraft and the simulation data relating to the simulated virtual object as a replay of the simulated training exercise.
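The per-timestep record such a replay might store could look like the following sketch; the field names and values are illustrative, not the patent's schema:

```python
from dataclasses import dataclass, field

# Hedged sketch of a hypothetical replay record combining in-flight data
# (position, attitude, head pose) with simulation data (virtual objects).
@dataclass
class ReplayFrame:
    timestamp: float
    aircraft_position: tuple   # geospatial location, e.g. (lat, lon, alt)
    aircraft_attitude: tuple   # positional attitude, e.g. (pitch, roll, yaw)
    pilot_head_pose: tuple     # pilot head position/orientation
    virtual_objects: list = field(default_factory=list)  # simulated entities

replay = [ReplayFrame(0.0, (34.9, -117.9, 2500.0), (2.0, 0.0, 270.0),
                      (0.0, 0.0, 0.0), ["target_drone"])]
print(len(replay))  # 1
```

Replaying the exercise then amounts to stepping through such records and re-rendering both the aircraft state and the simulated AR content.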

Imaging display device and wearable device

An imaging display device includes an imaging unit, a processing unit, a display unit, and a pupil detection unit. The imaging unit includes a plurality of photoelectric conversion elements and is configured to acquire first image information. The processing unit is configured to process a signal from the imaging unit and generate second image information. The display unit is configured to display an image that is based on the signal from the processing unit. The pupil detection unit is configured to detect vector information of a pupil. The processing unit generates the second image information by processing the first image information based on the vector information of the pupil.
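One plausible (hypothetical, not necessarily the patented) form of "processing the first image based on the pupil vector" is a gaze-dependent crop of the captured image before display:

```python
# Illustrative sketch: derive displayed (second) image information by
# cropping the captured (first) image around where a 2D pupil vector points.
# Image contents, vector, and crop size are all invented.
def crop_around_gaze(image, pupil_vector, crop_w, crop_h):
    """Crop a crop_w x crop_h patch offset from center by the pupil vector."""
    rows, cols = len(image), len(image[0])
    cx = max(0, min(cols - crop_w, cols // 2 + pupil_vector[0]))
    cy = max(0, min(rows - crop_h, rows // 2 + pupil_vector[1]))
    return [row[cx:cx + crop_w] for row in image[cy:cy + crop_h]]

image = [[r * 10 + c for c in range(10)] for r in range(10)]  # 10x10 ramp
patch = crop_around_gaze(image, (2, -1), 4, 4)
print(len(patch), len(patch[0]))  # 4 4
```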

Virtual 3D communications with actual to virtual cameras optical axes compensation

A method for conducting a three dimensional (3D) video conference between multiple participants may include determining, for each participant, updated 3D participant representation information within the virtual 3D video conference environment that represents the participant; wherein the determining comprises compensating for a difference between an actual optical axis of a camera that acquires images of the participant and a desired optical axis of a virtual camera; and generating, for at least one participant, an updated representation of the virtual 3D video conference environment, the updated representation representing the updated 3D participant representation information for at least some of the multiple participants.
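In one dimension, the compensation reduces to subtracting the angular offset between the real camera's optical axis and the virtual camera's. A deliberately simplified 2D sketch (angles and mounting geometry are invented):

```python
# Illustrative 2D sketch: correct a participant's apparent gaze yaw by the
# angular difference between the actual camera axis and the virtual camera
# axis. A full implementation would work on 3D representations.
def compensated_yaw(measured_yaw_deg, actual_axis_deg, virtual_axis_deg):
    """Remove the camera-mounting offset from the measured yaw."""
    return measured_yaw_deg - (actual_axis_deg - virtual_axis_deg)

# Camera mounted 15 degrees off to the side; virtual camera is head-on, so a
# participant looking at the screen should appear to face the viewer.
print(compensated_yaw(measured_yaw_deg=15.0,
                      actual_axis_deg=15.0,
                      virtual_axis_deg=0.0))  # 0.0
```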

Reducing head mounted display power consumption and heat generation through predictive rendering of content

Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
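The selective-rendering decision can be sketched as a gate on predicted gaze: skip the element when the eyes are predicted to be mid-saccade (when visual processing is suppressed) or when the element falls outside the predicted gaze region. Thresholds and units below are invented:

```python
# Illustrative sketch: decide whether to render an AR element based on a
# predicted gaze point and eye velocity. Thresholds are hypothetical.
def should_render(predicted_gaze, element_pos, eye_velocity,
                  fov_radius=30.0, saccade_threshold=300.0):
    """Return False when the user likely cannot visually process the element."""
    if eye_velocity > saccade_threshold:  # saccadic suppression predicted
        return False
    dx = element_pos[0] - predicted_gaze[0]
    dy = element_pos[1] - predicted_gaze[1]
    return (dx * dx + dy * dy) ** 0.5 <= fov_radius

print(should_render((0, 0), (10, 10), eye_velocity=50.0))   # True
print(should_render((0, 0), (10, 10), eye_velocity=500.0))  # False
```

Skipping the render in the False cases is what saves power and reduces heat on the head mounted display.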