G06F3/012

Systems, methods, and media for displaying interactive augmented reality presentations

Systems, methods, and media for displaying interactive augmented reality presentations are provided. In some embodiments, a system comprises: a plurality of head mounted displays, a first head mounted display comprising a transparent display; and at least one processor, wherein the at least one processor is programmed to: determine that a first physical location of a plurality of physical locations in a physical environment of the first head mounted display is located closest to the first head mounted display; receive first content comprising a first three dimensional model; receive second content comprising a second three dimensional model; present, using the transparent display, a first view of the first three dimensional model at a first time; and present, using the transparent display, a first view of the second three dimensional model at a second time subsequent to the first time based on one or more instructions received from a server.
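The two core operations this abstract claims (finding the nearest physical location to the headset, and switching the presented model at a server-instructed time) can be sketched as follows. All names and the (time, model) instruction format are illustrative assumptions, not details from the patent.

```python
import math

def closest_location(hmd_pos, locations):
    """Return the index of the physical location nearest the HMD.

    hmd_pos and each entry of locations are (x, y, z) tuples.
    """
    return min(range(len(locations)),
               key=lambda i: math.dist(hmd_pos, locations[i]))

def model_to_present(instructions, now):
    """Pick which 3D model to show at time `now`, given server
    instructions as (switch_time, model_id) pairs sorted by time."""
    current = instructions[0][1]
    for switch_time, model_id in instructions:
        if now >= switch_time:
            current = model_id
    return current
```

In this sketch the server never pushes state directly; the headset re-evaluates the instruction list each frame, which keeps late-joining displays consistent.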

Apparatus and method for displaying contents on an augmented reality device

A system for displaying contents on an augmented reality (AR) device comprises a capturing module configured to capture a field of view of a user, a recording module configured to record the captured field of view, a user input controller configured to track the user's gaze toward one or more objects, a display module, and a server. The server comprises a determination module, an identifier, and an analyser. The determination module is configured to determine at least one object of interest. The identifier is configured to identify a frame in which the determined object of interest disappears. The analyser is configured to analyse the identified frame based on at least one disappearance of the object of interest, and generate analysed data. The display module is configured to display content of the object of interest on the AR device.
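The identifier's job, finding the frame where a tracked object of interest vanishes from view, reduces to a scan over per-frame detections. A minimal sketch, assuming detections arrive as one set of object IDs per frame (an assumed input format; the patent does not specify one):

```python
def disappearance_frames(detections):
    """Given a per-frame sequence of sets of detected object IDs,
    return a dict mapping each object ever seen to the index of the
    first frame in which it is no longer detected (None if it never
    disappears)."""
    seen = set()
    gone = {}
    for idx, frame_objects in enumerate(detections):
        for obj in seen - set(frame_objects):
            gone.setdefault(obj, idx)  # record only the first disappearance
        seen |= set(frame_objects)
    return {obj: gone.get(obj) for obj in seen}
```

The server-side analyser would then inspect the recorded frame just before each returned index, e.g. to re-display the last known appearance of the object.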

Augmented reality for vehicle operations

A method includes saving in-flight data from an aircraft during a simulated training exercise, wherein the in-flight data includes geospatial locations of the aircraft, positional attitudes of the aircraft, and head positions of a pilot operating the aircraft; saving simulation data relating to a simulated virtual object presented to the pilot as augmented reality content in-flight, wherein the virtual object was programmed to interact with the aircraft during the simulated training exercise; and representing the in-flight data from the aircraft and the simulation data relating to the simulated virtual object as a replay of the simulated training exercise.
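The replay described here amounts to recording two time-aligned streams (aircraft state and virtual-object state) and playing them back in lockstep. A minimal sketch, with illustrative field names not taken from the patent:

```python
from bisect import bisect_right

class FlightReplay:
    """Records time-stamped in-flight samples (position, attitude,
    pilot head pose) alongside virtual-object states, and returns
    the paired states for any replay time."""

    def __init__(self):
        self.times = []
        self.aircraft = []   # e.g. dicts with position/attitude/head_pose
        self.virtual = []    # state of the simulated virtual object

    def record(self, t, aircraft_state, virtual_state):
        self.times.append(t)
        self.aircraft.append(aircraft_state)
        self.virtual.append(virtual_state)

    def state_at(self, t):
        """Return the most recent recorded pair at or before time t."""
        i = max(bisect_right(self.times, t) - 1, 0)
        return self.aircraft[i], self.virtual[i]
```

Keeping both streams keyed to the same clock is what lets the replay show the virtual object interacting with the aircraft exactly as the pilot saw it.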

Viewpoint dependent brick selection for fast volumetric reconstruction

A method of culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low usage of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from a field of view of an image sensor, from which image data to create the 3D reconstruction is obtained.
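The frustum test can be sketched by treating each brick as a center point and the frustum as a view cone with a far plane. This is a deliberate simplification: the patented method also tests bricks against the per-pixel depth image, which this sketch replaces with a single maximum depth. Parameter names are illustrative, and the camera direction is assumed to be a unit vector.

```python
import math

def cull_bricks(bricks, cam_pos, cam_dir, fov_deg, max_depth):
    """Keep only bricks (given by center points) inside a simplified
    view frustum: within the camera's field-of-view cone and no
    farther than max_depth along the view ray."""
    half_fov = math.radians(fov_deg) / 2
    kept = []
    for center in bricks:
        v = tuple(c - p for c, p in zip(center, cam_pos))
        dist = math.sqrt(sum(x * x for x in v))
        if dist == 0 or dist > max_depth:
            continue  # at the camera, or beyond the far plane
        cos_angle = sum(a * b for a, b in zip(v, cam_dir)) / dist
        if cos_angle >= math.cos(half_fov):
            kept.append(center)
    return kept
```

A production version would test all eight brick corners (a point test over-culls bricks that straddle the frustum boundary) and would compare brick depth ranges against the depth image to also discard bricks hidden behind observed surfaces.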

Reducing head mounted display power consumption and heat generation through predictive rendering of content

Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
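The prediction-then-skip pipeline described above can be sketched in two steps: extrapolate the gaze point forward from recent eye-tracking samples, then render only elements within the predicted foveal region. Linear extrapolation and a fixed foveal radius are stand-ins chosen for this sketch; the patent does not commit to a specific predictor.

```python
def predict_gaze(g0, g1, dt_future):
    """Linearly extrapolate a 2D gaze point from two samples taken
    one time unit apart (a stand-in for a real saccade predictor)."""
    return (g1[0] + (g1[0] - g0[0]) * dt_future,
            g1[1] + (g1[1] - g0[1]) * dt_future)

def should_render(element_pos, predicted_gaze, foveal_radius):
    """Render only elements the user is predicted to be able to
    visually process: those within foveal_radius of the predicted
    gaze point at the future frame time."""
    dx = element_pos[0] - predicted_gaze[0]
    dy = element_pos[1] - predicted_gaze[1]
    return dx * dx + dy * dy <= foveal_radius ** 2
```

Skipping the render for elements that fail the test is where the power and heat savings come from: the GPU never touches content the user cannot visually process at that moment.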

Systems and methods for providing mixed-reality experiences under low light conditions

Systems and methods are provided for facilitating computer vision tasks (e.g., simultaneous localization and mapping) and pass-through imaging. The systems include a head-mounted display (HMD) that includes a first set of one or more cameras configured for performing computer vision tasks and a second set of one or more cameras configured for capturing image data of an environment for projection to a user of the HMD. The first set of one or more cameras is configured to detect at least a visible spectrum light and at least a particular band of wavelengths of infrared (IR) light. The second set of one or more cameras includes one or more detachable IR filters configured to attenuate IR light, including at least a portion of the particular band of wavelengths of IR light.

Method and device for carrying out eye gaze mapping

The invention relates to a device and a method for performing an eye gaze mapping (M), in which at least one point of vision (B) and/or a viewing direction of at least one person (10) in relation to at least one scene recording (S) of a scene (12) viewed by the at least one person (10) is mapped onto a reference (R). At least a part of an algorithm (A1, A2, A3) for performing the eye gaze mapping (M) is thereby selected from multiple predetermined algorithms (A1, A2, A3) as a function of at least one parameter (P), and the eye gaze mapping (M) is performed on the basis of the at least one part of the algorithm (A1, A2, A3).
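The core of this claim, choosing at least part of the gaze-mapping algorithm as a function of a parameter, is a strategy-selection pattern. The two strategies and the threshold below are hypothetical placeholders; the patent leaves the concrete algorithms (A1, A2, A3) and the parameter (P) unspecified.

```python
# Hypothetical gaze-mapping strategies standing in for A1..A3.
def map_by_homography(point):
    """Fast mapping: assume a planar reference and warp the point."""
    return ('homography', point)

def map_by_feature_matching(point):
    """Slower mapping: match local features around the gaze point."""
    return ('features', point)

def select_algorithm(parameter):
    """Choose part of the mapping algorithm as a function of a
    parameter P, e.g. available processing time per frame."""
    return map_by_homography if parameter < 0.5 else map_by_feature_matching

def eye_gaze_mapping(point, parameter):
    """Map a point of vision onto the reference using the strategy
    selected for the current parameter value."""
    algorithm = select_algorithm(parameter)
    return algorithm(point)
```

Because selection happens per call, the device can switch strategies frame by frame as the parameter (scene complexity, compute budget, confidence) changes.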

Head mounted display and setting method
11580939 · 2023-02-14

In a head mounted display 100, a memory 71 stores an application. An image pickup unit 74 takes an image of a site of a user 1, and a position specifying unit 73 specifies a position and a direction of the head mounted display 100. A detector 75 detects a position indicated by the user 1 on the basis of the image taken by the image pickup unit 74, and a setting unit 76 sets a home position on the basis of the detection result of the detector 75 and the position and direction specified by the position specifying unit 73.

Realistic virtual/augmented/mixed reality viewing and interactions

The present invention discloses systems and methods for both viewing and interacting with a virtual reality (VR), an augmented reality (AR) or a mixed reality (MR). More specifically, the systems and methods allow the user to interact with aspects of such realities, including virtual items presented in such realities or within such environments, by manipulating a control device that has an inside-out camera mounted on-board. The apparatus or system uses two distinct representations, including a reduced representation, in determining the pose of the control device, and uses these representations to compute an interactive pose portion of the control device to be used for interacting with the virtual item. The reduced representation is consonant with the constrained motion of the control device.

Waypoint creation in map detection

An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
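The guidance loop implied here, pick the next uncaptured waypoint, then tell the user which way to turn, can be sketched as below. The selection rule (nearest first) and the yaw-based turn hint are assumptions for illustration; the patent only says the device directs the user toward waypoints.

```python
import math

def next_waypoint(head_pos, waypoints, captured):
    """Return the nearest waypoint the user has not yet captured,
    or None once every waypoint has been imaged."""
    remaining = [w for w in waypoints if w not in captured]
    if not remaining:
        return None
    return min(remaining, key=lambda w: math.dist(head_pos, w))

def turn_hint(head_yaw_deg, head_pos, waypoint):
    """Signed yaw (degrees, in [-180, 180]) the user should turn to
    face the waypoint; the sign could drive a left/right visual,
    audible, or haptic cue."""
    dx = waypoint[0] - head_pos[0]
    dz = waypoint[2] - head_pos[2]
    target_yaw = math.degrees(math.atan2(dx, dz))
    return (target_yaw - head_yaw_deg + 180) % 360 - 180
```

As each waypoint's surroundings are imaged it moves into the captured set, and the loop naturally walks the user around the bounding box until the mesh map is complete.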