Patent classifications
G06T19/00
Depth estimation using biometric data
A method of generating a depth estimate based on biometric data starts with a server receiving positioning data from a first device associated with a first user. The first device generates the positioning data based on analysis of a data stream comprising images of a second user who is associated with a second device. The server then receives biometric data of the second user from the second device; the biometric data is based on output from a sensor or a camera included in the second device. The server then determines a distance of the second user from the first device using the positioning data and the biometric data of the second user. Other embodiments are described herein.
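A minimal sketch of the distance determination described above, under the assumption that the positioning data is the pixel height of the second user's bounding box in the first device's camera frame and the biometric data is the second user's body height. The function name, parameters, and the pinhole-camera formulation are illustrative assumptions, not taken from the patent.

```python
def estimate_distance_m(bbox_height_px: float,
                        focal_length_px: float,
                        user_height_m: float) -> float:
    """Estimate how far the second user is from the first device.

    bbox_height_px  -- height of the second user's bounding box, from the
                       first device's image analysis (positioning data)
    focal_length_px -- focal length of the first device's camera, in pixels
    user_height_m   -- the second user's height, as reported by the second
                       device's sensor (biometric data)
    """
    if bbox_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    # Pinhole-camera similar triangles:
    # real height / distance = pixel height / focal length
    return user_height_m * focal_length_px / bbox_height_px

# Example: a 1.75 m user spanning 350 px with a 1400 px focal length is ~7 m away.
print(estimate_distance_m(350.0, 1400.0, 1.75))  # 7.0
```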
Apparatus and method for displaying contents on an augmented reality device
A system for displaying contents on an augmented reality (AR) device comprises a capturing module configured to capture a field of view of a user, a recording module configured to record the captured field of view, a user input controller configured to track the user's vision toward one or more objects, and a server. The server comprises a determination module, an identifier, and an analyser. The determination module is configured to determine at least one object of interest. The identifier is configured to identify a frame in which the determined object of interest disappears. The analyser is configured to analyse the identified frame based on at least one disappearance of the object of interest and generate analysed data. A display module is configured to display content of the object of interest on the AR device.
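An illustrative sketch of the server-side roles (the determination module and the identifier), assuming each recorded frame carries a set of detected object labels plus the label the user's gaze is tracked toward. The data model and all names are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    visible_objects: set[str]    # object labels detected in this frame
    gazed_object: str | None     # label the user's vision is tracked toward

def determine_object_of_interest(frames: list[Frame]) -> str | None:
    """Pick the most-gazed-at object (the determination module's role)."""
    counts: dict[str, int] = {}
    for f in frames:
        if f.gazed_object is not None:
            counts[f.gazed_object] = counts.get(f.gazed_object, 0) + 1
    return max(counts, key=counts.get) if counts else None

def identify_disappearance_frame(frames: list[Frame], obj: str) -> Frame | None:
    """Find the first frame where the object is gone after having been
    visible (the identifier's role)."""
    seen = False
    for f in frames:
        if obj in f.visible_objects:
            seen = True
        elif seen:
            return f
    return None
```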
Augmented reality for vehicle operations
A method includes saving in-flight data from an aircraft during a simulated training exercise, wherein the in-flight data includes geospatial locations of the aircraft, positional attitudes of the aircraft, and head positions of a pilot operating the aircraft; saving simulation data relating to a simulated virtual object presented to the pilot as augmented reality content in-flight, wherein the virtual object was programmed to interact with the aircraft during the simulated training exercise; and representing the in-flight data from the aircraft and the simulation data relating to the simulated virtual object as a replay of the simulated training exercise.
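A minimal sketch of the record-and-replay idea: time-stamped aircraft state and virtual-object state are logged separately during the exercise, then merged into one time-ordered stream for playback. The field names and JSON-lines log format are assumptions, not the patent's.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AircraftSample:
    t: float                               # seconds since exercise start
    geo: tuple[float, float, float]        # lat, lon, altitude
    attitude: tuple[float, float, float]   # roll, pitch, yaw (degrees)
    head_pose: tuple[float, float, float]  # pilot head orientation

@dataclass
class SimSample:
    t: float
    object_id: str
    position: tuple[float, float, float]   # virtual object position

def save_log(path: str, samples) -> None:
    """Append each sample as one JSON line (in-flight or simulation data)."""
    with open(path, "w") as f:
        for s in samples:
            f.write(json.dumps(asdict(s)) + "\n")

def replay(flight_path: str, sim_path: str) -> list[dict]:
    """Merge both logs into a single time-ordered stream for the replay."""
    def load(path):
        with open(path) as f:
            return [json.loads(line) for line in f]
    return sorted(load(flight_path) + load(sim_path), key=lambda s: s["t"])
```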
Color-sensitive virtual markings of objects
Disclosed are systems, methods, and non-transitory computer readable media for making virtual colored markings on objects. Instructions may include receiving an indication of an object; receiving from an image sensor an image of a hand of an individual holding a physical marking implement; detecting in the image a color associated with the marking implement; receiving from the image sensor image data indicative of movement of a tip of the marking implement and locations of the tip; determining from the image data when the locations of the tip correspond to locations on the object; and generating, in the detected color, virtual markings on the object at the corresponding locations.
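An illustrative sketch of the marking steps, assuming the object is given as a 2D mask, the marker tip arrives as (x, y) image points, and the color is sampled from pixels around the detected tip. All structures and names are assumptions made for the example.

```python
import numpy as np

def detect_marker_color(image: np.ndarray, tip_xy: tuple[int, int],
                        radius: int = 3) -> tuple[int, int, int]:
    """Average pixels around the detected tip to estimate the implement's color."""
    x, y = tip_xy
    patch = image[max(0, y - radius):y + radius + 1,
                  max(0, x - radius):x + radius + 1]
    return tuple(int(c) for c in patch.reshape(-1, 3).mean(axis=0))

def draw_virtual_markings(object_mask: np.ndarray,
                          tip_positions: list[tuple[int, int]],
                          color: tuple[int, int, int],
                          canvas: np.ndarray) -> np.ndarray:
    """Paint the detected color at every tip location that lies on the object."""
    for x, y in tip_positions:
        in_bounds = 0 <= y < object_mask.shape[0] and 0 <= x < object_mask.shape[1]
        if in_bounds and object_mask[y, x]:
            canvas[y, x] = color
    return canvas
```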
Computer-implemented interfaces for identifying and revealing selected objects from video
A computer-implemented visual interface for identifying and revealing objects from video-based media provides visual cues to enable users to interact with video-based media. Objects in videos are inferred and identified based upon automatic interpretations of the video and/or audio that is associated with the video. The automatic interpretations may be performed by a computer-implemented neural network. The computer-implemented visual interface is integrated with the video to enable users to interact with the identified objects. User interactions with the visual interface may be through either touch or non-touch means. Information is delivered to users that is based upon the identified objects, including in augmented or virtual reality-based form, responsive to user interactions with the computer-implemented visual interface.
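A minimal sketch of the interaction path, assuming the neural network's per-frame detections are indexed by frame number and a user tap is resolved to the identified object under it. The names and bounding-box data model are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                       # object identified by the network
    box: tuple[int, int, int, int]   # x0, y0, x1, y1 in frame coordinates

def object_at(detections_by_frame: dict[int, list[Detection]],
              frame: int, x: int, y: int) -> str | None:
    """Resolve a touch/pointer event on a video frame to an identified object."""
    for d in detections_by_frame.get(frame, []):
        x0, y0, x1, y1 = d.box
        if x0 <= x <= x1 and y0 <= y <= y1:
            return d.label
    return None

# Example: a tap at (120, 80) on frame 42 returns whichever identified
# object's box contains that point, or None if the tap hits background.
```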
Viewpoint dependent brick selection for fast volumetric reconstruction
A method for culling parts of a 3D reconstruction volume is provided. The method makes available to a wide variety of mobile XR applications fresh, accurate, and comprehensive 3D reconstruction data with low usage of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from a field of view of an image sensor, from which image data to create the 3D reconstruction is obtained.
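An illustrative sketch of the frustum-culling embodiment, assuming the volume is partitioned into axis-aligned cubic bricks tested against inward-facing frustum planes. This conservative corner test is one common formulation, not necessarily the patent's.

```python
import numpy as np

def brick_corners(origin: np.ndarray, size: float) -> np.ndarray:
    """Return the 8 corners of an axis-aligned cubic brick."""
    offsets = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    return origin + size * offsets

def cull_bricks(bricks: list[np.ndarray], size: float,
                planes: list[tuple[np.ndarray, float]]) -> list[np.ndarray]:
    """Keep only bricks that may intersect the view frustum.

    Each plane is (inward normal n, offset d); a brick is culled when all
    8 corners lie outside any single plane (n . p + d < 0), the standard
    conservative test.
    """
    kept = []
    for origin in bricks:
        corners = brick_corners(origin, size)
        outside = any(np.all(corners @ n + d < 0) for n, d in planes)
        if not outside:
            kept.append(origin)
    return kept
```

Only the surviving bricks then need to be updated or fetched from storage, which is where the savings in computation and storage access come from.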
Range of motion control in XR applications on information handling systems
More realistic experiences can be provided to a user through the use of a wearable suit. The xR wearable suit may include materials with adjustable characteristics, such as friction, and electronics for controlling the materials to provide feedback to the user wearing the xR suit. In an xR game, the materials may be used to translate virtual damage into physical constraints on the user. For example, when an avatar gets shot in the leg and is debilitated, the user's leg motion can be constricted so that the user feels that shortcoming and stays in sync with the avatar. Examples of such feedback materials include inflating ribs, sheet jamming, and mechanical devices.
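A minimal sketch of mapping virtual damage to suit constraints, as in the leg-shot example above. The event names, body zones, and suit-control interface are all hypothetical.

```python
class SuitController:
    """Stub for the xR suit's control electronics (hypothetical interface)."""
    def set_stiffness(self, zone: str, level: float) -> None:
        print(f"setting {zone} stiffness to {level:.1f}")

# (body zone, damage kind) -> constraint level, 0.0 = free, 1.0 = locked
DAMAGE_TO_CONSTRAINT = {
    ("left_leg", "gunshot"): 0.8,
    ("right_leg", "gunshot"): 0.8,
    ("torso", "blunt"): 0.4,
}

def on_avatar_damage(zone: str, kind: str, suit: SuitController) -> None:
    """Constrict the matching suit segment so the user stays in sync with the avatar."""
    level = DAMAGE_TO_CONSTRAINT.get((zone, kind), 0.0)
    suit.set_stiffness(zone, level)

suit = SuitController()
on_avatar_damage("left_leg", "gunshot", suit)  # setting left_leg stiffness to 0.8
```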
Virtual 3D communications with actual to virtual cameras optical axes compensation
A method for conducting a three-dimensional (3D) video conference between multiple participants may include determining, for each participant, updated 3D participant representation information within a virtual 3D video conference environment that represents the participant, wherein the determining comprises compensating for a difference between an actual optical axis of a camera that acquires images of the participant and a desired optical axis of a virtual camera; and generating, for at least one participant, an updated representation of the virtual 3D video conference environment, the updated representation representing the updated 3D participant representation information for at least some of the multiple participants.
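An illustrative sketch of the compensation step, assuming both optical axes are given as 3D unit vectors and compensation is the rotation mapping the actual axis onto the desired virtual one (Rodrigues' formula). This is one plausible reading of the abstract, not the patent's exact method.

```python
import numpy as np

def compensation_rotation(actual_axis: np.ndarray,
                          desired_axis: np.ndarray) -> np.ndarray:
    """Return the 3x3 rotation mapping actual_axis onto desired_axis.

    Applying this rotation to the participant's 3D representation
    compensates for the real camera's off-axis placement.
    """
    a = actual_axis / np.linalg.norm(actual_axis)
    b = desired_axis / np.linalg.norm(desired_axis)
    v = np.cross(a, b)
    s, c = np.linalg.norm(v), float(np.dot(a, b))
    if s < 1e-9:
        # axes already aligned; the antiparallel case is omitted in this sketch
        return np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    # Rodrigues: R = I + K + K^2 * (1 - c) / s^2
    return np.eye(3) + K + K @ K * ((1.0 - c) / (s * s))
```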