Patent classifications
G06T19/00
Reducing head mounted display power consumption and heat generation through predictive rendering of content
Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
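The selective-rendering idea above can be sketched in a few lines: if eye tracking shows the gaze is mid-saccade, vision is suppressed, so rendering at the predicted future time can be skipped to save power. This is a minimal illustration, not the patented method; the velocity threshold, the 1-D gaze model, and all names here are assumptions.

```python
from dataclasses import dataclass

SACCADE_VELOCITY_THRESHOLD = 300.0  # deg/s; illustrative suppression cutoff


@dataclass
class EyeSample:
    t: float      # timestamp in seconds
    angle: float  # gaze angle in degrees (1-D for simplicity)


def predict_can_process(samples: list[EyeSample]) -> bool:
    """Predict whether the user can visually process content in the near future.

    Uses the most recent gaze velocity: during a fast saccade vision is
    suppressed, so the AR element need not be rendered at the future time.
    """
    if len(samples) < 2:
        return True  # not enough data: render conservatively
    a, b = samples[-2], samples[-1]
    velocity = abs(b.angle - a.angle) / (b.t - a.t)
    # Assume the current eye motion persists over the short prediction horizon.
    return velocity < SACCADE_VELOCITY_THRESHOLD


def should_render(samples: list[EyeSample]) -> bool:
    """Render the AR element only if the user is predicted to perceive it."""
    return predict_can_process(samples)
```

A real system would extrapolate the gaze trajectory to a specific future frame; here the last-sample velocity stands in for that prediction.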
Systems and methods for providing mixed-reality experiences under low light conditions
Systems and methods for facilitating computer vision tasks (e.g., simultaneous localization and mapping) and pass-through imaging include a head-mounted display (HMD) that includes a first set of one or more cameras configured for performing computer vision tasks and a second set of one or more cameras configured for capturing image data of an environment for projection to a user of the HMD. The first set of one or more cameras is configured to detect at least visible spectrum light and at least a particular band of wavelengths of infrared (IR) light. The second set of one or more cameras includes one or more detachable IR filters configured to attenuate IR light, including at least a portion of the particular band of wavelengths of IR light.
Artificial reality collaborative working environments
- Michael James LeBeau
- Manuel Ricardo Freire Santos
- Aleksejs Anpilogovs
- Alexander Sorkine Hornung
- Bjorn Wanbo
- Connor Treacy
- Fangwei Lee
- Federico Ruiz
- Jonathan Mallinson
- Jonathan Richard Mayoh
- Marcus Tanner
- Panya Inversin
- Sarthak Ray
- Sheng Shen
- William Arthur Hugh Steptoe
- Alessia Marra
- Gioacchino Noris
- Derrick Readinger
- Jeffrey Wai-King Lock
- Jeffrey Witthuhn
- Jennifer Lynn Spurlock
- Larissa Heike Laich
- Javier Alejandro Sierra Santos
Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links between multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto population of users and content items into the artificial reality working environment.
Interactive virtual reality system
Provided herein are methods, apparatus, and computer program products for generating first and second three dimensional interactive environments. The first three dimensional interactive environment may contain one or more engageable virtual interfaces that correspond to one or more items. Upon engagement with a virtual interface, the second three dimensional interactive environment is produced to provide a virtual simulation related to the one or more items.
Pixel intensity modulation using modifying gain values
A visual perception device has a look-up table stored in a laser driver chip. The look-up table includes relational gain data to compensate for brighter areas of a laser pattern, where pixels are located more closely together than in other areas, and to compensate for differences in the intensity of individual pixels when pixel intensities are altered due to design characteristics of an eyepiece.
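The gain-table mechanism can be sketched as a table lookup followed by a multiply: brighter input intensities and denser pixel regions both receive gain below 1.0 so the displayed pattern appears uniform. The table contents, the density term, and the 8-bit pipeline here are illustrative assumptions, not values from the patent.

```python
# Gain look-up table keyed by 8-bit input intensity; brighter inputs get
# lower gain (illustrative linear roll-off, not the patented relational data).
GAIN_LUT = {intensity: 1.0 - 0.3 * (intensity / 255.0) for intensity in range(256)}


def apply_gain(raw_intensity: int, local_pixel_density: float) -> int:
    """Modulate one pixel's drive intensity using the gain LUT.

    Regions where pixels are packed more closely (density > 1.0) are
    attenuated further, compensating for their greater apparent brightness.
    """
    gain = GAIN_LUT[raw_intensity] / max(local_pixel_density, 1.0)
    return min(255, round(raw_intensity * gain))
```

In hardware this lookup would live in the laser driver chip and run per pixel at scan-out time.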
Augmented-reality-based video record and pause zone creation
According to one embodiment, a method, computer system, and computer program product for operating a camera to perform video capture of a subject is provided. The present invention may include pausing or recording video capture of the subject based on the camera's location within one or more designated recording zones, wherein the recording zones are two-dimensional or three-dimensional regions comprising or within view of the subject, and wherein the recording zones are designated within an augmented reality environment.
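The zone logic above reduces to a containment test: record while the camera's position lies inside any designated zone, pause otherwise. The sketch below assumes axis-aligned 3-D boxes for the zones; the patent allows arbitrary 2-D or 3-D regions, and all names here are illustrative.

```python
from dataclasses import dataclass


@dataclass
class RecordingZone:
    """Axis-aligned 3-D box designated within the AR environment (illustrative)."""
    min_corner: tuple[float, float, float]
    max_corner: tuple[float, float, float]


def camera_in_zone(pos: tuple[float, float, float], zone: RecordingZone) -> bool:
    """True if the camera position lies inside the zone's bounding box."""
    return all(lo <= p <= hi
               for p, lo, hi in zip(pos, zone.min_corner, zone.max_corner))


def capture_state(pos: tuple[float, float, float],
                  zones: list[RecordingZone]) -> str:
    """Record while the camera is inside any designated zone; pause otherwise."""
    return "recording" if any(camera_in_zone(pos, z) for z in zones) else "paused"
```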
Systems and methods for numerically evaluating vasculature
Systems and methods are disclosed for providing a cardiovascular score for a patient. A method includes receiving, using at least one computer system, patient-specific data regarding a geometry of multiple coronary arteries of the patient; and creating, using at least one computer system, a three-dimensional model representing at least portions of the multiple coronary arteries based on the patient-specific data. The method also includes evaluating, using at least one computer system, multiple characteristics of at least some of the coronary arteries represented by the model; and generating, using at least one computer system, the cardiovascular score based on the evaluation of the multiple characteristics. Another method includes generating the cardiovascular score based on evaluated multiple characteristics for portions of the coronary arteries having fractional flow reserve values of at least a predetermined threshold value.
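The second method described above, filtering artery segments by a fractional flow reserve (FFR) threshold before aggregating their characteristics into a score, can be sketched as below. The threshold value, the characteristics chosen, and the weighting formula are all illustrative assumptions; the patent does not specify them here.

```python
from dataclasses import dataclass

FFR_THRESHOLD = 0.80  # illustrative FFR cutoff, not a clinical recommendation


@dataclass
class ArterySegment:
    ffr: float            # fractional flow reserve for this segment
    stenosis: float       # fractional lumen narrowing, 0.0-1.0
    plaque_burden: float  # fractional plaque burden, 0.0-1.0


def cardiovascular_score(segments: list[ArterySegment]) -> float:
    """Score only segments whose FFR meets the threshold, then average
    a weighted combination of their characteristics (weights illustrative)."""
    eligible = [s for s in segments if s.ffr >= FFR_THRESHOLD]
    if not eligible:
        return 0.0
    return sum(0.6 * s.stenosis + 0.4 * s.plaque_burden
               for s in eligible) / len(eligible)
```

In the disclosed pipeline these segments would come from the patient-specific three-dimensional coronary model rather than being supplied directly.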
Adjustable waveguide assembly and augmented reality eyewear with adjustable waveguide assembly
An adjustable frame assembly for augmented reality eyewear. The frame assembly includes a face portion for supporting at least one waveguide that creates an eye box, a support rest for supporting the face portion on a user, and a coupling for adjusting the position of the face portion relative to the support rest. This enables movement of the waveguide eye box relative to the support rest to position the eye box in front of the wearer's eyes.
Systems and methods for controlling virtual scene perspective via physical touch input
Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed. In one embodiment, a non-transitory computer readable medium contains instructions to cause a processor to perform the steps of: outputting for presentation via a wearable extended reality appliance (WER-appliance), first display signals reflective of a first perspective of a scene; receiving first input signals caused by a first multi-finger interaction with the touch sensor; in response, outputting for presentation via the WER-appliance second display signals to modify the first perspective of the scene, causing a second perspective of the scene to be presented via the WER-appliance; receiving second input signals caused by a second multi-finger interaction with the touch sensor; and in response, outputting for presentation via the WER-appliance third display signals to modify the second perspective of the scene, causing a third perspective of the scene to be presented via the WER-appliance.
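The perspective pipeline above, where each multi-finger interaction transforms the current perspective into the next one, can be sketched as a pure state transition. The gesture names, the zoom/yaw fields, and the scaling factors are assumptions for illustration; the claim does not specify particular gestures or perspective parameters.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Perspective:
    zoom: float  # scene magnification factor
    yaw: float   # camera orbit angle in degrees


def apply_gesture(p: Perspective, gesture: str, magnitude: float) -> Perspective:
    """Derive the next perspective from the current one and a touch gesture."""
    if gesture == "pinch":
        # Two-finger pinch scales zoom, clamped to a sensible minimum.
        return replace(p, zoom=max(0.1, p.zoom * (1.0 + magnitude)))
    if gesture == "two_finger_drag":
        # Two-finger drag orbits the camera around the scene.
        return replace(p, yaw=(p.yaw + 90.0 * magnitude) % 360.0)
    return p  # unrecognized gestures leave the perspective unchanged


# Successive interactions yield the first, second, and third perspectives:
p1 = Perspective(zoom=1.0, yaw=0.0)
p2 = apply_gesture(p1, "pinch", 0.5)
p3 = apply_gesture(p2, "two_finger_drag", 1.0)
```

Keeping the perspective immutable and deriving each state from the last mirrors the claim's structure, in which each set of input signals produces new display signals from the current perspective.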