Patent classifications
G06T19/003
CONSTRUCTION OF ENVIRONMENT VIEWS FROM SELECTIVELY DETERMINED ENVIRONMENT IMAGES
A computing system may include a client device and a server. The client device may be configured to access a stream of image frames that depict an environment, determine, from the stream of image frames, environment images that satisfy selection criteria, and transmit the environment images to the server. The server may be configured to receive the environment images from the client device, construct a spatial view of the environment based on position data included with the environment images, and navigate the spatial view, including by receiving a movement direction and progressing from the environment image currently depicted in the spatial view to a next environment image based on the movement direction.
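The two device roles reduce to a selection step and a navigation step. Below is a minimal Python sketch, assuming sharpness and minimum spacing between capture positions as the selection criteria and a cosine match against neighbor displacement for the movement step; those specifics are illustrative assumptions, not taken from the abstract.

```python
# Minimal sketch: client-side frame selection, server-side navigation.
from dataclasses import dataclass

@dataclass
class EnvironmentImage:
    frame_id: int
    position: tuple      # (x, y) where the frame was captured
    sharpness: float     # hypothetical quality score in [0, 1]

def select_environment_images(frames, min_sharpness=0.5, min_spacing=1.0):
    """Client side: keep frames that are sharp enough and sufficiently far
    from the previously selected frame (assumed selection criteria)."""
    selected = []
    for frame in frames:
        if frame.sharpness < min_sharpness:
            continue
        if selected:
            lx, ly = selected[-1].position
            fx, fy = frame.position
            if ((fx - lx) ** 2 + (fy - ly) ** 2) ** 0.5 < min_spacing:
                continue
        selected.append(frame)
    return selected

def next_image(current, images, direction):
    """Server side: advance to the image whose displacement from the current
    one best matches the requested movement direction."""
    best, best_score = current, 0.0
    for img in images:
        if img is current:
            continue
        dx = img.position[0] - current.position[0]
        dy = img.position[1] - current.position[1]
        norm = (dx * dx + dy * dy) ** 0.5 or 1.0
        score = (dx * direction[0] + dy * direction[1]) / norm  # cosine match
        if score > best_score:
            best, best_score = img, score
    return best
```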
UNMANNED AERIAL VEHICLE (UAV) AND METHOD FOR OPERATING THE UAV
An improved UAV system and methods for operating the UAV in an inventory management system. The methods include generating a three-dimensional (3D) map and estimating a position and orientation of the UAV based on this map; autonomously navigating the UAV in the environment using the generated 3D map in conjunction with the estimated position and orientation of the UAV; performing static and dynamic obstacle avoidance in the environment using collision-avoidance techniques; and finding the optimal path from a source node to a destination node within the environment.
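The abstract does not name a path-finding algorithm, so the sketch below assumes Dijkstra's algorithm over a node graph of waypoints; the graph layout in the example is illustrative.

```python
# Minimal sketch of finding an optimal path from a source node to a
# destination node, assuming Dijkstra's algorithm over a weighted node graph.
import heapq

def shortest_path(graph, source, destination):
    """graph: dict mapping node -> list of (neighbor, edge_cost) pairs."""
    dist = {source: 0.0}
    prev = {}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == destination:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    if destination not in dist:
        return None  # unreachable
    path, node = [destination], destination
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical warehouse waypoint graph.
graph = {"A": [("B", 1.0)], "B": [("A", 1.0), ("C", 2.0)], "C": []}
print(shortest_path(graph, "A", "C"))  # ['A', 'B', 'C']
```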
AUGMENTED REALITY GUIDANCE OVERLAP
Embodiments of the present invention provide computer-implemented methods, computer program products, and computer systems. Embodiments of the present invention can, in response to receiving a request, identify a core component from source material based on topic analysis. Embodiments of the present invention can then generate three-dimensional representations of physical core components associated with the request. Finally, embodiments of the present invention render the generated three-dimensional representations of the physical core components over the physical core components.
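A minimal sketch of the final rendering step, assuming the detected pose of the physical component arrives as a 4x4 homogeneous matrix in world coordinates; the pose format and the optional model-alignment offset are illustrative assumptions, since the abstract leaves the pipeline unspecified.

```python
import numpy as np

def overlay_transform(component_pose_world, model_offset=None):
    """Return the world transform at which to render the generated 3D
    representation so that it draws directly over the physical component."""
    if model_offset is None:
        model_offset = np.eye(4)  # model origin coincides with the component
    return component_pose_world @ model_offset

# Hypothetical detection: component 2 m in front of the world origin.
pose = np.eye(4)
pose[:3, 3] = [0.0, 0.0, -2.0]
print(overlay_transform(pose))
```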
Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas
An exemplary method includes maintaining a receiver-side mesh-vertices list; receiving duplicative-vertex information from a sender and responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information; and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject, at least in part by weighting video pixel colors from different video-camera vantage points of video cameras that capture video streams of the subject, the weighting being performed according to a respective geometric relationship of each video-camera vantage point to a user-selected viewpoint.
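The weighting step can be sketched as follows, assuming a cosine falloff between each camera's vantage direction and the user-selected viewpoint direction; the abstract only requires weights derived from the geometric relationship, so the cosine form is an assumption.

```python
import numpy as np

def blend_colors(vantage_dirs, colors, view_dir):
    """vantage_dirs: (N, 3) unit vectors from the subject to each camera.
    colors: (N, 3) RGB samples of one surface point, one row per camera.
    view_dir: (3,) unit vector from the subject to the selected viewpoint."""
    vantage_dirs = np.asarray(vantage_dirs, dtype=float)
    colors = np.asarray(colors, dtype=float)
    # Cosine of the angle between each vantage point and the viewpoint,
    # clamped so cameras facing away contribute nothing.
    weights = np.clip(vantage_dirs @ np.asarray(view_dir, dtype=float), 0.0, None)
    if weights.sum() == 0.0:
        weights = np.ones(len(colors))  # viewpoint behind all cameras
    return (weights / weights.sum()) @ colors  # weighted average RGB

# The camera aligned with the viewpoint dominates the blend.
print(blend_colors([[0, 0, 1], [1, 0, 0]],
                   [[255, 0, 0], [0, 0, 255]],
                   [0, 0, 1]))  # -> [255. 0. 0.]
```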
Artificial reality collaborative working environments
- Michael James LeBeau
- Manuel Ricardo Freire Santos
- Aleksejs Anpilogovs
- Alexander Sorkine Hornung
- Bjorn Wanbo
- Connor Treacy
- Fangwei Lee
- Federico Ruiz
- Jonathan Mallinson
- Jonathan Richard Mayoh
- Marcus Tanner
- Panya Inversin
- Sarthak Ray
- Sheng Shen
- William Arthur Hugh Steptoe
- Alessia Marra
- Gioacchino Noris
- Derrick Readinger
- Jeffrey Wai-King Lock
- Jeffrey Witthuhn
- Jennifer Lynn Spurlock
- Larissa Heike Laich
- Javier Alejandro Sierra Santos
Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links from multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto-population of users and content items into the artificial reality working environment.
Systems and methods for controlling virtual scene perspective via physical touch input
Systems, methods, and non-transitory computer readable media for controlling perspective in an extended reality environment are disclosed. In one embodiment, a non-transitory computer readable medium contains instructions to cause a processor to perform the steps of: outputting, for presentation via a wearable extended reality appliance (WER-appliance), first display signals reflective of a first perspective of a scene; receiving first input signals caused by a first multi-finger interaction with a touch sensor; in response, outputting for presentation via the WER-appliance second display signals to modify the first perspective of the scene, causing a second perspective of the scene to be presented via the WER-appliance; receiving second input signals caused by a second multi-finger interaction with the touch sensor; and in response, outputting for presentation via the WER-appliance third display signals to modify the second perspective of the scene, causing a third perspective of the scene to be presented via the WER-appliance.
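A minimal sketch of the gesture handling, assuming a two-finger drag orbits the camera and a pinch zooms it; the claims only require that each multi-finger interaction produces a new perspective, so the specific gesture-to-camera mapping and the gain constants are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Perspective:
    yaw: float        # degrees around the vertical axis
    pitch: float      # degrees up/down
    distance: float   # metres from the scene pivot

def apply_two_finger_drag(p: Perspective, dx: float, dy: float) -> Perspective:
    """Orbit the camera: horizontal drag changes yaw, vertical drag pitch."""
    return Perspective(p.yaw + 0.2 * dx,
                       max(-89.0, min(89.0, p.pitch + 0.2 * dy)),
                       p.distance)

def apply_pinch(p: Perspective, scale: float) -> Perspective:
    """Zoom: pinch scale > 1 moves the camera closer, < 1 farther away."""
    return Perspective(p.yaw, p.pitch, max(0.1, p.distance / scale))

first = Perspective(yaw=0.0, pitch=0.0, distance=3.0)
second = apply_two_finger_drag(first, dx=50.0, dy=0.0)  # second perspective
third = apply_pinch(second, scale=1.5)                  # third perspective
print(second, third, sep="\n")
```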
INFORMATION PROCESSING DEVICE THAT DISPLAYS A VIRTUAL OBJECT RELATIVE TO REAL SPACE
An information processing device including a display unit, a detector, and a first control unit, and a method of using the same. The display unit may be a head-mounted display and is capable of providing the user with a field of view of a real space and a virtual object. The detector detects an azimuth of the display unit around at least one axis, and display of the virtual object is controlled based on the detected azimuth.
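The azimuth-based control can be sketched as follows, assuming the virtual object is pinned to a real-world compass direction and a simple linear projection across a fixed horizontal field of view; the 90-degree field of view and screen width are illustrative values.

```python
def screen_x(object_azimuth_deg: float, display_azimuth_deg: float,
             fov_deg: float = 90.0, screen_width_px: int = 1920):
    """Return the horizontal pixel position of the virtual object, or None
    when the object lies outside the current field of view."""
    # Signed angular offset of the object relative to where the display faces.
    offset = (object_azimuth_deg - display_azimuth_deg + 180.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2.0:
        return None  # object not in view; do not draw it
    return int((offset / fov_deg + 0.5) * screen_width_px)

print(screen_x(30.0, 30.0))   # 960: centred when the user faces the object
print(screen_x(30.0, 0.0))    # shifted right of centre
print(screen_x(30.0, 180.0))  # None: behind the user
```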
METHOD AND APPARATUS FOR INTERACTION PROCESSING OF VIRTUAL ITEM, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
This application provides a method and an apparatus for interaction processing of a virtual item, an electronic device, and a computer-readable storage medium. The method includes displaying at least one idle virtual item in a virtual scene; moving a first virtual object in the virtual scene in response to a movement operation on the first virtual object; displaying a prompt that the idle virtual item is pickable when there is no obstacle between the idle virtual item and the first virtual object; and controlling the first virtual object to pick up the idle virtual item in response to a pick-up operation.
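The "no obstacle between" test can be sketched as a 2-D line-of-sight check; representing obstacles as circles is an assumption, since the abstract does not say how obstacles are modeled.

```python
import math

def segment_hits_circle(p, q, center, radius):
    """True if the segment p->q passes within `radius` of `center`."""
    px, py = p; qx, qy = q; cx, cy = center
    dx, dy = qx - px, qy - py
    length_sq = dx * dx + dy * dy or 1e-12
    # Parameter of the closest point on the segment to the circle center.
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / length_sq))
    closest = (px + t * dx, py + t * dy)
    return math.dist(closest, center) < radius

def item_is_pickable(object_pos, item_pos, obstacles):
    """Show the prompt only when no obstacle blocks the line of sight."""
    return not any(segment_hits_circle(object_pos, item_pos, c, r)
                   for c, r in obstacles)

obstacles = [((5.0, 0.0), 1.0)]  # one circular obstacle at x = 5
print(item_is_pickable((0.0, 0.0), (10.0, 0.0), obstacles))  # False: blocked
print(item_is_pickable((0.0, 5.0), (10.0, 5.0), obstacles))  # True: clear
```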
Technique for transferring a registration of image data of a surgical object from one surgical navigation system to another surgical navigation system
A method, a controller, and a surgical hybrid navigation system for transferring a registration of three-dimensional image data of a surgical object from a first to a second surgical navigation system are described. A first tracker that is detectable by a first detector of the first surgical navigation system is arranged in a fixed spatial relationship with the surgical object, and a second tracker that is detectable by a second detector of the second surgical navigation system is likewise arranged in a fixed spatial relationship with the surgical object. The method includes registering the three-dimensional image data of the surgical object in a first coordinate system of the first surgical navigation system, determining a first position and orientation of the first tracker in the first coordinate system, and determining a second position and orientation of the second tracker in a second coordinate system of the second surgical navigation system.
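Once both tracker poses are known, the transfer reduces to composing rigid transforms. The sketch below assumes 4x4 homogeneous matrices and a calibrated, fixed transform between the two trackers (both are rigidly attached to the same object); the notation T_a_b, mapping frame b into frame a, and the source of that inter-tracker calibration are assumptions.

```python
import numpy as np

def transfer_registration(T_cs1_img, T_cs1_t1, T_t2_t1, T_cs2_t2):
    """Return T_cs2_img, the image-data registration expressed in the second
    navigation system's coordinate system (cs2).
    T_cs1_img: registration of the image data in the first system (cs1)
    T_cs1_t1:  pose of the first tracker measured by the first detector
    T_t2_t1:   fixed transform from tracker 1 to tracker 2
    T_cs2_t2:  pose of the second tracker measured by the second detector"""
    T_t1_img = np.linalg.inv(T_cs1_t1) @ T_cs1_img  # image data rigid to tracker 1
    return T_cs2_t2 @ T_t2_t1 @ T_t1_img

# Sanity check with identity poses: the registration transfers unchanged.
I = np.eye(4)
assert np.allclose(transfer_registration(I, I, I, I), I)
```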
Methods for manipulating objects in an environment
In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on the viewpoint of a user in the three-dimensional environment. In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on the viewpoints of a plurality of users in the three-dimensional environment. In some embodiments, an electronic device modifies the appearance of a real object that is between a virtual object and the viewpoint of a user in a three-dimensional environment. In some embodiments, an electronic device automatically selects a location for a user in a three-dimensional environment that includes one or more virtual objects and/or other users.
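The orientation update can be sketched as a yaw toward the viewpoint, with the centroid of several viewpoints standing in for the multi-user case; using the centroid is an assumption, as the abstract does not specify how multiple viewpoints are combined.

```python
import math

def facing_yaw(object_pos, viewpoint):
    """Yaw (radians, about the vertical axis) turning the object's front
    toward one viewpoint. Positions are (x, z) ground-plane coordinates."""
    return math.atan2(viewpoint[0] - object_pos[0],
                      viewpoint[1] - object_pos[1])

def facing_yaw_multi(object_pos, viewpoints):
    """Multi-user case: face the centroid of all user viewpoints."""
    cx = sum(v[0] for v in viewpoints) / len(viewpoints)
    cz = sum(v[1] for v in viewpoints) / len(viewpoints)
    return facing_yaw(object_pos, (cx, cz))

print(math.degrees(facing_yaw((0.0, 0.0), (0.0, 1.0))))  # 0.0: straight ahead
print(math.degrees(facing_yaw((0.0, 0.0), (1.0, 0.0))))  # 90.0: to the right
```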