G06T19/003

Occlusion of virtual objects in augmented reality by physical objects

In one embodiment, one or more computing devices access an image comprising at least a portion of a hand of a user of a head-mounted display and generate a planar representation of the hand and a height map associated with the planar representation. A first portion of the planar representation that is closer than a first portion of a virtual object to a viewpoint and a second portion of the planar representation that is farther than a second portion of the virtual object from the viewpoint are determined based on the height map and the virtual object. A display image is rendered from the viewpoint for display, the display image comprising a first set of pixels corresponding to the first portion of the planar representation and a second set of pixels corresponding to the second portion of the virtual object.
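The per-pixel occlusion decision described above can be sketched as a depth comparison between the hand's height map and the virtual object's depth buffer. Everything below (the function names, the NumPy representation, the simple compositing rule) is an illustrative assumption, not the patent's actual implementation.

```python
import numpy as np

def composite(hand_depth, hand_rgb, obj_depth, obj_rgb):
    """Per pixel: show the hand where it is closer to the viewpoint
    than the virtual object, otherwise show the virtual object."""
    hand_closer = hand_depth < obj_depth                 # boolean mask, one entry per pixel
    # broadcast the mask over the RGB channels and select per pixel
    return np.where(hand_closer[..., None], hand_rgb, obj_rgb)

# Toy 1x2 image: hand in front on the left pixel, object in front on the right.
hand_d = np.array([[0.5, 2.0]])      # hand depth per pixel
obj_d  = np.array([[1.0, 1.0]])      # virtual-object depth per pixel
hand_c = np.zeros((1, 2, 3))         # hand rendered as black
obj_c  = np.ones((1, 2, 3))          # object rendered as white
out = composite(hand_d, hand_c, obj_d, obj_c)
```

In this toy case the left output pixel comes from the hand and the right one from the virtual object, which is the mixed first-set/second-set pixel result the abstract describes.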

Virtual reality for situational handling
11556937 · 2023-01-17

Systems, methods, and computer program products are described herein for situational handling using a virtual reality application. A procurement system receives an order including one or more goods and a situation. A cloud platform receives sensor data of a package containing the one or more goods. A scanner scans the package and a storage location of the package. The procurement system provides the storage location to a virtual reality (VR) application for display, along with a notification of the situation once it occurs.

Efficient capture and delivery of walkable and interactive virtual reality or 360 degree video

Disclosed are systems and methods for generating a walkable 360-degree video or virtual reality (VR) environment. 360-degree video data is obtained for a real-world environment and comprises a plurality of chronologically ordered frames captured by traversing a first path through the real-world environment. One or more processing operations are applied to generate a processed 360-degree video, which can be displayed to a user of an omnidirectional treadmill. Locomotion information is received from one or more sensors of the omnidirectional treadmill, wherein the locomotion information is generated based on a physical movement on or within the omnidirectional treadmill. Using the received locomotion information, one or more playback commands for controlling playback of the processed 360-degree video are generated. One or more selected frames of the processed 360-degree video are rendered for presentation and display to the user, based on the one or more playback commands.
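The locomotion-to-playback mapping described above can be sketched as a small translation layer: treadmill speed drives the playback rate of the chronologically ordered frames. The walking-speed baseline, function name, and command dictionary are hypothetical, not taken from the disclosure.

```python
def playback_command(speed_mps, walking_speed_mps=1.4):
    """Map omnidirectional-treadmill speed (m/s) to a playback command.
    Standing still pauses the 360-degree video; walking plays it back
    at a rate proportional to speed (1.4 m/s assumed as normal pace)."""
    if speed_mps <= 0.0:
        return {"action": "pause"}
    return {"action": "play", "rate": speed_mps / walking_speed_mps}

# Standing still pauses; walking at the baseline pace plays at 1x.
cmd_still = playback_command(0.0)
cmd_walk = playback_command(1.4)
```

A renderer would then advance through the processed frames at `rate` times real time, which matches the abstract's "selected frames ... based on the one or more playback commands".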

Playback of a stored networked remote collaboration session

Various implementations of the present application set forth a method comprising generating three-dimensional data and two-dimensional data representing a physical space that includes a real-world asset, generating an extended-reality (XR) stream representing a remote collaboration session between a host device and a set of remote devices, where the XR stream includes a combination of the three-dimensional data and the two-dimensional data, a set of augmented-reality (AR) elements associated with the real-world asset, and a set of performed actions associated with a portion of the digital representation or at least one AR element, serializing the XR stream into a set of serialized chunks, transmitting the serialized chunks to the remote devices, where the remote devices recreate the XR stream in a set of remote XR environments, and transmitting the serialized chunks to a remote storage device, where a device subsequently retrieves the serialized chunks to replay the remote collaboration session.
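The serialize-then-replay flow above can be sketched as chunking a serialized XR stream and reassembling it on a remote or replay device. The JSON encoding, chunk size, and function names are illustrative assumptions; the patent does not specify a wire format.

```python
import json

def serialize_chunks(xr_stream, chunk_size=64):
    """Serialize an XR stream (3D/2D data, AR elements, performed
    actions) into fixed-size byte chunks for transmission or storage."""
    payload = json.dumps(xr_stream).encode("utf-8")
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

def recreate(chunks):
    """Reassemble the chunks back into the XR stream, as a remote
    device or a replay client would."""
    return json.loads(b"".join(chunks).decode("utf-8"))

stream = {"frames_3d": [1, 2, 3], "ar_elements": ["annotation"], "actions": ["highlight"]}
chunks = serialize_chunks(stream, chunk_size=16)
replayed = recreate(chunks)
```

The same chunk list can be sent to live remote devices and to a storage device, so later replay is just `recreate` over the stored chunks.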

Virtual reality simulator and virtual reality simulation program
20230008807 · 2023-01-12

A VR (Virtual Reality) simulator projects or displays a virtual space image on a screen installed at a position distant from a user in a real space and not moving integrally with the user. More specifically, the VR simulator acquires a real user position being a position of the user's head in the real space. The VR simulator acquires a virtual user position being a position in a virtual space corresponding to the real user position. Then, the VR simulator acquires the virtual space image by imaging the virtual space by using a camera placed at the virtual user position in the virtual space, based on virtual space configuration information indicating a configuration of the virtual space. Here, the VR simulator adjusts a focal length of the camera such that perspective corresponding to the distance between the real user position and the screen is cancelled.
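The focal-length adjustment can be illustrated with a pinhole-camera model: projected size scales as focal length divided by distance, so scaling the focal length linearly with the user-to-screen distance keeps the projected size constant and cancels perspective shrinkage. The function names and base values below are hypothetical, chosen only to show the proportionality.

```python
def adjusted_focal_length(base_focal_mm, base_distance_m, user_to_screen_m):
    """Scale the virtual camera's focal length linearly with the
    user-to-screen distance (assumed interpretation of the abstract)."""
    return base_focal_mm * (user_to_screen_m / base_distance_m)

def projected_size(object_size, focal, distance):
    """Pinhole model: image size = object size * focal / distance."""
    return object_size * focal / distance

# The same object, viewed from 2 m and from 4 m, projects at the same
# size once the focal length is adjusted, i.e. perspective is cancelled.
near = projected_size(2.0, adjusted_focal_length(50.0, 2.0, 2.0), 2.0)
far  = projected_size(2.0, adjusted_focal_length(50.0, 2.0, 4.0), 4.0)
```

Because focal length and distance scale together, their ratio (and hence the image on the fixed screen) is invariant to where the user stands.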

Systems and methods for generating three dimensional geometry
11574439 · 2023-02-07

Systems and methods are described for creating three dimensional models of building objects by creating a point cloud from a plurality of input images, defining edges of the building object's surfaces represented by the point cloud, creating simplified geometries of the building object's surfaces and constructing a building model based on the simplified geometries. Input images may include ground, orthographic, or oblique images. The resultant model may be scaled according to correlation with select image types and textured.
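The "simplified geometries" step in the pipeline above can be sketched as collapsing a noisy point cloud for one building surface into a clean primitive. The axis-aligned rectangle used here is a deliberate simplification standing in for whatever edge-fitting the patent actually performs; the function name is illustrative.

```python
import numpy as np

def simplified_footprint(points):
    """Collapse a noisy roof point cloud (N x 3 array of x, y, z) to an
    axis-aligned rectangular footprint, a stand-in for the patent's
    'simplified geometries of the building object's surfaces' step."""
    xy = np.asarray(points)[:, :2]            # drop height, keep plan view
    (x0, y0), (x1, y1) = xy.min(axis=0), xy.max(axis=0)
    # corners in counter-clockwise order
    return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]

# Four noisy points sampled from a roughly 2 m x 1.5 m roof plane.
pts = np.array([[0.1, 0.2, 3.0], [2.0, 0.0, 3.1], [1.9, 1.5, 2.9], [0.0, 1.4, 3.0]])
footprint = simplified_footprint(pts)
```

A building model would then be constructed by extruding such footprints, and scaled or textured against the ground, orthographic, or oblique input images as the abstract notes.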

Systems and methods for generating three dimensional geometry
11574442 · 2023-02-07

Systems and methods are described for creating three dimensional models of building objects by creating a point cloud from a plurality of input images, defining edges of the building object's surfaces represented by the point cloud, creating simplified geometries of the building object's surfaces and constructing a building model based on the simplified geometries. Input images may include ground, orthographic, or oblique images. The resultant model may be scaled according to correlation with select image types and textured.

Efficient shadows for alpha-mapped models
11593989 · 2023-02-28

Disclosed herein is a web-based videoconference system that allows for video avatars to navigate within a virtual environment. Various methods for efficient modeling, rendering, and shading are disclosed herein.

Server system for processing a virtual space
11595480 · 2023-02-28

A server system (100) for processing a virtual space, the virtual space comprising a plurality of entities (A-E), the server system (100) comprising: one or more back-end servers (108); and one or more front-end servers (114); wherein each back-end server (108) stores a respective subset of the plurality of entities (A-E); each front-end server (114) is communicatively coupled to each back-end server (108); each front-end server (114) is configured to be communicatively coupled to one or more client devices (106); each front-end server (114) stores one or more entity references (RefA-RefE); and each entity reference (RefA-RefE) comprises a first identifier for identifying a respective entity (A-E) and a second identifier for identifying the back-end server (108) on which the entity (A-E) identified by the first identifier is stored.
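The two-identifier entity reference described in the claim above amounts to a routing table on each front-end server: the first identifier names the entity, the second names the back-end server holding it. The class and dictionary layout below are an illustrative assumption, not the patented implementation.

```python
class FrontEndServer:
    """Minimal sketch of a front-end server (114) that resolves entity
    references (RefA-RefE) to entities stored on back-end servers (108)."""

    def __init__(self, refs, backends):
        self.refs = refs            # first id -> second id (entity -> back-end)
        self.backends = backends    # back-end id -> {entity id: entity}

    def fetch(self, entity_id):
        """Resolve the reference, then fetch the entity from the
        back-end server the reference points at."""
        backend_id = self.refs[entity_id]
        return self.backends[backend_id][entity_id]

# Entities A-E partitioned across two back-end servers.
backends = {"b1": {"A": "entity-A", "B": "entity-B"},
            "b2": {"C": "entity-C", "D": "entity-D", "E": "entity-E"}}
refs = {"A": "b1", "B": "b1", "C": "b2", "D": "b2", "E": "b2"}
front_end = FrontEndServer(refs, backends)
```

Client devices (106) talk only to the front end, which hides which back-end server actually stores each entity, so entities can be re-partitioned by updating the references alone.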

Immersive virtual entertainment system

Aspects of the subject disclosure may include, for example, a method that includes generating a virtual venue for the virtual reality space, wherein generating the virtual venue includes replicating an architecture of a venue associated with the event and generating a plurality of virtual stores for the virtual venue, wherein each virtual store is associated with a respective participant of the plurality of participants, accessing a plurality of cameras and a plurality of microphones associated with the event, generating the virtual reality space based on the plurality of participants, the virtual venue, the plurality of microphones, and the plurality of cameras, generating a plurality of images for each participant of the plurality of participants according to a respective profile for each participant to participate in the event, and presenting the virtual reality space to user equipment in a virtual reality format. Other embodiments are disclosed.