G06T2219/024

Mesh updates via mesh frustum cutting

Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating, based on sensor data captured by a depth sensor on a device, one or more 3D meshes representing a physical space, wherein each of the 3D meshes comprises a corresponding set of vertices and a corresponding set of faces comprising edges between pairs of vertices; determining that a mesh is visible in a current frame captured by an image sensor on the device; determining, based on the corresponding set of vertices and the corresponding set of faces for the mesh, a portion of the mesh that lies within a view frustum associated with the current frame; and updating the one or more 3D meshes by texturing the portion of the mesh with one or more pixels in the current frame onto which the portion is projected.
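The frustum-containment step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patented method: the plane extraction follows the standard Gribb/Hartmann construction from a combined view-projection matrix, and the function names (`frustum_planes`, `faces_in_frustum`) are hypothetical.

```python
import numpy as np

def frustum_planes(view_proj):
    """Extract the six clip planes (rows ax+by+cz+d >= 0) from a combined
    view-projection matrix using the Gribb/Hartmann method."""
    m = view_proj
    planes = np.array([
        m[3] + m[0],  # left
        m[3] - m[0],  # right
        m[3] + m[1],  # bottom
        m[3] - m[1],  # top
        m[3] + m[2],  # near
        m[3] - m[2],  # far
    ])
    # normalize each plane by the length of its normal
    return planes / np.linalg.norm(planes[:, :3], axis=1, keepdims=True)

def faces_in_frustum(vertices, faces, planes):
    """Return the subset of faces whose vertices all lie inside every
    frustum plane -- the 'portion of the mesh' to retexture."""
    hom = np.hstack([vertices, np.ones((len(vertices), 1))])  # (V, 4)
    inside = (hom @ planes.T >= 0).all(axis=1)                # (V,)
    keep = inside[faces].all(axis=1)                          # (F,)
    return faces[keep]
```

With an identity view-projection matrix the frustum is the unit cube, so faces with any vertex outside [-1, 1]³ are cut away before texturing.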

Efficient shadows for alpha-mapped models
11593989 · 2023-02-28

Disclosed herein is a web-based videoconference system that allows for video avatars to navigate within a virtual environment. Various methods for efficient modeling, rendering, and shading are disclosed herein.

Multipoint SLAM capture

“Feature points” in “point clouds” that are visible to multiple respective cameras (i.e., aspects of objects imaged by the cameras) are reported via wired and/or wireless communication paths to a compositing processor, which can determine whether a particular feature point “moved” a certain amount relative to another image. In this way, the compositing processor can determine, e.g., using triangulation and recognition of common features, how much movement occurred and where any particular camera was positioned when a later image from that camera was captured. Thus, “overlap” of feature points in multiple images is used so that the system can close the loop to generate a SLAM map. The compositing processor, which may be implemented by a server or other device, generates the SLAM map by merging feature point data from multiple imaging devices.
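The "overlap" idea above can be illustrated with a toy sketch (Python; the function names and data layout are hypothetical): feature points shared between two frames give a motion estimate, and a compositing step merges per-camera feature maps into one. Real SLAM pipelines use triangulation and robust matching; this shows only the bookkeeping.

```python
import numpy as np

def estimate_motion(prev_pts, curr_pts):
    """Estimate how much a camera 'moved' between two frames from the
    average displacement of feature points visible in both.
    prev_pts/curr_pts map feature IDs to 2D image positions."""
    shared = prev_pts.keys() & curr_pts.keys()
    if not shared:
        return None  # no overlap: cannot close the loop from these frames
    deltas = np.array([np.subtract(curr_pts[i], prev_pts[i]) for i in sorted(shared)])
    return deltas.mean(axis=0)

def merge_feature_maps(maps):
    """Compositing step: merge per-camera feature maps into one SLAM map,
    averaging the positions of points reported by multiple cameras."""
    merged = {}
    for fmap in maps:
        for fid, pos in fmap.items():
            merged.setdefault(fid, []).append(np.asarray(pos, float))
    return {fid: np.mean(ps, axis=0) for fid, ps in merged.items()}
```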

Dynamic Entering and Leaving of Virtual-Reality Environments Navigated by Different HMD Users
20180005429 · 2018-01-04

Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.
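The pose composition implied above — placing the second HMD in the scene by its real-world pose measured relative to the first HMD — can be sketched with 4×4 transforms. This is a simplified illustration (yaw-only orientation, hypothetical function names), not the claimed method.

```python
import numpy as np

def pose_matrix(position, yaw):
    """4x4 world transform from a position and a yaw angle (radians);
    a simplified stand-in for full HMD orientation tracking."""
    c, s = np.cos(yaw), np.sin(yaw)
    m = np.eye(4)
    m[:3, :3] = [[c, 0, s], [0, 1, 0], [-s, 0, c]]  # rotation about +Y
    m[:3, 3] = position
    return m

def second_user_pose(first_world, second_rel_first):
    """Place the second HMD in the VR scene by composing the first user's
    scene pose with the second HMD's pose relative to the first HMD."""
    return first_world @ second_rel_first
```

For example, if the first user faces 90° left and the second HMD sits one meter to the first user's right in the real world, the composed transform places the second perspective accordingly in the scene.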

Editing a virtual reality space
11710288 · 2023-07-25

An editing terminal includes: a simple display data acquisition unit that acquires simple display data from an item management server; an item selection processing unit that receives selection of an item from a plurality of items displayed using the simple display data; a three-dimensional data acquisition unit that acquires three-dimensional data of the selected item from the item management server; and an editing processing unit that displays an editing space on an editing screen on the basis of editing space information, receives an input of operation information regarding editing of the editing space using the three-dimensional data of the selected item, transmits the operation information to an editing server, and displays the editing space after editing on the editing screen.

Computer-Implemented Human-Machine Interaction Method and User Interface
20230237743 · 2023-07-27

A human-machine interaction, HMI, user interface (1) connected to at least one controller or actuator of a complex system (SYS) having a plurality of system components, C, represented by associated blocks, B, of a hierarchical system model (SYS-MOD) stored in a database, DB, (5) said user interface (1) comprising: an input unit (2) adapted to receive user input commands and a display unit (3) having a screen adapted to display a scene within a three-dimensional workspace, WS.sub.B1, associated with a selectable block, B1, representing a corresponding system component, C, of said complex system (SYS) by means of a virtual camera, VC.sub.B1, associated to the respective block, B1, and positioned in a three-dimensional coordinate system within a loaded three-dimensional workspace, WS.sub.B1, of said block, B1, wherein the virtual camera, VC.sub.B1, is moveable automatically in the three-dimensional workspace, WS.sub.B1, of the associated block, B1, in response to a user input command input to the input unit (2) of said user interface (1) to perform a zooming operation on the respective block, B1, to reveal or hide its content areas, CAs, wherein the content areas, CAs, of the zoomed block, B1, include nested child blocks, B1_1, B1_2, of the respective block, B1.
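The zoom-to-reveal behavior can be sketched as a tiny hierarchy (Python; the class and method names are hypothetical, reusing the abstract's B1/B1_1/B1_2 labels): zooming in on a block exposes its nested child blocks as content areas, and zooming out hides them.

```python
class Block:
    """Hypothetical node of the hierarchical system model: each block
    represents a system component and may contain nested child blocks."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.revealed = False

    def zoom_in(self):
        """Virtual-camera zoom onto this block: reveal its content areas
        (the nested child blocks) and return their names."""
        self.revealed = True
        return [child.name for child in self.children]

    def zoom_out(self):
        """Zoom away from this block: hide its content areas again."""
        self.revealed = False
        return []
```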

SYSTEM AND METHOD TO PREVENT SURVEILLANCE AND PRESERVE PRIVACY IN VIRTUAL REALITY
20230004676 · 2023-01-05

Preserving user privacy and preventing surveillance on behalf of users of a virtual reality world. One or more plans are available when a privacy or surveillance risk to a user is detected. In one plan, configurable scripts execute on behalf of the user to create a confusing array of clone avatars that obfuscate the real user avatar behavior. A malevolent avatar attempting to surveil the user may have difficulty distinguishing the clones from the user and may miss out on private insights it might otherwise have gleaned from the user's behavior. In another exemplary privacy plan, a copy of part of the virtual world is spawned, occupied exclusively by the user's avatar, and then merged into the main world. Privacy plans may be selected manually or automatically in response to perceived privacy threats to strike a balance between privacy and enjoyment within the virtual world.

EXTENDED REALITY SERVICE PROVIDING METHOD AND SYSTEM FOR OPERATION OF INDUSTRIAL INSTALLATION
20230237744 · 2023-07-27

The present application relates to an extended reality service providing method and system for operation of an industrial installation. More specifically, various types of data required for operation (e.g., inspection, examination, maintenance, repair, and reinforcement) of an industrial installation are digitalized; extended reality content, such as an augmented reality image or a mixed reality image based on the digitalized data, is provided to a worker on site or a manager at a remote location; and the worker and the manager can communicate via a video call in real time, whereby their work efficiency can be enhanced.

GENERATION AND IMPLEMENTATION OF 3D GRAPHIC OBJECT ON SOCIAL MEDIA PAGES
20230237754 · 2023-07-27

Disclosed herein is a digital object generator that builds unique digital objects based on user-specific input. The unique digital objects are part of a graphic presentation to users. The user-specific input is positioned on pre-configured regions of a 3D object, such as a polygon. Examples of the pre-configured regions include faces of the 3D object, orbits around the 3D object, or identifiable regions associated with the 3D object. The 3D object is rendered as part of a social media page and enables social interactions between users. In the social media page, the 3D object rotates, displaying its regions/faces to page visitors. In some embodiments, the 3D object is implemented as a pet or companion of a user avatar in a virtual, augmented, or extended reality space.
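The rotate-to-display behavior can be sketched as follows (Python; the class and method names are hypothetical): user-specific content is assigned to the object's face regions, and each rotation step presents the next face to page visitors.

```python
import itertools

class SocialObject3D:
    """Hypothetical sketch of a rotating 3D object whose faces act as
    pre-configured content regions on a social media page."""
    def __init__(self, num_faces, user_content):
        # assign user-specific input to the object's face regions;
        # faces without content remain empty (None)
        self.faces = {i: None for i in range(num_faces)}
        for face_id, item in zip(self.faces, user_content):
            self.faces[face_id] = item
        self._order = itertools.cycle(range(num_faces))
        self.front = next(self._order)  # face currently shown to visitors

    def rotate(self):
        """Advance the rotation and return the content now facing visitors."""
        self.front = next(self._order)
        return self.faces[self.front]
```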

METHOD AND SYSTEM FOR REPRESENTING AVATAR FOLLOWING MOTION OF USER IN VIRTUAL SPACE

A non-transitory computer-readable recording medium storing instructions that, when executed by a processor, cause the processor to set a communication session in which a plurality of users participate through a server, generate data for a virtual space, share motion data related to motions of the plurality of users through the communication session, generate a video in which avatars following the motions of the plurality of users are represented in the virtual space, based on the motion data, and share the generated video with the plurality of users through the communication session.
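The share-motion, generate-video, share-video loop above can be sketched as a toy session object (Python; names and data shapes are hypothetical, and a real system would stream encoded video rather than dictionaries):

```python
class MotionSession:
    """Hypothetical sketch of the communication session: participants share
    motion data, and a frame posing each avatar from the latest shared
    motion is generated and shared back with all users."""
    def __init__(self, users):
        # latest motion data per participant, None until first shared
        self.motion = {user: None for user in users}

    def share_motion(self, user, motion_data):
        """Share motion data related to a user's movements (e.g. joint poses)."""
        self.motion[user] = motion_data

    def render_frame(self):
        """Represent an avatar following each user's motion in the virtual
        space; a dict stands in for the generated video frame."""
        return {user: {"avatar_pose": m}
                for user, m in self.motion.items() if m is not None}
```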