Patent classifications
G06T2219/2012
Editing a virtual reality space
An editing terminal includes: a simple display data acquisition unit that acquires simple display data from an item management server; an item selection processing unit that receives selection of an item from a plurality of items displayed using the simple display data; a three-dimensional data acquisition unit that acquires three-dimensional data of the selected item from the item management server; and an editing processing unit that displays an editing space on an editing screen on the basis of editing space information, receives an input of operation information regarding editing of the editing space using the three-dimensional data of the selected item, transmits the operation information to an editing server, and displays the editing space after editing on the editing screen.
Generative latent textured proxies for object category modeling
Systems and methods are described for: generating a plurality of three-dimensional (3D) proxy geometries of an object; generating, based on the plurality of 3D proxy geometries, a plurality of neural textures of the object, the neural textures defining a plurality of different shapes and appearances representing the object; providing the plurality of neural textures to a neural renderer; receiving, from the neural renderer and based on the plurality of neural textures, a color image and an alpha mask representing an opacity of at least a portion of the object; and generating a composite image based on a pose of the object, the color image, and the alpha mask.
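As a rough illustration of the final compositing step described above (not the patented neural-rendering pipeline itself), a color image and an alpha mask can be combined over a background by standard alpha compositing; the function and variable names below are assumptions for this sketch.

```python
import numpy as np

def composite(color: np.ndarray, alpha: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Alpha-composite a rendered color image over a background.

    color:      (H, W, 3) floats in [0, 1], the renderer's color output
    alpha:      (H, W)    floats in [0, 1], per-pixel opacity of the object
    background: (H, W, 3) floats in [0, 1]
    """
    a = alpha[..., None]  # broadcast opacity over the color channels
    return a * color + (1.0 - a) * background

# A fully opaque red pixel composited over white stays red.
color = np.full((1, 1, 3), [1.0, 0.0, 0.0])
alpha = np.ones((1, 1))
bg = np.ones((1, 1, 3))
out = composite(color, alpha, bg)
```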
METHOD AND DEVICE OF LABELING LASER POINT CLOUD
The present application discloses a method and device for labeling a laser point cloud. The method comprises: receiving data of a laser point cloud; constructing a 3D scene and establishing a 3D coordinate system corresponding to the 3D scene; converting the coordinates of each laser point in the laser point cloud into 3D coordinates in the 3D coordinate system; mapping the laser points included in the laser point cloud into the 3D scene according to their respective 3D coordinates; and labeling the laser points in the 3D scene.
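The coordinate-conversion step above can be sketched as a rigid transform from the sensor frame into the scene's 3D coordinate system. The transform values and the toy height-based labeling rule below are illustrative assumptions, not the disclosed method.

```python
import numpy as np

def to_scene_coords(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Convert laser points from the sensor frame into the 3D scene's
    coordinate system via a rigid transform: p_scene = R @ p + t.

    points: (N, 3) laser points in the sensor frame
    R:      (3, 3) rotation matrix
    t:      (3,)   translation vector
    """
    return points @ R.T + t

# Map two points into the scene, then label by a height threshold
# (a stand-in for whatever labeling the annotator applies in the scene).
points = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])
scene_pts = to_scene_coords(points, R, t)
labels = np.where(scene_pts[:, 2] > 1.0, "tall", "ground")
```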
EXTENDED REALITY SERVICE PROVIDING METHOD AND SYSTEM FOR OPERATION OF INDUSTRIAL INSTALLATION
The present application relates to an extended reality service providing method and system for operation of an industrial installation. More specifically, various types of data required for operation (e.g., inspection, examination, maintenance, repair, and reinforcement) of an industrial installation are digitalized; extended reality content, such as an augmented reality image or a mixed reality image based on the digitalized data, is provided to a site worker or a manager at a remote location; and the worker and the manager can communicate via a video call in real time, whereby the work efficiency of the worker and the manager is enhanced.
BEAUTIFICATION TECHNIQUES FOR 3D DATA IN A MESSAGING SYSTEM
The subject technology receives a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator for applying a 3D effect, the 3D effect including at least one beautification operation. The subject technology captures image data and depth data using a camera. The subject technology applies, to the image data and the depth data, the 3D effect including the at least one beautification operation based at least in part on the augmented reality content generator, the beautification operation being performed as part of applying the 3D effect. The subject technology generates a 3D message based at least in part on the applied 3D effect including the at least one beautification operation. The subject technology renders a view of the 3D message based at least in part on the applied 3D effect including the at least one beautification operation.
USING 6DOF POSE INFORMATION TO ALIGN IMAGES FROM SEPARATED CAMERAS
Techniques for aligning images generated by an integrated camera physically mounted to an HMD with images generated by a detached camera physically unmounted from the HMD are disclosed. A 3D feature map is generated and shared with the detached camera. Both the integrated camera and the detached camera use the 3D feature map to relocalize themselves and to determine their respective 6 DOF poses. The HMD receives the detached camera's image of the environment and the 6 DOF pose of the detached camera. A depth map of the environment is accessed. An overlaid image is generated by reprojecting a perspective of the detached camera's image to align with a perspective of the integrated camera and by overlaying the reprojected detached camera's image onto the integrated camera's image.
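The reprojection step can be sketched with a standard pinhole camera model: unproject each source (detached-camera) pixel using the depth map, carry it through the relative transform derived from the two 6 DOF poses, and project it into the destination (integrated-camera) image. All names, and the intrinsics in the example, are assumptions for illustration, not the disclosed implementation.

```python
import numpy as np

def reproject(depth, K_src, K_dst, T_src_to_dst):
    """For each pixel of the source camera, compute where it lands in the
    destination camera's image.

    depth:        (H, W) depth map aligned with the source camera
    K_src, K_dst: (3, 3) camera intrinsic matrices
    T_src_to_dst: (4, 4) rigid transform from source to destination frame,
                  derived from the two cameras' 6 DOF poses
    Returns (H, W, 2) destination pixel coordinates.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # (H, W, 3)
    rays = pix @ np.linalg.inv(K_src).T                             # unproject to rays
    pts = rays * depth[..., None]                                   # 3D points, source frame
    pts_h = np.concatenate([pts, np.ones((H, W, 1))], axis=-1)      # homogeneous coords
    pts_dst = pts_h @ T_src_to_dst.T                                # into destination frame
    proj = pts_dst[..., :3] @ K_dst.T                               # project
    return proj[..., :2] / proj[..., 2:3]

# Sanity check: with an identity relative pose and identical intrinsics,
# every pixel maps back onto itself.
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
coords = reproject(np.full((4, 4), 2.0), K, K, np.eye(4))
```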
BIOMETRIC ENABLED VIRTUAL REALITY SYSTEMS AND METHODS FOR DETECTING USER INTENTIONS AND MODULATING VIRTUAL AVATAR CONTROL BASED ON THE USER INTENTIONS FOR CREATION OF VIRTUAL AVATARS OR OBJECTS IN HOLOGRAPHIC SPACE, TWO-DIMENSIONAL (2D) VIRTUAL SPACE, OR THREE-DIMENSIONAL (3D) VIRTUAL SPACE
Biometric enabled virtual reality (VR) systems and methods are disclosed for detecting user intention(s) and modulating virtual avatar control based on the user intention(s) for creation of virtual avatar(s) or object(s) in holographic space, two-dimensional (2D) virtual space, or three-dimensional (3D) virtual space. A virtual representation of an intended motion of a user corresponding to an intention of muscle activation of the user is determined based on analysis of biometric signal data of the user as collected by a biometric detection device. The virtual representation of the intended motion is used to modulate virtual avatar control or output to create at least one of a virtual avatar representing aspect(s) of the user or an object manipulated by the user in a holographic space, virtual 2D space, or virtual 3D space. The avatar or the object is created based on: (1) the biometric signal data of the user, or (2) user-specific specifications as provided by the user.
Support method, server, and design support system
A design support system is provided for permitting a design that easily meets desired conditions regarding an entire item group. An automatic estimation system acting as the design support system comprises: an item recognition section that recognizes each of the items included in the item group by individually recognizing the elements making up the item; a designated condition recognition section that recognizes a designated condition from among the manufacturing conditions; and a recommended-to-be-examined element recognition section that recognizes, with respect to the recognized designated condition, the element corresponding to any of the manufacturing conditions that is recommended to be examined for changes. The recommended-to-be-examined element recognition section displays a model of the item group, highlighting the element whose manufacturing condition is recommended to be examined for changes.
Systems and methods for automatic measurement and scanning of complex surfaces
Systems and methods are provided for collecting measurement data for a three-dimensional object. In embodiments, a computer model of the three-dimensional object is generated that correlates points on the three-dimensional object with points in three-dimensional space; the computer model is used to collect measurement data of the three-dimensional object and associate the collected measurement data with points in three-dimensional space; and a plan view computer model of the three-dimensional object is generated that depicts the measurement data in two dimensions and that associates the depicted measurement data with points in three-dimensional space.
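A minimal sketch of the plan-view step above, assuming measurements arrive as (3D point, value) pairs: project each point onto the XY plane, bin into a coarse 2D grid, and keep the originating 3D point in each cell so the plan view stays associated with points in three-dimensional space. The grid binning and all names are illustrative assumptions, not the patented model.

```python
import numpy as np

def plan_view_grid(points_3d, values, cell=1.0):
    """Build a 2D plan view of surface measurements.

    points_3d: (N, 3) measurement locations on the 3D object
    values:    N measured values at those locations
    cell:      grid cell size in the XY plane
    Returns {(ix, iy): {"value": v, "source_3d": (x, y, z)}}, so each 2D
    cell remains linked to the 3D point its measurement came from.
    """
    grid = {}
    for p, v in zip(points_3d, values):
        key = (int(p[0] // cell), int(p[1] // cell))
        grid[key] = {"value": float(v), "source_3d": tuple(map(float, p))}
    return grid

pts = np.array([[0.2, 0.7, 3.0], [1.5, 0.1, 2.0]])
vals = [0.93, 0.88]
g = plan_view_grid(pts, vals)
```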
IMAGE PROCESSING DEVICE, IMAGE PROCESSING SYSTEM, IMAGE DISPLAY METHOD, AND IMAGE PROCESSING PROGRAM
An image processing device is configured to cause a display to display three-dimensional data as a three-dimensional image, the three-dimensional data representing a biological tissue. The image processing device includes a control unit configured to adjust the color tone of each pixel of the three-dimensional image according to a dimension of the biological tissue in a linear direction from a viewpoint when the three-dimensional image is displayed on the display.
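One plausible reading of the color-tone adjustment (a sketch under assumptions, not the disclosed implementation) is to attenuate each pixel in proportion to the tissue's extent along the viewing direction, so that deeper or thicker structures appear darker on the 2D display. All names and ranges below are illustrative.

```python
import numpy as np

def tone_by_depth(rgb, depth, d_min, d_max):
    """Darken each pixel's color tone according to the tissue's dimension
    along the viewing direction from the viewpoint.

    rgb:   (H, W, 3) rendered colors in [0, 1]
    depth: (H, W) per-pixel extent/distance of the tissue along the view ray
    d_min, d_max: depths mapped to full brightness and full attenuation
    """
    w = 1.0 - np.clip((depth - d_min) / (d_max - d_min), 0.0, 1.0)
    return rgb * w[..., None]

# A pixel at minimal depth keeps its color; one at maximal depth goes dark.
out = tone_by_depth(np.ones((1, 2, 3)), np.array([[0.0, 10.0]]), 0.0, 10.0)
```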