Three-dimensional data creation method, three-dimensional data transmission method, three-dimensional data creation device, and three-dimensional data transmission device

A three-dimensional data creation method includes: creating first three-dimensional data from information detected by a sensor; receiving encoded three-dimensional data that is obtained by encoding second three-dimensional data; decoding the received encoded three-dimensional data to obtain the second three-dimensional data; and merging the first three-dimensional data with the second three-dimensional data to create third three-dimensional data.
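The merge step can be illustrated with a small sketch. The helper below (`merge_point_clouds` is a hypothetical name, and voxel de-duplication is an assumption added for illustration, not something the claim specifies) combines a sensor-derived cloud with a decoded, received cloud into the third data set:

```python
import numpy as np

def merge_point_clouds(first: np.ndarray, second: np.ndarray,
                       voxel_size: float = 0.05) -> np.ndarray:
    """Merge two (N, 3) point clouds into one, keeping a single point
    per occupied voxel so overlapping regions are not double-counted
    (an illustrative de-duplication policy)."""
    merged = np.vstack([first, second])
    # Quantize coordinates to voxel indices; keep one point per voxel.
    keys = np.floor(merged / voxel_size).astype(np.int64)
    _, unique_idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(unique_idx)]

# First cloud from the sensor, second cloud decoded from a received stream.
first = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
second = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # shares one point
third = merge_point_clouds(first, second)
print(third.shape)  # (3, 3): the shared point appears once
```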

POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD
20230239501 · 2023-07-27 ·

A point cloud data transmission method according to embodiments comprises the steps of: encoding point cloud data; and transmitting signaling data and the encoded point cloud data, wherein the encoding step may comprise the steps of: dividing the point cloud data into a plurality of compression units; sorting the point cloud data within each compression unit; generating a prediction tree on the basis of the sorted point cloud data in each compression unit; and compressing the point cloud data in each compression unit by performing prediction on the basis of the prediction tree.
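The sort-then-predict idea can be sketched as follows. This is a deliberately simplified stand-in for the patented codec: the "prediction tree" here is a plain chain in which each point's parent is the previous sorted point, whereas a real codec would choose parents and predictors adaptively and entropy-code the residuals.

```python
import numpy as np

def compress_unit(points: np.ndarray) -> np.ndarray:
    """Sort a compression unit's points, then encode each point as a
    residual against its parent in a chain-shaped prediction tree
    (parent = previous sorted point)."""
    # Sort primarily by x, then y, then z (last lexsort key is primary).
    order = np.lexsort((points[:, 2], points[:, 1], points[:, 0]))
    sorted_pts = points[order]
    # Parent prediction: the origin for the root, else the previous point.
    preds = np.vstack([np.zeros((1, 3)), sorted_pts[:-1]])
    return sorted_pts - preds  # small residuals are cheap to entropy-code

def decompress_unit(residuals: np.ndarray) -> np.ndarray:
    # Walking the chain and re-adding residuals is a cumulative sum.
    return np.cumsum(residuals, axis=0)

pts = np.array([[2.0, 0, 0], [0.0, 0, 0], [1.0, 0, 0]])
res = compress_unit(pts)
rec = decompress_unit(res)  # reconstructs the sorted points exactly
```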

LANE EXTRACTION METHOD USING PROJECTION TRANSFORMATION OF THREE-DIMENSIONAL POINT CLOUD MAP
20230005278 · 2023-01-05 ·

A lane extraction method uses projection transformation of a 3D point cloud map. The amount of computation required to extract lane coordinates is reduced by performing deep learning and lane extraction in the two-dimensional (2D) domain, so lane information is obtained in real time. In addition, black-and-white brightness, the most important cue for lane extraction in an image, is substituted with the reflection intensity of a light detection and ranging (LiDAR) sensor, yielding a deep learning model capable of accurately extracting lanes. Reliability and competitiveness are therefore enhanced in the fields of autonomous driving, road recognition, lane recognition, and HD road maps for autonomous driving, as well as similar or related fields, and more particularly in road recognition and autonomous driving using LiDAR.
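The intensity-for-brightness substitution can be sketched as a top-down projection. The function below (a hypothetical helper, not the patented transformation; grid size and resolution are arbitrary) rasterizes LiDAR points into a 2D image whose pixel values are reflection intensities, which a 2D lane-detection network could then consume:

```python
import numpy as np

def lidar_to_bev_intensity(points: np.ndarray, intensity: np.ndarray,
                           res: float = 0.1, size: int = 200) -> np.ndarray:
    """Project (N, 3) LiDAR points onto a top-down 2D grid, using each
    point's reflection intensity as the cell 'brightness'."""
    img = np.zeros((size, size), dtype=np.float32)
    ix = (points[:, 0] / res).astype(int) + size // 2
    iy = (points[:, 1] / res).astype(int) + size // 2
    ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    # Keep the maximum intensity per cell: lane paint reflects strongly.
    np.maximum.at(img, (iy[ok], ix[ok]), intensity[ok])
    return img

pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.0]])
inten = np.array([0.9, 0.2], dtype=np.float32)
bev = lidar_to_bev_intensity(pts, inten)  # bev[100, 100] holds 0.9
```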

Method and system for virtual real estate tours and virtual shopping
11714518 · 2023-08-01 ·

A process and method for generating a 3D interactive rendition of a space by first digitizing the space as a TXLD file from point clouds, 2D floor plans, photographs and/or videos, and/or elevation views, then processing the file to create 3D interactive renditions of the space using virtual reality, augmented reality, or mixed reality technologies. The system maintains a constantly growing and evolving 3D library of digital representations of both fixtures and non-fixtures based on real products, and automates and personalizes product selection and placement for a given user and target environment using an ensemble recommendation system, which relies on weighted averages of probabilistic, content-based, clustering, and collaborative filtering models, among other suitable models that may be added to the ensemble in the future. If a medium does not necessitate 3D model creation of the environment, the products are automatically placed in the medium view based on the encoded position in the TXLD file.
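The weighted-average ensemble can be sketched generically. The model names, weights, and scoring lambdas below are illustrative stand-ins, not the patented configuration:

```python
from typing import Callable, Dict, List

def ensemble_rank(
    models: Dict[str, Callable[[str, str], float]],
    weights: Dict[str, float],
    user: str,
    products: List[str],
) -> List[str]:
    """Rank products by a weighted average of per-model scores, the
    core mechanism of an ensemble recommender."""
    total_w = sum(weights.values())
    def blended(p: str) -> float:
        return sum(w * models[m](user, p) for m, w in weights.items()) / total_w
    return sorted(products, key=blended, reverse=True)

# Toy stand-ins for the probabilistic / content-based / clustering /
# collaborative filtering models in the ensemble.
models = {
    "content": lambda u, p: 0.9 if p == "sofa" else 0.1,
    "collaborative": lambda u, p: 0.6 if p == "lamp" else 0.4,
}
ranked = ensemble_rank(models, {"content": 0.7, "collaborative": 0.3},
                       "user-1", ["lamp", "sofa"])
print(ranked)  # ['sofa', 'lamp']
```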

Floorplan generation based on room scanning

Various implementations disclosed herein include devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data.

A METHOD FOR CAPTURING AND DISPLAYING A VIDEO STREAM
20230024396 · 2023-01-26 ·

The present invention relates to a method for capturing and displaying a video stream, comprising: capturing, with one or a plurality of cameras, a plurality of video streams of a scene, said scene comprising at least one person; reconstructing from said plurality of video streams a virtual environment representing the scene; determining the gaze direction of said person using at least one of said plurality of video streams; projecting said virtual environment onto a plane normal to said gaze direction to generate a virtual representation corresponding to what that person is looking at, from the point of view of that person; and displaying said virtual representation on a display.
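The projection step can be illustrated with a minimal orthographic sketch (an assumption for illustration; the patent does not specify orthographic projection or this basis construction). Points are projected onto the plane whose normal is the gaze direction:

```python
import numpy as np

def project_to_gaze_plane(points: np.ndarray, gaze: np.ndarray) -> np.ndarray:
    """Orthographically project (N, 3) scene points onto the plane
    normal to the gaze direction, returning 2D in-plane coordinates."""
    n = gaze / np.linalg.norm(gaze)
    # Pick a helper axis not parallel to the gaze, then build an
    # orthonormal basis (u, v) spanning the projection plane.
    helper = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    return np.stack([points @ u, points @ v], axis=1)

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
plane_xy = project_to_gaze_plane(pts, gaze=np.array([0.0, 0.0, 1.0]))
```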

DETERMINING VOLUME OF A SELECTABLE REGION USING EXTENDED REALITY
20230237612 · 2023-07-27 ·

A system for determining volume of a selectable region is configurable to (i) obtain user input directed to a 3D representation of a set of 2D images and (ii) based on the user input, selectively modify one or more mask pixels of one or more respective selection masks. Each 2D image of the set of 2D images is associated with a respective selection mask. The 3D representation represents pixels of the set of 2D images with corresponding voxels. The user input selects one or more voxels of the 3D representation. The one or more mask pixels is associated with one or more pixels of the set of 2D images that correspond to the one or more voxels of the 3D representation selected via the user input.
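One simple realization of the voxel-to-mask-pixel correspondence is an image stack: voxel (i, y, x) of the 3D representation maps to pixel (y, x) of image i and its selection mask. The sketch below assumes that layout (the patent does not fix a particular mapping):

```python
import numpy as np

def select_voxels(masks: np.ndarray, voxels) -> np.ndarray:
    """Given one boolean selection mask per 2D image, stacked as
    (num_images, H, W), set the mask pixel corresponding to each
    user-selected voxel (i, y, x)."""
    for i, y, x in voxels:
        masks[i, y, x] = True  # modify the mask pixel for this voxel
    return masks

masks = np.zeros((3, 4, 4), dtype=bool)  # 3 images, each 4x4, all unselected
masks = select_voxels(masks, [(0, 1, 2), (2, 3, 3)])
print(int(masks.sum()))  # 2 mask pixels modified
```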

TECHNOLOGY CONFIGURED TO PROVIDE USER INTERFACE VISUALISATION OF AGRICULTURAL LAND, INCLUDING 3D VISUALIZED MODELLING OF AN AGRICULTURAL LAND REGION BASED ON FLOW, HYBRIDIZED MULTIPLE RESOLUTION VISUALISATION AND/OR AUTOMATED FIELD SEGREGATION
20230022508 · 2023-01-26 ·

Technology is configured to provide user interface visualization of agricultural land, including 3D visualized modelling of an agricultural land region based on flow, hybridized multiple resolution visualization and/or automated field segregation. Embodiments of the present disclosure are primarily directed to providing what is, in essence, a digital twin interface for agricultural land, which provides technical attributes that solve technical problems present in the art.

MULTI-VIEW NEURAL HUMAN RENDERING
20230027234 · 2023-01-26 ·

An image-based method of modeling and rendering a three-dimensional model of an object is provided. The method comprises: obtaining a three-dimensional point cloud at each frame of a synchronized, multi-view video of an object, wherein the video comprises a plurality of frames; extracting a feature descriptor for each point in the point cloud for the plurality of frames without storing the feature descriptor for each frame; producing a two-dimensional feature map for a target camera; and using an anti-aliased convolutional neural network to decode the feature map into an image and a foreground mask.
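The feature-map production step can be sketched as z-buffered splatting of per-point descriptors into the target camera's image plane. The pinhole intrinsics and nearest-point policy below are assumptions for illustration; the decoding CNN and anti-aliasing are omitted:

```python
import numpy as np

def splat_features(points: np.ndarray, feats: np.ndarray,
                   K: np.ndarray, H: int, W: int) -> np.ndarray:
    """Project per-point feature descriptors into an (H, W, C) feature
    map for a target camera with intrinsics K, keeping the nearest
    point per pixel via a z-buffer."""
    C = feats.shape[1]
    fmap = np.zeros((H, W, C), dtype=np.float32)
    zbuf = np.full((H, W), np.inf)
    for p, f in zip(points, feats):
        if p[2] <= 0:
            continue  # point is behind the camera
        uvw = K @ p
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= v < H and 0 <= u < W and p[2] < zbuf[v, u]:
            zbuf[v, u] = p[2]
            fmap[v, u] = f  # nearest point wins the pixel
    return fmap

K = np.array([[10.0, 0, 8], [0, 10.0, 8], [0, 0, 1]])  # toy intrinsics
pts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0]])     # second is occluded
feats = np.array([[1.0], [5.0]])                       # 1-D toy descriptors
fmap = splat_features(pts, feats, K, 16, 16)
```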

Z-PLANE IDENTIFICATION AND BOX DIMENSIONING USING THREE-DIMENSIONAL TIME-OF-FLIGHT IMAGING
20230228883 · 2023-07-20 ·

A sensor system that obtains and processes time-of-flight (TOF) data is provided. A TOF sensor obtains raw data describing various surfaces. A processor applies an averaging filter to the raw data to smooth it, increasing the signal-to-noise ratio (SNR) of flat surfaces represented in the raw data; performs a depth compute process on the filtered raw data to generate distance data; generates a point cloud based on the distance data; and identifies the Z-planes in the point cloud.
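The filter-then-find-planes pipeline can be sketched minimally. The box-average filter and the z-histogram plane heuristic below are simple illustrative choices; the patent does not specify these particular implementations:

```python
import numpy as np

def smooth_depth(raw: np.ndarray, k: int = 3) -> np.ndarray:
    """Box-average each depth pixel over a k x k window, raising the
    SNR of flat surfaces (border pixels keep their raw value)."""
    out = raw.astype(np.float32).copy()
    r = k // 2
    for y in range(r, raw.shape[0] - r):
        for x in range(r, raw.shape[1] - r):
            out[y, x] = raw[y - r:y + r + 1, x - r:x + r + 1].mean()
    return out

def find_z_planes(z: np.ndarray, bin_w: float = 0.01, min_pts: int = 20):
    """Report candidate Z-plane heights as heavily populated bins in a
    histogram of point heights."""
    bins = np.round(z / bin_w).astype(int)
    vals, counts = np.unique(bins, return_counts=True)
    return [v * bin_w for v, c in zip(vals, counts) if c >= min_pts]

# A flat surface at z = 1.0 m with a small deterministic ripple.
yy, xx = np.mgrid[0:10, 0:10]
depth = 1.0 + 0.002 * np.sin(yy + xx)
planes = find_z_planes(smooth_depth(depth).ravel())
print(planes)  # [1.0]: one Z-plane detected at 1.0 m
```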