G06T2207/10028

Single-pass object scanning

Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and depth data corresponding to each image of the subset. For example, an example process may include acquiring sensor data during movement of the device in a physical environment that includes an object, the sensor data including images of the physical environment captured via a camera on the device; selecting a subset of the images by assessing them for motion-based defects based on device motion and depth data; and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
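The frame-selection step could be sketched as follows. This is an illustrative assumption, not the patented method: each frame is scored by its device-motion magnitude (a proxy for motion blur), and low-motion frames are subsampled to form the subset.

```python
# Hypothetical sketch: score each frame by how much the device moved during
# capture (a proxy for motion blur) and keep a subsample of the sharpest ones.
def select_frames(frames, motion_mags, max_motion=0.05, stride=3):
    """frames: list of frame ids; motion_mags: per-frame motion magnitude.
    Keep every `stride`-th frame whose motion is below `max_motion`."""
    candidates = [f for f, m in zip(frames, motion_mags) if m < max_motion]
    return candidates[::stride]
```

The names `max_motion` and `stride` are assumed parameters; a real system would likely also use the depth data to weight how much a given motion blurs near versus far content.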

Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas

An exemplary method includes maintaining a receiver-side mesh-vertices list; receiving duplicative-vertex information from a sender and responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information; and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject, at least in part by weighting video pixel colors from the different vantage points of video cameras that capture video streams of the subject, the weighting being performed according to the geometric relationship of each vantage point to a user-selected viewpoint.
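One plausible form of the viewpoint-dependent weighting, sketched here as an assumption rather than the patent's formula, is to weight each camera's pixel color by the cosine similarity between its viewing direction and the user-selected viewpoint direction:

```python
import numpy as np

# Illustrative sketch (not the patented weighting): blend per-camera pixel
# colors with weights proportional to how well each camera's viewing
# direction agrees with the user-selected viewpoint direction.
def blend_colors(cam_dirs, colors, view_dir):
    cam_dirs = np.asarray(cam_dirs, float)
    view_dir = np.asarray(view_dir, float)
    cam_dirs /= np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    view_dir /= np.linalg.norm(view_dir)
    w = np.clip(cam_dirs @ view_dir, 0.0, None)  # ignore back-facing cameras
    w /= w.sum()
    return w @ np.asarray(colors, float)
```

A camera aligned with the selected viewpoint dominates the blend, and cameras facing away contribute nothing, which is the qualitative behavior a viewpoint-adaptive renderer needs.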

Systems, devices, and methods for in-field diagnosis of growth stage and crop yield estimation in a plant area

Methods, devices, and systems may be utilized for detecting one or more properties of a plant area and generating a map of the plant area indicating at least one such property. The system comprises an inspection system associated with a transport device, the inspection system including one or more sensors configured to generate data for a plant area, including to capture at least 3D image data and 2D image data and to generate geolocational data. A datacenter is configured to: receive the 3D image data, 2D image data, and geolocational data from the inspection system; correlate the 3D image data, 2D image data, and geolocational data; and analyze the data for the plant area. A dashboard is configured to display a map with icons at the proper geolocations, together with the image data and the analysis.
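The correlation step might look like the following sketch, assuming (this is not specified in the abstract) that records are matched by nearest capture timestamp:

```python
# Hypothetical sketch of the datacenter's correlation step: pair each 3D
# capture with the nearest-in-time 2D capture and geolocation fix.
def correlate(records_3d, records_2d, geo_fixes):
    """Each input is a list of (timestamp, payload) tuples."""
    def nearest(ts, records):
        return min(records, key=lambda r: abs(r[0] - ts))[1]
    return [
        {"t": t, "cloud": cloud,
         "image": nearest(t, records_2d),
         "geo": nearest(t, geo_fixes)}
        for t, cloud in records_3d
    ]
```

Correlating by geolocation instead of (or in addition to) time would follow the same pattern with a spatial distance in place of the time difference.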

High-definition city mapping
11580688 · 2023-02-14

A vehicle generates a city-scale map. The vehicle includes one or more Lidar sensors configured to obtain point clouds at different positions, orientations, and times; one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the vehicle to: register, in pairs, a subset of the point clouds based on the respective surface normals of each point cloud; determine loop closures based on the registered subset of point clouds; determine a position and an orientation of each point cloud of the subset based on constraints associated with the determined loop closures; and generate a map based on the determined position and orientation of each point cloud of the subset.
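Normal-based pairwise registration typically minimizes a point-to-plane error. As a hedged illustration of that residual (the function name and interface are assumptions, not from the patent):

```python
import numpy as np

# Sketch of the point-to-plane residual used in normal-based pairwise
# registration: distance of each source point to the tangent plane at its
# corresponding destination point.
def point_to_plane_residual(src, dst, dst_normals):
    src, dst, n = (np.asarray(a, float) for a in (src, dst, dst_normals))
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    return float(np.mean(np.abs(np.einsum("ij,ij->i", src - dst, n))))
```

Because displacement tangent to the surface contributes nothing to this residual, flat regions constrain the alignment only along their normals, which is why surface normals matter for registration quality.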

System and method for automated surface assessment

Embodiments described herein provide a system for assessing the surface of an object to detect contamination or other defects. During operation, the system obtains an input image showing the contamination on the object and generates a synthetic image from the input image using an artificial intelligence (AI) model. The synthetic image depicts the object without the contamination. The system then determines the difference between the input image and the synthetic image to identify the image area corresponding to the contamination. Subsequently, the system generates a contamination map by highlighting that image area using one or more image enhancement operations.
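The differencing step can be sketched in a few lines, assuming the AI model has already produced the "clean" synthetic image (the threshold value here is an illustrative assumption):

```python
import numpy as np

# Minimal sketch of the differencing step: threshold the absolute difference
# between the input image and the synthetic "clean" image to get a binary
# contamination map.
def contamination_map(input_img, synthetic_img, threshold=30):
    diff = np.abs(np.asarray(input_img, int) - np.asarray(synthetic_img, int))
    return (diff > threshold).astype(np.uint8)
```

The resulting binary map marks exactly the pixels where the input deviates from the model's notion of a clean surface, and the highlighted area can then be overlaid on the input image for display.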

System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function

A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches, mapping image regions of the images to the patch, each image region including at least one color vector, and computing at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of the 3D model in accordance with the at least one minimal color vector computed for that patch; and outputting the 3D model with the BRDF for each patch.
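A plausible reading of the minimal-color-vector step, sketched here as an assumption: among all colors observed for a patch across views, take the one with the smallest magnitude, on the reasoning that the darkest observation is least affected by specular highlights and so best approximates the diffuse color.

```python
import numpy as np

# Sketch of the per-patch diffuse estimate: pick the observed color vector
# with the smallest norm, as a proxy for the specular-free (diffuse) color.
def diffuse_color(patch_colors):
    """patch_colors: (k, 3) colors observed for one patch across views."""
    c = np.asarray(patch_colors, float)
    return c[np.argmin(np.linalg.norm(c, axis=1))]
```

An element-wise per-channel minimum would be another defensible interpretation of "minimal color vector"; the abstract alone does not distinguish the two.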

Method of localization by synchronizing multi sensors and robot implementing same

Disclosed herein are a method of localization by synchronizing multi sensors and a robot implementing the same. The robot according to an embodiment includes a controller that, when a first sensor acquires first type information, generates first type odometry information using the first type information; that, at the time point when the first type odometry information is generated, acquires second type information by controlling a second sensor and then generates second type odometry information using the second type information; and that localizes the robot by combining the first type odometry information and the second type odometry information.
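Once both odometry estimates refer to the same time point, one simple way to combine them (an illustrative assumption, not the patent's fusion rule) is a confidence-weighted average of the estimated 2D poses:

```python
# Illustrative fusion step: combine two synchronized odometry estimates,
# each a pose tuple (x, y, heading), as a confidence-weighted average.
def fuse_odometry(pose_a, pose_b, weight_a=0.5):
    wb = 1.0 - weight_a
    return tuple(weight_a * a + wb * b for a, b in zip(pose_a, pose_b))
```

A production system would more likely fuse the two sources with a Kalman-style filter and handle heading angle wraparound, but the synchronization requirement is the same: both estimates must describe the robot at the same instant before they can be combined.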

Method and system for detecting and picking up objects

A method includes the steps of: capturing an image of a container; recognizing at least one object in the container based on the image; determining at least one first coordinate set corresponding to the at least one object; determining at least one second coordinate set that corresponds to target one(s) of the at least one first coordinate set and that relates to a fixed picking device of a robotic arm; adjusting the position(s) of unfixed picking device(s) of the robotic arm if necessary; and controlling the robotic arm to pick up the one(s) of the at least one object that correspond(s) to the at least one second coordinate set with the fixed picking device and/or at least one unfixed picking device.
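The mapping from the first coordinate set (camera frame) to the second (the fixed picking device's frame) is, in the common case, a rigid transform obtained from hand-eye calibration. A hedged sketch, with all names assumed:

```python
import numpy as np

# Hypothetical sketch of the coordinate mapping: convert camera-frame object
# coordinates into the fixed picking device's frame via a known rigid
# transform (rotation R, translation t) from hand-eye calibration.
def to_picker_frame(points_cam, R, t):
    """points_cam: (n, 3); R: (3, 3) rotation; t: (3,) translation."""
    return np.asarray(points_cam, float) @ np.asarray(R, float).T + np.asarray(t, float)
```

With the target coordinates expressed in the picker's own frame, the controller can command the arm directly, and any unfixed picking devices can be offset relative to the same frame.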

Object detection using multiple three dimensional scans

One exemplary implementation facilitates object detection using multiple scans of an object in different lighting conditions. For example, a first scan of the object can be created by capturing images of the object by moving an image sensor on a first path in a first lighting condition, e.g., bright lighting. A second scan of the object can then be created by capturing additional images of the object by moving the image sensor on a second path in a second lighting condition, e.g., dim lighting. Implementations determine a transform that associates the scan data from these multiple scans with one another and use the transform to generate a 3D model of the object in a single coordinate system. Augmented content can be positioned relative to that object in the single coordinate system and thus will be displayed in the appropriate location regardless of the lighting condition in which the physical object is later detected.
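Given matched feature points between the two scans, the associating transform can be estimated with the standard Kabsch/Procrustes procedure. This is a common technique offered as an illustration; the abstract does not say how the implementation associates the scans.

```python
import numpy as np

# Sketch: estimate the rigid transform (R, t) mapping scan A's matched
# points onto scan B's, via the Kabsch algorithm (SVD of the cross-covariance).
def rigid_transform(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    ca, cb = a.mean(0), b.mean(0)
    H = (a - ca).T @ (b - cb)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    return R, cb - R @ ca
```

Applying `R` and `t` to the first scan expresses both scans, and hence the merged 3D model, in one coordinate system.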

Photogrammetric alignment for immersive content production

A method of content production includes: generating a survey of a performance area that includes a point cloud representing a first physical object; constraining, in a survey graph hierarchy, the point cloud and a taking-camera coordinate system as child nodes of the origin of a survey coordinate system; obtaining virtual content including a first virtual object that corresponds to the first physical object; applying a transformation to the origin of the survey coordinate system so that at least a portion of the point cloud that represents the first physical object is substantially aligned with the portion of the virtual content that represents the first virtual object; displaying the first virtual object on one or more displays from the perspective of the taking camera; capturing, using the taking camera, one or more images of the performance area; and generating content based on the one or more images.
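Because the point cloud and the taking-camera frame are children of the survey origin, a single homogeneous transform applied to that origin moves both together. A minimal sketch of that alignment step, assuming the 4x4 transform has been computed elsewhere:

```python
import numpy as np

# Sketch of the alignment step: apply a 4x4 homogeneous transform (computed
# to align the survey with the virtual content) to survey-frame points.
# Child nodes of the origin, like the taking camera, inherit the same move.
def apply_to_origin(T, points):
    """T: (4, 4) homogeneous transform; points: (n, 3) survey-frame points."""
    p = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    return (p @ np.asarray(T, float).T)[:, :3]
```

Transforming the parent node rather than the virtual content means the photogrammetric survey is registered to the virtual world once, and every camera pose recorded in the survey frame stays consistent with the displayed content.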