
MODEL PREDICTION

Examples of methods for model prediction are described herein. In some examples, a method includes predicting a compensated model. In some examples, the compensated model is predicted based on a three-dimensional (3D) object model. In some examples, a method includes predicting a deformed model. In some examples, the deformed model is predicted based on the compensated model.

TREE CROWN EXTRACTION METHOD BASED ON UNMANNED AERIAL VEHICLE MULTI-SOURCE REMOTE SENSING
20230039554 · 2023-02-09 ·

A tree crown extraction method based on UAV multi-source remote sensing includes: obtaining a visible light image and LIDAR point clouds, taking a digital orthophoto map (DOM) and the LIDAR point clouds as data sources, and using watershed segmentation and object-oriented multi-scale segmentation to extract single-tree crown information under different canopy densities. The object-oriented multi-scale segmentation method is used to distinguish crown and non-crown areas, and a tree crown distribution range is extracted with the crown area as a mask; a preliminary single-tree crown segmentation result is obtained by watershed segmentation based on a canopy height model; taking the brightness value of the DOM as a feature, secondary segmentation is performed on the crown area of the DOM based on the crown boundary to obtain optimized single-tree crown boundary information, which greatly increases the accuracy of remote sensing tree crown extraction.
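
The watershed step on the canopy height model can be illustrated with a toy sketch. The `watershed_crowns` function, its priority-flood strategy, and the `min_height` parameter are assumptions for illustration, not the patented implementation:

```python
import heapq

def watershed_crowns(chm, min_height=2.0):
    """Toy marker-based watershed on a canopy height model (CHM).

    Seeds are local maxima above min_height (tree tops); labels grow
    downhill from the highest cells first, mimicking single-tree crown
    delineation. Illustrative sketch only.
    """
    rows, cols = len(chm), len(chm[0])
    labels = [[0] * cols for _ in range(rows)]
    heap, next_label = [], 1

    def neighbors(r, c):
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    # Seed at local maxima (candidate tree tops).
    for r in range(rows):
        for c in range(cols):
            h = chm[r][c]
            if h >= min_height and all(chm[nr][nc] <= h for nr, nc in neighbors(r, c)):
                labels[r][c] = next_label
                heapq.heappush(heap, (-h, r, c))
                next_label += 1

    # Flood outward, always expanding from the highest labeled cell.
    while heap:
        _, r, c = heapq.heappop(heap)
        for nr, nc in neighbors(r, c):
            if labels[nr][nc] == 0 and chm[nr][nc] >= min_height:
                labels[nr][nc] = labels[r][c]
                heapq.heappush(heap, (-chm[nr][nc], nr, nc))
    return labels
```

A real pipeline would run this on a rasterized CHM derived from the LIDAR point clouds, then refine the boundaries with the DOM brightness as described.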

HIGH-DEFINITION MAP CREATION METHOD AND DEVICE, AND ELECTRONIC DEVICE

A high-definition map creation method includes: obtaining point cloud data collected with respect to a target region, the point cloud data including K frames of point clouds and an initial pose of each frame of point cloud, K being an integer greater than 1; associating the K frames of point clouds with each other in accordance with the initial pose to obtain a first point cloud relation graph of the K frames of point clouds; performing point cloud registration on the K frames of point clouds in accordance with the first point cloud relation graph and the initial pose to obtain a target relative pose of each frame of point cloud in the K frames of point clouds; and splicing the K frames of point clouds in accordance with the target relative pose to obtain a point cloud map of the target region.
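
The final splicing step can be sketched in miniature for 2D point clouds. The `splice_frames` function and the (x, y, yaw) pose convention are illustrative assumptions; the patent operates on full 3D poses obtained from registration:

```python
import math

def splice_frames(frames, poses):
    """Transform each point-cloud frame into the map frame by its
    relative pose (tx, ty, yaw) and concatenate the results, sketching
    how registered frames are spliced into one point cloud map.
    """
    cloud_map = []
    for points, (tx, ty, yaw) in zip(frames, poses):
        c, s = math.cos(yaw), math.sin(yaw)
        for x, y in points:
            # Rigid 2D transform: rotate by yaw, then translate.
            cloud_map.append((c * x - s * y + tx, s * x + c * y + ty))
    return cloud_map
```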

DETERMINING MINIMUM REGION FOR FINDING PLANAR SURFACES
20230037328 · 2023-02-09 ·

Systems, devices, methods, and computer-readable media for determining planarity in a 3D data set are provided. A method can include receiving or retrieving three-dimensional (3D) data of a geographical region, dividing the 3D data into first contiguous regions of specified first geographical dimensions, determining, for each first contiguous region of the first contiguous regions, respective measures of variation, identifying, based on the respective measures of variation, a search radius, dividing the 3D data into respective second contiguous or overlapping regions with dimensions the size of the identified search radius, and determining, based on the identified search radius, a planarity of each of the respective second contiguous or overlapping regions.
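
A minimal sketch of the region-wise variation test, assuming z-variance as the "measure of variation" and square xy cells as the contiguous regions (both are stand-ins, not the claimed method):

```python
from collections import defaultdict
from statistics import pvariance

def planarity_by_region(points, cell_size, var_threshold=0.01):
    """Bucket (x, y, z) points into square xy cells of side cell_size
    and flag a cell as planar when its z-variance is at most
    var_threshold. Illustrative stand-in for the measure of variation.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell_size), int(y // cell_size))].append(z)
    return {cell: pvariance(zs) <= var_threshold for cell, zs in cells.items()}
```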

SYSTEM AND METHOD FOR 3D MULTI-OBJECT TRACKING IN LIDAR POINT CLOUDS
20230043061 · 2023-02-09 ·

A method and a device for multi-object tracking, and an electronic device are provided. The method includes: determining a hybrid-time position map of a current point cloud fragment; converting a tracked position map of a previous point cloud fragment into a temporary tracked position map of the current point cloud fragment; and averaging the hybrid-time position map and the temporary tracked position map of the current point cloud fragment to generate a tracked position map of the current point cloud fragment. With the method, the device, and the electronic device, the hybrid-time position map and the temporary tracked position map of the current point cloud fragment are averaged, so that the tracked position map of the current point cloud fragment is not only generated accurately but also inherits the object IDs. Based on an object ID, the same object in different point cloud fragments is associated, so that multi-object tracking is implemented without the association step of conventional solutions. No additional hyper-parameters need to be set, and strong versatility is achieved.
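
The averaging-with-ID-inheritance idea can be sketched with dictionaries keyed by object ID. The `fuse_position_maps` name and the scalar per-object values are illustrative assumptions; the patent operates on full position maps:

```python
def fuse_position_maps(hybrid_map, temp_tracked_map):
    """Average per-object values of the hybrid-time map and the
    temporary tracked map; objects present in only one map are carried
    over unchanged, so the previous fragment's object IDs (the keys)
    are inherited into the fused result.
    """
    fused = {}
    for key in set(hybrid_map) | set(temp_tracked_map):
        vals = [m[key] for m in (hybrid_map, temp_tracked_map) if key in m]
        fused[key] = sum(vals) / len(vals)
    return fused
```

Because association happens implicitly through shared keys, no separate matching step or extra hyper-parameters are needed in this toy version.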

ROBOTIC SYSTEM WITH IMAGE-BASED SIZING MECHANISM AND METHODS FOR OPERATING THE SAME

A system and method for estimating aspects of target objects and/or associated task implementations is disclosed.

Systems and Methods for Image Based Perception

Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
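
The combining step can be illustrated on 2D feature maps that overlap by a known number of columns. The `combine_overlap` function and the averaging rule are assumptions for illustration, not the claimed combination method:

```python
def combine_overlap(map_a, map_b, overlap):
    """Average the overlapping columns of two row-aligned 2D feature
    maps and stitch the non-overlapping parts around them — a minimal
    stand-in for combining spatial feature maps from cameras with
    overlapping fields of view.
    """
    combined = []
    for row_a, row_b in zip(map_a, map_b):
        left = row_a[:-overlap]                       # unique to camera A
        shared = [(x + y) / 2                         # shared field of view
                  for x, y in zip(row_a[-overlap:], row_b[:overlap])]
        right = row_b[overlap:]                       # unique to camera B
        combined.append(left + shared + right)
    return combined
```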

APPARATUS FOR ACQUIRING DEPTH IMAGE, METHOD FOR FUSING DEPTH IMAGES, AND TERMINAL DEVICE
20230042846 · 2023-02-09 ·

Provided are an apparatus for acquiring a depth image, a method for fusing depth images, and a terminal device. The apparatus for acquiring a depth image includes an emitting module, a receiving module, and a processing unit. The emitting module is configured to emit a speckle array to an object, where the speckle array includes p mutually spaced apart speckles. The receiving module includes an image sensor that outputs a pixel signal. The processing unit is configured to receive the pixel signal and generate a sparse depth image based on the pixel signal, align an RGB image at a resolution of a*b with the sparse depth image, and fuse the aligned sparse depth image with the RGB image using a pre-trained image fusion model to obtain a dense depth image at a resolution of a*b.
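
The alignment of the low-resolution sparse depth image to the RGB resolution a*b can be sketched as a nearest-neighbour upsampling. The `upsample_sparse_depth` function is an illustrative stand-in; the actual alignment and the learned fusion model are not specified here:

```python
def upsample_sparse_depth(sparse, a, b):
    """Nearest-neighbour upsample of a small depth image to width a and
    height b, standing in for the align step that precedes fusion with
    the RGB image by the pre-trained model.
    """
    rows, cols = len(sparse), len(sparse[0])
    return [[sparse[r * rows // b][c * cols // a] for c in range(a)]
            for r in range(b)]
```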

PRODUCT TARGET QUALITY CONTROL SYSTEM

A process includes receiving a target quality value, receiving a measured quality value, receiving a source quality value, and sending a source control instruction. The source control instruction is based at least in part on the target quality value, the measured quality value, and the source quality value. The target quality value, the measured quality value, the source quality value, and the source control instruction are communicated via a communication port. The measured quality value is generated by an inspection device configured to inspect a sample. The source quality value is associated with a quality level of a first group of samples. The target quality value indicates a desired quality value of an output group of samples. The source control instruction causes a source selecting device to select one of a plurality of groups of samples, each group having identified quality characteristics.
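
One plausible reading of how a source control instruction could be derived is sketched below. The `select_source_group` function and its over-correction rule are purely hypothetical assumptions, not the patented control logic:

```python
def select_source_group(target_q, measured_q, groups):
    """Pick the source group whose quality value best compensates the
    gap between measured and target quality. If the measured output is
    below target, prefer a source whose quality sits above target by a
    matching margin (a simple proportional correction).
    """
    needed = target_q + (target_q - measured_q)
    return min(groups, key=lambda g: abs(groups[g] - needed))
```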

Systems and Methods for Image Based Perception

Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras should be associated with a same detected object.
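
The cross-camera assignment step can be sketched as a greedy association of cuboid centers. The `associate_cuboids` function, the 2D centers, and the `max_dist` threshold are illustrative assumptions; the claim itself does not specify the association criterion:

```python
def associate_cuboids(preds_cam1, preds_cam2, max_dist=1.0):
    """Greedily group cuboid centers from two cameras: a center from
    camera 2 joins the nearest existing object within max_dist,
    otherwise it starts a new object. Objects holding two cuboids model
    the 'same detected object seen by separate cameras' case.
    """
    assignments, next_id = {}, 0
    for c1 in preds_cam1:
        assignments[next_id] = [c1]
        next_id += 1
    for c2 in preds_cam2:
        best = None
        for oid, members in assignments.items():
            cx, cy = members[0]
            d = ((c2[0] - cx) ** 2 + (c2[1] - cy) ** 2) ** 0.5
            if d <= max_dist and (best is None or d < best[1]):
                best = (oid, d)
        if best is not None:
            assignments[best[0]].append(c2)
        else:
            assignments[next_id] = [c2]
            next_id += 1
    return assignments
```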