Patent classifications
G06T7/521
LOOP CLOSURE DETECTION METHOD AND SYSTEM, MULTI-SENSOR FUSION SLAM SYSTEM, ROBOT, AND MEDIUM
The present invention provides a loop closure detection method and system, a multi-sensor fusion SLAM system, a robot, and a medium. Said system runs on a mobile robot and comprises a similarity detection unit, a visual pose solving unit, and a laser pose solving unit. The loop closure detection system, the multi-sensor fusion SLAM system, and the robot provided in the present invention can significantly improve the speed and accuracy of loop closure detection under conditions such as changes in the robot's viewing angle, changes in environmental brightness, and weak texture.
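The abstract does not specify how the similarity detection unit compares keyframes. As an illustrative sketch only (the function name, descriptor type, and thresholds are assumptions, not the patent's method), a descriptor-based candidate search for loop closure might look like:

```python
import numpy as np

def detect_loop_candidates(descriptors, query_idx, sim_threshold=0.85, min_gap=50):
    """Return earlier keyframes whose global descriptors are similar enough
    to the query keyframe to be loop-closure candidates.

    descriptors: (N, D) array of L2-normalized global image descriptors
    (e.g. bag-of-words vectors or learned embeddings).
    min_gap skips recent frames, which are trivially similar to the query.
    """
    query = descriptors[query_idx]
    candidates = []
    for i in range(query_idx - min_gap):
        sim = float(np.dot(descriptors[i], query))  # cosine similarity
        if sim >= sim_threshold:
            candidates.append((i, sim))
    # highest-similarity candidates first
    return sorted(candidates, key=lambda c: -c[1])
```

Surviving candidates would then be passed to the visual and laser pose solving units for geometric verification.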
ADDING AN ADAPTIVE OFFSET TERM USING CONVOLUTION TECHNIQUES TO A LOCAL ADAPTIVE BINARIZATION EXPRESSION
An apparatus comprising an interface, a structured light projector and a processor. The interface may receive pixel data. The structured light projector may generate a structured light pattern. The processor may process the pixel data arranged as video frames, perform operations using a convolutional neural network to determine a binarization result and an offset value and generate disparity and depth maps in response to the video frames, the structured light pattern, the binarization result, the offset value and a removal of error points. The convolutional neural network may perform a partial block summation to generate a convolution result, compare the convolution result to a speckle value to determine the offset value, generate an adaptive result in response to performing a convolution operation, compare the video frames to the adaptive result to generate the binarization result for the video frames, and remove the error points from the binarization result.
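The core of the abstract is a local adaptive binarization whose threshold is a windowed mean (computed by a partial block summation, i.e. convolution with a box kernel) plus an offset term. The sketch below uses an integral image for the block summation and a constant offset for simplicity; the patent's contribution is making that offset adaptive via a convolutional neural network, which is not reproduced here.

```python
import numpy as np

def local_adaptive_binarize(image, block=5, offset=10):
    """Binarize `image` against a local threshold: the mean over a
    block x block window plus `offset`. The window mean is computed with
    an integral image, i.e. a partial block summation."""
    img = image.astype(np.float64)
    pad = block // 2
    padded = np.pad(img, pad, mode='edge')
    # integral image with a leading zero row/column
    ii = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    ii[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    # each window sum needs only four integral-image lookups
    sums = (ii[block:block + h, block:block + w]
            - ii[:h, block:block + w]
            - ii[block:block + h, :w]
            + ii[:h, :w])
    local_mean = sums / (block * block)
    return (img > local_mean + offset).astype(np.uint8)
```

This is the same family of operation as OpenCV's mean-based adaptive thresholding; the apparatus described above additionally removes error points and feeds the binarization result into disparity and depth map generation.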
SYSTEMS AND METHODS FOR IDENTIFYING INCLINED REGIONS
Systems and methods for identifying inclined regions are provided. In one aspect, a method is provided that includes receiving shadow data for at least one first ground object in a first region, wherein each first ground object is depicted in one overhead image of the first region, wherein the shadow data comprises a length of the respective first ground object as identified from the respective overhead image; receiving shadow data for at least one second comparable ground object in a second region, wherein each second ground object is depicted in one overhead image of the second region, wherein the shadow data comprises a length of the respective second ground object as identified from the respective overhead image; calculating a statistical measure describing the variability of the shadow lengths between objects in the first region and the second region; comparing the statistical measure to a predetermined threshold; and based on the comparison, identifying that the first region is inclined relative to the second region.
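The method's logic can be condensed: comparable objects (say, poles of equal height) cast shadows of similar length on level ground, so high variability in shadow length between two regions suggests an incline. A minimal sketch, assuming the statistical measure is a coefficient of variation over the pooled lengths (the abstract does not fix the statistic or the threshold):

```python
import statistics

def region_is_inclined(shadows_a, shadows_b, threshold=0.15):
    """Decide whether region A is inclined relative to region B from the
    shadow lengths of comparable ground objects in overhead imagery.

    Pools the lengths from both regions and compares their coefficient of
    variation (stdev / mean) against a threshold: low variability implies
    level, comparable terrain; high variability implies an incline."""
    pooled = list(shadows_a) + list(shadows_b)
    cv = statistics.stdev(pooled) / statistics.mean(pooled)
    return cv > threshold
```

A real implementation would first normalize for sun elevation and image scale so that the lengths from the two overhead images are actually comparable.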
Systems and methods for detecting and correcting data density during point cloud generation
A point cloud capture system is provided to detect and correct data density during point cloud generation. The system obtains data points that are distributed within a space and that collectively represent one or more surfaces of an object, scene, or environment. The system computes the different densities with which the data points are distributed in different regions of the space, and presents an interface with a first representation for a first region of the space in which a first subset of the data points are distributed with a first density, and a second representation for a second region of the space in which a second subset of the data points are distributed with a second density.
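The density computation step can be illustrated with a uniform voxel grid, one plausible way to carve the space into "regions" (the abstract does not say how regions are defined; the grid partition and function name below are assumptions):

```python
import numpy as np
from collections import Counter

def region_densities(points, cell=1.0):
    """Partition space into cubic cells of side `cell` and return the
    density (points per unit volume) of each occupied cell.

    points: (N, 3) array-like of x, y, z coordinates."""
    keys = np.floor(np.asarray(points, dtype=np.float64) / cell).astype(int)
    counts = Counter(map(tuple, keys))
    volume = cell ** 3
    return {key: count / volume for key, count in counts.items()}
```

The per-region densities would then drive the interface described above, e.g. rendering low-density regions differently so the operator knows where to capture more points.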
System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector; and computing, for each patch, at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of planar patches of the 3D model in accordance with the at least one minimal color vector computed for each patch; and outputting the 3D model with the BRDF for each patch.
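The key observation behind the minimal-color-vector step is that specular highlights only add light on top of the diffuse term, so the darkest observation of a patch across viewpoints is the best available estimate of its diffuse color. A sketch under that assumption, reading "minimal color vector" as the observation with the smallest luminance (one plausible interpretation; the data layout is hypothetical):

```python
import numpy as np

def diffuse_color_per_patch(observations):
    """Estimate the diffuse BRDF component of each planar patch as the
    minimal observed color vector across the images that see it.

    observations: dict mapping patch id -> (K, 3) array of RGB color
    vectors sampled from the K image regions mapped to that patch."""
    diffuse = {}
    for patch_id, colors in observations.items():
        colors = np.asarray(colors, dtype=np.float64)
        # Rec. 709 luminance weights pick out the darkest observation
        luminance = colors @ np.array([0.2126, 0.7152, 0.0722])
        diffuse[patch_id] = colors[np.argmin(luminance)]
    return diffuse
```

The method above then attaches this diffuse component, per patch, to the output 3D model.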