Patent classifications
G01C11/30
Systems and Methods of Determining Image Scaling
An example system includes two objects, each of known dimension, spaced apart by a known distance, and a fixture having an opening for receiving an imaging device and for holding the two objects in the device's field of view such that the field of view originates from a point normal to a surface of the fixture's base. The fixture holds the imaging device at a fixed distance from the object being imaged and controls the amount of incident light reaching the imaging device. An example method of determining image scaling includes holding an imaging device at a fixed distance from an object being imaged and positioning the two objects in its field of view such that the field of view originates from a point normal to the line formed by the known distance.
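Once the two reference objects are detected in the image, the scale follows from the ratio of their known real-world separation to their pixel separation. A minimal sketch, assuming the objects' pixel centers have already been found (the function name and values are illustrative, not from the patent):

```python
# Derive an image scale (scene units per pixel) from two reference
# objects a known distance apart, as the abstract describes.

def image_scale(px_a, px_b, known_distance):
    """Return scene units per pixel given the pixel centers of two
    reference objects and the known distance between them."""
    dx = px_b[0] - px_a[0]
    dy = px_b[1] - px_a[1]
    pixel_separation = (dx * dx + dy * dy) ** 0.5
    return known_distance / pixel_separation

# Two markers 150 mm apart whose centers are detected 300 px apart:
scale = image_scale((100.0, 240.0), (400.0, 240.0), 150.0)
print(scale)  # 0.5 mm per pixel
```

Because the fixture holds the camera at a fixed distance and normal to the line between the objects, this single scale factor applies across the imaged plane without perspective correction.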
Method for 2D picture based conglomeration in 3D surveying
A method for three-dimensional surveying of a 3D-scene to derive a true-to-size 3D-model. It involves deriving a first 3D-partial-model of a section of the 3D-scene together with capturing at least one first 2D-visual-image, and a second 3D-partial-model of another section of the 3D-scene together with capturing at least one second 2D-visual-image, the 3D-partial-models partially overlapping. The first 3D-partial-model is conglomerated with the second 3D-partial-model to form the 3D-model of the 3D-scene by defining a first line segment in the first 2D-visual-image and a second line segment in the second 2D-visual-image, where the first and second line segments represent a visual feature common to both 2D-visual-images. The line segments in the 2D-visual-images are used to conglomerate the corresponding 3D-partial-models into the 3D-model of the whole 3D-scene.
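The core geometric step is recovering the transform that maps one partial model's line segment onto the matching segment in the other. A sketch of that step in 2-D, assuming the two segments are already matched and share endpoint correspondence (a simplification of the patent's procedure, with illustrative names):

```python
import math

def segment_alignment(seg_a, seg_b):
    """Rigid 2-D transform (rotation theta, translation (tx, ty)) that
    maps segment seg_b onto segment seg_a; each segment is given as a
    pair of endpoints ((x1, y1), (x2, y2))."""
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b
    # Rotation aligning the direction of seg_b with that of seg_a.
    theta = math.atan2(ay2 - ay1, ax2 - ax1) - math.atan2(by2 - by1, bx2 - bx1)
    c, s = math.cos(theta), math.sin(theta)
    # Translation so the rotated first endpoint of seg_b lands on seg_a's.
    tx = ax1 - (c * bx1 - s * by1)
    ty = ay1 - (s * bx1 + c * by1)
    return theta, (tx, ty)
```

Applying the resulting transform to the second 3D-partial-model (lifted to the segment's plane) brings the overlapping regions into a common frame so the models can be merged.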
Measurement method and apparatus
A measurement method and apparatus are provided. The measurement method is applicable to an image acquisition device, and includes: acquiring image data to generate an image data file (S101); capturing an object to be measured in an image corresponding to the image data file (S102); obtaining a first distance between a horizontal line through the lowest point of the object to be measured in the image and a horizontal line through the center point of the image (S103); and calculating a second distance between the object to be measured and the image acquisition device based on the first distance, the installation height of the image acquisition device, and the pitch angle of the image acquisition device (S104). Compared with the related art, the image acquisition device can measure the distance to an object while achieving relatively low production cost and easy installation. As a result, actual demands can be better satisfied.
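Under a pinhole-camera model, step S104 amounts to converting the pixel offset below the image center into an angle and intersecting the resulting ray with the ground plane. A minimal sketch under that assumption (the patent does not publish its exact formula; the focal length in pixels is an assumed calibration input):

```python
import math

def ground_distance(pixel_offset, focal_px, cam_height, pitch_rad):
    """Distance along the ground from the camera to an object whose
    lowest point appears pixel_offset rows below the image center,
    for a camera mounted at cam_height and pitched down by pitch_rad."""
    # Angle of the ray to the object's base, below the optical axis.
    angle_below_axis = math.atan(pixel_offset / focal_px)
    # Total angle below the horizontal, then intersect with the ground.
    total_angle = pitch_rad + angle_below_axis
    return cam_height / math.tan(total_angle)
```

For example, a camera 3 m up, pitched down 30 degrees, seeing the object's base 15 degrees below the optical axis, yields a 45-degree ray and a ground distance equal to the mounting height.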
Visual odometry and pairwise alignment for high definition map creation
As an autonomous vehicle moves through a local area, pairwise alignment may be performed to calculate changes in the pose of the vehicle between different points in time. The vehicle comprises an imaging system configured to capture image frames depicting a portion of the surrounding area. Features are identified from the captured image frames, and a 3-D location is determined for each identified feature. The features of different image frames corresponding to different points in time are analyzed to determine a transformation in the pose of the vehicle during the time period between the image frames. The determined poses of the vehicle are used to generate an HD map of the local area.
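Given matched 3-D feature locations from two image frames, the pose change can be estimated as the least-squares rigid transform between the two point sets. A sketch of that alignment step using the Kabsch algorithm (a standard choice; the patent may use a different estimator):

```python
import numpy as np

def pairwise_alignment(pts_prev, pts_curr):
    """Least-squares rigid transform (R, t) mapping matched 3-D feature
    points in the previous frame onto their positions in the current
    frame, via the Kabsch algorithm."""
    p = np.asarray(pts_prev, dtype=float)
    q = np.asarray(pts_curr, dtype=float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (p - cp).T @ (q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Chaining these per-pair transforms gives the vehicle's trajectory, from which the observed features can be placed in a common frame for the HD map.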
IMAGE PROCESSING DEVICE, MOBILE ROBOT CONTROL SYSTEM, AND MOBILE ROBOT CONTROL METHOD
This image processing device includes: a detection object comprising cells, among them first cells capable of reflecting emitted light and second cells incapable of reflecting it, the cells being squares or rectangles arranged in an a×a or a×b (where a, b = 3, 4, 5, 6, . . . ) matrix on a two-dimensional plane; and a detector including: an illuminator emitting light; imagers that image, with a camera, the light reflected from the first cells after the first and second cells constituting the detection object are illuminated by the illuminator; and a calculator that obtains the information set on the detection object from the image data taken by the imagers. Such a configuration can accurately identify a compact marker, measure distance, and be realized inexpensively.
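Once the imagers have classified each cell as reflective or non-reflective, reading the marker reduces to turning the boolean grid into an identifier. A toy sketch of that decoding step (the patent does not specify its encoding scheme, so the row-major bit layout here is an assumption):

```python
def decode_marker(cells):
    """Read an a x b grid of reflective (1) / non-reflective (0) cells
    row by row, most significant bit first, into an integer ID."""
    value = 0
    for row in cells:
        for cell in row:
            value = (value << 1) | (1 if cell else 0)
    return value

# A 3x3 marker whose cells spell out the binary string 101000110:
print(decode_marker([[1, 0, 1], [0, 0, 0], [1, 1, 0]]))  # 326
```

A real marker scheme would typically reserve some cells for orientation and error detection rather than using all a×b cells as payload.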
Imaging device
An imaging device having an optical system including a free-form surface lens of rotationally asymmetric shape that forms an image on an imaging surface such that the resolution of a first region in front of a predetermined region is higher than the resolution of a second region at a lateral side of the predetermined region. The free-form surface lens has a shape that forms the image such that, in the imaging element in which the first region and the second region are aligned, the resolution of a portion a predetermined first distance away from the center of the first region in the vertical direction differs from the resolution of a portion the same first distance away from the center in the horizontal direction, the vertical direction being orthogonal to the horizontal direction.
Method for Determining Distance Information from Images of a Spatial Region
A method includes defining a disparity range having discrete disparities and taking first, second, and third images of a spatial region using first, second, and third imaging units. The imaging units are arranged in an isosceles triangle geometry. The method includes determining first similarity values for a pixel of the first image for all the discrete disparities along a first epipolar line associated with the pixel in the second image. The method includes determining second similarity values for the pixel for all discrete disparities along a second epipolar line associated with the pixel in the third image. The method includes combining the first and second similarity values and determining a common disparity based on the combined similarity values. The method includes determining a distance to a point within the spatial region for the pixel from the common disparity and the isosceles triangle geometry.
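The combination step can be sketched as summing the two per-disparity similarity (cost) curves, picking the disparity that minimizes the combined cost, and converting it to distance with the standard triangulation relation Z = f·B/d. A minimal illustration of that combination, assuming equal baselines B from the isosceles geometry and cost-style similarity values where lower is better:

```python
def depth_from_disparity(costs_ab, costs_ac, focal_px, baseline):
    """Combine per-disparity matching costs from the two image pairs
    (first/second and first/third), pick the best common disparity,
    and convert it to metric distance via Z = f * B / d."""
    combined = [a + b for a, b in zip(costs_ab, costs_ac)]
    d = combined.index(min(combined))  # common disparity, in pixels
    return focal_px * baseline / d if d > 0 else float("inf")
```

Fusing the two epipolar searches before selecting the disparity is what lets the third camera disambiguate matches that are ambiguous along a single baseline.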