Patent classifications
G01C11/30
Real time position and orientation tracker
The present disclosure relates to a tracking system for tracking the position and/or orientation of an object in an environment. The tracking system includes: at least one camera mounted to the object; a plurality of spaced-apart targets, at least some of which are viewable by the at least one camera; and one or more electronic processing devices configured to: determine target position data indicative of the relative spatial positions of the targets; receive image data indicative of an image from the at least one camera, the image including at least some of the targets; process the image data to identify one or more targets in the image and determine pixel array coordinates corresponding to the positions of those targets in the image; and use the processed image data to determine the position and/or orientation of the object by triangulation.
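The idea above can be illustrated with a minimal 2D resection sketch. Assumptions not in the abstract: a pinhole camera (so a target's pixel column converts to a relative bearing via atan2(u - cx, f)), bearings already extracted, and the hypothetical helper name `locate_camera`. Each bearing constrains the camera to a line through the known target position, which is linear in (x, y) for a fixed heading, so a coarse grid over heading plus a 2x2 least-squares solve recovers the pose.

```python
import numpy as np

def locate_camera(targets, bearings_rel, theta_grid=None):
    """2D resection sketch: recover camera position (x, y) and heading
    theta from relative bearings (radians, camera frame) to targets at
    known positions. Hypothetical illustration, not the patented method."""
    if theta_grid is None:
        theta_grid = np.linspace(-np.pi, np.pi, 3600, endpoint=False)
    targets = np.asarray(targets, float)
    bearings_rel = np.asarray(bearings_rel, float)
    best_residual, best_pose = np.inf, None
    for theta in theta_grid:
        beta = theta + bearings_rel            # absolute bearings to targets
        # Each bearing puts the camera on a line through its target:
        # sin(beta)*x - cos(beta)*y = sin(beta)*tx - cos(beta)*ty
        A = np.column_stack([np.sin(beta), -np.cos(beta)])
        b = np.sin(beta) * targets[:, 0] - np.cos(beta) * targets[:, 1]
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        d = targets - pos                      # reject poses facing away
        if np.any(d[:, 0] * np.cos(beta) + d[:, 1] * np.sin(beta) <= 0):
            continue
        residual = np.linalg.norm(A @ pos - b)
        if residual < best_residual:
            best_residual, best_pose = residual, (pos[0], pos[1], theta)
    return best_pose
```

The front-facing check resolves the 180-degree heading ambiguity that the line constraints alone would leave; a real system would refine the grid result with a local nonlinear solve.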
Systems and methods of determining image scaling
An example system includes two objects each having a known dimension and positioned spaced apart by a known distance, and a fixture having an opening for receiving an imaging device and for holding the two objects in a field of view of the imaging device such that the field of view of the imaging device originates from a point normal to a surface of the fixture's base. The fixture holds the imaging device at a fixed distance from an object being imaged and controls an amount of incident light on the imaging device. An example method of determining image scaling includes holding an imaging device at a fixed distance from an object being imaged, and positioning the two objects in the field of view of the imaging device such that the field of view of the imaging device originates from a point normal to a line formed by the known distance.
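The scaling computation this abstract implies is straightforward: with two reference objects a known real-world distance apart, the ratio of their pixel separation to that distance gives a pixels-per-unit scale, which then converts any other pixel measurement in the same plane. A minimal sketch (function names are hypothetical, not from the patent):

```python
import math

def pixels_per_unit(p1, p2, known_distance):
    """Image scale from two reference objects whose centers p1, p2
    (pixel coordinates) are separated by known_distance in real units."""
    return math.dist(p1, p2) / known_distance   # pixels per unit

def to_real_units(pixel_length, scale):
    """Convert a length measured in pixels to real-world units."""
    return pixel_length / scale
```

This only holds when the imaging device sits at the fixed distance the fixture enforces and the measured feature lies in the same plane as the reference objects.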
Detection of misalignment hotspots for high definition maps for navigating autonomous vehicles
A high-definition map system receives sensor data from vehicles travelling along routes and combines the data to generate a high-definition map for use in driving vehicles, for example, for guiding autonomous vehicles. A pose graph is built from the collected data, each pose representing the location and orientation of a vehicle. The pose graph is optimized to minimize constraint violations between poses. Points associated with a surface are assigned a confidence measure determined using a measure of the hardness/softness of the surface. A machine-learning-based result filter detects bad alignment results and prevents them from being entered in the subsequent global pose optimization. The alignment framework is parallelizable for execution using a parallel/distributed architecture. Alignment hot spots are detected for further verification and improvement. The system supports incremental updates, thereby allowing refinements of subgraphs for incrementally improving the high-definition map to keep it up to date.
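A toy analogue can make the pose-graph optimization step concrete. This is a deliberately simplified 1D chain, not the patented system: poses along a route are linked by odometry constraints, a loop closure ties the last pose back to the first, and least squares distributes the accumulated drift across the chain (all names and data are hypothetical).

```python
import numpy as np

def optimize_chain(odom, loop, anchor=0.0):
    """1D pose-graph sketch: poses x_0..x_{n-1} with odometry edges
    x_{i+1} - x_i = odom[i], a loop-closure edge x_{n-1} - x_0 = loop,
    and a soft anchor fixing the gauge freedom x_0 = anchor."""
    n = len(odom) + 1
    rows, b = [], []
    for i, z in enumerate(odom):           # odometry constraints
        r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
        rows.append(r); b.append(z)
    r = np.zeros(n); r[-1], r[0] = 1.0, -1.0   # loop closure
    rows.append(r); b.append(loop)
    r = np.zeros(n); r[0] = 1.0                # anchor the first pose
    rows.append(r); b.append(anchor)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(b, float), rcond=None)
    return x
```

With biased odometry summing to 3.3 against a loop closure of 3.0, the 0.3 of drift is spread evenly over the three edges; real systems do the same in SE(3) with confidence-weighted edges, which is where the surface hardness/softness measure above would enter as a weight.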
METHOD FOR DESIGNING PACKAGING PLANTS
Disclosed is a method for designing a packaging plant, wherein a measuring vehicle is moved within an area in which the packaging plant is to be erected or modified, and a position of the measuring vehicle relative to the area is detected. The measuring vehicle is positioned at a plurality of positions within this area and at each of these positions records respective images and/or data of the area with a first image capturing device, wherein at least one geometric property of the area and/or the packaging plant is detected.
Method for determining distance information from images of a spatial region
A method includes defining a disparity range having discrete disparities and taking first, second, and third images of a spatial region using first, second, and third imaging units. The imaging units are arranged in an isosceles triangle geometry. The method includes determining first similarity values for a pixel of the first image for all the discrete disparities along a first epipolar line associated with the pixel in the second image. The method includes determining second similarity values for the pixel for all discrete disparities along a second epipolar line associated with the pixel in the third image. The method includes combining the first and second similarity values and determining a common disparity based on the combined similarity values. The method includes determining a distance to a point within the spatial region for the pixel from the common disparity and the isosceles triangle geometry.
MULTIPLE VIEW TRIANGULATION WITH IMPROVED ROBUSTNESS TO OBSERVATION ERRORS
A method and a device apply triangulation to determine a scene point location in a global coordinate system, GCS, based on location coordinates for observed image points in plural views of a scene, and global-local transformation matrices for the views. The location coordinates are given in local coordinate systems, LCSs, arranged to define distance coordinate axes that are perpendicular to the image planes of the views. The scene point location is determined by minimizing a plurality of differences between first and second estimates of the scene point location in the LCSs, the first estimates being given by linear scaling of the location coordinates, and the second estimates being given by operating the transformation matrices on the scene point location. Improved robustness to observation errors is achieved by defining the plurality of differences to penalize differences that include the linear scaling as applied along the distance coordinate axes.
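The formulation above reduces, in its unweighted form, to a linear system: for each view i, the scaled observation λᵢ·(uᵢ, vᵢ, 1) should equal the scene point transformed into that view's LCS, RᵢX + tᵢ, with the scene point X and the per-view depths λᵢ as joint unknowns. The sketch below solves that plain least-squares version; the patent's contribution (weighting the differences along the distance axes) is not reproduced here, and the function name is hypothetical.

```python
import numpy as np

def triangulate(points_norm, Rs, ts):
    """Linear multi-view triangulation: solve lambda_i * (u_i, v_i, 1)
    = R_i @ X + t_i jointly for X (3 unknowns) and the per-view depths
    lambda_i (one per view) in least squares."""
    n = len(points_norm)
    A = np.zeros((3 * n, 3 + n))
    b = np.zeros(3 * n)
    for i, (u, R, t) in enumerate(zip(points_norm, Rs, ts)):
        A[3 * i:3 * i + 3, :3] = R                       # R_i X term
        A[3 * i:3 * i + 3, 3 + i] = -np.append(u, 1.0)   # -lambda_i u_i
        b[3 * i:3 * i + 3] = -np.asarray(t)              # move t_i across
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]
```

Because the depth λᵢ appears explicitly per view, down-weighting its row (the distance-axis component) is a one-line change to this system, which is the kind of penalty structure the abstract describes.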