Patent classifications
G06T7/277
Camera-based enhancement of vehicle kinematic state estimation
Methods and systems implemented in a vehicle involve obtaining a single camera image from a camera arranged on the vehicle. The image indicates a heading angle ψ.sub.0 between a vehicle heading x and a tangent line that is tangential to road curvature of a road on which the vehicle is traveling and also indicates a perpendicular distance y.sub.0 from a center of the vehicle to the tangent line. An exemplary method includes obtaining two or more inputs from two or more vehicle sensors, and estimating kinematic states of the vehicle based on applying a Kalman filter to the single camera image and the two or more inputs to solve kinematic equations. The kinematic states include roll angle and pitch angle of the vehicle.
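The fusion step described in this abstract can be sketched as a linear Kalman filter that combines the camera-derived heading angle ψ₀ and lateral offset y₀ with two further sensor readings to refine a small state vector. Everything here is illustrative, not taken from the patent: the state layout, the identity dynamics and measurement matrices, and the noise values are all assumptions for a minimal sketch.

```python
import numpy as np

# Illustrative state: [heading_angle, lateral_offset, roll, pitch]
x = np.zeros(4)                        # state estimate
P = np.eye(4)                          # state covariance
F = np.eye(4)                          # assumed trivial dynamics for the sketch
Q = 0.01 * np.eye(4)                   # assumed process noise
H = np.eye(4)                          # assumed direct measurement model
R = np.diag([0.05, 0.1, 0.02, 0.02])   # assumed measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the combined camera + vehicle-sensor measurement
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical measurement: [psi_0, y_0, roll_reading, pitch_reading]
z = np.array([0.03, 0.5, 0.01, 0.02])
x, P = kalman_step(x, P, z)
```

With an uninformative prior and small measurement noise, the updated state moves most of the way toward the measurement, as expected of a Kalman update.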
LIGHT EMITTING DEVICE POSITIONAL TRACKING FOR MOBILE PLATFORMS
Light emitting device positional tracking systems and methods are provided. In one example, a method includes receiving images captured of a target location comprising a plurality of light emitting devices, where each of the light emitting devices has an associated blinking pattern. The method may further include detecting the blinking pattern for each of the light emitting devices in the images. The method may further include determining a classification for each of the light emitting devices based on its detected blinking pattern. The method may further include aligning a mobile platform with the target location based on the classifications of the light emitting devices. Related devices and systems are also provided.
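The detect-then-classify flow above can be illustrated with a small sketch: each LED's per-frame brightness is thresholded into a binary on/off pattern, which is then matched against a table of known patterns. The pattern table, labels, and threshold are hypothetical examples, not values from the patent.

```python
# Hypothetical table of known blinking patterns (one frame per element).
KNOWN_PATTERNS = {
    "corner_marker": (1, 0, 1, 0, 1, 0),   # fast blink
    "center_marker": (1, 1, 0, 0, 1, 1),   # slow blink
}

def detect_pattern(brightness, threshold=0.5):
    """Convert a per-frame brightness sequence into a binary on/off pattern."""
    return tuple(1 if b > threshold else 0 for b in brightness)

def classify_led(brightness):
    """Return the class whose known pattern matches the detected one, if any."""
    observed = detect_pattern(brightness)
    for label, pattern in KNOWN_PATTERNS.items():
        if observed == pattern:
            return label
    return None   # unrecognized pattern

# Example: brightness samples for one LED across six frames.
led_class = classify_led([0.9, 0.1, 0.8, 0.2, 0.95, 0.05])
```

A real system would also need to align the frame sequence to the blink phase; the sketch assumes the capture is already synchronized.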
Control device, control method, and mobile body
The present disclosure relates to a control device, a control method, a program, and a mobile body that enable efficient search for surrounding information when the mobile body's own position is indeterminate. In that state, information about which surfaces of each obstacle have already been sensed is recorded, based on the estimated own position, position information of surrounding obstacles, and the sensing range of a surface sensing unit that includes a stereo camera used to determine the own position; a search route is then planned from that surface-sensed-area information. The present technology can be applied to multi-legged robots, flying bodies, and in-vehicle systems that move autonomously under control of an onboard computer.
Learned state covariances
Techniques are disclosed for a covariance model that may generate observation covariances based on observation data of object detections. Techniques may include determining observation data for an object detection of an object represented in sensor data, determining that track data of a track is associated with the object, and inputting the observation data associated with the object detection into a machine-learned model configured to output a covariance (a covariance model). The covariance model may output one or more observation covariance values for the observation data. In some examples, the techniques may include determining updated track data based on the track data, the one or more observation covariance values, and the observation data.
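The track-update step above can be sketched as follows: a stub stands in for the machine-learned covariance model, returning per-dimension observation variances that then weight the fusion of track state and observation. The stub's distance-based heuristic and all numeric values are purely illustrative assumptions.

```python
import numpy as np

def covariance_model(observation):
    """Placeholder for the learned covariance model: here it simply inflates
    variance for more distant detections (purely illustrative)."""
    distance = np.linalg.norm(observation)
    base = np.array([0.1, 0.1])
    return base * (1.0 + 0.1 * distance)     # per-dimension variances

def update_track(track_mean, track_var, observation):
    """Fuse track state with an observation weighted by the model's output."""
    obs_var = covariance_model(observation)
    gain = track_var / (track_var + obs_var)         # per-dimension gain
    new_mean = track_mean + gain * (observation - track_mean)
    new_var = (1.0 - gain) * track_var
    return new_mean, new_var

mean, var = update_track(np.array([1.0, 2.0]),     # track position
                         np.array([0.5, 0.5]),     # track variance
                         np.array([1.2, 2.1]))     # new detection
```

Because the learned variance controls the gain, a detection the model deems noisy pulls the track less than a confident one.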
OBJECT DETECTION USING IMAGES AND MESSAGE INFORMATION
Disclosed are techniques for performing object detection and tracking. In some implementations, a process for performing object detection and tracking is provided. The process can include steps for obtaining, at a tracking object, an image comprising a target object, obtaining, at the tracking object, a first set of messages associated with the target object, determining a bounding box for the target object in the image based on the first set of messages associated with the target object, and extracting a sub-image from the image. In some approaches, the process can further include steps for detecting, using an object detection model, a location of the target object within the sub-image. Systems and machine-readable media are also provided.
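The crop-then-detect flow above can be sketched in a few lines: a bounding box derived from message information selects a sub-image on which the detector then runs. The sketch assumes the messages already carry pixel coordinates; a real system would project the sender's reported pose into the image first, and the padding value is an arbitrary choice.

```python
import numpy as np

def bounding_box_from_messages(messages, pad=4):
    """Derive a padded (x0, y0, x1, y1) box from message-reported pixel
    positions (illustrative; real messages would carry world-frame poses)."""
    xs = [m["px"] for m in messages]
    ys = [m["py"] for m in messages]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)

def crop_sub_image(image, box):
    """Extract the sub-image covered by the bounding box."""
    x0, y0, x1, y1 = box
    return image[y0:y1, x0:x1]

image = np.zeros((100, 100), dtype=np.uint8)          # stand-in camera frame
messages = [{"px": 30, "py": 40}, {"px": 50, "py": 60}]
box = bounding_box_from_messages(messages)
sub = crop_sub_image(image, box)   # the detection model runs on `sub` only
```

Restricting the detector to the sub-image is what makes the message information useful: the model searches a small, pre-localized region instead of the full frame.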
METHODS AND DEVICES FOR OBJECT TRACKING APPLICATIONS
The present disclosure relates to a computer-implemented method for object tracking applications, preferably Bayesian object tracking applications. The method includes providing a finite element model that represents a sensor model of at least one sensor. The method then trains the finite element model on observations, where each observation pairs an output of the at least one sensor with a known state of at least one training object, at the time of that output, in an environment sensed by the sensor. Further, the method includes obtaining signals associated with at least one tracked object in an environment sensed by the at least one sensor, and determining additional outputs of the at least one sensor based on the obtained signals.
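The train-then-predict pattern above can be illustrated with a deliberately tiny stand-in: a one-dimensional piecewise-linear interpolant plays the role of the finite element sensor model, fitted from (sensor output, known state) pairs and then queried for new signals. Both the data and the choice of interpolant are assumptions for the sketch, not the patent's model.

```python
import numpy as np

# Training observations: sensor output paired with the known object state
# at the time of that output (values are illustrative).
sensor_outputs = np.array([0.0, 1.0, 2.0, 3.0])
known_states   = np.array([0.0, 2.1, 3.9, 6.2])   # e.g. true object distance

def predict_state(signal):
    """'Trained' model: piecewise-linear interpolation of the state for a
    newly obtained sensor signal (finite-element-style local basis)."""
    return float(np.interp(signal, sensor_outputs, known_states))

state = predict_state(1.5)   # predicted state for a new signal
```

A piecewise-linear interpolant is the simplest finite-element-flavored fit: each prediction depends only on the two neighboring training observations.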
METHOD FOR GENERATING 3D REFERENCE POINTS IN A MAP OF A SCENE
A method of complementing a map of a scene with 3D reference points comprises four steps. In the first step, data is collected and recorded from samples of at least one of an optical sensor, a GNSS, and an IMU. The second step performs initial pose generation by processing the collected sensor data to provide a track of vehicle poses; each pose is based on a specific data set, on at least one data set recorded before that data set, and on at least one data set recorded after it. The third step applies SLAM processing to the initial poses and the collected optical sensor data to generate keyframes with feature points. In the fourth step, 3D reference points are generated by fusion and optimization of the feature points, using future and past feature points together with the feature point at the current point of processing. The second and fourth steps provide significantly better results than SLAM or VIO methods known from the prior art because they operate on recorded data: whereas a normal SLAM or VIO algorithm can only access data from the past, these steps can also look at positions ahead by using the recorded data.
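The offline advantage described in this abstract can be sketched in one function: because all data is recorded, each pose can be refined from samples both before and after it (a centered window), unlike a causal filter that sees only the past. The averaging scheme and the 1-D pose track are illustrative stand-ins for the patent's fusion and optimization step.

```python
def smooth_poses(poses, half_window=1):
    """Refine each pose using past AND future recorded samples:
    a centered moving average, possible only on recorded data."""
    smoothed = []
    for i in range(len(poses)):
        lo = max(0, i - half_window)                  # past samples
        hi = min(len(poses), i + half_window + 1)     # future samples
        window = poses[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

track = [0.0, 1.0, 0.0, 1.0, 0.0]   # noisy 1-D pose track (illustrative)
refined = smooth_poses(track)
```

A causal filter at index i could only use `poses[:i + 1]`; the centered window is what the recorded-data setting makes possible.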