Patent classifications
G06T2207/30248
Camera-based enhancement of vehicle kinematic state estimation
Methods and systems implemented in a vehicle involve obtaining a single camera image from a camera arranged on the vehicle. The image indicates a heading angle ψ₀ between a vehicle heading x and a tangent line that is tangential to road curvature of a road on which the vehicle is traveling and also indicates a perpendicular distance y₀ from a center of the vehicle to the tangent line. An exemplary method includes obtaining two or more inputs from two or more vehicle sensors, and estimating kinematic states of the vehicle based on applying a Kalman filter to the single camera image and the two or more inputs to solve kinematic equations. The kinematic states include roll angle and pitch angle of the vehicle.
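The predict/update cycle at the heart of this abstract can be illustrated with a minimal scalar Kalman filter. This is a sketch only: the patent solves coupled kinematic equations for roll and pitch, whereas the class below tracks a single hypothetical angle, with the control input standing in for a sensor rate and the measurement standing in for a camera-derived angle.

```python
class ScalarKalman:
    """Minimal scalar Kalman filter (illustrative stand-in for the
    patent's multi-state roll/pitch estimator)."""

    def __init__(self, x0, p0, q, r):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process and measurement noise variances

    def step(self, z, u=0.0):
        # Predict: propagate state with control input u (e.g. a gyro rate).
        self.x += u
        self.p += self.q
        # Update: blend in measurement z (e.g. a camera-derived angle).
        k = self.p / (self.p + self.r)    # Kalman gain in [0, 1]
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x


kf = ScalarKalman(x0=0.0, p0=1.0, q=0.01, r=0.1)
estimate = kf.step(1.0)   # first update pulls the estimate toward z
```

Because the gain stays strictly between 0 and 1, repeated updates with the same measurement converge monotonically toward it without overshooting.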
System and method for verification of vehicle service target positioning
A machine-vision vehicle service system, and methods of operation, incorporating at least one camera and an optical projector for guiding placement of vehicle service components relative to a vehicle undergoing service. The camera and optical projector are operatively coupled to a processing system configured with software instructions to selectively control a projection axis orientation for the optical projector to enable projection of visible indicia onto various surfaces visible within the field of view of the camera.
Position-window extension for GNSS and visual-inertial-odometry (VIO) fusion
Techniques provided herein are directed toward virtually extending an updated set of output positions of a mobile device determined by a VIO by combining a current set of VIO output positions with one or more previous sets of VIO output positions in such a way that ensures all output positions among the various combined sets are consistent. The combined sets can be used for accurate position determination of the mobile device. Moreover, the position determination further may be based on GNSS measurements.
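One simple way to make earlier VIO output windows consistent with the current one is to translate each previous window so that its endpoint coincides with the start of the window that follows it, then concatenate. The function below is a hypothetical 2-D sketch of that idea (the patent does not specify this particular alignment), with positions as `(x, y)` tuples:

```python
def extend_position_window(current, previous_sets):
    """Sketch of extending a VIO position window: shift each previous
    window (most recent first) so its last position lands on the start
    of the already-combined trajectory, then prepend it."""
    combined = list(current)
    for prev in reversed(previous_sets):
        anchor = combined[0]                      # start of combined trajectory
        last = prev[-1]                           # endpoint of the older window
        offset = (anchor[0] - last[0], anchor[1] - last[1])
        # Drop the duplicated endpoint and translate the rest.
        shifted = [(x + offset[0], y + offset[1]) for x, y in prev[:-1]]
        combined = shifted + combined
    return combined


current = [(0.0, 0.0), (1.0, 0.0)]
previous = [[(5.0, 5.0), (6.0, 5.0)]]   # same motion, different VIO frame
extended = extend_position_window(current, previous)
```

The older window's own coordinate frame is irrelevant after shifting; only the relative motion it encodes survives, which is what keeps the combined outputs mutually consistent.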
Incremental segmentation of point cloud
A method for segmentation of a point cloud includes receiving a first frame of point cloud from a sensor; segmenting the first frame of point cloud to obtain a first set of point clusters representing a segmentation result for the first frame of point cloud; receiving a second frame of point cloud from the sensor; mapping the first set of point clusters to the second frame of point cloud; determining points within the second frame of point cloud which do not belong to the mapped first set of point clusters; segmenting the points within the second frame of point cloud which do not belong to the mapped first set of point clusters to obtain a second set of point clusters; and generating a segmentation result for the second frame of point cloud by combining the first set of point clusters and the second set of point clusters.
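The claimed pipeline can be sketched in a few lines: points of the new frame that fall near an existing cluster inherit that cluster's label, and only the leftover points are segmented from scratch. The proximity test and the naive distance-based segmentation below are illustrative placeholders for whatever mapping and segmentation methods an implementation would actually use.

```python
def incremental_segment(prev_clusters, new_frame, tol=1.0):
    """Sketch of incremental point-cloud segmentation: reuse the
    previous frame's clusters, segment only the unmatched points."""
    def near(p, cluster):
        # Hypothetical proximity test: Manhattan distance within tol.
        return any(abs(p[0] - q[0]) + abs(p[1] - q[1]) <= tol for q in cluster)

    # Map previous clusters onto the new frame.
    mapped = [list(c) for c in prev_clusters]
    leftovers = []
    for p in new_frame:
        for c in mapped:
            if near(p, c):
                c.append(p)
                break
        else:
            leftovers.append(p)

    # Segment only the points no previous cluster accounts for.
    new_clusters = []
    for p in leftovers:
        for c in new_clusters:
            if near(p, c):
                c.append(p)
                break
        else:
            new_clusters.append([p])

    # Combined result: carried-over clusters plus newly found ones.
    return mapped + new_clusters


prev = [[(0.0, 0.0)]]
frame2 = [(0.5, 0.0), (10.0, 10.0), (10.5, 10.0)]
result = incremental_segment(prev, frame2)
```

The payoff is that the expensive segmentation step runs only on the points that are genuinely new, rather than on the whole frame.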
INFORMATION PROVIDING APPARATUS, INFORMATION PROVIDING METHOD, INFORMATION PROVIDING PROGRAM, AND STORAGE MEDIUM
An information providing apparatus includes: an image obtaining unit that obtains a captured image of the surroundings of a moving body; an area extracting unit that extracts an area of interest on which a line of sight is focused in the captured image; an object recognizing unit that recognizes an object included in the area of interest in the captured image; and an information providing unit that provides object information related to the object included in the area of interest.
METHODS AND DEVICES FOR OBJECT TRACKING APPLICATIONS
The present disclosure relates to a computer-implemented method for object tracking applications, preferably in Bayesian object tracking applications. The method includes the steps of providing a finite element model representing a sensor model of at least one sensor. Further, the method trains said finite element model based on observations, wherein each observation includes an output of the at least one sensor paired with a known state of at least one training object, at the time of the output of the at least one sensor, in an environment sensed by the at least one sensor. Further, the method includes the steps of obtaining signals associated with at least one tracked object in an environment sensed by the at least one sensor. Furthermore, the method determines additional outputs of the at least one sensor based on the obtained signals.
DEFENDING MULTIMODAL FUSION MODELS AGAINST SINGLE-SOURCE ADVERSARIES
A multimodal perception system for an autonomous vehicle includes a first sensor that is one of a video, RADAR, LIDAR, or ultrasound sensor, and a controller. The controller may be configured to receive a first signal from a first sensor, a second signal from a second sensor, and a third signal from a third sensor; extract a first feature vector from the first signal, a second feature vector from the second signal, and a third feature vector from the third signal; determine an odd-one-out vector from the first, second, and third feature vectors via an odd-one-out network of a machine learning network, based on inconsistent modality prediction; fuse the first, second, and third feature vectors and the odd-one-out vector into a fused feature vector; output the fused feature vector; and control the autonomous vehicle based on the fused feature vector.
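The intuition behind odd-one-out detection can be shown without a neural network: score each modality by its agreement with the other two and exclude the least consistent one from fusion. The dot-product agreement measure and mean fusion below are hypothetical stand-ins for the patent's learned odd-one-out network and fusion layer.

```python
def fuse_with_odd_one_out(f1, f2, f3):
    """Sketch of single-source-adversary defense: down-select the
    modality whose features disagree most with the other two, then
    fuse (here, average) the remaining feature vectors."""
    feats = [f1, f2, f3]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Agreement score: mean dot product with the other two modalities.
    scores = []
    for i, f in enumerate(feats):
        others = [g for j, g in enumerate(feats) if j != i]
        scores.append(sum(dot(f, g) for g in others) / 2)

    odd = min(range(3), key=lambda i: scores[i])   # least consistent modality
    kept = [f for i, f in enumerate(feats) if i != odd]
    fused = [sum(vals) / len(kept) for vals in zip(*kept)]
    return fused, odd


# Two modalities agree; the third (e.g. a perturbed sensor) points the other way.
fused, odd = fuse_with_odd_one_out([1.0, 0.0], [1.0, 0.0], [-1.0, 0.0])
```

An attack confined to a single modality flips only that modality's agreement scores, so the fused output is dominated by the two unperturbed sensors.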
Method For Training A Neural Network For Semantic Image Segmentation
The present invention relates to a method, a computer program, and an apparatus for training a neural network for semantic image segmentation. The invention further relates to an in-car control unit or a backend system, which make use of such a method or apparatus, and to a vehicle comprising such an in-car control unit. In some embodiments and in a first step, image data of a sequence of image frames are received. Then a frame-based evaluation of semantic segmentation predictions of one or more objects in individual image frames is performed. Furthermore, a sequence-based evaluation of temporal characteristics of semantic segmentation predictions of said one or more objects in at least two image frames is performed. The results of the frame-based evaluation and the sequence-based evaluation are combined.
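Combining the two evaluations can be sketched as a weighted score: a frame-based term that compares each prediction to its ground truth, plus a sequence-based term that rewards label stability across consecutive frames. Pixel-accuracy agreement and the weighting scheme below are assumptions for illustration; the patent does not fix a particular metric.

```python
def combined_segmentation_score(preds, targets, alpha=0.5):
    """Sketch of frame-based + sequence-based evaluation for semantic
    segmentation. preds/targets are per-frame flat label lists."""
    def agreement(a, b):
        # Fraction of positions with matching labels (pixel accuracy).
        return sum(1 for x, y in zip(a, b) if x == y) / len(a)

    # Frame-based term: mean per-frame agreement with ground truth.
    frame_term = sum(agreement(p, t) for p, t in zip(preds, targets)) / len(preds)

    # Sequence-based term: label stability between consecutive predictions.
    temporal_term = sum(
        agreement(preds[i], preds[i + 1]) for i in range(len(preds) - 1)
    ) / max(len(preds) - 1, 1)

    return alpha * frame_term + (1 - alpha) * temporal_term


preds = [[1, 1], [1, 1]]      # temporally stable predictions
targets = [[1, 1], [1, 0]]    # second frame is half wrong
score = combined_segmentation_score(preds, targets)
```

In training, a term like `temporal_term` penalizes flickering predictions that a purely frame-based loss would never see.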
Systems and methods for automated trade-in with limited human interaction
Aspects described herein may facilitate an automated trade-in of a vehicle with limited human interaction. A server may receive a request to begin a value determination of a vehicle associated with the user. The server may receive first data comprising: vehicle-specific identifying information, and multimedia content showing a first aspect of the vehicle. The user may be directed to place the vehicle within a predetermined staging area. The server may receive, from one or more image sensors associated with the staging area, second data comprising multimedia content showing a second aspect of the vehicle. The server may create a feature vector comprising the first data and the second data. The feature vector may be inputted into a machine learning algorithm corresponding to the vehicle-specific identifying information of the vehicle. Based on the machine learning algorithm, the server may determine a value of the vehicle.
SELECTIVE OBFUSCATION OF OBJECTS IN MEDIA CONTENT
Described herein are techniques that may be used to provide automatic obfuscation of one or more objects in media data. Such techniques may comprise receiving, from a data source, media data comprising a depiction of a number of objects, identifying, within the received media data, a set of objects associated with the media data, and storing an indication of one or more locations of the objects in the set of objects within the media data with respect to time. Upon receiving a request for the media data, such techniques may further comprise updating the media data by applying an obfuscation effect to the one or more locations with respect to time, and providing the updated media data in response to the request.
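The store-then-apply structure of the abstract can be sketched with toy data: object locations are recorded per time index, and the obfuscation effect is applied only when the media is served. Here frames are small character grids and the "effect" is a masking character, both stand-ins for real video frames and a blur or pixelation filter.

```python
def apply_obfuscation(frames, locations_by_time, fill="#"):
    """Sketch of request-time obfuscation: frames is a list of
    character-grid rows; locations_by_time maps a frame index to
    (row, col) cells previously identified as sensitive."""
    out = []
    for t, frame in enumerate(frames):
        grid = [list(row) for row in frame]
        # Mask every stored location for this time index.
        for r, c in locations_by_time.get(t, []):
            grid[r][c] = fill
        out.append(["".join(row) for row in grid])
    return out


frames = [["ab", "cd"]]              # one 2x2 "frame"
locations = {0: [(0, 1)]}            # cell (0, 1) flagged at time 0
served = apply_obfuscation(frames, locations)
```

Keeping the stored media untouched and obfuscating at request time means different requesters could, in principle, receive differently redacted versions of the same content.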