Patent classifications
G06T2207/30256
Generating training data from overhead view images
The present invention relates to a method of generating an overhead view image of an area. More particularly, the present invention relates to a method of generating a contextual multi-image based overhead view image of an area using ground map data and field of view image data. Various embodiments of the present technology can include methods, systems, non-transitory computer readable media, and computer programs configured to: determine a ground map of the geographical area; receive a plurality of images of the geographical area; process the plurality of images to select a subset of images for generating the overhead view of the geographical area; divide the ground map into a plurality of sampling points of the geographical area; and determine a color of a plurality of patches of the overhead view image from the subset of images, each patch representing a sampling point of the geographical area.
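The sampling-and-coloring step described above can be sketched roughly as follows. This is a minimal illustration, not the patented implementation; the `project` callable on each image (mapping a ground point to a pixel color, or `None` when the point is outside that image's field of view) is a hypothetical interface invented for the sketch.

```python
import numpy as np

def generate_overhead_view(ground_map_bounds, images, resolution=1.0):
    """Build an overhead view by dividing the ground map into sampling
    points and coloring each patch from an image that covers it."""
    (x_min, y_min), (x_max, y_max) = ground_map_bounds
    xs = np.arange(x_min, x_max, resolution)
    ys = np.arange(y_min, y_max, resolution)
    overhead = np.zeros((len(ys), len(xs), 3), dtype=np.uint8)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            # Color the patch from the first image whose field of
            # view covers this sampling point (hypothetical interface)
            for img in images:
                color = img["project"](x, y)
                if color is not None:
                    overhead[i, j] = color
                    break
    return overhead
```

A real system would select among the candidate images per patch (e.g., by viewing angle or distance) rather than taking the first hit.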
Calibration for vision in navigation systems
A computer-implemented method includes receiving a video comprising image frames depicting multiple objects. The video is captured by a video capture device moving relative to the surface of the Earth while the video is captured. A geographic location of the video capture device is received for each of the image frames and an angular orientation of the video capture device is determined based on the image frames. The determining the angular orientation includes determining a line in the image frames of the video for each object of a plurality of the multiple objects. The determined line corresponds to two-dimensional positions of the object in the image frames. The computer-implemented method includes determining a vanishing point of the image frames based on the determined lines and determining the angular orientation of the video capture device based on the determined vanishing point.
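The line-fitting and vanishing-point steps can be illustrated with a small sketch. This is an assumed geometric reading of the abstract, not the patented method: each tracked object's 2-D positions across frames are fit with a least-squares line, the lines are intersected in a least-squares sense, and a pinhole model converts the vanishing point into yaw and pitch. The focal length and principal point are assumed known.

```python
import numpy as np

def fit_track_line(points):
    # Least-squares line a*x + b*y + c = 0 through an object's
    # 2-D positions across the image frames
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                       # dominant track direction
    normal = np.array([-direction[1], direction[0]])
    return np.array([normal[0], normal[1], -normal @ centroid])

def vanishing_point(lines):
    # Least-squares intersection of the homogeneous lines
    A = np.array([l[:2] for l in lines])
    b = -np.array([l[2] for l in lines])
    vp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return vp

def camera_yaw_pitch(vp, focal_length, principal_point):
    # Pinhole model: the vanishing point of the motion direction
    # encodes the camera's yaw and pitch relative to that direction
    dx, dy = vp - np.asarray(principal_point, dtype=float)
    yaw = np.arctan2(dx, focal_length)
    pitch = np.arctan2(-dy, focal_length)
    return yaw, pitch
```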
Method and system for detecting and managing obfuscation of a road sign
A method, a system and a computer program product for detecting and managing obfuscation of a road sign may be provided herein. The method may include receiving a plurality of images from a plurality of vehicles over a time period and determining an extent of obfuscation of a road sign in each of a set of images. The current extent of obfuscation of the road sign is the extent of obfuscation of the road sign in the most recent of the set of images. The method further includes performing a time-series analysis of the extent of obfuscation of the road sign in each of the set of images to determine the rate at which the extent of obfuscation of the road sign is increasing, and determining an impending risk of a vehicle failing to spot the road sign from an appropriate distance. The method further includes providing a recommendation based on the impending risk of failing to spot the road sign.
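The time-series analysis described above could, in a minimal form, be a linear trend fit to the observed obfuscation extents, extrapolated to estimate when the sign crosses a visibility threshold. This is a sketch under that assumption; the threshold value and the 0-to-1 "extent" scale are illustrative, not from the patent.

```python
def obfuscation_risk(timestamps, extents, visibility_threshold=0.5):
    """Fit a linear trend to obfuscation extents (0.0 = clear,
    1.0 = fully obscured) and estimate the rate of increase and the
    time at which the extent will cross the visibility threshold.
    Returns (rate, crossing_time); crossing_time is None if the
    extent is not increasing."""
    n = len(timestamps)
    mean_t = sum(timestamps) / n
    mean_e = sum(extents) / n
    # Ordinary least-squares slope of extent over time
    cov = sum((t - mean_t) * (e - mean_e) for t, e in zip(timestamps, extents))
    var = sum((t - mean_t) ** 2 for t in timestamps)
    rate = cov / var
    current = extents[-1]
    if current >= visibility_threshold:
        return rate, timestamps[-1]         # already at risk now
    if rate <= 0:
        return rate, None                   # not getting worse
    # Extrapolate from the most recent observation
    return rate, timestamps[-1] + (visibility_threshold - current) / rate
```

A recommendation step would then compare the crossing time against maintenance lead times.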
VEHICULAR DRIVING ASSIST SYSTEM WITH LANE DETECTION USING REAR CAMERA
A vehicular vision system includes a camera disposed at a vehicle and viewing exterior and rearward of the vehicle. The system, as the vehicle is driven forward along a traffic lane of a road, and responsive to processing of image data captured by the camera, determines a traffic lane marker rearward of the vehicle. The system, responsive to determining the traffic lane marker, determines a position of the vehicle within the traffic lane along which the vehicle is moving. The system, responsive to determining that the position of the vehicle within the traffic lane is within a threshold distance of a side of the traffic lane, alerts an occupant of the vehicle.
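The alerting decision in the last step reduces to a simple threshold check once the upstream vision processing has produced a lateral position. The sketch below assumes that hypothetical upstream output (the vehicle center's offset from the detected lane marker, in meters); the lane width and threshold values are illustrative defaults, not taken from the patent.

```python
def lane_departure_alert(marker_offset_m, lane_width_m=3.5, threshold_m=0.3):
    """Decide whether to alert the occupant, given the vehicle
    center's lateral offset from the detected lane marker."""
    # Distance from the vehicle to the nearer of the two lane edges
    dist_to_near_edge = min(marker_offset_m, lane_width_m - marker_offset_m)
    return dist_to_near_edge < threshold_m
```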
CAMERA BASED DISTANCE MEASUREMENTS
Systems, methods, and computer readable media that store instructions for distance measurement. The method may include obtaining, from a camera of a vehicle, an image of the surroundings of the vehicle; searching, within the image, for an anchor, wherein the anchor is associated with at least one physical dimension of known value; and, when the anchor is found, determining a distance between the camera and the anchor based on (a) the at least one physical dimension of known value, (b) an appearance of the at least one physical dimension in the image, and (c) a distance-to-appearance relationship that maps appearances to distances, wherein the distance-to-appearance relationship is generated by a calibration process that comprises obtaining one or more calibration images of the anchor and obtaining one or more distance measurements to the anchor.
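One common form such a distance-to-appearance relationship can take is the pinhole proportionality distance ≈ k / pixel_size, with k estimated from the calibration pairs. The sketch below assumes that form; the patent's relationship may be more general (e.g., a lookup table or fitted curve).

```python
def calibrate_scale(calibration_pixel_sizes, calibration_distances):
    """Estimate the constant k in the pinhole relation
    distance ≈ k / pixel_size from calibration image/distance pairs."""
    ks = [p * d for p, d in zip(calibration_pixel_sizes, calibration_distances)]
    return sum(ks) / len(ks)

def distance_to_anchor(pixel_size, k):
    # Apply the calibrated distance-to-appearance relationship
    return k / pixel_size
```

For example, an anchor that appears 100 px tall at 10 m and 50 px tall at 20 m gives k = 1000, so a 25 px appearance implies a distance of 40 m.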
APPARATUS FOR DISPLAYING INFORMATION BASED ON AUGMENTED REALITY
An information display apparatus may include a processor configured to display a display object in augmented reality; and a storage configured to store data and algorithms driven by the processor, wherein the processor determines a position of the display object by use of at least one of a total number of lanes or a number of lanes in a road in a driving direction of a host vehicle, possible traveling direction information for each lane, and driving direction information related to the host vehicle, and the information display apparatus is disposed within a vehicle or outside the vehicle, and when disposed outside the vehicle, is configured to transmit display information related to the display object to the vehicle or a mobile device.
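One simplified reading of the lane-based positioning logic is: pick the lane whose permitted travel directions include the host vehicle's driving direction, and anchor the display object over that lane. The sketch below is a toy version under that assumption; the actual apparatus may weigh the lane counts and direction information quite differently.

```python
def display_object_lane(total_lanes, lane_directions, host_direction):
    """Choose the lane index over which to render the AR display
    object: the first lane whose permitted travel directions include
    the host vehicle's driving direction (simplifying assumption).
    `lane_directions` is a list of sets, one per lane, left to right."""
    for lane_index in range(total_lanes):
        if host_direction in lane_directions[lane_index]:
            return lane_index
    return None  # no lane permits the host's direction
```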
Methods and apparatus for automatic collection of under-represented data for improving a training of a machine learning model
In some embodiments, a method can include executing a first machine learning model to detect at least one lane in each image from a first set of images. The method can further include determining an estimated location of a vehicle for each image, based on localization data captured using at least one localization sensor disposed at the vehicle. The method can further include selecting lane geometry data for each image, from a map and based on the estimated location of the vehicle. The method can further include executing a localization model to generate a set of offset values for the first set of images based on the lane geometry data and the at least one lane in each image. The method can further include selecting a second set of images from the first set of images based on the set of offset values and a previously-determined offset threshold.
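The final selection step can be sketched as a simple filter: keep the images whose map-versus-detection offset exceeds the threshold, on the assumption that large disagreement marks under-represented cases worth adding to training. This is an illustration of that one step, not the full pipeline.

```python
def select_underrepresented(images, offsets, offset_threshold):
    """Select the second set of images: those whose offset between
    the map's lane geometry and the detected lanes exceeds the
    previously-determined threshold."""
    return [img for img, off in zip(images, offsets)
            if abs(off) > offset_threshold]
```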
Method and system for video-based positioning and mapping
A method and system for determining a geographical location and orientation of a vehicle travelling through a road network is disclosed. The method comprises obtaining, from one or more cameras associated with the vehicle travelling through the road network, a sequence of images reflecting the environment of the road network on which the vehicle is travelling, wherein each of the images has an associated camera location at which the image was recorded. A local map representation representing an area of the road network on which the vehicle is travelling is then generated using at least some of the obtained images and the associated camera locations. The generated local map representation is compared with a section of a reference map, the reference map section covering the area of the road network on which the vehicle is travelling, and the geographical location and orientation of the vehicle within the road network is determined based on the comparison. Methods and systems for generating and/or updating an electronic map using data obtained by a vehicle travelling through a road network represented by the map are also disclosed.
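The comparison between the local map representation and the reference map section can be illustrated with a toy 2-D pose search: shift the locally built point set over the reference points and keep the offset with the smallest mean nearest-neighbour distance. Real systems use far more capable registration (e.g., ICP with rotation); this brute-force sketch only shows the shape of the matching step.

```python
import numpy as np

def localize(local_points, reference_points, search_offsets):
    """Return the candidate (dx, dy) offset that best aligns the
    local map points with the reference map section, plus its cost."""
    loc = np.asarray(local_points, dtype=float)
    ref = np.asarray(reference_points, dtype=float)
    best_offset, best_cost = None, float("inf")
    for off in search_offsets:
        shifted = loc + np.asarray(off, dtype=float)
        # Mean distance from each shifted point to its nearest
        # reference point, as the alignment cost
        d = np.linalg.norm(shifted[:, None, :] - ref[None, :, :], axis=2)
        cost = d.min(axis=1).mean()
        if cost < best_cost:
            best_offset, best_cost = off, cost
    return best_offset, best_cost
```

The winning offset plays the role of the vehicle's pose correction relative to the reference map.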
Map points-of-change detection device
A map points-of-change detection device includes: a camera capturing an image of an area around a vehicle; a bird's-eye-view transformation section transforming the image into a bird's-eye view image; a map storage portion storing a road map including a road surface map; a collation processing section determining whether a point of change exists in the road surface map, the point of change being a position at which a change has occurred on the actual road surface; and a collation region identification section that determines a region for collation, in a width direction of the vehicle, from the bird's-eye view image.
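The collation step can be sketched as a per-pixel comparison between the bird's-eye view and the stored road surface map, restricted to the collation region (here modeled as a column band across the vehicle's width direction). The intensity-difference threshold is an illustrative assumption; the patented device's collation may be feature-based rather than pixel-based.

```python
import numpy as np

def detect_changes(birdseye, stored_map, region_cols, diff_threshold=30):
    """Flag points of change: pixels inside the collation column band
    where the observed bird's-eye image differs from the stored road
    surface map by more than the threshold. Returns (row, col) indices
    relative to the collation region."""
    lo, hi = region_cols
    region_obs = birdseye[:, lo:hi].astype(int)
    region_map = stored_map[:, lo:hi].astype(int)
    diff = np.abs(region_obs - region_map)
    return np.argwhere(diff > diff_threshold)
```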