Patent classifications
G06T2207/30256
Lane line positioning method and apparatus, and storage medium thereof
This disclosure is directed to a lane line positioning method and apparatus. The method includes obtaining inertial information, target traveling information, and first position information of a vehicle. The inertial information comprises information measured by an inertial measurement unit of the vehicle. The target traveling information comprises traveling information of the vehicle acquired at a first moment. The first position information comprises a position of the vehicle at the first moment. The method includes determining second position information according to the target traveling information and the first position information and determining third position information of the vehicle at a second moment based on the inertial information of the vehicle and the second position information. The method includes determining a position of a lane line in a map according to the third position information and relative position information.
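The position-update chain in this abstract (first position + traveling information → second position; second position + inertial information → third position; third position + relative position → lane line in map) can be sketched as below. All function names, the 2D velocity model, and the simple blending weight are illustrative assumptions; a production system would use a proper state estimator such as a Kalman filter.

```python
import numpy as np

def second_position(first_pos, velocity, dt):
    """Propagate the first position using the traveling information.

    Hypothetical simplification: traveling information is a 2D velocity."""
    return first_pos + velocity * dt

def third_position(second_pos, imu_accel, dt, weight=0.5):
    """Refine the propagated position with an IMU-derived displacement.

    A naive complementary blend stands in for real sensor fusion."""
    imu_displacement = 0.5 * imu_accel * dt**2
    return second_pos + weight * imu_displacement

def lane_line_map_position(vehicle_pos, relative_offset):
    """Place the lane line in the map frame from the vehicle position
    and the lane line's position relative to the vehicle."""
    return vehicle_pos + relative_offset

first = np.array([100.0, 200.0])
vel = np.array([10.0, 0.0])        # m/s, from traveling information
second = second_position(first, vel, dt=1.0)
third = third_position(second, np.array([0.2, 0.0]), dt=1.0)
lane = lane_line_map_position(third, np.array([0.0, 1.75]))
```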
INFORMATION PROCESSING APPARATUS, IMAGE TRANSMISSION SYSTEM, AND INFORMATION PROCESSING METHOD
The present disclosure provides an information processing apparatus and the like capable of specifying an object in an image that may affect traveling of a vehicle. An information processing apparatus includes: an acquisition unit that acquires an image captured by an image capturing unit mounted on a vehicle; an object detection unit that detects one or more objects in the acquired image; a traveling region specifying unit that specifies a traveling region, in which the vehicle is traveling, from regions in the acquired image; and a determination unit that determines, based on the traveling region, an image processing region, which is subjected to image processing, among regions of the one or more objects.
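The determination step above — restricting image processing to object regions that matter for the traveling region — can be illustrated with a simple overlap test. The box layout, coordinates, and function names are assumptions for the sketch; the patent does not specify the overlap criterion.

```python
def box_intersects(box_a, box_b):
    """Axis-aligned overlap test; boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def select_processing_regions(object_boxes, traveling_region):
    """Keep only detected-object regions that overlap the traveling
    region, i.e. objects that may affect the vehicle's travel."""
    return [b for b in object_boxes if box_intersects(b, traveling_region)]

traveling = (200, 300, 440, 480)      # hypothetical lane region in pixels
objects = [(210, 350, 260, 420),      # pedestrian inside the lane
           (500, 310, 560, 380),      # car in another lane
           (430, 320, 470, 360)]      # object straddling the lane edge
regions = select_processing_regions(objects, traveling)
```

Only the first and third boxes overlap the traveling region, so only they would be submitted to the (more expensive) image processing.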
SYSTEMS AND METHODS FOR ROAD SEGMENT MAPPING
A system for automatically mapping a road segment may include: at least one processor programmed to: receive, from at least one camera mounted on a vehicle, a plurality of images acquired as the vehicle traversed the road segment; convert each of the plurality of images to a corresponding top view image to provide a plurality of top view images; aggregate the plurality of top view images to provide an aggregated top view image of the road segment; analyze the aggregated top view image to identify at least one road feature associated with the road segment; automatically annotate the at least one road feature relative to the aggregated top view image; and output to at least one memory the aggregated top view image including the annotated at least one road feature.
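The aggregation step can be sketched as NaN-aware averaging of per-frame top-view images on a shared grid. This assumes each image has already been warped to the common top-view frame (e.g. by an inverse-perspective homography); the tiny 2x3 grids and the averaging rule are illustrative, not the patent's method.

```python
import numpy as np

def aggregate_top_views(top_views):
    """Aggregate per-frame top-view images by pixel-wise averaging,
    ignoring cells a frame did not observe (marked as NaN)."""
    stack = np.stack(top_views)
    return np.nanmean(stack, axis=0)

# Two hypothetical 2x3 top views; NaN marks unobserved cells.
view_a = np.array([[1.0, np.nan, 3.0],
                   [4.0, 5.0,    np.nan]])
view_b = np.array([[3.0, 2.0,    np.nan],
                   [4.0, 7.0,    9.0]])
aggregated = aggregate_top_views([view_a, view_b])
```

Averaging across frames suppresses per-frame noise and occlusions, which is what makes road features easier to identify and annotate in the aggregated image.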
Vehicular vision system with road contour detection feature
A vehicular driving assist system includes a camera disposed at a vehicle equipped with the vehicular driving assist system and viewing forward of the vehicle, the camera capturing image data. An electronic control unit (ECU) includes electronic circuitry and associated software. The electronic circuitry of the ECU includes an image processor for processing image data captured by the camera. The ECU, responsive to processing by the image processor of image data captured by the camera, determines presence of a leading vehicle traveling in front of the equipped vehicle and in the same traffic lane as the equipped vehicle. The ECU, responsive to determining presence of the leading vehicle, determines presence of a pothole in front of the vehicle and in the same traffic lane as the equipped vehicle.
SYSTEMS AND METHODS FOR DETECTING OBJECTS IN AN IMAGE OF AN ENVIRONMENT
In some implementations, a device may receive an image that depicts an environment associated with a vehicle. The device may partition the image into a plurality of subsections. The device may analyze the plurality of subsections to determine respective subsection information, wherein subsection information, for an individual subsection, indicates: a probability score that the subsection includes a line segment associated with an object class, a position of a representative point of the line segment, and a direction of the line segment. The device may identify, based on the respective subsection information of the plurality of subsections, a line associated with the object class that is associated with a set of subsections of the plurality of subsections. The device may perform one or more actions based on identifying the line associated with the object class.
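The grouping step — turning per-subsection (probability, representative point, direction) triples into one line — can be sketched as thresholding plus a least-squares fit. The thresholds and the fit-through-representative-points rule are assumptions; the patent does not commit to a specific grouping procedure.

```python
import numpy as np

def identify_line(subsections, prob_threshold=0.5, direction_tol=0.2):
    """Group subsections that likely contain a segment of the same
    object class and fit one line through their representative points.

    Each subsection is (probability, (x, y) point, direction in radians).
    Returns (slope, intercept) of the fitted line, or None."""
    kept = [(p, pt, d) for p, pt, d in subsections if p >= prob_threshold]
    if len(kept) < 2:
        return None
    # Require roughly consistent directions across the kept subsections.
    dirs = np.array([d for _, _, d in kept])
    if np.max(dirs) - np.min(dirs) > direction_tol:
        return None
    xs = np.array([pt[0] for _, pt, _ in kept])
    ys = np.array([pt[1] for _, pt, _ in kept])
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope, intercept

subs = [(0.9, (0.0, 1.0), 0.78),
        (0.8, (1.0, 2.0), 0.80),
        (0.2, (5.0, -3.0), 2.50),   # low confidence: ignored
        (0.7, (2.0, 3.0), 0.79)]
line = identify_line(subs)
```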
Method and device for determining the camera position and angle
The present disclosure provides a method and an apparatus for determining an attitude angle of a camera, capable of improving the accuracy of the attitude angle of the camera, and in turn the accuracy of the attitude of the camera that is obtained based on the attitude angle of the camera. The present disclosure can also improve the accuracy of object distance measurement and vehicle positioning based on the attitude angle of the camera. In the method for determining an attitude angle of a camera, the camera is fixed to one and the same rigid object in a vehicle along with an Inertial Measurement Unit (IMU). The method includes: obtaining IMU attitude angles outputted from the IMU and images captured by the camera; determining a target IMU attitude angle corresponding to each frame of image based on respective capturing time of the frames of images and respective outputting time of the IMU attitude angles; and determining an attitude angle of the camera corresponding to each frame of image based on a predetermined conversion relationship between a camera coordinate system for the camera and an IMU coordinate system for the IMU and the target IMU attitude angle corresponding to each frame of image.
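The two core steps — matching each frame to the IMU attitude nearest in time, then converting it into the camera frame — can be sketched as below. The nearest-timestamp rule and the per-axis angle offset are simplifying assumptions; the patent's conversion is a general relationship between the camera and IMU coordinate systems, which in practice means composing rotations rather than adding Euler angles.

```python
import numpy as np

def nearest_imu_attitude(frame_time, imu_times, imu_attitudes):
    """Pick the IMU attitude whose output time is closest to the
    frame's capture time (the 'target IMU attitude angle')."""
    idx = int(np.argmin(np.abs(np.asarray(imu_times) - frame_time)))
    return imu_attitudes[idx]

def camera_attitude(imu_attitude_deg, cam_from_imu_deg):
    """Convert the IMU attitude to the camera frame via a fixed
    IMU-to-camera relationship; a per-axis offset is assumed here."""
    return [a + o for a, o in zip(imu_attitude_deg, cam_from_imu_deg)]

imu_times = [0.00, 0.01, 0.02, 0.03]
imu_attitudes = [[0.0, 0.0, 10.0], [0.1, 0.0, 10.5],
                 [0.2, 0.1, 11.0], [0.3, 0.1, 11.5]]
target = nearest_imu_attitude(frame_time=0.012, imu_times=imu_times,
                              imu_attitudes=imu_attitudes)
cam = camera_attitude(target, cam_from_imu_deg=[0.0, -90.0, 0.0])
```

Mounting the camera and the IMU on the same rigid object is what makes the conversion a fixed, one-time calibration rather than something that must be re-estimated per frame.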
Method for detecting and modeling an object on a road surface
A method for detecting and modeling an object on a road surface: the road is first scanned and a 3D model of the scanned road is generated (this 3D model contains a description of the road's 3D surface), after which a top-view image of the road is created. The object is detected on the road surface by evaluating the top-view image. The detected object is then projected onto the road surface in the 3D model of the scanned road, and the projected object is modeled.
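The projection step — lifting a 2D top-view detection back onto the 3D road surface — amounts to sampling the surface elevation at the detection's road-plane coordinates. The analytic "crowned road" surface below is a hypothetical stand-in for the scanned 3D model.

```python
def project_to_surface(detections_uv, surface_height):
    """Lift 2D top-view detections (u, v in road-plane meters) onto
    the 3D road surface by sampling elevation z = h(u, v).

    surface_height: callable returning road elevation at (u, v)."""
    return [(u, v, surface_height(u, v)) for u, v in detections_uv]

# Hypothetical gently crowned road: highest at the center line u = 0.
crown = lambda u, v: 0.05 - 0.001 * u * u
points = project_to_surface([(0.0, 10.0), (2.0, 12.0)], crown)
```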
Location estimating device, storage medium storing computer program for location estimation and location estimating method
A location estimating device has a processor that is configured to calculate a first estimated location of a vehicle using positional information representing the location of a vehicle and first map data overlapping with a first section of a vehicle traveling route, to calculate a second estimated location of the vehicle using positional information and second map data that overlaps with a second section of the traveling route, the first section and the second section having an overlapping section, and to assess whether or not the precision of the second estimated location satisfies a predetermined assessment criterion when the vehicle is traveling in the overlapping section from the first section toward the second section.
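One plausible assessment criterion for the overlapping section is agreement between the two estimates: while both map sections are available, the second estimated location can be checked against the first before the device hands over to the second map. The distance-based criterion and threshold below are assumptions; the patent leaves the criterion "predetermined".

```python
def passes_handover(first_estimate, second_estimate, max_disagreement=0.5):
    """In the overlapping section, assess the second estimated location
    by comparing it against the first: if the two estimates disagree
    by more than a threshold (meters), the second estimate fails the
    (hypothetical) assessment criterion.

    Estimates are (x, y) positions in a common frame."""
    dx = first_estimate[0] - second_estimate[0]
    dy = first_estimate[1] - second_estimate[1]
    return (dx * dx + dy * dy) ** 0.5 <= max_disagreement

ok = passes_handover((120.0, 45.2), (120.3, 45.0))    # ~0.36 m apart
bad = passes_handover((120.0, 45.2), (121.0, 46.5))   # ~1.64 m apart
```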
SYSTEMS AND METHODS FOR VEHICLE SPEED AND LATERAL POSITION CONTROL
A system for navigating a host vehicle may include memory and at least one processor configured to receive a plurality of images acquired by a camera onboard the host vehicle; generate, based on analysis of the plurality of images, a road geometry model for a segment of road forward of the host vehicle; determine, based on analysis of at least one of the plurality of images, one or more indicators of an orientation of the host vehicle; and generate, based on the one or more indicators of orientation of the host vehicle and the road geometry model for the segment of road forward of the host vehicle, one or more output signals configured to cause a change in a pointing direction of a movable headlight onboard the host vehicle.
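The output-signal generation can be sketched as a control law that steers the movable headlight toward the road geometry ahead rather than along the vehicle's own axis. The proportional law, gain, and actuator limit below are illustrative assumptions.

```python
def headlight_yaw_command(road_heading_deg, vehicle_heading_deg,
                          gain=1.0, limit=15.0):
    """Compute a movable-headlight yaw signal (degrees) that points
    the beam along the upcoming road geometry.

    A hypothetical proportional law clamped to the actuator range."""
    error = road_heading_deg - vehicle_heading_deg
    command = gain * error
    return max(-limit, min(limit, command))

# Road ahead bends 8 deg while the vehicle still points at 2 deg:
cmd = headlight_yaw_command(road_heading_deg=8.0, vehicle_heading_deg=2.0)
```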
DRIVING SUPPORT DEVICE, DRIVING SUPPORT UNIT, STORAGE MEDIUM, AND DRIVING SUPPORT METHOD
A driving support device is detachably attached to a vehicle via a detachable member and performs: acquiring one or more images obtained by imaging a surrounding situation of the vehicle; and, when it is predicted from information acquired from the one or more images that the vehicle is about to depart from the traveling lane in which it is traveling, determining a notification intensity of a notification for the driver of the vehicle on the basis of a change of a target in the one or more images and causing a notifier to output a notification of the determined intensity.
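The intensity determination can be sketched as a mapping from the predicted departure and the observed change of a target in the images to a graded alert level. The inputs, thresholds, and three-level scale are assumptions for illustration; the patent specifies only that intensity depends on a change of a target in the images.

```python
def notification_intensity(lane_offset_m, target_growth_rate):
    """Map a predicted lane departure to a notification intensity.

    lane_offset_m: predicted lateral distance past the lane boundary.
    target_growth_rate: how fast a tracked target grows across recent
        images (a hypothetical proxy for closing speed).
    Returns 'none', 'low', or 'high'."""
    if lane_offset_m <= 0.0:
        return 'none'            # no departure predicted
    if target_growth_rate > 0.1 or lane_offset_m > 0.5:
        return 'high'            # fast-approaching target or large drift
    return 'low'

level = notification_intensity(lane_offset_m=0.3, target_growth_rate=0.02)
```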