Patent classifications
G06T2207/30256
DISPLAY CONTROL DEVICE AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR DISPLAY CONTROL ON HEAD-UP DISPLAY
In a display control device for a head-up display in a vehicle, boundary information regarding a boundary of a travel lane recognized for driving control of the vehicle is acquired, and guidance information used for route guidance is acquired. When an estimated trajectory content indicating an estimated trajectory of the vehicle controlled based on the boundary information and a route guidance content providing guidance on a route at a predetermined point based on the guidance information are displayed together, the estimated trajectory content and the route guidance content are displayed in mutually different display modes, or the display range of the estimated trajectory content is limited so as not to extend beyond the predetermined point for the route guidance. As another example, when the guidance information is acquired during display of the estimated trajectory content, the route guidance content is displayed while the estimated trajectory content is hidden.
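The conditional display logic of the first variant (clip the trajectory so it does not extend past the guidance point) can be sketched as follows. This is an illustrative reduction, not the patent's implementation: the function name `hud_contents`, the modelling of the trajectory as a list of distances ahead of the vehicle, and the guidance value are all assumptions.

```python
def hud_contents(trajectory, guidance, guidance_point=None):
    """Decide what to show on the HUD (illustrative sketch).

    trajectory:     estimated trajectory as distances ahead (metres).
    guidance:       route guidance content, or None if none is active.
    guidance_point: distance of the predetermined guidance point;
                    required whenever guidance is given.
    """
    if guidance is None:
        # No route guidance: show the estimated trajectory as-is.
        return {"trajectory": trajectory, "guidance": None}
    # Show both, but limit the trajectory's display range so it does
    # not extend beyond the predetermined point for the route guidance.
    clipped = [p for p in trajectory if p <= guidance_point]
    return {"trajectory": clipped, "guidance": guidance}
```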
COLLISION AVOIDANCE METHOD AND APPARATUS USING DEPTH SENSOR
Provided is a collision avoidance method using a depth sensor. The method comprises: receiving depth-based image information; identifying a path in the received depth-based image information; determining a depth level for each region of the depth-based image information; setting one or more distance-based sensing regions on the identified path based on the determined depth levels; determining whether an object is detected in each of the set distance-based sensing regions; and outputting a control signal for controlling the operation of a transport when it is determined that an object has been detected.
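The claimed steps can be reduced to a small sketch, assuming the depth-based image information is a per-pixel distance map and the identified path is given as a boolean mask; the function name, zone boundaries, and signal labels are illustrative, not from the patent.

```python
import numpy as np

def collision_control(depth_map, path_mask,
                      zones=((0.0, 1.0, "stop"), (1.0, 3.0, "slow"))):
    """Check distance-based sensing regions along a path in a depth image.

    depth_map: 2-D array of per-pixel distances (metres).
    path_mask: boolean 2-D array marking pixels on the identified path.
    zones:     (near, far, signal) tuples, nearest zone first.
    Returns the control signal of the nearest zone containing an object,
    or None when no object is detected on the path.
    """
    path_depths = depth_map[path_mask]
    for near, far, signal in zones:
        # An "object" here is any path pixel whose depth falls in the zone.
        if np.any((path_depths >= near) & (path_depths < far)):
            return signal
    return None
```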
POSITIONAL PRECISION ASSESSMENT DEVICE, STORAGE MEDIUM STORING COMPUTER PROGRAM FOR POSITIONAL PRECISION ASSESSMENT, AND METHOD FOR DETERMINING POSITIONAL PRECISION
The positional precision assessment device has a processor configured to: estimate a first location of a moving object at a first time based on an image representing road features and on positional information; estimate a second location at the first time based on a location of the moving object at a second time and on an amount of movement and change in direction from the second time to the first time; input the first and second locations into a filter to calculate a current location at the first time; and determine a state of precision of the current location based on a difference between the current location of the moving object and a location estimated from a location of the moving object at a time prior to the first time and the amount of movement and change in direction from that prior time to the first time.
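A toy version of the estimation-and-assessment loop, under assumptions the abstract does not make: 2-D planar coordinates, a simple weighted average standing in for the filter, and illustrative function names throughout.

```python
import math

def dead_reckon(pos, heading, distance, dheading):
    """Project an (x, y) position forward by odometry: move `distance`
    along `heading`, then apply the heading change `dheading`."""
    x, y = pos
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return (x, y), heading + dheading

def fuse(first, second, w=0.5):
    """Stand-in for the filter: weighted average of the two estimates."""
    return tuple(w * a + (1 - w) * b for a, b in zip(first, second))

def precision_ok(current, predicted, tol=1.0):
    """Precision is judged 'good' when the fused fix stays within `tol`
    metres of the position dead-reckoned from an earlier fix."""
    return math.dist(current, predicted) <= tol
```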
VEHICLE AND METHOD OF CONTROLLING THE SAME
Disclosed is a vehicle including a camera that has a field of view outside the vehicle and obtains image data. The vehicle further includes a controller with a processor configured to process the image data. In particular, the controller is configured to determine a target vehicle from at least one surrounding vehicle by processing the image data, determine a first reference point and a second reference point based on the target vehicle, and control the vehicle to follow the target vehicle based on the first reference point and the second reference point.
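One plausible reading, sketched under assumptions the abstract does not state: that the two reference points are, say, the target vehicle's rear corners, and that following reduces to proportional steering toward their midpoint. Names and the control law are illustrative only.

```python
def follow_control(ego, ref1, ref2, gain=0.5):
    """Steer toward the midpoint of two reference points on the target.

    ego, ref1, ref2: (x, y) points in a frame where +y is ahead of the
    ego vehicle. Returns a proportional steering command toward the
    midpoint of the two reference points.
    """
    mid_x = (ref1[0] + ref2[0]) / 2
    lateral_error = mid_x - ego[0]
    return gain * lateral_error
```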
Systems and methods for vehicle braking
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a method for navigating a user vehicle may include acquiring, using at least one image capture device, a plurality of images of an area in a vicinity of the user vehicle; determining from the plurality of images a first lane constraint on a first side of the user vehicle and a second lane constraint on a second side of the user vehicle opposite to the first side of the user vehicle; enabling the user vehicle to pass a target vehicle if the target vehicle is determined to be in a lane different from the lane in which the user vehicle is traveling; and causing the user vehicle to abort the pass before completion of the pass, if the target vehicle is determined to be entering the lane in which the user vehicle is traveling.
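The pass/abort behaviour reads as a small state machine: enable the pass while the target occupies a different lane, abort it if the target enters the ego lane mid-manoeuvre. The sketch below assumes lanes are compared by identifier and is illustrative, not the patent's implementation.

```python
def pass_decision(state, target_lane, ego_lane):
    """Minimal state machine for the pass manoeuvre described above.

    state: "following" or "passing".
    Returns (next_state, action).
    """
    if state == "following":
        if target_lane != ego_lane:
            # Target is in a different lane: the pass may proceed.
            return "passing", "enable_pass"
        return "following", "hold"
    if state == "passing":
        if target_lane == ego_lane:
            # Target is entering the ego lane: abort before completion.
            return "following", "abort_pass"
        return "passing", "continue_pass"
    raise ValueError(f"unknown state: {state!r}")
```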
Image-based road cone recognition method and apparatus, storage medium, and vehicle
An image-based road cone recognition method, apparatus, storage medium, and vehicle. The method comprises: acquiring, during vehicle driving, an image of an object to be recognized; performing differential processing of the image to acquire a differentially processed image, and performing, according to a preset threshold, ternary processing of the differentially processed image to acquire a ternary image comprising positive boundary pixels and negative boundary pixels; acquiring, according to the positive and negative boundary pixels, a positive straight line segment and a negative straight line segment that represent the trend of the boundaries of the object to be recognized; and, when position information of the positive and negative straight line segments matches boundary position information of a known road cone, determining that the object to be recognized is a road cone.
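The differential-plus-ternary step can be sketched with a horizontal difference followed by a three-way threshold. The choice of a horizontal gradient and the function name are assumptions for illustration; the abstract does not fix the differencing direction.

```python
import numpy as np

def ternarize(image, threshold):
    """Horizontal differential processing followed by ternary quantisation.

    Pixels whose horizontal difference exceeds +threshold become +1
    (rising boundary), those below -threshold become -1 (falling
    boundary), and everything else 0.
    """
    diff = np.diff(image.astype(np.int32), axis=1)
    tern = np.zeros_like(diff)
    tern[diff > threshold] = 1
    tern[diff < -threshold] = -1
    return tern
```

Line segments representing the boundary trend would then be fitted separately through the +1 and -1 pixels (e.g. with a Hough transform), which this sketch omits.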
Method and system for ground truth determination in lane departure warning
Systems and methods for ground truth determination in lane departure warning are provided. The methods include receiving time-slice images of a lane, captured at different time frames, from an image capturing unit. The intensity profiles of these time-slice images are determined and smoothened to obtain a smoothened histogram. A threshold value for the time-slice images is determined and further refined to extract the lane marking. The extracted lane marking is then extended across multiple rows of the lane to determine the ground truth value used for validating a lane departure warning.
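A minimal sketch of the intensity-profile pipeline, assuming column-wise mean intensities as the profile, a moving-average smoother, and a mean-plus-offset threshold; the refinement step described above is not reproduced, and all names are illustrative.

```python
import numpy as np

def extract_lane_columns(time_slice, kernel=3, offset=20.0):
    """Locate lane-marking columns in one time-slice image.

    Column-wise mean intensities form the intensity profile; a moving
    average smooths it, and columns brighter than the smoothed
    profile's mean plus `offset` are taken as lane marking.
    """
    profile = time_slice.mean(axis=0)
    smooth = np.convolve(profile, np.ones(kernel) / kernel, mode="same")
    threshold = smooth.mean() + offset
    return np.flatnonzero(smooth > threshold)
```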
Information processing apparatus, image capture apparatus, image processing system, and method of processing a plurality of captured images of a traveling surface where a moveable apparatus travels
An information processing apparatus includes an acquisition unit configured to acquire a plurality of captured images of a traveling surface where a movable apparatus travels, each of the captured images including distance information in a depth direction transverse to the traveling surface, the plurality of captured images having been captured using a plurality of stereo image capture devices, and an image processing unit configured to stitch together the plurality of images of the traveling surface captured by the plurality of stereo image capture devices by identifying partially overlapping portions of one or more pairs of the images captured by respective stereo image capture devices which are adjacent in a width direction of the traveling surface.
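The overlap-identification idea can be illustrated by a brute-force search over candidate overlap widths for one width-adjacent pair of captures. Real stereo captures would need rectification and use of the per-pixel distance information, which this sketch omits; the function name and matching criterion are assumptions.

```python
import numpy as np

def stitch_pair(left, right, max_overlap):
    """Stitch two width-adjacent captures by locating their overlap.

    Tries each candidate overlap width and keeps the one where the
    right edge of `left` best matches the left edge of `right`
    (smallest mean absolute difference), then joins the images,
    keeping the overlapping columns from `left`.
    """
    best_w, best_err = 0, float("inf")
    for w in range(1, max_overlap + 1):
        err = np.abs(left[:, -w:] - right[:, :w]).mean()
        if err < best_err:
            best_w, best_err = w, err
    return np.hstack([left, right[:, best_w:]])
```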
Apparatus and method for generating distribution information about positioning difference between GNSS positioning and precise positioning based on image and high-definition map
According to an embodiment, an apparatus and method for generating distribution information may include: periodically generating GNSS information including GNSS positioning information and a positioning time; generating image information including an image of at least one facility object, captured at the positioning time, while a vehicle drives; obtaining precise positioning information for the capturing position at the positioning time based on the image information, a high-definition map, and the GNSS information; calculating a positioning difference, which is the difference between the GNSS positioning information and the precise positioning information; and generating distribution information including the GNSS information, the positioning difference, and the precise positioning information. The high-definition map includes feature-point spatial coordinates and a property for each facility object.
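Assembling one distribution-information record is essentially bookkeeping once both fixes are available. The sketch below assumes positions are given as planar east/north coordinates in metres; the record layout and names are illustrative, not the patent's.

```python
import math

def positioning_record(gnss_fix, precise_fix, t):
    """Build one distribution-information record: the GNSS fix, the
    image/map-based precise fix, and their difference.

    gnss_fix, precise_fix: (east, north) in metres.
    t: the positioning time.
    """
    de = precise_fix[0] - gnss_fix[0]
    dn = precise_fix[1] - gnss_fix[1]
    return {
        "time": t,
        "gnss": gnss_fix,
        "precise": precise_fix,
        "difference": (de, dn),   # positioning difference, per axis
        "error_m": math.hypot(de, dn),  # its magnitude
    }
```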
Lane line reconstruction using future scenes and trajectory
A vehicle capable of autonomous driving includes a lane detection system. The lane detection system is trained to predict lane lines using training images. The training images are automatically processed by a training module of the lane detection system in order to create ground truth data. The ground truth data is used to train the lane detection system to predict lane lines that are occluded in real-time images of roadways. The lane detection system predicts lane lines of a roadway in a real-time image even though the lane lines maybe indiscernible due to objects on the roadway or due to the position of the lane lines being in the horizon.