Patent classifications
B60R2300/8093
System and methods for object detection and tracking using a lidar observation model
A system for detecting and tracking objects using lidar can include one or more processors configured to receive lidar data. The one or more processors can determine shape data from the lidar data. The shape data can be indicative of an object. The one or more processors can determine a plurality of extents of the object based on the shape data. The one or more processors can update a state of the object based on the plurality of extents, the state including a boundary of the object. The one or more processors can provide the state of the object to an autonomous vehicle controller to cause the autonomous vehicle controller to control an autonomous vehicle responsive to the state of the object.
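The extent-based state update described in this abstract can be illustrated with a minimal sketch. All names (`extents`, `update_state`) and the simple exponential-smoothing update are assumptions for illustration, not the patented observation model.

```python
# Illustrative sketch: derive per-axis extents of an object from lidar
# shape points, then blend them into a tracked boundary state.

def extents(points):
    """Axis-aligned extents of a lidar point cluster:
    returns ((xmin, xmax), (ymin, ymax))."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), max(xs)), (min(ys), max(ys))

def update_state(state, points, alpha=0.5):
    """Update the tracked boundary from newly measured extents.
    Exponential smoothing stands in for the observation model here."""
    (xmin, xmax), (ymin, ymax) = extents(points)
    measured = {"xmin": xmin, "xmax": xmax, "ymin": ymin, "ymax": ymax}
    if state is None:          # first observation initialises the state
        return measured
    return {k: alpha * measured[k] + (1 - alpha) * state[k] for k in measured}
```

The resulting state dictionary is what would be handed to a downstream controller.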
Method and system for assisting drivers in locating objects that may move into their vehicle path
A system and method for assisting drivers of vehicles are described. The system and method provide an extended view of the area surrounding the driver's vehicle, together with real-time trajectories for objects and other vehicles that may enter the driver's reactionary zone. The system captures images of the area surrounding the driver's vehicle, creates a composite image of that area in real time, and uses Augmented Reality (AR) to create a 3-D overlay that warns the driver as objects or other vehicles enter the reactionary zone, so that the driver can make more informed driving decisions.
METHOD FOR DETERMINING OBJECT INFORMATION RELATING TO AN OBJECT IN A VEHICLE ENVIRONMENT, CONTROL UNIT AND VEHICLE
The disclosure relates to a method for determining object information relating to an object in an environment of a vehicle having a camera. The method includes: capturing a first image of the environment with the camera from a first position; changing the position of the camera; capturing a second image of the environment with the camera from a second position; and determining object information relating to an object by selecting at least one first pixel in the first image and at least one second pixel in the second image, selecting the first pixel and the second pixel such that they are assigned to the same object point of the object, and determining object coordinates of the assigned object point by triangulation. Changing the position of the camera is brought about by controlling an active actuator system in the vehicle. The actuator system adjusts the camera by an adjustment distance without changing a driving condition of the vehicle.
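The triangulation step described here can be sketched under a simple pinhole-camera assumption: the actuator shifts the camera by a known adjustment distance (the baseline), and the pixel disparity of the same object point between the two images yields its depth. The function name and the pure-sideways-shift geometry are illustrative assumptions.

```python
# Hypothetical sketch of depth-from-disparity triangulation, assuming the
# camera is shifted sideways by a known baseline and both images use the
# same pinhole model with focal length in pixels.

def triangulate_depth(x1_px, x2_px, focal_px, baseline_m):
    """Depth of an object point from its horizontal pixel disparity
    between the first image (x1_px) and the second image (x2_px)."""
    disparity = x1_px - x2_px
    if disparity <= 0:
        raise ValueError("object point must shift opposite the camera motion")
    return focal_px * baseline_m / disparity

# Example: 800 px focal length, 0.10 m adjustment distance, 20 px disparity
depth_m = triangulate_depth(420.0, 400.0, focal_px=800.0, baseline_m=0.10)
# 800 * 0.10 / 20 = 4.0 metres
```

With the depth known, the full 3-D object coordinates follow by back-projecting the pixel through the camera model.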
Method and device for outputting lane information
A vehicle driving assistance device includes: a sensing unit configured to capture a front image of a running vehicle; a processor configured to detect a lane by using the front image and estimate the lane by detecting objects around a road, on which the vehicle is running, by using the sensing unit; and an output unit configured to output the estimated lane.
Outside recognition apparatus for vehicle
An on-board outside recognition apparatus extracts a feature point from an image of the environment around the user's own vehicle, measures the three-dimensional position of the feature point based on its movement tracked in time series on the image, and calculates the foot position of the feature point on the image from that three-dimensional position. The apparatus then extracts, from the image, a road surface area where the user's own vehicle can travel, using the degree of similarity of the image's texture, and judges a feature point whose foot position is judged not to be included in the road surface area to be highly reliable.
PERIPHERY MONITORING DEVICE FOR WORKING MACHINE
A periphery monitoring device calculates an expected passage range indicating a range of a locus of a machine body when a lower travelling body travels in an imaging direction of a camera, based on a slewing angle of an upper slewing body and an attitude of an attachment, and superimposes a range image indicating the calculated expected passage range on an image captured by the camera to display the superimposed image on the display.
Collision avoidance and/or pedestrian detection system
A collision avoidance and/or pedestrian detection system for a large passenger vehicle, such as a commuter bus, is described herein in various embodiments, along with a method for avoiding collisions and/or detecting pedestrians and features/articles of manufacture for improving the same. The system includes one or more exterior and/or interior sensing devices positioned strategically around the exterior and interior of the vehicle for recording data. The sensing devices may be responsive to one or more situational sensors, and may be connected to one or more interior and/or exterior warning systems configured to alert a driver inside the vehicle and/or a pedestrian outside the vehicle that a collision may be possible and/or imminent, based on a path of the vehicle and/or a position of the pedestrian as detected by one or more sensing devices and/or situational sensors.
Work vehicle display systems and methods for generating visually-manipulated context views
A work vehicle display system utilized in piloting a work vehicle includes a display device having a display screen, a context camera mounted to the work vehicle and positioned to capture a context camera feed of the work vehicle's exterior environment, and a controller architecture. The controller architecture is configured to: (i) receive the context camera feed from the context camera; (ii) generate a visually-manipulated context view utilizing the context camera feed; and (iii) output the visually-manipulated context view to the display device for presentation on the display screen. In the process of generating the visually-manipulated context view, the controller architecture applies a dynamic distortion-perspective (D/P) modification effect to the context camera feed, while gradually adjusting a parameter of the dynamic D/P modification effect in response to changes in operator viewing preferences or in response to changes in a current operating condition of the work vehicle.
Vehicular control system with traffic lane detection
A vehicular control system includes a forward viewing camera disposed at an in-cabin side of a windshield of a vehicle and viewing forward of the vehicle. Road curvature of a road along which the vehicle is traveling is determined responsive at least in part to processing by an image processor of image data captured by the camera. Responsive at least in part to processing of captured image data, a traffic lane of the road along which the vehicle is traveling is determined. Upon approach of the vehicle to a curve in the road, speed of the vehicle is reduced to a reduced speed for traveling around the curve in the road at least in part responsive to at least one selected from the group consisting of (a) processing of image data captured by the forward viewing camera and (b) data relevant to a current geographical location of the equipped vehicle.
Vehicle camera system
A processor for a vehicle camera system, the processor arranged to: receive image data captured by a camera, the image data providing a field of view surrounding a host vehicle; determine the presence of an obstruction in the field of view; receive inputs from one or more other sensors; determine the presence or absence of obstructions in a field of view of the one or more other sensors, wherein each determined presence or absence carries an associated confidence score and wherein each confidence score contributes to a confidence total; and control a display to display the image data based on the detection of the obstruction in the field of view provided by the image data and based on a comparison of the confidence total to a confidence threshold.
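The confidence-total logic in this abstract can be sketched as follows. The function names, the decision rule, and the specific scores are assumptions chosen to illustrate the comparison of a summed confidence total against a threshold, not the claimed implementation.

```python
# Illustrative sketch: each sensor's obstruction determination carries a
# confidence score; every score contributes to a running total, which is
# compared against a threshold when deciding whether to display the image.

def confidence_total(determinations):
    """Sum the confidence scores of per-sensor (obstructed, confidence)
    determinations; each score contributes to the total."""
    return sum(conf for _obstructed, conf in determinations)

def display_image(camera_obstructed, determinations, threshold):
    """Display the camera image when it is unobstructed, or when the
    other sensors' combined confidence stays below the threshold."""
    return (not camera_obstructed) or confidence_total(determinations) < threshold
```

A usage example: with two sensor determinations scoring 0.5 and 0.25, an obstructed camera view is still displayed against a threshold of 1.0 but suppressed against a threshold of 0.5.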