Patent classifications
G06T2207/30261
Operating an autonomous vehicle according to road user reaction modeling with occlusions
The disclosure provides a method for operating an autonomous vehicle. To operate the autonomous vehicle, a plurality of lane segments that are in an environment of the autonomous vehicle is determined and a first object and a second object in the environment are detected. A first position for the first object is determined in relation to the plurality of lane segments, and particular lane segments that are occluded by the first object are determined using the first position. According to the occluded lane segments, a reaction time is determined for the second object and a driving instruction for the autonomous vehicle is determined according to the reaction time. The autonomous vehicle is then operated based on the driving instruction.
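The occlusion-then-reaction-time pipeline described above could be sketched in simplified 2D form. This is a hypothetical illustration, not the patented method: the occluding object is modeled as a circle, lane segments as center points, and the reaction-time penalty is a fixed constant; all names are illustrative.

```python
import math

def occluded_segments(ego, occluder_center, occluder_radius, segments):
    """Return indices of lane-segment centers hidden behind a circular
    occluder, as seen from the ego vehicle (simplified 2D shadow test)."""
    ox, oy = occluder_center[0] - ego[0], occluder_center[1] - ego[1]
    d = math.hypot(ox, oy)
    bearing = math.atan2(oy, ox)
    half_angle = math.asin(min(1.0, occluder_radius / d))  # angular half-width
    hidden = []
    for i, (sx, sy) in enumerate(segments):
        vx, vy = sx - ego[0], sy - ego[1]
        if math.hypot(vx, vy) <= d:
            continue  # segment lies in front of the occluder
        delta = math.atan2(vy, vx) - bearing
        delta = (delta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
        if abs(delta) <= half_angle:
            hidden.append(i)
    return hidden

def reaction_time(base_s, segment_idx, hidden):
    """Pad the assumed reaction time when the second road user occupies an
    occluded segment and therefore cannot yet see the ego vehicle."""
    return base_s + (1.0 if segment_idx in hidden else 0.0)
```

A planner would then pick the driving instruction (e.g. a speed limit or yield decision) from the padded reaction time; that step depends on the vehicle's planning stack and is omitted here.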
DISTANCE MEASURING APPARATUS AND DISTANCE MEASURING METHOD
A distance measuring apparatus is provided. The apparatus comprises an image acquisition unit that acquires time-series images by an imaging unit, and a processing unit that performs image processing. The processing unit generates a plurality of enlarged images obtained by enlarging a reference image acquired before a target image among the time-series images at a plurality of types of different enlargement rates. The processing unit obtains a difference between each of the plurality of enlarged images and the target image, detects a target object that is a subject from the target image, and specifies a distance to the detected target object on the basis of the enlargement rate of the enlarged image with which the difference is minimum.
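The scale-matching idea above can be sketched on a 1D intensity profile. This is a toy reconstruction, not the patented implementation: nearest-neighbour enlargement stands in for real image resampling, and the pinhole relation distance = ego_advance / (rate − 1) assumes the camera moved straight toward a static object between the reference and target frames.

```python
def enlarge(signal, rate, out_len):
    """Nearest-neighbour enlargement of a 1D intensity profile about its centre."""
    c = (len(signal) - 1) / 2.0
    out = []
    for i in range(out_len):
        src = c + (i - (out_len - 1) / 2.0) / rate
        j = min(max(int(round(src)), 0), len(signal) - 1)
        out.append(signal[j])
    return out

def best_rate(reference, target, rates):
    """Pick the enlargement rate whose enlarged reference best matches the
    target frame (minimum sum of absolute differences)."""
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(rates, key=lambda r: sad(enlarge(reference, r, len(target)), target))

def distance_from_rate(ego_advance_m, rate):
    """Pinhole relation: if the camera closed ego_advance_m between frames and
    the object's image scaled by `rate`, its distance is advance / (rate - 1)."""
    return ego_advance_m / (rate - 1.0)
```

In 2D the same search would run over image patches, but the structure — enlarge, difference, argmin, convert rate to distance — is unchanged.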
DRIVING CONTROL SYSTEM AND METHOD OF CONTROLLING THE SAME USING SENSOR FUSION BETWEEN VEHICLES
The present disclosure relates to a driving control system, and a method of controlling the same, using sensor fusion between vehicles. The driving control system allows vehicles to share sensor data, fuses the sensor data, matches an adjacent vehicle to the sensor data, and thereby improves recognition performance with respect to the periphery. The system controls driving in accordance with changes in the peripheral environment and the traveling state of another vehicle; it receives sensor data from the other vehicle, converts the coordinates of the matched vehicle's sensor data, and fuses the result with host vehicle sensor information. This makes it possible to improve recognition performance and accuracy with respect to the surrounding environment and objects, enable stable autonomous driving, and improve the stability of the vehicle.
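The coordinate-conversion step above is essentially a rigid-body (SE(2)) frame change. A minimal sketch, assuming both vehicles know their pose (x, y, heading) in a shared world frame; the weighted-average fusion is a deliberately trivial stand-in for a proper track-to-track fusion filter, and all names are illustrative.

```python
import math

def to_host_frame(detection_xy, other_pose, host_pose):
    """Re-express a detection made by another vehicle in the host vehicle's
    local frame. Poses are (x, y, heading_rad) in a shared world frame.
    Plain SE(2) chain: other-local -> world -> host-local."""
    ox, oy, oth = other_pose
    # other-local -> world
    wx = ox + detection_xy[0] * math.cos(oth) - detection_xy[1] * math.sin(oth)
    wy = oy + detection_xy[0] * math.sin(oth) + detection_xy[1] * math.cos(oth)
    hx, hy, hth = host_pose
    # world -> host-local
    dx, dy = wx - hx, wy - hy
    return (dx * math.cos(hth) + dy * math.sin(hth),
            -dx * math.sin(hth) + dy * math.cos(hth))

def fuse(host_xy, other_xy, w_host=0.5):
    """Toy fusion of two position estimates by weighted averaging."""
    return (w_host * host_xy[0] + (1 - w_host) * other_xy[0],
            w_host * host_xy[1] + (1 - w_host) * other_xy[1])
```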
System and method for visibility enhancement
A system for visibility enhancement for a motor vehicle assistant system that warns the driver of hazardous situations due to at least one object located within a critical range defined relative to the motor vehicle. The system includes at least a first sensor means comprising a camera, installed in rear view equipment of the motor vehicle and adapted to record at least one image, and an image processing means adapted to receive a first input signal from the first sensor means containing the at least one image and a second input signal containing at least one position profile of the at least one object located within the critical range, and to manipulate the at least one image to generate a contrast-manipulated image. A corresponding method of visibility enhancement is also described.
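One plausible reading of "contrast manipulation driven by an object position profile" is boosting local contrast inside a region of interest around the detected object. The sketch below is purely illustrative and not the patented image pipeline; the image is a 2D list of grey values and the linear gain around the ROI mean is an assumption.

```python
def enhance_contrast_roi(image, roi, gain=2.0):
    """Boost local contrast inside a rectangular region of interest around a
    detected object; pixels outside the ROI are left untouched.
    `image` is a 2D list of grey values in [0, 255];
    roi = (r0, r1, c0, c1) with r1 and c1 exclusive."""
    r0, r1, c0, c1 = roi
    region = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    mean = sum(region) / len(region)
    out = [row[:] for row in image]  # leave the input image untouched
    for r in range(r0, r1):
        for c in range(c0, c1):
            v = mean + gain * (image[r][c] - mean)  # stretch about the ROI mean
            out[r][c] = max(0, min(255, int(round(v))))
    return out
```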
METHOD OF ESTIMATING THREE-DIMENSIONAL COORDINATE VALUE FOR EACH PIXEL OF TWO-DIMENSIONAL IMAGE, AND METHOD OF ESTIMATING AUTONOMOUS DRIVING INFORMATION USING THE SAME
Proposed are a method of estimating a three-dimensional coordinate value for each pixel of a two-dimensional image, and a method of estimating autonomous driving information using the same; more specifically, a method that can efficiently acquire information needed for autonomous driving using a mono camera. The method can acquire information of sufficient reliability in real time without using expensive equipment, such as a high-precision GPS receiver or a stereo camera, that would otherwise be required for autonomous driving.
Surface profile estimation and bump detection for autonomous machine applications
In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered in view of a portion of an environment including drivable free-space, and within a threshold height to factor out other objects or obstacles other than a driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
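The sliding-window reduction from a filtered point cloud to a 1D height profile could be sketched as follows. This is an illustrative simplification, not the disclosed implementation: points are (x, y, z) tuples with x the longitudinal distance in metres, the window is a fixed-width bin rather than a bounding shape, the per-bin median is an assumed robustness choice, and the bump test against the profile minimum is a stand-in for a real baseline estimate.

```python
def height_profile(points, window_m=0.5, max_x=None):
    """Collapse a filtered 3D point cloud (x = longitudinal metres, z = height)
    into a 1D surface profile: one representative height per window bin."""
    if max_x is None:
        max_x = max(x for x, _, _ in points)
    n_bins = int(max_x / window_m) + 1
    bins = [[] for _ in range(n_bins)]
    for x, _, z in points:
        if 0 <= x <= max_x:
            bins[int(x / window_m)].append(z)
    def med(v):  # median height per bin is robust to stray returns
        s = sorted(v)
        return s[len(s) // 2] if s else 0.0
    return [med(b) for b in bins]

def detect_bumps(profile, min_rise=0.05):
    """Flag bins whose height exceeds the baseline (here: the profile
    minimum) by more than min_rise metres."""
    base = min(profile)
    return [i for i, h in enumerate(profile) if h - base > min_rise]
```

The bin index of each flagged bump gives its longitudinal location; orientation and geometry would need the y-extent that this 1D reduction discards.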
Navigation system with camera assist
One embodiment is a navigation system for an aircraft including a positioning system to generate information related to a position of the aircraft, a group of cameras mounted to a body of the aircraft, each camera of the group of cameras to simultaneously capture images of a portion of an environment that surrounds the aircraft, and a processing component coupled to the positioning system and the group of cameras, the processing component to determine a current position of the aircraft based on the information related to the position of the aircraft and the images.
Driving assistance device, driving situation information acquisition system, driving assistance method, and program
A driving assistance device includes: a line-of-sight direction detection unit that detects a direction of a line of sight of a driver of a moving body; an obstacle detection unit that detects a position of an obstacle in environs of the moving body; an assessment criteria determination unit that determines assessment criteria of a look at the obstacle by the driver based on at least one of time, location, weather, a state of the moving body, and a state of the driver; and a warning processing unit that obtains an assessment result by applying, to the assessment criteria, a score computed based on the direction of the line of sight of the driver and the position of the obstacle, the warning processing unit determining at least one of whether or not a warning needs to be issued to the driver and a level of the warning, based on the assessment result.
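The score-plus-criteria structure above could be sketched as an angular gaze score compared against a context-dependent threshold. This is a hypothetical reduction, not the patented scoring: the score falls linearly from 1 (looking straight at the obstacle) to 0 at a 90-degree offset, and the criteria dictionary stands in for the time/location/weather-dependent assessment criteria.

```python
import math

def gaze_score(gaze_dir_rad, ego_xy, obstacle_xy):
    """Score in [0, 1]: 1 when the driver looks straight at the obstacle,
    falling to 0 as the angular offset approaches 90 degrees."""
    bearing = math.atan2(obstacle_xy[1] - ego_xy[1], obstacle_xy[0] - ego_xy[0])
    off = abs((gaze_dir_rad - bearing + math.pi) % (2 * math.pi) - math.pi)
    return max(0.0, 1.0 - off / (math.pi / 2))

def warning_level(score, criteria):
    """Apply context-dependent assessment criteria: a stricter (higher)
    threshold, e.g. at night or in rain, makes warnings fire earlier."""
    threshold = criteria.get("threshold", 0.5)
    if score >= threshold:
        return "none"
    return "strong" if score < threshold / 2 else "mild"
```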
Moving body behavior prediction device and moving body behavior prediction method
The present invention improves the accuracy of predicting rarely occurring behavior of moving bodies, without reducing the accuracy of predicting commonly occurring behavior of moving bodies. A vehicle 101 is provided with a moving body behavior prediction device 10. The moving body behavior prediction device 10 is provided with a first behavior prediction unit 203 and a second behavior prediction unit 207. The first behavior prediction unit 203 learns first predicted behavior 204 so as to minimize the error between behavior prediction results for moving bodies and behavior recognition results for the moving bodies after a prediction time has elapsed. The second behavior prediction unit 207 learns future second predicted behavior 208 of the moving bodies around the vehicle 101 so that the vehicle 101 does not drive in an unsafe manner.
Robot climbing control method and robot
A robot climbing control method is disclosed. A gravity direction vector in a gravity direction in a camera coordinate system of a robot is obtained. A stair edge of stairs in a scene image is obtained and an edge direction vector of the stair edge in the camera coordinate system is determined. A position parameter of the robot relative to the stairs is determined according to the gravity direction vector and the edge direction vector. Poses of the robot are adjusted according to the position parameter to control the robot to climb the stairs.
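The geometric core — turning a gravity vector and a stair-edge vector into an alignment parameter — could be sketched as below. This is an assumption-laden illustration, not the patented pose controller: the "position parameter" is reduced to a yaw error between the robot's forward axis and the horizontal direction perpendicular to the stair edge, with all vectors expressed in the camera frame.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def approach_yaw_error(gravity_cam, edge_cam, forward_cam=(0.0, 0.0, 1.0)):
    """Yaw misalignment (rad) between the robot's forward axis and the
    direction perpendicular to the stair edge, both taken in the horizontal
    plane defined by gravity. Zero means the robot faces the stairs squarely."""
    g = normalize(gravity_cam)
    e = normalize(edge_cam)
    # g x e is horizontal (normal to g) and orthogonal to the edge
    perp = normalize(cross(g, e))
    # project the forward axis into the horizontal plane
    f = tuple(fc - dot(forward_cam, g) * gc for fc, gc in zip(forward_cam, g))
    f = normalize(f)
    c = max(-1.0, min(1.0, abs(dot(f, perp))))
    return math.acos(c)
```

A climbing controller would drive this error toward zero before stepping onto the first tread; distance to the edge would come from the same vectors plus depth information, which is omitted here.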