Patent classifications
G01S2013/9315
Apparatus for assisting driving of a vehicle and method thereof
An apparatus for assisting driving of a vehicle includes a camera mounted to the vehicle for viewing an area in front of the vehicle, a radar sensor mounted to the vehicle to sense the surroundings of the vehicle, and a controller connected to the camera and/or the radar sensor to detect obstacles and perform collision avoidance control. The controller is further configured to recognize a first obstacle approaching in a lateral direction from outside a driving lane of the vehicle, generate and store a collision avoidance path for avoiding collision with the first obstacle, recognize a third obstacle passing behind a second obstacle in the lateral direction after the recognition of the first obstacle is interrupted by the second obstacle, and perform collision avoidance control of the vehicle based on a similarity between the first obstacle and the third obstacle and on the stored collision avoidance path.
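The abstract does not disclose how the similarity between the occluded first obstacle and the reappearing third obstacle is computed. A minimal sketch of one plausible reading, assuming a constant lateral velocity while the obstacle is hidden; the `Track` fields, tolerances, and threshold are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # longitudinal position (m)
    y: float   # lateral position (m)
    vy: float  # lateral velocity (m/s)

def similarity(first: Track, third: Track, dt: float,
               pos_tol: float = 2.0, vel_tol: float = 1.0) -> float:
    """Score in [0, 1]: how well the reappearing track matches the
    occluded one, assuming roughly constant lateral motion while hidden."""
    predicted_y = first.y + first.vy * dt          # dead-reckon through occlusion
    pos_err = abs(third.y - predicted_y)
    vel_err = abs(third.vy - first.vy)
    pos_score = max(0.0, 1.0 - pos_err / pos_tol)
    vel_score = max(0.0, 1.0 - vel_err / vel_tol)
    return 0.5 * (pos_score + vel_score)

def resume_avoidance(first: Track, third: Track, dt: float,
                     threshold: float = 0.6) -> bool:
    """Reuse the stored collision-avoidance path only if the new
    detection is plausibly the same obstacle."""
    return similarity(first, third, dt) >= threshold
```

Under this reading, the stored path is reused only when the reappearing detection lands near where the hidden obstacle was extrapolated to be, with a similar lateral velocity.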
Scenario aware perception system for an automated vehicle
A scenario aware perception system (10) suitable for use on an automated vehicle includes a traffic-scenario detector (14), an object-detection device (24), and a controller (32). The traffic-scenario detector (14) is used to detect a present-scenario (16) experienced by a host-vehicle (12). The object-detection device (24) is used to detect an object (26) proximate to the host-vehicle (12). The controller (32) is in communication with the traffic-scenario detector (14) and the object-detection device (24). The controller (32) is configured to determine a preferred-algorithm (36) used to identify the object (26). The preferred-algorithm (36) is determined based on the present-scenario (16).
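The core of the claim is a dispatch from the detected scenario to a preferred identification algorithm. The patent names no concrete classifiers, so the ones below (and their thresholds) are hypothetical placeholders; only the selection-by-scenario structure is taken from the abstract:

```python
from typing import Callable, Dict

def highway_classifier(obj: dict) -> str:
    # On a highway, fast-moving objects are overwhelmingly other vehicles.
    return "vehicle" if obj["speed"] > 15.0 else "debris"

def urban_classifier(obj: dict) -> str:
    # In town, slow small objects are likely pedestrians.
    if obj["speed"] < 3.0 and obj["size"] < 1.0:
        return "pedestrian"
    return "vehicle"

# The preferred-algorithm is determined based on the present-scenario.
PREFERRED_ALGORITHM: Dict[str, Callable[[dict], str]] = {
    "highway": highway_classifier,
    "urban": urban_classifier,
}

def identify(present_scenario: str, obj: dict) -> str:
    """Select the preferred algorithm for the detected scenario,
    falling back to the urban classifier for unrecognized scenarios."""
    algo = PREFERRED_ALGORITHM.get(present_scenario, urban_classifier)
    return algo(obj)
```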
Stereo depth estimation using deep neural networks
Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real-time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
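The "machine learned argmax" in this abstract builds on the standard soft argmax used for disparity regression: a softmax-weighted expectation over the disparity axis of a cost volume. The sketch below shows only the plain, non-learned version for a single pixel's cost vector (the patented variant adds trainable convolutional layers for context, which are not reproduced here):

```python
import math

def soft_argmax(costs, temperature=1.0):
    """Differentiable substitute for argmax over per-pixel disparity
    costs: softmax over negated costs, then the expectation of the
    disparity indices under those weights."""
    scores = [-c / temperature for c in costs]     # lower cost -> higher score
    m = max(scores)                                # shift for numerical stability
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    return sum(d * w for d, w in zip(range(len(costs)), weights)) / total
```

With a low temperature the result approaches the hard argmax; with a flat cost vector it returns the mean index, which is what makes the operator differentiable and trainable end to end.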
Straddle type vehicle
The present invention provides a straddle type vehicle, comprising: a taillight disposed in a rear portion of the vehicle and configured to emit light rearward of the vehicle, and a detection unit configured to emit radio waves and detect surrounding conditions behind the vehicle, wherein the taillight includes a light source and a housing for accommodating the light source, the detection unit is provided inside the housing, the housing has a transmitting portion that includes a first region for transmitting light emitted from the light source and a second region for transmitting the radio waves emitted from the detection unit, the first region has an uneven shape for diffusing the light emitted from the light source, and the second region does not have the uneven shape.
REAR LATERAL BLIND-SPOT WARNING SYSTEM AND METHOD FOR VEHICLE
A rear lateral blind-spot warning system for a vehicle includes a sensor configured to sense position information and movement information on an external obstacle, a determiner configured to determine the type of the external obstacle located in a rear blind spot or a lateral blind spot of the vehicle based on the position information and the movement information sensed by the sensor, a setter configured to set a rear lateral blind-spot warning range or a rear lateral blind-spot warning time based on the type of the external obstacle determined by the determiner, and a controller configured to control rear lateral blind-spot warning operation based on the rear lateral blind-spot warning range or the rear lateral blind-spot warning time set by the setter.
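The setter's role is to map the determined obstacle type to a warning range or warning time, which the controller then applies. A minimal sketch of that mapping; the ranges, the 2.5 s warning time, and the obstacle categories are illustrative assumptions, not values from the patent:

```python
# Hypothetical type-specific warning ranges (metres).
WARNING_RANGE_M = {"truck": 40.0, "car": 30.0, "motorcycle": 20.0, "bicycle": 10.0}

def should_warn(obstacle_type: str, distance_m: float, closing_speed_mps: float,
                warning_time_s: float = 2.5, default_range_m: float = 30.0) -> bool:
    """Trigger the rear lateral blind-spot warning when the obstacle is
    inside the type-specific range, or will reach the host vehicle
    within the warning time at its current closing speed."""
    rng = WARNING_RANGE_M.get(obstacle_type, default_range_m)
    if distance_m <= rng:
        return True
    if closing_speed_mps > 0 and distance_m / closing_speed_mps <= warning_time_s:
        return True
    return False
```

Keying the range to the obstacle type reflects the abstract's point: a fast-closing truck warrants a warning at a distance where a bicycle would not.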
VEHICLE CONTROL DEVICE, VEHICLE, VEHICLE CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
A vehicle control device is mountable on a vehicle. The vehicle control device includes: a processor; and a memory storing instructions that, when executed by the processor, cause the vehicle control device to perform operations including: acquiring detection information obtained by detecting an obstacle around the vehicle; performing collision determination of evaluating a possibility of collision with the obstacle; generating, based on the detection information, information on an approaching object that is an obstacle approaching the vehicle and information on a detection point indicating an obstacle that does not move; estimating a position of a shielding object based on the information on the detection point; evaluating, based on the position of the shielding object and the information on the approaching object, a ghost likelihood indicating a possibility that the approaching object is a ghost; and excluding, based on the ghost likelihood, the approaching object from the collision determination.
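Radar multipath ghosts arise when a real object's echo reflects off a shielding object (for example a wall or guardrail), so the ghost appears as a mirror image of the real object across the shield. A heuristic sketch of the ghost-likelihood idea, assuming the shielding object is modeled as a line at a fixed lateral offset; the scoring, tolerance, and threshold are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float  # longitudinal position (m)
    y: float  # lateral position (m)

def ghost_likelihood(approaching: Detection, wall_y: float,
                     real_tracks: list, match_tol: float = 1.0) -> float:
    """Heuristic ghost score in [0, 1]: a detection beyond the shielding
    object whose mirror image across the shield coincides with a known
    real track is likely a multipath ghost."""
    # Only detections on the far side of the shield can be ghosts here.
    if abs(approaching.y) <= abs(wall_y) or not real_tracks:
        return 0.0
    mirrored_y = 2 * wall_y - approaching.y
    best = min(abs(t.x - approaching.x) + abs(t.y - mirrored_y)
               for t in real_tracks)
    return max(0.0, 1.0 - best / match_tol)

def filter_for_collision_check(detections, wall_y, real_tracks, threshold=0.7):
    """Exclude likely ghosts from the collision determination."""
    return [d for d in detections
            if ghost_likelihood(d, wall_y, real_tracks) < threshold]
```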
Blind spot detection
Herein is disclosed a detection device comprising one or more sensors, configured to receive sensor input from a vicinity of a first vehicle, and to generate sensor data representing the received sensor input; one or more processors, configured to detect a second vehicle from the sensor data; determine from the sensor data a region of a first type relative to the second vehicle and a region of a second type relative to the second vehicle; and control the first vehicle to avoid or reduce travel in regions of the first type, or to travel from a region of the first type to a region of the second type.
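The abstract leaves the two region types abstract. One common reading, given the title, is that a first-type region is the other driver's mirror blind spot. A sketch of classifying the host's position in the detected vehicle's frame; the rear-quarter angular sector and range are hypothetical numbers, not from the patent:

```python
import math

def region_type(rel_x: float, rel_y: float) -> str:
    """Classify the host's position relative to the detected vehicle
    (x forward, y left, in the detected vehicle's frame). 'first' is
    assumed to be a rear-quarter blind-spot sector; 'second' a region
    presumed visible to the other driver."""
    dist = math.hypot(rel_x, rel_y)
    angle = math.degrees(math.atan2(abs(rel_y), rel_x))  # 0 deg = dead ahead
    if dist <= 8.0 and 100.0 <= angle <= 160.0:
        return "first"   # avoid or minimize time spent here
    return "second"
```

The controller would then steer or adjust speed so the host moves from any `"first"` region into a `"second"` one.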
Vehicle radar system
A vehicle radar device includes a radar control unit, a first antenna array, a second antenna array, a first circuit board and a second circuit board. The first antenna array is communicatively connected to the radar control unit. The first antenna array includes a plurality of first transmitting elements and a plurality of first receiving elements. The second antenna array is communicatively connected to the radar control unit. The second antenna array includes a plurality of second transmitting elements and a plurality of second receiving elements. The first antenna array is a plurality of circuit board antennas and disposed on the first circuit board. The second antenna array is a plurality of circuit board antennas and disposed on the second circuit board.
SENSOR ASSEMBLY WITH LIDAR FOR AUTONOMOUS VEHICLES
A sensor assembly for autonomous vehicles includes a side mirror assembly configured to mount to a vehicle. The side mirror assembly includes a first camera having a field of view in a direction opposite a direction of forward travel of the vehicle; a second camera having a field of view in the direction of forward travel of the vehicle; and a third camera having a field of view in a direction substantially perpendicular to the direction of forward travel of the vehicle. The first camera, the second camera, and the third camera are oriented to provide, in combination with a fourth camera configured to be mounted on a roof of the vehicle, an uninterrupted camera field of view from the direction of forward travel of the vehicle to a direction opposite the direction of forward travel of the vehicle.
Target-Velocity Estimation Using Position Variance
The techniques and systems herein enable target-velocity estimation using position variance. Specifically, a plurality of detections of a target are received for respective times as the target moves relative to a host vehicle. Based on the detections, two-dimensional positions of the target relative to the host vehicle are determined for the respective times. Based on the positions of the target at the respective times, a first variance is determined for a first dimension of the positions, and a second variance is determined for a second dimension of the positions. Based on the first and second variances, an estimated velocity of the target is calculated. By basing the estimated velocity on the variances of the positions, more-accurate estimated velocities may be generated sooner, thus enabling better performance of downstream operations.
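For a target moving at constant velocity, the position-variance idea has a clean closed form: if x_i = x0 + vx * t_i, then var(x) = vx^2 * var(t), so the speed in each dimension is sqrt(var(position) / var(time)), with the sign recovered from the net displacement. A sketch under that constant-velocity, low-noise assumption (with measurement noise, the variance is inflated and the estimate is biased upward):

```python
import math
import statistics

def estimate_velocity(times, xs, ys):
    """Estimate (vx, vy) of a constant-velocity target from the
    variances of its positions: var(x) = vx**2 * var(t) implies
    |vx| = sqrt(var(x) / var(t)); the sign comes from the net
    displacement across the observation window."""
    var_t = statistics.variance(times)
    vx = math.sqrt(statistics.variance(xs) / var_t)
    vy = math.sqrt(statistics.variance(ys) / var_t)
    vx = math.copysign(vx, xs[-1] - xs[0])
    vy = math.copysign(vy, ys[-1] - ys[0])
    return vx, vy
```

Because the variance pools every detection in the window rather than differencing consecutive pairs, a usable estimate emerges after only a few detections, which is the "sooner" benefit the abstract claims.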