Patent classifications
B60W2554/4048
TRACKING VANISHED OBJECTS FOR AUTONOMOUS VEHICLES
Aspects of the disclosure relate to methods for controlling a vehicle having an autonomous driving mode. For instance, sensor data may be received from one or more sensors of the perception system of the vehicle, the sensor data identifying characteristics of an object perceived by the perception system. When it is determined that the object is no longer being perceived by the one or more sensors of the perception system, predicted characteristics for the object may be generated based on one or more of the identified characteristics. The predicted characteristics may then be used to control the vehicle in the autonomous driving mode such that the vehicle is able to respond to the object even while the object is no longer being perceived.
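The prediction step described above could be sketched as follows. This is a minimal illustration, not the patented method: a constant-velocity motion model is assumed, and all names (`TrackedObject`, `predict_characteristics`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """Last perceived state of an object (hypothetical structure)."""
    x: float
    y: float
    vx: float
    vy: float

def predict_characteristics(obj: TrackedObject, dt: float) -> TrackedObject:
    """Propagate the last perceived state forward by dt seconds,
    assuming constant velocity, once the object is no longer perceived."""
    return TrackedObject(obj.x + obj.vx * dt,
                         obj.y + obj.vy * dt,
                         obj.vx, obj.vy)
```

The vehicle would then plan against the predicted state exactly as it would against a directly perceived one.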
Method for Controlling Vehicle and Vehicle Control Device
A method for controlling a vehicle including: calculating, based on map information that includes an installation position of a traffic light and a lane controlled by the traffic light, and on a range of the angle of view of a camera mounted on the own vehicle, an imaging-enabled area in which an image of the traffic light can be captured on the lane by the camera; determining whether or not the own vehicle is positioned in the imaging-enabled area; and, when the own vehicle is positioned in the imaging-enabled area, controlling the own vehicle in such a way that the traffic light is not shielded from the range of the angle of view of the camera by a preceding vehicle of the own vehicle.
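The core geometric test, whether the traffic light falls within the camera's angle of view and range from a given position on the lane, could be sketched like this. The function name, the flat 2-D geometry, and the maximum-range cutoff are all illustrative assumptions, not details from the patent.

```python
import math

def in_imaging_enabled_area(vehicle_xy, heading_rad, light_xy,
                            fov_rad, max_range):
    """Return True if the traffic light lies inside the camera's
    horizontal angle of view and within imaging range (2-D sketch)."""
    dx = light_xy[0] - vehicle_xy[0]
    dy = light_xy[1] - vehicle_xy[1]
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.atan2(dy, dx) - heading_rad
    # wrap the bearing to [-pi, pi] before comparing against the half-FOV
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    return abs(bearing) <= fov_rad / 2
```

Evaluating this test along the lane's centerline from the map yields the imaging-enabled area referred to in the abstract.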
BLIND SPOT ASSIST DEVICE
A blind spot assist device includes a first optical element and a second optical element. The first optical element reflects a part of incident light having a first angle of incidence at different second angles such that light is incident at the first angle on the first optical element from an outdoor view and reflected at the second angles on the first optical element. The second optical element is positioned to face the first optical element and reflects incident lights having different angles of incidence at a third angle such that lights reflected by the first optical element are incident at the second angles on the second optical element and reflected at the third angle on the second optical element toward a user.
SENSOR PERFORMANCE VALIDATION IN ADVANCED DRIVER-ASSISTANCE SYSTEM VERIFICATION
Systems and methods are provided for generating data for sensor system validation. A representative vehicle is equipped with a set of sensors positioned to provide a collective field of view that defines a set of sensor locations and encompasses the field of view of a sensor positioned at any of the set of sensor locations; the data captured by this set of sensors serves as a set of master data. The set of sensor locations includes a sensor location at which no sensor of the set of sensors is placed. The representative vehicle is driven for a distance required for validation of a sensor system to provide master data representing the entire distance required for validation.
VEHICLE MONITORING SYSTEM
A monitoring system for a combination vehicle comprises at least one image capture device mounted on a tractor, which has a trailer mounted control system within its field of view. The trailer mounted control system has a visual indicator. A controller is associated with the at least one image capture device. The controller captures images of the visual indicator, determines if the visual indicator meets a predetermined event condition and provides notification to at least one of a driver of the tractor and a remote fleet operator in response to the visual indicator meeting the predetermined event condition.
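The controller's capture-check-notify loop could be sketched as below. Everything here is illustrative: the patent does not specify how the event condition is evaluated, so it is passed in as a predicate.

```python
def monitor_indicator(frames, meets_event_condition, notify):
    """For each captured image of the trailer's visual indicator,
    check the predetermined event condition and, when it is met,
    notify the driver and/or remote fleet operator (sketch)."""
    events = []
    for frame in frames:
        if meets_event_condition(frame):
            notify(frame)
            events.append(frame)
    return events
```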
Road User Categorization Through Monitoring
Categorizing driving behaviors of other road users includes maintaining a first history of first lateral-offset values of a road user with respect to a center line of a lane of a road; determining a first pattern based on the first history of the first lateral-offset values; determining a driving behavior of the road user based on the first pattern; and autonomously performing, by a host vehicle, a driving maneuver based on the driving behavior of the road user. The first history can be maintained for a predetermined period of time. An apparatus includes a processor that is configured to track a trajectory history of a road user; determine, based on the trajectory history, a driving behavior of the road user; and transmit a notification of the driving behavior.
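The history-then-pattern pipeline above could be sketched as follows. The bounded window, the use of standard deviation as the "pattern", and the 0.5 m threshold are all assumptions made for illustration; the patent does not specify them.

```python
from collections import deque
import statistics

class LateralOffsetMonitor:
    """Maintains a first history of lateral-offset values of a road
    user with respect to the lane center line (sketch)."""

    def __init__(self, window: int = 50):
        # bounded history, standing in for "a predetermined period of time"
        self.history = deque(maxlen=window)

    def add(self, offset_m: float) -> None:
        self.history.append(offset_m)

    def behavior(self) -> str:
        """Classify driving behavior from the spread of lateral offsets."""
        if len(self.history) < 2:
            return "unknown"
        spread = statistics.pstdev(self.history)
        return "erratic" if spread > 0.5 else "stable"
```

A host vehicle could, for example, increase its following distance when `behavior()` returns `"erratic"`.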
Avoidance of obscured roadway obstacles
The systems and methods described herein disclose detecting obstacles in a vehicular environment using host vehicle input and associated trust levels. As described here, measured vehicles, either manual or autonomous, that detect an obstacle in the environment will operate to respond to the obstacle. As such, those movements can be used to determine if an obstacle exists in the environment, even if the obstacle cannot be detected directly. The systems and methods can include a host vehicle receiving prediction data about an evasive behavior from one or more measured vehicles in a vehicular environment. A trust level can then be established for the measured vehicles. An obscured obstacle can be determined using the evasive behavior and the trust level which can then be mapped in the vehicular environment. A guidance input can then be created for the host vehicle using the obscured obstacle and the trust level.
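The trust-weighted inference could be sketched as accumulating evidence per location. The additive scoring, the threshold, and all names are illustrative assumptions, not the patented formulation.

```python
def infer_obscured_obstacle(evasive_reports, trust_levels, threshold=1.0):
    """Sum trust-weighted evasive-behavior evidence per location.

    evasive_reports: list of (vehicle_id, location) pairs, one per
    observed evasive maneuver; trust_levels: vehicle_id -> trust in [0, 1].
    Returns locations whose accumulated evidence crosses the threshold,
    i.e. likely obscured obstacles (sketch)."""
    evidence = {}
    for vehicle_id, location in evasive_reports:
        evidence[location] = (evidence.get(location, 0.0)
                              + trust_levels.get(vehicle_id, 0.0))
    return [loc for loc, score in evidence.items() if score >= threshold]
```

The returned locations would then be mapped in the vehicular environment and fed into the host vehicle's guidance input.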
REMOTE OPERATION DEVICE, REMOTE DRIVING SYSTEM, REMOTE OPERATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING A REMOTE OPERATION PROGRAM
A remote operation device that drives a vehicle by remote operation from the exterior of the vehicle, including: an image acquiring section that acquires an image of the surroundings of the vehicle from the vehicle; a first display portion that displays the image of the surroundings; and second display portions that are provided at peripheral visual field regions, which are other than a central visual field region of the first display portion, at at least one of an upper side or a lower side of the first display portion, and that display objects that move from the central visual field region side toward transverse direction end portions.
GAZE AND AWARENESS PREDICTION USING A NEURAL NETWORK MODEL
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for predicting gaze and awareness using a neural network model. One of the methods includes obtaining sensor data (i) that is captured by one or more sensors of an autonomous vehicle and (ii) that characterizes an agent that is in a vicinity of the autonomous vehicle in an environment at a current time point. The sensor data is processed using a gaze prediction neural network to generate a gaze prediction that predicts a gaze of the agent at the current time point. The gaze prediction neural network includes an embedding subnetwork that is configured to process the sensor data to generate an embedding characterizing the agent, and a gaze subnetwork that is configured to process the embedding to generate the gaze prediction.
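The two-stage architecture, an embedding subnetwork feeding a gaze subnetwork, could be sketched with toy multilayer perceptrons. The layer sizes, random weights, and plain NumPy implementation are assumptions for illustration only; the patent's actual network is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    """One hidden ReLU layer (toy stand-in for a subnetwork)."""
    return np.maximum(x @ w1, 0.0) @ w2

# Embedding subnetwork: sensor features -> embedding characterizing the agent.
w_e1, w_e2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
# Gaze subnetwork: embedding -> 2-D gaze-direction prediction.
w_g1, w_g2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 2))

def predict_gaze(sensor_features):
    embedding = mlp(sensor_features, w_e1, w_e2)   # characterizes the agent
    return mlp(embedding, w_g1, w_g2)              # gaze prediction

gaze = predict_gaze(rng.normal(size=(1, 8)))
```

In the actual system the sensor features would come from the autonomous vehicle's sensors, and both subnetworks would be trained jointly on labeled gaze data.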
DETERMINING OBJECT CHARACTERISTICS USING UNOBSTRUCTED SENSOR EMISSIONS
Techniques for determining occupancy using unobstructed sensor emissions. For instance, a vehicle may receive sensor data from one or more sensors. The sensor data may represent at least locations of points within an environment. Using the sensor data, the vehicle may determine areas within the environment that are obstructed by objects (e.g., locations where objects are located). The vehicle may also use the sensor data to determine areas within the environment that are unobstructed by objects (e.g., locations where objects are not located). In some examples, the vehicle determines the unobstructed areas as including areas that are between the vehicle and the identified objects. This is because sensor emissions from the sensor(s) passed through these areas and then reflected off of objects located farther distances from the vehicle. The vehicle may then generate a map indicating at least the obstructed areas and the unobstructed areas within the environment.
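The map-building step above can be sketched as ray tracing over a grid: every cell a sensor emission passed through before reflecting is unobstructed, and the cell containing the reflection point is obstructed. The dictionary grid, ~1 m cell size, and linear stepping are simplifying assumptions for illustration.

```python
import math

def update_occupancy(grid, origin, hit):
    """Update an occupancy map from one sensor emission (2-D sketch).

    grid: dict mapping integer (x, y) cells to 'free' or 'occupied'.
    Cells along the emission path are unobstructed; the cell containing
    the reflecting point is obstructed."""
    ox, oy = origin
    hx, hy = hit
    dist = math.hypot(hx - ox, hy - oy)
    for i in range(int(dist)):  # step in ~1-cell increments
        t = i / max(dist, 1e-9)
        cell = (int(ox + (hx - ox) * t), int(oy + (hy - oy) * t))
        if grid.get(cell) != "occupied":  # occupied evidence takes priority
            grid[cell] = "free"
    grid[(int(hx), int(hy))] = "occupied"
    return grid
```

Accumulating such updates over all emissions yields the map of obstructed and unobstructed areas described in the abstract.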