Patent classifications
B60W2554/4029
DETERMINING OBJECT MOBILITY PARAMETERS USING AN OBJECT SEQUENCE
A system can use semantic images, lidar images, and/or 3D bounding boxes to determine mobility parameters for objects in the semantic image. In some cases, the system can generate virtual points for an object in a semantic image and associate the virtual points with lidar points to form denser point clouds for the object. The denser point clouds can be used to estimate the mobility parameters for the object. In certain cases, the system can use semantic images, lidar images, and/or 3D bounding boxes to determine an object sequence for an object. The object sequence can indicate a location of the object at different times. The system can use the object sequence to estimate the mobility parameters for the object.
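As a rough illustration of the densification idea described above (not the patented implementation), a virtual point sampled inside an object's segmentation mask can borrow the depth of its nearest projected lidar return. The point formats and the nearest-neighbor association rule below are assumptions:

```python
import math

def densify_object_points(lidar_points, virtual_points):
    """Assign each 2D virtual point the depth of its nearest projected
    lidar point, yielding a denser 3D point set for the object.

    lidar_points: list of (u, v, depth) -- lidar returns projected into
        the semantic image (hypothetical format).
    virtual_points: list of (u, v) -- points sampled inside the object's
        segmentation mask.
    """
    dense = []
    for (u, v) in virtual_points:
        # Nearest lidar return in image space supplies the depth estimate.
        _, _, depth = min(
            lidar_points,
            key=lambda p: math.hypot(p[0] - u, p[1] - v),
        )
        dense.append((u, v, depth))
    # Original lidar points plus depth-completed virtual points.
    return lidar_points + dense

lidar = [(10, 10, 5.0), (12, 11, 5.2)]
virtual = [(11, 10), (13, 12)]
dense = densify_object_points(lidar, virtual)
# -> [(10, 10, 5.0), (12, 11, 5.2), (11, 10, 5.0), (13, 12, 5.2)]
```

The denser cloud gives downstream estimators more samples per object than the raw lidar returns alone.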
METHOD FOR OPERATING A DRIVER ASSISTANCE SYSTEM
A method for operating a vehicle includes detecting a road user in a predetermined portion of the vehicle's environment, detecting a start-up attempt by the vehicle driver, blocking a vehicle start-up in response to the detected road user and the detected start-up attempt, and, when the vehicle start-up has been blocked, altering the vehicle control such that driver comfort is reduced in comparison with unaltered vehicle control.
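The decision logic in this abstract reduces to a small conjunction; the sketch below is a minimal reading of it, with the "reduced comfort" response modeled as a hypothetical mode flag:

```python
def startup_decision(road_user_detected, startup_attempted):
    """Block vehicle start-up when a road user is in the monitored zone
    AND the driver attempts to start; the reduced-comfort control mode
    signals the intervention to the driver (names are illustrative)."""
    block = road_user_detected and startup_attempted
    comfort_mode = "reduced" if block else "normal"
    return block, comfort_mode
```

A start-up attempt with no road user present, or a detected road user with no attempt, leaves the vehicle unblocked and comfort unaltered.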
GAZE AND AWARENESS PREDICTION USING A NEURAL NETWORK MODEL
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for predicting gaze and awareness using a neural network model. One of the methods includes obtaining sensor data (i) that is captured by one or more sensors of an autonomous vehicle and (ii) that characterizes an agent that is in a vicinity of the autonomous vehicle in an environment at a current time point. The sensor data is processed using a gaze prediction neural network to generate a gaze prediction that predicts a gaze of the agent at the current time point. The gaze prediction neural network includes an embedding subnetwork that is configured to process the sensor data to generate an embedding characterizing the agent, and a gaze subnetwork that is configured to process the embedding to generate the gaze prediction.
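The described architecture composes two stages: an embedding subnetwork over sensor data, then a gaze subnetwork over the embedding. The patent's stages are neural networks; the plain-Python sketch below shows only that compositional structure, with toy stand-in functions:

```python
class GazePredictor:
    """Two-stage pipeline mirroring the described architecture:
    an embedding subnetwork followed by a gaze subnetwork."""

    def __init__(self, embed_fn, gaze_fn):
        self.embed_fn = embed_fn  # sensor data -> embedding
        self.gaze_fn = gaze_fn    # embedding -> gaze prediction

    def predict(self, sensor_data):
        embedding = self.embed_fn(sensor_data)
        return self.gaze_fn(embedding)

# Toy stand-ins for the two subnetworks (illustrative only):
embed = lambda features: sum(features) / len(features)
gaze = lambda e: "toward_vehicle" if e > 0 else "away_from_vehicle"
predictor = GazePredictor(embed, gaze)
```

In the patent both stages would be learned jointly; here the split merely shows where the agent embedding sits between raw sensor data and the gaze output.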
System of configuring active lighting to indicate directionality of an autonomous vehicle
Systems, apparatus and methods may be configured to implement actively-controlled light emission from a robotic vehicle. A light emitter(s) of the robotic vehicle may be configurable to indicate a direction of travel of the robotic vehicle and/or display information (e.g., a greeting, a notice, a message, a graphic, passenger/customer/client content, vehicle livery, customized livery) using one or more colors of emitted light (e.g., orange for a first direction and purple for a second direction), one or more sequences of emitted light (e.g., a moving image/graphic), or positions of light emitter(s) on the robotic vehicle (e.g., symmetrically positioned light emitters). The robotic vehicle may not have a front or a back (e.g., a trunk/a hood) and may be configured to travel bi-directionally, in a first direction or a second direction (e.g., opposite the first direction), with the direction of travel being indicated by one or more of the light emitters.
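The abstract's own example, one emitted color per direction of travel for a bi-directional vehicle, can be stated directly; the mapping keys below are assumed labels:

```python
# Colors taken from the abstract's example; direction labels are assumed.
DIRECTION_COLORS = {"first": "orange", "second": "purple"}

def indicate_direction(direction):
    """Select the emitted light color that signals the vehicle's
    current direction of travel."""
    return DIRECTION_COLORS[direction]
```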
Autonomous machine motion planning in a dynamic environment
An autonomous robot system to enable automated movement of goods and materials in a dynamic environment including one or more dynamic objects. The autonomous robot system includes an autonomous ground vehicle (AGV) including a vehicle management system. The vehicle management system provides real time resource planning and path optimization to enable the AGV to operate safely and efficiently alongside humans in a dynamic environment. The vehicle management system includes one or more processing devices to execute a moving object trajectory prediction module to predict a trajectory of a dynamic or moving object in a shared environment.
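The patent leaves the prediction module unspecified; a constant-velocity extrapolation is the simplest stand-in for such a moving-object trajectory predictor, sketched here under that assumption:

```python
def predict_trajectory(positions, dt, horizon_steps):
    """Constant-velocity extrapolation: a minimal stand-in for a
    moving-object trajectory prediction module.

    positions: observed (x, y) samples, one every dt seconds.
    Returns the next horizon_steps predicted (x, y) positions.
    """
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # velocity from last two samples
    return [(x1 + vx * dt * k, y1 + vy * dt * k)
            for k in range(1, horizon_steps + 1)]
```

A real vehicle management system would replace this with a learned or interaction-aware model, but the interface (history in, future positions out) is the same.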
Vehicle Action Determining Method and Vehicle Action Determining Device
A method for determining a vehicle action includes: by a controller that acquires travel situation information of a road on which a host vehicle travels and determines a driving action from the travel situation information, setting at least one control determining point, at which whether to run or stop the host vehicle is determined, on a first route on which the host vehicle travels; and determining whether to run or stop the host vehicle at the control determining point before the host vehicle reaches that point. The controller determines whether or not the host vehicle enters, on the first route, a road on which another vehicle or a pedestrian travels or walks with priority over the host vehicle; where it is determined that the host vehicle enters such a road, the controller sets the control determining points more densely.
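The densification rule can be sketched as a spacing choice along the route; the spacing values below are illustrative, not taken from the application:

```python
def control_point_spacing(route_length, enters_priority_road,
                          base_spacing=20.0, dense_spacing=5.0):
    """Place run/stop control determining points along the route,
    using a tighter spacing on routes where the host vehicle enters
    a road whose traffic has priority (spacing values are assumed)."""
    spacing = dense_spacing if enters_priority_road else base_spacing
    points, s = [], spacing
    while s < route_length:
        points.append(s)  # distance along the route, in meters
        s += spacing
    return points
```

On a 100 m route this yields 4 decision points normally and 19 when a priority road is entered, matching the "set more densely" behavior.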
SYSTEMS AND METHODS OF ASSISTING VEHICLE NAVIGATION
Systems and methods for assisting navigation of a vehicle are disclosed. In one embodiment, a method of assisting navigation of a vehicle includes receiving navigational data relating to an intended route of the vehicle, receiving object data relating to at least one external object detected within a vicinity of a current position of the vehicle, determining whether the at least one external object affects an ability of the vehicle to proceed along the intended route, and generating at least one instruction relating to the ability of the vehicle to proceed along the intended route.
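One plausible reading of "determining whether the external object affects the ability to proceed" is a clearance check against the intended route; the corridor width below is an assumed parameter:

```python
import math

def object_blocks_route(route_points, obj_xy, clearance=1.5):
    """Does a detected object lie within a clearance corridor of the
    intended route? route_points is the route sampled as (x, y) points;
    clearance (meters) is an illustrative value."""
    ox, oy = obj_xy
    return any(math.hypot(ox - x, oy - y) <= clearance
               for x, y in route_points)

def navigation_instruction(route_points, obj_xy):
    """Generate an instruction from the blocking determination."""
    return "stop_and_replan" if object_blocks_route(route_points, obj_xy) else "proceed"
```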
PREDICTING NEAR-CURB DRIVING BEHAVIOR ON AUTONOMOUS VEHICLES
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for predicting near-curb driving behavior. One of the methods includes obtaining agent trajectory data for an agent in an environment, the agent trajectory data comprising a current location and current values for a predetermined set of motion parameters of the agent; processing a model input generated from the agent trajectory data using a trained machine learning model to generate a model output comprising a prediction of whether the agent will exhibit near-curb driving behavior within a predetermined timeframe, wherein an agent exhibits near-curb driving behavior when the agent operates within a particular distance of an edge of a road in the environment; and using the prediction to generate a planned path for a vehicle in the environment.
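The abstract defines the label itself: an agent exhibits near-curb behavior when it operates within a particular distance of the road edge. That definition, with the road edge as a sampled polyline and an assumed threshold, is:

```python
import math

def is_near_curb(agent_xy, road_edge_points, threshold=0.5):
    """True when the agent is within `threshold` meters of the road
    edge, sampled as (x, y) points (threshold is an assumed value,
    not one specified in the application)."""
    ax, ay = agent_xy
    d = min(math.hypot(ax - ex, ay - ey) for ex, ey in road_edge_points)
    return d <= threshold
```

The patent's contribution is predicting this label ahead of time with a trained model; the check above only states what the model is trained to predict.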
Driver assistance system and method of controlling the same
A driver assistance system includes a detector configured to detect pedestrians or obstacles in a front detection area and a rear detection area of a vehicle; an accelerator pedal sensor configured to detect a position of an accelerator pedal of the vehicle; and a controller configured to selectively activate the front detection area and the rear detection area according to a gear state of the vehicle, and, when there are pedestrians or obstacles in the activated detection area, to recognize an accelerator pedal change amount from the accelerator pedal position detected through the accelerator pedal sensor, to determine whether emergency braking of the vehicle is necessary based on the recognized accelerator pedal change amount, and, when the emergency braking is necessary, to perform the emergency braking for the vehicle.
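The controller's logic splits into two decisions: which detection area the gear state activates, and whether the pedal change amount warrants braking. A minimal sketch, with the gear labels and pedal threshold as assumptions:

```python
def select_detection_area(gear):
    """Gear state chooses the active detection area: rear when
    reversing, front otherwise (a simplified reading)."""
    return "rear" if gear == "R" else "front"

def needs_emergency_braking(obstacle_in_active_area, pedal_change,
                            threshold=0.3):
    """Brake when an obstacle is in the active area and the accelerator
    pedal change amount suggests pedal misapplication (threshold is
    illustrative; pedal_change is a 0..1 position delta)."""
    return obstacle_in_active_area and pedal_change >= threshold
```

A sudden pedal press with a pedestrian behind the reversing vehicle triggers braking; a gentle press, or a clear detection area, does not.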
Data augmentation for detour path configuring
This application is directed to augmenting training images used for generating vehicle driving models. A computer system obtains a first image of a road, identifies within the first image a drivable area of the road, obtains an image of a traffic safety object, and determines a detour path on the drivable area. The computer system determines positions of a plurality of traffic safety objects to be placed adjacent to the detour path, and generates a second image from the first image by adaptively overlaying a respective copy of the image of the traffic safety object at each of the positions of the plurality of traffic safety objects on the drivable area within the first image. The second image is added to a corpus of training images to be used by a machine learning system to generate a model for facilitating driving a vehicle (e.g., at least partially autonomously).
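The position-determination step can be sketched as sampling points along the detour path with a lateral offset for the safety objects; the offset and step values are assumed augmentation parameters, not values from the application:

```python
def cone_positions(path, offset=1.0, step=2):
    """Place one traffic-safety-object copy at every `step`-th detour
    path point, offset laterally so the objects sit adjacent to the
    path rather than on it (offset/step are illustrative)."""
    positions = []
    for i in range(0, len(path), step):
        x, y = path[i]
        positions.append((x, y + offset))  # simple constant lateral offset
    return positions
```

Each returned position is where a copy of the safety-object image would be overlaid onto the drivable area of the first image.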