Patent classifications
B60W2554/4026
Predictive turning assistant
A method for assisting in turning a vehicle may include: detecting or estimating that the vehicle is about to turn, or is turning, in a certain direction; sensing a relevant portion of the environment of the vehicle to provide sensed information, wherein the relevant portion of the environment is positioned at the side of the vehicle that corresponds with the certain direction; applying an artificial intelligence process to the sensed information to (i) detect objects within the relevant portion of the environment and (ii) estimate expected movement patterns of the objects within a time frame that ends with an expected completion of the turn of the vehicle; determining, given an expected trajectory of the vehicle during the turn and the expected movement patterns of the objects, whether at least one of the objects is expected to cross the trajectory of the vehicle during the turn; and responding to an outcome of the determining.
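The crossing determination described above can be sketched as a simple geometric check: sample the vehicle's planned turn trajectory and each object's expected movement pattern on a common time grid, and flag a conflict when the object comes within a safety radius of the vehicle before the turn completes. The function name and the `safety_radius` parameter below are illustrative assumptions, not the patented method.

```python
import math

def paths_conflict(vehicle_path, object_path, safety_radius=1.5):
    """Return True if the object's predicted path crosses (comes within
    safety_radius metres of) the vehicle's turn trajectory at any shared
    timestep. Each path is a list of (t, x, y) samples; the time frame
    ends with the last sample of vehicle_path (turn completion)."""
    obj = {t: (x, y) for t, x, y in object_path}
    for t, vx, vy in vehicle_path:
        if t in obj:
            ox, oy = obj[t]
            if math.hypot(vx - ox, vy - oy) <= safety_radius:
                return True
    return False
```

A planner would run this once per detected object and, on a True outcome, respond (e.g., warn the driver or delay the turn).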
Obstacle detection in road scenes
Systems and methods for obstacle detection are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes one or more road scenes having obstacles. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
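The two-stage sample selection in this abstract can be sketched as follows: first keep the unlabeled target-domain samples that the domain discriminator places farthest from the already-annotated ones, then, among those, pick the samples with the lowest prediction scores for annotation. The dict keys and ranking details are assumptions for illustration; the patent does not specify them.

```python
def select_for_annotation(samples, k_far, k_annotate):
    """Two-stage active selection sketched from the abstract.
    Each sample is a dict with:
      'id'            - sample identifier
      'dissimilarity' - domain-discriminator distance from the existing
                        annotated target samples (higher = farther away)
      'pred_score'    - detector confidence on this sample
    Keep the k_far samples farthest from the annotated set, then return
    the k_annotate of those with the lowest prediction scores."""
    far = sorted(samples, key=lambda s: s["dissimilarity"], reverse=True)[:k_far]
    return sorted(far, key=lambda s: s["pred_score"])[:k_annotate]
```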
DETERMINING OBJECT MOBILITY PARAMETERS USING AN OBJECT SEQUENCE
A system can use semantic images, lidar images, and/or 3D bounding boxes to determine mobility parameters for objects in the semantic image. In some cases, the system can generate virtual points for an object in a semantic image and associate the virtual points with lidar points to form denser point clouds for the object. The denser point clouds can be used to estimate the mobility parameters for the object. In certain cases, the system can use semantic images, lidar images, and/or 3D bounding boxes to determine an object sequence for an object. The object sequence can indicate a location of the particular object at different times. The system can use the object sequence to estimate the mobility parameters for the object.
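One mobility parameter the object sequence supports directly is velocity: given the object's location at different times, a least-squares line fit over the sequence yields a velocity estimate. The sketch below assumes the sequence has already been reduced to per-timestep centroids; the virtual-point densification step is out of scope here.

```python
def estimate_velocity(object_sequence):
    """Estimate (vx, vy) for one tracked object from its object sequence,
    a list of (t, x, y) centroid observations at different times, via a
    least-squares line fit (slope of position over time)."""
    n = len(object_sequence)
    ts = [t for t, _, _ in object_sequence]
    t_mean = sum(ts) / n
    var_t = sum((t - t_mean) ** 2 for t in ts)
    vx = sum((t - t_mean) * x for t, x, _ in object_sequence) / var_t
    vy = sum((t - t_mean) * y for t, _, y in object_sequence) / var_t
    return vx, vy
```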
System of configuring active lighting to indicate directionality of an autonomous vehicle
Systems, apparatus and methods may be configured to implement actively-controlled light emission from a robotic vehicle. A light emitter(s) of the robotic vehicle may be configurable to indicate a direction of travel of the robotic vehicle and/or display information (e.g., a greeting, a notice, a message, a graphic, passenger/customer/client content, vehicle livery, customized livery) using one or more colors of emitted light (e.g., orange for a first direction and purple for a second direction), one or more sequences of emitted light (e.g., a moving image/graphic), or positions of light emitter(s) on the robotic vehicle (e.g., symmetrically positioned light emitters). The robotic vehicle may not have a front or a back (e.g., a trunk/a hood) and may be configured to travel bi-directionally, in a first direction or a second direction (e.g., opposite the first direction), with the direction of travel being indicated by one or more of the light emitters.
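The direction-indication scheme can be sketched as a small configuration plus an animation step: a per-direction color (the orange/purple example comes from the abstract) and a lit-emitter index that sweeps toward the direction of travel, producing the "moving image" sequence mentioned. Everything beyond the two example colors is an assumption.

```python
# Colors taken from the abstract's example; everything else is assumed.
DIRECTION_COLOR = {
    "first": (255, 165, 0),   # orange for the first direction of travel
    "second": (128, 0, 128),  # purple for the second (opposite) direction
}

def chase_index(n_emitters, frame, direction):
    """Index of the lit emitter in a strip of n_emitters for a given
    animation frame; the lit position sweeps toward the travel direction,
    so the light sequence itself indicates which way the bi-directional
    vehicle is about to move."""
    i = frame % n_emitters
    return i if direction == "first" else n_emitters - 1 - i
```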
PREDICTING NEAR-CURB DRIVING BEHAVIOR ON AUTONOMOUS VEHICLES
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for predicting near-curb driving behavior. One of the methods includes obtaining agent trajectory data for an agent in an environment, the agent trajectory data comprising a current location and current values for a predetermined set of motion parameters of the agent; processing a model input generated from the agent trajectory data using a trained machine learning model to generate a model output comprising a prediction of whether the agent will exhibit near-curb driving behavior within a predetermined timeframe, wherein an agent exhibits near-curb driving behavior when the agent operates within a particular distance of an edge of a road in the environment; and using the prediction to generate a planned path for a vehicle in the environment.
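The behavior definition in this abstract is concrete enough to express directly: an agent exhibits near-curb driving behavior when it operates within a particular distance of the road edge during the timeframe. The trained model itself is out of scope, but the labeling rule it learns to predict can be sketched as below; `curb_distance` and the 0.5 m default are hypothetical.

```python
def exhibits_near_curb_behavior(trajectory, curb_distance, threshold_m=0.5):
    """Label rule from the abstract: an agent exhibits near-curb driving
    behavior when it operates within threshold_m of the road edge.
    trajectory: list of (t, x, y); curb_distance(x, y) returns the
    distance in metres from (x, y) to the nearest road edge."""
    return any(curb_distance(x, y) <= threshold_m for _, x, y in trajectory)
```

In training, this rule labels observed agent trajectories; at runtime, the model's prediction of the label feeds the vehicle's path planner.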
System and Method for Intent Monitoring of Other Road Actors
Systems, methods, and autonomous vehicles may obtain one or more images associated with an environment surrounding an autonomous vehicle; determine, based on the one or more images, an orientation of a head worn item of protective equipment of an operator of a vehicle; determine, based on the orientation of the head worn item of protective equipment, a direction of a gaze of the operator and a time period associated with the direction of the gaze of the operator; determine, based on the direction of the gaze of the operator and the time period associated with the direction of the gaze of the operator, a predicted motion path of the vehicle; and control, based on the predicted motion path of the vehicle, at least one autonomous driving operation of the autonomous vehicle.
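The gaze-plus-dwell logic can be sketched as a small state machine over helmet-orientation estimates: if the operator holds a gaze past a yaw threshold to one side for at least a dwell time, predict a motion path to that side. The thresholds and the left/right/straight encoding are assumptions; the abstract specifies only that direction and duration of gaze feed the prediction.

```python
def predict_gaze_intent(gaze_samples, dwell_s=1.0, yaw_thresh_rad=0.35):
    """Infer operator intent from head-worn-equipment orientation.
    gaze_samples: list of (t_seconds, yaw_rad); yaw > 0 means the
    operator's head is turned left. If the gaze is held past
    yaw_thresh_rad to one side for at least dwell_s, predict a motion
    path to that side; otherwise predict straight-ahead travel."""
    side, start = None, None
    for t, yaw in gaze_samples:
        s = "left" if yaw > yaw_thresh_rad else (
            "right" if yaw < -yaw_thresh_rad else None)
        if s != side:
            side, start = s, t
        if side is not None and t - start >= dwell_s:
            return side
    return "straight"
```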
Data augmentation for detour path configuring
This application is directed to augmenting training images used for generating vehicle driving models. A computer system obtains a first image of a road, identifies within the first image a drivable area of the road, obtains an image of a traffic safety object, and determines a detour path on the drivable area. The computer system determines positions of a plurality of traffic safety objects to be placed adjacent to the detour path, and generates a second image from the first image by adaptively overlaying a respective copy of the image of the traffic safety object at each of the positions of the plurality of traffic safety objects on the drivable area within the first image. The second image is added to a corpus of training images to be used by a machine learning system to generate a model for facilitating driving a vehicle (e.g., at least partially autonomously).
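The position-determination step can be sketched as walking the detour path (a polyline on the drivable area) and emitting an evenly spaced placement point for each traffic safety object; a copy of the object's image would then be overlaid at each point. The 3 m spacing and function names are assumptions.

```python
import math

def cone_positions(detour_path, spacing_m=3.0):
    """Evenly spaced placement positions along a detour path given as a
    polyline [(x, y), ...]. A respective copy of the traffic-safety-object
    image (e.g., a cone) would be adaptively overlaid at each position."""
    positions = [detour_path[0]]
    remaining = spacing_m
    for (x0, y0), (x1, y1) in zip(detour_path, detour_path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        travelled = 0.0
        while travelled + remaining <= seg:
            travelled += remaining
            f = travelled / seg
            positions.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
            remaining = spacing_m
        remaining -= seg - travelled
    return positions
```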
VEHICLE DRIVING ASSISTANCE DEVICE AND NON-TRANSITORY STORAGE MEDIUM
A vehicle driving assistance device includes a processor. The processor is configured to execute acceleration suppression control for suppressing acceleration of a driver's vehicle in a case where a predetermined prohibition condition is not satisfied when an erroneous acceleration operation precondition is satisfied while a traveling condition is satisfied. The traveling condition is a condition for determining that the driver's vehicle is traveling. The erroneous acceleration operation precondition is a precondition for determining that an acceleration operation is erroneously performed. The acceleration operation is an operation performed by a driver of the driver's vehicle to request the acceleration of the driver's vehicle. The predetermined prohibition condition is based on a relationship between the driver's vehicle and an external environment of the driver's vehicle.
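The nested conditions in this abstract reduce to a single boolean rule, sketched below: suppression executes only while the traveling condition holds, the erroneous-acceleration-operation precondition is satisfied, and the prohibition condition is not. The function name is illustrative.

```python
def suppress_acceleration(traveling, erroneous_precondition, prohibited):
    """Condition logic from the abstract: acceleration suppression control
    executes only when the vehicle is traveling, the erroneous-acceleration
    -operation precondition is satisfied, and the prohibition condition
    (based on the relationship between the driver's vehicle and its
    external environment) is NOT satisfied."""
    return traveling and erroneous_precondition and not prohibited
```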
LATERAL GAP PLANNING FOR AUTONOMOUS VEHICLES
Aspects of the disclosure provide for controlling an autonomous vehicle. For instance, a trajectory for the autonomous vehicle to traverse in order to follow a route to a destination may be generated. A first error value for a boundary of an object, a second error value for a location of the autonomous vehicle, and a third error value for a predicted future location of the object may be received. An uncertainty value for the object may be determined by combining the first error value, the second error value, and the third error value. A lateral gap threshold for the object may be determined based on the uncertainty value. The autonomous vehicle may be controlled in an autonomous driving mode based on the lateral gap threshold for the object.
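The abstract says the three error values are combined into an uncertainty value but not how; a common engineering choice for independent errors is a root-sum-square combination, which the sketch below assumes, along with a linear widening of a base lateral gap. Both choices and all parameter names are assumptions.

```python
import math

def lateral_gap_threshold(boundary_err, localization_err, prediction_err,
                          base_gap_m=0.5, gain=1.0):
    """Combine the three error values (object boundary, vehicle
    localization, predicted future object location) into one uncertainty
    value, then widen a base lateral gap proportionally. Root-sum-square
    combination and linear widening are assumed, not specified."""
    uncertainty = math.sqrt(boundary_err**2 + localization_err**2
                            + prediction_err**2)
    return base_gap_m + gain * uncertainty
```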
METHOD AND SYSTEM FOR DEVELOPING AUTONOMOUS VEHICLE TRAINING SIMULATIONS
Methods and systems for generating vehicle motion planning model simulation scenarios are disclosed. The method receives a base simulation scenario with features of a scene through which a vehicle may travel, defines an interaction zone in the scene, generates an augmentation element that includes an object and a behavior for the object, and adds the augmentation element to the base simulation scenario at the interaction zone to yield an augmented simulation scenario. The augmented simulation scenario is applied to a vehicle motion planning model to train the model.
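The augmentation step can be sketched as a pure function over a scenario record: copy the base scenario, then attach an augmentation element (an object plus a behavior for it) at the defined interaction zone. The dict schema is a hypothetical stand-in for whatever scenario format the system actually uses.

```python
import copy

def augment_scenario(base_scenario, interaction_zone, obj, behavior):
    """Add an augmentation element (an object and a behavior for that
    object) to a base simulation scenario at a defined interaction zone,
    yielding an augmented scenario and leaving the base untouched so it
    can seed further augmentations."""
    augmented = copy.deepcopy(base_scenario)
    augmented.setdefault("elements", []).append({
        "zone": interaction_zone,
        "object": obj,
        "behavior": behavior,
    })
    return augmented
```

Each augmented scenario would then be replayed against the motion planning model as a training episode.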