Patent classifications
B60W2554/4029
SMART CAR
Smart car operations are detailed, including capturing a point cloud from a vehicle street view and converting the point cloud to a 3D model; applying a trained neural network to detect street signs, cross walks, obstacles, or bike lanes; and generating driving recommendations based on driver behavior parameters by comparing the driver behavior parameters with those of one or more drivers having substantially similar behavior parameters.
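The similar-driver comparison above could be sketched as a nearest-neighbor lookup over behavior parameter vectors. This is a minimal, hypothetical illustration: the parameter names, the Euclidean distance metric, and the similarity threshold are assumptions, not details from the disclosure.

```python
import math

def similar_drivers(target, fleet, threshold=0.5):
    """Return driver IDs whose behavior parameters are within a Euclidean
    distance threshold of the target's parameters (a hypothetical notion
    of 'substantially similar')."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [did for did, params in fleet.items() if dist(target, params) <= threshold]

# Toy behavior parameters: (mean speed over limit, hard-brake rate, following gap)
fleet = {
    "d1": (2.0, 0.10, 1.8),
    "d2": (9.0, 0.70, 0.9),
    "d3": (1.7, 0.20, 2.0),
}
print(similar_drivers((2.0, 0.15, 1.9), fleet))  # ['d1', 'd3']
```

Recommendations would then be aggregated from whatever the matched drivers did in comparable situations.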
DRIVER BEHAVIOR RISK ASSESSMENT AND PEDESTRIAN AWARENESS
Driver behavior risk assessment and pedestrian awareness may include receiving an input stream of images of an environment including one or more objects within the environment, estimating an intention of an ego vehicle based on the input stream of images and a temporal recurrent network (TRN), generating a scene representation based on the input stream of images and a graph neural network (GNN), generating a prediction of a situation based on the scene representation and the intention of the ego vehicle, and generating an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.
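The dataflow of that pipeline can be sketched with plain-Python stand-ins. Every function below is a toy placeholder: `estimate_intention` stands in for the TRN, `build_scene` for the GNN, and the "conflict" heuristic is an assumption for illustration only.

```python
def estimate_intention(frames):
    # TRN stand-in: compare recent ego speeds to guess "stop" vs "continue".
    recent = [f["ego_speed"] for f in frames[-3:]]
    return "stop" if recent[-1] < recent[0] else "continue"

def build_scene(frames):
    # GNN stand-in: a fully connected graph over objects in the latest frame.
    objs = frames[-1]["objects"]
    return {"nodes": objs, "edges": [(a, b) for a in objs for b in objs if a < b]}

def predict_situation(scene, intention):
    pedestrians = [n for n in scene["nodes"] if n.startswith("ped")]
    return "conflict" if pedestrians and intention == "continue" else "clear"

def decide_action(situation, scene):
    # "Influenced": the ego vehicle must change behavior because of the situation.
    return "influenced" if situation == "conflict" else "non-influenced"

frames = [
    {"ego_speed": 8.0, "objects": ["car1"]},
    {"ego_speed": 8.2, "objects": ["car1", "ped1"]},
    {"ego_speed": 8.5, "objects": ["car1", "ped1"]},
]
scene = build_scene(frames)
intention = estimate_intention(frames)
print(decide_action(predict_situation(scene, intention), scene))  # influenced
```

The point is the staged structure (intention → scene → situation → action), not the stand-in heuristics.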
Navigation with Drivable Area Detection
Enclosed are embodiments for navigation with drivable area detection. In an embodiment, a method comprises: receiving a point cloud from a depth sensor; receiving image data from a camera; predicting at least one label indicating a drivable area by applying machine learning to the image data; labeling the point cloud using the at least one label; obtaining odometry information; generating a drivable area by registering the labeled point cloud and odometry information to a reference coordinate system; and controlling a vehicle to drive within the drivable area.
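The labeling-and-registration steps can be sketched in a few lines. This is a simplified illustration under stated assumptions: `project` is a hypothetical camera projection, the odometry pose is reduced to a 2D translation plus yaw, and the image labels are a toy two-pixel map.

```python
import math

def label_points(points, image_labels, project):
    """Attach a drivable/non-drivable label to each 3D point by projecting
    it into the image (project() is a hypothetical camera model)."""
    return [(p, image_labels[project(p)]) for p in points]

def register(labeled_points, pose):
    """Transform labeled points into the reference coordinate system using
    an odometry pose (2D translation + yaw, for brevity)."""
    tx, ty, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [((c * x - s * y + tx, s * x + c * y + ty, z), label)
            for (x, y, z), label in labeled_points]

# Toy scene: a 1x2 "image" where column 0 is drivable.
image_labels = {(0, 0): "drivable", (0, 1): "not_drivable"}
project = lambda p: (0, 0 if p[0] < 1.0 else 1)   # hypothetical projection
points = [(0.5, 0.0, 0.0), (2.0, 0.0, 0.0)]

labeled = label_points(points, image_labels, project)
registered = register(labeled, pose=(10.0, 0.0, 0.0))
drivable = [p for p, lab in registered if lab == "drivable"]
print(drivable)  # [(10.5, 0.0, 0.0)]
```

The controller would then constrain the planned trajectory to the registered drivable region.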
ADAPTIVE APERTURE SIZE AND SHAPE BY ALGORITHM CONTROL
An optical system, and method related thereto, includes a camera configured to capture images. The camera has an adaptive aperture plane configured to change both an aperture size and an aperture shape in response to an aperture signal. The camera also includes a first polarized surface and a second polarized surface positioned relative to the adaptive aperture plane, such that light strikes the first polarized surface, then the adaptive aperture plane, and then the second polarized surface. First and second lenses may be located on opposite sides of the adaptive aperture plane. An image sensor is positioned beyond the second polarized surface and configured to output image signals, and a processor is configured to execute image perception algorithms based on the image signals. The image perception algorithms alter the aperture size and the aperture shape by sending an aperture signal from the processor to the camera for subsequent captured images.
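The perception-driven feedback loop can be sketched as a per-frame policy: perception metrics from one frame produce the aperture signal for the next. The metrics, thresholds, and size/shape choices below are invented for illustration and are not taken from the disclosure.

```python
def choose_aperture(mean_brightness, blur_score):
    """Hypothetical policy: open up in dim scenes, stop down when the
    scene is bright but blurred, otherwise use a mid-size aperture."""
    if mean_brightness < 0.3:
        return {"size": "large", "shape": "circular"}
    if blur_score > 0.5:
        return {"size": "small", "shape": "circular"}
    return {"size": "medium", "shape": "hexagonal"}

# Closed loop: each frame's perception result sets the aperture signal
# for subsequent captures. Frames are (mean_brightness, blur_score).
frames = [(0.2, 0.1), (0.8, 0.7), (0.6, 0.2)]
signals = [choose_aperture(b, blur) for b, blur in frames]
print([s["size"] for s in signals])  # ['large', 'small', 'medium']
```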
Method for Determining a Trajectory for Controlling a Vehicle
The present disclosure relates to a method comprising the following steps: receiving, in a control module of a vehicle computer, sensor data generated by a sensor system; inputting the sensor data into a safety algorithm to detect safety-relevant objects; inputting the sensor data into a comfort algorithm to detect comfort-relevant objects; estimating future states of the objects using an environment model which represents the environment of the vehicle and in which the objects are stored and tracked over time; calculating a safety trajectory taking into account safety rules and a comfort trajectory taking into account comfort rules based on the estimated future states of the detected objects; using the comfort trajectory to control the vehicle if the comfort trajectory satisfies the safety rules; and using the safety trajectory to control the vehicle if the comfort trajectory does not satisfy the safety rules.
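The final selection step reduces to a simple rule check. The sketch below is a minimal illustration: trajectories are toy (time, clearance) sample lists and `min_clearance` is a hypothetical safety rule, not one from the disclosure.

```python
def select_trajectory(comfort_traj, safety_traj, safety_rules):
    """Use the comfort trajectory only if it satisfies every safety rule;
    otherwise fall back to the safety trajectory."""
    if all(rule(comfort_traj) for rule in safety_rules):
        return comfort_traj
    return safety_traj

# Toy trajectories: lists of (time, distance_to_nearest_object) samples.
comfort = [(0.0, 5.0), (1.0, 1.2)]
safety = [(0.0, 5.0), (1.0, 3.0)]
min_clearance = lambda traj: min(d for _, d in traj) >= 2.0  # hypothetical rule

chosen = select_trajectory(comfort, safety, [min_clearance])
print(chosen is safety)  # True: the comfort trajectory gets too close
```

Keeping the safety trajectory as an always-valid fallback is what lets the comfort algorithm be more aggressive without compromising the safety guarantee.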
Collision zone detection for vehicles
Techniques and methods for determining collision zone regions. For instance, a vehicle may determine a trajectory of the vehicle and a trajectory of an agent, such as a pedestrian. The vehicle may then determine one or more contextual factors. In some examples, the one or more contextual factors are associated with a location of the agent with respect to a crosswalk, a location of the vehicle with respect to the crosswalk, a state of the crosswalk, and/or the like. The vehicle may then determine the region using the trajectory of the vehicle, the trajectory of the agent, and the one or more contextual factors. Additionally, using a time buffer value and a distance buffer value associated with the region, the vehicle may determine whether to yield to the agent within the region.
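The yield decision using the two buffer values can be illustrated as follows. How the buffers actually combine is not specified in the abstract; the rule below (temporal overlap OR small remaining gap) is an assumption for illustration.

```python
def should_yield(vehicle_eta, agent_eta, time_buffer, vehicle_gap, distance_buffer):
    """Yield if the vehicle and agent would reach the region within
    time_buffer seconds of each other, or if the vehicle's remaining gap
    to the region is already under distance_buffer (hypothetical rule)."""
    temporal_conflict = abs(vehicle_eta - agent_eta) <= time_buffer
    too_close = vehicle_gap <= distance_buffer
    return temporal_conflict or too_close

# Vehicle and pedestrian reach the crosswalk region 0.5 s apart.
print(should_yield(vehicle_eta=2.0, agent_eta=2.5, time_buffer=1.0,
                   vehicle_gap=8.0, distance_buffer=3.0))  # True
```

The buffers make the decision conservative: contextual factors (crosswalk state, agent position) would typically be used to widen or shrink them.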
Systems and methods for increasing the safety of voice conversations between drivers and remote parties
A system for increasing the safety of voice conversations between drivers and remote parties is shown. The system includes an in-vehicle subsystem and a remote subsystem. The system includes a plurality of sensors which are configured to generate monitoring data. The system includes a computing device, which may be distributed between the subsystems and is configured to calculate a risk level as a function of the monitoring data. The computing device may engage an automatic safety response as a function of the risk level, which may include suspending or terminating ongoing conversations between the parties, together with notification about the status of the communication channel. The safety response may be communicated to the driver by generating an alert. The in-vehicle and remote subsystems communicate using a wireless connection and collaborate in engaging the automatic safety response and communicating any alerts to the driver and remote party using notifications.
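The "risk level as a function of the monitoring data" step might look like a weighted score mapped to a graded response. The monitoring signals, weights, and thresholds below are all invented for illustration; the abstract does not specify them.

```python
def risk_level(monitoring):
    """Hypothetical weighted risk score in [0, 1] from monitoring data."""
    weights = {"traffic_density": 0.4, "driver_load": 0.4, "weather": 0.2}
    return sum(weights[k] * monitoring[k] for k in weights)

def safety_response(level, suspend_at=0.5, terminate_at=0.8):
    """Graded automatic response: do nothing, suspend, or terminate."""
    if level >= terminate_at:
        return "terminate_call"
    if level >= suspend_at:
        return "suspend_call"
    return "none"

m = {"traffic_density": 0.9, "driver_load": 0.7, "weather": 0.5}
level = risk_level(m)            # 0.4*0.9 + 0.4*0.7 + 0.2*0.5 = 0.74
print(safety_response(level))    # suspend_call
```

In the described system, both subsystems would see the same response so the remote party gets a status notification rather than an unexplained dropped call.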
ASSISTANCE METHOD AND ASSISTANCE SYSTEM AND ASSISTANCE DEVICE USING ASSISTANCE METHOD THAT EXECUTE PROCESSING RELATING TO A BEHAVIOR MODEL
A driving assistance device executes processing relating to a behavior model of a vehicle. Detected information from the vehicle is input to a detected information inputter. An acquirer derives at least one of a travel difficulty level of the vehicle, a wakefulness level of a driver, and a driving proficiency level of the driver on the basis of the detected information that is input to the detected information inputter. A determiner determines whether or not to execute processing on the basis of at least one information item derived by the acquirer. If the determiner has made a determination to execute the processing, a processor executes the processing relating to the behavior model. The processor does not execute the processing relating to the behavior model if the determiner has made a determination not to execute the processing.
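The determiner's gating role can be sketched as a predicate over whichever derived levels are available. The veto thresholds below are illustrative assumptions, not values from the disclosure.

```python
def should_process(travel_difficulty=None, wakefulness=None, proficiency=None):
    """Determiner stand-in: run the behavior-model processing only when at
    least one derived level is available and none of them vetoes it
    (thresholds are hypothetical)."""
    levels = [travel_difficulty, wakefulness, proficiency]
    if all(v is None for v in levels):
        return False          # nothing derived, nothing to decide on
    if wakefulness is not None and wakefulness < 0.3:
        return False          # driver too drowsy for model-based assistance
    if travel_difficulty is not None and travel_difficulty > 0.9:
        return False          # road situation too difficult
    return True

print(should_process(travel_difficulty=0.4, wakefulness=0.8))  # True
print(should_process(wakefulness=0.2))                         # False
```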
Planning stopping locations for autonomous vehicles
Aspects of the disclosure relate to generating a speed plan for an autonomous vehicle. As an example, the vehicle is maneuvered in an autonomous driving mode along a route using pre-stored map information. This information identifies a plurality of keep clear regions where the vehicle should not stop but can drive through in the autonomous driving mode. Each keep clear region of the plurality of keep clear regions is associated with a priority value. A subset of the plurality of keep clear regions is identified based on the route. A speed plan for stopping the vehicle is generated based on the priority values associated with the keep clear regions of the subset. The speed plan identifies a location for stopping the vehicle. The speed plan is used to stop the vehicle in the location.
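A stopping-location search over keep-clear regions might look like the sketch below. The interval representation of regions along the route and the priority cutoff are assumptions made for illustration.

```python
def pick_stop_location(route_positions, keep_clear, max_priority=0):
    """Choose the candidate stopping position farthest along the route that
    is not inside any keep-clear region whose priority exceeds max_priority.
    keep_clear: list of (start, end, priority) intervals along the route."""
    def blocked(pos):
        return any(start <= pos <= end and pri > max_priority
                   for start, end, pri in keep_clear)
    candidates = [p for p in route_positions if not blocked(p)]
    return max(candidates) if candidates else None

# Two keep-clear regions on the route, e.g. an intersection and a driveway.
regions = [(10.0, 14.0, 2), (20.0, 24.0, 1)]
stops = [8.0, 12.0, 22.0]   # candidate positions before the stop point
print(pick_stop_location(stops, regions))  # 8.0
```

The speed plan would then decelerate the vehicle to rest at the returned position; relaxing `max_priority` lets the planner stop in lower-priority regions when no other option exists.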
DRIVING ASSISTANCE APPARATUS
A driving assistance apparatus includes a controller programmed to perform a deceleration assistance process of assisting in decelerating a vehicle before the vehicle arrives at a deceleration object, and to control a display apparatus to display, in a first display area, first notification information for notifying an occupant of the vehicle of the deceleration object that is a target for the deceleration assistance process. When a first object and a second object that is different from the first object are both detected as the deceleration object, and the second object is the target for the deceleration assistance process but the first object is not, the controller is programmed to control the display apparatus to display, in a second display area, second notification information for notifying the occupant of the first object, the second display area being different from the first display area.
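The two-area display routing reduces to splitting detected deceleration objects by whether they are the current assistance target. This is a minimal sketch; the object names and dictionary representation are invented for illustration.

```python
def notification_areas(detected, assist_target):
    """Route deceleration-object notifications: the assistance target goes
    to the first display area, other detected deceleration objects to the
    second (a minimal sketch of the two-area scheme)."""
    return {
        "first_area": [o for o in detected if o == assist_target],
        "second_area": [o for o in detected if o != assist_target],
    }

areas = notification_areas(detected=["stop_sign", "red_light"],
                           assist_target="red_light")
print(areas)  # {'first_area': ['red_light'], 'second_area': ['stop_sign']}
```

Separating the areas keeps the occupant aware of detected objects the system is deliberately not braking for.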