Patent classifications
B60W2400/00
System of configuring active lighting to indicate directionality of an autonomous vehicle
Systems, apparatus and methods may be configured to implement actively-controlled light emission from a robotic vehicle. A light emitter(s) of the robotic vehicle may be configurable to indicate a direction of travel of the robotic vehicle and/or display information (e.g., a greeting, a notice, a message, a graphic, passenger/customer/client content, vehicle livery, customized livery) using one or more colors of emitted light (e.g., orange for a first direction and purple for a second direction), one or more sequences of emitted light (e.g., a moving image/graphic), or positions of light emitter(s) on the robotic vehicle (e.g., symmetrically positioned light emitters). The robotic vehicle may not have a front or a back (e.g., a trunk/a hood) and may be configured to travel bi-directionally, in a first direction or a second direction (e.g., opposite the first direction), with the direction of travel being indicated by one or more of the light emitters.
Adjusting an operating mode of a vehicle based on an expected resource level
A method for controlling an operating mode of a vehicle is presented. The method includes determining a current range of the vehicle while the vehicle is operating in a first operating mode. The method also includes determining a distance to a destination. The method further includes controlling the vehicle to operate in a second operating mode instead of the first operating mode when the range is less than the distance to the destination.
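The range-versus-distance check described in this abstract can be sketched in a few lines. This is an illustrative sketch only; the function and mode names (`select_operating_mode`, `"performance"`, `"eco"`) are assumptions, not taken from the patent.

```python
def select_operating_mode(current_range_km: float,
                          distance_to_destination_km: float,
                          first_mode: str = "performance",
                          second_mode: str = "eco") -> str:
    """Operate in the second mode when the remaining range is less than
    the distance to the destination; otherwise stay in the first mode."""
    if current_range_km < distance_to_destination_km:
        return second_mode
    return first_mode

print(select_operating_mode(80.0, 120.0))   # range too short -> "eco"
print(select_operating_mode(200.0, 120.0))  # enough range -> "performance"
```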
VEHICLE SYSTEM FOR RECOGNIZING OBJECTS
A vehicle system includes an electronic control unit. The electronic control unit is configured to execute a first program, a second program, and a third program. The first program is configured to recognize an object present around a vehicle, the second program is configured to store information related to the recognized object as time-series map data, and the third program is configured to predict a future position of the object based on the stored time-series map data. The first program and the third program are configured to be (i) first, individually optimized based on first training data corresponding to output of the first program and second training data corresponding to output of the third program, and (ii) then, collectively optimized based on the second training data corresponding to the output of the third program.
ENSEMBLE OF NARROW AI AGENTS FOR AUTONOMOUS EMERGENCY BRAKING
A method for automatic emergency braking (AEB) of a vehicle may include: obtaining sensed information regarding an environment of the vehicle and regarding a motion of the vehicle; determining an occurrence of a current situation based on the sensed information; selecting, based on the current situation, a selected narrow AI agent out of multiple narrow AI agents, wherein the different narrow AI agents are trained, using a machine learning process, to make AEB-related decisions in different scenarios, and wherein the selected narrow AI agent is associated with the occurrence of the current situation; processing, by the selected narrow AI agent, at least one of (a) at least a first part of the sensed information and (b) an outcome of a pre-processing of at least a second part of the sensed information, wherein the processing results in an AEB-related decision; and responding to the AEB-related decision, wherein the responding comprises at least one of (a) executing the AEB-related decision and (b) suggesting execution of the AEB-related decision.
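The select-then-process-then-respond flow above can be sketched as a dispatch over scenario-specific agents. The situation labels, the time-to-collision thresholds, and the agent table below are illustrative assumptions, not the patent's actual agents.

```python
from typing import Callable, Dict

# Each "narrow AI agent" maps sensed information to a brake/no-brake decision.
# In practice these would be separately trained models, one per scenario.
AGENTS: Dict[str, Callable[[dict], bool]] = {
    "highway": lambda s: s["time_to_collision_s"] < 1.5,
    "urban":   lambda s: s["time_to_collision_s"] < 2.5,
}

def aeb_decision(situation: str, sensed: dict, execute: bool = True) -> str:
    agent = AGENTS[situation]      # select the agent for the current situation
    should_brake = agent(sensed)   # agent processes the sensed information
    if not should_brake:
        return "no action"
    # Respond: either execute the AEB decision or only suggest it.
    return "brake executed" if execute else "brake suggested"

print(aeb_decision("urban", {"time_to_collision_s": 2.0}))
```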
Method of monitoring localization functions in an autonomous driving vehicle
In one embodiment, a method for monitoring a localization function in an autonomous driving vehicle (ADV) can use known static objects as ground truths to determine when the localization function encounters errors. The known static objects are marked on a high-definition (HD) map of the real-time driving environment. When the ADV detects one or more known static objects, the ADV can use sensor data, the locations of the one or more static objects, and one or more error tolerance parameters to create a localization error tolerance area surrounding a current location of the ADV. The ADV can project the tolerance area onto the HD map, perform a localization operation to generate an expected location of the ADV on the HD map, and determine whether the generated location falls within the projected tolerance area. If the generated location falls outside the projected tolerance area, indicating that the localization function of the ADV has encountered errors, the ADV can generate an alarm to alert a human driver to switch to a manual driving mode. If no human driver is available in the ADV, the ADV can activate a vision-based fail-safe localization procedure.
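The tolerance-area check above can be sketched minimally by assuming a circular tolerance area around the position inferred from the static landmarks. The circular-area simplification and all names below are illustrative assumptions; the patent's tolerance area could have any shape derived from its error tolerance parameters.

```python
import math

def localization_ok(landmark_based_position: tuple,
                    localized_position: tuple,
                    tolerance_m: float) -> bool:
    """True if the localizer's output falls within the tolerance area
    centered on the position derived from known static objects."""
    dx = localized_position[0] - landmark_based_position[0]
    dy = localized_position[1] - landmark_based_position[1]
    return math.hypot(dx, dy) <= tolerance_m

# Localized position 3.5 m off the landmark-based position, tolerance 2 m:
if not localization_ok((100.0, 50.0), (103.5, 50.0), tolerance_m=2.0):
    print("alarm: switch to manual or activate fail-safe localization")
```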
CONTACT DETERMINATION SYSTEM, CONTACT DETERMINATION DEVICE, CONTACT DETERMINATION METHOD AND PROGRAM
A contact determination system includes a processor, and determines whether a contact event occurs between target objects in a vicinity of a host vehicle. The processor is configured to acquire, regarding each of the target objects including at least one of (i) a road user other than the host vehicle and (ii) a road-installed object, a type and a moving speed of the target object. The processor is also configured to perform an overlap determination regarding whether at least two target models respectively modeling the target objects overlap with each other. The processor is further configured to determine whether or not the contact event occurs for a pair of target objects that have been determined to overlap, based on the types and moving speeds of the target objects.
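The overlap-then-contact flow above can be sketched by modeling each target as an axis-aligned box. The box format, the type labels, and the rule that two stationary road-installed objects cannot newly come into contact are all illustrative assumptions, not the patent's actual criteria.

```python
def boxes_overlap(a, b):
    """a, b: (x_min, y_min, x_max, y_max) target models."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def contact_event(obj_a, obj_b):
    """obj: dict with 'box', 'type', 'speed_mps'.
    Overlap determination first, then a type/speed-based decision."""
    if not boxes_overlap(obj_a["box"], obj_b["box"]):
        return False
    # Assumed rule: two stationary road-installed objects do not contact.
    static_types = {"road_installed"}
    if obj_a["type"] in static_types and obj_b["type"] in static_types:
        return False
    return obj_a["speed_mps"] > 0 or obj_b["speed_mps"] > 0

a = {"box": (0, 0, 2, 2), "type": "pedestrian", "speed_mps": 1.2}
b = {"box": (1, 1, 3, 3), "type": "road_installed", "speed_mps": 0.0}
print(contact_event(a, b))  # overlapping boxes, one moving road user -> True
```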
Systems and methods for providing nature sounds
Systems and methods for generating sound elements in a vehicle are presented. In one example, a method comprises selecting a sound element, the sound element corresponding to a natural environment; and broadcasting the sound element via one or more speakers of a vehicle. In this way, a sound environment may be provided to a vehicle user based on at least one vehicle state.
Verifying timing of sensors used in autonomous driving vehicles
In some implementations, a method of verifying operation of a sensor is provided. The method includes causing a sensor to obtain sensor data at a first time, wherein the sensor obtains the sensor data by emitting waves towards a detector. The method also includes determining that the detector has detected the waves at a second time. The method further includes receiving the sensor data from the sensor at a third time. The method further includes verifying operation of the sensor based on at least one of the first time, the second time, or the third time.
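The three timestamps above (emission, detection, receipt) can be checked for consistency as sketched below. The specific ordering-and-bounds check, and the bound values, are illustrative assumptions; the patent only says verification is based on at least one of the three times.

```python
def verify_sensor_timing(t_emit: float, t_detect: float, t_receive: float,
                         max_emit_to_detect_s: float = 0.001,
                         max_detect_to_receive_s: float = 0.1) -> bool:
    """Verify the sensor's timestamps are correctly ordered and that the
    emit->detect and detect->receive delays fall within expected bounds."""
    ordered = t_emit <= t_detect <= t_receive
    in_bounds = ((t_detect - t_emit) <= max_emit_to_detect_s
                 and (t_receive - t_detect) <= max_detect_to_receive_s)
    return ordered and in_bounds

print(verify_sensor_timing(0.0, 0.0005, 0.05))  # plausible timing -> True
```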
ENVIRONMENTAL LIMITATION AND SENSOR ANOMALY SYSTEM AND METHOD
Embodiments for operational envelope detection (OED) with situational assessment are disclosed. Embodiments herein relate to an operational envelope detector that is configured to receive, as inputs, information related to sensors of the system and information related to operational design domain (ODD) requirements. The detector then compares the sensor information to the ODD requirements, and identifies whether the system is operating within its ODD or whether a remedial action is appropriate to adjust the ODD requirements based on the current sensor information. Other embodiments are described and/or claimed.
Method for a Supercomputer to Mitigate Traffic Collisions with 5G or 6G
A supercomputer, with traffic-modeling software and 5G/6G connectivity, can assist an autonomous vehicle in avoiding, or at least minimizing the expected harm of, an imminent collision. Upon detecting the imminent collision, the autonomous vehicle can transmit a message to an access point requesting an uncontested direct communication link to the supercomputer, and then transfer sensor data and other traffic data to the supercomputer through the access point. The supercomputer can calculate a multitude of sequences of braking, steering, and accelerating actions for the autonomous vehicle, and can select a sequence that enables the autonomous vehicle to avoid the collision, if one exists. If no sequence can avoid the collision, the supercomputer can select the sequence that results in the least harm (fatalities, injuries, and property damage) in the unavoidable collision. The supercomputer then relays the selected sequence of actions through the access point to the autonomous vehicle, thereby mitigating the collision.
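The selection step above (prefer any collision-free sequence, otherwise minimize harm) can be sketched as follows. The candidate format and the scalar harm score are illustrative assumptions; the patent's harm assessment would weigh fatalities, injuries, and property damage.

```python
def select_sequence(candidates):
    """candidates: list of (action_sequence, collision_avoided, harm_score).
    Returns a collision-avoiding sequence if one exists; otherwise the
    sequence with the least expected harm."""
    avoiding = [c for c in candidates if c[1]]
    if avoiding:
        return avoiding[0][0]
    return min(candidates, key=lambda c: c[2])[0]

candidates = [
    (["brake"], False, 7.0),
    (["brake", "steer_left"], False, 3.0),
    (["accelerate", "steer_right"], False, 9.0),
]
# No sequence avoids the collision, so the least-harm sequence is chosen:
print(select_sequence(candidates))  # -> ['brake', 'steer_left']
```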