Patent classifications
B60W2040/0863
Appearance and Movement Based Model for Determining Risk of Micro Mobility Users
The systems and methods disclosed herein provide a risk prediction system that uses trained machine learning models to predict that a vulnerable road user (VRU) will take a particular action. The system first receives, in a video stream, an image depicting a VRU operating a micro-mobility vehicle and extracts the depictions from the image. The extraction may be performed by bounding-box classifiers trained to identify various VRUs and micro-mobility vehicles. The system feeds the extracted depictions to machine learning models and receives, as output, risk profiles for the VRU and the micro-mobility vehicle. A risk profile may include data associated with the VRU and micro-mobility vehicle, determined based on classifications of the VRU and the micro-mobility vehicle. The system may then generate a prediction that the VRU operating the micro-mobility vehicle will take a particular action based on the risk profile.
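The pipeline described in this abstract (extract depictions, classify them into a risk profile, predict an action) can be sketched as follows. This is a minimal illustration, not the patent's implementation: all class names, function names, and the 0.5 threshold are assumptions, and the model stages are stand-in stubs.

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    vru_class: str       # e.g. "cyclist"
    vehicle_class: str   # e.g. "bicycle"
    risk_score: float    # 0.0 (low risk) .. 1.0 (high risk)

def extract_depictions(frame):
    # Stand-in for the trained bounding-box classifiers: would return
    # cropped depictions of the VRU and the micro-mobility vehicle.
    return {"vru": frame, "vehicle": frame}

def build_risk_profile(depictions) -> RiskProfile:
    # Stand-in for the ML models that classify the depictions and
    # output a risk profile for the VRU / vehicle pair.
    return RiskProfile("cyclist", "bicycle", 0.7)

def predict_action(profile: RiskProfile) -> str:
    # Prediction that the VRU will take a particular action; the
    # 0.5 threshold is an illustrative assumption.
    return "evasive_maneuver_likely" if profile.risk_score > 0.5 else "steady_course"

frame = object()  # placeholder for one image from the video stream
profile = build_risk_profile(extract_depictions(frame))
print(predict_action(profile))  # -> evasive_maneuver_likely
```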
Warning and adjusting the longitudinal speed of a motor vehicle based on the recognized road traffic lights
An automotive adaptive cruise control system for a host motor vehicle configured to operate in at least two different operating modes comprising a first operating mode, in which a current speed of the host vehicle is controlled to maintain a cruise speed, and a second operating mode, in which the current speed of the host vehicle is controlled to maintain a cruise distance to a leading vehicle, wherein the system is configured to: detect an approach to a traffic light and determine the light signal emitted thereby; signal to the driver the presence of the detected traffic light and the determined light signal; and, if the traffic light emits a red or amber light signal, estimate a driver reaction time, determine a higher threshold distance and a lower threshold distance from the traffic light, and warn the driver of the host vehicle of the need to slow down if, after the driver reaction time has elapsed: i) the host motor vehicle has not decreased its speed by more than a calibratable threshold, ii) the current speed of the host vehicle is higher than a minimum speed, iii) either the distance of the host vehicle from the traffic light is lower than the higher threshold distance and the light signal emitted by the traffic light is red, or the distance of the host vehicle from the traffic light is between the higher and lower threshold distances and the light signal emitted by the traffic light is amber, and iv) a service brake of the host vehicle is unoperated.
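The four-part warning condition i)-iv) in this claim is a pure conjunction and can be written out directly. A hedged sketch, assuming the check runs after the estimated driver reaction time has elapsed; parameter names and the example values are illustrative, not taken from the patent.

```python
def should_warn(speed_drop, current_speed, min_speed, distance,
                d_high, d_low, light, brake_applied, drop_threshold):
    """Evaluate conditions i)-iv) of the claim after the driver
    reaction time has elapsed. All names/units are illustrative."""
    i = speed_drop <= drop_threshold                    # i) no meaningful slowdown
    ii = current_speed > min_speed                      # ii) still above minimum speed
    iii = ((light == "red" and distance < d_high) or    # iii) red and within d_high,
           (light == "amber" and d_low < distance < d_high))  # or amber in the band
    iv = not brake_applied                              # iv) service brake unoperated
    return i and ii and iii and iv

# e.g. red light 80 m ahead, thresholds 100/30 m, no slowdown, no brake -> warn
print(should_warn(0.0, 50.0, 10.0, 80.0, 100.0, 30.0, "red", False, 5.0))  # -> True
```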
AUTOMATED PACING OF VEHICLE OPERATOR CONTENT INTERACTION
In one example, a computing device includes one or more user input detection components, and one or more processors configured to: receive an indication of a first user input detected by the one or more user input detection components; responsive to receiving the indication of the first user input, adjust a level of an attention buffer at a defined rate; responsive to determining that the level of the attention buffer satisfies a first threshold, prevent further interaction with a user interface of the computing device; responsive to determining that an indication of a second user input has not been received within a time period, adjust the level of the attention buffer; and responsive to determining that the level of the attention buffer satisfies a second threshold, allow further interaction with the user interface.
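The attention-buffer pacing described above is a small state machine with hysteresis: inputs drain the buffer toward a lock threshold, idle periods refill it toward an unlock threshold. A minimal sketch, assuming concrete rates and thresholds that the patent leaves unspecified:

```python
class AttentionBuffer:
    """Illustrative sketch of the pacing logic above; the rate and
    both thresholds are assumptions, not values from the patent."""

    def __init__(self):
        self.level = 1.0
        self.rate = 0.4              # defined adjustment rate
        self.lock_threshold = 0.0    # first threshold: lock the UI
        self.unlock_threshold = 0.5  # second threshold: unlock it
        self.locked = False

    def on_user_input(self):
        # Each detected input drains the buffer at the defined rate.
        self.level = max(0.0, self.level - self.rate)
        if self.level <= self.lock_threshold:
            self.locked = True       # prevent further interaction

    def on_idle_period(self):
        # No second input within the time period: the buffer recovers.
        self.level = min(1.0, self.level + self.rate)
        if self.level >= self.unlock_threshold:
            self.locked = False      # allow interaction again

buf = AttentionBuffer()
buf.on_user_input(); buf.on_user_input(); buf.on_user_input()
print(buf.locked)  # -> True (drained to the first threshold)
buf.on_idle_period(); buf.on_idle_period()
print(buf.locked)  # -> False (recovered past the second threshold)
```

Using two different thresholds gives hysteresis, so the interface does not flicker between locked and unlocked around a single boundary value.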
SYSTEMS AND METHODS TO LIMIT OPERATING A MOBILE PHONE WHILE DRIVING
Systems and non-transitory computer-readable media for determining an expected interaction between a driver and a mobile device, and for limiting operation of the mobile device, are disclosed. The disclosed systems may include at least one processor that may be configured to receive, from at least one image sensor in the vehicle, first information associated with an interior area of the vehicle. The processor may extract at least one feature associated with at least one body part of the driver from the received first information. Based on the at least one extracted feature, the processor may determine an expected interaction between the driver and a mobile device, and generate at least one of a message, command, or alert based on the determination.
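The feature-to-decision step above can be sketched as a rule over extracted cabin features. Everything here is a hypothetical illustration: the feature names (`hand_near_phone`, `gaze_off_road`) and the single-rule decision are assumptions, standing in for whatever features and model the claimed processor actually uses.

```python
def expected_interaction(features: dict) -> bool:
    # Hypothetical rule over features extracted from the cabin image
    # of the driver's body parts; feature names are illustrative.
    return (features.get("hand_near_phone", False)
            and features.get("gaze_off_road", False))

def respond(features: dict):
    # Generate a message, command, or alert when an interaction
    # between driver and mobile device is expected.
    if expected_interaction(features):
        return {"alert": "phone use detected", "command": "limit_operation"}
    return None
```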
Systems and methods for verifying whether vehicle operators are paying attention
Systems and methods for verifying whether vehicle operators are paying attention are disclosed. Exemplary implementations may: generate output signals conveying information related to a first vehicle operator; make a first type of determination of at least one of an object on which attention of the first vehicle operator is focused and/or a direction in which attention of the first vehicle operator is focused; make a second type of determination regarding fatigue of the first vehicle operator; make a third type of determination of at least one of a distraction level of the first vehicle operator and/or a fatigue level of the first vehicle operator; and effectuate a notification regarding the third type of determination to at least one of the first vehicle operator and/or a remote computing server.
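The three determination types above (attention focus, fatigue, and derived distraction/fatigue levels feeding a notification) can be sketched as one function. The thresholds, the linear fatigue scale, and the field names are illustrative assumptions, not values from the patent.

```python
def attention_report(gaze_object, gaze_direction, eye_closure_ratio):
    """Sketch of the three determination types; all thresholds and
    names are illustrative assumptions."""
    # First type: object and/or direction the operator's attention is on.
    focus = gaze_object or gaze_direction
    # Second type: fatigue determination (e.g. eyelid-closure ratio).
    fatigued = eye_closure_ratio > 0.3
    # Third type: distraction level and/or fatigue level.
    distraction_level = 0.0 if focus == "road_ahead" else 0.8
    fatigue_level = min(1.0, eye_closure_ratio / 0.5)
    # Notification to the operator and/or a remote computing server.
    return {"fatigued": fatigued,
            "distraction_level": distraction_level,
            "fatigue_level": fatigue_level,
            "notify": distraction_level > 0.5 or fatigue_level > 0.5}
```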
Vehicular driver monitoring system
A vehicular driver monitoring system includes an illumination source that emits non-visible light that illuminates at least a portion of a driver of the vehicle, a reflector disposed at the vehicle and within a line of sight of the illuminated portion of the driver, a camera disposed in the vehicle and having a field of view that encompasses the reflector, and a control having an image processor that processes image data captured by the camera. The reflector reflects at least some non-visible light and allows visible light to pass through. The camera captures image data representative of the non-visible light emitted by the illumination source that reflects off the illuminated portion of the driver of the vehicle and reflects off the reflector so as to be directed toward the camera. The control, responsive to processing of image data captured by the camera, monitors the illuminated portion of the driver.
Training a vehicle to accommodate a driver
A system can train a vehicle electronically to accommodate a driver. The system can train the vehicle to accommodate the ability, condition, and/or personality of the driver. The system can change the controls of the vehicle, responsive to the inputs from the driver, to match the patterns of controls resulting from a predetermined model (such as a safe-driver model). Accordingly, the vehicle can appear as if it is being driven by a safe driver even when that is not the case. A driver with lower driving competence may apply physical controls in a pattern that is slow, unstable, or insufficient. However, the vehicle can be trained to adjust the transformation from the UI signals to the drive-by-wire signals such that the transformed signals appear to be applied by a more competent driver on the road. And the transformation can improve over time with training via machine learning.
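The UI-signal-to-drive-by-wire transformation described above can be sketched with exponential smoothing standing in for the learned model: abrupt or unstable input is damped so the output resembles a steadier control pattern. The smoothing factor `alpha` is an assumption; in the patent's scheme it would effectively be what the machine-learning training tunes.

```python
class SafeDriverTransform:
    """Minimal sketch of a UI-to-drive-by-wire transformation,
    with exponential smoothing as a stand-in for the learned model."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha   # illustrative; would be tuned by training
        self.prev = 0.0      # last emitted drive-by-wire signal

    def apply(self, ui_signal: float) -> float:
        # Damp jerky or unstable driver input so the drive-by-wire
        # output follows the steadier safe-driver pattern.
        self.prev = self.alpha * ui_signal + (1.0 - self.alpha) * self.prev
        return self.prev
```

For example, a driver holding the control at full deflection produces an output that ramps up gradually (0.3, 0.51, 0.657, ...) instead of jumping straight to 1.0.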
Measuring operator readiness and readiness testing triggering in an autonomous vehicle
This disclosure relates to a system and method for transitioning vehicle control between autonomous operation and manual operation. The system includes sensors configured to generate output signals conveying information related to the vehicle and its operation. During autonomous vehicle operation, the system gauges the level of responsiveness of a vehicle operator through challenges and corresponding responses. The system determines when to present a challenge to the vehicle operator based on internal and external factors. If necessary, the system will transition from an autonomous operation mode to a manual operation mode.
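The challenge timing and mode transition above can be sketched as two small decisions: when to present a challenge (based on internal and external factors) and what to do when a challenge goes unanswered. The specific factors, the interval formula, and the mode strings are illustrative assumptions.

```python
def should_challenge(seconds_since_response: float,
                     external_risk: float,
                     base_interval: float = 60.0) -> bool:
    """Present a challenge sooner when external risk (e.g. traffic
    density, 0..1) is higher. All values are illustrative."""
    interval = base_interval * (1.0 - 0.5 * external_risk)
    return seconds_since_response >= interval

def next_mode(challenge_answered: bool, current_mode: str) -> str:
    # Per the abstract, the system transitions between autonomous
    # and manual operation if necessary; here, an unanswered
    # challenge during autonomous operation triggers the handover.
    if current_mode == "autonomous" and not challenge_answered:
        return "manual"
    return current_mode
```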
VARYING EXTENDED REALITY CONTENT BASED ON DRIVER ATTENTIVENESS
Extended reality content in a video can be varied based on driver attentiveness. The video can be of an external environment of a vehicle and can be presented in real time on a display located within the vehicle. The display can be a video pass-through display, such as an in-vehicle display or part of a video pass-through extended reality headset. The video can present a view of the external environment of the vehicle as well as extended reality content. A level of attentiveness of a driver of the vehicle can be determined, and an amount of the extended reality content presented in the video can be varied based on that level.
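Varying the amount of XR content with attentiveness can be sketched as scaling how many overlay items are shown. The linear scaling, the priority ordering of items, and the item names are assumptions for illustration; the patent only says the amount varies with the attentiveness level.

```python
def select_xr_content(items, attentiveness: float):
    """Show a fraction of the (priority-sorted) XR overlay items
    proportional to the driver's attentiveness level in [0, 1].
    Linear scaling is an illustrative assumption."""
    attentiveness = max(0.0, min(1.0, attentiveness))
    k = int(len(items) * attentiveness)  # fewer items when less attentive
    return items[:k]

# Items assumed sorted most-important first.
overlays = ["nav_arrow", "speed_limit", "poi_label", "ad_banner"]
print(select_xr_content(overlays, 0.5))  # -> ['nav_arrow', 'speed_limit']
```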
Vehicle driving assist system
A vehicle driving assist system includes an imaging device, an alarm, and a controller. The controller includes a facial recognition unit, an occupant determination unit, and a loss determination unit. The facial recognition unit recognizes at least a part of an occupant's face in first captured images acquired from the imaging device. The occupant determination unit determines a driving state of the occupant on the basis of a second captured image in which at least the part of the face is recognizable, and generates a first alert request if the occupant is unable to drive the vehicle. The loss determination unit generates a second alert request if a third captured image in which at least the part of the face is unrecognizable occurs two or more times. The alarm outputs an alert to the occupant on the basis of the first and second alert requests.
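The two alert paths above can be sketched per frame: a first alert when the face is recognizable but the occupant is unable to drive, and a second alert when the face is unrecognizable in two or more images. The two-image count comes from the abstract; the reset-on-recognition behavior and all names are illustrative assumptions.

```python
class DriverAlertLogic:
    """Sketch of the two alert paths from the abstract. The 2+ image
    count is from the abstract; other details are assumptions."""

    def __init__(self):
        self.unrecognizable_count = 0

    def process_frame(self, face_recognizable: bool, able_to_drive: bool):
        alerts = []
        if face_recognizable:
            self.unrecognizable_count = 0        # assumed reset behavior
            if not able_to_drive:
                alerts.append("first_alert")     # occupant unable to drive
        else:
            self.unrecognizable_count += 1
            if self.unrecognizable_count >= 2:   # face lost in 2+ images
                alerts.append("second_alert")
        return alerts
```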