Patent classifications
B60W2040/0863
EYE-GAZE INPUT APPARATUS
An eye-gaze input apparatus includes a hardware processor functioning as an eye-gaze determination unit, an input/output processing unit, and a careless-state determination unit. The eye-gaze determination unit determines a first input made by an eye gaze of a user with respect to one or more first images of input elements displayed in a display region set in a front windshield of a vehicle, in front of a driver seat. The careless-state determination unit determines whether the user is in a careless state. The input/output processing unit confirms the first input in a case where the eye-gaze determination unit determines that there is the first input on any of the one or more first images and the careless-state determination unit determines that the user is not in the careless state, and does not confirm the first input in a case where the careless-state determination unit determines that the user is in the careless state.
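The confirmation logic described in this abstract can be sketched as a small gating function. The function name, the representation of a gaze selection as an element identifier, and the boolean careless flag are all illustrative assumptions, not taken from the patent:

```python
def confirm_input(gaze_selection, careless):
    """Confirm a gaze-selected input element only when the user is attentive.

    gaze_selection: identifier of the first image the eye-gaze determination
    unit selected, or None if no first input was detected.
    careless: output of the careless-state determination unit.
    All names here are illustrative, not from the patent.
    """
    if gaze_selection is None:
        return None          # no first input detected
    if careless:
        return None          # careless state: the first input is not confirmed
    return gaze_selection    # attentive: the first input is confirmed
```

The key design point of the abstract is that the careless-state check acts as a veto on an otherwise valid gaze input, which the second branch models directly.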
System and Methods For Detecting Vehicle Braking Events Using Data From Fused Sensors in Mobile Devices
One or more braking event detection computing devices and methods are disclosed herein based on fused sensor data collected during a window of time from various sensors of a mobile device located within an interior of a vehicle. The various sensors of the mobile device may include a GPS receiver, an accelerometer, a gyroscope, a microphone, a camera, and a magnetometer. Data from vehicle sensors and other external systems may also be used. The braking event detection computing devices may adjust the polling frequency of the GPS receiver of the mobile device to capture non-consecutive data points based on the speed of the vehicle, the battery status of the mobile device, traffic-related information, and weather-related information. The braking event detection computing devices may use classification machine learning algorithms on the fused sensor data to determine whether to classify a window of time as a braking event.
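The two adaptive behaviors described above can be sketched as follows. The polling rule condenses the four listed criteria to speed and battery for brevity, and the window classifier is a naive deceleration threshold standing in for the trained ML classifier; all thresholds and names are illustrative assumptions:

```python
def gps_polling_interval_s(speed_kmh, battery_pct):
    """Choose a GPS polling interval (seconds) from vehicle speed and
    battery status -- a simplified stand-in for the fused criteria
    (speed, battery, traffic, weather) named in the abstract."""
    interval = 1.0 if speed_kmh > 60 else 5.0   # poll faster at highway speed
    if battery_pct < 20:
        interval *= 2                            # conserve a low battery
    return interval

def is_braking_window(speeds_mps, window_s):
    """Naive stand-in for the classification model: flag the window as a
    braking event when average deceleration exceeds 2 m/s^2."""
    decel = (speeds_mps[0] - speeds_mps[-1]) / window_s
    return decel > 2.0
```

In the actual system a trained classifier over the full fused feature set would replace the threshold rule, but the input/output shape (a time window in, a braking/no-braking label out) is the same.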
AUGMENTING TRANSPORT SERVICES USING REAL-TIME EVENT DETECTION
A method for augmenting transport services using event detection is provided. The method includes collection of first sensor data generated by various sensors associated with a plurality of vehicles. The first sensor data includes sensor outputs that indicate a plurality of rash driving events. The sensor outputs are augmented based on angular rotation to obtain augmented sensor outputs. A prediction model is trained based on the augmented sensor outputs. Target sensor data associated with a target vehicle is provided as input to the trained prediction model and, based on an output of the trained prediction model, an occurrence of a rash driving event is detected in real-time or near real-time. Based on a count of rash driving events associated with a target driver of the target vehicle within a cumulative driving distance, a driver score of the target driver is determined.
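The final scoring step can be sketched as a simple rate-based rule. The abstract states only that the score depends on the count of rash driving events within a cumulative driving distance; the normalization per 100 km, the penalty weight, and the score range below are illustrative assumptions:

```python
def driver_score(rash_event_count, cumulative_km, max_score=100.0):
    """Hypothetical scoring rule: start from a maximum score and subtract a
    penalty proportional to rash driving events per 100 km driven, clamped
    at zero. The formula is illustrative, not from the patent."""
    if cumulative_km <= 0:
        return max_score                  # no driving history yet
    events_per_100km = 100.0 * rash_event_count / cumulative_km
    return max(0.0, max_score - 10.0 * events_per_100km)
```

Normalizing by distance rather than using the raw count keeps the score comparable between a driver with 50 km and one with 5,000 km of history.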
METHOD FOR TRANSFERRING A MOTOR VEHICLE FROM AN AUTONOMOUS INTO A MANUAL DRIVING MODE, TAKING A COGNITIVE MODEL OF THE DRIVER INTO CONSIDERATION
A method is described for transferring a motor vehicle from an autonomous driving mode, in which the motor vehicle is guided autonomously, into a manual driving mode, in which the motor vehicle is guided by a vehicle driver. In the method, pieces of information for supporting the transfer are ascertained with the aid of a cognitive model of the vehicle driver, the cognitive model describing at least one perception process of the vehicle driver with respect to a driving situation and at least one decision-making process of the driver with respect to an action option. A device configured for executing the method is also described.
Appearance and movement based model for determining risk of micro mobility users
The systems and methods disclosed herein provide a risk prediction system that uses trained machine learning models to make predictions that a vulnerable road user (VRU) will take a particular action. The system first receives, in a video stream, an image depicting a VRU operating a micro-mobility vehicle and extracts the depictions from the image. The extraction process may be performed by bounding box classifiers trained to identify various VRUs and micro-mobility vehicles. The system feeds the extracted depictions to machine learning models and receives, as an output, risk profiles for the VRU and the micro-mobility vehicle. The risk profile may include data associated with the VRU and micro-mobility vehicle determined based on classifications of the VRU and the micro-mobility vehicle. The system may then generate a prediction that the VRU operating the micro-mobility vehicle will take a particular action based on the risk profile.
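The detection-to-prediction pipeline in this abstract can be sketched as a composition of two model stages. Here `detector` and `risk_model` are placeholders for the trained bounding-box classifier and risk model; the risk-profile dictionary shape and the 0.5 decision threshold are illustrative assumptions:

```python
def predict_vru_actions(frame, detector, risk_model):
    """Pipeline sketch: bounding-box detection extracts VRU depictions, a
    risk model turns each depiction into a risk profile, and the profile
    drives the action prediction. Both models are injected as callables."""
    predictions = []
    for crop, label in detector(frame):            # extracted depiction + class
        profile = risk_model(crop, label)          # e.g. {"risk": 0.8}
        will_act = profile["risk"] > 0.5           # threshold is an assumption
        predictions.append((label, will_act))
    return predictions
```

Passing the models in as callables mirrors the abstract's separation between the extraction stage and the risk-profiling stage, and makes the pipeline testable with stubs.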
Positive and negative reinforcement systems and methods of vehicles for driving
A system includes: a first camera configured to capture first images of a driver on a driver's seat within a passenger cabin of the vehicle; a second camera configured to capture second images in front of the vehicle; a driver module configured to determine a driver and a present rank of the driver; a module configured to detect a condition based on at least one of (a) a first image, (b) a second image, and (c) a parameter measured by a sensor; and a reinforcement module configured to: display the present rank within the passenger cabin; generate an output within the passenger cabin in response to the detection of the condition; when no conditions are detected, increment a rank period of the driver; selectively increase the present rank of the driver; and generate an alert within the passenger cabin in response to the increase of the present rank.
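The reinforcement module's rank bookkeeping can be sketched as a small state update. The abstract specifies incrementing a rank period when no condition is detected and selectively increasing the rank; the promotion threshold and the reset of progress on a detected condition are assumptions added for illustration:

```python
def update_rank(rank, rank_period, condition_detected, period_threshold=10):
    """Sketch of the reinforcement logic: each clean interval increments the
    driver's rank period; once enough clean periods accumulate, the rank
    increases and the period resets. Resetting progress when a condition is
    detected, and the threshold of 10, are illustrative assumptions."""
    if condition_detected:
        return rank, 0            # negative reinforcement: progress resets
    rank_period += 1
    if rank_period >= period_threshold:
        return rank + 1, 0        # positive reinforcement: driver is promoted
    return rank, rank_period
```

The alert mentioned in the abstract would fire whenever the returned rank exceeds the input rank.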
MANUAL CONTROL RE-ENGAGEMENT IN AN AUTONOMOUS VEHICLE
Vehicles may have the capability to navigate according to various levels of autonomous capability, the vehicle having a different set of autonomous competencies at each level. In certain situations, the vehicle may shift from one level of autonomous capability to another. The shift may require more or less driving responsibility from a human operator. Sensors inside the vehicle collect human operator parameters, and an alertness level is determined based on those parameters and other data, including historical data or human operator-specific data. Notifications presented to the human operator are more or less intrusive depending on the alertness level and on the urgency of an impending change to autonomous capabilities. Notifications may be tailored to specific human operators based on human operator preference and historical performance.
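The mapping from alertness and urgency to notification intrusiveness can be sketched as a scoring rule. The 0-to-1 scales, the product form, the thresholds, and the tier names are all illustrative assumptions rather than the patent's method:

```python
def notification_level(alertness, urgency):
    """Map operator alertness (0..1, higher = more alert) and the urgency
    (0..1) of an impending capability change to an intrusiveness tier.
    A drowsy operator facing an urgent handover gets the most intrusive
    notification; an alert operator gets the least intrusive one."""
    score = urgency * (1.0 - alertness)   # low alertness + high urgency escalate
    if score > 0.6:
        return "haptic+audio"             # most intrusive
    if score > 0.3:
        return "audio"
    return "visual"                       # least intrusive
```

Per-operator tailoring, as the abstract describes, could be layered on by shifting the thresholds based on preference and historical performance.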
MANAGING COMMUNICATIONS FOR CONNECTED VEHICLES USING A CELLULAR NETWORK
Systems and methods are described herein for managing communications for a connected vehicle, such as between the connected vehicle and other connected vehicles and/or between the connected vehicle and infrastructure entities, such as providers of services to the connected vehicle. For example, a communication network, such as a network provided by a network carrier, may include various cloud engines or other network-based servers that manage, coordinate, and/or provision communications between the connected vehicle and other parties, such as vehicles, road devices, buildings, and other infrastructure entities.
Vehicle control system
A vehicle includes a light switch for manually operating a lighting state of a lighting device. The light switch includes a light-off position and an auto-light position for executing an auto-light process. A vehicle control system includes a first controller for executing an automated driving of the vehicle, and a second controller for controlling a lighting state of the lighting device based on a request from the first controller or operation information of the light switch. The first controller is configured to transmit an auto-light request for executing the auto-light process to the second controller during execution of the automated driving. The second controller is configured to execute the auto-light process when the auto-light request is received from the first controller in a state where the light switch is operated to the light-off position.
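The second controller's arbitration between the manual switch and the first controller's request can be sketched as a single predicate. The string switch positions and the function name are illustrative; the key condition, taken from the abstract, is that the auto-light request is honored only when the switch sits in the light-off position:

```python
def should_run_auto_light(switch_position, auto_light_requested):
    """Second-controller decision sketch: execute the auto-light process when
    the driver selected the auto-light position directly, or when the first
    controller requests it during automated driving while the manual switch
    is in the light-off position (so no explicit manual lighting choice is
    being overridden)."""
    if switch_position == "auto":
        return True                       # driver selected auto-light manually
    return switch_position == "off" and auto_light_requested
```

This encodes the abstract's priority scheme: a manual switch position other than light-off takes precedence over the automated-driving request.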