G06V20/597

Vehicle and mobile device interface for vehicle occupant assistance

Systems, methods, and non-transitory media are provided for a vehicle and mobile device interface for vehicle occupant assistance. An example method can include determining, based on one or more images of an interior portion of a vehicle, a position of a mobile device relative to a coordinate system of the vehicle; receiving, from the vehicle, data associated with one or more sensors of the vehicle; and displaying, using a display device of the mobile device, virtual content based on the data associated with the one or more sensors and the position of the mobile device relative to the coordinate system of the vehicle.
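The geometry this abstract describes — mapping sensor data given in the vehicle's coordinate system into the mobile device's frame using the device's estimated pose — can be sketched as a simple 2-D rigid transform (the function and variable names are hypothetical, and a real system would use a full 3-D pose):

```python
import math

def vehicle_to_device(point_vehicle, device_pose):
    """Transform a point from the vehicle coordinate system into the
    mobile device's frame, given the device pose (x, y, heading)
    estimated from images of the vehicle interior."""
    dx, dy, heading = device_pose
    # Translate so the device is the origin, then rotate by -heading.
    tx, ty = point_vehicle[0] - dx, point_vehicle[1] - dy
    c, s = math.cos(-heading), math.sin(-heading)
    return (c * tx - s * ty, s * tx + c * ty)

# A sensor reading 2 m ahead of the vehicle origin, seen from a device
# at the origin facing the same direction, is unchanged.
print(vehicle_to_device((2.0, 0.0), (0.0, 0.0, 0.0)))  # (2.0, 0.0)
```

Virtual content anchored this way stays registered to the vehicle as the device moves, since every frame re-applies the latest pose estimate.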

Multimodal machine learning for vehicle manipulation

Techniques for machine-trained analysis for multimodal machine learning vehicle manipulation are described. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels. The trained weights cover both the audio information and the video information and are trained simultaneously. The learning facilitates cognitive state analysis of the audio information and the video information. A computing device within a vehicle captures further information and analyzes the further information using the trained weights. The further information that is analyzed enables vehicle manipulation. The further information can include only video data or only audio data. The further information can include a cognitive state metric.
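The notion of a single weight set that covers both modalities and is trained simultaneously can be illustrated with a toy stand-in for the multilayered convolutional system: logistic regression over concatenated audio and video feature vectors, so one update step moves the weights for both modalities at once (all names and data are hypothetical):

```python
import math

def train_joint_weights(samples, labels, lr=0.5, epochs=500):
    """Each sample concatenates audio and video features, so a single
    weight vector (plus bias) spans both modalities and is updated
    simultaneously by every gradient step."""
    w = [0.0] * (len(samples[0]) + 1)  # +1 for bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            xb = x + [1.0]
            z = sum(wi * xi for wi, xi in zip(w, xb))
            p = 1.0 / (1.0 + math.exp(-z))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, xb)]
    return w

def predict(w, x):
    xb = x + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0

# Toy data: [audio_feature, video_feature]; label 1 = target state.
data = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]
w = train_joint_weights(data, labels)
```

At inference time the same weights can score samples with only one modality populated, which mirrors the abstract's point that the further information may be audio-only or video-only.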

Affective-cognitive load based digital assistant

Embodiments of the present disclosure set forth a computer-implemented method comprising receiving, from at least one sensor, sensor data associated with an environment; computing, based on the sensor data, a cognitive load associated with a user within the environment; computing, based on the sensor data, an affective load associated with an emotional state of the user; determining, based on both the cognitive load and the affective load, an affective-cognitive load; determining, based on the affective-cognitive load, a user readiness state associated with the user; and causing one or more actions to occur based on the user readiness state.
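The load-combination step can be sketched as below; the equal weighting and the thresholds are illustrative assumptions, not values from the disclosure:

```python
def readiness_state(cognitive_load, affective_load, low=0.3, high=0.7):
    """Combine the two loads into an affective-cognitive load and map
    it to a coarse user readiness state (thresholds are illustrative)."""
    acl = 0.5 * cognitive_load + 0.5 * affective_load
    if acl < low:
        return "ready"
    if acl < high:
        return "caution"
    return "not_ready"

print(readiness_state(0.2, 0.3))  # ready  (combined load 0.25)
print(readiness_state(0.9, 0.8))  # not_ready  (combined load 0.85)
```

The returned state is what would drive the "one or more actions" (for example, deferring a notification while the user is not ready).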

Travel controller and method for travel control

A travel controller detects, from an image representing the face of a driver of a vehicle, an act of the driver checking surroundings of the vehicle, records a time at which the act of checking is detected, and suggests to the driver a lane change for the vehicle to change a travel lane to an adjoining lane. In the case that the act of checking is detected in a precheck period before a suggestion time at which the lane change is suggested, the travel controller makes the vehicle execute the lane change regardless of whether the act of checking is detected in a post-suggestion check period after the suggestion time of the lane change.
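The timing logic — recording check times and treating a check inside the precheck window as sufficient to skip the post-suggestion check — can be sketched as follows (the window length and function names are hypothetical):

```python
def execute_lane_change_now(check_times, suggestion_time,
                            precheck_window=5.0):
    """Return True if the lane change may proceed immediately because a
    surroundings check was already detected within the precheck window
    before the suggestion time (window length is illustrative)."""
    return any(suggestion_time - precheck_window <= t <= suggestion_time
               for t in check_times)

print(execute_lane_change_now([12.0], suggestion_time=15.0))  # True
print(execute_lane_change_now([2.0], suggestion_time=15.0))   # False
```

When this returns False, the controller would fall back to waiting for a check during the post-suggestion check period.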

Systems and methods to reduce audio distraction for a vehicle driver

The disclosed technologies relate to reducing audible distractions for a driver of a vehicle. A method includes obtaining audio data based on sound detected inside the vehicle, identifying an audio event based on the audio data, determining a distraction rating for the audio event, the distraction rating indicating an estimated level of distraction caused by the audio event, and generating an alert when the distraction rating exceeds a threshold.
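The event-rating-threshold pipeline can be sketched like this; the event classes, ratings, and threshold are assumptions for illustration:

```python
# Illustrative distraction ratings per audio event class (values are
# assumptions, not from the disclosure).
DISTRACTION_RATINGS = {
    "phone_ring": 0.8,
    "conversation": 0.5,
    "road_noise": 0.2,
}

def maybe_alert(audio_event, threshold=0.6):
    """Generate an alert when the event's estimated distraction rating
    exceeds the threshold; otherwise stay silent."""
    rating = DISTRACTION_RATINGS.get(audio_event, 0.0)
    return f"ALERT: {audio_event}" if rating > threshold else None

print(maybe_alert("phone_ring"))  # ALERT: phone_ring
print(maybe_alert("road_noise"))  # None
```

A real system would derive the rating from a learned model over the audio data rather than a fixed lookup table.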

Data augmentation for driver monitoring
11702011 · 2023-07-18

This application is directed to augmenting training images used for generating a model for monitoring vehicle drivers. A computer system obtains a first image of a first driver in an interior of a first vehicle and separates, from the first image, a first driver image from a first background image of the interior of the first vehicle. The computer system obtains a second background image and generates a second image by overlaying the first driver image on the second background image. The second image is added to a corpus of training images to be used by a machine learning system to generate a model for monitoring vehicle drivers. In some embodiments, at least one of the first driver image and the second background image is adjusted to match lighting conditions, average intensities, and sizes of the first driver image and the second background image.
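The overlay-and-intensity-matching step can be sketched with tiny grayscale images represented as nested lists (a toy stand-in for the disclosed augmentation; a real pipeline would also handle size and lighting alignment):

```python
def composite(driver, mask, background):
    """Overlay a separated driver image on a new background using the
    driver mask, after scaling driver pixels so their mean intensity
    matches the background's mean (grayscale, values 0-255)."""
    d_pix = [p for row, mrow in zip(driver, mask)
             for p, m in zip(row, mrow) if m]
    b_pix = [p for row in background for p in row]
    scale = (sum(b_pix) / len(b_pix)) / (sum(d_pix) / len(d_pix))
    return [[min(255, round(p * scale)) if m else b
             for p, m, b in zip(drow, mrow, brow)]
            for drow, mrow, brow in zip(driver, mask, background)]

driver = [[100, 0], [0, 100]]      # driver pixels where mask == 1
mask = [[1, 0], [0, 1]]
background = [[50, 50], [50, 50]]  # darker interior
aug = composite(driver, mask, background)
print(aug)  # [[50, 50], [50, 50]]
```

Each such composite becomes one additional training image in the corpus, multiplying the variety of interiors a single driver capture can contribute.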

Methods and Systems for a Pitch Angle Recognition of a Steering Column in a Vehicle
20230018008 · 2023-01-19

The present disclosure discloses a computer-implemented method for a pitch angle recognition of a steering column in a vehicle. In aspects, the computer-implemented method includes measuring first acceleration data using a first acceleration sensor and measuring second acceleration data using a second acceleration sensor. The computer-implemented method further includes determining drift data of at least one of the first acceleration sensor and the second acceleration sensor based on the first acceleration data and the second acceleration data. Additionally, the computer-implemented method includes determining a pitch angle of the steering column based on the drift data, the first acceleration data, and the second acceleration data.
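A simplified model of the method: treat the per-axis disagreement between the two accelerometers as drift, average the sensors to compensate, and recover pitch from the gravity direction (the averaging rule and function names are assumptions, not the disclosed algorithm):

```python
import math

def steering_column_pitch(acc1, acc2):
    """Estimate steering-column pitch from two accelerometer readings
    (ax, ay, az). The per-axis disagreement is reported as drift, and
    the two sensors are averaged before computing the angle of the
    gravity vector (a simplified sketch of the disclosed method)."""
    drift = [a - b for a, b in zip(acc1, acc2)]
    ax, ay, az = [(a + b) / 2 for a, b in zip(acc1, acc2)]
    pitch = math.degrees(math.atan2(ax, az))
    return pitch, drift

# Both sensors see gravity almost entirely along z -> pitch near 0.
pitch, drift = steering_column_pitch((0.0, 0.0, 9.81), (0.02, 0.0, 9.80))
```

With gravity split equally between the x and z axes the same function returns 45 degrees, matching the expected geometry.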

INFORMATION PROCESSING APPARATUS
20230012843 · 2023-01-19

An autonomous driving system for a vehicle reduces the amount of computation for object extraction carried out by a DNN, using information about the traveling environment or the like. An information processing apparatus including a processor, a memory, and an arithmetic unit that executes a computation using an inference model is provided. The information processing apparatus includes a DNN processing unit that receives external information, the DNN processing unit extracting an external object from the external information using the inference model, and a processing content control unit that controls processing content of the DNN processing unit. The DNN processing unit includes an object extracting unit that executes the inference model in a deep neural network having a plurality of layers of neurons, and the processing content control unit includes an execution layer determining unit that determines the layers used by the object extracting unit.
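The execution-layer-determining idea — running only a subset of the network's layers depending on the environment — can be sketched with toy layers as plain functions (the environment rule and names are hypothetical):

```python
def run_dnn(x, layers, num_layers):
    """Run only the first `num_layers` layers of the network, as chosen
    by an execution-layer determining unit (toy layers: each layer is
    a function applied in sequence)."""
    for layer in layers[:num_layers]:
        x = layer(x)
    return x

def determine_execution_layers(environment, total):
    """Use fewer layers in simpler environments to cut computation
    (the rule is an illustrative assumption)."""
    return total if environment == "urban" else max(1, total // 2)

layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v + 3, lambda v: v * 2]
n = determine_execution_layers("highway", len(layers))  # 2 of 4 layers
out = run_dnn(1, layers, n)
print(out)  # 4
```

In a real DNN, truncating layers trades extraction accuracy for latency, so the control unit would pick the depth that the current scene can tolerate.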

ELECTRONIC DEVICE FOR DISPLAYING IMAGE BY USING CAMERA MONITORING SYSTEM (CMS) SIDE DISPLAY MOUNTED IN VEHICLE, AND OPERATION METHOD THEREOF

A method, performed by an electronic device installed in a vehicle, of switching a view of an image displayed on a camera monitoring system (CMS) side display, and an electronic device, are provided. The disclosure includes an electronic device for displaying, on a CMS side display, a first image representing a surrounding environment image; detecting a lane change signal of the vehicle; in response to the detected lane change signal, switching the first image displayed on the CMS side display to a second image representing a top view that shows locations of the vehicle and a surrounding vehicle in a virtual image as if looking down from above the vehicle, and displaying the second image; and displaying, on the second image, a lane change user interface (UI) indicating whether a lane change is possible. An operation method thereof is also provided.
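The view-switching behavior can be sketched as a small state selection (the return shape and names are hypothetical):

```python
def cms_view(lane_change_signal, lane_change_possible):
    """Select what the CMS side display shows: the surrounding-
    environment image normally, or the top view plus a lane-change UI
    once a lane change signal is detected (a simplified sketch)."""
    if not lane_change_signal:
        return {"view": "surround"}
    return {"view": "top",
            "ui": "possible" if lane_change_possible else "blocked"}

print(cms_view(False, True))   # {'view': 'surround'}
print(cms_view(True, False))   # {'view': 'top', 'ui': 'blocked'}
```

The display would revert to the surround view once the lane change completes or the signal is cancelled.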

DANGEROUS DRIVING WARNING DEVICE, DANGEROUS DRIVING WARNING SYSTEM, AND DANGEROUS DRIVING WARNING METHOD
20230014192 · 2023-01-19

A travel information sensor senses travel information of a host-vehicle. A biological information sensor senses biological information of a driver. A camera unit senses a facial expression of the driver. A communication unit acquires an agitating degree indicating a degree to which an other-vehicle agitates the driver of the host-vehicle, via a network. An agitated degree calculation unit calculates an agitated degree indicating a degree to which the driver of the host-vehicle is agitated by the other-vehicle. A danger degree determination unit determines a danger degree including whether the driver of the host-vehicle is agitated by the other-vehicle, based on the agitated degree and the agitating degree. A presentation unit warns the host-vehicle of the danger degree if it is determined that the driver of the host-vehicle is agitated by the other-vehicle.
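The final determination step — combining the locally computed agitated degree with the agitating degree received over the network — can be sketched as below; the product combination and the threshold are illustrative assumptions:

```python
def danger_degree(agitated_degree, agitating_degree, threshold=0.5):
    """Combine the host driver's agitated degree with the other
    vehicle's agitating degree (received via the network) and decide
    whether to present a warning (combination rule is illustrative)."""
    degree = agitated_degree * agitating_degree
    return {"degree": degree, "warn": degree > threshold}

print(danger_degree(0.9, 0.8)["warn"])  # True
print(danger_degree(0.3, 0.4)["warn"])  # False
```

A warning fires only when both degrees are high, matching the abstract's condition that the driver is actually agitated by the other vehicle.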