Patent classifications
G06V20/597
Control system and method using in-vehicle gesture input
A control system and method for controlling a vehicle's functions using in-vehicle gesture input, and more particularly, a system that receives an occupant's gesture and controls the execution of vehicle functions. The control system using an in-vehicle gesture input includes an input unit configured to receive a user's gesture, a memory configured to store a control program using an in-vehicle gesture input, and a processor configured to execute the control program. The processor transmits a command for executing the function that corresponds to a gesture in accordance with a usage pattern.
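The abstract describes dispatching a vehicle function from a gesture, with the mapping shaped by a usage pattern. A minimal sketch of that idea, assuming a simple frequency-based usage model (the gesture names, functions, and selection rule below are illustrative, not from the patent):

```python
# Hypothetical gesture-to-function dispatch weighted by how often the user
# has previously triggered each candidate function for a given gesture.
from collections import Counter

class GestureController:
    def __init__(self, gesture_map):
        self.gesture_map = gesture_map   # gesture -> list of candidate functions
        self.usage = Counter()           # usage pattern: (gesture, function) counts

    def execute(self, gesture):
        candidates = self.gesture_map.get(gesture, [])
        if not candidates:
            return None
        # Prefer the function this user most often pairs with the gesture.
        chosen = max(candidates, key=lambda f: self.usage[(gesture, f)])
        self.usage[(gesture, chosen)] += 1
        return chosen

ctrl = GestureController({"swipe_left": ["next_track", "dismiss_alert"]})
ctrl.usage[("swipe_left", "next_track")] = 3
print(ctrl.execute("swipe_left"))  # next_track
```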
Driver alertness monitoring including a predictive sleep risk factor
An illustrative example system includes at least one alertness detector that is configured to detect an alertness condition of a driver of a vehicle and an alertness condition of a passenger in the vehicle. A controller is configured to determine a sleep risk factor based on the alertness condition of the passenger. The controller is also configured to determine a likelihood that the driver is sleepy based on the alertness condition of the driver and the sleep risk factor. The controller is configured to control a feature of the vehicle to assist the driver when the determined likelihood satisfies a predetermined criterion.
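The described controller combines the driver's alertness with a passenger-derived sleep risk factor before comparing against a predetermined criterion. A toy sketch of one way such a combination could look; the weighting and threshold values here are assumptions for illustration, not values from the patent:

```python
# Hypothetical combination of driver alertness and a passenger-based sleep
# risk factor into a sleepiness likelihood; weights/threshold are assumed.
def sleep_likelihood(driver_alertness, passenger_alertness,
                     risk_weight=0.4, threshold=0.6):
    # Low passenger alertness raises the risk factor: a drowsy cabin
    # suggests conditions conducive to driver sleepiness.
    risk_factor = 1.0 - passenger_alertness
    likelihood = ((1.0 - driver_alertness) * (1.0 - risk_weight)
                  + risk_factor * risk_weight)
    assist_driver = likelihood >= threshold  # predetermined criterion
    return likelihood, assist_driver

likelihood, assist = sleep_likelihood(driver_alertness=0.2,
                                      passenger_alertness=0.1)
print(likelihood, assist)  # 0.84 True
```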
SYSTEM FOR CONTROLLING MEDIA PLAY
According to an aspect of the present disclosure, there is provided a system for controlling media play comprising: a control target recognition unit configured to recognize a target medium that is to be controlled by the user, among media that are played using a vehicle display; a control command generation unit configured to generate a control command according to a result of recognizing a user's request for the target medium; and a medium control unit configured to control the target medium according to the control command.
DASH CAM WITH ARTIFICIAL INTELLIGENCE SAFETY EVENT DETECTION
- Mathew Chasan Calmer
- Justin Delegard
- Justin Pan
- Sabrina Shemet
- Meelap Shah
- Kavya Joshi
- Brian Tuan
- Sharan Srinivasan
- Muhammad Ali Akhtar
- John Charles Bicket
- Margaret Finch
- Vincent Shieh
- Bruce Kellerman
- Mitch Lin
- Marvin Arroz
- Siddhartha Datta Roy
- Jason Symons
- Tina Quach
- Cassandra Lee Rommel
- Saumya Jain
A vehicle dash cam may be configured to execute one or more neural networks (and/or other artificial intelligence), such as based on input from one or more of the cameras and/or other sensors associated with the dash cam, to intelligently detect safety events in real-time. Detection of a safety event may trigger an in-cab alert to make the driver aware of the safety risk. The dash cam may include logic for determining which asset data to transmit to a backend server in response to detection of a safety event, as well as which asset data to transmit to the backend server in response to analysis of sensor data that did not trigger a safety event. The asset data transmitted to the backend server may be further analyzed to determine if further alerts should be provided to the driver and/or to a safety manager.
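The abstract distinguishes between asset data uploaded because a safety event was detected and asset data uploaded after routine sensor analysis. A minimal sketch of that upload decision, assuming hypothetical field names (nothing below reflects the actual dash cam implementation):

```python
# Illustrative upload-selection logic: full asset data for detected safety
# events, a reduced sensor summary otherwise. Field names are assumptions.
def select_asset_data(event_detected, sensor_summary, video_clip):
    if event_detected:
        # A safety event warrants the video asset plus sensor context.
        return {"video": video_clip,
                "sensors": sensor_summary,
                "reason": "safety_event"}
    # No event triggered, but the backend may still analyze sensor data
    # to decide whether further alerts are warranted.
    return {"sensors": sensor_summary, "reason": "periodic_analysis"}
```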
APPARATUS AND METHOD FOR CONTROLLING AN IN-VEHICLE LIGHTING ENVIRONMENT
An apparatus for controlling an in-vehicle lighting environment includes: a passenger state determination unit that determines a state of a passenger using a gaze of the passenger photographed by a camera of a vehicle; a driving state determination unit that determines a driving state of the vehicle using an acceleration value measured by an acceleration sensor of the vehicle; an external environment state determination unit that determines an external environment state of the vehicle using an external illuminance value measured by an external illuminance sensor of the vehicle; and a lighting environment control unit that controls an illuminance and a color of a first light disposed inside the vehicle based on data determined by at least one determination unit among the passenger state, driving state, and external environment state determination units.
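The apparatus combines three determinations (passenger state, driving state, external environment) into a lighting decision. A rule-based sketch of how such a combination could work; all thresholds and light settings below are illustrative assumptions, not values from the patent:

```python
# Hypothetical rule-based control of cabin light illuminance and color from
# passenger gaze, acceleration, and external illuminance determinations.
def light_setting(passenger_reading, accel_magnitude, external_lux,
                  reading_lux=300, night_lux=80):
    if accel_magnitude > 3.0:
        # Dynamic driving state: keep the cabin dim to avoid distraction.
        return {"illuminance": 0, "color": "off"}
    if passenger_reading:
        # Gaze suggests the passenger is reading: bright neutral light.
        return {"illuminance": reading_lux, "color": "neutral_white"}
    if external_lux < night_lux:
        # Dark external environment: soft warm ambient light.
        return {"illuminance": 50, "color": "warm_white"}
    return {"illuminance": 0, "color": "off"}

print(light_setting(passenger_reading=True, accel_magnitude=0.5,
                    external_lux=500))
# {'illuminance': 300, 'color': 'neutral_white'}
```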
Method and device for evaluating a degree of fatigue of a vehicle occupant in a vehicle
A method evaluates a degree of fatigue of a vehicle occupant in a vehicle. A number of first fatigue indicators is provided which are determined according to computation rules from a plurality of first sensor values and each represent a degree of fatigue of the vehicle occupant. The first sensor values represent measured values of the vehicle and/or measured values relating to a current journey. A first metadata record is associated with each of the number of first fatigue indicators, wherein the first metadata records represent information about the characteristics of the sensors. The first sensor values are processed in the respective first fatigue indicators. A number of second fatigue indicators is provided which are determined according to computation rules from one or more second sensor values and each represent a degree of fatigue of the vehicle occupant. The second sensor values represent physiological and/or physical parameters of the vehicle occupant. A second metadata record is associated with each of the number of second fatigue indicators. The second metadata records represent information about the characteristics of the sensors. The second sensor values are processed in the respective second fatigue indicators. An overall fatigue indicator is determined which represents the degree of fatigue of the vehicle occupant by weighting the number of first fatigue indicators and the number of second fatigue indicators. The fatigue indicators are weighted according to the information about the characteristics of the sensors contained in the first metadata record and the second metadata record.
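The weighting step above can be sketched as a metadata-weighted average over all indicators. This is a minimal sketch assuming a single hypothetical `reliability` field in each metadata record; the actual metadata characteristics and computation rules are not specified in the abstract:

```python
# Minimal sketch: combine vehicle-derived and physiological fatigue
# indicators via weights drawn from per-sensor metadata records.
# The 'reliability' key is a hypothetical stand-in for sensor characteristics.
def overall_fatigue(indicators, metadata):
    """indicators: fatigue values in [0, 1];
    metadata: one dict per indicator with a 'reliability' weight."""
    weights = [m["reliability"] for m in metadata]
    return sum(v * w for v, w in zip(indicators, weights)) / sum(weights)

first = [0.2, 0.4]   # e.g. steering behavior, lane keeping (vehicle values)
second = [0.8]       # e.g. heart-rate variability (physiological value)
meta = [{"reliability": 1.0}, {"reliability": 0.5}, {"reliability": 2.0}]
print(overall_fatigue(first + second, meta))  # ≈ 0.571
```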
Neural network image processing apparatus
A neural network image processing apparatus arranged to acquire images from an image sensor and to: identify a ROI containing a face region in an image; determine a plurality of facial landmarks in the face region; use the facial landmarks to transform the face region within the ROI into a face region having a given pose; and use transformed landmarks within the transformed face region to identify a pair of eye regions within the transformed face region. Each identified eye region is fed to a respective first and second convolutional neural network, each network configured to produce a respective feature vector. Each feature vector is fed to respective eyelid opening level neural networks to obtain respective measures of eyelid opening for each eye region. The feature vectors are combined and fed to a gaze angle neural network to generate gaze yaw and pitch values substantially simultaneously with the eyelid opening values.
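The dataflow described above (per-eye CNN features, per-eye eyelid-opening heads, and a shared gaze head on the concatenated features) can be sketched structurally. The functions below are NumPy placeholders for the networks, chosen only to show the wiring, not the patented architectures:

```python
# Structural sketch of the inference pipeline; every "network" here is a
# placeholder stand-in, not the apparatus's actual neural networks.
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(eye_region, dim=64):
    # Placeholder for a per-eye convolutional feature extractor.
    return rng.standard_normal(dim)

def eyelid_opening(feature_vec):
    # Placeholder eyelid-opening-level head (one per eye), squashed to [0, 1].
    return float(1.0 / (1.0 + np.exp(-feature_vec.mean())))

def gaze_angles(left_feat, right_feat):
    # Placeholder gaze head operating on the combined feature vectors.
    combined = np.concatenate([left_feat, right_feat])
    yaw, pitch = float(combined[0]), float(combined[-1])
    return yaw, pitch

left_feat, right_feat = cnn_features("left_eye"), cnn_features("right_eye")
open_left = eyelid_opening(left_feat)    # eyelid opening, left eye
open_right = eyelid_opening(right_feat)  # eyelid opening, right eye
yaw, pitch = gaze_angles(left_feat, right_feat)  # computed from the same
                                                 # features, so available
                                                 # substantially simultaneously
```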
Method and apparatus for 3D modeling
A method for three-dimensional modeling. The method may include: acquiring coordinate points of obstacles in a surrounding environment of an autonomous driving vehicle in a vehicle coordinate system; determining a position of eyes of a passenger in the autonomous driving vehicle, and establishing an eye coordinate system using the position of the eyes as a coordinate origin; converting the coordinate points of the obstacles in the vehicle coordinate system to coordinate points in the eye coordinate system, and determining a visualization distance between the obstacles in the surrounding environment based on an observation angle of the eyes; and performing three-dimensional modeling of the surrounding environment based on the coordinate points of the obstacles in the eye coordinate system and the visualization distance between the obstacles.
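The coordinate conversion step amounts to re-expressing obstacle points relative to the eye position as origin. A minimal sketch, assuming the eye frame shares the vehicle frame's orientation unless a rotation is supplied (the patent does not specify the transform's details):

```python
# Sketch: convert obstacle points from the vehicle coordinate system to an
# eye-origin coordinate system. Identity rotation is an assumption.
import numpy as np

def to_eye_frame(points_vehicle, eye_position, rotation=None):
    """points_vehicle: (N, 3) obstacle points in the vehicle frame;
    eye_position: (3,) eye location in the same frame."""
    R = np.eye(3) if rotation is None else np.asarray(rotation)
    # Translate so the eyes become the origin, then rotate into the eye frame.
    return (np.asarray(points_vehicle) - np.asarray(eye_position)) @ R.T

obstacles = [[10.0, 2.0, 0.0], [5.0, -1.0, 0.5]]
eye = [0.5, 0.0, 1.2]
pts_eye = to_eye_frame(obstacles, eye)
# First obstacle in the eye frame: (9.5, 2.0, -1.2)
```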
VEHICLE MOUNTED VIRTUAL VISOR SYSTEM WITH OPTIMIZED BLOCKER PATTERN
A virtual visor system is disclosed that includes a visor having a plurality of independently operable pixels that are selectively operated with a variable opacity. A camera captures images of the face of a driver or other passenger and, based on the captured images, a controller operates the visor to automatically and selectively darken a limited portion thereof to block the sun or other illumination source from striking the eyes of the driver, while leaving the remainder of the visor transparent. The virtual visor system advantageously updates the optical state with blocker patterns that include padding in excess of what is strictly necessary to block the sunlight. This padding advantageously provides robustness against errors, allows for a more relaxed response time, and minimizes frequent small changes to the position of the blocker in the optical state of the visor.
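The padding idea can be illustrated on a toy pixel grid: darken the minimal region needed to block the projected sun position, plus extra pixels so small tracking errors or sun movement do not immediately force an update. Grid size, radius, and padding below are assumptions for illustration:

```python
# Toy sketch of a padded blocker pattern on a visor pixel grid.
# `radius` is the strictly necessary block; `padding` adds the margin
# that gives robustness against errors and a more relaxed response time.
import numpy as np

def blocker_pattern(rows, cols, sun_rc, radius=1, padding=1):
    """Boolean opacity mask: True pixels are darkened."""
    r0, c0 = sun_rc
    rr, cc = np.ogrid[:rows, :cols]
    extent = radius + padding
    return (np.abs(rr - r0) <= extent) & (np.abs(cc - c0) <= extent)

mask = blocker_pattern(9, 9, sun_rc=(4, 4))
print(int(mask.sum()))  # 25  (a 5x5 block: radius 1 plus padding 1 per side)
```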
VEHICLE MOUNTED VIRTUAL VISOR SYSTEM THAT LOCALIZES A RESTING HEAD POSE
A virtual visor system is disclosed that includes a visor having a plurality of independently operable pixels that are selectively operated with a variable opacity. A camera captures images of the face of a driver or other passenger and, based on the captured images, a controller operates the visor to automatically and selectively darken a limited portion thereof to block the sun or other illumination source from striking the eyes of the driver, while leaving the remainder of the visor transparent. The virtual visor system advantageously detects whether the driver or other passenger is performing particular head gestures and updates the optical state of the visor using suitable modified procedures that accommodate the intent or goals of the driver or other passenger that are inferred from the predefined head gesture. In general, the modified procedures reduce distracting or frustrating updates to the optical state of the visor.