Patent classifications
G06V20/597
METHOD AND SYSTEM FOR PERSONALIZED EYE BLINK DETECTION
Unlike state-of-the-art eye blink detection techniques, which are generalized for use across individuals and whose prediction accuracy therefore varies from subject to subject, embodiments of the present disclosure provide a method and system for personalized eye blink detection using a passive camera-based approach. The method first generates subject-specific annotation data, which is further processed to derive subject-specific personalized blink threshold values. The disclosed method provides three unique approaches to compute the personalized blink threshold values in a one-time calibration process. The personalized blink threshold values are then used to generate a binary decision vector (D) while analyzing input test images (video sequences) of the subject of interest. Further, the values taken by elements of the decision vector (D) are analyzed over a predefined time period to predict possible eye blinks of the subject.
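As a rough illustration of the decision-vector stage described above (not from the patent; the eye-openness measure, threshold value, and run-length rule are all hypothetical), a per-frame openness score can be compared against a personalized threshold to form the binary vector D, and blinks can then be counted as sustained runs of "closed" decisions:

```python
def decision_vector(openness_values, blink_threshold):
    # D[i] = 1 when the per-frame eye-openness measure falls below the
    # subject's personalized threshold (eye likely closed), else 0.
    return [1 if v < blink_threshold else 0 for v in openness_values]

def count_blinks(decision, min_closed_frames=2):
    # Treat a run of at least `min_closed_frames` consecutive "closed"
    # decisions as one blink. The run length guards against single-frame
    # noise being counted as a blink.
    blinks, run = 0, 0
    for d in decision:
        if d:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:
        blinks += 1
    return blinks

# Example: two sustained eye closures in a short sequence.
scores = [0.3, 0.3, 0.1, 0.1, 0.3, 0.3, 0.1, 0.1, 0.1, 0.3]
D = decision_vector(scores, blink_threshold=0.2)
```

The patent's three calibration approaches would set `blink_threshold` per subject; the fixed value here is purely for illustration.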
SAFETY BELT DETECTION METHOD, APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM
A safety belt detection method, apparatus, computer device, and computer-readable storage medium are disclosed. In the detection method, an image to be detected is obtained and inputted into a detection network that includes an image classification branch network and an image segmentation branch network. A classification result, output from the image classification branch network, indicates whether a driver is wearing a safety belt. A segmentation image, output from the image segmentation branch network, indicates position information of the safety belt. A detection result, indicating whether the driver wears the safety belt properly, is obtained based on the classification result and the segmentation image.
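One way the two branch outputs might be fused (a minimal sketch; the fusion rule, threshold values, and function names are hypothetical and not taken from the patent) is to require both a positive classification and a sufficiently large segmented belt region:

```python
def seatbelt_detection(cls_prob, seg_mask, cls_thresh=0.5, min_belt_pixels=50):
    # cls_prob: probability from the classification branch that the
    #           driver is wearing a belt.
    # seg_mask: binary mask from the segmentation branch (1 = belt pixel).
    # Hypothetical fusion rule: report "worn properly" only when the
    # classifier says "wearing" AND the segmented belt region is large
    # enough to suggest the belt is actually across the body.
    wearing = cls_prob >= cls_thresh
    belt_pixels = sum(sum(row) for row in seg_mask)
    return wearing and belt_pixels >= min_belt_pixels
```

In practice the segmentation output would also supply the belt's position, allowing geometric checks (e.g. belt crossing the torso) beyond the simple pixel count shown here.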
ON-VEHICLE RECORDING CONTROL APPARATUS AND RECORDING CONTROL METHOD
A control device includes a video data acquisition unit; an orientation detection unit that determines whether a first condition, namely that the driver faces a direction other than the traveling direction of the vehicle, is met; an event detection unit; and a recording control unit that, when an event is detected, stores first video data including at least the event detection time point as event recording data.
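A common way to guarantee that saved video includes the event detection time point is a rolling buffer that is snapshotted when an event fires. The sketch below illustrates that pattern only; the class name, capacity, and interface are hypothetical and not from the patent:

```python
from collections import deque

class EventRecorder:
    # Minimal sketch: keep the most recent frames in a fixed-size ring
    # buffer; on event detection, snapshot the buffer so the saved clip
    # contains the frames leading up to and including the event.
    def __init__(self, capacity=60):
        self.buffer = deque(maxlen=capacity)

    def add_frame(self, frame):
        self.buffer.append(frame)  # oldest frame drops out when full

    def on_event(self):
        return list(self.buffer)   # event recording data
```

A real recorder would also keep capturing for some time after the event so the clip spans both sides of the detection time point.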
SYSTEM FOR A MOTOR VEHICLE AND METHOD FOR ASSESSING THE EMOTIONS OF A DRIVER OF A MOTOR VEHICLE
A system for a motor vehicle includes a sensor apparatus having a sensor for determining motor vehicle data and/or driving data of the motor vehicle, and an evaluation unit. The evaluation unit includes an emotion determination unit configured to assess the emotions of the driver of the motor vehicle on the basis of sensor signals transmitted by the sensor.
VEHICLE MOUNTED VIRTUAL VISOR SYSTEM HAVING PREDICTIVE PRE-SHADING OF VISOR SEGMENTS
A virtual visor system is disclosed that includes a visor having a plurality of independently operable pixels that are selectively operated with a variable opacity. A camera captures images of the face of a driver or other passenger and, based on the captured images, a controller operates the visor to automatically and selectively darken a limited portion thereof to block the sun or other illumination source from striking the eyes of the driver, while leaving the remainder of the visor transparent. The visor system advantageously predicts future positions of the head or eyes of the driver when the driver's head is in motion. Based on the predictions, the optical state of the visor is updated proactively to anticipate future movements of the driver's head. In this way, some of the negative effects of measurement and processing latencies are mitigated when responding to rapid head motions.
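The simplest form of such latency compensation is extrapolating the tracked eye position forward by the known system latency. The constant-velocity sketch below is a hypothetical illustration of that idea, not the patent's actual prediction method:

```python
def predict_eye_position(p_prev, p_curr, dt, latency):
    # Constant-velocity extrapolation: given the eye position in the
    # previous and current frames (dt seconds apart), estimate where the
    # eyes will be `latency` seconds from now, so the darkened visor
    # segment can be placed ahead of the head motion instead of
    # trailing behind it.
    vx = (p_curr[0] - p_prev[0]) / dt
    vy = (p_curr[1] - p_prev[1]) / dt
    return (p_curr[0] + vx * latency, p_curr[1] + vy * latency)
```

A production system would likely use a filtered motion model (e.g. a Kalman filter) rather than raw two-frame differencing, which is noise-sensitive.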
VEHICLE MOUNTED VIRTUAL VISOR SYSTEM HAVING FACIAL EXPRESSION CONTROL GESTURES
A virtual visor system is disclosed that includes a visor having a plurality of independently operable pixels that are selectively operated with a variable opacity. A camera captures images of the face of a driver or other passenger and, based on the captured images, a controller operates the visor to automatically and selectively darken a limited portion thereof to block the sun or other illumination source from striking the eyes of the driver, while leaving the remainder of the visor transparent. The visor system advantageously detects certain combinations of facial expressions and head gestures from which an error or issue with the operation of the visor system can be inferred. In response to such designated combinations of head gestures and facial expressions, the visor system adapts one or more operating or calibration parameters of the visor system to provide more accurate updates to the optical state of the visor.
VEHICULAR DRIVER MONITORING SYSTEM
A vehicular driver monitoring system includes a camera disposed within and viewing within an interior cabin of a vehicle. The camera includes a lens and an image sensor. The camera is operable to capture image data. Electronic circuitry of an electronic control unit (ECU) includes an image processor for processing image data captured by the camera. With a driver of the vehicle sitting in a driver seat of the vehicle, light is reflected off a portion of the driver to impinge at the lens of the camera. The vehicular driver monitoring system, via processing at the ECU of image data captured by the camera, determines a deficiency in captured image data arising from light impinging at the lens. The determined deficiency in captured image data arises from an occlusion at the lens of the camera.
FACIAL STRUCTURE ESTIMATING DEVICE, FACIAL STRUCTURE ESTIMATING METHOD, AND FACIAL STRUCTURE ESTIMATING PROGRAM
A facial structure estimating device 10 includes an acquiring unit 11 and a controller 13. The acquiring unit 11 acquires a facial image. The controller 13 functions as an identifier 15, an estimator 16, and an evaluator 17. The identifier 15 identifies an individual based on a facial image. The estimator 16 estimates a facial structure based on the facial image. The evaluator 17 calculates a validity of the facial structure estimated by the estimator 16. The evaluator 17 allows facial images and facial structures whose validity is greater than or equal to a threshold to be applied to training of the estimator 16. The controller 13 causes application of facial images and facial structures whose validity is greater than or equal to a threshold to training of the estimator 16 to be based on identification results of individuals produced by the identifier 15.
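The validity-gated selection performed by the evaluator 17 can be sketched as a simple filter over candidate training pairs (a minimal illustration; the function names and the form of the validity score are hypothetical, not from the patent):

```python
def select_training_pairs(images, estimate_fn, validity_fn, threshold):
    # For each facial image, estimate a facial structure and score its
    # validity; only pairs scoring at or above the threshold are allowed
    # back into the estimator's training set (self-training with a
    # quality gate).
    selected = []
    for image in images:
        structure = estimate_fn(image)
        if validity_fn(image, structure) >= threshold:
            selected.append((image, structure))
    return selected
```

Per the abstract, the device additionally conditions this selection on the identifier's per-individual results, e.g. gating or weighting the threshold separately for each identified person; that per-identity logic is omitted from the sketch.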
ELECTRONIC DEVICE, INFORMATION PROCESSING DEVICE, ALERTNESS LEVEL CALCULATING METHOD, AND ALERTNESS LEVEL CALCULATING PROGRAM
An electronic device 10 includes an image-capturing unit 11, a line-of-sight detector 12, and a controller 14. The image-capturing unit 11 generates an image corresponding to the view by performing image capturing. The line-of-sight detector 12 detects a line of sight of a subject with respect to the view. The controller 14 functions as a first estimator 15. The first estimator 15 is capable of estimating a first heat map based on the image. The controller 14 calculates the alertness level of the subject based on the first heat map and the line of sight of the subject. The first estimator 15 is constructed using learning data obtained by machine learning of the relationship between a learning image and a line of sight of a training subject when an alertness level of the training subject is in a first range.
Vehicle collision alert system and method for detecting driving hazards
An impairment analysis (“IA”) computer system for alerting a first driver of a first vehicle to a driving hazard posed by a second vehicle operated by a second driver is provided. The IA computer system is associated with the first vehicle, and includes at least one processor in communication with at least one memory device. The at least one processor is programmed to: (i) receive second vehicle data including second driver data and second vehicle condition data, where the second vehicle data is collected by a plurality of sensors included on the first vehicle; (ii) analyze the second vehicle data by applying a baseline model to the second vehicle data; (iii) determine that the second vehicle poses a driving hazard to the first vehicle based upon the analysis; and/or (iv) generate an alert signal based upon the determination that the second vehicle poses a driving hazard to the first vehicle.