Patent classifications
G06V40/18
AUTONOMOUS LIGHT SYSTEMS AND METHODS FOR OPERATING AUTONOMOUS LIGHT SYSTEMS
An autonomous light system may comprise a reading light assembly, an object detection device, and a controller. The reading light assembly may include a light source and an actuation system. The controller may be operably coupled to the light source and the object detection device. The controller may include an object detection module and may be configured to receive object data from the object detection device, compare the object data against an object database, identify an object based on the comparison of the object data to the object database, and send a first command to at least one of the light source and the actuation system.
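The identify-then-command loop described above can be sketched as follows. All names here (the database layout, the "signature" key, the command dictionary) are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch of the controller's object-identification step.
# The object "signature" matching and the command format are assumptions.

def identify_object(object_data, object_database):
    """Compare received object data against database entries; return a match or None."""
    for entry in object_database:
        if entry["signature"] == object_data["signature"]:
            return entry
    return None

def controller_step(object_data, object_database):
    """On a successful identification, issue a first command to the light source
    and/or the actuation system (here a simple dictionary stands in for it)."""
    obj = identify_object(object_data, object_database)
    if obj is not None:
        return {"target": "light_source", "action": obj["light_action"]}
    return None

db = [{"signature": "book", "light_action": "aim_and_dim"}]
cmd = controller_step({"signature": "book"}, db)
```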
WARNING METHOD AND APPARATUS FOR DRIVING RISK, COMPUTING DEVICE AND STORAGE MEDIUM
Embodiments of the disclosure provide a warning method and apparatus for a driving risk, a computing device, and a storage medium. The method includes: obtaining dangerous driving behavior data of a driver in a first time period, and obtaining a correspondence between a quantity of occurrences of dangerous driving behaviors of one or more drivers and a quantity of actual occurrences of dangerous scenarios encountered by the one or more drivers while driving; predicting, based on the quantity of actual occurrences of the dangerous driving behaviors of the driver indicated in the dangerous driving behavior data and the correspondence, a target quantity of times the driver is predicted to encounter one or more dangerous scenarios in the first time period; and generating warning information based on the target quantity of times.
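As a rough illustration of the predict-and-warn flow, the correspondence can be modeled as a simple scenarios-per-behavior ratio fitted from historical data across drivers. The patent does not specify the model, so this linear form and the warning threshold are assumptions:

```python
# Illustrative sketch: correspondence modeled as one ratio (dangerous scenarios
# per dangerous behavior), which is an assumption; the disclosure leaves the
# exact form of the correspondence open.

def fit_correspondence(behavior_counts, scenario_counts):
    """Estimate scenarios-per-behavior from historical pairs across drivers."""
    total_behaviors = sum(behavior_counts)
    total_scenarios = sum(scenario_counts)
    return total_scenarios / total_behaviors if total_behaviors else 0.0

def predict_scenarios(driver_behavior_count, ratio):
    """Target quantity of dangerous scenarios predicted for the first time period."""
    return driver_behavior_count * ratio

def warning(target_quantity, threshold=5.0):
    """Generate warning information when the predicted quantity crosses a threshold."""
    if target_quantity >= threshold:
        return f"High risk: {target_quantity:.0f} dangerous scenarios predicted"
    return None

ratio = fit_correspondence([10, 20, 30], [1, 2, 3])   # 0.1 scenarios per behavior
pred = predict_scenarios(80, ratio)                   # 8.0
```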
METHOD AND APPARATUS FOR IDENTIFYING OBJECT OF INTEREST OF USER
The present disclosure relates to methods and apparatuses for identifying an object of interest of a user. One example method includes: obtaining information about a line-of-sight-gazed region of the user and an environment image corresponding to the user; obtaining information about a first gaze region of the user in the environment image based on the environment image, where the first gaze region indicates a sensitive region determined by using a physical feature of a human body; and obtaining a target gaze region of the user based on the information about the line-of-sight-gazed region and the information about the first gaze region. The target gaze region indicates a region in the environment image in which a target object gazed at by the user is located.
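One simple way to combine the two regions, sketched below, is to intersect them as bounding boxes in the environment image. The box representation and the intersection rule are assumptions for illustration; the disclosure does not commit to a specific combination method:

```python
# Assumed representation: regions as (x0, y0, x1, y1) pixel boxes in the
# environment image. The target gaze region is taken here as the overlap of
# the line-of-sight-gazed region and the body-feature-based first gaze region.

def intersect(a, b):
    """Return the overlapping box of a and b, or None if they do not overlap."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

los_region = (100, 100, 300, 300)   # from line-of-sight estimation
first_gaze = (200, 150, 400, 350)   # from a physical-feature cue (e.g. head pose)
target = intersect(los_region, first_gaze)  # (200, 150, 300, 300)
```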
IMAGE GAZE CORRECTION METHOD, APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
An image gaze correction method, apparatus, electronic device, computer-readable storage medium, and computer program product related to the field of artificial intelligence technologies are provided. The image gaze correction method includes: acquiring an eye image from an image; performing feature extraction processing on the eye image to obtain feature information of the eye image; performing, based on the feature information and a target gaze direction, gaze correction processing on the eye image to obtain an initially corrected eye image and an eye contour mask; performing, by using the eye contour mask, adjustment processing on the initially corrected eye image to obtain a corrected eye image; and generating a gaze corrected image based on the corrected eye image.
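The mask-based adjustment step can be illustrated as a pixel-wise blend: inside the eye contour mask the corrected pixels are used, outside it the original pixels are kept, so the correction cannot bleed into surrounding skin. The feature-extraction and correction networks themselves are omitted; this is only a sketch of the final blending:

```python
# Sketch of the eye-contour-mask adjustment only. Images are represented as
# nested lists of grayscale values for self-containment; mask values are
# 1.0 inside the eye contour and 0.0 outside.

def apply_contour_mask(original, corrected, mask):
    """Pixel-wise blend: use corrected pixels inside the mask, original outside."""
    return [
        [c * m + o * (1.0 - m) for o, c, m in zip(orow, crow, mrow)]
        for orow, crow, mrow in zip(original, corrected, mask)
    ]

orig = [[0.0] * 4 for _ in range(4)]                      # initial eye image
corr = [[1.0] * 4 for _ in range(4)]                      # initially corrected image
mask = [[1.0 if 1 <= r <= 2 and 1 <= c <= 2 else 0.0      # eye interior
         for c in range(4)] for r in range(4)]
out = apply_contour_mask(orig, corr, mask)
```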
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
An information processing apparatus according to an embodiment of the present technology includes a line-of-sight estimator, a correction amount calculator, and a registration determination section. The line-of-sight estimator calculates an estimation vector obtained by estimating a direction of a line of sight of a user. The correction amount calculator calculates a correction amount related to the estimation vector on the basis of at least one object that is within a specified angular range that is set using the estimation vector as a reference. The registration determination section determines whether to register, in a data store, calibration data in which the estimation vector and the correction amount are associated with each other, on the basis of a parameter related to the at least one object within the specified angular range.
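A minimal sketch of the angular-range test is given below: the correction amount is taken as the angular offset from the estimation vector to the nearest object within a specified angular range, and no calibration candidate is produced when no object qualifies. The 5-degree range and the nearest-object rule are assumptions for illustration:

```python
import math

# Illustrative sketch: objects and the estimation vector are 3D direction
# vectors; the specified angular range and nearest-object selection rule are
# assumptions, not details from the patent.

def angle_between(u, v):
    """Angle in degrees between two direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def correction_candidate(estimate, objects, max_angle=5.0):
    """Return (object, angular offset) for the closest object within the
    specified angular range of the estimation vector, else None."""
    in_range = [(o, angle_between(estimate, o)) for o in objects]
    in_range = [p for p in in_range if p[1] <= max_angle]
    return min(in_range, key=lambda p: p[1]) if in_range else None

est = (0.0, 0.0, 1.0)
objs = [(0.05, 0.0, 1.0), (1.0, 0.0, 0.0)]  # one near the gaze ray, one far off
cand = correction_candidate(est, objs)
```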
Systems And Methods For Optical Evaluation Of Pupillary Psychosensory Responses
The present disclosure is directed to systems and methods for measuring and analyzing pupillary psychosensory responses. An electronic device is configured to receive video data with at least two frames. The electronic device then locates one or more eye objects in the video data and determines pupil and iris sizes of the one or more eye objects. The electronic device determines the pupillary psychosensory responses of the one or more eye objects by tracking a ratio of pupil diameter to iris diameter throughout the video. Several metrics for the pupillary psychosensory responses can be determined (e.g., velocity of change of the ratio, peak-to-peak amplitude of the change in ratio over time, etc.). These metrics can be used as measures of an individual's cognitive ability and mental health in a single session or tracked throughout multiple sessions.
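The ratio-based metrics can be sketched as below. Tracking the pupil-to-iris ratio rather than raw pupil diameter makes the measure insensitive to camera distance, since both diameters scale together. The per-frame measurements and the 30 fps rate here are illustrative assumptions:

```python
# Sketch of the pupil/iris ratio metrics; frame measurements are illustrative.

def ratios(frames):
    """frames: list of (pupil_diameter, iris_diameter) pairs, one per video frame."""
    return [p / i for p, i in frames]

def peak_to_peak(rs):
    """Peak-to-peak amplitude of the ratio over the video."""
    return max(rs) - min(rs)

def max_velocity(rs, fps=30.0):
    """Largest per-second change of the ratio between consecutive frames."""
    return max(abs(b - a) * fps for a, b in zip(rs, rs[1:]))

frames = [(3.0, 12.0), (3.6, 12.0), (4.2, 12.0), (3.9, 12.0)]
rs = ratios(frames)   # approximately [0.25, 0.30, 0.35, 0.325]
```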
BIOMETRIC GALLERY MANAGEMENT USING WIRELESS IDENTIFIERS
Biometric gallery management is performed by associating one or more wireless identifiers that correspond to one or more mobile devices (such as smartphones, tablet computing devices, cellular telephones, wearable devices, smart watches, fitness monitors, digital media players, medical devices, and/or other mobile computing devices) that people carry with digital representations of biometrics corresponding to those people. Wireless identifiers corresponding to mobile devices proximate to a biometric reader device may be monitored. Upon detection of wireless identifiers corresponding to mobile devices proximate to the biometric reader device, the associated digital representations of biometrics may be loaded from a main gallery into one or more local galleries, which may then be used to perform one or more biometric identifications and/or verifications.
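The preloading step can be sketched as follows: on detecting wireless identifiers near the reader, the associated biometric templates are copied from the main gallery into a small local gallery, shrinking the search space for the subsequent identification. The identifier format, gallery layout, and template vectors are all illustrative assumptions:

```python
# Hypothetical sketch of main-to-local gallery preloading keyed on detected
# wireless identifiers (e.g. Bluetooth addresses); data is illustrative.

main_gallery = {
    "aa:11": {"name": "alice", "template": [0.1, 0.9]},
    "bb:22": {"name": "bob", "template": [0.8, 0.2]},
}

def on_identifiers_detected(detected_ids, main_gallery):
    """Build the local gallery from identifiers detected near the reader;
    unknown identifiers are simply ignored."""
    return {i: main_gallery[i] for i in detected_ids if i in main_gallery}

local = on_identifiers_detected({"aa:11", "cc:33"}, main_gallery)
# biometric identification would now search `local` instead of `main_gallery`
```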
HOLOGRAPHIC DISPLAY SYSTEM
A display system for a vehicle includes a display unit mounted to the vehicle that is selectively operable in a first mode as a holographic display and in a second mode as a mirror. Holographic images may include rear-view images obtained from a camera or computer-generated graphics. Holographic images are displayed at a virtual image plane behind the display to reduce the accommodation demand on the operator's eyes.
Determining Features based on Gestures and Scale
A system, method, and computer-readable medium for associating a person's gestures with specific features of objects is disclosed. Using one or more image capture devices, a person's gestures and the location of that person in an environment are determined. Using determined distances between the person and objects in the environment and scales associated with features of those objects, the list of specific features in the person's field-of-view may be determined. Further, a facial expression of the person may be scored and that score associated with one or more specific features.
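The distance-and-scale filtering can be sketched as below: each feature of an object carries a scale, modeled here as a maximum distance at which the feature remains discernible, and only features within scale for the person's measured distance are candidates for association with the gesture or expression score. The feature list and distance thresholds are illustrative assumptions:

```python
# Illustrative sketch: feature "scale" modeled as a maximum viewing distance;
# the example object and thresholds are assumptions, not from the disclosure.

car_features = [
    {"name": "body_color", "max_distance_m": 50.0},
    {"name": "wheel_trim", "max_distance_m": 10.0},
    {"name": "stitching",  "max_distance_m": 2.0},
]

def features_in_view(features, distance_m):
    """Features whose scale admits the person's measured distance to the object."""
    return [f["name"] for f in features if distance_m <= f["max_distance_m"]]

visible = features_in_view(car_features, 8.0)   # body_color and wheel_trim
# an expression score computed at this moment would be associated with `visible`
```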