Patent classifications
G06V40/18
ELECTRONIC APPARATUS, METHOD FOR CONTROLLING ELECTRONIC APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An electronic apparatus includes: control means for performing control to record, in recording means, characteristic data on a line-of-sight; authentication means for authenticating a user; and detection means for detecting, in a case where characteristic data on a line-of-sight of the authenticated user is recorded in the recording means, a line-of-sight of the user by using the recorded characteristic data on the line-of-sight.
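The recording/lookup flow described above can be sketched roughly as follows. This is an illustrative assumption, not the patent's implementation; the class and function names (`CalibrationStore`, `detect_gaze`) and the offset-based calibration model are invented for the sketch.

```python
# Hypothetical sketch: per-user line-of-sight characteristic data,
# recorded once and reused when that user is authenticated again.

class CalibrationStore:
    """Records line-of-sight characteristic data keyed by user ID."""
    def __init__(self):
        self._data = {}

    def record(self, user_id, characteristics):
        self._data[user_id] = characteristics

    def lookup(self, user_id):
        return self._data.get(user_id)

def detect_gaze(raw_sample, user_id, store, default=(0.0, 0.0)):
    """Apply the authenticated user's recorded offsets if available,
    otherwise fall back to a default (uncalibrated) detection."""
    cal = store.lookup(user_id)
    dx, dy = cal if cal is not None else default
    x, y = raw_sample
    return (x - dx, y - dy)

store = CalibrationStore()
store.record("alice", (0.5, -0.2))
calibrated = detect_gaze((10.0, 5.0), "alice", store)    # uses alice's data
uncalibrated = detect_gaze((10.0, 5.0), "bob", store)    # no data recorded
```

The point of the pattern is that calibration is performed once per user rather than at every session.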
COCKPIT DISPLAY AMBIENT LIGHTING INFORMATION FOR IMPROVING GAZE ESTIMATION
A computer-implemented method is described. The method is implemented by processors of an aircraft system. The method includes receiving images of an eye and a lighting configuration associated with a cockpit of an aircraft. The method further includes detecting a position of the eye within each of the images. The method further includes compensating for a pupillary light response of the eye based on the position of the eye within the image and the lighting configuration. By compensating for the pupillary light response, a fatigue level of the aircraft's operator is estimated with reduced noise.
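One way the compensation step could work is to predict the pupil diameter expected from the cockpit lighting alone and subtract that prediction from the measurement, leaving a residual for fatigue estimation. A minimal sketch, assuming an invented log-luminance pupil model and inverse-square light falloff (the constants, function names, and models are all assumptions, not taken from the patent):

```python
import math

def predicted_pupil_mm(luminance_cd_m2):
    """Crude luminance-to-pupil model: diameter shrinks as log-luminance
    rises, clamped to a physiological minimum. Constants are illustrative."""
    return max(2.0, 8.0 - 1.5 * math.log10(max(luminance_cd_m2, 0.01)))

def luminance_at(eye_pos, lighting_config):
    """Sum inverse-square contributions of cockpit light sources at the
    detected eye position; lighting_config is a list of (x, y, intensity)."""
    lum = 0.0
    for (lx, ly, intensity) in lighting_config:
        d2 = (eye_pos[0] - lx) ** 2 + (eye_pos[1] - ly) ** 2 + 1.0
        lum += intensity / d2
    return lum

def light_compensated_residual(measured_pupil_mm, eye_pos, lighting_config):
    """Residual pupil size after removing the predicted light response;
    this residual is the lower-noise input to fatigue estimation."""
    return measured_pupil_mm - predicted_pupil_mm(
        luminance_at(eye_pos, lighting_config))
```

The residual is near zero when the pupil tracks the lighting as predicted, so sustained deviations can be attributed to factors other than illumination.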
SYSTEM AND METHOD FOR ASSESSING OPERATOR SITUATIONAL AWARENESS VIA CONTEXT-AWARE GAZE DETECTION
A system and method for continuous real-time assessment of the situational awareness of an aircraft operator incorporates gaze sensors to determine the current gaze target (or sequence of gaze targets) of the operator, e.g., which interfaces the operator is looking at. The system receives operational context from aircraft systems indicative of current events and conditions both internal and external to the aircraft (e.g., operational status, mission or flight plan objectives, weather conditions). Based on the determined gaze targets and coterminous operational context, the system evaluates the situational awareness of the operator relative to the operational context, e.g., whether the operator is perceiving the operational context, comprehending the operational context and its implications, and projecting that perception and comprehension into responsive action and second-order ramifications, according to task models indicative of expected behavior.
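A toy sketch of the evaluation step: compare observed gaze targets against a task model listing the interfaces an operator is expected to monitor in a given operational context. The task-model table, the interface names, and the scoring rule are illustrative assumptions only.

```python
# Hypothetical task model: operational context -> interfaces the
# operator is expected to monitor in that context.
TASK_MODEL = {
    "approach": {"PFD", "ND", "airspeed"},
    "cruise":   {"ND", "engine_display"},
}

def awareness_score(context, recent_gaze_targets):
    """Fraction of the expected interfaces the operator actually looked
    at during the assessment window (1.0 = all expected targets seen)."""
    expected = TASK_MODEL.get(context, set())
    if not expected:
        return 1.0
    return len(expected & set(recent_gaze_targets)) / len(expected)

score = awareness_score("approach", ["PFD", "ND", "FMS"])  # 2 of 3 expected
```

A production system would weight targets by dwell time and recency rather than using a bare set intersection; this only shows the shape of the comparison.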
METHOD OF DETECTING, SEGMENTING AND EXTRACTING SALIENT REGIONS IN DOCUMENTS USING ATTENTION TRACKING SENSORS
A method and system for detecting, segmenting, and extracting salient regions in documents by using attention tracking sensors is provided. The method includes: receiving an image that corresponds to a document; receiving, from a sensor, a sequence of measurements that correspond to a human reading of the document; determining, based on the sequence of measurements, at least one region of the document as being a salient document region; demarcating the salient document region in an electronically displayable manner; and outputting a file that includes a displayable version of the document with the demarcated document region. The salient document region may include a title, a section header, and/or a table. The sensor may be an eye-tracking sensor that detects a sequence of eye-gaze positions on the document as a function of time.
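The "determining at least one region as salient" step might be approximated by binning the eye-gaze positions into a grid and keeping cells where the reader's fixations cluster. The grid cell size and dwell threshold below are assumptions invented for the sketch.

```python
from collections import Counter

def salient_regions(gaze_samples, cell=50, min_dwell=3):
    """gaze_samples: (x, y) positions from an eye-tracking sensor.
    Returns the grid cells whose fixation count meets the dwell
    threshold; these cells approximate salient document regions."""
    counts = Counter((x // cell, y // cell) for x, y in gaze_samples)
    return {c for c, n in counts.items() if n >= min_dwell}

# Three fixations near the top-left corner, one stray sample elsewhere.
regions = salient_regions([(10, 12), (14, 11), (9, 13), (400, 400)])
```

Cells found this way could then be demarcated (e.g., highlighted) in the displayable output file, as the abstract describes.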
Temporal Approximation Of Trilinear Filtering
In one embodiment, a method includes receiving instructions to render a snapshot of a scene for a video, where the snapshot is to be displayed using a sequence of N frames, computing a mipmap-level determining factor for a texture appearing in the scene based on a scale of the texture on a pixel grid, selecting a mipmap level of the texture for each of the N frames based on the mipmap-level determining factor, where the mipmap levels selected for the N frames are non-uniform and temporally approximate the mipmap-level determining factor, rendering each of the N frames by sampling the mipmap level of the texture selected for that frame, and displaying the rendered N frames sequentially to represent the snapshot of the scene.
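The core idea above is that a fractional mipmap level (the "mipmap-level determining factor") can be approximated over time: instead of blending two mip levels within one frame as trilinear filtering does, each of the N frames samples a single integer mip level, chosen so the levels average out to the fractional value across the sequence. A minimal sketch, where the rounding scheme is an assumption:

```python
def mip_levels_for_frames(lod, n_frames):
    """Pick one integer mip level per frame such that the mean level over
    the N frames approximates the fractional LOD `lod`.
    E.g. lod=1.25, n_frames=4 -> one frame at level 2, three at level 1."""
    base = int(lod)
    frac = lod - base
    hi_frames = round(frac * n_frames)  # frames rendered at level base+1
    return [base + 1 if i < hi_frames else base for i in range(n_frames)]

levels = mip_levels_for_frames(1.25, 4)  # non-uniform across the frames
```

Each frame then needs only a single (bilinear) texture fetch; the eye's temporal integration over the N displayed frames does the blending that trilinear filtering would otherwise do per pixel.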
Devices, systems and methods for predicting gaze-related parameters using a neural network
A method for creating and updating a database is disclosed. In one example, the method includes presenting a first stimulus to a first user wearing a head-wearable device and using a first camera of the head-wearable device to generate a first left image of at least a portion of the left eye of the first user. When the first user is expected to respond, or to have responded, to the first stimulus, a second camera of the head-wearable device is used to generate a first right image of at least a portion of the right eye of the first user. A data connection is established between the head-wearable device and the database. A first dataset is generated comprising the first left image, the first right image, and a first representation of a gaze-related parameter, the first representation being correlated with the first stimulus, and the first dataset is added to a device database.
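The dataset assembly described above can be pictured as pairing the two eye images with the stimulus-correlated gaze parameter in one record. The field names and record layout below are illustrative assumptions, not the patent's schema.

```python
def make_dataset_entry(left_image, right_image, gaze_parameter, stimulus_id):
    """Pair the left/right eye images with the representation of a
    gaze-related parameter correlated with the eliciting stimulus."""
    return {
        "left_image": left_image,
        "right_image": right_image,
        "gaze_parameter": gaze_parameter,
        "stimulus": stimulus_id,
    }

device_db = []  # stand-in for the device database reached over the connection
device_db.append(make_dataset_entry(b"left-eye-bytes", b"right-eye-bytes",
                                    (0.1, -0.3), "stimulus_01"))
```

Records of this shape, accumulated over many users and stimuli, form the training corpus for the gaze-predicting neural network.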
Systems and methods for providing real-time surveillance in automobiles
Techniques for providing real-time vehicle surveillance are disclosed. An in-vehicle surveillance device continuously captures images from the surroundings of a vehicle and the interior of the vehicle and transmits them to a surveillance management system. The images are processed in real-time using machine learning modules to determine primary, secondary, and adverse events. Upon determining the events, alerts are generated and sent to a display unit provided on the in-vehicle surveillance device to improve the safety of the passengers. The techniques further allow vehicle-to-vehicle communication and vehicle-to-third-party-device communication upon determining an event.
Automatic image-based skin diagnostics using deep learning
There is shown and described a deep learning based system and method for skin diagnostics as well as testing metrics that show that such a deep learning based system outperforms human experts on the task of apparent skin diagnostics. Also shown and described is a system and method of monitoring a skin treatment regime using a deep learning based system and method for skin diagnostics.
DYNAMIC FIELD OF VIEW SELECTION IN VIDEO
Apparatuses, methods, systems, and program products are disclosed for dynamic field of view selection in video. An apparatus includes a processor and memory that stores code executable by the processor to capture a 360-degree video using a 360-degree camera system, detect a direction that a user is looking within the 360-degree video captured using the 360-degree camera system, and set a field of view for the 360-degree video based on the detected direction that the user is looking.
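Setting the field of view from the detected look direction can be sketched as selecting a yaw window centred on the gaze direction within the 360-degree frame. The window width, wrap-around handling, and names are assumptions made for the sketch.

```python
def field_of_view(gaze_yaw_deg, fov_width_deg=90.0):
    """Return the (start, end) yaw angles of the view window, centred on
    the detected gaze direction and wrapped into [0, 360)."""
    start = (gaze_yaw_deg - fov_width_deg / 2) % 360.0
    end = (gaze_yaw_deg + fov_width_deg / 2) % 360.0
    return start, end

window = field_of_view(180.0)  # looking backwards -> window (135, 225)
```

With wrap-around, a window can straddle the 0/360 seam (e.g., gaze at yaw 0 yields start 315 and end 45), which the renderer must treat as one contiguous region of the equirectangular frame.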
Systems and methods for video delivery based upon saccadic eye motion
A method is provided for displaying immersive video content according to eye movement of a viewer. The method includes the steps of detecting, using an eye tracking device, a field of view of at least one eye of the viewer, transmitting eye tracking coordinates from the detected field of view to an eye tracking processor, identifying a region on a video display corresponding to the transmitted eye tracking coordinates, adapting the immersive video content from a video storage device at a first resolution for a first portion of the immersive video content and a second resolution for a second portion of the immersive video content, the first resolution being higher than the second resolution, displaying the first portion of the immersive video content on the video display within a zone, and displaying the second portion of the immersive video content on the video display outside of the zone.
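The zone test at the heart of this foveated scheme reduces to a distance check against the gaze-centred region. A minimal sketch, where the circular zone shape, its radius, and the function name are assumptions:

```python
def resolution_for_pixel(px, py, gaze_x, gaze_y, zone_radius=100.0):
    """Return 'high' for display positions inside the gaze-centred zone
    (first portion of the content) and 'low' outside it (second portion)."""
    inside = (px - gaze_x) ** 2 + (py - gaze_y) ** 2 <= zone_radius ** 2
    return "high" if inside else "low"

near = resolution_for_pixel(510, 300, gaze_x=500, gaze_y=300)   # in zone
far = resolution_for_pixel(900, 300, gaze_x=500, gaze_y=300)    # outside
```

Because visual acuity falls off sharply outside the fovea, delivering the second portion at lower resolution saves bandwidth with little perceptible loss, which is the motivation for saccade-driven delivery.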