Patent classifications
G06K9/52
WEARABLE AIRCRAFT TOWING COLLISION WARNING DEVICES AND METHODS
The disclosed embodiments describe collision warning devices, controllers, and computer-readable media. A collision warning device for towing vehicles includes a housing, a scanning sensor, a display, and a controller. The housing is configured to be secured to at least one of a tow operator and a tug during aircraft towing operations. The scanning sensor is secured to the housing and is configured to scan an aircraft and to scan an object in an environment surrounding the aircraft. The controller is mounted to the housing and is operably coupled with the scanning sensor and the display. The controller is configured to generate a three-dimensional (3D) model of the aircraft and the environment based on a signal output from the scanning sensor, and to calculate potential collisions between the aircraft and the object based on the 3D model.
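As an illustrative sketch only (not the patented implementation), the collision calculation over a 3D model could reduce to a clearance check between point clouds sampled from the aircraft model and the detected object; the function names and the 2 m threshold are assumptions for illustration:

```python
import numpy as np

def min_clearance(aircraft_pts, obstacle_pts):
    """Minimum pairwise distance between two 3D point clouds (brute force)."""
    diff = aircraft_pts[:, None, :] - obstacle_pts[None, :, :]
    return float(np.sqrt((diff ** 2).sum(axis=2)).min())

def collision_warning(aircraft_pts, obstacle_pts, threshold_m=2.0):
    """Flag a potential collision when clearance drops below the threshold."""
    return min_clearance(aircraft_pts, obstacle_pts) < threshold_m
```

A production device would track predicted trajectories rather than static clearance, but the threshold comparison shown is the core of the warning decision.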
Method for removing a support of an object from volume data
Volume data representing a radiation image of an object are subjected to a coarse filtering process to obtain a first classification into support-type and non-support-type components. Iterated low-threshold filtering steps, with successive extraction and classification of connected components, are then performed to rectify the result of the coarse filtering process. A further filtering step is based on the location of the connected components within the volume.
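A minimal sketch of the connected-component stage, assuming a simple 6-connected labeling and a hypothetical location criterion (components whose centroid lies in the lower fraction of the volume are treated as support); the thresholds and the bottom-fraction rule are illustrative, not the patented classification:

```python
import numpy as np
from collections import deque

def label_components(mask):
    """6-connected component labeling of a 3D boolean volume via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                        and 0 <= nx < mask.shape[2]
                        and mask[nz, ny, nx] and not labels[nz, ny, nx]):
                    labels[nz, ny, nx] = current
                    queue.append((nz, ny, nx))
    return labels, current

def classify_support(volume, threshold, bottom_fraction=0.25):
    """Label components above `threshold`, then flag as support those whose
    centroid lies in the bottom fraction of the volume (assumed criterion)."""
    labels, n = label_components(volume > threshold)
    support = []
    for comp in range(1, n + 1):
        zs = np.nonzero(labels == comp)[0]
        if zs.mean() > (1 - bottom_fraction) * volume.shape[0]:
            support.append(comp)
    return labels, support
```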
Systems and methods of eye tracking calibration
An image of a user's eyes and/or face, captured by a camera on the computing device or on a device coupled to the computing device, may be analyzed using computer-vision algorithms, such as eye tracking and gaze detection algorithms, to determine the location of the user's eyes and estimate the gaze information associated with the user. A user calibration process may be conducted to calculate calibration parameters associated with the user. These calibration parameters may be taken into account to accurately determine the location of the user's eyes and estimate the location on the display at which the user is looking. The calibration process may include determining a plane on which the user's eyes converge and relating that plane to a plane of a screen on which calibration targets are displayed.
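As a hedged sketch of the calibration-parameter step, a per-user mapping from gaze features to screen coordinates can be fit by least squares over the calibration targets; the affine model and function names are assumptions, not the disclosed algorithm:

```python
import numpy as np

def fit_calibration(gaze_features, screen_points):
    """Least-squares affine map from gaze features (N x 2) to screen
    coordinates (N x 2); each calibration target yields one pair."""
    X = np.hstack([gaze_features, np.ones((len(gaze_features), 1))])
    coeffs, *_ = np.linalg.lstsq(X, screen_points, rcond=None)
    return coeffs  # 3 x 2 affine parameters

def map_gaze(coeffs, feature):
    """Apply the calibrated map to a new gaze feature vector."""
    return np.append(feature, 1.0) @ coeffs
```

Real calibrations use richer models (e.g. the eye-convergence plane mentioned above), but the fit-then-apply structure is the same.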
Real-time image enhancement for X-ray imagers
Systems and methods are provided for an X-ray imaging system. An X-ray source is configured to provide X-ray radiation and has an associated focal spot. A detector is configured to generate a digital image representing attenuation of the X-ray radiation as it passes through a subject. An image enhancement component is configured to apply a separable deconvolution kernel, derived from an estimate of the point spread function associated with the focal spot, as a vector function applied first to the rows and subsequently to the columns of the digital image.
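The separability is the key efficiency point: one 1-D kernel pass along rows, then along columns, instead of a full 2-D convolution. Deriving the deconvolution kernel from a PSF estimate is out of scope here; this sketch only illustrates the row-then-column application, with the kernel values assumed:

```python
import numpy as np

def apply_separable_kernel(image, kernel):
    """Convolve a 1-D kernel along each row, then along each column."""
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, rows)
```

For an n x n image and a length-k kernel this costs O(n^2 * k) per pass instead of O(n^2 * k^2) for the equivalent 2-D kernel, which is what makes real-time enhancement feasible.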
Evaluating image values
Images of items are evaluated. A first image of the item, having a view of two or more of its surfaces, is captured at a first time. A measurement of at least one dimension of one or more of the surfaces is computed and stored. A second image of the item, having a view of at least one of the two or more surfaces, is captured at a second time, subsequent to the first time. A measurement of the dimension is then computed and compared to the stored first measurement. The computed measurement is evaluated based on the comparison.
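A minimal sketch of the comparison step, assuming each image yields an edge length in pixels plus a known scale, and a hypothetical tolerance for deciding whether the second measurement is consistent with the stored first one:

```python
def surface_dimension_mm(pixel_length, mm_per_pixel):
    """Convert a measured edge length in pixels to millimetres."""
    return pixel_length * mm_per_pixel

def evaluate_item(first_px, first_scale, second_px, second_scale, tol_mm=2.0):
    """Compare the re-measured dimension against the stored first measurement."""
    first = surface_dimension_mm(first_px, first_scale)
    second = surface_dimension_mm(second_px, second_scale)
    return abs(second - first) <= tol_mm
```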
LIVENESS DETECTION METHOD, LIVENESS DETECTION SYSTEM, AND COMPUTER PROGRAM PRODUCT
The application provides a liveness detection method and a liveness detection system that employs the method. The liveness detection method comprises: irradiating an object to be detected with structured light; obtaining first facial image data of the object under irradiation of the structured light; determining, based on the first facial image data, a detection parameter that indicates the sub-surface scattering intensity of the structured light on the face of the object; and determining, based on the detection parameter and a predetermined parameter threshold, whether the object is a living body.
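The physical intuition is that skin scatters structured light beneath the surface and blurs the projected pattern, while a photograph or screen keeps it sharp. As an assumed stand-in for the disclosed detection parameter, pattern contrast (low contrast implying strong sub-surface scattering) can be thresholded; both the metric and the threshold value here are illustrative:

```python
import numpy as np

def scattering_parameter(image):
    """Assumed proxy: stripe contrast of the captured structured-light image.
    Strong sub-surface scattering blurs the stripes, lowering contrast."""
    return float(image.std() / (image.mean() + 1e-9))

def is_live(image, contrast_threshold=0.3):
    """Declare live when the pattern contrast falls below the threshold."""
    return scattering_parameter(image) < contrast_threshold
```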
Objective assessment method for color image quality based on online manifold learning
An objective assessment method for color image quality based on online manifold learning considers the relationship between visual saliency and objective image quality assessment. Through a visual saliency detection algorithm, saliency maps of a reference image and a distorted image are obtained, from which a maximum fusion saliency map is derived. Based on the maximum saliencies of image blocks in the maximum fusion saliency map, the saliency difference between each reference image block and its corresponding distorted image block is measured through an absolute difference, and visually important reference image blocks and distorted image blocks are thereby screened and extracted. Through the manifold eigenvectors of these visually important reference and distorted image blocks, an objective quality assessment value of the distorted image is calculated. The method achieves improved assessment performance and a higher correlation between the objective assessment results and subjective perception.
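A sketch of the block-screening stage only, assuming square blocks, max fusion of the two saliency maps, and ranking by mean absolute saliency difference; block size, the importance test, and the keep fraction are illustrative choices, and the subsequent manifold-eigenvector step is omitted:

```python
import numpy as np

def block_saliency_screen(ref_sal, dist_sal, block=8, keep_fraction=0.5):
    """Fuse saliency maps by element-wise max, then rank blocks by the
    absolute saliency difference and keep the most different ones."""
    h, w = ref_sal.shape
    fused = np.maximum(ref_sal, dist_sal)
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if fused[y:y+block, x:x+block].max() > 0:  # visually important
                diff = np.abs(ref_sal[y:y+block, x:x+block]
                              - dist_sal[y:y+block, x:x+block]).mean()
                scores.append(((y, x), diff))
    scores.sort(key=lambda t: t[1], reverse=True)
    keep = max(1, int(len(scores) * keep_fraction))
    return [pos for pos, _ in scores[:keep]]
```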
GAZE DETECTION APPARATUS AND GAZE DETECTION METHOD
A gaze detection apparatus includes: an image capturing unit which generates an image representing both eyes of a user; an eye detection unit which, for each of the eyes, detects an eye region containing the eye from the image; a pupil detection unit which, for one of the eyes, detects a pupil region containing a pupil from the eye region; an estimating unit which obtains, based on the pupil region detected for the one eye, an estimated shape of the pupil region on the image for the other one of the eyes; a redetection unit which detects the pupil region by searching for a region having a shape that matches the estimated shape from within the eye region detected for the other eye; and a gaze detection unit which detects the user's gaze direction based on a position of the pupil region detected for each of the eyes.
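As a crude stand-in for the redetection unit's shape matching, the second eye region can be searched for the darkest disk of the radius estimated from the first eye; the disk template and exhaustive scan are assumptions for illustration, not the disclosed matcher:

```python
import numpy as np

def redetect_pupil(eye_region, radius):
    """Scan the eye region for the darkest disk of the estimated radius."""
    h, w = eye_region.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy**2 + xx**2) <= radius**2
    best, best_pos = np.inf, None
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = eye_region[y-radius:y+radius+1, x-radius:x+radius+1]
            score = patch[disk].mean()  # pupils are dark: lower is better
            if score < best:
                best, best_pos = score, (y, x)
    return best_pos
```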
METHOD AND DEVICE FOR DETECTING VIOLENT CONTENTS IN A VIDEO, AND STORAGE MEDIUM
Embodiments of the disclosure provide a method and device for detecting violent content in a video, and a non-transitory computer-readable storage medium. The method includes: determining the average shot length of a scene in the video to be detected, and the average motion intensity of the shots in the scene; extracting feature data for a number of elements in the scene upon determining that the average shot length is below a first preset threshold and/or the average motion intensity is above a second preset threshold; and determining that the video contains violent content upon determining that the feature data of at least one extracted element lie within the range of feature data extracted in advance for that element from a specific scene.
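The two-stage decision can be sketched as a cheap pre-filter on shot statistics followed by a per-element range check; the threshold values and feature names below are hypothetical:

```python
def needs_feature_check(avg_shot_len_s, avg_motion,
                        shot_len_thresh=2.0, motion_thresh=0.6):
    """Pre-filter: short shots OR intense motion trigger feature extraction."""
    return avg_shot_len_s < shot_len_thresh or avg_motion > motion_thresh

def is_violent(scene_features, reference_ranges):
    """Declare violent content when any extracted feature falls inside the
    range learned in advance from known violent scenes."""
    return any(lo <= scene_features[name] <= hi
               for name, (lo, hi) in reference_ranges.items()
               if name in scene_features)
```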
DEVICE AND METHOD FOR BIOMETRICS AUTHENTICATION
A biometrics authentication apparatus and a biometrics authentication method are disclosed. The biometrics authentication apparatus includes: a light source configured to emit a light; a modulator configured to change a spatial distribution of the light that is scattered and reflected from a region of interest of a user; a detector configured to detect an integral power of the light that is scattered from the region of interest; and a processor configured to obtain a measurement signal based on the integral power of the light, compare the measurement signal with a reference signal stored in a memory, and determine whether to authenticate the user based on a degree of match between the measurement signal and the reference signal.
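The final comparison reduces to scoring the measurement signal against the enrolled reference and thresholding the degree of match. A normalized cross-correlation is one plausible match measure; both it and the 0.9 threshold are assumptions for illustration:

```python
import numpy as np

def match_score(measurement, reference):
    """Normalized cross-correlation between measurement and enrolled reference."""
    m = (measurement - measurement.mean()) / (measurement.std() + 1e-12)
    r = (reference - reference.mean()) / (reference.std() + 1e-12)
    return float((m * r).mean())

def authenticate(measurement, reference, threshold=0.9):
    """Accept the user when the degree of match reaches the threshold."""
    return match_score(measurement, reference) >= threshold
```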