Patent classifications
G06V10/446
Filtering methods for visual object detection
Machine logic that pre-processes and post-processes images for visual object detection by performing the following steps: receiving a set of image(s); filtering the set of image(s) using a set of multimodal integral filter(s), thereby removing at least a portion of the set of image(s) and resulting in a filtered set of image(s); performing object detection on the filtered set of image(s) to generate a set of object-detected image(s); assembling a first plurality of object-detected image(s) from the set of object-detected image(s); and upon assembling the first plurality of object-detected image(s), performing non-maximum suppression on the assembled first plurality of object-detected image(s).
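The final step of the claimed pipeline can be sketched in Python. This is a minimal illustration, not the patent's implementation: the claim does not specify the suppression criterion, so the standard greedy IoU-based formulation of non-maximum suppression is assumed here, and the multimodal integral filtering and detection stages are omitted.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def non_max_suppression(detections, iou_threshold=0.5):
    """Greedily keep the highest-scoring box from each cluster of
    overlapping detections assembled from the object-detected images.

    detections: list of ((x1, y1, x2, y2), score) tuples."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, kept_box) < iou_threshold for kept_box, _ in kept):
            kept.append((box, score))
    return kept
```

Given two heavily overlapping boxes and one isolated box, only the higher-scoring box of the overlapping pair survives.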
DEVICE AND METHOD FOR PROCESSING VIDEO DATA TO DETECT LIFE
Device for analysing video data, comprising: a first analyser (6) designed to perform a remote photoplethysmography measurement on video data (25) which are to be analysed and which have been received as input, the analyser comprising a separator (20) designed to determine areas of interest (27) in the video data (25) to be analysed, an aggregator (22) designed to determine a remote photoplethysmography signal from the video data (25) to be analysed relating to each area of interest, and a computer (24) designed to calculate a spectral signal from the photoplethysmography signal and to extract one or more physiological signals (29) therefrom; a tester (8) designed to receive the one or more physiological signals (29) and to return a first human presence value; a second analyser (10) designed to receive the video data to be analysed and to apply a neural network to said data in order to extract a second human presence value therefrom, the neural network being trained using video data similar to the video data to be analysed and sets of characteristics extracted from said video data, obtained by local analysis and/or by machine learning; and a unifier (12) designed to receive the first and second human presence values and to return a unified human presence value.
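The spectral test and the unifier can be sketched as follows. This is a hedged illustration under stated assumptions: the patent does not specify the spectral method or the fusion rule, so a naive DFT scan for the dominant frequency, a human heart-rate band of 0.7-3.0 Hz, and a weighted average as the unifier are all assumptions made here; the neural-network presence value is supplied as a plain number.

```python
import math

def dominant_frequency(signal, fs, fmax=5.0):
    """Naive DFT scan: return the frequency (Hz) with the largest
    spectral power, up to fmax. Stands in for the computer (24)."""
    n = len(signal)
    mean = sum(signal) / n
    best_f, best_p = 0.0, -1.0
    for k in range(1, int(fmax * n / fs) + 1):
        f = k * fs / n
        re = sum((s - mean) * math.cos(2 * math.pi * f * t / fs)
                 for t, s in enumerate(signal))
        im = sum((s - mean) * math.sin(2 * math.pi * f * t / fs)
                 for t, s in enumerate(signal))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return best_f

def presence_from_rppg(signal, fs, band=(0.7, 3.0)):
    """Tester (8): first presence value is 1.0 when the dominant
    frequency falls in the assumed heart-rate band (42-180 bpm)."""
    f = dominant_frequency(signal, fs)
    return 1.0 if band[0] <= f <= band[1] else 0.0

def unify(p_rppg, p_nn, w=0.5):
    """Unifier (12), sketched as a weighted average (an assumption)."""
    return w * p_rppg + (1 - w) * p_nn
```

A synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps yields a first presence value of 1.0, which the unifier blends with the network's score.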
IMPROVED METHOD FOR DETERMINING THE SEX OF A CHICK
The invention relates to a method for determining the sex of a chick, comprising: determining (100) a region of interest in an image of the chick in which the feathers of a wing are visible, and running, on said region of interest, a classification model (400) trained on a training data set comprising images of male chick wings and female chick wings, in order to determine whether the chick is male or female.
Systems and methods to transform events and/or mood associated with playing media into lighting effects
Example systems and methods to transform events and/or mood associated with playing media into lighting effects are disclosed herein. An example apparatus includes a content identifier to identify content presented via a media presentation device based on a fingerprint associated with the content and derive metadata from the identified content. The example apparatus includes a content driven analyzer to determine a light effect to be produced by a light-generating device based on the metadata, generate an instruction for the light-generating device to produce the light effect, and transmit the instruction to the light-generating device during presentation of the content.
Information processing for detection and distance calculation of a specific object in captured images
An information processing device includes a data processing unit that executes analysis of images which are captured from different viewpoints. The data processing unit executes an object distance calculation process in which pattern-irradiated images of a plurality of different viewpoints are applied, and an object detection process in which non-pattern-irradiated images are applied.
System and method to identify a vehicle and generate reservation
A system and method having a number of technological elements, one of which is a controller, which causes improvements to the controller and creates significantly more than the original default controller functionality. The elements collaborate to cause the controller to: operate a camera to record images of visual content; store the recorded images to a memory in digital form as digital images; execute a visual recognition module to identify at least one targeted object within at least one digital image; produce the identification results of the visual recognition module; compare the identification results to the vehicle-reservation information; generate reservation information derived from the comparison of the identification results with the vehicle-reservation information; and operate the display to exhibit the reservation information.
FACIAL DETECTION DEVICE, FACIAL DETECTION SYSTEM PROVIDED WITH SAME, AND FACIAL DETECTION METHOD
In order to eliminate erroneous detection in a case where a plurality of facial regions are detected in a captured image, facial detection device (2) of the present disclosure is a facial detection device that detects a facial region of a person from captured images which are continuous in time series, including a processor (15) that performs facial detection processing of detecting the facial region from the captured images, and error determination processing of calculating a moving direction of each facial region between the captured images that are sequential in time series and, in a case where a plurality of facial regions are detected, determining whether or not each detection as a facial region is correct with respect to the plurality of facial regions whose degree of correlation in moving direction is equal to or larger than a predetermined threshold value.
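The moving-direction correlation test can be sketched as follows. This is an illustrative reading, not the patent's implementation: the abstract does not define the correlation measure, so cosine similarity between per-track unit direction vectors is assumed, and each face track is reduced to two centre positions in consecutive frames.

```python
import math

def direction(prev, cur):
    """Unit moving-direction vector of a face centre between two frames."""
    dx, dy = cur[0] - prev[0], cur[1] - prev[1]
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)

def correlated_pairs(tracks, threshold=0.95):
    """Return index pairs of face tracks whose moving directions agree
    (cosine similarity >= threshold) -- in the spirit of the claim,
    these are candidates for erroneous-detection review."""
    dirs = [direction(prev, cur) for prev, cur in tracks]
    pairs = []
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            cos = dirs[i][0] * dirs[j][0] + dirs[i][1] * dirs[j][1]
            if cos >= threshold:
                pairs.append((i, j))
    return pairs
```

Two faces drifting in the same direction (e.g., a face and its reflection) are flagged as a correlated pair, while a face moving differently is not.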
Facial recognition using fractal features
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for facial recognition using fractal features are disclosed. In one aspect, a method includes the actions of accessing data encoding a facial image, the facial image including a face. The actions further include generating a hierarchical graphical model of the face in the facial image, the hierarchical graphical model including nodes from more than one level, each level approximating the face within a contour. The actions further include applying a bank of filters to the face at a particular node, each filter spanning more than one scale and at least one direction. The actions further include analyzing filter responses from the bank of filters applied at each direction to obtain a similarity measure that consolidates filter responses from filters applied at more than one scale. The actions further include generating a vector representation.
OPTICAL DETECTION APPARATUS AND METHODS
An optical object detection apparatus and associated methods. The apparatus may comprise a lens (e.g., fixed-focal length wide aperture lens) and an image sensor. The fixed focal length of the lens may correspond to a depth of field area in front of the lens. When an object enters the depth of field area (e.g., due to relative motion between the object and the lens), the object representation on the image sensor plane may be in focus. Objects outside the depth of field area may be out of focus. In-focus representations of objects may be characterized by a greater contrast parameter compared to out-of-focus representations. One or more images provided by the detection apparatus may be analyzed in order to determine useful information (e.g., an image contrast parameter) of a given image. Based on the image contrast meeting one or more criteria, a detection indication may be produced.
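The contrast criterion can be sketched as follows. The abstract does not name a specific contrast parameter, so RMS contrast of a grayscale image and a fixed threshold are assumptions for illustration only.

```python
def rms_contrast(image):
    """RMS contrast of a grayscale image given as a list of rows
    of intensities in 0-255."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5

def detection_indication(image, threshold=30.0):
    """Produce a detection indication when the image contrast meets
    the criterion, i.e. an in-focus object is in the depth of field."""
    return rms_contrast(image) >= threshold
```

A sharp, high-contrast patch triggers the indication; a blurred, near-uniform patch does not.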
EFFICIENT PARALLEL ALGORITHM FOR INTEGRAL IMAGE COMPUTATION FOR MANY-CORE CPUS
Techniques are provided herein for generating an integral image of an input image in parallel across the cores of a multi-core processor. The input image is split into a plurality of tiles, each of which is stored in a scratchpad memory associated with a distinct core. At each tile, a partial integral image of the tile is first computed over the tile, using a Single-Pass Algorithm. This is followed by aggregating partial sums belonging to subsets of tiles using a 2D Inclusive Parallel Prefix Algorithm. A summation is finally performed over the aggregated partial sums to generate the integral image over the entire input image.
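The per-tile stage can be sketched with the standard single-pass recurrence, I[y][x] = img[y][x] + I[y-1][x] + I[y][x-1] - I[y-1][x-1]. Only this stage is shown; the 2D Inclusive Parallel Prefix aggregation across tiles and the final summation are omitted from the sketch.

```python
def integral_image(img):
    """Single-pass integral image of a 2D list of numbers:
    I[y][x] holds the sum of img over the rectangle (0,0)..(x,y)."""
    h, w = len(img), len(img[0])
    I = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            I[y][x] = (img[y][x]
                       + (I[y - 1][x] if y else 0)
                       + (I[y][x - 1] if x else 0)
                       - (I[y - 1][x - 1] if y and x else 0))
    return I
```

For the tile [[1, 2], [3, 4]], the bottom-right entry is the total sum, 10; in the parallel scheme each core would run this over its own tile before the cross-tile prefix step.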