Patent classifications
G06V10/143
Method and system for learning spectral features of hyperspectral data using DCNN
The embodiments herein provide a method and system that analyzes pixel vectors by transforming each pixel vector into a two-dimensional spectral shape space and then performing convolution over the graph image thus formed. The disclosed method and system convert the pixel vector into an image and provide a DCNN architecture built to process this 2D visual representation of the pixel vectors, learning spectral features and classifying the pixels. The DCNN thus learns edges, arcs, arc segments and the other shape features of the spectrum. The method disclosed therefore converts a spectral signature into a shape, and this shape is then decomposed using hierarchical features learned at the different convolution layers of the disclosed DCNN.
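The core transformation the abstract describes, rendering a 1-D spectral pixel vector as a 2-D "shape" image that a CNN can then process, can be sketched as follows. This is a minimal illustration assuming NumPy; the function name, image height, and normalization are assumptions, not details from the patent.

```python
import numpy as np

def rasterize_spectrum(pixel_vector, height=64):
    """Render a 1-D spectral pixel vector as a 2-D binary 'shape' image.

    Each band becomes one column; the normalized reflectance selects
    which row is set, so the spectrum's curve appears as a line graph
    whose edges, arcs and arc segments a 2-D CNN could then learn.
    """
    v = np.asarray(pixel_vector, dtype=float)
    # Normalize reflectances to [0, 1] so they map onto image rows.
    v = (v - v.min()) / (v.max() - v.min() + 1e-12)
    rows = np.round(v * (height - 1)).astype(int)
    img = np.zeros((height, len(v)), dtype=np.uint8)
    # Row 0 is the top of the image, so flip the row index.
    img[height - 1 - rows, np.arange(len(v))] = 1
    return img

spectrum = [0.1, 0.3, 0.7, 0.9, 0.6, 0.2]      # toy 6-band pixel vector
image = rasterize_spectrum(spectrum, height=8)
print(image.shape)  # (8, 6): one column per spectral band
```

The resulting image would then be fed to a standard 2-D convolutional stack in place of a 1-D spectral classifier.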
HUMAN FALLING DETECTION EMPLOYING THERMAL SENSOR AND IMAGE SENSOR
There is provided a human falling detection system including an image sensor, a thermal sensor and a microphone. The image sensor captures an image frame that is used to identify a face and a height-width ratio of a human image. The thermal sensor is used as a filter for distinguishing a living body and captures a thermal image that is used to identify a height-width ratio of a human thermal image. The microphone records a time stamp at which an abrupt sound appears.
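A fusion of the three cues in this abstract can be sketched as below. The thresholds, time window, and function names are illustrative assumptions; the patent does not specify them.

```python
# Hypothetical thresholds: a standing person is taller than wide, while
# a fallen person is wider than tall; values here are illustrative.
FALL_RATIO = 0.8      # height/width below this suggests a lying posture
SOUND_WINDOW = 2.0    # max seconds between ratio change and abrupt sound

def detect_fall(image_ratio, thermal_ratio, ratio_time, sound_time):
    """Fuse the three cues described in the abstract.

    image_ratio   -- height/width of the human image (image sensor)
    thermal_ratio -- height/width of the human thermal image, or None;
                     a thermal reading also confirms a living body
    ratio_time    -- time stamp at which the ratios were measured
    sound_time    -- time stamp of an abrupt sound (microphone), or None
    """
    living_body = thermal_ratio is not None   # thermal sensor as filter
    lying_posture = (thermal_ratio is not None and
                     image_ratio < FALL_RATIO and
                     thermal_ratio < FALL_RATIO)
    sound_nearby = (sound_time is not None and
                    abs(sound_time - ratio_time) <= SOUND_WINDOW)
    return living_body and lying_posture and sound_nearby

print(detect_fall(0.5, 0.6, ratio_time=10.0, sound_time=9.2))   # fall cues agree
print(detect_fall(1.8, 1.7, ratio_time=10.0, sound_time=None))  # standing, silent
```

Requiring all three cues to agree is one plausible way to suppress false alarms from any single sensor.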
METHOD AND APPARATUS FOR GAZE DETECTION
A method and apparatus for determining gaze direction information include a light source for forming illuminating light to an eye region of a user, and optical element(s) configured to guide the illuminating light from the light source to the eye region. The illuminating light is dynamically adjustable to generate a dynamic light beam on the eye region, and a sensor is configured to capture reflected light on the eye region and generate reflection eye data. The apparatus can maintain user profile information, adjust the spectral power distribution of the light source based on the user profile information, receive the reflection eye data, and generate the gaze direction information based on the reflection eye data.
VEHICLE OCCUPANT ENGAGEMENT USING THREE-DIMENSIONAL EYE GAZE VECTORS
According to the techniques of this disclosure, a method includes capturing, using a camera system of a vehicle, at least one image of an occupant of the vehicle, determining, based on the at least one image of the occupant, a location of one or more eyes of the occupant within the vehicle, and determining, based on the at least one image of the occupant, an eye gaze vector. The method may also include determining, based on the eye gaze vector, the location of the one or more eyes of the occupant, and a vehicle data file of the vehicle, a region of interest from a plurality of regions of interest of the vehicle at which the occupant is looking, wherein the vehicle data file specifies respective locations of each of the plurality of regions of interest, and selectively performing, based on the region of interest, an action.
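The lookup from eye location plus gaze vector to a region of interest can be sketched with simple ray-plane geometry. The region names, the planar cabin geometry, and the "vehicle data file" format below are illustrative assumptions.

```python
# "Vehicle data file": each region as (name, x_min, x_max, z_min, z_max)
# on the vertical plane y = 0 (e.g. the dashboard plane), in meters.
VEHICLE_DATA = [
    ("instrument_cluster", -0.3, 0.3, 0.6, 0.9),
    ("center_display",      0.4, 0.8, 0.5, 0.8),
]

def region_of_interest(eye_pos, gaze_vec, regions=VEHICLE_DATA):
    """Intersect the gaze ray with the plane y = 0, then look up the region."""
    ex, ey, ez = eye_pos
    gx, gy, gz = gaze_vec
    if gy >= 0:                       # gaze ray never reaches the plane
        return None
    t = -ey / gy                      # ray parameter at the plane
    x, z = ex + t * gx, ez + t * gz   # intersection point on the plane
    for name, x0, x1, z0, z1 in regions:
        if x0 <= x <= x1 and z0 <= z <= z1:
            return name
    return None

# Occupant's eyes ~0.5 m in front of the plane, looking down and forward.
print(region_of_interest((0.0, 0.5, 1.0), (0.0, -1.0, -0.5)))
# -> instrument_cluster
```

An action (e.g. dimming the display the occupant is not watching) would then be keyed on the returned region name.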
Method for assisting with the detection of elements, associated device and platform
The present invention relates to a method for assisting in the detection of elements in an environment, comprising: acquiring a first wide-field image of an environment with a first sensor (14A) having a first field of view and a first resolution; generating a fused image from first narrow-field images of the environment acquired by a second sensor (14B) having a second field of view strictly smaller than the first field of view and a second resolution finer than the first resolution; and, in response to the detection of a difference relating to an element between a second wide-field image acquired by the first sensor (14A) and the first wide-field image, acquiring a second narrow-field image imaging the difference and updating the fused image with the second narrow-field image.
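The update step, detecting a change between two wide-field frames and patching only the changed area of the fused image with a new narrow-field acquisition, can be sketched as below. It assumes NumPy arrays for images and a fixed scale factor between the two sensor grids; the scale, threshold, and `acquire_narrow` callback are assumptions.

```python
import numpy as np

SCALE = 4           # narrow-field resolution is 4x finer (assumption)
DIFF_THRESHOLD = 30 # per-pixel intensity change that counts as a difference

def update_fused(fused, wide_prev, wide_curr, acquire_narrow):
    """Detect changed wide-field pixels and patch the fused image.

    acquire_narrow(y, x) stands in for steering the second sensor toward
    the changed element; it returns a SCALE x SCALE high-res patch.
    """
    diff = np.abs(wide_curr.astype(int) - wide_prev.astype(int))
    for y, x in zip(*np.nonzero(diff > DIFF_THRESHOLD)):
        patch = acquire_narrow(y, x)
        fused[y*SCALE:(y+1)*SCALE, x*SCALE:(x+1)*SCALE] = patch
    return fused

wide_prev = np.zeros((2, 2), dtype=np.uint8)
wide_curr = wide_prev.copy()
wide_curr[0, 1] = 200                        # an element appeared here
fused = np.zeros((8, 8), dtype=np.uint8)
fake_sensor = lambda y, x: np.full((SCALE, SCALE), 255, dtype=np.uint8)
update_fused(fused, wide_prev, wide_curr, fake_sensor)
print(int(fused[0, 4]))   # inside the freshly patched high-res region
# -> 255
```

Only the changed element triggers a narrow-field acquisition, so the high-resolution sensor's narrow field of view is spent where the scene actually changed.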
Photographing Apparatus and Authentication Apparatus
A photographing apparatus includes: an imaging unit which is arranged at a position opposed to a plurality of fingers to be presented and is configured to image the plurality of fingers; and a plurality of light sources which are arranged along an array direction of the plurality of fingers and are configured to irradiate the plurality of fingers with light directed from an outside of an opposing region, in which the imaging unit is opposed to the plurality of fingers, toward an inside of the opposing region.