Patent classifications
G06V10/145
BIOMETRIC SENSOR AND ELECTRONIC DEVICE COMPRISING THE SAME
A sensor for sensing biometric information includes a light emitting unit that emits a first light ray, a light receiving unit that receives a second light ray, where the second light ray includes a portion of the first light ray reflected by a body of a user, and an optical layer placed over the light emitting unit and the light receiving unit. The optical layer has a first surface facing the light emitting unit and the light receiving unit and a second surface opposite the first surface. The optical layer further includes an asymmetrical protrusion structure formed on the first surface or the second surface and including a plurality of asymmetrical protrusion units. The optical layer may further include a symmetrical protrusion structure formed on the first surface or the second surface opposite the asymmetrical protrusion structure and including a plurality of symmetrical protrusion units.
IMAGING APPARATUS AND IMAGING METHOD
An illumination apparatus emits illumination light S1 with an intensity distribution that changes with time. A photodetector measures reflected light from an object. A reconstruction processing unit reconstructs an intermediate image H_i(x, y) of the object for every predetermined number of changes of the intensity distribution, using a correlation calculation between detection intensities based on the output of the photodetector and the intensity distributions of the illumination light. A combining processing unit calculates shift amounts Δx and Δy that align the current reconstructed image G_i(x, y), obtained by combining the previous intermediate images H(x, y), with the current intermediate image H_i(x, y). The current reconstructed image G_i(x, y) is then shifted by Δx and Δy, and the shifted reconstructed image G_i(x−Δx, y−Δy) is combined with the latest intermediate image H_i(x, y) to create a new reconstructed image G_{i+1}(x, y).
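The combining step above (estimate a shift that aligns the running reconstruction with the newest intermediate image, then average them) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the brute-force integer-shift search, the ±2-pixel search window, and the names `estimate_shift` and `combine` are all assumptions.

```python
def estimate_shift(g, h):
    """Brute-force search for the integer shift (dx, dy) that best
    correlates the reconstructed image g with the intermediate image h.
    Images are lists of rows of floats; search window is illustrative."""
    rows, cols = len(g), len(g[0])
    best, best_score = (0, 0), float("-inf")
    for dx in range(-2, 3):
        for dy in range(-2, 3):
            score = 0.0
            for y in range(rows):
                for x in range(cols):
                    xs, ys = x - dx, y - dy
                    if 0 <= xs < cols and 0 <= ys < rows:
                        score += g[ys][xs] * h[y][x]
            if score > best_score:
                best_score, best = score, (dx, dy)
    return best

def combine(g, h):
    """Shift g by the estimated (dx, dy), then average it with h to
    form the next reconstructed image; unmatched border pixels keep h."""
    dx, dy = estimate_shift(g, h)
    rows, cols = len(g), len(g[0])
    out = [[h[y][x] for x in range(cols)] for y in range(rows)]
    for y in range(rows):
        for x in range(cols):
            xs, ys = x - dx, y - dy
            if 0 <= xs < cols and 0 <= ys < rows:
                out[y][x] = 0.5 * (g[ys][xs] + h[y][x])
    return out
```

In a real ghost-imaging pipeline the shift would typically be found with an FFT-based cross-correlation rather than this O(n²·shifts) loop.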
Interferometric structured illumination for depth determination
A depth camera assembly (DCA) has a light source assembly, a mask, a camera assembly, and a controller. The light source assembly includes at least one light source. The mask is configured to generate an interference pattern that is projected into a target area. The mask has two openings configured to pass through light emitted by the at least one light source, and the light passed through the two openings forms an interference pattern across the target area. The interference pattern has a phase based on a position of the light source. The camera assembly is configured to capture images of a portion of the target area that includes the interference pattern. The controller is configured to determine depth information for the portion of the target area based on the captured images.
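The two-opening mask described above produces a classic double-slit fringe pattern whose phase shifts with the light-source position. A small-angle sketch of that intensity profile, with illustrative parameter names (`slit_separation`, `distance`, `phase` are assumptions, not the patent's terms):

```python
import math

def fringe_intensity(x, wavelength, slit_separation, distance, phase):
    """Normalized two-slit interference intensity at lateral position x
    on a screen at `distance` from the mask, in the small-angle
    approximation. `phase` is the extra phase contributed by the
    light-source position; peak intensity is normalized to 1."""
    delta = 2 * math.pi * slit_separation * x / (wavelength * distance)
    return math.cos((delta + phase) / 2) ** 2
```

Shifting `phase` by π turns a bright fringe at x = 0 into a dark one, which is how moving the source sweeps the pattern across the target area.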
Image pickup device
A device includes an illumination device that casts light of a prescribed polarization direction on a scattering body, a camera that picks up images of the scattering body at a plurality of different polarization angles, and a processor that executes a process of generating and outputting an inner layer image, of an inner layer of an inside of the scattering body, in response to a depth from a surface of the scattering body on the basis of the images of the scattering body picked up at the plurality of different polarization angles.
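A common way to realize the depth-dependent layer separation described above is polarization gating: surface reflections preserve the illumination polarization, while light scattered from deeper layers is depolarized and splits evenly between the co- and cross-polarized captures. The sketch below uses that standard heuristic, which is an assumption here rather than the patent's exact process.

```python
def separate_layers(parallel, cross):
    """Polarization-gating split of two captures (lists of rows of
    numbers) taken at co- and cross-polarized angles. Depolarized light
    from deeper layers contributes equally to both channels, so
    inner ~= 2 * cross and surface ~= parallel - cross."""
    surface = [[p - c for p, c in zip(pr, cr)]
               for pr, cr in zip(parallel, cross)]
    inner = [[2 * c for c in cr] for cr in cross]
    return surface, inner
```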
ILLUMINATION DEVICE AND ELECTRONIC APPARATUS INCLUDING THE SAME
Provided are an illumination device and an electronic apparatus. The illumination device includes a light source configured to emit light, a surface light source layer configured to convert the light emitted from the light source to surface light, a focusing lens configured to focus the surface light from the surface light source layer, and a display panel including an aperture through which light focused by the focusing lens passes.
VASCULAR PATTERN DETECTION SYSTEMS
In the examples provided herein, a vascular pattern recognition system integrated onto a portable card includes a vascular pattern detection system to obtain image data of blood vessels of a finger to be swiped across a detection area on the portable card, wherein the vascular pattern detection system includes a near infrared light source and an image sensor array. The vascular pattern recognition system also includes an image processor to process the image data to generate a scanned vascular pattern and compare the scanned vascular pattern to a pre-stored pattern stored on the portable card to authenticate the image data, and a security processor to generate a transaction code to authorize a transaction upon authentication of the image data.
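The on-card comparison of a scanned vascular pattern against the pre-stored one can be sketched as a pixelwise agreement score over binarized vessel maps. The metric, the 0.9 threshold, and the names `match_score` and `authenticate` are illustrative assumptions; a production matcher would use alignment-tolerant features rather than raw pixel agreement.

```python
def match_score(scanned, stored):
    """Fraction of pixels that agree between two equally sized binary
    vessel maps (1 = vessel, 0 = background), given as lists of rows."""
    total = agree = 0
    for row_s, row_t in zip(scanned, stored):
        for a, b in zip(row_s, row_t):
            total += 1
            agree += (a == b)
    return agree / total

def authenticate(scanned, stored, threshold=0.9):
    """Accept the scan only if it agrees with the stored template on at
    least `threshold` of the pixels (threshold is illustrative)."""
    return match_score(scanned, stored) >= threshold
```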
Coordination of multiple structured light-based 3D image detectors
Technologies are generally described for coordination of structured light-based image detectors. In some examples, one or more structured light sources may be configured to project sets of points onto a scene. The sets of points may be arranged into disjoint sets of geometrical shapes such as lines, where each geometrical shape includes a subset of the points projected by an illumination source. A relative position and/or a color of the points in each geometrical shape may encode an identification code with which each illumination source may be identified. Thus, even when the point clouds projected by the illumination sources overlap, the geometrical shapes may still be detected, and thereby a corresponding illumination source may be identified. A depth map may then be estimated based on stereovision principles or depth-from-focus principles by one or more image detectors.
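One simple instance of the encoding described above: the points of a detected line carry per-point color bits, and reading those bits in spatial order along the line yields the projector's identification code. The bit-per-point scheme and the name `decode_source_id` are assumptions for illustration, not the patent's specified encoding.

```python
def decode_source_id(line_points):
    """line_points: list of (x, y, color_bit) tuples belonging to one
    detected line. Sort the points along the line (lexicographically by
    position) and read the color bits as a binary ID, MSB first."""
    bits = [bit for _, _, bit in sorted(line_points)]
    ident = 0
    for b in bits:
        ident = (ident << 1) | b
    return ident
```

Because the decoder only needs the points of one shape, overlapping point clouds from other projectors do not corrupt the code once the shapes are segmented.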
Systems and methods for detecting light signatures and performing actions in response thereto
Systems and methods are provided for performing actions based on light signatures. An exemplary system includes a light source, a light detector, a non-transitory memory storing a plurality of light signatures, and a hardware processor. The hardware processor executes an executable code to illuminate, using the light source, a target object with a first light, collect, using the light detector, a second light being a reflection of the first light by the target object, match the second light with one of the plurality of light signatures, and perform an action in response to matching the second light with the one of the plurality of light signatures.
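The signature-matching step can be sketched as a nearest-neighbor search over stored reflection profiles, accepting a match only within a tolerance. The sampled-spectrum representation, the mean-absolute-difference metric, and the `tolerance` value are all illustrative assumptions, not details from the patent.

```python
def match_signature(measured, signatures, tolerance=0.1):
    """Return the key of the stored signature closest (in mean absolute
    difference) to the measured reflection samples, or None if no
    signature is within `tolerance`. `signatures` maps names to lists
    of samples the same length as `measured`."""
    best_name, best_dist = None, float("inf")
    for name, ref in signatures.items():
        dist = sum(abs(m - r) for m, r in zip(measured, ref)) / len(ref)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= tolerance else None
```

A dispatcher would then look up the action keyed by the returned signature name and refuse to act on `None`.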