G06V10/145

METHOD FOR BIOMETRIC PROCESSING OF IMAGES

The invention relates to a method for biometric processing of images of a part of the human body comprising at least one finger, said method being performed by a sensor. An emissive screen (1) displays at least one first display pattern to light said part of the human body, and an imager (2) acquires at least one first image of said part of the human body. From this first image, an automated data-processing system determines at least one second display pattern different from the first display pattern. The emissive screen (1) then displays at least the second pattern to light said part of the human body, and the imager (2) acquires at least one second image thereof. A biometric processing is performed on a final image constructed from at least the second image.
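
The adaptive second-pattern step can be illustrated with a minimal NumPy sketch. The brightness-feedback rule and the `target` and `gain` parameters are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def second_pattern(first_image, first_pattern, target=0.5, gain=0.8):
    """Derive a second display pattern from the first image: brighten
    the pattern where the first image came out dark, dim it where the
    image saturated (hypothetical feedback rule). Intensities are
    normalized to [0, 1]."""
    correction = gain * (target - first_image)
    return np.clip(first_pattern + correction, 0.0, 1.0)

# First acquisition: top of the finger overexposed, bottom underexposed
first_pattern = np.full((2, 2), 0.5)
first_image = np.array([[0.9, 0.9],
                        [0.1, 0.1]])
pattern2 = second_pattern(first_image, first_pattern)
# pattern2 differs from first_pattern, as the method requires
```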

SYSTEMS, METHODS, AND MEDIA FOR DIRECTLY RECOVERING PLANAR SURFACES IN A SCENE USING STRUCTURED LIGHT

In accordance with some embodiments, systems, methods, and media for directly recovering planar surfaces in a scene using structured light are provided. In some embodiments, a system comprises: a light source; an image sensor; and a processor programmed to: cause the light source to emit a pattern comprising a pattern feature with two line segments that intersect on a first epipolar line; cause the image sensor to capture an image including the pattern; identify an image feature in the image, the image feature comprising two line segments that intersect at a point in the image corresponding to the first epipolar line; estimate a plane hypothesis associated with the pattern feature based on properties of the pattern feature and properties of the image feature, the plane hypothesis associated with a set of parameters characterizing a plane; and identify a planar surface in the scene based on the plane hypothesis.
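
Under a rectified camera-projector assumption, the geometry behind such a plane hypothesis can be sketched as follows: the disparity along the epipolar line fixes the depth of the feature intersection, and two 3-D segment directions (assumed already back-projected) fix the plane normal. The simplified pinhole model and all parameter names are illustrative, not the patent's actual formulation:

```python
import numpy as np

def plane_hypothesis(f, baseline, x_proj, x_cam, y_cam, dir1, dir2):
    """Estimate plane parameters (n, d) with n.X + d = 0 from one
    pattern/image feature pair in a rectified camera-projector setup.
    f: focal length in pixels; baseline: camera-projector distance."""
    disparity = x_proj - x_cam                  # along the epipolar line
    z = f * baseline / disparity                # depth at the intersection
    point = np.array([x_cam * z / f, y_cam * z / f, z])
    normal = np.cross(dir1, dir2)               # plane spanned by segments
    normal = normal / np.linalg.norm(normal)
    return normal, -normal.dot(point)

# Feature intersection seen at projector column 500, camera column 400
n, d = plane_hypothesis(f=1000.0, baseline=0.1,
                        x_proj=500.0, x_cam=400.0, y_cam=0.0,
                        dir1=np.array([1.0, 0.0, 0.0]),
                        dir2=np.array([0.0, 1.0, 0.0]))
```

With a disparity of 100 pixels, the depth works out to f·baseline/disparity = 1 m, and the two axis-aligned segment directions yield a fronto-parallel plane.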

Machine vision verification
09727781 · 2017-08-08

Embodiments of systems and methods for directly reading and verifying characters on a personalized document are provided. A surface of the personalized document is illuminated independently or simultaneously by first and second light sources. The first light source has a ring shape, surrounds the personalized document through 360 degrees, and transmits a grazing incident light to illuminate the surface of the personalized document. Incident light from the second light source is reflected by a beam splitter and illuminates the surface of the personalized document in a direction generally perpendicular to the surface. Light reflected from the surface of the personalized document in a direction generally perpendicular to the surface thereof is collected by a camera.

CAPTURING APPARATUS
20220036057 · 2022-02-03

A capturing apparatus includes a capturing unit which captures an image of an object, and a mirror which is installed within the angle of view of the capturing unit so as to appear as a part of the image captured by the capturing unit. The mirror reflects light coming from a part of the object that lies outside the angle of view of the capturing unit, so that the light enters the capturing unit.

Under-screen fingerprint identification apparatus and electronic device

Embodiments of the present application disclose an under-screen fingerprint identification apparatus, which includes: a micro-lens array configured to be disposed under a backlight module of a liquid crystal display screen; at least one light-shielding layer disposed under the micro-lens array, wherein the light-shielding layer is provided with a plurality of light-transmission holes; and a photo-detecting array disposed under the light-shielding layer. The micro-lens array is configured to converge an optical signal with a specific direction passing through the backlight module onto the plurality of light-transmission holes, and to direct an optical signal with a non-specific direction passing through the backlight module onto a light-shielding region of the light-shielding layer, wherein the optical signal with the specific direction is transmitted to the photo-detecting array through the plurality of light-transmission holes.

Method for augmenting a scene in real space with projected visual content

One variation of the method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of the corresponding setup frames onto the scene by the projector, and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image, depicting the scene in the field of view of the camera, based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto the surfaces in the scene corresponding to these discrete regions.
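
One common way to build such a pixel correspondence map is to project a Gray-code sequence as the setup frames and decode a projector column index at every camera pixel. The sketch below assumes binary Gray-code frames and a fixed threshold; practical systems typically also capture inverse patterns and white/black reference frames:

```python
import numpy as np

def decode_columns(captures, thresh=0.5):
    """Decode a per-pixel projector column index from a stack of
    Gray-code capture images (most significant bit first)."""
    gray = np.zeros(captures[0].shape, dtype=np.uint32)
    for img in captures:
        gray = (gray << 1) | (img > thresh).astype(np.uint32)
    binary = gray.copy()                 # Gray-to-binary conversion
    mask = gray >> 1
    while mask.any():
        binary ^= mask
        mask >>= 1
    return binary

# 4 projector columns -> 2 Gray-code frames; identity correspondence
frames = [np.array([0.0, 0.0, 1.0, 1.0]),   # MSB: columns 2 and 3 lit
          np.array([0.0, 1.0, 1.0, 0.0])]   # LSB of the Gray code
cols = decode_columns(frames)
```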

Object recognition apparatus and operation method thereof

An object recognition apparatus includes a first spectrometer configured to obtain a first type of spectrum data from light scattered, emitted, or reflected from an object; a second spectrometer configured to obtain a second type of spectrum data from the light scattered, emitted, or reflected from the object, the second type of spectrum data being different from the first type; an image sensor configured to obtain image data of the object; and a processor configured to identify the object using data obtained from at least two of the first spectrometer, the second spectrometer, and the image sensor, and using at least two pattern recognition algorithms.
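
A minimal sketch of fusing data from multiple sources and requiring two pattern-recognition algorithms to agree, with nearest-reference matching under L1 and L2 metrics standing in for the two algorithms (the agreement rule, reference labels, and feature layout are all assumptions):

```python
import numpy as np

def nearest(fused, refs, order):
    """Nearest-reference classification under an L1 or L2 metric
    (stand-in for one pattern recognition algorithm)."""
    dists = {name: np.linalg.norm(fused - r, ord=order)
             for name, r in refs.items()}
    return min(dists, key=dists.get)

def identify(spec1, spec2, image_feats, refs):
    """Fuse data from the two spectrometers and the image sensor,
    then require both algorithms to agree before reporting a label."""
    fused = np.concatenate([spec1, spec2, image_feats])
    a = nearest(fused, refs, order=2)
    b = nearest(fused, refs, order=1)
    return a if a == b else None

refs = {"apple":  np.array([1.0, 0.0, 1.0, 0.0]),
        "orange": np.array([0.0, 1.0, 0.0, 1.0])}
label = identify(np.array([0.9]), np.array([0.1]),
                 np.array([1.1, 0.0]), refs)
```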

3D silhouette sensing system

A 3D silhouette sensing system is described that comprises a stereo camera and a light source. In an embodiment, a 3D sensing module triggers the capture of pairs of images by the stereo camera at the same time that the light source illuminates the scene. A series of image pairs may be captured at a predefined frame rate. Each pair of images is then analyzed to track both a retroreflector in the scene, which can be moved relative to the stereo camera, and an object which is between the retroreflector and the stereo camera and therefore partially occludes the retroreflector. In processing the image pairs, silhouettes are extracted for the retroreflector and the object, and these are used to generate a 3D contour for each.
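
The silhouette-extraction step can be sketched per scan line: the retroreflector returns most of the light and appears bright, while an occluding object shows up as dark pixels inside the bright span. The threshold value and the span-based rule are illustrative assumptions:

```python
import numpy as np

def row_silhouettes(row, thresh=200):
    """Per scan line, split pixels into the retroreflector silhouette
    (bright) and the occluding-object silhouette (dark pixels lying
    inside the retroreflector's bright span)."""
    bright = row >= thresh
    obj = np.zeros_like(bright)
    idx = np.flatnonzero(bright)
    if idx.size:
        span = slice(idx[0], idx[-1] + 1)
        obj[span] = ~bright[span]        # dark holes inside the span
    return bright, obj

row = np.array([5, 250, 250, 12, 250, 5])   # occluder at index 3
retro, obj = row_silhouettes(row)
```

Repeating this over every row of both images in a stereo pair yields the two silhouettes whose matched edges can then be triangulated into 3D contours.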

Identifying defect on specular surfaces

A light is shone on a specular surface of an inspected object at a fixed position, and is reflected directly from the surface into a fixed camera. Multiple images are taken as the light source moves, and these images are fused into a single image. From this single fused image, the invention generates several defect-detection images using several distinct image-processing sequences. Each defect-detection image alone could be used to identify when a defect lies under a camera pixel, but the several images are instead combined into a feature vector that serves as input to a pattern classifier. By combining the several detection images, the pattern classifier may be trained to achieve superior defect-detection results.
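
The detection-image and feature-vector idea can be sketched as follows. The three processing sequences (gradient magnitude, deviation from the mean, bright outliers) and the linear classifier are illustrative stand-ins; the patent does not specify which sequences or which classifier are used:

```python
import numpy as np

def detection_images(img):
    """Three hypothetical image-processing sequences, each producing a
    defect-detection image; stacked into a per-pixel feature vector."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)                       # local edge strength
    dev = np.abs(img - img.mean())                # deviation from mean
    bright = np.maximum(img - np.median(img), 0)  # bright outliers
    return np.stack([grad, dev, bright], axis=-1)

def classify(features, weights, bias):
    """Linear pattern classifier over the feature vectors; in practice
    the weights would be trained on labeled defect examples."""
    return features @ weights + bias > 0

img = np.zeros((4, 4))
img[2, 2] = 10.0                                  # a single bright defect
defect_map = classify(detection_images(img),
                      np.array([1.0, 1.0, 1.0]), -1.0)
```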