G06V10/145

Laser grid inspection of three-dimensional objects

A method, system, and computer program product for optical inspection of objects. The method projects an optical test line on a device under test. A frame is captured of the optical test line projected onto the device under test. The method provides a reference line for the device under test and compares the reference line and the optical test line within the frame. The method generates a visual quality determination based on the comparison of the reference line and the optical test line.
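The comparison step described in the abstract can be sketched as a per-column deviation check between the captured test line and the reference line. This is a minimal illustration only; the function name, the tolerance value, and the representation of a line as per-column heights are assumptions, not taken from the patent.

```python
import numpy as np

def inspect_line(test_line, reference_line, tolerance=1.5):
    """Compare a captured optical test line against a reference line.

    test_line, reference_line: per-column line heights in pixels,
    e.g. the row index of the laser line in each image column.
    Returns (pass/fail, maximum deviation in pixels).
    """
    test = np.asarray(test_line, dtype=float)
    ref = np.asarray(reference_line, dtype=float)
    deviation = np.abs(test - ref)      # per-column geometric error
    max_dev = float(deviation.max())
    return max_dev <= tolerance, max_dev

# A flat reference with a small bump in the test line (surface defect):
ref = np.zeros(10)
test = np.zeros(10)
test[4] = 3.0
ok, dev = inspect_line(test, ref)
```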

METHODS, SYSTEMS AND COMPUTER PROGRAM PRODUCTS FOR EYE BASED SPOOF DETECTION
20220027650 · 2022-01-27 ·

The invention enables spoof detection in eye-based person detection, person recognition, or person monitoring systems. The invention includes (i) illuminating an eye from a first source located at a first position, (ii) illuminating the eye from a second source located at a second position spaced from the first position, (iii) acquiring at the image sensor a set of images of the eye which includes (a) a first set of image information representing a first specular reflection at a third position relative to the eye and (b) a second set of image information representing a second specular reflection at a fourth position relative to the eye, (iv) determining a difference value representing a difference between the third position and the fourth position, and (v) generating a data signal representing detection of a real eye in response to determining that the difference value is less than a threshold value.
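Steps (iv) and (v) above amount to thresholding the distance between the two specular reflections (glints). The sketch below illustrates that check under the assumption that on a real, curved cornea the two glints land close together, while on a flat spoof surface they appear far apart; the function name, coordinates, and threshold are illustrative, not from the patent.

```python
import math

def is_real_eye(glint1, glint2, threshold=5.0):
    """Spoof-check sketch: compare the pixel distance between two
    specular reflections produced by two spatially separated sources.
    glint1, glint2 are (x, y) positions in pixels; a real eye is
    reported when the distance is below the threshold.
    """
    dx = glint1[0] - glint2[0]
    dy = glint1[1] - glint2[1]
    difference = math.hypot(dx, dy)
    return difference < threshold

real = is_real_eye((100.0, 80.0), (102.0, 81.0))   # glints close together
spoof = is_real_eye((100.0, 80.0), (140.0, 80.0))  # glints far apart
```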

INTEGRATED PHOTO-SENSING DETECTION DISPLAY APPARATUS AND METHOD OF FABRICATING INTEGRATED PHOTO-SENSING DETECTION DISPLAY APPARATUS

An integrated photo-sensing detection display substrate having a subpixel region and an inter-subpixel region. The integrated photo-sensing detection display substrate includes a base substrate; a plurality of light emitting elements on the base substrate and configured to emit light, a portion of the light being totally reflected by a surface thereby forming totally reflected light; a light shielding layer between the plurality of light emitting elements and the base substrate configured to block at least a portion of diffusedly reflected light from passing through, the light shielding layer having a light path aperture in the inter-subpixel region allowing at least a portion of the totally reflected light to pass through thereby forming a signal-enriched light beam; a diffraction grating layer configured to at least partially collimate the signal-enriched light beam thereby forming a collimated light beam; and a photosensor configured to detect the collimated light beam.

Texture recognition device and operation method of texture recognition device

A texture recognition device and an operation method of a texture recognition device are provided. The texture recognition device includes a light source array and an image sensor array. The light source array includes a plurality of light sources; the image sensor array is at a side of the light source array and includes a plurality of image sensors, and the plurality of image sensors are configured to receive light emitted from the plurality of light sources and reflected to the image sensors by a texture for a texture image collection; each of the image sensors includes a plurality of signal switches, and a signal of each of the image sensors is read through the plurality of signal switches for forming one image pixel of the texture image.

Smoke detection method with visual depth

The present invention provides a smoke detection method with visual depth, which uses an image camera and a depth camera to capture surrounding images and surrounding depth information. A vehicle patrols an area, such as a processing factory, collecting the surrounding environment information and detecting the existence of burning objects or smoke. A processor applying a clustering algorithm then estimates the smoke distribution or the location of the fire source, and an alarm provides alarm information so that the rescue crew can prepare in advance and respond immediately. By providing correct information to firefighters at the fire scene, the time needed to control the fire can be shortened and the time available for evacuation increased.
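The clustering step can be sketched as grouping smoke-classified 3-D points (image coordinates plus depth) and reporting cluster centroids as candidate source locations. The abstract does not name a specific clustering algorithm, so the greedy nearest-centroid scheme, the radius parameter, and the point format below are all assumptions for illustration.

```python
import numpy as np

def cluster_smoke_points(points, radius=1.0):
    """Greedy clustering sketch: assign each 3-D smoke point
    (x, y, depth) to the first cluster whose centroid lies within
    `radius`, otherwise start a new cluster. Returns the cluster
    centroids as candidate fire-source locations.
    """
    clusters = []  # each cluster is a list of points
    for p in np.asarray(points, dtype=float):
        for c in clusters:
            if np.linalg.norm(p - np.mean(c, axis=0)) <= radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters]

# Two nearby smoke points and one distant one -> two clusters:
pts = [(0.0, 0.0, 2.0), (0.1, 0.1, 2.1), (5.0, 5.0, 8.0)]
centroids = cluster_smoke_points(pts)
```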

LIGHT-EMITTING USER INPUT DEVICE FOR CALIBRATION OR PAIRING

A light emitting user input device can include a touch sensitive portion configured to accept user input (e.g., from a user's thumb) and a light emitting portion configured to output a light pattern. The light pattern can be used to assist the user in interacting with the user input device. Examples include emulating a multi-degree-of-freedom controller, indicating scrolling or swiping actions, indicating presence of objects nearby the device, indicating receipt of notifications, assisting pairing the user input device with another device, or assisting calibrating the user input device. The light emitting user input device can be used to provide user input to a wearable device, such as, e.g., a head mounted display device.

METHOD FOR ACQUIRING ANIMAL NOSE PATTERN IMAGE
20220015329 · 2022-01-20 ·

The present invention is a method for overcoming the challenge, caused by moisture present on the nose surface, of acquiring good-quality animal nose pattern images for biometric recognition: the moisture is removed, or moisture removal is paired with the application of an astringent or pigment, prior to image capture.

IMAGE PROCESSING APPARATUS AND DISPLAY APPARATUS WITH DETECTION FUNCTION
20220019301 · 2022-01-20 ·

An image processing apparatus according to the present disclosure includes: a position detection illumination unit; an image recognition illumination unit; an illumination control unit; an imaging unit; and an image processing unit. The position detection illumination unit outputs position detection illumination light. The position detection illumination light is used for position detection on a position detection object. The image recognition illumination unit outputs image recognition illumination light. The image recognition illumination light is used for image recognition on an image recognition object. The illumination control unit controls the position detection illumination unit and the image recognition illumination unit to cause the position detection illumination light and the image recognition illumination light to be outputted at timings different from each other. The position detection illumination light and the image recognition illumination light enter the imaging unit at timings different from each other. The image processing unit determines switching between the position detection illumination light and the image recognition illumination light on the basis of luminance information regarding a captured image by the imaging unit. The image processing unit performs position detection on the position detection object on the basis of an imaging result of the imaging unit with the position detection illumination light switched on and performs image recognition on the image recognition object on the basis of an imaging result of the imaging unit with the image recognition illumination light switched on.
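The luminance-based switching decision described above can be sketched as classifying each captured frame by its mean luminance. The patent does not disclose the actual criterion, so the assumption here is simply that frames lit by one illumination are, on average, brighter than frames lit by the other; the threshold value and mode names are illustrative.

```python
def classify_frame(mean_luminance, switch_threshold=128.0):
    """Return which illumination a frame was captured under, assuming
    (as a stand-in for the patent's luminance criterion) that position
    detection frames are brighter on average than image recognition
    frames. The threshold is an illustrative 8-bit mid-scale value.
    """
    if mean_luminance >= switch_threshold:
        return "position_detection"
    return "image_recognition"

# Route an alternating sequence of frames by their mean luminance:
modes = [classify_frame(v) for v in (200.0, 40.0, 190.0)]
```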

Visual, depth and micro-vibration data extraction using a unified imaging device

A unified imaging device for detecting and classifying objects in a scene, including motion and micro-vibrations. A plurality of images of the scene is received from an imaging sensor of the unified imaging device, which comprises a light source adapted to project onto the scene a predefined structured light pattern constructed of a plurality of diffused light elements. Objects present in the scene are classified by visually analyzing the images; depth data of the objects is extracted by analyzing the position of diffused light elements reflected from the objects; and micro-vibrations of the objects are identified by analyzing a change in the speckle pattern of the reflected diffused light elements across at least some consecutive images. The classification, the depth data, and the micro-vibration data are output; because all three are derived from images captured by the same imaging sensor, they are inherently registered in a common coordinate system.
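The depth-extraction step, analyzing the position of a reflected light element, can be sketched with the standard structured-light triangulation relation depth = f · b / d, where d is the pixel shift (disparity) of the element from its reference position. The parameter names and calibration values below are illustrative assumptions, not taken from the patent.

```python
def depth_from_shift(observed_x, reference_x, focal_px, baseline_m):
    """Triangulation sketch: a projected light element reflected from a
    surface shifts horizontally relative to its reference position, and
    the disparity (in pixels) maps to depth via depth = f * b / d,
    with focal length f in pixels and projector-camera baseline b in
    meters.
    """
    disparity = observed_x - reference_x
    if disparity <= 0:
        raise ValueError("element not displaced toward the camera")
    return focal_px * baseline_m / disparity

# A 20-pixel shift with an 800 px focal length and 5 cm baseline:
z = depth_from_shift(observed_x=620.0, reference_x=600.0,
                     focal_px=800.0, baseline_m=0.05)
```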

Dynamic structured light for depth sensing systems based on contrast in a local area

A depth camera assembly (DCA) determines depth information. The DCA projects a dynamic structured light pattern into a local area and captures images including a portion of the dynamic structured light pattern. The DCA determines regions of interest in which it may be beneficial to increase or decrease an amount of texture added to the region of interest using the dynamic structured light pattern. For example, the DCA may identify the regions of interest based on contrast values calculated using a contrast algorithm, or based on the parameters received from a mapping server including a virtual model of the local area. The DCA may selectively increase or decrease an amount of texture added by the dynamic structured light pattern in portions of the local area. By selectively controlling portions of the dynamic structured light pattern, the DCA may decrease power consumption and/or increase the accuracy of depth sensing measurements.
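The region-of-interest identification based on contrast values can be sketched as tiling the image and flagging tiles whose RMS contrast (standard deviation over mean) falls below a threshold, i.e. tiles where projecting additional structured-light texture would help depth matching. The abstract does not specify the contrast algorithm, so the tile size, the RMS-contrast metric, and the threshold are assumptions for illustration.

```python
import numpy as np

def low_contrast_tiles(image, tile=4, threshold=0.05):
    """ROI-selection sketch: split a grayscale image into tiles,
    compute RMS contrast (std / mean) per tile, and return the
    top-left corners of low-contrast tiles that would benefit from
    added structured-light texture.
    """
    h, w = image.shape
    flagged = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = image[r:r + tile, c:c + tile].astype(float)
            mean = patch.mean()
            contrast = patch.std() / mean if mean > 0 else 0.0
            if contrast < threshold:
                flagged.append((r, c))
    return flagged

# A mostly flat (textureless) image with one textured tile:
img = np.full((8, 8), 100.0)
img[0:4, 4:8] += np.arange(16).reshape(4, 4) * 5.0
tiles = low_contrast_tiles(img)
```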