Patent classifications
G06V10/145
Blue light adjustment for biometric security
Systems and methods for blue light adjustment with a wearable display system are provided. Embodiments of the systems and methods for blue light adjustment can include receiving an eye image of an eye exposed to an adjusted level of blue light; detecting a change in a pupillary response by comparison of the received eye image to a first image; determining that the pupillary response corresponds to a biometric characteristic of a human individual; and allowing access to a biometric application based on the pupillary response determination.
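The check described in this abstract can be sketched as a comparison of pupil size before and after the blue-light adjustment. This is a minimal illustrative sketch, not the patent's implementation; the dark-pixel radius proxy, the threshold `min_delta`, and all function names are assumptions.

```python
def pupil_radius(eye_image):
    """Estimate pupil radius as the square root of the count of dark
    pixels, a crude stand-in for real pupil segmentation.

    eye_image: 2D list of grayscale values in [0, 255].
    """
    dark = sum(1 for row in eye_image for px in row if px < 40)
    return dark ** 0.5  # area -> rough radius scale


def blue_light_response_detected(first_image, adjusted_image, min_delta=1.0):
    """Return True if the pupil constricted enough between the two images.

    Increased blue light normally constricts a live human pupil, so a
    sufficiently smaller radius in the image taken under adjusted blue
    light suggests a genuine pupillary response (a biometric characteristic
    of a human individual) rather than a static spoof such as a photo.
    """
    r0 = pupil_radius(first_image)
    r1 = pupil_radius(adjusted_image)
    return (r0 - r1) >= min_delta
```

A biometric application would then grant or deny access based on the boolean result of this determination.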
IMAGE PROCESSING METHOD AND APPARATUS, AND COMPUTER-READABLE NON-TRANSITORY STORAGE MEDIUM
An image processing method, comprising: in response to a request instruction of an image application function, transferring acquired emission parameters to the emission driver module, and controlling the infrared camera to transmit a trigger signal to the emission driver module; controlling the structured-light emitter to emit a laser by the emission driver module, and transmitting a synchronization signal to the infrared camera by the emission driver module, in response to the trigger signal; controlling the infrared camera to collect speckle images of a to-be-detected object, in response to the synchronization signal; controlling the infrared camera to transfer the speckle images to the image processing module; and controlling the image processing module to acquire depth images of the to-be-detected object by performing depth calculation on the speckle images, and to realize the image application function based on the depth images.
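The depth calculation step in structured-light systems of this kind is typically a triangulation from speckle disparity. The sketch below assumes the standard pinhole model `depth = baseline * focal_length / disparity`; the baseline and focal-length values are illustrative calibration constants, not figures from the patent.

```python
def depth_from_disparity(disparity_px, baseline_m=0.05, focal_px=600.0):
    """Triangulated depth (meters) for one speckle correspondence.

    disparity_px: horizontal shift, in pixels, between the observed
    speckle and its position in a reference pattern.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px


def depth_image(disparity_map):
    """Convert a 2D disparity map (pixels) into a depth map (meters)."""
    return [[depth_from_disparity(d) for d in row] for row in disparity_map]
```

Closer objects produce larger disparities and therefore smaller depths, which is the relationship the image processing module exploits when turning speckle images into depth images.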
System and method for retrieving and analyzing particles
A system and method for isolating and analyzing single cells, including: a substrate having a broad surface; a set of wells defined at the broad surface of the substrate, and a set of channels, defined by walls of the wells, that fluidly couple each well to at least one adjacent well in the set of wells; and a fluid delivery module defining an inlet and comprising a plate, removably coupled to the substrate, the plate defining a recessed region fluidly connected to the inlet and facing the broad surface of the substrate, the fluid delivery module comprising a cell capture mode.
THREE-DIMENSIONAL IMAGING AND SENSING USING A DYNAMIC VISION SENSOR AND PATTERN PROJECTION
In one implementation, a three-dimensional image sensing system is provided that includes at least one processor that detects, from an image sensor, one or more first events based on reflections caused by electromagnetic pulses associated with a plurality of projected line patterns and corresponding to one or more first pixels of the image sensor. The at least one processor also detects, from the image sensor, one or more second events based on the reflections and corresponding to one or more second pixels of the image sensor, identifies a line corresponding to the one or more second events and the one or more first events, calculates three-dimensional rays for the one or more first pixels and the one or more second pixels based on the identified line, and calculates three-dimensional image points for the one or more first pixels and the one or more second pixels based on the three-dimensional rays and a plane equation.
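The final step of the pipeline above, recovering a 3D point from a pixel's ray and the plane equation of a projected line pattern, reduces to a ray-plane intersection. This is an illustrative sketch of that geometry only; the function name and the camera-at-origin assumption are not from the patent.

```python
def intersect_ray_plane(ray_dir, plane):
    """Intersect a camera ray through the origin with a projector plane.

    ray_dir: (dx, dy, dz), the 3D ray direction for a pixel.
    plane:   (a, b, c, d) with a*x + b*y + c*z + d = 0, one plane per
             projected line pattern.
    Returns the 3D point p = t * ray_dir lying on the plane.
    """
    a, b, c, d = plane
    denom = a * ray_dir[0] + b * ray_dir[1] + c * ray_dir[2]
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the plane")
    t = -d / denom
    return tuple(t * comp for comp in ray_dir)
```

Applying this to every first and second pixel associated with an identified line yields the three-dimensional image points.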
APPARATUS AND METHOD FOR DETECTING POSITION OF AN ELECTRONIC SHELF LABEL
The present disclosure provides a device and method for position detection of electronic shelf labels. The device comprises: a lighting control module configured to send a lighting signal to electronic shelf labels; electronic shelf labels configured to receive the lighting signal and light up according to their respective preset lighting rules; a camera configured to take photos of the electronic shelf labels lit up according to a preset photographing rule to obtain image data of the electronic shelf labels; and a detection module configured to determine positions of the electronic shelf labels on a shelf based on the image data of the electronic shelf labels. The present disclosure can automatically detect positions of the electronic shelf labels on the shelf with high efficiency.
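One plausible realization of the detection module is to give each label a unique on/off blink sequence (its preset lighting rule), capture one photo per blink interval, and match the observed sequence at each image position against the known codes. The sketch below assumes one pixel per label and perfectly synchronized captures; all names and the encoding scheme are illustrative, not the patent's method.

```python
def decode_label_positions(frames, codes):
    """Match per-position blink sequences to known label codes.

    frames: list of 2D binary images (1 = lit), one per capture instant.
    codes:  {label_id: tuple of 0/1 per frame}, the preset lighting rules.
    Returns {label_id: (row, col)} for every image position whose on/off
    sequence across the frames equals that label's code.
    """
    rows, cols = len(frames[0]), len(frames[0][0])
    positions = {}
    for r in range(rows):
        for c in range(cols):
            seq = tuple(frame[r][c] for frame in frames)
            for label_id, code in codes.items():
                if seq == code:
                    positions[label_id] = (r, c)
    return positions
```

With distinct codes, a single camera pass locates every label on the shelf at once, which is where the claimed efficiency comes from.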
MOBILE TERMINAL AND METHOD FOR CONTROLLING SAME
The present disclosure relates to a mobile terminal having a lighting unit and a control method thereof. A mobile terminal according to one implementation includes a lighting unit, a camera, and a controller configured to control the lighting unit to irradiate illumination light to a subject to be captured through the camera, and control the camera to capture the subject irradiated with the illumination light, wherein the controller is configured to determine a material of the subject based on information related to the illumination light irradiated on the subject captured through the camera.
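The material determination described above can be imagined as reducing the captured illumination information to reflectance features and thresholding them. In this sketch the two scalars (specular vs. diffuse reflection strength), the class names, and the thresholds are all illustrative assumptions; the patent does not specify this decision rule.

```python
def classify_material(specular, diffuse):
    """Guess a material class from specular vs. diffuse reflection strength.

    Shiny materials (e.g. metal) return a strong specular component
    relative to diffuse scattering; matte materials the opposite.
    Thresholds here are placeholders, not calibrated values.
    """
    if diffuse <= 0:
        raise ValueError("diffuse must be positive")
    ratio = specular / diffuse
    if ratio > 2.0:
        return "metal"
    if ratio > 0.5:
        return "skin"
    return "fabric"
```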
VISION BASED LIGHT DETECTION AND RANGING SYSTEM USING MULTI-FIELDS OF VIEW
A vision based light detection and ranging (LIDAR) system captures images including a targeted object and identifies the targeted object using an object recognition model. To identify the targeted object, the vision based LIDAR system determines a type of object and pixel locations or a bounding box associated with the targeted object. Based on the identification, the vision based LIDAR system directs a tracking beam onto one or more spots on the targeted object and detects distances to the one or more spots. The vision based LIDAR system updates the identification of the targeted object based on the one or more detected distances.
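The two steps following recognition, choosing beam spots inside the bounding box and refining the identification with the measured distances, can be sketched as follows. The spot-placement policy, class names, and flatness threshold are illustrative assumptions, not the system's actual logic.

```python
def beam_targets(bbox, n_spots=3):
    """Pick spot coordinates inside a detected bounding box.

    bbox: (x_min, y_min, x_max, y_max) in pixels from the recognition
    model. Returns n_spots points spread evenly along the box's
    horizontal midline for the tracking beam to visit.
    """
    x_min, y_min, x_max, y_max = bbox
    y_mid = (y_min + y_max) / 2
    step = (x_max - x_min) / (n_spots + 1)
    return [(x_min + step * (i + 1), y_mid) for i in range(n_spots)]


def refine_label(label, distances, flat_tol=1e-3):
    """Update an identification using the measured spot distances.

    A real 3D object shows depth variation across its spots; a flat
    picture of one does not, so near-identical distances demote the label.
    """
    spread = max(distances) - min(distances)
    if label == "person" and spread < flat_tol:
        return "flat_surface"
    return label
```

This illustrates why the distance measurements feed back into identification: range data can disambiguate cases the image alone cannot.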
Texture recognition device and driving method of texture recognition device
A texture recognition device and a driving method of a texture recognition device (100) are provided. The texture recognition device has a touch side, and includes: a light source array, an image sensor array and a light valve structure; the image sensor array is configured to receive light emitted from the light source array and then reflected to the image sensor array by a texture for texture image collection; the light valve structure is disposed on a side of the light source array close to the touch side and is configured to control a first region to be in a light transmission state in response to a control signal, so as to allow light emitted from the light source array to pass through the first region to form a first photosensitive light source in the light transmission state.
Image sensor and sensing method thereof with polarizers for removing background noise
The invention relates to an image sensor, including a substrate, a unit pixel, a first polarizer, a second polarizer, and a readout circuit. First, incident light is emitted to the image sensor, and the first and second polarizers convert the incident light into first and second incident lights respectively. Then, the photoelectric conversion elements of the unit pixel covered by the first and second polarizers generate first and second electrons after receiving the first and second incident lights, respectively. Afterwards, the readout circuit subtracts the second electrons from the first electrons and integrates the result to generate a voltage signal corresponding to the number of electrons in the actual signal. Finally, the above steps are repeated. Thereby, the image sensor of the invention effectively increases the full well capacity of the equivalent unit pixel, so as to improve the signal-to-noise ratio of the image sensor.
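The subtract-and-integrate readout can be sketched numerically: the second polarizer's pixels see mostly background, so subtracting their electron counts from the first polarizer's and accumulating the difference over repeated exposures leaves a signal-proportional voltage. The conversion gain and function name below are illustrative assumptions.

```python
def readout_voltage(first_electrons, second_electrons, volts_per_electron=1e-5):
    """Accumulate (first - second) electron counts over repeated exposures.

    first_electrons / second_electrons: per-exposure electron counts from
    the pixels under the first and second polarizers. Background noise
    common to both cancels in the subtraction; the integrated difference
    is converted to a voltage with an assumed gain.
    """
    net = sum(f - s for f, s in zip(first_electrons, second_electrons))
    return net * volts_per_electron
```

Because each exposure's difference stays small while the integrated total grows, the scheme behaves like a pixel with a larger effective full well capacity, which is the stated SNR benefit.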