Patent classifications
G06V10/145
Distance sensor including adjustable focus imaging sensor
In one embodiment, a method for calculating a distance to an object includes simultaneously activating a first projection point and a second projection point of a distance sensor to collectively project a reference pattern into a field of view, activating a third projection point of the distance sensor to project a measurement pattern into the field of view, capturing an image of the field of view, wherein the object, the reference pattern, and the measurement pattern are visible in the image, calculating a distance from the distance sensor to the object based on an appearance of the measurement pattern in the image, detecting a movement of a lens of the distance sensor based on an appearance of the reference pattern in the image, and adjusting the distance as calculated based on the movement as detected.
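A minimal sketch, in Python, of the adjustment idea this abstract describes, assuming a simple projector/camera triangulation model and a linear relation between reference-pattern drift and lens movement; the function and parameter names are illustrative and not taken from the patent.

```python
import numpy as np

def corrected_distance(meas_disparity_px, ref_points, ref_baseline,
                       focal_px, baseline_mm, shift_gain_mm_per_px=0.5):
    # Raw distance to the object from the measurement-pattern disparity
    # (simple triangulation model, assumed for illustration).
    raw_distance_mm = focal_px * baseline_mm / meas_disparity_px

    # Lens movement estimated as the mean drift of the reference pattern
    # relative to its calibrated baseline positions (in pixels).
    lens_shift_px = np.mean(np.linalg.norm(ref_points - ref_baseline, axis=1))

    # Adjust the calculated distance by the detected movement.
    return raw_distance_mm + shift_gain_mm_per_px * lens_shift_px

# Example: the reference pattern drifted by about 1 px, so the raw distance
# is nudged by the assumed gain.
ref_baseline = np.array([[100.0, 50.0], [200.0, 50.0], [300.0, 50.0]])
ref_points = ref_baseline + np.array([1.0, 0.0])
print(corrected_distance(40.0, ref_points, ref_baseline,
                         focal_px=800.0, baseline_mm=60.0))
```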
Systems and methods for acquiring information from an environment
A system for acquiring information from an environment, comprising: a light source for generating at least one beam; a first optical setup for converting the at least one beam into a distorted light pattern projectable onto an environment; and a second optical setup for converting an original view, which is returned from the environment and comprises the distorted light pattern deformed by at least one surface of the environment, into a corrected image comprising a corrected pattern.
Display device
A display device including: an optical image sensor; a pinhole array mask layer on the optical image sensor; a display layer disposed on the pinhole array mask layer and including a plurality of pixels; and a transparent cover layer disposed on the display layer, wherein a finger placement surface is provided on the transparent cover layer, wherein each of the pixels is one of a red pixel, a green pixel, and a blue pixel, and at least one of the green pixel and the blue pixel emits light and the red pixel does not emit light when a finger is adjacent to the finger placement surface.
Method for human motion analysis, apparatus for human motion analysis, device and storage medium
A method for human motion analysis, an apparatus for human motion analysis, a device, and a storage medium. The method includes: acquiring image information captured by a plurality of photographing devices, where at least one of the plurality of photographing devices is disposed above a shelf; performing human tracking according to the image information captured by the plurality of photographing devices, and determining position information in space of at least one human body and identification information of the at least one human body; acquiring, according to the position information in space of a target human body of the at least one human body, a target image captured by the photographing device above the shelf corresponding to the position information; and recognizing an action of the target human body according to the target image and detection data of a non-visual sensor corresponding to the position information.
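A sketch of the described flow, assuming a hypothetical camera-to-shelf-region mapping and placeholder callables for frame capture, the non-visual sensor, and the action classifier; none of these names come from the patent.

```python
from dataclasses import dataclass

@dataclass
class Track:
    body_id: str
    position: tuple  # (x, y) floor-plane position from human tracking

# Hypothetical mapping from each overhead camera to the shelf region
# (x_min, x_max, y_min, y_max) it covers; the layout is an assumption.
OVERHEAD_CAMERAS = {
    "cam_shelf_A": (0.0, 2.0, 0.0, 1.0),
    "cam_shelf_B": (2.0, 4.0, 0.0, 1.0),
}

def camera_for_position(pos):
    """Pick the overhead camera whose shelf region contains the position."""
    x, y = pos
    for cam_id, (x0, x1, y0, y1) in OVERHEAD_CAMERAS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return cam_id
    return None

def analyze_target(track, grab_frame, read_sensor, classify_action):
    """Position -> overhead camera -> target image + non-visual sensor data
    -> action label. The three callables are placeholders for the capture,
    sensor, and recognition components."""
    cam_id = camera_for_position(track.position)
    if cam_id is None:
        return None
    target_image = grab_frame(cam_id)   # image from the camera above the shelf
    sensor_data = read_sensor(cam_id)   # e.g. a shelf weight-change reading
    return classify_action(target_image, sensor_data)
```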
Operation method of texture recognition device and texture recognition device
An operation method of a texture recognition device and a texture recognition device are provided. The texture recognition device includes a light source array and an image sensor array; the light source array includes a plurality of light sources; the image sensor array includes a plurality of image sensors, which are configured to receive, for texture collection, light emitted from the light sources and reflected to the image sensors by a texture. The operation method includes: in a process of the texture collection performed by the image sensor array, lighting a first group of light sources continuously arranged in a first pattern at a first moment, so that the first group of light sources serves as a photosensitive light source for the image sensor array, in which a length-width ratio of a region occupied by the first pattern is larger than two.
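A small sketch of the claimed lighting constraint, assuming a rectangular light-source array indexed by rows and columns; it lights one strip-shaped group and checks that the length-width ratio of the lit region exceeds two.

```python
import numpy as np

def select_strip_group(array_rows, array_cols, row, col_start, length):
    """Build one strip-shaped group of lit light sources and check the
    claimed constraint that the length-width ratio of its region exceeds
    two. The indexing convention is an assumption for illustration."""
    mask = np.zeros((array_rows, array_cols), dtype=bool)
    mask[row, col_start:col_start + length] = True   # a one-row strip

    rows, cols = np.nonzero(mask)
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    ratio = max(height, width) / min(height, width)
    assert ratio > 2, "length-width ratio of the lit region must exceed two"
    return mask

# Example: an 8x8 light-source array lit as a 1x5 strip (ratio 5 > 2).
group = select_strip_group(8, 8, row=3, col_start=1, length=5)
```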
Automated analysis of petrographic thin section images using advanced machine learning techniques
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for automated analysis of petrographic thin section images. In one aspect, a method includes determining a first image of a petrographic thin section of a rock sample, and determining a feature vector for each pixel of the first image. Multiple different regions of the petrographic thin section are determined by clustering the pixels of the first image based on the feature vectors, wherein one of the regions corresponds to grains in the petrographic thin section. The method further includes determining a second image of the petrographic thin section, including combining images of the petrographic thin section acquired with plane-polarized light and cross-polarized light. Multiple grains are segmented from the second image of the petrographic thin section based on the multiple different regions from the first image, and characteristics of the segmented grains are determined.
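A condensed Python sketch of the described pipeline, assuming colour channels as the per-pixel features, k-means clustering, a simple average as the PPL/XPL combination, and connected components for grain segmentation; these specific choices are ours for illustration, not the patent's.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def segment_grains(ppl_image, xpl_image, n_regions=3):
    """ppl_image / xpl_image: (h, w, 3) arrays acquired with plane-polarized
    and cross-polarized light. Returns a grain label image and per-grain
    characteristics."""
    h, w, _ = ppl_image.shape

    # 1. Feature vector for each pixel of the first (PPL) image: here just
    #    its colour channels; the patent allows richer features.
    features = ppl_image.reshape(-1, ppl_image.shape[2]).astype(float)

    # 2. Cluster pixels into regions; assume the brightest cluster is grains.
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(features)
    labels = labels.reshape(h, w)
    cluster_brightness = [ppl_image[labels == k].mean() for k in range(n_regions)]
    grain_mask = labels == int(np.argmax(cluster_brightness))

    # 3. Second image: combine the PPL and XPL acquisitions (simple average).
    combined = (ppl_image.astype(float) + xpl_image.astype(float)) / 2.0

    # 4. Segment individual grains, restricted to the grain region of step 2.
    grain_ids, n_grains = ndimage.label(grain_mask)

    # 5. Characteristics of each segmented grain.
    stats = [{"grain": i,
              "area_px": int((grain_ids == i).sum()),
              "mean_intensity": float(combined[grain_ids == i].mean())}
             for i in range(1, n_grains + 1)]
    return grain_ids, stats
```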
Three-dimensional real face modeling method and three-dimensional real face camera system
The present invention relates to the field of three-dimensional face modeling, and provides a three-dimensional real face modeling method and a three-dimensional real face camera system. The three-dimensional real face modeling method includes the following steps: (1) projecting structured light onto a target face and taking a photo to acquire facial three-dimensional geometric data; (2) acquiring facial skin chroma data and brightness data; (3) triangulating the facial three-dimensional geometric data; (4) acquiring patch chroma data corresponding to each of the triangular patch regions; (5) performing interpolation calculation to acquire patch brightness data corresponding to each of the triangular patch regions; and (6) calculating a reflectivity of the facial skin region corresponding to each pixel point. The three-dimensional real face camera system includes a standard light source, a control and calculation module, and a three-dimensional portrait acquisition unit. With this method and system, three-dimensional real face information that is independent of the three-dimensional modeling device and of the illumination environment at the modeling site can be acquired.
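A sketch of step (6) under a Lambertian reflectance assumption of our own: reflectivity per pixel is the measured brightness divided by the irradiance predicted from the standard light source and the triangulated geometry. The names and the reflectance model are assumptions, not details from the patent.

```python
import numpy as np

def skin_reflectivity(brightness, normals, light_dir, light_intensity):
    """brightness: measured brightness per pixel; normals: (n, 3) surface
    normals from the triangulated geometry; light_dir / light_intensity:
    direction and intensity of the standard light source."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir /= np.linalg.norm(light_dir)

    # Predicted irradiance: light intensity scaled by the cosine between the
    # surface normal and the light direction (Lambertian assumption).
    cos_theta = np.clip(normals @ light_dir, 1e-3, None)
    predicted = light_intensity * cos_theta

    # Reflectivity is then independent of the capture device and the
    # illumination environment at the modeling site.
    return brightness / predicted

# Example: one pixel facing the light directly, one tilted by 45 degrees.
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.7071, 0.7071]])
print(skin_reflectivity(np.array([0.8, 0.5]), normals, [0, 0, 1], 1.0))
```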
Multi-channel depth estimation using census transforms
A depth estimation system is described that is capable of determining depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. In a first light channel of the plurality of light channels, the system calculates a census transform for each pixel of the first image and a census transform for each pixel of the second image. In a second light channel of the plurality of light channels, the system calculates a census transform for each pixel of the first image and a census transform for each pixel of the second image. The system generates a depth map based in part on the census transforms for each pixel of the first image and the second image in the first light channel and in the second light channel.
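A minimal Python sketch of census-based matching across multiple light channels, assuming a 3x3 census window, summed Hamming costs over channels, and winner-takes-all disparity selection with no sub-pixel refinement; these simplifications are ours and only illustrate the general technique the abstract names.

```python
import numpy as np

def census_transform(channel, window=3):
    """Census transform of one light channel: each pixel becomes a bit
    vector recording whether each neighbour in the window is darker than
    the centre pixel."""
    r = window // 2
    h, w = channel.shape
    padded = np.pad(channel, r, mode="edge")
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            bits.append((shifted < channel).astype(np.uint8))
    return np.stack(bits, axis=-1)              # (h, w, window*window - 1)

def depth_from_census(left, right, max_disparity=16):
    """left / right: (h, w, channels) images. Census codes are computed per
    channel for both images, Hamming costs are summed over the channels, and
    the lowest-cost disparity is kept for each pixel."""
    h, w, channels = left.shape
    census_l = [census_transform(left[..., c]) for c in range(channels)]
    census_r = [census_transform(right[..., c]) for c in range(channels)]
    cost = np.full((h, w, max_disparity), np.inf)
    for d in range(max_disparity):
        total = np.zeros((h, w - d))
        for c in range(channels):
            # Hamming distance between the left pixel at x and the right
            # pixel at x - d, for the candidate disparity d.
            left_code = census_l[c][:, d:, :]
            right_code = census_r[c][:, :w - d, :]
            total += np.sum(left_code != right_code, axis=-1)
        cost[:, d:, d] = total
    return np.argmin(cost, axis=-1)             # disparity map (proxy for depth)
```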
Optical device and optical neural network apparatus including the same
Provided are an optical device which is capable of optically implementing an activation function of an artificial neural network and an optical neural network apparatus which includes the optical device. The optical device may include: a beam splitter splitting incident light into first light and second light; an image sensor disposed to sense the first light; an optical shutter configured to transmit or block the second light; and a controller controlling operations of the optical shutter, based on an intensity of the first light measured by the image sensor.
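A sketch of the controller behaviour under a threshold rule of our own choosing, showing how the split-and-measure arrangement can yield a step-like optical activation; the threshold and split ratio are illustrative assumptions.

```python
def shutter_state(measured_intensity, threshold):
    """Controller rule (our assumption): transmit the second light only when
    the intensity measured by the image sensor exceeds the threshold."""
    return "transmit" if measured_intensity >= threshold else "block"

def optical_activation(input_intensity, split_ratio=0.5, threshold=0.2):
    """Output intensity of the device for a given input intensity: the beam
    splitter sends one part to the sensor and the other to the shutter."""
    sensed = split_ratio * input_intensity           # first light, to the sensor
    passed = (1.0 - split_ratio) * input_intensity   # second light, at the shutter
    return passed if shutter_state(sensed, threshold) == "transmit" else 0.0

# Example: a weak input is blocked, a strong input is transmitted,
# giving a step-like activation.
print(optical_activation(0.1), optical_activation(1.0))
```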