Patent classifications
G06V2201/121
LIGHT-EMITTING DEVICE, OPTICAL DEVICE, MEASUREMENT DEVICE, AND INFORMATION PROCESSING APPARATUS
A light-emitting device includes a laser element array having a quadrangular planar shape, a pair of capacitors that supply an electric current for light emission of the laser element array, and a driving unit that drives the laser element array by turning on and off the electric current for light emission of the laser element array. The pair of capacitors are disposed beside two sides of the laser element array that face each other so as to sandwich the laser element array, and the driving unit is disposed beside another side of the laser element array.
Depth calculation processor, data processing method and 3D image device
A depth calculation processor, a data processing method, and a 3D image device are disclosed herein. The depth calculation processor includes: two input ports configured to receive first image data, the first image data comprising a structured light image acquired under projection of structured light; an input switch connected to the input ports and configured to convey all or some of the first image data from the input ports; a data processing engine connected to the input switch and configured to process the first image data output through the input switch and to output second image data comprising a depth map, the data processing engine comprising a depth processing engine configured to process the structured light image to obtain the depth map; and an output port connected to the data processing engine and configured to output the second image data to a main device.
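As a minimal illustration of what a structured-light depth processing engine computes (not this patent's specific implementation): once the projected pattern's disparity against a reference image is known for a pixel, depth follows from triangulation. The function name and the baseline/focal-length values below are illustrative assumptions.

```python
# Sketch of structured-light triangulation: depth = baseline * focal / disparity.
# `depth_from_disparity` and its default parameters are hypothetical.

def depth_from_disparity(disparity_px: float,
                         baseline_m: float = 0.075,
                         focal_px: float = 580.0) -> float:
    """Depth of a pixel from the disparity of its projected dot (pinhole model)."""
    return baseline_m * focal_px / disparity_px

# A dot shifted by 29 px under this geometry sits at 0.075 * 580 / 29 = 1.5 m
print(depth_from_disparity(29.0))
```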
Method and device to determine the dimensions and distance of a number of objects in an environment
A method to determine the dimensions and distance of a number of objects in an environment includes providing a number of objects, each including a marking element; recording a visual image-dataset of at least one of the objects with a camera; and determining a parameter value from the image of a marking element in the image-dataset, or from a measurement of an additional sensor at the location of the camera. The parameter value depends on the distance of the object from the camera. The method further includes calculating the relative distance between the object and the camera based on the parameter value, and calculating the dimensions of the object from at least a part of the image of the object in the image-dataset and the calculated distance. A related device, a related system, and a related control unit for a virtual reality system are also disclosed.
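The two calculation steps in the abstract can be sketched under a simple pinhole-camera assumption: a marking element of known physical size fixes the distance, and that distance then converts pixel extents into physical dimensions. All names and numbers below are hypothetical illustrations, not the patent's method.

```python
# Hedged sketch of the abstract's two-step geometry, assuming a pinhole camera
# model where apparent size scales as focal_length / distance.

def estimate_distance(marker_size_m: float, marker_size_px: float,
                      focal_length_px: float) -> float:
    """Distance to the object from the apparent size of its marking element."""
    return focal_length_px * marker_size_m / marker_size_px

def estimate_dimension(extent_px: float, distance_m: float,
                       focal_length_px: float) -> float:
    """Physical extent of an image region at the estimated distance."""
    return extent_px * distance_m / focal_length_px

# Example: a 0.10 m marker imaged at 50 px with a 1000 px focal length
d = estimate_distance(0.10, 50.0, 1000.0)   # -> 2.0 m
w = estimate_dimension(120.0, d, 1000.0)    # -> 0.24 m
print(d, w)
```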
Control apparatus, robot system, and method of detecting object
A control apparatus includes a processor that executes a first point cloud generation process and a second point cloud generation process, and detects the object using the first point cloud or the second point cloud. The first point cloud generation process includes a first imaging process of acquiring a first image according to a first depth measuring method and a first analysis process of generating a first point cloud; the second point cloud generation process includes a second imaging process of acquiring a second image according to a second depth measuring method and a second analysis process of generating a second point cloud. The first point cloud generation process completes in a shorter time than the second. The processor starts the second point cloud generation process after the first imaging process and discontinues it if the first point cloud satisfies a predetermined condition of success.
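The control flow described here, run the fast and slow pipelines concurrently and cancel the slow one when the fast result is already good enough, can be sketched as below. This is a generic concurrency pattern, not the patent's implementation; the pipeline callables and success predicate are hypothetical placeholders.

```python
# Sketch: fast/slow point-cloud pipelines with early cancellation of the slow
# one. `fast_pipeline()`, `slow_pipeline(cancel)` and `is_success(cloud)` are
# assumed callables supplied by the caller.
import threading

def detect_object(fast_pipeline, slow_pipeline, is_success):
    cancel = threading.Event()
    result = {}

    def run_slow():
        cloud = slow_pipeline(cancel)       # expected to poll `cancel` periodically
        if not cancel.is_set():
            result["slow"] = cloud

    t = threading.Thread(target=run_slow)
    t.start()                               # slow process starts after first imaging
    fast_cloud = fast_pipeline()
    if is_success(fast_cloud):
        cancel.set()                        # discontinue the slower process
        t.join()
        return fast_cloud
    t.join()
    return result.get("slow", fast_cloud)
```

If the fast point cloud fails the success condition, the apparatus simply waits for the slower, presumably more accurate, result instead.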
METHOD, SYSTEM, AND COMPUTER-READABLE MEDIUM FOR GENERATING SPOOFED STRUCTURED LIGHT ILLUMINATED FACE
In an embodiment, a method includes determining a spatial illumination distribution using a first image caused by at least first structured light and a second image caused by at least second structured light, wherein a portion of the first image is caused by a portion of the at least first structured light traveling a first distance, a portion of the second image is caused by a portion of the at least second structured light traveling a second distance, the portion of the first image and the portion of the second image cause a same portion of the spatial illumination distribution, and the first distance is different from the second distance; building a first 3D face model; rendering the first 3D face model using the spatial illumination distribution, to generate a first rendered 3D face model; and displaying the first rendered 3D face model to a first camera.
THREE-DIMENSIONAL MEASUREMENT DEVICE
A three-dimensional measurement device includes one or a plurality of light source units configured to irradiate an object to be measured (SA) with measurement light having a predetermined pattern, one or a plurality of image capture units configured to capture an image of the object to be measured which is irradiated with the measurement light, and a measurement unit configured to measure a three-dimensional shape of the object to be measured on the basis of results of image capture performed by the image capture units. Each of the light source units is constituted by an S-iPMSEL with M-point oscillation.
SYSTEM AND METHOD FOR IDENTIFYING ITEMS
The method for item recognition can include: optionally calibrating a sampling system, determining visual data using the sampling system, determining a point cloud, determining region masks based on the point cloud, generating a surface reconstruction for each item, generating image segments for each item based on the surface reconstruction, and determining a class identifier for each item using the respective image segments.
ASSOCIATING THREE-DIMENSIONAL COORDINATES WITH TWO-DIMENSIONAL FEATURE POINTS
An example method includes causing a light projecting system of a distance sensor to project a three-dimensional pattern of light onto an object, wherein the three-dimensional pattern of light comprises a plurality of points of light that collectively forms the pattern, causing a light receiving system of the distance sensor to acquire an image of the three-dimensional pattern of light projected onto the object, causing the light receiving system to acquire a two-dimensional image of the object, detecting a feature point in the two-dimensional image of the object, identifying an interpolation area for the feature point, and computing three-dimensional coordinates for the feature point by interpolating using three-dimensional coordinates of two points of the plurality of points that are within the interpolation area.
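The final step, computing 3D coordinates for a 2D feature point from nearby pattern points whose 3D coordinates are known, can be illustrated with a simple inverse-distance interpolation between two such points. The weighting scheme is one plausible choice for illustration; the abstract does not fix a specific formula.

```python
# Hedged sketch: assign 3D coordinates to a 2D feature point by weighting the
# 3D coordinates of two nearby projected-pattern points inversely to their 2D
# distance from the feature point.

def interpolate_3d(feature_xy, p1_xy, p1_xyz, p2_xy, p2_xyz):
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    d1, d2 = dist(feature_xy, p1_xy), dist(feature_xy, p2_xy)
    w1 = d2 / (d1 + d2)   # the closer pattern point gets the larger weight
    w2 = d1 / (d1 + d2)
    return tuple(w1 * a + w2 * b for a, b in zip(p1_xyz, p2_xyz))

# A feature midway between the two pattern points lands at the midpoint of
# their 3D coordinates:
print(interpolate_3d((5, 0), (0, 0), (0.0, 0.0, 1.0), (10, 0), (1.0, 0.0, 2.0)))
```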
SYSTEM FOR COMPOSING IDENTIFICATION CODE OF SUBJECT
A system includes a lighting module, a processing module, and photovoltaic units. Each of the photovoltaic units receives light reflected off a body portion which is illuminated by light from the lighting module, and converts light energy of the reflected light into electricity. The processing module stores modes each of which specifies a code set. When one of the modes is selected, the processing module activates the lighting module to emit light based on the code set specified by the mode thus selected. The processing module converts electrical quantities measured individually for the photovoltaic units into respective code parameters, and composes an identification code using the code parameters.
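The last step, converting per-unit electrical measurements into code parameters and composing an identification code, might look like the quantization sketch below. The quantization levels and hex encoding are illustrative assumptions; the patent does not specify the conversion.

```python
# Hypothetical sketch: quantize each photovoltaic unit's measured quantity
# into a code parameter, then concatenate the parameters into an ID code.

def compose_code(measurements, levels: int = 16, vmax: float = 1.0) -> str:
    params = [min(levels - 1, int(v / vmax * levels)) for v in measurements]
    return "".join(format(p, "x") for p in params)

print(compose_code([0.05, 0.5, 0.99]))  # -> "08f"
```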
Method and System for Contactless 3D Fingerprint Image Acquisition
Embodiments of the present invention disclose a non-contact 3D fingerprint capturing apparatus and method. The apparatus includes a housing, and a circuit board and a fingerprint reader that are disposed in the housing. The circuit board includes a first control module; the fingerprint reader includes a fingerprint capturing module and a positioning module. The positioning module casts light onto a first position point on a finger; the fingerprint capturing module receives light reflected from the first position point, converts the optical signal into an electrical signal, and sends the electrical signal to the first control module. The first control module judges, according to the electrical signal, whether the first position point is a standard point, the standard point being an aperture with a diameter less than a first threshold and an illumination intensity greater than a second threshold. If the first position point is a standard point, the fingerprint capturing module captures fingerprint images from multiple directions and transmits the fingerprint images to the first control module, and the first control module creates a 3D fingerprint image from the fingerprint images. The embodiments of the present invention further provide a non-contact 3D fingerprint capturing method.