Patent classifications
G06V2201/121
ARTIFACT FILTERING USING ARTIFICIAL INTELLIGENCE
A system and a method for removing artifacts from 3D coordinate data are provided. The system includes one or more processors and a 3D measuring device. The one or more processors are operable to receive training data and train the 3D measuring device to identify artifacts by analyzing the training data. The one or more processors are further operable to identify artifacts in live data based on that training, to generate clear scan data by filtering the artifacts from the live data, and to output the clear scan data.
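The train-then-filter pipeline in the abstract (learn from labelled data, flag artifacts in live data, emit clear scan data) can be sketched minimally. Everything below is a hypothetical illustration, not the patented method: scan points are reduced to a single intensity feature, and the "trained model" is simply a threshold fitted to labelled points.

```python
import numpy as np

def train_artifact_classifier(points, labels):
    """Fit a toy per-point artifact classifier (hypothetical stand-in for the
    trained AI model): pick the intensity threshold that best separates
    labelled artifact points (label 1) from clean points (label 0)."""
    # points: (N, 4) array of x, y, z, intensity
    intensities = points[:, 3]
    best_t, best_acc = intensities[0], 0.0
    for t in np.unique(intensities):
        # Predict "artifact" when intensity falls below the candidate threshold.
        acc = np.mean((intensities < t) == (labels == 1))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def filter_artifacts(points, threshold):
    """Generate 'clear scan data' by dropping points classified as artifacts."""
    return points[points[:, 3] >= threshold]
```

On a toy scan where artifact points have low return intensity, the fitted threshold removes exactly those points while keeping the rest of the cloud.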
Method and system for contactless 3D fingerprint image acquisition
Embodiments of the present invention disclose a non-contact 3D fingerprint capturing apparatus and method. The apparatus includes a housing, and a circuit board and a fingerprint reader disposed in the housing. The circuit board includes a first control module; the fingerprint reader includes a fingerprint capturing module and a positioning module. The positioning module casts light onto a first position point on a finger; the fingerprint capturing module receives the light reflected from the first position point, converts the optical signal into an electrical signal, and sends the electrical signal to the first control module. The first control module determines, according to the electrical signal, whether the first position point is a standard point, a standard point being a light spot with a diameter less than a first threshold and an illumination intensity greater than a second threshold. If the first position point is a standard point, the fingerprint capturing module captures fingerprint images from multiple directions and transmits them to the first control module, which creates a 3D fingerprint image from those images. The embodiments further provide a corresponding non-contact 3D fingerprint capturing method.
DEPTH IMAGE ACQUIRING APPARATUS, CONTROL METHOD, AND DEPTH IMAGE ACQUIRING SYSTEM
The aim is to improve the performance of depth image acquisition. A depth image acquiring apparatus includes a light emitting diode, a TOF sensor, and a filter. The light emitting diode emits modulated light toward a detection area, i.e., the area in which a depth image is to be acquired for distance detection. The TOF sensor receives the light emitted by the light emitting diode after it is reflected by an object in the detection area, and outputs a signal used to produce the depth image. Of the light incident on the TOF sensor, the filter passes more light whose wavelength lies within a predetermined pass band than light whose wavelength lies outside that pass band. At least one of the light emitting diode, the TOF sensor, or the arrangement of the filter is controlled in accordance with the temperature of the light emitting diode or the TOF sensor. The present technique can be applied, for example, to a system that acquires a depth image using a TOF system.
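The TOF sensor's output signal is typically converted to distance from the phase shift of the modulated light. As a brief illustration of standard continuous-wave TOF ranging (the formula is textbook material; the function below is not taken from the patent):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Continuous-wave TOF range from the phase shift between emitted and
    received modulated light: d = c * delta_phi / (4 * pi * f_mod)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

At 20 MHz modulation, a phase shift of pi corresponds to about 3.75 m, half of the unambiguous range c / (2 f_mod) of roughly 7.5 m.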
DEPTH SENSING USING LINE PATTERN GENERATORS
A distance measurement system includes two or more line pattern generators (LPGs), a camera, and a processor. Each LPG emits a line pattern having a first set of dark portions separated by a respective first set of bright portions. A first line pattern has a first angular distance between adjacent bright portions, and a second line pattern has a second angular distance between adjacent bright portions. The camera captures at least one image of the first line pattern and the second line pattern. The camera is a first distance from the first LPG and a second distance from the second LPG. The processor identifies a target object illuminated by the first and second line patterns and determines a distance to the target object based on the appearance of the target object as illuminated by the first and second line patterns.
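Once the processor has identified which projected bright line illuminates the target, the distance follows from triangulation between the line pattern generator and the camera. A minimal sketch under an assumed geometry (angles measured from the normal to the baseline; this is generic triangulation, not the patent's actual computation):

```python
import math

def depth_from_line(baseline_m, proj_angle_rad, cam_angle_rad):
    """Triangulate the depth of a point lit by a projected line.

    baseline_m: distance between the line pattern generator and the camera.
    Angles are measured from the normal to the baseline (assumed convention).
    Depth z satisfies z * (tan(proj) + tan(cam)) = baseline.
    """
    denom = math.tan(proj_angle_rad) + math.tan(cam_angle_rad)
    if denom <= 0:
        raise ValueError("rays do not intersect in front of the baseline")
    return baseline_m / denom
```

For example, with a 0.1 m baseline, a line projected at 45 degrees and seen straight ahead by the camera triangulates to a depth of 0.1 m.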
Gesture operation method based on depth values and system thereof
A gesture operation method based on depth values and a system thereof are disclosed. A stereoscopic-image camera module acquires a first stereoscopic image, and an algorithm determines whether the first stereoscopic image includes a triggering gesture. The stereoscopic-image camera module then acquires a second stereoscopic image, and another algorithm determines whether the second stereoscopic image includes a command gesture, in which case the operation corresponding to the command gesture is performed.
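The two-stage flow (a triggering gesture arms the system, then a command gesture executes) is naturally a small state machine. A minimal sketch with hypothetical recognizer callbacks standing in for the two algorithms:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()       # waiting for the triggering gesture
    TRIGGERED = auto()  # armed; next image is checked for a command gesture

class GestureController:
    """Two-stage gesture state machine. The recognizers are hypothetical
    stand-ins for the patent's algorithms: is_trigger maps a stereoscopic
    image to a bool, decode_command maps one to a command or None."""

    def __init__(self, is_trigger, decode_command):
        self.state = State.IDLE
        self.is_trigger = is_trigger
        self.decode_command = decode_command

    def process(self, stereo_image):
        if self.state is State.IDLE:
            if self.is_trigger(stereo_image):
                self.state = State.TRIGGERED
            return None
        # Armed: look for a command gesture, then disarm either way.
        command = self.decode_command(stereo_image)
        self.state = State.IDLE
        return command
```

In a test, strings can stand in for stereoscopic images: a "fist" is ignored, an "open_palm" arms the controller, and a subsequent "swipe" yields its mapped command.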
STRUCTURED LIGHT ILLUMINATORS INCLUDING A CHIEF RAY CORRECTOR OPTICAL ELEMENT
The present disclosure describes techniques to improve the resolution and reduce the distortion of structured light projection in miniature wide-angle VCSEL array projection modules used for 3D imaging and gesture recognition. The projector module includes a chief ray corrector optical element, which directs the VCSEL beams along the projector lens chief ray paths. The VCSEL structured illumination projector using the chief ray corrector optical element can create a high resolution, low distortion structured light pattern over an extended distance range greater than the projector lens image focal range. The corrector element is placed close to the VCSEL array and can be implemented in various ways, including, for example, a refractive lens, a diffractive lens, or a microlens array, depending on the specific application requirements and optical configurations.
Image depth decoder and computing device
An image depth decoder includes an NIR image buffer, a reference image ring buffer and a pattern matching engine. The NIR image buffer stores an NIR image inputted by a stream. The reference image ring buffer stores a reference image inputted by a stream. The pattern matching engine is coupled to the NIR image buffer and the reference image ring buffer, and performs a depth computation according to the NIR image and the reference image to output at least one depth value.
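The pattern matching engine's depth computation can be illustrated with 1-D block matching between an NIR row and the corresponding row of the reference image: the disparity that minimizes a sum-of-absolute-differences cost is converted to depth through the pinhole model. This is a generic sketch with assumed parameters (block size, search range, focal length), not the decoder's actual engine:

```python
import numpy as np

def match_disparity(nir_row, ref_row, x, block=3, max_disp=8):
    """Find the disparity at column x by sliding a 1-D block from the NIR
    row along the reference row and minimizing the sum of absolute
    differences (a simplified stand-in for the pattern matching engine)."""
    half = block // 2
    patch = nir_row[x - half : x + half + 1].astype(int)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        lo = x - d - half
        if lo < 0:
            break  # candidate block would fall outside the reference row
        cand = ref_row[lo : lo + block].astype(int)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def disparity_to_depth(d, focal_px, baseline_m):
    """Pinhole-model depth from disparity: z = f * b / d."""
    return np.inf if d == 0 else focal_px * baseline_m / d
```

With a reference row containing a distinctive pattern and an NIR row shifted by two pixels, the matcher recovers a disparity of 2, which maps to depth via the assumed focal length and baseline.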
OPTICAL TOUCH APPARATUS AND WIDTH DETECTING METHOD THEREOF
An optical touch apparatus and a width detecting method thereof are provided. The optical touch apparatus includes at least two sensing components, a light emitting component, and a width detecting module. The sensing components are configured to sense a touch object located on a touch plane. The light emitting component serves as a light source for the touch plane and is disposed adjacent to one of the sensing components. The width detecting module is coupled to the light emitting component and to the other sensing component. The width detecting module controls the light emitting component to emit light, controls the other sensing component to sense the intensity of that light, and detects the distance between the sensing components according to the sensed intensity.
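One way such an intensity-to-distance mapping could work (the abstract does not specify the model) is a calibrated inverse-square falloff, where the sensed intensity at a known reference distance anchors the estimate:

```python
import math

def distance_from_intensity(measured, reference, reference_distance):
    """Estimate the sensor separation from sensed light intensity, assuming
    a free-space inverse-square falloff (hypothetical calibration model):
    I(d) = I_ref * (d_ref / d)^2  =>  d = d_ref * sqrt(I_ref / I)."""
    if measured <= 0:
        raise ValueError("measured intensity must be positive")
    return reference_distance * math.sqrt(reference / measured)
```

Under this model, when the measured intensity drops to a quarter of the calibration value, the estimated distance doubles.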
SYSTEM ARCHITECTURE AND METHOD OF AUTHENTICATING A 3-D OBJECT
A non-transitory computer-readable medium is encoded with a computer-readable program which, when executed by a processor, causes a computer to execute a method of authenticating a 3-D object with a 2-D camera. The method includes building a predetermined database; registering the 3-D object to a storage unit of a device comprising the 2-D camera, thereby creating a registered 3-D model of the 3-D object; and authenticating a test 3-D object by comparing it to the registered 3-D model.