Patent classifications
G06V10/143
SYSTEM FOR DETECTING SURFACE TYPE OF OBJECT AND ARTIFICIAL NEURAL NETWORK-BASED METHOD FOR DETECTING SURFACE TYPE OF OBJECT
An artificial neural network-based method for detecting a surface type of an object includes: receiving a plurality of object images captured at a plurality of spectra that are different from one another, each of the object images corresponding to one of the spectra; transforming each object image into a matrix, wherein the matrix has a channel value that represents the spectrum of the corresponding object image; and executing a deep learning program by using the matrices to build a predictive model for identifying a target surface type of the object. Accordingly, the speed of identifying the target surface type of the object is increased, further improving the product yield of the object.
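As a rough illustration of the encoding step described above, the sketch below tags each single-spectrum image with an extra channel whose value identifies its spectrum before the stacked matrices are passed to a deep-learning classifier. The spectrum-to-value mapping and function names are assumptions, not taken from the patent.

```python
import numpy as np

# Assumed encoding of each capture spectrum as a constant channel value.
SPECTRUM_IDS = {"uv": 0.0, "visible": 0.5, "ir": 1.0}

def image_to_matrix(image: np.ndarray, spectrum: str) -> np.ndarray:
    """Append a constant channel whose value represents the spectrum."""
    h, w = image.shape[:2]
    channel = np.full((h, w, 1), SPECTRUM_IDS[spectrum], dtype=np.float32)
    return np.concatenate([image.astype(np.float32), channel], axis=-1)

# Example: three images of the same object captured at different spectra.
images = [(np.random.rand(64, 64, 1), s) for s in SPECTRUM_IDS]
matrices = np.stack([image_to_matrix(img, s) for img, s in images])
print(matrices.shape)  # (3, 64, 64, 2) -> input for training the predictive model
```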
METHOD AND SYSTEM FOR ANALYZING INTESTINAL MICROFLORA OF A SUBJECT
A method and system for analyzing and/or estimating intestinal microflora of a subject. A digital image of a sample of feces of the subject is received by one or more processors. The digital image, and/or one or more features extracted from it, is provided as input to a trained machine learning model configured to output a classification based on that input. The one or more processors then determine data indicative of one or more properties of the intestinal microflora of the subject based on the output classification.
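A minimal sketch of the claimed inference flow, with an assumed feature extractor, a stand-in for the trained model, and a hypothetical mapping from the output class to microflora properties:

```python
import numpy as np

# Hypothetical class-to-property mapping; the patent does not specify it.
CLASS_TO_PROPERTIES = {
    0: {"diversity": "low"},
    1: {"diversity": "moderate"},
    2: {"diversity": "high"},
}

def extract_features(image: np.ndarray) -> np.ndarray:
    """Toy feature vector: simple intensity statistics of the image."""
    return np.array([image.mean(), image.std(), np.median(image)])

def classify(features: np.ndarray) -> int:
    """Stand-in for the trained machine learning model's prediction."""
    return int(np.clip(features[0] * 3, 0, 2))

image = np.random.rand(128, 128, 3)            # digital image of the sample
label = classify(extract_features(image))      # model's output classification
properties = CLASS_TO_PROPERTIES[label]        # data indicative of microflora
print(label, properties)
```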
SYSTEM AND METHOD FOR OBJECT RECOGNITION UNDER NATURAL AND/OR ARTIFICIAL LIGHT
Described herein are a system and a method for object recognition via a computer vision application. The system includes at least the following components: at least one object to be recognized, the object having object-specific reflectance and luminescence spectral patterns; a light source configured to illuminate a scene including the at least one object, the light source being designed to omit at least one spectral band of a spectral range of light when illuminating the scene, the at least one omitted spectral band being in the luminescence spectral pattern of the at least one object; at least one sensor configured to exclusively measure radiance data of the scene in at least one of the at least one omitted spectral band when the scene is illuminated by the light source; a data storage unit; and a data processing unit.
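One plausible reading of the recognition step is sketched below: because the light source omits a spectral band, any radiance sensed inside that band must be luminescence from the object itself, which can be matched against stored object-specific patterns. The pattern library and similarity metric are assumptions.

```python
import numpy as np

# Hypothetical per-object luminescence signatures; not from the patent.
LUMINESCENCE_LIBRARY = {
    "marker_a": np.array([0.9, 0.4, 0.1]),
    "marker_b": np.array([0.1, 0.5, 0.8]),
}

def recognize(in_band_radiance: np.ndarray) -> str:
    """Return the library entry whose pattern best matches the measurement."""
    r = in_band_radiance / np.linalg.norm(in_band_radiance)
    def score(pattern: np.ndarray) -> float:
        p = pattern / np.linalg.norm(pattern)
        return float(np.dot(p, r))          # cosine similarity
    return max(LUMINESCENCE_LIBRARY, key=lambda k: score(LUMINESCENCE_LIBRARY[k]))

# Radiance measured exclusively in the omitted spectral band.
measurement = np.array([0.85, 0.42, 0.12])
print(recognize(measurement))               # -> "marker_a"
```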
CODED LED OR OTHER LIGHT FOR TARGET IMAGING OR ANALYSIS
Modulation-encoded light, using different spectral-bin coded light components, can illuminate a stationary or (relatively) moving target object or scene. Response signal processing can use information about the respective different time-varying modulation functions to decode and recover information about a respective response parameter affected by the target object or scene. Electrical or optical modulation encoding can be used. LED-based spectroscopic analysis of a composition of a target (e.g., SpO2, glucose, etc.) can be performed; such analysis can optionally include decoding of encoded optical modulation functions. Baffles, apertures, or optics can be used, such as to constrain light provided by particular LEDs. Coded light illumination can be used with a focal plane array light imager receiving response light for inspecting a moving semiconductor or other target. Encoding can use orthogonal functions, such as an RGB illumination sequence, or a sequence of combinations of spectrally contiguous or non-contiguous colors.
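A minimal sketch of decoding by correlation, assuming each spectral bin is driven by a mutually orthogonal time-varying modulation function; the modulation matrix and noise model are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthogonal modulation functions for three spectral bins (e.g., R, G, B).
MODULATION = np.array([
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
], dtype=float)

true_response = np.array([0.7, 0.2, 0.5])        # per-bin target response
sensor_signal = true_response @ MODULATION       # time-multiplexed sum
sensor_signal += 0.01 * rng.standard_normal(4)   # measurement noise

# Decode: correlate the measured signal with each known modulation function.
decoded = MODULATION @ sensor_signal / (MODULATION ** 2).sum(axis=1)
print(np.round(decoded, 2))                      # ~ true_response per bin
```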
Under-Display Sensor Lamination
This document describes systems and techniques directed at under-display sensor lamination. In aspects, an electronic device having a mechanical frame designed with a bucket architecture includes an under-display sensor attached to one or more layers of a display panel stack. Such an implementation enables attachment of the under-display sensor to a protective layer, as opposed to attachment of the sensor directly to a display panel, minimizing the risk of delamination, as well as reducing damage to the display panel if delamination occurs and rework is attempted. Further, such an implementation removes the need for a mid-frame architecture, resulting in a thinner and lighter electronic device.
THREE-DIMENSIONAL MEASUREMENT DEVICE
A method includes capturing a frame including a 3D point cloud and a 2D image. A key point is detected in the 2D image; the key point is a candidate to be used as a feature. A 3D patch of a predetermined dimension is created that includes points surrounding a 3D position of the key point. The 3D position and the points of the 3D patch are determined from the 3D point cloud. Upon determining from the corresponding 3D coordinates that the points in the 3D patch lie on a single plane, a descriptor for the 3D patch is computed. The frame is registered with a second frame by matching the descriptor for the 3D patch with a second descriptor associated with a second 3D patch from the second frame. The 3D point cloud is aligned with multiple 3D point clouds based on the registered frame.
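The planarity test and patch descriptor below are a sketch under assumed details (a PCA-style plane fit and a normal-plus-extent descriptor); the patent does not specify these particulars.

```python
import numpy as np

def patch_is_planar(points: np.ndarray, tol: float = 1e-2) -> bool:
    """True if the patch's smallest principal extent is negligible."""
    centered = points - points.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[-1] < tol * max(s[0], 1e-12)

def patch_descriptor(points: np.ndarray) -> np.ndarray:
    """Plane normal plus in-plane extents as a 5-D descriptor (illustrative)."""
    centered = points - points.mean(axis=0)
    _, s, vt = np.linalg.svd(centered)
    return np.concatenate([vt[-1], s[:2]])

def match(desc_a: np.ndarray, desc_b: np.ndarray, thresh: float = 0.1) -> bool:
    return np.linalg.norm(desc_a - desc_b) < thresh

# Example: a nearly planar patch of points around a detected key point.
patch = np.random.rand(50, 2) @ np.array([[1, 0, 0.0], [0, 1, 0.0]])
patch += 1e-4 * np.random.rand(50, 3)
if patch_is_planar(patch):
    d = patch_descriptor(patch)
    print(match(d, d))   # frames would be registered via such descriptor matches
```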
APPARATUS AND METHOD FOR PROVIDING EXTENDED FUNCTION TO VEHICLE
Provided is a method for providing an extended function to a vehicle according to an embodiment, the method comprising the steps of: obtaining first image information required for providing an extended function through a first photographing unit; obtaining predetermined running information related to the running of a vehicle; performing image processing for providing an extended function through a first ECU on the basis of the running information and the first image information; and displaying a result of the image processing. Also provided are an extended function providing apparatus capable of performing the method, and a non-volatile computer-readable recording medium containing a computer program for performing the method.
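Purely as an illustration of the claimed sequence of steps, the sketch below strings together placeholder functions for the first photographing unit, the first ECU's image processing conditioned on the running information, and the display step; all names and the processing rule are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RunningInfo:
    """Assumed subset of running information related to the vehicle."""
    speed_kmh: float
    gear: str

def capture_first_image() -> bytes:
    """Stand-in for the first photographing unit."""
    return b"raw-frame"

def first_ecu_process(image: bytes, info: RunningInfo) -> str:
    """Image processing for the extended function, conditioned on running info."""
    if info.gear == "R":
        return f"rear-view overlay ({len(image)} bytes)"
    return f"forward assist overlay at {info.speed_kmh:.0f} km/h"

def display(result: str) -> None:
    print("display:", result)

display(first_ecu_process(capture_first_image(), RunningInfo(42.0, "D")))
```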
PORTABLE PROJECTION MAPPING DEVICE AND PROJECTION MAPPING SYSTEM
A device may provide, to a camera and a projector of a portable projection mapping device, first instructions for calibrating the camera and the projector, and may receive, based on the first instructions, calibration parameters for the camera and the projector. The device may calculate a stereo calibration between the camera and the projector based on the calibration parameters, and may provide, to the camera, second instructions for recognizing a reference instrument associated with the portable projection mapping device. The device may receive, based on the second instructions, binocular images, and may determine additional parameters based on the binocular images. The device may determine recognition parameters for recognizing the reference instrument, based on the binocular images and the additional parameters. The device may process the recognition parameters and the stereo calibration, with an optical tracking model, to generate and provide overlay visualization data to the portable projection mapping device.
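As one hedged example of how the stereo calibration might be used, the sketch below triangulates a reference-instrument point from two calibrated views with a linear (DLT) method; the projection matrices and baseline are assumed values for illustration, not parameters from the patent.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """Linear (DLT) triangulation of one point seen in two calibrated views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])    # assumed intrinsics
P_cam = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # camera at origin
P_proj = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])  # stereo baseline

point_3d = np.array([0.05, 0.02, 1.0, 1.0])                   # reference marker
x_cam = P_cam @ point_3d
x_cam = x_cam[:2] / x_cam[2]
x_proj = P_proj @ point_3d
x_proj = x_proj[:2] / x_proj[2]
print(np.round(triangulate(P_cam, P_proj, x_cam, x_proj), 3))  # ~ [0.05 0.02 1.0]
```

The recovered 3D position of the tracked reference instrument is what an optical tracking model could then map into projector coordinates to place the overlay visualization.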