Patent classifications
G06V10/431
MULTIMODAL DIAGNOSIS SYSTEM, METHOD AND APPARATUS
A method, system, and mobile device for use by a subject in diagnosing a disease, virus, or other illness. The system can capture and analyze olfactory information, providing a diagnosis based on that information. The system can also capture and output biometric data corresponding to the subject, and can further include at least one camera or video sensor that produces a diagnosis result and a microphone or other acoustic sensor that produces another diagnosis result. Other sensors providing corresponding diagnoses can be included. A sensor fusion component receives and combines the biometric data and the various diagnosis results, and further determines a confidence score. An event record creator compiles the biometric data and the confidence scores to create an event record having a higher confidence score with respect to a final diagnosis result. A data storage device stores the event record.
Image matching device
An image matching device that performs matching between a first image and a second image includes: a frequency characteristic acquisition unit configured to acquire a frequency characteristic of the first image and a frequency characteristic of the second image; a frequency characteristic synthesizing unit configured to synthesize the frequency characteristic of the first image and the frequency characteristic of the second image to generate a synthesized frequency characteristic; a determination unit configured to perform frequency transformation on the synthesized frequency characteristic to calculate a correlation coefficient map whose resolution coincides with a target resolution, and perform matching between the first image and the second image based on a matching score calculated from the correlation coefficient map; and a regulation unit configured to regulate the target resolution based on the matching score.
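The abstract leaves the exact frequency-domain operations unspecified; the following Python sketch assumes a phase-only-correlation variant, in which the two spectra are synthesized into a normalized cross-power spectrum and inverse-transformed into a correlation coefficient map whose peak serves as the matching score. The function name and the normalization epsilon are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def phase_correlation_score(img1, img2):
    """Synthesize the frequency characteristics of two images into a
    normalized cross-power spectrum, inverse-transform it into a
    correlation coefficient map, and return the peak as the matching
    score (sketch of one common frequency-domain matching scheme)."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12  # unit-magnitude spectrum
    corr_map = np.real(np.fft.ifft2(cross_power))
    return corr_map.max(), corr_map
```

For identical images the correlation map is an impulse at the origin with a peak near 1.0, while unrelated images yield a flat, low-valued map; a resolution-regulation unit as described could crop or pad the synthesized spectrum before the inverse transform to tune the map's resolution.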
Method and apparatus with fingerprint verification
A fingerprint verification method and apparatus are disclosed. The fingerprint verification method may include obtaining an input fingerprint image, determining a matching region between the input fingerprint image and a registered fingerprint image, determining a similarity corresponding to the matching region as a determined indication of similarity between the input fingerprint image and the registered fingerprint image, relating the determined similarity to the matching region to obtain a matching region-based similarity, determining a result of a verification of the input fingerprint image based on the matching region-based similarity, and indicating the result of the verification.
Spectral unmixing of fluorescence imaging using radiofrequency-multiplexed excitation data
Disclosed herein are embodiments of a system, a device, and a method for sorting a plurality of cells of a sample. A plurality of raw images comprising pixels of complex values in a frequency space can be generated from a plurality of channels of fluorescence intensity data of fluorescence emissions of fluorophores, the fluorescence emissions being elicited by fluorescence imaging using radiofrequency-multiplexed excitation in a temporal space. Spectral unmixing can be performed on the raw images prior to a sorting decision being made.
Learning method, computer program, classifier, and generator
For learning that uses a machine learning model for images, a learning method, a learning model, a classifier, and a generator that take human vision into consideration are provided. The learning method trains a machine learning model whose input or output is image data, using learning data that includes, at a predetermined ratio, either training data subjected to a process that leaves out components difficult to judge visually (thereby reducing the information amount) or generated data.
Individual identification system
An individual identification system includes: a storing unit for storing an image capture parameter in association with data characterizing a surface of a reference object; an acquiring unit that, when data characterizing a surface of an object to be matched is input, calculates an approximation degree between the input data and each data stored in the storing unit, and acquires the image capture parameter applied to the object to be matched from the storing unit based on the calculated approximation degree; a condition setting unit that sets an image capture condition determined by the acquired image capture parameter; an image capturing unit that acquires an image of the surface of the object to be matched under the set image capture condition; an extracting unit that extracts a feature value from the acquired image; and a matching unit that matches the extracted feature value against a registered feature value.
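As an illustration of the storing and acquiring units, here is a minimal Python sketch assuming that feature vectors characterize the object surface and that the approximation degree is taken as inverse Euclidean distance; the class name, method names, and distance metric are hypothetical choices, not specified by the abstract.

```python
import numpy as np

class CaptureParameterStore:
    """Store image-capture parameters in association with data
    characterizing reference-object surfaces, and acquire the parameter
    whose stored data best approximates the input data."""

    def __init__(self):
        self.entries = []  # list of (feature_vector, capture_parameter)

    def register(self, features, parameter):
        self.entries.append((np.asarray(features, dtype=float), parameter))

    def acquire(self, features):
        features = np.asarray(features, dtype=float)
        # Approximation degree modeled as inverse Euclidean distance:
        # the closest stored surface data wins.
        best = min(self.entries,
                   key=lambda e: np.linalg.norm(e[0] - features))
        return best[1]
```

The acquired parameter would then drive the condition setting unit (e.g., illumination or exposure) before the image of the object to be matched is captured.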
EVENT-BASED GRAPHICS ENGINE FOR FAST VIDEO RENDERING
Technology to provide event-based image generation includes generating, using event-based simulation, asynchronous spatio-temporal data based on input data, the input data representing information for a scene or an environment to be rendered, converting the asynchronous spatio-temporal data to complex wave data, and generating, via a neural network, one or more images based on the asynchronous spatio-temporal data and the complex wave data, wherein the neural network is trained to generate high-resolution images. An alternative embodiment includes generating, using event-based simulation, asynchronous spatio-temporal data based on input data, the input data representing information for a scene or an environment to be rendered, converting the asynchronous spatio-temporal data to complex wave data, and generating, via a neural network, one or more images based on the complex wave data and on data from a graphics device that processes the input data, wherein the neural network is trained to generate high-resolution images.
Method and apparatus for detecting obstacle
The present disclosure provides a method and apparatus for detecting an obstacle, and relates to the technical field of intelligent transportation. A specific implementation includes: acquiring a current image captured by a camera; inputting the current image into a pre-trained detection model to obtain a position of a detection frame of an obstacle and determine a first pixel coordinate of a grounding point in the current image; determining an offset between the current image and a template image; converting the first pixel coordinate into a world coordinate of the grounding point based on the offset; and outputting the world coordinate of the grounding point as a position of the obstacle in a world coordinate system. This embodiment solves the problem of camera jitter from an image perspective, greatly improves the robustness of the roadside perception system, and saves computing resources.
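A minimal sketch of the coordinate-conversion step, assuming the jitter offset is a pixel translation subtracted to bring the grounding point back into the template image frame, after which a pre-calibrated homography maps it to the world (ground) plane; the function signature and the homography-based mapping are assumptions not stated in the abstract.

```python
import numpy as np

def pixel_to_world(pixel, offset, homography):
    """Compensate camera jitter by shifting the grounding-point pixel by
    the (current image - template image) offset, then map the corrected
    pixel to world coordinates with a template-calibrated homography."""
    u = pixel[0] - offset[0]
    v = pixel[1] - offset[1]
    w = homography @ np.array([u, v, 1.0])  # homogeneous mapping
    return w[:2] / w[2]                     # dehomogenize to (X, Y)
```

Because only a 2-D offset is estimated per frame, the expensive per-frame recalibration of the camera is avoided, which is consistent with the claimed saving of computing resources.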
Optical detection apparatus and methods
An optical object detection apparatus and associated methods. The apparatus may comprise a lens (e.g., fixed-focal length wide aperture lens) and an image sensor. The fixed focal length of the lens may correspond to a depth of field area in front of the lens. When an object enters the depth of field area (e.g., due to a relative motion between the object and the lens) the object representation on the image sensor plane may be in-focus. Objects outside the depth of field area may be out of focus. In-focus representations of objects may be characterized by a greater contrast parameter compared to out of focus representations. One or more images provided by the detection apparatus may be analyzed in order to determine useful information (e.g., an image contrast parameter) of a given image. Based on the image contrast meeting one or more criteria, a detection indication may be produced.
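The abstract does not name a particular contrast parameter; the sketch below assumes the variance of a discrete Laplacian as the contrast measure and a simple threshold as the detection criterion, which matches the described behavior (in-focus representations score higher than out-of-focus ones).

```python
import numpy as np

def contrast_parameter(image):
    """Contrast measure assumed here: variance of a 4-neighbor discrete
    Laplacian. Sharp, in-focus content yields a larger value than
    defocused content."""
    lap = (-4.0 * image
           + np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0)
           + np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1))
    return lap.var()

def detect(image, threshold):
    """Produce a detection indication when the image contrast meets the
    criterion, i.e., when an object has entered the depth of field."""
    return contrast_parameter(image) >= threshold
```

In the apparatus described, this check would run on successive frames so that an object crossing into the fixed depth-of-field area triggers the indication as soon as its representation comes into focus.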
Event-based spatial transformation
A method for computing a spatial Fourier transform for an event-based system includes receiving an asynchronous event output stream including one or more events from a sensor. The method further includes computing a discrete Fourier transform (DFT) matrix based on dimensions of the sensor. The method also includes computing an output based on the DFT matrix and applying the output to an event processor.
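A sketch of the described computation, assuming each event carries (x, y, polarity) and that events are accumulated into a frame before the DFT matrices, sized to the sensor dimensions, are applied; in a fully event-driven implementation, each event's contribution to the transform could instead be added incrementally. The helper names are illustrative.

```python
import numpy as np

def dft_matrix(n):
    """DFT matrix whose size is derived from one sensor dimension."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

def event_dft(events, height, width):
    """Accumulate an asynchronous event stream of (x, y, polarity)
    tuples into a frame, then compute its 2-D spatial DFT as
    Wh @ frame @ Ww using the precomputed DFT matrices."""
    frame = np.zeros((height, width))
    for x, y, p in events:
        frame[y, x] += p
    Wh = dft_matrix(height)
    Ww = dft_matrix(width)
    return Wh @ frame @ Ww  # rows then columns; matrices are symmetric
```

The resulting spectrum matches `np.fft.fft2` of the accumulated frame, and the output would then be handed to the event processor described in the method.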