Patent classifications
G06V10/40
Method for size estimation by image recognition of specific target using given scale
The present invention relates to a method for size estimation by image recognition of a specific target using a given scale. First, a reference object is recognized in an image and the corresponding scale is established. Then the specific target is searched for, and its size is estimated according to the acquired scale.
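The arithmetic behind this scale-based estimation is straightforward: the reference object's known real size divided by its pixel extent gives a units-per-pixel scale, which is then applied to the target's pixel extent. A minimal sketch (function and parameter names are illustrative, not taken from the patent):

```python
def estimate_target_size(reference_pixels: float,
                         reference_real_size: float,
                         target_pixels: float) -> float:
    """Estimate a target's real-world size from its pixel extent,
    using a reference object of known size in the same image."""
    scale = reference_real_size / reference_pixels  # real-world units per pixel
    return target_pixels * scale

# Example: a reference object spanning 50 px and known to be 10 cm wide
# yields 0.2 cm/px, so a 120 px target is estimated at 24 cm.
```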
Diagnostic systems and methods for deep learning models configured for semiconductor applications
Methods and systems for performing diagnostic functions for a deep learning model are provided. One system includes one or more components executed by one or more computer subsystems. The one or more components include a deep learning model configured for determining information from an image generated for a specimen by an imaging tool. The one or more components also include a diagnostic component configured for determining one or more causal portions of the image that resulted in the information being determined and for performing one or more functions based on the determined one or more causal portions of the image.
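One generic way to determine which portions of an image caused a model's output is occlusion sensitivity: mask each region in turn and measure how much the output drops. This is a standard attribution technique offered here as an illustration, not necessarily the diagnostic component claimed in the patent; the `predict` callable stands in for the deep learning model:

```python
import numpy as np

def occlusion_map(image, predict, patch=8):
    """Score each patch by how much masking it reduces the model output.
    `predict` maps an image array to a scalar score (a hypothetical stub)."""
    h, w = image.shape[:2]
    base = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - predict(occluded)
    return heat  # high values mark causal portions of the image
```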
Localization and mapping method and moving apparatus
A localization and mapping method localizes and maps a moving apparatus during a moving process. The localization and mapping method includes an image capturing step, a feature point extracting step, a flag object identifying step, and a localizing and mapping step. The image capturing step includes capturing an image frame at each of a plurality of time points in the moving process by a camera unit. The flag object identifying step includes identifying, in accordance with a flag database, whether the image frame includes a flag object among the plurality of feature points. The flag database includes a plurality of dynamic objects, and the flag object corresponds to one of the dynamic objects. The localizing and mapping step includes performing localization and mapping in accordance with the image frames captured in the moving process and the flag objects therein.
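A plausible use of the flag database is to discard feature points that fall on known dynamic objects before localization, so moving objects do not corrupt the map. A hypothetical filtering step (all names and the bounding-box representation are assumptions, not claim language) might look like:

```python
def filter_dynamic_points(feature_points, flag_boxes):
    """Keep only feature points lying outside all flagged dynamic-object
    bounding boxes. Boxes are (x_min, y_min, x_max, y_max) tuples."""
    def inside(pt, box):
        x, y = pt
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1
    return [p for p in feature_points
            if not any(inside(p, b) for b in flag_boxes)]
```

The surviving static points would then feed the localizing and mapping step.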
Information processing device and recognition support method
In order to acquire recognition environment information impacting the recognition accuracy of a recognition engine, an information processing device 100 comprises a detection unit 101 and an environment acquisition unit 102. The detection unit 101 detects a marker, which has been disposed within a recognition target zone for the purpose of acquiring information, from an image captured by means of an imaging device which captures images of objects located within the recognition target zone. The environment acquisition unit 102 acquires the recognition environment information based on image information of the detected marker. The recognition environment information is information representing the way in which a recognition target object is reproduced in an image captured by the imaging device when said imaging device captures an image of the recognition target object located within the recognition target zone.
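The abstract does not specify which environment measures are derived from the marker; one simple illustrative possibility is computing brightness and contrast statistics over the detected marker region, since these affect how a recognition target is reproduced in the image (names and measures here are assumptions):

```python
import numpy as np

def environment_info(image, marker_box):
    """Derive simple recognition-environment measures (mean brightness,
    contrast) from the detected marker region. Purely illustrative."""
    x0, y0, x1, y1 = marker_box
    region = np.asarray(image, dtype=float)[y0:y1, x0:x1]
    return {"brightness": region.mean(), "contrast": region.std()}
```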
Identifying the quality of the cell images acquired with digital holographic microscopy using convolutional neural networks
A system for performing adaptive focusing of a microscopy device comprises a microscopy device configured to acquire microscopy images depicting cells and one or more processors executing instructions for performing a method that includes extracting pixels from the microscopy images. Each set of pixels corresponds to an independent cell. The method further includes using a trained classifier to assign one of a plurality of image quality labels to each set of pixels indicating the degree to which the independent cell is in focus. If the image quality labels corresponding to the sets of pixels indicate that the cells are out of focus, a focal length adjustment for adjusting focus of the microscopy device is determined using a trained machine learning model. Then, executable instructions are sent to the microscopy device to perform the focal length adjustment.
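The adaptive-focus loop described above can be sketched as follows, with the trained classifier, the adjustment model, and the device API stubbed out as hypothetical callables:

```python
def adaptive_focus_step(cell_patches, classify, predict_adjustment, apply_focus):
    """Classify each per-cell pixel set as in/out of focus; if any are out
    of focus, compute a focal-length adjustment and send it to the device.
    All callables are stand-ins for the trained models and microscope API."""
    labels = [classify(p) for p in cell_patches]
    if any(label == "out_of_focus" for label in labels):
        delta = predict_adjustment(cell_patches, labels)
        apply_focus(delta)  # e.g., executable instructions to the microscope
        return delta
    return 0.0  # all cells in focus: no adjustment needed
```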
Device with built-in bill capture, analysis, and execution
Systems and methods for secure and efficient bill capture, analysis, and execution are provided. A method may include capturing, via a camera embedded in a smart card, an image of a bill. The bill may include a plurality of text fields. The method may include processing the text fields via a microprocessor embedded in the smart card. The method may include determining, based at least in part on the processing of the text fields, a balance amount and a payment recipient associated with the bill. The method may also include executing a payment for the balance amount from an account associated with a user of the smart card to an account associated with the payment recipient. The executing may be performed via a wireless communication element embedded in the smart card which may be configured to provide wireless communication between the smart card and a payment gateway.
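The determination of a balance amount and payment recipient from the processed text fields could, in a minimal sketch, be a pattern match over labeled fields. The field labels and formats below are assumptions for illustration, not taken from the patent:

```python
import re

def parse_bill(text_fields):
    """Extract a balance amount and payment recipient from OCR'd text
    fields such as 'Amount Due: $42.50' and 'Pay To: Acme Utilities'."""
    balance, recipient = None, None
    for field in text_fields:
        m = re.search(r"amount due:\s*\$?([\d.]+)", field, re.I)
        if m:
            balance = float(m.group(1))
        m = re.search(r"pay to:\s*(.+)", field, re.I)
        if m:
            recipient = m.group(1).strip()
    return balance, recipient
```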