G06V10/235

Generating Computer Augmented Maps from Physical Maps
20230050644 · 2023-02-16

A method performed by a computing device obtains a digital image of a physical map, identifies features in the digital image, and obtains map augmentation information based on the identified features. The method then generates an augmented map based on the map augmentation information, and provides the augmented map for display. Related mobile devices and computer program products are disclosed.

SEARCH QUERY GENERATION BASED UPON RECEIVED TEXT

In an example, a first set of text may be received from a client device. A set of content items may be selected from among content items based upon the first set of text and a plurality of sets of content item text associated with the content items. A set of terms may be determined based upon the first set of text and the set of content items. A similarity profile associated with the set of terms may be generated. The similarity profile is indicative of similarity scores associated with similarities between terms of the set of terms. Relevance scores associated with the set of terms may be determined based upon the similarity profile. One or more search terms may be selected from among the set of terms based upon the relevance scores. A search may be performed based upon the one or more search terms.
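As a rough illustration of the term-selection steps above, the sketch below builds a similarity profile from pairwise term similarities, derives a relevance score per term, and keeps the top-scoring terms as search terms. This is an assumption-laden Python stand-in, not the patented method: character-trigram Jaccard similarity is a hypothetical substitute for the unspecified similarity measure, and mean similarity to the other terms is one plausible relevance score.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard similarity over character trigrams (illustrative stand-in
    for the unspecified similarity measure)."""
    def grams(s):
        return {s[i:i + 3] for i in range(max(1, len(s) - 2))}
    ga, gb = grams(a), grams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def select_search_terms(terms, k=2):
    """Generate a similarity profile (pairwise scores between terms),
    derive per-term relevance scores from it, and select the top-k
    terms as search terms."""
    profile = {(a, b): similarity(a, b) for a in terms for b in terms if a != b}
    relevance = {t: sum(profile[(t, o)] for o in terms if o != t) / (len(terms) - 1)
                 for t in terms}
    return sorted(terms, key=relevance.get, reverse=True)[:k]
```

The selected terms would then feed the final search step.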

Image forming apparatus and information processing method

An image forming apparatus includes a display device, a first detection unit, a second detection unit, a first acquisition unit, and an output unit. The display device displays at least one image for display. The first detection unit detects a user's selection of an image for display from among the at least one image for display. The second detection unit detects an input, to a drawing region, of drawing that indicates an image quality abnormality with respect to a target image corresponding to the selected image for display. The first acquisition unit acquires drawing information based on the input of the drawing detected by the second detection unit. The output unit outputs the drawing information acquired by the first acquisition unit.

Eye image selection
11579694 · 2023-02-14

Systems and methods for eye image set selection, eye image collection, and eye image combination are described. Embodiments of the systems and methods for eye image set selection can include comparing a determined image quality metric with an image quality threshold to identify an eye image passing the image quality threshold, and selecting, from a plurality of eye images, a set of eye images that passes the image quality threshold.
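The selection step reduces to a threshold filter over a computed quality metric. A minimal Python sketch follows; intensity variance is a hypothetical stand-in for whatever blur, occlusion, or contrast metric an implementation would actually use, since the abstract leaves the metric open.

```python
from statistics import pvariance

def contrast_metric(pixels):
    # Hypothetical image-quality metric: intensity variance as a crude
    # proxy for contrast/sharpness (the abstract does not fix the metric).
    return pvariance(pixels)

def select_eye_images(images, threshold):
    # Compare each image's quality metric with the image quality
    # threshold and keep the set of eye images that passes it.
    return [img for img in images if contrast_metric(img) >= threshold]
```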

WORK ANALYZING DEVICE AND WORK ANALYZING METHOD

Provided are a device and a method which enable easy analysis and evaluation of a worker's work efficiency without burdensome tasks, and easy determination of the worker's skill level by comparing the worker's work efficiency with that of another worker or with a past record of the same worker. Analytical information is produced by estimating the worker's joint positions based on a video; acquiring time-series data on the joint positions; determining work efficiency based on the time-series data; acquiring a target range (a part of the work process with low work efficiency); outputting an image of the target range overlaid on a graph of the time-series data; and outputting a posture image of the worker overlaid on the video. The analytical information may include information on the working activity to be analyzed and information on a chosen model working activity with high work efficiency.
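The "target range" acquisition step can be sketched in a few lines. The abstract does not say how the range is chosen; "longest contiguous run of the efficiency time series below a threshold" is one reasonable reading, assumed here for illustration.

```python
def target_range(efficiency, threshold):
    # Locate the longest contiguous stretch of the work-efficiency time
    # series below a threshold: the low-efficiency part of the process
    # to overlay on the graph. Returns half-open (start, end) indices,
    # (0, 0) if no sample falls below the threshold.
    best, start = (0, 0), None
    for i, e in enumerate(list(efficiency) + [threshold]):  # sentinel closes the final run
        if e < threshold:
            start = i if start is None else start
        elif start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best
```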

SYSTEMS AND METHODOLOGIES FOR AUTOMATED CLASSIFICATION OF IMAGES OF STOOL IN DIAPERS

A method involves use of multiple convolutional neural networks and multiple segmentation masks to programmatically generate a stool rating for a digital image of a diaper with stool. The method includes generating, by a first convolutional neural network, a first mask representing an identification of an area of the digital image that corresponds to stool, and a second mask representing an identification of an area of the digital image that corresponds to a diaper. The method further includes generating a third mask representing an intersection of the first and second masks, and generating a modified digital image utilizing the third mask. The method further includes determining, by a second convolutional neural network, a stool rating for the digital image of the diaper with stool by utilizing the modified digital image as input for the second convolutional neural network.
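The mask-intersection and masking steps are straightforward pixelwise operations. A minimal sketch over nested lists of 0/1 masks follows (real implementations would use tensors; this only illustrates the logic of the third mask and the modified image):

```python
def intersect_masks(mask_a, mask_b):
    # Third mask = elementwise AND of the stool mask and the diaper
    # mask, keeping only pixels identified as stool inside the diaper.
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(mask_a, mask_b)]

def apply_mask(image, mask, fill=0):
    # Zero out pixels outside the intersection mask to form the
    # modified image fed to the second (rating) network.
    return [[px if m else fill for px, m in zip(ri, rm)]
            for ri, rm in zip(image, mask)]
```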

System and method for training an artificial intelligence (AI) classifier of scanned items

Systems and methods for training an artificial intelligence (AI) classifier of scanned items. The training data may include a set of sample raw scans, including raw scans of in-class objects and not-in-class raw scans. An AI classifier may be configured to sample raw scans in the training set, measure errors in the results, update classifier parameters based on the errors, and detect completion of training.
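The described loop (sample, measure error, update parameters, detect completion) can be sketched with a one-feature perceptron standing in for the AI classifier; this is an illustrative assumption, not the patented model.

```python
import random

def train_classifier(training_set, epochs=200, lr=0.1):
    # Each sample is (raw_scan_value, label), label 1 for in-class
    # objects and 0 for not-in-class scans. A single-feature perceptron
    # stands in for the classifier.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(training_set)          # sample raw scans in the training set
        errors = 0
        for x, label in training_set:
            pred = 1 if w * x + b > 0 else 0
            err = label - pred                # measure the error in the result
            if err:
                errors += 1
                w += lr * err * x             # update classifier parameters
                b += lr * err
        if errors == 0:                       # detect completion of training
            break
    return w, b
```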

Techniques for image-based search using touch controls

Techniques for image-based search using touch controls are described. An apparatus may comprise: a processor circuit; a gesture component operative on the processor circuit to receive gesture information from a touch-sensitive screen displaying an image and generate a selection area corresponding to the gesture information; a capture component operative on the processor circuit to extract an image portion of the image corresponding to the selection area; and a search component operative on the processor circuit to perform an image-based search using the extracted image portion. Other embodiments are described and claimed.
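The capture component's extraction step can be sketched simply. Interpreting the gesture's selection area as the bounding box of its touch points is an assumption here; the abstract does not fix the selection shape.

```python
def extract_selection(image, gesture_points):
    # Turn a touch gesture (list of (x, y) points) into a rectangular
    # selection area via its bounding box, then crop that portion of
    # the image for the image-based search.
    xs = [x for x, _ in gesture_points]
    ys = [y for _, y in gesture_points]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]
```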

INTELLIGENT MULTI-SCALE MEDICAL IMAGE LANDMARK DETECTION

Intelligent multi-scale image parsing determines the optimal size of each observation by an artificial agent at a given point in time while searching for the anatomical landmark. The artificial agent begins searching image data with a coarse field-of-view and iteratively decreases the field-of-view to locate the anatomical landmark. After searching at a coarse field-of-view, the artificial agent increases resolution to a finer field-of-view to analyze context and appearance factors to converge on the anatomical landmark. The artificial agent determines applicable context and appearance factors at each effective scale.
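The coarse-to-fine search strategy can be illustrated with a simple grid search that halves its scale each round; the learned agent and its context/appearance analysis are abstracted into an assumed `score` callable, so this is a sketch of the search schedule only, not the patented agent.

```python
def find_landmark(image, score, start_scale=8):
    # Coarse-to-fine sketch: scan the whole image at a coarse scale,
    # then repeatedly halve the scale and refine within a shrinking
    # neighbourhood of the current best position. `score` rates how
    # landmark-like a position looks (hypothetical stand-in for the
    # agent's learned policy).
    h, w = len(image), len(image[0])
    scale = start_scale
    # Coarse field-of-view: evaluate a sparse grid over the full image.
    best = max(((y, x) for y in range(0, h, scale) for x in range(0, w, scale)),
               key=lambda p: score(image, p))
    while scale > 1:
        scale //= 2
        y0, x0 = best
        # Finer field-of-view: refine around the current best estimate.
        cand = [(y, x)
                for y in range(max(0, y0 - 2 * scale), min(h, y0 + 2 * scale + 1), scale)
                for x in range(max(0, x0 - 2 * scale), min(w, x0 + 2 * scale + 1), scale)]
        best = max(cand, key=lambda p: score(image, p))
    return best
```

Note the sketch assumes the score landscape is smooth enough that the coarse pass lands near the true landmark, mirroring the role of context at coarse scales.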

Machine learning inference user interface

Two-dimensional objects are displayed upon a user interface; user input selects an area and selects a machine learning model for execution, and the results are displayed as an overlay over the objects in the user interface. User input selects a second model for execution; the result of this execution is displayed as a second overlay over the objects. A first overlay from a model may be displayed over a set of objects in a user interface, with a ground truth corresponding to the objects displayed as a second overlay. User input selects the ground-truth overlay as a reference and causes a comparison of the first overlay with the ground-truth overlay; visual data from the comparison is displayed on the user interface. A comparison of M inference overlays with N reference overlays may also be performed, with visual data from the comparison displayed on the interface.
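The overlay-versus-ground-truth comparison can be sketched as a pixelwise operation over binary overlay masks. Returning an agreement mask plus an intersection-over-union score is one plausible choice of "visual data from the comparison", assumed here since the abstract leaves the metric open.

```python
def compare_overlays(inference, reference):
    # Pixelwise comparison of an inference overlay against a ground
    # truth reference overlay (both binary masks as nested lists).
    # Returns an agreement mask (1 where the overlays match) and an
    # intersection-over-union score.
    agree = [[int(a == b) for a, b in zip(ra, rb)]
             for ra, rb in zip(inference, reference)]
    inter = sum(a and b for ra, rb in zip(inference, reference)
                for a, b in zip(ra, rb))
    union = sum(a or b for ra, rb in zip(inference, reference)
                for a, b in zip(ra, rb))
    return agree, (inter / union if union else 1.0)
```

An M-by-N comparison would simply apply this to each (inference, reference) pair.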