Patent classifications
G06K9/62
Preview Image Acquisition User Interface for Linear Panoramic Image Stitching
A system and method for capturing a series of images to create a single linear panoramic image is disclosed. The method includes capturing an image, dynamically comparing the previously captured image with a preview image on a display of a capture device until a predetermined overlap threshold is satisfied, generating a user interface that provides feedback on the display to guide the movement of the capture device, and capturing the preview image once it has sufficient overlap with the previously captured image and little to no tilt, for use in creating the linear panorama.
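The capture condition described above can be sketched as a simple check. The box coordinates, the 30% overlap threshold, and the tilt tolerance below are illustrative assumptions; the abstract does not specify concrete values.

```python
def overlap_ratio(prev_box, preview_box):
    """Fraction of the previous frame covered by the preview frame.

    Boxes are (left, top, right, bottom) in a shared scene coordinate
    frame (hypothetical; a real device would estimate the preview
    frame's position from sensor data or feature matching).
    """
    left = max(prev_box[0], preview_box[0])
    top = max(prev_box[1], preview_box[1])
    right = min(prev_box[2], preview_box[2])
    bottom = min(prev_box[3], preview_box[3])
    if right <= left or bottom <= top:
        return 0.0
    inter = (right - left) * (bottom - top)
    prev_area = (prev_box[2] - prev_box[0]) * (prev_box[3] - prev_box[1])
    return inter / prev_area

OVERLAP_THRESHOLD = 0.3  # assumed value; the abstract leaves it unspecified

def ready_to_capture(prev_box, preview_box, tilt_deg, max_tilt_deg=2.0):
    """Capture only when overlap is sufficient and tilt is negligible."""
    return (overlap_ratio(prev_box, preview_box) >= OVERLAP_THRESHOLD
            and abs(tilt_deg) <= max_tilt_deg)
```

The user interface would poll this check on every preview frame and render guidance (e.g. arrows or a target outline) until it returns true.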
Quality Control of Automated Whole-slide Analyses
The subject disclosure presents systems and methods for automatically selecting meaningful regions (fields of view, or FOVs) on a whole-slide image and performing quality control on the resulting collection of FOVs. Density maps may be generated that quantify the local density of detection results. These heat maps, as well as combinations of maps (such as a local sum, ratio, etc.), may be provided as input to an automated FOV selection operation. The selection operation may select regions of each heat map that represent extreme and average representative regions, based on one or more rules defined to generate the list of candidate FOVs. The rules may generally be formulated such that the FOVs chosen for quality control are the ones that require the most scrutiny and will benefit the most from assessment by an expert observer.
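One plausible reading of the rule-based selection is to take the density extremes plus a near-average representative from each map. The rule set and the (region_id, density) representation below are assumptions for illustration; the disclosure only says "one or more rules".

```python
def select_fovs(density_map, k_extreme=2, k_average=1):
    """Pick candidate FOVs from a density map given as (region_id, density) pairs.

    Assumed rule set: the highest- and lowest-density regions need the
    most scrutiny, plus the region(s) closest to the mean density as
    average representatives.
    """
    ranked = sorted(density_map, key=lambda rv: rv[1])
    lows = ranked[:k_extreme]
    highs = ranked[-k_extreme:]
    mean = sum(v for _, v in density_map) / len(density_map)
    average = sorted(density_map, key=lambda rv: abs(rv[1] - mean))[:k_average]
    # Deduplicate while preserving order: extremes first, then averages.
    seen, fovs = set(), []
    for region_id, _ in lows + highs + average:
        if region_id not in seen:
            seen.add(region_id)
            fovs.append(region_id)
    return fovs
```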
MACHINE LEARNING IMAGE PROCESSING
A machine learning image processing system performs natural language processing (NLP) and auto-tagging for an image matching process. The system facilitates an interactive process, e.g., through a mobile application, to obtain an image and supplemental user input from a user to execute an image search. The supplemental user input may be provided from a user as speech or text, and NLP is performed on the supplemental user input to determine user intent and additional search attributes for the image search. Using the user intent and the additional search attributes, the system performs image matching on stored images that are tagged with attributes through an auto-tagging process.
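A minimal sketch of the pipeline, with a keyword-based stand-in for the NLP step (a production system would use a trained intent and entity model) and a hypothetical attribute vocabulary:

```python
# Hypothetical vocabularies, not taken from the disclosure.
ATTRIBUTE_VOCAB = {"red", "blue", "leather", "sleeveless"}
INTENT_KEYWORDS = {"find": "search", "show": "search", "buy": "purchase"}

def parse_supplemental_input(text):
    """Extract a coarse user intent and search attributes from the
    user's speech transcript or typed text."""
    tokens = text.lower().split()
    intent = next((INTENT_KEYWORDS[t] for t in tokens if t in INTENT_KEYWORDS),
                  "search")
    attributes = {t for t in tokens if t in ATTRIBUTE_VOCAB}
    return intent, attributes

def match_images(tagged_images, attributes):
    """Rank auto-tagged images by how many requested attributes they carry."""
    scored = [(len(attributes & set(tags)), image_id)
              for image_id, tags in tagged_images.items()]
    return [image_id for score, image_id in sorted(scored, reverse=True)
            if score > 0]
```

The auto-tagging process itself (populating `tagged_images`) would be a separate model run over the stored images.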
MEASURING AND MONITORING SKIN FEATURE COLORS, FORM AND SIZE
Kits, diagnostic systems and methods are provided, which measure the distribution of colors of skin features by comparison to calibrated colors which are co-imaged with the skin feature. The colors on the calibration template (calibrator) are selected to represent the expected range of feature colors under various illumination and capturing conditions. The calibrator may also comprise features with different forms and size for calibrating geometric parameters of the skin features in the captured images. Measurements may be enhanced by monitoring over time changes in the distribution of colors, by measuring two and three dimensional geometrical parameters of the skin feature and by associating the data with medical diagnostic parameters. Thus, simple means for skin diagnosis and monitoring are provided which simplify and improve current dermatologic diagnostic procedures.
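The color-calibration step can be illustrated as fitting a per-channel linear correction from the co-imaged calibrator patches to their known reference values. A least-squares gain-and-offset fit is one simple choice; the disclosure does not commit to a specific model.

```python
def fit_channel_correction(measured, reference):
    """Least-squares gain and offset mapping measured calibrator patch
    values (one color channel) to their known reference values."""
    n = len(measured)
    mean_x = sum(measured) / n
    mean_y = sum(reference) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(measured, reference))
    var = sum((x - mean_x) ** 2 for x in measured)
    gain = cov / var
    offset = mean_y - gain * mean_x
    return gain, offset

def correct(value, gain, offset):
    """Apply the fitted correction to a skin-feature pixel value."""
    return gain * value + offset
```

Applied per channel, this normalizes skin-feature colors across illumination and capture conditions before they are compared or monitored over time.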
Password-less software system user authentication
Data is received as part of an authentication procedure to identify a user. Such data characterizes a user-generated biometric sequence that is generated by the user interacting with at least one input device according to a desired biometric sequence. Thereafter, using the received data and at least one machine learning model trained using empirically derived historical data generated by a plurality of user-generated biometric sequences (e.g., historical user-generated biometric sequences according to the desired biometric sequence, etc.), the user is authenticated if an output of the at least one machine learning model is above a threshold. Data characterizing the authentication can then be provided. Related apparatus, systems, techniques and articles are also described.
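The decision step reduces to a comparison on the model's output. The 0.9 threshold and the audit-record format below are illustrative assumptions:

```python
def authenticate(model_score, threshold=0.9):
    """Accept the user only when the model's score clears the threshold.

    `model_score` stands in for the output of a machine learning model
    trained on historical user-generated biometric sequences; the
    threshold value is an assumption, not taken from the disclosure.
    """
    return model_score > threshold

def audit_record(user_id, model_score, threshold=0.9):
    """Data characterizing the authentication, e.g. for downstream logging."""
    return {"user": user_id,
            "score": model_score,
            "authenticated": authenticate(model_score, threshold)}
```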
Image content obfuscation using a neural network
The technology described herein obfuscates image content using a local neural network and a remote neural network. The local network runs on a local computer system and a remote classifier runs in a remote computing system. Together, the local network and the remote classifier are able to classify images without the image ever leaving the local computer system. In aspects of the technology, the local network receives a local image and creates a transformed object. The transformed object may be generated by processing the image with the local neural network to generate a multidimensional array and then randomly shuffling data locations within the multidimensional array. The transformed object is communicated to the remote classifier in the remote computing system for classification. The remote classifier may not have the seed used to deterministically scramble the spatial arrangement of data within the multidimensional array.
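The seeded, invertible shuffle can be sketched with Python's `random` module, flattening the multidimensional array to a list for brevity. Only a party holding the seed can undo the scramble, which is why withholding the seed from the remote classifier obfuscates the content:

```python
import random

def transform(feature_map, seed):
    """Deterministically scramble feature locations with a secret seed.

    `feature_map` models the array produced by the local neural
    network, flattened to a list for simplicity.
    """
    order = list(range(len(feature_map)))
    random.Random(seed).shuffle(order)
    return [feature_map[i] for i in order]

def untransform(shuffled, seed):
    """Invert the shuffle; only the holder of the seed can do this."""
    order = list(range(len(shuffled)))
    random.Random(seed).shuffle(order)
    out = [None] * len(shuffled)
    for dst, src in enumerate(order):
        out[src] = shuffled[dst]
    return out
```

In the described system the remote classifier would be trained to operate on the transformed object directly, so `untransform` is never needed remotely; it is shown here only to demonstrate that the transform is seed-recoverable.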
IMAGE RECOGNITION ACCELERATOR, TERMINAL DEVICE, AND IMAGE RECOGNITION METHOD
An image recognition accelerator, a terminal device, and an image recognition method are provided. The image recognition accelerator includes a dimensionality-reduction processing module, an NVM, and an image matching module. The dimensionality-reduction processing module first reduces a dimensionality of first image data. The NVM writes, into a first storage area of the NVM according to a specified first current, ω low-order bits of each numeric value of the first image data on which dimensionality reduction has been performed, and writes, into a second storage area of the NVM according to a specified second current, (N−ω) high-order bits of each numeric value of the first image data on which dimensionality reduction has been performed. The image matching module determines whether an image library stored in the NVM includes image data matching the first image data on which dimensionality reduction has been performed.
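The write scheme partitions each N-bit value into ω low-order bits (stored in one NVM area, programmed with the first current) and N−ω high-order bits (stored in another, programmed with the second current). The bit partitioning itself is sketched below; N = 8 and ω = 4 are arbitrary choices, and the NVM programming currents are out of scope:

```python
N = 8      # assumed bits per numeric value
OMEGA = 4  # assumed count of low-order bits for the first storage area

def split_bits(value, n=N, omega=OMEGA):
    """Split an n-bit value into its omega low-order bits and its
    (n - omega) high-order bits."""
    low = value & ((1 << omega) - 1)
    high = value >> omega
    return low, high

def recombine(low, high, omega=OMEGA):
    """Reassemble the original value from the two storage areas."""
    return (high << omega) | low
```

Storing the two bit groups with different programming currents lets the accelerator trade write energy and reliability differently for the less-significant and more-significant parts of each value.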
WARNING DEVICE, WARNING METHOD, AND WARNING PROGRAM
To issue an appropriate warning based on detection of an object even under circumstances where it is difficult to determine the outside environment of a movable body, a warning device according to the present invention includes: an image acquisition unit configured to acquire a plurality of images based respectively on a plurality of filter characteristics; a detection unit configured to detect a specified object in each of the plurality of acquired images; and a warning unit configured to issue a specific warning when the object is detected in at least one of the plurality of acquired images. The warning unit issues a higher-level warning when the object is detected in all of the plurality of images than when it is detected in only some of them.
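The escalation rule maps the per-filter detection results to a warning level. The level names below are assumptions; the point is only that unanimous detection outranks partial detection:

```python
def warning_level(detections):
    """Map per-filter detection results to a warning level.

    `detections` holds one boolean per filtered image (e.g. one image
    per filter characteristic, such as visible and near-infrared).
    Detection in every image yields a higher-level warning than
    detection in only some of them.
    """
    hits = sum(bool(d) for d in detections)
    if hits == 0:
        return "none"
    if hits == len(detections):
        return "high"
    return "normal"
```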
DATA ANALYSIS SYSTEM, DATA ANALYSIS METHOD, AND DATA ANALYSIS PROGRAM
A data analysis system according to the present invention includes: a training data acquisition unit that acquires a combination of training data including information about a medicinal drug and a plurality of pieces of classification information for classifying the training data on the basis of a plurality of classification standards; a learning unit that learns a pattern of the information about the medicinal drug from distribution of data elements which constitute at least part of the training data and appear according to the classification information; an unknown data acquisition unit that acquires unknown data from a specified information source; a data evaluation unit that evaluates the acquired unknown data on the basis of the learned pattern with respect to each of the plurality of classification standards; and a presentation unit that presents the information about the medicinal drug included in the unknown data to a user according to evaluation by the data evaluation unit.
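A toy stand-in for the learn/evaluate loop: token frequencies per classification label serve as the learned "pattern" of data-element distribution, and unknown data is scored against each label. The tokenization and scoring below are illustrative assumptions; the real system would learn far richer distributions.

```python
from collections import Counter

def learn_patterns(training_data):
    """Count data-element (token) frequencies per classification label.

    `training_data` is a list of (text about a medicinal drug, label)
    pairs; each label plays the role of one classification standard.
    """
    patterns = {}
    for text, label in training_data:
        patterns.setdefault(label, Counter()).update(text.lower().split())
    return patterns

def evaluate(unknown_text, patterns):
    """Score unknown data against each learned pattern; higher means a
    better match to that classification standard."""
    tokens = unknown_text.lower().split()
    return {label: sum(counts[t] for t in tokens) / max(sum(counts.values()), 1)
            for label, counts in patterns.items()}
```

The presentation unit would then surface the drug information from unknown data whose scores exceed some relevance criterion.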
TEMPORAL-BASED VISUALIZED IDENTIFICATION OF COHORTS OF DATA POINTS PRODUCED FROM WEIGHTED DISTANCES AND DENSITY-BASED GROUPING
A user-selected group of data points is received. Weighted distances between further data points and the user-selected group of data points are computed, the weighted distances computed based on respective weights assigned to dimensions of the data points. Density-based grouping of the further data points is performed based on the computed weighted distances, the density-based grouping producing cohorts of data points. A graphical visualization is generated including pixels representing the user-selected group of data points and the cohorts of data points. The graphical visualization provides a temporal-based visualized identification of the cohorts with the user-selected group of data points.
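A compact sketch: a per-dimension weighted Euclidean distance feeding a greatly simplified density-based grouping step (a stand-in for a DBSCAN-style algorithm). The `eps` and `min_pts` parameters, like the greedy grouping itself, are assumptions for illustration:

```python
import math

def weighted_distance(p, q, weights):
    """Euclidean distance with a weight assigned to each dimension."""
    return math.sqrt(sum(w * (a - b) ** 2 for a, b, w in zip(p, q, weights)))

def density_groups(points, weights, eps, min_pts=2):
    """Greedy density-based grouping: a point within `eps` (weighted) of
    any member of an existing group joins that group; groups smaller
    than `min_pts` are discarded as noise."""
    groups = []
    for p in points:
        for g in groups:
            if any(weighted_distance(p, q, weights) <= eps for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return [g for g in groups if len(g) >= min_pts]
```

Each surviving group is one cohort; the visualization step would then render the cohorts and the user-selected group along a temporal axis.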