Patent classifications
G06V10/464
System and method for retrieval of similar findings from a hybrid image dataset
In a method for retrieval of similar findings from a hybrid image dataset, a database of hotspots is prepared, wherein the hotspots are identified by binary strings encoding descriptors, and binary strings stored in the database that resemble a new binary string are identified.
Computer implemented method for sign language characterization
A sign language recognizer is configured to detect interest points in an extracted sign language feature, wherein the interest points are localized in space and time in each image acquired from a plurality of frames of a sign language video; apply a filter to determine one or more extrema of a central region of the interest points; associate features with each interest point using a neighboring pixel function; cluster a group of extracted sign language features from the images based on a similarity between the extracted sign language features; represent each image by a histogram of visual words corresponding to the respective image to generate a code book; train a classifier to classify each extracted sign language feature using the code book; detect a posture in each frame of the sign language video using the trained classifier; and construct a sign gesture based on the detected postures.
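The histogram-of-visual-words representation in the steps above can be sketched as follows, assuming the code book (cluster centres) already exists from a prior clustering step; the helper names are illustrative.

```python
import math

def nearest_word(feature, codebook):
    """Index of the code-book centre closest to the feature (Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: math.dist(feature, codebook[i]))

def word_histogram(features, codebook):
    """Represent one image as a histogram of visual-word counts."""
    hist = [0] * len(codebook)
    for f in features:
        hist[nearest_word(f, codebook)] += 1
    return hist
```

Each image's histogram then serves as the fixed-length vector fed to the classifier that detects the posture in each frame.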
Techniques for identifying visually similar content
Embodiments of the present invention provide techniques for identifying and recommending similar items based on visual similarity to a selected content item. Visual similarity may be characterized by identifying features depicted in a selected content item and comparing those features to features in an electronic catalog of content items. Visual similarity may also factor in medium and subject matter. For example, when a content item depicts a landscape painting, other landscape paintings (rather than paintings of different subject matter or photographs) will be recommended. Other visual characteristics, such as color theme and distribution, brushwork, etc. may also be represented in the recommended content items. As discussed further herein, different features may be weighted differently based on the analysis of the content item. These weightings enable the recommended content items to be tailored to visually similar subject matter.
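The per-feature weighting described above can be sketched as a weighted comparison of feature vectors; the scoring rule and weights here are assumptions for illustration, not the patent's actual similarity measure.

```python
# Hypothetical sketch: each feature dimension of two content items is
# compared, and per-feature weights let some characteristics (e.g. medium
# or subject matter) count more than others (e.g. brushwork).
def weighted_similarity(features_a, features_b, weights):
    """Weighted negative absolute difference; higher means more similar."""
    return -sum(w * abs(a - b)
                for a, b, w in zip(features_a, features_b, weights))
```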
SYSTEM AND METHOD FOR THE FUSION OF BOTTOM-UP WHOLE-IMAGE FEATURES AND TOP-DOWN ENTITY CLASSIFICATION FOR ACCURATE IMAGE/VIDEO SCENE CLASSIFICATION
Described is a system and method for accurate image and/or video scene classification. More specifically, described is a system that makes use of a specialized convolutional-neural network (hereafter CNN) based technique for the fusion of bottom-up whole-image features and top-down entity classification. When the two parallel and independent processing paths are fused, the system provides an accurate classification of the scene as depicted in the image or video.
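A minimal sketch of fusing the two parallel paths, assuming both output per-class scores over the same label set; the weighted average used here is an illustrative fusion rule, not necessarily the CNN-based one the patent describes.

```python
def fuse_scores(bottom_up, top_down, weight=0.5):
    """Weighted average of per-class scores from the two processing paths."""
    return [weight * b + (1 - weight) * t for b, t in zip(bottom_up, top_down)]

def classify_scene(bottom_up, top_down, labels, weight=0.5):
    """Fuse the two score lists and return the highest-scoring label."""
    fused = fuse_scores(bottom_up, top_down, weight)
    return labels[fused.index(max(fused))]
```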
Classification of severity of pathological condition using hybrid image representation
A computer-implemented method obtains at least one image from which severity of a given pathological condition presented in the at least one image is to be classified. The method generates a hybrid image representation of the at least one obtained image. The hybrid image representation comprises a concatenation of a discriminative pathology histogram, a generative pathology histogram, and a fully connected representation of a trained baseline convolutional neural network. The hybrid image representation is used to train a classifier to classify the severity of the given pathological condition presented in the at least one image. One non-limiting example of a pathological condition whose severity can be classified with the above method is diabetic retinopathy.
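The hybrid representation above is a straightforward concatenation of the three components; a minimal sketch (function and argument names are assumptions):

```python
def hybrid_representation(discriminative_hist, generative_hist, fc_features):
    """Concatenate the discriminative pathology histogram, the generative
    pathology histogram, and the CNN fully connected layer's output into
    one feature vector for the severity classifier."""
    return list(discriminative_hist) + list(generative_hist) + list(fc_features)
```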
LABEL-FREE DIGITAL BRIGHTFIELD ANALYSIS OF NUCLEIC ACID AMPLIFICATION
An optical readout method for detecting a precipitate (e.g., a precipitate generated from the LAMP reaction) contained within a droplet includes generating a plurality of droplets, at least some of which have a precipitate contained therein. The droplets are imaged using a brightfield imaging device. The image is subject to image processing using image processing software executed on a computing device. Image processing isolates individual droplets in the image and performs feature detection within the isolated droplets. Keypoints and information related thereto are extracted from the detected features within the isolated droplets. The keypoints are subject to a clustering operation to generate a plurality of visual words. The word frequency obtained for each droplet is input into a trained machine learning droplet classifier, wherein the trained machine learning droplet classifier classifies each droplet as positive for the precipitate or negative for the precipitate.
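The final classification step above can be sketched on the word-frequency vectors; a nearest-centroid rule stands in here for whatever trained model the patent actually uses, and all names are illustrative.

```python
import math

def classify_droplet(word_freq, positive_centroid, negative_centroid):
    """Label a droplet's visual-word frequency vector as positive or
    negative for the precipitate, by whichever class centroid is closer."""
    d_pos = math.dist(word_freq, positive_centroid)
    d_neg = math.dist(word_freq, negative_centroid)
    return "positive" if d_pos < d_neg else "negative"
```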
Fault tolerance to provide robust tracking for autonomous positional awareness
The described positional awareness techniques employing visual-inertial sensory data gathering and analysis hardware with reference to specific example implementations implement improvements in the use of sensors, techniques and hardware design that can enable specific embodiments to provide positional awareness to machines with improved speed and accuracy.
Methods and systems for generating composite image descriptors
An illustrative image descriptor generation system determines a subset of image descriptors from a plurality of image descriptors that each correspond to a different feature point included within an image. The subset of image descriptors is determined based on geometric proximity, within the image, of respective feature points of the subset of image descriptors to a feature point of a primary image descriptor. The image descriptor generation system then selects a secondary image descriptor from the subset of image descriptors and combines the primary image descriptor and the secondary image descriptor to form a composite image descriptor. Corresponding methods and systems are also disclosed.
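The steps above can be sketched as follows, with descriptors modeled as (feature point, descriptor vector) pairs. The proximity radius, the nearest-neighbor selection rule, and concatenation as the combination step are assumptions for illustration.

```python
import math

def composite_descriptor(primary, others, radius):
    """Combine a primary descriptor with a secondary one chosen from the
    descriptors whose feature points lie within `radius` of the primary's
    feature point. Each descriptor is a (point, vector) pair."""
    p_point, p_vec = primary
    # Subset determined by geometric proximity to the primary's feature point.
    nearby = [(pt, vec) for pt, vec in others if math.dist(pt, p_point) <= radius]
    if not nearby:
        return list(p_vec)
    # Select the closest descriptor as the secondary.
    _, s_vec = min(nearby, key=lambda d: math.dist(d[0], p_point))
    # Combine primary and secondary into a composite descriptor.
    return list(p_vec) + list(s_vec)
```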
Texture based fusion for images with cameras having differing modalities
Techniques for generating an enhanced image. A first image is generated using a first camera of a first modality, and a second image is generated using a second camera of a second modality. Pixels that are common between the two images are identified. Textures for the common pixels are determined. Saliencies of the two images are determined, where the saliencies reflect amounts of texture variation present in those images. An alpha map is generated and reflects edge detection weights that have been computed for each one of the common pixels based on the two saliencies. A determination is made as to how much texture from the first and/or second images to use to generate an enhanced image. This determining process is based on the edge detection weights included within the alpha map. Based on the edge detection weights, textures are merged from the common pixels to generate the enhanced image.
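The final merge step can be sketched as a per-pixel blend driven by the alpha map; the linear blend and the flat pixel lists are simplifying assumptions.

```python
def merge_textures(texture_a, texture_b, alpha_map):
    """Blend per-pixel textures from the two camera modalities, weighting
    each common pixel by its edge-detection weight in the alpha map."""
    return [a * w + b * (1 - w)
            for a, b, w in zip(texture_a, texture_b, alpha_map)]
```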
Progressive vehicle searching method and device
The present application discloses a vehicle searching method and device, which can perform the steps of: calculating an appearance similarity distance between a first image of a target vehicle and several second images containing the searched vehicle; selecting several images from the several second images as several third images; obtaining corresponding license plate features of license plate areas in the first image and each of the third images with a preset Siamese neural network model; calculating a license plate feature similarity distance between the first image and each of the third images according to the license plate features; calculating a visual similarity distance between the first image and each of the third images according to the appearance similarity distance and the license plate feature similarity distance; and obtaining a first search result of the target vehicle by arranging the several third images in an ascending order of the visual similarity distances. The solution provided by the present application is not limited by application scenes, and it also improves vehicle searching speed and accuracy while reducing requirements of hardware such as cameras that collect images of a vehicle and auxiliary devices.
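The fusion and ranking steps above can be sketched as follows; taking the visual similarity distance as a weighted sum of the two component distances is an assumption, as the abstract does not specify the combination rule.

```python
def visual_distance(appearance_dist, plate_dist, weight=0.5):
    """Combine the appearance and license plate feature similarity
    distances into one visual similarity distance (assumed weighted sum)."""
    return weight * appearance_dist + (1 - weight) * plate_dist

def rank_candidates(candidates, weight=0.5):
    """candidates: list of (image_id, appearance_dist, plate_dist).
    Returns image ids in ascending order of visual similarity distance."""
    return [cid for cid, a, p in
            sorted(candidates, key=lambda c: visual_distance(c[1], c[2], weight))]
```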