Patent classifications
G06V10/464
GLOBAL SIGNATURES FOR LARGE-SCALE IMAGE RECOGNITION
Techniques are provided that include obtaining a vocabulary including a set of content indices that reference corresponding cells in a descriptor space based on an input set of descriptors. A plurality of local features of an image are identified based on the vocabulary, the local features being represented by a plurality of local descriptors. An associated visual word in the vocabulary is determined for each of the plurality of local descriptors. A plurality of global signatures for the image are generated based on the associated visual words, wherein some of the plurality of global signatures are generated using local descriptors corresponding to different cropped versions of the image, two or more of the different cropped versions of the image being centered at a same pixel location of the image, and an image recognition search is facilitated using the plurality of global signatures to search a document image dataset.
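The quantize-and-aggregate step described above can be sketched as follows. This is a minimal illustration, not the patented method: the vocabulary, descriptor sizes, crop radii, and Euclidean nearest-word assignment are all invented for the example, and the "cropped versions centered at the same pixel location" are simulated by filtering keypoints by distance from a shared center.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "vocabulary" of k visual words (cell centers in
# descriptor space) and local descriptors with keypoint positions.
k, dim = 8, 16
vocabulary = rng.normal(size=(k, dim))         # one row per visual word
descriptors = rng.normal(size=(50, dim))       # local descriptors
positions = rng.uniform(0, 100, size=(50, 2))  # (x, y) of each keypoint

def quantize(desc, vocab):
    """Assign each descriptor to its nearest visual word (cell index)."""
    d = np.linalg.norm(desc[:, None, :] - vocab[None, :, :], axis=2)
    return d.argmin(axis=1)

def global_signature(desc, vocab):
    """Bag-of-words histogram over visual words, L2-normalized."""
    words = quantize(desc, vocab)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    n = np.linalg.norm(hist)
    return hist / n if n else hist

# Several "cropped versions" centered at the same pixel location:
# keep only descriptors whose keypoints fall inside each crop radius.
center = np.array([50.0, 50.0])
signatures = []
for radius in (20, 35, 50):
    inside = np.linalg.norm(positions - center, axis=1) <= radius
    signatures.append(global_signature(descriptors[inside], vocabulary))
```

Each entry of `signatures` is one global signature; comparing such vectors (e.g. by dot product) against a document image dataset is what the search step would operate on.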
ANALYZING CONTENT OF DIGITAL IMAGES
Methods, apparatuses, and embodiments related to analyzing the content of digital images. A computer extracts multiple sets of visual features, which can be keypoints, based on an image of a selected object. Each of the multiple sets of visual features is extracted by a different visual feature extractor. The computer further extracts a visual word count vector based on the image of the selected object. An image query is executed based on the extracted visual features and the extracted visual word count vector to identify one or more candidate template objects of which the selected object may be an instance. When multiple candidate template objects are identified, a matching algorithm compares the selected object with the candidate template objects to determine a particular candidate template of which the selected object is an instance.
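The visual word count vector and the candidate-template query described above can be sketched like this. The cosine-similarity ranking and all names here are assumptions for illustration; the abstract does not specify how candidates are scored.

```python
import numpy as np

def word_count_vector(word_ids, vocab_size):
    """Visual word count vector: how often each word appears in the image."""
    return np.bincount(word_ids, minlength=vocab_size).astype(float)

def candidate_templates(query_counts, template_counts, top_n=2):
    """Rank template objects by cosine similarity of count vectors."""
    def cos(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na and nb else 0.0
    scores = [cos(query_counts, t) for t in template_counts]
    order = np.argsort(scores)[::-1][:top_n]
    return [(int(i), scores[i]) for i in order]

vocab_size = 6
query = word_count_vector(np.array([0, 0, 1, 3, 3, 3]), vocab_size)
templates = [
    word_count_vector(np.array([0, 1, 3, 3]), vocab_size),  # similar
    word_count_vector(np.array([2, 4, 5, 5]), vocab_size),  # dissimilar
]
hits = candidate_templates(query, templates)  # template 0 ranks first
```

The surviving candidates would then go to the finer matching algorithm (keypoint-level comparison) to pick the single template of which the selected object is an instance.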
Machine learning approach for detecting mobile phone usage by a driver
A system and method for detecting electronic device use by a driver of a vehicle including acquiring an image including a vehicle from an associated image capture device positioned to view oncoming traffic, locating a windshield region of the vehicle in the captured image, processing pixels of the windshield region of the image for computing a feature vector describing the windshield region of the vehicle, applying the feature vector to a classifier for classifying the image into respective classes including at least classes for candidate electronic device use and candidate electronic device non-use, and outputting the classification.
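The final two steps above (feature vector in, class label out) can be sketched as follows. The feature definition and the nearest-class-mean classifier are stand-ins invented for this example; a real system would use a trained classifier such as an SVM on richer windshield-region features.

```python
import numpy as np

rng = np.random.default_rng(1)

def windshield_features(region):
    """Toy feature vector: mean, std, and horizontal edge energy."""
    gx = np.diff(region, axis=1)
    return np.array([region.mean(), region.std(), np.abs(gx).mean()])

class NearestMeanClassifier:
    """Minimal classifier: predict the class with the nearest mean vector."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.means_ = {c: np.mean([x for x, l in zip(X, y) if l == c], axis=0)
                       for c in self.classes_}
        return self

    def predict(self, x):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.means_[c]))

# Train on toy windshield regions for the two candidate classes.
X, y = [], []
for label, offset in (("device_use", 0.6), ("no_device_use", 0.1)):
    for _ in range(10):
        X.append(windshield_features(rng.random((8, 8)) + offset))
        y.append(label)

clf = NearestMeanClassifier().fit(X, y)
pred = clf.predict(windshield_features(rng.random((8, 8)) + 0.6))
```

`pred` is the output classification for the new windshield region, i.e. the last step in the claimed pipeline.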
Method and apparatus for processing block to be processed of urine sediment image
Concepts herein relate to processing urine sediment images. An example method comprises: approximating the color of each pixel in a block to be processed to one of k_c colors in a code book, wherein the code book is a set of k_c colors generated from a set of urine sample blocks; obtaining a distribution histogram of the number of pixels whose color approximation results fall on each of the k_c colors; correcting those pixel counts with an occurrence frequency correction factor; standardizing the corrected pixel counts; and processing the block to be processed based on the standardized counts.
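A hedged sketch of the color-histogram pipeline just described, with invented values: each pixel is snapped to its nearest of k_c codebook colors, the per-color counts are corrected by an assumed occurrence-frequency factor, then standardized to sum to one. The codebook, weights, and normalization choice are all illustrative assumptions.

```python
import numpy as np

codebook = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255]],
                    dtype=float)                  # k_c = 4 colors
freq_correction = np.array([0.5, 1.0, 1.0, 2.0])  # assumed weights

def color_histogram(block, codebook, correction):
    pixels = block.reshape(-1, 3).astype(float)
    # Approximate each pixel to the nearest codebook color.
    d = np.linalg.norm(pixels[:, None, :] - codebook[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    hist = np.bincount(nearest, minlength=len(codebook)).astype(float)
    corrected = hist * correction     # occurrence-frequency correction
    total = corrected.sum()
    return corrected / total if total else corrected  # standardize

# A 2x2 toy block: two red-ish pixels, one dark, one blue-ish.
block = np.array([[[250, 5, 5], [10, 10, 10]],
                  [[5, 5, 250], [240, 0, 0]]], dtype=np.uint8)
hist = color_histogram(block, codebook, freq_correction)
```

The standardized histogram is the per-block representation that the subsequent processing step would consume.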
Image processing and matching
A configured machine performs image matching and retrieval of natural images that may depict logos. The machine generates and uses color-localized spatial masks, which may be computationally less expensive than spatial verification techniques. Keypoints are detected within images that form a reference database of images. Local masks are defined by the machine around each keypoint based on the scale and orientation of the keypoint. To utilize color information present in logo images, ordered color histograms may be extracted by the machine from locally masked regions of each image. A cascaded index may then be constructed for both visual descriptors and color histograms. For faster matching, the cascaded index maps the visual descriptors and color histograms to a list of relevant or similar images. This list may then be ranked to generate relevant matches for an input query image.
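The locally masked, orientation-ordered color extraction might look like the sketch below: a circular mask sized by the keypoint's scale is split into angular sectors starting at the keypoint's orientation, and per-sector colors are concatenated in that order so the descriptor stays rotation-consistent. The sector count, mask shape, and per-sector mean-color summary are all assumptions, not the patented descriptor.

```python
import numpy as np

def ordered_color_histogram(img, x, y, scale, orientation, sectors=4):
    """Per-sector mean colors around a keypoint, ordered by orientation."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - x, ys - y
    inside = dx**2 + dy**2 <= scale**2            # circular local mask
    # Sector index relative to the keypoint orientation.
    ang = (np.arctan2(dy, dx) - orientation) % (2 * np.pi)
    sec = (ang / (2 * np.pi) * sectors).astype(int) % sectors
    feats = []
    for s in range(sectors):                      # ordered by sector
        m = inside & (sec == s)
        feats.append(img[m].mean(axis=0) if m.any() else np.zeros(3))
    return np.concatenate(feats)

# Toy image: left half red, right half blue.
img = np.zeros((20, 20, 3))
img[:, :10] = [1.0, 0.0, 0.0]
img[:, 10:] = [0.0, 0.0, 1.0]
desc = ordered_color_histogram(img, x=10, y=10, scale=6, orientation=0.0)
```

Descriptors like `desc`, together with the visual descriptors, are what the cascaded index would map to candidate images.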
OFFLINE, HYBRID AND HYBRID WITH OFFLINE IMAGE RECOGNITION
Methods and systems of identification of objects in query images are disclosed. Keypoints in the query images are identified corresponding to objects to be identified. Visual words are identified in a dictionary of visual words for the identified keypoints. A set of hits is identified corresponding to reference images comprising the identified keypoints. Reference images corresponding to the identified set of hits are ranked using clustering of matches in a limited pose space. The limited pose space comprises a one-dimensional table corresponding to the rotation of the object to be identified with respect to the reference image. A first subset of M reference images that obtained a rank above a predetermined threshold is then selected. Offline, hybrid, and combined offline and hybrid systems for performing the proposed methods are disclosed.
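The one-dimensional pose-space clustering above can be sketched as a rotation histogram: each keypoint match votes its relative rotation into a coarse 1-D table, and a reference image is scored by its largest consistent bin. The bin count and scoring rule are assumptions for illustration.

```python
import numpy as np

def pose_score(query_angles, ref_angles, bins=8):
    """Score = size of the largest bin in the relative-rotation histogram."""
    rel = (np.asarray(ref_angles) - np.asarray(query_angles)) % (2 * np.pi)
    idx = (rel / (2 * np.pi) * bins).astype(int) % bins
    return int(np.bincount(idx, minlength=bins).max())

# Reference A: all matches agree on one rotation (~0.7 rad) -> high score.
q = np.array([0.1, 0.5, 1.0, 2.0, 3.0])
ref_a = q + 0.7                                  # consistent rotation
ref_b = np.array([0.0, 2.0, 4.0, 1.0, 5.5])      # inconsistent matches
scores = {"A": pose_score(q, ref_a), "B": pose_score(q, ref_b)}
```

Geometrically consistent reference images (like A) rank above coincidental keypoint matches (like B), which is the purpose of restricting the pose space to rotation alone.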
SCALABLE IMAGE MATCHING
Various embodiments may increase scalability of image representations stored in a database for use in image matching and retrieval. For example, a system providing image matching can obtain images of a number of inventory items, extract features from each image using a feature extraction algorithm, and transform them into their feature descriptor representations. These feature descriptor representations can be subsequently stored and used to compare against query images submitted by users. Though the size of each feature descriptor representation is not particularly large, the total number of these descriptors requires a substantial amount of storage space. Accordingly, feature descriptor representations are compressed to minimize storage and, in one example, machine learning can be used to compensate for information lost as a result of the compression.
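The storage trade-off above can be illustrated with a simple compression sketch: scalar quantization of float32 descriptors to 8-bit codes with a fitted per-dimension range. This is only one possible scheme; the abstract does not name one, and the learned compensation for quantization loss is omitted here.

```python
import numpy as np

def fit_quantizer(descs):
    """Fit a per-dimension range for 8-bit scalar quantization."""
    lo, hi = descs.min(axis=0), descs.max(axis=0)
    return lo, np.where(hi > lo, hi - lo, 1.0)

def compress(descs, lo, span):
    return np.round((descs - lo) / span * 255).astype(np.uint8)

def decompress(codes, lo, span):
    return codes.astype(np.float32) / 255 * span + lo

rng = np.random.default_rng(2)
descs = rng.normal(size=(1000, 64)).astype(np.float32)
lo, span = fit_quantizer(descs)
codes = compress(descs, lo, span)
recon = decompress(codes, lo, span)
ratio = descs.nbytes / codes.nbytes   # 4x smaller (float32 -> uint8)
err = float(np.abs(descs - recon).max())
```

The reconstruction error `err` is what a learned model could be trained to compensate for, at the cost of extra compute at query time.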
Vehicle classification from laser scanners using fisher and profile signatures
Methods, systems, and processor-readable media for vehicle classification. In general, one or more vehicles can be scanned utilizing a laser scanner to compile data indicative of an optical profile of the vehicle(s). The optical profile associated with the vehicle(s) is then pre-processed. Particular features are extracted from the optical profile following pre-processing. The vehicle(s) can then be classified based on the particular features extracted from the optical profile. A segmented laser profile is treated as an image, and two kinds of features are computed and utilized as part of the extraction and classification process: profile features that integrate the signal in one of the two directions of the image, and Fisher vectors that aggregate statistics of local patches of the image.
LOCALLY OPTIMIZED FEATURE SPACE ENCODING OF DIGITAL DATA AND RETRIEVAL USING SUCH ENCODING
A digital document is represented as a set of codes comprising indices into a feature space that comprises a number of subspaces, each code corresponding to one subspace and identifying a cell within that subspace. Each digital document can be represented by a code set, and a code set can be used as selection criteria for identifying a number of digital documents via each document's corresponding code set. By way of some non-limiting examples, digital document code sets can be used to identify similar or different digital images, to identify duplicate or nearly-duplicate digital images, to select similar and/or different digital images for inclusion in a recommendation, and to identify and rank digital images in a set of search results.
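The subspace-code representation above resembles product quantization, and can be sketched like this: the feature space is split into subspaces, each with a small codebook of cells, and a document's code set is the tuple of nearest-cell indices, one per subspace. The sizes, the random codebooks, and the overlap-count selection criterion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
subspaces = 4    # feature space split into 4 subspaces
cells = 16       # cells per subspace
dim = 8          # dimensions per subspace
codebooks = rng.normal(size=(subspaces, cells, dim))

def code_set(vec):
    """One nearest-cell index per subspace for a 32-d feature vector."""
    codes = []
    for s in range(subspaces):
        part = vec[s * dim:(s + 1) * dim]
        d = np.linalg.norm(codebooks[s] - part, axis=1)
        codes.append(int(d.argmin()))
    return tuple(codes)

def overlap(a, b):
    """Selection criterion: number of subspaces with matching codes."""
    return sum(x == y for x, y in zip(a, b))

doc = rng.normal(size=32)
near_dup = doc + 0.01 * rng.normal(size=32)  # nearly-duplicate document
other = rng.normal(size=32)                  # unrelated document
```

High code overlap flags duplicates or near-duplicates; low overlap flags different documents, which is the basis of the similarity, deduplication, recommendation, and ranking uses listed above.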