Patent classifications
G06V10/757
Object change detection and measurement using digital fingerprints
The present disclosure teaches a method of utilizing image “match points” to measure and detect changes in a physical object. In some cases “degradation” or “wear and tear” of the physical object is assessed, while in other applications this disclosure is applicable to measuring intentional changes, such as changes made by additive or subtractive manufacturing processes, which may, for example, involve adding a layer or removing a layer by machining. A system may include a scanner and a digital fingerprinting process coupled to an object change computer server. The server is coupled to a datastore that stores class digital fingerprints, selected object digital fingerprints collected over time, match measurements, and deterioration metrics.
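The patent does not fix a concrete fingerprint format, but the match-point idea can be sketched in a few lines. In this toy model (entirely an assumption for illustration, not the claimed implementation) a fingerprint is just a list of (x, y) match-point locations, and the deterioration metric is the fraction of reference points that no longer reappear in a later scan; the names `match_fraction` and `deterioration_metric` are hypothetical.

```python
# Illustrative sketch, not the patented implementation: a digital
# fingerprint is modeled as a set of (x, y) match-point locations; a real
# system would also compare local descriptors at each point.

def match_fraction(reference, scan, max_dist=2.0):
    """Fraction of reference match points found again in a new scan."""
    matched = 0
    for rx, ry in reference:
        # A reference point "survives" if some scanned point lies within
        # max_dist of it (descriptor comparison omitted in this sketch).
        if any((rx - sx) ** 2 + (ry - sy) ** 2 <= max_dist ** 2
               for sx, sy in scan):
            matched += 1
    return matched / len(reference) if reference else 0.0

def deterioration_metric(reference, scan):
    """0.0 means the object is unchanged; 1.0 means all match points lost."""
    return 1.0 - match_fraction(reference, scan)
```

Wear and tear then shows up as a rising deterioration metric across the fingerprints collected over time for a selected object.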
THREE DIMENSIONAL OBJECT RECOGNITION
A method and system for recognizing a three-dimensional object on a base are disclosed. A three-dimensional image of the object is received as a three-dimensional point cloud having depth data and color data. The base is removed from the three-dimensional point cloud to generate a two-dimensional image representing the object. The two-dimensional image is segmented to determine object boundaries of a detected object. Color data from the object is applied to refine the segmentation and match the detected object to reference object data.
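A rough sketch of the base-removal and projection steps, under two simplifying assumptions that are mine rather than the patent's: the base is a flat plane at a known height, and each point is an (x, y, z, color) tuple.

```python
def remove_base(point_cloud, base_z=0.0, tol=0.01):
    """Drop points belonging to the flat base the object rests on.

    point_cloud: list of (x, y, z, color) tuples -- a simplified stand-in
    for a depth+color point cloud (hypothetical format).
    """
    return [p for p in point_cloud if abs(p[2] - base_z) > tol]

def project_to_2d(points):
    """Project the remaining object points onto the x-y plane,
    keeping color for the later color-based segmentation refinement."""
    return [(x, y, color) for x, y, z, color in points]
```

A production system would instead fit the base plane (e.g. with RANSAC) rather than assume its height.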
METHOD AND APPARATUS FOR COMPARING TWO MAPS WITH LANDMARKS DEPOSITED THEREIN
A method for comparing a first map and a second map with landmarks stored therein is disclosed. The method includes comparing at least one portion of the first map with a corresponding portion of the second map, wherein a similarity between the portion of the first map and the corresponding portion of the second map is determined based on a correspondence between the landmarks respectively stored in the two portions, and wherein the similarity of the compared portions is expressed as a similarity value derived therefrom. Also disclosed are an associated apparatus, an associated computer program including program code, and an associated computer program product including program code.
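The abstract leaves the correspondence criterion open. One plausible reading, sketched below purely for illustration, expresses the similarity value as the Jaccard index over landmark identifiers; matching landmarks by identifier equality is my simplification (a real system might match by position, type, or descriptor).

```python
def portion_similarity(landmarks_a, landmarks_b):
    """Similarity value for two map portions: the Jaccard index of the
    landmark identifiers stored in them (an illustrative assumption,
    not the patent's claimed correspondence rule)."""
    a, b = set(landmarks_a), set(landmarks_b)
    if not a and not b:
        return 1.0  # two empty portions are trivially identical
    return len(a & b) / len(a | b)
```

The value ranges from 0.0 (no landmarks in common) to 1.0 (identical landmark sets), which makes it easy to threshold when deciding whether two map portions depict the same area.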
CORRECTING PERSPECTIVE DISTORTION IN DOUBLE-PAGE SPREAD IMAGES
A distortion correction component of a mobile device receives an image of a spread open multi-page document, determines a binding edge line of the spread open multi-page document, determines a first set of substantially vertical straight lines lying left of the binding edge line and a second set of substantially vertical straight lines lying right of the binding edge line. The distortion correction component then determines a first vanishing point based on the first set of substantially vertical straight lines and a second vanishing point based on the second set of substantially vertical straight lines. A first quadrangle is determined based on the first vanishing point and a second quadrangle is determined based on the second vanishing point. A corrected image for the first page is generated based on the first quadrangle and a corrected image for the second page is generated based on the second quadrangle.
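The core geometric step, estimating a vanishing point where two of the detected near-vertical lines meet, reduces to a plain line-intersection computation; the quadrangle construction and perspective warp that follow are omitted from this sketch.

```python
def line_intersection(l1, l2):
    """Intersection of two lines, each given as ((x1, y1), (x2, y2)).
    Applied to two near-vertical lines on one side of the binding edge,
    the result estimates that page's vanishing point."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel lines: vanishing point at infinity
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
```

With more than two lines per page, a robust system would instead take a least-squares or RANSAC consensus of the pairwise intersections.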
METHOD FOR FINE-GRAINED SKETCH-BASED SCENE IMAGE RETRIEVAL
A sketch-based image retrieval method, device and system that improve the accuracy of image searching from a scene sketch image. For example, the image retrieval method, device and system can be used to retrieve a target scene image from a collection of stored images in a storage (i.e., an image collection). The image retrieval method includes: segmenting the scene sketch image into semantic object-level instances using an image segmentation module and obtaining fine-grained features for each object instance; generating an attribute graph that integrates the fine-grained features of each semantic object instance detected in the query scene sketch image; generating a feature graph from the attribute graph using a graph encoder module; and computing, by a graph matching module, a similarity or distance between the feature graphs of the query scene sketch image and of the scene images in the image collection, returning the most similar scene images.
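The patent's graph encoder and matching modules are learned components; the retrieval loop around them can still be sketched with a greedy stand-in. Here each node's feature is a plain vector, `graph_similarity` greedily matches each query node to its most similar scene node by cosine similarity, and all function names are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    if nu == 0 or nv == 0:
        return 0.0
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def graph_similarity(query_nodes, scene_nodes):
    """Greedy stand-in for the graph matching module: each query object
    instance is matched to its most similar node in the stored scene's
    feature graph, and the scores are averaged."""
    if not query_nodes or not scene_nodes:
        return 0.0
    return sum(max(cosine(q, s) for s in scene_nodes)
               for q in query_nodes) / len(query_nodes)

def retrieve(query_nodes, image_collection):
    """Rank stored scene images by similarity to the query sketch.
    image_collection: {image_name: [node feature vectors]}."""
    return sorted(image_collection.items(),
                  key=lambda kv: graph_similarity(query_nodes, kv[1]),
                  reverse=True)
```

The greedy per-node matching ignores edge attributes (spatial relations between objects), which the actual attribute-graph formulation is designed to exploit.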
Image processing apparatus, photographic subject identifying method and program
An image processing apparatus includes a local feature value generation unit, a correspondence point calculation unit, a relative correspondence point information calculation unit, a correspondence point selection unit and a decision unit. The local feature value generation unit calculates, for a first image, a set of information about first local feature values including first feature point(s). The correspondence point calculation unit calculates, as information about correspondence point(s), a correspondence relation between the first feature point(s) and second feature point(s) contained in a set of information about second local feature values calculated from a second image. The relative correspondence point information calculation unit calculates relationships between the scales of corresponding feature points, as information about the relative scale sizes of the correspondence points, based on the set of information about the first local feature values, the set of information about the second local feature values and the information about correspondence point(s).
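The relative-scale idea can be sketched concretely: scale-aware detectors (SIFT-like) attach a scale to each feature point, and for a genuine match of the same subject under uniform zoom the scale ratios across all correspondences should be roughly constant. The function names and the tolerance rule below are illustrative assumptions, not the patent's claimed units.

```python
def relative_scale_info(correspondences):
    """For each correspondence (scale_in_img1, scale_in_img2), the
    relative scale ratio of the matched feature points."""
    return [s2 / s1 for s1, s2 in correspondences]

def scales_consistent(correspondences, tol=0.2):
    """Decide whether the scale ratios are mutually consistent: every
    ratio must lie within tol (relative) of the mean ratio. Inconsistent
    ratios suggest spurious correspondences to be filtered out."""
    ratios = relative_scale_info(correspondences)
    mean = sum(ratios) / len(ratios)
    return all(abs(r - mean) <= tol * mean for r in ratios)
```

Filtering correspondences this way before the final decision step discards matches whose scale behavior contradicts a single global zoom factor.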
Apparatus and method for low dynamic range and high dynamic range image alignment
An imaging system includes an image sensor to capture a sequence of images, including a low dynamic range (LDR) image and a high dynamic range (HDR) image, and a processor coupled to the image sensor to receive the LDR image and the HDR image. The processor receives instructions to perform operations to segment the LDR image and HDR image into a plurality of segments. The processor also scans the plurality of LDR and HDR image segments to find a first image segment in the plurality of LDR image segments and a second image segment in the plurality of HDR image segments. The processor then finds interest points in the first and second image segments, and determines an alignment parameter based on matched interest points. The LDR image and the HDR image are combined in accordance with the alignment parameter.
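A minimal sketch of deriving an alignment parameter from matched interest points, assuming pure translation between the LDR and HDR exposures (the patent's alignment parameter could be a fuller transform such as a homography). The median offset is used because it is robust to a few bad matches.

```python
def alignment_offset(matches):
    """Estimate a pure-translation alignment parameter from matched
    interest points. matches: list of ((x_ldr, y_ldr), (x_hdr, y_hdr))
    pairs (hypothetical format). Returns the median (dx, dy) offset."""
    dxs = sorted(hx - lx for (lx, _ly), (hx, _hy) in matches)
    dys = sorted(hy - ly for (_lx, ly), (_hx, hy) in matches)
    mid = len(matches) // 2
    return dxs[mid], dys[mid]
```

Once the offset is known, combining the exposures is a matter of shifting one image by (dx, dy) before the HDR merge.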
Method and apparatus for extracting lane line and computer readable storage medium
A method and an apparatus for extracting a lane line, a device, a computer-readable storage medium and a collection entity are provided. The method includes: obtaining a first group of lane lines of a road based on a first image generated from a point cloud collected by a laser radar; obtaining a second group of lane lines of the road based on a second image collected by a camera; and determining a lane line set of the road based on the first group of lane lines and the second group of lane lines.
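The determining step is left open by the abstract; one simple fusion rule, sketched below with each lane line reduced to its lateral offset from the vehicle in meters, keeps only lines confirmed by both sensors and averages their positions. The confirmation rule and `max_gap` threshold are illustrative assumptions, not the patent's claimed determination.

```python
def fuse_lane_lines(lidar_lines, camera_lines, max_gap=0.3):
    """Toy lane-line fusion: a lidar-derived line is kept only if a
    camera-derived line lies within max_gap meters laterally; the fused
    position is the average of the two detections."""
    fused = []
    for l in lidar_lines:
        close = [c for c in camera_lines if abs(c - l) <= max_gap]
        if close:
            nearest = min(close, key=lambda c: abs(c - l))
            fused.append((l + nearest) / 2)
    return fused
```

Requiring agreement between the point-cloud image and the camera image suppresses single-sensor false positives, such as the spurious camera detection in the example below.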
Method and system for model-based fusion of multi-modal volumetric images
A method and system for fusion of multi-modal volumetric images is disclosed. A first image acquired using a first imaging modality is received. A second image acquired using a second imaging modality is received. A model of a target anatomical structure and a transformation are jointly estimated from the first and second images. The model represents the target anatomical structure in the first image, and the transformation projects a model of the target anatomical structure in the second image onto the model in the first image. The first and second images can be fused based on the estimated transformation.
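The joint model-and-transformation estimation is the heart of the patent and is not shown here; what can be sketched is the simpler inner problem of fitting a transformation once corresponding model points in the two modalities are known. The least-squares scale-plus-translation fit below (no rotation) is a deliberate simplification for illustration.

```python
def estimate_transform(pts_b, pts_a):
    """Least-squares fit of a uniform scale s and translation t mapping
    model points from the second modality (pts_b) onto corresponding
    model points in the first (pts_a). Rotation is omitted in this
    simplified sketch. Points are equal-length tuples."""
    n, dims = len(pts_a), len(pts_a[0])
    cen_a = [sum(p[d] for p in pts_a) / n for d in range(dims)]
    cen_b = [sum(p[d] for p in pts_b) / n for d in range(dims)]
    # Scale from centered cross- and auto-correlations.
    num = sum((p[d] - cen_b[d]) * (q[d] - cen_a[d])
              for p, q in zip(pts_b, pts_a) for d in range(dims))
    den = sum((p[d] - cen_b[d]) ** 2 for p in pts_b for d in range(dims))
    s = num / den if den else 1.0
    t = [cen_a[d] - s * cen_b[d] for d in range(dims)]
    return s, t

def apply_transform(s, t, p):
    """Project a second-modality point into the first image's frame."""
    return tuple(s * x + t[d] for d, x in enumerate(p))
```

A full rigid or similarity registration would add a rotation solved, for example, via the Kabsch/Procrustes method.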
Guided sparse feature matching via coarsely defined dense matches
An example method is described herein. The method includes executing dense feature matching on an image pair that is downsampled to obtain a first set of feature correspondences for each pixel of the downsampled image pair. The method also includes calculating a neighborhood correspondence based on the first set of feature correspondences for each pixel in a first image of the image pair. Further, the method includes executing sparse feature matching on stereoscopic patch pairs from the image pair based on the neighborhood correspondence for each pixel to obtain correspondence estimates for each stereoscopic patch pair. Finally, the method includes refining the correspondence estimates for each stereoscopic patch pair to obtain a semi-dense set of feature correspondences by applying a geometric constraint to the correspondence estimates and retaining correspondences that satisfy the geometric constraint.
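Two small pieces of this pipeline can be sketched directly: scaling a coarse dense match up to a full-resolution search neighborhood, and a toy geometric constraint for a rectified stereo pair (matches must lie on nearly the same image row). Both function names and the rectified-pair assumption are mine, not the patent's.

```python
def neighborhood_from_dense(dense_match, scale, radius):
    """Predict the full-resolution search neighborhood from a dense
    correspondence found on the downsampled pair. dense_match: (x, y)
    position in the downsampled second image; scale: downsampling
    factor. Returns (x_min, y_min, x_max, y_max)."""
    cx, cy = dense_match[0] * scale, dense_match[1] * scale
    return (cx - radius, cy - radius, cx + radius, cy + radius)

def satisfies_epipolar(p1, p2, max_row_diff=1.0):
    """Toy geometric constraint for a rectified stereo pair: retained
    correspondences must lie on (nearly) the same image row."""
    return abs(p1[1] - p2[1]) <= max_row_diff
```

Restricting the expensive sparse matcher to these predicted neighborhoods is what makes the coarse dense pass pay for itself, and the epipolar filter then prunes the estimates down to the semi-dense set.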