Patent classifications
G06T3/0075
SYSTEMS AND METHODS FOR SPATIAL ANALYSIS OF ANALYTES USING FIDUCIAL ALIGNMENT
Systems and methods for spatial analysis of analytes are provided. A data structure is obtained comprising an image, as an array of pixel values, of a sample on a substrate having a substrate identifier, fiducial markers, and a set of capture spots. The pixel values are used to identify derived fiducial spots. The substrate identifier identifies a template having reference positions for reference fiducial spots and a corresponding coordinate system. The derived fiducial spots are aligned with the reference fiducial spots using an alignment algorithm to obtain a transformation between the derived and reference fiducial spots. The transformation and the template's corresponding coordinate system are used to register the image to the set of capture spots. The registered image is then analyzed in conjunction with spatial analyte data associated with each capture spot, thereby performing spatial analysis of analytes.
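The abstract does not name a specific alignment algorithm. As an illustrative sketch only (not the claimed method), one common closed-form choice is a 2-D rigid Kabsch-style fit between corresponding derived and reference fiducial spots; the function names and the assumption of known point correspondences below are hypothetical:

```python
import math

def align_points(derived, reference):
    """Estimate a rigid transform (rotation theta, translation tx, ty)
    mapping derived fiducial spots onto reference fiducial spots.
    Assumes correspondences are already known (an illustrative assumption)."""
    n = len(derived)
    # Centroids of both point sets.
    dcx = sum(x for x, _ in derived) / n
    dcy = sum(y for _, y in derived) / n
    rcx = sum(x for x, _ in reference) / n
    rcy = sum(y for _, y in reference) / n
    # Cross-covariance terms for the 2-D closed-form rotation.
    sxx = sxy = syx = syy = 0.0
    for (dx, dy), (rx, ry) in zip(derived, reference):
        dx, dy = dx - dcx, dy - dcy
        rx, ry = rx - rcx, ry - rcy
        sxx += dx * rx
        sxy += dx * ry
        syx += dy * rx
        syy += dy * ry
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated derived centroid onto the reference centroid.
    tx = rcx - (c * dcx - s * dcy)
    ty = rcy - (s * dcx + c * dcy)
    return theta, tx, ty

def apply_transform(theta, tx, ty, point):
    """Register a single pixel coordinate using the recovered transform."""
    c, s = math.cos(theta), math.sin(theta)
    x, y = point
    return (c * x - s * y + tx, s * x + c * y + ty)
```

Once the transform is recovered, every capture-spot position in the template frame can be mapped into (or out of) the image frame the same way.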
Method and device for stitching wind turbine blade images, and storage medium
The present disclosure provides a method and device for stitching wind turbine blade images, and a storage medium. The method includes performing edge detection on a plurality of images of the blade of the wind turbine to determine a blade region for each of the plurality of images; and, for each pair of successively captured images among the plurality of images, stitching a front end of the former image of the pair to a rear end of the latter image of the pair, wherein the front end is far away from a root of the blade and the rear end is close to the root of the blade.
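As a simplified illustration of front-end/rear-end stitching (not the patented method, which operates on detected blade regions), the overlap between the tail of the former image and the head of the latter image could be found by minimizing a squared-difference score over candidate overlap lengths; here the "images" are 1-D intensity strips for brevity:

```python
def best_overlap(former, latter, min_overlap=3):
    """Overlap length between the front end (tail) of the former strip and
    the rear end (head) of the latter strip minimizing mean squared
    difference - a stand-in for whatever matching the patent uses."""
    best_len, best_err = min_overlap, float("inf")
    for k in range(min_overlap, min(len(former), len(latter)) + 1):
        a = former[-k:]   # tail of the former image (far from the root)
        b = latter[:k]    # head of the latter image (close to the root)
        err = sum((x - y) ** 2 for x, y in zip(a, b)) / k
        if err < best_err:
            best_err, best_len = err, k
    return best_len

def stitch(former, latter):
    """Keep the former strip whole; append the non-overlapping remainder."""
    k = best_overlap(former, latter)
    return former + latter[k:]
```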
Calculation method, computer-readable recording medium recording calculation program, and information processing apparatus
A calculation method for causing a computer to execute processing of: acquiring first measurement information including information of a distance to an object measured by a first sensor, and second measurement information including information of a distance to the object measured by a second sensor; acquiring a first vector, a second vector in a different direction from the first vector, and a first translation point from the first measurement information; acquiring information of a third vector treated as a vector parallel to and in a same direction as the first vector, a fourth vector treated as a vector parallel to and in a same direction as the second vector, and a second translation point treated as a same position as the first translation point from the second measurement information; calculating a rotation angle and a translation distance for aligning a point group of the object measured by the second sensor.
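In two dimensions, the core calculation the abstract describes (a rotation angle aligning the second sensor's vectors with the first sensor's, plus a translation aligning the translation points) can be sketched as follows; this is an illustrative reduction, not the claimed processing, and the argument names are hypothetical:

```python
import math

def rotation_and_translation(v1, p1, v3, p2):
    """Rotation angle that turns the second sensor's vector v3 onto the
    first sensor's vector v1, and the translation that then moves the
    second sensor's translation point p2 onto the first sensor's p1."""
    # Difference of the two vector headings is the rotation between frames.
    angle = math.atan2(v1[1], v1[0]) - math.atan2(v3[1], v3[0])
    c, s = math.cos(angle), math.sin(angle)
    # Rotate p2 into the first sensor's orientation, then translate onto p1.
    rx, ry = c * p2[0] - s * p2[1], s * p2[0] + c * p2[1]
    return angle, (p1[0] - rx, p1[1] - ry)
```

Applying the returned rotation and translation to the point group measured by the second sensor would bring it into the first sensor's frame.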
Aligning and merging contents from multiple collaborative workstations
A method for aligning and merging contents from multiple collaborative workstations. Collaborative workstations are multiple workstations that contribute respective contents to be combined into a single computer-generated output. The content generated from each collaborative workstation is the collaborative content. Individual collaborative content is created from each workstation by a user drawing on a piece of paper that is placed on a workspace surface of the workstation. Collaborative contents contributed by multiple workstations are aligned such that a combined product (i.e., a single computer-generated output) including both virtual and physical content appears to be collaboratively drawn by multiple users on a single piece of paper.
Method for merging multiple images and post-processing of panorama
A method for combining multiple images is disclosed herein. A target mapping matrix is determined based on a first image and a second image. The target mapping matrix is associated with a target correspondence between the first image and the second image. The first image and the second image are combined into a combined image based on the target mapping matrix. The combined image is then output.
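A mapping matrix of this kind is typically a 3x3 projective (homography) matrix. As a minimal sketch of how such a matrix relates pixel coordinates of one image to the other (illustrative only; the abstract does not fix the matrix form):

```python
def map_point(H, x, y):
    """Apply a 3x3 mapping matrix H (nested lists) to a pixel coordinate
    in homogeneous form and return the mapped coordinate."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w   # de-homogenize
```

Combining the images then amounts to mapping each pixel of the second image into the first image's frame with `map_point` and compositing the results.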
METHOD AND SYSTEM FOR INSPECTING AN OPHTHALMIC LENS IN AN AUTOMATED LENS MANUFACTURING PROCESS
A method (6) for inspecting an ophthalmic lens (2), in particular a contact lens such as a soft contact lens (2), in an automated lens manufacturing process is disclosed. The method comprises the steps of acquiring (60) a plurality of images containing the ophthalmic lens (2) to be inspected as an imaged ophthalmic lens (2), wherein each image (4) of the plurality of images is of a different image type, registering (63) the plurality of images by applying a registration function to each image (4) of the plurality of images to obtain registered images, determining (64), based on the registered images, whether the ophthalmic lens (2) complies with predetermined specifications, and updating (62) the registration function to compensate for possible changes in the acquisition of the plurality of images. Updating (62) the registration function is performed during the automated lens manufacturing process.
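The abstract leaves the update rule for the registration function open. One plausible illustrative rule (purely an assumption, not the claimed step) is to blend each newly measured inter-image shift into a running offset, so gradual drift in image acquisition is compensated during production:

```python
def update_registration(offset, measured_shift, alpha=0.2):
    """Blend a newly measured image shift into the running registration
    offset (simple exponential update; the patent does not specify this
    rule - it is an illustrative assumption)."""
    return tuple((1 - alpha) * o + alpha * m
                 for o, m in zip(offset, measured_shift))
```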
Multi-sample whole slide image processing in digital pathology via multi-resolution registration and machine learning
When reviewing digital pathology tissue specimens, multiple slides may be created from thin, sequential slices of tissue. These slices may then be prepared with various stains and digitized to generate a Whole Slide Image (WSI). Review of multiple WSIs is challenging because of the lack of homogeneity across the images. In embodiments, to facilitate review, WSIs are aligned with a multi-resolution registration algorithm, normalized for improved processing, annotated by an expert user, and divided into image patches. The image patches may be used to train a Machine Learning model to identify features useful for detection and classification of regions of interest (ROIs) in images. The trained model may be applied to other images to detect and classify ROIs in the other images, which can aid in navigating the WSIs. When the resulting ROIs are presented to the user, the user may easily navigate and provide feedback through a display layer.
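The coarse-to-fine idea behind multi-resolution registration can be sketched on 1-D signals (an illustrative simplification of the 2-D WSI case): estimate the shift on a heavily downsampled pair, then double and refine the estimate at each finer level, so the search radius stays small at full resolution:

```python
import math

def downsample(sig):
    """Halve the resolution by averaging neighbouring sample pairs."""
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def best_shift(a, b, center, radius):
    """Integer shift of b relative to a, within +/-radius of a prior guess,
    minimizing mean squared error over the overlapping samples."""
    best, best_err = center, float("inf")
    for s in range(center - radius, center + radius + 1):
        pairs = [(a[i], b[i - s]) for i in range(len(a)) if 0 <= i - s < len(b)]
        if not pairs:
            continue
        err = sum((x - y) ** 2 for x, y in pairs) / len(pairs)
        if err < best_err:
            best_err, best = err, s
    return best

def multires_register(a, b, levels=3, radius=3):
    """Coarse-to-fine registration across a resolution pyramid."""
    pyramid = [(a, b)]
    for _ in range(levels - 1):
        a, b = downsample(a), downsample(b)
        pyramid.append((a, b))
    shift = 0
    for ca, cb in reversed(pyramid):  # coarsest first
        # The previous level's shift corresponds to twice the offset here.
        shift = best_shift(ca, cb, 2 * shift, radius)
    return shift
```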
Systems and methods for transferring map data between different maps
Examples disclosed herein may involve a computing system that is operable to (i) identify a source map and a target map for transferring map data, where the source map and the target map have different respective coordinate frames and respective coverage areas that at least partially overlap, (ii) select a real-world element for which to transfer previously-created map data from the source map to the target map, (iii) select a source image associated with the source map in which the selected real-world element appears and has been labeled, (iv) select a target image associated with the target map in which the selected real-world element appears, (v) derive a geometric relationship between the source image and the target image, and (vi) use the derived geometric relationship between the source image and the target image to determine a position of the real-world element within the respective coordinate frame of the target map.
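Step (vi) above amounts to carrying a position through the derived geometric relationship between the two coordinate frames. A minimal 2-D sketch, assuming the relationship has already been reduced to a rotation and translation (the abstract does not commit to this parameterization):

```python
import math

def frame_transfer(theta, tx, ty):
    """Return a function mapping a point in the source map's coordinate
    frame into the target map's frame, given a derived rotation and
    translation between the two frames."""
    c, s = math.cos(theta), math.sin(theta)
    def to_target(x, y):
        return (c * x - s * y + tx, s * x + c * y + ty)
    return to_target
```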
Correction of misaligned map data from different sources
Misaligned map data received from different sources is corrected to generate a map that includes aligned features. Each data source is associated with a reliability value that identifies the likelihood that the map data received from the corresponding source is aligned with a particular map location. A corrected version of the map data is generated based on the reliability values of the data sources. Generally, map data from unreliable sources is adjusted toward map data from more reliable sources until the map data from the different sources is aligned.
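The adjustment described (unreliable sources pulled toward reliable ones) can be sketched as a reliability-weighted consensus; the 1-D positions and the specific pull rule below are illustrative assumptions, not the claimed correction:

```python
def align_features(observations):
    """Pull each source's reported position toward a reliability-weighted
    consensus. observations: list of (position, reliability) pairs with
    reliability in (0, 1]."""
    total = sum(r for _, r in observations)
    consensus = sum(p * r for p, r in observations) / total
    # Unreliable sources move further toward the consensus than reliable ones.
    return [p + (1 - r) * (consensus - p) for p, r in observations]
```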
Method, apparatus, and computer program product for ensuring continuity of features between spatially partitioned maps
A method is provided to ensure continuity of features across spatially partitioned maps. Methods may include: identifying a map element extending from a first map tile to a second map tile; determining a first set of continuous features of the map element in the first map tile; determining a second set of continuous features of the map element in the second map tile; identifying a first set of locations in a plane separating the first map tile from the second map tile where the first set of continuous features intersect the plane; identifying a second set of locations where the second set of continuous features intersect the plane; correlating the first set of continuous features with the second set of continuous features; blending the first and second sets of continuous features; and updating map data including the first map tile and the second map tile with a blended map element.
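The correlate-and-blend steps above can be sketched in miniature: pair up the boundary-plane intersection locations from the two tiles by proximity, then blend each pair to a shared location so the feature is continuous across the tile boundary. The 1-D coordinates, tolerance, and midpoint blend are illustrative assumptions, not the claimed method:

```python
def correlate_and_blend(first_hits, second_hits, tol=1.0):
    """Greedily pair boundary-intersection locations from the two tiles
    that lie within tol of each other, and blend each pair to a shared
    location on the separating plane."""
    blended, used = [], set()
    for a in first_hits:
        # Nearest unmatched location from the second tile, within tol.
        best, best_d = None, tol
        for j, b in enumerate(second_hits):
            d = abs(a - b)
            if j not in used and d <= best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            blended.append((a + second_hits[best]) / 2)  # midpoint blend
    return blended
```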