Patent classifications
G06T3/147
METHOD AND SYSTEM OF POINT CLOUD REGISTRATION FOR IMAGE PROCESSING
A system, article, and method of point cloud registration using overlap regions for image processing.
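The abstract does not specify a solver, but the core of such a registration step is estimating the rigid transform that aligns corresponding points drawn from an overlap region. A minimal sketch using the standard Kabsch/SVD solution (the function name and the choice of solver are illustrative, not from the patent):

```python
import numpy as np

def register_rigid(src, dst):
    """Estimate rotation R and translation t aligning src to dst.

    src, dst: (N, 3) arrays of corresponding points (e.g. from the
    overlap region of two point clouds).
    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In practice this closed-form step would sit inside an iterative loop (ICP-style) that re-estimates correspondences between iterations.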
METHOD AND APPARATUS FOR GENERATING AN ENHANCED DIGITAL IMAGE OF A PHYSICAL OBJECT OR ENVIRONMENT
A method and apparatus are provided for generating enhanced digital imagery from digital input images (7) exhibiting differences other than or in excess of differences in projection. By applying a plurality of image transformations (10) to frequency normalised (8) versions of the input images, and generating measures of image similarity (11) therefrom, an optimum image transformation (12) can be determined that can be applied to the digital input images such that they are substantially matched. Digital input images of physical objects or environments to which the method is applied can then be used in image fusion, ortho-rectification or change detection applications, in order to monitor the physical object or environment.
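The search over candidate transformations scored by an image-similarity measure can be sketched as follows; this toy version restricts the transformation family to integer translations and uses normalised cross-correlation as the similarity measure (both are assumptions for illustration, since the patent covers a general plurality of transformations and similarity measures):

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation: a similarity measure in [-1, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_shift(ref, moving, max_shift=5):
    """Apply a family of candidate translations to `moving`, score each
    against `ref`, and return the optimum transformation found."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(ref, shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best, best_score
```

A full implementation would also include the frequency-normalisation pre-step and a richer (e.g. affine or projective) transformation family.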
Method and apparatus to infer structural stresses with visual image and video data
The present invention includes an apparatus and method for determining the time-varying stress experienced by a structure, comprising: obtaining a sequence of images that include the structure; segmenting the second and any subsequent images to include the static portions identified from the first image; computing, with a processor, the affine transformations between the first image and the second and any subsequent images; estimating the deformation (i.e., translation and rotation) undergone by the structure; and converting the deformation, using one or more scaling functions, into an estimate of the time-varying stress experienced by the structure.
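The pipeline of estimating an affine transformation from matched static points and converting the resulting deformation to stress might look like the sketch below. The least-squares affine fit and the linear scaling functions are assumptions for illustration; real scaling functions would be calibrated for the specific structure:

```python
import numpy as np

def estimate_affine(p, q):
    """Least-squares 2D affine (A, t) with q ~= p @ A.T + t,
    from matched static points p, q of shape (N, 2)."""
    P = np.hstack([p, np.ones((len(p), 1))])   # rows [x, y, 1]
    X, *_ = np.linalg.lstsq(P, q, rcond=None)  # (3, 2): A.T stacked on t
    return X[:2].T, X[2]

def deformation_to_stress(A, t, k_trans=1.0, k_rot=1.0):
    """Convert the estimated deformation (translation and rotation) to a
    stress value via simple linear scaling functions (illustrative)."""
    rotation = np.arctan2(A[1, 0], A[0, 0])    # in-plane rotation angle
    translation = np.linalg.norm(t)
    return k_trans * translation + k_rot * abs(rotation)
```

Applying this per frame of the image sequence yields the time-varying stress signal the abstract describes.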
Image processing apparatus, image pickup apparatus, control method for image processing apparatus, and storage medium
Some embodiments of an image processing apparatus comprise a processor to execute instructions. The instructions are for detecting feature points from images in a first plurality of images that at least partially overlap in angle of view and have different focus positions and from at least one second image, calculating a first conversion coefficient from the feature points, combining the images in the first plurality of images based on the first conversion coefficient, calculating a second conversion coefficient of at least a part of the images in the first plurality of images by using the feature points detected from the second image, and combining the at least a part of the images in the first plurality of images by using the second conversion coefficient, wherein a depth of field in the second image is deeper than a respective depth of field in each image in the first plurality of images.
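Setting aside the feature-point alignment and conversion coefficients, the combining of images with different focus positions can be illustrated with a simple focus-stacking rule: at each pixel, keep the value from the image that is locally sharpest. The Laplacian-based sharpness measure is an assumption, chosen only to make the combining step concrete:

```python
import numpy as np

def sharpness(img):
    """Per-pixel focus measure: absolute Laplacian response."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return np.abs(lap)

def focus_stack(images):
    """Combine aligned images with different focus positions by taking,
    at each pixel, the value from the image that is sharpest there."""
    stack = np.stack(images)
    idx = np.argmax(np.stack([sharpness(im) for im in images]), axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```

In the patented method the inputs would first be aligned with the calculated conversion coefficients; here they are assumed to be pre-aligned.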
Automatic marker-less alignment of digital 3D face and jaw models
The invention aligns a digital face model from a 3D face scanner with a 3D jaw scan produced by an intraoral scanner, without using external markers. The alignment proceeds in two steps. In the first step, a face scan of the subject with teeth clenched, referred to as a clenched-teeth face scan, is aligned with the intraoral jaw scan to obtain a first transformation matrix. In the second step, a face model of the same subject with a normal facial expression is aligned with the clenched-teeth face model to obtain a second transformation matrix. A graphical user interface is provided that enables a user to manually align the 3D jaw scan with a 2D image of the subject's teeth to determine the first transformation.
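The point of the two-step procedure is that the two transformation matrices compose: chaining the normal-face-to-clenched-face transform with the clenched-face-to-jaw transform maps the normal-expression face model directly into jaw-scan coordinates. A minimal sketch with 4x4 homogeneous matrices (the helper names are illustrative):

```python
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def compose(T1, T2):
    """T1: clenched-teeth face scan -> intraoral jaw scan.
    T2: normal-expression face model -> clenched-teeth face scan.
    The product maps the normal face model into jaw-scan coordinates."""
    return T1 @ T2
```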
AN APPARATUS AND METHOD FOR FIDUCIAL MARKER ALIGNMENT IN ELECTRON TOMOGRAPHY
Provided is an apparatus and method for aligning fiducial markers. The apparatus may align the positions of fiducial markers on two or more micrographs, forming two or more point sets corresponding to the micrographs; create a first set of matched fiducial markers and a set of unmatched fiducial markers; and transform the unmatched fiducial markers into transformed point sets and match them, resulting in a second set of matched fiducial markers. The matching of the second set results in improved alignment of a large number of fiducial markers. The aligned positions of the fiducial markers may be constrained by an upper bound on the transformation deviation when aligning positions across the two or more micrographs.
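The second matching pass can be sketched as follows: apply the transform estimated from the first matched set to the still-unmatched markers, then pair each with its nearest neighbour in the other micrograph, accepting the pair only if the residual stays within the upper bound. The greedy one-to-one matching rule is an assumed simplification:

```python
import numpy as np

def match_remaining(unmatched_a, unmatched_b, A, t, bound):
    """Transform unmatched markers from micrograph A with (A, t), then
    pair each with its nearest marker in micrograph B if the residual
    distance is within `bound`. Returns index pairs (i_in_a, j_in_b)."""
    mapped = unmatched_a @ A.T + t
    pairs, used = [], set()
    for i, p in enumerate(mapped):
        d = np.linalg.norm(unmatched_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= bound and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs
```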
Generating virtually stained images of unstained samples
Systems and methods for generating virtually stained images of unstained samples are provided. According to an aspect of the invention, a method includes accessing an image training dataset including a plurality of image pairs. Each image pair includes a first image of an unstained first tissue sample, and a second image acquired when the first tissue sample is stained. The method also includes accessing a set of parameters for an artificial neural network, wherein the set of parameters includes weights associated with artificial neurons within the artificial neural network; training the artificial neural network by using the image training dataset and the set of parameters to adjust the weights; accessing a third image of a second tissue sample that is unstained; using the trained artificial neural network to generate a virtually stained image of the second tissue sample from the third image; and outputting the virtually stained image.
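The training loop (access paired images, adjust weights against the stained targets) can be illustrated with a drastically simplified stand-in: a single-neuron linear "network" fitted by gradient descent on per-pixel intensities. A real virtual-staining system would use a deep network such as a U-Net or GAN; this sketch only shows the pair-supervised weight-adjustment pattern:

```python
import numpy as np

def train_virtual_stainer(pairs, lr=0.5, epochs=300):
    """Fit weights (w, b) of a one-neuron model mapping unstained pixel
    intensity to stained intensity, by gradient descent on MSE loss over
    the image-pair training dataset."""
    w, b = 0.0, 0.0                              # the parameter set
    for _ in range(epochs):
        for unstained, stained in pairs:
            err = w * unstained + b - stained    # prediction residual
            w -= lr * np.mean(err * unstained)   # dL/dw
            b -= lr * np.mean(err)               # dL/db
    return w, b
```

Once fitted, the model plays the role of the trained network: it maps a new unstained image (the "third image") to its virtually stained counterpart.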
TARGET-IMAGE ACQUISITION METHOD, PHOTOGRAPHING DEVICE, AND UNMANNED AERIAL VEHICLE
The present disclosure provides a target-image acquisition method. The target-image acquisition method includes acquiring a visible-light image and an infrared (IR) image of a target, captured at the same time point by a photographing device; weighting and fusing the visible-light image and the IR image to obtain a fused image; and obtaining an image of the target according to the fused image. The present disclosure also provides a photographing device and an unmanned aerial vehicle (UAV) using the method above.
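The weighting-and-fusing step, at its simplest, is a pixel-wise convex combination of the two registered images. The fixed global weight below is an assumed example; a practical implementation might vary the weight per pixel or per region:

```python
import numpy as np

def fuse(visible, infrared, w=0.7):
    """Pixel-wise weighted fusion of a visible-light image and an IR
    image (both already registered to the same view and value range)."""
    return w * visible + (1.0 - w) * infrared
```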
Image processing method and device
An image processing method includes presetting an image processing model and, when a first three-dimensional-effect plane image is displayed in response to a user operation, performing the following processing based on the model: mapping the first three-dimensional-effect plane image to the projection plane; determining, according to the three-dimensional positional relationship among the viewpoint, the projection plane, and the view window, together with the size of the view window, a first visual area obtained by projection onto the projection plane through the viewpoint and the view window; and clipping a first image in the first visual area and displaying it.
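The geometry of the visual area follows from similar triangles: rays from the viewpoint through the edges of the view window spread linearly with distance, so the visible region on the projection plane is the window scaled by the ratio of the two distances. A sketch under that assumption (the clipping helper and its centre parameter are illustrative):

```python
import numpy as np

def visual_area(window_size, d_window, d_plane):
    """Size of the region on the projection plane seen from the viewpoint
    through the view window, by similar triangles."""
    scale = d_plane / d_window
    return (window_size[0] * scale, window_size[1] * scale)

def clip(image, area, center):
    """Clip the part of the image that falls inside the visual area,
    given the area size (h, w) and its centre (cy, cx) in pixels."""
    h, w = area
    cy, cx = center
    y0, x0 = max(0, int(cy - h / 2)), max(0, int(cx - w / 2))
    y1, x1 = int(cy + h / 2), int(cx + w / 2)
    return image[y0:y1, x0:x1]
```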
GENERATING AND EVALUATING MAPPINGS BETWEEN SPATIAL POINT SETS
A method implemented on a computing device comprising a data-parallel coprocessor and a memory coupled with the data-parallel coprocessor, for generating and evaluating N-to-1 mappings between spatial point sets in nD. The method comprises using the computing device to carry out steps comprising: receiving first and second spatial point sets, an array of (n+1) combinations in the first spatial point set, an array of one or more pairs of neighbor (n+1) combinations referencing into the array of (n+1) combinations, and a CCISS between the two spatial point sets; computing a plurality of solving structures and providing a two-level indexing structure over the plurality of (n+1) combinations; generating one or more N-to-1 mappings; and generating a plurality of local distance measures for unique combinations of the one or more pairs of neighbor (n+1) combinations and the one or more N-to-1 mappings. Some embodiments further comprise providing, in addition, a two-level indexing structure for pairs of neighbor (n+1) combinations for generating the plurality of local distance measures.
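In 2D (n = 2), an (n+1) combination is a triangle over the first point set, and a candidate N-to-1 mapping can be scored by summing a local distance measure over those triangles. The edge-length-change measure below is a stand-in for illustration (the patent defines its measures over pairs of neighbour (n+1) combinations, and runs on a data-parallel coprocessor); only the evaluation pattern is sketched:

```python
import numpy as np

def local_measure(tri_a, tri_b):
    """Local distance between a triangle (an (n+1) combination in 2D)
    and its image under a mapping: total change in edge lengths."""
    d = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            d += abs(np.linalg.norm(tri_a[i] - tri_a[j])
                     - np.linalg.norm(tri_b[i] - tri_b[j]))
    return d

def evaluate_mapping(A, B, triangles, mapping):
    """Sum local measures of all triangles under an N-to-1 mapping,
    where mapping[i] gives the index in B assigned to point i of A
    (several points of A may share one target, hence N-to-1)."""
    return sum(local_measure(A[list(t)], B[mapping[list(t)]])
               for t in triangles)
```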