Patent classifications
G06T3/0075
Method for quantitatively identifying the defects of large-size composite material based on infrared image sequence
The present invention provides a method for quantitatively identifying the defects of a large-size composite material based on an infrared image sequence. First, the overlap area of a stitched infrared image is obtained, and the stitched image is divided into three parts accordingly: the overlap area, a reference image area, and a registration image area. Next, P defect areas are extracted from the stitched image; the conversion coordinates of the pixels of each defect area are obtained according to the three parts of the image, and the transient thermal response curves of the centroid coordinate and the edge point coordinates are further obtained. Thermal diffusion points are then found among the edge points of the defect areas according to a created weight sequence and a dynamic distance threshold ε.sub.ttr×d.sub.p_max. Finally, based on the thermal diffusion points, accurate quantitative identification of the defect sizes is completed.
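The threshold step can be sketched as follows. The abstract does not define the weight sequence or d.sub.p_max, so this minimal example assumes d.sub.p_max is the largest centroid-to-edge distance and simply flags edge points lying beyond ε.sub.ttr×d.sub.p_max; the function name and sample data are illustrative:

```python
import numpy as np

def thermal_diffusion_points(centroid, edge_points, eps_ttr=0.8):
    """Flag edge points beyond the dynamic threshold eps_ttr * d_p_max,
    reading d_p_max as the largest centroid-to-edge distance (an
    assumption; the abstract's weight sequence is omitted here)."""
    d = np.linalg.norm(edge_points - centroid, axis=1)  # centroid-to-edge distances
    return edge_points[d > eps_ttr * d.max()]           # candidate diffusion points

centroid = np.array([0.0, 0.0])
edges = np.array([[1.0, 0.0], [0.0, 2.0], [-3.0, 0.0], [0.0, -1.0]])
print(thermal_diffusion_points(centroid, edges, eps_ttr=0.8))
```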
Imaging apparatus, moveable body, and imaging method
An imaging apparatus comprises a camera and a controller. The camera generates a captured image. The controller superimposes, in the captured image, a calibration object movable by translation or rotation. When a plurality of indexes located at positions determined with respect to a moveable body having the camera mounted thereon are imaged, the controller moves the calibration object so that a first corresponding portion coincides with an image of a first index of the plurality of indexes. The controller then performs distortion correction, so that the image of a second index coincides with a second corresponding portion, on an area in the captured image determined based on the position of the image of the first index, the position of the image of the second index, and the position at which the calibration object is superimposed.
Systems and methods for spatial analysis of analytes using fiducial alignment
Systems and methods for spatial analysis of analytes are provided. A data structure is obtained comprising an image, as an array of pixel values, of a sample on a substrate having a substrate identifier, fiducial markers, and a set of capture spots. The pixel values are used to identify derived fiducial spots. The substrate identifier identifies a template having reference positions for reference fiducial spots and a corresponding coordinate system. The derived fiducial spots are aligned with the reference fiducial spots using an alignment algorithm to obtain a transformation between the derived and reference fiducial spots. The transformation and the template's corresponding coordinate system are used to register the image to the set of capture spots. The registered image is then analyzed in conjunction with spatial analyte data associated with each capture spot, thereby performing spatial analysis of the analytes.
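The abstract does not name the alignment algorithm, so the following is only one plausible sketch: a standard least-squares (Umeyama-style) similarity fit recovering the scale, rotation, and translation that map derived fiducial spots onto reference fiducial spots:

```python
import numpy as np

def fit_similarity(derived, reference):
    """Least-squares similarity transform with reference_i ~ s * R @ derived_i + t
    (Umeyama-style; assumes 2-D fiducial coordinates, one per row)."""
    mu_d = derived.mean(axis=0)
    mu_r = reference.mean(axis=0)
    P = derived - mu_d                       # centered derived fiducial spots
    Q = reference - mu_r                     # centered reference fiducial spots
    U, S, Vt = np.linalg.svd(P.T @ Q)        # cross-covariance H = sum p_i q_i^T
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T                       # optimal rotation
    s = np.trace(D @ np.diag(S)) / (P ** 2).sum()  # optimal scale
    t = mu_r - s * R @ mu_d                  # optimal translation
    return s, R, t
```

Applying `s * derived @ R.T + t` then registers the derived spots into the template's coordinate system.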
ROBOTIC SYSTEMS PROVIDING CO-REGISTRATION USING NATURAL FIDUCIALS AND RELATED METHODS
A method may be provided to operate a medical system. First data may be provided for a first 3-dimensional (3D) image scan of an anatomical volume, with the first data identifying a blood vessel node in a first coordinate system for the first 3D image scan. Second data may be provided for a second 3D image scan of the anatomical volume, with the second data identifying the blood vessel node in a second coordinate system for the second 3D image scan. The first and second coordinate systems for the first and second 3D image scans of the anatomical volume may be co-registered using the blood vessel node identified in the first data and in the second data as a fiducial.
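A single shared fiducial such as a blood vessel node constrains only the translation between the two coordinate systems (rotation and scale would need additional fiducials); a minimal sketch with illustrative names:

```python
import numpy as np

def co_register(node_scan1, node_scan2):
    """Translation from scan-1 coordinates to scan-2 coordinates using one
    shared anatomical fiducial (a single point fixes translation only)."""
    return np.asarray(node_scan2, float) - np.asarray(node_scan1, float)

def to_scan2(points_scan1, t):
    """Map scan-1 points into the scan-2 coordinate system."""
    return np.asarray(points_scan1, float) + t
```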
Panoramic Stitching Method, Apparatus, and Storage Medium
The present disclosure discloses a panoramic stitching method, an apparatus, and a storage medium. A transformation matrix obtaining method includes: obtaining motion data detected by sensors, wherein the sensors are disposed on a probe used to collect images, and the motion data represents a moving trend of the probe during image collection; inputting the motion data into a pre-trained neural network to calculate matrix parameters; and calculating a transformation matrix from the matrix parameters, wherein the transformation matrix is used to stitch the images collected by the probe into a panoramic image. In the present disclosure, the transformation matrix can be calculated and the images can be stitched without using characteristics of the images, so factors such as brightness and image characteristics have no impact, thereby improving the accuracy of the transformation matrix calculation and improving the image stitching effect.
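The abstract does not fix how the matrix parameters parameterize the transform. Assuming, purely for illustration, that the network outputs a 2-D rigid parameterization (tx, ty, θ), building the transformation matrix and mapping a probe-frame pixel into panorama coordinates might look like:

```python
import numpy as np

def transformation_matrix(tx, ty, theta):
    """3x3 homogeneous rigid transform from hypothetical network outputs
    (tx, ty, theta); the real parameterization is not specified."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def map_pixel(T, x, y):
    """Map a pixel of a newly collected frame into panorama coordinates."""
    u, v, w = T @ np.array([x, y, 1.0])
    return u / w, v / w
```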
Automated co-registration of prostate MRI data
Medical imaging analysis systems are configured to perform automatic image registration algorithms that perform three-dimensional (3D), affine, and/or intensity-based co-registration of magnetic resonance imaging (MRI) data, such as multiparametric MRI (mpMRI) data, using mutual information (MI) as a similarity metric. An apparatus comprises a computer-readable storage medium storing a plurality of imaging series of MRI data for imaged tissue of a patient, and a processor coupled to the computer-readable storage medium. The processor is configured to receive the imaging series of MRI data; identify a volume of interest (VOI) in each image of the imaging series; compute registration parameters for the VOIs through maximization of the mutual information of the VOIs; and register the VOIs using the computed registration parameters.
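Mutual information as a similarity metric is commonly estimated from a joint intensity histogram of the two volumes; a minimal sketch (the bin count and function name are illustrative, not the patented implementation):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two intensity arrays via a joint histogram; the metric
    that intensity-based registration maximizes over transform parameters."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()               # joint probability estimate
    px = p.sum(axis=1, keepdims=True)     # marginal of a
    py = p.sum(axis=0, keepdims=True)     # marginal of b
    nz = p > 0                            # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

A registration loop would search over affine parameters for the transform that maximizes this value between the resampled VOIs.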
Generating long exposure images
Video stream data comprising a plurality of arriving frames is received, along with an indication that the stream data should be processed into a long exposure image. On arrival of a frame, an attempt is made to align the current frame with a registration image. Based at least in part on at least one of the frame arrival and the attempt to align the current frame with the registration image, a determination is made that feedback should be provided to a user, and the feedback is provided to the user.
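Assuming, for illustration, that per-frame alignment reduces to an integer (dy, dx) shift against the registration image, accumulating aligned frames into a long-exposure image can be sketched as:

```python
import numpy as np

def long_exposure(frames, shifts):
    """Average frames after undoing each frame's (dy, dx) offset from the
    registration image (sketch: integer shifts only; real alignment would
    use a full geometric transform)."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for f, (dy, dx) in zip(frames, shifts):
        acc += np.roll(f, (-dy, -dx), axis=(0, 1))  # align, then accumulate
    return acc / len(frames)
```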
Adversarial scene adaptation for camera pose regression
A pose estimation training system includes: a first model configured to generate a first 6 degrees of freedom (DoF) pose of a first camera that captured a first image from a first domain; a second model configured to generate a second 6 DoF pose of a second camera that captured a second image from a second domain, where the second domain is different from the first domain; a discriminator module configured to generate, based on first and second outputs from encoder modules of the first and second models, a discriminator output indicative of whether the first and second images are from the same domain; and a training control module configured to, based on the discriminator output, selectively adjust at least one weight value shared by the first model and the second model.
Image generation and editing with latent transformation detection
This disclosure includes technologies for image processing, particularly for image generation and editing in a configurable semantic direction. A generative adversarial network is trained together with an auxiliary network on an auxiliary task designed to disentangle the latent space of the generative adversarial network. As a result, a new type of GAN is created that improves image generation and editing in both conditional and unconditional settings.
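Editing in a semantic direction within a disentangled latent space is conventionally the move z′ = z + α·d, where d is a (unit) direction vector; a minimal sketch, with the caveat that in the disclosed system the direction would come from the trained auxiliary network rather than be hand-supplied:

```python
import numpy as np

def edit_latent(z, direction, alpha):
    """Move a latent code along a unit-normalized semantic direction:
    z' = z + alpha * d; decoding z' with the generator yields the edited
    image (generator omitted in this sketch)."""
    d = direction / np.linalg.norm(direction)
    return z + alpha * d
```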
Generating candidate mirror snap points using determined axes of symmetry
In implementations of systems for generating candidate mirror snap points using determined axes of symmetry, a computing device implements a symmetry system to receive vector object data describing a set of points of a vector object. The symmetry system generates convex polygons that enclose the set of points and identifies the particular convex polygon that has the smallest area. A side of that polygon is determined to be an axis of symmetry for the vector object. The symmetry system generates an indication of a candidate snap point, based on the axis of symmetry and a point of the set of points of the vector object, for display in a user interface.
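Given an axis of symmetry defined by a polygon side, a natural candidate mirror snap point is the reflection of a vector-object point across that axis; a minimal sketch (the reflection formula is standard, the function name illustrative):

```python
import numpy as np

def reflect_across_axis(p, a, b):
    """Reflect point p across the line through a and b (a side of the
    minimum-area enclosing polygon used as the axis of symmetry); the
    result is a candidate mirror snap point."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    d = (b - a) / np.linalg.norm(b - a)  # unit direction of the axis
    foot = a + np.dot(p - a, d) * d      # perpendicular foot on the axis
    return 2.0 * foot - p                # mirror image of p
```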