Patent classifications
G06T3/14
Deconvolution of digital images
A method for deconvolution of digital images includes obtaining a degraded image from a digital sensor, with a processor accepting output from the digital sensor and recognizing a distorted element within the image. The distorted element is compared with a true shape of the element to produce a degrading function. The degrading function is deconvolved from at least a portion of the image to improve image quality. A method of indirectly decoding a barcode includes obtaining an image of a barcode using an optical sensor in a mobile computing device, the image comprising barcode marks and a textual character. The textual character is optically recognized and an image degrading characteristic is identified from the textual character. Compensating for the image degrading characteristic renders previously undecodable barcode marks decodable. A system for deconvolution of digital images is also included.
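The core idea, comparing a distorted element with its known true shape to recover a degrading function and then inverting it, can be sketched in one dimension. The abstract does not specify an algorithm, so the functions below are illustrative: they assume the true element begins with a unit impulse (so the observed response directly reveals the blur kernel) and that the kernel is a short causal FIR filter, which makes exact inverse filtering possible by recursion.

```python
def convolve(signal, kernel):
    # causal FIR convolution; output has the same length as the signal
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(kernel):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

def estimate_kernel(true_shape, observed, length):
    # If the true shape starts with a unit impulse (true_shape[0] == 1),
    # the first samples of the observed response ARE the degrading kernel.
    assert true_shape[0] == 1.0
    return observed[:length]

def deconvolve(observed, kernel):
    # exact recursive inverse filter for a causal kernel (kernel[0] != 0):
    # x[n] = (y[n] - sum_{k>=1} h[k] * x[n-k]) / h[0]
    x = []
    for n in range(len(observed)):
        acc = observed[n]
        for k in range(1, len(kernel)):
            if n - k >= 0:
                acc -= kernel[k] * x[n - k]
        x.append(acc / kernel[0])
    return x
```

Real deconvolution of 2-D images with noise would use regularized methods (e.g. Wiener or Richardson-Lucy); this sketch only shows the estimate-then-invert structure the abstract describes.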
System and method for generating partial surface from volumetric data for registration to surface topology image data
The present disclosure relates to the generation of partial surface models from volumetric datasets for subsequent registration of such partial surface models to surface topology datasets. Specifically, given an object that is imaged using surface topology imaging and another volumetric modality, the volumetric dataset is processed in combination with an approach viewpoint to generate one or more partial surfaces of the object that will be visible to the surface topology imaging system. This procedure can eliminate internal structures from the surfaces generated from volumetric datasets, thereby increasing the similarity of the datasets between the two modalities and enabling improved and quicker registration.
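One simple way to keep only the surface visible from an approach viewpoint is back-face culling on a triangle mesh extracted from the volume: faces whose outward normal points away from the viewpoint cannot be seen by the topology scanner. The abstract does not name a specific visibility test, so this is a minimal sketch under that assumption (a full implementation would also handle occlusion between front-facing surfaces).

```python
def visible_faces(vertices, faces, viewpoint):
    # keep only faces whose outward normal points toward the viewpoint
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    kept = []
    for f in faces:
        p0, p1, p2 = (vertices[i] for i in f)
        normal = cross(sub(p1, p0), sub(p2, p0))   # CCW winding = outward
        if dot(normal, sub(viewpoint, p0)) > 0:
            kept.append(f)
    return kept
```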
Post capture imagery processing and deployment systems
A post capture imagery processing system is provided. The system is for use with aerial imagery and includes a server having a processor and a memory, and a software application providing instructions to the server to process the captured aerial imagery, such as spherical imagery. The server further includes instructions to geo-rectify the spherical imagery. Geo-rectifying the spherical imagery may include either using a third-party GIS map to associate corresponding data with the spherical imagery in order to produce a geo-referenced spherical image, or calculating the geo-references by a software application performing particular operations on the server.
Method and device for performing mapping on spherical panoramic image
Disclosed are a method and device for image pasting on a spherical panoramic image. The method may comprise: establishing a spherical coordinate system for a first spherical panoramic image; mapping the first spherical panoramic image on a spherical surface to obtain a spherical projection image of the first spherical panoramic image; determining a first image pasting area in the spherical projection image according to the selection of the user and transforming the corresponding image into a plane image; transforming the image to be pasted into a shape of the plane image; mapping the transformed image to be pasted to the spherical coordinate system; rotating the image to be pasted that is mapped to the spherical coordinate system to a position at which the first image pasting area overlaps with the image; transforming the image to be pasted after being transformed into a second spherical panoramic image; determining a second image pasting area corresponding to the first image pasting area; and interpolating the second spherical panoramic image into the second image pasting area to complete the pasting. Distortionless image pasting on a spherical panoramic image may be realized.
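The mapping between a spherical panoramic image and the spherical coordinate system underlies every step above. A standard choice (assumed here; the abstract does not fix a convention) is the equirectangular layout, where pixel column maps to longitude and pixel row maps to latitude:

```python
import math

def pixel_to_sphere(u, v, width, height):
    # equirectangular pixel -> unit vector on the sphere
    lon = (u / width) * 2 * math.pi - math.pi        # [-pi, pi)
    lat = math.pi / 2 - (v / height) * math.pi       # [-pi/2, pi/2]
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def sphere_to_pixel(p, width, height):
    # unit vector on the sphere -> equirectangular pixel
    x, y, z = p
    lon = math.atan2(y, x)
    lat = math.asin(max(-1.0, min(1.0, z)))
    u = (lon + math.pi) / (2 * math.pi) * width
    v = (math.pi / 2 - lat) / math.pi * height
    return (u, v)
```

Pasting then amounts to transforming the patch into this plane, rotating its spherical footprint to the selected area, and interpolating the result back into pixel coordinates via `sphere_to_pixel`.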
Method for predicting defects in assembly units
One variation of a method for predicting manufacturing defects includes: accessing a set of inspection images of a set of assembly units recorded by an optical inspection station; for each inspection image in the set of inspection images, detecting a set of features in the inspection image and generating a vector representing the set of features in a multi-dimensional feature space; grouping neighboring vectors in the multi-dimensional feature space into a set of vector groups; and, in response to receipt of a first inspection result indicating a defect in a first assembly unit, in the set of assembly units, associated with a first vector in a first vector group, in the set of vector groups, labeling the first vector group with the defect and flagging a second assembly unit associated with a second vector, in the first vector group, as exhibiting characteristics of the defect.
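The group-then-propagate logic can be sketched with a toy greedy grouping. The abstract does not name a clustering algorithm, so the radius-based grouping below is an illustrative stand-in; only the labeling step (flag every other unit in the defective unit's group) follows directly from the text.

```python
def group_vectors(vectors, radius):
    # greedy grouping: a vector joins the first group whose
    # representative lies within `radius`, else it starts a new group
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    groups = []                         # list of (representative, members)
    for i, v in enumerate(vectors):
        for rep, members in groups:
            if dist(v, rep) <= radius:
                members.append(i)
                break
        else:
            groups.append((v, [i]))
    return [members for _, members in groups]

def flag_units(groups, defect_index):
    # a defect reported for one unit flags its group neighbors
    for members in groups:
        if defect_index in members:
            return [i for i in members if i != defect_index]
    return []
```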
Image processing apparatus, alignment method and storage medium
An image processing apparatus includes first alignment means configured to perform an alignment in a horizontal direction on a plurality of two-dimensional tomographic images based on measurement light controlled to scan an identical position of an eye according to a first method, and second alignment means configured to perform an alignment in a depth direction on the plurality of two-dimensional tomographic images according to a second method that is different from the first method.
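The two-stage structure (one method for the horizontal alignment, a different method for the depth alignment) can be illustrated on 1-D scans. The patent abstract does not disclose the methods themselves, so the choices below are assumptions: exhaustive cross-correlation for the first stage and peak matching for the second.

```python
def horizontal_shift(ref, img, max_shift):
    # first method: shift maximizing cross-correlation with the reference
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = 0.0
        for i in range(len(ref)):
            j = i + s
            if 0 <= j < len(img):
                score += ref[i] * img[j]
        if score > best_score:
            best, best_score = s, score
    return best

def depth_shift(ref, img):
    # second, different method: align the dominant reflection peak
    return img.index(max(img)) - ref.index(max(ref))
```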
Line-based image registration and cross-image annotation devices, systems and methods
The disclosure relates to devices, systems and methods for image registration and annotation. The devices include computer software products for aligning whole slide digital images on a common grid and transferring annotations from one aligned image to another aligned image on the basis of matching tissue structure. The systems include computer-implemented systems such as work stations and networked computers for accomplishing the tissue-structure based image registration and cross-image annotation. The methods include processes for aligning digital images corresponding to adjacent tissue sections on a common grid based on tissue structure, and transferring annotations from one of the adjacent tissue images to another of the adjacent tissue images. The basis for alignment may be a line-based registration process, wherein sets of lines are computed on the boundary regions computed for the two images, where the boundary regions are obtained using information from two domains: soft-weighted foreground images and gradient magnitude images. The binary mask image, based on whose boundary the line features are computed, may be generated by combining two binary masks: a first binary mask is obtained on thresholding a soft-weighted (continuous valued) foreground image, which is computed based on the stain content in an image, while a second binary mask is obtained after thresholding a gradient magnitude domain image, where the gradient is computed from the grayscale image obtained from the color image.
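The two-mask combination can be sketched directly from the abstract: threshold a soft-weighted foreground image, threshold a gradient-magnitude image, and merge the results. The gradient operator and the merge rule (logical OR) are assumptions for illustration; the abstract only says the masks are combined.

```python
def threshold(image, t):
    return [[1 if v > t else 0 for v in row] for row in image]

def gradient_magnitude(image):
    # simple forward differences (zero gradient at the last row/column)
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = image[y][x + 1] - image[y][x] if x + 1 < w else 0.0
            gy = image[y + 1][x] - image[y][x] if y + 1 < h else 0.0
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def combined_mask(foreground, gray, t_fg, t_grad):
    # OR of the stain-content mask and the edge-strength mask
    m1 = threshold(foreground, t_fg)
    m2 = threshold(gradient_magnitude(gray), t_grad)
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]
```

The line features used for registration would then be fitted to the boundary of the mask returned by `combined_mask`.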
Gaze-tracking device, computer-readable medium, and method
A corneal-reflection-based gaze detection section calculates a time series of a three-dimensional gaze vector in a camera coordinate system from a time series of facial images. A face position-and-orientation estimation section estimates a time series of a three-dimensional position and orientation of a face. An eyeball-center-coordinates transformation section calculates a time series of a three-dimensional position of the eyeball center in a coordinate system of a three-dimensional facial model. A fixed parameter calculation section calculates for use as a fixed parameter a three-dimensional position of the eyeball center in the three-dimensional facial-model coordinate system. An eyeball-center-based gaze detection section uses the three-dimensional position of the eyeball center calculated by the fixed parameter calculation section to calculate a three-dimensional gaze vector from a three-dimensional position of the eyeball center to a three-dimensional position of a pupil center in the camera coordinate system. This enables accurate gaze tracking to be performed using a simple configuration and without performing calibration.
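The final step, a gaze vector from the eyeball center to the pupil center in the camera coordinate system, reduces to a rigid transform followed by a normalized difference. The helper names and the row-major rotation convention below are illustrative, not from the patent:

```python
def model_to_camera(point, rotation, translation):
    # rigid transform: rotation is 3x3 row-major, translation a 3-vector
    # (the estimated pose of the 3-D facial model in camera coordinates)
    return tuple(
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

def gaze_vector(eyeball_center, pupil_center):
    # unit vector from the eyeball center through the pupil center
    d = tuple(p - e for e, p in zip(eyeball_center, pupil_center))
    norm = sum(c * c for c in d) ** 0.5
    return tuple(c / norm for c in d)
```

Because the eyeball center is a fixed parameter in the facial-model frame, only the head pose must be re-estimated each frame, which is what removes the need for per-user calibration.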
Generating a stylized image or stylized animation by matching semantic features via an appearance guide, a segmentation guide, and/or a temporal guide
Certain embodiments involve generating one or more of an appearance guide and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image. The system generates an appearance guide, a positional guide, or both from the target image and the style exemplar image. The system uses one or more of the guides to transfer a texture or style from the style exemplar image to the target image.
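A toy version of guided transfer: compute a coarse appearance guide for both images (box-blurred luminance is one common choice, assumed here), then for each target pixel copy the style value from the exemplar pixel whose guide value matches best. Production guided synthesis matches patches rather than single pixels; this sketch only shows the guide-matching principle.

```python
def appearance_guide(gray, radius=1):
    # box-blurred luminance as a coarse appearance descriptor
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [gray[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def transfer(target_guide, exemplar_guide, exemplar_style):
    # for each target pixel, copy style from the exemplar pixel whose
    # guide value is closest (brute-force nearest neighbor)
    flat = [(g, s)
            for row_g, row_s in zip(exemplar_guide, exemplar_style)
            for g, s in zip(row_g, row_s)]
    return [[min(flat, key=lambda p: abs(p[0] - g))[1] for g in row]
            for row in target_guide]
```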
Image stitching method using viewpoint transformation and system therefor
An image stitching method using viewpoint transformation and a system therefor are provided. The method includes obtaining images captured by a plurality of cameras included in a camera system, performing viewpoint transformation for each of the images using a depth map for the images, and stitching the viewpoint-transformed images.
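The depth-driven viewpoint transformation can be illustrated in one dimension: each sample is shifted by its disparity (focal length times baseline over depth), reprojecting it into a common virtual viewpoint before blending. The pinhole-disparity model and the averaging blend are illustrative assumptions; the abstract does not specify either.

```python
def reproject(scan, depths, baseline, focal):
    # shift each sample by its disparity = focal * baseline / depth,
    # leaving None where no sample lands (a hole to be filled by the
    # other camera at stitch time)
    out = [None] * len(scan)
    for x, (v, z) in enumerate(zip(scan, depths)):
        nx = x + round(focal * baseline / z)
        if 0 <= nx < len(out):
            out[nx] = v
    return out

def stitch(a, b):
    # average where both views contribute, else take whichever exists
    merged = []
    for va, vb in zip(a, b):
        if va is None:
            merged.append(vb)
        elif vb is None:
            merged.append(va)
        else:
            merged.append((va + vb) / 2)
    return merged
```

Transforming all views to a shared viewpoint before stitching is what removes the parallax seams that plague naive panorama blending.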