Patent classifications
G06T3/00
Three-dimensional (3D) shape modeling based on two-dimensional (2D) warping
An electronic device and method for 3D modeling based on 2D warping are disclosed. The electronic device acquires a color image of a user's face, depth information corresponding to the color image, and a point cloud of the face. A 3D mean-shape model of a reference 3D face is acquired and rigidly aligned with the point cloud. A 2D projection of the aligned 3D mean-shape model is generated; the 2D projection includes a set of landmark points associated with the aligned 3D mean-shape model. The 2D projection is warped such that the set of landmark points in the 2D projection is aligned with a corresponding set of feature points in the color image. Based on the warped 2D projection and the depth information, a 3D correspondence between the aligned 3D mean-shape model and the point cloud is determined for a non-rigid alignment of the aligned 3D mean-shape model.
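The rigid-alignment step can be sketched with the Kabsch algorithm, a standard least-squares rotation-and-translation fit between corresponding point sets; the synthetic point data and the `rigid_align` name are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (rotation + translation) of two
    corresponding 3D point sets via the Kabsch algorithm."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    # SVD of the cross-covariance gives the optimal rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Synthetic stand-in data: a "mean shape" and a rotated, shifted copy
# playing the role of the captured point cloud.
rng = np.random.default_rng(0)
mean_shape = rng.standard_normal((10, 3))
a = np.pi / 6
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
cloud = mean_shape @ Rz.T + np.array([0.5, -0.2, 1.0])

R, t = rigid_align(mean_shape, cloud)
residual = np.abs(mean_shape @ R.T + t - cloud).max()
```

With exact correspondences, as here, the recovered transform reproduces the point cloud to floating-point precision; with noisy scan data it would be the least-squares best fit.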
Arrangement for producing head related transfer function filters
When three-dimensional audio is produced using headphones, particular HRTF filters are used to modify the sound for the left and right headphone channels. As the morphology of every ear is different, it is beneficial to have HRTF filters designed specifically for the user of the headphones. Such filters may be produced by deriving the ear geometry from a plurality of images taken with an ordinary camera, detecting the necessary features in the images, and fitting those features to a model that has been produced from accurately scanned ears and comprises representative values for different ear sizes and shapes. The captured images are sent to a server (52) that performs the necessary computations and either passes the data on or produces the requested filter.
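The model-fitting step might look like the following least-squares fit of sparse feature measurements to a linear shape model built from scanned ears; the PCA-style basis, the visible-feature indices, and all numbers are synthetic assumptions for illustration:

```python
import numpy as np

# Synthetic linear ear-shape model: a mean shape plus variation modes,
# standing in for a model built from accurately scanned ears.
rng = np.random.default_rng(2)
n_points, n_modes = 30, 4
mean_ear = rng.standard_normal(n_points)
basis = rng.standard_normal((n_points, n_modes))   # shape-variation modes

true_coeffs = np.array([0.8, -0.3, 0.1, 0.5])
full_shape = mean_ear + basis @ true_coeffs        # the "real" ear geometry

# Only a few features are reliably detectable in ordinary camera images.
visible = np.array([0, 3, 7, 12, 18, 21, 25, 29])
measured = full_shape[visible]

# Least-squares fit of the model coefficients to the visible features,
# then reconstruct the complete ear geometry from the model.
coeffs, *_ = np.linalg.lstsq(basis[visible],
                             measured - mean_ear[visible], rcond=None)
reconstructed = mean_ear + basis @ coeffs
err = np.abs(reconstructed - full_shape).max()
```

The point of the fit is that a handful of detected features constrains the whole geometry once the model encodes how ear shapes vary.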
IMAGE PROCESSING METHOD TO GENERATE A PANORAMIC IMAGE
An image processing method to provide a final panoramic image of at least a portion of a head of a patient. A plurality of different provisional panoramic images is calculated from captured frame data sets by varying at least one reconstruction parameter; the provisional panoramic images are scanned for recognizable structures; the imaging quality of the recognizable structures is determined; the variation of the at least one reconstruction parameter that produced the provisional panoramic images whose recognizable structures have the highest imaging quality is determined; and, with reference to that determined variation of the reconstruction parameter, the final panoramic image is calculated. A computer-readable storage medium comprising instructions which cause a computer to perform the method, and an imaging system having such a storage medium, are also described.
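A minimal sketch of the parameter-sweep idea, assuming a shift-and-add reconstruction whose parameter is a focal-layer depth and a Laplacian-energy sharpness score; both are stand-ins for the patent's unspecified reconstruction and quality measure:

```python
import numpy as np

def sharpness(image):
    """Imaging-quality score: mean squared Laplacian response
    (a stand-in for the patent's structure-quality measure)."""
    lap = (np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4 * image)
    return float(np.mean(lap ** 2))

def reconstruct(frames, layer_depth):
    """Hypothetical shift-and-add reconstruction: the reconstruction
    parameter is a focal-layer depth, i.e. the per-frame shift applied
    before averaging."""
    return np.mean([np.roll(f, layer_depth * i, axis=1)
                    for i, f in enumerate(frames)], axis=0)

# Synthetic frames of one sharp structure, captured as if the true
# layer depth were 2 (each frame shifted a little further).
base = np.zeros((32, 64))
base[:, 30:34] = 1.0
frames = [np.roll(base, -2 * i, axis=1) for i in range(5)]

# Vary the parameter, score each provisional image, keep the best.
scores = {d: sharpness(reconstruct(frames, d)) for d in range(5)}
best_depth = max(scores, key=scores.get)
final_image = reconstruct(frames, best_depth)
```

Only the correct parameter stacks the structure coherently; every other value smears it, so the quality score peaks at the right reconstruction.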
DISPLAY CONTROL DEVICE AND HEAD-UP DISPLAY DEVICE
When viewpoint-position-following warping control is executed to update warping parameters according to the viewpoint position of the driver, a loss of the viewpoint position followed by its re-detection can make the appearance of the image change instantaneously as the warping parameters are updated, causing the driver unease; the device suppresses this. When a viewpoint loss is detected, in which the position of at least one of the left and right viewpoints becomes unclear, a control unit executing the viewpoint-position-following warping control maintains, during the viewpoint loss period, the warping parameters set immediately before that period, and, when the viewpoint position is re-detected after the loss period, invalidates at least one warping process that uses the warping parameters corresponding to the re-detected viewpoint position.
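The hold-during-loss behaviour can be sketched as a small controller state machine; the class name, the dict-based parameter table, and the exact handling of the first re-detected frame are illustrative assumptions, not the claimed control logic:

```python
# A minimal state sketch of holding warping parameters across a
# viewpoint-loss period (all names and the parameter table are invented).

class WarpingController:
    def __init__(self, param_table, initial_viewpoint):
        self.param_table = param_table          # viewpoint -> warping parameters
        self.current_params = param_table[initial_viewpoint]
        self.in_loss_period = False

    def on_viewpoint(self, viewpoint):
        """Called each frame with the detected viewpoint, or None on loss."""
        if viewpoint is None:
            # Viewpoint loss: keep the parameters set immediately before.
            self.in_loss_period = True
        elif self.in_loss_period:
            # Re-detection after a loss: suppress the instantaneous jump to
            # the new parameters for this frame (an approximation of the
            # patent's "invalidate the warping process" step).
            self.in_loss_period = False
        else:
            self.current_params = self.param_table[viewpoint]
        return self.current_params

table = {"left": "params_L", "center": "params_C", "right": "params_R"}
ctrl = WarpingController(table, "center")
held = ctrl.on_viewpoint(None)        # loss: parameters are held
after = ctrl.on_viewpoint("right")    # re-detected: immediate update suppressed
later = ctrl.on_viewpoint("right")    # normal tracking resumes
```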
METHODS AND APPARATUS FOR DEEP LEARNING BASED IMAGE ATTENUATION CORRECTION
Systems and methods for reconstructing medical images are disclosed. Measurement data from positron emission tomography (PET), and measurement data from an anatomy modality such as magnetic resonance (MR) or computed tomography (CT), are received from an image scanning system. A PET image is generated based on the PET measurement data, and an anatomy image is generated based on the anatomy measurement data. A trained neural network is applied to the PET image and the anatomy image to generate an attenuation map. The neural network may be trained on anatomy and PET images. In some examples, the trained neural network generates an initial attenuation map based on the anatomy image, registers the initial attenuation map to the PET image, and generates an enhanced attenuation map based on the registration. Further, a corrected image is reconstructed based on the generated attenuation map and the PET image.
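Why the attenuation map matters can be shown with a toy correction along a single line of response: measured PET counts are attenuated by the exponential of the integrated attenuation coefficient, so reconstruction scales them back up by the reciprocal factor derived from the map (all values here are illustrative):

```python
import numpy as np

# Toy attenuation map (mu values) sampled along one line of response.
mu = np.zeros(50)
mu[20:30] = 0.0096          # ~ water attenuation at 511 keV, per mm
dl = 1.0                    # sample spacing in mm

true_counts = 1000.0
# Photons along this line are attenuated by exp(-integral of mu dl).
measured = true_counts * np.exp(-mu.sum() * dl)

# The attenuation correction factor is the reciprocal of that loss.
acf = np.exp(mu.sum() * dl)
corrected = measured * acf
```

The disclosed method's contribution is producing the mu-map itself (via the trained network and registration); once the map exists, the correction applied during reconstruction follows this standard form.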
Image processing system and method thereof for generating projection images based on inward or outward multiple-lens camera
An image processing system is disclosed, comprising an M-lens camera, a compensation device and a correspondence generator, where M>=2. The M-lens camera generates M lens images. The compensation device generates a projection image according to a first vertex list and the M lens images. The correspondence generator is configured to: conduct calibration for vertices to define vertex mappings; horizontally and vertically scan each lens image to determine the texture coordinates of its image center; determine the texture coordinates of control points according to the vertex mappings, with P1 control points in each overlap region of the projection image; and determine two adjacent control points and a blending weight for each vertex in each lens image, according to the texture coordinates of the control points and of the image center in each lens image, to generate the first vertex list.
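The per-vertex blending step might be sketched as a linear interpolation between the two control points adjacent to a vertex along an overlap seam; the positions, coefficient values, and the 1-D parameterization are invented for illustration, not the system's actual scheme:

```python
import numpy as np

# Control points along an overlap seam, each carrying a blending value.
control_pts = np.array([0.0, 10.0, 25.0, 40.0])     # positions along the seam
control_coeffs = np.array([1.0, 0.8, 0.5, 0.2])     # per-control blending values

def vertex_blend(pos):
    """Find the two control points adjacent to a vertex position and
    interpolate a blending weight between them."""
    i = np.searchsorted(control_pts, pos) - 1
    i = int(np.clip(i, 0, len(control_pts) - 2))
    t = (pos - control_pts[i]) / (control_pts[i + 1] - control_pts[i])
    return (1 - t) * control_coeffs[i] + t * control_coeffs[i + 1]

w = vertex_blend(17.5)   # halfway between the control points at 10 and 25
```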
Method and a display device with pixel repartition optimization
A method for presenting an image on a display device (100) includes modifying the image by applying a geometric transformation so that an area of the image on the display device is presented to a viewer with a higher density of pixels than the rest of the image (S18).
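One way such a geometric transformation can concentrate pixels is a monotone coordinate warp whose derivative shrinks near a focus point, so many display pixels cover a small source region there; this particular warp is an illustrative assumption, not the claimed transformation:

```python
import numpy as np

def display_to_source(u, focus=0.5):
    """Monotone warp of a normalized display coordinate u in [0, 1].
    Its derivative vanishes at `focus`, so display pixels there span a
    tiny piece of the source image, i.e. the focus area is shown with a
    higher pixel density (a simple illustrative choice of warp)."""
    d = u - focus
    return focus + 2.0 * np.sign(d) * d ** 2

# Uniformly spaced display pixels; measure how much *source* image each
# display pixel covers near the focus versus at the border.
u = np.linspace(0.0, 1.0, 101)
src = display_to_source(u)
spans = np.diff(src)
near_focus = spans[50]      # display pixel right at the focus
at_edge = spans[0]          # display pixel at the border
```

A smaller span per display pixel means more pixels per unit of image content, which is exactly the density increase in the area of interest.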
System for performing convolutional image transformation estimation
A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
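The ground-truth homography in such a training triplet can be built from four corner correspondences with the direct linear transform; this constructs the label the estimated homography is compared against (the patch size and perturbation range are arbitrary assumptions, and the network itself is not sketched here):

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: the 3x3 homography mapping four source
    points to four destination points (the standard construction of the
    ground-truth label, not the network's estimate)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null vector of the constraint system
    return H / H[2, 2]

# A training pair: the four corners of an image patch, and a random
# perturbation of those corners defining the transformed subset.
rng = np.random.default_rng(1)
corners = np.array([[0, 0], [128, 0], [128, 128], [0, 128]], dtype=float)
perturbed = corners + rng.uniform(-16, 16, size=(4, 2))
H = homography_from_points(corners, perturbed)

# Check that H maps each original corner onto its perturbed position.
pts_h = np.hstack([corners, np.ones((4, 1))]) @ H.T
mapped = pts_h[:, :2] / pts_h[:, 2:]
err = np.abs(mapped - perturbed).max()
```

During training, the network would see the two patches and be penalized by how far its estimated homography (or corner displacements) falls from this H.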
Subsurface formation imaging
A method includes generating a set of sub-images of a subsurface formation based on measurement values acquired by a plurality of sensors, corresponding to one or more signals that have propagated through the subsurface formation, wherein each sub-image in the set corresponds to one of the plurality of sensors. The plurality of sensors are on a tool in a borehole, and each of the plurality of sensors is at a different spatial position with respect to the others. The method also includes generating a combined image by aligning the set of sub-images based on the measurement values, wherein the aligning of the sub-images is independent of acceleration of the tool during tool motion.
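Aligning sub-images from the measurement values alone, with no accelerometer input, can be sketched with a circular cross-correlation peak; the 1-D depth signals and sensor offsets are synthetic assumptions:

```python
import numpy as np

def best_shift(reference, signal):
    """Integer circular shift that best aligns `signal` to `reference`,
    found as the peak of the FFT-based cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(signal))).real
    return int(np.argmax(corr))

# Synthetic sub-images: the same formation response recorded by three
# sensors at different positions along the tool, hence shifted copies.
depth = np.zeros(100)
depth[40:45] = 1.0                      # a thin reflective layer
offsets = [0, 7, 13]                    # sensor spacings (illustrative)
sub_images = [np.roll(depth, k) for k in offsets]

# Align each sub-image to the first using only the measurement values,
# then stack into a combined image -- no tool-motion data involved.
aligned = [np.roll(s, best_shift(sub_images[0], s)) for s in sub_images]
combined = np.mean(aligned, axis=0)
```

Because the shifts are recovered from the data itself, irregular tool acceleration between sensor positions does not disturb the combined image.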
STRUCTURAL MASKING FOR PROGRESSIVE HEALTH MONITORING
A method of structural masking for progressive health monitoring of a structural component includes receiving a current image of the structural component. A processor aligns the current image and a reference image of the structural component. The processor performs a structure estimation on the current image and the reference image to produce a current structure estimate image and a reference structure estimate image. The processor generates a structural mask from the reference structure estimate image. The processor masks the current structure estimate image with the structural mask to identify one or more health monitoring analysis regions including a potential defect or damaged area appearing in the masked current structure estimate image that does not appear in the reference structure estimate image.
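A toy version of the masking pipeline, assuming a gradient-threshold structure estimator (a stand-in for the unspecified estimator) and images that are already aligned:

```python
import numpy as np

def structure_estimate(image, threshold=0.1):
    """Crude structure estimation: gradient-magnitude thresholding
    (an illustrative stand-in for the method's estimator)."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy) > threshold

# Reference image of a component, and a current image with new damage.
reference = np.zeros((20, 20))
reference[5:15, 5:15] = 1.0             # the component's known structure
current = reference.copy()
current[10, 7:13] = 0.0                 # damage: a crack-like gap

ref_struct = structure_estimate(reference)
cur_struct = structure_estimate(current)

# The structural mask removes everything that is structure in the
# reference; what remains in the current estimate is the candidate
# defect region for health-monitoring analysis.
structural_mask = ~ref_struct
analysis_region = cur_struct & structural_mask
```

The mask ensures the component's own edges, present in both images, never show up as defects; only structure that is new relative to the reference survives into the analysis region.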