Patent classifications
G06T5/50
Enhanced Illumination-Invariant Imaging
Devices, systems, and methods for generating illumination-invariant images are disclosed. A method may include activating, by a device, a camera to capture first image data; while the camera is capturing the first image data, activating a first light source; receiving the first image data, the first image data having pixels with first color values; identifying first light generated by the first light source while the camera is capturing the first image data; identifying, based on the first image data, second light generated by a second light source; generating, based on the first light and the second light, second image data that are illumination-invariant; and presenting the second image data.
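One way the described separation could be sketched, under assumptions not stated in the abstract: if the first (device-controlled) light contribution can be isolated by differencing a controlled-light frame against an ambient-only frame, normalizing the isolated component by its per-pixel luminance yields values that depend on surface chromaticity rather than illumination strength. The function name and the two-frame setup below are illustrative, not the patented method.

```python
import numpy as np

def illumination_invariant(flash_frame, ambient_frame, eps=1e-6):
    """Illustrative sketch: isolate the known (first) light contribution by
    subtracting an ambient-only frame, then normalize each pixel by its
    luminance so the result is invariant to illumination intensity."""
    flash_only = np.clip(flash_frame.astype(np.float64)
                         - ambient_frame.astype(np.float64), 0.0, None)
    # Per-pixel luminance (sum over color channels); eps avoids divide-by-zero.
    luminance = flash_only.sum(axis=-1, keepdims=True)
    return flash_only / (luminance + eps)
```

Because each pixel is divided by its own luminance, scaling either light source uniformly leaves the output unchanged, which is the sense in which the second image data are illumination-invariant here.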
AUGMENTED REALITY SYSTEM AND METHODS FOR STEREOSCOPIC PROJECTION AND CROSS-REFERENCING OF LIVE X-RAY FLUOROSCOPIC AND COMPUTED TOMOGRAPHIC C-ARM IMAGING DURING SURGERY
A method for performing a procedure on a patient includes acquiring a three-dimensional image of a location of interest on the patient and a two-dimensional image of the location of interest. A computer system can relate the three-dimensional image to the two-dimensional image to form a holographic image dataset. The computer system can register the holographic image dataset with the patient. An augmented reality system can render a hologram based on the holographic image dataset registered with the patient. The hologram can include a projection of the three-dimensional image and a projection of the two-dimensional image. A practitioner can view the hologram with the augmented reality system and perform the procedure on the patient. The practitioner can employ the augmented reality system to visualize a point on the projection of the three-dimensional image and a corresponding point on the projection of the two-dimensional image during the procedure.
TEXT BORDER TOOL AND ENHANCED CORNER OPTIONS FOR BACKGROUND SHADING
Disclosed herein are various techniques for more precisely and reliably (a) positioning top and bottom border edges relative to textual content, (b) positioning left and right border edges relative to textual content, (c) positioning mixed edge borders relative to textual content, (d) positioning boundaries of a region of background shading that fall within borders of textual content, (e) positioning borders relative to textual content that spans columns, (f) positioning respective borders relative to discrete portions of textual content, (g) positioning collective borders relative to discrete, abutting portions of textual content, (h) applying stylized corner boundaries to a region of background shading, and (i) applying stylized corners to borders.
IMAGE SENSOR APPARATUS AND METHOD FOR OBTAINING LOW-NOISE, HIGH-SPEED CAPTURES OF A PHOTOGRAPHIC SCENE
A system, method, and computer program product are provided for obtaining low-noise, high-speed captures of a photographic scene. In use, a first cell of a first pixel is in communication with a first node for storing a first sample. Further, a second cell of a second pixel is in communication with a second node for storing a second sample. Still further, the first cell and the second cell are communicatively coupled.
MEDICAL IMAGE GENERATION APPARATUS, MEDICAL IMAGE GENERATION METHOD, AND MEDICAL IMAGE GENERATION PROGRAM
An object is to generate a medical image with high visibility in fluorescence observation. A medical image generation apparatus (100) according to the present application includes an acquisition unit (131), a calculation unit (132), and a generation unit (134). The acquisition unit (131) acquires a first medical image captured with fluorescence of a predetermined wavelength and a second medical image captured with fluorescence of a wavelength different from the predetermined wavelength. The calculation unit (132) calculates, for each of the first medical image and the second medical image acquired by the acquisition unit (131), a degree of scattering indicating a degree of blurring of fluorescence from a living body. The generation unit (134) generates an output image on the basis of at least one of the degrees of scattering calculated by the calculation unit (132).
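The abstract does not define how the degree of scattering is computed or used; one plausible reading is that scattering shows up as blur, so a no-reference sharpness measure (here, Laplacian energy) can stand in for it, and the two wavelength images can be fused with weights favoring the less scattered capture. Everything in this sketch (the Laplacian proxy, the inverse weighting) is an assumption for illustration only.

```python
import numpy as np

def degree_of_scattering(img):
    # Proxy: low high-frequency (Laplacian) energy suggests heavy blur,
    # i.e. a high degree of scattering. np.roll gives a periodic Laplacian.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return 1.0 / (np.var(lap) + 1e-6)

def fuse(img_a, img_b):
    # Weight each wavelength's image inversely to its scattering degree,
    # so the sharper (less scattered) capture dominates the output image.
    s_a, s_b = degree_of_scattering(img_a), degree_of_scattering(img_b)
    w_a = (1.0 / s_a) / (1.0 / s_a + 1.0 / s_b)
    return w_a * img_a + (1.0 - w_a) * img_b
```

With this weighting, a heavily scattered (nearly featureless) second image contributes almost nothing, and the fused output stays close to the sharper capture.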
Iterative synthesis of views from data of a multi-view video
Synthesis of an image of a view from data of a multi-view video. The synthesis includes an image processing phase as follows: generating image synthesis data from texture data of at least one image of a view of the multi-view video; calculating an image of a synthesised view from the generated synthesis data and at least one image of a view of the multi-view video; analysing the image of the synthesised view relative to a synthesis performance criterion; if the criterion is met, delivering the image of the synthesised view; and if not, iterating the processing phase. The calculation of an image of a synthesised view at a current iteration includes modifying, based on synthesis data generated in the current iteration, an image of the synthesised view calculated during a processing phase preceding the current iteration.
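The control flow of the processing phase above (generate synthesis data, refine the previous estimate, test a performance criterion, iterate) can be sketched as follows. For illustration only, the criterion is mean absolute error against a known target view; a real synthesizer would use a no-reference quality criterion, and the damping factor and function name are invented for this sketch.

```python
import numpy as np

def synthesize_view(reference, target, max_iters=10, tol=0.5):
    """Illustrative iterative loop: each pass generates synthesis data
    (here, a residual), modifies the view estimate from the preceding
    pass, and stops once the performance criterion is met."""
    estimate = reference.astype(np.float64).copy()
    for _ in range(max_iters):
        synthesis_data = target - estimate           # data from this pass
        estimate = estimate + 0.5 * synthesis_data   # refine prior estimate
        if np.mean(np.abs(target - estimate)) < tol: # criterion met?
            return estimate
    return estimate
```

The key structural point matches the abstract: iteration k does not start from scratch, it modifies the synthesized image produced by iteration k-1 using the synthesis data generated at iteration k.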