Patent classifications
G06T3/14
MAP GENERATION SYSTEM AND METHOD FOR GENERATING AN ACCURATE BUILDING SHADOW
A map generation system, method and computer program product are provided to generate a shadow layer from a raster image that accurately represents the shadows of one or more buildings. In the context of a map generation system, the map generation system extracts pixel values from a raster image of one or more buildings and processes the pixel values so as to retain pixel values within a predefined range while eliminating other pixel values. The pixel values that are retained represent a shadow. The map generation system also modifies the representation of the shadow by modifying the pixel values of respective pixels so that the shadow has a shape corresponding to the shape of the one or more buildings. The map generation system causes presentation or storage of a building layer representing the one or more buildings and a shadow layer representing the shadow.
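The range-based pixel retention step can be sketched as follows; the threshold values and the single-channel raster are illustrative assumptions, not values from the patent:

```python
import numpy as np

def extract_shadow_layer(image, low, high):
    """Keep pixels whose values fall in [low, high] (assumed to be the
    shadow intensity range); zero out everything else. `low` and `high`
    are illustrative parameters."""
    return np.where((image >= low) & (image <= high), image, 0)

# Toy single-channel raster: bright roof (200), shadow (40), road (120)
img = np.array([[200, 200, 40],
                [120,  40, 40],
                [120, 120, 40]], dtype=np.uint8)
shadow = extract_shadow_layer(img, 20, 60)  # only the 40-valued pixels survive
```

The retained pixels form the raw shadow layer that the system then reshapes to match the building footprint.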
Alignment system for liver surgery
A method for automatic registration of landmarks in 3D and 2D images of an organ comprises: using a first set of coordinates of identified first, second and third landmarks of the organ, derived from a 3D surface representation of the organ, and a second set of coordinates of the landmarks, derived from 2D laparoscopic images of the organ, to register the three landmarks as identified in the 3D surface representation with the three landmarks as identified in the 2D images. The third landmark comprises a plurality of points defining a path between two points characterizing the first and second landmarks. The identification of the first, second and third landmarks in the 3D representation and the 2D images, the derivation of the first and second sets of coordinates, and the registration, based on the derived first and second sets of coordinates, are performed automatically.
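Once both landmark coordinate sets are expressed in a common dimensionality, the registration step reduces to a least-squares rigid alignment of corresponding points. A generic Kabsch sketch (the 3D-to-2D laparoscopic camera projection the patent requires is omitted here, so this is only the core alignment):

```python
import numpy as np

def register_landmarks(src, dst):
    """Least-squares rigid registration (Kabsch algorithm) of corresponding
    landmarks src -> dst, both (N, D) arrays. Returns rotation R and
    translation t such that dst ~= src @ R.T + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t
```

With three well-spread landmarks (as in the abstract), this alignment is fully determined in 2D.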
Systems and methods of continuous registration for image-guided surgery
Methods and systems of registering a model of one or more anatomic passageways of a patient to a patient space are provided herein. An exemplary method may include accessing a set of model points of the model of the passageways, the model points being associated with a model space, collecting measured points along a length of a catheter inserted into the passageways of the patient, the measured points determined by a shape of the catheter, and assigning the measured points to a plurality of subsets. The exemplary method may further include registering each of the subsets with the model points to produce a plurality of registration candidates, and comparing the candidates to identify an optimal subset associated with an optimal registration of the plurality of candidates, the optimal registration translating the set of model points and at least one of the sets of measured points into a common space.
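The subset-candidate selection can be illustrated with a simplified sketch. Translation-only alignment stands in for the full registration, and subsets are formed by splitting the measured points in order along the catheter; both are assumptions for illustration, not the patent's method:

```python
import numpy as np

def best_subset_registration(model_pts, measured_pts, n_subsets):
    """Split measured points into subsets, estimate a translation-only
    registration candidate for each, and keep the candidate with the
    smallest residual against the model points."""
    subsets = np.array_split(measured_pts, n_subsets)
    best = None
    for sub in subsets:
        t = model_pts.mean(axis=0) - sub.mean(axis=0)  # candidate transform
        moved = sub + t
        # Residual: mean distance from each moved point to its nearest model point.
        d = np.linalg.norm(moved[:, None, :] - model_pts[None, :, :], axis=2)
        resid = d.min(axis=1).mean()
        if best is None or resid < best[0]:
            best = (resid, t, sub)
    return best  # (residual, translation, winning subset)
```

A subset distorted by patient motion or catheter buckling yields a large residual and loses to a cleaner subset, which is the intuition behind comparing the candidates.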
Microscope system, projection unit, and image projection method
A microscope system comprising an eyepiece, an objective that guides light from a sample to the eyepiece, a tube lens that is disposed on a light path between the eyepiece and the objective and forms an optical image of the sample on the basis of light therefrom, a projection apparatus that projects a projection image including a first assistance image onto an image plane on which the optical image is formed, and a processor that performs processes. The processes include generating projection image data representing the projection image. The first assistance image is an image of the sample in which a region wider than an actual field of view corresponding to the optical image is seen. The first assistance image is projected onto a portion of the image plane that is close to an outer edge of the optical image.
Object velocity from images
Techniques are discussed for determining a velocity of an object in an environment from a sequence of images (e.g., two or more). A first image of the sequence is transformed to align the object with an image center. Additional images in the sequence are transformed by the same amount to form a sequence of transformed images. This sequence is input into a machine-learned model trained to output a scaled velocity of the object (a relative object velocity (ROV)) according to the transformed coordinate system. The ROV is then converted to the camera coordinate system by applying an inverse of the transformation. Using a depth associated with the object and the ROV of the object in the camera coordinate frame, an actual velocity of the object in the environment is determined relative to the camera.
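The final conversion steps can be sketched as follows; the rotation-based centering transform, the axis conventions (x right, y down, z forward), and a depth-normalized ROV are all assumptions for illustration:

```python
import numpy as np

def rov_to_camera_frame(rov, yaw, pitch):
    """Undo the image-centering transform. The object was rotated by
    (yaw, pitch) to sit at the image center, so the model's relative
    object velocity (ROV) lives in that rotated frame; applying the
    inverse rotation returns it to the camera frame."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about y
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    R = Rx @ Ry                  # centering rotation
    return R.T @ rov             # inverse of a rotation is its transpose

def actual_velocity(rov_cam, depth):
    """Scale the depth-normalized ROV by the object's depth to recover
    a metric velocity relative to the camera."""
    return rov_cam * depth
```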
Photographing method for terminal and terminal
A photographing method for a terminal that comprises a monochrome camera and a color camera, the method including simultaneously photographing a same to-be-photographed scene using the monochrome camera and the color camera and separately obtaining K frames of images, where the monochrome camera uses a full-size operation mode, where the color camera uses a binning operation mode, and where K ≥ 1, obtaining a first image corresponding to the monochrome camera and a second image corresponding to the color camera, obtaining high-frequency information according to the first image, obtaining low-frequency information according to the second image, and fusing the first image and the second image according to the high-frequency information and the low-frequency information, and generating a composite image of the to-be-photographed scene.
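The frequency-split fusion can be illustrated with a single-channel toy sketch; the box-blur low-pass filter and the direct addition of the two bands are simplifying assumptions (a real pipeline would register the frames and fuse luma/chroma separately):

```python
import numpy as np

def box_blur(img, k=3):
    """Separable k x k box blur used as a stand-in low-pass filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(mono, color_luma):
    """High-frequency detail from the full-size monochrome frame plus
    low-frequency content from the binned color frame."""
    high = mono - box_blur(mono)    # detail preserved by the mono sensor
    low = box_blur(color_luma)      # smooth content from the color sensor
    return low + high
```

The monochrome sensor's full-size mode preserves fine detail (the high band), while binning on the color sensor trades resolution for light sensitivity (the low band), which is why the two bands come from different cameras.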
Spatially dynamic fusion of images of different qualities
A system and method of processing an image is provided in which an input image output by an imaging sensor is received. For each location of a plurality of locations of a reference point of a moving window in the input image, a first image quality metric is determined as a function of quality of first image content included in a region covered by the moving window, wherein the window is sized to include at least a significant portion of a target of interest. An enhancement process is applied to the input image and generates a resulting enhanced image that is spatially registered with the input image. For each location of the plurality of locations of the reference point of the moving window in the enhanced image, a second image quality metric is determined as a function of quality of second image content included in the region covered by the moving window. For each location of the plurality of locations, first fused content is determined by a first fusing of the first image content with the second image content, the first fusing being a function of the first image content, the first image quality metric, the second image content, and the second image quality metric associated with the location. A fused image is generated that includes the first fused content determined for each of the locations of the plurality of locations.
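The per-window fusion above can be sketched as follows; local variance as the quality metric and a simple quality-weighted blend are assumptions, since the abstract leaves both the metric and the fusing function open:

```python
import numpy as np

def local_variance(img, reference_points, half):
    """Quality metric: variance of the window centered on each reference
    point (a crude sharpness proxy)."""
    return np.array([img[max(0, r - half):r + half + 1,
                         max(0, c - half):c + half + 1].var()
                     for r, c in reference_points])

def fuse_windows(img_a, img_b, reference_points, half):
    """Blend two spatially registered images window by window, weighting
    each by its quality metric so better content dominates locally."""
    qa = local_variance(img_a, reference_points, half)
    qb = local_variance(img_b, reference_points, half)
    out = img_a.astype(float).copy()
    for (r, c), wa, wb in zip(reference_points, qa, qb):
        w = wa / (wa + wb + 1e-12)          # weight toward the better image
        sl = (slice(max(0, r - half), r + half + 1),
              slice(max(0, c - half), c + half + 1))
        out[sl] = w * img_a[sl] + (1 - w) * img_b[sl]
    return out
```

Because the weights vary with the window location, the fusion is spatially dynamic: regions where the enhancement helped take content from the enhanced image, and regions where it hurt keep the input image.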
SYSTEMS AND METHODS FOR ENHANCED RESOLUTION IMAGING BASED ON MULTIPLE CROSS-SECTIONAL IMAGES ACQUIRED AT DIFFERENT ANGLES
Systems and methods for imaging based on multiple cross-sectional images acquired at different angles are disclosed. According to an aspect, multiple cross-sectional images of an object are acquired at different angles. The method also includes registering the acquired cross-sectional images. Further, the method includes reconstructing an enhanced resolution image of the object based on the registered images. As a result of registering the images, a distortion map is generated that is coregistered with the enhanced resolution image. The method also includes displaying an image of the object based on the enhanced resolution image and the distortion map.
COLONY CONTRAST GATHERING
An imaging system and method for microbial growth detection, counting or identification. An image that provides good contrast for one type of colony may not be optimal for another type. The system and method provide contrast from all available material through space (spatial differences), time (differences appearing over time for a given capture condition) and color space transformation, using image input information over time to assess whether microbial growth has occurred for a given sample.
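The temporal-contrast component can be sketched as follows; the max-minus-min contrast measure and the threshold test are illustrative assumptions, not the patented algorithm, and the spatial and color-space contrast channels would be combined with this in the full system:

```python
import numpy as np

def growth_contrast(frames):
    """Temporal contrast: per-pixel intensity change across a time series
    of plate images captured under one condition. Growing colonies change
    over time; static medium and artifacts cancel out."""
    stack = np.stack(frames).astype(float)
    return stack.max(axis=0) - stack.min(axis=0)

def has_growth(frames, threshold):
    """Flag a sample as growing when any pixel's temporal contrast
    exceeds a threshold (the threshold value is an assumption)."""
    return bool((growth_contrast(frames) > threshold).any())
```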
Mapping of spherical image data into rectangular faces for transport and decoding across networks
A system captures a first hemispherical image and a second hemispherical image, each hemispherical image including an overlap portion, the overlap portions capturing a same field of view, the two hemispherical images collectively comprising a spherical FOV and separated along a longitudinal plane. The system maps a modified first hemispherical image to a first portion of the 2D projection of a cubic image, the modified first hemispherical image including a non-overlap portion of the first hemispherical image, and maps a modified second hemispherical image to a second portion of the 2D projection of the cubic image, the modified second hemispherical image also including a non-overlap portion. The system maps the overlap portions of the first hemispherical image and the second hemispherical image to the 2D projection of the cubic image, and encodes the 2D projection of the cubic image to generate an encoded image representative of the spherical FOV.