Patent classifications
G06T2207/20152
Method and apparatus for the morphometric analysis of cells of a corneal endothelium
The present invention relates to a method and apparatus for the morphometric analysis of endothelial cells, based on an image acquired by a camera connected to a biomicroscope; the image is digitally reprocessed and subsequently analyzed.
REAL-TIME WHOLE SLIDE PATHOLOGY IMAGE CELL COUNTING
Techniques are provided for determining a cell count within a whole slide pathology image. The image is segmented using a global threshold value to define a tissue area. A plurality of patches comprising the tissue area are selected. Stain intensity vectors are determined within the plurality of patches to generate a stain intensity image. The stain intensity image is iteratively segmented to generate a cell mask using a local threshold value that is gradually reduced after each iteration. A chamfer distance transform is applied to the cell mask to generate a distance map. Cell seeds are determined on the distance map. Cell segments are determined using a watershed transformation, and a whole cell count is calculated for the plurality of patches based on the cell segments. A client device may be configured for real-time cell counting based on the whole cell count.
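The abstract names a chamfer distance transform as the step that turns the cell mask into a distance map. The following is a generic two-pass 3-4 chamfer transform in plain Python, not the patented pipeline: for each foreground pixel it approximates the distance to the nearest background pixel, producing the kind of distance map on which cell seeds (local maxima) would then be found.

```python
def chamfer_distance(mask):
    # mask: 2D list of 0/1; returns, for each foreground pixel (1), the
    # approximate 3-4 chamfer distance to the nearest background pixel (0).
    INF = 10**9
    h, w = len(mask), len(mask[0])
    d = [[0 if mask[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # forward pass: propagate distances from the upper-left neighbours
    for y in range(h):
        for x in range(w):
            if d[y][x] == 0:
                continue
            best = d[y][x]
            for dy, dx, wgt in ((0, -1, 3), (-1, 0, 3), (-1, -1, 4), (-1, 1, 4)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    best = min(best, d[ny][nx] + wgt)
            d[y][x] = best
    # backward pass: propagate distances from the lower-right neighbours
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if d[y][x] == 0:
                continue
            best = d[y][x]
            for dy, dx, wgt in ((0, 1, 3), (1, 0, 3), (1, 1, 4), (1, -1, 4)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    best = min(best, d[ny][nx] + wgt)
            d[y][x] = best
    return d
```

The 3/4 weights approximate Euclidean distance (orthogonal steps cost 3, diagonal steps 4, roughly a 1:1.33 ratio) while needing only two integer raster scans.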
Polyline contributor in civil engineering
A computer-implemented method for civil engineering including obtaining a mesh representing a terrain and a polyline on the mesh, the method further includes computing a contributor of the polyline. The computing of the contributor includes modifying the mesh by determining, based on the polyline, a trench below the polyline. The computing of the contributor further includes computing a watershed segmentation of the terrain based on the modified mesh. The computing of the contributor further includes, based on the computed watershed segmentation, identifying, on the modified mesh, a basin comprising the trench. The contributor corresponds to the identified basin.
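The steps of this abstract (dig a trench under the polyline, run a watershed-style drainage analysis, keep the basin containing the trench) can be pictured with a small sketch. This is not the claimed method: the trench depth is an assumed constant the patent does not specify, and simple steepest-descent flow routing stands in for a full watershed segmentation of the terrain mesh.

```python
def contributor(heights, polyline):
    # heights: 2D list of terrain elevations; polyline: set of (y, x) cells.
    TRENCH_DEPTH = 100.0  # assumed value; the abstract leaves the depth unspecified
    # 1. modify the terrain: lower the cells under the polyline to form a trench
    h = [row[:] for row in heights]
    for (y, x) in polyline:
        h[y][x] -= TRENCH_DEPTH
    H, W = len(h), len(h[0])

    # 2. route each cell downhill along the steepest descent until it reaches
    #    a local minimum (a crude stand-in for watershed basin assignment)
    def downhill(y, x):
        while True:
            best = (y, x)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and h[ny][nx] < h[best[0]][best[1]]:
                        best = (ny, nx)
            if best == (y, x):
                return (y, x)
            y, x = best

    # 3. the contributor is the basin draining into the trench
    return {(y, x) for y in range(H) for x in range(W)
            if downhill(y, x) in polyline}
```

On a plane sloping toward a polyline laid along its low edge, every cell drains into the trench, so the contributor is the whole grid.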
IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, AND IMAGE PROCESSING SYSTEM
In order to perform quantitative analysis on an object in an image, it is important to accurately identify the object; however, when plural objects are in contact with each other, a target portion may not be accurately identified. An image is segmented into a foreground region and a background region, the foreground region being a region in which an object for which quantitative information is to be calculated is shown, and the background region being a region other than the foreground region. With respect to a first object and a second object in contact with each other in the image, a contact point between the first object and the second object is detected based on a region segmentation result output by a segmentation unit. The first object and the second object can be separated by connecting two boundary reference pixels: a first boundary reference pixel, which is the pixel in the background region closest to the contact point, and a second boundary reference pixel, which is a pixel in the background region in the direction opposite to the first boundary reference pixel across the contact point.
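The separation idea in this abstract can be sketched in a few lines. This is an illustrative reading, not the claimed implementation: given a binary mask and a detected contact point, it finds the nearest background pixel, the nearest background pixel on the opposite side, and clears a straight line of pixels between them to split the touching objects.

```python
def separate_at_contact(mask, contact):
    # mask: 2D list, 1 = foreground, 0 = background (mutated in place)
    # contact: (y, x) contact point between two touching foreground objects
    H, W = len(mask), len(mask[0])
    cy, cx = contact
    bg = [(y, x) for y in range(H) for x in range(W) if mask[y][x] == 0]
    # first boundary reference pixel: background pixel closest to the contact
    p1 = min(bg, key=lambda p: (p[0] - cy) ** 2 + (p[1] - cx) ** 2)
    # second reference pixel: closest background pixel lying on the opposite
    # side of the contact point (positive projection onto the p1->contact vector)
    vy, vx = cy - p1[0], cx - p1[1]
    opposite = [p for p in bg if (p[0] - cy) * vy + (p[1] - cx) * vx > 0]
    p2 = min(opposite, key=lambda p: (p[0] - cy) ** 2 + (p[1] - cx) ** 2)
    # separate the objects: clear pixels along the straight line p1 -> p2
    steps = max(abs(p2[0] - p1[0]), abs(p2[1] - p1[1]), 1)
    for i in range(steps + 1):
        y = round(p1[0] + (p2[0] - p1[0]) * i / steps)
        x = round(p1[1] + (p2[1] - p1[1]) * i / steps)
        mask[y][x] = 0
    return mask
```

Cutting along the connecting line leaves two disjoint foreground components, which a subsequent labeling step can then quantify separately.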
DEEP LEARNING BASED INSTANCE SEGMENTATION VIA MULTIPLE REGRESSION LAYERS
Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation and/or implementing instance segmentation based on partial annotations. In various embodiments, a computing system might receive first and second images, the first image comprising a field of view of a biological sample, while the second image comprises labeling of objects of interest in the biological sample. The computing system might encode, using an encoder, the second image to generate third and fourth encoded images (different from each other) that comprise proximity scores or maps. The computing system might train an AI system to predict objects of interest based at least in part on the third and fourth encoded images. The computing system might generate (using regression) and decode (using a decoder) two or more images based on a new image of a biological sample to predict labeling of objects in the new image.
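The abstract does not define its proximity maps. A common encoding in this family of methods (an assumption here, not the patent's disclosure) turns each annotated object pixel into an exponentially decaying proximity score, giving a smooth target an AI system can regress against even when annotations are partial:

```python
import math
from collections import deque

def proximity_map(labels, sigma=2.0):
    # labels: 2D list, nonzero = annotated object pixel (partial annotations
    # allowed, but at least one pixel must be labeled).
    # Returns exp(-d / sigma), d = 4-connected BFS distance to nearest label.
    H, W = len(labels), len(labels[0])
    dist = [[None] * W for _ in range(H)]
    q = deque()
    for y in range(H):
        for x in range(W):
            if labels[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    # multi-source BFS from all annotated pixels
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return [[math.exp(-dist[y][x] / sigma) for x in range(W)] for y in range(H)]
```

The score is 1.0 on an annotation and decays with distance, so two such maps built with different `sigma` values would already yield the "different from each other" encoded images the abstract mentions.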
SYSTEM AND METHOD FOR GENERATING A MORPHOLOGICAL ATLAS OF AN EMBRYO
A method for generating a morphological atlas of an embryo including the steps of receiving a plurality of 3D images of the embryo representative of the morphological process of embryonic cells from a first predetermined cell population to a second predetermined cell population; processing the plurality of 3D images to derive nucleus lineage information associated with each nucleus of the embryonic cells during the morphological process; performing a membrane segmentation procedure to segment the 3D images into membrane segments; and combining the nucleus lineage information and the membrane segments to generate the morphological atlas of the embryo.
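The final combining step can be pictured as merging two per-cell tables, one from nucleus lineage tracking and one from membrane segmentation. The function and key names below are hypothetical, chosen only to illustrate the join:

```python
def build_atlas(lineage, membranes):
    # lineage: {cell_id: parent_id} derived from nucleus tracking
    # membranes: {cell_id: membrane_segment} from membrane segmentation
    # The atlas pairs each cell's lineage entry with its membrane segment.
    return {cid: {"parent": lineage.get(cid), "membrane": membranes.get(cid)}
            for cid in set(lineage) | set(membranes)}
```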
INTERSECTION REGION DETECTION AND CLASSIFICATION FOR AUTONOMOUS MACHINE APPLICATIONS
In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
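Decoding a signed distance function into instance masks can be sketched generically (this assumes the common convention of negative values inside a region, which the abstract does not state): threshold the SDF at zero, then label connected components so each contention area gets its own mask.

```python
def decode_sdf(sdf):
    # sdf: 2D list of signed distances; negative = inside a region (assumed).
    # Returns (labels, count): one integer label per instance, 0 = outside.
    H, W = len(sdf), len(sdf[0])
    inside = [[sdf[y][x] < 0 for x in range(W)] for y in range(H)]
    labels = [[0] * W for _ in range(H)]
    count = 0
    for y in range(H):
        for x in range(W):
            if inside[y][x] and labels[y][x] == 0:
                count += 1
                # flood-fill this 4-connected component with a fresh label
                stack = [(y, x)]
                labels[y][x] = count
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < H and 0 <= nx < W
                                and inside[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            stack.append((ny, nx))
    return labels, count
```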
SYSTEM AND METHOD FOR DETERMINING A BREAST REGION IN A MEDICAL IMAGE
Systems and methods for determining a breast region in a medical image are provided. The methods may include obtaining a first image relating to the breast region, determining a first region including a first plurality of pixels in the breast region, determining a second region including a second plurality of pixels relating to an edge of the breast region, and determining the breast region by combining the first region and the second region. The second plurality of pixels may include at least a portion of the first plurality of pixels.
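One plausible reading of "combining the first region and the second region" is a pixel-wise union of the two masks; the abstract does not say so explicitly, so the sketch below is an assumption:

```python
def combine_regions(first, second):
    # breast region = pixel-wise union (OR) of the interior-region mask and
    # the edge-region mask; overlap between the two masks is harmless.
    return [[1 if a or b else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(first, second)]
```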
System and method for component positioning by registering a 3D patient model to an intra-operative image
Disclosed herein are a system and method that may help place or position a component, such as an acetabular cup or a femoral component, during surgery. An example system may iteratively register a plurality of two-dimensional projections from a three-dimensional model of a portion of a patient, the three-dimensional model being generated from a data set of imaging information obtained at a neutral position. An example system may further score each two-dimensional projection against an intra-operative image by calculating a spatial difference between corresponding points. A two-dimensional projection having a minimum score, reflecting the smallest distance between the corresponding points, may be identified. Using that projection, an adjustment score may be calculated, reflecting the difference between the values representing the orientation of the three-dimensional model at the intra-operative projection position and the values representing its orientation at the neutral position.
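The scoring step described above can be sketched as summing point-to-point distances and selecting the projection with the minimum score. This is a generic illustration of the scoring idea, not the patented registration method, and it assumes the corresponding points are already paired:

```python
import math

def score_projection(projection_pts, intraop_pts):
    # spatial difference: sum of Euclidean distances between corresponding
    # 2D points of a projection and the intra-operative image
    return sum(math.dist(p, q) for p, q in zip(projection_pts, intraop_pts))

def best_projection(projections, intraop_pts):
    # index of the projection with the minimum (best) score
    return min(range(len(projections)),
               key=lambda i: score_projection(projections[i], intraop_pts))
```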
CLASSIFIED TRUNCATION COMPENSATION
PET/MR images are compensated with simplified adaptive algorithms for truncated parts of the body. The compensation adapts to the specific location of truncation of the body or organ in the MR image, and to attributes of the truncation in the truncated body part. Anatomical structures in a PET image that do not require any compensation are masked using an MR image with a smaller field of view. The organs that are not masked are then classified by type of anatomical structure, orientation of the anatomical structure, and type of truncation. Structure-specific algorithms are used to compensate for a truncated anatomical structure. The compensation is validated for correctness, and the ROI is filled in where there is missing voxel data. Attenuation maps are generated from the compensated ROI.
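The "fill in the ROI where there is missing voxel data" step can be illustrated with a simple neighbourhood-mean fill on one 2D slice. This is a deliberately naive stand-in for the structure-specific compensation algorithms the abstract refers to:

```python
def fill_missing(roi):
    # roi: 2D slice of voxel values; None marks missing voxel data.
    # Fill each missing voxel with the mean of its known 4-neighbours
    # (0.0 if no neighbour is known).
    H, W = len(roi), len(roi[0])
    out = [row[:] for row in roi]
    for y in range(H):
        for x in range(W):
            if out[y][x] is None:
                vals = [roi[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x),
                                       (y, x - 1), (y, x + 1))
                        if 0 <= ny < H and 0 <= nx < W
                        and roi[ny][nx] is not None]
                out[y][x] = sum(vals) / len(vals) if vals else 0.0
    return out
```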