Patent classifications
G06T2207/20161
BOREHOLE CORING RECONSTRUCTIONS USING BOREHOLE SCANS
An apparatus includes a processor and a machine-readable medium having program code executable by the processor to cause the apparatus to obtain a scan result based on a core sample, generate a set of contours of the scan result, and generate a set of object associations, wherein the set of object associations includes a first object association that is associated with a first contour from the set of contours and a second contour from the set of contours. The program code also generates a set of contour connections, wherein the set of contour connections includes a first contour connection that is based on the first object association. The program code also generates a reconstructed core having a three-dimensional space based on the set of contour connections, wherein the three-dimensional space includes a volume greater than the core sample.
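The abstract's pipeline (contours per slice, associations between adjacent slices, connections, then a reconstructed volume larger than the sample) can be sketched in Python with NumPy. The centroid-distance association and the union-based in-fill planes are illustrative assumptions; the patent does not specify the actual association or reconstruction method.

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary contour mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def connect(slices, max_dist=5.0):
    """Associate each slice's contour with the next slice's contour when
    their centroids are close; return (slice, next_slice) connections."""
    links = []
    for k in range(len(slices) - 1):
        if np.linalg.norm(centroid(slices[k]) - centroid(slices[k + 1])) <= max_dist:
            links.append((k, k + 1))
    return links

def reconstruct(slices, spacing=2):
    """Stack contour masks into a 3D volume, inserting `spacing - 1`
    crude in-fill planes between scanned slices so the reconstructed
    volume spans more planes than were actually sampled."""
    vol = []
    for a, b in zip(slices[:-1], slices[1:]):
        vol.append(a)
        for _ in range(spacing - 1):
            vol.append(np.logical_or(a, b).astype(a.dtype))
    vol.append(slices[-1])
    return np.stack(vol)
```

With two 4x4 slices and `spacing=2`, the reconstructed volume has three planes, illustrating how the output exceeds the sampled core.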
DEVICE AND METHOD FOR POST-PROCESSING OF COMPUTED TOMOGRAPHY
A device and a method for post-processing of computed tomography (CT), which are adapted to improve an identification image of a focal nodular hyperplasia (FNH) of a liver, are provided. The method includes: obtaining the identification image, which includes a liver contour and a non-liver contour, and a Hounsfield unit (HU) value of each pixel corresponding to the identification image, wherein the liver contour includes an FNH candidate contour; calculating an average HU value of the liver contour; adjusting the HU value of the non-liver contour to the average HU value of the liver contour with respect to the identification image to generate a processed identification image; and updating the FNH candidate contour according to a morphological algorithm based on the processed identification image to generate an updated FNH candidate contour.
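The two core steps above — flattening the non-liver background to the liver's average HU and then morphologically updating the candidate contour — can be sketched as follows. The 3x3-cross opening in `refine_candidate` is a stand-in for the patent's unspecified morphological algorithm.

```python
import numpy as np

def dilate(m):
    """One step of 4-connected binary dilation."""
    out = m.copy()
    out[1:, :] |= m[:-1, :]; out[:-1, :] |= m[1:, :]
    out[:, 1:] |= m[:, :-1]; out[:, :-1] |= m[:, 1:]
    return out

def erode(m):
    """4-connected erosion, via duality with dilation."""
    return ~dilate(~m)

def equalize_background(hu, liver_mask):
    """Set every non-liver pixel to the liver's average HU value,
    flattening the background so later morphology responds only to
    structure inside the liver."""
    out = hu.astype(float).copy()
    out[~liver_mask] = hu[liver_mask].mean()
    return out

def refine_candidate(candidate):
    """Morphological opening (erosion then dilation) of the FNH
    candidate mask, removing speckle smaller than the 3x3 cross."""
    return dilate(erode(candidate.astype(bool)))
```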
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
An image processing apparatus according to the present invention includes a first classification unit configured to classify a plurality of pixels in two-dimensional image data constituting first three-dimensional image data including an object into a first class group by using a trained classifier, and a second classification unit configured to classify a plurality of pixels in second three-dimensional image data including the object into a second class group based on a result of classification by the first classification unit, the second class group including at least one class of the first class group. With this image processing apparatus, the user's burden of giving pixel information can be reduced and a region can be extracted with high accuracy.
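The two-stage scheme — a trained classifier labels the first volume's slices, and those labels bootstrap classification of a second volume — can be illustrated with stand-ins: a threshold in place of the trained classifier, and a nearest-class-mean rule for the second stage. Both are assumptions for illustration; the abstract does not name the actual classifiers, and the two volumes are assumed spatially aligned.

```python
import numpy as np

def first_stage(volume_a, threshold=0.5):
    """Stand-in for the trained 2D classifier: label every pixel of each
    slice as object (1) or background (0) by a simple threshold."""
    return (volume_a > threshold).astype(np.uint8)

def second_stage(volume_b, first_labels):
    """Classify the second volume based on the first stage's result:
    class means are estimated in the second volume under the
    first-stage labels, then each voxel takes the nearer mean's class."""
    mu0 = volume_b[first_labels == 0].mean()
    mu1 = volume_b[first_labels == 1].mean()
    return (np.abs(volume_b - mu1) < np.abs(volume_b - mu0)).astype(np.uint8)
```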
METHODS AND SYSTEMS FOR IMAGE SEGMENTATION
The application discloses a method and system for segmenting a lung image. The method may include obtaining a target image relating to a lung region. The target image may include a plurality of image slices. The method may also include segmenting the lung region from the target image, identifying an airway structure relating to the lung region, and identifying one or more fissures in the lung region. The method may further include determining one or more pulmonary lobes in the lung region.
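The final step — determining pulmonary lobes from the lung region and its fissures — can be sketched crudely: remove fissure voxels from the lung mask and label the remaining voxels by which side of the fissure's mean plane they fall on. The planar split is an illustrative simplification of real fissure geometry, not the patent's method.

```python
import numpy as np

def split_lobes(lung_mask, fissure_mask):
    """Remove fissure voxels from the lung mask and label each remaining
    voxel 1 (above the fissure's mean depth) or 2 (below it), a crude
    stand-in for lobe determination."""
    body = lung_mask & ~fissure_mask
    z = np.nonzero(fissure_mask)[0].mean()          # fissure's mean depth
    zz = np.arange(lung_mask.shape[0])[:, None, None]
    lobes = np.zeros(lung_mask.shape, np.uint8)
    lobes[body & (zz < z)] = 1                      # upper lobe
    lobes[body & (zz > z)] = 2                      # lower lobe
    return lobes
```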
SYSTEMS AND METHODS FOR PERFORMING A MEASUREMENT ON AN ULTRASOUND IMAGE DISPLAYED ON A TOUCHSCREEN DEVICE
The present embodiments relate generally to systems and methods for performing a measurement on an ultrasound image displayed on a touchscreen device. The method may include: receiving, via the touchscreen device, first input coordinates corresponding to a point on the ultrasound image; using the first input coordinates as a seed for performing a contour identification process on the ultrasound image, wherein the contour identification process performs contour evolution using morphological operators to iteratively dilate from the first input coordinates; upon identification of a contour from the contour identification process, placing measurement calipers on the identified contour; and storing a value identified by the measurement calipers as the measurement.
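The seeded contour evolution described above — iterative morphological dilation from the tapped point — can be sketched as region growing constrained by intensity similarity to the seed. The `tol` stopping criterion is an assumption; the abstract does not state how the evolution terminates.

```python
import numpy as np

def dilate(m):
    """One step of 4-connected binary dilation."""
    out = m.copy()
    out[1:, :] |= m[:-1, :]; out[:-1, :] |= m[1:, :]
    out[:, 1:] |= m[:, :-1]; out[:, :-1] |= m[:, 1:]
    return out

def grow_contour(image, seed, tol=10, max_iter=100):
    """Dilate from the tapped seed pixel, keeping only pixels whose
    intensity stays within `tol` of the seed's; stop when growth halts.
    The returned mask's boundary is where calipers would be placed."""
    region = np.zeros(image.shape, bool)
    region[seed] = True
    similar = np.abs(image - image[seed]) <= tol
    for _ in range(max_iter):
        grown = dilate(region) & similar
        if np.array_equal(grown, region):
            break
        region = grown
    return region
```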
Image processing device, image processing method, and program
To increase accuracy in extracting a foreground area while saving the user time and effort. Image obtaining means of an image processing device obtains an image including a background and an object. Element area setting means sets, with respect to the image, a plurality of element areas respectively corresponding to a plurality of elements in the image. Overlapped area specifying means specifies an overlapped area in which a degree of overlap of the element areas is greater than or equal to a predetermined value in the image. Foreground area extracting means extracts a foreground area corresponding to the object from the image based on the overlapped area.
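The overlap rule is simple to state in code: count how many element areas cover each pixel and keep pixels whose count meets the predetermined threshold. Representing element areas as binary masks is an assumption for illustration.

```python
import numpy as np

def foreground_from_overlap(element_masks, min_overlap=2):
    """Count how many element-area masks cover each pixel and keep the
    pixels whose overlap degree meets the threshold."""
    count = np.sum(np.stack(element_masks).astype(np.uint8), axis=0)
    return count >= min_overlap
```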
IMAGE-BASED ACTION DETECTION USING CONTOUR DILATION
A system includes a sensor, a weight sensor, and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor and weight measurements from the weight sensor. The tracking subsystem detects an event associated with an item being removed from a rack in which the weight sensor is installed. The tracking subsystem determines that a first person and a second person may be associated with the event. In response, the tracking subsystem dilates contours associated with the first and second person from a first depth to a second depth until the contours enter a zone adjacent to the rack. A number of iterations is determined for each contour to enter the zone adjacent to the rack. If the first person's contour enters the zone in fewer iterations, the item is assigned to the first person.
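The iteration-counting race between the two contours can be sketched as follows: repeatedly dilate each person's contour and count the steps until it overlaps the zone adjacent to the rack; fewer steps wins the item. Function names and the tie-handling are assumptions.

```python
import numpy as np

def dilate(m):
    """One step of 4-connected binary dilation."""
    out = m.copy()
    out[1:, :] |= m[:-1, :]; out[:-1, :] |= m[1:, :]
    out[:, 1:] |= m[:, :-1]; out[:, :-1] |= m[:, 1:]
    return out

def iterations_to_zone(contour, zone, max_iter=50):
    """Dilate a person's contour step by step until it touches the rack
    zone; return the number of steps needed (max_iter + 1 if never)."""
    m = contour.copy()
    for step in range(1, max_iter + 1):
        m = dilate(m)
        if (m & zone).any():
            return step
    return max_iter + 1

def assign_item(contour_a, contour_b, zone):
    """Assign the item to whichever contour reaches the zone in fewer
    dilation iterations."""
    na = iterations_to_zone(contour_a, zone)
    nb = iterations_to_zone(contour_b, zone)
    return "first" if na < nb else "second"
```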
APPARATUS AND METHOD FOR ALIGNING 3-DIMENSIONAL DATA
The present disclosure discloses a three-dimensional data alignment apparatus, a three-dimensional data alignment method, and a recording medium, which may align a location between volumetric data and surface data even without a segmentation process of extracting a surface from the volumetric data. A three-dimensional data alignment apparatus according to an exemplary embodiment of the present disclosure includes a three-dimensional data alignment unit for aligning a location between first three-dimensional data and second three-dimensional data expressed in different data forms with regard to a target to be measured. The first three-dimensional data are three-dimensional data acquired in a voxel form with regard to the target to be measured, and the second three-dimensional data are three-dimensional data acquired in a surface form with regard to the target to be measured. The three-dimensional data alignment unit is configured to extract one or more vertices from the second three-dimensional data; extract the first voxel values of first voxels located around each vertex from the first three-dimensional data, based on a location of each vertex extracted from the second three-dimensional data; determine corresponding points between the first three-dimensional data and the second three-dimensional data based on the first voxel values extracted from the first three-dimensional data; and calculate location conversion information minimizing a location error between the first three-dimensional data and the second three-dimensional data based on the corresponding points.
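The correspondence and error-minimization steps can be sketched in two parts: pick, for each surface vertex, a corresponding voxel from its neighborhood's values, then compute the rigid transform minimizing the squared point-to-point error (the Kabsch/SVD solution). Taking the highest-valued neighborhood voxel as the corresponding point is an illustrative assumption; the abstract only says corresponding points are determined from the extracted voxel values.

```python
import numpy as np

def corresponding_voxels(volume, vertices, radius=1):
    """For each surface vertex, pick the highest-valued voxel in a small
    neighborhood as its corresponding point in the voxel grid."""
    pts = []
    for v in np.round(vertices).astype(int):
        lo = np.maximum(v - radius, 0)
        hi = np.minimum(v + radius + 1, volume.shape)
        sub = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        off = np.unravel_index(np.argmax(sub), sub.shape)
        pts.append(lo + np.array(off))
    return np.array(pts, float)

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t (Kabsch algorithm)
    taking the src points onto the dst points."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs
```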
Automated surface area assessment for dermatologic lesions
A method for assessing a three-dimensional (3D) surface area having one or more lesions is disclosed. The method includes steps of: capturing a two-dimensional (2D) color image and a depth image of the 3D surface area; enhancing contrast of the 2D color image; segmenting the one or more lesions of the 2D color image into one or more segmented lesions; and calculating the 3D area of the one or more segmented lesions using information from the 2D color image and the depth image.
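The final area calculation can be sketched under a pinhole-camera assumption: a pixel at depth z spans (z/fx) by (z/fy) real-world units, so each segmented pixel contributes z²/(fx·fy) to the lesion's 3D area. The pinhole model and the `fx`/`fy` focal-length parameters are assumptions for illustration, not the patent's stated method.

```python
import numpy as np

def lesion_area_3d(lesion_mask, depth, fx, fy):
    """Sum each segmented pixel's real-world footprint: at depth z a
    pixel covers (z / fx) * (z / fy), i.e. z**2 / (fx * fy)."""
    z = depth[lesion_mask]
    return float(np.sum(z * z) / (fx * fy))
```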