G06K9/34

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
20170249301 · 2017-08-31

A non-transitory computer readable medium stores a program causing a computer to execute a process for displaying text. The process includes displaying in association with each other a text region extracted from image information and including an image of a text, an original text that is obtained by performing character recognition on the image of the text included in the text region, and a translation text into which the original text is translated.

REPAIRING HOLES IN IMAGES

A method for image processing that includes: obtaining a mask of a connected component (CC) from an image; generating a stroke width transform (SWT) image based on the mask; calculating multiple stroke width parameters for the mask based on the SWT image; identifying a hole in the CC of the mask; calculating a stroke width estimate for the hole based on the stroke width values of pixels in the SWT image surrounding the hole; generating a comparison of the stroke width estimate for the hole with a limit based on the multiple stroke width parameters for the mask; and generating a revised mask by filling the hole in response to the comparison.
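The filling decision described above can be sketched as follows. The ratio-based limit, the four-neighbor ring around each hole pixel, and all function names are illustrative assumptions, not the patent's actual parameters:

```python
import numpy as np
from collections import deque

def _holes(mask):
    """Boolean map of background pixels not reachable from the image
    border, i.e. holes inside connected components of the mask."""
    h, w = mask.shape
    outside = np.zeros_like(mask, dtype=bool)
    q = deque()
    for r in range(h):
        for c in range(w):
            if (r in (0, h - 1) or c in (0, w - 1)) and not mask[r, c]:
                outside[r, c] = True
                q.append((r, c))
    while q:  # flood-fill the background from the border
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] and not outside[rr, cc]:
                outside[rr, cc] = True
                q.append((rr, cc))
    return ~mask & ~outside

def fill_holes_by_stroke_width(mask, swt, ratio_limit=1.5):
    """Fill a hole pixel only when the stroke width estimated from the
    SWT values around it stays within a limit derived from the mask's
    own stroke-width statistics (here: mean width times ratio_limit,
    a hypothetical choice of limit)."""
    widths = swt[mask & (swt > 0)]
    limit = widths.mean() * ratio_limit if widths.size else 0.0
    revised = mask.copy()
    h, w = mask.shape
    for r, c in zip(*np.nonzero(_holes(mask))):
        # SWT values of mask pixels bordering this hole pixel
        ring = [swt[rr, cc]
                for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                if 0 <= rr < h and 0 <= cc < w and mask[rr, cc]]
        if ring and np.mean(ring) <= limit:
            revised[r, c] = True
    return revised
```

A pinhole inside a stroke whose surrounding SWT values match the component's statistics gets filled; a gap open to the border is left untouched.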

NON-TRANSITORY COMPUTER READABLE MEDIUM AND INFORMATION PROCESSING APPARATUS AND METHOD
20170249299 · 2017-08-31

A non-transitory computer readable medium stores a translation program that causes a computer to execute a process. The process includes: displaying image information, text regions, and original text in association with each other, the text regions being obtained by extracting regions including an image of text from the image information, the original text being obtained by performing character recognition on the text included in the text regions; and editing the text regions in accordance with the content of a received operation.

IMAGE PROCESSING APPARATUS AND MEDIUM STORING PROGRAM EXECUTABLE BY IMAGE PROCESSING APPARATUS
20170249527 · 2017-08-31

An image processing apparatus includes a controller configured to execute: acquiring objective image data representing an objective image which includes a first character and a second character; analyzing first partial image data and specifying the first character in an image represented by the first partial image data; and generating processed image data representing a processed image which includes the first character and the second character by using the objective image data. The objective image data includes the first partial image data in a bitmap format which represents the image including the first character and second partial image data in a vector format which represents an image including the second character. The processed image data includes: first processed data representing an image including the first character; and second processed data representing an image including the second character.

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
20170249526 · 2017-08-31

When a user specifies a range on an image including characters with a finger or the like in order to extract a desired character string, a specific character (a space or the like) located at a position adjacent to the desired character string is prevented from being unintentionally included in the selected range. The character area corresponding to each character included in the image is identified, and character recognition processing is performed for each of the identified character areas. Then, from the results of the character recognition processing, a specific character is determined and the character area corresponding to the determined specific character is extended. Then, the range selected by the user in the displayed image is acquired, and the character recognition results corresponding to a plurality of character areas included in the selected range are output.
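A minimal sketch of the selection logic, assuming axis-aligned character boxes and a hypothetical `pad` amount by which a specific character's area is extended. Widening the space's box means it is dropped from the output unless the user's range clearly covers it:

```python
def select_string(chars, selection, specials=(" ",), pad=5):
    """chars: list of (character, (x0, y0, x1, y1)) recognition results.
    selection: (x0, y0, x1, y1) range chosen by the user.
    A character is output only when its box lies entirely inside the
    selection; boxes of 'specific' characters (e.g. spaces) are first
    widened by `pad`, so a space at the edge of the selection is
    excluded unless the user clearly includes it."""
    sx0, sy0, sx1, sy1 = selection
    out = []
    for ch, (x0, y0, x1, y1) in chars:
        if ch in specials:
            x0, x1 = x0 - pad, x1 + pad   # extend the specific character's area
        if sx0 <= x0 and sy0 <= y0 and x1 <= sx1 and y1 <= sy1:
            out.append(ch)
    return "".join(out)
```

With `pad=0` the trailing space would slip into the selection; with the extended area it is kept out.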

Accurate and efficient polyp detection in wireless capsule endoscopy images

A method for detecting polyps in endoscopy images includes: pruning a plurality of two-dimensional digitized images received from an endoscopy apparatus to remove images that are unlikely to depict a polyp, leaving a plurality of candidate images that are likely to depict a polyp; pruning non-polyp pixels that are unlikely to be part of a polyp depiction from the candidate images; detecting polyp candidates in the pruned candidate images; extracting features from the polyp candidates; and performing a regression on the extracted features to determine whether each polyp candidate is likely to be an actual polyp.
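The five-stage pipeline can be sketched as a skeleton in which each stage is an injected callable. The stage functions and the 0.5 threshold below are placeholders, not the patent's actual detectors:

```python
def detect_polyps(images, is_plausible, prune_pixels, find_candidates,
                  extract_features, regressor, threshold=0.5):
    """Hypothetical pipeline skeleton: image-level pruning, pixel-level
    pruning, candidate detection, feature extraction, and regression
    scoring, with each stage supplied by the caller."""
    detections = []
    for img in images:
        if not is_plausible(img):                # stage 1: drop unlikely images
            continue
        pruned = prune_pixels(img)               # stage 2: drop non-polyp pixels
        for cand in find_candidates(pruned):     # stage 3: candidate detection
            score = regressor(extract_features(cand))  # stages 4-5
            if score >= threshold:
                detections.append((img, cand, score))
    return detections
```

The toy stages in the test below stand in for real image operators purely to exercise the control flow.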

Object identification
09747693 · 2017-08-29

A system for identifying objects within an image. Disclosed are methods and systems for an image processing system to segment digital images. Generally stated, certain embodiments implement operations for consolidating shapes in a digital image, including: performing a shape identification analysis of pixels within the digital image to identify shapes within the digital image; analyzing each shape to identify attributes of each shape; comparing the identified attributes of each shape to the identified attributes of other shapes to determine if the shapes are sufficiently related to constitute an object; and if the identified attributes are sufficiently related, associating the shapes with each other to form an object.
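The consolidation step reads like pairwise-similarity grouping, which can be sketched with union-find; the `related` predicate stands in for whatever attribute comparison a given embodiment uses:

```python
def consolidate_shapes(shapes, related):
    """Group shapes into objects: any two shapes whose attributes the
    `related` predicate deems sufficiently similar end up in the same
    object (transitively), via a small union-find structure."""
    parent = list(range(len(shapes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            if related(shapes[i], shapes[j]):
                parent[find(i)] = find(j)    # merge the two groups

    groups = {}
    for i in range(len(shapes)):
        groups.setdefault(find(i), []).append(shapes[i])
    return list(groups.values())
```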

CONTROL SYSTEM ENABLING COMPARISON BETWEEN TWO CHARACTER STRINGS AND METHOD OF INSTALLING A NEW CONFIGURATION IN AN AIRCRAFT
20170242676 · 2017-08-24

A control system and method enabling comparison of first and second character strings, the control system comprising a first source of information supplying the first string and a second source of information embedded in an aircraft and supplying the second string. A first processing module can model the first character string, each character of the first string being divided into a given number H×W of standardized elements comprising sign elements and background elements. The first processing module can transform each character into a standardized image. A second processing module can model the second character string in the same way, each character being divided into H×W standardized elements comprising sign elements and background elements, and can transform each character into a standardized image in which each standardized element is associated with a comparison code. A comparison module can load the first string into the first module and the second string into the second module for processing.
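A toy sketch of the standardized-image comparison, using hypothetical 3×3 glyph grids in place of the H×W element model (1 marks a sign element, 0 a background element):

```python
# Hypothetical 3x3 "standardized images" for two characters.
GLYPHS = {
    "A": ((0, 1, 0), (1, 1, 1), (1, 0, 1)),
    "B": ((1, 1, 0), (1, 1, 1), (1, 1, 0)),
}

def standardize(string, glyphs=GLYPHS):
    """Model a character string as a sequence of HxW element grids."""
    return [glyphs[c] for c in string]

def match(first, second):
    """Compare two strings element by element via their standardized
    images, as the two processing modules would."""
    a, b = standardize(first), standardize(second)
    return len(a) == len(b) and all(x == y for x, y in zip(a, b))
```

Real embodiments would render each character to the standardized image and attach a comparison code per element; the dictionary lookup here is only a stand-in.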

STRUCTURE-PRESERVING COMPOSITE MODEL FOR SKIN LESION SEGMENTATION
20170243345 · 2017-08-24

A structure-preserving composite model for skin lesion segmentation includes partitioning a dermoscopic image into superpixels at a first scale. Each superpixel is a vertex on a graph defined by color coordinates and spatial coordinates, and represents a number of pixels of the dermoscopic image according to the first scale. The model further includes constructing k background templates by k-means clustering selected ones of the superpixels in space and color; generating sparse representations of the superpixels based on the background templates; calculating a reconstruction error for each superpixel by comparing its sparse representation to its original color coordinates and spatial coordinates; and outputting a confidence map that identifies each pixel of the dermoscopic image as belonging or not belonging to a skin lesion, based on the reconstruction errors of the representative superpixels.
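The reconstruction-error step can be sketched with plain least squares standing in for the sparse coding: superpixels well explained by the background templates get near-zero error, while lesion superpixels do not. The feature layout and template construction here are illustrative assumptions:

```python
import numpy as np

def reconstruction_errors(features, templates):
    """features: (n, d) superpixel descriptors (color + spatial coords);
    templates: (k, d) background templates (e.g. k-means centroids).
    Each superpixel is reconstructed as a combination of templates;
    least squares stands in for the sparse coding of the patent, so
    this is only a sketch of the idea."""
    D = templates.T                              # (d, k) dictionary
    coef, *_ = np.linalg.lstsq(D, features.T, rcond=None)
    residual = features.T - D @ coef             # (d, n) reconstruction gap
    return np.linalg.norm(residual, axis=0)      # one error per superpixel

def confidence_map(errors, labels):
    """Spread each superpixel's error to the pixels it represents;
    labels is an (H, W) map of superpixel indices."""
    return errors[labels]
```

A background-like descriptor reconstructs exactly; a descriptor outside the template span keeps its full residual norm.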

Evaluation of co-registered images of differently stained tissue slices

A method for co-registering images of tissue slices stained with different biomarkers displays a first digital image of a first tissue slice on a graphical user interface such that an area of the first image is enclosed by a frame. Then a portion of a second image of a second tissue slice is displayed such that the area of the first image enclosed by the frame is co-registered with the displayed portion of the second image. The displayed portion of the second image has the shape of the frame. The tissue slices are both z slices of a tissue sample taken at corresponding positions in the x and y dimensions. The displayed portion of the second image is shifted in the x and y dimensions to coincide with the area of the first image that is enclosed by the frame as the user shifts the first image under the frame.
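The coordinate bookkeeping for the framed view can be sketched as a simple crop of the co-registered second image; the frame and offset conventions below are assumptions about one possible embodiment:

```python
import numpy as np

def framed_view(second_image, frame, offset):
    """Return the portion of the co-registered second image that
    corresponds to the frame over the first image. `frame` is
    (x0, y0, x1, y1) in first-image coordinates; `offset` is the
    (dx, dy) by which the user has shifted the first image under
    the frame, so the crop tracks the user's panning."""
    x0, y0, x1, y1 = frame
    dx, dy = offset
    return second_image[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
```

Because both slices are co-registered z slices of the same sample, the crop at the shifted coordinates shows the same x-y area under a different biomarker stain.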