Patent classifications
G06K9/34
AUTOMATIC FACT EXTRACTION
Automatic fact extraction involves tokenizing text in unstructured information to generate a token list. Parent entity rules defined for a selected domain are applied to the token list to identify a parent entity. Related entity rules defined for a related entity linked to the parent entity are applied to the token list to identify the related entity. The related entity is added to a fact list as an extracted fact of the parent entity. The extracted fact is transmitted as structured information to a repository.
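The extraction flow described above (tokenize, apply parent entity rules, apply related entity rules, collect facts) can be sketched in Python. The rule sets, the invoice domain, and the fact representation are illustrative assumptions, not the patent's actual implementation.

```python
import re

def tokenize(text):
    # Split unstructured text into a token list (words and punctuation).
    return re.findall(r"\w+|[^\w\s]", text)

# Hypothetical rule sets for an "invoice" domain: a parent entity rule
# recognizes the parent in the token list; related entity rules locate
# values linked to that parent.
parent_rules = {"invoice": lambda tokens: "Invoice" in tokens}
related_rules = {
    "total": lambda tokens: next(
        (tokens[i + 1] for i, t in enumerate(tokens[:-1]) if t.lower() == "total"),
        None,
    )
}

def extract_facts(text):
    tokens = tokenize(text)
    fact_list = []
    for parent, rule in parent_rules.items():
        if rule(tokens):  # parent entity identified
            for name, related_rule in related_rules.items():
                value = related_rule(tokens)
                if value is not None:
                    # The related entity becomes an extracted fact of the parent.
                    fact_list.append({"parent": parent, "fact": name, "value": value})
    return fact_list  # structured facts, ready to transmit to a repository

print(extract_facts("Invoice 42 Total 199"))
```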
IMAGE PROCESSING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND STORAGE MEDIUM
An image processing apparatus includes: a character recognition unit configured to perform character recognition processing for recognizing one or more characters included in image data and acquiring character information; a display unit configured to display the one or more characters indicated by the character information acquired by the character recognition unit; and a transmission unit configured to transmit the image data to a folder whose name is a character selected by a user from among the one or more characters displayed by the display unit.
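The final step, transmitting image data to a folder named after a user-selected character string, might look like the sketch below. The function name, the base path, and the list-of-strings representation of the recognized character information are hypothetical.

```python
import os

def destination_folder(base, recognized, selected):
    # The transmission unit sends the image data to a folder whose name
    # is the character string the user selected from the displayed
    # recognition candidates.
    if selected not in recognized:
        raise ValueError("selection must come from recognized characters")
    return os.path.join(base, selected)

print(destination_folder("/scans", ["Invoice", "Receipt"], "Invoice"))
```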
IMAGE PROCESSING APPARATUS, SYSTEM, CONVERSION METHOD, AND RECORDING MEDIUM
An image processing apparatus, system, method, and control program stored in a non-transitory recording medium are provided, each of which: obtains image data of a document; determines an arrangement pattern of each of a plurality of character strings in the image data, based on the positional relationship of the plurality of character strings; and generates a text data file including the plurality of character strings, each arranged according to the arrangement pattern that is determined.
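A minimal sketch of the arrangement step, assuming each character string carries an (x, y) position and that strings falling in the same horizontal band belong to the same output line. The band size and the tuple representation are illustrative, not the patent's actual pattern determination.

```python
def arrange(strings):
    """Arrange positioned character strings into lines of text.

    `strings` is a list of (text, x, y) tuples, a hypothetical stand-in
    for the positional data extracted from the image.
    """
    lines = {}
    for text, x, y in strings:
        # Strings whose y-coordinates fall in the same band share a line.
        lines.setdefault(round(y / 10), []).append((x, text))
    out = []
    for band in sorted(lines):
        # Within a line, order the strings left to right by x.
        out.append(" ".join(t for _, t in sorted(lines[band])))
    return "\n".join(out)

print(arrange([("World", 60, 12), ("Hello", 5, 10), ("Line2", 5, 40)]))
```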
Surgical Kit Inspection Systems And Methods For Inspecting Surgical Kits Having Parts Of Different Types
Surgical kit inspection systems and methods are provided for inspecting surgical kits having parts of different types. The surgical kit inspection system comprises a vision unit including a first camera unit and a second camera unit to capture images of parts of a first type and a second type in each kit and to capture images of loose parts from each kit that are placed on a light surface. A robot supports the vision unit to move the first and second camera units relative to the parts in each surgical kit. One or more controllers obtain unique inspection instructions for each surgical kit, control inspection of the kit and movement of the robot and the vision unit accordingly, and provide output indicating inspection results for each kit.
Methods, Apparatuses, and Systems for Creating 3-Dimensional Representations Exhibiting Geometric and Surface Characteristics of Brain Lesions
Methods, apparatuses, systems, and implementations for creating 3-dimensional (3D) representations exhibiting geometric and surface characteristics of brain lesions are disclosed. 2D and/or 3D MRI images of the brain may be acquired. Brain lesions and other abnormalities may be identified and isolated, with each lesion serving as a region of interest (ROI). Saved ROIs may be converted into stereolithography format, maximum intensity projection (MIP) images, and/or orthographic projection images. Data corresponding to these resulting 3D brain lesion images may be used to create 3D printed models of the isolated brain lesions using 3D printing technology. Analysis of the 3D brain lesion images and the 3D printed brain lesion models may enable a more efficient and accurate determination of brain lesion etiologies.
SEGMENTING OBJECTS BY REFINING SHAPE PRIORS
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing instance segmentation by detecting and segmenting individual objects in an image. In one aspect, a method comprises: processing an image to generate data identifying a region of the image that depicts a particular object; obtaining data defining a plurality of example object segmentations; generating a respective weight value for each of the example object segmentations; for each of a plurality of pixels in the region of the image, determining a score characterizing a likelihood that the pixel is included in the particular object depicted in the region of the image using: (i) the example object segmentations, and (ii) the weight values for the example object segmentations; and generating a segmentation of the particular object depicted in the region of the image using the scores for the pixels in the region of the image.
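The per-pixel scoring described above can be illustrated as a weighted vote over example segmentations. The masks, weights, and threshold below are illustrative; a real system would derive the weights from a learned model rather than fix them by hand.

```python
# Example object segmentations (shape priors) over a 2x2 region, with
# hypothetical weight values for each example.
examples = [
    [[1, 1], [0, 0]],  # example segmentation 1
    [[1, 0], [1, 0]],  # example segmentation 2
]
weights = [0.75, 0.25]

def pixel_scores(examples, weights):
    # Score for each pixel: weighted average of the example masks,
    # characterizing the likelihood the pixel belongs to the object.
    h, w = len(examples[0]), len(examples[0][0])
    total = sum(weights)
    return [[sum(wt * ex[i][j] for ex, wt in zip(examples, weights)) / total
             for j in range(w)] for i in range(h)]

def segment(scores, threshold=0.5):
    # Keep pixels whose score clears the threshold.
    return [[1 if s >= threshold else 0 for s in row] for row in scores]

print(segment(pixel_scores(examples, weights)))
```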
METHOD AND APPARATUS FOR CAPTURING AN IMAGE OF A LATERAL FACE OF A WOODEN BOARD
Method and apparatus for capturing an image of a lateral face of a wooden board, where the lateral face (2) is parallel to a main axis of development of the wooden board (3). The method comprises: a step for feeding the wooden board (3) with the lateral face (2) transverse to a feeding direction (4); a step for illuminating the lateral face (2); a step for capturing, using a plurality of area scan cameras (9), a plurality of first digital images at different times, where each first digital image comprises a portion (10) of the lateral face (2) extending for the entire height of the lateral face (2) transverse to the main axis of development and for part of the length of the lateral face (2) along the main axis of development, the set of all such portions (10) corresponding to the entire lateral face (2); and a merging step for merging the first digital images using an electronic processing unit to obtain a second digital image showing the entire lateral face (2).
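The merging step can be sketched as side-by-side concatenation of the captured portions, assuming each portion spans the full face height and the portions do not overlap. That is a simplification of the actual merging, which would also handle alignment and overlap.

```python
def merge_portions(portions):
    """Merge successive lateral-face portions into one full-face image.

    Each portion is a list of pixel rows covering the full board height
    and part of the board length; concatenating corresponding rows side
    by side yields the second digital image of the entire face.
    """
    merged = []
    for rows in zip(*portions):
        merged.append([px for row in rows for px in row])
    return merged

p1 = [[1, 2], [5, 6]]  # first captured portion
p2 = [[3, 4], [7, 8]]  # adjacent portion along the board length
print(merge_portions([p1, p2]))
```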
DISTRIBUTABLE DESCRIPTIVE RECIPE FOR INTELLIGENT IMAGE AND VIDEO PROCESSING SYSTEMS
This disclosure relates to a method for rendering images. First, a user request is received from a user interface to access an image effect renderer recipe, comprising conditional logic and non-visual image data, from an effect repository. Next, at least one image signal is received. Objects are identified within the image signal(s). The image effect renderer recipe is processed via an effect renderer recipe interpreter to generate image processing steps and image processing prioritizations. The image processing steps are then ordered in accordance with the image processing prioritizations. Next, an image processor applies the image processing steps to the identified objects of the image signal(s) to generate at least one processed image signal. The processed image signal(s) are then displayed on a display device.
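The interpreter-and-ordering stage might be sketched as follows. Representing a recipe as a mapping from step name to a (priority, transform) pair is an assumption for illustration, not the disclosed recipe format.

```python
def interpret(recipe):
    # Interpret the recipe: order the image processing steps according
    # to their processing prioritizations (lower number runs first).
    steps = sorted(recipe.items(), key=lambda kv: kv[1][0])
    return [fn for _, (_, fn) in steps]

def apply_steps(steps, objects):
    # Apply the ordered processing steps to each identified object.
    for fn in steps:
        objects = [fn(o) for o in objects]
    return objects

# Hypothetical recipe: per-pixel transforms with explicit priorities.
recipe = {
    "brighten": (2, lambda px: px + 10),
    "scale":    (1, lambda px: px * 2),
}
print(apply_steps(interpret(recipe), [1, 2, 3]))
```

Because "scale" has the higher priority, it runs before "brighten", so the input `[1, 2, 3]` becomes `[12, 14, 16]`.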
UTILIZING MACHINE LEARNING AND IMAGE FILTERING TECHNIQUES TO DETECT AND ANALYZE HANDWRITTEN TEXT
In some implementations, a device may receive an image that depicts handwritten text. The device may determine that a section of the image includes the handwritten text. The device may analyze, using a first image processing technique, the section to identify subsections of the section that include individual words of the handwritten text. The device may reconfigure, using a second image processing technique, the subsections to create preprocessed word images associated with the individual words. The device may analyze, using a word recognition model, the preprocessed word images to generate digitized words that are associated with the preprocessed word images. The device may verify, based on a reference data structure, that the digitized words correspond to recognized words of the word recognition model. The device may generate, based on verifying the digitized words, digital text according to a sequence of the digitized words in the section.
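The recognition-and-verification stages can be sketched like this, with `recognize` standing in for the word recognition model and `lexicon` for the reference data structure; both are hypothetical stand-ins rather than the described implementation.

```python
# Hypothetical reference data structure of recognized words.
lexicon = {"hello", "world"}

def recognize(word_image):
    # Stand-in for the word recognition model: here the "preprocessed
    # word image" is simply the word itself.
    return word_image.lower()

def digitize(word_images):
    digitized = [recognize(w) for w in word_images]
    # Verify each digitized word against the reference data structure,
    # then emit digital text in the original sequence.
    verified = [w for w in digitized if w in lexicon]
    return " ".join(verified)

print(digitize(["Hello", "W0rld", "world"]))
```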
SEMANTIC CLUSTER FORMATION IN DEEP LEARNING INTELLIGENT ASSISTANTS
Enhanced techniques and circuitry are presented herein for providing responses to questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving an indication of a question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the question, ranking the set of passages according to relevance to the question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the question based at least on a selected semantic cluster.
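The semantic-cluster formation step could be sketched with a simple word-overlap measure. Jaccard similarity and the fixed threshold are illustrative stand-ins for whatever sentence-similarity method the technique actually uses.

```python
def jaccard(a, b):
    # Word-overlap similarity between two sentences.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster(sentences, threshold=0.3):
    # Greedy clustering: each sentence joins the first cluster whose
    # seed sentence is similar enough, else starts a new cluster.
    clusters = []
    for s in sentences:
        for c in clusters:
            if jaccard(s, c[0]) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

sents = ["install the driver first",
         "you must install the driver",
         "reboot after setup"]
print(cluster(sents))
```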