Patent classifications
G06K9/34
Systems and methods for computer vision background estimation using foreground-aware statistical models
Systems and methods are disclosed for background modeling in a computer vision system to enable foreground object detection. A video acquisition module receives video data from a sequence of frames. A fit test module identifies a foreground object from the video data and defines a foreground mask representative of the identified foreground object. A foreground-aware background estimation module defines a first background model from the video data and then defines an updated background model from an association of a current frame of the video data, the first background model, and the foreground mask.
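As a rough illustration of a foreground-aware update of this kind, the following sketch blends each new frame into a running background estimate only where the foreground mask is clear. The function name, the blending factor `alpha`, and the use of an exponential moving average are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def update_background(background, frame, foreground_mask, alpha=0.05):
    """Foreground-aware background update (illustrative sketch).

    Pixels flagged by the foreground mask keep the previous background
    estimate; all other pixels are blended toward the current frame with
    an exponential moving average of weight `alpha`."""
    background = background.astype(np.float64)
    frame = frame.astype(np.float64)
    blended = (1.0 - alpha) * background + alpha * frame
    return np.where(foreground_mask, background, blended)
```

Excluding masked pixels keeps moving objects from being absorbed into the background model over time.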
INTELLIGENT SCORING METHOD AND SYSTEM FOR TEXT OBJECTIVE QUESTION
An intelligent scoring method and system for a text objective question, the method comprising: acquiring an answer image of a text objective question (101); segmenting the answer image to obtain one or more segmentation results of an answer string to be identified (102); determining whether any of the segmentation results has the same number of characters as the standard answer (103); if not, determining the answer to be wrong (106); otherwise, calculating the identification confidence of each segmentation result having the same number of characters as the standard answer, and/or calculating the identification confidence of the respective characters in that segmentation result (104); and determining whether the answer is correct according to the calculated identification confidence (105). The method can automatically score text objective questions, thus reducing the consumption of human resources and improving scoring efficiency and accuracy.
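The length check and confidence test of steps 103-105 can be sketched as follows. Treating the overall confidence as the minimum per-character confidence, and requiring an exact match against the standard answer, are assumptions made for illustration; the patent does not fix the confidence rule.

```python
def score_answer(segmentation_results, standard_answer, threshold=0.9):
    """segmentation_results: list of (recognized_string, per_char_confidences).

    Returns True if some segmentation has the same number of characters as
    the standard answer, matches it, and is recognized confidently enough."""
    for text, confidences in segmentation_results:
        if len(text) != len(standard_answer):
            continue  # step 103/106: length mismatch cannot be correct
        # step 104: take the weakest per-character confidence as the score
        if text == standard_answer and min(confidences) >= threshold:
            return True  # step 105: accepted as correct
    return False  # step 106: no acceptable segmentation found
```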
MACHINE VISION-BASED METHOD AND SYSTEM FOR AIRCRAFT DOCKING GUIDANCE AND AIRCRAFT TYPE IDENTIFICATION
A machine vision-based method and system for aircraft docking guidance and aircraft type identification, comprising: S1, a monitoring scenario is divided into different information processing function areas; S2, a captured image is pre-processed; S3, the engine and the front wheel of an aircraft are identified in the image, so as to confirm that the aircraft has appeared in the image; S4, continuous tracking and real-time updating are performed on the image of the engine and the front wheel of the aircraft captured in step S3; S5, real-time positioning of the aircraft is implemented, and the degree of deviation of the aircraft with respect to a guide line and its distance with respect to a stop line are accurately determined; S6, the degree of deviation of the aircraft with respect to the guide line and the distance with respect to the stop line of step S5 are outputted and displayed.
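Step S5's positioning might be computed as in the sketch below: the signed perpendicular distance of the tracked front wheel from the guide line, plus the remaining distance to the stop line. The coordinate conventions, function name, and representation of the stop line as a constant coordinate are assumptions for illustration.

```python
import math

def docking_position(front_wheel, guide_p1, guide_p2, stop_line_y):
    """Signed perpendicular distance of the front wheel from the guide line
    (through guide_p1 and guide_p2) and remaining distance to the stop line,
    all in a common ground-plane coordinate frame (illustrative sketch)."""
    x0, y0 = front_wheel
    x1, y1 = guide_p1
    x2, y2 = guide_p2
    # cross-product form of the point-to-line distance; the sign tells
    # which side of the guide line the aircraft has deviated toward
    numerator = (x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)
    deviation = numerator / math.hypot(x2 - x1, y2 - y1)
    return deviation, stop_line_y - y0
```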
TEXT LINE DETECTION
A system and method for text line detection are described. Examples include detection of symbols in an image received from an image-capturing device. In examples, for each of at least some of the symbols, neighboring symbols within a local region a given distance from the symbol are analyzed in order to determine a direction for a line in the local region. In examples, based on the determined directions for the lines, text lines in the image are identified.
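The neighborhood analysis can be sketched as below: for each symbol, the angles to neighboring symbol centers within a given distance are averaged into an undirected line direction. Folding angles modulo π and simple averaging are assumptions; the patent does not specify the estimator, and naive averaging is only robust for lines far from vertical.

```python
import math

def local_line_direction(symbol, all_symbols, radius):
    """Estimate a text-line direction (radians, in [0, pi)) at `symbol`
    from the directions to neighboring symbol centers within `radius`."""
    x0, y0 = symbol
    angles = []
    for x, y in all_symbols:
        if (x, y) == (x0, y0):
            continue
        if math.hypot(x - x0, y - y0) <= radius:
            # fold into [0, pi): a text line has no preferred direction
            angles.append(math.atan2(y - y0, x - x0) % math.pi)
    return sum(angles) / len(angles) if angles else None
```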
LANGUAGE PROCESSING APPARATUS AND LANGUAGE PROCESSING METHOD
According to an embodiment, a language processing apparatus includes a recognizer and a generator. The recognizer recognizes a first character string of a first language from first data associated with a first time and recognizes a second character string of the first language including a first overlapping character string which overlaps with the first character string from second data associated with a second time later than the first time. The generator applies a production rule to the first character string and the second character string to generate a first resultant character string of the first language including the first overlapping character string.
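The generator's handling of the first overlapping character string resembles suffix/prefix merging of successive recognition results. A minimal sketch of one such production rule (join on the longest suffix-prefix overlap, an assumption made for illustration) is:

```python
def merge_by_overlap(first, second):
    """Join two recognized strings on the longest suffix of `first`
    that is also a prefix of `second` (the overlapping character string)."""
    for k in range(min(len(first), len(second)), 0, -1):
        if first.endswith(second[:k]):
            return first + second[k:]
    return first + second  # no overlap found
```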
INFORMATION PROCESSING APPARATUS, PROGRAM, AND INFORMATION PROCESSING METHOD
In an information processing apparatus, character recognition processing is executed on a character string image including a plurality of characters, and the character string resulting from the character recognition processing is displayed. When a user selects any character in the displayed character string, a correction candidate character for the selected character is displayed, based on a character string in master data managed in a database that differs from the displayed recognition result in a predetermined number of characters, including at least the selected character. When the user selects the displayed correction candidate character, the character string resulting from the character recognition processing is corrected using the selected correction candidate character.
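The candidate lookup could be sketched as follows. The exact matching rule is an assumption: here a master string qualifies if it has the same length as the OCR result and differs from it in at most a predetermined number of positions, one of which is the selected character.

```python
def correction_candidates(recognized, selected_index, master_strings, max_diffs=1):
    """Candidate replacement characters for the selected position, drawn
    from master data strings close to the OCR result (illustrative sketch)."""
    candidates = []
    for master in master_strings:
        if len(master) != len(recognized):
            continue
        # positions where the OCR result and the master string disagree
        diffs = [i for i, (a, b) in enumerate(zip(recognized, master)) if a != b]
        if 0 < len(diffs) <= max_diffs and selected_index in diffs:
            candidates.append(master[selected_index])
    return candidates
```

For example, with the OCR result "1O3" (letter O misread for zero) and master data containing "103", selecting the middle character would surface "0" as a candidate.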
Method and System for Machine Learning Based Classification of Vascular Branches
A method and apparatus for learning based classification of vascular branches to distinguish falsely detected branches from true branches is disclosed. A plurality of overlapping fixed size branch segments are sampled from branches of a detected centerline tree of a target vessel extracted from a medical image of a patient. A plurality of 1D profiles are extracted along each of the overlapping fixed size branch segments. A probability score for each of the overlapping fixed size branch segments is calculated based on the plurality of 1D profiles extracted for each branch segment using a trained deep neural network classifier. The trained deep neural network classifier may be a convolutional neural network (CNN) trained to predict a probability of a branch segment being fully part of a target vessel based on multi-channel 1D input. A final probability score is assigned to each centerline point in the branches of the detected centerline tree based on the probability scores of the overlapping branch segments containing that centerline point. The branches of the detected centerline tree of the target vessel are pruned based on the final probability scores of the centerline points.
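The per-point aggregation and pruning step can be sketched as below, with each centerline point's final score taken as the mean of the classifier scores of the overlapping segments containing it. The segment representation as index ranges and the pruning threshold of 0.5 are assumptions for illustration.

```python
def aggregate_and_prune(segments, segment_scores, num_points, threshold=0.5):
    """segments: (start, end) centerline-point index ranges (end exclusive);
    segment_scores: per-segment probabilities from the classifier.
    Returns per-point final scores and a keep/prune decision per point."""
    totals = [0.0] * num_points
    counts = [0] * num_points
    for (start, end), score in zip(segments, segment_scores):
        for i in range(start, end):
            totals[i] += score
            counts[i] += 1
    # mean score over all overlapping segments covering each point
    final = [t / c if c else 0.0 for t, c in zip(totals, counts)]
    keep = [score >= threshold for score in final]
    return final, keep
```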
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING SYSTEM
An image processing apparatus comprises a generating unit configured to generate a second image by changing a size of a first image such that a region of interest in the second image satisfies a predetermined criterion about a dimension; and an extracting unit configured to extract the region of interest from the second image generated by the generating unit, by applying a Graph-Cut method to the second image using a Graph-Cut coefficient corresponding to the criterion.
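A sketch of the generating unit's resizing step: scale the image uniformly so that the region of interest reaches a target dimension, letting a fixed Graph-Cut coefficient behave consistently across inputs. Scaling against the ROI's longer side, and the function name, are assumptions made for illustration.

```python
def fit_roi_to_criterion(image_size, roi_size, target_dim):
    """Uniform scale factor making the ROI's longer side equal `target_dim`,
    and the resulting (width, height) of the resized second image."""
    scale = target_dim / max(roi_size)
    width, height = image_size
    return scale, (round(width * scale), round(height * scale))
```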
IMAGE INSPECTION METHOD WITH A PLURALITY OF CAMERAS
A digital image inspection method checks products of a printing-material processing machine by recording digital partial images of the print using recording devices and combining the partial images in an image processing computer to form a digital overall image, which causes abutment edges at the overlaps. The computer inspects the digital overall image and transmits the result to a machine control computer. After combining the partial images into the digital overall image, the computer uses edge detection methods to create a new image containing only the detected edges. Using the known positions of the abutment edges of the recording devices, the computer creates a further new image containing only the regions with those abutment edges. The computer overlays the two new images, yielding a resultant image containing only the edges along the abutment edges of the recording devices. The computer applies the resultant image to the digital overall image, defining masking zones in the digital overall image that are not checked by the image inspection.
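The overlay that isolates the abutment edges can be sketched with boolean images: keep only the detected edge pixels that fall inside the known abutment regions. Representing those regions as axis-aligned rectangles in row/column coordinates is an assumption for illustration.

```python
import numpy as np

def abutment_edge_image(edge_image, abutment_regions):
    """Overlay the edge image with the known abutment-edge regions;
    the surviving pixels define the masking zones excluded from inspection."""
    region_image = np.zeros(edge_image.shape, dtype=bool)
    for r0, r1, c0, c1 in abutment_regions:
        region_image[r0:r1, c0:c1] = True
    # keep only edges that lie along abutment edges of the recording devices
    return np.logical_and(edge_image.astype(bool), region_image)
```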
HANDWRITING-BASED PREDICTIVE POPULATION OF PARTIAL VIRTUAL KEYBOARDS
A “Stroke Untangler” composes handwritten messages from handwritten strokes, representing overlapping letters or partial letter segments, that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled, then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide the untangling and composing of the overlapping strokes into characters. In other words, the user draws multiple overlapping strokes; those strokes are then automatically segmented and combined into one or more corresponding characters, and text recognition of the resulting characters is then performed. Further, the segmentation and combination are performed in real time, thereby enabling real-time rendering of the resulting characters in a user interface window. A related drawing mode enables entry of drawings in combination with the handwritten characters.