Patent classifications
G06V10/267
Information processing system for obtaining read data of handwritten characters, training a model based on the characters, and producing a font for printing using the model
An information processing system acquires, using a reading device, a read image from an original on which a handwritten character is written; acquires, based on the read image, a partial image that is a partial region of the read image and a binarized image that expresses the partial image by two tones; performs learning of a learning model based on learning data that uses the partial image as a correct answer image and the binarized image as an input image; acquires print data including a font character; generates conversion image data including a gradation character obtained by inputting the font character to the learning model; and causes an image forming device to form an image based on the generated conversion image data.
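As a rough illustration of the learning step, the sketch below pairs each binarized (two-tone) crop with its original grayscale crop as the training target, then runs a rendered font glyph through the trained model to obtain a gradation character. The model architecture, names such as GradationNet, and the stand-in tensors are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch of the learning step: a small convolutional model maps a
# binarized (two-tone) character crop to its grayscale (gradation) counterpart.
import torch
import torch.nn as nn

class GradationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # grayscale in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = GradationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Binarized crops are the inputs; the original grayscale crops are the targets.
binarized = torch.randint(0, 2, (8, 1, 64, 64)).float()   # stand-in batch
grayscale = torch.rand(8, 1, 64, 64)                       # stand-in targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(binarized), grayscale)
    loss.backward()
    optimizer.step()

# At print time, a rendered (two-tone) font glyph is passed through the model
# to obtain a gradation character for the conversion image data.
font_glyph = torch.randint(0, 2, (1, 1, 64, 64)).float()
gradation_char = model(font_glyph)
```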
BIOLOGICAL SAMPLE ANALYSIS DEVICE
An object of the present disclosure is to provide a technique capable of acquiring an analysis target region and its color information, measuring the solution volume of a specimen, and determining the specimen type, without erroneous extraction of a colored label's color degrading the extraction accuracy of the analysis target region. The biological sample analysis device according to the present disclosure creates a developed view by cutting out partial regions from a color image of a biological sample tube and connecting them along the circumferential direction of the tube, and extracts a detection target region from the developed view (see FIG. 6B).
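The developed-view construction might look roughly like the following sketch, which assumes one color image per rotation step of the tube and concatenates a central vertical strip from each; the strip width and stand-in data are invented for illustration.

```python
# Illustrative sketch (not the patented implementation): building a "developed
# view" by cutting a central vertical strip from each rotation-step image of
# the tube and concatenating the strips along the circumferential axis.
import numpy as np

def developed_view(rotation_images, strip_width=8):
    """rotation_images: list of HxWx3 color images taken while the tube rotates."""
    strips = []
    for img in rotation_images:
        center = img.shape[1] // 2
        half = strip_width // 2
        strips.append(img[:, center - half:center + half, :])
    return np.concatenate(strips, axis=1)  # unrolled tube surface

# Stand-in data: 36 views at 10-degree rotation steps.
views = [np.random.randint(0, 255, (200, 100, 3), dtype=np.uint8) for _ in range(36)]
unrolled = developed_view(views)
print(unrolled.shape)  # (200, 288, 3): height x circumference x color
```

A detection step, such as thresholding for label or solution colors, would then operate on the unrolled surface rather than on the curved tube image.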
METHOD FOR PROCESSING IMAGE, METHOD FOR TRAINING FACE RECOGNITION MODEL, APPARATUS AND DEVICE
A method for processing an image includes: obtaining a face image to be processed, and dividing the face image to be processed into image patches; determining respective importance information of the image patches of the face image to be processed; obtaining a pruning rate of a preset vision transformer (ViT) model; inputting the image patches into the ViT model, and pruning inputs of network layers of the ViT model according to the pruning rate and the respective importance information of the image patches, to obtain a result outputted by the ViT model; and determining feature vectors of the face image to be processed according to the result outputted by the ViT model.
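The pruning step could be sketched as below, assuming per-patch importance scores are already available; the token-norm scoring and the function prune_patches are illustrative stand-ins, not the patent's method.

```python
# A minimal sketch of importance-based patch pruning: keep only the top
# (1 - pruning_rate) fraction of patch tokens, ranked by importance.
import torch

def prune_patches(patch_tokens, importance, pruning_rate):
    """patch_tokens: (N, D) tokens; importance: (N,); keep the top (1-rate) fraction."""
    n_keep = max(1, int(patch_tokens.shape[0] * (1.0 - pruning_rate)))
    keep_idx = torch.topk(importance, n_keep).indices.sort().values
    return patch_tokens[keep_idx], keep_idx

tokens = torch.randn(196, 768)            # 14x14 patches from a face image
importance = tokens.norm(dim=1)           # stand-in importance signal
kept, idx = prune_patches(tokens, importance, pruning_rate=0.5)
print(kept.shape)                          # torch.Size([98, 768])
```

Dropping low-importance patch tokens shrinks the sequence that each network layer must process, which is where the efficiency gain comes from.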
SYSTEMS AND METHODS FOR AUTOMATED PROCESSING OF RETINAL IMAGES
Embodiments disclose systems and methods that aid in screening, diagnosis and/or monitoring of medical conditions. The systems and methods may allow, for example, for automated identification and localization of lesions and other anatomical structures from medical data obtained from medical imaging devices, computation of image-based biomarkers including quantification of dynamics of lesions, and/or integration with telemedicine services, programs, or software.
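One of the image-based biomarkers mentioned, quantification of lesion dynamics, might reduce to something as simple as the area change of a segmented lesion between visits; the masks and the pixel-to-area scale below are stand-ins for illustration only.

```python
# Hedged illustration: lesion area change between two visits, computed from
# binary lesion masks produced by an upstream segmentation step.
import numpy as np

def lesion_area(mask, mm2_per_pixel=0.01):
    return mask.sum() * mm2_per_pixel

baseline_mask = np.zeros((512, 512), dtype=bool)
baseline_mask[200:240, 200:240] = True     # stand-in segmented lesion
followup_mask = np.zeros((512, 512), dtype=bool)
followup_mask[195:245, 195:245] = True     # lesion grew

growth = lesion_area(followup_mask) - lesion_area(baseline_mask)
print(f"lesion growth: {growth:.2f} mm^2")
```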
Artificial Intelligence Enabled Metrology
Methods and systems for implementing artificial intelligence enabled metrology are disclosed. An example method includes segmenting a first image of a structure into one or more classes to form an at least partially segmented image, associating at least one class of the at least partially segmented image with a second image, and performing metrology on the second image based on the association with the at least one class of the at least partially segmented image.
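A hedged sketch of that flow: a class mask from the segmented first image selects where to measure in the second image, and the metrology step measures the feature width there. The thresholding rule and measure_class_width are invented details.

```python
# Illustrative association-then-measure step: the mask of one segmented class
# restricts the rows and columns of the second image used for the measurement.
import numpy as np

def measure_class_width(class_mask, second_image, threshold=128):
    """Mean feature width (in pixels) inside the region the class mask selects."""
    rows = np.where(class_mask.any(axis=1))[0]
    widths = []
    for r in rows:
        line = second_image[r][class_mask[r]]
        widths.append((line > threshold).sum())   # bright pixels = feature
    return float(np.mean(widths)) if widths else 0.0

mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 20:80] = True                         # one segmented class
image2 = np.random.randint(0, 255, (100, 100))    # stand-in second image
print(measure_class_width(mask, image2))
```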
Target tracking method for panoramic video, readable storage medium, and computer device
The present application is applicable to the field of video processing. Provided are a target tracking method for a panoramic video, a readable storage medium, and a computer device. The method comprises: using a tracker to track and detect a target to be tracked to obtain a predicted tracking position of said target in the next panoramic video frame, calculating the reliability of the predicted tracking position, and using an occlusion detector to calculate an occlusion score of the predicted tracking position; determining whether the reliability of the predicted tracking position is greater than a preset reliability threshold, and determining whether the occlusion score of the predicted tracking position is greater than a preset occlusion score threshold; and using a corresponding tracking strategy according to the reliability and the occlusion score. By means of the present application, whether a tracking failure is caused by the loss of a target or by occlusion can be determined, such that a corresponding tracking recovery strategy can be used and tracking can be automatically recovered when tracking fails, thereby achieving continuous long-term tracking. In addition, the method of the present application has low computational complexity and good real-time performance.
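The strategy selection implied by the abstract can be written out as a small decision function; the threshold values and strategy names below are illustrative assumptions, not values from the application.

```python
# Sketch of the reliability/occlusion decision logic: high reliability means
# keep tracking; low reliability is attributed either to occlusion (wait and
# re-detect locally) or to target loss (re-detect over the whole frame).
RELIABILITY_THRESHOLD = 0.6
OCCLUSION_THRESHOLD = 0.5

def choose_strategy(reliability, occlusion_score):
    if reliability > RELIABILITY_THRESHOLD:
        return "continue_tracking"
    if occlusion_score > OCCLUSION_THRESHOLD:
        return "wait_for_reappearance"   # occluded: hold position, re-detect nearby
    return "global_redetection"          # lost: search the whole panoramic frame

print(choose_strategy(0.8, 0.1))  # continue_tracking
print(choose_strategy(0.3, 0.7))  # wait_for_reappearance
print(choose_strategy(0.3, 0.2))  # global_redetection
```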
Method and apparatus for measuring endolymphatic hydrops ratio of inner ear organ using artificial neural network
Provided are a method and an apparatus for measuring an endolymphatic hydrops ratio of inner ear organs using an artificial neural network. The method of measuring an endolymphatic hydrops ratio includes obtaining a plurality of frame images captured of the inner ear organs, obtaining a plurality of pieces of mask data corresponding to each of the frame images by inputting the frame images into a neural network, clustering the pieces of mask data by inner ear organ and obtaining, according to certain conditions, a representative image for each inner ear organ, and overlaying the representative images for the inner ear organs on a target image synthesized from the plurality of frame images, so as to measure the endolymphatic hydrops ratio.
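A minimal sketch of the final measurement, assuming the network yields one mask for the whole organ and one for the endolymphatic space, with the ratio taken as endolymph area over total organ area:

```python
# Illustrative ratio computation from two binary masks; the mask shapes and
# regions are stand-ins, not clinical data.
import numpy as np

def hydrops_ratio(organ_mask, endolymph_mask):
    organ_px = organ_mask.sum()
    return (endolymph_mask & organ_mask).sum() / organ_px if organ_px else 0.0

organ = np.zeros((128, 128), dtype=bool)
organ[30:90, 30:90] = True                  # stand-in whole-organ mask
endolymph = np.zeros((128, 128), dtype=bool)
endolymph[40:70, 40:70] = True              # stand-in endolymphatic region

print(f"endolymphatic hydrops ratio: {hydrops_ratio(organ, endolymph):.2%}")
```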
SYSTEMS AND METHODS FOR ANATOMICAL SEGMENTATION
A method includes receiving a three-dimensional image dataset of a surgical site of a patient. The method also includes segmenting one or more anatomical features of the surgical site based on the three-dimensional image dataset. The method also includes receiving a two-dimensional image of the surgical site of the patient and registering the two-dimensional image to an image from the three-dimensional image dataset. The method also includes displaying a two-dimensional representation of the segmented one or more anatomical features based on the registered two-dimensional image and the image from the three-dimensional image dataset.
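As a hedged sketch of the display step, once registration reduces to a 2-D affine transform between the dataset slice and the two-dimensional image, the segmentation mask can be warped into the two-dimensional image's frame for overlay; every value below is a stand-in.

```python
# Illustrative overlay step: warp a binary segmentation mask from the 3-D
# dataset slice into the registered 2-D image's coordinates (pull-warp with
# nearest-neighbor sampling).
import numpy as np

def warp_mask(mask, affine):
    """Nearest-neighbor warp of a binary mask by a 2x3 affine (output <- input)."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    inv = np.linalg.inv(np.vstack([affine, [0, 0, 1]]))   # invert for pull-warp
    ys, xs = np.mgrid[0:h, 0:w]
    src = inv @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy = np.round(src[0]).astype(int), np.round(src[1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = mask[sy[ok], sx[ok]]
    return out

seg = np.zeros((256, 256), dtype=bool)
seg[100:150, 100:150] = True                 # segmented anatomy in the 3-D slice
affine = np.array([[1.0, 0.0, 12.0],         # stand-in registration result
                   [0.0, 1.0, -8.0]])
overlay = warp_mask(seg, affine)             # mask in the 2-D image's coordinates
print(overlay.sum())
```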
Method and system for segmenting touching text lines in an image of a uchen-script Tibetan historical document
A method and system for segmenting touching text lines in an image of a uchen-script Tibetan historical document are provided. The method includes: first obtaining a binary image of a uchen-script Tibetan historical document after layout analysis; detecting local baselines in the binary image, to generate a local baseline information set; detecting and segmenting a touching region in the binary image according to the local baseline information set, to generate a touching-region-segmented image; allocating connected components in the touching-region-segmented image to corresponding lines, to generate a text line allocation result; and splitting text lines in the touching-region-segmented image according to the text line allocation result, to generate a line-segmented image. In the present disclosure, touching text lines in a Tibetan historical document can be effectively segmented, and text line segmentation efficiency of the Tibetan historical document is improved.
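The allocation step alone might look like the sketch below, which assigns each connected component to the text line whose local baseline is vertically closest to the component's centroid; the baseline positions and centroids are invented examples.

```python
# Illustrative line allocation: nearest-baseline assignment of connected
# components by vertical distance from each component centroid.
import numpy as np

def allocate_components(component_centroids, baseline_ys):
    """Return, for each (y, x) centroid, the index of the nearest baseline."""
    baseline_ys = np.asarray(baseline_ys)
    return [int(np.argmin(np.abs(baseline_ys - cy))) for cy, _ in component_centroids]

baselines = [40, 95, 150]                         # detected local baselines (y)
centroids = [(35, 10), (44, 80), (98, 30), (160, 55)]
print(allocate_components(centroids, baselines))  # [0, 0, 1, 2]
```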
DEVICE AND A METHOD FOR MERGING CANDIDATE AREAS
A device and method merge a first candidate area relating to a candidate feature in a first image and a second candidate area relating to a candidate feature in a second image. The first and second images have an overlapping region, and at least a portion of the first and second candidate areas are located in the overlapping region. An image overlap size is determined indicating a size of the overlapping region of the first and second images, and a candidate area overlap ratio is determined indicating a ratio of overlap between the first and second candidate areas. A merging threshold is then determined based on the image overlap size, and, on condition that the candidate area overlap ratio is larger than the merging threshold, the first candidate area and the second candidate area are merged, thereby forming a merged candidate area.
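A sketch with invented numbers: the overlap ratio of two candidate boxes is compared against a merging threshold derived from the image overlap size (here, a larger image overlap demands a larger ratio before merging, purely as an illustrative mapping).

```python
# Illustrative merging rule: overlap ratio of two axis-aligned boxes versus a
# threshold that grows with the size of the images' overlapping region.
def box_area(b):
    return max(0, b[2] - b[0]) * max(0, b[3] - b[1])

def overlap_ratio(a, b):
    inter = (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))
    ia = box_area(inter)
    return ia / min(box_area(a), box_area(b)) if ia else 0.0

def merging_threshold(image_overlap_px, scale=1e-6):
    return min(0.9, 0.2 + scale * image_overlap_px)   # invented mapping

def maybe_merge(a, b, image_overlap_px):
    if overlap_ratio(a, b) > merging_threshold(image_overlap_px):
        return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))
    return None

box1 = (100, 100, 200, 200)   # candidate area in image 1 (x0, y0, x1, y1)
box2 = (150, 120, 240, 210)   # candidate area in image 2
print(maybe_merge(box1, box2, image_overlap_px=200 * 1080))
```

Tying the threshold to the image overlap size avoids merging detections that coincide only because the two cameras share a large field of view.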