Patent classifications
G06V10/7796
TWO DIMENSIONAL HILBERT HUANG TRANSFORM REAL-TIME IMAGE PROCESSING SYSTEM WITH PARALLEL COMPUTATION CAPABILITIES
An apparatus, computer program product, and method of analyzing two-dimensional input data. The system, known as Syneren Signal and Image Enhancement Technology (SIETECH), can be implemented in software or in Field Programmable Gate Array (FPGA) hardware. Some embodiments of the present invention pertain to apparatuses, methods, and computer programs configured to cause the central processor to pass the input data to multi-thread processors, wherein each data point is mapped at the thread level and the local lower and upper bounds are constructed simultaneously based on an order-statistics window.
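The per-pixel envelope step described above can be sketched as follows: each pixel's local lower and upper bounds are taken as the min and max of an order-statistics window centered on it, and every pixel's computation is independent, which is what makes the thread-level mapping possible. This is a minimal illustration, not the patent's implementation; the function name and window size are assumptions.

```python
import numpy as np

def local_envelopes(image, window=3):
    """Return (lower, upper) envelopes via a sliding order-statistic window."""
    r = window // 2
    padded = np.pad(image, r, mode="edge")
    h, w = image.shape
    lower = np.empty_like(image, dtype=float)
    upper = np.empty_like(image, dtype=float)
    for i in range(h):          # in the FPGA/multi-thread design, each
        for j in range(w):      # (i, j) would map to one thread
            patch = padded[i:i + window, j:j + window]
            lower[i, j] = patch.min()
            upper[i, j] = patch.max()
    return lower, upper

img = np.array([[1., 5., 2.],
                [4., 3., 6.],
                [7., 2., 1.]])
lo, up = local_envelopes(img)
```

Because every output pixel depends only on its own window, the two nested loops parallelize trivially, matching the abstract's claim that the bounds are "constructed simultaneously."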
AUTOMATED CLASSIFICATION BASED ON PHOTO-REALISTIC IMAGE/MODEL MAPPINGS
Techniques are provided for increasing the accuracy of automated classifications produced by a machine learning engine. Specifically, the classification produced by a machine learning engine for one photo-realistic image is adjusted based on the classifications produced by the machine learning engine for other photo-realistic images that correspond to the same portion of a 3D model that has been generated based on the photo-realistic images. Techniques are also provided for using the classifications of the photo-realistic images that were used to create a 3D model to automatically classify portions of the 3D model. The classifications assigned to the various portions of the 3D model in this manner may also be used as a factor for automatically segmenting the 3D model.
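The adjustment idea can be sketched numerically: pool the class scores of all photo-realistic images that map to the same 3D-model portion, and move each image's label toward the portion-level consensus. This is an illustrative reduction of the technique (averaging plus argmax), not the patent's actual adjustment rule; all names are hypothetical.

```python
import numpy as np

def adjust_labels(scores_by_image, portion_of_image):
    """scores_by_image: {image_id: class-score vector};
    portion_of_image: {image_id: 3D-model portion id}."""
    # pool scores of all images that see the same model portion
    pooled = {}
    for img, portion in portion_of_image.items():
        pooled.setdefault(portion, []).append(scores_by_image[img])
    consensus = {p: np.mean(v, axis=0) for p, v in pooled.items()}
    # adjusted label = argmax of the portion-level consensus
    return {img: int(np.argmax(consensus[portion]))
            for img, portion in portion_of_image.items()}

scores = {"a": np.array([0.9, 0.1]),   # alone, says class 0
          "b": np.array([0.2, 0.8]),   # says class 1
          "c": np.array([0.3, 0.7])}   # says class 1
portions = {"a": "wall", "b": "wall", "c": "wall"}
labels = adjust_labels(scores, portions)
```

Image "a" would have been classified alone as class 0, but because the other views of the same model portion disagree, the consensus flips it, which is exactly the accuracy-improving effect the abstract describes.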
GUIDED MACHINE-LEARNING TRAINING USING A THIRD PARTY CLOUD-BASED SYSTEM
Systems and methods may enable a user who may not have any experience in machine learning to effectively train new models for use in object recognition applications of a device. Embodiments can include, for example, analyzing training data comprising a set of images to determine a set of metrics indicative of a suitability of the training data in machine-learning training for object recognition, and providing an indication of the set of metrics to a user. Additionally or alternatively, an intermediate model can be used, after a first portion of the machine-learning training is conducted, to determine the effectiveness of a remaining portion of negative samples (images without the object) in the training data or to find other negative samples outside of the training data. Identifying and utilizing effective negative samples in this manner can improve the effectiveness of the training.
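The hard-negative idea in the abstract can be sketched as: after partial training, the intermediate model scores the remaining negative samples, and only those it still mistakes for the object (high scores) are kept as effective negatives. The threshold and toy scoring function below are illustrative assumptions, not from the patent.

```python
def effective_negatives(score_fn, negatives, threshold=0.5):
    """Keep negatives the intermediate model scores above `threshold`."""
    return [n for n in negatives if score_fn(n) > threshold]

# toy intermediate model: score = fraction of "object-like" pixels
intermediate = lambda img: sum(img) / len(img)
negs = [[0, 0, 0, 1],   # easy negative, score 0.25: model already rejects it
        [1, 1, 0, 1],   # hard negative, score 0.75: still fools the model
        [1, 1, 1, 1]]   # very hard negative, score 1.0
hard = effective_negatives(intermediate, negs)
```

Easy negatives the intermediate model already rejects contribute little gradient signal; filtering them out concentrates the remaining training on the confusable cases, which is the effectiveness gain the abstract claims.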
TRAINING DATA FOR MACHINE-BASED OBJECT RECOGNITION
Systems and methods may enable a user who may not have any experience in machine learning to effectively train new models for use in object recognition applications of a device. Embodiments can include, for example, analyzing training data comprising a set of images to determine a set of metrics indicative of a suitability of the training data in machine-learning training for object recognition, and providing an indication of the set of metrics to a user. Additionally or alternatively, an intermediate model can be used, after a first portion of the machine-learning training is conducted, to determine the effectiveness of a remaining portion of negative samples (images without the object) in the training data or to find other negative samples outside of the training data. Identifying and utilizing effective negative samples in this manner can improve the effectiveness of the training.
Training system for infield training of a vision-based object detector
Described is a training system for training a vision-based object detector. The system is configured to run an object detector on an image of a cleared scene to detect objects in the cleared scene. The object detector includes a support vector machine (SVM) or similar classifier with a feature model to generate an SVM score for object features and a spatial bias threshold to generate augmented object scores. The system designates detected objects in the cleared scene as false detections and, based on those designations, updates at least one of the feature model and the spatial bias threshold to reclassify the false detections as background. The updated feature model or updated spatial bias threshold is then stored for use in object detection.
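The infield update loop can be sketched in a few lines: any detection fired on a cleared (object-free) scene is by definition false, so the per-location spatial bias threshold is raised just above the offending augmented score. The data layout, margin, and function name are illustrative assumptions, not the patent's implementation.

```python
def update_spatial_bias(thresholds, cleared_scene_detections, margin=0.01):
    """thresholds: {location: score threshold};
    cleared_scene_detections: [(location, augmented_score), ...]."""
    for loc, score in cleared_scene_detections:
        # raise the threshold so this false detection is re-labeled background
        thresholds[loc] = max(thresholds.get(loc, 0.0), score + margin)
    return thresholds

th = {"gate": 0.5, "road": 0.5}
false_hits = [("gate", 0.8), ("gate", 0.6)]   # detector fired on empty scene
th = update_spatial_bias(th, false_hits)
```

After the update, the "gate" location requires a score above the worst false hit before a detection is accepted, while untouched locations keep their original thresholds.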
IMAGE PROCESSING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM
An image processing apparatus for reducing the influence of a false detection in processing for detecting a predetermined object, and a control method thereof, are provided. An image processing apparatus, comprising: a detecting unit for detecting a region of a predetermined object from an image; a determining unit for determining whether the detection by the detecting unit is a false detection; a processing unit for performing processing relating to the predetermined object on the region detected by the detecting unit; and a controlling unit for controlling, based on a determination result by the determining unit, execution of the processing by the processing unit on the region detected by the detecting unit.
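The control flow among the four units can be sketched as a simple gate: the processing step runs on a detected region only when the determining step does not flag it as a false detection. All function names and the toy false-detection rule (tiny boxes are false) are illustrative.

```python
def process_regions(regions, is_false_detection, process):
    """Apply `process` to each detected region unless it is judged false."""
    results = []
    for region in regions:
        if is_false_detection(region):
            continue            # controlling unit suppresses the processing
        results.append(process(region))
    return results

# toy example: regions are (x, y, w, h); boxes under 16 px^2 are deemed false
regions = [(0, 0, 50, 50), (10, 10, 2, 2)]
too_small = lambda r: r[2] * r[3] < 16
processed = process_regions(regions, too_small, lambda r: r[2] * r[3])
```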
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
Before dimension reduction that preserves the local data distribution as neighborhood data is performed, distances between the data points to be subjected to the dimension reduction are calculated, and a parameter (the neighbor count of the k-nearest-neighbor algorithm or the size of a hypersphere) that determines the neighborhood data is determined for each data point to be subjected to the dimension reduction. Thereafter, the dimension reduction is performed on the target data based on the determined parameter.
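The per-point parameter selection can be sketched as follows: compute pairwise distances, then take each point's neighborhood as everything inside a hypersphere whose radius scales with that point's local density. The scaling rule (radius = 1.5 × distance to nearest neighbor) is an assumption for illustration, not the patent's rule.

```python
import numpy as np

def per_point_neighbors(X, radius_scale=1.5):
    """Return, for each row of X, the indices of its hypersphere neighborhood."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    # hypersphere radius adapts to local density: scale * nearest-neighbor gap
    radii = radius_scale * d.min(axis=1)
    return [np.where(d[i] <= radii[i])[0].tolist() for i in range(len(X))]

X = np.array([[0.0, 0.0], [1.0, 0.0], [1.2, 0.0], [10.0, 0.0]])
nbrs = per_point_neighbors(X)
```

Dense points get small neighborhoods and isolated points get large ones, which is the motivation for choosing the parameter per datum rather than globally before running the reduction.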
DIGITAL HISTOPATHOLOGY AND MICRODISSECTION
A computer implemented method of generating at least one shape of a region of interest in a digital image is provided. The method includes obtaining, by an image processing engine, access to a digital tissue image of a biological sample; tiling, by the image processing engine, the digital tissue image into a collection of image patches; identifying, by the image processing engine, a set of target tissue patches from the collection of image patches as a function of pixel content within the collection of image patches; assigning, by the image processing engine, each target tissue patch of the set of target tissue patches an initial class probability score indicating a probability that the target tissue patch falls within a class of interest, the initial class probability score generated by a trained classifier executed on each target tissue patch; generating, by the image processing engine, a first set of tissue region seed patches by identifying target tissue patches having initial class probability scores that satisfy a first seed region criterion, the first set of tissue region seed patches comprising a subset of the set of target tissue patches; generating, by the image processing engine, a second set of tissue region seed patches by identifying target tissue patches having initial class probability scores that satisfy a second seed region criterion, the second set of tissue region seed patches comprising a subset of the set of target tissue patches; calculating, by the image processing engine, a region of interest score for each patch in the second set of tissue region seed patches as a function of initial class probability scores of neighboring patches of the second set of tissue region seed patches and a distance to patches within the first set of tissue region seed patches; and generating, by the image processing engine, one or more region of interest shapes by grouping neighboring patches based on their region of interest scores.
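The two-tier seeding steps above can be compressed into a sketch over a one-dimensional strip of patch probabilities: strict seeds satisfy the first criterion, looser seeds the second, and each loose seed is scored by its neighbors' probabilities penalized by distance to the nearest strict seed. The thresholds and the scoring formula are illustrative assumptions, not the patent's.

```python
def roi_scores(probs, strict=0.9, loose=0.6):
    """probs: per-patch initial class probability scores along one strip."""
    first = [i for i, p in enumerate(probs) if p >= strict]   # first seed set
    second = [i for i, p in enumerate(probs) if p >= loose]   # second seed set
    scores = {}
    for i in second:
        # neighbor probabilities, penalized by distance to a strict seed
        nbr = [probs[j] for j in (i - 1, i + 1) if 0 <= j < len(probs)]
        dist = min(abs(i - s) for s in first) if first else len(probs)
        scores[i] = sum(nbr) / len(nbr) - 0.1 * dist
    return scores

probs = [0.1, 0.7, 0.95, 0.7, 0.1]
scores = roi_scores(probs)
```

Patches near high-confidence seeds score well even if their own probability is middling, which is how the method grows region-of-interest shapes outward from the strict seeds.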
Learning apparatus, learning method, and recording medium
The learning apparatus classifies target domain data into (N-c) classes based on unique features of the target domain data, classifies source domain data into N classes based on unique features of the source domain data, and classifies the target domain data and the source domain data into the N classes based on common features of the target domain data and the source domain data. Also, the learning apparatus calculates a first distance between the common features of the target domain data and the source domain data, and calculates a second distance between the unique features of the target domain data and the source domain data. Next, the learning apparatus updates parameters of a common feature extraction unit based on the first distance, and updates parameters of a target domain feature extraction unit and a source domain feature extraction unit based on the second distance.
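The two distances driving the parameter updates can be made concrete with a numeric sketch, using batch-centroid (mean-feature) distance as a stand-in for whatever distance the patent actually uses; that choice, and all variable names, are assumptions.

```python
import numpy as np

def centroid_distance(a, b):
    """L2 distance between the mean feature vectors of two batches."""
    return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

# common features should end up close across domains, so the common
# extractor is updated to shrink this first distance...
common_tgt = np.array([[1.0, 0.0], [1.2, 0.2]])
common_src = np.array([[0.9, 0.1], [1.1, 0.1]])
first_distance = centroid_distance(common_tgt, common_src)

# ...while the domain-unique features should stay apart, so the per-domain
# extractors are updated based on this second distance
unique_tgt = np.array([[5.0, 0.0]])
unique_src = np.array([[0.0, 5.0]])
second_distance = centroid_distance(unique_tgt, unique_src)
```

A small first distance with a large second distance is the configuration the updates push toward: shared structure aligned, domain-specific structure separated.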
Sensing device for medical facilities
A medical system may utilize a modular and extensible sensing device to derive a two-dimensional (2D) or three-dimensional (3D) human model for a patient in real-time based on images of the patient captured by a sensor such as a digital camera. The 2D or 3D human model may be visually presented on one or more devices of the medical system and used to facilitate a healthcare service provided to the patient. In examples, the 2D or 3D human model may be used to improve the speed, accuracy and consistency of patient positioning for a medical procedure. In examples, the 2D or 3D human model may be used to enable unified analysis of the patient's medical conditions by linking different scan images of the patient through the 2D or 3D human model. In examples, the 2D or 3D human model may be used to facilitate surgical navigation, patient monitoring, process automation, and/or the like.
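The patient-positioning use case can be sketched minimally: keypoints from the camera-derived 2D human model are compared against a planned pose, yielding a suggested shift. The keypoint choice and the averaging rule are hypothetical illustrations, not the patent's method.

```python
def positioning_offset(detected, planned):
    """Average (dx, dy) between detected and planned 2D keypoints."""
    dx = sum(d[0] - p[0] for d, p in zip(detected, planned)) / len(detected)
    dy = sum(d[1] - p[1] for d, p in zip(detected, planned)) / len(detected)
    return dx, dy

detected = [(100, 50), (100, 150)]   # e.g. shoulder keypoints from the model
planned = [(90, 50), (90, 150)]      # where the plan expects them
shift = positioning_offset(detected, planned)
```

Recomputing this offset on every camera frame is one way such a model could deliver the real-time speed and consistency gains in patient positioning that the abstract claims.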