Patent classifications
G06V20/69
AUTOMATED ASSESSMENT OF ENDOSCOPIC DISEASE
The application relates to devices and methods for analysing a colonoscopy video or a portion thereof, and for assessing the severity of ulcerative colitis in a subject by analysing a colonoscopy video obtained from the subject. Analysing a colonoscopy video comprises using a first deep neural network classifier to classify image data from the subject colonoscopy video or portion thereof into at least a first severity class (more severe endoscopic lesions) and a second severity class (less severe endoscopic lesions), wherein the first deep neural network has been trained at least in part in a weakly supervised manner using training image data from a plurality of training colonoscopy videos, the training image data comprising multiple sets of consecutive frames from the plurality of training colonoscopy videos, wherein frames in a set have the same severity class label. Devices and methods for providing a tool for analysing colonoscopy videos are also described.
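The weak-supervision scheme described in this abstract, in which every frame in a set of consecutive frames inherits that set's severity label and per-frame classifier outputs are aggregated back to a video-level call, can be sketched minimally as follows. The function names and the majority-vote aggregation rule are illustrative assumptions, not details taken from the application:

```python
import numpy as np

def propagate_set_labels(frame_sets, set_labels):
    """Weak supervision: every frame in a set inherits the set's severity label."""
    frames, labels = [], []
    for frame_set, label in zip(frame_sets, set_labels):
        frames.extend(frame_set)
        labels.extend([label] * len(frame_set))
    return np.array(frames), np.array(labels)

def aggregate_frame_predictions(frame_probs, threshold=0.5):
    """Video-level severity: majority vote over per-frame classifier outputs
    (one assumed aggregation rule; the application does not specify one)."""
    votes = (np.asarray(frame_probs) >= threshold).astype(int)
    return int(votes.mean() >= 0.5)
```

For example, two sets of consecutive frames labelled "more severe" (1) and "less severe" (0) yield five weakly labelled training frames, and a video whose frames mostly score above threshold is called severe at the video level.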
Method of Diagnosis
The invention relates to methods for determining the stage of a disease, particularly an ocular neurodegenerative disease such as Alzheimer's, Parkinson's, Huntington's and glaucoma, comprising the steps of identifying the status of microglial cells in the retina and relating that status to disease stage. Methods for identifying cells in the eye are also provided, as are labelled markers and the use thereof.
PATHOLOGICAL DIAGNOSIS ASSISTING METHOD USING AI, AND ASSISTING DEVICE
Diagnosis is assisted by acquiring microscopic observation image data while specifying the position, classifying the image data into histological types using AI, and reconstructing the classification results across the whole lesion. Provided is a pathological diagnosis assisting method that offers assistance technology for performing pathological diagnosis efficiently and with satisfactory accuracy using HE staining, the stain routinely used by pathologists. Also provided are a pathological diagnosis assisting system, a pathological diagnosis assisting program, and a pre-trained model.
METHOD AND APPARATUS FOR MEASURING MOTILITY OF CILIATED CELLS IN RESPIRATORY TRACT
The present disclosure relates to a method and an apparatus for measuring the motility of ciliated cells in a respiratory tract. The method includes the operations of: acquiring image data including a plurality of frames of respiratory tract organoids; identifying positions of ciliated cells by performing motion-contrast imaging on the image data; when a region of interest (ROI) related to the positions of the ciliated cells is selected, measuring a ciliary beat frequency (CBF) related to the motility of the cilia included in the selected region of interest using cross-correlation between the plurality of frames; and displaying the cilia included in the region of interest according to a preset display method based on the range of the measured ciliary beat frequency.
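The frequency measurement described above can be illustrated with a minimal sketch. The abstract specifies cross-correlation between frames; the version below is a simplified spectral stand-in that pools the ROI into one intensity time series and reads the dominant peak of its power spectrum, so the function name and this substitution are assumptions rather than the patented method:

```python
import numpy as np

def estimate_cbf(roi_stack, fps):
    """Estimate ciliary beat frequency (Hz) from an ROI image stack.

    roi_stack: array of shape (n_frames, height, width); fps: frames/second.
    Sketch: average the ROI intensity per frame, remove the DC component,
    and return the frequency of the strongest power-spectrum peak.
    """
    signal = roi_stack.reshape(roi_stack.shape[0], -1).mean(axis=1)
    signal = signal - signal.mean()
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
```

A synthetic ROI oscillating at 12 Hz, sampled at 100 frames per second, recovers a CBF of 12 Hz to within the spectral resolution of the recording.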
Identifying the quality of the cell images acquired with digital holographic microscopy using convolutional neural networks
A system for performing adaptive focusing of a microscopy device comprises a microscopy device configured to acquire microscopy images depicting cells and one or more processors executing instructions for performing a method that includes extracting pixels from the microscopy images. Each set of pixels corresponds to an independent cell. The method further includes using a trained classifier to assign one of a plurality of image quality labels to each set of pixels indicating the degree to which the independent cell is in focus. If the image quality labels corresponding to the sets of pixels indicate that the cells are out of focus, a focal length adjustment for adjusting focus of the microscopy device is determined using a trained machine learning model. Then, executable instructions are sent to the microscopy device to perform the focal length adjustment.
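The patent's per-cell focus grading uses a trained classifier; as a rough stand-in, the classical variance-of-Laplacian focus measure below shows the kind of signal such a classifier learns, with entirely hypothetical label names and thresholds:

```python
import numpy as np

def laplacian_focus_score(cell_pixels):
    """Focus proxy: variance of a discrete Laplacian response.
    In-focus cells show stronger high-frequency content, hence higher variance."""
    img = np.asarray(cell_pixels, dtype=float)
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def quality_label(score, thresholds=(5.0, 50.0)):
    """Map a focus score to one of three hypothetical quality labels."""
    if score < thresholds[0]:
        return "out_of_focus"
    if score < thresholds[1]:
        return "partially_in_focus"
    return "in_focus"
```

A flat (featureless, defocused) patch scores near zero, while a high-contrast patch scores high; in the patent's pipeline, an "out of focus" verdict across cells would trigger the learned focal length adjustment.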
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
An information processing device (200A, 200B, and 200C) according to the present disclosure includes a control unit (220, 220B, and 220C). The control unit acquires a captured image of a target imaged by a sensor; the captured image is obtained from the reflected light of light emitted toward the target from a plurality of light sources arranged at different positions. The control unit extracts a flat region from the captured image based on a luminance value of the captured image, and calculates shape information regarding the shape of the target's surface based on information regarding the sensor and the flat region of the captured image.
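The flat-region extraction step can be sketched with a simple luminance-based rule. The abstract only says the region is extracted "based on a luminance value"; the local-variance criterion and its threshold below are assumptions chosen for illustration:

```python
import numpy as np

def flat_region_mask(luminance, win=3, var_thresh=10.0):
    """Flat-region sketch: mark pixels whose local luminance variance
    within a win x win neighbourhood falls below var_thresh."""
    img = np.asarray(luminance, dtype=float)
    h, w = img.shape
    r = win // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            mask[y, x] = patch.var() < var_thresh
    return mask
```

On a test image with two uniform halves, interior pixels of each half are flagged flat while pixels straddling the luminance step are not, which is the behaviour the shape-calculation stage would rely on.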
LABEL FREE CELL SORTING
Provided herein are techniques for label free cell sorting. The systems and methods provided herein may use machine learning based image classification techniques to identify cells of interest within a sample of cells. The cells of interest may then be separated from the sample using mechanical, pneumatic, piezoelectric, and/or electronic devices.
System and method to simultaneously track multiple organisms at high resolution
A microscopy system includes multiple cameras working together to capture image data of a sample containing a group of organisms distributed over a wide area, under the influence of an excitation instrument. A first processor is coupled to each camera to process the image data that camera captures. Outputs from the multiple first processors are aggregated and streamed serially to a second processor for tracking the organisms. With the multiple cameras capturing images of the sample configured with 50% or more overlap, the system can track the organisms in 3D through photogrammetry.
Fully automatic, template-free particle picking for electron microscopy
Systems and methods are described for the fully automatic, template-free locating and extraction of a plurality of two-dimensional projections of particles in a micrograph image. A set of reference images is automatically assembled from a micrograph image by analyzing the image data in each of a plurality of partially overlapping windows and identifying a subset of windows whose image data satisfies at least one statistical criterion relative to the other windows. A normalized cross-correlation is then calculated between the image data in each reference image and the image data in each of a plurality of query image windows. Based on this cross-correlation analysis, a plurality of locations in the micrograph is automatically identified as each containing a two-dimensional projection of a distinct instance of a particle of a first type. The two-dimensional projections identified in the micrograph are then used to determine the three-dimensional structure of the particle.
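The core matching step, normalized cross-correlation between a reference image and sliding query windows, can be sketched as follows. This brute-force scan is a minimal illustration (real particle pickers use FFT-accelerated correlation and many references), and the function names are assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(micrograph, reference):
    """Slide the reference over the micrograph; return the top-left corner
    of the query window with the highest normalized cross-correlation."""
    H, W = micrograph.shape
    h, w = reference.shape
    best_score, best_pos = -2.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            score = ncc(micrograph[y:y + h, x:x + w], reference)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos
```

Embedding a reference patch into a noise background and scanning for it recovers the embedding location, since the exact copy correlates at the maximum value of 1. The normalization (mean removal and division by the patch energies) is what makes the score insensitive to the brightness and contrast variations typical of micrographs.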