MULTIPLEXED SUPER-RESOLUTION LABEL-FREE NONLINEAR MICROSCOPY
Super-resolution label-free microscopy is provided using multiplexed, temporally modulated acquisition patterns of emission point spread functions (“PSFs”). Supercontinuum ultrafast pulses can be used to enhance nonlinear processes, such as autofluorescence and harmonic generation, in order to provide super-resolution imaging of nonlinear label-free signals. Images can be reconstructed using various reconstruction techniques, including pixel reassignment, wavelet reconstruction, and deep learning model-based reconstructions.
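Of the reconstruction techniques listed, pixel reassignment is the most mechanical to illustrate: the signal from each detector pixel is shifted by a fraction (typically half) of its offset from the detector centre before all shifted images are summed. A minimal NumPy sketch, assuming a 4-D image-scanning dataset indexed (scan-y, scan-x, detector-y, detector-x); the function name, data layout, and sign convention are illustrative, not taken from the patent:

```python
import numpy as np

def pixel_reassign(data: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Pixel-reassignment reconstruction sketch.

    data: (Sy, Sx, Dy, Dx) array -- one small detector frame per scan position.
    The image seen by each detector pixel is shifted by alpha times that
    pixel's offset from the detector centre, then all shifted images are summed.
    """
    Sy, Sx, Dy, Dx = data.shape
    cy, cx = Dy // 2, Dx // 2          # detector centre pixel
    out = np.zeros((Sy, Sx))
    for dy in range(Dy):
        for dx in range(Dx):
            shift = (int(round(alpha * (dy - cy))),
                     int(round(alpha * (dx - cx))))
            # np.roll conserves total intensity while reassigning the signal
            out += np.roll(data[:, :, dy, dx], shift, axis=(0, 1))
    return out
```

Because `np.roll` only relocates signal, the reconstruction conserves the total detected intensity, which is a useful sanity check on real data.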
LASER SCANNING SYSTEM
A method of scanning a laser over a field of view, the method comprising: providing a laser to produce a laser beam; rasterizing the laser beam over a first sub-area of the field of view; deflecting the laser beam to a second sub-area of the field of view; rasterizing the laser beam over the second sub-area; and capturing image information produced by the laser beam so that, for each sub-area of the field of view, the rasterized laser beam defines a plurality of image segments. For each segment, an image correction is calculated and applied to the laser according to the calculated correction for that segment. A corresponding system is also provided.
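The scan geometry described above can be sketched as code: rasterize each sub-area in turn, deflecting between sub-areas, and chop each sub-area's scan path into segments to which a correction can later be applied. This is a minimal sketch under assumed rectangular sub-areas and a fixed segment length; the function names and serpentine raster pattern are illustrative, not from the patent:

```python
from typing import Iterator, List, Tuple
import numpy as np

Point = Tuple[float, float]

def raster_points(x0: float, y0: float, width: float, height: float,
                  step: float) -> Iterator[Point]:
    """Serpentine raster over one rectangular sub-area (illustrative geometry)."""
    xs = np.arange(x0, x0 + width, step)
    ys = np.arange(y0, y0 + height, step)
    for row, y in enumerate(ys):
        line = xs if row % 2 == 0 else xs[::-1]   # alternate scan direction
        for x in line:
            yield (float(x), float(y))

def scan_field(sub_areas, step: float, segment_len: int) -> List[List[Point]]:
    """Rasterize each sub-area in turn (deflecting between them) and chop each
    sub-area's path into fixed-length segments, so that a correction can be
    computed and applied per segment (the correction itself is system-specific)."""
    segments: List[List[Point]] = []
    for (x0, y0, w, h) in sub_areas:
        current: List[Point] = []
        for p in raster_points(x0, y0, w, h, step):
            current.append(p)
            if len(current) == segment_len:
                segments.append(current)
                current = []
        if current:                     # flush the partial segment of this sub-area
            segments.append(current)
    return segments
```

In practice the per-segment correction would be derived from the captured image information for that segment (e.g. a distortion or intensity correction) before the next segment is scanned.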
PROCESSING OF IMAGES CONTAINING OVERLAPPING PARTICLES
A computer-implemented method of generating training data to be used to train a machine learning model for generating a segmentation mask of an image containing overlapping particles. Training data is generated from sparse particle images which contain no overlaps. Generating masks for non-overlapping particles is generally not a problem if the particles can be identified clearly; in many cases, simple methods such as thresholding already yield usable masks. The sparse images can then be combined into images which contain artificial overlaps. The same can be done for the masks, which yields a large amount of training data because of the many combinations that can be created from just a small set of images. The method is simple yet effective and can be adapted to many domains, for example by adding style transfer to the generated images or by including additional augmentation steps.
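The core compositing step can be sketched in a few lines: randomly shift each sparse image, merge the images (here by per-pixel maximum) so particles from different source images overlap, and apply identical shifts to the instance masks. This is a sketch under assumed conventions (single-channel images, one binary mask per source image); the function name and the random-shift scheme are illustrative, not from the patent:

```python
import numpy as np

def composite(images, masks, rng=None):
    """Combine sparse (non-overlapping) particle images into one image with
    artificial overlaps, and shift the instance masks identically (sketch)."""
    rng = np.random.default_rng(rng)
    h, w = images[0].shape
    out_img = np.zeros((h, w), dtype=images[0].dtype)
    out_masks = []
    for img, msk in zip(images, masks):
        # a random cyclic shift makes particles from different images overlap
        dy, dx = rng.integers(-h // 4, h // 4 + 1, size=2)
        out_img = np.maximum(out_img, np.roll(img, (dy, dx), axis=(0, 1)))
        out_masks.append(np.roll(msk, (dy, dx), axis=(0, 1)))
    return out_img, np.stack(out_masks)  # instance masks may now overlap
```

Because every subset and shift of the source images yields a new composite, even a small set of sparse images produces a combinatorially large training set.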
METHOD AND SYSTEM FOR DETERMINING A POSE OF AT LEAST ONE OBJECT IN AN OPERATING THEATRE
The invention relates to a method and a system for determining a pose of at least one object in an operating theatre, in a reference coordinate system of a pose detection device of a surgical microscope. The pose of the object is determined by a movably arranged, microscope-external pose detection device in a first coordinate system, the first coordinate system being stationary relative to the operating theatre. The pose of the reference coordinate system is likewise determined by the non-stationary, microscope-external pose detection device in the first coordinate system. The pose of the object is then transformed from the first coordinate system into the reference coordinate system of the pose detection device of the surgical microscope.
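The final transformation step is standard rigid-body algebra: given the object's pose and the microscope reference frame's pose, both expressed in the theatre-fixed ("first") coordinate system, the object's pose in the reference frame is obtained by composing one pose with the inverse of the other. A minimal sketch using 4x4 homogeneous transforms (matrix and function names are illustrative, not from the patent):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def object_in_reference(T_room_object, T_room_reference):
    """Transform the object's pose from the theatre-fixed coordinate system
    into the microscope's reference coordinate system:
    T_ref_obj = inv(T_room_ref) @ T_room_obj."""
    return np.linalg.inv(T_room_reference) @ T_room_object
```

Because both poses are measured by the same external device in the same theatre-fixed frame, errors of that device partially cancel in the relative transform.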
STAIN-FREE DETECTION OF EMBRYO POLARIZATION USING DEEP LEARNING
Disclosed herein are systems, devices, and methods for detecting embryo polarization, using a convolutional neural network (CNN), e.g., a deep CNN, from a 2D image generated from a 3D image of an embryo that is not fluorescently labeled.
CELL AGGREGATE INTERNAL PREDICTION METHOD, COMPUTER READABLE MEDIUM, AND IMAGE PROCESSING DEVICE
An internal prediction method includes acquiring an image of a cell aggregate, calculating a feature amount related to a shape of the cell aggregate on the basis of the image, and outputting structure information related to an internal structure of the cell aggregate on the basis of the feature amount.
DIGITAL ANTIMICROBIAL SUSCEPTIBILITY TESTING
Detecting single bacterial cells in a sample includes collecting, from a sample provided to an imaging apparatus, a multiplicity of images of the sample over a length of time; assessing a trajectory of each bacterial cell in the sample; and assessing, based on the trajectory of each bacterial cell in the sample, a number of bacterial cell divisions that occur in the sample during the length of time.
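The last step above, assessing the number of divisions from per-cell trajectories, reduces to counting increases in the tracked cell count over time; susceptibility can then be called by comparing division counts with and without antibiotic. A simplified sketch (function names and the susceptibility threshold are assumptions for illustration, not from the patent; cells entering or leaving the field of view are ignored):

```python
def count_divisions(counts_per_frame):
    """Count division events as net increases in tracked cell count between
    consecutive frames (simplified model)."""
    return sum(max(b - a, 0)
               for a, b in zip(counts_per_frame, counts_per_frame[1:]))

def is_susceptible(counts_treated, counts_control, ratio_threshold=0.5):
    """Call a sample susceptible if divisions under antibiotic fall below a
    fraction of the untreated control (the threshold is an illustrative value)."""
    d_treated = count_divisions(counts_treated)
    d_control = count_divisions(counts_control)
    return d_control > 0 and d_treated / d_control < ratio_threshold
```

A real pipeline would first link detections across frames into trajectories and detect division events per trajectory rather than from aggregate counts.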
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, IMAGE PROCESSING PROGRAM, AND DIAGNOSIS SUPPORT SYSTEM
An image processing device 100 includes a generation unit 154 and an image processing unit 155. When designation of a plurality of partial regions corresponding to a cell morphology, extracted from a pathological image, is received, the generation unit 154 generates auxiliary information indicating which of a plurality of feature amounts calculated from the image are effective for classifying or extracting the partial regions. When setting information about an adjustment item according to the auxiliary information is received, the image processing unit 155 performs an image process on the image using the setting information.
Platforms and systems for automated cell culture
Disclosed herein are platforms, systems, and methods including a cell culture system that includes a cell culture container comprising a cell culture, the cell culture receiving input cells, a cell imaging subsystem configured to acquire images of the cell culture, a computing subsystem configured to perform a cell culture process on the cell culture according to the images acquired by the cell imaging subsystem, and a cell editing subsystem configured to edit the cell culture to produce output cell products according to the cell culture process.
IMAGE DISPLAY METHOD, IMAGE DISPLAY DEVICE AND RECORDING MEDIUM
An image display method includes the following operations (a) to (e). Operation (a) obtains a plurality of two-dimensional images by two-dimensionally imaging a specimen, in which a plurality of objects to be observed are present three-dimensionally, at a plurality of mutually different focus positions. Operation (b) obtains image data representing a three-dimensional shape of the specimen. Operation (c) obtains a three-dimensional image of the specimen based on the image data. Operation (d) obtains, as an integration two-dimensional image, either a two-dimensional image selected from the plurality of two-dimensional images or a two-dimensional image generated, based on the plurality of two-dimensional images, to be focused on the plurality of objects. Operation (e) integrates the integration two-dimensional image obtained in (d) with the three-dimensional image obtained in (c) and displays the integrated image on a display unit.
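One common way to generate the all-in-focus image of operation (d) from the focus stack of operation (a) is per-pixel focus selection: for each pixel, pick the slice with the greatest local sharpness. A minimal sketch using gradient energy as the sharpness measure (the measure is an assumption; the patent does not specify one):

```python
import numpy as np

def all_in_focus(stack: np.ndarray) -> np.ndarray:
    """stack: (Z, H, W) two-dimensional images at different focus positions.
    Returns an (H, W) image that picks, per pixel, the slice with the largest
    local gradient energy -- one simple way to build an image focused on
    objects at multiple depths."""
    gy, gx = np.gradient(stack.astype(float), axis=(1, 2))
    sharpness = gy**2 + gx**2
    best = np.argmax(sharpness, axis=0)                 # (H, W) best slice index
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```

Smoothing the sharpness maps before the argmax (e.g. with a small box or Gaussian filter) usually gives cleaner slice selection on noisy data.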