Patent classifications
G06K15/1276
LIVE SURGICAL AID FOR BRAIN TUMOR RESECTION USING AUGMENTED REALITY AND DEEP LEARNING
An augmented reality system and method, comprising: a memory configured to store 3D medical scans comprising an image of a tumor and an angiogram; an output port configured to present a signal for presentation of an augmented reality display to a user; at least one camera, configured to capture images of a physiological object from a perspective; at least one processor, configured to: implement a first neural network trained to automatically segment the tumor; implement a second neural network to segment vasculature in proximity to the tumor; implement a third neural network to recognize a physiological object in the captured images; and generate an augmented reality display of the physiological object, tumor and vasculature based on the captured images, the segmented tumor and the segmented vasculature, compensated for changes in the perspective.
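The claimed pipeline (three networks plus an overlay step) can be sketched roughly as below. This is a minimal illustration only: the "networks" are stand-in threshold functions, not trained models, and the names `segment_tumor`, `segment_vasculature`, `recognize_object`, and `compose_overlay` are assumptions introduced here, not taken from the patent.

```python
import numpy as np

def segment_tumor(scan):
    # First network (stub): binary tumor mask from the stored medical scan.
    return scan > 0.8

def segment_vasculature(angiogram):
    # Second network (stub): vessel mask from the angiogram.
    return angiogram > 0.5

def recognize_object(frame):
    # Third network (stub): mask of the physiological object in the camera frame.
    return frame > 0.3

def compose_overlay(frame, tumor_mask, vessel_mask, object_mask):
    # Blend the segmented structures onto the captured frame as an RGB overlay;
    # tumor and vessels are only drawn where the recognized object is visible.
    overlay = np.stack([frame, frame, frame], axis=-1)
    overlay[tumor_mask & object_mask] = [1.0, 0.0, 0.0]   # tumor in red
    overlay[vessel_mask & object_mask] = [0.0, 0.0, 1.0]  # vessels in blue
    return overlay
```

Perspective compensation (re-registering the masks as the camera moves) is omitted here; in a real system each mask would be warped into the current camera pose before blending.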
Information recording medium, information recording method, and information reproduction method
It is an object of the present invention to improve the visibility of marks formed inside a transparent medium by laser, whether observed with the naked eye or captured by a camera, without spoiling the medium's appearance. By irradiating the inside of the transparent medium with a laser, the present invention forms a micro-denatured region in each of a first layer and a second layer inside the medium. The micro-denatured regions of the respective layers are arranged out of alignment with each other in the two-dimensional plane (refer to FIG. 1).
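The two-layer arrangement can be illustrated geometrically: spots in the second layer are shifted in-plane so that none sits directly above a spot in the first layer. Grid size, pitch, and the half-pitch offset below are illustrative assumptions, not values from the abstract.

```python
def layer_spots(nx, ny, pitch, offset=(0.0, 0.0)):
    # In-plane (x, y) coordinates of micro-denatured spots on a regular grid.
    ox, oy = offset
    return {(ix * pitch + ox, iy * pitch + oy)
            for ix in range(nx) for iy in range(ny)}

first = layer_spots(3, 3, pitch=1.0)
second = layer_spots(3, 3, pitch=1.0, offset=(0.5, 0.5))  # shifted half a pitch

# No spot in the second layer aligns with a spot in the first layer:
assert first.isdisjoint(second)
```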
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, STORAGE MEDIUM, AND IMAGE FORMING APPARATUS
An object of the present invention is to provide an information processing apparatus that reduces the amount of used memory necessary for alpha blending processing in a printer and enables high-speed printing processing. The present invention is an information processing apparatus including: an overlap determination unit configured to determine, in a case where a drawing object making up page data of a print job to be output to an image forming apparatus is a transparent object having an alpha channel as color information, whether the transparent object overlaps another drawing object; and a conversion unit configured to convert, in a case where the overlap determination unit determines that the transparent object does not overlap the other drawing object, drawing data of the transparent object into drawing data of a drawing object not having an alpha channel as color information.
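The overlap determination and conversion described in the abstract can be sketched as below. The representation of a drawing object (`bbox`, `has_alpha`) and the bounding-box test are assumptions for illustration; a real implementation would test actual object geometry.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DrawObject:
    bbox: tuple        # (x0, y0, x1, y1) in page coordinates
    has_alpha: bool    # True if color information carries an alpha channel

def overlaps(a, b):
    # Axis-aligned bounding-box intersection test.
    ax0, ay0, ax1, ay1 = a.bbox
    bx0, by0, bx1, by1 = b.bbox
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def flatten_isolated_alpha(objects):
    # If a transparent object overlaps no other drawing object, convert it to
    # an object without an alpha channel, so the printer can skip alpha
    # blending for it entirely.
    result = []
    for i, obj in enumerate(objects):
        if obj.has_alpha and not any(
            overlaps(obj, other) for j, other in enumerate(objects) if j != i
        ):
            obj = replace(obj, has_alpha=False)
        result.append(obj)
    return result
```

The point of the conversion is that alpha blending requires holding a background buffer in memory; isolated transparent objects composite against a known blank page, so their color can be precomputed and the alpha channel dropped.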
NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING PRINTING PROGRAM, PRINTED MATTER GENERATION METHOD, PRINTING SYSTEM, SPECIAL PLATE DATA GENERATION METHOD, SPECIAL PLATE DATA GENERATION SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING SPECIAL PLATE DATA GENERATION PROGRAM
A non-transitory computer-readable storage medium stores a program including: a special plate generation function for generating special plate data based on image data; a special plate printing function for printing the special plate data using special color ink; and an image printing function for printing the image data at a position overlapping the special color ink. A relative position between a position at which the special color ink of the special plate data is printed and a position of an object of the image data corresponds to a type of the object.
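The claim's rule that the relative position of the special-color ink depends on the object's type might be sketched as a simple lookup. The type names and offset values below are invented for illustration; the abstract does not specify them.

```python
# Hypothetical per-type offsets (in printer dots) between an image object's
# position and where the special-color ink of the special plate is laid down.
TYPE_OFFSETS = {
    "text": (0, 0),     # special ink printed exactly under text objects
    "photo": (2, 2),    # slight offset for photographic objects
    "graphic": (1, 0),
}

def special_plate_position(obj_type, obj_position):
    # Return the position at which the special-color ink is printed for an
    # object of the given type, relative to the object's own position.
    dx, dy = TYPE_OFFSETS.get(obj_type, (0, 0))
    x, y = obj_position
    return (x + dx, y + dy)
```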