Patent classifications
A61B1/000096
TOOTH ANALYSIS SERVER, TOOTH ANALYSIS TERMINAL, AND TOOTH ANALYSIS PROGRAM
Provided is a tooth analysis server including: a server communication unit, linked with a tooth photographing device for photographing teeth, for communicating with a user terminal; a database management unit for managing images of the photographed teeth as data; an analysis unit for analyzing a state of the user's teeth according to a request from the user terminal; and a result providing unit for providing an analysis result for the state of the user's teeth to the user terminal. The analysis unit receives, from the tooth photographing device, a first image obtained by photographing the user's teeth at a first speed and performs a first analysis, and then receives a second image obtained by photographing a doubtful area identified in the first analysis at a second speed slower than the first speed and performs a second analysis.
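The two-speed, two-pass flow described above can be sketched as follows (a minimal illustration only: region identifiers, score thresholds, and function names are assumptions, not the patented analysis):

```python
def first_analysis(fast_capture):
    """Coarse pass over the fast capture: flag regions scoring below a threshold."""
    return [region for region, score in fast_capture.items() if score < 0.5]

def second_analysis(detailed_capture):
    """Fine pass over the slower, more detailed re-capture of doubtful regions."""
    return {region: ("caries suspected" if score < 0.3 else "healthy")
            for region, score in detailed_capture.items()}

def analyze(fast_capture, slow_capture):
    # First analysis on the image photographed at the first (faster) speed.
    doubtful = first_analysis(fast_capture)
    # Second analysis only on the doubtful areas, re-photographed at the
    # second (slower) speed.
    detailed = {region: slow_capture[region] for region in doubtful}
    return second_analysis(detailed)
```

Here each capture is modeled as a mapping from a tooth region to a confidence score; a real system would operate on image data and a trained model.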
Creating Surgical Annotations Using Anatomy Identification
A system and method allow creation of surgical annotations during surgery. A digital image of a treatment site in a region of patient anatomy is captured using a camera and is displayed on an image display. The system displays an overlay marking an anatomic region of interest in the displayed image. During the course of the surgery, panning of the image results in the anatomic region of interest being outside the displayed field of view. In response to user input, the displayed image is subsequently panned such that the anatomic region of interest and the displayed overlay return to the displayed field of view.
Route selection assistance system, recording medium on which route selection assistance program is recorded, route selection assistance method, and diagnosis method
Provided are a route selection assistance system, a recording medium on which a route selection assistance program is recorded, a route selection assistance method, and a diagnosis method that enable easy selection of a route through a living body lumen for delivering a medical instrument via the lumen to a site within a living body. The route selection assistance system includes: a receiving section configured to receive an input of site information specifying a target site; an image obtaining section configured to obtain image information on a living body of a target patient; a route extracting section configured to extract a plurality of routes through a living body lumen; a ranking assigning section configured to assign rankings to the plurality of routes extracted by the route extracting section; and an output section configured to output the plurality of extracted routes and the rankings assigned by the ranking assigning section.
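The extract-then-rank pipeline can be illustrated with a short sketch (the route attributes and the scoring rule, shorter routes with gentler bends first, are illustrative assumptions and not the disclosed ranking criteria):

```python
from dataclasses import dataclass

@dataclass
class Route:
    """A candidate lumen route to the target site (attributes assumed)."""
    name: str
    length_mm: float
    min_bend_radius_mm: float

def assign_rankings(routes):
    """Rank routes 1..n: shorter first, then larger minimum bend radius."""
    ordered = sorted(routes, key=lambda r: (r.length_mm, -r.min_bend_radius_mm))
    return {r.name: rank for rank, r in enumerate(ordered, start=1)}
```

The output section would then present each extracted route together with its assigned ranking.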
Systems and methods for processing real-time video from a medical image device and detecting objects in the video
The present disclosure relates to computer-implemented systems and methods for detecting a feature-of-interest in a video. In one implementation, a computer-implemented system may include a discriminator network and a generative network. The discriminator network may include a perception branch and an adversarial branch, the perception branch being configured to output detections of the feature-of-interest in the video. The generative network may be configured to receive detections of the feature-of-interest from the perception branch of the discriminator network and generate artificial representations of the feature-of-interest based on those detections. Further, the adversarial branch may be configured to provide an output identifying differences between the artificial (false) representations and true representations of the feature-of-interest, and the perception branch may be further configured to be trained by the output of the adversarial branch so that false representations are not detected by the perception branch as true representations.
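The data flow between the two branches and the generator can be caricatured in a toy sketch (representations are reduced to scalar scores; the class names, threshold, and update rule are illustrative assumptions, not the disclosed networks):

```python
class Discriminator:
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def perception_branch(self, frame):
        """Detect the feature-of-interest: keep scores at or above the threshold."""
        return [x for x in frame if x >= self.threshold]

    def adversarial_branch(self, true_reps, false_reps):
        """Output a margin separating true from artificial representations."""
        if not true_reps or not false_reps:
            return 0.0
        return min(true_reps) - max(false_reps)

class Generator:
    def generate(self, detections):
        """Make artificial representations slightly below each detection."""
        return [d - 0.1 for d in detections]

def adversarial_step(disc, gen, frame):
    detections = disc.perception_branch(frame)
    artificial = gen.generate(detections)
    margin = disc.adversarial_branch(detections, artificial)
    # Feedback from the adversarial branch: tighten the perception branch
    # until artificial representations are no longer detected as true ones.
    if margin < 0.2:
        disc.threshold += 0.05
    return disc.threshold
```

A real implementation would train convolutional networks end to end; the sketch only shows how the adversarial branch's output feeds back into the perception branch.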
Recording Medium, Method for Generating Learning Model, Image Processing Device, and Surgical Operation Assisting System
A non-transitory recording medium records a program that causes a computer to execute processing including: acquiring an operation field image obtained by imaging the operation field of an endoscopic surgery; inputting the acquired operation field image to a learning model trained to output, when an operation field image is input, information on the connective tissue between a preservation organ and a resection organ, and acquiring information on the connective tissue included in the operation field image; and outputting navigation information for treating the connective tissue between the preservation organ and the resection organ on the basis of the information acquired from the learning model.
Automated endoscopic device control systems
Systems, methods, and computer-readable media are disclosed for automated endoscopic device control systems. In one embodiment, an example endoscopic device control system may include memory that stores computer-executable instructions, and at least one processor configured to access the memory and execute the computer-executable instructions to determine a first image from an endoscopic imaging system comprising a camera and a scope, determine, using the first image, that a first condition is present, determine a first response action to implement using a first endoscopic device, and automatically cause the first endoscopic device to implement the first response action.
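The determine-condition, determine-response, act loop can be sketched as a simple lookup (the conditions, devices, and actions below are hypothetical examples, not ones recited by the patent):

```python
# Hypothetical condition -> (device, action) table.
RESPONSE_ACTIONS = {
    "smoke_detected": ("insufflator", "increase_smoke_evacuation"),
    "lens_obscured": ("lens_washer", "flush_lens"),
    "bleeding_detected": ("pump", "activate_irrigation"),
}

def determine_response(condition):
    """Return the (device, action) pair for a recognized image condition,
    or None when no automated response applies."""
    return RESPONSE_ACTIONS.get(condition)
```

In the described system, the condition itself would be determined from the first image by the processor before this mapping is consulted and the device is automatically actuated.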
DIAGNOSIS SUPPORT SYSTEM, DIAGNOSIS SUPPORT METHOD, AND STORAGE MEDIUM
A diagnosis support system includes a processor. The processor is connected to a plurality of classifiers that are different in performance. The processor displays performance information of each of the classifiers side by side, receives a user's selection of the performance information displayed side by side, and inputs an input image to the classifier associated with the performance information selected by the user.
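The display-select-route flow can be illustrated briefly (classifier names, metrics, and outputs are stand-in assumptions; a real system would display the table in a UI and run trained models):

```python
# Classifiers that differ in performance, keyed by name.
classifiers = {
    "fast":     {"accuracy": 0.86, "ms_per_image": 12,  "fn": lambda img: "benign"},
    "balanced": {"accuracy": 0.91, "ms_per_image": 45,  "fn": lambda img: "benign"},
    "accurate": {"accuracy": 0.95, "ms_per_image": 180, "fn": lambda img: "malignant"},
}

def performance_table(clfs):
    """Rows of performance information to display side by side."""
    return [(name, c["accuracy"], c["ms_per_image"]) for name, c in clfs.items()]

def classify_with_selection(clfs, selected_name, image):
    """Route the input image to the classifier the user selected."""
    return clfs[selected_name]["fn"](image)
```

The user's selection of a row in the displayed table decides which classifier receives the input image.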
Methods and Systems for Controlling Cooperative Surgical Instruments
Systems, devices, and methods for controlling cooperative surgical instruments are provided. Various aspects of the present disclosure provide for coordinated operation of surgical instruments accessing a common body cavity of a patient from different approaches to achieve a common surgical purpose. For example, various methods, devices, and systems disclosed herein can enable the coordinated treatment of surgical tissue by disparate minimally invasive surgical systems that approach the tissue from varying anatomical spaces and operate in concert with one another to effect a desired surgical treatment.
MEDICAL IMAGE PROCESSING SYSTEM, RECOGNITION PROCESSING PROCESSOR DEVICE, AND OPERATION METHOD OF MEDICAL IMAGE PROCESSING SYSTEM
An endoscope processor device generates first video signals. A recognition processing processor device generates second video signals by reflecting a result of recognition processing based on the first video signals. A display shows either the second video signals or the first video signals, switching from the second to the first on the basis of video switching signals from the recognition processing processor device. When the result display of the recognition processing by the second video signals is stopped, the display indicates that the result display is being stopped.
Surgical Systems and Devices, and Methods for Configuring Surgical Systems and Performing Endoscopic Procedures, Including ERCP Procedures
Embodiments relate to surgical systems and methods. The system includes a main assembly having an IMU subsystem, a camera, a scope head assembly, and a processor. The processor processes images and IMU information, including determining whether the images include a distal end of the scope head assembly and a cannulation target. Responsive to a determination that the images include the distal end of the scope head assembly, the processor generates a 3-dimensional position of the distal end. When the images are determined to include the cannulation target, the processor generates a 3-dimensional position of the cannulation target. The processor also generates predictions of one or more real-time trajectory paths for the distal end of the scope head assembly to cannulate the cannulation target.