G09B23/286

SYSTEM AND METHOD FOR EVALUATING THE PERFORMANCE OF A USER IN CAPTURING AN ULTRASOUND IMAGE OF AN ANATOMICAL REGION
20230037923 · 2023-02-09

A training platform, method, and computer-readable medium for evaluating users in capturing images of an internal anatomical region for organ analysis. Automated machine learning models, trained on a dataset of labelled training images associated with different imaging-device positions, process an image produced when a user positions an imaging device at various positions relative to a training manikin, a human, or an animal, to determine whether the generated image corresponds to a predefined view required for analysing the organ features shown in it. An output is provided indicating whether the generated image corresponds to the predefined view expected for organ analysis and measurements.
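The abstract does not specify the model type, but the idea of matching a captured image against labelled training images per view can be sketched with a simple nearest-centroid classifier. The view names, feature vectors, and tolerance below are invented for illustration:

```python
# Hypothetical sketch: nearest-centroid classification deciding whether an
# ultrasound image (reduced to a feature vector) matches a predefined view.
# View names, feature values, and the tolerance are invented.
import math

# Labelled training features, grouped by imaging-device position / view.
TRAINING = {
    "parasternal_long_axis": [[0.9, 0.2, 0.1], [0.8, 0.3, 0.2]],
    "apical_four_chamber":   [[0.1, 0.8, 0.7], [0.2, 0.9, 0.6]],
}

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

CENTROIDS = {view: centroid(vecs) for view, vecs in TRAINING.items()}

def classify(features):
    """Return (closest_view, distance) for an incoming image's features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(((v, dist(features, c)) for v, c in CENTROIDS.items()),
               key=lambda pair: pair[1])

def matches_predefined_view(features, required_view, tolerance=0.5):
    """Output indicating whether the captured image corresponds to the
    predefined view required for organ analysis."""
    view, d = classify(features)
    return view == required_view and d <= tolerance

print(matches_predefined_view([0.85, 0.25, 0.15], "parasternal_long_axis"))
# -> True
```

A production system would replace the hand-made feature vectors with features learned by the trained models the abstract describes; the pass/fail output shape is the same.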

System and method for extended spectrum ultrasound training using animate and inanimate training objects

A system and method for extended spectrum ultrasound training using tags placed on animate and/or inanimate objects. The system combines the use of tags, a reader, and a 3-DOF motion tracker to train a user in finding image windows and optimal image views in an ultrasound simulation environment.

System and method for orientating capture of ultrasound images

A downloadable navigator for a mobile ultrasound unit having an ultrasound probe, implemented on a portable computing device. The navigator includes a trained orientation neural network that receives a non-canonical image of a body part from the mobile ultrasound unit and generates a transformation associated with that image, mapping the position and rotation associated with a canonical image to the position and rotation associated with the non-canonical image. A result converter converts the transformation into orientation instructions and displays them to the user of the probe, guiding the user to change the probe's position and rotation.
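The result-converter step, turning a pose transformation into human-readable probe instructions, can be sketched as below. The pose representation (translation in mm plus tilt/rotation angles), axis names, and threshold are assumptions, not taken from the patent:

```python
# Hypothetical sketch of a "result converter": compute the delta between the
# canonical pose and the current non-canonical pose, then phrase each
# component as a probe instruction. Axis names and the tolerance are invented.

def orientation_instructions(canonical, current, tol=1.0):
    """Each pose is (x_mm, y_mm, z_mm, tilt_deg, rotation_deg).
    Returns instructions that move the probe toward the canonical pose."""
    dx, dy, dz, dtilt, drot = (c - n for c, n in zip(canonical, current))
    steps = []
    if abs(dx) > tol:
        steps.append(f"slide {'right' if dx > 0 else 'left'} {abs(dx):.0f} mm")
    if abs(dy) > tol:
        steps.append(f"slide {'up' if dy > 0 else 'down'} {abs(dy):.0f} mm")
    if abs(dz) > tol:
        steps.append(f"press {'deeper' if dz > 0 else 'shallower'} {abs(dz):.0f} mm")
    if abs(dtilt) > tol:
        steps.append(f"tilt {'forward' if dtilt > 0 else 'back'} {abs(dtilt):.0f} deg")
    if abs(drot) > tol:
        steps.append(f"rotate {'clockwise' if drot > 0 else 'counterclockwise'} {abs(drot):.0f} deg")
    return steps or ["hold position"]

print(orientation_instructions((0, 0, 0, 0, 0), (-10, 0, 0, 5, 0)))
# -> ['slide right 10 mm', 'tilt back 5 deg']
```

In the patent's pipeline the `canonical`/`current` poses would come from the orientation neural network rather than being supplied directly.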

CATHETER SIMULATOR AND HEART MODEL FOR CATHETER SIMULATOR
20230005390 · 2023-01-05

The present invention is a heart model, formed of an elastic material, for use in simulating either a procedure for installing a leadless pacemaker inside the heart or a myocardial examination procedure for collecting myocardial tissue. The heart model has a main body containing a right atrium, a right ventricle, a left atrium, and a left ventricle; an inferior vena cava provided in the main body that allows insertion of a catheter holding a leadless pacemaker; and a holder detachably provided inside the main body that includes a flexible part capable of locking a locking part of the leadless pacemaker.

Tumor ablation training system

A training system and method includes a subject phantom (102) capable of being visualized on a display (120). A spatial tracking system (104) is configured to track an interventional instrument (108) in subject phantom space. A simulation system (110) is configured to generate a simulated abnormality in the phantom space and to simulate interactions with the simulated abnormality to provide feedback and evaluation information to a user for training the user in an associated procedure related to the abnormality.

SYSTEMS AND METHODS FOR ACQUIRING ULTRASONIC DATA
20230017291 · 2023-01-19

Methods for acquiring ultrasonic data from a scanner constructed for B-mode scans are disclosed. An image-acquiring system is provided. A three-dimensional target region is selected. A model of the target region comprising a plurality of target locations representing a plurality of planned locations in the target region at which ultrasonic data is to be acquired is created, and a visual representation of the model comprising a plurality of graphical elements is displayed. Ultrasonic data at each of the planned locations is acquired. A transformation of the visual representation is executed, comprising: performing a data quality test at each target location; for any target location that fails the data quality test, altering a graphical element corresponding to the failed target location to indicate failure of the data quality test at that location; and displaying a transformed visual representation comprising updated graphical elements on the visual display.
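The transformation step reads as a loop over target locations with a per-location quality test and a graphical update on failure. A minimal sketch, assuming an SNR-based test and colour/label fields on each graphical element (both assumptions, not from the patent):

```python
# Hypothetical sketch of the visual-representation transformation: run a data
# quality test at each planned target location and alter the corresponding
# graphical element on failure. The SNR threshold and element fields are
# invented for illustration.

def data_quality_test(sample, min_snr=10.0):
    return sample["snr"] >= min_snr

def transform_visual_representation(model):
    """model: list of target locations, each holding acquired data and its
    graphical element. Returns the updated elements for display."""
    for target in model:
        if not data_quality_test(target["data"]):
            target["element"]["colour"] = "red"    # mark failed acquisition
            target["element"]["label"] = "re-scan"
    return [t["element"] for t in model]

model = [
    {"data": {"snr": 14.2}, "element": {"id": 0, "colour": "green", "label": "ok"}},
    {"data": {"snr": 6.7},  "element": {"id": 1, "colour": "green", "label": "ok"}},
]
print(transform_visual_representation(model))
```

Here location 1 fails the test and its element is recoloured, mirroring the "altering a graphical element ... to indicate failure" step; passing locations are left untouched.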

Dynamic and interactive navigation in a surgical environment

A system and method for converting medical images of a particular patient into high-resolution, 3D, dynamic and interactive images that interact with medical tools, including medical devices, by coupling a model of tissue dynamics and tool characteristics to the patient-specific imagery to simulate a medical procedure in an accurate and dynamic manner. The method includes a tool to add and/or adjust the dynamic image of tissues and the ability to draw and add geometric shapes on the dynamic image of tissues. The system imports the 3D surgery plan (craniotomy, head position, approach, etc.). The surgeon establishes multiple views, rotates and interacts with the navigation image to see behind pathology and vital structures, and can make structures such as tumors, vessels, and tissue transparent to improve visualization and to see behind the pathology. The system can warn of the proximity of tools to specific anatomical structures.
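The proximity warning in the last sentence amounts to a distance check between the tracked tool tip and labelled anatomical structures. A minimal sketch, where the structure positions and the 5 mm safety margin are invented:

```python
# Hypothetical sketch of the proximity warning: compare the tracked tool tip
# against labelled anatomical structure positions (in mm, navigation space)
# and warn inside a safety margin. All coordinates and the margin are invented.
import math

STRUCTURES = {
    "optic nerve":    (12.0, 4.0, 30.0),
    "carotid artery": (25.0, 10.0, 28.0),
}

def proximity_warnings(tool_tip, margin_mm=5.0):
    warnings = []
    for name, pos in STRUCTURES.items():
        d = math.dist(tool_tip, pos)   # Euclidean distance, Python 3.8+
        if d <= margin_mm:
            warnings.append(f"WARNING: tool {d:.1f} mm from {name}")
    return warnings

print(proximity_warnings((13.0, 4.0, 31.0)))
# -> ['WARNING: tool 1.4 mm from optic nerve']
```

A real navigation system would take the tool-tip pose from the tracking hardware each frame and check it against segmented patient-specific structures rather than a fixed table.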

SURGICAL SKILL TRAINING SYSTEM AND MACHINE LEARNING-BASED SURGICAL GUIDE SYSTEM USING THREE-DIMENSIONAL IMAGING
20230210598 · 2023-07-06

A surgical skill training system includes: a data collecting unit configured to collect actual surgical skill data on a patient of an operating surgeon; an image providing server configured to generate a 3-dimensional (3D) surgical image for surgical skill training, based on the actual surgical skill data; and a user device configured to display the 3D surgical image, wherein the image providing server includes: a patient image generating unit configured to generate a patient image, based on patient information of the patient; a surgical stage classifying unit configured to classify the actual surgical skill data into actual surgical skill data for each surgical stage performed by the operating surgeon; and a 3D image generating unit configured to generate the 3D surgical image by using the patient image, and feature information detected from the actual surgical skill data.
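The abstract does not say how the surgical stage classifying unit works; one simple possibility is to bucket recorded surgical events by the instrument in use. The stage map and event records below are invented for illustration:

```python
# Hypothetical sketch of a surgical-stage classifying unit: group recorded
# surgical events into named stages keyed on the instrument in use. The
# stage mapping and the event records are invented.

STAGE_BY_INSTRUMENT = {
    "scalpel": "incision",
    "retractor": "exposure",
    "forceps": "resection",
    "needle driver": "closure",
}

def classify_by_stage(events):
    """events: list of {"t": seconds, "instrument": name} records.
    Returns a dict mapping stage name -> events observed in that stage."""
    stages = {}
    for event in events:
        stage = STAGE_BY_INSTRUMENT.get(event["instrument"], "unclassified")
        stages.setdefault(stage, []).append(event)
    return stages

events = [
    {"t": 3,  "instrument": "scalpel"},
    {"t": 41, "instrument": "retractor"},
    {"t": 42, "instrument": "retractor"},
    {"t": 95, "instrument": "needle driver"},
]
print({stage: len(evts) for stage, evts in classify_by_stage(events).items()})
# -> {'incision': 1, 'exposure': 2, 'closure': 1}
```

The per-stage buckets are what the 3D image generating unit would then combine with the patient image to build stage-by-stage training imagery.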

Systems and methods for quality control in 3D printing applications using a 3D printed phantom

The present disclosure provides systems and methods for performing quality control assessments of a three-dimensional (3D) printing system. In particular, the present disclosure provides phantom designs for use in 3D printing systems, as well as methods of quality control for a 3D printing system performed using a 3D printed phantom.

PORTABLE MEDICAL EDUCATION DEVICE, MEDICAL EDUCATION PLATFORM, AND MEDICAL EDUCATION METHODS

A portable medical education device, medical education platform, and medical education methods are disclosed. The portable medical education device uses a camera to capture a specific picture and generate an image, extracts several features from the image, converts the features into an identification code, and transmits the identification code to the medical education platform. The medical education platform stores several three-dimensional medical models and, according to the identification code, finds the specific three-dimensional medical model whose preset code matches the identification code. The medical education platform then transmits the specific three-dimensional medical model to the portable medical education device, which uses a display screen to present a reality scene showing the specific three-dimensional medical model.
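The features-to-code-to-model pipeline can be sketched as hashing quantised features into a short code and looking it up in a catalogue of preset codes. The quantisation scheme, code length, and model filenames are all invented:

```python
# Hypothetical sketch: hash extracted image features into an identification
# code, then look up the 3D medical model whose preset code matches. The
# feature quantisation, 8-character code, and catalogue are invented.
import hashlib

def identification_code(features):
    """Quantise features so near-identical captures hash to the same code."""
    quantised = ",".join(f"{round(f, 1):.1f}" for f in features)
    return hashlib.sha256(quantised.encode()).hexdigest()[:8]

# Platform-side catalogue: preset code -> 3D medical model asset.
CATALOGUE = {
    identification_code([0.5, 0.2, 0.9]): "heart_model.glb",
    identification_code([0.1, 0.7, 0.3]): "liver_model.glb",
}

def find_model(code):
    return CATALOGUE.get(code)  # None when no preset code matches

# Slightly perturbed features quantise to the same code as [0.5, 0.2, 0.9].
print(find_model(identification_code([0.52, 0.21, 0.88])))
# -> heart_model.glb
```

Quantising before hashing is what lets two captures of the same picture yield the same identification code; a deployed system would likely use a more robust feature-matching scheme than plain rounding.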