A61B2017/00216

System and method for intraoperative surgical planning
11490965 · 2022-11-08

The subject matter includes systems, methods, and prosthetic devices for joint reconstruction surgery. A computer-assisted intraoperative planning method can include accessing a first medical image providing a first view of a joint within a surgical site and receiving a selection of a first component of a modular prosthetic device implanted in a first bone of the joint. The method continues by displaying a graphical representation of the first component of the modular prosthetic device overlaid on the first medical image, and updating the graphical representation based on positioning inputs representative of an implant location of the first component relative to landmarks on the first bone visible within the first medical image. The method concludes by presenting a selection interface that enables visualization of additional components of the modular prosthetic device virtually connected to the first component and overlaid on the first medical image.
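The overlay-and-position workflow described above can be sketched as follows. This is a minimal illustration, not the patented implementation; all class and field names (ComponentOverlay, IntraopPlan, the landmark-offset positioning rule) are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class ComponentOverlay:
    """Graphical representation of one modular-prosthesis component on a 2D image."""
    name: str
    x: float = 0.0
    y: float = 0.0
    angle_deg: float = 0.0

class IntraopPlan:
    """Tracks component overlays positioned relative to bone landmarks on a medical image."""
    def __init__(self, image_id):
        self.image_id = image_id
        self.overlays = []

    def select_component(self, name):
        # Selecting a component adds its graphical representation to the view.
        overlay = ComponentOverlay(name)
        self.overlays.append(overlay)
        return overlay

    def position_relative_to_landmark(self, overlay, landmark_xy, dx, dy, angle_deg=0.0):
        # Positioning inputs are expressed as offsets from a visible bone landmark.
        overlay.x = landmark_xy[0] + dx
        overlay.y = landmark_xy[1] + dy
        overlay.angle_deg = angle_deg

plan = IntraopPlan("ap_knee_view")
stem = plan.select_component("femoral_component")
plan.position_relative_to_landmark(stem, landmark_xy=(120.0, 340.0), dx=5.0, dy=-12.0)
print(stem.x, stem.y)  # 125.0 328.0
```

Additional components would be appended the same way and drawn virtually connected to the first.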

Surgical system with voice control

A surgical system includes a plurality of voice sensors located in a surgical environment and configured to detect sound and generate a first plurality of signals. The surgical system also includes a position indicator, in proximity to a designated user, configured to indicate a first position of the designated user and generate a second signal representative of the first position. The surgical system further includes a processor configured to receive the first plurality of signals and the second signal and determine, based on the first plurality of signals, a second position. The processor is also configured to compare the detected sound with a registered voice command of the designated user stored in a memory to verify the designated user's credentials, and to send a command signal to a surgical instrument to carry out an operation related to the voice command based on at least one of the verification of the designated user's credentials, the first position, and the second position.
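The verification-and-position cross-check can be sketched as below. This simplifies the claim considerably: the function name, the 2D position representation, and the distance tolerance are all illustrative assumptions, not details from the patent.

```python
def authorize_voice_command(detected_phrase, registered_commands,
                            indicated_pos, estimated_pos, tolerance=0.5):
    """Return the command to dispatch, or None if authorization fails.

    Verifies the detected phrase against the designated user's registered
    commands, then cross-checks the position reported by the wearable
    indicator against the position estimated from the voice-sensor array.
    """
    if detected_phrase not in registered_commands:
        return None  # not a registered command of the designated user
    dx = indicated_pos[0] - estimated_pos[0]
    dy = indicated_pos[1] - estimated_pos[1]
    if (dx * dx + dy * dy) ** 0.5 > tolerance:
        return None  # sound did not originate near the designated user
    return registered_commands[detected_phrase]

registered = {"instrument up": "CMD_RAISE"}
cmd = authorize_voice_command("instrument up", registered, (0.0, 0.0), (0.1, 0.1))
```

Requiring both checks keeps a bystander's speech, or the designated user's voice replayed from elsewhere in the room, from actuating the instrument.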

Surgeon disengagement detection during termination of teleoperation
11571269 · 2023-02-07

A method for disengagement detection of a surgical instrument of a surgical robotic system, the method comprising: determining whether a user's head is unstable prior to disengagement of a teleoperation mode; determining whether a pressure release has occurred relative to at least one of a first user input device or a second user input device for controlling a surgical instrument of the surgical robotic system during the teleoperation mode; and in response to determining the user's head is unstable or determining the pressure release has occurred, determining whether a distance change between the first user input device and the second user input device indicates the user is performing an unintended action prior to disengagement of the teleoperation mode.
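The claimed decision flow can be sketched directly. The threshold value, units, and function name are illustrative assumptions; the structure (gate on head instability or pressure release, then test the inter-device distance change) follows the method as claimed.

```python
def unintended_action_before_disengage(head_unstable, pressure_released,
                                       device_distance_before, device_distance_now,
                                       threshold=0.05):
    """Detect a likely unintended action prior to ending teleoperation.

    Only when the user's head is unstable or a pressure release has occurred
    on a hand input device do we examine the change in distance between the
    two user input devices; a large change suggests the user is performing
    an unintended action rather than deliberately disengaging.
    """
    if not (head_unstable or pressure_released):
        return False
    return abs(device_distance_now - device_distance_before) > threshold
```

Gating on the first two signals avoids flagging normal bimanual motion during active teleoperation as unintended.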

SYSTEMS AND METHODS FOR WORK VOLUME MAPPING TO FACILITATE DYNAMIC COLLISION AVOIDANCE
20230096023 · 2023-03-30

A system according to at least one embodiment of the present disclosure includes a processor; and a memory coupled with the processor and including data stored thereon that, when processed by the processor, enables the processor to: predict, at a first time, a motion of an object during a surgical procedure and at a second time following the first time; and update, based on the predicted motion of the object, a surgical navigation path of a robotic arm.
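The predict-then-replan loop can be illustrated with a minimal sketch. The constant-velocity motion model, the radial push-out replanning rule, and all names are assumptions for the example, not the disclosed method.

```python
import math

def predict_position(pos, vel, dt):
    """Constant-velocity prediction of a tracked object's position dt seconds ahead."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def replan_waypoints(waypoints, obstacle, clearance):
    """Push any waypoint inside the clearance radius of the predicted obstacle
    position radially outward to the clearance boundary (illustrative rule)."""
    updated = []
    for wp in waypoints:
        d = math.dist(wp, obstacle)
        if 0.0 < d < clearance:
            scale = clearance / d
            updated.append(tuple(o + (w - o) * scale for w, o in zip(wp, obstacle)))
        else:
            updated.append(wp)
    return updated

# Predict where the object will be at the second time, then update the path.
obstacle_at_t2 = predict_position((0.0, 0.0), (1.0, 0.0), dt=1.0)
path = replan_waypoints([(1.0, 0.1), (2.0, 2.0)], obstacle_at_t2, clearance=0.5)
```

A real system would use a richer motion model and a proper planner, but the shape of the computation (predict at a future time, then update the navigation path against that prediction) is the same.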

Extended Reality AR/VR System
20230033743 · 2023-02-02 ·

A system includes a mobile device having one or more cameras to take images; a sensor detecting reflected light from one or more lasers and a diffuser to detect object range or dimension; code for motion tracking, environmental understanding by detecting planes in an environment, and estimating light and dimensions of the surrounding based on the one or more lasers; code to estimate a three-dimensional (3D) volume of an object from multiple perspectives and from projected laser beams to measure size or scale and determine locations of points on the object's surface in a plane or a slice using time-of-flight, wherein positions and cross-sections for different slices are correlated to construct a 3D model of the object, including object position and shape; the device receiving user request to select a content from one or more augmented, virtual, or extended reality contents and rendering a reality view of the environment.
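The slice-correlation step (cross-sections measured plane by plane, then combined into a 3D estimate) can be sketched numerically. Here each slice is an ordered boundary polygon of time-of-flight points; the shoelace formula and trapezoidal integration are standard techniques standing in for the patent's unspecified reconstruction.

```python
def polygon_area(points):
    """Shoelace area of an ordered boundary polygon measured in one slice plane."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def volume_from_slices(slices, spacing):
    """Correlate per-slice cross-sections into a volume estimate by
    trapezoidal integration along the scan axis."""
    areas = [polygon_area(s) for s in slices]
    return sum((a + b) / 2.0 * spacing for a, b in zip(areas, areas[1:]))

# Three identical 2x2 square cross-sections, 1 unit apart -> volume 8.
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
vol = volume_from_slices([square, square, square], spacing=1.0)
```

Stacking the per-slice boundaries at their known positions likewise yields the object's position and shape for the 3D model.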

Systems, methods, and computer-readable program products for controlling a robotically delivered manipulator
11612446 · 2023-03-28

Systems, methods, and computer-readable media are provided for controlling a robotically delivered manipulator. One system includes a robotic manipulator having a base and a surgical instrument holder configured to move relative to the base, a surgical instrument removably coupled to the surgical instrument holder, a user interface configured to present information related to at least one of the robotic manipulator or the surgical instrument, a gesture detection sensor configured to detect a gesture made by a user representing a desired movement of the robotic manipulator, and a controller configured to actuate the robotic manipulator in a predetermined manner corresponding to the detected gesture.
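The "predetermined manner corresponding to the detected gesture" amounts to a fixed lookup from recognized gestures to actuations. The gesture names, motion tuples, and units below are invented for illustration.

```python
# Hypothetical mapping from detected gestures to predetermined manipulator motions.
GESTURE_ACTIONS = {
    "swipe_left":  ("translate_x", -0.01),   # metres
    "swipe_right": ("translate_x", +0.01),   # metres
    "twist_cw":    ("rotate_wrist", -5.0),   # degrees
}

def command_for_gesture(gesture):
    """Return the predetermined actuation for a detected gesture, or None
    when the gesture is unrecognized (so the controller stays idle)."""
    return GESTURE_ACTIONS.get(gesture)
```

Returning None for unrecognized input is the safe default for a surgical controller: no motion occurs unless the sensor's classification matches a registered gesture exactly.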

Light field capture and rendering for head-mounted displays

Systems and methods for capturing and rendering light fields for head-mounted displays are disclosed. A mediated-reality visualization system includes a head-mounted display assembly comprising a frame configured to be mounted to a user's head and a display device coupled to the frame. An imaging assembly separate and spaced apart from the head-mounted display assembly is configured to capture light-field data. A computing device in communication with the imaging assembly and the display device is configured to receive light-field data from the imaging assembly and render one or more virtual cameras. Images from the one or more virtual cameras are presented to a user via the display device.

Left-atrial-appendage annotation using 3D images

A computer that determines at least an anatomical feature of a left atrial appendage (LAA) is described. During operation, the computer generates a 3D image associated with an individual's heart. This 3D image may present a view along a perpendicular direction to an opening of the LAA. Then, the computer may receive information specifying a set of reference locations. For example, the set of reference locations may include: a location on a circumflex artery, a location between a superior portion of the LAA and a left pulmonary vein, and/or a location on a superior wall of the LAA and distal to trabeculae carneae. Next, the computer automatically determines, based, at least in part, on the set of reference locations, at least the anatomical feature of the LAA, which is associated with the opening of the LAA and a size of a device used in an LAA closure (LAAC) procedure.
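One way the reference locations could drive a feature-and-size determination is sketched below. Both the diameter rule (largest pairwise distance) and the oversizing table are illustrative assumptions for the example, not the patent's method and not clinical guidance.

```python
import math

def ostium_diameter(reference_points):
    """Largest pairwise distance between the annotated 3D reference locations,
    taken here as a simplified diameter of the LAA opening."""
    best = 0.0
    for i, p in enumerate(reference_points):
        for q in reference_points[i + 1:]:
            best = max(best, math.dist(p, q))
    return best

def select_device_size(diameter_mm, sizes_mm=(20, 24, 27, 31, 35), oversize=1.15):
    """Pick the smallest available closure-device size at least `oversize`
    times the measured diameter (hypothetical sizing rule)."""
    target = diameter_mm * oversize
    for s in sizes_mm:
        if s >= target:
            return s
    return None  # opening too large for the available devices

refs = [(0.0, 0.0, 0.0), (21.0, 0.0, 0.0), (10.0, 5.0, 0.0)]
d = ostium_diameter(refs)          # 21.0 mm
size = select_device_size(d)       # 27
```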

HEAD-MOUNTED VISUALISATION UNIT AND VISUALISATION SYSTEM

A head-mounted visualization unit is proposed having an at least partially light-transmissive optical system, wherein the optical system has a first optical channel, which is assigned to a first eye of a user of the head-mounted visualization unit, and a second optical channel, which is assigned to a second eye of the user, wherein the first optical channel is substantially transmissive to optical radiation of a first polarization and substantially opaque to optical radiation of a second polarization, with the first polarization substantially being orthogonal to the second polarization, wherein the second optical channel is substantially transmissive to optical radiation of the second polarization and substantially opaque to optical radiation of the first polarization, wherein at least in the first optical channel a polarizer and a light attenuator are arranged, and wherein the light attenuator is arranged downstream of the polarizer in a direction toward the first eye of the user.
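The channel separation rests on standard polarization optics: a channel's polarizer passes light whose polarization matches its axis and blocks the orthogonal polarization, per Malus's law, with the downstream attenuator scaling whatever is passed. A small numeric illustration (idealized components; real polarizers leak slightly):

```python
import math

def channel_transmission(incident_pol_deg, channel_axis_deg, attenuation=1.0):
    """Fraction of incident intensity a channel passes: Malus's law
    (cos^2 of the angle between polarizations) times the attenuator factor."""
    theta = math.radians(incident_pol_deg - channel_axis_deg)
    return attenuation * math.cos(theta) ** 2

# First channel (axis 0 deg) passes first-polarization light, blocks the
# orthogonal second polarization (90 deg); the second channel does the reverse.
t_pass  = channel_transmission(0.0, 0.0, attenuation=0.8)
t_block = channel_transmission(90.0, 0.0, attenuation=0.8)
```

With orthogonal polarizations assigned to the two eyes, each eye's channel transmits its own image while suppressing the other's, which is what enables stereoscopic separation through a partially light-transmissive optic.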

AN AUTOMATED SYSTEM AND A METHOD FOR PERFORMING HAIR RESTORATION
20220346896 · 2022-11-03

The invention relates to an automated system for performing hair restoration, which comprises: automated harvest means, comprising a harvesting needle mechanism and a temporary storage means for harvested follicles; automated transplanting means, comprising a transplanting needle mechanism and a temporary storage means from which harvested follicles are extracted for transplantation; displacement means, onto which one of the automated harvest means or one of the automated transplanting means is movably connected so as to be displaced to desired harvest or transplantation target points on a patient's head; image acquisition means for acquiring images of the patient's scalp; mechanical support means for supporting and stabilizing the patient, and particularly the patient's head, during the restoration procedure; and a control module, comprising suitable data processing and storage hardware and software configured to receive and process, by image processing algorithms, the images acquired by the image acquisition means, map the patient's initial hair distribution, identify qualifying candidate follicles for transplantation, and finally generate an optimized harvest and transplantation plan considering aesthetic, efficiency, and safety optimization parameters. The control module is further configured to operate the automated harvest means, the automated transplanting means, and the displacement means to harvest and transplant hair follicles in accordance with the generated optimized harvest and transplantation plan.
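The control module's planning step (map follicles, filter qualifying candidates, emit an ordered harvest plan) can be sketched minimally. The scoring field, region labels, and graft cap are invented for illustration; a real planner would weigh aesthetic, efficiency, and safety parameters.

```python
def plan_harvest(follicles, donor_region, max_grafts):
    """Rank candidate follicles in the donor region by a quality score and
    keep the top max_grafts follicle IDs as the harvest plan (illustrative)."""
    candidates = [f for f in follicles if f["region"] == donor_region]
    candidates.sort(key=lambda f: f["quality"], reverse=True)
    return [f["id"] for f in candidates[:max_grafts]]

follicles = [
    {"id": 1, "region": "donor",     "quality": 0.9},
    {"id": 2, "region": "donor",     "quality": 0.4},
    {"id": 3, "region": "recipient", "quality": 0.8},
    {"id": 4, "region": "donor",     "quality": 0.7},
]
plan = plan_harvest(follicles, "donor", max_grafts=2)  # [1, 4]
```

The same plan would then sequence the harvest and transplanting means over the corresponding target points.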