A61B1/04

DYNAMIC SCALING FOR A ROBOTIC SURGICAL SYSTEM

A robotic surgical system that applies a scaling factor between user input from a user input device and corresponding movements of the robotic manipulator. Scaling factors may be applied or adjusted based on detected conditions such as the type of instrument being manipulated, the detected distance between multiple instruments being manipulated, or user biometric parameters.
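The input-to-manipulator mapping described above can be sketched in a few lines. This is a hypothetical illustration only: the instrument name, distance threshold, and scale values are assumptions, not values from the disclosure.

```python
def scaled_motion(input_delta, instrument_type=None, tip_distance_mm=None):
    """Map a user-input displacement to a manipulator displacement.

    The scale factor shrinks for delicate instruments or when two
    instrument tips are detected close together, mimicking the
    dynamic-scaling behaviour described in the abstract.
    """
    scale = 1.0
    if instrument_type == "microsurgical":
        scale *= 0.3   # finer control for a delicate instrument (assumed value)
    if tip_distance_mm is not None and tip_distance_mm < 10.0:
        scale *= 0.5   # slow down when instruments are close (assumed threshold)
    return input_delta * scale
```

A biometric condition (e.g. detected hand tremor) could adjust `scale` in the same way.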

RECEIVER COMPRISING COILS FOR WIRELESSLY RECEIVING POWER

A receiver (6) is disclosed for wirelessly receiving power from a transmitter. The receiver comprises a resonant receiver circuit having a plurality of coils (200a)-(200d) operatively coupled to a combining circuit (202). Each coil, with the combining circuit, is arranged to receive power via resonant inductive coupling. The combining circuit is arranged to combine power received from the plurality of coils for provision to an electric load. Other embodiments provide a capsule for ingestion by a patient, the capsule comprising the receiver.

MEDICAL IMAGE GENERATION APPARATUS, MEDICAL IMAGE GENERATION METHOD, AND MEDICAL IMAGE GENERATION PROGRAM

To generate a medical image with high visibility in fluorescence observation, a medical image generation apparatus (100) according to the present application includes an acquisition unit (131), a calculation unit (132), and a generation unit (134). The acquisition unit (131) acquires a first medical image captured with fluorescence of a predetermined wavelength and a second medical image captured with fluorescence of a wavelength different from the predetermined wavelength. The calculation unit (132) calculates, for the first and second medical images acquired by the acquisition unit (131), a degree of scattering indicating the degree of blurring of fluorescence of a living body. The generation unit (134) generates an output image on the basis of at least one of the degrees of scattering calculated by the calculation unit (132).
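One way to read the scattering-based generation step is as a blur metric per wavelength image followed by a weighted combination favouring the less-scattered image. The metric and weighting below are assumptions for illustration, not the apparatus's actual calculation.

```python
import numpy as np

def degree_of_scattering(image):
    """Estimate blur ('scattering') as the inverse of mean gradient energy.

    Hypothetical stand-in for the abstract's degree of scattering:
    higher return value means a more blurred fluorescence image.
    """
    gy, gx = np.gradient(image.astype(float))
    energy = np.mean(gx ** 2 + gy ** 2)
    return 1.0 / (1.0 + energy)

def generate_output(img_a, img_b):
    """Weight each wavelength's image by the other's scattering degree,
    so the sharper (less scattered) image dominates the output."""
    sa, sb = degree_of_scattering(img_a), degree_of_scattering(img_b)
    wa, wb = sb / (sa + sb), sa / (sa + sb)
    return wa * img_a + wb * img_b
```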

ENDOSCOPE HOST AND ENDOSCOPE DEVICE FOR INTELLIGENTLY DETECTING ORGANS
20230047334 · 2023-02-16 ·

Disclosed are an endoscope host and an endoscope device for intelligently detecting organs. The device includes a main body having a connection channel for inserting an endoscope tube, and a drive connection part and an electrical connection identification part having a first electrical connection point and a second electrical connection point, respectively. When the endoscope tube is inserted into the connection channel, the tube is electrically conducted with the first electrical connection point and the second electrical connection point to generate a driving signal and a type signal, respectively. An organ identification unit stores an organ comparison table and compares the type signal against the table to obtain the organ type of the endoscope tube and generate an execution signal. A processing unit installed in the main body receives the driving signal and the type signal and displays a result image according to the execution signal.
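The organ-identification step amounts to a table lookup on the type signal. A minimal sketch, in which the signal codes and organ names are invented for illustration:

```python
# Hypothetical organ comparison table: type-signal code -> organ type.
# The codes and entries are illustrative, not from the disclosure.
ORGAN_TABLE = {
    0x01: "stomach",
    0x02: "colon",
    0x03: "bronchus",
}

def identify_organ(type_signal):
    """Return (organ_type, execution_signal) for a recognised endoscope
    tube, or (None, None) when the code is not in the comparison table."""
    organ = ORGAN_TABLE.get(type_signal)
    if organ is None:
        return None, None
    return organ, f"display:{organ}"
```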

APPARATUS FOR TREATING OBESITY
20230046613 · 2023-02-16 ·

An apparatus for treating obesity in a human or animal patient. The apparatus comprises a first volume filling device segment and a second volume filling device segment. The first and second volume filling device segments are adapted to be assembled into an implantable volume filling device of a controlled size. Each of the first and second volume filling device segments comprises at least one interconnecting structure. The interconnecting structure of the second volume filling device segment is adapted to be form fitted, such that the first and second volume filling device segments can be assembled into the volume filling device. The assembled volume filling device is adapted to be at least substantially invaginated by a stomach wall portion of the patient, with the outer surface of the device resting against the stomach wall, such that the volume of the food cavity is reduced in size.

ENDOSCOPIC VESSEL HARVESTING WITH THERMAL MANAGEMENT AND AUGMENTED REALITY DISPLAY

A vessel harvesting system removes a target vessel from a patient for use as a bypass. An elongated harvesting instrument is inserted into the body along the path of a target vessel which includes at least one side branch. The harvesting instrument includes a cutter for applying thermal energy to sever and cauterize the side branch. An endoscopic camera captures visible-light images from a distal tip of the instrument within a dissected tunnel around the target vessel. A thermal camera captures thermograms coinciding with the visible-light images to characterize the temperature present at respective surfaces in the tunnel. An image processor (e.g., an electronic controller) renders a video stream including the visible-light images and an overlay depicting the temperatures present on at least some of the respective surfaces when applying the thermal energy. A display presenting the video stream and overlay to a user can be an augmented-reality display.
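The temperature overlay can be illustrated as a per-pixel blend driven by the thermogram. The threshold, tint colour, and blend weight below are assumptions for the sketch, not values from the disclosure:

```python
import numpy as np

def render_overlay(visible_rgb, thermogram, threshold_c=60.0, alpha=0.5):
    """Blend a red warning tint over visible-light pixels whose surface
    temperature exceeds a threshold, producing the depicted overlay."""
    out = visible_rgb.astype(float).copy()
    hot = thermogram > threshold_c           # boolean mask of hot surfaces
    red = np.array([255.0, 0.0, 0.0])
    out[hot] = (1.0 - alpha) * out[hot] + alpha * red
    return out.astype(np.uint8)
```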

Objective optical system for endoscope
11576564 · 2023-02-14 ·

An objective optical system for endoscope consists of a front group having a negative refractive power, an aperture stop, and a rear group having a positive refractive power. The front group consists of a first lens having a negative refractive power and a second lens having a positive refractive power. The rear group consists of a third lens having a positive refractive power, a cemented lens of a fourth lens having a positive refractive power and a fifth lens having a negative refractive power, and a sixth lens having a positive refractive power. A shape of the second lens is a meniscus shape having a convex surface directed toward an image side. The sixth lens is cemented to a plane parallel plate, and the following conditional expressions (1) and (2) are satisfied:
1.0<f3/d6<2.8  (1)
1.7<|f/f1|<10  (2).
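The two conditional expressions can be checked numerically. The symbol readings here are assumptions: f3 the third-lens focal length, d6 an axial distance, f the overall focal length, and f1 the (negative) first-lens focal length; the sample values are invented.

```python
def satisfies_conditions(f3, d6, f, f1):
    """Check conditional expressions (1) 1.0 < f3/d6 < 2.8 and
    (2) 1.7 < |f/f1| < 10 from the abstract."""
    cond1 = 1.0 < f3 / d6 < 2.8
    cond2 = 1.7 < abs(f / f1) < 10.0
    return cond1 and cond2
```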

Machine-learning-based visual-haptic system for robotic surgical platforms

Embodiments described herein provide various examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. This process can begin by receiving a set of training videos. The process then processes each training video in the set to extract one or more video segments that depict a target tool-tissue interaction, wherein the target tool-tissue interaction involves a force exerted by one or more surgical tools on a tissue. Next, for each extracted video segment, the process annotates each video image in the segment with one of a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
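The segmentation and annotation stages of the pipeline above can be sketched as follows. The frame predicate and force-labelling function are stand-ins supplied by the caller; the actual detection and labelling methods are not specified here.

```python
def extract_segments(video, is_interaction_frame):
    """Group consecutive frames depicting the target tool-tissue
    interaction into segments (lists of frames)."""
    segments, current = [], []
    for frame in video:
        if is_interaction_frame(frame):
            current.append(frame)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

def annotate(segment, force_level_of):
    """Pair each frame with one of the predefined force levels,
    producing (frame, label) training examples."""
    return [(frame, force_level_of(frame)) for frame in segment]
```

The annotated pairs would then feed a supervised training step for the visual-haptic model.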