Determining a Configuration of a Medical Robotic Arm

20180001475 · 2018-01-04

    Abstract

    A computer implemented method for determining a configuration of a medical robotic arm, wherein the configuration comprises a pose of the robotic arm and a position of a base of the robotic arm, comprising the steps of: —acquiring treatment information data representing information about the treatment to be performed by use of the robotic arm; —acquiring patient position data representing the position of a patient to be treated; and —calculating the configuration from the treatment information data and the patient position data.

    Claims

    1.-15. (canceled)

    16. A computer implemented method for determining a configuration of a medical robotic arm, wherein the configuration comprises a pose of the robotic arm and a position of a base of the robotic arm, the method executed by one or more processors and comprising the steps of: acquiring, by one or more of the processors, treatment information data representing information about the treatment to be performed by use of the robotic arm; acquiring, by one or more of the processors, patient position data representing the position of a patient to be treated; and calculating, by one or more of the processors, the configuration from the treatment information data and the patient position data.

    17. The method of claim 16, wherein the treatment information data comprises information regarding a disease to be treated.

    18. The method of claim 17, wherein the disease to be treated is defined by a class of the International Classification of Diseases.

    19. The method of claim 16, further comprising the step of acquiring constraint information data, by one or more of the processors, representing at least one of people data describing persons involved in the treatment, equipment data describing equipment used for the treatment other than the robotic arm, room data describing the room in which the treatment is performed and robot data describing properties of the robotic arm, and the step of transforming, by one or more of the processors, the constraint information data into spatial constraint data representing spatial constraints for the configuration of the robotic arm, wherein the calculation of the configuration is further based on the spatial constraint data.

    20. The method of claim 19, wherein transforming the constraint information data involves retrieving, by one or more of the processors, the spatial constraint data corresponding to the constraint information data from a database.

    21. A computer implemented method for determining a configuration of a medical robotic arm, wherein the configuration comprises a pose of the robotic arm and a position of a base of the robotic arm, the method executed by one or more processors and comprising the steps of: acquiring, by one or more of the processors, changed environment data representing a change in the environment in which the robotic arm is used; and calculating, by one or more of the processors, the configuration of the robotic arm from the changed environment data.

    22. The method of claim 21, further comprising the step of acquiring, by one or more of the processors, constraint information data representing at least one of people data describing persons involved in the treatment, equipment data describing equipment used for the treatment other than the robotic arm, room data describing the room in which the treatment is performed and robot data describing properties of the robotic arm and the step of transforming, by one or more of the processors, the constraint information data into spatial constraint data representing spatial constraints for the configuration of the robotic arm, wherein the calculation of the configuration of the robotic arm is further based on the spatial constraint data.

    23. The method of claim 21, further comprising the step of acquiring, by one or more of the processors, patient position data representing the position of a patient to be treated, wherein the calculation of the configuration of the robotic arm is further based on the patient position data.

    24. The method of claim 21, further comprising the step of acquiring, by one or more of the processors, a current configuration of the robotic arm, wherein the current configuration of the robotic arm is used as the configuration of the robotic arm if the current configuration of the robotic arm does not interfere with the changed environment.

    25. The method of claim 21, wherein the changed environment data is acquired from a medical tracking system and includes the position of an object tracked by the medical tracking system.

    26. The method of claim 21, wherein the changed environment data includes information about the beginning of a new workflow step of a workflow using the robotic arm.

    27. The method of claim 21, wherein the changed environment data includes movement data representing the movement of a device other than the robotic arm.

    28. The method of claim 21, wherein the configuration further comprises work space data representing the work space which the robotic arm is allowed to occupy.

    29. One or more non-transitory computer storage media storing instructions which, when executed by one or more processors, cause at least one of the processors to: acquire, by one or more of the processors, treatment information data representing information about a treatment to be performed by use of a robotic arm; acquire, by one or more of the processors, patient position data representing the position of a patient to be treated; and calculate, by one or more of the processors, a configuration of the robotic arm from the treatment information data and the patient position data.

    30. A system for determining a configuration of a medical robotic arm, wherein the configuration comprises a pose of the robotic arm and a position of a base of the robotic arm, the system comprising: a memory storing instructions; and one or more processors executing the instructions, the instructions causing the one or more processors to: acquire, by one or more of the processors, treatment information data representing information about the treatment to be performed by use of the robotic arm; acquire, by one or more of the processors, patient position data representing the position of a patient to be treated; and calculate, by one or more of the processors, the configuration from the treatment information data and the patient position data.

    Description

    BRIEF DESCRIPTION OF DRAWINGS

    [0086] In the following, the invention is described with reference to the enclosed figures, which represent preferred embodiments of the invention. The scope of the invention is, however, not limited to the specific features disclosed in the figures, which show:

    [0087] FIG. 1 two different configurations of a robotic arm;

    [0088] FIG. 2 a scenario in which the present invention is used;

    [0089] FIG. 3 a schematic overview of the present invention;

    [0090] FIG. 4 a table comprising spatial constraint data; and

    [0091] FIG. 5 a system according to the present invention.

    DETAILED DESCRIPTION

    [0092] FIG. 1 shows two configurations of a robotic arm 1 in an exemplary scenario. This scenario relates to a head surgery, wherein a head of a patient P1 is to be treated. The patient P1 is lying on an operating room table 2. Involved in the treatment are a surgeon P2, a nurse P3 and an anesthetist P4. Further provided are an imaging unit 3, which is for example an MR imaging unit, and a sterile barrier 4, which separates a sterile area from a non-sterile area.

    [0093] The medical robotic arm 1 comprises a base 1a and a plurality of segments, wherein two adjacent segments are connected via at least one joint. One end is attached to the base 1a, for example via at least one joint, and the other end, which is also referred to as a free end, can move in space depending on the joint positions, which represent the positions of the joints between the segments and are also referred to as the pose of the robotic arm 1. The combination of the pose of the robotic arm and the position of the base 1a of the robotic arm is called the configuration of the robotic arm 1.
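    The notion of a configuration as the combination of a pose and a base position can be sketched as a simple data structure. This is purely illustrative; the field names and the use of joint angles as the pose representation are assumptions of this sketch, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Configuration:
    """Configuration of the robotic arm: base position plus pose.

    The pose is the set of joint positions between the segments;
    the base position locates the base 1a in a room reference system.
    """
    base_position: Tuple[float, float, float]  # (x, y, z) of the base 1a
    pose: Tuple[float, ...]                    # one joint angle per joint

# Two configurations with the same pose but different base positions,
# as in the left and right parts of FIG. 1:
left = Configuration(base_position=(0.0, 1.2, 0.0), pose=(0.0, 0.5, -0.3, 1.1))
right = Configuration(base_position=(1.5, 0.4, 0.0), pose=(0.0, 0.5, -0.3, 1.1))
```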

    [0094] Each of the persons P2 to P4 involved in the treatment and of the equipment (imaging device 3 and sterile barrier 4) requires a particular spatial area during the treatment. The required areas may vary for different workflow steps of the treatment. It is therefore essential to determine a suitable configuration of the robotic arm 1, in particular a proper position of the base 1a of the robotic arm 1. The configuration of the robotic arm 1 might be different for two or more different workflow steps, but could also be the same for all workflow steps.

    [0095] The left and right parts of FIG. 1 show different positions of the base 1a of the robotic arm 1, and therefore different configurations of the robotic arm 1. For the configuration shown in the left part of FIG. 1, the position of the base 1a of the robotic arm 1 is such that all persons and parts of the equipment have enough room for performing the treatment. For the position of the base 1a of the robotic arm 1 shown in the right part of FIG. 1, to the contrary, the freedom of the surgeon P2 and the nurse P3 is limited by the robotic arm 1, which can easily lead to deteriorated results of the treatment. The configuration of the robotic arm 1 in the left part of FIG. 1 is therefore favorable over the configuration of the robotic arm 1 shown in the right part of FIG. 1.

    [0096] FIG. 2 shows scenarios similar to the one shown in FIG. 1 for two different workflow steps of a treatment of the patient P1. In the workflow step shown in the left part of FIG. 2, the free end of the robotic arm 1 is near the head of the patient P1, for example for holding a medical instrument or a medical tool.

    [0097] In the workflow step shown in the right part of FIG. 2, the free end of the robotic arm 1 is retracted and the imaging device 3 is being used for imaging the head of the patient P1. During the imaging process, the imaging device 3 moves along the inferior-superior axis of the patient P1 as indicated by the double arrow. Since the imaging device 3 emits x-ray radiation during the imaging process, the persons P2 to P4 are standing behind protective shields 5.

    [0098] FIG. 3 gives a schematic overview of the data which are processed and output by the algorithm for calculating the configuration of the robotic arm 1 according to the present invention.

    [0099] On the input side, there are treatment information data, people data, equipment data, room data, robot data, live data and patient position data. The people data, equipment data, room data and robot data can be summarized as constraint information data. They define spatial constraints on the configurations of the robotic arm 1. The constraint information data are optional, but advantageous.

    [0100] The treatment information data comprises at least one of the diagnosis, the disease, disease classification, diagnostic information, location of the treatment, regions of interest, organs at risk, medical images and processed images.

    [0101] The people data comprises information on at least one person involved in the treatment, such as the patient, a surgeon, a scrub-nurse, and an anesthetist, including the department, profession and personal preferences of the person. Patient data might include patient weight (which might cause potential bending of the OR table), patient position (for example prone versus supine versus lateral) or equipment affecting patient access (such as cables, tubes, stickers or holders).

    [0102] The equipment data comprises at least one of the operating room table type, layout, height, operating room lights, devices for anesthesia, imaging devices, treatment devices and tools used for the treatment.

    [0103] The room data comprises at least one of room number, room geometries, fixed installations, such as booms or lights, equipment flow, sterile tables or air flow.

    [0104] The robot data comprises at least one of type, size, footprint, maximum payload, work space, start options, movement options, and current state of the robotic arm 1. Start options might indicate a possible initial position like a park position, a position in the middle of the treatment volume or a position in the middle of the achievable work space of the robot (avoiding border positions which affect maximum load and/or accuracy).

    [0105] Live data comprises data from sensors, such as torque, force, speed, acceleration, gravity and gyroscopic information, and from other devices such as imaging devices and orientation devices, like cameras or ultrasound devices.

    [0106] The patient position data represent the position of the patient P1 to be treated.

    [0107] The treatment information data basically describes the part of the patient P1 to be treated, for example relative to a patient coordinate system. Since the position of the patient P1 is known from the patient position data, which defines the position of the patient in space relative to a reference, such as a reference system associated with the operating room, the spatial area to be treated is known or can be calculated in this reference system. The patient position data may be derived from the treatment information data and the position of the operating room table 2 in the reference system, for example if the position of the patient P1 on the operating room table 2 is known or can be estimated.
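    The step described above, mapping the treatment location from a patient coordinate system into the operating room reference system via the patient position data, amounts to a rigid transform. The sketch below reduces the patient position to a translation plus a single yaw rotation for brevity; the function name and parameter layout are assumptions of this illustration.

```python
import math

def to_room_coordinates(point_patient, patient_position):
    """Map a point given in the patient coordinate system into the room
    reference system, using the patient position data as a rigid transform
    (here reduced to a yaw rotation plus a translation for brevity)."""
    (tx, ty, tz), yaw = patient_position
    x, y, z = point_patient
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y + tx, s * x + c * y + ty, z + tz)

# Treatment site 0.9 m along the patient axis, with the patient rotated
# 90 degrees and offset (2.0, 1.0, 0.8) m in the room reference system:
site_room = to_room_coordinates((0.9, 0.0, 0.0), ((2.0, 1.0, 0.8), math.pi / 2))
# approximately (2.0, 1.9, 0.8) in the room system
```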

    [0108] The output of the algorithm comprises the configuration of the robotic arm 1. Optional outputs are potential movement data and/or user options data. As mentioned above, the configuration comprises a pose of the robotic arm 1 and a position of the base 1a of the robotic arm 1. However, the pose of the robotic arm 1 might be a default pose, such that the algorithm only calculates the position of the base 1a of the robotic arm. One user option could be a manual movement of the robot, for example if the robot has 7 degrees of freedom and there is a variety of possible configurations. Another user option could be that the algorithm provides a bounding box in which all robot positions would be good, and the user selects the final position within the box. In this example, the algorithm allows a multitude of good or even ideal positions within the box. Another user option could be to move either the base or the arm to achieve the desired, good or ideal position, or to combine the base position and the arm pose so as to achieve the best configuration calculated by the algorithm.
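    The bounding-box user option can be illustrated as a simple validation step: the algorithm returns a box of acceptable base positions, and the user's chosen position is accepted only if it lies inside it. Function name and return convention are assumptions of this sketch.

```python
def select_in_box(user_point, box_min, box_max):
    """User option from paragraph [0108]: every base position inside the
    bounding box is acceptable, and the user picks the final position.
    Returns the user's point if it lies in the box, otherwise None."""
    inside = all(lo <= p <= hi
                 for lo, p, hi in zip(box_min, user_point, box_max))
    return user_point if inside else None

# A user selection inside the box is accepted; one outside is rejected:
ok = select_in_box((1.0, 1.0, 0.0), (0.0, 0.0, 0.0), (2.0, 2.0, 0.5))
bad = select_in_box((3.0, 1.0, 0.0), (0.0, 0.0, 0.0), (2.0, 2.0, 0.5))
```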

    [0109] The potential movement data describe potential movement options that minimize interference of the robotic arm 1 with the patient P1, the other persons P2 to P4, the equipment 3 and 4 and further procedure steps of the treatment. They describe, for example, the transition from one pose to another. A controller of the robotic arm 1 is then not free to determine the transition, because this could lead to a violation of a constrained space. The transition is instead provided to the controller.

    [0110] The overview of FIG. 3 further comprises different loops connecting the output of the algorithm to the input of the algorithm. The first loop is the live data loop, which provides updated live data to the algorithm.

    [0111] A second loop is a procedure loop which indicates changed environment data to the algorithm. The changed environment data represents a change in the environment in which the robotic arm 1 is used. This may comprise at least one of information on already performed workflow steps of the treatment, new layouts, new positions of persons or equipment involved in the treatment and other constraints. New layouts can refer to potential repositioning of the patient during the surgical procedure, leading to a new layout of the surgery. In an abdominal case for example, access could be first through ports in the stomach wall, whereas later on access is through the colon. New layouts can further describe how equipment could be moved to a new position or could be removed as it is not needed anymore, leading to increased space for the robot. New layouts can still further describe that an additional surgeon enters the procedure for a later and/or more complex step.

    [0112] The user interaction loop provides user interaction data to the algorithm. The user interaction data may comprise at least one of options selected by the user, options confirmed by a user or information indicating that the user ignores or overrides the calculated configuration of the robotic arm 1. The user could for example use the advantage of a robot with 7 degrees of freedom to select the best option in case more than one option exists, could consider constraints not present in the algorithm or could react to unforeseen situations, complications or life threatening conditions, for example by abandoning the case, removing the robot, changing the planned treatment or adding new unknown equipment.

    [0113] With the feedback loops described above, it is possible to update a configuration of the robotic arm 1 depending on any changes occurring in the environment or the scenario in which the robotic arm 1 is used.

    [0114] The present invention also involves bringing the robotic arm 1 into the calculated configuration, for example by relocating the base 1a of the robotic arm 1 or providing the configuration to a controller of the robotic arm 1, which controls the robotic arm 1 to assume the pose comprised in the configuration.

    [0115] FIG. 4 shows a table which comprises spatial constraint data for the configuration of the robotic arm 1 depending on treatment information data and constraint information data. FIG. 4 also assigns a priority to each spatial constraint data item. FIG. 4 only shows a part of the whole table.

    [0116] In the example shown in FIG. 4, the treatment information data indicates a head surgery. For the head surgery, the table comprises a plurality of entries for different constraint information data. Spatial constraint data and a priority are assigned to each item of the constraint information data. In the present embodiment, the spatial constraint data describe a cuboid defined by a position, which is given by three coordinates (X, Y, Z), an orientation, which is given by three angles (α, β, γ), and a size given by three length values (Lx, Ly, Lz). However, the spatial constraint data may describe a spatial area of any other shape and/or by any other suitable parameters.

    [0117] In the table, different spatial constraint data are defined for a surgeon, a nurse and an anesthetist. In addition, spatial constraint data are defined for an operating room table, a sterile barrier and an imaging device.
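    One row of a table like the one of FIG. 4 can be sketched as a small record with a containment test. For brevity the sketch uses an axis-aligned cuboid and omits the orientation angles (α, β, γ); the class name, field names and the "lower value is dropped first" priority convention are assumptions of this illustration.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SpatialConstraint:
    """One entry of a table like FIG. 4: a cuboid reserved for a person
    or a piece of equipment, with an assigned priority.  Orientation is
    omitted here (axis-aligned cuboid) to keep the sketch short."""
    label: str
    position: Tuple[float, float, float]  # (X, Y, Z), one corner
    size: Tuple[float, float, float]      # (Lx, Ly, Lz)
    priority: int                         # lower value = dropped first

    def contains(self, point):
        # True if the point lies inside the reserved cuboid.
        return all(p <= q < p + l
                   for p, q, l in zip(self.position, point, self.size))

surgeon = SpatialConstraint("surgeon", (0.0, 0.0, 0.0), (1.0, 1.0, 2.0), priority=3)
```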

    [0118] FIG. 5 schematically shows a system 6 for carrying out the present invention. The system 6 comprises a computer 7 connected to an input unit 11 and an output unit 12. The input unit 11 can be any suitable input unit, such as a mouse, a keyboard, a touch screen or any other man-machine interface. The output unit 12 can be any suitable output unit such as a monitor or a projector.

    [0119] The computer 7 comprises a central processing unit 8 connected to an interface 9 and a memory unit 10. Via the interface 9, the central processing unit 8 can acquire data, such as the treatment information data, the constraint information data and the live data. The memory unit 10 can store working data, such as the acquired data, and program data which implements the method according to the present invention.

    [0120] In one embodiment, the central processing unit 8 acquires all available input data, such as the treatment information data, the patient position data and constraint information data. The central processing unit 8 then accesses a table like the one shown in FIG. 4 to determine spatial constraint data and corresponding priorities depending on the treatment information data and the constraint information data. The central processing unit 8 then combines the obtained spatial constraint data to calculate an overall spatial area into which the robotic arm 1 shall not enter. The central processing unit 8 then calculates the configuration of the robotic arm 1 from the treatment information data, the patient position data and the combined spatial area into which the robotic arm 1 shall not enter.

    [0121] If there is no suitable configuration for the robotic arm 1 under the given conditions, the central processing unit 8 repeats the process of calculating the configuration, but ignores one or more spatial constraint data with the lowest priority.
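    The procedure of paragraphs [0120] and [0121], finding a configuration that avoids every reserved area and relaxing the lowest-priority constraints when none exists, can be sketched as a search loop. The function name, the candidate-set interface and the caller-supplied `interferes` predicate are assumptions of this sketch, not the disclosed implementation.

```python
def calculate_configuration(candidates, constraints, interferes):
    """Sketch of paragraphs [0120]-[0121]: return the first candidate
    configuration that does not interfere with any spatial constraint;
    if none exists, ignore the constraints with the lowest priority and
    retry.  `candidates` is a list of candidate configurations and
    `interferes(cfg, constraint)` a caller-supplied predicate."""
    remaining = sorted(constraints, key=lambda c: c.priority)
    while True:
        for cfg in candidates:
            if not any(interferes(cfg, c) for c in remaining):
                return cfg, remaining     # feasible configuration found
        if not remaining:
            return None, []               # no configuration at all
        lowest = remaining[0].priority    # drop all lowest-priority items
        remaining = [c for c in remaining if c.priority > lowest]
```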

    [0122] In another embodiment, the central processing unit 8 acquires changed environment data and live data. In the present example, the changed environment data indicates a workflow step or a change in the workflow step, such as the change from the workflow step shown in the left part of FIG. 2 to the workflow step shown in the right part of FIG. 2. The changed environment data indicates that the imaging unit 3 is used for imaging the head of the patient P1, which includes a movement of the imaging device 3 along the inferior-superior axis of the patient P1. This implies changed spatial constraint data corresponding to the imaging device 3, such that the configuration of the robotic arm 1 is to be updated.

    [0123] The central processing unit 8 calculates the (updated) configuration of the robotic arm 1 from the changed environment data. The calculation can further be based on at least one of constraint information data, patient position data and the current configuration of the robotic arm. In the example shown in FIG. 2, the position of the base of the robotic arm 1 can be maintained, while the pose of the robotic arm 1 has to be changed in order to free the space required for the movement of the imaging device 3.

    [0124] In further examples, the changed environment data indicates changes in the constraint information data. The computer 7 is connected to a medical tracking system 13 via the interface 9. The medical tracking system 13 tracks the position of an object, for example by detecting a marker device attached to the object. If it detects a change in the position of the object, it generates corresponding changed environment data, which is acquired by the central processing unit 8 and used for calculating the configuration of the robotic arm 1.