METHODS AND APPARATUS FOR CONTROLLING A CONTINUUM ROBOT

20210260767 · 2021-08-26


    Abstract

    A continuum robot having at least two independently manipulatable bendable sections for advancing the robot through a passage without contacting fragile elements within the passage, wherein the robot incorporates control algorithms that enable it to operate and advance into the passage, as well as the systems and procedures associated with the continuum robot and said functionality.

    Claims

    1. A robotic apparatus comprising: a continuum robot including at least one bending section having a proximal end, a distal end, and a centroid along a longitudinal direction of the continuum robot; an actuator configured to manipulate the at least one bending section; a controller configured to issue a control command to the actuator; a control input device in communication with the controller, and configured to accept control input for bending of the distal end from an operator, wherein the distal end has an end-effector coordinate that has an origin at the center of the distal end and two axes perpendicular to a centroid of the continuum robot and one axis parallel to the centroid, wherein the proximal end has a robot base coordinate that has an origin at the center of the proximal end and two axes perpendicular to a centroid of the continuum robot and one axis parallel to the centroid, and wherein the controller is configured to: store a current distal orientation vector based on the robot base coordinate, read the control input from the control input device by a time interval, determine a bending input vector from the reading of the control input based on the end-effector coordinate, and issue the control command based on the current distal orientation vector and the bending input vector by the time interval.

    2. The robotic apparatus according to claim 1, further comprising: a camera probe configured to provide an endoscopic view from the distal end of the at least one bending section, and to define a camera-view coordinate that includes an origin of the camera view coordinate at the center of the endoscopic view.

    3. The robotic apparatus of claim 2, further comprising: a display configured to provide the endoscopic view from the camera probe.

    4. The robotic apparatus of claim 2, wherein the continuum robot maintains a relationship between the camera-view coordinate and the end-effector coordinate.

    5. The robotic apparatus of claim 2, wherein the control input generated by the operator is based on the endoscopic view provided by the camera probe.

    6. The robotic apparatus according to claim 2, further comprising: a view adjuster configured to rotate the endoscopic view on the display to adjust rotational orientation of the camera-view coordinate to the end-effector coordinate.

    7. The robotic apparatus according to claim 2, wherein the controller is configured to generate a computer graphic symbol of the bending input vector, and superimpose the computer graphic symbol with the endoscopic view on the display.

    8. The robotic apparatus according to claim 7, wherein the computer graphic symbol comprises an arrow/line symbol whose orientation signifies the orientation of the bending input vector and whose length signifies magnitude of the bending input vector by the time interval.

    9. The robotic apparatus according to claim 8, wherein the control input device comprises a joystick.

    10. The robotic apparatus according to claim 9, wherein the joystick is configured to accept the orientation of the bending input vector as tilting orientation, and the magnitude of the bending input vector as tilting angle.

    11. The robotic apparatus according to claim 1, wherein the bending section is configured to bend by pushing and/or pulling at least two driving wires.

    12. The robotic apparatus according to claim 11, wherein the controller is configured to: calculate a next distal orientation by adding the bending input vector to the current distal orientation; and calculate a pushing and/or pulling amount of the at least two driving wires based on kinematics of the continuum robot to achieve the next distal orientation.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0021] Further objects, features and advantages of the present invention will become apparent from the following detailed description when taken in conjunction with the accompanying figures showing illustrative embodiments of the present invention.

    [0022] FIG. 1a illustrates a three-dimensional movement model of the subject continuum robot, according to one or more embodiments of the subject apparatus, method or system.

    [0023] FIG. 1b shows a two-dimensional movement model of the subject continuum robot, according to one or more embodiments of the subject apparatus, method or system.

    [0024] FIG. 2 provides an exemplary camera view from the subject continuum robot, according to one or more embodiments of the subject apparatus, method or system.

    [0025] FIGS. 3a, 3b, and 3c provide exemplary camera views from the subject continuum robot as the robot is adjusted for proper orientation, according to one or more embodiments of the subject apparatus, method or system.

    [0026] FIG. 4 illustrates an exemplary movement model of the subject continuum robot, according to one or more embodiments of the subject apparatus, method or system.

    [0027] FIG. 5 provides a graph of an alternative camera view of the subject continuum robot, according to one or more embodiments of the subject apparatus, method or system.

    [0028] Throughout the Figures, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. In addition, reference numeral(s) including the designation "′" (e.g. 12′ or 24′) signify secondary elements and/or references of the same nature and/or kind. Moreover, while the subject disclosure will now be described in detail with reference to the Figures, it is done so in connection with the illustrative embodiments. It is intended that changes and modifications can be made to the described embodiments without departing from the true scope and spirit of the subject disclosure as defined by the appended claims.

    DETAILED DESCRIPTION OF THE DISCLOSURE

    [0029] In the subject disclosure, Applicant will first identify the nomenclature of the symbols provided in the Figures and accompanying calculations and algorithms that enable the continuum robot to operate and advance into a path. As such: [0030] ^A x: x defined in the {A} coordinate system; [0031] ^A R_{B,i}: rotation matrix of coordinate system {B} relative to {A} at the i-th attempt; [0032] e: end-effector coordinate system; [0033] bj: base coordinate system at the base of the j-th section of the robot (j=1: most proximal section of the robot); [0034] ^A p_i: directional vector of the tip of the robot defined in the {A} coordinate system at the i-th attempt; [0035] ^A θ_i: bending angle, i.e. the angle between the z_A axis and the directional vector of the tip of the robot defined in the {A} coordinate system at the i-th attempt; and [0036] ^A ζ_i: bending plane, i.e. the angle between the x_A axis and the directional vector of the tip of the robot projected onto the x_A-y_A plane, defined in the {A} coordinate system at the i-th attempt.
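    The relationship between the pair (^A θ_i, ^A ζ_i) and the tip's directional vector ^A p_i follows directly from these definitions: θ is the polar angle from the z_A axis and ζ is the azimuth in the x_A-y_A plane. A minimal sketch of this correspondence (function names are illustrative, not from the disclosure):

```python
import numpy as np

def angles_to_vector(theta, zeta):
    """Unit tip-direction vector p in frame {A} from the bending angle
    theta (measured from the z_A axis) and the bending plane angle zeta
    (measured from the x_A axis in the x_A-y_A plane)."""
    return np.array([np.sin(theta) * np.cos(zeta),
                     np.sin(theta) * np.sin(zeta),
                     np.cos(theta)])

def vector_to_angles(p):
    """Recover (theta, zeta) from a unit directional vector p."""
    theta = np.arccos(np.clip(p[2], -1.0, 1.0))  # angle from z_A axis
    zeta = np.arctan2(p[1], p[0])                # azimuth in x_A-y_A plane
    return theta, zeta
```

    This spherical-coordinate correspondence is what allows the later conversions between frames to be carried out on the directional vector rather than on the angle pair directly.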

    First-Person Control

    [0037] According to the subject innovation, tendon-driven three-section robots with an endoscopic camera at the tip of the robot are used to identify and/or diagnose various elements during medical procedures, such as, for instance, a lung nodule. The direction of the camera view in the endoscope matches the end-effector coordinate system, as can be seen in FIG. 2. The end-effector coordinate system is mapped to the direction of a controller (e.g. a joystick) through which an operator controls the robot. The operator operates the tip of the robot in the end-effector coordinate system (referred to as "First-Person Control"), wherein the joystick control is fully applicable in a 360-degree fashion, allowing a wide range of manipulation of the robot.

    [0038] When the operator tries to point the robot to the next airway (FIG. 2), the operator tilts the joystick. The amount the joystick is tilted determines the desired bending velocity (direction and speed) of the tip of the robot, ^e θ_{i+1}, defined in the end-effector coordinate system, and the direction of the joystick determines the desired bending plane of the tip of the robot, ^e ζ_{i+1}, defined in the end-effector coordinate system.
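    The joystick mapping described above can be sketched as follows; the deflection range, gain, and function name are assumptions for illustration, not specified by the disclosure:

```python
import math

def joystick_to_bending_input(jx, jy, max_increment):
    """Map a joystick deflection (jx, jy), each in [-1, 1], to a desired
    bending increment in the end-effector frame:
    - the tilt magnitude sets the bending increment theta (capped at
      max_increment, reached at full tilt);
    - the tilt direction sets the bending plane angle zeta."""
    tilt = min(math.hypot(jx, jy), 1.0)
    theta_inc = tilt * max_increment
    zeta = math.atan2(jy, jx)
    return theta_inc, zeta
```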

    [0039] Then (^e θ_{i+1}, ^e ζ_{i+1}) is converted to (^b1 θ_{i+1}, ^b1 ζ_{i+1}) using the directional vector p_{i+1} and the rotation matrix ^e R_{b1,i}, and stored in the memory of the apparatus as follows:


    ^b1 p_{i+1} = ^e R_{b1,i} ^e p_{i+1}

    Here, the b1 coordinate system is the coordinate system attached to the base of the 1st section, i.e. the coordinate system attached to the base of the entire robot (the robot-base coordinate system). The amount of tendon pulled/pushed is computed from (^b1 θ_{i+1}, ^b1 ζ_{i+1}). Note that once (^b1 θ_{i+1}, ^b1 ζ_{i+1}) are derived, the amount of tendon pulled/pushed is computed without any dependence on the posture of the other sections. The tendons are pulled/pushed by the computed amounts, and the robot is bent toward the desired direction.
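    The frame conversion and tendon computation above can be sketched as follows. The tendon step assumes the common constant-curvature model for a tendon-driven section, with dl_k = r·θ·cos(ζ − φ_k), where r is the tendon pitch radius and φ_k the angular position of tendon k around the backbone; the model, sign convention (positive = pull), and function names are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def to_base_frame(theta_e, zeta_e, R_e_to_b1):
    """Rotate the desired tip direction from the end-effector frame into
    the robot-base (b1) frame and re-extract (theta, zeta) there."""
    p_e = np.array([np.sin(theta_e) * np.cos(zeta_e),
                    np.sin(theta_e) * np.sin(zeta_e),
                    np.cos(theta_e)])
    p_b1 = R_e_to_b1 @ p_e
    theta_b1 = np.arccos(np.clip(p_b1[2], -1.0, 1.0))
    zeta_b1 = np.arctan2(p_b1[1], p_b1[0])
    return theta_b1, zeta_b1

def tendon_displacements(theta, zeta, r, tendon_angles):
    """Pull(+)/push(-) amount of each tendon for one section, under the
    constant-curvature assumption: dl_k = r * theta * cos(zeta - phi_k).
    Note this depends only on the section's own (theta, zeta), not on
    the posture of the other sections."""
    return [r * theta * np.cos(zeta - phi) for phi in tendon_angles]
```

    With three tendons spaced 120 degrees apart, the displacements sum to zero, reflecting the antagonistic push/pull arrangement.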

    [0041] Because the camera view does not have depth information, it is difficult to decide ^e θ_{i+1} as the exact angle toward the next airway. In this embodiment, the operator sets the maximal increment value of ^e θ_{i+1} at the maximal tilting of the joystick and repeats the aforementioned steps until the robot turns to the next airway, i.e., the next airway comes to the center of the camera view (FIG. 3). The operator can select the increment value of ^e θ_{i+1} by adjusting the tilting amount of the joystick during operation. As shown in FIG. 2, an arrow is overlaid on the camera view. The size and direction of the arrow indicate (^e θ_{i+1}, ^e ζ_{i+1}). If the operator is well trained, the operator can set a larger increment value of ^e θ_{i+1} through a dialog box in the GUI. The aforementioned series of repeated steps (a visual feedback loop via the operator) is especially useful in a tortuous pathway, which applies unexpected forces to the robot and deforms it.

    [0042] Accordingly, even if the motion of the robot is inaccurate due to unexpected forces, the adjustable increment value and the operator's trial-and-error steps eventually point the robot toward the correct direction for the next airway. If the increment value is not small enough, the operator can set a new value through a dialog box in the GUI. This can be automated based on the tortuosity of the airway, defined by the shape of the airway, such as the total angle of the bifurcation points.

    [0043] The increment value can also be set automatically using other information. As the robot advances deeper into the lung, the diameter of the airway becomes smaller, which requires finer control of the robot. Therefore, the increment value is automatically set based on how deep the robot has advanced into the lung, or on the number of striations of the airway.
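    One way to realize such automatic adjustment is to shrink the maximal increment with depth, here measured by the number of bifurcations passed. The scaling law, parameter names, and default floor below are illustrative assumptions, not specified by the disclosure:

```python
def auto_increment(base_increment, generation, min_increment=0.01):
    """Maximal bending increment (rad) for the current airway depth.
    The increment shrinks as the robot passes more bifurcations
    (deeper, narrower airways require finer control), with a floor
    so control input is never reduced to zero."""
    return max(base_increment / (1 + generation), min_increment)
```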

    Follow-The-Leader (“FTL”)

    [0044] The first and second sections may be controlled by Follow-The-Leader motion control. When the first and second sections pass through a bifurcation where the third section has already passed, the history of (^b1 θ_3, ^b1 ζ_3) used for the third section is applied to the other sections.
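    The history replay behind Follow-The-Leader control can be sketched as follows; the class, step-indexed history, and the assumption that sections trail the leader by a whole number of equal-length advancement steps are illustrative simplifications, not taken from the disclosure:

```python
class FTLHistory:
    """Record the (theta, zeta) commands used by the leading (3rd)
    section at each advancement step, and replay them for the trailing
    sections once they reach the same locations along the path."""

    def __init__(self):
        self.history = []  # (theta, zeta) per advancement step

    def record(self, theta, zeta):
        """Store the leader's bending command for the current step."""
        self.history.append((theta, zeta))

    def command_for(self, section_offset, step):
        """Bending command for a section trailing the leader by
        section_offset steps; straight (0, 0) before it reaches the
        recorded portion of the path."""
        idx = step - section_offset
        if 0 <= idx < len(self.history):
            return self.history[idx]
        return (0.0, 0.0)
```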

    Sampling

    [0045] When the robot reaches the target lesion, the operator collects a tissue sample using a biopsy tool. The operator may collect multiple tissue samples at various locations of the lesion.

    Embodiment 2

    [0046] In this iteration of the subject innovation, the endoscopic camera is replaced with a 6-degree-of-freedom (DOF) electromagnetic (EM) tracking sensor. By registering the position of the EM tracking sensor onto a CT image of the lung taken before the bronchoscopy, an operator obtains a virtual camera view. Similar to the camera view taken by an actual endoscopic camera described in Embodiment 1, the direction of the virtual camera view matches the end-effector coordinate system. The end-effector coordinate system is mapped to the direction of the joystick, allowing the operator to operate the tip of the robot in the end-effector coordinate system (First-Person Control).

    [0047] The combination of the EM tracking sensor and the CT image gives the operator the current position of the EM tracking sensor in the CT image and the path to the target lesion.

    [0048] By using the virtual camera view, the operator can select his/her virtual viewpoint. During navigation toward the lesion, the operator controls the tip of the robot in the end-effector coordinate system. When the robot reaches the target lesion, the operator switches his/her viewpoint from the tip of the robot (the end-effector coordinate system, viewpoint A in FIG. 4) to the base of the 3rd section (the b3 coordinate system, viewpoint B in FIG. 4) by pushing a switching button in the GUI. Using viewpoint B, set at the b3 coordinate system, the operator can virtually view both the entire body of the 3rd section and the target lesion (FIG. 5), depicted in the virtual image based on CT data. When the operator pushes the button to switch to viewpoint B, the b3 coordinate system is mapped to the direction of the joystick. This is useful for collecting samples along a concentric path on the lesion (dashed line in FIG. 5), because the user can achieve this path by just rotating the joystick (i.e. fixing ^b3 θ_{i+1} and changing ^b3 ζ_{i+1}).
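    Generating such a concentric sampling path amounts to holding θ fixed in the b3 frame while sweeping ζ through a full circle. A minimal sketch (the function name and uniform sample spacing are illustrative assumptions):

```python
import numpy as np

def concentric_sampling_commands(theta_fixed, n_samples):
    """Bending commands (theta, zeta) in the b3 frame that sweep the tip
    along a concentric path on the lesion: theta is held fixed while the
    bending plane zeta is rotated through one full revolution."""
    zetas = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    return [(theta_fixed, float(z)) for z in zetas]
```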

    [0049] Similar to Embodiment 1, (^b3 θ_{i+1}, ^b3 ζ_{i+1}) are converted to (^b1 θ_{i+1}, ^b1 ζ_{i+1}) using the directional vector p_{i+1} and the rotation matrix ^b3 R_{b1,i}.


    ^b1 p_{i+1} = ^b3 R_{b1,i} ^b3 p_{i+1}

    [0050] Then, the amount of tendon pulled/pushed is computed from (^b1 θ_{i+1}, ^b1 ζ_{i+1}). The tendons are pulled/pushed based on the computed amounts, and the robot is bent toward the desired direction to collect the tissue.

    [0051] Not limited to switching from the end-effector coordinate system to the b3 coordinate system, the operator can select the viewpoint based on his/her objective. When the operator selects a viewpoint, the coordinate system is automatically mapped to the direction of the joystick.