FEEDFORWARD CONTINUOUS POSITIONING CONTROL OF END-EFFECTORS
20220125524 · 2022-04-28
Inventors
CPC classification
A61B8/12
HUMAN NECESSITIES
G06T7/246
PHYSICS
A61B34/20
HUMAN NECESSITIES
G16H20/40
PHYSICS
G05B19/423
PHYSICS
A61B90/37
HUMAN NECESSITIES
A61B2034/107
HUMAN NECESSITIES
G05B2219/39286
PHYSICS
B25J9/1607
PERFORMING OPERATIONS; TRANSPORTING
G06N3/082
PHYSICS
A61B2034/2061
HUMAN NECESSITIES
A61B2090/064
HUMAN NECESSITIES
A61B2034/301
HUMAN NECESSITIES
International classification
A61B34/20
HUMAN NECESSITIES
A61B90/00
HUMAN NECESSITIES
Abstract
A positioning controller (50) including a forward predictive model (60) and/or an inverse control predictive model (70) for positioning control of an interventional device (30) including a device portion (40), such as an end-effector. In operation, the controller (50) may apply the forward predictive model (60) to a commanded positioning motion of the interventional device (30) to render a predicted navigated pose of the device portion (40), and generate positioning data informative of a positioning by the interventional device (30) of the device portion (40) to a target pose based on the predicted navigated pose of the device portion (40). Alternatively or additionally, the controller (50) may apply the control predictive model (70) to the target pose of the device portion (40) to render a predicted positioning motion of the interventional device (30), and generate positioning commands controlling a positioning by the interventional device (30) of the device portion (40) to the target pose based on the predicted positioning motion of the interventional device (30).
Claims
1. A positioning controller for an interventional device including a device portion, the positioning controller comprising: a memory including at least one of: a forward predictive model configured with embedded kinematics of the interventional device to receive commanded positioning motion of the interventional device and to output data related to a prediction of a navigated pose of the device portion, and a control predictive model configured with kinematics of the interventional device to receive target pose data of the interventional device and to output data related to a prediction of a positioning motion of the interventional device; and at least one processor in communication with the memory, wherein the at least one processor is configured to at least one of: (i) apply the forward predictive model to a commanded positioning motion of the interventional device to render a predicted navigated pose of the device portion, and generate positioning data informative of a positioning by the interventional device of the device portion to a target pose based on the predicted navigated pose of the device portion; and (ii) apply the control predictive model to the target pose of the device portion to render a predicted positioning motion of the interventional device, and generate positioning commands controlling a positioning by the interventional device of the device portion to the target pose based on the predicted positioning motion of the interventional device.
2. The positioning controller of claim 1, wherein the forward predictive model has been or is trained on forward kinematics of the interventional device, and/or the control predictive model is an inverse predictive model which has been or is trained on inverse kinematics of the interventional device.
3. The positioning controller of claim 1, wherein the device portion is an end-effector of the interventional device.
4. The positioning controller of claim 1, further configured to generate said positioning data and/or said positioning commands continuously, such that the positioning controller operates as a continuous positioning controller.
5. The positioning controller of claim 1, wherein the forward predictive model includes: a neural network base having an input layer configured to input joint variables of the interventional device representative of the commanded positioning motion of the interventional device, and an output layer configured to output at least one of a translation, a rotation and a pivoting of the device portion derived from a regression of the joint variables of the interventional device, wherein the at least one of the translation, the rotation and the pivoting of the device portion infers the predicted navigated pose of the device portion.
6. The positioning controller of claim 1, wherein the control predictive model includes: a neural network base having an input layer configured to input at least one of a translation, a rotation and a pivoting of the device portion, and an output layer configured to output joint variables of the interventional device derived from a regression of at least one of the translation, the rotation and the pivoting of the device portion, wherein the joint variables of the interventional device infer the predicted positioning motion of the interventional device.
7. The positioning controller of claim 1, wherein the forward predictive model includes: a neural network base having an input layer configured to input joint velocities of the interventional device representative of the commanded positioning motion of the interventional device, and an output layer configured to output at least one of a linear velocity and an angular velocity of the device portion derived from a regression of the joint velocities of the interventional device, wherein the at least one of the linear velocity and the angular velocity of the device portion infers the predicted navigated pose of the device portion.
8. The positioning controller of claim 1, wherein the control predictive model includes: a neural network base having an input layer configured to input at least one of a linear velocity and an angular velocity of the device portion to the target pose, and an output layer configured to output joint velocities of the interventional device derived from a regression of at least one of the linear velocity and the angular velocity of the device portion to the target pose, wherein the joint velocities of the interventional device infer the predicted positioning motion of the interventional device.
9. The positioning controller of claim 1, wherein the forward predictive model includes: a neural network base having an input layer configured to input a preceding sequence of shapes of the interventional device representative of the commanded positioning motion of the interventional device, and an output layer configured to output a succeeding sequence of shapes of the interventional device derived from a time series prediction of the preceding sequence of shapes of the interventional device, wherein the succeeding sequence of shapes of the interventional device infers the predicted navigated pose of the device portion.
10. The positioning controller of claim 1, wherein the forward predictive model includes: a neural network base having an input layer configured to input a preceding sequence of shapes of the interventional device representative of the commanded positioning motion of the interventional device, and an output layer configured to output a succeeding shape of the interventional device derived from a time series prediction of the preceding sequence of shapes of the interventional device, wherein the succeeding shape of the interventional device infers the predicted navigated pose of the device portion.
11. The positioning controller of claim 2, wherein at least one of: the forward predictive model is further trained on at least one navigation parameter of the interventional device auxiliary to the forward kinematics of the interventional device predictive of the pose of the device portion, and the control predictive model is further trained on the at least one navigation parameter of the interventional device auxiliary to the inverse kinematics of the interventional device predictive of the positioning motion of the interventional device; and wherein the at least one processor is configured to at least one of: (i′) apply the forward predictive model to both the commanded positioning motion of the interventional device and the at least one navigation parameter auxiliary to the forward kinematics of the interventional device to render the predicted navigated pose of the device portion; and (ii′) apply the control predictive model to both the target pose of the device portion and the at least one navigation parameter auxiliary to the inverse kinematics of the interventional device to render the predicted positioning motion of the interventional device.
12. The positioning controller of claim 1, wherein at least one of: the forward predictive model is configured to further receive at least one auxiliary navigation parameter of the interventional device, and further process it to output the prediction of navigated pose of the device portion, and the control predictive model is configured to further receive at least one auxiliary navigation parameter of the interventional device, and further process it to output the prediction of positioning motion of the interventional device; and wherein the at least one processor is configured to at least one of: (i′) apply the forward predictive model to both the commanded positioning motion of the interventional device and at least one auxiliary navigation parameter to render the predicted navigated pose of the device portion; and (ii′) apply the control predictive model to both the target pose of the device portion and at least one auxiliary navigation parameter to render the predicted positioning motion of the interventional device.
13. A machine-readable storage medium, encoded with instructions for execution by at least one processor to control an interventional device including a device portion, the machine-readable storage medium storing: at least one of: a forward predictive model configured with kinematics of the interventional device to receive commanded positioning motion of the interventional device and to output data related to a prediction of a navigated pose of the device portion, and a control predictive model configured with kinematics of the interventional device to receive target pose data of the interventional device and to output data related to a prediction of a positioning motion of the interventional device; and instructions to at least one of: (i) apply the forward predictive model to a commanded positioning motion of the interventional device to render a predicted navigated pose of the device portion, and generate positioning data informative of a positioning by the interventional device of the device portion to a target pose based on the predicted navigated pose of the device portion; and (ii) apply the control predictive model to the target pose of the device portion to render a predicted positioning motion of the interventional device, and generate positioning commands controlling a positioning by the interventional device of the device portion to the target pose based on the predicted positioning motion of the interventional device.
14. A positioning method executable by a positioning controller for an interventional device including a device portion, the positioning controller storing at least one of: a forward predictive model configured with embedded kinematics of the interventional device to receive commanded positioning motion of the interventional device and to output data related to a prediction of a navigated pose of the device portion, and a control predictive model configured with kinematics of the interventional device to receive target pose data of the interventional device and to output data related to a prediction of a positioning motion of the interventional device, wherein the positioning method comprises the positioning controller executing at least one of: (i) applying the forward predictive model to a commanded positioning motion of the interventional device to render a predicted navigated pose of the device portion, and generating positioning data informative of a positioning by the interventional device of the device portion to a target pose based on the predicted navigated pose of the device portion; and (ii) applying the control predictive model to the target pose of the device portion to render a predicted positioning motion of the interventional device, and generating positioning commands controlling a positioning by the interventional device of the device portion to the target pose based on the predicted positioning motion of the interventional device.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0062] The present disclosure is applicable to numerous and various applications that require continuous position control of an end-effector. Examples of such applications include, but are not limited to, minimally-invasive procedures (e.g., endoscopic hepatectomy, necrosectomy, prostatectomy, etc.), video-assisted thoracic surgery (e.g., lobectomy, etc.), minimally-invasive vascular procedures (e.g., via catheters, sheaths, deployment systems, etc.), minimally-invasive medical diagnostic procedures (e.g., endoluminal procedures via endoscopes or bronchoscopes), orthopedic procedures (e.g., via k-wires, screwdrivers, etc.) and non-medical applications.
[0063] The present disclosure improves upon continuous position control of an end-effector during such applications by providing a prediction of poses of the end-effector and/or positioning motions of the interventional device that may be utilized to control and/or corroborate a manual navigated positioning or an automated navigated positioning of the end-effector.
[0064] To facilitate an understanding of the present disclosure, the following description of
[0065] Referring to
[0066] In operation, navigation command(s) 31, actuation signal(s) 32 and/or navigation force(s) 33 are communicated to/imposed onto interventional device 30 whereby interventional device 30 is translated, rotated and/or pivoted in accordance with the navigation command(s) 31, actuation signal(s) 32 and/or navigation force(s) 33 to thereby navigate end-effector 40 to a target pose (i.e., a location and an orientation in the application space).
[0067] For example,
[0068] More particularly, TEE probe 130 includes flexible elongate member 131, handle 132 and imaging end-effector 140. The flexible elongate member 131 is sized and/or shaped, structurally arranged, and/or otherwise configured to be positioned within a body lumen of a patient, such as an esophagus. The imaging end-effector 140 is mounted at a distal end of the member 131 and includes one or more ultrasound transducer elements whereby imaging end-effector 140 is configured to emit ultrasonic energy towards an anatomy (e.g., the heart) of the patient P. The ultrasonic energy is reflected by the patient's vasculature and/or tissue structures whereby the ultrasound transducer elements in the imaging end-effector 140 receive the reflected ultrasound echo signals. In some embodiments, the TEE probe 130 may include an internal or integrated processing component that can process the ultrasound echo signals locally to generate image signals representative of the patient P's anatomy under imaging. In practice, the ultrasound transducer element(s) may be arranged to provide two-dimensional (2D) images or three-dimensional (3D) images of the patient P's anatomy. The images acquired by the TEE probe 130 are dependent on the depth of insertion, the rotation, and/or the tilt of the imaging end-effector 140, as described in greater detail herein.
[0069] The handle 132 is coupled to a proximal end of the member 131. The handle 132 includes control elements for navigating the imaging end-effector 140 to the target pose. As shown, the handle 132 includes knobs 133 and 134, and a switch 135. The knob 133 flexes the member 131 and the imaging end-effector 140 along an anterior-posterior plane of the patient P (e.g., heart). The knob 134 flexes the member 131 and the imaging end-effector 140 along a left-right plane of the patient P. The switch 135 controls beamforming at the imaging end-effector 140 (e.g., adjusting an angle of an imaging plane).
[0070] In a manual navigated embodiment, the practitioner manually dials the knobs 133 and 134 and/or manually turns the switch 135 on and/or off as needed to navigate imaging end-effector 140 to the target pose. The practitioner may receive a display of the images generated by imaging end-effector 140 to thereby impose navigation forces 33 (
[0071] In an automated navigated embodiment, a robotic system (not shown) may include electrical and/or mechanical components (e.g., motors, rollers, and gears) configured to dial the knobs 133 and 134 and/or turn the switch 135 on and/or off whereby robot controller 100 may receive motion control commands 31 (
[0072] The TEE probe 130 is maneuverable in various degrees of freedom.
[0077] By further example of an exemplary embodiment of interventional device 30 (
[0078] In a manual navigated embodiment, the practitioner manually imposes navigation forces 33 (
[0079] In an automated navigated embodiment, a robotic system (not shown) includes electrical and/or mechanical components (e.g., motors, rollers, and gears) configured to steer the links 231 of robot 230 whereby robot controller 101 receives motion control commands 32 (
[0080] By further example of an exemplary embodiment of interventional device 30 (
[0081] To further facilitate an understanding of the present disclosure, the following description of
[0082] Additionally, TEE probe 130 (
[0083] Referring to
[0084] Specifically, as shown in
[0085] An execution of state S92 of continuous positioning state machine 90 results in a generation of navigation data 34 and an optional generation of auxiliary data 35. Generally, in practice, navigation data 34 will be in the form of navigation command(s) 31, actuation signal(s) 32 and/or navigation force(s) 33 communicated to/imposed onto interventional device 30, and auxiliary data 35 will be in the form of image(s) of interventional device 30 and/or end-effector 40, operational characteristics of interventional device 30 (e.g., shape, strain, twist, temperature, etc.) and operational characteristics of end-effector 40 (e.g., pose, forces, etc.).
[0086] In response thereto, a state S94 of continuous positioning state machine 90 encompasses continuous positioning control by continuous positioning controller 50 of the navigation of interventional device 30 and end-effector 40 in accordance with the state S92. To this end, continuous positioning controller 50 employs a forward predictive model 60 of the present disclosure, an inverse predictive model 70 of the present disclosure and/or an imaging predictive model 80 of the present disclosure.
[0087] In practice, as will be further explained in the present disclosure, forward predictive model 60 may be any type of machine learning model or equivalent for a regression of a positioning motion of the interventional device 30 to a navigated pose of end-effector 40 (e.g., a neural network) that is suitable for the particular type of interventional device 30 being utilized in the particular type of application being implemented, whereby forward predictive model 60 is trained on the forward kinematics of interventional device 30 that is predictive of a navigated pose of end-effector 40.
[0088] In operation, forward predictive model 60 inputs navigation data 34 (and auxiliary data 35 if communicated) associated with a manual navigation or an automated navigation of interventional device 30 to thereby predict a navigated pose of end-effector 40 corresponding to the navigation of interventional device 30, and outputs continuous positioning data 51 informative of a positioning by interventional device 30 of the end-effector 40 to the target pose based on the predicted navigated pose of end-effector 40. Continuous positioning data 51 may be utilized in state S92 as a control to determine an accuracy of, and/or execute a recalibration of, the manual navigation or the automated navigation of interventional device 30 to position end-effector 40 to the target pose.
[0089] In practice, as will be further explained in the present disclosure, inverse predictive model 70 may be any type of machine learning model or equivalent for a regression of a target pose of end-effector 40 to a positioning motion of the interventional device 30 (e.g., a neural network) that is suitable for the particular type of interventional device 30 being utilized in the particular type of application being implemented, whereby inverse predictive model 70 is trained on the inverse kinematics of interventional device 30 that is predictive of a positioning motion of the interventional device 30.
[0090] In operation, inverse predictive model 70 inputs navigation data 34 (and auxiliary data 35 if communicated) associated with a target pose of end-effector 40 to thereby predict a positioning motion of interventional device 30 for positioning end-effector 40 to the target pose, and outputs continuous positioning commands 52 for controlling a positioning by interventional device 30 of end-effector 40 to the target pose based on the predicted positioning motion of interventional device 30. Continuous positioning commands 52 may be utilized in state S92 as a control to execute the manual navigation or the automated navigation of interventional device 30 to position end-effector 40 to the target pose.
[0091] In practice, as will be further explained in the present disclosure, imaging predictive model 80 may be any type of machine learning model or equivalent for a regression of relative imaging by the end-effector 40 to a navigated pose of end-effector 40 (e.g., a neural network or a scale invariant feature transform network) that is suitable for the particular type of end-effector 40 being utilized in the particular type of application being implemented, whereby imaging predictive model 80 is trained on a correlation of relative imaging by end-effector 40 and forward kinematics of the interventional device 30 that is predictive of a navigated pose of end-effector 40.
[0092] In operation, imaging predictive model 80 inputs auxiliary data 35 in the form of images generated by the end-effector 40 at one or more poses to thereby predict the navigated pose of end-effector 40 as feedback data informative of a corrective positioning by interventional device 30 of the end-effector 40 to a target pose. The feedback data is utilized in a closed loop of state S94 to generate a differential between a target pose of end-effector 40 and the predicted navigated pose of the end-effector 40 whereby inverse predictive model 70 may input the differential to predict a corrective positioning motion of interventional device 30 to reposition end-effector 40 to the target pose.
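The closed loop of state S94 can be sketched as follows. This is an illustrative simplification, not the disclosed implementation: `imaging_model`, `inverse_model` and `apply_motion` are hypothetical stand-ins for imaging predictive model 80 (image to pose), inverse predictive model 70 (pose differential to joint motion) and the device actuation, and poses are treated as plain vectors.

```python
def closed_loop_position(target_pose, imaging_model, inverse_model, apply_motion,
                         tol=1e-3, max_iter=100):
    """Iteratively reposition the end-effector until the imaged pose matches
    the target pose within `tol`, mirroring the corrective loop of state S94."""
    for _ in range(max_iter):
        pose = imaging_model()  # predicted navigated pose from images
        diff = [t - p for t, p in zip(target_pose, pose)]  # pose differential
        if max(abs(d) for d in diff) < tol:
            return pose  # target reached
        apply_motion(inverse_model(diff))  # corrective positioning motion
    return imaging_model()
```

In this sketch the loop terminates once the predicted navigated pose is within `tol` of the target, at which point no further corrective motion is commanded.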
[0093] In practice, an embodiment of continuous positioning controller 50 may employ forward predictive model 60, inverse predictive model 70 and/or imaging predictive model 80.
[0094] For example, an embodiment of continuous positioning controller 50 may employ only forward predictive model 60 to facilitate a display of an accuracy of a manual navigation or the automated navigation of interventional device 30 to position end-effector 40 to the target pose.
[0095] More particularly, a user interface is provided to display an image of an attempted navigation of end-effector 40 to the target pose and an image of the predicted navigated pose of end-effector 40 by forward predictive model 60. A confidence ratio of the prediction is shown to the user. To evaluate prediction uncertainty, multiple feedforward iterations of forward predictive model 60 are performed with dropout enabled stochastically as known in the art of the present disclosure.
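The stochastic-dropout uncertainty evaluation mentioned above is commonly realized as Monte Carlo dropout: the same input is passed through the model many times with dropout left active, and the spread of the outputs yields a confidence measure. Below is a minimal self-contained sketch; simulating dropout by zeroing inputs, and the function name, are illustrative assumptions standing in for stochastic layers inside the network.

```python
import math
import random

def mc_dropout_predict(model, x, n_iter=50, dropout_p=0.2):
    """Run `model` n_iter times with stochastic dropout and summarize the
    spread of the predictions as (mean, standard deviation) per output."""
    samples = []
    for _ in range(n_iter):
        # Zero each input with probability dropout_p (stand-in for dropout layers).
        x_drop = [v if random.random() > dropout_p else 0.0 for v in x]
        samples.append(model(x_drop))
    dims = len(samples[0])
    mean = [sum(s[d] for s in samples) / n_iter for d in range(dims)]
    var = [sum((s[d] - mean[d]) ** 2 for s in samples) / n_iter for d in range(dims)]
    std = [math.sqrt(v) for v in var]  # larger spread implies lower confidence
    return mean, std
```

A confidence ratio such as the one displayed to the user could then be derived from the standard deviation relative to the mean prediction.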
[0096] By further example, an embodiment of continuous positioning controller 50 may only employ inverse predictive model 70 to command a manual navigation or an automated navigation of interventional device 30 to position end-effector 40 to the target pose.
[0097] By further example, an embodiment of continuous positioning controller 50 may only employ imaging predictive model 80 to provide feedback data informative of a corrective positioning by interventional device 30 of the end-effector 40 to a target pose.
[0098] By further example, an embodiment of continuous positioning controller 50 may employ forward predictive model 60 and inverse predictive model 70 to thereby command a manual navigation or an automated navigation of interventional device 30 to position end-effector 40 to the target pose and to display an accuracy of the manual navigation or the automated navigation of interventional device 30 in positioning end-effector 40 to the target pose.
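A controller employing both models might cross-check them as sketched below; the function name and tolerance are hypothetical, but the flow follows the paragraph above: the inverse model proposes a positioning motion and the forward model verifies where that motion would place the end-effector.

```python
def verified_command(target_pose, inverse_model, forward_model, tol=1e-2):
    """Propose joint motion via the inverse model, then verify it by running
    the forward model on the proposal and comparing against the target pose.

    Returns (proposed_motion, ok) where `ok` is True when the forward
    prediction lands within `tol` of the target in every pose component."""
    q = inverse_model(target_pose)        # predicted positioning motion
    t_hat = forward_model(q)              # predicted navigated pose for q
    ok = max(abs(a - b) for a, b in zip(t_hat, target_pose)) <= tol
    return q, ok
```

When `ok` is False, the controller could flag the discrepancy to the practitioner rather than executing the motion.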
[0099] By further example, an embodiment of continuous positioning controller 50 may employ inverse predictive model 70 and imaging predictive model 80 to thereby command a manual navigation or an automated navigation of interventional device 30 to position end-effector 40 to the target pose and to provide feedback data informative of a corrective positioning by interventional device 30 of the end-effector 40 to the target pose.
[0100] To further facilitate an understanding of the present disclosure, the following description of
[0101]
[0102] More particularly, referring to
[0103] In practice, training dataset D is a collection of expert data with reasonable coverage of different navigations of interventional device 30. To this end, the diverse training dataset D for learning should incorporate manufacturing differences between various types of robots, performance characteristics, wear and tear of the hardware components, and other system-dependent and independent factors, such as the temperature or humidity of the environment in which the robot operates.
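A record in such a dataset D might look like the following illustrative schema; the field names are assumptions, since the disclosure only requires coverage of diverse navigations plus system-dependent and environmental factors.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrainingSample:
    """One expert demonstration record for training dataset D (hypothetical
    schema capturing the factors named in the disclosure)."""
    joints: List[float]                    # commanded joint variables Q
    pose: List[float]                      # ground-truth end-effector pose T
    robot_id: str = "unknown"              # captures manufacturing variation
    temperature_c: Optional[float] = None  # environment in which robot operates
    humidity_pct: Optional[float] = None
```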
[0104] Referring to
[0105] In one embodiment as shown in
[0106] In practice, the combination of layers is configured to implement a regression of joint variables Q to pose T̂.
[0107] In one embodiment for implementing a regression of joint variables Q to pose T̂, neural network base 160a includes a set of N fully connected layers 163a.
[0108] In a second embodiment for implementing a regression of joint variables Q to pose T̂, neural network base 160a includes a set of N convolutional layers 164a followed by either a set of M fully connected layers 163a, a set of W recurrent layers 165a or a set of W long short-term memory layers 166a.
[0109] In a third embodiment for implementing a regression of joint variables Q to pose T̂, neural network base 160a includes a set of N convolutional layers 164a followed by a combination of a set of M fully connected layers 163a and either a set of W recurrent layers 165a or a set of W long short-term memory layers 166a.
[0110] In practice, a fully connected layer 163a may include K neurons, where N, M, W and K may be any positive integer, and values may vary depending on the embodiment. For example, N may be about 8, M may be about 2, W may be about 2, and K may be about 1000. Also, a convolutional layer 164a may implement a non-linear transformation, which may be a composite function of operations (e.g., batch normalization, rectified linear units (ReLU), pooling, dropout and/or convolution), and a convolutional layer 164a may also include a non-linearity function (e.g., rectified non-linear ReLU operations) configured to extract rectified feature maps.
[0111] Further in practice, one of the layers 163a or 164a serves as an input layer for inputting a sequence 161a of joint variables Q, whereby a size of the sequence of joint variables Q may be ≥1, and one of the layers 163a, 165a and 166a may serve as an output layer for outputting a pose 162a of end-effector 40 in Cartesian space (e.g., a translation and a rotation of the end-effector 40 in Cartesian space). The outputted pose of end-effector 40 in Cartesian space may be represented as a vectorial parametrization and/or a non-vectorial parametrization of a rigid-body position and orientation. More particularly, the parametrizations may be in the form of Euler angles, quaternions, a matrix, an exponential map, and/or an angle-axis representing rotations and/or translations (e.g., including a direction and a magnitude for the translations).
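As an example of one such parametrization, an angle-axis rotation can be converted to a unit quaternion (w, x, y, z); this is standard rigid-body mathematics rather than code from the disclosure, and the function name is illustrative.

```python
import math

def angle_axis_to_quaternion(axis, angle):
    """Convert an angle-axis rotation (axis need not be unit length, angle in
    radians) to a unit quaternion in (w, x, y, z) order."""
    norm = math.sqrt(sum(a * a for a in axis))
    ux, uy, uz = (a / norm for a in axis)  # normalize the rotation axis
    half = angle / 2.0
    s = math.sin(half)
    return (math.cos(half), ux * s, uy * s, uz * s)
```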
[0112] Also in practice, the output layer may be a non-linear fully connected layer 163a that gradually shrinks a high-dimensional output of the last convolutional layer 164a of neural network base 160a to produce a set of output variables.
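A toy version of the first embodiment (fully connected layers regressing joint variables Q to a pose T̂) might look like the sketch below. The layer sizes and the translation-plus-quaternion output convention are illustrative assumptions; a deployed model would be far larger and trained on dataset D rather than randomly initialized.

```python
import random

class ForwardKinematicsMLP:
    """Minimal fully connected regressor: joint variables Q -> pose T_hat.

    Sizes are illustrative; the disclosure suggests on the order of N≈8
    hidden layers of K≈1000 neurons, shrunk here for readability."""

    def __init__(self, sizes=(6, 32, 32, 7), seed=0):
        rng = random.Random(seed)
        self.weights = []
        for n_in, n_out in zip(sizes, sizes[1:]):
            w = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
            b = [0.0] * n_out
            self.weights.append((w, b))

    def __call__(self, q):
        a = list(q)
        last = len(self.weights) - 1
        for i, (w, b) in enumerate(self.weights):
            z = [sum(wi * ai for wi, ai in zip(row, a)) + bi
                 for row, bi in zip(w, b)]
            # ReLU on hidden layers; linear output layer for the regression.
            a = z if i == last else [max(0.0, zi) for zi in z]
        return a  # 7 values: translation (x, y, z) + quaternion rotation
```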
[0113] In training, the weights of forward predictive model 60a are iteratively updated by comparing the output inferred by forward predictive model 60a (T̂), given input sequence Q, with the ground-truth end-effector pose T_i from a batch of training datasets D, which may be systematically or randomly selected from a data memory (not shown). More particularly, the coefficients for the filters may be initialized with predefined or arbitrary values. The coefficients for the filters are applied to the batch of training datasets D via a forward propagation, and are adjusted via a backward propagation to minimize any output error.
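The forward/backward-propagation update described above can be made concrete with a linear stand-in for the network, so the weight update against ground-truth poses stays visible; the function name and hyperparameters are assumptions, and a real model would use a deep network and an optimizer library.

```python
import random

def train_forward_model(dataset, lr=0.01, epochs=200, seed=0):
    """Fit a linear map W: Q -> T by stochastic gradient descent on squared
    error; `dataset` is a list of (q, t) pairs standing in for batches of D."""
    rng = random.Random(seed)
    n_in, n_out = len(dataset[0][0]), len(dataset[0][1])
    # Initialize coefficients with arbitrary small values.
    W = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    for _ in range(epochs):
        for q, t in dataset:
            t_hat = [sum(w * x for w, x in zip(row, q)) for row in W]  # forward pass
            err = [p - y for p, y in zip(t_hat, t)]                    # output error
            for o in range(n_out):                                     # backward pass
                for i in range(n_in):
                    W[o][i] -= lr * err[o] * q[i]
    return W
```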
[0114] In application, forward predictive model 60a infers the pose (T̂) 62a of end-effector 40 given a sequence 61a of j consecutive joint variables Q.
[0115] Still referring to
[0116] Referring to
[0117] A stage S94a of procedure 90a involves robot controller 100, forward predictive model 60a and a display controller 104. Robot controller 100 stores and communicates the sequence of consecutive joint variables (Q) 61a to forward predictive model 60a, which predicts the navigated pose T̂ of the end-effector 140. Continuous positioning controller 50a generates a confidence ratio of the prediction derived from the uncertainty over multiple feedforward iterations of forward predictive model 60a performed with dropout enabled stochastically as known in the art of the present disclosure. Forward predictive model 60a communicates continuous positioning data 51a, including the predicted navigated pose T̂ of the end-effector 140 and the confidence ratio, to display controller 104, which in turn controls a display of an image 105a of the navigated pose of end-effector 140 and an image 106a of the predicted navigated pose of end-effector 140 with the confidence ratio for purposes of guiding end-effector 140 to the target pose.
[0118]
[0119] More particularly, referring to
[0120] In practice, training dataset D is a collection of expert data with reasonable coverage of different navigations of interventional device 30. To this end, the diverse training dataset D for learning should incorporate mechanical differences between various types of robots, wear and tear of the hardware components, and other system-dependent factors.
[0121] Referring to
[0122] In one embodiment as shown in
[0123] In practice, the combination of layers is configured to implement a regression of pose T to joint variables Q̂.
[0124] In one embodiment for implementing a regression of pose T to joint variables Q̂, neural network base 170a includes a set of N fully connected layers 173a.
[0125] In a second embodiment for implementing a regression of pose T to joint variables Q̂, neural network base 170a includes a set of N convolutional layers 174a followed by either a set of M fully connected layers 173a, a set of W recurrent layers 175a or a set of W long short-term memory layers 176a.
[0126] In a third embodiment for implementing a regression of pose T to joint variables Q̂, neural network base 170a includes a set of N convolutional layers 174a followed by a combination of a set of M fully connected layers 173a and either a set of W recurrent layers 175a or a set of W long short-term memory layers 176a.
[0127] In practice, a fully connected layer 173a may include K neurons, where N, M, W, K may be any positive integer, and values may vary depending on the embodiments. For example, N may be about 8, M may be about 2, W may be about 2, and K may be about 1000. Also, a convolutional layer 174a may implement a non-linear transformation, which may be a composite function of operations (e.g., batch normalization, rectified linear units (ReLU), pooling, dropout and/or convolution), and a convolutional layer 174a may also include a non-linearity function (e.g., rectified linear unit (ReLU) operations) configured to extract rectified feature maps.
[0128] Further in practice, one of the layers 173a or 174a serves as an input layer for inputting pose 171a of end-effector 40 in Cartesian space (e.g., a translation and a rotation of the end-effector 40 in Cartesian space), and one of the layers 173a, 175a and 176a may serve as an output layer for outputting a sequence 172a of joint variables Q, whereby a size of the sequence of joint variables Q may be ≥1. The inputted pose of end-effector 40 in Cartesian space may be represented as a vectorial parametrization and/or a non-vectorial parametrization of a rigid-body position and orientation. More particularly, the parametrizations may be in the form of Euler angles, quaternions, matrix, exponential map, and/or angle-axis representing rotations and/or translations (e.g., including a direction and a magnitude for the translations).
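As an illustration of the rigid-body parametrizations mentioned above, the following numpy sketch converts an angle-axis representation into a rotation matrix (Rodrigues' formula) and packs a pose into a 6-vector; the function names and the 6-vector layout are hypothetical conveniences, not the disclosed encoding:

```python
import numpy as np

def angle_axis_to_matrix(axis, angle):
    """Rodrigues' formula: rotation matrix from an angle-axis parametrization."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])   # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def pose_vector(translation, axis, angle):
    """6-vector pose: 3 translation components + angle-axis rotation vector."""
    axis = np.asarray(axis, dtype=float)
    return np.concatenate([translation, axis / np.linalg.norm(axis) * angle])

R = angle_axis_to_matrix([0, 0, 1], np.pi / 2)   # 90 degrees about z
```

A quaternion or Euler-angle layout would substitute a different rotation block in the same 6- (or 7-) vector.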
[0129] Also in practice, the output layer may be a non-linear fully connected layer 173a that gradually shrinks a high-dimensional output of the last convolutional layer 174a of neural network base 170a to produce a set of output variables.
[0130] In training, training weights of inverse predictive model 70a are constantly updated by comparing the output {circumflex over (Q)} of the inverse predictive model—given a ground-truth end-effector pose T as input—with a ground-truth sequence Q.sub.i from a batch of training datasets D, which may be systematically or randomly selected from data memory (not shown). More particularly, the coefficients for the filters may be initialized with predefined or arbitrary values. The coefficients for the filters are applied to the batch of training datasets D via a forward propagation, and are adjusted via a backward propagation to minimize any output error.
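The training scheme above (forward-propagate a batch, compare the prediction with ground truth, back-propagate to minimize the output error) can be sketched for a deliberately simplified linear stand-in model; the learning rate, epoch count, dimensions, and toy data are illustrative assumptions:

```python
import numpy as np

def train_inverse_model(poses, joints, lr=0.05, epochs=800):
    """Forward-propagate a batch, compare predicted joint variables with the
    ground-truth sequence, and back-propagate to minimize the output error."""
    rng = np.random.default_rng(0)
    # Coefficients initialized with arbitrary (random) values.
    W = rng.normal(scale=0.1, size=(joints.shape[1], poses.shape[1]))
    b = np.zeros(joints.shape[1])
    for _ in range(epochs):
        pred = poses @ W.T + b                  # forward propagation
        err = pred - joints                     # output error vs. ground truth
        W -= lr * (err.T @ poses) / len(poses)  # backward propagation (gradient)
        b -= lr * err.mean(axis=0)
    return W, b

# Toy data: joint variables are a fixed linear function of the 6-D pose.
rng = np.random.default_rng(1)
poses = rng.normal(size=(64, 6))
true_W = rng.normal(size=(2, 6))
joints = poses @ true_W.T
W, b = train_inverse_model(poses, joints)
```

A real instance of the model would replace the linear map with the convolutional/recurrent stack described above, but the update loop keeps the same shape.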
[0131] In application, inverse predictive model 70a infers a sequence {circumflex over (Q)} of consecutive joint variables 72b (e.g., parameters α, β as shown in
[0132] Still referring to
[0133] Referring to
[0134] A stage S94b of procedure 90b involves inverse predictive model 70a and robot controller 100. Inverse predictive model 70a infers sequence {circumflex over (Q)} of j consecutive joint variables 72a (e.g., parameters α, β as shown in
[0135]
[0136] More particularly, referring to
The 2-tuple consists of a commanded sequence of j consecutive joint velocities ({dot over (Q)}∈({dot over (q)}.sub.t,{dot over (q)}.sub.t+1 . . . {dot over (q)}.sub.t+j)) 61b and a linear velocity and/or angular velocity 62b of the end-effector. Entry {dot over (q)}.sub.i stands for all joint velocities that are controlled by the robot controller (not shown). The sequence may also contain only one entry.
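The 2-tuple format above might be represented in code as follows; the container and field names are hypothetical, chosen only to mirror the described data point d.sub.i:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DataPoint:
    """Hypothetical container for d_i: a commanded sequence of j consecutive
    joint velocities and the resulting end-effector velocity 6-vector."""
    joint_velocities: List[Tuple[float, ...]]   # one tuple per time point t..t+j
    end_effector_velocity: Tuple[float, ...]    # 3 linear + 3 angular components

# The sequence may also contain only one entry.
d = DataPoint(joint_velocities=[(0.1, -0.2)],
              end_effector_velocity=(0.0, 0.0, 0.0, 0.0, 0.0, 0.0))
dataset = [d]
```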
[0137] In practice, training dataset D is a collection of expert data with reasonable coverage of different navigations of interventional device 30. To this end, the training dataset D for learning should incorporate mechanical differences between various types of robots, wear and tear of the hardware components, and other system-dependent factors.
[0138] Referring to
[0139] In one embodiment as shown in
[0140] In practice, the combination of layers is configured to implement a regression of joint velocities of the interventional device 30 to a linear velocity and/or an angular velocity of end-effector 40.
[0141] In one embodiment for implementing a regression of joint velocities of the interventional device 30 to a linear velocity and/or an angular velocity of end-effector 40, neural network base 160b includes a set of N number of fully connected layers 163b.
[0142] In a second embodiment for implementing a regression of joint velocities of the interventional device 30 to a linear velocity and/or an angular velocity of end-effector 40, neural network base 160b includes a set of N convolutional layers 164b followed by either a set of M fully connected layers 163b, a set of W recurrent layers 165b, or a set of W long short-term memory (LSTM) layers 166b.
[0143] In a third embodiment for implementing a regression of joint velocities of the interventional device 30 to a linear velocity and/or an angular velocity of end-effector 40, neural network base 160b includes a set of N convolutional layers 164b followed by a combination of a set of M fully connected layers 163b and either a set of W recurrent layers 165b or a set of W long short-term memory (LSTM) layers 166b.
[0144] In practice, a fully connected layer 163b may include K neurons, where N, M, W, K may be any positive integer, and values may vary depending on the embodiments. For example, N may be about 8, M may be about 2, W may be about 2, and K may be about 1000. Also, a convolutional layer 164b may implement a non-linear transformation, which may be a composite function of operations (e.g., batch normalization, rectified linear units (ReLU), pooling, dropout and/or convolution), and a convolutional layer 164b may also include a non-linearity function (e.g., rectified linear unit (ReLU) operations) configured to extract rectified feature maps.
[0145] Further in practice, one of the layers 163b or 164b serves as an input layer for inputting a sequence of j consecutive joint velocities ({dot over (Q)}∈({dot over (q)}.sub.t, {dot over (q)}.sub.t+1 . . . {dot over (q)}.sub.t+j)), whereby a size of the sequence of j consecutive joint velocities may be ≥1, and one of the layers 163b, 165b and 166b may serve as an output layer for outputting a linear and angular velocity of the end-effector, as regressed from the last fully connected layer (e.g., 6 units: 3 units for linear and 3 units for angular velocity) with a linear or non-linear activation function.
[0146] In training, training weights of forward predictive model 60b are constantly updated by comparing the predicted linear and angular velocity of the end-effector—given a sequence of joint velocities as input—with the ground-truth linear velocity and/or angular velocity 62b from a batch of training datasets D, which may be systematically or randomly selected from data memory (not shown). More particularly, the coefficients for the filters may be initialized with predefined or arbitrary values. The coefficients for the filters are applied to the batch of training datasets D via a forward propagation, and are adjusted via a backward propagation to minimize any output error.
[0147] In application, forward predictive model 60b infers the linear velocity and/or angular velocity 62b of end-effector 40 given a sequence of joint velocities 61b of interventional device 30.
[0148] Still referring to
[0149] Referring to
[0150] A stage S94c of procedure 90c involves robot controller 101, forward predictive model 60b and a display controller 104. Robot controller 101 stores and communicates the n-vector of joint velocities 61b of interventional device 30 to forward predictive model 60b, which predicts the linear velocity and/or angular velocity 62b of TTE probe 240. Continuous positioning controller 50c generates a confidence ratio of the prediction derived from the uncertainty across multiple feedforward iterations of forward predictive model 60b performed with dropout stochastically enabled, as known in the art of the present disclosure. Forward predictive model 60b communicates continuous positioning data 51b, including a predicted navigated pose of the TTE probe 240 derived from the predicted linear velocity and/or angular velocity 62b of TTE probe 240 and further including the confidence ratio, to a display controller 104, which in turn controls a display of an image 105a of the navigated pose of TTE probe 240 and an image 106a of the navigated pose of TTE probe 240 with the confidence ratio, for guiding TTE probe 240 to the target pose.
[0151]
[0152] More particularly, referring to
The 2-tuple consists of a linear velocity and/or angular velocity of the end-effector and a sequence of consecutive joint velocities ({dot over (q)}.sub.t) acquired at sequential time points starting from t to t+j. Entry {dot over (q)}.sub.t stands for all joint velocities that are controlled by the robot controller (not shown).
[0153] In practice, training dataset D is a collection of expert data with reasonable coverage of different navigations of interventional device 30. To this end, the training dataset D for learning should incorporate mechanical differences between various types of robots, wear and tear of the hardware components, and other system-dependent factors.
[0154] Referring to
[0155] In one embodiment as shown in
[0156] In practice, the combination of layers is configured to implement a regression of a linear velocity and/or an angular velocity of end-effector 40 to joint velocities of the interventional device 30.
[0157] In one embodiment for implementing a regression of a linear velocity and/or an angular velocity of end-effector 40 to joint velocities of the interventional device 30, neural network base 170b includes a set of N number of fully connected layers 173b.
[0158] In a second embodiment for implementing a regression of a linear velocity and/or an angular velocity of end-effector 40 to joint velocities of the interventional device 30, neural network base 170b includes a set of N convolutional layers 174b followed by either a set of M fully connected layers 173b, a set of W recurrent layers 175b, or a set of W long short-term memory (LSTM) layers 176b.
[0159] In a third embodiment for implementing a regression of a linear velocity and/or an angular velocity of end-effector 40 to joint velocities of the interventional device 30, neural network base 170b includes a set of N convolutional layers 174b followed by a combination of a set of M fully connected layers 173b and either a set of W recurrent layers 175b or a set of W long short-term memory (LSTM) layers 176b.
[0160] In practice, a fully connected layer 173b may include K neurons, where N, M, W, K may be any positive integer, and values may vary depending on the embodiments. For example, N may be about 8, M may be about 2, W may be about 2, and K may be about 1000. Also, a convolutional layer 174b may implement a non-linear transformation, which may be a composite function of operations (e.g., batch normalization, rectified linear units (ReLU), pooling, dropout and/or convolution), and a convolutional layer 174b may also include a non-linearity function (e.g., rectified linear unit (ReLU) operations) configured to extract rectified feature maps.
[0161] Further in practice, one of the layers 173b or 174b serves as an input layer for inputting an angular and linear velocity, and one of the layers 173b, 175b and 176b may serve as an output layer for outputting a sequence of j consecutive joint velocities ({dot over (Q)}∈({dot over (q)}.sub.t, {dot over (q)}.sub.t+1 . . . {dot over (q)}.sub.t+j)) that is provided as output from LSTM layers 176b. Alternatively, single joint velocities may be regressed from a fully connected layer 173b consisting of m units, one unit for every joint in the robot controlled by the robot controller. Fully connected layer 173b may have linear or non-linear activation functions.
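For context, the classical (non-learned) counterpart of this velocity regression is the pseudoinverse of the manipulator Jacobian; a minimal sketch using damped least squares, with a hypothetical 2-joint Jacobian standing in for a real kinematic model:

```python
import numpy as np

def joint_velocities_from_twist(J, twist, damping=1e-2):
    """Damped least-squares (pseudoinverse) solution: qdot = J.T (J J.T + lam I)^-1 v."""
    JT = J.T
    return JT @ np.linalg.solve(J @ JT + damping * np.eye(J.shape[0]), twist)

# Hypothetical 2-joint Jacobian mapping joint velocities to a planar twist.
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])
qdot = joint_velocities_from_twist(J, np.array([1.0, 0.0]), damping=0.0)
```

The learned inverse model plays the same role as this mapping, but without requiring an explicit, calibrated Jacobian.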
[0162] In training, training weights of inverse predictive model 70b are constantly updated by comparing the predicted sequence of joint velocities {dot over ({circumflex over (Q)})}—given a linear and angular velocity as input—with the ground-truth sequence {dot over (Q)}.sub.i of joint velocities from a batch of training datasets D, which may be systematically or randomly selected from data memory (not shown). More particularly, the coefficients for the filters may be initialized with predefined or arbitrary values. The coefficients for the filters are applied to the batch of training datasets D via a forward propagation, and are adjusted via a backward propagation to minimize any output error.
[0163] In application, inverse predictive model 70b infers an n-vector of joint velocities 61b of interventional device 30 given a linear velocity and/or angular velocity 62b of end-effector 40.
[0164] Still referring to
and the output is a sequence {dot over ({circumflex over (Q)})} of joint velocities that is provided as output from LSTM layers. Alternatively, single joint velocities may be regressed from a fully connected layer consisting of m units, one unit for every joint in the robot controlled by the robot controller. The fully connected layer may have linear or non-linear activation functions.
[0165] Referring to
[0166] A stage S94d of procedure 90d involves inverse predictive model 70b and robot controller 101. Inverse predictive model 70b infers an n-vector of joint velocities 61b of interventional device 30 given linear velocity and/or angular velocity 62b, and continuous positioning controller 50c communicates a continuous positioning command 52b to robot controller 101 to thereby control a positioning of TTE probe 240 via the robot 230 (
[0167] In practice, forward predictive model 60a (
[0168] Referring to
[0169] Those having ordinary skill in the art will know how to apply shape 35a, an image 35b and a force 35c of the interventional device as well as any other additional auxiliary information to inverse predictive model 70a, forward predictive model 60b and inverse predictive model 70b.
[0170]
[0171] More particularly, referring to
[0172] In practice, training dataset D is a collection of expert data with reasonable coverage of different navigations of OSS interventional device 30. To this end, the training dataset D for learning should incorporate anatomies with different curvatures, magnitudes of motion, mechanical differences between various types of robots, wear and tear of the hardware components, and other system-independent factors such as the temperature and humidity of the environment.
[0173] Referring to
[0174] In one embodiment as shown in
[0175] In training, training weights of forward predictive model 60d are constantly updated by comparing the sequence of future shapes Ĥ.sub.i+1—predicted by the model given an input sequence H.sub.i—with the ground-truth future sequence H.sub.i+1 from the training dataset D, which may be systematically or randomly selected from data memory (not shown). More particularly, the coefficients for the filters may be initialized with predefined or arbitrary values. The coefficients for the filters are applied to the batch of training datasets D via a forward propagation, and are adjusted via a backward propagation to minimize any output error.
[0176] In application, forward predictive model 60d infers the future sequence Ĥ.sub.t+1 consisting of k shapes, and therefore uses the last shape ĥ.sub.t+k+1 in the predicted sequence to estimate the position of OSS interventional device 30 at the future time point.
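The inference step above can be sketched as follows, with a hypothetical extrapolating model standing in for forward predictive model 60d and 2-D points standing in for full shapes:

```python
import numpy as np

def predict_future_position(forward_model, shape_history):
    """The forward model maps a sequence of k consecutive shapes to the
    following sequence; the last shape in that predicted sequence gives the
    estimated device position at the future time point."""
    predicted_sequence = forward_model(shape_history)   # k predicted shapes
    return predicted_sequence[-1]                       # last shape = future position

# Toy model: extrapolates each shape by the last observed displacement.
def toy_forward_model(history):
    step = history[-1] - history[-2]
    return [history[-1] + (i + 1) * step for i in range(len(history))]

history = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 0.0])]
future_shape = predict_future_position(toy_forward_model, history)
```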
[0177] In an alternative embodiment as shown in
[0178] Referring to
[0179] A stage S94e of procedure 90e involves shape sensing controller 103, forward predictive model 60e and a display controller 104. Shape sensing controller 103 communicates a sequence 61d of k consecutive shapes to forward predictive model 60e to thereby infer the following sequence of shapes Ĥ.sub.t+1, where the last shape ĥ.sub.t+k+1 is the predicted position of OSS guidewire 332. Display controller 104 controls a display of a sensed position image 105a of OSS guidewire 332 for guiding the end-effector 340 to the target pose.
[0180] Referring to
[0181]
[0182]
[0183] In a training phase, a data acquisition controller (not shown) is configured to receive and interpret information from both a robot and an end-effector (e.g., an ultrasound device), and save data on a data storage media (not shown) in a format defined by the following specifications: training dataset D consists of i data points represented by a 2-tuple: d.sub.i=(U.sub.i, T.sub.i). This 2-tuple consists of an ultrasound image U.sub.i 81a that was acquired at a certain position T∈SE(3) 82a with respect to the reference position.
[0184] A training controller is configured to interpret the training dataset D saved on the data storage media. This dataset D consists of i data points represented by a 2-tuple: d.sub.i=(U.sub.i, T.sub.i). This 2-tuple consists of: an ultrasound image U.sub.i 81a of the anatomy and a relative motion T.sub.i 82a between the current pose of the end-effector at which ultrasound image U.sub.i was acquired and some arbitrarily chosen reference position.
[0185] In one embodiment as shown in
[0186] In practice, the combination of layers is configured to implement a relative positioning of image U.sub.C to a reference image to thereby derive pose {circumflex over (T)}.
[0187] In one embodiment for implementing a relative positioning of image U.sub.C to a reference image to thereby derive pose {circumflex over (T)}, neural network base 180a includes a set of N number of fully connected layers 183a.
[0188] In a second embodiment for implementing a relative positioning of image U.sub.C to a reference image to thereby derive pose {circumflex over (T)}, neural network base 180a includes a set of N convolutional layers 184a followed by either a set of M fully connected layers 183a, a set of W recurrent layers 185a, or a set of W long short-term memory (LSTM) layers 186a.
[0189] In a third embodiment for implementing a relative positioning of image U.sub.C to a reference image to thereby derive pose {circumflex over (T)}, neural network base 180a includes a set of N convolutional layers 184a followed by a combination of a set of M fully connected layers 183a and either a set of W recurrent layers 185a or a set of W long short-term memory (LSTM) layers 186a.
[0190] In practice, a fully connected layer 183a may include K neurons, where N, M, W, K may be any positive integer, and values may vary depending on the embodiments. For example, N may be about 8, M may be about 2, W may be about 2, and K may be about 1000. Also, a convolutional layer 184a may implement a non-linear transformation, which may be a composite function of operations (e.g., batch normalization, rectified linear units (ReLU), pooling, dropout and/or convolution), and a convolutional layer 184a may also include a non-linearity function (e.g., rectified linear unit (ReLU) operations) configured to extract rectified feature maps.
[0191] Further in practice, one of the layers 183a or 184a serves as an input layer for inputting image U.sub.C, and one of the layers 183a, 185a and 186a may serve as an output layer for outputting a pose 182a of end-effector 40 in Cartesian space (e.g., a translation and a rotation of the end-effector 40 in Cartesian space). The outputted pose of end-effector 40 in Cartesian space may be represented as a vectorial parametrization and/or a non-vectorial parametrization of a rigid-body position and orientation. More particularly, the parametrizations may be in the form of Euler angles, quaternions, matrix, exponential map, and/or angle-axis representing rotations and/or translations (e.g., including a direction and a magnitude for the translations).
[0192] Also in practice, the output layer may be a non-linear fully connected layer 183a that gradually shrinks a high-dimensional output of the last convolutional layer 184a of neural network base 180a to produce a set of output variables.
[0193] In training, training weights of image predictive model 80a are constantly updated by comparing a predicted relative motion {circumflex over (T)} of the end-effector with respect to some reference anatomy using the image predictive model—given an ultrasound image 161c as input—with the ground-truth relative motion T from a batch of training datasets D, which may be systematically or randomly selected from data memory (not shown). More particularly, the coefficients for the filters may be initialized with predefined or arbitrary values. The coefficients for the filters are applied to the batch of training datasets D via a forward propagation, and are adjusted via a backward propagation to minimize any output error.
[0194] Referring to
[0195]
[0196]
[0197] In a training phase, a data acquisition controller (not shown) is configured to receive and interpret information from both a robot and an end-effector (e.g., an ultrasound device), and save data on a data storage media (not shown) in a format defined by the following specifications: training dataset D consists of i data points represented by a 2-tuple: d.sub.i=(U.sub.i, V.sub.i). This 2-tuple consists of an ultrasound image U.sub.i 81a that was acquired at a certain position T∈SE(3) 82a with respect to the reference position via the n-vector 83a of linear velocity and angular velocity of the end-effector.
[0198] A training controller is configured to interpret the training dataset D saved on the data storage media. This dataset D consists of i data points represented by a 2-tuple: d.sub.i=(U.sub.i, V.sub.i). This 2-tuple consists of: an ultrasound image U.sub.i 81a of the anatomy and the relative n-vector 83a of linear velocity and angular velocity of the end-effector between the pose at which ultrasound image U.sub.i was acquired and some arbitrarily chosen reference position.
[0199] In one embodiment as shown in
[0200] In practice, the combination of layers is configured to implement a relative positioning of image U.sub.C to a reference image to thereby derive a linear velocity and/or an angular velocity of end-effector 40.
[0201] In one embodiment for implementing relative positioning of image U.sub.C to a reference image to thereby derive a linear velocity and/or an angular velocity of end-effector 40, neural network base 180b includes a set of N number of fully connected layers 183b.
[0202] In a second embodiment for implementing a relative positioning of image U.sub.C to a reference image to thereby derive a linear velocity and/or an angular velocity of end-effector 40, neural network base 180b includes a set of N convolutional layers 184b followed by either a set of M fully connected layers 183b, a set of W recurrent layers 185b, or a set of W long short-term memory (LSTM) layers 186b.
[0203] In a third embodiment for implementing a relative positioning of image U.sub.C to a reference image to thereby derive a linear velocity and/or an angular velocity of end-effector 40, neural network base 180b includes a set of N convolutional layers 184b followed by a combination of a set of M fully connected layers 183b and either a set of W recurrent layers 185b or a set of W long short-term memory (LSTM) layers 186b.
[0204] In practice, a fully connected layer 183b may include K neurons, where N, M, W, K may be any positive integer, and values may vary depending on the embodiments. For example, N may be about 8, M may be about 2, W may be about 2, and K may be about 1000. Also, a convolutional layer 184b may implement a non-linear transformation, which may be a composite function of operations (e.g., batch normalization, rectified linear units (ReLU), pooling, dropout and/or convolution), and a convolutional layer 184b may also include a non-linearity function (e.g., rectified linear unit (ReLU) operations) configured to extract rectified feature maps.
[0205] Further in practice, one of the layers 183b or 184b serves as an input layer for inputting image U.sub.C, and one of the layers 183b, 185b and 186b may serve as an output layer for outputting a linear and angular velocity of the end-effector, as regressed from the last fully connected layer (e.g., 6 units: 3 units for linear and 3 units for angular velocity) with a linear or non-linear activation function.
[0206] In training, training weights of image predictive model 80b are constantly updated by comparing a predicted linear and angular velocity of the end-effector with respect to some reference anatomy—given ultrasound image 161c as input—with the ground-truth end-effector linear and angular velocity describing motion to some reference anatomy from a batch of training datasets D, which may be systematically or randomly selected from data memory (not shown). More particularly, the coefficients for the filters may be initialized with predefined or arbitrary values. The coefficients for the filters are applied to the batch of training datasets D via a forward propagation, and are adjusted via a backward propagation to minimize any output error.
[0207] Referring to
[0208] In one TEE probe embodiment, stage S192a of procedure 190a encompasses TEE probe handle 132 (
[0209] A stage S194a of procedure 190a encompasses image predictive model 90 processing a current ultrasound image 81a, to which we will refer as the former ultrasound image U.sub.f, to predict the relative position of this image plane with respect to the reference anatomy .sup.ref{circumflex over (T)}.sub.f. The sonographer observes the anatomy using the ultrasound image and decides on the desired movement of the transducer from its current position to a desired position T.sub.d. Alternatively, the desired motion of the transducer could be provided from external tracking devices, user interfaces, or other imaging modalities that are registered to the ultrasound image, such as X-ray images that are registered to 3D TEE images using EchoNavigator (Philips).
[0210] Based on T.sub.d, inverse predictive model 70a predicts joint variables {circumflex over (q)}.sub.t 72a that are required to move the robot to the desired position. Robot controller 100 receives the joint variables 72a and moves TEE probe 130 accordingly.
[0211] A stage S196a of procedure 190a encompasses the ultrasound transducer reaching another position at which ultrasound image U.sub.c is acquired. Image predictive model 90 processes the current ultrasound image U.sub.c to predict the relative position of the current image plane with respect to the reference anatomy .sup.ref{circumflex over (T)}.sub.c. As a result, a motion between the former and current positions in the anatomy, such as the heart, can be calculated as follows:
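The motion referenced above can be computed by composing the two predicted relative poses. A minimal numpy sketch, assuming homogeneous 4x4 transforms and assuming the intended relation is the composition (.sup.ref{circumflex over (T)}.sub.f)⁻¹·.sup.ref{circumflex over (T)}.sub.c (an assumption, since the original equation is not reproduced here):

```python
import numpy as np

def relative_motion(T_ref_f, T_ref_c):
    """Motion from the former to the current image plane, both poses being
    expressed relative to the reference anatomy."""
    return np.linalg.inv(T_ref_f) @ T_ref_c

def translation(x):
    """Toy homogeneous transform: pure translation along x."""
    T = np.eye(4)
    T[0, 3] = x
    return T

T_fc = relative_motion(translation(1.0), translation(3.0))  # net motion along x
```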
[0212] In a second TEE probe embodiment, as shown in
[0213] Referring to
[0214] In one TEE probe embodiment, stage S192b of procedure 190b encompasses TEE probe handle 142 (
[0215] A stage S194b of procedure 190b encompasses a user desiring to move the transducer in the image space, for instance by selecting the path from point A to point B in ultrasound image 203, or a transformation between image planes A and B. In this embodiment, a first linear and angular velocity 202 defined by the path on the image is transformed to the end-effector coordinate system using the .sup.end-effectorJ.sub.image Jacobian 204, which is calculated by knowing the spatial relationship between the end-effector and the image coordinate system using methods known in the art of the present disclosure.
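The Jacobian-based transformation above reduces to a matrix-vector product; a minimal sketch in which the 6x2 Jacobian below is a hypothetical axis permutation, not a calibrated value:

```python
import numpy as np

def image_twist_to_end_effector(J_ee_image, image_velocity):
    """Transform a velocity defined in the image plane into the end-effector
    coordinate system via the end-effector/image Jacobian."""
    return J_ee_image @ image_velocity

# Hypothetical Jacobian: image x/y map to end-effector y/z axes.
J = np.zeros((6, 2))
J[1, 0] = 1.0
J[2, 1] = 1.0
v_ee = image_twist_to_end_effector(J, np.array([0.2, -0.1]))
```

The resulting 6-vector (3 linear, 3 angular components) is what the inverse predictive model consumes.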
[0216] Based on V.sub.d, inverse predictive model 70b (FIG. 7B3) predicts joint velocities {dot over ({circumflex over (q)})}.sub.t 72b that are required to move the robot to the desired position. Robot controller 100 receives the joint velocities {dot over ({circumflex over (q)})}.sub.t 72b and moves TEE probe 130 accordingly.
[0217] A stage S196b of procedure 190b encompasses the ultrasound transducer reaching another position at which ultrasound image U.sub.c is acquired. Image predictive model 80b processes the current ultrasound image U.sub.c 81a to predict a velocity vector 83a of the end-effector between points A and B. The image predictive model 80b estimates the function between the Cartesian velocity of the end-effector and the velocities in the joint space, i.e., the neural network models a manipulator Jacobian—given a 6-vector consisting of the linear and angular velocities of the end-effector 71c—to predict an n-vector 72c of joint velocities.
[0218] As understood by those skilled in the art of the present disclosure, neural networks that model spatial relationships between images of the anatomy, such as the heart, require large training datasets, which are specific to a given organ.
[0219] In an alternative embodiment, features are directly extracted from the images in order to validate the position of the transducer. In this embodiment as shown in
[0220] More particularly, consider a velocity-based control system of the continuum-like robot. The desired motion on the image is identified as soon as the user selects a certain object on the image, e.g., the apical wall (see the red dot on the ultrasound image). The motion is defined by a path between the center of the field of view and the selected object, which can be transformed to linear and angular velocities in the end-effector space using Jacobian 204. This Cartesian velocity is then sent to the neural network, which will infer joint velocities. The achieved position will be iteratively validated against the path defined by the constantly tracked object and the center of the field of view.
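One iteration of this velocity-based loop might look as follows; the gain, the identity Jacobian, and the stand-in inverse model are illustrative assumptions, not disclosed values:

```python
import numpy as np

def velocity_control_step(center, target, J_ee_image, inverse_model, gain=0.5):
    """One iteration: the path from the field-of-view center to the tracked
    object defines an image-space velocity, which is mapped to the
    end-effector space and then to joint velocities."""
    image_velocity = gain * (np.asarray(target) - np.asarray(center))
    ee_velocity = J_ee_image @ image_velocity
    return inverse_model(ee_velocity)

# Toy stand-ins for the Jacobian and the learned inverse model.
J = np.eye(2)
toy_inverse = lambda v: 2.0 * v     # hypothetical joint-velocity inference
qdot = velocity_control_step([0.0, 0.0], [1.0, 0.0], J, toy_inverse)
```

Repeating this step while re-tracking the object closes the control loop described above.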
[0221] In practice, the closed control loop of
[0222] Also in practice, prediction accuracy of the neural network can be affected by the configuration of the flexible endoscope. Thus, a position of the transducer with respect to the heart is first defined, using for instance neural network g or the Philips HeartModel, which will implicitly define one of the possible configurations. Second, a certain set of network weights will be loaded into the models according to the detected configuration, thus improving the prediction accuracy.
[0223] A similar approach can be used to guide the user to the location at which the optimal images and guidance can be provided.
[0224] Furthermore, one of the hardest issues in machine/deep learning is the accessibility of large data in the right format for training of the predictive models. More particularly, collecting and constructing the training and validation sets is very time consuming and expensive because it requires domain-specific knowledge. For instance, to train a predictive model to accurately differentiate a benign breast tumor from a malignant breast tumor, such training needs several thousands of ultrasound images annotated by expert radiologists and transformed into a numerical representation that a training algorithm can understand. Additionally, the image datasets might be inaccurate, corrupted or labelled with noise, all leading to detection inaccuracies, and an acquisition of large medical datasets may trigger ethical and privacy concerns, among other concerns.
[0225] Referring to
[0226] For example,
[0227] Referring back to
[0228] More particularly, a shape-sensed guidewire 332 is embedded in or attached to the continuum robot, which uses an optical shape sensing (OSS) technology known in the art. OSS uses light along a multicore optical fiber for device localization and navigation during surgical intervention. The principle involved makes use of distributed strain measurements in the optical fiber using characteristic Rayleigh backscatter or controlled grating patterns.
[0229] Shape sensing controller 103 is configured to acquire the shape of shape-sensed guidewire 332 and estimate the pose T∈SE(3) of the end-effector that is rigidly attached to a plastic casing 350, which enforces a certain curvature in the guidewire as previously described herein. The methods for estimating pose T based on a well-defined curvature, as well as template matching algorithms, are known in the art of the present disclosure.
[0230] Data acquisition controller 191 is configured to generate a sequence of motor commands according to pre-defined acquisition pattern (e.g., a spiral, radial, or square motion, etc.) and send movement commands to robot controller 100.
[0231] Robot controller 100 is configured to receive the robot position and send movement signals to the robot. Using the motorized knobs, the robot pulls or loosens the tendons, which results in motion of the end-effector. Robot controller 100 is further configured to receive and interpret information from data acquisition controller 191, and to change the robot position based on that information.
[0232] Data storage controller 190 is configured to receive and interpret information from both robot controller 100 and shape sensing controller 103, and to save data on the data storage media (not shown) in a format defined by the following specifications. A first specification is to acquire a training dataset D for all configurations pre-defined by data acquisition controller 191. A dataset D consists of a set of n sequences W: D={W.sub.1, W.sub.2, . . . , W.sub.n}; each sequence W.sub.n consists of i data points d.sub.i: W.sub.n={d.sub.1, d.sub.2, . . . , d.sub.i}; and each data point d.sub.i from the sequence is defined by a 3-tuple: d.sub.i=(T.sub.i, H.sub.i, Q.sub.i)
[0233] The 3-tuple consists of an end-effector pose T∈SE(3); a sequence of k consecutive shapes H∈(h.sub.t, h.sub.t+1 . . . h.sub.t+k), where h∈(p.sub.1 . . . p.sub.m) is a set of m vectors p.sub.m that describe both the position of the shape-sensed guidewire in 3D Euclidean space and auxiliary shape parameters such as strain, curvature, and twist; and a sequence of j consecutive joint variables Q∈(q.sub.t, q.sub.t+1 . . . q.sub.t+j) acquired at time points starting from t to t+j. For instance, entry q.sub.t could be the angles of the control knobs α, β acquired at a time point t.
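The dataset specification above can be captured in illustrative container types. This is a sketch only: the field names, the representation of a pose as a 4×4 homogeneous matrix, and the use of Python dataclasses are assumptions for illustration, not part of the disclosed format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ShapePoint:
    """One vector p_m: a 3D position plus auxiliary shape parameters."""
    position: Tuple[float, float, float]
    strain: float
    curvature: float
    twist: float

@dataclass
class DataPoint:
    """One 3-tuple d_i = (T_i, H_i, Q_i)."""
    pose: List[List[float]]             # T_i: 4x4 homogeneous matrix in SE(3)
    shapes: List[List[ShapePoint]]      # H_i: k consecutive shapes h_t..h_{t+k}
    joints: List[Tuple[float, float]]   # Q_i: j consecutive knob angles (alpha, beta)

# A sequence W_n is a list of data points; the dataset D is a list of sequences.
Sequence = List[DataPoint]
Dataset = List[Sequence]
```

Serializing such containers (e.g., to HDF5 or a similar format) would satisfy the storage role of data storage controller 190.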
[0234] Referring to
[0235] Referring to both
[0236] Casing 350 is rigidly attached to the end-effector of the continuum-like robot.
[0237] By using a template matching algorithm as known in the art of the present disclosure during a stage S364 of method 360, shape sensing controller 103 can estimate the pose T∈SE(3) of the end-effector. Preferably, the coordinate system of the end-effector is defined by the template; however, additional calibration matrices can be used. While the robotic system is still in the home position, the pose of the end-effector is acquired in the OSS coordinate system. Every following pose acquired during the experiment is estimated relative to this initial pose.
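Expressing every subsequent pose relative to the initial home pose amounts to composing the current pose with the inverse of the home pose: T_rel = T_home⁻¹ · T_t. A minimal sketch, assuming poses are represented as 4×4 homogeneous matrices and exploiting the closed-form inverse of a rigid transform:

```python
def invert_rigid(T):
    """Invert a 4x4 rigid transform: inv([R, t]) = [R^T, -R^T t]."""
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]            # R transposed
    mt = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]  # -R^T t
    return [Rt[0] + [mt[0]], Rt[1] + [mt[1]], Rt[2] + [mt[2]],
            [0.0, 0.0, 0.0, 1.0]]

def matmul4(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def relative_pose(T_home, T_t):
    """Express the current end-effector pose relative to the home pose."""
    return matmul4(invert_rigid(T_home), T_t)
```

For a pure translation from (1, 2, 3) at home to (1, 2, 4) at time t, the relative pose is a unit translation along z, independent of where the OSS coordinate origin lies.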
[0238] Data acquisition controller 191 generates a motion sequence, i.e., a set of joint variables, according to a pre-defined acquisition pattern (e.g., pattern 370 of
[0239] Stage S366 of method 360 encompasses, at each time point, an acquisition and storage of a data tuple d.sub.i=(T.sub.i, H.sub.i, Q.sub.i) by the data storage controller 190. Of importance, because H.sub.i and Q.sub.i are sequential, all former time points are kept in memory by the data storage controller 190.
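Because H.sub.i and Q.sub.i consist of the k and j most recent time points respectively, the acquisition loop can maintain bounded history buffers and emit a complete tuple only once both histories are full. The class below is an illustrative sketch of that bookkeeping; its name, interface, and queue-based design are assumptions, not taken from the disclosure.

```python
from collections import deque

class TupleBuilder:
    """Accumulate the sequential parts of d_i = (T_i, H_i, Q_i)."""

    def __init__(self, k, j):
        self.shapes = deque(maxlen=k)   # last k shapes h_t .. h_{t+k}
        self.joints = deque(maxlen=j)   # last j joint variables q_t .. q_{t+j}

    def add(self, pose, shape, joint):
        """Record one time point; return d_i once both histories are full.

        Returns None while the buffers are still warming up, so the
        caller stores only complete tuples.
        """
        self.shapes.append(shape)
        self.joints.append(joint)
        if (len(self.shapes) == self.shapes.maxlen
                and len(self.joints) == self.joints.maxlen):
            return (pose, list(self.shapes), list(self.joints))
        return None
```

The `deque(maxlen=...)` buffers discard the oldest entry automatically, so former time points within the k- and j-sized windows are retained without unbounded memory growth.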
[0240] To facilitate a further understanding of the various inventions of the present disclosure, the following description of
[0241] Referring to
[0242] Each processor 401 may be any hardware device, as known in the art of the present disclosure or hereinafter conceived, capable of executing instructions stored in memory 402 or storage or otherwise processing data. In a non-limiting example, the processor(s) 401 may include a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or other similar devices.
[0243] The memory 402 may include various memories, e.g. a non-transitory and/or static memory, as known in the art of the present disclosure or hereinafter conceived, including, but not limited to, L1, L2, or L3 cache or system memory. In a non-limiting example, the memory 402 may include static random access memory (SRAM), dynamic RAM (DRAM), flash memory, read only memory (ROM), or other similar memory devices.
[0244] The user interface 403 may include one or more devices, as known in the art of the present disclosure or hereinafter conceived, for enabling communication with a user such as an administrator. In a non-limiting example, the user interface may include a command line interface or graphical user interface that may be presented to a remote terminal via the network interface 404.
[0245] The network interface 404 may include one or more devices, as known in the art of the present disclosure or hereinafter conceived, for enabling communication with other hardware devices. In a non-limiting example, the network interface 404 may include a network interface card (NIC) configured to communicate according to the Ethernet protocol. Additionally, the network interface 404 may implement a TCP/IP stack for communication according to the TCP/IP protocols. Various alternative or additional hardware or configurations for the network interface 404 will be apparent.
[0246] The storage 405 may include one or more machine-readable storage media, as known in the art of the present disclosure or hereinafter conceived, including, but not limited to, read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, or similar storage media. In various non-limiting embodiments, the storage 405 may store instructions for execution by the processor(s) 401 or data upon which the processor(s) 401 may operate. For example, the storage 405 may store a base operating system for controlling various basic operations of the hardware. The storage 405 also stores application modules in the form of executable software/firmware for implementing the various functions of the controller 400a as previously described in the present disclosure including, but not limited to, forward predictive model(s) 60, inverse predictive model(s) 70 and imaging predictive model(s) 80 as previously described in the present disclosure.
[0247] In practice, controller 400 may be installed within an X-ray imaging system 500, an intervention system 501 (e.g., an intervention robot system), or a stand-alone workstation 502 in communication with X-ray imaging system 500 and/or intervention system 501 (e.g., a client workstation or a mobile device like a tablet). Alternatively, components of controller 400 may be distributed among X-ray imaging system 500, intervention system 501 and/or stand-alone workstation 502.
[0248] Also in practice, additional controllers of the present disclosure including a shape sensing controller, a data storage controller and a data acquisition controller may also each include one or more processor(s), memory, a user interface, a network interface, and a storage interconnected via one or more system buses as arranged in
[0249] Referring to
[0250] Further, as one having ordinary skill in the art will appreciate in view of the teachings provided herein, structures, elements, components, etc. described in the present disclosure/specification and/or depicted in the Figures may be implemented in various combinations of hardware and software, and provide functions which may be combined in a single element or multiple elements. For example, the functions of the various structures, elements, components, etc. shown/illustrated/depicted in the Figures can be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software for added functionality. When provided by a processor, the functions can be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which can be shared and/or multiplexed. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and can implicitly include, without limitation, digital signal processor (“DSP”) hardware, memory (e.g., read only memory (“ROM”) for storing software, random access memory (“RAM”), non-volatile storage, etc.) and virtually any means and/or machine (including hardware, software, firmware, combinations thereof, etc.) which is capable of (and/or configurable) to perform and/or control a process.
[0251] Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (e.g., any elements developed that can perform the same or substantially similar function, regardless of structure). Thus, for example, it will be appreciated by one having ordinary skill in the art in view of the teachings provided herein that any block diagrams presented herein can represent conceptual views of illustrative system components and/or circuitry embodying the principles of the invention. Similarly, one having ordinary skill in the art should appreciate in view of the teachings provided herein that any flow charts, flow diagrams and the like can represent various processes which can be substantially represented in computer readable storage media and so executed by a computer, processor or other device with processing capabilities, whether or not such computer or processor is explicitly shown.
[0252] Having described preferred and exemplary embodiments of the various and numerous inventions of the present disclosure (which embodiments are intended to be illustrative and not limiting), it is noted that modifications and variations can be made by persons skilled in the art in light of the teachings provided herein, including the Figures. It is therefore to be understood that changes can be made in/to the preferred and exemplary embodiments of the present disclosure which are within the scope of the embodiments disclosed herein.
[0253] Moreover, corresponding and/or related systems incorporating and/or implementing the device/system in accordance with the present disclosure, or such as may be used/implemented in/with such a device, are also contemplated and considered to be within the scope of the present disclosure. Further, corresponding and/or related methods for manufacturing and/or using a device and/or system in accordance with the present disclosure are also contemplated and considered to be within the scope of the present disclosure.