System for Monitoring a Clinical Scenario

20230030273 · 2023-02-02

Abstract

The present invention relates to a system for monitoring a non-static clinical scenario, preferably a surgical field, such that different elements of interest can be located throughout the entire clinical event; in particular, a high-precision monitoring and distinction of critical biological tissues, such as nerves or blood vessels, is sought.

Claims

1-26. (canceled)

27. A system for monitoring biological tissues or other elements or structures that are present in a clinical scenario, the clinical scenario being a set of elements or structures on which a surgery, a diagnosis event, or a treatment event takes place, the system comprising: a) a control module with a processing capacity; and b) a viewing unit connected to the control module and controlled by the control module, the viewing unit configured to capture images of a clinical scenario within a field of vision (C.sub.s), wherein captured images provide information about an orientation, shape, and position of the clinical scenario; wherein the control module and the viewing unit are configured to: capture at least one initial monitoring image (S.sub.0) of the clinical scenario within the field of vision (C.sub.s) of the viewing unit; capture a plurality of monitoring images (S.sub.i) of the clinical scenario within the field of vision (C.sub.s) of the viewing unit in different instants in time, where i is an index indicating a time instant of capture of a captured i-th monitoring image (S.sub.i); execute a monitoring algorithm (A) of the control module to: determine a plurality of initial characteristic reference points (F.sub.0) belonging to the clinical scenario and shown in the at least one initial monitoring image (S.sub.0) to establish a spatial position (P.sub.0); determine a plurality of characteristic reference points (F.sub.i) for each of the monitoring images (S.sub.i), the plurality of characteristic reference points (F.sub.i) belonging to the clinical scenario and matching characteristic reference points determined in an immediately preceding monitoring image (S.sub.i−1), and establish their spatial position (P.sub.i); determine a transformation (T.sub.i) relating a position (P.sub.i) of the characteristic reference points (F.sub.i) determined in instant i with a position (P.sub.i−1) of corresponding characteristic reference points (F.sub.i−1) determined in instant i−1; wherein the control module is further configured to: receive or generate at least one region of interest (R) when the captured monitoring image is the at least one initial monitoring image (S.sub.0), and spatially position the at least one region of interest (R) with respect to each monitoring image (S.sub.i), wherein the monitoring algorithm (A) is configured such that a concentration of characteristic reference points (F.sub.i) in the at least one region of interest (R) is greater than a concentration of characteristic reference points (F.sub.i) in a remainder of the monitoring image (S.sub.i); and wherein the system further comprises at least one of a spatial positioning or orientation unit coupled to the viewing unit, the at least one of the spatial positioning or orientation unit connected to the control module, and controlled by the control module, the control module further configured to perform at least one of moving, displacing, or orienting the viewing unit using the at least one of the spatial positioning or orientation unit by applying the transformation (T.sub.i) to a position and orientation before the movement for determining a new position and orientation, so as to capture quasi-static images of the clinical scenario.

28. The system according to claim 27, wherein for each monitoring image (S.sub.i) the control module is further configured, to determine a numerical model of the clinical scenario comprising the orientation, shape, and position of the clinical scenario from the monitoring image (S.sub.i), and incorporate the plurality of characteristic reference points (F.sub.i) determined by the monitoring algorithm (A) belonging to the clinical field in the numerical model associated with the monitoring image (S.sub.i).

29. The system according to claim 27, wherein the monitoring algorithm (A) is additionally configured to at least one of: determine characteristic reference points (F.sub.i) in the i-th monitoring image (S.sub.i) that are not determined in the immediately preceding monitoring image (S.sub.i−1); or discard characteristic reference points (F.sub.i−1) in the i-th instant in time that are determined in the monitoring image (S.sub.i−1) captured in the immediately preceding instant in time.

30. The system according to claim 27, wherein the control module is further configured to receive or generate at least one new region of interest (R) in an i-th monitoring image (S.sub.i), where i>0, the new region of interest (R) being at least one of included in and replacing, completely or partially, the set of pre-existing regions of interest.

31. The system according to claim 27, wherein the control module and the viewing unit are spatially pre-calibrated such that a match is established between the position of the points in the field of vision (C.sub.s) of the viewing unit and a position of the points in real space.

32. The system according to claim 27, wherein the transformation (T.sub.i) between first and second consecutive monitoring images (S.sub.i−1, S.sub.i) is a linear transformation corresponding to a rigid solid model of the clinical scenario, wherein the linear transformation verifies that a distance between the characteristic reference points (F.sub.i−1) of the first monitoring image (S.sub.i−1) transformed by the transformation (T.sub.i) and the characteristic reference points (F.sub.i) of the second monitoring image (S.sub.i) is minimal.

33. The system according to claim 27, wherein the transformation (T.sub.i) is a non-linear transformation corresponding to a deformation model, wherein each pair of characteristic reference points (F.sub.i−1, F.sub.i) between two consecutive monitoring images (S.sub.i−1, S.sub.i) has its own match.

34. The system according to claim 33, wherein a transformation for the remaining points which are not characteristic reference points (F.sub.i) is additionally established by an interpolatory deformation model subject to the characteristic reference points (F.sub.i) having the match given by the transformation (T.sub.i).

35. The system according to claim 27, wherein the control module is further configured to perform at least one of moving, displacing, or orienting the viewing unit using at least one of the spatial positioning or orientation unit when the control module automatically detects that the at least one region of interest (R) is located at a smaller distance than a predetermined distance from one of the ends of the field of vision (C.sub.s) of the viewing unit.

36. The system according to claim 27, wherein the control module is further configured to perform at least one of moving, displacing, or orienting the viewing unit using at least one of the spatial positioning or orientation unit when the control module receives an order to at least one of displace or orient the viewing unit through at least one of a peripheral or other communication paths.

37. The system according to claim 29, wherein the control module is configured to: transform the i-th monitoring image (S.sub.i) from transformation (T.sub.i) such that a distance between the characteristic reference points (F.sub.i) of the i-th monitoring image (S.sub.i) and the reference points (F.sub.i−1) of the immediately preceding monitoring image (S.sub.i−1) is minimal, and transform the at least one region of interest (R) from the same transformation (T.sub.i) in the monitoring image (S.sub.i).

38. The system according to claim 28, wherein the control module is additionally configured for transforming the numerical model associated with the i-th monitoring image (S.sub.i) from the transformation (T.sub.i) such that the distance between the characteristic reference points (F.sub.i) of the i-th monitoring image (S.sub.i) and the reference points (F.sub.i−1) of the immediately preceding monitoring image (S.sub.i−1) is minimal.

39. The system according to claim 27, wherein the monitoring algorithm (A) is a Simultaneous Localization and Mapping (SLAM) type algorithm.

40. The system according to claim 27, wherein the system further comprises a critical structure distinction module in communication with the control module, the critical structure distinction module being controlled by the control module, and wherein: the critical structure distinction module is configured to: generate measurements (m.sub.j) in a plurality of spatial points of its field of vision (C.sub.v), where j is an index indicating different instants in time; process the measurements (m.sub.j) and carry out a distinction of one or more structures (D.sub.j) in said plurality of spatial points, and send the information associated with a distinction of structures (D.sub.j) to the control module (2); and wherein the at least one region of interest (R) received or generated by the control module is selected from a monitoring image (S.sub.0, S.sub.i) depending on information associated with a j-th distinction of structures (D.sub.j), where a j-th instant in time of generating measurements (m.sub.j) is closest to the i-th instant in time of capturing the monitoring image (S.sub.0, S.sub.i).

41. The system according to claim 40, wherein the critical structure distinction module is spatially pre-calibrated such that a match is established between a position of the points of its field of vision (C.sub.v) and a position of the points in the field of vision (C.sub.s) of the viewing unit (3).

42. The system according to claim 40, wherein the critical structure distinction module is spatially pre-calibrated such that a match is established between the position of the points of its field of vision (C.sub.v) and the position of the points in real space.

43. The system according to claim 40, wherein the system further comprises a memory configured for storing, for each spatial point, the measurements (m.sub.j) carried out by the critical structure distinction module, such that if there are measurements stored in time instant j−1 (m.sub.j−1) and new measurements are generated in time instant j (m.sub.j), the memory is updated with the measurements of time instant j for each spatial point.

44. The system according to claim 43, wherein the control module transforms the information about the distinction of structures (D.sub.j) from transformation (T.sub.i) determined by the monitoring algorithm (A), wherein the j-th instant in time of generating the information about the distinction of structures (D.sub.j) is the closest to the i-th instant in time of capturing the monitoring image (S.sub.0, S.sub.i).

45. The system according to claim 40, wherein coordinates of the spatial points in which the plurality of measurements (m.sub.j) of distinction is performed are expressed in the coordinates of the numerical model determined after transformation (T.sub.i).

46. The system according to claim 27, wherein a selection of the at least one region of interest (R) is carried out according to a predefined criterion, wherein the at least one region of interest (R) comprises at least one of a nerve, tract of brain, blood vessel, a tissue considered critical, soft tissue, bone, or at least one reference marker.

47. The system according to claim 46, wherein the field of vision (C.sub.v) of the critical structure distinction module includes: the at least one region of interest (R), or the at least one region of interest (R) enlarged by predefined margins.

48. The system according to claim 27, wherein the viewing unit comprises at least one of a RGB or monochrome camera, a RGB-D or monochrome-D camera with a depth sensor, a camera with spectral, multispectral, or hyperspectral filtering, ultrasound equipment, magnetic resonance equipment, computerized tomography equipment, and polarization-sensitive optical coherence tomography equipment.

49. The system according to claim 48, wherein the critical structure distinction module carries out the distinction of one or more structures (D.sub.j) utilizing at least one of laser-induced breakdown spectroscopy (LIBS), optical coherence tomography (OCT), polarization-sensitive optical coherence tomography (PS-OCT), hyperspectral imaging, linear spectrometry based on endogenous or exogenous contrast, or non-linear spectrometry based on endogenous or exogenous contrast.

50. The system according to claim 27, wherein: a maximum number of monitoring images (S.sub.i) is at least one of: a predefined integer greater than or equal to one, established upon ending a clinical event or session, and established upon reaching a predefined time limit, and a maximum number of times the critical structure distinction module generates measurements (m.sub.j) is at least one of: a predefined integer greater than or equal to one, established upon ending the clinical event or session, and established upon reaching a predefined time limit.

51. The system according to claim 47, wherein the control module is further configured to at least one of displace or orient the viewing unit by means of at least one of the spatial positioning or orientation unit when the control module automatically detects that the at least one region of interest (R) is located at a smaller distance than a predetermined distance from one of the ends of the field of vision (C.sub.v) of the critical structure distinction module.

Description

DESCRIPTION OF THE DRAWINGS

[0219] These and other features and advantages of the invention will become more apparent based on the following detailed description of a preferred embodiment, given solely by way of non-limiting illustrative example in reference to the attached drawings.

[0220] FIGS. 1a and 1b schematically show the state of the art of a system for monitoring a clinical scenario.

[0221] FIGS. 2a and 2b show a diagram of the system for monitoring a clinical scenario according to an embodiment of the invention.

[0222] FIG. 3 shows a diagram of the system for monitoring a clinical scenario according to an embodiment of the invention comprising a spatial positioning and/or orientation unit.

[0223] FIG. 4 shows a diagram of the system for monitoring a clinical scenario according to an embodiment of the invention comprising a critical structure distinction module.

DETAILED DISCLOSURE OF THE INVENTION

[0224] The present invention describes a system (1) for monitoring a clinical scenario (6). Said clinical scenario (6) is not static, but rather undergoes variations over time for a number of reasons, such as the patient's breathing or the interaction of medical instruments or medical personnel with the tissues of the patient. Due to this movement, there is a need to perform comprehensive monitoring of the elements appearing in the clinical scenario (6), particularly structures of interest; for example, during an ablation surgery it is of interest to know the position, orientation, and shape of certain biological tissues such as nerves or blood vessels.

[0225] FIG. 1a schematically shows a system (1) for monitoring a clinical scenario (6) comprising: [0226] a) a control module (2) with processing capacity and with the capacity to execute the instructions of a monitoring algorithm (A), [0227] b) a viewing unit (3), connected to the control module (2) and controlled by said control module (2), adapted for capturing images of a clinical scenario (6) within its field of vision (C.sub.s), wherein the captured images provide information about the orientation, shape, and position of said clinical scenario (6).

[0228] A system (1) such as the one described is known in the state of the art, where it is common for both elements to work together. On one hand, the viewing unit (3) is configured for capturing at least one initial monitoring image (S.sub.0) and a plurality of monitoring images (S.sub.i) of the clinical scenario (6) within its field of vision (C.sub.s) in different i-th instants in time. All the monitoring images (S.sub.0, S.sub.i) are sent by the viewing unit (3) to the control module (2).

[0229] In this example shown in FIG. 1a, the clinical scenario (6) comprises the stretcher on which the patient to be operated on is placed, as well as the patient, medical personnel, and the entire environment in which the clinical event occurs. Preferably, the clinical event is a surgery and the clinical scenario is a surgical field comprising, among other elements, the biological tissues of the patient, surgical instruments, or sterile material such as gauzes or swabs.

[0230] The control module (2) receives the monitoring images (S.sub.0, S.sub.i) captured by the viewing unit (3) and carries out on said images a given type of processing. In particular, by means of executing the monitoring algorithm (A), in the initial instant of the clinical event it determines a plurality of initial characteristic reference points (F.sub.0) belonging to the clinical scenario (6)—shown in the at least one initial monitoring image (S.sub.0)—and establishes their spatial position (P.sub.0). As it receives the rest of the monitoring images (S.sub.i), it performs the same operation for each of them, i.e., it determines a plurality of characteristic reference points (F.sub.i) in each monitoring image (S.sub.i) belonging to the clinical scenario (6) and establishes their spatial position (P.sub.i).

[0231] The characteristic reference points (F.sub.0, F.sub.i) are pixels or pixel regions of the monitoring images (S.sub.0, S.sub.i) readily distinguishable with respect to their environment. The distinction criteria of said points are of a varying nature, inter alia: [0232] characteristic points or regions of the image, such as a mole in the skin or luster in a surgical instrument; or [0233] specific colors of the image, such as the red color of blood or the metallic of surgical instruments; or [0234] specific textures of the image, such as the aqueous texture of given fluids; or [0235] regions of the image with high contrast with respect to their environment, such as the presence of a tumor; or [0236] regions of the image with high heterogeneity with respect to their environment, such as the presence of a set of blood vessels opposite a homogeneous area such as the skin; or [0237] a combination of any of the preceding criteria.
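The contrast and heterogeneity criteria listed above can be sketched as a minimal scoring routine. The function name `candidate_reference_points`, the patch size, and the use of local variance as the score are illustrative assumptions for this sketch, not the patent's actual detection method:

```python
import numpy as np

def candidate_reference_points(image, patch=3, k=5):
    """Score each pixel of a 2-D grayscale image by the variance of its
    local patch (a simple 'high contrast / high heterogeneity' criterion)
    and return the k highest-scoring pixel coordinates as candidate
    characteristic reference points. Border pixels are skipped so every
    candidate has a full patch around it."""
    h, w = image.shape
    r = patch // 2
    scores = np.zeros_like(image, dtype=float)
    for y in range(r, h - r):
        for x in range(r, w - r):
            scores[y, x] = image[y - r:y + r + 1, x - r:x + r + 1].var()
    # Flatten, sort descending, keep the k best, map back to (row, col).
    flat = np.argsort(scores, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, scores.shape)) for i in flat]
```

In practice a feature detector from an image-processing library would replace this brute-force loop; the sketch only shows that "readily distinguishable with respect to their environment" translates into a per-pixel distinctiveness score.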

[0238] In general, the characteristic reference points of two consecutive monitoring images (S.sub.i−1, S.sub.i) match one another, i.e., pairs of points can be established between two consecutive monitoring images. It must be observed that it is possible for certain characteristic reference points to be determined in a single image of the monitoring images, so said points will be lacking said match, at least between two specific instants in time. It must also be noted that for the instant in time i=1, the match between pairs of points is established between the points of the at least one initial monitoring image (S.sub.0) and the points of the monitoring image in said instant in time (S.sub.1).
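The pairing of points between consecutive monitoring images can be illustrated with a mutual-nearest-neighbour matcher; points left without a partner are precisely those determined in only one of the two images. `match_points` and its distance threshold are assumptions of this sketch, not the patented matcher:

```python
import numpy as np

def match_points(P_prev, P_curr, max_dist=5.0):
    """Pair reference points of image S_(i-1) with those of S_i.
    A pair (i, j) is kept only when the two points are mutual nearest
    neighbours and closer than max_dist; unmatched points correspond to
    features appearing in, or vanishing from, a single image."""
    P_prev = np.asarray(P_prev, float)
    P_curr = np.asarray(P_curr, float)
    # Full pairwise distance matrix between the two point sets.
    d = np.linalg.norm(P_prev[:, None, :] - P_curr[None, :, :], axis=2)
    fwd = d.argmin(axis=1)   # nearest current point for each previous point
    bwd = d.argmin(axis=0)   # nearest previous point for each current point
    return [(i, j) for i, j in enumerate(fwd)
            if bwd[j] == i and d[i, j] <= max_dist]
```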

[0239] As a result of the match between pairs of points, the monitoring algorithm (A) can infer the relative movement between the viewing unit (3) and the clinical scenario (6) over time. Thus, starting from said matches, the monitoring algorithm (A) determines a transformation (T.sub.i) relating the position (P.sub.i) of the characteristic reference points (F.sub.i) determined in instant i with the position (P.sub.i−1) of the corresponding characteristic reference points (F.sub.i−1) determined in instant i−1.

[0240] Preferably, the monitoring algorithm (A) is a SLAM (Simultaneous Localization and Mapping) type algorithm, the capabilities of which in contexts such as robotics or surgical navigation have been amply demonstrated.

[0241] FIG. 1b shows an example of how the monitoring algorithm (A) executed by the control module (2) determines characteristic reference points (F.sub.i) in a monitoring image (S.sub.i). As can be seen, said points are distributed over the entire monitoring image (S.sub.i), covering at least most of the structures of the clinical scenario (6) visible in the monitoring image (S.sub.i).

[0242] Therefore, no type of specific structure of the clinical scenario (6) is given priority, there being a compromise between the monitoring of all the structures appearing in the monitoring image (S.sub.i)—tissues and other elements of the clinical scenario (6)—and the precision attained in said monitoring.

[0243] This solution would not be suitable when comprehensive monitoring of certain critical structures is to be performed, or when the number of characteristic reference points (F.sub.i) is high, since by means of a characteristic monitoring algorithm (A) of the state of the art, information about the position, orientation, and shape of said structures with insufficient precision and/or a high computational cost would be achieved.

[0244] Depending on the context of the clinical event, the critical structures are different; for example, in ablation or cutting surgery comprehensive monitoring of veins and nerves should be performed so that medical personnel or a robotic surgical system does not act on said structures. In contrast, in a therapeutic method, the objective of monitoring a critical structure can be to apply the treatment exclusively on that area, such as for example during optical irradiation on cutaneous lesions or the stimulation of nerve tissue. The monitoring of critical structures in diagnosis applications should also be mentioned, where for example it may be of interest to display a critical region, such as a node, by means of medical imaging techniques for the time needed to complete the diagnosis.

[0245] The solution proposed by the invention solves this problem of the state of the art. FIGS. 2a and 2b illustrate the system (1) of the invention, which seeks to improve precision in the monitoring of critical structures of the clinical scenario (6).

[0246] This system (1) comprises the same elements as those described in the preceding figures. The viewing unit (3) of the system (1) is preferably a stereoscopic system comprising two RGB cameras. In alternative examples, the viewing unit (3) comprises: [0247] a single RGB camera, or [0248] at least one monochrome camera, or [0249] at least one RGB-D or monochrome-D camera with a depth sensor, or [0250] at least one camera with spectral, multispectral, or hyperspectral filtering, or [0251] ultrasound equipment, or [0252] magnetic resonance equipment, or [0253] computerized tomography equipment, or [0254] preferably polarization-sensitive optical coherence tomography equipment, or [0255] a combination of any of the foregoing.

[0256] In the preferred embodiment, it must be observed that since the viewing unit (3) comprises two RGB cameras, two initial monitoring images will be captured separately. The information provided by one of said images, referred to as dominant, is completed with information from the other “extra” monitoring image.

[0257] The pixels or voxels of the field of vision (C.sub.s) of this viewing unit (3) match physical points of real space in which the clinical event takes place. To that end, preferably, the system (1) for monitoring—control module (2) and viewing unit (3)—is spatially calibrated.

[0258] Moreover, the number of monitoring images (S.sub.i) acquired by the viewing unit (3) during the clinical event can be a predefined integer greater than or equal to one, be established upon ending said clinical event, or be established upon reaching a predefined time limit.

[0259] The system (1) of the invention is characterized in that, unlike the system shown in FIGS. 1a and 1b, the control module (2) is additionally configured for receiving or generating at least one region of interest (R) when the captured monitoring image is the at least one initial monitoring image (S.sub.0), and for spatially positioning the at least one region of interest (R) with respect to each monitoring image (S.sub.i). Furthermore, the monitoring algorithm (A) is configured such that the concentration of characteristic reference points (F.sub.i) in the at least one region of interest (R) is greater than the concentration of characteristic reference points (F.sub.i) in the rest of the monitoring image (S.sub.i). Preferably, said monitoring algorithm (A) is a modified SLAM algorithm.
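One simple way to realize the higher concentration of reference points inside the region of interest is to split a fixed feature budget by area with a weight favouring (R). The helper `allocate_feature_budget` and the specific density ratio are assumptions of this sketch; the claim only requires the concentration in (R) to be greater than in the remainder of the image:

```python
def allocate_feature_budget(total_points, roi_area, image_area, roi_weight=4.0):
    """Split a feature-point budget so that the density (points per unit
    area) inside the region of interest R is roi_weight times the density
    in the rest of the monitoring image. Returns (points_in_roi,
    points_in_rest)."""
    rest_area = image_area - roi_area
    # Solve: d_out * rest_area + roi_weight * d_out * roi_area = total_points
    density_out = total_points / (rest_area + roi_weight * roi_area)
    n_roi = round(roi_weight * density_out * roi_area)
    return n_roi, total_points - n_roi
```

With a small (R) relative to the image, even a modest weight yields the markedly denser sampling of the critical structures shown in FIG. 2b while keeping the total computational cost fixed.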

[0260] In particular, FIG. 2a shows how the control module (2) receives a region of interest (R) and positions it with respect to a monitoring image (S.sub.i). This region of interest (R) is a sub-region of the at least one initial monitoring image (S.sub.0) in which there appear elements of the clinical scenario (6) with respect to which a higher precision monitoring is to be obtained.

[0261] In an alternative embodiment, the control module (2) itself is in charge of generating said region of interest (R). For the sake of simplicity, said FIG. 2a shows only one region of interest (R), but the control module (2) can receive—or generate—more than one region of interest (R).

[0262] As already discussed, the region or regions of interest (R) comprise critical structures. Preferably, the critical structures are biological tissues such as nerves, tract of brain, blood vessels, tumors, soft tissue, or bone. Other examples of critical structures other than biological tissues are reference markers present in the clinical scenario (6) or the actual surgical instruments or material.

[0263] FIG. 2b shows how the monitoring algorithm (A), taking into account the region of interest (R), determines the plurality of characteristic reference points (F.sub.i) in the monitoring image (S.sub.i) such that the concentration of said points is higher in the region of interest (R) than in the rest of the monitoring image (S.sub.i). As mentioned above, in general the characteristic reference points (F.sub.i) determined for a monitoring image (S.sub.i) match the characteristic reference points (F.sub.i−1) determined for the immediately preceding monitoring image (S.sub.i−1). However, there are two exceptions which the system (1) of the invention takes into account for reducing the computational cost of processing.

[0264] On one hand, it is possible for the monitoring algorithm (A) to identify at least one reference point (F.sub.i) in a monitoring image (S.sub.i) which was not determined in the preceding monitoring image (S.sub.i−1). This case may occur, for example, when because of the movement of the clinical scenario (6), elements which previously were not visible and which are susceptible to being identified with a characteristic reference point enter the field of vision (C.sub.s) of the viewing unit (3).

[0265] Moreover, even though the monitoring algorithm (A) has identified at least one reference point (F.sub.i−1) in a monitoring image (S.sub.i−1), said at least one point can be discarded in the immediately following monitoring image (S.sub.i). This case may occur either because the structure that has given rise to the at least one reference point is no longer visible in the i-th instant in time (in which case the monitoring algorithm (A) can temporarily store it if it becomes visible again in following instants in time), or simply because the at least one point is no longer a good candidate as a reference point on which to base the monitoring of structures.

[0266] This flexibility allows the monitoring algorithm (A) to adapt to the changes the clinical scenario (6) experiences. Thus, it only analyzes the reference points which give rise to good precision in the monitoring of the clinical scenario (6) without wasting computational resources in the processing of points which are not of interest.

[0267] In an alternative example, when no more precision in the monitoring of critical structures than that provided by conventional monitoring algorithms (A) is required, the system (1) of the invention substantially increases the monitoring speed of structures. This is possible as a result of the reduction in the number of characteristic reference points (F.sub.i) to be analyzed, produced by the optimal choice thereof.

[0268] In a preferred example, the region or regions of interest (R) are updated over time. To that end, the control module (2) is additionally configured for receiving or generating at least one new region of interest (R) in a monitoring image (S.sub.i), for i>0. A first option is for the new region or regions of interest (R) to be included as part of the already pre-existing set of regions of interest (R), for example, if a new critical structure enters the field of vision (C.sub.s) of the viewing unit (3) and detailed monitoring thereof is required. Another option is for the new region or regions of interest (R) to replace one of the pre-existing regions of interest (R), for example, if, due to the movement of the clinical scenario (6), the critical structures of a region of interest (R) have changed in position, shape, and/or orientation. A final option is for a combination of the two preceding options to be produced, i.e., one or more of the new regions of interest (R) are included in the pre-existing set of regions and another new region of interest or other new regions of interest (R) replace one or more pre-existing regions.

[0269] Once the characteristic reference points (F.sub.i) have been determined and positioned, the monitoring algorithm (A) determines the transformation (T.sub.i) relating the position (P.sub.i) of the characteristic reference points (F.sub.i) determined in instant i with the position (P.sub.i−1) of the corresponding characteristic reference points (F.sub.i−1) determined in instant i−1. There are different types of transformation (T.sub.i), i.e., linear or non-linear, which entail different degrees of mathematical complexity, and therefore computational cost.

[0270] Preferably, the monitoring algorithm (A) determines that the transformation (T.sub.i) between two consecutive monitoring images (S.sub.i−1, S.sub.i) is of the linear type and corresponds to a rigid solid model of the clinical scenario (6). This linear transformation (T.sub.i) has to verify that the distance between the characteristic reference points (F.sub.i−1) of the first monitoring image (S.sub.i−1) transformed by the transformation (T.sub.i) and the characteristic reference points (F.sub.i) of the second monitoring image (S.sub.i) is minimal.
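A standard least-squares solution for such a rigid transformation is the Kabsch/Procrustes algorithm, shown here in 2-D as one concrete way a linear (T.sub.i) minimizing the point-to-point distance could be computed; the patent does not prescribe this particular solver:

```python
import numpy as np

def rigid_transform(F_prev, F_curr):
    """Least-squares rigid transformation (rotation R, translation t)
    mapping the matched reference points F_(i-1) onto F_i, i.e.
    minimising sum ||R @ p + t - q||^2 over matched pairs (p, q)."""
    P = np.asarray(F_prev, float)
    Q = np.asarray(F_curr, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correction term guarantees a proper rotation (no reflection).
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Because the solution is closed-form (one SVD of a 2x2 matrix), this is the low-computational-cost model discussed below, at the price of assuming the tracked region moves rigidly between consecutive images.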

[0271] This linear transformation assumes that the clinical scenario (6) is non-deformable over time. Thus, one skilled in the art would not use a transformation (T.sub.i) of this type to monitor structures undergoing deformations over time, such as those characteristic of the clinical scenario (6) of the invention.

[0272] Even if one skilled in the art decided to use this type of transformation (T.sub.i), he or she would tend to separate the reference points (F.sub.i) in the monitoring images (S.sub.i) as much as possible in order to cover, to the extent possible, the entire clinical scenario (6), because a greater distance between points allows increasing precision in the orientation of the referenced object. Nevertheless, it has been observed that the precision thereby obtained in the monitoring of structures is insufficient in the context of the invention, particularly in the monitoring of critical structures. To improve this precision, one skilled in the art would use a number of reference points (F.sub.i) much greater than what any standard monitoring algorithm (A) would determine, which would exponentially increase the computational cost, rendering the monitoring of clinical scenarios (6) in quasi real time unviable.

[0273] However, the system (1) of the invention is characterized in that the control module (2) receives or generates one or more regions of interest (R) in which the monitoring algorithm (A) determines the highest concentration of characteristic reference points (F.sub.i), which surprisingly makes it possible to apply a simple linear transformation (T.sub.i) to monitor a deformable scenario with high precision for the critical structures visible in the region or regions of interest (R). On one hand, the regions of interest (R) are of a limited size compared with the size of the monitoring images (S.sub.i); on the other, the concentration of reference points (F.sub.i) in said regions of interest (R) is greater than in the rest of the monitoring image (S.sub.i). Thus, the relative positions between the characteristic reference points (F.sub.i) of the regions of interest (R) undergo little or no variation between consecutive monitoring images (S.sub.i−1, S.sub.i). With this type of transformation, precision in the monitoring of the other structures visible in the monitoring images (S.sub.i) is lost, since the actual deformation that said structures undergo is not taken into account. However, given that these structures are not contained in regions of interest (R), they can be considered non-critical, and therefore their monitoring does not require high precision. The main advantage of applying this model is its simplicity and the low computational cost required to determine the transformation (T.sub.i).
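One conceivable way to concentrate the reference points inside the regions of interest (R) is to keep every detected candidate that falls inside an ROI and only a sparse subset of those outside. The following Python sketch illustrates this idea; the rectangular representation of the ROIs and the subsampling ratio are illustrative assumptions, not details specified by the patent.

```python
def select_reference_points(candidates, rois, outside_keep_every=5):
    """Concentrate characteristic reference points F_i inside the
    regions of interest (R): keep all candidates inside an ROI and
    only every n-th candidate outside. ROIs are hypothetical
    (x_min, y_min, x_max, y_max) rectangles."""
    def inside(p):
        return any(x0 <= p[0] <= x1 and y0 <= p[1] <= y1
                   for (x0, y0, x1, y1) in rois)
    selected, outside_seen = [], 0
    for p in candidates:
        if inside(p):
            selected.append(p)          # always keep points in an ROI
        else:
            if outside_seen % outside_keep_every == 0:
                selected.append(p)      # sparse coverage elsewhere
            outside_seen += 1
    return selected
```

The resulting point set is dense where precision matters and cheap to track elsewhere, which is the trade-off the paragraph describes.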

[0274] Alternatively, the monitoring algorithm (A) determines that the transformation (T.sub.i) between two consecutive monitoring images (S.sub.i−1,S.sub.i) is of the non-linear type and corresponds to a deformable solid model of the clinical scenario (6). In this transformation (T.sub.i), which is more complex than the aforementioned linear transformation, each pair of reference points (F.sub.i−1,F.sub.i) between two consecutive monitoring images (S.sub.i−1, S.sub.i) has its own match, i.e., a different transformation (T.sub.i) is applied to each pair of points.

[0275] This is the type of transformation (T.sub.i) which, a priori, one skilled in the art would use, since the clinical scenario (6) deforms over time. However, as discussed for the preceding case, he or she would tend to separate the reference points (F.sub.i) in the monitoring images (S.sub.i) as much as possible so as to cover, to the extent possible, the entire clinical scenario (6) and achieve a smooth and realistic deformation by analyzing the deformation over a larger area. Although the precision obtained in the monitoring of structures would be greater than that obtained with a linear transformation, it would not be focused on the critical structures as required in the context of the invention. Thus, as there is no information available about the critical structures, one skilled in the art would use a much larger number of reference points (F.sub.i) than what any standard monitoring algorithm (A) would determine, which would exponentially increase the computational cost, rendering the monitoring of clinical scenarios (6) in quasi real time unviable. It should be noted that the computational cost would in this case be an even more critical factor than for a linear transformation, since a different match must be determined for each pair of points.

[0276] In contrast, using this model in the context of the invention, i.e., using regions of interest (R) with higher concentrations of reference points (F.sub.i) than in the rest of the monitoring image (S.sub.i), entails the advantage of obtaining very high precision in the monitoring of structures belonging to the region or regions of interest (R). Evidently, the determination of this type of transformation (T.sub.i) represents a higher computational cost than that corresponding to a single linear transformation for all the reference points.

[0277] Additionally, the non-linear transformation (T.sub.i) can be extrapolated to the remaining points of the monitoring image (S.sub.i) which are not characteristic reference points (F.sub.i) by using an interpolatory deformation model. Even greater precision is thereby achieved in the monitoring of structures, at the expense of a higher computational cost.
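One simple interpolatory deformation model of the kind mentioned here is inverse-distance weighting, sketched below in Python: the displacement at an arbitrary pixel is a weighted average of the displacements measured at the characteristic reference points (F.sub.i). The patent does not specify which interpolation scheme is used; IDW is shown only as a minimal example.

```python
def idw_displacement(point, features, displacements, power=2.0, eps=1e-12):
    """Interpolate the displacement at an arbitrary pixel from the
    per-point displacements of the reference points F_i, using
    inverse-distance weighting. `features` and `displacements` are
    matched lists of (x, y) tuples."""
    num_x = num_y = den = 0.0
    for (fx, fy), (dx, dy) in zip(features, displacements):
        d2 = (point[0] - fx) ** 2 + (point[1] - fy) ** 2
        if d2 < eps:              # query coincides with a reference point
            return (dx, dy)
        w = 1.0 / d2 ** (power / 2.0)   # weight falls off with distance
        num_x += w * dx
        num_y += w * dy
        den += w
    return (num_x / den, num_y / den)
```

Nearby reference points dominate the estimate, so pixels inside a densely sampled region of interest (R) inherit the locally measured deformation, as the paragraph requires.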

[0278] FIG. 3 shows an embodiment in which the system (1) additionally comprises a spatial positioning and/or orientation unit (5) coupled to the viewing unit (3). This unit (5) is connected to and controlled by the control module (2), which is additionally configured for displacing and/or orienting the viewing unit (3) by means of this spatial positioning and/or orientation unit (5) depending on the transformation (T.sub.i) determined by the monitoring algorithm (A).

[0279] In an alternative example, the viewing unit (3) is repositioned and/or reoriented if the control module (2) automatically detects that the at least one region of interest (R) is located closer than a predetermined distance to one of the edges of the field of vision (C.sub.s) of the viewing unit (3). In another alternative example, the viewing unit (3) is repositioned and/or reoriented if the control module (2) receives an order to displace and/or orient the viewing unit (3) through a peripheral (for example, a joystick) and/or through other communication paths (for example, via WiFi).
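The edge-proximity check described in the first example can be sketched as follows; the rectangular representation of the region of interest and of the field of vision is a hypothetical data format chosen for illustration, not one fixed by the patent.

```python
def needs_reposition(roi, fov, margin):
    """Return True when the region of interest (R) lies closer than
    `margin` to any edge of the field of vision (C_s). Both `roi` and
    `fov` are (x_min, y_min, x_max, y_max) rectangles, with the ROI
    assumed to lie inside the field of vision."""
    rx0, ry0, rx1, ry1 = roi
    fx0, fy0, fx1, fy1 = fov
    return (rx0 - fx0 < margin or ry0 - fy0 < margin or
            fx1 - rx1 < margin or fy1 - ry1 < margin)
```

When this predicate fires, the control module would command the spatial positioning and/or orientation unit to re-center the viewing unit on the region of interest.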

[0280] The objective of this repositioning and/or reorientation of the viewing unit (3) is to capture quasi-static monitoring images (S.sub.i) of the clinical scenario (6), i.e., after repositioning and/or reorienting the viewing unit (3), the captured monitoring images (S.sub.i) will be perceived as not having varied over time, or at least as having varied minimally. Thus, the system (1) actively follows the movement of the clinical scenario (6) such that, even though the scenario experiences variations, the system (1) is always positioned with respect to it such that the relative position between the viewing unit (3) and the clinical scenario (6) ideally remains static.

[0281] In an alternative embodiment, the system (1) does not comprise this spatial positioning and/or orientation unit (5), so the viewing unit (3) remains static. Thus, in order to achieve the same effect of a quasi-static clinical scenario (6), the control module (2) is additionally configured for processing the monitoring images (S.sub.i) by applying the transformation (T.sub.i) determined by the monitoring algorithm (A) on said images (S.sub.i). To that end, the control module (2) transforms the i-th monitoring image using the transformation (T.sub.i), such that the distance between the characteristic reference points (F.sub.i) of this image (S.sub.i) and the reference points (F.sub.i−1) of the immediately preceding monitoring image (S.sub.i−1) is minimal. Additionally, so that the region or regions of interest (R) continue to represent the same area of the monitoring image (S.sub.i), the control module (2) also transforms said region or regions (R) using the same transformation (T.sub.i).
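Assuming the rigid (rotation plus translation) model discussed earlier, this software stabilization amounts to applying the inverse of (T.sub.i) to the pixels of the i-th image. The Python sketch below does this point-wise; a real implementation would warp the whole image, and the (theta, tx, ty) parameterization is an illustrative assumption rather than the patent's representation.

```python
import math

def invert_rigid_2d(theta, tx, ty):
    """Inverse of a rigid transform p' = R(theta) p + t, i.e.
    p = R(-theta) p' - R(-theta) t."""
    c, s = math.cos(-theta), math.sin(-theta)
    return -theta, -(c * tx - s * ty), -(s * tx + c * ty)

def stabilize_point(pt, theta, tx, ty):
    """Map a pixel of S_i back to the S_{i-1} frame, so consecutive
    stabilised images (and the regions of interest R within them)
    appear quasi-static."""
    it, itx, ity = invert_rigid_2d(theta, tx, ty)
    x, y = pt
    return (x * math.cos(it) - y * math.sin(it) + itx,
            x * math.sin(it) + y * math.cos(it) + ity)
```

Applying the same inverse transform to the vertices of each region of interest keeps the regions aligned with the stabilized image, as the paragraph requires.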

[0282] According to another embodiment, the number of degrees of freedom of the spatial positioning and/or orientation unit (5) is limited, for example, to two movements on a horizontal plane, and therefore compensation for the movements is not sufficient. This embodiment combines the use of this spatial positioning and/or orientation unit (5) with the application of the transformation (T.sub.i) determined by the monitoring algorithm (A) on the images (S.sub.i) after the compensation applied by the spatial positioning and/or orientation unit (5), improving the end result since this latter transformation only has to correct minor displacements and rotations.

[0283] In a particular example, the system (1) of FIGS. 2a, 2b, and 3 is configured, for each monitoring image (S.sub.i), for determining a numerical model of the clinical scenario (6) comprising the orientation, shape, and position of said clinical scenario (6) from said monitoring image (S.sub.i). Additionally, the system (1) is configured for incorporating into the numerical model associated with said monitoring image (S.sub.i) the plurality of characteristic reference points (F.sub.i) determined by the monitoring algorithm (A) as belonging to the clinical scenario (6). A data structure intended for storing this numerical model comprises the monitoring image (S.sub.i) as well as data sub-structures intended for storing other elements, such as, for example, surface and volume model equations, reference points, or other images representing scalar values of properties.
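The data structure described here could be sketched, for example, as a Python dataclass; all field names below are illustrative choices, not identifiers taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple

@dataclass
class NumericalModel:
    """Hypothetical container for the numerical model of the clinical
    scenario associated with one monitoring image S_i."""
    image: Any                                    # the monitoring image S_i
    reference_points: List[Tuple[float, float]]   # characteristic points F_i
    surface_equations: List[str] = field(default_factory=list)
    volume_equations: List[str] = field(default_factory=list)
    # Sub-structure for images representing scalar values of properties.
    property_maps: Dict[str, Any] = field(default_factory=dict)
```

Keeping the image and its sub-structures in one record means a single transformation (T.sub.i) can later be applied consistently to every element of the model.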

[0284] Additionally, in order to achieve the aforementioned effect of a quasi-static clinical scenario (6), the control module (2) is additionally configured for transforming the numerical model associated with the monitoring image (S.sub.i) using the transformation (T.sub.i), such that the distance between the characteristic reference points (F.sub.i) of the monitoring image (S.sub.i) and the reference points (F.sub.i−1) of the immediately preceding monitoring image (S.sub.i−1) is minimal.

[0285] FIG. 4 shows an embodiment in which the system (1) additionally comprises a critical structure distinction module (4). Said module has the function of distinguishing, identifying, or discriminating which tissues or structures of those present in the clinical scenario (6) are critical or non-critical. For example, in the context of ablation or cutting surgery, the critical structure distinction module discriminates the tissues on which it is possible to carry out an ablation or cutting operation from those on which it is not. In the context of a clinical treatment event, the distinction module discriminates the tissues on which the treatment must be applied from those on which it must not. Finally, in the case of a diagnosis operation, the distinction module discriminates the tissues of interest on which the diagnosis is to be focused from those which are irrelevant for determining the healthy or pathological condition of the patient.

[0286] Preferably, the technique used by the critical structure distinction module (4) is laser-induced breakdown spectroscopy (LIBS). In other alternative examples, the technique is one of the following:
[0287] optical coherence tomography (OCT), or
[0288] polarization-sensitive optical coherence tomography (PS-OCT), or
[0289] hyperspectral imaging, or
[0290] linear or non-linear spectrometry based on endogenous or exogenous contrast, or
[0291] a combination of any of the preceding techniques.

[0292] This additional module, which communicates with the control module (2) and is controlled by said control module (2), generates measurements (m.sub.j) in a plurality of spatial points of its field of vision (C.sub.v), where j is an index indicating different instants in time. In a particular example, the measurements (m.sub.j) carried out by the critical structure distinction module (4) are stored in a memory comprised in the system (1). These measurements (m.sub.j) must furthermore be updated in subsequent time instants.

[0293] The number of measurements (m.sub.j) generated by the critical structure distinction module (4) during the clinical event may be a predefined integer greater than or equal to one, may be established upon ending said clinical event, or may be established upon reaching a predefined time limit.

[0294] Once the measurements (m.sub.j) have been taken, the critical structure distinction module (4) processes them and carries out a distinction of one or more structures (D.sub.j) in said plurality of spatial points (m.sub.j). Finally, it sends the information associated with the distinction of structures (D.sub.j) to the control module (2). In a preferred embodiment, for this information and the monitoring of structures to be consistent, the control module (2) transforms the information about the distinction of structures (D.sub.j) using the transformation (T.sub.i) determined by the monitoring algorithm (A).

[0295] This information associated with the distinction of structures (D.sub.j) serves as a basis for selecting the at least one region of interest (R) of a monitoring image (S.sub.0, S.sub.i), taking into account that the instant of generating the measurements (m.sub.j) must be as close as possible to the instant of capturing the monitoring image in order to minimize the effects of movement of the clinical scenario (6).

[0296] In a particular example, the control module (2) generates the at least one region of interest (R) from the information associated with the distinction of structures (D.sub.j). The minimum requirement is that the region of interest (R) must contain at least the region of spatial points encompassing a critical structure according to the information provided by the distinction module (4). Preferably, the delimitation is established through a hull. More preferably, the hull is convex. Additionally, the control module (2) adds a certain safety margin to said hull by generating a hull expanded by a determined distance.
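A minimal sketch of this hull-based delimitation in Python: Andrew's monotone chain algorithm computes the convex hull of the spatial points of a critical structure, and a crude safety margin is then added by pushing each hull vertex away from the centroid. The radial expansion is a simplification of a true offset (Minkowski) dilation, which the patent does not detail.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain convex hull, counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def expand_hull(hull, margin):
    """Crude safety margin: push each hull vertex away from the
    centroid by `margin` (an approximation of a true dilation)."""
    cx = sum(x for x, _ in hull) / len(hull)
    cy = sum(y for _, y in hull) / len(hull)
    out = []
    for x, y in hull:
        d = math.hypot(x - cx, y - cy) or 1.0
        out.append((x + (x - cx) / d * margin,
                    y + (y - cy) / d * margin))
    return out
```

The expanded hull can then be rasterized or bounded to obtain the region of interest (R) passed to the monitoring algorithm.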

[0297] In another particular example, the medical personnel selects the at least one region of interest (R) from the information associated with the distinction of structures (D.sub.j) and sends it to the control module (2).

[0298] As can be seen in FIG. 4, the fields of vision (C.sub.s, C.sub.v) of the viewing unit (3) and of the critical structure distinction module (4) do not have to encompass the same extension. Preferably, both fields of vision at least partially overlap one another.

[0299] For the monitoring of the critical structures of the clinical scenario (6) to be precise, both fields of vision (C.sub.s, C.sub.v) are required to have the same coordinate system. Thus, the critical structure distinction module (4) and the viewing unit (3) of the system (1) shown in FIG. 4 are spatially calibrated with respect to one another. Additionally, the critical structure distinction module (4) is also spatially calibrated so that its coordinate system has the same references as the real space in which the clinical event takes place.

[0300] In a preferred embodiment, the field of vision of the critical structure distinction module (4) is limited to the region of interest (R), with or without predefined safety margins. The measurement time of the distinction module (4) is thereby reduced and the refresh rate for the critical structures is increased.

[0301] In an alternative embodiment, the system (1) of FIG. 4 additionally comprises a spatial positioning and/or orientation unit (5) coupled to the viewing unit (3). This unit (5) is connected to and controlled by the control module (2), which is additionally configured for displacing and/or orienting the viewing unit (3) by means of this spatial positioning and/or orientation unit (5) if the control module (2) automatically detects that the at least one region of interest (R) is located closer than a predetermined distance to one of the edges of the field of vision (C.sub.v) of the critical structure distinction module (4).