METHOD OF MONITORING A SURFACE FEATURE AND APPARATUS THEREFOR
20210219907 · 2021-07-22
Inventors
- William Richard Fright (Christchurch, NZ)
- Mark Arthur Nixon (Christchurch, NZ)
- Bruce Clinton McCallum (Lyttelton, NZ)
- James Telford George Preddey (Einsiedeln, CH)
CPC classification
- H04N1/00209 (ELECTRICITY)
- A61B5/7282 (HUMAN NECESSITIES)
- A61B5/0077 (HUMAN NECESSITIES)
- G01B11/14 (PHYSICS)
- H04N23/64 (ELECTRICITY)
- A61B5/748 (HUMAN NECESSITIES)
- A61B5/445 (HUMAN NECESSITIES)
- A61B5/743 (HUMAN NECESSITIES)
- A61B5/0022 (HUMAN NECESSITIES)
- H04N23/631 (ELECTRICITY)
- A61B2576/02 (HUMAN NECESSITIES)
- A61B8/5223 (HUMAN NECESSITIES)
- A61B5/0059 (HUMAN NECESSITIES)
- A61B5/447 (HUMAN NECESSITIES)
- A61B5/444 (HUMAN NECESSITIES)
- A61B2090/367 (HUMAN NECESSITIES)
- A61B5/7275 (HUMAN NECESSITIES)
- A61B5/6898 (HUMAN NECESSITIES)
- A61B2034/105 (HUMAN NECESSITIES)
- A61B5/746 (HUMAN NECESSITIES)
- A61B5/01 (HUMAN NECESSITIES)
International classification
- A61B5/00 (HUMAN NECESSITIES)
- A61B5/01 (HUMAN NECESSITIES)
- A61B5/107 (HUMAN NECESSITIES)
- A61B5/145 (HUMAN NECESSITIES)
- G01B11/00 (PHYSICS)
- G01B11/14 (PHYSICS)
- H04N1/00 (ELECTRICITY)
Abstract
Dimensions of a surface feature are determined by capturing an image of the surface feature and determining a scale associated with the image. Structured light may be projected onto the surface, such that the position of structured light in the captured image allows determination of scale. A non-planar surface may be unwrapped. The surface may alternatively be projected into a plane to correct for the scene being tilted with respect to the camera axis. A border of the surface feature may be input manually by a user. An apparatus and system for implementing the method are also disclosed.
Claims
1-34. (canceled)
35. An apparatus including: a 3-D camera for capturing three-dimensional data of a patient's skin surface including an anatomical surface feature; a display configured to display a two-dimensional representation of the patient's skin surface including the anatomical surface feature, and to enable a user to input at least a portion of an outline of the anatomical surface feature on the display, to create a two-dimensional outline of the anatomical surface feature; a processor configured to: map the two-dimensional outline into the three-dimensional data; based on the three-dimensional data and mapped outline, determine one or more of: surface feature depth, surface feature area and surface feature volume; and the display being further configured to display one or more of the determined surface feature depth, determined surface feature area and determined surface feature volume.
36. The apparatus of claim 35 wherein the 3-D camera is a stereoscopic 3-D camera.
37. The apparatus of claim 35 wherein the 3-D camera is attached to a portable device that includes the display.
38. An apparatus including: a camera for capturing data of a patient's skin surface including an anatomical surface feature; a processor configured to: create a three-dimensional model of the skin surface including the anatomical surface feature, based on the captured data; map a two-dimensional outline of the anatomical surface feature onto the three-dimensional model; based on the three-dimensional model and mapped outline, determine one or more of: surface feature depth, surface feature area and surface feature volume; and a display configured to display one or more of the determined surface feature depth, determined surface feature area and determined surface feature volume.
39. The apparatus of claim 38 wherein the camera is attached to a portable device that includes the display.
40. The apparatus of claim 38 wherein the camera is a 3-D camera configured to capture three-dimensional data of the patient's skin surface including the anatomical surface feature.
41. The apparatus of claim 40 wherein the 3-D camera is a stereoscopic 3-D camera.
42. The apparatus of claim 38 further including a structured light arrangement configured to project one or more structured light elements onto the skin surface, the camera capturing image data including the structured light elements.
43. The apparatus of claim 38, the display being configured to display a two-dimensional representation of the patient's skin surface including the anatomical surface feature, and to enable a user to input at least a portion of an outline of the anatomical surface feature on the display, to create the two-dimensional outline of the anatomical surface feature.
44. A method of assessing an anatomical surface feature on a skin surface of a patient, including: capturing data of the skin surface including the anatomical surface feature; based on the captured data, creating a three-dimensional model of the skin surface including the anatomical surface feature; creating a two-dimensional outline of the anatomical surface feature; mapping the two-dimensional outline onto the three-dimensional model; based on the three-dimensional model and mapped outline, determining one or more of: surface feature depth, surface feature area and surface feature volume; and displaying one or more of the determined surface feature depth, determined surface feature area and determined surface feature volume.
45. The method of claim 44 wherein capturing data includes capturing three-dimensional data of the patient's skin surface including an anatomical surface feature.
46. The method of claim 45 wherein capturing three-dimensional data includes capturing three-dimensional image data.
47. The method of claim 46 wherein capturing three-dimensional image data includes capturing data with a 3-D camera.
48. The method of claim 47 wherein the 3-D camera is a stereoscopic 3-D camera.
49. The method of claim 44 wherein capturing data includes capturing image data including one or more structured light elements.
50. The method of claim 44 including displaying a two-dimensional representation of the skin surface including the anatomical surface feature to a user on a display, wherein creating the two-dimensional outline of the anatomical surface feature includes enabling the user to input at least a portion of the outline.
51. The method of claim 50 wherein creating the two-dimensional outline includes enabling the user to adjust the outline manually by selecting and moving a portion of the outline to change the position of the portion relative to the displayed two-dimensional representation of the anatomical surface feature, thereby creating a revised two-dimensional outline of the anatomical surface feature; and mapping the revised two-dimensional outline into the three-dimensional data.
Description
DRAWINGS
[0047] The invention will now be described by way of example with reference to possible embodiments thereof as shown in the accompanying figures in which:
DETAILED DESCRIPTION
[0060] Referring to
[0061] In use the assembly of camera 1 and laser 4 is directed so that optical axis 2 is aligned with the central region of wound 7. Laser 4 projects stripe 6 across wound 7 and the image is captured by camera 1. It will be appreciated that, due to the fixed angular relationship of the laser fan beam 5 and the optical axis 2, the distance of points of stripe 6 from camera 1 may be determined: the distance of points of stripe 6 along the x-axis shown in
[0062] In a first embodiment the assembly of camera 1 and laser 4 may be positioned above wound 7 so that stripe 6 is aligned with optical axis 2. This may be achieved by aligning cross hairs (or a dot) in the centre of a display screen displaying the image with the centre of wound 7 and stripe 6. In this way the camera is positioned a known distance away from the centre of wound 7 and so a scale can be determined.
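The range-from-stripe triangulation underlying the preceding paragraphs can be sketched as follows. The geometry (a laser mounted `baseline_mm` beside the lens, its fan beam tilted `tilt_deg` toward the optical axis, pinhole focal length `f_px`) is an illustrative model chosen for this sketch; the patent does not specify these parameters.

```python
import math

def stripe_range(u_px, f_px, baseline_mm, tilt_deg):
    """Depth of a stripe point from its image x-coordinate.

    Illustrative geometry: the laser sits baseline_mm beside the lens
    with its fan beam tilted tilt_deg toward the optical axis, so a
    point at depth z images at u = f*b/z - f*tan(tilt).  Solving for z
    gives the range below.
    """
    return f_px * baseline_mm / (u_px + f_px * math.tan(math.radians(tilt_deg)))

# A stripe point at depth 300 mm (f = 800 px, b = 50 mm, no tilt)
# images at u = 800*50/300 px; inverting recovers the 300 mm range.
z = stripe_range(800 * 50 / 300, 800, 50, 0.0)
```

Once the range to the wound centre is known, the image scale (mm per pixel) follows from the focal length, which is how the known camera-to-wound distance yields a scale.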
[0063] The area of a wound may be calculated by calculating the pixel area of wound 7 from a captured image and multiplying by a known scaling factor. This technique may be effective where camera 1 can be oriented normal to the wound 7 and where wound 7 is generally planar. This technique offers a simple solution in such cases. However, many wounds are not generally planar and images may be taken at an oblique angle. In such cases this approach may not provide sufficient accuracy and repeatability, because the camera axis is not perpendicular to the wound and the camera-to-wound distance may deviate significantly from the assumed value.
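For the simple planar case described above, the area computation reduces to a pixel count multiplied by a squared scale factor. This is a minimal sketch; the function name and units are illustrative.

```python
def wound_area_mm2(pixel_count, mm_per_pixel):
    """Planar wound area: the number of wound pixels multiplied by the
    squared per-pixel size.  Valid when the camera axis is normal to a
    roughly planar wound and the image scale is known."""
    return pixel_count * mm_per_pixel ** 2

# 5000 wound pixels at 0.2 mm/pixel give roughly 200 mm^2
area = wound_area_mm2(5000, 0.2)
```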
[0064] In a second embodiment an image may be captured in the same fashion except that the stripe need not be aligned with the optical axis of the camera. An image as shown in
[0065] In one embodiment laser 4 projects structured light in the form of laser cross hairs onto the image capture area. An image captured according to this embodiment is shown in
[0066] This approach has the advantage that it provides correction where an image is not taken normal to a wound. Determining the area within a two-dimensional outline rather than in three-dimensional space also reduces the computational load.
[0067] A wound depth measurement may also be derived as will be explained in connection with
[0068] Utilising this information, standard wound measurements may be made. The so-called “Kundin area” may be calculated by obtaining the maximum linear dimension (long axis) of the wound outline and the short axis (orthogonal to the long axis), and multiplying the product of these measurements by π/4. The so-called “Kundin volume” may be calculated from the product of the two diameters, the maximum depth and a factor of 0.327. The dimensions may be determined and the volume calculated by a local processor. Various other algorithms may be used to calculate wound volume as appropriate for the circumstances.
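The Kundin measurements described in the preceding paragraph can be expressed directly. This is a minimal sketch; the function names are illustrative, while the formulas are exactly as stated above.

```python
import math

def kundin_area(long_axis, short_axis):
    """Kundin area: the product of the long and short axes times pi/4."""
    return long_axis * short_axis * math.pi / 4

def kundin_volume(long_axis, short_axis, max_depth):
    """Kundin volume: the product of the two diameters, the maximum
    depth and the empirical factor 0.327."""
    return long_axis * short_axis * max_depth * 0.327
```

For a wound with a 4 cm long axis, 2 cm short axis and 1 cm maximum depth, this gives an area of about 6.28 cm² and a volume of about 2.6 cm³.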
[0069] Referring now to
[0070] The outline of the wound may be determined utilising image processing techniques. However, the results of such techniques may be variable depending upon image quality, available processing capacity and the optical characteristics of the wound. According to a preferred embodiment the outline is input by a user.
[0071] Apparatus for performing the method may take a variety of forms ranging from a stationary system (having a stationary camera or a handheld camera connected wirelessly or by a cable) to a fully portable unit. Portable units in the form of PDAs, cell phones, notebooks, ultramobile PCs etc. including an integrated or plug-in camera allow great flexibility, especially for medical services outside of hospitals. Referring now to
[0072] In one embodiment placing input device 23 near outline 24 and dragging it may drag the proximate portion of the outline as the input device 23 is dragged across the screen. This may be configured so that the effect of adjustment by the input device is proportional to the proximity of the input device to the outline. Thus, if the input device is placed proximate to the outline the portion proximate to the outline will be adjusted whereas if the input device is placed some distance from the outline a larger area of the outline will be adjusted as the input device is dragged.
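The proximity-weighted dragging described above can be sketched as follows. The linear falloff and the `radius` parameter are assumptions made for illustration; the patent specifies only that the effect of an adjustment is proportional to the input device's proximity to the outline.

```python
import math

def drag_outline(points, touch, delta, radius):
    """Move outline points by delta, weighted by proximity to the
    touch position: points at the stylus move fully, points beyond
    radius are unaffected (hypothetical linear falloff)."""
    out = []
    for (x, y) in points:
        d = math.hypot(x - touch[0], y - touch[1])
        w = max(0.0, 1.0 - d / radius)   # 1 at the stylus, 0 beyond radius
        out.append((x + w * delta[0], y + w * delta[1]))
    return out
```

Placing the stylus further from the outline would, with a larger radius, spread the adjustment over a larger portion of the outline, matching the behaviour described above.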
[0073] Utilising manual input of the outline avoids the need for complex image processing capabilities and allows a compact portable unit, such as a PDA, to be utilised. Further, this approach utilises human image processing capabilities to determine the outline where automated approaches may be less effective.
[0074] Once an image is captured it may be stored by the PDA in a patient record along with measurement information (wound area, wound depth, wound volume etc.). An image without the cross hairs may also be captured by the PDA deactivating laser 21. This may be desirable where an image of the wound only is required. Where previous information has been stored, comparative measurements may be made and an indication of improvement or deterioration may be provided. Where the PDA has wireless capabilities, images may be sent directly for storage in a central database or distributed to medical professionals for evaluation. This allows an expert to review information obtained in the field and provide medical direction whilst the health practitioner is visiting the patient. The historic record allows patient progress to be tracked and re-evaluated, if necessary.
[0075] Measurements of other wound information may also be made. The colour of the wound and the size of particular coloured regions may also be calculated. These measurements may require a colour reference target to be placed within the image capture area for accurate colour comparison to be made.
[0076] According to another embodiment a 3-D camera may be employed.
[0077] In other embodiments “time-of-flight” cameras may be substituted for camera 27. Time-of-flight cameras utilise modulated coherent light illumination and per-pixel correlation hardware.
[0078] Referring now to
[0080] The three-dimensional locations of elements of beams 37 and 38 may then be determined from the captured image. A three-dimensional model of the surface (grid 43 illustrates this) may be calculated using the three-dimensional coordinates of elements along lines 37 and 38. The model may be an inelastic surface draped between the three-dimensional coordinates of the structured light elements, or an elastic surface stretched between the three-dimensional coordinates, or a model of the anatomy, or simply a scaled planar projection. A model of the anatomy may be a model retrieved from a library of models, or simply a geometric shape approximating anatomy (a cylinder approximating a leg, for example).
[0081] In a first method the three-dimensional surface may be unwrapped to form a planar image in which all regions have the same scale (i.e., for a grid, the grid is unwrapped such that all cells of the image are the same size). The area within wound boundary 41 may then be calculated simply from the planar image.
[0082] Alternatively the area within wound boundary 41 may be calculated by scaling the areas within each region according to scale attributes associated with each region (e.g. for the grid example normalising the total area within each cell to be the same). The granularity can of course be adjusted depending upon the accuracy required.
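The per-region scaling described above can be sketched as follows, assuming the wound pixels have already been counted per grid cell and each cell carries its own scale attribute (the data layout is hypothetical):

```python
def scaled_area(cell_pixel_counts, cell_mm2_per_px):
    """Area within a wound boundary on a non-planar surface, summed
    cell by cell: each grid cell's wound-pixel count is multiplied by
    that cell's own mm^2-per-pixel scale, so foreshortened cells
    contribute their true surface area."""
    return sum(n * s for n, s in zip(cell_pixel_counts, cell_mm2_per_px))

# Two cells: 100 px at 0.5 mm^2/px plus 200 px at 0.25 mm^2/px -> 100 mm^2
total = scaled_area([100, 200], [0.5, 0.25])
```

Refining the grid (more, smaller cells) corresponds to the granularity adjustment mentioned above: finer cells track the surface curvature more closely at the cost of more computation.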
[0083] This approach could be extended so that a plurality of crossing lines are projected to achieve greater accuracy. The lines could have different optical characteristics (e.g. colour) to enable them to be distinguished. However, the two-line approach described above does have the advantage of mimicking some manual approaches currently employed, which involve tracing the wound outline onto a transparent sheet and then calculating the area.
[0085] Use of a positioning system allows automation of tasks and validation of actions. This may be achieved using the apparatus alone, or through communication with a central computer system and database. For example, a nurse may be using the apparatus to monitor wound healing for a patient. The nurse arrives at the patient's home and the position of the home is determined using the GPS system. The position may be used in determining an address. This may be used to ensure that the nurse is at the correct address, possibly by comparison with a schedule of patient visits.
[0086] In response to determination of an address, the system may automatically select a patient associated with that address from a patient database. Alternatively, for a new patient, the nurse enters patient information using the PDA and this information is automatically associated with the address determined using the GPS receiver. This avoids the necessity to enter a large amount of data using the PDA. Similarly, the position may be used directly without converting to an address, to select a patient associated with that position, or to associate a new patient with a position.
[0087] The positioning system may also be used in auditing user actions. For example, a nurse may enter patient information and this may be verified using the position data by checking it against a patient database. This also allows an employer to monitor staff actions, to ensure that a staff member has in fact visited a particular address or patient.
[0088] Data gathered using the GPS system may also be stored for future reference. For example, travel data may be gathered by monitoring position information over a period of time. This data may be used later in estimating travel times between sites and in establishing or optimizing travel schedules for workers.
[0090] In one embodiment the auxiliary sensors include a Doppler Ultrasound Probe. The management of some types of wound, such as vascular ulcers, requires measurement of blood-flow in the underlying tissue and Doppler Ultrasound is the method generally used to perform this measurement. Low-power Doppler Ultrasound Probes such as those used in foetal heart-beat monitors may be suitable. This would make it unnecessary for a patient to visit a clinic or hospital, or for a separate ultrasound machine to be transported.
[0091] Data gathered from the auxiliary sensors may be associated with a particular address, patient or image. Data may be displayed on the PDA's screen, and may be overlaid on the associated image. The combined information may enable more advanced wound analysis methods to be employed.
[0092] Use of auxiliary sensors allows many measurements to be more easily performed at the same time as an image is captured and by the same person. (In a medical setting, this person may also be performing wound treatment.) This is efficient and also allows data to be easily and accurately associated with a particular image or patient.
[0093] In any of the above embodiments the section containing the lasers and camera could be combined so that they can be housed in a unit detachable from the PDA, interfaced via an SDIO or Compact Flash (CF) slot, for example. This offers added convenience for the user, and enables the lasers and camera to be permanently mounted with respect to each other, for ease of calibration. Furthermore, the camera can be optimally focussed, and an illumination means, such as a white LED, may be used to give relatively constant background lighting.
[0094] In any of the above embodiments the section containing the camera and/or lasers could be movable with respect to the PDA (being interconnected by a cable or wirelessly). This allows independent manipulation of the camera to capture wounds in awkward locations whilst optimising viewing of the image to be captured.
[0095] In any of the embodiments described above, multiple images may be captured in rapid succession. This is particularly advantageous where structured light (e.g. a laser) is used. For example, two images may be captured: one with the laser on and one with the laser off. Subtracting one of these images from the other yields an image with just the laser lines (disregarding the inevitable noise). This facilitates the automated detection of the laser profiles. Other combinations of images may also be useful. For example, three images could be captured: one without illumination but with the laser on, one without illumination and with the laser off, and a third image with the illumination on and the laser off. The first two images could be used to detect the laser profile, while the third image is displayed to the user. The first image, showing the laser line with the illumination off, would have a higher contrast, so that the laser line would stand out more clearly. Capturing the images in rapid succession means that the motion of the camera between the images is negligible.
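The laser-on/laser-off subtraction described above can be sketched on plain nested lists of pixel intensities. The frame layout and threshold value are illustrative; a real implementation would operate on camera image buffers.

```python
def laser_profile(on_frame, off_frame, threshold=30):
    """Isolate the projected laser lines by subtracting a laser-off
    frame from a laser-on frame captured in rapid succession, then
    thresholding the difference to discard residual noise
    (threshold value is illustrative)."""
    return [[(a - b) > threshold for a, b in zip(row_on, row_off)]
            for row_on, row_off in zip(on_frame, off_frame)]

# Synthetic 3x4 frames: the laser line lies on row 1 of the on-frame.
on = [[5, 5, 5, 5], [200, 210, 205, 198], [5, 5, 5, 5]]
off = [[5, 5, 5, 5], [6, 4, 5, 5], [5, 5, 5, 5]]
mask = laser_profile(on, off)
```

Because the two frames are taken with negligible camera motion, the background intensities cancel and only the laser line survives the threshold.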
[0097] This centralised system allows appropriate categorising and storage of data for future use. For example, by mining historical data from the database it is possible to analyse the efficacy of a particular treatment or to compare different treatments. Statistical trends of conditions, treatments and outcomes can be monitored. This data can be used to suggest a particular treatment, based on a set of symptoms exhibited by a particular patient. Data can provide predictions for wound healing. Where actual healing differs from the prediction by more than a threshold, the system may issue an alert.
[0098] A healthcare provider can use the data to audit efficiency of its whole organisation, departments within the organisation or even individual workers. Historical data may be compared with historical worker schedules to determine whether workers are performing all tasks on their schedules. Efficiencies of different workers may be compared.
[0099] There are thus provided methods of measuring wounds that are simple, inexpensive, repeatable and may be performed remotely. The methods utilise human image processing capabilities to minimise the processing requirements. The methods do not require the placement of articles near the wound and allow historical comparison of a wound. The apparatus is portable, has relatively low processing requirements and enables records to be sent wirelessly for evaluation and storage.
[0100] While the present invention has been illustrated by the description of the embodiments thereof, and while the embodiments have been described in detail, it is not the intention of the Applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departure from the spirit or scope of the Applicant's general inventive concept.