ALIGNMENT OF ULTRASOUND IMAGE

20220304653 · 2022-09-29

    Abstract

    A method for alignment of an ultrasound image obtained by an ultrasound probe for obtaining images in relation to the eye (1) is provided. The method comprises: placing the ultrasound probe in a suitable location across a region of interest (ROI) expected to include the optic nerve; obtaining images of said ROI; processing the images to identify boundary features (4, 5, 6) representative of the boundaries of at least one of the optic nerve and the optic nerve sheath (2); using the identified boundary features to determine a principal direction (10) extending along the length of the optic nerve and the optic nerve sheath; identifying at least first and second points on the identified boundary features, the first and second points being at different locations along the principal direction; and rotating the image plane until the first and second points are aligned in the image, thereby determining a required orientation of the image plane of the probe for alignment of the ultrasound image with the principal direction.

    Claims

    1. A method for alignment of an ultrasound image obtained by an ultrasound probe for obtaining images in relation to the eye, the method comprising: placing the ultrasound probe in a suitable location for obtaining images of anatomical structures in the region of the eye and across a region of interest expected to include the optic nerve; obtaining images of said region of interest using the ultrasound probe; processing the images of said region of interest via computer implemented image processing techniques in order to identify boundary features in the images that are representative of the boundaries of at least one of the optic nerve and the optic nerve sheath; using the identified boundary features of the imaged structure to determine a principal direction extending along the length of the optic nerve and the optic nerve sheath; identifying at least first and second points on the identified boundary features, wherein the first and second points are at different locations along the principal direction; and rotating the image plane of the probe until the first and second points are aligned in the image, and thereby determining a required orientation of the image plane of the probe for alignment of the ultrasound image with the principal direction.

    2. A method as claimed in claim 1, wherein the probe includes an actuation mechanism for rotating the probe and the method comprises using the actuation mechanism to rotate the probe to align the first and second points; and/or wherein electronically controlled activations of the transducer array elements are used to steer the image plane to align the first and second points.

    3. A method as claimed in claim 1, wherein the probe is arranged to be handled manually and the step of rotating the probe is performed by the user manually rotating the probe to align the first and second points.

    4. A method as claimed in claim 1, 2 or 3, comprising obtaining images of the optic nerve sheath after alignment of the ultrasound image with the principal direction, and using the images for at least one of: automated assessment of the diameter of the optic nerve sheath; and determining a measure of stiffness of the optic nerve sheath as a marker for increased intracranial pressure.

    5. A method as claimed in any preceding claim, comprising obtaining images of the optic nerve sheath and the eye circle after alignment of the ultrasound image with the principal direction, and using the images for quantification of papilledema.

    6. A method as claimed in any preceding claim, wherein the two points are two cross-sections of the identified boundary features and the method comprises alignment of the centre of the ultrasound image with the principal direction by ensuring that the two cross-sections are aligned with each other via rotation of the image plane of the probe whilst also ensuring that they are centrally placed within the ultrasound image from the probe by translating the probe.

    7. A method as claimed in any preceding claim, wherein the two points are two cross-sections of the identified boundary features and the method comprises providing feedback to the user in relation to the degree of alignment of the cross-sections using a visual display with representations of the two cross-sections.

    8. A method as claimed in claim 7, wherein the visual display uses two markings, such as circles, to represent the two cross-sections and a further marking, such as a line, to represent the centre of the ultrasound image.

    9. A method as claimed in any preceding claim, wherein the two points on the identified boundary features are spaced apart by at least 2 mm along the principal direction.

    10. A method as claimed in any preceding claim, wherein the two points on the identified boundary features are at distances below the eye circle of about 3 mm and 6 mm.

    11. A method as claimed in any preceding claim, wherein the principal direction is a direction aligned with a centre-line of the boundary features identified by the use of computer implemented image processing techniques, wherein the computer implemented image processing techniques identify the directions of two boundaries extending along the sides of the optic nerve sheath by identifying points along the two sides of the optic nerve sheath and fitting two vectors to these points to define directions extending along the two sides of the optic nerve sheath; and wherein the two vectors are averaged in order to determine the principal direction.

    12. A method as claimed in any preceding claim, comprising determining diameter measurements for the optic nerve sheath at different distances along the principal direction in order to find a maximum diameter of the optic nerve sheath.

    13. A method as claimed in any preceding claim, comprising the use of machine learning techniques in relation to algorithms for automated processing of the images to find the boundaries of the optic nerve sheath.

    14. A method for non-invasively calculating a marker indicating possibly increased intracranial pressure of a patient, the method comprising obtaining images of the optic nerve sheath after alignment of an ultrasound image obtained with an ultrasound probe with the principal direction in accordance with the method of any preceding claim; quantifying pulsatile dynamics of the optic nerve sheath by monitoring dynamic properties of the optic nerve sheath, and/or of nearby regions, based on motion of the optic nerve sheath over a period of time; using the measured dynamic properties to determine a measure of stiffness of the optic nerve sheath; and using the quantified pulsatile dynamics, including the measure of stiffness, to obtain the marker indicating possibly increased intracranial pressure by associating increased stiffness with increased intracranial pressure.

    15. A method for non-invasively calculating a marker indicating possibly increased intracranial pressure of a patient, the method comprising obtaining images of the optic nerve sheath after alignment of an ultrasound image obtained with an ultrasound probe with the principal direction in accordance with the method of any of claims 1 to 13; detecting cardiac pulse related motion of the optic nerve sheath by monitoring displacements on each side of the optic nerve sheath; using a difference between the displacements on each side of the optic nerve sheath to determine a measure of stiffness of the optic nerve sheath; and using the detected displacements and the measure of stiffness to obtain the marker indicating possibly increased intracranial pressure by associating increased stiffness with increased intracranial pressure.

    16. A method as claimed in claim 14 or 15, comprising detecting displacements as they vary with time at two locations around the optic nerve sheath or in the region surrounding the optic nerve sheath, wherein determining the measure of stiffness of the optic nerve sheath includes obtaining a parameter of deformability (Δ) based on the displacements.

    17. A method as claimed in claim 16, wherein the parameter of deformability (Δ) is calculated according to the equation: Δ = |d_A − d_B| / (d_A + d_B), wherein d_A and d_B represent the displacements at the two locations.

    18. A method for automated quantification of papilledema, the method comprising obtaining images of the eye circle and the optic nerve sheath after alignment of an ultrasound image obtained with an ultrasound probe with the principal direction in accordance with the method of any of claims 1 to 13; processing of the images to identify the curve of the eye circle; and determining a measurement of papilledema with reference to the shape and size of formations at the back of the eye.

    19. A method as claimed in claim 18, comprising: using computer implemented image processing techniques to identify the curve of the eye circle; based on the intersection of the principal direction and the curve of the eye circle, identifying an area of the eye circle relevant to papilledema; and, using an image processing algorithm, finding one or more parameter(s) relating to the shape and/or size of a formation at the back of the eye in the area of the eye circle known to exhibit papilledema to thereby quantify the papilledema.

    20. A computer programme product comprising instructions that, when executed, will configure a computer system including an ultrasound imaging device to carry out the method of any preceding claim.

    21. A computer programme product as claimed in claim 20, wherein the instructions will configure the computer system to: use an ultrasound probe of the ultrasound imaging device to obtain images of anatomical structures in the region of the eye and across a region of interest expected to include the optic nerve; process the images of said region of interest via computer implemented image processing techniques in order to identify boundary features in the images that are representative of the boundaries of at least one of the optic nerve and the optic nerve sheath; use the identified boundary features of the imaged structure to determine a principal direction extending along the length of the optic nerve and the optic nerve sheath; identify at least first and second points on the identified boundary features, wherein the first and second points are at different locations along the principal direction; and guide the user to move the probe or control the probe to move the image plane of the probe in order to rotate the image plane of the probe until the first and second points are aligned in the image, and thereby guide the image plane of the probe to a required orientation for alignment of the ultrasound image with the principal direction.

    22. A computer programme product as claimed in claim 20 or 21, wherein the instructions will configure the computer system to: use an ultrasound probe of the ultrasound imaging device to obtain images of a region including the eye circle and optic nerve; using computer implemented image processing techniques, identify a principal direction of the optic nerve and the curve of the eye circle; based on the intersection of the principal direction and the curve of the eye circle, identify an area of the eye circle relevant to papilledema; and using an image processing algorithm, find one or more parameter(s) relating to the shape and/or size of a formation at the back of the eye in the area of the eye circle known to exhibit papilledema to thereby quantify the papilledema.

    23. An apparatus for alignment of an ultrasound image obtained with an ultrasound probe for obtaining images in relation to the eye, the apparatus comprising: an ultrasound imaging device including the ultrasound probe; and a computer system configured to carry out the method of any of claims 1 to 19.

    24. An apparatus as claimed in claim 23, wherein the computer system is configured to: use the ultrasound probe of the ultrasound imaging device to obtain images of anatomical structures in the region of the eye and across a region of interest expected to include the optic nerve; process the images of said region of interest via computer implemented image processing techniques in order to identify boundary features in the images that are representative of the boundaries of at least one of the optic nerve and the optic nerve sheath; use the identified boundary features of the imaged structure to determine a principal direction extending along the length of the optic nerve and the optic nerve sheath; identify at least first and second points of the identified boundary features, wherein the first and second points are at different locations along the principal direction; and guide the user to move the probe or control the probe to move the image plane of the probe in order to rotate the image plane of the probe until the first and second points are aligned in the image, and thereby guide the probe to a required orientation for alignment of the ultrasound image with the principal direction.

    25. An apparatus as claimed in claim 23 or 24, wherein the computer system is configured to: using computer implemented image processing techniques, identify a principal direction of the optic nerve and the curve of the eye circle; based on the intersection of the principal direction and the curve of the eye circle, identify an area of the eye circle relevant to papilledema; and using an image processing algorithm, find one or more parameter(s) relating to the shape and/or size of a formation at the back of the eye in the area of the eye circle known to exhibit papilledema to thereby quantify the papilledema.

    Description

    [0071] Certain embodiments will now be described by way of example only and with reference to the accompanying drawings, in which:

    [0072] FIG. 1 illustrates a schematic of an ultrasound image that is segmented into background, eye and optic nerve sheath;

    [0073] FIG. 2 shows an example of coordinates along the side of the optic nerve sheath;

    [0074] FIG. 3 shows the principal eigenvectors of each side of the optic nerve sheath, along with the principal direction calculated as the average of the two eigenvectors;

    [0075] FIG. 4 shows points along the eye circle and a circle fitted to those points;

    [0076] FIG. 5 is an example screenshot of a graphical user interface for guiding manual alignment of an ultrasound probe;

    [0077] FIG. 6 shows the graphical user interface with alignment of the probe with the principal direction;

    [0078] FIG. 7 is an ultrasound image of an eye including papilledema with markings overlaid showing steps used in quantification of the papilledema; and

    [0079] FIG. 8 is a schematic diagram of a segmented ultrasound image similar to that of FIGS. 1 to 4, with lines added indicating a proposed method for quantification of the papilledema similar to FIG. 7.

    [0080] The method and apparatus described herein may be used to guide a user to acquire an optimal view of an optic nerve sheath (ONS) by correctly aligning an ultrasound image obtained with an ultrasound probe with a principal direction of the ONS. This may be desired for example so that the intracranial pressure (ICP) of a patient can be measured non-invasively using the ultrasound image. Preferred aspects of an optimal image may include that the ONS is located in the horizontal centre of the image, and/or the ONS centreline (or mid-axis) is aligned with the vertical axis of the image, and/or the image plane cuts through the centre of the ONS.

    [0081] The proposed method proceeds as follows. An ultrasound probe is placed in a suitable location for obtaining images of anatomical structures in the region of the eye and across a volume/Region of Interest (ROI) expected to include the optic nerve and/or the optic nerve sheath. Images of the volume/ROI are obtained using the ultrasound probe. For every incoming image frame from the ultrasound probe the following processing steps are performed, as illustrated in FIGS. 1 to 3. These processing steps are preferably performed by computer implemented image processing techniques.

    [0082] FIG. 1 illustrates the first step of the proposed method wherein the image is segmented into three classes: eye 1, optic nerve and optic nerve sheath (ONS) 2, and background 3. It will be understood that more or fewer classes may be required depending on the structures present in the image. The segmentation may be performed using a neural network, for example using a deep convolutional neural network. Segmentation of the image is beneficial as it enables boundary features 4, 5, 6 in the image to be identified. In the schematic of FIG. 1, boundary features in the image that are representative of the ONS 2 and the eye 1 are identified. In general, it is preferred to identify boundary features representative of at least one of the ONS 2 and the optic nerve. Machine learning techniques may be used in relation to algorithms for automated processing of the images to find the boundary features. In an example the boundary between the eye 1 and the background 3 is represented by an eye circle 4, i.e. a circular arc around the back of the eye 1 when the ball of the eye 1 is seen in two dimensions, and the boundary between the ONS 2 and the background 3 is represented by first and second sides 5, 6.
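The boundary identification on the segmented image can be illustrated with a minimal sketch. The class labels, the tiny mask, and the restriction to horizontal neighbour transitions (sufficient for the near-vertical sides 5, 6 of the ONS) are illustrative assumptions; a real implementation would operate on the label map produced by the segmentation network.

```python
import numpy as np

# Hypothetical class labels for the segmented image (not specified in the text)
BACKGROUND, EYE, ONS = 0, 1, 2

def boundary_points(mask, cls_a, cls_b):
    """Return (row, col) coordinates of pixels where a pixel of class cls_a
    has a horizontal neighbour of class cls_b, i.e. points lying on the
    boundary between the two segmented regions."""
    left, right = mask[:, :-1], mask[:, 1:]
    hits = ((left == cls_a) & (right == cls_b)) | ((left == cls_b) & (right == cls_a))
    rows, cols = np.nonzero(hits)
    return np.stack([rows, cols], axis=1)

# Tiny synthetic mask: background | ONS | background columns
mask = np.array([
    [0, 0, 2, 2, 0, 0],
    [0, 0, 2, 2, 0, 0],
    [0, 0, 2, 2, 0, 0],
    [0, 0, 2, 2, 0, 0],
])
pts = boundary_points(mask, ONS, BACKGROUND)
print(pts)  # points on the two sides of the ONS (columns 1 and 3)
```

The resulting coordinate sets, split by side, would feed the eigenvector fit described in the next step.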

    [0083] The boundary features 4, 5, 6 are then used to determine a principal direction 10 extending along the length of the optic nerve and/or the ONS 2, as shown in FIGS. 2 and 3. A first set of points 7 along the extent of the first and second sides 5, 6 are identified and coordinates of the first set of points 7 are calculated. The first set of points 7 may be identified by reference to changes in the ultrasound image, for example changes of intensity. A first principal eigenvector 8 is fitted to the points in the first set of points 7 which lie along the first side 5, and similarly a second principal eigenvector 9 is fitted to the points in the first set of points 7 which lie along the second side 6, as illustrated in FIG. 3. The first and second principal eigenvectors 8, 9 define directions extending along the first and second sides 5, 6 of the ONS 2 respectively. In order to calculate the principal eigenvectors 8, 9, at least two points on each of the first side 5 and the second side 6 are identified. A centre-line 10 of the ONS 2 is calculated from an average of the first and second principal eigenvectors 8, 9. The centre-line 10 defines the principal direction 10 of the ONS 2 and extends along the length of the ONS 2 and/or the optic nerve. Thus, the principal direction 10 with which the ultrasound image is desired to be aligned is identified.
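The eigenvector fit and averaging described above can be sketched as follows. The coordinates are synthetic and the consistent-orientation rule is an implementation choice; a real implementation would use the first set of points 7 extracted from the image.

```python
import numpy as np

def principal_direction(points):
    """Unit eigenvector of greatest variance of a 2-D point cloud,
    obtained by PCA on the covariance matrix."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(centred.T @ centred)
    v = eigvecs[:, np.argmax(eigvals)]
    return v if v[1] >= 0 else -v   # orient consistently (downward in the image)

def centre_line_direction(first_side_pts, second_side_pts):
    """Principal direction 10: the renormalised average of the two
    principal eigenvectors 8, 9 fitted to the sides of the ONS."""
    v = principal_direction(first_side_pts) + principal_direction(second_side_pts)
    return v / np.linalg.norm(v)

# Two slanted, parallel sides of the ONS (synthetic coordinates)
side5 = [(0, 0), (1, 2), (2, 4), (3, 6)]
side6 = [(2, 0), (3, 2), (4, 4), (5, 6)]
d = centre_line_direction(side5, side6)
print(d)  # unit vector along (1, 2), i.e. (1, 2)/sqrt(5)
```

Because at least two points per side are needed to fit a direction, this matches the minimum stated in the text.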

    [0084] The principal direction 10 may be used to determine whether the ONS 2 is located in the horizontal centre of the ultrasound image and/or whether the ONS 2 is vertically aligned with the image, in accordance with preferred aspects of an optimal image as described above.

    [0085] The diameter of the ONS 2 may then be measured by the following steps as illustrated in FIG. 4. It may be desired to find the diameter in order to determine if the ultrasound image plane is cutting through the true centre of the ONS 2, and/or to perform processing techniques to determine conditions of the eye such as stiffness of the ONS 2 as explained previously. It is preferred that the following steps are carried out automatically, e.g. by computer implemented image processing techniques.

    [0086] A second set of points 11 along the eye circle 4 are identified and coordinates of the second set of points 11 are calculated. A fitted circle 12 is fitted to the second set of points 11 using any suitable fitting method, such as via a least squares method, for example using the Kasa circle fitting algorithm. The fitted circle 12 defines the curve of the eye circle 4.
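The Kasa fit mentioned above linearises the circle equation x² + y² = 2ax + 2by + c so that an ordinary least-squares solve yields the centre (a, b) and radius √(c + a² + b²). A minimal sketch, with synthetic points standing in for the second set of points 11:

```python
import numpy as np

def kasa_circle_fit(points):
    """Kasa least-squares circle fit: solve the linearised circle equation
    x^2 + y^2 = 2ax + 2by + c for (a, b, c); the centre is (a, b) and the
    radius is sqrt(c + a^2 + b^2)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)

# Synthetic points on a half-circle of radius 5 centred at (1, 2),
# mimicking the visible arc of the eye circle 4
theta = np.linspace(0, np.pi, 20)
pts11 = np.stack([1 + 5 * np.cos(theta), 2 + 5 * np.sin(theta)], axis=1)
centre, r = kasa_circle_fit(pts11)
print(centre, r)  # recovers centre (1, 2) and radius 5
```

The fitted circle 12 then serves as the reference curve from which the depths of the area A are measured.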

    [0087] A normal 13 extends from the centre-line 10 in a perpendicular direction to the second side 6, although the normal could alternatively extend to the first side 5. The normal 13 is positioned within an area A, wherein the boundaries of area A may be defined by the first and second principal eigenvectors 8, 9, a top side 14, and a bottom side 15. The top side 14 may be positioned at a first predetermined depth below the fitted circle 12, and the bottom side 15 may be positioned at a second predetermined depth below the fitted circle 12. The first predetermined depth may be smaller than the second predetermined depth, for example the first predetermined depth may be 3 mm and the second predetermined depth may be 6 mm. The top side 14 and the bottom side 15 may extend in a direction perpendicular to the centre-line 10, i.e. parallel to the normal 13.

    [0088] The diameter of the ONS 2 is then measured as the distance from the first side 5 to the second side 6 along the normal 13 of the centre-line 10, within the area A.
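The diameter measurement along the normal 13 can be sketched geometrically by modelling each side as a fitted line (a point plus a unit direction from the eigenvector fit). The coordinates and units below are illustrative, not taken from the method.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersection point of the 2-D lines p1 + s*d1 and p2 + t*d2."""
    s, _ = np.linalg.solve(np.column_stack([d1, -np.asarray(d2, float)]), p2 - p1)
    return p1 + s * np.asarray(d1, dtype=float)

def ons_diameter(side5, side6, circle_point, centre_dir, depth):
    """Distance from the first side 5 to the second side 6 measured along
    the normal 13 of the centre-line 10, at the given depth below the eye
    circle. Each side is a (point, unit direction) pair."""
    centre_dir = np.asarray(centre_dir, dtype=float)
    foot = np.asarray(circle_point, dtype=float) + depth * centre_dir
    normal = np.array([-centre_dir[1], centre_dir[0]])   # perpendicular to centre-line
    pa = line_intersection(foot, normal, *side5)
    pb = line_intersection(foot, normal, *side6)
    return float(np.linalg.norm(pa - pb))

# Vertical sides 4 mm apart, centre-line straight down from the eye circle
side5 = (np.array([0.0, 0.0]), np.array([0.0, 1.0]))
side6 = (np.array([4.0, 0.0]), np.array([0.0, 1.0]))
dia = ons_diameter(side5, side6, (2.0, 0.0), (0.0, 1.0), depth=3.0)
print(dia)  # 4.0
```

Evaluating this at depths between the top side 14 and the bottom side 15 (e.g. 3 mm to 6 mm) restricts the measurement to the area A as described.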

    [0089] An observed maximum diameter of the ONS 2 is assumed to be the true diameter of the ONS 2. Therefore it may be desired to find the maximum diameter in order to ensure that the ultrasound image plane is cutting through the true centre of the ONS 2, in accordance with one of the features of an optimal image as described above. This may require an initial sweep of the ultrasound probe over the eye in order to observe the maximum diameter. The sweep may involve taking a plurality of ultrasound images through different image planes and measuring the diameter of the ONS 2 in each different image plane. The diameter may also be measured at different distances along the principal direction 10 in order to find the maximum diameter of the ONS 2. During the sweep if a measured diameter in a current image plane is larger than a previously determined maximum diameter, the maximum diameter may be replaced with the value of the current diameter.

    [0090] Once the maximum diameter has been determined, the current diameter can be compared with the maximum diameter to ascertain whether the image plane is cutting through the true centre of the ONS 2.

    [0091] It has now been ascertained whether the image plane of the ultrasound probe is aligned correctly such that the principal direction 10 of the ONS 2 is vertically and horizontally aligned with the image, and such that the image plane cuts through the true centre of the ONS 2. If the image plane of the ultrasound probe is not aligned correctly, it may be desired to adjust the alignment. In one example of the proposed method a graphical representation of the position of the ONS 2 in relation to the ultrasound image plane is used to guide a user to manually adjust the image plane of the probe by rotating the probe. The method may include identifying at least two points on the ONS boundary features 5, 6, such as two cross-sections of the ONS 2 and moving the probe to align those points thereby aligning the probe with the principal direction 10, as described in more detail below.

    [0092] In the illustrated example, first and second cross-sections of the identified boundary features are identified, wherein the first cross-section is located on the ONS 2 at a first predetermined depth below the eye circle 4 and the second cross-section is located on the ONS 2 at a second predetermined depth below the eye circle 4. For example the first predetermined depth may be 3 mm and the second predetermined depth may be 6 mm. In order to determine the relative alignment of the first and second cross-sections a graphical user interface or visual display can be used, which allows the user to receive feedback in relation to the degree of alignment of the cross-sections. The graphical user interface may also help guide the user to rotate the ultrasound probe into an aligned position such that an optimal image may be acquired.

    [0093] FIGS. 5 and 6 show an example graphical user interface 16 which may guide the user in this way. The interface 16 includes a guidance bar 17 and an ultrasound image 18. In the ultrasound image 18, the eye 1 and the ONS 2 are visible. The image 18 is overlaid with markings showing some of the boundary features identified by the steps illustrated in FIGS. 1 to 4. These markings may include the fitted circle 12, the centre-line 10, and the area A. The guidance bar 17 illustrates a projection of the ultrasound image plane. The guidance bar 17 may include guidance markings, for example a horizontal guidance line 19 and a vertical guidance line 20 which intersect at an intersection 21. The horizontal guidance line 19 represents the ultrasound image plane, and the vertical guidance line 20 is aligned with the centre of the ultrasound image. The guidance bar 17 may further include guidance markings, for example circles, which represent the identified cross-sections of the ONS 2. The markings may represent projections of the cross-sections on the ultrasound image plane. For example, a first circle 22 may represent a first cross-section along the principal direction 10 at a depth of 3 mm below the eye, such that the position of the first circle 22 along the horizontal guidance line 19 corresponds to the horizontal position of the first cross-section of the ONS 2 in the image. Similarly, a second circle 23 may represent a second cross-section along the principal direction 10 at a depth of 6 mm below the eye, such that the distance of the second circle 23 along the horizontal guidance line 19 corresponds to the horizontal position of the second cross-section. Although the Figures illustrate guidance lines and circular markings, alternative types of markings may be used to represent the relevant features.

    [0094] The first and second circles 22, 23 can be used to help the user visualise the position and orientation of the ONS 2 within the image plane. For example when the first and second circles 22, 23 are not overlapping, the ONS 2 is not vertically aligned in the image. When the circles 22, 23 are not centred on the vertical line 20, the ONS 2 is not horizontally centred in the image. FIG. 5 shows an example graphical user interface in this case, i.e. when the ONS 2 is not horizontally centred and not vertically aligned. As shown in the ultrasound image 18, the centre-line 10 of the ONS 2 is displaced to the left of the horizontal centre of the ultrasound image and is slanted at an angle to the vertical direction.
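The overlap test that the guidance bar 17 visualises can be sketched as follows. The pixel positions and the tolerance are illustrative assumptions, not values given in the text.

```python
def alignment_status(x_3mm, x_6mm, image_centre_x, tol=2.0):
    """Interpret the positions of the first and second circles 22, 23 along
    the horizontal guidance line 19. x_3mm and x_6mm are the horizontal
    positions (here in pixels) of the two ONS cross-sections; tol is an
    illustrative tolerance."""
    vertically_aligned = abs(x_3mm - x_6mm) <= tol               # circles overlap
    horizontally_centred = abs((x_3mm + x_6mm) / 2 - image_centre_x) <= tol
    return vertically_aligned, horizontally_centred

print(alignment_status(300, 340, 320))  # (False, True): ONS slanted but centred
print(alignment_status(320, 321, 320))  # (True, True): aligned and centred, as in FIG. 6
```

When both flags are true the centre-line of the ONS coincides with the vertical guidance line 20, corresponding to the optimal view.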

    [0095] Once the user has determined the relative position and/or orientation of the cross-sections, it may be desired to move the probe to correctly align the cross-sections. The probe is thus rotated and/or moved in translation until the cross-sections are aligned, i.e. until the first and second circles 22, 23 overlap completely or substantially, thereby determining a required orientation of the ultrasound probe for alignment with the principal direction.

    [0096] The rotating of the probe may occur manually as discussed above and/or may be done with an actuation mechanism. The alignment of the image relative to the defined cross-sections can also be done by electronic steering of the beams of the transducer array, wherein the activations of the elements of the transducer array or the timing of any signal fed to the transducer elements is electronically controlled. The alignment of the image may also involve rotating the image plane of the probe whilst the physical structure of the probe remains in one place.

    [0097] The first and second circles 22, 23 may also be used to ensure the principal direction is centrally aligned with the ultrasound image by using the vertical guidance line 20 on the guidance bar 17. The image plane of the ultrasound probe is moved until the first and second circles 22, 23 representing the two cross-sections are centred on the vertical guidance line 20, at which point the ONS 2 is centrally positioned within the image frame.

    [0098] FIG. 6 shows a graphical user interface when the ultrasound probe is correctly aligned with the principal direction 10 and the ONS 2 is horizontally centred. As shown on the guidance bar 17, the first and second circles 22, 23 are overlapping substantially and are positioned on the intersection 21. Correspondingly, as shown in ultrasound image 18, the ONS 2 is vertically aligned and horizontally centred in the image. This demonstrates an optimal view of the ONS 2.

    [0099] The method may further include obtaining images of the eye and/or the ONS 2 after alignment of the image plane of the probe with the principal direction 10. The images may be used for example for automated assessment of the diameter of the ONS 2 as described above, and/or for determining a measure of stiffness of the ONS 2 as a marker for increased intracranial pressure. The method may comprise automated processing of the images in order to determine if the patient's eye exhibits symptoms related to a particular condition of the eye or the optic nerve complex, in particular identification and/or quantification of papilledema. A proposed method for quantification of papilledema is described in more detail below with reference to FIGS. 7 and 8.

    [0100] As previously discussed, one known way that papilledema presents itself is as a formation, which is typically in the form of a “bump” or protrusion, located at the back of the eye extending forward into the eye. The formation may be visible on an ultrasound image of an eye, in particular in the region of the eye circle 4.

    [0101] According to the proposed method, the area of the eye circle 4 relevant to papilledema is identified based on the intersection between the principal direction 10 and the curve of the eye circle, wherein the curve is defined by the fitted circle 12 as previously described. FIG. 7 shows an ultrasound image of an eye 1 having a formation 24 which is characteristic of papilledema. Once papilledema is identified, it may be desired to perform quantification. One technique for quantifying papilledema is to find the maximum extent of the formation 24 into the eye. Other possibilities include determining the area or volume of the formation 24. By way of example, further details are given below with reference to finding the maximum extent of the formation 24. The maximum extent of the formation 24 may be defined as the maximum distance between the back of the eye, i.e. the fitted circle 12, and an inner edge of the formation 24.

    [0102] The ultrasound image in FIG. 7 is overlaid with markings which include the eye circle 4, centre-line 10 and area A as identified by the steps described previously. The markings further include a series of lines 25 that extend from the fitted circle 12 to an inner edge of the formation 24 in a direction radially inward toward the centre of the eye 1. The inner edge of the formation 24 is the boundary between the formation 24 and the eye 1. The lines 25 may have a predetermined length, for example 2 mm. The lines 25 may be drawn from every point that is along the fitted circle 12 within a pre-determined distance (shown by a dashed line 26 in FIG. 8) either side of the intersection between the principal direction 10 and the fitted circle 12. The pre-determined distance may be 3 mm for example. Along every line 25, image intensities are sampled and a change in image intensity is used to determine a coordinate of the inner edge of the formation 24. The inner edge coordinate may be estimated using an existing edge detection method such as the step edge model. Therefore by finding the inner edge coordinate for all lines 25 on the fitted circle 12, a set of coordinates that defines the shape and size of the formation 24 is collected. The maximum extent of the formation 24 along any of lines 25 is determined, and said maximum extent is used as the parameter enabling identification and/or quantification of the papilledema.
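The intensity-sampling step can be sketched as follows. The synthetic image and the maximum-gradient edge rule are simplifications made for illustration; the text leaves the exact step edge model open.

```python
import numpy as np

def edge_distance_along_line(image, start, direction, length_px):
    """Sample intensities along one of the lines 25, starting at a point on
    the fitted circle 12 and moving radially inward, and take the sample
    with the largest intensity step as the inner edge of the formation 24
    (a simple stand-in for a step edge model)."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    ts = np.arange(length_px)
    rows = np.clip(np.round(start[0] + ts * d[0]).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(start[1] + ts * d[1]).astype(int), 0, image.shape[1] - 1)
    profile = image[rows, cols].astype(float)
    return int(np.argmax(np.abs(np.diff(profile)))) + 1   # samples to the edge

# Synthetic image: a bright formation 4 pixels deep below the eye circle
img = np.full((10, 10), 50)
img[0:4, 3] = 200
print(edge_distance_along_line(img, (0, 3), (1, 0), 10))  # 4
```

Taking the maximum of this distance over all lines 25 yields the set's largest inner-edge coordinate, i.e. the maximum extent used to quantify the papilledema.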

    [0103] The parameter can be used directly to quantify the papilledema, and/or in combination with other parameters, and/or the parameters may be compared to a threshold value to determine if the patient is deemed to be at-risk or healthy in relation to papilledema.

    [0104] As described previously, there are further methods that may be carried out using ultrasound images after alignment of the ultrasound image with the principal direction 10. In one example, a measure of stiffness of the optic nerve sheath can be determined as a marker for increased intracranial pressure, for example by monitoring dynamic properties of the optic nerve sheath, and/or of nearby regions, based on motion of the optic nerve sheath over a period of time. WO 2016/193168 describes an example of a method that may be used in relation to images obtained via the proposed alignment method. When being used to carry out measurements of dynamic properties, the method may include evaluating whether the operator has acquired an image sequence of sufficient quality, i.e. that the correct view of the ONS 2 is held over a period necessary to process the dynamic properties.