OPHTHALMOLOGICAL TREATMENT DEVICE FOR DETERMINING A ROTATION ANGLE OF AN EYE
20230196577 · 2023-06-22
Inventors
CPC classification
International classification
Abstract
An ophthalmological treatment device comprising a processor and a camera for determining a rotation of an eye of a person, the processor configured to: receive a reference image of the eye, the reference image having been recorded with the person in an upright position by a separate diagnostic device; record, using the camera, a current image of the eye, the current image being recorded with the person in a reclined position; and determine a rotation angle of the eye by comparing the reference image to the current image using a direct solver.
Claims
1. An ophthalmological treatment device comprising a processor and a camera for determining a rotation of an eye of a person, the processor configured to: receive a reference image of the eye, the reference image having been recorded with the person in an upright position by a separate diagnostic device; record, using the camera, a current image of the eye, the current image being recorded with the person in a reclined position; and determine a rotation angle of the eye by comparing the reference image to the current image using a direct solver.
2. The ophthalmological treatment device of claim 1, wherein the processor is further configured to control the ophthalmological treatment device using the rotation angle.
3. The ophthalmological treatment device of claim 1, wherein the processor is further configured to rotate a treatment pattern of a laser about the rotation angle, wherein the treatment pattern is configured for the eye of the person.
4. The ophthalmological treatment device of claim 1, wherein the rotation angle includes a cyclotorsion angle.
5. The ophthalmological treatment device of claim 1, wherein the direct solver is configured to determine the rotation angle using a pre-defined number of computational operations.
6. The ophthalmological treatment device of claim 1, wherein the processor is configured to determine the rotation angle of the eye by: identifying one or more non-local features of the reference image and one or more non-local features of the current image, and matching the one or more identified non-local features of the reference image to the one or more identified non-local features of the current image, respectively.
7. The ophthalmological treatment device of claim 1, wherein the direct solver applies a pre-determined sequence of signal processing filters to both the reference image and the current image.
8. The ophthalmological treatment device of claim 7, wherein the pre-determined sequence of signal processing filters comprises one or more of: a convolutional operator, an activation function, or a pooling function.
9. The ophthalmological treatment device of claim 7, wherein the pre-determined sequence of signal processing filters is part of a neural network.
10. The ophthalmological treatment device of claim 9, wherein the neural network is trained to determine the rotation angle using supervised learning and a training dataset, wherein the training dataset comprises a plurality of training reference images, a plurality of corresponding training current images, and a plurality of corresponding pre-defined rotation angles.
11. The ophthalmological treatment device of claim 9, comprising: a first neural network and a second neural network both having identical architecture and parameters, the first neural network configured to receive the reference image as an input and to generate a reference image output vector, and the second neural network configured to receive the current image as an input and to generate a current image output vector, wherein the processor is configured to determine the rotation angle using the reference image output vector, the current image output vector, and a distance metric.
12. The ophthalmological treatment device of claim 5, wherein the processor is configured to determine the rotation angle by generating a reference image output vector using the reference image and the pre-determined sequence of signal processing filters; generating a current image output vector using the current image and the pre-determined sequence of signal processing filters; and determining a distance between the reference image output vector and the current image output vector using a distance metric.
13. The ophthalmological treatment device of claim 1, wherein the processor is further configured to pre-process the images, wherein pre-processing comprises one or more of: detecting an edge between the iris and the pupil of the eye in the reference image and/or the current image; detecting scleral blood vessels of the eye in the reference image and/or the current image; detecting the retina of the eye in the reference image and/or the current image; identifying a covered zone in the reference image, wherein the covered zone is a part of the eye covered by an eyelid; unrolling the reference image and/or the current image using a polar transformation; rescaling the reference image and/or the current image according to a detected pupil dilation in the reference image and/or the current image; image correcting the reference image and/or the current image by matching an exposure, a contrast, and/or a color; or resizing the reference image and/or the current image such that the reference image and the current image have a matching size.
14. The ophthalmological treatment device of claim 1, wherein the processor is configured to: receive a color reference image and/or an infrared reference image; and record, using the camera, a color current image and/or an infrared current image.
15. The ophthalmological treatment device of claim 1, wherein the processor is further configured to transmit the current image to a second ophthalmological treatment device.
16. A method for determining a rotation of an eye of a person comprising a processor of an ophthalmological treatment device performing the steps of: receiving a reference image of the eye, the reference image having been recorded with the person in an upright position by a separate diagnostic device; recording, using a camera of the ophthalmological treatment device, a current image of the eye, the current image being recorded with the person in a reclined position; and determining a rotation angle of the eye by comparing the reference image to the current image, using a direct solver.
17. The method of claim 16, further comprising rotating a treatment pattern of a laser about the rotation angle, which treatment pattern is configured for the eye of the person.
18. The method of claim 16, wherein determining the rotation angle of the eye, using the direct solver, comprises applying a pre-determined sequence of signal processing filters to both the reference image and the current image.
19. The method of claim 18, wherein the pre-determined sequence of signal processing filters comprises one or more of: a convolutional operator, an activation function, or a pooling function.
20. The method of claim 18, wherein the pre-determined sequence of signal processing filters is part of a neural network.
21. The method of claim 20, wherein the neural network is trained to determine the rotation angle using supervised learning and a training dataset, wherein the training dataset comprises a plurality of training reference images, a plurality of corresponding training current images, and a plurality of corresponding pre-defined rotation angles, respectively.
22. The method of claim 20, comprising a first neural network and a second neural network both having an identical architecture and identical parameters, the first neural network configured to receive the reference image as an input and generating a reference image output vector, and the second neural network configured to receive the current image as an input and to generate a current image output vector, wherein determining the rotation angle comprises using the reference image output vector, the current image output vector, and a distance metric.
23. A computer program product comprising a non-transitory computer-readable medium having stored thereon computer program code for controlling a processor of an ophthalmological treatment device to: receive a reference image of an eye of a person, the reference image having been recorded with the person in an upright position by a separate diagnostic device; record, using a camera of the ophthalmological treatment device, a current image of the eye, the current image being recorded with the person in a reclined position; and determine a rotation angle of the eye by comparing the reference image to the current image using a direct solver.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0040] The disclosure described herein will be more fully understood from the detailed description given below and from the accompanying drawings, which should not be considered limiting to the disclosure described in the appended claims.
DETAILED DESCRIPTION
[0055] Reference will now be made in detail to certain embodiments, examples of which are illustrated in the accompanying drawings, in which some, but not all features are shown. Indeed, embodiments disclosed herein may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Whenever possible, like reference numbers will be used to refer to like components or parts.
[0057] As is schematically represented in
[0058] The laser source 16 is configured to generate a pulsed laser beam L. The laser source 16 comprises in particular a femtosecond laser for generating femtosecond laser pulses, which have pulse widths typically from 10 fs to 1000 fs (1 fs = 10⁻¹⁵ s). The laser source 16 is arranged in a separate housing or in a common housing with the focussing optics 51.
[0059] The scanner system 17 is configured to steer the pulsed laser beam L delivered by the laser source 16 by means of the focussing optics 51 in the eye tissue 211 onto treatment points F on a treatment pattern t (comprising a laser trajectory). In an embodiment, the scanner system 17 comprises a divergence modulator for modulating the focal depth, or the treatment height, in the projection direction along the projection axis p. The scanner system 17 comprises, for example, a galvanoscanner or a piezo-driven scanner. Depending on the embodiment, the scanner system 17 additionally comprises one or more deflecting mirrors, one or more resonant mirrors, or one or more oscillating mirrors, which are for example piezo-driven or MEMS (micro-electromechanical systems) based, or the scanner system 17 comprises an AOM (Acousto-Optical Modulator) scanner or an EOM (Electro-Optical Modulator) scanner.
[0060] As is schematically represented in
[0061] The focussing optics 51 are configured to focus the pulsed laser beam L, or its laser pulses, onto the treatment points F inside the eye tissue 211 for the pointwise tissue disruption. The focussing optics 51 comprise a lens system having one or more optical lenses. Depending on the embodiment, the focussing optics 51 comprise one or more movable lenses and/or a drive for moving the entire focussing optics 51 in order to set and adjust the focal depth, or the treatment height, in the projection direction along the projection axis p. In a further embodiment, a divergence modulator is provided in the beam path between the laser source 16 and the scanner system 17.
[0062] For the treatment and incision of incision surfaces C, C′ which have a lateral component in the x/y treatment plane normal to the projection direction which is comparatively larger than the depth component in the projection direction along the projection axis p, the scanner system 17 is configured to displace the treatment points F, onto which the laser pulses are focussed, with a higher scan speed on the treatment pattern t, t′ in relation to the focus adjustment speed of the focussing optics 51.
[0063] Although reference is made to the incision according to an incision surface C, C′, the treatment pattern t, t′ also relates to treatment of the eye by surface ablation, depending on the embodiment.
[0064] As is schematically represented in
[0065] In an embodiment, the patient interface 52, in particular the contact surface 53, has a flattened central section and the eye 21 is removably attached to the patient interface 52 by applanation, in which the normally curved surface of the cornea 211 is held in a flattened state against the contact surface 53 of the patient interface 52 by the suction element 54.
[0066] In an embodiment, the patient interface 52 does not have a contact surface 53 or a suction element 54 and the treatment takes place without fixing the eye 21. Specifically, the patient interface 52 and the eye 21 are separated by an air gap of several centimetres, for example.
[0067] As is schematically represented in
[0068] The communication interface 15 is further configured for data communication with one or more external devices. Preferably, the communication interface 15 comprises a network communications interface, for example an Ethernet interface, a WLAN interface, and/or a wireless radio network interface for wireless and/or wired data communication using one or more networks, comprising, for example, a local network such as a LAN (local area network), and/or the Internet.
[0069] The skilled person is aware that at least some of the steps and/or functions described herein as being performed on the processor 11 of the ophthalmological device 1 may be performed on one or more auxiliary processing devices connected to the processor 11 of the ophthalmological device 1 using the communication interface 15. The auxiliary processing devices can be co-located with the ophthalmological device 1 or located remotely, for example on a remote server computer.
[0070] The skilled person is also aware that at least some of the data associated with the program code (application data) or data associated with a particular patient (patient data) and described as being stored in the memory 14 of the ophthalmological device 1 may be stored on one or more auxiliary storage devices connected to the ophthalmological device 1 using the communication interface 15.
[0071] The ophthalmological device 1 optionally includes a user interface comprising, for example, one or more user input devices, such as a keyboard, and one or more output devices, such as a display. The user interface is configured to receive user inputs from an eye treatment professional, in particular based on, or in response to, information displayed to the eye treatment professional using the one or more output devices.
[0072] As is schematically represented in
[0073] The control module 12, more particularly the processor 11, determines a rotation angle θ of the eye 21 using a direct solver S, in particular a rotation angle θ in relation to the central axis m of the patient interface 52. The direct solver S is stored in the memory 14.
[0074] The rotation angle θ is an angle of rotation of the eye 21 about an axis. The axis is, for example, parallel to a central axis m of the patient interface 52.
[0075] As described in more detail with reference to
[0076] As described herein in more detail, the treatment pattern t is not rotationally symmetric for every eye, as some persons 2 have a degree of astigmatism. Therefore, it is important to account for any rotation of the eye 21, in particular by rotating the treatment pattern t according to the rotation angle θ.
[0078] The diagnostic device 3 is configured to record and store the reference image 31 and/or reference interferometric data. The reference image 31 and/or reference interferometric data is then provided to the ophthalmological treatment device 1. For example, the reference image 31 and/or reference interferometric data is transmitted to the ophthalmological treatment device 1 using a data communications network, for example the Internet. Alternatively, the reference image 31 and/or reference interferometric data is stored to a portable data carrier which is then connected to the ophthalmological treatment device 1.
[0083] Due to the rotation of the eye by the rotation angle θ, the control module 12 is configured to rotate the incision surface C about the rotation angle θ such that a rotated incision surface C′ is incised in the eye.
[0085] In an embodiment, in a preparatory step, the eye 21 of the person 2 is fixed using a patient interface 52 as shown in
[0086] In a step S1, the control module 12, in particular the processor 11, is configured to receive a reference image 31. For example, the processor 11 is configured to receive the reference image 31 from the memory 14 or from an auxiliary memory device via the communication interface 15. The reference image 31 is an image of the eye 21 of the person 2 taken prior to eye treatment, in particular by a diagnostic device 3 as illustrated in
[0087] In a step S2, the control module 12, in particular the processor 11, instructs the camera 12 to record a current image 121 of the eye 21. Depending on the embodiment, one or more current images 121 of the eye are recorded.
[0088] In a step S3, the processor 11 compares the current image 121 to the reference image 31 to determine a rotation angle θ of the eye 21 in the current image 121 with respect to the reference image 31. In particular, the processor 11 uses a direct solver S for comparing the images as shown in
[0089] By determining the rotation angle θ in a predictable time, the processor 11, in an embodiment, determines the rotation angle θ repeatedly using successive current images 121 recorded by the camera, in real time. This ensures that even if the person 2 shifts, rotates, or otherwise moves their head, the treatment pattern t is adjusted accordingly. Advantageously, this allows for treatment without the patient interface 52 being fixed to the surface of the eye 21.
[0090] Further, this may result in the processor 11 determining the rotation angle θ more quickly than iterative functions/algorithms would allow, and therefore results in a quicker overall treatment time, as the person 2 does not have to lie down for as long. A quicker treatment is safer because the person 2 has less opportunity to move.
[0091] Depending on the embodiment, the direct solver S is implemented as a software application, an algorithm, and/or a function.
[0092] In an embodiment, the processor 11 is configured to display, on a display of the user interface of the ophthalmological treatment device 1, the reference image 31 and/or the current image 121 and the rotation angle θ. The processor 11 is configured to receive, via the user interface, user input from an eye treatment professional relating to the rotation angle θ. In particular, the user input comprises an indication to alter the determined rotation angle θ.
[0093] Depending on the embodiment, the processor 11 is configured to display the reference image 31 and the current image 121 simultaneously next to each other, preferably using a polar representation (as “unrolled” images), as explained below in more detail.
[0094] In an embodiment, the processor 11 is configured to display the reference image 31 and the current image 121 next to each other, e.g., one above the other, rendering the reference image 31 and/or the current image 121 such that both are visible. In a preferred embodiment, a polar representation of the reference image 31 and a polar representation of the current image 121 are displayed. The polar representation maps a ring-shaped part of the images 31, 121, which relates to an area around the iris, to a rectangular image, preferably of identical size.
[0095] The polar representation is generated, for example, by identifying a center point in the reference image 31 and a center point in the current image 121. The center points preferably correspond to the center of the eye, in particular the pupil, in the respective images 31, 121. The polar representation “unrolls” the images 31, 121, preferably by mapping a radial distance from the center point to a y-coordinate of the polar representation and mapping an azimuthal angle about the center point to an x-coordinate. Preferably, the polar representation of the reference image 31 and/or the current image 121 is displaced according to the rotation angle θ (the rotation angle being mapped to a displacement along the x-axis in the polar representation). If the rotation angle θ determined by the processor 11 is accurate, the displaced polar representation of the reference image 31 and/or the displaced polar representation of the current image 121 will align such that, at least in part, features of the reference image 31 and features of the current image 121 line up, i.e., are present at the same horizontal positions.
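The polar “unrolling” described above can be sketched in a few lines of numpy. The ring radii, output size, and nearest-neighbour sampling below are illustrative assumptions, not the device's actual implementation:

```python
import numpy as np

def unroll_polar(image, center, r_inner, r_outer, width=360, height=64):
    """Map the ring between r_inner and r_outer around `center` to a
    rectangular image: azimuthal angle -> x-coordinate, radius -> y-coordinate."""
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)  # x-axis
    radii = np.linspace(r_inner, r_outer, height)                  # y-axis
    # Nearest-neighbour sampling of the source image on the polar grid.
    ys = (cy + radii[:, None] * np.sin(thetas[None, :])).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(thetas[None, :])).round().astype(int)
    ys = np.clip(ys, 0, image.shape[0] - 1)
    xs = np.clip(xs, 0, image.shape[1] - 1)
    return image[ys, xs]
```

In this representation a cyclotorsion by an angle θ becomes a horizontal shift of θ/360 × width columns, which is why displaced polar representations of the two images visibly line up when the determined rotation angle is accurate.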
[0096] Depending on the indication received as part of the user input, the rotation angle θ is manually updated to an updated rotation angle θ. The transformed reference image 31 and/or the transformed current image 121 are rendered according to the updated rotation angle θ. Thereby, the eye treatment professional is able to fine-tune the rotation angle θ determined by the processor 11 in an iterative and guided interaction. Once the rotation angle θ has been fine-tuned, the eye treatment professional can accept the updated rotation angle θ, which will be used for rotating the treatment pattern t to determine a rotated treatment pattern t′ as described below.
[0097] In an embodiment, the ophthalmological treatment device 1 is controlled using the rotation angle θ. In particular, the processor 11 is configured to rotate the treatment pattern t about the rotation angle θ, thereby resulting in a rotated treatment pattern t′. The ophthalmological treatment device 1 controls the laser source 16 according to the rotated treatment pattern t′ such that the pulsed laser beam L is directed onto one or more treatment points F. In an embodiment, the treatment pattern t, t′ comprises a laser trajectory. The laser trajectory includes, for example, one or more continuous laser paths and/or one or more discrete treatment points F. The treatment pattern t, t′ further includes, depending on the embodiment, one or more laser speeds, one or more laser spot sizes, and/or one or more laser powers.
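Rotating a treatment pattern about the rotation angle θ amounts to applying a 2-D rotation matrix to the x/y coordinates of the laser trajectory. A minimal sketch, in which the point format (N × 2 array) and the center of rotation are assumptions for illustration:

```python
import numpy as np

def rotate_pattern(points, theta_deg, center=(0.0, 0.0)):
    """Rotate treatment-pattern points (N x 2, in the x/y treatment plane)
    by theta_deg degrees about `center` using a 2-D rotation matrix."""
    t = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    c = np.asarray(center, dtype=float)
    return (np.asarray(points, dtype=float) - c) @ rot.T + c
```

Other trajectory attributes such as laser speeds, spot sizes, and powers are unaffected by the rotation; only the geometry is transformed.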
[0099] In an embodiment, the direct solver S, or a specific image pre-processing function, pre-processes the reference image 31 and/or the current image 121.
[0101] The neural network N is configured to receive both the reference image 31 and the current image 121 as inputs. The neural network N is further configured to output the rotation angle θ. The neural network N is configured to preprocess the inputs using a sequence of preprocessing steps. The preprocessing steps are designed to process the reference image 31 and the current image 121 such that particular characteristics of the images match. For example, the preprocessing steps comprise image transformations such as a transformation to polar coordinates and/or color adjustments such as histogram matching. The neural network N then contains convolutional layers, activation functions (e.g., ReLU), pooling operations, fully-connected layers, and/or skip connections. In particular, the neural network N comprises two final dense fully-connected layers configured to directly output the rotation angle θ.
[0102] In an embodiment, the neural network N includes a ResNet-34 architecture with two fully-connected layers at the end configured to directly output the rotation angle θ.
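The building blocks named above (convolutional operator, activation function, pooling function) form a pre-determined sequence of signal processing filters whose operation count depends only on the image size, not on the image content, which is what makes the direct solver's runtime predictable. A minimal numpy illustration of one such filter stage, with an arbitrary example kernel rather than trained parameters:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (a convolutional operator)."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Activation function."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Pooling function: maximum over non-overlapping size x size blocks."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def filter_stage(img, kernel):
    """One fixed stage of the pre-determined filter sequence:
    convolution -> activation -> pooling."""
    return max_pool(relu(conv2d(img, kernel)))
```

A full network chains many such stages with learned kernels and ends in dense layers; the sketch only demonstrates the fixed, pre-defined number of computational operations per stage.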
[0103] The neural network N is configured to determine the rotation angle θ by identifying non-local features in both the reference image 31 and the current image 121. Non-local features identified in both images 31, 121 are matched and a distance between them is determined. This distance is then used to determine the rotation angle θ.
[0104] The neural network N is trained to output the rotation angle θ, for example, in the manner shown in
[0105] In a preferred embodiment, the neural network N is executed by the processor 11 on a GPU and/or a TPU for faster execution.
[0107] The Siamese neural networks N1, N2 start with a chain of preprocessing steps as described above, which may include image transformations such as a transformation to polar coordinates and color adjustments such as histogram matching. The neural network architecture for the neural networks N1, N2 then contains convolutional layers, activation functions (ReLU), pooling operations, fully connected layers, and/or skip connections. One or two downstream dense and fully-connected layers, preferably connected directly to the output, perform a low-dimensional embedding (preferably n<100) to generate the reference image output vector V1 and the current image output vector V2.
[0108] The final dense and fully-connected layers are trained during the training phase such that the distance obtained using the distance metric is minimized for input image pairs where no rotation is present (between the images of the image pair) and maximized for input image pairs having a large rotation (between the images of the image pair). In a preferred embodiment, the neural networks N1, N2 include a ResNet-34 architecture with two downstream fully-connected layers, preferably directly upstream from the output. The reference image output vector V1 and the current image output vector V2 are then used to determine the rotation angle θ using the distance metric as described above.
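The Siamese arrangement can be sketched compactly: both branches share the same parameters, each maps an image to a low-dimensional output vector (n < 100), and the rotation angle is derived from a distance metric between the two vectors. The single linear embedding layer and random weights below are stand-ins for the trained convolutional branches:

```python
import numpy as np

rng = np.random.default_rng(0)
# Shared parameters: both branches use the SAME weights
# (identical architecture and identical parameters).
W = rng.normal(size=(8, 64))  # flattened 64-pixel input -> 8-dim embedding

def embed(image_vec):
    """One Siamese branch: project a flattened image to its output vector."""
    return W @ image_vec

def embedding_distance(ref_vec, cur_vec):
    """Distance metric (here Euclidean) between the reference image output
    vector V1 and the current image output vector V2."""
    return float(np.linalg.norm(embed(ref_vec) - embed(cur_vec)))
```

After training with a contrastive objective, this distance is small for image pairs with no relative rotation and grows with the rotation, so it can be mapped to the rotation angle θ.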
[0110] The neural network N is initialized as an untrained neural network N of a particular architecture with random parameters, e.g. random weights and biases. The untrained neural network is trained using a training dataset comprising a large number of training reference images with associated training current images and training rotation angles, preferably at least 300 and more preferably on the order of 1000. It is important that the training dataset comprises a wide variety of lighting conditions as well as different eye shapes and iris colors, to avoid biasing the detector towards or against different ethnic groups. The training rotation angles are obtained from image pairs for which the horizontal reference axis of the eye is marked before the patient lies down in a supine position. The training dataset is then used to train the untrained neural network iteratively using supervised learning to generate the trained neural network N. In particular, the training dataset is segregated into a training subset, a test subset, and a validation subset. Data augmentation using rotation and/or mirroring of the input image pairs is used to enlarge the training set.
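The dataset handling described above can be sketched as follows; the split fractions and the restriction to quarter-turn rotations are illustrative assumptions:

```python
import numpy as np

def split_dataset(n_samples, fractions=(0.7, 0.15, 0.15), seed=0):
    """Segregate sample indices into training, test, and validation subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(fractions[0] * n_samples)
    n_test = int(fractions[1] * n_samples)
    return (idx[:n_train],
            idx[n_train:n_train + n_test],
            idx[n_train + n_test:])

def augment_pair(ref_img, cur_img, angle_deg, k_quarter_turns, mirror):
    """Enlarge the training set: rotating BOTH images of a pair by the same
    quarter turn leaves the relative rotation angle unchanged, while
    mirroring both images flips its sign."""
    ref = np.rot90(ref_img, k_quarter_turns)
    cur = np.rot90(cur_img, k_quarter_turns)
    if mirror:
        ref, cur = np.fliplr(ref), np.fliplr(cur)
        angle_deg = -angle_deg
    return ref, cur, angle_deg
```

Because each augmented pair keeps a known ground-truth angle, augmentation multiplies the supervised training data without additional labeling effort.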
[0111] The neural network N can be successfully trained using, for example, the Adam optimizer with a learning rate of 3×10⁻⁴. Training is helped significantly by injecting readily available pre-trained weights for a ResNet-34 architecture trained on the ImageNet database as a starting point.
[0112] The Siamese neural networks N1, N2 can be successfully trained using image pairs and a contrastive loss function.
[0113] Training of the neural networks N, N1, N2 takes place prior to treatment and is necessary only once. The trained neural networks N, N1, N2 are then stored in the memory 14 of the ophthalmological treatment device 1.
[0114] The above-described embodiments of the disclosure are exemplary and the person skilled in the art knows that at least some of the components and/or steps described in the embodiments above may be rearranged, omitted, or introduced into other embodiments without deviating from the scope of the present disclosure.