RETINAL IMAGING SYSTEM
20230014952 · 2023-01-19
Inventors
CPC classification
A61B90/03
HUMAN NECESSITIES
A61B3/0075
HUMAN NECESSITIES
A61B3/12
HUMAN NECESSITIES
A61B3/14
HUMAN NECESSITIES
International classification
A61B3/12
HUMAN NECESSITIES
A61B3/00
HUMAN NECESSITIES
A61B3/14
HUMAN NECESSITIES
Abstract
A retinal imaging system is provided. The system comprises: a fundus camera having a focusing mechanism; an imaging module configured for imaging user's face and eyes and providing image data indicative of a relative orientation between an optical axis of the fundus camera and a line of sight of user's eye at user's eye target position; a position and alignment system configured and operable to utilize the image data indicative of said relative orientation for positioning the fundus camera at an operative position such that the optical axis substantially coincides with the line of sight of user's eye, to enable focusing the fundus camera on the retina; a sensing system comprising one or more sensors, configured and operable for monitoring a user's face position with respect to a predetermined registration position and generating corresponding sensing data; and a safety controller configured and operable to be responsive to the sensing data, and upon identifying that the user's face position with respect to the predetermined registration position corresponds to a predetermined risk condition, generating a control signal to the position and alignment system to halt movements of the fundus camera.
Claims
1. A retinal imaging system comprising: a fundus camera having a focusing mechanism; an imaging module configured for imaging user's face and eyes and providing image data indicative of a relative orientation between an optical axis of the fundus camera and a line of sight of user's eye at user's eye target position; a position and alignment system configured and operable to utilize the image data indicative of said relative orientation for positioning the fundus camera at an operative position such that the optical axis substantially coincides with the line of sight of user's eye, to enable focusing the fundus camera on the retina; a sensing system comprising one or more sensors, configured and operable for monitoring a user's face position with respect to a predetermined registration position and generating corresponding sensing data; and a safety controller configured and operable to be responsive to the sensing data, and upon identifying that the user's face position with respect to the predetermined registration position corresponds to a predetermined risk condition, generating a control signal to the position and alignment system to halt movements of the fundus camera.
2. The system of claim 1, comprising a control system which comprises: a position controller configured and operable to be responsive to the image data and the sensing data to generate position and alignment data to said position and alignment system to perform controllable movements of the fundus camera to bring the fundus camera to the operative position; and a movement controller configured and operable to be responsive to the sensing data and to the control signal from the safety controller to operate the position and alignment system to halt the movements of the fundus camera.
3. The system of claim 1 or 2, wherein the safety controller is configured and operable to analyze the sensing data from one or more sensors of the sensing system indicative of a distance between the user's face and the fundus camera to enable generation of said control signal upon identifying a change in said distance corresponding to the risk condition.
4. The system of claim 3, wherein said one or more sensors providing the distance data comprise at least one ultrasound sensor.
5. The system of claim 1, wherein said position and alignment system comprises: a first driving mechanism operable in accordance with the alignment data for moving the fundus camera to a vertical aligned position of the optical axis corresponding to a vertical alignment with user's pupil; a second driving mechanism operable in accordance with the alignment data for moving the fundus camera to a lateral aligned position of the optical axis corresponding to substantial coincidence of the optical axis with the line of sight; and a third driving mechanism operable in accordance with the sensing data and a focal data of the fundus camera for moving the fundus camera along the optical axis to position a focal plane of the focusing mechanism at the retina of the user's eye.
6. The system of claim 5, wherein the position and alignment system further comprises a rotation mechanism for rotating the fundus camera with respect to at least one axis.
7. The system of claim 1, comprising a registration assembly for registering a position of user's face, said registration assembly comprising a face cradle for fixation of user's face at the registered position during imaging.
8. The system of claim 7, wherein said sensing system comprises one or more sensors on said face cradle for monitoring a degree of contact of the user's face to the face cradle.
9. The system of claim 8, wherein said one or more sensors on said face cradle include at least one of the following: at least one pressure sensor, or at least one IR sensor.
10. The system of claim 9, wherein said one or more sensors on said face cradle include at least one pressure sensor comprising at least three sensing elements located in three spaced-apart locations to monitor a degree of contact at respective at least three contact points with the face cradle.
11. The system of claim 1, wherein said target position corresponds to a predetermined orientation of the user eye's line of sight with respect to at least one predetermined fixation target exposed to the user.
12. The system of claim 11, wherein said target position corresponds to intersection of the user eye's line of sight with the predetermined target presented by the fundus camera.
13. The system according to claim 1, further comprising at least one of the following: a calibration mechanism configured and operable to perform self-calibration of the system, said calibration mechanism comprising at least one imager, one or more calibration targets located in a field of view of said at least one imager, and a calibration controller configured and operable to receive and analyze image data from said at least one imager and determine a relative position of an optical head of the fundus camera with respect to a region of interest; and an illumination system configured and operable to provide illumination within a region of interest where the user's face is positioned during imaging by the fundus camera.
14. The system according to claim 13, comprising said calibration mechanism, wherein said at least one calibration target includes at least one of the following: a two-dimensional element, a color pattern, and a QR code.
15. The system of claim 1, wherein the imaging module is characterized by at least one of the following: the imaging module comprises at least one imager; and the imaging module is configured and operable to image the user's eye using IR illumination to detect the eye pupil.
16. The system of claim 15, wherein the imaging module comprises the at least one imager and is characterized by at least one of the following: (a) said at least one imager is configured as a 3D imager; and (b) the imaging module comprises two imagers with intersecting fields of view.
17. (canceled)
18. The system of claim 1, comprising a user interface utility configured and operable to provide position and fixation target instructions to the user.
19. The system of claim 18, characterized by at least one of the following: (i) said position and fixation target instructions correspond to registration of, respectively, the user's face position and orientation of the eye's line of sight and (ii) said position and fixation target instructions comprise at least one of audio and visual instructions.
20. (canceled)
21. The system according to claim 7, characterized by at least one of the following: the imaging module is further configured and operable to provide image data indicative of one or more parameters of the user, the system further comprising a face cradle position controller configured and operable to be responsive to the image data indicative of one or more parameters of the user and to generate operational data to a movement mechanism of the face cradle to automatically adjust the position of the face cradle with respect to the fundus camera based on said one or more parameters of the user; the registration assembly is configured and operable for registering the position of user's face with respect to the fundus camera, the registration assembly comprising a support platform carrying the face cradle defining a face support surface for supporting the user's face at the registered position during imaging, the face support surface being tilted with respect to a vertical plane such that user's eyes look generally forward and downwards towards the fundus camera.
22. The system according to claim 7, wherein the registration assembly is configured and operable for registering the position of user's face with respect to the fundus camera, the registration assembly comprising a support platform carrying the face cradle defining a face support surface for supporting the user's face at the registered position during imaging, the face support surface being tilted with respect to a vertical plane such that user's eyes look generally forward and downwards towards the fundus camera, the system being further characterized by at least one of the following: (1) the fundus camera and the face cradle are mounted on the support platform; and (2) the face cradle comprises a face contact frame projecting from said face support surface.
23. (canceled)
24. (canceled)
25. The system of claim 22, wherein the face contact frame is characterized by at least one of the following: the face contact frame is made from an elastic and flexible material composition; and the face contact frame is removably attachable to the face cradle, allowing the face contact frame to be disposable or replaceable.
26. (canceled)
27. The system of claim 13, comprising said illumination system configured and operable to provide illumination within a region of interest where the user's face is positioned during imaging by the fundus camera, said illumination system being configured and operable to carry out one of the following: produce diffused (soft) light; and produce IR illumination.
28. (canceled)
29. (canceled)
30. The system according to claim 1, comprising at least one of the following: a triggering utility configured and operable to be responsive to the position and alignment data and the distance data to generate a triggering signal to the fundus camera upon identifying that the position and alignment data and the distance data satisfy an operational condition; and a data processor configured and operable to be responsive to retina image data from the fundus camera, and generate data indicative of a retinal condition and patient health condition.
31. (canceled)
32. (canceled)
33. The system according to claim 30, comprising the data processor configured and operable to be responsive to retina image data from the fundus camera, and generate data indicative of the retinal condition and patient health condition, the system being characterized by at least one of the following: said data processor is configured and operable to apply AI and deep learning processing to the retina image data; and the system is configured and operable to communicate with a remote station to transmit to the remote station data indicative of the retina image data.
34. (canceled)
35. A retinal imaging system comprising a face cradle and a fundus camera, wherein: the fundus camera is configured such that its optical axis is tilted with respect to a horizontal plane; and the face cradle defines a tilted face support surface for supporting a user's face in a free laying state with user's eyes looking forward and downwards towards a field of view of the fundus camera.
36. The system of claim 35, characterized by at least one of the following: the face cradle comprises a face contact frame projecting from said face support surface; the face contact frame is removably attachable to the face cradle, allowing the face contact frame to be disposable or replaceable.
37. The system of claim 36, wherein the face cradle comprises a face contact frame projecting from said face support surface, the face contact frame being made from an elastic and flexible material composition.
38. (canceled)
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0046] In order to better understand the subject matter that is disclosed herein and to exemplify how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
DETAILED DESCRIPTION OF EMBODIMENTS
[0051] Referring to
[0052] Data indicative of retinal images are properly stored and can be accessed by a physician for on-line or off-line analysis. For example, the stored data can be transmitted to a central computer station and be accessed from a remote device via a communication network using any known suitable communication techniques and protocols. As described above, the image data can be processed using AI and Deep Learning techniques.
[0053] The system 100 includes such main parts as a fundus camera 104, an imaging module 112, a sensing system 116, a position and alignment system 120, a safety controller 144, and a control system 128. The fundus camera 104 is typically positioned in association with a face cradle unit 136.
[0054] The configuration may be such that the face cradle is equipped with a movement mechanism which is controllably operable to move the cradle unit enabling automatic adjustment of its position to meet the requirements for a specific user/patient (e.g. take into account user's height difference from an average or nominal value).
[0055] Although not shown in this schematic illustration, the fundus camera and the face cradle may be mounted on a common support platform. As will be described further below, the invention also provides a novel configuration for the support platform.
[0056] As described above, the invention is aimed at providing a self-operable retinal imaging system which provides user-safety and effective retinal imaging. During the retinal imaging session, the user is requested/instructed to bring his face and eyes to a target position, by positioning his face on the face cradle and pointing his view to a target image presented by the fundus camera.
[0057] The imaging module 112 includes at least one imaging unit, which includes one or more imagers configured and operable to acquire images of the user's face, eyes, irises and possibly also pupils (e.g., using an appropriate eye tracking technique or eye and gaze tracking technique) and generate corresponding image data. As described above, the imaging module 112 may include one or more additional imaging units adapted for imaging the scene including a region of interest outside the fundus camera field of view and generating corresponding “external” image data, which can be used for self-calibration purposes. Hence, the image data ID from the imaging module 112 may also be used for the self-calibration of the system, which may be implemented using the calibration target(s) in the form of QR codes, color patterns, physical 2D or 3D shapes, etc. Further, while the user's eye is at the target position (as described above), the image data ID indicative of a relative orientation of an optical axis OA of the fundus camera with respect to the line of sight LOS of the user's eye is analyzed. As described above, the targets used at the self-calibration and imaging stages may or may not be the same.
[0058] Analysis of the image data ID is used to operate the position and alignment system 120 for positioning the fundus camera 104 at an operative position with a proper alignment of the optical axis OA of the fundus camera 104 such that it substantially coincides with the line of sight LOS of user's eye, while at said target position, and, while at the aligned position, to operate a focusing mechanism 108 of the fundus camera 104 to focus the fundus camera on the retina. To this end, the position and alignment system 120 is configured and operable for moving the fundus camera 104 along three axes with respect to the user's eye while at said user's eye target position.
[0059] The sensing system 116 is configured and operable for monitoring a relative position between a user's face 150 and the fundus camera 104 and generating corresponding sensing data SD. The sensing data is received and analyzed by a safety controller 144 to properly generate a control/alarm signal. Also, both the sensing data (or results of sensing data analysis) and the image data are used by the control system 128 to initiate (trigger) the retinal imaging session by the fundus camera and monitor the progression of the imaging session.
[0060] The control system 128 is a computer system including inter alia data input and output utilities, memory, and a data processor and analyzer. The data processor and analyzer comprises a position controller utility 124 (typically in software) configured and operable to be responsive to the image data ID from the imaging module 112 to generate position and alignment data PAD to the position and alignment system 120 to control the movements of the fundus camera to bring the fundus camera to the operative position. The position controller 124 also includes a calibration utility 125 configured and operable to utilize the image data to generate operational data to the position and alignment system to bring the fundus camera to the operational position.
[0061] As mentioned above, the face cradle may be associated with a movement mechanism enabling automatic adjustment of its position. To this end, the same position controller 124, or a separate controller of the control system 128, may be configured and operable to generate movement data to operate the movement mechanism of the face cradle to implement controllable movement of the face cradle to automatically adjust the position of the face cradle.
[0062] Such face cradle position controller may be responsive to image data ID from an imager, which may be that of the imaging module 120 or a separate imager (one or more 2D cameras), adapted to image a scene in the vicinity of a region of interest (i.e. vicinity of face cradle) to identify user's face in the image and generate corresponding estimated user's data, e.g. user's height relative to standard average expected height. Based on this estimate, the controller generates position adjustment data including movement data indicative of a movement required to be performed by the face cradle to automatically bring the face cradle to the proper position in association with a specific user, i.e., adjust the face cradle height with respect to the camera's field of view.
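The height-adjustment logic described above can be sketched as follows. This is only an illustrative sketch: the nominal height, the gain mapping the height deviation to cradle travel, and the travel limit are hypothetical values, not taken from the disclosure.

```python
def cradle_height_adjustment(estimated_user_height_cm,
                             nominal_height_cm=170.0,
                             gain=0.5,
                             max_travel_cm=6.0):
    """Map the estimated deviation of the user's height from a nominal
    value to a cradle height move, clamped to the mechanism's travel.

    All parameter values here are hypothetical placeholders; a real
    system would calibrate them against the camera's field of view.
    """
    delta = (estimated_user_height_cm - nominal_height_cm) * gain
    # Clamp the commanded move to the mechanism's travel range.
    return max(-max_travel_cm, min(max_travel_cm, delta))
```

For a user estimated at the nominal height, no move is commanded; larger deviations are saturated at the mechanism's travel limit rather than passed through unchecked.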
[0063] Also, the data processor and analyzer includes a movement controller 132 (typically in software) configured and operable to be responsive to the sensing data SD from the sensing system 116 to properly control the movement of the fundus camera to maintain the required safe working distance, and responsive to signals from the safety controller 144. Hence, when the safety controller identifies that a predetermined risk condition exists or appears in the relative position between the user's face and the fundus camera, it generates a corresponding control signal CS to the movement controller 132, which operates the position and alignment system to halt any movement of the fundus camera.
[0064] The safety controller 144 may be a separate processing unit or may be part of the control system 128. The safety controller is preprogrammed to determine whether position data, as well as movement data indicative of a predicted change in the position of the fundus camera relative to the user's face, has arrived at or is approaching a critical value corresponding to a risk condition, and to properly generate the control signal CS. It should also be noted that the safety controller may utilize the sensing data to identify a change in the user's face position with respect to the face cradle and generate a corresponding control/alarm signal, which may initiate generation of predetermined instructions to the user, together with, or independently of, the respective operation of the position and alignment system.
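The risk-condition check performed by the safety controller can be sketched as follows. This is an illustrative sketch only: the sensor fields, the minimum safe distance, and the look-ahead interval used to predict an approaching critical value are assumptions, not values from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SensingData:
    distance_mm: float          # measured face-to-camera distance
    approach_speed_mm_s: float  # closure rate (positive = camera closing in)
    face_on_cradle: bool        # contact sensors report proper contact

# Hypothetical limits; a real device would derive these from the optics
# and the mechanics of the position and alignment system.
MIN_SAFE_DISTANCE_MM = 15.0
LOOKAHEAD_S = 0.2

def risk_condition(sd: SensingData) -> bool:
    """Return True when movements of the fundus camera must be halted."""
    if not sd.face_on_cradle:
        # The user's face has left the registered position.
        return True
    # Predict the distance a short time ahead, so a fast approach is
    # caught before the critical value is actually reached.
    predicted = sd.distance_mm - sd.approach_speed_mm_s * LOOKAHEAD_S
    return predicted < MIN_SAFE_DISTANCE_MM
```

The look-ahead term reflects the paragraph's point that the controller reacts not only to the current position but also to a predicted change approaching the critical value.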
[0065] As also exemplified in the figure, the control system 128 includes a data processor 127 configured and operable to receive retinal image data RID from the fundus camera unit 104 and process this data to determine whether it is indicative of a specific anomaly (disease). To this end, the data processor 127 is configured to apply AI and deep learning processing to the image data RID and utilize/access a predetermined database storing various retinal image data pieces in association with corresponding retinal conditions (and corresponding individual's health condition). Alternatively, or additionally, the control system 128 may be configured for data communication with a central station 129 to transmit the raw data including retinal image data RID obtained by the fundus camera to the central station, or transmit to the central station data indicative of the retinal image data resulting from some preprocessing performed by the data processor 127, for further processing at the central station using AI and deep learning techniques. The retinal image data RID and/or results of the processing of such data may be recorded at the control system 128 and/or at the central station 129. As described above, the central station 129 may be configured for communication with a plurality of retinal imaging systems, and analyze data received from these to optimize the AI and deep learning algorithms as well as update the central database.
[0066] Referring to
[0067] In a next step, the image data and the sensing data, while being continuously provided, are continuously analyzed by a data processor and analyzing utility of the control system (step 208). The image data is initially indicative of the user's face position with respect to the face cradle and also with respect to the fundus camera (i.e., a relative orientation of the line of sight of the user's eye, while pointing to the target, and the optical axis of the fundus camera, i.e., along the x- and y-axes), and possibly is also indicative of a distance between the user's face and the fundus camera. The sensing data is indicative of the proper contact between the user's face and the face cradle, and also of a distance between the user's face and the fundus camera. It should be understood that the distance determination may be performed in a double-check mode using both the image data of the imaging module and the sensing data of the sensing system.
[0068] The image data analysis may include generation of position adjustment data for the face cradle unit in association with a specific user/patient, in order to operate a movement mechanism of the face cradle unit to automatically adjust the position of the face cradle unit with respect to the fundus camera (step 225).
[0069] The image and sensing data analysis includes navigation/guidance data generation to the position and alignment system and a risk condition analysis/prediction to identify, while controlling position and movement steps, whether such navigation approaches a risk condition (step 210). With regard to the navigation procedure, it should be noted that position and alignment data analysis provides for bringing the fundus camera to the proper operational position, i.e., the position of alignment of the optical axis of the fundus camera with the user's eye line of sight, and positioning of the so-aligned fundus camera at a required working distance from the user's eye. When the control system identifies such a proper operational position of the fundus camera, a triggering signal is generated which actuates auto-focus and auto-illumination managed by the fundus camera using any suitable auto-focusing technique, e.g., that typically used in imaging systems including fundus cameras. However, it should be noted that these auto-focus and auto-illumination processes are triggered (capture is triggered) by the control system upon identifying that the fundus camera, while being navigated, approaches the fundus camera working distance. From the point at which the system triggers the fundus camera, all its operations are fully automatic (focus, illumination, image processing, etc.).
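The triggering condition described above can be illustrated with a short sketch. All tolerances and the working distance below are hypothetical placeholders, not values from the disclosure.

```python
def ready_to_trigger(lateral_err_mm, vertical_err_mm, distance_mm,
                     working_distance_mm=25.0,
                     align_tol_mm=0.3,
                     distance_tol_mm=0.5):
    """Return True once the optical axis substantially coincides with
    the line of sight AND the camera sits at its working distance,
    i.e., the condition under which the control system generates the
    triggering signal that hands control to the camera's own
    auto-focus and auto-illumination.

    The tolerance values are illustrative assumptions only.
    """
    aligned = (abs(lateral_err_mm) <= align_tol_mm and
               abs(vertical_err_mm) <= align_tol_mm)
    at_working_distance = (
        abs(distance_mm - working_distance_mm) <= distance_tol_mm)
    return aligned and at_working_distance
```

Both conditions must hold simultaneously; alignment alone, or the correct distance with a misaligned axis, does not trigger capture.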
[0070] If during navigation or later during the fundus camera operation (imaging session) a risk condition is identified, the control/alarm signal is generated (step 212) and movements (and possibly also operation) of the fundus camera are halted (step 250). Such a risk condition may be associated with exaggerated proximity of the fundus camera to the user's eye, and/or user's face movement from the registered position, and/or insertion of hands or other things in between the face cradle and the fundus camera. All such unsafe situations can be properly detected by the sensing system (e.g., ultrasound sensor(s)), which determines the distance between the fundus camera and the face cradle and detects an obstacle at a distance below the working distance. It should also be understood that the imaging module, i.e., the camera(s), can also detect any change towards a risk condition, thus performing, together with the sensing system, a double-check to maintain the safe operation of the system.
[0071] As long as safety is maintained, i.e., a risk condition is not identified, the process continues with generating operational data (step 216) and performing the retinal imaging process (step 240). As the retinal imaging session proceeds, respective instructions are provided to the user for directing the user's gaze towards the field of view of the fundus camera (e.g., towards the target) and maintaining the user's face position and gaze (e.g., by instructing the user to keep the eyes open). The method iteratively performs the above steps until the retinal imaging process is completed consecutively for the two eyes.
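The overall iterative flow of steps 208-250 can be sketched as a per-eye control loop. The callback functions below are hypothetical stand-ins for the control-system utilities described above; they are not part of the original disclosure.

```python
def imaging_session(analyze, navigate, capture, eyes=("right", "left")):
    """Run the analyze / halt-or-navigate / capture cycle for each eye.

    `analyze`, `navigate` and `capture` are hypothetical callbacks
    standing in for the data analysis (step 208), operational-data
    generation (step 216) and retinal imaging (step 240) utilities.
    """
    results = {}
    for eye in eyes:
        while True:
            state = analyze(eye)           # step 208: image + sensing data
            if state["risk"]:              # steps 212/250: alarm and halt
                return {"halted_on": eye, "captured": results}
            if state["aligned"]:
                results[eye] = capture(eye)  # step 240: retinal imaging
                break                        # proceed to the next eye
            navigate(eye, state)           # step 216: move the camera
    return {"halted_on": None, "captured": results}
```

A minimal usage example: with callbacks reporting an immediately aligned, risk-free state, the loop captures both eyes in turn; if the first analysis reports a risk condition, the session halts before any capture.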
[0072] Reference is made to
[0073] As shown in
[0074] The image data can thus be used to identify whether the user's face is properly positioned and, if not, enable generation of instructions to the user; and to identify whether the user is looking at the target and, if not, enable generation of instructions to the user. Also, the image data can be used by face cradle position controller 133 to determine whether and how the position of the face cradle 136 is to be adjusted, via movement mechanism 137, to bring the user's face to the proper position with respect to the camera field of view and/or registration target.
[0075] Further, the image data is used to determine required movements of the fundus camera along x- and y-axes in the plane perpendicular to the optical axis of the fundus camera (and possibly also along the optical axis, or z-axis) to bring the fundus camera to the operative position with respect to the user's eye.
[0076] The system 300 further includes a sensing system 116 associated with a safety controller 144, configured and operable as described above with reference to
[0077] As described above, and not specifically shown in
[0078] Further provided in the retinal imaging system 300 is a position and alignment system 120 including appropriate drive mechanisms performing displacement of the fundus camera with respect to the face cradle. Generally, the drive mechanisms provide movement of the fundus camera along three perpendicular axes, including two axes, x- and y-axes in the plane perpendicular to the optical axis of the camera and the z-axis being the optical axis of the fundus camera. It should be noted that an additional drive mechanism may be provided for rotation or pivotal movement of the fundus camera or at least its optical axis.
[0079] It should be noted that in the description the x- and y-axes are at times referred to as, respectively, horizontal and vertical axes. However, as mentioned above and as will be described more specifically further below, the support plane supporting the fundus camera and the face cradle may be tilted with respect to the horizontal plane. In this case the x- and y-axes are respectively parallel and perpendicular to the support plane, and these terms should be interpreted and understood accordingly. Generally, the configuration may be such that the optical axis of the fundus camera, i.e., its field of view, is oriented at a certain angle (tilted) with respect to the horizontal plane, “looking” in a generally forward and upward direction, and the face cradle is configured such that, when the user's face is fixed on the face cradle, the user's field of view is oriented generally forward and downwards towards the field of view of the fundus camera.
[0080] The position and alignment system 120 operates by the operational data provided by the control system for bringing the fundus camera to an operative position (via navigation of its movements based on the analysis of the image and sensing data) such that the optical axis of the fundus camera substantially coincides with the line of sight of user's eye, while at said target position and the required working distance from the fundus camera, to keep the level of safety and enable focusing the fundus camera on the retina. As shown in the figure, the control system 128 is provided being in data communication with the imaging module 112, the safety controller 144 and possibly also directly with the sensing system 116, and data communication with the position and alignment system 120. The control system 128 is configured and operable as described above with reference to
[0081] It should be noted, although not specifically shown in the figure, that the retinal imaging system 300 may include or may be used with an illumination system configured and operable to provide diffused (soft) light and/or NIR illumination within a region of interest where the user's face is positioned during imaging by the fundus camera. The diffused (soft) light preferably has an appropriate color temperature profile, e.g., substantially not exceeding 4500 K, and proper illumination intensity.
[0082]
[0083] It should be understood that, generally, the fundus camera and the face cradle may or may not be mounted on the same physical surface, but the orientations of the user's gaze and the optical axis of the fundus camera are to be considered with respect to a predetermined general plane. Hence, the common support plane 410 may or may not be constituted by a physical surface. In this non-limiting example this is achieved by placing the fundus camera 104 and the face cradle 136 on a tilted surface 410 (defining the general support plane) of a wedge element 414. This configuration allows the face cradle 136 to define a face support surface 136A properly inclined with respect to a vertical plane, such that the user's face can be positioned on said surface 136A, freely laying on the face support surface, with the user's eyes pointing generally forward and downwards towards the optical axis of the fundus camera (while looking at the target).
[0084] As also schematically illustrated in the example of
[0085] Although in this specific non-limiting example of
[0086] As shown schematically in
[0087]
[0088] The face support surface has an appropriate optical window 504 (e.g., an opening) allowing imaging of the user's eyes via the optical window. As also shown in the figure, the face cradle 500 may for example include a face contact frame 506 located on and projecting from the face support surface 502. The face contact frame 506 may be removably mountable on/attachable to the face cradle 500. Also, the face contact frame 506 may be made from a properly elastic and flexible material composition (e.g., rubber, silicone, etc.), making the entire procedure more comfortable for the user and providing for an ergonomic and more stable position during the imaging session. The face cradle may be equipped with one or more sensing elements—three such sensing elements S1, S2 and S3 being shown in this specific non-limiting example. It should be understood, although not specifically shown, that the imaging module may be integral with/mounted on the fundus camera housing or may be a separate unit appropriately located to acquire the images of the user's face, eye, iris, pupil. Also, the safety controller, as well as the control system, may be integral with the fundus camera housing or may be stand-alone device(s) connectable to the respective devices/units of the system as described above.
[0089] Thus, the present invention provides a novel configuration of the self-operable retinal imaging system, enabling a user to perform retinal imaging without a need for a highly skilled operator, and indeed without any operator assistance, owing to the high-degree safety functionality of the system. The retina images may be stored in a memory of the control system, to be accessed by a skilled person for analysis, and/or may be communicated to an external control station. The invention also provides a novel face cradle configuration, as well as a novel configuration of an integral retina imaging system.