METHOD FOR QUANTIFYING OCULAR DOMINANCE OF A SUBJECT AND APPARATUS FOR IMPLEMENTING A METHOD FOR QUANTIFYING OCULAR DOMINANCE OF A SUBJECT
20230036885 · 2023-02-02
Assignee
Inventors
- Martha Hernandez-Castaneda (Charenton-le-pont, FR)
- Paul VERNEREY (Charenton-Le-Pont, FR)
- Gildas Marin (Charenton-le-pont, FR)
CPC classification
A61B3/032
HUMAN NECESSITIES
International classification
Abstract
An apparatus for quantifying ocular dominance of a subject, including at least one display for providing a first image representing a first target to a first eye of the subject, and for providing a second image representing a second target to a second eye of the subject, and a control unit to control the at least one display.
Claims
1. An apparatus for quantifying ocular dominance of a subject comprising: at least one display for providing a first image representing a first target to a first eye of the subject, and for providing a second image representing a second target to a second eye of the subject, and a control unit to control the at least one display, wherein the first image and the second image are such that the first target on the first image has an identical position, an identical orientation, an identical size and an identical shape to the second target on the second image, the first target comprises n points and the second target comprises n points where n≥2, 1≤i≤n and 1≤j≤n; each point of the first target matches with a point of the second target where PLi has the same position in the first image as PRi for 1≤i≤n in the second image; to each point of the first target and to each point of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target; the feature values of at least two points of the first target differ; and for each of the n points of the first and second target VLi+VRi=VLj+VRj for any i and j.
2. The apparatus according to claim 1, further comprising a first optical unit and a second optical unit respectively in front of the first eye and in front of the second eye of the subject, a power modulator to change the optical power of the optical units in front of each eye, the power modulator being controlled by the control unit.
3. The apparatus according to claim 1, wherein the feature values of the target are the luminosity or the color.
4. The apparatus according to claim 1, wherein the shape of the targets is: an element grid comprising at least four elements, the at least four elements having the same shape and the same size, an element line comprising at least three elements, the at least three elements having the same shape and the same size, an element column comprising at least three elements, the at least three elements having the same shape and the same size, at least two fringes, letter(s) or optotype(s) or figure(s).
5. The apparatus according to claim 1, further comprising an adaptive algorithm executed by the control unit, the adaptive algorithm being configured to accept a report describing how the subject sees when presented the first image and the second image, and to calculate adjustments of the feature values of the first and/or second target on the first image and the second image to be presented in a next iteration of the first image and the second image according to the report, a target generating component configured to provide the next iteration of the first image and the second image to the subject.
6. The apparatus according to claim 1, further comprising an adaptive algorithm executed by the control unit, the adaptive algorithm being configured to accept a report describing how the subject sees when presented the first image and the second image, and to calculate adjustments of the optical power of the optical units in a next iteration of the first image and of the second image according to the report.
7. A refractometer comprising an apparatus for quantifying ocular dominance of a subject according to claim 1.
8. A set of images for quantifying the ocular dominance of a subject, comprising a first image representing a first target and a second image representing a second target, wherein: the first image and the second image are such that the first target on the first image has an identical position, an identical orientation, an identical size and an identical shape to the second target on the second image, the first target comprises n points and the second target comprises n points where n≥2, 1≤i≤n and 1≤j≤n; each point of the first target matches with a point of the second target where PLi has the same position in the first image as PRi for 1≤i≤n in the second image; to each point of the first target and to each point of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target; the feature values of at least two points of the first target differ; and for each of the n points of the first and second target VLi+VRi=VLj+VRj for any i and j.
9. A method for quantifying ocular dominance of a subject comprising providing a first image to a first eye of the subject, the first image representing a first target, providing a second image to a second eye of the subject, the second image representing a second target wherein the first image and the second image are such that the first target on the first image has an identical position, an identical orientation, an identical size and an identical shape to the second target on the second image, the first target comprises n points and the second target comprises n points where n≥2, 1≤i≤n and 1≤j≤n; each point of the first target matches with a point on the second target where PLi has the same position in the first image as PRi for 1≤i≤n in the second image; to each point of the first target and to each point of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target; the feature values of at least two points of the first target differ; and for each of the n points of the first and second target VLi+VRi=VLj+VRj for any i and j; checking that the subject sees a fused image from the first image and the second image, the fused image comprising a fused target with feature values; generating a first report describing the feature values of the fused image; determining which eye is the dominant eye of the subject based on the report.
10. The method according to claim 9, further comprising after determining which eye is the dominant eye of the subject: calculating adjustments to the feature values of the first and/or second target on the first image and the second image to be presented in a next iteration of the first image and the second image according to the report; providing the next iteration of the first image and the second image with the adjusted feature values; generating a second report describing how the subject sees the feature values of the fused target from the next iteration of the first image and of the second image; repeating the preceding steps until the subject indicates that, according to the subject's perception, the feature values of the fused target of the fused image are constant at each point of the fused target of the fused image; quantifying ocular dominance of the subject based on the reports of the subject.
11. The method according to claim 9, wherein the shape of the targets is a set of at least three elements, said at least three elements having the same shape and the same size, wherein the feature values of the target are the luminosity, and wherein the reports describe the location of the brightest and/or darkest element of the fused target of the fused image.
12. The method according to claim 9, wherein the target is a set of at least three elements, said at least three elements having the same shape and the same size, wherein the feature values of the target are the colors of the target, VLi+VRi=VLj+VRj meaning that VLi corresponds to a first color and VRi corresponds to a second color which is the complementary color of the first color, wherein the reports describe the location of the colors on the fused target of the fused image.
13. The method according to claim 9, wherein the fused image is a 3-D stereoscopic image.
14. A method for adjusting a binocular balance of a subject comprising: providing a first image to a first eye of the subject, the first image representing a first target, providing a second image to a second eye of the subject, the second image representing a second target wherein the first image and the second image are such that the first target on the first image has an identical position, an identical orientation, an identical size and an identical shape to the second target on the second image, the first target comprises n points and the second target comprises n points where n≥2, 1≤i≤n and 1≤j≤n; each point of the first target matches with a point on the second target where PLi has the same position in the first image as PRi for 1≤i≤n in the second image; to each point of the first target and to each point of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target; the feature values of at least two points of the first target differ; and for each of the n points of the first and second target VLi+VRi=VLj+VRj for any i and j; checking that the subject sees a fused image from the first image and the second image, the fused image comprising a fused target with feature values; generating a report describing the feature values of the fused image; determining which eye is the dominant eye of the subject based on the report; providing a correction to the dominant eye of the subject by adjusting the power of a lens in front of the first eye and/or the second eye until the feature values of the fused image seem constant for the subject.
15. The method for adjusting a binocular balance of a subject according to claim 14, further comprising measuring the refraction of each eye of the subject; providing a correction based on the measured refraction by adjusting the power of a lens in front of the first eye and/or the second eye.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0079] The following description with reference to the accompanying drawings will make it clear what the invention consists of and how it can be achieved. The invention is not limited to the embodiment(s) illustrated in the drawings. Accordingly, it should be understood that where features mentioned in the claims are followed by reference signs, such signs are included solely for the purpose of enhancing the intelligibility of the claims and are in no way limiting on the scope of the claims.
[0080] In the accompanying drawings:
DETAILED DESCRIPTION OF EMBODIMENTS
[0090] The apparatus comprises a display 7 such as an image display system, for providing a first image 20L, 30L representing a first target 22L, 32L to a first eye 2 of the subject 4, and for providing a second image 20R, 30R representing a second target 22R, 32R to a second eye of the subject 4. The first image and the second image may be provided to the subject 4 at the same time or at different times such that the subject has the perception of seeing the first image and the second image at the same time.
[0091] The first image 20L, 30L may be seen by the first eye 2 of the subject through a first optical unit 5 such as a set of lenses, while the second image 20R, 30R may be seen by the second eye 3 of the subject 4 through a second optical unit 6 such as a set of lenses.
[0092] In the embodiment of
[0093] Each of these optical units 5, 6 is intended to be placed in front of one of the eyes 2, 3 of the subject, close to this eye (not further than five centimeters, in practice), so that this eye 2, 3 can see a screen 70 of the display 7 through the lens, through the set of lenses, or by reflection onto a mirror of the optical unit 5, 6.
[0094] Alternatively, the subject may see the display directly without the optical unit.
[0095] The apparatus is configured to enable ocular dominance quantification at various distances (near vision, far vision and/or intermediate vision) and/or for various eye gaze directions (for example a natural eye gaze direction lowered for reading, or a horizontal eye gaze direction for far vision). The screen 70 is located at a distance from the subject comprised between 25 cm (for near vision) and infinity when using a specific imaging system (not represented), such as a Badal system; if no imaging system is used (or if a plane mirror is used), this distance goes up to about 8 meters in practice. The imaging system may also be similar to the one disclosed in EP 3 298 952, allowing the combination of a first image provided by a screen (possibly constituted of one or more peripheral images) and a second image provided by an imaging module (possibly constituted of one or more central images), both images being possibly imaged at variable distances for the subject's eye.
[0096] The lens, the set of lenses, or the set of lenses and mirrors of each of the first and second optical units 5, 6 has an overall spherical power S (spherical optical power, expressed for instance in diopters). The cylindrical components of its refractive power are those of an equivalent cylindrical lens that has a cylindrical power C (expressed for instance in diopters), and whose cylinder has an orientation represented by an angle α. Each of the first and second refraction corrections, provided by the corresponding optical unit 5, 6, may be characterized by the values of these three refractive power parameters S, C and α. This refractive correction could be equally characterized by the values of any other set of parameters representing the above-mentioned refractive power features of the optical unit 5, 6, such as the triplet {M, J0, J45}, where the equivalent sphere M is equal to the sphere S plus half of the cylinder C (M=S+C/2), and where J0=C/2*cos(2α) and J45=C/2*sin(2α) are the refractive powers of two Jackson crossed-cylinder lenses representative of the cylindrical refractive power features of the lens or of the set of lenses of the optical unit 5, 6.
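The conversion between {S, C, α} and {M, J0, J45} given above can be sketched as follows (a minimal illustration of these formulas; the function names are ours, and a positive-cylinder convention is assumed for the inverse):

```python
import math

def sca_to_mj(S, C, alpha_deg):
    """Convert the sphere/cylinder/axis triplet {S, C, alpha}
    (diopters, degrees) into the power-vector triplet {M, J0, J45}
    using the formulas given in the text."""
    a = math.radians(alpha_deg)
    M = S + C / 2                    # equivalent sphere M = S + C/2
    J0 = (C / 2) * math.cos(2 * a)   # Jackson crossed cylinder, 0/90 deg
    J45 = (C / 2) * math.sin(2 * a)  # Jackson crossed cylinder, 45/135 deg
    return M, J0, J45

def mj_to_sca(M, J0, J45):
    """Inverse conversion back to {S, C, alpha}, positive-cylinder form."""
    C = 2 * math.hypot(J0, J45)
    S = M - C / 2
    alpha_deg = math.degrees(math.atan2(J45, J0)) / 2 % 180
    return S, C, alpha_deg
```

For instance, S=1.0 D, C=2.0 D, α=30° gives M=2.0 D, J0=0.5 D and J45≈0.866 D, and the inverse conversion recovers the original triplet.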
[0097] According to an embodiment of the present description, the lens or the set of lenses of the first and second optical units 5, 6 may be blurring lenses which are spherical, in other words C=0.
[0098] Regarding now the display 7, the display may comprise a screen 70.
[0099] The whole extent of the screen 70 may be seen through each of the first and second optical units 5, 6.
[0100] The display 7 may be realized by means of a liquid-crystal display screen 70 that is able to display the first image 20L, 30L with a first polarization, and, at the same time, to display the second image 20R, 30R with a second polarization. The first and second polarizations are orthogonal to each other. For instance, the first and second polarizations are both rectilinear and perpendicular to each other. Alternatively, the first polarization is a left-hand circular polarization while the second polarization is a right-hand circular polarization.
[0101] The first optical unit 5 in front of the first eye may comprise a first polarizing filter that filters the light coming from the image display system 7. The first polarizing filter filters out the second polarization, and lets the first polarization pass through so that it can reach the first eye 2 of the subject. So, through the first polarizing filter, the first eye 2 of the subject can see the first image 20L, 30L but not the second image 20R, 30R.
[0102] Similarly, the second optical unit in front of the second eye may comprise a second polarizing filter that filters the light coming from the display 7. The second polarizing filter filters out the first polarization, and lets the second polarization pass through so that it can reach the second eye 3 of the subject.
[0103] The display may use any other separation technique, such as "active" separation, in which each test image is displayed alternately at a high frequency while a synchronized electronic shutter blocks the eye to which the image should not be addressed. The separation system could also use chromatic separation, with chromatic filters both on the display and on each eye, each side/eye having a different chromatic filter that blocks the other (for example red and green filters).
[0104] The first and second images (as represented on
[0105] Here, the screen 70 may fill a part of the subject's field of binocular view that is at least 5 degrees wide, or even at least 10 degrees wide.
[0106] In alternative embodiments, the display may be implemented by means of a reflective, passive screen (such as an aluminum-foil screen) and one or several projectors for projecting onto this screen the first image, with the first polarization, and the second image, with the second polarization, the first and second images being superimposed to each other, on the screen.
[0107] Alternatively, the apparatus may comprise two displays. According to one embodiment, the first image is displayed on the first display and the second image is displayed on the second display, for instance using a head-up virtual reality device.
[0108] Here, the screen of the first display and the screen of the second display may fill a part of the subject's field of monocular or binocular view that is at least 5 degrees wide, or even at least 10 degrees wide.
[0109] In alternative embodiments, the first and second displays may be achieved, for instance, by means of a first and a second Badal-like systems, placed respectively in front of the first eye, and in front of the second eye of the subject. Each of these Badal-like systems would comprise at least one lens, and a displacement system to modify a length of an optical path that joins this lens to the display screen considered, in order to form an image of this display screen at a distance from the eye of the subject that is adjustable.
[0110] Anyhow, the at least one display is controlled by a control unit 8 of the apparatus 1.
[0111] The control unit 8, which may comprise at least one processor and at least one non-volatile memory, may be programmed to control the at least one display, to vary/adjust the feature values of the first and/or the second target, and/or to control the power modulator in order to change the optical power of the optical units 5, 6.
[0112] As presented in detail below and illustrated in
[0113] The first image 20L, 30L and the second image 20R, 30R are such that the first target 22L, 32L on the first image 20L, 30L has an identical position, an identical orientation, an identical size and an identical shape to the second target 22R, 32R on the second image 20R, 30R.
[0114] The first target 22L, 32L comprises n points (PL1, . . . , PLi, . . . , PLj, . . . , PLn) and the second target 22R, 32R comprises n points (PR1, . . . , PRi, . . . , PRj, . . . , PRn) where n≥2, 1≤i≤n and 1≤j≤n.
[0115] Each point (PLi) of the first target 22L, 32L matches with a point (PRi) of the second target 22R, 32R where PLi has the same position in the first image 20L, 30L as PRi, for 1≤i≤n, in the second image 20R, 30R. To each point (PLi) of the first target and to each point (PRi) of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target. The feature values of at least two points (PLi, PLj) of the first target differ; and for each of the n points (PLi, PRi) of the first and second target
VLi+VRi=VLj+VRj for any i and j.
[0116] Consequently, the feature values VRi, VRj of the two points (PRi, PRj) of the second target that match with the two points (PLi, PLj) of the first target also differ.
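The constant-sum condition above can be illustrated with a short sketch (the helper names and the 8-bit grey-level range are our own illustrative choices, not part of the claims): given the feature values of the first target, the second target is derived so that every matching pair sums to the same constant.

```python
def make_target_pair(left_values, total=255):
    """Given feature values VLi for the first (left) target, derive the
    second (right) target so that VLi + VRi equals the same constant
    'total' at every matching point, as the condition requires."""
    return left_values, [total - v for v in left_values]

def check_invariant(left, right):
    """True when VLi + VRi == VLj + VRj for all i, j."""
    return len({l + r for l, r in zip(left, right)}) == 1
```

With left_values = [0, 64, 128, 192, 255], the derived right target is [255, 191, 127, 63, 0], and every matching pair sums to 255.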
[0117] By “identical”, we mean that a level of similarity in position, orientation, size and shape between the first target and the second target is higher than a certain threshold.
[0118] It is noted however that, alternatively, the first and second targets could be very similar to each other, yet not completely identical, for example to enable a 3-D stereoscopic rendering of the represented scene. Still, in such a case, the first and second targets would be similar enough that a level of similarity between them is higher than a given threshold.
[0119] This level of similarity could for instance be equal to a normalized correlation product between the first target and the second target, that is to say equal to the correlation product between them, divided by the square root of the product of the autocorrelation of the first target by the autocorrelation of the second target. In such a case, the level of similarity threshold mentioned above could be equal to 0.8, for instance.
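This normalized correlation product can be sketched as follows (a hedged illustration with the two targets flattened to lists of feature values; only the zero-lag value is computed, and the helper name is ours):

```python
import math

def similarity(a, b):
    """Zero-lag normalized correlation between two targets, flattened
    to lists of feature values: their correlation product divided by
    the square root of the product of their autocorrelations."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
```

Two identical targets give a similarity of exactly 1.0, above the 0.8 threshold mentioned above, while unrelated targets score near 0.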
[0120] The level of similarity could also be defined, between two targets similar in size/shape, as an angular deviation of less than 6° when observed by a subject at far vision distance, or as a difference of less than +/−1 diopter.
[0121] More generally, the level of similarity threshold could be equal to 0.8 times a reference level of similarity, this reference level of similarity being a level of similarity between the first target and the first image itself, computed in the same manner as the level of similarity between the first and second targets (except that it concerns the first target only).
[0122] Alternatively, the level of similarity threshold could be equal to 10 times a level of similarity computed between the first target and a random image.
[0123] The admissible range of level of similarity can be defined empirically by showing to a subject successive combinations of two images with the same first reference image and different second images, each differing from the first reference image and from one another, and each defining, with the first reference image, a particular level of similarity. The lower limit of the admissible range of level of similarity will correspond to the highest level of similarity at which a subject cannot perceive a 3-D stereoscopic rendering of the scene represented. The upper limit of the admissible range of level of similarity will correspond to the lowest level of similarity at which a subject will complain of double vision or suppression.
[0124] The feature value may be the luminosity or the colors. In the case of the luminosity, the feature value of the target may correspond to a level of grey or to an intensity or an amplitude for a same color.
[0125] The feature values of at least two points (PLi, PLj) of the first target differ. The difference between the feature values defines a contrast. Advantageously, the feature values of at least three points (PLi, PLj) of the first target differ, making the perception of the contrast or variation between the points easier if dominance is unbalanced for the subject.
[0126]
[0127] In
[0128] Inside each element 24, the feature value may be constant as illustrated, for example, on
[0129]
VLi+VRi=VLj+VRj for any i and j.
[0130] Thus, as the diagram of
[0131] The step of checking may be implicit, or explicit by asking the subject.
[0132] If the subject does not see a fused image representing a fused target from the first image and the second image, the position (between the first image and the second image or between the subject and the images) and/or the size of the first image and the second image are adjusted and/or the feature values are varied on one or both images (to adjust the balance) such that the subject sees a fused image.
[0133] If there is no dominant eye between the first eye and the second eye, the fused target of the fused image perceived by the subject may correspond to the summed-up target 22S, 32S, i.e., for the subject, all the points of the fused target seem to have a constant feature value.
[0134] Alternatively, if there is a dominant eye between the first eye or the second eye, the fused target perceived by the subject may not correspond to the summed up target and the subject may see the fused target with points having different feature values. For example: [0135] if the first eye is dominant, the subject sees the point PSi less dark than the point PSj; or [0136] if the second eye is dominant, the subject sees the point PSi darker than the point PSj.
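One simple way to illustrate this behaviour is a weighted-sum model of fusion (our own illustrative model, not taken from the description), where a dominance weight w above 0.5 favours the first eye:

```python
def fused_values(left, right, w=0.5):
    """Illustrative fusion model: the perceived value at each point is
    a weighted sum of the two monocular values; w > 0.5 models
    dominance of the first eye."""
    return [w * l + (1 - w) * r for l, r in zip(left, right)]

def is_uniform(values, tol=1e-9):
    """True when all fused points share the same feature value."""
    return max(values) - min(values) <= tol
```

With complementary luminosity targets, w = 0.5 (no dominance) yields a uniform fused target, while w ≠ 0.5 makes matching points differ, consistent with the perceptions described above.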
[0137] Alternatively, the feature values may be the colors of the first/second target. In this case, "VLi+VRi=VLj+VRj" means that VLi corresponds to a first color and VRi to a second color which is the complementary color of the first color, and similarly VLj corresponds to another first color and VRj to another second color which is the complementary color of that other first color. For example, VLi is green, VRi is red, VLj is yellow and VRj is purple. In another example, VLi is green, VRi is red, VLj is red and VRj is green. Consequently: [0138] if there is no dominant eye, the fused target of the fused image is uniform, in other words the subject perceives the same color over all the fused target; [0139] if the first eye is dominant, the subject sees the point PSi on the fused target with a color closer to the color of the point PLi; or [0140] if the second eye is dominant, the subject sees the point PSi on the fused target with a color closer to the color of the point PRi.
[0141] Thus, in the case where VLi is green, VRi is red, VLj is red and VRj is green: [0142] if there is no dominant eye, the fused target of the fused image is uniform, in other words the subject perceives the whole fused target as grey; [0143] if the first eye is dominant, the subject sees the point PSi with a color closer to green and PSj with a color closer to red; or [0144] if the second eye is dominant, the subject sees the point PSj with a color closer to green and PSi with a color closer to red.
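The constant-sum condition can also be checked for colors. As an illustrative sketch (helper names are ours), channel-wise RGB complements are used here, which always sum to white; note that the red/green example above follows a perceptual opponent-color pairing rather than a strict RGB complement:

```python
def complement(rgb):
    """Channel-wise RGB complement, so that a color and its
    complement always sum to (255, 255, 255)."""
    return tuple(255 - c for c in rgb)

def colors_balanced(left_colors, right_colors):
    """True when every matching pair of points sums to the same
    color: the color analogue of VLi + VRi = VLj + VRj."""
    sums = {tuple(l + r for l, r in zip(lc, rc))
            for lc, rc in zip(left_colors, right_colors)}
    return len(sums) == 1
```

For instance, pairing each point of the first target with the complement of its color in the second target satisfies the condition, whereas repeating the same color on both sides at one point breaks it.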
[0145] Thus, according to an embodiment of the present disclosure, based on the perception of the subject of the fused target, a report is generated (step 83 in FIGS. 8A, 8B, 8C) which describes the perception of the subject of the fused target of the fused image and accordingly, the dominant eye of the subject is determined (step 84 in
[0146] According to an embodiment of the present disclosure, after generating the report, the feature values VLi, VRi of the first and/or the second target 22L, 32L, 22R, 32R may be adjusted (step 85 in
[0147] According to an embodiment of the present disclosure, for each n points (PLi, PRi) of the first and second target, the adjusted feature values VLi′, VLj′ of the first target and the adjusted feature values VRi′, VRj′ of the second target are such that
VLi′+VRi′=VLj′+VRj′ for any i and j.
[0148] This embodiment allows retrying the measurement with a different balance while respecting the initial condition.
[0149] According to another embodiment of the present disclosure,
VLi′+VRi′=VLj′+VRj′=VLi+VRi.
[0150] This embodiment allows keeping the global contrast constant in time, which makes it easier for the subject to evaluate changes.
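An update rule satisfying both constraints can be sketched as follows (an illustrative choice of ours, not the only possibility): the contrast of the first-eye target is rescaled by a gain around its mean, and the second-eye target is rebuilt from the constant sum, so that VLi′+VRi′=VLi+VRi at every point.

```python
def adjust_balance(left, gain, total=255):
    """Rescale the first-eye contrast by 'gain' around the mean of the
    first target, then rebuild the second-eye target from the constant
    sum 'total', so that VLi' + VRi' = VLi + VRi at every point."""
    mean = sum(left) / len(left)
    new_left = [mean + gain * (v - mean) for v in left]
    new_right = [total - v for v in new_left]
    return new_left, new_right
```

A gain below 1 softens the first-eye contrast (and correspondingly raises the second-eye contrast) while the per-point sums, and hence the global contrast, stay unchanged.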
[0151] According to an embodiment of the present disclosure, the next step is to provide (step 86 in
[0152] According to an embodiment of the present disclosure, a report describing the feature values of the fused target from the next iteration of the first image and of the second image is generated (step 87 in
[0153] One embodiment to quantify the ocular dominance is to perform the preceding steps several times (step 88 in
[0154] Alternatively, another embodiment to quantify the ocular dominance is by adjusting binocular balance as illustrated in
[0155] As in the preceding embodiment, the appropriate correction may be obtained by iteration (not illustrated in
[0156] The change in power of the lens may be directly provided as a quantification value of the dominance. Alternatively, the change in power may qualify the dominance (strong or slight, for example). It may be useful to know or give this data to the eye care professional (ECP) to help him/her make a decision for a prescription (for example in case of anisometropia). It may be a decision aid other than visual acuity.
[0157] According to one embodiment, in order to perform the iteration steps, the control unit may execute an adaptive algorithm. The adaptive algorithm is configured to accept a report describing how the subject 4 sees when presented the first image 20L, 30L and the second image 20R, 30R, and to calculate adjustments to the feature values of the first and/or second target 22L, 32L, 22R, 32R on the first image 20L, 30L and the second image 20R, 30R to be presented in a next iteration of the first image 20L, 30L and the second image 20R, 30R according to the report. A target-generating component provides the next iteration of the first image 20L, 30L and the second image 20R, 30R to the subject 4.
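A minimal sketch of such an adaptive loop (the report labels, the fixed step rule and the stopping criterion are all our own assumptions, not taken from the description) might look like:

```python
def quantify_dominance(get_report, gain=1.0, step=0.1, max_iter=20):
    """Nudge the first-eye contrast gain after each subject report
    until the fused target is perceived as uniform; the final gain is
    taken as a quantification of ocular dominance (1.0 = balanced)."""
    for _ in range(max_iter):
        report = get_report(gain)  # 'uniform', 'left_brighter' or 'right_brighter'
        if report == "uniform":
            return gain
        gain += -step if report == "left_brighter" else step
    return gain
```

For instance, a simulated subject whose eyes balance at a gain of about 0.7 converges to that value in a few iterations.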
[0158] Advantageously, adjusting binocular balance allows determining a pair of ophthalmic lenses adapted to a wearer. For that, the steps may be: [0159] measuring (91 in
[0161] Then the preceding steps described for the method for adjusting the binocular balance are performed.
[0162] Alternatively, the adjustment of the binocular balance may occur at the beginning of the refraction process or before the refraction process, in particular if the refraction is a binocular refraction.
[0163] Measuring the refraction of each eye of the subject may be an objective measurement using an auto refractometer or a subjective measurement using a phoropter with monocular steps or binocular refraction.
[0164] Accordingly, one object of the present disclosure is a refractometer comprising an apparatus for quantifying ocular dominance of a subject as described in the present disclosure.
[0165] In the case where the patient does not suffer from hyperopia, advantageously, correcting the dominant eye with a blurring lens, instead of increasing the optical power of the non-dominant eye, also allows decreasing the thickness of the lens: at the end, the blurring lens is subtracted from the measured monocular refraction, thus decreasing the optical power.
[0166] The targets may be surrounded as illustrated, for example, in
[0167] As explained above in the section presenting a "summary", making use of such first and second images improves the stability of the binocular vision of the subject 4 and makes the observation of these images more comfortable, avoiding blinking or flickering of the global image perceived by the subject (after fusion), and limiting ocular vergence issues. An ocular dominance method in which such images are provided to the subject can thus be carried out faster, and leads to more accurate results.
[0168] First, second, third, fourth and fifth couples of test images, each comprising a first image 30L and a second image 30R having the characteristics mentioned above, are described below (in reference to
[0169] According to one embodiment of the apparatus described here, one or several of these couples of images are stored in the memory of the control unit, so that they can be displayed by the display 7 when an ocular dominance method is carried out by means of the apparatus 1 described above. More generally, at least one computer program is stored in the memory of the control unit, this computer program comprising instructions, which, when the program is executed by the control unit 8, cause the apparatus 1 to carry out a method having the features presented above (like the methods described in detail below). This computer program comprises data representative of at least one of these couples of images.
Images
[0170] In each exemplary couple of images described below, the first image 30L represents a first target 32L to a first eye 2 of the subject 4, and the second image 30R represents a second target 32R to a second eye of the subject 4.
[0171] In each exemplary couple of images described below, the first image 30L and the second image 30R are such that the first target 32L on the first image 30L has an identical position, an identical orientation, an identical size and an identical shape to the second target 32R on the second image 30R. As explained above for
[0172] One point may be a pixel of the display.
[0173] To each point (PLi) of the first target and to each point (PRi) of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target. The feature values VLi, VLj of at least two points (PLi, PLj) of the first target differ, and for each of the n matched points (PLi, PRi) of the first and second targets, VLi+VRi=VLj+VRj for any i and j.
[0174] Consequently, the feature values VRi, VRj of the two points (PRi, PRj) of the second target which match with the two points (PLi, PLj) of the first target differ as well.
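As an illustration only (not part of the claimed apparatus), the complementary-value constraint VLi+VRi=VLj+VRj can be sketched in a few lines of Python, assuming the feature value is a luminosity on a 0–255 scale (the constant sum C=255 is an arbitrary choice for this sketch):

```python
# Illustrative sketch: generating a complementary pair of targets in which
# VLi + VRi equals the same constant C for every matched point pair.
# The luminosity scale 0..255 and C = 255 are assumptions for this example.

C = 255  # chosen constant sum; any constant would satisfy the constraint

def complementary_target(left_values):
    """Given the feature values VLi of the first target, return the matching
    values VRi of the second target so that VLi + VRi == C for every i."""
    return [C - v for v in left_values]

# First target: at least two points must differ in feature value.
left = [40, 120, 200, 80]
right = complementary_target(left)

# Check the invariant stated above: VLi + VRi == VLj + VRj for any i, j.
sums = {l + r for l, r in zip(left, right)}
assert len(sums) == 1  # every matched pair sums to the same constant
```

Because the two targets are point-wise complementary, their fused percept has a uniform combined feature value, while each eye alone sees a contrast pattern.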
[0175]
[0176] The feature value may be the luminosity as illustrated in
[0178] Each of the first and second images to be displayed may comprise: [0179] a central image with a target, and optionally [0180] a peripheral image that surrounds the central image and contributes usefully to a well-balanced fusion process between the left and right visual pathways of the subject.
[0181] The images may thus be composite images and, besides, they comprise a peripheral image that is all the more stabilizing as the part of the field of view it occupies is wide. It is thus very useful to use a wide screen, such as the one described above, to provide enough room to accommodate such composite images.
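The composition of a central target into a wider peripheral image described above can be sketched as follows (a hypothetical illustration only; images are represented as nested lists of luminosity values, rows by columns):

```python
# Illustrative sketch: composing a test image from a central target pasted
# at the centre of a peripheral (surround) image. Pixel values are plain
# luminosities; the sizes below are arbitrary for the example.

def compose(peripheral, central):
    """Return a copy of `peripheral` with `central` pasted at its centre."""
    H, W = len(peripheral), len(peripheral[0])
    h, w = len(central), len(central[0])
    top, left = (H - h) // 2, (W - w) // 2
    out = [row[:] for row in peripheral]  # copy so the surround is untouched
    for r in range(h):
        for c in range(w):
            out[top + r][left + c] = central[r][c]
    return out

peripheral = [[128] * 8 for _ in range(8)]   # uniform surround
central = [[255, 0], [0, 255]]               # tiny 2x2 target
image = compose(peripheral, central)
```

A wider `peripheral` array corresponds to the wide-screen case discussed above, where the surround occupies a larger part of the field of view.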
[0182] The
[0183] Advantageously, the rich and diversified visual content of the first peripheral image contributes to the stabilizing effect of this image. Indeed, it provides an abundant visual support, identical or similar to the one present in the second peripheral image, which enables a very stable and well-balanced fusion between the left and right visual pathways of the subject. It aids focusing and fusion because the 3D scene may bring elements of perception for monocular and binocular distances, which enable the visual system to stabilize. Besides, it captures the attention of the subject from a visual point of view, and helps keep the subject focused on the test images provided to him/her.
[0184] According to an embodiment, the peripheral image may show: [0185] abundant scenes, [0186] with 3D perspective, [0187] natural scenes, [0188] with stereoscopy (positive or negative disparities).
[0189] The shape of the targets 22L, 32L, 22R, 32R may be: [0190] an element grid comprising at least four elements, at least three of these elements having the same shape and the same size. For example, the
[0195] The element may be a square, a circle, a star, an animal or any other kind of shape.
[0196] The figure may be a square, a circle, a star, an animal, an object or any other kind of shape.
[0197] Optionally, the target may comprise a uniform background as illustrated in
[0198] The feature value of the uniform background may be the average of (VLi, VRi) or may be any other constant value.
[0199] Optionally, the central image may comprise the target and a background as illustrated on
[0200] Besides, in each couple of images described above, instead of being identical, the first and second peripheral images could be such that, when the first and second images are superimposed on each other (with their respective frames coinciding with each other), some elements of the first peripheral image are slightly side-shifted with respect to the corresponding elements of the second peripheral image, to enable a 3-D stereoscopic rendering of the scene represented. More precisely, in such a case, the first peripheral image would represent an actual scene, object, or abstract figure as it would be seen from the position of the first eye 2 of the subject, while the second peripheral image would represent the same scene, object or abstract figure as it would be seen from the position of the second eye 3 of the subject.
[0201] Employing such stereoscopic images is a very efficient way to get rid of the suppression phenomenon described in the preamble. Indeed, with such test images, the subject has a very strong tendency to try to perceive the scene in a 3-dimensional manner, and thus takes into account both the left and right visual pathway in the fusion process (thus eliminating the “suppression phenomenon”), to obtain this 3-dimensional rendering. When such stereoscopic images are employed, the way to compute their level of similarity has to be adapted, to take into account their 3-dimensional nature.
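The side-shifting of corresponding elements between the two peripheral images can be sketched as below (an assumption-laden illustration, not the patent's method; a one-dimensional row of pixels stands in for a full image, and the 1-pixel shifts are arbitrary):

```python
# Illustrative sketch: producing a left/right pair of views by horizontally
# side-shifting an element by a small disparity, which is what encodes
# stereoscopic depth when the two views are fused.

def shift_row(row, disparity, fill=0):
    """Shift a row of pixel values right by `disparity` (left if negative),
    filling vacated positions with `fill`."""
    n = len(row)
    out = [fill] * n
    for i, v in enumerate(row):
        j = i + disparity
        if 0 <= j < n:
            out[j] = v
    return out

row = [0, 0, 255, 255, 0, 0]      # a bright element on a dark background
left_view  = shift_row(row, +1)   # element seen slightly to the right
right_view = shift_row(row, -1)   # element seen slightly to the left
# The 2-pixel relative offset between the views is the binocular disparity.
```

A level-of-similarity computation adapted to such pairs would compare the views after compensating for this disparity, rather than pixel by pixel.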
[0202] The examples of
[0203] In the example of
[0204] In the example of
[0205] Optionally, the target may be surrounded with a peripheral area as illustrated in
[0206] Optionally, the feature value inside each element of the target is constant but differs between at least two elements. This embodiment makes it easier for the subject to describe the position of the feature values.
[0207] The method and images used in the apparatus of the present disclosure may use interactive elements or steps. A keyboard or pad may be used to input the answers of the subject, or to enable the subject to go back to previous scenes if he/she wishes to. An indicator may be used to display graphically the degree of advancement of the method. Explanations of the test and/or a playful, engaging storytelling explaining the test and focusing on some objects that will be shown during the test (a treasure-hunt-like test) may be given at the beginning of the tests, in order to obtain attention, cooperation and understanding of the tests (questions/answers) from the subject, and above all to make sure that the subject is not stressed during the visual examination.
[0208] Although representative methods, apparatus and set of images have been described in detail herein, those skilled in the art will recognize that various substitutions and modifications may be made without departing from the scope of what is described and defined by the appended claims.