METHOD FOR QUANTIFYING OCULAR DOMINANCE OF A SUBJECT AND APPARATUS FOR IMPLEMENTING A METHOD FOR QUANTIFYING OCULAR DOMINANCE OF A SUBJECT

20230036885 · 2023-02-02

Abstract

An apparatus for quantifying ocular dominance of a subject, including at least one display for providing a first image representing a first target to a first eye of the subject, and for providing a second image representing a second target to a second eye of the subject, and a control unit to control the at least one display.

Claims

1. An apparatus for quantifying ocular dominance of a subject comprising: at least one display for providing a first image representing a first target to a first eye of the subject, and for providing a second image representing a second target to a second eye of the subject, and a control unit to control the at least one display, wherein the first image and the second image are such that the first target on the first image has an identical position, an identical orientation, an identical size and an identical shape to the second target on the second image; the first target comprises n points and the second target comprises n points, where n≥2, 1≤i≤n and 1≤j≤n; each point of the first target matches with a point of the second target, where PLi has the same position in the first image as PRi in the second image for 1≤i≤n; to each point of the first target and to each point of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target; the feature values of at least two points of the first target differ; and for each of the n points of the first and second targets VLi+VRi=VLj+VRj for any i and j.

2. The apparatus according to claim 1, further comprising a first optical unit and a second optical unit respectively in front of the first eye and in front of the second eye of the subject, and a power modulator to change the optical power of the optical units in front of each eye, the power modulator being controlled by the control unit.

3. The apparatus according to claim 1, wherein the feature values of the target are the luminosity or the color.

4. The apparatus according to claim 1, wherein the shape of the targets is: an element grid comprising at least four elements, the at least four elements having the same shape and the same size, an element line comprising at least three elements, the at least three elements having the same shape and the same size, an element column comprising at least three elements, the at least three elements having the same shape and the same size, at least two fringes, letter(s) or optotype(s) or figure(s).

5. The apparatus according to claim 1, further comprising an adaptive algorithm executed by the control unit, the adaptive algorithm being configured to accept a report describing how the subject sees when presented the first image and the second image, and to calculate adjustments of the feature values of the first and/or second target on the first image and the second image to be presented in a next iteration of the first image and the second image according to the report, a target generating component configured to provide the next iteration of the first image and the second image to the subject.

6. The apparatus according to claim 1, further comprising an adaptive algorithm executed by the control unit, the adaptive algorithm being configured to accept a report describing how the subject sees when presented the first image and the second image, and to calculate adjustments of the optical power of the optical units in a next iteration of the first image and of the second image according to the report.

7. A refractometer comprising an apparatus for quantifying ocular dominance of a subject according to claim 1.

8. A set of images for quantifying the ocular dominance of a subject, comprising a first image representing a first target and a second image representing a second target, wherein: the first image and the second image are such that the first target on the first image has an identical position, an identical orientation, an identical size and an identical shape to the second target on the second image; the first target comprises n points and the second target comprises n points, where n≥2, 1≤i≤n and 1≤j≤n; each point of the first target matches with a point of the second target, where PLi has the same position in the first image as PRi in the second image for 1≤i≤n; to each point of the first target and to each point of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target; the feature values of at least two points of the first target differ; and for each of the n points of the first and second targets VLi+VRi=VLj+VRj for any i and j.

9. A method for quantifying ocular dominance of a subject comprising providing a first image to a first eye of the subject, the first image representing a first target, providing a second image to a second eye of the subject, the second image representing a second target, wherein the first image and the second image are such that the first target on the first image has an identical position, an identical orientation, an identical size and an identical shape to the second target on the second image; the first target comprises n points and the second target comprises n points, where n≥2, 1≤i≤n and 1≤j≤n; each point of the first target matches with a point on the second target, where PLi has the same position in the first image as PRi in the second image for 1≤i≤n; to each point of the first target and to each point of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target; the feature values of at least two points of the first target differ; and for each of the n points of the first and second targets VLi+VRi=VLj+VRj for any i and j; checking that the subject sees a fused image from the first image and the second image, the fused image comprising a fused target with feature values; generating a first report describing the feature values of the fused image; determining which eye is the dominant eye of the subject based on the report.

10. The method according to claim 9, further comprising, after determining which eye is the dominant eye of the subject: calculating adjustments to the feature values of the first and/or second target on the first image and the second image to be presented in a next iteration of the first image and the second image according to the report; providing the next iteration of the first image and the second image with the adjusted feature values; generating a second report describing how the subject sees the feature values of the fused target from the next iteration of the first image and of the second image; performing the preceding steps until the subject indicates that, according to his or her perception, the feature values of the fused target of the fused image are constant at each point of the fused target of the fused image; quantifying ocular dominance of the subject based on the reports of the subject.

11. The method according to claim 9, wherein the shape of the targets is a set of at least three elements, said at least three elements having the same shape and the same size, wherein the feature values of the target are the luminosity, and wherein the reports describe the location of the brightest and/or darkest element of the fused target of the fused image.

12. The method according to claim 9, wherein the target is a set of at least three elements, said at least three elements having the same shape and the same size, wherein the feature values of the target are the colors of the target,
VLi+VRi=VLj+VRj meaning that VLi corresponds to a first color and VLj corresponds to a second color which is the complementary color of the first color, wherein the reports describe the location of the colors on the fused target of the fused image.

13. The method according to claim 9, wherein the fused image is a 3-D stereoscopic image.

14. A method for adjusting a binocular balance of a subject comprising: providing a first image to a first eye of the subject, the first image representing a first target, providing a second image to a second eye of the subject, the second image representing a second target, wherein the first image and the second image are such that the first target on the first image has an identical position, an identical orientation, an identical size and an identical shape to the second target on the second image; the first target comprises n points and the second target comprises n points, where n≥2, 1≤i≤n and 1≤j≤n; each point of the first target matches with a point on the second target, where PLi has the same position in the first image as PRi in the second image for 1≤i≤n; to each point of the first target and to each point of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target; the feature values of at least two points of the first target differ; and for each of the n points of the first and second targets VLi+VRi=VLj+VRj for any i and j; checking that the subject sees a fused image from the first image and the second image, the fused image comprising a fused target with feature values; generating a report describing the feature values of the fused image; determining which eye is the dominant eye of the subject based on the report; providing a correction to the dominant eye of the subject by adjusting a power lens in front of the first eye and/or the second eye until the feature values of the fused image seem constant for the subject.

15. The method for adjusting a binocular balance of a subject according to claim 14, further comprising measuring the refraction of each eye of the subject; providing a correction based on the measured refraction by adjusting a power lens in front of the first eye and/or the second eye.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0079] The following description with reference to the accompanying drawings will make it clear what the invention consists of and how it can be achieved. The invention is not limited to the embodiment(s) illustrated in the drawings. Accordingly, it should be understood that where features mentioned in the claims are followed by reference signs, such signs are included solely for the purpose of enhancing the intelligibility of the claims and are in no way limiting on the scope of the claims.

[0080] In the accompanying drawings:

[0081] FIG. 1 represents an apparatus for quantifying ocular dominance of a subject according to one example of the present description;

[0082] FIGS. 2A, 2B and 2C represent schematically a pair of images according to one embodiment of the present description, comprising a first image and a second image, to be provided respectively to the right eye and the left eye of the subject by the apparatus of FIG. 1, and an image which is the sum of the first and the second image;

[0083] FIGS. 3A, 3B and 3C represent three images according to an embodiment of the present description comprising a first image and a second image to be provided respectively to the right eye and the left eye of the subject by the apparatus of FIG. 1, and an image which is the sum of the first and the second image;

[0084] FIGS. 4A, 4B and 4C represent three images according to an embodiment of the present description comprising a first image and a second image to be provided respectively to the right eye and the left eye of the subject by the apparatus of FIG. 1, and an image which is the sum of the first and the second image;

[0085] FIGS. 5A, 5B and 5C represent three images according to an embodiment of the present description comprising a first image and a second image to be provided respectively to the right eye and the left eye of the subject by the apparatus of FIG. 1, and an image which is the sum of the first and the second image;

[0086] FIGS. 6A, 6B and 6C represent three images according to an embodiment of the present description comprising a first image and a second image to be provided respectively to the right eye and the left eye of the subject by the apparatus of FIG. 1, and an image which is the sum of the first and the second image;

[0087] FIGS. 7A, 7B and 7C represent three images according to an embodiment of the present description comprising a first image and a second image to be provided respectively to the right eye and the left eye of the subject by the apparatus of FIG. 1, and an image which is the sum of the first and the second image;

[0088] FIGS. 8A, 8B and 8C represent, respectively, some steps of a method for quantifying ocular dominance of a subject according to an embodiment of the present description, some steps of a method for quantifying ocular dominance of a subject according to another embodiment of the present description, and some steps of a method for adjusting a binocular balance according to an embodiment of the present description.

DETAILED DESCRIPTION OF EMBODIMENTS

[0089] FIG. 1 represents schematically, from above, the main elements of an apparatus 1 for quantifying ocular dominance of a subject 4, in a binocular manner, that is, while the subject 4 has both eyes open and unobstructed.

[0090] The apparatus comprises a display 7, such as an image display system, for providing a first image 20L, 30L representing a first target 22L, 32L to a first eye 2 of the subject 4, and for providing a second image 20R, 30R representing a second target 22R, 32R to a second eye 3 of the subject 4. The first image and the second image may be provided to the subject 4 at the same time, or at different times such that the subject has the perception of seeing the first image and the second image at the same time.

[0091] The first image 20L, 30L may be seen by the first eye 2 of the subject through a first optical unit 5 such as a set of lenses, while the second image 20R, 30R may be seen by the second eye 3 of the subject 4 through a second optical unit 6 such as a set of lenses.

[0092] In the embodiment of FIG. 1, each of the first and second optical units 5, 6 comprises a lens, a mirror, or a set of such optical components, that has adjustable optical power features. For instance, the lens may comprise a deformable liquid lens having an adjustable shape. Alternatively, the optical unit may comprise a set of non-deformable lenses having different optical powers, and a mechanical system that enables selection of some of these lenses and grouping them to form the set of lenses through which the subject 4 can look. In this last case, to adjust the optical power of the set of lenses, other lenses stored in the optical unit replace one or several lenses of the set of lenses. Thus, in order to change the optical power of the optical unit 5, 6 in front of each eye 2, 3, the apparatus may comprise a power modulator; the power modulator may be controlled manually or by the control unit 8.

[0093] Each of these optical units 5, 6 is intended to be placed in front of one of the eyes 2, 3 of the subject, close to this eye (not further than five centimeters, in practice), so that this eye 2, 3 can see a screen 70 of the display 7 through the lens, through the set of lenses, or by reflection onto a mirror of the optical unit 5, 6.

[0094] Alternatively, the subject may see the display directly without the optical unit.

[0095] The apparatus is configured to enable ocular dominance quantification at various distances (near vision, far vision and/or intermediate vision) and/or for various eye gaze directions (for example, a natural eye gaze direction lowered for reading, or a horizontal eye gaze direction for far vision). The screen 70 is located at a distance from the subject comprised between 25 cm (for near vision) and infinity when using a specific imaging system (not represented), such as a Badal system, or, if no imaging system is used (or a plane mirror is used), up to about 8 meters in practice. The imaging system may also be similar to the one disclosed in EP 3 298 952, which allows the combination of a first image provided by a screen (possibly constituted of one or more peripheral images) with a second image provided by an imaging module (possibly constituted of one or more central images), both first and second images possibly being imaged at variable distances for the subject's eye.

[0096] The lens, the set of lenses, or the set of lenses and mirrors of each of the first and second optical units 5, 6 has an overall spherical power S (spherical optical power, expressed for instance in diopters). The cylindrical components of its refractive power are those of an equivalent cylindrical lens that has a cylindrical power C (expressed for instance in diopters) and whose cylinder has an orientation represented by an angle α. Each of the first and second refraction corrections, provided by the corresponding optical unit 5, 6, may be characterized by the values of these three refractive power parameters S, C and α. This refractive correction could equally be characterized by the values of any other set of parameters representing the above-mentioned refractive power features of the optical unit 5, 6, such as the triplet {M, J0, J45}, where the equivalent sphere M is equal to the sphere S plus half of the cylinder C (M=S+C/2), and where J0=C/2*cos(2α) and J45=C/2*sin(2α) are the refractive powers of two Jackson crossed-cylinder lenses representative of the cylindrical refractive power features of the lens or of the set of lenses of the optical unit 5, 6.
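As an illustration, the conversion from {S, C, α} to the power-vector triplet {M, J0, J45} defined above can be sketched as follows (a minimal sketch; the function name is illustrative and not part of the present description):

```python
import math

def sca_to_power_vector(S, C, alpha_deg):
    """Convert a sphere/cylinder/axis correction {S, C, alpha} into the
    power-vector triplet {M, J0, J45} (powers in diopters, axis in
    degrees), using the relations given above."""
    a = math.radians(alpha_deg)
    M = S + C / 2.0                      # equivalent sphere: M = S + C/2
    J0 = (C / 2.0) * math.cos(2.0 * a)   # Jackson crossed cylinder, 0/90 deg
    J45 = (C / 2.0) * math.sin(2.0 * a)  # Jackson crossed cylinder, 45/135 deg
    return M, J0, J45
```

For instance, a correction S=−1.00 D, C=−0.50 D, α=0° gives M=−1.25 D, J0=−0.25 D and J45=0 D.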

[0097] According to an embodiment of the present description, the lens or the set of lenses of the first and second optical units 5, 6 may be blurring lenses which are spherical, in other words C=0.

[0098] Turning now to the display 7, the display may comprise a screen 70.

[0099] The whole extent of the screen 70 may be seen through each of the first and second optical units 5, 6.

[0100] The display 7 may be realized by means of a liquid-crystal display screen 70 that is able to display the first image 20L, 30L with a first polarization, and, at the same time, to display the second image 20R, 30R with a second polarization. The first and second polarizations are orthogonal to each other. For instance, the first and second polarizations are both rectilinear and perpendicular to each other. Or, similarly, the first polarization is a left-hand circular polarization while the second polarization is a right-hand circular polarization.

[0101] The first optical unit 5 in front of the first eye may comprise a first polarizing filter that filters the light coming from the image display system 7. The first polarizing filter filters out the second polarization, and lets the first polarization pass through so that it can reach the first eye 2 of the subject. So, through the first polarizing filter, the first eye 2 of the subject can see the first image 20L, 30L but not the second image 20R, 30R.

[0102] Similarly, the second optical unit in front of the second eye may comprise a second polarizing filter that filters the light coming from the display 7. The second polarizing filter filters out the first polarization, and lets the second polarization pass through so that it can reach the second eye 3 of the subject.

[0103] The display may use any other separation technique, such as "active" separation, in which each test image is displayed alternately at a high frequency while a synchronized electronic shutter blocks the eye to which the image should not be addressed. The separation system could also use chromatic separation, with chromatic filters both on the display and on the eyes, in which each side/eye has a different chromatic filter, the filters blocking each other (for example red and green filters).

[0104] The first and second images (as represented in FIGS. 2A and 2B, for instance) coincide with each other (their respective frames coincide with each other) on the screen 70. They both fill the same zone on this screen.

[0105] Here, the screen 70 may fill a part of the subject's field of binocular view that is at least 5 degrees wide, or even at least 10 degrees wide.

[0106] In alternative embodiments, the display may be implemented by means of a reflective, passive screen (such as an aluminum-foil screen) and one or several projectors for projecting onto this screen the first image, with the first polarization, and the second image, with the second polarization, the first and second images being superimposed to each other, on the screen.

[0107] Alternatively, the apparatus may comprise two displays. According to one embodiment, the first image is displayed on the first display and the second image is displayed on the second display, for instance using a head-up virtual reality device.

[0108] Here, the screen of the first display and the screen of the second display may fill a part of the subject's field of the monocular or binocular view that is at least 5 degrees wide, or even at least 10 degrees wide.

[0109] In alternative embodiments, the first and second displays may be achieved, for instance, by means of first and second Badal-like systems, placed respectively in front of the first eye and in front of the second eye of the subject. Each of these Badal-like systems would comprise at least one lens, and a displacement system to modify the length of an optical path that joins this lens to the display screen considered, in order to form an image of this display screen at an adjustable distance from the eye of the subject.

[0110] In any case, the at least one display is controlled by a control unit 8 of the apparatus 1.

[0111] The control unit 8, which may comprise at least one processor and at least one non-volatile memory, may be programmed to control the at least one display, to vary/adjust the feature values of the first and/or the second target, and/or to control the power modulator in order to change the optical power of the optical units 5, 6.

[0112] As presented in detail below and illustrated in FIGS. 2A, 2B and 2C, the at least one display 7 provides a first image 20L, 30L representing a first target 22L, 32L to a first eye 2 of the subject 4, and provides a second image 20R, 30R representing a second target 22R, 32R to a second eye 3 of the subject.

[0113] The first image 20L, 30L and the second image 20R, 30R are such that the first target 22L, 32L on the first image 20L, 30L has an identical position, an identical orientation, an identical size and an identical shape to the second target 22R, 32R on the second image 20R, 30R.

[0114] The first target 22L, 32L comprises n points (PL1, . . . , PLi, . . . , PLj, . . . , PLn) and the second target 22R, 32R comprises n points (PR1, . . . , PRi, . . . , PRj, . . . , PRn), where n≥2, 1≤i≤n and 1≤j≤n.

[0115] Each point PLi of the first target 22L, 32L matches with a point PRi of the second target 22R, 32R, where PLi has the same position in the first image 20L, 30L as PRi in the second image 20R, 30R, for 1≤i≤n. To each point PLi of the first target and to each point PRi of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target. The feature values of at least two points (PLi, PLj) of the first target differ; and for each of the n points (PLi, PRi) of the first and second targets


VLi+VRi=VLj+VRj for any i and j.

[0116] Consequently, the feature values VRi, VRj of the two points (PRi, PRj) of the second target that match with the two points (PLi, PLj) of the first target also differ.

[0117] By “identical”, we mean that a level of similarity in position, orientation, size and shape between the first target and the second target is higher than a certain threshold.

[0118] It is noted however that, alternatively, the first and second target could be very similar from each other, yet not completely identical for example to enable a 3-D stereoscopic rendering of the scene represented. Still, in such a case, the first and second target would be similar enough that a level of similarity between them is higher than a given threshold.

[0119] This level of similarity could for instance be equal to a normalized correlation product between the first target and the second target, that is to say equal to the correlation product between them, divided by the square root of the product of the autocorrelation of the first target by the autocorrelation of the second target. In such a case, the level of similarity threshold mentioned above could be equal to 0.8, for instance.

[0120] The level of similarity could also be defined, between two targets similar in size/shape, as an angular deviation of less than 6° when observed by a subject at far vision distance, or as a difference of less than +/−1 diopter.

[0121] More generally, the level of similarity threshold could be equal to 0.8 times a reference level of similarity, this reference level of similarity being a level of similarity between the first target and the first image itself, computed in the same manner as the level of similarity between the first and second targets (except that it concerns the first target only).

[0122] Alternatively, the level of similarity threshold could be equal to 10 times a level of similarity computed between the first target and a random image.
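The normalized correlation product defined in paragraph [0119] can be sketched numerically as follows (a minimal sketch over flattened pixel values; the function name is illustrative):

```python
import math

def level_of_similarity(target_a, target_b):
    """Normalized correlation product between two targets, given as flat
    sequences of pixel values: the correlation product of A and B divided
    by the square root of the product of their autocorrelations."""
    cross = sum(a * b for a, b in zip(target_a, target_b))
    auto_a = sum(a * a for a in target_a)
    auto_b = sum(b * b for b in target_b)
    return cross / math.sqrt(auto_a * auto_b)
```

Two identical targets score 1.0; the example threshold of 0.8 mentioned above would then accept pairs scoring at least 0.8.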

[0123] The admissible range of the level of similarity can be defined empirically by showing a subject successive combinations of two images: the same first reference image paired with different second images, each second image differing from the first reference image and from the other second images, and each defining, with the first reference image, a particular level of similarity. The lower limit of the admissible range of the level of similarity corresponds to the highest level of similarity at which a subject cannot perceive a 3-D stereoscopic rendering of the scene represented. The upper limit of the admissible range corresponds to the lowest level of similarity at which a subject complains of double vision or suppression.

[0124] The feature value may be the luminosity or the color. In the case of luminosity, the feature value of the target may correspond to a level of grey, or to an intensity or an amplitude for the same color.

[0125] The feature values of at least two points (PLi, PLj) of the first target differ. The difference between the feature values defines a contrast. Advantageously, the feature values of at least three points of the first target differ, making it easier to perceive the contrast or variation between the points if dominance is unbalanced for the subject.

[0126] FIG. 2A illustrates the first image 20L representing a first target 22L. FIG. 2B illustrates the second image 20R representing a second target 22R. The first target 22L and the second target 22R have an identical position, an identical orientation, an identical size and an identical shape on the first image and on the second image. Indeed, the first target 22L and the second target 22R are identical element lines, each comprising four circular elements 24 having the same size, the element lines being oriented horizontally and positioned respectively in the middle of each image.

[0127] In FIGS. 2A and 2B, the first image and the second image comprise n points and the feature values are luminosities. The first image comprises the points PLi and PLj, which have respectively the feature values VLi and VLj. VLi and VLj correspond to different levels of grey; in particular, in FIG. 2A, PLi is bright and PLj is dark. The second image comprises the points PRi and PRj, which match respectively with PLi and PLj. PRi and PRj have respectively the feature values VRi and VRj; VRi and VRj correspond to different levels of grey. In particular, in FIG. 2B, inversely to the first image of FIG. 2A, PRj is bright and PRi is dark.

[0128] Inside each element 24, the feature value may be constant as illustrated, for example, on FIGS. 2A, 2B and 2C.

[0129] FIG. 2C shows an image 20S representing the summed-up target 22S, obtained by summing the feature values VL of the first target 22L with the feature values VR of the second target 22R for each of the n points PLi, PRi of the first and second targets; in other words, at each point PSi of the summed-up target 22S, the feature value VSi=VLi+VRi. As shown in FIG. 2C, the level of grey is the same at each point of the summed-up target 22S:


VLi+VRi=VLj+VRj for any i and j.
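The luminosity example of FIGS. 2A to 2C can be sketched numerically: the first target assigns each element a grey level, and the second target the complementary level, so that the point-by-point sum is constant (the grey levels and the function name below are illustrative assumptions):

```python
def make_complementary_targets(levels, total=255):
    """Given the grey levels of the n elements of the first target, build
    the second target so that the point-by-point sum VLi + VRi equals the
    same constant everywhere, as required by VLi+VRi = VLj+VRj."""
    first = list(levels)
    second = [total - v for v in first]
    return first, second

# Four elements graded from bright to dark on the first image, inverted
# on the second image, as in FIGS. 2A and 2B.
left_target, right_target = make_complementary_targets([255, 170, 85, 0])
```

Summing the two targets point by point then yields the uniform summed-up target of FIG. 2C.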

[0130] Thus, as illustrated by the diagrams of FIGS. 8A, 8B and 8C, according to three embodiments of the present disclosure, the first image representing the first target is provided (step 81) to the first eye of the subject, and the second image representing the second target is provided (step 81) to the second eye of the subject. Thanks to the display, as explained before, the subject sees a fused image, from the first image and the second image, representing a fused target from the first target and the second target. However, to be sure, the methods may comprise a step 82 of checking that the subject sees a fused target.

[0131] The checking step may be implicit, or made explicit by asking the subject.

[0132] If the subject does not see a fused image representing a fused target from the first image and the second image, the position (between the first image and the second image or between the subject and the images) and/or the size of the first image and the second image are adjusted and/or the feature values are varied on one or both images (to adjust the balance) such that the subject sees a fused image.

[0133] If there is no dominant eye between the first eye and the second eye, the fused target of the fused image perceived by the subject may correspond to the summed-up target 22S, 32S, i.e., for the subject, all the points of the fused target seem to have a constant feature value.

[0134] Alternatively, if there is a dominant eye between the first eye and the second eye, the fused target perceived by the subject may not correspond to the summed-up target, and the subject may see the fused target with points having different feature values. For example:

[0135] if the first eye is dominant, the subject sees the point PSi less dark than the point PSj; or

[0136] if the second eye is dominant, the subject sees the point PSi darker than the point PSj.
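The decision rule of paragraphs [0134] to [0136] can be sketched as follows for the luminosity configuration of FIGS. 2A to 2C, where PLi is bright on the first image and PRj is bright on the second image (a sketch; the function name and tolerance parameter are illustrative assumptions):

```python
def dominant_eye_from_report(brightness_PSi, brightness_PSj, tol=0.0):
    """Interpret the subject's report of perceived brightness at the
    fused points PSi and PSj: PSi perceived brighter than PSj suggests
    first-eye dominance, PSi darker suggests second-eye dominance, and
    equal brightness suggests no dominant eye."""
    diff = brightness_PSi - brightness_PSj
    if abs(diff) <= tol:
        return None  # balanced perception: no dominant eye detected
    return "first" if diff > 0 else "second"
```

For instance, a report that PSi looks brighter than PSj would point to the first eye as dominant.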

[0137] Alternatively, the feature values may be the colors of the first/second target. In this case, "VLi+VRi=VLj+VRj" means that VLi corresponds to a first color and VRi to a second color which is the complementary color of the first color, and similarly VLj corresponds to another first color and VRj to another second color which is the complementary color of that other first color. For example, VLi is green, VRi is red, VLj is yellow and VRj is purple. In another example, VLi is green, VRi is red, VLj is red and VRj is green. Consequently:

[0138] if there is no dominant eye, the fused target of the fused image is uniform, in other words the subject perceives the same color on all of the fused target;

[0139] if the first eye is dominant, the subject sees the point PSi on the fused target with a color closer to the color of the point PLi; or

[0140] if the second eye is dominant, the subject sees the point PSi on the fused target with a color closer to the color of the point PRi.

[0141] Thus, in the case where VLi is green, VRi is red, VLj is red and VRj is green:

[0142] if there is no dominant eye, the fused target of the fused image is uniform, in other words the subject perceives the whole fused target in grey;

[0143] if the first eye is dominant, the subject sees the point PSi with a color closer to green and PSj with a color closer to red; or

[0144] if the second eye is dominant, the subject sees the point PSj with a color closer to green and PSi with a color closer to red.

[0145] Thus, according to an embodiment of the present disclosure, based on the subject's perception of the fused target, a report is generated (step 83 in FIGS. 8A, 8B, 8C) describing the subject's perception of the fused target of the fused image, and accordingly the dominant eye of the subject is determined (step 84 in FIGS. 8A, 8B, 8C).

[0146] According to an embodiment of the present disclosure, after generating the report, the feature values VLi, VRi of the first and/or the second target 22L, 32L, 22R, 32R may be adjusted (step 85 in FIG. 8B) manually or by means of a dimmer switch, a modulator or a control unit, preferably a control unit configured to vary/adjust the feature values of the first and/or the second target.

[0147] According to an embodiment of the present disclosure, for each n points (PLi, PRi) of the first and second target, the adjusted feature values VLi′, VLj′ of the first target and the adjusted feature values VRi′, VRj′ of the second target are such that


VLi′+VRi′=VLj′+VRj′ for any i and j.

[0148] This embodiment allows retrying the measurement with a different balance while respecting the initial condition.

[0149] According to another embodiment of the present disclosure,


VLi′+VRi′=VLj′+VRj′=VLi+VRi.

[0150] This embodiment allows keeping the global contrast constant over time, which makes it easier for the subject to evaluate changes.
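The adjustment rule of paragraphs [0147] to [0150] can be sketched as follows. This is a minimal illustration under assumptions not stated in the disclosure: greyscale feature values in [0, 1] and a single balance shift `delta` applied symmetrically, so that VLi′ + VRi′ = VLi + VRi at every point.

```python
def adjust_balance(VL, VR, delta):
    """Shift the left/right balance by delta while keeping the
    per-point sum VLi + VRi (the global contrast) constant."""
    VL_adj = [v + delta for v in VL]
    VR_adj = [v - delta for v in VR]
    return VL_adj, VR_adj

# Greyscale feature values for matched points of the two targets;
# every matched pair sums to 1.0, as required by VLi + VRi = VLj + VRj.
VL = [0.3, 0.7]
VR = [0.7, 0.3]

VL2, VR2 = adjust_balance(VL, VR, 0.1)

# The initial condition is preserved after adjustment:
# VLi' + VRi' = VLj' + VRj' = VLi + VRi = 1.0 for all i, j.
assert all(abs(l + r - 1.0) < 1e-9 for l, r in zip(VL2, VR2))
```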

[0151] According to an embodiment of the present disclosure, the next step is to provide (step 86 in FIG. 8B) a next iteration of the first image 20L, 30L with the adjusted/varied feature values VLi′, VLj′ to the first eye 3 and a next iteration of the second image 20R, 30R with the adjusted/varied feature values VRi′, VRj′ to the second eye 2.

[0152] According to an embodiment of the present disclosure, a report describing the feature values of the fused target from the next iteration of the first image and of the second image is generated (step 87 in FIG. 8B); in other words, a report describing how the subject sees the feature values of the fused target from the next iteration of the first image and of the second image is generated.

[0153] One embodiment to quantify the ocular dominance is to perform the preceding steps several times (step 88 in FIG. 8B) until the subject indicates that, according to his/her perception, the feature values of the target of the fused image are constant at each point of the target of the fused image. Finally, from the adjusted feature values of the last iteration for which the subject perceives constant feature values at each point of the fused target, the ocular dominance of the subject is quantified (step 89 in FIG. 8B). For example, the ocular dominance is quantified by taking the ratio of a feature value at a point of the target of the first image presented to the dominant eye to the feature value at the matched point of the target of the second image presented to the other eye, e.g. if the dominant eye is the first eye, the ratio is VLi′/VRi′. With this ratio, the quality/degree of dominance, i.e. strong/weak dominance, may also be evaluated.
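The quantification step of paragraph [0153] can be sketched as the ratio VLi′/VRi′ taken at the last, balanced iteration. A hypothetical sketch: the 1.5 threshold separating strong from weak dominance is an assumed example value, not specified in the disclosure.

```python
def quantify_dominance(vl_final, vr_final, strong_threshold=1.5):
    """Quantify ocular dominance from the adjusted feature values of
    the last iteration (the one the subject perceives as balanced).

    Returns the ratio VLi'/VRi' and a qualitative label. The 1.5
    threshold separating strong from weak dominance is an assumed
    example value, not given in the disclosure."""
    ratio = vl_final / vr_final
    degree = max(ratio, 1.0 / ratio)   # symmetric for either eye
    label = "strong" if degree >= strong_threshold else "weak"
    return ratio, label

# Example: balance was reached with the first (left) eye's feature
# value twice that presented to the other eye.
ratio, label = quantify_dominance(0.6, 0.3)
assert ratio == 2.0 and label == "strong"
```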

[0154] Alternatively, another embodiment to quantify the ocular dominance is by adjusting binocular balance as illustrated in FIG. 8C. Thus, after determining 84 the dominant eye of the subject, a correction is provided 100 to the first eye and/or the second eye with, for example, a lens, preferably to the dominant eye with, for example, a blurring lens. Advantageously, correcting the dominant eye with a blurring lens, instead of increasing the optical power of the non-dominant eye, avoids correcting the accommodation rather than the ocular dominance.

[0155] As in the preceding embodiment, the appropriate correction may be obtained by iteration (not illustrated in FIG. 8C). In other words, a first correction is provided to the first eye and/or the second eye. Then, according to an embodiment of the present disclosure, a report describing the feature values of the fused target, or in other words how the subject sees the feature values of the fused target through the lens, is generated. Then, the preceding steps are repeated several times with lenses of different optical power or different blurring lenses until the subject indicates that, according to his/her perception, the feature values of the target of the fused image are constant at each point of the target of the fused image. Finally, from the optical power or the blurring lens, the ocular dominance of the subject is quantified and/or the binocular balance is adjusted.
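The lens-based iteration of paragraph [0155] can be sketched likewise. A hypothetical illustration: the 0.25 D step size, the 3.0 D search range, and the report encoding are assumptions, not values from the disclosure.

```python
def find_balancing_blur(perceives_balanced, step=0.25, max_blur=3.0):
    """Increase the blurring-lens power in front of the dominant eye
    until the subject reports a uniform fused target.

    `perceives_balanced(blur)` stands for the subject's report at a
    given added blur (in diopters); the returned power then serves as
    a quantification of the ocular dominance."""
    blur = 0.0
    while blur <= max_blur:
        if perceives_balanced(blur):
            return blur
        blur += step
    return None  # no balance found within the tested range

# Simulated subject who reports balance once 0.75 D of blur is added
# in front of the dominant eye.
assert find_balancing_blur(lambda b: b >= 0.75) == 0.75
```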

[0156] The change in power of the lens may be directly provided as a quantification value of the dominance. Alternatively, the change in power may qualify the dominance (strong or light, for example). It may be useful to know or give this data to the eye care professional (ECP) to help him/her make a decision on a prescription (for example in case of anisometropia). It may be a decision aid, other than visual acuity.

[0157] According to one embodiment, in order to perform the iteration steps, the control unit may execute an adaptive algorithm. The adaptive algorithm is configured to accept a report describing how the subject 4 sees when presented with the first image 20L, 30L and the second image 20R, 30R, and to calculate adjustments to the feature values of the first and/or second target 22L, 32L, 22R, 32R on the first image 20L, 30L and the second image 20R, 30R to be presented in a next iteration of the first image 20L, 30L and the second image 20R, 30R according to the report. The next iteration of the first image 20L, 30L and the second image 20R, 30R is provided to the subject 4 by a target-generating component.
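The adaptive algorithm of paragraph [0157] can be sketched as a simple staircase. A hypothetical sketch under assumptions not stated in the disclosure: the report is encoded as one of three strings, and a fixed step of 0.05 is applied while keeping the per-point sum VLi + VRi constant.

```python
def adaptive_step(VL, VR, report, step=0.05):
    """One iteration of a hypothetical adaptive algorithm.

    `report` encodes how the subject sees the fused target:
    "first_dominant", "second_dominant" or "balanced". The feature
    values are shifted to compensate the dominant eye, keeping the
    per-point sum VLi + VRi constant."""
    if report == "balanced":
        return VL, VR, True            # converged: quantify now
    sign = -1 if report == "first_dominant" else +1
    VL_next = [v + sign * step for v in VL]
    VR_next = [v - sign * step for v in VR]
    return VL_next, VR_next, False

# Example: the subject reports the first eye dominant, so the first
# image is attenuated (and the second boosted) for the next iteration.
VL, VR, done = adaptive_step([0.3, 0.7], [0.7, 0.3], "first_dominant")
assert not done and abs(VL[0] - 0.25) < 1e-9
```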

[0158] Advantageously, adjusting binocular balance allows determining a pair of ophthalmic lenses adapted to a wearer. For that, the steps may be: [0159] measuring (91 in FIG. 8C) the refraction of each eye of the subject, [0160] providing (92 in FIG. 8C) a correction based on the measured monocular refraction by adjusting a power lens in front of the first eye and/or the second eye,

[0161] Then the preceding steps described for the method for adjusting the binocular balance are performed.

[0162] Alternatively, the adjustment of the binocular balance may occur at the beginning of the refraction process or before the refraction process, in particular if the refraction is a binocular refraction.

[0163] Measuring the refraction of each eye of the subject may be an objective measurement using an auto refractometer or a subjective measurement using a phoropter with monocular steps or binocular refraction.

[0164] Accordingly, one object of the present disclosure is a refractometer comprising an apparatus for quantifying ocular dominance of a subject as described in the present disclosure.

[0165] In the case where the patient does not suffer from hyperopia, advantageously, correcting the dominant eye with a blurring lens, instead of increasing the optical power of the non-dominant eye, also allows decreasing the thickness of the lens by subtracting, at the end, the blurring lens power from the measured monocular refraction and thus decreasing the optical power.

[0166] The targets may be surrounded as illustrated, for example, in FIGS. 2A, 2B, 2C in order to increase the concentration of the subject on the target.

[0167] As explained above in the section presenting a “summary”, making use of such first and second images improves the stability of the binocular vision of the subject 4 and makes the observation of these images more comfortable, avoiding blinking or flickering of the global image perceived by the subject (after fusion), and limiting ocular vergence issues. An ocular dominance method in which such images are provided to the subject can thus be carried out faster, and leads to more accurate results.

[0168] First, second, third, fourth and fifth couples of test images, each comprising a first image 30L and a second image 30R having the characteristics mentioned above, are described below (in reference to FIGS. 3A, 3B, 3C, 4A, 4B, 4C, 5A, 5B, 5C, 6A, 6B, 6C, 7A, 7B, and 7C) in the section entitled “images”.

[0169] According to one embodiment of the apparatus described here, one or several of these couples of images are stored in the memory of the control unit, so that they can be displayed by the display 7 when an ocular dominance method is carried out by means of the apparatus 1 described above. More generally, at least one computer program is stored in the memory of the control unit, this computer program comprising instructions which, when the program is executed by the control unit 8, cause the apparatus 1 to carry out a method having the features presented above (like the methods described in detail below). This computer program comprises data representative of at least one of these couples of images.

Images

[0170] In each exemplary couple of images described below, the first image 30L represents a first target 32L to a first eye 3 of the subject 4, and the second image 30R represents a second target 32R to a second eye of the subject 4.

[0171] In each exemplary couple of images described below, the first image 30L and the second image 30R are such that the first target 32L on the first image 30L has an identical position, an identical orientation, an identical size and an identical shape to the second target 32R on the second image 30R. As explained above for FIGS. 2A, 2B, 2C, the first target 32L comprises n points (PL1, . . . , PLi, . . . PLj, . . . PLn) and the second target (22R, 32R) comprises n points (PR1, . . . , PRi, . . . , PRj, . . . PRn) where n≥2, 1≤i≤n and 1≤j≤n. Each point (PLi) of the first target 32L matches with a point (PRi) of the second target 32R where PLi has the same position in the first image 30L as PRi for 1≤i≤n in the second image 30R.

[0172] One point may be a pixel of the display.

[0173] To each point (PLi) of the first target and to each point (PRi) of the second target corresponds respectively a feature value VLi for the first target and a feature value VRi for the second target. The feature values VLi, VLj of at least two points (PLi, PLj) of the first target differ, and for each of the n points (PLi, PRi) of the first and second target VLi+VRi=VLj+VRj for any i and j.

[0174] Consequently, the feature values VRi, VRj of the two points (PRi, PRj) of the second target which match with the two points (PLi, PLj) of the first target also differ.
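The conditions of paragraphs [0171] to [0174] can be checked programmatically. A minimal sketch, assuming greyscale feature values; the function name and the 1e-9 tolerance are illustrative choices, not from the disclosure.

```python
def is_valid_target_pair(VL, VR):
    """Check the conditions on the matched feature values: the two
    targets have the same number n >= 2 of points, the feature values
    of at least two points of the first target differ, and the
    per-point sum VLi + VRi is the same for all i (so that the
    summed up target is uniform)."""
    if len(VL) != len(VR) or len(VL) < 2:
        return False
    if len(set(VL)) < 2:              # VLi must differ for some i, j
        return False
    sums = [l + r for l, r in zip(VL, VR)]
    return all(abs(s - sums[0]) < 1e-9 for s in sums)

# A valid pair: four matched points, every pair summing to 1.0.
assert is_valid_target_pair([0.2, 0.8, 0.4, 0.6],
                            [0.8, 0.2, 0.6, 0.4])
# Invalid: the per-point sums differ between points.
assert not is_valid_target_pair([0.2, 0.8], [0.8, 0.4])
```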

[0175] FIGS. 2B, 3B, 4B, 5B, 6B, 7B illustrate the image 30S with the summed up target 32S and as observed, the feature values are constant at each point of the summed up target.

[0176] The feature value may be the luminosity as illustrated in FIGS. 2A, 2B, 2C, 3A, 3B, 3C, 4A, 4B, 4C, 6A, 6B, 6C, 7A, 7B, 7C. Thus the feature value of the target may correspond to a level of grey or to an intensity or an amplitude for a same color. This embodiment presents the advantage of avoiding issues related to color blindness, because the different feature values do not correspond to different colors. [0177] Alternatively, the feature value may be the color (not shown).

[0178] Each of the first and second images to be displayed may comprise: [0179] a central image with a target, and optionally [0180] a peripheral image that surrounds the central image and contributes usefully to a well-balanced fusion process between the left and right visual pathways, for the subject.

[0181] The images may thus be composite images and, besides, they comprise a peripheral image that is all the more stabilizing as the part of the field of view it occupies is wide. It is thus very useful to use a wide screen, such as the one described above, to provide enough room to accommodate such composite images.

[0182] FIGS. 2A, 2B, 2C and 6A, 6B, 6C show images with a uniform peripheral image. Alternatively, FIGS. 3A, 3B, 3C, 4A, 4B, 4C, 5A, 5B, 5C and 7A, 7B, 7C present a rich and diversified content of the peripheral image.

[0183] Advantageously, the rich and diversified visual content of the first peripheral image contributes to the stabilizing effect of this image. Indeed, it provides an abundant visual support, identical or similar to the one present in the second peripheral image, which enables a very stable and well-balanced fusion between the left and right visual pathways of the subject. It helps focusing and fusion because the 3D scene may bring elements of perception for monocular and binocular distances, which enable the visual system to stabilize. Besides, it captures the attention of the subject, from a visual point of view, and helps keep the subject focused on the test images provided to him/her.

[0184] According to an embodiment, the peripheral image may be: [0185] abundant scenes, [0186] with 3D (perspectives), [0187] natural scenes, [0188] stereoscopy (plus or minus disparities).

[0189] The shape of the targets 22L, 32L, 22R, 32R may be: [0190] an element grid comprising at least four elements, the at least four elements having the same shape and the same size. For example, FIGS. 3A, 3B, 3C illustrate targets being an element grid comprising a matrix of six by six elements 34. [0191] an element line comprising at least three elements, the at least three elements having the same shape and the same size. For example, FIGS. 2A, 2B, 2C and 7A, 7B, 7C illustrate targets being an element line comprising four elements 24 in FIGS. 2A, 2B, 2C and nine elements 24 in FIGS. 7A, 7B, 7C; the peripheral image is uniform in FIGS. 2A, 2B, 2C and rich in FIGS. 7A, 7B, 7C, representing a house interior. [0192] an element column comprising at least three elements, the at least three elements having the same shape and the same size. For example, FIGS. 6A, 6B, 6C illustrate targets being an element column comprising four elements 34. [0193] at least two fringes. The fringes may be horizontal or vertical. For example, FIGS. 5A, 5B, 5C illustrate targets being a set of nine vertical fringes, which alternate between dark fringes and white fringes. [0194] letter(s) or optotype(s) or figure(s). For example, FIGS. 4A, 4B, 4C illustrate targets being a set of letters on a uniform background.

[0195] The element may be a square, a circle, a star, an animal or any other kind of shape.

[0196] The figure may be a square, a circle, a star, an animal, an object or any other kind of shape.

[0197] Optionally, the target may comprise a uniform background as illustrated in FIGS. 4A, 4B, 4C, or not comprise a uniform background as illustrated for example in FIGS. 3A, 3B, 3C.

[0198] The feature values of the uniform background may be an average of (VLi, VRi) or may be any other constant value.

[0199] Optionally, the central image may comprise the target and a background as illustrated on FIGS. 2A, 2B, 2C, 3A, 3B, 3C, 6A, 6B, 6C, 7A, 7B, 7C with for example a white background in order to highlight the target.

[0200] Besides, in each couple of images described above, instead of being identical, the first and second peripheral images could be such that, when the first and second images are superimposed on each other (with their respective frames coinciding with each other), some elements of the first peripheral image are slightly side-shifted with respect to the corresponding elements of the second peripheral image, to enable a 3-D stereoscopic rendering of the scene represented. More precisely, in such a case, the first peripheral image would represent an actual scene, object, or abstract figure as it would be seen from the position of the first eye 3 of the subject, while the second peripheral image would represent the same scene, object or abstract figure as it would be seen from the position of the second eye 2 of the subject.

[0201] Employing such stereoscopic images is a very efficient way to get rid of the suppression phenomenon described in the preamble. Indeed, with such test images, the subject has a very strong tendency to try to perceive the scene in a 3-dimensional manner, and thus takes into account both the left and right visual pathway in the fusion process (thus eliminating the “suppression phenomenon”), to obtain this 3-dimensional rendering. When such stereoscopic images are employed, the way to compute their level of similarity has to be adapted, to take into account their 3-dimensional nature.

[0202] The examples of FIGS. 2A, 2B, 2C, 3A, 3B, 3C, 6A, 6B, 6C and 7A, 7B, 7C work very similarly because, in these four embodiments, the target is a set of dots positioned horizontally, vertically or in a matrix.

[0203] In the example of FIGS. 4A, 4B, 4C, using letters, the first target 32L has a grey background and darker letters, and the second target 32R has a grey background and lighter letters identical to those of the first target. A right dominant eye will make the user perceive the text at a brighter level (white letters/grey background). A left dominant eye will make the user perceive the text at a darker level (dark text/grey background). Balance will make the text very difficult to perceive.

[0204] In the example of FIGS. 7A, 7B, 7C, the first target 32L is a set of nine vertical fringes, which alternate between dark fringes and white fringes. The second target 32R has an identical shape but has a π phase shift. Optionally, in this case, the spatial frequency may be selected so as to have a low spatial frequency, lower than the eye's resolution (period ≫ 1′ arc). Thus, when binocular vision is balanced, the user will perceive the fringes with minimum contrast. When binocular vision is not balanced, the fringes will be perceived with higher contrast. Optionally, the fringes may be temporally oscillating to improve visibility.
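The fringe targets of paragraph [0204] can be sketched numerically. A minimal illustration under assumptions not stated in the disclosure: square-wave fringes with values in [0, 1], the second target being the first shifted by half a period (a π phase shift), so that the two always sum to a constant.

```python
def fringe(x, period):
    """Square-wave fringe profile: 1.0 (white) on the first half of
    each period, 0.0 (dark) on the second half."""
    return 1.0 if (x % period) < period / 2 else 0.0

def shifted_fringe(x, period):
    """Same profile with a half-period (pi) phase shift."""
    return fringe(x + period / 2, period)

period = 8
samples = range(2 * period)

# The two fringe patterns are complementary at every position, so the
# summed up target is uniform (constant value 1.0): with balanced
# binocular vision the fringes are perceived with minimum contrast.
assert all(fringe(x, period) + shifted_fringe(x, period) == 1.0
           for x in samples)
```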

[0205] Optionally, the target may be surrounded with a peripheral area as illustrated in FIGS. 2A, 2B, 2C, 3A, 3B, 3C, 6A, 6B, 6C and 7A, 7B, 7C in order to increase the subject's concentration and to facilitate the observation of the target.

[0206] Optionally, the feature values inside each element of the target are constant but differ between at least two elements. This embodiment makes it easier for the subject to describe the position of the feature values.

[0207] The method and images used in the apparatus of the present disclosure may use interactive elements or steps. A keyboard or pad may be used to input the answers of the subject or to enable the subject to go back to previous scenes if he/she wishes to. An indicator may be used to display graphically the degree of advancement of the method. Explanations of the test and/or playful storytelling explaining the test and focusing on some objects that will be shown during the test (a treasure-hunt-like test) may be given at the beginning of the tests in order to obtain the attention, cooperation and understanding of the tests (questions/answers) from the subject and, above all, to make sure that the subject is not stressed during the visual examination.

[0208] Although representative methods, apparatus and set of images have been described in detail herein, those skilled in the art will recognize that various substitutions and modifications may be made without departing from the scope of what is described and defined by the appended claims.