Method for determining an optical parameter of a lens

11262273 · 2022-03-01

Abstract

A method implemented by computer means for determining at least one optical parameter of a lens of eyewear adapted for a person, the method comprising: —an image reception step, during which at least a first image and a second image are received, the first image comprising a front view of the face of the person with at least one part of an eye of the person being directly visible, and the second image comprising a front view of the face of the person with said part of the eye of the person being visible through at least part of the lens, and —an optical parameter determination step, during which at least one optical parameter of the lens is determined based on a comparison between said part on the first and the second image.

Claims

1. A method implemented by computer means for determining at least one optical parameter of a lens of eyewear adapted for a person, the method comprising: an image reception step, during which at least a first image and a second image are received, the first and second images are depth maps comprising depth information, the first image and the second image each comprise a front view of a face of the person with at least one part of an eye of the person being visible, and said part of the eye of the person is visible through at least part of the lens on the second image, an optical parameter determination step, during which first and second image data are extracted from respective depth information and compared together to determine at least one optical parameter of the lens.

2. The method according to claim 1, wherein during the image reception step, the first image comprises a front view of the face of the person with at least one part of an eye of the person being directly visible, and the second image comprises a front view of the face of the person with said part of the eye of the person being visible through at least part of the lens.

3. The method according to claim 1, wherein during the image reception step, the first image and the second image both comprise a front view of the face of the person with at least one part of an eye of the person being visible through at least part of the lens, the first image corresponds to a first eye-lens distance between the lens and said part of the eye, the second image corresponds to a second eye-lens distance between the lens and said part of the eye, and the second eye-lens distance is different from the first eye-lens distance.

4. The method according to claim 1, further comprising: a first image acquisition step, during which the first image is acquired by a portable electronic device comprising an image acquisition module, a lens positioning step, during which the lens is positioned relatively to said part of the eye of the person at a position corresponding to the second image, and a second image acquisition step during which the second image is acquired by the portable electronic device comprising the image acquisition module.

5. The method according to claim 4, further comprising prior to the first image acquisition step: a first lens positioning step, during which the lens is positioned relatively to said part of the eye of the person at a position corresponding to the first image.

6. The method according to claim 4, wherein during the image reception step, a third image is received, the third image comprising a side view of the face of the person, wearing the eyewear, wherein said part of the eye of the person is directly visible.

7. The method according to claim 6, wherein during the image reception step, a fourth image is received, the fourth image comprising a side view of the face of the person, wherein the person is wearing the eyewear.

8. The method according to claim 7, further comprising: acquiring a fifth image including a view of the eyewear positioned on a flat surface along with an object with at least one known dimension by a portable electronic device comprising an image acquisition module, acquiring the third image comprising the side view of the face of the person, wearing the eyewear, wherein said part of the eye of the person is directly visible, by the portable electronic device, a fourth image acquisition step for acquiring the fourth image comprising the side view of the face of the person, wherein the person is wearing the eyewear, by the portable electronic device.

9. The method according to claim 1, wherein said part of the eye includes the iris or the pupil.

10. The method according to claim 1, wherein on the first image, an object with at least one known dimension, such as a credit card, is positioned in the same plane as said part of the eye of the person.

11. The method according to claim 10, further including: a scaling step, during which, when the person wears the eyewear, a distance between said part of the eye of the person and the lens, and the distance from which the lens is seen on at least one image received during the image reception step are determined relatively to at least one known dimension, and wherein during the parameter determination step, the at least one optical parameter of the lens is determined based on the distance between said part of the eye of the person and the lens, and the distance from which the lens is seen on at least one image received during the image reception step.

12. The method according to claim 1, wherein on the second image, an object of at least one known dimension, such as a credit card, is positioned in a plane tangent to the lens at a reference point chosen on the front surface of said lens.

13. The method according to claim 1, wherein during the image reception step, a third image is received, the third image comprising a view of the eyewear positioned on a flat surface along with an object with at least one known dimension.

14. A method for ordering a lens of eyewear adapted for a person, comprising: an optical parameter determining step, during which at least one optical parameter of the lens is determined by the method according to claim 1, and an ordering step, during which a lens having the at least one determined optical parameter is ordered.

15. A system comprising at least a receiver, an electronic storage medium, and a processor, the receiver, the electronic storage medium and the processor being configured to communicate one with another, the receiver being able to receive a first and a second image, the first image is a depth map comprising depth information and comprising a front view of the face of a person with part of an eye of the person being directly visible, and the second image is a depth map comprising depth information and comprising a front view of the face of the person with said part of the eye of the person being visible through at least part of a lens, and the electronic storage medium comprising one or more stored sequences of instructions which, when executed by the processor, are able to perform an optical parameter determination step, during which first and second image data are extracted from respective depth information and compared together to determine at least one optical parameter of a lens.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) Non-limiting embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:

(2) FIGS. 1A, 1B and 1C are schematic diagrams of the steps of a method for determining an optical parameter of a lens according to embodiments of the invention;

(3) FIGS. 2A-2E show examples of images received during the reception step of a method according to an embodiment of the invention;

(4) FIGS. 3A-3D show examples of image acquisition steps of a method according to an embodiment of the invention;

(5) FIGS. 4 and 5A-5B show examples of images received during the reception step comprising an object with at least one known dimension, according to an embodiment of the invention;

(6) FIGS. 5C-5D show examples of image acquisition steps, wherein the images show an object with at least one known dimension, according to an embodiment of the invention;

(7) FIG. 6 is a schematic diagram of the steps of a method for ordering a lens of eyewear adapted for a person according to the invention; and

(8) FIG. 7 is a schematic diagram of a system comprising at least a reception unit, an electronic storage medium and a processing unit, the electronic storage medium carrying the instructions of a computer program product able to perform a method according to the invention.

(9) Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figure may be exaggerated relative to other elements to help improve the understanding of the embodiments of the present invention.

DETAILED DESCRIPTION OF THE DRAWINGS

(10) The invention relates to a method implemented by computer means for determining at least one optical parameter of a lens of eyewear adapted for a person.

(11) The at least one optical parameter of the lens may include the type of lens, such as single vision, bifocal or progressive.

(12) The at least one optical parameter of the lens may include at least one parameter of the dioptric function of the lens.

(13) The at least one optical parameter of the lens may include at least one parameter of the optical design of the lens, such as the distribution of optical power on the surface of the lens.

(14) The lens may include a progressive lens and the at least one optical parameter of a lens may include at least one of the following: corridor, far vision and near vision points, sphere, cylinder, cylinder axis, prism base, prism axis, transmittance and color.

(15) The at least one optical parameter of the lens may be determined for a visual reference zone of the lens. The visual reference zone may generally correspond to an upper or a lower zone of the lens.

(16) As illustrated on FIGS. 1A to 1C, the method comprises at least an image reception step S12 and an optical parameter determination step S16.

(17) During the image reception step S12, at least a first image and a second image are received.

(18) By image is understood any image type or image format. Images include two-dimensional images or three-dimensional images such as depth maps generated from stereo images, from light-field images or from video.

(19) The first image and the second image each comprise a front view of the face of the person with at least one part of an eye of the person being visible. Said part of the eye of the person is visible through at least part of the lens at least on the second image.

(20) In some embodiments, on the first image said part 2 of the eye of the person is directly visible, as illustrated on FIG. 2A.

(21) In other embodiments, the first image and the second image both comprise a front view of the face of the person with at least one part 2 of an eye of the person being visible through at least part of the lens 4. The first image corresponds to a first eye-lens distance between the lens 4 and said part 2 of the eye, the second image corresponds to a second eye-lens distance between the lens 4 and said part 2 of the eye, and the second eye-lens distance is different from the first eye-lens distance.

(22) By view is understood a graphical projection of a three-dimensional object onto a planar surface which is referred to in this document as a plan image.

(23) On an image comprising a front view of the face of the person, the positioning of the iris within the eye corresponds to a gaze direction perpendicular to the plan image.

(24) An object, such as said part 2 of the eye of the person, being directly visible on an image is understood as said object being connected to the plan image by a straight line which is not interrupted by any physical obstacle.

(25) Said part 2 of the eye of the person being seen through at least part of the lens 4 is understood as being present on the second image behind said at least part of the lens 4.

(26) Said part 2 of the eye of the person may include the iris or the pupil. A particularly advantageous effect is that the determination of the shape and dimensions of the iris and of the pupil is easy and accurate due to a high contrast difference between the iris and the white of the eye and/or, depending on the color of the iris, between the pupil and the iris.

(27) On the second image, the person may be wearing an eyewear with the lens 4.

(28) Alternatively, the person may be holding an eyewear with the lens 4 at a distance in front of his face. Said distance may for example be measured by any method known to the person skilled in the art. Said distance could be for example equal to the length of the temple of the frame.

(29) An increased distance between said part 2 of the eye of the person and the lens 4 results in an increased deformation of said part 2 of the eye of the person on the second image, compared to the first image. Provided that the increased deformation does not make said part 2 of the eye of the person appear too small on the second image with respect to the resolution of the second image, better accuracy of the method is achieved.

(30) During the optical parameter determination step S16, at least one optical parameter of the lens is determined based on a comparison between said part 2 on the first and the second image.

(31) For example, by comparing the color of said part 2 of the eye on the first and the second image, it is possible to determine optical parameters of the lens related to color.

(32) According to an embodiment, said optical parameter of the lens is determined based on a comparison of the deformation of said part 2 of the eye of a person between the first image and the second image. In the sense of the invention, a deformation is a change in dimensions and/or in shape.
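The patent does not give a formula for converting this deformation into an optical parameter. As a minimal sketch, the standard first-order "power factor" approximation of spectacle magnification, M = 1/(1 − d·P), can be inverted to P = (1 − 1/M)/d; the function name and inputs below are illustrative, not the patent's implementation:

```python
def estimate_lens_power(part_size_direct_mm, part_size_through_lens_mm,
                        eye_lens_distance_m):
    """Estimate lens power P (diopters) from the apparent magnification of a
    part of the eye (e.g. the iris) seen through the lens.

    First-order "power factor" model: M = 1 / (1 - d * P), hence
    P = (1 - 1/M) / d, with d the eye-lens distance in metres.
    """
    magnification = part_size_through_lens_mm / part_size_direct_mm
    return (1.0 - 1.0 / magnification) / eye_lens_distance_m
```

A plus lens magnifies (M > 1, P > 0) and a minus lens minifies (M < 1, P < 0); for example, an iris measuring 11.7 mm directly and 12.2 mm through the lens at a 12 mm eye-lens distance yields roughly +3.4 D under this model.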

(33) In the case where the first image comprises a direct view of the part 2 of the eye of the user, the algorithm used to perform the optical parameter determination step S16 requires values of the following distances: the distance between said part 2 of the eye and the image acquisition module used to acquire the first image while the first image was acquired, and the distance between said part 2 of the eye and the lens 4 while the second image was acquired.

(34) In the case where both the first and the second image comprise a view of the part 2 of the eye of the user through the lens 4, the algorithm used to perform the optical parameter determination step S16 requires values of the distance between the lens 4 and the image acquisition module used to acquire the first image while the first image was acquired and of the distance between the lens 4 and the image acquisition module used to acquire the second image while the second image was acquired.

(35) In the case where the first image and the second image are both depth maps or other images obtained by mapping or facial recognition technologies, such as the technologies implemented in Kinect or Face ID, the images may comprise depth information, thus the required values may be embedded in the images, extracted from the image data and determined relatively to each other.
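As a rough sketch of how such a value could be read out of a depth map (the pixel coordinates would come from a separate landmark-detection step; the function and its inputs are illustrative, and units are assumed to be metres):

```python
def eye_lens_distance_from_depth(depth_map, eye_px, lens_px):
    """Rough eye-lens distance from a single depth map.

    depth_map: 2D list of per-pixel camera-to-surface distances in metres;
    eye_px / lens_px: (row, col) of the eye and of a point where the sensor
    sees the lens surface itself (e.g. its rim).
    """
    eye_depth = depth_map[eye_px[0]][eye_px[1]]
    lens_depth = depth_map[lens_px[0]][lens_px[1]]
    return eye_depth - lens_depth
```

A point on the lens rim is used because a depth reading taken through the lens is itself distorted by refraction.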

(36) In the case where the first image and the second image are both two-dimensional images such as photographs, the required values may be determined either relatively to each other or as absolute numerical values.

(37) The required values may be predetermined, for example at least one default value may be entered.

(38) Alternatively, the required values may be input manually by the person.

(39) Alternatively, the required values may be selected in a database, based on manual inputs by the person and/or based on at least one image.

(40) Alternatively, according to another embodiment, the method according to the invention may further include a scaling step S14.

(41) During the scaling step S14, the required values for the calculations are determined relatively to at least one known dimension. The required values are determined based at least on the first and the second images. Said known dimension is to be understood as either the known length of an object, or the known distance between two objects.

(42) The required values for the calculations may be determined based on calibration. Calibration consists in using an object of known size to determine the dimensions of other objects in an image.

(43) An object having one known dimension may be a credit card, or any object which may be referenced in a database comprising a list of objects and at least their corresponding lengths. An example of such an object is a spectacle frame. In this case, the person may for example select the spectacle frame he/she is wearing from a database comprising a list of spectacle frames and their corresponding lengths. Some dimensions of the frame may also be read directly on the frame. In this example, the calibration can be done directly during the acquisition of at least the first and second images.
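Such a calibration can be sketched in a few lines (the 85.60 mm width is the ISO/IEC 7810 ID-1 card standard; the function names are illustrative):

```python
CREDIT_CARD_WIDTH_MM = 85.60  # ISO/IEC 7810 ID-1 card width

def mm_per_pixel(reference_length_px, reference_length_mm=CREDIT_CARD_WIDTH_MM):
    """Scale factor of the image plane containing the known-size object."""
    return reference_length_mm / reference_length_px

def pixels_to_mm(length_px, scale_mm_per_px):
    """Convert a pixel measurement lying in the calibrated plane."""
    return length_px * scale_mm_per_px
```

For example, a card imaged 428 px wide gives a scale of 0.2 mm/px, so an iris measured at 58.5 px in the same plane is 11.7 mm wide.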

(44) Alternatively, an object having one known dimension may be an object which itself is calibrated with another object having one known dimension.

(45) According to an embodiment, during the image reception step S12, a third image may be received, the third image comprising, as illustrated on FIG. 2C, a view of the eyewear comprising the lens 4. The eyewear is positioned on a flat surface along with an object 6 with at least one known dimension, such as a credit card. Thus the absolute dimensions and shape of the eyewear may be determined and the eyewear may be further used, on other images, as a scale. Advantageously, this image may be used to perform a calibration of the spectacle frame prior to the scaling step S14.

(46) The following example illustrates an embodiment of the invention in which the method comprises a scaling step S14.

(47) In this example, during the reception step S12, three images corresponding to the following description are received: the first image, as illustrated on FIG. 5A, is a front view of the person having a spectacle frame at the level of his forehead and a credit card positioned in the same plane as a part 2 of the eye of the person, the first image being acquired by an image acquisition module positioned at a distance d1 from the part 2 of the eye of the person; the second image, as illustrated on FIG. 5B, is a front view of the person having the spectacle frame on his nose so that the lens 4 is in front of the part 2 of the eye of the person and a credit card positioned in a plane tangent to the lens 4 at a reference point chosen on the front surface of said lens 4, the second image being acquired by an image acquisition module positioned at a distance d2 from the part 2 of the eye of the person; and the third image is a front view of the person having the spectacle frame at the level of his forehead, the third image being acquired by an image acquisition module positioned at a third distance d3 from the part 2 of the eye of the person.

(48) Any of the distances d1, d2 and d3 may be equal to or different from each other.

(49) In this example, the credit card is an object 6 with one known dimension, and this known dimension lies in the same plane as the part 2 of the eye of the person on the first image. Therefore the credit card may be used to determine the dimensions of said part 2 of the eye of the person on the first image.

(50) Moreover, in this example, the known dimension of the credit card is in a plane tangent to the lens 4 at a reference point chosen on the front surface of said lens 4 on the second image. Therefore, the dimensions of the spectacle frame are calibrated with the credit card based on the second image, thus the spectacle frame is also an object of at least one known dimension, which may be used to determine the dimensions of said part 2 of the eye of the person as seen on the second image.
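The chained calibration described above can be sketched as follows (pixel measurements are hypothetical; the card width is the ISO/IEC 7810 ID-1 value):

```python
def frame_width_mm(card_px, frame_px, card_mm=85.60):
    """Calibrate the spectacle frame against the credit card on the second
    image: both lie in the plane tangent to the lens, so they share a scale."""
    return frame_px * (card_mm / card_px)

def scale_from_frame(frame_px_other_image, frame_mm):
    """Use the now-known frame width as the scale reference (mm per pixel)
    on another image where no card is visible."""
    return frame_mm / frame_px_other_image
```

For example, a card spanning 428 px next to a frame spanning 700 px makes the frame 140 mm wide; if that frame then spans 560 px on another image, that image's scale is 0.25 mm/px.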

(51) Furthermore, either the credit card or the spectacle frame may be used to determine characteristic dimensions of the face of the person.

(52) In this example, during the scaling step S14, the distance d1 between said part 2 of the eye and the image acquisition module used to acquire the first image while the first image was acquired, and the distance between said part 2 of the eye and the lens 4 while the second image was acquired are determined based on the acquired images.

(53) In this example, during the optical parameter determination step S16, at least one optical parameter of the lens is determined based on a comparison between said part 2 on the first and the second image and based on the distances determined during the scaling step S14.

(54) The method may further include an image data providing step S13.

(55) During the image data providing step S13, image data are provided for the first and the second images. The image data comprise a dimension relating at least to characteristics, such as settings or technical specifications, of the image acquisition device used to acquire each image.

(56) In the following example illustrated on FIG. 4, we consider the following dimensions or distances: L.sub.PD is the interpupillary distance of the person; L.sub.M is the length of an object, here a spectacle frame of eyewear with the lens 4; L′.sub.PD is, in the plan image of the second image, the distance corresponding to the interpupillary distance of the person; L′.sub.M is, in the plan image of the second image, the distance corresponding to the length of the object on the second image; d.sub.VO is the distance between the lens 4 and the part 2 of the eye of the person; d is the distance between the lens 4 and the lens of the image acquisition device which acquired the second image; and f′.sub.C is the focal length of the lens of the image acquisition device, represented as a pinhole camera, which acquired the second image.

(57) Among these values, the required distances for the calculations during the optical parameter determining step S16 are d.sub.VO and d.

(58) L′.sub.M and L′.sub.PD are dimensions which may be determined directly from the second image. f′.sub.C is a dimension related to the image acquisition device.

(59) L.sub.PD may be a known dimension, for example separately provided, measured, or determined in a similar manner from the first image if the first image comprises a view of both eyes directly seen.

(60) d.sub.VO may be determined using the equation d.sub.VO=d(L.sub.PD/L′.sub.PD×L′.sub.M/L.sub.M−1).
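Under the pinhole model, the frame of true length L.sub.M lies at distance d and the pupils (true separation L.sub.PD) at distance d+d.sub.VO, so image sizes scale inversely with distance and d.sub.VO = d(L.sub.PD/L′.sub.PD × L′.sub.M/L.sub.M − 1); this reads the denominator of the last factor as L.sub.M, which is what the derivation requires. A sketch of this relation (function and parameter names are illustrative):

```python
def eye_lens_distance(d, l_pd, l_pd_image, l_m, l_m_image):
    """d_VO = d * ((L_PD / L'_PD) * (L'_M / L_M) - 1).

    d: lens-camera distance; l_pd: true interpupillary distance;
    l_m: true object (frame) length; l_pd_image / l_m_image: the
    corresponding lengths measured in the plan image of the second image.
    All lengths in consistent units.
    """
    return d * ((l_pd / l_pd_image) * (l_m_image / l_m) - 1.0)
```

With a pinhole focal length f, L′.sub.M = f·L.sub.M/d and L′.sub.PD = f·L.sub.PD/(d+d.sub.VO), and the expression recovers d.sub.VO exactly; note that f itself cancels out.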

(61) According to an embodiment, during the image reception step S12, a fourth image may be received. The fourth image comprises, as illustrated on FIG. 2D, a side view of the face of the person, with the lens 4 being in the same position with regard to said part 2 of the eye as on the first image.

(62) By side view is understood a view in which the head of the person is oriented to the side. In other words, on the fourth image, the center of said part 2 of the eye and the center of the lens 4 are on a straight line substantially parallel to the plan image.

(63) Advantageously, characteristic side-view dimensions on the head of the person may be determined. In particular, through the association of the third and the fourth image, characteristic side-view dimensions on the head of the person may be scaled relatively to the dimensions of the eyewear.

(64) On the fourth image, said part 2 of the eye of the person may be directly visible. Advantageously, in this case the distance between said part 2 of the eye of the person and the lens 4 may be determined. In particular, through the association of the third and the fourth image, the distance between said part 2 of the eye of the person and the lens 4 may be scaled relatively to the dimensions of the eyewear.

(65) According to an embodiment, during the image reception step S12, a fifth image may be received, the fifth image comprising, as illustrated on FIG. 2E, a side view of the face of the person, wherein said part 2 of the eye is directly visible.

(66) Advantageously, if said part 2 of the eye is not directly visible on the fourth image, it is possible from the fifth image to determine the position of said part 2 of the eye of the person on the fourth image as if it was directly visible.

(67) Thus, the distance between the lens 4 and said part 2 of the eye may be determined. It may be assumed that this distance is the same on every image where the person is wearing the eyewear, for example on the second image.

(68) In addition, the method according to the invention may further include a viewing condition reception step S10.

(69) During the viewing condition reception step, at least two viewing conditions associated with different images may be received. The viewing conditions may be metadata included in an image. Advantageously, viewing conditions associated with every image may be received.

(70) Viewing conditions may include a lighting parameter, which may include at least a value in a radiometry unit or in a photometry unit.

(71) Viewing conditions may include a reflection parameter. For example, an image may be reflected from a mirror.

(72) In an embodiment, the viewing conditions are similar for the first image and the second image.

(73) Alternatively, the method according to the invention may further include a viewing condition determination step S11.

(74) During the viewing condition determination step, at least two viewing conditions associated with different images may be determined. As an example of a viewing condition, a lighting parameter, such as brightness, may be determined by image treatment of at least part of an image. Advantageously, viewing conditions associated with every image may be determined.

(75) In particular, it is possible to use characteristic distances measured on the face to determine a ratio factor between different images.

(76) The viewing conditions may also include the distance between the image acquisition module and an element such as the lens 4 or the part 2 of the eye of the person while each image is acquired. Indeed, such distances may be different from an image to another.

(77) The scaling step S14 and/or the optical parameter determination step S16 may also be adapted based on the viewing conditions.

(78) For example, if the face does not have the same size on the first image, without eyewear, and on the second image, with eyewear, this implies that the distance between the image acquisition module and the face has changed. A ratio factor may thus be calculated and taken into account in the power calculation.
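A minimal sketch of such a ratio factor, assuming a characteristic face dimension (for example the interpupillary distance, in pixels) has been measured on both images; the names are illustrative:

```python
def scale_ratio(face_px_image1, face_px_image2):
    """Under a pinhole model, apparent size is inversely proportional to
    distance, so this ratio also equals distance2 / distance1."""
    return face_px_image1 / face_px_image2

def to_image1_scale(length_px_image2, ratio):
    """Express a pixel length measured on image 2 at image 1's scale, so
    that deformations can be compared on a common footing."""
    return length_px_image2 * ratio
```

For example, a face spanning 400 px on the first image and 500 px on the second means the camera was closer on the second image; an iris measured at 60 px on the second image corresponds to 48 px at the first image's scale.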

(79) Moreover, differences in viewing conditions, such as brightness, between two images may result in a deformation of the part 2 of the eye, the deformation being independent of the lens 4. Thus the difference in apparent shape or dimensions of said part of the eye on different images may be corrected for the variations in the viewing conditions.

(80) In an embodiment, the method according to the invention may also comprise a first image acquisition step S2, as illustrated on FIG. 3A, during which a first image is acquired by a portable electronic device 8 comprising an image acquisition module 10.

(81) FIG. 5C illustrates a specific embodiment in which during the first image acquisition step S2, an object 6 of at least one known dimension is positioned so that on the first image, said part 2 of the eye of the person and at least one known dimension of the object 6 are in the same plane.

(82) In an embodiment, the portable electronic device 8 is a smartphone, a personal digital assistant, a laptop, a webcam or a tablet computer. The portable electronic device may comprise a battery, and may communicate with a reception unit, for example by wireless communication. Advantageously, the image acquisition step S2 may be carried out easily in any location.

(83) The method may comprise a lens positioning step S3, during which the lens 4 is positioned relatively to said part 2 of the eye of the person at a position corresponding to the second image.

(84) The method may comprise a second image acquisition step S4, as illustrated on FIG. 3B, during which a second image is acquired by the portable electronic device 8 comprising the image acquisition module 10, wherein the image acquisition module 10 is positioned at a second distance from the lens 4.

(85) During the second image acquisition step S4, the face of the person, the lens 4 and the image acquisition module 10 are positioned in such a way that said part 2 of the eye of the person is visible from the image acquisition module 10 through at least part of the lens 4.

(86) In an embodiment, the method further comprises, prior to the first image acquisition step S2, a first lens positioning step S1, during which the lens 4 is positioned relatively to the part 2 of the eye of the person at a position corresponding to the first image. Thus the invention allows a person to determine optical parameters of his/her eyewear for example by simply using his/her smartphone, or another portable electronic device, to take a series of images of his/her face with and without eyewear, then having the images processed.

(87) As illustrated on FIG. 5D, during the second image acquisition step S4, a reference element with at least one known dimension in a plane may be positioned so that on the second image the known dimension is in a plane tangent to the lens 4 at a reference point chosen on the front surface of said lens 4.

(88) Advantageously, the method according to the invention may include a third image acquisition step S6 for acquiring a third image by a portable electronic device 8 comprising an image acquisition module 10.

(89) Advantageously, the method according to the invention may include a fourth image acquisition step S8, as illustrated on FIG. 3C, for acquiring a fourth image by a portable electronic device 8 comprising an image acquisition module 10.

(90) Advantageously, the method according to the invention may include a fifth image acquisition step S9, as illustrated on FIG. 3D for acquiring a fifth image by a portable electronic device 8 comprising an image acquisition module 10.

(91) The invention may further relate to a method for ordering a second lens of eyewear adapted for a person, as illustrated on FIG. 6.

(92) The method comprises at least an optical parameter determining step S17 and an ordering step S18.

(93) During the optical parameter determining step S17, at least one optical parameter of a first lens is determined by a method according to the invention, as illustrated on FIGS. 1A to 1C.

(94) During the ordering step S18, a second lens having the at least one determined optical parameter of the first lens is ordered.

(95) The invention may further relate to a computer program product comprising one or more stored sequences of instructions which, when executed by a processing unit 20, are able to perform at least the optical parameter determining step S16 of the invention.

(96) The invention may further relate, as illustrated on FIG. 7, to a system comprising at least a reception unit 22, an electronic storage medium 24 and a processing unit 20, the reception unit 22, the electronic storage medium 24 and the processing unit 20 being configured so as to communicate one with another, the reception unit 22 being able to receive a first image and a second image, the first image comprising a front view of the face of the person with said part 2 of the eye of the person being directly visible, and the second image comprising a front view of the face of the person with said part 2 of the eye of the person being visible through at least part of the lens 4, and the electronic storage medium 24 comprising one or more stored sequences of instructions which, when executed by the processing unit 20, are able to perform an optical parameter determination step S16, during which at least one optical parameter of a lens 4 is determined based on a comparison between at least one part 2 of an eye of a person on the first image and the second image.

(97) The system may further comprise image acquisition means configured so as to communicate at least with the reception unit, the image acquisition means being able to acquire the first image and the second image.

(98) Examples of such systems may include a smartphone, a laptop computer, a desktop computer, a tablet computer or a personal digital assistant.

(99) The invention has been described above with the aid of embodiments without limitation of the general inventive concept.

(100) Many further modifications and variations will suggest themselves to those skilled in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims.

(101) In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.