METHOD AND DEVICE FOR AUTOMATICALLY DETERMINING PRODUCTION PARAMETERS FOR A PAIR OF SPECTACLES
20230221585 · 2023-07-13
Abstract
A method and device for automatically determining production parameters for a pair of spectacles. The method comprises capturing head image data for at least a part of the head of a spectacles wearer and determining a head parameterization for at least a part of the head, the head parameterization indicating head parameters for at least the part of the head, which parameters are relevant for the adjustment of a pair of spectacles. The head parameters comprise a lens grinding parameter and a spectacles support parameter. The method also comprises providing a spectacles parameterization indicating relevant spectacles parameters for adjusting the spectacles for the wearer; and performing data mapping for the head parameterization and the spectacles parameterization, in which for the adjustment of the spectacles for the spectacles wearer at least one spectacles parameter is adjusted according to an associated head parameter and/or at least one further spectacles parameter is determined.
Claims
1. A method for automatically determining production parameters for a pair of spectacles, the following being provided in one or more processors configured for data processing, and the method comprising: capturing head image data for at least a part of the head of a spectacles wearer; determining a head parameterization for at least the part of the head of the spectacles wearer, the head parameterization indicating head parameters for at least the part of the head of the spectacles wearer, which parameters are relevant for the adjustment of a pair of spectacles, and the head parameters comprising at least one lens grinding parameter and at least one spectacles support parameter; providing a spectacles parameterization for the spectacles, the spectacles parameterization indicating relevant spectacles parameters for adjusting the spectacles for the spectacles wearer; and performing data mapping for the head parameterization and the spectacles parameterization, in which for the adjustment of the spectacles for the spectacles wearer at least one of the following is provided: at least one spectacles parameter is adjusted according to an associated head parameter, and at least one further spectacles parameter is determined.
2. The method according to claim 1, further comprising: capturing RGB head image data for at least a part of the head of the spectacles wearer; providing calibration data indicative of a calibration of an image recording device used to capture the RGB head image data; and determining the at least one lens grinding parameter using the RGB head image data and the calibration data by means of image data analysis, wherein a localization vector associated with the pupils is determined, which indicates an image pixel position for the pupils.
3. The method according to claim 2, further comprising: providing depth image data; and determining the at least one lens grinding parameter using the RGB head image data, the depth image data, and the calibration data.
4. The method according to claim 2, further comprising: providing reference feature data that indicates a biometric reference feature for the spectacles wearer; and determining the at least one lens grinding parameter using the RGB head image data, the reference feature data, and the calibration data.
5. The method according to claim 1, characterized in that the at least one spectacles parameter or the at least one further spectacles parameter includes a real grinding height for spectacles designed as varifocal spectacles, and further comprising: determining at least one fixed point of a real spectacles frame of real spectacles, which indicates a transition between a spectacles lens and the spectacles frame, wherein a localization vector associated with the at least one fixed point of the spectacles frame is determined, which indicates an image pixel position for the at least one fixed point of the spectacles frame; and vertically projecting a pupil mark indicative of the pupil onto the spectacles frame.
6. The method according to claim 1, characterized in that the at least one spectacles parameter or the at least one further spectacles parameter includes a virtual grinding height for spectacles designed as varifocal spectacles, and further comprising: providing a 3D model of virtual spectacles, from which a spectacles parameterization for the virtual spectacles is determined; determining at least one fixed point of a spectacles frame of the virtual spectacles, which indicates a transition between a spectacles lens and the spectacles frame, wherein a localization vector associated with the at least one fixed point of the spectacles frame is determined, which indicates an image pixel position for the at least one fixed point of the spectacles frame; and vertically projecting a pupil mark indicative of the pupil onto the spectacles frame.
7. The method according to claim 6, characterized in that the 3D model of the virtual spectacles is selected from a large number of different virtual spectacles, for which a respective 3D model is stored in a storage device.
8. The method according to claim 1, further comprising: determining a 3D coordinate system; mapping the head parameterization for at least a part of the head of the spectacles wearer and the spectacles parameterization into the 3D coordinate system, and determining one or more of the following parameters in the 3D coordinate system: horizontal pupillary distance, face width at pupillary level, real grinding height, and virtual grinding height.
9. The method according to claim 1, characterized in that, based on the head parameterization, a temple length for the temples of the spectacles and a bending point for the temples are determined for the adjustment of the spectacles.
10. The method according to claim 1, characterized in that the head parameters include one or more lens grinding parameters from the following group: horizontal pupillary distance and head width.
11. The method according to claim 1, characterized in that the head parameters include one or more spectacles support parameters from the following group: face width at the pupillary level, nose width, nose attachment point, ear attachment point, distance between nose and ears, and cheek contour.
12. A device for automatically determining production parameters for a pair of spectacles, comprising one or more processors configured for data processing and configured for: receiving head image data for at least a part of the head of a spectacles wearer; determining a head parameterization for at least a part of the head of the spectacles wearer, the head parameterization indicating head parameters for at least the part of the head of the spectacles wearer, which parameters are relevant for the adjustment of a pair of spectacles, and the head parameters comprising at least one lens grinding parameter and at least one spectacles support parameter; providing a spectacles parameterization for the spectacles, the spectacles parameterization indicating relevant spectacles parameters for adjusting the spectacles for the spectacles wearer; and performing data mapping for the head parameterization and the spectacles parameterization, in which for the adjustment of the spectacles for the spectacles wearer at least one of the following is provided: at least one spectacles parameter is adjusted according to an associated head parameter, and at least one further spectacles parameter is determined.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] Further embodiments are explained below with reference to the drawings.
DETAILED DESCRIPTION OF FURTHER EMBODIMENTS
[0033] A method and a device for automatically determining production parameters for a pair of spectacles are described below using various embodiments.
[0034] In this case, a recording device 2 is provided, which captures image recordings of at least a part of the head of a spectacles wearer 1.
[0035] Commercially available mobile devices, for example mobile phones or laptop computers, are able to record a range of environmental measurement data, for example by means of one or more sensor or recording devices: CMOS camera, infrared camera, distance sensors, and point projectors.
[0036] Image recordings can be captured by means of the recording device 2, from which digital image information can be determined: image data (RGB information), depth data (in particular distances), and calibration data (such as resolution, angle, etc.). 3D data is determined from the digital image information, with the following being provided in one embodiment (cf. also the further explanations below):
[0037] (i) Points of interest (POI) are detected in the image data, e.g., pupils, frames, noses, etc., up to the entire part of the head (e.g., the face).
[0038] (ii) These POIs are mapped to the depth data with the help of the calibration data and biometric data (in particular for plausibility checks).
[0039] (iii) The necessary distances can be calculated from the “vectors” determined in this way. The mapping is done from “2D to 3D.” In other words, the POI is a vector (x, y), and after the mapping there is a vector (x, y, z) taking into account the depth data.
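By way of illustration, a minimal Python sketch of steps (i) to (iii) follows; the proportional scaling between the RGB and depth pixel grids and all function and variable names are assumptions for illustration, not part of the disclosure:

    import numpy as np

    def lift_poi_to_3d(poi_xy, depth_map, rgb_shape):
        # Map a POI (x, y) in the RGB image to a vector (x, y, z)
        # by looking up its distance in the depth image.
        x, y = poi_xy
        sy = depth_map.shape[0] / rgb_shape[0]   # vertical scale RGB -> depth
        sx = depth_map.shape[1] / rgb_shape[1]   # horizontal scale RGB -> depth
        z = float(depth_map[int(y * sy), int(x * sx)])  # distance in mm
        return np.array([x, y, z])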
[0040] For example, POIs are mapped to the depth image data, including the calibration data. The reference feature data, which indicate a biometric reference feature, can also be used for a plausibility check, for example against the distribution of the horizontal pupillary distance in the population. This serves as a safeguard: for example, a warning can be generated if an unusual pupillary distance is determined that deviates from the typical range. A corresponding action can then be initiated; for example, the spectacles wearer can be asked to repeat the measurement, i.e., to record the image data/sensor data again.
[0041] If, in an alternative embodiment, no depth data is available, for example because the recording device 2 does not have a corresponding sensor, the POIs are mapped using a so-called reference method. As explained above, it is provided here that the iris (in particular the iris diameter) is used as a reference, for example. The biometric data are again used for plausibility checks.
[0042] With regard to the POI, not only the pupil can be relevant, but also the iris contour (or its pixel position in the image). From the iris contour, the pupil (both as pixel positions in the RGB image), and the iris diameter, the horizontal pupillary distance can be determined, in particular when no depth image data is available.
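A minimal sketch of such a reference computation, assuming an average human iris diameter of about 11.7 mm as the biometric reference value (the constant and the function are illustrative assumptions):

    IRIS_DIAMETER_MM = 11.7  # assumed average human iris diameter

    def pd_from_iris_reference(pupil_l_px, pupil_r_px, iris_diameter_px):
        # The known real-world iris diameter yields a mm-per-pixel scale,
        # which converts the pixel distance between the pupils into mm.
        mm_per_px = IRIS_DIAMETER_MM / iris_diameter_px
        dx = pupil_r_px[0] - pupil_l_px[0]
        dy = pupil_r_px[1] - pupil_l_px[1]
        return (dx * dx + dy * dy) ** 0.5 * mm_per_px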
[0043] In particular for processing the image recordings and the user inputs, the recording device is connected to a data processing device 3 via a wireless or wired connection that is configured to exchange data. The data processing device 3 has one or more processors for data processing and is connected to a storage device 4.
[0044] In particular, respective spectacles data for a large number of different spectacles models are stored in the storage device 4, the spectacles data specifying characteristic data for the various spectacles models.
[0045] The data processing device is optionally connected to a production device 5 that is configured to receive parameters for spectacles to be produced and to produce them automatically, in particular a spectacles frame or parts thereof, using a 3D printer.
[0046] In the method for automatically determining production parameters for spectacles, one or more of the following parameters are determined: pupillary distance (PD), real grinding height (rGH), and virtual grinding height (vGH). With regard to an adjustment of the spectacles, provision can be made in the method to adjust in particular a front part, nose pads, and/or temples of the spectacles.
[0047] In particular, the following steps can be provided, which are explained in more detail below: data collection by means of image recordings; determining features and reference points; and applying a projection methodology. Image data with light, color, and distance information about the recorded object (the head of the spectacles wearer 1) are important for the data collection. The recorded objects are primarily a part of the face of the spectacles wearer and spectacles models. These data from the “2D world” are projected stably and precisely into a 3D world coordinate system using multi-stage projection and filter methods. The desired final measurement data for spectacles lens centering, custom manufacture of spectacles frames, and/or a personalized spectacles model recommendation can then be determined from the position matrices.
[0048] I. Determination of Parameters
[0049] The pupillary distance (PD) is defined as the horizontal distance in millimeters (mm) between the two pupils. The center points of both pupils are used as the starting points for the measurement. The pupillary distance is necessary for centering the spectacles lenses of single-vision and varifocal spectacles.
[0050] The real grinding height (rGH) is the vertical distance in mm from the pupil to the inner lower edge of the spectacles frame that the spectacles wearer wears during the measurement. The grinding height is necessary in order to be able to grind varifocal spectacles lenses.
[0051] The virtual grinding height (vGH) is the vertical distance in mm from the pupil to the inner lower edge of the virtual spectacles frame that the spectacles wearer sees projected onto his face via the screen of the mobile device. The grinding height is necessary in order to be able to grind varifocal spectacles lenses.
[0053] a) Features and Reference Points
[0054] According to one embodiment, the following points are defined and determined: the pixel of interest in the two-dimensional RGB image (POI) (pupil position, frame position of the real spectacles, frame position of the virtual spectacles); the 3D world coordinate system; the depth data in the 2D depth image; and the calibration data.
[0055] Pixel of interest in two-dimensional RGB image (POI): In order to determine the parameters PD, rGH, and vGH, it is necessary to determine the exact position of the pupils and, for example, the lowest point of the spectacles frame (the so-called box size). For this purpose, RGB images and camera calibration data (resolution of the recorded image, camera angle information) are analyzed for the corresponding mobile device (recording device 2). The pupils are determined using a pupil finder methodology (image analysis algorithms) and stored in a localization vector (POI). With the help of the calibration data, the pupils can be uniquely localized as pixel information (x, y) in the RGB image.
[0056] In one embodiment, the pupil finder methodology provides a two-stage method. First, a cascaded finding of the pupil (by so-called cascaded convolutional neural networks) is performed: (i) finding the face; (ii) finding the eye area; (iii) finding the iris; and (iv) finding the pupil. In a second stage, plausibility data for comparison (biometric information) are provided. A plausibility check can be carried out at each step of the method with the help of the biometric data, for example according to the following scheme: Step (1): Has the iris been found inside the eye area? Step (2): Has the pupil been found inside the iris? . . . Step (n): Is the calculated pupillary distance within a plausible range, for example 50 to 70 mm? This supports the stability and accuracy of the method.
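The interleaved plausibility checks can be illustrated with the following Python sketch; the bounding boxes stand in for the outputs of the cascaded CNN stages, whose implementation is not specified here:

    def inside(inner, outer):
        # True if box inner = (x, y, w, h) lies entirely inside box outer.
        ix, iy, iw, ih = inner
        ox, oy, ow, oh = outer
        return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

    def plausible_pd(pd_mm, low=50.0, high=70.0):
        # Final check: calculated pupillary distance in a typical range.
        return low <= pd_mm <= high

    # After each cascade stage (face -> eye area -> iris -> pupil), the
    # found region is checked against the enclosing one, for example:
    eye_area, iris, pupil = (100, 80, 120, 40), (130, 90, 24, 24), (140, 98, 8, 8)
    assert inside(iris, eye_area)   # Step (1): iris inside the eye area?
    assert inside(pupil, iris)      # Step (2): pupil inside the iris?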
[0058] For rGH, it is necessary to determine the exact position of the frame. For this purpose, a fixed point on the frame is defined as follows:
[0059] Frame: The relevant point on the frame is the transition between the lens and the frame of the spectacles (and thus the “inner point of the spectacles”).
[0060] Vertical projection: Starting from the pupil found, a vertical projection onto the frame (as defined above) is carried out.
[0061] Using a line finder methodology, the frame fixed points (left and right side) are determined and stored in a localization vector. This vector is congruent with the camera's calibration data, so the exact pixel position of the frame fixed points is known. A spectacles frame in an image represents a line geometry; therefore, an algorithm specialized in finding lines (and thus the frame) is chosen, in particular for finding where a line begins and where it ends. In one embodiment, the Hough line finder is used.
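A minimal sketch of such a line search, here using the probabilistic Hough transform as provided by OpenCV; the edge-detection and Hough thresholds are illustrative assumptions that would need tuning to real frame images:

    import cv2
    import numpy as np

    def find_frame_lines(image_bgr):
        # Detect line segments (candidate frame edges) in a camera image.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)              # edge map for Hough
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=80, minLineLength=40, maxLineGap=5)
        # Each segment is (x1, y1, x2, y2): the pixel where the line
        # begins and the pixel where it ends.
        return [] if lines is None else [tuple(l[0]) for l in lines]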
[0063] The virtual spectacles are provided as a modeled 3D object to determine the virtual grinding height (vGH). Here, the exact dimensions are known. The lower central point of the bridge is defined as the anchor point on the spectacles.
[0064] 3D world coordinate system: The starting point is the definition of a world coordinate system. This is a Euclidean system with an anchor point at the origin. This anchor point is defined by the lens front of the RGB camera. The orientation of the coordinate axes is defined as follows:
[0065] x-axis: parallel to the horizontal orientation of the mobile device
[0066] y-axis: parallel to the vertical orientation of the mobile device
[0067] z-axis: parallel to the camera recording direction of the mobile device
[0068] Depth data in 2D depth image: Advanced mobile devices provide depth information. These greyscale images are captured synchronously with the RGB images, and the depth and RGB images can be congruently transformed together with the calibration data. The depth images contain, per pixel, the distance from the depth lens to the recorded object.
[0069] Calibration data: Each RGB and depth image pair contains various calibration data that further specify the capture. It is assumed that the following quantities are available or can be extracted by software: the angle along the x-y axis for the POI; the angle along the y-z axis for the POI; and the resolutions of the RGB image and the depth image.
[0070] The formalization is explained in more detail below:
[0071] 1. Image Pixels of Interest in 2D RGB Image (POI)
[0072] POI = pixel of interest = position of the relevant pixel (e.g., the found pupil) in the RGB image
[0073] img_RGB = RGB image
[0074] img_DEP = depth image
[0075] p_l = position of the left pupil in the RGB image = (x_p_l, y_p_l); the right pupil p_r and the frame fixed points f_l, f_r are denoted analogously
[0081] 2. 3D World Coordinate System
[0082] World coordinate system = Euclidean coordinate system with three dimensions
[0083] Anchor point = origin point = camera lens exit point = (0, 0, 0)
[0084] 3. Depth Image Data in 2D Depth Image
[0085] 4. Calibration Data
[0086] a_xy = angle along the x-y axis for the POI
a_yz = angle along the y-z axis for the POI
[0090] b) Projection Methodology
[0091] In one embodiment, the projection methodology comprises four steps:
[0092] determining the angles between axes in the world coordinate system,
[0093] determining the distance to the POI,
[0094] projecting the 2D input images into the 3D world coordinate system, and
[0095] calculating the distance.
[0096] i) Angles Between Axes in the World Coordinate System
[0097] To determine the angle between axes in the world coordinate system, the following is provided: the two angles are required for the projection into the world coordinate system. These are available in the calibration data and can be used for the projection.
[0098] ii) Determining the Distance to the POI
[0099] To determine the distance to the POI, the following is provided: a connection to the depth image must be established from the localization of the POI in the RGB image in order to determine the distance of the POI from the camera. This is done using a mapping method that takes into account the resolutions of the RGB image and the depth image, which are usually different. A total of three cases can be distinguished:
[0100] Case 1: The resolutions of the RGB image and the depth image match.
[0101] Case 2: The resolution of the RGB image is greater than the resolution of the depth image.
[0102] Case 3: The resolution of the RGB image is smaller than the resolution of the depth image.
[0103] The aim is to derive the depth information (=distance in mm) for a pixel in the RGB image that has already been found to be relevant (e.g., the pupil or the frame):
[0104] Case 1:
[0105] The coordinates of the POI in the RGB image are projected exactly onto the coordinates in the depth image. The corresponding distance information can be determined.
[0106] Case 2:
[0107] In this initial situation, the RGB image has the higher resolution, so that one depth image pixel covers several RGB image pixels.
[0108] Two cases can occur:
[0109] The POI lies entirely within one depth pixel (striped area). Congruence is determined as follows: the POI is projected onto the depth image pixel with the same coordinates.
[0110] The POI lies in more than one depth pixel (checkered area). Congruence is determined as follows: the POI is projected onto the arithmetic average of the distances of all affected depth image pixels.
[0111] The corresponding distance information can be determined.
[0112] Case 3:
[0113] In this initial situation, the depth image has the higher resolution, so that one RGB image pixel covers several depth image pixels; the distance information can be determined analogously, for example as the arithmetic average of the affected depth image pixels.
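The three cases can be handled by a single mapping routine, as in the following sketch; proportional, axis-aligned scaling between the two pixel grids is assumed, and Case 3 is resolved by averaging, by analogy with Case 2:

    import numpy as np

    def depth_for_poi(poi_xy, rgb_shape, depth_map):
        # Distance (mm) for an RGB POI under differing resolutions.
        x, y = poi_xy
        rh, rw = rgb_shape
        dh, dw = depth_map.shape
        if (rh, rw) == (dh, dw):              # Case 1: identical grids
            return float(depth_map[y, x])
        # Footprint of the RGB pixel on the depth grid (Cases 2 and 3).
        y0, y1 = int(y * dh / rh), max(int((y + 1) * dh / rh), int(y * dh / rh) + 1)
        x0, x1 = int(x * dw / rw), max(int((x + 1) * dw / rw), int(x * dw / rw) + 1)
        # One affected depth pixel -> its distance; several -> arithmetic mean.
        return float(depth_map[y0:y1, x0:x1].mean())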
[0114] iii) Projection of 2D Input Images to 3D World Coordinate System
[0115] The position in the 3D world coordinate system is calculated from the pixel distance and the two angular dimensions using a Euclidean position formula.
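The position formula itself is not spelled out in the description. One plausible reading, sketched below under the assumption that a_xy and a_yz are the horizontal and vertical viewing angles of the POI relative to the camera axis, scales a unit direction vector by the measured distance:

    import math

    def map_to_world(d_poi, a_xy, a_yz):
        # Position of the POI in the world coordinate system (origin at
        # the camera lens) from its distance d_poi (mm) and two angles (rad).
        ux = math.sin(a_xy)
        uy = math.sin(a_yz)
        uz = math.sqrt(max(0.0, 1.0 - ux * ux - uy * uy))
        return (d_poi * ux, d_poi * uy, d_poi * uz)

The distances of step iv) then follow from the standard Euclidean distance between two such points.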
[0116] iv) Calculation Distance
[0117] The distance between two points in the 3D world coordinate system is calculated using a Euclidean distance formula:
[0118] PD: The pupillary distance is specified in mm and is calculated from the two pupil points in the world coordinate system.
[0119] rGH: The real grinding height is specified in mm and is calculated from a pupil point and a real frame point in the 3D world coordinate system.
[0120] vGH: The virtual grinding height is specified in mm and is calculated from a pupil point and a virtual frame point in the 3D world coordinate system.
[0121] A possible formalization is explained in more detail below.
[0122] ii) Determining the Distance to the POI
[0123] Case 1
[0124] d_POI = depth information for the POI in the depth image at the pixel position (x_POI, y_POI)
[0125] Case 2
[0126] Case 3
[0127] iii) Projection of 2D Input Images to 3D World Coordinate System
(x, y, z) = map(d_POI, a_xy, a_yz)
[0128] iv) Calculation Distance
dist_PD = √((x_p_l − x_p_r)² + (y_p_l − y_p_r)² + (z_p_l − z_p_r)²)
dist_rGH,l = √((x_p_l − x_f_l)² + (y_p_l − y_f_l)² + (z_p_l − z_f_l)²)
dist_rGH,r = √((x_p_r − x_f_r)² + (y_p_r − y_f_r)² + (z_p_r − z_f_r)²)
[0129] One or more of the following advantages can result from the different versions:
[0130] Increase in measurement accuracy: with the proposed solution, a measurement accuracy of less than 1 mm can be achieved.
[0131] Minimization of measurement variance: with the proposed solution, a measurement variance of less than 2 mm (one standard deviation) can be achieved.
[0132] All measurements can be carried out with just a commercially available mobile device.
[0133] No real spectacles are required to determine the grinding height; a virtual fitting on the mobile device is sufficient. This allows the recommendation of suitable spectacles frames, the measurement of the parameters necessary for successful spectacles lens centering, and the frame adjustment without the presence of an optician. This enables a qualitatively equivalent online purchase of spectacles, as well as purchase at stationary vending machines following a self-service principle. The return rate in current online spectacles sales can be significantly reduced through better recommendation and measurement, which has a positive effect on the profitability of online spectacles sellers and protects the environment by reducing package volumes.
[0134] II. Adjusting Frames for a Custom-Made Pair of Spectacles
[0135] a) Features and Reference Points
[0136] The spectacles include in particular the front part, the left and right temples, and nose pads. A projection methodology is used to determine the optimal frame size.
[0137] The following points are determined: canonical spectacles model and modification points. Canonical in this context refers to the definition of the spectacles components and of the size and shape adjustments: canonical model = {all components, size adjustments, shape adjustments}. The specific spectacles model is then calculated from this finite number of combinations and retrieved from the storage device. In one embodiment, a canonical spectacles model can be defined using the following components: front, temples (left and right), nose pads (left and right), bending point of the temples (left and right), and bending angle of the temples (left and right).
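As a sketch of a possible data structure for such a canonical model (the field names are illustrative, not taken from the description):

    from dataclasses import dataclass

    @dataclass
    class CanonicalSpectacles:
        # Canonical model: components plus size and shape adjustments.
        front_width_mm: float          # width of the front part
        temple_length_mm: float        # overall temple length (per side)
        temple_bend_point_mm: float    # length up to the bending point
        temple_bend_angle_deg: float   # bending angle of the temple
        nose_pad_id: int               # selected nose pad size/shape variant
        front_shape_id: int            # selected front shape variant

A concrete spectacles model is then one element of the finite set of such component/size/shape combinations held in the storage device 4.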
[0139] Modification points: The frame adjustment takes place separately for each component using the following modification points:
[0140] Front: width of the entire front part 71. The scaling is done while preserving the aspect ratio.
[0141] Temple: overall length of the temple 72, bending point 73, and bending angle 74.
[0142] b) Projection Methodology
[0143] The projection methodology consists of two steps: front part projection method and temple projection method.
[0144] i) Front Part Projection Method
[0145] Facial measurement data are collected and mapped onto a discrete grid, from which the size of the front part 71 can be determined. In addition, “aesthetic principles” may be considered, for example as follows: (i) women tend to wear larger spectacles; (ii) the eyebrows should be above the spectacles; and (iii) the pupils should usually not be in the lower half of the lens.
[0146] Facial measurement data: the pupillary distance and the face width are collected. The pupillary distance has already been discussed above. The face width is defined as the total width of the recognizable face at the pupillary level. The face width can be captured using current face recognition methods.
[0148] Projection: The size of the front part can be derived for a grid point tuple determined from pupillary distance and face width.
TABLE 1
Example table for a projection; the classification S, . . . , XL is an example and is projected onto a cardinal scale.

                    Pupillary distance
Face width        <45 mm   45-55 mm   55-65 mm   >65 mm
<140 mm             S         S          M         M
140-150 mm          S         M          M         L
150-160 mm          M         M          L         L
>160 mm             M         L          L         XL
[0149] The S, M, L classification is for illustrative purposes. Different sizes and shapes, each provided with an ID number, are used for each component of the spectacles. When determining the spectacles, an optimal size and shape is selected for each component.
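The projection of Table 1 amounts to a two-dimensional range lookup, as in the following sketch, whose bin edges and size labels mirror the example table:

    import bisect

    PD_EDGES = [45, 55, 65]      # mm, column boundaries of Table 1
    FW_EDGES = [140, 150, 160]   # mm, row boundaries of Table 1
    SIZE_TABLE = [               # rows: face width bins, columns: PD bins
        ["S", "S", "M", "M"],    # face width < 140 mm
        ["S", "M", "M", "L"],    # 140-150 mm
        ["M", "M", "L", "L"],    # 150-160 mm
        ["M", "L", "L", "XL"],   # face width > 160 mm
    ]

    def front_size(pd_mm, fw_mm):
        # Project (pupillary distance, face width) onto a front part size.
        return SIZE_TABLE[bisect.bisect(FW_EDGES, fw_mm)][bisect.bisect(PD_EDGES, pd_mm)]

The projection of Table 2 below works the same way as a one-dimensional lookup.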
[0150] A possible formalization is explained in more detail below:
[0151] Facial Measurement Data
[0152] PD = pupillary distance in mm
[0153] FW = face width in mm, measured at eye level
[0154] FP = front part width in mm, measured at the widest point
[0155] Discrete Grid
[0156] R_PD = pupillary distance grid = {p_1, . . . , p_N}, N ∈ ℕ
[0157] R_FW = face width grid = {g_1, . . . , g_M}, M ∈ ℕ
[0158] R_FP = grid for front part widths = {f_1, . . . , f_L}, L ∈ ℕ
[0159] Projection
[0160] ii) Temple Projection Method
[0161] Facial measurement data are collected and mapped onto a discrete grid, from which the temple length and the bending point can be determined. Here, too, additional aesthetic principles can be taken into account. For example, the temples of women's spectacles should be somewhat longer, as women often push their spectacles up into their hair.
[0162] Facial measurement data: Two facial feature points are located: nose attachment point and ear attachment point. The nose attachment point and ear attachment point serve as references for the contact points of the nose pads and temples. These points can be captured using face recognition methods.
[0164] Discrete grid: A discrete grid is created along the dimension “length of the temple to the bending point.” For this purpose, statistical data on this variable are collected, i.e., the average distribution in the population is used, and an equidistant grid is formed from this distribution. The temples are divided into equidistant lengths, and a length is associated with each grid point of “length of the temple to the bending point.”
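A sketch of such a grid construction; the sample values are made up, and summarizing the population distribution by its observed range is an assumption:

    import numpy as np

    def equidistant_grid(samples, n_points):
        # Equidistant grid spanning the observed range of a population sample.
        return np.linspace(min(samples), max(samples), n_points)

    # Hypothetical sample of "length of the temple to the bending point" in mm.
    sample = [96.0, 104.5, 108.0, 112.0, 119.5, 123.0]
    grid = equidistant_grid(sample, 4)   # e.g., 4 grid points for S, M, L, XL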
[0165] Projection: The length of the temple and the bending point can be derived for a determined grid point from “Length of the temple to the bending point.”
TABLE 2
Example table for a projection; the classification S, . . . , XL is an example and is projected onto a cardinal scale.

Length of the temple
to the bending point   <100 mm   100-110 mm   110-120 mm   >120 mm
Classification            S          M            L           XL
[0166] A possible formalization is explained in more detail below:
[0167] Facial Measurement Data
[0168] NP = nose attachment point in the world coordinate system
[0169] EP = ear attachment point in the world coordinate system
[0170] FW = face width in mm, measured at eye level
[0171] Discrete Grid
[0172] R_NED = grid for projected nose-to-ear distances = {n_1, . . . , n_N}, N ∈ ℕ
[0173] R_TL = grid for temple lengths up to the bending point = {b_1, . . . , b_L}, L ∈ ℕ
[0174] Projection
n_i ↦ {b_1, . . . , b_Q}, Q ≤ L; i ∈ {1, . . . , N}
[0175] One or more of the following advantages can result from the different versions: it is possible to adapt a spectacles frame to an individual head shape; all that is needed is a standard mobile device; and the method is automated and therefore scalable. The delivery time can be shortened by combining the method with 3D printing technology. In addition, wearing comfort can be significantly increased with custom-made spectacles. The method also eliminates the need for subsequent adjustment of the spectacles frame to the wearer's head, for example in the nose or ear region, which in turn eliminates the need for the presence of an optician and allows for online or over-the-counter sale of spectacles.
[0176] III. Determining a Spectacles Recommendation
[0177] In order to determine a recommendation, all relevant input data is captured. This includes facial analysis data, preferences about existing objects (spectacles), and visual data (images of spectacles).
[0178] a) Features and Reference Points
[0179] The following points are determined: face width, portfolio, and preferences of the spectacles wearer.
[0180] Face width: The face shape is a relevant aspect when it comes to the fashionable fit of spectacles. The face width is used for this purpose and is defined as the recognizable width of the face in mm at the level of the eyes.
[0181] Portfolio: The spectacles portfolio includes all relevant spectacles that are available for deriving the recommendation. Each item of this portfolio contains two pieces of information: an RGB image of the spectacles and a classification according to descriptive features (shape, color, style, etc.).
[0182] Preferences: Preferences are a binary vector that assigns the preference (preferred, not preferred) to each image.
[0183] A possible formalization is explained in more detail below:
[0184] 1. Face Width
[0185] FW = face width in mm, measured at the pupillary level
[0186] 2. Portfolio
[0187] 3. Preferences
[0188] M = number of spectacles with a preference, M ≤ N
[0189] p_j = preference for the spectacles with index j, j ∈ {1, . . . , M}, p_j ∈ {0, 1}
[0190] b) Projection Methodology
[0191] The projection methodology consists of three steps: face projection method, image projection method, and image preference method.
[0192] i) Face Projection Method
[0193] Facial measurement data are collected and mapped onto a discrete grid, from which the recommended spectacles can be determined.
[0194] Facial measurement data: The face width is collected. The face width is defined as the total width of the recognizable face at the pupillary level. The face width can be captured using current face recognition methods.
[0195] Discrete grid: A discrete grid is created along the face width dimension. For this purpose, statistical data on this variable (its distribution in the population) are collected, and an equidistant grid is formed from the distribution. The spectacles portfolio is divided into equidistant sizes, and a size is associated with each grid point based on the face width.
[0196] Projection: The recommended spectacles can be derived for a grid point determined from the face width.
[0197] ii) Image Projection Method
[0198] For a fixed RGB image with recognizable spectacles (input image), a trained neural network is used to perform feature extraction. A suitable similarity metric is then applied to the extracted features, which compares the input image with every image in the portfolio and sorts the portfolio according to confidence.
[0199] The similarity metric is provided with a confidence level above which “these spectacles are similar to the input image” applies, so that a recommendation sub-portfolio can be derived.
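A minimal sketch of this step, using cosine similarity on the extracted feature vectors as one possible similarity metric (the description leaves the metric open):

    import numpy as np

    def recommend_sub_portfolio(input_feat, portfolio_feats, threshold=0.8):
        # Sort portfolio images by similarity to the input image and keep
        # those above the confidence level ("similar to the input image").
        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        scored = sorted(((cosine(input_feat, f), i)
                         for i, f in enumerate(portfolio_feats)), reverse=True)
        return [(i, c) for c, i in scored if c >= threshold]

The image preference method described next works analogously, with the preference vector additionally restricting or re-weighting the candidate images.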
[0200] iii) Image Preference Method
[0201] For a set of fixed RGB images with recognizable spectacles (input images from the recording device 2), a trained neural network is used to perform feature extraction. A similarity metric is then applied to the extracted features, which compares the input images with each image of the existing spectacles and sorts them according to confidence. A preference vector can be used as an additional input parameter, which indicates one or more preferences determined from the input images. Such a preference may concern qualitative factors for the user, for example one or more factors from the following group: sunglasses or regular spectacles, color, material, brand, and the like.
[0202] The similarity metric is provided with a confidence level above which “these spectacles are similar to the input image and preferred” applies, so that a recommendation sub-portfolio can be derived.
[0203] A possible formalization is explained in more detail below:
[0204] 1. Face Projection Method
[0205] 2. Image Projection Method
[0206] img_input = input image with recognizable spectacles
[0207] NN(img_input) = trained neural network applied to the input image
[0208] α = {α_(1), . . . , α_(N)} = NN(img_input) = sorted confidence vector
[0209] 3. Image Preference Method
[0210] img_input = {img_input,1, . . . , img_input,M} = set of input images
[0211] p = {p_1, . . . , p_M} = preference vector for all images j = 1, . . . , M
[0212] NN(img_input, p) = trained neural network applied to the input images 1, . . . , M and the preference vector
[0213] α = {α_(1), . . . , α_(N)} = NN(img_input, p) = sorted confidence vector
[0214] One or more of the following advantages can result from the different versions: everything runs on one mobile device; all relevant visual data are considered; and preferences are taken into account. So far, only self-selection has been possible online, which is insufficient, since spectacles wearers do not know how their head size compares to the rest of the spectacles-wearing population. In concrete terms, this means that nobody says of themselves: “I have a statistically significantly large head.” Recommendations based purely on preference are inadequate when it comes to spectacles. Head and face shape recognition can be automated without the presence of an optician, for example online or at a self-service machine, and can also be combined with deep-learning-based preference recognition.
[0215] The features disclosed in the above description, the claims, and the drawings may be of relevance, both individually and also in any combination, for realizing the different embodiments.