Device and method for measuring a surface topography, and calibration method

10935372 · 2021-03-02

Abstract

A method and a device for measuring the topography and/or the gradients and/or the curvature of an optically effective surface of an object are disclosed. The device allows the object to be arranged in a receiving region with a contact surface for contact with the object. Inside the device, a plurality of point light sources provide light that is reflected at the surface to be measured of an object arranged in the receiving region. The device includes at least one camera with an objective assembly and an image sensor for detecting the brightness distribution that is produced on the image sensor by the light of the point light sources reflected at the surface to be measured.

Claims

1. A method for measuring at least one of a topography, a gradient, or a curvature of an optically effective surface of an article, the method comprising: reflecting light emitted by one or more light spots at the optically effective surface of the article to be measured; capturing a brightness distribution of the reflected one or more luminous spots; assigning the brightness distribution to the light spots causing the brightness distribution; predetermining a first topography; iteratively performing, up to a convergence criterion: determining reflection locations on the article for the light emitted by the light spots; calculating surface normals {right arrow over (n)} at the reflection locations for the light emitted from the light spots, determining directional derivatives at the surface normals {right arrow over (n)}, and interpolating the directional derivatives onto a regular polar grid; and calculating a further topography of the article by integrating the directional derivatives, determined by the surface normals {right arrow over (n)}, on the regular polar grid by solving the linear system of equations (I) reproduced below: (A.sup.TA; C.sup.TC; 1)·z=(∂.sub.r.sup.2z+∂.sub.φ.sup.2z; z(r=0); z(bearing)), (I) wherein A is a Jacobian matrix describing the numerical polar derivatives on the polar grid and C is a constraint matrix describing the height of the article, measured at bearing points, and the polar condition z(0,φ)=z(0); and establishing at least one of an absolute position or a relative position of at least one point on the surface of the article to be measured in a coordinate system that is fixed relative to the apparatus.
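The constrained integration of claim 1 can be illustrated with a minimal numerical sketch. This is not the patent's implementation: it works on a 1-D radial grid rather than the full polar grid, builds a simple finite-difference Jacobian A and a constraint matrix C pinning the height at r = 0 and at one bearing point, and solves the stacked least-squares system with NumPy. All names and the constraint weight are illustrative assumptions.

```python
import numpy as np

# Minimal sketch, assuming a 1-D radial grid: recover heights z from
# sampled derivatives dz/dr, with the height pinned at the center (r = 0)
# and at a "bearing" point, analogous to solving system (I).

n, dr = 50, 0.1
r = np.arange(n) * dr
z_true = 0.05 * r**2          # a spherical-cap-like sag profile
g = np.gradient(z_true, dr)   # directional-derivative data (from normals)

# forward-difference Jacobian A (rows: derivative equations on the grid)
A = (np.eye(n, k=1) - np.eye(n))[:-1] / dr

# constraints: z(r=0) and z at the bearing point (here: the last node)
C = np.zeros((2, n))
C[0, 0] = 1.0
C[1, -1] = 1.0
h = np.array([z_true[0], z_true[-1]])

# weighted stacked least squares, i.e. (A^T A + w^2 C^T C) z = A^T g + w^2 C^T h
w = 1e3  # heavy weight enforces the height constraints almost exactly
M = np.vstack([A, w * C])
b = np.concatenate([(g[:-1] + g[1:]) / 2, w * h])  # derivatives at midpoints
z = np.linalg.lstsq(M, b, rcond=None)[0]

print(float(np.max(np.abs(z - z_true))))  # reconstruction error
```

On the full polar grid the same structure applies, with A containing the numerical polar derivatives and C additionally encoding the polar condition z(0,φ)=z(0).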

2. The method as claimed in claim 1, further comprising: determining the reflection locations on the article for the light emitted by the light spots from intersections, with the further topography, of light rays calculated from centroids of the luminous spots of the brightness distribution.

3. An apparatus for measuring at least one of a topography, a gradient, or a curvature of an optically effective surface of an article, the apparatus comprising: a holding device configured to arrange the article on an abutment resting against the article in a recording region; a plurality of point light sources configured to illuminate the optically effective surface of the article and a recording device configured to capture a brightness distribution composed of luminous spots, the brightness distribution being caused by the light of the plurality of point light sources reflected at the surface of the article to be measured; a device configured to establish at least one of an absolute position or a relative position of the luminous spots; a device configured to establish the at least one of the absolute position or the relative position of at least one point on the optically effective surface of the article in a coordinate system that is fixed relative to the apparatus, the apparatus being configured to iteratively: assign the brightness distribution to the point light sources causing the brightness distribution and predetermine a first topography; determine reflection locations on the article for the light emitted by the point light sources; calculate surface normals {right arrow over (n)} at the reflection locations for the light emitted from the point light sources, determine directional derivatives at the surface normals {right arrow over (n)}, and interpolate the directional derivatives onto a regular polar grid; and calculate a further topography of the article by integrating the interpolated directional derivatives of the optically effective surface, the directional derivatives being determined by the surface normals {right arrow over (n)}, on the regular polar grid by solving the linear system of equations (I) reproduced below: (A.sup.TA; C.sup.TC; 1)·z=(∂.sub.r.sup.2z+∂.sub.φ.sup.2z; z(r=0); z(bearing)), (I) wherein A is a Jacobian matrix describing the numerical polar derivatives on the polar grid and C is a constraint matrix describing the height of the article, measured at bearing points, and the polar condition z(0,φ)=z(0), up to a convergence criterion.

4. The apparatus as claimed in claim 3, wherein the reflection locations for the light emitted by the point light sources on the article are determined from intersections of light rays calculated from the centroids of the luminous spots of the brightness distribution with the further topography.

5. The apparatus as claimed in claim 3, wherein the recording device comprises at least one camera having a lens assembly and an image sensor, on which the brightness distribution of light that is reflected at the optically effective surface is caused.

6. The apparatus as claimed in claim 5, wherein the lens assembly has an optical axis that is aligned with an adjustment axis.

7. The apparatus as claimed in claim 6, wherein the abutment resting against the article in a recording region is a spherical abutment.

8. The apparatus as claimed in claim 7, further comprising: a multiplicity of point light sources that provide light, the light being reflected at the optically effective surface of an article to be measured that is arranged in the recording region.

9. The apparatus as claimed in claim 5, further comprising: a computer unit having a computer program and a non-transitory memory in which the computer program is stored, wherein the computer program calculates the at least one of the topography, the gradient, or the curvature of an optically effective surface of the article with an algorithm from a brightness distribution captured by the image sensor, the brightness distribution being caused on the image sensor by light of the point light sources that is reflected at the optically effective surface to be measured.

10. The apparatus as claimed in claim 3, wherein the point light sources are arranged on a three-dimensional half-space.

11. The apparatus as claimed in claim 3, further comprising: a lens element arranged between the recording region and a lens assembly, the refractive power of the lens element being positive or negative.

12. The apparatus as claimed in claim 11, wherein the lens element has an optical axis that is aligned with the optical axis of the lens assembly.

13. The apparatus as claimed in claim 3, further comprising: a computer with a computer program stored on a non-transitory memory, the computer program determining the position of the point light sources with an optimization method when executed on the computer.

14. The method as claimed in claim 1, further comprising: determining the at least one of the absolute position or the relative position of the light spots with an optimization method.

15. A computer program, stored on a non-transitory memory, having program code which implements the method as claimed in claim 1 when the program is executed on a computer unit.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The disclosure will now be described with reference to the drawings wherein:

(2) FIG. 1 shows a first apparatus for measuring the topography and/or the gradient and/or the curvature of an article in the form of a spectacle lens using point light sources;

(3) FIG. 2 shows a brightness distribution on an image sensor of a camera in the apparatus;

(4) FIG. 3 shows the influence of manufacturing tolerances on apparatus constants of the apparatus;

(5) FIG. 4 and FIG. 5 show the spatial relationship of the reflection points for the light at the article in the form of a spectacle lens, captured on the image sensor, and the relative position of the point light sources in the apparatus;

(6) FIG. 6A and FIG. 6B show the assignment of reflection points for the light, captured on the image sensor, from different point light sources in the apparatus;

(7) FIG. 7 shows the establishment of the abutment point of an optically effective surface of an article to be measured, at an abutment in the apparatus;

(8) FIG. 8A shows the image of a disk-shaped test object arranged in the apparatus for the purposes of determining machine parameters or apparatus constants;

(9) FIG. 8B shows a brightness distribution, captured by the image sensor of the apparatus, of reflections of the light of the point light source at a sphere, for the purposes of determining machine parameters or apparatus constants of the apparatus;

(10) FIG. 8C shows the image of a white disk in the apparatus, for the purposes of determining machine parameters or apparatus constants of the apparatus;

(11) FIG. 9 shows a second apparatus for measuring the topography of an article in the form of a spectacle lens;

(12) FIG. 10 shows the influence of manufacturing tolerances on apparatus constants of the apparatus;

(13) FIG. 11 and FIG. 12 show the spatial relationship of the reflection points, captured on an image sensor in the apparatus, for the light at the article to be measured and the absolute and/or relative position of point light sources.

DESCRIPTION OF EXEMPLARY EMBODIMENTS

(14) The apparatus 10 that is shown in FIG. 1 is configured to measure the topography and/or the gradient and/or the curvature of an article 12 in the form of a spectacle lens. The apparatus 10 has a recording region 14, in which the article 12 can be arranged on a three-point bearing 16. The three-point bearing 16 is a device for arranging the article 12 in the recording region 14. The three-point bearing 16 holds the article 12 on spherical abutments 17 at three points on an optically effective surface 18 to be measured.

(15) The apparatus 10 comprises a multiplicity of point light sources 20 in the form of light-emitting diodes (LEDs), which, when activated, emit light in the ultraviolet spectral range at a wavelength of 365 nm. These light-emitting diodes are embodied as surface-mounted devices (SMDs) and are arranged on printed circuit boards 22, 22, 22, which are assembled to form a carrier structure 24. Approximately 2000 point light sources 20 are situated on the carrier structure 24 with the printed circuit boards 22, 22, 22.

(16) In the apparatus 10, the point light sources 20 are positioned on a hypersurface 26 in the form of a lateral surface of a polyhedron. The carrier structure 24 assembled from the printed circuit boards 22, 22, 22 has an edged form and therefore great vibrational stability. In the apparatus 10, the hypersurface 26 delimits a three-dimensional half-space 28, which has an open edge 30. Ultraviolet (UV) light at a wavelength of 365 nm is absorbed in the material of commercially available spectacle lenses made of plastic. This ensures that the light of the point light sources 20 does not cause reflections at the optically effective surface 32, which, in the recording region 14, lies opposite the optically effective surface 18 of an article 12 in the form of a spectacle lens, that would disturb the measurement of the optically effective surface 18.

(17) To measure spectacle lenses made of quartz glass in the apparatus 10, the point light sources 20 produce light with a wavelength of 300 nm or less. Light of this wavelength makes it possible to suppress bothersome reflections of the light of the point light sources 20 from the rear side of an article 12 in the form of a spectacle lens.

(18) In principle, metallic surfaces and lacquered surfaces can also be measured in the apparatus 10, in which case the wavelength of the light of the point light sources 20 expediently lies in the visible spectral range. For measuring rough surfaces whose roughness satisfies Rz>1 μm, it is advantageous if the point light sources 20 emit light whose wavelength lies in the infrared spectral range, since rough surfaces have a mirror effect for this light.

(19) The apparatus 10 contains a camera 34 with an optical imaging system 36, which has a stop 38, a lens assembly 40 with lenses and a UV band-elimination filter 41. The optical imaging system 36 has an optical axis 42 that passes through the recording region 14. The camera 34 is arranged on the side of the carrier structure 24 with the printed circuit boards 22, 22, 22 facing away from the recording region 14. It captures an article 12 in the form of a spectacle lens, which is positioned in the recording region 14, through a recess 44 in the printed circuit board 22 of the carrier structure 24.

(20) The camera 34 serves to capture a brightness distribution, which is caused on an image sensor 46 by the light of the point light sources 20 that is reflected at the surface 18 to be measured. With the light beam 48, the light from a point light source 20 reaches the surface 18 of the article 12 in the form of a spectacle lens, which is arranged in the recording region 14. At the point 50 with the coordinates (X.sub.S,Y.sub.S,Z.sub.S), the light, incident at the angle of incidence α.sub.E in relation to the surface normal {right arrow over (n)}, is reflected at the angle of reflection α.sub.A=α.sub.E according to the law of reflection. This light then reaches the camera 34 as a light beam 48.

(21) Consequently, the brightness distribution on the image sensor 46 contains information about the inclination of the tangential planes at the surface 18 of the article 12 in the form of a spectacle lens to be measured, at those points at which the light of the point light sources 20 is reflected in such a way that it is caught by the camera 34.

(22) The imaging system 36 of the camera 34 is set such that a brightness distribution captured on the image sensor 46 is not vignetted by the recess 44 and the depth of field is large enough that the brightness distributions for optically effective surfaces of articles 12 in the form of spectacle lenses with different surface topographies, measured in the apparatus 10, can be resolved on the image sensor 46. To this end, the image sensor 46 can be displaced linearly along an adjustment axis 52 passing through the lens assembly 40, as indicated by the double-headed arrow 54.

(23) The arrangement of the point light sources 20 on the carrier structure 24 in the apparatus 10 ensures that the topography of an optically effective surface 18 can be measured with a good resolution, even if the surface is inclined at an angle of up to 45° in relation to the optical axis 42 of the imaging system 36 of the camera 34. This is because the light of the point light sources 20 is then reflected in such a way that it is supplied to the image sensor 46 in the camera 34 through the recess 44 in the carrier structure 24.

(24) The computer unit 56 is a device for activating different point light sources 20. It serves to capture the brightness distribution caused on the image sensor 46 of the camera 34 by the light of the point light sources 20 that was reflected at the optically effective surface 18.

(25) FIG. 2 shows such a brightness distribution on the image sensor 46. The brightness distribution has a multiplicity of luminous spots 57, each with a centroid 58 as the geometric center of the luminous spot; the location (x, y) thereof in the plane of the image sensor 46 contains the information about the topography and/or the gradients and/or the curvature of an optically effective surface 18 of the article 12.

(26) To evaluate a brightness distribution captured by the image sensor 46 in the camera 34, the computer unit 56 in the apparatus 10 contains a computer program, which is stored in a memory of the computer unit 56. For a light beam 48 captured by the camera 34 on the image sensor 46, the light beam 48 emanating from a point 50 on the optically effective surface 18 of the article 12 arranged in the recording region 14, the computer program, using a processor of the computer unit 56, calculates the surface normal {right arrow over (n)} at this point on the basis of machine parameters of the apparatus 10 that were established in a calibration method. The topography of the surface 18 is then calculated by integration and interpolation in an iteration method, drawing on known coordinates of at least one point of the surface 18 and on the brightness distribution captured by the image sensor 46. To this end, the computer program in the computer unit 56 uses computer algorithms that depend on the apparatus constants, or machine parameters, of the apparatus 10. The computer program and the computer unit 56 also facilitate the calculation of the local refractive powers of the optically effective surface 18 from the calculated topography of the article 12, as well as the calculation of the curvature and the gradient of the optically effective surface 18.

(27) FIG. 1 shows that the number of point light sources 20 in a solid angle element 60 of the unit sphere with the solid angle coordinates θ (polar angle) and φ (azimuth angle) in a spherical coordinate system 62, the origin of which lies at the center of symmetry 64 of the three-point bearing 16 and which has a north pole 68 pointing to the camera 34, decreases with increasing polar angle θ.

(28) By contrast, the number of point light sources 20 in a solid angle element 60 about the center of symmetry 64 of the three-point bearing 16 is substantially independent of the azimuth angle φ for a given polar angle θ.

(29) In order, in the apparatus 10, to determine the surface normals at N.sup.2 nodes 66, which are arranged equidistantly from one another, of an optically effective surface 32 of an article 12 with the radius of curvature R, which is arranged in the recording region 14, it is necessary that the central projections of two neighboring point light sources 20 on the carrier structure 24 onto a hemisphere 69 with the radius R.sub.H, which is spanned about the center of symmetry 64 of the three-point bearing 16, have the following distance d(θ) in the direction of the polar angle θ:

(30) d(θ)≤πR.sub.H/(2N)≈(R.sub.H/(4N))·sin.sup.−1(BF/(2R))

(31) In the apparatus 10, the arrangement of the point light sources 20 is adapted to a predetermined number of nodes by virtue of the distance d(θ) being reduced in the interior of the hemisphere 69 and continuously increasing toward the outside.

(32) When seen from each point lying within the recording region 14, the point light sources 20 in the apparatus 10 are distributed over a solid angle Ω which satisfies the relationship Ω≥π sr, typically Ω≈1.8π sr.

(33) In an alternative exemplary embodiment of the apparatus 10, the recording region 14 is displaceable relative to the arrangement of the point light sources 20 received on the carrier structure 24. In this way, the solid angle over which the point light sources 20 are distributed, as seen from points lying within the recording region 14, can be varied.

(34) To be able to accurately calculate the topography of the optically effective surface 18 of an article 12 in the form of a spectacle lens with a computer program in the computer unit 56, the machine parameters, i.e., apparatus constants, of the apparatus 10 listed below must be known:

(35) size of the light-sensitive area of the image sensor 46 and the spacing of the light-sensitive sensor pixels;

(36) scale of the optical imaging of the camera 34;

(37) spatial absolute and/or relative position of the image sensor 46 of the camera 34 in relation to the optical axis 42 of the optical imaging system 36 of the camera 34;

(38) distortion of the optical imaging system 36 of the camera 34;

(39) lateral displacement of the camera;

(40) position, i.e., the coordinates, of each point light source 20 in a coordinate system 78 that is fixed in relation to the apparatus and referenced to the image sensor 46 of the camera 34; and

(41) coordinates of the center and radius of the spherical abutment in a coordinate system 78 that is fixed in relation to the apparatus and referenced to the image sensor 46 of the camera 34.

(42) The aforementioned apparatus constants can have values that differ in each individual apparatus 10, even in the case of apparatuses 10 which, in principle, have the same construction; this is due to different conditions of individual assemblies inserted therein and due to manufacturing tolerances.

(43) Thus, for an apparatus 10 for measuring the topography of an article 12 in the form of a spectacle lens, as shown in FIG. 1, these apparatus constants must be determined in a calibration method before the apparatus can be used to measure the topography and/or the gradient and/or the curvature of an optically effective surface of the article 12 with corresponding accuracy.

(44) FIG. 3 shows the influence of different manufacturing tolerances on different apparatus constants of the apparatus 10. On a microscopic scale, the optical axis 42 of the optical imaging system 36 of the camera 34 is arranged offset to the center 70 of the image sensor 46. In general, the axis of symmetry 72 of the carrier structure 24 differs from the optical axis 42 of the optical imaging system 36 of the camera 34 and from the axis of symmetry 74 of the three-point bearing 16. The point light sources 20 emit the light at a distance h from the surface of a printed circuit board 22. The printed circuit board 22 with the recess 44 is arranged at the distance d from the stop 38 of the optical imaging system 36 of the camera 34. The image sensor 46 is positioned at the distance z from the stop 38 of the optical imaging system 36 of the camera 34. The center of symmetry 64 of the three-point bearing 16 has the distance a from the plane of the stop 38 of the optical imaging system 36 of the camera 34.

(45) Exact knowledge of the apparatus constants is obligatory so that a high measurement accuracy can be obtained with the apparatus for measuring the topography of an article 12 in the form of a spectacle lens.

(46) Thus, knowledge of the size of the light-sensitive area of the image sensor 46 and of the spacing of the light-sensitive sensor pixels, together with knowledge of the imaging scale of the imaging system 36 of the camera 34, is necessary to deduce the absolute and/or relative position of a point 50 on the optically effective surface 18 of an article 12 in the form of a spectacle lens, at which point a light beam 48 that is incident on the article 12 is reflected.

(47) Knowledge about the distortion of the optical imaging system 36 of the camera 34 is necessary to be able to specify an imaging scale in relation to the light points captured by the image sensor 46.

(48) The knowledge of the position, i.e., the coordinates, of a spherical abutment in a coordinate system 78 that is fixed in relation to the apparatus and referenced to the image sensor 46 of the camera 34 is necessary in order to be able to specify the coordinates of a point 50 on the optically effective surface 18 of the article 12 in the form of a spectacle lens when the article 12 is received in the three-point bearing 16.

(49) Knowledge of the position, i.e., the coordinates, of each point light source 20 in a coordinate system 78 that is fixed in relation to the apparatus and referenced to the image sensor 46 of the camera 34 and knowledge of the spatial absolute and/or relative position of the image sensor 46 of the camera 34 in relation to the optical axis 42 of the optical imaging system 36 of the camera 34 is required in order, by means of triangulation, to be able to deduce the accurate absolute and/or relative position of points of reflection on the optically effective surface 18 of an article in the form of a spectacle lens.

(50) The exact calculation of the topography of the optically effective surface 18 of an article 12 in the form of a spectacle lens is explained below on the basis of FIG. 4 and FIG. 5.

(51) The computer program in the computer unit 56 of the apparatus makes use of an algorithm for the purposes of calculating the topography of the optically effective surface 18 of the article 12 in the form of a spectacle lens, the algorithm carrying out the following steps:

(52) Step 1: determining the centroids 58 of the individual luminous spots 57 in a brightness distribution on the light-sensitive surface of the image sensor 46.

(53) Step 2: assigning the centroids 58 of a luminous spot 57 in a brightness distribution on the light-sensitive surface of the image sensor 46 to a certain point light source 20.

(54) Step 3: determining the point 50 of the reflection on the optically effective surface 18 of the article 12 for each centroid 58.

(55) Step 4: calculating the surface normal {right arrow over (n)} on the optically effective surface 18 of the article 12 for each centroid 58.

(56) Step 5: calculating the topography, e.g., as form and/or sag s of the optically effective surface 18, by integrating the surface normal {right arrow over (n)}, i.e., by integrating the directional derivatives of the surface 18 determined by the surface normals {right arrow over (n)}.

(57) Step 6: calculating the local curvatures of the optically effective surface 18 of the article 12 by differentiating the surface normal {right arrow over (n)}, i.e., by differentiating the entries of the vector of the surface normal {right arrow over (n)}.

(58) Subsequent steps: iteratively repeating steps 4 to 6 until a convergence criterion is met.

(59) The convergence criterion means that the iterative steps 4 to 6 are repeated until the norm of the difference between the topographies calculated in two successive iterations is less than a predetermined tolerance.
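The iteration of steps 4 to 6 with this convergence criterion can be sketched as a generic fixed-point loop. This is a minimal illustration, not the patent's algorithm: the update function here is a simple relaxation toward a fixed target surface, standing in for the real recomputation of normals and integration; all names are illustrative.

```python
import numpy as np

# Minimal sketch of the convergence loop: repeat an update until the norm
# of the difference between two successive topographies is below `tol`.

def iterate_topography(update, z0, tol=1e-8, max_iter=1000):
    z = z0
    for k in range(1, max_iter + 1):
        z_new = update(z)
        if np.linalg.norm(z_new - z) < tol:   # convergence criterion (59)
            return z_new, k
        z = z_new
    return z, max_iter

# illustrative stand-in update: relaxation toward a known target surface
target = np.linspace(0.0, 1.0, 32) ** 2
z0 = np.zeros(32)
z_final, n_iter = iterate_topography(lambda z: z + 0.5 * (target - z), z0)
print(n_iter, float(np.linalg.norm(z_final - target)))
```

In the apparatus, `update` would redetermine the reflection locations, recompute the surface normals, and integrate the directional derivatives on the polar grid.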

(60) To determine the absolute and/or relative position of the images of the individual point light sources 20, captured by the camera 34, the centroids 58 are calculated as the intensity centroids of the intensity distributions of the individual luminous spots 57 caused by the light of a point light source 20 on the image sensor 46. Here, a luminous spot 57 is defined by a pixel neighborhood that is provided by a manual or automatically calculated filter. The centroid is then calculated by moving each (x, y) position to the closest center by means of gradient ascent or a mean shift. In order to shorten the computation time required for this, the signal of the image sensor 46 is filtered so as to free the image signal of the image sensor 46 from bothersome noise and insignificant background signals. This then yields the (X, Y) coordinates of the images of the individual point light sources 20 in the plane of the light-sensitive surface of the image sensor 46.

(61) These coordinates can be estimated with sub-pixel accuracy. In the case of 2000 point light sources and a camera with 4 megapixels, this yields an advantageous data reduction of approximately 4,000,000/4,000 = 1,000, i.e., 1000:1. After estimating the centroids from the image data, the (X, Y) positions are assigned to the point light sources 20.
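The sub-pixel centroid estimation of paragraph (60) can be sketched as an intensity-weighted mean over a thresholded pixel neighborhood. The threshold stands in for the filter mentioned in the text, and the synthetic Gaussian spot and all names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: estimate the intensity centroid of one luminous spot
# with sub-pixel accuracy. Thresholding suppresses noise and background;
# the centroid is the intensity-weighted mean of the pixel coordinates.

def spot_centroid(image, threshold):
    mask = image > threshold                      # pixel neighborhood of the spot
    ys, xs = np.nonzero(mask)
    w = image[ys, xs].astype(float)               # intensity weights
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# synthetic Gaussian spot centered at (10.3, 6.7) on a small sensor patch
yy, xx = np.mgrid[0:16, 0:24]
img = np.exp(-((xx - 10.3) ** 2 + (yy - 6.7) ** 2) / 4.0)
cx, cy = spot_centroid(img, 0.05)
print(cx, cy)
```

The recovered centroid lies between pixel positions, which is what enables the roughly 1000:1 data reduction from a 4-megapixel image to one coordinate pair per point light source.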

(62) On account of the reflection at the optically effective surface 18 of the article 12, the brightness distribution caused by the point light sources 20 on the light-sensitive surface of the image sensor 46 is generally distorted. However, the relative arrangement of the luminous spots 57 is maintained in a grid network that is defined by the arrangement of the point light sources 20 on the carrier structure 24. Because the article 12 is illuminated in the apparatus 10 by a special sequence or group of point light sources 20 (formed by individual point light sources 20 or by a ring of point light sources 20 that are closest to the camera 34), at least one intensity distribution on the light-sensitive surface of the image sensor 46 can be assigned directly to a certain point light source 20. Proceeding from this point light source 20, the following point light sources 20 are then assigned on account of their neighborhood relationships, and so a list of origins arises.

(63) FIG. 6A and FIG. 6B explain the assignment, by the computer program in the computer unit 56, of individual luminous spots 57 on the image sensor 46 to the point light sources 20 underlying these luminous spots 57. This assignment is implemented incrementally. In each assignment step, the computer program makes use of a list of point light sources 20 that are active and lie in a known grid network. With knowledge of the grid network 75 shown in FIG. 6A, the luminous spots 57 of adjacent point light sources 20 observed on the image sensor 46, shown in FIG. 6B, are assigned to one another. By way of example, this can be initiated on the basis of a so-called origin list. Alternatively, the assignment can be undertaken on the basis of a list of point light sources 20 that have already been assigned, likewise initiated with the origin list. After the assignment step for the neighbors, all newly assigned point light sources 20 are then adopted in the list of active point light sources 20, the old entries of this list being copied into the list of already assigned point light sources 20. Care is taken that no point light source 20 appears in both lists. This leads to a spatial expansion of the assignment on the two-dimensional grid of point light sources 20 and even allows the assignment to proceed if no point light sources are situated at individual grid points of the grid.

(64) For a luminous spot 57 that is assigned to an active point light source 20, the closest neighbors must correspond to the adjacent point light sources 20. To this end, as shown in FIG. 6A, a simple Euclidean distance is taken as the neighborhood relationship. The respective active point light source 20 then forms the origin of a local coordinate system. Each neighborhood point corresponds to a vector, to which a corresponding vector of the reference must be assigned. This assignment is implemented by minimizing the sum over the vector dot products, corresponding to a minimum overall angular distortion of the assignment. By way of example, this minimization can be implemented by applying the so-called Hungarian method on the basis of the Kuhn-Munkres algorithm.
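The Hungarian (Kuhn-Munkres) matching of neighbor vectors can be sketched with SciPy. This is a minimal illustration: the direction vectors, the angular cost, and all names are assumptions, not the patent's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Minimal sketch: match observed neighbor direction vectors to reference
# grid directions so that the total angular distortion is minimal.

ref = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], float)  # reference grid directions
obs = np.array([[0.1, 0.95], [-0.9, 0.1], [0.05, -1.1], [1.05, 0.1]], float)

ref_u = ref / np.linalg.norm(ref, axis=1, keepdims=True)
obs_u = obs / np.linalg.norm(obs, axis=1, keepdims=True)

# cost = 1 - cos(angle); minimizing it maximizes the sum of the dot products
cost = 1.0 - obs_u @ ref_u.T
rows, cols = linear_sum_assignment(cost)        # Kuhn-Munkres / Hungarian method
print(list(cols))  # reference index matched to each observed vector
```

Because each observed direction is a slightly perturbed grid direction, the optimal assignment recovers the underlying grid neighborhood despite the distortion of the reflected pattern.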

(65) Alternatively, the corresponding assignment of luminous spots on the image sensor 46 to point light sources 20 can be carried out by recording image sequences. Here, point light sources 20 are actuated in groups or individually, and a sequence of images is recorded with the image sensor 46. The assignment is then implemented by analyzing the corresponding image sequence. In this way, all point light sources 20 can be numbered in their entirety by way of their sequence code, i.e., on-off, for example. Alternatively, the point light sources can be provided with a code only locally; this is then a refinement of the assignment algorithm described above. It should be noted that the corresponding prescription for the local assignment can vary in space or time.

(66) For the purposes of a unique global assignment of all possible luminous spots to all point light sources 20 in the apparatus 10, log₂(number of point light sources) individual images are required in a sequence. By way of example, if the apparatus 10 for measuring the topography contains 2048 point light sources 20, eleven individual images are required for a global assignment and three individual images are required for a local assignment.
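The relationship between the number of point light sources and the length of the on/off image sequence can be sketched as follows (the function name and the example counts of 2048 globally coded sources and 8 locally coded neighbors are illustrative):

```python
import math

def images_needed(num_sources: int) -> int:
    """Number of on/off images in a sequence needed to give every point
    light source a unique binary code."""
    return math.ceil(math.log2(num_sources))

global_images = images_needed(2048)  # eleven images for 2048 sources
local_images = images_needed(8)      # three images for a local 8-source code
```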

(67) Taking account of the imaging scale and the optional distortion correction, the incident light ray $\vec{l}$ is then calculated for each centroid 58 using the computer program of the computer unit 56. Here, after taking account of the distortion, the optics of the imaging system 36 of the camera 34 is modeled as a thick lens. The incident light ray is then described as follows:

(68) $\vec{l} = \vec{l}_0 + m\,\vec{l}_n = \begin{pmatrix} 0 \\ 0 \\ -a \end{pmatrix} + m \begin{pmatrix} -x_c \\ -y_c \\ a\,s \end{pmatrix}$

(69) Here, the optically effective surface 18 of the article 12 in the form of a spectacle lens is provided as a numerical model surface, which is defined either by the preceding analysis iteration or by initiation as a plane. The location of the reflection is therefore given by the intersection of the ray with this surface and is determined numerically by an optimization according to the parameter m. Here, the Euclidean distance of the ray point $\vec{l}(m)$ to the surface is minimized. The nearest point of the surface is then calculated by a nearest-neighbor interpolation.
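The numerical determination of the reflection location along the incident ray can be sketched roughly as follows. The sketch finds the ray parameter m by bisection against a model surface given as a height function z(x, y), and assumes the gap between ray height and surface height changes sign on the search interval; the text describes a more general distance minimization over m. All numbers are illustrative.

```python
def reflection_location(l0, ln, surface, m_lo=0.0, m_hi=10.0, iters=60):
    """Ray parameter m at which l(m) = l0 + m*ln meets z = surface(x, y),
    found by bisecting on the signed height gap (a simple stand-in for the
    distance minimization described in the text)."""
    def gap(m):
        x, y, z = (l0[i] + m * ln[i] for i in range(3))
        return z - surface(x, y)

    lo, hi = m_lo, m_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gap(lo) * gap(mid) <= 0:
            hi = mid
        else:
            lo = mid
    m = 0.5 * (lo + hi)
    return tuple(l0[i] + m * ln[i] for i in range(3))

# Initiation as a plane z = 0; a ray starting at the stop at (0, 0, -a).
plane = lambda x, y: 0.0
point = reflection_location((0.0, 0.0, -5.0), (-0.3, -0.4, 5.0), plane)
```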

(70) A surface normal $\vec{n}$ of the reflecting surface emerges as follows from the reflection equations:

(71) $\vec{n} = \dfrac{\vec{r}_n - \vec{l}_n}{\left\lVert \vec{r}_n - \vec{l}_n \right\rVert}$

(72) Here, $\vec{r}_n$ is the direction that connects a point light source 20 and the point 50 on the optically effective surface 18 of the article 12 on which an incident ray impinges, wherein the following applies:

(73) $\vec{r} = \begin{pmatrix} x_s \\ y_s \\ z(x_s, y_s) \end{pmatrix} + m\,\vec{r}_n \quad\text{and}\quad \vec{l} = \vec{l}_0 + m\,\vec{l}_n = \begin{pmatrix} 0 \\ 0 \\ -a \end{pmatrix} + m \begin{pmatrix} -x_c \\ -y_c \\ a\,s \end{pmatrix}$

(74) By virtue of the surface of the test object being parameterized as a surface function z(x_s, y_s), the following emerges for the surface normal $\vec{n}$:

(75) $\vec{n} = \begin{pmatrix} -\partial_x z \\ -\partial_y z \\ 1 \end{pmatrix} = \begin{pmatrix} -z_x \\ -z_y \\ 1 \end{pmatrix}$
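Equations (71) and (75) can be sketched together as follows, assuming unit incident and reflected directions; the 45-degree mirror used as a check, and all values, are illustrative:

```python
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def surface_normal(l_n, r_n):
    """Normal of the reflecting surface from the unit incident direction l_n
    and the unit reflected direction r_n, per equation (71)."""
    ln, rn = unit(l_n), unit(r_n)
    return unit(tuple(r - l for r, l in zip(rn, ln)))

def slopes(n):
    """Surface derivatives z_x, z_y from a normal proportional to
    (-z_x, -z_y, 1), per equation (75)."""
    return -n[0] / n[2], -n[1] / n[2]

# A 45-degree mirror: light incident straight down is reflected along +x.
n = surface_normal((0.0, 0.0, -1.0), (1.0, 0.0, 0.0))
zx, zy = slopes(n)  # slope -1 in x, 0 in y, i.e., the surface z = -x
```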

(76) This equation is then solved directly for the derivatives z_x and z_y of the optically effective surface 18 of the article 12 in the form of a spectacle lens, i.e., the surface of the article. Hence, the derivatives of the surface at the locations (x_s, y_s) are obtained; however, as a rule, these locations are distributed in irregular fashion over the surface of the article 12 on account of the distortion. For integration purposes, the derivatives are then initially interpolated onto a regular polar grid, which makes the method robust against individual missing points. On the polar grid, the x, y-derivatives are converted to polar derivatives, i.e., to derivatives according to radius r and angle φ, in accordance with the chain rule of differential calculus. The numerical, polar derivative on the polar grid is then described by a Jacobian matrix A, which corresponds to the so-called N-point finite-difference method. Here, by implementing Neumann boundary conditions, the edge points are modeled as a one-sided derivative. In addition to the Jacobian matrix, a constraint matrix C is defined, which describes the measured height of the test object at the bearing points and the polar condition z(0, φ) = z(0). The topography of the surface z then emerges from the solution of the linear system of equations:

(77) $\begin{pmatrix} A^T A & C^T \\ C & \mathbb{1} \end{pmatrix} z = \begin{pmatrix} \partial_r^2 z + \partial_\varphi^2 z \\ z(r = 0) \\ z(\text{bearing}) \end{pmatrix}$
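A minimal one-dimensional analogue of this least-squares integration may look as follows. It replaces the polar grid and the Jacobian matrix A of the text by simple forward differences on a line grid and uses a single bearing constraint; this is a pure-Python sketch, not the patent's implementation.

```python
def integrate_derivatives(g, h, z0):
    """Recover heights z on a 1-D grid from derivative samples g, as a 1-D
    analogue of system (77): A holds forward differences
    (z[i+1] - z[i]) / h = g[i], and one constraint row C pins z[0] = z0."""
    n = len(g) + 1
    # Normal equations M z = rhs with M = A^T A + C^T C, rhs = A^T g + C^T z0.
    M = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    w = 1.0 / (h * h)
    for i, gi in enumerate(g):
        M[i][i] += w
        M[i + 1][i + 1] += w
        M[i][i + 1] -= w
        M[i + 1][i] -= w
        rhs[i] -= gi / h
        rhs[i + 1] += gi / h
    M[0][0] += 1.0  # constraint row C: z[0] = z0
    rhs[0] += z0
    # Solve by Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for row in range(col + 1, n):
            f = M[row][col] / M[col][col]
            rhs[row] -= f * rhs[col]
            for c in range(col, n):
                M[row][c] -= f * M[col][c]
    z = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = rhs[row] - sum(M[row][c] * z[c] for c in range(row + 1, n))
        z[row] = s / M[row][row]
    return z

# Midpoint derivatives of z = x^2 are reproduced exactly by forward
# differences, so the integration recovers z on the grid x = 0..4 exactly.
g = [2.0 * (i + 0.5) for i in range(4)]  # dz/dx at x = 0.5, 1.5, 2.5, 3.5
z = integrate_derivatives(g, 1.0, 0.0)
```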

(78) As an alternative thereto, the integration can also be implemented by a fit of local basis functions. Here, the surface is modeled as a linear combination of basis functions, e.g., as a linear combination of biharmonic splines.

(79) The derivative of this linear combination then is the linear combination of the derivatives of the basis functions. This leads to a linear system of equations, which can be solved for the weights of the linear combination. The constraint of the bearing should also be taken into account here, as described above. The final surface is then the linear combination of the established weights with the basis functions.

(80) Then, the local curvatures of the surface are calculated directly from the surface inclinations z_x and z_y by way of the differential-geometric relationships. By way of example, the Gaussian curvature K emerges as:

(81) $K = \dfrac{\partial_x^2 z\,\partial_y^2 z - \left(\partial_{xy}^2 z\right)^2}{\left(1 + (\partial_x z)^2 + (\partial_y z)^2\right)^2}$
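Equation (81) can be evaluated numerically from sampled heights, e.g., by central finite differences. The following sketch checks it against a sphere, whose Gaussian curvature is 1/R² everywhere; the step size and test point are illustrative.

```python
import math

def gaussian_curvature(z, x, y, h=1e-4):
    """Gaussian curvature K of the graph z(x, y) at (x, y), per equation
    (81), with derivatives taken as central finite differences."""
    zx = (z(x + h, y) - z(x - h, y)) / (2 * h)
    zy = (z(x, y + h) - z(x, y - h)) / (2 * h)
    zxx = (z(x + h, y) - 2 * z(x, y) + z(x - h, y)) / h**2
    zyy = (z(x, y + h) - 2 * z(x, y) + z(x, y - h)) / h**2
    zxy = (z(x + h, y + h) - z(x + h, y - h)
           - z(x - h, y + h) + z(x - h, y - h)) / (4 * h**2)
    return (zxx * zyy - zxy**2) / (1 + zx**2 + zy**2) ** 2

# Sphere of radius R = 10: K should come out close to 1/R^2 = 0.01.
R = 10.0
sphere = lambda x, y: math.sqrt(R * R - x * x - y * y)
K = gaussian_curvature(sphere, 1.0, 2.0)
```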

(82) The topography of the optically effective surface 18 of the article 12 in the form of a spectacle lens is finally established in an iterative method. After initializing z with a standard form, e.g., a plane or a sphere with a predefined radius, the derivative is calculated and the topography is newly determined by an integration. These steps are repeated until the determined topography changes by less than a termination criterion. The convergence of the method is ensured by virtue of the distance a between the principal planes of the article 12 in the form of a spectacle lens and of the imaging system 36 of the camera 34 being large, and so the Banach fixed-point theorem applies in the present case.
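The structure of this iteration, a fixed-point loop with a termination criterion whose convergence rests on a contraction property, can be sketched generically as follows; the step function below is a toy contraction standing in for the actual derivative/integration step of the text.

```python
def iterate_topography(z0, step, tol=1e-10, max_iter=100):
    """Outer loop of the reconstruction: repeat the update step until the
    result changes by less than the termination criterion. By the Banach
    fixed-point theorem, a contractive step guarantees convergence."""
    z = z0
    for _ in range(max_iter):
        z_new = step(z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Toy contraction with Lipschitz constant 0.5; its fixed point is z = 2.
fixed = iterate_topography(0.0, lambda z: 0.5 * z + 1.0)
```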

(83) An accurate calculation of the topography of the optically effective surface 18 of an article 12 in the computer unit 56 requires the accurate knowledge of the position of a point on the surface of the test object in a coordinate system that is referenced to the image sensor 46 of the camera 34.

(84) As explained below on the basis of FIG. 7, the abutment points of the article 12 in the form of a spectacle lens at a spherical abutment 17 of the three-point bearing 16 are established by calculation in the computer unit 56 as follows:

(85) The optically effective surface 18 of the article 12 is locally modeled as a spherical surface in the region of a spherical abutment of the three-point bearing 16 in a first step.

(86) Thereupon, the contact point of this spherical surface with the spherical surface of the corresponding spherical abutment is determined on the basis of a known radius and on the basis of a known center of the spherical abutment.
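Under the local sphere model of the two steps above, the contact point follows from elementary geometry: it lies on the line connecting the two sphere centers. A sketch with hypothetical numbers for the externally tangent case:

```python
import math

def contact_point(c_lens, r_lens, c_abut, r_abut):
    """Contact point of the locally spherical lens surface with a spherical
    abutment of known center and radius: the point on the line of centers at
    distance r_abut from the abutment center (externally tangent case)."""
    d = math.dist(c_lens, c_abut)
    # Unit vector from the abutment center toward the lens sphere center.
    u = tuple((cl - ca) / d for cl, ca in zip(c_lens, c_abut))
    return tuple(ca + r_abut * ui for ca, ui in zip(c_abut, u))

# Hypothetical numbers: abutment sphere of radius 2 at the origin, lens
# sphere of radius 8 centered 10 above it; contact is 2 above the origin.
p = contact_point((0.0, 0.0, 10.0), 8.0, (0.0, 0.0, 0.0), 2.0)
```

For a concave lens surface resting over the abutment, the spheres are internally tangent and the sign of the offset changes accordingly.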

(87) The machine parameter of the size of the light-sensitive surface of the image sensor 46 and the spacing of the light-sensitive sensor pixels arranged thereon emerges from the usual manufacturer's information in relation to image sensors or cameras.

(88) In order to establish the following machine parameters or apparatus constants:

(89) scale of the optical imaging of the camera 34;

(90) spatial absolute and/or relative position of the image sensor 46 of the camera 34 in relation to the optical axis 42 of the optical imaging system 36 of the camera 34;

(91) distortion of the optical imaging system 36 of the camera 34;

(92) lateral displacement of the image sensor 46 of the camera 34;

(93) position, i.e., the coordinates, of each point light source 20 in a coordinate system 78 that is fixed in relation to the apparatus and referenced to the image sensor 46 of the camera 34; and

(94) coordinates of the center and radius of the spherical abutment in a coordinate system 78 that is fixed in relation to the apparatus and referenced to the image sensor 46 of the camera 34;

(95) the apparatus 10 is operated in a calibration mode.

(96) In detail, the following steps are implemented to this end:

(97) Scale

(98) To establish the scale of the optical imaging of the camera 34, the calibration disk in the form of a disk-shaped test object 76, shown in FIG. 8A, is measured using the apparatus 10. This test object 76 is a blackened metal plate, in which a defined, precisely known pattern has been introduced. For measurement purposes, the test object 76 is inserted into the three-point bearing 16 of the apparatus 10. Then, different images of the test object 76 are recorded by the camera 34, the position of the image sensor 46 of the camera 34 being displaced in the direction of the adjustment axis 52 according to the double-headed arrow 54 while the images are being recorded. Here, in accordance with the position of the image sensor 46 of the camera 34, the image of the pattern of the test object 76 changes on the image sensor 46 on account of the change in the imaging scale connected to the displacement.
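The imaging scale obtained from such a recording can be sketched as the ratio of the pattern size measured on the sensor to its known physical size; the pixel pitch and dimensions below are hypothetical:

```python
def imaging_scale(pattern_mm, pattern_px, pixel_pitch_mm):
    """Imaging scale (magnification) of the camera from the known physical
    size of a test-pattern feature and its measured size on the sensor."""
    return pattern_px * pixel_pitch_mm / pattern_mm

# Hypothetical numbers: a 20 mm pattern feature spans 1600 pixels of 5 um
# pitch, i.e., 8 mm on the sensor, giving a magnification of 0.4.
beta = imaging_scale(20.0, 1600, 0.005)
```

Repeating this measurement at several positions of the image sensor along the adjustment axis yields the scale as a function of the sensor position, as the text describes.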

(99) Distortion

(100) To establish the distortion of the optical imaging system 36 of the camera 34, the imaging scale, the image distance and the object distance are determined for different positions of the image sensor 46 along the adjustment axis 52 thereof from known information in relation to the size of the light-sensitive surface of the image sensor 46 and the spacing of the light-sensitive sensor pixels arranged thereon.

(101) As an alternative or in addition thereto, it is also possible to evaluate local deviations between the image of the disk-shaped test object 76 and the known, actual form of the disk-shaped test object 76 in order thus to determine a distortion correction for the imaging optical unit by virtue of a correction vector being calculated for each sensor pixel of the image sensor 46 of the camera 34. This correction vector then renders it possible to deduce corresponding real spatial coordinates from the image data in relation to an article 12 recorded by the camera 34.

(102) Moreover, it is alternatively or additionally also possible to deduce the exact position of the image sensor 46 relative to the optical axis 42 of the imaging system 36 of the camera 34 from recordings of the disk-shaped test object 76 with the camera 34 at different positions of the image sensor 46 along the adjustment axis 52 thereof.

(103) Lateral Displacement of the Image Sensor, Magnification and Distortion

(104) To establish the lateral displacement of the image sensor 46 of the camera 34 relative to the optical axis 42, a plurality of images of the disk-shaped test object 76 are once again captured at different positions of the image sensor 46 along the adjustment axis 52. On the basis of a regression of observed points, spots or image regions corresponding to one another in relation to the real displacement of the image sensor 46, the optical axis 42 of the optical imaging system 36 of the camera 34, the magnification scale and the distortion of the optical imaging system 36 are then determined directly. What is exploited here is that if the adjustment axis 52 of the image sensor 46 is exactly parallel to the optical axis 42 of the camera 34, points corresponding to one another that were extracted from the images scale linearly about a defined point that corresponds to the intersection of the optical axis 42 with the plane of the image sensor 46.
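The linear scaling about a fixed point that is exploited here can be sketched as follows: with the model q = c + k·(p − c) for corresponding points p and q of two recordings, the fixed point c and the scale factor k can be recovered. This is a minimal two-pair estimate with synthetic data rather than the full regression described in the text.

```python
def scaling_center(p, q):
    """Estimate the fixed point c about which corresponding points scale
    linearly between two recordings, q_i = c + k * (p_i - c), from the
    first two point pairs (a real implementation would regress over many)."""
    (p1, p2), (q1, q2) = p[:2], q[:2]
    # The scale factor follows from any coordinate difference of a pair.
    k = (q1[0] - q2[0]) / (p1[0] - p2[0])
    c = tuple((qi - k * pi) / (1.0 - k) for pi, qi in zip(p1, q1))
    return c, k

# Synthetic data: points scale by k = 1.25 about the fixed point (3, 4).
c_true, k_true = (3.0, 4.0), 1.25
pts = [(0.0, 0.0), (10.0, 2.0), (5.0, 9.0)]
imgs = [tuple(cc + k_true * (xx - cc) for cc, xx in zip(c_true, pt))
        for pt in pts]
center, k = scaling_center(pts, imgs)
```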

(105) It should be noted that the adjustment axis 52 of the image sensor 46 of the camera 34 does not necessarily exactly coincide with the optical axis 42 of the optical imaging system 36 of the camera 34. In the case of a displacement of the image sensor 46 and the defocus connected therewith, these circumstances lead to an x, y-drift of the luminous spots 57 of the brightness distribution in the plane of the image sensor 46. However, this drift can be determined by measuring a so-called scaling point of the apparatus 10 in the form of the vertex or the circumcenter of the three-point bearing 16. The aforementioned scaling point is distinguished in that, when measuring spherical articles with a different curvature in the apparatus 10, an identical focus setting of the optical imaging system 36 of the camera 34 causes the luminous spots 57 of the brightness distribution to always scale in a radially symmetric manner about this point. Therefore, by way of a corresponding regression of image data of the image sensor 46, it is possible to determine the sought-after scaling point in the same way as in the above-described establishment of the imaging scale.

(106) Here, a migration of the scaling point can be uniquely traced back to a drift of the sensor plane. As a consequence thereof, the drift accompanying the displacement of the image sensor 46 can be established as a function of the position of the image sensor 46 on the adjustment axis 52, i.e., the Z-coordinate, and can then be compensated. Here, the scaling point is determined as a limit value in the form of the center point of the pattern of the point light sources 20 in the image hypothetically captured by the camera 34, the image corresponding to a sphere with the radius of 0 that is measured in the apparatus 10.

(107) The apparatus constants of the apparatus 10, explained above, are typically determined in the computer unit 56 using an optimization method, which is based on measuring a plurality of spherical test bodies with different radii.

(108) Position of Each Point Light Source

(109) The position of each point light source 20 in the apparatus 10 is established by evaluating images, captured by the camera 34, with a brightness distribution 79 of reflections of the light of the point light sources 20 at test bodies with a known geometry on the basis of an optimization method.

(110) By way of example, FIG. 8B shows a brightness distribution 79, captured by the image sensor 46 of the apparatus 10, of reflections of the light of the point light sources 20 at a sphere, for the purposes of determining machine parameters or apparatus constants of the apparatus 10. Here, use is made of the fact that, when measuring test bodies with different radii, the position of the point light sources 20, the light of which produces the brightness distributions captured by the image sensor 46, is the same in each case for the different test bodies. This boundary condition is used in the optimization method to calculate optimized positions of the point light sources 20 in the apparatus 10 from the brightness distributions of different test bodies.

(111) In a modified configuration of this optimization method, the boundary condition of the position of the point light sources 20 is complemented by further boundary conditions, by means of which the geometry of the carrier structure 24 for the point light sources 20 is taken into account.

(112) By way of example, if the carrier structure 24 is a hemisphere, the boundary condition applies that all point light sources 20 lie on a spherical shell. If the carrier structure 24 is a hemisphere with form defects, this has the consequence that all point light sources 20 lie on an ellipsoid surface or on a spherical surface with disturbance contours, which can be described by Zernike polynomials, for example.

(113) Finally, if the carrier structure 24 is a polyhedron and if the point light sources 20 are LEDs arranged on printed circuit boards, this means that the point light sources 20 of a corresponding group of point light sources 20 are situated on a common plane, wherein it is possible to exploit the fact that the SMD assembly process for printed circuit boards facilitates the positioning of LEDs with maximum deviations of <0.1 mm. By way of example, if the polyhedron is made of 16 printed circuit boards (PCBs) with five degrees of freedom and three abutment points each, only 89 free parameters still need to be determined with the corresponding optimization method, instead of around 6000 degrees of freedom in the case of 2000 LEDs, for example.

(114) Coordinates of the Center and Radius of a Spherical Abutment

(115) To determine the bearing parameters, i.e., the coordinates of the centers of the bearing spheres in a coordinate system that is fixed in relation to the apparatus in the form of the coordinate system of the image sensor 46 of the camera 34, the camera 34 is initially used to record an image 81 of a white disk arranged in the three-point bearing 16 of the apparatus 10, as shown in FIG. 8C. Here, the white disk is illuminated by all point light sources 20. The center and the radius of the spherical bearings of the three-point bearing 16 in the apparatus 10 are then determined by calculation by means of image processing. The image 81 of the corresponding recording contains the information required to this end. The spherical abutments 17 are imaged in the image 81 under perspective projection onto ellipses, from which the parameters of the sphere centers are then determined on the basis of the known radius and the known imaging scale.

(116) It is also possible to adopt the sphere centers in x, y, z, or only in z, as free parameters into the above-described optimization method for establishing the position of each point light source 20. Because this measure increases the number of degrees of freedom of the corresponding optimization method, it leads to the method converging somewhat more slowly.

(117) However, it should be noted that the above-described optimization method can, in principle, also be used with only two different test bodies to determine the apparatus constants or machine parameters of the apparatus 10 in the form of the position, i.e., the coordinates, of a spherical abutment in a coordinate system 78 that is referenced to the image sensor 46 of the camera 34, the position, i.e., the coordinates, of each point light source 20 in a coordinate system 78 that is referenced to the image sensor 46 of the camera 34, and the spatial absolute and/or relative position of the image sensor 46 of the camera 34 in relation to the optical axis 42 of the optical imaging system 36 of the camera 34.

(118) Moreover, it should also be noted that an option for establishing the bearing points also consists of measuring the three-point bearing 16 in the apparatus 10 using a coordinate measuring machine. To this end, the entire measurement system is placed on the main plate of a coordinate measuring machine. This allows the coordinates of the bearing points and the relative positions of the planes of the polyhedron and the relative and/or absolute position of the hemisphere to be determined.

(119) In a simplified method, only the three-point bearing 16 is measured on a coordinate measuring machine. Thus, the absolute and/or relative position of the three bearing points in relation to one another is known.

(120) All relative quantities that are determined in advance as reliable and stable with external measurement methods reduce the number of free parameters in the optimization method and consequently increase the stability of the latter.

(121) Moreover, it should be noted that corresponding bearing points can be determined in an optimization method using a known test object.

(122) In addition, it should be noted that the sought-after coordinates of the abutment points can also be determined using a further measurement system that is integrated into the apparatus 10, the further measuring system containing a measuring probe, for example. For the purposes of referencing a point on the article 12, in the form of a spectacle lens, relative to the image sensor 46, it is further also possible, in principle, to establish the position of the article 12 with a measuring apparatus in a holder, which is matched to the apparatus 10 and the spatial dimensions of which are exactly known. Hence it is possible, when arranging the holder for the article 12, in the form of a spectacle lens, in the apparatus 10, to deduce the position of a corresponding point on the article 12, i.e., the coordinates thereof, in a coordinate system 78 that is referenced to the image sensor 46 of the camera 34.

(123) FIG. 9 shows a second apparatus 10 for measuring the topography of an article 12 in the form of a spectacle lens using point light sources 20. To the extent that the components and elements of this apparatus 10 correspond to components and elements of the apparatus 10 described above, they are identified by the same reference signs. The apparatus 10 has a carrier structure 24 in the form of a hemisphere.

(124) In the case of articles 12 embodied as spectacle lenses whose radius approximately corresponds to the radius of the hemisphere of the carrier structure 24, all reflected rays $\vec{r}$ will be incident at a point. Therefore, only very few point light sources 20 are imaged on the image sensor 46 of the camera 34 in this case. As a consequence, such spectacle lenses cannot be measured, or at best can only be measured inaccurately, as a matter of principle, using the apparatus 10 shown in FIG. 1 for measuring the topography of an article 12 in the form of a spectacle lens.

(125) The apparatus 10 shown in FIG. 9 for measuring the topography of an article 12 in the form of a spectacle lens, by contrast, also facilitates the measurement of spectacle lenses that have a radius approximately corresponding to the radius of the hemisphere of the carrier structure 24. To the extent that the components and elements of this apparatus 10 correspond to components and elements of the apparatus 10 in FIG. 1, they are identified by the same reference signs.

(126) The apparatus 10 contains a field lens element 80 arranged close to the recording region 14, the field lens element steering the light of the point light sources 20 that was reflected at the optically effective surface 18 of the article 12 in the direction of the camera 34.

(127) To be able to accurately calculate the topography of the optically effective surface 18 of an article 12 with the computer program in the computer unit 56, it is necessary, in relation to the apparatus 10, to know in addition to the apparatus quantities, described above in relation to the apparatus 10 from FIG. 1, in the form of machine parameters, i.e., the apparatus constants of

(128) size of the light-sensitive area of the image sensor 46 and the spacing of the light-sensitive sensor pixels;

(129) scale of the optical imaging of the camera 34;

(130) spatial absolute and/or relative position of the image sensor 46 of the camera 34 in relation to the optical axis 42 of the optical imaging system 36 of the camera 34;

(131) distortion of the optical imaging system 36 of the camera 34;

(132) lateral displacement of the image sensor 46 of the camera 34;

(133) position, i.e., the coordinates, of each point light source 20 in a coordinate system 78 that is fixed in relation to the apparatus and referenced to the image sensor 46 of the camera 34; and

(134) coordinates of the center and radius of a spherical abutment in a coordinate system 78 that is fixed in relation to the apparatus and referenced to the image sensor 46 of the camera 34;

(135) the accurate absolute position, relative position, and form, i.e., in particular, the radius and the thickness, of the field lens element 80, as a further apparatus quantity.

(136) The field lens element 80 refracts the light emitted by the point light sources 20 before and after the reflection at the optically effective surface 18 of the article 12 in the form of a spectacle lens. What this achieves is that the angle of incidence in relation to the surface normal of the light emitted by the point light sources 20 on the optically effective surface 18 of the article 12 to be measured becomes steeper in general. What this can achieve is that the topography of optically effective concave surfaces of an article 12 to be measured can also be determined with high accuracy.

(137) FIG. 10 shows the influence of manufacturing tolerances on different apparatus constants of the apparatus 10. On a microscopic scale, the optical axis 42 of the optical imaging system 36 of the camera 34 is arranged offset to the center 70 of the image sensor 46 in this case. In general, the axis of symmetry 72 of the carrier structure 24 differs from the optical axis 42 of the optical imaging system 36 of the camera 34 and from the axis of symmetry 74 of the three-point bearing 16. The point light sources 20 emit the light at a distance h from the surface of a printed circuit board 22. The recess 44 is arranged at the distance d from the stop 38 of the optical imaging system 36 of the camera 34. The image sensor 46 is positioned at the distance z from the stop 38 of the optical imaging system 36 of the camera 34. The center of symmetry 64 of the three-point bearing 16 has the distance a from the plane of the stop 38 of the optical imaging system 36 of the camera 34.

(138) Exact knowledge of the apparatus constants is necessary so that a high measurement accuracy can be obtained with the apparatus for measuring the topography of an article 12 in the form of a spectacle lens.

(139) The exact calculation of the topography of the optically effective surface 18 of an article 12 in the form of a spectacle lens is explained below on the basis of FIG. 11 and FIG. 12.

(140) The computer program in the computer unit 56 of the apparatus once again makes use, in this case, of an algorithm for the purposes of calculating the topography of the optically effective surface 18 of the article 12, the algorithm carrying out the following steps:

(141) Step 1: determining the centroids 58 of the individual intensity distributions in a brightness distribution on the light-sensitive surface of the image sensor 46.

(142) Step 2: assigning the centroids 58 of an intensity distribution on the light-sensitive surface of the image sensor 46 to a certain point light source 20.

(143) Step 3: determining the location of the reflection on the optically effective surface 18 of the article 12 for each centroid 58.

(144) Step 4: calculating the surface normal $\vec{n}$ on the optically effective surface 18 of the article 12 for each centroid 58.

(145) Step 5: calculating the topography, e.g., as form and/or as sag s of the optically effective surface 18, by integrating the surface normal $\vec{n}$, i.e., by integrating the directional derivatives of the surface 18 determined by the surface normals $\vec{n}$.

(146) Step 6: calculating the local curvatures of the optically effective surface 18 of the article 12 by differentiating the surface normal $\vec{n}$, i.e., by differentiating the entries of the vector of the surface normal $\vec{n}$.

(147) Steps after step 6: iteratively repeating steps 4 to 6 up to a convergence criterion, i.e., these steps are repeated until a norm of the difference of the topographies calculated in two successive iterations is less than a predetermined tolerance.

(148) To determine the location of the reflection on the optically effective surface 18 of the article 12 in the form of a spectacle lens, the computer program in the computer unit 56 of the apparatus 10 calculates an incident ray on the optically effective surface 18 of the article 12 in the form of a spectacle lens and then the point of intersection of this incident ray with the article 12 using Snell's law of refraction in a so-called standard ray-tracing method. Thereupon, the computer program determines a sighting with which the direction of the reflected beam that is incident on a corresponding point light source 20 is found. This sighting is then established in an optimization routine, which minimizes the distance between the point light source 20 and the beam emerging from the field lens element 80. This optimization may yield a number of solutions, of which the computer program selects the one that is most similar to the neighboring points.

(149) The corresponding sighting then supplies the direction of the reflected ray and, consequently, the angle of reflection at the test object, or the surface normal $\vec{n}$ at the point of incidence 50 of the corresponding incident ray $\vec{l}$. The calculation of the optically effective surface 18 of the article 12 in the form of a spectacle lens is then implemented in the present case using the same method as without the field lens element 80. Therefore, the field lens element 80 is only of importance for calculating the surface normal $\vec{n}$.

(150) It should be noted that the apparatuses described above are suitable, in principle, for measuring the topography of the surface of any measurement object whose surface has a reflectance R>50% for the light of the point light sources. In particular, corresponding measurement objects can be: optical elements in the form of lenses, optical units, spheres and aspheres, spectacle lenses, progressive lenses, metallic components with a shiny, in particular polished, surface, lacquered components and plastics components. Moreover, the apparatus described above can be used, in principle, to measure the topography of a human or animal eye.

(151) To sum up, the following preferred features of the disclosure should be noted in particular: The disclosure relates to a method and an apparatus 10 for measuring the topography and/or the gradients and/or the curvature of an optically effective surface 18 of an article 12. The apparatus 10 facilitates arranging the article 12 in a recording region 14 with an abutment 17 on which the article 12 is able to be placed. Here, in the recording region 14, it is possible to establish the absolute and/or relative position of at least one point on the surface 18 of the article in a coordinate system 78 that is fixed in relation to the apparatus. In the apparatus 10, there is a multiplicity of point light sources 20 that provide light, the light being reflected at the surface 18 to be measured of an article arranged in the recording region 14. The apparatus 10 comprises at least one camera 34 with a lens assembly 40 and with an image sensor 46 for capturing a brightness distribution, which is caused on the image sensor 46 by the light of the point light sources 20 that is reflected at the surface 18 to be measured.

(152) The foregoing description of the exemplary embodiments of the disclosure illustrates and describes the present invention. Additionally, the disclosure shows and describes only the exemplary embodiments but, as mentioned above, it is to be understood that the disclosure is capable of use in various other combinations, modifications, and environments and is capable of changes or modifications within the scope of the concept as expressed herein, commensurate with the above teachings and/or the skill or knowledge of the relevant art.

(153) The term "comprising" (and its grammatical variations) as used herein is used in the inclusive sense of "having" or "including" and not in the exclusive sense of "consisting only of." The terms "a" and "the" as used herein are understood to encompass the plural as well as the singular.

(154) All publications, patents and patent applications cited in this specification are herein incorporated by reference, and for any and all purposes, as if each individual publication, patent or patent application were specifically and individually indicated to be incorporated by reference. In the case of inconsistencies, the present disclosure will prevail.

LIST OF REFERENCE SIGNS

(155) 10, 10′ Apparatus 12 Article 14 Recording region 16 Three-point bearing 17 Abutment 18 Optically effective surface 19 Carrier structure 20 Point light source, light spot 22, 22′, 22″ Printed circuit boards 24 Carrier structure 26 Hypersurface 28 Half-space 30 Edge 32 Optically effective surface 34 Camera 36 Imaging system 38 Stop 40 Lens assembly 41 UV band-elimination filter 42 Optical axis 44 Recess 46 Image sensor 48, 48′ Light beam 50 Point 52 Adjustment axis 54 Double-headed arrow 56 Computer unit 57 Luminous spot 58 Centroids 60 Solid angle element 62 Spherical coordinate system 64 Center of symmetry 66 Node 68 North Pole 69 Hemisphere 70 Center 72 Axis of symmetry 74 Axis of symmetry 75 Grid network 76 Test object 78 Coordinate system 79 Brightness distribution 80 Lens element, field lens element 81 Image