Polyhedral sensor arrangement and method for operating a polyhedral sensor arrangement

10393851 · 2019-08-27

Abstract

A sensor arrangement comprises at least a first, a second, and a third light sensor. A three-dimensional framework comprises at least a first, a second, and a third connection means which are connected to the at least first, second, and third light sensor, respectively. The first, the second, and the third connection means are configured to align the at least first, second, and third light sensor along a first, second, and third face of a polyhedron-like volume, respectively, such that the sensor arrangement encloses the polyhedron-like volume. The invention also relates to a method for operating the sensor arrangement.

Claims

1. A method for operating a sensor arrangement comprising at least a first, a second, and a third light sensor, wherein each light sensor is aligned along a face of a polyhedron-like volume, respectively, so that the sensor arrangement encloses the polyhedron-like volume, the method comprising the steps of: collecting, from the at least first, second, and third light sensor, respective sensor signals depending on a light source to be detected, determining, from the collected sensor signals, projections onto the principal axes of a three-dimensional coordinate system, wherein the projections onto the principal axes of the three-dimensional coordinate system determine components of a position vector V indicative of a position of the light source, and wherein the components of the position vector V are determined based on a scalar dot product or by the projection onto unit vectors a.sub.x, a.sub.y, a.sub.z, respectively, and determining, from the determined projections, a direction of the light source to be detected with respect to the three-dimensional coordinate system, wherein the light sensors are aligned such that a sum of sensor signals of all light sensors meets the constraint cos.sup.2(α)+cos.sup.2(β)+cos.sup.2(γ)=K.sup.2, where K is a number of counts resulting from any one of the light sensors pointing directly at the light source and where α, β, and γ are given as the angles between the position vector V and the x, y, and z principal axes, respectively.

2. The method according to claim 1, wherein the light sensors generate respective sensor signals with a cosine response depending on a direction cosine when light from the light source is incident under an angle.

3. The method according to claim 1, comprising aligning the at least first, second, and third light sensors with respect to the principal axes of the three-dimensional coordinate system, respectively.

4. The method according to claim 3, wherein the light sensors are aligned with their light sensitive surfaces along the unit vectors.

5. The method according to claim 3, wherein the step of aligning the at least first, second, and third light sensors comprises defining a continuous region of interest into which the light source to be detected emits a predetermined amount of light.

6. The method according to claim 1, wherein the at least a first, a second, and a third light sensor are further aligned to a spatially fixed reference and wherein the fixed reference is defined by coordinates X.sup.+ which denotes north, X.sup.− which denotes south, Y.sup.+ which denotes west, and Y.sup.− which denotes east and wherein the z-axis of the three-dimensional coordinate system is aligned facing upward and downward, denoted as Z.sup.+ and Z.sup.−, respectively.

7. The method according to claim 1, wherein the sensor arrangement is initialized by evaluating the light sensors by pairwise comparing the sensor signals of opposing sensors, respectively.

8. The method according to claim 6, wherein the light sensor that has the larger response is chosen over its opposing one for further processing and the selection of light sensors determines in which quadrant or octant of the three-dimensional coordinate system the light-source to be detected is positioned.

9. The method according to claim 8, wherein in the three-dimensional coordinate system four quadrants are defined, abbreviated as X.sup.+/Y.sup.+, X.sup.+/Y.sup.−, X.sup.−/Y.sup.+ and X.sup.−/Y.sup.−, or wherein in the case of six light sensors the pairwise comparing of the sensor signals also includes the opposing sensors along the z-axis involving unit vectors a.sub.z or −a.sub.z to determine an up/down orientation.

10. The method according to claim 1, wherein from the sensor signals a relative brightness of the light source to be detected is determined by taking the square root of the sum of squared sensor signals, or by taking the square root of the sum of squared difference signals, wherein the difference signals depend on the sensor signals.

11. The method according to claim 10, wherein the sensor signals are measured in terms of counts.

12. The method according to claim 1, wherein the direction of the light source to be detected is determined as a zenith angle, an elevation angle and/or an azimuth angle.

13. The method according to claim 1, wherein the steps of collecting sensor signals, determining the projections onto the principal axes and determining the direction of the light source to be detected are continuously repeated so as to track the direction of the light source within the three-dimensional coordinate system.

14. The method according to claim 1, wherein the three-dimensional framework comprises the polyhedron-like volume, and wherein the three-dimensional framework comprises: a solid body having the polyhedron-like volume, and first, second, and third connectors constituting the first, second, and third face of the polyhedron-like volume, wherein the connectors are configured to provide electrical terminals to the at least first, second, and third light sensor.

15. The method according to claim 1, wherein the three-dimensional framework comprises a grid having an envelope comprising the polyhedron-like volume.

16. The method according to claim 1, wherein the polyhedron-like volume comprises a four-faceted pyramid with a square base or a frustum thereof, the first, second, and third light sensors are aligned along the first, second, and third face of the four-faceted pyramid or the frustum thereof, and a fourth light sensor is aligned to a fourth face of the four-faceted pyramid or the frustum thereof.

17. The method according to claim 1, wherein the polyhedron-like volume comprises a cube, the first, second, and third light sensors are aligned along the first, second, and third face of the cube, and the fourth light sensor is aligned along the fourth face of the cube.

18. The method according to claim 1, wherein the sensor signals are examined to determine a diffusivity index which increases for diffuse lighting and is near unity for non-diffuse lighting conditions.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIGS. 1A, 1B, 1C show exemplary embodiments of a sensor arrangement according to the principle presented,

(2) FIG. 2 shows an exemplary embodiment of the method for operating a sensor arrangement according to the principle presented, and

(3) FIGS. 3A-3B show an exemplary test result of an application using a sensor arrangement according to the principle presented.

DETAILED DESCRIPTION

(4) FIGS. 1A, 1B, 1C show exemplary embodiments of a sensor arrangement according to the principle presented. Generally, the sensor arrangement comprises a polyhedron-like volume with individual light sensors connected and aligned along faces of said volume.

(5) Generally, the polyhedron-like volume can have the shape of any polyhedron or similar structures, e.g. a frustum of a polyhedron. According to a geometrical definition (see http://en.wikipedia.org/wiki/Polyhedron) a polyhedron is a three-dimensional shape that is made up of a finite number of polygonal faces which are parts of planes; the faces meet in pairs along edges which are straight-line segments, and the edges meet in points called vertices. Cubes, prisms and pyramids are examples of polyhedra. The polyhedron surrounds a bounded volume in three-dimensional space; sometimes this interior volume is considered to be part of the polyhedron, sometimes only the surface is considered, and occasionally only the skeleton of edges. FIG. 1A shows a four-faceted pyramid, FIG. 1B shows a frustum thereof, and FIG. 1C shows a cube as examples of polyhedron-like volumes.

(6) The polyhedron-like volume can be thought of as a reference framework. In the exemplary embodiments shown in FIGS. 1A, 1B, and 1C this framework is a solid body to which the light sensors are connected. The body has the polyhedron-like volume and comprises faces which constitute connection means to which the light sensors are (mechanically and electrically) connected. Furthermore, the faces may provide electrical terminals necessary to operate the sensor arrangement.

(7) Alternatively, the framework can have a grid or skeleton design whose envelope comprises or encloses the polyhedron-like volume (not shown). In this design the framework is not a solid body; rather, the connection means are parts of the grid.

(8) The light sensors are connected to the connection means or to faces of the polyhedron-like volume (indicated as spheres in the drawings). The light sensors comprise a photodiode, a Complementary Metal Oxide Semiconductor (CMOS) light detector, and/or a CCD, respectively. Photodiodes may be sensitive to infrared light. These can reside behind material that is opaque to visible light so that they are not visible.

(9) The light sensors typically have a direction dependent detection characteristic. The term cosine response relates to the fact that the light sensors generate a sensor signal S which depends on a direction cosine when light is incident under some angle θ, i.e. S ∝ cos θ. The terms response and sensor signal will be used as equivalents hereinafter.
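For illustration, this cosine response can be modelled in a few lines of Python. The function name and the clipping of back-side illumination are illustrative assumptions, not part of the disclosure:

```python
import math

def cosine_response(k_max, angle_deg):
    """Idealized cosine response S of a light sensor, in counts.

    k_max     -- counts when the sensor points directly at the source
    angle_deg -- angle between the sensor's surface normal and the light

    Hypothetical helper; real sensors deviate from the ideal cosine
    curve, particularly at grazing angles.
    """
    s = k_max * math.cos(math.radians(angle_deg))
    # Light arriving from behind the sensor plane produces no signal.
    return max(0.0, s)
```

For example, a sensor producing 1000 counts when pointed directly at the source would produce about 500 counts when the light arrives at 60° off its normal.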

(10) FIG. 2 shows an exemplary embodiment of the method for operating a sensor arrangement according to the principle presented. The sensor arrangement can be used to track the position of a (point-like) light-source, such as the sun, and to determine the intensity of that light-source by using the exemplary method steps explained below. For example, under exposure to direct sunlight, i.e. clear skies, the response of light sensors can be evaluated to not only find the absolute intensity, but also the direction of the sun.

(11) In this context, the exemplary embodiment is based on a sensor cube, e.g. with five faces of the cube are populated by respective light sensors, in particular by ambient light sensors, which all have a cosine response. The cube is depicted on the right side of the drawing in which a first, a second, and a third light sensor are visible. A fourth and a fifth light sensor are present but are connected to faces of the cube not visible in the drawing.

(12) However, the sensor arrangement can be complemented by a sixth light sensor which can be connected to the remaining face of the cube and aligned along −a.sub.z. Whereas five light sensors on the cube can be used for sensing over a hemisphere, the addition of the sixth sensor allows for truly omnidirectional sensing. In the following, both embodiments will be described in parallel and changes in the method will be highlighted.

(13) The cube defines a three-dimensional coordinate system with principal axes x, y, z, for example a Cartesian coordinate system (see left side of FIG. 2). The coordinate system is described by the unit vectors a.sub.x, a.sub.y, and a.sub.z, respectively. The light sensors are connected to the cube with their light sensitive surfaces aligned along a.sub.x, −a.sub.x, a.sub.y, −a.sub.y and a.sub.z and oriented away from the surface of the cube. The sixth light sensor can be aligned along −a.sub.z. In other words, with the five sensor cube one sensor, for example the fifth light sensor, is oriented along the z-axis but not in the opposite direction. With the six sensor cube both directions are occupied.

(14) For easier reference the light sensors will be referred to by their corresponding unit vectors, e.g. first light sensor a.sub.x, second light sensor −a.sub.x, third light sensor a.sub.y, fourth light sensor −a.sub.y, and fifth light sensor a.sub.z. The sixth sensor, if present, will be referred to as −a.sub.z.

(15) For position determination or tracking of the sun it can be advisable to align the light sensors with respect to a fixed reference. This reference can be given by earth's magnetic field, for example. Hereinafter X.sup.+ denotes north, X.sup.− south, Y.sup.+ west, and Y.sup.− east. The z-axis is aligned facing upward and downward, which will be denoted as Z.sup.+ and Z.sup.− hereinafter. In the following it is assumed that the first light sensor is aligned north (X.sup.+), the second light sensor south (X.sup.−), the third light sensor west (Y.sup.+), the fourth light sensor east (Y.sup.−), and the fifth light sensor up (Z.sup.+). The sixth light sensor, if present, is aligned down (Z.sup.−).

(16) After aligning the sensor arrangement, in a next step the method for operating a sensor arrangement is initialized by evaluating the light sensors, i.e. by pairwise comparing the sensor signals S.sub.x+, S.sub.x−, S.sub.y+, and S.sub.y− of opposing sensors (i.e. a.sub.x vs. −a.sub.x, then a.sub.y vs. −a.sub.y). The light sensor that has the larger response is chosen over its opposing one for further processing. This selection determines in which quadrant (or, in the case of having a sixth sensor, which octant) of the coordinate system the light-source is positioned. In the coordinate system defined above there are four such quadrants, abbreviated as X.sup.+/Y.sup.+, X.sup.+/Y.sup.−, X.sup.−/Y.sup.+ and X.sup.−/Y.sup.−. In the case of six light sensors the pairwise comparing of the sensor signals also includes the opposing sensors (a.sub.z vs. −a.sub.z) in order to determine up/down orientation.
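The pairwise comparison and quadrant selection can be sketched as follows; the function name, label strings, and the tie-breaking in favour of the positive axis are illustrative assumptions:

```python
def select_quadrant(s_xp, s_xn, s_yp, s_yn):
    """Pairwise comparison of opposing sensors (X+ vs. X-, Y+ vs. Y-).

    Returns the quadrant label and the two winning counts.
    """
    # Choose the larger of each opposing pair; ties go to the + sensor.
    x_label, x_c = ("X+", s_xp) if s_xp >= s_xn else ("X-", s_xn)
    y_label, y_c = ("Y+", s_yp) if s_yp >= s_yn else ("Y-", s_yn)
    return x_label + "/" + y_label, x_c, y_c
```

With a sixth sensor, the same comparison would simply be extended by a Z+ vs. Z− pair to resolve the octant.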

(17) In this exemplary method, once the quadrant, and eventually up/down, has been determined, the sensor signals of the remaining three sensors (i.e. the two a.sub.x and a.sub.y light sensors having the larger response, along with the a.sub.z or −a.sub.z light sensor) are selected and their sensor signals processed. The processing involves finding the length of the position vector V. For example, the light-source to be detected is the sun and its position is given by the position vector V having the coordinates x.sub.0, y.sub.0, and z.sub.0 or, in terms of angles, the angles α, β, and γ, which are given as the angles between the position vector and the x, y, and z axes, respectively (see drawing on the left side).

(18) An arbitrary direction vector with coordinates x, y, z has a length l of
l={square root over (x.sup.2+y.sup.2+z.sup.2)}.

(19) The components of position vector V can be found by means of the scalar dot product or by projection onto the unit vectors a.sub.x, a.sub.y, and a.sub.z, respectively. By definition the unit vectors have length one. In general terms the position vector V is given as
V=V.sub.x·a.sub.x+V.sub.y·a.sub.y+V.sub.z·a.sub.z,
wherein V.sub.x, V.sub.y, V.sub.z denote the vector components or magnitudes along the x, y, and z axes, respectively. In fact, V.sub.x, V.sub.y, V.sub.z are the projections of V onto the unit vectors a.sub.x, a.sub.y, and a.sub.z, respectively. This means that
V.sub.x=V·a.sub.x, V.sub.y=V·a.sub.y, and V.sub.z=V·a.sub.z.

(20) The scalar dot product · can also be evaluated by taking the product of the vectors' magnitudes or lengths and the cosine of the angle between them. For example, V.sub.x=V·a.sub.x=|V|·|a.sub.x| cos(α)=|V| cos(α), since |a.sub.x|=1. Similarly, V.sub.y=|V| cos(β) and V.sub.z=|V| cos(γ).
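The projections onto the Cartesian unit vectors a.sub.x, a.sub.y, a.sub.z can be illustrated numerically; the helper name below is a hypothetical choice:

```python
import math

def direction_cosines(v):
    """Direction cosines cos(alpha), cos(beta), cos(gamma) of a position
    vector V = (Vx, Vy, Vz): the projections of V onto the Cartesian
    unit vectors a_x, a_y, a_z, divided by the length |V|."""
    vx, vy, vz = v
    length = math.sqrt(vx * vx + vy * vy + vz * vz)
    return vx / length, vy / length, vz / length
```

For any vector the squared direction cosines sum to one, which is the geometric fact behind the counts constraint cos.sup.2(α)+cos.sup.2(β)+cos.sup.2(γ)=K.sup.2 used above.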

(21) For illustration purposes it is now assumed that three light sensors pointed along X.sup.+, Y.sup.+, and Z.sup.+ have been selected. The light sensors (e.g. photodiodes) have a cosine angular response. For example, light incident on the sensor arrangement along vector V making an angle α with a surface normal of the light sensor pointed in the X.sup.+ direction will induce a response that is proportional to |V|·cos(α)=V·a.sub.x. If, for example, all three light sensors pointed along X.sup.+, Y.sup.+, and Z.sup.+ are aligned so that their fields of view are illuminated by the impinging light, then their respective responses will allow for measuring both the direction and the intensity of the incident light vector V, assuming the sensors either are matched, or can be matched by calibration, and have equal or known responsivity to the light.

(22) The sensor response S.sub.z of the a.sub.z light sensor is proportional to the cosine of the sun's zenith angle θ, i.e. the angle of position vector V with the z axis (where the z axis is assumed to point upwards, e.g. towards the sky). It can also be shown that the sum of sensor signals of all light sensors should meet the following constraint:
cos.sup.2(α)+cos.sup.2(β)+cos.sup.2(γ)=K.sup.2,
where K is the number of counts resulting from any one of the light sensors pointing directly at the sun. In FIG. 2, K is the length of the direction vector pointing towards the sun. Please note that the term cosine response used above indicates that the sensor signals S.sub.x, S.sub.y, S.sub.z are given as
S.sub.x ∝ cos(α), S.sub.y ∝ cos(β), S.sub.z ∝ cos(γ).

(23) The strength of the sensor signals S.sub.x, S.sub.y, S.sub.z can be measured in terms of counts. Let the sensor signals S.sub.x, S.sub.y, S.sub.z (in counts) of the a.sub.x, a.sub.y, and a.sub.z light sensors be given by X.sup.+.sub.c, Y.sup.+.sub.c, and Z.sup.+.sub.c, respectively. Then the relative brightness of the sun (in counts) can be found as
K={square root over (X.sub.c.sup.2+Y.sub.c.sup.2+Z.sub.c.sup.2)}.
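The brightness computation amounts to taking the Euclidean norm of the three selected counts, e.g. (function name assumed for illustration):

```python
import math

def relative_brightness(x_c, y_c, z_c):
    """Relative brightness K (in counts): the Euclidean norm of the
    three selected sensor counts."""
    return math.sqrt(x_c ** 2 + y_c ** 2 + z_c ** 2)
```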

(24) In other words, without having to track the sun, e.g. by moving the whole sensor arrangement, the brightness of the sun can be monitored using the equation above. In addition, the sensor signals can be used to find the zenith angle θ as

(25) θ=arccos(Z.sub.c/K)
or the elevation angle EA=90°−θ.

(26) Next, the azimuth angle AZ of the light source, e.g. the sun, can be determined as follows. Here X.sup.+.sub.c and X.sup.−.sub.c are the counts of the North and South facing detectors respectively, and Y.sup.+.sub.c and Y.sup.−.sub.c are the counts of the West and East facing sensors respectively, and let X.sub.c=max(X.sup.+.sub.c, X.sup.−.sub.c) and Y.sub.c=max(Y.sup.+.sub.c, Y.sup.−.sub.c), wherein max( ) denotes the maximum. Then
φ=arctan(Y.sub.c/X.sub.c),
with
AZ=φ if in quadrant where (X.sup.+.sub.c>X.sup.−.sub.c and Y.sup.+.sub.c<Y.sup.−.sub.c)
AZ=180°−φ if in quadrant where (X.sup.+.sub.c<X.sup.−.sub.c and Y.sup.+.sub.c<Y.sup.−.sub.c)
AZ=180°+φ if in quadrant where (X.sup.+.sub.c<X.sup.−.sub.c and Y.sup.+.sub.c>Y.sup.−.sub.c)
AZ=360°−φ if in quadrant where (X.sup.+.sub.c>X.sup.−.sub.c and Y.sup.+.sub.c>Y.sup.−.sub.c).

(27) Here, the definition of azimuth angle AZ is the conventional system of: North=0°, East=90°, South=180° and West=270°. Note that in the northern hemisphere, it will generally be the case that 90°≤AZ≤270°. For tracking the position of the light-source the above mentioned steps are continuously repeated and the resulting coordinates saved.
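Putting the zenith, elevation and azimuth steps together, a sketch for a five sensor cube might look as follows. The quadrant mapping coded here is one consistent reading of the North=0°, East=90°, South=180°, West=270° convention with X.sup.+ facing north and Y.sup.+ facing west, and is an assumption rather than a verbatim transcription of the claims:

```python
import math

def sun_angles(xp, xn, yp, yn, zp):
    """Zenith, elevation and azimuth (all in degrees) from the counts
    of a five sensor cube: xp/xn face north/south, yp/yn face
    west/east, zp faces up."""
    x_c, y_c = max(xp, xn), max(yp, yn)
    k = math.sqrt(x_c ** 2 + y_c ** 2 + zp ** 2)   # relative brightness
    zenith = math.degrees(math.acos(zp / k))
    elevation = 90.0 - zenith
    phi = math.degrees(math.atan2(y_c, x_c))
    if xp >= xn and yn >= yp:          # north-east quadrant
        azimuth = phi
    elif xn > xp and yn >= yp:         # south-east quadrant
        azimuth = 180.0 - phi
    elif xn > xp and yp > yn:          # south-west quadrant
        azimuth = 180.0 + phi
    else:                              # north-west quadrant
        azimuth = 360.0 - phi
    return zenith, elevation, azimuth
```

For example, a sun due east at moderate elevation (only the east and up sensors illuminated) yields an azimuth of 90°.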

(28) Instead of the above described initial pairwise selection of light sensors, it might be better to introduce difference signals such as X.sub.c=(X.sup.+.sub.c−X.sup.−.sub.c), Y.sub.c=(Y.sup.+.sub.c−Y.sup.−.sub.c), and Z.sub.c=(Z.sup.+.sub.c−Z.sup.−.sub.c). This would help subtract out non-sunlight related background (ambient light), and would simplify the formulas. For example, the relative brightness of the sun (in counts) can be found as
K={square root over (X.sub.c.sup.2+Y.sub.c.sup.2+Z.sub.c.sup.2)}.

(29) Now we can find the zenith angle θ as

(30) θ=arccos(Z.sub.c/K),
and the elevation angle EA=90°−θ. Then φ=arctan(Y.sub.c/X.sub.c). In computer programming the arctan function is evaluated using the atan2 function which accepts two signed arguments in order to always evaluate the arctangent within the proper quadrant, e.g. AZ=degrees(atan2(Y.sub.c, X.sub.c)).
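The difference-signal variant with atan2 can be sketched as below. The sign conventions (difference positive towards north for dx and towards east for dy) are assumptions chosen so that the result reproduces the North=0°, East=90° convention:

```python
import math

def azimuth_from_differences(xp, xn, yp, yn):
    """Azimuth (degrees) from signed difference signals and atan2.

    xp/xn are the north/south facing counts, yp/yn the west/east
    facing counts; background light common to a pair cancels out.
    """
    dx = xp - xn   # positive towards north
    dy = yn - yp   # positive towards east
    # atan2 resolves the quadrant from the two signs; fold into 0..360.
    return math.degrees(math.atan2(dy, dx)) % 360.0
```

For instance, equal south and east counts with dark north and west sensors give an azimuth of 135° (south-east).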

(31) Note that the sun should never directly illuminate the light sensor facing north, as long as the sensor arrangement is used in the northern hemisphere. Correspondingly, the sun should never directly illuminate the light sensor facing south, as long as the sensor arrangement is used in the southern hemisphere. Generally, that light sensor could be omitted.

(32) The above introduced method can be extended to geometries other than a cube. Other geometries besides cubes are lower and higher order polyhedrons (see also FIGS. 1A-1C). In particular, three or four faceted pyramids as well as pyramids with the apex lopped off (frustums), such as shown in FIGS. 1A-1C, could be populated with light sensors placed at some faces, e.g. excluding the bottom side, or at all faces. In this way the sensors point more upward towards the sky they are trying to monitor. Taking the cosine response of each light sensor as the projection onto a given position vector V, in a direction orthogonal to the face of the sensor, the concept of finding the direction of a light-source such as the sun or another pseudo point source can be generalized.

(33) Consider a number N of light sensors, and let the i-th light sensor be described by a direction vector a.sub.i·a.sub.x+b.sub.i·a.sub.y+c.sub.i·a.sub.z, where 1≤i≤N. Further assume all light sensors have the same sensitivity and that the sensor signal or response (in number of counts) of each light sensor is given by R.sub.i.

(34) The unit vectors are no longer mutually orthogonal, and the net normalized x, y, and z components need to be normalized by summing the individual responses of all light sensors and dividing that summed response by the projection of all of the light sensors onto the x, y and z axes. Let the final unit direction vector be V.sub.d=X·a.sub.x+Y·a.sub.y+Z·a.sub.z, where {square root over (X.sup.2+Y.sup.2+Z.sup.2)}=1. Then each light sensor (with normal vector given by V.sub.i, where 1≤i≤N, for an N detector setup) will in general have a projection onto all three axes, which can be described by (V.sub.i).sub.x, (V.sub.i).sub.y and (V.sub.i).sub.z, where (V.sub.i).sub.x=V.sub.i·a.sub.x, (V.sub.i).sub.y=V.sub.i·a.sub.y etc. In order to find the net x, y and z responses from multiple detectors, the responses need to be normalized as well.
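One generic way to recover V.sub.d from non-orthogonal sensor normals is a least-squares fit of the model R.sub.i ≈ V.sub.i·V.sub.d. This is a substitute illustration of the idea, not necessarily the normalization procedure described above; the 3x3 normal equations are solved by Cramer's rule to stay dependency-free:

```python
import math

def estimate_direction(normals, responses):
    """Least-squares estimate of the unit direction vector V_d from N
    sensor normal vectors and their cosine responses R_i ~ V_i . V_d."""
    # Accumulate A^T A (3x3) and A^T r (3-vector), where row i of A is V_i.
    ata = [[0.0] * 3 for _ in range(3)]
    atr = [0.0] * 3
    for n, r in zip(normals, responses):
        for i in range(3):
            atr[i] += n[i] * r
            for j in range(3):
                ata[i][j] += n[i] * n[j]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(ata)
    v = []
    for k in range(3):  # Cramer's rule: replace column k with A^T r
        m = [row[:] for row in ata]
        for i in range(3):
            m[i][k] = atr[i]
        v.append(det3(m) / d)
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)
```

For the orthonormal cube axes this reduces to simply normalizing the three responses.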

(35) FIGS. 3A and 3B show an exemplary test result of an application using a sensor arrangement according to the principle presented. For tracking the position of the light-source the above mentioned steps are continuously repeated and the resulting coordinates saved. Then, in a next step it is possible to map to other coordinate systems, for example geographical systems. Also, knowing the date and the latitude/longitude coordinates of the site, one can compute the time from the position of the sun. Conversely, knowing the date, time and latitude/longitude, and thus being able to compute the sun's absolute position, one could use this sensor arrangement as a compass by inverting the calculations.

(36) FIG. 3A shows the tracking result using a prototype sensor arrangement using five light sensors in a cube arrangement as introduced above. The arrangement also contained a bubble level and a compass to orient the cube to the cardinal direction points. The sensor cube was placed underneath a tinted bubble to allow the light sensors to operate in direct sunlight without saturating. The five light sensors could be interfaced via the connection means to an I.sup.2C multiplexor board which is in turn interfaced to a PICkit Serial module that controlled the I.sup.2C multiplexor board using, for example, a Visual Basic program to sample each light sensor in the cube in sequence and store the readings.

(37) The whole setup was connected via a USB cable to a laptop computer inside a car to collect the tracking data. FIG. 3A shows the data plotted in an x, y representation while the car was driven around. The result shows the directional trajectory of the car in the x, y plane. For comparison FIG. 3B shows the actual trajectory on a map. This proves that the sensor arrangement is capable of accurate directional sensing. The fact that the two trajectories do not coincide at the start and end points is due to the fact that the car was driven at changing speed. However, this effect can be accounted for and only occurs in moving reference systems. For stationary tracking this is no issue.

(38) One can also determine cloudy vs. sunny conditions by examining the sensor outputs, for example by tracking the intensity of the light-source with time. Furthermore, by using color sensors, the sensors could also detect color temperature and other data which could provide further information such as partially clouded conditions, smog conditions, haze etc. Presence of haze, thin clouds and other scattering conditions could also be ascertained from the sensor information by looking at clues such as, e.g., the ratio of X.sup.+.sub.c to X.sup.−.sub.c. If, for instance, X.sup.−.sub.c is facing south and the unit is in the northern hemisphere, then X.sup.−.sub.c should be significantly larger than X.sup.+.sub.c in direct sunlight. However, if it is very hazy, then (X.sup.−.sub.c)/(X.sup.+.sub.c) may be only slightly larger than one.

(39) Moreover, one could be able to sense diffuse light levels in a manner similar to a diffuser dome sensor used in most lux meters, as mentioned in the beginning, by sensing a signal proportional to {square root over ((X.sub.c.sup.−).sup.2+(X.sub.c.sup.+).sup.2+(Y.sub.c.sup.−).sup.2+(Y.sub.c.sup.+).sup.2+(Z.sub.c.sup.+).sup.2)}.

(40) In order for the system to function best, the sensor cube could be placed on a flat black surface to minimize effects of light picked up from the environment. The sensor might also be mounted in a shallow box to assure that the sensor only receives significant light for points ABOVE the horizon (the horizon will be affected by presence of nearby trees, buildings etc.).

(41) In another embodiment (not shown) the sensor arrangement is used as a diffuse sky detector, detecting, e.g., cloudy or foggy skies versus sunny skies. Let D be a diffusivity index which increases for diffuse (e.g. cloudy) skies, and is near unity for sunny (non-diffuse) sky conditions. Assuming a sensor arrangement of five light sensors, then:

(42) D=((X.sub.c.sup.−).sup.2+(X.sub.c.sup.+).sup.2+(Y.sub.c.sup.−).sup.2+(Y.sub.c.sup.+).sup.2)/((X.sub.c.sup.− − X.sub.c.sup.+).sup.2+(Y.sub.c.sup.− − Y.sub.c.sup.+).sup.2),
wherein the diffusivity index D will approach unity in a non-diffuse (e.g. direct sunlight) environment, but will grow large in diffuse light. For example, in direct sunlight it may be assumed that X.sup.−.sub.c and Y.sup.+.sub.c are exposed to direct sunlight, whereas X.sup.+.sub.c and Y.sup.−.sub.c are shadowed. This would make the denominator large since the differences (X.sup.−.sub.c−X.sup.+.sub.c) and (Y.sup.−.sub.c−Y.sup.+.sub.c) will be large, which keeps D small (i.e. a small diffusivity index indicating that the light is direct instead of diffuse).

(43) However, on a very cloudy day the diffuse light would scatter or distribute light in all directions such that the (X.sup.−.sub.c, X.sup.+.sub.c) and (Y.sup.−.sub.c, Y.sup.+.sub.c) light sensor pairs would receive almost equal amounts of light, making the differences (X.sup.−.sub.c−X.sup.+.sub.c) and (Y.sup.−.sub.c−Y.sup.+.sub.c) very small, which makes the denominator small, thus making the diffusivity index D large, indicating diffuse light.
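A sketch of the diffusivity index, assuming the ratio form given above (the function name is a hypothetical choice, and the denominator vanishes for perfectly uniform illumination, so a real implementation would guard against division by zero):

```python
def diffusivity_index(xn, xp, yn, yp):
    """Diffusivity index D from the four horizontal sensor counts
    (xn = X-, xp = X+, yn = Y-, yp = Y+).

    D is near unity for direct light and grows large for diffuse light.
    """
    numerator = xn ** 2 + xp ** 2 + yn ** 2 + yp ** 2
    denominator = (xn - xp) ** 2 + (yn - yp) ** 2
    return numerator / denominator
```

In the direct-sunlight example above (two sensors lit, their opposites shadowed) the numerator and denominator coincide and D equals one, whereas nearly equal counts on each pair drive D to large values.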