Shape measurement sensor
11085760 · 2021-08-10
Assignee
Inventors
- Munenori Takumi (Hamamatsu, JP)
- Haruyoshi Toyoda (Hamamatsu, JP)
- Yoshinori Matsui (Hamamatsu, JP)
- Kazutaka Suzuki (Hamamatsu, JP)
- Kazuhiro Nakamura (Hamamatsu, JP)
- Keisuke Uchida (Hamamatsu, JP)
CPC classification
H01L27/14605
ELECTRICITY
H01L31/02164
ELECTRICITY
H01L27/14603
ELECTRICITY
H01L31/02165
ELECTRICITY
G01B11/25
PHYSICS
H01L31/02024
ELECTRICITY
International classification
G01B11/00
PHYSICS
Abstract
Provided is a shape measurement sensor including a light-receiving unit and a calculation unit. The light-receiving unit includes a plurality of pixel pairs. Each of the pixel pairs includes a first pixel and a second pixel that is disposed side by side with the first pixel along a first direction. In the first pixel, as an incident position is closer to one end of the light-receiving unit in a second direction, an intensity of a first electric signal decreases. In the second pixel, as the incident position is closer to the one end, an intensity of a second electric signal increases. The calculation unit calculates the incident position in the second direction for each of the pixel pairs on the basis of the intensity of the acquired first electric signal and the intensity of the acquired second electric signal.
Claims
1. A shape measurement sensor that detects light that is emitted to irradiate a measurement line on a surface of an object and is reflected on the surface of the object to measure a surface shape of the object, the shape measurement sensor comprising: a light-receiving unit to which the light reflected on the measurement line is incident from a direction that is inclined with respect to an irradiation direction of the light; and a calculation unit that detects an incident position of the light in the light-receiving unit, and calculates position information of each position on the measurement line on the basis of the incident position, wherein the light-receiving unit includes a plurality of pixel pairs, each of the pixel pairs including a first pixel that generates a first electric signal corresponding to an incident light amount of the light and a second pixel that is disposed side by side with the first pixel along a first direction intersecting the irradiation direction and generates a second electric signal corresponding to an incident light amount of the light, and the pixel pairs being arranged along the first direction, in the first pixel, as the incident position is closer to one end of the light-receiving unit in a second direction intersecting the first direction, the intensity of the first electric signal decreases, in the second pixel, as the incident position is closer to the one end in the second direction, the intensity of the second electric signal increases, and the calculation unit acquires the first electric signal and the second electric signal for each of the pixel pairs, and calculates the incident position in the second direction for each of the pixel pairs on the basis of the intensity of the acquired first electric signal and the intensity of the acquired second electric signal.
2. The shape measurement sensor according to claim 1, wherein the calculation unit calculates the incident position in the second direction for each of the pixel pairs by using a ratio between the intensity of the first electric signal and the intensity of the second electric signal.
3. The shape measurement sensor according to claim 1, wherein the calculation unit calculates the incident position in the second direction for each of the pixel pairs by using a ratio between the intensity of the first electric signal or the intensity of the second electric signal, and a total value of the intensity of the first electric signal and the intensity of the second electric signal.
4. The shape measurement sensor according to claim 1, wherein the light-receiving unit further includes a first transmission filter which covers the first pixel and through which the light is transmitted, and a second transmission filter which covers the second pixel and through which the light is transmitted, a transmittance of the light in the first transmission filter decreases as it is closer to the one end in the second direction, and a transmittance of the light in the second transmission filter increases as it is closer to the one end in the second direction.
5. The shape measurement sensor according to claim 1, wherein the light-receiving unit further includes a first light-shielding part that covers another portion of the first pixel excluding one portion of the first pixel, and shields the light, and a second light-shielding part that covers another portion of the second pixel excluding one portion of the second pixel and shields the light, a width of the one portion of the first pixel in the first direction decreases as it is closer to the one end in the second direction, and a width of the one portion of the second pixel in the first direction increases as it is closer to the one end in the second direction.
6. The shape measurement sensor according to claim 1, wherein a width of the first pixel in the first direction decreases as it is closer to the one end in the second direction, and a width of the second pixel in the first direction increases as it is closer to the one end in the second direction.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
(17) Hereinafter, an embodiment of a shape measurement sensor of the present disclosure will be described in detail with reference to the accompanying drawings. In description of the drawings, the same reference numeral will be given to the same element, and redundant description thereof will be appropriately omitted.
(19) As illustrated in
(20) The light source 3 includes a lens for forming the laser light L1 in a line shape in the measurement line ML. For example, the lens is a cylindrical lens, and condenses the laser light L1 in the direction D1 while spreading the laser light L1 in the direction D2. After the laser light L1 progresses along the direction D3 and passes through the lens, the laser light L1 is simultaneously emitted to respective positions on the measurement line ML of the surface 2a in a state of spreading in the direction D2. Here, the object 2 relatively moves with respect to the light source 3 and the image capturing device 4 along the direction D1 in accordance with movement of the movement stage in the direction D1. According to this, irradiation with the laser light L1 to the measurement line ML is sequentially performed with respect to the respective positions of the surface 2a along the direction D1.
(21) For example, the image capturing device 4 is a vision camera including a vision chip that performs processing from acquisition of an image of the reflected light L2 from the measurement line ML through image processing. The image capturing device 4 sequentially captures images of the reflected light L2 reflected from the measurement line ML at a predetermined frame rate with respect to respective positions of the surface 2a along the direction D1, and processes the signals acquired from the image capturing. The image capturing device 4 includes a light-receiving unit 10 to which the reflected light L2 reflected on the measurement line ML is incident, and a signal processing unit 30 that processes signals output from the light-receiving unit 10 in correspondence with incidence of the reflected light L2. The light-receiving unit 10 is provided in an inclination direction Db inclined from the measurement line ML with respect to an irradiation direction Da of the laser light L1 toward the measurement line ML. The inclination direction Db is inclined with respect to the irradiation direction Da in a direction excluding the direction D2 along the measurement line ML. In this embodiment, the inclination direction Db is inclined with respect to the irradiation direction Da in the direction D1 that intersects the measurement line ML. The reflected light L2 reflected on the measurement line ML is incident to the light-receiving unit 10 from the inclination direction Db.
(22) Here, the configuration of the image capturing device 4 will be described in more detail with reference to
(23) The pixels P.sub.1 to P.sub.N respectively generate charge signals Dx.sub.1 to Dx.sub.N corresponding to incident light amounts of the reflected light L2 that is incident. Specifically, when the reflected light L2 is incident to the first pixels 12, the first pixels P.sub.1, P.sub.3, . . . , and P.sub.N-1 generate charge signals Dx.sub.1, Dx.sub.3, . . . , and Dx.sub.N-1 (first electric signals) corresponding to incident light amounts of the reflected light L2. Similarly, when the reflected light L2 is incident to the second pixels P.sub.2, P.sub.4, . . . , and P.sub.N, the second pixels P.sub.2, P.sub.4, . . . , and P.sub.N generate charge signals Dx.sub.2, Dx.sub.4, . . . , and Dx.sub.N (second electric signals) corresponding to incident light amounts of the reflected light L2. The pixels P.sub.1 to P.sub.N output the charge signals Dx.sub.1 to Dx.sub.N to the other end 10b side in the Y-direction.
(24) The light-receiving unit 10 further includes a plurality of first transmission filters 14 which are respectively disposed on the plurality of first pixels 12, and a plurality of second transmission filters 15 which are respectively disposed on the plurality of second pixels 13.
(26) As described above, an incident light amount of the reflected light L2 that is transmitted through the first transmission filter 14 having the above-described transmittance and is incident to the first pixel 12 gradually decreases (or decreases step by step) as an incident position of the reflected light L2 is closer to the one end 10a, and gradually increases (or increases step by step) as the incident position is closer to the other end 10b. According to this, intensities of the charge signals Dx.sub.1, Dx.sub.3, . . . , and Dx.sub.N-1 generated in the first pixels 12 also gradually decrease (or decrease step by step) as the incident position is closer to the one end 10a, and gradually increase (or increase step by step) as the incident position is closer to the other end 10b.
(27) In contrast, the transmittance of the second transmission filter 15 gradually increases (or increases step by step) as it is closer to the one end 10a, and gradually decreases (or decreases step by step) as it is closer to the other end 10b on the second pixel 13. An incident light amount of the reflected light L2 that is transmitted through the second transmission filter 15 having the above-described transmittance and is incident to the second pixel 13 gradually increases (or increases step by step) as an incident position of the reflected light L2 is closer to the one end 10a, and gradually decreases (or decreases step by step) as the incident position is closer to the other end 10b. According to this, intensities of the charge signals Dx.sub.2, Dx.sub.4, . . . , and Dx.sub.N generated in the second pixels 13 also gradually increase (or increase step by step) as the incident position is closer to the one end 10a, and gradually decrease (or decrease step by step) as the incident position is closer to the other end 10b. An increase direction or a decrease direction of the transmittance in the Y-direction is reversed between the first transmission filters 14 and the second transmission filters 15.
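The complementary transmittance profiles described in paragraphs (26) and (27) can be illustrated with a short numerical sketch. This sketch is not part of the patent: the linear profiles, the normalized pixel height, and the light amount are assumptions chosen only to show how a pair of complementary signals encodes the incident position in the Y-direction.

```python
# Hypothetical model of a pixel pair with complementary transmission
# filters. y is the incident position in the Y-direction, normalized to
# [0, H], with y = 0 at the one end 10a.

H = 1.0          # assumed pixel height in the Y-direction (normalized)
I0 = 100.0       # assumed incident light amount of the reflected light L2

def t_first(y, H=H):
    # transmittance of the first transmission filter 14:
    # decreases toward the one end 10a (y = 0)
    return y / H

def t_second(y, H=H):
    # transmittance of the second transmission filter 15:
    # increases toward the one end 10a (y = 0)
    return 1.0 - y / H

def pair_signals(y, I=I0):
    # first and second electric signals Dx_{2r-1}, Dx_{2r} of one pair
    return I * t_first(y), I * t_second(y)

def recover_y(d_first, d_second, H=H):
    # normalized ratio: independent of the incident light amount I
    return H * d_first / (d_first + d_second)

d1, d2 = pair_signals(0.25)
assert abs(recover_y(d1, d2) - 0.25) < 1e-9
```

Because the two transmittances sum to a constant, the normalized ratio recovers the incident position regardless of the absolute light amount.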
(28) The signal processing unit 30 is provided on the other end 10b side (output side) in the Y-direction with respect to the pixels P.sub.1 to P.sub.N. The signal processing unit 30 reads out the charge signals Dx.sub.1 to Dx.sub.N for each of the pixels P.sub.1 to P.sub.N, and detects the incident position of the reflected light L2 for each of the pixel pairs 11 in the light-receiving unit 10 on the basis of the charge signals Dx.sub.1 to Dx.sub.N which are read out. The reading-out type of the charge signals Dx.sub.1 to Dx.sub.N by the signal processing unit 30 is, for example, a rolling shutter type. That is, the signal processing unit 30 sequentially executes reading-out of the charge signals Dx.sub.1 to Dx.sub.N from the pixels P.sub.1 to P.sub.N, and discarding (reset) of the charges accumulated in the pixels P.sub.1 to P.sub.N in a pixel unit. The reading-out type of the charge signals Dx.sub.1 to Dx.sub.N by the signal processing unit 30 may be a global shutter type. In this case, the signal processing unit 30 reads out the charge signals Dx.sub.1 to Dx.sub.N for every frame, and executes reset of the charges of all of the pixels P.sub.1 to P.sub.N.
(29) The signal processing unit 30 includes a plurality of switch elements 31, a shift register 32, an amplifier 33, an A/D converter 34, and a calculation unit 35. Input terminals of the switch elements 31 are electrically connected to the pixels P.sub.1 to P.sub.N, respectively. The shift register 32 is provided to sequentially read out the charge signals Dx.sub.1 to Dx.sub.N from the pixels P.sub.1 to P.sub.N. The shift register 32 outputs a control signal for controlling an operation of the switch elements 31. The switch elements 31 are sequentially closed by the control signal that is output from the shift register 32. When the switch elements 31 are sequentially closed, the charge signals Dx.sub.1 to Dx.sub.N generated in the pixels P.sub.1 to P.sub.N are sequentially output from output terminals of the switch elements 31. The amplifier 33 is electrically connected to the output terminals of the switch elements 31, and outputs a voltage value corresponding to the charge signals Dx.sub.1 to Dx.sub.N output from the output terminals of the switch elements 31. The A/D converter 34 is electrically connected to the amplifier 33, converts voltage values output from the amplifier 33 into digital values, and outputs the digital values. The digital values are values corresponding to intensities of the charge signals Dx.sub.1 to Dx.sub.N. Accordingly, in the following description, the digital values may be substituted with the intensities of the charge signals Dx.sub.1 to Dx.sub.N.
(30) The calculation unit 35 is electrically connected to the A/D converter 34, and acquires digital values output from the A/D converter 34, that is, digital values corresponding to the charge signals Dx.sub.1 to Dx.sub.N for each of the pixel pairs 11. According to this, the calculation unit 35 can acquire position coordinates of the pixel pairs 11 which output the charge signals Dx.sub.1 to Dx.sub.N in the X-direction for each of the pixel pairs 11 as position information Lx indicating the incident position of the reflected light L2 in the X-direction. Here, when the first pixel 12 and the second pixel 13 of an r.sup.th pixel pair 11 are respectively set as P.sub.2r-1 and P.sub.2r (r=1, 2, . . . , R, where R represents the number of the pixel pairs 11), and the position information Lx in the r.sup.th pixel pair 11 is set as Lx.sub.r, a position coordinate of the r.sup.th pixel pair 11 in the X-direction, that is, the position information Lx.sub.r is expressed, for example, by the average value (x.sub.2r-1+x.sub.2r)/2 of a position coordinate x.sub.2r-1 of the first pixel P.sub.2r-1 in the X-direction and a position coordinate x.sub.2r of the second pixel P.sub.2r in the X-direction (refer to the following Expression (1)).
(31) The calculation unit 35 calculates position information Ly that is an incident position of the reflected light L2 in the Y-direction for each of the pixel pairs 11 on the basis of the intensities of the charge signals Dx.sub.1 to Dx.sub.N which are acquired for each of the pixel pairs 11. As described above, the intensities of the charge signals Dx.sub.1, Dx.sub.3, . . . , and Dx.sub.N-1 decrease as the incident position of the reflected light L2 is closer to the one end 10a of the light-receiving unit 10, and the intensities of the charge signals Dx.sub.2, Dx.sub.4, . . . , and Dx.sub.N increase as the incident position of the reflected light L2 is closer to the one end 10a. The calculation unit 35 calculates the position information Ly for each of the pixel pairs 11 by using a variation of the intensities of the charge signals Dx.sub.1 to Dx.sub.N with respect to the incident position of the reflected light L2, and by using a ratio between the intensities of the charge signals Dx.sub.1, Dx.sub.3, . . . , and Dx.sub.N-1 and the intensities of the charge signals Dx.sub.2, Dx.sub.4, . . . , and Dx.sub.N.
(32) Here, when position information Ly in the r.sup.th pixel pair 11 is set as Ly.sub.r and charge signals output from the first pixel P.sub.2r-1 and the second pixel P.sub.2r are set as Dx.sub.2r-1 and Dx.sub.2r, the position information Ly.sub.r is calculated by taking a ratio between the intensity of the charge signal Dx.sub.2r-1 and the intensity of the charge signal Dx.sub.2r. Accordingly, the position information Lx.sub.r and the position information Ly.sub.r are expressed by the following Expression (1).
(33) Lx.sub.r=(x.sub.2r-1+x.sub.2r)/2, Ly.sub.r=Dx.sub.2r-1/Dx.sub.2r  (1)
(34) The position information Ly.sub.r may be calculated by taking a ratio between the intensity of the charge signal Dx.sub.2r (or the intensity of the charge signal Dx.sub.2r-1) and a total value of the intensity of the charge signal Dx.sub.2r-1 and the intensity of the charge signal Dx.sub.2r. In this case, the position information Lx.sub.r and the position information Ly.sub.r are expressed by the following Expression (2).
(35) Lx.sub.r=(x.sub.2r-1+x.sub.2r)/2, Ly.sub.r=Dx.sub.2r/(Dx.sub.2r-1+Dx.sub.2r)  (2)
(36) In Expression (1) or Expression (2), the position information Lx.sub.r may be expressed by the position coordinate x.sub.2r-1 of the first pixel P.sub.2r-1 in the X-direction. In this case, the position information Lx.sub.r and the position information Ly.sub.r are expressed by the following Expression (3) or Expression (4). In addition, the position information Lx.sub.r may be expressed by the position coordinate x.sub.2r of the second pixel P.sub.2r in the X-direction.
(37) Lx.sub.r=x.sub.2r-1, Ly.sub.r=Dx.sub.2r-1/Dx.sub.2r  (3); Lx.sub.r=x.sub.2r-1, Ly.sub.r=Dx.sub.2r/(Dx.sub.2r-1+Dx.sub.2r)  (4)
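The calculations of the position information Lx.sub.r and Ly.sub.r described in paragraphs (30), (32), and (34) can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the numeric coordinates and signal intensities are arbitrary values chosen for the example.

```python
# Per-pixel-pair position calculations, following the description:
# Lx_r from the pair's X coordinates, Ly_r from the two charge signals.

def lx_avg(x_odd, x_even):
    # Lx_r as the average of the pair's position coordinates x_{2r-1}, x_{2r}
    return (x_odd + x_even) / 2.0

def ly_ratio(d_odd, d_even):
    # Ly_r from the direct ratio of the two charge-signal intensities
    return d_odd / d_even

def ly_normalized(d_odd, d_even):
    # Ly_r from one intensity over the total of both intensities
    return d_even / (d_odd + d_even)

# one pixel pair: coordinates x_{2r-1} = 4, x_{2r} = 5; signals 30 and 10
assert lx_avg(4, 5) == 4.5
assert ly_ratio(30.0, 10.0) == 3.0
assert ly_normalized(30.0, 10.0) == 0.25
```

A calibration step (not shown) would map the dimensionless ratio to a physical Y coordinate.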
(38) The calculation unit 35 calculates two-dimensional position information of respective positions on the measurement line ML of the surface 2a of the object 2 on the basis of the position information Lx.sub.r and the position information Ly.sub.r which are obtained as described above. Specifically, the calculation unit 35 calculates the two-dimensional position information of each position on the measurement line ML by associating the position information Lx.sub.r of each of the pixel pairs 11 with each position on the measurement line ML in the direction D2, and by associating the position information Ly.sub.r of the pixel pair 11 with a height of the position on the measurement line ML from the disposition surface S. In addition, the calculation unit 35 calculates two-dimensional information of the measurement line ML at each position of the surface 2a along the direction D1 in correspondence with movement of the object 2 in the direction D1. According to this, it is possible to measure a three-dimensional shape of the surface 2a of the object 2.
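Paragraph (38) describes assembling the three-dimensional shape from the per-pair position information as the object moves in the direction D1. The following is a hypothetical sketch of that assembly; the calibration constant mapping Ly.sub.r to a height above the disposition surface S is an assumed placeholder, and the frame positions and profiles are invented example data.

```python
# Assembling a 3-D point cloud from per-frame (Lx_r, Ly_r) pairs as the
# object moves along the direction D1. frame_positions[k] is the stage
# position in D1 for frame k; profiles[k] is a list of (Lx_r, Ly_r).

HEIGHT_PER_LY = 2.0   # assumed calibration constant (height units per Ly unit)

def build_point_cloud(frame_positions, profiles):
    points = []
    for d1_pos, profile in zip(frame_positions, profiles):
        for lx, ly in profile:
            # (D1 scan position, D2 position along ML, height above S)
            points.append((d1_pos, lx, HEIGHT_PER_LY * ly))
    return points

# two frames of example data
cloud = build_point_cloud([0.0, 0.1], [[(1, 0.5), (2, 0.6)], [(1, 0.7)]])
assert cloud[0] == (0.0, 1, 1.0)
assert len(cloud) == 3
```

Each measurement line contributes one slice of the surface; the scan along D1 stacks the slices into the three-dimensional shape.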
(39) Description will be given of an effect obtained by the shape measurement sensor 1 of this embodiment described above. In the shape measurement sensor 1, the laser light L1 from the light source 3 is reflected on the measurement line ML of the surface 2a of the object 2, and the reflected light L2 that is reflected is incident to the light-receiving unit 10. When the reflected light L2 is incident to the first pixel P.sub.2r-1, the charge signal Dx.sub.2r-1 corresponding to an incident light amount of the reflected light L2 is generated from the first pixel P.sub.2r-1. Similarly, when the reflected light L2 is incident to the second pixel P.sub.2r, the charge signal Dx.sub.2r corresponding to the incident light amount of the reflected light L2 is generated from the second pixel P.sub.2r. The calculation unit 35 acquires the generated charge signal Dx.sub.2r-1 and charge signal Dx.sub.2r for each of the pixel pairs 11 to detect the position information Lx.sub.r indicating the incident position of the reflected light L2 in the X-direction for each of the pixel pairs 11. In addition, the calculation unit 35 calculates the position information Ly.sub.r indicating the incident position in the Y-direction for each of the pixel pairs 11 by using a relationship between the incident position of the reflected light L2 in the Y-direction, and the intensities of the charge signal Dx.sub.2r-1 and the charge signal Dx.sub.2r. In this manner, the position information Lx.sub.r and the position information Ly.sub.r which indicate the incident positions of the reflected light L2 in the light-receiving unit 10 are detected for each of the pixel pairs 11. In addition, the two-dimensional position information of each position on the measurement line ML is calculated on the basis of the position information Lx.sub.r and the position information Ly.sub.r, and, as the object 2 is moved in the direction D1, the three-dimensional shape of the surface 2a of the object 2 is measured.
In the shape measurement sensor 1 according to this embodiment, it is possible to detect the position information Ly.sub.r indicating the incident position in the Y-direction in addition to the position information Lx.sub.r indicating the incident position in the X-direction for each of the pixel pairs 11 with only the charge signal Dx.sub.2r-1 and the charge signal Dx.sub.2r which are generated from each of the pixel pairs 11. That is, it is possible to detect the two-dimensional information of the incident position of the reflected light L2 without separately generating a charge signal for detecting position information indicating the incident position in the Y-direction. According to this, it is possible to suppress an increase of the number of the charge signals, and it is possible to suppress an increase of time necessary for reading-out of the charge signals. Accordingly, according to the shape measurement sensor 1, it is possible to detect the incident position of the reflected light L2 in the light-receiving unit 10 at a high speed. As a result, it is possible to measure the three-dimensional shape of the surface 2a of the object 2 at a high speed, and it is possible to realize reduction of the measurement time and high definition of the measurement results.
(40) In the shape measurement sensor 1, the calculation unit 35 calculates the position information Ly.sub.r indicating the incident position in the Y-direction for each of the pixel pairs 11 by using a ratio between the intensity of the charge signal Dx.sub.2r-1 and the intensity of the charge signal Dx.sub.2r. In this case, it is possible to calculate the position information Ly.sub.r with a simple calculation process, and thus it is possible to detect position information Lx.sub.r and the position information Ly.sub.r which indicate the incident position of the reflected light L2 at a high speed.
(41) In the shape measurement sensor 1, the calculation unit 35 may calculate the position information Ly.sub.r for each of the pixel pairs 11 by using a ratio between the intensity of the charge signal Dx.sub.2r-1 or Dx.sub.2r and a total value of the intensities of the charge signals Dx.sub.2r-1 and Dx.sub.2r. In this manner, when the intensity of the charge signal Dx.sub.2r-1 or Dx.sub.2r is normalized by the total value of the intensities of the charge signals Dx.sub.2r-1 and Dx.sub.2r, it is possible to compensate for a fluctuation of the intensities of the charge signals Dx.sub.2r-1 and Dx.sub.2r. According to this, it is possible to detect the position information Lx.sub.r and the position information Ly.sub.r with accuracy.
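The compensation effect described in paragraph (41) can be checked numerically: a common gain applied to both charge signals, such as a fluctuation of laser power or surface reflectance, cancels in the normalized ratio. A minimal sketch, not from the patent, with invented signal values:

```python
# The normalized ratio Dx_{2r} / (Dx_{2r-1} + Dx_{2r}) is invariant under
# a common scaling of both signals, so intensity fluctuations that affect
# the whole pair equally do not shift the computed Y position.

def ly_normalized(d_odd, d_even):
    return d_even / (d_odd + d_even)

d_odd, d_even = 60.0, 20.0   # example pair of charge-signal intensities
for gain in (0.5, 1.0, 2.0): # common fluctuation factors
    assert abs(ly_normalized(gain * d_odd, gain * d_even) - 0.25) < 1e-12
```

A direct ratio Dx.sub.2r-1/Dx.sub.2r is also gain-invariant; the normalized form additionally keeps the result bounded in [0, 1], which simplifies calibration.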
(42) In the shape measurement sensor 1, the light source 3 irradiates respective positions on the measurement line ML with the line-shaped laser light L1. In this case, as described above, the calculation unit 35 can appropriately calculate the two-dimensional position information of the respective positions on the measurement line ML by associating the respective positions on the measurement line ML with position coordinates of the pixel pairs 11 in the X-direction.
First Modification Example
(44) In this modification example, the calculation unit 35 can also detect the position information Lx.sub.r and the position information Ly.sub.r which are expressed by Expression (2) or Expression (4). In this case, an adder is provided on a connection line between one input terminal of each of the amplifiers 33 and each of the pixel pairs 11. The adder includes two input terminals which are electrically connected to the first pixel 12 and the second pixel 13, respectively, and an output terminal that is electrically connected to the one input terminal of the amplifier 33. The adder calculates a total value of the charge signal Dx.sub.2r-1 and the charge signal Dx.sub.2r which are generated from the first pixel 12 and the second pixel 13, and outputs the calculated total value to the one input terminal of the amplifier 33. The amplifier 33 outputs a ratio between the charge signal Dx.sub.2r-1 (or charge signal Dx.sub.2r) input from the other input terminal of the amplifier 33 and the total value output from the adder to the calculation unit 35. According to this configuration, the calculation unit 35 can detect the position information Lx.sub.r and the position information Ly.sub.r with Expression (2) or Expression (4). According to the image capturing device 4A according to this modification example, the position information Lx.sub.r and the position information Ly.sub.r can be acquired as in the embodiment, and thus it is possible to obtain the same effect as in the embodiment. In addition, according to the image capturing device 4A according to this modification example, it is not necessary to sequentially perform reading-out of the charge signals Dx.sub.1 to Dx.sub.N from the pixels P.sub.1 to P.sub.N differently from the embodiment, and thus it is possible to simultaneously acquire position information of respective positions on the measurement line ML of the surface 2a of the object 2.
According to this, it is possible to measure the surface shape of the object 2 in real time.
Second Modification Example
(47) The calculation unit 35 acquires the charge signal Dx.sub.2r-1 and the charge signal Dx.sub.2r for each of the pixel pairs 11 by associating the charge signal Dx.sub.2r-1 and the charge signal Dx.sub.2r which are output from the amplifiers 33A and 33B with time at which the reflected light L2 is incident to each of the pixel pairs 11. Specifically, when the time at which the reflected light L2 is incident to the r.sup.th pixel pair 11 is set as t.sub.r, the calculation unit 35 acquires time information t.sub.r for each of the pixel pairs 11 as the position information Lx.sub.r by associating a position coordinate of the pixel pair 11, from which the charge signal Dx.sub.2r-1 and the charge signal Dx.sub.2r are output, in the X-direction with the time information t.sub.r. In addition, as in the embodiment, the calculation unit 35 calculates the position information Ly.sub.r for each of the pixel pairs 11 on the basis of intensities of the charge signals Dx.sub.2r-1 and Dx.sub.2r acquired for each of the pixel pairs 11. Accordingly, in this modification example, the position information Lx.sub.r and the position information Ly.sub.r are expressed by the following Expression (5) or Expression (6).
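The time-based acquisition of paragraph (47) can be sketched as follows. This is not the patent's implementation: the linear mapping from the time information t.sub.r to a scan position, the scan speed, and all numeric values are assumptions for illustration only.

```python
# Second modification example (sketch): the X position is recovered from
# the time t_r at which the reflected light reaches the r-th pixel pair,
# instead of from the pair's column coordinate.

def position_from_time(t_r, scan_speed, t0=0.0):
    # assumed linear scan: Lx_r proportional to elapsed time
    return scan_speed * (t_r - t0)

# invented events: (t_r, Dx_{2r-1}, Dx_{2r})
events = [(0.001, 30.0, 10.0), (0.002, 10.0, 30.0)]

profile = [(position_from_time(t, scan_speed=1000.0),
            d_even / (d_odd + d_even))   # Ly_r as the normalized ratio
           for t, d_odd, d_even in events]

assert abs(profile[0][0] - 1.0) < 1e-9
assert abs(profile[0][1] - 0.25) < 1e-12
```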
(48) Lx.sub.r=t.sub.r, Ly.sub.r=Dx.sub.2r-1/Dx.sub.2r  (5); Lx.sub.r=t.sub.r, Ly.sub.r=Dx.sub.2r/(Dx.sub.2r-1+Dx.sub.2r)  (6)
Third Modification Example
(50) When the light-receiving unit 10A includes the pixels P.sub.1 to P.sub.N having the above-described shape, as the incident position of the reflected light L2 in the first pixel 12A is closer to the one end 10a, an incident light amount of the reflected light L2 incident to the first pixels 12A decreases, and according to this, intensities of charge signals Dx.sub.1, Dx.sub.3, . . . , and Dx.sub.N-1 generated in the first pixels 12A also decrease. In contrast, as the incident position of the reflected light L2 in the second pixel 13A is closer to the one end 10a, an incident light amount of the reflected light L2 incident to the second pixels 13A increases, and according to this, intensities of charge signals Dx.sub.2, Dx.sub.4, . . . , and Dx.sub.N generated in the second pixels 13A also increase. Even in a case where the pixels P.sub.1 to P.sub.N have the above-described shape, it is possible to detect the position information Lx.sub.r and the position information Ly.sub.r as in the embodiment, and thus it is possible to obtain the same effect as in the embodiment.
(51) The shape and the arrangement of the pixels P.sub.1 to P.sub.N are not limited to the above-described shape.
(52) It is not necessary for the arrangement of the pixels P.sub.1 to P.sub.N in the X-direction to be an arrangement in which the first pixel 12B and the second pixel 13B are alternately arranged in parallel, and the arrangement may be another arrangement.
Fourth Modification Example
(57) On the other hand, each of the second light-shielding parts 17 is disposed on the second pixel 13, and shields the incident reflected light L2. The second light-shielding part 17 covers another portion excluding one portion 13a of each of a plurality of the second pixels 13. A width of the one portion 13a in the X-direction gradually increases (or increases step by step) as it is closer to the one end 10a, and gradually decreases (or decreases step by step) as it is closer to the other end 10b. In an example, the one portion 13a has an isosceles triangular shape that tapers toward the other end 10b side in the Y-direction. In this case, the second light-shielding part 17 has a shape that is hollowed out in the isosceles triangular shape.
(58) When the light-receiving unit 10E includes the first light-shielding parts 16 and the second light-shielding parts 17, in a plurality of the first pixels 12, as an incident position of the reflected light L2 in the Y-direction is closer to the one end 10a in the Y-direction, an incident light amount of the reflected light L2 incident to the first pixels 12 decreases, and according to this, intensities of charge signals Dx.sub.1, Dx.sub.3, . . . , and Dx.sub.N-1 generated in the first pixels 12 also decrease. In contrast, in the second pixels 13, as the incident position of the reflected light L2 in the Y-direction is closer to the one end 10a in the Y-direction, an incident light amount of the reflected light L2 incident to the second pixels 13 increases, and according to this, intensities of charge signals Dx.sub.2, Dx.sub.4, . . . , and Dx.sub.N generated in the second pixels 13 also increase. Even in this aspect, it is possible to obtain the same effect as in the embodiment.
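The light-shielding geometry of paragraphs (57) and (58) achieves the same complementary linear signal variation as the transmission filters, by varying the exposed width instead of the transmittance. A hypothetical sketch, assuming ideal triangular openings and uniform illumination across the pixel width:

```python
# Triangular openings in the light-shielding parts: the exposed width,
# and so the collected light, varies linearly along the Y-direction.
# y in [0, 1], with y = 0 at the one end 10a; W is the full pixel width.

W = 1.0   # assumed pixel width in the X-direction (normalized)

def open_width_first(y):
    # one portion 12a tapers toward the one end 10a: less light near 10a
    return W * y

def open_width_second(y):
    # one portion 13a tapers toward the other end 10b: more light near 10a
    return W * (1.0 - y)

def recover_y(w1, w2):
    # same normalized-ratio readout as with the transmission filters
    return w1 / (w1 + w2)

assert abs(recover_y(open_width_first(0.3), open_width_second(0.3)) - 0.3) < 1e-9
```

Since the two exposed widths sum to the constant W, the downstream ratio calculation is unchanged; only the mechanism that encodes Y into signal intensity differs.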
Fifth Modification Example
(60) The signal processing units 30C and 30D are provided on both sides of the pixels P.sub.1 to P.sub.N in the Y-direction, respectively. Each of the signal processing units 30C and 30D includes the plurality of switch elements 31, the shift register 32, the amplifier 33, and the A/D converter 34. Input terminals of the switch elements 31 of the signal processing unit 30C are electrically connected to a plurality of the regions 12E and 13E, and the input terminals of the switch elements 31 of the signal processing unit 30D are electrically connected to the regions 12F and 13F. The calculation unit 35 is electrically connected to the A/D converter 34 of the signal processing unit 30C, and the A/D converter 34 of the signal processing unit 30D. As in the embodiment, the calculation unit 35 calculates position information Lx.sub.r and position information Ly.sub.r with respect to an incident position of the reflected light L2 incident to the light-receiving unit 10F on the basis of charge signals DxE.sub.1 to DxE.sub.N generated in the regions 12E and 13E and charge signals DxF.sub.1 to DxF.sub.N generated in the regions 12F and 13F.
(61) In the image capturing device 4J of this modification example, each of the pixels P.sub.1 to P.sub.N is divided into two parts, and as a result, the charge signals DxE.sub.1 to DxE.sub.N generated in the regions 12E and 13E are read out by the signal processing unit 30C, and the charge signals DxF.sub.1 to DxF.sub.N generated in the regions 12F and 13F are read out by the signal processing unit 30D. According to this, in each of the pixels P.sub.1 to P.sub.N, it is possible to shorten a distance from a portion to which the reflected light L2 is incident to each of the switch elements 31. As a result, utilization efficiency of the reflected light L2 incident to the pixels P.sub.1 to P.sub.N is raised, and accuracy of the position information Lx.sub.r and the position information Ly.sub.r can be improved.
Sixth Modification Example
(63) Here, the metal wires 20 extending along the Y-direction are respectively provided on the pixels P.sub.1 to P.sub.N, and the metal wires 20 are respectively connected to the switch elements 31 so that the charge signals Dx.sub.1 to Dx.sub.N pass through the metal wires 20. According to this, it is possible to improve the movement speed of the charge signals Dx.sub.1 to Dx.sub.N, and it is possible to improve a reading-out speed of the charge signals Dx.sub.1 to Dx.sub.N.
(64) The shape measurement sensor of the present disclosure is not limited to the embodiment and the modification examples, and various modifications can be additionally made. For example, the embodiment and the modification examples may be combined in correspondence with an object and an effect which are required.
REFERENCE SIGNS LIST
(65) 1, 1A: shape measurement sensor, 2: object, 2a: surface, 3: light source, 4, 4A, 4B to 4H, 4J, 4K: image capturing device, 10, 10A to 10G: light-receiving unit, 10a: one end, 10b: other end, 11, 11A to 11D: pixel pair, 12, 12A to 12D: first pixel, 12a, 13a: one portion, 13, 13A to 13D: second pixel, 14: first transmission filter, 15: second transmission filter, 16: first light-shielding part, 17: second light-shielding part, 20: metal wire, 30, 30A to 30D: signal processing unit, Da: irradiation direction, Db: inclination direction, Dx.sub.1 to Dx.sub.N: charge signal, L1: laser light, L2: reflected light, Lx, Ly: position information, ML: measurement line.