Optical touch system and method having image sensors to detect objects over a touch surface
09766753 · 2017-09-19
Assignee
Inventors
- Tzung Min Su (Hsinchu, TW)
- Ming Tsan Kao (Hsinchu, TW)
- Chun-Sheng Lin (Hsinchu, TW)
- Chih Hsin Lin (Hsinchu, TW)
- Yi Hsien Ko (Hsinchu, TW)
Cpc classification
G06F3/0416
PHYSICS
G06F2203/04104
PHYSICS
International classification
Abstract
The present invention discloses embodiments of optical touch systems and methods. One embodiment of the present invention is directed to an optical touch system that includes at least two image sensors configured to detect a plurality of objects over a touch surface to generate a plurality of object images; a plurality of first light-receiving elements, arranged on a side of the touch surface along a first direction and configured to detect the objects; and a processing unit configured to calculate a plurality of candidate coordinate data based on the object images and to select, based on detection data of the first light-receiving elements, the coordinate data that represents the objects from the candidate coordinate data.
Claims
1. An optical touch system, comprising: a plurality of first light detectors, arranged on a side of a touch surface along a first direction and configured to detect a plurality of objects on the touch surface, wherein at least two of the first light detectors detect the plurality of objects; at least two image sensors, configured to detect the plurality of objects to generate a plurality of object images; and a processor, configured to calculate a plurality of candidate coordinate data, based on the object images, to group the plurality of candidate coordinate data into two groups, and to select one group of coordinate data representing the objects from the two groups of candidate coordinate data, based on positions of the at least two of the first light detectors detecting the plurality of objects, wherein the processor is further configured to obtain a first distribution width, based on detection data of the first light detectors, and to select the coordinate data representing the objects, based on the first distribution width, the first distribution width being a distance between the farthest two of the first light detectors that detect the objects, to calculate a second distribution width of each group of candidate coordinate data, and to compare the first distribution width with the second distribution width.
2. The optical touch system of claim 1, wherein the processor obtains coordinate data in the first direction, based on the detection data of the first light detectors, and selects the coordinate data that represent the objects, based on the coordinate data.
3. The optical touch system of claim 1, wherein when the processor performs selection, the processor is configured to group the candidate coordinate data, wherein the number of candidate coordinate data in each group is equal to the number of object images generated by the image sensor, and all of the candidate coordinate data in any group of candidate coordinate data correspond to different sensing paths of the image sensors.
4. The optical touch system of claim 1, wherein a first light detector that detects the objects generates a signal that is smaller than a signal generated by a first light detector that does not detect the objects.
5. The optical touch system of claim 1, further comprising a plurality of second light detectors, wherein the second light detectors are arranged on a side of the touch surface along a second direction, wherein the second direction extends transverse to the first direction, and the second light detectors are configured to detect the objects, and the processor further selects coordinate data that represents the objects from the plurality of candidate coordinate data, based on detection data of the second light detectors.
6. The optical touch system of claim 1, wherein the first light detectors are configured on the side of the touch surface that is opposite to one of the at least two image sensors and adjacent to the other of the at least two image sensors, wherein a portion of the first light detectors that are closer to the image sensors are less densely spaced than another portion of the first light detectors that are farther from the image sensors.
7. The optical touch system of claim 1, wherein each of the first light detectors comprises a photo-transistor.
8. The optical touch system of claim 1, further comprising a plurality of light-emitting devices configured to illuminate the objects.
9. The optical touch system of claim 8, wherein one of the light-emitting devices is disposed opposite to the first light detectors.
10. The optical touch system of claim 1, wherein the position of the at least two of the first light detectors detecting the plurality of objects represents at least two position values along the first direction of the touch surface, and the processor selects coordinate data having the same position values along the first direction with the at least two of the first light detectors detecting the plurality of objects.
11. An optical touch-sensing method, which comprises the steps of: receiving a plurality of first images and a plurality of second images related to a plurality of objects from at least one image sensor; receiving detection data from a plurality of first light detectors configured to detect the objects and convert light received from the objects into electrical signals, wherein the first light detectors correspond to different coordinate data along a first direction, respectively; calculating a plurality of candidate coordinate data of the plurality of objects, based on the plurality of first and second images; grouping the plurality of candidate coordinate data into two groups; and selecting one group of candidate coordinate data that represents the plurality of objects from the two groups of candidate coordinate data, based on positions of the plurality of the first light detectors detecting the plurality of objects, wherein the step of selecting coordinate data that represents the objects comprises steps of: obtaining a first distribution width, based on the detection data of the first light detectors, the first distribution width being a distance between the farthest two of the first light detectors that detect the objects, selecting the coordinate data that represents the objects, based on the first distribution width, calculating a second distribution width of each group of candidate coordinate data, and comparing the second distribution width with the first distribution width.
12. The optical touch-sensing method of claim 11, wherein the step of selecting coordinate data that represents the objects comprises obtaining coordinate data of the objects along the first direction, based on the detection data, and selecting the coordinate data that represents the objects, based on the coordinate data.
13. The optical touch-sensing method of claim 11, wherein the first images and the second images are a plurality of object images generated by two image sensors, respectively, or the first images and the second images are a plurality of object images and a plurality of virtual object images produced by one image sensor, respectively.
14. The optical touch-sensing method of claim 11, wherein the positions of the plurality of the first light detectors detecting the plurality of objects represent a plurality of position values along the first direction of the touch surface, and the selected coordinate data has the same position values along the first direction as the plurality of the first light detectors detecting the plurality of objects.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The objectives and advantages of the present invention are illustrated with the following description and upon reference to the accompanying drawings.
DETAILED DESCRIPTION OF THE INVENTION
(11) Continuing the above description, the two image sensors 222 and 224 are configured to detect a plurality of objects O.sub.1 and O.sub.2 on the touch surface 20, and generate pictures that comprise a plurality of object images, respectively. Through analyzing the picture generated by the image sensor 222, the processing unit 23 can determine sensing paths S.sub.21 and S.sub.22 that are blocked by the objects O.sub.1 and O.sub.2, respectively; and through analyzing the picture generated by the image sensor 224, the processing unit 23 can determine sensing paths S.sub.23 and S.sub.24 that are blocked by the objects O.sub.1 and O.sub.2, respectively. When the processing element 23 calculates the intersecting points of the sensing paths of the two image sensors 222 and 224 to obtain coordinate data of the two objects, it will obtain four samples of candidate coordinate data P.sub.213, P.sub.214, P.sub.223 and P.sub.224 that are at the intersections of the sensing paths S.sub.21, S.sub.22, S.sub.23 and S.sub.24. The four samples of candidate coordinate data P.sub.213, P.sub.214, P.sub.223 and P.sub.224 include actual touch positions of the objects O.sub.1 and O.sub.2 (the candidate coordinate data P.sub.213 and P.sub.224) and ghost touch positions (the candidate coordinate data P.sub.214 and P.sub.223). With respect to the calculation of the aforementioned touch-sensing paths and candidate coordinate data, U.S. patent application Ser. No. 13/302,481 can be used as a source of reference, and is incorporated herein by reference.
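The intersection computation described in this paragraph can be sketched as follows. This is a minimal illustration, not the method of the incorporated Ser. No. 13/302,481; the sensor positions and blocked-path direction vectors are hypothetical values chosen so that the four intersections play the roles of the candidate coordinate data P.sub.213, P.sub.214, P.sub.223 and P.sub.224:

```python
def intersect(p, d1, q, d2):
    # Intersection of sensing path p + t*d1 with path q + s*d2 (2-D),
    # solved with Cramer's rule; returns None for parallel paths.
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    if abs(det) < 1e-12:
        return None
    rx, ry = q[0] - p[0], q[1] - p[1]
    t = (rx * (-d2[1]) + d2[0] * ry) / det
    return (p[0] + t * d1[0], p[1] + t * d1[1])

# Hypothetical layout: image sensor 222 at the origin, sensor 224 at (10, 0).
# Each sensor reports two blocked sensing paths as direction vectors.
s1, s2 = (0.0, 0.0), (10.0, 0.0)
paths1 = [(2.0, 3.0), (5.0, 1.0)]      # S21, S22
paths2 = [(-8.0, 3.0), (-5.0, 1.0)]    # S23, S24

# Four candidates: two actual touch points and two ghost points.
candidates = [intersect(s1, d1, s2, d2) for d1 in paths1 for d2 in paths2]
```

With these values, two of the four candidates coincide with the assumed touch points (2, 3) and (5, 1); the other two are the ghost positions that the first light-receiving elements later help eliminate.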
(12) Continuing the above description, the first light-receiving elements 212 are arranged on a side of the touch surface 20 along an X-axial direction and configured to detect the objects O.sub.1 and O.sub.2. According to an embodiment, one of the light-emitting devices 254 is disposed opposite to the first light-receiving elements 212 to provide light for a detection operation. Preferably, the first light-receiving elements 212 are primarily used to obtain a one-dimensional signal distribution or location information of the objects O.sub.1 and O.sub.2 along the X-axial direction for determining which of the candidate coordinate data P.sub.213, P.sub.214, P.sub.223 and P.sub.224 are the actual touch positions. Because the first light-receiving elements 212 are not used to accurately calculate the coordinate data of the objects O.sub.1 and O.sub.2, the light-emitting devices 252, 254 and 256 do not need to emit narrow beams; they may be scattering light sources.
(13) Referring to
(14) According to an embodiment, the light receiver 21 may further include a circuit, which can generate a one-dimensional signal distribution extending along the X-axial direction, based on the electric signals generated by the first light-receiving elements 212, to serve as detection data, which is then transmitted to the processing unit 23. According to a different embodiment, based on the electric signals generated by the first light-receiving elements 212, the optical touch system 2 may further determine the first light-receiving elements 212 located at positions x.sub.5 to x.sub.7 to be the detection data, and the detection data is then transmitted to the processing unit 23. According to an embodiment, the processing unit 23 determines the first light-receiving elements 212 that detect the objects O.sub.1 and O.sub.2, based on the electric signals. According to an embodiment, because the electric signals generated by the first light-receiving elements 212 at positions x.sub.5 and x.sub.7 are lower than the electric signal generated by the first light-receiving element 212 at position x.sub.6, it can be determined that the objects O.sub.1 and O.sub.2 are located at positions x.sub.5 and x.sub.7, i.e. distributed between positions x.sub.5 and x.sub.7 in the X-axial direction.
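A minimal sketch of how such detection data might be extracted from the one-dimensional signal distribution, assuming (as in the transmissive embodiments here) that shadowed receivers produce lower signals; the signal levels and threshold are invented for illustration:

```python
def detecting_positions(signal, threshold):
    # Indices of receivers whose signal falls below the threshold,
    # i.e. receivers shadowed by an object.
    return [x for x, s in enumerate(signal) if s < threshold]

# Hypothetical distribution: dips at positions 5 and 7 (objects O1, O2),
# while the unshadowed receiver at position 6 stays high.
signal = [9, 9, 9, 9, 9, 3, 8, 4, 9, 9]
positions = detecting_positions(signal, threshold=6)
first_width = max(positions) - min(positions)   # |x7 - x5|
```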
(15) Continuing the above description, from the plurality of candidate coordinate data P.sub.213, P.sub.214, P.sub.223 and P.sub.224, the processing unit 23 selects coordinate data P.sub.213 and P.sub.224 that represents the objects O.sub.1 and O.sub.2, based on the detection data of the first light-receiving elements 212 transmitted by the light receiver 21. According to an embodiment, the detection data reflects location information or a distribution of the objects O.sub.1 and O.sub.2 in the X-axial direction. The processing unit 23 compares the candidate coordinate data P.sub.213, P.sub.214, P.sub.223 and P.sub.224 with the distribution to determine the actual touch positions (i.e. candidate coordinate data P.sub.213 and P.sub.224) of the objects O.sub.1 and O.sub.2. According to an embodiment, based on the detection data of the first light-receiving elements 212, the processing unit 23 obtains a first distribution width (|x.sub.7−x.sub.5|) of the objects O.sub.1 and O.sub.2. The processing unit 23 then calculates second distribution widths between any two of the candidate coordinate data P.sub.213, P.sub.214, P.sub.223 and P.sub.224, and compares the second distribution widths with the first distribution width to select the candidate coordinate data P.sub.213 and P.sub.224 with the second distribution width that is closest to the first distribution width as the coordinate data that represents the objects O.sub.1 and O.sub.2.
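The width comparison in this paragraph can be sketched as follows; the candidate coordinates and the measured first distribution width are hypothetical values, and only the pairing of candidates into an actual group and a ghost group is assumed:

```python
def x_width(points):
    # Second distribution width: X-extent of one group of candidates.
    xs = [p[0] for p in points]
    return max(xs) - min(xs)

def select_by_width(groups, first_width):
    # Choose the group whose X-extent is closest to the width
    # measured by the first light-receiving elements.
    return min(groups, key=lambda g: abs(x_width(g) - first_width))

actual = [(2.0, 3.0), (5.0, 1.0)]      # in the roles of P213, P224
ghost = [(1.18, 1.76), (6.52, 1.30)]   # in the roles of P214, P223
chosen = select_by_width([actual, ghost], first_width=3.2)
```

Here the actual group has X-extent 3.0 and the ghost group about 5.34, so a measured width near 3 selects the actual touch positions.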
(16) According to an embodiment, when there are two candidate coordinate data, and there are several consecutive first light-receiving elements 212 (as illustrated in
(17) According to another embodiment, when there are at least two groups of candidate coordinate data and there are at least two groups of first light-receiving elements 212 (e.g. a first group comprising the first light-receiving elements 212 corresponding to positions x.sub.5 to x.sub.6 and a second group comprising the first light-receiving elements 212 corresponding to positions x.sub.6 to x.sub.7) in the light receiver 21 that detect the objects O.sub.1 and O.sub.2, a centroid/center of each group of the light-receiving elements 212 is calculated, and the distance between the two centroids/centers serves as the first distribution width.
(18) According to still another embodiment, when there are at least two groups of candidate coordinate data, and there are at least two groups of first light-receiving elements 212 (e.g. the first light-receiving elements 212 corresponding to positions x.sub.5 to x.sub.6 and to positions x.sub.6 to x.sub.7) in the light-receiver 21 that detect objects O.sub.1 and O.sub.2, left borders or right borders of the two groups of the first light-receiving elements 212 are subtracted from each other, or a left border of the left group of first light-receiving elements 212 and a right border of the right group of the first light-receiving element 212 are subtracted from each other to obtain the first distribution width.
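The two width measures in paragraphs (17) and (18) can be sketched as follows; the receiver indices are hypothetical, each inner list holding the positions of one group of shadowed receivers, and the border variant shown is the outer-edge one (left border of the left group to right border of the right group):

```python
def centroid_width(groups):
    # Paragraph (17): distance between the centroids of two
    # groups of shadowed first light-receiving elements.
    c = [sum(g) / len(g) for g in groups]
    return abs(c[1] - c[0])

def border_width(groups):
    # Paragraph (18): subtract the left border of the left group
    # from the right border of the right group.
    left, right = sorted(groups, key=min)
    return max(right) - min(left)

groups = [[5, 6], [6, 7]]   # shadowed receivers at x5..x6 and x6..x7
```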
(19) According to still another embodiment, when there are two groups of candidate coordinate data, and there are at least two groups of first light-receiving elements 212 (e.g. the first light-receiving elements 212 corresponding to positions x.sub.5 to x.sub.6 and to positions x.sub.6 to x.sub.7) in the light-receiver 21 that detect objects O.sub.1 and O.sub.2, and based on detection data of the first light-receiving elements 212, the processing unit 23 obtains approximate coordinate data, x.sub.5 and x.sub.7 in the X-axial direction in the present embodiment, of the objects O.sub.1 and O.sub.2. Then, from the candidate coordinate data P.sub.213, P.sub.214, P.sub.223 and P.sub.224, the processing unit 23 selects the candidate coordinate data P.sub.213 and P.sub.224 with coordinate data that are closest to the approximate coordinate data (x.sub.5 and x.sub.7) as the coordinate data that represents the objects O.sub.1 and O.sub.2.
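The nearest-coordinate selection in this paragraph might look like the following sketch, using hypothetical candidate points and approximate X-axial coordinates:

```python
def select_by_coordinates(candidates, approx_xs):
    # For each approximate X coordinate from the receivers, keep the
    # candidate whose X coordinate is closest to it.
    return [min(candidates, key=lambda p: abs(p[0] - ax)) for ax in approx_xs]

# Hypothetical candidates in the roles of P213, P214, P223, P224,
# with approximate receiver coordinates near the actual touches.
candidates = [(2.0, 3.0), (1.18, 1.76), (6.52, 1.30), (5.0, 1.0)]
selected = select_by_coordinates(candidates, approx_xs=[2.0, 5.0])
```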
(20) In the foregoing embodiments, the first light-receiving elements 212 are not used to accurately determine the touch positions of the objects O.sub.1 and O.sub.2, but are used to help determine the actual touch positions from the candidate coordinate data P.sub.213, P.sub.214, P.sub.223 and P.sub.224 to be P.sub.213 and P.sub.224. Therefore, the need for narrow beam light sources to achieve one-to-one correspondence with the first light-receiving elements 212 is no longer required, and only an approximate distribution width in the X-axial direction or approximate X-axial coordinate data of the objects O.sub.1 and O.sub.2 are required to obtain accurate positions P.sub.213 and P.sub.224.
(21) According to an embodiment, when the number of object images generated by the image sensor 222 and the number of object images generated by the image sensor 224 are the same, the processing unit 23 can group the candidate coordinate data. The number of candidate coordinate data in each group is equal to the number of object images, and the candidate coordinate data in a group represents points on different sensing paths corresponding to the image sensors 222 and 224. For example, P.sub.213 and P.sub.224 are located on different sensing paths S.sub.21 and S.sub.22 of the image sensor 222 and are located on different sensing paths S.sub.23 and S.sub.24 of the image sensor 224. Hence, P.sub.213 and P.sub.224 can be treated as a group, and similarly, P.sub.214 and P.sub.223 can be treated as a group. In contrast, P.sub.213 and P.sub.214 are located on the same sensing path S.sub.21 of the image sensor 222, and therefore P.sub.213 and P.sub.214 will not form a group. According to an embodiment, the processing unit 23 can calculate a distribution width of each group of candidate coordinate data in the X-axial direction, and then compare each distribution width with the first distribution width (which is roughly |x.sub.7−x.sub.5|) obtained, based on the detection data, to determine the actual touch position. According to another embodiment, the processing unit 23 may obtain coordinate data of the farthest two of the candidate coordinate data in the X-axial direction, and then compare the coordinate data with the coordinate data x.sub.5 and x.sub.7 obtained, based on the detection data, to determine the actual touch position. The embodiments where the light-receiving elements 212 are arranged along the X-axial direction should be considered as examples. The present invention is not limited to be implemented in this manner. The first light-receiving elements 212 may also be arranged along a non X-axial direction, such as the Y-axial direction.
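The grouping rule in this paragraph (each group contains exactly one candidate per sensing path of each sensor) amounts to enumerating permutations. A minimal sketch, with candidates keyed by hypothetical sensing-path indices:

```python
from itertools import permutations

def group_candidates(points, n):
    # points is keyed by (sensing-path index at sensor 222,
    # sensing-path index at sensor 224); a valid group uses each
    # sensing path of each sensor exactly once, so the groups are
    # the permutations of sensor-224 paths against sensor-222 paths.
    return [[points[(i, perm[i])] for i in range(n)]
            for perm in permutations(range(n))]

points = {(0, 0): "P213", (0, 1): "P214",
          (1, 0): "P223", (1, 1): "P224"}
groups = group_candidates(points, 2)
```

For two objects this yields exactly the two groups named in the text, {P.sub.213, P.sub.224} and {P.sub.214, P.sub.223}; P.sub.213 and P.sub.214 never share a group because they lie on the same sensing path S.sub.21.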
(22) According to an embodiment, the optical touch system 2 further includes a memory unit 24, configured to store a program to be executed by the processing unit 23 or to store data required for the processing unit 23 to execute the program, such as the received pictures generated by the image sensors 222 and 224 and the one-dimensional signal distribution generated by the light-receiving elements 212.
(24) Particularly, the objects O.sub.1, O.sub.2 and O.sub.3 block a portion of the light that travels toward the Y-axial light-receiving elements 412, causing an intensity I.sub.y of light transmitted to the Y-axial light-receiving elements 412 to be lower at position y.sub.3 and positions y.sub.5 to y.sub.7. Therefore, a one-dimensional signal distribution along the Y-axial direction generated by the Y-axial light-receiving elements 412 is lower at position y.sub.3 and positions y.sub.5 to y.sub.7.
(25) According to an embodiment, based on detection data of the Y-axial light-receiving elements 412, the processing unit obtains a largest distribution width (|y.sub.7−y.sub.3|) of the objects O.sub.1, O.sub.2 and O.sub.3 along the Y-axial direction. According to an embodiment, because the objects O.sub.1, O.sub.2 and O.sub.3 are located on different sensing paths of the image sensors 422 and 424, the processing unit treats P.sub.414, P.sub.425, P.sub.436 as a first group, P.sub.414, P.sub.426, P.sub.435 as a second group, P.sub.415, P.sub.424, P.sub.436 as a third group, P.sub.415, P.sub.426, P.sub.434 as a fourth group, P.sub.416, P.sub.424, P.sub.435 as a fifth group, and P.sub.416, P.sub.425, P.sub.434 as a sixth group when performing a comparison with the largest distribution width. During the comparison, the first group, whose distribution width is larger than |y.sub.7−y.sub.3|, and the fourth, fifth and sixth groups, whose distribution widths are smaller than |y.sub.7−y.sub.3|, are eliminated. The processing unit further compares which of the second and third groups of candidate coordinate data is closer to the coordinate data y.sub.3 and y.sub.7 in the Y-axial direction obtained based on the detection data, and then selects the second group, which is the closest. In the aforementioned embodiment, the largest distribution width is compared first, and then the coordinate data are compared. Alternatively, the coordinate data may be compared directly.
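The two-stage selection in this paragraph (eliminate groups by distribution width, then compare the survivors against the approximate coordinates) can be sketched as follows; the Y coordinates of the six groups, the measured width, and the tolerance are hypothetical:

```python
def select_group(groups, first_width, approx, tol=0.3):
    # Stage 1: drop groups whose Y-extent differs from the measured
    # width |y7 - y3| by more than a tolerance.
    def extent(g):
        return max(g) - min(g)
    kept = [g for g in groups if abs(extent(g) - first_width) <= tol]
    # Stage 2: among the survivors, pick the group whose extreme
    # Y values are closest to the approximate coordinates (y3, y7).
    def error(g):
        return abs(min(g) - approx[0]) + abs(max(g) - approx[1])
    return min(kept, key=error)

# Hypothetical Y coordinates of six candidate groups; receivers
# measured first_width = |y7 - y3| = 4 with dips at y = 3 and y = 7.
groups = [[3, 5, 8], [3, 5, 7], [3.4, 5, 7.2],
          [3, 5, 6], [4, 5, 6.5], [4, 5, 7.5]]
best = select_group(groups, first_width=4, approx=(3, 7))
```

With these values the first group (extent 5) and the last three (extents 3, 2.5 and 3.5) are eliminated, and the second group wins the coordinate comparison against the third.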
(26) The light-receiving elements 412 can be configured on the side of the touch surface 40 that is opposite to one of the at least two image sensors 422 and 424 and adjacent to the other. In the present embodiment, the plurality of Y-axial light-receiving elements 412 are configured on the side 47 of the touch surface 40, and the Y-axial light-receiving elements 412 that are closer to the image sensors 422 or 424 are less densely spaced than those that are farther from the image sensors 422 or 424.
(28) According to an embodiment, the optical touch system 5 further includes a plurality of X-axial light-receiving elements 512. The processing unit uses the image sensors 532, 534 and 536 to generate candidate coordinate data of the objects on the touch surface 50, obtains an approximate X-axial distribution width or X-axial coordinate data through X-axial light receiving elements 512, and then selects the coordinate data that represent the objects from the candidate coordinate data, based on the approximate X-axial distribution or X-axial coordinate data of the objects.
(29) According to an embodiment, the optical touch system 5 further includes a plurality of Y-axial light-receiving elements 522. The processing unit uses the image sensors 532, 534 and 536 to generate candidate coordinate data of the objects on the touch surface 50, obtains an approximate Y-axial distribution or approximate Y-axial coordinate data through the Y-axial light-receiving elements 522, and then selects the coordinate data that represents the objects from the candidate coordinate data, based on the approximate Y-axial distribution or Y-axial coordinate data of the objects.
(30) According to an embodiment, the optical touch system 5 includes a plurality of X-axial light-receiving elements 512 and a plurality of Y-axial light-receiving elements 522, and the processing unit can use detection data of the X-axial light-receiving elements 512 and detection data of the Y-axial light-receiving elements 522 to obtain an approximate X-axial distribution width or approximate X-axial coordinate data, and an approximate Y-axial distribution width or approximate Y-axial coordinate data, for selecting the coordinate data that represents the objects from the candidate coordinate data.
(32) Continuing the above description, the reflective element 65, arranged on a side of the touch surface 60, is configured to generate a plurality of virtual objects O.sub.1′ and O.sub.2′ from a plurality of objects O.sub.1 and O.sub.2 on the touch surface 60. The image sensor 622 is configured to detect the objects O.sub.1 and O.sub.2 and the virtual objects O.sub.1′ and O.sub.2′ to generate a picture that may comprise a plurality of object images and a plurality of virtual object images. Through analyzing the picture generated by the image sensor 622, the processing unit 63 can determine sensing paths S.sub.61 and S.sub.62 that are blocked by the objects O.sub.1 and O.sub.2, respectively, and sensing paths S.sub.63 and S.sub.64 which seem to be blocked by the objects O.sub.1′ and O.sub.2′, respectively. The sensing paths S.sub.63 and S.sub.64 can be viewed as virtual sensing paths S.sub.63′ and S.sub.64′ that are generated by a virtual image sensor 622′ and are blocked by the objects O.sub.1 and O.sub.2. In this way, the present embodiment can be considered to be the same as the above-mentioned embodiment with two image sensors. Therefore, by calculating intersecting points of the sensing paths S.sub.61 and S.sub.62 and sensing paths S.sub.63′ and S.sub.64′, candidate coordinate data P.sub.613, P.sub.614, P.sub.623 and P.sub.624 can be obtained. According to the intensity I.sub.y of light transmitted to the Y-axial light receiving elements 612, the Y-axial light receiving elements 612 will generate a one-dimensional signal distribution. 
Based on the one-dimensional signal distribution generated by the Y-axial light-receiving elements 612, the processing unit obtains a distribution width (|y.sub.6−y.sub.4|) of the objects O.sub.1 and O.sub.2 in the Y-axial direction, or coordinate data (y.sub.4 and y.sub.6) of the objects O.sub.1 and O.sub.2 in the Y-axial direction, whereby the processing unit selects the coordinate data at P.sub.613 and P.sub.624 to be the coordinate data of the objects O.sub.1 and O.sub.2. According to a different embodiment, three objects may also be detected and/or the number of image sensors may be increased. Details for these embodiments have been provided above and are omitted here.
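The virtual image sensor 622′ is the mirror image of the real sensor across the reflective element. Assuming, purely for illustration, that the reflective element 65 lies along the horizontal line y = m, the virtual sensor position and the reflected path directions can be sketched as:

```python
def mirror_across_line(point, m):
    # Reflect a point across the horizontal line y = m; applied to
    # the image sensor 622 this yields the virtual sensor 622'.
    x, y = point
    return (x, 2 * m - y)

def mirror_direction(d):
    # A sensing-path direction reflects across a horizontal mirror
    # by negating its Y component, giving the virtual paths S63', S64'.
    return (d[0], -d[1])

# Hypothetical sensor at the origin, mirror along y = 5.
virtual_sensor = mirror_across_line((0.0, 0.0), m=5.0)
```

The candidate coordinate data P.sub.613, P.sub.614, P.sub.623 and P.sub.624 then follow from intersecting the real paths with the reflected ones, exactly as in the two-sensor embodiment.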
(33) According to an embodiment, the optical touch system 6 further includes a memory unit 64, configured to store a program to be executed by the processing unit 63 or to store data required for the processing unit 63 to execute the program, such as the received pictures generated by the image sensor 622 and the one-dimensional signal distribution generated by the light-receiving elements 612.
(34) In the foregoing embodiment, the light receiving elements that have detected objects will generate signals that are lower than other light receiving elements. However, the present invention is not limited to be implemented in this manner. The light receiving elements may be designed to receive light reflected from the objects; and therefore, the light receiving elements that have detected the objects will generate signals that are higher than other light receiving elements.
(36) According to an embodiment, step S808 includes obtaining a first distribution width, based on the detection data of the first light-receiving elements, and selecting the coordinate data that represents the objects, based on the first distribution width. According to an embodiment, the candidate coordinate data includes a plurality of groups of candidate coordinate data, and for each group of candidate coordinate data, a second distribution width is calculated and compared with the first distribution width. According to an embodiment, the first distribution width is a distance between the farthest two of the first light-receiving elements that have detected the objects.
(37) According to a different embodiment, step S808 includes determining coordinate data of the objects in the first direction, based on the detection data, and selecting the coordinate data that represents the objects, based on the determined coordinate data.
(38) To summarize the foregoing embodiments, the optical touch system and method of the present invention use image sensors to obtain candidate coordinate data of a plurality of objects over a touch surface and then employ a light receiving unit arranged along a side of the touch surface to assist in selecting coordinate data of the objects from the candidate coordinate data, thereby achieving more accurate positioning of touch positions of the plurality of objects using a scattering light source instead of a narrow beam light source.
(39) Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.