Interactive input system and method
09582119 · 2017-02-28
Assignee
Inventors
- Clinton Lam (Calgary, CA)
- David Popovich (Calgary, CA)
- Robbie Rattray (Calgary, CA)
- Grant McGibney (Calgary, CA)
CPC classification
F21V5/04
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
G06F3/0425
PHYSICS
G06F3/0418
PHYSICS
G06F3/0354
PHYSICS
G03B21/134
PHYSICS
G06F3/0428
PHYSICS
International classification
G03B21/134
PHYSICS
G06F3/0354
PHYSICS
H04N9/31
ELECTRICITY
F21V5/04
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F21V21/14
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
G06F3/041
PHYSICS
Abstract
An image capture method comprises generating a synchronization signal based on modulated illumination; and using the synchronization signal to synchronize image frame capture of at least one image sensor with the illumination timing of an active pointer within a region of interest in the field of view of the at least one image sensor.
Claims
1. An image capture method comprising: generating a synchronization signal based on a modulated illumination signal; using said synchronization signal to synchronize: (1) illumination timing of an illumination assembly emitting a light curtain over a region of interest, (2) image frame capture by at least one image sensor of the region of interest, and (3) illumination timing of an illumination source of an active pointer positioned within the region of interest such that at least two image frames are captured including: a first image frame being captured when said illumination assembly is on, and a second image frame being captured when the illumination source of said active pointer is on and said illumination assembly is off, wherein the first image frame and the second image frame are successive image frames; processing the at least two image frames captured to: identify a passive pointer when there is a bright region in the first image frame and there is no bright region in the second image frame and determine a location of the passive pointer within the region of interest of the first image frame; and processing the at least two image frames captured to: identify said active pointer when there is no bright region in the first image frame and there is a bright region in the second image frame and determine the location of the active pointer within the region of interest of the second image frame.
2. The method of claim 1, further comprising generating the modulated illumination signal using the illumination source of the active pointer when the active pointer is brought into said region of interest and processing the modulated illumination signal to generate said synchronization signal.
3. The method of claim 2, wherein said modulated illumination signal comprises a carrier signal modulated by periodic signals generated by the illumination source of said active pointer, the periodic signals being generated at a rate to which the frame rate of the at least one image sensor is adjusted.
4. The method of claim 3, wherein the carrier signal has a frequency different from frequencies of signals emitted by conventional consumer electronic device infrared emitters.
5. The method of claim 2, wherein the modulated illumination signal further comprises active pen tool attribute information, the attribute information being associated with the determined location of the active pointer.
6. The method of claim 1, wherein image frames captured when said illumination assembly is on have a longer integration time than image frames captured when said illumination assembly is off.
7. The method of claim 1, wherein the synchronization signal is based on the frequency at which the illumination assembly that emits the light curtain over the region of interest is switched on and off.
8. The method of claim 7, further comprising modulating the light curtain that is emitted over said region of interest by said illumination assembly.
9. The method of claim 8, wherein the active pointer detects the modulated light curtain and the illumination source of the active pointer emits illumination in an on and off pattern that is timed so that the illumination source of the active pointer emits illumination when the modulated light curtain is off.
10. The method of claim 7, further comprising modulating the light curtain that is emitted over said region of interest by said illumination assembly, following each on phase of said light curtain, conditioning said light curtain to a low intensity state before turning said light curtain off.
11. The method of claim 10, wherein the active pointer detects the modulated light curtain in the low intensity condition and the illumination source of the active pointer emits illumination in an on and off pattern that is timed so that the illumination source of the active pointer emits illumination when the modulated light curtain is off.
12. The method of claim 1, further comprising: orienting a position of a light curtain emitted by an illumination assembly so that the light curtain is parallel to the region of interest on a display surface illuminated by the illumination assembly.
13. The method of claim 1, further comprising: upon determining that an intensity and a size of the bright region in the second image frame are greater than an intensity and a size of the bright region in the first image frame, identifying that said active pointer is hovering over a display surface when there is a bright region in the first image frame and there is a bright region in the second image frame, wherein the display surface is illuminated by the illumination assembly.
14. The method of claim 1, further comprising: upon determining that an intensity and a size of the bright region in the second image frame are lesser than an intensity and a size of the bright region in the first image frame, identifying that said active pointer is in contact with a display surface when there is a bright region in the first image frame and there is a bright region in the second image frame, wherein the display surface is illuminated by the illumination assembly.
15. An interactive input system comprising: at least one image sensor having a field of view aimed at a region of interest; an illumination assembly configured to emit a light curtain over said region of interest; and processing structure configured to: process image frames captured by said at least one image sensor and determine locations of one or more pointers brought into said region of interest, wherein illumination timing of the illumination assembly, image frame capture of said at least one image sensor and illumination timing of an illumination source of an active pointer brought into said region of interest are synchronized using a synchronization signal based on the modulated illumination signal such that at least two image frames are captured including: a first image frame being captured when said illumination assembly is on, and a second image frame being captured when the illumination source of said active pointer is on and said illumination assembly is off, wherein the first image frame and the second image frame are successive image frames; process the at least two image frames captured to: identify a passive pointer when there is a bright region in the first image frame and there is no bright region in the second image frame and determine a location of the passive pointer within the region of interest of the first image frame; and process the at least two image frames captured to: identify said active pointer when there is no bright region in the first image frame and there is a bright region in the second image frame and determine the location of the active pointer within the region of interest of the second image frame.
16. The interactive input system of claim 15, wherein said active pointer is configured to generate the modulated illumination signal when the active pointer is brought into said region of interest and wherein said processing structure is configured to process the modulated illumination signal to generate said synchronization signal.
17. The interactive input system of claim 16, wherein said modulated illumination signal comprises a carrier signal modulated by periodic signals generated by the illumination source of said active pointer, the periodic signals being generated at a rate used to adjust the frame rate of the at least one image sensor.
18. The interactive input system of claim 17, wherein the carrier signal has a frequency different from frequencies of signals emitted by conventional consumer electronic device infrared emitters.
19. The interactive input system of claim 16, wherein the modulated illumination signal further comprises active pen tool attribute information, the attribute information being associated with the determined location of the active pointer.
20. The interactive input system of claim 15, wherein said at least one image sensor is controlled so that image frames captured when said illumination assembly is on have a longer integration time than image frames captured when said illumination assembly is off.
21. The interactive input system of claim 15, wherein said illumination assembly is configured to modulate the light curtain that is emitted over said region of interest.
22. The interactive input system of claim 21, wherein the active pointer is configured to detect the modulated light curtain and the illumination source of the active pointer is configured to emit illumination in an on and off pattern that is timed so that the illumination source of the active pointer emits illumination when the modulated light curtain is off.
23. The interactive input system of claim 15, wherein said illumination assembly is configured to modulate the light curtain that is emitted over said region of interest, following each on phase of said light curtain, the light curtain being conditioned to a low intensity state before being turned off.
24. The interactive input system of claim 23, wherein the active pointer is configured to detect the modulated light curtain in the low intensity condition and the illumination source of the active pointer is configured to emit illumination in an on and off pattern that is timed so that the illumination source of the active pointer emits illumination when the modulated light curtain is off.
25. The interactive input system of claim 15, wherein the processing structure is further configured to: orient a position of the light curtain emitted by the illumination assembly so that the light curtain intersects the region of interest on a display surface illuminated by the illumination assembly.
26. The interactive input system of claim 15, wherein the processing structure is further configured to: upon determining that an intensity and a size of the bright region in the second image frame are greater than an intensity and a size of the bright region in the first image frame, identify that said active pointer is hovering over a display surface when there is a bright region in the first image frame and there is a bright region in the second image frame, wherein the display surface is illuminated by the illumination assembly.
27. The interactive input system of claim 15, wherein the processing structure is further configured to: upon determining that an intensity and a size of the bright region in the second image frame are lesser than an intensity and a size of the bright region in the first image frame, identify that said active pointer is in contact with a display surface when there is a bright region in the first image frame and there is a bright region in the second image frame, wherein the display surface is illuminated by the illumination assembly.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Embodiments will now be described more fully with reference to the accompanying drawings in which:
DETAILED DESCRIPTION OF THE EMBODIMENTS
(71) Turning now to
(72) The interactive projector 112 comprises a housing 120 that accommodates three main modules, namely a projection module 122, an imaging module 124 and a touch processing module 126 as shown in
(73) In this embodiment, the projection module 122 comprises an audio power amplifier and speaker subsystem 130, a touch processing module power subsystem 132 and an image projection subsystem 134. Power for the interactive projector 112 is supplied by a power cable 136 that runs through the support assembly 102 and connects the projection module 122 to an AC mains or other suitable power supply. The projection module 122 also comprises a plurality of input ports and output ports. In particular, the projection module 122 comprises VGA video and stereo VGA audio ports that receive video and audio data output by the general purpose computing device 114. The image projection subsystem 134 is responsive to video data received from the general purpose computing device 114 and is configured to project the image onto the support surface W within the region of interest 106. The audio power amplifier and speaker subsystem 130 is responsive to audio data received from the general purpose computing device 114 and is configured to broadcast audio that accompanies the video image projected onto the support surface W within the region of interest 106. The touch processing module power subsystem 132 provides power to the touch processing module 126 and to the illumination assembly 150.
(74) The general purpose computing device 114 is also connected to a USB pass-through port 138 of the projection module 122 that allows the general purpose computing device 114 to communicate with the touch processing module 126. The projection module 122 further comprises microphone in, composite video and stereo audio, HDMI, USB service and RS-232 input ports as well as audio, VGA, ECP power and ECP control output ports.
(75) The imaging module 124 in this embodiment comprises an image sensor (not shown) having a resolution of 752×480 pixels, such as that manufactured by Micron under model No. MT9V034 fitted with an optical imaging lens. The lens of the image sensor has an IR-pass/visible light blocking filter thereon and provides the image sensor with a 118 degree field of view so that the field of view of the image sensor at least encompasses the entire region of interest 106. As a result, the field of view of the image sensor covers an area ranging from 67 inches up to 100 inches diagonal in any of 16:9, 16:10 or 4:3 aspect ratios. In this manner, any pointer such as a user's finger F, a passive pen tool, an active pen tool or other suitable object that is brought into the region of interest 106 in proximity with the display surface appears in the field of view of the image sensor and thus, is captured in image frames acquired by the image sensor.
(76) Turning now to
(77) The general purpose computing device 114 in this embodiment is a personal computer or other suitable processing device comprising, for example, a processing unit, system memory (volatile and/or non-volatile memory), other non-removable or removable memory (e.g. a hard disk drive, RAM, ROM, EEPROM, CD-ROM, DVD, flash memory, etc.) and a system bus coupling the various computing device components to the processing unit. The general purpose computing device 114 may also comprise networking capabilities using Ethernet, WiFi, and/or other network formats, to enable access to shared or remote drives, one or more networked computers, or other networked devices.
(78) Turning now to
(79) During operation, when the illumination units 152, 154 and 156 are powered, the IR laser diodes 162 emit infrared illumination that travels down confined paths defined by the focusing barrels 166. The infrared illumination exiting the focusing barrels 166 is focused onto the cylindrical collimating lens 170 by the focusing lenses 168. The cylindrical collimating lens 170 in turn emits the fan-shaped sheet of IR illumination or light curtain LC over the entire region of interest 106.
(80) Each illumination unit 152, 154 and 156 is responsible for providing the IR illumination for an associated sector of the light curtain LC. The circumferential spacing of the illumination units 152, 154 and 156 and the configuration of the cylindrical collimating lens 170 are selected so that adjacent sectors overlap. As can be seen in
(81) Turning now to
(82) Each adjustment mechanism 200 and 202 comprises a spindle 208 that is affixed to its respective adjustment knob 190 or 192 and that passes through a washer 210 and a passage in the front face plate 182. The distal end of the spindle 208 threadably engages a threaded hole in the back plate 198. A coil spring 212 surrounds the spindle 208 and bears against the front face plate 182 and the back plate 198. Rotation of an adjustment knob 190, 192 in one direction imparts rotation of the spindle 208 causing the spindle 208 to advance into the threaded hole in the back plate 198. As the spindle 208 is fixed relative to the front face plate 182, this action results in the back plate 198 being pulled towards the front face plate 182 against the bias of the spring 212. As a result, the illumination assembly 150, which is mounted to the back plate 198, is moved away from the plane of the region of interest 106. Rotation of the adjustment knob 190, 192 in the other direction causes the spindle 208 to retreat from the threaded hole in the back plate 198 resulting in the back plate being pushed away from the front face plate 182. As a result, the illumination assembly 150 is moved towards the plane of the region of interest 106. Thus, by rotating the adjustment knobs 190 and 192, the plane of the light curtain LC can be adjusted so that it is parallel to the plane of the region of interest 106 in the horizontal dimension. The plane of the light curtain LC can also be adjusted to increase its distance from or decrease its distance to the plane of the region of interest 106 by rotating the adjustment knobs 190 and 192.
(83) When the adjustment knob 196 is rotated in one direction, rotation of the adjustment knob causes the adjustment mechanism 204 to tilt the back plate 198 so that the back plate upwardly angles away from the front face plate 182. When the adjustment knob 196 is rotated in the other direction, rotation of the adjustment knob causes the adjustment mechanism 204 to tilt the back plate 198 so that the back plate upwardly angles towards the front plate. Thus, by rotating the adjustment knob 196, the plane of the light curtain LC can be adjusted so that it is parallel to the plane of the region of interest 106 in the vertical dimension.
(84) In this embodiment, the IR receiver 110 comprises a pass filter so that only IR signals on a carrier having a frequency within the limits of the pass filter are detected. The limits of the pass filter are set so that IR signals generated by IR remote controls of conventional consumer electronic devices are blocked thereby to avoid interference from such IR remote controls. The IR receiver 110 communicates with the DSP 140 of the touch processing module 126.
(85) The interactive input system 100 allows a user to interact with the region of interest 106 using both passive pointers and active pointers. As mentioned above, passive pointers may comprise fingers, passive pen tools or other suitable objects.
(86) When the tip 304 of the active pen tool 300 is brought into contact with the support surface W with a force exceeding the activation threshold, the tip switch 306 is triggered. As a result, power from the power source 312 is supplied to the printed circuit board. In response, the microcontroller 308 drives the LEDs 310 causing the LEDs to turn on and provide infrared illumination to the tip 304. During driving of the LEDs 310, the microcontroller 308 pulses supply power to the LEDs causing the LEDs 310 to switch on and off at a rate equal to the frame rate of the image sensor, in this example, 120 frames per second (fps). When the LEDs 310 are turned on, the illumination output by the LEDs 310 is modulated by a carrier having a frequency within the limits of the pass filter of the IR receiver 110.
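The drive scheme described above can be sketched as the logical AND of a frame-rate envelope and a carrier. The 120 fps rate is from the text; the 38 kHz carrier frequency and the 50% duty cycles are illustrative assumptions, since the patent specifies only that the carrier falls within the pass band of the IR receiver 110.

```python
# Sketch of the active pen tool's LED drive pattern: the LEDs switch on and
# off at the image sensor frame rate, and during each "on" phase the output
# is chopped by a higher-frequency carrier so the IR receiver can detect it.

FRAME_RATE_HZ = 120    # LED on/off rate, matched to the sensor frame rate
CARRIER_HZ = 38_000    # assumed carrier frequency (not specified in the text)

def led_state(t: float) -> int:
    """Return 1 if the pen tool LEDs emit at time t (seconds), else 0."""
    frame_phase = (t * FRAME_RATE_HZ) % 1.0
    on_phase = frame_phase < 0.5           # LEDs on for first half of each cycle
    carrier_high = (t * CARRIER_HZ) % 1.0 < 0.5
    return int(on_phase and carrier_high)  # emission = envelope AND carrier
```

A frame captured while the envelope is high integrates many carrier cycles, so the pen tip still appears as a solid bright region to the image sensor.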
(87) The operation of the interactive input system 100 will now be described with particular reference to
(88) In particular, when a passive pointer such as a finger is within the region of interest 106 in proximity to the display surface and the illumination assembly 150 is turned on, the finger is illuminated by the light curtain LC and reflects IR illumination. As a result, the illuminated finger appears as a bright region on an otherwise dark background in image frames captured by the image sensor of imaging module 124. When the active pen tool 300 is within the region of interest and brought into contact with the display surface such that the active pen tool 300 is conditioned to emit modulated illumination via its tip 304 and when the illumination assembly 150 is turned off, the active pen tool 300 appears as a bright region on an otherwise dark background in image frames captured by the imaging module 124. The touch processing module 126 receives and processes the captured image frames to detect the coordinates and characteristics of bright regions in the captured image frames, as described in U.S. Patent Application Publication No. 2010/0079385 entitled METHOD FOR CALIBRATING AN INTERACTIVE INPUT SYSTEM AND INTERACTIVE INPUT SYSTEM EXECUTING THE CALIBRATION METHOD to Holmgren et al. and assigned to SMART Technologies ULC, the disclosure of which is incorporated herein by reference in its entirety. The detected coordinates are then mapped to display coordinates and provided to the general purpose computing device 114 via the projection module 122.
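The bright-region detection step above can be illustrated with a minimal threshold-and-centroid sketch. The threshold value and function name are assumptions for illustration; the cited Holmgren et al. publication describes the actual coordinate calibration and mapping performed by the touch processing module 126.

```python
# Minimal sketch: threshold an image frame and compute the centroid of the
# above-threshold pixels; the centroid is what gets mapped to display
# coordinates by the touch processing module.

def bright_region_centroid(frame, threshold=200):
    """frame: 2-D list of pixel intensities. Returns the (row, col) centroid
    of bright pixels, or None if no pixel exceeds the threshold."""
    pts = [(r, c) for r, row in enumerate(frame)
           for c, v in enumerate(row) if v > threshold]
    if not pts:
        return None
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)
```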
(89) In order to yield a strong signal or bright region representing an active pen tool 300 in captured image frames and overcome ambient light interference (i.e. increase the signal to noise ratio and improve the robustness of active pen tool detection), it is desired to synchronize illumination of the active pen tool 300 with the exposure timing of the image sensor. In this embodiment, to achieve such synchronization, the modulated illumination output by the active pen tool 300 is used to generate a synchronization signal that in turn is used to synchronize image sensor exposure timing and illumination assembly switching with the active pen tool illumination as will be described.
(90) When the tip 304 of the active pen tool 300 is illuminated, the IR receiver 110 detects the modulated illumination output by the active pen tool 300 due to the fact that the carrier has a frequency within the limits of its pass filter. The IR receiver 110 removes the carrier from the detected modulated illumination to isolate the periodic IR signals output by the active pen tool 300 at the image sensor frame rate. The IR receiver 110 in turn outputs corresponding modulated signals to the DSP 140. The DSP 140 continually monitors the IR receiver 110 to determine if modulated signals are being output (step 406). If modulated signals are detected by the DSP 140, the DSP 140 terminates generation of the system synchronization signals 434 and in turn generates periodic pen tool synchronization signals 432 (step 408). The DSP 140 in turn conveys the pen tool synchronization signals 432 to the image sensor via its PWM port to synchronize the timing of image frame capture to the on/off switching of the active pen tool modulated illumination and also provides the pen tool synchronization signals 432 to the illumination assembly 150 via its GPIO port to similarly synchronize on/off switching of the illumination assembly 150 (step 410). As will be appreciated, the switching of the illumination assembly 150 is controlled such that the light curtain LC is turned off when the LEDs 310 of the active pen tool 300 are powered, and the light curtain LC is turned on when the LEDs 310 of the active pen tool 300 are turned off.
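The switching between system and pen tool synchronization (steps 406 through 416) amounts to a small state machine: pen-tool-derived sync while the IR receiver reports modulated signals, falling back to internally generated system sync after a timeout. The class shape and the 0.5 s timeout are illustrative assumptions; the patent states only that a threshold period of time is used.

```python
# Sketch of the DSP's sync-source selection: "pen_tool" while modulated
# signals are being detected, reverting to "system" after a quiet period.

class SyncController:
    def __init__(self, timeout: float = 0.5):
        self.timeout = timeout     # assumed quiet period before fallback
        self.last_seen = None      # time the IR receiver last reported a signal
        self.source = "system"     # "system" or "pen_tool"

    def poll(self, now: float, ir_signal_present: bool) -> str:
        if ir_signal_present:
            self.last_seen = now
            self.source = "pen_tool"   # step 408: generate pen tool sync
        elif self.last_seen is None or now - self.last_seen > self.timeout:
            self.source = "system"     # step 416: regenerate system sync
        return self.source
```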
(91) The image frames that are captured by the image sensor of the imaging module 124 are conveyed to the DSP 140 and processed in the manner described above. Image frames captured by the image sensor while the illumination assembly 150 is turned off are processed by the DSP 140 to detect the bright region therein corresponding to the illuminated pen tool tip 304 (step 412). Image frames captured by the image sensor while the illumination assembly 150 is turned on are processed by the DSP 140 to detect a bright region therein corresponding to a finger or other passive pointer within the region of interest 106 and proximate to the display surface that is illuminated by the light curtain LC (step 414).
(92) As mentioned above, the DSP 140 continually monitors the IR receiver 110 to determine if it is outputting modulated signals (step 406). If the DSP 140 does not detect modulated signals for a threshold period of time, the DSP 140 terminates the pen tool synchronization signals 432 and regenerates the system synchronization signals 434, which are then used by the DSP 140 to control the timing of image frame capture and illumination assembly switching in the manner described above (step 416). In this case, only image frames captured by the image sensor while the illumination assembly 150 is turned on are processed by the DSP 140 to detect a bright region therein corresponding to a finger or other passive pointer within the region of interest 106 and proximate to the display surface that is illuminated by the light curtain LC (step 418).
(93) Following step 404, if the DSP 140 does not detect modulated signals, the process proceeds to step 418 so that the DSP 140 only processes image frames captured by the image sensor while the illumination assembly 150 is turned on.
(95) If desired, the modulated illumination output by the active pen tool 300 can be embedded with additional codes, data or other information representing active pen tool attribute information such as pen tool color and/or pen tool function information to allow the interactive input system 100 to support different functions such as left click, right click and erase, etc.
(96) As will be appreciated, the above methodology provides advantages in that misidentification of the active pen tool 300 as a finger or other passive pointer before contact with the display surface can be avoided since the light curtain LC is not turned on during active pen tool detection. In addition, the image sensor, illumination assembly 150 and active pen tool 300 can be configured to have a long integration time in the finger detection mode (i.e. when the illumination assembly 150 is turned on) and a short exposure time in the active pen tool detection mode (i.e. when the illumination assembly 150 is turned off). In this manner, the interactive input system 100 will maximize the intensity of the pointer in captured image frames and eliminate ambient light interference as much as possible.
(97) The methodology described above supports detection of a single active pen tool 300 within the region of interest 106. In certain environments, the ability to detect multiple active pen tools is desired. An embodiment of the interactive input system 100 that provides this functionality will now be described with particular reference to
(98) In this embodiment, rather than using the modulated illumination output by the active pen tool 300 to generate a synchronization signal that is used to synchronize image sensor exposure timing and illumination assembly switching with the active pen tool illumination, the light curtain LC is modulated. The IR receiver 110 in this case is not used. Instead, the IR receiver 314 in the active pen tool 300 is used.
(99) Similar to the previous embodiment, with the interactive input system 100 powered, the general purpose computing device 114 provides video data and accompanying audio data, if any, to the projection module 122 of the interactive projector 112. The image projection subsystem 134 in turn projects an image onto the display surface. If accompanying audio data is received, the audio power amplifier and speaker subsystem 130 broadcasts the accompanying audio. At the same time, the DSP 140 of the touch processing module 126 generates periodic system synchronization signals 530 (step 500) and outputs the system synchronization signals to the illumination assembly 150 via its GPIO port. In response to the system synchronization signals 530, the illumination assembly 150 is driven in a manner that results in the light curtain LC being turned on and off periodically (step 502). When the illumination assembly 150 is turned on, the DSP 140 signals the control boards 164 of the illumination units to modulate the illumination emitted by the IR laser diodes 162. As a result, the IR illumination of the light curtain LC is modulated by a carrier having a frequency significantly different than the typical frequencies of conventional IR remote controls. The frequency of the carrier is also sufficiently high such that when the illumination assembly 150 is turned on, the light curtain LC appears continuously on to the image sensor.
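The "appears continuously on" condition can be checked with back-of-envelope arithmetic: the carrier period must be far shorter than the sensor exposure, so each frame integrates many carrier cycles and the chopping is invisible. The 100 kHz carrier and half-cycle exposure below are illustrative assumptions; only the 120 fps frame rate comes from the text.

```python
# Rough check that a fast carrier is invisible to the image sensor: count
# how many carrier cycles are integrated within one frame exposure.

FRAME_RATE_HZ = 120
CARRIER_HZ = 100_000   # assumed carrier frequency

exposure_s = 1.0 / (2 * FRAME_RATE_HZ)         # assumed half-cycle exposure
cycles_per_exposure = CARRIER_HZ * exposure_s  # carrier cycles per frame
```

With these numbers each exposure spans roughly 417 carrier cycles, so the frame-to-frame intensity ripple from the chopping is negligible.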
(100) The DSP 140 also outputs the system synchronization signals 530 to the image sensor of the imaging module 124 via its PWM port. In response to the system synchronization signals, the image sensor is conditioned to capture image frames in synchronization with the on/off switching of the illumination assembly 150 (step 504). Again, for each operation cycle of the image sensor, the image sensor is conditioned to capture a pair of image frames. The first image frame is captured with the illumination assembly 150 turned on and the second image frame is captured with the illumination assembly 150 turned off.
(101) When a passive pointer such as a finger is within the region of interest 106 and proximate to the display surface and the illumination assembly 150 is turned on, the finger is illuminated by the light curtain LC and reflects IR illumination. As a result, the illuminated finger appears as a bright region on an otherwise dark background in captured image frames. When the active pen tool 300 is brought into proximity of the region of interest 106, the IR receiver 314 adjacent the tip 304 detects the modulated light curtain LC. In response, the IR receiver 314 activates the microcontroller 308 and generates signals 532 that are synchronized with the operation cycle of the image sensor. When the tip 304 of the active pen tool 300 is brought into contact with the display surface with a force above the threshold activation force, the microcontroller 308 uses the signals 532 so that the LEDs 310 are powered only when the light curtain LC is turned off so that the illuminated tip 304 of the active pen tool 300 appears as a bright region in captured image frames.
(102) For each pair of captured image frames, the first image frame that is captured while the illumination assembly 150 is turned on is processed by the DSP 140 to determine if a bright region exists therein representing a pointer (step 506). If so, the bright region is identified as a finger (step 508) and the location of the finger is determined in the manner described previously (step 510). If no bright region is detected in the first image frame at step 506, the second image frame is processed by the DSP 140 to determine if a bright region exists therein representing the active pen tool 300 (step 512). If so, the bright region is identified as the active pen tool (step 514) and the location of the active pen tool 300 is determined in the manner described previously (step 510).
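The per-pair decision logic of steps 506 through 514 can be sketched directly: the first frame of each pair is captured with the light curtain on and the second with it off, and the pointer type follows from which frame contains a bright region. The function name and labels are illustrative.

```python
# Sketch of steps 506-514: classify the pointer from a pair of frames,
# where the first frame is captured with the light curtain on and the
# second with the light curtain off.

def classify_pair(bright_in_first: bool, bright_in_second: bool) -> str:
    if bright_in_first:
        return "finger"        # lit by the light curtain (step 508)
    if bright_in_second:
        return "active_pen"    # self-illuminated with the curtain off (step 514)
    return "none"              # no pointer in the region of interest
```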
(103) Although the time sequences in
(104) Referring now to
(105) The DSP 140 also outputs the system synchronization signals 632 to the image sensor of the imaging module 124 via its PWM port. In response to the system synchronization signals 632, the image sensor is conditioned to capture image frames (step 604). During image frame capture, the exposure time of the image sensor is the same as the duration of the on phase of the light curtain LC. For each operation cycle of the image sensor, the image sensor is conditioned to capture a pair of image frames. The first image frame is captured with the illumination assembly 150 turned on and the second image frame is captured with the illumination assembly 150 turned off. The shortened image sensor exposure allows each image frame to be processed by the DSP 140 before the next image frame is captured.
(106) When a passive pointer such as a finger is within the region of interest 106 and proximate to the display surface and the illumination assembly 150 is turned on, the finger is illuminated by the light curtain LC and reflects IR illumination. As a result, the illuminated finger appears as a bright region on an otherwise dark background in captured image frames. When the active pen tool 300 is brought into the region of interest 106, the IR receiver 314 adjacent the tip 304 detects the modulated low intensity IR illumination. In response, the IR receiver 314 activates the microcontroller 308. The microcontroller 308 in turn powers the LEDs 310 when the light curtain LC is turned off so that the illuminated tip 304 of the active pen tool 300 appears as a bright region in captured image frames allowing active pen tool hover to be detected. When the tip 304 of the active pen tool 300 is brought into contact with the display surface with a force above the threshold activation force, the microcontroller 308 powers the LEDs 310 irrespective of whether the light curtain LC is turned on or off.
(107) For each pair of captured image frames, the first image frame is processed by the DSP 140 to determine if a bright region exists therein representing a pointer (step 606). If no bright region is detected in the first image frame, the second image frame is processed by the DSP 140 to determine if a bright region exists therein representing a pointer (step 608). If so, the bright region is identified as an active pen tool 300 that is approaching the display surface but has not yet contacted the display surface or that is hovering in front of the support surface W (step 610). This scenario is represented by
(108) At step 606, if a bright region is detected in the first image frame, the DSP 140 processes the second image frame to determine if a bright region also exists therein (step 612). If no bright region is detected in the second image frame, the bright region in the first image frame is identified as a finger or other passive pointer (step 614). This scenario is represented by
(109) As will be appreciated, when a bright region exists in only one of the first and second image frames, pointer detection is relatively easy. However, when a bright region exists in both of the first and second image frames, pointer ambiguity may arise. To resolve pointer ambiguity, the intensity and size of the bright regions in the first and second image frames are examined as will now be described. At step 612, if a bright region also exists in the second image frame, the DSP 140 compares the intensity and size of the bright regions in the first and second image frames to determine if the bright region in the second image frame is brighter and bigger than that in the first image frame (step 618). If so, the bright region is identified as an active pen tool 300 that is hovering over the display surface and its location is determined in the manner previously described. This scenario is represented by
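The ambiguity rule of step 618 can be sketched as a simple comparison. The region descriptors (`peak` intensity, `area` in pixels) are hypothetical stand-ins for whatever measures the DSP 140 actually extracts from the bright regions.

```python
# When a bright region appears in BOTH frames of a pair, a brighter and
# bigger region in the second (curtain-off) frame indicates a hovering
# active pen tool; otherwise the region is treated as a passive pointer.

def resolve_ambiguity(region_on, region_off):
    """Each region is a dict with 'peak' intensity and 'area' in pixels."""
    if (region_off["peak"] > region_on["peak"]
            and region_off["area"] > region_on["area"]):
        return "active pen tool (hover)"
    return "passive pointer"

print(resolve_ambiguity({"peak": 180, "area": 40},
                        {"peak": 240, "area": 90}))  # active pen tool (hover)
```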
(110) Although the illumination assembly 150 is described as emitting modulated low intensity IR illumination following each on phase of the light curtain, it will be appreciated that the emission of modulated low intensity IR illumination can precede each on phase of the light curtain. Also, although the size and intensity of bright regions in a pair of image frames are compared, when an active pen tool is proximate the light curtain LC, to distinguish between pen tool hover and pen tool contact conditions, alternatives are available. For example, the tip of the active pen tool 300 may be coated with an IR anti-reflection material such that the active pen tool does not reflect IR illumination.
(111)
(112) As described above, the illumination assembly 150 emits a fan-shaped sheet of IR illumination over the region of interest 106 to facilitate detection of passive pointers brought into the region of interest 106. Since the emitted IR illumination is not visible to human eyes, during installation it can be difficult for a user to use the adjustable support 108 to bring the light curtain LC to its desired position substantially parallel to the plane of the region of interest 106. Methods for adjusting the position of the light curtain LC to bring it substantially to its desired position will now be described.
(113) Following installation, the initial position of the light curtain LC is typically not known. To determine the position of the light curtain LC, in one embodiment the adjustment knobs 190, 192 and 196 are used to orient the position of the light curtain LC such that the light curtain LC intersects the region of interest 106 and impinges on the display surface. When the light curtain LC impinges on the display surface, IR illumination is reflected back towards the adjustable support 108 and appears in image frames captured by the imaging module 124 as a line referred to hereinafter as an intersection line. One full rotation of each adjustment knob 190, 192 and 196 will cause the light curtain LC to tilt at a known angle.
(114) The captured image frames are processed by the touch processing module 126 to determine the position of the light curtain LC. The position of the light curtain LC is then compared to the desired position of the light curtain LC, and the amount of adjustment to be made is calculated. As will be described below, at least three captured image frames are required to accurately calculate the amount of adjustment to be made. The user is then prompted to adjust each of the adjustment knobs 190, 192 and 196 until the light curtain LC is positioned at the desired position. In this embodiment, the user is prompted through an image projected on the display surface by the projection module 122. It will be appreciated that the user may be prompted through other types of feedback such as for example, audio feedback through use of the speaker subsystem 130.
(115) As shown in
(116) The plane of the light curtain LC is designated as X-Y. The rotation angle about the X-axis is designated as θ.sub.x and the rotation angle about the Y-axis is designated as θ.sub.y. As will be appreciated, once the values of angles θ.sub.x and θ.sub.y are determined, the interactive input system 100 is able to prompt the user how to adjust the adjustment knobs 190, 192 and 196 such that the X-Y plane of the light curtain LC is parallel to the X-Y plane of the region of interest 106. The values of angles θ.sub.x and θ.sub.y are determined using an iterative analysis, as will be described.
(117) In this embodiment, one full rotation of each of the adjustment knobs 190 and 192 will cause the light curtain LC to tilt a total of 1.5 degrees about the Y-axis. One full rotation of adjustment knob 196 will cause the light curtain LC to tilt a total of 1.5 degrees about the X-axis. It will be appreciated by those of skill in the art that the adjustment knobs may be calibrated to cause different amounts of tilt when rotated. For example, one full rotation of an adjustment knob may cause the light curtain LC to tilt a total of 0.75 degrees or 3.0 degrees about a respective axis.
(118) Turning now to
(119) Turning now to
(120) The values of the angles θ.sub.x and θ.sub.y are calculated by fitting data of the intersection lines to solve a convex optimization problem as shown in
(121) Two positions of the Y-axis are calculated by rotating the light curtain LC about the X-axis by angle θ.sub.x and angle θ.sub.x+θ.sub.2, wherein angle θ.sub.2 is obtained at step 408 above (step 4120). Three rotation matrices are calculated about the Y-axis to represent three possible orientations of the light curtain LC that result in the generation of the three intersection lines (step 4122). The first possible orientation is represented by the light curtain LC rotated about the first position of the Y-axis at angle θ.sub.y. The second possible orientation is represented by the light curtain LC rotated about the first position of the Y-axis at angle θ.sub.y+θ.sub.1. The third possible orientation is represented by the light curtain LC rotated about the second position of the Y-axis at angle θ.sub.y+θ.sub.1. Three final rotation matrices are calculated by combining the rotation matrices calculated in step 4122 with a rotation matrix calculated about the X-axis at the angle θ.sub.x and the angle θ.sub.x+θ.sub.2 (step 4124). The direction of each estimated intersection line resulting from the intersection of the light curtain LC and the display surface is calculated using the rotation matrices obtained in step 4124 (step 4126). The equation of each estimated intersection line is calculated using an arbitrary point from the captured image frames and the direction of the estimated intersection line obtained in step 4126 (step 4128). The error between the intersection lines measured from the captured image frames and the estimated intersection lines is calculated using convex optimization techniques such as that described in the publication entitled Convex Optimization authored by Stephen Boyd and Lieven Vandenberghe and published by Cambridge University Press in 2004 (step 4130). The error is compared to an error threshold and, if the error is greater than the threshold, the method returns to step 4120 using new estimated values for angles θ.sub.x and θ.sub.y (step 4132). Once the error is less than the error threshold, the values of angles θ.sub.x and θ.sub.y are determined and the method continues to step 4134.
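The iterative loop of steps 4120 to 4132 can be sketched schematically. The quadratic error function and the gradient-descent refinement below are placeholders: the patent scores each candidate angle pair by comparing measured and estimated intersection lines using convex optimization techniques.

```python
# Propose a pair of angles, score the mismatch between measured and
# estimated intersection lines, refine, and stop when the error is small.

def line_error(theta_x, theta_y):
    # Placeholder error with a known minimum at (1.5, 5.0) degrees; a real
    # implementation compares the estimated intersection-line equations
    # against the lines measured from the captured image frames.
    return (theta_x - 1.5) ** 2 + (theta_y - 5.0) ** 2

def estimate_angles(lr=0.1, iters=200, eps=1e-5):
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        # Numerical gradient of the error with respect to each angle.
        gx = (line_error(tx + eps, ty) - line_error(tx - eps, ty)) / (2 * eps)
        gy = (line_error(tx, ty + eps) - line_error(tx, ty - eps)) / (2 * eps)
        tx, ty = tx - lr * gx, ty - lr * gy
    return tx, ty

tx, ty = estimate_angles()
print(round(tx, 3), round(ty, 3))  # 1.5 5.0
```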
(122) Once the values of angles θ.sub.x and θ.sub.y are determined, the coordinates (X.sub.c, Y.sub.c, Z.sub.c) of the pivot point O of the light curtain LC are calculated according to the method shown in
(123) Using the values of angles θ.sub.x and θ.sub.y and the coordinates (X.sub.c, Y.sub.c, Z.sub.c) of the pivot point O of the light curtain LC, the adjustment required to adjust the light curtain LC to its desired position is calculated. As will be appreciated, once angles θ.sub.x and θ.sub.y are determined, the light curtain LC is adjusted by the negative of angle θ.sub.x+θ.sub.2 and the negative of angle θ.sub.y+θ.sub.1. Once the required adjustment is calculated, the amount of adjustment required is communicated to the user (step 414).
(124)
(125)
(126)
(127) The final position of the light curtain LC is shown in
(128) Three light curtains LC were installed at different locations and at different angles and the method for adjusting the position of the light curtain LC described above was tested. The test results are shown in Table 1 below.
(129) TABLE 1: Test results for three initial light curtain LC positions

               Group 1              Group 2              Group 3
           True     Estimated   True     Estimated   True     Estimated
X          936      935.7085    936      931.0489    803      803.0450
Y          150      150.3607    150      146.3855    177      177.8764
Z          25       25.0369     25       25.3362     37       37.0135
θ.sub.x    1.5      1.4999      2        2.0008      1.378    1.3772
θ.sub.y    5        4.9983      0.5      0.5068      4.5304   4.5307
(130) In Table 1, the true value represents the actual pivot point of the illumination assembly 150. During each test, the light curtain LC was oriented to impinge on the display surface, thereby generating the three intersection lines. After completion of the iterative analysis using the above-described method, the estimated values were computed based on the captured image frames of the three intersection lines. Group 2 used the same illumination assembly pivot point as Group 1; however, different angles θ.sub.x and θ.sub.y were used. Group 3 used a different illumination assembly pivot point and different angles θ.sub.x and θ.sub.y than Groups 1 and 2. As can be seen, the estimated values obtained are very close to the true values.
(131) Once the position of the light curtain LC has been adjusted as described above, the light curtain LC may need to be finely adjusted. To finely adjust the position of the light curtain LC, a gauge tool 500 is used as shown in
(132) As shown in
din=d1-d2(A/L)  (1)
Dx=(D/B)(L/d2)din  (2)
where L is the length of the middle portion 506, B is the length of the diagonal line 508, A is the distance between the edge of the middle portion 506 and the edge of the diagonal line 508, D is the width of the middle portion 506, d1 is the distance between the inside edge of dot D1 and the middle of dot D2, d2 is the distance between the inside edges of dots D1 and D3, and din is the distance between dot D2 and the left edge of the middle portion 506. Since the distance A between the edge of the middle portion 506 and the edge of the diagonal line 508, the length B of the diagonal line, the width D of the middle portion 506 and the length L of the middle portion are known, the value of the distance Dx between the light curtain LC and the region of interest 106 is easily calculated. Measurements of distance Dx at various locations of the region of interest 106 may be used to calculate the desired fine adjustment of the light curtain LC.
(133) Based on the calculated distance Dx or on visual observation of the image frame, a corresponding adjustment may be performed manually by rotating the adjustment knobs 190, 192 and 196. It will be noted that distance A, length B, width D, length L, and distance Dx are physical measurements expressed in millimeters, while distance d1, distance d2, and distance din are image measurements expressed in pixels.
(134) In this embodiment, the length L of the middle portion 506 is 140 mm, the distance A between the edge of the middle portion 506 and the edge of the diagonal line 508 is 20 mm, the length B of the diagonal line 508 is 100 mm, and the width D of the middle portion 506 is 40 mm. It will be appreciated that other dimensions may be used.
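Equations (1) and (2) can be exercised numerically using the dimensions given in paragraph (134). Note that the form din = d1 - d2·(A/L) is a reconstruction (the extracted text lost the operator), and the pixel measurements d1 and d2 below are made up for illustration.

```python
# Gauge-tool geometry from paragraph (134), in millimetres:
L_mm, A_mm, B_mm, D_mm = 140.0, 20.0, 100.0, 40.0

def curtain_offset(d1_px, d2_px):
    """Distance Dx (mm) between the light curtain and the region of interest."""
    din = d1_px - d2_px * (A_mm / L_mm)          # equation (1), in pixels
    return (D_mm / B_mm) * (L_mm / d2_px) * din  # equation (2), Dx in mm

# Hypothetical pixel measurements from a captured image frame:
print(round(curtain_offset(120.0, 700.0), 3))  # 1.6
```

The factor L/d2 converts pixels to millimetres, and D/B rescales the position along the diagonal line into a depth offset.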
(135)
(136)
(137) Although the gauge tool is described above as being rectangular in shape, those skilled in the art will appreciate that other shapes may be used. For example, the gauge tool may be triangular in shape. Further, the white diagonal line on the top surface of the gauge tool may be replaced with a bright region of other shape or pattern such as for example a triangular shape or a stair-type pattern.
(138) Although embodiments are described wherein the gauge tool is used to finely adjust the position of the light curtain LC based on the calculated value of distance Dx after an initial method for adjusting the position of the light curtain LC has been carried out, those skilled in the art will appreciate that the gauge tool may be used to adjust the position of the light curtain LC without the use of the above-described light curtain adjustment method.
(139) Although the interactive input system is shown as comprising an interactive projector mounted adjacent the distal end of a boom-like support assembly, those of skill in the art will appreciate that alternatives are available. For example,
(140) Although methodologies and gauge tools for adjusting the position of the light curtain so that it is substantially parallel with the display surface have been described above, alternatives are available. For example,
(141) In use, when the user holds the handle 712 and presses the front plate 704 of the gauge tool 700 against the display surface, with the top surface 710 of the top plate 702 generally facing the imaging module 124, the contact switch 716 closes, which consequently results in the first and second indicators 720a and 720b being turned on. With the gauge tool 700 positioned in this manner, the light curtain LC impinges on the top plate 702. The top plate 702 of the gauge tool 700 in turn reflects the light curtain LC allowing the imaging module 124 to capture an image frame including the reference mark 708.
(142)
(143) When the user applies the gauge tool 700 against the display surface as described above, and the light curtain LC impinges on the top plate 702, the reference mark 708 reflects the light curtain LC. The imaging module 124 captures image frames of the reference mark 708, and transmits the captured image frames to the touch processing module 126.
(144) By virtue of the shape of the first portion 732 of the reference mark 708, the width of the bright band 742 provides an indication of the distance between the display surface and the light curtain LC. By capturing image frames of the reference mark 708 at different positions within the region of interest 106, the interactive input system is generally able to determine whether the light curtain LC is parallel to the display surface by checking whether the width of the bright band 742 of the first portion 732 is constant in captured image frames.
(145) In this embodiment, the touch processing module 126 calculates the distance between the display surface and the light curtain LC, and provides instructions to users regarding how to adjust the light curtain LC so that the plane of the light curtain LC becomes generally parallel to the plane of the display surface. The distance Dz between the display surface and the light curtain LC is calculated from the ratio of the width of the bright band 742 and the distance between bright bands 742 and 744, expressed by the following equations:
R=l.sub.1/l.sub.2  (3)
Dz=D*(Lmax*R-Lmin)/(L-Lmin)  (4)
where l.sub.1 and l.sub.2 are parameters in image pixels measured from the image frame 740, and D, L, Lmin and Lmax are predefined physical measurements (e.g., in millimeters or inches) of the reference mark.
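A sketch of equations (3) and (4). The form Dz = D·(Lmax·R - Lmin)/(L - Lmin) is a reconstruction (the operators were lost in extraction), and all dimensions and pixel measurements below are hypothetical.

```python
def curtain_distance(l1_px, l2_px, D, L, Lmin, Lmax):
    """Distance Dz between the display surface and the light curtain."""
    R = l1_px / l2_px                          # equation (3): band-width ratio
    return D * (Lmax * R - Lmin) / (L - Lmin)  # equation (4), as reconstructed

# Hypothetical reference-mark dimensions (mm) and pixel measurements:
print(round(curtain_distance(30.0, 60.0, D=6.0, L=100.0, Lmin=20.0, Lmax=120.0), 3))  # 3.0
```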
(146) In addition to calculating the distance Dz between the light curtain LC and the display surface, the interactive input system also detects the location (X, Y) of the gauge tool. After the user applies the gauge tool 700 at various locations within the region of interest 106 and against the display surface, the interactive input system obtains a set of three-dimensional (3D) position data (X, Y, Dz) describing the position of the light curtain LC in the 3D space in front of the region of interest 106, which is used by a light curtain alignment procedure for aligning the light curtain LC, as will now be described below.
(147) The set of 3D position data (X, Y, Dz) is the position data in a 3D coordinate system (X, Y, Z) defined for the region of interest 106. As shown in
(148) As described above with reference to
(149) A point (X′.sub.1, Y′.sub.1, Z′.sub.1) in the (X′, Y′, Z′) coordinate system can be converted to a point (X.sub.1, Y.sub.1, Z.sub.1) in the (X, Y, Z) coordinate system as:
X.sub.1=W·X′.sub.1+X.sub.0  (5)
Y.sub.1=H·Y′.sub.1+Y.sub.0  (6)
where H is the height of the display surface (taken to be the unit of length), W is the width of the display surface, and X.sub.0 and Y.sub.0 are predefined offsets.
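Equations (5) and (6) scale normalized coordinates by the display dimensions and add predefined offsets; a minimal sketch with hypothetical display dimensions and zero offsets (the primed-input reading of the garbled original is this sketch's assumption):

```python
def to_display_coords(x_norm, y_norm, W, H, x0=0.0, y0=0.0):
    # X1 = W * X'1 + X0   (5)
    # Y1 = H * Y'1 + Y0   (6)
    return (W * x_norm + x0, H * y_norm + y0)

print(to_display_coords(0.5, 0.25, W=1600.0, H=1200.0))  # (800.0, 300.0)
```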
(150) The steps of the method for adjusting the position of the light curtain LC are shown in
A·X+B·Y+C·Z+d=0  (7)
(151) Fitting the mathematical model to the set of data points is a least-squares problem such that the coefficients A, B, C are solvable via singular value decomposition (SVD). The coefficients A, B, C are the un-normalized components of a unit vector n perpendicular to the light curtain plane, and d is the distance from the closest point on the light curtain plane to the pivot point origin O. The normalized unit vector n, also referred to as the unit normal, is then calculated from the coefficients A, B, C (step 920), according to:
(152) n=[A, B, C].sup.T/{square root over (A.sup.2+B.sup.2+C.sup.2)}  (8)
where [A, B, C].sup.T represents the transpose of the row vector [A, B, C].
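The plane fit of step 920 can be sketched with numpy. This is a generic SVD-based least-squares plane fit consistent with paragraph (151), not the patent's DSP code; variable names are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Fit A*X + B*Y + C*Z + d = 0 to (N, 3) points; return unit normal n and d."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = -n @ centroid  # from A*X + B*Y + C*Z + d = 0
    return n, d

# Four coplanar points on the plane Z = 1:
pts = np.array([[0, 0, 1.0], [1, 0, 1.0], [0, 1, 1.0], [1, 1, 1.0]])
n, d = fit_plane(pts)
print(np.round(np.abs(n), 6), round(abs(d), 6))  # normal is ±[0, 0, 1], |d| = 1
```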
(153) After the unit normal n is determined, the angle θ between the unit normal and the Z-axis, and the components of the unit normal along the X and Y axes, respectively, are calculated (step 930).
(154) As shown in
cos(θ)=n·k  (9)
and,
sin(θ)={square root over (1-cos(θ).sup.2)}  (10)
where k=[0, 0, 1].sup.T is a unit vector along the Z-axis. The axis of rotation, also a unit vector, is then the vector cross-product:
v=n×k  (11)
which in this case is:
v=[n.sub.y, -n.sub.x, 0].sup.T
where n.sub.x and n.sub.y are the components of the unit normal n along the X-axis and Y-axis, respectively. The axis v and angle θ allow a rotation matrix R to be specified according to:
(155) R.sub.ij=cos(θ)δ.sub.ij+(1-cos(θ))v.sub.iv.sub.j+sin(θ)ε.sub.ijkv.sub.k
where δ.sub.ij is the Kronecker delta symbol, whose value is 1 if i=j and zero otherwise, and ε.sub.ijk is the Levi-Civita symbol, whose value is 1 for a cyclic permutation of the indices i,j,k (e.g., 1,2,3 or 2,3,1), -1 for an acyclic permutation (e.g., 2,1,3), and zero otherwise (e.g., 1,2,1). Rotation matrix R should be an orthogonal matrix, meaning that:
det(R)=1
R.sup.T·R=I.sub.3
(156) where I.sub.3 is the 3×3 identity matrix.
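The axis-angle construction of paragraphs (153) to (155) can be sketched as follows. The exact index formula for R was lost in extraction, so this sketch uses the standard Rodrigues form, with the sign convention chosen so that R.sup.T de-rotates the data (maps n onto the Z-axis), as the surrounding text requires.

```python
import numpy as np

def rotation_to_normal(n):
    """Rotation R whose transpose de-rotates: R.T maps the unit normal n to k."""
    k = np.array([0.0, 0.0, 1.0])
    c = n @ k                       # cos(theta), equation (9)
    s = np.sqrt(1.0 - c ** 2)       # sin(theta), equation (10)
    if s < 1e-12:
        return np.eye(3)            # n already lies along the Z-axis
    v = np.cross(n, k) / s          # unit rotation axis, equation (11)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])  # cross-product matrix of v
    # Rodrigues formula (sign chosen so that R.T rotates n onto k):
    return c * np.eye(3) + (1 - c) * np.outer(v, v) - s * vx

n = np.array([0.014504, -0.017411, 0.999743])  # unit normal from paragraph (176)
n = n / np.linalg.norm(n)
R = rotation_to_normal(n)
print(round(np.linalg.det(R), 6))             # 1.0: det(R) = 1
print(bool(np.allclose(R.T @ R, np.eye(3))))  # True: R is orthogonal
print(bool(np.allclose(R.T @ n, [0, 0, 1])))  # True: R.T de-rotates n
```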
(157) Any data point X=[X, Y, Z].sup.T can be de-rotated by computing:
R.sup.T.Math.X
where
(158)
since the inverse of an orthogonal matrix is just its transpose. From rotation matrix R the rotation angles needed to make the plane of the light curtain LC generally parallel to the plane of the display surface can be obtained. If Q=R.sup.T, then the appropriate angles are:
(159)
where θ.sub.x is the tilt angle and θ.sub.y is the roll angle. The corresponding number of turns of each adjustment knob is calculated (step 940) according to:
(160)
(161) The parameters pitch.sub.tilt and pitch.sub.roll refer to the spindle pitches of the corresponding adjustment knobs, and (d.sub.tilt, d.sub.roll) are the distances through which the turns are applied. To resolve the roll into turns of the left and right adjustment knobs 190 and 192, the following equations are used:
(162)
(163) In the above, d is the shortest distance between the plane of the light curtain LC and the origin O as defined earlier, and t is the target depth, which is the desired distance between the plane of the light curtain LC and the display surface, in this embodiment, 6 mm. Given the correct number of turns for the adjustment knobs, the light curtain LC can be adjusted to this specified depth t in front of the display surface. Once the number of turns of each adjustment knob is known, the adjustment knobs are rotated accordingly to render the plane of the light curtain LC parallel to the plane of the display surface (step 950).
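Converting an angle into knob turns (step 940) might look as follows. The patent's exact formulas behind paragraphs (160) to (162) were lost in extraction, so this assumes a simple model in which each knob advances its spindle by `pitch` millimetres per turn and must move its corner of the assembly through d·tan(angle); this model is purely illustrative.

```python
import math

def knob_turns(angle_deg, d_mm, pitch_mm):
    """Illustrative turns-from-angle model: distance moved / spindle pitch."""
    return d_mm * math.tan(math.radians(angle_deg)) / pitch_mm

# Hypothetical values: 1.5 degree correction, 100 mm lever arm, 0.5 mm pitch.
print(round(knob_turns(1.5, 100.0, 0.5), 4))
```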
(164) During plane fitting, it has been found that some data points may be noisy resulting in numerous outliers (i.e., data points greater than some distance tolerance of the best mathematical model). If these outliers are used during plane fitting, a bias will be introduced into the least-squares estimate.
(165) Random sample consensus is a general technique for obtaining robust fits in a wide range of fitting problems. The main goal is to find a fit that does not contain the influence of any outliers in the data set. Generally, outliers are data points that cannot be described by the mathematical model of interest, namely that mathematical model which appears to describe most of the data points well. Any fit must describe a certain minimum number of data points. In the case of a plane in 3D space, three (X, Y, Z) points are required for model definition while a 2D line requires only two data points for model definition. The minimum number of data points is referred to as the minimum sample set (MSS). Given a model and a MSS, the distance from the model to each data point (X, Y, Z) can be computed. In the case of a plane, this distance is the orthogonal distance from the plane to each data point. By employing RANSAC, any data points that lie beyond some maximum distance T from the MSS are excluded. The RANSAC algorithm selects minimum sample sets at random and looks for the one which maximizes the number of inliers or data points within a distance tolerance T of the best model. This is known as the consensus set. The selection of minimum sample sets does not continue indefinitely, but concludes after a fixed number of trials or iterations, which are computed adaptively as the RANSAC algorithm proceeds. Once the consensus set has been identified, then a regular least-square fit to the inliers is performed.
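The RANSAC loop described in paragraph (165) can be sketched compactly. The adaptive iteration count and the final least-squares refit of the consensus set are omitted for brevity; names, seeds, and parameters are illustrative.

```python
import random
import numpy as np

def plane_from_points(p0, p1, p2):
    """Plane through a 3-point minimum sample set (MSS)."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-12:
        return None                 # collinear sample: reject
    n = n / norm
    return n, -n @ p0               # (unit normal, offset d)

def ransac_plane(points, T=0.05, trials=200, seed=1):
    """Return the consensus set: the inliers of the best sampled plane."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        sample = rng.sample(range(len(points)), 3)
        model = plane_from_points(*points[sample])
        if model is None:
            continue
        n, d = model
        inliers = np.abs(points @ n + d) <= T   # orthogonal distance test
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return points[best]

# Noisy plane z ~ 0 with two gross outliers appended:
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 1, 30), rng.uniform(0, 1, 30),
                       rng.normal(0, 0.01, 30)])
pts = np.vstack([pts, [[0.5, 0.5, 2.0], [0.2, 0.8, -3.0]]])
inliers = ransac_plane(pts)
print(len(inliers))  # the two outliers are excluded from the consensus set
```

A regular least-squares fit (e.g. the SVD fit sketched earlier) would then be applied to the returned inliers.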
(166) The steps, for one trial or iteration, are discussed with reference to the flowchart shown in
(167) An important point to appreciate about RANSAC is that it is non-deterministic, unlike regular least-squares. What this means in practice is that, for a given data set, slightly different results will be obtained if the RANSAC algorithm is run several times in succession. However, the final least-squares fit based on the inliers found by RANSAC should be very similar. This makes the method robust.
(168) Rather than using a specific fixed distance threshold T, an automatic threshold selection may be employed in the light curtain alignment calculation. In this case, an appropriate value for the threshold T is chosen based on the input data set. In this embodiment, five passes are made through a given (X, Y, Z) data set, and each time three random non-collinear points are chosen to define a plane. For all of the points, the plane-to-point distances are found, and their average and standard deviation σ are computed. This gives five values of σ, and the smallest of these values is chosen as the threshold T.
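The five-pass automatic threshold selection can be sketched as follows. Using the standard deviation σ of the plane-to-point distances as the per-pass value is the natural reading of this paragraph, but the symbol was lost in extraction, so it is an assumption here.

```python
import random
import numpy as np

def auto_threshold(points, passes=5, seed=2):
    rng = random.Random(seed)
    sigmas = []
    while len(sigmas) < passes:
        i, j, k = rng.sample(range(len(points)), 3)
        p0, p1, p2 = points[i], points[j], points[k]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                       # collinear sample: redraw
        n = n / np.linalg.norm(n)
        dists = np.abs((points - p0) @ n)  # plane-to-point distances
        sigmas.append(dists.std())         # sigma for this pass
    return min(sigmas)                     # smallest sigma becomes T

# Noisy plane z ~ 0:
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 1, 40), rng.uniform(0, 1, 40),
                       rng.normal(0, 0.01, 40)])
print(round(auto_threshold(pts), 4))  # a small, data-driven threshold
```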
(169)
(170) For the data points shown in
(171) A=0.012553
(172) B=-0.015068
(173) C=0.865219
(174) d=0.50101
(175) which results in a plane unit normal:
(176) n=[0.014504, -0.017411, 0.999743].sup.T
(177) This unit normal in turn leads to computing of the rotation matrix R:
(178)
from which rotation angles θ.sub.x=0.00774 degree and θ.sub.y=0.831057 degree are found. The corresponding numbers of turns of the adjustment knobs are:
(179) N.sub.tilt=0.6304
(180) N.sub.left roll=3.2423
(181) N.sub.right roll=4.4441
(182) The adjustment knobs 190, 192 and 196 can then be rotated according to the calculated results to bring the plane of the light curtain LC generally parallel to the plane of the display surface.
(183)
(184) Several other data sets were also tested. As can be seen, the plane fits well to the inliers (represented by solid circles) in all cases, while the outliers are excluded.
(185) It should be noted that since the RANSAC fitting is a robust method, the path that the gauge tool follows across the display surface in order to generate the set of data points typically will not affect the plane fitting result. That is to say, for a given light curtain orientation, the data points generated in response to any pattern of gauge tool movement across the display surface will generate a similar result. Tables 2 to 4 below show three examples. Four sets of data points generated as a result of different patterns of gauge tool movement across the display surface were obtained and tested for each of three light curtain orientations. In each table, the results for set 1 were computed from data points generated in response to movement of the gauge tool across the display surface following a T pattern. The results for set 2 were computed from data points generated in response to movement following an X pattern. The results for sets 3 and 4 were computed from data points generated in response to movement following two further patterns. Ideally, the angles θ.sub.x, θ.sub.y and the number of turns of the adjustment knobs are not affected by the gauge tool movement pattern that resulted in the generation of the data points; namely, the values in the same column for the same parameter in the following tables should agree with each other for the same light curtain orientation.
(186) TABLE 2

Set   θ.sub.x (deg)   θ.sub.y (deg)   Tilt turns   Left roll turns   Right roll turns
1     0.6761          0.3126          0.4272       1.8056            1.3535
2     0.9524          0.3573          0.6018       3.9438            3.4271
3     0.7594          0.1253          0.4742       2.5717            2.3905
4     0.8898          0.1560          0.5622       4.0044            3.7788
(187) TABLE 3

Set   θ.sub.x (deg)   θ.sub.y (deg)   Tilt turns   Left roll turns   Right roll turns
1     1.1175          0.0706          0.7060       1.1006            1.2026
2     1.2158          0.2448          0.7682       2.5544            2.9084
3     1.1748          0.6581          0.7423       2.8535            3.8052
4     1.0882          0.8363          0.6875       2.5651            3.7744
(188) TABLE 4

Set   θ.sub.x (deg)   θ.sub.y (deg)   Tilt turns   Left roll turns   Right roll turns
1     1.7170          1.3125          1.0848       2.0381            3.9359
2     1.6596          1.5698          1.0485       2.6662            4.9359
3     1.8341          1.1424          1.1587       0.5863            2.2382
4     1.6950          1.2836          1.0709       1.7994            3.6554
(189) In these tables, negative and positive numbers represent required turns of the adjustment knobs in different directions. For set 1, the number of turns for the left and right roll is much smaller than that of the other sets, indicating perhaps that a T pattern of gauge tool movement across the display surface is not the best pattern to use. The reason is that this simple pattern may not adequately represent the plane of the light curtain LC, especially when there is a reflection. Therefore, it is best to use those patterns that cover more of the region of interest.
(190) In order to further test the RANSAC method, an independent implementation using the Mobile Robot Programming Toolkit (MRPT) was employed for comparison with the RANSAC method. The MRPT contains an example of 3D plane fitting that was adapted for use with the light curtain data set. The plane surface normal n was computed from least-squares fitting to the inliers and compared with the results computed using the RANSAC method. Table 5 below shows the results of the comparison.
(191) TABLE 5

Set   MRPT                                   RANSAC
1     [8.13088e-05, 0.000205192, 1.0]        [0.000514, 0.000151, 1]
2     [0.00629618, 0.000768107, 0.999987]    [0.004804, 0.001647, 1]
3     [0.00136505, 0.0182697, 1]             [0.002177, 0.016413, 0.999863]
4     [0.00354709, 0.0172904, 1]             [0.003328, 0.016776, 0.999854]
5     [0.0141567, 0.0163843, 1]              [0.014804, 0.016896, 0.999748]
6     [0.00124424, 0.00138161, 1]            [0.002758, 0.001385, 0.999995]
7     [0.000706494, 0.000130609, 1.0]        [0.000286, 0.000212, 1]
(192) As will be appreciated, the results from the implementation of RANSAC agree with those from the independent implementation of MRPT.
(193) If the light curtain LC does not intersect the display surface, the straightforward fit of a plane to the data set using RANSAC yields the target light curtain position. As a result, the geometric relationship between the target light curtain position and its current position can be readily converted into a sequence of adjustment knob rotations as mentioned above. However, in many cases, the light curtain intersects the display surface, resulting in reflections of the light curtain from the display surface, either within the region of interest 106 or in an area surrounding the region of interest. Examples of some scenarios are shown in
(194) For cases where the reflection line RL is present, an extended RANSAC plane fitting method is employed and generalized to account for observations in the reflection. If a reflection line RL is present, then the generated data points corresponding to the reflection line have Z<0. Equation (5) is rewritten and has the following condition:
(195)
(196) In other words, knowing (A, B, C) allows a test to be performed. Moreover, finding all such points then allows the sign of Z to be changed and the normal RANSAC algorithm to be applied to the modified data. This process is known as unfolding.
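The unfolding step of paragraph (196) can be sketched in one line of array code: data points classified as reflections (Z < 0) have their Z sign flipped before the normal RANSAC fit is applied.

```python
import numpy as np

def unfold(points):
    """Flip the Z sign of reflected data points (Z < 0)."""
    unfolded = points.copy()
    unfolded[:, 2] = np.abs(unfolded[:, 2])
    return unfolded

pts = np.array([[0.1, 0.2, 0.5],
                [0.4, 0.1, -0.5],   # reflected observation
                [0.7, 0.9, 0.5]])
print(unfold(pts)[:, 2])  # [0.5 0.5 0.5]
```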
(197)
(198) At this step, the data points representing the reflection line RL are not known. Another possible solution of the plane fitting is that the right part of the data set could be treated as a result of the reflection line RL and unfolded, which would result in the plane being fitted to those data points instead. Therefore, it is important to know which plane represents the real light curtain plane, and not its reflection.
(199) To differentiate the light curtain from its reflection, the following rules are considered.
(200) Firstly, if there are data points positioned on the same side of the reflection line RL as the illumination assembly 150, then these data points represent the light curtain LC and not its reflection. During implementation, the reflection line RL is calculated as the intersection of the fitted plane with the display surface at Z=0. The locations of the data points and of the illumination assembly 150 with respect to the reflection line RL are then known. This rule is illustrated in
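This side-of-line test can be expressed compactly; for example, as the following illustrative sketch assuming numpy, with hypothetical function and argument names:

```python
import numpy as np

def same_side_as_assembly(points_xy, assembly_xy, coeffs):
    # the reflection line RL is the intersection of the fitted plane
    # Z = A*X + B*Y + C with the display surface Z = 0, i.e. the line
    # A*X + B*Y + C = 0; points on the same side of RL as the
    # illumination assembly belong to the light curtain, not its
    # reflection
    A, B, C = coeffs
    side = lambda p: np.sign(A * p[..., 0] + B * p[..., 1] + C)
    return side(np.asarray(points_xy)) == side(np.asarray(assembly_xy))
```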
(201) In some other cases, a whiteboard may be placed on the support surface that positions the illumination assembly 150 such that a portion of the light curtain may be behind the display surface. In this case, if there are inlier data points in an area of the display surface where a direct ray of the light curtain LC would be blocked by the display surface, then those data points must arise from a reflection. This is illustrated by the cases shown in
(202) For some other cases, additional steps may need to be performed in order for the real light curtain to be detected. In these instances, the current light curtain plane position is recorded. The user is then instructed to make a safe rotation, meaning a rotation applied to the adjustment knobs that causes the light curtain LC to rotate around the reflection line RL. The new light curtain plane is then determined and compared with the previous light curtain plane. The result of the comparison allows the user to be directed to a non-ambiguous case. This is illustrated in
(203) Another approach to deal with the reflections is to describe the shape of the display surface directly rather than unfold it. The light curtain is described using a folded plane model which is fitted to the (X, Y, Z) data using a nonlinear least-squares technique. In the folded plane model, the light curtain plane given by the following equation is fitted to the data points:
Z=|A·X+B·Y+C|  (20)
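One way to realize such a fit of equation (20) is with a general nonlinear least-squares solver. The sketch below assumes SciPy and is illustrative only; the starting estimate and solver defaults are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_folded_plane(X, Y, Z, x0=(0.0, 0.0, 1.0)):
    # fit the folded plane model Z = |A*X + B*Y + C| directly to the
    # measured (X, Y, Z) data, so that reflected observations need
    # not be unfolded beforehand
    def residuals(p):
        A, B, C = p
        return np.abs(A * X + B * Y + C) - Z
    return least_squares(residuals, x0).x
```

Because of the absolute value, the model is sign-ambiguous ((A, B, C) and (−A, −B, −C) describe the same folded plane) and the residual is non-smooth along the fold, so a reasonable starting estimate helps convergence.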
(205) After the plane fitting, the light curtain position is identified using the rules discussed above. The number of turns of each adjustment knob required to bring the light curtain plane generally parallel to the plane of the display surface is calculated, and the adjustment instructions are presented to the user on the display surface via an alignment wizard. The alignment procedure with reference to the alignment wizard is shown in
(206) During operation, the user is prompted to follow graphical instructions of the alignment wizard presented on the display surface. When the alignment wizard starts, it requests the user to input the current settings of the adjustment knobs, such as the value shown in an indicator window (not shown) below each adjustment knob or a demarcation thereon. Then, the alignment wizard graphically instructs the user how to hold the gauge tool and move it across the display surface as shown in
(208) The next step shown in
(209) As discussed above, the measured data points are obtained by moving the gauge tool 700 across the display surface along predetermined paths following certain patterns. Examples of the disclosed patterns include a T pattern and an x pattern. Other patterns can of course be employed. When the gauge tool is at certain locations, such as at the upper left corner of the display surface as shown in
(210) To solve this problem, the upper surface of the top plate of the gauge tool corresponding to the white part of the reference mark may be designed to have a sawtooth profile.
(213) The design of the gauge tool, and likewise the design of the reference mark on its top surface, is not limited to the embodiments discussed above.
(214) Turning now to
(215) In general, regardless of configuration, the gauge tool should have at least one measurable parameter whose value is determined by the distance between the light curtain LC and the display surface. The interactive input system as a result is able to detect the gauge tool and measure the at least one measurable parameter at different locations within the region of interest 106. The light curtain LC is determined to be aligned with the display surface when the at least one measurable parameter maintains a constant value at different gauge tool positions within the region of interest 106. As described above, the reference mark on the gauge tool may be configured such that the value of the at least one measurable parameter is a predefined monotonic function of the distance between the light curtain LC and the display surface. The predefined monotonic function may be a continuous function (such as for example a linear function, a polynomial function, an exponential function or the like), a discrete function (e.g., a step function or the like), or a combination of continuous and discrete functions.
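As a simple illustration of this idea (the names are hypothetical, and the calibration curve itself would be established for the particular gauge tool design):

```python
import numpy as np

def distance_from_parameter(measured, param_curve, dist_curve):
    # invert a monotonic calibration curve mapping light curtain to
    # display surface distance to the gauge tool's measurable
    # parameter, recovering distance by interpolation; np.interp
    # requires increasing x-coordinates, so sort by parameter value
    order = np.argsort(param_curve)
    return np.interp(measured, param_curve[order], dist_curve[order])

def is_aligned(params, tol):
    # the light curtain is taken to be aligned when the measurable
    # parameter is constant (within tol) across gauge tool positions
    params = np.asarray(params)
    return float(params.max() - params.min()) <= tol
```

The sorting step lets the same routine handle monotonically increasing or decreasing calibration curves.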
(216) In the examples described above, the display surface is assumed to be planar or generally planar. As will be appreciated, in many instances, the display surface may be warped or curved. In the previous embodiment, the surface model of the plane was expressed by Equation (7). To accommodate a warped or curved display surface, a second-order surface model can be used that expresses the surface as:
Z=c₀+c₁·X+c₂·Y+c₃·X·Y+c₄·X²+c₅·Y²  (21)
(217) The coefficients c₀ to c₅ are found by fitting the second-order surface model to the observed (X, Y, Z) data points. A robust M-estimate fitting approach is employed, such as that described in Numerical Recipes by Press et al., Third Edition, Cambridge University Press, 2008, Section 15.7.2, or in The Geometry of Multiple Images by Faugeras et al., MIT Press, 2001, Section 6.4.1.
(218) With the description of the display surface shape available, the light curtain LC is adjusted so that the plane of the light curtain is positioned relative to the warped or curved display surface in an optimal sense. In particular, adjustment of the light curtain LC is constrained so that the plane of the light curtain approaches only to within some minimum distance of the display surface. This ensures that the light curtain LC does not intersect the display surface. An example of such a minimum light curtain distance plane fit is shown in
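One illustrative way to compute such a minimum-distance fit is as a small linear program in the plane coefficients. The sum-of-clearances objective and grid sampling below are assumptions made for the sketch, and SciPy's linprog is used as the solver:

```python
import numpy as np
from scipy.optimize import linprog

def min_clearance_plane(Xg, Yg, Sg, d_min):
    # choose a plane Z = A*X + B*Y + C that stays at least d_min above
    # the (possibly warped or curved) surface samples Sg at grid points
    # (Xg, Yg), while keeping the total clearance as small as possible;
    # this is a linear program in the unknowns (A, B, C)
    Xg, Yg, Sg = (np.ravel(a).astype(float) for a in (Xg, Yg, Sg))
    n = Xg.size
    # objective: minimize sum of clearances sum(A*x + B*y + C - S);
    # the constant -sum(S) does not affect the argmin, so drop it
    c = np.array([Xg.sum(), Yg.sum(), float(n)])
    # constraints: A*x + B*y + C >= S + d_min, written as
    # -(A*x + B*y + C) <= -(S + d_min)
    A_ub = -np.column_stack([Xg, Yg, np.ones(n)])
    b_ub = -(Sg + d_min)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3, method="highs")
    return res.x  # (A, B, C)
```

For a flat surface this reduces to a plane parallel to the surface at height d_min; for a curved surface the constraints bind where the surface comes closest to the fitted plane.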
(219) If desired, a finger orientation procedure may also be employed.
(221) Although adjustment knobs are used to adjust the position of the light curtain LC, those skilled in the art will appreciate that other types of mechanical or electrical mechanisms may be used. For example, in another embodiment each adjustment knob may be coupled to a motor to automatically rotate the adjustment knobs to align the plane of the light curtain LC with the plane of the display surface once the desired light curtain position is determined.
(222) In the above embodiments, the region of interest and the display surface are described as being a portion of the support surface. If desired, the region of interest and the display surface can be bounded by a frame secured to the support surface or otherwise supported or suspended in a generally upright manner. The frame may comprise a tray to hold one or more active or passive pen tools. Alternatively, the region of interest and the display surface may be defined by a whiteboard or other suitable surface secured to the support surface.
(223) Although embodiments have been described, those of skill in the art will appreciate that variations and modifications may be made without departing from the scope thereof as defined by the appended claims.