TOUCH-BASED INPUT DEVICE
20230085902 · 2023-03-23
Inventors
- Tobias Gulden Dahl (Oslo, NO)
- Magnus Christian Bjerkeng (Oslo, NO)
- Andreas Vogl (Oslo, NO)
- Odd Kristen Østern Pettersen (Trondheim, NO)
CPC classification
G01H9/00
PHYSICS
G06F3/0416
PHYSICS
International classification
Abstract
An input device comprises a plurality of optical vibration sensors mounted in a common housing. Each optical vibration sensor comprises a diffractive optical element; a light source arranged to illuminate the diffractive optical element such that a first portion of light passes through the diffractive optical element and a second portion of light is reflected from the diffractive optical element; and a photo detector arranged to detect an interference pattern generated by said first and second portions of light. The optical vibration sensor is configured so that in use, after the first portion of light passes through the diffractive optical element, the first portion of light is reflected from a reflective surface onto the photo detector. The input device is placed in contact with a surface of a solid body, and an object is brought into physical contact with the surface of the solid body, thereby causing vibrations in the solid body. The vibrations are detected using two or more of the optical vibration sensors. The relative phase(s) of the vibrations are used to determine information regarding the point of contact of the object on the surface of the solid body.
Claims
1. An input device comprising a plurality of optical vibration sensors mounted in a common housing, each optical vibration sensor comprising: a diffractive optical element; a light source arranged to illuminate said diffractive optical element such that a first portion of light passes through the diffractive optical element and a second portion of light is reflected from the diffractive optical element; and a photo detector arranged to detect an interference pattern generated by said first and second portions of light; wherein each optical vibration sensor is configured so that in use, after the first portion of light passes through the diffractive optical element, the first portion of light is reflected from a reflective surface onto the photo detector; wherein the input device is adapted to be placed on or attached to a surface of a solid body; and wherein the input device is configured: using two or more of the optical vibration sensors, to detect vibrations in the solid body; and to use at least one of (i) a relative phase of the vibrations and (ii) a relative amplitude of the vibrations to determine information regarding a point of contact of an object on the surface of the solid body.
2. The input device of claim 1, wherein the input device is configured to determine information regarding the point of contact of the object with the surface from a detected composite signal comprising direct and indirect vibrations.
3. The input device of claim 2, wherein the input device is configured to determine information regarding the point of contact of the object with the surface from a residual signal obtained by subtracting an estimate of a direct signal from the composite signal.
4. The input device of claim 1, wherein the input device is configured to determine information regarding the point of contact of the object with the surface using one or more estimated partial impulse responses, wherein an estimated partial impulse response is an estimate of part of a composite signal that is expected to be detected by an array of optical vibration sensors for a corresponding expected touch input.
5. The input device of claim 1, wherein the input device is configured to use a range or subset of frequencies of the vibrations preferentially or exclusively to determine information about the point of contact of the object on the surface of the solid body.
6. The input device of claim 1, wherein the input device is configured to determine one or more parameters relating to at least one of (i) the solid body and (ii) a position of the input device on the solid body.
7. The input device of claim 1, wherein the reflective surface from which the first portion of light is reflected is the surface of the solid body.
8. The input device of claim 1, wherein each optical vibration sensor comprises a membrane, wherein the membrane comprises the reflective surface from which the first portion is reflected.
9. The input device of claim 8, wherein each optical vibration sensor comprises a mechanical coupling between the membrane and the solid body surface.
10. The input device of claim 9, wherein the mechanical coupling comprises a mass attached to the membrane.
11. The input device of claim 1, wherein the common housing is shaped so that it is in contact with the solid body only at a periphery of the common housing when the input device is resting on a substantially planar surface of the solid body in use.
12. The input device of claim 1, wherein the common housing is shaped so that there is physical contact between the common housing and the solid body in regions of the common housing adjacent to the optical vibration sensors when the input device is resting on a substantially planar surface of the solid body in use.
13. The input device of claim 1, wherein the input device has at least one of the following dimensions: a maximum dimension of the at least one input device that is less than 0.2 m; and a separation between adjacent optical vibration sensors that is less than 4 cm.
14. The input device of claim 1, wherein the information regarding the point of contact comprises the position of the point of contact of the object on the surface of the solid body.
15. The input device of claim 1, wherein the vibrations are reflected vibrations.
16. The input device of claim 1, wherein the vibrations are vibrations caused by the object being brought into physical contact with the surface of the solid body.
17. A method of receiving an input by determining information regarding a point of contact of an object on a surface of a solid body, the method comprising: a) placing at least one input device in contact with the surface of the solid body, wherein the at least one input device comprises a plurality of optical vibration sensors mounted in a common housing, each optical vibration sensor comprising: a diffractive optical element; a light source arranged to illuminate said diffractive optical element such that a first portion of light passes through the diffractive optical element and a second portion of light is reflected from the diffractive optical element; and a photo detector arranged to detect an interference pattern generated by said first and second portions of light; wherein each optical vibration sensor is configured so that in use, after the first portion of light passes through the diffractive optical element, the first portion of light is reflected from a reflective surface onto the photo detector; b) causing vibrations in the solid body; c) detecting the vibrations using two or more optical vibration sensors of the plurality of optical vibration sensors; and d) using at least one of (i) a relative phase of the vibrations and (ii) a relative amplitude of the vibrations to determine information regarding the point of contact of the object on the surface of the solid body.
18. The method of claim 17, wherein using the relative phase of the vibrations or the relative amplitude of the vibrations to determine the information regarding the point of contact on the surface comprises using a direction of arrival algorithm.
19. The method of claim 17, further comprising determining a radius of curvature of a vibration wavefront originating at the point of contact to determine a distance from the at least one input device to the point of contact.
20. The method of claim 17, wherein the at least one input device comprises two input devices, wherein each input device determines a respective direction of arrival of a vibration wavefront from the point of contact, the method further comprising determining an intersection of the directions of arrival to determine the information regarding the point of contact on the surface.
21. The method of claim 17, wherein the at least one input device is placed on a region or face of the solid body that differs from a region or face where the point of contact is made.
22. The method of claim 17, wherein the information regarding the point of contact comprises the position of the point of contact of the object on the surface of the solid body.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0083] Certain preferred embodiments will now be described, by way of example only, with reference to the accompanying drawings, in which:
[0084]
[0085]
[0086]
[0087]
[0088]
[0089]
[0090]
[0091]
[0092]
[0093]
[0094]
[0095]
[0096]
[0097]
[0098]
[0099]
DETAILED DESCRIPTION
[0100]
[0101] The input device 4 comprises a main housing 16, which comprises an outer housing portion 18, and an internal housing portion 20. The internal housing portion 20 is made from foam to suppress the propagation of vibrations. The sensor housing 6 is made of a higher density material such as stainless steel. The outer housing portion 18 is provided with a peripheral rim 22 upon which the input device 4 rests when it is placed on a table surface 24. A reflective mat 26 is provided on the table top surface 24. The reflective mat 26 is easily portable along with the input device, e.g., it may be provided as a single piece that can be rolled up, or multiple smaller pieces, e.g., tiles, that can be arranged on the table top surface 24.
[0102] It should be appreciated that the mat is not essential and that the device could be placed directly on a suitable surface. No fixing is necessary; the weight of the device (to which a large internal battery may contribute significantly) could provide the necessary coupling. Equally, it could be mounted to a surface—e.g., on the opposite face to that which is intended to be touched by a user.
[0103] In use, the input device 4 rests on the surface 24, and a user touches the surface 24 in a region in the vicinity of (but not necessarily close to or immediately adjacent to) the input device. For example, the input device may be placed in a corner of a table, and a user may tap elsewhere on the table. The contact of the user's finger or an input object causes vibrations in the surface 24, which propagate towards the input device 4 and into the reflective mat 26 underneath the optical vibration sensor 2.
[0104] To detect the vibrations using the optical vibration sensor 2, the laser diode 10 projects a laser beam 28 towards the diffraction grating 14. A first portion 30 of the radiation is reflected from the grating 14 onto the photo detector 12. A second portion 32 is transmitted and diffracted by the grating 14, and impinges on the reflective mat 26 which reflects the second portion 32 onto the photo detector 12. The first and second portions 30, 32 interfere to form an interference pattern at the photo detector 12. The interference pattern is dependent on the relative phase of the first and second portions 30, 32, and therefore depends on the distance between the grating 14 and the reflective mat 26 (which, as noted above, vibrates along with the surface 24 following the touch by the user).
[0105] The vibrations may also propagate into the outer housing portion 18 via the rim 22, but due to the material of the inner housing portion 20, which suppresses vibrations, the vibrations do not propagate to the sensor housing 6. Consequently, the sensor housing 6 and the grating 14 mounted therein are substantially isolated from the surface 24 as it vibrates. As a result, the distance between the grating 14 and the reflecting mat 26 varies according to the vibrational amplitude. The interference pattern that is detected at the photo detector 12 therefore also varies with the vibration amplitude, as explained below.
[0106] When the first and second portions of light are reflected and transmitted respectively by the grating, they are diffracted to form a diffraction pattern having a number of peaks corresponding to zeroth, first, second etc. orders. The photodetector is positioned to receive the zeroth order peaks of both portions of light, although it could also be positioned to receive the first order peaks, or a higher order. In some embodiments, two photo detectors are provided in each sensor, where one photo detector is positioned to receive the zeroth order peaks and the other to receive the first order peaks. Focusing optics may be used to direct the peaks onto the respective photo detectors, or this could be achieved by the diffractive element itself (e.g., in some embodiments, the diffractive element is a Fresnel diffractive lens, which focuses the relevant diffraction order peak to a point).
[0107] To derive the separation of the reflective surface 24 and the grating 14 (and thus the vibration amplitude) from the interference pattern, the resultant amplitude of the zeroth order peaks (and/or of the first order peaks) of the interfering diffraction patterns is measured. When the optical path length between the grating 14 and the reflective mat 26 is half of the wavelength λ of the laser light 28 or an integer multiple thereof, the zeroth diffraction order peaks constructively interfere and the first order peaks destructively interfere. The light received at the photo detector 12, which receives the zeroth order peak, is therefore at a maximum. If a second photodetector is provided to receive the first order peak, the light it receives is at a minimum. When the optical path length is (2n+1)λ/4, where n is an integer, the zeroth diffraction order peaks destructively interfere instead, so the light received at the photodetector 12 is at a minimum, while at the second photo detector (if provided) the received light would be at a maximum. Having two photodetectors therefore extends the dynamic range of the sensor. As will be appreciated, the optical path length is dependent on the physical distance between the grating and the surface, e.g., the optical path length and the physical distance may be equal, or otherwise related in a way that can be measured or calculated.
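By way of illustration, the following sketch models the detected zeroth- and first-order intensities as a simple sinusoidal function of the grating-to-reflector gap, consistent with the maxima and minima described above. It is an illustrative model only, not the patent's implementation, and the wavelength, rest gap and vibration values are assumptions made for the example.

```python
# Illustrative sketch only (assumed wavelength and vibration values): a simple
# two-beam interference model in which the zeroth-order intensity varies
# sinusoidally with the grating-to-reflector gap, so the photodetector signal
# tracks sub-wavelength vibration of the surface.
import numpy as np

WAVELENGTH = 850e-9  # assumed laser wavelength in metres

def zeroth_order_intensity(gap, wavelength=WAVELENGTH):
    """Relative zeroth-order intensity for a given grating-to-reflector gap (m).

    Maximum when the gap is an integer multiple of lambda/2, minimum when the
    gap is an odd multiple of lambda/4, as described in the text.
    """
    return 0.5 * (1.0 + np.cos(4.0 * np.pi * gap / wavelength))

def first_order_intensity(gap, wavelength=WAVELENGTH):
    """Complementary first-order intensity (in anti-phase with the zeroth order)."""
    return 1.0 - zeroth_order_intensity(gap, wavelength)

# A 1 kHz, 50 nm vibration superimposed on a lambda/8 rest gap, where the
# sinusoidal response is steepest and the sensor is therefore most sensitive.
t = np.linspace(0.0, 2e-3, 2000)
gap = WAVELENGTH / 8 + 50e-9 * np.sin(2.0 * np.pi * 1e3 * t)
signal = zeroth_order_intensity(gap)
print(round(signal.min(), 3), round(signal.max(), 3))
```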
[0108]
[0109] The sensitivity of the vibration sensor is determined by the change in output signal for a given change in displacement of the reflective surface. It can be seen from
[0110] Although it is possible to measure the vibrations with only one photodetector, having two photodetectors to measure the zeroth and first diffraction orders respectively may advantageously provide an extended dynamic range.
[0111] By recording the variation in light intensity detected at the photodetector 12 in the manner described above, the phase and amplitude of vibrations at a point directly underneath the vibrational sensor 2 can be determined. As discussed further below, the input device 4 comprises a plurality of optical vibration sensors 2, and so the vibrations at a plurality of points on the reflective mat 26 can be detected in this way.
[0112]
[0113]
[0114]
[0115] Each optical vibration sensor 2 detects a vibration having a particular phase and amplitude. The phases and amplitudes of a wavefront (in particular, of a plane wave) arriving at an array of detectors can be used to calculate the direction of arrival of the wavefront using known methods, for example, direction of arrival (DOA) algorithms such as MUSIC and ESPRIT. Using such methods, the direction of propagation of a plane wave can be determined from the phases and amplitudes of the incident waves detected at the array.
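As a simplified illustration of this idea (a stand-in for full MUSIC/ESPRIT processing rather than the patent's implementation), the sketch below estimates the arrival angle of a narrowband plane wave from the phase slope across a uniform linear array. The wave speed, sensor spacing and analysis frequency are assumed values.

```python
# Minimal sketch (illustrative): estimating the direction of arrival of a
# narrowband plane wave from the relative phases it produces across a uniform
# linear array of vibration sensors.
import numpy as np

WAVE_SPEED = 500.0   # assumed bending-wave speed in the surface, m/s
FREQ = 2000.0        # assumed narrowband analysis frequency, Hz
SPACING = 0.02       # assumed sensor spacing, m

def doa_from_phases(phases, spacing=SPACING, freq=FREQ, speed=WAVE_SPEED):
    """Estimate the arrival angle (radians from broadside) of a plane wave.

    `phases` are the unwrapped phases (radians) measured at each sensor of a
    uniform linear array.  For a plane wave the phase varies linearly along the
    array; the slope of that line gives sin(theta).
    """
    n = np.arange(len(phases))
    # Least-squares fit of phase versus sensor index -> radians per sensor.
    slope = np.polyfit(n, np.unwrap(phases), 1)[0]
    k = 2.0 * np.pi * freq / speed                 # wavenumber in the surface
    sin_theta = -slope / (k * spacing)
    return np.arcsin(np.clip(sin_theta, -1.0, 1.0))

# Synthetic example: a wave arriving 30 degrees off broadside.
theta_true = np.deg2rad(30.0)
k = 2.0 * np.pi * FREQ / WAVE_SPEED
phases = -k * SPACING * np.arange(8) * np.sin(theta_true)
print(np.rad2deg(doa_from_phases(phases)))   # ~30.0
```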
[0116] In the embodiment depicted in
[0117] The radius of curvature of an arriving wavefront can be determined by calculating the direction of arrival at various points across the width of the array of optical vibration sensors 2. For example, at a first group of optical vibration sensors 52 the direction of arrival is in a first direction 54, while for a second group of optical vibration sensors 56, the direction of arrival is in a different direction 58. Accordingly, direction of arrival algorithms can be used to determine the curvature of the vibration wave fronts, and thus to determine the origin of the vibrations, because the radius of curvature depends on how far away the point of contact is. If the point of contact is far away, the wave fronts will be less curved, while if the point of contact is closer to the input device 4, the vibration wave fronts will be more curved, for example, as shown in
[0118]
[0119] Instead, the position of the point of contact 62 is determined using two input devices 4a, 4b that are placed at different locations on the surface 24. Each input device 4a, 4b determines a respective direction of arrival 66, 68. The position of the point of contact 62 can thus be calculated as the point at which lines 66 and 68 cross. Although it is particularly advantageous to have the two input devices 4a, 4b separated by a large distance (e.g., large compared to the width of the array of sensors 2), this method is also effective if the input devices are placed close together, although the precision with which the position of the point of contact 62 is determined may be lower.
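A minimal sketch of this triangulation step is given below: each device contributes a bearing from a known position, and the point of contact is taken as the intersection of the two bearing lines. The device positions, orientations and bearings are assumed inputs for the example.

```python
# Minimal sketch (illustrative): locating the point of contact as the
# intersection of two direction-of-arrival bearings reported by two separate
# input devices.  Device positions and the bearings are assumed known inputs.
import numpy as np

def intersect_bearings(p_a, theta_a, p_b, theta_b):
    """Intersect two lines p + t*[cos(theta), sin(theta)].

    Returns the 2-D intersection point, or None if the bearings are parallel.
    """
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    d_a = np.array([np.cos(theta_a), np.sin(theta_a)])
    d_b = np.array([np.cos(theta_b), np.sin(theta_b)])
    # Solve p_a + t_a*d_a = p_b + t_b*d_b for t_a, t_b.
    A = np.column_stack([d_a, -d_b])
    if abs(np.linalg.det(A)) < 1e-9:
        return None                      # (nearly) parallel bearings
    t_a, _ = np.linalg.solve(A, p_b - p_a)
    return p_a + t_a * d_a

# Two devices at opposite corners of a table, each reporting a bearing (radians).
print(intersect_bearings([0.0, 0.0], np.deg2rad(45.0),
                         [1.0, 0.0], np.deg2rad(135.0)))   # -> [0.5, 0.5]
```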
[0120] The direction of arrival may be calculated by each input device 4a, 4b, e.g., by an on-board processor on each device. The input devices 4a, 4b may communicate with one another directly or via a remote device to calculate the point of origin 62. Alternatively, the data from the input devices 4a, 4b may be transmitted to a remote device, and the point of origin may be calculated at the remote device.
[0121] It should be understood by those skilled in the art that exactly the same principle as is described above can be used to determine the direction of arrival of a plane wave or pseudo-plane wave (e.g., from a further-field finger contact) using an array of sensors in a single housing such as those shown in
[0122] In a similar manner to that described with reference to
[0123] It is also possible to use vibrations received indirectly to determine the position of a point of contact. This is described below with reference to
[0124] A sensor array 100 in accordance with the invention is placed in contact with a surface 102. A finger 101 touches (or moves across) a point on the surface 102, generating vibrations. The acoustic energy of the vibrations travels in all directions, including the direction straight towards the sensor array 100. This is illustrated as wavefronts 103 and 104. Different wave modes will arise in the material, and those wave modes can have different speeds and hence arrive at the array 100 at different times. For example, tapping on the surface will create predominantly bending waves. An object or finger drawn across the surface will create Rayleigh (surface) waves, as well as shear waves. This is represented in
[0125] In addition, there will be indirect waves impinging on the array 100, depicted as wavefronts 105, 106 arising from reflection of the vibrations from one edge of the surface 102 and wavefronts 107, 108 arising from a reflection from another edge of the surface 102. Again, different wave modes can have different speeds and thus arrive at the array 100 at different times, hence the separation of wavefronts 105, 106 and 107, 108 in the illustrations.
[0126] It will be appreciated that there will be many more wavefronts impinging on the array 100, as a result of one or more reflections from the boundary of the surface, but these are omitted from the Figure for clarity. Generally, later arriving signals will tend to be weaker than the first, direct signals and also than the first-order echoes, since the amplitude of the signal typically decays with the distance R travelled by the vibration (e.g., 1/R or 1/√R, depending on the type of vibrations).
[0127] The direction and relative timings of arrival of the wavefronts depends on the shape, size and nature of the propagating medium. In some situations those aspects can be parameterized to cover, for example, a few key parameters like three-dimensional size (depth, width, height), the relative position and orientation of the array on the surface, as well as wave speeds of the most relevant wave modes. Given an initial estimate, those parameters could be estimated, inferred or observed from the tapping or sliding process.
[0128] The array housing could also be equipped with software and data describing a set of possible parameters (e.g., a set of parameter ranges) that could be later estimated from actual tapping or sliding data obtained in practice.
[0129] The tapping/sliding may happen in predefined locations (“training locations”), leading to a ‘correct’ estimation of the key parameters. Alternatively, the tapping/sliding may happen “blindly”, i.e., the sensor detects a tap or slide, but the exact or approximate tapping position is unknown to the sensor. Instead, it is possible to determine the location as well as the parameters through the use of multiple sensors, as will be explained below. In addition to increasing resolution and improving noise characteristics by averaging, the use of multiple sensors can create an overdetermined set of equations for estimating touch/slide positions. In contrast, using only two or three sensors may be just enough to estimate the location, but not the parameters as well. This overdetermination can be used to determine the key parameters, because if those parameters are incorrectly estimated, the equation system would typically result in a poor match between the observed “raw accelerometer” data and the data that would be expected for the estimated finger position.
[0130] Specifically, an example of tapping will now be considered, in which the tapping sound would be some kind of impulse that could be observed by the multiple sensors. Let the surface shape vector s=[x, y, z] hold the dimensions of the surface (believed to be a finite planar slab). Let p=[px, py] be the position of the centre of the sensor array and let θ be the orientation of the sensor array in the plane. Let c be the wave speed of the dominant wave mode in the surface upon tapping (assuming there is one), and let y_1(t), y_2(t), y_3(t), . . . y_N(t) be the time series of signals received by the N accelerometers during some time interval t from T0, T0+1, . . . T0+(M−1). Let x be the (unknown) position of the finger tapping on the table.
[0131] Now clearly y_i is a function of t, but also a function of all the other parameters:

y_i(t) = y_i(t | s, p, θ, x, c, q_i)   (Eq. 5)
[0132] where q_i denotes the relative position of the i-th sensor within the array, relative to the centre (this is typically a fixed parameter, i.e., not one that needs to be estimated). Now, given the family of signals {y_i(t)} it is possible to construct directional edge detectors that can pick up an edge of an impinging signal from a specific direction. One such approach is to use a matched filter with a thresholding function. Specifically, a set of filters f_i^ϕ(t) can be constructed, applied (convolved with the signals) and the results summed to get:

z^ϕ(t) = Σ_i (f_i^ϕ ∗ y_i)(t)
[0133] Typically, the filters (or signals) f_i^ϕ(t) will contain the wavefronts as they would be expected to be observed by the sensors i = 1, . . . N, i.e., with relative time delays depending on the angle of arrival ϕ and the relative sensor position. A signal (direct path or echoic) would then be detected at angle ϕ and at time T, for example, if
z^ϕ(T) ≥ Threshold
[0134] This statistic can then be used to create a signal-echo matrix (Eq. 7) E ∈ ℝ^(Q×M), where Q is the number of different angles ϕ used for testing angularly impinging signals, and M is the number of time-samples in the time-window of interest (i.e., T0, T0+1, . . . T0+(M−1) as before). The elements of E will be defined as

E[i, m] = 1 if z^(ϕ_i)(T0 + m) ≥ Threshold, and 0 otherwise

[0135] where ϕ_i denotes the i-th angle used in the angular sampling grid. In this case the matrix is a binary matrix, but a continuous matrix, measuring the “degree” of detected impinging angular signals, could also be used.
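The following sketch illustrates one way such a directional edge detector and binary signal-echo matrix could be constructed. The edge template, threshold, array geometry and wave speed are illustrative assumptions, not values from the patent.

```python
# Minimal sketch (illustrative assumptions throughout): a directional edge detector.
# For each candidate angle phi, the sensor signals are shifted by the plane-wave
# delays expected for that angle, matched-filtered with an edge template, summed,
# and thresholded to fill one row of a binary signal-echo matrix E (Q angles x M samples).
import numpy as np

FS = 20000.0            # sample rate, Hz (assumed)
SPEED = 500.0           # wave speed in the surface, m/s (assumed)
SPACING = 0.02          # sensor spacing, m (assumed)
EDGE_TEMPLATE = np.array([-1.0, -1.0, 1.0, 1.0])   # crude rising-edge template

def signal_echo_matrix(y, angles, threshold):
    """y: (N, M) sensor signals; angles: Q candidate arrival angles (radians).

    Returns a binary (Q, M) matrix E whose entries mark detected edges per angle.
    """
    n_sensors, n_samples = y.shape
    E = np.zeros((len(angles), n_samples), dtype=int)
    for qi, phi in enumerate(angles):
        z = np.zeros(n_samples)
        for i in range(n_sensors):
            # Expected extra delay at sensor i for a plane wave from angle phi.
            delay = int(round(FS * i * SPACING * np.sin(phi) / SPEED))
            aligned = np.roll(y[i], -delay)          # undo the expected delay
            z += np.convolve(aligned, EDGE_TEMPLATE, mode="same")
        E[qi] = (z >= threshold).astype(int)
    return E

# Synthetic use: an impulse-like edge sweeping across 8 sensors from ~20 degrees.
y = np.zeros((8, 400))
for i in range(8):
    y[i, 100 + int(round(FS * i * SPACING * np.sin(np.deg2rad(20.0)) / SPEED))] = 1.0
E = signal_echo_matrix(y, np.deg2rad(np.linspace(-60, 60, 25)), threshold=6.0)
print(E.shape, E.sum())
```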
[0136] Recall that if a “hypothetical touch” at a location x had been correctly estimated, with parameters s, p, θ, x, c, q_i, this would have given an estimated signal-echo matrix Ê:
Ê = Ê(ŝ, p̂, θ̂, x̂, ĉ, {q_i})
[0137] The estimated signal-echo matrix Ê (Eq. 9) could be obtained in a number of different ways. For example, if the medium was a slab of a known or partially known material, one could simulate the propagation of acoustic energy arising from a specific position using ray-tracing techniques. A simple model for such an approach would be room impulse response models, where one makes an informal analogy between a slab and an (empty) room. Publicly available software packages exist for such approaches, see for instance the RIR Generator from International Audio Laboratories in Erlangen. Other, more complex approaches for estimating wave propagation and the resulting measurements of signals by accelerometers could be taken, for instance via finite-element modelling. COMSOL is one commercially available toolbox.
[0138] If all the parameters are estimated correctly, i.e., the correct size of the surfaces, the relative position of the sensor array, the wave speed and the sensor array orientation have been found, then some suitable distance function
d(Ê,E)
should attain its minimum value over all possible values of the parameter estimates. The distance function could be based on any suitable measure, such as an L1, L2 or LP norm of matrix differences. It could also be based on distance measurements after suitable filtering of the matrices E and Ê, such as a preliminary smoothing of those matrices to allow for minor deviations from an exact fit between binary matrices. It could also be based on the sum of minimum distances between the nearest neighbours in the matrices Ê and E, or any other suitable measure. One can then use a general-purpose algorithm to search for the key parameters defining the surface of interest plus the relative location of the sensor array, i.e., the parameters s, p, θ, c. In this specific respect the position of the finger x is a nuisance parameter and the q_i are the fixed (not-to-be-estimated) relative positions of the individual sensor elements. In a typical embodiment of the invention, initial ranges or legal values may be set for those parameters, i.e., s ∈ S, p ∈ P, θ ∈ [0, 2π], c ∈ C. The ranges can then be searched for the minimum value of the function d(·), for instance using the simplex algorithm. Other algorithms, such as steepest-descent searches or conjugate gradient approaches, could equally well be used. Such approaches could use as input a small subset of estimated signal-echo matrices, compare those to the observed signal-echo matrices, and make one or more qualified estimates of the best directional changes in the parameter vector, so as gradually to obtain a better match. Multiple starting points for initializing the algorithm could be used.
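A minimal sketch of such a parameter search is given below, using a Nelder-Mead (simplex) search over the surface and placement parameters and an L2 distance between lightly smoothed matrices. The forward model predict_echo_matrix is a hypothetical placeholder standing in for a ray-tracing or room-impulse-response style simulator.

```python
# Minimal sketch (illustrative): searching the surface/placement parameters by
# minimizing a distance d(E_hat, E) between the observed signal-echo matrix E and
# the matrix predicted for a candidate parameter vector.  `predict_echo_matrix`
# is a hypothetical placeholder for a ray-tracing / image-source style simulator.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

def predict_echo_matrix(width, height, px, py, theta, c):
    """Hypothetical forward model: would return the predicted (Q, M) echo matrix."""
    raise NotImplementedError("stand-in for a ray-tracing / RIR-style simulation")

def distance(E_hat, E, sigma=1.0):
    """L2 distance between lightly smoothed matrices, tolerating small misalignments."""
    return np.linalg.norm(gaussian_filter(E_hat.astype(float), sigma)
                          - gaussian_filter(E.astype(float), sigma))

def estimate_parameters(E_observed, initial_guess):
    """Nelder-Mead (simplex) search over [width, height, px, py, theta, c]."""
    def cost(params):
        E_hat = predict_echo_matrix(*params)
        return distance(E_hat, E_observed)
    return minimize(cost, x0=np.asarray(initial_guess, float),
                    method="Nelder-Mead",
                    options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 2000})

# Usage (sketch): result = estimate_parameters(E, [1.2, 0.8, 0.3, 0.2, 0.0, 500.0])
```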
[0139] Moreover, the same process could also use more sophisticated filters than pure edge filters. The filters could be designed to detect energy from some directions while suppressing those of others, perhaps in the form of weighted filters, i.e., using a kind of time-domain beamforming.
[0140] Also, the filters need not be limited to edge filters, but could also be filters detecting pure acoustic energy from a specific direction, i.e., a high level of sustained energy from a specific direction. This could be useful if the signal from the finger is not a tap, but a swipe motion, in which case there is likely to be less of a sharp rise in the signal. Tapping fingers: The approach used above for obtaining an estimate of the model could also be used subsequently for positioning new touch events. Both the basic approach and the more sophisticated beam-forming-like approaches could be adopted. In this case, one could simply “lock” the estimated parameters s, p, θ, c and subsequently conduct much faster construction of hypothetical signal-echo matrices and match them with the observed data to estimate the location of the touch.
[0141] Sliding fingers: These are of particular interest and relevance since they relate to events like pinch-to-zoom. In this case, however, the sound typically does not appear at the sensor array in the shape of an impulse signal trailed by later echoes. Instead, there is a continuous reception of signals, following the motion of the fingers on the surface and the resulting acoustic energy. This means that, to some degree, the signal-echo matrix is replaced by another matrix (or in fact a vector) which does not have the same type of time resolution.
[0142] Determining the position of a tap or swipe using reflected signals (for example, in accordance with the embodiment of
[0143]
[0144] It can be seen that even though the acoustic wave energy comes from one angle (for the direct signal), energy is detected from a range of angles. This is because a delay-and-sum beamforming technique does not provide sufficient resolution when applied to a broadband signal to distinguish between close angular responses. This results in a spatial (i.e., directional) smearing of the signal, rather than a narrow peak being observed. A reasonably accurate determination of the direction of arrival can however be made based on the assumption that the direction of arrival corresponds to the peak 202 of the signal.
[0145]
[0146] As the echo propagates from the boundary where it is reflected, it arrives at the sensor from a different direction from the direct signal 200. The echo shows spatial/directional smearing for the same reasons discussed above for the direct signal. However, it can be seen that the echo signal peak 206 is at a different angle from the direct signal peak 202, corresponding to a different general direction of arrival.
[0147] However, these two constituents 200, 204 (the direct signal and the echo) are not individually observable. Instead, the combined energy of the two signals would be observed, as shown in
[0148] It can be seen that the two constituent signals of the direct signal 200 and the echo 204 are not easily distinguishable from each other. Further, although two peaks 210, 212 can be discerned, the positions of those peaks 210, 212 are shifted relative to the peaks 202, 206 of the constituent signals, as represented by the arrows 214, 216. The arrow 216 representing the shift in the echo signal peak is larger, representing the fact that the peak of the echo 204 is shifted by a greater amount than the peak of the direct signal 200. However, even though the direct signal 200 is much stronger than the echo 204, the angular location of the peak 210 corresponding to the direct signal 200 is still affected by the echo signal 204, as represented by the smaller arrow 214.
[0149] It can be challenging to determine the peaks of the direct signal 200 and the echo 204 from the combined signal 208. Methods such as ESPRIT and MUSIC can be used, as well as snap-shot based methods such as those that use compressive sensing and L1-like approaches. However, the Applicant has devised a particularly advantageous approach to solving this problem that provides an improvement upon these methods, and this approach is described below.
[0150] The approach is based on the Applicant's appreciation that the direct signal is much stronger than any echoes, and therefore influences the shape of the echoic signals more than the echoic signals affect each other. By ignoring the contribution of the echoic signals, the direction of arrival of the direct signal can be determined. The direct signal (having a shape which is estimated through theoretical calculations or empirical observation) can then be subtracted from the observed (i.e., combined) signal, leaving a residual signal corresponding to the echoes. The residual signal consisting of the echoes can then be analysed to determine the position of the swipe or tap on the surface. This technique can be used to particular advantage for swipes on the surface that are perpendicular (or have a substantial component perpendicular) to the linear array of optical vibration sensors. An example of this is shown in
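A minimal sketch of the subtraction step is given below. The direct-path component is reconstructed by aligning the channels on the direct-path delays and averaging, which is an assumed estimator used here for illustration rather than the specific estimator of the patent; the residual after subtraction is what would then be analysed for echoes.

```python
# Minimal sketch (illustrative): estimate the dominant direct-path component from
# its known/estimated direction, subtract a reconstruction of it from each sensor
# signal, and keep the residual (the echoes) for further direction-of-arrival
# analysis.  The aligned-and-averaged "shape" estimator is an assumption.
import numpy as np

FS, SPEED, SPACING = 20000.0, 500.0, 0.02

def sensor_delay(i, phi):
    return int(round(FS * i * SPACING * np.sin(phi) / SPEED))

def subtract_direct(y, phi_direct):
    """y: (N, M) sensor signals.  Returns (direct_estimate, residual), both (N, M)."""
    n_sensors, _ = y.shape
    # Align all channels on the direct-path delays and average to estimate its shape.
    aligned = np.stack([np.roll(y[i], -sensor_delay(i, phi_direct))
                        for i in range(n_sensors)])
    direct_shape = aligned.mean(axis=0)
    # Re-delay the common shape back onto each channel and subtract it.
    direct_est = np.stack([np.roll(direct_shape, sensor_delay(i, phi_direct))
                           for i in range(n_sensors)])
    return direct_est, y - direct_est

# residual = subtract_direct(y, phi_direct=np.deg2rad(0.0))[1]
# The residual can then be scanned for echo directions with the same beamformer.
```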
[0151]
[0152] The other lines 302-316 correspond to peaks representing the echoes that become visible after the central peak 400 has been subtracted. It can be seen that the direction of arrival of the echoes does vary with time. This is because the incoming acoustic signal travels via a reflection from a boundary of the surface (e.g., a table edge). The location and direction of a swipe can be determined using the residual echoes, because the angle of reflection (and thus the direction of arrival) of the echoes depends on how close the point of contact on the surface is to the array. This is explained further below with reference to
[0153]
[0154] It can be seen that the direct signals 418, 422 from the start and end locations 414, 416 both travel perpendicularly to the array of sensors, while the echoic signals 420, 424 arrive at an angle that varies from α.sub.1 for the start location 414 to α.sub.2 for the end location 416. Information regarding the movement of the swipe towards and away from the array can thus be discerned from the echoes 420, 424 based on the direction of arrival of each echo. The determination of the exact position of the point of contact during the swipe may be calculated using parameters relating to the surface (e.g., position of boundaries) and/or one or more other detected echo(es) and their direction(s) of arrival. For example, the position of the point of contact during the swipe may be accurately determined by matching the detected composite signal with a set of expected angular profiles and/or locations of echoes.
[0155] The Applicant has also appreciated that it is not necessary to detect/identify each echo individually. It is also possible to hypothesize what the sum of all the echoes would look like for a given point of contact location and then match the hypothesized sum signal with the observed temporal data. This would amount to a ‘matched spatial filter’. For example, in the case of a swipe motion, the continuous movement of the finger creates waves that are roughly stationary (or semi-stationary). Under this assumption, a Fourier transform can be applied to the set of received signals.
[0156] This will now be explained further, assuming that the finger is swiping over positions x_i, and that this motion causes a number of direct path signals/echoes, where the relative strength of the echoes is known from the geometry of the surface (e.g., table). In the discussion that follows, the signals are normalized so that the direct path echo signal has unit energy. For a specific frequency of interest ω, the vector Y(ω) denotes the Fourier-transformed output of the sensor data. One coefficient of Y(ω) corresponds to one sensor output. Then, the hypothesized angular response at the array from a touch/swipe on location x_i could be denoted as:
Ŷ_i(ω) = γ_i1 F_i1(ω) + γ_i2 F_i2(ω) + γ_i3 F_i3(ω) + . . . + γ_iN F_iN(ω)
[0157] where γ_i1 = 1 and the other coefficients are less than 1. A large set of hypothesized array responses can be used, and then the observed array response can be matched with the set of estimated array responses, i.e.
max_i |⟨Ŷ_i(ω), Y(ω)⟩|
which is a matched filter. The coefficients {γ_ij} and the corresponding expected directional vectors {F_ij(ω)} are computed based on the estimated location and the model of the table/surface.
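The sketch below illustrates this frequency-domain matching for a single frequency bin: each hypothesized location contributes an expected array response built from its steering vectors and echo weights, and the observed response is matched against all hypotheses. The steering vectors and weights in the example are made up for illustration.

```python
# Minimal sketch (illustrative): frequency-domain matched spatial filtering.  Each
# hypothesized touch location i has an expected array response
# Y_hat_i(w) = sum_j gamma_ij * F_ij(w); the observed bin vector Y(w) is matched
# against all hypotheses and the best-scoring location is selected.
import numpy as np

def match_location(Y, hypotheses):
    """Y: complex array response (one FFT bin, one entry per sensor).

    hypotheses: list of (gammas, F) pairs, where F is (n_echoes, n_sensors) complex
    steering vectors and gammas their relative strengths (direct path first, = 1).
    Returns the index of the best-matching hypothesized location and all scores.
    """
    scores = []
    for gammas, F in hypotheses:
        Y_hat = np.sum(np.asarray(gammas)[:, None] * F, axis=0)
        # Normalized matched-filter score |<Y_hat, Y>|.
        scores.append(np.abs(np.vdot(Y_hat, Y)) / (np.linalg.norm(Y_hat) + 1e-12))
    return int(np.argmax(scores)), scores

# Toy usage with made-up steering vectors for two hypothesized locations:
n = 8
F0 = np.exp(-1j * np.outer([0.0, 0.4], np.arange(n)))       # direct + one echo
F1 = np.exp(-1j * np.outer([0.2, -0.3], np.arange(n)))
hyps = [([1.0, 0.4], F0), ([1.0, 0.4], F1)]
Y = hyps[1][1][0] + 0.4 * hyps[1][1][1]                       # signal from location 1
print(match_location(Y, hyps)[0])                             # -> 1
```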
[0158] A further approach that can be used in accordance with embodiments of the invention is described with reference to
[0159]
[0160]
[0161] The significance of this is that it does not matter what is detected in the regions shown in dotted lines. This is in contrast with methods of the prior art which use training to build a database of ‘fingerprints’, i.e., recording the full acoustic response detected for certain specified training taps/swipes, and then matching those to subsequently detected input taps/swipes to determine the position of the input taps/swipes. In such approaches it is necessary to determine the impulse response for each possible tap/swipe completely. This requires a time-consuming and inconvenient training procedure prior to use each time the sensor device is set up on a surface.
[0162] The method described with reference to
[0163] An example of this approach is described in detail below.
[0164] Assume that there are N hypothetical points on the touch surface, p_1, p_2, . . . , p_N. Then a touch or swipe event happens at the surface, by the finger impacting or swiping along the surface, and this is recorded as signals y_1(t), y_2(t), . . . y_Q(t) at sensors 1, 2, . . . Q respectively, over the time interval t, t+1, . . . t+L−1, assuming some sampling frequency f_s and where L is the sampling window length.
[0165] Furthermore, it has previously been estimated, or computed by tracing the direct path signals and echoes, that each point p_1, p_2, . . . , p_N may be associated with an estimated signal at sensors 1, 2, . . . Q. These estimated signals may be described as signals:

x_ij(t)

where i is the point index (i.e., referring to points p_1, p_2, . . . , p_N) and j is the sensor index (i.e., referring to sensors 1, 2, . . . Q), again with the signals sampled over the interval t, t+1, . . . t+L−1.
[0166] It may not be possible to compute every part of this signal accurately, since the signal is being constructed largely based on direct path signals and reflections of some specific wave modes. In other words, there may be other wave modes that are too complex to model or approximate without more detailed knowledge of the material in the surface, its size, and potential environmental parameters. Therefore, a weight w_ij(t) is associated with each x_ij(t), where each w_ij(t) is typically in the range [0,1], which reflects the level of ‘belief’ in the correctness of the signal, i.e., the level of certainty that the signal is correct.
[0167] Then, to estimate the location of the impacting finger, a score s_i is computed, where

s_i = Σ_j Σ_t w_ij(t) x_ij(t) y_j(t)
which essentially matches the received signal vectors y_1(t), y_2(t), . . . y_Q(t) with the weighted, estimated impulse response associated with a location p_i. It is then possible, for instance, to compute the maximum over all the s_i, and choose that location as the estimated finger position.
[0168] It may also be required, for example, that s_i is above a certain threshold, to avoid accidental touch or swipe detection, or that the selected s_i is above a certain threshold relative to the distribution given by the other, non-selected scores, i.e., to detect that s_i is a “clear winner”. It could for instance be required to be a certain number of times higher than the average value of the scores.
[0169] This may be useful because it is difficult to set an exact expected “match energy level”, since this will be related to the “force” of the touch, swipe or tap motion, which is typically not known. In this case, each signal x_ij(t) could be modelled.
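A minimal sketch of the scoring and selection just described is given below. The score is computed as a weighted correlation between the recorded signals and the estimated partial impulse responses, followed by an absolute threshold and a "clear winner" test; the exact score formula, thresholds and winner factor are assumptions made for the example.

```python
# Minimal sketch (illustrative): scoring hypothetical touch points against the
# recorded signals using weighted, estimated partial impulse responses, then
# applying an absolute threshold and a "clear winner" test.
import numpy as np

def score_points(y, x, w):
    """y: (Q, L) recorded signals; x, w: (N, Q, L) estimated responses and weights.

    Returns an array of N scores, one per hypothetical point p_i.
    """
    # s_i = sum over sensors j and samples t of w_ij(t) * x_ij(t) * y_j(t)
    return np.einsum('nql,nql,ql->n', w, x, y)

def pick_location(scores, abs_threshold, winner_factor=3.0):
    """Return the best point index, or None if no score is a clear enough winner."""
    best = int(np.argmax(scores))
    if scores[best] < abs_threshold:
        return None
    others = np.delete(scores, best)
    if others.size and scores[best] < winner_factor * np.mean(others):
        return None
    return best

# Toy usage: 3 hypothetical points, 4 sensors, 64-sample window.
rng = np.random.default_rng(1)
x = rng.standard_normal((3, 4, 64))
w = np.ones_like(x)
y = x[2] + 0.1 * rng.standard_normal((4, 64))   # touch resembles point index 2
print(pick_location(score_points(y, x, w), abs_threshold=0.0))   # -> 2 (typically)
```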
[0170] The matching above can clearly also be carried out in the frequency domain, by taking a Fourier transform of the signals and then doing the correlation/matching in the frequency domain. It could also be done in any other suitable domain, such as the fractional Fourier domain, or using wavelets or other suitable basis representation approaches.
[0171] In some situations, it may be difficult to accurately estimate or gauge the exact ‘shape’ of the signal x_ij(t), even at those locations where it is known or assumed that there is an echo. This can be the result of damping and magnification of certain frequencies, or of certain smaller phase differences. In this case, x_ij(t) could instead be viewed as the result of a linear combination of certain basis signals. Letting x_ij denote the vectorised/sampled version of the signal, i.e., x_ij = [x_ij(t), x_ij(t+1), . . . , x_ij(t+L−1)], it is possible to use the expression
x_ij = B_ij α_ij
where B_ij contains, as its columns, vectors whose linear combination could amount to the actual (and later weighted) impulse response signal associated with location p_i. The score s_i above could then be recomputed with an additional level of maximization, this time over the set of possible vectors α_ij, subject to some constraint to avoid the trivial scale-to-infinity solution. One example would be the constraint ‖α_ij‖_2^2 = 1.
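The sketch below illustrates the basis-signal variant: for a unit-norm α_ij, the maximization of the correlation has a closed form, since the maximizing α_ij is proportional to B_ij^T y_j. The handling of the weights and the normalization are simplifications assumed for the example.

```python
# Minimal sketch (illustrative): when the exact echo shape is uncertain, each
# estimated response is modelled as x_ij = B_ij @ alpha_ij with ||alpha_ij|| = 1.
# Maximizing the correlation with the recorded signal over alpha_ij has a closed
# form: the per-sensor contribution is ||B_ij^T y_j||.
import numpy as np

def basis_score(y, bases, w=None):
    """y: (Q, L) recorded signals; bases: N-by-Q nested lists of (L, K) matrices B_ij.

    Returns N scores, each summing max_{||alpha||=1} <B_ij @ alpha, w_ij * y_j> over j.
    """
    n_points, n_sensors = len(bases), len(bases[0])
    scores = np.zeros(n_points)
    for i in range(n_points):
        for j in range(n_sensors):
            target = y[j] if w is None else w[i][j] * y[j]
            scores[i] += np.linalg.norm(bases[i][j].T @ target)
    return scores

# Toy usage: 2 points, 1 sensor, 32-sample window, 3 basis signals each.
rng = np.random.default_rng(2)
bases = [[rng.standard_normal((32, 3))] for _ in range(2)]
y = (bases[1][0] @ np.array([0.6, -0.8, 0.0]))[None, :]   # lies in point 1's basis
print(np.argmax(basis_score(y, bases)))                   # -> typically 1
```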
[0172] More sophisticated approaches such as deconvolution (instead of correlation/matching) of hypothesized signals against real signals could be used.
[0173] The approach described above is particularly advantageous as it obviates the need to train the array of sensors, e.g., by performing a sequence of known taps and/or swipes to allow the sensor array to compute certain parameters used in the analysis of subsequent input taps/swipes. The use of an estimated partial impulse response avoids the need to build up a database of full impulse responses for specified touch inputs.
[0174]
[0175]
[0176] The input device 4′ comprises a main housing 16, which comprises an outer housing portion 18 and an inner housing portion 20. The main housing 16 has the same structure as the main housing 16 of the embodiment of
[0177] When a user touches the surface 24, vibrations caused by the touch propagate through the surface 24 to the reaction mass 72, which vibrates with the surface 24. As in the first embodiment, the sensor housing 6 is not in contact with the surface 24, and the inner housing portion 20 prevents the propagation of vibrations from the outer housing portion 18 to the sensor housing 6. Consequently, the housing 6 remains substantially stationary, while the reaction mass and the membrane vibrate due to the vibrations in the surface 24.
[0178] In an equivalent manner to that described with reference to
[0179]
[0180] The input device 4″ comprises a main housing 16″, which comprises an outer housing portion 18″ and an inner housing portion 20. In contrast with the embodiments of
[0181] The optical vibration sensor 2″ detects vibrations in the same manner as described above with reference to
[0182] It will be appreciated that although only one optical vibration sensor is depicted in each of
[0183]
[0184] It will be appreciated that the embodiments described above are only examples, and that variations are possible within the scope of the invention.