LOCATION DETERMINATION SYSTEM, METHOD FOR DETERMINING A LOCATION AND DEVICE FOR DETERMINING ITS LOCATION

20230047992 · 2023-02-16

Abstract

A system for determining a location of a device within a predetermined space includes at least two speakers and the device, where each speaker is configured to produce a unique sound. The device includes microphones for receiving sound and for providing signals corresponding thereto, a memory configured to store, for each speaker, a fingerprint of its unique sound and speaker location information, and a processor connected to the microphone outputs. The processor is configured to determine, by comparing the microphone signals to the fingerprints, a difference in arrival time of a sound at two microphones, to determine, based on the differences in arrival time, an orientation of the device with respect to the speakers, and to determine the location of the device in the space.

Claims

1. A location determination system for determining a location of a device within a predetermined space, wherein the location determination system comprises at least two speakers and the device, wherein each speaker is configured to produce a unique predetermined sound in said space, wherein the device comprises: at least two microphones with a mutual interspacing, each configured to receive sound and each comprising an output for providing a signal corresponding to the sound received; a memory configured to store for each speaker a fingerprint corresponding to the unique predetermined sound and information relating to a location of that speaker; and a processor connected to the outputs, wherein the processor is configured to: determine, for each sound, based on a comparison of signals received from the outputs with fingerprints retrieved from the memory, a difference between arrival times of the sound at each microphone for each pair of said at least two microphones, and determine, based on the differences in arrival time, an orientation of the device with respect to each one of the speakers, and to perform multiangulation based on the information relating to the speaker locations to determine the location of the device in the space.

2. The location determination system according to claim 1, wherein the comparison comprises convolution and/or cross-correlation.

3. The location determination system according to claim 2, wherein determining the difference in arrival time of a sound comprises: determining a first arrival time of said sound at a first microphone of said pair based on a first maximal value of the convolution and/or cross-correlation; and determining a second arrival time of said sound at a second microphone of said pair based on a second maximal value of the convolution and/or cross-correlation.

4. The location determination system according to claim 1, wherein the processor is further configured to continuously compare the received signals with the retrieved fingerprints.

5. The location determination system according to claim 1, wherein the at least two speakers are configured to produce the sounds at regular intervals.

6. The location determination system according to claim 1, wherein the processor is connected to each output via a single analog-to-digital converter or via an analog-to-digital converter for each output.

7. The location determination system according to claim 1, wherein the at least two microphones comprise three or four microphones.

8. The location determination system according to claim 1, wherein the memory is further configured to store information relating to the mutual position of the at least two microphones.

9. The location determination system according to claim 1, further including a temperature sensor configured to output a temperature signal corresponding to a measured temperature, wherein the processor is further configured to determine, based on the temperature signal, a parameter corresponding to the propagation speed of sound at the measured temperature, and perform the multiangulation using said parameter.

10. A method for determining a location of a device within a predetermined space using a location determination system, the method comprising the steps, to be performed in any suitable order, of: a) producing at least two unique predetermined sounds in said space using at least two respective speakers; b) for each speaker, storing a fingerprint corresponding to the unique predetermined sound and storing information relating to a location of that speaker in a memory; c) receiving sound, at at least two microphones with a mutual interspacing, and providing a signal corresponding to the sound received at an output of each microphone; d) comparing, by a processor, signals received from the outputs with fingerprints retrieved from the memory, and determining for each sound, based on the comparison, a difference between arrival times of the sound at each microphone for each pair of said at least two microphones, and e) determining, by the processor, based on the differences in arrival time, an orientation of the device with respect to each one of the speakers, and performing multiangulation based on the information relating to the speaker locations to determine the location of the device in the space.

11. A device for determining its location within a predetermined space using a location determination system, the location determination system comprising at least two speakers and the device, wherein each speaker is configured to produce a unique predetermined sound in said space, wherein the device comprises: at least two microphones with a mutual interspacing, each configured to receive sound and each comprising an output for providing a signal corresponding to the sound received; a memory configured to store for each speaker a fingerprint corresponding to the unique predetermined sound and information relating to a location of that speaker; and a processor connected to the outputs, wherein the processor is configured to: determine, for each sound, based on a comparison of signals received from the outputs with fingerprints retrieved from the memory, a difference between arrival times of the sound at each microphone for each pair of said at least two microphones, and determine, based on the differences in arrival time, an orientation of the device with respect to each one of the speakers, and to perform multiangulation based on the information relating to the speaker locations to determine the location of the device in the space.

12. The location determination system according to claim 1, wherein said unique predetermined sound in said space is ultrasonic sound.

13. The method according to claim 10, wherein said at least two unique predetermined sounds in said space are ultrasonic sounds.

14. The device according to claim 11, wherein said predetermined sound in said space is ultrasonic sound.

Description

[0066] The invention will be further elucidated with reference to the appended drawings, wherein:

[0067] FIG. 1 is a schematic representation of the location determination system according to the invention;

[0068] FIG. 2 is a schematic representation of the device of the location determination system of FIG. 1;

[0069] FIGS. 3A and 3B are representations of one fingerprint of a unique predetermined sound to be produced by a speaker of the location determination system of FIG. 1;

[0070] FIG. 4 is a representation of a signal received at an output of a microphone in the location determination system of FIG. 1;

[0071] FIGS. 5A and 5B are representations of the result of a cross-correlation of the signal of FIG. 4 with the sound of FIG. 3B; and

[0072] FIG. 6 schematically represents the system of FIG. 1 in a specific situation.

[0073] Throughout the figures, like elements are referred to using like reference numerals.

[0074] FIG. 1 shows a location determination system 1 for determining a location of a device 2 within a space 3. The location determination system 1 comprises six speakers 4. Another number of speakers 4, such as two, three or more, could also have been used. Each speaker 4 is configured to produce a unique predetermined sound in said space 3.

[0075] FIGS. 3A and 3B show a fingerprint of such a predetermined sound. A fingerprint may herein be understood as a digital representation of a sound. In particular, the fingerprint is a list of numeric values representing an amplitude of the sound at predetermined intervals. The fingerprint can thus be regarded as a discrete sample of the sound. Using the fingerprint, the sound can be reproduced at least to a certain degree of accuracy. In FIGS. 3A and 3B, the intervals of the fingerprint have been numbered and are shown on the horizontal axis. The vertical axis of FIGS. 3A and 3B represents the numeric value corresponding to the amplitude of the sound at that interval. FIG. 3B only shows a part of the data of FIG. 3A. One speaker 4 of the location determination system 1 of FIG. 1 is configured to produce a sound with the fingerprint shown in FIGS. 3A and 3B. The other speakers 4 are configured to produce other predetermined sounds, which are mutually different. Accordingly, each speaker 4 is configured to produce a sound unique to that speaker 4. In this case, each speaker 4 regularly produces its unique sound ten times every second. In order to produce its unique predetermined sound, each speaker 4 may have a memory into which a fingerprint of its unique predetermined sound is stored.
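A fingerprint of this kind can be sketched in a few lines of Python. This is a minimal illustration only: the linear chirp, the function name and the speaker ids are hypothetical, since the patent merely requires that each speaker's sound be unique and be stored as a list of numeric amplitude values.

```python
import math

def make_fingerprint(f_start_hz, f_end_hz, duration_s, sample_rate_hz):
    """Return a fingerprint: a list of numeric amplitude values sampled
    at regular intervals, i.e. a discrete sample of the sound.

    A linear chirp is used here purely for illustration.
    """
    n = int(duration_s * sample_rate_hz)
    k = (f_end_hz - f_start_hz) / duration_s  # frequency sweep rate, Hz/s
    fingerprint = []
    for i in range(n):
        t = i / sample_rate_hz
        # Phase of a linear chirp: 2*pi*(f0*t + k*t^2/2).
        phase = 2.0 * math.pi * (f_start_hz * t + 0.5 * k * t * t)
        fingerprint.append(math.sin(phase))
    return fingerprint

# One unique fingerprint per speaker, keyed by a hypothetical speaker id.
fingerprints = {
    "speaker-1": make_fingerprint(20_000.0, 24_000.0, 0.005, 500_000),
    "speaker-2": make_fingerprint(24_000.0, 28_000.0, 0.005, 500_000),
}
```

Each speaker's memory would hold one such list, and the device's memory 7 would hold all of them.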

[0076] As shown in FIG. 2, the device 2 comprises four microphones 5-1 to 5-4. Another number of microphones 5 could have been used, such as two, three or more. The microphones 5-1-5-4 have a mutual interspacing, i.e. they are at a distance from each other. The device 2 also comprises a processor 6 and a memory 7. The microphones 5-1-5-4 are configured for providing, at their respective outputs 8, a signal corresponding to received sound. When a speaker 4 produces a sound, the sound can be received by each of the microphones 5-1-5-4, which are configured to provide a signal at their outputs 8 corresponding to the sound received.

[0077] FIG. 4 shows a representation of such a signal. The representation is obtained by sampling, using an analog-to-digital converter (not shown) with a sampling frequency of 500 kHz, the analog signal that a microphone 5-1-5-4 provides at its output 8 in response to the sound.

[0078] The sampled signal is plotted in FIG. 4 with sample numbers on the horizontal axis and a value representing the amplitude of the signal on the vertical axis. A similarity can be seen between FIG. 4 and FIG. 3B, as they are both representations of the same sound, before producing the sound by the speaker 4 (FIG. 3B) and after being received by a microphone 5-1-5-4 and sampled (FIG. 4). Because the signal of FIG. 4 relates to sound received by a microphone 5-1-5-4, it may include noise present in the environment of the microphone 5-1-5-4, and picked up by it. Multiple sounds may be received by a single microphone 5-1-5-4 at the same time, or may partially overlap each other.

[0079] The memory 7 of the device 2 stores a fingerprint of a sound produced by, and information relating to a location of, each speaker. The memory 7 further stores information relating to the mutual position of the microphones 5-1-5-4. The processor 6 is connected to the outputs 8 of the microphones 5-1-5-4 and to the memory 7 via data carrying connections 9. The data carrying connections 9 are preferably wired, but may also be wireless.

[0080] The processor 6 is configured to determine the location of the device 2 in the space 3. For this purpose, the processor 6 is configured to determine, for each sound, based on a comparison of signals received from the outputs 8 with fingerprints retrieved from the memory 7, a difference in arrival time for each pair of said at least two microphones 5-1-5-4, to determine, based on the differences in arrival time, an orientation of the device 2 with respect to each one of the speakers 4, and to perform multiangulation based on the information relating to the speaker 4 locations. In this example, the processor 6 is configured to continuously compare the received signals with the retrieved fingerprints.

[0081] As an example, a sound may be produced by a speaker 4 positioned to the top-left of the device 2. This situation is depicted in FIG. 2. The sound will then travel towards the device 2 in the direction of the arrow 10. Due to the direction 10 at which the sound approaches the device 2, and the mutual interspacing of the microphones 5-1-5-4, the sound will reach each microphone 5-1-5-4 at a different arrival time. Accordingly, a difference in arrival time of the sound at a pair of microphones 5-1-5-4 can be determined. Said difference in arrival time gives information about the orientation of the device 2 with respect to the speaker 4 that produced the sound. For instance, the sound approaching the device 2 in the direction 10 shown in FIG. 2 will first reach microphone 5-1 and shortly thereafter microphone 5-4. Somewhat later the sound will reach microphone 5-2 and shortly thereafter microphone 5-3. Next, the processor 6 compares the signal provided by each microphone 5-1-5-4, which corresponds to the sound received by that microphone 5-1-5-4, to a fingerprint of a sound belonging to a speaker 4. In this example, the comparison is a cross-correlation of the signal with the fingerprint. Cross-correlation involves shifting one input with respect to another by a variable x, and gives as output a scalar value y representing the similarity of the signals given the shift x. When the signal and the fingerprint line up, they are relatively similar, since they include information on the same sound. Thus, at a certain shift x.sub.max, the cross-correlation gives a maximal value y.sub.max. The shift x.sub.max belonging to the maximal value y.sub.max is a measure of the arrival time of the sound at the microphone 5-1-5-4 whose signal was compared to the fingerprint.
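This comparison can be sketched as a brute-force cross-correlation over plain Python lists of samples. The function names are illustrative and a practical implementation would likely use FFT-based correlation, but the principle of finding the shift x_max at the maximal value y_max is the same:

```python
def cross_correlation_peak(signal, fingerprint):
    """Return (x_max, y_max): the shift at which the cross-correlation of
    the microphone signal with the fingerprint is maximal, and that
    maximal value. The shift x_max is a measure of the arrival time."""
    n, m = len(signal), len(fingerprint)
    x_max, y_max = 0, float("-inf")
    for shift in range(n - m + 1):
        # Similarity of the fingerprint with the signal at this shift.
        y = sum(signal[shift + j] * fingerprint[j] for j in range(m))
        if y > y_max:
            x_max, y_max = shift, y
    return x_max, y_max

def arrival_time_difference(signal_a, signal_b, fingerprint):
    """Difference in arrival time, in samples, of one sound at a pair of
    microphones."""
    return (cross_correlation_peak(signal_a, fingerprint)[0]
            - cross_correlation_peak(signal_b, fingerprint)[0])
```

Repeating this for each pair of microphones and each stored fingerprint yields the set of arrival-time differences used in the following paragraphs.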

[0082] The output y of cross-correlations of a fingerprint with the signals of the four microphones 5-1-5-4 has been represented in FIGS. 5A and 5B, wherein FIG. 5B shows only a part of the data of FIG. 5A. The horizontal axes of FIGS. 5A and 5B represent the shift x, whereas the vertical axes represent the output y of the cross-correlation for each microphone 5-1-5-4. As can be seen in FIGS. 5A and 5B, the cross-correlation is relatively small in magnitude, except for clearly distinguishable peaks p-1-p-4 (see FIG. 5B), where its magnitude is significantly higher. At these peaks p-1-p-4, where the cross-correlation signal is maximal, the signal lines up with the fingerprint. The value of the shift x at each peak p-1-p-4 represents the arrival time of the sound at the corresponding microphone 5-1-5-4. FIG. 5B more clearly shows that the peaks for the signal belonging to each microphone 5-1-5-4 occur at a different horizontal position, i.e. at a different shift x, representing different arrival times. The peaks p-1-p-4 match the above-described sequence of arrival of the sound at the microphones 5-1-5-4: first at microphones 5-1 and 5-4 shortly after each other, and then at microphones 5-2 and 5-3 shortly after each other.

[0083] Based on the difference in arrival time for a pair of microphones 5-1-5-4, and on the information relating to the mutual position of the microphones 5-1-5-4, the orientation of the device 2 with respect to the speaker 4 which produced the sound can be determined. The propagation speed of sound is a relevant parameter when determining the orientation of the device 2. Accordingly, the device 2 is equipped with a temperature sensor 11, which is connected to the processor 6. The temperature sensor 11 is configured to output a temperature signal corresponding to a measured temperature. The processor 6 determines, based on the temperature signal, a parameter corresponding to the propagation speed of sound at the measured temperature. When the orientation of the device 2 with respect to at least two speakers 4 has been determined, multiangulation is employed to determine the location of the device 2 with respect to the speakers 4 based on the information relating to the speaker locations.
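These two steps can be sketched as follows. The sketch assumes the common dry-air approximation v ≈ 331.3 + 0.606·T m/s and the far-field relation sin θ = v·Δt/d for a microphone pair with spacing d; the patent prescribes neither formula, and all names are illustrative:

```python
import math

SAMPLE_RATE_HZ = 500_000  # matches the ADC sampling frequency used above

def speed_of_sound_m_per_s(temp_celsius):
    """Parameter corresponding to the propagation speed of sound at the
    measured temperature (common linear approximation for dry air)."""
    return 331.3 + 0.606 * temp_celsius

def bearing_from_tdoa(shift_samples, mic_spacing_m, temp_celsius):
    """Angle (radians) of the incoming sound relative to the broadside of
    a microphone pair, from the far-field relation sin(theta) = v*dt/d."""
    v = speed_of_sound_m_per_s(temp_celsius)
    dt = shift_samples / SAMPLE_RATE_HZ
    # Clamp against measurement noise pushing the ratio outside [-1, 1].
    ratio = max(-1.0, min(1.0, v * dt / mic_spacing_m))
    return math.asin(ratio)
```

A one-sample shift at 500 kHz corresponds to 2 µs, i.e. under a millimetre of path difference at room temperature, which is why a high sampling frequency helps the angular resolution.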

[0084] FIG. 6 shows the system of FIG. 1 in a specific situation. In FIG. 6, the same reference numerals are used as in FIG. 1 for the same elements. Above it has been explained how differences in arrival times can be used to determine an orientation of the device 2 with respect to a speaker. With reference to FIG. 6 it will be explained how the orientation of the device 2 with respect to two or more speakers 4 can be used to determine the location of the device 2. Following the procedure above, the orientation of the device 2 is determined with respect to two different speakers, which have been numbered 4-1 and 4-2 in FIG. 6. Said determined orientation is represented in FIG. 6 via an angle α for the orientation of the device 2 with respect to the first speaker 4-1, and via an angle β for the orientation of the device 2 with respect to the second speaker 4-2. The angles α and β are determined with respect to an arrow R indicating the orientation of the device 2 with respect to the space 3. The arrow R can be understood as indicating a front-facing direction of the device 2. When the orientation of the device 2 is determined with respect to the two speakers 4-1 and 4-2, a series of possible locations for the device 2 can be determined. Two possible locations have been shown by drawing the same device 2 in the first possible location 2-1 and in the second possible location 2-2. In reality, more possible locations exist. The possible locations lie on a curve c12. The device 2 is however only shown in two locations 2-1 and 2-2 for the sake of clarity. In the possible locations, and only in the possible locations, the location and orientation of the device 2-1, 2-2 with respect to the speakers 4-1, 4-2 satisfies the determined orientation of the device with respect to the speakers 4-1, 4-2, i.e. the determined angles α and β. 
This can be seen from the fact that the angles α-1 and β-1 in the first position of the device 2-1 are equal in magnitude to the angles α-2 and β-2 in the second position of the device 2-2. Since the location of the device 2 is now ambiguously defined, because there are multiple possible locations on the curve c12, a further processing step may be necessary. The number of possible locations of the device 2 may for instance be reduced by discarding locations outside the space 3. Additionally or alternatively, the orientation of the device 2 with respect to a further speaker 4-3 can be determined, in the same way as the orientation of the device with respect to the first and second speakers 4-1, 4-2 was determined. Using the orientation of the device 2 with respect to the third speaker 4-3 and that with respect to the second speaker 4-2, a second curve c23 can be drawn, which, similar to curve c12, consists of possible locations for the device. The point where the curves c12 and c23 overlap is the location of the device. This thus allows all but one possible location to be eliminated. Further, a possible location may be chosen as the location of the device 2, or eliminated, by comparing the possible locations with historic and/or recent locations of the device 2.
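The intersection of the two curves can be illustrated numerically. In this sketch, each curve is the locus of points at which a pair of speakers subtends a fixed signed angle (the difference between the two measured orientations, which is independent of the unknown device heading R), and a simple grid search finds the point lying on both curves. The patent does not prescribe any particular algorithm for this step; the grid search is a deliberately simple stand-in:

```python
import math

def subtended_angle(p, s1, s2):
    """Signed angle at point p between the directions towards s1 and s2."""
    a1 = math.atan2(s1[1] - p[1], s1[0] - p[0])
    a2 = math.atan2(s2[1] - p[1], s2[0] - p[0])
    d = a1 - a2
    return math.atan2(math.sin(d), math.cos(d))  # wrap into (-pi, pi]

def locate(s1, s2, s3, ang12, ang23, grid=200, size=10.0):
    """Grid search for the point where curve c12 (constant subtended angle
    between speakers s1 and s2) and curve c23 (between s2 and s3)
    intersect, within a size-by-size space."""
    best, best_err = None, float("inf")
    for i in range(grid):
        for j in range(grid):
            p = (i * size / grid, j * size / grid)
            err = (abs(subtended_angle(p, s1, s2) - ang12)
                   + abs(subtended_angle(p, s2, s3) - ang23))
            if err < best_err:
                best, best_err = p, err
    return best
```

For a device at, say, (4, 3) with speakers at (0, 0), (10, 0) and (10, 10), the two subtended angles measured at the device pin the search down to that point.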

[0085] From this, the skilled person understands that for different situations, a different number of curves, and thus a different number of speakers 4, can be used for determining the location of the device. In certain situations, e.g. in a particular location of the device 2 with respect to the speakers 4, more than two speakers 4, such as three or four or even more, may be required. In another particular location, two speakers 4 may be sufficient. The skilled person is able to extend these principles, i.e. those of determining an orientation with respect to a speaker 4 and using the known location of the speaker 4 to determine the location of the device 2, to three-dimensional spaces. An example of location determination in a three-dimensional space is therefore omitted here for the sake of brevity.

[0086] Although the invention has been described hereabove with reference to a number of specific examples and embodiments, the invention is not limited thereto. Instead, the invention also covers the subject matter defined by the claims, which now follow.