Multifunctional earphone system for sports activities

11523218 · 2022-12-06

Abstract

A multifunctional earphone system for sports activities is described which comprises the following: a first apparatus configured to be carried in one of a user's ears, the first apparatus comprising a first data communication unit and a first loudspeaker, and a second apparatus configured to be carried in the user's other ear, the second apparatus comprising a second data communication unit and a second loudspeaker, wherein at least one of the first apparatus and the second apparatus comprises a sensor unit and a data processing unit, wherein the data processing unit is configured to generate performance data based on measurement data acquired by the sensor unit, wherein the first apparatus further comprises a signal processing unit configured to generate a binaural audio signal based on the performance data, the binaural audio signal comprising a first signal part to be output by the first loudspeaker and a second signal part to be output by the second loudspeaker, and wherein the first data communication unit is configured to communicate the second signal part of the binaural audio signal to the second data communication unit. Furthermore, a method is described.

Claims

1. A multifunctional earphone system for sports activities, the system comprising: a first apparatus configured to be carried in one of a user's ears, the first apparatus comprising a first data communication unit and a first loudspeaker, and a second apparatus configured to be carried in the user's other ear, the second apparatus comprising a second data communication unit and a second loudspeaker, wherein at least one of the first apparatus and the second apparatus comprises a sensor unit and a data processing unit, wherein the data processing unit is configured to generate performance data based on measurement data acquired by the sensor unit, wherein the first apparatus further comprises a signal processing unit configured to generate a binaural audio signal based on the performance data, the binaural audio signal comprising a first signal part to be output by the first loudspeaker and a second signal part to be output by the second loudspeaker, wherein the binaural audio signal is constructed to evoke a spatial hearing impression with precise directional localization within a three-dimensional space around a head of the user dependent on at least a predetermined reference value, and wherein the first data communication unit is configured to wirelessly communicate the second signal part of the binaural audio signal to the second data communication unit.

2. The system according to claim 1, wherein the binaural audio signal generated by the signal processing unit comprises a signal component that is indicative of a value of the performance data.

3. The system according to claim 2, wherein the signal processing unit is further configured to generate the binaural audio signal such that a spatial position of the signal component is dependent on the value of the performance data.

4. The system according to claim 3, wherein the spatial position of the signal component relative to a plane is dependent on a difference between the value of the performance data and the predetermined reference value.

5. The system according to claim 4, wherein the binaural audio signal generated by the signal processing unit comprises a further signal component that is indicative of a further value of the performance data, and wherein the signal processing unit is further configured to generate the binaural audio signal in such a way that a spatial position of the further signal component is different from the spatial position of the signal component.

6. The system according to claim 5, wherein the signal processing unit is configured to modify pre-stored audio data in dependency on at least one value of the performance data.

7. The system according to claim 1, wherein the sensor unit comprises a physiological sensor unit.

8. The system according to claim 7, wherein the sensor unit comprises a motion sensor unit.

9. The system according to claim 8, wherein both the first apparatus and the second apparatus comprise a motion sensor unit.

10. A multifunctional earphone system for sports activities, the system comprising: a first apparatus having a first housing configured to be carried in one of a user's ears, the first apparatus comprising a first data communication unit and a first loudspeaker within the first housing, and a second apparatus having a second housing configured to be carried in the user's other ear, the second apparatus comprising a second data communication unit and a second loudspeaker within the second housing, wherein at least one of the first apparatus and the second apparatus comprises a sensor unit and a data processing unit, wherein the data processing unit is configured to generate performance data based on measurement data acquired by the sensor unit, wherein the first apparatus further comprises a signal processing unit configured to generate a binaural audio signal based on the performance data, the binaural audio signal comprising a first signal part to be output by the first loudspeaker and a second signal part to be output by the second loudspeaker, wherein the binaural audio signal is constructed to evoke a spatial hearing impression with precise directional localization within a three-dimensional space around a head of the user dependent on at least a predetermined reference value, and wherein the first data communication unit is configured to wirelessly communicate the second signal part of the binaural audio signal to the second data communication unit, wherein the performance data comprises values descriptive of a sports activity and values descriptive of the user.

11. The system according to claim 10, wherein the binaural audio signal generated by the signal processing unit comprises a signal component that is indicative of a value of the performance data.

12. The system according to claim 11, wherein the signal processing unit is further configured to generate the binaural audio signal such that a spatial position of the signal component is dependent on the value of the performance data.

13. The system according to claim 12, wherein the spatial position of the signal component relative to a plane is dependent on a difference between the value of the performance data and the predetermined reference value.

14. The system according to claim 13, wherein the binaural audio signal generated by the signal processing unit comprises a further signal component that is indicative of a further value of the performance data, and wherein the signal processing unit is further configured to generate the binaural audio signal in such a way that a spatial position of the further signal component is different from the spatial position of the signal component.

15. The system according to claim 14, wherein the signal processing unit is configured to modify pre-stored audio data in dependency on at least one value of the performance data.

16. The system according to claim 10, wherein the sensor unit comprises a physiological sensor unit.

17. The system according to claim 16, wherein the sensor unit comprises a motion sensor unit.

18. The system according to claim 17, wherein both the first apparatus and the second apparatus comprise a motion sensor unit.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows a block diagram of one of the two apparatuses of a system according to an exemplary embodiment.

(2) FIG. 2A shows a first view of an apparatus according to an exemplary embodiment.

(3) FIG. 2B shows a second view of an apparatus according to an exemplary embodiment.

(4) FIG. 2C shows a third view of an apparatus according to an exemplary embodiment.

(5) FIG. 3 shows a system according to an embodiment.

DETAILED DESCRIPTION

(6) FIG. 1 shows a block diagram of one of the two apparatuses of a system according to an exemplary embodiment. The apparatus is incorporated in a housing, which is configured to be carried in the ear and will be described in more detail further below in conjunction with the FIGS. 2A, 2B and 2C. The apparatus comprises a data processing unit 1, a signal processing unit 2, a loudspeaker or receiver 3, an accelerometer 4 and a pulse oximeter or a pulse oximetry sensor 5.

(7) The data processing unit 1 receives data from the accelerometer 4 and the pulse oximeter 5 and processes them in order to generate or calculate performance data, such as for example a number of steps, a distance, a speed, an arterial oxygen saturation, a respiratory frequency, a cardiovascular flow, a cardiac output, a blood pressure, a blood glucose value, etc. The performance data are communicated to the signal processing unit 2 and used by the signal processing unit 2 to generate an audio signal, which is output into the ear of the user by means of the loudspeaker 3. The audio signal is generated in such a way that the user, when hearing the corresponding sound, can learn information about at least one value of the performance data. This may take place by outputting speech elements (for example pre-stored numbers and words) or pulsed tone signals, by manipulating music or in any other suitable way.
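
The patent does not specify the algorithms by which the data processing unit 1 derives performance data from the raw sensor streams. As one hedged illustration only, step counting from the accelerometer 4 could be sketched as upward threshold crossings of the acceleration magnitude; the function name, threshold and refractory gap below are hypothetical, not values from the patent:

```python
import math

def count_steps(samples, threshold=1.3, min_gap=5):
    """Count steps in a stream of accelerometer samples (ax, ay, az), in g.

    A step is registered on each upward crossing of `threshold` by the
    acceleration magnitude, with at least `min_gap` samples between
    successive steps to suppress double counting. All parameter values
    are illustrative, not taken from the patent.
    """
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    steps = 0
    last = -min_gap
    for i in range(1, len(mags)):
        # chained comparison: magnitude crosses the threshold from below
        if mags[i] >= threshold > mags[i - 1] and i - last >= min_gap:
            steps += 1
            last = i
    return steps
```

From a step count sampled over time, quantities such as distance and speed could then be estimated with an assumed stride length, again as a simplification of whatever the data processing unit actually implements.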

(8) The apparatus further comprises a central control unit or controller 9 and a memory 11. The controller 9 is connected with the data processing unit 1, with the signal processing unit 2 and with the memory 11, and is configured to control these units and to receive or read out information from these units. For example, the controller 9 controls which of the performance data generated by the data processing unit 1 the signal processing unit 2 shall take into consideration when generating the audio signal, for example whether the audio signal is currently to provide information on heart rate, speed or something else. The controller 9 can also control the signal processing unit 2 in such a way that music or the like (for example an audio book) is played back independently of the performance data.

(9) The controller 9 is furthermore connected with a first microphone 7, with a second microphone 14, with a touch sensor unit 6, with an EEG unit (electroencephalography unit) 8, with a contact sensor unit 10, with a Bluetooth unit 12, and with an NFC unit (Near Field Communication unit) 13. These units are also incorporated or integrated in the housing and generally enable the user to influence and control the functionalities of the apparatus and also allow the apparatus to communicate with an external device, such as for example a similar apparatus, a smart phone or a computing device.

(10) The first microphone 7 is a bone conduction microphone which is arranged in the housing in such a way that it can detect sound being conducted through the cranial bone, for example while speaking. One function of the first microphone 7 is to detect user speech, for example speech commands for controlling the apparatus, or when the apparatus is used as a headset in conjunction with an external device.

(11) The second microphone 14 is arranged in the housing in such a way that it may in particular detect ambient sound. By processing the signals from both the first microphone 7 and the second microphone 14, disturbing ambient noises can be filtered out of the user speech, which improves the use as a headset as well as the quality of recognition of speech commands. A further use of the two microphone signals is the recognition of acoustic gestures, i.e. the recognition of certain moving or sweeping touches of the body surface. More specifically, an acoustic gesture may for example consist of the user making a rapid sweeping movement with a finger across his or her skin or clothes in a particular direction (vertically, horizontally, etc.), wherein the finger touches the skin or clothes during the entire movement. Such sweeping movements produce sound, and by analyzing the signals recorded by the microphones 7 and 14, the direction, speed and further characteristics of the gesture can be recognized and converted into control signals.
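
The patent does not detail the signal analysis behind these acoustic gestures. Purely as a toy sketch of one plausible heuristic, sweep direction could be guessed from which channel's short-time energy envelope peaks first; both the heuristic and all names here are assumptions, not the patent's method:

```python
def envelope(signal, win=4):
    """Short-time energy envelope: mean of squared samples over a sliding window."""
    return [sum(x * x for x in signal[i:i + win]) / win
            for i in range(len(signal) - win + 1)]

def gesture_direction(mic_a, mic_b, win=4):
    """Guess sweep direction from which channel's energy peaks first.

    Returns 'toward_b' if mic_a's envelope peaks before mic_b's (the
    sweep ends nearer mic_b), else 'toward_a'. A toy heuristic only;
    a real classifier would likely also use spectral and timing cues.
    """
    ea, eb = envelope(mic_a, win), envelope(mic_b, win)
    peak_a = max(range(len(ea)), key=ea.__getitem__)
    peak_b = max(range(len(eb)), key=eb.__getitem__)
    return "toward_b" if peak_a < peak_b else "toward_a"
```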

(12) The touch sensor unit 6 comprises a plate with a plurality of capacitive sensors and is arranged on a part of the surface of the housing in such a way that the user can touch it with the finger when the apparatus is carried in the ear. In other words, the touch sensor unit 6 is located on a surface of the housing pointing away from the auditory canal. The user can control the apparatus by sweeping and/or tapping with the finger on the touch sensor unit 6. For example, the control unit 9 may link a sweeping upward movement with a sound level increase and a sweeping downward movement with a sound level decrease and control the signal processing unit 2 accordingly. In a similar manner, the control unit 9 may for example link a single tap on the touch sensor unit 6 with a change of function and it may link two recurring taps with a selection of a function.
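
The gesture-to-action linking performed by the control unit 9 can be sketched as a simple dispatch table; the mapping, class and identifier names below are illustrative stand-ins for whatever the controller actually implements:

```python
# Hypothetical mapping from recognized touch gestures to control actions.
GESTURE_ACTIONS = {
    "swipe_up": "volume_up",
    "swipe_down": "volume_down",
    "tap": "next_function",
    "double_tap": "select_function",
}

class Controller:
    """Toy stand-in for the control unit 9's gesture handling."""

    def __init__(self, volume=5):
        self.volume = volume          # 0..10, illustrative scale
        self.function_index = 0       # currently highlighted function

    def handle(self, gesture):
        action = GESTURE_ACTIONS.get(gesture)
        if action == "volume_up":
            self.volume = min(10, self.volume + 1)
        elif action == "volume_down":
            self.volume = max(0, self.volume - 1)
        elif action == "next_function":
            self.function_index += 1
        elif action == "select_function":
            return f"selected:{self.function_index}"
        return action
```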

(13) The EEG unit 8 comprises a plurality of electrodes arranged on the surface of the housing in such a way that measurements of electric potentials can be carried out on the skin surface in the ear. These measurements are analyzed by the control unit and compared with pre-stored measurements in order to recognize particular thoughts of the user and to use these as control commands. Thinking intensively of one's own favorite dish may for example trigger an announcement of the present calorie consumption.

(14) The contact sensor unit 10 comprises a capacitive sensor arranged in the surface region of the housing in such a way that it contacts the skin surface of the user when carried in the ear. Thus, the controller 9 can detect whether the apparatus is in use or not and in accordance therewith generate different control signals. For example, functionalities with intensive current consumption may be shut off a couple of minutes after the apparatus has been taken out of the ear.
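
The described shut-off behavior amounts to a small state machine driven by the contact sensor. A minimal sketch follows; the 120-second timeout is an assumption, since the patent only says "a couple of minutes":

```python
class PowerManager:
    """Shut down power-hungry units a fixed time after out-of-ear detection.

    `timeout_s` is illustrative; the patent does not give a value.
    """

    def __init__(self, timeout_s=120):
        self.timeout_s = timeout_s
        self.removed_at = None        # time the apparatus left the ear
        self.high_power_on = True

    def update(self, in_ear, now_s):
        """Feed one contact-sensor reading; return whether high-power units stay on."""
        if in_ear:
            self.removed_at = None
            self.high_power_on = True
        else:
            if self.removed_at is None:
                self.removed_at = now_s
            elif now_s - self.removed_at >= self.timeout_s:
                self.high_power_on = False
        return self.high_power_on
```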

(15) The Bluetooth unit 12 serves to provide wireless communication with other devices (for example an apparatus carried in the other ear) or with external devices (mobile phone, PC, etc.). When communicating with an apparatus in the other ear of the user, sensor signals from both sides may be taken into consideration in order to obtain an improved precision in the performance data. Furthermore, the generated audio signal may be stereophonically or binaurally processed. In such systems, one apparatus functions as a primary apparatus or master in the sense that it receives and processes data from both apparatuses and defines the respective audio signals that are to be output.

(16) Communication with an external device may take place during use of the apparatus, i.e. during performance of a sports activity. In this case, data, such as music or GPS data, may be transmitted from the external device to the apparatus and used or stored therein. At the same time, data, such as acquired sensor data or calculated performance data, may be transmitted from the apparatus to the external device. This also enables a use of the apparatus as headset in conjunction with communication applications.

(17) Communication with the external device may also take place when the apparatus is not used in the ear, for example in order to configure the different control options described above, in order to set threshold values (for example for heart rate, respiratory rate, distance, time or speed, etc.) or in order to read out performance data for external processing. This is conveniently done by means of a special application or app.

(18) The NFC unit 13 makes it possible to communicate with an NFC enabled device, for example a smart phone, when this is brought into the vicinity of the apparatus. Thereby, configuration data can be transferred from the smart phone to the apparatus or data stored in the apparatus, such as for example the user's contact information, can be read out. This information can be of use when a lost apparatus is found or in case of an accident.

(19) FIGS. 2A, 2B and 2C show different views of an apparatus 20 according to an exemplary embodiment; in particular, they show the shape of the housing into which all units of the apparatus 20 are incorporated.

(20) FIG. 2A shows a view of an apparatus 20, which comprises a housing. The housing is made of a plastic or synthetic material, such as silicone, and essentially comprises a first portion 21 and a second portion 22. The first portion 21 is shaped to be inserted into the auditory canal of a user and the second portion 22 is shaped to be retained in the user's auricle or outer ear. In this regard, the first portion 21 is essentially cone-shaped in order to fit well into the outer section of the auditory canal. An elastic collar 24 is provided at an end section of the first portion 21. The collar 24 functions as a seal when the apparatus 20 is carried in the ear so that the apparatus 20 blocks the user's auditory canal. The second portion 22 is shaped in such a way that it can be inserted into the concha of the auricle of a typical ear and such that it can be retained there.

(21) The housing further comprises a surface 23 which points away from the auditory canal and thus can be reached by the user, for example with a finger. The surface 23 particularly comprises a capacitive sensor unit for acquiring control commands from the user, for example when the user taps with his or her finger on the surface 23 or when the user swipes a finger across the surface 23 in a predetermined direction. The housing comprises a closable opening 25 in the vicinity of the surface 23 through which a (not shown) plug can be coupled to a socket in order to charge the battery of the apparatus 20 or in order to exchange data with the apparatus 20.

(22) The housing further comprises an opening 26 located at a position of the surface of the housing which closely contacts the skin, in particular in the area behind the tragus, when the apparatus 20 is carried in the ear. The opening 26 may comprise a pulse oximetry sensor having two differently colored light sources, in particular light emitting diodes, and a photo sensor. In this case, the opening 26 is positioned in the housing in such a way that the light sources can illuminate a portion of the skin surface in the user's ear and such that the photo sensor can detect corresponding reflections from the skin surface. Alternatively, the opening 26 may contain a bone conduction microphone. In a system comprising two apparatuses, one apparatus may comprise the pulse oximetry sensor and the other apparatus may contain the bone conduction microphone.
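
Pulse oximetry with two differently colored light sources conventionally relies on the "ratio of ratios" of the pulsatile (AC) to steady (DC) absorption at the two wavelengths. The sketch below uses a common textbook linearization with generic calibration constants; neither the constants nor the function are taken from the patent:

```python
def estimate_spo2(red, infrared):
    """Estimate arterial oxygen saturation from red/IR photodiode traces.

    Uses the classic 'ratio of ratios' R = (AC_red/DC_red)/(AC_ir/DC_ir)
    with the common empirical linearization SpO2 ~ 110 - 25*R. Real
    devices use device-specific calibration curves instead.
    """
    def ac_dc(trace):
        dc = sum(trace) / len(trace)      # steady (mean) component
        ac = max(trace) - min(trace)      # pulsatile peak-to-peak component
        return ac, dc

    ac_r, dc_r = ac_dc(red)
    ac_ir, dc_ir = ac_dc(infrared)
    r = (ac_r / dc_r) / (ac_ir / dc_ir)
    return max(0.0, min(100.0, 110.0 - 25.0 * r))
```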

(23) FIG. 2B shows a further view of the apparatus 20, wherein the surface 23 can be seen in the foreground. The surface 23 comprises a slot- or slit-shaped opening 27 which lets sound originating from the surroundings through to a (not shown) microphone. The apparatus 20 further comprises a cuff or sleeve 28, which surrounds a part of the surface 23 and serves to adapt the size of the apparatus to the ear of a user. The cuff 28 is made from a soft plastic and is detachable from the housing. The user may thus try different sleeves 28 of different sizes and choose the one that provides the best fit.

(24) FIG. 2C shows a yet further view of the apparatus 20, wherein the apparatus 20 is turned 180° in comparison to the view of FIG. 2B. Both the collar 24 and also the end of the first portion 21, which extends deepest into the auditory canal, comprise openings 29 through which the sound that is generated by a loudspeaker which is incorporated in the apparatus can be output.

(25) The openings 25, 26, 27 and 29 are all sealed in a waterproof manner so that the apparatus 20 can also be used for swimming or in the rain.

(26) FIG. 3 shows a system according to an exemplary embodiment. The system comprises a first apparatus 20R and a second apparatus 20L. The first apparatus 20R is configured to be carried in the right ear and the second apparatus 20L is configured to be carried in the left ear. Each apparatus 20R and 20L corresponds essentially to the above described apparatus 20. However, the first apparatus 20R comprises a bone conduction microphone 7 in its opening 26, while the second apparatus 20L comprises a pulse oximetry sensor 5 in its opening 26.

(27) The first apparatus 20R functions as primary apparatus or master in the sense that it receives and processes data from both apparatuses 20R, 20L and defines the audio signals that are to be respectively output. The two apparatuses 20R and 20L communicate with each other through their respective Bluetooth units 12 (see FIG. 1) and can thus exchange sensor data, performance data, audio data, control data, etc. During operation, the secondary apparatus 20L in particular transmits pulse oximeter data and motion sensor data or performance data that is derived from pulse oximeter data and/or motion data to the primary apparatus 20R. The data processing unit 1 of the first apparatus 20R generates performance data based on the data received from the second apparatus 20L and the measurement data acquired by its own sensor unit. The performance data is used by the signal processing unit 2 of the first apparatus 20R to generate a binaural audio signal.

(28) The binaural audio signal consists of a first (right) signal part, which is output through the loudspeaker 3 of the first apparatus 20R, and a second (left) signal part, which is transmitted to the second apparatus 20L over the Bluetooth connection and output by the loudspeaker 3 of the second apparatus 20L. By synchronized output of the first signal part in the right ear and the second signal part in the left ear, the user can perceive the binaural audio signal and thereby gain information that is of relevance for the performance of a sports activity. The user, when hearing the binaural audio signal, can in particular realize or learn about one or more specific values of the performance data, changes in one or more specific values of the performance data, and/or a relation between one or more specific values of the performance data and corresponding reference values, in particular threshold values. The binaural audio signal in particular enables information about one or more values to be heard simultaneously (or nearly simultaneously) at different spatial positions.

(29) The generated binaural audio signal contains a signal component, such as for example a pulsed tone signal or a sequence of pre-stored speech elements, which is indicative of a value of the performance data. This signal component is audible to the user at a particular spatial position. This spatial position can be shifted or changed when the corresponding value of the performance data changes. This may for example take place such that the signal component is shifted or moved forwards or upwards when the value increases and such that it is shifted or moved rearwards or downwards when the value decreases. The displacement of the position can in particular take place relative to a vertical plane extending through the body of the user or relative to a horizontal plane extending through the user's head (at the height of the ears). As long as the signal component is played back in one of these planes, i.e. directly to the left or right of the user or at the height of the user's ears, the value is equal to or close to a predetermined reference value. When the value exceeds the predetermined reference value or threshold value, the position of the signal component is for example displaced forwards or upwards, where the amount of displacement depends on the difference between the performance parameter value and the reference value. In a similar manner, the position of the signal component can be moved downwards or rearwards when the value falls below the (or another) reference value or threshold value.
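
One hedged way to realize such a mapping is to convert the deviation from the reference value into a displacement angle and derive interaural time and level differences for that angle. Production binaural rendering would use measured head-related transfer functions (HRTFs); the models and every constant below are illustrative assumptions:

```python
import math

HEAD_RADIUS_M = 0.0875   # average adult head radius, assumed
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def value_to_angle(value, reference, scale=10.0, limit=80.0):
    """Map (value - reference) to a displacement angle in degrees.

    At the reference the source sits in the ear-height plane (0 deg);
    each unit of deviation shifts it by `scale` degrees up (positive)
    or down (negative), clamped to +/-`limit`. Constants are illustrative.
    """
    return max(-limit, min(limit, (value - reference) * scale))

def interaural_cues(azimuth_deg):
    """Woodworth ITD and a crude ILD for a source at `azimuth_deg`
    (0 = front, +90 = right). A real renderer would use HRTFs instead."""
    theta = math.radians(azimuth_deg)
    itd_s = (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))
    ild_db = 10.0 * math.sin(theta)   # toy level-difference model
    return itd_s, ild_db
```

For a source fully to one side (90 degrees), the Woodworth formula above yields an interaural delay of roughly 0.66 ms, which is in line with the commonly quoted maximum ITD for an adult head.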

(30) Thus, the user can easily and intuitively realize whether the value differs from the predetermined reference value and, when this is the case, act accordingly in order to again bring the value closer to the predetermined reference value.

(31) The generated binaural audio signal may contain further signal components which are indicative of further values of the performance data. In this case, the binaural audio signal is generated such that the spatial position of each signal component is different from the spatial positions of the other signal components. Thereby, values relating to different performance parameters can be played back at different predetermined positions in the three-dimensional space around the user's head. For example, a first pulsed tone signal or speech signal regarding the heart rate can be played back at an upper left location and a second pulsed tone signal or speech signal regarding a speed can be played back at a lower front location. The playback of the individual signal components may be simultaneous or slightly time-displaced in order to facilitate the user's perception.
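
Placing several components at distinct apparent positions can be sketched with constant-power panning into the left and right signal parts. This is only a stand-in for true binaural placement, which would additionally use interaural time differences and spectral (HRTF) cues; all names and parameters are hypothetical:

```python
import math

def render_components(components, duration_s=0.5, rate=8000):
    """Mix several (frequency_hz, pan) tone components into L/R buffers.

    `pan` runs from -1 (fully left) to +1 (fully right); constant-power
    panning gives each component a distinct apparent direction. A toy
    substitute for the patent's binaural rendering.
    """
    n = int(duration_s * rate)
    left = [0.0] * n
    right = [0.0] * n
    for freq, pan in components:
        angle = (pan + 1.0) * math.pi / 4.0          # map pan to 0..pi/2
        gain_l, gain_r = math.cos(angle), math.sin(angle)
        for i in range(n):
            s = math.sin(2.0 * math.pi * freq * i / rate)
            left[i] += gain_l * s
            right[i] += gain_r * s
    return left, right
```

In a two-apparatus system, the `left` buffer would correspond to the second signal part sent over Bluetooth to the apparatus 20L, and `right` to the part played locally by the apparatus 20R.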