Hearing aid system with internet protocol

11564047 · 2023-01-24

Abstract

A hearing aid system comprising at least a first hearing aid, wherein the first hearing aid is configured to establish a communication link over the internet with a remote entity based on a protocol stack, wherein the protocol stack includes an internet protocol, and the protocol stack is implemented in the first hearing aid.

Claims

1. A hearing aid system comprising at least a first hearing aid, wherein the first hearing aid is configured to establish a communication link over the internet with a remote entity based on a protocol stack, wherein the protocol stack includes a combination of an internet protocol and a Bluetooth based protocol, and the protocol stack is implemented in the first hearing aid.

2. The hearing aid system according to claim 1, wherein the internet protocol is the Internet Protocol Version 4, IPv4, or the Internet Protocol Version 6, IPv6.

3. The hearing aid system according to claim 2, wherein the protocol stack comprises one or more of the following: an application layer, in particular configured for dealing with user data and/or control data, for instance audio data, encoded audio data, audio control data and/or non-audio data; a transport layer, in particular configured for establishing a connection between the first hearing aid and the remote entity; a network layer comprising the internet protocol, in particular configured for connecting the first hearing aid and the remote entity; a data link layer, in particular configured for formatting data received or to be transmitted via a physical communication medium; a physical layer, in particular for transmitting and/or receiving data over a physical communication medium.

4. The hearing aid system according to claim 2, wherein a physical layer of the protocol stack includes the Bluetooth based protocol, and a data link layer of the protocol stack includes the combination of the Bluetooth based protocol and the internet protocol.

5. The hearing aid system according to claim 2, wherein the Bluetooth based protocol is a Bluetooth, a Bluetooth Low Energy (LE) protocol and/or a Bluetooth network encapsulation protocol; and the internet protocol is: a wireless local area network protocol, WLAN, according to the IEEE 802.11 standards; a wireless personal area network protocol, WPAN, according to the IEEE 802.15 standards; a low power wide area network, LPWAN, protocol; or an ultra-wide band protocol, UWB, according to the IEEE 802.15.4a standard and/or IEEE 802.11ah standard.

6. The hearing aid system according to claim 2, wherein the first hearing aid is configured for one or more of the following: receiving a stream of audio from the remote entity over the communication link; receiving fitting data from the remote entity over the communication link; receiving firmware updates from the remote entity over the communication link; transmitting sensor data recorded at the hearing aid system to the remote entity over the communication link; receiving and/or transmitting optimization data for a neural network over the communication link; receiving and/or transmitting of IFTTT data over the communication link.

7. The hearing aid system according to claim 1, wherein the protocol stack comprises one or more of the following: an application layer, in particular configured for dealing with user data and/or control data, for instance audio data, encoded audio data, audio control data and/or non-audio data; a transport layer, in particular configured for establishing a connection between the first hearing aid and the remote entity; a network layer comprising the internet protocol, in particular configured for connecting the first hearing aid and the remote entity; a data link layer, in particular configured for formatting data received or to be transmitted via a physical communication medium; a physical layer, in particular for transmitting and/or receiving data over a physical communication medium.

8. The hearing aid system according to claim 7, wherein a physical layer of the protocol stack includes the Bluetooth based protocol, and a data link layer of the protocol stack includes the combination of the Bluetooth based protocol and the internet protocol.

9. The hearing aid system according to claim 7, wherein the Bluetooth based protocol is a Bluetooth, a Bluetooth Low Energy (LE) protocol and/or a Bluetooth network encapsulation protocol; and the internet protocol is: a wireless local area network protocol, WLAN, according to the IEEE 802.11 standards; a wireless personal area network protocol, WPAN, according to the IEEE 802.15 standards; a low power wide area network, LPWAN, protocol; or an ultra-wide band protocol, UWB, according to the IEEE 802.15.4a standard and/or IEEE 802.11ah standard.

10. The hearing aid system according to claim 1, wherein a physical layer of the protocol stack includes the Bluetooth based protocol, and a data link layer of the protocol stack includes the combination of the Bluetooth based protocol and the internet protocol.

11. The hearing aid system according to claim 10, wherein the Bluetooth based protocol is a Bluetooth, a Bluetooth Low Energy (LE) protocol and/or a Bluetooth network encapsulation protocol; and the internet protocol is: a wireless local area network protocol, WLAN, according to the IEEE 802.11 standards; a wireless personal area network protocol, WPAN, according to the IEEE 802.15 standards; a low power wide area network, LPWAN, protocol; or an ultra-wide band protocol, UWB, according to the IEEE 802.15.4a standard and/or IEEE 802.11ah standard.

12. The hearing aid system according to claim 1, wherein the Bluetooth based protocol is a Bluetooth, a Bluetooth Low Energy (LE) protocol and/or a Bluetooth network encapsulation protocol; and the internet protocol is: a wireless local area network protocol, WLAN, according to the IEEE 802.11 standards; a wireless personal area network protocol, WPAN, according to the IEEE 802.15 standards; a low power wide area network, LPWAN, protocol; or an ultra-wide band protocol, UWB, according to the IEEE 802.15.4a standard and/or IEEE 802.11ah standard.

13. The hearing aid system according to claim 1, wherein the first hearing aid is configured for one or more of the following: receiving a stream of audio from the remote entity over the communication link; receiving fitting data from the remote entity over the communication link; receiving firmware updates from the remote entity over the communication link; transmitting sensor data recorded at the hearing aid system to the remote entity over the communication link; receiving and/or transmitting optimization data for a neural network over the communication link; receiving and/or transmitting of IFTTT data over the communication link.

14. The hearing aid system according to claim 1, further comprising at least a second hearing aid, wherein the first hearing aid and the second hearing aid are configured for communicating with one another.

15. The hearing aid system according to claim 14, wherein at least one hearing aid is configured for relaying data received over the communication link from the remote entity to the respective other hearing aid and/or for relaying data received from the respective other hearing aid over the communication link to the remote entity.

16. A system comprising: the hearing aid system according to claim 1; and the remote entity.

17. The system according to claim 16 further comprising: a portable or stationary auxiliary device local to the hearing aid system, providing routing functionality for the communication link between the first hearing aid and the remote entity.

18. Method, performed by at least a first hearing aid of a hearing aid system, in particular a hearing aid system according to claim 1, the method comprising: establishing a communication link over the internet with a remote entity based on a protocol stack, wherein the protocol stack includes an internet protocol, and the protocol stack is implemented in the first hearing aid.

19. A computer program code, the computer program code, when executed by a processor, causing an apparatus to perform and/or control the actions of the method according to claim 18.

20. A non-transitory computer readable storage medium in which computer program code is stored, the computer program code when executed by a processor causing at least one apparatus to perform the method according to claim 18.

Description

BRIEF DESCRIPTION OF DRAWINGS

(1) The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they just show details to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter in which:

(2) FIG. 1 illustrates an exemplary layered IP protocol stack;

(3) FIG. 2 illustrates a TCP header format;

(4) FIG. 3 illustrates a UDP header format;

(5) FIG. 4 illustrates an exemplary IP stack overview for Classic Bluetooth enabling an IP link;

(6) FIG. 5 illustrates an exemplary IP stack overview for Low Energy enabling an IP link;

(7) FIG. 6 illustrates an exemplary system allowing an end to end IP link between a remote entity and a hearing aid system;

(8) FIG. 7 illustrates an exemplary IP link to a hearing aid system and forwarding between the hearing aids;

(9) FIG. 8 illustrates an exemplary system with IP streaming and voice assistant capabilities;

(10) FIG. 9 illustrates an exemplary IP stack configuration for music streaming and voice assistant;

(11) FIG. 10 illustrates an exemplary system enabling remote fitting with a border router device;

(12) FIG. 11 illustrates an exemplary IP stack configuration for a remote fitting procedure;

(13) FIG. 12 illustrates an alternative exemplary IP stack configuration for remote fitting;

(14) FIG. 13 illustrates an exemplary IP stack configuration for a Device Firmware Update;

(15) FIG. 14 illustrates an alternative exemplary IP stack configuration for a Device Firmware Update;

(16) FIG. 15 illustrates an exemplary IP stack configuration for data harvesting;

(17) FIG. 16 illustrates an exemplary IP stack configuration for neural network tuning;

(18) FIG. 17 illustrates an alternative exemplary IP stack configuration for neural network tuning;

(19) FIG. 18 illustrates a generic IP communication model with a wireless interface;

(20) FIG. 19 illustrates an exemplary IP stack with CTPS;

(21) FIG. 20 illustrates a PDU format for CTPS;

(22) FIG. 21 illustrates a payload format for CTPS;

(23) FIG. 22 illustrates a format of the Link Manager SDU for CTPS;

(24) FIG. 23 illustrates a format of the Acknowledged PDU(s) frame for CTPS;

(25) FIG. 24 illustrates a format of a new SDUs frame for CTPS;

(26) FIG. 25 illustrates a format of a re-transmitted SDUs frame for CTPS;

(27) FIG. 26 illustrates an exemplary PDU encryption;

(28) FIG. 27 illustrates a flow diagram when receiving an encrypted PDU;

(29) FIG. 28 illustrates a simplified IP stack with multiple exemplary possible transports;

(30) and

(31) FIG. 29 illustrates an exemplary end to end IP link via LoRa or WiFi 802.11ax transports.

DETAILED DESCRIPTION

(32) The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.

(33) The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc.

(34) Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.

(35) In the following, different exemplary protocols are described, which can be used in the different aspects of the present disclosure, and different use cases are described, in which the different aspects of the present disclosure can be used.

(36) FIG. 1 depicts an exemplary five layered view of a protocol stack 100 comprising an internet or network layer 103 with an internet protocol (thus an “Internet Protocol stack”, or “IP stack”) with an illustration of how successive headers are added by protocols working at each layer 101-105. Each layer 101-105 handles a particular set of problems involving some aspect of sending data between distributed user applications, i.e. applications that are running on devices (such as a hearing aid system and a remote entity as described in more detail below), which are connected to the same or different networks. As the raw application data (such as user data or control data, as disclosed in the present disclosure) moves from the application layer 105 down through the various layers 102-104, it is wrapped up (or encapsulated) within protocol data units (PDUs) created by each of the protocols it encounters. The names commonly used to refer to these PDUs tend to vary. E.g. at the network layer they are called packets or datagrams. At the link layer, they are more often called frames.

(37) Data from an application is passed down to the appropriate application layer protocol, which encapsulates the data within a protocol data unit (PDU) by adding some header information.

(38) The entire PDU is then passed down to the transport layer protocol and undergoes a similar process here. This encapsulation is repeated for the network layer and the link layer. The frame that is built by the link layer is then sent to a (e.g. border) router or network switch via a physical transmission medium as a stream of bits or symbols.
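As an illustration of the encapsulation steps just described, the following Python sketch (not part of the patent; the header tags are simplified placeholders, not real protocol fields) wraps application data in one nested header per layer and unwraps it again at the receiver:

```python
# Illustrative sketch of layered encapsulation: each layer prepends its own
# header (and the link layer also appends a trailer) around the PDU it
# receives from the layer above. Header contents are placeholders only.

def encapsulate(app_data: bytes) -> bytes:
    """Wrap application data in one PDU per layer, top to bottom."""
    pdu = b"APP|" + app_data              # application layer header
    segment = b"TCP|" + pdu               # transport layer -> segment
    packet = b"IP|" + segment             # network layer  -> packet/datagram
    frame = b"ETH|" + packet + b"|FCS"    # link layer     -> frame (+ trailer)
    return frame

def decapsulate(frame: bytes) -> bytes:
    """Reverse the process at the receiver: strip one header per layer."""
    packet = frame.removeprefix(b"ETH|").removesuffix(b"|FCS")
    segment = packet.removeprefix(b"IP|")
    pdu = segment.removeprefix(b"TCP|")
    return pdu.removeprefix(b"APP|")

frame = encapsulate(b"hello")
assert decapsulate(frame) == b"hello"
```

The nesting mirrors the figure: the frame built by the link layer contains the network-layer packet, which contains the transport-layer segment, which contains the application-layer PDU.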

(39) A functional description of each layer 101-105, which may be employed in a hearing aid system (e.g. in a hearing aid, auxiliary device) or remote entity according to the present disclosure, is given in the below table:

(40) TABLE-US-00001

Application layer 105: An application layer protocol is specific to a particular type of application (e.g. file transfer, electronic mail, network management, etc.) and is sometimes embodied within the application's client software, although it could also be implemented within the operating system software. The interface between an application layer protocol and a transport layer protocol is defined with reference to port numbers and sockets. Further, it defines the format and organization of data, including encryption and authentication.

Transport layer 104: This layer handles the end-to-end transfer of data and can handle a number of data streams simultaneously. It provides a variety of services between two host computers, including connection establishment and termination, flow control, error recovery, and segmentation of large data blocks into smaller parts for transmission. The two main transport layer protocols are: 1. the Transmission Control Protocol (TCP), which provides a reliable, connection-oriented service; and 2. the User Datagram Protocol (UDP), which provides an unreliable, connectionless service (delivery is not guaranteed, but UDP is useful for applications for which speed is more important than reliability).

Network layer 103: This layer provides addressing and routing functions that ensure messages are delivered to their destination. The Internet Protocol (IP) is a connectionless, unreliable protocol that does not provide flow control or error handling, and attempts to deliver IP datagrams on a best-effort basis. Network devices called routers forward incoming datagrams according to the destination IP address specified within the IP packet.

Data link layer 102: This layer formats data into frames appropriate for transmission onto a physical medium. It defines rules for when the medium can be used and general link management, and defines the means by which to recognize transmission errors. It may include authentication and encryption between the data link devices. This layer can be divided into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC). The LLC sublayer is on top of the MAC sublayer and is responsible for Cyclic Redundancy Checking (CRC), sequencing information, and adding appropriate source and destination information. The MAC sublayer controls device interaction, allocating medium access.

Physical layer 101: Defines the electrical, radio frequency, optical, cable-link, connector, and procedural details required for transmitting and receiving bits, represented as some form of energy passing over a physical medium.

(41) In the following, different exemplary protocols of these layers 101-105 are described, which may also be employed in a hearing aid system (e.g. a hearing aid or auxiliary device) or remote entity according to the present disclosure.

(42) There are several standardized application layer protocols (cf. e.g. Service Name and Transport Protocol Port Number Registry, www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml, where they are called services). The following table shows a short list of exemplary standardized application layer protocols, which may be employed in the different aspects according to the present disclosure:

(43) TABLE-US-00002

Constrained Application Protocol (CoAP): CoAP is a protocol that is intended for use in resource-constrained internet devices, such as wireless sensor network nodes. App header size: between 5 and 17 octets.

File Transfer Protocol (FTP): A network protocol used for the transfer of computer files between a client and a server on a computer network. App header size: TBD.

Hypertext Transfer Protocol (HTTP): HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser. App header size: TBD.

Message Queuing Telemetry Transport Protocol (MQTT): Runs over any network protocol that provides ordered, lossless, bi-directional connections, e.g. TCP. Defines two types of network entities: a message broker and a number of clients. An MQTT broker is a server that receives all messages from the clients and then routes the messages to the appropriate destination client.

Real-time Transport Protocol (RTP): RTP is a network protocol for delivering audio and video over IP networks.

Real Time Streaming Protocol (RTSP): RTSP is a network control protocol designed for use in entertainment and communications systems to control streaming media servers.

(44) Two exemplary transport layer protocols in the IP stack are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). An overview of the differences between TCP and UDP is provided in the below table:

(45) TABLE-US-00003

Connection model: TCP is a connection-oriented protocol; the communicating devices should establish a connection before transmitting data and should close the connection after transmitting the data. UDP is a datagram-oriented protocol; there is no overhead for opening a connection, maintaining a connection, and terminating a connection, which makes UDP efficient for broadcast and multicast types of network transmission.

Reliability: TCP is reliable as it guarantees delivery of data to the destination. In UDP, the delivery of data to the destination cannot be guaranteed.

Error checking: TCP provides extensive error checking mechanisms, because it provides flow control and acknowledgment of data. UDP has only a basic error checking mechanism using checksums.

Sequencing: Sequencing of data is a feature of TCP, which means that packets arrive in-order at the receiver. There is no sequencing of data in UDP; if ordering is required, it has to be managed by the application layer.

Retransmission: Retransmission of lost packets is possible in TCP. There is no retransmission of lost packets in UDP.

Header: TCP has a variable length header of 20-60 bytes. UDP has a fixed length header of 8 bytes.

Broadcasting: TCP does not support broadcasting. UDP supports broadcasting.
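The connectionless nature of UDP summarized above can be demonstrated with standard sockets. The following Python sketch (illustrative only; the loopback address and payload are example values) sends a single datagram without any connection setup or handshake:

```python
import socket

# Minimal sketch of UDP's connectionless model over the loopback interface:
# a datagram is handed to the network with no prior connection establishment.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                 # let the OS pick a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"audio-frame-42", ("127.0.0.1", port))  # no handshake needed

data, addr = rx.recvfrom(1024)
print(data)   # b'audio-frame-42'
tx.close()
rx.close()
```

A TCP transfer of the same payload would first require `connect()`/`accept()` to establish the connection, which is exactly the overhead the table attributes to TCP.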

(46) FIGS. 2 and 3 show the header formats 200, 300 of the two transport layer protocols “Transmission Control Protocol” (TCP) and “User Datagram Protocol” (UDP), which may be employed in a protocol stack according to the present disclosure.

(47) As already mentioned above, the network layer 103 of FIG. 1 comprises an Internet Protocol. An Internet Protocol (“IP”) deals with getting the datagrams from the source all the way to the destination. Getting to the destination may involve making many hops at routers (intermediate nodes). IP provides a best effort network layer service connecting endpoints (computers, phones, etc.) to form a computer network. The IP network service transmits datagrams between endpoints via intermediate nodes (IP routers).

(48) There are generally two relevant deployed Internet Protocols or specifications thereof: Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6). IPv4 is the fourth version of the Internet Protocol (IP). IPv4 was the first version deployed for production use, in 1983. It still routes most Internet traffic today, despite the ongoing deployment of the successor protocol, IPv6. IPv4 and IPv6 are described in the Internet Protocol, Version 4 (IPv4) Specification (https://tools.ietf.org/html/rfc791) and the Internet Protocol, Version 6 (IPv6) Specification (https://tools.ietf.org/html/rfc2460), respectively.

(49) One of the important differences between IPv4 and IPv6 is the address length: IPv4 uses a 32-bit address scheme allowing for a total of 2^32 addresses (just over 4 billion addresses), whereas IPv6 uses a 128-bit address scheme allowing for about 2^128 addresses. With the growth of the Internet it is expected that the number of unused IPv4 addresses will eventually run out, because every device (phones, PCs, game consoles, ear buds, hearing aids, etc.) that connects to the Internet requires an address.
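The address-length difference can be checked with Python's standard `ipaddress` module (the addresses shown are documentation examples, not values from the patent):

```python
import ipaddress

# The maximum prefix length of each address family equals its address
# length in bits: 32 for IPv4, 128 for IPv6.
assert ipaddress.IPv4Address("203.0.113.7").max_prefixlen == 32
assert ipaddress.IPv6Address("2001:db8::1").max_prefixlen == 128

print(2 ** 32)    # 4294967296: just over 4 billion IPv4 addresses
print(2 ** 128)   # about 3.4e38 IPv6 addresses
```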

(50) The size of the IPv4 header is between 20 and 60 octets (depending on the length of the option field, which may be up to 40 octets), whereas the IPv6 header is 40 octets, though an extension header may be added as part of the payload field. An extension header is a minimum of 4 octets (cf. e.g. IPv6 header, https://en.wikipedia.org/wiki/IPv6packet).

(51) In order to enable a direct IP link between a hearing aid and a remote entity, a protocol stack with an internet protocol (IP stack) is supported in the hearing aid.

(52) A preferred approach according to the present disclosure is the employment of an IP link in the frame of the Bluetooth standard, which is explained in more detail with respect to FIGS. 4 and 5. One option is the use of the Bluetooth Network Encapsulation Protocol (BNEP) for Classic Bluetooth, which is specified in the Bluetooth Network Encapsulation Protocol (BNEP) Specification (see www.bluetooth.org/docman/handlers/DownloadDoc.ashx?doc_id=6552). A corresponding protocol stack 400 is illustrated in FIG. 4. The protocol stack 400 inter alia comprises a physical layer 401 with Bluetooth Radio and a Bluetooth Baseband, a link layer 402 with the L2CAP protocol, a network sub-layer 403 with the BNEP protocol, a network/internet layer 404 (with an internet protocol such as IPv4 or IPv6), a transport layer 405 and an application layer 406.

(53) Another option is the Internet Protocol Support Profile (IPSP) for Bluetooth Low Energy, which is for instance specified under www.bluetooth.org/docman/handlers/DownloadDoc.ashx?doc_id=296307. A corresponding protocol stack 500 is illustrated in FIG. 5. The protocol stack 500 inter alia comprises a physical layer 501, a link layer 502 comprising inter alia the L2CAP protocol, a network sub-layer 503 with the 6LoBTLE protocol (which is now according to RFC 7668 called IPv6 over BLUETOOTH(R) Low Energy), a network/internet layer 504 with an internet protocol such as IPv6, a transport layer 505 and an application layer 506. BNEP supports transport of both IPv4 and IPv6 datagrams whereas IPSP only supports IPv6 datagrams.

(54) To save power, low power radio protocols such as Bluetooth use small (bearer-specific) frame sizes. The frame size depends on the amount of payload and the amount of control (signaling) data that are required. Hence it is important to minimize the amount of overhead in the data link layer frame in a low power protocol.

(55) With reference to FIG. 4, BNEP removes and replaces the IP header with a BNEP header when a frame with a message is transmitted. The opposite is the case when a frame with a message is received. Finally, both the BNEP header and the IP payload are encapsulated by the Bluetooth Logical Link Control and Adaptation Protocol (L2CAP), followed by the baseband or Link Layer protocol, and sent over the physical media. The BNEP header is typically between 4 and 16 octets, which is a significant reduction compared to the sizes of the IPv4 and IPv6 headers. Generally, layers 403-406 may be considered as enabling a direct IP link from the hearing aid to the remote entity.

(56) With reference to FIG. 5, for IP transports via Bluetooth Low Energy (LE) the IPv6 over Bluetooth Low Energy (6LoBTLE) specification (cf. IPv6 over BLUETOOTH(R) Low Energy) is used for e.g.:

(57) link establishment to an auxiliary device, such as an IPSP router;

(58) neighbor discovery, i.e. other IPSP nodes connected to the same router; and/or

(59) compression of the IPv6 header to between 2 and 20 octets.

(60) Generally, layers 503-506 may be considered as enabling a direct IP link from the hearing aid to the remote entity.
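A back-of-the-envelope calculation shows why such header compression matters on small low power frames. The following Python sketch compares the header overhead of an uncompressed 40-octet IPv6 header with the 2 to 20 octet compressed range mentioned above; the 27-octet payload is an assumed example value, not a figure from the specification:

```python
# Sizes from the text above; the payload size is an illustrative assumption.
IPV6_HEADER = 40                          # octets, uncompressed IPv6 header
COMPRESSED_MIN, COMPRESSED_MAX = 2, 20    # compressed header range, octets
PAYLOAD = 27                              # assumed small LE payload, octets

def overhead(header: int, payload: int = PAYLOAD) -> float:
    """Fraction of the transmitted frame spent on the IP header."""
    return header / (header + payload)

print(f"uncompressed: {overhead(IPV6_HEADER):.0%}")      # roughly 60%
print(f"best case:    {overhead(COMPRESSED_MIN):.0%}")   # roughly 7%
```

Even at the worst case of the compressed range the header share drops markedly, which is the point of using 6LoBTLE on bearers with small frame sizes.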

(61) Depending on the point of view, the allocation of different protocols to different layers may not always be strict and unambiguous. However, both BNEP and 6LoBTLE are typically considered sub-layers in the networking layer or an adaptation layer, i.e. below the IP protocol.

(62) The BNEP or IPSP stack 400, 500 is often part of the auxiliary device or typical gateway used (e.g. a smartphone, tablet, personal computer or a combo WiFi/Bluetooth router with BNEP or IPSP). If BNEP and/or IPSP is supported in the hearing aid, then vendor specific software is not needed on the auxiliary device to access internet connected servers from the hearing aid. A corresponding system with a remote entity and a hearing aid system establishing an end to end IP link is shown in FIG. 6.

(63) The system 600 of FIG. 6 comprises a hearing aid system 602 with a first hearing aid 604 and a second hearing aid 606. The hearing aid system further comprises an auxiliary device in the form of a mobile device 608 or a router 610. The system 600 further comprises a remote server 612. While the hearing aids 604 and/or 606 are physically connected to the one or more auxiliary devices 608 and/or 610 via Bluetooth or Bluetooth LE, the first and/or second hearing aid 604, 606 establishes a direct IP link to the remote server 612 over the internet. This is possible by e.g. employing a protocol stack, such as IP stack 400 or 500, as described above. While the connection between the auxiliary devices 608, 610 and the server 612 is a standard IP based connection, the IP data in the connection between the hearing aid 604, 606 and the auxiliary device is encapsulated in the Bluetooth protocol.

(64) It may be that only one of the two hearing aids 604, 606 of FIG. 6 has a Bluetooth connection (IP link) to a border router, for instance. In that case, the hearing aid 604, 606 with the Bluetooth connection may relay the IP payload to the other hearing aid by means of a separate connection between the two hearing aids, as illustrated with hearing aids 704, 706 and border router 710 of hearing aid system 702 in FIG. 7. For example, the connection can be a Bluetooth connection as well, but also a vendor specific connection.

(65) Further options for realizing an IP stack in a hearing aid will be described with reference to FIGS. 28 and 29 below.

(66) In the following, different use cases are described, which are enabled by the realization of an IP link by the hearing aid.

(67) One example of a use case is the streaming of audio. A corresponding system 800 is exemplarily shown in FIG. 8. The following streaming use cases are possible. As one example, a remote server 812 can stream music directly to the hearing aid. Based on either a timestamp, a codec frame number or both provided by the remote entity (such as a server), the hearing aids 804, 806 of hearing aid system 802 may synchronize the rendered audio via a connection between the hearing aids. The connection between the hearing aids 804, 806 can be magnetic or RF based. As another example, a two-way session may be employed, in which the end user speaks with a remotely located person, via a remote server bridge to a voice assistant (such as Alexa, Siri or Google) and/or directly with a (human) voice assistant.

(68) As yet another example, the hearing aid may stream audio one-way to a server 812, e.g. for control commands. Additionally or alternatively, the remote server 812 can notify the user with a one-way audio stream, e.g. when there are (feature) updates.

(69) A corresponding example of an IP stack 900 with layers 901-905 for this use case is shown in FIG. 9. In the application layer 905, the RTP (Real-time Transport Protocol) is used for transport of encoded audio data, and the RTCP (RTP Control Protocol) is used for transport of audio control messages. These protocols are described in the Real-Time Protocol (RTP) and RTP Control Protocol (RTCP) Specification. UDP is used as the transport protocol in transport layer 904.
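As a rough illustration of the RTP transport mentioned above, the following Python sketch builds the fixed 12-octet RTP header defined in RFC 3550 in front of an encoded audio frame; the dynamic payload type 96 and all field values are example assumptions, not values from the patent:

```python
import struct

# Minimal sketch of an RTP packet (RFC 3550 fixed 12-octet header) carrying
# an encoded audio frame, as in the streaming stack of FIG. 9.

def rtp_packet(seq: int, timestamp: int, ssrc: int, payload: bytes) -> bytes:
    first = 0x80                      # V=2, no padding, no extension, CC=0
    second = 96                       # M=0, dynamic payload type (example)
    header = struct.pack(
        "!BBHII",
        first, second,
        seq & 0xFFFF,                 # 16-bit sequence number
        timestamp & 0xFFFFFFFF,       # 32-bit media timestamp
        ssrc & 0xFFFFFFFF,            # 32-bit synchronization source id
    )
    return header + payload

pkt = rtp_packet(seq=1, timestamp=160, ssrc=0x1234, payload=b"\x01\x02")
assert len(pkt) == 12 + 2
assert pkt[0] >> 6 == 2               # RTP version field is 2
```

In the stack of FIG. 9 such a packet would then be handed to UDP, which supplies its own 8-octet header before the datagram enters the IP layer.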

(70) Another example of a use case is the fitting procedure. FIG. 10 illustrates an exemplary system 1000 with hearing aid system 1002, hearing aids 1004, 1006, border router 1010 and remote server 1012 for this use case. The benefits of a fitting procedure while employing an internet protocol at the hearing aid are as follows: No fitting gateway (like the Noahlink Wireless) is needed.

(71) In order to increase the fitting speed, two routers (one for each hearing instrument) can be used. During the fitting procedure, the dispenser can simultaneously speak via IP with the end user, as described above with respect to the communication with a voice assistant.

(72) While today's hearing aids can already be remotely fitted, this requires either a wireless connection to a dedicated fitting device or a connection to a mobile phone, tablet or computer, which must contain a dedicated fitting application or piece of fitting software. However, if the hearing aid 1004, 1006 supports an internet protocol, such as IPv4 or IPv6, and a bearer, e.g. Bluetooth, then any auxiliary device (phone/tablet/pc) which can operate as an IP border router 1010 and with a bearer compatible with the bearer employed by the hearing aid can be used as the fitting device, without the need for any dedicated device or dedicated software. During the remote fitting session, the dispenser and end user can also speak to each other via the IP link (cf. above).

(73) FIG. 11 illustrates an exemplary IP stack 1100 with layers 1101-1105 for this use case. At the application layer 1105, a fitting application protocol may be used for transport of the fitting data, RTP may be used for transport of encoded audio data, and RTCP may be used for transport of audio control messages. At the transport layer 1104, the fitting application protocol may use TCP and the audio application protocols may use UDP. The fitting application protocol can be vendor specific or a standardized protocol, e.g. Message Queuing Telemetry Transport (MQTT), specified under http://mqtt.org/.

(74) FIG. 12 depicts an alternative IP stack configuration 1200 with layers 1201-1205 for remote fitting where the fitting protocol uses the Constrained Application Protocol (CoAP), as e.g. specified under https://tools.ietf.org/html/rfc7252, which allows a message to be acknowledged and thereby enables reliable data communication in the fitting use case. This is particularly advantageous, as all fitting data should be transported reliably: the hearing aid may behave incorrectly, or may even damage the end user's ear, if some fitting data is missing.
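The acknowledgement mechanism mentioned above comes from CoAP's message layer: a Confirmable (CON) message must be answered with an Acknowledgement (ACK) carrying the same Message ID. The following sketch builds the 4-octet CoAP header per RFC 7252; the message ID value is arbitrary.

```python
import struct

def coap_header(msg_type: int, code: int, message_id: int, tkl: int = 0) -> bytes:
    """Build the 4-octet CoAP header (RFC 7252).

    Octet 0: 2-bit version (1), 2-bit type, 4-bit token length.
    Octet 1: code (class.detail). Octets 2-3: 16-bit Message ID.
    """
    byte0 = (1 << 6) | ((msg_type & 0x3) << 4) | (tkl & 0xF)
    return struct.pack("!BBH", byte0, code, message_id)

CON, ACK = 0, 2            # message types: Confirmable, Acknowledgement
POST = (0 << 5) | 2        # code 0.02 = POST

# A Confirmable POST; the receiver acknowledges with an ACK reusing the ID.
con = coap_header(CON, POST, message_id=0x01D2)
assert con == bytes([0x40, 0x02, 0x01, 0xD2])
```

Because each CON message is retransmitted until the matching ACK arrives, fitting data sent this way cannot silently go missing.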

(75) Yet another example of a use case is a Device Firmware Update (DFU). Similar to the remote fitting use case, a hearing aid can receive a Device Firmware Update via a wireless connection to a dedicated fitting device or a smart phone, tablet or personal computer containing a dedicated fitting application or piece of fitting software. However, if the hearing aid supports an internet protocol, such as IPv4 or IPv6, and a bearer, e.g. Bluetooth, then any auxiliary device (such as a smart phone, tablet or computer) which can operate as an IP border router and with a bearer compatible with the bearer employed by the hearing aid can be used as the fitting device, without the need for any dedicated device or dedicated application or software.

(76) FIG. 13 illustrates an exemplary IP stack configuration 1300 with layers 1301-1305 for this use case. At the application layer 1305, a DFU application protocol is used for the transport of the DFU data. Alternatively, a standardized file transfer protocol (such as a protocol mentioned under https://en.wikipedia.org/wiki/Comparison_of_file_transfer_protocols) can be used. At the transport layer 1304, TCP may be used.

(77) FIG. 14 depicts an alternative IP stack configuration 1400 with layers 1401-1405 for the DFU use case where the DFU protocol uses the Constrained Application Protocol (CoAP). As explained above, this protocol allows a message to be acknowledged, which enables reliable data communication in the case of a DFU. It is advisable that DFU data is transported reliably, as the hearing aid may behave incorrectly, or may even damage the end user's ear, if some DFU data is missing.

(78) Yet another use case is data harvesting. Various sensors may be implemented in a hearing aid. Examples of such sensors are a microphone, a heart rate sensor, or an electroencephalography (EEG) sensor. The hearing aid may read the respective sensor outputs at a regular rate and may in one example store the data in its non-volatile memory. The recorded sensor data can be uploaded to a remote data harvesting server once the hearing aid is connected to the internet, as described above. When the hearing aid is connected, it may also transfer the sensor data directly to the data harvesting server without storing the data in its non-volatile memory. In addition, program usage statistics and hearing aid status information can be uploaded directly, or stored data can be uploaded when connected to the internet.
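The buffer-then-upload pattern described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the in-memory list stands in for the hearing aid's non-volatile memory, and `send` stands in for whatever uplink (e.g. CoAP over UDP) carries the data to the harvesting server.

```python
import json
import time

class SensorHarvester:
    """Buffer sensor readings and flush them once a connection exists."""

    def __init__(self):
        self._buffer = []                      # stands in for non-volatile memory

    def record(self, sensor: str, value: float) -> None:
        """Store one reading, e.g. taken at the regular sensor-read rate."""
        self._buffer.append({"sensor": sensor, "value": value, "t": time.time()})

    def flush(self, send) -> int:
        """Upload all buffered readings via send(payload); return the count."""
        n = len(self._buffer)
        if n:
            send(json.dumps(self._buffer).encode())
            self._buffer.clear()
        return n

h = SensorHarvester()
h.record("heart_rate", 72.0)
h.record("eeg", 0.13)
sent = []
assert h.flush(sent.append) == 2               # two readings uploaded
assert h.flush(sent.append) == 0               # buffer now empty
```

When the device is already online, `record` could call `send` directly, matching the store-free variant mentioned in the paragraph.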

(79) FIG. 15 illustrates an exemplary IP stack configuration 1500 with layers 1501-1505 for the use case of data harvesting. At the application layer 1505, the Constrained Application Protocol (CoAP) already mentioned above is used as the data harvesting application protocol for transport of the harvested data. Alternatively, a vendor specific protocol may be used as well. At the transport layer 1504, UDP may be used.

(80) Yet another use case is the optimization or tuning of the neural networks in the hearing instrument. The hearing aid may implement neural networks (NNs), which may need to be optimized or tuned for the specific user. One situation where this is needed could be voice recognition for spoken commands, where the NNs of the hearing aid need to be optimized to better recognize the hearing aid user's voice, for instance. For instance, the user could be requested (e.g. by the NN tuning server) to speak certain commands, wherein these commands are then recorded by the hearing aid microphone(s) and sent to the NN tuning server together with the NN coefficients. Once the NN (i.e. its coefficients) has been optimized, the coefficients are downloaded to the NNs in the hearing aid.

(81) Another situation where the hearing aid's NNs may need to be optimized or tuned could be in a specific listening situation, where the NNs make a sub-optimum or wrong decision. The situation may be recorded by the hearing aid's microphones and sent to the NN tuning server together with the NN coefficients. During the optimization of the NN coefficients, the user may be asked questions via the audio IP link described above. Once the NN coefficients have been optimized, the coefficients are downloaded to the NNs in the hearing aid.

(82) FIG. 16 illustrates an exemplary IP stack configuration 1600 with layers 1601-1605 for the use case of neural network tuning. At the application layer 1605, a NN tuning application protocol may be used for the transport of the NN coefficients, whereas RTP may be used for transport of encoded audio data, and RTCP may be used for transport of audio control messages. At the transport layer 1604, the NN tuning application protocol uses TCP and the audio application protocols use UDP. The NN tuning application protocol can be vendor specific or a standardized protocol, e.g. Message Queuing Telemetry Transport (MQTT), already mentioned above.

(83) FIG. 17 depicts an alternative IP stack configuration 1700 with layers 1701-1705 for neural network tuning where the NN tuning protocol uses the Constrained Application Protocol (CoAP), which allows a message to be acknowledged and thereby enables reliable data communication in the exemplary use case of neural network tuning. Neural network data, such as coefficients, should be transported reliably, as the hearing aid may behave incorrectly, or may even damage the end user's ear, if some NN data is corrupted.

(84) Yet another use case is an interaction with a home management system. For instance, the hearing instrument can interact either directly or indirectly via a remote server with the home management system.

(85) The example aspects of the present disclosure may further employ a Constrained Transport Protocol with Security (CTPS), which will be explained in more detail in the following.

(86) In communication via an internet protocol, an application on one device communicates with an application on another device in the case of uni-cast, or on several devices in the case of multi-cast. Often, when two devices (such as the hearing aid and the remote entity) communicate via an internet protocol, multiple applications may be communicating with each other on the two devices. Each message between two respective applications is sent as an individual internet protocol packet, i.e. messages from multiple applications directed at applications at the same destination cannot be bundled together in one IP packet, which would however be desirable to minimize the IP-stack header overhead. The overhead is even larger if security is enabled in one of the layers from the network layer and above.

(87) FIG. 18 shows an exemplary IP communication model 1800 with layers 1801-1805 with a wireless interface where multiple applications are communicating with one another. When one of the devices is a resource constrained device with a wireless interface (such as a hearing aid), then transmitting or receiving packets at the wireless interface containing IP payloads from the individual applications is inefficient power-wise, as the radio on-time is higher compared to when the application data from multiple applications is bundled into larger packets. However, bundling requires that the applications can accept the higher latency, as the transmission rate may be reduced while collecting the asynchronous data from multiple applications, depending on the transmission frequency and pattern of the multiple applications.

(88) The following sections will describe a transport layer protocol, which deals with the above outlined issues and which may be implemented in a protocol stack according to the present disclosure as described above.

(89) The application layer protocols are often called services (cf. e.g. the Service Name and Transport Protocol Port Number Registry, www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml), and therefore the data frame exchanged between the CTPS and the application is called a Service Data Unit (SDU). The data frame exchanged between the CTPS and the network layer is called a Protocol Data Unit (PDU). FIG. 19 shows an exemplary implementation of the IP stack 1900 and its architectural components, in particular layers 1903-1905. The CTPS Control Message Protocol block 1906 is responsible for the transport layer control messaging, which includes updates or adjustment of parameters related to links between devices and the exchange/update of security keys. The Flow Control and re-transmission with PDU assembly/disassembly block 1907 handles the flow control and re-transmission, including putting the SDUs in ascending order, if the PDUs have been received out of order, before the SDUs are passed on to the application layer. PDU assembly and disassembly is also handled by this block. The Authentication, de-/encryption and CRC block 1908 checks or adds the CRC field, authenticates the PDU and encrypts/decrypts the payload field.

(90) The CTPS PDU 2000 is schematically depicted in FIG. 20 and the following table provides a summary of the packet structure.

(91) TABLE-US-00004

Field Name    Octets                         Description
Header        1                              Header field with control information.
Length        2                              Length of the PDU payload.
Sequence Nr.  4                              PDU sequence number. For a set of authentication/encryption keys, each new PDU shall have a new sequence number.
Payload       Up to 65535                    PDU payload.
MIC           M ∈ [4, 6, 8, 10, 12, 14, 16]  Message Integrity Check (MIC), used for authentication. The length of the MIC field is defined during security update. The MIC field shall not be included in an un-encrypted PDU.
CRC           3                              Cyclic Redundancy Check (CRC).
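The PDU layout in the table above can be sketched as a simple packing routine. This is an illustrative reading of the table only: the specification fixes neither the byte order (network order is assumed here) nor the CRC polynomial (the low 24 bits of CRC-32 are used purely as a stand-in for the 3-octet CRC field).

```python
import struct
import zlib

def build_ctps_pdu(header: int, seq: int, payload: bytes, mic: bytes = b"") -> bytes:
    """Assemble a CTPS PDU per the field table:
    1-octet Header, 2-octet Length (of the payload), 4-octet Sequence Nr.,
    payload, optional MIC, 3-octet CRC."""
    assert len(payload) <= 65535, "Length field is 2 octets"
    body = struct.pack("!BHI", header, len(payload), seq) + payload + mic
    crc = zlib.crc32(body) & 0xFFFFFF          # placeholder 3-octet CRC
    return body + crc.to_bytes(3, "big")

pdu = build_ctps_pdu(header=0x00, seq=1, payload=b"hello")
assert len(pdu) == 1 + 2 + 4 + 5 + 3           # fixed fields + payload + CRC
```

An un-encrypted PDU (EP = 0b00 in the header) omits the MIC, as the table requires; passing `mic=b""` models that case.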

(92) The following table provides header field definitions of the CTPS PDU.

(93) TABLE-US-00005

Field Name  Bits  Description
Ver         1     Indicates the CTPS version.
EP          2     Encrypted PDU (EP). EP ≠ 0b00 indicates that the PDU is encrypted and includes a MIC field. When the encryption keys are changed, the value in the EP field shall be changed, and the new value shall not be 0b00 when encryption is enabled. A change in the EP field value indicates to the peer device that the encryption keys have been changed. EP = 0b00 indicates that the PDU is not encrypted and does not include a MIC field.
LLM         1     Indicates that the payload contains a CTPS Control Message (CM).
AF          1     Indicates that the payload contains the sequence numbers of the PDUs that have been requested to be acknowledged.
RTX         1     Indicates that the payload contains re-transmitted PDUs.
ARQ         1     Acknowledgement of this PDU is requested from the receiver by the sender.
RFU         1     Reserved for Future Use.
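The bit fields in the table above fit into the single header octet of the PDU. The table does not state the bit ordering, so the sketch below assumes the fields are packed most-significant-bit first in table order; that ordering is an assumption for illustration only.

```python
def ctps_header(ver: int, ep: int, llm: bool, af: bool, rtx: bool, arq: bool) -> int:
    """Pack the CTPS header bits into one octet (MSB-first, table order assumed).
    Bit 0 (LSB) is the RFU bit and is left as 0."""
    assert ver in (0, 1) and 0 <= ep <= 3
    return ((ver << 7) | (ep << 5) | (int(llm) << 4) |
            (int(af) << 3) | (int(rtx) << 2) | (int(arq) << 1))

hdr = ctps_header(ver=0, ep=0b01, llm=False, af=True, rtx=False, arq=True)
assert hdr & 0b0000_1000            # AF set: acknowledged-SDUs frame present
assert (hdr >> 5) & 0b11 != 0b00    # EP != 0b00: PDU is encrypted, MIC present
```

Changing the key set would be signalled by rotating `ep` to another non-zero value, as the EP description requires.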

(94) FIG. 21 depicts an exemplary sequence 2100 of four types of frames in the PDU payload 2101: a CTPS Control Message SDU 2102, an Acknowledged SDUs frame 2103, a Re-transmitted SDUs frame 2104 and a New SDUs frame 2105. The header bits AF and RTX indicate whether the Acknowledged SDUs frame and the Re-transmitted SDUs frame are included, respectively. In other words, the header bits indicate whether the first three frame types are included in a payload or whether one or more of them are absent.

(95) FIG. 22 schematically depicts the format of the CTPS Control Message SDU 2200 part of the payload. The first octet field “Length” indicates the length of the CTPS Control Message.

(96) FIG. 23 schematically depicts the format of the acknowledged PDUs part 2300 in the payload. The first two octets, the “Number” field, indicate the number of two-octet PDU Nr. frames, each of which contains the lower 16 bits of the sequence number of a PDU being acknowledged.
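Following that description, the acknowledged-PDUs frame can be sketched as below. Network byte order is assumed, as elsewhere in these sketches; the specification text does not fix it.

```python
import struct

def ack_frame(seq_numbers) -> bytes:
    """Build the acknowledged-PDUs frame: a 2-octet count, then for each
    acknowledged PDU the lower 16 bits of its 4-octet sequence number."""
    frame = struct.pack("!H", len(seq_numbers))
    for seq in seq_numbers:
        frame += struct.pack("!H", seq & 0xFFFF)   # keep only the low 16 bits
    return frame

# Two PDUs acknowledged; 70000 exceeds 16 bits, so only its low half is sent.
frame = ack_frame([5, 70000])
assert frame == struct.pack("!HHH", 2, 5, 70000 & 0xFFFF)
```

The receiver resolves the truncated 16-bit value back to a full sequence number from its window of recently sent PDUs.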

(97) FIG. 24 schematically depicts the format of the New SDUs frame 2400 in the payload. The first two octets, the “Total length” field, indicate in octets the total length of all the New SDU frames. Each SDU frame begins with a two-octet “Length” field, which indicates the length of the SDU payload in octets. The next field is a one-octet “Port Nr.”, which is the port number of the application. The port number field could however be extended to e.g. 2 octets, whereby the TCP and UDP port numbers could be re-used.
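This frame is where the bundling benefit discussed earlier materializes: SDUs from several applications, each tagged with its port number, travel in one PDU. The sketch below assumes the “Total length” field covers the per-SDU Length and Port Nr. fields as well as the payloads, which the text leaves open.

```python
import struct

def new_sdus_frame(sdus) -> bytes:
    """Build the New-SDUs frame: a 2-octet total length, then for each SDU
    a 2-octet payload length, a 1-octet application port number, and the
    SDU payload itself. `sdus` is a list of (port, payload) pairs."""
    body = b""
    for port, payload in sdus:
        body += struct.pack("!HB", len(payload), port) + payload
    return struct.pack("!H", len(body)) + body

# Fitting data and audio control from two applications bundled in one frame.
frame = new_sdus_frame([(7, b"fit"), (9, b"audio")])
assert frame[:2] == struct.pack("!H", (2 + 1 + 3) + (2 + 1 + 5))
```

The port numbers 7 and 9 are arbitrary illustration values, not assignments from the disclosure.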

(98) FIG. 25 schematically depicts an example format of the Re-transmitted SDUs 2500 in the payload. The first two octets contain the number of the PDU being retransmitted. The two-octet Length field indicates the length of the SDU payload. The one-octet Port Nr. field is the port number of the application.

(99) For authenticating the PDU and encrypting the PDU payload, 128-bit AES (Advanced Encryption Standard) with CCM (Counter with Cipher Block Chaining Message Authentication Code) may be used, which is illustrated in diagram 2600 of FIG. 26 and whose inputs and outputs are provided in the table below. Alternatively, 256-bit AES may be used.

(100) TABLE-US-00006: Description of the inputs and outputs of the AES-CCM block.

Input/output  Description
K             128-bit encryption key, which is known by both the sender and receiver.
L             Number of octets in the length field; here 2.
M             Number of octets in the MIC field (used for authentication); defined during negotiation or update of the security keys.
a             Additional authenticated data. Here it consists of a virtual part, which is the 128-bit IPv6 source address and the 128-bit IPv6 destination address, plus the PDU Header field and the Total PDU length field.
N             The 15 - L = 13-octet nonce. Here the nonce is a concatenation of the 4-octet PDU sequence number (Sequence Nr) and an Initialization Vector (IV), which is known by both the sender and receiver, i.e. nonce = IV | Sequence Nr.
m             Message to authenticate and encrypt; here the PDU payload.
c             Cypher, or encrypted payload field.
U             Authentication value, or the PDU MIC field.
IV            The Initialization Vector (IV) is defined during negotiation or update of the security keys. Its size is 15 - L - (number of octets in the Sequence Nr field) = 15 - 2 - 4 = 9 octets.
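The nonce construction from the table (nonce = IV | Sequence Nr, 13 octets total for L = 2) can be sketched directly. Only the concatenation is shown here, not the AES-CCM computation itself; the big-endian encoding of the sequence number is an assumption.

```python
import struct

IV_LEN = 9          # 15 - L - 4 octets, with L = 2 per the table

def ccm_nonce(iv: bytes, sequence_nr: int) -> bytes:
    """Build the 13-octet AES-CCM nonce: the 9-octet IV (agreed during
    security negotiation) followed by the fresh 4-octet PDU sequence number."""
    assert len(iv) == IV_LEN
    return iv + struct.pack("!I", sequence_nr)

nonce = ccm_nonce(bytes(9), sequence_nr=1)
assert len(nonce) == 13
```

Because each PDU carries a new sequence number, every PDU is encrypted under a fresh nonce, which AES-CCM requires for its security guarantees.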

(101) Further details of the AES-CCM block can for instance be found in Counter with CBC-MAC (CCM) under https://tools.ietf.org/html/rfc3610.

(102) Regarding the PDU reception and transmission, FIGS. 27 and 28 show exemplary flow diagrams 2700, 2800 of receiving and transmitting a PDU, respectively. The encryption process has been described above.

(103) Before data communication can happen, both ends need to agree on e.g. the maximum PDU size, transport interval and/or security keys. This can either happen via configuration at manufacturing or negotiation via the CTPS Control Message Protocol.

(104) The transport interval can be configured in several ways: with a constant interval; with a constant interval plus additional transmissions, if there is more data to be sent than the maximum PDU size; with a non-constant interval, where a PDU is only transported if the payload has a defined/agreed size; with a non-constant interval, where a PDU is only transported if the payload has a defined/agreed size or if there has not been a transmission in a defined period; or with a non-constant interval, where a PDU is transported whenever there is a payload.
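The non-constant-interval options above amount to a transmit decision per opportunity. The helper below is a sketch of that decision; the policy names are invented for illustration, as the text only describes the behaviours.

```python
def should_transmit(policy: str, payload_len: int, agreed_size: int,
                    since_last_tx: float, max_silence: float) -> bool:
    """Decide whether to send a PDU at this opportunity.

    policy "size":            send only when the payload reaches the agreed size.
    policy "size_or_timeout": also send after a defined period without traffic.
    policy "any_payload":     send whenever there is any payload at all.
    """
    if policy == "size":
        return payload_len >= agreed_size
    if policy == "size_or_timeout":
        return payload_len >= agreed_size or since_last_tx >= max_silence
    if policy == "any_payload":
        return payload_len > 0
    raise ValueError(f"unknown policy: {policy}")

# A small payload is held back under "size" but flushed after a long silence.
assert should_transmit("size", 10, 64, 0.0, 1.0) is False
assert should_transmit("size_or_timeout", 10, 64, 2.0, 1.0) is True
```

Which policy fits depends on the application, matching the remark below that the transport interval method is chosen per application protocol.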

(105) When acknowledgement is enabled, the CTPS will order the received SDUs in ascending order before the SDUs are passed on to the application(s) which requested a reliable transport; this ordering feature is similar to the feature implemented in TCP. Which transport interval method to use, and whether acknowledgment is requested, depends on the application protocol in use.
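The in-order delivery described above can be sketched as a small reordering buffer. This is an illustrative model only: SDUs arriving out of order are held back until every lower sequence number has arrived, then released in ascending order, similar to TCP's behaviour.

```python
class ReorderBuffer:
    """Release SDUs to the application in ascending sequence order."""

    def __init__(self):
        self._next = 0          # next sequence number owed to the application
        self._pending = {}      # out-of-order SDUs held back, keyed by seq

    def receive(self, seq: int, sdu: bytes):
        """Accept one SDU; return the list of SDUs now deliverable in order."""
        self._pending[seq] = sdu
        deliverable = []
        while self._next in self._pending:
            deliverable.append(self._pending.pop(self._next))
            self._next += 1
        return deliverable

buf = ReorderBuffer()
assert buf.receive(1, b"b") == []             # held: SDU 0 not yet received
assert buf.receive(0, b"a") == [b"a", b"b"]   # gap filled, both released
```

In a full implementation, the re-transmission machinery of block 1907 would fill any persistent gap before this buffer releases the stalled SDUs.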

(106) Above, the protocol stack comprising the internet protocol implemented at the hearing aid used the Bluetooth or Bluetooth LE standard for handling and physically transmitting the IP packets in the layer or sub-layers below the internet protocol. However, other protocols or standards than Bluetooth may generally also be employed. For instance, instead of or in addition to letting Bluetooth handle the transport of the IP packets in the hearing aid, this handling and transmission could also be realized via the IEEE 802.15.4 standard (WPAN), the IEEE 802.15.4a standard (UWB), any of the 802.11 family of standards (WiFi) or LoRaWAN, for instance. A simplified IP stack 2900 with its layers 2901-2905 and multiple possible bearers 2906-2911 is depicted in FIG. 29.

(107) Correspondingly, FIG. 30 depicts an exemplary system 3000 with hearing aid system 3002, hearing aids 3004, 3006, LoRa base station and border router 3010 and remote server 3012. Here, two examples of an IP end-to-end link via two different transport protocols (either LoRa or WiFi 802.11ax in this case) are illustrated.

(108) It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.

(109) As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.

(110) It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.

(111) Accordingly, the scope should be judged in terms of the claims that follow.