HEARING AID COMPRISING A USER INTERFACE
20230074554 · 2023-03-09
Assignee
Inventors
- Sudershan Yalgalwadi SREEPADARAO (Smørum, DK)
- Anders MENG (Smørum, DK)
- Meng GUO (Smørum, DK)
- Mojtaba FARMANI (Smørum, DK)
- Martin KURIGER (Fribourg, CH)
- Mikkel GRØNBECH (Smørum, DK)
- Nels Hede Rohde (Smørum, DK)
- Thomas JENSEN (Smørum, DK)
CPC classification (Electricity)
- H04R2225/61
- H04R1/1041
- H04R25/70
- H04R3/02
- H04R25/554
International classification
Abstract
A hearing aid configured to be worn by a user, the hearing aid comprising a user interface allowing the user to control functionality of the hearing aid, and a feedback sensor for repeatedly providing a feedback signal indicative of a current estimate of feedback from an output transducer to an input transducer of the hearing aid, wherein the user interface is based on changes to the current estimate of the feedback path, e.g. provided by the user. A method of operating a hearing aid is further disclosed. Thereby an alternative user interface for a hearing aid may be provided. The invention may e.g. be used in hearing aids or headsets, or a combination thereof.
Claims
1. A hearing aid configured to be worn by a user, the hearing aid comprising an input transducer for picking up sound from an environment around the user when wearing the hearing aid and providing an electric input signal representing said environment sound; a processor for processing said electric input signal, including to apply a frequency and level dependent amplification to said electric input signal, or a signal originating therefrom, and providing a processed output signal; and an output transducer for converting said processed output signal to stimuli perceivable by the user as sound; a user interface allowing the user to control functionality of the hearing aid; and a feedback sensor for repeatedly providing a feedback signal indicative of a current estimate of feedback from an output transducer to an input transducer of the hearing aid, wherein the user interface is based on changes to the current estimate of the feedback path, wherein the processor comprises a control unit configured to enter a command mode when a specific trigger signal is received, and wherein the control unit is configured to detect one of a number of predefined changes to the feedback signal when said command mode is entered, and wherein each of said number of predefined changes to the feedback signal is associated with a specific command for controlling the hearing aid.
2. A hearing aid according to claim 1 wherein the control unit is configured to reduce said amplification, when said command mode is entered.
3. A hearing aid according to claim 2 wherein the control unit is configured to reduce said amplification by a predefined amount or factor.
4. A hearing aid according to claim 2 wherein the control unit is configured to reduce said amplification by a predefined amount or factor in dependence of said trigger signal.
5. A hearing aid according to claim 1 wherein said feedback sensor comprises an adaptive filter for providing said feedback signal.
6. A hearing aid according to claim 1 comprising memory (MEM) wherein said number of predefined changes to the feedback signal are stored.
7. A hearing aid according to claim 6 wherein each of said predefined changes to the feedback signal is associated with a specific command for controlling the hearing aid.
8. A hearing aid according to claim 7 configured to execute the command associated with a detected change to the feedback signal.
9. A hearing aid according to claim 1 wherein the feedback signal is based on a frequency response of an estimated feedback path from said output transducer to said input transducer.
10. A hearing aid according to claim 5, wherein the feedback signal is based on a frequency response of an estimated feedback path from said output transducer to said input transducer, and wherein the control unit is configured to monitor the frequency response of the estimated feedback path in a limited frequency range.
11. A hearing aid according to claim 5, wherein the feedback signal is based on a frequency response of an estimated feedback path from said output transducer to said input transducer, wherein a magnitude of the predefined changes is above a threshold.
12. A hearing aid according to claim 2 wherein the control unit is configured to reduce its amplification in certain frequency regions.
13. A hearing aid according to claim 1 wherein said trigger signal is related to the reception of a telephone call.
14. A hearing aid according to claim 1 being constituted by or comprising an air-conduction type hearing aid or a bone-conduction type hearing aid, or a combination thereof.
15. A method of operating a hearing aid configured to be worn by a user, the hearing aid comprising a user interface allowing the user to control functionality of the hearing aid, the hearing aid comprising an input transducer for picking up sound from an environment around the user when wearing the hearing aid and providing an electric input signal representing said environment sound; a processor for processing said electric input signal, including to apply a frequency and level dependent amplification to said electric input signal, or a signal originating therefrom, and providing a processed output signal; and an output transducer for converting said processed output signal to stimuli perceivable by the user as sound; the method comprising repeatedly providing a feedback signal indicative of a current estimate of feedback from an output transducer to an input transducer of the hearing aid, providing said user interface based on changes to said current estimate of the feedback path, entering a command mode when a specific trigger signal is received, and detecting one of a number of predefined changes to the feedback signal when said command mode is entered, and wherein each of said number of predefined changes to the feedback signal is associated with a specific command for controlling the hearing aid.
16. A method according to claim 15 wherein said changes to the current estimate of the feedback path are provided by user gestures.
17. A method according to claim 15 wherein said specific trigger signal is a signal from a communication device indicating the presence of a telephone call, or any other input from such device, or other electronic device, requiring some sort of acceptance or rejection from the user.
18. A method according to claim 15 comprising: providing a reduction of said amplification of a signal of an audio path from said input transducer to said output transducer, when said command mode is entered.
19. A method according to claim 18 comprising: providing said reduction of amplification by a predefined amount or factor.
20. A method according to claim 18 comprising: providing said reduction of amplification by 3 dB or more, or by 6 dB or more.
21. A method according to claim 18 comprising: providing said reduction of amplification in one or more frequency regions, where feedback is most likely to occur.
22. A method according to claim 18 comprising: providing said reduction of amplification in a frequency range between 2 kHz and 5 kHz.
23. A method according to claim 15 comprising: terminating the command mode in case no hand gesture has been detected within a predefined time.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0102] The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they just show details to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effect will be apparent from and elucidated with reference to the illustrations described hereinafter in which:
[0106] The figures are schematic and simplified for clarity, and they just show details which are essential to the understanding of the disclosure, while other details are left out. Throughout, the same reference signs are used for identical or corresponding parts.
[0107] Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.
DETAILED DESCRIPTION OF EMBODIMENTS
[0108] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.
[0109] The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
[0110] The present application relates to the field of hearing aids, in particular to a user interface for a hearing aid.
[0111] The solution described in the present disclosure makes use of existing dynamic feedback sensor technology in state-of-the-art hearing aids. An exemplary application of the solution may be to enable a truly hands-free experience during a telephone call.
[0112] The dynamic feedback sensor is capable of detecting an onset of acoustic feedback, e.g. when a human hand is brought physically close to the hearing aid, while it is worn by the user. The feedback manifests itself as a level change in the feedback signal, e.g. from a low level, when no hand is present near the hearing aid, to a high level, when a hand is moved physically close to the hearing aid. The signal also returns to a low level when the hand is withdrawn from the hearing aid.
[0113] This change in level presents an opportunity to use the dynamic feedback sensor as a proximity sensor for hand movements.
[0114] Further, the duration of time a hand remains close to the hearing aid corresponds to the duration for which the feedback signal remains at a high level.
[0115] Hence, the following logic can be established: [0116] A hand moved close to the hearing aid, held there for a short duration and withdrawn, corresponds to the feedback signal going to a high level for a short duration and returning to its original level. This can be used to interpret the action as an intended input of the user, e.g. “Answer” the phone call, and e.g. used as a trigger to initiate an action, e.g. to establish an audio communication path with the mobile phone. [0117] A hand moved close to the hearing aid, held there for a long duration and withdrawn, corresponds to the signal going to a high level for a long duration and returning to its original level. This can be used to interpret the action as an intended input of the user, e.g. “Hang up” or “Reject” the phone call, and e.g. used as a trigger to initiate an action, e.g. to disable the audio communication path with the mobile phone (“Hang up”) (or to signal the rejection of the phone call to the mobile phone (“Reject”)).
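The short/long-hold logic above can be sketched as a small classifier. This is an illustrative sketch only; the function name and the 0.2 s / 2.0 s thresholds are assumptions, not values fixed by the disclosure.

```python
# Illustrative sketch only: map the duration the feedback signal stayed at
# a high level to a command. The 0.2 s / 2.0 s thresholds and all names
# are assumptions, not values fixed by the disclosure.

def classify_gesture(high_duration_s: float,
                     min_valid_s: float = 0.2,
                     short_long_split_s: float = 2.0) -> str:
    """Interpret the high-level duration of the feedback signal."""
    if high_duration_s < min_valid_s:
        return "none"      # too brief to be a deliberate gesture
    if high_duration_s < short_long_split_s:
        return "answer"    # short hold: accept the call
    return "reject"        # long hold: reject / hang up

print(classify_gesture(0.5))   # short hold -> answer
print(classify_gesture(3.0))   # long hold  -> reject
```

The same mapping can of course be inverted (short hold to reject, long hold to accept), as noted later in the disclosure.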
[0118] The above procedure can be used to implement a user interface based on hand gestures, e.g. to manage a mobile phone call, as described in further detail in connection with
[0121] In addition to the respective subtraction units (‘+’), the forward path further comprises respective analysis filter banks (FB-A1, FB-A2) connected to the subtraction units and configured to convert the (digitized, time-domain) output signals (ER1, ER2) of the subtraction units (‘+’) to a time-frequency representation (X.sub.1, X.sub.2), where each of the error signals is provided in a frequency sub-band representation (k, l), where k and l are frequency and time indices, respectively, and where k=1, …, K and K is the number of frequency sub-bands (e.g. equal to the order of a Fourier transform algorithm, e.g. an STFT). The forward path further comprises a beamformer (BF) connected to the outputs (X.sub.1, X.sub.2) of the analysis filter banks (FB-A1, FB-A2) and configured to provide a spatially filtered (beamformed) signal (Y.sub.BF). The beamformed signal (Y.sub.BF) is provided as a weighted combination of the electric input signals (X.sub.1, X.sub.2) based on predefined or adaptively updated filter weights. The beamformer (BF) may e.g. be configured to attenuate noise in the environment of the user, e.g. enabling a better perception of a target signal, e.g. representing speech of a communication partner in the environment. The forward path further comprises a forward path processing part (HAG) connected to the output (Y.sub.BF) of the beamformer (BF) and configured to apply one or more processing algorithms to the spatially filtered signal. The one or more processing algorithms may e.g. include one or more of a compressive amplification algorithm and a noise reduction algorithm. The forward path processing part (HAG) provides a processed signal (Y.sub.G), which is fed to a synthesis filter bank (FB-S1) for converting the frequency sub-band signals (Y.sub.G) to a time-domain signal (OUT). The time-domain signal (OUT) is fed to the output transducer (SP) for presentation to the user's eardrum or skull bone.
In a normal mode of operation, the reference signal (OUT) to the adaptive algorithms (ALG1, ALG2), which is identical to the processed (output) signal (OUT) played to the user via the output transducer (SP), is based on the beamformed signal (Y.sub.BF). In other words, the output signal (OUT) presented to the user is the normal hearing aid signal (i.e. an enhanced environment signal, e.g. focusing on a speaker in the environment, but which also includes a contribution from the user's voice, although not in an optimal form).
[0122] The hearing aid further comprises a wireless interface (e.g. comprising an audio interface) to a communication device, e.g. a telephone, e.g. a mobile telephone. The wireless interface may be based on a proprietary or standardized protocol. The proprietary protocol may e.g. be Ultra WideBand (UWB) or similar technology. The standardized protocol may e.g. be Bluetooth or Bluetooth low energy. The wireless interface may be implemented by appropriate antenna and transceiver circuitry (indicated by transmitter (Tx) and receiver (Rx) in
[0123] The control unit (CONT) is configured to detect when a telephone call is received by the receiver (Rx) (via signal PHIN). The control unit (CONT) is configured to set the hearing aid in a ‘call ready’ mode wherein it monitors the feedback signal or signals (EST1, EST2) from at least one of the feedback estimation units (AF1, AF2), cf. also
[0124] Detection of one of a number of (e.g. frequency dependent) predefined changes to the feedback signal (or signals) may be provided by storing the feedback signal when the incoming call is detected (just before entering the ‘call ready’ mode), determining a possible change to the feedback signal occurring after entering the ‘call ready’ mode (but within the predefined maximum time) by comparing (e.g. subtracting) the current feedback signal with the feedback signal stored just before entering the ‘call ready’ mode. The control unit (CONT) is configured to compare the observed change in the feedback signal with the number of predefined changes to the feedback signal stored in memory (MEM) of the hearing aid. Each of the predefined changes to the feedback signal stored in memory (MEM) may e.g. be induced by certain (associated) gestures of the user, e.g. hand movements (cf. e.g. description in connection with step 4 of the flow diagram in
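The store-and-compare step described above may be sketched as follows, assuming per-frequency-band feedback magnitudes in dB; the function names, band count, and the 3 dB match tolerance are illustrative assumptions.

```python
# Hypothetical sketch of the store-and-compare step: the feedback estimate
# stored just before entering 'call ready' serves as a baseline, and the
# observed change is matched against predefined patterns held in memory.
# Band count, names, and the 3 dB tolerance are illustrative assumptions.

def feedback_change(current_db, baseline_db):
    """Per-band change (dB) of the current feedback estimate vs. the baseline."""
    return [c - b for c, b in zip(current_db, baseline_db)]

def match_predefined(change_db, predefined, tol_db=3.0):
    """Return the command whose stored change pattern is closest, within tol_db."""
    best_cmd, best_err = None, float("inf")
    for cmd, pattern in predefined.items():
        err = max(abs(c - p) for c, p in zip(change_db, pattern))
        if err <= tol_db and err < best_err:
            best_cmd, best_err = cmd, err
    return best_cmd

baseline = [-40.0, -38.0, -42.0]   # stored just before 'call ready'
current = [-28.0, -26.0, -30.0]    # hand near the ear raises the feedback
predefined = {"answer": [12.0, 12.0, 12.0], "reject": [2.0, 2.0, 2.0]}
print(match_predefined(feedback_change(current, baseline), predefined))  # answer
```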
[0125] In case a call is accepted, the control unit (CONT) is configured to enter the ‘call mode’ and route the incoming audio signal (PHIN) from the receiver (Rx), e.g. comprising audio from a far-end communication partner or audio from a one way audio delivery device, to the output transducer (SP) of the hearing aid via the forward path processing part (HAG). The incoming audio signal (PHIN) may e.g. be mixed with the (possibly attenuated) beamformed signal (Y.sub.BF) from the environment, and possibly subjected to processing algorithms of the hearing aid (e.g. to compensate for a user's hearing impairment) before being presented to the user via the output transducer (SP).
[0126] In case the accepted call is a normal two-way telephone call, the control unit (CONT) (being in ‘call mode’) is further configured to activate the own voice pick-up path (cf. top signal path of
[0127] In case the call is rejected, the control unit (CONT) is configured to leave the ‘call ready’ mode and return to ‘normal mode’ (e.g. the mode that the hearing aid was in when the ‘call ready’ mode was entered).
[0128] In case the remote communication partner terminates the telephone call, the control unit will receive or extract a ‘call ended’ message from the signal (PHIN) received from the user's telephone via the wireless receiver (Rx) of the hearing aid. The control unit (CONT) is configured to leave the ‘call mode’ and return to ‘normal mode’ (e.g. the mode that the hearing aid was in when the ‘call ready’ mode was entered).
[0129] In case the user wants to terminate the telephone call, this may be done via a (normal) user interface on the telephone. Alternatively, or additionally, the control unit may be configured to detect a specific change in the feedback signal associated with the action ‘terminate call’. This may e.g. be implemented by arranging that the control unit (CONT) is configured to detect whether or not the specific change to the feedback signal (or signals) (EST1, EST2) (stored in memory (MEM) of the hearing aid) is observed. The specific change in feedback may be induced by a specific hand gesture that creates a large or otherwise easy to detect change in the feedback signal (e.g. a repeated variation between a large and small change of the feedback signal, which, if not induced by a hand gesture of the user, would be highly improbable to occur). When this specific change in the feedback signal is detected, the control unit (CONT) is configured to leave the ‘call mode’ and return to ‘normal mode’ (e.g. the mode that the hearing aid was in when the ‘call ready’ mode was entered).
[0130] Steps in the management of a telephone call via a user interface according to the present disclosure are exemplified below (where ‘HI’ is short for ‘hearing instrument’, intended to be synonymous with the term ‘hearing aid’): [0131] HI is connected to a mobile phone via Bluetooth; [0132] An incoming call notification on the phone is routed to the HI and a ring tone (or a similar prompt) is played on the HI; [0133] The HI goes into “Incoming Call” mode, preparing to either answer or reject the call; [0134] The user can choose one of the two actions (answer, reject): [0135] The user can answer the call by moving his hand close to the HI (or one of the HIs), holding it there for a short duration (ΔT.sub.A), and then withdrawing it; [0136] The user can reject the call by moving his hand close to the HI, holding it there for a long duration (ΔT.sub.R>ΔT.sub.A), and then withdrawing it; [0137] (The gestures may, in principle, be configured ‘the other way around’, so that ΔT.sub.A>ΔT.sub.R); [0138] If the user chooses to “Answer” the call, then the HI goes into “In Call” mode; [0139] At the end of the call, the user can “Hang up” the call by moving his hand close to the HI, holding it there for a long duration (ΔT.sub.H≥ΔT.sub.R) and withdrawing it. [0140] (Again, this gesture may in principle be of any duration, short/long/very long, as we are only waiting for “Hang up” at this stage); [0141] The change in signal level and its duration can be used to trigger further actions, such as to set up a 1-way or 2-way audio path to the mobile phone or to disable the path at the end of the call.
[0142] The actual configuration of durations (T.sub.A, T.sub.R) may also be user-defined (use either the long or short movements for accept/reject), e.g. during fitting, or via a normal user interface of the HI, e.g. via an APP. Further, the actual movements (gestures) applied to the different ‘commands’ may be selectable via a normal user interface of the HI, e.g. among a number of optional gestures and/or durations.
[0143] In addition to the above mentioned ‘answer call’, ‘reject call’ and ‘hang up’ (i.e. ‘terminate call’), other commands related to the telephone call may be introduced via the user interface according to the present disclosure. As an example, a “pause/muted” feature, providing a pause in the connection between the hearing aid and the user's telephone, can be introduced (e.g. to allow a user to do other things without being connected to a far-end communication partner).
[0144] This task of translating the changes in the feedback signal and its duration may be handled by the signal processor of the hearing aid. The feedback signal may e.g. be the estimation signal provided by a feedback estimation system of the hearing aid (e.g. typically provided by an adaptive filter comprising a variable filter whose filter coefficients are adaptively updated by an adaptive algorithm, e.g. an LMS algorithm or an NLMS algorithm, etc.).
[0145] Hence, the (alternative) user interface according to the present disclosure may be implemented using functional parts that are already present in a state-of-the-art hearing aid (digital signal processing and feedback path estimation).
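For illustration, a minimal time-domain NLMS feedback estimator of the kind referred to above might look as follows. This is a sketch under stated assumptions: the filter length, step size, and signal names are illustrative, and a real hearing aid would typically run such a filter per frequency sub-band.

```python
import random

# Minimal time-domain NLMS sketch of the feedback estimator mentioned above:
# a variable filter models the acoustic feedback path from the output signal
# to the microphone, and its coefficients are updated by an NLMS rule.
# Filter length, step size, and signal names are illustrative assumptions.

def nlms_feedback_estimate(out_sig, mic_sig, n_taps=4, mu=0.5, eps=1e-8):
    """Adapt a feedback-path estimate; returns the filter coefficients."""
    w = [0.0] * n_taps
    for n in range(n_taps, len(out_sig)):
        x = out_sig[n - n_taps:n][::-1]                        # recent output samples
        e = mic_sig[n] - sum(wi * xi for wi, xi in zip(w, x))  # error signal
        norm = sum(xi * xi for xi in x) + eps                  # input power (+ eps)
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return w

# Demo: simulate a (noiseless) feedback path h and check that w approaches it.
random.seed(1)
out = [random.uniform(-1.0, 1.0) for _ in range(3000)]
h = [0.5, -0.2, 0.1, 0.05]
mic = [0.0] * len(out)
for n in range(4, len(out)):
    mic[n] = sum(h[k] * out[n - 1 - k] for k in range(4))
w = nlms_feedback_estimate(out, mic)
print([round(wi, 2) for wi in w])   # close to h
```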
[0146] The above procedure is illustrated in the flow diagram of
[0148] State 1: The hearing device is in its “normal operation” mode.
[0149] State 2: If there is an incoming call (directly to the hearing device, or through a phone that is connected to the hearing device via Bluetooth or other connections), the hearing device changes its operation mode to “Call Ready” mode (arrow ‘Yes’ leading to state 3). Otherwise, it stays in its “normal operation” mode (arrow ‘No’ leading to state 1).
[0150] State 3: The hearing device is in the “Call Ready” mode. More specifically, [0151] The hearing device sends a notification to the user; this may be one or more notification tones, voice prompts, and/or caller information (such as names or phone numbers read out for the user) played through its output (receiver/speaker in hearing aids, and vibrator in the case of a bone-conducting hearing aid device). [0152] The hearing device may be configured to reduce its amplification by e.g. 6 dB in certain frequency regions in this mode to avoid that any possible user gesture would lead to (critical) acoustic feedback (e.g. howl). [0153] The system is waiting for hand gestures from the user. The estimated feedback path change from the feedback system will be monitored and used to determine the gestures. Particularly, this can be done by monitoring the frequency response of the estimated feedback path, e.g. in the frequency range between 2-5 kHz. If the magnitude exceeds a certain value within a time window, e.g., by 3 dB over a 0.2-1 second period, a gesture can be declared. As an alternative to the feedback path estimate, the open loop transfer function can also be used for the gesture detection. An open loop transfer function estimation can be done without having any adaptive filters as part of a feedback cancellation system. The magnitude/phase of the open loop transfer function (OLM/OLP) can be determined as:
OLM=L(ω,n)−L(ω,n−D),
OLP=P(ω,n)−P(ω,n−D), [0154] where L is the signal level (in dB), P is the signal phase (both for a signal at any point in the acoustic signal loop), ω is the frequency index, n is the discrete time index, and D is the loop delay in samples. The loop delay is the time needed for a signal to travel through an electric and acoustic loop (e.g. starting from the acoustic input to an input transducer (e.g. a microphone) of the hearing device through the electric forward path to the output of the output transducer (e.g. a loudspeaker) and further via an acoustic feedback path from the output of the output transducer to the input of the input transducer).
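The OLM-based gesture test above can be sketched as follows, assuming one frame per time index and a single frequency band; the 3 dB threshold and the 20-frame persistence window (0.2 s at an assumed 100 frames/s) follow the ranges given above, while all names are illustrative assumptions.

```python
# Illustrative sketch of the OLM-based gesture test: OLM(ω, n) is the level
# difference L(ω, n) − L(ω, n−D) across the loop delay D. During feedback
# onset (hand near the ear) the loop level keeps rising, so OLM stays
# positive; a gesture is declared when OLM exceeds a threshold (3 dB here)
# for a sustained run (20 frames, i.e. 0.2 s at an assumed 100 frames/s).
# Single band, frame-based levels; all names are assumptions.

def olm(levels_db, n, D):
    """Open-loop magnitude estimate (dB) at frame n for a loop delay of D frames."""
    return levels_db[n] - levels_db[n - D]

def gesture_declared(levels_db, D, thr_db=3.0, min_frames=20):
    """True if OLM stays above thr_db for at least min_frames consecutive frames."""
    run = 0
    for n in range(D, len(levels_db)):
        run = run + 1 if olm(levels_db, n, D) > thr_db else 0
        if run >= min_frames:
            return True
    return False

# Simulated feedback onset: the level ramps up 1 dB/frame for 40 frames.
levels = [-40.0] * 30 + [-40.0 + (i + 1) for i in range(40)] + [0.0] * 30
print(gesture_declared(levels, D=4))            # True: sustained rise
print(gesture_declared([-40.0] * 100, D=4))     # False: stable level
```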
[0155] State 4: When a valid gesture has been registered, the user can accept or reject the call; the hearing device is set to either “in call” mode (arrow ‘Accept’ leading to state 5) or back to “normal operation” mode (arrow ‘Reject’ leading to ‘state 1’). More specifically, [0156] To accept the call, the gesture “Hand moved close to HI, held there for a short duration and withdrawn” may e.g. be selected. ‘A short duration’ may typically be 0.5-1 s, but can also be 0.2 s, or up to 2 s. A shorter duration would make the gesture detection unreliable, and a longer time could then be treated as a “long duration” to reject the call. To reject the call, the gesture “Hand moved close to HI, held there for a long duration and withdrawn” may e.g. be selected. ‘A long duration’ may typically be longer than 2-3 seconds (at least longer than the time for ‘a short duration’). [0157] (In principle, the long/short duration can be defined by the user to accept/reject calls). [0158] Instead of or in addition to the short/long duration, the gestures can also be “left and right hand gestures”, e.g., moving the hand to the left hearing device means “accept” and moving the hand to the right hearing device means “reject”. [0159] Different distances from the hand to the hearing aid can also be used to indicate “accept” or “reject”. E.g., a hand approximately 10 cm away means “reject”, whereas a hand approximately 3 cm away means “accept”. [0160] Different repetitions of hand movements can also be used to indicate accept/reject calls. E.g., the hand quickly moving towards/away from the hearing device means “accept”, while two such movements repeated quickly after each other means “reject”. [0161] A combination of the above mentioned may also be used, e.g., on the left-hand side, a short/long duration means accept/reject, respectively, whereas on the right-hand side, a short/long duration means the opposite, i.e., reject/accept, respectively.
In this way, it is possible to always use one hand, or the short/long duration, to accept/reject calls. [0162] In case no valid gesture is detected, a predefined action (e.g. ‘reject call’ or ‘accept call’) may be performed.
[0163] State 5: The hearing device is in the “In call” mode.
[0164] State 6: The hearing aid ends the call if a “hang up” gesture has been registered (arrow ‘Yes’ leading to state 1). If no “hang up” gesture is detected, the hearing aid remains in state 5 (arrow ‘No’ leading to state 5). The “hang up” gesture can be any of the abovementioned gestures or a specific hang-up gesture different from the gestures decided for ‘accept’ and ‘reject’. In case the hang-up signal comes from the far end, the control unit (CONT) (cf. e.g.
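The state flow of [0148]-[0164] can be summarized as a small state machine; the state and event names below are illustrative assumptions, not identifiers from the disclosure.

```python
# Hypothetical sketch of the state flow of [0148]-[0164] as a small state
# machine; state and event names are illustrative assumptions.

TRANSITIONS = {
    ("NORMAL", "incoming_call"): "CALL_READY",   # state 2, arrow 'Yes'
    ("CALL_READY", "accept"): "IN_CALL",         # state 4, arrow 'Accept'
    ("CALL_READY", "reject"): "NORMAL",          # state 4, arrow 'Reject'
    ("CALL_READY", "timeout"): "NORMAL",         # no valid gesture detected
    ("IN_CALL", "hang_up"): "NORMAL",            # state 6, arrow 'Yes'
    ("IN_CALL", "far_end_ended"): "NORMAL",      # 'call ended' from far end
}

def step(state, event):
    """Advance the call state machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "NORMAL"
for event in ["incoming_call", "accept", "hang_up"]:
    state = step(state, event)
print(state)   # NORMAL: call was answered and then hung up
```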
[0166] A ‘trigger input’ may e.g. be a telephone call (cf. e.g. signal PHIN in
[0167] In principle, all user interactions that would be possible with mechanical buttons, physical touching, or changes via a touch screen of an APP can be activated as these ‘gesture based’ commands according to the present disclosure.
[0168] The ‘gesture based’ user interface may be used as a confirmation of a command entered via a normal (e.g. APP-based) user interface, e.g. in case the command in question is especially important, e.g. providing access to an account, or device, e.g. a car. Thereby it may be ensured that the command from the normal user interface is issued by the hearing aid user.
[0169] Embodiments of the disclosure may e.g. be useful in applications such as hearing aids or headsets, or a combination thereof.
[0170] It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.
[0171] As used, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
[0172] It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure.
[0173] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
[0174] The claims are not intended to be limited to the aspects shown herein but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.