Differential audio data compensation
11700485 · 2023-07-11
Inventors
- Anders Røser HANSEN (Ballerup, DK)
- Stig PETRI (Ballerup, DK)
- Svend FELDT (Ballerup, DK)
- Poul Peder HESTBEK (Ballerup, DK)
- Casper FYNSK (Ballerup, DK)
- Mirjana Adnadjevic (Ballerup, DK)
CPC classification
H04R1/025
ELECTRICITY
H04M1/6033
ELECTRICITY
H04R2430/20
ELECTRICITY
H04R1/02
ELECTRICITY
H04R3/02
ELECTRICITY
H04R2201/40
ELECTRICITY
International classification
H04R3/02
ELECTRICITY
G10K11/178
PHYSICS
Abstract
A method is disclosed, the method comprising obtaining at least one first information indicative of audio data gathered by at least one first microphone, and at least one second information indicative of audio data gathered by at least one second microphone; determining a differential information indicative of one or more differences between at least two pieces of information, wherein the differential information is determined based, at least in part, on the at least one first information and the at least one second information; and compensating of an impact onto the audio data, wherein audio data of the first information and/or the second information is compensated based, at least in part, on the determined differential information. Further, an apparatus, and a system are disclosed.
Claims
1. A speakerphone comprising: a speakerphone housing, wherein a first microphone and a second microphone are arranged in the speakerphone housing, an output transducer arranged in the speakerphone housing, a processor in communication with the first microphone, the second microphone and the output transducer, wherein the processor is configured to receive electrical signals from the first microphone and the second microphone, wherein the electrical signals represent ambient sound received by the first microphone and second microphone respectively, a first calibration unit, wherein the signal from the first microphone is fed to the first calibration unit which is configured to provide a first calibrated signal, an adaptive filter configured to receive the first calibrated signal, the adaptive filter providing a first filtered output signal based on the received first calibrated signal, a second calibration unit, wherein the signal from the second microphone is fed to the second calibration unit which is configured to provide a second calibrated signal, wherein the processor is configured to establish a processed signal based on the first filtered output signal and the second calibrated signal, wherein signal contributions from the output transducer are substantially eliminated in the processed signal.
2. A speakerphone according to claim 1, wherein the processor is arranged in the speakerphone housing or in a remote device configured to be in wired or wireless communication with the speakerphone.
3. The speakerphone according to claim 1, wherein the processing performed by the processor includes determining a difference between the first filtered output signal and the second calibrated signal.
4. The speakerphone according to claim 1, wherein the adaptive filter is configured to operate based on the difference between the first filtered output signal and the second calibrated signal.
5. The speakerphone according to claim 1, wherein differential information is determined by subtracting the first filtered output signal from the second calibrated signal, or by subtracting the second calibrated signal from the first filtered output signal.
6. The speakerphone according to claim 1, wherein one or more parameters in audio processing are adjusted based on a determined current aging state of at least one of the first microphone and the second microphone so that aging impacts causing alteration of audio data gathered by the at least one of the first microphone and the second microphone are compensated.
7. The speakerphone according to claim 6, wherein current aging state includes or is a determination of a current degradation state.
8. The speakerphone according to claim 1, wherein in the speakerphone housing, the first microphone, the second microphone and the output transducer are arranged on an axis or line when viewed from a top surface of the speakerphone housing.
9. The speakerphone according to claim 1, wherein in the speakerphone housing, one of the first microphone and the second microphone is arranged closer to the output transducer, and wherein the adaptive filter provides an output signal which compensates for the difference in closeness between the first and second microphones relative to the output transducer.
10. The speakerphone according to claim 1, wherein, owing to the adaptive filter, an adaptive beamformer is established based on the first microphone signal and the second microphone signal so as to eliminate the signal from the output transducer.
11. A system, comprising: the speakerphone according to claim 1; and an external microphone configured to be in communication with the speakerphone and the speakerphone configured to establish the processed signal by including signals from the external microphone.
12. The system according to claim 11, wherein the system is configured for cancelling at least one echo perceivable by a far-end user of the speakerphone.
13. A method, performed by a speakerphone according to claim 1, the method comprising: obtaining at least one first information indicative of audio data gathered by at least one first microphone; obtaining at least one second information indicative of audio data gathered by at least one second microphone; determining a differential information, wherein the differential information is determined based, at least in part, on the at least one first information and the at least one second information; and compensating of an impact onto the audio data, wherein audio data of the first information and/or the second information is compensated based, at least in part, on the determined differential information.
14. The method according to claim 13, further comprising: adjusting audio data gathered by the at least one first and/or the at least one second microphone, wherein one or more parameters impacting performance of the at least one first and/or the at least one second microphone are compensated in the processing so that a difference in performance between the at least one first microphone and the at least one second microphone is evened out, such as minimized.
15. A system, comprising: the speakerphone according to claim 2; and an external microphone configured to be in communication with the speakerphone and the speakerphone configured to establish the processed signal by including signals from the external microphone.
16. A system, comprising: the speakerphone according to claim 3; and an external microphone configured to be in communication with the speakerphone and the speakerphone configured to establish the processed signal by including signals from the external microphone.
17. A system, comprising: the speakerphone according to claim 4; and an external microphone configured to be in communication with the speakerphone and the speakerphone configured to establish the processed signal by including signals from the external microphone.
18. A system, comprising: the speakerphone according to claim 5; and an external microphone configured to be in communication with the speakerphone and the speakerphone configured to establish the processed signal by including signals from the external microphone.
19. A system, comprising: the speakerphone according to claim 6; and an external microphone configured to be in communication with the speakerphone and the speakerphone configured to establish the processed signal by including signals from the external microphone.
20. A system, comprising: the speakerphone according to claim 7; and an external microphone configured to be in communication with the speakerphone and the speakerphone configured to establish the processed signal by including signals from the external microphone.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity; they show only the details needed to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter.
DETAILED DESCRIPTION
(9) The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.
(10) The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
(11) A speakerphone or speakerphone system may be or include an apparatus according to the first exemplary aspect that is adapted to improve or augment the hearing capability of a far-end user receiving an acoustic signal (e.g. audio data). ‘Improving or augmenting the hearing capability of a far-end user’ may include compensating audio data. The “speakerphone” may further refer to a device such as a conference telephone, an earphone or a headset adapted to receive audio data electronically, possibly compensating the audio data and providing the possibly compensated audio data as an audible signal to at least one of the user's ears. Such audio data may be provided in the form of an acoustic signal radiated into the user's outer ear, an acoustic signal transferred as mechanical vibrations to the user's inner ears through bone structure of the user's head and/or through parts of the middle ear of the user, or electric signals transferred directly or indirectly to the cochlear nerve and/or to the auditory cortex of the user.
(12) A speakerphone or speakerphone system (e.g. also referred to as hearing system herein) may refer to a system comprising at least one apparatus according to the first exemplary aspect, e.g. comprising at least two microphones, where the respective devices are adapted to cooperatively provide audio data to e.g. a far-end user's ears and/or a device at least according to the further example. A speakerphone or speakerphone system comprises at least a speakerphone housing, an output transducer/speaker and an input system comprising a first and a second microphone. The speakerphone or speakerphone system may be configured to communicate with one or more further auxiliary device(s) that communicate with the at least one apparatus, the auxiliary device affecting the operation of the at least one apparatus and/or benefitting from the functioning of the at least one apparatus. A wired or wireless communication link between the at least one apparatus and the auxiliary device may be established that allows for exchanging information (e.g. control and status signals, possibly audio signals and/or audio data) between the at least one apparatus and the auxiliary device. Such auxiliary devices may include at least one of a remote control, a remote microphone, an audio gateway device, a wireless communication device, e.g. a mobile phone (such as a smartphone) or a tablet or another device, e.g. comprising a graphical interface, a public-address system, a car audio system or a music player, or a combination thereof. The audio gateway may be adapted to receive a multitude of audio signals, such as from an entertainment device like a TV or a music player, a telephone apparatus like a mobile telephone, or a computer, e.g. a PC. The auxiliary device may further be adapted to (e.g. allow a user to) select and/or combine an appropriate one of the received audio signals (or combination of signals) for transmission to the at least one hearing device.
The remote control is adapted to control functionality and/or operation of the at least one hearing device. The function of the remote control may be implemented in a smartphone or other (e.g. portable) electronic device, the smartphone/electronic device possibly running an application (APP) that controls functionality of the at least one hearing device.
(13) In general, a speakerphone or speakerphone system includes i) an input unit, such as a microphone, for receiving audio data (e.g. an acoustic signal from a user's surroundings) and providing a corresponding input audio signal, and/or ii) a receiving unit for electronically receiving input audio data. The speakerphone or speakerphone system further includes a signal processing unit for processing the input audio signal and an output unit for providing an audible signal to the user in dependence on the compensated audio signal.
(14) The input unit may include multiple input microphones, e.g. for providing direction-dependent audio signal processing. Such a directional microphone system is adapted to (relatively) enhance a target acoustic source among a multitude of acoustic sources in the user's environment and/or to attenuate other sources (e.g. noise). In one aspect, the directional system is adapted to detect (such as adaptively detect) from which direction a particular part of the microphone signal originates. This may be achieved by using conventionally known methods. The signal processing unit may include an amplifier that is adapted to apply a frequency-dependent gain to the input audio signal. The signal processing unit may further be adapted to provide other relevant functionality such as compression, noise reduction, etc.
(15) For decades, the “ideal” business meeting was one in which all participants were physically present in the room at the same time. This enabled attendees to easily present ideas, ask questions, exchange information and receive feedback.
(16) Of course, it isn't always practical—or even advisable today—to gather all meeting participants in a single room. In addition, as business has become increasingly global and technology more robust, many business meetings are now conducted via videoconference, with the aid of speakerphones.
(17) To ensure the best possible meeting, speakerphones need to accurately reproduce the experience of being physically present with other call participants. Among the most important activities they need to replicate is double-talk. A natural part of conversation, double-talk occurs when people on opposite sides of a digital call interrupt, question or comment on each other's statements mid-sentence, thus speaking simultaneously.
(18) The Technical Challenge of Enabling Double-Talk
(19) While double-talk occurs easily when all participants are physically present in a room, replicating this activity can be a difficult technical challenge for many communication devices—and speakerphones in particular.
(20) This is because speakerphones are, by design, open audio systems that include a loudspeaker and microphone (or series of microphones) that are fully open to the air. While this has the advantage of enabling many attendees to be present on a call, it can also create an unwanted side effect: echo, which occurs when sound from the loudspeaker is picked up by the microphone and looped back to the person speaking at the far end.
(21) The Many Causes of Echo
(22) While audio waves traveling from a speakerphone's loudspeaker to its microphone are the most common cause of echo, they are far from the only one. Echo can have many causes, and, in fact, many factors can occur simultaneously to create echo situations.
(23) The design and construction of the speakerphone can be major contributors to echo. Poorly designed devices, or ones built using low-quality materials and components, enable audio waves to immediately reach the microphone as vibrations that pass through the body of the speakerphone, causing echo.
(24) The size and layout of a speakerphone can also cause echo. In general, the chance of creating an echo increases as the distance between loudspeaker and microphone decreases, because the audio waves have a shorter path to travel. Small speakerphones can be particularly susceptible to echo because their compact design limits the available space between loudspeaker and microphone.
(25) The local sound environment, such as the conference room in which the call is being conducted, also plays a role in creating echo. Audio waves from the loudspeaker naturally reflect off walls, open laptop screens and other objects, including people, within a conference room. Because sound passes through or reflects off these surfaces at different speeds, these signals can arrive at the microphone at different times, with structure-bound waves arriving immediately and airborne waves arriving milliseconds later.
(26) Strategies for Controlling Echo
(27) With all the ways echo can present itself, how do we eliminate it from speakerphone calls? That's a question that audio engineers have grappled with for decades and are still working to perfect. It's important to note that echo is an ever-changing sound artifact that constantly appears in new shapes and forms. In reality, echo cannot be eliminated altogether using current technologies; however, it can be controlled—and, in many cases, controlled highly effectively.
(28) Strategies for mitigating echo include everything from speakerphone design, construction and materials to employing highly advanced echo-cancellation features.
(29) Concert Hall Vs. Library: Quantifying the Echo-Cancelling Challenge
(30) Cancelling echo from speakerphones is a major task. How big? Essentially the equivalent of reducing the volume of a very loud rock concert to that of a quiet library.
(31) The loudest sound on a speakerphone occurs at the rim of the loudspeaker and has a sound pressure level (SPL) of approximately 115-125 dB. This SPL should be reduced to approximately 35 dB so that it is drowned out by the microphone self-noise and is thus unlikely to produce an echo.
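As a rough sanity check on these figures, the required attenuation can be computed directly. The sketch below is illustrative only; the function names and the 35 dB target are taken from the numbers above, not from any particular implementation:

```python
# Rough arithmetic behind the "concert hall vs. library" comparison:
# reducing ~115-125 dB SPL at the loudspeaker rim to ~35 dB SPL means
# 80-90 dB of attenuation, i.e. shrinking the signal amplitude by a
# factor of roughly 10,000-30,000.

def required_attenuation_db(source_spl_db: float, target_spl_db: float) -> float:
    """Attenuation in dB needed to bring source_spl_db down to target_spl_db."""
    return source_spl_db - target_spl_db

def amplitude_ratio(attenuation_db: float) -> float:
    """Linear amplitude factor corresponding to a given attenuation in dB."""
    return 10 ** (attenuation_db / 20)

for spl in (115.0, 125.0):
    att = required_attenuation_db(spl, 35.0)
    print(f"{spl:.0f} dB SPL -> 35 dB SPL: {att:.0f} dB attenuation, "
          f"amplitude factor {amplitude_ratio(att):,.0f}")
```

The factor of 20 (not 10) reflects that SPL is defined on amplitude, not power.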
(32) Controlling Echo Through Design
(33) The first step in controlling speakerphone echo begins with thoughtful hardware design and relentless attention to detail. Everything from the physical design of the device to the quality of materials used in construction plays a role in reducing echo, enabling natural conversation and creating an outstanding meeting experience. This is however not the main focus of the present disclosure.
(34) Some acoustic design considerations include: Intelligent chassis design. No detail is too small when it comes to the physical design of speakerphones. For instance, the loudspeaker system may be isolated from the rest of the chassis to avoid transferring sound from the loudspeaker to the microphones through chassis vibrations. Optimal speaker location. Because the risk of echo increases the closer the loudspeaker is to the microphone, the design positions the microphone array as far away from the loudspeaker as possible to minimize the risk of echo. Performance-quality components. The highest-quality speakers and microphones are components that are linear and offer a flat, predictable frequency response, which helps minimize “surprise” sounds that can often cause echo.
(35) The present disclosure mainly relates to controlling echo through signal processing.
(36) Because echo comes in various forms and can originate from many sources, outstanding speakerphone design alone isn't enough to fully control it. Thus, audio engineers employ an array of digital signal-processing strategies—ranging from the basic to the highly advanced—to identify and mitigate sources of echo.
(37) In general, most echo-cancelling strategies seek to compare the microphone signal with the loudspeaker signal and then “purify” the microphone signal by removing all sound components coming from the loudspeaker.
(38) An overview of signal processing strategies includes:
(39) Microphone Disabling
(40) The most rudimentary echo-cancelling system—and one used in some lesser-grade speakerphones—works by automatically muting the microphone on the receiving speakerphone while the person on the speaking end is talking. When an algorithm in the speakerphone senses a signal at the loudspeaker, which indicates the other person is talking, it shuts down the microphone on the receiving side, eliminating any unwanted sounds and the possibility of echo. When the system recognizes an absence of signal at the speaker, it enables the microphone on the receiving side, allowing that person to then respond.
(41) The biggest drawback to this echo-cancelling strategy is that it doesn't allow double-talk. Without the ability for one person to interrupt or acknowledge the other mid-sentence or both to speak at the same time, this method doesn't lend itself to natural conversation or a positive user experience.
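The gating logic described above can be sketched as follows. This is a hypothetical, minimal illustration; real devices operate on short audio frames and use carefully tuned energy thresholds:

```python
def half_duplex_gate(mic_frames, spk_frames, energy_threshold=1e-3):
    """Rudimentary half-duplex echo control: mute each microphone frame
    while the corresponding loudspeaker frame carries audible energy."""
    out = []
    for mic, spk in zip(mic_frames, spk_frames):
        spk_energy = sum(s * s for s in spk) / max(len(spk), 1)
        if spk_energy > energy_threshold:   # far end talking: mute near end
            out.append([0.0] * len(mic))
        else:                               # far end silent: pass microphone
            out.append(list(mic))
    return out

# Frame 0: far end active (mic muted). Frame 1: far end silent (mic passes).
frames = half_duplex_gate([[0.5, 0.5], [0.5, 0.5]],
                          [[1.0, 1.0], [0.0, 0.0]])
print(frames)  # [[0.0, 0.0], [0.5, 0.5]]
```

The all-or-nothing muting is exactly what prevents double-talk: whenever the far end is active, the near end's speech is discarded along with the echo.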
(42) Use of Adaptive Microphone Pick-Up Patterns
(43) A much more advanced echo-cancelling strategy incorporates directional microphones into the speakerphone design. Rather than completely turning off the receiving microphone when a signal is present at the loudspeaker, an algorithm instead shifts the microphone from omnidirectional mode to a directional pick-up pattern pointing away from the loudspeaker, thus minimizing the audio traveling from loudspeaker to microphone. When the system recognizes an absence of signal at the loudspeaker, it shifts the microphone back to omnidirectional mode, constantly ensuring optimal double-talk performance.
(44) Delaying and Subtracting Loudspeaker Signal from Microphone Signal
(45) This strategy employs several advanced signal-processing techniques to negate the slight leakage from the loudspeaker to the microphone, which occurs regardless of the quality of the microphone's pick-up pattern. The loudspeaker signal is looped back to the microphone signal path, and then delayed and inverted, to cancel any residual airborne loudspeaker signal that may have leaked into the microphone. This advanced process of inverting and phase-delaying a signal and combining it with the original can be highly effective at promoting double-talk and minimizing the risk of echo.
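A toy version of this delay-and-subtract idea is sketched below, under the simplifying assumption that the leakage delay and gain are already known exactly; in practice they must be estimated, typically with an adaptive filter:

```python
import numpy as np

def subtract_loudspeaker(mic, ref, delay, gain):
    """Delay and scale the loudspeaker reference to match its leaked copy
    in the microphone signal, then subtract it (i.e. add the inverted copy)."""
    est = np.zeros_like(mic)
    est[delay:] = gain * ref[:len(mic) - delay]   # modelled leakage
    return mic - est

rng = np.random.default_rng(1)
speech = 0.1 * rng.standard_normal(1000)          # near-end talker
ref = rng.standard_normal(1000)                   # loudspeaker signal
leak = np.zeros(1000)
leak[5:] = 0.4 * ref[:995]                        # leakage: 5-sample delay, gain 0.4
mic = speech + leak
cleaned = subtract_loudspeaker(mic, ref, delay=5, gain=0.4)
print(np.allclose(cleaned, speech))  # True: the modelled leakage cancels exactly
```

With a perfect model the leakage cancels completely while the near-end speech passes through untouched, which is what makes this family of techniques friendly to double-talk.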
(46) Advanced Echo-Cancelling
(47) While a combination of the echo-cancelling strategies discussed previously can be effective in controlling echo when using speakerphones, an advanced echo-cancellation system has been developed.
(48) This ultra-high-performance system may include a combination of linear and non-linear signal processing that constantly measures, estimates and updates itself based on thousands of pre-defined parameters. Combined with a state-of-the-art microphone array that effectively separates human voice from other extraneous audio, a system according to the present disclosure may ensure high-quality sound and a natural meeting experience without disturbing sound artifacts.
(49) Some of the components of this system may include: Reference disturbance signal—Emulates the loudspeaker signal that potentially can cause echo. When the signal is played through the loudspeaker at different volume levels it will become more or less distorted depending on the level. This feature estimates the disturbance signal as accurately as possible to filter it out effectively. Spatial environment analysis—Analyzes the meeting room and identifies possible spatial changes during the meeting, such as a person moving closer to the microphones or people entering the room. Adaption and removal—Constantly adapts to the changing environment and removes the disturbance signal.
(50) Preferably, speakerphones should be able to reproduce the experience of being physically present with other call participants to ensure a natural speaking and meeting environment.
(51) Among the most important activities speakerphones need to replicate is double-talk, which occurs when people on opposite sides of a digital call speak simultaneously. For speakerphones, replicating this activity can be technically difficult because of the risk of echo, which often results when an audio signal from the loudspeaker travels back to the microphone and is then transmitted back to the person speaking.
(52) Echo is a continually changing sound artifact that can never be fully eliminated, but it can be controlled through superior design and materials as well as several advanced echo-cancelling strategies.
(53) Why are Speakerphones Prone to Echo, but Headsets Aren't?
(54) While speakerphones are susceptible to echo, headsets are largely impervious to it. Why? A couple of reasons. For starters, speakerphones need to play at a higher volume than headsets, which increases the chance of echo. In addition, unlike speakerphones, which are open audio systems, headsets are largely closed audio systems. The foam padding around the ears prevents audio waves from escaping and being picked up by the microphone, and thus causing an unwanted echo.
(56) The speakerphone system 100, 200, 300 comprises a speakerphone housing or chassis 17. Within the speakerphone chassis 17, a speaker chamber 16 is arranged. The speaker chamber 16 is configured to receive a speaker unit, e.g. a loudspeaker 14, e.g. to play back audio information provided by a far-end user, e.g. to one or more users or participants of a telephone conference utilizing the speakerphone system 100, 200, 300. Further, the speakerphone system 100, 200, 300 comprises at least two microphones, at present a first microphone 10a and a second microphone 10b. The first microphone 10a and the second microphone 10b are arranged in a microphone chamber of the speakerphone system 100, 200, 300. The first microphone 10a and the second microphone 10b are arranged along a symmetry line SL extending along a longitudinal direction of the speakerphone system 100, 200, 300. The speakerphone system 100, 200, 300 is configured to perform and/or control a method according to all exemplary aspects. The speakerphone system 100, 200, 300 may comprise, or be at least a part of, the apparatus according to the first exemplary aspect.
(58) The first microphone 10a and the second microphone 10b may be configured as respective bidirectional microphones. Such a bidirectional microphone has a polar pattern as illustrated in the bidirectional microphone polar plot 2.
(62) There could be conditions for adaptation other than an active loudspeaker. Another situation is double talk: here the loudspeaker will be playing while the person using the speakerphone is talking as well, and the microphones will pick up both signals. The goal of the speakerphone is to cancel the loudspeaker signal and convey the user's speech, but it is important to realize that the user's speech in this situation is considered noise by the adaptation algorithm. Hence, stopping or slowing down adaptation during double talk is most likely needed.
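One common way to decide when to freeze or slow adaptation is a Geigel-style double-talk detector, sketched below. The threshold value and function shape are illustrative assumptions, not taken from the present disclosure:

```python
def geigel_double_talk(mic_frame, ref_history, threshold=0.5):
    """Geigel-style double-talk detector: flag double talk when the
    microphone peak exceeds a fraction of the recent loudspeaker peak,
    suggesting near-end speech is present and adaptation should be
    frozen or slowed down."""
    ref_peak = max((abs(r) for r in ref_history), default=0.0)
    mic_peak = max(abs(m) for m in mic_frame)
    return mic_peak > threshold * ref_peak

# Echo only: microphone level well below the loudspeaker reference.
print(geigel_double_talk([0.1, -0.05], [1.0, 0.8, -0.9]))  # False
# Double talk: near-end speech raises the microphone level.
print(geigel_double_talk([0.9, -0.7], [1.0, 0.8, -0.9]))   # True
```

When the detector fires, the adaptive filter's coefficients are held constant so the near-end speech cannot corrupt the estimated echo path.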
(64) System 200 comprises an adjustment element 20 which has been added in comparison to the system 100.
(65) The calibration circuits 11a, 11b are maintained in comparison to the system 100.
(67) A method to compensate for the reduced pickup of sound in a plane of symmetry, as explained above, is to physically rearrange the microphones in relation to the speaker.
(70) For the adaptation to perform well in canceling the contribution from the speaker, the adaptive filter should incorporate a significant attenuation (to ensure the amplitudes of the speaker signal are equal) before subtraction. As the distances between the speaker and the microphones are significantly smaller than the distances to the wanted speech, it can be shown that this array type implements a canceling “point” instead of a canceling plane as above. In the literature this is sometimes referred to as a nearfield beamformer; in this context, however, the configuration is used as a near-field beamformer to cancel the speaker signal in a speakerphone application, which is not the common use. The present disclosure comprises moving one of the microphones very close to the speaker, which is counterintuitive, as one moves the microphone closer to the acoustic source one wishes to remove or eliminate from the microphone signal.
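A minimal sketch of such a two-microphone adaptive canceller is given below, using a normalized LMS update. The NLMS choice, filter length and toy signal model are assumptions for illustration; the disclosure does not prescribe a particular adaptation algorithm:

```python
import numpy as np

def two_mic_adaptive_canceller(near, far, filt_len=16, mu=0.5, eps=1e-8):
    """An adaptive FIR filter maps the near (louder) microphone signal onto
    the far microphone signal; the difference is the processed output.
    Adapting while only the loudspeaker plays steers a null onto it, and
    the converged filter applies an attenuation, since the near microphone
    picks the speaker up more strongly."""
    w = np.zeros(filt_len)
    out = np.zeros_like(far)
    for n in range(filt_len, len(far)):
        x = near[n - filt_len:n][::-1]      # recent near-mic samples
        e = far[n] - w @ x                  # processed output sample
        out[n] = e
        w += mu * e * x / (x @ x + eps)     # NLMS update (loudspeaker-only)
    return out, w

# Loudspeaker-only excitation: the far mic hears a delayed, attenuated copy.
rng = np.random.default_rng(2)
spk = rng.standard_normal(5000)
near = spk                                   # microphone close to the speaker
far = np.zeros(5000)
far[3:] = 0.2 * spk[:-3]                     # farther microphone: weaker, later
out, w = two_mic_adaptive_canceller(near, far)
residual = np.mean(out[-1000:] ** 2)
print(residual, np.mean(far[-1000:] ** 2))   # residual power is far smaller
```

Note that the converged filter's overall gain is well below unity, matching the observation above that the filter must attenuate the near microphone's speaker pickup before subtraction.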
(72) The two microphones 10a, 10b of the system 300 are re-arranged in comparison to the two respective microphones as utilized by the system 100.
(74) In a first step 610, the at least one first information and the at least one second information are obtained, e.g. by receiving the at least one first information and the at least one second information from a first microphone (e.g. microphone 10a), and from a second microphone (e.g. microphone 10b).
(75) In a second step 620, a differential information is determined. The differential information is determined based, at least in part, on the at least one first information and the at least one second information obtained in step 610.
(76) In a third step 630, an impact onto audio data, wherein audio data is represented or comprised by the first information and/or the second information is compensated. The compensating may be performed and/or controlled based, at least in part, on the determined differential information of step 620.
(77) In a fourth step 640, audio data gathered by the at least one first (e.g. microphone 10a) and/or the at least one second microphone (e.g. microphone 10b) is adjusted. One or more parameters of the at least one first (e.g. microphone 10a) and/or the at least one second microphone that may impact a respective performance of the at least one first and/or the at least one second microphone are adjusted. This allows e.g. that a difference in performance between the at least one first microphone and the at least one second microphone may be evened out.
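The four steps above can be sketched end-to-end. Purely for illustration, the "differential information" is reduced here to a long-term RMS level difference between the two microphones, and the compensation is a matching gain; the disclosure covers richer differential information and compensation schemes:

```python
import numpy as np

def differential_compensate(sig1, sig2, eps=1e-12):
    """Steps 610-640 in miniature: obtain two microphone signals, determine
    differential information (here a long-term RMS level difference in dB),
    and apply a matching gain so that a sensitivity drift in the first
    microphone, e.g. from aging, is evened out."""
    rms1 = np.sqrt(np.mean(np.square(sig1))) + eps
    rms2 = np.sqrt(np.mean(np.square(sig2))) + eps
    diff_db = 20.0 * np.log10(rms1 / rms2)    # differential information
    gain = 10.0 ** (-diff_db / 20.0)          # compensating gain for sig1
    return sig1 * gain, sig2, diff_db

rng = np.random.default_rng(3)
ref = rng.standard_normal(2000)
comp1, comp2, diff_db = differential_compensate(0.5 * ref, ref)
print(round(float(diff_db), 1))          # about -6.0 (a 6 dB sensitivity loss)
print(np.allclose(comp1, comp2))         # True: levels evened out
```

In a deployed system the differential information would be tracked slowly over time so that gradual aging is compensated without reacting to short-term acoustic differences between the microphones.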
(78) The present disclosure also relates to at least the following item: An apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least perform: obtaining at least one first information indicative of audio data gathered by at least one first microphone (10a), and at least one second information indicative of audio data gathered by at least one second microphone (10b); determining a differential information indicative of one or more differences between at least two pieces of information, wherein the differential information is determined based, at least in part, on the at least one first information and the at least one second information; and compensating of an impact onto the audio data, wherein audio data of the first information and/or the second information is compensated based, at least in part, on the determined differential information.
(79) It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.
(80) As used, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
(81) It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.
(82) Accordingly, the scope should be judged in terms of the claims that follow.