Hearing instrument with off-line speech messages
09788128 · 2017-10-10
Assignee
Inventors
CPC classification
H04R2225/55
ELECTRICITY
International classification
Abstract
A hearing instrument configured for use with a device, the hearing instrument includes: an interface for reception of a message and/or a speech message from the device, wherein the speech message is a converted form of the message and is generated using a text-to-speech processor; a memory for storage of the message and/or the speech message; and a message processor configured for, at a selected time, outputting audio samples of the speech message for transmission to a user of the hearing instrument.
Claims
1. A hearing instrument configured for use with a device, the hearing instrument comprising: a microphone for generating a microphone output signal; an interface for reception of a message and/or a speech message from the device, wherein the speech message is a converted form of the message and is generated using a text-to-speech conversion algorithm; a memory for storage of the message and/or the speech message; a message processor configured for outputting audio samples of the speech message; and a speaker configured to provide an audio signal for a user of the hearing instrument based on the microphone output signal and the audio samples of the speech message.
2. The hearing instrument according to claim 1, wherein the hearing instrument comprises a hearing aid.
3. The hearing instrument according to claim 1, wherein the hearing instrument comprises a timer that is synchronized with a timer of the device, and wherein the message processor is configured for automatically outputting the audio samples at a selected time as determined with the timer of the hearing instrument.
4. The hearing instrument according to claim 1, wherein the interface is also for reception of information regarding a selected time from the device.
5. A hearing instrument system comprising the hearing instrument of claim 1, and the device.
6. The hearing instrument system according to claim 5, wherein the device is configured to transmit the message to the hearing instrument upon detection of a connection with the hearing instrument.
7. The hearing instrument system according to claim 5, wherein the device comprises: a first interface that is configured for connection with a Wide-Area-Network, a second interface configured for connection with the hearing instrument, and a central processor configured for controlling reception of information relating to the user through the Wide-Area-Network, and transmission of the message to the hearing instrument based on the information.
8. The hearing instrument system according to claim 7, wherein the time information indicating the selected time is included in the information.
9. The hearing instrument according to claim 1, wherein a duration of a transmission of the message to the hearing instrument is longer than a duration in which the audio samples of the speech message are outputted by the message processor.
10. The hearing instrument system according to claim 5, further comprising a user interface configured to receive a user command to sequentially output two or more messages stored in the memory of the hearing instrument for transmission to a user of the hearing instrument system.
11. The hearing instrument system according to claim 5, further comprising a user interface configured to receive a user command to delete a selected message in the memory of the hearing instrument.
12. The hearing instrument system according to claim 5, further comprising a user interface configured to receive a user command to repeat transmission of a selected message.
13. The hearing instrument system according to claim 5, further comprising a user interface configured to receive a user command to mute a selected message.
14. A communication method performed by a hearing instrument, comprising: receiving a message or a speech message from a device, wherein the speech message is a converted form of the message and is generated using a text-to-speech conversion algorithm; storing the message and/or the corresponding speech message in a memory of the hearing instrument; generating a microphone output signal by a microphone of the hearing instrument; and providing an audio signal by a speaker of the hearing instrument for a human based on the microphone output signal and audio samples of the speech message.
15. The method according to claim 14, wherein the speech message is stored in the memory of the hearing instrument.
16. The hearing instrument of claim 1, wherein the message comprises a device-generated notification.
17. The hearing instrument of claim 16, wherein the device-generated notification comprises receipt of a new email, SMS, instant message, or traffic announcement, or an update in a social or professional network.
18. The hearing instrument of claim 1, wherein the message comprises an email, an SMS, a post in a social or professional network, a blog post, an RSS/Atom feed, a news feed, or an instant message.
19. A hearing instrument configured for use with a device, the hearing instrument comprising: a microphone for receiving sound and for generating a microphone output signal based on the received sound; an interface for reception of a message and/or a speech message from the device, wherein the speech message is a converted form of the message and is generated using a text-to-speech conversion algorithm; a memory for storage of the message and/or the speech message; a processor configured to apply a first weight to the microphone output signal to obtain a weighted microphone output signal; and a speaker for providing an audio signal to a user of the hearing instrument, wherein the audio signal is based on the weighted microphone output signal and an audio sample of the speech message.
20. The hearing instrument of claim 19, wherein the processor is also configured to apply a second weight to the audio sample of the speech message to obtain a weighted audio sample, the first weight being different from the second weight.
21. The hearing instrument of claim 19, wherein the processor is configured to apply the first weight so that a volume associated with the microphone output signal will be reduced or muted while the audio sample of the speech message is being presented to the user as a part of the audio signal.
22. The hearing instrument of claim 20, further comprising a mixer for combining the weighted microphone output signal with the weighted audio sample.
23. The hearing instrument of claim 19, wherein the first weight is zero.
24. The hearing instrument of claim 19, wherein the processor is also configured to obtain the speech message from the memory and output the speech message at a predetermined future time.
25. The hearing instrument of claim 19, wherein the hearing instrument comprises a hearing aid configured to compensate for a hearing loss of the user.
26. The hearing instrument of claim 1, wherein the message processor is configured to output the audio samples of the speech message based on time information indicating a selected time.
27. The method of claim 14, wherein the message is transmitted from the device to the hearing instrument in a duration that is longer than a duration in which the audio signal based on the audio samples of the speech message is provided for the human.
28. The method of claim 14, further comprising storing time information in the memory of the hearing instrument.
29. The hearing instrument of claim 1, further comprising a mixer for combining the microphone output signal with the audio samples of the speech message.
30. The hearing instrument of claim 1, wherein the message processor is also configured to provide a first weight for the microphone output signal and a second weight for the audio samples.
31. The method of claim 14, further comprising combining the microphone output signal with the audio samples of the speech message.
32. The method of claim 14, further comprising applying a first weight for the microphone output signal and applying a second weight for the audio samples of the speech message.
33. The hearing instrument according to claim 26, wherein the time information comprises stored information.
34. The hearing instrument according to claim 26, wherein the time information indicates a predetermined future time.
35. The method according to claim 28, wherein the time information indicates a predetermined future time.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The drawings illustrate the design and utility of embodiments, in which similar elements are referred to by common reference numerals. These drawings are not necessarily drawn to scale. In order to better appreciate how the above-recited and other advantages and objects are obtained, a more particular description of the embodiments will be rendered with reference to the accompanying drawings. These drawings depict only typical embodiments and are therefore not to be considered limiting of their scope.
(2)
(3)
(4)
(5)
DETAILED DESCRIPTION
(6) Various exemplary embodiments are described hereinafter with reference to the figures. It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in other embodiments even if not so illustrated or not so explicitly described.
(7) The new method, hearing instrument, and hearing instrument system will now be described more fully hereinafter with reference to the accompanying drawings, in which various examples of the new method, hearing instrument, and hearing instrument system are illustrated. The new method, hearing instrument, and hearing instrument system according to the appended claims may, however, be embodied in different forms and should not be construed as limited to the examples set forth herein.
(8)
(9) The illustrated hearing aid circuitry 10 comprises a front microphone 12 and a rear microphone 14 for conversion of an acoustic sound signal from the surroundings into corresponding microphone audio signals 16, 18 output by the microphones 12, 14. The microphone audio signals 16, 18 are digitized in respective A/D converters 20, 22 for conversion of the respective microphone audio signals 16, 18 into respective digital microphone audio signals 24, 26 that are optionally pre-filtered (pre-filters not shown) and combined in signal combiner 28, for example for formation of a digital microphone audio signal 30 with directionality as is well-known in the art of hearing aids. The digital microphone audio signal 30 is input to the mixer 32 configured to output a weighted sum 34 of the signals input to the mixer 32. The mixer output 34 is input to a hearing loss processor 36 configured to generate a hearing loss compensated output signal 38 based on the mixer output 34. The hearing loss compensated output signal 38 is input to a receiver 40 for conversion into acoustic sound for transmission towards an eardrum (not shown) of a user of the hearing aid.
(10) The illustrated hearing aid circuitry 10 is further configured to receive audio signals from various devices capable of audio streaming, such as smart phones, mobile phones, radios, media players, companion microphones, broadcasting systems in public places, e.g. in a church, an auditorium, a theatre, a cinema, etc., and public address systems, such as in a railway station, an airport, a shopping mall, etc.
(11) In the illustrated example, digital audio, including audio samples of speech messages, is transmitted wirelessly to the hearing aid, e.g. from a smart phone, and received by the hearing aid antenna 42 connected to a radio receiver 44. The radio receiver 44 retrieves from the received radio signal the audio samples 46, the time and date at which the audio samples of the speech message are to be played back to the user, possible transmitter identifiers, possible network control signals, etc. The audio samples of the speech message are stored in an audio file in the memory 48 together with the time and date at which the audio file, i.e. the speech message, has to be played back to the user.
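The storage scheme just described may be sketched as follows. This is an illustrative sketch only; the class and field names, the dictionary-based layout, and the single-timestamp representation of the time and date are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch of a message store that keeps received speech-message
# samples together with their scheduled play-back times (all names assumed).

class MessageStore:
    """Holds received speech messages with their play-back times."""

    def __init__(self):
        self._messages = []

    def store(self, samples, playback_time):
        # The audio samples are stored together with the time and date
        # (here a single timestamp) at which they must be played back.
        self._messages.append({"samples": samples, "time": playback_time})

    def due(self, now):
        """Return and remove all messages whose play-back time has arrived."""
        ready = [m for m in self._messages if m["time"] <= now]
        self._messages = [m for m in self._messages if m["time"] > now]
        return ready
```

A message processor could poll `due()` against the hearing aid's internal timer and forward any returned samples to the mixer.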
(12) At the time and date at which the corresponding speech message is to be played back to the user, the message processor 54 controls retrieval of the audio samples from the memory 48 and forwarding of the audio samples 50 to the mixer 32. The message processor 54 also sets the weights 52 with which the digital microphone audio signal 30 and the audio samples 50 are added together in the mixer 32 to form the weighted output sum 34.
(13) The weights may be set so that the audio file is played back to the user while other signals input to the mixer are attenuated during play back of the audio file. Alternatively, all or some of the other signals may be muted during play back of the audio file. The user may enter a command through a user interface of the hearing aid of a type well-known in the art, controlling whether the other signals are muted or attenuated.
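The weighted mixing described above may be sketched as follows. The function name, weight values, and list-based sample representation are illustrative assumptions, not the actual signal-processing implementation.

```python
# Illustrative sketch of the mixer: a weighted, sample-by-sample sum of the
# digital microphone signal and the speech-message samples (names assumed).

def mix(mic_samples, message_samples, mic_weight, message_weight):
    """Return the weighted sum of the two inputs, sample by sample."""
    return [mic_weight * m + message_weight * s
            for m, s in zip(mic_samples, message_samples)]

# During play back of the audio file, the microphone path may be attenuated...
attenuated = mix([0.5, 0.5], [1.0, 1.0], mic_weight=0.25, message_weight=1.0)
# ...or muted entirely (weight zero), according to the user's preference.
muted = mix([0.5, 0.5], [1.0, 1.0], mic_weight=0.0, message_weight=1.0)
```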
(14) The hearing aid may store more than one speech message with identical or similar times and dates to be played back; i.e. play back of one or more speech messages may become due during ongoing play back of another speech message, whereby play back of more than one speech message may overlap fully or partly in time.
(15) Such a situation may be handled in various ways. For example, the hearing aid may play back more than one speech message simultaneously or partly simultaneously. In the mixer 32, each speech message is treated as a separate input added to the mixer output with its own weight, whereby the speech messages are transmitted to the user at substantially unchanged respective play back times.
(16) Alternatively, the speech messages may have assigned priorities and may be transmitted to the hearing aid together with information on the priority, e.g. an integer larger than or equal to 1, where e.g. the lower the integer, the higher the priority. Alarm messages may for example have the highest priority, traffic announcements the second highest priority, and other communications the lowest priority. Such messages may then be played back sequentially in the order of priority, one at a time, without overlaps.
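The priority scheme may be sketched as follows; the field names and priority values are illustrative assumptions. Python's `sorted` is stable, so equal-priority messages keep their arrival order.

```python
# Illustrative sketch of priority-ordered play back: the lower the integer,
# the higher the priority, and messages play sequentially without overlap.

def playback_order(messages):
    """Sort pending messages for sequential, non-overlapping play back."""
    return sorted(messages, key=lambda msg: msg["priority"])

pending = [
    {"text": "traffic announcement", "priority": 2},
    {"text": "alarm",                "priority": 1},
    {"text": "new email",            "priority": 3},
]
queue = playback_order(pending)  # alarm first, then traffic, then email
```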
(17) The hearing aid may be configured to always mute one or more other signals received by the hearing aid during transmission of a speech message of highest priority towards the eardrum of the user of the hearing aid.
(18)
(19) In the illustrated circuitry 10, the text-to-speech processor 56 is configured to generate a speech message, such as a spoken reminder, from the text message received from the device, and the generated digital audio samples 58 are stored in an audio file in the memory 48 in the hearing aid for subsequent transmission to the mixer 32 at the selected time also received from the device and stored in the memory 48.
(20)
(21) The device has a user interface 120, namely a touch screen 120 as is well-known from conventional smart phones, for user control and adjustment of the device and possibly the hearing aid (not shown) interconnected with the device.
(22) The user may use the user interface 120 of the smart phone 100 to input information to the tools (not shown) in a way well-known in the art.
(23) The smart phone 100 may further transmit speech messages output by the text-to-speech processor 116 to the hearing aid through the audio interface 114.
(24) In addition, the microphone of the hearing aid may be used for reception of spoken user commands that are transmitted to the device for reception at the interface 114 and input to the unit 118 for speech recognition and decoding of the spoken commands and outputting the decoded spoken commands as control inputs to a central processor 110. The central processor 110 controls the hearing aid system to perform actions in accordance with the received spoken commands.
(25) The central processor 110 also controls an Internet interface 112 configured for connection with the Internet, e.g. a Wireless Local Area Network interface, a GSM interface 122, etc., and an audio and data interface 114, preferably a low power wireless interface, such as a Bluetooth Low Energy wireless interface, configured for connection with the hearing aid for transmission and reception of audio samples and other data to and from the hearing aid.
(26) Through the Internet, the device has access to electronic time management and communication tools used by the user for communication and for storage of time management and communication information relating to the user.
(27) The tools may include electronic calendar system(s), email system(s), such as Microsoft Outlook Express, Lotus Notes, Windows Mail, Mozilla Thunderbird, Postbox, Apple Mail, Opera Mail, KMail, Hotmail, Yahoo, Gmail, AOL, etc., social network(s) and professional network(s), such as Facebook®, LinkedIn®, Google+, tumblr, Twitter, MySpace, etc., RSS/Atom feeder(s), such as Bloglines, Feedly, Google Reader, My Yahoo!, NewsBlur, Netvibes, etc., news feeder(s), etc., well-known for management of daily activities and communications.
(28) Reminders, notifications, and received communication may include tasks to be performed, reminders of calendar dates, such as birthdays, anniversaries, appointments, meetings, etc., notifications on receipt of a new SMS or new email, a new Facebook update, a new tweet, a new RSS feed, a new traffic announcement, etc., and/or the actual item notified, e.g. the SMS itself.
(29) The central processor 110 is configured to access the tools for electronic time management and communication facilitating use of the hearing instrument system to manage daily activities and communication through the Wide-Area-Network. A hearing aid app (not shown) executed by the central processor 110 instructs the smart phone to forward reminders and updates and received communication from the tools to the hearing aid as speech messages in accordance with settings previously made by the user and recorded with the tools.
(30) The device comprises the text-to-speech processor 116 configured for conversion of messages, such as reminders or notifications or received communication etc, into speech messages for transmission to the hearing aid.
(31) The user may have a plurality of devices with internet interfaces providing access to the tools and information relating to the user, and some or all of such devices may have the text-to-speech processor 116 and the interface 114 to the hearing aid and may constitute the device disclosed above.
(32) The speech message is transmitted to the hearing aid together with timing information on the date and time of day for play back of the speech message. Speech messages that are to be played back without delay after receipt by the hearing aid may have zeroes in the transmitted date field.
(33) Typically, when the user accesses the tools in order to record or edit an event that requires attention or a task to be performed, the user has the option of specifying a message, namely a reminder, to be sent to the user in advance. Typically, the user may select that the reminder is forwarded as an SMS and/or an email and/or displayed in a pop-up window on a computer and/or is forwarded to the hearing aid as a speech message.
(34) Further, the user may select the time of presentation of the reminder to the user in several ways. For example, the user may specify the date and time of day for presentation of the reminder to the user, or the user may specify the number of seconds, minutes, hours and/or days in advance of term expiry of the recorded event or task, the reminder should be presented to the user, e.g. 3 days before a recorded birthday, or the user may specify the number of seconds, minutes, hours and/or days to elapse from data entry until presentation of the reminder to the user, etc.
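The three ways of selecting the presentation time described above may be sketched with standard date arithmetic; the helper names below are illustrative assumptions, not part of the claimed invention.

```python
# Illustrative sketch of the three ways of selecting the reminder time
# (helper names assumed).
from datetime import datetime, timedelta

def at_absolute_time(year, month, day, hour, minute):
    """The user specifies the date and time of day directly."""
    return datetime(year, month, day, hour, minute)

def before_event(event_time, advance):
    """The user specifies how long before the event the reminder plays."""
    return event_time - advance

def after_entry(entry_time, delay):
    """The user specifies how long after data entry the reminder plays."""
    return entry_time + delay

# e.g. a reminder 3 days before a recorded birthday:
birthday = datetime(2017, 6, 1, 9, 0)
reminder_time = before_event(birthday, timedelta(days=3))
```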
(35) Typically, the user also receives messages in the form of notifications on incoming communication, such as receipt of a new email, SMS, instant message, etc., or receipt of updates in social or professional networks, such as Facebook, Twitter, LinkedIn, etc., or RSS/Atom feeds.
(36) The message may also include received information, e.g. an email, an SMS, a post in a social or professional network, a blog post, an RSS/Atom feed, a news feed, an instant message, etc.
(37) Thus, examples of speech messages include reminders on, e.g., meetings, birthdays, social gatherings, journeys, to-do items, etc., notifications on, e.g., tweets, emails, news, traffic updates, social network updates, web page updates, etc., and communications, e.g. an email, an SMS, a post in a social or professional network, a blog post, an RSS/Atom feed, a news feed, an instant message, etc.
(38) The speech message may be accompanied by a distinct sound, such as a short single-note tone, or a distinct sequence of notes, such as a notification jingle, e.g. a personalized notification jingle.
(39)
(40) The hearing aid 10 is configured for reception of a speech message 80 from the smart phone 100.
(41) In one example, the speech message 80 is a reminder of a meeting taking place on the same day at 10 o'clock. The user recorded the meeting in his electronic calendar a week before, and the user also set a reminder to alert the user 15 minutes before start of the meeting, i.e. at 9.45 a.m. the same day. The user recorded the meeting with a computer at work without an interface to the hearing aid 10. However, the user has set the smart phone 100 to synchronize with the electronic calendar every half hour whenever the smart phone is connected to the Internet through a WiFi network, and since the workplace has a WiFi network, the smart phone 100 was synchronized with the calendar server shortly after entry of the new meeting. The user has also set the smart phone 100 to send reminders to the hearing aid 10 within 24 hours of the time at which the reminders have to be played back by the hearing aid 10. The hearing aid 10 and the smart phone 100 establish a mutual communication link whenever they are within coverage of their radio transmitters. Since the user usually carries the smart phone 100 and the hearing aid 10 simultaneously, the communication link between them is usually in operation, and thus, at approximately 10 a.m. the day before the day of the meeting, the reminder is transferred as a speech message 80 to the hearing aid 10. The user set the reminder to be played back 15 minutes before start of the meeting. Thus, at 9.45 a.m., the hearing aid 10 plays back the message “remember meeting with CEO in room 1A at 10 am” to the user. If the user presses a button (not visible) on the BTE housing within 15 seconds after termination of play back, the reminder is deleted from the memory of the hearing aid; if not, the reminder is played back again 5 minutes before start of the meeting and subsequently deleted from the memory of the hearing aid.
(42) The spoken reminder 80 is converted from a text reminder received by the smart phone 100 from the electronic calendar system through the Internet 200. The conversion to the spoken reminder takes place in a text-to-speech processor 116 in the smart phone 100. The text-to-speech processor 116 provides the spoken reminder as digital audio samples that are transmitted to the hearing aid 10 and stored in an audio file in the memory of the hearing aid. At play back, the digital audio samples of the audio file are converted to an analogue audio signal in a digital-to-analogue converter of the hearing aid, and the analogue audio signal is input to a receiver of the hearing aid 10 that outputs the acoustic speech message to the user.
(43) The user interface 120 of the smart phone 100 also constitutes a user interface of the time management and communication tools used by the user as is well-known in the art. The user interface 120 of the smart phone 100 also constitutes a user interface of the hearing aid as is well-known in the art.
(44) In addition, the user interface 120 of the smart phone 100 is also used for user entry of conditions specifying when a speech message in the memory of the hearing aid is to be deleted, e.g. upon play back, upon second play back, upon receipt of a specific user entry, etc.
(45) The user interface 120 of the smart phone 100 is also used to set volume levels of play back of the speech messages and the volume of reproduced sounds received by the microphone(s) of the hearing aid and possible other audio sources, such as media players, TV, radio, hearing loops, etc, of the hearing aid.
(46) Other equipment than the smart phone 100 may also constitute the device. For example, the user may have a computer at home connected to the Internet with an interface to the hearing aid 10. Through the Internet, the home computer, like the smart phone, has access to the electronic time management and communication tools used by the user for communication and for storage of time management and communication information relating to the user, and like the smart phone 100, the computer may regularly synchronize with the information handled by the tools as is well-known in the art. The tools may include electronic calendar system(s), email system(s), such as Microsoft Outlook Express, Lotus Notes, Windows Mail, Mozilla Thunderbird, Postbox, Apple Mail, Opera Mail, KMail, Hotmail, Yahoo, Gmail, AOL, etc., social network(s) and professional network(s), such as Facebook®, LinkedIn®, Google+, tumblr, Twitter, MySpace, etc., RSS/Atom feeder(s), such as Bloglines, Feedly, Google Reader, My Yahoo!, NewsBlur, Netvibes, etc., news feeder(s), etc., well-known for management of daily activities and communications.
(47) The information may include tasks to be performed, calendar dates, such as birthdays, anniversaries, appointments, meetings, etc, contacts, websites of interest, etc.
(48) Similar to the smart phone 100, the hearing aid 10 and the home computer establish a mutual communication link whenever they are within coverage of their respective radio transmitters, and whenever the communication link is established, the home computer transfers speech messages to the hearing aid 10.
(49) Thus, the hearing aid 10 may receive speech messages from any device with which the communication link can be established.
(50) The speech messages may also be notifications on incoming communication, such as receipt of a new email, SMS, instant message, or traffic update, or updates in social or professional networks, such as Facebook, Twitter, LinkedIn, etc., or RSS/Atom feeds.
(51) The speech message may also include the received information, e.g. an email, an SMS, a post in a social or professional network, a tweet, a blog post, an RSS/Atom feed, a news feed, an instant message, etc.
(52) Thus, examples of speech messages include reminders on, e.g., meetings, birthdays, social gatherings, journeys, to-do items, etc., notifications on, e.g., tweets, emails, news, social network updates, web page updates, etc., and communications, e.g. an email, an SMS, a post in a social or professional network, a blog post, an RSS/Atom feed, a news feed, an instant message, etc.
(53) Some speech messages may be played back immediately upon receipt by the hearing aid.
(54) Speech messages to be played back immediately may be transmitted to the hearing aid together with a time and date to be played back equal to zero.
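The play-back decision implied by the two preceding paragraphs may be sketched as follows; the single-timestamp representation of the time and date field is an assumption for illustration.

```python
# Illustrative sketch of the play-back decision: a zeroed time-and-date
# field requests immediate play back on receipt; any other value is taken
# as the scheduled play-back time.

def playback_time(stored_time, received_at):
    """Return the time at which the speech message should be played back."""
    if stored_time == 0:       # zeroed date field: play back immediately
        return received_at
    return stored_time         # otherwise, play back at the stored time
```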
(55) The speech message may be accompanied by a notification jingle, such as a personalized notification jingle.
(56) The speech message, or the message, may be automatically removed from the memory of the hearing aid after play back in order to make the part of the memory occupied by the (possibly spoken) message available to a new (possibly spoken) message.
(57) Typically, the user may access the tools and the stored information from any computer that is connected to the Wide-Area-Network by logging-in to a specific account, e.g. with a username and a password.
(58) The user may authenticate other devices to access the tools and the stored information when logged-in to the account in question.
(59) In order for the device to be authenticated and allowed access to the tools and the stored information and to receive information from the tools, the user may have to log onto the corresponding accounts from the device.
(60) The hearing aid may have a timer providing information on date and time of day, and the message processor may be configured for transmitting the audio file at a selected date and time of day.
(61) The timer may be synchronized with the device, e.g. whenever data is transmitted to the hearing aid.
(62) The new hearing aid system takes advantage of the fact that a user of the hearing aid system, especially a hearing aid user, already wears the hearing aid; therefore, the user is able to listen to played back speech messages without having to perform additional tasks, such as mounting a headphone or headset on his or her head, bringing a telephone to the ear, looking at a screen and selecting information to be displayed and/or played back, looking at a dashboard of a car and selecting information to be displayed and/or played back, etc.
(63) The hearing aid may have a wireless interface for reception of data transmitted from the device, including speech messages and possibly the selected time, i.e. timing information specifying when the hearing aid is controlled to play back the speech message.
(64) The user may use a user interface of the hearing aid to command the hearing aid to sequentially play back the messages of the audio files currently stored in the memory of the hearing aid, e.g. in ascending or descending order of time of receipt, or in ascending or descending order of time to be played back, etc., as also specified by the user using the user interface.
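The user-selectable ordering described above may be sketched as follows; the field names and the ordering keys are illustrative assumptions.

```python
# Illustrative sketch of user-selectable sequential play back of stored
# messages, ordered by time of receipt or by scheduled play-back time,
# ascending or descending (all names assumed).

def ordered_messages(messages, key="received", descending=False):
    """Order stored messages as chosen through the user interface."""
    return sorted(messages, key=lambda m: m[key], reverse=descending)

stored = [
    {"text": "meeting reminder", "received": 2, "scheduled": 1},
    {"text": "new email",        "received": 1, "scheduled": 3},
]
by_receipt   = ordered_messages(stored, key="received")
by_play_back = ordered_messages(stored, key="scheduled")
```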
(65) The user may select a new time for the message to be played back using the user interface. For example, tapping a push button twice may cause the speech message to be played back again 5 minutes later.
(66) Thus, the selected time may be a time for playing back the message as previously specified by the user during recording or editing of the event or task in question and transmitted to the hearing aid for storage together with the message in the hearing aid.
(67) The speech message may be played back at more than one selected time, each of which may be transmitted to the hearing aid for storage with the message in question.
(68) With the illustrated hearing aid system, the user is relieved of the task of consulting other equipment for updates on upcoming events and incoming communication; rather, the user need not change anything or take any particular actions in order to be able to receive speech messages.
(69) The transmission of messages from the smart phone 100 to the hearing aid 10 need not take place at the time at which the hearing aid plays the speech message back. Rather, the transmission may occur any time before the time of play back; e.g. a reminder may be transmitted to the hearing aid, together with the time for play back of the reminder, upon recording or editing of the reminder, or whenever the hearing aid is within receiving range of the transmitter of the device.
(70) The data rate of the transmission may be slow, since the message samples are not used for streaming; rather, the data is stored in a memory in the hearing aid for later play back. Thus, data transmission may be performed whenever data transmission resources are available, and there is no need for the device to be in contact with the hearing aid at the precise time of speech message play back, e.g. when reminding the user of something.
(71) In this way, the communication link, e.g. the wireless communication link, need not be particularly fast or particularly reliable. For example, the link data rate need not be fast enough to transmit audio in real-time. Still, the speech messages may be played back to the user as high quality audio, since the speech messages may be read out of the memory of the hearing aid at a data rate much higher than the data rate of the communication link.
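A back-of-the-envelope calculation illustrates this point; all figures below are assumed example values, not parameters from the disclosure.

```python
# Illustrative arithmetic: because the message is played back later from
# memory, the transfer over the link need not keep up with real time.

def transfer_seconds(duration_s, sample_rate_hz, bits_per_sample, link_bps):
    """Time to transfer an uncompressed speech message over the link."""
    total_bits = duration_s * sample_rate_hz * bits_per_sample
    return total_bits / link_bps

# A 10 s message at 16 kHz / 16 bit is 2,560,000 bits; over a 10 kbit/s
# link the transfer takes 256 s, far longer than the 10 s play back,
# which is acceptable because play back happens later, from memory.
t = transfer_seconds(10, 16_000, 16, 10_000)
```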
(72) Data transmission to the hearing aid may be performed slowly, whenever the communication link is available, and the data transmission is robust to possible communication drop-outs, e.g. due to noise.
(73) Since the data rate is not critical, and since data transmission may be interrupted and resumed without interfering with the desired timing of speech message play back to the user, the synchronization may be performed in the background without interfering with the other desired functions of the hearing aid.
(74) Although particular embodiments have been shown and described, it will be understood that they are not intended to limit the claimed inventions, and it will be obvious to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the claimed inventions. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. The claimed inventions are intended to cover alternatives, modifications, and equivalents.