A METHOD OF OUTPUTTING SOUND AND A LOUDSPEAKER

20230370777 · 2023-11-16

    Abstract

    A method of converting an audio signal into signals for a number of loudspeaker transducers, where the audio signal is divided up into audio sub signals each representing a particular frequency interval, and where the signal for each loudspeaker transducer comprises a portion of each audio sub signal which varies over time.

    Claims

    1.-14. (canceled)

    15. A method of outputting sound based on an audio signal, the method comprising: receiving the audio signal, generating a number of audio sub signals from the audio signal, each audio sub signal representing the audio signal within a frequency interval within the frequency interval of 100-8000 Hz, where the frequency interval of one sub signal is not fully included in the frequency interval of another sub signal, providing a speaker comprising a plurality of sound output loudspeaker transducers each capable of outputting sound in at least the interval of 100-8000 Hz, the loudspeaker transducers being positioned within a room or venue, generating an electrical sub signal for each loudspeaker transducer, each electrical sub signal comprising a predetermined portion of each audio sub signal, and feeding the electrical sub signals to the loudspeaker transducers, wherein the generation of the electrical sub signals comprises: altering, over time, the predetermined portions of the audio sub signals in each electrical sub signal and providing the electrical sub signals with the same or at least substantially the same sound energy, loudness or intensity.

    16. The method according to claim 15, wherein the step of receiving the audio signal comprises receiving a stereo signal, and wherein the step of generating the audio sub signals comprises generating, for each channel in the stereo audio signal, a plurality of audio sub signals.

    17. The method according to claim 15, wherein the step of receiving the audio signal comprises receiving a mono signal and generating from the audio signal a second signal being at least substantially phase inverted to the mono signal, and wherein the step of generating the audio sub signals comprises generating a plurality of audio sub signals for each of the mono audio signal and the second signal.

    18. The method according to claim 15, further comprising the step of deriving, from the audio signal, a low frequency portion thereof having frequencies below a first threshold frequency and including the low frequency portion at least substantially evenly in all electrical sub signals.

    19. The method according to claim 15, further comprising the step of deriving, from the audio signal, a high frequency portion thereof having frequencies above a second threshold frequency and including the high frequency portion at least substantially evenly in all electrical sub signals.

    20. The method according to claim 15, wherein the step of generating the audio sub signals comprises selecting the frequency interval for one or more of the audio sub signals so that an energy/loudness in each audio sub signal is within 10% of a predetermined energy/loudness value.

    21. The method according to claim 15, wherein the step of generating the electrical sub signals comprises, for one or more electrical sub signal(s), generating the electrical sub signal so that a portion of an audio sub band represented in the electrical sub signal increases or decreases by at least 5% per second.

    22. A system for outputting sound based on an audio signal, the system comprising: an input for receiving the audio signal, a speaker comprising a plurality of sound output loudspeaker transducers each capable of outputting sound in at least the interval of 100-8000 Hz, the loudspeaker transducers being positioned within a room or venue, a controller configured to: generate a number of audio sub signals from the audio signal, each audio sub signal representing the audio signal within a frequency interval within the frequency interval of 100-8000 Hz, where the frequency interval of one sub signal is not fully included in the frequency interval of another sub signal, generate an electrical sub signal for each loudspeaker transducer, each electrical sub signal comprising a predetermined portion of each audio sub signal, and means for feeding the electrical sub signals to the loudspeaker transducers, wherein the controller is configured to generate each of the electrical sub signals so that: the predetermined portions of the audio sub signals in each electrical sub signal alter over time and a sound energy, loudness or intensity of the electrical sub signals is the same or at least substantially the same.

    23. The system according to claim 22, wherein the input is configured to receive a stereo signal, and wherein the controller is configured to generate a plurality of audio sub signals for each channel in the stereo audio signal.

    24. The system according to claim 22, wherein the input is configured to receive a mono signal and wherein the controller is configured to generate, from the audio signal, a second signal being at least substantially phase inverted to the mono signal, and to generate a plurality of audio sub signals for each of the mono audio signal and the second signal.

    25. The system according to claim 22, wherein the controller is further configured to derive, from the audio signal, a low frequency portion thereof having frequencies below a first threshold frequency and include the low frequency portion at least substantially evenly in all electrical sub signals.

    26. The system according to claim 22, wherein the controller is further configured to derive, from the audio signal, a high frequency portion thereof having frequencies above a second threshold frequency and include the high frequency portion at least substantially evenly in all electrical sub signals.

    27. The system according to claim 22, wherein the controller is further configured to select the frequency interval for one or more of the audio sub signals so that an energy/loudness in each audio sub signal is within 10% of a predetermined energy/loudness value.

    28. The system according to claim 22, wherein the controller is further configured to, for one or more electrical sub signal(s), generate the electrical sub signal so that a portion of an audio sub band represented in the electrical sub signal increases or decreases by at least 5% per second.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0099] Unless specified otherwise, the accompanying drawings illustrate aspects of the innovations described herein. Referring to the drawings, wherein like numerals refer to like parts throughout the several views and this specification, several embodiments of presently disclosed principles are illustrated by way of example, and not by way of limitation.

    [0100] FIG. 1 illustrates an embodiment of an audio device.

    [0101] FIG. 2 illustrates a sound sphere corresponding to a representative listening environment.

    [0102] FIG. 3 illustrates another possible sound sphere corresponding to another representative listening environment.

    [0103] FIG. 4 illustrates another possible sound sphere corresponding to another representative listening environment.

    [0104] FIG. 5 illustrates the frequency range for spatial sound source localization.

    [0105] FIG. 6 illustrates a sound distribution on the loudspeaker transducers.

    [0106] FIG. 7a illustrates another sound distribution on the loudspeaker transducers.

    [0107] FIG. 7b illustrates another sound distribution on the loudspeaker transducers.

    [0108] FIG. 8 illustrates three-dimensional directivity factors.

    [0109] FIG. 9 illustrates the audio processing environment.

    [0110] FIG. 10 illustrates another audio processing environment.

    DETAILED DESCRIPTION

    [0111] The following describes various innovative principles related to systems for providing sound spheres having smoothly changing, or constant, three-dimensional in-air transitions. For example, certain aspects of disclosed principles pertain to an audio device configured to project a desired sound sphere, or an approximation thereof, throughout a listening environment.

    [0112] Embodiments of such systems described in the context of method acts are but particular examples of contemplated systems, chosen as being convenient illustrative examples of disclosed principles. One or more of the disclosed principles can be incorporated in various other audio systems to achieve any of a variety of corresponding system characteristics.

    [0113] Thus, systems having attributes that are different from the specific examples discussed herein can embody one or more presently disclosed innovative principles, and can be used in applications not described herein in detail. Accordingly, such alternative embodiments also fall within the scope of this disclosure.

    [0114] In some implementations, the innovations disclosed herein generally concern systems and associated techniques for providing three-dimensional sound spheres with multiple beams that combine to provide smoothly changing sound localization information. For example, some disclosed audio systems can project frequency-band subsections of the sound to the loudspeaker transducers in subtly changing, or constant, phase relationships and with independent amplitudes. Thereby, the audio system can render added, or procured, spatial information to any input audio throughout a listening environment.

    [0115] As but one example, an audio device can have an array of loudspeaker transducers, each constituting an independent full-range transducer. The audio device includes a processor and a memory containing instructions that, when executed by the processor, cause the audio device to render a three-dimensional waveform as a 360-degree spherical shape, in a weighted combination of individual virtual shape components (as coordinated pairs of shape components or otherwise), that are slowly moved across the loudspeaker transducers by a panning process applied to the audio signals. For each loudspeaker transducer, the audio device can filter a received audio signal according to a designated procedure. When executing the dynamic sound sphere, the audio device retains the original sound across the combined sphere components when they are summed in the acoustic space. For the listener, therefore, the resulting sound retains the original sound's frequency envelope, but with the addition, or procurement, of a dynamic, or constant, three-dimensional audio spatialization.
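
    The slow panning of band components across the transducers can be sketched as follows. The raised-cosine pan law, the rotation rate, and all names here are illustrative assumptions; the description leaves the exact panning procedure open. The sketch only preserves the property emphasized above: the portion of each sub signal on each transducer varies over time while the total energy per band is held constant.

```python
import math

def panning_gains(n_transducers, n_bands, t, rate=0.05):
    """Return a bands x transducers gain table at time t (seconds).

    Each band rotates slowly around the transducer ring at `rate`
    revolutions per second, starting from a band-specific offset.
    The squared gains across the transducers sum to 1 for every band,
    so the summed acoustic output keeps each band's original energy.
    """
    gains = []
    for b in range(n_bands):
        # band-specific angular position, drifting slowly over time
        phase = 2 * math.pi * (rate * t + b / n_bands)
        row = []
        for s in range(n_transducers):
            angle = phase - 2 * math.pi * s / n_transducers
            # raised-cosine pan law: nearby transducers get most energy
            row.append(max(0.0, math.cos(angle)))
        # normalize so the squared gains sum to 1 (constant band energy)
        norm = math.sqrt(sum(g * g for g in row))
        gains.append([g / norm for g in row])
    return gains
```

    Calling `panning_gains(6, 3, t)` at successive times yields slowly shifting portions of each sub signal on each of six transducers, while each band's summed output energy stays constant, in the spirit of the equal-energy condition recited above.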

    [0116] The disclosure can combine its three-dimensional audio rendering with summed signals above and below two designated threshold frequencies, where the audio signal outside the thresholds holds no information about a sound's localization that is discernible to the cognitive listening apparatus. These two ranges are summed separately into two monophonic audio signals and can be sent to all loudspeaker transducers simultaneously. The audio device can thereby provide the full three-dimensional spatialization that the cognitive listening apparatus can recognize, together with independent control, for all loudspeaker transducers, of the low and high frequency ranges.
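
    A minimal sketch of the two-threshold split described above, assuming simple first-order one-pole filters (the disclosure does not prescribe a filter design). The low and high bands would be fed evenly to all transducers, and the mid band carries the localization cues:

```python
import math

def split_bands(samples, fs, f_low=100.0, f_high=8000.0):
    """Split a mono signal into low (< f_low), mid, and high (> f_high)
    bands with first-order one-pole filters. The three bands sum back
    to the input sample-for-sample, so no content is lost."""
    def onepole_lp(x, fc):
        # simple one-pole low-pass; `a` is the feedback coefficient
        a = math.exp(-2 * math.pi * fc / fs)
        y, out = 0.0, []
        for s in x:
            y = (1 - a) * s + a * y
            out.append(y)
        return out

    low = onepole_lp(samples, f_low)          # below the first threshold
    below_high = onepole_lp(samples, f_high)  # everything below f_high
    mid = [b - l for b, l in zip(below_high, low)]       # localization band
    high = [s - b for s, b in zip(samples, below_high)]  # above second threshold
    return low, mid, high
```

    Deriving the mid and high bands by subtraction guarantees exact reconstruction when the three bands are summed, mirroring the requirement that the combined output retain the original sound.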

    [0117] The disclosure can manage one mono signal input on one audio device as a number of independent sphere components that is equal to the number of the device's loudspeaker transducers, or as a number of virtual sphere components that is different from the number of the device's loudspeaker transducers. Each sphere component can be a subset of a frequency range, and all components can be evenly distributed along the range as a balanced sum total of the components. These components can then be panned independently on all loudspeaker transducers on the geometric solid's planes, or as polar inverted pairs at opposite points on the geometric solid, or otherwise modified, and they can be positioned at any point between adjacent planes. Used in a paired stereo configuration with two devices, such a system will provide separate three-dimensional spatialization on each of the monophonic audio channels and, with the left channel and the right channel rendered separately by the two audio devices, results in a three-dimensional stereophonic audio rendering system. The stereo pairs can also be panned individually, and need not observe any correlation in opposite points.

    [0118] The disclosure can manage one stereo signal on one audio system in a number of independent iterations that is equal to half the number of the unit's loudspeaker transducers. Each pair is a subset of the frequency range of the stereo signal and can be positioned at opposite points on the geometric solid, or at any point between the solid's adjacent planes. The stereo pairs are panned equally, so that a single audio device will give a satisfactory rendering of the input stereo signal, thereby eschewing the need for two devices for rendering the full information of the original stereophonic signal, while still procuring the described three-dimensional audio cues. The result is a point-source, three-dimensional stereophonic audio rendering system.

    [0119] The instructions stored in processor memory can produce an adaptable division of the frequency bands that can, if so desired, observe equal loudness between the bands. This will avoid sudden directional changes due to changes in energy/loudness at very localized frequency ranges.
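
    The adaptable, equal-loudness band division described above can be sketched as a greedy cumulative-energy scan; the scan and the spectrum representation are illustrative assumptions, not the stored instructions themselves:

```python
def equal_energy_edges(freqs, power, n_bands):
    """Choose band boundaries so each band carries roughly the same
    share of the total spectral energy. `freqs` and `power` describe
    a (hypothetical) measured spectrum, in ascending frequency order."""
    total = sum(power)
    target = total / n_bands
    edges, acc, k = [freqs[0]], 0.0, 1
    for f, p in zip(freqs, power):
        acc += p
        # place an edge each time another 1/n_bands of the energy accrues
        if k < n_bands and acc >= k * target:
            edges.append(f)
            k += 1
    edges.append(freqs[-1])
    return edges
```

    For a flat spectrum sampled every 100 Hz between 100 Hz and 8000 Hz, four bands come out evenly spaced; for real program material, the edges shift so that no band is dominated by a localized energy peak, avoiding the sudden directional changes noted above.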

    [0120] I. Overview

    [0121] Referring now to FIGS. 1 and 2, an audio device, or speaker, 10 can be positioned in a room 20. A three-dimensional sound sphere 30 is rendered by the audio device 10, where the listener's optimal listening area coincides with the sphere 30.

    [0122] FIGS. 3 and 4 show other exemplary representations of device 10 positioning. The position of the audio device 10 can correspond to the position of one or more reflective boundaries, e.g., a wall 22a, 22b, relative to the device 10, as well as to a listener's likely position 26a, 26b, coinciding with the sound sphere 30a, 30b. The rendered three-dimensional sound sphere 30a, 30b is reinforced as the waveform folds back from the walls.

    [0123] As will be explained more fully below, a three-dimensional sound sphere can be constructed by a combination of sphere components. A three-dimensional sound sphere is dependent on change of amplitude, phase and time along different audio frequencies, or frequency bands. A methodology can be devised to manage such dependencies, and disclosed audio devices can apply these methods to an acoustic signal, or a digital signal, containing an audio content to render as a three-dimensional sound sphere.

    [0124] Section II describes principles related to such an audio device by way of reference to the device depicted in FIG. 1. Section III describes principles pertaining to desired three-dimensional sound spheres, and Section IV describes principles related to decomposing an audio content into a combination of sphere components, both virtual and real, and reassembling them in acoustic space. Section V discloses principles in directivity relating to the three-dimensionality of an audio device and variations thereof with frequency. Section VI describes principles related to audio processors suitable to render an approximation of a desired three-dimensional sound sphere from an incoming audio signal, on input 51, containing an audio content. Section VII describes principles related to computing environments suitable for implementing disclosed processing methods. This will include examples of machine-readable media containing instructions that, when executed, cause a processor 50 of, e.g., a computing environment, to perform one or more disclosed methods. Such instructions can be embedded in software, firmware, or hardware. In addition, disclosed methods and techniques can be carried out in a variety of forms of signal processor, again, in software, firmware, or hardware.

    [0125] II. Audio Devices

    [0126] FIG. 1 shows an audio device 10 that includes a loudspeaker cabinet 12 having integrated therein a loudspeaker array including a plurality of individual loudspeaker transducers S1, S2, . . . , S6.

    [0127] In general, a loudspeaker array can have any number of individual loudspeaker transducers, although the illustrated array has six loudspeaker transducers. The number of loudspeaker transducers depicted in FIG. 1 is selected for convenience of illustration. Other arrays have more or fewer than six transducers, may have more or fewer than three axes of transducer pairs, and an axis can have only one transducer. For example, an embodiment of an array for the audio device can have 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, or more, loudspeaker transducers.

    [0128] In FIG. 1, the cabinet 12 has a generally cubic shape defining a central axis z extending between the opposed corners 16 of the cubic cabinet.

    [0129] Each of the loudspeaker transducers S1, S2, . . . , S6 in the illustrated loudspeaker array is distributed evenly on the cube's planes at a constant, or a substantially constant, position relative to, and at a uniform radial distance, polar angle, and azimuth angle from, the axis center. In FIG. 1, the loudspeaker transducers are spherically spaced from each other by about 90 degrees.

    [0130] Other arrangements for the loudspeaker transducers are possible. For instance, the loudspeaker transducers in the array may be distributed evenly within the loudspeaker cabinet 12, or unevenly. As well, the loudspeaker transducers S1, S2, . . . , S6 can be positioned at various selected spherical positions measured from the axis center, rather than at a constant-distance position as shown in FIG. 1. For example, the loudspeaker transducers can be distributed from two or more axis points.

    [0131] Each transducer S1, S2, . . . , S6 may be an electrodynamic or other type of loudspeaker transducer that may be specially designed for sound output at particular frequency bands, such as a woofer, tweeter, midrange, or full-range transducer, for example. The audio device 10 can be combined with a seventh loudspeaker transducer SO to supplement output from the array. For example, the supplemental loudspeaker transducer SO can be configured to radiate selected frequencies, e.g., low-end frequencies as a subwoofer. The supplemental loudspeaker transducer SO can be built into the audio device 10, or it can be housed in a separate cabinet. In addition or alternatively, the SO loudspeaker transducer may be used for high frequency output.

    [0132] Although the loudspeaker cabinet 12 is shown as being cubic, other embodiments of the loudspeaker cabinet 12 have another shape. For example, some loudspeaker cabinets can be arranged as, e.g., a general prismatic structure, a tetrahedral structure, a spherical structure, an ellipsoidal structure, a toroidal structure, or as any other desired three-dimensional shape.

    [0133] III. The Three-Dimensional Sound Sphere

    [0134] Referring again to FIG. 2, the audio device 10 can be positioned in the middle of a room. In such a situation, as noted above, the three-dimensional sound sphere is distributed evenly around the audio device 10.

    [0135] By projecting acoustic energy in a three-dimensional sphere, a user's listening experience can be enhanced in comparison to a two-dimensional audio system since, in contrast to prior art one- and two-dimensional sound fields, the three-dimensional listening cues provided by the disclosure are spatial and hence immersive, similarly to sound cues in the physical world.

    [0136] Furthermore, the disclosure's listening space provides infinite listening positions around the device 10, as the added spatial audio cues do not operate on the basis of an ideal listening position, as long as the entire listening field, or sphere, contains an even balance, or an almost even balance, of the salient features of the original sound input.

    [0137] FIG. 3 depicts the audio device 10 in a different position than is shown in FIG. 2. In FIG. 2, the sound field 30 has a circular shape, directing little or no acoustic energy toward the walls 22. Although the three-dimensional sound sphere shown in FIG. 3 is different from that shown in FIG. 2, the sound sphere shown in FIG. 3 can be well suited to the loudspeaker's illustrated position relative to the wall 22 and to the possible listening positions coinciding with the, now partially folded, sound sphere 30 shown in FIG. 3. The wall 22 reflections are not incompatible with the sound sphere 30, since the sphere components are shifted constantly along the loudspeaker transducers, thereby avoiding any constant reinforcement of a specific frequency, or frequency band. Similarly, FIG. 4 shows the audio device 10 in yet another position in the room, and a three-dimensional sound sphere 30 that coincides with the listening positions, again correspondingly folded by the wall positions 22 and the room arrangement, compared to the position of the audio device 10 shown in FIG. 2. In this particular arrangement, the sound sphere 30 is likewise projected by means of shifting sphere components, as in FIG. 3, resulting in no constant reinforcement of any specific frequency, or frequency band.

    [0138] In some embodiments of audio devices, a three-dimensional sound field can be modified when the audio device's 10 proximity to a wall 22 is extreme, or very pronounced. For example, by representing the three-dimensional sound sphere 30 using polar coordinates with the z-axis of the audio device 10 positioned at the origin, a user can modify the sound sphere 30 from a sphere to an asymmetrical tri-axial ellipsoidal shape by “drawing”, as on a touch screen, a directional scaling of the loudspeaker transducers' amplitude relative to the z-axis of the audio device 10.

    [0139] In still other embodiments, a user can select from a plurality of three-dimensional asymmetrical tri-axial ellipsoids stored by the audio device 10 or remotely. If stored remotely, the audio device 10 can load the selected tri-axial asymmetrical ellipsoid over a communication connection. And in still further embodiments, a user can “draw” a desired tri-axial asymmetrical ellipsoid contour or existing room boundary, as above, on a smartphone or a tablet, and the audio device 10 can receive a representation of the desired asymmetrical tri-axial ellipsoid, or room boundary, directly or indirectly from the user's device over a communication connection. Other forms of user input besides touch screens can be used, as described more fully below in connection with computing environments.

    [0140] IV. Modal Decomposition and Reassembly of a Three-Dimensional Sound Sphere

    [0141] FIG. 5 shows the frequency range between 40 (positioned at 100 Hz) and 45 (positioned at 3 kHz), as a subset of the total frequency range of the listener's hearing, that a listener uses for spatial sound source localization in three-dimensional hearing. The cues for sound source localization include time and level differences between the two ears, spectral information, timing analysis, correlation analysis, and pattern matching. The disclosure uses this knowledge of the auditory system to add, or procure, spatial information to an input sound by splitting the frequency range between 40 and 45 into a number of bands (arrows) and processing these bands. The number of bands can be half the number of loudspeaker transducers, and it can be more or less than the number of transducers.

    [0142] By way of but one example, and not of all possible embodiments, in FIG. 6, the high-pass filter 50, the band-pass filters 51, 52, and 53, and the low-pass filter 54 separate the audio stream into five sub-streams or audio sub signals. The high-pass filter passes signal components above 4 kHz and the low-pass filter passes components below 100 Hz. The audio streams from the filters 50 and 54 lie outside of the three-dimensional hearing range and are sent equally to all loudspeaker transducers S1, S2, . . . , S6 according to different methods, or to the loudspeaker transducer SO. A copy of the signal from each frequency band from the filters 51, 52, and 53 can be modified by applying a degree of phase shift, or by polarity inversion, before sending the modified signal to different points of the audio device 10, such as the opposite point at 180 degrees in relation to the original signal, the individual signals being summed to arrive at the signals for the loudspeaker transducers S1-S6. The resulting audio output is a monophonic sound with the addition of independent spatial cues in three pairs of connected sphere components, for a monophonic, three-dimensional sound sphere. In a variant of this example, the audio streams from the filters 51, 52, and 53 are sent separately to the loudspeaker transducers S1, S2, . . . , S6 and moved in a random, or semi-random, but coordinated fashion. This will likewise provide the spatial cues for a monophonic, three-dimensional sound sphere, but of a significantly different nature to the previous example.
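
    The opposite-point, polarity-inverted routing in this example can be sketched as a small gain matrix. The particular pairing below (band k to transducer k, and its inverted copy to the diametrically opposite transducer) is one plausible arrangement among those the description allows:

```python
def routing(n_bands=3, n_transducers=6):
    """Build a bands x transducers gain matrix for the polarity-inverted
    pairing: band k drives transducer k directly and the diametrically
    opposite transducer (k + n_transducers // 2) with inverted polarity."""
    m = [[0.0] * n_transducers for _ in range(n_bands)]
    for k in range(n_bands):
        m[k][k] = 1.0                        # original band signal
        m[k][k + n_transducers // 2] = -1.0  # inverted copy, 180 degrees opposite
    return m
```

    Each row sums to zero, reflecting that every band is emitted as a polar inverted pair; summing the three rows column-wise yields the per-transducer mix described in the text.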

    [0143] FIG. 7a represents the same scenario, but with a stereo signal input. By way of but one example, and not of all possible embodiments, in FIG. 7a, the high-pass filter 60, the band-pass filters 61, 62, and 63, and the low-pass filter 64 separate the audio into five audio streams. The audio streams from filters 60 and 64 lie outside of the three-dimensional hearing range and are sent equally to all loudspeaker transducers S1, S2, . . . , S6, either as summed mono signals for the low-passed audio and for the high-passed audio prior to emission, as they provide little or no spatial information, or as two separate audio streams for the left and/or for the right channel of the low-passed audio and of the high-passed audio. The audio streams from filters 61, 62, and 63 that lie inside the three-dimensional hearing range are sent separately, but now pair-wise, to the loudspeaker transducers [S1, S2], [S3, S4], [S5, S6], or to any axis points between the transducers. The resulting audio output is a stereophonic sound with the addition, or procurement, of spatial cues to provide a point-source, stereophonic, three-dimensional sound field.

    [0144] FIG. 7b represents a scenario where a stereo signal input is treated as separate mono channels. By way of but one example, and not of all possible embodiments, in FIG. 7b, the high-pass filter 70, the band-pass filters 71A, 71B, 72A, 72B, 73A, and 73B, and the low-pass filter 74 separate the audio into eight audio streams. The audio streams from filters 70 and 74 lie outside of the three-dimensional hearing range and are sent equally to all loudspeaker transducers S1, S2, . . . , S6, either as summed mono signals for the low-passed audio and for the high-passed audio prior to emission, as they provide little or no spatial information, or as two separate audio streams for the left and/or for the right channel of the low-passed audio and of the high-passed audio. The audio streams from filters 71A, 71B, 72A, 72B, 73A, and 73B that lie inside the three-dimensional hearing range are sent separately to the loudspeaker transducers [S1, S2, S3, S4, S5, S6], or to any axis points between the transducers. The resulting audio output is a multiple single-directional sound with the addition, or procurement, of spatial cues to provide a point-source, multiple single-directional, three-dimensional sound field. Thus, compared to FIG. 7a, there need be no correlation between the angles between the directions in which the corresponding audio sub signals (relating to the same sub bands) are output.

    [0145] V. Directivity Considerations

    [0146] FIG. 8 represents aspects of the sound device's 10 Directivity Factor. The Directivity Factor, with a range of 1 to ∞, indicates the ability of a loudspeaker transducer (or any other sound emitter) to confine the applied energy into a spherical section. Audio devices exhibit differing degrees of directivity throughout the audible frequency range (e.g., about 20 Hz to about 20 kHz), generally exhibiting a lower Directivity Factor as the frequency approaches 20 Hz, and an increasing Directivity Factor with increasing frequency. The disclosed audio device's 10 Directivity Factor is 1, or close to 1, along the entire frequency range, given the loudspeaker transducers distributed evenly, or nearly evenly, on an even-sided geometric solid. The disclosed audio device's 10 individual loudspeaker transducer Directivity Factor will be 2, or close to 2, at low frequencies and will vary across the frequency range, but it will trend towards higher values with higher frequency. With a Directivity Factor of 8, each transducer will cover a spherical part that, in combination across the six transducers on the cubic cabinet described above, combines into a full sphere for the audio device 10. Since the directed energy for a single loudspeaker transducer determines a defined listening window, as a selected range of angular positions at a constant radius with the loudspeaker positioned at the origin, a user's listening experience is diminished if the user's position relative to the loudspeaker varies. The disclosure, having a much lower Directivity Factor, has an infinite, or much larger, number of desired listening positions than previous art in two-dimensional sound fields.
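
    For reference, the discussion above is consistent with the conventional acoustics definition of the Directivity Factor Q, which is not recited in the claims but can be stated as:

```latex
Q(f) \;=\; \frac{4\pi\,\lvert p_{\max}(f)\rvert^{2}}
               {\displaystyle\int_{4\pi} \lvert p(\theta,\phi,f)\rvert^{2}\,d\Omega},
\qquad
\mathrm{DI}(f) \;=\; 10\log_{10} Q(f)
```

    where p is the far-field sound pressure over direction (θ, φ). Under this definition, Q = 1 for an omnidirectional (full-sphere) source, Q = 2 for radiation confined to a half-space, and Q = 8 for energy confined to one eighth of a sphere; for ideal confinement into a solid angle Ω, Q = 4π/Ω.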

    [0147] To achieve a desired sound sphere or smoothly varying sphere components (or pattern) over all frequencies, the sphere components described above can undergo equalization so each sphere component provides a corresponding sound field with a desired frequency response throughout. Stated differently, a filter can be designed to provide the desired frequency response throughout the sphere component. And, the equalized sphere components can then be combined to render a sound sphere having a smooth transition of sphere components across the range of audible frequencies and/or selected frequency bands, within the range of audible frequencies.

    [0148] VI. Audio Processors

    [0149] FIG. 9 shows a block diagram of an audio rendering processor for an audio device 10 to playback an audio content (e.g., a musical work, a movie sound track).

    [0150] The audio rendering processor 50 may be a special purpose processor such as an application specific integrated circuit (ASIC), a general purpose microprocessor, a field programmable gate array (FPGA), a digital signal controller, or a set of hardware logic structures (e.g., filters, arithmetic logic units, and dedicated state machines). In some instances, the audio rendering processor can be implemented using a combination of machine-executable instructions that, when executed by a processor, cause the audio device to process one or more input channels as described. The rendering processor 50 receives the input channel of a piece of sound program content from an input audio source 51.

    [0151] The input audio source 51 may provide a digital input or an analog input. The input audio source or input 51 may include a programmed processor that is running a media player application program and may include a decoder that produces the digital audio input to the rendering processor. To do so, the decoder may be capable of decoding an encoded audio signal, which has been encoded using any suitable audio codec, e.g., Advanced Audio Codec (AAC), MPEG Audio Layer II, MPEG Audio Layer III, and Free Lossless Audio Codec (FLAC). Alternatively, the input audio source may include a codec that is converting an analog or optical audio signal, from a line input, for example, into digital form for the audio rendering processor 50. Further, there may be more than one input audio channel, such as a two-channel input, namely left and right channels of a stereophonic recording of a musical work, or there may be more than two input audio channels, such as for example the entire audio soundtrack in 5.1-surround format of a motion picture film or movie. Other audio format examples are 7.1 and 9.1-surround formats.

    [0152] The array of loudspeaker transducers 58 can render a desired sound sphere (or an approximation thereof) based on a combination of sphere component segmentations 52a . . . 52N applied to the audio content by the audio rendering processor 50. Rendering processors 50 according to FIG. 9 can conceptually be divided between a sphere component domain and a loudspeaker transducer domain. In the component domain, the segment processing 53a . . . 53N for each constituent sphere component 52a . . . 52N can be applied to the audio content in correspondence with a desired sphere component in a manner described above. An equalizer 54a . . . 54N can provide equalization to each respective sphere component 52a . . . 52N to adjust for variation in Directivity Factor arising from the particular audio device 10, and from any sphere adjustment towards a desired asymmetrical ellipsoid sphere contour, mentioned above.
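The per-component signal path just described (segment processing followed by per-component equalization) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the scalar component weights, and the single-gain stand-in for the equalizers 54a . . . 54N are all assumptions made for clarity.

```python
# Hypothetical sketch of the per-component path of FIG. 9: each sphere
# component applies its own segment processing and equalization to the
# shared audio content. All names and parameters are illustrative.

def segment_process(samples, weight):
    """Weight the shared audio content for one sphere component."""
    return [weight * s for s in samples]

def equalize(samples, gain):
    """Single broadband gain as a stand-in for a per-component equalizer."""
    return [gain * s for s in samples]

def render_components(audio, weights, eq_gains):
    """Produce one sphere-domain signal per sphere component."""
    return [
        equalize(segment_process(audio, w), g)
        for w, g in zip(weights, eq_gains)
    ]

# Two components derived from the same three-sample audio content.
components = render_components([1.0, -0.5, 0.25], [0.5, 0.25], [1.0, 0.5])
```

In a real equalizer each component would receive a frequency-dependent filter rather than a broadband gain; the scalar keeps the data flow of the block diagram visible.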

    [0153] In the loudspeaker transducer domain, a Sphere Domain Matrix can be applied to the various sphere domain signals to provide a signal to be reproduced by each respective loudspeaker transducer in the array 58. Generally speaking, the matrix is an M×N sized matrix, where N is the number of loudspeaker transducers and M=(2×N)+(2×O), where O represents the number of virtual sphere components. An equalizer 56a . . . 56N can provide equalization to each respective loudspeaker transducer signal 57a . . . 57N to adjust for variation in Directivity Factor arising from the particular audio device 10, and from any sphere adjustment towards a desired ellipsoid sphere contour, mentioned above.
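The matrix step can be sketched as a plain mixing operation: each transducer signal is a weighted sum of the M sphere-domain signals. This is a sketch under stated assumptions, not the patent's matrix: the coefficient values are invented, and the row-per-transducer layout is just one of several equivalent conventions for an M-input, N-output mix.

```python
# Illustrative Sphere Domain Matrix application: M sphere-domain
# signals are mixed down to N loudspeaker-transducer signals.
# matrix[n][m] weights sphere component m into transducer n.

def apply_sphere_matrix(matrix, sphere_signals):
    """matrix: N rows of M coefficients; sphere_signals: M sample lists.
    Returns one output signal per loudspeaker transducer."""
    num_samples = len(sphere_signals[0])
    outputs = []
    for row in matrix:
        sig = [0.0] * num_samples
        for coeff, component in zip(row, sphere_signals):
            for i, s in enumerate(component):
                sig[i] += coeff * s
        outputs.append(sig)
    return outputs

# N = 2 transducers, O = 0 virtual components, so M = 2*N + 2*O = 4.
sphere = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
mix = [[1.0, 0.0, 0.5, 0.0],
       [0.0, 1.0, 0.0, 2.0]]
transducer_signals = apply_sphere_matrix(mix, sphere)
```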

    [0154] It should be understood the audio rendering processor 50 is capable of performing other signal processing operations in order to render the input audio signal for playback by the transducer array 58 in a desired manner. In another embodiment, in order to determine how to modify the loudspeaker transducer signal, the audio rendering processor may use an adaptive filter process to determine constant, or varying, boundary frequencies. FIG. 10 shows a block diagram of an audio rendering processor for an audio device 10 to render a synthesized sound (e.g., a digital keyboard, a digital audio workstation (DAW)), or an electric and/or acoustic musical instrument.
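As a rough illustration of splitting a signal at a boundary frequency, the sketch below uses a complementary first-order split. The patent does not specify the filter topology or the adaptive process that selects the boundary, so the one-pole design, the fixed boundary value, and all names here are assumptions.

```python
import math

# Hedged sketch: split a signal at a boundary frequency into a low
# band and a complementary high band. A varying boundary could be
# realized by recomputing the coefficient per block of samples.

def one_pole_split(samples, boundary_hz, sample_rate):
    """Return (low_band, high_band); the two bands sum back to the input."""
    a = math.exp(-2.0 * math.pi * boundary_hz / sample_rate)
    low, high = [], []
    state = 0.0
    for s in samples:
        state = (1.0 - a) * s + a * state  # one-pole low-pass
        low.append(state)
        high.append(s - state)             # complement: low + high == input
    return low, high

low, high = one_pole_split([0.0, 1.0, 0.5, -0.25], 1000.0, 48000.0)
```

Because the high band is formed by subtraction, the split is perfectly reconstructing regardless of the boundary chosen, which is convenient when the boundary frequency changes over time.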

    [0155] VII. Computing Environments

    [0156] FIG. 10 illustrates a generalized example of a suitable computing environment 100, which may comprise the operation of the controller 50, in which the described methods, embodiments, techniques, and technologies relating, for example, to procedurally generating a sound sphere may be implemented. The computing environment 100 is not intended to suggest any limitation as to scope of use or functionality of the technologies disclosed herein, as each technology may be implemented in diverse general-purpose or special-purpose computing environments. For example, each disclosed technology may be implemented with other computer system configurations, including wearable and handheld devices, mobile-communications devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, embedded platforms, network computers, mini-computers, mainframe computers, smartphones, tablet computers, data centers, and the like. Each disclosed technology may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications connection or network, or that are incorporated into digital or analog musical instruments. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

    [0157] The computing environment 100 includes at least one central processing unit 110 and memory 120. In FIG. 10, this most basic configuration 130 is included within a dashed line. The central processing unit 110 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power and as such, multiple processors can run simultaneously. The memory 120 may be volatile memory (e.g., register, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. The memory 120 stores software 180a that can, for example, implement one or more of the innovative technologies described herein, when executed by a processor.

    [0158] A computing environment may have additional features. For example, the computing environment 100 includes storage 140, one or more input devices 150, one or more output devices 160, and one or more communication connections 170. An interconnection mechanism (not shown) such as a bus, a controller, or a network, interconnects the components of the computing environment 100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 100, and coordinates activities of the components of the computing environment 100.

    [0159] The storage 140 may be removable or non-removable, and can include selected forms of machine-readable media, including magnetic disks, magnetic tapes or cassettes, non-volatile solid-state memory, CD-ROMs, CD-RWs, DVDs, optical data storage devices, and carrier waves, or any other machine-readable medium which can be used to store information and which can be accessed within the computing environment 100. The storage 140 stores instructions for the software 180b, which can implement technologies described herein.

    [0160] The storage 140 can also be distributed over a network so that software instructions are stored and executed in a distributed fashion. In other embodiments, some of these operations might be performed by specific hardware components that contain hardwired logic. Those operations might alternatively be performed by any combination of programmed data processing components and fixed hardwired circuit components.

    [0161] The input device(s) 150 may be a touch input device, such as a keyboard, keypad, mouse, pen, touchscreen, touch pad, or trackball, a voice input device, a scanning device, or another device, that provides input to the computing environment 100. For audio, the input device(s) 150 may include a microphone or other transducer (e.g., a sound card or similar device that accepts audio input in analog or digital form), or a computer-readable media reader that provides audio samples to the computing environment 100.

    [0162] The output device(s) 160 may be a display, printer, speaker transducer, DVD writer, or another device that provides output from the computing environment 100.

    [0163] The communication connection(s) 170 enable communication over communication medium (e.g., a connecting network) to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed graphics information, processed signal information (including processed audio signals), or other data in a modulated signal.

    [0164] Thus, disclosed computing environments are suitable for performing disclosed orientation estimation and audio rendering processes as disclosed herein.

    [0165] Machine-readable media are any available media that can be accessed within a computing environment 100. By way of example, and not limitation, with the computing environment 100, machine-readable media include memory 120, storage 140, communication media (not shown), and combinations of any of the above. Tangible machine-readable (or computer-readable) media exclude transitory signals.

    [0166] As explained above, some disclosed principles can be embodied in a tangible, non-transitory machine-readable medium (such as a micro-electronic memory) having stored thereon instructions, which program one or more data processing components (generically referred to here as a “processor”) to perform the digital signal processing operations described above including estimating, adapting, computing, calculating, measuring, adjusting (by the audio processor 50), sensing, filtering, addition, subtraction, inversion, comparisons, and decision-making. In other embodiments, some of these operations (of a machine process) might be performed by specific electronic hardware components that contain hardwired logic (e.g., dedicated digital filter blocks). Those operations might alternatively be performed by any combination of programmed data processing components and fixed hardwired circuit components.

    [0167] The audio device 10 can include a loudspeaker cabinet 12 configured to produce sound. The audio device 10 can also include a processor, and a non-transitory machine readable medium (memory) in which instructions are stored which, when executed by the processor, automatically perform the three-dimensional sphere construct processes, and supporting processes, as described herein.

    [0168] The examples described above generally concern apparatus, methods, and related systems for rendering audio, and more particularly, to providing desired three-dimensional sphere patterns. Nonetheless, embodiments other than those described above in detail are contemplated based on the principles disclosed herein, together with any attendant changes in configurations of the respective apparatus described herein.

    [0169] Directions and other relative references (e.g., up, down, top, bottom, left, right, rearward, forward, etc.) may be used to facilitate discussion of the drawings and principles herein, but are not intended to be limiting. For example, certain terms may be used such as “up”, “down”, “upper”, “lower”, “horizontal”, “vertical”, “left”, “right”, and the like. Such terms are used, where applicable, to provide some clarity of description when dealing with relative relationships, particularly with respect to the illustrated embodiments. Such terms are not, however, intended to imply absolute relationships, positions, and/or orientations. For example, with respect to an object, an “upper” surface can become a “lower” surface simply by turning the object over. Nevertheless, it is still the same surface and the object remains the same. As used herein, “and/or” means “and” or “or”, as well as “and” and “or”. Moreover, all patent and non-patent literature cited herein is hereby incorporated by reference in its entirety for all purposes.

    [0170] The principles described above in connection with any particular example can be combined with the principles described in connection with another example described herein. Accordingly, this detailed description shall not be construed in a limiting sense, and following a review of this disclosure, those of ordinary skill in the art will appreciate the wide variety of signal processing and audio rendering techniques that can be devised using the various concepts described herein.

    [0171] Moreover, those of ordinary skill in the art will appreciate that the exemplary embodiments disclosed herein can be adapted to various configurations and/or uses without departing from the disclosed principles. Applying the principles disclosed herein, it is possible to provide a wide variety of systems adapted to providing a desired three-dimensional spherical sound field. For example, modules identified as constituting a portion of a given computational engine in the above description or in the drawings can be partitioned differently than described herein, distributed among one or more modules, or omitted altogether. As well, such modules can be implemented as a portion of a different computational engine without departing from some disclosed principles.

    [0172] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed innovations. Various modifications to those embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of this disclosure. Thus, the claimed inventions are not intended to be limited to the embodiments shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular, such as by use of the article “a” or “an”, is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. All structural and functional equivalents to the features and method acts of the various embodiments described throughout the disclosure that are known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the features described and claimed herein. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim recitation is to be construed as a means-plus-function recitation unless the recitation is expressly recited using the phrase “means for” or “step for”.

    [0173] Thus, in view of the many possible embodiments to which the disclosed principles can be applied, we reserve the right to claim any and all combinations of features and technologies described herein as understood by a person of ordinary skill in the art, including, for example, all that comes within the scope of the technology.