ACTION SOUND CAPTURE USING SUBSURFACE MICROPHONES
20180139535 · 2018-05-17
Assignee
- Dolby Laboratories Licensing Corporation (San Francisco, CA)
- Dolby International AB (Amsterdam Zuidoost, NL)
Inventors
- Giulio Cengarle (Barcelona, ES)
- Antonio Mateos Sole (Barcelona, ES)
- Natanael David Olaiz (Barcelona, ES)
- Kenneth Robert Honold (Hackettstown, NJ, US)
CPC classification
- H04R2201/405 (ELECTRICITY)
- H04R5/027 (ELECTRICITY)
International classification
Abstract
Methods and systems for generating an audio mix indicative of action sound captured at an event on a surface (e.g., a sporting event on a field) using a microphone array, where the array includes subsurface microphones (e.g., a large number of sub-surface microphones) positioned under the surface, and optionally also other microphones. In typical embodiments, at least one point of interest (PI) on the surface is selected in an automated manner, PI data indicative of a currently selected PI on the surface is generated (e.g., a sequence of PIs on the surface is selected, the PI data is indicative of the sequence of PIs, and a most recently selected PI in the sequence is the currently selected PI), and the audio mix is generated in response to the PI data. Aspects include methods performed by any embodiment of the system, and a system or device configured (e.g., programmed) to perform any embodiment of the method.
Claims
1. A method for generating a mix indicative of action sound captured at an event on a surface, including steps of: (a) capturing the action sound using a microphone array, said array including N subsurface microphones positioned under the surface, wherein the subsurface microphones are positioned in a triangular tiling pattern under the surface such that each set of three adjacent subsurface microphones forms the vertices of an equilateral triangle; (b) in an automated manner, selecting at least one point of interest, PI, on the surface, operating a graphic user interface to display a representation of the surface and a PI representation superimposed on the representation of the surface, controlling the PI representation's position relative to the representation of the surface to determine a current PI representation position, wherein the current PI representation position corresponds to and determines the currently selected PI, and generating PI data indicative of a currently selected PI on the surface; and (c) in response to the PI data, generating an audio mix from outputs of the microphones including at least one of the subsurface microphones, such that the audio mix is indicative of action sound emitted at the currently selected PI on the surface.
2. The method of claim 1, wherein N is a number of microphone outputs that is too large for said outputs to be manually mixed during the event by mixing personnel of ordinary skill using conventional practice.
3. The method of claim 1, wherein N≥15.
4. The method of claim 1, wherein the microphone array also includes microphones which are not subsurface microphones.
5. The method of claim 1, wherein the event is a sporting event, and the surface is a field.
6. The method of claim 1, also including a step of generating an audio program including audio content indicative of the audio mix.
7. The method of claim 1, wherein step (c) includes steps of: performing signal processing on microphone output signals from microphones of the microphone array, including at least one of the subsurface microphones, to generate processed microphone signals; and generating the audio mix from the processed microphone signals in response to the PI data, wherein the signal processing includes at least one of noise reduction, or equalization, or dynamic range control, or limiting, or delay alignment, or scrambling of detected voice content.
8. A system for generating a mix indicative of action sound captured at an event on a surface, said system including: a microphone array, including N subsurface microphones positioned under the surface, wherein the subsurface microphones are positioned in a triangular tiling pattern under the surface such that each set of three adjacent subsurface microphones forms the vertices of an equilateral triangle; and a mixing system, including a mixing subsystem coupled to the microphone array, and a point of interest, PI, selection subsystem coupled to the mixing subsystem, wherein the PI selection subsystem is configured to generate PI data in an automated manner, the PI data is indicative of a currently selected PI on the surface, and the mixing subsystem is configured to generate, in response to the PI data, an audio mix from outputs of microphones of the array including at least one of the subsurface microphones, such that the audio mix is indicative of action sound emitted at the currently selected PI on the surface, wherein the PI selection subsystem implements a graphic user interface, the graphic user interface is configured to display a representation of the surface and a PI representation superimposed on the representation of the surface, and to respond to control by a user of the PI representation's position relative to the representation of the surface to determine a current PI representation position, and to determine the currently selected PI to correspond to the current PI representation position.
9. The system of claim 8, wherein N is a number of microphone outputs that is too large for said outputs to be manually mixed during the event by mixing personnel of ordinary skill using conventional practice.
10. The system of claim 8, wherein N≥15.
11. The system of claim 8, wherein the microphone array also includes microphones which are not subsurface microphones.
12. The system of claim 8, wherein the event is a sporting event, and the surface is a field.
13. The system of claim 8, wherein the system is configured to generate an audio program including audio content indicative of the audio mix.
14. The system of claim 8, wherein the mixing subsystem is configured: to perform signal processing on microphone output signals from microphones of the microphone array, including at least one of the subsurface microphones, to generate processed microphone signals, and to generate the audio mix from the processed microphone signals in response to the PI data, wherein the signal processing includes at least one of noise reduction, or equalization, or dynamic range control, or limiting, or delay alignment, or scrambling of detected voice content.
15. A system for generating a mix of action sound which has been emitted during an event on a surface, where the action sound was captured using a microphone array including N subsurface microphones positioned under the surface, wherein the subsurface microphones are positioned during the event in a triangular tiling pattern under the surface such that each set of three adjacent subsurface microphones forms the vertices of an equilateral triangle, said system including: a memory; and a mixing subsystem coupled to the memory and configured to generate an audio mix in response to point of interest, PI, data indicative of a currently selected PI on the surface and in response to outputs of microphones of the array including at least one of the subsurface microphones, such that the audio mix is indicative of action sound emitted during the event at the currently selected PI on the surface, wherein the memory stores, in a non-transitory manner, data indicative of at least a segment of each of the outputs of microphones of the array including said at least one of the subsurface microphones, or data indicative of at least a segment of a processed version of each of said outputs of microphones of the array including said at least one of the subsurface microphones, wherein the system further comprises a PI selection subsystem coupled to the mixing subsystem, and the PI selection subsystem implements a graphic user interface, the graphic user interface is configured to display a representation of the surface and a PI representation superimposed on the representation of the surface, and to respond to control by a user of the PI representation's position relative to the representation of the surface to determine a current PI representation position, and to determine the currently selected PI to correspond to the current PI representation position.
16. The system of claim 15, wherein the mixing subsystem includes: a signal processing subsystem coupled and configured to perform signal processing on the outputs of microphones of the array including said at least one of the subsurface microphones to generate processed microphone signals, wherein the mixing subsystem is coupled and configured to generate the audio mix in response to the PI data and at least some of the processed microphone signals, and wherein the signal processing includes at least one of noise reduction, or equalization, or dynamic range control, or limiting, or delay alignment, or scrambling of detected voice content.
17. The system of claim 15, wherein N is a number of microphone outputs that is too large for said outputs to be manually mixed during the event by mixing personnel of ordinary skill using conventional practice.
18. The system of claim 15, wherein N≥15.
19. The system of claim 15, wherein the microphone array also includes microphones which are not subsurface microphones.
20. The system of claim 15, wherein the event is a sporting event, and the surface is a field.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
NOTATION AND NOMENCLATURE
[0040] Throughout this disclosure, including in the claims, the expression performing an operation on a signal or data (e.g., filtering, scaling, transforming, or applying gain to, the signal or data) is used in a broad sense to denote performing the operation directly on the signal or data, or on a processed version of the signal or data (e.g., on a version of the signal that has undergone preliminary filtering or pre-processing prior to performance of the operation thereon).
[0041] Throughout this disclosure including in the claims, the expression system is used in a broad sense to denote a device, system, or subsystem. For example, a subsystem that implements processing may be referred to as a processing system, and a system including such a subsystem (e.g., a system that generates multiple output signals in response to X inputs, in which the subsystem generates M of the inputs and the other X-M inputs are received from an external source) may also be referred to as a processing system.
[0042] Throughout this disclosure including in the claims, the term processor is used in a broad sense to denote a system or device programmable or otherwise configurable (e.g., with software or firmware) to perform operations on data (e.g., audio, or video or other image data). Examples of processors include a field-programmable gate array (or other configurable integrated circuit or chip set), a digital signal processor programmed and/or otherwise configured to perform pipelined processing on audio or other sound data, a programmable general purpose processor or computer, and a programmable microprocessor chip or chip set.
[0043] Throughout this disclosure including in the claims, the expression metadata refers to separate and different data from corresponding audio data (audio content of a bitstream which also includes metadata). Metadata is associated with audio data, and indicates at least one feature or characteristic of the audio data (e.g., what type(s) of processing have already been performed, or should be performed, on the audio data, or the trajectory of an object indicated by the audio data). The association of the metadata with the audio data is time-synchronous. Thus, present (most recently received or updated) metadata may indicate that the corresponding audio data contemporaneously has an indicated feature and/or comprises the results of an indicated type of audio data processing.
[0044] Throughout this disclosure including in the claims, the term couples or coupled is used to mean either a direct or indirect connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections.
[0045] Throughout this disclosure including in the claims, the following expressions have the following definitions:
[0046] speaker and loudspeaker are used synonymously to denote any sound-emitting transducer. This definition includes loudspeakers implemented as multiple transducers (e.g., woofer and tweeter);
[0047] speaker feed: an audio signal to be applied directly to a loudspeaker, or an audio signal that is to be applied to an amplifier and loudspeaker in series;
[0048] channel (or audio channel): a monophonic audio signal. Such a signal can typically be rendered in such a way as to be equivalent to application of the signal directly to a loudspeaker at a desired or nominal position. The desired position can be static, as is typically the case with physical loudspeakers, or dynamic;
[0049] audio program: a set of one or more audio channels (at least one speaker channel and/or at least one object channel) and optionally also associated metadata (e.g., metadata that describes a desired spatial audio presentation);
[0050] speaker channel (or speaker-feed channel): an audio channel that is associated with a named loudspeaker (at a desired or nominal position), or with a named speaker zone within a defined speaker configuration. A speaker channel is rendered in such a way as to be equivalent to application of the audio signal directly to the named loudspeaker (at the desired or nominal position) or to a speaker in the named speaker zone;
[0051] object channel: an audio channel indicative of sound emitted by an audio source (sometimes referred to as an audio object). Typically, an object channel determines a parametric audio source description (e.g., metadata indicative of the parametric audio source description is included in or provided with the object channel). The source description may determine sound emitted by the source (as a function of time), the apparent position (e.g., 3D spatial coordinates) of the source as a function of time, and optionally at least one additional parameter (e.g., apparent source size or width) characterizing the source;
[0052] object based audio program: an audio program comprising a set of one or more object channels (and optionally also comprising at least one speaker channel) and optionally also associated metadata (e.g., metadata indicative of a trajectory of an audio object which emits sound indicated by an object channel, or metadata otherwise indicative of a desired spatial audio presentation of sound indicated by an object channel, or metadata indicative of an identification of at least one audio object which is a source of sound indicated by an object channel); and
[0053] render: the process of converting an audio program into one or more speaker feeds, or the process of converting an audio program into one or more speaker feeds and converting the speaker feed(s) to sound using one or more loudspeakers (in the latter case, the rendering is sometimes referred to herein as rendering by the loudspeaker(s)). An audio channel can be trivially rendered (at a desired position) by applying the signal directly to a physical loudspeaker at the desired position, or one or more audio channels can be rendered using one of a variety of virtualization techniques designed to be substantially equivalent (for the listener) to such trivial rendering. In this latter case, each audio channel may be converted to one or more speaker feeds to be applied to loudspeaker(s) in known locations, which are in general different from the desired position, such that sound emitted by the loudspeaker(s) in response to the feed(s) will be perceived as emitting from the desired position. Examples of such virtualization techniques include binaural rendering via headphones (e.g., using Dolby Headphone processing which simulates up to 7.1 channels of surround sound for the headphone wearer) and wave field synthesis.
DETAILED DESCRIPTION
[0054]
[0055] The
[0056] The
[0057] In some implementations, the outputs of the microphones (S and D) are coupled to a network (either wired or wireless) configured to provide robust, redundant transmission of the audio content to the mixing system, and optionally also to provide command and control of the individual microphones and any associated equipment from a centralized remote location. The microphone output signals could be transmitted over such a network using Audio over IP (AoIP) techniques. In some implementations, the microphones (S and D) are linked to the mixing system by a cellular or Wi-Fi network.
[0058] In a typical implementation, subsystem 3 includes memory 9, signal processing subsystem 5 (coupled to memory 9), and mixing subsystem 7 (coupled to processing subsystem 5). Subsystem 5 is configured to perform signal processing (e.g., as described below) on individual microphone output signals (from microphones of the microphone array, including at least one of subsurface microphones S) to generate processed microphone signals. Subsystem 7 is configured to generate an audio mix in response to processed microphone signals output from subsystem 5 and in response to point of interest (PI) data from PI selection subsystem 4, such that the audio mix is indicative of action sound emitted at the currently selected PI on the surface. Alternatively, subsystem 5 is omitted, and subsystem 7 is operable to generate an audio mix in response to microphone output signals from microphones of the array including at least one (and typically, more than one) of the subsurface microphones S (e.g., in response to data indicative of such microphone output signals) and in response to PI data from subsystem 4, such that the audio mix is indicative of action sound emitted at the currently selected PI on the surface.
[0059] Optionally, subsystem 3 also includes signal processing subsystem 5A, which is configured to perform signal processing (e.g., a subset of the processing operations which would be performed by subsystem 5 if subsystem 5A were omitted) on the audio mix which is output from subsystem 7, and the processed audio mix which is output from subsystem 5A (rather than the audio mix which is output from subsystem 7) is asserted to console 6. Subsystem 5A may be included because some of the signal processing (which could alternatively be performed in subsystem 5) is better done on the mixed signal than on the unmixed input signals. One reason is computational cost. The other reason is that nonlinear processes do not commute with mixing (e.g., one may not know if limiting is needed until a mix has been generated from microphone signals).
[0060] Memory 9 (which may be a buffer memory) stores (in a non-transitory manner) data indicative of at least a segment of the output signal of each of the microphones of the array (including the subsurface microphones S). In this context, segment of a signal implies that the signal has a duration and denotes a portion of the signal in a time interval, where the time interval is shorter than the duration. Alternatively, memory 9 stores (in a non-transitory manner) data indicative of at least a segment of each of the processed microphone signals output from subsystem 5 (including a processed version of at least one of subsurface microphones S). In other implementations of subsystem 3, memory 9 is not present.
[0061] PI selection subsystem 4 is configured to generate the point of interest (PI) data in an automated manner. The PI data is indicative of a currently selected point of interest (PI) on the surface (e.g., PI data indicative of a sequence of PIs, where a most recently selected PI in the sequence is the currently selected PI).
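For illustration, the role of the PI data in driving the mix can be sketched as a distance-weighted sum of microphone signals. The inverse-distance gain law, function names, and normalization below are illustrative assumptions only, not the mixing rule specified by this disclosure.

```python
import math

# Hypothetical sketch of PI-driven mixing (as in mixing subsystem 7):
# per-microphone gains fall off with each microphone's distance from the
# currently selected point of interest (PI). The inverse-distance law is
# an assumption for illustration.

def pi_mix_gains(mic_positions, pi, min_dist=1.0):
    """Return normalized per-microphone gains for a PI at (x, y) metres."""
    dists = [max(math.dist(p, pi), min_dist) for p in mic_positions]
    raw = [1.0 / d for d in dists]       # inverse-distance weighting
    total = sum(raw)
    return [g / total for g in raw]      # gains sum to 1 (mono mix)

def mix(mic_signals, gains):
    """Weighted sum of equal-length microphone signals into one mono mix."""
    n = len(mic_signals[0])
    return [sum(g * sig[i] for g, sig in zip(gains, mic_signals))
            for i in range(n)]
```

With microphones at (0, 0) and (10, 0) and the PI at the origin, the first microphone dominates the mix, consistent with the audio mix being indicative of action sound emitted at the currently selected PI.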
[0062] In the
[0063] In some other embodiments, PI selection subsystem 4 is implemented as a processor (e.g., a portable device) including a pointing device (e.g., a mouse) which can be employed by a user to control a displayed PI representation's position relative to a displayed representation of the surface on which the event (whose audio is to be captured) occurs.
[0064] In some other embodiments, PI selection subsystem 4 is replaced by or includes an automated tracking system (e.g., a video camera tracking system) configured to identify and track a PI on the surface and to generate PI data indicative of a currently selected PI. Tracking subsystem 19 of
[0065] In some implementations, processing subsystem 5 is configured to perform signal processing on individual microphone output signals from microphones of the microphone array (including at least one of the subsurface microphones) to generate processed microphone signals. This signal processing can include one or more of: noise reduction; equalization (e.g., to restore high frequency loss due to burying of the subsurface microphones); dynamic range control or limiting (e.g., to avoid unwanted large peaks) and/or other dynamic processing; delay alignment; and/or voice detection and scrambling of any detected voice (e.g., dialog) content.
[0066] Mixing subsystem 7 is configured to output a mix signal (indicative of the audio mix generated by subsystem 7) in a format (analog or digital) that is suitable for assertion to broadcasting console 6. In some embodiments, mixing subsystem 7 also generates (and asserts to console 6) metadata which corresponds to the audio mix and is indicative of the currently selected PI corresponding to each segment of the mix. In some embodiments, console 6 is configured to generate an object based audio program including at least one object channel indicative of an audio object, such that the audio object is indicative of the captured action sound emitted from at least one currently selected PI on the surface. The object channel is determined by (and is itself indicative of) the audio mix and the corresponding metadata output from mixing subsystem 7. Such a program can be rendered (for playback by a speaker array, e.g., a three-dimensional speaker array) to provide a perception of the action sound emitting from the PI location (e.g., time-varying PI location) indicated by the metadata (e.g., so that at any instant, the perceived source location of the rendered sound relative to the speaker array corresponds to the location of a time-invariant PI on the surface, or a location along the trajectory of a time-varying PI on the surface).
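The pairing of the mix signal with time-synchronous PI metadata described above can be sketched as follows. The frame size and field names are hypothetical; the disclosure does not specify a metadata format.

```python
# Hypothetical sketch of pairing each segment of the mono mix with
# time-synchronous metadata naming the currently selected PI, roughly as
# broadcasting console 6 might receive it. FRAME and the dict field
# names are illustrative assumptions.

FRAME = 1024  # samples per metadata update; an assumed value

def mix_with_pi_metadata(mix_samples, pi_per_frame):
    """Split the mix into frames, attaching the PI (x, y) active during
    each frame, so a renderer can place the sound at the PI location."""
    frames = []
    for i, pi in enumerate(pi_per_frame):
        chunk = mix_samples[i * FRAME:(i + 1) * FRAME]
        frames.append({"audio": chunk, "pi_xy": pi})
    return frames
```

A renderer consuming such frames could place each audio chunk at its `pi_xy` coordinates, giving the perception of sound emitting from the (possibly time-varying) PI location.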
[0067] In some example embodiments a system (e.g., an implementation of subsystem 3 of
[0068] In some embodiments of the inventive system, the mixing subsystem includes a signal processing subsystem (e.g., subsystem 5 of mixing system 2) coupled and configured to perform signal processing on the outputs of microphones of the array including said at least one of the subsurface microphones to generate processed microphone signals, and the mixing subsystem is coupled and configured to generate the audio mix in response to the PI data and at least some of the processed microphone signals. In some embodiments, the signal processing includes one or more of: noise reduction; equalization (e.g., to restore high frequency loss due to burying of subsurface microphones); dynamic range control or limiting (e.g., to avoid unwanted large peaks) and/or other dynamic processing; delay alignment; and/or voice detection and scrambling of any detected voice (e.g., dialog) content. Voice scrambling would typically replace captured real vocal utterances (e.g., dialog) with unintelligible words or phrases while maintaining the feeling and emotional content of, and the intention(s) motivating, the captured voice (e.g., to avoid the problem of unwanted dialog being broadcast). In some embodiments, voice scrambling is performed (e.g., by subsystem 5 of
[0069] In some embodiments of the inventive system, the PI data has been generated in response to user manipulation of a touch screen (or other) graphic user interface which displays a representation of the surface, or by a tracking system which implements automatic detection of occurrences during the event (e.g., a ball tracking system slaved to ball tracking, including automatic detection of ball location or ball kick locations). The system may be configured to output a mix signal indicative of the audio mix in a format (analog or digital) suitable for assertion to a broadcasting console.
[0070] The inventors have recognized that it is often preferable that a microphone array employed to capture action sound (to be mixed in accordance with example embodiments) includes N subsurface microphones (and optionally also other microphones which are not subsurface microphones), where N is a large number. In this context, a large number of microphones denotes a number of microphones that is too large for the outputs of said microphones to be manually mixed live (i.e., during an event whose action is being captured) by mixing personnel of ordinary skill (e.g., a single skilled human operator or two human operators) using conventional practice. For example, N≥15 is a large number in this context. It is contemplated that in some embodiments in which action sound is captured during a soccer game, the number of subsurface microphones employed is in the range from 16 to 50 inclusive (e.g., 32 to 50 inclusive). For capture of action sound during other events (e.g., sporting events on bobsled tracks or other surfaces that are larger than typical soccer fields), the number of subsurface microphones employed may be 100 or more.
[0071] The inventors have recognized that subsurface microphones positioned in a triangular tiling pattern under a field (or other event surface) desirably provide a greater fill factor (greater coverage) of the event surface than would the same number of subsurface microphones arranged in a rectangular tiling pattern (e.g., 91% for triangular tiling versus 78% for rectangular tiling).
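The 91% versus 78% fill factors quoted above match the classical packing densities of equal circles centred on triangular (hexagonal) and square lattices, which can be checked directly:

```python
import math

# Coverage ("fill factor") of a surface by equal circular pickup zones
# centred on a lattice. The ~91% and ~78% figures in the text correspond
# to the classical packing densities of the triangular (hexagonal) and
# square lattices, respectively.
triangular_fill = math.pi / (2 * math.sqrt(3))  # ~0.907, i.e. ~91%
square_fill = math.pi / 4                       # ~0.785, i.e. ~78%

print(f"triangular: {triangular_fill:.1%}, square: {square_fill:.1%}")
```

This confirms the stated advantage: for a given microphone count and pickup radius, the triangular tiling covers a noticeably larger fraction of the surface.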
[0072]
[0073] In another preferred embodiment, for capture of action sound during a soccer game on a field (pitch), N subsurface microphones (where N is a number equal, or substantially equal, to 30) are buried under the field, in a pattern that ensures uniform coverage of inner areas of the field (e.g., in a triangular tiling pattern). The subsurface microphones are connected to a mixing system either wirelessly, or with individual microphone cables, or with network cables (in this case the microphone output signals would typically be converted from analog to digital form and then transmitted, individually or in a multiplexed manner, through the network cables). A number (at least substantially equal to 12) of standard directional microphones located around (i.e., not under) the field and pointing inwards are also coupled to the mixing system. In the mixing system, the individual microphone output signals (from the subsurface microphones and other microphones) are processed (before undergoing mixing), for example, to perform thereon one or more of:
[0074] noise reduction;
[0075] equalization (e.g., to restore high frequency loss due to burying of the subsurface microphones);
[0076] dynamic range control or limiting (e.g., to avoid unwanted large peaks) and/or other dynamic processing; and/or
[0077] voice detection (e.g., dialog detection) and scrambling of detected voice (e.g., dialog) content.
In the mixing system, processed microphone output signals are then mixed in response to point of interest (PI) data of any of the types described herein. The PI data may have been generated in response to operator manipulation of a touch screen (or other) user interface, or by a tracking system which implements automatic detection of occurrences during the event (e.g., a ball tracking system slaved to ball tracking, including automatic detection of ball location or ball kick locations). The mixing system may output a mix signal indicative of the audio mix in a format (analog or digital) that is suitable for assertion to a broadcasting console.
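The roughly-30-microphone triangular layout described above can be sketched as an offset-row lattice over the pitch. The 105 m × 68 m pitch dimensions and 16.5 m spacing below are assumptions chosen so the lattice comes out near 30 points; they are not taken from this disclosure.

```python
import math

# Illustrative layout generator for ~30 buried microphones: a triangular
# (offset-row) lattice covering a soccer pitch. Pitch size and spacing
# are assumed values, not figures from the text.

def triangular_grid(width, height, spacing):
    """Return (x, y) lattice points inside a width x height rectangle,
    with alternate rows offset by half the spacing (triangular tiling)."""
    points = []
    row_step = spacing * math.sqrt(3) / 2   # vertical distance between rows
    row = 0
    y = spacing / 2
    while y < height:
        x = spacing / 2 + (spacing / 2 if row % 2 else 0.0)  # offset odd rows
        while x < width:
            points.append((x, y))
            x += spacing
        y += row_step
        row += 1
    return points

mics = triangular_grid(105.0, 68.0, 16.5)
print(len(mics), "subsurface microphone positions")
```

With these assumed values the lattice yields 30 positions, matching the N substantially equal to 30 described above; any three adjacent points form an equilateral triangle of side 16.5 m.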
[0078] It is also contemplated that microphones (including subsurface microphones) be used to capture action sound emitted during events other than football or soccer games in accordance with example embodiments. For example, in one embodiment, action sound is captured during a baseball game on a baseball field using a microphone array as shown in
[0079] In a class of example embodiments a method is provided for generating a mix indicative of action sound captured at an event on a surface (e.g., a sporting event on a field), including steps of:
[0080] (a) capturing the action sound using a microphone array (e.g., the microphone array of
[0081] (b) in an automated manner (e.g., by operation of subsystem 4 or 19 of
[0082] (c) in response to the PI data, generating (e.g., in system 2 of
[0083] Typically, the audio mix can be rendered (for playback by a loudspeaker or loudspeaker array) to provide a perception of action sound (captured by at least one of the subsurface microphones) emitted at the spatial location on the surface corresponding to the currently selected PI (or a sequence of spatial locations corresponding to a sequence of selected PIs). Typically, the audio mix is a mono mix. Some embodiments include a step of generating (e.g., in broadcast console 6 of
[0084] In some embodiments, step (b) includes steps of operating a graphic user interface (e.g., a user interface implemented using a touch screen, as in subsystem 4 of
[0085] In some embodiments, step (c) includes a step of generating (e.g., in subsystem 7 of
[0086] In some embodiments, two or more PIs on the surface are contemporaneously selected in step (b) (e.g., by an implementation of PI selection subsystem 4 and/or subsystem 19 of
[0087] In some embodiments, an audio program (including audio content indicative of a mix of action sound captured during an event on a surface) is generated (e.g., by the
[0088] The inventors have recognized that outputs of multiple microphones under a surface on which an event (e.g., a sporting event) occurs (e.g., microphones buried under the grass of a football field or other playing field), if properly processed, can allow the capture of action sound indicative of spatially localized action during the event (e.g., the sound generated by ball kicks, footsteps, and the like, during a football game), where the action occurs in areas on (e.g., above) the surface where traditional microphones located around the surface (e.g., at the sides and/or ends of the surface) fall short of coverage. The signals from the subsurface microphones, and optionally also signals from microphones located at the sides and/or ends of the surface, may be transmitted separately (wirelessly or via cables) to a processing unit (e.g., subsystem 3 of
[0089] In typical example embodiments disclosed herein, the embodiment includes or employs at least one of the following elements:
[0090] subsurface microphones under an event surface (e.g., buried under a field on which an event occurs). The subsurface microphones may be arranged in a regular or irregular grid;
[0091] output signals of subsurface microphones may be transmitted in one of the following ways: via standard cables buried underground; wirelessly, each microphone having a battery-powered transmitter and using a specific frequency of the spectrum; or wirelessly per zones (several microphones are grouped together via cables or a closed wireless network, and each group has a transmitter which multiplexes their signals and transmits them wirelessly or with fewer cables);
[0092] a subsystem (e.g., implemented in hardware) configured to collect the output signals of the microphones and perform thereon at least one of the following operations:
[0093] pre-amplification of analog microphone signals, or reception of wireless signals, or de-multiplexing of multiplexed signals;
[0094] noise reduction;
[0095] EQ (equalization) to restore the timbre of underground microphones;
[0096] dynamic compression/limiting to maintain level consistency;
[0097] delay alignment of signals from multiple spaced microphones (e.g., signals from microphones at different distances from a selected PI);
[0098] mixing of the signals based on one or multiple specified points of interest (PIs); and/or
[0099] output of one mixed signal per PI, corresponding to and indicative of action sound emitted at the PI;
[0100] automation of selection of each PI, e.g., by a tablet application in which a human user employs a graphic user interface to move the PI in real time, or via slaving to an external tracking system. The mixing process is based on the current position of the PI.
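One plausible way to base the mix on the current PI position, sketched under assumptions not stated in the disclosure (inverse-distance weighting over the few nearest microphones; the function name, `k`, and `eps` are hypothetical):

```python
import math

def mix_gains(mic_positions, pi, k=3, eps=0.5):
    """Compute one gain per microphone for a given PI: the k microphones
    nearest the PI are weighted by inverse distance (eps avoids division
    by zero at the PI itself); all others are muted. Gains sum to 1."""
    ranked = sorted((math.dist(p, pi), i) for i, p in enumerate(mic_positions))
    gains = [0.0] * len(mic_positions)
    chosen = ranked[:k]
    total = sum(1.0 / (d + eps) for d, _ in chosen)
    for d, i in chosen:
        gains[i] = (1.0 / (d + eps)) / total
    return gains
```

Moving the PI representation in the GUI would simply re-invoke such a rule with the new position, so the mix follows the action in real time.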
[0101] Some example embodiments implement at least one of the following features: [0102] action sound is captured during events other than sporting events on fields, where it is desirable to capture action sound in locations that are not accessible to traditional microphones; [0103] the tracking of the PI is slaved to automatic detection of audio events (e.g., a kick of the ball, or a starter gun); [0104] automatic calibration of microphone (e.g., subsurface microphone) gains (e.g., using sound emitted from a venue Public Address system before a game); [0105] defining and outputting multiple points of interest (PIs); [0106] outputting positional metadata (e.g., PI data) indicative of each selected PI.
[0107] Noise reduction on subsurface microphone outputs is expected to be necessary in many cases, since subsurface microphones will typically capture substantial noise at all times during operation. The noise reduction signal processing would typically be performed consistently on all subsurface microphone outputs, so that the noise-reduced signals are indicative of similar sounds and can therefore be mixed.
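The key point above is consistency across channels. A toy sketch of that idea, under the assumption that a single gate threshold is derived from noise-only recordings and applied identically to every channel (the function and its `margin` parameter are hypothetical):

```python
def reduce_noise(channels, noise_recordings, margin=2.0):
    """Crude noise-reduction sketch: one gate threshold is derived from
    noise-only recordings of all subsurface microphones and applied
    identically to every channel, keeping the processed channels
    mutually consistent and therefore mixable."""
    threshold = margin * max(abs(s) for rec in noise_recordings for s in rec)
    return [[s if abs(s) > threshold else 0.0 for s in ch] for ch in channels]
```

A production system would use spectral noise reduction rather than a broadband gate, but the same principle applies: identical parameters across all subsurface channels.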
[0108] When rendered, the output of an underground microphone would typically sound heavily filtered (due to the material above and around the microphone) unless appropriate signal processing is performed thereon. Outputs of different subsurface microphones might sound very different when rendered unless equalization is performed thereon, which is a significant problem when they are mixed automatically. Therefore, automatic equalization would typically be performed on such outputs, to make the equalized outputs sound similar.
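Such automatic equalization can be reduced to a per-band correction toward a common reference. The sketch below assumes per-band magnitudes have already been measured for each buried microphone (e.g., from a calibration sweep); how those magnitudes are obtained is outside this sketch, and the function name is hypothetical.

```python
def eq_correction(measured_bands, reference_bands):
    """Automatic-EQ sketch: given per-band magnitudes measured for one
    buried microphone and the desired reference response, return the
    per-band gains that restore the timbre (reference / measured)."""
    return [ref / max(m, 1e-9)  # floor avoids division by zero in dead bands
            for m, ref in zip(measured_bands, reference_bands)]
```

Applying the resulting gain vector per channel drives every subsurface output toward the same target response, so the automatically mixed signal has a consistent timbre.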
[0109] Action sounds will typically arrive at different buried microphones with similar loudness but different times of arrival. Thus, a time-compensation signal processing stage (implemented, for example, by subsystem 5 of
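The required time compensation follows directly from geometry: each channel's arrival lag relative to the microphone nearest the PI is the extra path length divided by the speed of sound. A sketch, assuming propagation at the in-air speed of sound (sound in soil travels differently, so this constant is an explicit simplification):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air; an assumption, propagation in soil differs

def alignment_lags(mic_positions, pi, sample_rate=48000):
    """Return, per channel, the number of samples by which that channel
    lags the microphone nearest the PI. Delaying the other channels by
    the difference from the maximum lag (or advancing each channel by
    its own lag) time-aligns sound emitted at the PI before mixing."""
    dists = [math.dist(p, pi) for p in mic_positions]
    nearest = min(dists)
    return [round((d - nearest) / SPEED_OF_SOUND * sample_rate) for d in dists]
```

When the PI moves, the lags are simply recomputed from the new PI position.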
[0110] In some embodiments, the inventive system is implemented to be easily reconfigurable, for example, so that the system (including the display generated by the graphic user interface of the PI selection subsystem) can be reconfigured when one of the microphones is detected to be malfunctioning. For example, a manual or automatic detection that one microphone is not functioning properly might trigger reconfiguration, and the reconfiguration might include automatic recalculation of optimal microphone gains needed to capture sound from a selected PI on the event surface.
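The automatic recalculation of optimal gains after a failure can be sketched as the same inverse-distance weighting idea with the failed microphone excluded; the exclusion mechanism shown here (a set of failed indices) is an illustrative assumption, not the disclosed mechanism.

```python
import math

def reconfigure_gains(mic_positions, pi, failed, k=3, eps=0.5):
    """Reconfiguration sketch: when microphones are flagged as failed,
    recompute PI mixing gains over the remaining microphones only,
    weighting the k nearest surviving microphones by inverse distance."""
    ranked = sorted((math.dist(p, pi), i)
                    for i, p in enumerate(mic_positions) if i not in failed)
    gains = [0.0] * len(mic_positions)
    chosen = ranked[:k]
    total = sum(1.0 / (d + eps) for d, _ in chosen)
    for d, i in chosen:
        gains[i] = (1.0 / (d + eps)) / total
    return gains
```

The GUI display could be refreshed from the same surviving-microphone set, so the operator sees only functioning capture points.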
[0111] In some embodiments, gains are applied to individual microphone outputs as part of the signal processing mentioned above (before mixing). This could be performed in a separate gain stage so as to enable, for example, automatic calibration of microphone signals or compensation of unwanted losses which may occur over time.
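Combining this gain stage with the PA-based calibration mentioned in [0104], one simple scheme is to record a known test signal from the venue PA on every microphone and compute the gain that brings each channel to a common level. A sketch under that assumption (the target level and function name are hypothetical):

```python
import math

def calibration_gains(pa_recordings, target_rms=0.1):
    """Gain-calibration sketch: from each microphone's recording of the
    same PA test signal, compute the gain that brings its RMS level to
    a common target, compensating losses that accrue over time
    (e.g., soil compaction or capsule aging)."""
    gains = []
    for rec in pa_recordings:
        rms = math.sqrt(sum(s * s for s in rec) / len(rec))
        gains.append(target_rms / max(rms, 1e-9))  # floor guards dead channels
    return gains
```

Re-running such a calibration before each game (as [0104] suggests) keeps the relative channel levels stable without manual trimming.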
[0112] In typical embodiments, underground microphones and their related electronics are properly protected from atmospheric conditions (e.g. with waterproof, acoustically semi-transparent capsules).
[0113] Example embodiments disclosed herein may be implemented in hardware, firmware, or software, or a combination thereof. For example, subsystem 3 or subsystem 4 of
[0114] Each such program may be implemented in any desired computer language (including machine, assembly, or high level procedural, logical, or object oriented programming languages) to communicate with a computer system. In any case, the language may be a compiled or interpreted language.
[0115] For example, when implemented by computer software instruction sequences, various functions and steps of the example embodiments may be implemented by multithreaded software instruction sequences running in suitable digital signal processing hardware, in which case the various devices, steps, and functions of the embodiments may correspond to portions of the software instructions.
[0116] Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system may also be implemented as a computer-readable storage medium, configured with (i.e., storing in a non-transitory manner) a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein.
[0117] A number of example embodiments have been described. It should be understood that various modifications may be made without departing from the spirit and scope of the example embodiments disclosed herein. Numerous modifications and variations of the example embodiments are possible in light of the above teachings. It is to be understood that within the scope of the appended claims, the example embodiments may be practiced otherwise than as specifically described herein.