Patent classifications
H03H2021/0034
Projection-Based Audio Object Extraction from Audio Content
A method is disclosed for extracting audio objects from audio content comprising a plurality of channels. The method includes identifying a first set of projection spaces, including a first subset for a first channel and a second subset for a second channel of the plurality of channels. The method may further include determining a first set of correlations between the first and second channels, each correlation corresponding to one projection space from the first subset and one from the second subset. Still further, the method may include extracting an audio object from an audio signal of the first channel based at least in part on a first correlation among the first set of correlations and on the projection space from the first subset corresponding to that correlation, the first correlation being greater than a first predefined threshold. Corresponding systems and computer program products are also disclosed.
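The pair-wise correlation logic of this claim can be sketched as follows. This is only an illustration under assumed specifics: orthonormal subspace bases stand in for the "projection spaces", cosine similarity stands in for the correlation measure, and the `extract_object` helper is hypothetical, not the patent's implementation.

```python
import numpy as np

def project(x, B):
    # Orthogonal projection of signal x onto the subspace spanned
    # by the orthonormal columns of B.
    return B @ (B.T @ x)

def extract_object(ch1, ch2, spaces1, spaces2, threshold=0.8):
    # Correlate every pair of projections (one candidate space per
    # channel); return channel 1's projection for the best pair whose
    # correlation exceeds the threshold, or None if no pair qualifies.
    best = None
    for B1 in spaces1:
        p1 = project(ch1, B1)
        for B2 in spaces2:
            p2 = project(ch2, B2)
            denom = np.linalg.norm(p1) * np.linalg.norm(p2)
            corr = float(p1 @ p2) / denom if denom > 0 else 0.0
            if corr > threshold and (best is None or corr > best[0]):
                best = (corr, p1)
    return None if best is None else best[1]
```

A component shared by both channels inside one candidate subspace produces a high cross-channel correlation there, so that projection of channel 1 is returned as the extracted object.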
System and method for anomaly detection using anomaly cueing
Described is a system for anomaly detection using anomaly cueing. In operation, an input image containing two-dimensional (2D) mixtures of primary components is reformatted into one-dimensional (1D) input signals. Blind source separation separates the 1D input signals into individual primary components as 1D output signals, which are then reformatted into 2D spatially independent component output images. The system calculates all possible pair-product images of these output images, along with their corresponding signal-to-noise ratios. The pair-product image with the peak signal-to-noise ratio is selected and thresholded to identify anomalies. Several types of devices can then be controlled based on the identified anomalies in the pair-product image.
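The pair-product cueing stage can be sketched in a few lines, with the blind-source-separation step assumed to have already produced the component images. The peak-over-median SNR proxy and the `anomaly_cue` helper are assumptions for illustration, not the patent's actual scoring.

```python
import numpy as np

def anomaly_cue(components, k=3.0):
    # Form all pairwise products of the 2D independent-component
    # images, score each with a peak-SNR proxy (peak magnitude over
    # median magnitude), keep the best, and threshold it for anomalies.
    n = len(components)
    assert n >= 2, "need at least two component images"
    best_img, best_snr = None, -np.inf
    for i in range(n):
        for j in range(i + 1, n):
            prod = components[i] * components[j]
            noise = np.median(np.abs(prod)) + 1e-12
            snr = float(np.abs(prod).max() / noise)
            if snr > best_snr:
                best_snr, best_img = snr, prod
    mask = np.abs(best_img) > k * best_img.std()  # anomaly mask
    return best_img, mask
```

Multiplying component images rewards pixels that are bright in both factors, which is why an anomaly present in two components dominates its pair product.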
Acoustic source separation systems
A method for acoustic source separation comprises inputting acoustic data from a plurality of acoustic sensors, combined from a plurality of acoustic sources; converting the acoustic data to time-frequency domain data comprising time-frequency data frames; and constructing a multichannel filter for the time-frequency data frames to separate signals from the acoustic sources. The constructing comprises determining a set of de-mixing matrices (W_f) to apply to each time-frequency data frame to determine a vector of separated outputs (y_ft), by modifying each de-mixing matrix by a respective gradient value (G; G′) for a frequency, dependent upon a gradient of a cost function measuring the separation of the sources by the respective de-mixing matrix. The respective gradient values for each frequency are each calculated from a stochastic selection of the time-frequency data frames.
Source separation for reverberant environment
Embodiments of source separation for a reverberant environment are disclosed. According to a method, first microphone signals for each individual one of at least one source are captured by at least two microphones during a period in which only that individual source produces sounds. Mixing parameters modeling the acoustic paths between the at least one source and the at least two microphones are learned by a processor from the first microphone signals. Second microphone signals are then captured by the at least two microphones during a period in which all of the at least one source produce sounds. A reconstruction model is estimated by the processor from the mixing parameters and the second microphone signals, and the processor performs the source separation by applying the reconstruction model.
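The two-phase idea, learn mixing parameters from solo recordings, then invert them on the full mixture, can be sketched with an instantaneous (non-convolutive) simplification of the acoustic-path model; the principal-eigenvector estimator and pseudo-inverse reconstruction below are illustrative assumptions, not the patent's model.

```python
import numpy as np

def learn_mixing(solo_recordings):
    # For each source's solo period, estimate its mixing vector as the
    # dominant eigenvector of the microphone covariance (instantaneous
    # stand-in for the acoustic-path model).
    cols = []
    for X in solo_recordings:          # X: (mics, samples), one source active
        C = X @ X.T / X.shape[1]
        _, V = np.linalg.eigh(C)
        a = V[:, -1]                   # principal direction
        if a[np.argmax(np.abs(a))] < 0:
            a = -a                     # fix eigenvector sign for consistency
        cols.append(a)
    return np.stack(cols, axis=1)      # mixing matrix A: (mics, sources)

def separate(A, X_mix):
    # Reconstruction model: least-squares unmixing via the pseudo-inverse.
    return np.linalg.pinv(A) @ X_mix
```

Because each solo-period covariance is rank one under this simplification, its principal eigenvector recovers the source's mixing column up to scale, and the pseudo-inverse then recovers the sources up to that scale.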
Sound source separation apparatus
A sound source separation apparatus includes: a separation-matrix processor that transforms a plurality of observation signals, corresponding to sounds propagated from a plurality of sound sources, into a frequency-domain signal group, updates a separation matrix based on the frequency-domain signal group, and transforms the updated separation matrix into time-series filter coefficients for output; a filter-coefficient transformer that transforms the filter coefficients by partially removing their non-causal components; and a separator that supplies the filter coefficients to a filter group, generating a plurality of separation signals separated from the plurality of observation signals in accordance with the separation matrix.
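The matrix-to-filter step can be sketched as follows. One plausible reading, assumed here, is that each entry of the frequency-domain separation matrix is inverse-FFT'd into a time-domain impulse response, in which the non-causal (negative-time) taps appear wrapped into the second half of the buffer and are zeroed; the abstract's "partially removes" may refer to a subtler windowing than this hard truncation.

```python
import numpy as np

def separation_filters(W_f):
    # Inverse-FFT each separation-matrix entry across the frequency
    # axis to obtain time-series filter coefficients, then zero the
    # wrapped negative-time taps (the non-causal components).
    h = np.fft.ifft(W_f, axis=0).real   # (taps, out_ch, in_ch)
    causal = h.copy()
    causal[h.shape[0] // 2:] = 0.0      # drop the non-causal half
    return causal
```

Removing the non-causal taps is what allows the resulting filters to run as ordinary streaming FIR filters on the observation signals.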
Cognitive signal processor for simultaneous denoising and blind source separation
Described is a cognitive signal processor for signal denoising and blind source separation. During operation, the cognitive signal processor receives a mixture signal that comprises a plurality of source signals. A denoised reservoir state signal is generated by mapping the mixture signal to a dynamic reservoir to perform signal denoising. At least one separated source signal is identified by adaptively filtering the denoised reservoir state signal.
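A toy echo-state sketch of the reservoir idea: the mixture drives a fixed random recurrent reservoir, and a linear readout recovers a source from the reservoir states. Everything below is an illustrative stand-in; in particular, the readout is fit by ridge regression against a known reference signal, whereas the patent's adaptive filtering operates blindly on the denoised reservoir state.

```python
import numpy as np

def reservoir_denoise(u, target, n_res=100, rho=0.9, seed=0):
    # Drive a fixed random reservoir with the mixture signal u and fit
    # a linear readout to the reference `target` (ridge regression as a
    # stand-in for the patent's adaptive filtering).
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_res, n_res))
    A *= rho / np.max(np.abs(np.linalg.eigvals(A)))   # set spectral radius
    b = rng.standard_normal(n_res)                    # input weights
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(A @ x + b * ut)                   # reservoir update
        states.append(x)
    S = np.array(states)                              # (samples, n_res)
    w = np.linalg.solve(S.T @ S + 1e-3 * np.eye(n_res), S.T @ target)
    return S @ w                                      # readout output
```

The reservoir's fading memory acts as a bank of nonlinear filters over the recent input, which is what lets a purely linear readout suppress broadband noise around a structured source.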