Patent classifications
G10L25/18
METHODS FOR PROCESSING AND ANALYZING A SIGNAL, AND DEVICES IMPLEMENTING SUCH METHODS
A method for processing an initial signal that includes a useful signal and added noise comprises a step of frequency-selective analysis providing, from the initial signal, a plurality of wideband analysis signals, each corresponding to one of the analysed frequencies, and comprising the following actions: zero or more complex frequency translations, one or more undersampling operations, and computation of the instantaneous amplitude, instantaneous phase, and instantaneous frequency of the wideband analysis signals. This information then allows modulations of signals buried in high levels of noise to be detected, and allows the presence of a signal in a high level of noise to be detected with a good probability.
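The instantaneous amplitude, phase, and frequency computation the abstract names can be sketched with the analytic signal (Hilbert transform). This is a minimal illustration under that standard technique, not the patented method itself; the helper name `instantaneous_features` is hypothetical.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_features(x, fs):
    """Return instantaneous amplitude, phase, and frequency of a real signal."""
    z = hilbert(x)                      # analytic signal
    amplitude = np.abs(z)               # instantaneous amplitude (envelope)
    phase = np.unwrap(np.angle(z))      # instantaneous phase, unwrapped
    # instantaneous frequency in Hz from the phase derivative
    freq = np.diff(phase) * fs / (2 * np.pi)
    return amplitude, phase, freq

fs = 8000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)   # 440 Hz tone with amplitude 0.5
amp, ph, f = instantaneous_features(x, fs)
```

For a pure tone the envelope stays near 0.5 and the instantaneous frequency near 440 Hz, except for edge effects at the signal boundaries.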
Audio data processing method, apparatus and storage medium for detecting wake-up words based on multi-path audio from microphone array
An audio data processing method is provided. The method includes: obtaining multi-path audio data in an environmental space, obtaining a speech data set based on the multi-path audio data, and separately generating, in a plurality of enhancement directions, enhanced speech information corresponding to the speech data set; matching a speech hidden feature in the enhanced speech information with a target matching word, and determining an enhancement direction corresponding to the enhanced speech information having a highest degree of matching with the target matching word as a target audio direction; obtaining speech spectrum features in the enhanced speech information, and obtaining, from the speech spectrum features, a speech spectrum feature in the target audio direction; and performing speech authentication on the speech hidden feature and the speech spectrum feature that are in the target audio direction based on the target matching word, to obtain a target authentication result.
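The direction-selection step, picking the enhancement direction whose enhanced speech best matches the target matching word, reduces to an argmax over per-direction match scores. A toy sketch, with a dot-product score standing in for the real wake-word matcher (the names `pick_target_direction` and `match_score` are hypothetical):

```python
import numpy as np

def pick_target_direction(enhanced, match_score):
    """Return the index of the enhancement direction whose enhanced
    speech has the highest degree of matching with the target word."""
    scores = [match_score(e) for e in enhanced]
    return int(np.argmax(scores))

# toy stand-in: score = correlation with a fixed template
template = np.array([1.0, 0.0, 1.0, 0.0])
enhanced = [np.array([0.1, 0.9, 0.2, 0.8]),
            np.array([0.9, 0.1, 0.8, 0.2]),   # best aligned with the template
            np.array([0.5, 0.5, 0.5, 0.5])]
score = lambda e: float(np.dot(e, template))
target = pick_target_direction(enhanced, score)
```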
Audio-based detection and tracking of emergency vehicles
Techniques are provided for audio-based detection and tracking of an acoustic source. A methodology implementing the techniques according to an embodiment includes generating acoustic signal spectra from signals provided by a microphone array, and performing beamforming on the acoustic signal spectra to generate beam signal spectra, using time-frequency masks to reduce noise. The method also includes detecting, by a deep neural network (DNN) classifier, an acoustic event, associated with the acoustic source, in the beam signal spectra. The DNN is trained on acoustic features associated with the acoustic event. The method further includes performing pattern extraction, in response to the detection, to identify time-frequency bins of the acoustic signal spectra that are associated with the acoustic event, and estimating a motion direction of the source relative to the array of microphones based on Doppler frequency shift of the acoustic event calculated from the time-frequency bins of the extracted pattern.
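The final step, inferring motion direction from the Doppler shift, follows from the classical Doppler relation f_obs = f_rest · c / (c − v) for a source moving at speed v toward the listener: an observed siren frequency above the rest frequency means the source is approaching. A sketch of that decision (function names are hypothetical):

```python
def observed_frequency(f_rest, v_source, c=343.0):
    """Doppler-shifted frequency for a source moving at v_source m/s
    toward (+) or away from (-) a stationary listener; c in m/s."""
    return f_rest * c / (c - v_source)

def doppler_motion_direction(f_observed, f_rest, tolerance=1.0):
    """Classify motion: observed frequency above rest => approaching."""
    if f_observed > f_rest + tolerance:
        return "approaching"
    if f_observed < f_rest - tolerance:
        return "receding"
    return "stationary"
```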
SUBBAND BLOCK BASED HARMONIC TRANSPOSITION
The present document relates to audio source coding systems which make use of a harmonic transposition method for high frequency reconstruction (HFR), as well as to digital effect processors, e.g. exciters, where the generation of harmonic distortion adds brightness to the processed signal, and to time stretchers where a signal duration is prolonged with maintained spectral content. A system and method configured to generate a time-stretched and/or frequency-transposed signal from an input signal are described. The system comprises an analysis filterbank configured to provide an analysis subband signal from the input signal, wherein the analysis subband signal comprises a plurality of complex-valued analysis samples, each having a phase and a magnitude. Furthermore, the system comprises a subband processing unit configured to determine a synthesis subband signal from the analysis subband signal using a subband transposition factor Q and a subband stretch factor S. The subband processing unit performs a block-based nonlinear processing wherein the magnitudes of samples of the synthesis subband signal are determined from the magnitudes of corresponding samples of the analysis subband signal and of a predetermined sample of the analysis subband signal. In addition, the system comprises a synthesis filterbank configured to generate the time-stretched and/or frequency-transposed signal from the synthesis subband signal.
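The core nonlinear operation in harmonic transposition is to keep a subband sample's magnitude while scaling its phase by the transposition factor Q. A simplified per-sample sketch of that idea (the block-based processing and stretch factor S of the abstract are omitted; `transpose_subband` is a hypothetical name):

```python
import numpy as np

def transpose_subband(analysis, Q):
    """Simplified harmonic transposition of complex subband samples:
    magnitude is preserved, phase is multiplied by the factor Q."""
    mag = np.abs(analysis)
    phase = np.angle(analysis)
    return mag * np.exp(1j * Q * phase)

# one complex analysis sample with magnitude 2 and phase 0.3 rad
sample = np.array([2.0 * np.exp(1j * 0.3)])
out = transpose_subband(sample, Q=2)
```

With Q = 2 the output sample keeps magnitude 2 while its phase doubles to 0.6 rad, which is what shifts subband content to harmonic positions after synthesis.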
INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD, AND PROGRAM
An information processor including an operation control unit that controls a motion of an autonomous mobile body acting on the basis of recognition processing. In a case where a target sound, i.e. a target voice for voice recognition processing, is detected, the operation control unit moves the autonomous mobile body to a position, around an approach target, where the input level of a non-target sound (a sound that is not the target voice) becomes lower, the approach target being determined on the basis of the target sound.
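The positioning decision described above amounts to choosing, among candidate positions around the approach target, the one with the lowest non-target sound input level. A toy sketch under that reading (the function names and the noise map are assumptions for illustration):

```python
def pick_quietest_position(candidates, noise_level):
    """Among candidate positions around the approach target, pick the
    one where the input level of non-target sound is lowest."""
    return min(candidates, key=noise_level)

# toy noise map: level grows with squared distance from a quiet spot at (0, 0)
positions = [(0, 1), (2, 2), (5, 0)]
level = lambda p: p[0] ** 2 + p[1] ** 2
best = pick_quietest_position(positions, level)
```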
WEARABLE DEVICE FOR PROVIDING MULTI-MODALITY AND OPERATION METHOD THEREOF
Provided are a wearable device for providing a multi-modality and an operation method of the wearable device. The operation method of the wearable device includes: obtaining source data including at least one of image data, text data, or sound data; determining whether the image data, the text data, and the sound data are included in the source data; based on determining that at least one of the image data, the text data, or the sound data is not included in the source data, generating the image data, the text data, and the sound data that are not included in the source data by using a generator of a generative adversarial network (GAN), which receives the source data as an input; generating a pulse-width modulation (PWM) signal based on the sound data; and outputting the multi-modality based on the image data, the text data, the sound data, and the PWM signal.
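One common way to derive a PWM signal from sound data, plausibly what the abstract's haptic/vibration output step uses, though the patent does not specify the mapping, is to turn each normalized sample into a duty cycle. A naive sketch (the function name and carrier parameters are assumptions):

```python
import numpy as np

def sound_to_pwm(samples, carrier_period=100):
    """Naive PWM: map each normalized sample in [-1, 1] to a duty cycle
    in [0, 1], then emit carrier_period binary values with that fraction high."""
    out = []
    for s in np.clip(samples, -1.0, 1.0):
        duty = (s + 1.0) / 2.0
        high = int(round(duty * carrier_period))
        out.extend([1] * high + [0] * (carrier_period - high))
    return np.array(out)

pwm = sound_to_pwm([0.0, 1.0], carrier_period=10)
```

A zero-valued sample yields a 50% duty cycle (five high values out of ten), and a full-scale sample yields a 100% duty cycle.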
METHOD FOR PROCESSING AN AUDIO STREAM AND CORRESPONDING SYSTEM
A method and a system for processing an audio stream are described. At least one database of classified voices and at least one database of classified background sounds are provided, and the classified voices and background sounds are compared with the voices and sounds extracted from a suitably re-processed audio stream in order to identify possible matches.
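The database comparison step is commonly implemented as a nearest-neighbour search over feature embeddings, e.g. by cosine similarity with an acceptance threshold. A minimal sketch under that assumption (the embeddings, labels, and the `best_match` helper are hypothetical; the patent does not specify the comparison metric):

```python
import numpy as np

def best_match(query, database, threshold=0.8):
    """Return (label, similarity) of the database entry closest to the
    query embedding by cosine similarity, or (None, similarity) if the
    best similarity falls below the acceptance threshold."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    label, sim = max(((k, cos(query, v)) for k, v in database.items()),
                     key=lambda kv: kv[1])
    return (label, sim) if sim >= threshold else (None, sim)

db = {"voice_a": np.array([1.0, 0.0]), "voice_b": np.array([0.0, 1.0])}
query = np.array([0.9, 0.1])
match, similarity = best_match(query, db)
```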