G10L19/022

Harmonic transposition in an audio coding method and system
11594234 · 2023-02-28 ·

The present invention relates to transposing signals in time and/or frequency, and in particular to the coding of audio signals. More particularly, the present invention relates to high frequency reconstruction (HFR) methods including a frequency domain harmonic transposer. A method and system for generating a transposed output signal from an input signal using a transposition factor T is described. The system comprises an analysis window of length L_a, extracting a frame of the input signal, and an analysis transformation unit of order M, transforming the samples into M complex coefficients. M is a function of the transposition factor T. The system further comprises a nonlinear processing unit altering the phase of the complex coefficients by using the transposition factor T, a synthesis transformation unit of order M, transforming the altered coefficients into M altered samples, and a synthesis window of length L_s, generating a frame of the output signal.
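
The analysis-transform / phase-modification / synthesis-transform chain described above can be sketched for a single frame as follows. This is a minimal illustration of the general technique, not the patented implementation: the window choices, the use of an FFT as the order-M transform, and the function name are all assumptions.

```python
import numpy as np

def transpose_frame(frame, T, M):
    """Sketch of one frame of frequency-domain harmonic transposition.

    Analysis transform of order M, phase of each complex coefficient
    scaled by the transposition factor T, then a synthesis transform
    of order M. Window shapes are illustrative, not from the patent.
    """
    analysis_window = np.hanning(len(frame))        # analysis window, length L_a
    coeffs = np.fft.fft(frame * analysis_window, M) # M complex coefficients
    # Nonlinear processing: multiply each coefficient's phase by T,
    # leaving the magnitude unchanged.
    altered = np.abs(coeffs) * np.exp(1j * T * np.angle(coeffs))
    samples = np.real(np.fft.ifft(altered, M))      # M altered samples
    return samples * np.hanning(M)                  # synthesis window, length L_s
```

In a full transposer, successive frames produced this way would be overlap-added with an analysis hop and a T-times-larger synthesis hop to realize the pitch transposition.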

COMPRESSIVE SENSING FOR FULL MATRIX CAPTURE
20230098406 · 2023-03-30 ·

Examples of the present subject matter provide techniques for compressive sampling of acoustic data. A probe may sample in a compression mode, such that the entire matrix is not sampled at full-time resolution or spatial resolution. Therefore, the initial amount of data captured by the probe is reduced, allowing for lower density hardware (e.g., fewer analog-to-digital conversion channels or related analog front-end hardware) to be used at a lower data rate.
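
The key idea, sampling only a subset of the full transmit/receive matrix, can be illustrated with a random acquisition mask. This is a hedged sketch of compressive acquisition in general; the function name, mask construction, and parameters are illustrative and not taken from the application.

```python
import numpy as np

def compressive_fmc_mask(n_elements, keep_fraction, seed=None):
    """Random transmit/receive sampling mask for full matrix capture.

    Instead of acquiring all n_elements x n_elements A-scans, keep only
    a random fraction, reducing the initial data volume and the number
    of simultaneously active acquisition channels.
    """
    rng = np.random.default_rng(seed)
    # True = acquire this (transmit, receive) element pair.
    return rng.random((n_elements, n_elements)) < keep_fraction
```

Reconstruction of the unsampled entries would then rely on a sparsity prior, as is standard in compressive sensing.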

Hypothesis stitcher for speech recognition of long-form audio

A hypothesis stitcher for speech recognition of long-form audio provides superior performance, such as higher accuracy and reduced computational cost. An example disclosed operation includes: segmenting the audio stream into a plurality of audio segments; identifying a plurality of speakers within each of the plurality of audio segments; performing automatic speech recognition (ASR) on each of the plurality of audio segments to generate a plurality of short-segment hypotheses; merging at least a portion of the short-segment hypotheses into a first merged hypothesis set; inserting stitching symbols into the first merged hypothesis set, the stitching symbols including a window change (WC) symbol; and consolidating, with a network-based hypothesis stitcher, the first merged hypothesis set into a first consolidated hypothesis. Multiple variations are disclosed, including alignment-based stitchers and serialized stitchers, which may operate as speaker-specific stitchers or multi-speaker stitchers, and may further support multiple options for differing hypothesis configurations.
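
The merge step, combining short-segment hypotheses and inserting window-change symbols at segment boundaries, can be sketched as below. This illustrates only the input format to the stitcher; the function name and the `<WC>` token spelling are assumptions, and the network-based consolidation itself is not shown.

```python
def merge_hypotheses(segment_hypotheses, wc_symbol="<WC>"):
    """Merge short-segment ASR hypotheses into one token sequence,
    inserting a window-change (WC) symbol at each segment boundary.
    """
    merged = []
    for i, hypothesis in enumerate(segment_hypotheses):
        if i > 0:
            merged.append(wc_symbol)  # mark the window change
        merged.extend(hypothesis.split())
    return merged
```

A network-based stitcher would then consume this symbol-annotated sequence and emit a single consolidated hypothesis.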

Systems and methods to verify values input via optical character recognition and speech recognition

Disclosed are systems, methods, and non-transitory computer-readable medium for data input with multi-format validation. The method may include receiving data input via a microphone mounted on a user device and receiving the data input via a camera mounted on the user device. Additionally, the method may include comparing the data input via the microphone and the data input via the camera and determining whether the comparison of the data input exceeds a predetermined confidence level. Additionally, the method may include storing the data input, upon determining that the comparison of the data input exceeds the predetermined confidence level and presenting to the user a notification of validation upon determining that the comparison of the data input does not exceed the predetermined confidence level. Additionally, the method may include receiving from the user a validation of the data input based on the notification of validation and storing the data input based on the validation of the data input.
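
The compare-then-branch logic of the method can be sketched as follows. The similarity metric (a character-level ratio from the standard library), the threshold value, and all names are illustrative assumptions; the patent does not specify how the confidence level is computed.

```python
from difflib import SequenceMatcher

def validate_inputs(speech_text, ocr_text, threshold=0.9):
    """Compare speech-recognized and OCR-recognized values.

    Store the value when the comparison exceeds a predetermined
    confidence level; otherwise flag it for user validation.
    """
    confidence = SequenceMatcher(None, speech_text, ocr_text).ratio()
    if confidence >= threshold:
        return {"status": "stored", "value": ocr_text,
                "confidence": confidence}
    return {"status": "needs_validation", "confidence": confidence}
```

On a `needs_validation` result, the system would present the notification and store the value only after the user confirms it.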

SPEECH RECOGNITION METHOD AND APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM
20220343898 · 2022-10-27 ·

A speech recognition method, including acquiring first linear frequency spectrums corresponding to audios to be trained with different sampling rates; determining the maximum sampling rate and the other sampling rates; determining the maximum frequency domain sequence numbers of the first linear frequency spectrums as a first frequency domain sequence number and a second frequency domain sequence number; in the first linear frequency spectrums corresponding to the other sampling rates, configuring amplitude values corresponding to each frequency domain sequence number that is greater than the first frequency domain sequence number and less than or equal to the second frequency domain sequence number to be zero, to obtain second linear frequency spectrums; determining first speech features and second speech features; and using the first speech features and the second speech features to train a machine learning model.
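
The band-zeroing step, setting amplitudes to zero for frequency-domain sequence numbers above the lower rate's maximum and up to the maximum rate's maximum, can be sketched as below. The function name and the inclusive/exclusive convention for the bin indices are assumptions for illustration.

```python
import numpy as np

def zero_band(spectrum, k1, k2):
    """Zero amplitude values for frequency-domain sequence numbers k
    with k1 < k <= k2, so a lower-sampling-rate spectrum occupies the
    same bin range as the maximum-sampling-rate spectrum.
    """
    out = spectrum.copy()
    out[k1 + 1:k2 + 1] = 0.0  # bins strictly above k1, up to and including k2
    return out
```

Features extracted from the original and band-zeroed spectrums then let one model be trained consistently across mixed-sampling-rate audio.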

SELECTION OF QUANTISATION SCHEMES FOR SPATIAL AUDIO PARAMETER ENCODING
20230129520 · 2023-04-27 ·

There is disclosed inter alia an apparatus for spatial audio signal encoding comprising means for receiving for each time frequency block of a sub band of an audio frame a spatial audio parameter comprising an azimuth and an elevation; determining a first distortion measure for the audio frame by determining a first distance measure for each time frequency block and summing the first distance measure for each time frequency block; determining a second distortion measure for the audio frame by determining a second distance measure for each time frequency block and summing the second distance measure for each time frequency block, and selecting either the first quantization scheme or the second quantization scheme for quantising the elevation and the azimuth for all time frequency blocks of the sub band of the audio frame, wherein the selecting is dependent on the first and second distortion measures.
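
The selection rule, accumulating a per-block distance into a per-frame distortion measure for each scheme and choosing the scheme with the lower total, can be sketched as follows. The squared angular error used as the distance measure and all names are illustrative assumptions; the disclosure does not fix the distance metric here.

```python
def select_scheme(azimuths, elevations, quantizers):
    """Pick the quantization scheme with the lower total distortion
    over the time-frequency blocks of one sub band.

    `quantizers` maps a scheme name to a function
    (azimuth, elevation) -> (quantized_azimuth, quantized_elevation).
    """
    distortions = {}
    for name, quantize in quantizers.items():
        total = 0.0
        for az, el in zip(azimuths, elevations):
            az_q, el_q = quantize(az, el)
            # Distance measure for one time-frequency block
            # (illustrative: squared angular error).
            total += (az - az_q) ** 2 + (el - el_q) ** 2
        distortions[name] = total  # distortion measure for the frame
    return min(distortions, key=distortions.get)
```

The selected scheme is then applied to the azimuth and elevation of every time-frequency block in that sub band of the frame.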