G10L25/90

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD
20230222711 · 2023-07-13

An image processing apparatus includes a controller. The controller calculates a fundamental frequency component included in sound data and a harmonic component corresponding to the fundamental frequency component, converts the fundamental frequency component and the harmonic component into image data, and generates a sound image in which the fundamental frequency component and the harmonic component, converted into the image data, are arranged adjacent to each other.
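
The conversion described above can be sketched in a few lines: estimate the fundamental with a simple autocorrelation peak, read harmonic amplitudes off the magnitude spectrum, and draw the components as adjacent columns of an image. Everything here (the autocorrelation estimator, the bar-style rendering, the `sound_to_image` name) is an illustrative assumption, not the apparatus's actual pipeline.

```python
import numpy as np

def sound_to_image(samples, sr, n_harmonics=8, height=64):
    """Illustrative sketch: estimate the fundamental and its harmonics,
    then arrange them as adjacent vertical bars in a grayscale image."""
    # Fundamental via the autocorrelation peak (simplified estimator).
    ac = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    lag_min = int(sr / 1000)                 # ignore pitches above 1 kHz
    lag = lag_min + int(np.argmax(ac[lag_min:]))
    f0 = sr / lag

    # Harmonic amplitudes from the magnitude spectrum, searching a few
    # bins around each expected harmonic to tolerate estimation error.
    spec = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1 / sr)
    amps = []
    for k in range(1, n_harmonics + 1):
        idx = int(np.argmin(np.abs(freqs - k * f0)))
        amps.append(spec[max(idx - 2, 0):idx + 3].max())
    amps = np.array(amps) / (max(amps) + 1e-12)

    # Fundamental and harmonics rendered as adjacent columns.
    img = np.zeros((height, n_harmonics))
    for k, a in enumerate(amps):
        img[: int(a * height), k] = 1.0
    return f0, img
```

A tone with a strong fundamental produces a tall first column and progressively shorter columns for the upper harmonics.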

Adapting Automated Speech Recognition Parameters Based on Hotword Properties
20230223014 · 2023-07-13

A method for optimizing speech recognition includes receiving a first acoustic segment characterizing a hotword detected by a hotword detector in streaming audio captured by a user device, extracting one or more hotword attributes from the first acoustic segment, and adjusting, based on the one or more hotword attributes extracted from the first acoustic segment, one or more speech recognition parameters of an automated speech recognition (ASR) model. After adjusting the speech recognition parameters of the ASR model, the method also includes processing, using the ASR model, a second acoustic segment to generate a speech recognition result. The second acoustic segment characterizes a spoken query/command that follows the first acoustic segment in the streaming audio captured by the user device.
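
The adapt-then-recognize flow can be illustrated with a toy policy: extract simple attributes from the hotword segment, then adjust recognizer parameters before decoding the query that follows. The attribute choices (RMS level, zero-crossing rate) and the parameter names (`beam_width`, `gain_db`) are hypothetical stand-ins, not the ASR model's real interface.

```python
import numpy as np

def extract_hotword_attributes(segment):
    """Sketch: derive simple attributes from the acoustic segment that
    contained the hotword (loudness plus a crude noisiness/rate proxy)."""
    rms = float(np.sqrt(np.mean(segment ** 2)))
    # Zero-crossing rate: sign changes per sample.
    zcr = float(np.mean(np.abs(np.diff(np.sign(segment)))) / 2)
    return {"rms": rms, "zcr": zcr}

def adjust_asr_parameters(attrs, base=None):
    """Hypothetical adjustment policy: a quiet hotword boosts input gain
    and widens the decoding beam; a noisy one widens the beam only."""
    params = dict(base or {"beam_width": 8, "gain_db": 0.0})
    if attrs["rms"] < 0.05:          # quiet speaker detected
        params["gain_db"] += 6.0
        params["beam_width"] += 4
    if attrs["zcr"] > 0.3:           # noisy or sibilant-heavy segment
        params["beam_width"] += 4
    return params
```

The key property the abstract describes is the ordering: the parameters are set from the first segment before the second segment (the query/command) is passed through the recognizer.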

AUDIO TRANSPOSITION

An electronic device comprising circuitry configured to separate, by audio source separation, a first audio input signal into a first vocal signal and an accompaniment, and to transpose an audio output signal by a transposition value based on a pitch ratio, wherein the pitch ratio is based on comparing a first pitch range of the first vocal signal and a second pitch range of a second vocal signal.
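
One plausible reading of the pitch-ratio comparison can be sketched directly: take each vocal's pitch range as a (low, high) pair in hertz, compare representative pitches, and convert the ratio to a transposition value in semitones. The geometric-mean representative and the whole-semitone rounding are assumptions for illustration; the abstract does not specify either.

```python
import math

def transposition_semitones(range_a, range_b):
    """Sketch: compare two vocal pitch ranges (lo_hz, hi_hz) and return
    a transposition value, in semitones, from the ratio of their
    geometric-mean pitches (one plausible definition of 'pitch ratio')."""
    mean_a = math.sqrt(range_a[0] * range_a[1])
    mean_b = math.sqrt(range_b[0] * range_b[1])
    ratio = mean_b / mean_a
    # Round to whole semitones so the transposed accompaniment stays
    # in a conventional key relative to the original.
    return round(12 * math.log2(ratio))
```

For example, a singer whose range sits an octave below the original vocal yields a ratio of 0.5 and a transposition of −12 semitones applied to the output signal.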
METHOD AND DEVICE FOR SPEECH/MUSIC CLASSIFICATION AND CORE ENCODER SELECTION IN A SOUND CODEC
20230215448 · 2023-07-06

A two-stage speech/music classification device and method classify an input sound signal and select a core encoder for encoding it. A first stage classifies the input sound signal into one of a number of final classes. A second stage extracts high-level features of the input sound signal and selects the core encoder in response to the extracted high-level features and the final class selected in the first stage.
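
A minimal sketch of the two-stage structure, under stated assumptions: the first stage maps cheap frame features (zero-crossing rate, spectral flatness) to a final class, and the second stage turns the class into a core-encoder choice. The specific features, thresholds, and the ACELP/TCX encoder names (borrowed from EVS-style codecs) are illustrative, not the patented classifier.

```python
import numpy as np

def first_stage_class(frame):
    """Sketch of a first stage: zero-crossing rate and spectral flatness
    map the frame to one of a few final classes."""
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
    spec = np.abs(np.fft.rfft(frame)) + 1e-12
    flatness = float(np.exp(np.mean(np.log(spec))) / np.mean(spec))
    if flatness > 0.5:
        return "inactive"              # noise-like background
    return "speech" if zcr > 0.1 else "music"

def select_core_encoder(frame):
    """Second stage: the first-stage class (plus, in a real codec,
    high-level features) picks the core encoder."""
    cls = first_stage_class(frame)
    if cls == "speech":
        return cls, "ACELP"            # time-domain, speech-tuned core
    return cls, "TCX"                  # transform-domain core
```

The point of the split is that the second stage can overrule or refine a cheap first-stage decision using costlier features before committing to an encoder.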

SOUND PROCESSING METHOD USING DJ TRANSFORM
20230215456 · 2023-07-06

Provided is a sound processing method performed by a computer. The method comprises: generating a DJ transform spectrogram indicating estimated pure-tone amplitudes, for respective frequencies corresponding to the natural frequencies of a plurality of springs and for a plurality of time points, by modeling the oscillation of the springs, which have different natural frequencies, with respect to an input sound and calculating the estimated pure-tone amplitudes for the respective natural frequencies; calculating degrees of fundamental-frequency suitability based on a moving average or a moving standard deviation of the estimated pure-tone amplitudes with respect to each natural frequency of the DJ transform spectrogram; and extracting the fundamental frequency based on local maximum values of the degrees of fundamental-frequency suitability for the respective natural frequencies at each of the plurality of time points.
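
The spring-bank idea can be approximated in code: drive one lightly damped oscillator per natural frequency with the input, track its envelope as the estimated pure-tone amplitude, smooth with a moving average, and pick the strongest frequency. The semi-implicit Euler integrator, the damping value, the envelope formula, and the reduction of "local maxima at each time point" to a single argmax are all simplifying assumptions, not the actual DJ transform.

```python
import numpy as np

def dj_like_spectrogram(x, sr, freqs, damping=20.0):
    """Sketch of the spring bank: one lightly damped oscillator per
    natural frequency, driven by the input; its envelope serves as the
    estimated pure-tone amplitude over time."""
    dt = 1.0 / sr
    rows = []
    for f in freqs:
        w = 2.0 * np.pi * f
        pos, vel = 0.0, 0.0
        env = np.empty(len(x))
        for n, drive in enumerate(x):
            # Spring with natural frequency w, viscous damping, driven by x.
            acc = drive - 2.0 * damping * vel - w * w * pos
            vel += acc * dt
            pos += vel * dt
            env[n] = np.hypot(pos * w, vel)   # energy-based envelope
        rows.append(env)
    return np.array(rows)                      # shape (len(freqs), len(x))

def fundamental_from_suitability(spec, freqs, win=64):
    """Suitability as a moving average of the estimated amplitudes; the
    strongest frequency at the final time point stands in for the
    local-maximum selection described in the abstract."""
    kernel = np.ones(win) / win
    smooth = np.array([np.convolve(r, kernel, mode="same") for r in spec])
    return float(freqs[int(np.argmax(smooth[:, -1]))])
```

An oscillator resonates strongly only when the input contains energy near its own natural frequency, which is what lets the bank act as a frequency analyzer.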

Short segment generation for user engagement in vocal capture applications

User interface techniques provide user vocalists with mechanisms for solo audiovisual capture and for seeding subsequent performances by other users (e.g., joiners). Audiovisual capture may be against a full-length work or seed spanning much or all of a pre-existing audio (or audiovisual) work and, in some cases, may mix a user's captured media content for at least some portions of the work to seed further contributions of one or more joiners. A short seed or short segment may span less than all (and in some cases much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook, or other limited “chunk” of an audio (or audiovisual) work may constitute a short seed or short segment. Computational techniques are described that allow a system to automatically identify suitable short seeds or short segments. After audiovisual capture against the short seed or short segment, a resulting solo or group, full-length or short-form performance may be posted, livestreamed, or otherwise disseminated in a social network.
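
The segment-identification step lends itself to a small sketch: score each candidate window by how strongly its feature sequence recurs elsewhere in the track, on the theory that choruses and hooks repeat. The per-frame feature matrix, the cosine self-similarity scoring, and the `best_short_segment` name are illustrative assumptions, not the actual technique.

```python
import numpy as np

def best_short_segment(features, seg_len):
    """Sketch: score each candidate segment of seg_len frames by its
    strongest non-overlapping recurrence elsewhere in the track (a crude
    stand-in for chorus/hook detection); return the best start frame."""
    n = len(features)
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    sim = feats @ feats.T              # frame-to-frame cosine self-similarity
    best_start, best_score = 0, -np.inf
    for s in range(n - seg_len + 1):
        score = -np.inf
        for o in range(n - seg_len + 1):
            if abs(o - s) < seg_len:
                continue               # skip the segment matching itself
            # Average similarity along the diagonal (frame-aligned match).
            d = np.mean(sim[s + np.arange(seg_len), o + np.arange(seg_len)])
            score = max(score, d)
        if score > best_score:
            best_start, best_score = s, score
    return best_start
```

With beat-synchronous chroma features, the highest-scoring window tends to fall on repeated material such as a chorus, which is exactly the kind of "chunk" the abstract proposes as a short seed.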
