G10L25/15

Automatically managing the musical tastes and preferences of system users based on user feedback and autonomous analysis of music automatically composed and generated by an automated music composition and generation system
11430418 · 2022-08-30

An automated music composition and generation system having an automated music composition and generation engine for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user. The automated music composition and generation engine includes a user taste generation subsystem for automatically (i) determining the musical tastes and preferences of a system user based on user feedback and autonomous piece analysis, (ii) maintaining a system user profile reflecting the musical tastes and preferences of each system user, and (iii) using the musical taste and preference information to change or modify the musical experience descriptors provided to the system so as to produce a digital piece of composed music that better reflects the musical tastes and preferences of the system user.
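A minimal sketch of the user-taste subsystem idea, with all names hypothetical: per-user preference weights are updated from explicit feedback and then used to filter the musical experience descriptors passed to the composition engine. The update rule (an exponential moving average) and the drop threshold are illustrative assumptions, not the patented method.

```python
class UserTasteProfile:
    """Tracks a user's preference weight for each musical experience descriptor."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.weights = {}  # descriptor -> preference in [0, 1]

    def record_feedback(self, descriptors, liked):
        """Move each descriptor's weight toward 1 (liked) or 0 (disliked)."""
        target = 1.0 if liked else 0.0
        for d in descriptors:
            prev = self.weights.get(d, 0.5)  # neutral prior for unseen descriptors
            self.weights[d] = prev + self.learning_rate * (target - prev)

    def adjust_descriptors(self, requested, threshold=0.3):
        """Drop requested descriptors the user has consistently disliked."""
        return [d for d in requested
                if self.weights.get(d, 0.5) >= threshold]


profile = UserTasteProfile()
profile.record_feedback(["melancholy", "uptempo"], liked=False)
profile.record_feedback(["melancholy"], liked=False)
profile.record_feedback(["melancholy"], liked=False)
profile.record_feedback(["uptempo"], liked=True)
adjusted = profile.adjust_descriptors(["melancholy", "uptempo", "ambient"])
```

After repeated negative feedback, "melancholy" falls below the threshold and is filtered out, while "uptempo" (liked on balance) and the unseen "ambient" pass through unchanged.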
Methods and apparatus for obtaining biometric data
11393449 · 2022-07-19

A method of modelling speech of a user of a headset comprising a microphone, the method comprising: receiving a first sample, from a bone-conduction sensor, representing bone-conducted speech of the user; obtaining a measure of fundamental frequency of the bone-conducted speech in each of a plurality of speech frames of the first sample; obtaining a first distribution of the fundamental frequencies of the bone-conducted speech over the plurality of speech frames; receiving, from the microphone, a second sample; determining a first acoustic condition at the headset based on the second sample; and performing a biometric process based on the first distribution of fundamental frequencies and the first acoustic condition.
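A hedged sketch of the first half of this pipeline, with all function names hypothetical: per-frame fundamental-frequency estimation by autocorrelation peak-picking, then a histogram of F0 values over all voiced frames as the abstract's "first distribution". The acoustic-condition and biometric-matching steps are not shown.

```python
import math

def estimate_f0(frame, sample_rate, f0_min=60.0, f0_max=400.0):
    """Return the F0 (Hz) of one frame via the peak autocorrelation lag."""
    lag_min = int(sample_rate / f0_max)
    lag_max = int(sample_rate / f0_min)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, len(frame) - 1)):
        corr = sum(frame[i] * frame[i + lag]
                   for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

def f0_distribution(frames, sample_rate, bin_hz=20.0):
    """Histogram of per-frame F0 values (the 'first distribution')."""
    hist = {}
    for frame in frames:
        f0 = estimate_f0(frame, sample_rate)
        if f0 > 0.0:
            b = int(f0 // bin_hz) * bin_hz
            hist[b] = hist.get(b, 0) + 1
    total = sum(hist.values())
    return {b: n / total for b, n in hist.items()}


# Synthetic 120 Hz "bone-conducted" frames sampled at 8 kHz.
sr = 8000
frame = [math.sin(2 * math.pi * 120 * t / sr) for t in range(400)]
dist = f0_distribution([frame] * 5, sr)
```

A real implementation would gate on voicing and energy before counting a frame, but the distribution-building step is the same shape.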

Real-Time Speech To Singing Conversion
20220223127 · 2022-07-14

A method of converting a frame of a voice sample to a singing frame includes obtaining a pitch value of the frame; obtaining formant information of the frame using the pitch value; obtaining aperiodicity information of the frame using the pitch value; obtaining a tonic pitch and chord pitches; using the formant information, the aperiodicity information, the tonic pitch, and the chord pitches to obtain the singing frame; and outputting or saving the singing frame.
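A sketch of just the pitch-mapping step, under the assumption (not stated in the abstract) that the spoken pitch is snapped to the nearest of the tonic and chord pitches; the names are hypothetical. The resulting shift ratio is what a resynthesis stage using the formant and aperiodicity information would apply.

```python
def nearest_target_pitch(frame_pitch_hz, tonic_hz, chord_pitches_hz):
    """Snap the spoken frame pitch to the closest tonic/chord pitch."""
    candidates = [tonic_hz] + list(chord_pitches_hz)
    return min(candidates, key=lambda p: abs(p - frame_pitch_hz))

def pitch_shift_ratio(frame_pitch_hz, target_hz):
    """Ratio a resynthesis stage would apply to reach the target pitch."""
    return target_hz / frame_pitch_hz


# Spoken pitch near 300 Hz mapped onto a C-major chord over a C4 tonic.
target = nearest_target_pitch(300.0, 261.63, [329.63, 392.00])
ratio = pitch_shift_ratio(300.0, target)
```

Snapping per frame this way yields a stepped, chord-locked melody; a production system would also smooth transitions between frames to avoid audible pitch jumps.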

ELECTRONIC DEVICE AND CONTROL METHOD THEREOF

The disclosure relates to an electronic device and a control method thereof. The electronic device includes a memory, and a processor configured to: obtain first feature data for estimating a waveform by inputting acoustic data of a first quality to a first encoder model; and obtain waveform data of a second quality, higher than the first quality, by inputting the first feature data to a decoder model.
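A sketch of the described data flow only: an "encoder" maps low-quality acoustic data to feature vectors, and a "decoder" maps those features to a denser output waveform. The real device uses trained neural models; both stages here are stand-in functions purely to show the encode-then-decode pipeline.

```python
def encoder(acoustic_frames):
    """Stand-in encoder: one (mean, peak) feature vector per input frame."""
    return [(sum(f) / len(f), max(f)) for f in acoustic_frames]

def decoder(features, upsample=4):
    """Stand-in decoder: synthesize more samples per frame than the input had."""
    wave = []
    for mean, peak in features:
        # Trivial "synthesis": ramp from the mean toward the peak in each frame.
        for i in range(upsample):
            wave.append(mean + (peak - mean) * i / (upsample - 1))
    return wave


frames = [[0.0, 0.2], [0.4, 0.6]]    # low-quality input, 2 samples per frame
waveform = decoder(encoder(frames))  # 4 samples per frame out
```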

Apparatus and method of processing audio signals

A method for processing audio signals includes extracting a fundamental frequency (F0) component from a first audio signal; processing the first audio signal with Dominant Melody Enhancement (DoME), based on a hearing profile, to output a second audio signal; and providing the second audio signal to the user. The DoME enhances the F0 component, and its enhancement weight corresponds to the hearing profile.
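One simple reading of this, sketched with hypothetical names: boost the spectral bins at the fundamental frequency and its harmonics by a weight drawn from the listener's hearing profile. The harmonic-boost structure and the scalar weight are illustrative assumptions.

```python
def dome_enhance(spectrum, f0_bin, hearing_weight, n_harmonics=3):
    """Scale the F0 bin and its harmonics in a magnitude spectrum by the weight."""
    out = list(spectrum)
    for h in range(1, n_harmonics + 1):
        b = f0_bin * h
        if b < len(out):
            out[b] *= hearing_weight
    return out


spectrum = [1.0] * 16                # flat magnitude spectrum
enhanced = dome_enhance(spectrum, f0_bin=2, hearing_weight=2.5)
```

A profile-dependent system would likely vary the weight per band (e.g. boosting more where the listener's hearing loss is greater) rather than using one scalar.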

Method and apparatus for measuring distortion and muffling of speech by a face mask
11295759 · 2022-04-05

Systems and methods are provided for measuring the distortion and muffling caused by a face mask. For example, in one embodiment a simulated voice source produces a sound. The sound is then acoustically coupled to a simulated vocal tract and a face mask. A microphone receives sound and produces a signal and an analyzer receives the signal from the microphone. A manikin head or other facial structure may also simulate fitting of the face mask onto a face. The analyzer may further produce a quantitative assessment of the distortion and muffling of the face mask, for example, by comparing at least one spectrum obtained with the face mask and at least one spectrum obtained without the face mask.
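The spectrum comparison at the end can be sketched as a per-band insertion loss: the level difference between a spectrum measured without the mask and one measured with it. Function names are hypothetical; this is only the final comparison step, not the measurement apparatus.

```python
import math

def band_levels_db(power_spectrum):
    """Convert per-band power to dB, guarding against log of zero."""
    return [10 * math.log10(max(p, 1e-12)) for p in power_spectrum]

def insertion_loss_db(no_mask_spectrum, mask_spectrum):
    """Per-band level difference; positive values mean the mask attenuates that band."""
    ref = band_levels_db(no_mask_spectrum)
    test = band_levels_db(mask_spectrum)
    return [r - t for r, t in zip(ref, test)]


no_mask = [1.0, 1.0, 1.0, 1.0]     # flat reference spectrum, mask off
with_mask = [1.0, 0.5, 0.25, 0.1]  # higher bands muffled by the mask
loss = insertion_loss_db(no_mask, with_mask)
```

A rising loss toward the high bands is the expected muffling signature; a single scalar metric could then be the mean loss over the speech-critical bands.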
