G10L25/15

SYSTEMS AND METHODS FOR GENERATING SIGNATURE AMBIENT SOUNDS AND MAPS THEREOF

Systems and methods are provided for generating signature ambient sounds and maps thereof to assist in navigation based on a start location, a destination location, a start time, and an average walking speed. The maps are generated by concatenating two or more of the signature ambient sounds with two or more segments of voice-based navigation guidance. Signature ambient sounds for up to all location coordinate pairs in a map, up to all time ranges, and up to all walking-speed ranges are generated and tagged with the corresponding location coordinate pair, time range, and average walking speed. The signature ambient sounds are generated by applying digital signal processing techniques to ambient sounds recorded by client devices, after the recordings are filtered and outliers are removed.
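As a sketch of the tagging and concatenation steps described above, the toy structure below tags each signature sound with a coordinate pair, a time range, and a walking-speed range, then interleaves matching sounds with voice guidance cues along a route. All names here (`SignatureSound`, `lookup`, `build_sound_map`) are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SignatureSound:
    """A signature ambient sound tagged with location, time, and speed."""
    coords: Tuple[float, float]       # location coordinate pair (lat, lon)
    time_range: Tuple[int, int]       # hour-of-day range, e.g. (6, 10)
    speed_range: Tuple[float, float]  # walking-speed range in m/s
    clip: str                         # stand-in for the audio data

def lookup(sounds: List[SignatureSound], coords, hour, speed) -> Optional[SignatureSound]:
    """Find the signature sound tagged for this location, time, and speed."""
    for s in sounds:
        if (s.coords == coords
                and s.time_range[0] <= hour < s.time_range[1]
                and s.speed_range[0] <= speed < s.speed_range[1]):
            return s
    return None

def build_sound_map(route, sounds, start_hour, speed, guidance) -> List[str]:
    """Concatenate matching signature sounds with voice guidance cues."""
    segments = []
    for coords, cue in zip(route, guidance):
        match = lookup(sounds, coords, start_hour, speed)
        if match is not None:
            segments.append(match.clip)
        segments.append(cue)
    return segments
```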

Real-time speech to singing conversion

A method of converting a frame of a voice sample to a singing frame includes obtaining a pitch value of the frame; obtaining formant information of the frame using the pitch value; obtaining aperiodicity information of the frame using the pitch value; obtaining a tonic pitch and chord pitches; using the formant information, the aperiodicity information, the tonic pitch, and the chord pitches to obtain the singing frame; and outputting or saving the singing frame.
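One way to picture the pitch-related steps is the toy DSP sketch below, which assumes autocorrelation-based pitch estimation and a naive resampling pitch move. Snapping the frame's pitch to the nearest chord pitch is an illustrative simplification, not the patented method (which also uses formant and aperiodicity information), and all function names are hypothetical:

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=80.0, fmax=400.0):
    """Estimate the frame's pitch (Hz) from its autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # search lags inside [fmin, fmax]
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def nearest_pitch(f0, chord_pitches):
    """Pick the tonic/chord pitch closest to the detected pitch."""
    return min(chord_pitches, key=lambda p: abs(p - f0))

def shift_pitch(frame, ratio):
    """Naively resample the frame so its pitch is scaled by `ratio`."""
    idx = np.arange(len(frame)) * ratio
    return np.interp(idx, np.arange(len(frame)), frame)
```

For example, a 220 Hz frame with chord pitches {196, 247, 294} Hz would be snapped down to 196 Hz.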

Automatically managing the musical tastes and preferences of a population of users requesting digital pieces of music automatically composed and generated by an automated music composition and generation system
11430419 · 2022-08-30

An automated music composition and generation system having an automated music composition and generation engine for processing musical experience descriptors and space parameters selected by the system user. The engine includes: a user taste generation subsystem for automatically determining the musical tastes and preferences of each system user based on user feedback and autonomous piece analysis, and maintaining a system user profile reflecting musical tastes and preferences of each system user; and a population taste aggregation subsystem for aggregating the musical tastes and preferences of the population of system users, and modifying the musical experience descriptors and/or time and/or space parameters provided to the automated music composition and generation engine, so that the digital pieces of composed music better reflect the musical tastes and preferences of the population of system users and meet future system user requests for automated music compositions.
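The aggregation and modification steps might be sketched as follows, under the simplifying assumption that each user's tastes are per-descriptor scores in [0, 1]; the function names and the blending weight are illustrative choices, not the patented subsystems:

```python
from collections import defaultdict

def aggregate_tastes(profiles):
    """Average per-user descriptor preference scores across the population."""
    totals, counts = defaultdict(float), defaultdict(int)
    for profile in profiles:
        for descriptor, score in profile.items():
            totals[descriptor] += score
            counts[descriptor] += 1
    return {d: totals[d] / counts[d] for d in totals}

def adjust_descriptors(requested, population, blend=0.3):
    """Blend a user's requested descriptor weights toward population tastes."""
    return {d: (1 - blend) * w + blend * population.get(d, w)
            for d, w in requested.items()}
```

A descriptor absent from the population profile is left at the user's requested weight, so new descriptors pass through unchanged.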

METHODS AND APPARATUS FOR OBTAINING BIOMETRIC DATA

A method of modelling speech of a user of a headset comprising a microphone, the method comprising: receiving a first sample, from a bone-conduction sensor, representing bone-conducted speech of the user; obtaining a measure of fundamental frequency of the bone-conducted speech in each of a plurality of speech frames of the first sample; obtaining a first distribution of the fundamental frequencies of the bone-conducted speech over the plurality of speech frames; receiving, from the microphone, a second sample; determining a first acoustic condition at the headset based on the second sample; and performing a biometric process based on the first distribution of fundamental frequencies and the first acoustic condition.
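A minimal sketch of the distribution-plus-condition idea, assuming the per-frame fundamental frequencies are already extracted, and using a toy histogram distance with a noise-dependent decision threshold. The bin widths, metric, and thresholds are illustrative assumptions, not from the patent:

```python
import numpy as np

def f0_distribution(f0_per_frame, bins=None):
    """Normalized histogram of per-frame fundamental frequencies."""
    if bins is None:
        bins = np.arange(50, 400, 25)  # 25 Hz bins over a speech-like range
    hist, _ = np.histogram(f0_per_frame, bins=bins)
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def biometric_score(dist, enrolled_dist):
    """1 minus half the L1 distance: 1.0 means identical distributions."""
    return 1.0 - 0.5 * float(np.abs(dist - enrolled_dist).sum())

def verify(f0_frames, enrolled_dist, noise_level, quiet_thr=0.8, noisy_thr=0.6):
    """Accept the speaker, relaxing the threshold under a noisier condition."""
    thr = quiet_thr if noise_level < 0.5 else noisy_thr
    return biometric_score(f0_distribution(f0_frames), enrolled_dist) >= thr
```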

EVALUATION APPARATUS, TRAINING APPARATUS, METHODS AND PROGRAMS FOR THE SAME

An evaluation device applies a lowpass filter, with a cutoff frequency equal to either a first predetermined value or a second predetermined value greater than the first, with or without change of feedback formant frequencies, which are formant frequencies of a picked-up speech signal; converts the picked-up speech signal; and feeds the converted speech signal back to a subject. The device includes an evaluation unit that calculates a compensatory response vector using the pickup formant frequencies of a speech signal acquired while feeding back a speech signal converted with change of the feedback formant frequencies, and the pickup formant frequencies of a speech signal acquired while feeding back a speech signal converted without change of the feedback formant frequencies, and determines an evaluation based on the compensatory response vector for each cutoff frequency.
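The compensatory response vector can be pictured as a per-formant difference between the perturbed- and unperturbed-feedback conditions, scored per cutoff frequency. This is a minimal illustrative sketch under that assumption, not the patented evaluation procedure:

```python
import math

def compensatory_response(perturbed_formants, baseline_formants):
    """Per-formant change (Hz) induced by perturbed auditory feedback."""
    return [p - b for p, b in zip(perturbed_formants, baseline_formants)]

def evaluate(responses_by_cutoff):
    """Score each cutoff frequency by the magnitude of its response vector."""
    return {fc: math.hypot(*vec) for fc, vec in responses_by_cutoff.items()}
```

For example, pickup formants of (520, 1550) Hz under perturbed feedback against a (500, 1500) Hz baseline give a response vector of (20, 50) Hz.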

METHOD FOR VOICE IDENTIFICATION AND DEVICE USING SAME
20220270613 · 2022-08-25

An electronic device may include: a memory; a sound sensor; and a processor, wherein the processor is configured to: receive, from the sound sensor, sound data including a first piece of data corresponding to a first frequency band and a second piece of data corresponding to a second frequency band different from the first frequency band; receive voice data related to a voice of a registered user from the memory; perform voice identification by comparing the first piece of data and the second piece of data with the voice data related to the voice of the registered user; and determine an output based on a result of the voice identification.
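A toy sketch of the two-band comparison, assuming each frequency band has been reduced to a feature vector and the per-band similarities are fused into one weighted score. The similarity metric, weights, and threshold are illustrative assumptions, not from the patent:

```python
def band_similarity(sample_band, enrolled_band):
    """Similarity in (0, 1] from mean absolute feature difference (toy metric)."""
    diff = sum(abs(a - b) for a, b in zip(sample_band, enrolled_band))
    return 1.0 / (1.0 + diff / len(sample_band))

def identify(low_band, high_band, enrolled, threshold=0.5, low_weight=0.5):
    """Fuse per-band similarities against the registered user's voice data."""
    score = (low_weight * band_similarity(low_band, enrolled["low"])
             + (1 - low_weight) * band_similarity(high_band, enrolled["high"]))
    return score >= threshold, score
```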