G10L25/27

Audio Processing Method, Method for Training Estimation Model, and Audio Processing System
20220406325 · 2022-12-22

An audio processing method by which input data are obtained that includes first sound data representing first components of a first frequency band, included in a first sound corresponding to a first sound source, second sound data representing second components of the first frequency band, included in a second sound corresponding to a second sound source, and mix sound data representing mix components of an input frequency band including a second frequency band, the mix components being included in a mix sound of the first sound and the second sound. The input data are then input to a trained estimation model, to generate at least one of first output data representing first estimated components within an output frequency band including the second frequency band, included in the first sound, or second output data representing second estimated components within the output frequency band, included in the second sound.
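
The data flow the abstract describes can be sketched as follows; the spectrogram layout, the band limits, and the `toy_estimation_model` stand-in are all assumptions (the actual estimation model is a trained network, not specified here):

```python
import numpy as np

def band_rows(spec, band):
    """Rows (frequency bins) of a magnitude spectrogram within `band`."""
    lo, hi = band
    return spec[lo:hi, :]

def build_input(first_spec, second_spec, mix_spec, first_band, input_band):
    """Stack the three parts the abstract names: narrow-band components of
    each isolated source plus wider-band components of their mix."""
    return np.concatenate([
        band_rows(first_spec, first_band),
        band_rows(second_spec, first_band),
        band_rows(mix_spec, input_band),
    ], axis=0)

def toy_estimation_model(inp, n_mix_rows):
    """Stand-in for the trained model: returns estimated components within the
    output band for the first and second sounds (here just scaled copies of
    the mix band, purely for illustration)."""
    mix_part = inp[-n_mix_rows:, :]
    return 0.5 * mix_part, 0.5 * mix_part
```

The point of the arrangement is that the model sees each source only in a narrow band but sees the mix over a wider band, and estimates each source's wide-band components from that combination.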

Linear prediction analysis device, method, program, and storage medium

An autocorrelation calculation unit 21 calculates an autocorrelation R_O(i) from an input signal. A prediction coefficient calculation unit 23 performs linear prediction analysis by using a modified autocorrelation R′_O(i) obtained by multiplying a coefficient w_O(i) by the autocorrelation R_O(i). It is assumed here, for at least some orders i, that the coefficient w_O(i) corresponding to the order i increases monotonically with a value that is negatively correlated with the fundamental frequency of the input signal of the current frame or a past frame.
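
The pipeline can be sketched as below. The formula in `pitch_adaptive_weights` is a hypothetical choice that merely satisfies the stated property (w_O(i) grows with 1/f0, a value negatively correlated with the fundamental frequency); the Levinson-Durbin recursion is the standard way to obtain prediction coefficients from an autocorrelation:

```python
import numpy as np

def autocorrelation(x, order):
    """R_O(i) for lags 0..order of the input frame x."""
    return np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(order + 1)])

def pitch_adaptive_weights(order, f0, alpha=0.01):
    """Hypothetical w_O(i): increases with the lag i and with the pitch
    period 1/f0 (a value negatively correlated with f0)."""
    period = 1.0 / f0
    return np.array([1.0 + alpha * i * period for i in range(order + 1)])

def levinson_durbin(r, order):
    """Standard Levinson-Durbin recursion: prediction coefficients and
    residual energy from an autocorrelation sequence r."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                     # reflection coefficient
        a_new = a.copy()
        for j in range(1, i):
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        err *= (1.0 - k * k)
    return a, err

def lpc_with_modified_autocorrelation(x, order, f0):
    r = autocorrelation(x, order)
    r_mod = pitch_adaptive_weights(order, f0) * r   # R'_O(i) = w_O(i) * R_O(i)
    return levinson_durbin(r_mod, order)
```

Multiplying the autocorrelation by a lag-dependent weight before the recursion is the classic lag-windowing idea; what the abstract adds is making that weight depend on the pitch of the current or a past frame.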

Provision of targeted advertisements based on user intent, emotion and context

An electronic device and method are disclosed herein. The electronic device includes a microphone, a camera, an output device, a memory, and a processor. The processor implements the method, which includes: receiving a voice input and/or capturing an image; analyzing the voice input or the image to determine at least one of a user's intent, emotion, and situation based on predefined keywords and expressions; identifying a category based on the input; selecting first information based on the category; selecting and outputting a first query prompting confirmation of output of the first information; detecting a first input responsive to the first query; when a condition to output the first information is satisfied, outputting a second query; detecting a second input responsive to the second query; and selectively outputting the first information based on the second input.
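
A minimal sketch of the two-query confirmation flow, assuming a hypothetical keyword-to-category map and injected user responses in place of real microphone/camera analysis:

```python
KEYWORDS = {"hungry": "food", "tired": "travel"}  # hypothetical keyword map

def provide_targeted_info(utterance, catalog, first_answer, second_answer):
    """Sketch of the described flow: infer a category from predefined
    keywords, ask a first confirming query, then a second query, and only
    output the selected information after both responses are affirmative."""
    category = next((c for word, c in KEYWORDS.items() if word in utterance),
                    None)
    if category is None:
        return None                   # no intent/emotion/situation match
    info = catalog.get(category)      # first information for the category
    if not first_answer:              # first input responsive to first query
        return None
    if not second_answer:             # second input responsive to second query
        return None
    return info
```

The double confirmation is the notable design choice: the information is never pushed directly from the inferred intent, but gated behind two explicit user responses.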

Assisted hearing aid with synthetic substitution
11528568 · 2022-12-13

A device and method for improving hearing devices by using computer recognition of words and substituting either computer-generated words or pre-recorded words in streaming conversation received from a distant speaker. The system may operate in multiple modes: a first mode amplifies and conditions the voice sounds; in a second mode, the microphone picks up the voice sounds from a speaker, and a processor is configured to convert the voice sounds to discrete words corresponding to words spoken by said speaker, generate a synthesized voice speaking said words, and output said synthesized voice to said sound-reproducing element, where it is hearable by the user. Other modes include translation of foreign languages into a user's ear and using a heads-up display to project the text version of words which the computer has deciphered or translated. The system may be triggered by eye movement, spoken command, hand movement, or similar.
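
The first two modes can be sketched as below, with `recognize` and `synthesize` as stand-ins for the speech-recognition and text-to-speech engines (the names, signatures, and gain value are assumptions):

```python
def process_audio(samples, mode, recognize=None, synthesize=None, gain=2.0):
    """Mode 1: amplify and condition the voice sounds.
    Mode 2: convert the sounds to discrete words, then output a synthesized
    voice speaking those words to the sound-reproducing element."""
    if mode == 1:
        return [gain * s for s in samples]
    words = recognize(samples)            # discrete words from the speaker
    return synthesize(" ".join(words))    # re-spoken in a synthesized voice
```

The substitution idea is that mode 2 replaces the (possibly degraded) acoustic signal entirely, delivering a clean synthetic rendering of the recognized words rather than an amplified copy of the original.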

ACOUSTIC DATA AUGMENTATION WITH MIXED NORMALIZATION FACTORS
20220375484 · 2022-11-24

A method, computer system, and computer program product for audio data augmentation are provided. Sets of audio data from different sources may be obtained. A respective normalization factor may be calculated for at least two of the different sources. The normalization factors from the at least two sources may be mixed to determine a mixed normalization factor. A first set of the sets may be normalized by using the mixed normalization factor to obtain training data for training an acoustic model.
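
A minimal sketch, assuming RMS level as the per-source normalization factor and linear interpolation as the mixing rule (the abstract fixes neither):

```python
import numpy as np

def normalization_factor(audio):
    """Per-source factor; RMS level is one plausible choice of statistic."""
    return float(np.sqrt(np.mean(np.square(audio))))

def mixed_normalization_factor(factor_a, factor_b, weight=0.5):
    """Mix two sources' factors, here by linear interpolation."""
    return weight * factor_a + (1.0 - weight) * factor_b

def normalize_for_training(first_set, other_set, weight=0.5):
    """Normalize the first set by the mixed factor to obtain training data."""
    mixed = mixed_normalization_factor(normalization_factor(first_set),
                                       normalization_factor(other_set),
                                       weight)
    return first_set / mixed
```

Normalizing one source by a factor blended from another source's statistics yields level-perturbed copies of the data, which is the augmentation effect the method aims for.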
