Patent classifications
G10L25/15
AUDIO ENCODER FOR ENCODING AN AUDIO SIGNAL, METHOD FOR ENCODING AN AUDIO SIGNAL AND COMPUTER PROGRAM UNDER CONSIDERATION OF A DETECTED PEAK SPECTRAL REGION IN AN UPPER FREQUENCY BAND
An audio encoder for encoding an audio signal having a lower frequency band and an upper frequency band includes: a detector for detecting a peak spectral region in the upper frequency band of the audio signal; a shaper for shaping the lower frequency band using shaping information for the lower band and for shaping the upper frequency band using at least a portion of the shaping information for the lower band, wherein the shaper is configured to additionally attenuate spectral values in the detected peak spectral region in the upper frequency band; and a quantizer and coder stage for quantizing a shaped lower frequency band and a shaped upper frequency band and for entropy coding quantized spectral values from the shaped lower frequency band and the shaped upper frequency band.
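The abstract above describes detecting a peak spectral region in the upper band and applying extra attenuation to it before quantization. A minimal sketch of that idea follows; the detection threshold, attenuation depth, and the mean-relative peak criterion are illustrative assumptions, not the patent's actual rules.

```python
import numpy as np

def shape_with_peak_attenuation(spectrum, split_bin, threshold_db=12.0, extra_atten_db=6.0):
    # Detector: flag upper-band bins that stand out above the upper-band mean
    # (threshold is an assumed value, for illustration only).
    upper_mag = np.abs(spectrum[split_bin:])
    mag_db = 20.0 * np.log10(upper_mag + 1e-12)
    peak_mask = mag_db > mag_db.mean() + threshold_db
    # Shaper: additionally attenuate the detected peak spectral region
    # in the upper band before the quantizer/coder stage.
    shaped = spectrum.astype(float).copy()
    shaped[split_bin:][peak_mask] *= 10.0 ** (-extra_atten_db / 20.0)
    return shaped
```

A spectrum with a strong isolated bin above `split_bin` would have that bin reduced by `extra_atten_db` while the remaining bins pass through unchanged.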
Method of and system for controlling the qualities of musical energy embodied in and expressed by digital music to be automatically composed and generated by an automated music composition and generation engine
An automated music composition and generation system and process for producing one or more pieces of digital music, by providing a set of musical energy (ME) quality control parameters to an automated music composition and generation engine, applying certain of the selected musical energy quality control parameters as markers to specific spots along the timeline of a selected media object or event marker by the system user during a scoring process, and providing the selected set of musical energy quality control parameters to drive the automated music composition and generation engine to automatically compose and generate one or more pieces of digital music with control over the specified qualities of musical energy embodied in and expressed by the piece of digital music to be composed and generated by the automated music composition and generation engine.
Audio processing device and audio processing method
There is provided an audio processing device including a memory, and a processor coupled to the memory, the processor configured to detect a first acoustic feature amount and a second acoustic feature amount of an input audio, calculate a time change amount of the first acoustic feature amount, calculate a coefficient for the second acoustic feature amount based on the time change amount, and calculate a statistical amount for the second acoustic feature amount based on the coefficient.
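The processing chain in this abstract (time change of one feature, coefficient from that change, weighted statistic of another feature) can be sketched as below; the `1/(1+delta)` weighting rule and the weighted mean as the "statistical amount" are assumptions for illustration, not the patent's formulas.

```python
def weighted_statistic(feat1, feat2):
    # Time change amount of the first feature: frame-to-frame absolute delta
    # (first frame padded with zero change).
    deltas = [0.0] + [abs(b - a) for a, b in zip(feat1, feat1[1:])]
    # Coefficient for the second feature, derived from the time change
    # (the 1/(1+delta) rule is an assumed example, not from the patent).
    coeffs = [1.0 / (1.0 + d) for d in deltas]
    # Statistical amount: coefficient-weighted mean of the second feature.
    return sum(c * x for c, x in zip(coeffs, feat2)) / sum(coeffs)
```

With a constant first feature, every coefficient is 1 and the statistic reduces to the plain mean of the second feature.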
Assessment of a pulmonary condition by speech analysis
Described embodiments include apparatus that includes a network interface (28) and a processor (30). The processor is configured to receive, via the network interface, speech of a subject (22) who suffers from a pulmonary condition related to accumulation of excess fluid, to identify, by analyzing the speech, one or more speech-related parameters of the speech, to assess, in response to the speech-related parameters, a status of the pulmonary condition, and to generate, in response thereto, an output indicative of the status of the pulmonary condition. Other embodiments are also described.
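The abstract leaves the speech-related parameters unspecified, so the following is only a hedged sketch of the receive/identify/assess/output pipeline; the pause-fraction parameter, the energy floor, and the decision threshold are all invented for illustration.

```python
def assess_status(frame_energies, energy_floor=0.01, pause_threshold=0.4):
    # Illustrative speech-related parameter: fraction of low-energy
    # (pause-like) frames in the received speech.
    pauses = sum(1 for e in frame_energies if e < energy_floor)
    pause_fraction = pauses / len(frame_energies)
    # Assessment: map the parameter to a coarse status output
    # (labels and threshold are placeholders, not clinical criteria).
    return "review advised" if pause_fraction > pause_threshold else "stable"
```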
COGNITIVE FUNCTION EVALUATION DEVICE, COGNITIVE FUNCTION EVALUATION SYSTEM, COGNITIVE FUNCTION EVALUATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
A cognitive function evaluation device includes: an instruction unit that instructs quick pronunciation of a pseudoword in which a predetermined syllable is repeated; an obtainment unit that obtains voice data indicating a voice of an evaluatee who has received the instruction; a calculation unit that calculates a feature from the voice data obtained by the obtainment unit; an evaluation unit that evaluates a cognitive function of the evaluatee from the feature calculated by the calculation unit; and an output unit that outputs a result of the evaluation by the evaluation unit.
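The abstract does not name the calculated feature; one plausible example for a repeated-syllable task is the repetition rate derived from detected syllable onsets. The sketch below assumes onset times are already extracted from the voice data.

```python
def syllable_repetition_rate(onset_times_s):
    # Feature sketch: how fast the evaluatee repeats the instructed
    # syllable, from detected syllable onset times in seconds.
    # (The feature choice is an assumption, not specified by the abstract.)
    if len(onset_times_s) < 2:
        return 0.0
    span = onset_times_s[-1] - onset_times_s[0]
    return (len(onset_times_s) - 1) / span
```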
METHOD AND ELECTRONIC DEVICE FOR FORMANT ATTENUATION/AMPLIFICATION
A method comprising determining feature values of an input audio window and determining a formant attenuation/amplification coefficient for the input audio window based on the processing of the feature values by a neural network.
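The claimed mapping from window feature values to a single attenuation/amplification coefficient via a neural network can be sketched as a one-hidden-layer forward pass; the layer sizes, activation, and weights below are placeholders, since the abstract specifies no architecture.

```python
import numpy as np

def formant_coefficient(features, W1, b1, W2, b2):
    # Single hidden layer mapping window-level feature values to one
    # attenuation/amplification coefficient; weights are untrained
    # placeholders, not parameters from the patent.
    hidden = np.tanh(W1 @ np.asarray(features, dtype=float) + b1)
    return float(W2 @ hidden + b2)
```

With all-zero weights the coefficient is simply the output bias, which is a quick sanity check that the forward pass is wired correctly.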
Speaker recognition with assessment of audio frame contribution
This application describes methods and apparatus for speaker recognition. An apparatus according to an embodiment has an analyzer (202) for analyzing each frame of a sequence of frames of audio data (A.sub.IN) which correspond to speech sounds uttered by a user to determine at least one characteristic of the speech sound of that frame. An assessment module (203) determines, for each frame of audio data, a contribution indicator of the extent to which the frame of audio data should be used for speaker recognition processing based on the determined characteristic of the speech sound. In this way, frames that correspond to speech sounds that are of most use for speaker discrimination may be emphasized, and/or frames that correspond to speech sounds of least use for speaker discrimination may be de-emphasized.
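One simple way such contribution indicators could be consumed downstream is as weights on per-frame match scores, so discriminative frames dominate the final decision. This is a hedged sketch of that combination step only; how the indicators themselves are derived is left to the analyzer.

```python
def weighted_speaker_score(frame_scores, contributions):
    # Contribution indicators weight each frame's speaker-match score,
    # emphasizing frames whose speech sounds discriminate speakers well
    # (the weighted mean is an assumed combination rule).
    total = sum(contributions)
    if total == 0:
        return 0.0
    return sum(s * c for s, c in zip(frame_scores, contributions)) / total
```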