G10L25/15

METHOD FOR RATING THE SPEECH QUALITY OF A SPEECH SIGNAL BY WAY OF A HEARING DEVICE
20220068294 · 2022-03-03

A method for rating the speech quality of a speech signal by a hearing device. An acousto-electric input transducer records sound containing the speech signal and converts it into an input audio signal. At least one articulatory and/or prosodic property of the speech signal is quantitatively acquired through analysis of the input audio signal, and a quantitative measure of speech quality is derived based on the articulatory and/or prosodic property. Also described is a hearing device with an acousto-electric input transducer configured to record a sound and convert it into an input audio signal, and a signal processing apparatus designed to quantitatively acquire, through analysis of the input audio signal, at least one articulatory and/or prosodic property of a speech-signal component contained in the input audio signal, and to derive a quantitative measure of the speech quality based on that property.
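The claimed pipeline (record sound, quantify an articulatory/prosodic property, derive a quality measure) can be caricatured in a few lines. The sketch below quantifies one prosodic property, frame-level intensity variability, and squashes it into a [0, 1] score; the frame length, the sigmoid mapping, and its constants are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def prosodic_quality_score(signal, sr=16000, frame_ms=25):
    """Toy measure: quantify one prosodic property (intensity
    variability across frames) and map it to a [0, 1] score."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    # Per-frame RMS level in dB: the quantitatively acquired property.
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    level_db = 20 * np.log10(rms)
    # Larger level modulation -> livelier prosody -> higher score here.
    variability = np.std(level_db)
    return float(1.0 / (1.0 + np.exp(-(variability - 6.0) / 2.0)))

# A level-modulated tone scores higher than a flat one.
t = np.linspace(0, 1, 16000, endpoint=False)
flat = 0.5 * np.sin(2 * np.pi * 200 * t)
modulated = flat * (0.2 + 0.8 * np.sin(2 * np.pi * 3 * t) ** 2)
assert prosodic_quality_score(modulated) > prosodic_quality_score(flat)
```

A real hearing device would of course derive the property from detected speech rather than a test tone, and likely combine several articulatory and prosodic cues.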

Autonomous music composition and performance system employing real-time analysis of a musical performance to automatically compose and perform music to accompany the musical performance
11037539 · 2021-06-15

An autonomous music composition and performance system employing an automated music composition and generation engine configured to receive musical signals from a set of real or synthetic musical instruments being played by a group of human musicians. The system buffers and analyzes musical signals from the set of real or synthetic musical instruments, composes and generates music in real time that augments the music being played by the band of musicians, and/or composes and generates music for subsequent playback, review and consideration by the human musicians.
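The buffer-analyze-compose loop this abstract describes can be sketched minimally: buffer incoming note events, analyze them, and emit accompanying material. The pitch-class analysis and the fixed triad below are invented stand-ins for the engine's actual composition logic.

```python
from collections import Counter

class AccompanimentEngine:
    """Toy buffer -> analyze -> compose loop: buffer incoming MIDI
    pitches, infer the most common pitch class as a tonal center,
    and compose a triad over it to accompany the performance."""
    def __init__(self):
        self.buffer = []

    def receive(self, midi_pitch):
        self.buffer.append(midi_pitch)

    def compose_accompaniment(self):
        # Analysis: treat the modal pitch class as the tonal center.
        root_pc = Counter(p % 12 for p in self.buffer).most_common(1)[0][0]
        # Composition: a major triad in a low register on that root.
        root = 48 + root_pc
        return [root, root + 4, root + 7]

engine = AccompanimentEngine()
for pitch in [60, 64, 67, 60, 72]:   # mostly C's (pitch class 0)
    engine.receive(pitch)
print(engine.compose_accompaniment())  # C major triad: [48, 52, 55]
```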

Automated music composition and generation systems, engines and methods employing parameter mapping configurations to enable automated music composition and generation
11037540 · 2021-06-15

An automated music composition and generation system includes a graphical user interface (GUI) based system user interface for enabling system users to review and select one or more musical experience descriptors as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the GUI-based system user interface, for receiving, storing and processing the musical experience descriptors and time and/or space parameters, and composing and generating digital pieces of music, each containing a set of musical notes arranged and performed in the digital piece of composed music. A system network and methods are provided for designing and developing parameter mapping configurations (PMCs) used in the automated music composition and generation engine so as to enable the automated music composition and generation engine to automatically compose and generate music in response to musical experience descriptors and time and/or space parameters provided as input to the system.
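A parameter mapping configuration of this kind can be pictured as a lookup from user-facing experience descriptors to concrete engine parameters, combined with the time parameter. The descriptors, parameter names, and values below are invented for illustration only.

```python
# Hypothetical parameter mapping configuration: each musical
# experience descriptor resolves to concrete composition parameters.
PARAMETER_MAP = {
    "happy":    {"mode": "major", "tempo_bpm": 128},
    "somber":   {"mode": "minor", "tempo_bpm": 72},
    "suspense": {"mode": "minor", "tempo_bpm": 100},
}

def configure_engine(descriptor, length_seconds):
    """Resolve a descriptor into engine parameters, including how
    many bars fit the requested time parameter."""
    params = dict(PARAMETER_MAP[descriptor])
    beats = params["tempo_bpm"] * length_seconds / 60
    params["bars"] = max(1, round(beats / 4))  # assume 4/4 time
    return params

print(configure_engine("somber", 30))
# {'mode': 'minor', 'tempo_bpm': 72, 'bars': 9}
```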

Method of composing a piece of digital music using musical experience descriptors to indicate what, when and how musical events should appear in the piece of digital music automatically composed and generated by an automated music composition and generation system
11037541 · 2021-06-15

An automated music composition and generation system having a system user interface operably connected to an automated music composition and generation engine, and supporting a method of composing a piece of digital music using musical experience descriptors to indicate what, when and how particular musical events should occur in the piece of digital music to be automatically composed and generated. The method uses the system user interface to select one or more musical experience descriptors and to apply them along a timeline representation of the piece of digital music to be automatically composed and generated by the automated music composition and generation engine.
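Pinning descriptors along a timeline, so the engine knows what should happen and when, can be sketched as a sorted list of (time, descriptor) pairs. The class and its API are hypothetical, not from the patent.

```python
import bisect

class DescriptorTimeline:
    """Toy timeline: musical experience descriptors are pinned at
    times (seconds); the engine queries which one is active."""
    def __init__(self):
        self.times, self.labels = [], []

    def place(self, time_s, descriptor):
        i = bisect.bisect(self.times, time_s)
        self.times.insert(i, time_s)
        self.labels.insert(i, descriptor)

    def active_at(self, time_s):
        # Most recent descriptor at or before time_s, if any.
        i = bisect.bisect_right(self.times, time_s) - 1
        return self.labels[i] if i >= 0 else None

tl = DescriptorTimeline()
tl.place(0, "calm")
tl.place(45, "build")
tl.place(90, "climax")
print(tl.active_at(60))  # build
```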

COGNITIVE FUNCTION EVALUATION DEVICE, COGNITIVE FUNCTION EVALUATION SYSTEM, COGNITIVE FUNCTION EVALUATION METHOD, AND STORAGE MEDIUM

A cognitive function evaluation device includes: an obtainment unit that obtains utterance data indicating a voice of an evaluatee uttering a sentence as instructed; a calculation unit that calculates, from the utterance data obtained by the obtainment unit, a feature based on the utterance data; an evaluation unit that compares the feature calculated by the calculation unit to reference data indicating a relationship between voice data indicating a voice of a person and a cognitive function of the person to evaluate the cognitive function of the evaluatee; and an output unit that outputs the sentence to be uttered by the evaluatee and outputs a result of evaluation by the evaluation unit.
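The evaluation step, comparing a calculated voice feature against reference data relating voice to cognitive function, can be sketched as a nearest-neighbor lookup. The feature (a pause ratio) and all reference values below are invented for illustration, not clinical data.

```python
# Hypothetical reference data pairing a voice feature (pause ratio)
# with a cognitive score; illustrative values only.
REFERENCE_DATA = [
    (0.10, 29), (0.20, 26), (0.35, 21), (0.50, 15),
]

def evaluate(feature):
    """Return the cognitive score of the closest reference entry,
    standing in for the patent's comparison against reference data."""
    nearest = min(REFERENCE_DATA, key=lambda ref: abs(ref[0] - feature))
    return nearest[1]

print(evaluate(0.18))  # closest reference pause ratio is 0.20 -> 26
```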

FREQUENCY EXTRACTION METHOD USING DJ TRANSFORM
20210183403 · 2021-06-17

A method for extracting a frequency of an input sound, each step of which is performed by a computer, according to an embodiment of the present disclosure comprises the steps of: modeling a plurality of springs which have natural frequencies different from each other and oscillate according to the input sound; calculating transient-state pure-tone amplitudes of the plurality of modeled springs; calculating expected steady-state amplitudes of the plurality of modeled springs; calculating predicted pure-tone amplitudes based on the expected steady-state amplitudes; calculating filtered pure-tone amplitudes by multiplying the transient-state pure-tone amplitudes by the predicted pure-tone amplitudes; and extracting the natural frequency of the spring which corresponds to a local maximum value among the filtered pure-tone amplitudes.
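The steps read like a bank of simulated resonators. The sketch below is a loose approximation under that reading, not the patented DJ transform itself: each candidate natural frequency gets a lightly damped spring driven by the input, the early (transient) and late (near steady-state) response amplitudes are multiplied, and the frequency whose product peaks is returned; springs far from the input frequency respond weakly in both phases, so the product filters them out.

```python
import numpy as np

def spring_bank_frequency(x, sr, candidate_freqs, zeta=0.05):
    """Drive one damped unit-mass spring per candidate natural
    frequency with the input, take peak amplitudes from the first
    (transient) and second (near steady-state) halves of the signal,
    and return the candidate whose product of the two is largest."""
    dt, half = 1.0 / sr, len(x) // 2
    products = []
    for f0 in candidate_freqs:
        w0 = 2 * np.pi * f0
        pos, vel = 0.0, 0.0
        peaks = [0.0, 0.0]  # [transient peak, steady-state peak]
        for n, drive in enumerate(x):
            # Semi-implicit Euler step of x'' = drive - 2*zeta*w0*x' - w0^2*x
            vel += (drive - 2 * zeta * w0 * vel - w0 ** 2 * pos) * dt
            pos += vel * dt
            idx = 0 if n < half else 1
            peaks[idx] = max(peaks[idx], abs(pos))
        products.append(peaks[0] * peaks[1])
    return candidate_freqs[int(np.argmax(products))]

sr = 8000
t = np.arange(0, 0.5, 1 / sr)
tone = np.sin(2 * np.pi * 440 * t)
print(spring_bank_frequency(tone, sr, [220, 330, 440, 550]))  # -> 440
```

The patent's predicted pure-tone amplitudes are derived from expected steady-state amplitudes rather than measured late in the window as here; this sketch only illustrates the transient-times-steady-state filtering idea.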

CONVERSATION DEPENDENT VOLUME CONTROL
20210193162 · 2021-06-24

Techniques are described for detecting a conversation between at least two people, and for reducing noise during the conversation. In certain embodiments, at least one speech metric is generated based on spectral analysis of an audio signal and is used to determine that the audio signal represents speech from a first person. Responsive to determining that the speech is part of a conversation between the first person and a second person, an operating state of a device in a physical environment is adjusted such that a volume level of sound contributed by or associated with the device is reduced. The sound contributed by or associated with the device corresponds to noise, at least for the duration of the conversation. Therefore, reducing the volume level of sound contributed by or associated with the device reduces the overall noise level in the environment, resulting in a reduction in conversational effort.
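The control decision this abstract describes can be sketched in a few lines: if speech metrics indicate at least two people are talking, cut the device's volume contribution. The metric threshold and the reduction factor below are arbitrary illustrative choices.

```python
def adjust_device_volume(speech_metrics, volume, threshold=0.6):
    """Toy control step: speech_metrics maps person -> a speech
    likelihood derived from spectral analysis; if at least two people
    are speaking (a conversation), reduce the device's volume level."""
    talkers = [p for p, m in speech_metrics.items() if m >= threshold]
    if len(talkers) >= 2:      # conversation detected
        return volume * 0.3    # reduce the noise the device contributes
    return volume              # no conversation: leave volume alone

print(adjust_device_volume({"alice": 0.9, "bob": 0.7}, 80.0))  # reduced
print(adjust_device_volume({"alice": 0.9, "bob": 0.1}, 80.0))  # unchanged
```

A real implementation would also track turn-taking over time before declaring a conversation, and restore the volume once it ends.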