G10H2240/131

Methods and Apparatus For Determining A Mood Profile Associated With Media Data
20230129425 · 2023-04-27 ·

An example method involves comparing a primary element of a first piece of audio data to a primary element of a second piece of audio data; based on the comparing of the primary elements, determining that the first and second pieces of audio data have the same predominant mood category; in response to determining that the first and second pieces of audio data have the same predominant mood category, comparing a first mood score of the primary element of the first piece of audio data to a second mood score of the primary element of the second piece of audio data; determining that an output of the comparison of the two mood scores exceeds a threshold value; and in response to determining that the output of the comparison of the two mood scores exceeds the threshold value, providing an indicator to an application.
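The claimed flow (same predominant mood, then a score comparison against a threshold, then an indicator) can be sketched as follows. All names, the dict layout, and the threshold value are illustrative assumptions, not taken from the patent.

```python
def compare_moods(track_a, track_b, threshold=0.2):
    """Return an indicator dict when both tracks share a predominant mood
    category and their mood scores differ by more than the threshold."""
    # Step 1: compare the predominant mood categories of the primary elements.
    if track_a["mood"] != track_b["mood"]:
        return None  # different predominant moods: no indicator is provided
    # Step 2: same predominant mood, so compare the two mood scores.
    delta = abs(track_a["score"] - track_b["score"])
    # Step 3: provide the indicator only when the comparison exceeds the threshold.
    if delta > threshold:
        return {"mood": track_a["mood"], "score_delta": delta}
    return None
```

A caller (the "application" of the claim) would treat a non-None return value as the indicator.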

Patient tailored system and process for treating ASD and ADHD conditions using music therapy and mindfulness with Indian Classical Music
11596763 · 2023-03-07 ·

A method, system, and process for developing a patient-tailored music therapy based on Indian Classical Music compositions to treat ASD (Autistic Spectrum Disorders) and ADHD (Attention Deficit Hyperactivity Disorder) are described. The present invention provides a method to develop a tailored music therapy for treating patients suffering from ASD and ADHD based on the patient's response, together with a system to measure the patient's response to music therapy and mindfulness inputs. The invention comprises a process to determine a playlist of suitable Indian Classical Music compositions for use in treating the patient (FIG. 1), followed by further tuning of the selections' allowable note levels, ramp-up and ramp-down times to and from the allowable note levels, melody hold times, and rhythm pattern selection to develop an optimum waveform (FIG. 2), all based on measuring the patient's response using a multiple-input measurement system covering physical movement, audio, and brain wave response (FIG. 3) or through visual observations. The invention also provides a process to determine daily therapy and mindfulness time and a process for monthly music therapy and mindfulness tailoring, as well as a system (FIG. 3) to measure the patient's response to the music therapy and mindfulness, which can be used in conjunction with or in place of visual observations. The patient starts with a therapy and mindfulness tailoring session in which a playlist of Indian Classical Music Raga compositions is first developed, selected based on the patient's response as measured by the system of FIG. 3 or through visual observations. Patient-specific optimum note level, beat rhythm pattern and rhythm pattern frequency, and ramp-up and ramp-down times to and from the optimum note levels are then determined based on the patient's response to create a waveform (FIG. 2).
The playlist selections are then modified, manually or by a computer program, using the waveform parameters so that, when played to the patient, they elicit a Calm Range Response pattern: a state of stimulated mindfulness without falling asleep, characterized by a range of motion, audio, or brain wave response unique to the patient. The specific pieces of the waveform are derived by varying the waveform parameters and measuring the patient's response (FIGS. 4A-4D) using the response measuring system (FIG. 3) or through visual observations. The invention also describes a process to develop the daily listening period duration (FIG. 5). The invention describes a process used

AUTOMATIC MUSIC DOCUMENT DISPLAYING ON PERFORMING MUSIC
20230067175 · 2023-03-02 ·

A user interface presents structural musical information in a score so that both the start and the end points of each jump in the score are visible simultaneously. Each jump is presented in a manner that allows the user to select, during performance, which of several alternatives to follow when approaching a decision point such as a repeat in the song.

SYSTEMS AND METHODS FOR DYNAMICALLY SYNTHESIZING AUDIO FILES ON A MOBILE DEVICE
20220326906 · 2022-10-13 ·

Embodiments of the present invention provide systems and methods for dynamically synthesizing audio files on a mobile device. Embodiments of the present invention are directed to an exemplary audio file recorder with unique music-related features, such as synchronization of multiple pieces of content by beats per minute, with functionality for content creation, editing, layering of multiple tracks, and sharing. Further, embodiments of the present invention also relate to an exemplary computer software platform that allows users to record any audio into their mobile device and organize the audio files so that they can be easily retrieved and shared.
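Synchronizing multiple pieces of content "by beats per minute" amounts to computing a time-stretch factor per clip relative to a master tempo. A minimal sketch, with the function name and dict layout as assumptions of mine:

```python
def align_clips(clip_bpms, master_bpm):
    """Time-stretch factor per clip so every clip plays in sync at master_bpm.

    A factor greater than 1 speeds the clip up; less than 1 slows it down.
    """
    return {name: master_bpm / bpm for name, bpm in clip_bpms.items()}
```

For example, layering a 96 BPM bass loop over 120 BPM drums requires stretching the bass by a factor of 1.25 before mixing the tracks.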

METHOD, SYSTEM, AND MEDIUM FOR AFFECTIVE MUSIC RECOMMENDATION AND COMPOSITION
20230113072 · 2023-04-13 ·

A method, system, and medium for affective music recommendation and composition. A listener's current affective state and target affective state are identified, and an audio stream, such as a music playlist, is generated with the intent of effecting a controlled trajectory of the listener's affective state from the current state to the target state. The audio stream is generated by a machine learning system trained using data from the listener and/or other users indicating the effectiveness of specific audio segments, or audio segments having specific features, in effecting the desired affective trajectory. The audio stream is presented to the user as an auditory stimulus. The machine learning system may be updated based on the affective state changes induced in the listener after exposure to the auditory stimulus. Over time, the machine learning system gains a robust understanding of the relationship between music and human affect, and thus the machine learning system may also be used to compose, master, and/or adapt music configured to induce specific affective responses in listeners.
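One simple way to realize a "controlled trajectory" from a current to a target affective state is a greedy planner over a library of audio segments with measured affective shifts, standing in for the trained machine learning system. Everything here (the 2-D valence/arousal state, the `shift` field, the step count) is an illustrative assumption:

```python
import math

def plan_trajectory(current, target, library, steps=3):
    """Greedy playlist sketch: at each step pick the segment whose measured
    (valence, arousal) shift moves the modeled listener state closest to target."""
    state, playlist = current, []
    for _ in range(steps):
        best = min(
            library,
            key=lambda seg: math.hypot(
                state[0] + seg["shift"][0] - target[0],
                state[1] + seg["shift"][1] - target[1],
            ),
        )
        playlist.append(best["id"])
        # Update the modeled state by the chosen segment's affective shift.
        state = (state[0] + best["shift"][0], state[1] + best["shift"][1])
    return playlist, state
```

In the patent's terms, the measured shifts would be learned from listener feedback and refreshed after each exposure, rather than fixed as here.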

MULTI-LEVEL AUDIO SEGMENTATION USING DEEP EMBEDDINGS

Embodiments are disclosed for generating an audio segmentation of an audio sequence using deep embeddings. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including an audio sequence and extracting features for each frame of the audio sequence, where each frame is associated with a beat of the audio sequence. The method may further comprise clustering frames of the audio sequence into one or more clusters based on the extracted features and generating segments of the audio sequence based on the clustered frames, where each segment includes frames of the audio sequence from a same cluster. The method may further comprise constructing a multi-level audio segmentation of the audio sequence and performing a segment fusion process that merges shorter segments with neighboring segments based on cluster assignments.
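The two post-clustering steps (grouping same-cluster frames into segments, then fusing short segments into neighbors) can be sketched as below. The cluster labels are assumed to come from some upstream clustering of the per-beat deep embeddings; the merge-into-preceding-neighbor policy is one plausible reading, not the patent's exact rule.

```python
def segment_frames(labels):
    """Merge consecutive beat-frames sharing a cluster label into
    (start, end, cluster) segments, with end exclusive."""
    segments = []
    for i, lab in enumerate(labels):
        if segments and segments[-1][2] == lab:
            segments[-1] = (segments[-1][0], i + 1, lab)  # extend current run
        else:
            segments.append((i, i + 1, lab))  # start a new segment
    return segments

def fuse_short(segments, min_len=2):
    """Segment-fusion pass: absorb segments shorter than min_len into the
    preceding neighbor, keeping that neighbor's cluster assignment."""
    fused = []
    for start, end, lab in segments:
        if end - start < min_len and fused:
            ps, _, pl = fused[-1]
            fused[-1] = (ps, end, pl)
        else:
            fused.append((start, end, lab))
    return fused
```

Running the same pipeline at several clustering granularities would yield the multi-level segmentation the abstract describes.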

Music generator
11625217 · 2023-04-11 ·

Techniques are disclosed relating to generating music content. In one embodiment, a method includes determining one or more musical attributes based on external data and generating music content based on the one or more musical attributes. Generating the music content may include selecting from stored sound loops or tracks and/or generating new tracks based on the musical attributes. Selected or generated sound loops or tracks may be layered to generate the music content. Musical attributes may be determined in some embodiments based on user input (e.g., indicating a desired energy level), environment information, and/or user behavior information. Artists may upload tracks, in some embodiments, and be compensated based on usage of their tracks in generating music content. In some embodiments, a method includes generating sound and/or light control information based on the musical attributes.
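Selecting stored loops by musical attribute before layering them can be sketched as a simple tag match. The `role`/`energy` tags and the one-loop-per-role policy are my assumptions for illustration:

```python
def select_layers(loops, energy):
    """Pick one stored loop per instrument role whose tag matches the
    requested energy attribute; the chosen loops would then be layered."""
    chosen = {}
    for loop in loops:
        if loop["energy"] == energy and loop["role"] not in chosen:
            chosen[loop["role"]] = loop["id"]
    return chosen
```

In a fuller system the `energy` argument would be derived from user input, environment information, or user behavior, as the abstract describes, and usage of each artist's loop could be logged for compensation.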

Audio translator
11605369 · 2023-03-14 ·

An audio translation system includes a feature extractor and a style transfer machine learning model. The feature extractor generates, for each of a plurality of source voice files, one or more source voice parameters encoded as a collection of source feature vectors, and generates, for each of a plurality of target voice files, one or more target voice parameters encoded as a collection of target feature vectors. The style transfer machine learning model is trained on the collection of source feature vectors for the plurality of source voice files and the collection of target feature vectors for the plurality of target voice files to generate a style-transformed feature vector.

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM
20230144216 · 2023-05-11 ·

This information processing device comprises: a control unit which receives an input of information indicating one or more timings in a playback time of a piece of music, and generates a file including information pertaining to a timbre control performed from each of the one or more timings; and a transmission unit which transmits an audio signal of the piece of music and the information pertaining to the timbre control in the file to a device which outputs a sound synthesized from the playback sound of the piece of music and the playing sound of a musical instrument.
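The generated file pairs each input timing with a timbre-control entry. A minimal JSON sketch of such a file, with the field names and layout entirely my assumptions (the patent does not specify a format):

```python
import json

def build_timbre_file(timings, controls):
    """Serialize one timbre-control event per input timing into a JSON
    string, as a hypothetical layout for the file the control unit generates."""
    events = [{"time_s": t, "control": c} for t, c in zip(timings, controls)]
    return json.dumps({"events": events})
```

The transmission unit would then send this file alongside the audio signal of the piece of music to the output device.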

CAR KARAOKE

A motor vehicle includes a loudspeaker, a voice recognition module, and a microphone producing a microphone signal based upon words uttered by a human passenger within a passenger compartment of the motor vehicle. An electronic processor is communicatively coupled to the microphone, the loudspeaker, and the voice recognition module. The electronic processor receives the microphone signal and communicates with the voice recognition module to thereby ascertain the words uttered by the human passenger. The electronic processor retrieves Karaoke music corresponding to the ascertained words uttered by the human passenger, and plays the Karaoke music on the loudspeaker.