G10H2210/131

Method and system for simulating musical phrase
11842711 · 2023-12-12 ·

Disclosed is a method for simulating a musical phrase, wherein the musical phrase includes a sequence of notes, the method comprising generating timbral fingerprints associated with a sample library that comprises recordings of a plurality of notes in a plurality of intensities, using a linear predictive coding technique, wherein the timbral fingerprints relate to the plurality of intensities of the plurality of notes; determining an origin intensity for the musical phrase, wherein the origin intensity is one intensity selected from amongst the plurality of intensities; and simulating each note in the sequence of notes, by morphing a recording of each note in the origin intensity according to timbral fingerprints of the note in the plurality of intensities, for simulating the musical phrase.
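The morphing step relies on timbral fingerprints captured at discrete recorded intensities. A minimal sketch of estimating a fingerprint at an intermediate intensity by linear interpolation, with an assumed dict-of-vectors layout that is not the patent's representation:

```python
import bisect

def interpolate_fingerprint(fp_by_intensity, intensity):
    """Estimate a timbral fingerprint (here a plain vector of LPC-style
    coefficients) at an arbitrary intensity by linearly interpolating
    between the two nearest recorded intensities.

    `fp_by_intensity` maps a recorded intensity value to a coefficient
    list; this layout is an assumption for the sketch.
    """
    levels = sorted(fp_by_intensity)
    if intensity <= levels[0]:
        return list(fp_by_intensity[levels[0]])
    if intensity >= levels[-1]:
        return list(fp_by_intensity[levels[-1]])
    hi = bisect.bisect_left(levels, intensity)
    lo = hi - 1
    t = (intensity - levels[lo]) / (levels[hi] - levels[lo])
    return [a + t * (b - a)
            for a, b in zip(fp_by_intensity[levels[lo]],
                            fp_by_intensity[levels[hi]])]
```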

SYSTEMS, DEVICES, AND METHODS FOR COMPUTER-GENERATED MUSICAL NOTE SEQUENCES
20210241734 · 2021-08-05 ·

Computer-based systems, devices, and methods for generating musical note sequences are described. One or more musical composition(s) stored in digital media include one or more data object(s) that encode notes and/or note sequences. At least one note sequence is processed to form a time-ordered sequence of parallel notes, which is analyzed to determine a k-back probability transition matrix for the at least one note sequence. An attribute, such as a style, of the at least one note sequence is thus encoded and used to generate new note sequences that embody a similar attribute or style. In some implementations, the at least one note sequence may include a concatenated set of note sequences representative of a particular library of musical compositions.
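The core of the analysis is an order-k Markov model over notes. A minimal sketch of building a k-back probability transition matrix from a note sequence; names and the nested-dict representation are illustrative, not taken from the publication:

```python
from collections import defaultdict

def k_back_transition_probs(notes, k):
    """Build a k-back probability transition matrix: for each length-k
    context of preceding notes, the empirical distribution over the next
    note. `notes` may be any sequence of hashable note identifiers
    (e.g. MIDI pitch numbers)."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(notes) - k):
        context = tuple(notes[i:i + k])
        counts[context][notes[i + k]] += 1
    # Normalize each context's counts into probabilities.
    return {
        ctx: {note: n / sum(nxt.values()) for note, n in nxt.items()}
        for ctx, nxt in counts.items()
    }
```

Sampling from these distributions context by context is one way a new sequence embodying a similar style could then be generated.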

Head pose mixing of audio files

Examples of wearable devices are described that can present to a user, via a display device, an audible or visual representation of an audio file comprising a plurality of stem tracks that represent different audio content of the audio file. Systems and methods are described that determine the pose of the user; generate, based on the pose of the user, an audio mix of at least one of the plurality of stem tracks of the audio file; generate, based on the pose of the user and the audio mix, a visualization of the audio mix; communicate an audio signal representative of the audio mix to the speaker; and communicate a visual signal representative of the visualization of the audio mix to the display.
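One simple way to realize a pose-driven mix is to weight each stem by how closely the user's head yaw points at a direction assigned to that stem. The directions, the cosine weighting curve, and the (direction, samples) layout below are all assumptions for the sketch, not the disclosed system:

```python
import math

def pose_mix(stems, yaw):
    """Mix stem tracks with pose-dependent gains: each stem carries an
    assigned direction in radians, and its gain follows how closely the
    user's head yaw points at it (clamped cosine), normalized so the
    gains sum to one."""
    weights = [max(0.0, math.cos(yaw - direction)) for direction, _ in stems]
    total = sum(weights) or 1.0  # avoid division by zero if all gains are 0
    mix = [0.0] * len(stems[0][1])
    for w, (_, samples) in zip(weights, stems):
        for i, s in enumerate(samples):
            mix[i] += (w / total) * s
    return mix
```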

COMPUTER-IMPLEMENTED METHOD OF DIGITAL MUSIC COMPOSITION
20210272543 · 2021-09-02 ·

A computer-implemented method of digital music composition that creates a digital multi-genre musical composition track by downloading a host digital music track of a first genre and two or more separate donor multi-genre musical tracks, and then selectively modulating the instruments and rhythmic patterns of the donor musical tracks by manipulating the rhythmic patterns. The manipulation includes manipulating at least one of the intensities, frequency, sound, beat, and rhythm of the rhythmic pattern. The manipulated donor musical tracks are then integrated into the host musical track to create a combined digital multi-genre musical composition track, which can be downloaded, saved in a file, and replayed as needed.
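The manipulation of a donor track's rhythmic pattern can be pictured as scaling event intensities and stretching or compressing beat positions. A minimal sketch, where the (beat, velocity) event layout and the MIDI-style 127 velocity cap are assumptions:

```python
def manipulate_pattern(pattern, intensity_scale=1.0, tempo_scale=1.0):
    """Manipulate a donor track's rhythmic pattern before integrating it
    into the host track: scale event intensities (velocities) and
    stretch or compress beat positions."""
    return [(beat * tempo_scale, min(127, round(velocity * intensity_scale)))
            for beat, velocity in pattern]
```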

PROVIDING PERSONALIZED SONGS IN AUTOMATED CHATTING

The present disclosure provides method and apparatus for providing personalized songs in automated chatting. A message may be received in a chat flow. Personalized lyrics of a user may be generated based at least on a personal language model of the user in response to the message. A personalized song may be generated based on the personalized lyrics. The personalized song may be provided in the chat flow.

AUTONOMOUS GENERATION OF MELODY
20210158790 · 2021-05-27 ·

Implementations of the subject matter described herein provide a solution that enables a machine to automatically generate a melody. In this solution, user emotion and/or environment information is used to select a first melody feature parameter from a plurality of melody feature parameters, wherein each of the plurality of melody feature parameters corresponds to a music style of one of a plurality of reference melodies. The first melody feature parameter is further used to generate a first melody that conforms to the music style and is different from the reference melody. Thus, a melody that matches user emotions and/or environmental information may be automatically created.
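The selection step can be sketched as a nearest-match lookup: pick the feature parameter of the reference melody whose emotion coordinates lie closest to the user's detected state. The (valence, arousal) coordinate scheme below is an assumption; the disclosure only says emotion and/or environment information drives the selection:

```python
def select_feature_parameter(user_state, reference_params):
    """Select the melody feature parameter of the reference melody whose
    emotion coordinates lie closest to the user's detected state.
    `reference_params` maps (valence, arousal) tuples to the feature
    parameter of one reference melody's music style."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(reference_params, key=lambda coords: sq_dist(coords, user_state))
    return reference_params[best]
```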

AUTOMATIC PREPARATION OF A NEW MIDI FILE
20210158791 · 2021-05-27 ·

The present disclosure relates to a method of automatically preparing a MIDI file based on a target MIDI file comprising respective note information about each of a plurality of target notes and a source MIDI file comprising respective note information about each of a plurality of source notes. Each note information comprises pitch information defining a pitch of the note. The method comprises ranking the plurality of target notes based on the pitch of each target note. The method also comprises, for each of the ranked target notes, removing the pitch information from the note information of the target note. The method also comprises, for each of the ranked target notes, replacing the removed pitch information with pitch information of a corresponding source note, whereby the target note has the same pitch as the corresponding source note, forming a plurality of new notes of a new MIDI file.
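The rank-and-replace step can be sketched as follows: the i-th highest-pitched target note takes the i-th highest source pitch while its other note information (timing, duration) is kept. The exact correspondence rule and the dict-based note layout are assumptions, not the publication's definition:

```python
def transfer_pitches(target_notes, source_notes):
    """Replace each target note's pitch with the pitch of a corresponding
    source note, keeping the target's remaining note information intact.
    Notes are dicts with at least a 'pitch' key."""
    # Rank target notes by pitch, highest first.
    ranked_targets = sorted(range(len(target_notes)),
                            key=lambda i: target_notes[i]['pitch'],
                            reverse=True)
    ranked_pitches = sorted((n['pitch'] for n in source_notes), reverse=True)
    new_notes = [dict(n) for n in target_notes]
    for rank, idx in enumerate(ranked_targets):
        # Wrap around if there are fewer source notes than target notes.
        new_notes[idx]['pitch'] = ranked_pitches[rank % len(ranked_pitches)]
    return new_notes
```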

Method of creating musical compositions and other symbolic sequences by artificial intelligence
11024276 · 2021-06-01 ·

A method of creating AI-composed music having a style that reflects and/or augments the personal style of a user includes selecting and/or composing one or more seed compositions; applying variation and/or mash-up methods to the seed compositions to create training data; training an AI using the training data; and causing the AI to compose novel musical compositions. The variation methods can include methods described in the inventor's previous patents. A novel method of creating mash-ups is described herein. Variation and mash-up methods can further be applied to the AI compositions. The AI compositions, and/or variations and/or mash-ups thereof, can be added to the training data for re-training of the AI. The disclosed mash-up method includes parsing the seed compositions into sequences of elements, which can be of equal length, beat-matching the seed compositions to make corresponding elements of equal beat length, and combining the elements to form a mash-up.
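The parse-then-combine idea can be sketched as splitting each seed into elements of equal beat length and alternating between seeds when recombining. A seed here is a list of (pitch, beats) pairs; that layout and the round-robin combination rule are assumptions, and tempo stretching is omitted:

```python
def mash_up(seeds, element_beats=4):
    """Parse each seed composition into elements of equal beat length,
    then combine corresponding (beat-matched) elements by alternating
    between seeds."""
    def split(seed):
        elements, current, acc = [], [], 0
        for pitch, beats in seed:
            current.append((pitch, beats))
            acc += beats
            if acc >= element_beats:
                elements.append(current)
                current, acc = [], 0
        return elements

    parsed = [split(seed) for seed in seeds]
    combined = []
    for i, group in enumerate(zip(*parsed)):  # i-th element of every seed
        combined.extend(group[i % len(group)])
    return combined
```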

AUTOMATIC ORCHESTRATION OF A MIDI FILE
20210125593 · 2021-04-29 ·

An electronic device segments first and second MIDI files into pluralities of source segments and target segments, respectively. For each of a plurality of consecutive pairs of first and second target segments, the electronic device identifies a first source segment corresponding to the first target segment of the consecutive pair and identifies a second source segment corresponding to the second target segment of the consecutive pair, where the first and second source segments are identified by determining that the first and second source segments are harmonically conformant to the corresponding first and second target segments, and determining that a transition between the first and second source segments is graphically conformant to a transition between a consecutive pair of source segments. The electronic device generates a third MIDI file using the identified first and second source segments for each of the plurality of consecutive pairs of first and second target segments.
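A simple stand-in for the harmonic-conformance test used to pair source segments with target segments: the source segment introduces no pitch class absent from the target segment. The publication's actual criterion (and its graphic-conformance test for transitions) may differ; segments here are plain lists of MIDI pitch numbers:

```python
def harmonically_conformant(source_segment, target_segment):
    """True if every pitch class sounding in the source segment also
    occurs in the target segment, i.e. the source fits the target's
    harmony under this simplified test."""
    def pitch_classes(segment):
        return {pitch % 12 for pitch in segment}
    return pitch_classes(source_segment) <= pitch_classes(target_segment)
```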

Music context system audio track structure and method of real-time synchronization of musical content
11854519 · 2023-12-26 ·

A system is described that permits identified musical phrases or themes to be synchronized and linked into changing real-world events. The achieved synchronization includes a seamless musical transition (achieved using a timing offset, such as relative advancement of a significant musical onset, that is inserted to align with a pre-existing but identified music signature, beat or timebase) between potentially disparate pre-identified musical phrases having different emotive themes defined by their respective time signatures, intensities, keys, musical rhythms and/or musical phrasing. The system operates to augment an overall sensory experience of a user in the real world by dynamically changing, re-ordering or repeating and then playing audio themes within the context of what is occurring in the surrounding physical environment; e.g., during different phases of a cardio workout in a step class, the music rate and intensity increase during sprint periods and decrease during recovery periods.
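The timing-offset idea can be sketched as advancing a phrase's significant musical onset so it lands on the next beat of the pre-identified timebase. This is a minimal sketch of the alignment only; the system's actual scheduling is more involved, and the sorted list of beat instants is an assumed representation:

```python
def entry_offset(onset_time, beat_times):
    """Return the timing offset that moves a phrase's significant onset
    onto the next beat of the identified timebase. `beat_times` is a
    sorted list of beat instants in seconds."""
    next_beat = min((b for b in beat_times if b >= onset_time),
                    default=beat_times[-1])
    return next_beat - onset_time
```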