G10H2240/131

GENERATING MUSIC OUT OF A DATABASE OF SETS OF NOTES

A method of generating music content from input music content that includes developing models of music composition generation on the basis of business rules and composition rules. In parallel, sounds are prepared, which may be saved in a sound repository. The models, in the form of source code, are then sent to a melody generator. First, the generator is set with specific parameters using a controller conforming to MIDI standards, supplemented with composition characteristics read from a user preference database. Next, the content is passed to automatic generation based on artificial intelligence algorithms, and a digital score of the composition with the desired characteristics is generated. Sound tracks of individual instruments are rendered, and the rendered tracks are mixed into the final music record. Finally, the composition and its recording are verified by a critic module using algorithms based on neural networks.
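A minimal sketch of the generate-render-critique loop described above, in Python; the component names and the toy scoring function are illustrative stand-ins for the patent's generator and neural-network critic, not its actual modules:

```python
# Illustrative sketch of the generate/render/critique pipeline; all names are
# hypothetical and the scoring function stands in for the neural-network critic.
import random

def generate_score(params):
    """Produce a toy 'digital score': one note list per instrument."""
    return {inst: [random.randint(60, 72) for _ in range(16)]
            for inst in params["instruments"]}

def render_track(notes):
    """Stand-in for rendering a note list to audio (here, just a copy)."""
    return list(notes)

def mix(tracks):
    """Stand-in for mixing rendered tracks into one record."""
    return [sum(col) / len(col) for col in zip(*tracks.values())]

def critic(score):
    """Toy critic: prefers scores with some melodic movement."""
    movement = sum(abs(a - b) for line in score.values()
                   for a, b in zip(line, line[1:]))
    return movement / (len(score) * 15)

params = {"instruments": ["piano", "bass"], "tempo": 100}
best = None
for _ in range(20):                      # regenerate until the critic is satisfied
    score = generate_score(params)
    if best is None or critic(score) > critic(best):
        best = score
record = mix({inst: render_track(n) for inst, n in best.items()})
print(len(record), "mixed samples, critic score:", round(critic(best), 2))
```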

Method and system for learning and using latent-space representations of audio signals for audio content-based retrieval

A method and system are provided for extracting features from digital audio signals which exhibit variations in pitch, timbre, decay, reverberation, and other psychoacoustic attributes, and for learning, from the extracted features, an artificial neural network model for generating contextual latent-space representations of digital audio signals. A method and system are also provided for learning an artificial neural network model for generating consistent latent-space representations of digital audio signals in which the generated latent-space representations are comparable for the purposes of determining psychoacoustic similarity between digital audio signals. A method and system are also provided for extracting features from digital audio signals and learning, from the extracted features, an artificial neural network model for generating latent-space representations of digital audio signals which select salient attributes of the signals that represent psychoacoustic differences between the signals.
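As an illustration of the retrieval idea, the sketch below stands a fixed random projection in for the trained neural network and compares latent vectors by cosine similarity; the feature extractor, shapes, and names are assumptions for illustration only:

```python
# Hedged sketch: a crude feature extractor and a fixed random "encoder" standing
# in for the trained neural network; retrieval compares latent vectors with
# cosine similarity, as the abstract describes.
import numpy as np

rng = np.random.default_rng(0)

def extract_features(signal, n_bins=64):
    """Crude spectral features: magnitude of the first n_bins FFT coefficients."""
    spectrum = np.abs(np.fft.rfft(signal))[:n_bins]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

W = rng.normal(size=(64, 16))            # stands in for learned encoder weights

def to_latent(signal):
    return np.tanh(extract_features(signal) @ W)   # latent-space representation

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Query a tiny "catalogue" of synthetic signals by psychoacoustic similarity.
catalogue = {f"clip_{i}": rng.normal(size=2048) for i in range(5)}
query = catalogue["clip_2"] + 0.05 * rng.normal(size=2048)   # perturbed copy
latents = {name: to_latent(sig) for name, sig in catalogue.items()}
q = to_latent(query)
best = max(latents, key=lambda name: similarity(q, latents[name]))
print("most similar clip:", best)
```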

Pace-aware music player

An electronic device may comprise audio processing circuitry, pace tracking circuitry, and positioning circuitry. The pace tracking circuitry may be operable to select songs to be processed for playback, and/or control time stretching applied to such songs, by the audio processing circuitry based on position data generated by the positioning circuitry, a desired tempo, and whether the songs are stored locally or are network-accessible. The position data may indicate the pace of a runner during a preceding, determined time interval. The pace tracking circuitry may control the song selection and/or time stretching based on runner profile data stored in memory of the music device. The profile data may include the runner's distance-per-stride data. The electronic device may include one or more sensors operable to function as a pedometer. The pace tracking circuitry may update the distance-per-stride data based on the position data and based on data output by the one or more sensors.
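A hedged sketch of how pace data might drive song selection and time stretching; the position-sample format, the profile field, and the matching rule are illustrative assumptions, not the patent's design:

```python
# Illustrative sketch: derive cadence from position samples, then pick a song and
# a time-stretch factor so its tempo matches the runner's stride rate.
from dataclasses import dataclass

@dataclass
class PositionSample:
    t: float            # seconds
    distance_m: float   # cumulative distance in metres

def cadence_strides_per_min(samples, distance_per_stride_m):
    """Strides per minute over the interval covered by the position samples."""
    dt = samples[-1].t - samples[0].t
    dx = samples[-1].distance_m - samples[0].distance_m
    return (dx / distance_per_stride_m) / dt * 60.0

def stretch_factor(song_bpm, target_bpm):
    """Time-stretch ratio that maps the song's tempo onto the runner's cadence."""
    return target_bpm / song_bpm

samples = [PositionSample(0, 0.0), PositionSample(60, 150.0)]   # 150 m in 60 s
cadence = cadence_strides_per_min(samples, distance_per_stride_m=1.0)
local_songs = {"song_a": 140, "song_b": 155}                    # name -> BPM
song = min(local_songs, key=lambda s: abs(local_songs[s] - cadence))
print(song, "stretched by", round(stretch_factor(local_songs[song], cadence), 3))
```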

Music Generator: Generation of Continuous Personalized Music
20220059063 · 2022-02-24

Techniques are disclosed relating to automatically generating new music content. In some embodiments, a computing system receives user input specifying a user-defined music control element. The computing system may train a machine learning model to change both composition and performance parameters based on user adjustments to the user-defined music control element. In embodiments in which composition and performance subsystems are on different devices, one device may transmit configuration information to another device, where the configuration information specifies how to adjust parameters based on user input to the user-defined music control element. Disclosed techniques may facilitate centralized learning for human-like music production while allowing individualized customization for individual users. Further, disclosed techniques may allow artists to define their own abstract music controls and make those controls available to end users.
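The sketch below illustrates the idea of one user-defined control driving both composition and performance parameters; the linear mapping stands in for the trained machine learning model, and all parameter names are assumptions:

```python
# Hedged sketch: a single user-defined control (an "intensity" knob in [0, 1]) is
# mapped to both composition and performance parameters. The mapping is a toy
# stand-in for the trained model described in the abstract.
def apply_control(intensity):
    composition = {
        "note_density": 2 + intensity * 6,      # notes per beat
        "chord_complexity": 1 + round(intensity * 3),
    }
    performance = {
        "velocity": int(60 + intensity * 60),   # MIDI velocity 60-120
        "swing": 0.5 + intensity * 0.2,
    }
    return composition, performance

# Configuration a composition device might send to a performance device,
# describing how to adjust its parameters from the same control value.
config = {"control": "intensity", "velocity": {"min": 60, "max": 120}}

comp, perf = apply_control(0.75)
print(comp, perf, config)
```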

Device, system and method for generating an accompaniment of input music data
09798805 · 2017-10-24

A device for automatically generating a real time accompaniment of input music data includes a music input that receives music data. A music analyzer analyzes received music data to obtain a music data description including one or more characteristics of the analyzed music data. A query generator generates a query to a music database including music patterns and associated metadata including one or more characteristics of the music patterns, the query being generated from the music data description and from an accompaniment description describing preferences of the real time accompaniment and/or music rules describing general rules of music. A query interface queries the music database using a generated query and receives a music pattern selected from the music database by use of the query. A music output outputs the received music pattern.
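A sketch of the analyze-query-select path under assumed metadata fields; the analyzer, the scoring rule, and the database entries are illustrative only:

```python
# Hedged sketch: analyse incoming music data, build a query from its description
# plus accompaniment preferences, and pick the best-matching pattern from a
# metadata-tagged database. All field names are illustrative.
music_db = [
    {"pattern": "walking_bass_1", "key": "C", "tempo": 120, "style": "jazz"},
    {"pattern": "rock_groove_2",  "key": "C", "tempo": 118, "style": "rock"},
    {"pattern": "bossa_comp_3",   "key": "F", "tempo": 122, "style": "latin"},
]

def analyze(music_data):
    """Stand-in for the music analyzer: returns a music data description."""
    return {"key": "C", "tempo": 120}

def build_query(description, accompaniment_prefs):
    return {**description, **accompaniment_prefs}

def query_db(query, db):
    def score(entry):
        s = 0
        s += entry["key"] == query["key"]
        s += entry.get("style") == query.get("style")
        s -= abs(entry["tempo"] - query["tempo"]) / 100.0
        return s
    return max(db, key=score)

description = analyze(music_data=b"\x00")            # placeholder input
query = build_query(description, {"style": "jazz"})
print(query_db(query, music_db)["pattern"])           # -> walking_bass_1
```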

TRANSITIONS BETWEEN MEDIA CONTENT ITEMS

A system for playing media content items determines transitions between pairs of media content items by identifying desirable locations at which transitions across the pairs occur. The system uses a plurality of track features of the media content items and determines those track features at each transition point candidate, such as a beat position, of each media content item. The system determines similarity in the plurality of track features between the transition point candidates of a first media content item and the transition point candidates of a second media content item played subsequent to the first media content item. The transition points or portions of the first and second media content items are selected from their transition point candidates based on the similarity results.
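The following sketch illustrates choosing a transition by comparing track features at candidate beat positions of the outgoing and incoming items; the feature values and candidate times are invented for illustration:

```python
# Illustrative sketch: each candidate transition point (a beat position) carries a
# feature vector; the pair of candidates with the most similar features across the
# outgoing and incoming tracks is chosen as the transition.
import math

def similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# candidate beat position (seconds) -> track features (e.g. loudness, brightness)
track1_candidates = {118.0: [0.8, 0.3], 122.5: [0.5, 0.6], 126.0: [0.2, 0.9]}
track2_candidates = {0.0:   [0.9, 0.2], 4.5:   [0.3, 0.8]}

best = max(
    ((p1, p2) for p1 in track1_candidates for p2 in track2_candidates),
    key=lambda pair: similarity(track1_candidates[pair[0]],
                                track2_candidates[pair[1]]),
)
print("transition out at", best[0], "s, in at", best[1], "s")
```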

Music modeling

A computer-implemented method is provided for generating a prediction of a next musical note by a computer having at least a processor and a memory. A computer processor system is also provided for generating a prediction of a next musical note. The method includes storing sequential musical notes in the memory. The method further includes generating, by the processor, the prediction of the next musical note based upon a music model and the sequential musical notes stored in the memory. The method also includes updating, by the processor, the music model based upon the prediction of the next musical note and the actual next musical note. The method additionally includes resetting, by the processor, the memory at fixed time intervals.
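A minimal sketch of the predict/update/reset cycle, with a first-order Markov table standing in for the music model and a fixed note-count interval standing in for the fixed time interval; everything here is an assumption for illustration:

```python
# Hedged sketch of the predict/update/reset loop from the abstract.
from collections import defaultdict

model = defaultdict(lambda: defaultdict(int))   # prev note -> next note -> count
memory = []
RESET_INTERVAL = 8                              # reset memory every 8 notes

def predict(prev):
    nexts = model[prev]
    return max(nexts, key=nexts.get) if nexts else prev

def update(prev, actual):
    model[prev][actual] += 1                    # learn from prediction vs. actual

stream = [60, 62, 64, 62, 60, 62, 64, 62, 60, 62, 64, 65]   # incoming MIDI notes
for i in range(1, len(stream)):
    prev, actual = stream[i - 1], stream[i]
    memory.append(prev)                         # store the sequential notes
    guess = predict(prev)
    update(prev, actual)
    if i % RESET_INTERVAL == 0:
        memory.clear()                          # fixed-interval memory reset
    print(f"after {prev}: predicted {guess}, actual {actual}")
```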

Learning progression for intelligence based music generation and creation

An artificial intelligence (AI) method includes generating a first musical interaction behavioral model. The first musical interaction behavioral model causes an interactive electronic device to perform a first set of musical operations and a first set of motional operations. The AI method further includes receiving user inputs provided in response to the performance of the first set of musical operations and the first set of motional operations and determining a user learning progression level based on the user inputs. In response to determining that the user learning progression level is above a threshold, the AI method includes generating a second musical interaction behavioral model. The second musical interaction behavioral model causes the interactive electronic device to perform a second set of musical operations and a second set of motional operations. The AI method further includes performing the second set of musical operations and the second set of motional operations.
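An illustrative sketch of the progression check: user inputs are scored, and exceeding a threshold switches the device to a second behavioral model; the scoring rule, the threshold value, and the model contents are assumptions:

```python
# Hedged sketch: user responses to the first behavioral model are scored, and
# crossing a progression threshold switches to a second, more advanced model.
def progression_level(user_inputs):
    """Fraction of prompted operations the user responded to correctly."""
    return sum(user_inputs) / len(user_inputs)

def behavioral_model(level):
    if level == 1:
        return {"musical_ops": ["play_scale"], "motional_ops": ["nod_head"]}
    return {"musical_ops": ["play_melody", "keep_rhythm"],
            "motional_ops": ["dance_step", "gesture"]}

THRESHOLD = 0.7
user_inputs = [1, 1, 0, 1, 1]                    # 1 = correct response
model = behavioral_model(1)
if progression_level(user_inputs) > THRESHOLD:   # 0.8 > 0.7 -> advance
    model = behavioral_model(2)
print(model)
```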

Method and apparatus for an adaptive and interactive teaching of playing a musical instrument

A method for online learning of playing a musical instrument, which may be a string, woodwind, brass, or percussion instrument, or the voice (singing), is described. A client device, such as a smartphone or a tablet, notifies a person, such as visually by a display or audibly by a sounder, of a sequence of musical symbols that may be part of a musical piece to be played on the musical instrument. The pace or tempo of the notified sequence is adapted according to a stored skill level value of the person and the pace or tempo associated with the musical piece. The client device monitors, using one or more microphones in the client device, the errors in the playing of the sequence, updates the stored skill level value accordingly using a predefined scheme, and accordingly changes the arrangement, pace, and/or tempo of the next sequence of musical symbols.
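A hedged sketch of the adaptation step, where detected errors update the stored skill level and the skill level scales the tempo of the next sequence; the update rule and constants are illustrative assumptions:

```python
# Illustrative sketch: error monitoring updates a stored skill level value, which
# in turn scales the tempo of the next sequence toward the piece's nominal tempo.
def update_skill(skill, error_count, notes_played):
    error_rate = error_count / notes_played
    return max(0.0, min(1.0, skill + (0.05 if error_rate < 0.1 else -0.05)))

def next_tempo(piece_tempo_bpm, skill):
    return piece_tempo_bpm * (0.5 + 0.5 * skill)   # 50-100% of nominal tempo

skill = 0.4                                        # stored skill level value
for errors, played in [(1, 20), (0, 20), (4, 20)]: # results of three sequences
    skill = update_skill(skill, errors, played)
    print("skill", round(skill, 2),
          "-> next tempo", round(next_tempo(120, skill), 1), "bpm")
```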

SYSTEM AND METHOD FOR GENERATING AN AUDIO FILE
20220052773 · 2022-02-17

A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event.
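The sketch below illustrates retrieving a stinger whose tagged key is relevant to the key detected for a marked event and placing it at the event's timeline point; the key-relation table and the library entries are made up for illustration:

```python
# Hedged sketch: pick a stinger whose tagged key is "relevant" to the event key
# (same key, relative minor, or a closely related key) and place it on the timeline.
RELATED = {
    "C":  {"C", "Am", "G", "F"},
    "Am": {"Am", "C", "Em", "Dm"},
    "G":  {"G", "Em", "D", "C"},
}

stinger_library = [
    {"file": "swell_01.mid", "key": "G"},
    {"file": "hit_02.mid",   "key": "Eb"},
    {"file": "rise_03.mid",  "key": "Am"},
]

def pick_stinger(event_key):
    for stinger in stinger_library:
        if stinger["key"] in RELATED.get(event_key, {event_key}):
            return stinger
    return None

timeline = []                                # (seconds, clip) placements
event = {"time_s": 42.0, "key": "C"}         # marked event with detected key
clip = pick_stinger(event["key"])
if clip:
    timeline.append((event["time_s"], clip["file"]))
print(timeline)
```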