GENERATING MUSIC OUT OF A DATABASE OF SETS OF NOTES
20230178057 · 2023-06-08
Inventors
- Marek PLUTA (Kraków, PL)
- Joanna KWIECIEŃ (Wasniów, PL)
- Colin LEWIS (Warszawa, PL)
- Andrzej DABROWSKI (Warszawa, PL)
- Marek WLODARCZYK (Zyrardów, PL)
CPC classification
- G10H2210/145 (PHYSICS)
- G10H2240/145 (PHYSICS)
- G10H2240/011 (PHYSICS)
- G10H2240/131 (PHYSICS)
- G10H1/0025 (PHYSICS)
- G10H2210/101 (PHYSICS)
- G10H2210/031 (PHYSICS)
- G10H2240/125 (PHYSICS)
- G10H2250/005 (PHYSICS)
- G10H2210/105 (PHYSICS)
International classification
Abstract
A method of generating music contents from input music contents that includes developing models of music composition generation on the basis of business rules and composition rules. In parallel, sounds are prepared and may be saved in the sound repository. The models, in the form of source code, are then sent to a melody generator. First, the generator is set with specific parameters using a controller conforming to the MIDI standard and supplemented with composition characteristics read from the user preference database. Next, the contents are sent to automatic generation based on artificial intelligence algorithms, and a digital score of the composition with the desired characteristics is generated. Sound tracks of individual instruments are rendered, and the rendered tracks are mixed into the final music record. Finally, the composition and its record are verified by the critic module using algorithms based on neural networks.
Claims
1. A method of generating music contents, wherein input sound samples are processed according to music input file modification algorithms related, in particular, to characteristics such as tempo, mood of the composition, music genre, duration and the scope of content modulation, with the effect being a composition with the intended artistic expression; wherein music contents are created on the technical level and on the artistic level; wherein, on the level of contents creation on the technical level, the input music contents are analysed for the presence of patterns, the patterns are saved in a database of business rules and music composing rules used to develop generation models of music compositions of the given type; next a melody generator is created, in which a digital score of the part of the given instrument is created, wherein a database of atomic sounds is created simultaneously; and next the music contents are sent to the generator, in which parameters are set using a controller conforming to the MIDI standard, and are subjected to automatic generation of a digital score of the composition, and parts for individual instruments are created and then rendered to music tracks for each of the instruments, followed by mixing of the individual tracks into a record, whereby the final version of the record is obtained, with the composition and its record then verified by an AI critic module.
2. The method of generating music contents according to claim 1, wherein the final music record is created using artificial intelligence algorithms at the stages of analysis for the presence of existing patterns, development of composition generation models, and creation of the generator preparing sound.
3. The method of generating music contents according to claim 1, wherein sound samples are created and contents are saved in parallel in the repository.
4. The method of generating music contents according to claim 1, wherein the developed models are sent to be read and a digital score of the composition with the desired parameters is generated automatically.
5. The method of generating music contents according to claim 1, wherein sound tracks of instruments are rendered using resources from the repository.
6. The method of generating music contents according to claim 1, wherein the composition and its record are verified using artificial intelligence algorithms and the process of generating music contents is repeated from the beginning if the record does not pass verification.
Description
[0013] According to the invention, the method of generating music contents is based on a series of sequential processes, the operation and course of which are based on artificial intelligence algorithms. The process of music contents generation takes place using a controller conforming to the MIDI standard. Business rules enabling automatic creation of music tracks according to user preferences were created. Automatic generation of music contents is enabled by solutions operating within the platform, such as a user preference database, repository resources, business rules, models used in generation of music compositions of the given type, and a melody generator, where parameters and characteristics for models of instrument form and lines are specified. Models are created on the technical level and further processed according to music input file modification algorithms, such that the final recording is generated, and after its verification the composition carrying the intended compositional and artistic content is obtained.
[0014] The method of generating music contents according to the invention is characterised in that the input sound samples are processed according to input music file modification algorithms related, in particular, to characteristics such as tempo, mood of the song, music genre, duration and the scope of contents modulation. This results in a composition with the intended artistic expression. The first stage of the generation process includes construction of music contents on the technical level, in the form of models. Technical contents are obtained as a result of a range of processes focused on creation of the generator. This series of processes includes analysis of the input music contents for the existence of patterns once the input contents are provided. Next, the patterns are saved in the database of business rules and music composition rules used to develop the music composition generation models of the given type. Thus, a melody generator is created, used to generate a digital score of the parts of the given instrument. A database of atomic sounds is prepared in parallel and then sent to the generator, where parameters are set using a controlling device conforming to the MIDI standard. The models thus created are subjected to automatic generation of a digital score, and parts for individual instruments are created and subsequently rendered to music tracks for each instrument. A record on the artistic level is obtained. Next, the record is polished and mixed. The final version of the record is recorded, and the composition and its record are then verified by the critic module. After verification, the record is exported to a distribution module of a dedicated platform.
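The staged process described above can be sketched as a simple pipeline. This is purely an illustrative outline: every function name, data shape and return value below is a hypothetical stand-in, not an API defined in the disclosure.

```python
# Illustrative sketch of the staged pipeline from paragraph [0014].
# All names and data shapes are hypothetical placeholders.

def analyse_patterns(input_contents):
    """Analyse input music contents for recurring patterns (stage 3)."""
    return [seg for seg in input_contents if len(seg) > 1]  # toy pattern test

def develop_models(patterns, business_rules):
    """Develop generation models of the given type from saved patterns (stage 5)."""
    return {"patterns": patterns, "rules": business_rules}

def generate_score(model, controller_params):
    """Automatically generate a digital score with the set parameters (stage 15)."""
    return {"params": controller_params, "parts": ["piano", "bass"]}

def render_tracks(score, repository):
    """Render one sound track per instrument part from atomic sounds (stage 16)."""
    return {part: repository.get(part, b"") for part in score["parts"]}

def mix(tracks):
    """Mix and polish the individual tracks into the final record (stage 18)."""
    return b"".join(tracks.values())

def generate_music(input_contents, business_rules, repository, controller_params):
    patterns = analyse_patterns(input_contents)
    model = develop_models(patterns, business_rules)
    score = generate_score(model, controller_params)
    tracks = render_tracks(score, repository)
    return mix(tracks)
```

The sketch only shows the ordering of the stages; the actual pattern analysis, model development and rendering are described in the disclosure as artificial-intelligence processes.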
[0015] In the preferred embodiment of the invention, the final music record is created using artificial intelligence algorithms during the stage of analysis in terms of the presence of existing patterns, preparation of composition generation models, creation of a melody generator and sound preparation.
[0016] In another preferred embodiment of the invention, sound samples are created simultaneously with contents saving in the repository.
[0017] In another preferred embodiment of the solution, the developed models are sent to be read and a digital note record of the composition with the desired characteristics is generated automatically.
[0018] In another preferred embodiment of the invention, sound tracks of the instruments are rendered using repository resources.
[0019] In another preferred embodiment of the invention, the composition and its record are verified using artificial intelligence and the process of generating music contents is repeated from the beginning if the record does not pass verification.
[0020] Using ready patterns of diagrams and samples, a user without special instrumental and hardware resources and without specialist knowledge at the level of a programmer or a sound engineer, using a controller to specify the characteristics of the sound contents, is able to create fully fledged music contents with artistic value, prepared according to individually specified composition preferences.
[0021] Artificial intelligence algorithms are used during the process of music contents creation, producing a result comparable to the work of an entire team of specialists who would otherwise generate such music contents using traditional tools. The operation of the generator is supported and controlled by a controller based on the MIDI standard. Fully digital generation of music contents using a controller gives the user the opportunity to provide instructions for the generator by specifying base parameters, in particular the genre, tempo, mood, duration and content modulation parameters that give the contents an individual character. The work of the user is additionally supported by a functional repository of sounds containing sounds in the form of single notes. Music tracks for individual instruments are rendered to a form that is then mixed and refined to the level required by the intended artistic composition. An algorithm based on the operation of multilayer feedforward neural networks verifies the composition and its record for conformity with the composition assumptions, in particular for conformity with the preferences and business standards effective during composition. The music contents may be generated without limitations. The generator creation process takes place only once. The generated music contents may be distributed.
[0022] The subject of the invention is presented in an example embodiment in the attached drawing, which illustrates an example block diagram of music contents generation.
[0023] The block diagram presents the course of the individual operations executing the subject of the invention and indicates the sets and databases used during generation of new music contents using the method according to the invention. The terms "music contents", "composition" and "composition and its record" are used in this disclosure to designate the result of the method according to the invention. A controller conforming to the MIDI standard is an element required to execute the method according to the invention.
[0024] In the block diagram presented in the figure, each “+” symbol should be understood as a conjunction of a series of processes following in a sequence during a single period of time.
[0025] The arrows denoted with a dotted line along their length should be understood as indicating a sequence of actions occurring in the past relative to the sequence of activities indicated with arrows denoted with a continuous line along their length.
[0026] Each first arrow leading to the tile of the database 25 should be understood as a “saved in” arrow, while each arrow leading from the tile of the database 25 should be understood as “read in”.
[0027] Existing compositions should be understood as existing sound compositions or sound samples.
[0028] The term “composition and its record” during the stage of exporting to the distribution module of the platform 23 should be understood such that not only the record itself is verified, but also, for example, some of the information regarding parameters set by the user and characteristics of the composition 12, including the composition concept, e.g. its genre.
[0029] The term “contents on the technical level” 26 should be understood as the MIDI file and additional data sent to the generator in the form of a technical algorithm and source code.
[0030] Sequencer 28a should be understood as an electronic device or a computer program that stores not a sequence of sounds but a sequence of instructions controlling the synthesiser, including parameters, and enables its multiple playback.
[0031] Sampler 28b should be understood as an electronic music instrument or a computer program enabling digital recording of any sound and its subsequent use as any traditional music sound.
[0032] The area of operation of the sampler and of the sequencer should be understood as tandem operation of modules: 28a and 28b on MIDI files and data related to music contents, comprising instructions for the rendering process 16.
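The tandem operation of the sequencer and sampler described in paragraphs [0030]-[0032] can be sketched as follows. The class names, the `(note, duration)` event shape, and the byte-string "audio" are stand-ins invented for this illustration; the point is only that the sequencer stores control instructions rather than audio, and the sampler resolves each note to a recorded sample during rendering.

```python
# Toy illustration of the sequencer/sampler tandem (modules 28a and 28b).
# Names and data shapes are assumptions made for illustration.

class Sampler:
    """Resolves a note name to a previously recorded sample (module 28b)."""
    def __init__(self, samples):
        self.samples = samples  # note name -> raw sample bytes

    def play(self, note):
        return self.samples[note]

class Sequencer:
    """Stores control instructions, not audio (module 28a)."""
    def __init__(self):
        self.instructions = []  # (note, duration) control events

    def add(self, note, duration):
        self.instructions.append((note, duration))

    def render(self, sampler):
        """Replay the stored instructions through the sampler (stage 16)."""
        return b"".join(sampler.play(note) * duration
                        for note, duration in self.instructions)

seq = Sequencer()
seq.add("C4", 2)
seq.add("E4", 1)
audio = seq.render(Sampler({"C4": b"c", "E4": b"e"}))  # b"cce"
```

Because the sequencer stores only instructions, the same sequence can be replayed any number of times, with different samplers, matching the "multiple playback" property of paragraph [0030].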
[0033] Verification by the AI critic module 19 should be understood as verification of the record and of its composition by a module based on the operation of artificial intelligence algorithms using artificial neural networks. These are learning algorithms consisting of networks of artificial neurons, first and foremost able to generalise from the observed data. The term network learning should be understood as forcing the network to react to a selected input parameter in a specific manner.
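Paragraph [0021] states that verification uses multilayer feedforward neural networks. A minimal sketch of such a verifier is shown below; the weights, the sigmoid activation, the feature vector and the acceptance threshold are all placeholders chosen for illustration, not a trained model from the disclosure.

```python
import math

# Minimal feedforward-network sketch of the critic module 19.
# Weights and threshold are arbitrary placeholders, not a trained model.

def forward(x, layers):
    """One forward pass through (weights, biases) layer pairs with sigmoid."""
    for weights, biases in layers:
        x = [1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(row, x)) + b)))
             for row, b in zip(weights, biases)]
    return x

def critic_accepts(features, layers, threshold=0.5):
    """Accept the record when the network's single output exceeds threshold."""
    return forward(features, layers)[0] > threshold

# Two-input, one-hidden-layer toy network with placeholder weights.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # hidden layer: 2 units
    ([[2.0, 2.0]], [-2.0]),                   # output layer: 1 unit
]
ok = critic_accepts([0.9, 0.1], layers)
```

In the disclosed method the inputs would be features of the composition and its record, and the network would have been trained on the user preferences and business standards mentioned in paragraph [0021].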
[0034] As shown in the attached drawing, the method proceeds as described below.
[0035] Contents from the generator, as models on the technical level 27, are sent to the generation process, where a digital score of the composition with the required characteristics 15 is generated automatically on the basis of artificial intelligence algorithms, and parts for individual instruments 17 are then obtained. The created parts 17 are sent as information analysed in the field of sequencer and sampler 28 operation and rendered 16 for each instrument separately, such that, using the sequencer and the sampler with samples, the digital score of each part of the given instrument is changed to a sound form and the record form is created separately for the individual instruments. Next, the record is polished and mixed 18. Thus, the final music record 20 is obtained and sent to verification. The composition 27 and its record 20 are verified using the critic module 19 based on specialist neural network algorithms. The final music contents are exported 23 and sent to distribution 24. If the composition and its record are verified negatively in the critic module 19, the process is stopped at this stage and the automatic generator 16 generates new contents according to the set parameters and characteristics, preferably using the user preference databases 13.
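The negative-verification path described above (regenerate with the same parameters until the critic accepts) can be sketched as a simple retry loop. The generator and critic below are deterministic stand-ins, not the disclosed AI algorithms, and the attempt cap is an assumption added so the sketch always terminates.

```python
import random

# Sketch of the regenerate-until-accepted loop from paragraph [0035].
# The generator and critic are stand-ins; max_attempts is an assumption.

def generate_until_accepted(generate, critic, max_attempts=10):
    """Regenerate until the critic accepts, or give up after max_attempts."""
    for attempt in range(1, max_attempts + 1):
        record = generate()
        if critic(record):
            return record, attempt
    raise RuntimeError("no record passed verification")

rng = random.Random(0)  # seeded for reproducibility
record, attempts = generate_until_accepted(
    generate=lambda: rng.random(),  # stand-in generator output
    critic=lambda r: r > 0.7,       # stand-in AI critic decision
)
```

In the disclosed method the `generate` step would be the automatic score generation and rendering with the set parameters, and `critic` the neural-network verification of module 19.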
[0036] In this embodiment of the invention, the prepared music contents are saved in the sound repository 8 during the sound preparation stage 6.
[0037] In this embodiment of the invention, the generator has composition characteristics and parameters set using user preference databases 13.
[0038] In another embodiment of the invention, generation models for music composition of the given type are developed and saved in a database 11 of prepared models, from which these models are read during the stage of automatic generation of the digital score of the composition with the desired parameters 15.
List of figure references:
1. Introduction of the input music contents
2. Process conjunction
3. Analysis of existing compositions for pattern presence
4. Business rules, including music composition rules
5. Development of generation models for music compositions of the given type
6. Sound preparation
7. Data saving in the selected database
8. Atomic sound repository
9. Data reading from the selected database
10. Melody generator creation
11. Developed models stored in a database
12. Setting composition parameters and characteristics in the generator
13. User preference database
14. Music generator
15. Automatic generation of a digital score of the composition with the desired parameters
16. Rendering the sound from sound samples according to the digital score
17. Instrument parts
18. Record mixing and polishing
19. Verification of the composition and of its record by the critic module based on artificial intelligence (AI)
20. The final music record
21. Positive evaluation of the composition and of its record
22. Negative evaluation of the composition and of its record
23. Export to the distribution module of a dedicated platform
24. Music distribution
25. Database tile
26. Controller conforming to the MIDI standard
27. Creation area of the content on the technical level
28. Sequencer and sampler operation area
28a. Sequencer
28b. Sampler
29. The area of final music contents on the artistic level