Patent classifications
G10H2240/145
CONTEXT-DEPENDENT PIANO MUSIC TRANSCRIPTION WITH CONVOLUTIONAL SPARSE CODING
The present disclosure presents a novel approach to automatic transcription of piano music in a context-dependent setting. Embodiments described herein may employ an efficient convolutional sparse coding algorithm to approximate a music waveform as a sum of piano note waveforms convolved with associated temporal activations. The piano note waveforms may be pre-recorded for the particular piano that is to be transcribed and, optionally, in the specific environment where the performance will take place. During transcription, the note waveforms may be held fixed while the associated temporal activations are estimated and post-processed to obtain the pitch and onset transcription. Experiments have shown that embodiments of the disclosure significantly outperform state-of-the-art music transcription methods trained in the same context-dependent setting, in both transcription accuracy and time precision, across synthetic, anechoic, noisy, and reverberant scenarios.
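The signal model described in this abstract, a waveform approximated as a sum of fixed note templates convolved with sparse activations, can be sketched in a few lines. The decaying-sinusoid template and the simple matched-filter activation estimate below are illustrative assumptions for a minimal sketch, not the patent's efficient algorithm:

```python
import numpy as np

def reconstruct(templates, activations, length):
    """Sum of each note template convolved with its sparse activation."""
    out = np.zeros(length)
    for d, x in zip(templates, activations):
        out += np.convolve(x, d)[:length]
    return out

def matched_filter_activations(signal, templates, threshold=0.5):
    """Naive activation estimate: cross-correlate the signal with each
    fixed note template, normalize by the template energy, and keep
    only values above a threshold (a stand-in for sparse coding)."""
    acts = []
    for d in templates:
        corr = np.correlate(signal, d, mode="full")[len(d) - 1:]
        corr /= (np.linalg.norm(d) ** 2 + 1e-12)
        acts.append(np.where(corr > threshold, corr, 0.0))
    return acts
```

With a single decaying-sinusoid "note" placed at sample 50, the matched filter recovers an activation peak at that onset.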
Electronic music box
The music data memory includes pieces of music within a group and other pieces of music outside the group. The next piece to be played is automatically determined by a random table from among the pieces within the group. A favorite or newest piece is weighted so that it is played more frequently within the group. A piece in the music data memory is automatically included in the group by the random table, and a newly downloaded piece is included in the group with priority. The most frequently played piece is excluded from the group in place of a newly included piece, although a favorite or newest piece may be exempt from exclusion. The next piece can be played at a tempo similar to that of the preceding piece, by means of tempo adjustment, piece replacement, or repetition of the same piece, so that baby cradling continues in synchronism with the same tempo across succeeding pieces.
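The selection and group-maintenance rules in this abstract (weighted random pick, priority inclusion of new downloads, eviction of the most-played piece with favorites exempt) can be sketched as follows; the weight value and data shapes are assumptions for illustration:

```python
import random

def pick_next(group, favorites, newest):
    """Weighted random pick: a favorite or the newest piece is
    weighted (here 3x, an assumed value) to play more often."""
    weights = [3 if p in favorites or p == newest else 1 for p in group]
    return random.choices(group, weights=weights, k=1)[0]

def include_new_piece(group, new_piece, play_counts, favorites, newest):
    """Include a newly downloaded piece with priority, evicting the
    most frequently played piece in its place; favorite and newest
    pieces are exempt from exclusion."""
    group = group + [new_piece]
    candidates = [p for p in group
                  if p not in favorites and p != newest and p != new_piece]
    if candidates:
        most_played = max(candidates, key=lambda p: play_counts.get(p, 0))
        group.remove(most_played)
    return group
```

For example, adding "d" to a group where "a" is the most-played non-exempt piece evicts "a" while the favorite "c" and newest "b" survive.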
WAVEFORM WRITING DEVICE, METHOD OF WRITING WAVEFORMS, ELECTRONIC MUSICAL INSTRUMENT, AND STORAGE MEDIUM
A device for reading waveform data of a musical tone from a primary storage device and transferring the read waveform data to a secondary storage device for tone reproduction includes a processor configured to perform: retrieving, for each waveform of a plurality of waveforms that represent a musical tone stored in the primary storage device, segment group information from the primary storage device; retrieving the plurality of waveforms that represent the musical tone from the primary storage device, the retrieval treating a waveform or waveforms, among the plurality of waveforms, that have the same segment group information as a group; and writing, as a single group, the waveform or waveforms that have the same segment group information onto one of prescribed storage segments, which are storage regions of prescribed sizes in the secondary storage device.
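The grouping and writing steps above can be sketched as grouping records by a key and placing each group, as a unit, into fixed-size segments. The record fields (`segment_group`, `size`) and the first-fit placement policy are assumptions for a minimal sketch, not the patent's implementation:

```python
from collections import defaultdict

def group_waveforms(waveforms):
    """Group waveform records by their segment group information."""
    groups = defaultdict(list)
    for wf in waveforms:
        groups[wf["segment_group"]].append(wf)
    return dict(groups)

def assign_segments(groups, segment_size):
    """Write each group, as a single unit, into the first fixed-size
    storage segment with enough free space (first-fit, an assumed policy)."""
    segments = []   # each entry: remaining free space and resident groups
    placement = {}  # group id -> segment index
    for gid, members in groups.items():
        size = sum(wf["size"] for wf in members)
        for i, seg in enumerate(segments):
            if seg["free"] >= size:
                seg["free"] -= size
                seg["groups"].append(gid)
                placement[gid] = i
                break
        else:
            segments.append({"free": segment_size - size, "groups": [gid]})
            placement[gid] = len(segments) - 1
    return placement, segments
```

Keeping same-group waveforms in one segment means a tone's waveforms can be transferred together rather than scattered across the secondary storage.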
DRIVING SOUND LIBRARY, APPARATUS FOR GENERATING DRIVING SOUND LIBRARY AND VEHICLE COMPRISING DRIVING SOUND LIBRARY
A driving sound library provides driving sounds classified into various themes to a user by analyzing frequency characteristics and temporal characteristics of sound sources classified into categories and determining a chord corresponding to each sound source based on those frequency and temporal characteristics. Modulated sound sources are generated by applying the chord corresponding to each sound source. Driving sound sources are then generated using the sound sources and the modulated sound sources as input data, and their pitches are changed based on engine sound orders corresponding to an engine RPM within a preset range. Finally, scores for each theme are generated for the driving sounds.
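The pitch-change step ties a sound's pitch to an engine sound order, where an order is a multiple of the engine's rotation frequency (RPM / 60 Hz). A minimal sketch of that relationship, with the ratio-based pitch shift as an assumed mechanism:

```python
def order_frequency(rpm, order):
    """Frequency (Hz) of an engine sound order at a given RPM:
    rotation frequency (RPM / 60) times the order number."""
    return rpm / 60.0 * order

def pitch_ratio(rpm, order, base_freq):
    """Ratio by which a driving sound source's pitch would be shifted
    so its base frequency tracks the engine order frequency."""
    return order_frequency(rpm, order) / base_freq
```

For example, the 2nd order of a 3000 RPM engine sits at 100 Hz, so a source with a 50 Hz base frequency would be shifted up by a factor of 2.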
SYSTEM AND METHOD FOR A NETWORKED VIRTUAL MUSICAL INSTRUMENT
A system and method for operating and performing a remotely networked virtual musical instrument. A client transmits musical control data over the network to a remote server that encompasses a digital music engine and digitally sampled virtual musical instruments. In return, the client consumes, synchronizes, and mixes the combined playback stream returned by the server, rendering the fully expressive and interactive musical performance with zero audible latency.
Method and system for AI controlled loop based song construction
According to an embodiment, there is provided a system and method for automatic AI-controlled loop-based song construction. It employs a machine-learning AI within an audio loop selection engine to generate a song structure and to select fitting audio loops from a database of audio loops. In one embodiment, the method provides a music generation process that utilizes an AI system, trained and validated on a music item database, to complete the creation of a music item given an incomplete song that was started but not finished by a user.
NON-TRANSITORY COMPUTER READABLE MEDIUM STORING ELECTRONIC MUSICAL INSTRUMENT PROGRAM, METHOD FOR MUSICAL SOUND GENERATION PROCESS AND ELECTRONIC MUSICAL INSTRUMENT
An electronic musical instrument, method for a musical sound generation process and a non-transitory computer readable medium that stores an electronic musical instrument program are provided. The program causes a computer provided with a storage part to execute a musical sound generation process using sound data. The program causes the computer to execute:
acquiring, from the storage part, first sound data and first user identification information indicating a user who has acquired the first sound data from a distribution server; acquiring second user identification information indicating a user who causes the musical sound generation process to be executed using the first sound data; determining whether or not the first user identification information matches the second user identification information; and inhibiting execution of the musical sound generation process using the first sound data in a case when the first user identification information does not match the second user identification information.
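The gating logic above reduces to comparing the identity of the user who acquired the sound data with the identity of the user requesting playback. A minimal sketch; the record layout is an assumption:

```python
def may_generate_sound(record, requesting_user_id):
    """Permit the musical sound generation process only when the first
    user identification (who acquired the data from the distribution
    server) matches the second (who requests the process)."""
    return record["acquired_by"] == requesting_user_id

# hypothetical sound-data record as it might sit in the storage part
record = {"sound": b"...", "acquired_by": "user-123"}
```

When the identifiers differ, execution of the generation process with that sound data is inhibited.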
Generative composition with texture groups
A computer-implemented method of generating a musical composition containing a plurality of musical texture groups is disclosed. The method includes assembling musical texture groups from musical instrument components and associating therewith a tag expressing emotional textural connotation. The instrument components have musical textural classifiers selected from a set of pre-defined textural classifiers, such that different instrument components may have different subsets of the pre-defined textural classifiers. The textural classifiers within a texture group possess either no musical feature attribute or a single musical feature attribute, and any number of musical accompaniment attributes. The method then generates at least one chord scheme in response to a narrative brief, to provide an emotional connotation to a series of events, the chord scheme being generated by selecting and assembling Form Atoms. The final step applies a texture to the chord scheme to generate the musical composition reflecting the narrative brief.
FORM ATOM HEURISTICS AND GENERATIVE COMPOSITION
A Form Atom defined by self-contained constructional properties representing a historical corpus of music and contained within metadata of the Form Atom is disclosed. The Form Atom has a generative set of heuristics to support generation of a set of chords in a chord scheme or many different sets of chords. The generated chords are spaced out within a defined window of musical time by chord spacer heuristics. The Form Atom has a tag describing its compositional heuristics. A chord list of the Form Atom is provided in local tonic and defines branching structures that may be used for the generation of different chords from the local tonic. A progression descriptor is combined with a form function such that the Form Atom expresses musically a question, an answer and a statement. A meta-map of a chord scheme for a musical section is created from the metadata.
GENERATIVE COMPOSITION USING FORM ATOM HEURISTICS
A processor-based method of producing a generative musical composition is disclosed herein. The method includes the step of receiving a briefing narrative which describes a musical journey by referencing a plurality of emotional descriptions related to a plurality of musical sections. The generative musical composition is assembled with regard to the briefing narrative through the selection and concatenation of Form Atoms with tags that align with the emotional descriptions related to the musical sections. The Form Atoms, which have a compositional nature aligned with the emotional descriptions and self-contained constructional properties representative of the historical corpus of music, are then selected and substituted into the generative composition. The method further involves the step of generating the musical composition by mapping musical transitions between selectively chosen Form Atoms to reflect pre-established transitions between Form Atoms and groups of Form Atoms that have been identified as having similar tags but different constructional properties.
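The selection-and-concatenation step across these three abstracts can be sketched as tag matching: for each section of the brief, pick a Form Atom whose tags overlap the section's emotional descriptions and concatenate the chosen atoms' chord lists. The data shapes and first-match policy are assumptions, not the patented heuristics:

```python
def assemble_chord_scheme(narrative_brief, form_atoms):
    """For each section of the briefing narrative, select the first Form
    Atom whose tags overlap the section's emotional descriptions, then
    concatenate the chord lists of the selected atoms into one scheme."""
    chords = []
    for section in narrative_brief:
        wanted = set(section["emotions"])
        atom = next((fa for fa in form_atoms
                     if wanted & set(fa["tags"])), None)
        if atom is not None:
            chords.extend(atom["chords"])
    return chords
```

A brief with a "calm" section followed by a "tense" section would thus yield the calm atom's chords followed by the tense atom's chords.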