Patent classifications
G10H2220/351
Techniques for learning effective musical features for generative and retrieval-based applications
A method includes receiving a non-linguistic input associated with an input musical content. The method also includes, using a model that embeds multiple musical features describing different musical content and relationships between the different musical content in a latent space, identifying one or more embeddings based on the input musical content. The method further includes at least one of: (i) identifying stored musical content based on the one or more identified embeddings or (ii) generating derived musical content based on the one or more identified embeddings. In addition, the method includes presenting at least one of: the stored musical content or the derived musical content. The model is generated by training a machine learning system having one or more first neural network components and one or more second neural network components such that embeddings of the musical features in the latent space have a predefined distribution.
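The retrieval branch of this method can be illustrated with a minimal sketch. All names here (`embed`, `retrieve`, the projection matrix) are hypothetical stand-ins: a real system would use the trained neural network encoder rather than a random linear projection, but the nearest-embedding lookup in the latent space works the same way.

```python
import random

def embed(features, projection):
    """Toy stand-in for the trained encoder: linearly project the
    musical feature vector into the latent space and L2-normalize,
    so all embeddings lie on the unit sphere (a fixed distribution)."""
    z = [sum(f * w for f, w in zip(features, row)) for row in projection]
    norm = sum(v * v for v in z) ** 0.5
    return [v / norm for v in z]

def retrieve(query, library):
    """Return the index of the stored musical content whose embedding
    has the highest cosine similarity to the query embedding."""
    sims = [sum(q * e for q, e in zip(query, emb)) for emb in library]
    return sims.index(max(sims))

# Hypothetical setup: 8-dim musical features, 4-dim latent space.
rng = random.Random(0)
projection = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(4)]
library = [embed([rng.gauss(0, 1) for _ in range(8)], projection)
           for _ in range(5)]
```

Querying with an embedding identical to a stored item returns that item, since cosine similarity with itself is maximal.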
Apparatus and Methods for Cellular Compositions
Systems, methods, and apparatus for cellular compositions, that is, for generating music in real time using cells, are provided. The cellular compositions may depend on user data.
KEYBOARD DEVICE AND SOUND GENERATION CONTROL METHOD
A keyboard device includes a plurality of keys, and a keyboard driver configured to drive at least a part of the plurality of keys. The keyboard device is configured such that, in accordance with a determination of whether a key corresponding to a pitch of event data is drivable by the keyboard driver, sound is generated based on either a first sound-generating process, in which the keyboard driver drives the key corresponding to the pitch, or a second sound-generating process that is different from the first sound-generating process.
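The dispatch this abstract describes can be sketched as follows. The drivable range and process labels are assumptions for illustration only; the patent does not specify how drivability is determined or what the second process is (a tone generator is one plausible reading).

```python
# Hypothetical: MIDI pitches whose keys the keyboard driver can actuate.
DRIVABLE_RANGE = range(36, 97)

def handle_event(pitch):
    """Dispatch between the two sound-generating processes based on
    whether the keyboard driver can drive the key for this pitch."""
    if pitch in DRIVABLE_RANGE:
        return f"process1: drive key {pitch}"        # sound via the driven key
    return f"process2: synthesize pitch {pitch}"     # e.g., electronic tone generator
```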
Sound regulation apparatus, method or program
[Technical problem] To provide circulatory sound regulation in which environmental sound becomes an input sound; the input sound becomes a converted sound with an arbitrarily regulated frequency component, an arbitrarily regulated amplitude, or both; the converted sound becomes an output sound; the output sound synthesized with environmental sound becomes another input sound; and that input sound becomes another converted sound. [Solution] An apparatus comprising: input means that receives environmental sound from an arbitrary environment as an input sound; conversion means that converts the input sound into a converted sound containing an arbitrarily regulated frequency component, including a frequency component approximating a principal oscillator, an arbitrarily regulated amplitude, or both; and output means that transmits the converted sound to the environment as an output sound; whereby the input means again receives, as an input sound, a synthetic sound of the output sound and environmental sound, and the conversion means converts this input sound further into another converted sound.
INFORMATION PROCESSING APPARATUS
The present disclosure relates to an information processing apparatus capable of accurately synchronizing and handling sensing data in a mixture of forms, and usable for assisting in learning a performance.
The present disclosure provides the information processing apparatus including a conversion section that converts a plurality of pieces of sensing data in different forms obtained from a plurality of sensors each sensing a state related to a performance by a motion of a user, an information processing section that processes the sensing data converted by the conversion section, and an information output section that outputs feedback information to the user on the basis of a processing result of the information processing section. The conversion section includes an analog-digital signal conversion section that converts the sensing data in an analog form from the sensors into sensing data in a digital form and outputs the sensing data in the digital form to the information processing section, and a digital-analog signal conversion section that converts the sensing data in the digital form from the sensors into sensing data in the analog form and outputs the sensing data in the analog form to the analog-digital signal conversion section.
LEARNING PROGRESSION FOR INTELLIGENCE BASED MUSIC GENERATION AND CREATION
An artificial intelligence (AI) method includes generating a first musical interaction behavioral model. The first musical interaction behavioral model causes an interactive electronic device to perform a first set of musical operations and a first set of motional operations. The AI method further includes receiving user inputs received in response to the performance of the first set of musical operations and the first set of motional operations and determining a user learning progression level based on the user inputs. In response to determining that the user learning progression level is above a threshold, the AI method includes generating a second musical interaction behavioral model. The second musical interaction behavioral model causes the interactive electronic device to perform a second set of musical operations and a second set of motional operations. The AI method further includes performing the second set of musical operations and the second set of motional operations.
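The threshold test at the core of this progression can be sketched minimally. The averaging used for scoring and the model names are hypothetical; the patent leaves the scoring of user inputs unspecified.

```python
def update_behavioral_model(user_inputs, threshold, model_1, model_2):
    """Score the user inputs (toy scoring: mean of numeric responses)
    and switch to the second musical interaction behavioral model once
    the inferred learning progression level exceeds the threshold."""
    level = sum(user_inputs) / len(user_inputs)
    return model_2 if level > threshold else model_1
```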
System and method for creating and outputting music
The subject matter discloses a system implemented in a mobile electronic device, the system comprising a processing system of the device and a memory that contains instructions comprising: detecting ambient sounds in the vicinity of the mobile electronic device; determining at least one property selected from a group consisting of a relative direction and a relative distance of the ambient sounds relative to the mobile electronic device; analyzing the detected ambient sounds; and outputting audio interactive music data based on the analysis of the ambient sounds and based on at least one of the relative direction and the relative distance of the ambient sounds relative to the mobile electronic device; wherein said outputting is performed on the mobile electronic device.
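A minimal sketch of the direction/distance-conditioned output follows. Estimating direction from an inter-microphone level difference and the `1/(1+d)` distance rolloff are both assumptions for illustration; the abstract does not state how these properties are determined or used.

```python
def interactive_output(level_left, level_right, distance_m):
    """Toy sketch: infer relative direction from the level difference
    between two microphones, and attenuate the musical response with
    the estimated distance of the ambient sound source."""
    direction = "left" if level_left > level_right else "right"
    volume = 1.0 / (1.0 + distance_m)   # hypothetical distance rolloff
    return direction, round(volume, 3)
```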
Keyboard device and sound generation control method
A keyboard device includes a plurality of keys, and a keyboard driver configured to drive at least a part of the plurality of keys. The keyboard device is configured such that sound is generated based on a first sound-generating process in which the keyboard driver is configured to drive a key corresponding to a first pitch, upon receiving performance data including the first pitch. The keyboard device is configured such that sound is generated based on a second sound-generating process that is different from the first sound-generating process, upon receiving performance data including a second pitch that is different from the first pitch.
MUSIC GENERATOR
Techniques are disclosed relating to determining composition rules, based on existing music content, to automatically generate new music content. In some embodiments, a computer system accesses a set of music content and generates a set of composition rules based on analyzing combinations of multiple loops in the set of music content. In some embodiments, the system generates new music content by selecting loops from a set of loops and combining selected ones of the loops such that multiple ones of the loops overlap in time. In some embodiments, the selecting and combining loops is performed based on the set of composition rules and attributes of loops in the set of loops.
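The rule-learning and loop-selection steps can be sketched as follows. Treating composition rules as pairwise co-occurrence of loops is a simplifying assumption (the patent's rules may also involve timing and loop attributes), and the greedy selection is one possible strategy.

```python
import itertools

def learn_rules(example_tracks):
    """Derive pairwise compatibility rules from which loops overlap
    (co-occur) in the existing music content."""
    rules = set()
    for track in example_tracks:
        for pair in itertools.combinations(sorted(track), 2):
            rules.add(pair)
    return rules

def compose(loop_pool, rules, n):
    """Greedily select n loops such that every chosen pair satisfies a
    learned rule; the selected loops would be played overlapping in time."""
    chosen = []
    for loop in loop_pool:
        if all(tuple(sorted((loop, c))) in rules for c in chosen):
            chosen.append(loop)
        if len(chosen) == n:
            break
    return chosen

# Hypothetical corpus: each track is the set of loops heard together.
example_tracks = [{"drums", "bass"}, {"drums", "pad"}, {"bass", "pad"}]
rules = learn_rules(example_tracks)
```

Here `compose(["drums", "bass", "pad", "vox"], rules, 2)` skips "vox" because it never co-occurred with anything in the corpus.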
METHOD FOR GENERATING MUSIC WITH BIOFEEDBACK ADAPTATION
A method and a system for generating music for an electronic device are provided. The system is configured to: generate a first portion of generative music by combining a plurality of audio stems based on a determined first current state vector; measure, with a biosensor, EEG data while the first portion of generative music is played through speakers to the user; in response to determining that the user's current state should be modified to achieve a desired goal state, determine a second set of music parameters for achieving that goal state, e.g., a desired level of focus of the user; and generate, by the processor, and play, at the speakers, a second portion of generative music characterized by the second set of music parameters to achieve the desired goal state of the user.
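The closed loop this abstract describes can be sketched minimally. The proportional parameter update and the single `tempo_norm` parameter are assumptions for illustration; a real system would derive the second parameter set from the measured EEG state in some device-specific way.

```python
def biofeedback_session(read_focus, goal_focus, params, steps=5):
    """Closed loop: while music with the current parameters plays,
    measure the user's focus from EEG; if it differs from the goal
    state, derive a new parameter set nudging the music toward it."""
    for _ in range(steps):
        focus = read_focus(params)     # EEG-derived focus during playback
        if abs(goal_focus - focus) < 0.05:
            break
        params = {k: v + 0.5 * (goal_focus - focus)
                  for k, v in params.items()}
    return params

# Hypothetical simulated listener whose focus simply tracks tempo.
simulated_listener = lambda p: p["tempo_norm"]
result = biofeedback_session(simulated_listener, 0.8, {"tempo_norm": 0.2})
```

With the simulated listener, the loop converges to within 0.05 of the goal focus in five iterations.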