Patent classifications
G10H7/008
DISPLAY CONTROL METHOD, DISPLAY CONTROL DEVICE, AND PROGRAM
A display control method includes causing a display device to display a processing image in which a first image representing a note corresponding to a synthesized sound and a second image representing a sound effect are arranged in an area, in which a pitch axis and a time axis are set, in accordance with synthesis data that specify the synthesized sound generated by sound synthesis and the sound effect added to the synthesized sound.
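The arrangement the abstract describes can be sketched as a simple layout pass over the synthesis data: each note or effect entry maps to a rectangle whose horizontal extent follows the time axis and whose vertical position follows the pitch axis. The scale factors, field names, and the tuple layout below are illustrative assumptions, not taken from the patent.

```python
# Sketch: placing note and effect images in a pitch/time plane driven by
# synthesis data. Scale factors and record fields are hypothetical.
PIXELS_PER_SECOND = 100
PIXELS_PER_SEMITONE = 10

def layout(synthesis_data):
    """Return (kind, x, y, width) placements for note and effect images."""
    placements = []
    for item in synthesis_data:
        x = item["start"] * PIXELS_PER_SECOND       # time axis -> horizontal
        y = item["pitch"] * PIXELS_PER_SEMITONE     # pitch axis -> vertical
        w = item["duration"] * PIXELS_PER_SECOND
        placements.append((item["kind"], x, y, w))
    return placements

layout([{"kind": "note", "start": 1.0, "pitch": 60, "duration": 0.5},
        {"kind": "effect", "start": 1.0, "pitch": 60, "duration": 0.25}])
```

Because both image kinds are driven by the same synthesis data, the effect image can share the note's coordinates and differ only in its rendering.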
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE, AND PROGRAM
An information processing method is realized by a computer and includes generating a first characteristic transition which is a transition of acoustic characteristics, in accordance with an instruction from a user, generating a second characteristic transition which is a transition of acoustic characteristics of voice that is pronounced in a specific pronunciation style selected from a plurality of pronunciation styles, and generating a combined characteristic transition which is a transition of the acoustic characteristics of synthesized voice by combining the first characteristic transition and the second characteristic transition.
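The combining step above can be illustrated as a frame-by-frame blend of two sampled curves (for example, pitch contours). The fixed 50/50 blending weight is an illustrative assumption; the patent does not specify how the two transitions are combined.

```python
# Sketch: combining a user-specified characteristic transition with a
# pronunciation-style transition into one combined transition.
def combine_transitions(user_curve, style_curve, weight=0.5):
    """Blend two acoustic-characteristic transitions, frame by frame.

    Both curves are assumed to be sampled at the same frame rate; the
    blending weight here is hypothetical.
    """
    if len(user_curve) != len(style_curve):
        raise ValueError("transitions must cover the same frames")
    return [weight * u + (1.0 - weight) * s
            for u, s in zip(user_curve, style_curve)]

combined = combine_transitions([100.0, 102.0, 104.0], [98.0, 100.0, 110.0])
```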
METHOD OF DIGITALLY PERFORMING A MUSIC COMPOSITION USING VIRTUAL MUSICAL INSTRUMENTS HAVING PERFORMANCE LOGIC EXECUTING WITHIN A VIRTUAL MUSICAL INSTRUMENT (VMI) LIBRARY MANAGEMENT SYSTEM
An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g. a music composition or sound recording). The system can be used with next generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual music instrument libraries, and automated music composition and generation engines, systems and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual music instrument has its own set of music-theoretic state responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual music instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.
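One way to picture "state-responsive performance rules" is as predicate/action pairs attached to each virtual instrument and evaluated against the current music-theoretic state. All descriptor names, rules, and sample-set names below are hypothetical illustrations, not the patent's actual rule format.

```python
# Sketch: per-instrument performance rules triggered by music-theoretic
# state descriptors. Descriptors and actions are hypothetical.
class VirtualInstrument:
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # list of (predicate, action_name) pairs

    def perform(self, state):
        """Return the actions whose predicates match this state."""
        return [action for predicate, action in self.rules if predicate(state)]

violin = VirtualInstrument("violin", [
    (lambda s: s["dynamic"] == "forte", "hard-bow-samples"),
    (lambda s: s["articulation"] == "staccato", "short-note-samples"),
])
violin.perform({"dynamic": "forte", "articulation": "legato"})
```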
MUSIC GENERATOR
Techniques are disclosed relating to generating music content. In one embodiment, a method includes determining one or more musical attributes based on external data and generating music content based on the one or more musical attributes. Generating the music content may include selecting from stored sound loops or tracks and/or generating new tracks based on the musical attributes. Selected or generated sound loops or tracks may be layered to generate the music content. Musical attributes may be determined in some embodiments based on user input (e.g., indicating a desired energy level), environment information, and/or user behavior information. Artists may upload tracks, in some embodiments, and be compensated based on usage of their tracks in generating music content. In some embodiments, a method includes generating sound and/or light control information based on the musical attributes.
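The selection-and-layering step can be sketched as filtering a loop library by a musical attribute and summing the matching loops sample-by-sample. The loop library, the "energy" tag, and the integer sample values are illustrative assumptions.

```python
# Sketch: attribute-driven loop layering. Library contents and the
# energy attribute are hypothetical.
LOOP_LIBRARY = {
    "calm-pad":    {"energy": "low",  "samples": [1, 2, 1, 0]},
    "soft-keys":   {"energy": "low",  "samples": [0, 1, 2, 1]},
    "drive-drums": {"energy": "high", "samples": [5, 0, 5, 0]},
}

def generate(energy):
    """Layer (mix) every stored loop tagged with the requested energy level."""
    chosen = [loop["samples"] for loop in LOOP_LIBRARY.values()
              if loop["energy"] == energy]
    return [sum(frame) for frame in zip(*chosen)]

generate("low")  # mixes the two low-energy loops
```

In the same structure, usage of each selected loop could be logged per artist to drive the compensation scheme the abstract mentions.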
ELECTRONIC MUSICAL INSTRUMENT MAIN BODY DEVICE AND ELECTRONIC MUSICAL INSTRUMENT SYSTEM
This electronic musical instrument main body device comprises an information acquisition unit and a port assignment unit. The information acquisition unit acquires, from a playing operation device connected to one connection terminal, information related to the playing operation device. The port assignment unit assigns, to the playing operation device, a virtual input port of a type corresponding to the information related to the playing operation device and acquired by the information acquisition unit.
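The two units can be sketched as a lookup from the acquired device information to a virtual-port type, recorded per connection terminal. The device-type names and port-type names are hypothetical stand-ins for whatever the playing operation device actually reports.

```python
# Sketch: assigning a virtual input port of a type matching the device
# information acquired from the connection terminal. All names hypothetical.
VIRTUAL_PORT_TYPES = {
    "keyboard": "keyboard-port",
    "drum-pad": "percussion-port",
    "wind-controller": "breath-port",
}

class PortAssigner:
    def __init__(self):
        self.assignments = {}  # terminal id -> virtual port type

    def assign(self, terminal_id, device_info):
        """Assign a virtual input port matching the acquired device info."""
        port_type = VIRTUAL_PORT_TYPES.get(device_info["type"], "generic-port")
        self.assignments[terminal_id] = port_type
        return port_type

assigner = PortAssigner()
assigner.assign(1, {"type": "drum-pad"})
```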
Music context system audio track structure and method of real-time synchronization of musical content
A system is described that permits identified musical phrases or themes to be synchronized and linked into changing real-world events. The achieved synchronization includes a seamless musical transition, achieved using a timing offset (such as relative advancement of a significant musical onset) that is inserted to align with a pre-existing but identified music signature, beat or timebase, between potentially disparate pre-identified musical phrases having different emotive themes defined by their respective time signatures, intensities, keys, musical rhythms and/or musical phrasing. The system operates to augment an overall sensory experience of a user in the real world by dynamically changing, re-ordering or repeating and then playing audio themes within the context of what is occurring in the surrounding physical environment, e.g. during different phases of a cardio workout in a step class the music rate and intensity increase during sprint periods and decrease during recovery periods.
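The timing-offset idea can be illustrated as snapping a requested transition time onto an identified beat grid, optionally advancing it so that the incoming phrase's significant onset lands exactly on the beat. The beat interval, times, and the anticipation parameter are illustrative assumptions.

```python
# Sketch: aligning a phrase transition with an identified beat/timebase.
# Beat spacing and the anticipation offset are hypothetical.
def aligned_transition_time(requested_time, beat_interval, anticipation=0.0):
    """Snap a transition time to the beat grid, optionally advanced so a
    significant onset of the incoming phrase falls on the beat."""
    beats_elapsed = int(requested_time // beat_interval)
    snapped = beats_elapsed * beat_interval
    return max(0.0, snapped - anticipation)

aligned_transition_time(10.3, beat_interval=0.5)  # snaps 10.3 s back to 10.0 s
```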
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes an operation unit that receives a user performance, and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
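The chord path can be sketched as: expand the chord into pitches, obtain one waveform per pitch, then combine each waveform with the model's acoustic feature output. A plain gain product stands in for the actual synthesis, and the chord table, gain, and callback are all hypothetical.

```python
# Sketch: one synthesized voice per chord pitch, each combined with the
# acoustic features derived from the lyric. Synthesis is simplified to a
# scalar gain; all names here are illustrative.
CHORD_PITCHES = {"C": [60, 64, 67]}  # hypothetical chord table (MIDI notes)

def synthesize_chord(chord, acoustic_gain, waveform_for_pitch):
    """Return one synthesized waveform per pitch of the specified chord."""
    voices = []
    for pitch in CHORD_PITCHES[chord]:
        samples = waveform_for_pitch(pitch)          # waveform data per pitch
        voices.append([s * acoustic_gain for s in samples])
    return voices

voices = synthesize_chord("C", 0.5, lambda p: [float(p), 0.0])
```

Summing the returned voices sample-by-sample would yield the polyphonic output the abstract describes.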
System and method for grouping audio events in an electronic percussion device
An electronic percussion device has a plurality of triggerable actuators, in the form of any of pads, external trigger inputs or foot switches, that may be organized into synchronized groups, and has an operational mode in which triggering of any actuator within the synchronized group initiates playback of audio events or execution of control functions associated with other of the actuators within the synchronized group in one of multiple different synchronization orders, e.g. one at a time, all simultaneously, random or in a predefined or user-defined consecutive order.
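The synchronization orders named in the abstract (all simultaneously, one at a time in order, or random) can be sketched as a function that turns a group's audio events into a sequence of time slots. The mode names follow the abstract; the event payloads are hypothetical.

```python
import random

# Sketch: triggering any actuator in a synchronized group plays the
# group's events in one of several orders. Event names are hypothetical.
def group_playback(events, mode, rng=None):
    """Return the group's events arranged into playback time slots."""
    if mode == "simultaneous":
        return [tuple(events)]           # all events in a single time slot
    if mode == "consecutive":
        return [(e,) for e in events]    # one event per slot, defined order
    if mode == "random":
        shuffled = list(events)
        (rng or random).shuffle(shuffled)
        return [(e,) for e in shuffled]
    raise ValueError(f"unknown mode: {mode}")

group_playback(["kick", "snare", "clap"], "consecutive")
```

A user-defined consecutive order, also mentioned in the abstract, would just pass the events pre-sorted into the same "consecutive" path.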
ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM
An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.
MEDIA-CONTENT AUGMENTATION SYSTEM AND METHOD OF COMPOSING A MEDIA PRODUCT
A media-content augmentation system includes a processing system that receives input data in the form of temporally-varying events data. The processing system resolves the input into one or more categorized contextual themes, correlates the themes with metadata associated with at least one reference media file, and then splices or fades together selected parts of the media file, thus generating as an output a media product in which transitions between its contextual themes are aligned with selected temporal events in the input data. The temporally-varying events take the form of a beginning and an end in the case of a sustained feature, or a specific point in time for a hit point. A method aligns sections in digital media files with temporally-varying events data to compose a media product. The system augments a sensory experience of a user by dynamically changing and then playing selected media files within the context of the categorized themes input to the processing system.
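The theme-to-section correlation can be sketched as a lookup from each categorized theme to a tagged section of the reference media file, producing an edit list whose splice points fall on the event times. The section metadata and theme names are illustrative assumptions.

```python
# Sketch: composing an edit list whose theme transitions align with
# temporally-varying events. Sections and themes are hypothetical.
SECTIONS = {
    "uplifting": (0.0, 12.0),  # (start, end) within the reference file, seconds
    "tense":     (12.0, 20.0),
}

def compose(events):
    """Map (event_time, theme) pairs to an edit list of file sections."""
    edit_list = []
    for event_time, theme in events:
        edit_list.append({"at": event_time, "section": SECTIONS[theme]})
    return edit_list

compose([(0.0, "uplifting"), (30.0, "tense")])  # splice point at the hit point
```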