Patent classifications
G10H1/0033
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
A mechanism is provided that enables the real-time provision of content according to a user's body movement. An information processing apparatus includes a reproduction control unit (43) that controls reproduction of content based on a predicted timing of a predetermined state in the user's traveling movement, the prediction being made from sensor information regarding that movement.
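The abstract's prediction-then-playback idea can be sketched as follows; the function names and the simple average-interval predictor are assumptions, not the patent's actual method:

```python
def predict_next_footfall(timestamps):
    """Predict the next footfall time from recent footfall timestamps
    (a stand-in for sensor-based prediction of a gait state's timing)."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg_interval = sum(intervals) / len(intervals)
    return timestamps[-1] + avg_interval

def schedule_playback(timestamps, output_latency=0.05):
    """Start content slightly early so audio lands on the predicted step."""
    return predict_next_footfall(timestamps) - output_latency
```

With footfalls at 0.0, 0.5, and 1.0 seconds, the next step is predicted at 1.5 s and playback is scheduled at 1.45 s to absorb the output latency.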
MANUAL MUSIC GENERATOR
A manual music generator designed to help musicians express and capture their musical inspiration while on the go. Its quick two-step composition process empowers musicians of all levels to accurately record their inspiration before the idea leaves their imagination. The apparatus breaks the musical expression process into two steps by separating pitch from rhythm, enabling musicians to easily express and record the musical riffs in their heads.
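The pitch/rhythm separation could be sketched as two independently entered sequences combined into note events; this is an illustrative reading of the two-step process, not the device's actual implementation:

```python
def compose(pitches, durations):
    """Step 1 enters pitches, step 2 enters rhythm; combining them
    yields (pitch, duration) note events."""
    if len(pitches) != len(durations):
        raise ValueError("pitch and rhythm sequences must align")
    return list(zip(pitches, durations))
```

For example, pitches ["C4", "E4", "G4"] entered first, then rhythm [0.5, 0.25, 0.25], produce a three-note riff.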
NON-TRANSITORY COMPUTER READABLE MEDIUM STORING ELECTRONIC MUSICAL INSTRUMENT PROGRAM, METHOD FOR MUSICAL SOUND GENERATION PROCESS AND ELECTRONIC MUSICAL INSTRUMENT
An electronic musical instrument, method for a musical sound generation process and a non-transitory computer readable medium that stores an electronic musical instrument program are provided. The program causes a computer provided with a storage part to execute a musical sound generation process using sound data. The program causes the computer to execute:
acquiring, from the storage part, first sound data and first user identification information indicating a user who has acquired the first sound data from a distribution server; acquiring second user identification information indicating a user who causes the musical sound generation process to be executed using the first sound data; determining whether or not the first user identification information matches the second user identification information; and inhibiting execution of the musical sound generation process using the first sound data in a case when the first user identification information does not match the second user identification information.
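The matching-and-inhibiting logic above reduces to a simple ownership check; the `SoundStore` class and its method names are hypothetical scaffolding for the sketch:

```python
class SoundStore:
    """Holds sound data together with the user ID that acquired it
    from the distribution server."""
    def __init__(self):
        self._data = {}  # sound_id -> (sound_data, acquiring_user_id)

    def save(self, sound_id, sound_data, user_id):
        self._data[sound_id] = (sound_data, user_id)

    def generate(self, sound_id, requesting_user_id):
        """Return sound data only when the requesting user matches the
        acquiring user; otherwise inhibit generation."""
        sound_data, owner = self._data[sound_id]
        if owner != requesting_user_id:
            return None  # first and second user IDs do not match
        return sound_data
```

A user who did not acquire the sound data gets nothing back, matching the abstract's inhibition case.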
LEARNING PROGRESSION FOR INTELLIGENCE BASED MUSIC GENERATION AND CREATION
An artificial intelligence (AI) method includes generating a first musical interaction behavioral model. The first musical interaction behavioral model causes an interactive electronic device to perform a first set of musical operations and a first set of motional operations. The AI method further includes receiving user inputs provided in response to the performance of the first set of musical operations and the first set of motional operations, and determining a user learning progression level based on those inputs. In response to determining that the user learning progression level is above a threshold, the AI method includes generating a second musical interaction behavioral model. The second musical interaction behavioral model causes the interactive electronic device to perform a second set of musical operations and a second set of motional operations. The AI method further includes performing the second set of musical operations and the second set of motional operations.
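The threshold-gated progression between behavioral models can be sketched as below; averaging the user-input scores is an assumed scoring rule, not the patent's:

```python
def select_model(user_scores, threshold):
    """Advance from the first to the second behavioral model once the
    averaged user-input score exceeds the threshold."""
    level = sum(user_scores) / len(user_scores)
    return "model_2" if level > threshold else "model_1"
```

With scores [0.9, 0.8] and a threshold of 0.7, the second model is selected; a single score of 0.3 keeps the first.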
Systems and methods for music simulation via motion sensing
The present disclosure relates to systems, methods, and devices for music simulation. The methods may include determining one or more simulation actions based on data acquired by at least one sensor. The methods may further include determining, based on at least one of the one or more simulation actions and a mapping relationship between simulation actions and corresponding musical instruments, a simulation musical instrument that matches the one or more simulation actions. The methods may further include determining, based on the one or more simulation actions, one or more first features associated with the simulation musical instrument. The methods may further include playing music based on the one or more first features.
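The action-to-instrument mapping could be as simple as a lookup table with a majority vote over the detected actions; the table contents and the voting rule are illustrative assumptions:

```python
from collections import Counter

# Hypothetical mapping relationship between simulation actions
# and corresponding musical instruments.
ACTION_TO_INSTRUMENT = {
    "strum": "guitar",
    "strike": "drum",
    "bow": "violin",
}

def match_instrument(actions):
    """Pick the instrument most often implied by the observed
    simulation actions (majority vote over the mapping)."""
    votes = Counter(ACTION_TO_INSTRUMENT[a] for a in actions
                    if a in ACTION_TO_INSTRUMENT)
    if not votes:
        return None
    return votes.most_common(1)[0][0]
```

Two strikes and one strum would select the drum; features such as strike velocity would then drive playback.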
APPARATUS, SYSTEM, AND METHOD FOR RECORDING AND RENDERING MULTIMEDIA
A system may comprise a looper. The looper may further comprise an arrangement module. The arrangement module may arrange audio data into at least one of the following: a song comprised of at least one song part, at least one track within the at least one song part, and at least one layer within the at least one track. The system may also comprise an output module that may be configured to enable playback of the arranged audio data. The system may also comprise an input module that may be configured to record subsequent audio data during playback of the arranged audio data from the output module. The system may also comprise a processing module configured to modify playback of the arranged audio data, and an external device configured to control operation of at least one module associated with the looper.
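The song → part → track → layer hierarchy and the overdub-during-playback behavior can be sketched with plain data classes; the type and function names are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    audio: bytes

@dataclass
class Track:
    layers: list = field(default_factory=list)

@dataclass
class SongPart:
    tracks: list = field(default_factory=list)

@dataclass
class Song:
    parts: list = field(default_factory=list)

def overdub(song, part_idx, track_idx, audio):
    """Record subsequent audio as a new layer on an existing track,
    as the input module would during playback."""
    song.parts[part_idx].tracks[track_idx].layers.append(Layer(audio))
```

Each overdub appends a layer, so earlier material keeps playing back while new material stacks on top.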
Transitions between media content items
A system for playing media content items determines transitions between pairs of media content items by finding desirable locations at which the transitions occur. The system uses a plurality of track features of media content items and determines those features at each transition point candidate, such as a beat position, of each item. The system then determines the similarity of the track features between the transition point candidates of a first media content item and those of a second media content item to be played after the first. The transition points or portions of the first and second items are selected from their respective candidates based on the similarity results.
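Selecting the most similar candidate pair can be sketched as an exhaustive cosine-similarity search over per-beat feature vectors; cosine similarity is an assumed metric, and the feature vectors are placeholders:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def best_transition(candidates_a, candidates_b):
    """Return the (beat index in track A, beat index in track B) pair
    whose track-feature vectors are most similar."""
    return max(
        ((i, j) for i in range(len(candidates_a))
                for j in range(len(candidates_b))),
        key=lambda ij: cosine(candidates_a[ij[0]], candidates_b[ij[1]]),
    )
```

Real systems would restrict candidates to, e.g., the outro of track A and the intro of track B rather than searching all beats.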
SYSTEM AND METHOD FOR DISTRIBUTED MUSICIAN SYNCHRONIZED PERFORMANCES
A computerized method is provided that enables an interactive multimedia session among a group of geographically distributed musicians. Song arrangements for the session are specified as a sequence of song parts to be played or sung by each participating musician. Each performance is automatically detected on an instrument track, along with its audio and video, for any song part, and its timing is automatically captured by the system. The captured performances are transmitted to the other musicians in the same session to produce the effect of playing together live. A computer-implemented system and a computer program product stored on a non-transitory computer-readable storage medium for practice of the method are also provided.
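The timing-capture step can be sketched as stamping each captured performance with its onset on a shared session clock so remote playback stays aligned; the `Session` class is a hypothetical simplification:

```python
class Session:
    """Capture each musician's performance with its onset time on the
    shared song clock, so transmitted performances replay in order."""
    def __init__(self):
        self.performances = []

    def capture(self, musician, song_part, audio, onset_time):
        self.performances.append({
            "musician": musician, "part": song_part,
            "audio": audio, "onset": onset_time,
        })

    def playback_order(self):
        """Replay captured performances sorted by their captured onset."""
        return sorted(self.performances, key=lambda p: p["onset"])
```

Performances captured out of arrival order still replay in song-clock order, which is what gives the live-ensemble effect.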
ROBOTIC SYSTEM FOR CONTROLLING AUDIO SYSTEMS
A robotic system is provided, which automatically changes settings on an audio system. The audio system (e.g., an instrument amplifier, effect processor, etc.) typically includes one or more controls that impact the operation of the audio system. Correspondingly, the robotic system includes a device interface coupled to a control sequencer. The device interface adapts to one or more controls of the audio system that are to be changed. In this regard, the device interface includes one or more control couplers. Each control coupler is adapted to a corresponding control of the audio system to be changed. The control sequencer provides a control sequence to the device interface that causes the control coupler(s) to vary the settings on the audio system. In practical applications, a combination of sequence values of the control sequence can represent a sufficiently high number of samples to determine a responsive behavior of the audio system.
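The control sequencer's exhaustive sampling of control combinations can be sketched as a Cartesian product over each control's settings; the dictionary-based interface is an assumption, not the patent's design:

```python
import itertools

def control_sequence(controls):
    """Yield every combination of control settings, one per step of the
    sequence the control sequencer sends to the device interface."""
    names = list(controls)
    for values in itertools.product(*(controls[n] for n in names)):
        yield dict(zip(names, values))
```

Sweeping, say, a two-position gain knob and a two-position tone knob yields four sequence steps, enough to sample the audio system's response at every setting.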