Patent classifications
G10H2220/201
AUTOMATIC PERFORMANCE SYSTEM, AUTOMATIC PERFORMANCE METHOD, AND SIGN ACTION LEARNING METHOD
An automatic performance system includes a sign detector configured to detect a sign action of a performer performing a musical piece, a performance analyzer configured to sequentially estimate a performance position in the musical piece by analyzing, in parallel with the performance, an acoustic signal representing the performed sound, and a performance controller configured to control an automatic performance device to carry out an automatic performance of the musical piece so that the automatic performance is synchronized with the sign action detected by the sign detector and with the progress of the performance position estimated by the performance analyzer.
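The controller described above must keep machine playback in step with an estimated live position. A minimal sketch of that scheduling idea, assuming beats as the score unit and a current tempo estimate (the function name and the linear beat-to-time mapping are illustrative, not taken from the patent):

```python
def next_playback_time(score_pos_beats, est_tempo_bpm, event_beat):
    """Seconds until an automatic-performance event at `event_beat`
    should sound, given the estimated live position and tempo.
    Events whose beat has already passed are scheduled immediately."""
    beats_remaining = event_beat - score_pos_beats
    return max(0.0, beats_remaining * 60.0 / est_tempo_bpm)
```

For example, with the performer estimated at beat 10 of a 120 BPM performance, an event at beat 12 would be scheduled one second ahead.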
Learning progression for intelligence-based music generation and creation
An artificial intelligence (AI) method includes generating a first musical interaction behavioral model. The first musical interaction behavioral model causes an interactive electronic device to perform a first set of musical operations and a first set of motional operations. The AI method further includes receiving user inputs provided in response to the performance of the first set of musical operations and the first set of motional operations and determining a user learning progression level based on the user inputs. In response to determining that the user learning progression level is above a threshold, the AI method includes generating a second musical interaction behavioral model. The second musical interaction behavioral model causes the interactive electronic device to perform a second set of musical operations and a second set of motional operations. The AI method further includes performing the second set of musical operations and the second set of motional operations.
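The threshold step above can be sketched in a few lines. This is an illustrative assumption of how a progression level might be derived (a simple mean of per-interaction scores); the function and model names are hypothetical:

```python
def select_behavioral_model(user_scores, threshold=0.7):
    """Pick the interaction model from a learning-progression level
    estimated from user inputs (here: the mean of 0..1 scores)."""
    level = sum(user_scores) / len(user_scores)
    # Above the threshold, switch to the second behavioral model.
    return "model_2" if level > threshold else "model_1"
```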
EMULATING A VIRTUAL INSTRUMENT FROM A CONTINUOUS MOVEMENT VIA A MIDI PROTOCOL
The present invention relates to methods and systems for creating a sound effect from a continuous movement, in particular by detecting the continuous movement through a force sensor in a device. A method is shown for creating a sound effect from a continuous movement. The method comprises a step of providing a first device, whereby the device is adapted to detect continuous movement and a no-movement state. The method further comprises the step of defining at least one first parameter of movement, in particular a first axis of movement of said continuous movement. A further step comprises assigning at least one first MIDI channel to the first axis of movement. A base-line value is defined for the no-movement state, and along that first axis of movement a range of values relative to said base-line value is defined. This range of values relative to said base-line value is reflective of a continuous movement along that first axis of movement. A sound effect is then output relative to the detected continuous movement. One aspect or additional embodiment of the present invention comprises the step of defining at least one first parameter of movement, whereby said first parameter of movement is an angular range in one axis X, Y, or Z of an orientation in space of the first device (99.1) adapted to detect continuous movement (A.1) and a no-movement state.
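The baseline-relative mapping described above can be sketched as a conversion from a sensor reading to a 7-bit MIDI controller value. The linear mapping and the function name are illustrative assumptions; MIDI Control Change values do range 0-127:

```python
def axis_to_cc(value, baseline, value_range):
    """Map a sensor reading along one axis to a 0-127 MIDI CC value,
    relative to the no-movement baseline, clamped to the MIDI range."""
    norm = (value - baseline) / value_range   # nominally -1.0 .. 1.0
    cc = round(63.5 + norm * 63.5)            # center of range at rest
    return min(127, max(0, cc))
```

A reading at the baseline yields the midpoint (64); a full positive excursion yields 127, and out-of-range readings are clamped.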
SOUND PRODUCTION CONTROL APPARATUS, SOUND PRODUCTION CONTROL METHOD, AND STORAGE MEDIUM
A sound production control apparatus in which a sound production mode is controlled on the basis of a player's motion even during a period with no playing control operation. An information obtaining unit 30 obtains detection information by detecting the player's motion. A sound processing unit 36 produces sound on the basis of the detection information obtained in response to motion that generates a sound trigger, and controls the sound production mode on the basis of the detection information obtained in response to motion that generates no sound trigger.
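The two-way routing above (trigger motions produce sound, other motions steer the sound mode) can be sketched as follows; the sample fields, threshold scheme, and function name are illustrative assumptions, not the patent's definitions:

```python
def process_motion(sample, trigger_threshold=0.8):
    """Route a motion sample either to sound production (if it crosses
    a trigger threshold) or to sound-mode control (otherwise)."""
    if abs(sample["accel"]) >= trigger_threshold:
        return ("produce_sound", abs(sample["accel"]))   # strike-like motion
    return ("control_mode", sample["tilt"])              # shaping motion
```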
Adaptive Music Playback System
An adaptive music playback system is disclosed. The system includes a composition system that receives information corresponding to user activity levels. The composition system determines target musical criteria corresponding to the user activity levels and modifies the composition of a song in response to changes in user activity.
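One possible "target musical criterion" derived from an activity level is tempo. A minimal sketch, assuming a normalized 0..1 activity level and a linear mapping (both assumptions; the patent does not specify them):

```python
def target_tempo(activity_level, base_bpm=90, span_bpm=60):
    """Map a normalized activity level (0..1) to a target tempo in BPM.
    Out-of-range inputs are clamped before mapping."""
    level = min(1.0, max(0.0, activity_level))
    return base_bpm + level * span_bpm
```

At rest this yields 90 BPM, ramping linearly to 150 BPM at full activity.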
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
There is provided an information processing device, an information processing method, and a program that can effectively assist in learning a performance. The information processing device includes a sensing data obtaining section configured to obtain sensing data regarding at least one of a motion pattern, a motion speed, a motion accuracy, and a motion amount of a motion element of a user practicing a performance performed by movement of at least a part of the user's body, as well as the state of a result produced by the performance; an analyzing section configured to analyze the obtained sensing data and estimate information regarding the user's practice on the basis of a result of the analysis; and an output section configured to output a result of the estimation to the user.
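The analyze-and-estimate step might, for instance, turn a stream of per-trial motion-accuracy readings into practice guidance. The moving-average heuristic, thresholds, and labels below are illustrative assumptions:

```python
def practice_feedback(accuracy_samples):
    """Estimate a practice recommendation from sensed motion accuracy
    (one 0..1 value per trial), using the last five trials."""
    recent = accuracy_samples[-5:]
    accuracy = sum(recent) / len(recent)
    if accuracy >= 0.9:
        return "increase_difficulty"
    if accuracy < 0.5:
        return "slow_down"
    return "keep_practicing"
```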
Integrated Musical Instrument Systems
A system suitable for use as a musical instrument system is provided. The system includes at least one sensor. The system also includes at least one control surface configured to interface with the at least one sensor. Further, the system includes at least one controller configured to interface with the at least one sensor. Additionally, the system includes at least one program module configured to interface with the at least one sensor. The system includes an enclosure. The at least one sensor and the at least one control surface are positionable on the enclosure. The system also includes at least one data processor configured to interface with the at least one sensor, the at least one control surface, and the at least one program module, arranged to function as a musical instrument system.
MOTION FEEDBACK DEVICE
A motion feedback device includes a housing, a speaker and a control module carried by said housing. The control module includes a controller and a motion sensor. The controller is configured to include a mapping adapted for the creation of sound in response to any user-produced movement of the housing as detected by the motion sensor. This allows for continuous original sound generation or composition based upon the user-produced movements of the housing.
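Such a continuous movement-to-sound mapping could look like the sketch below, turning a 3-axis motion reading into a frequency and amplitude pair. The specific mapping (motion magnitude to pitch and loudness) is an illustrative assumption:

```python
def motion_to_sound(ax, ay, az, base_freq=220.0):
    """Map a 3-axis motion reading to a (frequency_hz, amplitude) pair:
    faster motion raises pitch and loudness (amplitude capped at 1.0)."""
    magnitude = (ax * ax + ay * ay + az * az) ** 0.5
    freq = base_freq * (1.0 + magnitude)
    amp = min(1.0, magnitude)
    return freq, amp
```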
Gesture-controlled virtual reality systems and methods of controlling the same
Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
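Determining an intended action from fused gesture and position data might reduce to a rule like the one below. The gesture labels, action names, and reach radius are hypothetical placeholders for whatever the sensor pipeline produces:

```python
def intended_action(gesture, distance_to_object, reach=0.15):
    """Fuse gesture data and position data (distance to a virtual
    object, in meters) into an intended action on that object."""
    if distance_to_object <= reach and gesture == "grip":
        return "grab_object"
    if distance_to_object <= reach and gesture == "swipe":
        return "push_object"
    return "no_action"   # gesture out of reach, or unrecognized
```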
Method and device for controlling acoustic feedback during a physical exercise
Techniques for providing acoustic feedback are disclosed. Several audio clips (21-23) have a synchronized beat. A sensor signal (16) received from a sensor has a sensor signal range divided by first and second thresholds (11, 12) into at least three sensor signal sub-ranges (13-15). An audio signal is output in response to the received sensor signal (16), the output audio signal comprising one or more of the audio clips. If the received sensor signal (16) exceeds the first threshold (11), at least one (21) of the one or more audio clips is discontinued and/or at least one additional audio clip (22) of the audio clips is initiated in synchronization with the one or more audio clips (21). If the received sensor signal (16) falls below the second threshold (12), at least one (21) of the one or more audio clips is discontinued and/or at least one additional audio clip (23) of the audio clips is initiated in synchronization with the one or more audio clips (21).
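The two-threshold, three-sub-range logic above can be sketched as a function from the sensor signal to the set of beat-synchronized clips that should be playing. Clip names and threshold values are illustrative assumptions:

```python
def clip_set(signal, hi=0.7, lo=0.3):
    """Choose which beat-synchronized audio clips play for a sensor
    signal divided by two thresholds into three sub-ranges."""
    if signal > hi:
        return {"base", "intense"}   # above first threshold: add layer
    if signal < lo:
        return {"base", "calm"}      # below second threshold: swap layer
    return {"base"}                  # middle sub-range: base clip only
```

Clips entering or leaving the set would be started or stopped on a beat boundary to preserve the synchronized beat the abstract describes.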