Patent classifications
G10H2220/201
Information processing device for data representing motion
An information processing method includes generating a change parameter relating to a process in which a temporal relationship between a first motion and a second motion changes, by inputting, into a trained model, first time-series data that represent the content of the first motion and second time-series data that represent the content of the second motion performed in parallel with the first motion.
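As a rough illustration of the idea, the sketch below (plain NumPy, with a stand-in linear layer where the patent's trained model would go; the lag summary features are assumptions made for the example) maps two parallel motion time series to a single change parameter:

```python
import numpy as np

def trained_model(features: np.ndarray) -> float:
    # Placeholder for the trained model; a fixed averaging layer stands in
    # for whatever network the patent actually describes.
    weights = np.full(features.shape[0], 1.0 / features.shape[0])
    return float(weights @ features)

def change_parameter(first_motion: np.ndarray, second_motion: np.ndarray) -> float:
    """Estimate how the timing relationship between two parallel motions drifts.

    Both inputs are 1-D time series sampled at the same rate.
    """
    # Per-frame lag proxy: difference between the two motion signals.
    lag_proxy = first_motion - second_motion
    # Summarise the trajectory of that relationship and feed it to the model.
    features = np.array([lag_proxy.mean(), lag_proxy.std(), lag_proxy[-1] - lag_proxy[0]])
    return trained_model(features)

t = np.linspace(0, 2 * np.pi, 200)
leader = np.sin(t)           # first motion
follower = np.sin(t - 0.3)   # second motion, slightly behind
print(change_parameter(leader, follower))
```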
METHOD AND DEVICE FOR CONTROLLING ACOUSTIC FEEDBACK DURING A PHYSICAL EXERCISE
Techniques for providing acoustic feedback are disclosed. Several audio clips have a synchronized beat. A sensor signal received from a sensor has a sensor signal range divided by first and second thresholds into at least three sensor signal sub-ranges. An audio signal is output in response to the received sensor signal, the output audio signal comprising one or more of the audio clips. If the received sensor signal exceeds the first threshold, at least one of the one or more audio clips is discontinued and/or at least one additional audio clip of the audio clips is initiated in synchronization with the one or more audio clips. If the received sensor signal falls below the second threshold, at least one of the one or more audio clips is discontinued and/or at least one additional audio clip of the audio clips is initiated in synchronization with the one or more audio clips.
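A minimal sketch of that threshold logic, assuming a hypothetical `FeedbackMixer` with a base loop and one extra layer; actual clip playback and beat alignment are stubbed out:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackMixer:
    """Threshold-based clip control; clips are represented only by name."""
    upper: float                 # first threshold
    lower: float                 # second threshold
    active_clips: set = field(default_factory=set)

    def on_sensor_value(self, value: float) -> None:
        if value > self.upper:
            # Exceeding the upper threshold: add a more intense layer,
            # started on the shared beat so it stays in sync.
            self.active_clips.add("intense_layer")
        elif value < self.lower:
            # Falling below the lower threshold: drop the intense layer.
            self.active_clips.discard("intense_layer")
        # Between the thresholds the current mix is left unchanged.

mixer = FeedbackMixer(upper=0.8, lower=0.3, active_clips={"base_loop"})
for reading in (0.5, 0.9, 0.6, 0.2):
    mixer.on_sensor_value(reading)
    print(reading, sorted(mixer.active_clips))
```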
Systems and methods for music simulation via motion sensing
The present disclosure relates to systems, methods, and devices for music simulation. The methods may include determining one or more simulation actions based on data, acquired by at least one sensor, that is associated with the one or more simulation actions. The methods may further include determining, based on at least one of the one or more simulation actions and a mapping relationship between simulation actions and corresponding musical instruments, a simulation musical instrument that matches the one or more simulation actions. The methods may further include determining, based on the one or more simulation actions, one or more first features associated with the simulation musical instrument. The methods may further include playing music based on the one or more first features.
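The action-to-instrument mapping could look roughly like the following sketch; the gesture classifier, the mapping table, and the velocity feature are illustrative assumptions, not the patent's method:

```python
# Hypothetical mapping between recognised gestures and simulated instruments.
ACTION_TO_INSTRUMENT = {
    "strum": "guitar",
    "strike": "drum",
    "press": "piano",
}

def classify_action(sensor_sample: dict) -> str:
    # Toy classifier: pick the gesture by peak acceleration (illustrative only).
    if sensor_sample["accel"] > 8.0:
        return "strike"
    if sensor_sample["accel"] > 3.0:
        return "strum"
    return "press"

def simulate(sensor_sample: dict) -> tuple[str, dict]:
    action = classify_action(sensor_sample)
    instrument = ACTION_TO_INSTRUMENT[action]
    # First features associated with the instrument, e.g. loudness from intensity.
    features = {"velocity": min(127, int(sensor_sample["accel"] * 12))}
    return instrument, features

print(simulate({"accel": 9.5}))   # ('drum', {'velocity': 114})
```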
SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND PROGRAM
The present technology relates to a signal processing device, a signal processing method, and a program that enable intuitive operation of sound.
The signal processing device includes an acquisition unit that acquires a sensing value indicating a motion of a predetermined portion of a body of a user or a motion of an instrument, and a control unit that performs non-linear acoustic processing on an acoustic signal according to the sensing value. The present technology can be applied to an acoustic reproduction system.
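One possible non-linear mapping from a sensing value to acoustic processing is sketched below; the cubic drive curve and tanh soft-clipping are arbitrary choices for illustration, not the patent's processing:

```python
import numpy as np

def nonlinear_gain(audio: np.ndarray, sensing_value: float) -> np.ndarray:
    """Small motions barely change the sound; large motions change it a lot
    (cubic mapping chosen purely for illustration)."""
    drive = np.clip(sensing_value, 0.0, 1.0) ** 3
    # Soft-clip the driven signal so stronger motion adds saturation.
    return np.tanh(audio * (1.0 + 9.0 * drive))

signal = 0.5 * np.sin(np.linspace(0, 2 * np.pi, 8))
print(nonlinear_gain(signal, 0.2).round(3))
print(nonlinear_gain(signal, 0.9).round(3))
```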
Arrangement and method for the conversion of at least one detected force from the movement of a sensing unit into an auditory signal
An arrangement for the conversion of at least one detected force from the movement of a sensing unit into an auditory signal. The arrangement includes at least one sensor for generating a force signal from the at least one detected force. A processing unit is configured for converting the force signal into a digital auditory signal. An output unit for converting the digital auditory signal into an auditory signal is further included, wherein the digital auditory signal includes information on the acceleration, strength and duration of a single detected force. A method for converting at least one detected force affecting an object into an auditory signal is also provided, as is the use of an arrangement according to the present invention for various entertainment and/or therapeutic purposes.
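A toy version of such a force-to-sound mapping might look like this; the specific pitch, loudness, and duration mappings are assumptions made for the example:

```python
import math

def force_to_tone(acceleration: float, strength: float, duration_s: float,
                  sample_rate: int = 44_100) -> list[float]:
    """Map a detected force to a simple tone: acceleration sets pitch,
    strength sets loudness, duration sets length (all mappings illustrative)."""
    frequency = 200.0 + 40.0 * acceleration          # Hz
    amplitude = min(1.0, strength / 10.0)
    n_samples = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * frequency * n / sample_rate)
            for n in range(n_samples)]

samples = force_to_tone(acceleration=5.0, strength=7.5, duration_s=0.05)
print(len(samples), round(max(samples), 3))
```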
Keyless synthesizer
A keyless synthesizer can stimulate all three senses (hearing, muscle movement and visual) at once in patients born with C.H.A.R.G.E. syndrome while they play and, at the same time, enjoy and have fun with the device. A keyless synthesizer operational by a single hand of the user includes an ultrasound range sensor responsive to the distance “d” of a user's hand from the sensor for generating a sensor signal corresponding to the distance “d”. A programmable microcontroller is programmed to convert the sensor signal to one of a plurality of discrete signals. A synthesizer is responsive to each discrete signal for generating a discrete tone. A multi-color generator is responsive to each discrete signal for generating a discrete color so that, for each discrete signal corresponding to a distance “d”, both a discrete tone and an associated discrete color are generated.
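Quantising the measured distance into discrete tone/color pairs could be as simple as the following sketch; the band boundaries, frequencies, and colors are hypothetical placeholders:

```python
# Hypothetical distance bands (cm) mapped to tone frequencies (Hz) and colors.
BANDS = [
    (10, 262, "red"),      # closest band -> C4
    (20, 294, "orange"),   # D4
    (30, 330, "yellow"),   # E4
    (40, 349, "green"),    # F4
    (50, 392, "blue"),     # G4
]

def discrete_output(distance_cm: float):
    """Quantise the ultrasound distance into one discrete tone/color pair."""
    for limit, freq, color in BANDS:
        if distance_cm <= limit:
            return freq, color
    return None, None   # out of range: silence, no light

print(discrete_output(12.5))   # (294, 'orange')
print(discrete_output(80.0))   # (None, None)
```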
METHODS, SYSTEMS, APPARATUSES, AND DEVICES FOR FACILITATING THE INTERACTIVE CREATION OF LIVE MUSIC BY MULTIPLE USERS
Disclosed herein is a method for facilitating the creation of music in real time by multiple users, in accordance with some embodiments. Accordingly, the method comprises receiving first music segment selections from a first user device, receiving second music segment selections from a second user device, transmitting the first music segment selections to the second user device, and transmitting the second music segment selections to the first user device. Further, each of the first user device and the second user device is configured for retrieving the first music segments, retrieving the second music segments, determining a universal time, synchronizing the second music segments with the first music segments to a common musical beat based on the universal time, mixing the second music segments with the first music segments based on the synchronizing, and generating mixed music comprising the first music segments and the second music segments based on the mixing.
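Synchronizing both devices' segments to a common beat from a shared universal clock might be sketched as follows; the fixed 120 BPM grid is an assumption for illustration:

```python
import math

def next_beat_offset(universal_time_s: float, bpm: float = 120.0) -> float:
    """Delay (in seconds) until the next beat of the shared grid,
    derived from a universal clock both devices agree on."""
    beat_period = 60.0 / bpm
    return (beat_period - math.fmod(universal_time_s, beat_period)) % beat_period

def schedule_segment(segment: str, universal_time_s: float, bpm: float = 120.0) -> float:
    start_at = universal_time_s + next_beat_offset(universal_time_s, bpm)
    print(f"{segment} starts at t={start_at:.3f}s on the shared beat grid")
    return start_at

# Both devices compute the same start time from the same universal clock.
schedule_segment("first user's loop", universal_time_s=12.34)
schedule_segment("second user's loop", universal_time_s=12.34)
```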
MOTION CAPTURE FOR PERFORMANCE ART
A method for controlling aspects of an artistic performance with a motion capture system includes modeling movements of a performer with a biomechanical skeleton, selecting parent and child segments on the biomechanical skeleton, positioning motion capture sensors on a motion capture subject at positions corresponding to the parent and child segments, selecting actions associated with movements of the child segment according to positions of the parent segment within at least two predefined spatial zones, executing actions in a first action group for the child segment when the parent segment is in a first spatial zone, and executing actions in a second action group for the child segment when the parent segment is in a second spatial zone.
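The zone-dependent action lookup could be sketched roughly as below; the zone boundary, gestures, and action names are hypothetical placeholders:

```python
# Hypothetical spatial zones for the parent segment (e.g. the upper arm),
# each selecting a different action group for the child segment (e.g. the hand).
ACTION_GROUPS = {
    "zone_low":  {"wave": "trigger_lights", "point": "pan_camera"},
    "zone_high": {"wave": "start_music",    "point": "fire_pyro_cue"},
}

def parent_zone(parent_height_m: float) -> str:
    return "zone_high" if parent_height_m > 1.5 else "zone_low"

def resolve_action(parent_height_m: float, child_gesture: str) -> str:
    zone = parent_zone(parent_height_m)
    return ACTION_GROUPS[zone].get(child_gesture, "no_action")

print(resolve_action(1.2, "wave"))   # trigger_lights
print(resolve_action(1.8, "wave"))   # start_music
```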
Proactive Actions Based on Audio and Body Movement
Various implementations disclosed herein include devices, systems, and methods that determine that a user is interested in audio content by determining that a movement (e.g., a user's head bob) has a time-based relationship with detected audio content (e.g., the beat of music playing in the background). Some implementations involve obtaining first sensor data and second sensor data corresponding to a physical environment, the first sensor data corresponding to audio in the physical environment and the second sensor data corresponding to a body movement in the physical environment. A time-based relationship between one or more elements of the audio and one or more aspects of the body movement is identified based on the first sensor data and the second sensor data. An interest in content of the audio is identified based on identifying the time-based relationship. Various actions may be performed proactively based on identifying the interest in the content.
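A crude way to test for such a time-based relationship is to check how many movement peaks fall near detected beats, as in this sketch; the tolerance and scoring rule are assumptions, not the disclosed method:

```python
def tempo_match(beat_times: list[float], bob_times: list[float],
                tolerance_s: float = 0.15) -> float:
    """Fraction of body-movement peaks that land near a detected beat;
    a high fraction suggests the user is moving along with the audio."""
    if not bob_times:
        return 0.0
    hits = sum(any(abs(bob - beat) <= tolerance_s for beat in beat_times)
               for bob in bob_times)
    return hits / len(bob_times)

beats = [0.0, 0.5, 1.0, 1.5, 2.0]   # beat times from the audio sensor data
bobs = [0.05, 0.52, 1.10, 1.48]     # head-bob peaks from the motion sensor data
score = tempo_match(beats, bobs)
print(f"alignment score: {score:.2f}")   # 1.00 -> likely interested
```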
Techniques for controlling the expressive behavior of virtual instruments and related systems and methods
Techniques for automatically controlling the expressive behavior of a virtual musical instrument by analyzing an audio recording of a live musician are provided. In some embodiments, an audio recording may be analyzed at various points along the timeline of the recording to derive corresponding values of a parameter that is in some way representative of the musical expression of the live musician. Values of control parameters that control one or more aspects of the audio playback of a virtual instrument may then be generated based on the determined values of the expression parameter. Values of control parameters may be provided to a sample library to control how a digital score selects and/or plays back samples from the library, and/or values of the control parameters may be stored with the digital score for subsequent playback.
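As an illustration, an expression curve could be derived from the recording's RMS level and rescaled to a control range, as sketched below; RMS and the MIDI-style 0–127 range are stand-ins for whatever expression and control parameters are actually used:

```python
import numpy as np

def expression_envelope(audio: np.ndarray, frame: int = 1024) -> np.ndarray:
    """Derive a per-frame expression value from a live recording; RMS level
    is used here as a stand-in for the chosen expression measure."""
    n_frames = len(audio) // frame
    trimmed = audio[: n_frames * frame].reshape(n_frames, frame)
    return np.sqrt((trimmed ** 2).mean(axis=1))

def to_control_values(expression: np.ndarray, cc_max: int = 127) -> np.ndarray:
    """Scale the expression curve to a MIDI-style control range so it could
    drive dynamics or sample-selection parameters of a virtual instrument."""
    span = expression.max() - expression.min()
    normalised = (expression - expression.min()) / (span if span else 1.0)
    return np.round(normalised * cc_max).astype(int)

recording = np.sin(np.linspace(0, 50, 8192)) * np.linspace(0.1, 1.0, 8192)
print(to_control_values(expression_envelope(recording)))
```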