Patent classifications
G10H2240/145
Electronic Musical Instrument and Electronic Musical Instrument System
Provided is an electronic musical instrument. The electronic musical instrument is configured to: generate an internal acoustic signal; generate a sound generation instruction signal; output the sound generation instruction signal to an external sound source configured to generate an external acoustic signal; switch from a first state, in which the external acoustic signal is generated by the external sound source in response to the sound generation instruction signal, to a second state, in which the internal acoustic signal is generated in response to the sound generation instruction signal; and, when the first state is switched to the second state, control the volume of the internal acoustic signal such that the volume of sound generated from the internal acoustic signal approaches the volume of sound generated from the external acoustic signal.
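The volume-approach behaviour on switching sound sources might be sketched as follows; the function names, step size, and linear ramp are illustrative assumptions, not the patent's actual control law:

```python
def step_toward_external(internal_level, external_level, step=0.05):
    """Move the internal volume one step toward the external source's
    last-known volume, snapping to the target once within one step."""
    if abs(external_level - internal_level) <= step:
        return external_level
    if internal_level < external_level:
        return internal_level + step
    return internal_level - step

def switch_to_internal(internal_level, external_level, step=0.05):
    """Ramp the internal volume until it matches the external level,
    returning the sequence of levels applied during the switch."""
    levels = [internal_level]
    while levels[-1] != external_level:
        levels.append(step_toward_external(levels[-1], external_level, step))
    return levels
```

With a step of 1.0, switching from level 2.0 to a target of 5.0 yields the ramp 2.0, 3.0, 4.0, 5.0.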
System and methods for automatically generating a musical composition having audibly correct form
A generative composition system reduces existing musical artefacts to constituent elements termed Form Atoms. To provide myriad new compositions, a set of heuristics ensures that musical textures between concatenated musical sections follow a supplied and defined briefing narrative for the new composition, whilst contiguous concatenated Form Atoms are also automatically selected so that similarities in respective identified attributes of musical textures for those musical sections are maintained, supporting the maintenance of musical form. Independent aspects of the disclosure further ensure that, within the composed work, such as a media product or a real-time audio stream, chord spacing determination and control are practiced to maintain musical sense in the new composition. Further, a structuring of primitive heuristics operates to maintain pitch and permit key transformation. The system and its functionality provide signal analysis and music generation by allowing emotional connotations to be specified and reproduced from cross-referenced Form Atoms.
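The selection of contiguous Form Atoms by texture similarity could be sketched like this; the attribute names and the count-of-matches similarity measure are invented for illustration and are not the patent's actual heuristics:

```python
def texture_similarity(a, b):
    """Count texture attributes that two Form Atoms share exactly."""
    return sum(1 for key, value in a.items() if b.get(key) == value)

def select_next_atom(current, candidates):
    """Choose the candidate Form Atom whose texture attributes best match
    the current section's, so musical form is maintained across the join."""
    return max(
        candidates,
        key=lambda c: texture_similarity(current["texture"], c["texture"]),
    )
```

A candidate sharing two of three attributes with the current section would be preferred over one sharing none.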
Electronic music box
A music data memory includes pieces of music within a group and other pieces of music outside the group. The next piece to be played is automatically determined by a random table from among the pieces within the group. A favorite or newest piece is weighted so as to be played more frequently within the group. A piece in the music data memory is automatically included in the group by the random table, and a newly downloaded piece in the music data memory is included in the group by priority. The most frequently played piece is excluded from the group in place of a newly included piece, although a favorite or newest piece may be exempted from exclusion. The next piece is capable of being played at a tempo similar to that of the preceding piece, by means of tempo adjustment, piece replacement, or repetition of the same piece, for the purpose of continued baby cradling in synchronism with the same tempo across succeeding pieces.
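The weighted "random table" selection of the next piece might look like this in outline; the weight values and `random.choices` standing in for the table are assumptions for illustration:

```python
import random

def choose_next_piece(group, weight_of, rng=random):
    """Pick the next piece from the group at random, with favorite or
    newest pieces carrying larger weights so they play more frequently.
    Pieces absent from `weight_of` default to weight 1."""
    pieces = sorted(group)
    weights = [weight_of.get(piece, 1) for piece in pieces]
    return rng.choices(pieces, weights=weights, k=1)[0]
```

Raising a favorite's weight above 1 makes it proportionally more likely; a weight of 0 effectively excludes a piece from selection.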
Singing voice edit assistant method and singing voice edit assistant device
A singing voice edit assistance method, performed by a computer, includes: judging whether phoneme data, from which waveform data for listening contained in a data set for singing synthesis is synthesized, is available for a user to edit a singing voice, the data set for singing synthesis containing score data representing a time series of notes and lyrics data representing words corresponding to the respective notes; and synthesizing the waveform data for listening by shifting pitches of phoneme data, representing waveforms of phonemes, indicated by the lyrics data to pitches indicated by the score data and connecting the pitch-shifted phoneme data. If the indicated phoneme data is not available, the synthesizing synthesizes the waveform data for listening based on the score data, the lyrics data, and substitute phoneme data available to the user instead of the indicated phoneme data.
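The substitute-phoneme fallback could be sketched as below; the bank layout, the data shapes, and the tagging stand-in for pitch shifting are all illustrative assumptions:

```python
def pitch_shift(waveform, pitch):
    """Stand-in for real pitch shifting: tag the waveform with its target pitch."""
    return (waveform, pitch)

def synthesize_for_listening(score, lyrics, phoneme_bank, substitute_bank):
    """Shift each indicated phoneme to its note's pitch and concatenate the
    results, falling back to substitute phoneme data whenever the indicated
    phoneme data is unavailable."""
    segments = []
    for pitch, word in zip(score, lyrics):
        waveform = phoneme_bank.get(word, substitute_bank.get(word))
        segments.append(pitch_shift(waveform, pitch))
    return segments
```

A word missing from the user's own bank is silently served from the substitute bank, so the listening preview never stalls on missing data.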
SERVER, MUSICAL INSTRUMENT, SERVER COMMUNICATION METHOD, AND COMMUNICATION METHOD
The disclosure provides a server, a musical instrument, a server communication method, and a communication method. The server is connected to a terminal and a musical instrument and comprises a processor, wherein the processor is configured to receive an instruction from the terminal, create timbre data or a data request destination, and transmit the timbre data or the data request destination to the musical instrument.
SINGING VOICE EDIT ASSISTANT METHOD AND SINGING VOICE EDIT ASSISTANT DEVICE
A singing voice edit assistance method, performed by a computer, includes: judging whether phoneme data, from which waveform data for listening contained in a data set for singing synthesis is synthesized, is available for a user to edit a singing voice, the data set for singing synthesis containing score data representing a time series of notes and lyrics data representing words corresponding to the respective notes; and synthesizing the waveform data for listening by shifting pitches of phoneme data, representing waveforms of phonemes, indicated by the lyrics data to pitches indicated by the score data and connecting the pitch-shifted phoneme data. If the indicated phoneme data is not available, the synthesizing synthesizes the waveform data for listening based on the score data, the lyrics data, and substitute phoneme data available to the user instead of the indicated phoneme data.
Music generation tool
A system and computer-implemented method for generating music content includes a music notation data store having a collection of notation data files and an audio data store having a collection of audio data files, each data file in the notation and audio data stores including associated music characteristic metadata. One or more computer processors are arranged to receive user music preference inputs from a user interface and to search the notation and audio data stores to identify a plurality of data files corresponding to one or more user preference inputs. The processor randomly selects at least one notation file and at least one audio file from the identified notation and audio files and generates a music instance file by combining the selected notation and audio files for playback to the user.
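The search-then-random-pairing step might be sketched as follows; the store layout, metadata keys, and exact-match filtering are invented for illustration:

```python
import random

def generate_instance(notation_store, audio_store, prefs, rng=random):
    """Filter both stores by the user's preference metadata, then randomly
    pair one matching notation file with one matching audio file."""
    def matches(entry):
        return all(entry["meta"].get(k) == v for k, v in prefs.items())

    notation = rng.choice([e for e in notation_store if matches(e)])
    audio = rng.choice([e for e in audio_store if matches(e)])
    return {"notation": notation["file"], "audio": audio["file"]}
```

When only one file in each store matches the preferences, the pairing is deterministic; with several matches, the random choice yields a different combination on each call.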
Dynamic music authoring
A method to author music. The method includes presenting, on a display by a computing device, an audio effect menu, receiving, by the computing device, a first user input selecting a first audio effect from the audio effect menu, generating, in response to receiving the first user input, a first modified audio stream based on a particular audio stream and the first audio effect, receiving, by the computing device while receiving the first user input, a second user input selecting a second audio effect from the audio effect menu, generating, in response to receiving the second user input, a second modified audio stream based on the first modified audio stream and the second audio effect, detecting cessation of the first user input, and continuing, in response to detecting the cessation, generating the second modified audio stream based on the first modified audio stream and the second audio effect.
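The effect-chaining behaviour described above can be sketched with a list standing in for the audio stream and effect application reduced to tagging; every name here is hypothetical:

```python
def apply_effect(stream, effect):
    """Stand-in for real DSP: return a new stream tagged with the effect."""
    return stream + [effect]

def author_session(base_stream, first_effect, second_effect):
    """Model two overlapping effect selections as described: the second
    effect chains onto the first modified stream, and survives the first
    input's cessation."""
    # First input: modify the base stream with the first effect.
    first_modified = apply_effect(base_stream, first_effect)
    # Second input arrives while the first is still held: it chains onto
    # the first modified stream rather than the base stream.
    second_modified = apply_effect(first_modified, second_effect)
    # After the first input ceases, generation of the second modified
    # stream continues, still based on the first modified stream.
    return second_modified
```

The key point the sketch captures is that the second effect is applied to the already-modified stream, so the chain persists after the first input ends.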
DRUMSTICK CONTROLLER
A percussion device includes a drumstick assembly. The drumstick assembly includes a drumstick having a base and a tip end, and a drumstick tip secured to the tip end of the drumstick, the drumstick tip including a sensor. The drumstick includes, at the base thereof, at least one control button, a communication element, and a processor in communication with the at least one control button, the drumstick tip, and the communication element. The processor is configured to receive a signal from the drumstick tip and to generate output to the communication element. The output so generated includes a signal that specifies a sound file selected by operation of the at least one control button.
Musical Score Generator
A method of generating a musical score file for one or more target musical instruments with a score generation component based on input audio data. The score generation component finds candidate musical notes within the input audio data using a frequency analysis to identify segments that share substantially the same audio frequency, and finds a best match for those candidate musical notes in audio data associated with target musical instruments in a sound database. A generated musical score file can be printed as sheet music or audibly played back over speakers.
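The segment-finding step of the frequency analysis might look like this in outline; the per-frame dominant-frequency representation and the tolerance threshold are assumptions for illustration:

```python
def find_candidate_notes(dominant_freqs, tolerance=1.0):
    """Group consecutive analysis frames whose dominant frequency stays
    within `tolerance` Hz of the segment start into candidate-note
    segments, recording each segment's frequency and frame count."""
    segments = []
    for freq in dominant_freqs:
        if segments and abs(segments[-1]["freq"] - freq) <= tolerance:
            segments[-1]["frames"] += 1
        else:
            segments.append({"freq": freq, "frames": 1})
    return segments
```

Each resulting segment is a candidate note that would then be matched against the instrument sound database.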