Patent classifications
G10H2210/341
Musical sound processing apparatus, musical sound processing method, and storage medium
A musical sound processing apparatus, a musical sound processing method, and a storage medium capable of generating engaging musical sound are provided. The musical sound processing apparatus includes a first control unit configured to control the timing of sounding of a first tone at steps separated by an interval, and a second control unit configured to control the timing of sounding of a second tone, following or overlapping the first tone, according to a first tempo. The first control unit is configured to control the timing of sounding of the first tone according to the first tempo when timing information has not been acquired from outside, and according to a second tempo, which is based on the timing information and different from the first tempo, when the timing information has been acquired.
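The tempo-selection rule in the abstract can be sketched as a small function. This is a minimal illustration, not the patented implementation; the function name and the assumption that external timing arrives as a beat interval in seconds are both hypothetical.

```python
def effective_tempo(first_tempo, external_timing=None):
    """Return the tempo (BPM) to use for the first tone.

    Uses the internally set first tempo unless timing information has been
    acquired from outside, in which case a second tempo is derived from it.
    Assumption: external timing is given as an interval between beats, in
    seconds, so the derived tempo is 60 / interval.
    """
    if external_timing is None:
        return first_tempo
    return 60.0 / external_timing
```

For example, an external beat interval of 0.4 s would yield a second tempo of 150 BPM regardless of the internally set first tempo.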
Method and System for Processing Input Data
A method for analyzing one or more notes in a musical composition, comprising, for each note: getting a note, a chord, and a scale; and computing note properties using the note's value, the chord, and the scale. A method for transforming one or more input notes into one or more new notes, comprising, for each input note: getting an input note and its note properties; getting a new chord and a new scale for the input note; getting a list of candidate notes; computing distances between the input note and every note in the list, using the input note's value, the input note's note properties, the candidate note's value, and the candidate note's note properties; finding the candidate that has the minimal distance; and setting the new note value to the note value of the candidate with the minimal distance.
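The two steps above (compute note properties, then map each input note to the nearest candidate) can be sketched as follows. The specific properties and the distance weights are illustrative assumptions; the abstract does not define them.

```python
def note_properties(note, chord, scale):
    """Properties of a MIDI note relative to a chord and a scale,
    each given as a set of pitch classes (0-11)."""
    return {
        "in_chord": note % 12 in chord,
        "in_scale": note % 12 in scale,
    }

def distance(input_note, input_props, candidate, candidate_props):
    """Smaller is better: prefer nearby pitches with a matching harmonic role.
    The penalty values (6 and 3) are arbitrary illustrative weights."""
    d = abs(input_note - candidate)
    if input_props["in_chord"] != candidate_props["in_chord"]:
        d += 6
    if input_props["in_scale"] != candidate_props["in_scale"]:
        d += 3
    return d

def transform(input_note, old_chord, old_scale, new_chord, new_scale, candidates):
    """Return the candidate note with the minimal distance to the input note."""
    props = note_properties(input_note, old_chord, old_scale)
    return min(
        candidates,
        key=lambda c: distance(input_note, props,
                               c, note_properties(c, new_chord, new_scale)),
    )
```

With these weights, an E (MIDI 64) that was a chord tone over C major maps to D (MIDI 62) over a G major chord, since D is the nearest pitch that is again a chord tone.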
Method of combining audio signals
A method for automatically generating an audio signal, the method comprising: receiving a source audio signal; analyzing the source audio signal to identify a musical parameter characteristic thereof; obtaining a supplemental audio signal based on the identified musical parameter characteristic; and combining the source audio signal and the supplemental audio signal to form an extended audio signal.
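The three-stage pipeline (analyze, obtain, combine) might look like the sketch below. As an assumption for brevity, the identified musical parameter is a simple RMS loudness level rather than a richer feature such as tempo or key, and the supplemental library is a plain list of dictionaries.

```python
def analyze_level(samples):
    """Identify a musical parameter of the source: here, its RMS level."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def obtain_supplemental(level, library):
    """Pick the library entry whose nominal level is closest to the source's."""
    return min(library, key=lambda entry: abs(entry["level"] - level))["samples"]

def combine(source, supplemental):
    """Sample-wise sum to form the extended signal; the shorter input
    is zero-padded to the length of the longer one."""
    n = max(len(source), len(supplemental))
    pad = lambda s: s + [0.0] * (n - len(s))
    return [a + b for a, b in zip(pad(source), pad(supplemental))]
```

A real system would operate on audio buffers and use perceptual features, but the control flow matches the claimed method.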
MUSIC GENERATION DEVICE, MUSIC GENERATION METHOD, AND RECORDING MEDIUM
A music generation device includes: an acquisition unit that acquires first stream data and second stream data different from the first stream data; an accompaniment generation unit that generates accompaniment information, which is music data indicating an accompaniment, based on a change in the first stream data; a melody generation unit that generates melody information, which is music data indicating a melody, based on a change in the second stream data; a melody adjustment unit that adjusts the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information; a music combining unit that combines the accompaniment information and the adjusted melody information to generate musical piece information; and an output unit that outputs the generated musical piece information.
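The melody-adjustment step (fitting the melody to the key of the generated accompaniment) could be realized by snapping each melody pitch to the nearest in-key pitch. The major-scale key representation and the downward tie-break are assumptions for illustration.

```python
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # scale degrees as semitone offsets

def adjust_melody(melody, key_root):
    """Move each MIDI note to the closest note of the major key on key_root.

    Searches outward (0, -1, +1, -2, +2, ...) from each note, so on a tie
    the lower in-key pitch is chosen -- an arbitrary illustrative choice.
    """
    allowed = {(key_root + step) % 12 for step in MAJOR_SCALE}
    adjusted = []
    for note in melody:
        for offset in sorted(range(-6, 7), key=abs):
            if (note + offset) % 12 in allowed:
                adjusted.append(note + offset)
                break
    return adjusted
```

For instance, with the accompaniment in C major, the out-of-key notes C# (61) and D# (63) snap to C (60) and D (62) respectively.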
METHOD AND SYSTEM FOR AUTOMATIC MUSIC TRANSCRIPTION AND SIMPLIFICATION
Provided are systems and methods for transforming a digital score file into one or more of a plurality of levels of simplified visualization outputs. Methods of the present invention may be computer implemented. Systems of the present invention may include at least one display device, a non-transitory memory having instructions embedded thereon, and a processor in communication with the non-transitory memory and the at least one display device. Systems and methods of the present invention may be configured to receive at least one digital score file, upon which one or more simplification rules are executed, resulting in at least one simplified visualization output. Simplification rules may include, but are not limited to, song length, tempo adjustment, tie, rhythm, harmonic rhythm, and chord. One or more simplified visualization outputs are then provided.
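One of the listed simplification rules, rhythm simplification, might be implemented as duration quantization onto a coarser grid. The grid size and the flat list-of-durations score representation are assumptions; the actual rules operate on a full digital score file.

```python
def simplify_rhythm(durations, grid=0.5):
    """Round each note duration (in beats) to the nearest multiple of `grid`,
    keeping a minimum of one grid unit so that no note disappears."""
    return [max(grid, round(d / grid) * grid) for d in durations]
```

Applied with an eighth-note grid of 0.5 beats, slightly uneven durations such as 0.26, 0.74, and 1.1 beats collapse to clean values of 0.5, 0.5, and 1.0.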
Determining tap locations on a handheld electronic device based on inertial measurements
Systems and methods are described in which the location of a tap on the body of a handheld device is determined in real time using data streams from an embedded inertial measurement unit (IMU). Taps may be generated by striking the handheld device with an object (e.g., a finger), or by moving the handheld device in a manner that causes it to strike another object. IMU accelerometer, gyroscopic and/or orientation (relative to the magnetic and/or gravitational pull of the earth) measurements are examined for signatures that distinguish a tap at a location on the body of the device compared with signal characteristics produced by taps at other locations. Neural network and/or numerical methods may be used to perform such classifications. Tap locations, tap timing and tap attributes such as the magnitude of applied forces, device orientation, and the amplitude and directions of motions during and following a tap, may be used to control or modulate responses within the handheld device and/or actions within connected devices.
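As a toy illustration of the signature-based classification idea, the dominant accelerometer axis of a detected spike can give a coarse tap location. A real system would use the trained neural-network or numerical classifiers the abstract mentions; the threshold and location labels here are assumptions.

```python
def classify_tap(accel_sample, threshold=2.0):
    """Classify a coarse tap location from one accelerometer sample.

    accel_sample = (ax, ay, az) in units of g. Returns a location label
    based on which axis carries the largest spike, or None when no axis
    exceeds the detection threshold (i.e., no tap occurred).
    """
    ax, ay, az = (abs(v) for v in accel_sample)
    peak = max(ax, ay, az)
    if peak < threshold:
        return None
    if peak == ax:
        return "side"
    if peak == ay:
        return "top_or_bottom"
    return "front_or_back"
```

In practice the classifier would examine a short window of IMU samples around the spike, not a single reading, and would also use gyroscope and orientation data.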
Patient tailored system and process for treating ASD and ADHD conditions using music therapy and mindfulness with Indian Classical Music
A method, system, and processes to develop a patient-tailored music therapy based on Indian Classical Music compositions to treat ASD (Autistic Spectrum Disorders) and ADHD (Attention Deficit Hyperactivity Disorder) are described. According to the present invention, there is provided a method to develop a tailored music therapy for treating patients suffering from ASD and ADHD based on the patient's response, and a system to measure the patient's response to music therapy and mindfulness inputs. The invention comprises a process to determine a playlist of suitable Indian Classical Music compositions for use in treating the patient (FIG. 1), followed by further tuning of the selections' allowable note levels, ramp-up and ramp-down times to and from allowable note levels, melody hold times, and rhythm pattern selection to develop an optimum waveform (FIG. 2), all based on measuring the patient's response using a multiple-input physical movement, audio, and brain wave response measurement system (FIG. 3) or through visual observations. The invention also provides a process to determine daily therapy and mindfulness time and a process for monthly music therapy and mindfulness tailoring. The invention also provides a system (FIG. 3) to measure the patient's response to the music therapy and mindfulness, which can be used in conjunction with, or in place of, visual observations. In this invention, the patient starts with a therapy and mindfulness tailoring session in which a playlist of Indian Classical Music Raga compositions is first developed, selected based on the patient's response as measured by the system of FIG. 3 or through visual observations. Then patient-specific optimum note levels, beat rhythm pattern and rhythm pattern frequency, and ramp-up and ramp-down times to and from optimum note levels are determined based on the patient's response to create a waveform (FIG. 2).
The playlist selections are then modified, manually or by a computer program, using the waveform parameters so that, when played to the patient, they elicit a Calm Range Response pattern, defined as a state of stimulated mindfulness without falling asleep and characterized by a range of motion, audio, or brain wave response unique to the patient. The specific pieces of the waveform are derived by varying the waveform parameters and measuring the patient's response (FIGS. 4A, 4B, 4C, 4D) using the response measuring system (FIG. 3) or through visual observations. The invention also describes a process to develop the daily listening period duration (FIG. 5). The invention describes a process used
Automatically managing the musical tastes and preferences of a population of users requesting digital pieces of music automatically composed and generated by an automated music composition and generation system
An automated music composition and generation system having an automated music composition and generation engine for processing musical experience descriptors and space parameters selected by the system user. The engine includes: a user taste generation subsystem for automatically determining the musical tastes and preferences of each system user based on user feedback and autonomous piece analysis, and maintaining a system user profile reflecting musical tastes and preferences of each system user; and a population taste aggregation subsystem for aggregating the musical tastes and preferences of the population of system users, and modifying the musical experience descriptors and/or time and/or space parameters provided to the automated music composition and generation engine, so that the digital pieces of composed music better reflect the musical tastes and preferences of the population of system users and meet future system user requests for automated music compositions.
ELECTRONIC MUSICAL INSTRUMENT AND MUSICAL PIECE PHRASE GENERATION PROGRAM
An electronic musical instrument includes a progression speed changing unit that changes a progression of transport using a predetermined formula such that a total time required for the transport to pass through a specific step section of musical piece data is not changed; a parameter setting unit that sets a parameter for the predetermined formula; and a musical phrase generator that generates a musical phrase by assigning the parameter set by the parameter setting unit to the predetermined formula based on the progression of the transport changed by the progression speed changing unit.
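One way to realize the claimed constraint (changing the progression speed within a step section without changing the section's total traversal time) is to warp the transport position with a curve whose endpoints are fixed. The power-curve formula below is an illustrative assumption; the patent's "predetermined formula" is not specified in the abstract.

```python
def warped_position(t, section_len, p=2.0):
    """Map elapsed time t in [0, section_len] to a warped transport position
    in the same range: position = section_len * (t / section_len) ** p.

    For any p > 0 the endpoints are fixed (0 -> 0, section_len -> section_len),
    so the total time to traverse the section is unchanged; p is the
    parameter set by the parameter setting unit. p > 1 starts slow and
    accelerates, p < 1 starts fast and decelerates, p = 1 is linear.
    """
    x = t / section_len
    return section_len * x ** p
```

A musical phrase generator would then sample notes at the warped positions rather than at the linear transport positions.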
Automated music composition and generation system employing virtual musical instrument libraries for producing notes contained in the digital pieces of automatically composed music
An automated music composition and generation system including a system user interface for enabling system users to review and select one or more musical experience descriptors, as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the system user interface, for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user, so as to automatically compose and generate one or more digital pieces of music in response to the musical experience descriptors and time and/or space parameters selected by the system user. Each digital piece of composed and generated music contains a set of musical notes arranged and performed in the digital piece of music. The engine includes: a digital piece creation subsystem and a digital audio sample producing subsystem supported by virtual musical instrument libraries.