Patent classifications
G10H2210/385
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument in one aspect of the disclosure includes: a plurality of operation elements to be performed by a user for respectively specifying different pitches; a memory that stores musical piece data that includes data of a vocal part, the vocal part including at least a first note with a first pitch and an associated first lyric part that are to be played at a first timing; and at least one processor, wherein if the user does not operate any of the plurality of operation elements in accordance with the first timing, the at least one processor digitally synthesizes a default first singing voice that includes the first lyric part and that has the first pitch in accordance with data of the first note stored in the memory, and causes the digitally synthesized default first singing voice to be audibly output at the first timing.
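The fallback behavior this abstract claims can be sketched minimally in Python. All names here are hypothetical; the sketch assumes that when the user does press a key at the note's timing, the lyric is sung at the user's pitch, and otherwise the default first singing voice uses the pitch stored in memory:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VocalNote:
    timing: float   # first timing, in seconds
    pitch: int      # first pitch, as a MIDI note number
    lyric: str      # associated first lyric part

def voice_to_synthesize(note: VocalNote, user_pitch: Optional[int]) -> Tuple[int, str]:
    """Return the (pitch, lyric) pair to digitally synthesize at note.timing.

    If no operation element was operated in accordance with the timing
    (user_pitch is None), fall back to the default first singing voice:
    the note's stored pitch with its lyric part.
    """
    if user_pitch is None:
        return note.pitch, note.lyric   # default: pitch from memory
    return user_pitch, note.lyric       # user-specified pitch
```

The sketch isolates only the decision the claim describes; actual voice synthesis and audio output are outside its scope.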
Enhanced real-time audio generation via cloud-based virtualized orchestra
Systems and methods are provided for enhanced real-time audio generation via a virtualized orchestra. An example method includes receiving, from a user device, a request to generate output associated with a musical score. Actions associated with virtual musicians with respect to respective instruments are simulated based on one or more machine learning models, with the simulated actions being associated with a virtual musician and indicative of an expected playing style during performance of the musical score. Output audio to be provided to the user device is generated, with the output audio being generated based on the simulated actions.
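The method in this abstract can be pictured as a simple pipeline, sketched below in Python. A placeholder callable stands in for the machine learning model that maps a virtual musician and a note event to an expected playing-style gesture; all names are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SimulatedAction:
    musician: str     # which virtual musician produced the action
    instrument: str   # the musician's respective instrument
    gesture: str      # expected playing-style gesture for one note event

def simulate_performance(score: List[str],
                         model: Callable[[str, str], str],
                         musicians: Dict[str, str]) -> List[SimulatedAction]:
    """Simulate actions for each virtual musician over a musical score.

    `model` stands in for the machine learning model; `musicians` maps
    musician names to instruments. The returned actions would then drive
    audio generation for the requesting user device.
    """
    actions = []
    for note in score:
        for name, instrument in musicians.items():
            actions.append(SimulatedAction(name, instrument, model(name, note)))
    return actions
```

Output audio generation from the simulated actions would follow as a separate rendering step, which the abstract leaves unspecified.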
VIDEO GAMING CONSOLE THAT SYNCHRONIZES DIGITAL IMAGES WITH VARIATIONS IN MUSICAL TEMPO
The teachings described herein are generally directed to a system, method, and apparatus for separating and mixing tracks within music. The system can have a video that is synchronized with the variations in the musical tempo through a variable timing reference track designed for, and provided to, a user of the prerecorded preselected performance, wherein the designing of the variable timing reference track includes creating a tempo map having variable tempos, rhythms, and beats using notes from the preselected performance.
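The core of the tempo-map idea is a piecewise mapping from musical beats to wall-clock time, which is what lets video frames stay synchronized as the tempo varies. A minimal sketch, assuming the map is a sorted list of (start_beat, bpm) segments (a hypothetical representation, not the one claimed):

```python
def beat_to_seconds(tempo_map, beat):
    """Convert a beat position to elapsed seconds under a variable tempo map.

    `tempo_map` is a list of (start_beat, bpm) segments sorted by start_beat;
    each segment's tempo holds until the next segment begins. A video frame
    scheduled at a beat position can then be shown at the returned time.
    """
    seconds = 0.0
    for i, (start, bpm) in enumerate(tempo_map):
        end = tempo_map[i + 1][0] if i + 1 < len(tempo_map) else float("inf")
        if beat <= start:
            break
        span = min(beat, end) - start
        seconds += span * 60.0 / bpm   # each beat lasts 60/bpm seconds
    return seconds
```

For example, under a map that runs 4 beats at 120 bpm and then slows to 60 bpm, beat 6 falls at 2.0 s + 2.0 s = 4.0 s.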
Systems and methods for automatic mixing of media
Audio mix information is received from a plurality of users. Mix rules are determined from the audio mix information from the plurality of users, wherein the mix rules include a first mix rule associated with a first audio item. The first mix rule relates to an overlap of the first audio item with another audio item. The first mix rule is made available to one or more clients. After making the first mix rule available, an indication is received from a respective client device that the first audio item is to be mixed with a second audio item at the respective client device in accordance with the first mix rule. In response to the indication, a specification of the first mix rule is transmitted to the respective client device to be applied by the respective client device to generate a transition between the first audio item and the second audio item.
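The server-side aggregation and client-side application described above can be sketched with two small functions. The median-overlap aggregation is an illustrative assumption, not the rule-derivation method the abstract claims:

```python
def derive_mix_rule(overlaps_seconds):
    """Derive a first mix rule from many users' audio mix information.

    Here the rule is simply the median overlap (in seconds) that users
    chose when mixing the first audio item into another item.
    """
    ordered = sorted(overlaps_seconds)
    return ordered[len(ordered) // 2]

def apply_mix_rule(first_item_length, overlap):
    """Client side: apply the mix rule by starting the second audio item
    `overlap` seconds before the first item ends, yielding the time of
    the transition between the two items."""
    return max(0.0, first_item_length - overlap)
```

A client that received the rule specification would call `apply_mix_rule` locally, so only the rule, not the audio, crosses the network.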
Content control device and storage medium
A content control device includes: a plurality of controls to which a plurality of parameters for controlling properties of a content containing at least one of sound and video are respectively assigned, each of the plurality of controls outputting a first indicated value in accordance with an operation amount of the control; and a processor configured to: previously create setting information used to determine respective values of the plurality of parameters in accordance with the second indicated value; determine the values of the plurality of parameters in accordance with the second indicated value and the setting information; and revise each of the determined parameter values in accordance with the first indicated value outputted for the control assigned to the parameter.
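The two-stage determination the abstract describes (a master value mapped through setting information, then per-control revision) can be sketched as follows. Representing the setting information as per-parameter functions, and the revision as an additive offset, are illustrative assumptions:

```python
def resolve_parameters(setting_info, second_value, first_values):
    """Determine parameter values from the second indicated value, then
    revise them with each control's first indicated value.

    `setting_info` maps each parameter name to a function of the second
    indicated value; `first_values` maps parameter names to the first
    indicated values of their assigned controls.
    """
    # Stage 1: determine values from the second indicated value + setting info.
    values = {name: fn(second_value) for name, fn in setting_info.items()}
    # Stage 2: revise each value by its control's first indicated value.
    for name, offset in first_values.items():
        values[name] = values[name] + offset
    return values
```

Controls with no operation leave their parameters at the values determined in stage 1.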
Mixing complex multimedia data using tempo mapping tools
The teachings described herein are generally directed to a system, method, and apparatus for separating and mixing tracks within music. The system can have a video that is synchronized with the variations in the musical tempo through a variable timing reference track designed for, and provided to, a user of the prerecorded preselected performance, wherein the designing of the variable timing reference track includes creating a tempo map having variable tempos, rhythms, and beats using notes from the preselected performance.
PERFORMANCE ANALYSIS METHOD
A performance analysis method according to the present invention includes generating information related to a performance tendency of a user, from observed performance information relating to a performance of a musical piece by the user and inferred performance information representing how the musical piece would be performed according to a specific tendency.
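One way to picture the comparison this abstract describes is to measure, per note, the deviation between observed and inferred performance information. Summarizing the tendency as a mean onset deviation is an illustrative assumption:

```python
def tendency_profile(observed_onsets, inferred_onsets):
    """Summarize a user's performance tendency as the mean deviation (in
    seconds) between observed note onsets and the onsets inferred for a
    specific tendency, e.g. a strictly mechanical rendition of the score.

    A positive result suggests the user tends to play behind the
    inferred timing; a negative result, ahead of it.
    """
    deviations = [o - i for o, i in zip(observed_onsets, inferred_onsets)]
    return sum(deviations) / len(deviations)
```

Richer profiles (per-phrase deviations, dynamics, articulation) would follow the same observed-versus-inferred pattern.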
INFORMATION PROCESSING METHOD
An information processing method according to the present invention includes providing, to a learner that has undergone learning relating to a specific tendency of a performance, first musical piece information representing contents of a musical piece and performance information relating to a past performance prior to one unit period within the musical piece, and generating, with the learner, performance information for the one unit period that is based on the specific tendency.
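Because each unit period is generated from the piece contents plus the performance information of past periods, the method is naturally autoregressive. A minimal sketch, with a placeholder callable standing in for the trained learner (all names hypothetical):

```python
def generate_performance(piece_info, learner, n_periods):
    """Generate performance information one unit period at a time.

    For each period, the learner receives the musical piece information
    for that period and the performance information generated for all
    past periods, and emits performance information reflecting the
    specific tendency it has learned.
    """
    history = []
    for t in range(n_periods):
        history.append(learner(piece_info[t], history[:]))
    return history
```

Passing a copy of `history` keeps the learner from mutating previously generated periods.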
DATA FORMAT
A method for constructing an adaptive media file comprising a plurality of audio components configured to be used to form an audio output arranged to have a controllable tempo, the method comprising providing first audio data associated with a first audio component of the plurality of audio components, setting a playback tempo range of the first audio data, providing second audio data associated with the first audio component, setting a playback tempo range of the second audio data, wherein the tempo range of the second audio data is different from the tempo range of the first audio data, and associating the first audio data, the second audio data and the respective playback tempo ranges.
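At playback, a file built this way lets the player pick, per component, the audio data whose tempo range contains the requested tempo. A minimal sketch, assuming each component is held as a list of (low_bpm, high_bpm, audio_data) associations (a hypothetical representation):

```python
def select_audio_data(component, tempo):
    """Pick the audio data of one component whose playback tempo range
    contains the requested tempo.

    `component` is a list of (low_bpm, high_bpm, audio_data) entries,
    i.e. the associations of audio data with playback tempo ranges that
    the construction method produces.
    """
    for low, high, data in component:
        if low <= tempo <= high:
            return data
    raise ValueError(f"no audio data covers tempo {tempo}")
```

Recording the same component at several tempo ranges avoids the artifacts of time-stretching a single recording across the whole controllable range.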