Patent classifications
G10H2220/101
SYSTEM FOR SELECTION AND PLAYBACK OF SONG VERSIONS FROM VINYL TYPE CONTROL INTERFACES
A system for processing digital audio signals. The system includes a digital audio unit configured to provide digital audio data of a first audio track and a second audio track, where the first and second audio tracks represent different versions of the same piece of music. The system also includes a user control device operable by a user to select either a first region or a second region. The digital audio unit is configured to play back audio signals obtained from the first audio track when the first region is selected, and to play back audio signals obtained from the second audio track when the second region is selected.
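The region-to-version mapping described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the region names and the sample-list representation of the tracks are assumptions for demonstration.

```python
# Hypothetical sketch: map a selected control region to one of two
# versions of the same piece and return the audio to play back.
def select_playback(region: str, track_a: list, track_b: list) -> list:
    """Return the audio data for the version mapped to the region."""
    regions = {"first": track_a, "second": track_b}
    if region not in regions:
        raise ValueError(f"unknown region: {region}")
    return regions[region]

# Two versions of the same piece, represented here as sample lists.
original_mix = [0.1, 0.2, 0.3]
remix = [0.4, 0.5, 0.6]

print(select_playback("first", original_mix, remix))   # plays the original version
print(select_playback("second", original_mix, remix))  # plays the alternate version
```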
System and method for generating an audio file
A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event.
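The retrieval-and-placement flow described in the abstract can be sketched as below. The key-relevance rule (same key or its relative major/minor), the library structure, and all file names are illustrative assumptions, not the patent's actual logic.

```python
# Hypothetical sketch: given an event's detected key, pick a stinger from a
# library whose tagged key is "relevant", then place it at the event time.
RELATIVE = {"C major": "A minor", "A minor": "C major",
            "G major": "E minor", "E minor": "G major"}

def relevant_keys(key: str) -> set:
    """Treat a key and its relative major/minor as relevant to each other."""
    return {key, RELATIVE.get(key, key)}

def place_stinger(event_time: float, event_key: str, library: list) -> dict:
    """Return a timeline placement for the first key-relevant stinger."""
    for stinger in library:
        if stinger["key"] in relevant_keys(event_key):
            return {"time": event_time, "file": stinger["file"]}
    raise LookupError(f"no stinger relevant to {event_key}")

library = [
    {"file": "swell_em.mid", "key": "E minor"},
    {"file": "stinger_c.mid", "key": "C major"},
]

placement = place_stinger(12.5, "A minor", library)
print(placement)  # stinger_c.mid: C major is the relative major of A minor
```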
Automated music composition and generation system employing an instrument selector for automatically selecting virtual instruments from a library of virtual instruments to perform the notes of the composed piece of digital music
An automated music composition and generation system for automatically composing and generating digital pieces of music using an automated music composition and generation engine driven by a set of emotion-type and style-type musical experience descriptors and time and/or space parameters provided by a system user. The automated music composition and generation engine includes an instrument subsystem supporting a library of virtual instruments, wherein each virtual instrument is capable of performing one or more notes of at least a portion of the composed piece of music, in response to the emotion-type and/or style-type musical experience descriptors; an instrument selector subsystem for automatically selecting one or more virtual instruments from the library, so that each selected virtual instrument performs one or more notes of at least a portion of the composed piece of music; and a digital piece creation subsystem for creating the digital piece of composed music by assembling the notes produced from the virtual instruments selected from the library.
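A descriptor-driven instrument selector of this kind can be sketched as a tag match between user descriptors and library entries. The tag sets and instrument names below are illustrative assumptions; the patented subsystem's actual selection logic is not disclosed in the abstract.

```python
# Hypothetical sketch: select virtual instruments from a library whose
# tagged style/emotion descriptors overlap the user's descriptors.
def select_instruments(descriptors: set, library: list) -> list:
    """Return names of library instruments matching at least one descriptor."""
    return [inst["name"] for inst in library
            if inst["tags"] & descriptors]

library = [
    {"name": "string_ensemble", "tags": {"dramatic", "classical"}},
    {"name": "analog_synth", "tags": {"energetic", "pop"}},
    {"name": "solo_cello", "tags": {"dramatic", "melancholy"}},
]

print(select_instruments({"dramatic"}, library))
# → ['string_ensemble', 'solo_cello']
```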
Automatically managing the musical tastes and preferences of system users based on user feedback and autonomous analysis of music automatically composed and generated by an automated music composition and generation system
An automated music composition and generation system having an automated music composition and generation engine for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user. The automated music composition and generation engine includes a user taste generation subsystem for automatically (i) determining the musical tastes and preferences of a system user based on user feedback and autonomous piece analysis, (ii) maintaining a system user profile reflecting the musical tastes and preferences of each system user, and (iii) using the musical taste and preference information to change or modify the musical experience descriptors provided to the system, to produce a digital piece of composed music that better reflects the musical tastes and preferences of the system user.
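The feedback-to-descriptor loop described above can be sketched as a weighted profile that biases future requests. The weight update rule, function names, and descriptor vocabulary are all illustrative assumptions, not the patented subsystem's actual design.

```python
# Hypothetical sketch: maintain a per-user taste profile as descriptor
# weights updated from feedback, then merge the strongest preferences
# into the descriptors of the next composition request.
def update_profile(profile: dict, descriptor: str, liked: bool) -> None:
    """Shift a descriptor's weight up or down based on user feedback."""
    profile[descriptor] = profile.get(descriptor, 0.0) + (0.1 if liked else -0.1)

def adjust_descriptors(requested: set, profile: dict, top_n: int = 1) -> set:
    """Merge the user's top-weighted preferences into the requested descriptors."""
    preferred = sorted(profile, key=profile.get, reverse=True)[:top_n]
    return requested | set(preferred)

profile = {}
update_profile(profile, "uplifting", liked=True)
update_profile(profile, "dark", liked=False)

print(adjust_descriptors({"cinematic"}, profile))  # "uplifting" is merged in
```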
METHOD AND APPARATUS FOR DISPLAYING MUSIC POINTS, AND ELECTRONIC DEVICE AND MEDIUM
Disclosed are a method and apparatus for displaying music points, an electronic device, and a medium. One specific embodiment of the method includes: acquiring audio material; analyzing initial music points in the audio material, wherein the initial music points include beat points and/or note onset points in the audio material; and, on an operation interface for video clipping, displaying identifiers of target music points on the clip timeline according to the position of the audio material on the clip timeline and the positions of the target music points within the audio material, wherein the target music points are some or all of the initial music points. This embodiment reduces the time a user spends processing audio material and marking music points, while preserving the flexibility of the tool.
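Mapping detected music points onto a clip timeline, as described above, can be sketched as follows. Beat detection itself is out of scope here; the detected beat times and the clip's in/out points are illustrative assumptions.

```python
# Hypothetical sketch: given detected beat/onset times (in seconds) within
# the audio material, compute where their identifiers fall on a clip
# timeline, for a clip placed at a given start position.
def timeline_markers(music_points: list, clip_start: float,
                     clip_in: float, clip_out: float) -> list:
    """Map in-range music points to positions on the clip timeline."""
    return [clip_start + (t - clip_in)
            for t in music_points
            if clip_in <= t <= clip_out]

beats = [0.5, 1.0, 1.5, 2.0, 2.5]  # detected beat points in the audio material
print(timeline_markers(beats, clip_start=10.0, clip_in=1.0, clip_out=2.0))
# → [10.0, 10.5, 11.0]
```

Points outside the clip's trimmed range are dropped, so only markers the user can actually act on are drawn.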
INFORMATION PROCESSING METHOD AND ELECTRONIC MUSICAL INSTRUMENT
An electronic musical instrument includes a communication unit and a processing unit. The communication unit is capable of performing short-range wireless communication with a terminal. While the electronic musical instrument and the terminal are in a state in which short-range wireless communication is possible, the terminal may repeat, in a predetermined operation pattern, at least one of two operations: transmitting a read request for data, and transmitting write-target data together with a write request for that data. When a read request is received via the communication unit, the processing unit transmits the corresponding data; when write-target data and its write request are received via the communication unit, the processing unit writes the write-target data.
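The read/write request handling on the instrument side can be sketched as a simple dispatcher. The message shapes, key names, and stored values below are illustrative assumptions; the patent does not specify a wire format.

```python
# Hypothetical sketch: a processing unit that answers read requests with
# the corresponding data and applies write requests carrying target data.
class ProcessingUnit:
    def __init__(self):
        self.storage = {"tone": "piano"}  # example instrument setting

    def handle(self, message: dict):
        """Dispatch a read or write request received over the wireless link."""
        if message["type"] == "read":
            return self.storage.get(message["key"])
        if message["type"] == "write":
            self.storage[message["key"]] = message["data"]
            return "ok"
        raise ValueError(f"unknown request type: {message['type']}")

unit = ProcessingUnit()
print(unit.handle({"type": "read", "key": "tone"}))                    # → piano
print(unit.handle({"type": "write", "key": "tone", "data": "organ"}))  # → ok
print(unit.handle({"type": "read", "key": "tone"}))                    # → organ
```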
SYSTEM FOR GENERATING AN OUTPUT FILE
A system for creating an output comprises a processing unit, a user input module operably connected to the processing unit, and a video monitor operably connected to the processing unit. The processing unit provides on the video monitor: a grid image comprising multiple cells, each cell representing a duration of time; and a selection area comprising multiple select icons, each select icon representing a source data file. The processing unit is configured such that a user can create a grid layout representing the correlation between individual selected source data files and one or more of the multiple cells. The processing unit produces the output based on the correlation.
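The cell-to-source-file correlation described above can be sketched as a mapping from cell index to the selected file, from which the output is produced. The fixed cell duration and file names are illustrative assumptions.

```python
# Hypothetical sketch: represent the grid layout as a mapping from cell
# index to a selected source file, then produce output entries ordered
# by cell, each placed at that cell's start time.
def produce_output(grid: dict, num_cells: int, cell_seconds: float) -> list:
    """Build (start_time, source_file) entries from the grid layout."""
    return [(i * cell_seconds, grid[i])
            for i in range(num_cells) if i in grid]

# The user drags select icons onto cells 0 and 2 of a 4-cell grid.
layout = {0: "drums.wav", 2: "vocals.wav"}
print(produce_output(layout, num_cells=4, cell_seconds=2.0))
# → [(0.0, 'drums.wav'), (4.0, 'vocals.wav')]
```

Empty cells simply contribute nothing to the output, matching the idea that only correlated cells drive the result.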