Patent classifications
G10H2210/021
Electronic music box
The music data memory stores pieces of music both within a group and outside the group. The next piece to be played is automatically determined by a random table from among the pieces within the group. A favorite or newest piece is weighted so that it is played more frequently within the group. A piece in the music data memory is automatically included in the group by the random table, and a newly downloaded piece is included in the group with priority. The most frequently played piece is excluded from the group in place of a newly included piece, though a favorite or newest piece may be exempted from exclusion. The next piece can be played at a tempo similar to that of the preceding piece, by tempo adjustment, piece replacement, or repetition of the same piece, so that baby cradling continues in synchronism with the same tempo across succeeding pieces.
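The selection and eviction scheme described above can be sketched as follows. This is an illustrative reading, not the patent's implementation; the function names, the 2x favorite weight, and the data structures are assumptions.

```python
import random

def next_piece(group, weights, favorites):
    """Pick the next piece from the group via a weighted random table.

    Favorite or newest pieces get a heavier weight (the 2x factor is an
    illustrative choice, not specified by the patent).
    """
    table = [weights[p] * (2.0 if p in favorites else 1.0) for p in group]
    return random.choices(group, weights=table, k=1)[0]

def admit(group, new_piece, play_counts, favorites):
    """Include a newly downloaded piece with priority, evicting the most
    frequently played piece; favorites are exempt from eviction."""
    candidates = [p for p in group if p not in favorites]
    if candidates:
        most_played = max(candidates, key=lambda p: play_counts.get(p, 0))
        group.remove(most_played)
    group.append(new_piece)
    return group
```

Exempting favorites in `admit` mirrors the abstract's note that a favorite or newest piece "may be an exception of exclusion".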
INFORMATION PROCESSING METHOD, IMAGE PROCESSING APPARATUS, AND PROGRAM
[Object] To propose an information processing method, image processing apparatus, and program capable of exciting the emotions of a viewer more effectively. [Solution] An information processing method including: analyzing a beat of input music; extracting a plurality of unit images from an input image; and generating, by a processor, editing information for switching the extracted unit images in accordance with the analyzed beat.
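The editing information above pairs each analyzed beat with the unit image to cut to at that instant. A minimal sketch, assuming beat times in seconds and a simple round-robin over the unit images (the pair encoding is an assumption, not the patent's format):

```python
def editing_information(beat_times, unit_images):
    """Cycle through the extracted unit images, switching at each beat.

    Returns (switch_time, image) pairs: at each beat time, the editor
    cuts to the next unit image in round-robin order.
    """
    return [(t, unit_images[i % len(unit_images)])
            for i, t in enumerate(beat_times)]
```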
METHOD FOR SONG MULTIMEDIA SYNTHESIS, ELECTRONIC DEVICE AND STORAGE MEDIUM
The disclosure provides a method for song multimedia synthesis, an electronic device, and a storage medium. Material obtaining modes are provided based on a song multimedia synthesis request, and user audios provided by a user are obtained based on a selected material obtaining mode. A user timbre output by a timbre extraction model is obtained by inputting the user audios into the timbre extraction model. Lyrics to be synthesized and a tune to be synthesized, provided by the user, are obtained based on the selected material obtaining mode, and a synthesized song multimedia is obtained by inputting the user timbre, the lyrics, and the tune into a song synthesis model.
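The two-stage pipeline (timbre extraction, then song synthesis) can be sketched abstractly. Both models are stand-ins here, passed as plain callables; the patent does not specify their interfaces.

```python
def synthesize_song(user_audios, lyrics, tune, timbre_model, synthesis_model):
    """Pipeline sketch: extract the user's timbre from their audio
    recordings, then feed timbre, lyrics, and tune to the synthesizer.

    timbre_model and synthesis_model are hypothetical callables standing
    in for the trained models described in the disclosure.
    """
    timbre = timbre_model(user_audios)
    return synthesis_model(timbre, lyrics, tune)
```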
COMPUTER-BASED SYSTEMS, DEVICES, AND METHODS FOR GENERATING MUSICAL COMPOSITIONS THAT ARE SYNCHRONIZED TO VIDEO
Computer-based systems, devices, and methods for generating musical compositions that are purposefully synchronized with video are described. A video timeline is defined with various time-markers that demarcate specific events in the video. A music timeline is generated based on the video timeline. The music timeline preserves the various time-markers from the video timeline. A computer-based musical composition system generates a musical composition based on the music timeline. The musical composition includes various musical events that align, synchronize, or coincide with the time-markers such that when the video and musical composition are played together the musical events align, synchronize, or coincide with the demarcated events in the video.
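One way to preserve the video timeline's time-markers in a music timeline is to quantize each marker to the nearest beat at the composition's tempo, so a musical event can land exactly on (or imperceptibly near) each demarcated video event. This quantization strategy is an assumption for illustration; the patent only requires that the markers be preserved and that musical events coincide with them.

```python
def music_timeline(video_markers, tempo_bpm):
    """Map video time-markers (seconds) onto a music timeline by
    snapping each marker to the nearest beat at the given tempo."""
    beat = 60.0 / tempo_bpm
    return [round(t / beat) * beat for t in video_markers]
```

At 120 BPM the beat grid is 0.5 s, so a marker at 2.2 s snaps to a musical event at 2.0 s.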
SYSTEM AND METHOD FOR GENERATING AN AUDIO FILE
A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event.
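The abstract requires the stinger's tagged key to be "relevant" to the event's detected key but does not define relevance. A common proxy is distance around the circle of fifths; the sketch below uses that as an assumed relevance measure, with dictionaries standing in for library entries.

```python
# Pitch classes ordered around the circle of fifths.
PITCH_CLASSES = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def key_distance(a, b):
    """Distance around the circle of fifths; smaller = more closely related."""
    ia, ib = PITCH_CLASSES.index(a), PITCH_CLASSES.index(b)
    d = abs(ia - ib)
    return min(d, 12 - d)

def pick_stinger(event_key, library):
    """Retrieve the library stinger whose tagged key is most relevant
    (closest on the circle of fifths) to the event's detected key."""
    return min(library, key=lambda s: key_distance(event_key, s["key"]))
```

G is one step from C on the circle, so a G-tagged stinger beats an F#-tagged one for a C-major event.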
SYSTEM AND METHOD FOR PERFORMANCE-BASED INSTANT ASSEMBLING OF VIDEO CLIPS
A system for instant assembly of video clips through a user's interactive performance, comprising a device operated by a user, wherein the device comprises: user interface means configured for input and output interaction with the user; a processing unit and a memory configured for creating a new video assembled by appending a plurality of video clip segments extracted from a plurality of video clips; and an I/O unit configured for access to the plurality of video clips. The user interface means is configured to detect a sequence of manual assembling commands and to display the plurality of video clip segments, the display order of the video segments being defined by the sequence of manual assembling commands; the processing unit and the memory are configured to record the appending of the video segments extracted from the plurality of video clips.
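The core operation, appending one segment per manual command in command order, can be sketched as below. The command encoding (clip index plus in/out points) is an assumption; strings stand in for frame sequences.

```python
def assemble(clips, commands):
    """Build the output video by appending one segment per manual
    assembling command, in the order the commands were issued.

    Each command is illustratively (clip_index, start, end); each clip
    is any sliceable sequence of frames.
    """
    return [clips[i][start:end] for i, start, end in commands]
```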
MEDIA CONTENT SYSTEM FOR ENHANCING REST
A media-playback device acquires a heart rate, selects a song with a first tempo, and initiates playback of the song. The song meets a set of qualification criteria and the first tempo is based on the heart rate, such as being equal to or less than the heart rate. The media-playback device also initiates playback of a binaural beat at a first frequency. Over a period of time, the binaural beat's first frequency is changed to a second frequency. Over the period of time, the first tempo can also be changed to a second tempo, where the second tempo is slower than the first tempo.
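The tempo choice and the binaural-beat frequency change described above can be sketched numerically. The 100 BPM cap and the linear ramp are illustrative assumptions; the patent only requires the first tempo to be based on the heart rate (e.g. at or below it) and the beat frequency to move from a first to a second value over time.

```python
def initial_tempo(heart_rate_bpm):
    """First song tempo, equal to or less than the acquired heart rate
    (the 100 BPM ceiling is an illustrative choice)."""
    return min(heart_rate_bpm, 100)

def beat_ramp(start_hz, end_hz, steps):
    """Change the binaural beat from its first frequency to a second
    frequency in equal linear steps over the period of time."""
    return [start_hz + (end_hz - start_hz) * i / (steps - 1)
            for i in range(steps)]
```

For rest enhancement the second frequency (and second tempo) would typically be lower than the first, e.g. `beat_ramp(10.0, 6.0, 5)` descends from 10 Hz to 6 Hz.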
Automated music composition and generation system employing virtual musical instrument libraries for producing notes contained in the digital pieces of automatically composed music
An automated music composition and generation system including a system user interface for enabling system users to review and select one or more musical experience descriptors, as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the system user interface, for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user, so as to automatically compose and generate one or more digital pieces of music in response to the musical experience descriptors and time and/or space parameters selected by the system user. Each digital piece of composed and generated music contains a set of musical notes arranged and performed in the digital piece of music. The engine includes: a digital piece creation subsystem and a digital audio sample producing subsystem supported by virtual musical instrument libraries.
Method of and system for automatically generating music compositions and productions using lyrical input and music experience descriptors
An automated music composition and generation process within an automated music composition and generation system driven by lyrical musical experience descriptors. The process involves the system user accessing said automated music composition and generation system, employing an automated music composition and generation engine having a system user interface. The system user interface is used to select and provide musical experience descriptors, including lyrics, to the automated music composition and generation engine for processing by said automated music composition and generation engine. The system user initiates the automated music composition and generation engine to compose and generate music based on the musical experience descriptors and lyrics provided.