Patent classifications
G10H1/368
A PORTABLE INTERACTIVE MUSIC PLAYER
The present disclosure relates to an interactive music player, the interactive music player adapted to allow a user to control and mix a plurality of simultaneously played audio tracks. The present disclosure also relates to a corresponding method and computer program product.
AUDIO DETECTION METHOD AND APPARATUS, COMPUTER DEVICE, AND READABLE STORAGE MEDIUM
This application provides an audio detection method performed by a computer device. The method includes: acquiring a target time point and a reference point of the target time point from target audio data; performing energy evaluation on the target time point according to an audio amplitude value of the target time point to obtain an energy evaluation value of the target time point; performing energy evaluation on the reference point according to an audio amplitude value of the reference point to obtain an energy evaluation value of the reference point; performing accuracy verification on the target time point according to the energy evaluation value of the target time point and the energy evaluation value of the reference point; and, if the accuracy verification on the target time point succeeds, adding the target time point as a target stress point to a target stress point set.
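The abstract does not specify how the energy evaluation or the verification criterion is computed. A minimal sketch in Python, assuming windowed mean-square amplitude as the energy measure and a fixed ratio threshold (both assumptions, not taken from the patent):

```python
import numpy as np

def local_energy(samples, idx, win=512):
    """Mean-square amplitude in a window centred on idx (assumed energy measure)."""
    lo, hi = max(0, idx - win // 2), min(len(samples), idx + win // 2)
    seg = samples[lo:hi]
    return float(np.mean(seg ** 2)) if len(seg) else 0.0

def verify_stress_point(samples, target_idx, ref_idxs, ratio=1.5):
    """Accuracy verification: accept the target time point as a stress point
    if its energy exceeds the mean reference energy by `ratio` (assumed rule)."""
    e_target = local_energy(samples, target_idx)
    e_ref = np.mean([local_energy(samples, i) for i in ref_idxs])
    return e_target >= ratio * e_ref

# Toy signal: a quiet floor with one loud burst at the candidate time point.
samples = np.full(44100, 0.01)
samples[22050:22050 + 256] = 0.9

target_stress_points = set()
if verify_stress_point(samples, 22050, [1000, 5000, 40000]):
    target_stress_points.add(22050)   # accuracy verification succeeded
```

The reference points here stand in for nearby, presumably non-accented positions; the real method may derive them differently.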
Information processing method
An information processing device 11 including: a control data generation unit that inputs analysis data X to be processed to a trained model that has learnt a relationship between analysis data X, which represents a time series of musical notes, and control data Y for controlling movements of an object that represents a performer, thereby generating control data Y according to the analysis data X.
SYSTEM AND METHOD FOR GENERATING AND EDITING A VIDEO
The invention provides a system and a computer-implemented method for generating and editing a video including providing a mobile communication device comprising a camera, a display, a central processing unit (CPU), a video generating application and a memory. Next, starting the video generating application, and then opening the camera and providing camera tutorials. The camera tutorials comprise instructions for camera positioning, camera moving, and camera aligning while taking videos. Next, taking videos of a scene following the instructions for camera positioning, camera moving, and camera aligning while taking videos. Next, uploading the videos to the memory, editing the videos and producing a composite video for the scene. The camera tutorials include a “moving forward/backward” tutorial directing a user first to hold the camera still, to align a horizontal view line in the display with a marker line, and then to move the user's body forward or backward while taking a video of the scene. The editing of the videos includes slowing the videos down, and matching rhythm of music accompanying each video to transitions of consecutive videos. The slowing down of the videos includes removing every other frame.
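One way to read "matching rhythm of music accompanying each video to transitions" is to snap each cut point to the nearest beat of the accompanying track. A minimal sketch under that assumption, with a beat grid computed from a constant tempo (also an assumption):

```python
def beat_times(bpm, count, offset=0.0):
    """Beat timestamps in seconds for a constant-tempo track (an assumption)."""
    return [offset + i * 60.0 / bpm for i in range(count)]

def snap_transitions_to_beats(transitions, beats):
    """Move each inter-clip transition to the nearest beat so cuts land on the rhythm."""
    return [min(beats, key=lambda b: abs(b - t)) for t in transitions]

beats = beat_times(120, 16)                     # 120 BPM -> a beat every 0.5 s
snap_transitions_to_beats([0.7, 2.3, 3.9], beats)
# -> [0.5, 2.5, 4.0]
```

The patent may instead adjust the music to the cuts; this sketch shows only the simpler direction of moving cuts onto the beat grid.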
Systems and methods for transferring musical drum samples from slow memory to fast memory
An electronic-drum module for connection to one or more electronic-drum pads is provided. The module includes an electronic display; a first memory storing audio files for playback when the playback is triggered by a signal received from a pad; and one or more processors coupled to the display and the memory. The processors are configured to receive an instruction to transfer a set of samples. The set of samples is associated with a priority-instruction and includes a first subset of samples and a second subset of samples. The processors are also configured to transfer the first subset of samples from a second memory to the first memory based on the priority-instruction before transferring the second subset of samples, and to transfer the second subset of samples from the second memory to the first memory.
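The two-phase, priority-first transfer can be sketched as follows; the ordered `priority` list standing in for the patent's priority-instruction, and the dicts standing in for the slow and fast memories, are assumed representations:

```python
def transfer_samples(slow_mem, fast_mem, priority):
    """Transfer the samples named in `priority` (an assumed, ordered form of the
    patent's priority-instruction) before all remaining samples in slow memory."""
    remaining = [name for name in slow_mem if name not in priority]
    order = list(priority) + remaining       # first subset strictly before second
    for name in order:
        fast_mem[name] = slow_mem[name]      # stand-in for the real slow->fast copy
    return order

slow = {"kick": b"<pcm>", "snare": b"<pcm>", "crash": b"<pcm>", "tom": b"<pcm>"}
fast = {}
transfer_samples(slow, fast, ["snare", "kick"])
# -> ['snare', 'kick', 'crash', 'tom']
```

The point of the ordering is that high-priority sounds (say, the snare for the current kit) become triggerable as early as possible while the rest of the set is still loading.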
DATA STRUCTURE FOR COMPUTER GRAPHICS, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM
The present invention is designed to allow easy synchronization of the movement of a computer graphics (CG) model with sound data. The data structure according to an embodiment of the present invention relates to a computer graphics (CG) model and includes first time-series information for designating the coordinates of the components of the CG model on a per-beat basis; the first time-series information is used on a computer to process the CG model.
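A minimal sketch of such per-beat time-series information, with linear interpolation between beat-indexed coordinates (the class layout and the interpolation scheme are assumptions, not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class BeatKeyframes:
    """First time-series information: component coordinates designated per beat."""
    coords: dict = field(default_factory=dict)   # beat index -> (x, y, z)

    def at_beat(self, beat):
        """Position at a (possibly fractional) beat, linearly interpolated."""
        lo, hi = int(beat), int(beat) + 1
        if hi not in self.coords:
            return self.coords[lo]
        t = beat - lo
        return tuple(a + t * (b - a)
                     for a, b in zip(self.coords[lo], self.coords[hi]))

# A component that moves from the origin to (1, 2, 0) over one beat:
arm = BeatKeyframes({0: (0.0, 0.0, 0.0), 1: (1.0, 2.0, 0.0)})
arm.at_beat(0.5)   # -> (0.5, 1.0, 0.0): halfway through the beat
```

Because the keys are beat indices rather than wall-clock times, the same keyframes stay synchronized with the music at any tempo.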
AUDIOVISUAL COLLABORATION SYSTEM AND METHOD WITH SEED/JOIN MECHANIC
User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work and mixing, to seed further contributions of one or more joiners, a user's captured media content for at least some portions of the audio (or audiovisual) work. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or other duet part, rapping, talking, clapping, recording video, adding a video clip from camera roll, etc. The resulting group performance, whether full-length or just a chunk, may be posted, livestreamed, or otherwise disseminated in a social network.
Intelligent system for matching audio with video
An intelligent system for matching audio with video of the present invention provides a video analysis module targeting color tone, storyboard pace, video dialogue, length and category, the director's special requirements, actors' expressions, movement, weather, scene, buildings, spatial and temporal attributes, and objects, and a music analysis module targeting recorded music form, sectional turns, style, melody, and emotional tension. An AI matching module then matches video characteristics from the video analysis module with musical characteristics from the music analysis module, so as to quickly complete a creative composition selection function with respect to matching audio with a video.
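The abstract leaves the AI matching itself unspecified; one common realisation is to score video and music feature vectors by cosine similarity and pick the best-scoring track. A sketch under that assumption, with all feature names and values invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(video_vec, music_library):
    """Pick the track whose (assumed) feature vector is closest to the video's."""
    return max(music_library, key=lambda name: cosine(video_vec, music_library[name]))

# Hypothetical features, e.g. (calmness, tempo energy, emotional tension):
library = {"calm_piano": [0.9, 0.1, 0.2], "fast_rock": [0.1, 0.9, 0.8]}
best_match([0.8, 0.2, 0.3], library)   # -> 'calm_piano'
```

The patent's modules would supply the feature vectors from their respective analyses; this sketch only shows the matching step.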
Data synchronisation
The present invention relates to a method and apparatus to synchronise audio and video data. More particularly, the present invention relates to a loop-based audio-visual mixing apparatus and method for synchronising a plurality of videos and their corresponding audio streams to create audio-visual compositions. According to one aspect, there is provided a method for creating a synchronised lineal sequence from multiple inputs of audio and video data, comprising the steps of: providing a first input, comprising audio and video data; providing one or more subsequent inputs, comprising audio and video data; determining at least one rhythm metric unit for each input; and queueing the or each subsequent input such that the or each subsequent input is triggered at the beginning of the next said rhythm metric unit of a determined input.
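Triggering each queued input at the start of the next rhythm metric unit amounts to launch quantisation. Assuming the unit is a bar of known, constant tempo (an assumption; the patent allows other rhythm metric units):

```python
import math

def next_trigger_time(now, unit_seconds):
    """Quantise a queued input to the start of the next rhythm-metric unit,
    so every loop enters on the grid of the already-playing input."""
    return math.ceil(now / unit_seconds) * unit_seconds

bar = 4 * 60.0 / 120          # one 4-beat bar at 120 BPM = 2.0 s
next_trigger_time(3.2, bar)   # -> 4.0: the queued clip starts at the next bar
```

An input queued exactly on a boundary starts immediately; everything else waits, which is what keeps the combined audio and video streams in sync.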
SYSTEM AND METHOD FOR ENHANCING MULTIMEDIA CONTENT WITH VISUAL EFFECTS AUTOMATICALLY BASED ON AUDIO CHARACTERISTICS
Exemplary embodiments of the present disclosure are directed towards a system for enhancing multimedia content with visual effects based on audio characteristics. The system comprises a computing device with a multimedia content enhancing module that enables the end-user to record multimedia content using a camera, to select an audio track and combine it with the recorded multimedia content, and to send the audio track and the recorded multimedia content to a cloud server. The cloud server comprises a multimedia analyzing and visual effects retrieving module configured to receive and analyze the beat characteristics of the audio track and the recorded multimedia content, and to categorize visual effects and filters and deliver them to the computing device. The multimedia content enhancing module displays the categorized visual effects and filters on the computing device, enables the end-user to select and apply them to the multimedia content to create enhanced multimedia content, and enables the end-user to share and post the enhanced multimedia content from the computing device.
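How the analyzed beat characteristics map to effect and filter categories is not specified in the abstract; a deliberately simple sketch that buckets by tempo (the thresholds and category names are invented for illustration):

```python
def categorize_effects(bpm):
    """Map an analysed tempo to visual-effect/filter categories
    (assumed thresholds; the patent leaves the criteria unspecified)."""
    if bpm < 90:
        return ["slow-fade", "soft-glow"]
    if bpm < 130:
        return ["pulse", "color-shift"]
    return ["strobe", "fast-cut"]

categorize_effects(124)   # -> ['pulse', 'color-shift']
```

In the described system this step would run on the cloud server, with the resulting categories delivered to the computing device for the end-user to choose from.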