Patent classifications
G10H2210/076
HARMONY-AWARE HUMAN MOTION SYNTHESIS WITH MUSIC
A method and device for harmony-aware audio-driven motion synthesis are provided. The method includes determining a plurality of testing meter units according to an input audio, each testing meter unit corresponding to an input audio sequence of the input audio, obtaining an auditory input corresponding to each testing meter unit, obtaining an initial pose of each testing meter unit as a visual input based on a visual motion sequence synthesized for a previous testing meter unit, and automatically generating a harmony-aware motion sequence corresponding to the input audio using a generator of a generative adversarial network (GAN) model. The GAN model is trained by incorporating a hybrid loss function. The hybrid loss function includes a multi-space pose loss, a harmony loss, and a GAN loss. The harmony loss is determined according to beat consistencies of audio-visual beat pairs.
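The hybrid loss described above could be sketched as a weighted sum of its three terms. The beat-matching rule and the loss weights below are illustrative assumptions, not the patent's actual formulation:

```python
def harmony_loss(audio_beats, visual_beats, tolerance=0.1):
    """Fraction of audio beats with no visual beat within `tolerance` seconds.

    Beat times are in seconds; this nearest-match rule is an illustrative
    stand-in for the patent's beat-consistency measure.
    """
    if not audio_beats:
        return 0.0
    misses = sum(
        1 for ab in audio_beats
        if not any(abs(ab - vb) <= tolerance for vb in visual_beats)
    )
    return misses / len(audio_beats)

def hybrid_loss(pose_loss, harmony, gan_loss,
                w_pose=1.0, w_harmony=0.5, w_gan=0.1):
    # Weighted combination of the three loss terms (weights are hypothetical).
    return w_pose * pose_loss + w_harmony * harmony + w_gan * gan_loss
```

In a real training loop the pose and GAN terms would come from the network outputs; here they are plain numbers for illustration.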
Systems and methods for embedding data in media content
An electronic device modifies a first media content item by superimposing a first set of data over a first accented musical event. The first accented musical event has a first audio profile. The first set of data has a second audio profile configured to be masked by the first audio profile during playback of the first media content item. The electronic device transmits, to a second electronic device, the modified first media content item.
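The masking idea above can be sketched by scaling a data signal to a small fraction of the host audio's peak before superimposing it at the accented event. The fixed scaling ratio is an illustrative stand-in for the audio-profile masking the abstract describes:

```python
def superimpose_data(samples, data_signal, event_index, mask_ratio=0.05):
    """Mix a data signal over an accented event, scaled well below the
    host audio's peak level so playback masks it.

    `samples` and `data_signal` are plain lists of floats; the fixed
    `mask_ratio` is an illustrative assumption.
    """
    out = list(samples)
    peak = max(abs(s) for s in samples) or 1.0
    end = min(len(out), event_index + len(data_signal))
    for i in range(event_index, end):
        out[i] += mask_ratio * peak * data_signal[i - event_index]
    return out
```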
METHODS, INFORMATION PROCESSING DEVICE, AND IMAGE DISPLAY SYSTEM FOR ELECTRONIC MUSICAL INSTRUMENTS
A method of determining a data output timing, performed by at least one processor in an information processing device for an electronic musical instrument, includes, via the at least one processor: obtaining data of a first performance operation on the electronic musical instrument by a user; obtaining data of a second performance operation on the electronic musical instrument by the user; and determining the data output timing for outputting data to the user based on a time interval between the first and second performance operations.
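Deriving an output timing from the interval between two operations can be sketched as follows; treating the inter-operation interval as the output period is an illustrative assumption:

```python
def output_timing(t_first, t_second, beats_ahead=1):
    """Schedule the next data output one interval after the second operation.

    Times are in seconds. Using the inter-operation interval as the output
    period (and the `beats_ahead` multiplier) is an illustrative choice.
    """
    interval = t_second - t_first
    if interval <= 0:
        raise ValueError("second operation must follow the first")
    return t_second + beats_ahead * interval
```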
Method for detecting audio signal beat points of bass drum, and terminal
A method for detecting audio signal beat points of a bass drum, and a terminal. The method comprises: acquiring several intrinsic mode functions based on an inputted audio signal to be detected; calculating instantaneous signals, wherein the instantaneous signals include instantaneous strength signals and instantaneous frequency signals corresponding to the several intrinsic mode functions; acquiring characteristic signals of the bass drum based on the instantaneous strength signals and the instantaneous frequency signals corresponding to the several intrinsic mode functions; performing peak detection on the characteristic signals to acquire a plurality of peak points; and acquiring the beat points of the bass drum based on the plurality of peak points.
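The final two stages — deriving a bass-drum characteristic signal and picking its peaks — could be sketched as below. The low-frequency weighting rule and the three-point peak test are illustrative assumptions; the patent's exact combination of the mode functions' signals is not reproduced here:

```python
def bass_drum_characteristic(strength, frequency, max_bass_hz=150.0):
    """Keep instantaneous strength only where the instantaneous frequency
    falls in an assumed bass-drum band (illustrative rule)."""
    return [s if f <= max_bass_hz else 0.0
            for s, f in zip(strength, frequency)]

def find_peaks(signal, threshold=0.0):
    """Indices of local maxima above `threshold` (simple 3-point test)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] > signal[i - 1]
            and signal[i] >= signal[i + 1]]
```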
Tempo setting device and control method thereof
Disclosed herein is a tempo setting device including a detecting unit that deems a predetermined utterance a detection target and detects the utterance of the detection target through sound recognition, a tempo deciding unit that decides a tempo based on a detection interval of the detected utterance in response to the utterance of the detection target being consecutively detected two or more times by the detecting unit, and a setting unit that sets the tempo decided by the tempo deciding unit.
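Deciding a tempo from the intervals between consecutive detections can be sketched as follows; averaging the intervals into a BPM value is an illustrative choice:

```python
def decide_tempo(detection_times, min_count=2):
    """Derive a tempo in BPM from consecutive detections of the target
    utterance.

    Requires at least `min_count` detections; averaging the intervals
    is an illustrative assumption.
    """
    if len(detection_times) < min_count:
        return None  # not enough consecutive detections yet
    intervals = [b - a for a, b in zip(detection_times, detection_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return 60.0 / mean_interval
```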
System and method for generating musical score
A method for generating a musical score based on user performance during playing a keyboard instrument may include detecting a status change of a plurality of execution devices of the keyboard instrument. The method may include generating a first signal according to the detected status change. The method may include generating a second signal indicating a plurality of timestamps. The method may include determining a tune of the musical score based on the first signal. The method may include determining a rhythm of the musical score based on the second signal. The method may further include generating the musical score based on the tune and the rhythm of the musical score.
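Determining a rhythm from the second signal's timestamps could be sketched as quantizing the intervals between key events onto a beat grid. The grid resolution (quarter-beat) and beat length are illustrative assumptions:

```python
def quantize_rhythm(timestamps, beat_seconds=0.5):
    """Convert note-on timestamps into note durations in beats,
    snapped to the nearest quarter-beat (illustrative grid).
    """
    durations = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return [round((d / beat_seconds) * 4) / 4 for d in durations]
```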
Method of combining audio signals
A method for automatically generating an audio signal, the method comprising: receiving a source audio signal; analyzing the source audio signal to identify a musical parameter characteristic thereof; obtaining a supplemental audio signal based on the identified musical parameter characteristic; and combining the source audio signal and the supplemental audio signal to form an extended audio signal.
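The final combining step could be sketched as mixing the supplemental signal under the source. Looping the supplemental signal to the source length and the fixed gain are illustrative assumptions:

```python
def combine_signals(source, supplemental, supplemental_gain=0.5):
    """Mix a supplemental signal under the source to form the extended
    audio signal.

    Signals are plain lists of float samples; the supplemental signal is
    looped to the source length (illustrative choice).
    """
    return [s + supplemental_gain * supplemental[i % len(supplemental)]
            for i, s in enumerate(source)]
```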
Enhancing music for repetitive motion activities
A method of providing repetitive motion therapy comprising providing access to audio content; selecting audio content for delivery to a patient; performing an analysis on the selected audio content, the analysis identifying audio features of the selected audio content, and extracting rhythmic and structural features of the selected audio content; performing an entrainment suitability analysis on the selected audio content; generating entrainment assistance cue(s) for the selected audio content, the assistance cue(s) including a sound added to the audio content; applying the assistance cue(s) to the audio content simultaneously with playing the selected audio content; evaluating a therapeutic effect on the patient, wherein the selected audio content continues to play when a therapeutic threshold is detected, and a second audio content is selected for delivery to the patient when a therapeutic threshold is not detected.
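The threshold-gated content selection at the end of the method can be sketched as follows; the numeric threshold and the playlist-rotation policy are illustrative assumptions:

```python
def next_content(current_id, playlist, therapeutic_effect, threshold=0.7):
    """Keep playing the current item while the evaluated effect meets the
    threshold; otherwise select the next item in the playlist.

    The 0-to-1 effect score, the threshold value, and the wrap-around
    rotation are all illustrative assumptions.
    """
    if therapeutic_effect >= threshold:
        return current_id
    idx = playlist.index(current_id)
    return playlist[(idx + 1) % len(playlist)]
```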
Animation effect attachment based on audio characteristics
Systems and methods for rendering a video effect to a display are described. More specifically, video data and audio data are obtained. The video data is analyzed to determine one or more attachment points of a target object that appears in the video data. The audio data is analyzed to determine audio characteristics. A video effect associated with an animation to be added to the one or more attachment points is determined based on the audio characteristics. A rendered video is generated by applying the video effect to the video data.
VIRTUAL TUTORIALS FOR MUSICAL INSTRUMENTS WITH FINGER TRACKING IN AUGMENTED REALITY
Systems, devices, media, and methods are described for presenting a tutorial in augmented reality on the display of a smart eyewear device. The system includes a marker registration utility for setting a marker on a musical instrument, a localization utility for locating the eyewear device relative to the marker location and the instrument, a virtual object rendering utility for presenting a series of virtual tutorial objects on the display near one or more actuators on the instrument, and a hand tracking utility for tracking the performer's finger locations in real time during playback of a song file. A high-definition video camera captures sequences of frames of video data. The series of virtual tutorial objects, in one example, includes graphical elements presented on a virtual scroll that appears to move toward the instrument at a speed correlated with the song tempo. The hand tracking utility calculates a set of expected fingertip coordinates based on a detected hand shape and a library of hand poses and landmarks.