Patent classifications
G10H1/40
Method and System for Processing Input Data
A method for analyzing one or more notes in a musical composition, comprising, for each note: getting a note, a chord, and a scale; and computing note properties using the note's value, the chord, and the scale. A method for transforming one or more input notes into one or more new notes, comprising, for each input note: getting an input note and its note properties; getting a new chord and a new scale for the input note; getting a list of candidate notes; computing distances between the input note and every note in the list, using the input note's value, the input note's note properties, the candidate note's value, and the candidate note's note properties; finding the candidate that has the minimal distance; and setting a new note value using the note value of the candidate with the minimal distance.
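The minimal-distance mapping described in the abstract can be sketched as follows. This is an illustrative reading, not the patented implementation: the note properties are simplified to chord-tone and scale-tone membership of a MIDI pitch, and the distance weights are arbitrary assumptions.

```python
def note_properties(note, chord, scale):
    """Simple properties of a MIDI note relative to a chord and scale
    (each given as a set of pitch classes 0-11)."""
    pc = note % 12
    return {"is_chord_tone": pc in chord, "is_scale_tone": pc in scale}

def distance(in_note, in_props, cand_note, cand_props):
    """Weighted distance: prefer nearby pitches that keep the same
    harmonic role. Penalty weights are arbitrary for this sketch."""
    d = abs(in_note - cand_note)
    if in_props["is_chord_tone"] != cand_props["is_chord_tone"]:
        d += 6
    if in_props["is_scale_tone"] != cand_props["is_scale_tone"]:
        d += 3
    return d

def transform_note(in_note, old_chord, old_scale, new_chord, new_scale, candidates):
    """Return the candidate note with minimal distance to the input note."""
    in_props = note_properties(in_note, old_chord, old_scale)
    return min(
        candidates,
        key=lambda c: distance(in_note, in_props, c,
                               note_properties(c, new_chord, new_scale)),
    )
```

For example, moving E4 (MIDI 64, a chord tone over C major) to a bar of F major would pick F4 (MIDI 65), the nearest candidate that is again a chord tone.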
Method for detecting audio signal beat points of bass drum, and terminal
A method for detecting audio signal beat points of a bass drum, and a terminal. The method comprises: acquiring several intrinsic mode functions based on an inputted audio signal to be detected; calculating instantaneous signals, wherein the instantaneous signals include instantaneous strength signals and instantaneous frequency signals corresponding to the several intrinsic mode functions; acquiring characteristic signals of the bass drum based on the instantaneous strength signals and the instantaneous frequency signals corresponding to the several intrinsic mode functions; performing peak detection on the characteristic signals to acquire a plurality of peak points; and acquiring the beat points of the bass drum based on the plurality of peak points.
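The final two steps (peak detection, then selecting beat points from the peaks) could look like the sketch below. The empirical-mode-decomposition and instantaneous-signal stages are omitted, and the keep-peaks-above-a-fraction-of-the-maximum rule is this sketch's assumption, not the patent's.

```python
def detect_peaks(signal):
    """Indices where the signal is a strict local maximum."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1]]

def beat_points(signal, ratio=0.5):
    """Keep peaks whose height exceeds `ratio` times the tallest peak,
    treating those as bass-drum beat points."""
    peaks = detect_peaks(signal)
    if not peaks:
        return []
    thresh = ratio * max(signal[i] for i in peaks)
    return [i for i in peaks if signal[i] >= thresh]
```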
Tempo setting device and control method thereof
Disclosed herein is a tempo setting device including a detecting unit that deems a predetermined utterance as a detection target and detects the utterance of the detection target through recognizing sound, a tempo deciding unit that decides a tempo based on a detection interval of the detected utterance in response to two or more consecutive detections of the utterance of the detection target by the detecting unit, and a setting unit that sets the tempo decided by the tempo deciding unit.
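The tempo-deciding step reduces to converting the interval between consecutive detections into beats per minute. A minimal sketch, assuming the intervals are averaged (the abstract only requires two or more consecutive detections):

```python
def tempo_from_detections(timestamps_sec):
    """Decide a tempo (BPM) from two or more detection timestamps of the
    target utterance; return None if there are too few detections."""
    if len(timestamps_sec) < 2:
        return None
    intervals = [b - a for a, b in zip(timestamps_sec, timestamps_sec[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return 60.0 / mean_interval
```

Counting off at half-second intervals would thus set a tempo of 120 BPM.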
ACOUSTIC DEVICE, DISPLAY CONTROL METHOD, AND DISPLAY CONTROL PROGRAM
An acoustic device includes: a first display unit; a second display unit; an operation unit configured to receive a user's operation; a judging unit configured to judge a type of the operation performed on the operation unit; and a display controller configured to, in response to the type of the operation judged by the judging unit, change display contents of the first display unit to display contents corresponding to the type of the operation and display on the second display unit at least a part of the display contents having been displayed on the first display unit.
ARRANGEMENT GENERATION METHOD, ARRANGEMENT GENERATION DEVICE, AND GENERATION PROGRAM
An arrangement generation method executed by a computer includes acquiring target musical piece data that include performance information that indicates a melody and a chord of at least a part of a musical piece and include meta information that indicates characteristics of at least the part of the musical piece, generating, from the acquired target musical piece data, by using a generative model trained by machine learning, arrangement data obtained by arranging the performance information in accordance with the meta information, and outputting the generated arrangement data.
VIRTUAL TUTORIALS FOR MUSICAL INSTRUMENTS WITH FINGER TRACKING IN AUGMENTED REALITY
Systems, devices, media, and methods are described for presenting a tutorial in augmented reality on the display of a smart eyewear device. The system includes a marker registration utility for setting a marker on a musical instrument, a localization utility for locating the eyewear device relative to the marker location and the instrument, a virtual object rendering utility for presenting a series of virtual tutorial objects on the display near one or more actuators on the instrument, and a hand tracking utility for tracking the performer's finger locations in real time during playback of a song file. A high-definition video camera captures sequences of frames of video data. The series of virtual tutorial objects, in one example, includes graphical elements presented on a virtual scroll that appears to move toward the instrument at a speed correlated with the song tempo. The hand tracking utility calculates a set of expected fingertip coordinates based on a detected hand shape and a library of hand poses and landmarks.
AUDIO ONSET DETECTION METHOD AND APPARATUS
An audio onset detection method and apparatus, an electronic device, and a computer readable storage medium. The audio onset detection method comprises: determining a first voice frequency spectrum parameter corresponding to each frequency band according to a frequency domain signal corresponding to an audio signal; for each frequency band, determining a second voice frequency spectrum parameter of a current frequency band according to the first voice frequency spectrum parameter of the current frequency band and the first voice frequency spectrum parameters of frequency bands positioned before the current frequency band in a time sequence; and determining one or more onset positions of notes and syllables in the audio signal according to the second voice frequency spectrum parameters corresponding to the frequency bands.
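One plausible reading of this abstract is a spectral-flux-style novelty function: per band, the second parameter is derived from the current first parameter and those of earlier frames. In the sketch below the "earlier frames" contribution is a running mean and only positive increases count; both choices are this sketch's assumptions, and the per-frame magnitudes stand in for the first voice frequency spectrum parameters.

```python
def onset_positions(spectrogram, history=3):
    """spectrogram: list of frames, each a list of per-band magnitudes.
    Returns frame indices judged to be onsets."""
    novelty = []
    for t, frame in enumerate(spectrogram):
        total = 0.0
        for b, value in enumerate(frame):
            past = [spectrogram[u][b] for u in range(max(0, t - history), t)]
            baseline = sum(past) / len(past) if past else 0.0
            total += max(0.0, value - baseline)  # second parameter per band
        novelty.append(total)
    # onsets: strict local maxima of the band-summed novelty curve
    return [t for t in range(1, len(novelty) - 1)
            if novelty[t - 1] < novelty[t] > novelty[t + 1]]
```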
METHOD AND DEVICE FOR FLATTENING POWER OF MUSICAL SOUND SIGNAL, AND METHOD AND DEVICE FOR DETECTING BEAT TIMING OF MUSICAL PIECE
A method for flattening power of a musical sound signal, the method comprising: determining second values corresponding to respective first values that indicate power at a plurality of time points of the musical sound signal, each second value being determined on the basis of a result of a comparison between the present value of the first value and the present value of the second value; and flattening the plurality of first values using the second values corresponding to the plurality of first values, respectively, wherein the second value changes along a predetermined trajectory when, in the result of the comparison, a state in which the present value of the second value is larger than the present value of the first value continues.
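Read as an envelope follower, the scheme above might be sketched like this: the second value rises to meet the power value, otherwise decays along a predetermined (here exponential) trajectory, and flattening divides the power by the envelope. The decay factor and the divide-based flattening are assumptions of this sketch.

```python
def flatten_power(power, decay=0.99, eps=1e-9):
    """Return (flattened, envelope) for a sequence of power values."""
    envelope = []
    env = 0.0
    for p in power:
        if p >= env:      # comparison of present first and second values
            env = p       # rise immediately with the signal
        else:
            env *= decay  # predetermined (exponential) decay trajectory
        envelope.append(env)
    flattened = [p / (e + eps) for p, e in zip(power, envelope)]
    return flattened, envelope
```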