Patent classifications
G10H2210/071
Song analysis device and song analysis program
A music piece analyzer includes: a beat interval acquiring unit configured to acquire a beat interval in music piece data; a candidate detector configured to detect, as candidates for the sounding positions of a snare drum, sounding positions in the music piece data where the change amount for sounding is equal to or greater than a predetermined threshold; and a sounding position determination unit configured to determine, from among those candidates, that the candidates spaced at a two-beat interval, based on the beat interval acquired by the beat interval acquiring unit, are the sounding positions of the snare drum.
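The claimed two-stage detection can be sketched roughly as follows; a minimal illustration, assuming onsets are given as (time, change-amount) pairs, with all names (`detect_snare_positions`, `beat_interval`, `tolerance`) illustrative rather than from the patent:

```python
# Sketch: keep onsets whose change amount meets the threshold, then
# keep those candidates that recur at a two-beat spacing (snare hits).
def detect_snare_positions(onset_strengths, threshold, beat_interval, tolerance=0.05):
    """onset_strengths: list of (time_sec, change_amount) pairs."""
    # Stage 1: candidates where the change amount meets the threshold
    candidates = [t for t, change in onset_strengths if change >= threshold]
    # Stage 2: keep candidates spaced at a two-beat interval
    target = 2 * beat_interval
    snare = []
    for i, t in enumerate(candidates):
        for u in candidates[i + 1:]:
            if abs((u - t) - target) <= tolerance:
                snare.extend([t, u])
                break
    return sorted(set(snare))
```

An off-beat onset that passes the threshold but has no partner two beats away is discarded in the second stage.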
Automatic translation using deep learning
Audio data of an original work is received. Text in the audio data is translated to a target language. The audio data is passed to a first deep learning model to learn voice features in the audio data. The audio data is passed to a second deep learning model to learn audio properties in the audio data. The translated text is synchronized to play, in a synthesized voice, at the position of the original text in the original work. Translated audio data of the original work is created by combining the synchronized translated text in the synthesized voice with the music of the audio data.
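The described flow can be sketched as a pipeline with the deep-learning stages as injustected callables; all function and parameter names here are illustrative assumptions, not any specific library's API:

```python
# Sketch of the dubbing pipeline: translate each timed text segment,
# synthesize it with the learned voice/audio features, and recombine
# with the original music track.
def create_dubbed_audio(audio_data, segments, target_lang,
                        translate, voice_model, audio_model,
                        synthesize, extract_music, combine):
    """segments: list of (start_time, source_text) pairs."""
    voice_features = voice_model(audio_data)   # first model: voice traits
    audio_props = audio_model(audio_data)      # second model: audio properties
    # Translate each segment, keeping its original start position
    dubbed = [(start, synthesize(translate(text, target_lang),
                                 voice_features, audio_props))
              for start, text in segments]
    music = extract_music(audio_data)          # accompaniment without vocals
    return combine(music, dubbed)              # final translated audio
```

Because each stage is injected, the sketch only encodes the ordering and data flow of the claim, not any particular model.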
METHOD OF TRAINING A NEURAL NETWORK TO REFLECT EMOTIONAL PERCEPTION AND RELATED SYSTEM AND METHOD FOR CATEGORIZING AND FINDING ASSOCIATED CONTENT
A property vector representing extractable measurable properties, such as musical properties, of a file is mapped to semantic properties for the file. This is achieved by using artificial neural networks “ANNs” in which weights and biases are trained to align a distance dissimilarity measure in property space for pairwise comparative files back towards a corresponding semantic distance dissimilarity measure in semantic space for those same files. The result is that, once optimised, the ANNs can process any file, parsed with those properties, to identify other files sharing common traits reflective of emotional perception, thereby rendering a more reliable and true-to-life result of similarity/dissimilarity. This contrasts with simply training a neural network to consider extractable measurable properties that, in isolation, do not provide a reliable contextual relationship to the real world.
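The pairwise training objective described above can be sketched as a loss that pulls property-space distances toward known semantic distances; a pure-Python illustration under that assumption, with all names (`pairwise_alignment_loss`, `semantic_dist`) hypothetical:

```python
import math

def euclidean(a, b):
    # Distance between two embeddings in property space
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pairwise_alignment_loss(embeddings, semantic_dist):
    """embeddings: {file_id: vector}; semantic_dist: {(i, j): distance}."""
    loss = 0.0
    for (i, j), target in semantic_dist.items():
        d = euclidean(embeddings[i], embeddings[j])
        loss += (d - target) ** 2   # penalize mismatch between the two spaces
    return loss / len(semantic_dist)
```

Training would adjust the network producing the embeddings to drive this loss down, so that nearby files in property space are also semantically (emotionally) similar.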
Special effect synchronization method and apparatus, and mobile terminal
A special effect synchronization method and apparatus, and a mobile terminal, are provided. The method may include: obtaining timestamps marked corresponding to rhythm points of a music file; in response to playing a video file, playing the music file and adding a special effect to the video file based on the timestamps; and in response to the playback of the video file ending, generating a synthesized file by synthesizing the video file, the music file, and the special effect.
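The scheduling step can be sketched as mapping rhythm-point timestamps to effect events on the video timeline; a minimal illustration, with all names (`schedule_effects`, `synthesize_file`) assumed rather than taken from the patent:

```python
# Sketch: keep only rhythm timestamps that fall inside the video, then
# tag the matching frames with the special effect when synthesizing.
def schedule_effects(rhythm_timestamps, video_duration, effect="flash"):
    """Return (time, effect) events for timestamps inside the video."""
    return [(t, effect) for t in sorted(rhythm_timestamps)
            if 0 <= t <= video_duration]

def synthesize_file(video_frames_by_time, events):
    """Tag frames whose time matches a scheduled effect event."""
    tagged = dict(video_frames_by_time)
    for t, effect in events:
        if t in tagged:
            tagged[t] = (tagged[t], effect)
    return tagged
```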
Predictive System Based on a Semantically Trained Artificial Neural Network and ANN
A property vector representing extractable measurable properties, such as musical properties, of a file is mapped to semantic properties for the file. This is achieved by using artificial neural networks “ANNs” in which weights and biases are trained to align a distance dissimilarity measure in property space for pairwise comparative files back towards a corresponding semantic distance dissimilarity measure in semantic space for those same files. The result is that, once optimised, the ANNs can process any file, parsed with those properties, to identify other files sharing common traits reflective of emotional perception, thereby rendering a more reliable and true-to-life result of similarity/dissimilarity. This contrasts with simply training a neural network to consider extractable measurable properties that, in isolation, do not provide a reliable contextual relationship to the real world.
Method and system for analysing sound
The present invention relates to a method and system for analysing audio (eg. music) tracks. A predictive model of the neuro-physiological functioning and response to sounds by one or more of the human lower cortical, limbic and subcortical regions in the brain is described. Sounds are analysed so that appropriate sounds can be selected and played to a listener in order to stimulate and/or manipulate neuro-physiological arousal in that listener. The method and system are particularly applicable to applications harnessing a biofeedback resource.
METHOD OF CHANGING OPERATION BASED ON MUSIC TRANSITION
A method of changing operation based on music transition is disclosed. In the method, an electronic device performs a Fourier series transform on a sound signal of music to obtain a rhythm diagram of the sound signal, and then performs an operation to extract a rhythm segment point from the rhythm diagram, a waveform, and a spectrogram of the sound signal. When the intensity of the rhythm diagram, the intensity of the waveform, and the magnitude of the spectrogram each drop by a preset percentage at the same time and for a preset time, the time point of the drop is used as the rhythm segment point, the sound signal of the music is faded out, and the electronic device transmits an operation signal to an operation device at the rhythm segment point. After receiving the operation signal, the operation device performs the corresponding action based on the operation signal.
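The segment-point test can be sketched as requiring all three intensity curves to drop by at least the preset percentage and stay down for the preset time; a minimal illustration assuming equally sampled per-frame intensities, with all names (`find_segment_points`, `drop_pct`, `hold`) illustrative:

```python
# Sketch: a time index is a rhythm segment point only if the rhythm
# diagram, waveform, and spectrogram all show a sustained drop there.
def find_segment_points(rhythm, waveform, spectrum, drop_pct, hold):
    """Each input is a list of per-sample intensities of equal length."""
    points = []
    for t in range(1, len(rhythm) - hold + 1):
        def dropped(sig):
            base = sig[t - 1]
            # drop of at least drop_pct, sustained for `hold` samples
            return base > 0 and all(s <= base * (1 - drop_pct)
                                    for s in sig[t:t + hold])
        if dropped(rhythm) and dropped(waveform) and dropped(spectrum):
            points.append(t)
    return points
```

Requiring the drop in all three signals simultaneously filters out momentary dips that appear in only one representation.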
METHOD OF GENERATING ACTIONS FOLLOWING THE RHYTHM OF MUSIC
A method of generating actions following the rhythm of music includes the steps of (a) using an electronic device to classify the sound signals from plural musical instruments into a complex pitch range, (b) the electronic device performing Fourier series conversion on the sound signals of the complex pitch range to obtain plural rhythm diagrams, (c) the electronic device performing a rhythm change point capture action on each rhythm diagram, executing no action if the intensity in the rhythm diagram continues to increase with time, or regarding the point as a rhythm change point if the intensity changes from an increase to a decrease, and then transmitting an action signal to an action device, and (d) the action device executing the corresponding action according to the action signal.
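The change-point rule in step (c) amounts to finding points where intensity turns from rising to falling, i.e. local peaks; a minimal sketch of that rule, with the function name illustrative:

```python
# Sketch: a frame where intensity was rising and then falls is treated
# as a rhythm change point (where an action signal would be emitted).
def rhythm_change_points(intensity):
    """intensity: per-frame values of one rhythm diagram."""
    points = []
    for t in range(1, len(intensity) - 1):
        rising = intensity[t] > intensity[t - 1]
        falling = intensity[t + 1] < intensity[t]
        if rising and falling:   # increase turns into decrease
            points.append(t)
    return points
```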
METHOD OF GENERATING ACTIONS FOLLOWING THE RHYTHM OF MUSIC
A method of generating actions following the rhythm of music includes the steps of (a) using an electronic device to classify the sound signals from plural musical instruments into a complex pitch range, (b) the electronic device selecting at least one pitch range according to the significance of the sound, and then performing Fourier series conversion on the sound signals of the selected at least one pitch range to obtain at least one rhythm diagram, (c) the electronic device performing a rhythm change point capture action on the rhythm diagram, executing no action if the intensity in the rhythm diagram continues to increase with time, or regarding the point as a rhythm change point if the intensity changes from an increase to a decrease, and then transmitting an action signal to an action device, and (d) the action device executing the corresponding action according to the action signal.
METHOD OF PRODUCING LIGHT ANIMATION WITH RHYTHM OF MUSIC
A method of producing a light animation with the rhythm of music is disclosed. An electronic device performs a Fourier series transform on a sound signal of music produced by at least one musical instrument to obtain a rhythm diagram of the sound signal. An operation to extract a rhythm change point from the rhythm diagram is performed: when the intensity of the rhythm diagram changes from increasing to decreasing, the time point of the change is used as the rhythm change point, and the electronic device transmits a lighting control signal to a light emitting device. After receiving the lighting control signal, the light emitting device emits light based on the lighting control signal, and the successive light emissions form the light animation, thereby improving the audience's appreciation of the overall performance of the music.