G10H2210/086

AUTO-GENERATED ACCOMPANIMENT FROM SINGING A MELODY
20200074966 · 2020-03-05

A method for processing a voice signal by an electronic system to create a song is disclosed. The method comprises the steps, in the electronic system, of acquiring an input singing voice recording (11); estimating a musical key (15b) and a tempo (15a) from the singing voice recording (11); defining a tuning control (16) and a timing control (17) able to align the singing voice recording (11) with the estimated musical key (15b) and tempo (15a); and applying the tuning control (16) and the timing control (17) to the singing voice recording (11) so that an aligned voice recording (20) is obtained. Next, the method comprises the steps of generating a music accompaniment (23) as a function of the estimated musical key (15b), the tempo (15a), and an arrangement database (22), and mixing the aligned voice recording (20) with the music accompaniment (23) to obtain the song (12). A system, a server, and a device are also disclosed.
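Key estimation from a sung melody (step 15b above) is commonly done by correlating a pitch-class histogram against key profiles. A minimal sketch of that generic approach, using the Krumhansl-Kessler profiles and hypothetical MIDI-note input rather than anything specified in the patent:

```python
import numpy as np

# Krumhansl-Kessler key profiles (major and minor), widely used for
# key estimation; this is a generic sketch, not the patent's algorithm.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def estimate_key(midi_notes):
    """Estimate a musical key from a list of MIDI note numbers."""
    # Build a pitch-class histogram of the sung notes.
    hist = np.zeros(12)
    for n in midi_notes:
        hist[n % 12] += 1.0
    # Correlate against each of the 24 rotated key profiles.
    best_key, best_r = None, -2.0
    for tonic in range(12):
        for profile, mode in ((MAJOR, 'major'), (MINOR, 'minor')):
            rotated = np.roll(profile, tonic)
            r = np.corrcoef(hist, rotated)[0, 1]
            if r > best_r:
                best_key, best_r = f"{NOTE_NAMES[tonic]} {mode}", r
    return best_key

# A sung C-major scale should be identified as C major.
key = estimate_key([60, 62, 64, 65, 67, 69, 71, 72])
```

The same histogram-and-profile idea extends to audio input once a pitch tracker has converted the recording into note estimates.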

OPTICAL PICKUP AND STRING MUSIC TRANSLATION SYSTEM
20200020311 · 2020-01-16

A low-cost and high-compatibility optical pickup including a light source, one set of optical sensors, and a controller. The light source illuminates a string assembled on an instrument. The set of optical sensors corresponding to the light source is provided to sense the shading of the string. The controller supplies the sensed data from the set of optical sensors to a system host for recognition of the melody played on the string. Considering the other strings assembled on the instrument, the optical pickup includes other sets of optical sensors to sense the shading of the other strings which are also illuminated by the light source. The controller also supplies the sensed data of the other sets of optical sensors to the system host for recognition of the melody played on the other strings.
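Once the optical sensors deliver a string's shading signal, the system host must recognize the pitch being played. A generic sketch of one way to do that, estimating the fundamental from an FFT peak of a synthetic signal (the host's actual recognition method is not specified in the abstract):

```python
import numpy as np

def detect_note(samples, sample_rate):
    """Estimate the fundamental of a sensed string signal via an FFT peak."""
    # Window the signal and take the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    f0 = freqs[np.argmax(spectrum)]
    # Map the frequency to the nearest MIDI note, then to a note name.
    midi = int(round(69 + 12 * np.log2(f0 / 440.0)))
    names = ['C', 'C#', 'D', 'D#', 'E', 'F',
             'F#', 'G', 'G#', 'A', 'A#', 'B']
    return names[midi % 12], f0

# Simulated shading signal for an open A string (A2, 110 Hz).
sr = 8000
t = np.arange(sr) / sr  # one second of samples
open_a = np.sin(2 * np.pi * 110.0 * t)
note, f0 = detect_note(open_a, sr)
```

A real pickup signal would be noisier and would carry strong harmonics, so a production recognizer would likely use autocorrelation or harmonic-product methods rather than a raw spectral peak.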

AUDIO EXTRACTION APPARATUS, MACHINE LEARNING APPARATUS AND AUDIO REPRODUCTION APPARATUS
20190392802 · 2019-12-26

A processor in an audio extraction apparatus performs a preprocessing operation and an audio extraction operation on a stereo audio source in which first channel audio data includes an accompaniment sound and a vocal sound for a first channel, and second channel audio data includes an accompaniment sound and a vocal sound for a second channel. In the preprocessing operation, the processor determines a difference between the first channel audio data and the second channel audio data to generate center cut audio data. In the audio extraction operation, the processor inputs the first channel audio data, the second channel audio data, and the center cut audio data to a trained machine learning model to extract any one of the accompaniment sound and the vocal sound.
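The preprocessing step (the channel difference that yields the center cut audio data) cancels any content mixed identically into both channels, such as a center-panned lead vocal. A toy sketch with synthetic signals:

```python
import numpy as np

def center_cut(left, right):
    """Channel difference: cancels center-panned content (e.g. lead vocal)."""
    return left - right

# Toy example: a "vocal" panned dead center plus distinct side accompaniment.
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
vocal = np.sin(2 * np.pi * 220.0 * t)            # identical in both channels
accomp_l = 0.5 * np.sin(2 * np.pi * 110.0 * t)   # left-only accompaniment
accomp_r = 0.5 * np.sin(2 * np.pi * 165.0 * t)   # right-only accompaniment

left = vocal + accomp_l
right = vocal + accomp_r
side = center_cut(left, right)  # vocal cancels; only accompaniment remains
```

In the claimed apparatus this difference signal is not the final output; it is a third input, alongside both raw channels, that gives the trained model an explicit hint about what is center-panned.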

Information processing method and information processing system for sound synthesis utilizing identification data associated with sound source and performance styles

An information processing system includes at least one memory storing a program and at least one processor. The at least one processor implements the program to input a piece of sound source data obtained by encoding a first identification data representative of a sound source, a piece of style data obtained by encoding a second identification data representative of a performance style, and synthesis data representative of sounding conditions into a synthesis model generated by machine learning, and to generate, using the synthesis model, feature data representative of acoustic features of a target sound of the sound source to be generated in the performance style and according to the sounding conditions, and to generate an audio signal corresponding to the target sound using the generated feature data.
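The described conditioning can be sketched as embedding lookups concatenated with the sounding-condition data and fed through a learned mapping. The table sizes, weights, and single-layer network below are illustrative stand-ins, not the patent's synthesis model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: sound sources, performance styles,
# embedding width, condition width, acoustic-feature width.
N_SOURCES, N_STYLES, EMB, COND, FEAT = 4, 3, 8, 5, 16

# Stand-ins for learned parameters; in the patent these come from training.
source_emb = rng.normal(size=(N_SOURCES, EMB))   # encodes first ID data
style_emb = rng.normal(size=(N_STYLES, EMB))     # encodes second ID data
W = rng.normal(size=(EMB + EMB + COND, FEAT)) * 0.1
b = np.zeros(FEAT)

def synthesize_features(source_id, style_id, sounding_conditions):
    """Map (source, style, sounding conditions) to one acoustic-feature frame."""
    x = np.concatenate([source_emb[source_id],
                        style_emb[style_id],
                        sounding_conditions])
    return np.tanh(x @ W + b)

feats = synthesize_features(1, 2, np.array([0.5, -0.2, 0.0, 1.0, 0.3]))
```

The feature frame would then drive a vocoder or similar stage to produce the audio signal for the target sound.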

Display control method, display control device, and program
11893304 · 2024-02-06

A display control method includes causing a display device to display a processing image in which a first image representing a note corresponding to a synthesized sound and a second image representing a sound effect are arranged in an area, in which a pitch axis and a time axis are set, in accordance with synthesis data that specify the synthesized sound generated by sound synthesis and the sound effect added to the synthesized sound.
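Arranging note images and sound-effect images in an area with a pitch axis and a time axis amounts to mapping each item's start, duration, and pitch to rectangle coordinates. A minimal sketch, with illustrative scale factors and field names (none of which come from the patent):

```python
# Illustrative scale factors for the time (x) and pitch (y) axes.
PIXELS_PER_SECOND = 100
PIXELS_PER_SEMITONE = 10

def layout(synthesis_data):
    """Map each note/effect item to a rectangle in the pitch/time area."""
    rects = []
    for item in synthesis_data:
        x = item["start"] * PIXELS_PER_SECOND
        w = item["duration"] * PIXELS_PER_SECOND
        # Higher pitches drawn nearer the top of the area.
        y = (127 - item["pitch"]) * PIXELS_PER_SEMITONE
        rects.append({"kind": item["kind"], "x": x, "y": y, "w": w})
    return rects

data = [
    {"kind": "note",   "pitch": 60, "start": 0.0, "duration": 0.5},
    {"kind": "effect", "pitch": 60, "start": 0.5, "duration": 0.25},
]
rects = layout(data)
```

Keeping both kinds of image in the same coordinate system is what lets the user see where a sound effect sits relative to the synthesized notes it is attached to.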

Electronic device for improving cooperation among a plurality of members
10511398 · 2019-12-17

An electronic device 10 is wirelessly connected to other electronic devices within a range of a limited communication distance. The device has a controller CNT configured to determine at least one of whether there are a plurality of members who each have an electronic device within a predetermined range, and whether a plurality of members who each have an electronic device show a same behavior. If either is determined to be the case, the controller transmits a request to the plurality of electronic devices to generate rhythm signals at a same tempo, to encourage improvement in cooperation among the plurality of members.

METHOD AND SYSTEM FOR GENERATING AN AUDIO OR MIDI OUTPUT FILE USING A HARMONIC CHORD MAP
20190378483 · 2019-12-12

Techniques are provided for generating an output file. One technique involves the steps of generating audio or MIDI content blocks from one or more musical performances; receiving an input file having audio or MIDI music content; generating a harmonic chord map for the input file; using the harmonic chord map to automatically select a subset of the audio or MIDI content blocks; and generating the output file by combining the selected subset of content blocks and the input file. This technique may enable the creation of unique and new musical accompaniments by re-purposing audio or MIDI content from back catalogs and/or out-takes of musical works. The new arrangement may be provided in multiple music styles, genres, or moods and may contain performances from multiple musical instruments, which may be pre-recorded from live instrument performances and/or MIDI-generated musical content.
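Using a harmonic chord map to select content blocks can be sketched as a lookup from chord labels to matching pre-generated blocks. All chord names, block names, and the first-match policy below are illustrative, not taken from the patent:

```python
# A hypothetical chord map for the input file: (chord label, start time).
chord_map = [("C", 0.0), ("Am", 2.0), ("F", 4.0), ("G", 6.0)]

# Hypothetical library of pre-generated content blocks, keyed by chord.
content_blocks = {
    "C":  ["piano_C_1", "gtr_C_2"],
    "Am": ["piano_Am_1"],
    "F":  ["bass_F_1"],
    "G":  ["gtr_G_1", "strings_G_1"],
    "Dm": ["piano_Dm_1"],
}

def select_blocks(chord_map, blocks):
    """Pick one matching block for each chord/time entry in the map."""
    timeline = []
    for chord, start in chord_map:
        candidates = blocks.get(chord, [])
        if candidates:
            # A real system might rank candidates by style, mood, or
            # instrument; here we simply take the first match.
            timeline.append((start, chord, candidates[0]))
    return timeline

timeline = select_blocks(chord_map, content_blocks)
```

The resulting timeline would then be rendered and mixed with the input file to form the output file.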

METHOD FOR PROVIDING BIDIRECTIONAL COMMUNICATION SERVICE FOR DIFFERENT BREEDS OF ANIMALS
20190362727 · 2019-11-28

Provided is a method for providing a bidirectional communication service for different breeds of animals, comprising the steps of: receiving, from a user terminal, a registration event for registering a communication terminal for different breeds of animals that is wirelessly linked to the user terminal and is mounted to a communication object, and storing the same; when sound data is collected from the communication terminal for different breeds of animals, analyzing the sound data on the basis of a pre-stored animal sound translation algorithm; transmitting, to the user terminal, an animal content comprising at least one of text, a voice, an image, and a moving image mapped to the analysis result, or at least one combination thereof; when a human being content comprising at least one of a voice, an image, and a moving image is collected from the user terminal, outputting the collected human being content to a content input/output apparatus linked to the user terminal and the communication terminal for different breeds of animals; and streaming, to the user terminal, a real-time image collected from the content input/output apparatus.

Music production using recorded hums and taps
10431192 · 2019-10-01

Embodiments of the present invention provide for the composition of new music based on analysis of unprocessed audio, which may be in the form of melodic hums and rhythmic taps. As a result of this analysis (music information retrieval, or MIR), musical features such as pitch and tempo are output. These musical features are then used by a composition engine to generate a new and socially co-created piece of content represented as an abstraction. This abstraction is then used by a production engine to produce audio files that may be played back, shared, or further manipulated.
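Tempo extraction from rhythmic taps is typically based on inter-onset intervals. A minimal sketch (a generic MIR technique, not necessarily the patent's pipeline) that averages the intervals between detected tap onsets:

```python
def tempo_from_taps(onset_times):
    """Estimate BPM from the mean inter-onset interval of recorded taps."""
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return 60.0 / mean_interval

# Tap onsets (seconds) roughly every 0.5 s -> about 120 BPM.
taps = [0.00, 0.51, 1.00, 1.49, 2.01, 2.50]
bpm = tempo_from_taps(taps)
```

In practice the onsets themselves would first be detected from the tap recording (e.g. by energy thresholding), and a robust estimator such as the median interval or an autocorrelation over an onset-strength curve would tolerate missed or extra taps better than a plain mean.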

Method and apparatus for analyzing characteristics of music information
10431191 · 2019-10-01

The purpose of the present invention is to provide a system capable of analyzing intuitively created improvisation performances without relying on music theory. There is provided an improvisation performance analysis system, comprising: a music information coding section 10 for analyzing and coding music data of an improvisation performer stored in a music storage medium; a tone sequence pattern extraction section 11 for extracting all of the first- to n-th-order tone sequence patterns that are likely to occur as n-th-order Markov chains, in order to perform a stochastic analysis with a Markov model using the coded music data; a pitch transition sequence extraction section 12 for obtaining a pitch transition sequence for each of the extracted tone sequence patterns; a transition probability/appearance probability calculation section 13 for using the Markov model to calculate a transition probability of each pitch transition sequence and an appearance probability of each transition sequence at each of the first- to n-th-order hierarchical levels; and an improvisation performance phrase structuring section 14 for rearranging the pitch transition sequences at each hierarchical level based on the transition probabilities and the appearance probabilities, identifying pitch transition sequences that are statistically likely to occur, and expressing those pitch transition sequences in all keys as music scores based on twelve-tone equal temperament, to thereby generate improvisation performance phrases.
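The transition-probability calculation of section 13 can be sketched for the general n-th-order case: count, for each context of n notes, how often each successor note follows, then normalize. A minimal sketch over a toy coded phrase:

```python
from collections import Counter, defaultdict

def transition_probabilities(notes, order=1):
    """n-th-order Markov transition probabilities over a coded note sequence."""
    counts = defaultdict(Counter)
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])   # preceding n notes
        counts[context][notes[i + order]] += 1
    # Normalize each context's successor counts into probabilities.
    probs = {}
    for context, successors in counts.items():
        total = sum(successors.values())
        probs[context] = {n: c / total for n, c in successors.items()}
    return probs

# Toy improvised phrase, coded as MIDI pitches.
phrase = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
probs = transition_probabilities(phrase, order=1)
```

Appearance probabilities would come from the same counts (each context's total divided by the number of transitions), and ranking contexts by these quantities is what lets section 14 single out the statistically likely pitch transition sequences.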