G10H2210/071

COMPUTING ORDERS OF MODELED EXPECTATION ACROSS FEATURES OF MEDIA

A method implemented by a determination engine is provided. The determination engine receives a media dataset comprising target piece music information, target piece audience information, corpus music information, corpus audience information, and corpus preference data. The determination engine determines a subset of the corpus music and preference information and determines at least one surprise factor of that subset across features at one of a plurality of orders. The determination engine learns a model that estimates a likelihood that time-varying surprise trends across the features achieve a preference level. The determination engine then determines at least one surprise factor of the target piece music information across the features at that order and predicts, using the model, preference information from the time-varying surprise trends for the target piece music information across the features.
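The abstract leaves "surprise factor … at one of a plurality of orders" undefined. One plausible reading, sketched below, is the information content of the nth-order difference of a feature sequence under a Gaussian fit; the function name, the Gaussian assumption, and the toy pitch track are illustrative, not taken from the patent.

```python
import math

def surprise_factors(values, order=1):
    """Nth-order surprise: per-step information content (in nats) of the
    nth difference of a feature sequence under a Gaussian fit. A
    hypothetical reading of the abstract's 'orders'."""
    # take the nth-order difference of the feature sequence
    for _ in range(order):
        values = [b - a for a, b in zip(values, values[1:])]
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values) or 1e-9
    # Gaussian negative log-likelihood per step = surprise
    return [0.5 * math.log(2 * math.pi * var) + (v - mean) ** 2 / (2 * var)
            for v in values]

melody_pitch = [60, 62, 64, 65, 64, 72, 64, 65]  # toy feature track
s1 = surprise_factors(melody_pitch, order=1)
# the leap up to 72 and back produces the largest first-order surprises
assert s1[5] == max(s1)
```

A model as in the abstract would then be fit on such per-feature surprise trends against the corpus preference data.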

SYSTEM AND METHOD FOR CREATING AND OUTPUTTING MUSIC
20210335335 · 2021-10-28

The subject matter discloses a system implemented in a mobile electronic device, the system comprising a processing system of the device; and a memory that contains instructions comprising: detecting ambient sounds in the vicinity of the mobile electronic device; determining at least one property selected from a group consisting of a relative direction and a relative distance of the ambient sounds relative to the mobile electronic device; analyzing the detected ambient sounds; and outputting audio Interactive Music data based on the analysis of the ambient sounds and based on at least one of the relative direction and the relative distance of the ambient sounds relative to the mobile electronic device; wherein said outputting is performed on the mobile electronic device.
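The abstract does not say how relative direction is estimated. A common approach with a two-microphone array, assumed here purely for illustration, is time-difference-of-arrival via brute-force cross-correlation; the function names and toy pulse signals are not from the patent.

```python
def cross_corr(left, right, lag):
    """Correlation of the left channel against the right channel shifted by lag."""
    return sum(left[n] * right[n + lag]
               for n in range(len(left))
               if 0 <= n + lag < len(right))

def tdoa_samples(left, right, max_lag):
    """Delay (in samples) at which the two channels best align; the sign
    indicates which microphone the sound reached first."""
    return max(range(-max_lag, max_lag + 1),
               key=lambda lag: cross_corr(left, right, lag))

pulse = [0.0] * 16
pulse[4] = 1.0          # sound reaches the left mic at sample 4
delayed = [0.0] * 16
delayed[7] = 1.0        # and the right mic 3 samples later
assert tdoa_samples(pulse, delayed, 8) == 3
```

Given the microphone spacing and sample rate, such a delay converts to an angle of arrival, which the system could feed into its output decision.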

METHOD OF TRAINING A NEURAL NETWORK TO REFLECT EMOTIONAL PERCEPTION AND RELATED SYSTEM AND METHOD FOR CATEGORIZING AND FINDING ASSOCIATED CONTENT

A property vector representing extractable measurable properties, such as musical properties, of a file is mapped to semantic properties for the file. This is achieved by using artificial neural networks (“ANNs”) in which weights and biases are trained to align a distance dissimilarity measure in property space for pairwise comparative files back towards a corresponding semantic distance dissimilarity measure in semantic space for those same files. The result is that, once optimised, the ANNs can process any file, parsed with those properties, to identify other files sharing common traits reflective of emotional perception, thereby rendering a more reliable and true-to-life result of similarity/dissimilarity. This contrasts with simply training a neural network to consider extractable measurable properties that, in isolation, do not provide a reliable contextual relationship to the real world.
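The distance-alignment idea can be reduced to a toy: the trainer below fits a single scale weight so that distances between projected property values match annotated semantic distances for pairwise files. The real patent trains full ANN weights and biases over vectors; this one-parameter version and all names in it are hypothetical.

```python
def train_distance_alignment(pairs, epochs=200, lr=0.01):
    """Fit a scale weight w so that w*|x1 - x2| (distance in projected
    property space) matches the annotated semantic distance d_sem,
    minimizing sum of (w*|x1-x2| - d_sem)^2 by gradient descent."""
    w = 1.0
    for _ in range(epochs):
        for (x1, x2), d_sem in pairs:
            dx = abs(x1 - x2)
            err = w * dx - d_sem
            w -= lr * 2 * err * dx   # d(loss)/dw for this pair
    return w

# toy pairs: semantic distance is half the raw property distance
pairs = [((0.0, 4.0), 2.0), ((1.0, 3.0), 1.0), ((2.0, 8.0), 3.0)]
w = train_distance_alignment(pairs)
assert abs(w - 0.5) < 0.05
```

Once trained, the same projection ranks any new file's neighbours by a distance that tracks the semantic (emotional-perception) annotations rather than raw property differences.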

Audio matching with semantic audio recognition and report generation

Example articles of manufacture and apparatus for producing supplemental information for audio signature data are disclosed herein. An example apparatus includes memory including computer readable instructions. The example apparatus also includes a processor to execute the instructions to at least obtain first audio signature data associated with a first time period of media, obtain first semantic signature data associated with the first time period of the media and second semantic signature data associated with a second time period of the media, and when second audio signature data associated with the second time period of the media is unavailable, identify the media based on the first audio signature data associated with the first time period of media when the second semantic signature data associated with the second time period matches the first semantic signature data associated with the first time period of the media.
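The fallback logic in the claim can be sketched directly: identify each period from its audio signature, and when a period's audio signature is unavailable, reuse the identification of an earlier period whose semantic signature matches. The dictionary shapes and names below are assumptions, not the patent's data model.

```python
def identify(ref_db, audio_sigs, semantic_sigs):
    """Map each time period to an identified media item. Periods lacking
    an audio signature inherit the identification of an earlier period
    with a matching semantic signature (e.g. a genre/mood class)."""
    ids = {}
    for t in sorted(semantic_sigs):
        sig = audio_sigs.get(t)
        if sig is not None:
            ids[t] = ref_db.get(sig)          # normal signature lookup
        else:
            for prev in sorted(ids):          # semantic fallback
                if semantic_sigs[prev] == semantic_sigs[t] and ids[prev]:
                    ids[t] = ids[prev]
                    break
            else:
                ids[t] = None
    return ids

ref_db = {"sigA": "Song X"}
audio = {0: "sigA", 1: None}                  # period 1 lost its audio signature
semantic = {0: "upbeat-pop", 1: "upbeat-pop"}
assert identify(ref_db, audio, semantic) == {0: "Song X", 1: "Song X"}
```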

Audio processing techniques for semantic audio recognition and report generation

Example methods, apparatus and articles of manufacture to determine semantic information for audio are disclosed. Example apparatus disclosed herein are to process an audio signal obtained by a media device to determine values of a plurality of features that are characteristic of the audio signal, compare the values of the plurality of features to a first template having corresponding first ranges of the plurality of features to determine a first score, the first template associated with first semantic information, compare the values of the plurality of features to a second template having corresponding second ranges of the plurality of features to determine a second score, the second template associated with second semantic information, and associate the audio signal with at least one of the first semantic information or the second semantic information based on the first score and the second score.
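A simple instance of the template comparison, assuming the score is the fraction of features whose values fall inside each template's ranges (the abstract does not fix the scoring rule; the feature names and ranges here are invented for illustration):

```python
def template_score(features, template):
    """Fraction of the template's features whose measured value lies
    inside that feature's (low, high) range."""
    hits = sum(1 for name, (lo, hi) in template.items()
               if lo <= features.get(name, float("nan")) <= hi)
    return hits / len(template)

features = {"tempo_bpm": 128, "spectral_flux": 0.7, "zero_cross": 0.2}
rock   = {"tempo_bpm": (110, 150), "spectral_flux": (0.5, 1.0), "zero_cross": (0.3, 0.9)}
choral = {"tempo_bpm": (60, 90),   "spectral_flux": (0.0, 0.3), "zero_cross": (0.0, 0.2)}
scores = {"rock": template_score(features, rock),
          "choral": template_score(features, choral)}
# associate the audio with the semantic label of the best-scoring template
assert max(scores, key=scores.get) == "rock"
```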

Auto-generated accompaniment from singing a melody

A method for processing a voice signal by an electronic system to create a song is disclosed. The method comprises the steps in the electronic system of acquiring an input singing voice recording (11); estimating a musical key (15b) and a Tempo (15a) from the singing voice recording (11); defining a tuning control (16) and a timing control (17) able to align the singing voice recording (11) with the estimated musical key (15b) and Tempo (15a); and applying the tuning control (16) and the timing control (17) to the singing voice recording (11) so that an aligned voice recording (20) is obtained. Next, the method comprises the step of generating a music accompaniment (23) as a function of the estimated musical key (15b) and Tempo (15a) and an arrangement database (22), and mixing the aligned voice recording (20) and the music accompaniment (23) to obtain the song (12). A system, a server, and a device are also disclosed.
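The tuning and timing controls can be illustrated as two small quantizers: one snapping pitches onto the estimated key's scale, the other snapping note onsets onto the estimated tempo's beat grid. This is a sketch under those assumptions, not the claimed implementation.

```python
def snap_to_scale(midi_pitch, key_root=0):
    """Tuning control: move a pitch to the nearest note of the estimated
    major key (key_root is a pitch class, 0 = C)."""
    scale = {0, 2, 4, 5, 7, 9, 11}            # major-scale pitch classes
    for delta in (0, -1, 1, -2, 2):           # prefer the smallest shift
        if (midi_pitch + delta - key_root) % 12 in scale:
            return midi_pitch + delta
    return midi_pitch

def snap_to_grid(onset_sec, tempo_bpm):
    """Timing control: quantize an onset to the nearest beat."""
    beat = 60.0 / tempo_bpm
    return round(onset_sec / beat) * beat

assert snap_to_scale(61) == 60                # C#4 pulled into C major -> C4
assert abs(snap_to_grid(0.52, 120) - 0.5) < 1e-9
```

The aligned recording then shares key and grid with the accompaniment generated from the arrangement database, so the two can be mixed without clashing.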

ARBITRARY SIGNAL INSERTION METHOD AND ARBITRARY SIGNAL INSERTION SYSTEM
20210241740 · 2021-08-05

An arbitrary signal insertion method and an arbitrary signal insertion system, capable of inserting a transmittable arbitrary signal (insertion information M) at a predetermined insertion timing into an acoustic sound played in real time. The insertion timing is associated in advance with a predetermined time code of the master rhythm information. The acoustic sound into which the insertion information will be inserted is music sound generated by a real-time performance unit and is accompanied by a second rhythm. The insertion information is inserted into the music sound generated by the real-time performance unit at the insertion timing after the rhythm of the master rhythm information and the rhythm of the music sound generated by the real-time performance unit are synchronized. The synchronization is achieved by a rhythm transmitter which notifies a player of a rhythm session musical instrument of the rhythm of the master rhythm information with sound or light.
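Once the rhythms are synchronized, choosing the insertion instant reduces to mapping the predetermined time code onto the master-rhythm beat grid. The sketch below assumes a measured clock offset between the master rhythm and the live performance; the function name and numbers are illustrative only.

```python
def insertion_time(time_code_sec, master_bpm, clock_offset_sec=0.0):
    """Performance-clock time at which to inject the insertion
    information: the master-rhythm beat nearest the predetermined time
    code, shifted by the offset between master and live clocks
    (assumed already established by the rhythm transmitter)."""
    beat = 60.0 / master_bpm
    nearest_beat = round(time_code_sec / beat) * beat
    return nearest_beat + clock_offset_sec

# time code 12.25 s against a 100 BPM master rhythm (beats every 0.6 s);
# the nearest beat is at 12.0 s, and the live clock runs 0.2 s behind
t = insertion_time(12.25, 100, clock_offset_sec=0.2)
assert abs(t - 12.2) < 1e-6
```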

SYSTEMS, DEVICES, AND METHODS FOR MUSICAL CATALOG AMPLIFICATION SERVICES
20210241402 · 2021-08-05

Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinctive from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).