AUDIO ONSET DETECTION METHOD AND APPARATUS
20220358956 · 2022-11-10

An audio onset detection method and apparatus, an electronic device, and a computer readable storage medium. The audio onset detection method comprises: determining a first voice frequency spectrum parameter corresponding to each frequency band according to a frequency domain signal corresponding to an audio signal of an audio; for each frequency band, determining a second voice frequency spectrum parameter of a current frequency band according to the first voice frequency spectrum parameter of the current frequency band and the first voice frequency spectrum parameters of frequency bands positioned before the current frequency band according to a time sequence; and determining one or more onset positions of notes and syllables in the audio according to the second voice frequency spectrum parameters corresponding to the frequency bands.
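The claimed two-stage scheme resembles classic spectral-flux onset detection: a per-band magnitude (the first parameter) is compared against preceding frames to form a rectified rise (the second parameter) whose peaks mark onsets. A minimal sketch under that reading follows; the function name, band choice, and threshold are illustrative, not taken from the patent:

```python
import cmath
import math

def onset_times(signal, sr, frame=1024, hop=512, bands=(1, 4, 16, 64), threshold=0.5):
    """Band-wise onset sketch: first parameter = per-band magnitude of each
    frame; second parameter = half-wave-rectified rise over the previous
    frame, summed across bands; upward threshold crossings mark onsets."""
    n_frames = 1 + (len(signal) - frame) // hop
    mags = []
    for i in range(n_frames):
        chunk = signal[i * hop:i * hop + frame]
        # First parameter: magnitude at a few representative DFT bins.
        mags.append([abs(sum(x * cmath.exp(-2j * math.pi * k * n / frame)
                             for n, x in enumerate(chunk))) for k in bands])
    # Second parameter: per-band rise relative to the preceding frame.
    flux = [sum(max(m1 - m0, 0.0) for m0, m1 in zip(mags[i - 1], mags[i]))
            for i in range(1, n_frames)]
    peak = max(flux) or 1.0
    flux = [f / peak for f in flux]
    # Onset positions: times where the normalised flux crosses the threshold.
    return [(i + 1) * hop / sr for i in range(1, len(flux))
            if flux[i] >= threshold > flux[i - 1]]
```

Practical detectors add windowing, adaptive thresholds, and peak-picking; this keeps only the two-parameter structure the abstract describes.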

CONTROLLER FOR VISUAL DISPLAY OF MUSIC
20230041100 · 2023-02-09

Systems and methods for visualizations of music may include one or more processors which receive an audio input, and compute a simulation of a human auditory periphery using the audio input. The processor(s) may generate one or more visual patterns on a visual display, according to the simulation, the one or more visual patterns synchronized to the audio input.
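The pipeline here (simulate the auditory periphery, then drive a display from the simulation) could be caricatured with a bank of demodulate-and-smooth channels standing in for cochlear filters. Everything below, from channel frequencies to the intensity scale, is an illustrative assumption, not the patent's model:

```python
import cmath
import math

def periphery_patterns(signal, sr, centers=(250, 1000, 4000), smoothing=0.995, levels=8):
    """Toy auditory-periphery stand-in: each channel demodulates the signal
    at a cochlear-like centre frequency and low-pass filters the result,
    giving a per-channel envelope quantised into display intensities."""
    frames = []
    states = [0j] * len(centers)
    for n, x in enumerate(signal):
        row = []
        for c, fc in enumerate(centers):
            # Shift the band at fc down to DC, then smooth (one-pole LPF).
            states[c] = smoothing * states[c] + (1 - smoothing) * x * cmath.exp(
                -2j * math.pi * fc * n / sr)
            row.append(min(levels - 1, int(abs(states[c]) * 2 * levels)))
        frames.append(row)
    return frames
```

A steady tone near one channel's centre frequency lights that channel at full intensity while the others stay dark, sample-synchronously with the input.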

Searching for Music

In implementations of searching for music, a music search system can receive a music search request that includes a music file including music content. The music search system can also receive a selected musical attribute from a plurality of musical attributes. The music search system includes a music search application that can generate musical features of the music content, where a respective one or more of the musical features correspond to a respective one of the musical attributes. The music search application can then compare the musical features that correspond to the selected musical attribute to audio features of audio files, and determine similar audio files to the music file based on the comparison of the musical features to the audio features of the audio files.
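Read narrowly, the comparison step scopes similarity to the features mapped to the one selected attribute. A toy version under that reading follows; the feature layout and cosine scoring are assumptions, not the application's method:

```python
def find_similar(query_features, catalog, attribute, top_n=3):
    """Attribute-scoped search sketch: only the features mapped to the
    selected attribute are compared, by cosine similarity, against the
    same features of each catalogued audio file."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0
    query = query_features[attribute]
    ranked = sorted(catalog.items(),
                    key=lambda item: cosine(query, item[1][attribute]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]
```

With, say, tempo-related features selected, only those vectors influence the ranking; timbre features stored in the same records are ignored.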

Systems and methods for analyzing and treating learning disorders

Devices, systems, and methods are provided for analyzing and treating learning disorders using software as a medical device. A method may include identifying, by a device, application-based cognitive musical training (CMT) exercises associated with performance of software; receiving a first user input to generate a first sequence of the application-based CMT exercises; presenting a first application-based CMT exercise of the application-based CMT exercises based on the first sequence; receiving, during the presentation of the first application-based CMT exercise, a second user input indicative of a user interaction with the first application-based CMT exercise; generating, based on a comparison of the second user input to a performance threshold, a second sequence of the application-based CMT exercises, the first sequence different than the second sequence; and presenting a second application-based CMT exercise of the application-based CMT exercises based on the second sequence.
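The adaptive step (compare the second user input against a performance threshold, then emit a different second sequence) can be sketched as a difficulty re-ordering. The exercise names, difficulty levels, and promote-harder/promote-easier rule are illustrative assumptions:

```python
def next_sequence(exercises, score, threshold):
    """Hypothetical re-sequencer: meeting the threshold promotes harder
    exercises to the front of the second sequence; falling short promotes
    easier ones. `exercises` maps exercise name -> difficulty level."""
    # Sort descending by difficulty when the user performed well,
    # ascending when they did not, so the two sequences differ.
    return sorted(exercises, key=exercises.get, reverse=score >= threshold)
```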

Systems, devices, and methods for musical catalog amplification services
11615772 · 2023-03-28

Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinctive from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).

Method of generating a tactile signal using a haptic device

A haptic device according to one embodiment can comprise: a database unit for storing acoustic information or receiving the acoustic information from an external device; a control unit for converting the acoustic information into an electrical signal according to a predetermined pattern; a driving unit for generating a motion signal on the basis of the electrical signal; and a transfer unit for transferring a patterned tactile signal to a user by means of the motion signal.
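The chain of units maps naturally onto an envelope follower: the control unit reduces the stored acoustic samples to a patterned level signal, which the driving unit could emit as actuator duty cycles. The frame size and duty range below are arbitrary illustrations, not values from the claims:

```python
def tactile_pattern(samples, frame=100, max_duty=255):
    """Sketch of the control-unit conversion: acoustic samples are reduced
    to a per-frame mean rectified level (the patterned electrical signal),
    scaled to duty cycles a driving unit could use as the motion signal."""
    duties = []
    for i in range(0, len(samples) - frame + 1, frame):
        chunk = samples[i:i + frame]
        level = sum(abs(x) for x in chunk) / frame
        duties.append(min(max_duty, int(level * max_duty)))
    return duties
```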

MUSICAL PIECE INFERENCE DEVICE, MUSICAL PIECE INFERENCE METHOD, MUSICAL PIECE INFERENCE PROGRAM, MODEL GENERATION DEVICE, MODEL GENERATION METHOD, AND MODEL GENERATION PROGRAM
20230162712 · 2023-05-25

A musical piece inference device includes an electronic controller configured to execute a data acquisition module, an inference module, and an output module. The data acquisition module is configured to acquire target data including an input token sequence that is arranged to indicate at least a part of a musical piece and includes a plurality of bar-line/beat tokens arranged to indicate bar-line/beat positions of at least the part of the musical piece. The bar-line/beat positions are positions of bar lines of at least the part of the musical piece, positions of beats of at least the part of the musical piece, or both. The inference module is configured to, by using a trained inference model, generate an output token sequence indicating a result of an inference with respect to the musical piece from the input token sequence. The output module is configured to output the result of the inference.
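An input token sequence that "includes a plurality of bar-line/beat tokens" might look like the following, where note events are interleaved with markers at each beat and bar line. The token spellings and the (beat, pitch) note encoding are invented for illustration:

```python
def tokenize_piece(notes, beats_per_bar=4):
    """Illustrative input-token sequence: note tokens are interleaved with
    '<bar>' and '<beat>' tokens marking bar-line/beat positions.
    `notes` is a list of (beat index, MIDI pitch) pairs."""
    tokens = []
    last_beat = -1
    for beat, pitch in sorted(notes):
        # Emit beat markers (and bar markers on bar boundaries) up to this note.
        for b in range(last_beat + 1, beat + 1):
            if b % beats_per_bar == 0:
                tokens.append("<bar>")
            tokens.append("<beat>")
        last_beat = max(last_beat, beat)
        tokens.append(f"note:{pitch}")
    return tokens
```

A trained model would map such a sequence to an output token sequence carrying the inference result; only the input encoding is sketched here.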

Systems and methods for audio interpretation of media data
11468867 · 2022-10-11

A system and method for providing acoustic output is disclosed, the system comprising a communication device, a processor coupled to the communication device, and a memory coupled to the processor. The processor receives multimedia data associated with a multimedia output stream, extracts audio data based on the multimedia data, and generates a rhythmic data set including time-series acoustic characteristic data based on the extracted audio data. A sequence of visual elements is generated based on the time-series acoustic characteristic data, and the respective visual elements in the sequence are associated with the multimedia data. The multimedia data, for visually displaying the acoustic characteristic data concurrently with the multimedia output stream, is transmitted to a multimedia output device.
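The middle steps (rhythmic data set, then a sequence of visual elements associated with the stream) could look like per-frame RMS values turned into timestamped bar heights. The frame length, the RMS choice, and the height scale are assumptions for illustration:

```python
def visual_sequence(audio, sr, frame=200, height=10):
    """Sketch: per-frame RMS forms the time-series acoustic characteristic
    data; each value becomes a visual element (a bar height) paired with
    the timestamp that associates it with the multimedia stream."""
    bars = []
    for i in range(0, len(audio) - frame + 1, frame):
        chunk = audio[i:i + frame]
        rms = (sum(x * x for x in chunk) / frame) ** 0.5
        bars.append((i / sr, round(rms * height)))
    return bars
```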

Method of training a neural network to reflect emotional perception and related system and method for categorizing and finding associated content

A property vector representing extractable measurable properties, such as musical properties, of a file is mapped to semantic properties for the file. This is achieved by using artificial neural networks “ANNs” in which weights and biases are trained to align a distance dissimilarity measure in property space for pairwise comparative files back towards a corresponding semantic distance dissimilarity measure in semantic space for those same files. The result is that, once optimised, the ANNs can process any file, parsed for those properties, to identify other files sharing common traits reflective of emotional perception, thereby rendering a more reliable and true-to-life result of similarity/dissimilarity. This contrasts with simply training a neural network on extractable measurable properties that, in isolation, do not provide a reliable contextual relationship to the real world.
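The training target described, pulling property-space pairwise distances toward semantic-space distances, amounts to a distance-alignment loss. A sketch of that objective follows; the embedding function and data layout are placeholders, not the patent's ANN:

```python
def alignment_loss(embed, pairs):
    """Distance-alignment objective sketch: for each pair of files, the
    distance between their embedded property vectors should match the
    known semantic distance; training would shrink the squared mismatch.
    `pairs` holds (props_a, props_b, semantic_distance) triples."""
    total = 0.0
    for a, b, semantic_d in pairs:
        ea, eb = embed(a), embed(b)
        property_d = sum((x - y) ** 2 for x, y in zip(ea, eb)) ** 0.5
        total += (property_d - semantic_d) ** 2
    return total / len(pairs)
```

Gradient descent on the embedding's weights and biases against this loss would realise the "align property distance back towards semantic distance" training the abstract describes.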