G10H2210/056

Digital Audio Workstation with Audio Processing Recommendations
20210357174 · 2021-11-18 ·

Presentation of a recommendation to a user for individual processing of audio tracks in a digital audio workstation. Training audio tracks are provided to a human sound mixer and, responsive to the training audio tracks, individually processed training audio tracks are received from the human sound mixer. The training audio tracks and the individually processed training audio tracks are input to a machine to train the machine. Audio processing operations are output from the trained machine and stored in a record of a database.
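The training loop above pairs raw tracks with the mixer's processed versions and stores what the machine learns. A minimal sketch of that idea, with the "machine" reduced to estimating a single gain factor and the "database" a plain dict (all names here are illustrative, not from the patent):

```python
import statistics

def learn_gain_operation(raw_track, processed_track):
    """Infer a simple gain operation from one raw/processed training pair.

    A toy stand-in for the machine-training step: the 'model' just
    estimates the average amplitude ratio applied by the human mixer.
    """
    ratios = [p / r for r, p in zip(raw_track, processed_track) if r != 0]
    return {"operation": "gain", "factor": statistics.mean(ratios)}

def store_recommendation(database, track_id, operation):
    """Store the learned audio processing operation in a record of the
    'database' (here simply a dict keyed by track)."""
    database[track_id] = operation

database = {}
raw = [0.1, 0.2, -0.3, 0.4]
processed = [0.2, 0.4, -0.6, 0.8]   # the mixer applied a 2x gain
store_recommendation(database, "vocals", learn_gain_operation(raw, processed))
```

A real system would train on many track pairs and output richer operations (EQ curves, compression settings), but the pair-in/operation-out flow is the same.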

Method of combining data

A method of combining data, the method comprising: receiving video data, the video data corresponding to recorded video having a video duration determined by a user; selecting backing audio data, the backing audio data corresponding to backing audio having a predetermined duration; determining a difference between the predetermined duration and the video duration; and modifying the backing audio data by adjusting the predetermined duration based on the video duration to create an adjusted predetermined duration, the adjusted predetermined duration being such that the backing audio and recorded video may be simultaneously output in synchronisation.
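The claim's core step is adjusting the backing audio's predetermined duration to match the video duration. A rough sketch under stated assumptions (nearest-neighbour resampling stands in for a proper time-stretch, and it alters pitch along with duration):

```python
def adjust_backing_duration(backing_samples, backing_duration, video_duration):
    """Stretch or compress backing audio so its duration matches the video.

    Crude nearest-neighbour resampling: the adjusted track has
    len(samples) * (video_duration / backing_duration) samples, so audio
    and video can be output simultaneously in synchronisation.
    """
    ratio = video_duration / backing_duration
    new_len = round(len(backing_samples) * ratio)
    return [backing_samples[min(int(i / ratio), len(backing_samples) - 1)]
            for i in range(new_len)]

backing = [0, 1, 2, 3, 4, 5]                      # 6 samples covering 6 s
adjusted = adjust_backing_duration(backing, 6.0, 3.0)
# adjusted now spans 3 s at the same sample rate
```

Production implementations would use a phase vocoder or similar pitch-preserving time-stretch rather than raw resampling.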

Song analysis device and song analysis program

A music piece analyzer includes: a beat position acquiring unit configured to detect beat positions in music piece data; a snare drum detector configured to detect sounding positions of a snare drum in the music piece data; a bass drum detector configured to detect sounding positions of a bass drum in the music piece data; a one-beat shift determination unit configured to determine whether a bar beginning of the music piece data is shifted by one beat based upon the sounding positions of the snare drum detected by the snare drum detector; a two-beat shift determination unit configured to determine whether the bar beginning of the music piece data is shifted by two beats based upon the sounding positions of the bass drum detected by the bass drum detector; and a bar beginning setting unit configured to set the bar beginning of the music piece data based upon the results determined by the one-beat shift determination unit and the two-beat shift determination unit.
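The shift-determination logic can be sketched as a heuristic over a 4/4 beat grid: in typical pop/rock, snares fall on beats 2 and 4 and kicks on beat 1, so misaligned hits reveal the bar offset. All thresholds and function names below are illustrative assumptions, not the patent's actual units:

```python
def count_hits_on(beats, onsets, grid_positions, tol=0.05):
    """Count onsets that coincide (within tol seconds) with beats whose
    index modulo 4 falls in grid_positions."""
    return sum(
        1
        for i, b in enumerate(beats)
        if i % 4 in grid_positions and any(abs(b - o) < tol for o in onsets)
    )

def detect_bar_beginning(beats, snare_onsets, kick_onsets):
    """Estimate the bar-beginning offset (0-3 beats) in a 4/4 grid.

    Heuristic: snares are expected on grid positions 1 and 3 (beats 2
    and 4), kicks on position 0 (beat 1). Snares landing on positions
    0 and 2 suggest a one-beat shift; kicks landing on position 2
    suggest a two-beat shift.
    """
    one_beat_shift = (count_hits_on(beats, snare_onsets, {0, 2})
                      > count_hits_on(beats, snare_onsets, {1, 3}))
    two_beat_shift = (count_hits_on(beats, kick_onsets, {2})
                      > count_hits_on(beats, kick_onsets, {0}))
    return (1 if one_beat_shift else 0) + (2 if two_beat_shift else 0)

beats = [i * 0.5 for i in range(8)]               # 120 BPM, two bars of 4/4
shift = detect_bar_beginning(beats, snare_onsets=[0.5, 1.5, 2.5, 3.5],
                             kick_onsets=[0.0, 2.0])
```

With snares on beats 2/4 and kicks on beat 1, the detected shift is zero; shifting both drum patterns moves the estimated bar beginning accordingly.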

LEARNING PROGRESSION FOR INTELLIGENCE BASED MUSIC GENERATION AND CREATION
20210350776 · 2021-11-11 ·

An artificial intelligence (AI) method includes generating a first musical interaction behavioral model. The first musical interaction behavioral model causes an interactive electronic device to perform a first set of musical operations and a first set of motional operations. The AI method further includes receiving user inputs provided in response to the performance of the first set of musical operations and the first set of motional operations and determining a user learning progression level based on the user inputs. In response to determining that the user learning progression level is above a threshold, the AI method includes generating a second musical interaction behavioral model. The second musical interaction behavioral model causes the interactive electronic device to perform a second set of musical operations and a second set of motional operations. The AI method further includes performing the second set of musical operations and the second set of motional operations.
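The progression logic reduces to: score the user inputs, compare against a threshold, and swap behavioural models when the threshold is crossed. A minimal sketch, with hypothetical names and an assumed scoring scheme (fraction of correct responses):

```python
class MusicTeachingAgent:
    """Sketch of the learning-progression switch: move from the first
    to the second behavioural model once the user's progression level
    crosses a threshold. Names are illustrative, not from the patent."""

    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.model = "beginner"          # first behavioural model

    def progression_level(self, user_inputs):
        # Assumed scoring: fraction of user responses judged correct
        # (inputs are 1 for a correct response, 0 otherwise).
        return sum(user_inputs) / len(user_inputs)

    def update(self, user_inputs):
        """Generate the second behavioural model when the level is
        above the threshold; otherwise keep the first."""
        if self.progression_level(user_inputs) > self.threshold:
            self.model = "advanced"      # second behavioural model
        return self.model
```

In the patent's framing each "model" would drive both musical and motional operations of the device; here a string label stands in for that behaviour.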

DATA EXCHANGE FOR MUSIC CREATION APPLICATIONS

An operator of a digital audio workstation (DAW) application is able to assign individual tracks of a DAW session for export to specific players within a musical score of a scorewriter application. The DAW operator associates each track with a player identifier, which is retained in association with an interoperable format file generated by the export process. When the scorewriter imports such a file, it extracts the player identifier and uses it to map the track to a scored instrument. The mapping may also depend on a scorewriter arrangement of players for the instruments. The DAW operator may assign multiple tracks representing a given instrument played with different techniques to a single instrument part in a score. The playing techniques for the instruments are also associated with the tracks and may be parsed by the scorewriter to annotate the score with the corresponding notations.
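The export/import handshake above is essentially metadata mapping: player identifiers attached on the DAW side, resolved to scored instruments on the scorewriter side. A minimal sketch with dicts standing in for the interoperable format file and the arrangement (all names are illustrative):

```python
def export_tracks(tracks):
    """DAW side: attach the operator-assigned player identifier and
    playing technique to each exported track's metadata."""
    return [
        {"audio": t["name"],
         "player_id": t["player_id"],
         "technique": t.get("technique", "ordinary")}
        for t in tracks
    ]

def import_to_score(exported, arrangement):
    """Scorewriter side: map each track to a scored instrument via its
    player identifier, merging tracks that share one player; the
    technique tag is kept so it can be rendered as a score notation."""
    parts = {}
    for record in exported:
        instrument = arrangement[record["player_id"]]
        parts.setdefault(instrument, []).append(
            (record["audio"], record["technique"]))
    return parts

tracks = [
    {"name": "vln_legato", "player_id": "P1", "technique": "legato"},
    {"name": "vln_pizz", "player_id": "P1", "technique": "pizzicato"},
]
parts = import_to_score(export_tracks(tracks), arrangement={"P1": "Violin I"})
```

Note how the two violin tracks, recorded with different techniques, land in a single "Violin I" part, matching the multiple-tracks-per-instrument case described in the abstract.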

Method and device for processing, playing and/or visualizing audio data, preferably based on AI, in particular decomposing and recombining of audio data in real-time

The present invention relates to a method for processing and playing audio data comprising the steps of receiving mixed input data and playing recombined output data. Furthermore, the invention relates to a device for processing and playing audio data, preferably DJ equipment, comprising an audio input unit for receiving a mixed input signal, a recombination unit and a playing unit for playing recombined output data. In addition, the present invention relates to a method and a device for representing audio data, i.e. on a display.

Audio source separation processing pipeline systems and methods

Systems and methods for audio source separation include receiving a single-track audio input sample having an unknown mixture of audio signals generated from a plurality of audio sources, and separating one or more of the audio sources from the single-track audio input sample using a sequential audio source separation model. Separating one or more of the audio sources may include defining a processing recipe comprising a plurality of source separation processes configured to receive an audio input mixture and output one or more separated source signals and a remaining complement signal mixture, and processing the single-track audio input sample in accordance with the processing recipe to generate a plurality of audio stems separated from the unknown mixture of audio signals.
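The "processing recipe" is a pipeline: each separation process takes a mixture and emits a separated stem plus a complement mixture, which feeds the next process. A toy sketch in which tagged tuples stand in for audio signals and each "model" is a filter on tags (purely illustrative):

```python
def make_separator(source_name):
    """Build one toy separation process: it pulls out samples tagged with
    source_name and returns (separated stem, remaining complement)."""
    def separate(mixture):
        stem = [s for s in mixture if s[0] == source_name]
        rest = [s for s in mixture if s[0] != source_name]
        return stem, rest
    return separate

def run_recipe(mixture, recipe):
    """Run the processing recipe sequentially: each stage consumes the
    complement signal mixture left behind by the previous stage."""
    stems = {}
    for name, separate in recipe:
        stems[name], mixture = separate(mixture)
    stems["residual"] = mixture
    return stems

mixture = [("vocals", 0.3), ("drums", 0.5), ("bass", 0.2), ("vocals", 0.1)]
recipe = [("vocals", make_separator("vocals")),
          ("drums", make_separator("drums"))]
stems = run_recipe(mixture, recipe)
```

In a real pipeline each stage would be a trained separation model operating on waveforms or spectrograms; the sequential stem/complement structure is what the recipe defines.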

METHOD AND DEVICE FOR DECOMPOSING, RECOMBINING AND PLAYING AUDIO DATA

A system processes mixed input data using a neural network trained to separate audio data of predetermined timbres from mixed audio data and to obtain a group of decomposed tracks comprising at least first, second, and third decomposed audio tracks representing audio signals of first, second, and third predetermined timbres, respectively. The system reads a control input representing a setting of a first volume level and of a second volume level. The system recombines at least a first selected track and a second selected track selected from the group of decomposed tracks to generate a first recombined track. The system recombines the first recombined track at the first volume level with at least a third track selected from the group of decomposed tracks, at the second volume level, to obtain a second recombined track. The system plays the audio data based on the second recombined track.
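The two-stage recombination with two volume levels can be shown with plain sample lists (the neural decomposition step is assumed to have already produced the tracks; all function names are illustrative):

```python
def mix(*tracks):
    """Sample-wise sum of equal-length tracks."""
    return [sum(samples) for samples in zip(*tracks)]

def scale(track, level):
    """Apply a volume level to a track."""
    return [s * level for s in track]

def recombine(decomposed, first_level, second_level):
    """Two-stage recombination: the first two decomposed tracks are mixed
    into a first recombined track, scaled by first_level, then combined
    with the third track scaled by second_level."""
    first_recombined = mix(decomposed[0], decomposed[1])
    return mix(scale(first_recombined, first_level),
               scale(decomposed[2], second_level))

# e.g. drums + bass grouped at half volume, vocals at quarter volume
output = recombine([[1, 1], [2, 2], [4, 4]],
                   first_level=0.5, second_level=0.25)
```

Grouping tracks before applying the first volume level is what lets one control input move several timbres together, as in the DJ-equipment use case.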

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
20230335090 · 2023-10-19 ·

The object is to appropriately generate evaluation data to be compared with user input data. An information processing device includes a comparison unit that compares evaluation data generated on the basis of first user input data with second user input data.

Methods and systems for vocalist part mapping

Systems and methods for mapping parts in a digital sheet music file for a harmony. The method may include receiving a selection of a music segment for part mapping, receiving a digital sheet music representation of the selected music segment, and determining a plurality of plausible part mappings for the digital sheet music representation. A part mapping identifies one or more distinct musical parts in the digital sheet music representation, each of said one or more distinct musical parts corresponding to a performer of the harmony. The method may also include analyzing one or more features of the plurality of plausible part mappings to identify a highest-probability part mapping based on previously stored information, and outputting the highest-probability part mapping.
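The enumerate-then-rank structure can be sketched with a simplified feature (the densest chord sets the minimum number of parts) and a prior over ensemble sizes standing in for the "previously stored information". Both simplifications are assumptions for illustration:

```python
def plausible_part_mappings(notes_per_chord, max_parts=4):
    """Candidate part counts: the largest chord in the segment sets a
    lower bound on how many distinct vocal parts are needed."""
    needed = max(notes_per_chord)
    return list(range(needed, max_parts + 1))

def best_part_mapping(notes_per_chord, prior):
    """Pick the highest-probability candidate using previously stored
    information (assumed here to be a prior over ensemble sizes,
    a dict mapping part count -> probability)."""
    candidates = plausible_part_mappings(notes_per_chord)
    return max(candidates, key=lambda n: prior.get(n, 0.0))

# A segment whose densest chord has 3 simultaneous notes, with stored
# priors favouring four-part (SATB) writing:
chosen = best_part_mapping([1, 2, 3], prior={3: 0.2, 4: 0.6})
```

A full system would assign actual notes to each part and score many features (voice crossing, range, stem direction), but the select/enumerate/rank/output flow matches the claimed method.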