Patent classifications
G10H2220/101
Cognitive music engine using unsupervised learning
A method for generating a musical composition based on user input is described. A first set of musical characteristics from a first input musical piece is received as an input vector. The first set of musical characteristics is perturbed to create a perturbed input vector, which is provided as input to a first set of nodes in a first visible layer of an unsupervised neural net. The unsupervised neural net comprises a plurality of computing layers, each composed of a respective set of nodes. The unsupervised neural net is operated to calculate an output vector from a higher-level hidden layer in the unsupervised neural net. The output vector is used to create an output musical piece.
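The claim language maps naturally onto an RBM-style visible-to-hidden forward pass. The following is a minimal Python sketch, not the patented implementation: the weights are random stand-ins, the perturbation is assumed to be additive Gaussian noise, and `generate_output_vector` is a hypothetical helper name.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate_output_vector(features, weights, hidden_bias, noise_scale=0.05, rng=None):
    """Perturb an input feature vector and propagate it through the visible
    layer of an unsupervised (RBM-style) net to get a hidden-layer output."""
    rng = rng or np.random.default_rng()
    perturbed = features + rng.normal(0.0, noise_scale, size=features.shape)  # perturbed input vector
    hidden = sigmoid(perturbed @ weights + hidden_bias)                       # higher-level hidden activations
    return hidden

# Toy example: a 12-dimensional musical feature vector, 8 hidden units, random weights.
rng = np.random.default_rng(0)
features = rng.random(12)
weights = rng.normal(size=(12, 8))
hidden_bias = np.zeros(8)
print(generate_output_vector(features, weights, hidden_bias, rng=rng))
```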
Systems and methods for generating recommendations in a digital audio workstation
A method includes displaying a user interface of a digital audio workstation, which includes a first region for generating a composition. The first region includes a first compositional segment that has been added to the composition by a user. Based on the first compositional segment, one or more recommended predefined compositional segments are identified and displayed in a second region. The method includes receiving a selection of a second compositional segment from the recommended segments and adding the second compositional segment to the composition.
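A hedged sketch of how such a recommendation step might work, assuming each compositional segment is described by a small numeric feature vector and the predefined library is ranked by cosine similarity; the descriptor layout and the `recommend_segments` helper are illustrative, not taken from the patent.

```python
import numpy as np

def recommend_segments(current_segment, library, k=3):
    """Rank predefined compositional segments by cosine similarity to the
    segment the user just added, and return the indices of the top-k."""
    lib = np.asarray(library, dtype=float)
    cur = np.asarray(current_segment, dtype=float)
    sims = lib @ cur / (np.linalg.norm(lib, axis=1) * np.linalg.norm(cur) + 1e-9)
    return np.argsort(sims)[::-1][:k]

# Hypothetical 4-dimensional segment descriptors (tempo, key, energy, density).
library = [[120, 0, 0.8, 0.5], [90, 5, 0.3, 0.2], [122, 0, 0.7, 0.6], [60, 7, 0.1, 0.1]]
print(recommend_segments([121, 0, 0.75, 0.55], library))
```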
Method and apparatus for displaying music points, and electronic device and medium
Disclosed are a method and apparatus for displaying music points, and an electronic device and a medium. One specific embodiment of the method includes: acquiring audio material; analyzing initial music points in the audio material, wherein the initial music points include beat points and/or note starting points in the audio material; and on an operation interface of video clipping, displaying, according to the position of the audio material on a clip timeline and the positions of target music points in the audio material, identifiers of the target music points on the clip timeline, wherein the target music points are some or all of the initial music points. According to the embodiment, the time required for a user to process the audio material and set music points is reduced, while the flexibility of the tool is preserved.
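The "initial music points" correspond to beats and note onsets. A minimal sketch of extracting them with librosa (a library choice assumed here, the abstract names no tooling) and shifting them onto a clip timeline that starts at a given offset:

```python
import librosa  # assumed dependency, not named in the patent

def music_points(audio_path, clip_start=0.0):
    """Detect beat points and note starting points in an audio file and
    convert them to positions on a clip timeline starting at clip_start (s)."""
    y, sr = librosa.load(audio_path)
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    onset_frames = librosa.onset.onset_detect(y=y, sr=sr)
    beats = librosa.frames_to_time(beat_frames, sr=sr) + clip_start
    onsets = librosa.frames_to_time(onset_frames, sr=sr) + clip_start
    # Initial music points: beats and onsets merged onto the timeline.
    return sorted(set(beats.tolist()) | set(onsets.tolist()))
```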
AI-BASED DJ SYSTEM AND METHOD FOR DECOMPOSING, MIXING AND PLAYING OF AUDIO DATA
The present invention relates to a method for processing and playing audio data comprising the steps of receiving mixed input data and playing recombined output data. Furthermore, the invention relates to a device for processing and playing audio data, preferably DJ equipment, comprising an audio input unit for receiving a mixed input signal, a recombination unit, and a playing unit for playing recombined output data. In addition, the present invention relates to a method and a device for representing audio data, e.g. on a display.
AI BASED REMIXING OF MUSIC: TIMBRE TRANSFORMATION AND MATCHING OF MIXED AUDIO DATA
The present invention provides a method for processing audio data, comprising the steps of providing input audio data containing a mixture of audio data including first audio data of a first musical timbre and second audio data of a second musical timbre different from said first musical timbre, decomposing the input audio data to provide decomposed data representative of the first audio data, and transforming the decomposed data to obtain third audio data.
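As a rough stand-in for the timbre-based decomposition and transformation described above, the sketch below uses librosa's harmonic/percussive separation plus a pitch shift; both the library choice and the specific transform are assumptions, not the patented method.

```python
import librosa  # assumed dependency

def decompose_and_transform(audio_path, n_steps=3):
    """Decompose mixed audio into two components and transform one of them.
    Harmonic/percussive separation stands in for the timbre-based
    decomposition of the abstract; the pitch shift yields the 'third' audio."""
    y, sr = librosa.load(audio_path)
    harmonic, percussive = librosa.effects.hpss(y)                                # decomposed data
    transformed = librosa.effects.pitch_shift(harmonic, sr=sr, n_steps=n_steps)  # third audio data
    return transformed, percussive, sr
```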
SONG PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
This application provides a song processing method performed by a computer device. The method includes: presenting a song recording interface in response to a singing instruction triggered in a session interface of a group chat session; recording a song in response to a song recording instruction triggered in the song recording interface, and determining a reverberation effect corresponding to the recorded song; and transmitting, in response to a song transmitting instruction, a target song obtained by processing the song based on the reverberation effect to members of the group chat session; presenting a session message corresponding to the target song in the session interface; and presenting a pick-up singing function item corresponding to the target song in the session interface, the pick-up singing function item being used for implementing pick-up singing of the target song by a member of the group chat session.
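The reverberation step could be approximated with a simple convolution reverb; the sketch below assumes NumPy/SciPy and a known room impulse response, none of which are specified by the abstract.

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_reverb(song, impulse_response, wet=0.4):
    """Simple convolution reverb: mix the dry recording with the recording
    convolved against a room impulse response."""
    wet_signal = fftconvolve(song, impulse_response)[: len(song)]
    wet_signal /= np.max(np.abs(wet_signal)) + 1e-9          # normalize the wet path
    return (1.0 - wet) * song + wet * wet_signal

# Toy example: 1 s of noise as the "song", a decaying-noise impulse response.
rng = np.random.default_rng(0)
song = rng.standard_normal(44100) * 0.1
ir = rng.standard_normal(4410) * np.exp(-np.linspace(0, 6, 4410))
processed = apply_reverb(song, ir)
```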
System and method for generating an audio file
The present invention relates to a computer-implemented system and method for generating an audio output file. The method includes using one or more processors to perform the steps of: receiving audio tracks, each audio track created according to audio parameters; separating each audio track into at least one selectable audio block, each audio block including audio content from a musical instrument involved in creating the audio track; assigning a unique identifier to each audio block; using the unique identifiers to select audio blocks; and generating the audio output by combining the audio blocks. The present invention prevents the use of the same combination of audio blocks in the generation of audio output, ensuring that the audio output files generated are sufficiently unique. Also provided are audio file recording, editing and mixing modules that enable a user to have full creative control over the mix and other parameters, so that the generated audio file can be modified as desired.
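One plausible way to enforce "no repeated combination of audio blocks" is to hash the chosen block identifiers and keep a record of combinations already used; the sketch below assumes that scheme, and the block identifier names are invented for illustration.

```python
import hashlib
import random

used_combinations = set()   # keys of combinations already rendered

def select_unique_blocks(blocks_per_track, max_attempts=1000, rng=None):
    """Pick one audio block (by its unique identifier) from each track and
    ensure the resulting combination has not been generated before."""
    rng = rng or random.Random()
    for _ in range(max_attempts):
        choice = tuple(rng.choice(track) for track in blocks_per_track)
        key = hashlib.sha256("|".join(choice).encode()).hexdigest()
        if key not in used_combinations:
            used_combinations.add(key)
            return choice
    raise RuntimeError("no unused combination found")

# Hypothetical block identifiers for drum, bass and keys tracks.
tracks = [["drums-a", "drums-b"], ["bass-a", "bass-b"], ["keys-a", "keys-b"]]
print(select_unique_blocks(tracks))
```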
Handheld musical instrument with control buttons
A handheld musical instrument for playing a variety of audio programs, the handheld musical instrument comprising a body portion with one or more sensors and a handle portion coupled to said body portion, wherein said handle portion includes one or more operational buttons to control the operation of said handheld musical instrument. The buttons may be operated with a finger or thumb trigger.
Computer vision and mapping for audio applications
Systems, devices, media, and methods are presented for playing audio sounds, such as music, on a portable electronic device using a digital color image of a note matrix on a map. A computer vision engine, in an example implementation, includes a mapping module, a color detection module, and a music playback module. The camera captures a color image of the map, including a marker and a note matrix. Based on the color image, the computer vision engine detects a token color value associated with each field. Each token color value is associated with a sound sample from a specific musical instrument. A global state map is stored in memory, including the token color value and location of each field in the note matrix. The music playback module, for each column in order, plays the notes associated with one or more of the rows, using the corresponding sound sample, according to the global state map.
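A small sketch of the playback logic implied by the global state map: each (row, column) field stores a detected token color value, each color maps to a sound sample, and columns are traversed in order. The color names, sample names, and the `playback_order` helper are illustrative assumptions, not taken from the patent.

```python
# Hypothetical token colors and the sound sample each one triggers.
COLOR_TO_SAMPLE = {"red": "kick.wav", "blue": "snare.wav", "green": "piano_c4.wav"}

def playback_order(global_state_map, n_rows, n_cols):
    """Walk the note matrix column by column (left to right) and collect the
    sound samples to trigger in each column, using the stored token colors."""
    schedule = []
    for col in range(n_cols):
        hits = [COLOR_TO_SAMPLE[global_state_map[(row, col)]]
                for row in range(n_rows)
                if (row, col) in global_state_map]
        schedule.append(hits)
    return schedule

# Global state map: (row, column) -> detected token color value.
state = {(0, 0): "red", (1, 0): "blue", (2, 3): "green"}
print(playback_order(state, n_rows=3, n_cols=4))
```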
Method and system for interactive song generation
A method and system may provide for interactive song generation. In one aspect, a computer system may present options for selecting a background track. The computer system may generate suggested lyrics based on parameters entered by the user. User interface elements allow the computer system to receive input of lyrics. As the user inputs lyrics, the computer system may update its suggestions based on the previously input lyrics. In addition, the computer system may generate proposed melodies to go with the lyrics and the background track. The user may select from among the melodies created for each portion of the lyrics. The computer system may optionally generate computer-synthesized vocals or capture a vocal track of a human voice singing the song. The background track, lyrics, melodies, and vocals may be combined to produce a complete song without requiring musical training or experience on the part of the user.
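The step of updating lyric suggestions from previously input lyrics could be as simple as an incrementally updated bigram model; the sketch below assumes that approach, and the `LyricSuggester` class and seed lines are invented for illustration.

```python
from collections import Counter, defaultdict

class LyricSuggester:
    """Tiny bigram model: next-word suggestions are updated from the lyrics
    the user has already entered plus a small seed corpus."""

    def __init__(self, corpus_lines):
        self.next_words = defaultdict(Counter)
        for line in corpus_lines:
            self.observe(line)

    def observe(self, line):
        # Count word-to-next-word transitions in a newly entered lyric line.
        words = line.lower().split()
        for a, b in zip(words, words[1:]):
            self.next_words[a][b] += 1

    def suggest(self, previous_word, k=3):
        # Most frequent continuations of the last word typed so far.
        return [w for w, _ in self.next_words[previous_word.lower()].most_common(k)]

suggester = LyricSuggester(["the night is young", "the night is ours", "hold the night"])
suggester.observe("we own the night")   # user keeps typing; the model updates
print(suggester.suggest("the"))         # e.g. ['night']
```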