Patent classifications
G10H2220/126
SYSTEMS AND METHODS FOR IMPORTING AUDIO FILES IN A DIGITAL AUDIO WORKSTATION
A method includes displaying a user interface of a digital audio workstation, which includes a composition region for generating a composition. The composition region includes a representation of a first MIDI file that has already been added to the composition by a user. The method further includes receiving a user input to import, into the composition region, an audio file. In response to the user input to import the audio file, the method includes importing the audio file, which includes, without user intervention, aligning the audio file with a rhythm of the first MIDI file, modifying a rhythm of the audio file based on the rhythm of the first MIDI file, and displaying a representation of the audio file in the composition region.
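The rhythm-alignment step described above can be sketched as snapping detected audio onsets to the beat grid of the already-placed MIDI file. The helper below is hypothetical (the abstract does not specify the alignment algorithm); it assumes onsets and grid positions are given in seconds.

```python
def quantize_to_grid(onset_times, grid_times):
    """Snap each detected audio onset to the nearest MIDI grid time.

    A hypothetical sketch of the rhythm-alignment step: onset_times are
    onsets detected in the imported audio file (in seconds); grid_times
    are beat positions derived from the first MIDI file.
    """
    aligned = []
    for t in onset_times:
        # Choose the grid position with the smallest absolute distance.
        nearest = min(grid_times, key=lambda g: abs(g - t))
        aligned.append(nearest)
    return aligned

# Example: audio onsets slightly off a 0.5 s MIDI beat grid.
grid = [i * 0.5 for i in range(8)]    # MIDI beats at 0.0, 0.5, 1.0, ...
onsets = [0.04, 0.52, 1.47, 2.1]      # slightly early or late onsets
print(quantize_to_grid(onsets, grid))  # -> [0.0, 0.5, 1.5, 2.0]
```

Modifying the audio file's rhythm would then amount to time-stretching each audio segment so its onset lands on the aligned grid position.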
System and method for generating an audio file
A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event.
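The key-matching retrieval step can be sketched as follows. This is an assumption-laden illustration: the abstract does not define "relevant," so the sketch treats a key as relevant if it is the same key or a circle-of-fifths neighbor, and it models the library as a simple list of (name, tagged key) pairs.

```python
# Pitch classes for the twelve major keys (sketch covers major keys only).
NOTE_TO_PC = {"C": 0, "G": 7, "D": 2, "A": 9, "E": 4, "B": 11,
              "F#": 6, "Db": 1, "Ab": 8, "Eb": 3, "Bb": 10, "F": 5}

def related_keys(key):
    """Return keys considered relevant to `key`: itself plus its
    neighbors a fifth up and a fifth down on the circle of fifths."""
    pc = NOTE_TO_PC[key]
    neighbors = {pc, (pc + 7) % 12, (pc - 7) % 12}
    return {name for name, p in NOTE_TO_PC.items() if p in neighbors}

def pick_stinger(event_key, library):
    """Pick the first library entry (name, tagged_key) whose tagged
    musical key is relevant to the key detected for the event."""
    relevant = related_keys(event_key)
    for name, tagged_key in library:
        if tagged_key in relevant:
            return name
    return None

library = [("swell_01", "Eb"), ("stinger_07", "G"), ("swell_03", "C")]
print(pick_stinger("C", library))  # -> "stinger_07" (G is a fifth above C)
```

The selected stinger or swell would then be placed at the marked point on the timeline.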
Systems and methods for generating a visual color display of audio-file data
Systems and methods for generating a visual color display of audio-file data are provided. The system includes a processor that performs a method including receiving audio-file data and generating filtered-audio data by processing the audio-file data through frequency-band filters having different frequency bands. The method includes generating one or more waveforms corresponding to the filtered-audio data and displaying the waveforms superimposed, each in a unique color relative to the others. The method includes downsampling the waveforms, processing the waveforms through an envelope detector, and processing the waveforms through an expander and applying a gain factor. The waveforms have transparency levels at sections that are proportional, or inversely proportional, to the amplitudes at those sections.
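Two of the processing stages above, envelope detection and downsampling, can be sketched in a few lines. The one-pole follower below is a common envelope-detector structure, not necessarily the one this method uses, and the smoothing coefficient is an illustrative assumption.

```python
def envelope(samples, smoothing=0.9):
    """One-pole envelope follower: rectify each sample, then smooth the
    rectified signal with an exponential moving average."""
    env, out = 0.0, []
    for s in samples:
        rect = abs(s)                               # full-wave rectify
        env = smoothing * env + (1.0 - smoothing) * rect
        out.append(env)
    return out

def downsample(values, factor):
    """Keep every `factor`-th value, reducing data for display."""
    return values[::factor]

sig = [1.0, -1.0] * 4 + [0.0] * 8   # a burst of signal, then silence
env = envelope(sig)                  # rises during the burst, then decays
print(downsample(env, 4))
```

The expander and gain stages would then scale these envelope values before the waveforms are drawn with per-section transparency.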
Systems and methods for generating audio content in a digital audio workstation
A method includes displaying a graphical user interface (GUI) for a step sequencer in a digital audio workstation. The GUI includes a sequence of user interface elements corresponding to a portion of a roll for an audio composition, where each user interface element in the sequence represents a respective time interval for a note. The method includes receiving a user input interacting with a first user interface element and, in response to the user input, splitting a played note represented by the first user interface element into two or more played notes. The method further includes providing the audio composition for playback by a speaker.
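The note-splitting operation can be sketched directly. This is a minimal illustration assuming a note is represented as a (start time, duration) pair and that a split produces equal subdivisions; the actual representation in the patent is not specified.

```python
def split_note(note, parts=2):
    """Split a played note (start_time, duration) into `parts` equal
    shorter notes covering the same overall interval."""
    start, duration = note
    piece = duration / parts
    return [(start + i * piece, piece) for i in range(parts)]

# A one-beat note at t=4.0 split in two, and a dotted note split in three.
print(split_note((4.0, 1.0)))      # -> [(4.0, 0.5), (4.5, 0.5)]
print(split_note((0.0, 1.5), 3))   # -> [(0.0, 0.5), (0.5, 0.5), (1.0, 0.5)]
```

Each resulting sub-note would be rendered as its own user interface element in the sequence.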
SYSTEM AND METHOD FOR MUTATION TUNING OF AN AUDIO FILE
Disclosed is a system and method for mutating an audio file, and more particularly, for user-trained mutation tracking and tuning of an audio file, comprising the steps of: receiving a user input, wherein the user input comprises at least one audio file; entering at least a pattern into a grid sequencer by selecting any number of squares in the grid, wherein each square represents a particular count-occupancy probability at a particular count in a musical-composition bar that the user prefers to render as a final output; uploading at least one ‘good’ and ‘bad’ audio-file sample by the user to adjust the particular count-occupancy probability based on the user input and pattern; and rendering the final output comprising the mutated audio file based on the user input, pattern, and upload.
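The probability-tuning idea can be sketched as follows. The update rule here is an assumption (a simple nudge toward or away from the labelled pattern); the patent does not disclose the actual learning rule. A grid row is modeled as a list of per-count occupancy probabilities.

```python
import random

def update_probabilities(grid_probs, sample_pattern, good, rate=0.2):
    """Nudge each count's occupancy probability toward ('good') or away
    from ('bad') a user-labelled sample pattern of 0/1 hits."""
    sign = 1.0 if good else -1.0
    out = []
    for p, hit in zip(grid_probs, sample_pattern):
        target = 1.0 if hit else 0.0
        p = p + sign * rate * (target - p)
        out.append(min(1.0, max(0.0, p)))   # clamp to [0, 1]
    return out

def render(grid_probs, rng):
    """Render a final pattern: each count fires with its probability."""
    return [1 if rng.random() < p else 0 for p in grid_probs]

probs = [0.5, 0.5, 0.5, 0.5]
probs = update_probabilities(probs, [1, 0, 1, 0], good=True)
print(probs)                           # counts 1 and 3 become more likely
print(render(probs, random.Random(0)))
```

Repeated ‘good’ and ‘bad’ uploads would progressively shape the probabilities that drive the rendered output.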
Systems and methods for generating a playback-information display during time compression or expansion of an audio signal
Systems and methods for generating a playback-information display during time compression or expansion of an audio signal are provided. The system includes a processor that performs a method including displaying a first remaining playback-time associated with an audio file; adjusting the playback speed of the audio file during playback of the audio file; and, in response to the playback speed being adjusted, automatically displaying a second remaining playback-time associated with the audio file during playback of the audio file.
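The recomputed remaining-time display reduces to simple arithmetic: remaining wall-clock time is the unplayed duration divided by the playback speed. A minimal sketch, assuming times in seconds and speed as a multiplier (2.0 = twice as fast):

```python
def remaining_display(total_s, position_s, speed):
    """Remaining wall-clock playback time for an audio file played at
    `speed` x, given total duration and current position in seconds."""
    return (total_s - position_s) / speed

# A 5-minute file, 1 minute in: the displayed remaining time halves
# when playback is time-compressed to 2x.
print(remaining_display(300.0, 60.0, 1.0))  # -> 240.0
print(remaining_display(300.0, 60.0, 2.0))  # -> 120.0
```

The second displayed value is simply this quantity recomputed whenever the user adjusts the speed during playback.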
TECHNIQUES FOR CREATING AND PRESENTING MEDIA CONTENT
Different types of media experiences can be developed based on characteristics of the consumer. “Linear” experiences may require execution of a pre-built script, although the script could be dynamically modified by a media production platform. Linear experiences can include guided audio tours that are modified or updated based on the location of the consumer. “Enhanced” experiences include conventional media content that is supplemented with intelligent media content. For example, turn-by-turn directions could be supplemented with audio descriptions about the surrounding area. “Freeform” experiences, meanwhile, are those that can continually morph based on information gleaned from a consumer. For example, a radio station may modify what content is being presented based on the geographical metadata uploaded by a computing device associated with the consumer.