
Systems and Methods for Generating Recommendations in a Digital Audio Workstation

A method includes displaying a user interface of a digital audio workstation, which includes a first region for generating a composition. The first region includes a first compositional segment that has been added to the composition by a user. Based on the first compositional segment, one or more recommended predefined compositional segments are identified and displayed in a second region. The method includes receiving a selection of a second compositional segment from the recommended predefined compositional segments, and adding the second compositional segment to the composition.
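
To make the recommendation step concrete, here is a minimal Python sketch, assuming segments carry key, tempo, and genre attributes and that compatibility is scored by a simple heuristic; neither the fields nor the scoring is taken from the patent itself.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        name: str
        key: str       # e.g. "C major"
        tempo: float   # beats per minute
        genre: str

    def recommend(first: Segment, library: list[Segment], top_n: int = 5) -> list[Segment]:
        """Rank predefined segments by compatibility with the first segment."""
        def score(candidate: Segment) -> float:
            s = 0.0
            if candidate.key == first.key:
                s += 2.0                                    # same key blends easily
            if candidate.genre == first.genre:
                s += 1.0
            s -= abs(candidate.tempo - first.tempo) / 20.0  # penalize tempo mismatch
            return s
        return sorted(library, key=score, reverse=True)[:top_n]

The top-scoring segments would populate the second region; the user's pick is then appended to the composition.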

COMPUTER VISION AND MAPPING FOR AUDIO APPLICATIONS
20210366449 · 2021-11-25

Systems, devices, media, and methods are presented for playing audio sounds, such as music, on a portable electronic device using a digital color image of a note matrix on a map. A computer vision engine, in an example implementation, includes a mapping module, a color detection module, and a music playback module. A camera captures a color image of the map, including a marker and a note matrix. Based on the color image, the computer vision engine detects a token color value associated with each field of the note matrix. Each token color value is associated with a sound sample from a specific musical instrument. A global state map is stored in memory, including the token color value and location of each field in the note matrix. The music playback module, for each column in order, plays the notes associated with one or more of the rows, using the corresponding sound sample, according to the global state map.
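
A minimal Python sketch of the column-by-column playback, assuming the global state map is a dict keyed by (row, column) holding a color token; the sample table and the play() stub are illustrative stand-ins, not the patent's API.

    import time

    SAMPLES = {"red": "kick.wav", "green": "snare.wav", "blue": "piano_c4.wav"}

    def play(sample_path: str) -> None:
        print(f"playing {sample_path}")   # stand-in for a real audio call

    def play_matrix(state_map: dict[tuple[int, int], str],
                    columns: int, seconds_per_column: float = 0.25) -> None:
        for col in range(columns):                        # left to right, in order
            for (_row, c), color in sorted(state_map.items()):
                if c == col and color in SAMPLES:         # every marked field in this column
                    play(SAMPLES[color])                  # instrument chosen by token color
            time.sleep(seconds_per_column)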

Modifying an array of cells in a cell matrix for a step-sequencer
11169650 · 2021-11-09

A method of operating a UI for controlling a virtual musical instrument can include receiving a first input corresponding to a selection of an array of cells within a cell matrix. Each array of the cell matrix is assigned to audio sample data, stored in a computer-readable medium, that outputs corresponding audio when triggered; each cell within a particular array, in response to being selected for playback and upon being triggered to begin playback, causes the audio sample data corresponding to that array to be played. The method can further include receiving a second input corresponding to a change in the number of cells within the selected array, and changing the number of cells within the selected array based on the second input while maintaining the number of cells in the other arrays of the cell matrix.
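
The resize behavior lends itself to a short sketch; the matrix layout below, a dict of named arrays of boolean step cells, is an assumption for illustration.

    def resize_array(matrix: dict[str, list[bool]], selected: str, new_size: int) -> None:
        """Change the cell count of one array; other arrays keep theirs."""
        cells = matrix[selected]
        if new_size > len(cells):
            cells.extend([False] * (new_size - len(cells)))  # grow with inactive cells
        else:
            del cells[new_size:]                             # shrink, dropping the tail

    matrix = {"kick": [True, False] * 4, "snare": [False] * 8}
    resize_array(matrix, "kick", 12)       # kick now has 12 steps
    assert len(matrix["snare"]) == 8       # snare's cell count is maintained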

Method and device for processing, playing and/or visualizing audio data, preferably based on AI, in particular decomposing and recombining of audio data in real-time

The present invention relates to a method for processing and playing audio data, comprising the steps of receiving mixed input data and playing recombined output data. Furthermore, the invention relates to a device for processing and playing audio data, preferably DJ equipment, comprising an audio input unit for receiving a mixed input signal, a recombination unit, and a playing unit for playing recombined output data. In addition, the present invention relates to a method and a device for visualizing audio data, e.g., on a display.
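
A rough Python sketch of the decompose-and-recombine flow, where separate_stems() stands in for an AI source-separation model and the gain mix is one illustrative way to recombine; none of it reflects the patent's actual algorithm.

    import numpy as np

    def separate_stems(mixed: np.ndarray) -> dict[str, np.ndarray]:
        # Placeholder: a real implementation would run a trained separation model.
        return {"vocals": mixed * 0.5, "drums": mixed * 0.3, "other": mixed * 0.2}

    def recombine(stems: dict[str, np.ndarray], gains: dict[str, float]) -> np.ndarray:
        """Mix the stems back together with per-stem gains."""
        out = np.zeros_like(next(iter(stems.values())))
        for name, signal in stems.items():
            out += gains.get(name, 1.0) * signal   # e.g. gains={"vocals": 0.0} mutes vocals
        return out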

System for generating an output file

A system for creating an output comprises a processing unit, a user input module operably connected to the processing unit, and a video monitor operably connected to the processing unit. The processing unit provides on the video monitor: a grid image comprising multiple cells, each cell representing a duration of time; and a selection area comprising multiple select icons, each select icon representing a source data file. The processing unit is configured such that a user can create a grid layout representing the correlation between individual selected source data files and one or more of the multiple cells. The processing unit produces the output based on the correlation.
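
A minimal sketch of the grid-to-output step, assuming each cell spans a fixed duration and the layout is a mapping from cell index to source file; the event-list output format is invented for illustration.

    def render_layout(layout: dict[int, str], cell_seconds: float) -> list[tuple[float, str]]:
        """Turn {cell_index: source_file} into (start_time, file) events."""
        return [(index * cell_seconds, path) for index, path in sorted(layout.items())]

    events = render_layout({0: "intro.wav", 4: "verse.wav", 12: "outro.wav"}, cell_seconds=2.0)
    # [(0.0, 'intro.wav'), (8.0, 'verse.wav'), (24.0, 'outro.wav')]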

Automated music composition and generation system employing virtual musical instrument libraries for producing notes contained in the digital pieces of automatically composed music
11776518 · 2023-10-03

An automated music composition and generation system including a system user interface for enabling system users to review and select one or more musical experience descriptors, as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the system user interface, for receiving, storing, and processing the selected descriptors and parameters so as to automatically compose and generate one or more digital pieces of music in response. Each digital piece of composed and generated music contains a set of musical notes arranged and performed in the digital piece of music. The engine includes a digital piece creation subsystem and a digital audio sample producing subsystem supported by virtual musical instrument libraries.
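
At the interface level, a descriptor-driven engine might look like the following sketch; the descriptor fields and the note tuple are assumptions, and the patented engine comprises many more subsystems.

    from dataclasses import dataclass

    @dataclass
    class MusicRequest:
        descriptors: list[str]      # e.g. ["uplifting", "cinematic"]
        duration_seconds: float     # the time parameter selected by the user

    def compose(request: MusicRequest) -> list[tuple[float, int, str]]:
        """Return (start_time, midi_pitch, instrument) notes for the piece."""
        notes = []
        pitch = 67 if "uplifting" in request.descriptors else 60
        t = 0.0
        while t < request.duration_seconds:
            notes.append((t, pitch, "strings"))   # rendered from a virtual instrument library
            t += 0.5
        return notes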

Method of and system for automatically generating music compositions and productions using lyrical input and music experience descriptors
11657787 · 2023-05-23

An automated music composition and generation process within an automated music composition and generation system driven by lyrical musical experience descriptors. The process involves the system user accessing the automated music composition and generation system and employing an automated music composition and generation engine having a system user interface. The system user interface is used to select and provide musical experience descriptors, including lyrics, to the automated music composition and generation engine for processing. The system user then initiates the engine to compose and generate music based on the musical experience descriptors and lyrics provided.

Automated music composition and generation system driven by lyrical input
11651757 · 2023-05-16

An automated music composition and generation process within an automated music composition and generation system driven by lyrics. The process involves the system user accessing the automated music composition and generation system and employing an automated music composition and generation engine having a system user interface. The system user interface is used to provide lyrics to the automated music composition and generation engine for processing. The system user initiates the engine to compose and generate music based on the lyrics provided as input. The lyrics are analyzed for vowel formants to generate pitch events, which are used to support the automated music composition process.
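
The vowel-formant idea can be sketched as quantizing formant frequencies (first formant F1 here) to MIDI pitches that seed the composition; the formant table and the mapping are illustrative assumptions, not the patent's analysis.

    import math

    F1_HZ = {"a": 730.0, "e": 530.0, "i": 270.0, "o": 570.0, "u": 300.0}  # rough F1 values

    def hz_to_midi(hz: float) -> int:
        return round(69 + 12 * math.log2(hz / 440.0))

    def pitch_events(lyrics: str) -> list[int]:
        """One pitch event per vowel in the lyric line."""
        return [hz_to_midi(F1_HZ[ch]) for ch in lyrics.lower() if ch in F1_HZ]

    print(pitch_events("hello world"))   # -> [72, 73, 73] from the vowels e, o, o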

GESTURE-CONTROLLED VIRTUAL REALITY SYSTEMS AND METHODS OF CONTROLLING THE SAME

Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
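
A minimal sketch of the fusion step, in which gesture and position estimates drawn from the two sensor streams are combined to choose an intended action on a virtual object; the thresholds and action labels are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Fused:
        gesture: str                # e.g. "grab", "swipe", from either sensor stream
        distance_to_object: float   # meters from the body part to the virtual object

    def intended_action(f: Fused) -> str:
        if f.gesture == "grab" and f.distance_to_object < 0.1:
            return "pick_up"        # close enough to take hold of the object
        if f.gesture == "swipe":
            return "rotate"
        return "none"

    print(intended_action(Fused("grab", 0.05)))   # -> "pick_up"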

System and method for generating an audio file
11569922 · 2023-01-31

A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform the steps of: synchronizing the first audio or MIDI file with the video file; marking an event in the video file at a point on a timeline; detecting a first musical key for the event; retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file tagged with a second musical key that is relevant to the first musical key; and placing the musical stinger or swell at the point on the timeline marked for the event.
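
A short sketch of the key-matching lookup, taking "relevant" to mean the same key or its relative major/minor; that relevance rule, like the library shape, is an assumption rather than the patent's definition.

    RELATIVE = {"C major": "A minor", "A minor": "C major",
                "G major": "E minor", "E minor": "G major"}

    def find_stinger(event_key: str, library: dict[str, str]) -> str | None:
        """library maps stinger file -> tagged key; return a relevant stinger."""
        relevant = {event_key, RELATIVE.get(event_key, "")}
        for path, key in library.items():
            if key in relevant:
                return path
        return None

    lib = {"swell1.mid": "E minor", "sting2.mid": "C major"}
    print(find_stinger("G major", lib))   # -> "swell1.mid" (the relative minor)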