Patent classifications
G10H1/0025
MOBILITY SOUND GENERATION APPARATUS AND METHOD THEREOF
A mobility sound generation apparatus for generating a sound suited to the landscape while the mobility is driven may include: an information acquisition device provided in the mobility to acquire information on an outside landscape of the mobility while driving; a sound generation device configured to generate a sound corresponding to the information on the outside landscape; and a sound output device configured to output the generated sound. It is thereby possible to acquire information on the outside landscape while the mobility is driven and to generate a sound corresponding to that information, helping an occupant admire the landscape by expressing elements of the outside landscape, such as water, mountains, or buildings, in sound while driving.
SYSTEM AND METHOD FOR GENERATING HARMONIOUS COLOR SETS FROM MUSICAL INTERVAL DATA
Systems and methods are disclosed for generating color sets based on the musical concepts of pitch intervals and harmony. Color sets are derived via a music-to-hue process which analyzes musical pitch data associated with musical input to determine the pitch intervals included in the music. Pitch interval angles associated with those pitch intervals are applied to a tuned hue index to identify hue notes ordered within the index which are separated by a hue interval angle similar to the pitch interval angle associated with the analyzed pitch data. The systems and methods provide for the creation of color sets which are analogous to musical chords in that they include multiple hue notes selected based on hue interval angles derived from the musical interval angles associated with the received musical input.
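The pitch-interval-to-hue-angle idea can be sketched in a few lines. This is an illustrative reading of the abstract, not the patented method: it assumes the common mapping of one octave (12 semitones) onto the full 360-degree hue circle, so a musical interval of n semitones corresponds to a hue interval of 30n degrees. The function names and the fixed saturation/value are invented for the example.

```python
import colorsys

SEMITONE_ANGLE = 360 / 12  # assume one octave spans the full hue circle

def chord_to_hues(root_hue_deg, interval_semitones):
    """Map a root hue and a list of pitch intervals (in semitones)
    to hue notes separated by the corresponding hue interval angles."""
    return [(root_hue_deg + semis * SEMITONE_ANGLE) % 360
            for semis in interval_semitones]

def hue_to_rgb(hue_deg, sat=0.8, val=0.9):
    """Render a hue angle as an 8-bit RGB triple."""
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360, sat, val)
    return tuple(round(c * 255) for c in (r, g, b))

# A major triad (0, 4, 7 semitones) yields a three-hue "color chord":
# the third and fifth land 120 and 210 degrees around from the root hue.
triad_hues = chord_to_hues(0, [0, 4, 7])
palette = [hue_to_rgb(h) for h in triad_hues]
```

Under this mapping, consonant chords produce widely spaced, balanced palettes, which matches the abstract's analogy between color sets and musical chords.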
Method and system for hybrid AI-based song variant construction
According to an embodiment, there is provided a system and method for automatic AI-based song construction based on ideas of a user. It combines expert knowledge resident in an expert engine, which contains rules for musically correct song generation, with machine learning in an AI-based audio loop selection engine that selects fitting audio loops from a database of audio loops.
Systems, devices, and methods for musical catalog amplification services
Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinctive from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).
MULTIDIMENSIONAL GESTURES FOR MUSIC CREATION APPLICATIONS
A graphical user interface for music creation applications, such as score notation applications and digital audio workstations, includes multi-dimensional gestures. To enter a sound event into a musical project, a user uses an input device to select and drag a desired sound event in one or more dimensions. The relative position or rate of movement along a given dimension defines the value of the sound event parameter allocated to that dimension. The sound event is entered into the project when the selection is released. The user inputs the gesture using a pointing device such as a mouse, a stylus with a touch screen, or a finger on a touch screen. Stylus dimensions mapped to sound event parameters may include horizontal and vertical stylus tip positions, vertical and horizontal tilt of the stylus, and stylus tip pressure. Sound event parameters controlled by the gestures may include diatonic pitch, chromatic inflection, and duration.
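The dimension-to-parameter allocation described above can be illustrated with a minimal sketch. The specific assignments (vertical position to diatonic pitch, vertical tilt to chromatic inflection, pressure to duration) follow the parameters named in the abstract, but the value ranges, scaling, and names are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class StylusSample:
    x: float          # horizontal tip position, normalized 0..1
    y: float          # vertical tip position, normalized 0..1
    tilt_v: float     # vertical tilt, -1..1
    pressure: float   # tip pressure, 0..1

DIATONIC = [0, 2, 4, 5, 7, 9, 11]  # major-scale offsets in semitones

def gesture_to_sound_event(sample: StylusSample):
    """Each stylus dimension defines one sound-event parameter:
    vertical position -> diatonic pitch, vertical tilt -> chromatic
    inflection (sharp/flat), pressure -> duration."""
    degree = min(int(sample.y * 7), 6)        # scale degree 0..6
    pitch = 60 + DIATONIC[degree]             # MIDI note from middle C
    inflection = round(sample.tilt_v)         # -1, 0, or +1 semitone
    duration = 0.25 + sample.pressure * 1.75  # quarter note up to 2 beats
    return {"pitch": pitch + inflection, "duration": duration}
```

On release of the selection, the resulting event dictionary would be committed to the project; until then the mapping can run continuously to preview the event as the stylus moves.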
Modular automated music production server
A music production system comprises: a computer interface comprising at least one input for receiving an external request for a piece of music and at least one output for transmitting a response to the external request which comprises or indicates a piece of music incorporating first music data; a first music production component configured to process second music data according to at least a first input setting so as to generate the first music data; a second music production component configured to receive via the computer interface an internal request, and provide the second music data based on at least a second input setting denoted by the internal request; and a controller configured to determine in response to the external request the first and second input settings, and instigate the internal request via the computer interface.
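The component arrangement in this claim (controller derives both input settings from one external request, instigates an internal request to the second component, and feeds its output through the first) can be sketched structurally. The component names, settings keys, and the toy "music data" below are invented for illustration only:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MelodyComponent:
    """Stand-in for the second music production component:
    provides second music data based on the internal request's setting."""
    def produce(self, settings: Dict) -> List[int]:
        root = settings["root"]
        return [root, root + 4, root + 7]  # a simple triad

@dataclass
class ArrangerComponent:
    """Stand-in for the first music production component:
    processes second music data into first music data."""
    def process(self, notes: List[int], settings: Dict) -> List[int]:
        return notes * settings["repeats"]

@dataclass
class Controller:
    arranger: ArrangerComponent = field(default_factory=ArrangerComponent)
    melody: MelodyComponent = field(default_factory=MelodyComponent)

    def handle_external_request(self, request: Dict) -> List[int]:
        # Determine both input settings in response to the external request...
        first_settings = {"repeats": request.get("bars", 2)}
        second_settings = {"root": request.get("root", 60)}
        # ...instigate the internal request, then run the production chain.
        second_data = self.melody.produce(second_settings)
        return self.arranger.process(second_data, first_settings)

piece = Controller().handle_external_request({"root": 60, "bars": 2})
```

The point of the indirection is that the external caller never addresses the second component directly; the controller translates one external request into a chain of internally configured production steps.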
METHOD AND SYSTEM FOR TRANSLATION OF BRAIN SIGNALS INTO ORDERED MUSIC
The present invention is a computer-implemented method comprising: receiving, by one or more processors, data from an electroencephalogram device worn by a user, wherein the data is collected in relation to at least one brainwave; separating the collected data into individual data streams related to the one or more brainwaves; performing at least one manipulation on each of the individual data streams, wherein each of the data streams is manipulated to produce a sound; applying at least one filter to each of the sounds; and generating each of the sounds, whereby a musical composition is formed.
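The separation step can be pictured as splitting the EEG trace into the classic frequency bands (delta, theta, alpha, beta), then deriving one sound parameter per stream. This is a hedged sketch, not the patented method: the idealized FFT-based band split, the band edges, and the energy-to-pitch mapping are all assumptions for the example.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def separate_bands(eeg, fs):
    """Split a raw EEG trace into per-band data streams by zeroing
    out-of-band frequencies in the spectrum (an idealized band-pass)."""
    spectrum = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    streams = {}
    for name, (lo, hi) in BANDS.items():
        band = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        streams[name] = np.fft.irfft(band, n=len(eeg))
    return streams

def stream_to_pitch(stream, base_midi=48, span=24):
    """One possible manipulation: map a stream's RMS energy to a MIDI
    pitch, so more active bands sound higher notes."""
    energy = float(np.sqrt(np.mean(stream ** 2)))
    return base_midi + min(int(energy * span), span)

# A 10 Hz sinusoid should land almost entirely in the alpha band.
fs = 256
t = np.arange(fs) / fs
streams = separate_bands(np.sin(2 * np.pi * 10 * t), fs)
```

Each per-band sound would then pass through the claimed filters before being rendered together as the composition.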
MULTIMEDIA MUSIC CREATION USING VISUAL INPUT
A system for creating music using visual input. The system detects events and metrics (e.g., objects, gestures, etc.) in user input (e.g., video, audio, music data, touch, motion, etc.) and generates music and visual effects that are synchronized with the detected events and correspond to the detected metrics. To generate the music, the system selects parts from a library of stored music data and assigns each part to the detected events and metrics (e.g., using heuristics to match musical attributes to visual attributes in the user input). To generate the visual effects, the system applies rules (e.g., that map musical attributes to visual attributes) to translate the generated music data to visual effects. Because the visual effects are generated using music data that is generated using the detected events/metrics, both the generated music and the visual effects are synchronized with—and correspond to—the user input.
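The part-assignment heuristic described above (matching musical attributes to visual attributes of detected events) can be sketched minimally. The library entries, the single "motion" metric, and the nearest-loudness rule are invented stand-ins for whatever heuristics the system actually uses:

```python
# Invented part library: each stored part carries a musical attribute
# (loudness) that the heuristic matches against a visual metric.
PART_LIBRARY = [
    {"name": "soft_pad", "loudness": 0.2},
    {"name": "plucked",  "loudness": 0.5},
    {"name": "drum_hit", "loudness": 0.9},
]

def assign_parts(events):
    """For each detected visual event, pick the part whose loudness is
    closest to the event's motion metric, so louder music accompanies
    more intense motion and stays synchronized with the event times."""
    assignments = []
    for event in events:
        part = min(PART_LIBRARY,
                   key=lambda p: abs(p["loudness"] - event["motion"]))
        assignments.append({"time": event["time"], "part": part["name"]})
    return assignments

cues = assign_parts([{"time": 0.0, "motion": 0.95},
                     {"time": 1.2, "motion": 0.10}])
```

Because the visual effects are then driven by this generated music data, rendering effects from `cues` keeps music, effects, and the original detected events aligned on the same timestamps.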
Neurostimulation Systems and Methods
The present application discloses and describes neurostimulation systems and methods that include, among other features, (i) neural stimulation through audio with dynamic modulation characteristics, (ii) audio content serving and creation based on modulation characteristics, (iii) extending audio tracks while avoiding audio discontinuities, and (iv) non-auditory neurostimulation and methods, including non-auditory neurostimulation for anesthesia recovery.
Audio Source Separation Processing Pipeline Systems and Methods
Systems and methods for audio source separation include receiving a single-track audio input sample having an unknown mixture of audio signals generated from a plurality of audio sources, and separating one or more of the audio sources from the single-track audio input sample using a sequential audio source separation model. Separating one or more of the audio sources may include defining a processing recipe comprising a plurality of source separation processes configured to receive an audio input mixture and output one or more separated source signals and a remaining complement signal mixture, and processing the single-track audio input sample in accordance with the processing recipe to generate a plurality of audio stems separated from the unknown mixture of audio signals.
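The "processing recipe" structure (each separation process consumes a mixture and emits a separated source plus a remaining complement, which feeds the next process) can be sketched as a pipeline. The toy fixed-fraction "model" below is purely illustrative; real steps would wrap trained separation models:

```python
from typing import Callable, Dict, List, Tuple

# A separation process maps an audio mixture to
# (separated_source, remaining_complement_mixture).
SeparationProcess = Callable[[List[float]], Tuple[List[float], List[float]]]

def run_recipe(mixture: List[float],
               recipe: Dict[str, SeparationProcess]) -> Dict[str, List[float]]:
    """Apply the recipe's processes sequentially: each step peels one
    stem off the current mixture and hands the complement to the next."""
    stems = {}
    remainder = mixture
    for stem_name, process in recipe.items():
        stem, remainder = process(remainder)
        stems[stem_name] = stem
    stems["residual"] = remainder  # whatever no process claimed
    return stems

def make_scaler(fraction: float) -> SeparationProcess:
    """Toy stand-in for a separation model: claims a fixed fraction
    of every sample and leaves the rest as the complement."""
    def process(mix):
        return ([s * fraction for s in mix],
                [s * (1 - fraction) for s in mix])
    return process

stems = run_recipe([1.0, 1.0],
                   {"vocals": make_scaler(0.5), "drums": make_scaler(0.5)})
```

Ordering matters in this sequential design: each downstream process only ever sees what its predecessors left behind, which is why the recipe is defined as an ordered plurality of processes rather than a set of independent separators.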