
Method, device and software for controlling transport of audio data
11488568 · 2022-11-01

A method for processing music audio data, including providing input audio data representing a first piece of music comprising a mixture of musical timbres. The method also includes decomposing the input audio data to generate at least first-timbre decomposed data representing a first timbre selected from the musical timbres of the first piece of music, and second-timbre decomposed data representing a second timbre selected from the musical timbres of the first piece of music. The method also includes applying a transport control to obtain transport controlled first-timbre decomposed data. The method also includes recombining audio data obtained from the transport controlled first-timbre decomposed data with audio data obtained from the second-timbre decomposed data to obtain recombined audio data.
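The claimed pipeline (decompose into timbre stems, apply a transport control to one stem, recombine) can be sketched as follows. This is an illustrative toy, not the patented implementation: the "decomposition" is a per-sample mask standing in for a real source-separation model, and the transport control is a crude integer-factor time stretch by sample repetition.

```python
# Toy sketch of decompose -> transport control on one stem -> recombine.
# The mask-based decompose() and repeat-based stretch are assumptions;
# a real system would use a separation model and a proper time-stretcher.

def decompose(mixture, mask):
    """Split a mixture into (first_timbre, second_timbre) stems using a
    per-sample mask in [0, 1] standing in for a separation model."""
    first = [m * s for m, s in zip(mask, mixture)]
    second = [(1 - m) * s for m, s in zip(mask, mixture)]
    return first, second

def transport_control(stem, stretch=2):
    """Toy transport control: time-stretch by integer sample repetition."""
    return [s for s in stem for _ in range(stretch)]

def recombine(a, b):
    """Sum two stems sample-by-sample, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

mix = [0.5, -0.25, 0.75, -0.5]
mask = [1.0, 0.0, 1.0, 0.0]            # first timbre on alternating samples
vocals, backing = decompose(mix, mask)
stretched = transport_control(vocals)  # transport applies to one stem only
out = recombine(stretched, backing)
```

Note that only the first stem is transport-controlled before recombination, which is the distinguishing step of the claim.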

SYSTEM AND METHOD FOR GENERATING AN AUDIO FILE
20220052773 · 2022-02-17

A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event.
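The retrieval step can be sketched as below. The abstract requires the stinger's key to be "relevant" to the event's key but does not define relevance, so this example assumes relevance means the same key or its relative major/minor; the library entries and relative-key table are illustrative placeholders.

```python
# Hypothetical key-relevance lookup: "relevant" is assumed to mean the
# same key or the relative major/minor. Table and library are made up.

RELATIVE = {"C": "Am", "G": "Em", "D": "Bm", "Am": "C", "Em": "G", "Bm": "D"}

def relevant(event_key, stinger_key):
    """True if the stinger's tagged key matches or is relative to the event key."""
    return stinger_key == event_key or RELATIVE.get(event_key) == stinger_key

def place_stinger(library, event_key, timeline_point):
    """Return (stinger_name, point) for the first key-relevant stinger."""
    for name, key in library:
        if relevant(event_key, key):
            return name, timeline_point
    return None

library = [("swell_01", "G"), ("stinger_07", "Am"), ("stinger_02", "C")]
hit = place_stinger(library, "C", 12.5)  # event detected in C at t=12.5s
```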

Automatically managing the musical tastes and preferences of a population of users requesting digital pieces of music automatically composed and generated by an automated music composition and generation system
11430419 · 2022-08-30

An automated music composition and generation system having an automated music composition and generation engine for processing musical experience descriptors and time and space parameters selected by the system user. The engine includes: a user taste generation subsystem for automatically determining the musical tastes and preferences of each system user based on user feedback and autonomous piece analysis, and maintaining a system user profile reflecting the musical tastes and preferences of each system user; and a population taste aggregation subsystem for aggregating the musical tastes and preferences of the population of system users, and modifying the musical experience descriptors and/or time and/or space parameters provided to the automated music composition and generation engine, so that the digital pieces of composed music better reflect the musical tastes and preferences of the population of system users and meet future system user requests for automated music compositions.
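The aggregation subsystem could work along these lines. This is a minimal sketch under stated assumptions, not the patented subsystems: per-user taste profiles are modeled as weights over musical experience descriptors, and the population profile is a plain average.

```python
# Illustrative aggregation: average per-user descriptor weights into a
# population profile that can bias future descriptor selection.
# Descriptor names and the averaging scheme are assumptions.

def aggregate(profiles):
    """Average per-user descriptor weights into one population profile."""
    population = {}
    for profile in profiles:
        for descriptor, weight in profile.items():
            population[descriptor] = population.get(descriptor, 0.0) + weight
    return {d: w / len(profiles) for d, w in population.items()}

users = [{"uplifting": 0.9, "fast": 0.2}, {"uplifting": 0.5, "fast": 0.8}]
pop = aggregate(users)  # population-level descriptor weights
```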

TRANSITION FUNCTIONS OF DECOMPOSED SIGNALS

A device including: first and second input units providing first and second input signals of first and second audio tracks, a decomposition unit to decompose the first input signal to obtain decomposed signals, a playback unit to start playback of a first output signal obtained from recombining at least first and second decomposed signals at first and second volume levels, respectively, and a transition unit for performing a transition between playback of the first output signal and playback of a second output signal obtained from the second input signal. The transition unit is adapted for reducing the first and second volume levels according to first and second transition functions, respectively. The device includes an analyzing unit to analyze an audio signal to determine a song part junction between two song parts. The transition time interval of at least one of the transition functions is set such as to include the song part junction.
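The junction-aligned transition can be sketched as follows. This is an illustrative sketch, not the patented implementation: each decomposed stem of the outgoing track gets its own linear fade-out function, and the transition window is positioned so that a detected song-part junction falls inside it. The junction value is a stand-in for the analyzing unit's output.

```python
# Per-stem linear fade-out functions whose transition interval is placed
# to include a detected song-part junction. Values are illustrative.

def transition_gain(t, start, end):
    """Linear fade-out from 1.0 to 0.0 over [start, end]."""
    if t <= start:
        return 1.0
    if t >= end:
        return 0.0
    return (end - t) / (end - start)

def transition_window(junction, length):
    """Center the transition interval on the song-part junction."""
    return junction - length / 2, junction + length / 2

# Stems fade at different rates: e.g. vocals drop over the full window,
# drums over only its first half; both intervals include the junction.
junction = 30.0                                    # seconds, from analysis
v_start, v_end = transition_window(junction, 8.0)  # vocals: 26..34
d_start, d_end = v_start, junction                 # drums:  26..30
```

Fading each decomposed signal on its own schedule, rather than the whole track at once, is what the per-stem transition functions enable.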

SYSTEM AND METHOD FOR GENERATING AN AUDIO FILE
20210391936 · 2021-12-16

A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event.

METHOD FOR GENERATING SONG MELODY AND ELECTRONIC DEVICE
20220208156 · 2022-06-30

Provided is a method for generating a song melody. The method includes: displaying a melody configuration page; acquiring melody attribute information selected based on the melody configuration page; displaying a melody generation button in a triggerable state on the melody configuration page in response to selection of the melody attribute information being completed; displaying a candidate melody page in response to a triggering operation on the melody generation button; and determining one or more selected candidate melodies from at least one candidate melody as a target melody.

METHOD, DEVICE AND SOFTWARE FOR CONTROLLING TRANSPORT OF AUDIO DATA
20220199056 · 2022-06-23

A method for processing music audio data, including providing input audio data representing a first piece of music comprising a mixture of musical timbres. The method also includes decomposing the input audio data to generate at least first-timbre decomposed data representing a first timbre selected from the musical timbres of the first piece of music, and second-timbre decomposed data representing a second timbre selected from the musical timbres of the first piece of music. The method also includes applying a transport control to obtain transport controlled first-timbre decomposed data. The method also includes recombining audio data obtained from the transport controlled first-timbre decomposed data with audio data obtained from the second-timbre decomposed data to obtain recombined audio data.

Gesture-controlled virtual reality systems and methods of controlling the same

Gesture-controlled virtual reality systems and methods of controlling the same are disclosed herein. An example apparatus includes an on-body sensor to output first signals associated with at least one of movement of a body part of a user or a position of the body part relative to a virtual object and an off-body sensor to output second signals associated with at least one of the movement or the position relative to the virtual object. The apparatus also includes at least one processor to generate gesture data based on at least one of the first or second signals, generate position data based on at least one of the first or second signals, determine an intended action of the user relative to the virtual object based on the position data and the gesture data, and generate an output of the virtual object in response to the intended action.
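The fusion step can be sketched as below. This is a hypothetical sketch of the described pipeline: gesture data (from the on-body sensor) and position data (from the off-body sensor) are combined to infer an intended action on a virtual object. The thresholds, gesture labels, and action names are assumptions for illustration only.

```python
# Assumed fusion of gesture data and position data into an intended
# action on a virtual object. Labels and thresholds are illustrative.

def intended_action(gesture, distance_to_object):
    """Map a recognized gesture plus proximity to the virtual object
    onto an intended action; far-away grabs are ignored."""
    if gesture == "grab" and distance_to_object < 0.1:
        return "pick_up"
    if gesture == "swipe":
        return "rotate"
    return "none"
```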

Transition functions of decomposed signals

A device for processing audio signals, including: first and second input units providing first and second input signals of first and second audio tracks, a decomposition unit to decompose the first input audio signal to obtain a plurality of decomposed signals, a playback unit configured to start playback of a first output signal obtained from recombining at least a first decomposed signal at a first volume level with a second decomposed signal at a second volume level, such that the first output signal substantially equals the first input signal, and a transition unit for performing a transition between playback of the first output signal and playback of a second output signal obtained from the second input signal. The transition unit has a volume control section adapted for reducing the first and second volume levels according to first and second transition functions.

COMPUTER VISION AND MAPPING FOR AUDIO APPLICATIONS
20230267900 · 2023-08-24

Systems, devices, media, and methods are presented for playing audio sounds, such as music, on a portable electronic device using a digital color image of a note matrix on a map. A computer vision engine, in an example implementation, includes a mapping module, a color detection module, and a music playback module. A camera captures a color image of the map, including a marker and a note matrix. Based on the color image, the computer vision engine detects a token color value associated with each field. Each token color value is associated with a sound sample from a specific musical instrument. A global state map is stored in memory, including the token color value and location of each field in the note matrix. The music playback module, for each column, in order, plays the notes associated with one or more of the rows, using the corresponding sound sample, according to the global state map.
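The playback step can be sketched as below, under stated assumptions: the global state map holds a token color value per note-matrix field, each color is tied to an instrument sample, and "playing" is modeled here as collecting note events column by column. Colors, sample names, and grid size are illustrative placeholders.

```python
# Column-ordered playback from a global state map of token colors.
# Color-to-sample mapping and the event representation are assumptions.

COLOR_TO_SAMPLE = {"red": "kick.wav", "blue": "snare.wav", "green": "piano_c4.wav"}

def play(state_map, rows, cols):
    """Walk columns in order; for each filled field emit (col, row, sample)."""
    events = []
    for col in range(cols):
        for row in range(rows):
            color = state_map.get((row, col))
            if color in COLOR_TO_SAMPLE:
                events.append((col, row, COLOR_TO_SAMPLE[color]))
    return events

# Fields keyed by (row, col); column 0 stacks two notes, column 1 has one.
state = {(0, 0): "red", (1, 0): "blue", (0, 1): "green"}
events = play(state, rows=2, cols=2)
```

Iterating columns in the outer loop reproduces the abstract's ordering: all rows of a column sound together before the next column plays.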