Patent classifications
G10H2240/175
Audio-visual effects system for augmentation of captured performance based on content thereof
Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
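The segmentation-driven scheduling described above can be sketched as follows. This is a minimal illustration, not the patented method: the segment labels, mood names, effect names, and the schedule mapping are all illustrative assumptions.

```python
# Segmentation output: (start_seconds, end_seconds, structural label).
# In the described system these would come from audio segmentation of
# vocal or backing tracks; here they are hard-coded assumptions.
segments = [
    (0.0, 12.5, "intro"),
    (12.5, 40.0, "verse"),
    (40.0, 60.0, "chorus"),
]

# A mood-denominated schedule maps elements of musical structure to
# visual effects. Mood and effect names are hypothetical.
schedules = {
    "upbeat": {"intro": "sparkle", "verse": "color_wash", "chorus": "strobe"},
    "mellow": {"intro": "fade_in", "verse": "soft_glow", "chorus": "bloom"},
}

def effect_at(time_s, mood, default="none"):
    """Return the visual effect active at a playback time for a mood."""
    for start, end, label in segments:
        if start <= time_s < end:
            return schedules[mood].get(label, default)
    return default

print(effect_at(45.0, "upbeat"))  # strobe (45 s falls in the chorus)
```

The same lookup could be driven by a machine-learned mood classifier instead of a performer-selected mood, as the abstract suggests.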
Headphone
An audio signal output system including a first audio signal path, a second audio signal path and control electronics that provide a mixed sound from sound sources to the first and second audio signal paths, and control the mixed sound in accordance with a position at which a sound image associated with at least one of the sound sources is localized relative to an orientation of a user's head. The control electronics provide one or more signals corresponding to the mixed sound to the first and second audio signal paths based on one or more transfer functions associated with one or more values corresponding to at least one of a distance from the user to the at least one sound source, an angle of the user with respect to the at least one sound source, and a size of a space in which the user is located.
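A rough sketch of deriving per-path signal parameters from a source's position relative to the listener, assuming a Woodworth-style interaural time difference model and a simple distance/pan gain law. This stands in for the transfer functions described above; the constants and gain model are illustrative assumptions, not a real HRTF database.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.09      # m, rough average head radius (assumption)

def mix_for_ears(distance_m, angle_deg):
    """Return (left_gain, right_gain, itd_seconds) for one sound source.

    angle_deg is azimuth: 0 is straight ahead, +90 is full right.
    """
    theta = math.radians(angle_deg)
    # Woodworth approximation of interaural time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))
    base = 1.0 / max(distance_m, 0.1)   # simple distance attenuation
    # Constant-power pan between the two audio signal paths.
    left = base * math.cos((theta + math.pi / 2) / 2)
    right = base * math.sin((theta + math.pi / 2) / 2)
    return left, right, itd
```

A source straight ahead (angle 0) yields equal gains and zero delay; as it moves right, the right-path gain and the interaural delay grow, which is the qualitative behavior the abstract's transfer functions encode.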
MUSIC RECORDING AND COLLABORATION PLATFORM
Methods, systems and non-transitory computer-readable mediums for remote audio project collaboration. The method includes generating a first version of an audio project file including a reference track. The method also includes receiving a first audio track from a first user computing device. The first audio track is synced to the reference track. The method further includes generating a second version of the audio project file by adding the first audio track to the audio project file. The method also includes receiving a second audio track from a second user computing device. The second audio track is synced to the reference track. The second user computing device is remotely located from the first user computing device. The method further includes generating a third version of the audio project file by adding the second audio track to the audio project file.
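The versioning flow above can be sketched as immutable project snapshots, one per added track. The class name, field names, and the offset-string stand-in for real audio alignment are all hypothetical.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ProjectVersion:
    version: int
    reference_track: str
    tracks: Tuple[str, ...] = ()

def add_track(project, track, offset_s):
    """Sync a received track to the reference and emit the next version."""
    synced = f"{track}@{offset_s:+.3f}s"   # stand-in for real alignment
    return ProjectVersion(project.version + 1,
                          project.reference_track,
                          project.tracks + (synced,))

v1 = ProjectVersion(1, "reference.wav")          # first version
v2 = add_track(v1, "vocals_userA.wav", 0.012)    # first user's track
v3 = add_track(v2, "guitar_userB.wav", -0.004)   # remote second user
print(v3.version, len(v3.tracks))  # 3 2
```

Keeping each version immutable means earlier versions remain intact for other remote collaborators still working against them.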
Musical notation, system, and methods
In one aspect, provided herein is a device for notating a musical composition. The device, in various implementations, is structured so as to be less laborious to notate, easier to read, and simpler to employ in notating, reading, and/or playing the music of a given composition to be composed and/or played. Accordingly, in its most basic form, the device herein disclosed includes a template upon which one or more symbols may be notated, where such notation is configured in a manner that more closely relates the note to be played to the mechanical action needed to play that note, such as on an instrument to be or being played.
METHOD AND SYSTEM FOR GENERATING MEDIA CONTENT
Systems and techniques are described herein for determining latencies between user devices, ordering or grouping the user devices according to those determined latencies, and then streaming audio to the user devices. The audio stream is played through speakers while vocalizations of the user, as they sing along to the received stream, are captured and combined with vocalizations of other users to create a final combined audio file.
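The ordering-and-grouping step can be sketched as a latency-bucketing pass: sort devices by measured latency, then group devices whose latencies fall within a tolerance of the group's first member. The bucket width and device latencies are illustrative assumptions.

```python
def group_by_latency(latencies_ms, bucket_ms=50):
    """Sort devices by latency; start a new group whenever a device's
    latency exceeds the current group's base by more than bucket_ms."""
    ordered = sorted(latencies_ms.items(), key=lambda kv: kv[1])
    groups, current, base = [], [], None
    for device, latency in ordered:
        if base is None or latency - base > bucket_ms:
            if current:
                groups.append(current)
            current, base = [], latency
        current.append(device)
    if current:
        groups.append(current)
    return groups

latencies = {"phoneA": 20, "phoneB": 35, "tablet": 120, "laptop": 140}
print(group_by_latency(latencies))
# [['phoneA', 'phoneB'], ['tablet', 'laptop']]
```

Devices in the same group can then receive the stream together, so their captured vocalizations stay roughly aligned before the final combining step.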
SONG PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
This application provides a song processing method performed by a computer device. The method includes: presenting a song recording interface in response to a singing instruction triggered in a session interface of a group chat session; recording a song in response to a song recording instruction triggered in the song recording interface, and determining a reverberation effect corresponding to the recorded song; and transmitting, in response to a song transmitting instruction, a target song obtained by processing the song based on the reverberation effect to members of the group chat session, presenting a session message corresponding to the target song in the session interface, and presenting the pick-up singing function item corresponding to the target song in the session interface, the pick-up singing function item being used for implementing pick-up singing of the target song by a member of the group chat session.
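The reverberation step above can be illustrated with a minimal comb-filter reverb applied to a recorded song's samples before sharing. The delay, decay, and sample values are illustrative assumptions, not the application's actual effect chain.

```python
def apply_reverb(samples, delay=3, decay=0.5):
    """Feed a decayed, delayed copy of the output back into the signal
    (a single feedback comb filter, the simplest reverberation block)."""
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += decay * out[i - delay]
    return out

dry = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # a unit impulse
wet = apply_reverb(dry)
print(wet)  # [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25]
```

The impulse response shows the characteristic decaying echo train; a real reverberation effect would combine several such filters with allpass stages.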
PROVIDING MIDI CONTROLS WITHIN A VIRTUAL CONFERENCING SYSTEM
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing Musical Instrument Digital Interface (MIDI) controls within a virtual conferencing system. The program and method provide, in association with designing a room for virtual conferencing, an interface for updating a property of an element within a room based on a received MIDI message; receiving, based on the interface, an indication of user input specifying to update the property of the element when the received MIDI message includes a predefined value; providing a virtual conference between plural participants within the room, the room including the element; receiving a MIDI message that includes the predefined value; and updating, in response to receiving the MIDI message, the property of the element.
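The binding described above, where a room element's property updates when a received MIDI message includes a predefined value, can be sketched like this. The element names, property names, binding format, and chosen control-change number are illustrative assumptions.

```python
# A room with one element whose properties a designer can bind to MIDI.
room = {"stage_light": {"color": "white", "brightness": 0}}

# Designer-configured binding: when a control-change (status 0xB0) on
# controller 7 arrives with the predefined value 96, set the bound
# property to the configured target value.
bindings = [
    {"status": 0xB0, "controller": 7, "when_value": 96,
     "element": "stage_light", "property": "brightness", "set_to": 100},
]

def handle_midi(status, controller, value):
    """Apply any bindings whose predefined value matches the message."""
    for b in bindings:
        if (b["status"] == status and b["controller"] == controller
                and b["when_value"] == value):
            room[b["element"]][b["property"]] = b["set_to"]

handle_midi(0xB0, 7, 50)   # no binding matches; nothing changes
handle_midi(0xB0, 7, 96)   # predefined value received
print(room["stage_light"]["brightness"])  # 100
```

In the virtual conference itself, each incoming MIDI message would be routed through such a handler so every participant sees the element update.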
TECHNIQUES OF COORDINATING SENSORY EVENT TIMELINES OF MULTIPLE DEVICES
Embodiments described herein relate to techniques of coordinating sensory event timelines of multiple devices. The devices may use the sensory event timelines to output sensory events such as audio segments. The devices may take turns determining the sensory events to be output by the devices using their sensory event timelines. The techniques coordinate transitions of the devices between a first mode in which a device is allowed to determine sensory events to be output and a second mode in which a device outputs sensory events determined by another device.
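The two-mode turn-taking above can be sketched as a rotating "determiner" role: one device chooses the next sensory event while the rest follow and output it. The device names, rotation rule, and event labels are illustrative assumptions.

```python
from itertools import cycle

devices = ["speaker_1", "speaker_2", "speaker_3"]

def run_timeline(turns):
    """Rotate the determiner role per turn; every device, follower or
    leader, outputs the event the current leader determined."""
    timeline = []
    leader_cycle = cycle(devices)
    for _ in range(turns):
        leader = next(leader_cycle)
        event = f"segment_by_{leader}"
        timeline.append({
            "leader": leader,
            "outputs": {d: event for d in devices},
        })
    return timeline

tl = run_timeline(3)
print([t["leader"] for t in tl])  # ['speaker_1', 'speaker_2', 'speaker_3']
```

The mode transition in the abstract corresponds to each device switching between the leader slot (first mode) and the follower slots (second mode) as the cycle advances.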
Non-transitory computer-readable medium having computer-readable instructions and system
A system including: an electronic memory device and a processor. The processor is configured to: control a communication device to receive first input information indicating a first instruction, the first instruction corresponding to control of sound associated with at least one first sound source; control a transmitter of the communication device to transmit first information to an audio output device, the first information corresponding to the first input information indicating the first instruction; control the communication device to receive second input information indicating a second instruction corresponding to control of sound associated with at least one second sound source; and control the transmitter to transmit second information to the audio output device, the second information including or corresponding to an audio signal associated with the at least one second sound source and processed according to the second input information indicating the second instruction.
SPATIAL MUSIC CREATION INTERFACE
Disclosed is a method of providing a music creation interface using a head-mounted device, including displaying first and second geometric loops fixed relative to a location in the real world, the first and second geometric loops each including a plurality of beat indicators. The second geometric loop is spaced apart from the first geometric loop. An interface comprising a plurality of sound or note icons is displayed, and in response to receiving user selection to move a selected sound or note icon to a particular beat indicator on one of the geometric loops, the selected sound or note icon is displayed at the particular beat indicator. In use, the geometric loops are rotated relative to at least one play indicator, and the selected sound or note icon is rendered when it reaches the at least one play indicator.
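The loop playback described above can be sketched as a ring of beat slots rotating past a fixed play indicator: on each tick, whichever sound icon has reached the indicator is rendered. The slot count and sound names are illustrative assumptions.

```python
def render_loop(slots, ticks):
    """Rotate the ring one slot per tick past a fixed play indicator
    (slot 0) and return the sound rendered on each tick, or None."""
    n = len(slots)
    return [slots[tick % n] for tick in range(ticks)]

# An 8-beat geometric loop: the user placed icons on two beat indicators.
loop = [None] * 8
loop[0] = "kick"
loop[4] = "snare"

print(render_loop(loop, 8))
# ['kick', None, None, None, 'snare', None, None, None]
```

Multiple loops spaced apart, as in the head-mounted display, would simply be multiple such rings rendered against their own (or a shared) play indicator.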