Patent classifications
G10H2220/005
Animation effect attachment based on audio characteristics
Systems and methods for rendering a video effect to a display are described. More specifically, video data and audio data are obtained. The video data is analyzed to determine one or more attachment points of a target object that appears in the video data. The audio data is analyzed to determine audio characteristics. A video effect associated with an animation to be added to the one or more attachment points is determined based on the audio characteristics. A rendered video is generated by applying the video effect to the video data.
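As a rough illustration of the pipeline this abstract describes (obtain video and audio, find attachment points, map audio characteristics to an effect, render), here is a minimal Python sketch; the function and field names are assumptions for illustration, not taken from the patent.

```python
# Hypothetical sketch: audio characteristics pick an animation effect, which is
# then attached to tracked points of a target object in each video frame.
from dataclasses import dataclass

@dataclass
class Effect:
    animation: str    # which animation to attach
    intensity: float  # scaled from audio characteristics

def effect_from_audio(tempo_bpm: float, energy: float) -> Effect:
    # Faster, louder audio -> a more energetic animation (illustrative rule).
    name = "pulse" if tempo_bpm > 120 else "sparkle"
    return Effect(name, max(0.0, min(1.0, energy)))

def render_video(frames, attachment_points, effect):
    # Overlay the effect at every attachment point of the target object.
    rendered = []
    for frame, points in zip(frames, attachment_points):
        overlays = [(x, y, effect.animation, effect.intensity) for x, y in points]
        rendered.append({"frame": frame, "overlays": overlays})
    return rendered

# Example: two frames, one tracked attachment point per frame.
video = render_video(frames=["f0", "f1"],
                     attachment_points=[[(120, 80)], [(122, 82)]],
                     effect=effect_from_audio(tempo_bpm=128, energy=0.7))
print(video)
```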
Controller for real-time visual display of music
A controller for real-time visual display of music includes a music analysis module and a display control module. The music analysis module receives an audio input, determines human-perceived musical structure and human-felt affect and emotion as a function of the audio input, and outputs a signal corresponding to the determined structure, affect, and emotion. The display control module is operatively coupled to the music analysis module, receives the signal, and controls a visual display as a function thereof to express the determined musical structure, affect, and emotion in a visual manner.
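One way to read the two-module structure in this abstract is as an analysis stage that emits a compact signal and a display stage that maps that signal to visual parameters. The sketch below uses crude stand-ins (loudness and zero-crossing count) for the perceived structure and affect; none of it comes from the patent itself.

```python
# Illustrative two-stage controller: analysis module -> signal -> display control.
def analyze(audio_frame):
    # Stand-ins for perceived structure/affect: mean loudness and a coarse "brightness"
    # estimated from the number of zero crossings in the frame.
    loudness = sum(abs(s) for s in audio_frame) / max(len(audio_frame), 1)
    brightness = sum(1 for a, b in zip(audio_frame, audio_frame[1:]) if (a < 0) != (b < 0))
    return {"loudness": loudness, "brightness": brightness}

def control_display(signal):
    # Map the analysis signal to display parameters (hue from brightness, size from loudness).
    return {"hue_deg": (signal["brightness"] * 7) % 360,
            "size": 10 + 100 * signal["loudness"]}

frame = [0.1, -0.2, 0.3, -0.1, 0.05]
print(control_display(analyze(frame)))
```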
Virtual and real composite image data generation method, virtual and real images compositing system, trained model generation method, virtual and real composite image data generation device
A method for generating virtual and real composite image data includes: acquiring captured image data capturing an image of a real space as seen from a user's point of view; inputting the captured image data into a trained model, the trained model outputting segmentation data segmenting the captured image data into a first region in which a target object is displayed, a second region in which at least a part of the user's body is displayed, and a third region that is other than the first and second regions; and compositing data of the first region and data of the second region with virtual space image data based on the segmentation data.
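Assuming the trained model has already produced per-pixel region labels, the compositing step described above can be sketched as follows; the label values and data layout are illustrative assumptions.

```python
# Hypothetical compositing step: keep captured pixels for the target-object and
# body regions, use the virtual-space image everywhere else.
OBJECT, BODY, OTHER = 1, 2, 0  # assumed per-pixel labels from the trained model

def composite(captured, segmentation, virtual):
    out = []
    for cap_row, seg_row, vir_row in zip(captured, segmentation, virtual):
        out.append([cap if seg in (OBJECT, BODY) else vir
                    for cap, seg, vir in zip(cap_row, seg_row, vir_row)])
    return out

# Toy 3x4 "images": 'c' = captured pixel, 'v' = virtual-space pixel.
captured = [["c"] * 4 for _ in range(3)]
segmentation = [[0, 1, 1, 0], [0, 2, 2, 0], [0, 0, 0, 0]]
virtual = [["v"] * 4 for _ in range(3)]
print(composite(captured, segmentation, virtual))
```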
CONTROLLER FOR VISUAL DISPLAY OF MUSIC
Systems and methods for visualizations of music may include one or more processors which receive an audio input and compute a simulation of a human auditory periphery using the audio input. The processor(s) may generate one or more visual patterns on a visual display according to the simulation, the one or more visual patterns synchronized to the audio input.
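As a very loose stand-in for the idea of driving visuals from an auditory-periphery simulation, the sketch below uses a bank of DFT band energies in place of a cochlear model and maps each band to one element of a visual pattern; this is an assumption-laden illustration, not the patent's method.

```python
# Illustrative only: band energies as a crude proxy for an auditory-periphery model.
import math

def band_energies(samples, n_bands=8):
    # Coarse "periphery": split the DFT magnitude spectrum into n_bands bands.
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    band = max(1, len(mags) // n_bands)
    return [sum(mags[i:i + band]) for i in range(0, band * n_bands, band)]

def pattern(energies):
    # Each band drives the normalized size of one element in the visual pattern.
    peak = max(energies) or 1.0
    return [round(e / peak, 2) for e in energies]

samples = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(128)]
print(pattern(band_energies(samples)))
```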
SYSTEM AND METHOD FOR GENERATING HARMONIOUS COLOR SETS FROM MUSICAL INTERVAL DATA
Systems and methods are disclosed for generating color sets based on musical concepts of pitch intervals and harmony. Color sets are derived via a music-to-hue process which analyzes musical pitch data associated with musical input to determine pitch intervals included in the music. Pitch interval angles associated with the pitch intervals are applied to a tuned hue index to identify hue notes ordered within the index which are separated by a hue interval angle similar to the pitch interval angle associated with the analyzed pitch data. The systems and methods provide for the creation of color sets which are analogous to musical chords in that they include multiple hue notes selected based on hue interval angles derived from musical interval angles associated with the received musical input.
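The interval-to-hue mapping can be illustrated with one simple assumption: the 12 pitch classes and 12 hue notes both sit on a circle, so an interval of n semitones corresponds to a hue interval of n * 30 degrees. The tuning and angle values below are assumptions for the example, not the patent's specification.

```python
# Illustrative music-to-hue mapping: pitch intervals -> interval angles -> hue notes.
HUE_INDEX = [i * 30 for i in range(12)]  # a "tuned" hue index: 12 hue notes, 30 deg apart

def interval_angle(pitch_a, pitch_b):
    # MIDI pitches -> interval in semitones -> angle on the pitch-class circle.
    return ((pitch_b - pitch_a) % 12) * 30

def color_set(root_hue_deg, pitches):
    # Build a "chord" of hues: one hue note per pitch, offset by the interval angle.
    root = pitches[0]
    return [(root_hue_deg + interval_angle(root, p)) % 360 for p in pitches]

# C major triad (C4, E4, G4) anchored at a red-ish root hue.
print(color_set(root_hue_deg=10, pitches=[60, 64, 67]))  # -> [10, 130, 220]
```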
Audio-visual effects system for augmentation of captured performance based on content thereof
Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
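A mood-denominated effects schedule applied per element of musical structure might be organized as in the sketch below; the segment labels, mood names, and effect names are all assumptions made for the example.

```python
# Illustrative: pick a mood-denominated schedule, then assign one effect per
# segment produced by audio segmentation of the performance.
SCHEDULES = {
    "upbeat": {"verse": "strobe_soft", "chorus": "strobe_hard", "bridge": "color_wash"},
    "mellow": {"verse": "slow_fade", "chorus": "glow", "bridge": "slow_fade"},
}

def apply_schedule(segments, mood):
    # segments: list of (start_sec, end_sec, label) from segmentation of an audio track.
    schedule = SCHEDULES[mood]
    return [(start, end, schedule.get(label, "none")) for start, end, label in segments]

segments = [(0.0, 15.2, "verse"), (15.2, 30.1, "chorus"), (30.1, 40.0, "bridge")]
print(apply_schedule(segments, mood="mellow"))
```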
Systems And Methods For Providing Paint Colors Based On Music
Systems and methods are provided and include a computing device that receives a selection of a song available for streaming from a music streaming service server and transmits a query to the music streaming service server for a musical attribute associated with the selected song. The computing device receives the musical attribute and determines a paint color associated with the selected song based on the musical attribute. The computing device displays the determined paint color and a name and identification code of the paint color on a display of the computing device. The computing device outputs the selected song to a speaker of the computing device while displaying the determined paint color on the display.
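The attribute-to-paint-color step could look roughly like the sketch below, with a made-up threshold table standing in for both the streaming-service query result and the paint catalog; the attribute, paint names, and codes are invented for illustration.

```python
# Illustrative lookup: a 0-1 musical attribute (e.g. "valence") -> a paint color.
PAINT_BY_VALENCE = [
    (0.33, ("Deep Harbor", "EX-1043")),  # low valence -> darker color (made-up entries)
    (0.66, ("Soft Sage",   "EX-1121")),
    (1.01, ("Sunny Side",  "EX-1201")),  # high valence -> brighter color
]

def paint_for_song(musical_attribute: float):
    for threshold, (name, code) in PAINT_BY_VALENCE:
        if musical_attribute < threshold:
            return {"name": name, "code": code}
    return None

print(paint_for_song(0.72))  # -> {'name': 'Sunny Side', 'code': 'EX-1201'}
```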
SYSTEMS AND METHODS FOR AN IMMERSIVE AUDIO EXPERIENCE
A computer-implemented method for creating an immersive audio experience. The method includes receiving a selection of an audio track via a user interface and receiving audio track metadata for the audio track. The method includes querying an audio database based on the track metadata and determining that audio data for the audio track is not stored on the audio database. The method includes analyzing the audio track to determine one or more audio track characteristics. The method includes generating vibe data based on the one or more audio track characteristics, wherein the vibe data includes time-coded metadata. The method includes, based on the vibe data, generating visualization instructions for one or more A/V devices in communication with a user computing device, and transmitting the generated visualization instructions and the audio track to the user computing device.
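One plausible shape for the vibe data and the visualization instructions derived from it is sketched below; the field names and the energy-to-mood rule are assumptions, not the patent's schema.

```python
# Illustrative: time-coded vibe data from track characteristics, translated into
# per-timestamp instructions for an A/V device.
def make_vibe_data(track_characteristics):
    # track_characteristics: list of (timestamp_sec, energy) per analysis window.
    return [{"t": t, "energy": e, "mood": "calm" if e < 0.5 else "intense"}
            for t, e in track_characteristics]

def visualization_instructions(vibe_data, device_id="living-room-lights"):
    # Turn each time-coded vibe entry into a device instruction.
    return [{"device": device_id, "t": v["t"],
             "brightness": int(v["energy"] * 100),
             "scene": v["mood"]} for v in vibe_data]

vibe = make_vibe_data([(0.0, 0.2), (5.0, 0.8), (10.0, 0.6)])
for instruction in visualization_instructions(vibe):
    print(instruction)
```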
MULTIMEDIA MUSIC CREATION USING VISUAL INPUT
A system for creating music using visual input. The system detects events and metrics (e.g., objects, gestures, etc.) in user input (e.g., video, audio, music data, touch, motion, etc.) and generates music and visual effects that are synchronized with the detected events and correspond to the detected metrics. To generate the music, the system selects parts from a library of stored music data and assigns each part to the detected events and metrics (e.g., using heuristics to match musical attributes to visual attributes in the user input). To generate the visual effects, the system applies rules (e.g., that map musical attributes to visual attributes) to translate the generated music data to visual effects. Because the visual effects are generated using music data that is generated using the detected events/metrics, both the generated music and the visual effects are synchronized with—and correspond to—the user input.
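Because the visual effects are derived from music data that is itself derived from the detected events, both stay tied to the same timeline; the sketch below shows that chaining with toy heuristics and rule tables, all of which are assumptions for illustration.

```python
# Illustrative: detected visual events/metrics -> music parts -> visual effects,
# all keyed to the same event timestamps so they remain synchronized.
MUSIC_LIBRARY = {"fast": "drum_loop_140bpm", "slow": "pad_70bpm"}
EFFECT_RULES = {"drum_loop_140bpm": "flash", "pad_70bpm": "slow_dissolve"}

def generate(events):
    # events: list of (timestamp_sec, metric), e.g. detected motion speed in 0-1.
    timeline = []
    for t, speed in events:
        part = MUSIC_LIBRARY["fast" if speed > 0.5 else "slow"]  # heuristic match
        effect = EFFECT_RULES[part]                              # rule: music -> visuals
        timeline.append({"t": t, "music_part": part, "effect": effect})
    return timeline

print(generate([(0.0, 0.8), (2.0, 0.3)]))
```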
Spoken words analyzer
A lyrics analyzer generates tags and explicitness indicators for a set of tracks. These tags may indicate the genre, mood, occasion, or other features of each track. The lyrics analyzer does so by generating an n-dimensional vector relating to a set of topics extracted from the lyrics and then using those vectors to train a classifier to determine whether each tag applies to each track. The lyrics analyzer may also generate playlists for a user based on a single seed song by comparing the lyrics vector or the lyrics and acoustics vectors of the seed song to other songs to select songs that closely match the seed song. Such a playlist generator may also take into account the tags generated for each track.
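The seed-song playlist idea reduces to comparing the seed track's topic vector against every other track's vector and keeping the closest matches. The sketch below uses cosine similarity over toy 3-dimensional vectors; the vector values and track IDs are invented for the example.

```python
# Illustrative playlist generation from a single seed song via lyrics-vector similarity.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def playlist(seed_id, vectors, size=2):
    seed = vectors[seed_id]
    others = [(tid, cosine(seed, vec)) for tid, vec in vectors.items() if tid != seed_id]
    others.sort(key=lambda pair: pair[1], reverse=True)
    return [tid for tid, _ in others[:size]]

vectors = {"seed": [0.9, 0.1, 0.0], "a": [0.8, 0.2, 0.1],
           "b": [0.0, 0.9, 0.4], "c": [0.7, 0.0, 0.2]}
print(playlist("seed", vectors))  # -> ['a', 'c'] (closest lyric-topic matches)
```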