Patent classifications
G10H2220/355
AUTOMATED GENERATION OF COORDINATED AUDIOVISUAL WORK BASED ON CONTENT CAPTURED FROM GEOGRAPHICALLY DISTRIBUTED PERFORMERS
Vocal audio of a user together with performance synchronized video is captured and coordinated with audiovisual contributions of other users to form composite duet-style or glee club-style or window-paned music video-style audiovisual performances. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for presentation, at any given time along a given performance timeline, performance synchronized video of one or more of the contributors. Selections are in accord with a visual progression that codes a sequence of visual layouts in correspondence with other coded aspects of a performance score such as pitch tracks, backing audio, lyrics, sections and/or vocal parts.
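The core selection mechanism in the abstract above — picking which contributor's performance-synchronized video to show at a given point on the timeline, driven by a coded visual progression — can be sketched roughly as follows. All names, times, and layout styles here are illustrative assumptions, not taken from the patent itself.

```python
from bisect import bisect_right

# Hypothetical "visual progression": a sorted list of (start_time_seconds,
# layout) entries, one per coded section of the performance score.
PROGRESSION = [
    (0.0,  {"style": "solo",        "performers": ["A"]}),
    (12.5, {"style": "duet",        "performers": ["A", "B"]}),
    (30.0, {"style": "window-pane", "performers": ["A", "B", "C", "D"]}),
]

def layout_at(t, progression=PROGRESSION):
    """Return the visual layout active at time t along the performance timeline."""
    starts = [start for start, _ in progression]
    i = bisect_right(starts, t) - 1
    return progression[max(i, 0)][1]
```

A renderer would call `layout_at` for each output frame time and composite the listed performers' video streams accordingly.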
SYSTEM AND METHOD FOR CREATING A PERSONALIZED USER ENVIRONMENT
A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
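The decision step described above — mapping sensor outputs plus a user profile to the sounds presented, and re-evaluating on every sensor change — might look like the following sketch. The sensor keys, thresholds, and sound categories are assumptions for illustration only.

```python
# Illustrative only: choose an ambient sound from simple sensor readings
# and a user profile. Thresholds and category names are assumed.
def choose_sound(sensors, profile):
    """Map sensor outputs and profile entries to a sound category."""
    if sensors.get("motion", 0.0) > 0.7:          # user is very active
        return profile.get("workout_sound", "upbeat")
    if sensors.get("ambient_noise_db", 0) > 70:   # loud environment
        return "masking-white-noise"
    return profile.get("relax_sound", "calm")

def update_environment(sensors, profile, current):
    """Re-evaluate on each sensor change; return a new sound only if it differs."""
    new = choose_sound(sensors, profile)
    return new if new != current else current
```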
METHODS, SYSTEMS, APPARATUSES, AND DEVICES FOR FACILITATING THE INTERACTIVE CREATION OF LIVE MUSIC BY MULTIPLE USERS
Disclosed herein is a method for facilitating the creation of music in real time by multiple users, in accordance with some embodiments. Accordingly, the method comprises receiving first music segment selections from a first user device, receiving second music segment selections from a second user device, transmitting the first music segment selections to the second user device, and transmitting the second music segment selections to the first user device. Further, each of the first user device and the second user device is configured for retrieving the first music segments, retrieving the second music segments, determining a universal time, synchronizing the second music segments with the first music segments to a common musical beat based on the universal time, mixing the second music segments with the first music segments based on the synchronizing, and generating a mixed music comprising the first music segments and the second music segments based on the mixing.
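The synchronization step above hinges on both devices agreeing on a universal time and snapping segment starts to a common beat grid. A minimal sketch of that alignment, assuming a shared clock origin and a fixed tempo (the 120 BPM default and function names are illustrative, not from the disclosure):

```python
# Beat alignment against a shared clock: a segment is scheduled to start
# on the next beat boundary relative to an agreed origin and BPM.
def next_beat_time(universal_now, bpm=120.0, origin=0.0):
    """Return the earliest beat-aligned timestamp at or after universal_now."""
    beat = 60.0 / bpm                      # seconds per beat
    elapsed = universal_now - origin
    beats_done = int(elapsed // beat)
    t = origin + beats_done * beat
    return t if abs(t - universal_now) < 1e-9 else t + beat

def schedule_mix(universal_now, bpm=120.0):
    """Both users' segments get the same start time, so the mix stays in sync."""
    start = next_beat_time(universal_now, bpm)
    return {"first_segments_start": start, "second_segments_start": start}
```

Because both devices compute the same `start` from the same universal time, neither needs to exchange further timing messages before mixing.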
SYSTEMS AND METHODS FOR VISUAL IMAGE AUDIO COMPOSITION BASED ON USER INPUT
The present invention relates to systems and methods for visual image audio composition. In particular, the present invention provides systems and methods for audio composition from a diversity of visual images and user determined sound database sources.
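One loose illustration of composing audio from a visual image: quantize pixel brightness onto notes drawn from a user-selected sound set. The pentatonic default and the row-wise mapping are assumptions for illustration, not the patent's method.

```python
# Map grayscale brightness (0-255) onto a user-chosen scale of MIDI pitches.
PENTATONIC = [60, 62, 65, 67, 69]   # C major pentatonic, an assumed default

def row_to_notes(pixels, scale=PENTATONIC):
    """Quantize each pixel's brightness to a note of the chosen scale."""
    step = 256 / len(scale)
    return [scale[min(int(p / step), len(scale) - 1)] for p in pixels]
```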
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
An information processing apparatus detachable from a human body includes a control unit that generates feedback information that provides feedback in accordance with information based on strength/weakness of a keystroke of a musical instrument.
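A possible shape for the feedback-generation step above is a mapping from keystroke strength to a haptic intensity for the body-worn device. The 0–127 range follows the MIDI note-on velocity convention; the three-band split is an assumption for illustration.

```python
# Hypothetical mapping from keystroke (MIDI note-on) velocity to feedback
# intensity for a body-worn haptic device.
def feedback_for_velocity(velocity):
    """Return (intensity in 0.0-1.0, label) for a keystroke of given strength."""
    v = max(0, min(127, velocity))
    intensity = v / 127.0
    if v < 40:
        return intensity, "soft"
    if v < 90:
        return intensity, "medium"
    return intensity, "strong"
```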
Stick Controller
A stick device that includes a base and a tip end, and a tip secured to the tip end of the stick, the stick tip including a sensor. The stick includes, at the base thereof, at least one control button, a communication element, and a processor in communication with the at least one control button, the stick tip, and the communication element. The processor is configured to receive a signal from the stick tip and to generate output to the communication element. The output so generated includes a signal that specifies a sound file selected by operation of the at least one control button.
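The processor's role described above — combining a tip-sensor signal with the button-selected sound file into one output for the communication element — can be sketched as below. The message fields and cycling-button behavior are illustrative assumptions, not the patent's protocol.

```python
# Sketch of the stick's processor: a tip-sensor hit plus the currently
# selected sound file become one output message for the radio link.
class StickController:
    def __init__(self, sound_files):
        self.sound_files = sound_files
        self.selected = 0

    def press_button(self):
        """Assume each button press cycles to the next sound file."""
        self.selected = (self.selected + 1) % len(self.sound_files)

    def on_tip_signal(self, strength):
        """Turn a tip-sensor reading into output for the communication element."""
        return {"file": self.sound_files[self.selected], "strength": strength}
```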
Pace-aware music player
An electronic device may comprise audio processing circuitry, pace tracking circuitry, and positioning circuitry. The pace tracking circuitry may be operable to select a tempo of songs for playback by the audio processing circuitry based on position data generated by the positioning circuitry, a desired tempo, and whether the songs are stored locally or network-accessible. The position data may indicate the pace of a runner during a preceding, determined time interval. The pace tracking circuitry may control the song selection and/or time stretching based on runner profile data stored in memory of the music device. The profile data may include the runner's distance-per-stride data. The electronic device may include one or more sensors operable to function as a pedometer. The pace tracking circuitry may update the distance-per-stride data based on the position data and based on data output by the one or more sensors.
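The pace-to-tempo idea above can be illustrated in two small steps: estimate cadence from the position-derived distance and the runner's distance-per-stride, then pick the song whose BPM is nearest that cadence. Function names and the example numbers are assumptions.

```python
# Cadence from position data: distance covered over an interval, divided by
# the runner's stride length, converted to strides per minute.
def strides_per_minute(distance_m, interval_s, stride_m):
    """Cadence implied by covering distance_m in interval_s at stride_m per stride."""
    return (distance_m / stride_m) / (interval_s / 60.0)

def pick_song(songs, target_bpm):
    """songs: list of (title, bpm); return the title whose BPM is nearest target."""
    return min(songs, key=lambda s: abs(s[1] - target_bpm))[0]
```

A player could also time-stretch the chosen song toward `target_bpm` instead of (or after) selecting it, matching the selection/time-stretching alternative in the abstract.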
METHOD FOR EMBEDDING AND EXECUTING AUDIO SEMANTICS
Aspects of the subject disclosure may include, for example, a device that includes a processing system having a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, where the operations include determining parameters for adapting audio in the content to the device, wherein the device renders the content, and wherein the parameters are based on semantic metadata embedded in the content, adapting the audio in the content based on the parameters, and rendering the content, as adapted by the parameters, to represent a semantic in the semantic metadata. Other embodiments are disclosed.
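The adaptation step above — turning semantic metadata embedded in the content into device-specific rendering parameters — might be sketched like this. The metadata keys, preset values, and the small-speaker rule are illustrative assumptions, not the disclosed format.

```python
# Hedged sketch: embedded semantic metadata names an intent, and the
# renderer maps it to audio parameters for the playback device.
SEMANTIC_PRESETS = {
    "dialogue": {"center_gain_db": 3, "dynamic_range": "compressed"},
    "action":   {"center_gain_db": 0, "dynamic_range": "full"},
}

def adapt_audio(metadata, device_profile):
    """Derive rendering parameters from semantic metadata plus the device."""
    params = dict(SEMANTIC_PRESETS.get(metadata.get("semantic"),
                                       SEMANTIC_PRESETS["dialogue"]))
    if device_profile.get("small_speaker"):
        params["dynamic_range"] = "compressed"   # protect small drivers
    return params
```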