Patent classifications
G10H2220/355
Drumstick controller
A percussion device includes a drumstick assembly. The drumstick assembly includes a drumstick having a base and a tip end, and a drumstick tip secured to the tip end of the drumstick, the drumstick tip including a sensor. The drumstick includes, at the base thereof, at least one control button, a communication element, and a processor in communication with the at least one control button, the drumstick tip, and the communication element. The processor is configured to receive a signal from the drumstick tip and to generate output to the communication element. The output so generated includes a signal that specifies a sound file selected by operation of the at least one control button.
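The button-selects-file, tip-triggers-output flow described in this abstract can be illustrated with a minimal sketch. All class, method, and file names here are hypothetical, not from the patent:

```python
class DrumstickController:
    """Illustrative model of the claimed processor logic: one control
    button cycles the selected sound file; a tip-sensor strike produces
    output naming that file for the communication element."""

    def __init__(self, sound_files):
        self.sound_files = list(sound_files)
        self.selected = 0  # index chosen via the control button

    def press_button(self):
        """Cycle to the next sound file (single control button)."""
        self.selected = (self.selected + 1) % len(self.sound_files)

    def on_tip_sensor(self, strike_velocity):
        """Handle a tip-sensor signal; return the output message."""
        return {
            "sound_file": self.sound_files[self.selected],
            "velocity": strike_velocity,
        }

stick = DrumstickController(["snare.wav", "tom.wav", "cymbal.wav"])
stick.press_button()              # select "tom.wav"
msg = stick.on_tip_sensor(0.8)
print(msg["sound_file"])          # tom.wav
```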
Systems and methods for visual image audio composition based on user input
The present invention relates to systems and methods for visual image audio composition. In particular, the present invention provides systems and methods for audio composition from a diversity of visual images and user-determined sound database sources.
System and method for creating a personalized user environment
A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
Method for embedding and executing audio semantics
Aspects of the subject disclosure may include, for example, a device that includes a processing system having a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, where the operations include determining parameters for adapting audio in the content to the device, wherein the device renders the content, and wherein the parameters are based on semantic metadata embedded in the content, adapting the audio in the content based on the parameters, and rendering the content, as adapted by the parameters, to represent a semantic in the semantic metadata. Other embodiments are disclosed.
Cadence determination and media content selection
Systems, devices, apparatuses, components, methods, and techniques for cadence determination and media content selection are provided. An example media-playback device comprises a media-output device that plays media content items, a cadence-acquiring device, and a cadence-based media content selection engine. The cadence-acquiring device includes an accelerometer and a cadence-determination engine configured to determine a cadence based on acceleration data captured by the accelerometer. The cadence-based media content selection engine is configured to identify a media content item based on the cadence determined by the cadence-determining engine and cause the media-output device to playback the identified media content item.
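The cadence-determination step (cadence from accelerometer data) and cadence-based selection step can be sketched as follows. This is a simplified threshold-crossing step counter under assumed data shapes, not the patent's actual engine; function names and the `bpm_cadence` field are illustrative:

```python
import math

def cadence_spm(samples, rate_hz, threshold=1.2):
    """Estimate cadence (steps per minute) from accelerometer samples.

    samples: list of (ax, ay, az) tuples in g. A step is counted on
    each upward crossing of the magnitude threshold; a real engine
    would filter and debounce the signal first.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    steps, above = 0, False
    for m in mags:
        if m > threshold and not above:
            steps += 1
            above = True
        elif m < threshold:
            above = False
    duration_min = len(samples) / rate_hz / 60.0
    return steps / duration_min if duration_min else 0.0

def pick_track(cadence, library):
    """Identify the media item whose target cadence is closest."""
    return min(library, key=lambda item: abs(item["bpm_cadence"] - cadence))

# synthetic 50 Hz signal with a step pulse every 20 samples (150 steps/min)
samples = [(1.5, 0, 0) if i % 20 < 2 else (0.9, 0, 0) for i in range(500)]
print(round(cadence_spm(samples, 50)))   # 150
```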
Textual display of aural information broadcast via frequency modulated signals
An electronic device includes a display screen and circuitry. The circuitry receives a first frequency modulated (FM) signal from a first FM radio transmitter, via a first FM radio channel. The first FM signal comprises a broadcast data signal that includes an audio segment of aural information of a performer-of-interest at a live event, text information associated with the audio segment, and synchronization information. The synchronization information is associated with the text information and the audio segment. The circuitry extracts the synchronization information from a plurality of data packets of the broadcast data signal. The circuitry extracts a portion of the text information from the extracted plurality of data packets of the broadcast data signal based on the extracted synchronization information. The circuitry controls display of the extracted portion of the text information on the display screen.
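The core extraction step, selecting which portion of the text information to display based on the synchronization information, can be sketched as below. The packet layout (`sync_s` timestamps paired with text fragments) is an assumption for illustration, not the broadcast format claimed in the patent:

```python
def text_for_time(packets, playback_s):
    """Return the text fragments whose sync timestamps have elapsed
    at the given audio playback position (seconds).

    packets: iterable of dicts {"sync_s": float, "text": str}, a
    hypothetical stand-in for the broadcast data packets.
    """
    due = [p for p in sorted(packets, key=lambda p: p["sync_s"])
           if p["sync_s"] <= playback_s]
    return " ".join(p["text"] for p in due)

packets = [
    {"sync_s": 0.0, "text": "Good evening"},
    {"sync_s": 2.5, "text": "and welcome"},
    {"sync_s": 5.0, "text": "to the show"},
]
print(text_for_time(packets, 3.0))   # Good evening and welcome
```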
Information processing method and image processing apparatus
There is provided an information processing method including analyzing a beat of input music, extracting a plurality of unit images from an input image, and generating, by a processor, editing information for switching the extracted unit images depending on the analyzed beat.
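The editing-information step, switching the extracted unit images in time with the analyzed beat, admits a simple sketch. The function and field names are illustrative assumptions, not the patent's terminology:

```python
def editing_info(beat_times_s, unit_images):
    """Map each analyzed beat time to the unit image displayed from
    that beat until the next, cycling through the extracted images."""
    return [
        {"start_s": t, "image": unit_images[i % len(unit_images)]}
        for i, t in enumerate(beat_times_s)
    ]

cuts = editing_info([0.0, 0.5, 1.0, 1.5], ["imgA", "imgB", "imgC"])
print(cuts[3])   # {'start_s': 1.5, 'image': 'imgA'}
```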
Apparatus and methods for cellular compositions
Broadly speaking, embodiments of the present invention provide systems, methods and apparatus for cellular composition, i.e., generating music in real time from cells (short musical motifs), where the cellular compositions are dependent on user data.
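One way a composition could be made dependent on user data is to weight cell selection by a user-derived score. This is a hedged sketch under assumed inputs (an "energy" value in [0, 1] and cells tagged `energetic`), not the patent's method:

```python
import random

def compose(cells, user_data, bars=4, seed=0):
    """Chain short motifs ("cells") into a phrase, weighting each
    cell choice by a user-data 'energy' score. Illustrative only."""
    rng = random.Random(seed)  # seeded for a reproducible sketch
    energy = user_data.get("energy", 0.5)
    # Higher energy favours cells tagged as energetic, and vice versa.
    weights = [1 + 2 * energy if c["energetic"] else 1 + 2 * (1 - energy)
               for c in cells]
    return [rng.choices(cells, weights=weights)[0]["notes"]
            for _ in range(bars)]

cells = [
    {"notes": [60, 62], "energetic": False},
    {"notes": [72, 74, 76], "energetic": True},
]
phrase = compose(cells, {"energy": 0.9}, bars=4)
print(len(phrase))   # 4
```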