Patent classifications
G10H2220/096
Information processing method, terminal device and computer storage medium
A method for processing information, a terminal device, and a computer storage medium are disclosed. The method includes: a first control instruction is acquired, and a first application is switched to a preset mode according to the first control instruction; a first triggering operation is acquired in the preset mode, at least two pieces of multimedia data are selected based on the first triggering operation, and a first playing interface is generated; when a second control instruction is acquired, the at least two pieces of multimedia data in the first playing interface are played sequentially; while first multimedia data among the at least two pieces of multimedia data is playing, first audio data is acquired; and the first multimedia data and the first audio data are synthesized into second multimedia data.
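The claimed flow (mode switch, selection of at least two clips, sequential playback, capture during the first clip, synthesis) might be sketched as follows. All function and data names are placeholders; the patent does not specify any implementation like this.

```python
# Hedged sketch of the claimed information-processing flow.
# Names ("enter_preset_mode", "play", clip strings) are illustrative assumptions.

def process_information(control_1, clips, control_2, capture_audio):
    # The first control instruction switches the application to a preset mode.
    if control_1 != "enter_preset_mode":
        return None
    # A first triggering operation selects at least two pieces of multimedia data.
    if len(clips) < 2:
        raise ValueError("at least two pieces of multimedia data are required")
    playlist = list(clips)          # the "first playing interface"
    synthesized = None
    if control_2 == "play":         # the second control instruction
        for index, clip in enumerate(playlist):
            if index == 0:
                # While the first piece plays, first audio data is acquired...
                audio = capture_audio()
                # ...and synthesized with it into second multimedia data.
                synthesized = {"video": clip, "audio": audio}
    return playlist, synthesized

playlist, mixed = process_information(
    "enter_preset_mode", ["clip_a", "clip_b"], "play", lambda: "mic_take_1")
```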
AUDIOVISUAL COLLABORATION SYSTEM AND METHOD WITH SEED/JOIN MECHANIC
User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work and mixing, to seed further contributions of one or more joiners, a user's captured media content for at least some portions of the audio (or audiovisual) work. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or other duet part, rapping, talking, clapping, recording video, adding a video clip from camera roll, etc. The resulting group performance, whether full-length or just a chunk, may be posted, livestreamed, or otherwise disseminated in a social network.
SYSTEM FOR GENERATING A SIGNAL BASED ON A TOUCH COMMAND AND ON AN OPTICAL COMMAND
A system for generating a signal includes: a touchpad including touch cells and a touch detection device for detecting the location and intensity of a pressure exerted on the touchpad; a first computer generating a first instruction based on the location and intensity of the pressure; an optical detection device for detecting a movement and/or a position, including optics for capturing images; a second computer for determining a motion parameter based on the captured images and for generating a second instruction based on the parameter; and a signal generator for producing a second signal based either on the first instruction (or on a first signal extracted from it), to which a special effect extracted from the second instruction is applied, or on the second instruction (or on a first signal extracted from it), to which a special effect extracted from the first instruction is applied.
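The two alternative signal paths (touch-derived base with an optically derived effect, or the reverse) might look like the following toy sketch, where the instruction dictionaries and the gain "effect" are assumptions, not the patent's design.

```python
# Illustrative two-path signal generator: a base signal comes from one
# instruction while a special effect extracted from the other is applied.

def apply_effect(signal, effect):
    # A toy "special effect": scale the amplitude by the effect gain.
    return [sample * effect["gain"] for sample in signal]

def generate(first_instr, second_instr, base="touch"):
    if base == "touch":
        # Base signal from the touch instruction; effect from the optical one.
        signal = first_instr["signal"]
        effect = second_instr["effect"]
    else:
        # Or the reverse: optical base signal, touch-derived effect.
        signal = second_instr["signal"]
        effect = first_instr["effect"]
    return apply_effect(signal, effect)

touch_instr = {"signal": [1.0, 0.5], "effect": {"gain": 0.5}}
optical_instr = {"signal": [0.2], "effect": {"gain": 2.0}}
out = generate(touch_instr, optical_instr)          # [2.0, 1.0]
```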
DIGITAL AUDIO SYSTEM
A portable digital audio system for a musician. The digital audio system includes an amplifier for processing an audio signal from a musical instrument or microphone electronically connected to the digital audio system and a speaker for playing a sound associated with the audio signal processed by the amplifier. The portable digital audio system also includes an audio control system providing operational control of the digital audio system and a primary housing for supporting the amplifier, the audio control system, and the speaker. Further, the digital audio system has a touch screen display in electronic communication with the audio control system and supported by the primary housing.
GESTURE-ENABLED INTERFACES, SYSTEMS, METHODS, AND APPLICATIONS FOR GENERATING DIGITAL MUSIC COMPOSITIONS
This disclosure is directed to systems, methods, apparatuses, and techniques that utilize enhanced gesture-based input mechanisms to facilitate rapid creation and editing of digital music compositions. These technologies can be specially designed and configured to optimize creation, editing, and/or sharing of digital music compositions on mobile electronic devices that include capacitive sensing mechanisms. The technologies include multi-gesture functionalities that enable users to view and access various notation customization features in a compact space of a mobile device display. Additionally, the technologies encompass improved data storage models that enable underlying notation data to be accessed in multiple operational modes, and permit frequencies or pitches of notations to be accurately generated and incorporated into audio signals.
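The accurate frequency generation mentioned above can be illustrated with the standard equal-temperament relation (A4 = 440 Hz, each semitone a factor of 2^(1/12)); the notation record shape below is an assumption, not the disclosure's storage model.

```python
# Standard equal-temperament pitch-to-frequency conversion.

def note_to_frequency(midi_note: int, a4: float = 440.0) -> float:
    # Each semitone multiplies frequency by 2 ** (1/12); MIDI note 69 is A4.
    return a4 * 2 ** ((midi_note - 69) / 12)

notation = {"pitch": 69, "duration": 1.0}     # illustrative stored notation datum
freq = note_to_frequency(notation["pitch"])   # 440.0 Hz for A4
```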
Dynamic Pedal and Display
A small (smartphone-sized or tablet-sized), foot-enabled, flat, tiltable, rotatable, dynamic touch screen pedal and controller that uses a tilt mechanism to toggle between and select different audio (or other) functions and effects that are displayed on the attached display. The tilt of the device as well as optional tapping sequences activate different functions in predetermined function locations. Specific audio effects, audio effect presets, loops, songs, and controller functions can each be assigned to different touch screen locations, pedal buttons, and/or tilt directions.
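The tilt/tap selection described above amounts to a lookup from gesture to assigned function; the assignments in this sketch are invented for illustration.

```python
# Toy dispatch table: tilt directions and tap sequences select functions
# assigned to predetermined locations. Assignments are illustrative only.

ASSIGNMENTS = {
    ("tilt", "left"):    "previous_effect_preset",
    ("tilt", "right"):   "next_effect_preset",
    ("tilt", "forward"): "toggle_effect_on_off",
    ("tap", "double"):   "start_stop_loop",
}

def dispatch(event_kind, direction_or_sequence):
    # Unrecognized gestures fall through to a no-op.
    return ASSIGNMENTS.get((event_kind, direction_or_sequence), "no_op")
```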
MULTIDIMENSIONAL GESTURES FOR MUSIC CREATION APPLICATIONS
A graphical user interface for music creation applications, such as score notation applications and digital audio workstations, includes multi-dimensional gestures. To enter a sound event into a musical project, a user uses an input device to select and drag a desired sound event in one or more dimensions. The relative position or rate of movement along a given dimension defines a value of a sound event parameter allocated to the given dimension. The sound event is entered into the project when the selection is released. The user inputs the gesture using a pointing device such as a mouse, stylus with a touch screen, or finger on a touch screen. Stylus dimensions mapped to sound event parameters may include, horizontal and vertical stylus tip positions, vertical and horizontal tilt of the stylus, and stylus tip pressure. Sound event parameters controlled by the gestures may include diatonic pitch, chromatic inflection, and duration.
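The allocation of stylus dimensions to sound-event parameters might be sketched as below. The specific allocations (vertical position to diatonic pitch, tilt to chromatic inflection, pressure to duration) and all value ranges are assumptions for illustration.

```python
# Hedged sketch of mapping multidimensional stylus input to a sound event.
from dataclasses import dataclass

@dataclass
class StylusSample:
    x: float         # horizontal tip position, 0..1
    y: float         # vertical tip position, 0..1
    tilt_x: float    # horizontal tilt, -1..1
    pressure: float  # tip pressure, 0..1

@dataclass
class SoundEvent:
    diatonic_pitch: int        # scale degree, 0..7
    chromatic_inflection: int  # -1 flat, 0 natural, +1 sharp
    duration: float            # in beats

def sample_to_event(s: StylusSample) -> SoundEvent:
    # Each input dimension is allocated to one sound-event parameter.
    pitch = int(round(s.y * 7))                                   # 8 diatonic degrees
    inflection = -1 if s.tilt_x < -0.33 else (1 if s.tilt_x > 0.33 else 0)
    duration = 0.25 + s.pressure * 3.75                           # quarter beat .. four beats
    return SoundEvent(pitch, inflection, duration)

event = sample_to_event(StylusSample(x=0.5, y=0.71, tilt_x=0.5, pressure=0.2))
```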
MULTIMEDIA MUSIC CREATION USING VISUAL INPUT
A system for creating music using visual input. The system detects events and metrics (e.g., objects, gestures, etc.) in user input (e.g., video, audio, music data, touch, motion, etc.) and generates music and visual effects that are synchronized with the detected events and correspond to the detected metrics. To generate the music, the system selects parts from a library of stored music data and assigns each part to the detected events and metrics (e.g., using heuristics to match musical attributes to visual attributes in the user input). To generate the visual effects, the system applies rules (e.g., that map musical attributes to visual attributes) to translate the generated music data to visual effects. Because the visual effects are generated using music data that is generated using the detected events/metrics, both the generated music and the visual effects are synchronized with—and correspond to—the user input.
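The event-to-part assignment might look like the following sketch, where a single heuristic pairs one visual attribute (intensity) with one musical attribute (loudness), and fixed rules map parts back to visual effects. The library contents, attribute names, and rules are all illustrative assumptions.

```python
# Hedged sketch: match detected visual events to stored music parts, then
# derive visual effects from the chosen music data so both stay synchronized.

MUSIC_LIBRARY = [
    {"part": "soft_pad",   "loudness": 0.2},
    {"part": "piano_riff", "loudness": 0.5},
    {"part": "drum_hit",   "loudness": 0.9},
]

def assign_part(event):
    # Heuristic: pick the stored part whose loudness best matches the
    # event's detected visual intensity.
    best = min(MUSIC_LIBRARY, key=lambda p: abs(p["loudness"] - event["intensity"]))
    return best["part"]

def visual_effect_for(part):
    # Rules mapping musical attributes to visual attributes keep the generated
    # visuals synchronized with the music (and therefore with the input).
    rules = {"soft_pad": "slow_glow", "piano_riff": "ripple", "drum_hit": "flash"}
    return rules[part]

events = [{"t": 0.0, "intensity": 0.85}, {"t": 1.2, "intensity": 0.3}]
timeline = [(e["t"], assign_part(e), visual_effect_for(assign_part(e)))
            for e in events]
# timeline == [(0.0, 'drum_hit', 'flash'), (1.2, 'soft_pad', 'slow_glow')]
```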
PICKUPS FOR ENHANCED PLAYABILITY OF MUSICAL INSTRUMENT, STRINGED INSTRUMENT WITH PICKUPS, AND METHOD FOR CONTROLLING PICKUPS
A pickup, a stringed instrument, and a pickup control method are disclosed. The pickup comprises pickup components (110), multi-touch screens (130), and processing components (140). The multi-touch screens (130) are connected to the processing components (140), and the pickup components (110) are connected to the processing components (140). The pickup components (110) are configured to pick up sound information emitted by a musical instrument. The multi-touch screen (130) is configured to display function application information selected from a group consisting of timbre adjustment function application information, tone adjustment function application information, and equalizer adjustment function application information, and to receive a function triggering instruction. The multi-touch screen thus displays the functional applications available through the pickup's capabilities. Control functions available to a user are expanded beyond control by a single knob as in the prior art, and the playing experience of a user on the instrument is enhanced.