Patent classification: G10H1/0058
Systems and methods for automatic mixing of media
A first device includes one or more processors and memory storing one or more programs configured to be executed by the one or more processors. The one or more programs include instructions for receiving, from a second device, audio mix information for a first audio item and receiving, from the second device, an indication that the first audio item is to be mixed with a second audio item distinct from the first audio item. In response to the indication, the one or more programs include instructions for transmitting to the second device an audio stream including the first audio item and the second audio item mixed in accordance with the audio mix information.
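The core exchange above is a mix request answered with a combined stream. A minimal sketch of the mixing step, assuming the audio mix information carries per-item gains; the field names and the `mix_audio` helper are hypothetical, not the patent's actual implementation:

```python
# Mix two equal-length sample sequences according to per-item gains
# taken from the received audio mix information (hypothetical schema).

def mix_audio(first, second, mix_info):
    """Return the sample-wise weighted sum of two audio items."""
    g1 = mix_info.get("first_gain", 1.0)
    g2 = mix_info.get("second_gain", 1.0)
    return [g1 * a + g2 * b for a, b in zip(first, second)]

# The stream the first device would transmit back to the second device.
stream = mix_audio([0.5, 0.5], [0.25, -0.25],
                   {"first_gain": 0.5, "second_gain": 1.0})
```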
DATA STRUCTURE FOR COMPUTER GRAPHICS, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM
The present invention is designed to allow easy synchronization of the movement of a computer graphics (CG) model with sound data. The data structure according to an embodiment of the present invention relates to a computer graphics (CG) model and includes first time-series information designating the coordinates of the components of the CG model on a per-beat basis; the first time-series information is used on a computer to process the CG model.
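The beat-indexed time series described above can be sketched as a plain dictionary; the field names and component names here are assumptions for illustration, not the patent's actual schema:

```python
# First time-series information: per-component coordinates indexed by beat,
# so CG motion can be stepped in lockstep with the music's beat grid.
cg_model_data = {
    "components": ["left_arm", "right_arm"],
    "beat_coordinates": {
        "left_arm":  [(0.0, 1.0, 0.0), (0.1, 1.1, 0.0)],   # beat 0, beat 1
        "right_arm": [(0.0, 1.0, 0.0), (-0.1, 1.1, 0.0)],
    },
}

def coords_at_beat(data, component, beat):
    """Look up a component's (x, y, z) coordinates for a given beat index."""
    return data["beat_coordinates"][component][beat]
```

Keying the coordinates to beats rather than wall-clock time is what lets the same motion data stay synchronized across tempo changes.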
Dynamic CFI using line-of-code behavior and relation models
Disclosed herein are techniques for analyzing control-flow integrity based on functional line-of-code behavior and relation models. Techniques include receiving data based on runtime operations of a controller; constructing a line-of-code behavior and relation model representing execution of functions on the controller based on the received data; constructing, based on the line-of-code behavior and relation model, a dynamic control flow integrity model configured for the controller to enforce in real-time; and deploying the dynamic control flow integrity model to the controller.
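One way to read the two construction steps is: learn allowed control transfers from observed runtime traces, then enforce them. A minimal sketch under that assumption; the trace format and function names are hypothetical stand-ins for the patent's model:

```python
# Build an allowed-transition model from runtime call traces, then use it
# as a dynamic CFI check: any caller->callee edge not seen during model
# construction is flagged as a violation.

def build_cfi_model(traces):
    """Collect the set of caller->callee edges seen across training runs."""
    allowed = set()
    for trace in traces:
        for caller, callee in zip(trace, trace[1:]):
            allowed.add((caller, callee))
    return allowed

def check_transition(model, caller, callee):
    """Return True if the observed control transfer matches the model."""
    return (caller, callee) in model

model = build_cfi_model([["main", "parse", "handle"], ["main", "handle"]])
```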
CROWD-SOURCED TECHNIQUE FOR PITCH TRACK GENERATION
Digital signal processing and machine learning techniques can be employed in a vocal capture and performance social network to computationally generate vocal pitch tracks from a collection of vocal performances captured against a common temporal baseline such as a backing track or an original performance by a popularizing artist. In this way, crowd-sourced pitch tracks may be generated and distributed for use in subsequent karaoke-style vocal audio captures or other applications. Large numbers of performances of a song can be used to generate a pitch track. Computationally determined pitch tracks from individual audio signal encodings of the crowd-sourced vocal performance set are aggregated and processed as an observation sequence of a trained Hidden Markov Model (HMM) or other statistical model to produce an output pitch track.
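The aggregation step can be sketched simply: many per-frame pitch estimates, time-aligned to the common baseline, combined into one track. Here a per-frame median stands in for the HMM decoding the abstract describes, so this is an illustrative simplification rather than the patented method:

```python
# Combine per-frame pitch estimates (Hz) from many time-aligned vocal
# performances into a single crowd-sourced pitch track. The median makes
# the aggregate robust to individual off-pitch performances.
from statistics import median

def aggregate_pitch_tracks(tracks):
    """tracks: list of equal-length per-frame pitch lists, one per singer."""
    return [median(frame) for frame in zip(*tracks)]

pitch_track = aggregate_pitch_tracks([
    [220.0, 222.0],
    [219.0, 221.0],
    [440.0, 220.0],   # one singer an octave sharp on frame 0
])
```

The octave outlier in the third performance is voted down by the other two, which is the intuition behind aggregating a large performance set.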
SIGNAL PROCESSING APPARATUS, SIGNAL PROCESSING METHOD, PROGRAM, SIGNAL PROCESSING SYSTEM, AND ENCODING APPARATUS
Provided is a signal processing apparatus having: a sound source separation unit configured to perform sound source separation on a mixed sound signal obtained by mixing a plurality of sound source signals; a sound source type determination unit configured to determine a type of a predetermined sound source signal obtained by the sound source separation; and an output destination control unit configured to output the predetermined sound source signal to a corresponding output device on the basis of a determination result of the sound source type determination unit.
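The output-destination control described above amounts to routing each separated source to a device keyed by its determined type. A minimal sketch, with a hypothetical routing table and a pluggable classifier standing in for the sound source type determination unit:

```python
# Route separated source signals to output devices based on the type
# determined for each signal. Unknown types fall back to a default device.

def route_sources(separated, classify, routing, default="main_speaker"):
    """Group separated signals by the output device for their type."""
    out = {}
    for signal in separated:
        source_type = classify(signal)
        device = routing.get(source_type, default)
        out.setdefault(device, []).append(signal)
    return out

routing = {"vocal": "center_channel", "drums": "subwoofer"}
```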
Parameter Inference Method, Parameter Inference System, and Parameter Inference Program
A parameter inference method realized by a computer, includes obtaining target performance information indicating a performance of music using an electronic musical instrument; inferring assist information from the target performance information with use of a trained inference model generated through machine learning, the assist information being related to setting of a parameter of the electronic musical instrument that conforms to a tendency of the performance; and outputting the inferred assist information related to the setting of the parameter.
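The three claimed steps (obtain performance information, infer assist information with a trained model, output it) can be sketched end to end. The "model" below is a toy rule keyed to average key velocity, a hypothetical stand-in for the trained inference model; all names are illustrative:

```python
# Infer a suggested instrument-parameter setting from a performance
# tendency. A real system would use a model trained on many performances.

def infer_assist_info(note_velocities, model):
    """note_velocities: MIDI-style velocities (0-127) from the performance."""
    avg_velocity = sum(note_velocities) / len(note_velocities)
    return model(avg_velocity)

def toy_model(avg_velocity):
    # Toy tendency rule: softer playing -> suggest higher touch sensitivity.
    return {"touch_sensitivity": "high" if avg_velocity < 64 else "normal"}

assist = infer_assist_info([40, 50, 60], toy_model)
```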
Sampler for an Intelligent Cable or Cable Adapter
A specialized audio/instrument cable with built-in digital signal processing that adds digital audio sampling, allowing the user to trigger synthesized sounds or virtual musical instruments from within the cable itself to affect the sound generated from an instrument or microphone, such that the cable is the only connection needed between the instrument or microphone and an output device. Using voice recognition, the specialized cable can select an audio effects chain algorithm and/or a sampled-sound algorithm extrapolated from a musical digital audio fingerprint (MDAF) created from a desired musician's instrument, altering the sound of the input instrument's audio.
Computer-Implemented Method, System, and Non-Transitory Computer-Readable Storage Medium for Inferring Evaluation of Performance Information
A computer-implemented method includes obtaining a trained model trained to store a relationship between first performance information and evaluation information. The first performance information includes a plurality of performance units. The evaluation information includes a plurality of pieces of evaluation information respectively associated with the plurality of performance units. The method also includes obtaining second performance information including a plurality of performance units, and processing the second performance information using the trained model to infer an evaluation of each performance unit of the plurality of performance units.
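The inference step maps each performance unit to an evaluation via the trained model. A minimal sketch, where a toy rule on a per-unit timing score stands in for the trained model; the score metric and threshold are assumptions for illustration:

```python
# Infer one evaluation per performance unit using a stand-in "trained
# model" (here: a rule on the fraction of notes played on time).

def evaluate_units(performance_units, model):
    """Return one evaluation label per performance unit."""
    return [model(unit) for unit in performance_units]

def toy_eval_model(on_time_fraction):
    return "good" if on_time_fraction >= 0.8 else "needs work"

evaluations = evaluate_units([0.95, 0.6, 0.85], toy_eval_model)
```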
Visualization of code execution through line-of-code behavior and relation models
Disclosed herein are techniques for visualizing and configuring controller function sequences. Techniques include identifying at least one executable code segment associated with a controller; analyzing the at least one executable code segment to determine at least one function and at least one functional relationship associated with the at least one code segment; constructing a software functionality line-of-code behavior and relation model visually depicting the determined at least one function and at least one functional relationship; displaying the software functionality line-of-code behavior and relation model at a user interface; receiving a first input at the interface; in response to the received first input, animating the line-of-code behavior and relation model to visually depict execution of the at least one executable code segment on the controller; receiving a second input at the interface; and in response to the received second input, animating an update to the line-of-code behavior and relation model.
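The model-construction step (functions plus functional relationships) is essentially a graph over functions. A minimal sketch, assuming the relationships are caller/callee edges already extracted from the code segment; real static analysis and the display/animation steps are omitted:

```python
# Build an adjacency-map relation model from (caller, callee) pairs;
# this is the structure a UI layer would render and animate.

def build_relation_model(call_pairs):
    """call_pairs: iterable of (caller, callee) functional relationships."""
    model = {}
    for caller, callee in call_pairs:
        model.setdefault(caller, set()).add(callee)
        model.setdefault(callee, set())  # ensure leaf functions appear too
    return model

model = build_relation_model([("main", "init"), ("main", "loop"),
                              ("loop", "step")])
```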
Sound experience generator
Audio data is generated for a vehicle audio system using a portable computing device. Vehicle parameter data is received at the portable computing device from a vehicle. Sound parameter data is generated from the vehicle parameter data. Audio data is generated using a synthesizer based on the sound parameter data. The generated audio data is transmitted to the vehicle audio system from the portable computing device.
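The pipeline above is vehicle parameters to sound parameters to synthesized audio. A minimal sketch, assuming engine RPM is the vehicle parameter and a single sine oscillator is the synthesizer; the mapping and all field names are illustrative, not the patent's design:

```python
# Vehicle parameter -> sound parameter -> audio sample pipeline.
import math

def sound_params_from_vehicle(vehicle):
    """Map engine RPM (hypothetical field) to an oscillator setting."""
    return {"frequency": vehicle["rpm"] / 60.0 * 4, "amplitude": 0.8}

def synthesize(params, n_samples=8, sample_rate=8000):
    """Render a sine tone from the sound parameters."""
    f, a = params["frequency"], params["amplitude"]
    return [a * math.sin(2 * math.pi * f * i / sample_rate)
            for i in range(n_samples)]

# Audio the portable device would transmit to the vehicle audio system.
audio = synthesize(sound_params_from_vehicle({"rpm": 3000}))
```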