G10H1/366

ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM

An electronic musical instrument includes an operation unit that receives a user performance and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
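The claimed pipeline (chord pitches → per-pitch waveforms shaped by model-derived acoustic features → mixed polyphonic output) can be sketched as follows. This is a minimal illustration, not the patented implementation: the `acoustic_envelope` function is a hypothetical stand-in for the trained model's acoustic feature output, and all names are assumptions.

```python
import math

def chord_pitches_to_freqs(midi_notes):
    # Convert MIDI note numbers (e.g., a C major chord [60, 64, 67])
    # to frequencies in Hz using equal temperament, A4 = 440 Hz.
    return [440.0 * 2 ** ((n - 69) / 12) for n in midi_notes]

def acoustic_envelope(num_samples):
    # Hypothetical stand-in for the trained model's acoustic feature data:
    # here, just a linear-decay amplitude envelope.
    return [1.0 - i / num_samples for i in range(num_samples)]

def synthesize_polyphonic(midi_notes, duration=0.5, fs=8000):
    n = int(duration * fs)
    env = acoustic_envelope(n)
    voices = []
    for f in chord_pitches_to_freqs(midi_notes):
        # One waveform per chord pitch, each shaped by the shared features.
        voices.append([env[i] * math.sin(2 * math.pi * f * i / fs)
                       for i in range(n)])
    # Mix the per-pitch synthesized waveforms into one polyphonic output.
    return [sum(v[i] for v in voices) / len(voices) for i in range(n)]
```

In practice the acoustic features would be spectral (e.g., mel-cepstral) rather than a simple envelope, but the per-pitch synthesis and mixing structure is the same.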

SYSTEM AND METHOD FOR GENERATING HARMONIOUS COLOR SETS FROM MUSICAL INTERVAL DATA
20230096679 · 2023-03-30

Systems and methods are disclosed for generating color sets based on musical concepts of pitch intervals and harmony. Color sets are derived via a music-to-hue process which analyzes musical pitch data associated with musical input to determine pitch intervals included in the music. Pitch interval angles associated with the pitch intervals are applied to a tuned hue index to identify hue notes ordered within the index which are separated by a hue interval angle similar to the pitch interval angle associated with the analyzed pitch data. The systems and methods provide for the creation of color sets which are analogous to musical chords in that they include multiple hue notes selected based on hue interval angles derived from musical interval angles associated with the received musical input.
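The interval-angle mapping can be sketched directly: if the 12 semitones of an octave are spread around the 360-degree hue circle, each semitone corresponds to 30 degrees. A minimal sketch under that assumption (the patent's tuned hue index may use a different spacing):

```python
def interval_to_hue_angle(semitones):
    # Map a pitch interval (in semitones) onto the hue circle:
    # 12 semitones (one octave) correspond to a full 360-degree turn.
    return (semitones * 360.0 / 12.0) % 360.0

def chord_to_color_set(root_hue, chord_intervals):
    # Select hue notes separated by angles analogous to the chord's
    # pitch intervals, starting from a chosen root hue.
    return [(root_hue + interval_to_hue_angle(iv)) % 360.0
            for iv in chord_intervals]
```

For example, a major triad (intervals 0, 4, 7 semitones) rooted at hue 0 yields hue notes at 0, 120, and 210 degrees.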

Audio-visual effects system for augmentation of captured performance based on content thereof

Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.

ELECTRONIC INSTRUMENT, METHOD FOR CONTROLLING ELECTRONIC INSTRUMENT, AND STORAGE MEDIUM
20230033464 · 2023-02-02

An electronic instrument includes at least one processor, and the at least one processor is configured to determine, based on previously acquired fingering time information relating to a time required for a fingering operation performed by a performer, a delay set time for confirming a new fingering operation in response to the new fingering operation.
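One way to realize this is to derive the delay set time from the performer's recorded fingering durations, clamped to a usable range. This is a hedged sketch of the idea, with all names and the clamp bounds chosen for illustration:

```python
def confirm_delay(fingering_times_ms, floor_ms=20.0, ceiling_ms=150.0):
    # Derive a delay set time from previously acquired fingering times:
    # wait roughly as long as the performer's typical fingering operation
    # before confirming a new one, clamped to [floor_ms, ceiling_ms].
    if not fingering_times_ms:
        return ceiling_ms  # no history yet: wait conservatively
    # Median-like pick of a typical fingering duration.
    typical = sorted(fingering_times_ms)[len(fingering_times_ms) // 2]
    return min(ceiling_ms, max(floor_ms, typical))
```

A fast player with short fingering times thus gets a short confirmation delay, while a slower player is given more time before a chord shape is treated as final.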

Method and system for implementing a modal processor
11488574 · 2022-11-01

The implementation of modal processors, which involves the parallel combination of resonant filters, may be costly for applications such as artificial reverberation that can require thousands of modes. In one embodiment, the input signal is decomposed into a plurality of subbands, the outputs of which are downsampled. In each downsampled band, resonant filters are applied at the downsampled sampling rate, and their output is upsampled and filtered to form the band output. In these and other embodiments, a feature of the responses of the mode filters has been optimized to minimize an aspect of a residual error after a point in time.
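The cost saving comes from running each low-frequency mode at a reduced rate. A minimal sketch, assuming a standard two-pole resonator and a crude zero-order-hold in place of the patent's proper interpolation and band filters:

```python
import math

def resonator(x, freq_hz, fs, r=0.99):
    # Two-pole resonant filter: y[n] = x[n] + 2r*cos(w)*y[n-1] - r^2*y[n-2]
    w = 2 * math.pi * freq_hz / fs
    a1, a2 = 2 * r * math.cos(w), -r * r
    y, y1, y2 = [], 0.0, 0.0
    for s in x:
        y0 = s + a1 * y1 + a2 * y2
        y.append(y0)
        y1, y2 = y0, y1
    return y

def modal_band(x, freq_hz, fs, m):
    # Run one mode at the downsampled rate fs/m, then upsample back to fs.
    # Zero-order hold stands in for the upsampling/band filters; a real
    # implementation would band-limit before decimating and after holding.
    down = x[::m]
    yd = resonator(down, freq_hz, fs / m)
    up = []
    for v in yd:
        up.extend([v] * m)
    return up[:len(x)]
```

Each resonator update costs the same regardless of sample rate, so processing a 100 Hz mode at fs/4 instead of fs cuts its per-second cost by roughly a factor of four.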

INFORMATION PROCESSING DEVICE, METHOD AND RECORDING MEDIA
20230090773 · 2023-03-23

An information processing device includes: an input interface; and at least one processor configured to perform the following: selecting an instrument, a musical tone of which is to be digitally synthesized based on corresponding musical tone data, via the input interface; acquiring a parameter value that has been set for the selected instrument; generating a random number based on a random function; and changing a pitch of the musical tone of the selected instrument based on the generated random number and the acquired parameter value.
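The random pitch change can be sketched as a per-note detune whose depth is the per-instrument parameter value. A minimal sketch, with the cent-based scaling as an assumption (the patent does not specify the units):

```python
import random

def humanized_pitch(base_pitch_hz, depth_cents, rng=random):
    # Shift the pitch by a random offset within +/- depth_cents, where
    # depth_cents is the parameter value set for the selected instrument.
    offset = (rng.random() * 2.0 - 1.0) * depth_cents
    # 1200 cents per octave: convert the cent offset to a frequency ratio.
    return base_pitch_hz * 2.0 ** (offset / 1200.0)
```

With depth set to 0 the instrument plays exactly in tune; a small depth (a few cents) gives each synthesized note a subtly different pitch, mimicking an acoustic instrument.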

Method and a system for modular circuit bending and modding of electric and electronic music instruments, audio amplifiers and sound equipment
20230089612 · 2023-03-23

A method and system for shaping the tone by modular circuit bending and modding of electric and electronic music instruments, audio amplifiers and sound equipment. The system consists of a modding structure with a plurality of inserts and input and output connections for compatible modules, which reconfigures the electronic circuit without physically changing component values and without adding or removing electronic parts of the audio equipment by soldering. A plurality of tone modules shape the sound by intervening in the signal path, and a plurality of further modules shape the tone by other sound-design methods outside the signal path. For the modular system, an external device can be used as a multiplier; it consists of a plurality of inserts and input and output connections for compatible modules, including a plurality of combined series/parallel circuits of freely selectable inputs and outputs for the direct swap of a plurality of modding modules. In addition, switching options are available to control bypass, advanced series or parallel connections, and A/B testing. The modular modding system can be used as a standalone device or as an upgrade to existing equipment.

GENERATING TONALLY COMPATIBLE, SYNCHRONIZED NEURAL BEATS FOR DIGITAL AUDIO FILES
20230128812 · 2023-04-27

Methods and systems for improved neural beat generation for digital audio files are provided. In one embodiment, a method is provided that includes receiving a digital audio file and a beat frequency for a neural beat. Chromagram features may be extracted from the digital audio file and used to identify dominant pitch classes at a plurality of timestamps within the digital audio file. A plurality of carrier frequencies at different time periods within the digital audio file may be selected based on the dominant pitch classes. A neural beat may be synthesized for the digital audio file based on the beat frequency and the plurality of carrier frequencies. The neural beat may be stored and/or combined with the digital audio file to generate a combined audio track, which may be stored.
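The synthesis step can be sketched as two tones whose frequencies differ by the beat frequency, with the carrier chosen from the dominant pitch class so the result is tonally compatible with the source audio. A minimal sketch under that assumption (the octave placement of the carrier and all names are illustrative):

```python
import math

# One candidate carrier per pitch class, placed in the C4 octave.
PITCH_CLASS_FREQS = [261.63 * 2 ** (pc / 12) for pc in range(12)]

def synthesize_beat(dominant_pitch_class, beat_hz, duration=1.0, fs=8000):
    # Pick the carrier frequency from the dominant pitch class, then render
    # two tones whose frequencies differ by the requested beat frequency.
    carrier = PITCH_CLASS_FREQS[dominant_pitch_class % 12]
    n = int(duration * fs)
    left = [math.sin(2 * math.pi * carrier * i / fs) for i in range(n)]
    right = [math.sin(2 * math.pi * (carrier + beat_hz) * i / fs)
             for i in range(n)]
    return left, right
```

A full implementation would re-select the carrier whenever the chromagram's dominant pitch class changes, crossfading between segments.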

Method for processing audio and electronic device

Provided is a method for processing audio, including: acquiring an accompaniment audio signal and a voice signal of a current to-be-processed musical composition; determining a target reverberation intensity parameter value of the acquired accompaniment audio signal, wherein the target reverberation intensity parameter value is configured to indicate a rhythm speed, an accompaniment type, and a performance score of a singer of the current to-be-processed musical composition; and reverberating the acquired voice signal based on the target reverberation intensity parameter value.
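Applying a reverberation intensity parameter to a voice signal can be sketched as a wet/dry mix over a simple feedback-delay (comb) reverb. This is an illustrative stand-in, not the patented reverberator; the intensity value would come from the rhythm speed, accompaniment type, and performance score described above:

```python
def reverberate(voice, intensity, delay=0.05, fs=8000, feedback=0.5):
    # Simple feedback-delay (comb) reverb; `intensity` in [0, 1] sets the
    # wet/dry balance, standing in for the target reverberation intensity
    # parameter value determined from the accompaniment.
    d = max(1, int(delay * fs))
    wet = list(voice)
    for i in range(d, len(wet)):
        wet[i] += feedback * wet[i - d]
    return [(1.0 - intensity) * x + intensity * w
            for x, w in zip(voice, wet)]
```

An intensity of 0 passes the voice through dry; higher values mix in progressively more of the delayed, recirculating signal.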

COORDINATING AND MIXING AUDIOVISUAL CONTENT CAPTURED FROM GEOGRAPHICALLY DISTRIBUTED PERFORMERS
20230112247 · 2023-04-13

Audiovisual performances, including vocal music, are captured and coordinated with those of other users in ways that create compelling user experiences. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for visually prominent presentation performance synchronized video of one or more of the contributors. Prominence of particular performance synchronized video may be based, at least in part, on computationally-defined audio features extracted from (or computed over) captured vocal audio. Over the course of a coordinated audiovisual performance timeline, these computationally-defined audio features are selective for performance synchronized video of one or more of the contributing vocalists.
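Selecting which contributor's performance-synchronized video to feature can be sketched with one concrete computationally-defined audio feature. A minimal sketch, assuming RMS energy as the feature (the patent leaves the feature open, and all names are illustrative):

```python
def rms(frame):
    # Root-mean-square energy of an audio frame.
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def prominent_vocalist(vocal_tracks, start, end):
    # Pick the contributor whose captured vocal audio is most energetic
    # over the window [start, end), for visually prominent presentation.
    scores = [rms(track[start:end]) for track in vocal_tracks]
    return scores.index(max(scores))
```

Over a coordinated performance timeline, this selection would be re-evaluated per segment, switching the featured video as different vocalists dominate.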