G10H2250/621

MUSICAL SOUND SIGNAL GENERATION DEVICE, MUSICAL SOUND SIGNAL GENERATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
20230103520 · 2023-04-06

A musical sound signal generation device responds to a change in a designated tone pitch by setting either the connected zeroth delay unit or the connected second delay unit as a new first delay unit, setting the delay unit in the stage preceding the new first delay unit as a new zeroth delay unit, and setting the delay unit in the stage following it as a new second delay unit. The device keeps either the connected zeroth delay unit or the connected second delay unit continuously connected to a fractional delay block, and connects at least one of the new zeroth delay unit and the new second delay unit to at least one fractional delay block other than the fractional delay block connected to the new first delay unit.
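
The delay-unit switching above rests on fractional delay, where a pitch-dependent delay of D = N + f samples is split into an integer tap selection and a fractional part. A minimal sketch (not the patent's circuit; the circular-buffer design and linear interpolation are assumptions) of reading a fractional delay between two integer taps:

```python
# Hypothetical fractional delay line: the integer part of the delay picks
# which stored samples (delay units) to tap, and the fractional part is
# realized by linear interpolation between the two adjacent taps.
class FractionalDelay:
    def __init__(self, size=64):
        self.buf = [0.0] * size
        self.write = 0

    def push(self, x):
        # Store the newest input sample.
        self.buf[self.write] = x
        self.write = (self.write + 1) % len(self.buf)

    def read(self, delay):
        n = int(delay)      # integer part: selects the delay-unit taps
        f = delay - n       # fractional part: interpolation weight
        i0 = (self.write - 1 - n) % len(self.buf)
        i1 = (self.write - 2 - n) % len(self.buf)
        return (1.0 - f) * self.buf[i0] + f * self.buf[i1]
```

Changing the pitch changes `delay`, which reassigns which stored samples feed the interpolation, analogous to connecting different delay units to the fractional delay block.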

Audio waveform display using mapping function
11183163 · 2021-11-23

The described technology is generally directed towards providing a visible waveform representation of an audio signal, by processing the audio signal with a polynomial (e.g., cubic) mapping function. Coefficients of the polynomial mapping function are predetermined based on constraints (e.g., slope information and desired range of the resultant curve), and whether the plotted audio waveform corresponds to sound field quantities or power quantities. Once the visible representation of the reshaped audio waveform is displayed, audio and/or video editing operations can be performed, e.g., by time-aligning other audio or video with the reshaped audio waveform, and/or modifying the reshaped audio waveform to change the underlying audio data.
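
As a sketch of how coefficients of a cubic mapping can be predetermined from constraints: assuming the constraints are endpoint values p(0) = 0, p(1) = 1 and given endpoint slopes (the patent's exact constraint set is not specified here), the four cubic coefficients follow in closed form:

```python
# Hypothetical cubic mapping p(x) = a*x^3 + b*x^2 + c*x + d whose
# coefficients are predetermined from the constraints p(0)=0, p(1)=1,
# p'(0)=s0, p'(1)=s1 (slope information and desired output range).
def cubic_mapping(s0, s1):
    a = s0 + s1 - 2.0
    b = 3.0 - 2.0 * s0 - s1
    c = s0                      # d = 0 from p(0) = 0
    def p(x):
        return ((a * x + b) * x + c) * x   # Horner evaluation
    return p
```

A steep initial slope (s0 > 1) expands small amplitudes visually, which is the usual motivation for reshaping a plotted waveform this way.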

Sound source for electronic percussion instrument and sound production control method thereof

A sound source for an electronic percussion instrument and a sound production control method thereof are provided. An electronic drum sound source device performs a weighting operation on four pieces of waveform information (pitch envelope, amplitude envelope, start phase) stored in a waveform table, according to the striking conditions (hit position, velocity) derived from the output of a strike sensor on an electronic drum pad. The device creates a sine wave on the basis of the weighted waveform information and generates musical sounds (percussion sounds) by synthesizing the sine wave with the waveforms of the weighted residual waveform data. Because the sine wave is not synthesized with any waveform data other than the residual waveform data, consistent musical sounds free of phase interference can be reproduced.
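
The weighting operation can be pictured as a blend of stored table entries by strike-dependent weights, with the result driving one sine plus a residual. A minimal per-sample sketch (function names, the weight normalization, and the two-entry table are illustrative assumptions, not the patent's implementation):

```python
import math

def weighted(values, weights):
    # Blend stored waveform-table entries by the beating-condition weights.
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def synth_sample(t, freqs, amps, phases, residuals, weights):
    # One weighted sine (from pitch, amplitude, and start-phase info)
    # mixed only with the weighted residual waveform sample.
    f = weighted(freqs, weights)
    a = weighted(amps, weights)
    ph = weighted(phases, weights)
    r = weighted(residuals, weights)
    return a * math.sin(2.0 * math.pi * f * t + ph) + r
```

Keeping the sine separate from everything except the residual is what the abstract credits for avoiding phase interference between strikes.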

AUDIO WAVEFORM DISPLAY USING MAPPING FUNCTION
20200152163 · 2020-05-14

The described technology is generally directed towards providing a visible waveform representation of an audio signal, by processing the audio signal with a polynomial (e.g., cubic) mapping function. Coefficients of the polynomial mapping function are predetermined based on constraints (e.g., slope information and desired range of the resultant curve), and whether the plotted audio waveform corresponds to sound field quantities or power quantities. Once the visible representation of the reshaped audio waveform is displayed, audio and/or video editing operations can be performed, e.g., by time-aligning other audio or video with the reshaped audio waveform, and/or modifying the reshaped audio waveform to change the underlying audio data.

Audio waveform display using mapping function
10565973 · 2020-02-18

The described technology is generally directed towards providing a visible waveform representation of an audio signal, by processing the audio signal with a polynomial (e.g., cubic) mapping function. Coefficients of the polynomial mapping function are predetermined based on constraints (e.g., slope information and desired range of the resultant curve), and whether the plotted audio waveform corresponds to sound field quantities or power quantities. Once the visible representation of the reshaped audio waveform is displayed, audio and/or video editing operations can be performed, e.g., by time-aligning other audio or video with the reshaped audio waveform, and/or modifying the reshaped audio waveform to change the underlying audio data.

AUDIO WAVEFORM DISPLAY USING MAPPING FUNCTION
20190378487 · 2019-12-12

The described technology is generally directed towards providing a visible waveform representation of an audio signal, by processing the audio signal with a polynomial (e.g., cubic) mapping function. Coefficients of the polynomial mapping function are predetermined based on constraints (e.g., slope information and desired range of the resultant curve), and whether the plotted audio waveform corresponds to sound field quantities or power quantities. Once the visible representation of the reshaped audio waveform is displayed, audio and/or video editing operations can be performed, e.g., by time-aligning other audio or video with the reshaped audio waveform, and/or modifying the reshaped audio waveform to change the underlying audio data.

Generating music with deep neural networks

The present disclosure provides systems and methods that include or otherwise leverage a machine-learned neural synthesizer model. Unlike a traditional synthesizer, which generates audio from hand-designed components such as oscillators and wavetables, the neural synthesizer model can use deep neural networks to generate sounds at the level of individual samples. Learning directly from data, the neural synthesizer model can provide intuitive control over timbre and dynamics and enable exploration of new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer. As one example, the neural synthesizer model can be a neural synthesis autoencoder that includes an encoder model, which learns embeddings descriptive of musical characteristics, and an autoregressive decoder model that is conditioned on the embedding to generate, one audio sample at a time, musical waveforms that have those characteristics.
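
The autoregressive, embedding-conditioned generation described above has a simple structural skeleton, shown here with a placeholder `decoder_step` standing in for the learned network (the real model would be a trained deep network, not this stub):

```python
def generate(decoder_step, embedding, n_samples, seed=0.0):
    # Hypothetical autoregressive loop: each new audio sample is predicted
    # from the samples generated so far, conditioned on the timbre embedding.
    samples = [seed]
    for _ in range(n_samples - 1):
        samples.append(decoder_step(samples, embedding))
    return samples
```

The key point the sketch illustrates is that the embedding enters every step, so a single latent vector shapes the timbre of the entire generated waveform.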

Automated performance technology using audio waveform data
09613635 · 2017-04-04

In order to play back waveform data at a variable performance tempo using waveform data that complies with a desired reference tempo, the present invention performs timeline-expansion/contraction control on the waveform data to be played back, according to the relationship between the performance tempo and the reference tempo. The present invention also determines, from that same relationship, whether to limit playback of the waveform data. When playback is to be limited, the present invention either stops playback of the waveform data, or reduces the resolution of the playback processing and continues playback. Playback is stopped when, for example, the relationship between the performance tempo and the reference tempo is one in which playing the waveform data at the performance tempo would cause a processing delay or a deterioration of sound quality. As a result, it is possible to preemptively prevent a system freeze and to avoid problems such as music being generated at a slower tempo than the desired performance tempo, sound cutting out intermittently due to noise, or a significant reduction in sound quality.
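
The limit decision above can be sketched as a policy on the time-stretch ratio between the two tempos. The thresholds and three-way outcome below are illustrative assumptions, not values from the patent:

```python
def playback_decision(performance_tempo, reference_tempo,
                      max_stretch=2.0, reduce_threshold=1.5):
    # Hypothetical policy on the timeline-expansion/contraction ratio:
    # beyond reduce_threshold, drop playback-processing resolution but
    # continue; beyond max_stretch, stop playback to avoid a processing
    # delay or quality collapse. Both fast and slow extremes are limited.
    ratio = performance_tempo / reference_tempo
    if ratio > max_stretch or ratio < 1.0 / max_stretch:
        return "stop"
    if ratio > reduce_threshold or ratio < 1.0 / reduce_threshold:
        return "reduce_resolution"
    return "normal"
```

For example, playing 70 BPM reference material at 120 BPM (ratio about 1.7) would continue at reduced resolution, while 50 BPM material at 120 BPM (ratio 2.4) would be stopped.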

Wavetable Waveform Iterative Interpolation System for Digital Synthesizers
20260057867 · 2026-02-26

A wavetable waveform interpolation system for digital synthesizers utilizes a progressively iterative method, by which an initial anchor waveform continuously fades into a final anchor waveform. Multiple interpolation points are positioned in progressive succession between the initial and final anchor positions in the wavetable. Each interpolation point has a normalized final position increment between it and the final anchor position, as well as a normalized initial position decrement between it and the initial anchor position. These provide the basis for weighting factors that determine the relative contributions of the initial and final waveforms at each interpolation point.
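
The normalized increment/decrement pairs described above map directly to crossfade weights. A minimal sketch, assuming evenly spaced interpolation points (the patent does not require even spacing; the function names are illustrative):

```python
def interpolation_weights(k, n_points):
    # Normalized position of interpolation point k (0-indexed) between the
    # initial anchor (position 0) and final anchor (position 1). The
    # normalized initial decrement and final increment serve directly as
    # the weighting factors for the two anchor waveforms.
    pos = (k + 1) / (n_points + 1)        # progressive position in (0, 1)
    return 1.0 - pos, pos                 # (initial weight, final weight)

def interpolate_waveform(initial, final, k, n_points):
    # Weighted sample-by-sample blend of the two anchor waveforms.
    wi, wf = interpolation_weights(k, n_points)
    return [wi * a + wf * b for a, b in zip(initial, final)]
```

Stepping k from 0 to n_points - 1 produces the progressive fade of the initial anchor waveform into the final anchor waveform.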