G10H2210/201

SOUND SIGNAL SYNTHESIS METHOD, GENERATIVE MODEL TRAINING METHOD, SOUND SIGNAL SYNTHESIS SYSTEM, AND RECORDING MEDIUM
20210366453 · 2021-11-25 ·

A method generates first pitch data indicating a pitch of a first sound signal to be synthesized; and uses a generative model to estimate output data indicative of the first sound signal based on the generated first pitch data. The generative model has been trained to learn a relationship between second pitch data indicating a pitch of a second sound signal and the second sound signal. The first pitch data includes a first plurality of pieces of pitch notation data corresponding to pitch names, and is generated by setting, from among the first plurality of pieces of pitch notation data, a first piece of pitch notation data that corresponds to the pitch of the first sound signal as a hot value based on a difference between a reference pitch of a pitch name corresponding to the first piece of pitch notation data and the pitch of the first sound signal.
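The "hot value" encoding described above can be sketched as follows. This is a minimal illustration, assuming 12 pitch-name classes, A4 = 440 Hz tuning, and one possible reading of the abstract in which the hot entry is 1.0 plus the semitone deviation from the nearest pitch name's reference pitch; the patent itself does not fix these details.

```python
import numpy as np

PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
A4_HZ = 440.0  # reference tuning (assumption)

def encode_pitch(freq_hz: float) -> np.ndarray:
    """Encode a frequency as pitch notation data: a vector with one
    'hot' entry at the nearest pitch name, whose value reflects the
    difference between that name's reference pitch and the actual pitch."""
    # MIDI-style semitone index relative to A4 (= index 69)
    semitones = 12.0 * np.log2(freq_hz / A4_HZ) + 69.0
    nearest = int(round(semitones))
    deviation = semitones - nearest          # in [-0.5, 0.5] semitones
    vec = np.zeros(len(PITCH_NAMES))
    # hot value: 1.0 at the exact reference pitch, offset by the deviation
    vec[nearest % 12] = 1.0 + deviation
    return vec
```

For an exact A4 (440 Hz) the deviation is zero, so only the "A" entry is set and its value is exactly 1.0.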

SOUND SIGNAL SYNTHESIS METHOD, NEURAL NETWORK TRAINING METHOD, AND SOUND SYNTHESIZER
20210350783 · 2021-11-11 ·

A sound signal synthesis method includes inputting control data representing conditions of a sound signal into a neural network, and thereby estimating first data representing a deterministic component of the sound signal and second data representing a stochastic component of the sound signal, and combining the deterministic component represented by the first data and the stochastic component represented by the second data, and thereby generating the sound signal. The neural network has learned a relationship between control data that represents conditions of a sound signal of a reference signal, a deterministic component of the sound signal of the reference signal, and a stochastic component of the sound signal of the reference signal.
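The deterministic/stochastic split can be illustrated with a toy stand-in for the neural network: a sinusoid as the deterministic component and scaled noise as the stochastic component, combined by addition. The control parameters (pitch, noise gain), sample rate, and the additive combination are illustrative assumptions; the patent's network would estimate both components from learned data.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_components(control: np.ndarray):
    """Stand-in for the trained neural network: map control data to a
    deterministic (harmonic) component and a stochastic (noise)
    component. A real system estimates both from learned parameters."""
    f0, noise_gain = control                 # e.g. pitch and breathiness
    t = np.arange(16000) / 16000.0           # one second at 16 kHz
    deterministic = np.sin(2 * np.pi * f0 * t)
    stochastic = noise_gain * rng.standard_normal(t.size)
    return deterministic, stochastic

def synthesize(control: np.ndarray) -> np.ndarray:
    det, sto = estimate_components(control)
    return det + sto   # combine the two components into the sound signal

signal = synthesize(np.array([220.0, 0.01]))
```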

Sound enhancing accessory for a musical instrument
11217215 · 2022-01-04 ·

An accessory for modifying sound output of a musical instrument. The body of the instrument has a soundboard. The accessory includes a sound sensor, an actuator, a fastener, and a controller. The sound sensor engages the body and senses vibration of the body representing the sound output of the musical instrument. The actuator engages the soundboard and deforms the soundboard of the musical instrument so as to modify the sound output of the musical instrument. The sound sensor is preferably arranged distally to the actuator. The fastener engages the accessory to the musical instrument, to locate the actuator against the soundboard of the musical instrument. The controller is connected to the actuator and the sound sensor for receiving and analysing the sound output sensed by the sound sensor, and controlling the actuator in dependence on the sound output sensed by the sound sensor.

AUDIO TRANSLATOR
20230282200 · 2023-09-07 ·

Audio translation system includes a feature extractor and a style transfer machine learning model. The feature extractor generates for each of a plurality of source voice files one or more source voice parameters encoded as a collection of source feature vectors, and generates for each of a plurality of target voice files one or more target voice parameters encoded as a collection of target feature vectors. The style transfer machine learning model is trained on the collection of source feature vectors for the plurality of source voice files and the collection of target feature vectors for the plurality of target voice files to generate a style-transformed feature vector.
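The extractor-plus-transfer pipeline can be sketched in miniature. The frame-wise log-spectrum features and the linear least-squares map below are placeholder choices (real systems would use e.g. MFCCs or learned embeddings, and a trained neural model); only the overall structure follows the abstract.

```python
import numpy as np

def extract_features(voice: np.ndarray, frame: int = 256) -> np.ndarray:
    """Toy feature extractor: one feature vector per frame, here the
    log magnitude spectrum, standing in for the 'voice parameters'."""
    n = len(voice) // frame
    frames = voice[: n * frame].reshape(n, frame)
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

class StyleTransfer:
    """Placeholder for the style transfer model: a single linear map
    fitted from source feature vectors to target feature vectors."""
    def fit(self, src: np.ndarray, tgt: np.ndarray):
        n = min(len(src), len(tgt))
        self.W, *_ = np.linalg.lstsq(src[:n], tgt[:n], rcond=None)
        return self

    def transform(self, src: np.ndarray) -> np.ndarray:
        return src @ self.W   # style-transformed feature vectors
```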

SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND PROGRAM
20220293073 · 2022-09-15 ·

The present technology relates to a signal processing device, a signal processing method, and a program that enable intuitive control of sound.

The signal processing device includes an acquisition unit that acquires a sensing value indicating a motion of a predetermined portion of a user's body or a motion of an instrument, and a control unit that performs non-linear acoustic processing on an acoustic signal according to the sensing value. The present technology can be applied to an acoustic reproduction system.
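One way such sensor-driven non-linear processing might look is tanh waveshaping whose drive follows the sensing value. The normalized sensing range, the 1–10 drive mapping, and the choice of tanh are illustrative assumptions, not the patent's specific processing.

```python
import numpy as np

def nonlinear_process(signal: np.ndarray, sensing_value: float) -> np.ndarray:
    """Apply non-linear acoustic processing whose strength follows a
    motion sensing value (0.0 = no motion, 1.0 = full motion).
    Here: tanh waveshaping, one of many possible non-linear effects."""
    drive = 1.0 + 9.0 * np.clip(sensing_value, 0.0, 1.0)  # motion -> drive
    return np.tanh(drive * signal) / np.tanh(drive)       # normalized to +/-1
```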

AUDIO TRANSLATOR
20220293086 · 2022-09-15 ·

Audio translation system includes a feature extractor and a style transfer machine learning model. The feature extractor generates for each of a plurality of source voice files one or more source voice parameters encoded as a collection of source feature vectors, and generates for each of a plurality of target voice files one or more target voice parameters encoded as a collection of target feature vectors. The style transfer machine learning model is trained on the collection of source feature vectors for the plurality of source voice files and the collection of target feature vectors for the plurality of target voice files to generate a style-transformed feature vector.

VIBRATO ARM AND SYSTEM
20210201857 · 2021-07-01 ·

A manual vibrato control device, system and processing arrangement are disclosed. A manual vibrato includes a rotatable shaft, a raised cam section on the shaft, and first and second biased collars received on the shaft on either side of the cam section, the bias of the first collar being rotationally opposite to the bias of the second collar such that as the shaft rotates in one direction, it receives a return force from the first collar but does not rotate the second collar, and vice versa.

Also disclosed are processing techniques that take rotational data from rotational sensors on the shaft, preferably Hall-effect sensors, and generate pitch-change instructions for a pitch modification device. The mapping is user-controllable to produce desired effects and performance.
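The angle-to-pitch mapping can be sketched as a conversion from sensed rotation to a standard 14-bit MIDI pitch-bend value. The linear mapping and the 30-degree full-scale angle are assumptions; the abstract only says the mapping is user-controllable.

```python
def rotation_to_pitch_bend(angle_deg: float,
                           max_angle_deg: float = 30.0) -> int:
    """Map a vibrato-arm rotation angle (e.g. from a Hall-effect
    sensor) to a 14-bit MIDI pitch-bend value. Linear mapping shown;
    a user-controllable curve could replace the ratio calculation."""
    ratio = max(-1.0, min(1.0, angle_deg / max_angle_deg))
    # 8192 is the MIDI pitch-bend centre (no bend); +/-8191 spans the range
    return int(round(8192 + ratio * 8191))
```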

Vibrato arm and system

A manual vibrato control device, system and processing arrangement are disclosed. A manual vibrato includes a rotatable shaft, a raised cam section on the shaft, and first and second biased collars received on the shaft on either side of the cam section, the bias of the first collar being rotationally opposite to the bias of the second collar such that as the shaft rotates in one direction, it receives a return force from the first collar but does not rotate the second collar, and vice versa. Also disclosed are processing techniques that take rotational data from rotational sensors on the shaft, preferably Hall-effect sensors, and generate pitch-change instructions for a pitch modification device. The mapping is user-controllable to produce desired effects and performance.

Electronic musical instrument, electronic musical instrument control method, and storage medium

An electronic musical instrument includes an operation unit that receives a user performance, and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
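The per-pitch synthesis and polyphonic mixing step can be illustrated as below, with the trained model's acoustic feature data reduced, for the sketch, to a simple amplitude envelope applied to each chord-pitch waveform. The C-major chord, sample rate, and mean-mixing are illustrative assumptions.

```python
import numpy as np

def synthesize_polyphonic(chord_waveforms, acoustic_envelope):
    """Shape each chord-pitch waveform with the same acoustic feature
    envelope (standing in for the trained model's acoustic feature
    data), then mix the results into one polyphonic singing voice."""
    shaped = [w * acoustic_envelope for w in chord_waveforms]
    return np.mean(shaped, axis=0)

t = np.arange(8000) / 8000.0
chord = [np.sin(2 * np.pi * f * t) for f in (261.63, 329.63, 392.0)]  # C major
envelope = np.linspace(1.0, 0.0, t.size)   # simple decay as the 'feature'
voice = synthesize_polyphonic(chord, envelope)
```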

ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM

An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.