G10H2250/015

Electronic musical instrument, electronic musical instrument control method, and storage medium

An electronic musical instrument in one aspect of the disclosure includes: a plurality of operation elements to be performed by a user for respectively specifying different pitches; a memory that stores musical piece data that includes data of a vocal part, the vocal part including at least a first note with a first pitch and an associated first lyric part that are to be played at a first timing; and at least one processor, wherein if the user does not operate any of the plurality of operation elements in accordance with the first timing, the at least one processor digitally synthesizes a default first singing voice that includes the first lyric part and that has the first pitch in accordance with data of the first note stored in the memory, and causes the digitally synthesized default first singing voice to be audibly output at the first timing.
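
As a rough illustration of the fallback behavior described in this abstract, the sketch below (all names hypothetical, not taken from the patent) falls back to the stored pitch and lyric when no key press arrives at the note's timing:

```python
# Minimal sketch, assuming a Note record read from the stored musical piece data.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Note:
    pitch: int       # MIDI note number of the stored vocal-part note
    lyric: str       # lyric part associated with the note
    timing_ms: int   # scheduled onset of the note

def voice_at_timing(note: Note, pressed_pitch: Optional[int]) -> Tuple[int, str]:
    """Return the (pitch, lyric) to synthesize and output at the note's timing.

    If no operation element was operated in accordance with the timing,
    the default pitch stored in memory is used, so the lyric is still sung.
    """
    if pressed_pitch is None:
        return note.pitch, note.lyric   # default first singing voice
    return pressed_pitch, note.lyric    # user-specified pitch

first_note = Note(pitch=60, lyric="la", timing_ms=0)
print(voice_at_timing(first_note, pressed_pitch=None))  # (60, 'la')
```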

Timing prediction method and timing prediction device
10699685 · 2020-06-30

A timing prediction method includes updating a state variable relating to the timing of a next sound generation event in a performance, using a plurality of observation values relating to timings of sound generation in the performance, and outputting the updated state variable.
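
The abstract does not say which estimator updates the state variable; as one hedged reading, the sketch below uses a small Kalman-style filter over a state of [next onset time, beat period], updated from observed onset times (all constants and the model itself are assumptions):

```python
import numpy as np

def update_state(state, P, observed_onset, obs_var=1e-3):
    """One prediction/update step for the timing of the next sound generation event."""
    F = np.array([[1.0, 1.0],   # next onset = previous onset + period
                  [0.0, 1.0]])  # period assumed locally constant
    H = np.array([[1.0, 0.0]])  # only onset times are observed
    Q = np.diag([1e-4, 1e-5])   # assumed process noise

    # Predict forward one event
    state = F @ state
    P = F @ P @ F.T + Q
    # Correct with the new observation
    y = observed_onset - (H @ state)[0]
    S = (H @ P @ H.T)[0, 0] + obs_var
    K = (P @ H.T / S).ravel()
    state = state + K * y
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P
    return state, P

state = np.array([0.0, 0.5])   # [last onset (s), beat period (s)]
P = np.eye(2) * 0.01
for onset in [0.52, 1.01, 1.53]:
    state, P = update_state(state, P, onset)
print("predicted next onset:", state[0] + state[1])
```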

Timing control method and timing control device
10650794 · 2020-05-12

A timing control method includes generating a timing designation signal according to one of a first generation mode, for generating the timing designation signal which designates, based on a detection result of a first event in a performance of a music piece, a timing of a second event in the performance, and a second generation mode, for generating the timing designation signal without using the detection result; and outputting a command signal for commanding an execution of the second event according to one of a first output mode, for outputting the command signal in accordance with the timing designated by the timing designation signal, and a second output mode, for outputting the command signal in accordance with a timing determined based on the music piece.
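
A minimal sketch of the two generation modes and two output modes; the mode names, the half-beat offset, and the timing units are assumptions made for illustration:

```python
def designate_timing(detected_first_event, score_timing, generation_mode):
    """Produce a timing-designation signal for the second event."""
    if generation_mode == "follow":          # first mode: use the detection result
        return detected_first_event + 0.5    # e.g. half a beat after the detected event
    return score_timing                      # second mode: ignore the detection result

def command_timing(designated, score_timing, output_mode):
    """Produce the command signal that triggers execution of the second event."""
    if output_mode == "designated":          # first mode: follow the designation signal
        return designated
    return score_timing                      # second mode: timing fixed by the music piece

t = designate_timing(detected_first_event=10.2, score_timing=10.0, generation_mode="follow")
print(command_timing(t, score_timing=10.0, output_mode="designated"))  # 10.7
```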

Control method and control device
10636399 · 2020-04-28

A control method includes receiving a detection result relating to a first event in a performance; changing, in the middle of the performance, a following degree to which a second event in the performance follows the first event; and determining an operating mode of the second event based on the following degree.
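
One hedged way to read the "following degree" is as a coupling coefficient in [0, 1] that can be changed mid-performance; the thresholds and mode names below are assumptions:

```python
def operating_mode(following_degree: float) -> str:
    """Map the current following degree to an operating mode of the second event."""
    if following_degree > 0.7:
        return "synchronize"      # second event tightly follows the first event
    if following_degree > 0.3:
        return "loosely_follow"
    return "independent"          # second event proceeds on its own timing

print(operating_mode(0.9))  # synchronize
```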

Electronic musical instrument, electronic musical instrument control method, and storage medium

An electronic musical instrument includes: a memory that stores a machine-learning trained acoustic model mimicking the voice of a singer; and at least one processor. When a vocoder mode is on, prescribed lyric data and pitch data corresponding to a user operation of an operation element of the musical instrument are input to the trained acoustic model, and inferred singing voice data that infers a singing voice of the singer is synthesized on the basis of acoustic feature data output by the trained acoustic model and on the basis of instrument sound waveform data synthesized in accordance with the pitch data corresponding to the user operation of the operation element. When the vocoder mode is off, the inferred singing voice data is synthesized on the basis of the acoustic feature data without using the instrument sound waveform data.
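
The sketch below illustrates only the vocoder-mode switch, with stand-in components: a placeholder "acoustic model" returning a spectral envelope, a sawtooth in place of the instrument sound waveform, and simple spectral shaping in place of real vocoder synthesis. None of these details come from the patent; the point is how the on/off branch selects between a pitched-instrument carrier and feature-only synthesis.

```python
import numpy as np

SR = 16000  # assumed sample rate

def acoustic_model(lyric: str, pitch: int) -> np.ndarray:
    """Stand-in for the trained acoustic model: returns a 257-bin spectral envelope."""
    rng = np.random.default_rng(hash(lyric) % 2**32)
    return np.abs(rng.normal(size=257))

def instrument_waveform(pitch: int, dur: float = 0.25) -> np.ndarray:
    """Instrument sound waveform at the key's pitch (a plain sawtooth here)."""
    f0 = 440.0 * 2 ** ((pitch - 69) / 12)
    t = np.arange(int(SR * dur)) / SR
    return 2 * (t * f0 % 1.0) - 1.0

def synthesize(lyric: str, pitch: int, vocoder_on: bool) -> np.ndarray:
    envelope = acoustic_model(lyric, pitch)
    if vocoder_on:
        # Vocoder mode on: shape the instrument carrier with the acoustic features.
        spec = np.fft.rfft(instrument_waveform(pitch), n=512)
        return np.fft.irfft(spec * envelope)
    # Vocoder mode off: synthesize from the acoustic features alone (noise excitation).
    noise = np.fft.rfft(np.random.default_rng(0).normal(size=512))
    return np.fft.irfft(noise * envelope)

print(synthesize("la", 60, vocoder_on=True).shape)  # (512,)
```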

CONTROLLER FOR REAL-TIME VISUAL DISPLAY OF MUSIC
20200105292 · 2020-04-02

A controller for real-time visual display of music includes a music analysis module and a display control module. The music analysis module receives an audio input, determines human-perceived musical structures and human-felt affect and emotion as a function of the audio input, and outputs a signal corresponding to the determined structure, affect, and emotion. The display control module is operatively coupled to the music analysis module, receives the signal, and controls a visual display as a function thereof to express the determined musical structure, affect, and emotion in a visual manner.
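
A toy two-stage sketch of the module pipeline; the feature set and the mapping to display parameters are assumptions, and the patent's analysis covers far richer structure, affect, and emotion:

```python
def analyze(audio_frame):
    """Stand-in music analysis: report a crude frame energy and a coarse affect label."""
    energy = sum(x * x for x in audio_frame) / max(len(audio_frame), 1)
    return {"affect": "calm" if energy < 0.1 else "energetic", "energy": energy}

def control_display(analysis):
    """Map the analysis signal to visual display parameters (color, intensity)."""
    color = (80, 80, 200) if analysis["affect"] == "calm" else (230, 60, 40)
    return {"color": color, "intensity": min(1.0, analysis["energy"] * 10)}

print(control_display(analyze([0.01, -0.02, 0.015])))
```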

APPARATUS FOR ATTRIBUTE PATH GENERATION
20240028952 · 2024-01-25

In an aspect, an apparatus for attribute path generation is presented. The apparatus includes at least a processor and a memory communicatively connected to the at least a processor. The memory contains instructions configuring the at least a processor to receive user data, identify a plurality of attributes of the user data, compare an attribute to an improvement threshold, determine an objective as a function of the comparison, and create an attribute path including the objective. The attribute path may be displayed to a user by way of a metamap.
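
A minimal sketch of the described flow, with attribute names, threshold values, and the objective wording all hypothetical:

```python
def attribute_path(user_attributes: dict, thresholds: dict) -> list:
    """Compare each attribute to its improvement threshold and collect objectives."""
    path = []
    for name, value in user_attributes.items():
        threshold = thresholds.get(name, 0.5)
        if value < threshold:  # attribute falls short of the improvement threshold
            objective = f"improve {name} from {value} toward {threshold}"
            path.append({"attribute": name, "objective": objective})
    return path

print(attribute_path({"communication": 0.4, "leadership": 0.8},
                     {"communication": 0.6, "leadership": 0.7}))
```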

Music data processing method and program
10586520 · 2020-03-10

A music data processing method includes estimating a performance position within a musical piece, and updating a tempo designated by music data representing a performance content of the musical piece such that the tempo trajectory corresponds to both the transition in the degree of dispersion of a performance tempo, which is obtained by estimating the performance position across a plurality of performances of the musical piece, and the transition in the degree of dispersion of a reference tempo. The performance tempo is preferentially reflected in portions of the musical piece in which the degree of dispersion of the performance tempo falls below that of the reference tempo, and the reference tempo is preferentially reflected in portions in which the degree of dispersion of the performance tempo exceeds that of the reference tempo.
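
The abstract states only that whichever tempo has the lower degree of dispersion at a given position is "preferentially reflected"; the soft weighting in the sketch below is an assumed way to realize that preference:

```python
import numpy as np

def blended_tempo(perf_tempo, perf_disp, ref_tempo, ref_disp):
    """Blend performance and reference tempo trajectories, favoring at each
    position whichever tempo shows the smaller degree of dispersion."""
    perf_tempo, perf_disp = np.asarray(perf_tempo, float), np.asarray(perf_disp, float)
    ref_tempo, ref_disp = np.asarray(ref_tempo, float), np.asarray(ref_disp, float)
    w = ref_disp / (perf_disp + ref_disp)   # weight on the performance tempo
    return w * perf_tempo + (1 - w) * ref_tempo

# Performance tempo dominates where its dispersion is low (first point),
# reference tempo dominates where performance dispersion is high (second point).
print(blended_tempo([120, 118], [1.0, 8.0], [116, 116], [4.0, 2.0]))
```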

Systems and methods for detecting musical features in audio content
10546566 · 2020-01-28

Systems and methods for identifying musical features in audio content are presented. Audio content information may be obtained from a digital audio file, the information providing a duration for playback of the audio content and a representation of sound frequencies associated with various moments throughout the duration of the audio content. Sound frequencies associated with one or more of the moments throughout the duration of the audio content may be identified, and characteristics or patterns of the identified sound frequencies may be recognized as being indicative of one or more musical features (e.g., parts, phrases, hits, bars, onbeats, beats, quavers, semiquavers, etc.). Some implementations of the present technology define display objects for display on a digital display, the display objects provided with visual features in an arrangement that distinguishes one musical feature from another across the duration of the audio content.
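
As a stand-in for the frequency-pattern recognition step, the sketch below detects rough onsets from spectral-flux peaks; the actual system identifies a much richer set of features (parts, phrases, hits, bars, onbeats, beats, quavers, semiquavers), and every constant here is an assumption:

```python
import numpy as np

def onset_times(samples, sr=22050, frame=1024, hop=512):
    """Return rough onset times (seconds) from peaks in spectral flux."""
    frames = [samples[i:i + frame] for i in range(0, len(samples) - frame, hop)]
    mags = [np.abs(np.fft.rfft(f * np.hanning(frame))) for f in frames]
    flux = np.array([np.maximum(mags[i] - mags[i - 1], 0).sum()
                     for i in range(1, len(mags))])
    threshold = flux.mean() + flux.std()   # crude peak-picking threshold
    return [(i + 1) * hop / sr for i, v in enumerate(flux) if v > threshold]

# Example: a click at 0.5 s inside one second of silence
sr = 22050
x = np.zeros(sr)
x[sr // 2:sr // 2 + 64] = 1.0
print(onset_times(x, sr))
```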

Generative composition with defined form atom heuristics
11887568 · 2024-01-30

The disclosed generative composition system produces a composition from a briefing that describes a musical journey in emotional descriptions. The composition is assembled from concatenated, interchangeable Form Atoms (FAs), selectable by tags that align emotional descriptions with respective compositional heuristics. Each FA has self-contained constructional properties representative of a historical musical corpus. These heuristics support generation of chords, in chord schemes of musical tonics, that achieve an equivalent form function. Each FA also includes chord spacer heuristics that temporally space generated chords across a defined musical window, and a chord list in a local tonic defining branching structures that give options for generating different chords. A progression descriptor, in combination with a form function, musically expresses a question, an answer, or a statement, with each FA creating a meta-map of a chord scheme for a musical section. Musical transitions between FAs reflect groupings in which FAs have similar tags but different constructional properties.
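
A hedged data-structure sketch of a Form Atom and tag-based selection; the field names, tag vocabulary, and example chord lists are illustrative assumptions, not the patent's own schema:

```python
from dataclasses import dataclass, field

@dataclass
class FormAtom:
    tags: set          # emotional descriptions the FA aligns with
    chord_list: list   # chord options in a local tonic (branching structure)
    progression: str   # "question", "answer" or "statement"
    spacing_beats: list = field(default_factory=lambda: [4, 4, 4, 4])  # chord spacer heuristic

def select_atoms(briefing_tags: set, library: list) -> list:
    """Pick FAs whose tags overlap the briefing's emotional descriptions."""
    return [fa for fa in library if fa.tags & briefing_tags]

library = [FormAtom({"calm", "opening"}, ["I", "vi", "IV", "V"], "statement"),
           FormAtom({"tense"}, ["ii", "V", "I"], "question")]
print(select_atoms({"calm"}, library))
```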