Patent classifications
G10H2210/066
APPARATUS AND METHOD FOR A PHONATION SYSTEM
A system and method for presenting a phonation system game that includes a graphical animation and a phonation system song to a child using an electronic screen-based device is provided. One embodiment detects sounds of the singing child; identifies a song word sung by the child; identifies a song word presented by the phonation system song, wherein the song word presented by the phonation system is the same as the song word sung by the child; identifies an attribute of interest in the song word sung by the child; retrieves a predefined song word attribute associated with the song word presented by the phonation system from a coded event database, wherein the predefined song word attribute is associated with the song word sung by the child; and compares the predefined song word attribute with the identified attribute of interest in the song word sung by the child.
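The compare step described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the database contents, the attribute name `vowel_duration`, and the tolerance are all hypothetical.

```python
# Illustrative sketch (not the patented implementation): compare an
# attribute of interest detected in a sung word against a predefined
# attribute retrieved from a coded-event database for the same word.

# Hypothetical coded-event database: song word -> predefined attribute
# (here, a target vowel duration in seconds).
CODED_EVENT_DB = {
    "twinkle": {"vowel_duration": 0.40},
    "star": {"vowel_duration": 0.55},
}

def compare_word_attribute(song_word, sung_attribute, tolerance=0.1):
    """Return True if the attribute measured from the sung word is
    within tolerance of the predefined attribute for that word."""
    predefined = CODED_EVENT_DB[song_word]["vowel_duration"]
    return abs(sung_attribute - predefined) <= tolerance

print(compare_word_attribute("star", 0.50))   # within 0.1 of 0.55 -> True
print(compare_word_attribute("twinkle", 0.60))  # 0.2 from 0.40 -> False
```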
AI Tool to Improve Music Performance
Disclosed embodiments include systems and methods to teach and analyze a student's progress in learning to play a musical instrument, sing, or perform other musical endeavors. Embodiments include the production of an AI score or music AI score, which may be an extraction of performance parameters such as a student's tone, speed, rhythm, pitch, loudness, and other metrics. A music AI score may also track changes in measured performance while playing a piece of music. Such changes within a piece of music, or over time in performing various pieces of music, can be valuable in a student's self-assessment or in tailoring a music teacher's approach to the particular student. A music or singing AI score can help a student select and prioritize their music repertoire, focus performance efforts, optimize their practice schedule, improve their appreciation for music, and improve the overall quality of the music performance and the instructor relationship.
METHOD AND INSTALLATION FOR PROCESSING A SEQUENCE OF SIGNALS FOR POLYPHONIC NOTE RECOGNITION
This is a method and installation in which a time-domain digital audio signal is split into a plurality of narrow-band time-domain digital audio signals confined to specific frequency bands, short-term segments of which are temporarily stored in memory. The method comprises the use of signal processing algorithms for extracting multiple signal features from said short-term segments in a fixed sequence or upon request from a decision-making algorithm. Said decision-making algorithm makes tentative or final decisions about the type of occupancy of frequency bands based on the extracted features. Said decision-making algorithm may request from said signal processing algorithms further specific feature extractions from specific short-term segments and make further tentative or final decisions about the type of occupancy of frequency bands based on the requested features. Next, said decision-making algorithm stores its tentative decisions and makes final decisions about band occupancy for processing together with results from later short-term segments. Eventually, said decision-making algorithm outputs final decisions derived from current and past short-term segments in the form of a set of notes having been played over some recent time interval, together with information as to the timing of each note from the set.
Interactive guitar game
An interactive game designed for learning to play a guitar. A guitar may be connected to a computer or other platform capable of loading music and displaying notes, chords, and other feedback and visual learning aids on a display screen, allowing a user to read music and play along. The goal of the software or interactive game engine is for players to learn how to play a guitar. Users may operate the game in a number of modes with different goals, playing mini-games throughout the levels of the game. The game provides feedback and statistics to help users learn how to play the guitar.
ELECTRONIC DEVICE, METHOD AND COMPUTER PROGRAM
An electronic device having circuitry configured to perform audio source separation on an audio input signal to obtain a vocals signal and an accompaniment signal, and to perform a confidence analysis on a user's voice signal based on the vocals signal to provide guidance to the user.
MUSICAL PIECE STRUCTURE ANALYSIS DEVICE AND MUSICAL PIECE STRUCTURE ANALYSIS METHOD
A musical piece structure analysis method includes acquiring an acoustic signal of a musical piece, extracting a first feature amount indicating changes in tone from the acoustic signal of the musical piece, extracting a second feature amount indicating changes in chords from the acoustic signal of the musical piece, outputting a first boundary likelihood indicating likelihood of a constituent boundary of the musical piece from the first feature amount using a first learning model, outputting a second boundary likelihood indicating likelihood of the constituent boundary of the musical piece from the second feature amount using a second learning model, identifying the constituent boundary of the musical piece by performing weighted synthesis of the first boundary likelihood and the second boundary likelihood, and dividing the acoustic signal of the musical piece into a plurality of sections at the constituent boundary that has been identified.
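The weighted-synthesis step can be sketched directly. The weights, the frame-wise likelihood representation, and the threshold rule below are illustrative assumptions; the abstract specifies only that the two boundary likelihoods are combined by weighted synthesis.

```python
def fuse_boundary_likelihoods(tone_lik, chord_lik, w1=0.6, w2=0.4,
                              threshold=0.5):
    """Weighted synthesis of two frame-wise boundary likelihood
    sequences (from the tone-based and chord-based models); frames
    whose fused likelihood exceeds the threshold are taken as
    constituent boundaries. Weights and threshold are hypothetical."""
    fused = [w1 * a + w2 * b for a, b in zip(tone_lik, chord_lik)]
    return [i for i, v in enumerate(fused) if v > threshold]

# Hypothetical per-frame likelihoods from the two learning models.
tone_lik  = [0.1, 0.9, 0.2, 0.8]
chord_lik = [0.2, 0.7, 0.1, 0.3]
print(fuse_boundary_likelihoods(tone_lik, chord_lik))  # -> [1, 3]
```

The returned frame indices would then be used to divide the acoustic signal into sections, as in the final step of the method.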
SCALABLE SIMILARITY-BASED GENERATION OF COMPATIBLE MUSIC MIXES
Scalable similarity-based generation of compatible music mixes. Music clips are projected in a pitch interval space for computing musical compatibility between the clips as distances or similarities in the pitch interval space. The distance or similarity between clips reflects the degree to which clips are harmonically compatible. The distance or similarity in the pitch interval space between a candidate music clip and a partial mix can be used to determine if the candidate music clip is harmonically compatible with the partial mix. An indexable feature space may be both beats-per-minute (BPM)-agnostic and musical key-agnostic such that harmonic compatibility can be quickly determined among potentially millions of music clips. A graphical user interface-based user application allows users to easily discover combinations of clips from a library that result in a perceptually high-quality mix that is highly consonant and pleasant-sounding and reflects the principles of musical harmony.
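As a rough illustration of "compatibility as similarity in a pitch space," the sketch below uses cosine similarity between 12-dimensional pitch-class profiles. This is an assumption for illustration: the patent's pitch interval space is additionally BPM- and key-agnostic and indexable, which this toy representation is not.

```python
import math

def harmonic_compatibility(u, v):
    """Cosine similarity between two clips projected into a chroma-like
    pitch space; higher values mean the clips share more pitch content
    and are (in this toy model) more harmonically compatible."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical pitch-class profiles (index 0 = C ... 11 = B).
c_major_clip = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # C, E, G
a_minor_clip = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]  # A, C, E
distant_clip = [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]  # C#, G

print(harmonic_compatibility(c_major_clip, a_minor_clip))  # shares C and E
print(harmonic_compatibility(c_major_clip, distant_clip))  # shares only G
```

In the same spirit, a candidate clip could be scored against the summed profile of a partial mix, and the highest-similarity candidates surfaced in the user interface.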
VEHICLE SYSTEMS AND RELATED METHODS
Vehicle machine learning methods include providing one or more computer processors communicatively coupled with a vehicle. Using data gathered from biometric sensors and/or vehicle sensors, a machine learning model is trained to determine a mental state of a driver and/or a driving state corresponding with a portion of a trip. In implementations the mental or driving state may be determined without a machine learning model. Based at least in part on the determined mental state and the determined driving state, one or more interventions are automatically initiated to alter the mental state of the driver. The interventions may include preparing (or modifying) and initiating a music playlist, altering a lighting condition within the vehicle, altering an audio condition within the vehicle, altering a temperature condition within the vehicle, and initiating, altering, or withholding conversation from a conversational agent. Vehicle machine learning systems perform the vehicle machine learning methods.
Accurate extraction of chroma vectors from an audio signal
A matrix is generated that stores sinusoidal components evaluated for a given sample rate corresponding to the matrix. The matrix is then used to convert an audio signal to chroma vectors representing a set of "chromae" (frequencies of interest). The conversion of an audio signal portion into its chromae enables more meaningful analysis of the audio signal than would be possible using the signal data alone. The chroma vectors of the audio signal can be used to perform analyses such as comparisons with the chroma vectors obtained from other audio signals in order to identify audio matches.
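The precompute-then-project idea can be sketched as follows, assuming equal-tempered pitch frequencies and a plain matrix product as the projection (the patent's exact matrix layout and conversion are not specified here):

```python
import numpy as np

def build_chroma_matrix(sample_rate, frame_len, octaves=(3, 4, 5)):
    """Precompute a matrix of sinusoidal components: one (cos, sin)
    row pair per chroma per octave, evaluated for the given sample
    rate. Reusing this matrix avoids re-evaluating the sinusoids for
    every analyzed frame."""
    t = np.arange(frame_len) / sample_rate
    rows = []
    for chroma in range(12):          # 0 = C ... 9 = A ... 11 = B
        for octave in octaves:
            # Equal temperament, referenced to A4 = 440 Hz.
            f = 440.0 * 2.0 ** ((chroma - 9) / 12.0 + (octave - 4))
            rows.append(np.cos(2 * np.pi * f * t))
            rows.append(np.sin(2 * np.pi * f * t))
    return np.array(rows)

def chroma_vector(frame, matrix, n_octaves=3):
    """Project an audio frame onto the precomputed sinusoids and sum
    the projection energy per pitch class across octaves."""
    proj = matrix @ frame
    energy = proj[0::2] ** 2 + proj[1::2] ** 2  # cos^2 + sin^2 per pair
    return energy.reshape(12, n_octaves).sum(axis=1)

# A 440 Hz tone should produce a chroma vector peaking at A (index 9).
sr, n = 22050, 4096
t = np.arange(n) / sr
a4 = np.sin(2 * np.pi * 440.0 * t)
cv = chroma_vector(a4, build_chroma_matrix(sr, n))
print(int(np.argmax(cv)))  # -> 9
```

Comparing such vectors (for example by correlation or cosine similarity) across signals is one way to perform the matching the abstract describes.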
Social music system and method with continuous, real-time pitch correction of vocal performance and dry vocal capture for subsequent re-rendering based on selectively applicable vocal effect(s) schedule(s)
Described embodiments provide a method for mixing vocal performances from different vocalists. A vocal score temporally synchronized with a corresponding backing track and lyrics is retrieved via a communications interface of a portable computing device. A first vocal performance of a user is captured, via a microphone interface of the portable computing device, in correspondence with the backing track. An open call indication is transmitted to solicit, from a second vocalist, a second vocal performance to be mixed for audible rendering with the first vocal performance. A mix is provided to one of the user and the second vocalist by selecting, based on to whom the mix is provided, from among alternative mixes each having a different prominent vocal performance.