G10H2210/331

Natural Ear
20210375303 · 2021-12-02 ·

Methods and systems for assisting tonally-challenged singers. A microphone can be integrated with a sound reinforcement system used in a live performance. The microphone, which can transduce the performer's voice, can serve multiple purposes such as, for example, to feed input to the natural ear and to the sound reinforcement system. The processed sound of the performer's voice (with fundamental frequency emphasized) can be mixed into the signal fed to a stage “monitor” speaker facing the performer or a headset worn by the performer.
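The abstract's signal path (emphasize the fundamental of the transduced voice, then mix the result into the monitor or headset feed) can be sketched as follows. This is a minimal illustration, not the patent's disclosed implementation; the function names, the FFT-based band boost, and the gain parameters are all assumptions:

```python
import numpy as np

def emphasize_fundamental(voice, sr, f0_hz, boost_db=6.0, bandwidth_hz=40.0):
    """Boost a narrow band around the detected fundamental frequency f0_hz
    of a vocal signal (hypothetical DSP choice; the abstract does not
    specify how the fundamental is emphasized)."""
    spectrum = np.fft.rfft(voice)
    freqs = np.fft.rfftfreq(len(voice), d=1.0 / sr)
    band = np.abs(freqs - f0_hz) <= bandwidth_hz / 2
    spectrum[band] *= 10 ** (boost_db / 20)          # apply boost_db in the band
    return np.fft.irfft(spectrum, n=len(voice))

def monitor_mix(processed_voice, backing, voice_gain=1.0, backing_gain=0.5):
    """Mix the emphasized voice into the signal fed to the stage monitor
    speaker or the performer's headset."""
    return voice_gain * processed_voice + backing_gain * backing
```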

Note stabilization and transition boost in automatic pitch correction system
20220189444 · 2022-06-16 ·

Disclosed is subject matter related generally to audio signal processing, and in particular to automatic pitch correction systems.
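The abstract does not detail the note-stabilization or transition-boost logic, but the basic operation of an automatic pitch corrector can be sketched as pulling a detected fundamental toward the nearest equal-tempered note. The function name, `strength` parameter, and A4 reference are illustrative assumptions:

```python
import math

def correct_pitch(f0_hz, strength=1.0, a4_hz=440.0):
    """Generic pitch-correction sketch: move the detected fundamental
    toward the nearest equal-tempered semitone by the given strength
    (1.0 = full correction, 0.0 = no correction)."""
    semis = 12 * math.log2(f0_hz / a4_hz)            # distance from A4 in semitones
    target = round(semis)                            # nearest semitone
    corrected = semis + strength * (target - semis)  # partial or full pull
    return a4_hz * 2 ** (corrected / 12)
```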

Template-based excerpting and rendering of multimedia performance

Disclosed herein are computer-implemented method, system, and computer-readable storage medium embodiments for implementing template-based excerpting and rendering of multimedia performance technologies. An embodiment includes at least one computer processor configured to retrieve a first content instance and corresponding first metadata. The first content instance may include a first plurality of structural elements, with at least one structural element corresponding to at least part of the first metadata. The first content instance may be transformed by a rendering engine running on the at least one computer processor and/or transmitted to a content-playback device.

MUSICAL INSTRUMENT TUNER, MUSICAL PERFORMANCE SUPPORT DEVICE AND MUSICAL INSTRUMENT MANAGEMENT DEVICE

The musical instrument tuner includes a sensor device attached to a musical instrument and an operation device able to perform wireless communication mutually with the sensor device. The sensor device includes an acceleration sensor that has at least two detection axes; frequency detection means for detecting, as a detected frequency, a frequency of a vibration of musical sound generated through an operation of the musical instrument, based on an output from the acceleration sensor; and sensor-side communication means for transmitting transmission information, including information regarding the detected frequency, to the operation device. The operation device includes operation-side communication means for receiving the transmission information transmitted from the sensor device, display means, and control means for generating tuning information of the musical instrument and causing the display means to display the tuning information based on the transmission information received from the sensor device.
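The control step on the operation device, turning a received detected frequency into displayable tuning information, could look like the sketch below. Deriving a nearest note name and a cents deviation is one common convention, not the patent's specified format:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def tuning_info(detected_hz, a4_hz=440.0):
    """Convert a detected frequency into tuning information for display:
    the nearest equal-tempered note name and the deviation in cents."""
    semitones = 12 * math.log2(detected_hz / a4_hz)  # distance from A4
    nearest = round(semitones)
    cents = 100 * (semitones - nearest)              # residual deviation
    midi = 69 + nearest                              # A4 = MIDI note 69
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    return name, cents
```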

AUDIO-VISUAL EFFECTS SYSTEM FOR AUGMENTATION OF CAPTURED PERFORMANCE BASED ON CONTENT THEREOF

Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
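The pairing of computed musical-structure elements with a mood-denominated effects schedule can be sketched as a simple lookup. The section kinds and effect names below are illustrative placeholders, not terms from the patent:

```python
def apply_effects_schedule(sections, schedule):
    """Pair each element of musical structure (e.g. computed by
    segmenting a vocal or backing track into labeled time spans) with
    the visual effect the schedule assigns to that element type."""
    return [(start, end, schedule.get(kind, "none"))
            for start, end, kind in sections]
```

A schedule selected by the performer (or inferred by a classifier) would simply be a different mapping passed in as `schedule`.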

SOUND PROCESSING METHOD, SOUND PROCESSING SYSTEM, ELECTRONIC MUSICAL INSTRUMENT, AND RECORDING MEDIUM
20230290325 · 2023-09-14 ·

A computer-implemented sound processing method includes: outputting singing sound data based on a sound signal representing singing sound; and outputting sound data representing musical instrument sound that correlates with musical elements of the singing sound, by inputting input data that includes the singing sound data to a trained model that has learned, by machine learning, a relationship between singing sound for training and musical instrument sound for training.

Automated generation of coordinated audiovisual work based on content captured from geographically distributed performers

Vocal audio of a user together with performance synchronized video is captured and coordinated with audiovisual contributions of other users to form composite duet-style or glee club-style or window-paned music video-style audiovisual performances. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for presentation, at any given time along a given performance timeline, performance synchronized video of one or more of the contributors. Selections are in accord with a visual progression that codes a sequence of visual layouts in correspondence with other coded aspects of a performance score such as pitch tracks, backing audio, lyrics, sections and/or vocal parts.

Lane- and rhythm-based melody generation system
11640815 · 2023-05-02 ·


To generate a melody, one or more machine-readable constraints are accepted from a user through a user interface. The constraints include rhythm constraints and pitch constraints. A sequence of musical elements is generated based on the constraints, each of the musical elements specifying, in machine-readable data, a musical pitch or silence and a duration of the musical pitch or silence. The pitch constraints prescribe pitches in the sequence of musical elements and the rhythm constraints prescribe rhythm of the sequence of musical elements. The sequence of musical elements is rendered in human-perceivable form as a melody.
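The sequence of machine-readable musical elements described above, each specifying a pitch or silence plus a duration, maps naturally to a small data structure. In this sketch the constraints are assumed to arrive as two parallel lists (a simplification of the lane-based user interface the patent describes):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MusicalElement:
    pitch: Optional[int]   # MIDI pitch number, or None for silence
    duration: float        # duration in beats

def generate_melody(pitch_constraints: List[Optional[int]],
                    rhythm_constraints: List[float]) -> List[MusicalElement]:
    """Generate a sequence of musical elements in which the pitch
    constraints prescribe the pitches and the rhythm constraints
    prescribe the rhythm (durations)."""
    if len(pitch_constraints) != len(rhythm_constraints):
        raise ValueError("pitch and rhythm constraint lists must align")
    return [MusicalElement(p, d)
            for p, d in zip(pitch_constraints, rhythm_constraints)]
```

Rendering the returned sequence as audible or notated output would be a separate step, as in the claim's final rendering limitation.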

METHOD AND ELECTRONIC DEVICE FOR MATCHING MUSICAL IMAGINARY ELECTROENCEPHALOGRAM WITH MUSICAL PITCH VALUES

A method, performed by an electronic device, of matching a musical imaginary electroencephalogram (EEG) with a melody includes obtaining, from a user, an EEG generated by imagining music, obtaining at least one brain wave segment from the EEG, identifying key points included in the at least one brain wave segment, matching a pitch value to each of the at least one brain wave segment based on the identified key points, and compensating for the pitch value matching the at least one brain wave segment based on a musical probability map.
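The abstract names the steps (key points → pitch value → probability-map compensation) without disclosing the matching rule, so the sketch below is purely hypothetical: it scales the amplitude at a segment's key points into a MIDI pitch range, then snaps the result to the nearby pitch that the probability map ranks highest:

```python
import numpy as np

def match_pitch(segment, key_points, pitch_min=48, pitch_max=72):
    """Hypothetical matching rule: map the mean amplitude at the
    segment's key points into a MIDI pitch range."""
    level = np.mean(np.abs(segment[key_points]))
    norm = min(level / (np.max(np.abs(segment)) + 1e-9), 1.0)
    return int(round(pitch_min + norm * (pitch_max - pitch_min)))

def compensate(pitch, probability_map):
    """Compensate the matched pitch using a musical probability map:
    choose the nearby pitch with the highest prior probability."""
    window = range(pitch - 2, pitch + 3)
    return max(window, key=lambda p: probability_map.get(p, 0.0))
```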

Coordinating and mixing audiovisual content captured from geographically distributed performers
11394855 · 2022-07-19 ·

Audiovisual performances, including vocal music, are captured and coordinated with those of other users in ways that create compelling user experiences. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for visually prominent presentation performance synchronized video of one or more of the contributors. Prominence of particular performance synchronized video may be based, at least in part, on computationally-defined audio features extracted from (or computed over) captured vocal audio. Over the course of a coordinated audiovisual performance timeline, these computationally-defined audio features are selective for performance synchronized video of one or more of the contributing vocalists.
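Selecting, at each point on the performance timeline, which contributor's performance-synchronized video gets visual prominence can be sketched as below. Using RMS level as the computationally-defined audio feature is an assumption; the patent covers feature extraction generally:

```python
def select_prominent(features_by_performer):
    """At each timeline index, pick the performer whose computed
    vocal-audio feature (e.g. per-window RMS level) is highest, for
    visually prominent presentation of their synchronized video."""
    n = min(len(f) for f in features_by_performer.values())
    return [max(features_by_performer,
                key=lambda p: features_by_performer[p][i])
            for i in range(n)]
```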