G10H2240/251

Communicating data with audible harmonies
09755764 · 2017-09-05

In some implementations, a process for communicating data over audio is performed. In one aspect, a first device may play, and a second device may receive, one or more ordered sequences of audio attribute values that are associated with data values and selected based on a musical relationship between the audio attribute values. This technique may allow sound-based communications between devices that listeners find pleasant.
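A minimal sketch of the idea, under assumed details not in the abstract: data symbols are mapped to tones whose pitches stand in consonant (musically related) intervals to a reference pitch, so the transmitted sequence sounds harmonious. The 2-bit symbol alphabet, interval set, and reference frequency are all illustrative choices.

```python
# Hypothetical sketch: encode data symbols as pitch sequences drawn from a
# consonant pitch set, so the transmitted tones form pleasant harmonies.
# The symbol-to-interval mapping below is illustrative, not the patented scheme.

CONSONANT_INTERVALS = [0, 4, 7, 12]  # unison, major third, fifth, octave (semitones)
BASE_FREQ = 440.0  # A4, an assumed reference pitch

def symbol_to_freq(symbol: int) -> float:
    """Map a 2-bit symbol (0-3) to a frequency in a consonant relationship
    with the base pitch (equal-temperament semitone ratios)."""
    semitones = CONSONANT_INTERVALS[symbol]
    return BASE_FREQ * (2.0 ** (semitones / 12.0))

def encode_bytes(data: bytes) -> list[float]:
    """Split each byte into four 2-bit symbols and emit one tone per symbol."""
    freqs = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            freqs.append(symbol_to_freq((byte >> shift) & 0b11))
    return freqs

def decode_freqs(freqs: list[float]) -> bytes:
    """Invert the mapping: find the nearest consonant interval per tone."""
    out = bytearray()
    symbols = [min(range(4), key=lambda s: abs(symbol_to_freq(s) - f)) for f in freqs]
    for i in range(0, len(symbols), 4):
        byte = 0
        for s in symbols[i:i + 4]:
            byte = (byte << 2) | s
        out.append(byte)
    return bytes(out)
```

A real receiver would estimate each tone's frequency from microphone audio before this nearest-interval decoding step.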

Coordinating and mixing vocals captured from geographically distributed performers

Despite many practical limitations imposed by mobile device platforms and application execution environments, vocal musical performances may be captured and continuously pitch-corrected for mixing and rendering with backing tracks in ways that create compelling user experiences. Based on the techniques described herein, even mere amateurs are encouraged to share with friends and family or to collaborate and contribute vocal performances as part of virtual “glee clubs.” In some implementations, these interactions are facilitated through social-network- and/or email-mediated sharing of performances and invitations to join in a group performance. Using uploaded vocals captured at clients such as mobile devices, a content server (or service) can mediate such virtual glee clubs by manipulating and mixing the uploaded vocal performances of multiple contributing vocalists.
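The server-side mixing step could be sketched as follows. This is an illustrative simplification, not the described server's actual pipeline: vocal takes are averaged against the backing track, with an assumed per-ensemble gain so the level stays stable as contributors join.

```python
# Illustrative sketch (not the described server's actual pipeline): mix several
# uploaded vocal takes with a shared backing track by averaging samples, with
# a simple ensemble gain so no single vocalist dominates.

def mix_performances(backing: list[float], vocals: list[list[float]],
                     vocal_gain: float = 0.8) -> list[float]:
    """Return a mono mix of the backing track and N vocal takes.
    All tracks are float sample lists in [-1, 1]; shorter tracks are zero-padded."""
    length = max(len(backing), *(len(v) for v in vocals))
    mix = []
    for i in range(length):
        sample = backing[i] if i < len(backing) else 0.0
        # Average vocals so the ensemble level stays stable as contributors join.
        vocal_sum = sum(v[i] for v in vocals if i < len(v))
        sample += vocal_gain * vocal_sum / len(vocals)
        # Hard clip as a last-resort safety limiter.
        mix.append(max(-1.0, min(1.0, sample)))
    return mix
```

Pitch correction would run on each uploaded take before this mixing stage.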

MUSICAL PERFORMANCE SYSTEM, TERMINAL DEVICE, METHOD AND ELECTRONIC MUSICAL INSTRUMENT
20210407475 · 2021-12-30

A musical performance system includes an instrument and a terminal. The terminal includes a processor that outputs first track data, or first pattern data obtained by arbitrarily combining pieces of track data, and then automatically outputs second track data or second pattern data obtained in the same way. The instrument includes a processor that acquires the first track/pattern data from the terminal and generates the sound of a music composition in accordance with it, then acquires the second track/pattern data and generates the sound of a music composition in accordance with that data.
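The terminal-side combination step might look like the sketch below. The track and pattern representations (per-part event lists keyed by beat) are invented for illustration; the abstract does not specify a data format.

```python
# Hedged sketch of the terminal-side step: build "pattern data" by arbitrarily
# combining pieces of per-part track data, then hand the pattern to the
# instrument. The track/pattern representation here is an assumption.

from dataclasses import dataclass

@dataclass
class Track:
    part: str            # e.g. "drums", "bass"
    events: list[tuple]  # (beat, midi_note) pairs

def make_pattern(tracks: dict[str, Track], parts: list[str]) -> list[tuple]:
    """Combine the selected tracks into one time-ordered event list (a 'pattern')."""
    events = []
    for part in parts:
        events.extend(tracks[part].events)
    return sorted(events)  # order by beat so the instrument can play sequentially

tracks = {
    "drums": Track("drums", [(0, 36), (2, 38)]),
    "bass":  Track("bass",  [(0, 40), (1, 43)]),
}
pattern = make_pattern(tracks, ["drums", "bass"])
```

The instrument would then consume `pattern` event by event to generate sound.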

Apparatus and methods for cellular compositions
11195502 · 2021-12-07

Broadly speaking, embodiments of the present invention provide systems, methods and apparatus for cellular composition, i.e. generating music in real time from cells (short musical motifs), where the cellular compositions are dependent on user data.
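A minimal sketch of cell selection driven by user data, under assumed details: a heart-rate signal picks a mood bucket, and cells from that bucket are chained into a note stream. The cell library, mood labels, and threshold are illustrative, not from the patent.

```python
# Illustrative sketch: choose short musical cells (motifs) from a library based
# on user data such as heart rate, then chain them into a real-time stream.
# The cell library and thresholds are assumptions, not the patented scheme.

CELLS = {
    "calm":      [[60, 64, 67], [60, 62, 64]],      # slower, consonant motifs
    "energetic": [[72, 74, 76], [76, 74, 72, 71]],  # busier, higher motifs
}

def pick_mood(heart_rate_bpm: int) -> str:
    """Map a user-data signal to a mood bucket (threshold is illustrative)."""
    return "energetic" if heart_rate_bpm >= 100 else "calm"

def compose(heart_rates: list[int]) -> list[int]:
    """Emit one cell per user-data sample, concatenated into a MIDI note stream."""
    notes = []
    for i, bpm in enumerate(heart_rates):
        bank = CELLS[pick_mood(bpm)]
        notes.extend(bank[i % len(bank)])  # cycle cells within the mood bank
    return notes
```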

Apparatus and Methods for Cellular Compositions
20220157283 · 2022-05-19

Systems, methods and apparatus for cellular composition, i.e. generating music in real time from cells, are provided. The cellular compositions may be dependent on user data.

Mobile Machine
20230262429 · 2023-08-17

A system for providing mobile content to a mobile communication device includes a first computing system with one or more servers that cause a graphical user interface to be displayed at a second computing system. The graphical user interface (i) enables a user of the second computing system to create, edit, and/or select the mobile content and (ii) enables that user to provide a phone number associated with the mobile communication device, which is separate and remote from both the first and second computing systems. The first computing system uses the phone number to cause delivery of the mobile content to the mobile communication device via a wireless communications network, in a format compatible with one or more operational parameters of the device, the parameters including at least one of a mobile communication device type and a software platform type. The phone number is used to cause this delivery without the mobile communication device identifying the operational parameters to the first or second computing system.
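The key point, that the device never reports its own parameters, suggests a server-side lookup keyed by phone number. A hedged sketch, where the registry contents and format table are fabricated placeholders:

```python
# Hedged sketch of the server-side idea: the device never reports its own type;
# the first computing system resolves operational parameters from a lookup
# keyed by phone number. Registry and format entries are fabricated examples.

DEVICE_REGISTRY = {
    "+15551230001": {"device_type": "featurephone", "platform": "java-me"},
    "+15551230002": {"device_type": "smartphone",   "platform": "android"},
}

FORMATS = {
    ("featurephone", "java-me"): "midi",
    ("smartphone", "android"):   "mp3",
}

def choose_delivery_format(phone_number: str) -> str:
    """Pick a content format compatible with the device's operational
    parameters, resolved server-side rather than reported by the device."""
    params = DEVICE_REGISTRY[phone_number]
    return FORMATS[(params["device_type"], params["platform"])]
```

In practice such a registry might be populated from carrier or provisioning data; the abstract does not say how.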

Song Recording Method, Audio Correction Method, and Electronic Device
20220130360 · 2022-04-28

A method includes: displaying, by an electronic device, a first interface that includes a recording button used to record a first song; obtaining, by the electronic device, the accompaniment of the first song and feature information of the original singer's a cappella; starting to record the a cappella sung by the user; and displaying, by the electronic device, guidance information on a second interface based on the feature information of the original singer's a cappella, where the guidance information guides one or more of breathing and vibrato during the user's singing.
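The guidance step could be sketched as below, under an assumed feature schema (timestamped "breath"/"vibrato" markers extracted from the original singer's a cappella) that is not specified in the abstract:

```python
# Illustrative sketch: derive on-screen guidance from feature information of
# the original singer's a cappella track. The feature schema (timestamped
# "breath"/"vibrato" markers) is an assumption for illustration.

def guidance_at(features: list[dict], now_s: float, lookahead_s: float = 1.0) -> list[str]:
    """Return guidance strings for feature events starting within the lookahead
    window, so the user is cued just before a breath or vibrato passage."""
    cues = []
    for f in features:
        if now_s <= f["time_s"] <= now_s + lookahead_s:
            if f["kind"] == "breath":
                cues.append("Breathe here")
            elif f["kind"] == "vibrato":
                cues.append("Add vibrato")
    return cues

features = [
    {"time_s": 4.2, "kind": "breath"},
    {"time_s": 4.8, "kind": "vibrato"},
    {"time_s": 9.0, "kind": "breath"},
]
```

The second interface would call this each frame with the current playback position and render the returned cues.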

Template-based excerpting and rendering of multimedia performance

Disclosed herein are computer-implemented method, system, and computer-readable storage medium embodiments for implementing template-based excerpting and rendering of multimedia performance technologies. An embodiment includes at least one computer processor configured to retrieve a first content instance and corresponding first metadata. The first content instance may include a first plurality of structural elements, with at least one structural element corresponding to at least part of the first metadata. The first content instance may be transformed by a rendering engine running on the at least one computer processor and/or transmitted to a content-playback device.
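Template-based excerpting could be sketched as a metadata match over structural elements. The element and template shapes below are invented for illustration; the abstract does not define them.

```python
# Hedged sketch of template-based excerpting: keep the structural elements of a
# content instance whose metadata matches a template, preserving timeline
# order. Element/metadata shapes are assumptions for illustration.

def excerpt(elements: list[dict], template: dict) -> list[dict]:
    """Keep elements whose metadata contains every key/value pair in the
    template, in original timeline order."""
    return [e for e in elements
            if all(e["meta"].get(k) == v for k, v in template.items())]

performance = [
    {"span": (0, 10),  "meta": {"section": "verse"}},
    {"span": (10, 18), "meta": {"section": "chorus"}},
    {"span": (18, 28), "meta": {"section": "verse"}},
    {"span": (28, 36), "meta": {"section": "chorus"}},
]
chorus_only = excerpt(performance, {"section": "chorus"})
```

A rendering engine would then splice and render only the selected spans.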

AUDIO-VISUAL EFFECTS SYSTEM FOR AUGMENTATION OF CAPTURED PERFORMANCE BASED ON CONTENT THEREOF

Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
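The correspondence between musical structure and visual effects might look like the sketch below. The mood labels, section labels, and effect names are illustrative assumptions; segmentation of the audio tracks is taken as given here.

```python
# Illustrative sketch: apply a mood-denominated visual effects schedule so that
# different elements of musical structure (from audio segmentation) receive
# different effects. Mood, section, and effect names are assumptions.

SCHEDULES = {
    "upbeat": {"verse": "light-bloom", "chorus": "strobe",    "bridge": "color-shift"},
    "mellow": {"verse": "soft-focus",  "chorus": "warm-glow", "bridge": "fade"},
}

def effects_timeline(segments: list[tuple], mood: str) -> list[tuple]:
    """Pair each (start_s, end_s, label) segment with the scheduled effect
    for the selected mood."""
    schedule = SCHEDULES[mood]
    return [(start, end, schedule[label]) for start, end, label in segments]

segments = [(0, 12, "verse"), (12, 20, "chorus"), (20, 32, "verse")]
timeline = effects_timeline(segments, "mellow")
```

The mood key could come from performer selection or, per the abstract, from a machine-learned classification of the performance.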

Automated generation of coordinated audiovisual work based on content captured from geographically distributed performers

Vocal audio of a user together with performance synchronized video is captured and coordinated with audiovisual contributions of other users to form composite duet-style, glee club-style, or window-paned music-video-style audiovisual performances. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for presentation, at any given time along a given performance timeline, performance synchronized video of one or more of the contributors. Selections are in accord with a visual progression that codes a sequence of visual layouts in correspondence with other coded aspects of a performance score such as pitch tracks, backing audio, lyrics, sections and/or vocal parts.
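A coded visual progression could be sketched as a timeline of layout entries, as below. The section boundaries, layout codes, and contributor indices are illustrative assumptions, not the patented encoding.

```python
# Hedged sketch: walk a coded visual progression that maps performance-timeline
# positions to window-paned layouts, choosing which contributors' video appears
# at a given time. Boundaries and layout codes are illustrative assumptions.

PROGRESSION = [
    # (section_start_s, layout, visible contributor indices)
    (0.0,  "solo", [0]),
    (15.0, "duet", [0, 1]),
    (40.0, "grid", [0, 1, 2, 3]),
]

def layout_at(time_s: float) -> tuple:
    """Return the (layout, contributors) entry active at time_s: the last
    progression entry whose start time is at or before time_s."""
    current = PROGRESSION[0]
    for entry in PROGRESSION:
        if entry[0] <= time_s:
            current = entry
        else:
            break
    return current[1], current[2]
```

In the described system, the progression's boundaries would align with coded score aspects such as sections or vocal parts rather than fixed times.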