Patent classifications
G10H2210/576
GENERATIVE COMPOSITION WITH DEFINED FORM ATOM HEURISTICS
A generative composition system reduces existing musical artefacts to constituent elements termed “Form Atoms”. These Form Atoms may each vary in length and have musical properties and associations linked together through Markov chains. To provide myriad new compositions, a set of heuristics ensures that musical textures between concatenated musical sections follow a supplied briefing narrative for the new composition, while contiguous concatenated Form Atoms are automatically selected so that similarities in identified attributes of musical texture across those sections are maintained, preserving musical form. Independent aspects of the disclosure further ensure that, within the composition work, such as a media product or a real-time audio stream, chord spacing is determined and controlled to maintain musical sense in the new composition. Further, a structuring of primitive heuristics operates to maintain pitch and permit key transformation. The system and its functionality provide signal analysis and music generation by allowing emotional connotations to be specified and reproduced from cross-referenced Form Atoms.
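The Markov-chain concatenation step can be illustrated with a minimal sketch. The atom names, texture attributes, and transition weights below are invented for the example; the abstract does not disclose concrete values, only that successors are drawn from a Markov chain and filtered so that texture attributes stay consistent with the briefing narrative.

```python
import random

# Hypothetical Form Atoms with one texture attribute each (illustrative).
FORM_ATOMS = {
    "A1": {"texture": "sparse"},
    "B1": {"texture": "dense"},
    "A2": {"texture": "sparse"},
}

# Transition weights as might be learned from existing artefacts (invented).
MARKOV = {
    "A1": {"B1": 0.3, "A2": 0.7},
    "B1": {"A1": 0.5, "A2": 0.5},
    "A2": {"A1": 1.0},
}

def next_atom(current, required_texture, rng=random):
    """Pick the next Form Atom by Markov weight, restricted to successors
    whose texture matches the requirement supplied by the narrative."""
    candidates = {a: w for a, w in MARKOV[current].items()
                  if FORM_ATOMS[a]["texture"] == required_texture}
    if not candidates:
        return None  # heuristic failure: no texture-compatible successor
    atoms = list(candidates)
    weights = [candidates[a] for a in atoms]
    return rng.choices(atoms, weights=weights, k=1)[0]
```

Filtering before sampling (rather than after) is what keeps every emitted section texture-compatible by construction.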
AUTOMATIC MUSIC PLAYING CONTROL DEVICE, ELECTRONIC MUSICAL INSTRUMENT, METHOD OF PLAYING AUTOMATIC MUSIC PLAYING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
Provided is an automatic music playing control device that automatically plays chords to achieve a good musical performance. The automatic music playing control device has at least one processor, wherein the at least one processor probabilistically selects any one of a plurality of timing types, each of which defines a number of sound emissions; probabilistically selects, corresponding to the selected timing type, any one of a plurality of note timing tables, each of which defines sound emission timings; and instructs a sound source to emit a chord at a sound emission timing based on the selected note timing table.
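The two-stage probabilistic selection reads naturally as a small sketch. The timing-type names, their weights, and the beat-offset tables are assumptions for illustration; only the structure (pick a type, then pick a table belonging to that type) comes from the abstract.

```python
import random

TIMING_TYPES = {        # timing type -> number of sound emissions (invented)
    "single": 1,
    "double": 2,
    "quad":   4,
}
TYPE_WEIGHTS = {"single": 0.2, "double": 0.5, "quad": 0.3}

# Candidate note timing tables per type, as beat offsets within a bar.
NOTE_TIMING_TABLES = {
    "single": [[0.0], [2.0]],
    "double": [[0.0, 2.0], [1.0, 3.0]],
    "quad":   [[0.0, 1.0, 2.0, 3.0]],
}

def select_emission_timings(rng=random):
    """Probabilistically pick a timing type, then a note timing table for
    that type; a chord would be emitted at each returned beat offset."""
    types = list(TIMING_TYPES)
    ttype = rng.choices(types, weights=[TYPE_WEIGHTS[t] for t in types], k=1)[0]
    table = rng.choice(NOTE_TIMING_TABLES[ttype])
    return ttype, table
```

Because the table is drawn from the pool keyed by the chosen type, the number of emission timings always matches the type's defined emission count.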
VEHICLE SYSTEMS AND RELATED METHODS
Vehicle machine learning methods include providing one or more computer processors communicatively coupled with a vehicle. Using data gathered from biometric sensors and/or vehicle sensors, a machine learning model is trained to determine a mental state of a driver and/or a driving state corresponding with a portion of a trip. In implementations the mental or driving state may be determined without a machine learning model. Based at least in part on the determined mental state and the determined driving state, one or more interventions are automatically initiated to alter the mental state of the driver. The interventions may include preparing (or modifying) and initiating a music playlist, altering a lighting condition within the vehicle, altering an audio condition within the vehicle, altering a temperature condition within the vehicle, and initiating, altering, or withholding conversation from a conversational agent. Vehicle machine learning systems perform the vehicle machine learning methods.
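The intervention-selection step can be sketched as a lookup from the determined states to a set of actions. The state labels, rules, and intervention names below are invented; the patent describes a learned model and a richer action set, not this hard-coded table.

```python
# Hypothetical mapping from (mental state, driving state) to interventions.
INTERVENTION_RULES = {
    ("stressed", "heavy_traffic"): ["calming_playlist", "lower_cabin_temperature"],
    ("drowsy",   "highway"):       ["upbeat_playlist", "start_conversation"],
    ("calm",     "highway"):       [],  # no intervention needed
}

def choose_interventions(mental_state, driving_state):
    """Return the interventions to initiate for a (mental, driving) pair;
    unknown pairs default to no intervention."""
    return INTERVENTION_RULES.get((mental_state, driving_state), [])
```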
Audiovisual capture and sharing framework with coordinated, user-selectable audio and video effects filters
Coordinated audio and video filter pairs are applied to enhance artistic and emotional content of audiovisual performances. Such filter pairs, when applied in audio and video processing pipelines of an audiovisual application hosted on a portable computing device (such as a mobile phone or media player, a computing pad or tablet, a game controller or a personal digital assistant or book reader) can allow user selection of effects that enhance both audio and video coordinated therewith. Coordinated audio and video are captured, filtered and rendered at the portable computing device using camera and microphone interfaces, using digital signal processing software executable on a processor and using storage, speaker and display devices of, or interoperable with, the device. By providing audiovisual capture and personalization on an intimate handheld device, social interactions and postings of a type made popular by modern social networking platforms can now be extended to audiovisual content.
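The key structural idea, one user selection binding an audio filter and a video filter, can be shown with trivial stand-in filters. The effect name and the filter bodies are invented placeholders, not real DSP or color grading.

```python
def vintage_audio(samples):
    """Stand-in audio half of the pair (a real filter might band-limit)."""
    return [s * 0.8 for s in samples]

def vintage_video(frame):
    """Stand-in video half of the pair (a real filter might apply sepia)."""
    return {**frame, "look": "sepia"}

# One selectable effect maps to a coordinated (audio, video) filter pair.
FILTER_PAIRS = {"vintage": (vintage_audio, vintage_video)}

def apply_effect(name, samples, frame):
    """Apply both halves of one user-selected effect pair, keeping the
    audio and video pipelines stylistically matched."""
    afilt, vfilt = FILTER_PAIRS[name]
    return afilt(samples), vfilt(frame)
```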
Input Support Apparatus and Method Therefor
An input support method is provided for use in an input support apparatus that supports input of a music note. The method includes: controlling a display unit to display a pitch-time plane that includes a pitch-axis and a time-axis, a chord sequence that is associated with the time-axis of the pitch-time plane, and a pointer that indicates a position on the time-axis along the chord sequence; identifying constituent music notes that form a chord corresponding to a display position of the pointer along the chord sequence; and controlling the display unit to display areas on the pitch-time plane, each displayed area indicating a corresponding one of the identified constituent music notes, differently from other areas on the pitch-time plane.
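The identification step, finding the chord under the pointer and expanding it to the notes to highlight, can be sketched as follows. The chord spellings and the (start_time, symbol) representation of the chord sequence are assumptions for the example.

```python
CHORD_TONES = {          # chord symbol -> pitch classes (0 = C); illustrative
    "C":  [0, 4, 7],
    "F":  [5, 9, 0],
    "G7": [7, 11, 2, 5],
}

def chord_at(chord_sequence, pointer_time):
    """chord_sequence: list of (start_time, chord_symbol) sorted by time.
    Returns the symbol whose region contains the pointer position."""
    current = None
    for start, symbol in chord_sequence:
        if start <= pointer_time:
            current = symbol
        else:
            break
    return current

def constituent_notes(chord_sequence, pointer_time):
    """Pitch classes whose areas would be highlighted on the pitch-time
    plane, differently from other areas, at the pointer position."""
    symbol = chord_at(chord_sequence, pointer_time)
    return CHORD_TONES.get(symbol, [])
```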
Method for adjusting the complexity of a chord in an electronic device
Conventionally, an electronic musical user input device, such as an electronic keyboard, has pre-programmed pitches associated with each key. These pre-programmed pitches correspond to the pitches of their acoustic counterparts. While some methods exist for remapping the keys in such a way that a user cannot make a so-called ‘bad’ sound by playing a wrong note, there is little freedom in the selection of the ‘good’ notes. Therefore, there is herein provided a method of adjusting the complexity of a chord, which determines the actual set of pitches that can be assigned to a user input device, in order to increase the flexibility of remapping systems.
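One plausible reading of "adjusting the complexity of a chord" is selecting how many extensions are stacked onto the root bound to a key. The interval stack below is the standard tertian extension series; the complexity-to-voicing mapping itself is an assumption, not the patent's disclosed method.

```python
# Root, 3rd, 5th, 7th, 9th, 11th, 13th as semitone offsets from the root.
EXTENSION_STACK = [0, 4, 7, 11, 14, 17, 21]

def chord_pitches(root_midi, complexity):
    """Return the MIDI pitches assigned to one key press for a complexity
    level from 1 (bare root) up to 7 (full thirteenth chord); out-of-range
    levels are clamped."""
    complexity = max(1, min(complexity, len(EXTENSION_STACK)))
    return [root_midi + iv for iv in EXTENSION_STACK[:complexity]]
```

Clamping the level keeps the remapping safe: every key always yields at least the root and never an undefined extension.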
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus according to the present disclosure includes a generation unit that generates a model regarding generation of content by using data provided by a user subject of a service regarding creation of the content, the user subject having one authority level among a plurality of authority levels of the service, and a determination unit that determines a usage mode of the model generated by the generation unit according to the one authority level of the user subject.
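The determination unit can be sketched as a mapping from authority level to usage mode. The level names and modes below are invented stand-ins; the abstract specifies only that the usage mode follows the user subject's authority level.

```python
# Hypothetical authority levels and the usage modes they confer.
USAGE_MODE_BY_LEVEL = {
    "free":    "private_use_only",
    "premium": "shareable_within_service",
    "partner": "commercially_licensable",
}

def determine_usage_mode(authority_level):
    """Map the data-providing user's authority level to the usage mode of
    the model generated from their data; unknown levels get the most
    restrictive mode."""
    return USAGE_MODE_BY_LEVEL.get(authority_level, "private_use_only")
```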
COMPUTER-BASED SYSTEMS, DEVICES, AND METHODS FOR GENERATING MUSICAL COMPOSITIONS THAT ARE SYNCHRONIZED TO VIDEO
Computer-based systems, devices, and methods for generating musical compositions that are purposefully synchronized with video are described. A video timeline is defined with various time-markers that demarcate specific events in the video. A music timeline is generated based on the video timeline. The music timeline preserves the various time-markers from the video timeline. A computer-based musical composition system generates a musical composition based on the music timeline. The musical composition includes various musical events that align, synchronize, or coincide with the time-markers such that when the video and musical composition are played together the musical events align, synchronize, or coincide with the demarcated events in the video.
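The timeline derivation can be sketched directly: the video markers are preserved verbatim, and each inter-marker span becomes a musical section whose boundary carries an aligned musical event. The field names and the `hit@` event labeling are assumptions for illustration.

```python
def music_timeline_from_video(video_markers):
    """video_markers: list of (time_seconds, event_label), sorted by time.
    Returns musical sections whose boundaries coincide with the markers,
    so playback of music and video keeps the events aligned."""
    sections = []
    for i, (t, label) in enumerate(video_markers):
        end = video_markers[i + 1][0] if i + 1 < len(video_markers) else None
        sections.append({
            "start": t,                       # preserved marker time
            "end": end,                       # next marker (None for last)
            "musical_event": f"hit@{label}",  # event aligned to the marker
        })
    return sections
```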
METHOD AND SYSTEM FOR GENERATING AN AUDIO OR MIDI OUTPUT FILE USING A HARMONIC CHORD MAP
Techniques are provided for generating an output file. One technique involves the steps of: generating audio or MIDI content blocks from one or more musical performances; receiving an input file having audio or MIDI music content; generating a harmonic chord map for the input file; using the harmonic chord map to automatically select a subset of the audio or MIDI content blocks; and generating the output file by combining the selected subset of content blocks and the input file. This technique may enable the creation of unique and new musical accompaniments by re-purposing audio or MIDI content from back catalogs and/or out-takes of musical works. The new arrangement may be provided in multiple music styles, genres, or moods and may contain performances from multiple musical instruments, which may be pre-recorded from live instrument performances and/or of MIDI generated musical content.
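The selection step can be sketched as matching each catalog block's chords against a contiguous slice of the input file's harmonic chord map. Matching by exact chord symbol is a deliberate simplification of real harmonic analysis, and the block catalog is invented for the example.

```python
# Hypothetical back-catalog content blocks with their analyzed chords.
CONTENT_BLOCKS = [
    {"id": "gtr_take3", "chords": ["C", "Am"]},
    {"id": "keys_out1", "chords": ["F", "G"]},
    {"id": "bass_alt2", "chords": ["C", "C"]},
]

def select_blocks(harmonic_chord_map):
    """Return (bar_offset, block_id) pairs for blocks whose chord
    progression matches a contiguous slice of the input's chord map."""
    selected = []
    for block in CONTENT_BLOCKS:
        n = len(block["chords"])
        for start in range(len(harmonic_chord_map) - n + 1):
            if harmonic_chord_map[start:start + n] == block["chords"]:
                selected.append((start, block["id"]))
                break  # place each block at its first harmonic match
    return selected
```

The returned offsets say where each block would be combined with the input file when generating the output.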
CHORD PROCESSING METHOD AND CHORD PROCESSING DEVICE
A chord processing device includes a memory storing instructions, and a processor configured to implement the stored instructions to execute a plurality of tasks, including: a receiving task that receives a first chord consisting of plural notes; an analysis task that determines whether the first chord is a subject chord; and a converting task that, in a case where the analysis task determines that the first chord is the subject chord, converts the first chord into a second chord that relates to the first chord in a case where the first chord satisfies a prescribed chord-related condition relating to the first and second chords, while not converting the first chord in a case where the first chord does not satisfy the prescribed chord-related condition.
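The receive/analyze/convert flow can be sketched in a few lines. The choice of subject chords and the conversion rule (a tritone substitution of dominant sevenths) are invented stand-ins; the patent leaves the subject chords and the prescribed condition unspecified.

```python
# Hypothetical subject chords and their conversions (tritone substitutes).
SUBJECT_CHORDS = {"G7", "C7", "D7"}
TRITONE_SUB = {"G7": "Db7", "C7": "Gb7", "D7": "Ab7"}

def process_chord(chord, condition=lambda first, second: True):
    """Return the converted chord only when the analysis task marks it a
    subject chord AND the prescribed chord-related condition holds;
    otherwise return the first chord unchanged."""
    if chord not in SUBJECT_CHORDS:          # analysis task: not a subject chord
        return chord
    candidate = TRITONE_SUB[chord]           # converting task
    return candidate if condition(chord, candidate) else chord
```

Passing a different `condition` callable models the prescribed chord-related condition without committing to what it tests.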