Patent classifications
G10H1/36
Animation effect attachment based on audio characteristics
Systems and methods for rendering a video effect to a display are described. More specifically, video data and audio data are obtained. The video data is analyzed to determine one or more attachment points of a target object that appears in the video data. The audio data is analyzed to determine audio characteristics. A video effect associated with an animation to be added to the one or more attachment points is determined based on the audio characteristics. A rendered video is generated by applying the video effect to the video data.
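One plausible reading of the pipeline this abstract describes can be sketched as follows. All names, the per-frame energy measure, and the effect thresholds are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    attachment_points: list  # e.g. detected keypoints of the target object, as (x, y)

def audio_characteristics(samples, frame_rate=30, sample_rate=44100):
    """Reduce raw audio to one energy value per video frame (a simple
    stand-in for loudness/beat analysis)."""
    hop = sample_rate // frame_rate
    return [sum(s * s for s in samples[i:i + hop]) / hop
            for i in range(0, len(samples), hop)]

def select_effect(energy):
    # Map the audio characteristic to an animation: louder frames get a
    # stronger effect. The threshold is arbitrary for illustration.
    return "burst" if energy > 0.25 else "sparkle"

def render(frames, samples):
    """Apply the audio-selected effect at each attachment point per frame."""
    energies = audio_characteristics(samples)
    rendered = []
    for frame, energy in zip(frames, energies):
        effect = select_effect(energy)
        rendered.append([(pt, effect) for pt in frame.attachment_points])
    return rendered
```

A real system would detect attachment points with a pose or object detector and compute richer audio features (tempo, onsets, spectral content); the structure above only mirrors the claimed data flow.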
AUTOMATIC DISPLAY MODULATION BASED ON AUDIO ARTIFACT COMPOSITION
The disclosed technology provides solutions for enhancing a user's experience of content playback, such as a user viewing multimedia content (e.g., a music video) on a mobile device. In some aspects, a process of the disclosed technology can include steps for receiving a mean energy curve associated with a sound file and dynamically modulating a brightness level of displayed content on a display based on audio properties of the mean energy curve, whereby the average brightness experienced over playback of the displayed content is equal to a default brightness of the display.
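The mean-preserving constraint in this abstract admits a simple closed form: scaling per-frame brightness by the energy curve normalized to its own mean guarantees that the playback-average brightness equals the default. A minimal sketch, with names assumed for illustration:

```python
def modulate_brightness(mean_energy_curve, default_brightness):
    """Scale brightness frame-by-frame with the audio energy curve while
    keeping the average over playback equal to the display's default.
    Dividing by the curve's own mean enforces the constraint exactly."""
    mean = sum(mean_energy_curve) / len(mean_energy_curve)
    return [default_brightness * e / mean for e in mean_energy_curve]
```

This sketch ignores display limits (a loud passage could request brightness above the panel's maximum); a practical implementation would clip and redistribute, which the simple normalization does not handle.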
MUSIC GENERATION DEVICE, MUSIC GENERATION METHOD, AND RECORDING MEDIUM
A music generation device includes: an acquisition unit that acquires first stream data and second stream data different from the first stream data; an accompaniment generation unit that generates accompaniment information, which is music data indicating an accompaniment, based on a change in the first stream data; a melody generation unit that generates melody information, which is music data indicating a melody, based on a change in the second stream data; a melody adjustment unit that adjusts the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information; a music combining unit that combines the accompaniment information and the adjusted melody information to generate musical piece information; and an output unit that outputs the generated musical piece information.
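The unit structure of this claim can be sketched as a pipeline. The data formats (semitone numbers, one chord root per step, snapping out-of-scale notes upward into a major key) are assumptions chosen to keep the example small, not the patent's method.

```python
MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}  # semitone offsets within a major key

def generate_accompaniment(stream1):
    # Accompaniment generation unit: one chord root per change in stream 1.
    return [int(v) % 12 for v in stream1]

def generate_melody(stream2):
    # Melody generation unit: one note per change in stream 2.
    return [int(v) % 12 for v in stream2]

def adjust_melody_to_key(melody, key_root):
    # Melody adjustment unit: move out-of-scale notes up one semitone
    # so the melody agrees with the accompaniment's key.
    return [n if (n - key_root) % 12 in MAJOR_SCALE else (n + 1) % 12
            for n in melody]

def generate_music(stream1, stream2):
    # Music combining unit: pair accompaniment and adjusted melody.
    acc = generate_accompaniment(stream1)
    key_root = acc[0] if acc else 0
    mel = adjust_melody_to_key(generate_melody(stream2), key_root)
    return list(zip(acc, mel))
```

For example, with streams [0, 7] and [1, 4] in C major, the out-of-scale C♯ (1) is raised to D (2) while E (4) passes through unchanged.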
ARTIFICIAL INTELLIGENCE MODELS FOR COMPOSING AUDIO SCORES
A method for training one or more AI models for generating audio scores accompanying visual datasets includes obtaining training data comprising a plurality of audiovisual datasets and analyzing each of the plurality of audiovisual datasets to extract multiple visual features, textual features, and audio features. The method also includes correlating the multiple visual features and textual features with the multiple audio features via a machine learning network. Based on the correlations between the visual features, textual features, and audio features, one or more AI models are trained for composing one or more audio scores for accompanying a given dataset.
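A minimal stand-in for the correlation step described above: align visual/textual feature series with audio feature series and score how strongly each pair co-varies, a signal a training loop could then use. A real system would learn this mapping with a neural network; the Pearson-correlation shortcut and all names are assumptions for illustration.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def correlate_features(visual_features, audio_features):
    """Score every (visual, audio) feature pair across the training
    datasets; strong pairs indicate which audio properties should
    track which visual properties in a composed score."""
    return {(v, a): pearson(visual_features[v], audio_features[a])
            for v in visual_features for a in audio_features}
```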
Integrated karaoke device
The present invention discloses an integrated karaoke device including a microphone and a sound box. The sound box includes a sound chamber containing a loudspeaker, and a connecting part made of flexible material is fixedly connected between the bottom of the microphone and the top of the sound chamber, while a gap is kept between the bottom of a printed circuit board (PCB) and the top of the sound chamber. Because the flexible connecting part, unlike a solid structural part, effectively damps sound vibration, the transmission of vibration from the sound box to the microphone can be effectively eliminated, thereby preventing squealing and enabling the microphone and the sound box to be integrated for use in karaoke.
Device, system and method for generating an accompaniment of input music data
A device for automatically generating a real time accompaniment of input music data includes a music input that receives music data. A music analyzer analyzes received music data to obtain a music data description including one or more characteristics of the analyzed music data. A query generator generates a query to a music database including music patterns and associated metadata including one or more characteristics of the music patterns, the query being generated from the music data description and from an accompaniment description describing preferences of the real time accompaniment and/or music rules describing general rules of music. A query interface queries the music database using a generated query and receives a music pattern selected from the music database by use of the query. A music output outputs the received music pattern.
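The query path in this abstract can be sketched as follows. The metadata fields (tempo, key, style) and the exact-match rule are assumptions for illustration; a real accompaniment engine would rank by similarity and apply music rules rather than require exact matches.

```python
def generate_query(music_data_description, accompaniment_description):
    """Query generator: merge the analyzed characteristics of the input
    music with the user's accompaniment preferences."""
    query = dict(music_data_description)
    query.update(accompaniment_description)
    return query

def query_database(database, query):
    """Query interface: return the first stored music pattern whose
    associated metadata matches every field of the query, else None."""
    for pattern, metadata in database:
        if all(metadata.get(k) == v for k, v in query.items()):
            return pattern
    return None
```

For example, analyzing the input might yield {"tempo": 90, "key": "G"}, which combined with a preference {"style": "rock"} selects the stored pattern whose metadata carries all three values.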
SYSTEM AND METHOD FOR GENERATING HARMONIOUS COLOR SETS FROM MUSICAL INTERVAL DATA
Systems and methods are disclosed for generating color sets based on the musical concepts of pitch intervals and harmony. Color sets are derived via a music-to-hue process that analyzes musical pitch data associated with musical input to determine the pitch intervals included in the music. Pitch interval angles associated with those pitch intervals are applied to a tuned hue index to identify hue notes, ordered within the index, that are separated by a hue interval angle similar to the pitch interval angle associated with the analyzed pitch data. The systems and methods provide for the creation of color sets that are analogous to musical chords in that they include multiple hue notes selected based on hue interval angles derived from the musical interval angles associated with the received musical input.
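The core mapping can be sketched by treating the 12-semitone octave and the 360-degree hue wheel as analogous circles, so an interval of n semitones corresponds to an angle of n/12 of a full turn. The equal-division mapping is an assumption; the patent's tuned hue index may space its hue notes differently.

```python
def interval_angle(semitones):
    """Map a pitch interval to an angle: each semitone spans 30 degrees
    (360 degrees / 12 semitones), with octave equivalence via mod 12."""
    return (semitones % 12) * 360 / 12

def color_set(root_hue, intervals):
    """Build a 'hue chord': one hue per pitch interval from the root,
    so the hues are separated by the same angles as the pitches."""
    return [(root_hue + interval_angle(n)) % 360 for n in intervals]
```

Under this mapping a major triad (intervals 0, 4, 7) rooted at hue 0 yields hues 0, 120, and 210 degrees, a color set whose angular spacing mirrors the chord's interval structure.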
ELECTRONIC DEVICE, ELECTRONIC MUSICAL INSTRUMENT, AND METHOD THEREFOR
In an electronic device for an electronic musical instrument, a determination grace period, during which a plurality of user operations on the electronic musical instrument are determined to be simultaneously performed for a first section, is set based on data included in the first section of a song having a plurality of sections. During playback of the accompaniment for the first section, the automatic accompaniment advances from the first section to the next, second section when a user operation on the electronic musical instrument is detected outside the determination grace period for the first section, and does not advance from the first section to the second section when the user operation is detected within the determination grace period for the first section.
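The advancement rule above reduces to a single time-window test: an operation inside the grace window is treated as part of the current section's simultaneous chord and does not advance playback, while an operation outside it does. A minimal sketch, with times in seconds and the window bounds assumed for illustration:

```python
def next_section(current_section, op_time, grace_start, grace_end):
    """Return the section to play after a user operation at op_time:
    stay in the current section if the operation falls within the
    determination grace period, otherwise advance to the next section."""
    in_grace_period = grace_start <= op_time <= grace_end
    return current_section if in_grace_period else current_section + 1
```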
Audio-visual effects system for augmentation of captured performance based on content thereof
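The segmentation-driven scheduling described above can be sketched by splitting an audio track's energy curve into structural runs and assigning each run an effect from a mood-denominated schedule. The threshold segmentation and the mood-to-effect table are illustrative assumptions; the abstract's segmentation and mood selection may instead use machine learning.

```python
# Hypothetical mood-denominated schedules: mood -> {segment level: effect}.
SCHEDULES = {
    "upbeat": {"low": "soft_glow", "high": "strobe"},
    "mellow": {"low": "fade", "high": "bloom"},
}

def segment(energies, threshold=0.5):
    """Collapse a per-beat energy curve into (level, length) runs,
    a crude stand-in for musical-structure segmentation."""
    runs = []
    for e in energies:
        level = "high" if e > threshold else "low"
        if runs and runs[-1][0] == level:
            runs[-1] = (level, runs[-1][1] + 1)
        else:
            runs.append((level, 1))
    return runs

def effects_schedule(energies, mood):
    """Assign each structural segment the effect the chosen mood's
    schedule associates with that segment's level."""
    return [(SCHEDULES[mood][level], length)
            for level, length in segment(energies)]
```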
Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.