COMPUTING ORDERS OF MODELED EXPECTATION ACROSS FEATURES OF MEDIA

A method implemented by a determination engine is provided. The determination engine receives a media dataset comprising target piece music information, target piece audience information, corpus music information, corpus audience information, and corpus preference data. The determination engine determines a subset of the corpus music and preference information and determines at least one surprise factor of the subset across features at one of a plurality of orders. The determination engine learns a model that estimates a likelihood that time-varying surprise trends across the features achieve a preference level. The determination engine then determines at least one surprise factor of the target piece music information across the features at the same order and predicts, using the model, preference information from the time-varying surprise trends for the target piece music information across the features.
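As a rough illustration of what a per-event "surprise factor" at a given order might look like, the sketch below scores each note of a melody by the negative log-probability of its transition, with probabilities estimated from the sequence itself. The n-gram formulation and add-one smoothing are assumptions for the sketch, not details from the abstract.

```python
from collections import defaultdict
import math

def surprise_factors(sequence, order=1):
    """Surprise at a given order: -log2 P(event | preceding context),
    with probabilities estimated from the sequence (add-one smoothing)."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(sequence)):
        context = tuple(sequence[i - order:i])
        counts[context][sequence[i]] += 1
    vocab = set(sequence)
    factors = []
    for i in range(order, len(sequence)):
        context = tuple(sequence[i - order:i])
        total = sum(counts[context].values())
        p = (counts[context][sequence[i]] + 1) / (total + len(vocab))
        factors.append(-math.log2(p))
    return factors

# MIDI note numbers; the final leap to 67 is the rarest transition
melody = [60, 62, 64, 62, 60, 62, 64, 62, 60, 67]
trend = surprise_factors(melody, order=1)
print(trend[-1])
```

A preference model as described would then be trained on such time-varying trends computed over a corpus, with corpus preference data as labels.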

AUTOMATIC TRANSLATION USING DEEP LEARNING
20210027761 · 2021-01-28 ·

Audio data of an original work is received. Text in the audio data is translated to a target language. The audio data is passed to a first deep learning model to learn voice features in the audio data, and to a second deep learning model to learn audio properties in the audio data. The translated text is synchronized to play, in a synthesized voice, at the position of the original text in the original work. Translated audio data of the original work is created by combining the synchronized translated text in the synthesized voice with the music of the audio data.
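A minimal sketch of the synchronization step described above: each translated line is attached to the time span of the corresponding original line, so synthesized speech can be placed at the original positions. The segment dictionary shape is an assumption for illustration.

```python
def synchronize(segments, translations):
    """Pair each translated line with the start/end times of the
    original line it replaces (segments and translations aligned 1:1)."""
    return [
        {"start": seg["start"], "end": seg["end"], "text": tr}
        for seg, tr in zip(segments, translations)
    ]

segments = [{"start": 0.0, "end": 2.5, "text": "Hola mundo"},
            {"start": 2.5, "end": 5.0, "text": "Adiós"}]
translations = ["Hello world", "Goodbye"]
synced = synchronize(segments, translations)
print(synced[0])
```

The final mix step in the abstract would then render each synced entry with a synthesized voice and overlay it on the extracted music track.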

Intelligent system for matching audio with video
20210020149 · 2021-01-21 ·

An intelligent system for matching audio with video of the present invention provides a video analysis module targeting color tone, storyboard pace, video dialogue, length and category, director's special requirements, actor expression, movement, weather, scene, buildings, spatial and temporal features, and objects, and a music analysis module targeting recorded music form, sectional turns, style, melody, and emotional tension. An AI matching module then adequately matches video from the video analysis module with musical characteristics from the music analysis module, so as to quickly complete a creative composition selection function for matching audio with a video.
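One simple way such a matching module could work is to express both video and music analyses as feature vectors and pick the most similar track. The feature axes, library entries, and cosine-similarity choice below are all assumptions for the sketch.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# hypothetical feature axes: [pace, emotional tension, brightness]
video_features = [0.8, 0.6, 0.3]
music_library = {"calm_piano": [0.1, 0.2, 0.5],
                 "driving_rock": [0.9, 0.7, 0.2]}
best = max(music_library,
           key=lambda name: cosine(video_features, music_library[name]))
print(best)
```

A learned matching module would replace the hand-set vectors with outputs of the two analysis modules, but the selection step has the same shape.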

METHOD OF DISPLAYING LIGHT WITH THE RHYTHM OF MUSIC
20240008154 · 2024-01-04 ·

A method of displaying light with the rhythm of music uses a host system to control display units that display light-emitting colors along with the music. The control unit of each display unit controls display elements to operate separately or synchronously. The processor of the host system analyzes the transitions between intro, verse, and hook segments to follow the rhythm of the music's melody. The host system transmits the display signal to the display units in conjunction with the music being played, changing among various display lighting methods so as to achieve the effect of displaying light with the rhythm of the music.
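A minimal sketch of the segment-to-lighting mapping implied above: each analyzed segment transition (intro, verse, hook) triggers a different display method. The mode names and the segment list format are illustrative assumptions.

```python
# hypothetical mapping from analyzed segment type to a lighting method
SEGMENT_MODES = {"intro": "fade", "verse": "pulse", "hook": "strobe"}

def display_signals(segments):
    """One display command per segment transition: (start_time, mode)."""
    return [(start, SEGMENT_MODES[label]) for start, label in segments]

song_structure = [(0.0, "intro"), (12.0, "verse"), (40.0, "hook")]
signals = display_signals(song_structure)
print(signals)
```

In the described system the host would transmit each command to the display units at its start time, synchronized with playback.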

MACHINE-CONTROL OF A DEVICE BASED ON MACHINE-DETECTED TRANSITIONS
20200410966 · 2020-12-31 ·

Apparatus, methods, and systems that operate to provide interactive streaming content identification and processing are disclosed. An example apparatus includes a classifier to determine an audio characteristic value representative of an audio characteristic in audio; a transition detector to detect a transition between a first category and a second category by comparing the audio characteristic value to a threshold value among a set of threshold values, the set of threshold values corresponding to the first category and the second category; and a context manager to control a device to switch from a first fingerprinting algorithm to a second fingerprinting algorithm different than the first fingerprinting algorithm, responsive to the detected transition between the first category and the second category.
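The detect-then-switch logic above can be sketched compactly: compare a classifier's audio characteristic value to a threshold, and swap fingerprinting algorithms only when the category changes. The category labels, threshold value, and fingerprinter names are hypothetical stand-ins.

```python
def detect_transition(value, prev_category, thresholds):
    """Classify the frame and report whether a category transition occurred.
    Assumed labels: 'music' if value >= threshold, else 'speech'."""
    category = "music" if value >= thresholds["music"] else "speech"
    return category, category != prev_category

# two fingerprinting algorithms, stubbed as labeled functions
fingerprinters = {"speech": lambda audio: ("robust_fp", audio),
                  "music": lambda audio: ("fine_fp", audio)}

category = "speech"
active = fingerprinters[category]
for value in [0.2, 0.3, 0.9]:        # stream of characteristic values
    category, changed = detect_transition(value, category, {"music": 0.5})
    if changed:
        active = fingerprinters[category]  # switch algorithm on transition
print(active("frame")[0])
```

The point of the switch is that different content categories fingerprint best with different algorithms, so the context manager reconfigures the device only at detected boundaries rather than per frame.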

Technologies for generating a musical fingerprint

Techniques are described herein for generating a music fingerprint representative of a performance style of an individual. One or more characteristics associated with musical data are identified. A score associated with each of the identified one or more characteristics is determined. The music fingerprint is generated based on the determined score for each of the identified one or more characteristics.
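In the simplest reading, the fingerprint is just an ordered vector of per-characteristic scores. The characteristic names and scoring scale below are assumptions for illustration.

```python
def music_fingerprint(characteristics):
    """Fingerprint as a tuple of scores, ordered by characteristic name
    so the same performer always yields a comparable vector."""
    return tuple(score for _, score in sorted(characteristics.items()))

# hypothetical scored characteristics of one performer
performance = {"tempo_rubato": 0.7,
               "dynamics_range": 0.4,
               "articulation": 0.9}
fp = music_fingerprint(performance)
print(fp)
```

Fingerprints built this way can be compared with any vector distance to judge how similar two performance styles are.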

Device configurations and methods for generating drum patterns

The present disclosure relates to methods and devices for generating drum patterns. In one embodiment, a method includes receiving a user-generated input including a plurality of events during a time interval, and detecting the events. The method also includes analyzing the events to define a rhythmic pattern based on the number of events detected, the placement of each event in the time interval, and the duration of the time interval. Each of the plurality of events may be classified into at least one type of drum pattern element, and a drum pattern may be generated based on the rhythmic pattern to include a drum element for each event of the rhythmic pattern. In certain embodiments, the pitch or tone of the events may be determined to classify events as components of a drum pattern. Processes and devices allow for professional-sounding drum patterns to be output based on received input.
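A sketch of the event-to-pattern step: quantize timestamped events onto a step grid spanning the time interval, and classify each by pitch. The pitch split point, step count, and element names are assumptions, not details from the disclosure.

```python
def drum_pattern(events, interval, steps=8):
    """Place (time, pitch) events on a step grid; classify by pitch
    (assumed split: below MIDI 60 -> kick, otherwise snare)."""
    grid = [None] * steps
    for t, pitch in events:
        step = min(int(t / interval * steps), steps - 1)
        grid[step] = "kick" if pitch < 60 else "snare"
    return grid

# user taps: (seconds, detected pitch) over a 2-second interval
taps = [(0.0, 40), (0.5, 70), (1.0, 40), (1.5, 70)]
pattern = drum_pattern(taps, interval=2.0)
print(pattern)
```

The generated pattern then maps each occupied step to a drum element, yielding the familiar alternating kick/snare backbeat from four taps.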

METHOD FOR CONVERTING ACOUSTIC SIGNAL INTO HAPTIC SIGNAL, AND HAPTIC DEVICE USING SAME

A haptic device according to one embodiment can comprise: a database unit for storing acoustic information or receiving the acoustic information from an external device; a control unit for converting the acoustic information into an electrical signal according to a predetermined pattern; a driving unit for generating a motion signal on the basis of the electrical signal; and a transfer unit for transferring a patterned tactile signal to a user by means of the motion signal.
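The control unit's conversion of acoustic information into a patterned electrical signal could, in the simplest case, map the audio's short-time energy to vibration intensity. The frame size and the 0-255 intensity scale are assumptions for the sketch.

```python
def to_haptic(samples, frame=4, max_amp=1.0):
    """Per-frame RMS of the audio mapped to a 0-255 vibration intensity."""
    pattern = []
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        rms = (sum(s * s for s in chunk) / len(chunk)) ** 0.5
        pattern.append(min(255, int(255 * rms / max_amp)))
    return pattern

# quiet passage followed by a loud one
audio = [0.0, 0.1, -0.1, 0.0, 0.8, -0.9, 0.85, -0.8]
haptic = to_haptic(audio)
print(haptic)
```

The driving unit would then render each intensity value as a motion signal, so louder audio produces a stronger tactile pulse.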

Information processing method and image processing apparatus

There is provided an information processing method including analyzing a beat of input music, extracting a plurality of unit images from an input image, and generating, by a processor, editing information for switching the extracted unit images depending on the analyzed beat.
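The editing information described above amounts to a cut list: at each analyzed beat, switch to the next unit image. The beat times and image names below are illustrative; real beat times would come from an audio beat tracker.

```python
def editing_info(beat_times, unit_images):
    """Cut list: (time, image) pairs that switch to the next unit image
    at each detected beat, cycling through the extracted images."""
    return [(t, unit_images[i % len(unit_images)])
            for i, t in enumerate(beat_times)]

beats = [0.0, 0.5, 1.0, 1.5]          # analyzed beat positions (seconds)
cuts = editing_info(beats, ["clip_a", "clip_b", "clip_c"])
print(cuts)
```

Note the cycling behavior: with fewer unit images than beats, the list wraps around so every beat still triggers a switch.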

Information processing method, information processing apparatus, and information processing program
11869467 · 2024-01-09 ·

An information processing apparatus 100 according to the present disclosure includes an extraction unit 131 that extracts first data from an element constituting first content, and a model generation unit 132 that generates a learned model having a first encoder 50 that calculates a first feature quantity, which is a feature quantity of the first content, and a second encoder 55 that calculates a second feature quantity, which is a feature quantity of the extracted first data.
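The dual-encoder structure can be sketched with toy feature functions: one encoder operates on the full content, the other on the data extracted from it. The features below (token counts, character sums) are deliberately trivial stand-ins for learned encoders.

```python
def first_encoder(content):
    """Toy stand-in for encoder 50: features of the full content
    (token count, distinct-token count)."""
    tokens = content.split()
    return (len(tokens), len(set(tokens)))

def second_encoder(extracted):
    """Toy stand-in for encoder 55: features of the extracted first data
    (length, bounded character-code sum)."""
    return (len(extracted), sum(map(ord, extracted)) % 997)

content = "la la la hook la la"   # the first content
first_data = "hook"               # element extracted from the content
print(first_encoder(content), second_encoder(first_data))
```

In the disclosed apparatus both encoders are part of one learned model, so their feature quantities live in spaces the model can relate to each other.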