G10H2220/355

Music compilation systems and related methods
11580941 · 2023-02-14

Music compilation methods disclosed herein include providing a database. Data is stored therein associating a user with access credentials for a plurality of music streaming services. A first server is communicatively coupled with the database and with multiple third-party servers, each of which includes a music library associated with the user. A list of the audio tracks in those libraries is stored in the database. A play selector is displayed on a user interface of a computing device communicatively coupled with the first server. User selection of the play selector initiates playback of a sample set, the sample set including portions of audio tracks in the list. The sample set is determined based on contextual information gathered by the computing device, the contextual information not including any user selection. Music compilation systems disclosed herein include systems configured to carry out the music compilation methods.
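
The context-driven selection of the sample set can be sketched compactly. Below is a minimal illustration assuming a track list aggregated from the third-party libraries and two hypothetical contextual signals (time of day, device motion); the field and function names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Context:
    # contextual signals gathered by the device, none of them user selections
    hour: int     # local time of day
    moving: bool  # e.g. inferred from the accelerometer

def build_sample_set(tracks, context, n=5, clip_seconds=20.0):
    """Rank tracks from the aggregated cross-service list by a contextual
    heuristic and return (track_id, start_s, end_s) portions to preview."""
    def score(t):
        s = 0
        if context.moving and t.get("bpm", 0) >= 120:
            s += 1  # favor energetic tracks while the user is moving
        if context.hour >= 21 and t.get("energy", 1.0) < 0.5:
            s += 1  # favor calmer tracks late in the evening
        return s
    ranked = sorted(tracks, key=score, reverse=True)
    # e.g. build_sample_set(track_list, Context(hour=22, moving=False))
    return [(t["id"], 0.0, clip_seconds) for t in ranked[:n]]
```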

Mobile system allowing adaptation of the runner's cadence

A mobile music listening device that synchronizes music and movement in a personalized way, dedicated to improving the runner's kinematics. Using inertial measurement units connected to a smartphone, the mobile application detects the runner's steps in real time. A dedicated algorithm adapts the tempo of the musical excerpts so as to bring the runner to a suitable cadence capable of preventing injuries. Also claimed is a method for synchronizing the rhythmic stimulation with the biological variability using a Kuramoto model, characterized by a phase oscillator with a coupling term derived from the movement dynamics, parameterized by the coupling strength, the maximum and minimum frequencies as fractions of the unmodified song frequency, the maximum difference between the tempo and the target frequency, and the target frequency itself.
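
The claimed Kuramoto coupling can be written out concretely. A minimal sketch follows, assuming simple Euler integration and hypothetical parameter names (K, f_min_frac, f_max_frac, f_target, max_delta) standing in for the claimed coupling strength, frequency fractions, and tempo bounds.

```python
import numpy as np

def tempo_step(theta_music, theta_step, f_song, p, dt):
    """One Euler step of the music-phase oscillator coupled to the runner's
    step phase; returns the new phase and the clamped playback frequency."""
    # Kuramoto dynamics: d(theta)/dt = 2*pi*f_song + K * sin(theta_step - theta_music)
    dtheta = 2 * np.pi * f_song + p["K"] * np.sin(theta_step - theta_music)
    f = dtheta / (2 * np.pi)
    # keep the adapted tempo within fractions of the unmodified song frequency
    f = np.clip(f, p["f_min_frac"] * f_song, p["f_max_frac"] * f_song)
    # and within the maximum allowed distance from the target cadence
    f = np.clip(f, p["f_target"] - p["max_delta"], p["f_target"] + p["max_delta"])
    theta_music = (theta_music + 2 * np.pi * f * dt) % (2 * np.pi)
    return theta_music, f
```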

SYSTEM AND METHOD FOR 3D SOUND PLACEMENT
20220400352 · 2022-12-15

A phone app is disclosed that enables a user to place 3D sound in a room. By aiming the phone, the user can precisely position where the sound is perceived to originate. The app may be used by audio professionals in place of the controls on a traditional sound mixer.
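
A rough sketch of the aim-to-place mapping, assuming the phone reports compass yaw and pitch and the sound is placed at a user-chosen distance along the pointing direction; the coordinate convention (x right, y up, negative z forward) is an assumption, not stated in the abstract.

```python
import math

def aim_to_position(yaw_deg, pitch_deg, distance_m):
    """Map the phone's pointing direction (yaw, pitch in degrees) to a 3D
    point at the given distance where the sound should appear to originate."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = distance_m * math.cos(pitch) * math.sin(yaw)    # right
    y = distance_m * math.sin(pitch)                    # up
    z = -distance_m * math.cos(pitch) * math.cos(yaw)   # forward
    return (x, y, z)
```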

VIRTUAL TUTORIALS FOR MUSICAL INSTRUMENTS WITH FINGER TRACKING IN AUGMENTED REALITY
20220375362 · 2022-11-24

Systems, devices, media, and methods are described for presenting a tutorial in augmented reality on the display of a smart eyewear device. The system includes a marker registration utility for setting a marker on a musical instrument, a localization utility for locating the eyewear device relative to the marker location and the instrument, a virtual object rendering utility for presenting a series of virtual tutorial objects on the display near one or more actuators on the instrument, and a hand tracking utility for tracking the performer's finger locations in real time during playback of a song file. A high-definition video camera captures sequences of frames of video data. The series of virtual tutorial objects, in one example, includes graphical elements presented on a virtual scroll that appears to move toward the instrument at a speed correlated with the song tempo. The hand tracking utility calculates a set of expected fingertip coordinates based on a detected hand shape and a library of hand poses and landmarks.
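
Two of the described utilities reduce to short computations: the scroll speed correlated with the song tempo, and the expected fingertip coordinates from a match against the library of hand poses. A minimal sketch, in which the names and the nearest-neighbour matching are illustrative assumptions:

```python
import math

def scroll_offset(elapsed_s, tempo_bpm, spacing_m=0.05):
    """Distance the virtual note scroll has advanced toward the instrument,
    one note spacing per beat, so apparent speed tracks the song tempo."""
    return (elapsed_s * tempo_bpm / 60.0) * spacing_m

def expected_fingertips(detected_landmarks, pose_library):
    """Return the fingertip coordinates of the library pose whose landmarks
    lie nearest to the detected hand shape."""
    def dist(a, b):
        return sum(math.dist(p, q) for p, q in zip(a, b))
    best = min(pose_library,
               key=lambda pose: dist(pose["landmarks"], detected_landmarks))
    return best["fingertips"]
```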

Audio-visual effects system for augmentation of captured performance based on content thereof

Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or determined from an audiovisual performance using machine learning techniques.
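
A mood-denominated effects schedule can be pictured as a lookup from musical-structure labels to visual treatments. The sketch below assumes segmentation has already produced labeled time spans; the moods, segment labels, and effect names are invented for illustration.

```python
# Hypothetical mood-denominated schedules: each musical segment label maps
# to a visual treatment applied over that segment's time span.
SCHEDULES = {
    "uplifting": {"verse": "soft_glow", "chorus": "particle_burst", "bridge": "lens_flare"},
    "moody":     {"verse": "desaturate", "chorus": "slow_strobe",   "bridge": "vignette"},
}

def effects_for(segments, mood):
    """segments: [(label, start_s, end_s)] from audio segmentation.
    Returns (start_s, end_s, effect) spans for the renderer."""
    table = SCHEDULES[mood]
    return [(start, end, table.get(label, "none"))
            for label, start, end in segments]
```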

VEHICLE SYSTEMS AND RELATED METHODS
20230186878 · 2023-06-15

Vehicle machine learning methods include providing one or more computer processors communicatively coupled with a vehicle. Using data gathered from biometric sensors and/or vehicle sensors, a machine learning model is trained to determine a mental state of a driver and/or a driving state corresponding with a portion of a trip. In implementations the mental or driving state may be determined without a machine learning model. Based at least in part on the determined mental state and the determined driving state, one or more interventions are automatically initiated to alter the mental state of the driver. The interventions may include preparing (or modifying) and initiating a music playlist, altering a lighting condition within the vehicle, altering an audio condition within the vehicle, altering a temperature condition within the vehicle, and initiating, altering, or withholding conversation from a conversational agent. Vehicle machine learning systems perform the vehicle machine learning methods.
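
The intervention step can be sketched as a policy over the two determined states. The state and action vocabulary below is an assumption for illustration; the patent does not enumerate these.

```python
# Illustrative intervention policy keyed on (mental_state, driving_state);
# the listed states and actions are invented, not the patent's taxonomy.
POLICY = {
    ("drowsy",   "highway"): ["play_upbeat_playlist", "lower_cabin_temperature",
                              "start_conversation"],
    ("stressed", "traffic"): ["play_calm_playlist", "dim_interior_lighting"],
    ("alert",    "highway"): [],  # no intervention needed
}

def choose_interventions(mental_state, driving_state):
    """Return the interventions to initiate for the determined states."""
    return POLICY.get((mental_state, driving_state), ["play_default_playlist"])
```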

Pace-aware music player

An electronic device may comprise audio processing circuitry, pace tracking circuitry, and positioning circuitry. The pace tracking circuitry may be operable to select songs to be processed for playback, and/or to control time stretching applied to such songs by the audio processing circuitry, based on position data generated by the positioning circuitry, a desired tempo, and whether the songs are stored locally or network-accessible. The position data may indicate the pace of a runner during a preceding, determined time interval. The pace tracking circuitry may control the song selection and/or time stretching based on runner profile data stored in memory of the music device. The profile data may include the runner's distance-per-stride data. The electronic device may include one or more sensors operable to function as a pedometer. The pace tracking circuitry may update the distance-per-stride data based on the position data and on data output by the one or more sensors.
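
Two pieces of the described behavior are simple to state in code: the time-stretch factor that pulls a song toward the desired tempo, and the running update of the profile's distance-per-stride from position and pedometer data. A minimal sketch, with the clamping bound and smoothing factor as assumed parameters:

```python
def stretch_ratio(song_bpm, desired_tempo_bpm, max_stretch=0.15):
    """Time-stretch factor nudging the song toward the desired tempo,
    clamped so the stretch stays within an assumed quality bound."""
    ratio = desired_tempo_bpm / song_bpm
    return max(1 - max_stretch, min(1 + max_stretch, ratio))

def update_distance_per_stride(old_dps, distance_m, steps, alpha=0.1):
    """Blend position-derived distance over pedometer-counted steps into
    the stored runner-profile value (exponential moving average)."""
    if steps == 0:
        return old_dps
    return (1 - alpha) * old_dps + alpha * (distance_m / steps)
```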

SYSTEMS AND METHODS FOR GENERATING A CONTINUOUS MUSIC SOUNDSCAPE USING AUTOMATIC COMPOSITION

Disclosed are systems and techniques for creating a personalized sound environment for a user. Output is received from a plurality of sensors, the sensor output indicating a state of the user and the environment in which the user is active. Two or more sound sections for presentation to the user are selected from a plurality of sound sections, the selecting based on the sensor output and automatically determined sound preferences of the user. A first sound phase is generated, wherein the first sound phase includes the two or more selected sound sections. A personalized sound environment for presentation to the user is generated, wherein the personalized sound environment includes at least the first sound phase and a second sound phase. The personalized sound environment is presented to the user on a user device.
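
A compact sketch of the selection-and-phase assembly, assuming sound sections carry descriptive tags and the automatically determined preferences are per-section weights; the scoring rule is an illustrative stand-in, not the system's actual method.

```python
def pick_sections(sections, sensor_tags, prefs, k=2):
    """Score each candidate sound section by its overlap with the sensed
    state tags plus the user's learned preference weight; keep the top k."""
    def score(sec):
        return len(set(sec["tags"]) & set(sensor_tags)) + prefs.get(sec["id"], 0)
    return sorted(sections, key=score, reverse=True)[:k]

def build_environment(sections, sensor_tags, prefs):
    """A personalized sound environment of at least two phases, each phase
    holding two or more selected sections."""
    return [pick_sections(sections, sensor_tags, prefs),
            pick_sections(sections, sensor_tags, prefs)]
```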

INFORMATION PROCESSING METHOD, IMAGE PROCESSING APPARATUS, AND PROGRAM

[Object] To propose an image processing method, an image processing apparatus, and a program which are capable of exciting the emotions of a viewer more effectively. [Solution] An information processing method including: analyzing a beat of input music; extracting a plurality of unit images from an input image; and generating, by a processor, editing information for switching the extracted unit images depending on the analyzed beat.
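
The editing-information step amounts to pairing beat times with unit images. A minimal sketch, where cycling through the extracted unit images is an assumed switching policy:

```python
def editing_info(beat_times, unit_images):
    """Pair each analyzed beat time with the next unit image, yielding
    (switch_time_s, image_id) cut points for the edit."""
    # e.g. editing_info([0.5, 1.0, 1.5], ["imgA", "imgB"])
    #      -> [(0.5, "imgA"), (1.0, "imgB"), (1.5, "imgA")]
    return [(t, unit_images[i % len(unit_images)])
            for i, t in enumerate(beat_times)]
```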

Apparatus and methods for cellular compositions
11195502 · 2021-12-07

Broadly speaking, embodiments of the present invention provide systems, methods and apparatus for cellular composition, i.e. generating music in real time using cells (short musical motifs), where the cellular compositions are dependent on user data.
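
A minimal sketch of cell-based real-time composition, assuming each cell carries a motif and an intensity and that the user data reduces to an energy value in [0, 1]; the weighting rule is illustrative, not the patented method.

```python
import random

def compose(cells, user_data, bars=8):
    """Chain short musical cells into a stream, weighting the draw of each
    cell by a user-data-derived energy value."""
    energy = user_data.get("energy", 0.5)
    weights = [1 + energy * c["intensity"] for c in cells]
    # one cell (motif) per bar, drawn with user-dependent weights
    return [random.choices(cells, weights=weights, k=1)[0]["motif"]
            for _ in range(bars)]
```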