G10H2220/321

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
20210383714 · 2021-12-09 ·

There is provided an information processing device, an information processing method, and a program that can effectively assist in learning a performance. The information processing device includes: a sensing data obtaining section configured to obtain sensing data regarding at least one of a motion pattern, a motion speed, a motion accuracy, and a motion amount of a motion element of a user practicing a performance performed by movement of at least a part of the user's body, and a state of a result produced by the performance; an analyzing section configured to analyze the obtained sensing data and estimate information regarding the user's practice of the performance on the basis of a result of the analysis; and an output section configured to output a result of the estimation to the user.

Integrated Musical Instrument Systems
20220208160 · 2022-06-30 ·

A system suitable for use as a musical instrument system is provided. The system includes at least one sensor. The system also includes at least one control surface configured to interface with the at least one sensor. Further, the system includes at least one controller configured to interface with the at least one sensor. Additionally, the system includes at least one program module configured to interface with the at least one sensor. The system includes an enclosure. The at least one sensor and the at least one control surface are positionable on the enclosure. The system also includes at least one data processor configured to interface with the at least one sensor, the at least one control surface, and the at least one program module, arranged to function as a musical instrument system.

MOTION FEEDBACK DEVICE

A motion feedback device includes a housing, a speaker, and a control module carried by the housing. The control module includes a controller and a motion sensor. The controller is configured with a mapping adapted for the creation of sound in response to any user-produced movement of the housing as detected by the motion sensor. This allows for continuous original sound generation or composition based upon the user-produced movements of the housing.
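The motion-to-sound mapping described above could be sketched as follows. This is an illustrative assumption, not the patent's actual mapping: the choice of inputs (acceleration magnitude and tilt angle), the pitch range, and the function name are all hypothetical.

```python
def motion_to_sound(accel_g: float, tilt_deg: float) -> dict:
    """Map an acceleration magnitude (in g) and a tilt angle (in degrees)
    to a MIDI-style pitch number and a 0..1 amplitude."""
    # Clamp inputs to the ranges this sketch assumes.
    accel_g = max(0.0, min(accel_g, 4.0))
    tilt_deg = max(-90.0, min(tilt_deg, 90.0))
    # Scale acceleration (0..4 g) onto a two-octave pitch range (MIDI 48..72).
    pitch = 48 + round((accel_g / 4.0) * 24)
    # Scale tilt (-90..90 degrees) onto amplitude 0..1.
    amplitude = (tilt_deg + 90.0) / 180.0
    return {"pitch": pitch, "amplitude": amplitude}
```

A real device would run such a mapping continuously on the motion sensor's sample stream, feeding the output to a synthesizer driving the speaker.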

MAGNETIC EARPHONES HOLDER
20230269516 · 2023-08-24 ·

One or more accelerometers embedded in an earbud and/or a set of earphones sense a moving pace of a user. Based on the moving pace of the user, a signal is sent to a remotely connected electronic device. The electronic device can separately increase and decrease a beat or rhythm of the audio from the electronic device based on the pace of the user. In some embodiments, an audio alert is sent to the user to inform the user of pace and whether the user has increased or decreased their pace. Additionally, in some embodiments, a program stored on the electronic device compares the user's current progress and/or speed against past runs and workouts.
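The pace-to-tempo adjustment described above could be sketched as a playback-rate selection that pulls the track's beat toward the runner's step cadence. The function name, the clamping threshold, and the simple ratio rule are illustrative assumptions, not the patent's method.

```python
def playback_rate(track_bpm: float, steps_per_min: float,
                  max_stretch: float = 0.15) -> float:
    """Return a rate multiplier that brings the track's effective tempo
    toward the user's cadence, clamped so the audio is never stretched
    by more than max_stretch (15% by default)."""
    rate = steps_per_min / track_bpm
    return max(1.0 - max_stretch, min(rate, 1.0 + max_stretch))
```

For example, a 120 BPM track played for a runner at 126 steps per minute would be sped up by 5%, while a cadence far outside the track's tempo only shifts the rate to the clamp limit.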

SYSTEM AND METHOD FOR ADAPTING AUDITORY BIOFEEDBACK CUES AND GAIT ANALYSIS USING WIRELESS SIGNALS AND DIGITAL RESPONSES
20220155851 · 2022-05-19 ·

A method for adapting auditory biofeedback cues to adjust a user's gait includes receiving a series of respective first signals from each sensor of an integrated sensor system and converting the series of respective first signals into a series of respective second signals. Each respective second signal within the series of respective second signals can be quantized as an audio biofeedback cue and modified so the timing of each respective second signal is aligned to a pre-selected temporal or musical grid. The user's gait is then analyzed as a function of the audio biofeedback cues and the pre-selected temporal or musical grid can be adapted to adjust entrainment of the user's gait.
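The alignment of biofeedback cues to a pre-selected temporal or musical grid could be sketched as nearest-point quantization, assuming a uniform grid derived from a BPM value and a per-beat subdivision (both parameter names are illustrative, not from the patent):

```python
def quantize_to_grid(event_times: list, bpm: float,
                     subdivision: int = 2) -> list:
    """Snap each event timestamp (in seconds) to the nearest point of a
    temporal grid with `subdivision` slots per beat at the given BPM."""
    step = 60.0 / bpm / subdivision  # grid spacing in seconds
    return [round(t / step) * step for t in event_times]
```

Gait analysis could then compare the raw sensor-derived timestamps against their quantized counterparts; adapting the grid (changing `bpm` or `subdivision`) would adjust entrainment of the user's gait.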

COMPUTER VISION AND MAPPING FOR AUDIO APPLICATIONS
20230267900 · 2023-08-24 ·

Systems, devices, media, and methods are presented for playing audio sounds, such as music, on a portable electronic device using a digital color image of a note matrix on a map. A computer vision engine, in an example implementation, includes a mapping module, a color detection module, and a music playback module. The camera captures a color image of the map, including a marker and a note matrix. Based on the color image, the computer vision engine detects a token color value associated with each field. Each token color value is associated with a sound sample from a specific musical instrument. A global state map is stored in memory, including the token color value and location of each field in the note matrix. The music playback module, for each column, in order, plays the notes associated with one or more of the rows, using the corresponding sound sample, according to the global state map.
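The column-by-column playback step described above could be sketched as follows, assuming the global state map is a dictionary from (row, column) positions to detected token colors and that a separate table links colors to instrument samples (both data layouts are illustrative assumptions):

```python
def column_playback_order(state_map: dict, color_to_sample: dict) -> list:
    """Given a global state map {(row, col): token_color} and a table
    {token_color: sample_name}, return the (col, [samples]) sequence
    played column by column, in order, row by row within each column.
    Colors with no associated sample (e.g. blank fields) are skipped."""
    cols = sorted({c for _, c in state_map})
    sequence = []
    for col in cols:
        rows = sorted(r for r, c in state_map if c == col)
        samples = [color_to_sample[state_map[(row, col)]]
                   for row in rows
                   if state_map[(row, col)] in color_to_sample]
        sequence.append((col, samples))
    return sequence
```

In a full system, each column would occupy one time step, and every sample listed for that column would be triggered simultaneously.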

VIBROTACTILE CONTROL SYSTEMS AND METHODS

Methods and systems are disclosed to facilitate creating the sensation of vibrotactile movement on the body of a user. Vibratory motors are used to generate a haptic language for music or other stimuli that is integrated into wearable technology. The disclosed system in certain embodiments enables the creation of a family of devices that allow people such as those with hearing impairments to experience sounds such as music or other input to the system. For example, a “sound vest” or other wearable array transforms musical input to haptic signals so that users can experience their favorite music in a unique way, and can also recognize auditory or other cues in the user's real or virtual reality environment and convey this information to the user using haptic signals.
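The transformation of musical input into haptic signals for a wearable motor array could be sketched as mapping frequency-band energies onto motor drive levels. The band-averaging scheme and names here are illustrative assumptions, not the disclosed haptic language.

```python
def bands_to_motors(band_energies: list, n_motors: int) -> list:
    """Map per-band audio energies onto n_motors drive levels in 0..1,
    averaging the bands assigned to each motor and normalizing by the
    peak band energy. Assumes n_motors <= len(band_energies)."""
    peak = max(band_energies) or 1.0  # avoid division by zero on silence
    per_motor = len(band_energies) / n_motors
    levels = []
    for m in range(n_motors):
        chunk = band_energies[int(m * per_motor):int((m + 1) * per_motor)]
        levels.append(sum(chunk) / len(chunk) / peak)
    return levels
```

Run per audio frame, such a mapping lets low-frequency content drive one region of the vest and high-frequency content another, so the wearer feels the spectral shape of the music.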

COMPUTER VISION AND MAPPING FOR AUDIO APPLICATIONS
20210366449 · 2021-11-25 ·

Systems, devices, media, and methods are presented for playing audio sounds, such as music, on a portable electronic device using a digital color image of a note matrix on a map. A computer vision engine, in an example implementation, includes a mapping module, a color detection module, and a music playback module. The camera captures a color image of the map, including a marker and a note matrix. Based on the color image, the computer vision engine detects a token color value associated with each field. Each token color value is associated with a sound sample from a specific musical instrument. A global state map is stored in memory, including the token color value and location of each field in the note matrix. The music playback module, for each column, in order, plays the notes associated with one or more of the rows, using the corresponding sound sample, according to the global state map.

Transitions between media content items

A system for playing media content items determines transitions between pairs of media content items by determining desirable locations at which transitions across the pairs of media content items occur. The system uses a plurality of track features of media content items and determines those track features at each of the transition point candidates, such as beat positions, of each media content item. The system determines similarity in the plurality of track features between the transition point candidates of a first media content item and the transition point candidates of a second media content item being played subsequent to the first media content item. The transition points or portions of the first and second media content items are selected from the transition point candidates based on the similarity results.
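The candidate-pair selection described above could be sketched as an exhaustive similarity search over the two tracks' candidate feature vectors. Cosine similarity is used here as one plausible measure; the metric, the brute-force search, and the names are illustrative assumptions rather than the claimed method.

```python
import math

def best_transition(features_a: list, features_b: list) -> tuple:
    """Given per-candidate feature vectors for an outgoing track (A) and
    an incoming track (B), return ((index_a, index_b), similarity) for
    the candidate pair with the highest cosine similarity."""
    def cosine(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        norm_u = math.sqrt(sum(x * x for x in u))
        norm_v = math.sqrt(sum(y * y for y in v))
        return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

    best_pair, best_sim = None, -2.0  # cosine similarity lies in [-1, 1]
    for ia, fa in enumerate(features_a):
        for ib, fb in enumerate(features_b):
            sim = cosine(fa, fb)
            if sim > best_sim:
                best_pair, best_sim = (ia, ib), sim
    return best_pair, best_sim
```

The winning pair gives the beat position in the first track at which to start the crossfade and the beat position in the second track at which to enter it.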