G10L21/16

SPATIAL AUDIO AND HAPTICS

An example non-transitory computer-readable storage medium comprises instructions that, when executed by a processing resource of a computing device, cause the processing resource to generate haptics metadata using audio-haptics classification based at least in part on spatial audio associated with a digital environment. The instructions further cause the processing resource to encode the spatial audio with the haptics metadata to generate a rendering package.
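The flow described in the abstract — classify the spatial audio for haptic relevance, then encode audio plus haptics metadata into one rendering package — can be sketched as follows. This is a loose illustration, not the patented method: the energy-threshold "classifier," the channel names, and the JSON package layout are all invented here.

```python
# Loose sketch of the described flow: classify spatial-audio channels into
# haptic-relevant categories, then bundle audio and haptics metadata into one
# "rendering package". Classifier rule and package layout are assumptions.
import json

def classify_channel(samples):
    """Toy audio-haptics classifier: strong signal energy -> 'rumble'."""
    energy = sum(s * s for s in samples) / len(samples)
    return "rumble" if energy > 0.1 else "none"

def encode_rendering_package(spatial_audio):
    """spatial_audio maps channel name -> list of samples."""
    metadata = {ch: classify_channel(s) for ch, s in spatial_audio.items()}
    return json.dumps({"audio": spatial_audio, "haptics": metadata})

pkg = encode_rendering_package({
    "front_left": [0.5, -0.5, 0.5, -0.5],   # energy 0.25 -> 'rumble'
    "rear_right": [0.1, 0.0, -0.1, 0.0],    # energy 0.005 -> 'none'
})
```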

Method and device for audio signal processing, and storage medium

A method and device for audio signal processing are provided. The method includes the steps of: obtaining an inputted audio signal; parsing the audio signal to obtain at least one audio feature; determining at least one vibration feature corresponding to the at least one audio feature; and generating a vibration signal corresponding to the audio signal according to the at least one vibration feature. The inputted audio signal is automatically converted into a vibration signal via the vibration feature corresponding to its audio feature, which avoids errors caused by manual operation and gives the vibration signal high versatility.
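The parse-map-generate pipeline above can be sketched minimally as follows. The specific feature (per-frame RMS envelope) and the linear intensity mapping are illustrative assumptions, not the method claimed in the abstract.

```python
# Minimal sketch of the audio-to-vibration pipeline: parse audio features,
# map them to vibration features, emit a vibration signal. Feature choices
# (RMS envelope, linear intensity scaling) are illustrative assumptions.
import math

def parse_audio_features(samples, frame_size=4):
    """Split the signal into frames and compute one RMS value per frame."""
    features = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        features.append(math.sqrt(sum(s * s for s in frame) / frame_size))
    return features

def map_to_vibration(features, max_intensity=255):
    """Map each audio feature to a vibration intensity in [0, max_intensity]."""
    peak = max(features) or 1.0
    return [round(f / peak * max_intensity) for f in features]

signal = [0.0, 0.1, -0.1, 0.0, 0.5, -0.5, 0.5, -0.5]
vib = map_to_vibration(parse_audio_features(signal))  # one intensity per frame
```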

Speech Processing Method and System in A Cochlear Implant

The invention discloses a speech processing method and system in a cochlear implant. The method includes: obtaining a sound signal and converting it into a digital signal; decomposing the digital signal using a mode decomposition method to obtain a plurality of intrinsic mode functions, and converting the intrinsic mode functions into instantaneous frequencies and instantaneous amplitudes or instantaneous energy intensities; sorting the instantaneous frequencies into the preset frequency bands corresponding to the electrodes of the cochlear implant; and selecting the N most energetic components from the electrodes' corresponding frequency bands and generating corresponding electrode stimulation signals from the selected components. The present invention analyzes the sound and composes the final electrode signals entirely in the time domain based on the Hilbert-Huang transform; it is not limited by the uncertainty principle, and no harmonic noise is generated.
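The band-assignment and N-most-energetic selection steps can be sketched as below, assuming the Hilbert-Huang analysis has already reduced each intrinsic mode function to (frequency, energy) pairs; the EMD/Hilbert stage itself is omitted, and the band edges are invented for illustration.

```python
# Sketch of the band-assignment and "select N most energetic" steps, assuming
# components are already reduced to (frequency_hz, energy) pairs by the
# (omitted) Hilbert-Huang analysis. Band edges here are illustrative.

def assign_to_electrodes(components, band_edges):
    """Place each (freq, energy) component into the electrode band containing freq."""
    bands = {i: [] for i in range(len(band_edges) - 1)}
    for freq, energy in components:
        for i in range(len(band_edges) - 1):
            if band_edges[i] <= freq < band_edges[i + 1]:
                bands[i].append((freq, energy))
                break
    return bands

def select_stimulation(bands, n):
    """Keep the n most energetic components overall for electrode stimulation."""
    pooled = [(i, f, e) for i, comps in bands.items() for f, e in comps]
    pooled.sort(key=lambda t: t[2], reverse=True)
    return pooled[:n]

edges = [100, 500, 1000, 4000]                  # three hypothetical electrode bands
comps = [(200, 0.9), (700, 0.4), (1500, 0.7), (300, 0.2)]
stim = select_stimulation(assign_to_electrodes(comps, edges), n=2)
```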

METHOD AND SYSTEM FOR PROVIDING ADJUNCT SENSORY INFORMATION TO A USER

A method for providing information to a user, the method including: receiving an input signal from a sensing device associated with a sensory modality of the user; generating a preprocessed signal upon preprocessing the input signal with a set of preprocessing operations; extracting a set of features from the preprocessed signal; processing the set of features with a neural network system; mapping outputs of the neural network system to a device domain associated with a device including a distribution of haptic actuators in proximity to the user; and at the distribution of haptic actuators, cooperatively producing a haptic output representative of at least a portion of the input signal, thereby providing information to the user.
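The final mapping step — taking model outputs into the "device domain" of distributed haptic actuators — can be illustrated with a toy example. Everything concrete here is an assumption: the softmax normalization, the four-actuator device, and the fixed category-to-actuator layout are stand-ins for the learned mapping the abstract describes.

```python
# Toy sketch of the map-to-device-domain step: per-category model scores are
# normalized and spread over actuators via a fixed category->actuator layout.
# The 4-actuator device and the layout weights are invented for illustration.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def map_to_actuators(class_probs, layout, n_actuators=4):
    """layout[k] lists (actuator_index, weight) pairs for category k."""
    drive = [0.0] * n_actuators             # hypothetical 4-actuator wristband
    for k, p in enumerate(class_probs):
        for idx, w in layout[k]:
            drive[idx] += p * w
    return [min(1.0, d) for d in drive]     # clamp to the actuator's range

layout = {0: [(0, 1.0)], 1: [(1, 0.5), (2, 0.5)], 2: [(3, 1.0)]}
drive = map_to_actuators(softmax([2.0, 0.0, 0.0]), layout)
```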

Information processing apparatus and information processing method for generating and processing a file including speech waveform data and vibration waveform data

Provided is an information processing apparatus including a file generation unit that generates a file including speech waveform data and vibration waveform data. The file generation unit cuts out waveform data in a to-be-synthesized band from first speech data, synthesizes waveform data extracted from a synthesizing band of vibration data with the to-be-synthesized band to generate second speech data, and encodes the second speech data to generate the file.

LISTENING DEVICES FOR OBTAINING METRICS FROM AMBIENT NOISE

A device may receive audio data based on captured sounds associated with a structure. The device may obtain a model associated with the structure. The model may have been trained to receive the audio data as input, determine a score that identifies a likelihood that a sound is present in the audio data, and identify the sound based on the score. The device may determine at least one parameter associated with the sound, generate a metric based on the at least one parameter, and perform an action based on the metric.
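The score-to-metric-to-action flow can be sketched as below. The trained model is stubbed out as precomputed per-frame score dictionaries, and the threshold, sound names, metric (detection count over a window), and action rule are all invented for illustration.

```python
# Illustrative sketch of the score -> metric -> action flow. The "model" is
# stubbed as precomputed per-frame score dicts; the threshold, sound names,
# and action rule are assumptions, not the claimed system.
SCORE_THRESHOLD = 0.6  # hypothetical detection cutoff

def detect(model_scores):
    """Return the sounds whose likelihood score clears the threshold."""
    return [name for name, score in model_scores.items() if score >= SCORE_THRESHOLD]

def metric_from_frames(frames):
    """Count detections of each sound across frames; the count is the metric."""
    counts = {}
    for scores in frames:
        for name in detect(scores):
            counts[name] = counts.get(name, 0) + 1
    return counts

frames = [
    {"dripping_faucet": 0.9, "hvac_hum": 0.3},
    {"dripping_faucet": 0.7, "hvac_hum": 0.8},
    {"dripping_faucet": 0.2, "hvac_hum": 0.9},
]
metrics = metric_from_frames(frames)
action = "notify_maintenance" if metrics.get("dripping_faucet", 0) >= 2 else "none"
```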