Patent classifications
G10L21/0356
Systems and Methods for Assisted Translation and Lip Matching for Voice Dubbing
Systems and methods for generating candidate translations for use in creating synthetic or human-acted voice dubbings, aiding human translators in generating translations that match the corresponding video, automatically grading how well a candidate translation matches the corresponding video, suggesting modifications to the speed and/or timing of the translated text to improve the grading of a candidate translation, and suggesting modifications to the voice dubbing and/or video to improve the grading of a candidate translation. In that regard, the present technology may be used to fully automate the process of generating lip-matched translations and associated voice dubbings, or as an aid for human-in-the-loop processes that may reduce or eliminate the time and effort required from translators, adapters, voice actors, and/or audio editors to generate voice dubbings.
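One way to automatically grade a candidate translation against the corresponding video, as described above, is to compare its estimated speaking time with the duration of the original line. A minimal sketch, assuming a crude syllables-per-second speaking-rate model; every name, rate, and the scoring formula here is illustrative, not taken from the patent:

```python
import re

def estimated_syllables(text):
    """Crude syllable estimate: count vowel groups per word (English heuristic)."""
    return sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
               for w in re.findall(r"[a-zA-Z']+", text))

def grade_candidate(candidate, source_duration_s, syllables_per_second=4.0):
    """Score in [0, 1]: 1.0 when the candidate's estimated speaking time
    matches the on-screen dialogue duration exactly, decaying with mismatch."""
    est = estimated_syllables(candidate) / syllables_per_second
    mismatch = abs(est - source_duration_s) / max(source_duration_s, 1e-9)
    return max(0.0, 1.0 - mismatch)

# Rank two hypothetical candidate translations for a 2-second line.
candidates = ["I will see you soon", "I will be seeing you again very soon indeed"]
ranked = sorted(candidates, key=lambda c: grade_candidate(c, 2.0), reverse=True)
```

A fuller grader would work from phoneme durations and viseme matching rather than a syllable count, but the ranking idea is the same.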
Method and system for speech enhancement
A method and a system for speech enhancement including a time synchronization unit configured to synchronize microphone signals sent from at least two microphones; a source separation unit configured to separate the synchronized microphone signals and output a separated speech signal, which corresponds to a speech source; and a noise reduction unit including a feature extraction unit configured to extract a speech feature of the separated speech signal and a neural network configured to receive the speech feature and output a clean speech feature.
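The time synchronization unit described above can be illustrated with integer-lag cross-correlation between two microphone channels. A toy sketch under the assumption of small, sample-aligned delays; the function names are hypothetical and real systems would also handle fractional delays:

```python
def best_lag(ref, sig, max_lag=8):
    """Estimate the delay (in samples) of `sig` relative to `ref` by
    maximizing cross-correlation over integer lags."""
    def corr(lag):
        return sum(ref[i] * sig[i + lag]
                   for i in range(len(ref)) if 0 <= i + lag < len(sig))
    return max(range(-max_lag, max_lag + 1), key=corr)

def synchronize(ref, sig, max_lag=8):
    """Shift `sig` so it lines up with `ref`, zero-padding the gap
    (the synchronization step, restricted to integer delays)."""
    lag = best_lag(ref, sig, max_lag)
    if lag >= 0:
        return sig[lag:] + [0.0] * lag
    return [0.0] * -lag + sig[:len(sig) + lag]

# Example: an impulse arriving two samples later at the second microphone.
ref = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
sig = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
aligned = synchronize(ref, sig)
```

The downstream source separation and neural noise reduction stages would then operate on the aligned multichannel signal.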
Audiovisual content rendering with display animation suggestive of geolocation at which content was previously rendered
Techniques have been developed to facilitate (1) the capture and pitch correction of vocal performances on handheld or other portable computing devices and (2) the mixing of such pitch-corrected vocal performances with backing tracks for audible rendering on targets that include such portable computing devices as well as desktops, workstations, gaming stations, even telephony targets. Implementations of the described techniques employ signal processing techniques and allocations of system functionality that are suitable given the generally limited capabilities of such handheld or portable computing devices and that facilitate efficient encoding and communication of the pitch-corrected vocal performances (or precursors or derivatives thereof) via wireless and/or wired bandwidth-limited networks for rendering on portable computing devices or other targets.
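The pitch-correction step can be sketched by snapping a detected fundamental frequency to the nearest equal-tempered semitone, which yields the shift ratio a corrector would apply per frame. This is the standard quantization idea, not the specific patented method:

```python
import math

A4 = 440.0  # reference tuning pitch in Hz

def nearest_note_hz(f0):
    """Snap a detected fundamental frequency to the nearest
    equal-tempered semitone relative to A4 = 440 Hz."""
    semitones = round(12 * math.log2(f0 / A4))
    return A4 * 2 ** (semitones / 12)

def correction_ratio(f0):
    """Frequency ratio a naive pitch corrector would apply to this frame."""
    return nearest_note_hz(f0) / f0
```

A production corrector would additionally track pitch over time and limit the correction speed to avoid robotic artifacts, which matters on the resource-constrained devices the abstract targets.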
AUTONOMOUS MOBILE BODY, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING APPARATUS
The present technology relates to an autonomous mobile body, an information processing method, a program, and an information processing apparatus capable of improving the user experience through the output sound of the autonomous mobile body.
The autonomous mobile body includes: a recognition unit that recognizes a motion of the device itself; and a sound control unit that controls an output sound emitted from the device. The sound control unit controls output of a plurality of operation sounds corresponding to a plurality of motions of the device, and changes the operation sound in a case where the plurality of motions has been recognized. The present technology can be applied to, for example, a robot.
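The described sound control unit, which outputs one operation sound per recognized motion and changes the sound when several motions are recognized together, might look like the following toy controller. The sound names and the combined-sound table are invented for illustration:

```python
class SoundController:
    """Toy sketch of the sound control unit: one operation sound per
    motion, with the sound changed when several motions are recognized
    at once (here via a hypothetical combined-sound lookup, falling back
    to a deterministic single choice)."""

    SOUNDS = {"walk": "step.wav", "turn": "servo.wav", "lift": "whir.wav"}
    COMBINED = {frozenset(("walk", "turn")): "step_turn.wav"}

    def operation_sound(self, motions):
        recognized = [m for m in motions if m in self.SOUNDS]
        if not recognized:
            return None
        if len(recognized) > 1:
            combo = self.COMBINED.get(frozenset(recognized))
            if combo:
                return combo       # a dedicated blended sound, if defined
            recognized.sort()      # deterministic fallback choice
        return self.SOUNDS[recognized[0]]
```

A real robot would crossfade between sounds rather than switch abruptly, but the selection logic above captures the abstract's behavior.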
SYSTEM AND METHOD FOR AUGMENTING VEHICLE PHONE AUDIO WITH BACKGROUND SOUNDS
A vehicle infotainment system that adds background sounds to an outgoing call on a mobile device. The infotainment system comprises: i) a database of selectable augmenting audio signals; and ii) audio processing circuitry configured to receive at a first input an uplink signal from the infotainment system and receive at a second input a selected augmenting audio signal. The audio processing circuitry adapts a spectrum of the selected augmenting audio signal to prevent it from masking the uplink signal and combines the adapted augmenting audio signal and the uplink signal to produce an augmented uplink signal at an output.
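The spectral adaptation step can be illustrated with per-band gain limiting: in each analysis band, the augmenting signal is attenuated until it sits a fixed headroom below the uplink speech energy, so the background sound cannot mask the call. A sketch with hypothetical band energies; a real system would operate on STFT bands and likely use a psychoacoustic masking model rather than a fixed headroom:

```python
def adapt_band_gains(speech_band_energy, augment_band_energy, headroom_db=10.0):
    """Per-band energy gains for the augmenting signal so that each band
    stays at least `headroom_db` below the uplink speech energy."""
    gains = []
    for s, a in zip(speech_band_energy, augment_band_energy):
        if a <= 0.0:
            gains.append(1.0)  # nothing to attenuate in this band
            continue
        ceiling = s * 10 ** (-headroom_db / 10)  # max allowed augment energy
        gains.append(min(1.0, ceiling / a))
    return gains

# Hypothetical energies in two bands: strong speech, then weak speech.
gains = adapt_band_gains([1.0, 0.01], [0.5, 0.5])
```

With 10 dB of headroom, the band with weak speech gets a much smaller gain, keeping the background sound subordinate to the voice everywhere.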
Systems and methods for processing audio based on changes in active speaker
Systems and methods for processing audio signals are disclosed. In one implementation, a system may comprise a wearable camera configured to capture images from an environment of a user; a microphone; and a processor. The processor may be configured to receive an audio signal representative of sounds captured by the microphone during a time period and to receive the images captured by the wearable camera. The processor may process the audio signal in a first mode based on audio data accumulated in a buffer prior to the time period; detect a change in the active speaker from a first individual to a second individual; and cease processing in the first mode and process the audio signal in a second mode that differs from the first mode.
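The two-mode flow, processing with a pre-roll buffer in a first mode and then switching modes when the active speaker changes, can be sketched as a small state machine. Speaker labels stand in for the camera-based detection, and the per-mode processing is a placeholder:

```python
class TwoModeProcessor:
    """Sketch of the described flow: a first processing mode seeded by
    audio buffered before the time period, ceased in favor of a second
    mode once the active speaker changes."""

    def __init__(self, buffer):
        self.buffer = list(buffer)   # audio accumulated before the period
        self.mode = 1
        self.current_speaker = None

    def process(self, chunk, speaker):
        if self.current_speaker is None:
            self.current_speaker = speaker
        elif speaker != self.current_speaker and self.mode == 1:
            self.mode = 2            # cease first-mode processing on change
            self.current_speaker = speaker
        if self.mode == 1:
            # first mode: e.g. denoise using statistics from the buffer
            return ("mode1", len(self.buffer) + len(chunk))
        # second mode: e.g. re-adapt to the new speaker, ignore old buffer
        return ("mode2", len(chunk))
```

In the actual system the speaker change would be detected from the wearable camera's images (e.g. lip movement), not passed in as a label.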