Patent classifications
G10L25/90
Dynamic creation and insertion of content
In one aspect, during a presentation of presentation material, viewers of the presentation material can be monitored. Based on the monitoring, new content can be determined for insertion into the presentation material. The new content can be automatically inserted into the presentation material in real time. In another aspect, during the presentation, a presenter of the presentation material can be monitored. The presenter's speech can be intercepted and analyzed to detect a level of confidence. Based on the detected level of confidence, the presenter's speech can be adjusted, and the adjusted speech can be played back automatically, for example, in lieu of the presenter's intercepted original speech.
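A minimal sketch of the first aspect (monitor viewers, then insert content in real time) follows. The abstract does not specify how monitoring or content selection works, so the engagement signal, the threshold rule, and all names (Presentation, engagement_score, maybe_insert_content) are hypothetical illustrations.

```python
# Sketch: monitor viewer engagement during a presentation and, when it
# drops, insert new content right after the current slide.
from dataclasses import dataclass
from typing import List

@dataclass
class Presentation:
    slides: List[str]
    position: int = 0  # index of the slide currently shown

def engagement_score(viewer_signals: List[float]) -> float:
    """Aggregate per-viewer attention signals (0..1) into one estimate."""
    return sum(viewer_signals) / max(len(viewer_signals), 1)

def maybe_insert_content(deck: Presentation,
                         viewer_signals: List[float],
                         threshold: float = 0.5) -> None:
    """If engagement falls below the threshold, insert new content
    into the deck in real time (here, after the current slide)."""
    if engagement_score(viewer_signals) < threshold:
        new_slide = "Recap / interactive poll"  # hypothetical selection
        deck.slides.insert(deck.position + 1, new_slide)

deck = Presentation(slides=["Intro", "Results", "Summary"], position=1)
maybe_insert_content(deck, viewer_signals=[0.3, 0.4, 0.2])
print(deck.slides)  # ['Intro', 'Results', 'Recap / interactive poll', 'Summary']
```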
MITIGATING VOICE FREQUENCY LOSS
Computer-implemented methods, computer program products, and computer systems for mitigating frequency loss may include one or more processors configured for receiving first audio data corresponding to unobstructed user utterances, receiving second audio data corresponding to first obstructed user utterances, generating a frequency loss (FL) model representing the frequency loss between the first audio data and the second audio data, receiving third audio data corresponding to one or more second obstructed user utterances, processing the third audio data using the FL model to generate fourth audio data corresponding to a frequency-loss-mitigated version of the second obstructed user utterances, and transmitting the fourth audio data to a recipient computing device. Both the first obstructed user utterances and the one or more second obstructed user utterances are obstructed by a facemask. The FL model may be executed as an audio plugin in a web conferencing program.
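The abstract leaves the FL model's form open. One simple way to realize it, sketched below under that assumption, is a per-frequency gain curve learned from paired unobstructed/masked recordings and then applied to new masked speech (the third audio data) frame by frame; a production plugin would likely use a richer model.

```python
# Sketch of an FL model as a static equalization curve (an assumption,
# not the patent's stated implementation).
import numpy as np

def avg_spectrum(audio: np.ndarray, n_fft: int = 1024) -> np.ndarray:
    """Average magnitude spectrum over non-overlapping frames."""
    frames = audio[: len(audio) // n_fft * n_fft].reshape(-1, n_fft)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0) + 1e-8

def fit_fl_model(unobstructed: np.ndarray,
                 obstructed: np.ndarray) -> np.ndarray:
    """'FL model': per-frequency gains that restore what the mask lost."""
    return avg_spectrum(unobstructed) / avg_spectrum(obstructed)

def mitigate(masked_audio: np.ndarray, gains: np.ndarray,
             n_fft: int = 1024) -> np.ndarray:
    """Apply the learned gains (third audio data -> fourth audio data)."""
    frames = masked_audio[: len(masked_audio) // n_fft * n_fft].reshape(-1, n_fft)
    spectra = np.fft.rfft(frames, axis=1) * gains
    return np.fft.irfft(spectra, n=n_fft, axis=1).ravel()
```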
Hearing system containing a hearing instrument and a method for operating the hearing instrument
A hearing system contains a hearing instrument, and the hearing instrument is configured to support the hearing of a hearing-impaired user. The hearing instrument is operated via an operating method. The method includes capturing a sound signal from an environment of the hearing instrument, processing the captured sound signal to at least partially compensate for the user's hearing impairment, and outputting the processed sound signal to the user. The captured sound signal is analyzed to recognize speech intervals, in which the captured sound signal contains speech. During recognized speech intervals, at least one time derivative of an amplitude and/or a pitch of the captured sound signal is determined. The amplitude of the processed sound signal is temporarily increased if the at least one derivative fulfills a predefined criterion.
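A minimal sketch of the claimed control logic follows, using the amplitude derivative only. Frame-based processing, the derivative threshold, and the boost factor are illustrative assumptions; a real instrument would also apply its usual hearing-loss compensation.

```python
# Sketch: during recognized speech intervals, boost output amplitude
# temporarily when the amplitude's time derivative exceeds a threshold.
import numpy as np

def frame_rms(frame: np.ndarray) -> float:
    return float(np.sqrt(np.mean(frame ** 2)))

def process(frames: list, is_speech: list,
            deriv_threshold: float = 0.05, boost: float = 2.0) -> list:
    out, prev_rms = [], 0.0
    for frame, speech in zip(frames, is_speech):
        rms = frame_rms(frame)
        gain = 1.0  # base gain; compensation processing would go here
        # Predefined criterion: amplitude rising quickly within speech.
        if speech and (rms - prev_rms) > deriv_threshold:
            gain *= boost  # temporary amplitude increase
        out.append(frame * gain)
        prev_rms = rms
    return out
```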
BEHAVIOR DETECTION
A system includes a microphone and a computing device including a processor and a memory. The memory stores instructions executable by the processor to identify a word sequence in audio input received from the microphone, to determine a behavior pattern from the word sequence, and to report the behavior pattern to a remote server at a specified time.
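The flow described (word sequence, then behavior pattern, then a report at a specified time) can be sketched as below. The keyword rules, the pattern labels, and the decision to print rather than actually contact a remote server are all illustrative assumptions.

```python
# Sketch: detect a behavior pattern from a word sequence and report it
# at a specified time.
import json, time

PATTERNS = {  # hypothetical rules mapping word sets to patterns
    ("turn", "volume"): "frequent volume changes",
    ("open", "window"): "ventilation preference",
}

def detect_pattern(words: list) -> str | None:
    for keys, label in PATTERNS.items():
        if all(k in words for k in keys):
            return label
    return None

def report_at(pattern: str, report_epoch: float) -> None:
    """Hold the detected pattern until the specified time, then send it.
    A real system would transmit to the remote server; printed here."""
    time.sleep(max(0.0, report_epoch - time.time()))
    print(json.dumps({"pattern": pattern, "sent_at": time.time()}))

words = "please turn up the volume".split()
if (pattern := detect_pattern(words)) is not None:
    report_at(pattern, report_epoch=time.time() + 1.0)
```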
ELECTRONIC DEVICE, METHOD AND COMPUTER PROGRAM
An electronic device having circuitry configured to perform source separation on an audio signal to obtain a separated source and a residual signal, to perform feature extraction on the separated source to obtain one or more processing parameters, and to perform audio processing on a captured audio signal based on the one or more processing parameters to obtain an adjusted separated source.
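A minimal sketch of that pipeline follows. The separation method is not specified in the abstract, so a crude low-band split stands in for it, and loudness matching is an illustrative choice of processing parameter.

```python
# Sketch: separate a source, extract a parameter from it, then use the
# parameter to adjust a captured signal.
import numpy as np

def separate(audio: np.ndarray):
    """Stand-in separation: low band as 'source', remainder as residual."""
    spec = np.fft.rfft(audio)
    mask = np.zeros_like(spec)
    mask[: len(spec) // 4] = 1.0
    source = np.fft.irfft(spec * mask, n=len(audio))
    return source, audio - source

def extract_parameters(source: np.ndarray) -> dict:
    """Feature extraction: here, a target RMS level for the source."""
    return {"target_rms": float(np.sqrt(np.mean(source ** 2)))}

def adjust(captured: np.ndarray, params: dict) -> np.ndarray:
    """Audio processing driven by the extracted parameters: scale the
    captured signal toward the target level (adjusted separated source)."""
    rms = float(np.sqrt(np.mean(captured ** 2))) + 1e-12
    return captured * (params["target_rms"] / rms)
```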
UTTERANCE EVALUATION APPARATUS, UTTERANCE EVALUATION METHOD, AND PROGRAM
A stable evaluation result is obtained from a spoken voice for any sentence. A speech evaluation device (1) outputs a score evaluating the speech of an input voice signal spoken by a speaker in a first group. A feature extraction unit (11) extracts an acoustic feature from the input voice signal. A conversion unit (12) converts the acoustic feature of the input voice signal into the acoustic feature that would result if a speaker in a second group spoke the same text as that of the input voice signal. An evaluation unit (13) calculates a score that indicates a higher evaluation as the distance between the acoustic feature before the conversion and the acoustic feature after the conversion becomes shorter.
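The scoring step can be sketched as below. The conversion unit would in practice be a trained model; a placeholder transformation stands in for it here, and exp(-distance) is one assumed mapping from distance to a score in (0, 1].

```python
# Sketch: score speech by the distance between acoustic features before
# and after conversion to the second group's style.
import numpy as np

def convert(features: np.ndarray) -> np.ndarray:
    """Stand-in for conversion unit (12): map first-group features to
    the features a second-group speaker would produce for the same text."""
    return features * 0.9 + 0.1  # placeholder transformation

def evaluate(features: np.ndarray) -> float:
    """Evaluation unit (13): shorter distance -> higher score."""
    distance = float(np.linalg.norm(features - convert(features)))
    return float(np.exp(-distance))

score = evaluate(np.array([1.2, 0.4, 0.8]))
print(f"{score:.3f}")
```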
INFORMATION PROCESSING APPARATUS AND COMMAND PROCESSING METHOD
An acoustic feature detection unit (31) detects acoustic features of a voice that is input discretely and separately from a command instructing movement of an operation target. A movement control unit (32) controls the movement of the operation target, as instructed by the command, on the basis of the acoustic features detected by the acoustic feature detection unit (31).
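One way to read this split of roles is sketched below: the command selects the motion, while an acoustic feature of the separately input voice modulates how it is executed. Using loudness as the feature and mapping it to speed are illustrative assumptions.

```python
# Sketch: direction from the command, speed from the voice's loudness.
import numpy as np

def detect_acoustic_features(voice: np.ndarray) -> dict:
    """Acoustic feature detection unit (31): estimate loudness (RMS)."""
    return {"rms": float(np.sqrt(np.mean(voice ** 2)))}

def move(command: str, features: dict):
    """Movement control unit (32): motion per the command, controlled
    on the basis of the detected acoustic features."""
    speed = min(features["rms"] * 10.0, 1.0)  # louder voice -> faster
    dx = {"left": -speed, "right": speed}.get(command, 0.0)
    return dx, 0.0

voice = 0.2 * np.random.randn(16000)  # one second of 'voice' at 16 kHz
print(move("left", detect_acoustic_features(voice)))
```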