G10L21/06

Wearable vibrotactile speech aid

A method for training vibrotactile speech perception in the absence of auditory speech includes selecting a first word, generating a first control signal configured to cause at least one vibrotactile transducer to vibrate against a person's body with a first vibration pattern based on the first word, sampling a second word spoken by the person, generating a second control signal configured to cause at least one vibrotactile transducer to vibrate against the person's body with a second vibration pattern based on the sampled second word, and presenting a comparison between the first word and the second word to the person. An array of vibrotactile transducers can be in contact with the person's body. A method for improving auditory and/or visual speech perception in adverse listening conditions or for hearing-impaired individuals can also include sampling a speech signal, extracting a speech envelope, and generating a control signal configured to cause a vibrotactile transducer to vibrate against a person's body with an intensity that varies over time based on the speech envelope.
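
The envelope-to-intensity mapping in the last sentence can be sketched as follows. This is a minimal illustration, not the patent's implementation: the rectify-and-low-pass method, the 16 Hz cutoff, and all function names are assumptions chosen for the sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def speech_envelope(signal, fs, cutoff_hz=16.0):
    """Extract a slowly varying amplitude envelope by full-wave
    rectification followed by low-pass filtering (illustrative choice)."""
    rectified = np.abs(signal)
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)

def envelope_to_drive(envelope, max_amplitude=1.0):
    """Map the envelope to a normalized transducer drive level."""
    peak = np.max(envelope)
    if peak <= 0:
        return np.zeros_like(envelope)
    return max_amplitude * envelope / peak

# Example: a 200 ms amplitude-modulated tone at an 8 kHz sampling rate
fs = 8000
t = np.arange(0, 0.2, 1 / fs)
speech_like = np.sin(2 * np.pi * 4 * t) ** 2 * np.sin(2 * np.pi * 500 * t)
drive = envelope_to_drive(speech_envelope(speech_like, fs))
```

The drive signal rises and falls with the syllable-rate modulation of the input rather than with its audio-rate carrier, which is the property the vibrotactile transducer exploits.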

Wearable communication enhancement device
09848260 · 2017-12-19

Embodiments disclosed herein may include a wearable apparatus including a frame having a memory and processor associated therewith. The apparatus may include a camera associated with the frame and in communication with the processor, the camera configured to track an eye of a wearer. The apparatus may also include at least one microphone associated with the frame. The at least one microphone may be configured to receive a directional instruction from the processor. The directional instruction may be based upon an adaptive beamforming analysis performed in response to an eye movement detected by the camera. The apparatus may also include a speaker associated with the frame configured to provide an audio signal received at the at least one microphone to the wearer.
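
A delay-and-sum beamformer is one simple form of the beamforming the abstract mentions. The sketch below steers a two-microphone array toward an angle (standing in for the gaze direction); the names, the integer-sample delay, and the two-microphone geometry are all illustrative assumptions.

```python
import numpy as np

def delay_and_sum(left, right, fs, spacing_m, steer_angle_rad, c=343.0):
    """Align a two-microphone array toward steer_angle_rad (0 = broadside)
    by applying an integer-sample delay to one channel, then averaging."""
    delay = int(round(spacing_m * np.sin(steer_angle_rad) / c * fs))
    if delay >= 0:
        right = np.roll(right, delay)   # crude integer-sample alignment
    else:
        left = np.roll(left, -delay)
    return 0.5 * (left + right)

# A source whose wavefront reaches the right microphone 3 samples early:
fs, spacing = 16000, 0.2
t = np.arange(1000) / fs
sig = np.sin(2 * np.pi * 440 * t)
left_mic = sig
right_mic = np.roll(sig, -3)
angle = np.arcsin(3 * 343.0 / (spacing * fs))  # angle giving a 3-sample delay
steered = delay_and_sum(left_mic, right_mic, fs, spacing, angle)
```

When the steering angle matches the source direction, the channels add coherently and the output reproduces the source; off-axis sources are attenuated, which is the behavior the directional instruction would exploit.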

Method and apparatus for processing speech signal

An apparatus for processing a speech signal is provided. The apparatus includes a communicator comprising communication circuitry configured to transmit and receive data, an actuator comprising actuation circuitry configured to generate vibration and to output a signal, a formant enhancement filter configured to increase a formant of the speech signal, and a controller comprising processing circuitry configured to control the speech signal to be received through the communicator, to estimate at least one formant frequency from the speech signal based on linear predictive coding (LPC), to estimate a bandwidth of the at least one formant frequency, to determine whether the speech signal is a voiced sound or a voiceless sound, to configure the formant enhancement filter based on the at least one formant frequency, the bandwidth of the at least one formant frequency, characteristics of the determined voiced sound or voiceless sound, and signal delivery characteristics of a human body, to apply the formant enhancement filter to the speech signal, and to control the speech signal to which the formant enhancement filter is applied to be output using the actuator through the human body.
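The LPC-based formant frequency and bandwidth estimation described above can be sketched roughly as follows. The autocorrelation method with the Levinson-Durbin recursion is one standard way to obtain the LPC coefficients; the function names, the model order, and the synthetic test resonance are illustrative assumptions, not details from the patent.

```python
import numpy as np
from scipy.signal import lfilter

def lpc_coefficients(frame, order):
    """LPC via the autocorrelation method (Levinson-Durbin recursion)."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        prev = a.copy()
        acc = r[i] + np.dot(prev[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i + 1] = prev[1:i + 1] + k * prev[i - 1::-1]
        err *= (1.0 - k * k)
    return a

def estimate_formants(frame, fs, order=8):
    """Return formant frequencies (Hz) and bandwidths (Hz), low to high."""
    a = lpc_coefficients(frame * np.hamming(len(frame)), order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 1e-3]       # one of each conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    bws = -np.log(np.abs(roots)) * fs / np.pi  # pole radius -> bandwidth
    idx = np.argsort(freqs)
    return freqs[idx], bws[idx]

# Example: recover a synthetic resonance placed at 700 Hz
fs = 8000
f0, bw = 700.0, 80.0
pole_r = np.exp(-np.pi * bw / fs)
theta = 2 * np.pi * f0 / fs
impulse = np.zeros(400)
impulse[0] = 1.0
frame = lfilter([1.0], [1.0, -2 * pole_r * np.cos(theta), pole_r ** 2], impulse)
freqs, bws = estimate_formants(frame, fs, order=4)
```

The estimated frequencies and bandwidths are exactly the quantities the controller would feed into the formant enhancement filter before actuation.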

TASK FLOW IDENTIFICATION BASED ON USER INTENT

The intelligent automated assistant system engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions. The system can be implemented using any of a number of different platforms, such as the web, email, smartphone, and the like, or any combination thereof. In one embodiment, the system is based on sets of interrelated domains and tasks, and employs additional functionality powered by external services with which the system can interact.
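
One minimal way to picture "interrelated domains and tasks" backed by external services is a registry that maps an identified intent to a handler. Everything below (the intent names, handlers, and return strings) is an illustrative sketch, not the patent's design.

```python
from typing import Callable, Dict

# Hypothetical external-service handlers, keyed by a domain.task intent name
def book_restaurant(params: dict) -> str:
    return f"Reserved a table for {params.get('party_size', 2)}"

def get_weather(params: dict) -> str:
    return f"Forecast for {params.get('city', 'here')}: sunny"

TASK_FLOWS: Dict[str, Callable[[dict], str]] = {
    "restaurant.book": book_restaurant,
    "weather.lookup": get_weather,
}

def dispatch(intent: str, params: dict) -> str:
    """Route an identified intent to its task flow, if one is registered."""
    handler = TASK_FLOWS.get(intent)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(params)
```

Because the registry is data, new domains can be added without touching the dispatch logic, which is one way such a system could stay extensible across platforms.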

Providing content on multiple devices

Techniques for receiving a voice command from a user and, in response, providing audible content to the user via a first device and providing visual content for the user via a second device. In some instances, the first device includes a microphone for generating audio signals that include user speech, as well as a speaker for outputting audible content in response to identified voice commands from the speech. However, the first device might not include a display for displaying graphical content. As such, the first device may be configured to identify devices that include displays and that are proximate to the first device. The first device may then instruct one or more of these other devices to output visual content associated with a user's voice command.
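
The display-device identification step in the abstract might look roughly like the following; the device records, field names, and distance threshold are invented for illustration.

```python
def displays_near(devices, max_distance_m=5.0):
    """Return IDs of display-capable devices within range, nearest first."""
    candidates = [d for d in devices
                  if d["has_display"] and d["distance_m"] <= max_distance_m]
    return [d["id"] for d in sorted(candidates, key=lambda d: d["distance_m"])]

# Hypothetical devices known to the voice-controlled (display-less) device
devices = [
    {"id": "tv-livingroom", "has_display": True, "distance_m": 3.0},
    {"id": "speaker-kitchen", "has_display": False, "distance_m": 1.0},
    {"id": "tablet-bedroom", "has_display": True, "distance_m": 8.0},
]
targets = displays_near(devices)
```

The first device would then answer audibly itself and instruct the returned devices to render the visual content.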

PROVIDING ACCESS TO USER-CONTROLLED RESOURCES BY AUTOMATED ASSISTANTS

Methods, apparatus, and computer readable media are described herein for allowing a first user to interface with an automated assistant to assign tasks to additional user(s), and/or for causing notification(s) of the assigned task to be rendered to the additional user(s) via corresponding automated assistant interface(s). In various implementations, one or more criteria can be utilized in selecting a group of client device(s), linked to the additional user, via which to provide the notification(s) for the task assigned to the additional user. Also, in various implementations, condition(s) for providing the notification(s) for the task can be determined, and the notification(s) provided based on determining satisfaction of the condition(s).
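
The criteria- and condition-gated notification described above can be sketched as a device filter plus a delivery predicate. The data shapes and the "device is active" condition are assumptions made for the sketch.

```python
def notify_when_ready(assignee_devices, task, is_active):
    """Select the assignee's devices meeting the criteria, and emit a
    notification only where the delivery condition holds."""
    sent = []
    for device in assignee_devices:
        if device["supports_notifications"] and is_active(device):
            sent.append((device["id"], f"New task: {task}"))
    return sent

# Hypothetical devices linked to the additional user
devices = [
    {"id": "phone", "supports_notifications": True, "active": True},
    {"id": "watch", "supports_notifications": True, "active": False},
    {"id": "hub", "supports_notifications": False, "active": True},
]
sent = notify_when_ready(devices, "water the plants", lambda d: d["active"])
```

Separating the selection criteria from the delivery condition lets either be changed independently, mirroring the two-stage structure the abstract describes.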

DISPLAY APPARATUS, VOICE ACQUIRING APPARATUS AND VOICE RECOGNITION METHOD THEREOF

Disclosed are a display apparatus, a voice acquiring apparatus and a voice recognition method thereof, the display apparatus including: a display unit which displays an image; a communication unit which communicates with a plurality of external apparatuses; and a controller which includes a voice recognition engine to recognize a user's voice, receives a voice signal from a voice acquiring unit, and controls the communication unit to receive candidate instruction words from at least one of the plurality of external apparatuses to recognize the received voice signal.
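
Matching a recognized utterance against candidate instruction words gathered from external apparatuses could be done with a simple edit-distance comparison; this sketch and its names are illustrative, not the patent's recognition engine.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance, one-row variant."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete from a
                                     dp[j - 1] + 1,    # insert into a
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def best_candidate(recognized: str, candidates: list) -> str:
    """Pick the candidate instruction word closest to the recognized text."""
    return min(candidates, key=lambda w: edit_distance(recognized, w))
```

Constraining recognition to a small candidate set supplied by the connected apparatuses is what lets a noisy or misrecognized signal still resolve to a valid instruction.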
