Patent classifications
H04R1/406
EYEGLASS AUGMENTED REALITY SPEECH TO TEXT DEVICE AND METHOD
A method and apparatus to assist people with hearing loss. An augmented reality device with microphones and a display captures the speech of a person talking to the wearer and displays real-time captions in the wearer's field of view, while optionally not captioning the wearer's own speech. The microphone system inverts the typical use of microphones in augmented reality devices: rather than capturing the wearer's voice, it analyzes and processes environmental sounds while ignoring the wearer's own speech.
SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, PROGRAM, AND SIGNAL PROCESSING SYSTEM
Provided is a signal processing device including a main speech detection unit configured to detect, using a neural network, whether or not a signal input to a sound collection device assigned to each of at least two speakers includes a main speech, that is, the voice of the corresponding speaker, and to output frame information indicating the presence or absence of the main speech.
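As a rough illustration of frame-wise main-speech detection, the sketch below stands in for the neural network with a single logistic unit scoring per-frame features. The feature layout ([energy at the speaker's own microphone, energy at the other microphone]) and the weights are illustrative assumptions, not taken from the patent:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def main_speech_frames(frame_features, weights, bias, threshold=0.5):
    """Stand-in for the neural network: a single logistic unit scores each
    frame's features and returns per-frame presence flags (the patent's
    'frame information' indicating presence or absence of the main speech)."""
    return [sigmoid(sum(w * f for w, f in zip(weights, feats)) + bias) >= threshold
            for feats in frame_features]

# Toy features per frame: [energy_at_own_mic, energy_at_other_mic].
# A frame dominated by the assigned speaker's own microphone is flagged True.
frames = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]
print(main_speech_frames(frames, [4.0, -4.0], 0.0))   # [True, False, True]
```

A trained network would replace the hand-set weights, but the frame-in, flag-out interface is the same.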
Associated spatial audio playback
An apparatus including at least one processor and at least one memory including computer code for one or more programs, the at least one memory and the computer code configured, with the at least one processor, to cause the apparatus at least to: generate content lock information for a content lock, wherein the content lock information enables control of audio signal processing associated with audio signals related to one or more audio sources based on a position and/or orientation input.
Using classified sounds and localized sound sources to operate an autonomous vehicle
An ambient sound environment is captured by a microphone array of an autonomous vehicle traveling in the ambient sound environment. A perception module of the autonomous vehicle classifies sounds and localizes sound sources in the ambient sound environment. Classification is performed using spectrum analysis and/or machine learning. In an embodiment, sound sources within a field of view (FOV) of an image sensor of the autonomous vehicle are localized in a visual scene generated by the perception module. In an embodiment, one or more sound sources outside the FOV of the image sensor are localized in a static digital map. Localization is performed using parametric or non-parametric techniques and/or machine learning. The output of the perception module is input into a planning module of the autonomous vehicle to plan a route or trajectory for the autonomous vehicle in the ambient sound environment.
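The spectrum-analysis branch of the classification can be sketched minimally as follows. The naive DFT, the frequency band, and the "siren-like" label are illustrative assumptions; a real system would use an FFT and learned classes:

```python
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the largest-magnitude DFT bin (naive DFT,
    positive frequencies only, DC skipped)."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n

def classify_sound(samples, sample_rate):
    """Toy rule-based classifier on the dominant frequency (illustrative band)."""
    f = dominant_frequency(samples, sample_rate)
    if 500 <= f <= 1800:
        return "siren-like"
    return "other"

# Usage: a pure 1 kHz tone sampled at 8 kHz lands in the siren-like band.
rate = 8000
tone = [math.sin(2 * math.pi * 1000 * i / rate) for i in range(64)]
print(classify_sound(tone, rate))   # siren-like
```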
System to determine direction toward user
A device has a microphone array that acquires sound data and a camera that acquires image data. A portion of the device may be moveable by one or more actuators. Responsive to the user, the portion of the device is moved toward an estimated direction of the user. The estimated direction is based on sensor data including the sound data and the image data. First variance values for individual sound direction values are calculated. Data derived from the image data, or data from other sensors, may be used to modify the first variance values and determine second data comprising second variance values. The second data may be processed to determine the estimated direction of the user. For example, the second data may be processed by both a forward and a backward Kalman filter, and their outputs combined to determine an estimated direction toward the user.
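The final combination step can be illustrated by inverse-variance weighting of two Gaussian direction estimates, which is how two independent estimates with known variances are optimally fused. This is a simplification of a full forward/backward Kalman smoother, and the numbers are illustrative:

```python
def fuse_estimates(angle_fwd, var_fwd, angle_bwd, var_bwd):
    """Fuse forward-pass and backward-pass direction estimates by
    inverse-variance weighting; the lower-variance estimate dominates.
    (A real implementation must also handle angle wraparound at 360 degrees.)"""
    w_fwd = 1.0 / var_fwd
    w_bwd = 1.0 / var_bwd
    angle = (w_fwd * angle_fwd + w_bwd * angle_bwd) / (w_fwd + w_bwd)
    var = 1.0 / (w_fwd + w_bwd)
    return angle, var

# A confident forward estimate (variance 1.0) pulls the fused direction
# toward itself; the fused variance is smaller than either input variance.
angle, var = fuse_estimates(30.0, 1.0, 40.0, 4.0)
print(round(angle, 1), round(var, 2))   # 32.0 0.8
```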
Acoustic devices
The present disclosure provides an acoustic device including a microphone array, a processor, and at least one speaker. The microphone array may be configured to acquire environmental noise. The processor may be configured to estimate a sound field at a target spatial position using the microphone array. The target spatial position may be closer to an ear canal of a user than each microphone in the microphone array. The processor may be configured to generate a noise reduction signal based on the environmental noise and the sound field estimate at the target spatial position. The at least one speaker may be configured to output a target signal based on the noise reduction signal. The target signal may be used to reduce the environmental noise. The microphone array may be arranged in a target area so as to minimize an interference signal from the at least one speaker to the microphone array.
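A minimal sketch of the estimate-then-invert idea: the sound field at the target position is approximated as a weighted combination of the microphone signals, and the noise reduction signal is its phase inversion. The equal averaging weights here are an illustrative assumption; a real device derives them from array geometry and calibration:

```python
def estimate_field_at_target(mic_signals, weights):
    """Estimate the sound field at the target position as a weighted sum of
    microphone signals (weights would come from geometry/calibration)."""
    return [sum(w * sig[i] for w, sig in zip(weights, mic_signals))
            for i in range(len(mic_signals[0]))]

def noise_reduction_signal(field_estimate):
    """Anti-noise: equal amplitude, opposite phase to the estimated noise."""
    return [-x for x in field_estimate]

# Two microphones, equal-weight estimate; the anti-noise cancels the
# estimated field exactly at the target position.
mics = [[0.5, 1.0, -0.5], [0.3, 0.8, -0.7]]
est = estimate_field_at_target(mics, [0.5, 0.5])
anti = noise_reduction_signal(est)
print([e + a for e, a in zip(est, anti)])   # [0.0, 0.0, 0.0]
```

In practice cancellation is only as good as the field estimate, which is why the patent ties the microphone arrangement to minimizing speaker-to-array interference.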
Wearable device with directional audio
A wearable device can provide an audio module that is operable to provide audio output from a distance away from the ears of the user. For example, the wearable device can be worn on clothing of the user and direct audio waves to the ears of the user. Such audio waves can be focused by a parametric array of speakers that limit audibility by others. Thus, the privacy of the audio directed to the user can be maintained without requiring the user to wear audio headsets on, over, or in the ears of the user. The wearable device can further include microphones and/or connections to other devices that facilitate calibration of the audio module of the wearable device. The wearable device can further include user sensors that are configured to detect, measure, and/or track one or more properties of the user.
Improved Localization of an Acoustic Source
Processing sound signals acquired by a microphone, for example of the ambisonic type, to locate a sound source in a space including at least one wall. A time-frequency transform is applied to the acquired signals, and from them a complex velocity vector, with real and imaginary parts, is expressed in the frequency domain. The velocity vector characterizes a composition between: a first, direct acoustic path between the source and the microphone, represented by a first vector; and a second acoustic path resulting from a reflection on the wall, represented by a second vector. The second path has a first delay with respect to the direct path. Depending on the first delay and the first and second vectors, a parameter is determined from among: a direction of the direct path, a distance from the source to the microphone, and a distance from the source to said wall.
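The delay/reflection analysis is beyond a short sketch, but the underlying velocity-vector construction for a first-order (ambisonic) signal can be illustrated as follows. The direct-path-only synthetic input and the function names are illustrative assumptions:

```python
import math

def velocity_vector(W, X, Y):
    """Per-bin complex velocity vector for first-order ambisonic signals:
    V(f) = (X(f)/W(f), Y(f)/W(f))."""
    return [(x / w, y / w) for w, x, y in zip(W, X, Y)]

def direct_path_azimuth(V):
    """Average the real part of V over frequency bins; when the direct path
    dominates, the real part points from the microphone toward the source."""
    vx = sum(v[0].real for v in V) / len(V)
    vy = sum(v[1].real for v in V) / len(V)
    return math.degrees(math.atan2(vy, vx))

# Synthetic direct-path-only case: a source at 45 degrees azimuth gives
# X = W*cos(45 deg), Y = W*sin(45 deg) in every bin (no wall reflection).
az = math.radians(45.0)
W = [1.0 + 0j] * 8
X = [w * math.cos(az) for w in W]
Y = [w * math.sin(az) for w in W]
print(round(direct_path_azimuth(velocity_vector(W, X, Y)), 1))   # 45.0
```

With a wall reflection present, the velocity vector traces a frequency-dependent curve whose shape encodes the first delay, which is what the patent exploits to recover distances as well as direction.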
METHOD AND DEVICE FOR CONTROLLING THE PROPAGATION OF ACOUSTIC WAVES ON A WALL
A method and a device for controlling the propagation of acoustic waves in the vicinity of a wall. The method and device implement a master device that controls a set of Nc cells, each primarily made up of a speaker, a set of Nm microphones connected to the speaker, and a control unit. Control laws determine the intensity of the electrical signal that must be sent to each speaker so as to obtain a determined target generalized acoustic impedance for each speaker, such that a fraction of the acoustic waves is absorbed by the membrane of each speaker.
Apparatus, Method and Computer Program for Enabling Audio Zooming
Examples of the disclosure relate to apparatus, methods and computer programs for enabling audio zooming. The apparatus can include circuitry configured to determine, for an audio signal, whether sound energy in at least one first direction differs from sound energy in at least one second direction by at least a threshold amount. The circuitry may also be configured to control the amount of headroom provided based on whether or not the sound energy in the at least one first direction differs from the sound energy in the at least one second direction by at least the threshold amount.
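The energy-difference test can be sketched as a decibel comparison against a threshold that gates how much headroom is allowed. The 6 dB threshold and the headroom values are illustrative assumptions, not values from the disclosure:

```python
import math

def needs_extra_headroom(energy_first, energy_second, threshold_db=6.0):
    """True if the directional sound energies differ by at least threshold_db."""
    diff_db = abs(10.0 * math.log10(energy_first / energy_second))
    return diff_db >= threshold_db

def headroom_db(energy_first, energy_second, base_db=3.0, extra_db=6.0):
    """Allow more headroom when one direction is much louder, so zooming
    onto the loud direction does not clip (illustrative values)."""
    extra = extra_db if needs_extra_headroom(energy_first, energy_second) else 0.0
    return base_db + extra

# A 4:1 energy ratio is about 6.02 dB, which crosses the 6 dB threshold;
# a 1.2:1 ratio (about 0.8 dB) does not.
print(headroom_db(4.0, 1.0))   # 9.0
print(headroom_db(1.2, 1.0))   # 3.0
```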