H04R5/027

SERVICE FOR TARGETED CROWD SOURCED AUDIO FOR VIRTUAL INTERACTION

An audio generation system is provided to enable coordinated control of multiple IoT devices for collecting audio from one or more audio sources and distributing it according to location and user preference. The audio generation system enables location-sensitive acoustic control of sound, both as a shaped envelope for a particular source and as an individualized experience. The audio generation system also facilitates an interactive visual system for visualizing and manipulating the audio environment, including through augmented reality and/or virtual reality depictions of soundscapes. The audio generation system can further help establish and improve a target audio environment (a sound influence zone) and offers an intuitive way to understand where sounds will be heard.

EAR MODEL, PERFORMANCE EVALUATION METHOD, AND PERFORMANCE EVALUATION SYSTEM
20230007418 · 2023-01-05

[Problem] To evaluate, easily and at low cost, the performance of an earphone device used for ear acoustic authentication.

[Solution] Holes are provided in a plurality of plate-shaped members (201), and an artificial eardrum member (202) corresponds to the eardrum of an individual. The holes provided in each of the plurality of plate-shaped members (201) are connected, and the plate-shaped members (201) are layered over the artificial eardrum member (202) so as to simulate the external auditory canal of the individual.

CONTROL APPARATUS, SIGNAL PROCESSING METHOD, AND SPEAKER APPARATUS

A control apparatus according to an embodiment of the present technology includes an audio control section and a vibration control section.

The audio control section generates audio control signals of a plurality of channels with audio signals of the plurality of channels as input signals, the audio signals each including a first audio component and a second audio component different from the first audio component. The vibration control section generates a vibration control signal for vibration presentation by taking a difference between audio signals of two channels among the plurality of channels.
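The channel-difference idea above can be sketched numerically. The following is a minimal illustration, not the patented implementation: the signals, sample rate, and the interpretation of the two audio components are all hypothetical, and the abstract specifies only that the vibration control signal is derived by taking a difference between two channels.

```python
import numpy as np

def vibration_from_difference(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Derive a vibration control signal as the difference between the
    audio signals of two channels. Components common to both channels
    cancel; components that differ between channels remain."""
    return left - right

# Hypothetical two-channel example: a shared 440 Hz component plus a
# 60 Hz component present only in the left channel.
fs = 48_000
t = np.arange(fs) / fs
common = np.sin(2 * np.pi * 440 * t)   # first audio component (shared)
rumble = np.sin(2 * np.pi * 60 * t)    # second audio component (left only)
left = common + rumble
right = common

vib = vibration_from_difference(left, right)
# The shared 440 Hz component cancels; only the 60 Hz component remains.
print(np.allclose(vib, rumble))  # True
```

The difference acts as a simple common-mode rejection: content identical in both channels (e.g. centered dialogue) is suppressed, while panned or channel-specific content survives for vibration presentation.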

Camera microphone drainage system designed for beamforming
11570546 · 2023-01-31

An image capture device includes an audio depression formed into the housing with a drainage microphone mounted therein. A cover protects the drainage microphone disposed beneath the cover from an environment external to the image capture device. The cover and audio depression define a drainage channel extending from a channel entrance, through a channel volume, and out a channel exit. The surface area of the opening of the channel entrance is proportioned relative to the channel volume such that the ratio of the surface area to volume is greater than ten percent. This allows the cover to shift resonance outside of a desired frequency band.
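The stated geometric constraint can be expressed as a simple check. The dimensions and units below are hypothetical; the abstract gives only the ratio threshold (greater than ten percent), not the units or measurement procedure.

```python
def entrance_area_to_volume_ratio(area_mm2: float, volume_mm3: float) -> float:
    """Ratio of channel-entrance surface area to channel volume.
    Units (mm^2 per mm^3) are an assumption for illustration."""
    return area_mm2 / volume_mm3

def meets_ratio_constraint(area_mm2: float, volume_mm3: float,
                           threshold: float = 0.10) -> bool:
    """True if the area-to-volume ratio exceeds the ten-percent
    threshold stated in the abstract."""
    return entrance_area_to_volume_ratio(area_mm2, volume_mm3) > threshold

# Hypothetical dimensions: a 4 mm^2 entrance feeding a 30 mm^3 channel.
print(meets_ratio_constraint(4.0, 30.0))  # True (4/30 ≈ 0.133 > 0.10)
```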

METHOD AND SYSTEM FOR IMPLEMENTING VOICE MONITORING AND TRACKING OF PARTICIPANTS IN GROUP SETTINGS

Novel tools and techniques are provided for implementing voice monitoring and tracking, and, more particularly, to methods, systems, and apparatuses for implementing voice monitoring and tracking of participants in group settings. In various embodiments, a computing system might receive, from at least one audio sensor among the one or more audio sensors disposed within the first space, voice signals corresponding to voices associated with individuals present within the first space. The computing system might analyze the received voice signals to identify one or more individuals who are present within the first space. The computing system might present, within a user interface of the user device associated with the user, information regarding the identified one or more individuals to assist the user in coordinating discussions among the individuals present within the first space.
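One plausible way to realize the identification step is to match voice embeddings extracted from the received signals against enrolled speaker profiles. The sketch below is a hypothetical illustration only: the `identify_speakers` function, the embedding vectors, and the cosine-similarity threshold are assumptions, not details from the abstract.

```python
import numpy as np

def identify_speakers(segment_embeddings, enrolled, threshold=0.75):
    """Match each voice-segment embedding against enrolled speaker
    embeddings by cosine similarity; return the set of names whose
    best match exceeds the threshold."""
    identified = set()
    for emb in segment_embeddings:
        best_name, best_sim = None, threshold
        for name, ref in enrolled.items():
            sim = float(np.dot(emb, ref)
                        / (np.linalg.norm(emb) * np.linalg.norm(ref)))
            if sim > best_sim:
                best_name, best_sim = name, sim
        if best_name is not None:
            identified.add(best_name)
    return identified

# Toy 2-D "embeddings" for two enrolled speakers and two voice segments.
enrolled = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
segments = [np.array([0.9, 0.1]), np.array([0.05, 1.0])]
who = identify_speakers(segments, enrolled)
print(sorted(who))  # ['alice', 'bob']
```

The identified names could then populate the user-interface view described in the abstract to assist the user in coordinating discussions.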

Presence detection using ultrasonic signals with concurrent audio playback

Techniques are described for presence-detection devices to detect movement of a person in an environment by emitting ultrasonic signals using a loudspeaker that is concurrently outputting audible sound. To detect movement, the devices characterize the change in frequency, or Doppler shift, of the reflections of the ultrasonic signals off the person caused by the person's movement. However, when a loudspeaker plays audible sound while emitting the ultrasonic signal, audio signals generated by the devices' microphones include distortions caused by the loudspeaker. The presence-detection devices can interpret these distortions as indicating movement when there is none, or as indicating a lack of movement when a user is moving. The techniques include processing the audio signals to remove the distortions and thereby more accurately identify changes in the frequency of the reflections caused by the person's movement.
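A toy numerical illustration of the Doppler-shift principle, assuming a synthetic microphone signal and a crude band-limiting step in place of the distortion-removal processing the abstract describes. The carrier frequency, walking speed, and amplitudes are all hypothetical.

```python
import numpy as np

fs = 96_000          # sample rate high enough for ultrasound (Hz)
f0 = 30_000.0        # emitted ultrasonic carrier (Hz)
c = 343.0            # speed of sound (m/s)
v = 0.5              # hypothetical speed of a person toward the device (m/s)
f_refl = f0 * (1 + 2 * v / c)   # two-way Doppler-shifted reflection

t = np.arange(fs) / fs          # one second of signal
mic = (np.sin(2 * np.pi * f0 * t)              # direct path of the carrier
       + 0.3 * np.sin(2 * np.pi * f_refl * t)  # reflection off the moving person
       + np.sin(2 * np.pi * 440 * t))          # concurrent audible playback

# Crude stand-in for distortion removal: discard spectral content outside
# a narrow band around the carrier before looking for the Doppler peak.
spec = np.fft.rfft(mic)
freqs = np.fft.rfftfreq(len(mic), 1 / fs)
spec[(freqs < f0 - 500) | (freqs > f0 + 500)] = 0.0

# Exclude the carrier bin itself, then find the strongest remaining peak.
band = np.abs(spec)
band[np.abs(freqs - f0) < 20] = 0.0
peak = freqs[np.argmax(band)]

shift = peak - f0                # Doppler shift of the reflection (Hz)
est_v = shift * c / (2 * f0)     # estimated speed toward the device (m/s)
print(round(est_v, 2))  # 0.5
```

In a real device the audible playback and the carrier overlap nonlinearly and the distortions are removed with dedicated signal processing, but the core estimate, speed from the frequency offset of the reflection, is the same.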

Spherical harmonic decomposition of a sound field detected by an equatorial acoustic sensor array

An audio system includes an equatorial acoustic sensor array (EASA) that may be coupled to an object. The audio system is configured to detect, via the EASA, signals corresponding to a portion of a sound field in a local area. The detected signals are converted into a plurality of corresponding abstract representations that describe the portion of the sound field. Effects of scattering off the object are removed from the abstract representations to create adjusted abstract representations. A set of spherical harmonic (SH) coefficients is determined using the adjusted abstract representations; the set of SH coefficients describes the entirety of the sound field. The set of SH coefficients and head-related transfer functions of a user are then used for binaural rendering of the reconstructed sound field to the user.
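For an equatorial (ring) array, the "abstract representations" can be thought of as circular-harmonic (azimuthal Fourier) coefficients of the ring signals. The sketch below shows only this encoding step with a hypothetical sensor count and field; the scattering correction and the mapping from circular to full spherical harmonic coefficients described in the abstract are not shown.

```python
import numpy as np

M = 8                                  # hypothetical number of ring sensors
phi = 2 * np.pi * np.arange(M) / M     # sensor azimuths on the equator

# Hypothetical pressure snapshot on the ring: energy in circular
# orders 0 (mean) and ±1 (first-order directional component).
p = 1.0 + 0.5 * np.cos(phi - 0.3)

# Circular-harmonic coefficients: a discrete Fourier analysis across
# the ring, one coefficient per azimuthal order m.
orders = np.arange(-M // 2 + 1, M // 2 + 1)
E = np.exp(-1j * np.outer(orders, phi)) / M
c = E @ p

# Order 0 carries the mean pressure; order 1 carries half the cosine
# amplitude split between m = +1 and m = -1.
print(np.round(np.abs(c[orders == 0])[0], 3))  # 1.0
print(np.round(np.abs(c[orders == 1])[0], 3))  # 0.25
```

In the system the abstract describes, coefficients like these would be corrected for scattering off the mounting object before the full set of SH coefficients is determined.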