Patent classifications
H04R2460/07
Head-tracked spatial audio
Spatial filters are generated that map the response of an audio capture device to head-related transfer functions (HRTFs) for different positions of the audio capture device relative to the HRTFs. A current set of spatial filters is determined based on the plurality of spatial filters and a head position of a user. The microphone signals are convolved with the current set of spatial filters, resulting in a left audio channel and a right audio channel that form output binaural audio channels. The binaural audio channels can be used to drive the speakers of a headphone set to generate sound that is perceived to have a spatial quality. Other aspects are described and claimed.
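The core pipeline of this abstract can be sketched in a few lines: a bank of spatial filter pairs indexed by head position, nearest-neighbour selection from the tracked head pose, and convolution of the microphone signal with the selected left/right filters. This is a hypothetical illustration; the filter bank, yaw indexing, and coefficients below are invented for the example.

```python
def convolve(signal, fir):
    """Direct-form convolution of a signal with an FIR filter."""
    out = [0.0] * (len(signal) + len(fir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(fir):
            out[i + j] += s * h
    return out

def render_binaural(mic_signal, filter_bank, head_yaw_deg):
    """Pick the spatial filter pair measured closest to the current
    head yaw and produce left/right binaural output channels."""
    yaw = min(filter_bank, key=lambda a: abs(a - head_yaw_deg))
    left_fir, right_fir = filter_bank[yaw]
    return convolve(mic_signal, left_fir), convolve(mic_signal, right_fir)

# Toy single-tap "HRTF" bank: louder on the ear facing the source.
bank = {-90: ([0.2], [1.0]), 0: ([0.7], [0.7]), 90: ([1.0], [0.2])}
left, right = render_binaural([1.0, 0.5], bank, head_yaw_deg=80)
```

Real HRTFs are long FIR pairs per direction and the selection would typically interpolate between measured positions rather than snap to the nearest one.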
HEADSET AND APPLICATION CONTROL BASED ON LOCATION DATA
Disclosed are a headset for audio communication, a software application for an electronic device associated with a headset, and a method for controlling a headset feature. The headset is configured to be worn by a user and comprises a speaker for sound transmission into the user's ear, a transceiver or radio communication unit for communication with an external device, a connection to location-based service software that is configured to control at least one headset feature based on location data of the headset, and a processing unit. The processing unit is configured to enable the location-based service software to detect whether the current location data of the headset indicates a change in location data corresponding to a certain change criterion, and to change the at least one headset feature if a change criterion associated with the change in location data is satisfied.
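The change-criterion logic can be illustrated with a minimal sketch: compare the headset's current location with its previous one and toggle a feature when the change exceeds a threshold. The feature name, planar distance, and threshold here are assumptions made for the example, not details from the abstract.

```python
def distance_m(a, b):
    """Simplified planar distance between two (x, y) points in metres;
    a real location service would use geodesic coordinates."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def update_feature(features, prev_loc, curr_loc, threshold_m=50.0):
    """Change criterion: enable the hypothetical 'hearing_protection'
    feature when the headset moves more than threshold_m."""
    if distance_m(prev_loc, curr_loc) > threshold_m:
        features["hearing_protection"] = True
    return features

state = update_feature({"hearing_protection": False}, (0.0, 0.0), (60.0, 80.0))
```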
HEADPHONE-ONLY AUDIO OPTIONS
An example computing device is provided. The computing device includes an interface to connect to a headphone and a processor communicatively coupled to the interface. The processor is to execute an application to host an audio call, determine that a headphone-only audio option is enabled for the application, detect that the headphone is connected to the interface, and allow a participant to join the audio call based on detection of the headphone connected to the interface.
System and method for differentially locating and modifying audio sources
A system and method for differentially locating and modifying audio sources that includes receiving multiple audio inputs from a set of distinct locations; determining a multi-dimensional audio map from the audio inputs; acquiring a set of positional audio control inputs applied to the audio map, each audio control input comprising a location and an audio processing property; and generating an audio output according to the audio control inputs and the audio inputs. The audio control inputs are configurable through manual, automatic, computer-vision-analysis, and other configuration modes.
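A positional control input (location plus audio processing property) can be sketched as a gain applied to every source within a radius of the control's location before mixing. The radius-based matching and gain property below are assumptions chosen to keep the example concrete.

```python
def apply_controls(sources, controls):
    """sources: list of (location, samples); controls: list of
    (location, radius, gain). Scales each source by every control
    whose radius covers it, then sums into one output mix."""
    mixed = [0.0] * max(len(s) for _, s in sources)
    for loc, samples in sources:
        gain = 1.0
        for c_loc, radius, c_gain in controls:
            d = sum((a - b) ** 2 for a, b in zip(loc, c_loc)) ** 0.5
            if d <= radius:
                gain *= c_gain  # this control input applies to this source
        for i, x in enumerate(samples):
            mixed[i] += gain * x
    return mixed

# Mute the source at (0, 0); the source at (5, 0) passes unchanged.
out = apply_controls(
    sources=[((0.0, 0.0), [1.0, 1.0]), ((5.0, 0.0), [0.5, 0.5])],
    controls=[((0.0, 0.0), 1.0, 0.0)],
)
```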
Distributed audio capturing techniques for virtual reality (VR), augmented reality (AR), and mixed reality (MR) systems
Systems, devices, and methods for capturing audio which can be used in applications such as virtual reality, augmented reality, and mixed reality systems. Some systems can include a plurality of distributed monitoring devices. Each monitoring device can include a microphone and a location tracking unit. The monitoring devices can capture audio signals in an environment, as well as location tracking signals which respectively indicate the locations of the monitoring devices over time during capture of the audio signals. The system can also include a processor to receive the audio signals and the location tracking signals. The processor can determine one or more acoustic properties of the environment based on the audio signals and the location tracking signals.
Method and system for generating an HRTF for a user
A method of obtaining a head-related transfer function for a user is provided. The method comprises generating an audio signal for output by a handheld device and outputting the generated audio signal at a plurality of locations by moving the handheld device to those locations. The audio output by the handheld device is detected at left-ear and right-ear microphones. A pose of the handheld device relative to the user's head is determined for at least some of the locations. One or more personalised HRTF features are then determined based on the detected audio and corresponding determined poses of the handheld device. The one or more personalised HRTF features are then mapped to a higher-quality HRTF for the user, wherein the higher-quality HRTF corresponds to an HRTF measured in an anechoic environment. This mapping may be learned using machine learning, for example. A corresponding system is also provided.
Ear-worn electronic device for conducting and monitoring mental exercises
An ear-worn electronic device includes a right ear device comprising a first processor and a left ear device comprising a second processor communicatively coupled to the first processor. A physiologic sensor module comprises one or more physiologic sensors configured to sense at least one physiologic parameter from a wearer. A motion sensor module comprises one or more sensors configured to sense movement of the wearer. The first and second processors are coupled to the physiologic and motion sensor modules. The first and second processors are configured to produce a three-dimensional virtual sound environment comprising relaxing sounds, generate verbal instructions within the three-dimensional virtual sound environment that guide the wearer through a predetermined mental exercise that promotes wearer relaxation, and generate verbal commentary that assesses wearer compliance with the predetermined mental exercise in response to one or both of the sensed movement and the at least one physiologic parameter.
Extrapolation of acoustic parameters from mapping server
Determination of a set of acoustic parameters for a headset is presented herein. The set of acoustic parameters can be determined based on a virtual model of physical locations stored at a mapping server. The virtual model describes a plurality of spaces and the acoustic properties of those spaces; a location in the virtual model corresponds to a physical location of the headset. The location in the virtual model for the headset is determined based on information, received from the headset, describing at least a portion of the local area. The set of acoustic parameters associated with the physical location of the headset is determined based in part on the determined location in the virtual model and any acoustic parameters associated with that location. The headset presents audio content using the set of acoustic parameters received from the mapping server.
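The mapping-server lookup can be sketched as a dictionary from known locations to parameter sets, with extrapolation to the nearest known location when the headset's position has no stored entry. The RT60 parameter and the nearest-neighbour fallback are illustrative assumptions, not details claimed by the abstract.

```python
def lookup_acoustic_params(virtual_model, headset_loc):
    """virtual_model: dict mapping (x, y) locations to acoustic
    parameter dicts. Returns the stored parameters if the location is
    known, else extrapolates from the nearest known location."""
    if headset_loc in virtual_model:
        return virtual_model[headset_loc]
    nearest = min(
        virtual_model,
        key=lambda loc: sum((a - b) ** 2 for a, b in zip(loc, headset_loc)),
    )
    return virtual_model[nearest]

model = {
    (0.0, 0.0): {"rt60_s": 0.4},   # e.g. a small office
    (10.0, 0.0): {"rt60_s": 1.2},  # e.g. an atrium
}
params = lookup_acoustic_params(model, (9.0, 1.0))
```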
Stereophonic apparatus for blind and visually-impaired people
A method and a wearable system that includes distance sensors, cameras, and headsets, all of which gather data about a blind or visually impaired person's surroundings and are connected to a portable personal communication device. The device is configured to use scenario-based algorithms and an AI to process the data and transmit sound instructions to the blind or visually impaired person, enabling him/her to independently navigate and deal with his/her environment through identification of objects and reading of local texts.
Combined HRTF for spatial audio plus hearing aid support and other enhancements
An HRTF used for 3D spatialized audio is combined, e.g., by concatenation, with additional settings to provide a more comfortable, accessible, and enjoyable experience for a listener, such as a player of a computer game listening to audio through a headset. A single transfer function is thus created that includes the other settings, and once the transfer function is computed, run-time processing can be treated as a single combined transfer function rather than multiple separate stages, resulting in computational savings. The additional settings pertain to hearing aids normally worn by the listener as well as a room-related function specific to a particular listening venue.
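The computational-savings claim rests on a standard property of linear filters: cascaded stages can be pre-combined into one filter (for FIR impulse responses, by convolving them), so run time needs a single convolution instead of several. The toy impulse responses below are invented for illustration.

```python
def convolve(a, b):
    """Direct-form convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

hrtf = [1.0, 0.5]        # toy HRTF impulse response
extra = [0.25, 0.25]     # toy hearing-aid + room-correction response
combined = convolve(hrtf, extra)  # precomputed once, offline

signal = [1.0, 0.0, -1.0]
one_stage = convolve(signal, combined)            # single run-time stage
two_stage = convolve(convolve(signal, hrtf), extra)  # cascaded stages
# Convolution is associative, so the two renderings are identical.
```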