Patent classifications
H04S7/302
3D Spatialisation of Voice Chat
The invention provides techniques for intelligently positioning speech in a virtual environment. Factors such as user preferences, the location of virtual environment audio and/or visual events, avatar location, and others can be taken into account when selecting a suitable location for the speech. The virtual environment can be a game environment, a meeting environment, an augmented reality environment, a virtual reality environment, and the like. The invention can be implemented by an audio processing unit which may be part of a game console.
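As a toy illustration of the factor-weighing the abstract describes, the sketch below places a remote speaker's voice on a listener-centred circle (the radius standing in for a user preference), in the direction of the speaker's avatar. Pure Python; `choose_voice_position` and the placement policy are hypothetical, not taken from the patent:

```python
import math

def choose_voice_position(avatar_pos, listener_pos, preferred_radius=2.0):
    """Place a remote speaker's voice on a circle of `preferred_radius`
    around the listener, in the direction of the speaker's avatar.
    A hypothetical policy, not the patented selection logic."""
    dx = avatar_pos[0] - listener_pos[0]
    dy = avatar_pos[1] - listener_pos[1]
    azimuth = math.atan2(dy, dx)  # direction from listener toward avatar
    return (listener_pos[0] + preferred_radius * math.cos(azimuth),
            listener_pos[1] + preferred_radius * math.sin(azimuth))
```

A fuller implementation would also weigh virtual audio/visual events and crowding from other voices before settling on a position.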
APPARATUS AND METHOD FOR GENERATING IMPULSE RESPONSE USING RAY TRACING
Provided are a method and apparatus for generating an impulse response using ray tracing. The method may include: calculating a number of rays reaching a receiver from a transmitter, based on acoustic geometry information (including a position of the transmitter and a position of the receiver disposed in a sound space), a maximum ray length or a sound space volume, and a radius of the receiver; tracing the rays using a path of the calculated rays; and generating an impulse response based on the traced rays.
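The two computational steps above can be sketched in pure Python. The ray-count formula below is a simplified solid-angle argument (a stand-in for whatever formula the patent claims), and the tracer marches rays in small steps through an axis-aligned box, reflecting specularly at the walls and binning energy whenever a ray crosses the receiver sphere. All names and defaults are illustrative:

```python
import math
import random

def required_ray_count(max_path_len, receiver_radius, hits_target=1.0):
    """Rays to emit so that, on average, `hits_target` rays cross a spherical
    receiver at distance `max_path_len` (solid-angle argument)."""
    hit_fraction = (math.pi * receiver_radius ** 2) / (4 * math.pi * max_path_len ** 2)
    return math.ceil(hits_target / hit_fraction)

def trace_impulse_response(room, src, rcv, rcv_radius, n_rays,
                           max_len=20.0, fs=4000, c=343.0,
                           absorption=0.3, step=0.05):
    """Stochastic ray tracing in a box `room` = (Lx, Ly, Lz): march each ray
    in `step`-metre increments, reflect specularly off the walls (losing
    `absorption` of energy per wall hit), and add the ray's remaining energy
    to an IR bin whenever it passes through the receiver sphere.
    A didactic sketch, not the patented algorithm."""
    ir = [0.0] * (int(max_len / c * fs) + 1)
    for _ in range(n_rays):
        z = random.uniform(-1.0, 1.0)            # uniform direction on the sphere
        phi = random.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        d = [s * math.cos(phi), s * math.sin(phi), z]
        p, energy, travelled = list(src), 1.0, 0.0
        while travelled < max_len:
            for i in range(3):                   # advance; reflect at walls
                p[i] += d[i] * step
                if p[i] < 0.0 or p[i] > room[i]:
                    p[i] = min(max(p[i], 0.0), room[i])
                    d[i] = -d[i]
                    energy *= 1.0 - absorption
            travelled += step
            if math.dist(p, rcv) <= rcv_radius:  # ray crosses the receiver
                idx = min(int(travelled / c * fs), len(ir) - 1)
                ir[idx] += energy
    return ir
```

A real implementation would intersect ray segments with the receiver sphere analytically rather than stepping, and would weight deposits by the receiver's detection model.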
METHOD AND APPARATUS FOR RENDERING OBJECT-BASED AUDIO SIGNAL CONSIDERING OBSTACLE
A method and apparatus for rendering an object-based audio signal considering an obstacle are disclosed. According to an example embodiment, the method includes: identifying an object-based input signal and metadata for the input signal; generating a binaural filter from the metadata using a binaural room impulse response (BRIR); determining, based on the metadata, whether an obstacle is present between the listener and the object; modifying the generated binaural filter when the obstacle is determined to be present; and generating a rendered output signal by convolving the modified binaural filter with the input signal.
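The pipeline above can be sketched end to end. The patent only states that the filter is "modified" when an obstacle is present; the attenuate-and-low-pass modification below is one plausible occlusion model, and all function names are illustrative:

```python
def occlude(filter_taps, attenuation=0.5, smooth=0.7):
    """Approximate obstacle occlusion by attenuating the binaural filter and
    smearing its taps with a one-pole low-pass (a hypothetical modification;
    the patent does not specify one)."""
    out, prev = [], 0.0
    for h in filter_taps:
        prev = smooth * prev + (1.0 - smooth) * h
        out.append(attenuation * prev)
    return out

def convolve(x, h):
    """Direct-form FIR convolution of signal x with filter h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def render(signal, brir, obstacle_present):
    """Modify the BRIR-based filter only when an obstacle blocks the
    listener-object path, then convolve with the input signal."""
    h = occlude(brir) if obstacle_present else brir
    return convolve(signal, h)
```

In practice this would run once per ear with frequency-dependent occlusion, but the control flow (check obstacle, modify filter, convolve) matches the abstract.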
Integration of remote audio into a performance venue
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for integrating remote audio into a performance venue. In some implementations, a link that includes at least one of audio data and video data is established between a wireless device of a remote participant and a computational mixer. A profile for the remote participant is referenced. A venue signal related to the at least one of audio data and video data is generated, based on the profile for the remote participant, using the computational mixer. The venue signal is transmitted.
Audio communication device
An audio communication device includes: a sound position determiner that determines sound localization positions for N audio signals in a virtual space having first and second walls; N sound localizers each performing sound localization processing to localize sound in the sound localization position determined by the sound position determiner, and outputting localized sound signals; and an adder that sums the N localized sound signals and outputs a summed localized sound signal. Each sound localizer performs the processing using: a first head-related transfer function (HRTF) assuming that a sound wave emitted from the sound localization position of the sound localizer determined by the sound position determiner directly reaches each ear of a hearer virtually present at the hearer position; and a second HRTF assuming that the sound wave emitted from the sound localization position reaches each ear of the hearer after being reflected by the closer one of the first and second walls.
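The direct-plus-reflected structure above is essentially an image-source model: the reflection off the closer wall behaves like a mirrored copy of the source. The 1-D sketch below replaces the two HRTFs with delay/gain pairs and sums the N localized signals as the adder would; all names are illustrative:

```python
import math

C = 343.0  # speed of sound, m/s

def direct_and_reflected(src_x, listener_x, wall_left_x, wall_right_x, fs=48000):
    """Delays (samples) and 1/r gains for the direct path and for the first
    reflection off whichever wall is closer to the source, modelled as an
    image source mirrored across that wall. A delay/gain stand-in for the
    first and second HRTFs in the abstract."""
    direct = abs(src_x - listener_x)
    if abs(src_x - wall_left_x) <= abs(src_x - wall_right_x):
        image_x = 2.0 * wall_left_x - src_x    # mirror across the left wall
    else:
        image_x = 2.0 * wall_right_x - src_x   # mirror across the right wall
    reflected = abs(image_x - listener_x)
    smp = lambda dist: int(round(dist / C * fs))
    return (smp(direct), 1.0 / max(direct, 1e-3),
            smp(reflected), 1.0 / max(reflected, 1e-3))

def localize_and_sum(signals, src_positions, listener_x, walls, fs=48000):
    """Localize each of the N signals (direct + reflected taps), then sum
    the localized signals into one output, as the adder does."""
    localized = []
    for sig, x in zip(signals, src_positions):
        dd, dg, rd, rg = direct_and_reflected(x, listener_x, walls[0], walls[1], fs)
        out = [0.0] * (len(sig) + rd)
        for n, s in enumerate(sig):
            out[n + dd] += dg * s   # direct arrival
            out[n + rd] += rg * s   # wall reflection (always later)
        localized.append(out)
    mixed = [0.0] * max(len(o) for o in localized)
    for o in localized:
        for n, v in enumerate(o):
            mixed[n] += v
    return mixed
```

The device described in the abstract would apply per-ear HRTFs instead of single delay/gain taps, but the topology (N localizers feeding one adder) is the same.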
SYSTEMS AND METHODS OF SPATIAL AUDIO PLAYBACK WITH ENHANCED IMMERSIVENESS
A method of playing back audio content with improved immersiveness can include receiving, at a playback device, audio input including vertical content having a high-frequency portion and a low-frequency portion. The playback device can face along a first sound axis and comprise an up-firing transducer configured to direct sound along a second sound axis that is vertically angled with respect to the first sound axis, and a side-firing transducer or array configured to direct sound along a third axis that is horizontally angled with respect to the first sound axis. The low-frequency portion of the vertical content can be played back via the side-firing transducer or array, while the high-frequency portion of the vertical content can be played back via the up-firing transducer.
IN-VEHICLE INDEPENDENT SOUND ZONE CONTROL METHOD, SYSTEM AND RELATED DEVICE
The present disclosure discloses an in-vehicle independent sound zone control method, a system, and a related device, applied to a vehicle. The method includes the following steps: presetting a control area and a non-control area; arranging a speaker array behind a front seat of the vehicle to generate a first acoustic response, and arranging a headrest speaker at a headrest of a rear seat of the vehicle to generate a second acoustic response; fitting a virtual target speaker, wherein the virtual target speaker is configured to generate a target acoustic response within the control area; and controlling the sound quality of the in-vehicle independent sound zone through audio algorithm processing of the target acoustic response, the first acoustic response, and the second acoustic response.
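The abstract does not say how the virtual target speaker is "fitted". One plausible reading is a least-squares fit: choose gains for the two real sources so their combined response best matches the target response sampled in the control area. The sketch below solves the 2x2 normal equations directly; the function name and this interpretation are assumptions:

```python
def fit_virtual_speaker(h_array, h_headrest, target):
    """Least-squares gains (g1, g2) so that g1*h_array + g2*h_headrest best
    matches the virtual target speaker's response over sampled points in
    the control area. An illustrative reading of 'fitting a virtual target
    speaker'; the patent does not specify the procedure."""
    a11 = sum(a * a for a in h_array)
    a12 = sum(a * b for a, b in zip(h_array, h_headrest))
    a22 = sum(b * b for b in h_headrest)
    b1 = sum(a * t for a, t in zip(h_array, target))
    b2 = sum(b * t for b, t in zip(h_headrest, target))
    det = a11 * a22 - a12 * a12  # 2x2 normal-equation solve
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)
```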
AUDIO FILTER EFFECTS VIA SPATIAL TRANSFORMATIONS
An audio system of a client device applies transformations to audio received over a computer network. The transformations (e.g., HRTFs) effect changes in apparent source positions of the received audio, or of segments thereof. Such transformations may be used to achieve “animation” of audio, in which the source positions of the audio or audio segments appear to change over time (e.g., circling around the listener). Additionally, segmentation of audio into distinct semantic audio segments, and application of separate transformations for each audio segment, can be used to intuitively differentiate the different audio segments by causing them to sound as if they emanated from different positions around the listener.
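The circling-the-listener "animation" above can be sketched with a per-sample, time-varying constant-power pan, a cheap stereo stand-in for the per-segment HRTF transformations the abstract describes. `circle_animation` and its parameters are illustrative:

```python
import math

def circle_animation(signal, fs, revs_per_sec=0.25):
    """'Animate' a mono stream so its apparent position circles the
    listener: a rotating azimuth drives a constant-power pan, so
    left^2 + right^2 equals the input power at every sample."""
    left, right = [], []
    for n, s in enumerate(signal):
        azimuth = 2.0 * math.pi * revs_per_sec * n / fs
        pan = math.sin(azimuth)             # sweeps -1 (left) .. +1 (right)
        theta = (pan + 1.0) * math.pi / 4.0
        left.append(math.cos(theta) * s)
        right.append(math.sin(theta) * s)
    return left, right
```

Per-segment differentiation would simply assign each semantic audio segment its own static or animated azimuth before mixing.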
AUDIO INTEGRATION OF PORTABLE ELECTRONIC DEVICES FOR ENCLOSED ENVIRONMENTS
Implementations of the subject technology provide for audio integration of portable electronic devices into enclosed environments. A portable electronic device may be carried, by a user, into an enclosed environment, such as into an enclosure of a building, a room, or other apparatus. One or more remote speakers may be disposed in the enclosed environment. The remote speaker(s) may be operated in cooperation with the portable electronic device to spatially coordinate audio output from the remote speaker(s) with video content displayed by the portable electronic device.
OCCUPANT-BASED AUDIO CONTROL FOR ENCLOSED ENVIRONMENTS
Implementations of the subject technology provide occupant-based audio for enclosed environments. For example, an apparatus having an enclosure and one or more speakers may determine a location and/or an identity of an occupant within an enclosed environment defined by the enclosure, and operate the one or more speakers to provide audio output to the location of the occupant. The apparatus may also operate the one or more speakers to reduce the audio output to one or more other locations within the enclosure, such as to one or more non-occupant locations and/or to one or more locations of one or more other occupants. The audio output may be occupant-specific audio output, such as personalized notifications, in one or more implementations.
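One plausible way to "provide audio output to the location of the occupant" is delay-and-sum focusing: delay each speaker so their wavefronts arrive time-aligned at the occupant's position, reinforcing there and tending to blur elsewhere. The patent does not name a technique; the sketch below is an assumption:

```python
import math

C = 343.0  # speed of sound, m/s

def focusing_delays(speaker_positions, occupant_pos, fs=48000):
    """Per-speaker delays (in samples) that time-align every speaker's
    wavefront at the detected occupant location. Speakers closer to the
    occupant are delayed more, so all arrivals coincide."""
    dists = [math.dist(p, occupant_pos) for p in speaker_positions]
    farthest = max(dists)
    return [int(round((farthest - d) / C * fs)) for d in dists]
```

Reducing output at non-occupant locations would additionally require amplitude weighting or an acoustic-contrast optimization across the speaker set.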