Patent classifications
H04S7/30
Methods and apparatus for rendering audio objects
Multiple virtual source locations may be defined for a volume within which audio objects can move. A set-up process for rendering audio data may involve receiving reproduction speaker location data and pre-computing gain values for each of the virtual sources according to the reproduction speaker location data and each virtual source location. The gain values may be stored and used during “run time,” during which audio reproduction data are rendered for the speakers of the reproduction environment. During run time, for each audio object, contributions from virtual source locations within an area or volume defined by the audio object position data and the audio object size data may be computed. A set of gain values for each output channel of the reproduction environment may be computed based, at least in part, on the computed contributions. Each output channel may correspond to at least one reproduction speaker of the reproduction environment.
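The set-up/run-time split described in this abstract can be sketched as follows. The inverse-distance panning law, the spherical object volume, and the power normalisation are illustrative assumptions, not the claimed gain computation.

```python
import math

def precompute_gains(virtual_sources, speakers):
    """Set-up phase: one gain per (virtual source, speaker) pair.
    Illustrative inverse-distance law; the actual panning law may differ."""
    table = {}
    for i, vs in enumerate(virtual_sources):
        for j, sp in enumerate(speakers):
            table[(i, j)] = 1.0 / (1.0 + math.dist(vs, sp))
    return table

def render_gains(obj_pos, obj_size, virtual_sources, speakers, table):
    """Run time: sum the pre-computed contributions of the virtual sources
    that fall inside the (here spherical) volume given by position and size."""
    gains = [0.0] * len(speakers)
    for i, vs in enumerate(virtual_sources):
        if math.dist(vs, obj_pos) <= obj_size:
            for j in range(len(speakers)):
                gains[j] += table[(i, j)]
    norm = math.sqrt(sum(g * g for g in gains)) or 1.0
    return [g / norm for g in gains]  # power-normalised per output channel

# Example: a 2-speaker layout and a coarse line of virtual sources
speakers = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
vsources = [(x / 2.0, 0.0, 0.0) for x in range(-4, 5)]
table = precompute_gains(vsources, speakers)
gains = render_gains((0.5, 0.0, 0.0), 1.0, vsources, speakers, table)
```

An object nearer the right speaker yields a larger right-channel gain, while the per-pair gains themselves never change during run time.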
Method and Apparatus for Producing an Acoustic Field
The present invention concerns a method and apparatus for the modulation of an acoustic field for providing tactile sensations. A method of creating haptic feedback using ultrasound is provided. The method comprises the steps of generating a plurality of ultrasound waves with a common focal point using a phased array of ultrasound transducers, the common focal point being a haptic feedback point, and modulating the generation of the ultrasound waves using a waveform selected to produce little or no audible sound at the haptic feedback point.
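Focusing a phased array means delaying each transducer so that all waves arrive at the focal point in phase; the tactile sensation comes from modulating that carrier with a low-frequency envelope. A minimal sketch follows; the 40 kHz carrier, 200 Hz modulation rate, and raised-cosine envelope are typical values assumed here, not taken from the patent.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air (assumed)
CARRIER_HZ = 40_000.0    # common ultrasound transducer frequency (assumed)
MOD_HZ = 200.0           # low-frequency modulation envelope (assumed)

def focus_delays(transducers, focal_point):
    """Per-transducer delays so all waves arrive at the focal point in phase.
    Delays are relative to the farthest transducer (all non-negative)."""
    dists = [math.dist(t, focal_point) for t in transducers]
    far = max(dists)
    return [(far - d) / SPEED_OF_SOUND for d in dists]

def drive_signal(delay, t):
    """Amplitude-modulated carrier for one transducer at time t (seconds)."""
    envelope = 0.5 * (1.0 - math.cos(2 * math.pi * MOD_HZ * t))  # 0..1
    return envelope * math.sin(2 * math.pi * CARRIER_HZ * (t - delay))

# 4-element linear array focusing 10 cm above its centre
array = [(x * 0.01, 0.0, 0.0) for x in range(-2, 2)]
focal = (0.0, 0.0, 0.10)
delays = focus_delays(array, focal)
sample = drive_signal(delays[2], 0.0)
```

The transducer closest to the focal point receives the largest delay, since its wave would otherwise arrive first.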
MULTICHANNEL AUDIO ENHANCEMENT, DECODING, AND RENDERING IN RESPONSE TO FEEDBACK
In some embodiments, a method is provided for performing at least one of enhancement, decoding, or rendering of a multichannel audio signal in response to compression feedback or feedback from a smart amplifier. For example, the compression feedback may be indicative of the amount of compression applied to each of multiple frequency bands of the audio signal, or of an enhanced audio signal generated in response thereto. The enhancement (e.g., bass enhancement) may include dynamic routing of audio content of the input audio signal between channels of an enhanced audio signal generated in response thereto. The enhancement and compression may be performed on a per-speaker-class basis. Other aspects are systems (e.g., programmed processors) and devices (e.g., devices having physically limited bass reproduction capabilities, such as a notebook or laptop computer, tablet, soundbar, mobile phone, or other device with small speakers) configured to perform any embodiment of the method.
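The dynamic routing driven by per-band compression feedback can be illustrated as below. The 6 dB threshold, the channel names, and the energy-conserving redistribution rule are assumptions for the sketch, not the patented routing logic.

```python
def route_bass(channels, compression_db, threshold_db=6.0):
    """Route low-band content away from channels whose smart-amp feedback
    reports heavy low-band compression (i.e. a protection-limited speaker)
    toward less-compressed channels.

    channels: {name: low_band_level}  (linear levels, illustrative)
    compression_db: {name: low_band compression reported by the amplifier}
    Returns the re-routed per-channel low-band levels.
    """
    routed = dict(channels)
    capable = [n for n, c in compression_db.items() if c < threshold_db]
    if not capable:
        return routed  # nowhere to route; leave the signal unchanged
    for name, comp in compression_db.items():
        if comp >= threshold_db:
            share = routed[name] / len(capable)
            for dest in capable:
                routed[dest] += share   # redistribute the bass energy
            routed[name] = 0.0
    return routed

levels = {"woofer_L": 0.2, "woofer_R": 0.2, "tweeter": 0.4}
feedback = {"woofer_L": 1.0, "woofer_R": 2.0, "tweeter": 9.0}  # dB compression
out = route_bass(levels, feedback)
```

Here the heavily compressed "tweeter" class loses its bass content, which is split between the two less-compressed channels so that total level is conserved.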
Audio output apparatus and method of controlling the same
An audio output apparatus is disclosed. The audio output apparatus outputs a multi-channel audio signal through a plurality of speakers disposed at different locations, and includes an input interface and a processor. The processor is configured to, based on the multi-channel audio signal being received through the input interface, obtain scene information on a type of audio included in the multi-channel audio signal and sound image angle information about an angle formed, relative to a virtual user, by a sound image of the type of audio included in the multi-channel audio signal, and to generate an output signal to be output through the plurality of speakers from the multi-channel audio signal based on the obtained scene information and sound image angle information. The type of audio includes at least one of a sound effect, shouting, music, and voice, and the number of the plurality of speakers is equal to or greater than the number of channels of the multi-channel audio signal.
Sound spatialisation method
A sound spatialisation method includes determining digital processing parameters to be applied to sound signals to be broadcast by a set of at least two loudspeakers in order to reproduce a virtual sound source at a desired position, and reproducing the sound signals through the loudspeakers, during which the digital processing parameters are applied to the sound signals. The sound spatialisation method also includes defining a trajectory as a set of N points, with each pair of consecutive points connected by a curve, and positioning, during which the desired position of the virtual sound source is defined on the trajectory.
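A trajectory of N points with consecutive points joined by curves can be parameterised by a single scalar, as sketched below. The choice of Catmull-Rom segments is an assumption for illustration; the patent does not name a specific curve family.

```python
def catmull_rom(p0, p1, p2, p3, u):
    """Catmull-Rom curve segment between p1 and p2, u in [0, 1].
    One illustrative choice of curve connecting consecutive points."""
    return tuple(
        0.5 * ((2 * b) + (-a + c) * u
               + (2 * a - 5 * b + 4 * c - d) * u * u
               + (-a + 3 * b - 3 * c + d) * u ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def position_on_trajectory(points, s):
    """Desired virtual-source position at normalised abscissa s in [0, 1]
    along a trajectory defined by N points."""
    n = len(points) - 1                 # number of curve segments
    seg = min(int(s * n), n - 1)
    u = s * n - seg
    p0 = points[max(seg - 1, 0)]        # clamp tangent points at the ends
    p3 = points[min(seg + 2, len(points) - 1)]
    return catmull_rom(p0, points[seg], points[seg + 1], p3, u)

traj = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (3.0, 1.0)]
start = position_on_trajectory(traj, 0.0)
end = position_on_trajectory(traj, 1.0)
```

Sweeping s from 0 to 1 moves the desired position smoothly from the first trajectory point to the last, passing through every intermediate point.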
APPARATUS AND METHOD FOR RENDERING AN AUDIO SCENE USING VALID INTERMEDIATE DIFFRACTION PATHS
An apparatus for rendering an audio scene comprising an audio source at an audio source position and a plurality of diffracting objects comprises: a diffraction path provider for providing a plurality of intermediate diffraction paths through the plurality of diffracting objects, an intermediate diffraction path having a starting point, an output edge among the plurality of diffracting objects, and associated filter information for the intermediate diffraction path; and a renderer for rendering the audio source at a listener position, wherein the renderer is configured for determining, based on the output edges of the intermediate diffraction paths and the listener position, one or more valid intermediate diffraction paths from the audio source position to the listener position, determining, for each valid intermediate diffraction path of the one or more valid intermediate diffraction paths, a filter representation for a full diffraction path, and calculating audio output signals for the audio scene using an audio signal associated with the audio source and the filter representation for each full diffraction path.
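The run-time validity test — checking which pre-computed paths' output edges can actually reach the current listener position — can be illustrated with a simple occlusion check. Representing the output edge by a single 2-D point, the listener by a point, and the diffracting geometry by circles are simplifying assumptions; the claimed apparatus works on edges of arbitrary scene geometry.

```python
import math

def seg_point_dist(a, b, p):
    """Shortest distance from point p to segment a-b (2-D)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    l2 = dx * dx + dy * dy
    t = 0.0 if l2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / l2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def valid_paths(paths, listener, obstacles):
    """Keep intermediate paths whose output edge has an unobstructed line
    to the listener.  paths: [(path_id, output_edge_point)];
    obstacles: [(centre, radius)] standing in for the diffracting objects."""
    valid = []
    for pid, edge in paths:
        blocked = any(
            seg_point_dist(edge, listener, c) < r for c, r in obstacles
        )
        if not blocked:
            valid.append(pid)
    return valid

obstacles = [((0.0, 0.0), 1.0)]                    # one diffracting object
paths = [("left", (-1.5, 0.0)), ("top", (0.0, 1.5))]
listener = (1.5, 0.0)
ok = valid_paths(paths, listener, obstacles)
```

Only the path whose output edge "sees" the listener survives; in the full apparatus, each surviving path is then combined with its stored filter information into a filter for the complete source-to-listener diffraction path.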
GENERATING AN AUDIO SIGNAL ASSOCIATED WITH A VIRTUAL SOUND SOURCE
A method for generating an audio signal associated with a virtual sound source is disclosed. The method comprises obtaining an input audio signal x(t) and modifying the input audio signal x(t) to obtain a modified audio signal. The latter step comprises performing a signal delay operation. Optionally, modifying the input audio signal comprises a signal inverting operation and/or a signal amplification or attenuation and/or a signal feedback operation. The method further comprises generating the audio signal y(t) based on a combination, e.g. a summation, of the input audio signal x(t) and the modified audio signal.
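The combination of x(t) with its delayed (and optionally inverted, scaled, or fed-back) copy amounts to a comb-filter structure in the sample domain, sketched below. The parameter names and the impulse example are illustrative; the abstract does not fix concrete values.

```python
def virtual_source_signal(x, delay, gain=1.0, invert=False, feedback=0.0):
    """y[n] = x[n] + m[n], where m is x delayed by `delay` samples,
    optionally inverted, scaled by `gain`, and optionally recirculated
    with coefficient `feedback` - the combination described above."""
    sign = -1.0 if invert else 1.0
    m = [0.0] * len(x)
    for n in range(len(x)):
        delayed = x[n - delay] if n >= delay else 0.0
        fb = m[n - delay] if n >= delay else 0.0
        m[n] = sign * gain * delayed + feedback * fb
    return [xi + mi for xi, mi in zip(x, m)]

impulse = [1.0] + [0.0] * 7
y = virtual_source_signal(impulse, delay=2, gain=0.5)              # one echo
y2 = virtual_source_signal(impulse, delay=2, gain=0.5, feedback=0.5)
```

Without feedback the impulse response contains the direct sound plus one delayed copy; with feedback, each recirculation adds a further, progressively attenuated copy.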
SIGNAL PROCESSING SIMULATION METHOD AND SIGNAL PROCESSING SIMULATOR
A signal processing simulation method includes obtaining a designation of an audio signal processing apparatus that performs first signal processing on a first audio signal input, obtaining a designation of a signal processing component, obtaining settings of the designated signal processing component, and constructing a virtual signal processing device having the designated signal processing component, the constructed virtual signal processing device corresponding to the designated audio signal processing apparatus. The virtual signal processing device inputs a second audio signal to the designated signal processing component. The designated signal processing component performs second signal processing, according to the obtained settings, on the second audio signal, and outputs the second audio signal on which the second signal processing has been performed. The virtual signal processing device includes a logic processor that operates by using the second audio signal inputted to the designated signal processing component as a trigger.
Object-Based Audio Conversion
An audio system that is configured to convert a plurality of audio input channels to object-based audio, and a related computer program product. Correlation between input channels and energy balance between the input channels are determined. The determined correlation and energy balance are mapped to output three-dimensional spatial locations.
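The two features named in this abstract — inter-channel correlation and energy balance — and one possible mapping to a 3-D location can be sketched as follows. The specific mapping (balance steering azimuth, low correlation raising the object) is an assumption for illustration, not the patented mapping.

```python
import math

def channel_features(left, right):
    """Inter-channel correlation and energy balance for one signal block."""
    e_l = sum(s * s for s in left)
    e_r = sum(s * s for s in right)
    denom = math.sqrt(e_l * e_r)
    corr = sum(a * b for a, b in zip(left, right)) / denom if denom else 0.0
    balance = (e_r - e_l) / (e_l + e_r) if (e_l + e_r) else 0.0  # -1..+1
    return corr, balance

def map_to_position(corr, balance, width=math.pi / 2):
    """Map the features to a 3-D object position: balance steers azimuth,
    low correlation lifts the object out of the listener plane (assumed)."""
    azimuth = balance * width / 2
    height = 1.0 - max(corr, 0.0)       # decorrelated -> raised / diffuse
    return (math.sin(azimuth), height, math.cos(azimuth))

left = [1.0, 0.5, -0.5]
right = [0.0, 0.0, 0.0]
corr, bal = channel_features(left, right)
pos = map_to_position(corr, bal)
```

A block whose energy sits entirely in the left channel yields a balance of -1 and an object position on the listener's left.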
Sound field adjustment
A device includes one or more processors configured to receive, via wireless transmission from a streaming device, encoded ambisonics audio data representing a sound field. The one or more processors are also configured to perform decoding of the ambisonics audio data to generate decoded ambisonics audio data. The decoding of the ambisonics audio data includes base layer decoding of a base layer of the encoded ambisonics audio data and selectively includes enhancement layer decoding in response to an amount of movement of the device. The one or more processors are further configured to adjust the decoded ambisonics audio data to alter the sound field based on data associated with at least one of a translation or an orientation associated with the movement of the device. The one or more processors are also configured to output the adjusted decoded ambisonics audio data to two or more loudspeakers for playback.