Patent classifications
H04S2400/15
Method and system for audio critical listening and evaluation
Disclosed herein is a method of constructing and utilizing a sound engineering evaluation and comparison process that yields improved finished results. The method entails applying a high-pass filter during listening evaluation of recorded music or sounds, including checks for consistency in low-frequency mixing, so that an engineer has a tool for implementing changes in relation to the filtered results in order to accommodate the sensitivities of the human ear (with the optional inclusion of a comparison method to provide further enhanced results and the avoidance of biases). In such a manner, a facilitating method for sound engineering mixing adjustments that provides such accommodations is provided, for improved sound recordings distributed within on-line or recorded-product frameworks.
Sound recording apparatus, sound system, sound recording method, and carrier means
An apparatus, system, and method, each of which: acquires sound data generated from a plurality of sound signals collected at a plurality of microphones; acquires, from one or more sensors, a result of detecting a position of the sound recording apparatus at a time point during a time period when the plurality of sound signals is collected; and stores, in a memory, position data indicating the position of the sound recording apparatus detected at the time point, and sound data generated based on a plurality of sound signals collected at the microphones at the time point at which the position was detected, in association with each other.
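The core of the abstract above is associating each block of multi-microphone sound data with the recorder position detected at the same time point. A minimal sketch of that association, using a hypothetical in-memory store (the class and field names are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PositionTaggedFrame:
    """One recorded frame: multi-channel samples paired with the
    recorder position detected at the same time point."""
    timestamp_s: float
    position_xyz: tuple   # (x, y, z) from the position sensor(s)
    samples: tuple        # one sample block per microphone

# Minimal in-memory "store", keyed by time point.
store = {}

def record_frame(timestamp_s, position_xyz, samples):
    """Store position data and sound data in association with each other."""
    frame = PositionTaggedFrame(timestamp_s, tuple(position_xyz), tuple(samples))
    store[timestamp_s] = frame
    return frame

frame = record_frame(0.02, (1.0, 0.5, 0.0), ((0.1, 0.2), (0.05, 0.15)))
```

A real implementation would use sensor-specific position types and persistent storage; the point is only that each sound block is retrievable together with the position at which it was captured.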
Ear-mountable listening device with voice direction discovery for rotational correction of microphone array outputs
Techniques described herein include generating first audio signals representative of sounds emanating from an environment and captured with an array of microphones disposed within an ear-mountable listening device. A rotational position of the array of microphones is determined. A rotational correction is applied to the first audio signals to generate a second audio signal. The rotational correction is based at least in part upon the determined rotational position. A speaker of the ear-mountable listening device is driven with the second audio signal to output audio into an ear.
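The abstract does not specify how the rotational correction is computed. One simple way to realize it for a circular microphone array is to remap channels so the output is expressed in a fixed world frame after the array has rotated; the nearest-microphone remapping below is an illustrative sketch, not the patented method:

```python
import math

def rotational_correction(signals, mic_angles_rad, rotation_rad):
    """Reassign channels of a circular microphone array so the corrected
    output is expressed in a fixed (world) frame after the array has
    rotated by rotation_rad.

    signals:        list of per-microphone sample lists, one per capsule
    mic_angles_rad: nominal angle of each capsule in the array frame

    A coarse nearest-microphone remapping; a real device would
    interpolate between capsules or re-steer a beamformer instead.
    """
    two_pi = 2 * math.pi
    n = len(signals)
    corrected = []
    for world_angle in mic_angles_rad:
        # Which physical capsule currently points at this world angle?
        target = (world_angle - rotation_rad) % two_pi
        nearest = min(range(n), key=lambda i: min(
            abs(mic_angles_rad[i] - target),
            two_pi - abs(mic_angles_rad[i] - target)))
        corrected.append(signals[nearest])
    return corrected
```

For a four-capsule array at 0°, 90°, 180°, 270°, a 90° device rotation shifts every channel by one position, so sounds keep their apparent world direction even as the wearer's head turns.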
AUTOMATIC SPATIAL CALIBRATION FOR A LOUDSPEAKER SYSTEM USING ARTIFICIAL INTELLIGENCE AND NEARFIELD RESPONSE
One embodiment provides a method of automatic spatial calibration. The method comprises estimating one or more distances from one or more loudspeakers to a listening area based on a machine learning model and one or more propagation delays from the one or more loudspeakers to the listening area. The method further comprises estimating one or more incidence angles of the one or more loudspeakers relative to the listening area based on the one or more propagation delays. The method further comprises applying spatial perception correction to audio reproduced by the one or more loudspeakers based on the one or more distances and the one or more incidence angles. The spatial perception correction comprises delay and gain compensation that corrects misplacement of any of the one or more loudspeakers relative to the listening area.
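The delay-to-distance step and the delay-and-gain compensation described above follow from standard acoustics (distance = speed of sound × propagation delay; free-field level falls 6 dB per doubling of distance). A sketch under those textbook assumptions, leaving out the machine-learning and incidence-angle estimation the patent adds:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_from_delay(delay_s):
    """Loudspeaker-to-listener distance implied by a propagation delay."""
    return SPEED_OF_SOUND * delay_s

def delay_gain_compensation(distances_m):
    """Per-speaker compensation so all arrivals align at the listening area:
    delay each speaker to match the farthest one, and attenuate nearer
    speakers to match the farthest (inverse-distance free-field assumption).
    Returns (extra_delay_s, gain_db) lists, one entry per speaker."""
    d_max = max(distances_m)
    extra_delay_s = [(d_max - d) / SPEED_OF_SOUND for d in distances_m]
    gain_db = [20 * math.log10(d / d_max) for d in distances_m]
    return extra_delay_s, gain_db
```

With speakers at 2 m and 4 m, the nearer speaker gets about 5.8 ms of extra delay and roughly -6 dB of gain, so both reach the listening area aligned in time and level.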
WIRELESS CHARGING OF DEVICES
Disclosed is a method of operating an array of receiver devices as a phased array. The receiver devices are in a fixed mutual relationship within a zone and each receiver device comprises a photovoltaic element. The method involves receiving a signal from within the zone at a plurality of the receiver devices to generate a plurality of received signals and processing the received signals using at least one phase difference therebetween. The method also involves directing a beam of light from a unit located within the zone to the photovoltaic elements, thereby providing power to said receiver devices. The invention extends to an array of transmitter devices and to an array of both transmitter and receiver devices.
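The abstract only says the received signals are processed "using at least one phase difference therebetween". For a two-element phased array in fixed mutual relationship, the textbook relation between phase difference and angle of arrival is sin(θ) = Δφ·λ / (2π·d); the sketch below illustrates that standard relation, not the patent's specific processing:

```python
import math

def phase_difference_aoa(phase_diff_rad, spacing_m, freq_hz, c=3.0e8):
    """Angle of arrival (radians from broadside) for two receivers a
    fixed distance apart, from the measured phase difference of a
    narrowband signal. Clamps to [-1, 1] against measurement noise."""
    wavelength = c / freq_hz
    s = phase_diff_rad * wavelength / (2 * math.pi * spacing_m)
    return math.asin(max(-1.0, min(1.0, s)))
```

At half-wavelength spacing, a phase difference of π corresponds to an endfire arrival (90°), and π/2 corresponds to 30°, which lets the array steer its response back toward the in-zone transmitter.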
AI-BASED DJ SYSTEM AND METHOD FOR DECOMPOSING, MIXING AND PLAYING OF AUDIO DATA
The present invention relates to a method for processing and playing audio data comprising the steps of receiving mixed input data and playing recombined output data. Furthermore, the invention relates to a device 10 for processing and playing audio data, preferably DJ equipment, comprising an audio input unit for receiving a mixed input signal, a recombination unit 32 and a playing unit 34 for playing recombined output data. In addition, the present invention relates to a method and a device for representing audio data, e.g. on a display.
Audio Representation and Associated Rendering
An apparatus for immersive audio communication including circuitry configured to: receive at least a first audio data stream and a second audio data stream, wherein at least one of the first and second audio data streams includes a spatial audio stream to enable immersive audio during a communication; determine a type of each of the first and second audio data streams to identify which of the received first and second audio data streams includes the spatial audio stream; process the second audio data stream with at least one parameter dependent on the determined type; and render the first audio data stream and the processed second audio data stream.
Time domain neural networks for spatial audio reproduction
A device for reproducing spatial audio using a machine learning model may include at least one processor configured to receive multiple audio signals corresponding to a sound scene captured by respective microphones of a device. The at least one processor may be further configured to provide the multiple audio signals to a machine learning model, the machine learning model having been trained based at least in part on a target rendering configuration. The at least one processor may be further configured to provide, responsive to providing the multiple audio signals to the machine learning model, multichannel audio signals that comprise a spatial reproduction of the sound scene in accordance with the target rendering configuration.
AUDIO PROCESSING METHOD, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
An audio processing method includes: obtaining relative attitude information between a lens and a plurality of microphones, where the lens is movable relative to at least one of the plurality of microphones; obtaining original audio signals acquired by the plurality of microphones; determining weight information corresponding to the original audio signals based on the relative attitude information; and synthesizing the original audio signals based on the weight information to obtain a target audio signal, where the target audio signal is played with images captured by the lens. The method disclosed in this application resolves a problem that a sound source orientation indicated by recorded audio does not match the images captured by the lens.
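The synthesis step above reduces to a weighted combination of the original microphone signals, with the weights driven by the lens-to-microphone attitude. A minimal sketch of that combination, where the weight values themselves are stand-ins (the patent derives them from the relative attitude information):

```python
def synthesize(signals, weights):
    """Combine per-microphone signals into one target audio signal as a
    weighted sum, sample by sample.

    signals: list of equal-length per-microphone sample lists
    weights: one weight per microphone (here supplied directly; the
             method in the abstract derives them from the relative
             attitude between the lens and the microphones)
    """
    assert len(signals) == len(weights)
    n = len(signals[0])
    return [sum(w * sig[i] for w, sig in zip(weights, signals))
            for i in range(n)]
```

Shifting weight toward the microphones facing the same direction as the lens is what keeps the apparent sound-source orientation matched to the captured images.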
Microphone Array
Microphone arrays comprise several microphone capsules whose outputs are combined electronically for directional recording of sound. The directional and frequency properties of the microphone array depend on the number and positions of the microphone capsules. In order to obtain the smallest possible microphone array with only few microphone capsules which nevertheless has an essentially uniform directional and frequency dependence over a speech frequency range, is scalable, and is robust against small incorrect positioning of the capsules, fifteen or twenty-one microphone capsules (K.sub.15,11-K.sub.15,35, K.sub.21,11-K.sub.21,37) are arranged on a carrier such that they lie on three similar branches, each with the same number of microphone capsules, rotated against each other by 120°. Each of the microphone capsules lies on a corner of a triangle of a grid in a flat isometric coordinate system with three axes rotated by 120° against each other, forming a grid of equilateral triangles.
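The geometry described above, three identical branches of capsules on an equilateral-triangle grid, rotated against each other by 120°, can be sketched as below. The particular five-capsule branch is invented for illustration, since the abstract does not give the grid indices of the capsules:

```python
import math

def rotate(p, angle_rad):
    """Rotate a 2-D point about the origin."""
    x, y = p
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y)

def array_positions(branch, unit=1.0):
    """Place three copies of one branch, rotated by 0°, 120° and 240°.

    branch: capsule positions as (a, b) integer coordinates on an
            isometric grid whose two basis axes are 120° apart, so every
            grid cell is an equilateral triangle. The branch layout is a
            made-up example, not taken from the patent.
    """
    u = (1.0, 0.0)
    v = (math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3))
    pts = [(unit * (a * u[0] + b * v[0]), unit * (a * u[1] + b * v[1]))
           for a, b in branch]
    out = []
    for k in range(3):
        out += [rotate(p, k * 2 * math.pi / 3) for p in pts]
    return out

# A hypothetical 5-capsule branch -> 15 capsules in total.
capsules = array_positions([(1, 0), (2, 0), (2, 1), (3, 1), (3, 2)])
```

Because each branch is an exact 120° rotation of the others, the array inherits three-fold rotational symmetry, which is what gives the essentially uniform directional behavior the abstract claims.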