Patent classifications
H04S7/00
METHOD FOR PROCESSING SOUND ON BASIS OF IMAGE INFORMATION, AND CORRESPONDING DEVICE
A method of processing an audio signal including at least one audio object based on image information includes: obtaining the audio signal and a current image that corresponds to the audio signal; dividing the current image into at least one block; obtaining motion information of the at least one block; generating index information including information for giving a three-dimensional (3D) effect in at least one direction to the at least one audio object, based on the motion information of the at least one block; and processing the audio object, in order to give the 3D effect in the at least one direction to the audio object, based on the index information.
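The block-motion step above can be illustrated with a minimal sketch: divide two consecutive frames into blocks, use mean absolute frame difference as a crude per-block motion measure, and derive a coarse left/right index for steering the 3D effect. The block size, the frame-difference motion estimate, and the three-valued index are illustrative assumptions, not the patented method.

```python
import numpy as np

def block_motion_index(prev_frame, cur_frame, block=4):
    """Divide frames into blocks, estimate per-block motion energy by
    mean absolute frame difference, and return a coarse direction index
    (-1 = left, 0 = neutral, +1 = right) for biasing a 3D audio effect."""
    h, w = cur_frame.shape
    rows, cols = h // block, w // block
    energy = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            p = prev_frame[i * block:(i + 1) * block, j * block:(j + 1) * block]
            c = cur_frame[i * block:(i + 1) * block, j * block:(j + 1) * block]
            energy[i, j] = np.abs(c.astype(float) - p.astype(float)).mean()
    # Compare motion energy in the left and right halves of the image.
    left = energy[:, : cols // 2].sum()
    right = energy[:, cols // 2:].sum()
    if right > left:
        return 1
    if left > right:
        return -1
    return 0
```

A real implementation would use proper motion vectors per block and a richer index (direction and depth), but the shape of the pipeline is the same.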
Spatial audio navigation
Methods and apparatus for spatial audio navigation that may, for example, be implemented by mobile multipurpose devices. A spatial audio navigation system provides navigational information in audio form to direct users to target locations. The system uses directionality of audio played through a binaural audio device to provide navigational cues to the user. A current location, target location, and map information may be input to pathfinding algorithms to determine a real world path between the user's current location and the target location. The system may then use directional audio played through a headset to guide the user on the path from the current location to the target location. The system may implement one or more of several different spatial audio navigation methods to direct a user when following a path using spatial audio-based cues.
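The core of such a system is mapping the bearing of the next path waypoint, relative to the user's heading, into a directional audio cue. A minimal sketch, simplifying full binaural rendering to a stereo pan value; the coordinate convention and the 90-degree hard-pan span are assumptions:

```python
import math

def bearing_to_pan(user_pos, heading_deg, waypoint):
    """Return a stereo pan in [-1, 1] pointing toward the waypoint,
    where 0 degrees of heading means facing +y ("north")."""
    dx = waypoint[0] - user_pos[0]
    dy = waypoint[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))            # absolute bearing
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    # 90 degrees or more to either side maps to a hard left/right pan.
    return max(-1.0, min(1.0, rel / 90.0))
```

In a full system this pan (or, better, the relative angle itself) would drive an HRTF-based binaural renderer in the headset rather than a simple stereo pan.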
Manipulation of Playback Device Response Using Signal Processing
An example playback device receives left and right channels of audio content and generates a center channel of the audio content by combining at least a portion of the left and right channels. The playback device generates first and second side channels of the audio content by combining the center channel and a difference of the left channel and the right channel and combining the center channel and an inverse of the difference of the left channel and the right channel, respectively. The playback device plays back the center channel of the audio content according to a first radiation pattern having a maximum aligned with a first direction, the first side channel according to a second radiation pattern having a maximum aligned with a second direction, and the second side channel according to a third radiation pattern having a maximum aligned with a third direction.
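The channel derivation described above is a mid/side-style matrixing and can be written directly. The 0.5 gain on the center channel is an assumption (the abstract only says the channels are "combined"):

```python
import numpy as np

def derive_channels(left, right):
    """Derive center and two side channels from a stereo pair,
    following the sum/difference matrixing sketched in the abstract."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    center = 0.5 * (left + right)   # sum (center) channel
    diff = left - right             # difference channel
    side1 = center + diff           # center combined with the difference
    side2 = center - diff           # center combined with the inverse difference
    return center, side1, side2
```

Note that side1 = 1.5·L − 0.5·R and side2 = 1.5·R − 0.5·L under this choice of gains, so each side channel emphasizes one input channel while partially cancelling the other.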
Audio content playback method and apparatus for portable terminal
An audio content playback method for a portable terminal. During a group's simultaneous playback of audio content, the method includes checking a channel that is supportable by the audio content currently being played back. The method includes allocating a channel to each of the devices included in the group, based on position information of each device or based on an input state in a user interface environment that is preset for per-device channel allocation, and transmitting the allocated channel information to each device in the group so that the device can select its allocated channel and play the audio content.
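The position-based allocation step can be sketched simply: order the group's devices by their lateral position relative to the listener and assign the supported channels left to right. The flat x-offset position model and the device/channel names are illustrative assumptions.

```python
def allocate_channels(device_positions, channels):
    """Assign channels to devices by lateral position.

    device_positions: {device_id: x_offset}, negative = listener's left.
    channels: channel labels ordered left to right, e.g. ["L", "C", "R"].
    Assumes one channel per device.
    """
    ordered = sorted(device_positions, key=device_positions.get)
    return dict(zip(ordered, channels))
```

The allocation table would then be transmitted to each device, which plays only its assigned channel.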
Method and apparatus for space of interest of audio scene
Aspects of the disclosure include methods, apparatuses, and non-transitory computer-readable storage mediums for decoding audio data of an audio scene. One apparatus includes processing circuitry that receives first audio source data and second audio source data. The first audio source data corresponds to a space of interest in the audio scene and the second audio source data does not correspond to the space of interest in the audio scene. The space of interest in the audio scene is represented by at least one of a listener space, an audio channel, or an audio object. The processing circuitry decodes the first audio source data based on the space of interest.
Graphical user interface and parametric equalizer in gaming systems
A system that incorporates the subject disclosure may include, for example, a gaming system that cooperates with a graphical user interface to enable user modification and enhancement of one or more audio streams associated with the gaming system. In embodiments, the audio streams may include a game audio stream, a chat audio stream of conversation among players of a video game, and a microphone audio stream of a player of the video game. Additional embodiments are disclosed.
Customized automated audio tuning
An example method of operation may include identifying, in a particular room environment, a number of speakers and one or more microphones on a network controlled by a controller and amplifier, providing test signals to play sequentially from each amplifier channel of the amplifier and the speakers, monitoring the test signals from the one or more microphones simultaneously to detect operational speakers and amplifier channels, providing additional test signals to the speakers to determine tuning parameters, detecting the additional test signals at the one or more microphones controlled by the controller, and automatically establishing a background noise level and noise spectrum of the room environment based on the detected additional test signals.
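The final step, establishing a background noise level from microphone captures, reduces to a level measurement over a quiet interval. A minimal sketch computing a broadband RMS level in dBFS (the spectrum estimate would additionally require a band or FFT analysis, omitted here); the [-1, 1] sample convention and the silence floor are assumptions:

```python
import math

def noise_level_dbfs(samples):
    """Estimate the broadband background noise level, in dBFS,
    from microphone samples normalized to [-1, 1]."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Floor the RMS to avoid log(0) on digital silence.
    return 20.0 * math.log10(max(rms, 1e-12))
```

The tuning parameters (e.g. per-channel gains and EQ) would then be chosen so that playback stays appropriately above this measured noise floor.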
Audio Response Based on User Worn Microphones to Direct or Adapt Program Responses System and Method
A system, method, and wireless earpieces for communicating with a virtual reality headset. A position and an orientation of a head of a user are detected utilizing at least wireless earpieces. Audio content is received. The audio content is enhanced utilizing the position and the orientation of the head of the user. The audio content is immediately delivered to the user. The method may further include communicating the position and the orientation of the head of the user to the virtual reality headset. The audio content may be based on the orientation and position of the head of the user.
Moving an emoji to move a location of binaural sound
During an electronic communication between a first user and a second user, an electronic device of the second user displays a graphical representation at a location selected by the first user. The graphical representation provides an indication to the second user of where binaural sound associated with the graphical representation will externally localize to the second user. Subsequent movement of the graphical representation changes the location where the binaural sound externally localizes to the second user.
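The link between the graphical representation's on-screen position and the sound's external localization can be sketched as a mapping from screen coordinates to an azimuth fed to a binaural renderer. The linear mapping and the 90-degree span are illustrative assumptions:

```python
def screen_to_azimuth(x, screen_w, max_azimuth=90.0):
    """Map a horizontal screen coordinate to the azimuth (degrees) at
    which the associated binaural sound should externally localize:
    screen center -> 0 (front), edges -> +/- max_azimuth."""
    cx = screen_w / 2.0
    azimuth = (x - cx) / cx * max_azimuth
    return max(-max_azimuth, min(max_azimuth, azimuth))
```

When the user drags the graphical representation, recomputing this azimuth (and an analogous elevation from the vertical coordinate) and re-rendering with the corresponding HRTFs moves the perceived sound source accordingly.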
MEDIA PLAYBACK BASED ON SENSOR DATA
Example techniques relate to playback based on acoustic signals in a system including a first network device and a second network device. A first network device may detect a presence of a user using a camera and/or infrared sensors. The first network device sends, in response to detecting the presence of the user, a particular signal via the first network interface. The second network device receives data corresponding to the particular signal and plays back an audio output corresponding to the particular signal.