Patent classifications
H04S7/303
THREE-DIMENSIONAL AUDIO SYSTEMS
A three-dimensional sound generation system includes one or more processors of a computing device configured to: receive sound tracks, each comprising one or more sound sources, each sound source corresponding to one or more respective sound categories; receive or determine a first configuration in a three-dimensional space, the first configuration comprising a listener position and a computing device location relative to the listener position; determine a second configuration comprising a change to at least one of the listener position or the computing device location relative to the listener position; generate, using the one or more sound tracks and the second configuration, one or more channels of sound signals; and provide the one or more channels of sound signals to drive one or more sound generation devices to generate a three-dimensional sound field.
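The channel-generation step above can be sketched as a toy stereo pan driven by the listener position and the device location; the two-channel layout, the coordinate convention, and the sine-law pan are illustrative assumptions, not taken from the patent:

```python
import math

def render_channels(sources, listener_pos, device_pos):
    """Toy two-channel render: the listener-to-device axis defines
    "forward"; each source is panned by its azimuth from that axis.
    sources: list of ((x, y), sample_value) pairs."""
    fwd = math.atan2(device_pos[1] - listener_pos[1],
                     device_pos[0] - listener_pos[0])
    left = right = 0.0
    for pos, sample in sources:
        az = math.atan2(pos[1] - listener_pos[1],
                        pos[0] - listener_pos[0]) - fwd
        pan = (math.sin(az) + 1.0) / 2.0   # 0 = full left, 1 = full right
        left += sample * (1.0 - pan)
        right += sample * pan
    return left, right
```

A change in either position (the "second configuration") simply re-runs the same pan with the new geometry, which is why the abstract distinguishes the first and second configurations.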
APPARATUS AND METHOD FOR DETERMINING VIRTUAL SOUND SOURCES
An acoustic image source model for early reflections in a room is generated by iteratively mirroring (305) rooms around boundaries (e.g. walls) of the rooms of the previous iteration. The boundaries around which to mirror in each iteration are determined (303) by a specific selection criterion, including requirements that mirror directions cannot be reversed, cannot lie in an excluded direction, and cannot be repeated unless in a continuous series of mirrorings.
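The selection criterion can be sketched as an enumeration over the walls of a shoebox room; the wall names, the `OPPOSITE` map, and the reading of "repeated only in a continuous series" as "a wall may recur only in a contiguous run" are illustrative assumptions, not taken from the patent:

```python
# Shoebox room: the six mirror directions come in opposing pairs.
OPPOSITE = {"left": "right", "right": "left",
            "front": "back", "back": "front",
            "floor": "ceiling", "ceiling": "floor"}

def mirror_sequences(order, excluded=frozenset()):
    """Enumerate admissible wall sequences up to the given reflection
    order, under the abstract's selection rules:
      * a mirror direction cannot be immediately reversed,
      * excluded directions are never used,
      * a direction may only repeat as part of a contiguous run."""
    results = []

    def extend(seq):
        if seq:
            results.append(tuple(seq))
        if len(seq) == order:
            return
        for wall in OPPOSITE:
            if wall in excluded:
                continue
            if seq and wall == OPPOSITE[seq[-1]]:
                continue        # reversal would map back into the last room
            if wall in seq and seq[-1] != wall:
                continue        # non-contiguous repeat is pruned
            extend(seq + [wall])

    extend([])
    return results
```

Pruning reversals and non-contiguous repeats keeps the tree of mirrored rooms from revisiting geometry it has already covered, which is the point of the criterion.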
Apparatus, Method, or Computer Program for Processing an Encoded Audio Scene using a Parameter Conversion
An apparatus for processing an encoded audio scene representing a sound field related to a virtual listener position, the encoded audio scene including information on a transport signal and a first set of parameters related to the virtual listener position, includes: a parameter converter for converting the first set of parameters into a second set of parameters related to a channel representation comprising two or more channels for reproduction at predefined spatial positions; and an output interface for generating a processed audio scene using the second set of parameters and the information on the transport signal.
AUDIO SCENE CHANGE SIGNALING
There is disclosed, inter alia, a method for rendering a virtual reality audio scene comprising: receiving information defining a limited area audio scene within the virtual reality audio scene (301), wherein the limited area audio scene defines a subspace of the virtual reality audio scene (304) and the information defines the limited area audio scene by defining the extent to which a user can move within the virtual reality audio scene; determining whether the movement of the user within the limited area audio scene meets a condition of an audio scene change (302); and processing the audio scene change when the movement of the user within the limited area audio scene meets that condition (306).
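The scene-change condition can be sketched as a boundary test on the limited area; the axis-aligned extent, the `threshold` proximity rule, and all names are hypothetical illustrations, since the patent leaves the condition abstract:

```python
from dataclasses import dataclass

@dataclass
class LimitedAreaScene:
    """Axis-aligned extent (in metres) the user may move within."""
    x_min: float
    x_max: float
    z_min: float
    z_max: float

def crosses_scene_change(scene, position, threshold=0.5):
    """True when the user is within `threshold` of the area boundary,
    taken here as the condition that triggers the scene change."""
    x, z = position
    return (x - scene.x_min < threshold or scene.x_max - x < threshold or
            z - scene.z_min < threshold or scene.z_max - z < threshold)
```

When the test fires, the renderer would process the signalled scene change (step 306); otherwise rendering continues within the limited area.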
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
[Object] To provide a new guidance method for voice guidance that can cope with inaccuracy in the obtained user orientation.
[Solving Means] An information processing apparatus according to the present technology includes a control unit. The control unit predicts a user orientation, performs voice guidance that guides a user along a route to a destination on the basis of the predicted user orientation, calculates a degree of reliability of the user orientation, and switches the method for guiding the user in the voice guidance on the basis of the degree of reliability.
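The reliability-based switch can be sketched as a simple threshold selector; the three guidance styles and the threshold values are illustrative assumptions, not taken from the patent:

```python
def guidance_mode(reliability, high=0.8, low=0.4):
    """Pick a guidance style from the reliability (0..1) of the
    estimated user orientation. Thresholds are illustrative."""
    if reliability >= high:
        return "directional"   # orientation trusted: "turn left"
    if reliability >= low:
        return "landmark"      # partially trusted: "head toward the fountain"
    return "absolute"          # untrusted: orientation-free cues, "walk north"
```

The point of the switch is that low-reliability orientation estimates should never drive instructions ("left"/"right") that depend on knowing which way the user faces.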
APPARATUS AND METHOD FOR RENDERING A SOUND SCENE COMPRISING DISCRETIZED CURVED SURFACES
An apparatus for rendering a sound scene having reflection objects and a sound source at a sound source position includes: a geometry data provider for providing an analysis of the reflection objects of the sound scene to determine a reflection object represented by a first polygon and a second, adjacent polygon, where a first image source position is associated with the first polygon and a second image source position with the second polygon, and where the first and second image source positions result in a sequence comprising a first visible zone related to the first image source position, an invisible zone, and a second visible zone related to the second image source position; an image source position generator for generating an additional image source position placed between the first and second image source positions; and a sound renderer for rendering the sound source at the sound source position and, additionally, for rendering the sound source at the first image source position when the listener position is located within the first visible zone, at the additional image source position when the listener position is located within the invisible zone, or at the second image source position when the listener position is located within the second visible zone.
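The zone-dependent selection can be sketched by reducing the geometry to one angular coordinate around the curved surface; representing the visible zones as angle intervals, with the gap between them as the invisible zone, is an illustrative simplification, not the patent's geometry:

```python
def select_image_source(listener_angle, zone1, zone2,
                        src1, src_extra, src2):
    """Pick which image source to render for the reflection, given the
    listener's angular position. zone1/zone2 are (lo, hi) intervals of
    the two visible zones; the gap between them is the invisible zone."""
    lo1, hi1 = zone1
    lo2, hi2 = zone2
    if lo1 <= listener_angle <= hi1:
        return src1          # first polygon's image source is visible
    if lo2 <= listener_angle <= hi2:
        return src2          # second polygon's image source is visible
    if hi1 < listener_angle < lo2:
        return src_extra     # invisible gap: use the additional source
    return None              # outside all zones: no reflection rendered
```

The additional source fills the gap that discretizing a curved surface into flat polygons opens up, so the reflection does not drop out as the listener crosses the invisible zone.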
SIGNAL PROCESSING APPARATUS AND METHOD, AND PROGRAM TO REDUCE CALCULATION AMOUNT BASED ON MUTE INFORMATION
The present technology relates to a signal processing apparatus and method, and a program, that make it possible to reduce the amount of arithmetic operations.
The signal processing apparatus performs, on the basis of audio object mute information indicating whether or not a signal of an audio object is a mute signal, at least one of a decoding process or a rendering process on an object signal of the audio object. The present technology can be applied to a signal processing apparatus.
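The saving can be sketched as a per-object skip driven by the mute flag; the object dictionary layout and the stand-in `decode`/`render` helpers are hypothetical, not the patent's interfaces:

```python
def decode(bitstream):
    """Stand-in decoder: pretend decoding halves each coded value."""
    return [b * 0.5 for b in bitstream]

def render(pcm, position):
    """Stand-in renderer: attach the spatial position to the samples."""
    return {"position": position, "samples": pcm}

def process_objects(objects):
    """Decode and render only objects whose mute flag is clear; a muted
    object skips both stages, which is where the arithmetic saving comes
    from. objects: list of {"mute", "bitstream", "position"} dicts."""
    rendered = []
    for obj in objects:
        if obj["mute"]:
            continue             # skip decode *and* render for mute objects
        pcm = decode(obj["bitstream"])
        rendered.append(render(pcm, obj["position"]))
    return rendered
```

In a scene with many intermittently silent objects, most frames touch only a fraction of the objects, so the per-frame cost scales with the number of audible objects rather than the total.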
SERVICE FOR TARGETED CROWD SOURCED AUDIO FOR VIRTUAL INTERACTION
An audio generation system is provided to enable coordinated control of multiple IoT devices for audio collection and distribution of one or more audio sources according to location and user preference. The audio generation system enables location-sensitive acoustic control of sound, both as a shaped envelope for a particular source and as an individualized experience. It also facilitates an interactive visual system for visualizing and manipulating the audio environment, including via augmented reality and/or virtual reality to depict soundscapes. The audio generation system can further facilitate achieving a desired audio environment (a sound influence zone) and offers an intuitive way to understand where sounds will be heard.
AUDIO CONTENT DISTRIBUTION SYSTEM
An audio content distribution system includes a server device and a user terminal. The server device includes an audio-data acquisition unit that acquires audio data from a distribution source; an audio-content-data generation unit that adds, to the audio data, acoustic-effect setting information indicating whether to add echo and/or attenuation of sound in a virtual reality space, thereby generating audio content data; and an audio-content-data distribution unit that distributes the audio content data to the user terminal of a user who operates an avatar in the virtual reality space. The user terminal includes an audio-content-data receiving unit that receives the audio content data from the server device, and an audio-data output-control unit that outputs the audio data contained in the audio content data with an acoustic effect according to the acoustic-effect setting information.
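The terminal-side application of the setting information can be sketched as toy DSP switched by the flags; the dictionary keys, the 6 dB attenuation, and the single echo tap are illustrative assumptions, not taken from the patent:

```python
def apply_effects(samples, settings):
    """Apply the echo/attenuation flags carried in the content's
    acoustic-effect setting information (toy DSP, illustrative only).
    samples: list of floats; settings: dict of flags."""
    out = list(samples)
    if settings.get("attenuation"):
        out = [s * 0.5 for s in out]          # simple 6 dB drop
    if settings.get("echo"):
        delay = settings.get("echo_delay", 3) # delay in samples (assumed)
        out = out + [0.0] * delay
        for i, s in enumerate(samples):
            out[i + delay] += s * 0.3         # single echo tap
    return out
```

Keeping the effect decision on the server (as flags) while running the DSP on the terminal matches the split the abstract describes between generation and output control.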
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
Provided is an information processing device that performs processing on content. The information processing device includes an estimation unit that estimates sounding coordinates at which a sound image is generated on the basis of a video stream and an audio stream, a video output control unit that controls output of the video stream, and an audio output control unit that controls output of the audio stream so as to generate the sound image at the sounding coordinates. A discrimination unit that discriminates the gazing point of a user viewing the video and audio is further provided; the estimation unit estimates the sounding coordinates at which the sound image of the object gazed at by the user is generated on the basis of the discrimination result.
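The gaze-driven estimation step can be sketched as a nearest-object lookup; reducing the discrimination result to a 2-D gaze point and the objects to labelled centres is a toy simplification of what a real system would derive from the video and audio streams:

```python
def sounding_coordinates(objects, gaze_point):
    """Place the sound image at the on-screen object closest to the
    user's gazing point. objects: list of {"center": (x, y)} dicts."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = min(objects, key=lambda obj: dist2(obj["center"], gaze_point))
    return nearest["center"]
```

The audio output control unit would then steer the sound image to the returned coordinates so that the gazed-at object is also the apparent sound origin.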