Patent classifications
H04S7/40
Multi-frequency sensing system with improved smart glasses and devices
The systems and methods described relate to the concept that smart devices can be used to sense various types of phenomena, such as sound, blue-light exposure, and RF and microwave radiation, and, in real time, analyze, report, and/or control outputs (e.g., displays or speakers). The systems are configurable and use standard computing devices, such as wearable electronics (e.g., smart glasses), tablet computers, and mobile phones, to measure various frequency bands across multiple points, allowing a single user to visualize and/or adjust environmental conditions.
Audio responsive augmented reality
Systems and methods for receiving audio data; identifying one or more graphical interface elements that correspond to the audio data; generating a display of the identified one or more graphical interface elements, wherein a first portion of the one or more graphical interface elements is persistently displayed, and wherein a second portion of the one or more graphical interface elements is temporarily displayed for a predetermined period of time together with the first portion of the one or more graphical interface elements; and at expiry of the predetermined period of time, ceasing display of the second portion while maintaining display of the first portion.
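The persistent/temporary display behavior above can be sketched as a small timing rule. This is a minimal illustration, not the patented implementation; the function and parameter names (`display_elements`, `timeout_s`, the injected `now` clock) are hypothetical.

```python
import time

def display_elements(persistent, temporary, timeout_s, now=time.monotonic):
    """Return a callable reporting which interface elements are visible.

    `persistent` elements stay on screen indefinitely; `temporary`
    elements are shown alongside them only until `timeout_s` seconds
    have elapsed, after which only the persistent portion remains.
    """
    start = now()

    def visible():
        if now() - start < timeout_s:
            return persistent + temporary   # both portions shown together
        return persistent                    # temporary portion has expired

    return visible
```

Injecting the clock keeps the behavior testable without real waiting: advance a fake clock past `timeout_s` and the temporary portion drops out.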
Information processing apparatus and information processing method
Provided is an information processing apparatus that includes a determination unit and a display control unit. The determination unit obtains a position of a virtual object relative to a display region and determines whether or not a correction allowable region set in a region different from the virtual object overlaps at least a part of the display region when the virtual object is located outside the display region. The display control unit causes at least a part of a display object showing the virtual object in the display region to be displayed in a case where the determination unit determines that the correction allowable region overlaps at least the part of the display region.
AUDIO ENHANCED AUGMENTED REALITY
Devices, media, and methods are presented for an audio enhanced augmented reality (AR) experience using an eyewear device. The eyewear device has a microphone system, a presentation system, a support structure configured to be head-mounted on a user, and a processor. The support structure supports the microphone system and the presentation system. The eyewear device is configured to capture, with the microphone system, audio information of an environment surrounding the eyewear device, identify an audio signal within the audio information, detect a direction of the audio signal with respect to the eyewear device, classify the audio signal, and present, by the presentation system, an application associated with the classification of the audio signal.
Systems, Methods, and Graphical User Interfaces for Selecting Audio Output Modes of Wearable Audio Output Devices
A computer system displays an audio settings user interface that includes a first user interface element that is activatable to change a current audio output mode of a set of one or more audio output devices. The computer system receives a second set of one or more inputs including an input directed to the first user interface element. In response, the computer system transitions the set of one or more audio output devices from a first audio output mode to a different second audio output mode. In the first audio output mode, audio is output based on a first frame of reference that is a three-dimensional physical environment surrounding the set of one or more audio output devices. In the second audio output mode, audio is output based on a different second frame of reference that is fixed relative to the set of one or more audio output devices.
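The difference between the two frames of reference can be illustrated with a simple azimuth computation. This is a hedged sketch under assumed conventions (degrees, yaw measured in the same frame as the source); the names `render_azimuth`, `"environment"`, and `"device"` are hypothetical, not terms from the patent.

```python
def render_azimuth(source_azimuth_deg, head_yaw_deg, mode):
    """Azimuth at which a sound source should be rendered.

    "environment": the frame of reference is the surrounding physical
    space, so turning the head shifts the apparent direction of the sound.
    "device": the frame is fixed relative to the output device, so the
    sound keeps the same direction no matter how the head turns.
    """
    if mode == "environment":
        return (source_azimuth_deg - head_yaw_deg) % 360
    if mode == "device":
        return source_azimuth_deg % 360
    raise ValueError(f"unknown mode: {mode}")
```

With the head turned 30° toward a source at 90°, the environment-fixed mode renders it at 60°, while the device-fixed mode leaves it at 90°.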
Internet of Things-enabled aerial vehicle operated as a sound intensity detector
A method comprising retrieving a sound intensity map for a venue, wherein the sound intensity map is divided into a plurality of regions and predicts a sound quality for each region during a current event. The method further comprises receiving data from a plurality of IoT-enabled aerial vehicles, where each vehicle travels around a different region of the plurality of regions and collects data during the event; comparing the received data to the sound intensity map to determine the region where an audio component of the venue audio needs to be adjusted; determining the adjustment required for the audio component; and adjusting the audio equipment.
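The map-comparison step can be sketched as a per-region deviation check. This is an illustrative sketch, assuming decibel levels per region and a hypothetical tolerance; the names `regions_needing_adjustment` and `tolerance_db` are not from the patent.

```python
def regions_needing_adjustment(predicted_map, measured, tolerance_db=3.0):
    """Compare per-region measurements collected by the aerial vehicles
    against the predicted sound intensity map, returning the regions
    whose deviation exceeds the tolerance and the correction needed.
    """
    adjustments = {}
    for region, predicted_db in predicted_map.items():
        if region not in measured:
            continue  # no vehicle has covered this region yet
        delta = predicted_db - measured[region]
        if abs(delta) > tolerance_db:
            adjustments[region] = delta  # positive: raise level; negative: lower it
    return adjustments
```

A region measuring 84 dB against a 90 dB prediction would be flagged with a +6 dB correction, while one within the tolerance is left alone.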
SYSTEMS AND METHODS FOR SELECTIVELY PROVIDING AUDIO ALERTS
Systems and methods for selectively providing audio alerts via a speaker device are disclosed herein. A system plays first audio content through a speaker. A microphone captures second audio content comprising an alert. Output of the second audio content through the speaker is suppressed by using noise cancellation. The system identifies the alert within the second audio content and determines a priority level of the alert. The system determines, based on the priority level, that the alert should be reproduced, and audibly reproduces the alert via the speaker, either with the first audio content or instead of it.
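The priority-based decision described above can be sketched as a small dispatch rule. The thresholds and the three action names are hypothetical, chosen only to illustrate the suppress / mix / replace distinction; the patent does not specify this scheme.

```python
def alert_action(priority, reproduce_threshold=3, replace_threshold=7):
    """Decide what to do with an alert captured while noise cancellation
    suppresses ambient sound.

    Alerts below `reproduce_threshold` stay cancelled. Of the rest,
    those at or above `replace_threshold` interrupt the first audio
    content entirely, while mid-priority alerts are mixed with it.
    """
    if priority < reproduce_threshold:
        return "suppress"          # keep the alert cancelled
    if priority >= replace_threshold:
        return "replace_content"   # play the alert instead of the content
    return "mix_with_content"      # play the alert together with the content
```

A low-priority announcement stays cancelled, a moderate one is layered over the music, and an urgent one pauses playback entirely.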
INFORMATION PROCESSING DEVICE, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
Provided is an information processing apparatus capable of outputting a desired sound to a user. An information processing apparatus includes: a first acquisition unit that acquires first position information indicating position information of a user; a second acquisition unit that acquires second position information indicating position information of a predetermined object; a generation unit that generates, based on the first position information and the second position information, sound information in which a sound image is localized on the predetermined object, the sound information being related to the predetermined object; and a control unit that executes control to output the generated sound information to the user.
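Localizing a sound image on an object from the two pieces of position information reduces, at its simplest, to computing the object's bearing relative to the user's facing direction. This is a minimal 2-D sketch under assumed conventions (y-axis forward, clockwise-positive angles in degrees); `azimuth_to_object` is a hypothetical name.

```python
import math

def azimuth_to_object(user_pos, user_heading_deg, object_pos):
    """Angle of the object relative to the user's facing direction,
    in (-180, 180]; this is where the sound image would be localized."""
    dx = object_pos[0] - user_pos[0]
    dy = object_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))  # 0 = +y axis, clockwise positive
    return (bearing - user_heading_deg + 180) % 360 - 180
```

An object straight ahead yields 0°, one to the user's right yields +90°, and the result updates as either position or the heading changes.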
Audio processing apparatus and method therefor
An audio processing apparatus comprises a receiver (705) which receives audio data including audio components and render configuration data including audio transducer position data for a set of audio transducers (703). A renderer (707) generates audio transducer signals for the set of audio transducers from the audio data. The renderer (707) is capable of rendering audio components in accordance with a plurality of rendering modes. A render controller (709) selects the rendering modes for the renderer (707) from the plurality of rendering modes based on the audio transducer position data. The renderer (707) can employ different rendering modes for different subsets of the set of audio transducers, and the render controller (709) can independently select rendering modes for each of the different subsets of the set of audio transducers (703). The render controller (709) can select the rendering mode for a first audio transducer of the set of audio transducers (703) in response to a position of the first audio transducer relative to a predetermined position for that transducer. The approach may provide improved adaptation, e.g. to scenarios where most speakers are at the desired positions whereas a subset deviates from the desired position(s).
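The per-transducer mode selection can be sketched as a deviation test against the nominal layout. This is an illustrative sketch only: the 2-D positions, the deviation threshold, and the mode names `"standard"` and `"robust"` are assumptions, not details from the patent.

```python
import math

def select_rendering_modes(transducer_positions, desired_positions,
                           max_deviation_m=0.25):
    """Pick a rendering mode per transducer from its position data.

    Transducers close to their desired position use a standard mode;
    those deviating beyond the threshold fall back to a mode that is
    more robust to placement errors.
    """
    modes = {}
    for name, (x, y) in transducer_positions.items():
        dx, dy = x - desired_positions[name][0], y - desired_positions[name][1]
        deviation = math.hypot(dx, dy)
        modes[name] = "standard" if deviation <= max_deviation_m else "robust"
    return modes
```

This matches the scenario the abstract highlights: most speakers at their desired positions keep the standard mode, while a displaced subset is independently handled.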
Differential spatial rendering of audio sources
Methods and systems for intuitive spatial audio rendering with improved intelligibility are disclosed. By establishing a virtual association between an audio source and a location in the listener's virtual audio space, a spatial audio rendering system can generate spatial audio signals that create a natural and immersive audio field for a listener. The system can receive the virtual location of the source as a parameter and map the source audio signal to a source-specific multi-channel audio signal. In addition, the spatial audio rendering system can be interactive and dynamically modify the rendering of the spatial audio in response to a user's active control or tracked movement.
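The mapping from a source's virtual location to a source-specific multi-channel signal can be sketched, in its simplest stereo form, as constant-power panning by azimuth. This is a generic illustration of the technique, not the patented rendering; the name `pan_to_stereo` and the angle convention (-90° hard left, +90° hard right) are assumptions.

```python
import math

def pan_to_stereo(sample, azimuth_deg):
    """Map a mono sample to (left, right) gains using constant-power
    panning, so perceived loudness stays even as the source moves."""
    # Map azimuth in [-90, 90] onto a pan angle in [0, pi/2].
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2)
    return sample * math.cos(theta), sample * math.sin(theta)
```

Re-running this per audio block with an updated azimuth is what lets the rendering react dynamically to the listener's control inputs or tracked movement.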