Patent classifications
H04R2460/07
PORTABLE SPEAKER WITH DYNAMIC DISPLAY CHARACTERISTICS
Various implementations include portable speakers with dynamic display characteristics. In some particular aspects, a portable speaker includes an enclosure housing: at least one electro-acoustic transducer for providing an audio output; a processor coupled with the at least one transducer; an audio input module coupled with the processor for receiving audio input signals; a battery configured to power the at least one transducer, the processor, and the audio input module; an input channel for receiving a hard-wired audio input connection; at least one wireless input channel for receiving an audio input from a source device via a wireless connection; and a display on the enclosure coupled with the processor. The processor adjusts the orientation of the display between a first orientation and a second orientation in response to detecting a change in orientation of the portable speaker.
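The orientation-flipping behavior can be sketched as a simple decision over an accelerometer reading. This is an illustrative sketch, not the patented method; the function name, axis convention, and thresholds are assumptions.

```python
def select_display_orientation(accel_z: float, current: str) -> str:
    """Pick a display orientation from the gravity component along the
    enclosure's vertical axis (accel_z, in g).

    A hysteresis band around zero keeps the display from flickering
    when the speaker sits near horizontal. Thresholds are illustrative.
    """
    if accel_z > 0.3:    # enclosure upright
        return "first"
    if accel_z < -0.3:   # enclosure inverted
        return "second"
    return current       # inside the hysteresis band: keep current orientation
```

A design note: the hysteresis band is what makes this usable in practice, since a speaker lying flat would otherwise toggle orientations on every small vibration.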
Augmented hearing system
Some implementations may involve receiving, via an interface system, personnel location data indicating a location of at least one person and receiving, from an orientation system, headset orientation data corresponding with the orientation of a headset. First environmental element location data, indicating a location of at least a first environmental element, may be determined. Based at least in part on the headset orientation data, the personnel location data and the first environmental element location data, headset coordinate locations of at least one person and at least the first environmental element in a headset coordinate system corresponding with the orientation of the headset may be determined. An apparatus may be caused to provide spatialization indications of the headset coordinate locations. Providing the spatialization indications may involve controlling a speaker system to provide environmental element sonification corresponding with at least the first environmental element location data.
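The core geometric step — mapping world-frame locations of people and environmental elements into a headset coordinate system — can be sketched as a planar rotation and translation. This is a minimal illustration under assumed names; it handles only yaw, while a full system would use the complete 3-D headset orientation.

```python
import math

def world_to_headset(point_xy, headset_xy, headset_yaw_rad):
    """Express a world-frame (x, y) location in the headset frame.

    The headset frame's x-axis points in the direction the wearer faces.
    Applies the inverse of the headset's yaw rotation after translating
    to the headset's position.
    """
    dx = point_xy[0] - headset_xy[0]
    dy = point_xy[1] - headset_xy[1]
    cos_y, sin_y = math.cos(headset_yaw_rad), math.sin(headset_yaw_rad)
    return (cos_y * dx + sin_y * dy, -sin_y * dx + cos_y * dy)
```

The resulting headset-frame coordinates are what a spatialization stage would feed to the speaker system, e.g. to sonify an environmental element to the wearer's left or right.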
Apparatus, method, and computer program for enabling access to mediated reality content by a remote user
An apparatus comprising means for: simultaneously controlling content rendered by a hand portable device and content rendered by a spatial audio device; and providing for rendering to a user, in response to an action by the user, of a first part, but not a second part, of spatial audio content via the hand portable device rather than the spatial audio device.
TERMINAL FOR CONTROLLING WIRELESS SOUND DEVICE, AND METHOD THEREFOR
A terminal for controlling a wireless sound device can include a communication interface configured to wirelessly connect to one or more wireless sound devices, and a processor configured to: transmit and receive a positioning signal to and from the one or more wireless sound devices; determine a relative position of the one or more wireless sound devices based on the positioning signal; receive an acceleration sensor value from the one or more wireless sound devices; determine a posture of the one or more wireless sound devices based on the acceleration sensor value; determine a wearing state of the one or more wireless sound devices based on the relative position and the posture; and transmit an audio signal to a worn wireless sound device among the wireless sound devices.
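The wearing-state determination combines two signals: how close the earbud is to the terminal (from the positioning signal) and its tilt (from the accelerometer). A hedged sketch, with names and thresholds that are purely illustrative:

```python
def infer_wearing_state(distance_m: float, tilt_deg: float) -> bool:
    """Guess whether an earbud is currently worn.

    A worn earbud should be near the user's terminal and held at
    roughly the tilt an ear imposes. Both thresholds are illustrative;
    a real device would calibrate them per product and per user.
    """
    near_user = distance_m < 1.5          # within arm's-reach of the terminal
    ear_like_posture = 20.0 <= tilt_deg <= 70.0   # tilted, not flat on a table
    return near_user and ear_like_posture
```

Routing audio only to earbuds for which this returns true is what lets the terminal skip a device left in its case or on a desk.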
USER ADJUSTMENT INTERFACE USING REMOTE COMPUTING RESOURCE
Disclosed herein, among other things, are systems and methods for a user adjustment interface using remote computing resources. Specifically, a system can include a mobile device in communication with a hearing assistance device or a remote server. The mobile device can interpret an acoustic environment and send information about the environment to a remote server. The remote server can determine and send information to the mobile device for use in a user interface. The mobile device can receive a user selection of hearing assistance parameter information to be sent to the hearing assistance device.
Input and Edit Functions Utilizing Accelerometer Based Earpiece Movement System and Method
A method for performing voice dictation with an earpiece worn by a user includes: receiving as input to the earpiece voice sound information from the user at one or more microphones of the earpiece; receiving as input to the earpiece user control information from one or more sensors within the earpiece, independent from the one or more microphones of the earpiece; inserting a machine-generated transcription of the voice sound information from the user into a user input area associated with an application executing on a computing device; and manipulating the application executing on the computing device based on the user control information.
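The sensor-driven control channel can be sketched as a mapping from head-motion deltas to edit commands, so the wearer steers dictation without speaking. The gesture mapping and thresholds below are hypothetical, not taken from the patent.

```python
def interpret_head_gesture(pitch_delta_deg: float, yaw_delta_deg: float):
    """Map short-window orientation changes (degrees) to edit commands.

    A downward nod accepts the pending transcription; a sideways shake
    deletes the last word. Returns None when no gesture is recognized.
    """
    if pitch_delta_deg > 15.0:
        return "accept_transcription"   # nod
    if abs(yaw_delta_deg) > 20.0:
        return "delete_last_word"       # shake
    return None
```

Keeping this channel separate from the microphones is the key point of the claim: edits arrive even while the user is dictating, without the commands themselves being transcribed.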
Devices, methods, and user interfaces for adaptively providing audio outputs
An electronic device includes one or more pose sensors for detecting a pose of a user of the electronic device relative to a first physical environment and is in communication with one or more audio output devices. While a first pose of the user meets first presentation criteria, the electronic device provides audio content at a first simulated spatial location relative to the user. The electronic device detects a change in the pose of the user from the first pose to a second pose. In response to detecting the change in the pose of the user, and in accordance with a determination that the second pose of the user does not meet the first presentation criteria, the electronic device provides audio content at a second simulated spatial location relative to the user that is different from the first simulated spatial location.
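One way to picture the two presentation modes is as a choice of simulated source azimuth relative to the listener: world-locked while the pose meets the criteria, relocated (here, head-locked) once it does not. A minimal sketch under assumed names; the criteria and the second location are illustrative.

```python
def simulated_azimuth(user_yaw_deg: float,
                      content_azimuth_deg: float = 0.0,
                      threshold_deg: float = 30.0) -> float:
    """Return the audio source's azimuth relative to the user's head.

    While the user faces within `threshold_deg` of the content (the
    first presentation criteria), the source stays world-locked, so it
    moves opposite to head turns. Otherwise it snaps to a second
    simulated location, here straight ahead of the user.
    """
    if abs(user_yaw_deg - content_azimuth_deg) <= threshold_deg:
        return content_azimuth_deg - user_yaw_deg   # world-locked
    return 0.0                                      # head-locked fallback
```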
Validation of audio calibration using multi-dimensional motion check
Examples described herein involve validating motion of a microphone during calibration of a playback device. An example implementation involves a mobile device detecting, via one or more microphones, audio signals emitted from one or more playback devices as part of a calibration process. After the one or more playback devices emit the audio signals, the mobile device determines whether the detected audio signals indicate that sufficient horizontal translation of the mobile device occurred during the calibration process. When the detected audio signals indicate that insufficient horizontal translation occurred, the mobile device displays a prompt to move the mobile device more while the one or more playback devices emit one or more additional audio signals as part of the calibration process. When the detected audio signals indicate that sufficient horizontal translation occurred, the mobile device calibrates the one or more playback devices with a calibration based on the detected audio signals.
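The sufficiency check reduces to asking whether the microphone's estimated positions covered enough horizontal extent during the measurement. A hedged sketch (the span metric and threshold are illustrative, not the described implementation, which infers motion from the detected audio signals):

```python
def sufficient_translation(positions, min_span_m: float = 1.0) -> bool:
    """Decide whether estimated mic positions cover enough horizontal area.

    `positions` is a sequence of (x, y) estimates in meters collected
    during calibration. If the larger of the x- and y-extents falls
    short of `min_span_m`, the app would prompt the user to keep moving.
    """
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    span = max(max(xs) - min(xs), max(ys) - min(ys))
    return span >= min_span_m
```

When this returns false, the calibration loop continues emitting test signals and prompting for more motion; when true, the collected measurements are handed to the calibration step.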
Devices, systems and processes for providing adaptive audio environments
Devices, systems and processes for providing an adaptive audio environment are disclosed. For an embodiment, a system may include a wearable device and a hub. The hub may include an interface module configured to communicatively couple the wearable device and the hub, and a processor configured to execute non-transient computer executable instructions for: a machine learning engine configured to apply a first machine learning process to at least one data packet received from the wearable device and output an action-reaction data set; and a sounds engine configured to apply a sound adapting process to the action-reaction data set and provide audio output data to the wearable device via the interface module.
DETECTION OF PHYSICAL ABUSE OR NEGLECT USING DATA FROM EAR-WEARABLE DEVICES
A system may obtain a set of features characterizing a segment of inertial measurement unit (IMU) data generated by an IMU of an ear-wearable device. The system may apply a machine learning model (MLM) that takes the features characterizing the segment of the IMU data as input. The system may determine, based on output values produced by the MLM, whether a user of the ear-wearable device has potentially been subject to physical abuse. The system may then perform an action in response to determining that the user of the ear-wearable device has potentially been subject to physical abuse.
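The pipeline — summarize an IMU segment into features, feed them to a model, act on the output — can be sketched as follows. The feature set is a plausible assumption, and the thresholded "model" is a stand-in for the trained machine learning model the claim describes.

```python
import statistics

def imu_features(accel_magnitudes):
    """Summary features over a segment of IMU acceleration magnitudes (g)."""
    return {
        "peak": max(accel_magnitudes),
        "mean": statistics.fmean(accel_magnitudes),
        "stdev": statistics.pstdev(accel_magnitudes),
    }

def flag_for_review(features, peak_threshold: float = 6.0) -> bool:
    """Stand-in for the MLM: flag segments whose peak impact exceeds a
    threshold. A real system would use a learned classifier over the
    full feature set, and the action on a flag (alerting a caregiver,
    logging) would follow downstream.
    """
    return features["peak"] > peak_threshold
```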