Patent classifications
H04M9/085
Methods of an intelligent personal assistant moving binaural sound
A method provides binaural sound to a user. An intelligent personal assistant selects a location for the user where the user hears binaural sound that emanates from a sound localization point (SLP) in empty space away from a head of the user. A wearable electronic device (WED) receives a voice command from the user to move the SLP to another location.
Headphones execute voice command to move binaural sound
A digital signal processor (DSP) in headphones processes sound with head-related transfer functions (HRTFs) to produce binaural sound that externally localizes away from a head of a user wearing the headphones. The headphones include a microphone that receives a voice command that causes the headphones to move a location of the binaural sound.
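The HRTF processing these abstracts describe amounts to filtering a mono source with a pair of head-related impulse responses, one per ear. A minimal sketch in pure Python, using made-up 4-tap impulse responses as stand-ins for measured HRTFs (the `convolve` and `render_binaural` names are illustrative, not from the patents):

```python
def convolve(signal, ir):
    """Direct-form FIR convolution of a mono signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Produce a (left, right) channel pair by filtering the mono source
    with the head-related impulse response for each ear."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy impulse responses (placeholders for measured HRTFs): the right ear
# is attenuated and delayed by one sample, crudely modeling a source
# located to the listener's left.
hrir_l = [1.0, 0.5, 0.25, 0.125]
hrir_r = [0.0, 0.6, 0.3, 0.15]

mono = [1.0, 0.0, 0.0, 0.0]  # unit impulse as a test signal
left, right = render_binaural(mono, hrir_l, hrir_r)
```

With a unit impulse as input, each output channel simply reproduces its impulse response, which makes the interaural level and time differences directly visible.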
Controlling a location of binaural sound with a command
A wearable electronic device (WED) worn on a head of a user displays first and second virtual images. A digital signal processor (DSP) processes sound to a first sound localization point (SLP) at the first virtual image and to a second SLP at the second virtual image. A command controls the first or second SLP when a head orientation of the user is directed to that SLP.
Head mounted display that moves binaural sound in response to a voice command
A method provides a voice in binaural sound to a first user during an electronic communication between the first user wearing a head mounted display (HMD) and a second user. The method includes designating a sound localization point (SLP) with a handheld portable electronic device (HPED), processing the voice of the second user with a digital signal processor (DSP) in the HMD, and moving the voice of the second user in response to a microphone in the HMD receiving a voice command from the first user.
Headphones execute voice command to intelligent personal assistant and move binaural sound
A digital signal processor (DSP) in headphones processes sound with head-related transfer functions (HRTFs) to produce binaural sound that externally localizes away from a head of a user wearing the headphones. The headphones include a microphone that receives a voice command to an intelligent personal assistant that causes the headphones to change a location of the binaural sound.
Wearable electronic device executes voice command to intelligent personal assistant and moves binaural sound
A digital signal processor (DSP) in a wearable electronic device (WED) worn on a head of a user processes sound with head-related transfer functions (HRTFs) to produce binaural sound that externally localizes away from the head of the user wearing the WED. The WED includes a microphone that receives a voice command to an intelligent personal assistant that causes the WED to move a location of the binaural sound.
Intercommunication system with adaptive transmit delay
An intercommunication system with adaptive transmit delay patches audio sources to a radio system (e.g., a trunked-radio system) so that outgoing audio is sent to the radio system as soon as a channel has been acquired, without further transmit delay. The intercommunication system generates a pattern representative of the radio's Talk Permit Tone. The intercommunication system then buffers outgoing audio while analyzing incoming audio from the radio and comparing the incoming audio to the generated pattern to determine when a Talk Permit Tone has been received. When the Talk Permit Tone is received, the intercommunication system releases buffered outgoing audio for transmission.
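The buffer-and-release behavior above can be sketched as a small state machine: outgoing audio accumulates until a detector matching the stored tone pattern fires on the incoming path. This is a minimal sketch assuming a normalized cross-correlation detector with an illustrative 0.9 threshold; the class and method names are hypothetical, not from the patent:

```python
import math

class TransmitDelayBuffer:
    """Buffers outgoing audio until a Talk Permit Tone is detected in the
    incoming audio, then releases the buffered samples for transmission.
    The detector compares an incoming window against a stored reference
    pattern by normalized cross-correlation (threshold is illustrative)."""

    def __init__(self, tone_pattern, threshold=0.9):
        self.pattern = tone_pattern
        self.threshold = threshold
        self.outgoing = []          # buffered outgoing samples
        self.tone_detected = False

    def push_outgoing(self, samples):
        """Queue outgoing audio while waiting for the Talk Permit Tone."""
        self.outgoing.extend(samples)

    def push_incoming(self, window):
        """Analyze one incoming window; return released audio (or [])."""
        if not self.tone_detected and self._matches(window):
            self.tone_detected = True
            released, self.outgoing = self.outgoing, []
            return released
        return []

    def _matches(self, window):
        if len(window) < len(self.pattern):
            return False
        seg = window[:len(self.pattern)]
        dot = sum(a * b for a, b in zip(seg, self.pattern))
        norm = (math.sqrt(sum(a * a for a in seg))
                * math.sqrt(sum(b * b for b in self.pattern)))
        return norm > 0 and dot / norm >= self.threshold

# Demo: a sine burst stands in for the Talk Permit Tone pattern.
pattern = [math.sin(2 * math.pi * 0.1 * n) for n in range(50)]
buf = TransmitDelayBuffer(pattern)
buf.push_outgoing([0.1] * 100)
```

Silence (or unrelated audio) on the incoming path leaves the buffer intact; a window matching the pattern releases everything queued so far.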
Method for eliminating sound and electronic device performing the same
A method for eliminating sound is disclosed. The method applies to an electronic device that includes a microphone and can connect with a sound playback device. The method includes the following steps: receiving a first input sound via the microphone to acquire a first input sound signal; recording the first input sound signal and transmitting it to the sound playback device; receiving a second input sound from the sound playback device to acquire a second input sound signal, wherein the second input sound is generated by the sound playback device according to the first input sound signal; determining a difference in generation times between the first and second input sound signals; and filtering the second input sound signal according to that difference and the first input sound signal.
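The core of the method above is estimating the generation-time difference and then subtracting the time-aligned first signal from the second. A minimal sketch assuming the delay is estimated by maximizing cross-correlation over candidate lags and the filtering is simple subtraction (the function names and the brute-force lag search are illustrative, not from the patent):

```python
def estimate_delay(reference, delayed, max_lag):
    """Estimate the generation-time difference (in samples) between the
    recorded first signal and the second signal by maximizing their
    cross-correlation over candidate lags 0..max_lag."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(max_lag + 1):
        corr = sum(r * d for r, d in zip(reference, delayed[lag:]))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

def cancel(reference, delayed, lag):
    """Subtract the time-aligned reference from the second signal,
    leaving only what the playback path added."""
    out = list(delayed)
    for n, r in enumerate(reference):
        if n + lag < len(out):
            out[n + lag] -= r
    return out

# Demo: the playback device re-emits the first signal 3 samples late.
ref = [0.0, 1.0, -0.5, 0.25, 0.0, 0.0]
echoed = [0.0] * 3 + list(ref)
lag = estimate_delay(ref, echoed, max_lag=5)
residual = cancel(ref, echoed, lag)
```

In this toy case the second signal is exactly the delayed first signal, so the residual after cancellation is zero; a real playback path would also introduce gain and filtering that the subtraction step would need to model.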
Wearable electronic device selects HRTFs based on eye distance and provides binaural sound
A method provides a voice in binaural sound to a user. A wearable electronic device (WED) selects head-related transfer functions (HRTFs) based on a distance between eyes of the user. A digital signal processor (DSP) in the WED processes the voice with the HRTFs to generate the voice in the binaural sound that localizes at a sound localization point (SLP) at a location in empty space to the user.
Headphones that provide binaural sound
Headphones include a digital signal processor (DSP) that processes sound into binaural sound that externally localizes to a user wearing the headphones at a sound localization point (SLP) in empty space at least one meter away from a head of the user. A microphone receives a voice command that executes to switch the sound from playing as the binaural sound that externally localizes at the SLP to playing as stereo sound.