
Mobile device based control device locator

Disclosed herein are system, apparatus, article of manufacture, method, and/or computer program product embodiments for a mobile device based control device locator. An embodiment operates by receiving a request to locate a control device, transmitting acoustic token transmission information to the control device to activate an electroacoustic transducer on the control device, receiving an acoustic signal including an acoustic token signal from the control device via a plurality of acoustic sensors, and determining distance information of the control device based on the received acoustic token signal generated by the electroacoustic transducer of the control device.
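The abstract does not specify how distance is computed from the received acoustic token. A minimal sketch of one standard approach, assuming the token's emission time is known to the mobile device (e.g. shared as part of the acoustic token transmission information); all names here are hypothetical, not taken from the patent:

```python
# Illustrative only: turn acoustic token arrival times at each sensor into
# per-sensor distance estimates via time of flight.
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C

def distances_from_arrival_times(emit_time_s, arrival_times_s):
    """Return one distance estimate (in metres) per acoustic sensor."""
    return [SPEED_OF_SOUND_M_S * (t - emit_time_s) for t in arrival_times_s]

# Token emitted at t = 0 s, heard 10 ms and 12 ms later by two sensors.
print(distances_from_arrival_times(0.0, [0.010, 0.012]))
```

With distances to several spatially separated sensors, the control device's position could additionally be narrowed down by multilateration.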

Glasses with closed captioning, voice recognition, volume of speech detection, and translation capabilities
11727952 · 2023-08-15

The glasses with display may include a bridge, two temples hingedly coupled to the bridge, and a directional microphone array, the directional microphone array including two or more microphones positioned on the bridge or the temples. The glasses with display may also include a user microphone array, the user microphone array including one or more microphones positioned on the temples and oriented toward the mouth of a user wearing the glasses with display or one or more bone conduction microphones. In addition, the glasses with display include two lenses positioned in the bridge, at least one of the lenses including a display, the display visible by the user, the display including one or more of a directional display, closed caption display, and user volume display. The glasses with display additionally include a processor adapted to receive audio signals from the directional microphone array and the user microphone array, or from a separate mobile device, the processor adapted to control the display.
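The directional display presumably needs an arrival angle from the directional microphone array. A far-field two-microphone delay model is one common way to get one; the sketch below is an assumption for illustration, not the patent's method:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air

def bearing_from_delay(delay_s, mic_spacing_m):
    """Far-field estimate of arrival angle (radians; 0 = broadside) from the
    time delay between two microphones a known distance apart."""
    s = SPEED_OF_SOUND_M_S * delay_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # clamp against noisy delay estimates
    return math.asin(s)

# No inter-mic delay means the talker is straight ahead of the array.
print(bearing_from_delay(0.0, 0.02))
```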

Detecting wireless signal leakage

Systems, apparatuses, and methods are described for operating and maintaining a data network, and for detecting problems such as signal leakage. In one implementation, a computing device may determine, based on availability and location, one or more mobile devices and may cause the mobile devices to detect a wireless signal. The detected wireless signal may be identified as having leaked from a network, such as a wired network, and used to detect the source of leaks.
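A hypothetical sketch of the "determine, based on availability and location" step: pick available mobile devices near a suspected leakage site. The data shapes and names are illustrative, not from the patent:

```python
# Select available devices within range of a suspected leak location.
def select_devices(devices, site_xy, max_distance):
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return [d["id"] for d in devices
            if d["available"] and dist(d["location"], site_xy) <= max_distance]

fleet = [
    {"id": "phone-1", "available": True,  "location": (0.0, 1.0)},
    {"id": "phone-2", "available": False, "location": (0.5, 0.5)},
    {"id": "phone-3", "available": True,  "location": (9.0, 9.0)},
]
print(select_devices(fleet, (0.0, 0.0), 2.0))  # ['phone-1']
```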

Electronic device for providing visualized artificial intelligence service on basis of information about external object, and operating method for electronic device

With respect to an electronic device and an operating method for the electronic device, according to various embodiments, the electronic device comprises: a rotatable vision sensor configured to detect an external object in a space in which the electronic device is arranged; a rotatable projector configured to output a picture in the space; a memory storing spatial information about the space; and a processor. The processor may be configured to: control the vision sensor to track the external object while rotating; determine the position of the picture to be output by the projector, based on the spatial information and on external object information generated from the tracking of the external object; and control the projector to output the picture at the determined position.
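The abstract leaves the position-determination rule open. One plausible (purely illustrative) reading: choose, from the stored spatial map, the projection surface point nearest the tracked object. The map format and selection rule below are assumptions, not the patent's:

```python
# Hypothetical rule: project onto the candidate surface point closest to the
# tracked object's position.
def projection_position(object_xy, surface_points):
    return min(surface_points,
               key=lambda p: (p[0] - object_xy[0]) ** 2 + (p[1] - object_xy[1]) ** 2)

walls = [(0.0, 3.0), (4.0, 0.0), (4.0, 3.0)]
print(projection_position((3.5, 0.5), walls))  # nearest surface point: (4.0, 0.0)
```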

Method and apparatus for robust low-cost variable-precision self-localization with multi-element receivers in GPS-denied environments

Disclosed is a practically implementable, robust direction-of-arrival (DoA) estimation approach that resists localization errors due to mobility, multipath reflections, impulsive noise, and multiple-access interference. As part of the disclosed invention, the inventors consider infrastructure-less 3D localization of autonomous underwater vehicles (AUVs) with no GPS assistance and no global clock synchronization. The proposed method can be extended to challenging communication environments and applied to the localization of assets and objects in space, underground, intrabody, underwater, and other complex, congested, and sometimes contested environments. Each AUV leverages known-location beacon signals to self-localize and can simultaneously report its sensor data and measurement location. The approach uses two known-location beacon nodes: single-hydrophone acoustic nodes deployed at known locations that transmit time-domain coded signals in a spread-spectrum fashion.
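A standard ingredient of spread-spectrum time-of-arrival estimation is matched filtering: cross-correlating the received stream against the known beacon code and taking the peak. A minimal sketch (names and signal shapes are illustrative, not from the patent):

```python
# Find the sample offset at which a known spreading code best matches the
# received signal -- the correlation peak marks the code's time of arrival.
def correlate_peak(received, code):
    best_offset, best_score = 0, float("-inf")
    for off in range(len(received) - len(code) + 1):
        score = sum(received[off + i] * code[i] for i in range(len(code)))
        if score > best_score:
            best_offset, best_score = off, score
    return best_offset

code = [1, -1, 1, 1, -1]
rx = [0, 0, 0] + code + [0, 0]  # code buried 3 samples into the stream
print(correlate_peak(rx, code))  # 3
```

With peaks from two known-location beacons, the arrival-time difference constrains the receiver's position.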

Direction of arrival estimation device, system, and direction of arrival estimation method

Provided is a direction of arrival estimation device wherein: a calculation circuit calculates a frequency weighting factor for each of a plurality of frequency components of signals recorded by a microphone array, based on the differences among unit vectors indicating the sound-source direction of each of the plurality of frequency components; and an estimation circuit estimates the direction of arrival of a signal from the sound source, based on the frequency weighting factors.
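The abstract does not give the weighting formula. One plausible interpretation, sketched below purely as an assumption: weight each frequency bin by how well its direction unit vector agrees with the mean direction across bins, so outlier bins contribute less to the final DoA estimate:

```python
import math

def frequency_weights(unit_vectors):
    """Illustrative weighting: cosine similarity of each bin's direction unit
    vector to the mean direction, clipped at zero."""
    n = len(unit_vectors)
    dim = len(unit_vectors[0])
    mean = [sum(v[i] for v in unit_vectors) / n for i in range(dim)]
    norm = math.sqrt(sum(c * c for c in mean)) or 1.0
    return [max(0.0, sum(v[i] * mean[i] for i in range(dim)) / norm)
            for v in unit_vectors]

# Three bins agreeing and one pointing the opposite way: the outlier gets 0.
print(frequency_weights([(1.0, 0.0), (1.0, 0.0), (1.0, 0.0), (-1.0, 0.0)]))
```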

Speech translation apparatus, speech translation method, and recording medium storing the speech translation method

A speech translation apparatus includes: an estimator which estimates a sound source direction, based on an acoustic signal obtained by a microphone array unit; a controller which identifies that an utterer is a user or a conversation partner, based on the sound source direction estimated after the start of translation is instructed by a button, using a positional relationship indicated by a layout information item stored in storage and selected in advance, and determines a translation direction indicating input and output languages in and into which content of the acoustic signal is recognized and translated, respectively; and a translator which obtains, according to the translation direction, original text indicating the content in the input language and translated text indicating the content in the output language. The controller displays the original and translated texts on first and second display areas corresponding to the positions of the user and conversation partner, respectively.
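A hypothetical sketch of the controller's decision: a preselected layout maps sound-source angles to the user or the conversation partner, which in turn fixes the translation direction. The layout format, angle convention, and languages below are all illustrative assumptions:

```python
# Map an estimated sound-source angle to (input_lang, output_lang) using a
# preselected layout describing where the user sits relative to the device.
def translation_direction(angle_deg, layout):
    lo, hi = layout["user_range_deg"]
    if lo <= angle_deg <= hi:           # utterance came from the user's side
        return layout["user_lang"], layout["partner_lang"]
    return layout["partner_lang"], layout["user_lang"]

layout = {"user_range_deg": (-90.0, 0.0), "user_lang": "ja", "partner_lang": "en"}
print(translation_direction(-30.0, layout))  # user spoke: ('ja', 'en')
print(translation_direction(45.0, layout))   # partner spoke: ('en', 'ja')
```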

Audio distance estimation for spatial audio processing

A method for spatial audio signal processing including: determining at least one first direction parameter for at least one frequency band based on microphone signals received from a first microphone array; determining at least one second direction parameter for the at least one frequency band based on at least one microphone signal received from at least one second microphone, wherein microphones from the first microphone array and the at least one second microphone are spatially separated from each other; processing the determined at least one first direction parameter and the at least one second direction parameter to determine at least one distance parameter for the at least one frequency band; and enabling output and/or storage of the at least one distance parameter, at least one audio signal, and the at least one first direction parameter.
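With two direction parameters measured at spatially separated positions, the distance parameter can be obtained by triangulation: intersecting the two bearing rays against the known separation. A minimal 2D sketch, with all names and conventions assumed for illustration:

```python
import math

def source_position(p1, bearing1, p2, bearing2):
    """Intersect two bearing rays (angles in radians from the +x axis) measured
    at two separated microphone positions. Returns (x, y), or None when the
    bearings are parallel and the distance is therefore unobservable."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2D cross product of the rays
    if abs(denom) < 1e-9:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Arrays 2 m apart; each sees the source at 45 degrees on its own side.
src = source_position((0.0, 0.0), math.pi / 4, (2.0, 0.0), 3 * math.pi / 4)
print(src)  # close to (1.0, 1.0)
```

The distance parameter for a frequency band would then be the range from the first array to the intersection point.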