Patent classifications
H04M2203/5072
Centrally controlling communication at a venue
One example may include a method comprising initiating an audio recording to capture audio data, comparing the audio data received from a microphone of a mobile device to an audio data range, determining whether the audio data is above an optimal level based on a result of the comparison, and queuing the audio data in an audio data queue when the audio data is above the optimal level.
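The compare-then-queue step can be illustrated with a minimal sketch. All names and the threshold value are illustrative assumptions, not from the patent; it simply queues a chunk of samples when its peak level exceeds an assumed optimal level.

```python
from collections import deque

# Assumed normalized-amplitude threshold standing in for the "optimal level".
OPTIMAL_LEVEL = 0.2

audio_queue = deque()  # the "audio data queue" of the abstract

def process_chunk(samples):
    """Queue the chunk only when its peak level is above the optimal level."""
    peak = max(abs(s) for s in samples)
    if peak > OPTIMAL_LEVEL:
        audio_queue.append(samples)
        return True
    return False
```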
Dynamic locale based aggregation of full duplex media streams
A cloud-based video/audio conferencing system and method perform locale-based aggregation of a full duplex media stream to organize multiple connections to a conference call that originate from the same physical location or a shared locale. The cloud-based video/audio conferencing system performs synchronization of the microphone and speaker audio signals of the same-locale connections. In this manner, a conference call may be held with multiple user devices making connections from the same physical location. User experience is enhanced by allowing each user in the same location to use his/her own individual device to connect to the same conference call.
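The aggregation step can be sketched as grouping connections that report the same locale, so each group can then share one synchronized speaker/microphone stream. The key names and data shape are assumptions for illustration only.

```python
from collections import defaultdict

def group_by_locale(connections):
    """Group conference connections that report the same physical locale.

    Each connection is assumed to carry a 'locale_id' (shared physical
    location) and a 'user' identifier; the real system's detection of a
    shared locale is not specified here.
    """
    groups = defaultdict(list)
    for conn in connections:
        groups[conn["locale_id"]].append(conn["user"])
    return dict(groups)
```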
Method for improving perceptual continuity in a spatial teleconferencing system
The present document relates to audio conference systems. In particular, the present document relates to improving the perceptual continuity within an audio conference system. According to an aspect, a method for multiplexing first and second continuous input audio signals is described, to yield a multiplexed output audio signal which is to be rendered to a listener. The first and second input audio signals (123) are indicative of sounds captured by a first and a second endpoint (120, 170), respectively. The method comprises determining a talk activity (201, 202) in the first and second input audio signals (123), respectively; and determining the multiplexed output audio signal based on the first and/or second input audio signals (123) and subject to one or more multiplexing conditions. The one or more multiplexing conditions comprise: at a time instant, when there is talk activity (201) in the first input audio signal (123), determining the multiplexed output audio signal at least based on the first input audio signal (123); at a time instant, when there is talk activity (202) in the second input audio signal (123), determining the multiplexed output audio signal at least based on the second input audio signal (123); and at a silence time instant, when there is no talk activity (201, 202) in the first and in the second input audio signals (123), determining the multiplexed output audio signal based on only one of the first and second input audio signals (123).
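The multiplexing conditions above can be sketched per audio frame. This is one possible reading of the conditions, with simple averaging assumed when both endpoints talk and a "last active endpoint" rule assumed for selecting the single input during silence; frame layout and all names are illustrative.

```python
def multiplex(frame1, frame2, talk1, talk2, last_active):
    """Apply the multiplexing conditions to one pair of input frames.

    - talk in first only  -> output based on the first input
    - talk in second only -> output based on the second input
    - talk in both        -> output based on both (averaged here, as an assumption)
    - silence in both     -> output based on only one input (the most
      recently active endpoint, assumed here for perceptual continuity)
    Returns (output_frame, last_active).
    """
    if talk1 and talk2:
        mixed = [(a + b) / 2 for a, b in zip(frame1, frame2)]
        return mixed, last_active
    if talk1:
        return frame1, 1
    if talk2:
        return frame2, 2
    # Silence instant: keep only one input signal in the output.
    return (frame1 if last_active == 1 else frame2), last_active
```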
Speaker identification for use in multi-media conference call system
Methods for operating a meeting coordinator to detect which participant is speaking during a teleconference meeting using a teleconferencing system, having corresponding mobile electronic devices and computer-readable media, comprise: accessing calendaring information concerning the teleconference meeting, the calendaring information including a time of the teleconference meeting, a meeting location, identities of meeting invitees, and contact information for the meeting invitees; automatically setting up the teleconference meeting using the calendaring information to enable the teleconferencing system to connect remotely-located meeting invitees; generating a roster of meeting participants comprising at least some of the meeting invitees; tracking a participation status for the meeting participants; and accessing prerecorded unique digital voice signatures for the meeting invitees.
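The roster-and-status portion of these steps can be sketched as follows. The calendar-event shape and the status labels are assumptions for illustration; voice-signature matching itself is out of scope of this sketch.

```python
def build_roster(calendar_event):
    """Generate a roster from calendaring information.

    Each invitee starts with an assumed 'invited' participation status,
    to be updated as invitees connect to the teleconference.
    """
    return {invitee: "invited" for invitee in calendar_event["invitees"]}

def mark_joined(roster, invitee):
    """Track participation status when an invitee connects."""
    if invitee in roster:
        roster[invitee] = "joined"
    return roster
```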
Systems and methods for improving audio conferencing services
Systems and methods are disclosed herein for improving audio conferencing services. One aspect relates to processing audio content of a conference. A first audio signal is received from a first conference participant, and a start and an end of a first utterance by the first conference participant are detected from the first audio signal. A second audio signal is received from a second conference participant, and a start and an end of a second utterance by the second conference participant are detected from the second audio signal. The second conference participant is provided with at least a portion of the first utterance, wherein at least one of a start time, a start point, and a duration of the provided portion is determined based at least in part on the start, end, or both, of the second utterance.
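One way the provided portion could be derived from the detected boundaries is to replay the span of the first utterance that overlapped the second participant's own speech. This is only one plausible interpretation of the abstract, with illustrative names and times in seconds.

```python
def missed_portion(first_start, first_end, second_start, second_end):
    """Return the span of the first utterance that overlapped the second
    utterance (and was therefore likely missed by the second participant),
    or None when the utterances did not overlap.

    Times are in seconds; the overlap rule is an assumed example of how
    the start point and duration of the provided portion could be chosen.
    """
    overlap_start = max(first_start, second_start)
    overlap_end = min(first_end, second_end)
    if overlap_start >= overlap_end:
        return None  # no overlap: nothing to replay
    return (overlap_start, overlap_end)
```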
Shared speakerphone system for multiple devices in a conference room
A speakerphone system is shared with multiple participant devices of participants in a physical meeting that are using a web conferencing service. An active speaker is identified from the participants. The participant device of the active speaker is switched, such that the speakerphone system receives and renders audio of the active speaker. Video of the participant device of the active speaker is enabled, such that the web conferencing service displays the video to the participant devices.
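The identify-and-switch step can be sketched with a simple energy-based active-speaker pick: the device with the highest recent microphone energy has its audio routed to the shared speakerphone and its video enabled. The selection rule and all names are assumptions for illustration; the patent does not specify how the active speaker is identified.

```python
def switch_active_speaker(devices, energies):
    """Pick the device with the highest recent microphone energy as the
    active speaker, then build a routing table that sends only that
    device's audio to the speakerphone and enables only its video."""
    active = max(devices, key=lambda d: energies[d])
    routing = {
        d: {
            "audio_to_speakerphone": d == active,
            "video_enabled": d == active,
        }
        for d in devices
    }
    return active, routing
```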