Patent classifications
H04L12/1827
BROADCAST PRIORITY FLAGS FOR ONLINE MEETINGS
A method and system for managing delivery of a content stream to a plurality of devices participating in an online conference session. The method includes delivering, to each of the devices, the content stream associated with the online conference session at a first signal quality, and receiving an indication signal indicating that a first device is to broadcast the content stream. Responsive to the indication signal, the signal quality of the content stream delivered to the first device is increased from the first signal quality to a second signal quality, wherein the second signal quality is higher than the first signal quality. The content stream is then delivered to the first device at the second signal quality and to the rest of the devices at the first signal quality or a third signal quality, wherein the third signal quality is lower than the first signal quality.
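The tiered delivery described above can be sketched as follows. This is an illustrative assumption, not the patented implementation: the `Session` class, the concrete quality labels, and the `reduce_others` option are all hypothetical stand-ins for the first, second, and third signal qualities.

```python
# Hypothetical sketch of tiered stream-quality delivery for a broadcaster.
BASE, BOOSTED, REDUCED = "720p", "1080p", "480p"  # first/second/third signal quality

class Session:
    def __init__(self, devices):
        # All participants start at the first (base) signal quality.
        self.quality = {d: BASE for d in devices}

    def on_broadcast_indication(self, broadcaster, reduce_others=False):
        # Raise the broadcasting device to the higher second quality;
        # optionally drop the remaining devices to the lower third quality.
        for d in self.quality:
            if d == broadcaster:
                self.quality[d] = BOOSTED
            elif reduce_others:
                self.quality[d] = REDUCED

session = Session(["a", "b", "c"])
session.on_broadcast_indication("b", reduce_others=True)
print(session.quality)  # {'a': '480p', 'b': '1080p', 'c': '480p'}
```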
GATEWAYING OF CONFERENCE CALLS TO BROWSER-BASED CONFERENCES
Systems and methods for interconnecting point-to-point (e.g., SIP/H.323) and web-browser-compatible video conferencing services. The conference platform gatewaying service may use a virtual web browser participant to send and/or receive video and/or audio carried over VoIP/video standards, such as SIP/H.323 or other point-to-point protocols, into a web-browser-compatible conference. The conference platform gateway service may create a binding between a SIP address (URI) and a web meeting URL. When communication is initiated from a compatible peer device to the gateway/server by means of a point-to-point protocol, the gateway/server establishes a connection to the web-based conference using the binding between the URI and the URL, thereby connecting the point-to-point and web-browser-compatible meeting services.
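The URI-to-URL binding lookup at the heart of this gateway can be sketched minimally. The class and method names below are assumptions for illustration; a real gateway would additionally negotiate media and spawn the virtual browser participant.

```python
# Minimal sketch of the SIP-URI -> web-meeting-URL binding a gateway resolves.
class ConferenceGateway:
    def __init__(self):
        self.bindings = {}  # SIP URI -> web meeting URL

    def bind(self, sip_uri, meeting_url):
        self.bindings[sip_uri] = meeting_url

    def on_sip_invite(self, sip_uri):
        # When a point-to-point peer dials in, resolve the bound web meeting;
        # a real system would then relay media via a virtual browser participant.
        url = self.bindings.get(sip_uri)
        if url is None:
            raise LookupError(f"no web meeting bound to {sip_uri}")
        return {"join": url, "relay": "virtual-browser-participant"}

gw = ConferenceGateway()
gw.bind("sip:room42@example.com", "https://meet.example.com/room42")
print(gw.on_sip_invite("sip:room42@example.com")["join"])
```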
System and Method for an Interactive Digitally Rendered Avatar of a Subject Person
A system and method for an interactive digitally rendered avatar of a subject person to participate in a web meeting is described. In one embodiment, the method includes receiving an invite to a web meeting on a video conferencing platform, wherein the invite identifies a subject person and the video conferencing platform. The method also includes generating an interactive avatar of the subject person based on a data collection associated with the subject person stored in a database. The method further includes instantiating a platform integrator associated with the video conferencing platform identified in the invite and joining, by the interactive avatar of the subject person, the web meeting on the video conferencing platform. The platform integrator transforms outputs and inputs between the video conferencing platform and an interactive digitally rendered avatar system so that the interactive avatar of the subject person participates in the web meeting.
INTELLIGENT MEETING HOSTING USING ARTIFICIAL INTELLIGENCE ALGORITHMS
A device may analyze input data received at the device. The device may extract contextual information associated with the input data based on the analyzing. The device may provide at least a portion of the contextual information to a machine learning network. The device may receive an output from the machine learning network in response to the machine learning network processing at least the portion of the contextual information. The device may output a notification associated with enabling content sharing based on the output from the machine learning network. The input data may include audio data, and analyzing the input data may include applying speech processing operations, natural language processing operations, or both to the audio data. The input data may include video data, and analyzing the input data may include applying gesture recognition operations, object tracking operations, or both to the video data.
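The modality-routing and notification gating above can be illustrated with a toy pipeline. The keyword analyzer and the fixed threshold below are loud assumptions standing in for the patent's speech/NLP processing, gesture recognition, and machine learning network.

```python
# Toy sketch: route input modalities through analyzers, then gate a
# "content sharing" notification on a stand-in model score.
SHARE_CUES = {"screen", "share", "slide", "deck"}

def extract_context(input_data):
    if input_data["type"] == "audio":
        # Stand-in for speech processing / NLP: tokenize the transcript.
        return set(input_data["transcript"].lower().split())
    if input_data["type"] == "video":
        # Stand-in for gesture recognition / object tracking.
        return set(input_data["detected_gestures"])
    return set()

def should_notify(context, threshold=0.25):
    # Stand-in "model": fraction of share-related cues present in the context.
    score = len(context & SHARE_CUES) / max(len(SHARE_CUES), 1)
    return score >= threshold

ctx = extract_context({"type": "audio",
                       "transcript": "let me share my screen for the deck"})
print(should_notify(ctx))  # True
```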
Systems and methods to automatically join conference
Systems and methods are described to enable a device of a first user to automatically join an ongoing conference to which the device is not currently joined. A first audio signature is generated based on voices of users already in the conference, and a second audio signature is generated based on an audio signal captured by a microphone of the device associated with the first user while that device was not joined to the conference. The two audio signatures are compared, and in response to determining that the first audio signature matches the second audio signature, the device associated with the first user is joined to the conference.
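The signature comparison can be sketched with a deliberately simple fingerprint: per-band energy vectors matched by cosine similarity. Real systems would use robust audio fingerprints; the functions and threshold below are illustrative assumptions.

```python
# Toy audio-signature match: coarse energy fingerprints + cosine similarity.
import math

def signature(samples, bands=4):
    # Coarse fingerprint: mean absolute energy per time band.
    n = max(len(samples) // bands, 1)
    return [sum(abs(s) for s in samples[i*n:(i+1)*n]) / n for i in range(bands)]

def cosine(a, b):
    dot = sum(x*y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x*x for x in a)), math.sqrt(sum(x*x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def should_auto_join(conference_audio, device_audio, threshold=0.95):
    # Join the device when its microphone signature matches the voices
    # already captured in the conference.
    return cosine(signature(conference_audio), signature(device_audio)) >= threshold

room  = [1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0]          # conference voices
mic   = [0.9, 0.95, 0.05, 0.02, 0.88, 0.93, 0.04, 0.03]   # same room, noisy
other = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0]          # unrelated audio
print(should_auto_join(room, mic), should_auto_join(room, other))
```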
Systems and methods for enhancing meetings
The present disclosure provides methods and systems for quantifying meeting effectiveness. A method for quantifying meeting effectiveness may comprise: (a) receiving calendar data related to a meeting; (b) generating a feedback survey based on the calendar data for collecting user feedback data, wherein the feedback survey is presented to a user on an electronic device; (c) generating, using a trained machine learning algorithm, a meeting score indicative of an effectiveness of the meeting based on the calendar data and the user feedback data; and (d) displaying the meeting score within a graphical user interface (GUI) on the electronic device.
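Step (c) can be sketched as features derived from calendar data and survey feedback fed to a scoring function. The fixed weights below are a loud assumption standing in for the trained machine learning algorithm; the feature names are hypothetical.

```python
# Hypothetical meeting-score sketch: hand-set weights stand in for a trained model.
def meeting_features(calendar, feedback):
    return {
        "duration_ratio": calendar["actual_minutes"] / calendar["scheduled_minutes"],
        "attendance": calendar["attended"] / calendar["invited"],
        "avg_rating": sum(feedback) / len(feedback) / 5.0,  # survey ratings on 1-5
    }

def meeting_score(features, weights=None):
    # Placeholder for the trained ML algorithm: weighted sum clamped to 0-100.
    weights = weights or {"duration_ratio": -10, "attendance": 40, "avg_rating": 60}
    base = 10  # offsets the duration penalty
    raw = base + sum(weights[k] * features[k] for k in weights)
    return max(0, min(100, round(raw)))

f = meeting_features(
    {"scheduled_minutes": 30, "actual_minutes": 30, "attended": 8, "invited": 10},
    feedback=[4, 5, 4],
)
print(meeting_score(f))  # 84
```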
Video Communication Method and Video Communications Apparatus
A video communication method and associated apparatus and program are adapted for detecting an operation of starting a first application, where the first application controls a video collection device to perform video collection. When the first application is in a started state, a home page of the first application and a video communication button are displayed on a display interface of the video communications device. At least one contact is displayed in response to a triggering operation performed on the video communication button, a first contact is determined in response to a user selection, and video communication with the first contact is established.
METHOD AND APPARATUS FOR GENERATING INTERACTION RECORD, AND DEVICE AND MEDIUM
A method and apparatus for generating an interaction record, along with a corresponding device and medium, are provided. The method includes: collecting, from a multimedia data stream, behavior data of a user represented by the multimedia data stream, wherein the behavior data includes voice information and/or operation information; and generating, on the basis of the behavior data, interaction record data corresponding to the behavior data. According to this technical solution, by collecting voice information and/or operation information from a multimedia data stream and generating interaction record data from that information, an interacting user can determine interaction information by using the interaction record data, which improves the interaction efficiency of the interacting user and thereby also improves the user experience.
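The folding of voice and operation events into a record can be sketched minimally. The event schema and formatting below are hypothetical; a real system would extract these events from the multimedia stream itself.

```python
# Minimal sketch: voice and operation events folded into a time-ordered record.
def build_interaction_record(events):
    record = []
    for e in sorted(events, key=lambda e: e["t"]):
        if e["kind"] == "voice":
            record.append(f'{e["t"]}s {e["user"]} said: {e["text"]}')
        elif e["kind"] == "operation":
            record.append(f'{e["t"]}s {e["user"]} did: {e["action"]}')
    return record

events = [
    {"t": 5, "kind": "operation", "user": "bob", "action": "shared slide 2"},
    {"t": 3, "kind": "voice", "user": "alice", "text": "let's start"},
]
print(build_interaction_record(events))
```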
Automated real-time data stream switching in a shared virtual area communication environment
Switching real-time data stream connections between network nodes sharing a virtual area is described. In one aspect, the switching involves storing a virtual area specification. The virtual area specification includes a description of one or more switching rules, each defining a respective connection between sources of a respective real-time data stream type and sinks of that real-time data stream type in terms of positions in the virtual area. Real-time data stream connections are established between network nodes associated with respective objects, each of which is associated with at least one of a source and a sink of one or more of the real-time data stream types. The real-time data stream connections are established based on the one or more switching rules, the respective sources and sinks associated with the objects, and the respective positions of the objects in the virtual area.
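The rule evaluation can be sketched as follows: connect each source of a stream type to each sink of that type, but only while both objects occupy the rule's region of the virtual area. The zone shape (an axis-aligned rectangle) and the spec/object dictionaries are illustrative assumptions.

```python
# Illustrative evaluation of position-based switching rules from a
# virtual area specification.
def in_zone(pos, zone):
    (x, y), (x0, y0, x1, y1) = pos, zone
    return x0 <= x <= x1 and y0 <= y <= y1

def connections(spec, objects):
    # A rule connects every source of its stream type to every sink of that
    # type, but only while both objects stand inside the rule's zone.
    links = []
    for rule in spec["rules"]:
        inside = [o for o in objects if in_zone(o["pos"], rule["zone"])]
        for src in inside:
            for snk in inside:
                if (src is not snk and rule["type"] in src["sources"]
                        and rule["type"] in snk["sinks"]):
                    links.append((src["id"], snk["id"], rule["type"]))
    return links

spec = {"rules": [{"type": "audio", "zone": (0, 0, 10, 10)}]}
objects = [
    {"id": "alice", "pos": (2, 3),   "sources": {"audio"}, "sinks": {"audio"}},
    {"id": "bob",   "pos": (8, 8),   "sources": {"audio"}, "sinks": {"audio"}},
    {"id": "carol", "pos": (50, 50), "sources": {"audio"}, "sinks": {"audio"}},
]
print(connections(spec, objects))  # carol is outside the zone, so no links to her
```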
Automatic adjusting background
A conferencing endpoint selects a background for a conferencing system. The conferencing endpoint captures an initial series of images of a foreground object in front of a background image, and segments at least one frame of the initial series of images into the foreground object and the background image according to a first segmentation technique. The conferencing endpoint generates one or more test backgrounds and evaluates the test backgrounds according to a second segmentation technique. The conferencing endpoint selects a final background from the test backgrounds for segmenting a subsequent series of images according to the second segmentation technique.
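The evaluate-and-select loop above can be sketched with a toy quality metric. The foreground/background contrast measure below is a stand-in assumption for the patent's second segmentation technique, and the grayscale pixel model is purely illustrative.

```python
# Sketch of the select-best-background loop with a toy segmentation metric.
def contrast(foreground_pixels, background_value):
    # Higher mean distance between foreground and background makes later
    # frames easier to segment.
    return sum(abs(p - background_value) for p in foreground_pixels) / len(foreground_pixels)

def select_background(foreground_pixels, test_backgrounds):
    # Evaluate each generated candidate and keep the one the segmenter
    # separates from the foreground best.
    return max(test_backgrounds, key=lambda b: contrast(foreground_pixels, b))

person = [120, 130, 125, 140]   # grayscale foreground samples (mid-bright)
candidates = [0, 128, 255]      # generated test backgrounds
print(select_background(person, candidates))  # 0 (darkest contrasts most)
```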