Patent classifications
H04M2201/42
SECOND LEVEL INTERACTIVE VOICE RESPONSE COMPONENT
In an example embodiment, a solution is provided that introduces a second level IVR component controlled by a call control service that also controls a first IVR component. Controlling the IVR components using this call control service (which also interfaces with client software operated by a human agent) allows for data collected during the IVR sessions or during a live session with the human agent to be shared among the components. This also acts to eliminate the need for a traditional “transfer” of a call from a human agent to an IVR or vice versa, which would often be accompanied by audible clicks or beeps discernable to the caller.
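The abstract's key idea is that both IVR components and the agent client read and write a single per-call context held by the call control service. A minimal sketch of that sharing pattern (all class and method names here are hypothetical, invented for illustration) might look like:

```python
# Hypothetical sketch: one call-control service holds per-call context so the
# first-level IVR, the second-level IVR, and the human agent's client all see
# the same collected data -- no traditional "transfer" of state is needed.
class CallControlService:
    def __init__(self):
        self._context = {}  # call_id -> data collected during the call

    def record(self, call_id: str, key: str, value: str) -> None:
        """Any component (IVR or agent client) can record data for a call."""
        self._context.setdefault(call_id, {})[key] = value

    def context_for(self, call_id: str) -> dict:
        """Returns the same context regardless of which component asks."""
        return self._context.get(call_id, {})


svc = CallControlService()
svc.record("call-42", "account", "12345")   # first IVR collects the account
svc.record("call-42", "issue", "billing")   # agent adds the issue type
print(svc.context_for("call-42"))           # second IVR sees both entries
```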
Sentiment-based prioritization of contact center engagements
A sentiment-based score is determined for a contact center engagement between a first contact center service operator and a contact center user. The sentiment-based score is indicated within a graphical user interface displaying information associated with multiple contact center engagements at a device of a second contact center service operator. Based on a request to participate in the contact center engagement received from the device of the second contact center service operator via the graphical user interface, information associated with the contact center engagement is transmitted to the device of the second contact center service operator, and a contact center session involving a device of the contact center user and the device of the second contact center service operator is established.
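The abstract does not specify how the sentiment-based score is computed; as one illustrative assumption, a per-utterance sentiment in [-1, 1] could be averaged per engagement and used to sort the list shown to the second operator:

```python
# Hypothetical sketch: score engagements by average utterance sentiment and
# order the queue so the most negative (most urgent) engagements come first.
from dataclasses import dataclass


@dataclass
class Engagement:
    engagement_id: str
    utterance_sentiments: list  # per-utterance sentiment in [-1.0, 1.0]


def sentiment_score(engagement: Engagement) -> float:
    """Average utterance sentiment; lower values suggest an unhappier caller."""
    s = engagement.utterance_sentiments
    return sum(s) / len(s) if s else 0.0


def prioritized(engagements: list) -> list:
    """Most negative engagements first, for display in the operator UI."""
    return sorted(engagements, key=sentiment_score)


queue = [
    Engagement("eng-1", [0.2, 0.4]),
    Engagement("eng-2", [-0.6, -0.8]),
    Engagement("eng-3", [0.0]),
]
ordered = prioritized(queue)
print([e.engagement_id for e in ordered])  # -> ['eng-2', 'eng-3', 'eng-1']
```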
Apparatuses and methods involving a contact center virtual agent
Apparatuses and methods concerning providing a data-communications contact center virtual agent are disclosed. As an example, user-data-communications between client and participant stations are facilitated as follows, which may be implemented using a data communications server and associated communications circuitry. Service request data is received from users at participant stations, and context information is identified for user-data-communications between a client station and the participant stations based on the service request data and at least one communications-specific characteristic associated with the user-data-communications. The identified context information is aggregated for the client station and used for choosing a data routing option for routing data with each user at the participant stations, based on the service request data and the aggregated context information.

Systems and methods for videoconferencing with spatial audio
A system may provide for the generation of spatial audio for audiovisual conferences, video conferences, etc. (referred to herein simply as “conferences”). Spatial audio may include audio encoding and/or decoding techniques in which a sound source may be specified at a location, such as on a two-dimensional plane and/or within a three-dimensional field, and/or in which a direction or target for a given sound source may be specified. A conference participant's position within a conference user interface (“UI”) may be set as the source of sound associated with the conference participant, such that different conference participants may be associated with different sound source positions within the conference UI.
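The mapping from a participant's UI position to a perceived sound-source location can be sketched with a constant-power pan law, which is a standard audio technique (the function name and the assumption of horizontal positions normalized to [0, 1] are illustrative, not from the patent):

```python
# Minimal sketch: map a participant's horizontal position in the conference
# UI to left/right stereo gains, so their voice seems to come from their tile.
# Assumes x is normalized: 0.0 = far left of the UI, 1.0 = far right.
import math


def stereo_gains(x: float) -> tuple:
    """Constant-power pan: total acoustic power is the same at every x."""
    angle = x * math.pi / 2
    return (math.cos(angle), math.sin(angle))


left, right = stereo_gains(0.5)  # a centered participant
# For any x, left**2 + right**2 == 1, so loudness stays constant as a
# participant's tile moves across the conference UI.
```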
ANIMATED EXPRESSIVE ICON
Embodiments described herein include an expressive icon system to present an animated graphical icon, wherein the animated graphical icon is generated by capturing facial tracking data at a client device. In some embodiments, the system may track and capture facial tracking data of a user via a camera associated with a client device (e.g., a front-facing camera or a paired camera), and process the facial tracking data to animate a graphical icon.
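One way such processing could work (purely an illustrative assumption; the patent does not specify a representation) is to treat the facial tracking data as per-frame expression weights and map them onto the icon's animation controls:

```python
# Hypothetical sketch: map captured facial-tracking weights for one video
# frame onto an animated icon's expression parameters. The tracker keys,
# icon parameter names, and [0, 1] clamping are all illustrative assumptions.
def animate_icon(face_weights: dict) -> dict:
    """Clamp tracked weights to [0, 1] and rename them to icon controls."""
    mapping = {
        "mouth_smile": "smile",
        "brow_raise": "surprise",
        "eye_blink": "blink",
    }
    return {
        icon_param: max(0.0, min(1.0, face_weights.get(track_key, 0.0)))
        for track_key, icon_param in mapping.items()
    }


frame = {"mouth_smile": 0.8, "brow_raise": 1.3}  # blink untracked this frame
print(animate_icon(frame))  # -> {'smile': 0.8, 'surprise': 1.0, 'blink': 0.0}
```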
Visual Interactive Voice Response
A method includes connecting a call from a client device to a destination having an interactive voice response service; transcribing audio from the destination during the call to identify menu options of the interactive voice response service; generating visualizations representing the menu options; and outputting the visualizations to a display associated with the client device. A system includes a telephony system, an automatic speech recognition processing tool, and a visualization output generation tool. The telephony system connects a call from a client device to a destination having an interactive voice response service. The automatic speech recognition processing tool transcribes audio from the destination during the call to identify menu options of the interactive voice response service. The visualization output generation tool generates visualizations representing the menu options. The telephony system outputs the visualizations to a display associated with the client device.
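The step of identifying menu options from the transcribed audio could, under the simplifying assumption that the IVR prompt follows a "For X, press N" pattern, be sketched with a regular expression (the transcript and pattern here are invented for illustration):

```python
# Hypothetical sketch: extract menu options from an ASR transcript of an IVR
# prompt, producing digit -> label pairs that can be rendered as tappable
# visual options on the client device's display.
import re

TRANSCRIPT = (
    "For billing, press 1. For technical support, press 2. "
    "To speak with an agent, press 0."
)


def parse_menu(transcript: str) -> dict:
    """Map each DTMF digit to its option label, e.g. {'1': 'billing', ...}."""
    options = {}
    for label, digit in re.findall(r"(?:For|To) ([^,.]+), press (\d)", transcript):
        options[digit] = label.strip()
    return options


print(parse_menu(TRANSCRIPT))
```

A real transcript would be noisier, so a production system would likely need fuzzier matching than this single pattern.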
ELECTRONIC DEVICE INCLUDING FLEXIBLE DISPLAY FOR SCREEN RECORDING, AND METHOD THEREOF
The disclosure relates to an electronic device including a flexible display for screen recording, and a method thereof. The electronic device may include: a memory, a display module including a flexible display, and at least one processor electrically coupled to the memory and the display module. The at least one processor may be configured to: record a screen of the display displayed in a visible area of the display at a reference screen size; based on the screen size of the visible area being changed by extension or contraction of the display during the recording, control the display module to display, in a part of the visible area corresponding to the changed size, an object to which a visual effect related to at least one content displayed on the screen is applied; and in response to completion of the extension or contraction of the visible area of the display during the recording, control the display module to display an extended or contracted screen in the extended or contracted visible area.
Systems and methods for emergency data integration
Described herein are systems, devices, methods, and media providing emergency data to emergency service providers (ESP; e.g., public safety answering points (PSAPs)). Also provided are systems, methods, and media for utilizing location data and geofences to provide emergency data to ESPs and interactive graphical displays to efficiently display relevant emergency data.
Emergency call data aggregation and visualization
A system for providing locations of emergency callers receives call data related to emergency calls received at a public safety answering point (PSAP) and a supplemental data signal that includes a location of an emergency. A signal correlation engine determines whether the supplemental data signal corresponds to one of the emergency calls received at the PSAP. A web server provides a user interface that includes a map and a supplemental signal indicator corresponding to the supplemental data signal. The supplemental signal indicator is positioned on the map at the location of the emergency, and the supplemental signal indicator has a visual characteristic indicating whether or not the supplemental data signal corresponds to an emergency call received at the PSAP.
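The correlation decision made by the signal correlation engine could be sketched as a proximity test in time and space (the thresholds, field names, and flat-earth distance approximation below are illustrative assumptions, not from the patent):

```python
# Hypothetical sketch: decide whether a supplemental data signal (e.g. an
# alarm with a location) refers to the same emergency as a call received at
# the PSAP, by checking closeness in time and space.
from math import hypot


def correlates(call: dict, signal: dict,
               max_seconds: float = 120.0, max_degrees: float = 0.01) -> bool:
    """True if the signal plausibly matches the call. Locations are
    (lat, lon) in degrees; the distance check is a flat-earth approximation,
    adequate at city scale for a sketch."""
    close_in_time = abs(call["timestamp"] - signal["timestamp"]) <= max_seconds
    close_in_space = hypot(call["lat"] - signal["lat"],
                           call["lon"] - signal["lon"]) <= max_degrees
    return close_in_time and close_in_space


call = {"timestamp": 1000.0, "lat": 40.7128, "lon": -74.0060}
alarm = {"timestamp": 1060.0, "lat": 40.7130, "lon": -74.0061}
print(correlates(call, alarm))  # -> True
```

The boolean result would then drive the visual characteristic of the supplemental signal indicator on the map (e.g., matched vs. unmatched styling).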
Emotes for non-verbal communication in a videoconferencing system
A method is disclosed for videoconferencing in a three-dimensional virtual environment. In the method, a position and direction, a specification of an emote, and a video stream are received. The position and direction specify a location and orientation in the virtual environment and are input by a first user. The specification of the emote is also input by the first user. The video stream is captured from a camera on a device of the first user that is positioned to capture photographic images of the first user. The video stream is mapped onto a three-dimensional model of an avatar. From a perspective of a virtual camera of a second user, the virtual environment is rendered for display to the second user. The rendered environment includes the mapped three-dimensional model of the avatar, located at the position and oriented in the direction, with the emote attached to the video-stream-mapped avatar.