Patent classification: H04M2201/42
ELECTRONIC DEVICE FOR PROVIDING INFORMATION AND/OR FUNCTIONS THROUGH ICON, AND METHOD FOR CONTROLLING SAME
An electronic device is provided. The electronic device includes a flexible display and a processor, wherein the processor is configured to display a first shortcut icon for executing a first application on the flexible display while a first portion of the flexible display is exposed to the outside, detect a first input for exposing a second portion of the flexible display, including the first portion, to the outside, and, based on the first input, display the first shortcut icon on the second portion as a first extended shortcut icon if a size of the second portion is greater than or equal to a first threshold size, wherein the first extended shortcut icon may include at least one menu for executing a designated function of the first application.
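The abstract's core logic is a size check against a threshold. A minimal sketch, assuming a pixel-width threshold and illustrative menu names (none of these values appear in the patent):

```python
# Hypothetical sketch of the threshold logic above: when the exposed display
# portion grows past a threshold size, the plain shortcut icon is replaced by
# an extended icon carrying menus for designated functions of the application.
FIRST_THRESHOLD = 400  # assumed exposed-width threshold, in pixels

def icon_for_exposed_size(exposed_width: int) -> dict:
    """Return the icon representation for the currently exposed portion."""
    if exposed_width >= FIRST_THRESHOLD:
        # Extended shortcut icon: includes at least one function menu.
        return {
            "type": "extended_shortcut",
            "menus": ["compose", "recent", "settings"],  # illustrative only
        }
    return {"type": "shortcut"}

print(icon_for_exposed_size(300)["type"])  # shortcut
print(icon_for_exposed_size(480)["type"])  # extended_shortcut
```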
Automated Call Queue Agent Conversation Item Selection
Agent conversation item selection is automated by a server that automatically detects speech in a call and converts that speech to text. Software running on the server retrieves one or more items from a data store based on a determination that the text includes one or more keywords (which can include phrases) or indicates a change in the subject of the call. The retrieved items include one or more of scripts, articles, manuals, daily bulletins regarding a system state, or any other resource that can be used to assist with a customer call or interaction. The software running on the server generates a user interface (UI) output based on the retrieved items and transmits the UI output to an agent device. Software running on the agent device receives the UI output and displays the retrieved items on a display of the agent device.
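The keyword-to-item lookup described above can be sketched as follows. This is an illustrative simplification, not the patent's implementation; the data store is mocked as an in-memory dict and the keywords and resources are invented:

```python
# Hedged sketch: retrieve assistance items when transcript text contains a
# known keyword or key phrase. Keys and values here are placeholders.
ITEM_STORE = {
    "refund": ["Refund policy article", "Refund script"],
    "billing error": ["Billing manual, ch. 4"],
}

def retrieve_items(transcript_text: str) -> list:
    """Return items whose keyword or phrase appears in the transcript."""
    text = transcript_text.lower()
    items = []
    for keyword, resources in ITEM_STORE.items():
        if keyword in text:
            items.extend(resources)
    return items

print(retrieve_items("I was charged twice, this is a billing error"))
```

A production system would do fuzzier matching (stemming, intent classification) and would also trigger retrieval on detected topic changes, per the abstract.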
User Experience Workflow Configuration
A user experience workflow may be configured based on input received for various object types selectively arranged within the user experience workflow and then bound to a destination identifier, such as a telephone number or web address. A user interface of software for configuring a user experience workflow is presented at a user device and input from that user device is used to selectively arrange objects within a user experience workflow and/or to configure objects thereof. After configurations are applied to the objects, the user experience workflow is bound to the destination identifier. An end user device which accesses the destination identifier (e.g., by calling the telephone number, visiting the web address, or using an application connecting to the web address) may then traverse the user experience workflow, including in some cases having configured content presented thereto.
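The arrangement-then-binding flow above suggests a simple data model. A minimal sketch, assuming a workflow is an ordered list of configured objects bound to a destination identifier (all class and object-type names here are hypothetical):

```python
# Illustrative model of the configure-then-bind flow: objects are selectively
# arranged and configured, then the whole workflow is bound to a destination
# identifier such as a telephone number or web address.
class Workflow:
    def __init__(self):
        self.objects = []        # selectively arranged, configured objects
        self.destination = None  # telephone number or web address

    def add_object(self, obj_type: str, config: dict):
        self.objects.append({"type": obj_type, "config": config})

    def bind(self, destination: str):
        self.destination = destination

wf = Workflow()
wf.add_object("greeting", {"text": "Welcome"})
wf.add_object("menu", {"options": ["sales", "support"]})
wf.bind("+1-555-0100")  # an end user calling this number traverses the workflow
```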
Visual Interactive Voice Response
A method includes connecting a call from a client device to a destination having an interactive voice response service; transcribing audio from the destination during the call to identify menu options of the interactive voice response service; generating visualizations representing the menu options; and outputting the visualizations to a display associated with the client device. A system includes a telephony system, an automatic speech recognition processing tool, and a visualization output generation tool. The telephony system connects a call from a client device to a destination having an interactive voice response service. The automatic speech recognition processing tool transcribes audio from the destination during the call to identify menu options of the interactive voice response service. The visualization output generation tool generates visualizations representing the menu options. The telephony system outputs the visualizations to a display associated with the client device.
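The transcribe-then-visualize step can be illustrated with a toy parser. This is an assumption-laden sketch: real IVR transcripts are messier, and the regex below only handles the common "press N for X" phrasing:

```python
import re

# Hedged sketch: extract "press N for X" menu options from an IVR transcript
# and build simple visualization records (e.g., labeled on-screen buttons).
def parse_menu_options(transcript: str) -> list:
    pattern = re.compile(r"press (\d) for ([\w\s]+?)(?:,|\.|$)", re.IGNORECASE)
    return [
        {"digit": digit, "label": label.strip()}
        for digit, label in pattern.findall(transcript)
    ]

options = parse_menu_options("Press 1 for sales, press 2 for support.")
print(options)
# → [{'digit': '1', 'label': 'sales'}, {'digit': '2', 'label': 'support'}]
```

Each record could then be rendered as a tappable button on the client device, replacing listen-and-wait navigation.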
INFORMATION PROCESSING TERMINAL, PROGRAM, AND METHOD
There is a demand for an information processing terminal capable of correctly rendering content on a divided display even when the display is bent. Accordingly, proposed is an information processing terminal with a bendable display unit, the information processing terminal including: the display unit; a sensor unit that detects an inclination and a rotation direction of the information processing terminal and detects a bending amount of the display; and a screen control unit that divides the display of the display unit based on a bending position and controls the divided display based on the inclination, the rotation direction, the bending position, and the bending amount.
TELECOMMUNICATION CALL MANAGEMENT AND MONITORING SYSTEM WITH VOICEPRINT VERIFICATION
Disclosed is a secure telephone call management system for authenticating users of a telephone system in an institutional facility. Authentication of the users is accomplished by using a personal identification number, preferably in conjunction with speaker independent voice recognition and speaker dependent voice identification. When a user first enters the system, the user speaks his or her name which is used as a sample voice print. During each subsequent use of the system, the user is required to speak his or her name. Voice identification software is used to verify that the provided speech matches the sample voice print. The secure system includes accounting software to limit access based on funds in a user's account or other related limitations. Management software implements widespread or local changes to the system and can modify or set any number of user account parameters.
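The enroll-then-verify flow described above can be sketched in a few lines. This is a simplified illustration: the similarity function is a stand-in for speaker-dependent voice identification, and the 0.8 threshold is an assumption, not a value from the patent:

```python
# Simplified sketch of voiceprint enrollment and verification keyed by PIN.
voiceprints = {}  # PIN -> enrolled spoken-name sample

def enroll(pin: str, spoken_name_sample: bytes):
    """First use: the spoken name is stored as the sample voice print."""
    voiceprints[pin] = spoken_name_sample

def verify(pin: str, spoken_name_sample: bytes, similarity) -> bool:
    """Subsequent uses: compare the new utterance against the stored print.

    `similarity` is a placeholder for a speaker-dependent voice ID model
    returning a score in [0, 1]."""
    enrolled = voiceprints.get(pin)
    return enrolled is not None and similarity(enrolled, spoken_name_sample) >= 0.8
```

Account-balance checks and administrative parameter changes described in the abstract would sit on top of this gate.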
SYSTEMS AND METHODS FOR VIDEOCONFERENCING WITH SPATIAL AUDIO
A system may provide for the generation of spatial audio for audiovisual conferences, video conferences, etc. (referred to herein simply as “conferences”). Spatial audio may include audio encoding and/or decoding techniques in which a sound source may be specified at a location, such as on a two-dimensional plane and/or within a three-dimensional field, and/or in which a direction or target for a given sound source may be specified. A conference participant's position within a conference user interface (“UI”) may be set as the source of sound associated with the conference participant, such that different conference participants may be associated with different sound source positions within the conference UI.
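The mapping from a participant's UI position to a sound-source position can be illustrated with a stereo pan. A minimal sketch under stated assumptions: a simple linear pan law over the horizontal axis only (real spatial audio would use equal-power panning or full 2D/3D positioning):

```python
# Assumption-laden sketch: map a participant's horizontal tile position in
# the conference UI to left/right channel gains, so their voice appears to
# come from where their tile sits on screen.
def pan_gains(tile_x: float, ui_width: float) -> tuple:
    """Return (left_gain, right_gain) for a tile centered at tile_x."""
    pos = tile_x / ui_width  # 0.0 = far left edge, 1.0 = far right edge
    return (1.0 - pos, pos)

left, right = pan_gains(tile_x=800, ui_width=1600)
print(left, right)  # a centered tile yields equal gains: 0.5 0.5
```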
Systems and methods for computerized interactive skill training
The present invention is directed to interactive training, and in particular, to methods and systems for computerized interactive skill training. An example embodiment provides a method and system for providing skill training using a computerized system. The computerized system receives a selection of a first training subject. Several related training components can be invoked, such as reading, watching, performing, and/or reviewing components. In addition, a scored challenge session is provided, wherein a training challenge is provided to a user via a terminal, optionally in video form.
Rendering of sounds associated with selected target objects external to a device
The techniques disclosed herein include a first device including one or more processors configured to detect a selection of at least one target object external to the first device and initiate a channel of communication between the first device and a second device associated with the at least one target object. The one or more processors may be configured to receive audio packets from the second device in response to the selection of the at least one target object, and decode the audio packets received from the second device to generate an audio signal. The one or more processors may be configured to output the audio signal based on the selection of the at least one target object external to the first device. The first device includes a memory, coupled to the one or more processors, configured to store the audio packets.
REAL-TIME AGENT ASSISTANCE USING REAL-TIME AUTOMATIC SPEECH RECOGNITION AND BEHAVIORAL METRICS
A method of assisting an agent in real-time includes receiving a call interaction between a customer and an agent; identifying words spoken in the call interaction; providing the words to a behavioral models module; computing a score for a plurality of behavioral metrics; providing a phrase formed by the words to a knowledge article selection module; providing each score for the plurality of behavioral metrics to the knowledge article selection module; providing a plurality of knowledge selection rules to the knowledge article selection module; evaluating a combination of the phrase and the scores of the plurality of behavioral metrics against each of the plurality of knowledge selection rules; matching a knowledge selection rule to the combination; selecting a knowledge article associated with the matched knowledge selection rule; generating a visual representation of the selected knowledge article; and presenting in real-time the visual representation on a graphical user interface.
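The rule-evaluation step above combines a detected phrase with behavioral-metric scores. An illustrative sketch, assuming rules carry a trigger phrase, per-metric minimum scores, and an associated article (all rule contents and metric names are invented for illustration):

```python
# Hedged sketch of knowledge selection: a rule matches when its phrase occurs
# in the spoken text AND every behavioral-metric threshold it names is met.
RULES = [
    {"phrase": "cancel my account",
     "min_scores": {"frustration": 0.7},
     "article": "Retention offer script"},
    {"phrase": "cancel my account",
     "min_scores": {},  # fallback rule with no metric requirement
     "article": "Cancellation procedure"},
]

def select_article(phrase: str, scores: dict):
    """Return the article of the first rule matching the phrase/score combo."""
    for rule in RULES:
        if rule["phrase"] in phrase and all(
            scores.get(metric, 0.0) >= threshold
            for metric, threshold in rule["min_scores"].items()
        ):
            return rule["article"]
    return None

print(select_article("i want to cancel my account", {"frustration": 0.9}))
# Retention offer script
```

The selected article would then be rendered on the agent's graphical user interface in real time, per the final steps of the claimed method.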