Patent classifications
H04M2203/1025
AI avatar coaching system based on free speech emotion analysis for managing in place of CS managers
Disclosed is an AI avatar coaching system based on free-speech emotion analysis that acts on behalf of CS managers. The AI avatar coaching system includes: an AI avatar coach server that generates an AI avatar coach video for practical counseling training and provides the generated AI avatar coach video; a trainee/inexperienced-counselor terminal that receives and outputs the AI avatar coach video provided by the AI avatar coach server; a purchase-customer terminal that performs a voice call for a counseling inquiry from a purchase customer; a counselor terminal that performs the voice call so that a counselor can process the counseling inquiry of the purchase customer; and an omni-channel customer/company consulting service server that sets up a voice call session between the purchase-customer terminal and the counselor terminal and transmits a report on the counseling inquiry and its processing, in order to provide counseling services on behalf of multiple selling-company customers. With this AI avatar coaching system based on free-speech emotion analysis acting on behalf of CS managers, a counseling video of an experienced counselor is simulated as an avatar video and provided to trainee/inexperienced counselors, so that they can learn counseling and response methods and be trained effectively through specific practical cases.
Information processing apparatus, information processing method, and program
There is provided an information processing apparatus, an information processing method, and a program that make it possible to recognize the state of telecommunication among multiple points more easily. Information regarding telecommunication performed among telecommunication apparatuses is received; an image indicating the state of telecommunication between a first telecommunication apparatus and another telecommunication apparatus, and the state of telecommunication among a plurality of other telecommunication apparatuses, is generated on the basis of the received information; and the generated image is then displayed. The present disclosure can be applied, for example, to an information processing apparatus, a telecommunication apparatus, electronic equipment, an information processing method, a program, or the like.
VARIABLE-VOLUME AUDIO STREAMS
Systems and methods for enhanced teleconferencing. An example method includes generating a teleconference interface with a plurality of user-controlled participant interface elements representing participants of the teleconference; identifying a first conversation based on positions, in the teleconference interface, of a first subset of the participant interface elements; identifying a second conversation based on the positions, in the teleconference interface, of a second subset of the participant interface elements; accessing supplemental data, from at least one of a networking or social media database, for the participants of the teleconference; and presenting, within the participant interface elements, the supplemental data.
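The position-based conversation identification described above could be implemented as a simple proximity clustering over the participant interface elements. The sketch below is an illustrative assumption, not the patent's actual method; `group_conversations`, the pixel coordinates, and the `radius` threshold are all hypothetical names and values.

```python
import math

def group_conversations(positions, radius=100.0):
    """Greedily cluster participants whose interface elements lie within
    `radius` pixels of an existing group member into one conversation."""
    groups = []
    for pid, (x, y) in positions.items():
        placed = False
        for group in groups:
            # Join a group if close to any member already in it.
            if any(math.dist((x, y), positions[other]) <= radius
                   for other in group):
                group.append(pid)
                placed = True
                break
        if not placed:
            groups.append([pid])  # start a new conversation
    return groups
```

Participants dragged near each other would thus fall into the same conversation subset, while a distant participant forms a separate one.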
Room capture and projection
Examples associated with room capture and projection are disclosed. One example includes an information management module that may maintain information regarding a virtual space and a first digital object within the virtual space. The first digital object may be associated with an artifact in a physical space. A room calibration module may map the virtual space to the physical space using sensors to detect attributes of the physical space. A capture module may record a modification to the artifact to be maintained by the information management module. A projection module may project a representation of a second digital object into the physical space. The representation may be projected based on a signal from the information management module.
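The room calibration module's mapping of the virtual space onto the physical space could, in the simplest case, be a per-axis linear fit from sensed reference points. This is a minimal sketch under that assumption; `calibrate_axis` and `project_point` are hypothetical helpers, not names from the disclosure.

```python
def calibrate_axis(virtual_a, virtual_b, physical_a, physical_b):
    """Map one axis of the virtual space onto the physical space using
    two sensed reference points: physical = scale * virtual + offset."""
    scale = (physical_b - physical_a) / (virtual_b - virtual_a)
    offset = physical_a - scale * virtual_a
    return scale, offset

def project_point(virtual, calib):
    """Apply per-axis (scale, offset) calibration to a virtual point,
    yielding the physical coordinates at which to project it."""
    return tuple(s * v + o for v, (s, o) in zip(virtual, calib))
```

A projection module could then use `project_point` to place the representation of a digital object at the corresponding physical location.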
TECHNOLOGIES FOR INCORPORATING AN AUGMENTED VOICE COMMUNICATION INTO A COMMUNICATION ROUTING CONFIGURATION
A method for incorporating an augmented voice communication into a communication routing configuration of a contact center system according to an embodiment includes selecting a vocal avatar, wherein the vocal avatar includes phonetic characteristics having first values, receiving a text communication and input user parameters from an input user, generating the augmented voice communication based on the text communication and the input user parameters, wherein the augmented voice communication includes phonetic characteristics having second values, wherein the first values of the phonetic characteristics of the vocal avatar are different from the second values of the phonetic characteristics of the augmented voice communication, and incorporating the augmented voice communication into the communication routing configuration of the contact center system.
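The key constraint above is that the augmented voice's phonetic values (the second values) must differ from the vocal avatar's original values (the first values). A minimal sketch of deriving such values from input user parameters is below; the parameter names (`pitch_hz`, `rate_wps`) and the multiplicative adjustment scheme are illustrative assumptions only.

```python
def augment_voice(avatar_params, user_params):
    """Derive second (augmented) phonetic values from a vocal avatar's
    first values by applying the input user's per-parameter multipliers;
    ensure the result differs from the avatar's originals."""
    augmented = {k: v * user_params.get(k, 1.0)
                 for k, v in avatar_params.items()}
    if augmented == avatar_params:
        # Guarantee the claimed difference with a minimal pitch shift.
        augmented["pitch_hz"] = avatar_params.get("pitch_hz", 120.0) * 1.05
    return augmented
```

The returned parameter set could then drive speech synthesis of the text communication before it is routed by the contact center system.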
Complex computing network for initiating and extending audio conversations among mobile device users on a mobile application
Systems, methods, and computer program products are provided for initiating and extending audio conversations among mobile device users on a mobile application. For example, a method comprises: determining a first user accesses a mobile application on a first mobile device of the first user; determining a second user accesses the mobile application on a second mobile device of the second user; initiating an audio conversation between the first user and the second user; transmitting audio conversation information to at least one of the first user or the second user; and broadcasting the audio conversation to a third user who accesses the mobile application on a third mobile device of the third user.
Emotes for non-verbal communication in a videoconferencing system
A method is disclosed for videoconferencing in a three-dimensional virtual environment. In the method, a position and direction, a specification of an emote, and a video stream are received. The position and direction specify a location and orientation in the virtual environment and are input by a first user. The specification of the emote is also input by the first user. The video stream is captured from a camera on a device of the first user that is positioned to capture photographic images of the first user. The video stream is mapped onto a three-dimensional model of an avatar. From the perspective of a virtual camera of a second user, the virtual environment is rendered for display to the second user. The rendered environment includes the mapped three-dimensional model of the avatar, located at the position and oriented in the direction, with the emote attached to the video-stream-mapped avatar.
Complex computing network for customizing a visual representation for use in an audio conversation on a mobile application
Systems, methods, and computer program products are provided for generating visual representations for use in audio conversations. For example, a method comprises: receiving user information associated with a first user; receiving visual representation information input by the first user, wherein the visual representation information comprises a first feature and a second feature distinct from the first feature, and wherein the first feature comprises a facial feature; and generating a visual representation based on the visual representation information, wherein the visual representation is presented to a second user during an audio conversation between the first user and the second user, wherein at least one of the first feature or the second feature changes form when the first user speaks during the audio conversation, and wherein both the first feature and the second feature remain static when the second user speaks during the audio conversation.
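The speaker-dependent behavior above (features animate only while the representation's owner speaks, and stay static otherwise) can be sketched as a small state check. `VisualRepresentation` and `animated_features` are hypothetical names for illustration, not the patent's implementation.

```python
class VisualRepresentation:
    def __init__(self, owner, features):
        self.owner = owner        # the user this representation depicts
        self.features = features  # e.g. ["mouth", "eyebrows"]

    def animated_features(self, active_speaker):
        """Return the features that change form right now: the owner's
        features animate only while the owner is the active speaker and
        remain static while any other participant speaks."""
        if active_speaker == self.owner:
            return list(self.features)
        return []
```

A rendering loop would call `animated_features` each frame with the current active speaker and animate only the returned features.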
Volume areas in a three-dimensional virtual conference space, and applications thereof
Disclosed herein is a web-based videoconference system that allows video avatars to navigate within a virtual environment. The system has a presenter mode that allows a presentation stream to be texture-mapped to a presenter screen situated within the virtual environment. The relative left-right sound is adjusted to provide a sense of an avatar's position in the virtual space. The sound is further adjusted based on the area where the avatar is located and where the virtual camera is located. Video stream quality is adjusted based on relative position in the virtual space. Three-dimensional modeling is available inside the virtual videoconferencing environment.
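The left-right sound adjustment based on an avatar's position can be sketched as constant-power stereo panning with distance attenuation. This is a minimal sketch under assumed conventions (2-D coordinates, a source to the listener's +y side pans right); `stereo_gains` and the `falloff` constant are illustrative, not the disclosed system's actual audio pipeline.

```python
import math

def stereo_gains(listener_pos, listener_dir, source_pos, falloff=10.0):
    """Compute (left, right) gains for a sound source relative to a
    listener in a 2-D virtual space: pan follows the angle to the
    source, and the overall level decays with distance."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dy)
    # Angle of the source relative to the listener's facing direction.
    angle = math.atan2(dy, dx) - math.atan2(listener_dir[1], listener_dir[0])
    pan = math.sin(angle)                 # -1 = fully left, +1 = fully right
    level = 1.0 / (1.0 + dist / falloff)  # simple distance attenuation
    # Constant-power panning keeps perceived loudness steady across the pan.
    left = level * math.cos((pan + 1) * math.pi / 4)
    right = level * math.sin((pan + 1) * math.pi / 4)
    return left, right
```

A source directly ahead yields equal left and right gains; moving the avatar to one side shifts gain toward that channel, and moving it away lowers both.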
Devices, Methods, and Graphical User Interfaces for Providing Computer-Generated Experiences
A computing system displays, via a first display generation component, a first computer-generated environment and concurrently displays, via a second display generation component: a visual representation of a portion of a user of the computing system who is in a position to view the first computer-generated environment via the first display generation component, and one or more graphical elements that provide a visual indication of content in the first computer-generated environment. The computing system changes the visual representation of the portion of the user to represent changes in an appearance of the user over a respective period of time and changes the one or more graphical elements to represent changes in the first computer-generated environment over the respective period of time.