H04M2203/1025

ROOM CAPTURE AND PROJECTION

Examples associated with room capture and projection are disclosed. One example includes an information management module that may maintain information regarding a virtual space and a first digital object within the virtual space. The first digital object may be associated with an artifact in a physical space. A room calibration module may map the virtual space to the physical space using sensors to detect attributes of the physical space. A capture module may record a modification to the artifact to be maintained by the information management module. A projection module may project a representation of a second digital object into the physical space. The representation may be projected based on a signal from the information management module.
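The four-module architecture described above can be sketched as follows. This is a minimal illustration under assumed names; none of the classes, methods, or numbers below come from the disclosure itself.

```python
# Hypothetical sketch of the disclosed architecture: an information
# management module tracks digital objects tied to physical artifacts,
# and a calibration module maps virtual coordinates into the room.
class InformationManagement:
    """Maintains digital objects within a virtual space."""
    def __init__(self):
        self.objects = {}  # object_id -> attributes (e.g. linked artifact)

    def update(self, object_id, attributes):
        self.objects.setdefault(object_id, {}).update(attributes)

class RoomCalibration:
    """Maps virtual coordinates to physical coordinates; scale and offset
    would be derived from sensed attributes of the physical space."""
    def __init__(self, scale, offset):
        self.scale, self.offset = scale, offset

    def to_physical(self, virtual_point):
        return tuple(v * self.scale + o for v, o in zip(virtual_point, self.offset))

# A capture module would record artifact modifications back into
# InformationManagement; a projection module would read a second digital
# object and project it at to_physical(its virtual position).
info = InformationManagement()
info.update("whiteboard-1", {"artifact": "physical whiteboard"})
cal = RoomCalibration(scale=0.5, offset=(1.0, 2.0, 0.0))
print(cal.to_physical((2.0, 2.0, 2.0)))  # -> (2.0, 3.0, 1.0)
```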

Variable-volume audio streams

Systems and methods for enhanced teleconferencing. An example method includes generating a teleconference interface with a plurality of user-controlled participant interface elements representing participants of the teleconference; identifying a first conversation based on positions, in the teleconference interface, of a first subset of the participant interface elements; identifying a second conversation based on the positions, in the teleconference interface, of a second subset of the participant interface elements; accessing supplemental data, from at least one of a networking or social media database, for the participants of the teleconference; and presenting, within the participant interface elements, the supplemental data.
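The step of identifying conversations from element positions could be implemented as a proximity grouping. The sketch below is an assumption, not the patent's method: it single-links participants whose interface elements sit within a fixed radius of one another.

```python
# Illustrative grouping of participant interface elements into conversations
# by on-screen proximity; names, positions, and the radius are made up.
from math import dist

def identify_conversations(positions, radius=100.0):
    """Cluster participants whose elements lie within `radius` of a group."""
    groups = []
    for name, pos in positions.items():
        # Find every existing group this participant is close to.
        near = [g for g in groups if any(dist(pos, positions[m]) <= radius for m in g)]
        merged = {name}
        for g in near:          # merge all nearby groups with the new member
            merged |= g
            groups.remove(g)
        groups.append(merged)
    return groups

ui = {"ann": (10, 10), "bob": (40, 20), "cai": (500, 500), "dee": (520, 480)}
print(identify_conversations(ui))  # two conversations: {ann, bob} and {cai, dee}
```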

Dynamic virtual environment
11575531 · 2023-02-07

Techniques for conducting a virtual event are described. One example method includes displaying, on a display screen of a computing device, a plurality of icons, each icon representing a different virtual event participant, wherein the plurality of icons includes a first icon representing a virtual event participant associated with the computing device; receiving, from an input device of the computing device, input representing a direction of movement for the first icon; and in response to receiving the input, moving the first icon on the display screen in the direction represented by the input.
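The claimed move-in-response-to-input step can be shown in a few lines. This is a hedged sketch with assumed direction names and an assumed step size, not the patent's implementation.

```python
# Moving a participant icon in the direction indicated by an input device.
DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def move_icon(icon_pos, direction, step=5):
    """Return the icon's new screen position after one movement input."""
    dx, dy = DIRECTIONS[direction]
    return (icon_pos[0] + dx * step, icon_pos[1] + dy * step)

print(move_icon((100, 100), "right"))  # -> (105, 100)
```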

DYNAMIC VIRTUAL ENVIRONMENT
20230188372 · 2023-06-15

Techniques for conducting a virtual event are described. One example method includes displaying, on a display screen of a computing device, a plurality of icons, each icon representing a different virtual event participant, wherein the plurality of icons includes a first icon representing a virtual event participant associated with the computing device; receiving, from an input device of the computing device, input representing a direction of movement for the first icon; and in response to receiving the input, moving the first icon on the display screen in the direction represented by the input.

Artificial ventriloquist-like contact center agents
11677873 · 2023-06-13

The need for efficient and effective communications is of key importance to contact centers. Agent communications with customers are designed to maximize results while minimizing resources, in particular the time required for human agents to be engaged with a particular customer. Often the impact of two agents on a communication can both improve customer satisfaction and better produce the intended result of the communication. However, two (or more) live agents is resource intensive. By providing a virtual agent controlled, entirely or in part, by a live agent, the customer may be presented with the appearance of two agents while requiring the human resources of a single agent.

INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM
20220053164 · 2022-02-17

An information processing apparatus includes a memory and a processor. The processor is configured such that, in a case where information on the surroundings is acquired and transmitted to the terminal apparatus of a user at a remote place, it performs control to disable at least part of the function of transmitting information to that terminal apparatus when the current time is not within a time window during which the user is scheduled to converse with another user according to the user's schedule.
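The schedule-gated control described above reduces to a time-window check. The sketch below uses assumed names and example windows; it is an illustration, not the apparatus's actual logic.

```python
# Transmission to the remote user's terminal is enabled only while the
# current time falls inside a scheduled conversation window.
from datetime import datetime, time

def transmission_enabled(now: datetime, schedule):
    """schedule: list of (start, end) time pairs from the user's calendar."""
    return any(start <= now.time() <= end for start, end in schedule)

windows = [(time(9, 0), time(9, 30)), (time(14, 0), time(15, 0))]
print(transmission_enabled(datetime(2022, 2, 17, 9, 15), windows))  # True
print(transmission_enabled(datetime(2022, 2, 17, 12, 0), windows))  # False
```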

EMOTES FOR NON-VERBAL COMMUNICATION IN A VIDEOCONFERENCING SYSTEM
20220311971 · 2022-09-29

A method is disclosed for videoconferencing in a three-dimensional virtual environment. In the method, a position and direction, a specification of an emote, and a video stream are received. The position and direction specify a location and orientation in the virtual environment and are input by a first user. The specification of the emote is also input by the first user. The video stream is captured from a camera on a device of the first user that is positioned to capture photographic images of the first user. The video stream is mapped onto a three-dimensional model of an avatar. From a perspective of a virtual camera of a second user, the virtual environment is rendered for display to the second user. The rendered environment includes the mapped three-dimensional model of the avatar located at the position and oriented at the direction, with the emote attached to the video-stream-mapped avatar.

DETECTING USER IDENTITY IN SHARED AUDIO SOURCE CONTEXTS

Computerized systems are provided for determining the identity of one or more users who share the same audio source, such as a microphone. The identity determination can be based on generating a list of participant candidates who are likely to participate in an associated event, such as a meeting. For instance, embodiments can generate one or more network graphs for a meeting invitee, and only voice input samples of the invitee's N closest connections are compared to an utterance to determine the identity of the user associated with the utterance. One or more indicators that identify the users sharing the audio source, as well as additional information or metadata associated with the identified user, can then be caused to be presented.
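The narrowing step above, comparing an utterance only against the invitee's N closest connections, can be sketched as below. The cosine similarity over toy voice embeddings is a stand-in of my own; names, vectors, and ranking are illustrative assumptions.

```python
# Compare an utterance embedding only against voice samples of the
# invitee's N closest network-graph connections, then pick the best match.
from math import sqrt

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    return num / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def identify_speaker(utterance_emb, voice_samples, connections, n=2):
    """connections: invitee's contacts pre-sorted by graph closeness."""
    candidates = connections[:n]  # restrict comparison to N closest
    return max(candidates, key=lambda c: cosine(utterance_emb, voice_samples[c]))

samples = {"ann": (1.0, 0.1), "bob": (0.1, 1.0), "cai": (0.9, 0.9)}
closest = ["ann", "bob", "cai"]  # ranked by closeness to the invitee
print(identify_speaker((0.2, 1.0), samples, closest))  # bob
```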