H04N7/15

AVATAR ANIMATION IN VIRTUAL CONFERENCING
20230051409 · 2023-02-16 ·

According to a general aspect, a method can include receiving a photo of a virtual conference participant, and a depth map based on the photo, and generating a plurality of synthesized images based on the photo. The plurality of synthesized images can have respective simulated gaze directions of the virtual conference participant. The method can also include receiving, during a virtual conference, an indication of a current gaze direction of the virtual conference participant. The method can further include animating, in a display of the virtual conference, an avatar corresponding with the virtual conference participant. The avatar can be based on the photo. Animating the avatar can be based on the photo, the depth map and at least one synthesized image of the plurality of synthesized images, the at least one synthesized image corresponding with the current gaze direction.
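
One way to read the claimed animation step is as a lookup over the pre-synthesized gaze images: pick the image whose simulated gaze direction is closest to the participant's current gaze, then combine it with the photo and depth map. The sketch below is a minimal illustration, not the patented method; the `GazeImage` structure, the (yaw, pitch) representation of gaze, and the angular-distance selection are all assumptions.

```python
import math

# Hypothetical: each synthesized image is tagged with the simulated
# gaze direction (yaw, pitch in degrees) it depicts.
class GazeImage:
    def __init__(self, yaw, pitch, pixels):
        self.yaw = yaw
        self.pitch = pitch
        self.pixels = pixels  # stand-in for the rendered frame

def select_synthesized_image(images, current_yaw, current_pitch):
    """Pick the synthesized image whose simulated gaze direction is
    angularly closest to the participant's current gaze direction."""
    def distance(img):
        return math.hypot(img.yaw - current_yaw, img.pitch - current_pitch)
    return min(images, key=distance)

def animate_avatar(photo, depth_map, images, current_yaw, current_pitch):
    """Combine the photo, its depth map, and the best-matching
    synthesized image into one frame description for the display."""
    chosen = select_synthesized_image(images, current_yaw, current_pitch)
    return {"photo": photo, "depth": depth_map, "gaze_image": chosen}
```

In practice the selected image would be blended or warped rather than returned wholesale, but the selection step is the part the abstract makes explicit.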

SCREEN PROJECTION CONTROL METHOD, STORAGE MEDIUM AND COMMUNICATION APPARATUS

A screen projection control method, storage medium and communication apparatus are provided. The screen projection control method includes: a screen projection receiving end receiving screen projection request information from a first screen projection sending end; determining, according to the screen projection request information, whether a third screen projection sending end that has triggered an interference-free mode of screen projection exists in at least one second screen projection sending end that is performing screen projection, wherein enabling the interference-free mode comprises stopping accepting a screen projection request; and if the third screen projection sending end that has enabled the screen projection interference-free mode exists, the screen projection receiving end sending response information for refusing the screen projection request to the first screen projection sending end.
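
The receiving end's decision reduces to a simple guard: if any currently projecting sender has enabled the interference-free mode, refuse the new request. The sketch below is an assumption-laden paraphrase of that logic; the class and field names are invented for illustration.

```python
class Sender:
    def __init__(self, name, interference_free=False):
        self.name = name
        self.interference_free = interference_free

class ScreenProjectionReceiver:
    def __init__(self):
        self.active_senders = []  # senders currently projecting

    def handle_request(self, requester):
        """Refuse the request if any active sender has enabled the
        interference-free mode; otherwise accept the new sender."""
        if any(s.interference_free for s in self.active_senders):
            return {"to": requester.name, "response": "refused"}
        self.active_senders.append(requester)
        return {"to": requester.name, "response": "accepted"}
```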

VIDEO COMMUNICATIONS APPARATUS AND METHOD
20230048798 · 2023-02-16 ·

Provided are apparatuses and associated methods for video communications and related features. In one embodiment, a big-screen video communications apparatus is provided that includes a projector and speaker for projecting received images and sounds and includes a camera and microphone for capturing images and sounds for transmission.

SYSTEMS AND METHODS FOR MULTI-AGENT CONVERSATIONS
20230053267 · 2023-02-16 ·

A first input is received from a user input device. Based on the first input, a list of candidate intents is generated, and a plurality of agents is initialized. Each agent of the plurality of agents corresponds to a respective candidate intent. Each agent then provides a different response to the first input in accordance with its respective corresponding intent. A second input is then received that responds to one or more of the agents. Based on the agents to which the second input is responsive, the list of candidate intents is refined and, based on the refined list, one or more agents are deactivated.
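
The disambiguation loop the abstract describes can be sketched as: one agent per candidate intent, each answering in character, with agents pruned once the user's follow-up shows which intents were on target. This is a minimal sketch under assumed names, not the claimed implementation.

```python
class Agent:
    def __init__(self, intent):
        self.intent = intent
        self.active = True

    def respond(self, text):
        # Each agent answers in line with its own candidate intent.
        return f"[{self.intent}] interpreting: {text}"

def refine(agents, responsive_intents):
    """Deactivate agents whose intents the second input did not
    respond to; return the refined candidate-intent list."""
    for agent in agents:
        if agent.intent not in responsive_intents:
            agent.active = False
    return [a.intent for a in agents if a.active]
```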

Automated home system for senior care

An improved home automation system is provided to facilitate senior care, as well as care for individuals suffering from Alzheimer's disease or other dementias. A home control unit is provided that is connected to, and interfaces with, a combination of health equipment, smart home appliances, a smart medicine cabinet, a smart pantry, wearable sensors, motion detectors, video cameras, microphones, video monitors, speakers, a smart thermostat, lighting, floor sensors, bed sensors, smoke detectors, glass breakage detectors, door sensors, and other perimeter sensors. A distributed computational architecture is provided having a CPU associated with each video camera and an associated proximate microphone and speaker, wherein speech detection and processing, and video processing, are performed by each such CPU in conjunction with its associated video camera, microphone, and speaker. Remote backup for such distributed speech processing is selectively provided by a remote server based upon confidence scores generated by each such CPU. The distributed computational architecture is also utilized for video processing to facilitate peer-to-peer video conferencing communication using industry-standard formats and to reduce the latency and response times that would otherwise be encountered using remote servers.
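
The confidence-gated fallback from the local camera CPU to the remote server can be sketched as below. The recognizers and the threshold value are stand-ins invented for illustration; the abstract only specifies that the fallback is driven by confidence scores.

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, not stated in the abstract

def local_recognize(audio):
    # Stand-in for on-camera-CPU recognition: "confident" only on
    # short clips, to make the fallback path demonstrable.
    return ("hello", 0.9) if len(audio) < 10 else ("unclear", 0.4)

def remote_recognize(audio):
    # Stand-in for the remote backup server's recognizer.
    return ("hello from server", 0.95)

def recognize(audio, local=local_recognize, remote=remote_recognize):
    """Try the local CPU first; fall back to the remote server only
    when the local confidence score is too low."""
    transcript, confidence = local(audio)
    if confidence >= CONFIDENCE_THRESHOLD:
        return transcript, "local"
    transcript, _ = remote(audio)
    return transcript, "remote"
```

Keeping recognition local when confidence is high is what gives the architecture its latency advantage over always-remote processing.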

System and method for an interactive digitally rendered avatar of a subject person
11582424 · 2023-02-14 ·

A system and method for an interactive digitally rendered avatar of a subject person to participate in a web meeting is described. In one embodiment, the method includes receiving an invite to a web meeting on a video conferencing platform, wherein the invite identifies a subject person and the video conferencing platform. The method also includes generating an interactive avatar of the subject person based on a data collection associated with the subject person stored in a database. The method further includes instantiating a platform integrator associated with the video conferencing platform identified in the invite and joining, by the interactive avatar of the subject person, the web meeting on the video conferencing platform. The platform integrator transforms outputs and inputs between the video conferencing platform and an interactive digitally rendered avatar system so that the interactive avatar of the subject person participates in the web meeting.
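
The "platform integrator" reads as an adapter sitting between the conferencing platform's message format and the avatar system's inputs and outputs. The sketch below shows that adapter shape only; every name and payload field here is an assumption.

```python
class AvatarSystem:
    """Stand-in for the interactive digitally rendered avatar system."""
    def reply(self, text):
        return f"avatar says: {text}"

class PlatformIntegrator:
    """Adapter that transforms platform inputs into avatar-system
    inputs, and avatar outputs back into platform messages."""
    def __init__(self, platform_name, avatar):
        self.platform_name = platform_name
        self.avatar = avatar

    def on_platform_message(self, message):
        # Transform the platform payload for the avatar system,
        # then wrap the avatar's output for the platform.
        text = message["text"]
        return {"platform": self.platform_name,
                "text": self.avatar.reply(text)}
```

One integrator per supported platform lets the avatar system stay platform-agnostic, which matches the abstract's instantiate-per-invite phrasing.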

Altering undesirable communication data for communication sessions

This disclosure describes techniques implemented partly by a communications service for identifying and altering undesirable portions of communication data, such as audio data and video data, from a communication session between computing devices. For example, the communications service may monitor the communications session to alter or remove undesirable audio data, such as a dog barking, a doorbell ringing, etc., and/or video data, such as rude gestures, inappropriate facial expressions, etc. The communications service may stream the communication data for the communication session partly through managed servers and analyze the communication data to detect undesirable portions. The communications service may alter or remove the portions of communication data received from a first user device, such as by filtering, refraining from transmitting, or modifying the undesirable portions. The communications service may send the modified communication data to a second user device engaged in the communication session after removing the undesirable portions.
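
Viewed abstractly, the service classifies each portion of the stream and refrains from transmitting the flagged ones. The sketch below assumes segments arrive pre-labeled as (label, payload) pairs and uses an invented label set; the disclosure's own detection and alteration steps are not reproduced here.

```python
UNDESIRABLE = {"dog_bark", "doorbell", "rude_gesture"}  # assumed labels

def filter_segments(segments, undesirable=UNDESIRABLE):
    """Drop segments flagged as undesirable before the stream is
    forwarded to the other participant. Each segment is a
    (label, payload) tuple; flagged segments are removed entirely."""
    cleaned = []
    for label, payload in segments:
        if label in undesirable:
            continue  # refrain from transmitting this portion
        cleaned.append((label, payload))
    return cleaned
```

A real service could also mute or blur a flagged portion instead of dropping it, matching the abstract's "filtering, refraining from transmitting, or modifying" alternatives.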

Conference device with multi-videostream capability
11582422 · 2023-02-14 ·

A conference device comprising a first image sensor for provision of first image data, a second image sensor for provision of second image data, a first image processor configured for provision of a first primary videostream and a first secondary videostream based on the first image data, a second image processor configured for provision of a second primary videostream and a second secondary videostream based on the second image data, and an intermediate image processor in communication with the first image processor and the second image processor and configured for provision of a field-of-view videostream and a region-of-interest videostream, wherein the field-of-view videostream is based on the first primary videostream and the second primary videostream, and wherein the region-of-interest videostream is based on one or more of the first secondary videostream and the second secondary videostream.
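
The stream topology in the claim can be sketched as two per-sensor processors each emitting a primary and a secondary stream, with an intermediate processor stitching the primaries into a field-of-view stream and selecting a secondary as the region-of-interest stream. The strings below stand in for real video frames, and the selection policy is an assumption.

```python
class ImageProcessor:
    """Stand-in for one sensor's image processor: from its raw image
    data it derives a primary and a secondary videostream."""
    def __init__(self, image_data):
        self.primary = f"primary({image_data})"
        self.secondary = f"secondary({image_data})"

def intermediate_processor(proc1, proc2, use_first_roi=True):
    """Combine the two primary streams into a field-of-view stream and
    pick one secondary stream as the region-of-interest stream."""
    fov = f"stitch({proc1.primary}, {proc2.primary})"
    roi = proc1.secondary if use_first_roi else proc2.secondary
    return fov, roi
```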