H04M2203/359

Communication session using a virtual environment

Implementations for providing communication services using a virtual environment are described. An audio communication session may be established between a first user device and a second user device. The second user device may answer the audio communication session using a virtual environment. The virtual environment may be updated to display virtual features associated with the communication session.

Holographic Calling for Artificial Reality

A holographic calling system can capture and encode holographic data at a sender-side of a holographic calling pipeline and decode and present the holographic data as a 3D representation of a sender at a receiver-side of the holographic calling pipeline. The holographic calling pipeline can include stages to capture audio, color images, and depth images; densify the depth images to have a depth value for each pixel while generating parts masks and a body model; use the masks to segment the images into parts needed for hologram generation; convert depth images into a 3D mesh; paint the 3D mesh with color data; perform torso disocclusion; perform face reconstruction; and perform audio synchronization. In various implementations, different ones of these stages can be performed sender-side or receiver-side. The holographic calling pipeline also includes sender-side compression, transmission over a communication channel, and receiver-side decompression and hologram output.
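
The stage ordering described above can be sketched as a simple two-sided pipeline. This is a minimal illustration only: the `Frame` structure, stage names, and the stage-function interface are assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    # Placeholder payloads for the captured modalities (illustrative only).
    audio: bytes = b""
    color: list = field(default_factory=list)
    depth: list = field(default_factory=list)
    log: list = field(default_factory=list)  # records stage execution order

def run_pipeline(frame, sender_stages, receiver_stages):
    # Sender side: capture through compression.
    for stage in sender_stages:
        stage(frame)
    # (transmission over a communication channel would happen here)
    # Receiver side: decompression through hologram output.
    for stage in receiver_stages:
        stage(frame)
    return frame.log

def make_stage(name):
    def stage(frame):
        frame.log.append(name)  # a real stage would transform frame data
    return stage

sender = [make_stage(n) for n in
          ["capture", "densify_depth", "segment_parts", "mesh", "paint",
           "torso_disocclusion", "face_reconstruction", "audio_sync",
           "compress"]]
receiver = [make_stage(n) for n in ["decompress", "render_hologram"]]

order = run_pipeline(Frame(), sender, receiver)
```

As the abstract notes, implementations may move individual stages between the sender and receiver lists without changing the overall flow.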

Systems And Methods For Providing Real-Time Composite Video From Multiple Source Devices Featuring Augmented Reality Elements
20220398816 · 2022-12-15

Systems and methods for superimposing the human elements of video generated by computing devices, wherein a first user device and a second user device capture and transmit video to a central server, which analyzes the video to identify and extract human elements, superimposes these human elements upon one another, adds at least one augmented reality element, and then transmits the newly created superimposed video back to at least one of the user devices.
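
The server-side compositing step can be sketched as layered masking: each source frame contributes only its human pixels, and the AR element is stamped on top. The pixel representation, mask format, and layering priority here are illustrative assumptions.

```python
def composite(frame_a, mask_a, frame_b, mask_b, ar_overlay):
    """Merge two frames' human elements and an AR overlay into one frame.

    frame_*: 2D grids of pixel values; mask_*: 2D grids of 0/1 flags
    marking human pixels; ar_overlay: 2D grid of pixel values or None.
    """
    h, w = len(frame_a), len(frame_a[0])
    out = [[0] * w for _ in range(h)]  # 0 = background
    for y in range(h):
        for x in range(w):
            if ar_overlay[y][x] is not None:   # AR element drawn on top
                out[y][x] = ar_overlay[y][x]
            elif mask_b[y][x]:                 # human pixel from device B
                out[y][x] = frame_b[y][x]
            elif mask_a[y][x]:                 # human pixel from device A
                out[y][x] = frame_a[y][x]
    return out

# Tiny 2x2 example: device A's human at top-left, device B's at top-right,
# one AR pixel at bottom-right.
out = composite([[1, 1], [1, 1]], [[1, 0], [0, 0]],
                [[2, 2], [2, 2]], [[0, 1], [0, 0]],
                [[None, None], [None, 9]])
# → [[1, 2], [0, 9]]
```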

INTEGRATED DIGITAL NETWORK MANAGEMENT PLATFORM

A digital network assistant that can detect network anomalies, identify actions likely to remediate them, and assist the user in carrying out those actions. In particular, the digital network assistant continuously monitors data streams associated with the network to determine key performance indicators for the network. When these key performance indicators indicate a network anomaly, the digital network assistant associates the anomaly, via a digital string, with one or more actions likely to remediate similar network issues. The digital network assistant can take these actions automatically or present them to a user to be taken. The system can also aid the user in taking the required actions via an augmented reality interface. In addition, the system can create narratives that embed findings from data analysis, eliminating subjectivity. The system can also find optimal parameter sets by continuously analyzing anomaly-free parts of the network and their key performance indicators.
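
The monitor-then-recommend loop can be sketched as a statistical test on KPI samples followed by a lookup of candidate remediations. The z-score test, thresholds, KPI names, and action table are invented for illustration; the abstract does not specify how anomalies are scored.

```python
import statistics

# Hypothetical mapping from anomaly type to remediation actions.
REMEDIATIONS = {
    "latency": ["reroute traffic", "restart edge router"],
    "packet_loss": ["check link", "swap interface"],
}

def detect_anomaly(samples, latest, threshold=3.0):
    """Flag `latest` as anomalous if it lies far outside recent samples."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

def recommend(kpi_name, samples, latest):
    """Return remediation actions if the latest KPI reading is anomalous."""
    if detect_anomaly(samples, latest):
        return REMEDIATIONS.get(kpi_name, [])
    return []
```

In this sketch the actions would be either executed automatically or surfaced to the user, matching the two modes the abstract describes.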

JOINING EXECUTABLE COMPONENT TO ONLINE CONFERENCE

Online conferencing involving video and audio in which automatic actions, such as recording and broadcasting, are performed by adding a visualized representation of the action into the online conference area of a user interface. The action appears as a visualization in the contacts portion of the user interface, alongside contacts that may represent individuals who can be joined into a conference. Recording or broadcasting may thus be performed efficiently and in a manner consistent with how individuals are added to an online conference, thereby taking advantage of the participant's muscle memory.
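
The core idea, treating an executable action as a joinable contact, can be sketched as a single `join` code path shared by people and components. Class names and the `on_join` hook are assumptions for illustration.

```python
class Conference:
    def __init__(self):
        self.participants = []

    def join(self, contact):
        # Same code path whether `contact` is a person or an action.
        self.participants.append(contact)
        hook = getattr(contact, "on_join", None)
        if callable(hook):
            hook()  # executable components start working when joined

class RecorderComponent:
    """An action presented as a contact; joining it starts recording."""
    name = "Recorder"

    def __init__(self):
        self.recording = False

    def on_join(self):
        self.recording = True

conf = Conference()
conf.join(type("Contact", (), {"name": "Alice"})())  # a human contact
rec = RecorderComponent()
conf.join(rec)  # joined exactly like a person; recording begins
```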

AUGMENTED IMAGING ASSISTANCE FOR VISUAL IMPAIRMENT

Systems, apparatuses, services, platforms, and methods are discussed herein that provide assistance for user interface devices. In one example, an assistance application is provided comprising an imaging system configured to capture an image of a scene, an interface system configured to provide data associated with the image to a distributed assistance service that responsively processes the data to recognize properties of the scene and establish feedback for a user based at least on the properties of the scene, and a user interface configured to provide the feedback to the user.

Providing city services using mobile devices and a sensor network

Apparatus and methods related to providing city services, such as parking, are described. A mobile device can be configured to receive information from local sensor nodes, such as parking sensor nodes, in the vicinity of the mobile device. In a parking application, a mobile device located in a moving vehicle can be configured to locate available parking based upon the information received from the parking sensor nodes. In other embodiments, the mobile device can be utilized in a retail establishment in conjunction with a remote server to display eye-level image data taken at various locations throughout the retail establishment. The eye-level image data can include products displayed throughout the retail establishment and can be augmented with one or more indicators that indicate product placement locations associated with the products.
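
The parking use case can be sketched as the mobile device ranking nearby sensor nodes that report an open space by distance to the vehicle. The report fields and 2D coordinates are illustrative assumptions.

```python
import math

def nearest_open_spot(vehicle_pos, sensor_reports):
    """Pick the closest sensor node reporting an available space, or None."""
    open_spots = [r for r in sensor_reports if r["available"]]
    if not open_spots:
        return None
    return min(open_spots, key=lambda r: math.dist(vehicle_pos, r["pos"]))

# Example reports received from local parking sensor nodes:
reports = [
    {"id": "n1", "pos": (0.0, 3.0), "available": False},
    {"id": "n2", "pos": (4.0, 0.0), "available": True},
    {"id": "n3", "pos": (1.0, 1.0), "available": True},
]
best = nearest_open_spot((0.0, 0.0), reports)  # → node "n3"
```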

Virtual environment generating system

A system and related methods for visually augmenting an appearance of a physical environment as seen by a user through a head-mounted display device are provided. In one embodiment, a virtual environment generating program receives eye-tracking information, lighting information, and depth information from the head-mounted display. The program generates a virtual environment that models the physical environment and is based on the lighting information and the distance of a real-world object from the head-mounted display. The program visually augments a virtual object representation in the virtual environment based on the eye-tracking information, and renders the virtual object representation on a transparent display of the head-mounted display device.
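
The described flow, sensor inputs driving environment generation and per-object augmentation, can be sketched as two steps. All field names and the gaze-highlight brightness rule are illustrative assumptions, not the patented method.

```python
def generate_virtual_environment(lighting_lux, object_distance_m):
    """Model the physical environment from lighting and depth information."""
    return {"ambient": lighting_lux, "distance": object_distance_m}

def augment_object(env, gaze_target, obj_id):
    """Visually augment the object the eye-tracking data says is gazed at."""
    highlighted = (gaze_target == obj_id)
    # Assumed rule: brighten a gazed-at object relative to ambient light.
    brightness = env["ambient"] * (1.5 if highlighted else 1.0)
    return {"id": obj_id, "highlight": highlighted, "brightness": brightness}

env = generate_virtual_environment(lighting_lux=300.0, object_distance_m=2.0)
rendered = augment_object(env, gaze_target="cup", obj_id="cup")
```

In the system described, the result would then be rendered on the head-mounted display's transparent display.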

Avatar Spatial Modes

Avatars may be displayed in a multiuser communication session using various spatial modes. One technique for presenting avatars retargets the attention direction of an avatar to match the intent of the remote user corresponding to the avatar. Another technique includes a pinned mode in which one or more avatars remain displayed in a consistent spatial relationship to a local user regardless of the local user's movements. Another technique provides user-selectable presentation modes, between a room-scale mode and a stationary mode, for presenting a representation of a multiuser communication session.
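
The pinned mode can be sketched as recomputing the avatar's pose each frame from the local user's pose plus a fixed offset, so the spatial relationship stays constant as the user moves. The 2D positions, yaw-only rotation, and fixed-offset convention are simplifying assumptions.

```python
import math

def pinned_avatar_pose(user_pos, user_yaw, offset):
    """Place an avatar at a fixed offset in the user's local frame.

    user_pos: (x, y) position of the local user; user_yaw: heading in
    radians; offset: (forward, left) offset where the avatar is pinned.
    """
    # Rotate the fixed offset into the user's current heading,
    # then translate by the user's position.
    c, s = math.cos(user_yaw), math.sin(user_yaw)
    ox, oy = offset
    return (user_pos[0] + c * ox - s * oy,
            user_pos[1] + s * ox + c * oy)

# Avatar pinned one meter in front of the user:
pose_a = pinned_avatar_pose((0.0, 0.0), 0.0, (1.0, 0.0))
pose_b = pinned_avatar_pose((5.0, 5.0), math.pi / 2, (1.0, 0.0))
```

As the user translates and turns (`pose_b`), the avatar's world position changes, but it remains one meter ahead of the user, which is the consistent spatial relationship the pinned mode preserves.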