Patent classifications
H04N7/15
Systems and methods for immersive scenes
One example system for displaying immersive scenes includes a processor and at least one memory device. The memory device includes instructions that are executable by the processor to cause the processor to receive a collection of metadata associated with an immersive scene, identify each of a plurality of properties of the immersive scene based on the collection of metadata, receive a dynamic immersive background, receive a plurality of video streams associated with a video conference, and display each of the plurality of video streams in the immersive scene based at least in part on the plurality of properties of the immersive scene and on the dynamic immersive background.
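The claimed flow can be sketched in a few lines; the metadata schema (a `seats` list of pixel coordinates) and every function name below are hypothetical illustrations, not the patented implementation.

```python
def identify_properties(metadata):
    # Derive each property of the immersive scene from the metadata
    # collection; here the only assumed property is a list of seat
    # positions within the background image.
    return {"seats": metadata.get("seats", [])}

def compose_scene(metadata, background, streams):
    # Place each incoming video stream at a seat in the scene, on top
    # of the dynamic immersive background.
    props = identify_properties(metadata)
    placements = []
    for stream, seat in zip(streams, props["seats"]):
        placements.append({"stream": stream, "seat": seat, "background": background})
    return placements

scene = compose_scene(
    {"seats": [(120, 340), (480, 340)]},
    "boardroom.png",
    ["alice-cam", "bob-cam"],
)
```

Each placement pairs one conference stream with one scene property, so a renderer could composite the streams without knowing the metadata format itself.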
Information processing system, information processing apparatus including circuitry to store position information of users present in a space and control environment effect production, information processing method, and room
An information processing system includes: an image display apparatus provided in a space and configured to display an image; a sensor apparatus carried by a user who is present in the space and configured to output a signal for detecting position information of the user in the space; and an information processing apparatus. The information processing apparatus includes circuitry configured to store a plurality of pieces of position information of a plurality of users including the user, who are present in the space, in association with the plurality of users, the plurality of users being detected based on signals output from a plurality of sensor apparatuses including the sensor apparatus, and control environment effect production that supports communication between the plurality of users by the image displayed by the image display apparatus, based on each of the plurality of pieces of position information of the plurality of users.
Continuous video generation from voice data
One example method includes capturing audio data at a client engine while outputting an output video, the output video being based upon an original video stored at the client engine, delivering the captured audio data to a prediction engine once the audio data has been captured for a pre-determined time, receiving from the prediction engine substitute frame data used by the client engine to stitch one or more frames into the original video stored at the client engine, and, following stitching the one or more frames into the output video to generate an altered output video, outputting the captured audio data and the altered output video from the client engine.
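The capture-predict-stitch loop described above can be sketched as follows; the window size, the predictor interface (returning `(frame_index, frame)` pairs), and all names are assumptions for illustration only.

```python
def run_client(original_frames, mic_samples, predictor, window=5):
    # Output video starts as a copy of the original video stored at
    # the client engine.
    output = list(original_frames)
    captured, buffer = [], []
    for sample in mic_samples:
        buffer.append(sample)
        if len(buffer) == window:            # pre-determined capture time
            # The (hypothetical) prediction engine returns substitute
            # frame data; stitch each frame into the output video.
            for idx, frame in predictor(buffer):
                output[idx] = frame
            captured.extend(buffer)
            buffer = []
    # Emit the captured audio alongside the altered output video.
    return captured, output

captured, altered = run_client(
    ["f0", "f1", "f2"],
    [0.1, 0.2, 0.3, 0.4, 0.5],
    lambda audio: [(1, "predicted-frame")],
)
```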
VIRTUAL SOUND LOCALIZATION FOR VIDEO TELECONFERENCING
This disclosure provides methods, devices, and systems for videoconferencing. The present implementations more specifically relate to audio signal processing techniques that can be used to identify speakers in a videoconference. In some aspects, an audio signal processor may map each speaker in a videoconference to a respective spatial direction and transform the audio signals received from each speaker using one or more transfer functions associated with the spatial direction to which the speaker is mapped. The audio signal processor may further transmit the transformed audio signals to an audio output device that emits sound waves having a directionality associated with the transformation. For example, the audio signal processor may apply one or more head-related transfer functions to the audio signals received from a particular speaker so that the sound waves emitted by the audio output device are perceived as originating from the spatial direction to which the speaker is mapped.
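A toy version of this mapping can be written with a crude interaural level difference plus interaural time delay in place of measured head-related transfer functions; the azimuth convention and all constants below are assumptions, not the disclosed implementation.

```python
import math

def spatialize(mono, azimuth_deg, sample_rate=48000):
    # Stand-in for an HRTF: a level difference and a few-sample time
    # delay derived from the azimuth the speaker is mapped to.
    # Positive azimuth places the source to the listener's right.
    az = math.radians(azimuth_deg)
    right_gain = 0.5 * (1.0 + math.sin(az))
    left_gain = 1.0 - right_gain
    itd = int(abs(math.sin(az)) * 0.0007 * sample_rate)  # ~0.7 ms max delay
    delayed = [0.0] * itd + list(mono)
    if azimuth_deg >= 0:   # delay reaches the far (left) ear
        left = [left_gain * s for s in delayed]
        right = [right_gain * s for s in mono] + [0.0] * itd
    else:
        left = [left_gain * s for s in mono] + [0.0] * itd
        right = [right_gain * s for s in delayed]
    return left, right

left, right = spatialize([1.0] * 480, azimuth_deg=90)
```

Real systems would convolve each speaker's signal with measured left- and right-ear impulse responses instead of this gain-and-delay approximation.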
INTELLIGENT ORCHESTRATION OF VIDEO PARTICIPANTS IN A PLATFORM FRAMEWORK
Embodiments of systems and methods for intelligent orchestration of video participants in a platform framework are described. In some embodiments, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: process at least a first portion of a video workload using the processor; and in response to a change in context of the IHS, process at least a second portion of the video workload using an offload core.
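The described orchestration amounts to context-sensitive dispatch; the sketch below assumes a hypothetical context signal (the IHS dropping to battery power partway through the workload) and is not the claimed embodiment.

```python
def process_video_workload(portions, on_battery, cpu_core, offload_core):
    # Process each portion of the video workload on the processor,
    # switching to the offload core once the IHS context changes.
    results = []
    for i, portion in enumerate(portions):
        engine = offload_core if on_battery(i) else cpu_core
        results.append(engine(portion))
    return results

results = process_video_workload(
    ["encode-a", "encode-b", "encode-c"],
    on_battery=lambda i: i >= 1,          # context changes after portion 0
    cpu_core=lambda p: ("cpu", p),
    offload_core=lambda p: ("offload", p),
)
```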
Methods and systems for facilitating a collaborative work environment
The present disclosure describes techniques for facilitating a collaborative work environment. The techniques comprise creating at least one virtual room accessible by the plurality of users, wherein at least one subset of users are associated with the at least one virtual room, the at least one virtual room enables real-time communications among the at least one subset of users, and the at least one subset of users communicate with each other in the at least one virtual room through a first communication channel; receiving a request from a first user to communicate with at least a second user separately from the first communication channel; and establishing a first sub-communication channel between the first user and the at least a second user while the first communication channel remains accessible to the first user and the at least a second user.
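The room/sub-channel relationship can be modeled minimally as below; the class and method names are hypothetical, and real channels would carry media rather than membership sets.

```python
class VirtualRoom:
    """Sketch of the described virtual room with private sub-channels."""

    def __init__(self, members):
        self.members = set(members)      # users on the first communication channel
        self.sub_channels = []           # additional private side channels

    def open_sub_channel(self, requester, *others):
        # The first channel remains accessible to everyone; the
        # sub-channel is an extra channel among a subset of members.
        participants = {requester, *others}
        if not participants <= self.members:
            raise ValueError("sub-channel members must belong to the room")
        self.sub_channels.append(participants)
        return participants

room = VirtualRoom(["alice", "bob", "carol"])
sub = room.open_sub_channel("alice", "bob")
```

Note that opening the sub-channel does not remove anyone from `room.members`, mirroring the claim that the first channel stays accessible to both users.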
Conference device, method of controlling conference device, and computer storage medium
The present disclosure provides a conference device, a method of controlling the conference device, and a computer storage medium. The conference device includes a display, an image sensor, a holographic projector, and a controller configured to identify, by using image data from the image sensor, a modification action performed at a target location for a holographic image projected by the holographic projector, modify holographic projection data based on the modification action, and convert modified holographic projection data into modified two-dimensional imaging data.
Communication methods and systems, electronic devices, servers, and readable storage media
The present disclosure provides a communication method, and an electronic device. The method includes: obtaining, by an electronic device, a plurality of 2D images and/or a plurality of depth maps for a current scene, the plurality of 2D images and/or the plurality of depth maps being aligned in time; and transmitting, by the electronic device, the plurality of 2D images and/or the plurality of depth maps to the server by means of wireless communication.
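Time alignment of the 2D images and depth maps could be done by nearest-timestamp matching; the `(timestamp, frame)` representation and the one-frame tolerance are assumptions for illustration.

```python
def align_by_timestamp(images, depths, tolerance=0.016):
    # Pair each 2D image with the depth map closest in time, within a
    # tolerance of roughly one frame at 60 fps. Unmatched images are
    # dropped rather than paired with stale depth data.
    pairs = []
    for t_img, img in images:
        t_depth, depth = min(depths, key=lambda d: abs(d[0] - t_img))
        if abs(t_depth - t_img) <= tolerance:
            pairs.append((img, depth))
    return pairs

pairs = align_by_timestamp(
    [(0.000, "img0"), (0.033, "img1")],
    [(0.001, "d0"), (0.034, "d1")],
)
```

The aligned pairs would then be transmitted to the server over the wireless link as a unit, so the server can reconstruct the scene consistently.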