Patent classifications
H04N21/816
Efficient Delivery of Multi-Camera Interactive Content
Techniques are disclosed relating to encoding recorded content for distribution to other computing devices. In various embodiments, a first computing device records content of a physical environment in which the first computing device is located, the content being deliverable to a second computing device configured to present a corresponding environment based on the recorded content and content recorded by one or more additional computing devices. The first computing device determines a pose of the first computing device within the physical environment and encodes the pose in a manifest usable to stream the content recorded by the first computing device to the second computing device. The encoded pose is usable by the second computing device to determine whether to stream the content recorded by the first computing device.
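The selection step in this abstract — a receiving device using poses encoded in a manifest to decide which recorded streams to pull — can be sketched as follows. The manifest fields, URLs, and distance-based selection rule here are illustrative assumptions, not details from the disclosure:

```python
import math

# Hypothetical manifest: each recording device encodes its pose
# (2D position in metres, yaw in degrees) alongside its stream URL.
manifest = {
    "cam_a": {"url": "https://example.com/cam_a.m3u8", "pos": (0.0, 0.0), "yaw": 0},
    "cam_b": {"url": "https://example.com/cam_b.m3u8", "pos": (12.0, 1.0), "yaw": 90},
    "cam_c": {"url": "https://example.com/cam_c.m3u8", "pos": (2.0, -1.0), "yaw": 180},
}

def select_streams(viewer_pos, max_distance):
    """Return stream URLs whose encoded recording pose is near the viewer."""
    selected = []
    for name, entry in manifest.items():
        dx = entry["pos"][0] - viewer_pos[0]
        dy = entry["pos"][1] - viewer_pos[1]
        if math.hypot(dx, dy) <= max_distance:
            selected.append(entry["url"])
    return selected

# A viewer near the origin streams only the two nearby cameras.
print(select_streams((1.0, 0.0), max_distance=5.0))
```

In practice the selection criterion could also weigh orientation (the encoded yaw) so that only cameras facing the region of interest are streamed.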
HYPER-CONNECTED AND SYNCHRONIZED AR GLASSES
Systems and methods are described for selectively sharing audio and video streams amongst electronic eyewear devices. Each electronic eyewear device includes a camera arranged to capture a video stream in an environment of the wearer, a microphone arranged to capture an audio stream in the environment of the wearer, and a display. A processor of each electronic eyewear device executes instructions to establish an always-on session with other electronic eyewear devices and selectively shares an audio stream, a video stream, or both with other electronic eyewear devices in the session. Each electronic eyewear device also generates annotations, and receives annotations from other users in the session, for display with the selectively shared video stream on the display of the electronic eyewear device that provided that stream. An annotation may include manipulation of an object in the shared video stream or overlay images registered with the shared video stream.
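The annotation routing described above — annotations created by any session member are displayed back on the device that provided the shared stream — can be modelled with a small session object. The class, method names, and in-memory channels are hypothetical stand-ins for the always-on session:

```python
class EyewearSession:
    """Minimal sketch of an always-on sharing session (names are assumed)."""

    def __init__(self):
        self.shared = {}       # device_id -> currently shared stream, or None
        self.annotations = {}  # device_id -> annotations to show on that device

    def join(self, device_id):
        self.shared[device_id] = None
        self.annotations[device_id] = []

    def share_stream(self, device_id, stream):
        # Device selectively shares its video stream with the session.
        self.shared[device_id] = stream

    def annotate(self, from_id, target_id, note):
        # Annotations are routed back to the device that provided the stream.
        if self.shared.get(target_id) is not None:
            self.annotations[target_id].append((from_id, note))

session = EyewearSession()
session.join("alice")
session.join("bob")
session.share_stream("alice", "alice_video")
session.annotate("bob", "alice", "circle the red valve")
print(session.annotations["alice"])
```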
EXTENDED DIGITAL INTERFACE (XDI) SYSTEMS, DEVICES, CONNECTORS, AND METHODS
Extended Digital Interface (XDI) provides systems, devices, connectors, signals, and methods to send 3D vector- and motion-based audio/video serial digital signals through local systems or the internet with significantly reduced bandwidth requirements and lower device costs, over longer cable runs. The XDI system offers greater flexibility in connection topologies and scalability. It is much simpler to install, employing single coax cables and connectors, the internet, or Wi-Fi, all of which are easy to work with, and it introduces no signal quality losses or delays compared to current 2D frame- and pixel-based digital systems that use multiple conductors, such as HDMI, DVI, DP, or SDI, when carrying already compressed audio/video content. The XDI system also provides solutions for integrating uncompressed audio/video content and internet content into this system. These systems, devices, connectors, and methods are collectively called “XDI” (Extended Digital Interface).
Enhanced immersive digital media
This disclosure describes systems, methods, and devices related to immersive digital media. A method may include receiving, at a first device, first volumetric data and second volumetric data, the second volumetric data including a first volumetric time slice of a first volumetric media stream. The method may include determining that the first volumetric time slice includes a first portion and a second portion, the first portion representing a first object and including an amount of the second volumetric data. The method may include determining that the first volumetric data represents the first object. The method may include generating a second volumetric time slice including the first volumetric data and the second portion of the first volumetric time slice, and generating a second volumetric media stream including the second volumetric time slice. The method may include sending the second volumetric media stream for presentation at a third device.
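The core operation here — rebuilding a volumetric time slice by swapping the portion that represents a recognized object for volumetric data already held for that object, while keeping the rest of the slice — can be sketched as a simple substitution. The slice layout and field names below are assumptions for illustration:

```python
# Hypothetical time slice: named portions, each tagged with the object it
# represents and its volumetric payload.
incoming_slice = {
    "portion_1": {"object": "chair", "voxels": "streamed_chair_data"},
    "portion_2": {"object": "room",  "voxels": "room_geometry"},
}

# First volumetric data: a representation of the same object received earlier.
received_first = {"chair": "first_volumetric_chair_data"}

def rebuild_slice(time_slice, first_data):
    """Build a second time slice, substituting first volumetric data for
    portions that represent an object it covers."""
    out = {}
    for name, portion in time_slice.items():
        obj = portion["object"]
        if obj in first_data:
            out[name] = {"object": obj, "voxels": first_data[obj]}
        else:
            out[name] = portion  # second portion passes through unchanged
    return out

second_slice = rebuild_slice(incoming_slice, received_first)
print(second_slice["portion_1"]["voxels"])
```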
Apparatus and method for providing content with multiplane image transcoding
Aspects of the subject disclosure may include, for example, transmitting viewpoint information associated with a first portion of a three-dimensional (3D)/volumetric video to a device, wherein the viewpoint information comprises a first coordinate in 3D space associated with a first viewing direction in a playback of the first portion and a first timestamp associated with the first portion, receiving, from the device, a multiplane image (MPI) representation of a second portion of the 3D video responsive to the transmitting of the viewpoint information, and providing an image of the MPI representation to a display device. Other embodiments are disclosed.
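The viewpoint information transmitted by the client in this abstract bundles a 3D coordinate, a viewing direction, and a playback timestamp. A minimal sketch of such a message, with entirely assumed field names and JSON as an illustrative encoding:

```python
import json

def viewpoint_message(position, direction, timestamp):
    """Encode the viewpoint information the client transmits.

    Field names are assumptions; the disclosure only specifies that a 3D
    coordinate, a viewing direction, and a timestamp are conveyed.
    """
    return json.dumps({
        "position": position,    # coordinate in 3D space
        "direction": direction,  # viewing direction during playback
        "timestamp": timestamp,  # playback time of the current portion
    })

# The server would use this to select an MPI representation of the next
# portion of the 3D video to return.
msg = viewpoint_message([1.0, 1.5, -2.0], [0.0, 0.0, 1.0], 12.4)
print(msg)
```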
ATSC over-the-air (OTA) broadcast of public volumetric augmented reality (AR)
Techniques are described for using the Advanced Television Systems Committee (ATSC) 3.0 television protocol to deliver volumetric information for presentation on various displays using ATSC over-the-air communications channels.
QUALIFICATION TEST IN SUBJECT SCORING
Aspects of the disclosure provide methods and apparatuses for subjective evaluation. In some examples, processing circuitry receives scores graded by a subject for a media presentation. The scores by the subject include a plurality of self-comparison scores that are graded on self-comparison tests in the media presentation. The processing circuitry applies a first rule and a second rule to the plurality of self-comparison scores. The first rule requires a first subset of the plurality of self-comparison scores to fall in a first range. The second rule requires a second subset of the plurality of self-comparison scores to fall in a second range, to limit at least an outlier to the first rule according to the second range. The processing circuitry determines that the scores by the subject are qualified for the subjective evaluation in response to the first rule and the second rule being satisfied.
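The two-rule qualification check can be sketched directly. The concrete ranges and the fraction of scores rule 1 demands are assumptions; the disclosure only specifies that one rule constrains a subset of self-comparison scores to a first range and the other limits outliers via a second range:

```python
def qualifies(self_comparison_scores,
              first_range=(4, 5),       # assumed: "good" band for rule 1
              second_range=(3, 5),      # assumed: outlier limit for rule 2
              first_min_fraction=0.8):  # assumed: share required in rule 1
    """Apply the first and second rules to a subject's self-comparison scores."""
    n = len(self_comparison_scores)
    in_first = [s for s in self_comparison_scores
                if first_range[0] <= s <= first_range[1]]
    # Rule 1: a large enough subset of scores falls in the first range.
    rule1 = len(in_first) >= first_min_fraction * n
    # Rule 2: all scores stay inside the second range, bounding outliers.
    rule2 = all(second_range[0] <= s <= second_range[1]
                for s in self_comparison_scores)
    return rule1 and rule2

print(qualifies([5, 5, 4, 5, 4]))  # consistent grading
print(qualifies([5, 5, 4, 5, 1]))  # one outlier violates rule 2
```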
BIDIRECTIONAL PRESENTATION DATASTREAM USING CONTROL AND DATA PLANE CHANNELS
Aspects of the disclosure provide methods and apparatuses for media processing. In some examples, an apparatus includes processing circuitry. The processing circuitry can exchange, with a server device, a plurality of control messages over a control plane channel that uses a first transport protocol. The plurality of control messages belongs to a control plane of a bidirectional protocol for immersive media distribution. The processing circuitry receives, from the server device, a first plurality of data messages over a first data plane channel that uses a second transport protocol. The first plurality of data messages belongs to a data plane of the bidirectional protocol and carries immersive media content. The processing circuitry presents the immersive media content carried by the first plurality of data messages.
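The plane separation in this abstract — control messages on one channel with one transport protocol, immersive media data on another channel with a second transport protocol — can be illustrated with two in-memory queues standing in for the channels. Transports and message fields are illustrative, not from the disclosure:

```python
# Stand-ins for the two channels of the bidirectional protocol:
control_channel = []  # e.g. a reliable transport such as TCP/WebSocket
data_channel = []     # e.g. a second transport such as QUIC/UDP

def send_control(msg):
    """Control-plane message exchanged with the server device."""
    control_channel.append({"plane": "control", **msg})

def send_data(payload):
    """Data-plane message carrying immersive media content."""
    data_channel.append({"plane": "data", "media": payload})

# The session is negotiated entirely on the control plane...
send_control({"type": "session-setup", "formats": ["volumetric", "mpi"]})
send_control({"type": "session-ack"})
# ...then immersive media content flows on the data plane for presentation.
send_data("scene_chunk_0")
send_data("scene_chunk_1")

print(len(control_channel), len(data_channel))
```

Keeping the planes on separate channels lets the control exchange use a reliable, ordered transport while media delivery uses one tuned for throughput or latency.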
SYSTEM AND METHOD OF SERVER-SIDE DYNAMIC SPATIAL AND TEMPORAL ADAPTATIONS FOR MEDIA PROCESSING AND STREAMING
The techniques described herein relate to methods, apparatus, and computer readable media configured to provide video data for immersive media, implemented by a server in communication with a client device. A request to access a stream of media data associated with immersive content is received from the client device at a point in time at which the client first accesses the stream of media data for the immersive content. An initial portion of media data for the immersive content, starting from the point in time the client requests to access, is determined for delivery to the client device. In response to the request to access the stream of media data, the initial portion of media data is transmitted to the client device.
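The server-side step of mapping the client's access time to an initial portion of the stream can be sketched as below, assuming a segmented stream of fixed-duration segments (segment duration and the size of the initial portion are assumptions):

```python
# Hypothetical segmented stream: segment i covers [i*DUR, (i+1)*DUR) seconds.
SEGMENT_DURATION = 2.0
segments = [{"start": i * SEGMENT_DURATION, "id": f"seg_{i}"} for i in range(10)]

def initial_portion(access_time, count=3):
    """Pick the first segments to deliver, starting from the client's
    requested access time rather than the beginning of the stream."""
    first = int(access_time // SEGMENT_DURATION)
    return [s["id"] for s in segments[first:first + count]]

# A client joining 5.3 s into the content starts from the covering segment.
print(initial_portion(5.3))
```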
VR 360 video for remote end users
An apparatus for delivering virtual reality data portions to a client device, including a processing unit configured to perform the following in each one of a plurality of iterations: (1) receive from a network a current orientation data indicating a current orientation of a client device, (2) apply a rotation to a segment of a sphere defined in a virtual reality (VR) video file according to the current orientation, (3) crop from the rotated segment of the sphere in an equirectangular projection format an extended field of view (EFOV) frame in the equirectangular projection format according to the current orientation, and (4) instruct the network to transmit the EFOV frame to the client device.
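Step (3) above — cropping an extended field of view (EFOV) frame from the equirectangular projection according to the client's current orientation — amounts to computing a crop window centred on the yaw/pitch and wrapping around the horizontal seam. Frame resolution and EFOV margins below are assumptions:

```python
# Full 360-degree equirectangular frame and the EFOV crop (assumed sizes;
# the EFOV is larger than the viewport to absorb head motion between updates).
FRAME_W, FRAME_H = 3840, 1920
EFOV_W, EFOV_H = 1600, 900

def efov_window(yaw_deg, pitch_deg):
    """Return the (left, top) pixel of the EFOV crop for an orientation."""
    cx = (yaw_deg % 360) / 360 * FRAME_W       # yaw maps to horizontal centre
    cy = (90 - pitch_deg) / 180 * FRAME_H      # pitch maps to vertical centre
    left = int(cx - EFOV_W / 2) % FRAME_W      # wraps across the 0/360 seam
    top = max(0, min(int(cy - EFOV_H / 2), FRAME_H - EFOV_H))
    return left, top

# Looking straight ahead: the crop straddles the seam horizontally.
print(efov_window(0, 0))
```

In the apparatus described, the sphere is first rotated per the received orientation, so the crop is taken from a re-projected frame rather than a fixed one; the window arithmetic is the same.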