Patent classifications
H04N21/42202
Method and device for latency reduction of an image processing pipeline
In some implementations, a method includes: determining a complexity value for first image data associated with a physical environment that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and virtual content for compositing with the first image data; and, in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from a perspective that corresponds to a camera pose of the device relative to the physical environment during the first time period; and compositing a previous render of the virtual content for a previous time period with the first image data to generate a graphical environment for the first time period.
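A minimal sketch of this admission decision in Python; the frame budget, the linear cost model, and all names (Frame, estimated_setup_time, composite) are assumptions for illustration, not the patent's implementation:

```python
from dataclasses import dataclass

THRESHOLD_S = 1 / 90  # assumed frame budget for a 90 Hz display

@dataclass
class Frame:
    image_data: bytes
    camera_pose: tuple  # device pose relative to the physical environment

def complexity(image_data: bytes) -> float:
    """Toy complexity metric: payload size in kilobytes."""
    return len(image_data) / 1024

def estimated_setup_time(cx: float, num_virtual_objects: int) -> float:
    """Assumed linear cost model for compositing setup."""
    return 1e-4 * cx + 5e-4 * num_virtual_objects

def composite(frame: Frame, virtual_objects: list, previous_render: str) -> str:
    """Composite the frame, reusing the previous render when over budget."""
    if estimated_setup_time(complexity(frame.image_data), len(virtual_objects)) > THRESHOLD_S:
        render = previous_render  # forgo a fresh render of the virtual content
    else:
        render = f"render@{frame.camera_pose}"  # stand-in for a real renderer
    return f"blend({len(frame.image_data)}B, {render})"
```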
Systems and methods for seamlessly outputting embedded media from a digital page on nearby devices most suitable for access
Systems and methods for enhancing user experience in accessing media of a certain content type by outputting the media on a nearby device that is better suited for access. For example, a media guidance application may determine that a user is accessing, on his/her smartphone, a digital page (e.g., a website, a newsfeed, etc.) that features embedded content (e.g., photos, movies, music, etc.). In response to determining that the user has navigated to embedded content, such as a video clip, the media guidance application may determine a device in the vicinity of the user that is better suited than the user's smartphone for playback of the video clip. For example, a nearby smart television may have a larger screen, better sound output, and a higher display resolution than the smartphone. As a result, the media guidance application may cause the smart television to output the video clip.
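A hedged sketch of the device-selection step, assuming an invented Device record and scoring function; a real implementation would weigh many more capabilities:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    screen_inches: float
    resolution_lines: int  # vertical resolution
    audio_channels: int

def suitability(device: Device, content_type: str) -> float:
    """Assumed scoring: video favors display, music favors audio output."""
    if content_type == "video":
        return device.screen_inches * device.resolution_lines
    if content_type == "music":
        return float(device.audio_channels)
    return 0.0

def pick_output_device(nearby: list[Device], content_type: str) -> Device:
    return max(nearby, key=lambda d: suitability(d, content_type))

nearby = [Device("smartphone", 6.1, 1080, 2), Device("smart TV", 55.0, 2160, 6)]
print(pick_output_device(nearby, "video").name)  # -> smart TV
```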
Multi-viewpoint multi-user audio user experience
An apparatus including circuitry configured for receiving a spatial media content file including a plurality of viewpoints; circuitry configured for determining a first viewpoint from the plurality of viewpoints for a first user consuming the spatial media content file; circuitry configured for receiving an indication that affects an audio rendering of the first viewpoint for the first user, wherein the indication is associated with one or more actions of at least one second user consuming the spatial media content file; and circuitry configured for controlling the audio rendering of the first viewpoint for the first user in response to the receiving of the indication based on at least one of: a position and/or orientation of the first user, and the one or more actions of the second user.
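One way to picture the audio-control step, as a sketch with an assumed action vocabulary and gain model (nothing here is specified by the disclosure):

```python
def viewpoint_gain(base_gain: float, distance_to_viewpoint: float,
                   second_user_actions: list[str]) -> float:
    """Adjust the first user's audio gain for a viewpoint from the
    user's position (distance) and other users' assumed actions."""
    gain = base_gain / max(distance_to_viewpoint, 1.0)  # crude distance falloff
    for action in second_user_actions:
        if action == "joined_viewpoint":
            gain *= 1.25  # emphasize a viewpoint another user switched to
        elif action == "muted_viewpoint":
            gain *= 0.5   # de-emphasize a viewpoint another user muted
    return min(gain, 1.0)

print(viewpoint_gain(0.8, 2.0, ["joined_viewpoint"]))  # -> 0.5
```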
Facilitating panoramic video streaming with brain-computer interactions
Aspects of the subject disclosure may include, for example, obtaining one or more signals, the one or more signals being based upon brain activity of a viewer while the viewer is viewing media content; predicting, based upon the one or more signals, a first predicted desired viewport of the viewer; obtaining head movement data associated with the media content; predicting, based upon the head movement data, a second predicted desired viewport of the viewer; comparing the first predicted desired viewport to the second predicted desired viewport, resulting in a comparison; and determining, based upon the comparison, to use the first predicted desired viewport to facilitate obtaining a first subsequent portion of the media content or to use the second predicted desired viewport to facilitate obtaining a second subsequent portion of the media content. Other embodiments are disclosed.
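A minimal sketch of the arbitration between the two predictors; the confidence fields and the agreement tolerance are assumptions, not details from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class ViewportPrediction:
    yaw_deg: float
    pitch_deg: float
    confidence: float  # assumed per-predictor confidence score

def choose_viewport(bci: ViewportPrediction,
                    head: ViewportPrediction,
                    agreement_deg: float = 15.0) -> ViewportPrediction:
    """Pick which prediction drives fetching the next content portion."""
    agree = (abs(bci.yaw_deg - head.yaw_deg) <= agreement_deg
             and abs(bci.pitch_deg - head.pitch_deg) <= agreement_deg)
    if agree:
        return head  # predictors agree: either viewport works
    return bci if bci.confidence > head.confidence else head
```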
System and method for creating a temporal-based dynamic watermark
Systems and methods for dynamically and automatically generating digital watermarks are provided. Watermark payloads utilized in generating the digital watermarks are altered based upon changing conditions, such as environmental characteristics associated with playback or distribution of media content. Changing conditions may also encompass a change in the distribution/presentation chain of devices associated with the playback or distribution of the media content.
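As an illustration only (the disclosure does not specify a construction), a payload could be derived by hashing a base identifier together with a snapshot of the changing conditions, so the embedded mark changes as conditions change:

```python
import hashlib
import json
import time

def watermark_payload(base_id: str, conditions: dict) -> str:
    """Derive a payload that varies with playback/distribution conditions."""
    snapshot = {"id": base_id, "minute": int(time.time() // 60), **conditions}
    blob = json.dumps(snapshot, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

print(watermark_payload("asset-42", {"device": "set-top-box", "hdmi_out": True}))
```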
Multimodal inputs for computer-generated reality
Implementations of the subject technology provide for determining an operating mode of an electronic device based at least in part on whether the electronic device is communicatively coupled to an associated base device. Based on the determined operating mode, the subject technology identifies a set of input modalities for initiating a recording of content within a field of view of the electronic device. The subject technology monitors sensor information generated by at least one sensor included in, or communicatively coupled to, the electronic device. Further, the subject technology initiates the recording of content within the field of view of the electronic device when the monitored sensor information indicates that at least one of the identified set of input modalities has been triggered.
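A sketch of the mode-dependent modality set, with an invented modality vocabulary and a simple tethered/standalone split:

```python
def input_modalities(coupled_to_base: bool) -> set[str]:
    """Assumed modality sets per operating mode."""
    if coupled_to_base:  # tethered mode can lean on the base device
        return {"voice", "gaze", "hand_gesture", "base_controller"}
    return {"voice", "hardware_button"}  # standalone fallback

def should_start_recording(coupled_to_base: bool, triggered: set[str]) -> bool:
    """Start recording when any identified modality has been triggered."""
    return bool(input_modalities(coupled_to_base) & triggered)

print(should_start_recording(False, {"gaze"}))          # -> False
print(should_start_recording(True, {"gaze", "voice"}))  # -> True
```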
System and method for real-time synchronization of media content via multiple devices and speaker systems
A method and system for real-time customization and synchronization of media by a client device in communication with a server device. The client device customizes stock media content based on user preferences, and synchronizes the customized content for playback with a server-side playback of the stock media content.
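A minimal sketch of the synchronization step, assuming the client periodically learns the server-side playback position and corrects its own clock; the drift tolerance is an invented parameter:

```python
def seek_correction(server_pos_s: float, client_pos_s: float,
                    max_drift_s: float = 0.05) -> float:
    """Return the seek offset to apply, or 0.0 if within tolerance."""
    drift = server_pos_s - client_pos_s
    return drift if abs(drift) > max_drift_s else 0.0

print(seek_correction(12.40, 12.43))  # -> 0.0 (within 50 ms; no correction)
print(seek_correction(12.40, 12.10))  # -> 0.3 (client seeks ahead 300 ms)
```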
Low latency wireless virtual reality systems and methods
Virtual Reality (VR) processing devices and methods are provided for: transmitting user feedback information comprising at least one of user position information and user orientation information; receiving encoded audio-video (A/V) data, which is generated based on the transmitted user feedback information; separating the A/V data into video data and audio data corresponding to a portion of a next frame of a sequence of frames of the video data to be displayed; decoding the portion of the next frame of the video data and the corresponding audio data; providing the audio data for aural presentation; and controlling the portion of the next frame of the video data to be displayed in synchronization with the corresponding audio data.
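A hedged sketch of the receive path, with placeholder decode/present functions standing in for real codecs and display hardware; none of these names come from the patent:

```python
from dataclasses import dataclass

@dataclass
class EncodedChunk:
    frame_index: int
    video: bytes  # portion of the next frame to be displayed
    audio: bytes  # the corresponding audio

def step(pose: tuple, chunk: EncodedChunk) -> None:
    send_feedback(pose)          # position/orientation drives server-side encoding
    video = decode(chunk.video)  # decode the next-frame portion
    audio = decode(chunk.audio)
    present(video, audio)        # display gated to the audio clock

def send_feedback(pose: tuple) -> None:
    print("feedback:", pose)

def decode(data: bytes) -> str:
    return f"<{len(data)} bytes decoded>"

def present(video: str, audio: str) -> None:
    print("present", video, "in sync with", audio)

step((0.0, 1.6, 0.0), EncodedChunk(0, b"\x00" * 4096, b"\x00" * 512))
```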
Virtual ambient illuminance sensor system
The virtual ambient illuminance sensor system disclosed herein provides a method including: detecting presence of an external device in the vicinity of the device, wherein the external device is communicatively connected to the device; communicating with the external device to determine that the external device has an illuminance sensor; and, based at least in part on determining that the external device has an illuminance sensor: receiving an ambient illuminance snapshot from the external device, storing the ambient illuminance snapshot from the external device in the memory, and generating an ambient illuminance report for an operating system of the device.
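A sketch of the virtual-sensor flow under assumed names; the ExternalDevice record and the report format are inventions for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExternalDevice:
    dev_id: str
    lux: Optional[float]  # None if the device exposes no illuminance sensor

def ambient_report(devices: list[ExternalDevice], cache: dict) -> Optional[dict]:
    """Report illuminance to the OS from the first sensor-equipped device."""
    for dev in devices:
        if dev.lux is not None:
            cache[dev.dev_id] = dev.lux  # store the snapshot in memory
            return {"source": dev.dev_id, "lux": dev.lux}
    return None  # no nearby device can act as a virtual sensor

print(ambient_report([ExternalDevice("watch", None),
                      ExternalDevice("tablet", 312.0)], {}))
```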
Method and electronic device for sharing content
An electronic device according to an embodiment disclosed in the disclosure may include an ultra-wideband (UWB) communication module, a communication module, and a processor operatively connected with the UWB communication module and the communication module. The processor may determine a position of the electronic device based on UWB, may determine an area where the electronic device is positioned, may play a multimedia content corresponding to the area, may acquire interest information regarding a plurality of multimedia contents including the multimedia content, may select at least one multimedia content from the plurality of multimedia contents based on the interest information, may generate a user multimedia content by using the at least one selected multimedia content, and may transmit the user multimedia content to a server through the communication module.
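A sketch of the position-to-area mapping and interest-based selection; the rectangular areas, coordinates, and scoring are assumptions for illustration:

```python
from typing import Optional

AREAS = {  # assumed axis-aligned areas in UWB coordinates (meters)
    "lobby":   (0.0, 0.0, 5.0, 5.0),
    "gallery": (5.0, 0.0, 12.0, 5.0),
}

def area_for(x: float, y: float) -> Optional[str]:
    """Map a UWB-derived position to the area containing it, if any."""
    for name, (x0, y0, x1, y1) in AREAS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def select_content(interest: dict[str, float], k: int = 3) -> list[str]:
    """Pick the top-k contents by interest score for the user compilation."""
    return sorted(interest, key=interest.get, reverse=True)[:k]

print(area_for(6.2, 1.0))                                 # -> gallery
print(select_content({"clip_a": 0.9, "clip_b": 0.4}, 2))  # -> ['clip_a', 'clip_b']
```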