MEDIA STORAGE

A user of a storage system can upload files for a media asset, which can include a high quality media file and various related files. As part of the upload process, the storage system can extract metadata that describes the media asset. The user can specify one or more lifecycle policies to be applied for storage of the asset, and a rules engine can ensure the application of the one or more policies. The rules engine can also enable the use of simple media processing workflows. A filename hashing approach can be used to ensure that the segments and files for the asset are stored in a relatively random and even distribution across the partitions of the storage system. As part of the lifecycle for the asset, the high quality media file can be moved to less expensive storage once transcoding of the asset or another such action occurs.
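The filename-hashing idea can be sketched with a short Python snippet. This is an illustrative example, not the patent's actual scheme: the hash function, key format, and partition count are all assumptions.

```python
import hashlib

def partition_for(filename: str, num_partitions: int = 64) -> int:
    """Map a filename to a storage partition via a hash so that segments
    and files land in a roughly random, even spread across partitions."""
    digest = hashlib.sha256(filename.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

# Segments of one asset scatter across many partitions instead of
# clustering on whichever partition a shared name prefix would pick.
used = {partition_for(f"asset-123/segment-{i:04d}.ts") for i in range(1000)}
```

Because the hash output is uniform, 1000 segment names cover essentially all 64 hypothetical partitions, which is the even-distribution property the abstract describes.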
SYSTEMS AND METHODS FOR IMPROVED DELIVERY AND DISPLAY OF 360-DEGREE CONTENT

Systems and methods are provided for generating a viewport for display. A user preference for a character and/or a genre of a scene in a spherical media content item is determined, wherein the spherical media content item comprises a plurality of tiles. A tile of the plurality of tiles is identified based on the determined user preference. A viewport to be generated for display at a computing device is predicted based on the identified tile. Based on the predicted viewport, a first tile to be transmitted to the computing device at a first resolution is identified. The first tile is transmitted to the computing device at the first resolution.
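The per-tile resolution decision can be illustrated as follows. This is a sketch under assumed inputs: the tag-based preference matching, tile dictionary, and resolution values are hypothetical, not taken from the patent.

```python
def plan_tile_resolutions(tiles, preferred_tags, high_res=2160, low_res=720):
    """Assign each tile of a spherical content item a resolution: tiles
    whose content tags match the user's preferred character/genre tags
    get the first (high) resolution, the rest a lower one."""
    preferred = set(preferred_tags)
    return {tile_id: high_res if preferred & set(tags) else low_res
            for tile_id, tags in tiles.items()}

# Hypothetical per-tile tags describing what each tile shows.
tiles = {"t0": ["hero", "action"], "t1": ["landscape"], "t2": ["dialogue"]}
plan = plan_tile_resolutions(tiles, preferred_tags=["action"])
```

Only the tile predicted to fall in the viewport is sent at full resolution, saving bandwidth on the rest of the sphere.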

METHOD FOR IMAGE TRANSMITTING, TRANSMITTING DEVICE AND RECEIVING DEVICE
20230069743 · 2023-03-02

In a method for image transmission executed in a transmitting device, three data transmission channels are established: a first channel, a second channel, and a third channel. An image of a video is obtained, and the image is divided into a region of interest and a background region. First data for the region of interest and second data for the background region are obtained, and the first data is encoded through fountain coding to obtain third data. The first data, the second data, and the third data are transmitted through the first channel, the second channel, and the third channel, respectively, to a receiving device. A network condition is received, and whether the network condition matches a preset condition is determined. When the network condition matches the preset condition, the first data is compensated according to a first preset algorithm.
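The redundancy channel can be sketched with a toy fountain-style encoder. This is only an illustration: real fountain codes (e.g. LT or Raptor codes) use a tuned degree distribution, and the block sizes, seed, and packet format here are assumptions rather than anything the abstract specifies.

```python
import random

def fountain_encode(blocks, n_packets, seed=0):
    """Toy LT-style fountain encoder: each output packet is the XOR of
    a random subset of equal-length source blocks, giving the receiver
    redundant combinations from which lost region-of-interest data can
    be recovered."""
    rng = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        degree = rng.randint(1, len(blocks))
        idxs = rng.sample(range(len(blocks)), degree)
        pkt = bytearray(len(blocks[0]))
        for i in idxs:
            for j, byte in enumerate(blocks[i]):
                pkt[j] ^= byte
        packets.append((sorted(idxs), bytes(pkt)))
    return packets

# Region-of-interest data split into blocks; the packets play the role
# of the "third data" sent on the third channel.
roi_blocks = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
third_data = fountain_encode(roi_blocks, n_packets=5)
```

Because more packets can be generated on demand, the third channel can keep supplying combinations until the receiver has enough to reconstruct the first data, which is one way the described compensation could work.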
IMAGE DATA TRANSFER APPARATUS AND IMAGE COMPRESSION METHOD
20220337851 · 2022-10-20

An image contents acquisition section in a compression coding section of a server acquires information associated with contents indicated by a moving image generated by an image generation section. A communication status acquisition section acquires a status of communication with an image processing apparatus corresponding to a data transmission destination. A compression coding processing section adjusts a data size of the moving image according to a change of the communication status, by using means determined on the basis of the contents indicated by the moving image, and compression-codes data of the moving image. A communication section transmits compression-coded data to the image processing apparatus.
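The content-dependent size adjustment can be sketched as below. The content categories, the choice between dropping resolution versus frame rate, and the headroom factor are all illustrative assumptions, not the patent's actual logic.

```python
def choose_adjustment(content_type: str) -> str:
    """Pick how to shrink the stream when the channel degrades, based on
    what the moving image shows (hypothetical categories). Fast motion
    tolerates resolution loss better; mostly-static content tolerates
    frame-rate loss better."""
    return "reduce_resolution" if content_type == "fast_motion" else "reduce_framerate"

def target_bitrate(measured_kbps: float, headroom: float = 0.8) -> float:
    """Scale the encoder target to the measured channel rate, leaving
    some headroom for jitter."""
    return measured_kbps * headroom
```

A control loop would re-measure the channel each interval, recompute the target, and apply the content-appropriate adjustment before compression coding.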
SUB-BITSTREAM EXTRACTION OF MULTI-LAYER VIDEO BITSTREAMS
20230106638 · 2023-04-06

Examples of video encoding methods and apparatus and video decoding methods and apparatus are described. An example method of video processing includes performing a conversion between a video and a bitstream of the video according to a rule, wherein the bitstream includes network abstraction layer (NAL) units for multiple video layers. The rule defines that a sub-bitstream extraction process to generate an output bitstream comprising an output layer set (OLS) includes one or more operations that are selectively performed responsive to the following conditions: (1) the list of NAL unit header layer identifier values in the OLS does not include all values of NAL unit header layer identifiers in all video coding layer (VCL) NAL units in the bitstream, and (2) the output bitstream contains a supplemental enhancement information (SEI) NAL unit that contains a scalable-nesting SEI message.
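The two-part condition gating the extra extraction operations can be expressed as a small predicate. This is a paraphrase of the rule as stated in the abstract, with hypothetical argument names; it is not code from any VVC reference implementation.

```python
def needs_extraction_ops(ols_layer_ids, vcl_layer_ids, has_scalable_nesting_sei):
    """Return True when the rule's extra sub-bitstream extraction
    operations apply: (1) the OLS's NAL-unit-header layer-id list does
    not cover every layer id found in the bitstream's VCL NAL units,
    and (2) the output bitstream carries a scalable-nesting SEI NAL unit."""
    covers_all = set(vcl_layer_ids) <= set(ols_layer_ids)
    return (not covers_all) and has_scalable_nesting_sei
```

When the OLS already covers every VCL layer, no layers are being dropped, so the extra operations (e.g. rewriting nested SEI into non-nested form) would have nothing to do.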

Video encoding system
11653026 · 2023-05-16

A video encoding system in which pixel data is decomposed into frequency bands prior to encoding. The frequency bands are organized into blocks that are provided to a block-based encoder. The encoded frequency data is packetized and transmitted to a receiving device. On the receiving device, the encoded data is decoded to recover the frequency bands. Wavelet synthesis is then performed on the frequency bands to reconstruct the pixel data for display. The system may encode parts of frames (tiles or slices) using one or more encoders and transmit the encoded parts as they are ready. A pre-filter component may perform a lens warp on the pixel data prior to the wavelet transform.
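The decomposition-then-synthesis round trip can be illustrated with a one-level 1-D Haar transform. This is a minimal sketch: the patented system's actual wavelet filters, 2-D structure, and lens-warp pre-filter are not reproduced here.

```python
def haar_1d(samples):
    """One level of a Haar wavelet transform: split a 1-D signal into a
    low-frequency (pairwise average) band and a high-frequency
    (pairwise difference) band, the kind of frequency-band
    decomposition done before block-based encoding."""
    low = [(a + b) / 2 for a, b in zip(samples[::2], samples[1::2])]
    high = [(a - b) / 2 for a, b in zip(samples[::2], samples[1::2])]
    return low, high

def haar_1d_inverse(low, high):
    """Wavelet synthesis: reconstruct the original samples from the two
    bands, as the receiving device does before display."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out
```

The transform is exactly invertible, so the bands can be encoded, packetized, and transmitted independently and still reconstruct the pixel data on the receiver.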

On-Camera Video Capture, Classification, and Processing
20170351922 · 2017-12-07

Video and corresponding metadata are accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. Events of interest can be tagged within the video based on, for instance, user input, audio signals, motion vectors, and metadata corresponding to the video. A camera system can process video data based on the events of interest tagged within the video before outputting the video data. For instance, video scenes associated with tagged events of interest can be combined to form a video highlight clip. Likewise, portions of video tagged with events of interest can be encoded or stored at a higher resolution or frame rate than other portions of the video.
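The tagging-and-highlight flow can be sketched as follows. The thresholds, signal names, and frame padding are illustrative assumptions; the patent does not specify them.

```python
def tag_events(motion_mags, audio_levels, motion_thresh=5.0, audio_thresh=0.8):
    """Tag frame indices as events of interest when per-frame motion
    magnitude or audio level spikes past a threshold."""
    return {i for i, (m, a) in enumerate(zip(motion_mags, audio_levels))
            if m > motion_thresh or a > audio_thresh}

def highlight_ranges(events, pad=1):
    """Merge tagged frames, padded by a few frames on each side, into
    contiguous clip ranges for a highlight reel."""
    ranges = []
    for i in sorted(events):
        start, end = i - pad, i + pad
        if ranges and start <= ranges[-1][1]:
            ranges[-1] = (ranges[-1][0], end)  # extend the previous clip
        else:
            ranges.append((start, end))
    return ranges
```

The resulting ranges could then drive either clip assembly or selective high-resolution encoding of the tagged portions.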