H04N7/12

FILTERING-BASED IMAGE CODING DEVICE AND METHOD
20230023712 · 2023-01-26 ·

According to embodiments described herein, sub-pictures and/or virtual boundaries can be used for coding an image. For example, sub-pictures in the current picture can be used for predicting, reconstructing, and/or filtering the current picture. Virtual boundaries can be used for filtering reconstructed samples of the current picture. Through image coding based on the sub-pictures and/or virtual boundaries according to embodiments described herein, the subjective/objective quality of an image can be improved, and the hardware resources consumed by the coding can be reduced.
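
The boundary-aware filtering described above can be illustrated with a minimal sketch (hypothetical, not the claimed filter): a smoothing filter that leaves untouched any reconstructed sample whose filter window would cross a virtual boundary.

```python
def filter_row(samples, virtual_boundaries, radius=1):
    """Apply a simple (2*radius+1)-tap averaging filter to a row of
    reconstructed samples, skipping any window that would cross a
    virtual boundary. A boundary at position b separates sample b-1
    from sample b. Illustrative only."""
    out = list(samples)
    for i in range(radius, len(samples) - radius):
        # Do not filter across a virtual boundary.
        if any(i - radius < b <= i + radius for b in virtual_boundaries):
            continue
        window = samples[i - radius:i + radius + 1]
        out[i] = sum(window) // len(window)
    return out
```

With a virtual boundary at position 2, samples on either side of it are left unfiltered, so filtering artifacts do not leak across the boundary.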

LOSSLESS AR VIDEO CAPTURE AND TRANSMISSION METHOD, APPARATUS AND SYSTEM
20230021901 · 2023-01-26 ·

The present disclosure relates to a lossless AR video capture and transmission method, apparatus and system. The method includes converting combined analog electronic signals synchronously captured by a plurality of image sensors and a plurality of sound pickups into multi-channels of first digital signals; losslessly converting the multi-channels of first digital signals into multi-channels of second digital signals; obtaining multi-channels of first optical signals by performing respective photoelectric conversions on the multi-channels of second digital signals; receiving the multi-channels of first optical signals, and converting the multi-channels of first optical signals into the multi-channels of second digital signals; and parsing at least one channel of second digital signal among the multi-channels of second digital signals into an AR video.
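
The signal path above can be sketched as a chain of per-channel transforms. Every function name and data layout below is hypothetical, and the lossless and optical stages are simple stand-ins for the conversions the method describes:

```python
def analog_to_digital(analog_channels):
    # A/D conversion: quantize each combined analog sample to an integer code.
    return [[round(s) for s in ch] for ch in analog_channels]

def lossless_recode(first_digital):
    # Losslessly convert first digital signals into second digital signals
    # (identity here stands in for a reversible framing/container step).
    return [list(ch) for ch in first_digital]

def electro_optical(second_digital):
    # Photoelectric conversion of each channel, modeled as tagging it.
    return [("optical", ch) for ch in second_digital]

def optical_to_electrical(optical_channels):
    # Receive the optical signals and recover the second digital signals.
    return [ch for tag, ch in optical_channels if tag == "optical"]

def parse_ar_video(second_digital, channel=0):
    # Parse one selected channel of second digital signal into the AR video
    # (here, simply its sample sequence).
    return second_digital[channel]
```

Because the recode and optical round trip are modeled as lossless, the parsed channel is bit-identical to the corresponding first digital signal.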

Method and apparatus for video decoding

This application relates to a method and apparatus, a storage medium, and a computer device for video encoding and decoding. The video encoding method includes: determining a sub-pixel interpolation mode, the sub-pixel interpolation mode comprising one of a direct sub-pixel interpolation mode or a sampled sub-pixel interpolation mode; acquiring motion estimation pixel precision corresponding to a current video frame; performing sub-pixel interpolation processing on a reference frame corresponding to the current video frame according to a resolution relationship between the current video frame and the reference frame, the motion estimation pixel precision, and the sub-pixel interpolation mode, to obtain a target reference frame; and encoding the current video frame according to the target reference frame, to obtain encoded data corresponding to the current video frame.
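
As an illustration only (the function names and the simple half-pel filter are assumptions, not the claimed method), constructing a target reference frame from the resolution relationship, motion-estimation pixel precision, and interpolation mode might look like:

```python
def half_pel_interpolate(row):
    """Direct sub-pixel interpolation sketch: insert a half-pel sample
    (the average of its two integer-pel neighbours) between every pair."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) // 2)
    out.append(row[-1])
    return out

def build_target_reference(ref_row, cur_res, ref_res, precision, mode):
    """Hypothetical sketch: pick sub-pixel processing for the reference
    frame based on the resolution relationship, the motion-estimation
    pixel precision, and the signalled interpolation mode."""
    if cur_res != ref_res:
        # Resolutions differ: a sampled mode would first resample; the
        # reference is passed through unchanged here as a placeholder.
        return list(ref_row)
    if mode == "direct" and precision < 1:
        # Sub-integer precision requested: interpolate half-pel samples.
        return half_pel_interpolate(ref_row)
    return list(ref_row)
```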

Method for signaling picture header in coded video stream

A method of decoding an encoded video bitstream using at least one processor includes obtaining a video coding layer (VCL) network abstraction layer (NAL) unit; determining whether the VCL NAL unit is a first VCL NAL unit of a picture unit (PU) containing the VCL NAL unit; based on determining that the VCL NAL unit is the first VCL NAL unit of the PU, determining whether the VCL NAL unit is a first VCL NAL unit of an access unit (AU) containing the PU; and based on determining that the VCL NAL unit is the first VCL NAL unit of the AU, decoding the AU based on the VCL NAL unit.
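
The two-step first-in-PU / first-in-AU check can be sketched as follows; the dictionary layout and return labels are hypothetical:

```python
def process_vcl_nal_unit(nal, pu, au):
    """Sketch of the signalling logic: decide whether a VCL NAL unit is
    the first in its picture unit (PU), and if so whether it is also the
    first in its access unit (AU); only then is the AU decoded from it."""
    if not nal.get("is_vcl"):
        return "skip"
    if pu["vcl_nal_units"][0] is not nal:
        return "not_first_in_pu"
    if au["picture_units"][0] is not pu:
        return "first_in_pu_only"
    return "decode_au"
```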

SYSTEM AND METHOD FOR CORRECTING NETWORK LOSS OF DATA
20230016064 · 2023-01-19 ·

A reference-order AL-FEC system for recovering network video data packet loss during real-time video communication includes a packetizer, a reference-order AL-FEC encoder, a reference-order AL-FEC decoder and a depacketizer. The packetizer constructs source symbols from source packets of a current frame. The encoder generates a repair symbol from the source symbols of the current frame and other reference frames based on the reference-order, not time-order, between the frames within an encoding window. The encoder also generates a repair packet based on the repair symbol. The decoder recovers a lost source symbol based on the source symbols of the frames of the encoding window and the repair symbol by decoding the repair packet. The decoding is achieved by solving a linear system defined by the repair symbol.

System and method for correcting network loss of data
11706456 · 2023-07-18 ·

A reference-order AL-FEC system for recovering network video data packet loss during real-time video communication includes a packetizer, a reference-order AL-FEC encoder, a reference-order AL-FEC decoder and a depacketizer. The packetizer constructs source symbols from source packets of a current frame. The encoder generates a repair symbol from the source symbols of the current frame and other reference frames based on the reference-order, not time-order, between the frames within an encoding window. The encoder also generates a repair packet based on the repair symbol. The decoder recovers a lost source symbol based on the source symbols of the frames of the encoding window and the repair symbol by decoding the repair packet. The decoding is achieved by solving a linear system defined by the repair symbol.
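
The simplest instance of the linear system described above is a single-parity code over XOR arithmetic: one repair symbol is the XOR of the source symbols in the encoding window, and one lost symbol is recovered by XORing the repair symbol with every received symbol. This is an illustrative reduction, not the claimed reference-order scheme:

```python
def xor_bytes(a, b):
    # XOR two equal-length byte strings (GF(2) addition, symbol-wise).
    return bytes(x ^ y for x, y in zip(a, b))

def make_repair_symbol(source_symbols):
    """Generate one repair symbol as the XOR of the source symbols of
    the frames in the encoding window (single-parity sketch)."""
    repair = source_symbols[0]
    for sym in source_symbols[1:]:
        repair = xor_bytes(repair, sym)
    return repair

def recover_lost_symbol(received_symbols, repair):
    """Solve the (trivial) linear system: with one repair symbol, a
    single lost source symbol is the XOR of the repair symbol with all
    received source symbols."""
    lost = repair
    for sym in received_symbols:
        lost = xor_bytes(lost, sym)
    return lost
```

A practical AL-FEC encoder would use more repair symbols and a richer code so that multiple losses per window are recoverable; the XOR case shows the shape of the linear system.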

PLANE CODING TARGET AND IMAGE SPLICING SYSTEM AND METHOD APPLYING THE SAME
20230217036 · 2023-07-06 ·

Disclosed are a plane coding target and an image splicing system and method applying the same. The plane coding target comprises a plurality of coding units distributed in an array; each coding unit comprises one central coding point, a plurality of normal coding points, and at least one positioning point, whose distribution style is used to determine the coordinates of the central coding point and the normal coding points in a coding-unit coordinate system. The coding numerical value sequences of the coding units are mutually distinct and unique. The plane coding target can realize large-area coding and positioning functions, and the image splicing system applying it can solve the problems of splicing error and error accumulation caused by misidentification of a splicing location, thus realizing wide-range, high-precision, and fast two-dimensional image splicing.
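
The role of the unique coding sequences can be illustrated with a small lookup sketch (the data layout is hypothetical): because no two coding units share a sequence, one observed sequence identifies its unit, and with it an absolute position on the target, without accumulating stitching error.

```python
def locate_coding_unit(observed_sequence, target_units):
    """Map an observed coding numerical value sequence to the array
    position of its coding unit. Uniqueness of the sequences guarantees
    at most one match; layout of `target_units` is illustrative."""
    matches = [u for u in target_units if u["sequence"] == observed_sequence]
    assert len(matches) <= 1, "coding sequences must be unique"
    return matches[0]["array_position"] if matches else None
```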

Method and apparatus for video coding
11553205 · 2023-01-10 ·

Aspects of the disclosure provide a method and an apparatus for video coding. In some examples, an apparatus includes processing circuitry that obtains a plurality of control point motion vectors for a current block, determines first motion vectors and second motion vectors for a plurality of sub-blocks of the current block according to the plurality of control point motion vectors. The first motion vectors correspond to a first relative position in each sub-block. At least one first motion vector is different from a corresponding second motion vector. The processing circuitry obtains a first set of predicted samples according to the first motion vectors, obtains a second set of predicted samples according to the second motion vectors, and obtains a third set of predicted samples for the current block based on the first set of predicted samples and the second set of predicted samples.
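
A minimal sketch of the two ingredients above: deriving a motion vector at a chosen relative position from two control-point motion vectors (a standard 4-parameter affine model is assumed), and combining two sets of predicted samples into the final prediction. Names and the averaging blend are assumptions, not the claimed method:

```python
def subblock_mv(cpmv0, cpmv1, width, x, y):
    """Derive the motion vector at relative position (x, y) of a block
    from two control-point motion vectors (4-parameter affine model)."""
    dx = (cpmv1[0] - cpmv0[0]) / width  # scale/rotation term a
    dy = (cpmv1[1] - cpmv0[1]) / width  # scale/rotation term b
    mv_x = cpmv0[0] + dx * x - dy * y
    mv_y = cpmv0[1] + dy * x + dx * y
    return (mv_x, mv_y)

def blend_predictions(first_pred, second_pred):
    """Combine the two sets of predicted samples (from motion vectors at
    two different relative positions per sub-block) into the third,
    final set by rounded averaging."""
    return [(a + b + 1) // 2 for a, b in zip(first_pred, second_pred)]
```

Evaluating `subblock_mv` at two relative positions per sub-block yields the first and second motion vectors; where they differ, the blended prediction interpolates between the two sample sets.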

Systems and methods for player input motion compensation by anticipating motion vectors and/or caching repetitive motion vectors
11695951 · 2023-07-04 ·

Systems and methods for reducing latency through motion estimation and compensation techniques are disclosed. The systems and methods include a client device that uses lookup tables transmitted from a remote server to match user input to motion vectors, and to tag and sum those motion vectors. When a remote server transmits encoded video frames to the client, the client decodes those video frames and applies the summed motion vectors to the decoded frames to estimate motion in those frames. In certain embodiments, the systems and methods generate motion vectors at a server based on predetermined criteria and transmit the generated motion vectors and one or more invalidators to a client, which caches those motion vectors and invalidators. The server instructs the client to receive input from a user and to match that input against the cached motion vectors or invalidators. Based on that comparison, the client then applies the matched motion vectors or invalidators to effect motion compensation in a graphical interface. In other embodiments, the systems and methods cache repetitive motion vectors at a server, which transmits a previously generated motion vector library to a client. The client stores the motion vector library and monitors for user input data. The server instructs the client to calculate a motion estimate from the input data and to update the stored motion vector library based on that data, so that the client applies the stored motion vector library to initiate motion in a graphical interface before receiving actual motion vector data from the server. In this manner, latency in video data streams is reduced.
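
The client-side cache-and-match behaviour can be sketched as follows; the class, method names, and data shapes are all hypothetical:

```python
class MotionVectorCache:
    """Client-side sketch: a lookup table from user inputs to cached
    motion vectors, with invalidators that remove entries when the
    server signals they no longer apply."""

    def __init__(self, lookup_table):
        self.table = dict(lookup_table)

    def match(self, user_input):
        # Return the cached motion vector for this input, if still valid.
        return self.table.get(user_input)

    def invalidate(self, invalidator):
        # Apply an invalidator: drop every cached entry it names.
        for key in invalidator:
            self.table.pop(key, None)

def apply_motion(frame_offset, motion_vector):
    # Sum the matched motion vector into the current display offset to
    # estimate motion ahead of the server's authoritative frames.
    return (frame_offset[0] + motion_vector[0],
            frame_offset[1] + motion_vector[1])
```

The latency win comes from `match` and `apply_motion` running locally on user input, while the server's later frames and invalidators correct or retire the cached vectors.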