H04N7/0135

Data interpolation

When a picture displayed on a client device is enlarged, the client device may be configured to insert new pixels between two adjacent pixels in the picture. When actual values of the new pixels are stored on a server, the client device may submit a request to the server for the actual values of the new pixels. Responsive to the request, the server may first calculate interpolation values using the same interpolation algorithm as the client device and then calculate a difference value based on the interpolation values and the actual values stored on the server. If the calculated difference value is greater than a threshold value, the server may transmit the actual values for the new pixels to the client device. Otherwise, the server may instruct the client device to calculate the interpolation values itself.
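The server-side decision described above can be sketched as follows. This is a hypothetical minimal model, assuming midpoint averaging as the shared interpolation algorithm and a sum-of-absolute-differences metric; the patent does not specify either choice.

```python
def serve_pixel_values(actual, left, right, threshold):
    """Decide whether to transmit actual new-pixel values or instruct the
    client to interpolate locally (hypothetical sketch of the scheme)."""
    # Run the same interpolation the client would: here, the midpoint
    # average of the two adjacent pixels (an assumed algorithm).
    interpolated = [(l + r) / 2 for l, r in zip(left, right)]
    # Difference value: sum of absolute differences vs. stored actuals.
    difference = sum(abs(a, ) if False else abs(a - i)
                     for a, i in zip(actual, interpolated))
    if difference > threshold:
        return ("send_actual", actual)       # interpolation too inaccurate
    return ("interpolate_locally", None)     # client reconstructs on its own
```

A smaller threshold trades bandwidth for fidelity: more requests result in the actual values being transmitted.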

IMAGE PICKUP APPARATUS
20170230568 · 2017-08-10 ·

An image pickup apparatus is equipped with a distance information calculator, a high frequency signal extractor, a focus assist signal generator, and a signal synthesizer. The distance information calculator calculates distance information of a video signal and generates an in-focus range signal from current focus information and the distance information. The high frequency signal extractor extracts a high frequency signal of the video signal. The focus assist signal generator generates a focus assist signal representing a focused region by using the high frequency signal and the in-focus range signal. The signal synthesizer synthesizes the focus assist signal with the video signal to generate a focus-assist-signal-added video signal.
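A 1-D toy version of the signal path can illustrate the idea: extract high-frequency content, gate it by the in-focus range derived from distance information, and synthesize it back onto the video signal. The Laplacian approximation and additive peaking below are assumptions for illustration, not the patented implementation.

```python
def focus_assist(video_line, depth_line, focus_near, focus_far, gain=1.0):
    """Hypothetical 1-D sketch: add a peaking (focus assist) signal only
    where the pixel's distance lies in the in-focus range and the video
    signal has high-frequency content."""
    out = list(video_line)
    for i in range(1, len(video_line) - 1):
        # Crude high-frequency extraction: discrete second difference.
        high_freq = video_line[i - 1] - 2 * video_line[i] + video_line[i + 1]
        # In-focus range signal from distance information.
        in_focus = focus_near <= depth_line[i] <= focus_far
        if in_focus:
            # Synthesize the focus assist signal onto the video signal.
            out[i] = video_line[i] + gain * abs(high_freq)
    return out
```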

Directed interpolation and data post-processing

An encoding device evaluates a plurality of processing and/or post-processing algorithms and/or methods to be applied to a video stream, and signals a selected method, algorithm, class, or category of methods/algorithms either in an encoded bitstream or as side information related to the encoded bitstream. A decoding device or post-processor utilizes the signaled algorithm or selects an algorithm/method based on the signaled method or algorithm. The selection is based, for example, on availability of the algorithm/method at the decoder/post-processor and/or cost of implementation. The video stream may comprise, for example, downsampled multiplexed stereoscopic images, and the selected algorithm may include upconversion and/or error correction techniques that contribute to a restoration of the downsampled images.
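The decoder-side selection logic might look like the following sketch, assuming the selection criteria named in the abstract (availability, then implementation cost). The function and cost table are hypothetical.

```python
def select_post_process(signaled, available, cost):
    """Hypothetical decoder/post-processor selection: use the signaled
    algorithm if it is implemented locally; otherwise fall back to the
    cheapest available alternative."""
    if signaled in available:
        return signaled
    # Signaled algorithm not available: pick the lowest-cost fallback.
    return min(available, key=lambda name: cost[name])
```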

IMAGE SENSING DEVICE
20170272673 · 2017-09-21 ·

An image sensing device includes: a pixel array including a plurality of pixels arranged at each cross point of rows and columns, wherein the pixel array comprises a plurality of pixel blocks, each including N pixels, N being a natural number equal to or greater than 2, wherein the pixel blocks sequentially output a plurality of pixel signals having pixel information on the same color N times during one or more single row times; a plurality of column lines suitable for sequentially transferring the plurality of pixel signals from the pixel blocks, each column line being shared by two adjacent columns and coupled to at least one of the pixel blocks; a plurality of averaging blocks suitable for grouping the pixel signals to overlap each other, into a plurality of pixel signal groups, and averaging the pixel signal groups to output a plurality of averaged pixel signals, wherein the number of the averaging blocks is smaller than the number of the column lines; and a plurality of conversion blocks suitable for converting the averaged pixel signals into a plurality of digital signals.
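The overlapping-group averaging can be modeled in one dimension as a sliding window over the column-line signals; the window size and stride below are assumptions chosen to show why fewer averaging blocks than column lines suffice.

```python
def average_overlapping(signals, group_size=2, stride=1):
    """Hypothetical sketch: group same-color pixel signals into
    overlapping windows and average each window; the number of outputs
    (averaging blocks) is smaller than the number of inputs (column lines)."""
    groups = [signals[i:i + group_size]
              for i in range(0, len(signals) - group_size + 1, stride)]
    return [sum(g) / len(g) for g in groups]
```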

DISPLAY APPARATUS AND CONTROL METHOD THEREOF

A display apparatus and a control method thereof are provided. The display apparatus includes a communication interface configured to receive captured images and information related to the captured images; a display; and a processor configured to: obtain an object disparity of an object included in the captured images and a number of the captured images based on the information related to the captured images; identify whether a display disparity representable by the display matches the object disparity; based on the display disparity not matching the object disparity, generate interpolated images by performing image interpolation based on the display disparity, the object disparity, and the number of the captured images; and control the display to display a three-dimensional content based on the captured images and the interpolated images.
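One plausible reading of the disparity-matching step is that interpolated views are inserted between captured images until the per-view disparity fits what the display can represent. The formula below is an illustrative assumption, not the claimed method.

```python
import math

def plan_interpolated_views(object_disparity, display_disparity, num_captured):
    """Hypothetical sketch: estimate how many interpolated images are
    needed so that the disparity between adjacent views does not exceed
    the display disparity."""
    if object_disparity <= display_disparity:
        return 0  # disparities match; no interpolation needed
    # In-between views per gap so each step fits the display's limit.
    views_per_gap = math.ceil(object_disparity / display_disparity) - 1
    return views_per_gap * (num_captured - 1)
```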

DEVICE AND METHOD FOR PROCESSING FRAMES

Embodiments disclosed herein relate to a device and a method for processing frames. For example, a buffer of the device is arranged to store a plurality of rendered frames, rendered at a frame rendering rate, together with a timestamp for each rendered frame. A compositor of the device is arranged to obtain a timestamp of a synchronisation signal for synchronising the display of frames with a display refresh rate. In response to obtaining the timestamp of the synchronisation signal, the compositor triggers access to the buffer to obtain the two rendered frames having timestamps closest to the timestamp of the synchronisation signal. An interpolator of the device generates an interpolated rendered frame for display by performing an interpolation operation using the two rendered frames. The interpolation operation takes into account the difference between the timestamp of each of the two rendered frames and the timestamp of the synchronisation signal.
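The compositor/interpolator step can be sketched as below. Frames are modeled as (timestamp, pixel-list) pairs and the interpolation as a linear blend weighted by the timestamp differences; the linear blend is an assumption, as the abstract does not fix the interpolation operation.

```python
def pick_and_weight(frames, t_sync):
    """Hypothetical sketch: pick the two buffered frames with timestamps
    closest to the synchronisation timestamp and blend them with a weight
    derived from the timestamp differences."""
    # Two frames whose timestamps are closest to the sync timestamp.
    (t0, f0), (t1, f1) = sorted(frames, key=lambda tf: abs(tf[0] - t_sync))[:2]
    if t0 > t1:  # keep the earlier frame first
        (t0, f0), (t1, f1) = (t1, f1), (t0, f0)
    # Weight: how far t_sync sits between the two frame timestamps.
    w = (t_sync - t0) / (t1 - t0)
    # Per-pixel linear blend of the two rendered frames.
    return [a + w * (b - a) for a, b in zip(f0, f1)]
```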

Electronic device for improving graphic performance of application program and operating method thereof

An electronic device and method are disclosed herein. The electronic device includes a display and a processor. The processor implements the method, which includes executing an application and, based on detecting a frame drop, identifying an insertion position for an interpolation image among a plurality of images generated by execution of the application. An interpolation image is generated based on the identified insertion position, and the interpolation image is inserted into the plurality of images at that position.
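A simple way to detect a frame drop and locate the insertion position is to look for inter-frame timestamp gaps larger than the expected interval. This detection heuristic and its tolerance factor are assumptions for illustration.

```python
def find_insertion_positions(timestamps, expected_interval, tolerance=1.5):
    """Hypothetical sketch: a frame drop shows up as a gap between
    consecutive frame timestamps noticeably larger than the expected
    interval; an interpolation image would be inserted in each such gap."""
    positions = []
    for i in range(1, len(timestamps)):
        if timestamps[i] - timestamps[i - 1] > tolerance * expected_interval:
            positions.append(i)  # insert interpolated image before index i
    return positions
```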

VIDEO STREAM PROCESSING METHOD, DEVICE, TERMINAL DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
20220201205 · 2022-06-23 ·

A video stream processing method, a device, a terminal device, and a computer-readable storage medium are provided. The method includes: acquiring a first video stream through a first camera and a second video stream through a second camera in response to receiving a slow-motion shooting instruction, the slow-motion shooting instruction carrying a frame rate of a slow-motion video stream; encoding the first video stream and the second video stream into a third video stream with a third frame rate, the third frame rate being greater than the frame rates of both the first and the second video streams; and acquiring a fourth video stream with a fourth frame rate by performing a frame interpolation algorithm on the third video stream, the fourth frame rate being the same as the frame rate of the slow-motion video stream.
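The final frame-interpolation step can be sketched as follows, assuming an integer upconversion factor and linear blending between consecutive frames (the abstract leaves the interpolation algorithm open). Frames are modeled as pixel lists.

```python
def to_slow_motion(third_stream, third_fps, target_fps):
    """Hypothetical sketch: insert linearly interpolated frames into the
    combined (third) stream until it reaches the slow-motion frame rate."""
    factor = target_fps // third_fps      # assumed integer upconversion
    fourth = []
    for a, b in zip(third_stream, third_stream[1:]):
        fourth.append(a)
        for k in range(1, factor):        # factor - 1 in-between frames
            w = k / factor
            fourth.append([p + w * (q - p) for p, q in zip(a, b)])
    fourth.append(third_stream[-1])
    return fourth
```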

METHOD FOR SYNTHESIZING IMAGE

The present application relates to a method for synthesizing images and an image synthesizing apparatus using the same. The image synthesizing method of the present application, and the apparatus using it, can reduce the time required for learning by reducing the distortion produced by the algorithm and reducing the amount of computation required for image synthesis. In addition, owing to characteristics such as the strong performance and reduced learning time of the deep learning algorithms, the method and apparatus can be utilized in various fields such as national defense, IT, and entertainment, and can be employed in psychological warfare or the induction of command-system confusion.

Frame Rate Extrapolation

In one implementation, a method of frame rate extrapolation is performed by a device including one or more processors, non-transitory memory, a scene camera, and a display. The method includes capturing, using the scene camera, an image of a scene. The method includes displaying, on the display, the image of the scene at a first time. The method includes generating an extrapolated image by transforming, using the one or more processors, the image of the scene based on movement of the device, wherein the extrapolated image includes a first area including a first plurality of pixels having respective first pixel values based on a single depth and a second area including a second plurality of pixels having respective second pixel values based on a plurality of depths. The method includes displaying, on the display, the extrapolated image at a second time after the first time.
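A 1-D toy version of the transform can show why the two areas differ: the displacement of each pixel under device movement depends on its depth, so an area assigned a single depth shifts uniformly while an area with a plurality of depths shifts per pixel. The parallax model and parameters below are illustrative assumptions.

```python
def extrapolate_line(pixels, depths, camera_shift, focal=100.0):
    """Hypothetical 1-D sketch: shift each pixel by a parallax amount
    proportional to the device movement and inversely proportional to
    the pixel's depth (nearer pixels move more)."""
    out = [0] * len(pixels)
    for x, (value, depth) in enumerate(zip(pixels, depths)):
        # Parallax displacement for this pixel.
        dx = round(focal * camera_shift / depth)
        nx = x + dx
        if 0 <= nx < len(out):
            out[nx] = value  # reproject into the extrapolated image
    return out
```

With a constant depth list (the first area), every pixel shifts by the same amount; with varying depths (the second area), each pixel shifts by its own amount.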