H04N5/145

Object tracking using momentum and acceleration vectors in a motion estimation system

There is provided a method and apparatus for motion estimation in a sequence of video images. The method comprises a) subdividing each field or frame of a sequence of video images into a plurality of blocks, b) assigning to each block in each video field or frame a respective set of candidate motion vectors, c) determining for each block in a current video field or frame which of its respective candidate motion vectors produces a best match to a block in a previous video field or frame, d) forming a motion vector field for the current video field or frame using the thus determined best match vectors for each block, and e) forming a further motion vector field by storing a candidate motion vector derived from the best match vector at a block location offset by a distance derived from the candidate motion vector. Finally, steps a) to e) are repeated for a video field or frame following the current video field or frame. The set of candidate motion vectors assigned at step b) to a block in the following video field or frame includes the candidates stored at that block location at step e) during the current video field or frame. The method enables a block- or tile-based motion estimator to improve its accuracy by introducing true motion vector candidates derived from the physical behaviour of real-world objects.
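As an illustrative sketch (not the patented implementation), step e) can be read as propagating each block's best-match vector to the block location it points toward, so it becomes a "momentum" candidate for the following field or frame. The function name, grid layout, and block-unit rounding below are all assumptions:

```python
def propagate_momentum_candidates(best_vectors, grid_shape, block_size):
    """Step e): store a candidate derived from each block's best-match
    vector at the block location offset by that vector, so the next
    field/frame can pick it up in step b). Illustrative sketch only."""
    rows, cols = grid_shape
    future_candidates = [[[] for _ in range(cols)] for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vy, vx = best_vectors[r][c]
            # Offset the storage location by the vector, in block units:
            # where the block's content is predicted to land next.
            tr = r + int(round(vy / block_size))
            tc = c + int(round(vx / block_size))
            if 0 <= tr < rows and 0 <= tc < cols:
                future_candidates[tr][tc].append((vy, vx))
    return future_candidates
```

An acceleration candidate, as the title suggests, could be formed analogously by also extrapolating the change between successive best-match vectors; the sketch above covers only the momentum case.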

Image processing device, image display device, and program
11240407 · 2022-02-01

Provided is an image processing device, image display device, and program that allow an image captured by an imaging device not having a vibration suppression function or an image whose vibration has been suppressed incompletely to be displayed on a display device with the vibration suppressed. The image processing device includes a motion estimator configured to estimate the amount of motion of an object between a first image and a second image later than the first image and a motion compensator configured to perform a conversion process on the second image so that vibration of the object between the first image and the second image is suppressed, on the basis of the amount of motion of the object. The motion estimator has a first estimation mode in which the amount of motion of the object is estimated in a predetermined search area and a second estimation mode in which the amount of motion of the object is estimated in a larger area than the search area in the first estimation mode.
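The two estimation modes differ essentially in the size of the search area. A minimal block-matching sketch, assuming an exhaustive SAD search (the abstract specifies neither), where switching modes only changes `radius`:

```python
import numpy as np

def estimate_motion(first, second, block_tl, block_size, radius):
    """Exhaustive SAD search for a block's displacement between two
    frames. The 'first estimation mode' would use a small radius, the
    'second estimation mode' a larger one. Illustrative sketch only."""
    y0, x0 = block_tl
    ref = first[y0:y0 + block_size, x0:x0 + block_size].astype(np.int64)
    h, w = second.shape
    best, best_sad = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block_size > h or x + block_size > w:
                continue  # candidate block falls outside the frame
            cand = second[y:y + block_size, x:x + block_size].astype(np.int64)
            sad = np.abs(ref - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Escalating from the first mode to the second then amounts to re-running the same search with a larger `radius` when the small-area search fails to find a convincing match.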

Temporal noise reduction method for noisy image and related apparatus
09721330 · 2017-08-01

Still/movement determination may be performed with reference to quantization noise of a first section to which a first pixel belongs. Depending on whether the first pixel is determined to be in a movement area or a still area, different frame difference thresholds and different frame difference calculation manners are applied to the movement area and the still area. Different blending coefficients for the movement area and the still area are then selected according to those thresholds and calculation manners, and a noise reduction blending manner is selected according to the blending coefficients, the frame difference calculation manners, and a pixel value of the first pixel in a current frame.
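A one-pixel sketch of that blending logic, with the still/movement decision tied to the section's quantization-noise estimate. All thresholds, scale factors, and blending weights below are illustrative defaults, not values from the patent:

```python
def tnr_blend_pixel(cur, prev, frame_diff, quant_noise,
                    k_move=2.0, alpha_move=0.75, alpha_still=0.25):
    """Temporal noise reduction for one pixel: pick a blending
    coefficient from the still/movement decision, then blend the
    current pixel with its temporally filtered predecessor."""
    if frame_diff > k_move * quant_noise:
        # movement area: weight the current frame heavily to avoid ghosting
        alpha = alpha_move
    elif frame_diff < quant_noise:
        # still area: average strongly over time for maximum denoising
        alpha = alpha_still
    else:
        # transition band: interpolate the coefficient between the two
        t = (frame_diff - quant_noise) / ((k_move - 1.0) * quant_noise)
        alpha = alpha_still + t * (alpha_move - alpha_still)
    return alpha * cur + (1.0 - alpha) * prev
```

A full implementation would apply this per pixel with per-section noise estimates and the two frame-difference calculation manners the abstract distinguishes; the sketch collapses those into the single `frame_diff` input.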

Systems and methods for categorizing motion events

The various embodiments described herein include methods, devices, and systems for categorizing motion events. In one aspect, a method is performed at a camera device. The method includes: (1) capturing a plurality of video frames via the image sensor, the plurality of video frames corresponding to a scene in a field of view of the camera; (2) sending the video frames to the remote server system in real-time; (3) while sending the video frames to the remote server system in real-time: (a) determining that motion has occurred within the scene; (b) in response to determining that motion has occurred within the scene, characterizing the motion as a motion event; and (c) generating motion event metadata for the motion event; and (4) sending the generated motion event metadata to the remote server system concurrently with the video frames.
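Steps (3a) through (4) can be sketched as a simple event accumulator running alongside the frame stream. The metadata fields, the `motion_score` callback, and the threshold are all assumptions for illustration:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MotionEventMetadata:
    """Illustrative metadata record; field names are assumptions."""
    start_time: float
    end_time: Optional[float] = None
    frame_indices: list = field(default_factory=list)

def process_frames(frames, motion_score, threshold=10.0):
    """Steps (3a)-(3c): while frames stream out, detect motion, open a
    motion event, and accumulate its metadata; step (4) would send each
    completed record concurrently with the frames."""
    event, events = None, []
    for i, frame in enumerate(frames):
        # (2) the frame would be sent to the remote server here, in real time
        if motion_score(frame) > threshold:        # (3a) motion detected
            if event is None:
                event = MotionEventMetadata(start_time=time.time())  # (3b)
            event.frame_indices.append(i)          # (3c) metadata grows
        elif event is not None:
            event.end_time = time.time()
            events.append(event)                   # (4) ready to send
            event = None
    if event is not None:
        event.end_time = time.time()
        events.append(event)
    return events
```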

Integrated circuits with optical flow computation circuitry
09819841 · 2017-11-14

An integrated circuit with optical flow computation circuitry is provided. The optical flow computation circuitry may include a first image shift register for receiving pixel values from a current video frame, a second image shift register for receiving pixel values from a previous video frame, column shift registers for storing column sums of various gradient-based values, square sum registers for storing square sums generated at least partly based on the column sum values, and an associated computation circuit that constructs a gradient matrix based on values stored in the square sum registers and that computes a 2-dimensional optical flow vector based on an inverse of the gradient matrix and differences between the current and previous frames. Optical flow computing circuitry configured in this way may be capable of supporting dense optical flow calculation for at least one pixel per clock cycle while supporting large window sizes.
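The computation described matches the classic Lucas-Kanade formulation: window sums of gradient products form a 2x2 gradient matrix, which is solved against the temporal frame differences. A software sketch of that math (the circuit pipelines it with shift registers; the window size and function names here are illustrative):

```python
import numpy as np

def lucas_kanade_flow(prev, cur, center, win=7):
    """Build the 2x2 gradient matrix from window sums of Ix*Ix, Ix*Iy,
    Iy*Iy and solve it against the temporal differences to get the
    2-D flow vector at `center`. Illustrative Lucas-Kanade sketch."""
    prev = prev.astype(np.float64)
    cur = cur.astype(np.float64)
    iy, ix = np.gradient(prev)          # spatial gradients (rows, cols)
    it = cur - prev                     # temporal difference
    r, c = center
    h = win // 2
    sl = (slice(r - h, r + h + 1), slice(c - h, c + h + 1))
    Ix, Iy, It = ix[sl].ravel(), iy[sl].ravel(), it[sl].ravel()
    G = np.array([[Ix @ Ix, Ix @ Iy],   # gradient matrix (square sums)
                  [Ix @ Iy, Iy @ Iy]])
    b = -np.array([Ix @ It, Iy @ It])
    return np.linalg.solve(G, b)        # (vx, vy)
```

The hardware's column shift registers and square sum registers correspond to incrementally maintaining the window sums inside `G` and `b` as the window slides one pixel per clock cycle.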

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD THEREOF
20170324974 · 2017-11-09

An image processing apparatus and an image processing method thereof are provided. A shared storage unit of a motion estimation and motion compensation apparatus captures frame data of a storage unit through a bus. A motion vector estimation unit and a motion compensation unit capture image data for executing a motion vector estimation operation and a motion compensation operation from the shared storage unit.

HIGH ACCURACY DISPLACEMENT DETECTION SYSTEM WITH OFFSET PIXEL ARRAY
20170324919 · 2017-11-09

A pixel array system for high accuracy detection of displacement, speed and acceleration includes: an array comprising m columns and n rows, wherein at least every other row is offset with respect to a preceding row of the array. In a first embodiment, every other row is offset with respect to a preceding row of the array by 25% of a pixel width. In a second embodiment, every two rows are offset with respect to a preceding two rows of the array by 25% of a pixel width.
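The geometry of the first embodiment can be sketched by computing pixel-center coordinates, with every other row shifted by a quarter pixel. The function name and unit pitch are assumptions:

```python
def pixel_centers(m_cols, n_rows, pitch=1.0, offset_frac=0.25):
    """(x, y) centers for an array where every other row is offset by
    offset_frac of a pixel width (first embodiment). The second
    embodiment would instead offset rows in pairs, e.g. (r // 2) % 2."""
    return [[(c * pitch + (offset_frac * pitch if r % 2 else 0.0), r * pitch)
             for c in range(m_cols)]
            for r in range(n_rows)]
```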

IMAGE SENSOR PIXELS HAVING SEPARATED CHARGE STORAGE REGIONS
20170324915 · 2017-11-09

An image sensor may include a pixel having nested photosensitive regions. A pixel with nested photosensitive regions may include an inner photosensitive region that has a rectangular light collecting area. The inner photosensitive region may be formed in a substrate and may be surrounded by an outer photosensitive region. The pixel with nested photosensitive regions may include trunk circuitry and transistor circuitry. Trunk circuitry may include a voltage supply source, a charge storage node, and readout transistors. Trunk circuitry may be located in close proximity to both the inner and outer photosensitive regions. Transistor circuitry may couple the inner photosensitive region, the outer photosensitive region, and trunk circuitry to one another. Microlenses may be formed over the nested photosensitive regions. Hybrid color filters having a single color filter region over the inner photosensitive region and a portion of the outer photosensitive region may also be used.

Multi-window image processing and motion compensation

An image processing system receives a sequence of frames including a current input frame and a next input frame (the next input frame is captured subsequent in time with respect to capturing of the current input frame). The image processing system stores a previously outputted output frame. The previously outputted output frame is derived from previously processed input frames in the sequence. The image processing system modifies the current input frame based on detected first motion and second motion. The first motion is detected based on an analysis of the current input frame with respect to the next input frame. The second motion is detected based on an analysis of the current input frame with respect to the previously outputted output frame. According to one configuration, the image processing system implements multi-sized analyzer windows to more precisely detect the first motion and second motion.
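The dual analysis can be sketched with a single analyzer-window size for brevity (the configuration described uses multi-sized windows). The tile-mean motion measure, threshold, and blend weight are all illustrative assumptions:

```python
import numpy as np

def tile_diff(a, b, win):
    """Mean absolute frame difference over non-overlapping win x win tiles."""
    d = np.abs(a.astype(np.float64) - b.astype(np.float64))
    h, w = d.shape
    d = d[:h - h % win, :w - w % win]
    return d.reshape(h // win, win, w // win, win).mean(axis=(1, 3))

def modify_frame(cur, nxt, prev_out, win=8, thresh=4.0, alpha=0.5):
    """'First motion' is measured against the next input frame, 'second
    motion' against the previously outputted frame; where both are low
    the region is treated as static and temporally blended with the
    previous output. Illustrative sketch only."""
    first = tile_diff(cur, nxt, win)        # motion vs. next input frame
    second = tile_diff(cur, prev_out, win)  # motion vs. previous output
    static = (first < thresh) & (second < thresh)
    mask = np.kron(static, np.ones((win, win))).astype(bool)
    out = cur.astype(np.float64)
    out[mask] = (alpha * out[mask]
                 + (1.0 - alpha) * prev_out.astype(np.float64)[mask])
    return out
```

A multi-window variant would run `tile_diff` at several `win` sizes and combine the maps, letting small windows localize motion boundaries while large windows suppress noise-induced false motion.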

DUAL-ENDED METADATA FOR JUDDER VISIBILITY CONTROL

Methods and systems for controlling judder are disclosed. Judder can be introduced locally within a picture, to restore a judder feeling which is normally expected in films. Judder metadata can be generated based on the input frames. The judder metadata includes base frame rate, judder control rate and display parameters, and can be used to control judder for different applications.
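The three metadata components named above can be sketched as a simple record; the field types and example values are assumptions, not from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class JudderMetadata:
    """Judder metadata carried from content creation to display."""
    base_frame_rate: float       # e.g. 24.0 fps for film content
    judder_control_rate: float   # e.g. 0.0 = fully smooth, 1.0 = full judder
    display_params: dict         # display-side parameters, e.g. peak luminance
```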