H04N23/951

Video stream processing method, device, terminal device, and computer-readable storage medium

A video stream processing method, a device, a terminal device, and a computer-readable storage medium are provided. The method includes: acquiring a first video stream with a first frame rate through a first camera and a second video stream with a second frame rate through a second camera in response to receiving a slow-motion shooting instruction, the slow-motion shooting instruction carrying a frame rate of a slow-motion video stream; encoding the first video stream and the second video stream into a third video stream with a third frame rate, the third frame rate being greater than the first frame rate and greater than the second frame rate; and acquiring a fourth video stream with a fourth frame rate by performing a frame interpolation algorithm on the third video stream, the fourth frame rate being the same as the frame rate of the slow-motion video stream.
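The two-camera slow-motion pipeline the abstract describes can be sketched as follows. All function names are illustrative, and a simple linear blend stands in for the frame interpolation algorithm (real devices use motion-compensated interpolation):

```python
import numpy as np

def interleave_streams(stream_a, stream_b):
    """Merge two synchronized camera streams, alternating frames in time.

    Assumes both cameras run at the same rate with a half-period offset,
    so the merged (third) stream's frame rate is the sum of the two.
    """
    merged = []
    for fa, fb in zip(stream_a, stream_b):
        merged.append(fa)
        merged.append(fb)
    return merged

def interpolate_to_rate(frames, src_fps, dst_fps):
    """Naive linear frame interpolation from src_fps up to dst_fps.

    Stands in for the abstract's frame interpolation step that raises
    the third frame rate to the requested slow-motion (fourth) rate.
    """
    factor = dst_fps // src_fps
    out = []
    for f0, f1 in zip(frames, frames[1:]):
        for k in range(factor):
            t = k / factor
            out.append((1 - t) * f0 + t * f1)  # blend between neighbors
    out.append(frames[-1])
    return out
```

For example, two 480 fps streams interleave into a 960 fps third stream, which interpolation doubles to a 1920 fps fourth stream.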

IMAGING ELEMENT, IMAGING APPARATUS, IMAGING METHOD, AND PROGRAM

An imaging element incorporates a reading portion that reads out captured image data at a first frame rate, a storage portion that stores the image data, a processing portion that processes the image data, and an output portion that outputs the processed image data at a second frame rate lower than the first frame rate. The reading portion reads out the image data of each of a plurality of frames in parallel. The storage portion stores, in parallel, each image data read out in parallel by the reading portion. The processing portion performs generation processing of generating output image data of one frame using the image data of each of the plurality of frames stored in the storage portion.
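The read-fast, output-slow behavior described above can be sketched minimally: frames captured at the first (high) frame rate are grouped and combined into single output frames at the second (lower) rate. The averaging used here as the "generation processing" is an assumption; the patent leaves the processing unspecified:

```python
import numpy as np

def downconvert(frames, read_fps, out_fps):
    """Combine groups of high-rate captures into single output frames.

    Each output frame is generated from read_fps // out_fps consecutive
    captures (here by averaging, which also reduces noise), mimicking
    generation processing over frames stored in parallel.
    """
    group = read_fps // out_fps
    out = []
    for i in range(0, len(frames) - group + 1, group):
        out.append(np.mean(frames[i:i + group], axis=0))
    return out
```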

EYE TRACKING USING EFFICIENT IMAGE CAPTURE AND VERGENCE AND INTER-PUPILLARY DISTANCE HISTORY
20230239586 · 2023-07-27

An eye characteristic (e.g., gaze direction or pupil position) of a user's eyes is tracked by staggering image capture and using a predicted relationship between the user's eyes to predict each eye's characteristic between its captures. Images of the user's eyes are captured in a staggered manner: images of the second eye are captured between the capture times of images of the first eye, and vice versa. An eye characteristic of the first eye at its capture times is determined based on the images of the first eye at those times. In addition, the eye characteristic of the first eye is predicted at additional times between captures based on a predicted relationship between the eyes.
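A minimal sketch of the staggered scheme, assuming gaze is reduced to a single yaw angle per eye and that the left/right yaw difference (driven by vergence and inter-pupillary distance) varies slowly, so a recent offset estimate can fill in between captures. All names are illustrative:

```python
def staggered_track(left_samples, right_samples, yaw_offset):
    """Interleave measured and predicted gaze yaw for the left eye.

    left_samples:  left-eye yaw measured at even ticks
    right_samples: right-eye yaw measured at odd ticks (staggered capture)
    yaw_offset:    recent estimate of (left - right) yaw, from
                   vergence / inter-pupillary distance history
    Returns left-eye yaw at every tick: measured on even ticks,
    predicted from the other eye's measurement on odd ticks.
    """
    out = []
    for l, r in zip(left_samples, right_samples):
        out.append(l)               # measured directly
        out.append(r + yaw_offset)  # predicted from the other eye
    return out
```

Because each camera only captures at half the output rate, this halves per-eye image capture while still reporting both eyes at every tick.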

All-in-focus implementation

Various systems and methods for an all-in-focus implementation are described herein. A system for operating a camera comprises: an image sensor in the camera to capture a sequence of images in different focal planes, at least a portion of the sequence occurring before receiving a signal to store an all-in-focus image; a user interface module to receive, from a user, the signal to store the all-in-focus image; and an image processor to fuse at least two images having different focal planes into the all-in-focus image.
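The fusion step can be sketched as classic focal stacking: for each pixel, take the value from the frame in the stack that is locally sharpest there. Using gradient magnitude as the sharpness measure is an assumption, not something the abstract specifies:

```python
import numpy as np

def fuse_all_in_focus(stack):
    """Fuse a focal stack: per pixel, keep the frame whose local
    gradient magnitude (a simple sharpness proxy) is largest."""
    stack = np.asarray(stack, dtype=float)
    sharp = np.array([np.abs(np.gradient(img, axis=0)) +
                      np.abs(np.gradient(img, axis=1))
                      for img in stack])
    best = np.argmax(sharp, axis=0)        # index of sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]         # gather the winning pixels
```

A production implementation would smooth the per-pixel selection map and blend across frame boundaries to avoid seams.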

Camera module and depth map extraction method thereof

A camera module according to one embodiment of the present invention comprises: an illumination unit for outputting an incident light signal irradiated to an object; a lens unit for collecting a reflected light signal reflected from the object; an image sensor unit for generating an electric signal from the reflected light signal collected by the lens unit; a tilting unit for shifting an optical path of the reflected light signal; and an image control unit for extracting a depth map of the object by using a phase difference between the incident light signal and the reflected light signal received by the image sensor unit, for a frame shifted by the tilting unit. The lens unit is disposed on the image sensor unit and includes an infrared (IR) filter disposed on the image sensor unit and at least one lens disposed on the IR filter, and the tilting unit controls tilt of the IR filter.
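The phase-difference depth extraction is the standard indirect time-of-flight relation: with a modulation frequency f, a measured phase shift Δφ between the emitted and reflected signals maps to depth d = c·Δφ / (4π·f). A sketch (names illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_rad, mod_freq_hz):
    """Indirect time-of-flight depth from the phase difference between
    the emitted (incident) and reflected modulated light signals.

    depth = c * phase / (4 * pi * f_mod); the result is unambiguous
    only up to c / (2 * f_mod), the wrap-around range.
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)
```

At a 20 MHz modulation frequency, for instance, the unambiguous range is about 7.5 m.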

Imaging control device, imaging control method, and imaging device for increasing resolution of an image

An imaging element of an imaging unit 24 divides the exit pupil of an imaging optical system 21 into a plurality of regions and generates a pixel signal for each region. An optical axis position adjustment unit 23 adjusts the optical axis position of the imaging optical system with respect to the imaging element. A control unit 26 calculates a parallax on the basis of the pixel signal for each region after the pupil division and performs focus control of the imaging optical system 21. The control unit 26 also moves the optical axis position using the optical axis position adjustment unit 23, and generates, using the imaging element, pixel signals indicating the same subject region in the plurality of regions after the pupil division. An image processing unit 25 performs binning of the plurality of pixel signals indicating the same subject region, generated by moving the optical axis position, to produce a high-resolution captured image. This enables both calculation of the parallax and acquisition of a high-resolution captured image.
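The resolution gain from shifting the optical axis can be sketched as classic pixel-shift combination: four captures taken at half-pixel shifts are interleaved onto a grid with twice the sampling density in each direction. This is a sketch under that four-shift assumption; the abstract does not fix the number or size of shifts:

```python
import numpy as np

def interleave_half_pixel(c00, c01, c10, c11):
    """Interleave four captures taken at half-pixel optical-axis shifts
    into one image with twice the resolution in each direction.

    c00: unshifted; c01: shifted half a pixel right;
    c10: half a pixel down; c11: half a pixel down and right.
    """
    h, w = c00.shape
    out = np.empty((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = c00
    out[0::2, 1::2] = c01
    out[1::2, 0::2] = c10
    out[1::2, 1::2] = c11
    return out
```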

Thin multi-aperture imaging system with auto-focus and methods for using same

Dual-aperture digital cameras with auto-focus (AF) and related methods for obtaining a focused and, optionally, optically stabilized color image of an object or scene. A dual-aperture camera includes: a first sub-camera having a first optics bloc and a color image sensor for providing a color image; a second sub-camera having a second optics bloc and a clear image sensor for providing a luminance image, the first and second sub-cameras having substantially the same field of view; an AF mechanism coupled mechanically at least to the first optics bloc; and a camera controller coupled to the AF mechanism and to the two image sensors. The controller is configured to control the AF mechanism, to calculate a scaling difference and a sharpness difference between the color and luminance images, both differences being due to the AF mechanism, and to process the color and luminance images into a fused color image using the calculated differences.
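A minimal sketch of the fusion step, assuming the controller has already estimated the AF-induced scaling difference and a blend weight derived from the sharpness difference. The nearest-neighbor rescale and the fixed weight are simplifying assumptions; a real pipeline would also correct parallax between the two apertures:

```python
import numpy as np

def fuse_color_luminance(color_y, clear_y, scale, w=0.5):
    """Fuse the color sensor's luminance plane with the clear sensor's
    luminance after compensating the AF-induced scaling difference.

    scale: relative magnification between the two images (>= 1 assumed),
           derived from the AF mechanism's state.
    w:     blend weight toward the (typically sharper) clear image,
           derived from the measured sharpness difference.
    """
    h, wid = color_y.shape
    # Nearest-neighbor index mapping compensates the scaling difference.
    ys = np.clip((np.arange(h) * scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(wid) * scale).astype(int), 0, wid - 1)
    aligned = clear_y[np.ix_(ys, xs)]
    return (1 - w) * color_y + w * aligned
```

Chrominance from the color sensor would then be recombined with the fused luminance to form the final color image.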