Patent classifications
H04N23/958
IMAGE FUSION METHOD AND DEVICE
CROSS REFERENCE TO RELATED APPLICATIONS
The present application is a national phase application under 35 U.S.C. § 371 of International Application No. PCT/CN2021/104976, filed Jul. 7, 2021, which claims the benefit of priority to Chinese patent application No. 202010649911.2, filed with the State Intellectual Property Office of the People's Republic of China on Jul. 8, 2020 and entitled "Image Fusion Method and Device", each of which is incorporated herein by reference in its entirety.
An image fusion method and device are disclosed. The method includes: obtaining a first short-focus image and a first long-focus image acquired by a short-focus sensor and a long-focus sensor at the same time; calculating a reduction coefficient for the first long-focus image such that the sizes of the same target in the first long-focus image and the first short-focus image match; performing reduction processing on the first long-focus image according to the reduction coefficient to obtain a second long-focus image; calculating, according to the relative angle of the current long-focus lens and short-focus lens, a position of the second long-focus image in the first short-focus image such that the positions of the same target in the second long-focus image and the first short-focus image match; and covering the first short-focus image with the second long-focus image to obtain a fused image.
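The claimed steps (shrink the long-focus image, then paste it over the matching region of the short-focus image) can be sketched roughly as below. This is a minimal illustration, not the patent's implementation: the function name, the nearest-neighbor resampling, and the precomputed `offset` (which the patent derives from the relative lens angle) are all assumptions.

```python
import numpy as np

def fuse_images(short_img, long_img, reduction_coeff, offset):
    """Shrink long_img by reduction_coeff, then cover the region of
    short_img starting at offset (row, col) with the result."""
    h, w = long_img.shape[:2]
    new_h, new_w = int(h * reduction_coeff), int(w * reduction_coeff)
    # Nearest-neighbor reduction (stands in for the patent's unspecified method).
    rows = (np.arange(new_h) / reduction_coeff).astype(int)
    cols = (np.arange(new_w) / reduction_coeff).astype(int)
    second_long = long_img[rows][:, cols]
    fused = short_img.copy()
    r, c = offset
    # The reduced long-focus image covers the matched area of the short-focus image.
    fused[r:r + new_h, c:c + new_w] = second_long
    return fused

short = np.zeros((100, 100), dtype=np.uint8)
long_ = np.full((80, 80), 255, dtype=np.uint8)
fused = fuse_images(short, long_, 0.5, (30, 30))
```

In practice the covered region carries the long-focus sensor's finer detail, while the surrounding pixels retain the short-focus sensor's wider field of view.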
AUXILIARY FOCUSING METHOD, APPARATUS, AND SYSTEM
An auxiliary focusing method, device and system. When a user uses an image capturer to photograph a target scene, an auxiliary focus image can be generated based on depth information of objects in the target scene. The auxiliary focus image visually displays the depth distribution of the objects in the target scene and the corresponding position of a focus point of the image capturer in the target scene, so that the user can intuitively see the current position of the focus point in the auxiliary focus image and adjust it according to the depth distribution of the objects, allowing an object of interest to the user to be imaged clearly.
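One simple realization of such an auxiliary focus image is a depth histogram with a marker on the bin containing the current focus depth. The bar-chart form, bin count, and function name below are assumptions for illustration; the abstract only says the image "visually displays" the depth distribution and the focus position.

```python
import numpy as np

def auxiliary_focus_bar(depth_map, focus_depth, n_bins=32):
    """Summarize a scene's depth map as a histogram and report which
    bin the current focus depth falls into."""
    d_min, d_max = float(depth_map.min()), float(depth_map.max())
    counts, edges = np.histogram(depth_map, bins=n_bins, range=(d_min, d_max))
    # Index of the histogram bin containing the focus depth.
    focus_bin = int(np.clip(np.searchsorted(edges, focus_depth, side="right") - 1,
                            0, n_bins - 1))
    return counts, focus_bin
```

A UI would render `counts` as bars and highlight bar `focus_bin`, letting the user drag the focus marker toward the depth peak of the object of interest.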
VEHICLE-MOUNTED SENSING SYSTEM AND GATED CAMERA
A gated camera divides a field of view into a plurality of ranges in a depth direction and generates a plurality of slice images corresponding to the plurality of ranges. The gated camera includes: an illumination device configured to irradiate the field of view with pulse illumination light; an image sensor; and a camera controller configured to control a light emission timing of the illumination device and an exposure timing of the image sensor. The camera controller is configured to switch between a first imaging mode in which performance is relatively high and power consumption is relatively high, and a second imaging mode in which performance is relatively low and power consumption is relatively low.
AUTOMOTIVE SENSING SYSTEM AND GATING CAMERA
A sensing system is used for driving assistance or automatic driving. A gating camera is controlled to be in an enabled state/disabled state according to a traveling environment. The gating camera divides a field of view into a plurality of ranges in a depth direction and generates a plurality of slice images corresponding to the plurality of ranges in the enabled state. A main controller processes an output of a main sensor group and an output of the gating camera.
IMAGE CAPTURE USING DYNAMIC LENS POSITIONS
Disclosed are systems, apparatuses, processes, and computer-readable media to capture images with subjects at different depths of field. A method of processing image data includes determining, based on a depth map of a previously captured image, a first distance to a first object and a second distance to a second object; identifying a focal point of a camera lens at least in part using the first distance and the second distance; capturing an image using the focal point as a basis for the capture, the image including a first region corresponding to the first object and a second region corresponding to the second object; and generating a second image from the image at least in part by enhancing at least one of the first region or the second region using a point spread function (PSF).
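Picking one focal point from two object distances could be sketched as below. The harmonic-mean rule (which balances defocus roughly equally in diopter space) is an assumption for illustration; the patent does not specify how the focal point is derived from the two distances.

```python
def choose_focal_distance(d1, d2):
    """Pick a single focus distance (same units as d1, d2) intended to
    cover two subjects at distances d1 and d2. Uses the harmonic mean,
    since defocus blur is approximately linear in 1/distance."""
    return 2.0 * d1 * d2 / (d1 + d2)
```

For subjects at 1 m and 3 m this yields 1.5 m, closer to the near subject than the arithmetic midpoint, reflecting that near subjects defocus faster.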
Camera switchover control techniques for multiple-camera systems
Various embodiments disclosed herein include techniques for operating a multiple camera system. In some embodiments, a primary camera may be selected from a plurality of cameras using object distance estimates, distance error information, and minimum object distances for some or all of the plurality of cameras. In other embodiments, a camera may be configured to use defocus information to obtain an object distance estimate to a target object closer than a minimum object distance of the camera. This object distance estimate may be used to assist in focusing another camera of the multi-camera system.
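A selection rule of the kind the abstract describes (combining an object distance estimate, its error, and per-camera minimum object distances) might look like the sketch below. The preference ordering, margin rule, and names are assumptions, not the disclosed embodiments.

```python
def select_primary_camera(object_distance, distance_error, cameras):
    """Choose a primary camera. `cameras` is a list of
    (name, min_object_distance) tuples in order of preference
    (e.g. longest focal length first). A camera is eligible only if
    the object is beyond its minimum object distance even under the
    worst-case distance error."""
    for name, min_object_distance in cameras:
        if object_distance - distance_error >= min_object_distance:
            return name
    # No camera can focus this close with certainty; fall back to the
    # last (typically widest) camera.
    return cameras[-1][0]
```

The same distance estimate could then seed the fallback camera's autofocus, as the abstract suggests for targets closer than a camera's minimum object distance.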
ELECTRONIC DEVICE AND CAMERA MODULE THEREOF
An electronic device and a camera apparatus are provided. The camera apparatus includes a lens, a driving member, a photosensitive chip, and a first refractor. The first refractor can deflect a propagation direction of light and is connected with the driving member. When the first refractor is located at a first position, it is outside the path of light incident from the lens and received by the photosensitive chip, and the light incident from the lens forms a first image through a first pixel sub-area of the photosensitive chip. When the first refractor is located at a second position, it is on that path, and the light incident from the lens passes through the first refractor and forms a second image through a second pixel sub-area of the photosensitive chip.