Lens unit, imaging device, control methods thereof, and storage medium
A lens unit comprises a shake detector; a shake correction mechanism for correcting image blur; a setting unit for setting a ratio of shake to be corrected by the shake correction mechanism; a control unit for, based on the shake detected by the shake detector and the ratio of shake, calculating a first shake correction amount and controlling an image shake correction operation by the shake correction mechanism; and a target-value correction unit for correcting the first shake correction amount based on a difference between a result of detecting shake by the shake detector and a result of detecting shake by a shake detector provided in the imaging device, wherein the control unit controls the shake correction mechanism based on an image stabilization amount corrected by the target-value correction unit.
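The control flow in this abstract can be illustrated numerically. A minimal sketch, assuming scalar shake signals and an assumed correction gain; the function and parameter names are illustrative, not from the patent:

```python
def shake_correction_amount(lens_shake, body_shake, ratio, gain=1.0):
    """Sketch of the described lens-unit control flow (illustrative names).

    lens_shake:  shake detected by the lens unit's own shake detector
    body_shake:  shake detected by the imaging device's shake detector
    ratio:       share of the total shake the lens-side mechanism corrects
    gain:        assumed weight for the target-value correction term
    """
    # First shake correction amount from the lens-side detector.
    first_amount = ratio * lens_shake
    # Target-value correction: offset by the disagreement between the
    # lens-side and body-side shake detectors.
    detector_difference = lens_shake - body_shake
    return first_amount - gain * ratio * detector_difference
```

When the two detectors agree, the result reduces to the plain ratio split; a disagreement pulls the lens-side target toward the body-side measurement.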
SYSTEM AND METHOD FOR MULTI-EXPOSURE, MULTI-FRAME BLENDING OF RED-GREEN-BLUE-WHITE (RGBW) IMAGES
A method includes obtaining multiple images of a scene using at least one red-green-blue-white (RGBW) image sensor. The method also includes generating multi-channel frames at different exposure levels from the images. The method further includes estimating motion across exposure differences between the different exposure levels using a white channel of the multi-channel frames as a guidance signal to generate multiple motion maps. The method also includes estimating saturation across the exposure differences between the different exposure levels to generate multiple saturation maps. The method further includes using the generated motion maps and saturation maps to recover saturations from the different exposure levels and generate a saturation-free RGBW frame. In addition, the method includes processing the saturation-free RGBW frame to generate a final image of the scene.
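The motion- and saturation-map steps can be sketched for a single pixel array. A minimal illustration assuming the white channel is normalized to [0, 1] and a simple brightness-residual test; the threshold values and function names are assumptions, not the patent's method:

```python
import numpy as np

def motion_map(white_short, white_long, exposure_ratio, tau=0.1):
    """Use the W channel as a guidance signal to flag motion between
    two exposure levels (illustrative sketch)."""
    # Bring the short exposure to the long exposure's brightness scale.
    aligned = np.clip(white_short * exposure_ratio, 0.0, 1.0)
    # A large residual difference outside saturated areas is taken as
    # evidence of motion across the exposure difference.
    diff = np.abs(aligned - white_long)
    saturated = white_long >= 0.99
    return (diff > tau) & ~saturated

def saturation_map(white_long):
    """Flag pixels saturated at the longer exposure, so detail can be
    recovered from the shorter one."""
    return white_long >= 0.99
```

Pixels flagged in the saturation map would then be filled from the shorter exposure, skipping pixels the motion map marks as unreliable.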
Method and apparatus for implementing a digital graduated filter for an imaging apparatus
A digital graduated filter is implemented in an imaging device by combining multiple images of the subject, where the combining may include combining a different number of images for the highlights of the subject than for the shadows of the subject. The imaging device may present the user with a set of pre-defined graduated filter configurations to choose from, and the user may also specify the direction and strength of graduation in a viewfinder. In an alternative implementation, the combining may scale the pixels being added instead of varying the number of images being combined.
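The frame-count idea can be sketched as follows: a minimal illustration assuming a per-pixel map of how many aligned frames to accumulate, graduating from highlights (few frames, less accumulated exposure) to shadows (many frames). The names and the rounding scheme are assumptions:

```python
import numpy as np

def graduated_blend(frames, count_map):
    """Accumulate a per-pixel number of aligned frames (illustrative sketch).

    frames:    stack of aligned exposures, shape (n, h, w)
    count_map: per-pixel number of frames to sum, graduating from
               highlights (few frames) to shadows (many frames)
    """
    frames = np.asarray(frames, dtype=float)
    n, h, w = frames.shape
    counts = np.clip(np.rint(count_map), 1, n).astype(int)
    # Cumulative sums let us read off "sum of the first k frames".
    csum = np.cumsum(frames, axis=0)
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return csum[counts - 1, rows, cols]
```

A count map that ramps linearly across the image in a chosen direction reproduces the graduated-filter effect, with the ramp's orientation and slope playing the roles of direction and strength of graduation.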
PHOTOGRAPHING METHOD AND APPARATUS
This application discloses a photographing method and apparatus to overcome blurring that occurs during photographing. When a zoom ratio is greater than a first zoom ratio threshold, a long-focus camera whose zoom ratio is greater than or equal to that threshold is started to capture an image. Because a high zoom ratio amplifies shake, rotational blur occurs in the image. According to the photographing method disclosed in this application, a first neural network model for rotational image deblurring is used to perform rotational image deblurring. In this way, an image, video, or preview image of high quality is presented to the user, with an imaging effect comparable to some extent to that of photographing with a tripod.
Arbitrary motion smear modeling and removal
A method of de-smearing an image includes capturing image data from an imaging sensor and collecting motion data indicative of motion of the sensor while capturing the image data. The motion data is collected at a higher frequency than the exposure frequency at which the image data is captured. The method includes modeling motion of the sensor based on the motion data, wherein motion is modeled at this higher frequency. The method also includes modeling optical blur for the image data, modeling noise for the image data, and forming a de-smeared image as a function of the modeled motion, the modeled blur, the modeled noise, and the image data captured from the imaging sensor.
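The high-rate motion model can be illustrated as a smear point-spread function built from the displacement samples recorded during one exposure. This is a simplified sketch (integer displacements, optical blur and noise models omitted), and the names are assumptions:

```python
import numpy as np

def motion_psf(samples, size=9):
    """Build a smear PSF from high-rate motion samples (illustrative).

    samples: (dx, dy) pixel displacements of the sensor, one per
             high-frequency motion reading taken during the exposure
    size:    side length of the square PSF kernel
    """
    psf = np.zeros((size, size))
    c = size // 2
    for dx, dy in samples:
        x, y = c + int(round(dx)), c + int(round(dy))
        if 0 <= x < size and 0 <= y < size:
            # Each motion sample contributes equal exposure time.
            psf[y, x] += 1.0
    return psf / psf.sum()
```

A deconvolution (e.g. Wiener filtering) using this PSF together with the modeled optical blur and noise would then yield the de-smeared image, following the abstract's final step.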
USE MOTION DATA TO GENERATE HIGHER RESOLUTION IMAGES
Techniques for using motion data to generate a high resolution output color image from multiple images having sparse color information are disclosed. A camera generates multiple images. The camera's sensor is configured to have a sparse Bayer pattern. While the camera is generating the images, IMU data for each image is acquired. The IMU data indicates a corresponding pose the camera was in while the camera generated each image. The images and the IMU data are fed as input into a motion model. The motion model performs temporal filtering on the images and uses the IMU data to generate a red-only image, a green-only image, and a blue-only image. A high resolution output color image is generated by combining the red-only image, the green-only image, and the blue-only image.
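The temporal-filtering step can be sketched with an integer-shift simplification of the IMU-derived poses: each frame carries sparse samples of one color, and alignment-then-averaging densifies the channel. The function name and the nearest-pixel alignment are assumptions, not the patent's motion model:

```python
import numpy as np

def fuse_sparse_frames(frames, masks, shifts):
    """Align sparse single-color frames by IMU shift and average them.

    frames: list of (h, w) arrays with sparse color samples
    masks:  list of (h, w) arrays, 1 where a sample is valid
    shifts: list of (dy, dx) integer pixel shifts from the IMU pose
    """
    h, w = frames[0].shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for frame, mask, (dy, dx) in zip(frames, masks, shifts):
        # Undo the camera motion so samples land on a common grid.
        aligned = np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
        amask = np.roll(np.roll(mask, -dy, axis=0), -dx, axis=1)
        acc += aligned * amask
        cnt += amask
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

Running this per color channel yields the red-only, green-only, and blue-only images, which are then stacked into the final high-resolution color output.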
Image enhancement for multi-layered structure in charged-particle beam inspection
An improved method and apparatus for enhancing an inspection image in a charged-particle beam inspection system. An improved method for enhancing an inspection image comprises acquiring a first image and a second image of multiple stacked layers of a sample that are taken with a first focal point and a second focal point, respectively, associating a first segment of the first image with a first layer among the multiple stacked layers and associating a second segment of the second image with a second layer among the multiple stacked layers, updating the first segment based on a first reference image corresponding to the first layer and updating the second segment based on a second reference image corresponding to the second layer, and combining the updated first segment and the updated second segment to generate a combined image including the first layer and the second layer.
METHOD AND SYSTEM FOR REPLACING SCENE TEXT IN A VIDEO SEQUENCE
To replace text in a digital video image sequence, a system will process frames of the sequence to: define a region of interest (ROI) with original text in each of the frames; use the ROIs to select a reference frame from the sequence; select a target frame from the sequence; determine a transform function between the ROI of the reference frame and the ROI of the target frame; replace the original text in the ROI of the reference frame with replacement text to yield a modified reference frame ROI; and use the transform function to transform the modified reference frame ROI to a modified target frame ROI in which the original text is replaced with the replacement text. The system will then insert the modified target frame ROI into the target frame to produce a modified target frame. This process may repeat for other target frames of the sequence.
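The final placement step can be sketched with a pure translation standing in for the general transform function (which in practice would typically be a homography estimated between the two ROIs). The names are illustrative:

```python
import numpy as np

def paste_transformed_roi(target_frame, modified_ref_roi, transform):
    """Place the modified reference-frame ROI into the target frame.

    transform: here a pure integer translation (ty, tx) mapping the
               reference ROI into target-frame coordinates; the
               patent's transform would generally be a full warp.
    """
    ty, tx = transform
    h, w = modified_ref_roi.shape[:2]
    out = target_frame.copy()
    # Overwrite the target-frame ROI with the replacement text region.
    out[ty:ty + h, tx:tx + w] = modified_ref_roi
    return out
```

Repeating this for each target frame, with a per-frame transform, propagates the replacement text through the sequence while following the scene's motion.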
DEEP LEARNING-BASED MEDICAL IMAGE MOTION ARTIFACT CORRECTION
Systems and methods for performing motion artifact correction in medical images. One method includes receiving, with an electronic processor, a medical image associated with a patient, the medical image including at least one motion artifact. The method also includes applying, with the electronic processor, a model developed using machine learning to the medical image for correcting motion artifacts, the model including at least one of a spatial transformer network and an attention mechanism network. The method also includes generating, with the electronic processor, a new version of the medical image, where the new version of the medical image at least partially corrects the at least one motion artifact.
Utilizing an image exposure transformation neural network to generate a long-exposure image from a single short-exposure image
The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.