Patent classifications
G06T5/003
Systems and methods for generating panning images
Images may be captured by a moving image capture device. A reference image and a background image may be selected from the images. The reference image may include depiction of an object, with the object blocking view of the background. The background image may include depiction of the background blocked by the object in the reference image. An object layer may be generated by segmenting the depiction of the object from the reference image. A background layer may be generated by combining the depiction of the background in the background image with the reference image. The background layer may be blurred and combined with the object layer to generate a panning image.
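The layering scheme the abstract describes can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the patented implementation: the object mask is assumed to come from some segmentation step, and a simple horizontal box blur stands in for whatever blur the method actually applies to the background layer.

```python
import numpy as np

def horizontal_box_blur(img, k):
    """Simple horizontal box blur (approximates panning motion blur)."""
    pad = np.pad(img, ((0, 0), (k, k), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dx in range(2 * k + 1):
        out += pad[:, dx : dx + img.shape[1], :]
    return out / (2 * k + 1)

def make_panning_image(reference, background, object_mask, k=5):
    """Composite a sharp object layer over a blurred background layer.

    object_mask is a hypothetical binary (H, W) array marking the object
    in the reference image; in practice it would come from segmentation.
    """
    mask = object_mask[..., None].astype(np.float64)
    # Background layer: the reference image with the object region
    # filled in from the background image.
    background_layer = reference * (1 - mask) + background * mask
    blurred = horizontal_box_blur(background_layer, k)
    # Object layer: recomposite the sharp object on top of the blur.
    return blurred * (1 - mask) + reference * mask
```

The key ordering detail is that the background is blurred *after* the occluded region is filled in, so the blur does not smear the object's pixels into its surroundings.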
Methods and systems for image processing with multiple image sources
Various methods and systems are provided for image processing for multiple cameras. In one embodiment, a method comprises acquiring image frames with a plurality of image frame sources configured with different acquisition settings, processing the image frames based on the different acquisition settings to generate at least one final image frame, and outputting the at least one final image frame. In this way, information from different image frame sources such as cameras may be leveraged to achieve increased frame rates with improved image quality and a desired motion appearance.
MULTISCALE MODELING TO DETERMINE MOLECULAR PROFILES FROM RADIOLOGY
Systems and methods for analyzing pathologies utilizing quantitative imaging are presented herein. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analytics framework that identifies and quantifies biological properties/analytes from imaging data and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach of using imaging to examine underlying biology as an intermediary to assessing pathology provides many analytic and processing advantages over systems and methods that are configured to directly determine and characterize pathology from underlying imaging data.
OBJECT DETECTION APPARATUS USING AN IMAGE PREPROCESSING ARTIFICIAL NEURAL NETWORK MODEL
An apparatus for recognizing an object in an image includes a preprocessing module configured to receive an image including an object and to output a preprocessed image by performing image enhancement processing on the received image to improve a recognition rate of the object included in the received image; and an object recognition module configured to recognize the object included in the image by inputting the preprocessed image to an input layer of an artificial neural network for object recognition.
IMAGE ENHANCEMENT BASED ON FIBER OPTIC SHAPE-SENSING
The present invention relates to an image processing system (10), comprising: a processor unit (20) arranged to receive imaging data associated with an imaging system (40) and optical shape sensing data associated with an optical shape sensing system (50) registered with the imaging system (40) such that the optical shape sensing data can be positioned in the imaging system; wherein the processor unit (20) is configured to define in the imaging data a region of interest based on the imaging data and/or the optical shape sensing data and further configured to use the optical shape sensing data as markers within the region of interest such that the processor unit applies image enhancement of imaging data on the region of interest based on received optical shape sensing data.
IMAGE EDGE DETECTION METHOD AND IMAGE EDGE DETECTION DEVICE
An image edge detection method for processing an image including multiple pixels includes performing a convolution operation on the image by a gradient operator in a first direction and a gradient operator in a second direction to obtain a first-direction gradient data and a second-direction gradient data, wherein the first direction is perpendicular to the second direction, performing a gradient statistical calculation within a neighboring area of a target pixel of the image according to the first-direction gradient data and the second-direction gradient data to obtain a gradient statistic, and determining an edge significance corresponding to the target pixel according to the gradient statistic.
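The steps the abstract names map directly onto a short NumPy sketch: convolve with two perpendicular gradient operators, compute a gradient statistic over each pixel's neighboring area, and use it as the edge significance. Sobel kernels and a neighborhood mean of the gradient magnitude are assumptions here; the patent does not specify which operators or which statistic are used.

```python
import numpy as np

# Assumed gradient operators: Sobel in two perpendicular directions.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive same-size 2-D filtering with edge replication."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i : i + img.shape[0],
                                         j : j + img.shape[1]]
    return out

def edge_significance(img, radius=1):
    """Edge significance from a neighborhood gradient statistic."""
    gx = convolve2d(img, SOBEL_X)   # first-direction gradient data
    gy = convolve2d(img, SOBEL_Y)   # second (perpendicular) direction
    mag = np.hypot(gx, gy)
    # Gradient statistic: mean magnitude over the neighboring area.
    box = np.ones((2 * radius + 1, 2 * radius + 1))
    return convolve2d(mag, box / box.size)
```

Pooling the magnitude over a neighborhood, rather than thresholding per pixel, makes the significance measure less sensitive to single-pixel noise.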
CORRECTING MULTI-ZONE MOTION BLUR
Provided are methods for correcting multi-zone motion blur, which include executing, using at least one processor, an alignment of at least one image capturing device with at least one collimating device in a plurality of collimating devices, causing a rotation of the at least one collimating device, receiving at least one image of at least one target object captured by the image capturing device through the at least one rotating collimating device, processing the at least one image, and determining, based on the at least one processed image, a degradation of the received image of the target object.
Method and electronic device for deblurring blurred image
A method for deblurring a blurred image includes encoding, by at least one processor, a blurred image at a plurality of stages of encoding to obtain an encoded image at each of the plurality of stages; decoding, by the at least one processor, an encoded image obtained from a final stage of the plurality of stages of encoding by using an encoding feedback from each of the plurality of stages and a machine learning (ML) feedback from at least one ML model; and generating, by the at least one processor, a deblurred image in which at least one portion of the blurred image is deblurred based on a result of the decoding.
Virtual and augmented reality systems and methods
A method for displaying virtual content to a user, the method includes determining an accommodation of the user's eyes. The method also includes delivering, through a first waveguide of a stack of waveguides, light rays having a first wavefront curvature based at least in part on the determined accommodation, wherein the first wavefront curvature corresponds to a focal distance of the determined accommodation. The method further includes delivering, through a second waveguide of the stack of waveguides, light rays having a second wavefront curvature, the second wavefront curvature associated with a predetermined margin of the focal distance of the determined accommodation.
Subject-aware low light photography
Devices, methods, and computer-readable media are disclosed, describing an adaptive, subject-aware approach for image bracket selection and fusion, e.g., to generate high quality images in a wide variety of capturing conditions, including low light conditions. An incoming image stream may be obtained from an image capture device, comprising images captured using differing default exposure values, e.g., according to a predetermined pattern. When a capture request is received, it may be detected whether one or more human or animal subjects are present in the incoming image stream. If a subject is detected, the exposure times of one or more images selected from the incoming image stream may be reduced relative to their default exposure times. Prior to the fusion operation, one of the selected images may be designated a reference image for the fusion operation based, at least in part, on a sharpness score and/or a blink score of the image.
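The final selection step, choosing a fusion reference from scored candidates, can be illustrated with a minimal sketch. The Laplacian-variance sharpness proxy and the `blink_scores` parameter (a hypothetical per-image value in [0, 1], where 1 means eyes fully open) are assumptions for illustration; the abstract does not disclose how either score is computed.

```python
import numpy as np

def sharpness_score(img):
    """Variance of a Laplacian response as a simple sharpness proxy."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def pick_reference(images, blink_scores=None):
    """Pick the fusion reference image by sharpness and blink score.

    blink_scores is a hypothetical per-image weight that a subject
    detector would supply; sharp frames with closed eyes score low.
    """
    scores = []
    for i, img in enumerate(images):
        s = sharpness_score(img)
        if blink_scores is not None:
            s *= blink_scores[i]
        scores.append(s)
    return int(np.argmax(scores))
```

Weighting sharpness by the blink score means a slightly softer frame in which the subject's eyes are open can still win over a tack-sharp frame captured mid-blink.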