ASYMMETRIC NORMALIZED CORRELATION LAYER FOR DEEP NEURAL NETWORK FEATURE MATCHING

A method includes obtaining a first image of a scene using a first image sensor of an electronic device and a second image of the scene using a second image sensor of the electronic device. The method also includes generating a first feature map from the first image and a second feature map from the second image. The method further includes generating a third feature map based on the first feature map, the second feature map, and an asymmetric search window. The method additionally includes generating a depth map by restoring spatial resolution to the third feature map.
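The correlation step can be sketched as a normalized cross-correlation volume computed over a one-directional (asymmetric) horizontal search window, as in rectified stereo. This is a minimal NumPy illustration; the function name, window size, and the choice of L2-normalized dot products are assumptions, not details from the abstract.

```python
import numpy as np

def asymmetric_correlation(f1, f2, max_disp=4):
    """Correlate each pixel of f1 against f2 over an asymmetric
    horizontal search window [0, max_disp] (one direction only).
    f1, f2: (C, H, W) feature maps from the two image sensors.
    Returns a (max_disp + 1, H, W) correlation volume (the
    'third feature map')."""
    C, H, W = f1.shape
    # L2-normalize feature vectors so the dot product becomes a
    # normalized correlation (cosine similarity).
    n1 = f1 / (np.linalg.norm(f1, axis=0, keepdims=True) + 1e-8)
    n2 = f2 / (np.linalg.norm(f2, axis=0, keepdims=True) + 1e-8)
    vol = np.zeros((max_disp + 1, H, W), dtype=f1.dtype)
    for d in range(max_disp + 1):
        # Shift the second map by d pixels; out-of-window stays 0.
        vol[d, :, d:] = np.sum(n1[:, :, d:] * n2[:, :, :W - d], axis=0)
    return vol

f1 = np.random.rand(8, 16, 16).astype(np.float32)
vol = asymmetric_correlation(f1, f1, max_disp=4)
# At disparity 0, identical maps correlate at ~1.0 everywhere.
```

A depth estimate then follows from the disparity of maximum correlation per pixel, after spatial resolution is restored.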

GAZE ENHANCED NATURAL MOTION BLUR
20200394766 · 2020-12-17 · ·

Systems, methods, and computer program products are provided for generating motion blur on image frames, comprising: obtaining gaze data related to an eye movement between consecutive image frames; determining movement of at least one object in relation to said gaze data by calculating the difference in position between said at least one object and said gaze data across the image frames; forming a motion blur vector; and applying motion blur to an image frame based on said motion blur vector.
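The core calculation reduces to a vector difference between object motion and gaze motion. The sketch below is an illustrative reading of the abstract, assuming 2D pixel positions per frame; the function name and interface are hypothetical.

```python
import numpy as np

def motion_blur_vector(obj_prev, obj_curr, gaze_prev, gaze_curr):
    """Blur vector as the object's motion relative to the eye:
    an object the gaze tracks smoothly gets little blur, while
    an object moving across a fixated gaze gets full blur.
    All arguments are (x, y) pixel positions in consecutive frames."""
    obj_motion = np.subtract(obj_curr, obj_prev)
    gaze_motion = np.subtract(gaze_curr, gaze_prev)
    return obj_motion - gaze_motion  # retinal-relative motion

# Eye smoothly pursuing the object: zero blur vector.
v = motion_blur_vector((0, 0), (10, 0), (0, 0), (10, 0))
# Eye fixated while the object moves: full blur along the motion.
w = motion_blur_vector((0, 0), (10, 0), (5, 5), (5, 5))
```

The resulting vector would then parameterize a directional blur kernel applied to the object region.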

SYSTEMS AND METHODS FOR TONE MAPPING OF HIGH DYNAMIC RANGE IMAGES FOR HIGH-QUALITY DEEP LEARNING BASED PROCESSING
20200394772 · 2020-12-17 · ·

Systems and methods for tone mapping of high dynamic range (HDR) images for high-quality deep learning based processing are disclosed. In one embodiment, a graphics processor includes a media pipeline to generate media requests for processing images and an execution unit to receive media requests from the media pipeline. The execution unit is configured to compute an auto-exposure scale for an image to effectively tone map the image, to scale the image with the computed auto-exposure scale, and to apply a tone mapping operator including a log function to the image, scaling the log function to generate a tone-mapped image.
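The pipeline described (auto-exposure scale, then a scaled log operator) can be sketched as follows. The use of the log-average luminance for auto-exposure and the mid-grey `key` value are common conventions assumed for illustration, not details taken from the abstract.

```python
import numpy as np

def log_tone_map(hdr, key=0.18):
    """Tone-map an HDR image in two steps: (1) compute an
    auto-exposure scale from the log-average (geometric mean)
    luminance and scale the image; (2) compress with a log
    operator scaled so the maximum maps to 1.0.
    'key' is an assumed mid-grey target, not from the patent."""
    eps = 1e-6
    log_avg = np.exp(np.mean(np.log(hdr + eps)))  # geometric mean
    exposure = key / log_avg                      # auto-exposure scale
    scaled = hdr * exposure
    return np.log1p(scaled) / np.log1p(scaled.max())

hdr = np.random.rand(32, 32) * 1000.0  # synthetic HDR luminance
ldr = log_tone_map(hdr)
```

The output lands in [0, 1], a range well suited to deep learning models trained on normalized inputs.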

CONTENT-BASED OBJECT DETECTION, 3D RECONSTRUCTION, AND DATA EXTRACTION FROM DIGITAL IMAGES
20200394763 · 2020-12-17 ·

A method of detecting an object depicted in a digital image includes: detecting a plurality of identifying features of the object, wherein the plurality of identifying features are located internally with respect to the object; projecting a location of region(s) of interest of the object based on the plurality of identifying features, where each region of interest depicts content; building and/or selecting an extraction model configured to extract the content based at least in part on: the location of the region(s) of interest, the identifying feature(s), or both; and extracting some or all of the content from the digital image using the extraction model. Corresponding system and computer program product embodiments are disclosed. The inventive concepts enable reliable extraction of data from digital images where portions of an object are obscured/missing, and/or depicted on a complex background.

Image processing apparatus and method

The present technology relates to an image processing apparatus and an image processing method capable of suppressing an increase in a load on a subject and obtaining a captured image of the subject with higher image quality. An imaging unit reduces a light amount and performs imaging of the fundus of the eye so as to generate a plurality of fundus images. A biological information alignment processing unit aligns the fundus images by using biological information of the subject. A super-resolution processing unit superimposes an aligned input image on a previous super-resolution result image so as to generate a new super-resolution result image. The super-resolution processing unit stores the super-resolution result image in a storage unit or outputs it from an output unit, and also supplies it to a super-resolution result image buffer to be stored.
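The superimposition step can be read as an incremental average over aligned low-light frames, which raises signal-to-noise ratio without raising the per-exposure light dose on the subject. A minimal sketch under that assumption (the running-average formulation and function name are illustrative):

```python
import numpy as np

def superimpose(result, aligned, count):
    """Fold one newly aligned low-light image into the running
    super-resolution result as an incremental average.
    'count' is how many frames the result already contains."""
    return (result * count + aligned) / (count + 1)

rng = np.random.default_rng(0)
clean = rng.random((8, 8))          # stand-in for the true fundus
result = np.zeros((8, 8))
for i in range(50):
    noisy = clean + rng.normal(0.0, 0.2, clean.shape)  # one aligned frame
    result = superimpose(result, noisy, i)
# Averaging 50 frames shrinks the noise std by roughly 1/sqrt(50).
```

In the described apparatus the aligned input would come from the biological-information alignment stage rather than being pre-registered as here.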

Methods and apparatus for real-time interactive anamorphosis projection via face detection and tracking

Methods, apparatus, systems, and articles of manufacture for real-time interactive anamorphosis projection via face detection and tracking are disclosed. An example system includes a sensor to capture an image of a face of a user. An augmented reality controller is to access the image from the sensor, determine a position of the face of the user relative to a display surface, and apply a perspective correction to an anamorphic camera representing a vantage point of the active user. A user application is to generate a scene based on the position of the anamorphic camera. A display is to present, at the display surface, the scene based on the vantage point of the active user.

Adding motion effects to digital still images

A digital still image is processed using motion-adding algorithms that are provided with an original still image and a set of motionizing parameters. Output of the motion-adding algorithms includes a motionized digital image suitable for display by any digital image display device. The motionized digital image may be used in place of a still image in any context in which a still image would be used, for example, in an ebook, e-zine, digital graphic novel, website, picture or poster, or user interface.

Fixed pattern noise mitigation for a thermal imaging system

An imaging system whose field of view (FOV) experiences occasional motion in relation to viewed scenes may be configured to reduce Fixed Pattern Noise (FPN) in acquired image data. FPN may be reduced by developing a pixel-by-pixel FPN correction term through a series of steps including blurring the image, identifying pixels to exclude from certain calculations, applying a motion detector and an FPN updater for frames under motion, and applying an FPN decay element for frames that are still.
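One way to read the blur/update/decay loop: under motion, scene content decorrelates frame to frame, so the high-pass residual (frame minus its blur) estimates the fixed pattern; when still, the correction term is decayed so scene detail is not burned in. The sketch below assumes a box blur and exponential update/decay rates, none of which are specified in the abstract.

```python
import numpy as np

def update_fpn(fpn, frame, in_motion, alpha=0.05, decay=0.99):
    """One FPN-correction step. alpha and decay are illustrative
    rates, not values from the patent."""
    k = 5
    pad = np.pad(frame, k // 2, mode='edge')
    blur = np.zeros_like(frame)
    H, W = frame.shape
    for i in range(k):            # simple k x k box blur (low-pass)
        for j in range(k):
            blur += pad[i:i + H, j:j + W]
    blur /= k * k
    residual = frame - blur       # pixel-wise fixed pattern + scene detail
    if in_motion:
        # Moving frames: fold the residual into the FPN estimate.
        return (1 - alpha) * fpn + alpha * residual
    # Still frames: decay the term instead of updating it.
    return decay * fpn

frame = np.random.rand(16, 16)
fpn = np.zeros((16, 16))
fpn = update_fpn(fpn, frame, in_motion=True)
corrected = frame - fpn
```

The exclusion of certain pixels (e.g., edges or saturated pixels) from the update, mentioned in the abstract, is omitted here for brevity.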

Forecasting images for image processing

Systems and methods are provided for image forecasting of image processing. A trained image forecaster may be used to generate a virtual image based on prior actual images. The virtual image may be preprocessed to generate an intermediate image. The intermediate image may then be used to process the next actual image to generate a final image.

Face detection for video calls

Exemplary embodiments relate to uses of face detection in video, and especially in video calls. In some embodiments, face detection may be used to center a camera shot by maintaining a face in the center of a screen. The centering may be applied selectively, such as by overriding centering if the user is looking off-screen. The video may also be cropped to better fit a face in a screen, or to allow multiple faces to appear on screen. In some embodiments, emphasizing the face over the background (or parts of the face over the whole face) allows for improvement in video call performance. Moreover, these techniques can be used to bring certain areas of a camera shot into focus while de-emphasizing the background (or vice versa).
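The centering behavior can be sketched as choosing a crop window centered on the detected face box, clamped so the crop never leaves the frame. The function name and interface are hypothetical, and the off-screen-gaze override mentioned above is not modeled here.

```python
def center_crop_on_face(frame_w, frame_h, face_box, crop_w, crop_h):
    """Pick a (crop_w, crop_h) window centered on the detected
    face box (x, y, w, h), clamped to the frame bounds.
    Returns the (left, top) corner of the crop."""
    x, y, w, h = face_box
    cx, cy = x + w / 2, y + h / 2            # face center
    left = int(round(cx - crop_w / 2))
    top = int(round(cy - crop_h / 2))
    left = max(0, min(left, frame_w - crop_w))  # clamp horizontally
    top = max(0, min(top, frame_h - crop_h))    # clamp vertically
    return left, top

# Face near the right edge: the crop clamps at the frame border.
print(center_crop_on_face(1280, 720, (1200, 300, 60, 60), 640, 360))
# → (640, 150)
```

Re-running this per frame on each face-detection result keeps the face centered as the user moves; the multi-face case would instead fit a window around the union of face boxes.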