Patent classifications
G06T5/90
Image Enhancement Devices With Gaze Tracking
An electronic device may have a display and a camera. Control circuitry in the device can gather information on a user's point of gaze using a gaze tracking system and other sensors. The control circuitry can also analyze the real-world image to gather information on its content, motion, and other attributes; gather user vision information such as acuity, contrast sensitivity, field of view, and geometrical distortions; and gather user input such as preferences and mode selection commands, along with other input. Based on the point-of-gaze information and/or other gathered information, the control circuitry can display the real-world image and supplemental information on the display. The supplemental information can include augmentations such as icons, text labels, and other computer-generated text and graphics overlaid on the real-world image, and can include enhanced image content such as magnified portions of the real-world image.
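A minimal sketch of the gaze-driven magnification idea, in Python/NumPy. The function name, parameters, and nearest-neighbor upscale are assumptions for illustration only, not the patented implementation: a square region around the point of gaze is replaced with a magnified view of its central portion.

```python
import numpy as np

def magnify_at_gaze(img, gaze_xy, radius=8, factor=2):
    """Crude stand-in (an assumption, not the patented method) for the
    abstract's magnified content: replace a square region around the
    point of gaze with a nearest-neighbor upscale of its center."""
    x, y = gaze_xy
    half = radius // factor
    src = img[y - half:y + half, x - half:x + half]
    mag = np.repeat(np.repeat(src, factor, axis=0), factor, axis=1)
    out = img.copy()
    out[y - radius:y + radius, x - radius:x + radius] = mag
    return out

# toy grayscale frame with the gaze at its center
frame = np.arange(1024, dtype=float).reshape(32, 32)
zoomed = magnify_at_gaze(frame, gaze_xy=(16, 16))
```

The gaze point must lie at least `radius` pixels inside the frame; a real device would clamp or feather the region edge.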
METHOD AND SYSTEM FOR GENERATING SYNTHETIC IMAGES WITH SWITCHABLE IMAGE CONTRASTS
A system and method generate a synthetic image with switchable image contrast components for a biological object. The method includes: a) using first and second quantitative MRI acquisition techniques to measure values of first and second quantitative parameters Q1, Q2 for the biological object and to generate first and second quantitative maps, wherein the first and second quantitative MRI acquisition techniques also generate first and second contrast-weighted images; b) using the first and second quantitative maps and the first and second contrast-weighted images as inputs to a model configured to generate a synthetic image M with arbitrary sequence parameters P1, P2, P3, according to:
M = |C_i f(Q1, Q2, P1, P2, P3)|
wherein C_i, with i = 1, 2, are contrast components for the generation of the synthetic image M coming from the first (i=1) and second (i=2) contrast-weighted images, respectively, and f is a function of Q1, Q2, P1, P2, and P3; and c) displaying the synthetic image M.
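The abstract leaves the signal function f unspecified. A minimal sketch, assuming a spin-echo style model where Q1 is a T1 map, Q2 is a T2 map, and the sequence parameters are TR and TE (all in milliseconds); the function name and this particular choice of f are hypothetical:

```python
import numpy as np

def synthetic_image(q1_map, q2_map, c_i, tr, te):
    """Generate M = |C_i * f(Q1, Q2, P1, P2)| with an assumed
    spin-echo style f: (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    f = (1.0 - np.exp(-tr / q1_map)) * np.exp(-te / q2_map)
    return np.abs(c_i * f)

# toy 2x2 quantitative maps (T1 and T2 in ms)
t1 = np.array([[800.0, 1200.0], [1000.0, 900.0]])
t2 = np.array([[80.0, 110.0], [95.0, 70.0]])
m = synthetic_image(t1, t2, c_i=1.0, tr=500.0, te=20.0)
```

Switching the contrast then amounts to re-evaluating the model with different sequence parameters P, without re-scanning the object.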
Optical imaging method and apparatus
This application provides an imaging method applied to an imaging apparatus that includes a color camera and a black-and-white camera, where the resolution of the black-and-white camera is higher than that of the color camera. The imaging method includes: obtaining a zoom magnification; simultaneously capturing a color image and a black-and-white image of a target scene, where the resolution of the black-and-white image is higher than that of the color image; separately cropping the black-and-white image and the color image based on the zoom magnification so that the cropped images have the same field of view; and merging the cropped color image and the cropped black-and-white image to obtain an output image of the target scene. The output image obtained by this method therefore has a better optical zoom capability.
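The crop-then-merge steps can be sketched as follows. The function names are hypothetical, and the fusion rule (substituting the monochrome image as luminance while keeping the color image's chroma) is an assumption standing in for the unspecified patented merge:

```python
import numpy as np

def center_crop(img, zoom):
    """Center-crop so the field of view shrinks by the zoom magnification."""
    h, w = img.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]

def merge(color, mono):
    """Naive fusion (an assumption, not the patented merge): keep the
    color image's chroma and substitute the black-and-white image as
    luminance; both inputs are assumed already resampled to one grid."""
    luma = color @ np.array([0.299, 0.587, 0.114])  # Rec.601 weights
    chroma = color - luma[..., None]
    return np.clip(chroma + mono[..., None], 0.0, 1.0)

rgb = np.random.default_rng(0).random((8, 8, 3))
bw = np.random.default_rng(1).random((8, 8))
out = merge(center_crop(rgb, 2.0), center_crop(bw, 2.0))
```

In practice the higher-resolution monochrome crop supplies the fine detail, which is why the merged result zooms better than the color camera alone.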
Video dehazing device and method
A video dehazing method includes: capturing a hazy image comprising multiple input pixels by an image capture module; calculating an atmospheric light value from the input pixels by an atmospheric light estimation unit; determining a sky image area from the input pixels via the intermediate calculation results of a guided filter by a sky detection unit; calculating a dark channel image from the input pixels based on the dark channel prior (DCP) by a dark channel prior unit; calculating a fine transmission image from the input pixels, the atmospheric light value, the sky image area, and the dark channel image via a guided filter by a transmission estimation unit; generating a dehazed image from the input pixels, the atmospheric light value, and the fine transmission image by an image dehazing unit; and outputting the dehazed image by a video output module.
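A minimal dark-channel-prior sketch of the core steps (atmospheric light, transmission, haze inversion). It omits the guided filter and sky detection of the patented pipeline, and all names and the patch size are illustrative:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB, then a local minimum filter."""
    dc = img.min(axis=2)
    h, w = dc.shape
    pad = patch // 2
    padded = np.pad(dc, pad, mode='edge')
    out = np.empty_like(dc)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t_min=0.1):
    """Minimal DCP dehazing (no guided filter or sky detection):
    estimate atmospheric light A at the brightest dark-channel pixel,
    then invert the haze model I = J*t + A*(1 - t)."""
    dc = dark_channel(img)
    idx = np.unravel_index(dc.argmax(), dc.shape)
    a = img[idx]                                  # atmospheric light
    t = 1.0 - omega * dark_channel(img / a.clip(1e-6))
    t = np.maximum(t, t_min)[..., None]           # coarse transmission
    return np.clip((img - a) / t + a, 0.0, 1.0)

hazy = np.random.default_rng(2).random((8, 8, 3)) * 0.5 + 0.5
clear = dehaze(hazy)
```

The guided filter in the patent refines the coarse transmission `t` to follow edges in the input, which this sketch skips.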
Bit depth reduction of image pixels
The bit depth of the pixels in a camera image is reduced. In one embodiment, the number of pixels at each of a set of multiple-bit intensity values is counted. The pixel intensity value with the lowest count is selected, and a mapping function is generated that combines those pixels with pixels having an adjacent intensity value. This is repeated until the total number of pixel intensity values is reduced to a predetermined number. A reduced-bit-depth image is generated by assigning a new intensity value to each pixel using the mapping function, and the reduced-bit-depth image is sent to an image analysis system.
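The count-merge-remap loop above can be sketched directly. Function and variable names are illustrative; the tie-breaking and neighbor-selection rules are assumptions where the abstract leaves them open:

```python
import numpy as np

def reduce_bit_depth(img, target_levels):
    """Repeatedly merge the least-populated intensity value into its
    nearest surviving neighbor until only `target_levels` distinct
    values remain, then remap every pixel through the mapping."""
    levels = sorted(int(v) for v in np.unique(img))
    counts = {v: int((img == v).sum()) for v in levels}
    mapping = {v: v for v in levels}
    while len(levels) > target_levels:
        v = min(levels, key=lambda x: counts[x])      # lowest count
        i = levels.index(v)
        neighbors = [levels[k] for k in (i - 1, i + 1)
                     if 0 <= k < len(levels)]
        tgt = min(neighbors, key=lambda n: abs(n - v))  # adjacent value
        counts[tgt] += counts.pop(v)
        levels.remove(v)
        # redirect every source value that pointed at v
        mapping = {s: tgt if d == v else d for s, d in mapping.items()}
    return np.vectorize(mapping.get)(img)

pixels = np.array([[0, 1, 1, 2], [2, 2, 3, 3]])
reduced = reduce_bit_depth(pixels, target_levels=2)
```

Because merges always target an adjacent value, the remapped image preserves intensity ordering, which matters for the downstream analysis system.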
SIGNAL PROCESSING DEVICE AND IMAGE DISPLAY APPARATUS INCLUDING THE SAME
Disclosed is a signal processing device and an image display apparatus including the same. The signal processing device includes: a linear tone mapper configured to perform linear tone mapping on a part of an input image; a non-linear tone mapper configured to perform non-linear tone mapping on another part of the input image; and a combiner configured to combine an output from the linear tone mapper and an output from the non-linear tone mapper. Accordingly, it is possible to map the dynamic range of an input image to a display.
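One way to picture the linear/non-linear split and the combiner, as a sketch. The knee point, gain, and gamma-style upper curve are assumptions; the patent does not specify the mapping shapes:

```python
import numpy as np

def combined_tone_map(y, knee=0.5, gain=1.2):
    """Hypothetical curve: linear mapping below a knee point,
    gamma-style non-linear compression above it, combined into one
    output as the abstract's combiner suggests; continuous at the knee."""
    linear = gain * y
    upper = knee * gain + (1.0 - knee * gain) * \
        np.clip((y - knee) / (1.0 - knee), 0.0, 1.0) ** 0.6
    return np.clip(np.where(y < knee, linear, upper), 0.0, 1.0)

y = np.linspace(0.0, 1.0, 11)
mapped = combined_tone_map(y)
```

Keeping the lower range linear preserves shadow detail exactly, while the non-linear upper segment compresses highlights into the display's range.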
Method and Apparatus for Image Processing
A method and an apparatus for image processing are provided. An original image is captured. At least one reference image is generated by adjusting brightness of the original image. Multiple denoised images are generated by performing artificial intelligence based denoising on the original image and the at least one reference image respectively. A target image is generated by performing HDR synthesis on the multiple denoised images.
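A sketch of the pipeline shape for a grayscale frame. The box filter stands in for the unspecified AI denoiser, and the mid-gray-weighted merge stands in for the HDR synthesis; both substitutions and all names are assumptions:

```python
import numpy as np

def box_denoise(img):
    """Stand-in denoiser (the abstract's AI denoiser is unspecified):
    a 3x3 box filter via edge padding."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def hdr_merge(original, gains=(0.5, 2.0)):
    """Pipeline sketch: brightness-adjusted reference images, per-image
    denoising, then a naive weighted merge favoring mid-gray pixels."""
    stack = [np.clip(original * g, 0.0, 1.0) for g in (1.0, *gains)]
    denoised = [box_denoise(s) for s in stack]
    weights = [np.exp(-((d - 0.5) ** 2) / 0.08) for d in denoised]
    return sum(w * d for w, d in zip(weights, denoised)) / sum(weights)

frame = np.random.default_rng(3).random((6, 6))
merged = hdr_merge(frame)
```

Denoising before synthesis keeps the brightened reference's amplified noise out of the merged result, which is the point of ordering the steps this way.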
Encoding and decoding HDR videos
To enable high-quality HDR video communication that can work by sending corresponding LDR images, potentially via established LDR video communication technologies, and that works well in practical situations, the applicant has invented an HDR video decoder (600, 1100) arranged to calculate an HDR image (Im_RHDR) by applying a set of luminance transformation functions to a received 100 nit standard dynamic range image (Im_RLDR). The functions comprise at least a coarse luminance mapping (FC), which is applied by a dynamic range optimizer (603), and a mapping of the darkest value (0) of an intermediate luma (YHPS), being the output of the dynamic range optimizer, to a received black offset value (Bk_off) by a range stretcher (604). The video decoder comprises a gain limiter (611, 1105) arranged to apply an alternate luminance transformation function to calculate a subset (502) of the darkest luminances of the HDR image from the corresponding darkest lumas (Y_in) of the standard dynamic range image.
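The decoder's luminance chain can be sketched with assumed functional forms (the patented curves are not given in the abstract): a coarse mapping, a range stretch sending intermediate luma 0 to Bk_off, and a gain limiter substituting a bounded linear transform for the darkest input lumas. All thresholds, the slope, and the square-root coarse mapping are hypothetical:

```python
import numpy as np

def decode_luma(y_in, coarse, bk_off, dark_thresh=0.05, slope=2.0):
    """Sketch of the decoder's luminance chain with assumed curves."""
    y = np.asarray(y_in, dtype=float)
    y_hps = coarse(y)                              # dynamic range optimizer
    stretched = bk_off + (1.0 - bk_off) * y_hps    # range stretcher
    alternate = slope * y                          # gain-limited dark path
    return np.where(y < dark_thresh,
                    np.minimum(stretched, alternate), stretched)

lumas = np.array([0.0, 0.02, 0.5, 1.0])
hdr = decode_luma(lumas, coarse=np.sqrt, bk_off=0.02)
```

Without the limiter, the black offset would lift true black (input 0) to Bk_off; the alternate dark-luma path keeps the deepest blacks anchored.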
Systems, methods, and media for hierarchical progressive point cloud rendering
In accordance with some aspects, systems, methods, and media for hierarchical progressive point cloud rendering are provided. In some aspects, a method for point cloud rendering is provided, the method comprising: rendering a first image based on point cloud data; requesting point cloud points, first synthetic point cloud points, and an octant of a second synthetic point cloud that intersects a new viewing frustum; reprojecting points used during rendering of the first image into frame buffer objects (FBOs) of different resolutions; replacing reprojected points when a received point corresponding to the same pixel is closer to the camera; determining that a pixel in the highest-resolution FBO is unfilled; copying a point that originated in a lower-resolution FBO into the unfilled pixel of the highest-resolution FBO; and, when the highest-resolution FBO is filled, rendering a second image based on its contents.
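The hierarchical fill step, isolated as a sketch over plain arrays rather than GPU frame buffer objects; NaN marks an unfilled pixel, and the function name and half-resolution relationship are assumptions:

```python
import numpy as np

def fill_from_lower_res(hi, lo):
    """Sketch of the hierarchical fill step: each unfilled pixel (NaN)
    in the highest-resolution buffer is copied from the corresponding
    pixel of a half-resolution buffer."""
    out = hi.copy()
    ys, xs = np.where(np.isnan(out))
    out[ys, xs] = lo[ys // 2, xs // 2]
    return out

hi = np.array([[1.0, np.nan], [np.nan, 4.0]])
lo = np.array([[9.0]])
filled = fill_from_lower_res(hi, lo)
```

The coarse copy is progressively overwritten as higher-resolution points arrive, which is what makes the rendering "progressive".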
REAL-TIME VIDEO DYNAMIC RANGE ANALYSIS
A video analyzer measures and outputs a visual indication of a dynamic range of a video signal. The video analyzer includes a video input to receive the video signal and a cumulative distribution function generator generates a cumulative distribution function curve from a component of the video signal. A feature detector generates one or more feature vectors from the cumulative distribution function curve and a video dynamic range generator produces a visual output indicating a luminance of one or more portions of the video signal.
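The CDF-curve computation and one possible derived feature, as a sketch. The percentile-span feature is an assumption; the abstract does not say which feature vectors the detector extracts:

```python
import numpy as np

def luma_cdf(luma, bins=256):
    """Cumulative distribution of a luminance component, the curve the
    abstract's feature detector works from."""
    hist, edges = np.histogram(luma, bins=bins, range=(0.0, 1.0))
    return edges[1:], np.cumsum(hist) / luma.size

def dynamic_range(luma, lo=0.01, hi=0.99):
    """One possible feature (an assumption): the luminance span
    between the 1st and 99th percentiles of the CDF."""
    x, cdf = luma_cdf(luma)
    return x[np.searchsorted(cdf, hi)] - x[np.searchsorted(cdf, lo)]

signal = np.linspace(0.0, 1.0, 1000)  # full-range luminance ramp
span = dynamic_range(signal)
```

Reading the range off the CDF rather than raw min/max makes the measurement robust to a few outlier pixels, which suits a real-time monitoring instrument.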