Patent classifications
G06T5/00
SEMANTIC IMAGE EXTRAPOLATION METHOD AND APPARATUS
Disclosed are a semantic image extrapolation method and a semantic image extrapolation apparatus. The present invention provides a technique for generating an empty region for image extension in an image by using an extrapolated segmentation map and an inpainting technique. Considering that an empty region for image extension in an image contains no information, the present invention provides a semantic image extrapolation method of first generating an extrapolated segmentation map on the basis of a segmentation map from an input image, and then filling the empty region for image extension in the image with information on the basis of the extrapolated segmentation map and the input image.
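The two-stage idea above (extrapolate the segmentation map, then fill the empty region guided by it) can be sketched with a toy stand-in: replicating border labels in place of a learned map-extrapolation network, and per-label mean colors in place of the inpainting network. All function names and the fill rule are illustrative, not from the patent.

```python
import numpy as np

def extrapolate_segmentation(seg, pad):
    """Extend a label map outward by replicating its border labels.

    A crude stand-in for the learned segmentation-map extrapolation:
    each new border cell copies the label of the nearest edge pixel.
    """
    return np.pad(seg, pad, mode="edge")

def fill_by_segment(image, seg, pad):
    """Fill the empty border region using per-label mean colors of the input."""
    ext_seg = extrapolate_segmentation(seg, pad)
    h, w, c = image.shape
    out = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=image.dtype)
    out[pad:pad + h, pad:pad + w] = image
    known = np.zeros(out.shape[:2], dtype=bool)
    known[pad:pad + h, pad:pad + w] = True
    for label in np.unique(ext_seg):
        # Mean color of this label inside the original image.
        mean_color = image.reshape(-1, c)[(seg == label).ravel()].mean(axis=0)
        out[(ext_seg == label) & ~known] = mean_color.astype(image.dtype)
    return out, ext_seg
```

A real system would replace both helpers with trained networks; the sketch only shows how the extrapolated map conditions the fill.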
GROUND HEIGHT-MAP BASED ELEVATION DE-NOISING
The disclosed technology provides solutions for improving sensor data accuracy and, in particular, for improving radar data by de-noising radar elevation measurements using a height-map. In some aspects, a process of the disclosed technology can include steps for receiving camera data corresponding with a first location, receiving radar data comprising a plurality of radar points, and processing the radar data to generate height-corrected radar data. In some aspects, the process can further include steps for projecting the height-corrected radar data into an image space to generate radar-image data. Systems and machine-readable media are also provided.
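The correction and projection steps can be sketched as follows, assuming a gridded ground height-map in map coordinates and a standard pinhole camera model. The snapping rule, tolerance, and coordinate conventions are assumptions for illustration only.

```python
import numpy as np

def correct_radar_heights(points, height_map, cell_size, snap_tol=1.0):
    """Snap noisy radar elevations toward a ground height-map lookup.

    points: (N, 3) array of x, y, z in map coordinates.
    height_map: 2D grid of ground elevations; cell (i, j) covers the
    square starting at (i*cell_size, j*cell_size).
    """
    corrected = points.copy()
    i = np.clip((points[:, 0] // cell_size).astype(int), 0, height_map.shape[0] - 1)
    j = np.clip((points[:, 1] // cell_size).astype(int), 0, height_map.shape[1] - 1)
    ground = height_map[i, j]
    # Replace elevations that deviate more than snap_tol from the ground.
    noisy = np.abs(points[:, 2] - ground) > snap_tol
    corrected[noisy, 2] = ground[noisy]
    return corrected

def project_points(points_cam, fx, fy, cx, cy):
    """Pinhole projection of camera-frame points into image space."""
    z = points_cam[:, 2]  # depth along the camera's optical axis
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1)
```

In practice the points would first be transformed from map to camera frame before projection; that transform is omitted here.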
ROI-BASED VIDEO CODING METHOD AND DEVICE
A video recording method and a video recording device are provided. The method includes: obtaining video data to be recorded; dividing, based on the video data, each frame of the video data into a region of interest and a background region by using a preset neural network model; encoding the region of interest of the video data based on a first encoding bit rate and the background region based on a second encoding bit rate; and storing the encoded video data into a storage device through a video buffer.
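The two-bit-rate idea can be illustrated with a toy surrogate in which bit rate is modeled by uniform quantization step size: a fine step for the region of interest and a coarse step for the background. The step values and mask convention are illustrative, not from the patent.

```python
import numpy as np

def quantize(block, step):
    """Uniform quantization as a crude stand-in for rate control."""
    return np.round(block / step) * step

def encode_with_roi(frame, roi_mask, fine_step=4, coarse_step=32):
    """Spend more bits (finer quantization) on the ROI, fewer on background.

    frame: 2D luma array; roi_mask: boolean mask marking the region of
    interest produced by the segmentation model.
    """
    out = np.where(roi_mask,
                   quantize(frame, fine_step),
                   quantize(frame, coarse_step))
    return out.astype(frame.dtype)
```

A real encoder would instead steer the rate-control module (e.g. per-macroblock QP), but the effect is the same: detail is preserved where the mask says it matters.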
METHOD OF PROCESSING IMAGE, ELECTRONIC DEVICE, AND MEDIUM
The present disclosure provides a method of processing an image, a device, and a medium. The method of processing the image includes: performing a noise reduction on an original image to obtain a smooth image; performing a feature extraction on the original image to obtain feature data for at least one direction; and determining an image quality of the original image according to the original image, the smooth image, and the feature data for the at least one direction.
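The three steps above (noise reduction, directional feature extraction, and quality determination) can be sketched with simple surrogates: a box blur for smoothing, first-difference gradient energy for the directional features, and an assumed structure-minus-noise combination. The scoring formula is a guess for illustration, not the patented metric.

```python
import numpy as np

def box_blur(img, k=3):
    """Noise reduction: k x k mean filter with edge padding."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def directional_energy(img):
    """Feature extraction: mean gradient magnitude per direction."""
    g = img.astype(float)
    return {
        "horizontal": np.abs(np.diff(g, axis=1)).mean(),
        "vertical": np.abs(np.diff(g, axis=0)).mean(),
    }

def quality_score(img):
    """Combine the original image, its smoothed version, and directional features."""
    smooth = box_blur(img)
    noise = np.abs(img.astype(float) - smooth).mean()  # residual as noise estimate
    feats = directional_energy(img)
    structure = (feats["horizontal"] + feats["vertical"]) / 2.0
    return structure - noise
```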
METHODS AND SYSTEMS FOR GENERATING END-TO-END DE-SMOKING MODEL
The disclosure herein relates to methods and systems for generating an end-to-end de-smoking model for removing smoke present in a video. Conventional data-driven de-smoking approaches are limited mainly due to a lack of suitable training data. Further, the conventional data-driven de-smoking approaches are not end-to-end for removing the smoke present in the video. The de-smoking model of the present disclosure is trained end-to-end with the use of synthesized smoky video frames that are obtained by a source-aware smoke synthesis approach. The end-to-end de-smoking model localizes and removes the smoke present in the video, using dynamic properties of the smoke. Hence the end-to-end de-smoking model simultaneously identifies the regions affected by the smoke and performs the de-smoking with minimal artifacts, achieving localized smoke removal and color restoration of a real-time video.
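The training-data synthesis step can be illustrated with a minimal sketch that alpha-composites a synthetic smoke layer onto a clean frame to produce (clean, smoky) training pairs. The low-frequency-noise smoke field and the opacity value are assumptions; the patent's source-aware synthesis is considerably more involved.

```python
import numpy as np

def synthesize_smoky_frame(frame, rng, opacity=0.6):
    """Alpha-composite a synthetic smoke layer onto a clean video frame.

    frame: (H, W, 3) uint8 clean frame; rng: numpy random generator.
    Returns a smoky frame usable as the input half of a training pair.
    """
    h, w = frame.shape[:2]
    # Low-frequency noise as a crude smoke density field in [0, 1).
    coarse = rng.random((h // 4 + 1, w // 4 + 1))
    density = np.kron(coarse, np.ones((4, 4)))[:h, :w]
    smoke = 255.0 * density          # brighter where smoke is denser
    alpha = opacity * density        # per-pixel blend weight
    out = (1 - alpha[..., None]) * frame + alpha[..., None] * smoke[..., None]
    return out.astype(np.uint8)
```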
METHOD AND DEVICE FOR LATENCY REDUCTION OF AN IMAGE PROCESSING PIPELINE
In some implementations, a method includes: determining a complexity value for first image data associated with a physical environment that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and virtual content for compositing with the first image data; in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from the perspective that corresponds to the camera pose of the device relative to the physical environment during the first time period; and compositing a previous render of the virtual content for a previous time period with the first image data to generate the graphical environment for the first time period.
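The decision logic can be sketched as follows: estimate the composite setup time from the complexity value and the amount of virtual content, and reuse the previous render when the estimate exceeds the threshold. The cost model and all constants here are illustrative assumptions.

```python
def choose_render(complexity, virtual_items, last_render,
                  threshold_ms=8.0, ms_per_unit=0.5, ms_per_item=1.0):
    """Decide between rendering fresh virtual content and reusing a previous render.

    Returns (render, reused): the render to composite for this time period
    and whether the previous render was reused to cut latency.
    """
    # Toy linear cost model for the estimated composite setup time.
    estimated_ms = complexity * ms_per_unit + virtual_items * ms_per_item
    if estimated_ms > threshold_ms:
        # Forgo a fresh render for this frame; composite the previous one.
        return last_render, True
    return "fresh_render", False
```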
LOCAL ENHANCEMENT FOR A MEDICAL IMAGE
The present disclosure relates to locally enhancing medical images. In accordance with certain embodiments, a method includes determining a boundary of a region of interest in a displayed medical image, overlaying the boundary on the displayed medical image, adjusting a position of a collimator of a medical imaging system based on the determined boundary, enhancing image quality of the region of interest, and displaying the enhanced region of interest within the boundary.
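The image-quality enhancement step for the region of interest can be illustrated with a simple surrogate: contrast-stretching only the pixels inside the determined boundary while leaving the rest of the image untouched. The stretch-to-full-range rule is an assumption; the patent does not specify the enhancement algorithm, and the collimator adjustment is a hardware step not modeled here.

```python
import numpy as np

def enhance_roi(image, mask):
    """Contrast-stretch only the region of interest to the full display range.

    image: 2D uint8 medical image; mask: boolean array marking pixels
    inside the determined boundary.
    """
    out = image.astype(float).copy()
    roi = out[mask]
    lo, hi = roi.min(), roi.max()
    if hi > lo:
        # Map the ROI's intensity span to [0, 255]; background is untouched.
        out[mask] = (roi - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)
```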
GLOBAL TONE MAPPING WITH CONTRAST ENHANCEMENT AND CHROMA BOOST
An apparatus includes at least one processing device configured to obtain an input image and determine a cumulative distribution function (CDF) histogram from a luminance or luma (Y) channel of the input image. The at least one processing device is also configured to determine an entry CDF histogram in a CDF histogram lookup table (LUT) closest to the determined CDF histogram. The at least one processing device is further configured to apply a Y channel global tone mapping (GTM) curve to the input image based on one or more parameters assigned to the entry CDF histogram from the CDF histogram LUT.
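The histogram-matching flow can be sketched as follows: compute the CDF histogram of the Y channel, find the L1-closest reference entry in the LUT, and apply that entry's tone curve as a 256-entry mapping. The bin count, distance metric, and curve representation are assumptions for illustration.

```python
import numpy as np

def cdf_histogram(y, bins=16):
    """Cumulative distribution of the Y channel, normalized to [0, 1]."""
    hist, _ = np.histogram(y, bins=bins, range=(0, 256))
    cdf = np.cumsum(hist).astype(float)
    return cdf / cdf[-1]

def apply_gtm(y, lut_cdfs, lut_curves):
    """Pick the closest LUT entry by CDF distance and apply its GTM curve.

    lut_cdfs: (K, bins) reference CDF histograms; lut_curves: (K, 256)
    per-entry tone curves mapping input luma to output luma.
    """
    cdf = cdf_histogram(y)
    idx = int(np.argmin(np.abs(lut_cdfs - cdf).sum(axis=1)))  # L1-closest entry
    return lut_curves[idx][y], idx
```

The contrast-enhancement and chroma-boost parameters of the title would, in this framing, be additional parameters assigned to the selected LUT entry.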
DEEP LEARNING-BASED IMAGE QUALITY ENHANCEMENT OF THREE-DIMENSIONAL ANATOMY SCAN IMAGES
Techniques are described for enhancing the quality of three-dimensional (3D) anatomy scan images using deep learning. According to an embodiment, a system is provided that comprises a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory. The computer executable components comprise a reception component that receives a scan image generated from 3D scan data relative to a first axis of a 3D volume, and an enhancement component that applies an enhancement model to the scan image to generate an enhanced scan image having a higher resolution relative to the scan image. The enhancement model comprises a deep learning neural network model trained on training image pairs respectively comprising a low-resolution scan image and a corresponding high-resolution scan image respectively generated relative to a second axis of the 3D volume.
SELF-EMITTING DISPLAY (SED) BURN-IN PREVENTION BASED ON STATIONARY LUMINANCE REDUCTION
One embodiment provides a computer-implemented method that includes providing a dynamic list structure that stores one or more detected object bounding boxes. Temporal analysis is applied that updates the dynamic list structure with object validation to reduce temporal artifacts. A two-dimensional (2D) buffer is utilized to store a luminance reduction ratio of a whole video frame. The luminance reduction ratio is applied to each pixel in the whole video frame based on the 2D buffer. One or more spatial smoothing filters are applied to the 2D buffer to reduce a likelihood of one or more spatial artifacts occurring in a luminance reduced region.
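The buffer-based steps above can be sketched as follows: build a per-pixel reduction-ratio buffer from the validated bounding boxes, smooth it with a box filter to suppress spatial artifacts, and multiply it into the frame. The ratio value, filter size, and box format are illustrative assumptions.

```python
import numpy as np

def smooth_ratio_buffer(buf, k=3):
    """Spatial smoothing filter over the 2D reduction-ratio buffer."""
    pad = k // 2
    p = np.pad(buf, pad, mode="edge")
    out = np.zeros_like(buf)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + buf.shape[0], dx:dx + buf.shape[1]]
    return out / (k * k)

def apply_luminance_reduction(frame, boxes, ratio=0.7):
    """Build a per-pixel reduction buffer from validated boxes and apply it.

    frame: 2D luma array; boxes: list of (y0, y1, x0, x1) entries from the
    dynamic list structure of detected stationary objects.
    """
    buf = np.ones(frame.shape[:2])
    for y0, y1, x0, x1 in boxes:
        buf[y0:y1, x0:x1] = ratio   # dim stationary, burn-in-prone regions
    buf = smooth_ratio_buffer(buf)  # soften box edges to avoid visible seams
    return (frame * buf).astype(frame.dtype)
```

The smoothing step is what turns hard box edges into gradual transitions, which is the stated purpose of the spatial filters in the abstract.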