G06T5/003

MOTION ARTIFACT CORRECTION FOR PHASE-CONTRAST AND DARK-FIELD IMAGING

A system (IPS) and related method for image processing, in particular dark-field or phase-contrast imaging, to reduce motion artifacts. The system comprises an input interface (IN) for receiving a series of projection images (π) acquired by an X-ray imaging apparatus (XI) of an object (OB) for a given projection direction, the imaging apparatus (XI) being configured for phase-contrast and/or dark-field imaging. A phase-contrast and/or dark-field image generator (IGEN) applies an image generation algorithm to compute a first image based on the series of projection images (π). A motion artifact detector (MD) detects a motion artifact in the first image. If a motion artifact is so detected, a combiner (Σ) combines a part of the first image with a part of at least one auxiliary image to obtain a combined image. The auxiliary image was previously computed by a gated application of the image generation algorithm to a subset of the series of projection images (π). The combined image may be output at an output interface (OUT) as a motion-artifact-reduced image.
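The abstract does not disclose how the detector or combiner operate internally, but the combining step can be illustrated with a minimal sketch. The function names and the pixel-wise disagreement heuristic below are assumptions for illustration only, not the patented method:

```python
import numpy as np

def detect_motion_artifact(first_image, aux_image, threshold=3.0):
    """Crude artifact detector (hypothetical): flag pixels where the ungated
    reconstruction disagrees strongly with the gated auxiliary reconstruction."""
    return np.abs(first_image - aux_image) > threshold

def combine_images(first_image, aux_image, artifact_mask):
    """Combine a part of the first image with a part of the auxiliary image:
    keep the ungated pixel everywhere except where an artifact was flagged."""
    return np.where(artifact_mask, aux_image, first_image)
```

A usage sketch: `combine_images(first, aux, detect_motion_artifact(first, aux))` yields an image that falls back to the gated (motion-frozen) reconstruction only in the regions the detector flags, preserving the better statistics of the full series elsewhere.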

IMAGE DETECTION METHOD AND APPARATUS, COMPUTER DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
20220415038 · 2022-12-29 ·

The present application provides an image detection method performed by a server. The method includes: capturing a first image and a second image from a video stream at a preset time interval; performing pixel matching on the first image and the second image to obtain a total number of matching pixels between the two images; performing picture content detection on the second image in response to determining that the total number of matching pixels satisfies a preset matching condition; and determining that the video stream is abnormal in response to the picture content detection finding no picture content in the second image. In this way, image recognition can be used to inspect the pictures of the video stream at the preset time interval.
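The steps above can be sketched in a few lines. The matching condition (a fraction of identical pixels) and the content test (near-zero variance means a blank frame) are assumed stand-ins; the abstract does not specify either:

```python
import numpy as np

def count_matching_pixels(img_a, img_b, tol=0):
    """Count pixels whose values agree across all channels within `tol`."""
    same = np.all(np.abs(img_a.astype(int) - img_b.astype(int)) <= tol, axis=-1)
    return int(np.sum(same))

def has_picture_content(img, var_threshold=1.0):
    """Hypothetical content check: a near-constant frame has ~zero variance."""
    return float(np.var(img)) > var_threshold

def stream_is_abnormal(img_a, img_b, match_ratio=0.99, tol=0):
    """Abnormal only if the two sampled frames are essentially frozen AND blank."""
    total = img_a.shape[0] * img_a.shape[1]
    if count_matching_pixels(img_a, img_b, tol) / total >= match_ratio:
        return not has_picture_content(img_b)
    return False
```

Note the two-stage design from the abstract: the cheap pixel-matching gate runs first, and the (potentially more expensive) content detection only runs when the frames match.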

THERMAL IMAGE PROCESSING DEVICE, THERMAL IMAGE PROCESSING MODULE, THERMAL IMAGE PROCESSING METHOD, AND RECORDING MEDIUM

A thermal image processing device includes: an acquirer that acquires a thermal image from a thermal image sensor; a first processor that performs, on a high temperature pixel indicating a temperature higher than a first temperature among a plurality of pixels included in the thermal image acquired, image processing for decreasing the temperature indicated by the high temperature pixel; and a second processor that performs high frequency enhancement processing for enhancing a high frequency component included in a converted image serving as the thermal image on which the image processing has been performed.
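The two processing stages can be sketched as follows. Clipping to the first temperature and an unsharp-mask style high-frequency boost are assumed implementations for illustration; the patent may realize both stages differently:

```python
import numpy as np

def suppress_high_temperature(thermal, first_temperature):
    """First processor (sketch): decrease any pixel hotter than the first
    temperature, here simply by clipping it to that threshold."""
    return np.minimum(thermal, first_temperature)

def enhance_high_frequency(converted, gain=1.0):
    """Second processor (sketch): boost high-frequency content via an unsharp
    mask, using a 3x3 box blur as the low-pass filter."""
    h, w = converted.shape
    pad = np.pad(converted, 1, mode='edge')
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return converted + gain * (converted - blur)
```

Applied in order, `enhance_high_frequency(suppress_high_temperature(img, t1))` restores edge contrast that the temperature suppression flattens, which matches the motivation implied by the abstract.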

METHOD FOR PROCESSING IMAGES AND ELECTRONIC DEVICE
20220414850 · 2022-12-29 ·

Provided is a method for processing images, including: determining a target processing region in a target image based on facial key points; acquiring a low-and-mid-frequency image and a low-frequency image corresponding to the target image by filtering the target image; acquiring a first image by adjusting pixel values of pixel points in the target processing region in the low-and-mid-frequency image based on differences between the pixel values of the pixel points in the target processing region in the low-frequency image and pixel values of pixel points at corresponding positions in the low-and-mid-frequency image; and acquiring a second image by adjusting pixel values of pixel points in the target processing region in the first image based on differences between pixel values of pixel points in the target processing region in the target image and the pixel values of the pixel points at corresponding positions in the low-and-mid-frequency image.
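The two adjustment stages can be sketched with box blurs standing in for the unspecified filters. The blur radii, the blend weights `alpha`/`beta`, and the function names are all assumptions for illustration:

```python
import numpy as np

def box_blur(img, radius):
    """Separable-free box blur with edge padding (stand-in for the filtering)."""
    h, w = img.shape
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    return sum(pad[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)

def smooth_region(target, region_mask, alpha=0.5, beta=0.5):
    low_mid = box_blur(target, 1)   # mild blur: keeps low and mid frequencies
    low = box_blur(target, 3)       # strong blur: keeps only low frequencies
    # First image: inside the region, pull low-mid pixels toward the
    # low-frequency image based on their difference (removes mid-scale detail).
    first = low_mid + region_mask * alpha * (low - low_mid)
    # Second image: inside the region, restore a fraction of the fine detail
    # (target minus low-mid) on top of the first image.
    return first + region_mask * beta * (target - low_mid)
```

With `alpha=0` and `beta=1` the pipeline reduces to the identity inside the region, which is a quick sanity check that the difference terms are wired as the abstract describes.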

X-RAY IMAGING RESTORATION USING DEEP LEARNING ALGORITHMS

A general workflow for deep-learning-based image restoration in X-ray and fluoroscopy/fluorography is disclosed. Higher-quality images and lower-quality images are generated as training data. This training data can further be categorized by anatomical structure and used to train a learned model, such as a neural network or deep-learning neural network. Once trained, the learned model can be used for real-time inferencing. The inferencing can be further improved by employing a variety of techniques, including pruning the learned model, reducing the precision of the learned model, utilizing multiple image restoration processors, or dividing a full-size image into snippets.
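Of the listed speed-up techniques, the snippet-based one is easiest to show without a trained model. The sketch below tiles a full-size image, applies an arbitrary restoration function per snippet, and reassembles the result; the tiling scheme (non-overlapping tiles, no halo) is an assumption, since real tiled inference usually overlaps tiles to hide seams:

```python
import numpy as np

def split_into_snippets(image, tile):
    """Cut a 2-D image into non-overlapping tiles keyed by top-left corner."""
    h, w = image.shape
    return {(y, x): image[y:y + tile, x:x + tile]
            for y in range(0, h, tile) for x in range(0, w, tile)}

def restore_full_image(tiles, shape, restore_fn):
    """Run `restore_fn` (e.g. a learned model) on each snippet and reassemble."""
    out = np.zeros(shape, dtype=float)
    for (y, x), t in tiles.items():
        out[y:y + t.shape[0], x:x + t.shape[1]] = restore_fn(t)
    return out
```

Snippets let the (possibly pruned, reduced-precision) model run on fixed-size inputs and be distributed across multiple image restoration processors, which is the parallelism the abstract alludes to.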

METHODS AND SYSTEMS TO CORRECT CROSSTALK IN ILLUMINATION EMITTED FROM REACTION SITES
20220414839 · 2022-12-29 ·

A biosensor including an array of reaction sites and corresponding light sensors may experience crosstalk, in which photons from one reaction site are detected by neighbors of its corresponding light sensor. Such crosstalk may be corrected using sharpening kernels corresponding to the sensors in the array. The sharpening kernels may be derived from generative matrices, which themselves may be derived from point spread functions representing the dispersion of illumination emitted from the reaction sites.
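The abstract does not give the derivation, but the core idea of building a sharpening kernel from a point spread function can be sketched. The first-order approximate inverse K ≈ 2·δ − PSF (a truncated Neumann series) is an assumed simplification, not the generative-matrix construction of the patent:

```python
import numpy as np

def sharpening_kernel_from_psf(psf):
    """First-order approximate inverse of a blur: K = 2*delta - PSF."""
    delta = np.zeros_like(psf)
    delta[psf.shape[0] // 2, psf.shape[1] // 2] = 1.0
    return 2.0 * delta - psf

def convolve2d_same(img, kernel):
    """Same-size cross-correlation with zero padding (equivalent to
    convolution for the symmetric kernels used here)."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out
```

Blurring an impulse with the PSF and then applying the derived kernel pushes the peak back toward its original value, which is the sense in which per-sensor sharpening undoes crosstalk.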

HYPER CAMERA WITH SHARED MIRROR

An imaging system can include first and second cameras configured to capture first and second sets of oblique images along first and second scan paths, respectively, on an object area. A drive is coupled to a scanning mirror structure having at least one mirror surface and is configured to rotate the structure about a scan axis based on a scan angle. The first and second cameras each have an optical axis set at an oblique angle to the scan axis and include a respective lens to focus first and second imaging beams reflected from the mirror surface onto an image sensor located in each of the cameras. The first and second imaging beams captured by their respective cameras can vary according to the scan angle. Each of the image sensors captures respective sets of oblique images by sampling the imaging beams at first and second values of the scan angle.

IMAGE DEHAZING METHOD AND SYSTEM BASED ON CYCLEGAN

Disclosed are an image dehazing method and system based on CycleGAN. The method comprises: acquiring a to-be-processed hazy image; and inputting the image into a pre-trained densely connected CycleGAN, which outputs a clear image. The densely connected CycleGAN comprises a generator, and the generator comprises an encoder, a converter and a decoder. The encoder comprises a densely connected layer for extracting features of the input image; the converter comprises a transition layer for combining the features extracted at the encoder stage; and the decoder comprises a densely connected layer for restoring the original features of the image and a scaled convolutional neural network layer for removing the checkerboard effect from the restored features to obtain the final output clear image.
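The defining trait of the densely connected layers is that each sub-layer receives the concatenation of all preceding feature maps. A framework-free sketch of that connectivity pattern (only the wiring, not the convolutions, transition layer, or scaled convolution of the actual generator) might look like:

```python
import numpy as np

def dense_block(x, layer_fns):
    """Densely connected block (sketch): every layer sees the channel-wise
    concatenation of the input and all earlier layer outputs, and the block
    returns the concatenation of everything it produced."""
    features = [x]
    for fn in layer_fns:
        out = fn(np.concatenate(features, axis=-1))
        features.append(out)
    return np.concatenate(features, axis=-1)
```

Because channels accumulate, the transition layer mentioned in the abstract typically exists to compress them back down before the decoder; the scaled convolution (resize-then-convolve) in the decoder is a standard remedy for the checkerboard artifacts of transposed convolutions.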

IMAGE PROCESSING DEVICE AND OPERATION METHOD THEREOF

There is provided an image processing device including: a camera outputting a first image obtained by photographing an object that is moving; and a control module generating a coded pattern for controlling a shutter exposure time and reconstructing a second image in which motion blur of the first image is removed, wherein the control module detects a moving speed of the object, and generates the coded pattern based on a point spread function (PSF) range set according to the moving speed.
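The coupling between moving speed and coded pattern can be sketched as follows. Mapping speed and exposure to a PSF length in pixels, and using a pseudo-random flutter-shutter code, are assumed choices; the abstract only states that the pattern is generated from a PSF range set according to the speed:

```python
import numpy as np

def psf_length(speed_px_per_s, exposure_s):
    """Motion-blur extent in pixels for a given object speed and exposure."""
    return max(1, int(round(speed_px_per_s * exposure_s)))

def coded_pattern(length, seed=0):
    """Pseudo-random open/close shutter sequence. A broadband (fluttered)
    code keeps the motion blur invertible, unlike a solid open shutter
    whose box PSF has zeros in its frequency response."""
    rng = np.random.default_rng(seed)
    pattern = rng.integers(0, 2, size=length)
    pattern[0] = 1  # shutter opens at the start of the exposure
    return pattern
```

A faster object yields a longer PSF and hence a longer code, which is the sense in which the control module "generates the coded pattern based on a point spread function range set according to the moving speed."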

IMAGE PROCESSING DEVICE AND OPERATION METHOD THEREOF

There is provided an image processing device including: a plurality of light sources, each emitting a plurality of lights; a camera outputting a first subject image obtained by photographing a subject; and a control unit controlling each of the plurality of light sources according to an optical coded pattern set based on a complex modulation transfer function during a shutter exposure time of the camera, and outputting a second subject image in which motion blur is removed from the first subject image according to point spread functions for each color channel, as modulated by the set optical coded pattern.