Patent classifications
G06T2207/20052
Image processing method and computer-readable recording medium having recorded thereon image processing program
An image processing method that includes: obtaining an original image containing a cultured-cell image against a background; dividing the original image into blocks, each composed of a predetermined number of pixels; obtaining, for each block, the spatial frequency components of the image in that block; classifying each block, in a two-dimensional feature amount space, as belonging either to a cell cluster corresponding to the cells or to a region other than the cell cluster, where the first feature amount is the total intensity of low-frequency components at or below a predetermined frequency and the second feature amount is the total intensity of high-frequency components above that frequency; and segmenting the original image into the area occupied by blocks classified as the cell cluster and the remaining area.
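The block-wise frequency classification described above can be sketched as follows; the FFT-based feature extraction, the radial cutoff, and the simple threshold rule in the two-dimensional feature space are illustrative assumptions, not the patented procedure:

```python
import numpy as np

def block_features(img, block=8, cutoff=2.0):
    """For each block x block tile, return the two feature amounts:
    summed magnitude of low-frequency FFT components (radial index <= cutoff)
    and summed magnitude of the remaining high-frequency components."""
    h, w = img.shape
    # radial frequency index of each FFT bin within one block
    f = np.fft.fftfreq(block) * block
    r = np.hypot(*np.meshgrid(f, f, indexing="ij"))
    low_mask = r <= cutoff
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            spec = np.abs(np.fft.fft2(img[y:y + block, x:x + block]))
            feats.append((spec[low_mask].sum(), spec[~low_mask].sum()))
    return np.array(feats)

def classify_blocks(feats, low_thr, high_thr):
    """Placeholder decision rule in the 2-D feature space: a block counts as
    'cell cluster' when both feature amounts exceed their thresholds."""
    return (feats[:, 0] > low_thr) & (feats[:, 1] > high_thr)
```

A uniform block has only a DC (low-frequency) response, while a textured block also has high-frequency energy, which is what separates the two classes.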
Method and apparatus for removing compressed Poisson noise of image based on deep neural network
A method for removing compressed Poisson noise in an image, based on deep neural networks, may comprise: generating a plurality of block-aggregation images by performing a block transform on low-frequency components of an input image; obtaining a plurality of restored block-aggregation images by inputting the block-aggregation images into a first deep neural network; generating a low-band output image, from which noise in the low-frequency components is removed, by performing an inverse block transform on the restored block-aggregation images; and generating an output image, from which compressed Poisson noise is removed, by adding the low-band output image to a high-band output image from which noise in the high-frequency components of the input image has been removed.
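A minimal skeleton of the two-branch band-split pipeline; a separable box blur stands in for the block transform, and the two `*_net` callables stand in for the trained deep networks:

```python
import numpy as np

def split_bands(img, ksize=5):
    """Separate an image into a low-frequency band (box-blurred) and the
    high-frequency residual, so that low + high reconstructs img exactly."""
    k = np.ones(ksize) / ksize
    low = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    low = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, low)
    return low, img - low

def denoise(img, low_net, high_net, ksize=5):
    """Two-branch pipeline: denoise each band with its own (hypothetical)
    network, then sum the restored bands to form the output image."""
    low, high = split_bands(img, ksize)
    return low_net(low) + high_net(high)
```

With identity "networks" the pipeline returns the input unchanged, which is a useful sanity check that the band split is lossless.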
Method for processing X-ray computed tomography image using neural network and apparatus therefor
A method for processing an X-ray computed tomography (CT) image using a neural network and an apparatus therefor are provided. An image reconstruction method includes receiving low-dose X-ray CT data, obtaining an initial reconstruction image for the received low-dose X-ray CT data using a predetermined analytic algorithm, and reconstructing a denoised final image using the obtained initial reconstruction image and a previously trained neural network.
Image estimating method including calculating a transverse cutoff frequency, non-transitory computer readable medium, and image estimating apparatus
The image estimating method estimates, using image data generated by capturing an object via an image-pickup optical system at a plurality of positions spaced at first intervals along the optical axis of the image-pickup optical system, image data at a position different from the plurality of positions. The method includes an image acquiring step of acquiring the image data, a frequency analyzing step of calculating a transverse cutoff frequency in a direction perpendicular to the optical axis based on the acquired image data, and an interval calculating step of calculating the first interval based on the calculated transverse cutoff frequency.
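One plausible reading of the frequency-analyzing and interval-calculating steps, sketched with an FFT energy criterion; the 99% energy fraction and the Nyquist-style factor of 0.5 are assumed parameters, not values from the source:

```python
import numpy as np

def transverse_cutoff(img, energy_frac=0.99):
    """Estimate the transverse cutoff (cycles/pixel) as the radial frequency
    below which energy_frac of the spectral energy lies."""
    spec = np.abs(np.fft.fft2(img)) ** 2
    fy = np.fft.fftfreq(img.shape[0])
    fx = np.fft.fftfreq(img.shape[1])
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    order = np.argsort(r.ravel())
    cum = np.cumsum(spec.ravel()[order])
    idx = np.searchsorted(cum, energy_frac * cum[-1])
    return r.ravel()[order][min(idx, r.size - 1)]

def first_interval(cutoff, factor=0.5):
    """Nyquist-style spacing: sample no more coarsely than factor / cutoff."""
    return factor / cutoff
```

For a pure sinusoid the estimated cutoff equals the sinusoid's frequency, and the interval follows directly from it.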
IMAGE GENERATION METHOD AND APPARATUS, AND COMPUTER
This application discloses example image generation methods. One example method includes obtaining a target vector. The target vector can then be separately input to a first generator and a second generator to correspondingly generate a first sub-image and a second sub-image, where the first generator is obtained by a server by training, based on a low-frequency image and a first random noise variable that satisfies normal distribution, a first generative adversarial network (GAN), the second generator is obtained by the server by training, based on a high-frequency image and a second random noise variable that satisfies the normal distribution, a second GAN, and a frequency of the low-frequency image is lower than a frequency of the high-frequency image. The first sub-image and the second sub-image can then be synthesized to obtain a target image.
ANALYTIC IMAGE FORMAT FOR VISUAL COMPUTING
In one embodiment, an apparatus comprises a storage device and a processor. The storage device stores a plurality of images captured by a camera. The processor: accesses visual data associated with an image captured by the camera; determines a tile size parameter for partitioning the visual data into a plurality of tiles; partitions the visual data into the plurality of tiles based on the tile size parameter, wherein the plurality of tiles corresponds to a plurality of regions within the image; compresses the plurality of tiles into a plurality of compressed tiles, wherein each tile is compressed independently; generates a tile-based representation of the image, wherein the tile-based representation comprises an array of the plurality of compressed tiles; and stores the tile-based representation of the image on the storage device.
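A toy version of the tile-based representation, using `zlib` as the per-tile codec; the actual codec and metadata layout are not specified in the abstract, so the dictionary format here is an assumption:

```python
import zlib
import numpy as np

def to_tiled(img, tile=8):
    """Partition a 2-D array into tiles, compress each tile independently,
    and return an array of compressed tiles plus rebuild geometry."""
    h, w = img.shape
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles.append(zlib.compress(img[y:y + tile, x:x + tile].tobytes()))
    return {"shape": (h, w), "tile": tile, "dtype": str(img.dtype), "tiles": tiles}

def from_tiled(rep):
    """Inverse of to_tiled: decompress each tile into its region."""
    h, w = rep["shape"]
    t = rep["tile"]
    img = np.empty((h, w), dtype=rep["dtype"])
    i = 0
    for y in range(0, h, t):
        for x in range(0, w, t):
            th, tw = min(t, h - y), min(t, w - x)
            buf = zlib.decompress(rep["tiles"][i]); i += 1
            img[y:y + th, x:x + tw] = np.frombuffer(buf, dtype=rep["dtype"]).reshape(th, tw)
    return img
```

Because each tile is compressed independently, a region of the image can be decoded without touching the other tiles, which is the point of the analytic format.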
Image processing apparatus and method and monitoring system for classifying visual elements as foreground or background
An image processing apparatus including: a unit configured to acquire a current image from an input video and a background model comprising a background image and foreground/background classification information of visual elements; a unit configured to determine first similarity measures between visual elements in the current image and the visual elements in the background model; and a unit configured to classify the visual elements in the current image as foreground or background according to the current image, the background image in the background model, and the first similarity measures. The visual elements in the background model are those whose classification information is the background and which neighbour the corresponding portions of the visual elements in the current image. Accordingly, the accuracy of foreground detection can be improved.
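A per-pixel stand-in for the similarity-based classification; real visual elements would be patches compared against neighbouring background-model elements, but an absolute-difference threshold (a hypothetical similarity measure) shows the idea:

```python
import numpy as np

def classify_foreground(current, background, thr=20.0):
    """Classify each element as foreground (True) when it is dissimilar
    enough from the background model; the absolute intensity difference
    is a placeholder for the patent's first similarity measure."""
    diff = np.abs(current.astype(float) - background.astype(float))
    return diff > thr
```

Elements similar to the background model stay classified as background, so only genuinely changed regions are flagged.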
Logo Recognition in Images and Videos
Accurate detection of logos in media content on media presentation devices is addressed. Logos and products are detected in media content captured by a camera in retail deployments. Logo recognition uses saliency analysis, segmentation techniques, and stroke analysis to segment likely logo regions, and may suitably employ feature extraction, signature representation, and logo matching. These approaches make use of neural-network-based classification and optical character recognition (OCR). One OCR method recognizes individual characters and then performs string matching; another uses segment-level character recognition with N-gram matching. Synthetic image generation for training a neural-net classifier and the transfer-learning features of neural networks are employed to support fast addition of new logos for recognition.
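The N-gram matching step for OCR output can be sketched with character n-grams and a Jaccard overlap; the exact similarity measure is not stated in the abstract, so this choice is an assumption:

```python
def ngrams(s, n=3):
    """Set of character n-grams of a string."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def ngram_similarity(a, b, n=3):
    """Jaccard overlap of n-gram sets: tolerant of the single-character
    substitutions that OCR typically produces."""
    ga, gb = ngrams(a.lower(), n), ngrams(b.lower(), n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

def match_logo(ocr_text, known_logos, n=3):
    """Return the best-matching known logo name and its similarity score."""
    return max(((name, ngram_similarity(ocr_text, name, n)) for name in known_logos),
               key=lambda t: t[1])
```

An OCR misread such as "c0ca-cola" still shares most of its trigrams with "coca-cola", so the match survives character-level errors that exact string comparison would not.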
Image processing method and apparatus, computer-readable medium, and electronic device
Embodiments of this application provide an image processing method performed at a computing device. The image processing method includes: obtaining a to-be-processed image with ghost reflection; calculating an image gradient of the to-be-processed image; determining, according to the image gradient, a gradient of a target image obtained after the ghost reflection is removed from the to-be-processed image; and generating the target image based on the gradient of the target image. According to the technical solution of the embodiments of this application, the ghost reflection in the image can be effectively removed, ensuring high quality of a processed image.
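Generating the target image from its gradient amounts to a Poisson reconstruction; the FFT-based solver below, which assumes periodic boundaries and forward-difference gradients, is one standard (but here hypothetical) way to realize that step:

```python
import numpy as np

def poisson_reconstruct(gx, gy, mean_val=0.0):
    """Recover an image from a target gradient field (gx, gy = forward
    differences with periodic wrap) by solving the Poisson equation in the
    Fourier domain. The unconstrained DC term is fixed via mean_val."""
    h, w = gx.shape
    # divergence of the gradient field (backward differences) = Laplacian
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    fy = np.fft.fftfreq(h)
    fx = np.fft.fftfreq(w)
    # eigenvalues of the periodic discrete Laplacian
    denom = (2 * np.cos(2 * np.pi * fy)[:, None] - 2) \
          + (2 * np.cos(2 * np.pi * fx)[None, :] - 2)
    denom[0, 0] = 1.0  # avoid division by zero at DC
    out = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
    return out - out.mean() + mean_val
```

In the reflection-removal setting, the target gradient would be the input gradient with ghost-reflection gradients suppressed; here the solver is checked by reconstructing a smooth image from its own gradients.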
IMAGE FUSION METHOD AND PORTABLE TERMINAL
Provided are an image fusion method and apparatus, and a portable terminal. The method comprises: obtaining several aligned images; calculating gradient information for each image; setting a mask image for each image and generating a target gradient image; performing a gradient operation on the target gradient image to obtain a target Laplacian image; and performing a deconvolution transform on the target Laplacian image to generate a fused panoramic image. This solution builds a Laplacian image from the gradient information of the several images and then applies a deconvolution transform to produce the fused panorama, eliminating image stitching color differences and achieving a better image fusion effect.
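The target-gradient assembly step (mask-selected gradients from several aligned images) can be sketched as follows; the masks are assumed mutually exclusive, and the subsequent Laplacian/deconvolution step would then recover the panorama from this field:

```python
import numpy as np

def fuse_gradients(grads_x, grads_y, masks):
    """Assemble the target gradient image: at each pixel, take the gradients
    of the source image whose mask covers that pixel. Fusing gradients rather
    than intensities is what removes color differences at stitch seams."""
    tx = np.zeros_like(grads_x[0])
    ty = np.zeros_like(grads_y[0])
    for gx, gy, m in zip(grads_x, grads_y, masks):
        tx[m] = gx[m]
        ty[m] = gy[m]
    return tx, ty
```

Applying a gradient operation to (tx, ty) yields the target Laplacian image, whose deconvolution gives the fused panorama.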