G06K9/56

Image processing apparatus, medical image diagnostic apparatus, and program

According to one embodiment, an image processing apparatus includes processing circuitry. The processing circuitry is configured to acquire medical image data. The processing circuitry is configured to obtain, for each of a plurality of texture patterns, a spatial distribution of likelihood values representing a likelihood of corresponding to the texture pattern in a predetermined region of a medical image based on the medical image data. The processing circuitry is configured to calculate feature values in the predetermined region of the medical image based on the spatial distribution obtained for each of the plurality of texture patterns.
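As a concrete illustration of the claimed feature calculation, the following Python sketch assumes each texture pattern comes with a precomputed likelihood map of the same shape as the image, and takes the mean likelihood inside the region as the feature value. The pattern names, map contents, and mean-pooling rule are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: per-pattern likelihood maps are assumed given; the
# feature value for a region is the mean likelihood over its pixels.

def region_feature_values(likelihood_maps, region):
    """likelihood_maps: {pattern name: 2D list of likelihoods in [0, 1]}
    region: iterable of (row, col) pixel coordinates of the region."""
    features = {}
    for name, lmap in likelihood_maps.items():
        vals = [lmap[r][c] for r, c in region]
        features[name] = sum(vals) / len(vals)  # one feature value per pattern
    return features

maps = {
    "ground_glass": [[0.9, 0.8], [0.1, 0.2]],
    "honeycombing": [[0.1, 0.2], [0.7, 0.6]],
}
region = [(0, 0), (0, 1)]                   # the top row as the region
print(region_feature_values(maps, region))  # means over the region pixels
```

The resulting per-pattern feature vector could then feed any downstream classifier; the abstract leaves that stage open.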

CONVOLUTIONAL NEURAL NETWORK ON PROGRAMMABLE TWO DIMENSIONAL IMAGE PROCESSOR

A method is described that includes executing a convolutional neural network layer on an image processor having an array of execution lanes and a two-dimensional shift register. The two-dimensional shift register provides local respective register space for the execution lanes. The executing of the convolutional neural network includes loading a plane of image data of a three-dimensional block of image data into the two-dimensional shift register. The executing of the convolutional neural network also includes performing a two-dimensional convolution of the plane of image data with an array of coefficient values by sequentially: concurrently multiplying within the execution lanes respective pixel and coefficient values to produce an array of partial products; concurrently summing within the execution lanes the partial products, with the respective accumulations of partial products being kept within the two-dimensional shift register for different stencils within the image data; and effecting alignment of values for the two-dimensional convolution within the execution lanes by shifting content within the two-dimensional shift register array.
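The multiply/accumulate/shift sequence can be mimicked in software. The sketch below models each pixel as an execution lane, shifts the whole plane once per coefficient so the needed neighbor sits in each lane, and keeps a per-lane accumulator; the zero padding at the borders and the 3x3 box filter are illustrative assumptions.

```python
# Software mimic of the claimed execution model: shift, multiply by a
# scalar coefficient, and accumulate per lane, once per coefficient.

def shift(plane, dr, dc):
    h, w = len(plane), len(plane[0])
    return [[plane[r + dr][c + dc] if 0 <= r + dr < h and 0 <= c + dc < w else 0
             for c in range(w)] for r in range(h)]

def conv2d_shift_register(plane, coeffs):
    h, w = len(plane), len(plane[0])
    k = len(coeffs)
    acc = [[0] * w for _ in range(h)]  # per-lane accumulation of partial products
    for i in range(k):
        for j in range(k):
            shifted = shift(plane, i - k // 2, j - k // 2)  # align values by shifting
            for r in range(h):          # these two loops are concurrent in hardware
                for c in range(w):
                    acc[r][c] += shifted[r][c] * coeffs[i][j]  # partial product
    return acc

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
box = [[1, 1, 1]] * 3                   # 3x3 box filter
print(conv2d_shift_register(img, box))  # center lane accumulates 45
```

Note that only whole-plane shifts and scalar broadcasts are used, which is what lets every stencil position be computed concurrently on the real hardware.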

Method and apparatus for detecting a salient point of a protuberant object

An image processing method and an apparatus (300) are provided, the method including: obtaining a depth image of a protuberant object (210); selecting a plurality of test points placed on a circle around a pixel in the depth image, with the pixel as the center point of the circle; calculating a protuberance value of the center point based on a comparison between the depth value of the center point and the depth value of each of the selected test points (240); and determining one or more salient points of the protuberant object by using the protuberance value of each pixel in the depth image (250).
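One illustrative reading of the comparison step is sketched below: the protuberance value of a pixel is the count of test points on a surrounding circle whose depth is greater (farther from the sensor) than the center pixel's depth. The radius, the number of test points, and the counting rule are assumptions of this sketch, not fixed by the abstract.

```python
# Assumed rule: count circle test points that are deeper than the center.
import math

def protuberance_value(depth, row, col, radius=2, n_points=8):
    h, w = len(depth), len(depth[0])
    score = 0
    for k in range(n_points):
        a = 2 * math.pi * k / n_points
        r = int(round(row + radius * math.sin(a)))  # test point on the circle
        c = int(round(col + radius * math.cos(a)))
        if 0 <= r < h and 0 <= c < w and depth[r][c] > depth[row][col]:
            score += 1
    return score

depth = [[5] * 5 for _ in range(5)]
depth[2][2] = 1                          # one pixel protrudes toward the sensor
print(protuberance_value(depth, 2, 2))   # → 8: every test point is deeper
```

Salient points would then be the pixels where this value peaks, e.g. local maxima over the depth image.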

Image processing device and method for producing in real-time a digital composite image from a sequence of digital images

Image processing device for producing in real-time a digital composite image from a sequence of digital images recorded by a camera device, in particular an endoscopic camera device, the image processing device including a selecting unit, a key point detection unit, a transforming unit and a joining unit, wherein the key point detection unit includes a maximum detection unit configured to execute the following steps separately for the filter response for the reference image and for the filter response for the further image, a variable threshold being used:
i) creating blocks by dividing the respective filter response,
ii) calculating the variable threshold for each of the blocks,
iii) discarding from further consideration those blocks in which the respective filter response at a reference point of the respective block is less than the respective variable threshold.
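Steps i)–iii) can be sketched as follows, with three assumed details the abstract does not fix: a fixed block size, a variable threshold equal to half the block maximum, and the block center as the reference point.

```python
# Sketch of block-wise thresholding of a filter response (assumed rules).

def surviving_blocks(response, block, frac=0.5):
    h, w = len(response), len(response[0])
    kept = []
    for r0 in range(0, h, block):                  # i) divide into blocks
        for c0 in range(0, w, block):
            cells = [response[r][c]
                     for r in range(r0, min(r0 + block, h))
                     for c in range(c0, min(c0 + block, w))]
            threshold = frac * max(cells)          # ii) variable threshold per block
            ref = response[min(r0 + block // 2, h - 1)][min(c0 + block // 2, w - 1)]
            if ref >= threshold:                   # iii) keep or discard the block
                kept.append((r0, c0))
    return kept

response = [[10, 0, 1, 1],
            [0, 1, 1, 1],
            [1, 1, 1, 1],
            [1, 1, 1, 1]]
print(surviving_blocks(response, 2))  # the top-left block is discarded
```

A per-block threshold adapts to local contrast, which is why the abstract calls it "variable": a globally fixed threshold would either flood bright regions with key points or miss dim ones.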

Face synthesis using generative adversarial networks
10762337 · 2020-09-01

Training a generative adversarial network (GAN) for use in facial recognition, comprising providing an input image of a particular face into a facial recognition system to obtain a faceprint; obtaining, based on the faceprint and a noise value, a set of output images from a GAN generator; obtaining feedback from a GAN discriminator, wherein obtaining feedback comprises inputting each output image into the GAN discriminator and determining a set of likelihood values indicative of whether each output image comprises a facial image; determining, based on each output image, a modified noise value; inputting each output image into a second facial recognition network to determine a set of modified faceprints; defining, based on each modified noise value and modified faceprint, feedback for the GAN generator, wherein the feedback comprises a first value and a second value; and modifying control parameters of the GAN generator.
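The data flow of this training step can be made concrete with a structural sketch. Every callable below (the recognition systems, the GAN generator and discriminator, the noise-modification rule, and the parameter update) is a placeholder assumption, shown only to trace the claim's flow, not to implement any real network.

```python
# Structural sketch of the claimed GAN training step; all models are stubs.
import random

def train_step(face_image, recog, generator, discriminator, recog2, update):
    faceprint = recog(face_image)                        # input faceprint
    noise = random.random()
    outputs = generator(faceprint, noise)                # set of output images
    likelihoods = [discriminator(img) for img in outputs]
    modified_noise = [noise * lk for lk in likelihoods]  # assumed modification rule
    modified_prints = [recog2(img) for img in outputs]   # second recognition network
    feedback = list(zip(modified_noise, modified_prints))  # (first, second) values
    update(generator, feedback)                          # modify control parameters
    return feedback

fb = train_step(
    face_image=[0.0],
    recog=lambda img: sum(img),
    generator=lambda p, n: [[p + n]],
    discriminator=lambda img: 1.0,
    recog2=lambda img: sum(img),
    update=lambda gen, feedback: None,
)
print(len(fb))  # one (modified noise, modified faceprint) pair per output image
```

The notable structural point is that the generator receives feedback from two sources at once: the discriminator (via the modified noise values) and a second recognition network (via the modified faceprints).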

Information processing apparatus configured to determine whether an abnormality is present based on an integrated score, information processing method and recording medium
10685432 · 2020-06-16

An apparatus and a method are disclosed, each of which applies a plurality of different spatial filters to one input image to generate a plurality of filtered images; calculates, for each of a plurality of pixels included in each of the plurality of filtered images, a score indicating a difference from a corresponding one of a plurality of model groups, the model groups respectively corresponding to the plurality of filtered images and each including one or more models having a parameter representing a target shape; calculates an integrated score indicating a result of integrating the scores of the corresponding pixels across the plurality of filtered images; and determines whether an abnormality is present based on the integrated score.
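A hedged sketch of this scoring pipeline: several spatial filters produce filtered images, each filtered image has its own "model" (reduced here to a per-pixel expected value), the per-pixel score is the absolute deviation from that model, and the integrated score sums the scores across filters. The filters, the model form, and the sum rule are illustrative assumptions.

```python
# Sketch: per-filter deviation scores, summed into an integrated score.

def integrated_scores(image, filters, models):
    h, w = len(image), len(image[0])
    total = [[0.0] * w for _ in range(h)]
    for f, model in zip(filters, models):
        fimg = f(image)
        for r in range(h):
            for c in range(w):
                total[r][c] += abs(fimg[r][c] - model[r][c])  # per-filter score
    return total  # an abnormality is flagged where this is large

image = [[1, 2], [3, 4]]
ident = lambda img: [row[:] for row in img]                  # assumed filter 1
double = lambda img: [[2 * v for v in row] for row in img]   # assumed filter 2
models = [[[1, 2], [3, 4]], [[2, 4], [6, 8]]]   # per-filter expected values
print(integrated_scores(image, [ident, double], models))  # all zeros: normal
```

Integrating across filters means a pixel is only flagged when it deviates consistently from several filtered views, which suppresses single-filter noise.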

Image-processing device
10567670 · 2020-02-18

An image-processing device that is easily applicable even when a subject is moving, and that generates an image having a suitable stereoscopic effect. An image-processing device (101) includes: a dark-part pixel extraction unit (107) that extracts one or more dark-part pixels; a dark-part pixel correction unit (108) that generates a correction image; a dark-part model generation unit (109) that generates a dark-part model on the basis of the dark-part pixels; and an image generation unit (110) that applies the dark-part pixels constituting the dark-part model to the correction image.

DISTANCE-INDEPENDENT KEYPOINT DETECTION
20200019809 · 2020-01-16

Described herein is a method for detecting keypoints in three-dimensional images in which a three-dimensional image of a scene captured by a depth sensing imaging system is processed using a distance-independent keypoint filter. Keypoints are derived from the three-dimensional image by determining a mean shift field and using x- and y-components of the mean shift field to derive intersections of 0-isolines thereof. Positive and negative keypoints or nodes are connected to one another, positive to positive and negative to negative, to form a keygraph structure.
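The keypoint rule can be sketched under stated assumptions: the mean shift field is given as two 2D arrays (its x- and y-components), and a pixel is reported as a keypoint where both components change sign within its 2x2 neighborhood, i.e. where a 0-isoline of the x-component crosses a 0-isoline of the y-component. How the field is computed from the depth image, and the positive/negative keygraph linking, are not sketched here.

```python
# Sketch: keypoints as intersections of 0-isolines of the two field components.

def keypoints(mx, my):
    h, w = len(mx), len(mx[0])
    pts = []
    for r in range(h - 1):
        for c in range(w - 1):
            xs = (mx[r][c], mx[r][c + 1], mx[r + 1][c], mx[r + 1][c + 1])
            ys = (my[r][c], my[r][c + 1], my[r + 1][c], my[r + 1][c + 1])
            if min(xs) < 0 < max(xs) and min(ys) < 0 < max(ys):
                pts.append((r, c))
    return pts

mx = [[-1, 1], [-1, 1]]   # x-component flips sign left-to-right
my = [[-1, -1], [1, 1]]   # y-component flips sign top-to-bottom
print(keypoints(mx, my))  # → [(0, 0)]
```

Because the test looks only at sign changes of the field, not at absolute magnitudes, the detection is independent of distance, which is the property the title claims.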