Patent classifications
G06T5/40
PARALLEL COMPUTER VISION AND IMAGE SCALING ARCHITECTURE
Embodiments relate to an architecture of a vision pipe included in an image signal processor. The architecture includes a front-end portion with a pair of image signal pipelines that generate updated luminance image data. A back-end portion of the vision pipe architecture receives the updated luminance image data from the front-end portion and performs, in parallel, scaling and various computer vision operations on the updated luminance image data. The back-end portion may repeat these parallel computer vision operations on successively scaled luminance images to generate a pyramid image.
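Below is a minimal Python/NumPy sketch of the back-end loop this abstract describes, assuming a simple 2x block-average scaler and a gradient-magnitude filter as a stand-in for the computer vision operations; all function names are illustrative and not taken from the patent, and the hardware parallelism is simply modeled as per-level calls.

```python
import numpy as np

def downscale_2x(luma):
    """Illustrative scaler: halve each dimension by averaging 2x2 blocks."""
    h, w = luma.shape
    h2, w2 = h - h % 2, w - w % 2
    blocks = luma[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2)
    return blocks.mean(axis=(1, 3))

def vision_op(luma):
    """Placeholder computer vision operation: gradient magnitude."""
    gy, gx = np.gradient(luma.astype(float))
    return np.hypot(gx, gy)

def build_pyramid(luma, levels=4):
    """Repeatedly scale the luminance image and run the vision op on each level.

    In the described architecture scaling and vision operations run in
    parallel in hardware; here they are applied sequentially per level.
    """
    pyramid, features = [luma], [vision_op(luma)]
    for _ in range(levels - 1):
        luma = downscale_2x(luma)
        pyramid.append(luma)
        features.append(vision_op(luma))
    return pyramid, features
```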
Hardware-Based Convolutional Color Correction in Digital Images
A computing device may obtain an input image. The input image may have a white point represented by chrominance values that define white color in the input image. Possibly based on colors of the input image, the computing device may generate a two-dimensional chrominance histogram of the input image. The computing device may convolve the two-dimensional chrominance histogram with a filter to create a two-dimensional heat map. Entries in the two-dimensional heat map may represent respective estimates of how close respective tints corresponding to the respective entries are to the white point of the input image. The computing device may select an entry in the two-dimensional heat map that represents a particular value that is within a threshold of a maximum value in the heat map, and based on the selected entry, tint the input image to form an output image.
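The following is a rough NumPy/SciPy sketch of the steps this abstract names, assuming a YUV-style chrominance pair (u, v) in [-0.5, 0.5], a fixed histogram resolution, and a simple box filter; the selection step just takes the maximum heat-map entry, which trivially satisfies the "within a threshold of the maximum" condition. Nothing here is taken from the patent's actual filter or tinting math.

```python
import numpy as np
from scipy.signal import convolve2d

def chrominance_histogram(u, v, bins=64):
    """Two-dimensional histogram over the image's (u, v) chrominance values."""
    hist, u_edges, v_edges = np.histogram2d(
        u.ravel(), v.ravel(), bins=bins, range=[[-0.5, 0.5], [-0.5, 0.5]]
    )
    return hist, u_edges, v_edges

def estimate_white_point(u, v, filt=None, bins=64):
    """Convolve the chrominance histogram with a filter and pick the peak entry."""
    if filt is None:
        filt = np.ones((5, 5)) / 25.0          # box filter as an assumed stand-in
    hist, u_edges, v_edges = chrominance_histogram(u, v, bins)
    heat_map = convolve2d(hist, filt, mode="same")
    iu, iv = np.unravel_index(np.argmax(heat_map), heat_map.shape)
    # Bin centres of the selected entry give the estimated white-point tint.
    u_wp = 0.5 * (u_edges[iu] + u_edges[iu + 1])
    v_wp = 0.5 * (v_edges[iv] + v_edges[iv + 1])
    return u_wp, v_wp

def correct_tint(u, v, u_wp, v_wp):
    """Shift chrominance so the estimated white point moves to neutral (0, 0)."""
    return u - u_wp, v - v_wp
```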
Grayscale histogram generation
In a graphics processing unit (GPU), receiving an input image comprising an array of pixels, each pixel having a grayscale value from a range of N grayscale values. For each particular input patch of pixels of a set of input patches partitioning the input image, and in parallel for each particular grayscale value in the range, counting the number of pixels in the particular input patch having the particular grayscale value. In parallel for each particular input patch, creating an output image patch as an ordered sequence of N pixels, with the color value of the nth pixel in each corresponding output patch representing the count of pixels in the particular input patch having the nth grayscale value. Combining the output image patches into a single composite output image of N pixels, the pixel value of the nth pixel in the single composite output image corresponding to the count of pixels in the input image having the nth grayscale value.
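A small CPU-side Python sketch of the patch-then-combine structure described above; each patch would be handled by a GPU work-group in parallel, but here the patches are simply looped over. The patch size and level count are assumptions for illustration only.

```python
import numpy as np

def patch_histograms(image, patch=16, n_levels=256):
    """Per-patch grayscale histograms (conceptually computed in parallel)."""
    h, w = image.shape
    out_patches = []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[y:y + patch, x:x + patch]
            # The output "patch" is an ordered sequence of n_levels pixels whose
            # values are the counts for grayscale values 0..n_levels-1.
            counts = np.bincount(tile.ravel(), minlength=n_levels)
            out_patches.append(counts)
    return out_patches

def combine(out_patches):
    """Composite output image of n_levels pixels: pixel n holds the count of
    pixels in the whole input image with grayscale value n."""
    return np.sum(out_patches, axis=0)

# Example: a random 8-bit image, 16x16 patches, 256 grayscale levels.
img = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
histogram = combine(patch_histograms(img))
assert histogram.sum() == img.size
```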
Binary tracking of an anatomical tracking structure on medical images
Disclosed is a computer-implemented method for determining a position of an anatomical tracking structure in a tracking image usable for controlling a radiation treatment, such as at least one of radiotherapy or radiosurgery of a patient; a corresponding computer program; a non-transitory program storage medium storing such a program and a computer for executing the program; as well as a system for determining the position of an anatomical tracking structure in such a tracking image, the system comprising an electronic data storage device and the aforementioned computer.
Object recognition method and object recognition device performing the same
Provided is an object recognition device for performing object recognition on a field of view (FoV). The object recognition device includes a light detection and ranging (LiDAR) data acquisition module configured to acquire data for the FoV from a sensor configured to project the FoV with a laser and receive reflected light, and a control module configured to perform object recognition on an object of interest in the FoV using an artificial neural network, wherein the control module includes a region of interest extraction module configured to acquire region of interest data based on acquired intensity data for the FoV, and an object recognition module configured to acquire object recognition data using an artificial neural network, and recognize the object of interest for the FoV.
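A hedged Python sketch of the module pipeline this abstract outlines: region-of-interest extraction from per-point intensity data followed by a neural-network classification stage. The abstract only states that ROIs are derived from intensity data; the thresholding rule, point-cloud layout, and classifier interface below are assumptions for illustration, not the patent's method.

```python
import numpy as np

def extract_regions_of_interest(points, threshold=0.6, min_points=50):
    """Assumed ROI rule: keep points whose reflected-light intensity exceeds a
    threshold and report a bounding box if enough points remain.
    Point-cloud columns are assumed to be x, y, z, intensity."""
    mask = points[:, 3] > threshold
    roi_points = points[mask]
    if roi_points.shape[0] < min_points:
        return []
    lo, hi = roi_points[:, :3].min(axis=0), roi_points[:, :3].max(axis=0)
    return [(roi_points, (lo, hi))]

def recognize_objects(rois, model):
    """Run an artificial neural network (any point-cloud classifier) on each ROI."""
    return [model(points) for points, _bbox in rois]

# Usage with a stand-in classifier (a real implementation would load a trained model).
point_cloud = np.random.rand(10_000, 4)          # x, y, z, intensity in [0, 1)
labels = recognize_objects(extract_regions_of_interest(point_cloud),
                           model=lambda pts: "object_of_interest")
```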