
IMAGE SIGNAL PROCESSOR, OPERATION METHOD OF IMAGE SIGNAL PROCESSOR, AND IMAGE SENSOR DEVICE INCLUDING IMAGE SIGNAL PROCESSOR

Provided are image signal processing apparatuses and methods for operating the same. In an embodiment, the image signal processing apparatus is configured to receive, from an image sensor device, an input image. The apparatus is further configured to perform a binning and crop operation on the input image to generate a first image. The apparatus is further configured to perform Bayer domain processing on the first image to generate a second image. The apparatus is further configured to perform RGB domain processing on the second image to generate a third image. The apparatus is further configured to perform YUV domain processing on the third image to generate an output image. The YUV domain processing comprises at least one of a spatial de-noising operation, a temporal de-noising operation, a motion compensation operation, a tone mapping operation, a detail enhancement operation, and a sharpening operation.
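As a rough illustration of the staged pipeline this abstract describes, the sketch below chains a binning-and-crop stage with placeholder domain-processing stages. All function names, the 2x2 binning factor, and the identity stand-ins for the Bayer/RGB/YUV stages are assumptions for illustration, not taken from the patent.

```python
def bin_and_crop(img, factor=2):
    """Average each factor x factor block (binning), implicitly cropping
    rows/columns that do not fill a complete block."""
    h, w = len(img) // factor, len(img[0]) // factor
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            block = [img[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def process(img, stages):
    """Run the image through each domain-processing stage in order."""
    for stage in stages:
        img = stage(img)
    return img

# Identity stand-ins for the Bayer, RGB, and YUV domain stages.
pipeline = [bin_and_crop, lambda x: x, lambda x: x, lambda x: x]
result = process([[1, 3], [5, 7]], pipeline)  # one 2x2 block -> [[4.0]]
```

The point of the sketch is the staged structure: each stage consumes the previous stage's output, so individual stages can be swapped without touching the rest of the pipeline.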

IMAGING APPARATUS
20230012208 · 2023-01-12

An imaging apparatus includes: an image sensor that captures a subject image to generate image data; a first depth measurer that acquires first depth information indicating a depth at a first spatial resolution, the depth representing a distance between the imaging apparatus and a subject in an image indicated by the image data; a second depth measurer that acquires second depth information indicating the depth in the image at a second spatial resolution different from the first spatial resolution; and a controller that acquires third depth information indicating the depth at the first or second spatial resolution for each of different regions in the image, based on the first depth information and the second depth information.
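A minimal sketch of the controller's per-region fusion step, assuming the two depth maps have been resampled to a common grid and that a per-pixel selection map encodes which measurer each region should use (the patent does not specify the selection mechanism):

```python
def fuse_depth(d1, d2, selection):
    """Build the third depth map by picking, per pixel, the value from
    the first (selection == 0) or second (selection == 1) depth source."""
    return [[d2[r][c] if selection[r][c] else d1[r][c]
             for c in range(len(d1[0]))]
            for r in range(len(d1))]

d1 = [[1.0, 1.0], [1.0, 1.0]]    # e.g. a coarse depth source
d2 = [[2.0, 2.0], [2.0, 2.0]]    # e.g. a finer depth source
sel = [[0, 1], [1, 0]]           # per-region choice from the controller
fused = fuse_depth(d1, d2, sel)  # [[1.0, 2.0], [2.0, 1.0]]
```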

System and method of acquiring coordinates of pupil center point

A system and a method of calculating coordinates of a pupil center point are provided. The system for acquiring the coordinates of the pupil center point includes a first camera, a second camera, a storage, and a processor. The first camera is configured to capture a first image including a face and output it to the processor; the second camera is configured to capture a second image including a pupil and output it to the processor. The resolution of the first camera is lower than the resolution of the second camera. The storage is configured to store processing data, and the processor is configured to: acquire the first image and the second image; extract a first eye region corresponding to an eye from the first image; convert the first eye region into the second image, to acquire a second eye region corresponding to the eye in the second image; and detect a pupil in the second eye region and acquire the coordinates of the pupil center point.
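The region-conversion step can be sketched as a coordinate mapping, under the loud simplifying assumption that the two cameras share a viewpoint and differ only by a scale factor (a real system would need a full calibration/homography; the function name and box layout are illustrative):

```python
def map_region(box, scale_x, scale_y):
    """Map an (x, y, w, h) eye-region box from the low-resolution first
    image into the high-resolution second image's pixel coordinates."""
    x, y, w, h = box
    return (x * scale_x, y * scale_y, w * scale_x, h * scale_y)

# e.g. first camera 640x480, second camera 1920x1440 -> 3x in both axes
eye_box_lowres = (100, 80, 40, 20)
eye_box_highres = map_region(eye_box_lowres, 3, 3)  # (300, 240, 120, 60)
```

Pupil detection then runs only inside the mapped high-resolution region, which is the payoff of pairing a cheap wide view with a detailed narrow one.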

Method and device for automatically drawing structural cracks and precisely measuring widths thereof
11551341 · 2023-01-10

The present invention discloses a method and device for automatically drawing structural cracks and precisely measuring their widths. The method comprises an automatic crack-drawing procedure and a crack-width calculation procedure based on a single-pixel skeleton and Zernike orthogonal moments: the former rapidly and precisely draws cracks in the surface of a structure, and the latter calculates the widths of macro-cracks and micro-cracks in an image in real time.
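As a coarse stand-in for the skeleton-based width measurement (the patent's Zernike-moment subpixel refinement is not reproduced here), the sketch below estimates the local width at each skeleton point as twice the distance to the nearest background pixel; the mask/skeleton representation and function name are assumptions:

```python
def widths_along_skeleton(mask, skeleton):
    """Estimate the local crack width at each single-pixel skeleton
    point as twice the distance to the nearest non-crack (0) pixel."""
    bg = [(r, c) for r, row in enumerate(mask)
          for c, v in enumerate(row) if v == 0]
    widths = []
    for (r, c) in skeleton:
        d = min(((r - br) ** 2 + (c - bc) ** 2) ** 0.5 for br, bc in bg)
        widths.append(2 * d)
    return widths

# 5x5 binary mask with a horizontal crack spanning rows 1..3
mask = [[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 0, 0]]
skeleton = [(2, 1), (2, 2), (2, 3)]   # centerline of the crack
widths = widths_along_skeleton(mask, skeleton)  # [4.0, 4.0, 4.0]
```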

METHOD AND SYSTEM FOR MAP TARGET TRACKING
20230215036 · 2023-07-06

A method of tracking a map target according to one embodiment of the present disclosure, which tracks the map target through a map target tracking application executed by at least one processor of a terminal, includes: acquiring a basic image obtained by photographing a 3D space; acquiring a plurality of sub-images obtained by dividing the acquired basic image for respective sub-spaces in the 3D space; creating a plurality of sub-maps based on the plurality of acquired sub-images; determining at least one main key frame for each of the plurality of created sub-maps; creating a 3D main map by combining the plurality of sub-maps for which the at least one main key frame is determined; and tracking current posture information in the 3D space based on the created 3D main map.
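A toy sketch of the sub-map construction, under the simplifying assumptions that the basic image is a sequence of frames split evenly into sub-spaces and that the main key frame of each sub-map is simply its best-scoring frame (the patent does not specify the selection criterion; all names are illustrative):

```python
def build_main_map(frames, n_subspaces, score):
    """Split captured frames into sub-spaces, build one sub-map per
    split, pick the best-scoring frame of each as its main key frame,
    and combine the sub-maps into a single main map."""
    size = len(frames) // n_subspaces
    submaps = [frames[i * size:(i + 1) * size] for i in range(n_subspaces)]
    key_frames = [max(sm, key=score) for sm in submaps]
    return {"submaps": submaps, "key_frames": key_frames}

# frames as (frame_id, sharpness); sharper frames make better key frames
frames = [(0, 0.2), (1, 0.9), (2, 0.5), (3, 0.7)]
main_map = build_main_map(frames, 2, score=lambda f: f[1])
# main_map["key_frames"] -> [(1, 0.9), (3, 0.7)]
```

Tracking would then match the current camera view against the key frames to localize within the combined 3D map.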

SYSTEMS AND METHODS FOR INTELLIGENTLY COMPRESSING WHOLE SLIDE IMAGES
20230215052 · 2023-07-06

Systems and methods for compressing images include a memory storing executable code and a processor executing the code to: receive a whole slide image containing a plurality of image layers and metadata associated with each image layer; extract a high-resolution image layer and the corresponding metadata, wherein the high-resolution image layer includes a plurality of image tiles comprising informative tiles, which depict a region of interest of the specimen, and noninformative tiles; analyze the image tiles of the extracted high-resolution image layer; determine that a first tile is a noninformative tile; create an informative image layer, containing a plurality of informative tiles, by removing the first tile from the extracted high-resolution image layer; compress the informative image layer into a single-layer whole slide image; and save the single-layer whole slide image in the memory.
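A minimal sketch of the tile-filtering step, assuming a simple pixel-variance test as the informativeness criterion (the patent does not specify how tiles are classified; names and the threshold are illustrative):

```python
def variance(tile):
    """Population variance of a flat list of pixel values."""
    m = sum(tile) / len(tile)
    return sum((v - m) ** 2 for v in tile) / len(tile)

def informative_layer(tiles, is_informative):
    """Keep only tiles that depict the region of interest, producing
    the layer that is subsequently compressed and saved."""
    return [t for t in tiles if is_informative(t)]

# tiles as flat pixel lists; near-constant tiles read as blank background
tiles = [[10, 10, 10, 10], [0, 255, 40, 200], [12, 11, 12, 11]]
kept = informative_layer(tiles, lambda t: variance(t) > 100)
# only the high-variance tile survives -> len(kept) == 1
```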

Stereo correspondence search
11550387 · 2023-01-10

Methods, systems, devices and computer software/program code products enable efficiently finding stereo correspondence between a feature or set of features in a first image or signal, and a search domain in a second image or signal.
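A classical sketch of such a correspondence search, assuming rectified rows and a sum-of-absolute-differences (SAD) matching cost; the patent claims an efficient search strategy, which this brute-force illustration does not reproduce:

```python
def best_disparity(left_row, right_row, x, patch, max_d):
    """Find the horizontal shift (disparity) whose patch in the right
    row best matches the patch starting at x in the left row, by SAD."""
    ref = left_row[x:x + patch]
    best = (float("inf"), 0)
    for d in range(min(max_d, x) + 1):
        cand = right_row[x - d:x - d + patch]
        sad = sum(abs(a - b) for a, b in zip(ref, cand))
        best = min(best, (sad, d))
    return best[1]

left = [0, 0, 0, 9, 9, 9, 0, 0]
right = [0, 9, 9, 9, 0, 0, 0, 0]   # same edge, shifted left by 2
d = best_disparity(left, right, 3, patch=3, max_d=4)  # -> 2
```

Disparity is then inversely proportional to depth, which is why stereo correspondence is the core of passive depth estimation.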

Digital foveation for machine vision

A machine vision method includes: obtaining a first representation of an image captured by an image sensor array; analyzing the first representation to assess whether it is sufficient to support execution of a machine vision task by the processor; if the first representation is not sufficient, determining, based on the first representation, a region of the image of interest for the execution of the machine vision task; reusing the image captured by the image sensor array to obtain a further representation of the image, by directing the image sensor array to sample the captured image in a manner guided by the determined region of interest and by the assessment; and analyzing the further representation to assess whether it is sufficient to support execution of the machine vision task, by implementing a procedure for the execution of the machine vision task in accordance with the further representation.
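The coarse-then-fine sampling idea can be sketched as below, where `sample(r, c)` stands in for re-reading a pixel from the sensor array; the frame size, step sizes, and region-of-interest format are all assumptions for illustration:

```python
def foveate(sample, size, roi, coarse_step=4):
    """Coarsely sample the whole size x size frame, then re-sample only
    the region of interest at full resolution ('digital foveation')."""
    coarse = {(r, c): sample(r, c)
              for r in range(0, size, coarse_step)
              for c in range(0, size, coarse_step)}
    r0, r1, c0, c1 = roi
    fine = {(r, c): sample(r, c)
            for r in range(r0, r1) for c in range(c0, c1)}
    return coarse, fine

# synthetic sensor: pixel value encodes its position
coarse, fine = foveate(lambda r, c: r * 8 + c, size=8, roi=(2, 4, 2, 4))
# 4 coarse samples cover the frame; 4 fine samples cover the ROI
```

The key property mirrored here is that total sensor reads stay far below a full-resolution capture while the task-relevant region is still densely sampled.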

Learning model architecture for image data semantic segmentation

A learning model may provide a hierarchy of convolutional layers configured to perform convolutions upon image features: each layer other than the topmost passes the image features, convolved at a lower resolution, to a higher layer, and each layer other than the bottommost returns the image features to a lower layer. Each layer fuses the lower-resolution image features received from a higher layer with same-resolution image features convolved at that layer, so as to combine large-scale and small-scale features of images. The number of layers in the hierarchy may be substantially equal to the number of lateral convolutions at the bottommost convolutional layer. The bottommost convolutional layer ultimately passes the fused features to an attention mapping module, which utilizes two attention mapping pathways in combination to detect non-local dependencies and interactions between large-scale and small-scale features of images without de-emphasizing local interactions.
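The down-then-up fusion structure can be sketched on a 1D feature vector, with pair-averaging standing in for strided convolution, repetition for upsampling, and element-wise addition for the learned fusion (all of these are stand-ins, not the patent's operations):

```python
def down(x):
    """Halve resolution by averaging adjacent pairs (conv stand-in)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def up(x):
    """Double resolution by repeating each value (upsample stand-in)."""
    return [v for v in x for _ in (0, 1)]

def fuse_hierarchy(features, depth):
    """Descend `depth` levels to coarser features, then fuse each
    level's features with the upsampled coarser output on the way up."""
    levels = [features]
    for _ in range(depth):
        levels.append(down(levels[-1]))
    out = levels[-1]
    for lvl in reversed(levels[:-1]):
        out = [a + b for a, b in zip(lvl, up(out))]
    return out

fused = fuse_hierarchy([1, 2, 3, 4], depth=2)  # [5.0, 6.0, 9.0, 10.0]
```

The output mixes the original fine-scale values with coarse-scale averages, which is the large-scale/small-scale combination the abstract describes.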

LEARNING DEVICE, OBJECT DETECTION DEVICE, LEARNING METHOD, AND RECORDING MEDIUM

A learning device makes an object detection device learn how to detect an object from an input image. A feature extraction unit performs feature extraction from input images, including real images and pseudo images, to generate feature maps, and an object detection unit detects objects included in the input images based on the feature maps. A domain identification unit identifies the domains forming the input images and generates domain identifiability information. Then, the feature extraction unit and the object detection unit learn common features that do not depend on the difference in domains, based on the domain identifiability information.
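One common way to realize such domain-invariant feature learning is an adversarial objective in which the extractor is rewarded for confusing the domain identifier; the sketch below shows only that objective (a stand-in for gradient-reversal-style training, with the weighting `lam` assumed, not taken from the abstract):

```python
def extractor_objective(det_loss, domain_loss, lam=0.5):
    """Objective for the feature extractor: minimise detection loss
    while maximising domain-identification loss (subtracted term),
    pushing features toward domain-invariance."""
    return det_loss - lam * domain_loss

# at equal detection loss, features that confuse the domain identifier
# (higher domain loss) score better for the extractor:
confusing = extractor_objective(1.0, 0.8)  # 0.6
revealing = extractor_objective(1.0, 0.2)  # 0.9
```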