G06V10/75

Determining an item that has confirmed characteristics

In various example embodiments, a system and method for determining an item that has confirmed characteristics are described herein. An image that depicts an object is received from a client device. Structured data that corresponds to characteristics of one or more items is retrieved. A set of characteristics is determined, the set of characteristics being predicted to match the object. An interface that includes a request for confirmation of the set of characteristics is generated. The interface is displayed on the client device. Confirmation that at least one characteristic from the set of characteristics matches the object depicted in the image is received from the client device.
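The confirm-characteristics exchange above could be sketched as follows (a minimal illustration in Python; the function names and payload shape are assumptions, not the patented implementation):

```python
def build_confirmation_request(predicted):
    """Build an interface payload asking the user to confirm each
    characteristic predicted for the object in the uploaded image."""
    return {
        "prompt": "Do these characteristics match the item in your photo?",
        "characteristics": [{"name": c, "confirmed": None} for c in predicted],
    }

def apply_confirmations(request, confirmed_names):
    """Record the client's answers and return the confirmed subset."""
    for entry in request["characteristics"]:
        entry["confirmed"] = entry["name"] in confirmed_names
    return [e["name"] for e in request["characteristics"] if e["confirmed"]]

# Hypothetical round trip: the server predicts two characteristics,
# the client confirms one of them.
request = build_confirmation_request(["brand: Acme", "color: red"])
confirmed = apply_confirmations(request, {"color: red"})
```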

PICTURE ENHANCEMENT METHOD AND APPARATUS

Provided are a picture enhancement method and apparatus. The picture enhancement method includes: for any pixel in a current block that meets a first filtering condition, determining a first pixel value of the pixel after first filtering; and enhancing a pixel value of the pixel based on the first pixel value and a second pixel value of the pixel before the first filtering, so as to obtain a third, enhanced pixel value of the pixel.
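The first/second/third pixel-value relationship reads like a loop-filter enhancement step. A minimal sketch, assuming the third value is a weighted blend of the post-filter and pre-filter values (the weight `alpha` is illustrative; the actual derivation is defined by the method itself):

```python
import numpy as np

def enhance_pixel(first_filtered, original, alpha=0.5):
    """Blend the first (post-filter) value with the second (pre-filter)
    value to obtain the third, enhanced value. `alpha` is an assumed
    enhancement weight."""
    return original + alpha * (first_filtered - original)

# Hypothetical 2x2 block: `block` holds second pixel values (pre-filter),
# `filtered` holds first pixel values (post-filter).
block = np.array([[100.0, 110.0], [120.0, 130.0]])
filtered = np.array([[104.0, 112.0], [118.0, 128.0]])
enhanced = enhance_pixel(filtered, block, alpha=0.5)
```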

DATA IDENTIFICATION METHOD AND APPARATUS
20230215125 · 2023-07-06

This disclosure relates to a data processing method and apparatus. The method includes: acquiring a first prediction region in a target image, the first prediction region being the prediction region with the maximum prediction category probability among N prediction regions in the target image, a prediction category probability being the probability that an object in a prediction region belongs to a prediction object category; determining a coverage region jointly covered by a second prediction region and the first prediction region, the second prediction region being a prediction region other than the first prediction region among the N prediction regions; and determining a target prediction region among the prediction regions based on an area of the coverage region and a similarity associated with the second prediction region, the similarity indicating how similar an object in the second prediction region is to an object in the first prediction region.
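The selection step resembles non-maximum suppression extended with an object-similarity test. A hedged sketch (boxes as `(x1, y1, x2, y2)`; the coverage and similarity thresholds are illustrative assumptions):

```python
def overlap_area(box_a, box_b):
    """Area of the region jointly covered by two axis-aligned boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    return max(0, x2 - x1) * max(0, y2 - y1)

def select_target_regions(regions, area_thresh=0.5, sim_thresh=0.8):
    """regions: list of (box, category_probability, similarity_to_first).
    Keep the max-probability region; suppress a second region only when
    its coverage with the first is large AND its object is similar."""
    first = max(regions, key=lambda r: r[1])
    first_area = (first[0][2] - first[0][0]) * (first[0][3] - first[0][1])
    kept = [first]
    for r in regions:
        if r is first:
            continue
        coverage = overlap_area(first[0], r[0]) / first_area
        if not (coverage > area_thresh and r[2] > sim_thresh):
            kept.append(r)
    return kept

# Hypothetical regions: the second heavily overlaps the first and is the
# same object (suppressed); the third is a distinct object (kept).
regions = [((0, 0, 10, 10), 0.9, 1.0),
           ((1, 1, 11, 11), 0.8, 0.9),
           ((50, 50, 60, 60), 0.7, 0.1)]
targets = select_target_regions(regions)
```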

Image modification using detected symmetry

Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing the applicability, to the multiple local symmetries, of multiple candidate homographies contributed by those local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map. The image modification module produces a manipulated image by manipulating the original image under global symmetry constraints imposed by the global symmetry association map.
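As a toy illustration of the "repeated correspondence" idea, the following sketch searches for a single horizontal translation under which small patches of an image best repeat. This covers only the translation case along one axis; the described method also handles rotation, reflection, scaling, and full homographies:

```python
import numpy as np

def best_translation(image, patch=2):
    """Return the horizontal offset dx at which patches of `image`
    repeat with the lowest mean absolute difference."""
    h, w = image.shape
    best, best_err = None, float("inf")
    for dx in range(1, w - patch):
        err, count = 0.0, 0
        for x in range(0, w - patch - dx):
            a = image[:, x:x + patch]
            b = image[:, x + dx:x + dx + patch]
            err += float(np.abs(a - b).sum())
            count += 1
        if count and err / count < best_err:
            best_err, best = err / count, dx
    return best

# An image whose columns repeat with period 3.
image = np.tile(np.array([[0, 1, 2]]), (2, 3))
period = best_translation(image)
```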

Method and apparatus for determining a physical shape, method for manufacturing a calculation device, calculation device, and use of the calculation device

Provided is a method for determining a physical shape having a predefined physical target property that includes calculating a sensitivity landscape on the basis of a shape data record for the physical shape with the aid of a calculation device. The calculation device is a machine-taught artificial intelligence device. The shape data record identifies locations at or on the physical shape. For a plurality of these locations, the sensitivity landscape respectively indicates how the target property of the physical shape changes if the physical shape changes in the region of the location. Furthermore, the shape data record for the physical shape to be determined is changed on the basis of the sensitivity landscape in such a manner that the predefined physical target property is improved.
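The loop of "compute a sensitivity landscape, then change the shape record where the landscape says the target property improves" can be sketched numerically. Here a finite-difference gradient stands in for the machine-taught calculation device, and the shape record is a flat list of location values (both are assumptions for illustration):

```python
def sensitivity_landscape(shape, target_property, eps=1e-3):
    """For each location in the shape record, estimate how the target
    property changes when the shape changes in that location's region.
    The patent computes this with a trained AI device; a forward
    finite difference is a stand-in."""
    base = target_property(shape)
    sens = []
    for i in range(len(shape)):
        perturbed = list(shape)
        perturbed[i] += eps
        sens.append((target_property(perturbed) - base) / eps)
    return sens

def improve_shape(shape, target_property, step=0.1, iters=50):
    """Repeatedly move the shape record along the sensitivity landscape
    so that the predefined target property improves."""
    for _ in range(iters):
        sens = sensitivity_landscape(shape, target_property)
        shape = [x + step * s for x, s in zip(shape, sens)]
    return shape

# Hypothetical target property, maximized when every location equals 1.
target = lambda s: -sum((x - 1.0) ** 2 for x in s)
improved = improve_shape([0.0, 0.0], target)
```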

DAMAGE INFORMATION PROCESSING DEVICE, DAMAGE INFORMATION PROCESSING METHOD, AND PROGRAM
20230214992 · 2023-07-06

Provided are a damage information processing device, a damage information processing method, and a program that make it easy for a user to recognize a point having a chronologically unnatural difference. A damage information processing device (10) includes a processor (20) and is configured to process information about damage to a structure. The processor (20) acquires information about damage to a structure, the information including first damage information about a state at one point in time and second damage information about a state at a later point in time. The processor (20) extracts difference information concerning the difference between the first damage information and the second damage information; detects, by searching through the difference information, a first category point where only the first damage information is involved or where the first damage information is greater than the second damage information; and outputs, to a display device, an alert indication in connection with at least one of the first damage information or the second damage information to indicate the first category point.
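The "first category point" test reduces to a simple rule: damage that exists only in the earlier record, or is larger earlier than later, is chronologically unnatural (damage should not shrink over time). A minimal sketch, assuming each record maps a damage location to a measured extent (keys and units are illustrative):

```python
def first_category_points(first, second):
    """first/second: dicts mapping damage location -> measured extent
    (e.g. crack width in mm) at an earlier and a later inspection.
    Returns locations where only the first record has damage, or where
    the first record's extent exceeds the second's."""
    flagged = []
    for loc, extent in first.items():
        later = second.get(loc)
        if later is None or extent > later:
            flagged.append(loc)
    return flagged

# Hypothetical inspections: crack-B vanished from the later record,
# which is chronologically unnatural and should trigger an alert.
first = {"crack-A": 2.0, "crack-B": 1.0}
second = {"crack-A": 2.5}
alerts = first_category_points(first, second)
```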

METHOD AND APPARATUS FOR GENERATING AND PROVIDING DATA VIDEO FOR GENERATING TRAINING DATA OF ARTIFICIAL INTELLIGENCE MODEL AND RECORDING MEDIUM ON WHICH PROGRAM FOR THE SAME IS RECORDED
20230215146 · 2023-07-06

Provided are a method and apparatus for generating and providing a data video for generating training data of an artificial intelligence model, and a recording medium on which a program for the same is recorded. According to various embodiments of the present disclosure, a method includes: generating light detection and ranging (LiDAR) images using LiDAR point cloud data for a predetermined area; generating a LiDAR video using each of the LiDAR images as a unit frame; providing a user interface (UI) that outputs the generated LiDAR video; when a unit-frame change request is acquired from a user, comparing a first unit frame currently being output with a second unit frame that follows the first unit frame to detect at least one pixel whose attribute changes; and updating only the one or more detected pixels in the first unit frame using the second unit frame.
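The frame-change step above is a delta update: only pixels whose attribute differs between the two unit frames are rewritten. A minimal sketch with NumPy arrays standing in for unit frames:

```python
import numpy as np

def delta_update(current, following):
    """Detect pixels whose attribute changes between two unit frames
    and update only those pixels in the current frame."""
    changed = current != following
    updated = current.copy()
    updated[changed] = following[changed]
    return updated, int(changed.sum())

# Hypothetical 2x2 unit frames: a single pixel attribute changes.
frame1 = np.array([[1, 2], [3, 4]])
frame2 = np.array([[1, 9], [3, 4]])
updated, n_changed = delta_update(frame1, frame2)
```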

Ultrafast, robust and efficient depth estimation for structured-light based 3D camera system

A system and a method are disclosed for a structured-light system to estimate depth in an image. An image is received of a scene onto which a reference light pattern has been projected. The projection of the reference light pattern includes a predetermined number of particular sub-patterns. A patch of the received image and a sub-pattern of the reference light pattern are matched based on either a hardcoded template-matching technique or a probability that the patch corresponds to the sub-pattern. If a lookup table is used, the table may be a probability matrix, may contain precomputed correlation scores, or may contain precomputed class IDs. An estimate of depth of the patch is determined based on a disparity between the patch and the sub-pattern.
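The pipeline can be sketched in two steps: correlate an image patch against the known sub-patterns, then triangulate depth from the disparity. The correlation loop below is the naive form of the matching (one of the options mentioned; a lookup table of precomputed correlation scores would replace it), and the baseline/focal-length values are illustrative:

```python
import numpy as np

def match_subpattern(patch, subpatterns):
    """Return the index of the sub-pattern with the highest correlation
    score against the patch (naive template matching)."""
    scores = [float(np.sum(patch * sp)) for sp in subpatterns]
    return int(np.argmax(scores))

def depth_from_disparity(disparity, baseline, focal_length):
    """Standard triangulation for a calibrated projector-camera pair."""
    return baseline * focal_length / disparity

# Hypothetical binary patch and two candidate sub-patterns.
patch = np.array([1.0, 0.0, 1.0])
subpatterns = [np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
idx = match_subpattern(patch, subpatterns)

# Illustrative numbers: 2-pixel disparity, 0.1 m baseline, 500 px focal length.
depth = depth_from_disparity(2.0, 0.1, 500.0)
```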

IMAGE ACQUISITION APPARATUS AND ELECTRONIC APPARATUS INCLUDING THE SAME

An image acquisition apparatus includes: a multispectral image sensor configured to acquire images in at least four channels based on a second wavelength band of about 10 nm to about 1,000 nm; and a processor configured to estimate illumination information of the images by inputting the images of at least four channels to a deep learning network trained in advance, and convert colors of the acquired images using the estimated illumination information.
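The final color-conversion step can be illustrated with a diagonal (von Kries style) correction: divide each channel by the estimated illuminant and renormalize. This stands in for whatever conversion the apparatus actually applies; here the illuminant is simply given rather than estimated by the pre-trained deep learning network:

```python
import numpy as np

def correct_colors(image, illumination):
    """Diagonal correction over the channel axis: divide by the
    per-channel illuminant estimate, then rescale so overall
    brightness is preserved."""
    illum = np.asarray(illumination, dtype=float)
    corrected = image / illum
    return corrected * illum.mean()

# Hypothetical 2x2 image with four spectral channels, lit exactly by
# the estimated illuminant, so correction yields a flat response.
illumination = [0.8, 1.0, 1.2, 1.0]
image = np.ones((2, 2, 4)) * np.array(illumination)
corrected = correct_colors(image, illumination)
```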

IMAGE PROCESSING METHODS AND SYSTEMS FOR TRAINING A MACHINE LEARNING MODEL TO PREDICT ILLUMINATION CONDITIONS FOR DIFFERENT POSITIONS RELATIVE TO A SCENE
20230214708 · 2023-07-06

An image processing method generates a training dataset for training a machine learning model to predict illumination conditions for different positions relative to a scene, the training dataset including training images and reference data. The method includes: obtaining a training image of a training scene acquired by a first camera having an associated first coordinate system; determining local illumination maps, each associated with a respective position in the training scene in a respective second coordinate system and representing illumination received from different directions around the position; transforming the position of each local illumination map from the second to the first coordinate system; and, responsive to determining that the transformed position of a local illumination map is visible, transforming the local illumination map from the second to the first coordinate system and including the transformed local illumination map and its transformed position in the reference data associated with the training image.