G06V10/443

IMAGE PROCESSING APPARATUS AND VEHICLE
20230098659 · 2023-03-30

An image processing apparatus includes an extractor and an object identifier. The extractor is configured to extract a feature quantity from a captured image, and the object identifier is configured to identify an object on the basis of the feature quantity. The extractor extracts the feature quantity by repeatedly performing, on the captured image, a convolution calculation using a filter of multiple two-dimensionally arranged filter values. The filter values are initially set to values that are line-symmetric with respect to an axis of symmetry along a predetermined direction. An augmentation batch of the learning data used in the machine-learning update process of the filter includes, as paired images, an extracted unflipped image and a flipped image obtained by flipping the unflipped image about the axis of symmetry.
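The symmetric filter initialization and flip-paired augmentation described above can be sketched in Python. This is a minimal illustration, not the patent's implementation: the filter size, the value range, and the vertical choice of symmetry axis are assumptions.

```python
import random

def symmetric_filter(size, rng=random.Random(0)):
    # Initialize a size x size filter whose values are mirror-symmetric
    # about the vertical axis (an assumed axis direction).
    f = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range((size + 1) // 2):
            v = rng.uniform(-1, 1)
            f[i][j] = v
            f[i][size - 1 - j] = v   # mirrored value across the axis
    return f

def flip_paired_batch(images):
    # Augmentation batch: each extracted (unflipped) image is paired
    # with its flip about the same axis of symmetry.
    batch = []
    for img in images:
        flipped = [row[::-1] for row in img]
        batch.append((img, flipped))
    return batch
```

Training on such pairs keeps the gradient updates balanced about the axis, which is what lets the filter preserve its line symmetry.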

Methods and Systems for Augmented Reality Tracking Based on Volumetric Feature Descriptor Data

An illustrative augmented reality tracking system obtains a volumetric feature descriptor dataset that includes: 1) a plurality of feature descriptors associated with a plurality of views of a volumetric target, and 2) a plurality of 3D structure datapoints that correspond to the plurality of feature descriptors. The system also obtains an image frame captured by a user equipment (UE) device. The system identifies a set of image features depicted in the image frame and detects, based on a match between the set of image features depicted in the image frame and a set of feature descriptors of the plurality of feature descriptors, that the volumetric target is depicted in the image frame. In response to this detecting and based on 3D structure datapoints corresponding to matched feature descriptors, the system determines a spatial relationship between the UE device and the volumetric target. Corresponding methods and systems are also disclosed.
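The detection step above hinges on nearest-neighbor matching between image features and the stored volumetric feature descriptors. A toy sketch follows; the distance threshold and minimum-match count are assumptions, and the pose (spatial-relationship) solve from the matched 3D structure datapoints is omitted.

```python
import math

def match_descriptors(image_feats, target_descs, max_dist=0.5):
    # Nearest-neighbor match of image features against the stored
    # volumetric feature descriptors (threshold value is illustrative).
    matches = []
    for i, f in enumerate(image_feats):
        best, best_d = None, max_dist
        for j, d in enumerate(target_descs):
            dist = math.dist(f, d)
            if dist < best_d:
                best, best_d = j, dist
        if best is not None:
            matches.append((i, best))
    return matches

def detect_target(matches, min_matches=4):
    # Declare the volumetric target detected when enough descriptors
    # match; the matched indices then select the 3D structure
    # datapoints fed to the pose estimate (not shown).
    return len(matches) >= min_matches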
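```
In a real system the 2D-3D correspondences recovered here would typically go to a PnP-style solver to produce the UE-to-target transform.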

Augmented reality image matching
11495007 · 2022-11-08

A computer-implemented method identifies the presence of an image trigger in a digitally captured scene. The method includes, at a component installed on a client device and upon commencement of an image trigger matching operation: a) obtaining, from a multiplicity of image triggers, a subset of image triggers; b) subdividing the subset into a plurality of image trigger sub-subsets; and c) for each sub-subset in turn, and at predefined time intervals, submitting the sub-subset to an Augmented Reality (AR) core Application Programming Interface (API), causing the client device to cache the received sub-subsets, search the digitally captured scene for the presence of one or more of the cached image triggers, and report a positive match of an image trigger to said component using said AR core API.
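The subdivide-and-submit loop in steps b) and c) is easy to sketch. The chunk size and interval are assumptions, and `submit_fn` is a hypothetical stand-in for the platform's AR core API call, which is not specified here.

```python
import time

def subdivide(triggers, chunk_size):
    # Step b): subdivide the subset of image triggers into
    # fixed-size sub-subsets.
    return [triggers[i:i + chunk_size]
            for i in range(0, len(triggers), chunk_size)]

def submit_in_turn(sub_subsets, submit_fn, interval_s=0.0):
    # Step c): submit each sub-subset in turn at a predefined
    # interval; submit_fn stands in for the AR core API submission.
    for chunk in sub_subsets:
        submit_fn(chunk)
        time.sleep(interval_s)
```

Rotating small sub-subsets through the API like this lets a device with a bounded trigger cache still cover a much larger trigger library over time.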

Connected interactive content data creation, organization, distribution and analysis

A method for identifying a product which appears in a video stream. The method includes playing the video stream on a video playback device, identifying key scenes in the video stream containing product images, and selecting product images identified by predetermined categories of trained neural-network object identifiers stored in training datasets. Object identifiers of identified product images are stored in a database. Edge detection and masking are then performed based on at least one of the shape, color, and perspective of the object identifiers, and a polygon annotation of the object identifiers is created from the edge detection and masking. The polygon annotation is then refined to provide correct object-identifier content, an accurate polygon shape, and the title, description, and URL of the object identifier for each identified product image corresponding to the stored object identifiers. Also disclosed is a method for an end user to select and interact with an identified product.
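The mask-to-polygon and annotation steps can be illustrated with a toy sketch. The rectangle approximation and the annotation field names are assumptions; a real pipeline would trace the mask edge to a tighter polygon.

```python
def mask_to_polygon(mask):
    # Approximate a detected product mask by its bounding polygon
    # (a rectangle here; edge-traced polygons are the real goal).
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]

def annotate(object_id, polygon, title, description, url):
    # Attach the annotation fields named in the abstract to one
    # identified product image (dict layout is an assumption).
    return {"id": object_id, "polygon": polygon,
            "title": title, "description": description, "url": url}
```

The resulting annotation record is what an end user's click on the playback device would resolve against to fetch the product's title, description, and URL.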

APPARATUS AND METHOD FOR DETECTING KEYPOINT BASED ON DEEP LEARNING USING INFORMATION CHANGE ACROSS RECEPTIVE FIELDS

Disclosed herein are an apparatus and method for detecting keypoints based on deep learning that is robust to scale changes, using information change across receptive fields. The apparatus includes a feature extractor for extracting a feature from an input image based on a pre-trained deep learning neural network, an information accumulation pyramid module for outputting, from the feature, at least two filter responses corresponding to receptive fields of different scales, an information change detection module for calculating the information change between the at least two filter responses, a keypoint detection module for creating a score map holding the keypoint probability of each pixel based on the information change, and a continuous scale estimation module for estimating, for each pixel, the scale of the receptive field having the largest information change.
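The core idea, comparing filter responses across receptive-field scales, can be approximated with plain box filters. This is only a hand-built stand-in for the learned information accumulation pyramid; the two radii are assumptions.

```python
def box_response(img, y, x, r):
    # Mean intensity over a (2r+1) x (2r+1) receptive field
    # centred on (y, x), clipped at the image border.
    h, w = len(img), len(img[0])
    vals = [img[i][j]
            for i in range(max(0, y - r), min(h, y + r + 1))
            for j in range(max(0, x - r), min(w, x + r + 1))]
    return sum(vals) / len(vals)

def score_map(img, radii=(1, 2)):
    # Information change approximated as the absolute difference
    # between responses at two receptive-field scales; high values
    # mark candidate keypoints.
    h, w = len(img), len(img[0])
    return [[abs(box_response(img, y, x, radii[0]) -
                 box_response(img, y, x, radii[1]))
             for x in range(w)] for y in range(h)]
```

A flat region scores zero at every scale, while an isolated structure scores highest at the scale whose receptive field just contains it, which is the intuition behind the continuous scale estimate.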

IMAGE PROCESSING-BASED FOREIGN SUBSTANCE DETECTION METHOD IN WIRELESS CHARGING SYSTEM AND DEVICE PERFORMING METHOD

An image processing-based foreign substance detection method in a wireless charging system and a device performing the method are disclosed. A method for detecting a foreign substance according to an example embodiment includes an operation of acquiring an image of a charging area of a wireless charging system, an operation of detecting, based on an RGB value of a frame of the image, a foreign substance in the charging area, an operation of discriminating a type of the foreign substance, and an operation of performing power control of the wireless charging system according to the type of the foreign substance.
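A minimal sketch of the RGB-based detection and type discrimination follows. The tolerance, the blob-size heuristic for discriminating foreign-substance types, and the class names are all assumptions; the abstract does not specify how types are distinguished.

```python
def detect_foreign(frame, baseline, tol=30):
    # Flag pixels whose RGB value deviates from the empty-pad
    # baseline frame by more than a tolerance (assumed threshold).
    return [(y, x)
            for y, (row, brow) in enumerate(zip(frame, baseline))
            for x, (px, bpx) in enumerate(zip(row, brow))
            if max(abs(a - b) for a, b in zip(px, bpx)) > tol]

def classify(flagged, metal_threshold=5):
    # Toy type discrimination by flagged-region size; the result
    # would drive power control of the wireless charging system
    # (e.g. cutting power for suspected metal objects).
    if not flagged:
        return "none"
    return "metal" if len(flagged) >= metal_threshold else "debris"
```

The point of discriminating the type is safety: a metallic object in the charging area heats up under the field, so its class should gate the power-control decision.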

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
20230034076 · 2023-02-02 ·

An information processing apparatus that performs control related to movement of a moving object configured to measure its own position includes a memory storing instructions and at least one processor that, upon execution of the instructions, is configured to operate as: a first acquisition unit configured to acquire environmental information about the environment where the moving object moves; an estimation unit configured to estimate, based on the environmental information, first positional information indicating a region where measurement accuracy is degraded below a threshold value; and a determination unit configured to determine the content of control information based on the first positional information.
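As a rough sketch, the estimation and determination units could be modeled as thresholding an accuracy map and adapting the control near degraded cells. The grid representation, threshold, and speed-reduction policy are all assumptions for illustration.

```python
def degraded_regions(accuracy_map, threshold=0.5):
    # Estimation unit: first positional information as the grid
    # cells where estimated self-position measurement accuracy
    # falls below the threshold.
    return [(y, x)
            for y, row in enumerate(accuracy_map)
            for x, a in enumerate(row) if a < threshold]

def control_info(position, regions):
    # Determination unit: a toy policy that slows the moving
    # object down when it is adjacent to a degraded cell.
    near = any(abs(position[0] - y) + abs(position[1] - x) <= 1
               for y, x in regions)
    return {"max_speed": 0.2 if near else 1.0}
```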

Tree crown extraction method based on unmanned aerial vehicle multi-source remote sensing

A tree crown extraction method based on UAV multi-source remote sensing includes: obtaining a visible light image and LIDAR point clouds, taking a digital orthophoto map (DOM) and the LIDAR point clouds as data sources, and using watershed segmentation together with object-oriented multi-scale segmentation to extract single-tree crown information under different canopy densities. The object-oriented multi-scale segmentation method is used to separate crown and non-crown areas, and the tree crown distribution range is extracted using the crown area as a mask. A preliminary segmentation of single tree crowns is obtained by the watershed method based on a canopy height model. Taking the brightness value of the DOM as a feature, secondary segmentation is performed on the crown area of the DOM along the crown boundaries to obtain optimized single-tree crown boundary information, which greatly increases the accuracy of remote sensing tree crown extraction.
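The watershed step is typically seeded with treetop markers found as local maxima of the canopy height model (CHM). A toy version of that marker step is sketched below; the 3x3 neighbourhood and the minimum-height cutoff are assumptions, and the watershed flooding itself (e.g. `skimage.segmentation.watershed`) is omitted.

```python
def crown_markers(chm, min_height=2.0):
    # Treetop markers for watershed segmentation: local maxima of
    # the canopy height model above an assumed minimum tree height.
    h, w = len(chm), len(chm[0])
    markers = []
    for y in range(h):
        for x in range(w):
            v = chm[y][x]
            if v < min_height:
                continue
            neigh = [chm[i][j]
                     for i in range(max(0, y - 1), min(h, y + 2))
                     for j in range(max(0, x - 1), min(w, x + 2))
                     if (i, j) != (y, x)]
            if all(v >= n for n in neigh):
                markers.append((y, x))
    return markers
```

Each marker then seeds one watershed basin, giving the preliminary one-crown-per-treetop segmentation that the DOM brightness pass later refines.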

Method for generating a reconstructed image

A method for generating a reconstructed image is adapted to an input image having a target object. The method comprises: converting the input image into a feature map of feature vectors by an encoder; performing a training procedure on training images of reference objects to generate feature prototypes associated with the training images, and storing the feature prototypes in a memory; selecting a part of the feature prototypes stored in the memory according to similarities between the feature prototypes and the feature vectors; generating a similar feature map according to the selected feature prototypes and weights, wherein the weights represent the similarities between the selected feature prototypes and the feature vectors; and converting the similar feature map into the reconstructed image by a decoder, wherein the encoder, the decoder, and the memory form an auto-encoder.
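The prototype-selection and weighted-combination step is the heart of this memory-augmented auto-encoder. A sketch for one feature vector follows; cosine similarity, top-k selection, and similarity-normalized weights are assumptions about how "similarity" and "weights" are realized.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def nearest_prototypes(feature, memory, k=2):
    # Select the k feature prototypes most similar to the encoder's
    # feature vector; the similarities, renormalized, become the
    # weights of the combination the decoder reconstructs from.
    scored = sorted(((cosine(feature, p), p) for p in memory),
                    reverse=True)
    top = scored[:k]
    total = sum(s for s, _ in top)
    weights = [s / total for s, _ in top]
    return [sum(w * p[i] for w, (_, p) in zip(weights, top))
            for i in range(len(feature))]
```

Because the decoder only ever sees combinations of normal-object prototypes, anomalous regions of the target object reconstruct poorly, which is the usual motivation for this memory design.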

Recurrent Deep Neural Network System for Detecting Overlays in Images
20230087773 · 2023-03-23 ·

In one aspect, an example method includes a processor (1) applying a feature map network to an image to create a feature map comprising a grid of vectors characterizing at least one feature in the image and (2) applying a probability map network to the feature map to create a probability map assigning a probability to the at least one feature in the image, where the assigned probability corresponds to a likelihood that the at least one feature is an overlay. The method further includes the processor determining that the probability exceeds a threshold, and responsive to the processor determining that the probability exceeds the threshold, performing a processing action associated with the at least one feature.
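The thresholding stage at the end of the pipeline is straightforward to sketch. The feature-map and probability-map networks themselves are not reproduced; the grid-of-probabilities input, the threshold value, and the callback-style processing action are assumptions.

```python
def overlay_features(prob_map, threshold=0.8):
    # Grid cells whose assigned overlay probability exceeds the
    # threshold, i.e. features deemed likely to be overlays.
    return [(y, x)
            for y, row in enumerate(prob_map)
            for x, p in enumerate(row) if p > threshold]

def process(prob_map, action, threshold=0.8):
    # Apply the processing action (e.g. masking or logging the
    # overlay region) to every feature that passed the threshold.
    return [action(cell) for cell in overlay_features(prob_map,
                                                      threshold)]
```

In the described system the per-cell probabilities come from applying the probability map network to the grid of feature vectors, so each flagged cell localizes one suspected overlay in the image.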