G06V10/759

IMAGE AUGMENTATION APPARATUS, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
20230289928 · 2023-09-14

An image augmentation apparatus (2000) acquires an original training dataset (40). The original training dataset (40) includes a ground-view image (42) and an original aerial-view image (44). The image augmentation apparatus (2000) performs an image augmentation including a cropping process and a rotation process on the original aerial-view image (44) to generate an augmented aerial-view image (54). In the cropping process, a target region with a circle shape or a regular polygon shape is cropped from the original aerial-view image (44). In the rotation process, the original aerial-view image (44) is rotated. When the target region has a regular polygon shape, the angle of the rotation is a multiple of the central angle of the target region. The image augmentation apparatus (2000) outputs an augmented training dataset (50) including the ground-view image (52) and the augmented aerial-view image (54).
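The crop-then-rotate augmentation above can be sketched as follows for the square case (a regular 4-gon, central angle 90°), where rotating by a multiple of the central angle keeps the cropped region congruent with itself. This is an illustrative sketch, not the patented implementation; the function name and the toy array are assumptions.

```python
import numpy as np

def augment_aerial_view(image: np.ndarray, k: int) -> np.ndarray:
    """Crop a centered square target region (a regular 4-gon whose central
    angle is 90 degrees) and rotate it by k * 90 degrees, so the rotated
    crop still lies entirely inside the original region."""
    h, w = image.shape[:2]
    side = min(h, w)                        # largest centered square crop
    top, left = (h - side) // 2, (w - side) // 2
    crop = image[top:top + side, left:left + side]
    return np.rot90(crop, k=k)              # rotation angle = k * central angle

aerial = np.arange(24).reshape(4, 6)        # toy 4x6 "original aerial-view image"
aug = augment_aerial_view(aerial, k=1)      # centered 4x4 crop, rotated 90 degrees
```

With a circular target region, any rotation angle preserves the crop, which is why the multiple-of-central-angle constraint only applies to the regular polygon case.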

INFORMATION PROCESSING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND INFORMATION PROCESSING METHOD
20230290112 · 2023-09-14

An information processing apparatus includes a processor configured to: extract a first region showing a feature of an object to be registered and having a predetermined size from an object surface image obtained by imaging a surface of the object; specify, from the object surface image, a second region to be compared with the first region and having a same size as the first region; derive a similarity between the first region and the second region; and register, as a registered image, an image of the first region in a case where the similarity between the first region and the second region satisfies a predetermined standard.

Automated detection of controls in computer applications with region based detectors

Controls within images of a user interface of a computer application are detected by way of region-based R-FCN and Faster R-CNN engines. Datasets comprising images containing application controls are retrieved, wherein the application controls include images of controls where width is greater than height, where width is equal to height, and where height is greater than width. Each of the datasets is processed with the R-FCN and Faster R-CNN engines to generate software configured to recognize, from an input image, application controls characterized by dimensions where width is greater than height, where width is substantially equal to height, and where height is greater than width.
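The three dimension classes the detectors are trained on reduce to a simple aspect-ratio categorization, sketched below. The function name and class labels are illustrative assumptions, not terms from the abstract.

```python
def control_shape(width: int, height: int) -> str:
    """Categorize an application control's bounding box into the three
    dimension classes the region-based detectors are trained to recognize."""
    if width > height:
        return "wide"      # width greater than height
    if width == height:
        return "square"    # width (substantially) equal to height
    return "tall"          # height greater than width
```

In practice a detector would emit a bounding box per control, and a categorization like this would be applied to each box's dimensions.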

Similarity determination apparatus, similarity determination method, and similarity determination program
11756292 · 2023-09-12

A display control unit displays a tomographic image of a specific tomographic plane in a first medical image on a display unit. A finding classification unit classifies each pixel of a partial region of the first medical image into at least one finding. A feature amount calculation unit calculates a first feature amount for each finding in the partial region. A weighting coefficient setting unit sets a weighting coefficient indicating a degree of weighting, which varies depending on a size of each finding, for each finding. A similarity derivation unit performs a weighting operation for the first feature amount for each finding calculated in the partial region and a second feature amount for each finding calculated in a second medical image on the basis of the weighting coefficient to derive a similarity between the first medical image and the second medical image.
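The weighting operation described above can be sketched as a size-weighted comparison of per-finding feature amounts. The finding names, feature values, and the choice of normalized sizes as weighting coefficients are illustrative assumptions; the abstract specifies only that the coefficient varies with the size of each finding.

```python
import numpy as np

# Hypothetical findings with per-finding feature amounts for the partial
# region of the first medical image and for the second medical image.
findings = ["ground_glass", "honeycomb", "nodule"]
first_features = np.array([0.8, 0.1, 0.3])    # first feature amounts
second_features = np.array([0.7, 0.2, 0.4])   # second feature amounts
finding_sizes = np.array([120.0, 15.0, 40.0])  # e.g. pixel count per finding

# Weighting coefficient grows with the size of each finding (one simple choice).
weights = finding_sizes / finding_sizes.sum()

# Weighted comparison of the feature amounts; closer to 1 means more similar.
similarity = 1.0 - np.sum(weights * np.abs(first_features - second_features))
```

Weighting by finding size means that large, clinically dominant findings influence the derived similarity more than small incidental ones.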

Information processing apparatus, information processing method, and storage medium
11749017 · 2023-09-05

Provided are an information processing apparatus, an information processing method, and a storage medium capable of acquiring feature information relating to sweat gland pores that can realize highly accurate identification of an individual. The information processing apparatus includes: a sweat gland pore extraction unit that extracts sweat gland pores from an image including a skin marking; and an information acquisition unit that acquires sweat gland pore information, including position information and directional information, for each of the sweat gland pores.

Estimating danger from future falling cargo

A method for estimating a future fall of a cargo, the method may include receiving, by a computerized system, sensed information related to driving sessions of multiple vehicles; applying a machine learning process on the sensed information to detect actual or estimated cargo falling events and generate one or more future falling cargo predictors for multiple types of cargo; estimating, from the sensed information, an impact of cargo falling events related to at least some of the types of cargo; and responding to the estimating, wherein the responding comprises at least one out of (a) storing the one or more future falling cargo predictors for the multiple types of cargo; (b) transmitting the one or more future falling cargo predictors for the multiple types of cargo; (c) storing the estimated impact of cargo falling events related to the at least some of the types of cargo; and (d) transmitting the impact of cargo falling events related to the at least some of the types of cargo.

Method and system for generating navigation data for a geographical location
11656090 · 2023-05-23

An approach is provided for generating navigation data of a geographical location. The approach involves identifying a landmark located along a source road from a source image and segmenting the source image using a deep learning model to identify a segmentation mask. The approach also involves generating a template image based on the segmentation mask and a street image of the landmark, and matching the template image successively with a sequence of images of the landmark to determine a confidence score. The approach further involves identifying a first image from the sequence of images with a confidence score below a predetermined threshold, and selecting a second image with a confidence score above the predetermined threshold from the sequence of images. The approach further involves calculating a visibility distance of the landmark based on the source image and the second image, and generating the navigation data based on the calculated visibility distance.

Systems and methods for object recognition

The present disclosure relates to systems and methods for object recognition. The system may obtain an image and a model. The image may include a search region in which the object recognition process is performed. In the object recognition process, for each of one or more sub-regions of the search region, the system may determine a match metric indicating a similarity between the model and the sub-region of the search region. Further, the system may determine an instance of the model among the one or more sub-regions of the search region based on the match metrics.
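The sub-region scoring above is essentially template matching: slide the model over the search region, score every sub-region with a match metric, and pick the best. The sketch below uses negative sum of squared differences as one possible metric; the abstract does not specify which metric is used, so this choice and the function name are assumptions.

```python
import numpy as np

def best_match(search_region: np.ndarray, model: np.ndarray):
    """Score every sub-region of the search region against the model with a
    simple match metric (negative SSD; 0 is a perfect match) and return the
    top-left corner and score of the best-matching sub-region."""
    mh, mw = model.shape
    sh, sw = search_region.shape
    best_score, best_pos = -np.inf, None
    for y in range(sh - mh + 1):
        for x in range(sw - mw + 1):
            sub = search_region[y:y + mh, x:x + mw]
            score = -np.sum((sub - model) ** 2)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

region = np.zeros((5, 5))
region[2:4, 1:3] = 1.0                    # plant the model at (2, 1)
pos, score = best_match(region, np.ones((2, 2)))
```

Production systems typically replace this brute-force scan with normalized cross-correlation computed via FFTs, but the metric-per-sub-region structure is the same.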

Human identifying device, human identifying method and human-presence-based illuminating system thereof

A human identifying device includes a temperature sensing module, a temperature pattern recognizing module and a human body identifying module. The temperature sensing module senses a temperature distribution within a target area. The temperature pattern recognizing module determines at least one matching region within the target area based on a human-resembling temperature feature in the temperature distribution. The temperature pattern recognizing module identifies at least one matching temperature sensor that corresponds to the at least one matching region. The human body identifying module determines whether the distribution of the at least one matching temperature sensor resembles at least one part of a human body contour.
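The pipeline above can be sketched on a toy sensor grid: threshold readings into a human-resembling temperature band to find matching sensors, then apply a crude contour test. The grid values, the temperature band, and the taller-than-wide heuristic are illustrative assumptions standing in for the device's contour-resemblance check.

```python
import numpy as np

# Toy 8x8 grid of temperature sensor readings over the target area (degrees C).
readings = np.full((8, 8), 22.0)       # ambient background
readings[2:7, 3:5] = 34.0              # a roughly human-warm, upright blob

# Matching sensors: readings within a human-resembling temperature band.
matching = (readings > 30.0) & (readings < 38.0)

# Crude body-contour check: an upright human blob should be taller than wide.
ys, xs = np.nonzero(matching)
height = ys.max() - ys.min() + 1
width = xs.max() - xs.min() + 1
is_human_like = bool(matching.any() and height > width)
```

In the illuminating-system context, `is_human_like` would gate whether the lights in the target area are switched on.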

Multi-channel object matching
11651583 · 2023-05-16

A method may include obtaining first sensor data captured by a first sensor system and second sensor data captured by a second sensor system of a different type from the first sensor system. The method may include detecting a first object included in the first sensor data and a second object included in the second sensor data. The method may include assigning a first label to the first object and a second label to the second object after comparing the first and the second sensor data. The first and second labels may indicate degrees to which the first and the second objects match. Responsive to the first and second labels indicating that the first and the second objects match, the method may include designating a matched object representative of the first object and the second object and sending the matched object to a downstream computing system of an autonomous vehicle.
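One common way to assign such match labels after comparing detections from two sensor channels is bounding-box overlap (intersection-over-union). The sketch below is an assumption about the comparison step, not the patented method; the function names, the label strings, and the 0.5 threshold are all illustrative.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def label_pair(first_box, second_box, threshold=0.5):
    """Label a pair of detections from two sensor channels by their degree
    of overlap; a 'match' pair would be fused into one matched object and
    sent downstream."""
    return "match" if iou(first_box, second_box) >= threshold else "no_match"

# A detection from each sensor channel, e.g. camera and lidar, in a shared frame.
label = label_pair((10, 10, 50, 50), (12, 8, 52, 48))
```

A real multi-channel matcher would first project both detections into a common coordinate frame and could grade the label (strong/weak match) rather than using a single binary threshold.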