G06V10/7515

Method for determining a confidence value of an object of a class
11531832 · 2022-12-20

A method is described for determining a confidence value for an object of a class detected by a neural network in an input image. The method includes: preparing an activation signature from a multiplicity of output images of a layer of the neural network for the class of the object, with the input image provided to the input of the neural network; scaling the activation signature to the size of the input image; and comparing the portion of the activation signature's area that overlaps the area of an object frame, relative to the total area of the activation signature, in order to determine the confidence value.
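
The final comparison step reduces to an area-overlap ratio. A minimal NumPy sketch, assuming (as an illustration only, not from the patent) that both the scaled activation signature and the object frame are available as boolean masks of the input-image size:

```python
import numpy as np

def confidence_from_overlap(signature_mask, frame_mask):
    """Confidence = (signature area overlapping the object frame) / (signature area)."""
    signature_area = signature_mask.sum()
    if signature_area == 0:
        return 0.0
    overlap_area = np.logical_and(signature_mask, frame_mask).sum()
    return float(overlap_area) / float(signature_area)

# Toy example: a 16-pixel activation signature, object frame overlapping 4 of them.
sig = np.zeros((8, 8), dtype=bool)
sig[2:6, 2:6] = True           # activation signature, scaled to image size
frame = np.zeros((8, 8), dtype=bool)
frame[4:8, 4:8] = True         # object frame; overlap is the 2x2 block at [4:6, 4:6]
print(confidence_from_overlap(sig, frame))   # 4 / 16 = 0.25
```

A value near 1.0 would mean the network's class-specific activation lies almost entirely inside the detected object frame.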

TOOL FOR COUNTING AND SIZING PLANTS IN A FIELD

Aspects include methods and apparatuses generally relating to agricultural technology and artificial intelligence and, more particularly, to counting and sizing plants in a field. One aspect relates to a plant analysis apparatus for computer analysis of plants in an area of interest that generally includes an input device for receiving at least one aerial image of the area of interest; and an object-mask-predicting region-based convolutional neural network (Mask R-CNN) for performing object detection, wherein the Mask R-CNN is trained to detect a selected vegetable and to determine the numbers and sizes of the objects detected.
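
Once the detector has produced per-plant instance masks, the counting and sizing step is simple post-processing. A hypothetical NumPy sketch, assuming a Mask R-CNN-style detector has already emitted boolean masks and that the aerial image's ground-sample distance is known (both names and units here are illustrative assumptions):

```python
import numpy as np

def count_and_size(instance_masks, gsd_cm_per_px=1.0):
    """Return the plant count and each plant's ground area in cm^2.
    instance_masks: list of boolean (H, W) arrays, one per detected plant.
    gsd_cm_per_px: ground-sample distance (cm of ground per image pixel)."""
    areas_cm2 = [float(mask.sum()) * gsd_cm_per_px ** 2 for mask in instance_masks]
    return len(areas_cm2), areas_cm2

# Two toy detections: 4-pixel and 9-pixel masks at 2 cm/pixel resolution.
m1 = np.zeros((5, 5), dtype=bool); m1[0:2, 0:2] = True
m2 = np.zeros((5, 5), dtype=bool); m2[1:4, 1:4] = True
print(count_and_size([m1, m2], gsd_cm_per_px=2.0))   # (2, [16.0, 36.0])
```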

ELECTRONIC DEVICE AND METHOD FOR CORRECTING IMAGE LATENCY

An electronic device includes a communication interface and a processor. The processor receives a first image, predistorted and rendered at a first time, from an external device through the communication interface; calculates a pixel shift for each pixel of the received first image at a second time different from the first time; generates a second image by reprojecting the first image based on the calculated pixel shifts; and transmits the generated second image to the external device through the communication interface.
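
The reprojection step can be illustrated as a per-pixel gather. A simplified sketch assuming integer pixel shifts and nearest-neighbour sampling (a real device would presumably interpolate sub-pixel shifts and handle disocclusions, which this omits):

```python
import numpy as np

def reproject(image, shift):
    """Reproject `image` using a per-pixel displacement field.
    image: (H, W) array; shift: (H, W, 2) integer (dy, dx) per output pixel.
    Each output pixel samples the source at its shifted location (gather)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + shift[..., 0], 0, h - 1)   # clamp at image borders
    src_x = np.clip(xs + shift[..., 1], 0, w - 1)
    return image[src_y, src_x]

img = np.arange(9.0).reshape(3, 3)
shift = np.zeros((3, 3, 2), dtype=int)
shift[..., 1] = 1                 # every output pixel samples one column to the right
print(reproject(img, shift))
```

With a head-tracked display, the shift field would come from the pose change between the first and second times; here it is a uniform one-column shift purely for illustration.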

Using identity information to facilitate interaction with people moving through areas
11587349 · 2023-02-21

A system receives a digital representation of a biometric for a person, uses the digital representation of the biometric to determine and/or otherwise retrieve identity information associated with the person, and uses the identity information to perform one or more actions related to the person's presence in one or more areas. For example, the system may estimate a path for the person and signal an agent electronic device based on the path. In another example, the system may determine a presence of a person within the area and/or transmit information to an agent electronic device regarding the determined presence. In still another example, the system may receive a request to communicate with the person and forward the communication to the person using the identity information.

System and method for space object detection in daytime sky images

In some embodiments, space objects may be detected within shortwave infrared (SWIR) images captured during the daytime. Some embodiments include obtaining a stacked image by stacking SWIR images. A spatial background-difference image may be generated based on the stacked image, and a matched-filter image may be obtained based on the spatial background-difference image. A binary mask may be generated based on the matched-filter image. The binary mask may include a plurality of bits, each of which includes a first value or a second value based on whether a signal-to-noise ratio (SNR) associated with that bit satisfies a threshold condition. Output data may be generated based on the spatial background-difference image and the binary mask, where the output data provides observations of detected space objects in orbit.
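
The mask-generation step amounts to per-pixel SNR thresholding. A hedged NumPy sketch, assuming a matched-filter image and a scalar noise estimate (the abstract does not specify the noise model or threshold; both are illustrative here):

```python
import numpy as np

def snr_binary_mask(matched_filter_image, noise_sigma, snr_threshold):
    """Each bit is 1 where the pixel's SNR meets the threshold, else 0."""
    snr = matched_filter_image / noise_sigma
    return (snr >= snr_threshold).astype(np.uint8)

mf = np.array([[0.5, 6.0],
               [2.0, 12.0]])
mask = snr_binary_mask(mf, noise_sigma=1.0, snr_threshold=5.0)
print(mask)   # [[0 1]
              #  [0 1]]
```

Bits set to 1 mark candidate space-object detections that survive into the output data.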

METHOD AND SYSTEM FOR CLASSIFICATION OF AN OBJECT IN A POINT CLOUD DATA SET

A method for classifying an object in a point cloud includes computing first and second classification statistics for one or more points in the point cloud. Closest matches are determined between the first and second classification statistics and a respective one of a set of first and second classification statistics corresponding to a set of N classes of a respective first and second classifier, to estimate the object is in a respective first and second class. If the first class does not correspond to the second class, a closest fit is performed between the point cloud and model point clouds for only the first and second classes of a third classifier. The object is assigned to the first or second class, based on the closest fit, within near real time of receiving the point cloud. A device is operated based on the assigned object class.
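
The decision logic is a two-classifier agreement check with an expensive tie-break. A schematic sketch with hypothetical inputs: per-class reference statistics as rows of a matrix, and a caller-supplied `fit_error` standing in for the third classifier's closest-fit (all names are illustrative assumptions):

```python
import numpy as np

def classify_object(stat1, stat2, ref_stats1, ref_stats2, cloud, models, fit_error):
    """stat1/stat2: statistics computed from the point cloud;
    ref_stats1/ref_stats2: (N, d) reference statistics, one row per class;
    models: per-class model point clouds; fit_error(cloud, model) -> misfit."""
    c1 = int(np.argmin(np.linalg.norm(ref_stats1 - stat1, axis=1)))
    c2 = int(np.argmin(np.linalg.norm(ref_stats2 - stat2, axis=1)))
    if c1 == c2:
        return c1
    # Classifiers disagree: run the costly closest-fit for just these two classes.
    return min((c1, c2), key=lambda c: fit_error(cloud, models[c]))
```

The point is that the third, slower classifier only ever compares two candidate classes, which is what keeps the assignment within near real time.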

Object collating device and object collating method

It is an object of the present invention to provide an object collating device and an object collating method that enable matching of images of a dividable medical article with desirable accuracy and easy confirmation of matching results. In the object collating device according to the first aspect, when the object is determined to be divided, the first matching image is collated with the second matching image, i.e., the matching image for the objects in the undivided state, so that the region to be matched is not narrowed, and matching of the images of the dividable medical article is achieved with desirable accuracy. In addition, since the first and second display processing is performed on the display images determined to contain objects of the same type, matching results can easily be confirmed.

METHOD, APPARATUS, AND PROGRAM FOR MATCHING POINT CLOUD DATA
20230101689 · 2023-03-30

An apparatus for matching point cloud data according to an embodiment includes: a generation unit configured to generate an input model, obtained by reducing the data amount of the point cloud data, and an input model group including a plurality of the input models; an extraction unit configured to extract some of the input models from the input model group according to shape information of the input models; a calculation unit configured to compare an extracted model with a reference model based on reference point cloud data and to calculate a cost; a determination unit configured to determine whether or not the cost has converged; and a change unit configured to change a position and a posture so that the cost is reduced when the cost has not converged.
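
The calculation/determination/change units form an iterative cost-minimisation loop. A toy sketch, assuming (unlike real point cloud matching, where correspondences must be estimated) that the extracted model and reference are point-for-point corresponding arrays and that only a translation is adjusted:

```python
import numpy as np

def match(model, reference, step=0.5, tol=1e-10, max_iter=500):
    """Iteratively translate `model` toward `reference` until the cost converges.
    model, reference: (N, 3) arrays with an assumed 1:1 point correspondence."""
    model = model.astype(float).copy()
    prev_cost = np.inf
    for _ in range(max_iter):
        # calculation unit: mean squared point-to-point distance
        cost = float(np.mean(np.sum((model - reference) ** 2, axis=1)))
        if prev_cost - cost < tol:                 # determination unit: converged?
            break
        prev_cost = cost
        # change unit: move the model's position so the cost is reduced
        model += step * (reference - model).mean(axis=0)
    return model, cost
```

A real implementation would also update the posture (rotation) and re-estimate correspondences each round, as in ICP-style pipelines; the convergence test on the cost is the same idea.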

Automated Determination Of Acquisition Locations Of Acquired Building Images Based On Determined Surrounding Room Data

Techniques are described for computing devices to perform automated operations to determine the acquisition locations of images, such as within a building interior based on automatically determined shapes of the building's rooms, and for using the determined image acquisition location information in further automated manners. The image may be a panorama image or of another type (e.g., a rectilinear perspective image) acquired at an acquisition location in a multi-room building's interior, and the determined acquisition location for such an image may be at least a location on the building's floor plan and optionally an orientation/direction for at least a part of the image. In addition, the automated image acquisition location determination may be performed without having or using information from any depth sensors or other distance-measuring devices about distances from an image's acquisition location to walls or other objects in the surrounding building.

Image Registration Using A Fully Convolutional Network
20230087977 · 2023-03-23

Methods and systems for analyzing images are disclosed. An example method may comprise inputting one or more of a first image or a second image into a fully convolutional network, and determining an updated fully convolutional network by optimizing a similarity metric associated with spatially transforming the first image to match the second image. One or more values of the fully convolutional network may be adjusted to optimize the similarity metric. The method may comprise registering one or more of the first image or the second image based on the updated fully convolutional network.
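
The core idea, optimising a similarity metric under a spatial transform, can be illustrated without the network. An illustrative stand-in that brute-force searches integer translations for the lowest mean-squared error (the patent's fully convolutional network would instead learn to predict the transform; this is a didactic simplification, not the claimed method):

```python
import numpy as np

def mse(a, b):
    """Mean squared error; lower means more similar."""
    return float(np.mean((a - b) ** 2))

def register_translation(moving, fixed, max_shift=3):
    """Find the integer (dy, dx) translation of `moving` that best matches `fixed`."""
    best, best_cost = (0, 0), mse(moving, fixed)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            candidate = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            cost = mse(candidate, fixed)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost

rng = np.random.default_rng(0)
fixed = rng.random((16, 16))
moving = np.roll(np.roll(fixed, -1, axis=0), -2, axis=1)  # fixed shifted by (-1, -2)
print(register_translation(moving, fixed))   # ((1, 2), 0.0)
```

The learned approach replaces the exhaustive search with a single forward pass that outputs the transform, with the similarity metric serving as the training objective.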