G06T7/596

PERCEPTION UNCERTAINTY

A computer-implemented method of perceiving structure in an environment comprises steps of: receiving at least one structure observation input pertaining to the environment; processing the at least one structure observation input in a perception pipeline to compute a perception output; determining one or more uncertainty source inputs pertaining to the structure observation input; and determining for the perception output an associated uncertainty estimate by applying, to the one or more uncertainty source inputs, an uncertainty estimation function learned from statistical analysis of historical perception outputs.
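
The uncertainty estimation function described above can be learned offline by regressing historically observed perception error against the uncertainty source inputs. A minimal numpy sketch, with entirely hypothetical source inputs (object distance and image contrast) and a synthetic historical record:

```python
import numpy as np

# Hypothetical historical record: each sample pairs uncertainty source
# inputs (object distance in metres, normalised image contrast) with the
# absolute error observed between a perception output and ground truth.
rng = np.random.default_rng(0)
distance = rng.uniform(5.0, 50.0, 200)
contrast = rng.uniform(0.2, 1.0, 200)
errors = 0.05 * distance / contrast + rng.normal(0.0, 0.1, 200)

# Learn the uncertainty estimation function by least-squares regression of
# historical error against a feature built from the source inputs.
X = np.column_stack([distance / contrast, np.ones_like(distance)])
coef, *_ = np.linalg.lstsq(X, errors, rcond=None)

def estimate_uncertainty(distance_m, contrast_norm):
    """Map uncertainty source inputs to a predicted error magnitude."""
    return coef[0] * (distance_m / contrast_norm) + coef[1]

# A distant, low-contrast observation should be rated less certain than a
# near, high-contrast one.
far_low = estimate_uncertainty(45.0, 0.3)
near_high = estimate_uncertainty(6.0, 0.9)
```

The linear model and the two source inputs are illustrative only; the claim covers any function learned from statistical analysis of historical perception outputs.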

Method and apparatus for inspecting and sorting

A method and apparatus for sorting is described, and which includes providing a product stream formed of individual objects of interest having feature aspects which can be detected; generating multiple images of each of the respective objects of interest; classifying the feature aspects of the objects of interest; identifying complementary images by analyzing the multiple images; fusing the complementary images to form an aggregated region representation of the complementary images; and sorting the respective objects of interest based at least in part upon the aggregated region representation.
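
The complementary-image fusion step can be illustrated with a small sketch: two views of one object detect feature regions on different sides, and a union of their masks forms the aggregated region representation. All masks, shapes, and thresholds below are hypothetical:

```python
import numpy as np

# Hypothetical per-view feature masks for one object (True = detected pixel).
# Views a and b see different sides of the object, so they are complementary.
view_a = np.zeros((8, 8), dtype=bool)
view_a[1:3, 1:3] = True
view_b = np.zeros((8, 8), dtype=bool)
view_b[5:7, 5:7] = True

def are_complementary(m1, m2, max_overlap=0.1):
    """Treat two masks as complementary if they share little detected area."""
    union = np.logical_or(m1, m2).sum()
    if union == 0:
        return False
    overlap = np.logical_and(m1, m2).sum() / union
    return overlap <= max_overlap

def fuse(masks):
    """Aggregate complementary masks into a single region representation."""
    return np.logical_or.reduce(masks)

aggregated = fuse([view_a, view_b]) if are_complementary(view_a, view_b) else view_a
feature_area = int(aggregated.sum())  # total detected pixels across views
```

A sort decision could then threshold `feature_area` or any other statistic of the aggregated representation.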

System and method for performing quality control of manufactured models

Disclosed herein are example embodiments of methods and systems for identifying manufacturing defects of a manufactured dentition model. One of the methods for performing quality control comprises: determining whether the manufactured dentition model is a good or a defective product based on a statistical characteristic of a differences model. The differences model can be generated based on differences between scanned 3D patient-dentition data and scanned 3D manufactured-dentition data. The scanned 3D patient-dentition data can be generated using 3D data of a patient's dentition, and the scanned 3D manufactured-dentition data can be generated using 3D data of the manufactured dentition model. The manufactured dentition model can be a 3D printed model.
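
The pass/fail decision reduces to thresholding a statistical characteristic of the differences model. A minimal sketch, assuming both scans are resampled onto a common grid of surface heights and using RMS difference as the statistic (grids, noise levels, and the tolerance are all hypothetical):

```python
import numpy as np

# Hypothetical scans on a common 32x32 grid: surface heights (mm) of the
# patient's dentition and of two 3D-printed models.
rng = np.random.default_rng(1)
patient = rng.uniform(0.0, 10.0, (32, 32))
printed_good = patient + rng.normal(0.0, 0.01, (32, 32))  # print noise only
printed_bad = patient.copy()
printed_bad[10:14, 10:14] += 1.0                          # localised defect

def qc_pass(patient_scan, model_scan, rms_limit_mm=0.1):
    """Classify the model from a statistical characteristic (here RMS)
    of the differences model between the two scans."""
    diff = model_scan - patient_scan          # the differences model
    rms = float(np.sqrt(np.mean(diff ** 2)))
    return rms <= rms_limit_mm, rms

good_ok, good_rms = qc_pass(patient, printed_good)
bad_ok, bad_rms = qc_pass(patient, printed_bad)
```

Other statistics (maximum deviation, deviation percentiles) would slot into `qc_pass` the same way.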

Systems and methods for encoding image files containing depth maps stored as metadata

Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image, a set of low resolution images, and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map and the set of low resolution images to create a rendered image.
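
The post-processing step — modifying decoded pixels based on the depth map stored in the metadata — can be sketched with a depth-gated blur, one of the depth-of-field style effects such a rendering application could apply. Image, depth values, and focal parameters below are hypothetical:

```python
import numpy as np

# Hypothetical decoded grayscale image and its depth-map metadata.
image = np.zeros((6, 6))
image[:, 3:] = 1.0                     # bright right half
depth = np.full((6, 6), 2.0)
depth[:, 3:] = 10.0                    # right half is far from the viewpoint

def depth_blur(img, depth_map, focal_depth=2.0, threshold=1.0):
    """Post-process: box-blur only pixels whose depth departs from the
    focal plane, a simple synthetic depth-of-field effect."""
    out = img.copy()
    pad = np.pad(img, 1, mode="edge")
    # 3x3 box blur evaluated at every pixel
    blurred = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    mask = np.abs(depth_map - focal_depth) > threshold
    out[mask] = blurred[mask]          # modify only out-of-focus pixels
    return out

rendered = depth_blur(image, depth)
```

In the claimed system the low resolution images would additionally feed the post-processing; this sketch covers only the depth-driven modification.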

TRACKING SYSTEM AND TRACKING METHOD FOR WATER-SURFACE OBJECTS, AND MARINE VESSEL INCLUDING TRACKING SYSTEM FOR WATER-SURFACE OBJECTS
20220126959 · 2022-04-28 ·

A tracking system for tracking water-surface objects includes a stereo camera on a hull, at least one memory, and at least one processor coupled to the at least one memory. The at least one processor is configured or programmed to detect at least one object based on a first image and a second image captured by a first imaging unit and a second imaging unit of the stereo camera, and set one detected object as a tracking target in a third image captured by the first imaging unit, the second imaging unit or another imaging unit. The at least one processor is further configured or programmed to track the tracking target using a temporal change in a feature of the tracking target, and use at least one object detected based on the first image and the second image during tracking to correct the tracking result.
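
The interplay of feature-based tracking and detection-based correction can be shown in a deliberately simplified one-dimensional sketch (all positions, gains, and frame data are hypothetical; the real system tracks image regions and corrects with stereo detections):

```python
# Hypothetical 1-D sketch: a feature tracker follows a target position
# frame to frame, and stereo detections are blended in to correct drift.

def track(initial, measurements, detections, gain=0.5):
    """Propagate the tracked position from per-frame feature measurements,
    blending in a stereo detection whenever one is available."""
    pos = initial
    history = []
    for meas, det in zip(measurements, detections):
        pos = meas                     # temporal feature update (may drift)
        if det is not None:            # stereo detection corrects the track
            pos = pos + gain * (det - pos)
        history.append(pos)
    return history

# Feature tracking drifts +0.4 per frame; the true target moves +1.0.
measurements = [1.4, 2.8, 4.2, 5.6]
detections = [None, 2.0, None, 4.0]    # stereo detections on frames 2 and 4
trace = track(0.0, measurements, detections)
```

Frames with a detection are pulled back toward it, bounding the accumulated drift between detections.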

Foreground-background-aware atrous multiscale network for disparity estimation

A system for disparity estimation includes one or more feature extractor modules configured to extract one or more feature maps from one or more input images; and one or more semantic information modules connected at one or more outputs of the one or more feature extractor modules, wherein the one or more semantic information modules are configured to generate foreground semantic information to be provided to the one or more feature extractor modules for disparity estimation at a next training epoch.

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
20230245343 · 2023-08-03 ·

Because the accuracy of a feature point obtained from a viewpoint inclined with respect to the front direction of an object is reduced, the estimation accuracy of the feature point's three-dimensional coordinates is also reduced. Consequently, from each of a plurality of images captured from a plurality of viewpoints, the feature point of the object is detected, and attribute information indicating which area of the object the detected feature point belongs to is appended to the detected feature point. Then, for each piece of attribute information, the three-dimensional coordinates of the feature point are calculated by using the two-dimensional coordinates of the feature point in the images from at least two, and at most all, of the plurality of viewpoints.
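
The per-attribute triangulation can be sketched with standard DLT (direct linear transform) triangulation, grouping observations by attribute label. The camera matrices, attribute name, and pixel coordinates below are hypothetical (identity intrinsics, second camera translated one unit along x):

```python
import numpy as np

# Hypothetical two-view setup; feature points carry attribute labels
# (e.g. "front") and 3D coordinates are computed per attribute using only
# the views in which that attribute was observed.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def triangulate(views):
    """DLT triangulation from >=2 (projection matrix, (u, v)) pairs."""
    rows = []
    for P, (u, v) in views:
        rows.append(u * P[2] - P[0])   # two linear constraints per view
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                         # null-space solution (homogeneous)
    return X[:3] / X[3]

# Observations of one "front"-attributed feature point in both views,
# projected from the hypothetical 3D point (0.5, 0.2, 2.0).
observations = {"front": [(P1, (0.25, 0.1)), (P2, (-0.25, 0.1))]}
points = {attr: triangulate(v) for attr, v in observations.items()}
```

Restricting each group to well-placed viewpoints is what shields the estimate from the inclined-view accuracy loss the abstract describes.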

System and method for determining operating deflection shapes of a structure using optical techniques
11763476 · 2023-09-19 ·

A system for measuring total operating deflection shapes of a structure includes one or more imagers, each including two cameras spaced apart from one another and each oriented and positioned to have corresponding fields of view of a different section of the structure, where the sections may include an overlap area. Each camera generates a corresponding data stream, which is communicated to a controller configured to measure the response of the structure to an excitation, such as a vibration or an impulse. The system is configured to convert time-domain data from each of the data streams to frequency-domain data using a Fourier Transform algorithm, and to obtain the total operating deflection shapes of the structure by scaling and stitching together the frequency-domain data.
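
The scale-and-stitch step can be sketched for two sections sharing one overlap point: each section's time-domain responses are converted to the frequency domain via FFT, and the second section is scaled so both agree at the shared point. Signals, gains, and measurement points are hypothetical, and this sketch keeps magnitudes only (phase is discarded):

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
mode = np.sin(2 * np.pi * 50.0 * t)          # 50 Hz operating deflection

# Section A measures points 0-2; section B measures points 2-4 but with a
# different (uncalibrated) gain. Point 2 is the overlap point.
shape_a = np.array([1.0, 0.8, 0.5])
shape_b_true = np.array([0.5, 0.2, -0.3])
gain_b = 2.0
sec_a = shape_a[:, None] * mode
sec_b = gain_b * shape_b_true[:, None] * mode

def peak_amplitude(signals):
    """Amplitude of each channel at the dominant frequency via the FFT."""
    spec = np.abs(np.fft.rfft(signals, axis=1)) * 2.0 / signals.shape[1]
    k = np.argmax(spec[0])                   # dominant frequency bin
    return spec[:, k]

amp_a = peak_amplitude(sec_a)
amp_b = peak_amplitude(sec_b)
scale = amp_a[-1] / amp_b[0]                 # force agreement at the overlap
total_shape = np.concatenate([amp_a, scale * amp_b[1:]])
```

A full implementation would retain complex spectra so that sign and phase of the deflection survive the stitch.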

Plate reconstruction of obscured views of a main imaging device using capture device inputs of the same scene

An imagery processing system obtains capture inputs from capture devices that might have capture parameters and characteristics that differ from those of a main imagery capture device. By normalizing outputs of those capture devices, potentially arbitrary capture devices could be used for reconstructing portions of a scene captured by the main imagery capture device when reconstructing a plate of the scene to replace an object in the scene with what the object obscured in the scene. Reconstruction could be of one main image, a stereo pair of images, or some number, N, of images where N>2.
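
The normalization idea — mapping an arbitrary capture device's output into the main camera's response before using it to fill an obscured region — can be sketched with a linear gain/offset fit on pixels both devices see. The scene, the occlusion, and the auxiliary device's response model are all hypothetical:

```python
import numpy as np

# Hypothetical scene: an auxiliary capture device sees the region the main
# camera's view has obscured, but with a different exposure response.
rng = np.random.default_rng(2)
scene = rng.uniform(0.0, 1.0, (10, 10))
main = scene.copy()
main[4:7, 4:7] = np.nan                       # region obscured by an object
aux = 0.5 * scene + 0.2                       # different capture response

# Normalise the auxiliary device to the main camera using pixels seen by
# both, then reconstruct the plate by filling the obscured region.
shared = ~np.isnan(main)
gain, offset = np.polyfit(aux[shared], main[shared], 1)
normalised = gain * aux + offset              # aux mapped to main's response

plate = main.copy()
plate[~shared] = normalised[~shared]
```

Real capture devices would also differ geometrically, so a reprojection/warp step would precede this radiometric fit.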

METHOD AND SYSTEM FOR DETERMINING PLANT LEAF SURFACE ROUGHNESS
20210358160 · 2021-11-18 ·

Provided is a method and system for determining plant leaf surface roughness. The method includes: acquiring a plurality of continuously captured zoomed-in leaf images by using a zoom microscope image capture system; determining a feature match set according to the zoomed-in leaf images; removing noisy images whose number of feature matches in the feature match set is less than a second set threshold, to obtain n screened images; combining the n screened images to obtain a combined grayscale image; and determining plant leaf surface roughness according to the combined grayscale image. In the present disclosure, first, a plurality of zoomed-in leaf images are directly acquired by the zoom microscope image capture system quickly and accurately; the zoomed-in leaf images are then screened and combined to form a combined grayscale image; finally, the three-dimensional roughness of the plant leaf surface is determined quickly and accurately according to the combined grayscale image.
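
The screening and combining steps can be sketched as follows, with hypothetical images, match counts, and threshold, and using the arithmetic mean deviation (Ra) of gray levels as a stand-in roughness statistic:

```python
import numpy as np

# Hypothetical zoomed-in leaf images on a 4x4 grid of gray levels; the
# second image produced too few feature matches and will be discarded.
base = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # textured gray levels
images = [base, np.zeros((4, 4)), base + 0.1]
match_counts = [120, 15, 140]                    # matches in the match set
threshold = 50                                   # the set threshold

# Screen out images below the match-count threshold, then combine the
# n screened images into a single combined grayscale image.
screened = [img for img, m in zip(images, match_counts) if m >= threshold]
combined = np.mean(screened, axis=0)

# Roughness proxy: arithmetic mean deviation (Ra) of the combined image.
ra = float(np.mean(np.abs(combined - combined.mean())))
```

The disclosed method derives three-dimensional roughness from the combined image; Ra over gray levels is used here only to make the pipeline end-to-end.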