G06V10/44

SIMULTANEOUS ORIENTATION AND SCALE ESTIMATOR (SOSE)

A method and hardware-based system provide descriptor-based feature mapping during terrain relative navigation (TRN). A first reference image (a premade terrain map) and a second image are acquired. Features in the first reference image and the second image are detected. A scale and an orientation of each detected feature are estimated based on an intensity centroid (IC), moments of the detected features, an orientation based on the angle between the center of each detected feature and the IC, and an orientation stability measure based on a radius. Signatures are computed for each of the detected features using the estimated scale and orientation and then converted into feature descriptors. The descriptors are used to match features between the two images, and the matches are then used to perform TRN.
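The intensity-centroid step above can be sketched as follows. This is a minimal illustration of computing the IC and the center-to-IC orientation angle from image moments; the patented scale estimate and stability measure are not specified in detail, so the `radius` here is only a hypothetical stability proxy (a small radius implies an unstable orientation).

```python
import numpy as np

def intensity_centroid_orientation(patch):
    """Estimate a feature's orientation from the intensity centroid (IC)
    of a square patch: the orientation is the angle between the patch
    center and the IC computed from first-order image moments.
    Sketch only; scale/stability details are assumptions."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    m00 = patch.sum()                 # zeroth-order moment (total mass)
    m10 = (xs * patch).sum()          # first-order moment in x
    m01 = (ys * patch).sum()          # first-order moment in y
    ic = np.array([m10 / m00, m01 / m00])        # intensity centroid (x, y)
    center = np.array([(w - 1) / 2.0, (h - 1) / 2.0])
    dx, dy = ic - center
    theta = np.arctan2(dy, dx)        # orientation angle, radians
    radius = np.hypot(dx, dy)         # center-to-IC distance (stability proxy)
    return theta, radius
```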

SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR IMAGE ANALYSIS

Image analytics systems, methods, and computer program products autonomously analyze an image to identify and detect features in the image, such as the horizon, and/or objects of interest therein, such as smoke or possible smoke. The image is captured, for example, by RGB cameras, and depicts a scene to be analyzed. The intelligent image analytics system is configured to provide alerts and/or other information to one or more concerned parties and/or computing systems so that an appropriate response can be taken.
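A minimal sketch of the horizon-detection idea mentioned above: for each image column, take the row with the strongest vertical intensity change (the sky/ground boundary). Real systems would add smoothing, a line fit, and color cues from the RGB channels; this toy version is an assumption, not the patented method.

```python
import numpy as np

def detect_horizon_rows(gray):
    """Per-column horizon estimate: the row index where the vertical
    intensity gradient is largest (sky/ground boundary)."""
    grad = np.abs(np.diff(gray.astype(float), axis=0))  # vertical gradient
    return grad.argmax(axis=0)  # strongest-edge row per column

# synthetic scene: bright "sky" above row 6, dark "ground" below
img = np.ones((12, 8)) * 200.0
img[6:, :] = 30.0
rows = detect_horizon_rows(img)
```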

Digital Image Ordering using Object Position and Aesthetics
20230051564 · 2023-02-16

Digital image ordering based on object position and aesthetics is leveraged in a digital medium environment. According to various implementations, an image analysis system is implemented to identify visual objects in digital images and determine aesthetics attributes of the digital images. The digital images can then be arranged in a way that prioritizes digital images that include relevant visual objects and that exhibit optimum visual aesthetics.
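The ranking described above can be sketched as a two-key sort: relevance of detected objects first, aesthetics score as the tiebreaker. Both scores are assumed to be precomputed by upstream detectors (as in the abstract); the field names and weighting are hypothetical.

```python
def order_images(images, query_objects):
    """Rank images so that those containing relevant objects come first,
    with a precomputed aesthetics score breaking ties. Sketch only."""
    def key(img):
        relevance = len(set(img["objects"]) & set(query_objects))
        return (relevance, img["aesthetics"])
    return sorted(images, key=key, reverse=True)
```

For example, given two images containing a queried object, the one with the higher aesthetics score is ordered first, ahead of any image without the object.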

CONTOUR SHAPE RECOGNITION METHOD
20230047131 · 2023-02-16

Provided is a contour shape recognition method, including: sampling and extracting salient feature points of a contour of a shape sample; calculating a feature function of the shape sample at a semi-global scale by using three types of shape descriptors; dividing the scale with a single pixel as a spacing to acquire a shape feature function in a full-scale space; storing feature function values at various scales into a matrix to acquire three types of feature grayscale map representations of the shape sample in the full-scale space; synthesizing the three types of grayscale map representations of the shape sample, as three channels of RGB, into a color feature representation image; constructing a two-stream convolutional neural network by taking the shape sample and the feature representation image as inputs at the same time; and training the two-stream convolutional neural network, and inputting a test sample into a trained network model to achieve shape classification.
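One channel of the full-scale-space representation above can be sketched as follows: a per-point contour feature (here, distance to the contour centroid as a stand-in for the patented descriptors, which are not specified) is evaluated at successively larger smoothing scales, one scale per row, so the resulting matrix can be rendered as a grayscale map. The single-pixel scale spacing mirrors the abstract's scale division.

```python
import numpy as np

def full_scale_feature_map(contour, n_scales=None):
    """Build one grayscale-map channel: rows are scales, columns are
    contour points. The base feature and smoothing scheme here are
    illustrative assumptions, not the patented descriptors."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    n_scales = n_scales or n // 2
    feat = np.linalg.norm(pts - pts.mean(axis=0), axis=1)  # base feature
    rows = []
    for s in range(1, n_scales + 1):
        # circular moving average, window 2*s+1, i.e. smoothing at scale s
        kernel = np.ones(2 * s + 1) / (2 * s + 1)
        padded = np.concatenate([feat[-s:], feat, feat[:s]])
        rows.append(np.convolve(padded, kernel, mode="valid"))
    return np.stack(rows)   # shape (n_scales, n_points)
```

In the patented method, three such maps (one per descriptor type) would be stacked as the R, G, and B channels of a color feature image and fed, together with the raw shape, into the two-stream network.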

Biomarker Prediction Using Optical Coherence Tomography

Deep learning methods and systems for detecting biomarkers within optical coherence tomography (OCT) volumes are provided. Embodiments predict the presence or absence of clinically useful biomarkers in OCT images using deep neural networks. The lack of available training data for canonical deep learning approaches is overcome in embodiments by leveraging a large external dataset consisting of foveal scans via transfer learning. Embodiments represent the three-dimensional OCT volume by “tiling” each slice into a single two-dimensional image, and add an additional component to encourage the network to consider local spatial structure. Methods and systems, according to embodiments, are able to identify the presence or absence of AMD-related biomarkers on par with clinicians. Beyond identifying biomarkers, additional models could be trained, according to embodiments, to predict the progression of these biomarkers over time.
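The "tiling" step above can be sketched as a reshape: the B-scan slices of a 3-D volume are laid side by side in a grid, producing one 2-D image that a pretrained 2-D network can consume. The grid layout and even-division assumption are illustrative choices, not details from the abstract.

```python
import numpy as np

def tile_volume(volume, cols):
    """Tile a (slices, height, width) volume into a 2-D grid image with
    `cols` slices per grid row. Assumes the slice count fills the grid."""
    s, h, w = volume.shape
    rows = s // cols
    assert rows * cols == s, "slice count must fill the grid"
    return (volume.reshape(rows, cols, h, w)   # (grid_row, grid_col, y, x)
                  .transpose(0, 2, 1, 3)       # (grid_row, y, grid_col, x)
                  .reshape(rows * h, cols * w))
```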

PROCESSING DEVICE

Erroneous detection due to erroneous parallax measurement is suppressed to accurately detect a step present on a road. An in-vehicle environment recognition device 1 includes a processing device that processes a pair of images acquired by a stereo camera unit 100 mounted on a vehicle. The processing device includes a stereo matching unit 200 that measures a parallax of the pair of images and generates a parallax image, a step candidate extraction unit 300 that extracts a step candidate of a road on which the vehicle travels from the parallax image generated by the stereo matching unit 200, a line segment candidate extraction unit 400 that extracts a line segment candidate from the images acquired by the stereo camera unit 100, an analysis unit 500 that performs collation between the step candidate extracted by the step candidate extraction unit 300 and the line segment candidate extracted by the line segment candidate extraction unit 400 and analyzes validity of the step candidate based on the collation result and an inclination of the line segment candidate, and a three-dimensional object detection unit 600 that detects a step present on the road based on the analysis result of the analysis unit 500.
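The collation performed by the analysis unit can be sketched as follows. The abstract does not spell out the validity criterion, so this is a hypothetical rule: a parallax step candidate is kept only if an image line segment lies near it and that segment's inclination is shallow (a steep segment suggests, e.g., a pole rather than a road step). The distance and inclination thresholds are assumptions.

```python
import math

def step_candidate_valid(step_xy, segments, max_dist=5.0, max_incline_deg=30.0):
    """Collate a step candidate with line-segment candidates and judge
    validity from proximity and segment inclination. Sketch only."""
    sx, sy = step_xy
    for (x1, y1), (x2, y2) in segments:
        # coarse proximity check against the segment midpoint
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        if math.hypot(sx - mx, sy - my) > max_dist:
            continue
        incline = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        if incline <= max_incline_deg:
            return True
    return False
```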

INDIVIDUAL OBJECT IDENTIFICATION SYSTEM, INDIVIDUAL OBJECT IDENTIFICATION PROGRAM, AND RECORDING MEDIUM

An individual object identification system includes: an image acquisition processor configured to perform an image acquisition process to acquire an image of a subject captured using imaging equipment; a feature point extraction processor that extracts a feature point from the image; a local feature amount calculation processor that calculates a local feature amount of the feature point; a local feature amount group classification processor that performs classification into a predetermined number of local feature amount groups; a global feature amount calculation processor that calculates a global feature amount based on each of the local feature amount groups; a searching target image registration processor that registers a plurality of images that are searching targets; a global feature amount registration processor that registers the global feature amount related to each of the registered images in a global feature amount registration unit; a narrowing processor that narrows down the plurality of registered images to registered images each having a global feature amount highly correlated with the global feature amount of an identification image; and a determination processor that compares the candidate registered images with the identification image to determine the registered image having the largest number of corresponding points.
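The grouping and narrowing steps above can be sketched as a bag-of-features pipeline: local feature amounts are assigned to predetermined groups, the global feature amount is a normalized histogram over those groups, and narrowing keeps the registered images whose global features correlate best with the query's. The histogram form of the global feature and the correlation measure are assumptions; the patented computation may differ.

```python
import numpy as np

def global_feature(local_feats, centers):
    """Global feature as a normalized histogram of local-feature group
    assignments (nearest group center wins). Sketch only."""
    d = np.linalg.norm(local_feats[:, None, :] - centers[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

def narrow(query_g, registered, top_k=2):
    """Narrowing step: keep the top_k registered images whose global
    features correlate best with the query's."""
    def corr(g):
        return float(np.corrcoef(query_g, g)[0, 1])
    return sorted(registered, key=lambda r: corr(r["g"]), reverse=True)[:top_k]
```

The final determination step would then run local feature matching only against this narrowed candidate set, selecting the registered image with the largest number of corresponding points.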