G06V10/758

Image processing device, image recognition device, image processing program, and image recognition program

An image processing device has a function for plotting luminance-gradient co-occurrence pairs of an image on a feature plane and applying an EM algorithm to form a Gaussian mixture model (GMM). The device learns a pedestrian image and creates a GMM, subsequently learns a background image and creates a GMM, then calculates the difference between the two and generates a GMM for relearning based on that difference. The device plots samples that conform to the GMM for relearning on the feature plane by applying the inverse function theorem. The device fits a GMM with a designated number of mixture components to the distribution of these samples, thereby forming a standard GMM that serves as the reference for image recognition. When this number of components is set lower than the number designated earlier, the dimensionality with which an image is analyzed is reduced, making it possible to reduce calculation costs.
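The relearning pipeline above can be sketched as follows. A hand-rolled isotropic-covariance EM stands in for whatever GMM fitting the device actually uses; the cluster locations, the mixture numbers (3 then 2), and the clipping of the density difference are all illustrative assumptions, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_gmm(X, k, iters=60):
    """Minimal EM for an isotropic-covariance GMM (illustrative only)."""
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]          # random initial means
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under isotropic Gaussians
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - d2 / (2 * var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, then variances about the new means
        nk = r.sum(axis=0) + 1e-9
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (d * nk) + 1e-6
    return pi, mu, var

def gmm_pdf(X, pi, mu, var):
    d = X.shape[1]
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    return (pi * np.exp(-d2 / (2 * var)) / (2 * np.pi * var) ** (d / 2)).sum(axis=1)

# Toy co-occurrence-pair samples on a 2-D "feature plane"
ped = rng.normal([2.0, 2.0], 0.5, (400, 2))    # pedestrian-like cluster
bg = rng.normal([-2.0, -2.0], 0.5, (400, 2))   # background-like cluster
both = np.vstack([ped, bg])

ped_gmm = fit_gmm(ped, 3)   # learn pedestrian image -> GMM
bg_gmm = fit_gmm(bg, 3)     # learn background image -> GMM

# Difference of the two densities keeps regions where pedestrians dominate
w = np.clip(gmm_pdf(both, *ped_gmm) - gmm_pdf(both, *bg_gmm), 0.0, None)
resampled = both[rng.choice(len(both), 500, p=w / w.sum())]

# Refit at a smaller mixture number: the standard GMM with reduced cost
pi2, mu2, var2 = fit_gmm(resampled, 2)
```

Resampling by the clipped density difference plays the role of drawing samples that conform to the GMM for relearning; the final fit with fewer components is what lowers the analysis cost.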

DATA SELECTION FOR IMAGE GENERATION

A method includes obtaining waveform return data including waveform return records for multiple sampling events associated with an observed area and determining a relevance score for the waveform return records of the waveform return data. The relevance score for a particular waveform return record is based, at least partially, on estimated information gain associated with the particular waveform return record. The method also includes, based on the relevance scores, selecting a first subset of waveform return records, where one or more waveform return records are excluded from the first subset of waveform return records. The method also includes generating image data based on the first subset of waveform return records.
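A minimal sketch of the selection idea, assuming a KL-divergence-style histogram comparison as the proxy for estimated information gain (the patent does not specify the estimator) and a simple energy-binning step for image generation; all record shapes and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy waveform return records: an (x, y) location plus 64 samples per event
n_records = 200
xy = rng.uniform(0, 8, (n_records, 2))
waves = rng.normal(0.0, 0.1, (n_records, 64))
waves[: n_records // 4] += np.sin(np.linspace(0, 6, 64))   # informative returns

def relevance(wave, background):
    """Proxy for estimated information gain: KL divergence of the record's
    sample histogram from the aggregate background histogram."""
    p, _ = np.histogram(wave, bins=16, range=(-2, 2), density=True)
    q, _ = np.histogram(background, bins=16, range=(-2, 2), density=True)
    p = p + 1e-9
    q = q + 1e-9
    p /= p.sum()
    q /= q.sum()
    return float((p * np.log(p / q)).sum())

background = waves.ravel()
scores = np.array([relevance(w, background) for w in waves])

# Select the first subset: keep high-relevance records, exclude the rest
keep = scores >= np.quantile(scores, 0.75)

# Generate image data from the selected records only (mean energy per cell)
img = np.zeros((8, 8))
counts = np.zeros((8, 8))
for (x, y), w in zip(xy[keep], waves[keep]):
    i, j = min(int(y), 7), min(int(x), 7)
    img[i, j] += float((w ** 2).mean())
    counts[i, j] += 1
img = np.divide(img, counts, out=np.zeros_like(img), where=counts > 0)
```

Records whose returns look like the aggregate background score low and are excluded, so the generated image is built only from the returns that carry distinguishing structure.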

Determining functional and descriptive elements of application images for intelligent screen automation
11468225 · 2022-10-11

The presently disclosed inventive concepts are directed to systems, computer program products, and methods for intelligent screen automation. The inventive computer program products include program instructions configured to cause a computer, upon execution thereof, to perform a method including: identifying, from among a plurality of elements within one or more images of a user interface, a first set of elements and a second set of elements, wherein the first set of elements and the second set of elements each independently comprise either or both of: textual elements, and non-textual elements; determining one or more logical relationships between the textual elements and the non-textual elements; building a hierarchy comprising some or all of the first set of elements and some or all of the second set of elements, wherein building the hierarchy forms a data structure representing functionality of the user interface; and outputting the data structure to a memory.
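The hierarchy-building step could be sketched as containment-based nesting of element bounding boxes. The `Element` class, the largest-first insertion order, and the synthetic root "window" node are assumptions for illustration, not the patented method:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    kind: str                    # "text" or "control" (non-textual)
    box: tuple                   # (x0, y0, x1, y1) in image coordinates
    label: str = ""
    children: list = field(default_factory=list)

def contains(outer, inner):
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def build_hierarchy(elements):
    """Nest each element under the smallest already-placed element that
    spatially contains it, yielding a tree of the user interface."""
    root = Element("control", (0, 0, 10**9, 10**9), "window")
    # Largest-first insertion guarantees parents are placed before children
    for el in sorted(elements,
                     key=lambda e: -(e.box[2] - e.box[0]) * (e.box[3] - e.box[1])):
        node = root
        while True:
            nxt = next((c for c in node.children if contains(c.box, el.box)), None)
            if nxt is None:
                node.children.append(el)
                break
            node = nxt
    return root

# A panel containing a button whose caption is a textual element
panel = Element("control", (0, 0, 100, 100), "panel")
button = Element("control", (10, 10, 40, 25), "button")
caption = Element("text", (12, 12, 38, 23), "OK")
root = build_hierarchy([caption, button, panel])
```

The resulting tree (panel → button → caption) is one possible "data structure representing functionality of the user interface": a caption nested inside a button expresses the logical relationship between a textual and a non-textual element.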

Object pose neural network system

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for predicting object pose. In one aspect, a method includes receiving an image of an object having one or more feature points; providing the image as an input to a neural network subsystem trained to receive images of objects and to generate an output including a heat map for each feature point; applying a differentiable transformation on each heat map to generate respective one or more feature coordinates for each feature point; providing the feature coordinates for each feature point as input to an object pose solver configured to compute a predicted object pose for the object, wherein the predicted object pose specifies a position and an orientation of the object; and receiving, at the output of the object pose solver, a predicted object pose for the object in the image.
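The "differentiable transformation on each heat map" is commonly realized as a soft-argmax: softmax-weighted expected pixel coordinates. A sketch under that assumption (the temperature parameter and toy heat map are illustrative); a pose solver such as a PnP routine would then consume the resulting coordinates:

```python
import numpy as np

def soft_argmax(heatmap, temperature=1.0):
    """Differentiable transformation: softmax-weighted expected (x, y).
    Unlike a hard argmax, this is smooth in the heat-map values."""
    h, w = heatmap.shape
    z = heatmap.ravel() / temperature
    z = np.exp(z - z.max())          # numerically stable softmax
    p = z / z.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((p * xs.ravel()).sum()), float((p * ys.ravel()).sum())

# Toy heat map with a peak at pixel (x=12, y=5)
yy, xx = np.mgrid[0:20, 0:20]
heatmap = 10.0 * np.exp(-((xx - 12) ** 2 + (yy - 5) ** 2) / 4.0)

x_hat, y_hat = soft_argmax(heatmap)
```

Because every pixel contributes to the expectation, gradients flow from the pose solver's loss back through the coordinates into the network, which is the point of using a differentiable transformation here.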

Image processing techniques in multiplexed fluorescence in-situ hybridization
11624708 · 2023-04-11

A fluorescent in-situ hybridization imaging and analysis system includes a flow cell to contain a sample to be exposed to fluorescent probes in a reagent, a fluorescence microscope to sequentially collect a plurality of images of the sample at a plurality of different combinations of imaging parameters, and a data processing system. The data processing system includes an online pre-processing system configured to sequentially receive the images from the fluorescence microscope as the images are collected and perform on-the-fly image pre-processing to remove experimental artifacts from the images and to provide RNA image spot sharpening, and an offline processing system configured to, after the plurality of images are collected, perform registration of images having a same field of view and to decode intensity values in the plurality of images to identify expressed genes.
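Decoding intensity values across imaging rounds to identify expressed genes can be sketched as nearest-codeword matching: each gene has an on/off pattern across rounds, and a registered spot's intensity vector is assigned to the most similar codeword. The binary codebook and the cosine-similarity rule here are assumptions for illustration, not the patent's decoder:

```python
import numpy as np

# Hypothetical codebook: each gene is an on/off pattern across 6 imaging rounds
codebook = {
    "GENE_A": np.array([1, 0, 1, 0, 1, 0]),
    "GENE_B": np.array([0, 1, 0, 1, 0, 1]),
    "GENE_C": np.array([1, 1, 0, 0, 1, 1]),
}

def decode_spot(intensities, codebook):
    """Assign a registered spot's per-round intensities to the nearest
    codeword by cosine similarity."""
    v = intensities / (np.linalg.norm(intensities) + 1e-9)
    best, best_sim = None, -1.0
    for gene, code in codebook.items():
        c = code / np.linalg.norm(code)
        sim = float(v @ c)
        if sim > best_sim:
            best, best_sim = gene, sim
    return best, best_sim

# Spot bright in rounds 1, 3 and 5 (after registration and sharpening)
spot = np.array([0.9, 0.1, 0.8, 0.05, 1.1, 0.2])
gene, sim = decode_spot(spot, codebook)
```

Registration of same-field-of-view images matters precisely because this step compares the *same* spot's intensities across rounds; misaligned images would mix codewords.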

Method and device for detecting human skeletons

A method for detecting a human skeleton is provided. The method includes: receiving a video frame, wherein the video frame comprises a human body; determining whether the video frame comprises prediction information; determining whether a first intra-coded macroblock (IMB) ratio of a target area comprising the human body in the video frame is greater than a first threshold when the video frame comprises the prediction information; and using a motion vector (MV) to estimate skeleton information of the human body when the first IMB ratio of the target area is not greater than the first threshold.
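The decision logic could look roughly like this: compute the intra-coded macroblock (IMB) ratio inside the target area, and when it does not exceed the threshold, shift the previous frame's skeleton joints by the motion vectors of their macroblocks. The macroblock size, the 0.5 threshold, and the per-macroblock motion-vector lookup are illustrative assumptions:

```python
import numpy as np

def imb_ratio(mb_types, target_mask):
    """Fraction of intra-coded macroblocks (type 1) inside the target area."""
    area = target_mask.sum()
    return float((mb_types[target_mask] == 1).sum() / area) if area else 1.0

def propagate_skeleton(prev_joints, motion_vectors, mb_size=16):
    """Shift each previous joint by the motion vector of its macroblock."""
    out = []
    for (x, y) in prev_joints:
        i, j = int(y) // mb_size, int(x) // mb_size
        dx, dy = motion_vectors[i, j]
        out.append((x + dx, y + dy))
    return out

# Toy 4x4 macroblock grid: 0 = inter-coded, 1 = intra-coded
mb_types = np.zeros((4, 4), dtype=int)
mb_types[0, 0] = 1
target = np.ones((4, 4), dtype=bool)            # target area covers the grid
mvs = np.full((4, 4, 2), (2.0, -1.0))           # uniform motion of (+2, -1) px

prev = [(8.0, 8.0), (24.0, 40.0)]               # joints from the previous frame
if imb_ratio(mb_types, target) <= 0.5:          # threshold value is assumed
    joints = propagate_skeleton(prev, mvs)
```

The intuition matches the claim: a low IMB ratio means most blocks were predicted from the previous frame, so their motion vectors are reliable enough to estimate the skeleton cheaply, without rerunning a full detector.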

Systems and methods for quantifying concrete surface roughness

The degree of concrete surface roughness contributes to the bond strength between two concrete surfaces for either new construction or repair and retrofitting of concrete structures. Provided are novel systems and methods with industrial application to quantify concrete surface roughness from images which may be obtained from basic cameras or smartphones. A digital image processing system and method with a new index for concrete surface roughness based on the aggregate area-to-total surface area is provided. A machine learning method applying a combination of advanced techniques, including data augmentation and transfer learning, is utilized to categorize images based on the classification given during the learning process. Both methods compared favorably to a well-established method of 3D laser scanning.
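The aggregate-area-to-total-area index can be sketched with a simple intensity threshold separating exposed aggregate from mortar. The mean-intensity threshold and the assumption that aggregate is darker than mortar are simplifications; a method like Otsu's thresholding would be a better choice in practice:

```python
import numpy as np

def roughness_index(gray):
    """Aggregate-area-to-total-area index: threshold the exposed aggregate
    (assumed darker than the mortar) and take the covered fraction."""
    t = gray.mean()                  # sketch; Otsu's method is more robust
    aggregate = gray < t
    return float(aggregate.sum() / gray.size)

# Synthetic image: bright mortar with a dark exposed-aggregate patch
rng = np.random.default_rng(2)
img = np.full((64, 64), 200.0) + rng.normal(0, 5, (64, 64))
img[16:48, 16:48] = 80.0             # aggregate covers 25% of the surface

idx = roughness_index(img)
```

A rougher surface exposes more aggregate, so the index rises with roughness; the classifier branch of the patent learns the same categories directly from images instead of computing the ratio explicitly.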

Systems and Methods for Identifying Threats and Locations, Systems and Method for Augmenting Real-Time Displays Demonstrating the Threat Location, and Systems and Methods for Responding to Threats

Systems for identifying threat materials such as CBRNE threats and locations are provided. The systems can include a data acquisition component configured to determine the presence of a CBRNE threat; data storage media; and processing circuitry operatively coupled to the data acquisition device and the storage media. Methods for identifying a CBRNE threat are provided. The methods can include: determining the presence of a CBRNE threat using a data acquisition component; and acquiring an image while determining the presence of the CBRNE threat. Methods for augmenting a real-time display to include the location and/or type of CBRNE threat previously identified are also provided. Methods for identifying and responding to CBRNE threats are provided as well.

SYSTEM AND METHOD FOR VEHICLE-BASED LOCALIZING OF OFFBOARD FEATURES
20230109164 · 2023-04-06

A controller obtains plural images generated by an imaging device disposed onboard a vehicle, and analyzes at least first and second images of the plural images to identify a feature of interest that is offboard the vehicle and at least partially depicted in the first and second images. The controller determines a first unit vector for the feature of interest based on a first location of the feature of interest in the first image, and determines a second unit vector for the feature of interest based on a second location of the feature of interest in the second image. The controller calculates a third location of the feature of interest, relative to a physical environment, based on the first unit vector, the second unit vector, and at least one of a first reference location of the vehicle or a second reference location of the vehicle.
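Calculating the third location from two reference locations and two unit vectors is a two-ray triangulation. A sketch using the standard closest-point-between-rays formulation (the patent may use a different solver); the midpoint of the shortest segment between the rays is taken as the feature's location:

```python
import numpy as np

def triangulate(p1, u1, p2, u2):
    """Closest-point triangulation of two rays p_i + t_i * u_i.
    Returns the midpoint of the shortest segment between the rays."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    u1, u2 = np.asarray(u1, float), np.asarray(u2, float)
    w0 = p1 - p2
    a, b, c = u1 @ u1, u1 @ u2, u2 @ u2
    d, e = u1 @ w0, u2 @ w0
    denom = a * c - b * b            # approaches 0 for near-parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return (p1 + t1 * u1 + p2 + t2 * u2) / 2

# Vehicle reference locations at the two image times, and the unit vectors
# toward an offboard feature as seen from each location
feature = np.array([10.0, 5.0, 0.0])
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0])
u1 = (feature - p1) / np.linalg.norm(feature - p1)
u2 = (feature - p2) / np.linalg.norm(feature - p2)

est = triangulate(p1, u1, p2, u2)
```

With noiseless vectors the two rays intersect and the midpoint recovers the feature exactly; with real image-derived vectors the midpoint is the least-squares compromise, and the denominator warns when the vehicle has not moved enough between images for a well-conditioned fix.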

Sensor Fusion for Object-Avoidance Detection
20220319328 · 2022-10-06

This document describes techniques, apparatuses, and systems for sensor fusion for object-avoidance detection, including stationary-object height estimation. A sensor fusion system may include a two-stage pipeline. In the first stage, time-series radar data passes through a detection model to produce radar range detections. In the second stage, based on the radar range detections and camera detections, an estimation model detects an over-drivable condition associated with stationary objects in a travel path of a vehicle. By projecting radar range detections onto pixels of an image, a histogram tracker can be used to discern pixel-based dimensions of stationary objects and track them across frames. With depth information, highly accurate pixel-based width and height estimates can be made; after applying over-drivability thresholds to these estimates, a vehicle can quickly and safely make over-drivability decisions about objects in a road.
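Converting tracked pixel dimensions plus radar depth into metric size follows from the pinhole camera model, after which a clearance threshold yields the over-drivability decision. A sketch with an assumed focal length and an assumed clearance value, neither taken from the document:

```python
def metric_size(pixel_w, pixel_h, depth_m, fx, fy):
    """Back-project pixel-based dimensions to metres via the pinhole model:
    size = pixels * depth / focal_length (focal lengths in pixels)."""
    return pixel_w * depth_m / fx, pixel_h * depth_m / fy

def over_drivable(height_m, clearance_m=0.18):
    """Illustrative threshold: an object is over-drivable if it is shorter
    than the vehicle's ground clearance (value assumed)."""
    return height_m < clearance_m

# Stationary object tracked at 40 px wide, 12 px tall, 9 m ahead;
# fx, fy are assumed camera focal lengths in pixels
w_m, h_m = metric_size(40, 12, 9.0, fx=1200.0, fy=1200.0)
decision = over_drivable(h_m)
```

This is why fusing radar depth with camera pixels matters: pixel height alone is ambiguous in scale, while depth from radar pins the metric height needed for a safe threshold comparison.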