G06F18/241

Analyzing documents using machine learning

A document analysis device that includes a memory operable to store a machine learning model configured to receive a sentence as an input and to output a classification identifier that is associated with a sentence type for the received sentence. The device further includes an artificial intelligence (AI) processing engine configured to receive a document comprising text, to identify sentences within the document, and to classify the sentences using the machine learning model. The AI processing engine is further configured to identify tagging rules for the document and to annotate one or more sentences from the document with a sentence type that matches a sentence type identified by the tagging rules for the document.
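The flow above can be sketched in a few lines: split the text into sentences, classify each one, and annotate only those whose predicted type matches a tagging rule. The classifier below is a keyword stand-in for the machine learning model, and all names and rule types are illustrative assumptions, not details from the disclosure.

```python
import re

def classify_sentence(sentence):
    """Stand-in for the ML model: map a sentence to a sentence type."""
    if sentence.rstrip().endswith("?"):
        return "question"
    if re.search(r"\bmust\b|\bshall\b", sentence, re.IGNORECASE):
        return "obligation"
    return "statement"

def annotate_document(text, tagging_rules):
    """Split text into sentences, classify each, and tag rule matches."""
    sentences = [s.strip() for s in re.split(r"(?<=[.?!])\s+", text) if s.strip()]
    annotations = []
    for sentence in sentences:
        sentence_type = classify_sentence(sentence)
        if sentence_type in tagging_rules:  # only annotate rule-matched types
            annotations.append((sentence, sentence_type))
    return annotations

doc = "The supplier shall deliver monthly. Is arbitration required? Payment is net 30."
tags = annotate_document(doc, tagging_rules={"obligation", "question"})
```

In a real system the keyword classifier would be replaced by the stored model, but the annotate step is unchanged: only sentence types named by the document's tagging rules produce annotations.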

Terrain trafficability assessment for autonomous or semi-autonomous rover or vehicle

A rover or semi-autonomous or autonomous vehicle may use an image classifier to determine a terrain class of regions of an image of the terrain ahead of the rover or vehicle. The regions of the image are used to estimate the slope of the terrain for the different regions. The terrain class and slope are used to predict an amount of slip the rover will experience when traversing the terrain of the different regions. A heuristic mapping for the terrain class may be applied to the predicted slip amount to determine a hazard level for the rover or vehicle traversing the terrain.
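The terrain-class-plus-slope pipeline might look like the sketch below: a per-class slip model predicts slip from the slope angle, and a class-specific heuristic threshold maps predicted slip to a hazard level. The coefficients, class names, and thresholds are all assumptions made up for illustration.

```python
import math

# Assumed per-terrain-class slip coefficients (not values from the disclosure).
SLIP_COEFF = {"bedrock": 0.05, "gravel": 0.25, "sand": 0.6}

def predict_slip(terrain_class, slope_deg):
    """Predict fractional slip (0..1): slip grows with the slope angle."""
    return min(1.0, SLIP_COEFF[terrain_class] * math.tan(math.radians(slope_deg)))

def hazard_level(terrain_class, slope_deg):
    """Map predicted slip to a hazard level via a class-specific heuristic."""
    slip = predict_slip(terrain_class, slope_deg)
    threshold = 0.2 if terrain_class == "sand" else 0.4  # sand is less forgiving
    if slip < threshold / 2:
        return "low"
    if slip < threshold:
        return "medium"
    return "high"
```

On this toy model a 20-degree sandy slope is already high hazard, while the same slope on bedrock is low, which matches the abstract's point that the hazard mapping depends on the terrain class, not slope alone.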

Image content obfuscation using a neural network

The technology described herein obfuscates image content using a local neural network and a remote neural network. The local network runs on a local computer system and a remote classifier runs in a remote computing system. Together, the local network and the remote classifier are able to classify images, while the image never leaves the local computer system. In aspects of the technology, the local network receives a local image and creates a transformed object. The transformed object may be generated by processing the image with a local neural network to generate a multidimensional array and then randomly shuffling data locations within a multidimensional array. The transformed object is communicated to the remote classifier in the remote computing system for classification. The remote classifier may not have the seed used to deterministically scramble the spatial arrangement of data within the multidimensional array.
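The seeded-shuffle idea can be sketched as follows: the local side flattens the feature array and permutes its positions using a secret seed, so the remote side sees feature values but cannot recover their spatial arrangement without the seed. The array shapes and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def transform(features, seed):
    """Shuffle the flattened feature array with a seed-derived permutation."""
    rng = np.random.default_rng(seed)
    flat = features.reshape(-1)
    perm = rng.permutation(flat.size)  # secret spatial scrambling
    return flat[perm], perm

def untransform(shuffled, perm, shape):
    """Invert the permutation; requires knowing the seed/permutation."""
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled
    return flat.reshape(shape)

features = np.arange(12, dtype=float).reshape(3, 4)  # stand-in for NN output
shuffled, perm = transform(features, seed=42)
restored = untransform(shuffled, perm, features.shape)
```

Only the shuffled array would be sent to the remote classifier; the permutation (equivalently, the seed) stays on the local system, which is what keeps the spatial structure of the image-derived features private.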

Detection and replacement of transient obstructions from high elevation digital images

Implementations relate to detecting/replacing transient obstructions from high-elevation digital images. A digital image of a geographic area includes pixels that align spatially with respective geographic units of the geographic area. Analysis of the digital image may uncover obscured pixel(s) that align spatially with geographic unit(s) of the geographic area that are obscured by transient obstruction(s). Domain fingerprint(s) of the obscured geographic unit(s) may be determined across pixels of a corpus of digital images that align spatially with the one or more obscured geographic units. Unobscured pixel(s) of the same/different digital image may be identified that align spatially with unobscured geographic unit(s) of the geographic area. The unobscured geographic unit(s) also may have domain fingerprint(s) that match the domain fingerprint(s) of the obscured geographic unit(s). Replacement pixel data may be calculated based on the unobscured pixels and used to generate a transient-obstruction-free version of the digital image.
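The replacement step can be illustrated with a toy version: each geographic unit carries a domain fingerprint (here just a label), and an obscured unit's pixel value is replaced by averaging unobscured pixels whose units share its fingerprint. The fingerprints and pixel values are stand-ins for the disclosure's corpus-derived representations.

```python
def remove_obstructions(pixels, obscured, fingerprints):
    """Replace obscured pixel values using fingerprint-matched clear pixels."""
    cleaned = dict(pixels)
    for unit in obscured:
        matches = [pixels[u] for u in pixels
                   if u not in obscured and fingerprints[u] == fingerprints[unit]]
        if matches:  # average spectrally similar, unobscured units
            cleaned[unit] = sum(matches) / len(matches)
    return cleaned

# Toy single-band values; unit "b" is cloud-obscured.
pixels = {"a": 0.9, "b": 0.1, "c": 0.85, "d": 0.2}
fingerprints = {"a": "crop", "b": "water", "c": "crop", "d": "water"}
cleaned = remove_obstructions(pixels, obscured={"b"}, fingerprints=fingerprints)
```

The key property, as in the abstract, is that replacement data comes from units with matching fingerprints (here "b" borrows from "d", the other water unit), not from arbitrary neighbors.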

Computer system and method for presenting asset insights at a graphical user interface

A computing system is configured to derive insights related to asset operation and present these insights via a GUI. To these ends, the computing system (a) receives data related to the operation of assets, (b) based on this data, derives a plurality of insights related to the operation of at least a subset of the assets, (c) from the insights, defines a given subset of insights to be presented to a user, (d) defines at least one aggregated insight representative of one or more individual insights in the given subset of insights that are related to a common underlying problem, and (e) causes the user's client station to display a visualization of the given subset of insights including (i) an insights pane that provides a high-level overview of the subset of insights and (ii) a details pane that provides additional details regarding a selected one of the subset of insights.
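Step (d), rolling individual insights that share an underlying problem into one aggregated insight, can be sketched as a simple grouping pass. The field names are assumptions for the sketch, not the system's actual data model.

```python
from collections import defaultdict

def aggregate_insights(insights):
    """Group individual insights by their common underlying problem."""
    groups = defaultdict(list)
    for insight in insights:
        groups[insight["problem"]].append(insight["message"])
    return [{"problem": p, "count": len(msgs), "messages": msgs}
            for p, msgs in groups.items()]

insights = [
    {"problem": "overheating", "message": "Asset 7 temp high"},
    {"problem": "overheating", "message": "Asset 9 temp high"},
    {"problem": "vibration", "message": "Asset 3 vibration spike"},
]
aggregated = aggregate_insights(insights)
```

The aggregated records would back the insights pane's high-level overview, while the retained per-insight messages supply the details pane for whichever aggregate the user selects.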

Asset tracking systems

The disclosed technology includes image-based systems and methods for object tracking within an asset area. Some exemplary methods include receiving an indication of a first object entering an asset area and receiving data indicative of a plurality of captured images. The methods also include performing, by at least one processor, object classification of the first object based on one or more of the plurality of captured images. The methods further include determining a first object location of the first object based at least in part on the object classification, and outputting an indication of the first object location.
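A minimal sketch of that flow: an entry event triggers classification over the captured images, and the classification feeds the location determination. The majority-vote "classifier", class labels, and zone rule are illustrative assumptions.

```python
def classify_object(image_labels):
    """Stand-in classifier: majority vote over per-image predictions."""
    votes = {}
    for label in image_labels:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

def locate(entry_gate, image_labels):
    """Determine an object location based in part on its classification."""
    label = classify_object(image_labels)
    # Assumed rule: object type constrains where it can be within the area.
    zone = {"forklift": "loading dock", "pallet": "storage"}.get(label, "yard")
    return {"object": label, "entered_at": entry_gate, "zone": zone}

event = locate("gate 2", image_labels=["pallet", "pallet", "forklift"])
```

The point mirrored from the abstract is the dependency order: the location output is derived "based at least in part on the object classification", so classification happens before localization.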

Multiple Stage Image Based Object Detection and Recognition

Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
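A two-stage cascade of this shape can be sketched with stand-in score functions: a cheap first-stage model filters the sensor-data portions, and a second-stage model refines the survivors before the object output is generated. The score functions and thresholds are assumptions, not the disclosure's machine-learned models.

```python
def first_stage(portion):
    """Cheap first-stage characteristic: mean intensity as an objectness proxy."""
    return sum(portion) / len(portion)

def second_stage(portion):
    """More selective second-stage characteristic: peak intensity."""
    return max(portion)

def detect_objects(portions, t1=0.3, t2=0.8):
    """Run the cascade; return indices of portions flagged as objects."""
    candidates = [i for i, p in enumerate(portions) if first_stage(p) >= t1]
    return [i for i in candidates if second_stage(portions[i]) >= t2]

portions = [[0.1, 0.1, 0.2],   # rejected at stage one
            [0.5, 0.9, 0.4],   # passes both stages
            [0.4, 0.5, 0.6]]   # passes stage one, rejected at stage two
hits = detect_objects(portions)
```

The benefit of the cascade, and the reason the first stage can run on dedicated hardware, is that the expensive second model only sees the small fraction of portions the cheap first model lets through.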

Systems and methods for constructing motion models based on sensor data

This disclosure relates to systems, media, and methods for updating motion models using sensor data. In an embodiment, the system may perform operations including receiving sensor data from at least one motion sensor; generating training data based on at least one annotation associated with the sensor data and at least one data manipulation; receiving at least one experiment parameter; performing a first experiment using the training data and the at least one experiment parameter to generate experiment results; and performing at least one of updating or validating a first motion model based on the experiment results.
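The pipeline in the abstract can be sketched end to end: annotated sensor data is expanded by a data manipulation, an experiment is run under a parameter, and the result decides whether the motion model is updated. The manipulation (small offsets), the threshold model, and the validation rule are all illustrative assumptions.

```python
def generate_training_data(sensor_data, annotation, offsets):
    """Apply each manipulation offset to the annotated sensor samples."""
    return [(x + d, annotation) for x in sensor_data for d in offsets]

def run_experiment(model_threshold, training_data):
    """Toy experiment: accuracy of a threshold-based motion model."""
    correct = sum(1 for x, label in training_data
                  if (x > model_threshold) == (label == "moving"))
    return correct / len(training_data)

data = [0.8, 0.9, 1.1]                           # e.g. accelerometer magnitudes
training = generate_training_data(data, "moving", offsets=[-0.05, 0.0, 0.05])
accuracy = run_experiment(model_threshold=0.5, training_data=training)
updated = accuracy >= 0.9                        # update only if the model validates
```

Here the experiment result (accuracy against the augmented, annotated data) gates the final step, matching the abstract's "update or validate a first motion model based on the experiment results".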