Patent classifications
G06F18/24133
TECHNOLOGIES FOR ANALYZING BEHAVIORS OF OBJECTS OR WITH RESPECT TO OBJECTS BASED ON STEREO IMAGERIES THEREOF
This disclosure enables various technologies for analyzing behaviors of objects or with respect to objects based on stereo imageries thereof. For example, such analysis may be useful in enforcement of certain actions by objects or with respect to objects, surveillance of objects or with respect to objects, or other situations involving analyzing behaviors of objects or with respect to objects.
METHOD AND DEVICE FOR DETECTING CONTAINERS WHICH HAVE FALLEN OVER AND/OR ARE DAMAGED IN A CONTAINER MASS FLOW
Method for detecting containers which have fallen over and/or are damaged in a container mass flow, wherein the containers in the container mass flow are transported upright on a transporter, wherein the container mass flow is captured as an image data stream using at least one camera, and wherein the image data stream is evaluated by an image processing unit using a deep neural network in order to detect and locate the containers which have fallen over and/or are damaged.
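As a rough sketch (not part of the disclosure), the described evaluation of the image data stream could be structured as below; the `detect` stub is a hypothetical placeholder for the trained deep neural network, and the frame contents and labels are invented for illustration:

```python
# Hypothetical stand-in for the trained deep neural network: returns
# (label, bounding_box) detections for one frame of the image data stream.
def detect(frame):
    return [("upright", (10, 0, 30, 80)), ("fallen", (60, 50, 140, 80))]

def evaluate_stream(frames):
    """Detect and locate fallen and/or damaged containers per frame."""
    alarms = []
    for i, frame in enumerate(frames):
        for label, box in detect(frame):
            if label in ("fallen", "damaged"):
                alarms.append((i, label, box))  # frame index + location
    return alarms
```

The located detections could then trigger downstream handling, e.g. diverting the affected containers from the mass flow.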
Method, System, and Computer Program Product for Detecting Fraudulent Interactions
A method for detecting fraudulent interactions may include receiving interaction data including a first plurality of interactions with first fraud labels and a second plurality of interactions without fraud labels. Second fraud label data for each of the second plurality of interactions may be generated with a first neural network (e.g., classifying whether each interaction is fraudulent or not). Generated interaction data and generated fraud label data may be generated with a second neural network. Discrimination data for each of the second plurality of interactions and the generated interactions may be generated with a third neural network (e.g., classifying whether the respective interaction is real or not). Error data may be determined based on the discrimination data (e.g., whether the respective interaction is correctly classified). At least one of the neural networks may be trained based on the error data. A system and computer program product are also disclosed.
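The data flow among the three networks can be sketched as follows; this is a toy illustration only, with trivial stub functions (a threshold rule, a random generator, and an oracle discriminator) standing in for the trained neural networks, and the `amount` feature is an invented example field:

```python
import random

# Hypothetical stand-ins for the three neural networks in the abstract.
def label_network(interaction):          # first network: fraud / not fraud
    return interaction["amount"] > 900   # toy threshold rule as placeholder

def generator_network(rng):              # second network: synthesizes interactions
    return {"amount": rng.uniform(0, 1000), "generated": True}

def discriminator_network(interaction):  # third network: real vs. generated
    return not interaction.get("generated", False)  # toy oracle placeholder

rng = random.Random(0)
labeled = [{"amount": a, "fraud": a > 900} for a in (100, 950)]
unlabeled = [{"amount": a} for a in (120, 980)]

# Step 1: the first network fills in fraud labels for unlabeled interactions.
for tx in unlabeled:
    tx["fraud"] = label_network(tx)

# Step 2: the second network generates synthetic interactions.
generated = [generator_network(rng) for _ in range(2)]

# Step 3: the third network scores real-vs-generated; error data records
# misclassifications, which would drive training updates of the networks.
error_data = []
for tx in unlabeled + generated:
    predicted_real = discriminator_network(tx)
    actually_real = not tx.get("generated", False)
    error_data.append(predicted_real != actually_real)
```

In a real system each stub would be a trained model and the error data would feed gradient updates, in the adversarial style the abstract describes.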
METHODS AND SYSTEMS FOR ASSESSING A VASCULATURE
Methods and systems are provided for assessing a vasculature of an individual. In an embodiment of a method, one or more angiographic parametric imaging (API) maps of the vasculature are obtained, wherein each API map of the one or more API maps encodes a hemodynamic parameter. A state of the vasculature is determined using a machine-learning classifier applied to the one or more API maps.
METHODS AND SYSTEMS FOR PERFORMING REAL-TIME RADIOLOGY
The present disclosure provides methods and systems directed to performing real-time and/or AI-assisted radiology. A method for processing an image of a location of a body of a subject may comprise (a) obtaining the image of the location of the body of the subject; (b) using a trained algorithm to classify the image or a derivative thereof to a category among a plurality of categories, wherein the classifying comprises applying an image processing algorithm; (c) (i) directing the image to a first radiologist for radiological assessment if the image is classified to a first category among the plurality of categories, or (ii) directing the image to a second radiologist for radiological assessment if the image is classified to a second category among the plurality of categories; and (d) receiving a recommendation from the first or second radiologist to examine the subject based at least in part on the radiological assessment.
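Steps (b) and (c) amount to classify-then-route logic, which might be sketched as below; the category names, the routing table, and the `acuity`-based stub classifier are all invented for illustration and are not from the disclosure:

```python
# Hypothetical routing table: category -> radiologist (step (c)).
ROUTING = {"time_critical": "first radiologist", "routine": "second radiologist"}

def classify_image(image):
    # Stand-in for the trained image-processing algorithm of step (b);
    # "acuity" is an invented feature for this sketch.
    return "time_critical" if image.get("acuity", 0) > 0.8 else "routine"

def direct_image(image):
    category = classify_image(image)     # step (b): classify into a category
    return category, ROUTING[category]   # step (c): route by category

category, radiologist = direct_image({"acuity": 0.9})
```

The point of the routing table is that triage happens before any radiologist sees the image, so time-critical studies reach the appropriate reader first.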
Apparatus for generating temperature prediction model and method for providing simulation environment
An apparatus for generating a temperature prediction model is disclosed. The apparatus includes the temperature prediction model, configured to provide a simulation environment, and a processor configured to: set a hyperparameter of the temperature prediction model; train the temperature prediction model, in which the hyperparameter is set, so that it outputs a predicted temperature; update the hyperparameter on the basis of a difference between the predicted temperature outputted from the trained model and an actual temperature; and repeat the setting, training, and updating a predetermined number of times or more to set a final hyperparameter of the temperature prediction model.
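The set-train-compare-update loop can be sketched as below. This is a toy illustration under invented assumptions: exponential smoothing stands in for the temperature prediction model, `alpha` plays the role of the hyperparameter, and a simple additive sweep stands in for the update rule:

```python
def train_and_predict(alpha, history):
    # Exponential smoothing as a toy "temperature prediction model";
    # alpha is the hyperparameter being tuned.
    pred = history[0]
    for temp in history[1:]:
        pred = alpha * temp + (1 - alpha) * pred
    return pred

history = [20.0, 21.0, 22.0, 23.0]   # observed temperatures
actual_next = 24.0                    # actual temperature for comparison

alpha = 0.5                           # initial hyperparameter setting
best_alpha, best_err = alpha, float("inf")
for _ in range(10):                   # repeat a predetermined number of times
    predicted = train_and_predict(alpha, history)   # "train" and predict
    err = abs(predicted - actual_next)              # difference vs. actual
    if err < best_err:
        best_alpha, best_err = alpha, err           # keep the best setting
    alpha = min(1.0, alpha + 0.1)                   # update the hyperparameter
```

After the loop, `best_alpha` is the final hyperparameter in the sense of the abstract: the setting whose trained model came closest to the actual temperature.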
Method and apparatus for processing test execution logs to determine error locations and error types
A method of processing test execution logs to determine error locations and error types includes creating a set of training examples based on previously processed test execution logs, clustering the training examples into a set of clusters using an unsupervised learning process, and using the training examples of each cluster to train a respective supervised learning process, where each generated cluster is used as a class/label identifying the type of errors in the test execution log. The labeled data is then processed by supervised learning processes, specifically a classification algorithm. Once the classification model is built, it is used to predict the type of errors in future/unseen test execution logs. In some embodiments, the unsupervised learning process is density-based spatial clustering of applications with noise (DBSCAN), and the supervised learning processes are random forests or deep neural networks.
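The cluster-then-classify pipeline can be sketched as follows. This is a minimal toy: a greedy 1-D threshold clustering stands in for DBSCAN, a nearest-centroid rule stands in for the supervised classifier, and the one-number "signature" per log (e.g. the line of the first failing step) is an invented feature:

```python
def cluster(points, eps=2.0):
    # Toy stand-in for DBSCAN: greedy threshold clustering of 1-D signatures.
    clusters = []
    for p in sorted(points):
        if clusters and p - clusters[-1][-1] <= eps:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

# Invented feature: one numeric signature extracted per test execution log.
signatures = [1.0, 1.5, 2.0, 40.0, 41.0]

clusters = cluster(signatures)                # unsupervised step
labeled = [(x, i) for i, c in enumerate(clusters) for x in c]  # pseudo-labels

# Supervised step: nearest-centroid classifier trained on the pseudo-labels,
# used to predict the error type of future/unseen logs.
centroids = [sum(c) / len(c) for c in clusters]
def classify(x):
    return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
```

Each cluster index acts as the class/label for an error type, exactly in the sense the abstract describes, and unseen logs are assigned to the nearest learned type.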
Method for Processing Plants in a Field
The disclosure relates to a method for processing plants in a field in which a specific type of crop is planted, said method having the following steps: selecting a processing tool for processing plants; acquiring an image of the field, the image being correlated with position information; determining a position of a plant to be processed in the field using a neural network into which the acquired image is input, the neural network having a plurality of heads and in particular one of the heads being evaluated according to the processing tool and/or the type of crop grown; guiding the processing tool to the position of the plants; and processing the plants using the processing tool.
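The multi-head structure, where only the head matching the selected processing tool and/or crop type is evaluated, might be sketched as below; the backbone, head functions, tool/crop names, and "image" representation are all invented placeholders for the trained network:

```python
# Hypothetical multi-head model: a shared feature extractor plus one
# output head per (processing tool, crop type) combination.
def backbone(image):
    # Placeholder features: mean intensity per channel.
    return [sum(ch) / len(ch) for ch in image]

heads = {
    ("hoe", "maize"): lambda f: ("weed_position", round(f[0], 1)),
    ("sprayer", "maize"): lambda f: ("crop_position", round(f[1], 1)),
}

def locate_plant(image, tool, crop):
    features = backbone(image)
    return heads[(tool, crop)](features)   # evaluate only the matching head

image = [[0.1, 0.3], [0.5, 0.7]]   # two "channels" of a toy image
```

Sharing the backbone while switching heads is what lets one acquired image serve different tools and crop types without re-running the feature extraction.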
DEEP LEARNING-BASED MARINE OBJECT CLASSIFICATION USING 360-DEGREE IMAGES
Marine object detection, localization and classification systems and related techniques include an imaging system configured to capture a stream of panoramic images of the water surrounding a mobile structure, including a view of the horizon. The images may include a 360-degree view from the mobile structure. The system is configured to analyze the stream of images using a marine video analytics system and/or a convolutional neural network to detect a region of interest comprising an object on the surface of the water, classify the detected object, and relay the results to the user and/or a processing system. The analysis may include determining a horizon in a captured image, defining tiles across the horizon, and detecting objects in each tile.
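The tiling step described in the last sentence can be sketched as below; the image width, horizon row, and tile size are invented example values, and each resulting tile would be passed to the CNN detector/classifier:

```python
def tiles_along_horizon(image_width, horizon_y, tile_size):
    # Define square tiles (x, y, w, h) centered on the detected horizon row,
    # spanning the full panoramic image width.
    return [(x, horizon_y - tile_size // 2, tile_size, tile_size)
            for x in range(0, image_width - tile_size + 1, tile_size)]

tiles = tiles_along_horizon(image_width=1024, horizon_y=300, tile_size=256)
```

Tiling along the horizon concentrates the detector on the band where surface objects appear, rather than on open sky or near-field water.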
USER-IN-THE-LOOP OBJECT DETECTION AND CLASSIFICATION SYSTEMS AND METHODS
A detection device is adapted to traverse a search area and generate sensor data associated with an object that may be present in the search area, the detection device comprising a first logic device configured to detect and classify the object in the sensor data, communicate object detection information to a control system when the detection device is within a range of communications of the control system, and generate and store object analysis information for a user of the control system when the detection device is not in communication with the control system. A control system facilitates user monitoring and/or control of the detection device during operation and provides access to the stored object analysis information. The object analysis information is provided in an interactive display to facilitate user detection and classification of the detected object, so that the user may update the detection information, the trained object classifier, and the training dataset.