G06V10/7784

Devices and methods for accurately identifying objects in a vehicle's environment

Vehicle navigation control systems in autonomous driving rely on accurate predictions of objects within the vicinity of the vehicle to safely control the vehicle through its surrounding environment. Accordingly, this disclosure provides methods and devices that implement mechanisms for obtaining contextual variables of the vehicle's environment for use in determining the accuracy of predictions of objects within the vehicle's environment.

Embeddings + SVM for teaching traversability

A system includes a memory module configured to store image data captured by a camera and an electronic controller communicatively coupled to the memory module. The electronic controller is configured to receive the image data captured by the camera and to implement a neural network trained to predict a drivable portion of an environment in the image data. The neural network predicts the drivable portion of the environment in the image data. The electronic controller is further configured to implement a support vector machine, which determines whether the predicted drivable portion output by the neural network is classified as drivable based on a hyperplane of the support vector machine, and to output an indication of the drivable portion of the environment.
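
The hyperplane check described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the hyperplane weights, the embeddings, and the assumption that the neural network's predicted region is reduced to a fixed-length embedding are all hypothetical.

```python
import numpy as np

def svm_is_drivable(embedding, w, b):
    """Classify a predicted drivable-region embedding against a
    learned hyperplane w.x + b = 0 (positive side = drivable)."""
    return float(np.dot(w, embedding) + b) > 0.0

# Toy hyperplane and two region embeddings (illustrative values only).
w = np.array([0.8, -0.5, 0.3])
b = -0.1
road_like = np.array([1.0, 0.2, 0.5])       # lands on the positive side
obstacle_like = np.array([-0.4, 0.9, 0.1])  # lands on the negative side

print(svm_is_drivable(road_like, w, b))      # True
print(svm_is_drivable(obstacle_like, w, b))  # False
```

In a real pipeline the weights and bias would come from a trained SVM (e.g., the `coef_` and `intercept_` of a fitted linear classifier) rather than being hand-set.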

Visual analytics platform for updating object detection models in autonomous driving applications

A visual analytics tool for updating object detection models in autonomous driving applications is disclosed. In one embodiment, an object detection model analysis system includes a computer and an interface device. The interface device includes a display device. The computer includes an electronic processor configured to extract object information from image data with a first object detection model; extract characteristics of objects from metadata associated with the image data; generate a summary of the object information and the characteristics; generate coordinated visualizations based on the summary and the characteristics; generate a recommendation graphical user interface element based on the coordinated visualizations and a first one or more user inputs; and update the first object detection model, based at least in part on a classification of one or more individual objects as an actual weakness in the first object detection model, to generate a second object detection model for autonomous driving.
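
The extract-summarize-flag loop behind such a tool can be sketched in plain Python. The class names, counts, and the 0.5 miss-rate cutoff below are illustrative assumptions, not values from the disclosure.

```python
from collections import Counter

# Hypothetical detections from a first object detection model and
# object characteristics taken from image metadata (ground truth).
detections = ["car", "car", "pedestrian"]
metadata_objects = ["car", "car", "pedestrian", "cyclist", "cyclist"]

def summarize(detections, metadata_objects):
    """Per-class summary: detected vs. present counts and a miss rate
    that a visual analytics tool could render as coordinated views."""
    found, present = Counter(detections), Counter(metadata_objects)
    return {cls: {"present": present[cls],
                  "detected": found[cls],
                  "miss_rate": 1 - found[cls] / present[cls]}
            for cls in present}

summary = summarize(detections, metadata_objects)
# Classes an analyst might confirm as actual weaknesses of the model.
weaknesses = [c for c, s in summary.items() if s["miss_rate"] > 0.5]
print(weaknesses)  # ['cyclist']
```

The confirmed weakness classes would then drive targeted retraining to produce the second object detection model.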

METHOD AND APPARATUS FOR ESTIMATING SIZE OF DAMAGE IN DISASTER-AFFECTED AREAS

According to an embodiment of the present disclosure, there may be provided an operation method of a server for estimating the size of damage in disaster affected areas. In this instance, the operation method of the server may include acquiring at least one first disaster image, deriving an affected area from each of the at least one first disaster image, acquiring affected area related information through labeling based on the derived affected area, and training a first learning model using the at least one first disaster image and the affected area related information.
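
A toy version of the derive-and-measure step can illustrate the flow. Here a fixed change threshold stands in for the first learning model, and all image values are invented for the example; the patent trains a model on (image, affected-area) pairs rather than thresholding.

```python
import numpy as np

def derive_affected_area(pre, post, change_thresh=0.3):
    """Derive an affected-area mask from a before/after image pair by
    thresholding per-pixel change (a stand-in for the learned model)."""
    return np.abs(post - pre) > change_thresh

def damage_ratio(mask):
    """Size of damage expressed as the fraction of affected pixels."""
    return float(mask.mean())

# Toy 4x4 grayscale "disaster image" pair (illustrative values only).
pre = np.zeros((4, 4))
post = pre.copy()
post[:2, :] = 1.0  # top half of the scene changed

mask = derive_affected_area(pre, post)
print(damage_ratio(mask))  # 0.5
```

The labeled masks produced this way (or by human annotators) are exactly the kind of affected-area-related information the abstract says is used to train the first learning model.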

Image-data-based classification of vacuum seal packages

Vacuum seal packages can be classified based on image data. Training image data is received that includes image data about first vacuum seal packages. Labels associated with the first vacuum seal packages are received, where each of the labels includes a state of one of the first vacuum seal packages. A trained classification model is developed based on the training image data and the received labels. Image data representative of a second vacuum seal package is received. The image data is inputted into the trained classification model, where the trained classification model is configured to classify a state of the second vacuum seal package based on the image data. The state of the second vacuum seal package is received from the trained classification model.
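
The train-then-classify flow maps onto any standard classifier. As a minimal sketch, the nearest-centroid model, the two-dimensional features, and the state labels below are all assumptions for illustration; the disclosure does not specify a model family.

```python
import numpy as np

def train_centroids(features, labels):
    """'Train' a minimal classifier: one mean feature vector per state."""
    return {state: np.mean([f for f, l in zip(features, labels) if l == state], axis=0)
            for state in set(labels)}

def classify(centroids, feature):
    """Assign the state whose centroid is nearest to the feature vector."""
    return min(centroids, key=lambda s: np.linalg.norm(centroids[s] - feature))

# Hypothetical 2-D image features (e.g., wrinkle score, fill level).
train_x = [np.array([0.1, 0.9]), np.array([0.2, 0.8]),
           np.array([0.9, 0.1]), np.array([0.8, 0.2])]
train_y = ["sealed", "sealed", "leaking", "leaking"]

model = train_centroids(train_x, train_y)
# Classify a second vacuum seal package from its image-derived features.
print(classify(model, np.array([0.15, 0.85])))  # sealed
```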

SYSTEMS FOR SELF-ORGANIZING DATA COLLECTION AND STORAGE IN A REFINING ENVIRONMENT

Systems for self-organizing data collection and storage in a refining environment are disclosed. An example system may include a swarm of mobile data collectors structured to interpret a plurality of sensor inputs from sensors in the refining environment, wherein the plurality of sensor inputs is configured to sense at least one of: an operational mode, a fault mode, a maintenance mode, or a health status of a plurality of refining system components disposed in the refining environment, and wherein the plurality of refining system components is structured to contribute, in part, to refining of a product. The self-organizing system organizes the swarm of mobile data collectors to collect data from the system components and self-organizes at least one of a storage operation of the data, a data collection operation of the sensors, or a selection operation of the plurality of sensor inputs.
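
One simple way a swarm could self-organize its data collection is by ranking components by urgency of their sensed mode. The priority ordering, component names, and statuses below are invented for illustration; the disclosure covers self-organization broadly, not this specific scheme.

```python
# Sensed modes, ordered from most to least urgent (assumed ordering).
PRIORITY = {"fault": 0, "maintenance": 1, "operational": 2}

def assign_collectors(component_status, n_collectors):
    """Order components by urgency and assign one mobile collector each."""
    ordered = sorted(component_status, key=lambda c: PRIORITY[component_status[c]])
    return ordered[:n_collectors]

status = {"pump_a": "operational", "valve_b": "fault", "heater_c": "maintenance"}
print(assign_collectors(status, 2))  # ['valve_b', 'heater_c']
```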

Methods and systems for sensor fusion in a production line environment

Methods and systems for sensor fusion in a production line environment are disclosed. An example system for data collection in an industrial production environment may include an industrial production system comprising a plurality of components, and a plurality of sensors each operatively coupled to at least one of the components; a sensor communication circuit to interpret a plurality of sensor data values in response to a sensed parameter group; a data analysis circuit to detect an operating condition of the industrial production system based at least in part on a portion of the sensor data values; and a response circuit to modify a production-related operating parameter of the industrial production system in response to the detected operating condition.
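
The detect-and-respond chain can be sketched minimally. Fusing the parameter group with a mean, the 80.0 limit, and halving the line speed are all illustrative assumptions standing in for the circuits the abstract describes.

```python
from statistics import mean

def detect_operating_condition(sensor_values, limit=80.0):
    """Data analysis circuit: fuse a sensed parameter group (here, a
    simple mean) and flag an over-limit operating condition."""
    return "over_limit" if mean(sensor_values) > limit else "normal"

def respond(condition, line_speed):
    """Response circuit: modify a production parameter on detection."""
    return line_speed * 0.5 if condition == "over_limit" else line_speed

temps = [78.0, 91.0, 85.0]  # fused readings from coupled sensors
condition = detect_operating_condition(temps)
print(condition, respond(condition, line_speed=100.0))  # over_limit 50.0
```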

Cognitive data pseudonymization

Computer systems, methods and program products for automating pseudonymization of personal identifying information (PII) using machine learning, metadata, and crowdsourcing patterns to identify and replace PII. Machine learning models are trained for classifying known column names or key names for processing, using metadata. Column or key names are classified to be unprocessed, anonymized or pseudonymized by a pseudonymizer without revealing PII or scrubbing data into a useless format. A library of crowdsourced patterns is utilized for matching PII to data values within column or key names, and PII is mapped to replacement methods. Feedback from user annotations retrains the algorithms to improve classification accuracy, and deep learning algorithms automate the identification of PII using regular expression generation to concisely articulate how pseudonymizers search for PII patterns within a data set. PII replacement is mapped consistently across entire data packages, and the crowdsourced pattern library is updated with generated regular expressions.
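
The two properties the abstract emphasizes, pattern-based matching and consistent replacement across a data package, can be shown with one regular expression and a salted hash. The email pattern, salt, and token format are illustrative assumptions, not the disclosure's generated expressions.

```python
import re
import hashlib

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # one crowdsourced-style pattern

def pseudonymize(text, salt="s3cret"):
    """Replace matched PII with a stable token so the same value maps to
    the same pseudonym across an entire data package."""
    def token(m):
        digest = hashlib.sha256((salt + m.group()).encode()).hexdigest()[:8]
        return f"user_{digest}"
    return EMAIL.sub(token, text)

row1 = pseudonymize("contact: alice@example.com")
row2 = pseudonymize("billing: alice@example.com")
# Consistent mapping: the same address yields the same pseudonym.
print(row1.split()[1] == row2.split()[1])  # True
```

Hashing (rather than random tokens) is what keeps the mapping consistent without storing a lookup table, at the cost of requiring the salt to stay secret.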

Cross-modal weak supervision for media classification

Methods, systems, and storage media for classifying content across media formats based on weak supervision and cross-modal training are disclosed. The system can maintain a first feature classifier and a second feature classifier that classify features of content having a first and second media format, respectively. The system can extract a feature space from a content item using the first feature classifier and the second feature classifier. The system can apply a set of content rules to the feature space to determine content metrics. The system can correlate a set of known labelled data to the feature space to construct determinative training data. The system can train a discrimination model using the content item and the determinative training data. The system can classify a second content item using the discrimination model to assign a content policy to the second content item.
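
The rule-plus-known-labels step can be sketched as follows. The feature names, scores, the 0.5 rule threshold, and the policy labels are assumptions for illustration; the disclosure's content rules and correlation step are more general.

```python
# Hypothetical feature space extracted by the two modality classifiers.
items = [
    {"id": 1, "text_violence": 0.9, "image_violence": 0.7},
    {"id": 2, "text_violence": 0.1, "image_violence": 0.2},
]

def content_rule(item):
    """Weak-supervision rule applied to the cross-modal feature space."""
    if max(item["text_violence"], item["image_violence"]) > 0.5:
        return "restricted"
    return "allowed"

known_labels = {1: "restricted"}  # small set of known labelled data

def build_training_data(items):
    """Keep rule labels, but let known labels override (correlation step)."""
    return [(i["id"], known_labels.get(i["id"], content_rule(i))) for i in items]

print(build_training_data(items))  # [(1, 'restricted'), (2, 'allowed')]
```

The resulting (item, label) pairs are the determinative training data on which the discrimination model would be trained.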

WEAKLY SUPERVISED OBJECT LOCALIZATION APPARATUS AND METHOD

A weakly supervised object localization apparatus includes: a feature map generator configured to generate a feature map X by performing a first convolution operation on an input image; an erased feature map generator configured to generate an attention map A through the feature map X and generate an erased feature map −X by performing a masking operation on the input image through the attention map A; a final map generator configured to generate a final feature map F and a final erased feature map −F, respectively, by performing a second convolution operation on the feature map X and the erased feature map −X; and a contrastive guidance determiner configured to determine contrastive guidance for a foreground object in the input image based on the final feature map F and the final erased feature map −F.
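
The attention-and-erase idea can be illustrated numerically. As a simplification of the apparatus above, this sketch derives the attention map A as a channel mean and masks the feature map directly (the patent applies the mask to the input image and runs a second convolution); the threshold and random features are illustrative.

```python
import numpy as np

def attention_map(X):
    """Attention map A: channel-wise mean of the feature map X (C,H,W)."""
    return X.mean(axis=0)

def erase(X, A, thresh=0.5):
    """Erased feature map -X: zero the most-attended regions so the
    network is pushed to find complementary object evidence."""
    keep = (A <= thresh * A.max()).astype(X.dtype)
    return X * keep  # broadcast the (H,W) mask over all channels

rng = np.random.default_rng(0)
X = rng.random((8, 4, 4))  # toy feature map, 8 channels of 4x4
A = attention_map(X)
X_erased = erase(X, A)

# The peak-attention location is zeroed in every channel.
peak = np.unravel_index(A.argmax(), A.shape)
print(X_erased[:, peak[0], peak[1]].sum())  # 0.0
```

Contrasting features at attended versus erased locations is what gives the contrastive guidance for the foreground object.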