G06F11/3628

Systems and methods for censoring text inline

Systems and methods for censoring text-based data are provided. In some embodiments, a censoring system may include at least one processor and at least one non-transitory memory storing application programming interface instructions. The censoring system may be configured to perform operations comprising storing a target pattern type and a computer-based model for identifying a target data pattern corresponding to the target pattern type within text-based data. The censoring system may also be configured to receive, by a server, text-based data, and to retrieve the stored target pattern type to be censored in the text-based data. The censoring system may be configured to identify, within the received text-based data, a target data pattern corresponding to the retrieved target pattern type. The censoring system may be configured to censor target characters within the identified target data pattern, and to transmit the censored text-based data to a receiving party.
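
The censoring flow above can be sketched as follows. This is a minimal illustration, assuming a single hypothetical pattern type ("credit_card") expressed as a regular expression; the abstract's stored computer-based model is simplified to a pattern lookup table.

```python
import re

# Hypothetical stored pattern types; a real system would retrieve these
# from storage and may use a trained model rather than a fixed regex.
TARGET_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def censor(text: str, pattern_type: str, mask: str = "*") -> str:
    """Replace every character of each matched target data pattern with the mask."""
    pattern = TARGET_PATTERNS[pattern_type]
    return pattern.sub(lambda m: mask * len(m.group()), text)
```

A call such as `censor("card 1234-5678-9012-3456 ok", "credit_card")` masks the matched digits while leaving the surrounding text, and the censored text would then be transmitted to the receiving party.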

System, method, and computer-accessible medium for evaluating multi-dimensional synthetic data using integrated variants analysis

An exemplary system, method, and computer-accessible medium can include, for example, receiving an original dataset(s), receiving a synthetic dataset(s), training a model(s) using the original dataset(s) and the synthetic dataset(s), and evaluating the synthetic dataset(s) based on the training of the model(s). The model(s) can include a first model and a second model, and the first model can be trained using the original dataset(s) and the second model can be trained using the synthetic dataset(s). The synthetic dataset(s) can be evaluated by comparing first results from the training of the first model to second results from the training of the second model.
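The evaluation described above can be sketched minimally as follows, assuming a deliberately trivial "model" (a mean predictor) so the comparison logic stands out; the function names and the holdout set are illustrative, not from the source.

```python
from statistics import mean

def train(dataset):
    # Stand-in training: the "model" is just the mean of the data.
    return mean(dataset)

def evaluate_synthetic(original, synthetic, holdout):
    """Train one model on original data and one on synthetic data,
    then compare their results on a common holdout set."""
    first_model = train(original)
    second_model = train(synthetic)
    err1 = mean(abs(y - first_model) for y in holdout)
    err2 = mean(abs(y - second_model) for y in holdout)
    return {"original_error": err1, "synthetic_error": err2,
            "gap": abs(err1 - err2)}
```

A small gap suggests the synthetic dataset supports training comparably to the original, which is the core of the integrated comparison the abstract describes.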

AUTOMATICALLY SCALABLE SYSTEM FOR SERVERLESS HYPERPARAMETER TUNING

A scalable system and method for completing a model task using a serverless architecture is disclosed. The system may include memory storing instructions and one or more processors. The method may include receiving a request to complete a model task; retrieving a first model and a first hyperparameter based on the request; provisioning computing resources to a first development instance configured to train the first model based on the first hyperparameter and the model task; training, by the first development instance, an instance of the first model to produce a trained model and terminating said training upon satisfaction of a training criterion; receiving the trained model and a first performance metric; receiving a second performance metric associated with a second model; and terminating the first development instance based on a determination that a termination condition is satisfied based on at least one of the first and second performance metrics.
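
A rough sketch of this tuning loop appears below. The serverless provisioning is simulated by a plain function call, and the metric formula is invented purely so the loop has something to compare; only the overall shape (train per instance, collect metrics, stop on a termination condition) reflects the abstract.

```python
def train_instance(model_id, hyperparameter):
    # Stand-in for provisioning a development instance and training:
    # the metric here is a made-up function of the hyperparameter.
    return {"model": model_id, "metric": hyperparameter * 0.1}

def tune(model_id, hyperparameters, target_metric=0.2):
    """Train one instance per hyperparameter; terminate once the
    best observed performance metric satisfies the termination condition."""
    best = None
    for hp in hyperparameters:
        result = train_instance(model_id, hp)
        if best is None or result["metric"] < best["metric"]:
            best = result
        if best["metric"] <= target_metric:  # termination condition
            break
    return best
```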

AUTOMATED HONEYPOT CREATION WITHIN A NETWORK

Systems and methods for managing Application Programming Interfaces (APIs) are disclosed. Systems may involve automatically generating a honeypot. For example, the system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving, from a client device, a call to an API node and classifying the call as unauthorized. The operations may include sending the call to a node-imitating model associated with the API node and receiving, from the node-imitating model, synthetic node output data. The operations may include sending a notification based on the synthetic node output data to the client device.
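
The routing decision above can be sketched as follows. The authorization check, the key set, and both node functions are hypothetical stand-ins; in the described system the node-imitating model would generate synthetic output resembling the real node's responses.

```python
AUTHORIZED_KEYS = {"key-123"}  # hypothetical credential store

def real_node(call):
    # Stand-in for the genuine API node.
    return {"status": 200, "data": "real-response"}

def node_imitating_model(call):
    # Honeypot: synthetic node output shaped like the real response.
    return {"status": 200, "data": "synthetic-response"}

def handle_api_call(call):
    if call.get("api_key") in AUTHORIZED_KEYS:
        return real_node(call)
    # Classified as unauthorized: route to the node-imitating model,
    # so the caller receives plausible synthetic data instead.
    return node_imitating_model(call)
```

Because the unauthorized caller still receives a well-formed response, it has no immediate signal that it reached a honeypot rather than the real API node.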

SYSTEMS AND METHODS TO USE NEURAL NETWORKS FOR MODEL TRANSFORMATIONS

Systems and methods for transforming models, including legacy models, into neural network models are disclosed. In an embodiment, a method may include receiving input data comprising an input model, an input dataset, and an input command. The method may include applying the input model to the input dataset to generate model output and storing the model output and at least one of input model features or a map of the input model. The method may include generating candidate neural network models with parameters. The method may include tuning the candidate neural network models to the input model. The method may include receiving model output from the candidate neural network models and selecting a neural network model from the candidate neural network models based on the candidate model output and model selection criteria. In some aspects, the method may include returning the selected neural network model.
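
The selection step can be sketched as below. The candidates here are simple linear functions standing in for small neural networks, and the selection criterion is a squared-error agreement with the input model's output; both simplifications are assumptions for illustration.

```python
def input_model(x):
    # Stand-in for the legacy input model being transformed.
    return 2 * x + 1

def make_candidate(w, b):
    # Stand-in for a candidate neural network with parameters (w, b).
    return lambda x: w * x + b

def select_model(candidates, dataset):
    """Select the candidate whose output best matches the input model's
    output on the input dataset (squared-error selection criterion)."""
    target = [input_model(x) for x in dataset]
    def loss(model):
        return sum((model(x) - t) ** 2 for x, t in zip(dataset, target))
    return min(candidates, key=loss)
```

Tuning, in this simplified picture, would adjust each candidate's parameters to shrink its loss before the final selection is made.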

DATASET CONNECTOR AND CRAWLER TO IDENTIFY DATA LINEAGE AND SEGMENT DATA

Systems and methods for connecting datasets are disclosed. For example, a system may include a memory unit storing instructions and a processor configured to execute the instructions to perform operations. The operations may include receiving a plurality of datasets and a request to identify a cluster of connected datasets among the received plurality of datasets. The operations may include selecting a dataset. In some embodiments, the operations include identifying a data schema of the selected dataset and determining a statistical metric of the selected dataset. The operations may include identifying foreign key scores. The operations may include generating a plurality of edges between the datasets based on the foreign key scores, the data schema, and the statistical metric. The operations may include segmenting and returning datasets based on the plurality of edges.
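
The foreign-key scoring, edge generation, and segmentation steps can be sketched as follows. Datasets are reduced to single key columns and the score to a value-overlap fraction; the threshold and the omission of schema and statistical checks are simplifying assumptions.

```python
def foreign_key_score(col_a, col_b):
    """Fraction of values in col_a that also appear in col_b."""
    if not col_a:
        return 0.0
    b = set(col_b)
    return sum(1 for v in col_a if v in b) / len(col_a)

def build_edges(datasets, threshold=0.8):
    """Generate edges between datasets whose foreign key score clears the threshold."""
    edges, names = [], list(datasets)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if foreign_key_score(datasets[a], datasets[b]) >= threshold:
                edges.append((a, b))
    return edges

def segment(names, edges):
    """Segment datasets into clusters of connected datasets."""
    clusters = {n: {n} for n in names}
    for a, b in edges:
        merged = clusters[a] | clusters[b]
        for n in merged:
            clusters[n] = merged
    return {frozenset(c) for c in clusters.values()}
```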

MAINTENANCE OF COMPUTING DEVICES

In an example there is provided a method to access data records generated by a computing device, the data records specifying at least an event log of in-device code executed by the computing device. The method comprises applying pattern recognition to the data records of the computing device to determine if the computing device needs in-device code maintenance and performing maintenance of the in-device code on the computing device in response to the output of the pattern recognition.
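
A minimal version of this check might look like the following, where the "pattern recognition" is reduced to matching error-like entries in the event log against a regular expression; the pattern, the threshold, and the firmware-bump stand-in for maintenance are all assumptions.

```python
import re

# Hypothetical signature of in-device code faults in the event log.
ERROR_PATTERN = re.compile(r"(assert|segfault|panic)", re.IGNORECASE)

def needs_maintenance(event_log, threshold=3):
    """Apply simple pattern recognition to the event log: flag the device
    when error-like events reach the threshold."""
    hits = sum(1 for line in event_log if ERROR_PATTERN.search(line))
    return hits >= threshold

def maintain(device):
    # Stand-in for performing in-device code maintenance.
    device["firmware_version"] += 1
    return device
```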

Remote debugging parallel regions in stream computing applications

A method, system and computer program product for facilitating remote debugging of parallel regions in stream computing applications. A stream computing management server (SCMS) communicates a list of processing elements to a debugging interface. Responsive to setting a debugging breakpoint for a processing element of the list of processing elements, the SCMS receives a command to enable remote debugging for the selected processing element. In this regard, the processing element is a part of a parallel channel in a distributed processing environment. The SCMS maps the processing element to attachment information in the distributed processing environment. The SCMS dynamically attaches a remote debugger to the processing element based on the attachment information.
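
The mapping and attachment steps can be sketched as below. The attachment map, the host/PID fields, and the function name are hypothetical; an actual SCMS would launch a real remote debugger process rather than returning a record.

```python
# Hypothetical map from processing elements to attachment information
# maintained by the stream computing management server (SCMS).
ATTACHMENT_MAP = {
    "pe-1": {"host": "worker-a", "pid": 4242},
    "pe-2": {"host": "worker-b", "pid": 5151},
}

def enable_remote_debugging(processing_element):
    """Map the processing element to its attachment information and
    record the dynamic attachment of a remote debugger."""
    info = ATTACHMENT_MAP.get(processing_element)
    if info is None:
        raise KeyError(f"no attachment info for {processing_element}")
    # A real SCMS would attach a debugger here; we only record the intent.
    return {"attached": True, **info}
```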

Systems and methods to identify neural network brittleness based on sample data and seed generation

Systems and methods for determining neural network brittleness are disclosed. For example, the system may include one or more memory units storing instructions and one or more processors configured to execute the instructions to perform operations. The operations may include receiving a modeling request comprising a preliminary model and a dataset. The operations may include determining a preliminary brittleness score of the preliminary model. The operations may include identifying a reference model and determining a reference brittleness score of the reference model. The operations may include comparing the preliminary brittleness score to the reference brittleness score and generating a preferred model based on the comparison. The operations may include providing the preferred model.
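
One simple way to make "brittleness score" concrete is sketched below: the score is the smallest perturbation of a sample that flips a model's prediction, so a larger score means a less brittle model. The perturbation search and the comparison rule are illustrative assumptions, not the patented method.

```python
def brittleness_score(model, sample, step=0.1, max_steps=100):
    """Smallest perturbation of `sample` that changes the model's prediction."""
    base = model(sample)
    for i in range(1, max_steps + 1):
        eps = i * step
        if model(sample + eps) != base or model(sample - eps) != base:
            return eps
    return float("inf")

def prefer_model(preliminary, reference, sample):
    """Compare brittleness scores and return the less brittle model."""
    if brittleness_score(preliminary, sample) >= brittleness_score(reference, sample):
        return preliminary
    return reference
```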

SYSTEMS AND METHODS FOR REPLACING SENSITIVE DATA

A model optimizer is disclosed for managing training of models with automatic hyperparameter tuning. The model optimizer can perform a process including multiple steps. The steps can include receiving a model generation request, retrieving from a model storage a stored model and a stored hyperparameter value for the stored model, and provisioning computing resources with the stored model according to the stored hyperparameter value to generate a first trained model. The steps can further include provisioning the computing resources with the stored model according to a new hyperparameter value to generate a second trained model, determining a satisfaction of a termination condition, storing the second trained model and the new hyperparameter value in the model storage, and providing the second trained model in response to the model generation request.
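
The optimizer's steps can be sketched as follows. Model storage is reduced to a dictionary, training to a made-up scoring function, and the termination condition to a score tolerance; all names and values are illustrative.

```python
# Hypothetical model storage holding a stored model and hyperparameter value.
MODEL_STORAGE = {"classifier": {"model": "weights-v1", "hyperparameter": 0.5}}

def provision_and_train(model, hyperparameter):
    # Stand-in for provisioning computing resources and training:
    # the score is an invented function of the hyperparameter.
    return {"model": model, "hyperparameter": hyperparameter,
            "score": abs(hyperparameter - 0.1)}

def handle_generation_request(name, new_hyperparameter, tolerance=0.05):
    """Retrieve the stored model, train with stored and new hyperparameter
    values, and store the second model when the termination condition holds."""
    stored = MODEL_STORAGE[name]
    provision_and_train(stored["model"], stored["hyperparameter"])  # first model
    second = provision_and_train(stored["model"], new_hyperparameter)
    if second["score"] <= tolerance:  # termination condition satisfied
        MODEL_STORAGE[name] = {"model": second["model"],
                               "hyperparameter": new_hyperparameter}
    return second
```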