Patent classification: G06F9/5005
AUTOMATION PREVIEW
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for automated management of campaigns using scripted rules.
ITERATIVE AND HIERARCHICAL PROCESSING OF REQUEST PARTITIONS
Methods and systems disclosed herein relate generally to temporally prioritizing queries of queue-task partitions based on distributions of flags assigned to bits corresponding to access rights.
Lock scheduling using machine learning
The present approach relates to systems and methods for facilitating run time predictions for automated cloud-computing tasks, and for using the predicted run time to schedule resource locking. A predictive model may predict an automated task's run time based on historical run times to completion, and the prediction may be updated using machine learning. Resource lock schedules may be determined for a queue of automated tasks utilizing the resource, based on the predicted run times of the various types of automated tasks. The predicted run time may be used to reserve a resource for the given duration, such that the resource is not available for use by another task.
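The scheduling step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the task types, predicted run times, and function names are all assumptions, and the "predictive model" is reduced to a lookup table standing in for a trained regressor.

```python
from dataclasses import dataclass

# Hypothetical sketch: schedule exclusive resource locks for queued tasks
# using per-task-type predicted run times (all names are illustrative).

@dataclass
class Task:
    name: str
    task_type: str

# Predicted run times (seconds) per task type, standing in for the output
# of a model trained on historical completion times.
predicted_runtime = {"backup": 120, "clone": 300, "discovery": 60}

def schedule_locks(queue, start=0):
    """Assign each task a [start, end) lock window on the shared resource."""
    schedule = []
    t = start
    for task in queue:
        duration = predicted_runtime[task.task_type]
        schedule.append((task.name, t, t + duration))
        t += duration  # the resource is reserved, so the next task waits
    return schedule

queue = [Task("nightly-backup", "backup"), Task("env-clone", "clone")]
print(schedule_locks(queue))
# → [('nightly-backup', 0, 120), ('env-clone', 120, 420)]
```

Because each reservation spans the predicted duration, no two tasks hold the resource at once; a model that underpredicts would need a correction step, which is where the machine-learning update in the abstract comes in.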
Transaction-enabled systems and methods for resource acquisition for a fleet of machines
The present disclosure describes transaction-enabling systems and methods. A system can include a controller and a fleet of machines, each having at least one of a compute task requirement, a networking task requirement, and an energy consumption task requirement. The controller may include a resource requirement circuit to determine an amount of a resource for each of the machines to service the task requirement for each machine, a forward resource market circuit to access a forward resource market, and a resource distribution circuit to execute an aggregated transaction of the resource on the forward resource market.
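The controller's aggregation step might look like the sketch below. The machine records, resource names, and the forward-market call are all illustrative assumptions; the point is only that per-machine requirements are summed into one aggregated transaction.

```python
# Hypothetical sketch of the controller: sum each machine's resource
# requirement and execute a single aggregated forward-market transaction
# (all names and numbers are illustrative).

machines = [
    {"id": "m1", "compute": 4, "energy": 10},
    {"id": "m2", "compute": 2, "energy": 25},
]

def aggregate_requirement(fleet, resource):
    """Resource requirement circuit: total amount needed across the fleet."""
    return sum(m[resource] for m in fleet)

def buy_forward(resource, amount, price_per_unit):
    """Forward resource market circuit: stand-in for a market order."""
    return {"resource": resource, "amount": amount,
            "cost": amount * price_per_unit}

order = buy_forward("energy", aggregate_requirement(machines, "energy"), 0.12)
print(order)  # one aggregated transaction instead of per-machine orders
```

Aggregating before transacting is what lets the fleet buy at one market position rather than issuing many small per-machine orders.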
Dynamically mapping software infrastructure utilization
A computer-based system and method for real-time monitoring of computer resource usage, including obtaining, by a monitoring application executed by a processor, from a plurality of applications, each application executed by a processor, a report upon the accessing of at least one accessed resource by at least one accessing user; and generating, by the monitoring application based on the report, a map of resources accessed by the plurality of applications. If a notification that a resource has been compromised is obtained, a list of all applications that have accessed the resource may be generated based on the map.
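The map-then-query behavior described above can be sketched in a few lines. The application names, resource identifiers, and function names are illustrative assumptions, not part of the patent.

```python
from collections import defaultdict

# Hypothetical sketch of the monitoring application: build a map of
# resource -> accessing applications from access reports, then answer
# "which applications touched a compromised resource?" (names illustrative).

resource_map = defaultdict(set)

def report_access(app, resource, user):
    """Called when an application reports that a user accessed a resource."""
    resource_map[resource].add(app)

def apps_for_compromised(resource):
    """On a compromise notification, list every application that accessed it."""
    return sorted(resource_map.get(resource, set()))

report_access("billing-svc", "db/customers", "alice")
report_access("report-svc", "db/customers", "bob")
report_access("report-svc", "db/orders", "bob")
print(apps_for_compromised("db/customers"))  # → ['billing-svc', 'report-svc']
```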
System and Method for Providing Dynamic Provisioning Within a Compute Environment
The disclosure relates to systems, methods, and computer-readable media for dynamically provisioning resources within a compute environment. The method aspect of the disclosure comprises analyzing a queue of jobs to determine an availability of compute resources for each job; determining an availability of a scheduler of the compute environment to satisfy all service level agreements (SLAs) and target service levels within a current configuration of the compute resources; determining possible resource provisioning changes to improve SLA fulfillment; determining a cost of provisioning; and, if the provisioning changes improve overall SLA delivery, re-provisioning at least one compute resource.
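The cost-versus-benefit decision at the heart of the method can be sketched as below. The fulfillment metric, capacity model, and parameter names are illustrative assumptions; the abstract does not specify how SLA delivery is scored.

```python
# Hypothetical sketch of the provisioning decision: compare SLA fulfillment
# under the current configuration with a candidate re-provisioning, and
# apply the change only if the gain exceeds the provisioning cost.

def sla_fulfilled(jobs, capacity):
    """Count queued jobs (resource needs) that fit the capacity, in order."""
    met, free = 0, capacity
    for need in jobs:
        if need <= free:
            met += 1
            free -= need
    return met

def should_reprovision(jobs, current_capacity, new_capacity, cost, gain_per_sla):
    """True if the SLA improvement is worth the cost of provisioning."""
    gain = (sla_fulfilled(jobs, new_capacity)
            - sla_fulfilled(jobs, current_capacity)) * gain_per_sla
    return gain > cost

jobs = [4, 2, 8, 1]  # per-job resource needs in the queue
print(should_reprovision(jobs, current_capacity=6, new_capacity=15,
                         cost=3, gain_per_sla=5))  # → True
```

Here growing capacity from 6 to 15 lifts fulfilled jobs from 2 to 4; the gain (2 × 5) exceeds the cost (3), so the re-provisioning proceeds.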
SYSTEM AND METHODS FOR TRANSACTION-BASED PROCESS MANAGEMENT
Systems and methods for transaction/file-based management of a plurality of processes associated with various jobs are provided. Through the management of discrete applications, a file distribution manager/scheduler orchestrates automated execution of different types of jobs. The processes executed for the various jobs can vary based on job type or other parameters.
MULTI-PATH TRANSPORT DESIGN
Disclosed herein is a method including receiving, from a user application, data to be transmitted from a source address to a destination address using a single connection through a network; and splitting the data into a plurality of packets according to a communication protocol. For each packet of the plurality of packets, a respective flowlet for the packet to be transmitted in is determined from a plurality of flowlets. Assignment of the flowlets to the packets can be dynamically adjusted based on utilization of the flowlets.
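The split-and-assign flow can be sketched as follows. The abstract does not say how flowlets are chosen, so the least-utilized-flowlet policy here is an assumption used purely to illustrate dynamic adjustment; the function names and MTU are also illustrative.

```python
# Hypothetical sketch of dynamic flowlet assignment: each packet of a
# single connection is mapped to the currently least-utilized flowlet,
# so the packet-to-flowlet mapping adapts as utilization changes.

def split_into_packets(data, mtu):
    """Split application data into packets of at most `mtu` bytes."""
    return [data[i:i + mtu] for i in range(0, len(data), mtu)]

def assign_flowlets(packets, n_flowlets):
    """Greedy dynamic assignment: pick the flowlet with the fewest bytes."""
    utilization = [0] * n_flowlets
    assignment = []
    for pkt in packets:
        fl = utilization.index(min(utilization))  # least-utilized flowlet
        assignment.append(fl)
        utilization[fl] += len(pkt)
    return assignment

packets = split_into_packets(b"x" * 10, mtu=3)  # 4 packets: 3, 3, 3, 1 bytes
print(assign_flowlets(packets, n_flowlets=2))   # → [0, 1, 0, 1]
```

Because assignment consults live utilization for every packet, a flowlet that falls behind naturally receives less traffic, which is the dynamic adjustment the abstract describes.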
CPU Resource Reservation Method and Apparatus, and Related Device Thereof
Provided are a Central Processing Unit (CPU) resource reservation method, apparatus, and device, and a computer-readable storage medium. The method includes: selecting a target working node according to a received Virtual Machine (VM) startup request; statistically obtaining a total number of virtual cores and a number of allocatable physical cores in the target working node; calculating an available CPU quota from the total number of virtual cores and the number of allocatable physical cores; and performing CPU resource reservation configuration on the target working node using the available CPU quota. With this method, the reservation of CPU resources in a VM system may be implemented more flexibly and efficiently.
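One plausible form of the quota calculation is sketched below. The abstract does not give the formula, so the overcommit ratio and the capacity-minus-allocated arithmetic are assumptions, as are all names.

```python
# Hypothetical sketch of the quota calculation: the available CPU quota on
# a working node is its allocatable physical cores scaled by an overcommit
# ratio, minus the virtual cores already handed out (ratio is illustrative).

def available_cpu_quota(total_vcores_allocated, allocatable_pcores,
                        overcommit_ratio=4):
    """Number of vCPUs still reservable on the node."""
    capacity = allocatable_pcores * overcommit_ratio
    return max(capacity - total_vcores_allocated, 0)

def reserve_for_vm(node, requested_vcores):
    """Configure a reservation on the target node if the quota allows it."""
    quota = available_cpu_quota(node["vcores"], node["pcores"])
    if requested_vcores > quota:
        return False  # reject: the request would exceed the node's quota
    node["vcores"] += requested_vcores
    return True

node = {"vcores": 10, "pcores": 4}   # 4 physical cores, 10 vCPUs in use
print(reserve_for_vm(node, 4))       # quota = 4*4 - 10 = 6 → True
print(reserve_for_vm(node, 4))       # quota now 2       → False
```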
SYSTEMS AND METHODS FOR PERFORMING SECURE DIGITAL FORENSICS INVESTIGATIONS USING A HYBRID OF ON-PREMISES AND CLOUD-BASED RESOURCES
Computer systems and methods for managing sensitive data items when performing a computer-implemented digital forensic workflow using on-premises (“on-prem”) and cloud resources are provided. The system includes a control computing node configured to: store the digital forensic workflow in a memory; and allocate forensic data processing tasks corresponding to portions of the digital forensic workflow to processing node computing devices (“processing nodes”) for execution by the processing nodes, the processing nodes communicatively connected to the control computing node via at least one data communication network and including at least one cloud processing node and at least one on-premises (“on-prem”) processing node. The control computing node automatically restricts allocation of a given forensic data processing task to the at least one on-prem processing node when forensic data to be operated on in performance of the given processing task is tagged as sensitive.
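The control node's restriction rule can be sketched in a few lines. The node records, task fields, and first-free selection policy are illustrative assumptions; the abstract specifies only that sensitive-tagged tasks must stay on-prem.

```python
# Hypothetical sketch of the control node's allocation rule: tasks whose
# forensic data is tagged sensitive may only be placed on on-prem
# processing nodes; other tasks may also use cloud nodes (names illustrative).

ON_PREM, CLOUD = "on-prem", "cloud"

nodes = [
    {"name": "lab-node-1", "kind": ON_PREM, "busy": False},
    {"name": "cloud-node-1", "kind": CLOUD, "busy": False},
]

def allocate(task, nodes):
    """Pick a free node; restrict to on-prem when the data is sensitive."""
    eligible = [n for n in nodes
                if not n["busy"] and (n["kind"] == ON_PREM
                                      or not task["sensitive"])]
    if not eligible:
        return None  # no compliant node free; the task must wait
    node = eligible[0]
    node["busy"] = True
    return node["name"]

print(allocate({"name": "hash-pii", "sensitive": True}, nodes))   # → lab-node-1
print(allocate({"name": "carve-img", "sensitive": True}, nodes))  # → None
```

Note that the second sensitive task is held back even though a cloud node is idle, which is exactly the automatic restriction the abstract describes.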