G06F9/505

SECURITY SYSTEM AND CONTROL METHOD THEREOF
20220405140 · 2022-12-22

A security system is disclosed. The security system includes a memory and a processor. The memory is configured to store several applications, and the several applications include several relationships. The processor is coupled to the memory and is configured to manage the several applications according to the several relationships and at least one of a time-driven method and an event-driven method, in which the several relationships include a parent-child relationship, a function-group relationship, and an app-type relationship; to receive several input signals from several sources; and to display a screen picture of the several input signals according to several drawing parameters. When the several applications are running, the processor is further configured to allocate several resources of the security system to the several applications according to several weighting values.
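A minimal sketch of the weighting-based allocation the abstract describes: a resource budget is split among running applications in proportion to their weighting values. Function and application names are illustrative, not taken from the patent.

```python
# Hypothetical sketch: proportional resource allocation by weighting values.
def allocate_resources(total: float, weights: dict) -> dict:
    """Split a resource budget among applications in proportion to weight."""
    total_weight = sum(weights.values())
    return {app: total * w / total_weight for app, w in weights.items()}

# Example: a 100-unit budget split across three apps weighted 3:1:1.
shares = allocate_resources(100.0, {"viewer": 3, "recorder": 1, "analytics": 1})
```

With weights 3:1:1 the viewer application receives 60 units and the other two 20 each.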

TREE BASED BEHAVIOR PREDICTOR

Various embodiments include methods and devices for training and implementing a tree-based behavior prediction model for use in autonomous vehicle control systems. Some embodiments may include labeling real-world autonomous vehicle run data to indicate an insight of the data, selecting an insight decision tree of the tree-based behavior prediction model for training using the labeled data, training the insight decision tree using the labeled data to classify a probability of an insight associated with the insight decision tree, and updating the tree-based behavior prediction model based on training the insight decision tree. Some embodiments may include selecting an insight decision tree of a tree-based behavior prediction model configured to classify a probability of an insight associated with the insight decision tree, executing the insight decision tree on received data, and outputting the probability of the insight determined from that execution.
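The execution side of such a model can be sketched as a walk down a decision tree whose leaves hold insight probabilities estimated from labeled run data. The tree structure, feature names, and thresholds below are invented for illustration, not taken from the patent.

```python
# Illustrative "insight" decision tree (e.g. probability that a nearby
# vehicle will cut in). Leaves carry probabilities learned during training.
class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, prob=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.prob = left, right, prob

def classify(node, features):
    """Walk internal nodes until a leaf; the leaf holds the probability."""
    while node.prob is None:
        node = node.left if features[node.feature] <= node.threshold else node.right
    return node.prob

tree = Node("lateral_speed", 0.5,
            left=Node(prob=0.05),                      # barely drifting
            right=Node("gap_ahead", 10.0,
                       left=Node(prob=0.8),            # fast drift, small gap
                       right=Node(prob=0.3)))

p = classify(tree, {"lateral_speed": 0.9, "gap_ahead": 6.0})  # 0.8
```

Executing one small tree per insight, rather than one monolithic model, matches the abstract's per-insight selection and update steps.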

Simulation systems and methods using query-based interest
11533367 · 2022-12-20

Methods, systems, computer-readable media, and apparatuses for query-based interest in a simulation are presented. An entity comprising one or more components may be simulated. The entity may be modified to include an interest component indicating, for each component in the one or more components of the entity, a query subscription to an entity database. The query subscription may comprise one or more queries. Each query of the one or more queries may comprise a component value that qualifies another entity for inclusion in a query result, and a frequency for receiving, from the entity database, updates on the query result.
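A hedged sketch of the query shape described above: each query pairs a component value that qualifies other entities for the result with an update frequency. The class and field names are illustrative assumptions, not the patent's API.

```python
# Illustrative query-subscription structures for an entity database.
from dataclasses import dataclass, field

@dataclass
class Query:
    component: str       # component name to match on other entities
    value: object        # value that qualifies an entity for the result
    frequency_hz: float  # how often updates on the result are received

@dataclass
class InterestComponent:
    queries: list = field(default_factory=list)

def run_query(entity_db, query):
    """Return ids of entities whose component matches the query value."""
    return [eid for eid, comps in entity_db.items()
            if comps.get(query.component) == query.value]

db = {1: {"team": "red"}, 2: {"team": "blue"}, 3: {"team": "red"}}
interest = InterestComponent([Query("team", "red", frequency_hz=10.0)])
result = run_query(db, interest.queries[0])  # entities 1 and 3 qualify
```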

Cross-cluster host reassignment

Disclosed are various implementations of approaches for reassigning hosts between computing clusters. A computing cluster assigned to a first queue is identified. The first queue can include a first list of identifiers of computing clusters with insufficient resources for a respective workload. A host machine assigned to a second queue can then be identified. The second queue can include a second list of identifiers of host machines in an idle state. A command can then be sent to the host machine to migrate to the computing cluster. Finally, the host machine can be removed from the second queue.
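The two-queue flow above can be sketched in a few lines: the first queue holds resource-starved clusters, the second holds idle hosts, and pairing one with the other issues a migration command and dequeues the host. All names and the command callback are illustrative.

```python
# Minimal sketch of cross-cluster host reassignment via two queues.
from collections import deque

def reassign(starved_clusters: deque, idle_hosts: deque, send_command):
    """Pair the next starved cluster with the next idle host."""
    if not starved_clusters or not idle_hosts:
        return None
    cluster = starved_clusters[0]      # cluster needing resources
    host = idle_hosts.popleft()        # remove the host from the idle queue
    send_command(host, cluster)        # command the host to migrate
    return host, cluster

commands = []
pair = reassign(deque(["cluster-a"]),
                deque(["host-1", "host-2"]),
                lambda h, c: commands.append((h, c)))
```

After the call, `host-1` has been commanded to join `cluster-a` and no longer appears in the idle queue, matching the final step of the abstract.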

METHOD AND APPARATUS FOR DIFFERENTIALLY OPTIMIZING QUALITY OF SERVICE QoS
20220400062 · 2022-12-15 ·

A method and apparatus for differentially optimizing quality of service (QoS) includes: establishing a system model of a multi-task offloading framework; acquiring a mode in which users execute a computation task and executing, according to that mode, the system model of the multi-task offloading framework; and optimizing QoS on the basis of a multi-objective optimization method for multi-agent deep reinforcement learning. According to the present invention, an offloading policy is calculated on the basis of multi-user differentiated QoS using multi-agent deep reinforcement learning. With the differentiated QoS requirements of different users in the system taken into account, a global offloading decision is made according to task performance requirements and the network resource state, and differentiated performance optimization is performed for different user requirements, thereby effectively improving system resource utilization and user service quality.
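The patent's decision is learned by multi-agent deep reinforcement learning; as a much simpler stand-in, the sketch below shows only the differentiated-QoS idea: each user weighs latency against energy according to its own requirement, and the cheaper execution mode wins. The cost model and weights are invented.

```python
# Simplified stand-in (NOT the patent's RL method): per-user offloading
# decision under that user's differentiated QoS weighting.
def offload_decision(local, remote, qos):
    """Pick the execution mode with the lower weighted cost for this user."""
    def cost(c):
        return qos["latency_w"] * c["latency"] + qos["energy_w"] * c["energy"]
    return "offload" if cost(remote) < cost(local) else "local"

# A latency-sensitive user offloads despite the higher transmission energy.
mode = offload_decision({"latency": 200, "energy": 1},
                        {"latency": 50, "energy": 5},
                        {"latency_w": 1.0, "energy_w": 0.1})  # "offload"
```

An energy-sensitive user (large `energy_w`) would reach the opposite decision for the same task, which is the differentiated behavior the abstract emphasizes.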

ORCHESTRATING EDGE SERVICE WORKLOADS ACROSS EDGE HIERARCHIES
20220400085 · 2022-12-15 ·

Computing resources are managed in a computing environment comprising a computing service provider and an edge computing network. The edge computing network comprises computing and storage devices configured to extend computing resources of the computing service provider to remote users of the computing service provider. The edge computing network collects capacity and usage data for computing and network resources at the edge computing network. The capacity and usage data is sent to the computing service provider. Based on the capacity and usage data, the computing service provider, using a cost function, determines a distribution of workloads pertaining to a processing pipeline that has been partitioned into the workloads. The workloads can be executed at the computing service provider or the edge computing network.
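As a hedged sketch of the distribution step, the fragment below places each partitioned pipeline stage at the edge or the provider by comparing a cost function against the edge's remaining capacity. The greedy policy and cost functions are illustrative assumptions, not the patent's method.

```python
# Illustrative cost-driven placement of pipeline workloads (edge vs cloud).
def place_workloads(stages, edge_free, cost_edge, cost_cloud):
    """Greedily place each stage where it is cheaper, honoring edge capacity."""
    placement = {}
    for stage, demand in stages.items():
        if demand <= edge_free and cost_edge(demand) <= cost_cloud(demand):
            placement[stage] = "edge"
            edge_free -= demand      # consume edge capacity
        else:
            placement[stage] = "cloud"
    return placement

# Demand units and per-unit costs come from collected capacity/usage data.
plan = place_workloads({"decode": 2, "detect": 4, "report": 1},
                       edge_free=5,
                       cost_edge=lambda d: d * 1.0,
                       cost_cloud=lambda d: d * 1.5)
```

Here `detect` overflows the edge's remaining capacity and falls back to the computing service provider, while the lighter stages stay at the edge.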

INFRASTRUCTURE RESOURCE CAPACITY MANAGEMENT WITH INTELLIGENT EXPANSION TRIGGER COMPUTATION
20220398520 · 2022-12-15 ·

Infrastructure resource capacity management techniques in an information processing system are disclosed. For example, a method comprises the following steps. Data associated with at least one resource of one or more computing platforms is obtained. Each of the one or more computing platforms is deployed at one or more locations associated with one or more entities. One or more resource expansion trigger threshold values are computed based on at least a portion of the obtained data for each of the one or more computing platforms. A resource expansion operation is initiated for the one or more computing platforms based on the one or more resource expansion trigger threshold values.
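One plausible way to compute such a trigger threshold from obtained usage data is a dispersion-based statistic; the mean-plus-k-standard-deviations rule below is an illustrative choice, not the patent's formula.

```python
# Sketch: expansion trigger threshold from observed resource usage.
from statistics import mean, pstdev

def expansion_threshold(usage_samples, k=2.0, capacity=100.0):
    """Threshold = mean + k * stddev of observed usage, capped at capacity."""
    return min(mean(usage_samples) + k * pstdev(usage_samples), capacity)

def should_expand(current, samples):
    """Initiate a resource expansion when usage crosses the threshold."""
    return current > expansion_threshold(samples)

samples = [40, 45, 50, 55, 60]   # e.g. recent utilization percentages
```

For these samples the threshold is roughly 64, so usage of 70 triggers expansion while 60 does not; recomputing per platform gives each location its own trigger.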

Server Classification Using Machine Learning Techniques
20220398132 · 2022-12-15 ·

Methods, apparatus, and processor-readable storage media for server classification using machine learning techniques are provided herein. An example computer-implemented method includes obtaining, from at least one data source, data pertaining to server activity attributed to one or more servers; processing at least a portion of the obtained data using one or more rule-based analyses; selecting at least a particular machine learning classification algorithm from a set of multiple machine learning classification algorithms, based at least in part on results from the processing and one or more portions of the obtained data; classifying an activity level of at least a portion of the one or more servers by processing at least a portion of the obtained data using the selected machine learning classification algorithm; and performing at least one automated action based at least in part on results of the classifying.
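The pipeline above (rule-based analysis, classifier selection, classification, automated action) can be sketched end to end; every rule, classifier, and action below is invented for illustration.

```python
# Illustrative server-classification pipeline.
def rule_based_features(activity):
    """Rule-based analysis over raw activity data."""
    return {"bursty": max(activity) > 3 * (sum(activity) / len(activity))}

def classify_bursty(activity):
    return "active" if max(activity) > 50 else "idle"

def classify_steady(activity):
    return "active" if sum(activity) / len(activity) > 10 else "idle"

def select_classifier(features):
    """Pick a classifier based on the rule-analysis results."""
    return classify_bursty if features["bursty"] else classify_steady

def automated_action(label):
    return "keep" if label == "active" else "decommission-review"

activity = [1, 2, 1, 90]          # mostly quiet with one large burst
clf = select_classifier(rule_based_features(activity))
label = clf(activity)             # "active"
```

The burst trips the rule-based check, a burst-oriented classifier is selected, and the resulting "active" label drives the automated action.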

TESTING FRAMEWORK WITH LOAD FORECASTING

A method comprises collecting data corresponding to a plurality of components in a system, wherein the data comprises information about at least one of respective protocols and respective interfaces associated with respective ones of the plurality of components. The data is analyzed to determine at least one of the respective protocols and the respective interfaces associated with the respective ones of the plurality of components. In the method, operations of one or more components of the plurality of components are tested based at least in part on the determination of the at least one of the respective protocols and the respective interfaces. The method further includes outputting respective statuses of the one or more components, wherein the respective statuses are derived at least in part from the testing.

TECHNIQUES FOR SCALING WORKFLOW CAMPAIGNS

Techniques are disclosed for processing a workflow campaign. In some embodiments, a message processing service receives, from a first message queue, a first message that corresponds to a user arriving at a first node of the workflow campaign. The message processing service causes one or more actions associated with the first node to be performed with respect to the user. In addition, the message processing service determines that the user should be progressed from the first node to a second node of the workflow campaign. The message processing service generates a second message that corresponds to the user arriving at the second node and determines a second message queue that is associated with the second node. The message processing service progresses the user from the first node to the second node by transmitting the second message to the second message queue.
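The per-node progression above can be sketched as one consume-act-forward step: pop an arrival message from a node's queue, run the node's actions, and push an arrival message onto the next node's queue. The node graph, queue names, and action are invented for illustration.

```python
# Minimal sketch of queue-based workflow-campaign progression.
from collections import deque

queues = {"welcome": deque(), "follow_up": deque()}
NEXT_NODE = {"welcome": "follow_up"}                 # campaign topology
ACTIONS = {"welcome": lambda user: f"sent welcome email to {user}"}

def process(node):
    """Handle one arrival message on a node's queue and progress the user."""
    user = queues[node].popleft()      # first message: user arrived at node
    log = ACTIONS[node](user)          # perform the node's actions
    nxt = NEXT_NODE.get(node)
    if nxt:
        queues[nxt].append(user)       # second message: arrival at next node
    return log

queues["welcome"].append("alice")
entry = process("welcome")
```

Because each node owns its queue, campaign scale-out reduces to adding consumers per queue, which is the scaling angle the title suggests.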