H04L67/1004

Payload recording and comparison techniques for discovery

Persistent storage may contain an input discovery payload that contains entries representing configuration items and relationships therebetween, wherein the configuration items contain attributes defining devices, components, or applications on a network. One or more processors may be configured to: provide, for display, a graphical user interface containing a representation of the input discovery payload and a button; provide the input discovery payload to an identification and reconciliation engine (IRE) software application; receive, from the IRE software application, an output discovery payload that includes a log generated from execution of the IRE software application on the input discovery payload, wherein the log indicates, for the configuration items and the relationships in the input discovery payload, how a configuration management database (CMDB) would be updated by the IRE software application; and provide, for display, a further graphical user interface containing a further representation of the output discovery payload.
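The "dry run" behavior described above can be sketched as a function that compares each configuration item in an input discovery payload against a CMDB snapshot and returns an output payload with a log of the updates the engine would make, without committing them. The function name, payload layout, and action labels are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of an IRE dry run: given an input discovery payload
# and a CMDB snapshot, return an output payload whose log describes how
# the CMDB would be updated. All names here are illustrative.

def ire_dry_run(payload, cmdb):
    """Log the action the engine would take for each configuration item."""
    log = []
    for ci in payload["items"]:
        existing = cmdb.get(ci["name"])
        if existing is None:
            log.append({"ci": ci["name"], "action": "insert"})
        elif existing != ci["attributes"]:
            log.append({"ci": ci["name"], "action": "update"})
        else:
            log.append({"ci": ci["name"], "action": "no-op"})
    return {"items": payload["items"], "log": log}

cmdb = {"web-01": {"os": "linux"}}
payload = {"items": [
    {"name": "web-01", "attributes": {"os": "linux", "ram_gb": 16}},
    {"name": "db-01", "attributes": {"os": "linux"}},
]}
result = ire_dry_run(payload, cmdb)
```

A user interface could then render `result["log"]` as the "further representation" of the output discovery payload.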

Server system and method of managing server system
11582295 · 2023-02-14 · ·
H04L67/1001

A server system including a first server to execute a first role, another server to execute another role, a spare server, and a management layer server. The management layer server is configured to allocate a first group of users to access the first server and another group of users to access the other server, receive status information sent by the first server and status information sent by the other server, analyse the status information to determine an operational status of the first server and an operational status of the other server, update a role of the spare server to the first role when the operational status of the first server indicates a failed state and reallocate the first group of users to the spare server, and update a role of another spare server to the other role when the operational status of the other server indicates a failed state and reallocate the other group of users to the other spare server.
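The management-layer failover logic above can be sketched in a few lines: on detecting a failed server, promote a spare to the failed server's role and reallocate that server's user group to the spare. The `Server` dataclass, status strings, and first-available spare policy are assumptions for illustration only.

```python
# Minimal sketch of the failover step described in the abstract.
# Field names and the "failed"/"ok" status values are assumptions.

from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    role: str
    status: str = "ok"
    users: list = field(default_factory=list)

def handle_failure(servers, spares):
    """Promote a spare to each failed server's role and move its users."""
    for srv in servers:
        if srv.status == "failed" and spares:
            spare = spares.pop(0)
            spare.role = srv.role    # spare takes over the failed role
            spare.users = srv.users  # reallocate the user group
            srv.users = []

first = Server("s1", "web", status="failed", users=["alice", "bob"])
other = Server("s2", "db", users=["carol"])
spare = Server("sp1", "spare")
handle_failure([first, other], [spare])
```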

Auto-documentation for application program interfaces based on network requests and responses
11582291 · 2023-02-14 · ·

Disclosed embodiments are directed at systems, methods, and architecture for providing auto-documentation to APIs. The auto-documentation plugin is architecturally placed between an API and a client thereof and parses API requests and responses to generate auto-documentation. In some embodiments, the auto-documentation plugin is used to update preexisting documentation after updates. In some embodiments, the auto-documentation plugin accesses an on-line documentation repository. In some embodiments, the auto-documentation plugin makes use of a machine learning model to determine how and which portions of an existing documentation file to update.
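The core observe-and-document idea can be sketched as a function that sits on the request/response path and records a documentation entry per endpoint. The record format and function names are illustrative assumptions, not the plugin's actual schema.

```python
# Illustrative sketch of an auto-documentation hook: observe one
# request/response exchange and emit a documentation entry keyed by
# method and path. The entry layout is an assumption.

def document_exchange(request, response, docs):
    """Record method, path, status, and observed response fields."""
    key = f"{request['method']} {request['path']}"
    docs[key] = {
        "status": response["status"],
        "response_fields": sorted(response["body"].keys()),
    }
    return docs

docs = {}
document_exchange(
    {"method": "GET", "path": "/users/42"},
    {"status": 200, "body": {"id": 42, "name": "Ada"}},
    docs,
)
```

A real plugin would run this on every proxied exchange and merge new observations into the preexisting documentation rather than overwriting it.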

METHOD FOR RESPONDING TO RESOURCE REQUEST, REDIRECT SERVER, AND DECISION DELIVERY SERVER
20230038228 · 2023-02-09 ·

Embodiments of the present disclosure disclose a method for responding to a resource request, a redirect server, and a decision delivery server. The redirect server classifies a first resource request from a client based on a first screening rule (201), and responds to the first resource request determined to be of an unprocessable type, to enable the client to send a second resource request to the decision delivery server (202). The decision delivery server determines, based on a second screening rule, whether the second resource request from the client is of a serviceable type (203), and performs proxy acceleration for the second resource request if it is determined that the second resource request is of the serviceable type (204).
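The two-stage screening flow can be sketched as two small handlers: the redirect server applies a first rule and redirects anything it cannot process, and the decision delivery server applies a second rule to decide whether to proxy-accelerate. The rule predicates and return shapes below are assumptions for illustration.

```python
# Sketch of the two-stage screening described above. The screening
# predicates are passed in as plain functions; real rules would be
# configuration-driven.

def redirect_server(request, processable):
    """First screening: handle the request or tell the client to redirect."""
    if processable(request):
        return {"handled_by": "redirect"}
    return {"redirect_to": "decision"}

def decision_server(request, serviceable):
    """Second screening: proxy-accelerate serviceable requests."""
    if serviceable(request):
        return {"handled_by": "decision", "accelerated": True}
    return {"handled_by": "decision", "accelerated": False}

req = {"path": "/video/big.mp4"}
first = redirect_server(req, processable=lambda r: r["path"].endswith(".html"))
if "redirect_to" in first:
    second = decision_server(req, serviceable=lambda r: r["path"].endswith(".mp4"))
```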

FOG NODE SCHEDULING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
20230010046 · 2023-01-12 ·

This application relates to a fog node scheduling method performed by a computer device, and a storage medium. The method includes: searching for candidate fog nodes storing a resource requested by a fog node scheduling request initiated by a client; performing effectiveness filtration on the candidate fog nodes to obtain effective fog nodes having predefined connectivity with the client; acquiring collected load information of the effective fog nodes; performing scheduling in the effective fog nodes based on the load information to obtain a scheduling result, where the scheduling result includes an identification of a target fog node obtained through scheduling and service flow allocated to the target fog node; and returning the scheduling result to the client so that the client can acquire the resource from the target fog node according to the identification and the service flow.
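The scheduling pipeline above (find candidates holding the resource, filter by connectivity, then schedule on load) can be sketched as a short function. The node fields, the full-service-flow allocation, and the least-loaded selection policy are illustrative assumptions.

```python
# Hedged sketch of the fog node scheduling steps: candidate search,
# effectiveness filtration by connectivity, then load-based selection.
# Field names and the min-load policy are assumptions.

def schedule(nodes, resource, client_reachable):
    """Return a scheduling result for the client, or None if no node fits."""
    candidates = [n for n in nodes if resource in n["resources"]]
    effective = [n for n in candidates if client_reachable(n)]
    if not effective:
        return None
    target = min(effective, key=lambda n: n["load"])  # least-loaded node
    return {"node_id": target["id"], "flow": 1.0}     # allocate full flow

nodes = [
    {"id": "fog-a", "resources": {"video1"}, "load": 0.7},
    {"id": "fog-b", "resources": {"video1"}, "load": 0.2},
    {"id": "fog-c", "resources": {"video2"}, "load": 0.1},
]
result = schedule(nodes, "video1", client_reachable=lambda n: True)
```

The client would then fetch the resource from `result["node_id"]` using the allocated service flow.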

Customer data handling in a proxy infrastructure

Systems and methods herein provide for a proxy infrastructure. In the proxy infrastructure, a network element (e.g., a supernode) is connected with a plurality of exit nodes. At one of a plurality of messenger units of the proxy infrastructure, a proxy protocol request is received directly from a client computing device. The proxy protocol request specifies a request and a target. In response to the proxy protocol request, one of the plurality of exit nodes is selected. A message with the request is sent from the messenger unit to the supernode connected with the selected exit node. Finally, the message is sent from the supernode to the selected exit node, which forwards the request to the target.
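The message path (client to messenger unit, messenger to supernode, supernode to exit node, exit node to target) can be sketched with three small classes. The component names follow the abstract; the round-robin exit selection and the string-returning `fetch` are assumptions for illustration.

```python
# Illustrative sketch of the proxy message path described above.
# Round-robin exit selection is an assumed policy, not the patented one.

import itertools

class ExitNode:
    def __init__(self, name):
        self.name = name
    def fetch(self, request, target):
        # The exit node forwards the request to the target.
        return f"{self.name} fetched {request} from {target}"

class Supernode:
    def forward(self, exit_node, request, target):
        # The supernode relays the message to its connected exit node.
        return exit_node.fetch(request, target)

class Messenger:
    def __init__(self, supernode, exit_nodes):
        self.supernode = supernode
        self._cycle = itertools.cycle(exit_nodes)  # round-robin selection
    def handle(self, request, target):
        exit_node = next(self._cycle)  # select one of the exit nodes
        return self.supernode.forward(exit_node, request, target)

msgr = Messenger(Supernode(), [ExitNode("exit-1"), ExitNode("exit-2")])
reply = msgr.handle("GET /", "example.com")
```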