Patent classifications
H04L67/1036
SYSTEM AND TECHNIQUES FOR INFERRING A THREAT MODEL IN A CLOUD-NATIVE ENVIRONMENT
In some aspects, a server device may identify one or more services of a cloud infrastructure via a management layer. The server device may determine service information and configuration information for the one or more services. The server device may generate an environment model based at least in part on the service information and the configuration information, the environment model providing information on relationships between one or more components of the cloud infrastructure. The server device may determine one or more threats to the one or more services based at least in part on analyzing the environment model and accessing a threat information database. The server device may generate a threat model that lists the one or more threats to the one or more services. The server device may generate one or more recommendations for the cloud infrastructure based at least in part on the threat model.
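The flow described in this abstract can be sketched in Python. This is a minimal illustration only; every name here (`THREAT_DB`, `infer_threat_model`, the service fields) is hypothetical and not drawn from the patent:

```python
# Illustrative threat database mapping a component type to known threats.
THREAT_DB = {
    "object-store": ["public-read bucket"],
    "vm": ["exposed management port"],
}

def infer_threat_model(services):
    """services: list of dicts with 'name', 'type', and optional 'deps'.

    Builds an environment model (components and their relationships),
    looks up threats per component, and derives recommendations.
    """
    # Environment model: each service mapped to its type and dependencies.
    env_model = {s["name"]: {"type": s["type"], "deps": s.get("deps", [])}
                 for s in services}
    # Threat model: known threats for each component, from the threat database.
    threat_model = {name: THREAT_DB.get(info["type"], [])
                    for name, info in env_model.items()}
    # Recommendations: one review item per identified threat.
    recommendations = [f"review {name}: {threat}"
                       for name, threats in threat_model.items()
                       for threat in threats]
    return env_model, threat_model, recommendations
```

A real system would derive the environment model from the cloud provider's management APIs rather than a static service list.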
Dynamically computing load balancer subset size in a distributed computing system
A distributed computing system dynamically calculates a subset size for each of a plurality of load balancers. Each of a plurality of load balancers logs requests from client devices for connections to back-end servers and periodically sends a request report to a traffic aggregator, which aggregates the request reports from the load balancers in the corresponding zone. Each traffic aggregator sends the aggregated request data to a traffic controller, which aggregates the request data to determine a total number of requests received at the system. The total request data is transmitted through each traffic aggregator to each load balancer instance, which calculates a percentage of the total number of requests produced by the load balancer and determines a subset size based on the calculated percentage.
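The final step, where each load balancer sizes its backend subset from its share of total traffic, can be sketched as a small function. The exact mapping from traffic share to subset size is not specified in the abstract, so the formula below is an assumption for illustration:

```python
def subset_size(local_requests, total_requests, backend_count, min_size=1):
    """Size a load balancer's backend subset by its share of total traffic.

    local_requests:  requests this load balancer handled in the period
    total_requests:  system-wide total reported by the traffic controller
    backend_count:   number of available back-end servers
    min_size:        floor so every balancer keeps at least one connection
    """
    if total_requests == 0:
        return min_size
    share = local_requests / total_requests
    # Proportional sizing: a balancer carrying 25% of traffic connects
    # to roughly 25% of the backends (illustrative choice of formula).
    return max(min_size, round(share * backend_count))
```

For example, a balancer that produced 250 of 1,000 total requests against 40 backends would maintain a subset of 10 connections.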
Highly available private cloud service
One or more non-transitory machine-readable storage mediums storing program instructions for operating a first cluster of servers. The program instructions are configured to be executable by one or more processors of the first cluster of servers to perform various operations. The operations may include storing a user interface of a cloud service consumer and receiving, from a user, a request to access services of the cloud service consumer. The operations may further include retrieving, from a second cluster of servers maintained by the cloud service consumer, user data required in response to the request and providing the user interface and the user data to the user.
Systems and methods for end user connection load balancing
Described herein are systems and methods for end user connection load balancing amongst multiple on-premise connector proxies deployed across geographic locations and reducing connection setup latency without using a shared or distributed database. The system can load balance connections deterministically amongst the on-premise connector proxies using load statistics. The system utilizes an intelligent DNS service that can use network experience data, service availability, and application metrics to provide sophisticated traffic management via DNS or API-based decisions. The system can include a domain name system (DNS) resolver configured to receive metrics for a first connector and a second connector of a data center of an entity, receive a DNS request including an entity identifier and a data center identifier; and transmit a response to the DNS request identifying a server selected based on the metrics identified using the entity identifier and the data center identifier.
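The DNS resolver behavior described here, selecting a connector for an entity/data-center pair using received metrics, can be sketched as follows. The data shapes and the least-loaded selection policy are assumptions; the patent only states that selection is based on the metrics:

```python
def resolve(dns_request, metrics):
    """Select a connector for a DNS request using reported load metrics.

    dns_request: dict with 'entity_id' and 'dc_id' (from the DNS query)
    metrics:     {(entity_id, dc_id): {connector_addr: current_load}}

    Returns the address of the least-loaded connector for the pair
    (one possible deterministic policy; the patent does not fix one).
    """
    key = (dns_request["entity_id"], dns_request["dc_id"])
    connectors = metrics[key]
    # Deterministic choice from load statistics: pick the lowest load.
    return min(connectors, key=connectors.get)
```

Because selection is a pure function of the reported metrics, no shared or distributed database is needed to keep resolvers consistent.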
Cloud agnostic service discovery
A system may include a processing device and a memory storing instructions that, when executed by the processing device, cause the processing device to discover one or more endpoints of a service in view of a name that is unique to the service. In response to receiving a request from a client to resolve the name, the processing device may obtain the one or more endpoints of that service in view of the name. The processing device may filter the one or more endpoints in view of the name and return the filtered endpoints to the client.
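The resolve-then-filter flow can be sketched as a small registry class. The class name, the health flag, and the health-based filter are illustrative assumptions; the abstract says only that endpoints are obtained by unique name and filtered before being returned:

```python
class ServiceRegistry:
    """Cloud-agnostic discovery: endpoints are keyed by a unique service name."""

    def __init__(self):
        self._endpoints = {}  # name -> list of (address, healthy)

    def register(self, name, address, healthy=True):
        # Record an endpoint under the service's unique name.
        self._endpoints.setdefault(name, []).append((address, healthy))

    def resolve(self, name):
        # Obtain endpoints in view of the name, filter them
        # (here: healthy only), and return the filtered list.
        return [addr for addr, healthy in self._endpoints.get(name, [])
                if healthy]
```

Keying discovery on a provider-independent name is what makes the scheme cloud agnostic: the client never sees which cloud hosts the endpoints.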
SERVICE AREA BASED DNS
Apparatuses, methods, and systems are disclosed for supporting edge data network discovery. One apparatus includes a transceiver and a processor that receives, from a function in the mobile communication network, a first request including a UE identity and a UE network address. The processor determines whether the UE is located in a first service area based on a UE location and forwards a DNS request received from the UE network address to a first DNS server associated with the first service area in response to determining that the UE is located in the first service area. Via the transceiver, the processor receives a DNS reply from the first DNS server and sends a second request to a policy function in the mobile communication network in response to determining that the DNS reply includes a first IP address.
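The core routing decision, forwarding a UE's DNS request to the DNS server of whichever service area the UE is in, can be sketched as below. The predicate-based area representation and the `default-dns` fallback are hypothetical simplifications of the 3GPP machinery the patent describes:

```python
def route_dns(ue_location, service_areas, default_server="default-dns"):
    """Pick the DNS server for a UE based on its service area.

    ue_location:   the UE's current location (any comparable value)
    service_areas: list of (in_area, dns_server) pairs, where in_area
                   is a predicate over locations
    """
    for in_area, dns_server in service_areas:
        # Forward to the first service area that contains the UE.
        if in_area(ue_location):
            return dns_server
    # UE is outside all configured service areas: use a fallback resolver.
    return default_server
```

In the patented flow this routing happens per DNS request, so a UE that moves between service areas is transparently redirected to the edge DNS server nearest its new location.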
RESILIENCE BASED DATABASE PLACEMENT IN CLUSTERED ENVIRONMENT
Herein are resource-constrained techniques that plan ahead for resiliently moving pluggable databases between container databases after a failure in a high-availability database cluster. In an embodiment, a computer identifies many alternative placements that respectively assign each pluggable database to a respective container database. For each alternative placement, a respective resilience score is calculated for each pluggable database that is based on the container database of the pluggable database. Based on the resilience scores of the pluggable databases for the alternative placements, a particular placement is selected as an optimal placement that would maximize utilization of computer resources, minimize database latencies, maximize system throughput, and maximize the ability of the database cluster to avoid a service outage.
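The selection step, scoring each candidate placement and choosing the best, can be sketched in Python. Summing per-pluggable-database resilience scores is an illustrative aggregation; the patent describes a multi-objective optimization (utilization, latency, throughput, outage avoidance) whose exact scoring it does not reduce to a single formula:

```python
def select_placement(placements, resilience_score):
    """Choose the placement maximizing total resilience.

    placements:       list of dicts mapping pluggable DB -> container DB
    resilience_score: function (pdb, cdb) -> numeric score, reflecting how
                      resilient that pluggable DB is in that container DB
    """
    def total(placement):
        # Aggregate per-PDB scores for one candidate placement
        # (simple sum; an illustrative stand-in for the real objective).
        return sum(resilience_score(pdb, cdb)
                   for pdb, cdb in placement.items())

    return max(placements, key=total)
```

Because scores are computed for every alternative placement before any failure occurs, the cluster can move pluggable databases to pre-vetted container databases immediately when a node fails.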