Patent classifications
H04L47/781
Application mapping and alerting based on data dependencies
Aspects of the disclosure relate to application mapping and alerting based on data dependencies in business and technology logic. In some embodiments, a computing platform may receive a request to map enterprise technology resources. Then, the computing platform may generate a business capability model. Next, the computing platform may cause a user computing device to display a graphical user interface comprising selectable graphical representations of applications associated with the enterprise technology resources. Then, the computing platform may receive a user input identifying an occurrence of a technology incident by selecting one of the graphical representations. In response to the selection, the computing platform may trace, using the generated business capability model, upstream or downstream impacts of the technology incident. Then, the computing platform may cause a visual representation of data dependencies indicating upstream or downstream impacts of the technology incident to be displayed on the user computing device.
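As an illustrative sketch (not part of the abstract), the upstream/downstream trace over a business capability model can be viewed as a breadth-first search over a directed dependency graph; the application names below are hypothetical:

```python
from collections import defaultdict, deque

class CapabilityModel:
    """Toy business capability model: a directed graph of data dependencies.

    An edge A -> B means application B consumes data produced by A, so a
    technology incident in A has a downstream impact on B.
    """
    def __init__(self):
        self.downstream = defaultdict(set)
        self.upstream = defaultdict(set)

    def add_dependency(self, producer, consumer):
        self.downstream[producer].add(consumer)
        self.upstream[consumer].add(producer)

    def trace(self, app, direction="downstream"):
        """Breadth-first trace of every app impacted by an incident in `app`."""
        edges = self.downstream if direction == "downstream" else self.upstream
        impacted, queue = set(), deque([app])
        while queue:
            current = queue.popleft()
            for nxt in edges[current]:
                if nxt not in impacted:
                    impacted.add(nxt)
                    queue.append(nxt)
        return impacted

model = CapabilityModel()
model.add_dependency("payments-api", "ledger-service")
model.add_dependency("ledger-service", "reporting-ui")
downstream_impacts = model.trace("payments-api")
```

In a real platform the traced set would drive the visual representation of data dependencies shown on the user computing device.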
Managing cloud acquisitions using distributed ledgers
Systems and methods of the disclosure include: receiving, by a cloud resource provisioning component via a cloud provisioning request application programming interface (API), a cloud resource request; storing the cloud resource request on a cryptographically-protected distributed ledger; receiving, from a first cloud provider, a first cloud resource offer responsive to the cloud resource request; and responsive to receiving, from a node of the cryptographically-protected distributed ledger, a notification of validation of the first cloud resource offer with respect to the cloud resource request, causing the first cloud provider to provision a cloud resource specified by the first cloud resource offer.
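The request/offer/validation flow above can be sketched with a minimal hash-chained log standing in for the cryptographically protected distributed ledger; the record fields and validation rule are hypothetical:

```python
import hashlib
import json

class Ledger:
    """Minimal hash-chained, append-only log: each block commits to the
    previous block's hash, so stored requests and offers are tamper-evident."""
    def __init__(self):
        self.blocks = []

    def append(self, record):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.blocks.append({"prev": prev, "record": record, "hash": digest})
        return digest

def validate_offer(request, offer):
    """A ledger node validates an offer against the stored resource request."""
    return (offer["vcpus"] >= request["vcpus"]
            and offer["price"] <= request["max_price"])

ledger = Ledger()
request = {"type": "request", "vcpus": 8, "max_price": 0.40}
ledger.append(request)
offer = {"type": "offer", "provider": "cloud-a", "vcpus": 8, "price": 0.35}
offer_valid = validate_offer(request, offer)
if offer_valid:
    ledger.append(offer)  # provisioning would be triggered by this validation
```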
System for source independent but source value dependent transfer monitoring
Systems, computer program products, and methods are described herein for source independent but source value dependent transfer monitoring. The system is configured to: receive a processing request to initiate a processing network session, wherein the processing network session is associated with the processing of a first activity; receive a processing interaction request to access a first resource associated with the user; extract a resource processing value associated with the first activity from the processing parameter data structure; determine whether the resource processing value is associated with triggering at least one block intervention step; block the entity input device from accessing the first resource in response to determining that the resource processing value is associated with triggering the at least one block intervention step during processing of the first activity; transmit a block notification to the entity input device; and trigger display of a success notification at an end-user application.
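The decision step above reduces to a threshold check on the extracted resource processing value; the threshold and field names below are hypothetical stand-ins:

```python
BLOCK_THRESHOLD = 10_000  # hypothetical limit on the resource processing value

def evaluate_interaction(processing_params):
    """Extract the resource processing value from the processing parameter
    data structure and decide whether the block intervention step triggers."""
    value = processing_params["resource_processing_value"]
    if value >= BLOCK_THRESHOLD:
        # Block the entity input device and notify it.
        return {"blocked": True, "notification": "block"}
    # Otherwise processing proceeds and a success notification is displayed.
    return {"blocked": False, "notification": "success"}
```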
PACKET TRANSMISSION METHOD AND APPARATUS, AND NETWORK DEVICE
The application discloses a packet transmission method, an apparatus, and a network device. In an embodiment, a first network device obtains identification information corresponding to a service flow, and reserves a forwarding resource based on the identification information. The forwarding resource is used by the first network device to forward the service flow to a second network device. The first network device further sends a packet including the identification information to the second network device, and the second network device reserves a corresponding forwarding resource based on the identification information in the packet. Network devices perform resource reservation hop by hop by sending the packet including the identification information, and do not need to perform resource reservation based on a pre-calculated and planned transmission path, so that the load on the network devices or a controller is reduced and resource reservation flexibility is improved.
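The hop-by-hop reservation described above can be sketched as each device reserving capacity for the flow identifier before forwarding the packet to its next hop; device names and capacities are hypothetical:

```python
class NetworkDevice:
    """Toy network device that reserves forwarding capacity per flow ID
    and then forwards the packet to the next hop."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.reservations = {}   # flow_id -> reserved bandwidth
        self.next_hop = None

    def reserve_and_forward(self, packet):
        flow_id, bw = packet["flow_id"], packet["bandwidth"]
        available = self.capacity - sum(self.reservations.values())
        if bw > available:
            return False  # reservation fails at this hop
        self.reservations[flow_id] = bw
        if self.next_hop is not None:
            return self.next_hop.reserve_and_forward(packet)
        return True  # reserved at every hop

first = NetworkDevice("r1", capacity=100)
second = NetworkDevice("r2", capacity=100)
first.next_hop = second
ok = first.reserve_and_forward({"flow_id": "flow-1", "bandwidth": 40})
```

No pre-computed end-to-end path is consulted: each device decides locally from the identification information carried in the packet.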
Network Access Control Method, SDF, CP, UP, and Network System
A network device having at least one processor and one or more non-transitory memories storing programming instructions that are associated with a steering decision function (SDF) in a network system, the instructions including instructions to: obtain a carrier-grade network address translation (CGN) resource pool by receiving CGN resources reported by a plurality of user planes (UPs), where the network system includes the SDF, the plurality of UPs, and a control plane (CP); receive a CGN instance obtaining request sent by the CP, the CGN instance obtaining request indicating to allocate a CGN instance to a user equipment; allocate a first CGN instance to the user equipment based on the CGN resource pool, the first CGN instance indicating a first UP, of the plurality of UPs, having an available CGN resource; and send the first CGN instance to the CP.
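A minimal sketch of the SDF's pool-based allocation, assuming each UP reports a simple count of available CGN resources (the identifiers are hypothetical):

```python
class SteeringDecisionFunction:
    """Toy SDF: builds a CGN resource pool from UP reports and allocates
    a CGN instance to a user equipment on request from the CP."""
    def __init__(self):
        self.pool = {}  # up_id -> count of available CGN resources

    def report(self, up_id, available):
        """A user plane reports its available CGN resources."""
        self.pool[up_id] = available

    def allocate_instance(self, ue_id):
        """Pick a UP with an available CGN resource; None if pool exhausted."""
        for up_id, available in self.pool.items():
            if available > 0:
                self.pool[up_id] -= 1
                return {"ue": ue_id, "up": up_id}
        return None

sdf = SteeringDecisionFunction()
sdf.report("up-1", 0)
sdf.report("up-2", 2)
instance = sdf.allocate_instance("ue-42")  # would be sent back to the CP
```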
DYNAMIC ALLOCATION OF COMPUTING RESOURCES
The exemplary embodiments disclose a method, a computer program product, and a computer system for allocating computing resources. The exemplary embodiments may include collecting data of one or more users, wherein the collected data comprises calendar data of the one or more users, extracting one or more features from the collected data, and allocating one or more computing resources to one or more of the users based on the extracted one or more features and one or more models.
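The collect/extract/allocate pipeline above can be sketched with a trivially simple feature (a count of compute-heavy calendar entries) feeding a hypothetical linear allocation model:

```python
def extract_features(calendar_entries):
    """Extract one feature from collected calendar data: how many upcoming
    entries are flagged as compute-heavy (a hypothetical flag)."""
    heavy = sum(1 for entry in calendar_entries if entry.get("compute_heavy"))
    return {"heavy_sessions": heavy}

def allocate_vcpus(features, base_vcpus=2, per_session=4):
    """Hypothetical linear model: more heavy sessions -> more vCPUs."""
    return base_vcpus + per_session * features["heavy_sessions"]

calendar = [
    {"title": "standup"},
    {"title": "model training", "compute_heavy": True},
    {"title": "batch render", "compute_heavy": True},
]
allocation = allocate_vcpus(extract_features(calendar))
```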
APPARATUS AND METHOD FOR OPERATING A RING INTERCONNECT
An apparatus and method for operating a ring interconnect are disclosed. The ring interconnect has a plurality of nodes that are used to connect to associated components, and is arranged to transport a plurality of slots around the ring interconnect between the nodes in order to transfer items of traffic allocated into those slots between components connected to the nodes. For each item of traffic, one of the components acts as a source to allocate that item of traffic into a slot, and another component acts as a destination that seeks to remove that item of traffic from the slot. In a default mode of operation, the ring interconnect is arranged to allow all of the slots to be available for transfer of any items of traffic. Special slot management circuitry is provided that is responsive to a throughput alert trigger, indicating a potential for occurrence of a throughput inhibiting condition, to cause a slot amongst the plurality of slots to be reserved as a special slot that is constrained for use only when one or more determined conditions are met. Further, the one or more determined conditions are arranged to cause the special slot to be used in a manner that seeks to avoid the throughput inhibiting condition arising.
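The default-mode versus special-slot behaviour can be sketched as follows; the slot count and the "determined condition" flag are hypothetical simplifications of the hardware mechanism:

```python
class RingInterconnect:
    """Toy model of slot management: any slot is free for any traffic by
    default; after a throughput alert, one slot becomes a special slot
    usable only when the determined condition is met."""
    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.special_slot = None  # index reserved after a throughput alert

    def throughput_alert(self, slot_index):
        """Reserve one slot as the special slot."""
        self.special_slot = slot_index

    def allocate(self, slot_index, item, condition_met=False):
        """Source component tries to allocate an item of traffic to a slot."""
        if slot_index == self.special_slot and not condition_met:
            return False  # special slot is constrained
        if self.slots[slot_index] is None:
            self.slots[slot_index] = item
            return True
        return False  # slot already occupied

ring = RingInterconnect(num_slots=4)
assert ring.allocate(0, "pkt-a")      # default mode: any slot available
ring.throughput_alert(slot_index=1)   # slot 1 becomes the special slot
```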
ENHANCED REDEPLOYING OF COMPUTING RESOURCES
Examples described herein relate to a method, a resource management system, and a non-transitory machine-readable medium for redeploying a computing resource. Data related to a performance parameter corresponding to a plurality of computing resources deployed on a plurality of host-computing nodes may be received. The performance parameter is associated with one or both of: communication between computing resources of the plurality of computing resources, or communication of the plurality of computing resources with a network device. Further, for a computing resource of the plurality of computing resources, a candidate host-computing node is determined from the plurality of host-computing nodes based on the data related to the performance parameter, and the computing resource may be redeployed on the candidate host-computing node.
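Assuming the performance parameter is, say, measured communication latency per host, candidate selection reduces to a minimization over the received data; host names and values below are hypothetical:

```python
def pick_candidate_host(latency_ms, current_host):
    """Choose the host-computing node with the lowest measured communication
    latency for the resource, excluding the host it already runs on."""
    candidates = {host: value for host, value in latency_ms.items()
                  if host != current_host}
    return min(candidates, key=candidates.get)

# Hypothetical performance data for one computing resource.
measurements = {"host-a": 5.2, "host-b": 1.8, "host-c": 9.4}
candidate = pick_candidate_host(measurements, current_host="host-a")
# The resource would then be redeployed on `candidate`.
```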
DYNAMIC BANDWIDTH ALLOCATION IN CLOUD NETWORK SWITCHES BASED ON TRAFFIC DEMAND PREDICTION
Embodiments for dynamic bandwidth allocation in cloud network switches in a cloud computing environment are provided. Quality of service (QoS) policies may be dynamically changed in one or more cloud network switches based on dynamically estimating expected traffic demands for each of a plurality of traffic classes, wherein bandwidth is dynamically allocated among queues based on changing the QoS policies.
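One simple reading of the QoS-policy change above is a demand-proportional split of link capacity across traffic-class queues; the class names and demand estimates are hypothetical:

```python
def allocate_bandwidth(link_capacity, predicted_demand):
    """Split link capacity among traffic-class queues in proportion to the
    dynamically estimated demand for each class."""
    total = sum(predicted_demand.values())
    return {cls: link_capacity * demand / total
            for cls, demand in predicted_demand.items()}

# Hypothetical predicted demand per traffic class (arbitrary units).
demand = {"voice": 20, "video": 30, "best_effort": 50}
queues = allocate_bandwidth(link_capacity=100, predicted_demand=demand)
```

As the demand estimates change, re-running the allocation yields the updated per-queue QoS policy.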
SYSTEMS AND METHODS FOR MULTI-CLOUD VIRTUALIZED INSTANCE DEPLOYMENT AND EXECUTION
A system may receive a first definition for a virtualized instance of a network function. The first definition may include a first set of declarations in a first format that is different than respective formats supported by different virtualized environments. The system may select a first virtualized environment to run the virtualized instance based on requirements specified within the first definition, and may generate a second definition with a second set of declarations that map the first set of declarations from the first format to a second format supported by the first virtualized environment. The system may deploy the virtualized instance to the first virtualized environment using the second set of declarations from the second definition. Deploying the virtualized instance may include configuring its operation based on some of the second set of declarations matching a configuration format supported by the first virtualized environment.
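The declaration-mapping step can be sketched as a key translation from the provider-neutral first format to the format of the selected environment; the mapping table below uses hypothetical, Kubernetes-flavoured keys purely for illustration:

```python
# Hypothetical mapping from neutral declarations to one environment's format.
ENV_MAPPING = {
    "cpu_cores": "resources.requests.cpu",
    "memory_mb": "resources.requests.memory",
}

def translate_definition(first_definition, mapping):
    """Generate the second set of declarations by mapping each declaration
    from the neutral first format to the environment-supported format.
    Declarations the environment does not support are dropped."""
    return {mapping[key]: value
            for key, value in first_definition.items()
            if key in mapping}

first_definition = {"cpu_cores": 4, "memory_mb": 2048, "vendor_hint": "x"}
second_definition = translate_definition(first_definition, ENV_MAPPING)
```

Deployment would then pass `second_definition` to the selected virtualized environment in the configuration format it supports.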