Patent classifications
H04L49/45
Runtime schema for services in a switch
One embodiment of the present invention provides a switch. During operation, the switch parses a first schema of the switch. The first schema indicates initialization information for one or more services of the switch expressed based on one or more tags. The switch then identifies a tag of the one or more tags in the first schema based on the parsing and identifies information corresponding to the tag from a profile of the switch. Subsequently, the switch generates a second schema from the first schema based on the identified information.
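As a rough illustration of the tag-resolution step (the schema syntax, tag format, and profile keys below are assumptions, not taken from the abstract), a second schema can be generated by substituting each tag in the first schema with the matching value from the switch profile:

    # Minimal sketch: resolve tags in a first schema against a switch profile
    # to generate a second, runtime schema. All names here are hypothetical.
    import re

    def generate_runtime_schema(first_schema: str, profile: dict) -> str:
        """Replace each <tag> placeholder with the value found in the profile."""
        def resolve(match):
            tag = match.group(1)
            # Fall back to the original tag text if the profile has no entry.
            return str(profile.get(tag, match.group(0)))
        return re.sub(r"<(\w+)>", resolve, first_schema)

    first_schema = "service vlan { id <vlan_id>; mtu <mtu>; }"
    profile = {"vlan_id": 100, "mtu": 9216}
    print(generate_runtime_schema(first_schema, profile))
    # service vlan { id 100; mtu 9216; }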
DYNAMIC PROXY PLACEMENT FOR POLICY-BASED ROUTING
Techniques are described for operationalizing workloads at edge network nodes while maintaining centralized intent and policy controls. The techniques may include storing, in a cloud-computing network, a workload image that includes a function capability. The techniques may also include receiving, at the cloud-computing network, a networking policy associated with an enterprise network. Based at least in part on the networking policy, a determination may be made at the cloud-computing network that the function capability is to be operationalized on an edge device of the enterprise network. The techniques may also include sending the workload image to the edge device to be installed on the edge device to operationalize the function capability. In some examples, the function capability may be a security function capability (e.g., proxy, firewall, etc.), a routing function capability (e.g., network address translation, load balancing, etc.), or any other function capability.
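As a simplified illustration of how a policy might drive this placement decision (the policy fields, registry URLs, and device names below are invented for the example), a cloud controller could map each policy rule to a workload image and a target edge device:

    # Illustrative sketch: decide, from a networking policy, which function
    # capability to operationalize on an edge device and which workload image
    # to send it. Image names and policy fields are hypothetical.
    WORKLOAD_IMAGES = {
        "proxy": "registry.example.com/images/proxy:1.0",
        "firewall": "registry.example.com/images/firewall:1.0",
        "nat": "registry.example.com/images/nat:1.0",
    }

    def place_workloads(policy: dict) -> list:
        """Return (edge_device, image) pairs implied by the policy."""
        placements = []
        for rule in policy.get("rules", []):
            capability = rule["function"]      # e.g. "proxy"
            device = rule["edge_device"]       # e.g. "branch-router-01"
            placements.append((device, WORKLOAD_IMAGES[capability]))
        return placements

    policy = {"rules": [{"function": "proxy", "edge_device": "branch-router-01"}]}
    print(place_workloads(policy))
    # [('branch-router-01', 'registry.example.com/images/proxy:1.0')]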
Technologies for dynamically managing resources in disaggregated accelerators
Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator, and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by the logic portion in executing the workload, to limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and to subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
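As a simplified sketch of the threshold mechanism (the numbers, resource type, and method names below are assumptions), the dynamic resource allocation logic can be modeled as clamping requests against a threshold and nudging the threshold up or down based on workload progress:

    # Simplified model of a dynamic resource allocation logic unit: clamp a
    # logic portion's use of a shared resource to a threshold, then adjust
    # the threshold as the workload runs. Values are illustrative only.
    class DynamicResourceAllocator:
        def __init__(self, threshold: float):
            self.threshold = threshold  # e.g. fraction of shared memory bandwidth

        def allow(self, requested: float) -> float:
            """Clamp the requested utilization to the current threshold."""
            return min(requested, self.threshold)

        def adjust(self, observed_progress: float, target_progress: float):
            """Raise the threshold if the workload lags, lower it if it is ahead."""
            if observed_progress < target_progress:
                self.threshold = min(1.0, self.threshold + 0.05)
            else:
                self.threshold = max(0.1, self.threshold - 0.05)

    alloc = DynamicResourceAllocator(threshold=0.5)
    print(alloc.allow(0.8))   # 0.5: request clamped to the threshold
    alloc.adjust(observed_progress=0.3, target_progress=0.4)
    print(alloc.threshold)    # 0.55: threshold raised because the workload lags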
Technologies for data center multi-zone cabling
Technologies for connecting data cables in a data center are disclosed. In the illustrative embodiment, racks of the data center are grouped into different zones based on the distance from the racks in a given zone to a network switch. All of the racks in a given zone are connected to the network switch using data cables of the same length. In some embodiments, certain physical resources such as storage may be placed in racks that are in zones closer to the network switch and therefore use shorter data cables with lower latency. An orchestrator server may, in some embodiments, schedule workloads or create virtual servers based on the different zones and corresponding latency of different physical resources.
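As a rough illustration of the zone idea (the distance boundaries and cable lengths below are assumptions, not values from the abstract), racks can be assigned to zones by their distance from the network switch, with one cable length per zone:

    # Illustrative sketch: group racks into zones by distance to the network
    # switch and pick a single data-cable length per zone. An orchestrator
    # could prefer zone 1 for latency-sensitive resources such as storage.
    def assign_zone(distance_m: float) -> int:
        # Zone boundaries are invented for the example.
        if distance_m <= 5:
            return 1
        if distance_m <= 15:
            return 2
        return 3

    ZONE_CABLE_LENGTH_M = {1: 5, 2: 15, 3: 30}   # one cable length per zone

    racks = {"rack-01": 3.0, "rack-02": 12.0, "rack-03": 22.0}
    for rack, dist in racks.items():
        zone = assign_zone(dist)
        print(rack, "-> zone", zone, "cable", ZONE_CABLE_LENGTH_M[zone], "m")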
Techniques to configure physical compute resources for workloads via circuit switching
Embodiments are generally directed to apparatuses, methods, techniques, and so forth for selecting two or more processing units of a plurality of processing units to process a workload, and for configuring a circuit switch to link the two or more processing units to process the workload, the two or more processing units each linked to each other via communication paths and the circuit switch.
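As a minimal sketch of the configuration step (the port map and unit names below are hypothetical), selecting processing units for a workload and programming the circuit switch amounts to computing the cross-connections that link the selected units:

    # Minimal sketch: given the switch ports of the selected processing units,
    # compute the point-to-point cross-connections the circuit switch must
    # establish so the units can process the workload together.
    def configure_circuit_switch(units, ports):
        """Return (port_a, port_b) cross-connections for every selected pair."""
        links = []
        for i, a in enumerate(units):
            for b in units[i + 1:]:
                links.append((ports[a], ports[b]))
        return links

    ports = {"cpu0": 1, "fpga0": 5, "gpu0": 9}
    selected = ["cpu0", "fpga0"]          # units chosen to process the workload
    print(configure_circuit_switch(selected, ports))
    # [(1, 5)]: cross-connect port 1 to port 5 for the duration of the workload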
Switch network architecture
One embodiment describes a network system. The system includes a primary enclosure including a network switch system that includes a plurality of physical interface ports. A first one of the plurality of physical interface ports is to communicatively couple to a network. The system further includes a sub-enclosure comprising a network interface card (NIC) to which a computer system is communicatively coupled and a downlink extension module (DEM) that is communicatively coupled with the NIC and a second one of the plurality of physical interface ports of the network switch system to provide network connectivity of the computer system to the network via the network switch system.
Non-blocking switch matrix
An N×M non-blocking switch matrix, where N and M are integers, includes an input stage having a plurality of m/2-way multiport switches, where quotient m/2 is a positive integer less than M, and an output stage having a plurality of n/2-way multiport switches, where quotient n/2 is a positive integer less than N. The switch matrix further includes a transfer stage having a plurality of transfer switches operatively connected between the input stage and the output stage, and selectively applying outputs of the m/2-way multiport switches to inputs of the n/2-way multiport switches such that any given input to the m/2-way multiport switches is connectable to any given output of the n/2-way multiport switches.
Methods and apparatuses for transparent embedding of photonic switching into electronic chassis for scaling data center cloud system
Methods and apparatuses are provided for transferring photonic cells or frames between a photonic switch and an electronic switch, enabling a scalable data center cloud system with photonic functions transparently embedded into an electronic chassis. In various embodiments, photonic interface functions may be transparently embedded into existing switch chips (or switch cards) without changes to the line cards. The embedded photonic interface functions may provide the switch cards with the ability to interface with both existing line cards and photonic switches. In order to embed photonic interface functions without changes to the existing line cards, embodiments use two-tier buffering with a pause signalling or pause messaging scheme for managing the two-tier buffer memories.
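As a conceptual sketch of the two-tier buffering with pause signalling (buffer sizes, thresholds, and class names below are invented for the example), the second-tier buffer can assert a pause signal toward the first tier when it approaches capacity and release it as cells are transmitted:

    # Conceptual model: two-tier buffering in which the tier-2 buffer asserts
    # a pause signal toward tier 1 when it nears capacity, and deasserts it
    # as cells are sent toward the photonic switch. Sizes are illustrative.
    from collections import deque

    class TwoTierBuffer:
        def __init__(self, pause_threshold=6):
            self.tier1 = deque()            # line-card-facing buffer
            self.tier2 = deque()            # photonic-interface-facing buffer
            self.pause_threshold = pause_threshold
            self.pause = False              # pause signal toward tier 1

        def enqueue(self, cell):
            self.tier1.append(cell)

        def drain_tier1(self):
            # Move cells from tier 1 to tier 2 until tier 2 asserts pause.
            while self.tier1 and not self.pause:
                self.tier2.append(self.tier1.popleft())
                self.pause = len(self.tier2) >= self.pause_threshold

        def transmit(self):
            # Sending a cell toward the photonic switch frees tier-2 space.
            if self.tier2:
                cell = self.tier2.popleft()
                self.pause = len(self.tier2) >= self.pause_threshold
                return cell
            return None

    buf = TwoTierBuffer()
    for i in range(10):
        buf.enqueue("cell-%d" % i)
    buf.drain_tier1()
    print(len(buf.tier2), buf.pause)   # 6 True: tier 2 filled to the pause point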