H04L12/2881

Technologies for providing shared memory for accelerator sleds

Technologies for providing shared memory for accelerator sleds include an accelerator sled that is to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
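The translation step described above can be sketched as a small lookup table. This is a hypothetical illustration only: the page-granular layout, the class name, and the `(device, address)` pair format are assumptions, not the patented design.

```python
# Hypothetical sketch: a memory controller keeps a map from logical pages to
# (memory device, physical page) pairs and routes each access to the owning
# device. Granularity and names are illustrative assumptions.

PAGE_SIZE = 4096  # assumed translation granularity

class SledMemoryController:
    def __init__(self):
        # logical page number -> (device_id, physical page number)
        self.page_map = {}

    def map_region(self, logical_addr, device_id, physical_addr):
        self.page_map[logical_addr // PAGE_SIZE] = (device_id, physical_addr // PAGE_SIZE)

    def route(self, logical_addr):
        """Translate a logical address; return (device_id, physical address)."""
        page, offset = divmod(logical_addr, PAGE_SIZE)
        device_id, phys_page = self.page_map[page]
        return device_id, phys_page * PAGE_SIZE + offset

mc = SledMemoryController()
mc.map_region(0x1000, device_id=2, physical_addr=0x9000)
dev, phys = mc.route(0x1234)  # request lands on device 2 at 0x9234
```

The point of the indirection is that an accelerator device never needs to know which physical memory device backs a logical region; remapping only touches the controller's table.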

Illumination control device and related illumination control system

A control device for controlling at least one LED module includes: a power supply module arranged to receive power over Ethernet to generate a first supply power; and an illumination controlling module coupled to the power supply module for receiving a communication signal over the Ethernet connection to generate a serial bus signal that controls the illumination of the at least one LED module; wherein the illumination controlling module is powered by the first supply power.
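The control path above (network command in, serial bus signal out) can be illustrated with a toy frame builder. The frame layout here (start byte, module address, brightness, checksum) is entirely made up for illustration and does not correspond to any real LED bus protocol.

```python
# Toy sketch of the illumination controlling module's job: translate a command
# received over the network into a serial-bus frame for one LED module.
# The frame format below is an invented example, not a real protocol.

START = 0x7E  # assumed frame delimiter

def to_serial_frame(module_addr: int, brightness: int) -> bytes:
    """Build a serial bus frame setting one LED module's brightness (0-255)."""
    if not 0 <= brightness <= 255:
        raise ValueError("brightness out of range")
    payload = bytes([START, module_addr, brightness])
    checksum = sum(payload) & 0xFF
    return payload + bytes([checksum])

frame = to_serial_frame(module_addr=0x01, brightness=128)
```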

Host Routed Overlay with Deterministic Host Learning and Localized Integrated Routing and Bridging

Systems, methods, and devices for improved routing operations in a network computing environment. A system includes a virtual customer edge router and a host routed overlay comprising a plurality of host virtual machines. The system includes a routed uplink from the virtual customer edge router to one or more of a plurality of leaf nodes. The virtual customer edge router is configured to provide localized integrated routing and bridging (IRB) service for the plurality of host virtual machines of the host routed overlay.
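The localized IRB decision can be sketched as a three-way forwarding choice at the virtual customer edge router: bridge within a subnet, route between local subnets, and send everything else up the routed uplink to a leaf. The addressing and the class below are illustrative assumptions, not the claimed implementation.

```python
# Hedged sketch of localized IRB at a virtual customer edge (CE) router:
# same-subnet traffic between host VMs is bridged, inter-subnet local traffic
# is routed at the CE, and off-overlay traffic goes out the uplink to a leaf.

import ipaddress

class VirtualCERouter:
    def __init__(self, uplink="leaf-1"):
        self.uplink = uplink
        self.subnets = []  # locally attached (bridged) subnets

    def attach_subnet(self, cidr):
        self.subnets.append(ipaddress.ip_network(cidr))

    def forward(self, src_ip, dst_ip):
        src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
        for net in self.subnets:
            if src in net and dst in net:
                return "bridge"               # same subnet: bridged locally
        if any(dst in net for net in self.subnets):
            return "route-local"              # different local subnet: IRB
        return f"route-uplink:{self.uplink}"  # off-overlay: routed uplink

ce = VirtualCERouter()
ce.attach_subnet("10.0.1.0/24")
ce.attach_subnet("10.0.2.0/24")
```

Keeping this decision at the CE is what makes the IRB "localized": host-to-host traffic in the overlay never has to hairpin through a leaf node.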

Independent datastore in a network routing environment

Systems, methods, and devices for offloading network data to a datastore. A system includes a publisher device in a network computing environment. The system includes a subscriber device in the network computing environment. The system includes a datastore independent of the publisher device and the subscriber device, the datastore comprising one or more processors in a processing platform configurable to execute instructions stored in non-transitory computer-readable storage media. The instructions include receiving data from the publisher device. The instructions include storing the data across one or more of a plurality of shared storage devices. The instructions include providing the data to the subscriber device.
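The three instruction steps above (receive from publisher, store across shared devices, provide to subscriber) can be sketched minimally. The hash-based placement and every name here are assumptions for illustration; the shared storage devices are stand-in dicts.

```python
# Minimal sketch of the publisher/datastore/subscriber split: the datastore
# receives records from a publisher, spreads them across several shared
# storage devices (plain dicts here), and serves them to a subscriber.

class IndependentDatastore:
    def __init__(self, num_devices=3):
        self.devices = [dict() for _ in range(num_devices)]

    def _device_for(self, key):
        # simple deterministic placement across the shared storage devices
        return self.devices[hash(key) % len(self.devices)]

    def publish(self, key, value):
        """Called on data received from the publisher device."""
        self._device_for(key)[key] = value

    def subscribe(self, key):
        """Called to provide stored data to the subscriber device."""
        return self._device_for(key).get(key)

store = IndependentDatastore()
store.publish("route/10.0.0.0/24", {"next_hop": "192.0.2.1"})
```

The essential property is independence: neither the publisher nor the subscriber holds the data; both interact only with the datastore's interface.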

Per-Subscriber Virtual Segmentation of an Active Ethernet Network on Multi-Tenant Properties
20230080458 · 2023-03-16

The present systems and methods enable Internet service providers and managed service providers to deploy a segmented network for multiple subscribers on a shared active Ethernet distribution medium, where each subscriber can be associated with one or more unique public IP addresses, and each subscriber also has control of their own gateway configuration. The system leverages the per-subscriber dynamic 802.1Q VLAN approach enforced through compatible wireless and wireline distribution equipment in combination with optional multiple-PSK zero-touch LAN onboarding and public IP WAN address assignment mechanisms, along with an onboard multi-tenant subscriber portal. The result is a network architecture that incorporates per-subscriber segmentation and security features, while simultaneously providing centralized radio resource management, property-wide roaming, instantaneous onboarding, and the like.
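The onboarding described above pairs each subscriber with a unique VLAN ID and a public WAN address. A toy allocator makes the idea concrete; the allocation policy, VLAN range, and address pool are all hypothetical.

```python
# Illustrative sketch of per-subscriber segmentation: each subscriber gets a
# dynamic 802.1Q VLAN ID and a public WAN IP, keeping tenants isolated on the
# shared distribution medium. Allocation policy and pool are invented.

class SubscriberSegmenter:
    def __init__(self, first_vlan=100, public_pool=("203.0.113.10", "203.0.113.11")):
        self.next_vlan = first_vlan
        self.pool = list(public_pool)
        self.subscribers = {}

    def onboard(self, subscriber_id):
        """Zero-touch onboarding: allocate a unique VLAN and public IP once."""
        if subscriber_id not in self.subscribers:
            self.subscribers[subscriber_id] = {
                "vlan": self.next_vlan,
                "public_ip": self.pool.pop(0),
            }
            self.next_vlan += 1
        return self.subscribers[subscriber_id]

seg = SubscriberSegmenter()
unit_4a = seg.onboard("unit-4A")
```

Because the mapping is per subscriber rather than per port, the same segmentation follows a tenant across wireless and wireline distribution equipment.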

CLOUD-BASED SCALE-UP SYSTEM COMPOSITION

Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a compute sled that includes a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
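The coherence flow above (modify locally, derive coherence data, send it to the peer sled) can be sketched with a dictionary diff standing in for the coherence data. The diff format and transport are illustrative assumptions, not the claimed mechanism.

```python
# Rough sketch of the coherence flow: the first compute sled modifies its
# local working data, derives coherence data (here, just the changed entries),
# and the second sled applies it so both copies agree.

class ComputeSled:
    def __init__(self, working_data):
        self.data = dict(working_data)

    def modify(self, updates):
        """Apply local modifications; return coherence data describing them."""
        before = dict(self.data)
        self.data.update(updates)
        return {k: v for k, v in self.data.items() if before.get(k) != v}

    def apply_coherence(self, coherence_data):
        """Peer sled applies the coherence data it received."""
        self.data.update(coherence_data)

sled1 = ComputeSled({"x": 0, "y": 0})
sled2 = ComputeSled({"x": 0, "y": 0})
coherence = sled1.modify({"x": 42})  # sled 1 works on its portion
sled2.apply_coherence(coherence)     # sled 2 stays coherent
```

Sending only the modification, rather than the whole working set, is what lets sleds in the managed node cooperate without shipping full memory images.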

Provisioning network devices in Ethernet-based access networks
09787492 · 2017-10-10

In general, techniques are described for provisioning network devices in an Ethernet-based access network. For example, an access node located in an Ethernet-based access network positioned between a back office network and a customer network may implement the techniques. The access node comprises a control unit that discovers a demarcation point device that terminates the access network of the service provider network at the customer network. The control unit of the access node implements an Ethernet protocol to provide layer two network connectivity between the service provider network and the customer network, authenticates the demarcation point device based on a unique identifier assigned to the demarcation point device and, after successfully authenticating the demarcation point device, provisions the demarcation point device.
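The discover/authenticate/provision sequence above can be sketched as a single handler on the access node. The MAC-style identifier and the provisioning payload are invented for illustration.

```python
# Hedged sketch of the access-node flow: discover a demarcation point device,
# authenticate it against known unique identifiers (a MAC-like ID here), and
# only then push provisioning data. Identifier format and payload are invented.

class AccessNode:
    def __init__(self, authorized_ids):
        self.authorized_ids = set(authorized_ids)
        self.provisioned = {}

    def on_discovery(self, device_id):
        """Runs when a demarcation point device appears on the access network."""
        if device_id not in self.authorized_ids:
            return "rejected"
        # authentication succeeded: provision layer-two connectivity settings
        self.provisioned[device_id] = {"vlan": 200, "upstream_mbps": 100}
        return "provisioned"

node = AccessNode(authorized_ids={"00:11:22:33:44:55"})
```

Gating provisioning on authentication is the key ordering: an unknown device gets layer-two visibility to the access node but never receives service configuration.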

BRIDGE PORT EXTENDER
20170279639 · 2017-09-28

Example implementations relate to a bridge port extender. For example, a bridge port extender may include a processor. The processor may receive an Ethernet frame from a network bridge, where the Ethernet frame includes an encapsulated portion and an unencapsulated portion, and where the unencapsulated portion includes an E-tag. The processor may remove the E-tag from the unencapsulated portion to form a modified Ethernet frame. The processor may transmit the modified Ethernet frame to a client device based on the E-tag.
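The de-encapsulation step above can be made concrete on raw frame bytes: locate the 802.1BR E-tag (TPID 0x893F, 8 bytes following the MAC addresses), read the E-CID to pick a client-facing port, and splice the tag out. The exact E-TCI bit layout used below is our reading of 802.1BR and should be treated as an assumption.

```python
# Sketch of the bridge port extender's job: given a frame whose outer,
# unencapsulated portion carries an E-tag, remove the tag and derive the
# egress (extended) port from the 12-bit E-CID base. Field offsets hedged.

ETAG_TPID = 0x893F  # 802.1BR E-tag EtherType

def strip_etag(frame: bytes):
    """Return (e_cid, modified_frame) with the 8-byte E-tag removed."""
    tpid = int.from_bytes(frame[12:14], "big")
    if tpid != ETAG_TPID:
        raise ValueError("frame carries no E-tag")
    tci = frame[14:20]                                # 6-byte E-tag control info
    e_cid = int.from_bytes(tci[2:4], "big") & 0x0FFF  # 12-bit E-CID base
    modified = frame[:12] + frame[20:]                # drop TPID + E-TCI
    return e_cid, modified

dst = bytes.fromhex("ffffffffffff")
src = bytes.fromhex("020000000001")
etag = ETAG_TPID.to_bytes(2, "big") + bytes([0x00, 0x00, 0x00, 0x05, 0x00, 0x00])
frame = dst + src + etag + bytes.fromhex("0800") + b"payload"
e_cid, out = strip_etag(frame)  # e_cid selects the client device's port
```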

Technologies for dividing work across accelerator devices

Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute device is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
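The division step above depends on each accelerator's configuration and on a job analysis. A proportional splitter sketches the idea; the capacity model is an assumption, not the patented scheduling policy.

```python
# Illustrative sketch of dividing a job across accelerators: each device's
# configuration is reduced to a relative capacity, the job analysis to a count
# of independent work items, and tasks are sized proportionally.

def divide_job(total_items, accel_configs):
    """Return {accelerator_id: item_count} splitting total_items by capacity."""
    total_capacity = sum(accel_configs.values())
    tasks, assigned = {}, 0
    for accel_id, capacity in accel_configs.items():
        share = total_items * capacity // total_capacity
        tasks[accel_id] = share
        assigned += share
    # hand any rounding remainder to the first accelerator
    first = next(iter(tasks))
    tasks[first] += total_items - assigned
    return tasks

plan = divide_job(100, {"fpga0": 2, "fpga1": 1, "gpu0": 1})
```

Each entry in the returned plan corresponds to one task to be scheduled on that accelerator; executing them in parallel and merging the results yields the job's output.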