Patent classifications
H05K7/1447
REFERENCE ELECTRICAL POTENTIAL ASSEMBLY AND ATTACHMENT ASSEMBLY FOR PRINTED CIRCUIT
The invention relates to a reference potential assembly and an attachment assembly for an electrical circuit, the assembly having an electrically conductive flexible clip (30) which includes: a conductor clamp for clamping a conductor of the electrical circuit; an electrical contact stub (42); and an attachment (43) adapted to clamp the electrical contact stub (42) against an electrical contact surface.
PRE-CABLED DRAWER SYSTEM FOR COMPUTER NETWORKING EQUIPMENT
A drawer-type installation for computer equipment within a computer infrastructure environment. A chassis is configured to contain computer equipment that is pre-cabled and coupled with a first connector. The chassis is adapted to insert into a rack having a receiver chassis. The first connector operatively couples with a second connector that is connected by second wiring to the computer infrastructure environment.
Technologies for dynamically managing resources in disaggregated accelerators
Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
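The threshold-based limiting and subsequent adjustment claimed above could be sketched as follows; the class name, the numeric thresholds, and the grow/shrink adjustment policy are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of threshold-based shared-resource limiting for an
# accelerator logic portion. Names and the adjustment policy are assumptions.

class DynamicResourceAllocator:
    def __init__(self, initial_threshold: float):
        # Fraction (0..1) of a shared resource a logic portion may use.
        self.threshold = initial_threshold

    def limit(self, requested: float) -> float:
        # Cap the requested utilization at the current threshold.
        return min(requested, self.threshold)

    def adjust(self, observed_utilization: float, step: float = 0.1):
        # Illustrative policy: if the logic portion runs near the cap,
        # grant more headroom; if it underuses, reclaim some for others.
        if observed_utilization >= self.threshold - 0.01:
            self.threshold = min(1.0, self.threshold + step)
        elif observed_utilization < self.threshold / 2:
            self.threshold = max(0.1, self.threshold - step)


allocator = DynamicResourceAllocator(initial_threshold=0.5)
granted = allocator.limit(0.8)              # capped at 0.5
allocator.adjust(observed_utilization=0.5)  # running at the cap -> raise
```

The key point of the claim survives even in this toy form: the limit is enforced while the workload runs, and the threshold itself is re-evaluated as execution proceeds rather than fixed up front.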
TECHNOLOGIES FOR COORDINATING DISAGGREGATED ACCELERATOR DEVICE RESOURCES
A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter; determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter; transmit the workload to the at least one accelerator device; receive a work product produced by the at least one accelerator device from the workload; and provide the work product to an application.
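The request-matching and dispatch flow in this abstract could look like the sketch below; the dictionary-based capability matching and all names are assumptions made for illustration, not the claimed implementation.

```python
# Hypothetical sketch of the claimed dispatch flow: match a workload
# processing request against accelerator capabilities, send the workload,
# and return the work product to the requesting application.

def select_accelerators(request_params: dict, devices: list) -> list:
    # A device qualifies if it advertises every requested capability.
    return [d for d in devices
            if all(d["capabilities"].get(k) == v
                   for k, v in request_params.items())]

def process_request(request_params: dict, payload, devices: list):
    matches = select_accelerators(request_params, devices)
    if not matches:
        raise RuntimeError("no accelerator satisfies the request parameters")
    device = matches[0]
    # Stand-in for transmitting the workload and receiving the work product.
    return device["execute"](payload)

devices = [
    {"capabilities": {"kind": "fpga"}, "execute": lambda p: p.upper()},
    {"capabilities": {"kind": "gpu"},  "execute": lambda p: p[::-1]},
]
result = process_request({"kind": "gpu"}, "workload", devices)
```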
Technologies for data center multi-zone cabling
Technologies for connecting data cables in a data center are disclosed. In the illustrative embodiment, racks of the data center are grouped into different zones based on the distance from the racks in a given zone to a network switch. All of the racks in a given zone are connected to the network switch using data cables of the same length. In some embodiments, certain physical resources such as storage may be placed in racks that are in zones closer to the network switch and therefore use shorter data cables with lower latency. An orchestrator server may, in some embodiments, schedule workloads or create virtual servers based on the different zones and corresponding latency of different physical resources.
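The zone scheme described here can be reduced to a small lookup: racks fall into the first zone whose distance boundary contains them, and every rack in a zone receives the same cable length. The zone boundaries and lengths below are assumed values for illustration, not taken from the patent.

```python
# Illustrative sketch of grouping racks into cabling zones by distance to
# the network switch. Boundaries and cable lengths are assumptions.

ZONES = [          # (max distance to switch in meters, cable length in meters)
    (5.0, 7.0),
    (15.0, 20.0),
    (30.0, 40.0),
]

def zone_for(distance_m: float) -> int:
    # Racks fall into the first zone whose boundary contains them.
    for i, (max_dist, _) in enumerate(ZONES):
        if distance_m <= max_dist:
            return i
    raise ValueError("rack is outside all defined zones")

def cable_length(distance_m: float) -> float:
    # Every rack in a zone gets the same cable length.
    return ZONES[zone_for(distance_m)][1]

# An orchestrator preferring low latency would place storage in the
# lowest-numbered zone with free rack space.
```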
Techniques to configure physical compute resources for workloads via circuit switching
Embodiments are generally directed to apparatuses, methods, techniques and so forth to select two or more processing units of a plurality of processing units to process a workload, and to configure a circuit switch to link the two or more processing units to process the workload, the two or more processing units each linked to each other via paths of communication and the circuit switch.
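One way to picture the circuit-switch configuration is as establishing a full mesh of point-to-point circuits among the selected units; the adjacency-set representation below is an assumption made for illustration.

```python
# Minimal sketch of configuring a circuit switch to link selected
# processing units for a workload. The data model is an assumption.

class CircuitSwitch:
    def __init__(self):
        self.links = set()   # unordered pairs of linked unit ids

    def configure(self, unit_ids: list):
        # Link every selected unit to every other (full mesh of circuits).
        for i, a in enumerate(unit_ids):
            for b in unit_ids[i + 1:]:
                self.links.add(frozenset((a, b)))

    def linked(self, a: str, b: str) -> bool:
        return frozenset((a, b)) in self.links

switch = CircuitSwitch()
switch.configure(["cpu0", "cpu1", "fpga0"])   # three pairwise circuits
```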
Workstation with cable management system
A cable management apparatus can include a first surface configured for working and a second surface having a channel integrally formed into the apparatus material at the second surface. The channel can have sidewalls, extend longitudinally in a direction parallel to the second surface, have a longitudinal channel opening at the second surface along a substantial portion of the channel, and can be configured to hold a cable therein even when the channel opening faces downward. The cable may be curved within the channel to create elastic potential energy in the cable, and a friction force between the curved cable pushing against the channel sidewalls can hold the cable in place within the channel. The channel may form a cross-section having an undercut that is defined by ledge portions at the channel opening, and the ledge portions can hold the cable in place within the channel.
TECHNIQUES TO CONTROL SYSTEM UPDATES AND CONFIGURATION CHANGES VIA THE CLOUD
Embodiments are generally directed to apparatuses, methods, techniques and so forth to determine an access level of operation based on an indication received via one or more network links from a pod management controller, and to enable or disable a firmware update capability for a firmware device based on the access level of operation, the firmware update capability to change firmware for the firmware device. Embodiments may also include determining one or more configuration settings of a plurality of configuration settings to enable for configuration based on the access level of operation, and enabling configuration of the one or more configuration settings.
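The gating described above amounts to a policy table keyed by access level. The level names and which settings each level unlocks are illustrative assumptions, not part of the claim.

```python
# Hypothetical sketch of gating firmware updates and configuration changes
# on an access level received from a pod management controller.

ACCESS_POLICY = {
    "read-only": {"firmware_update": False, "settings": set()},
    "operator":  {"firmware_update": False, "settings": {"boot_order"}},
    "admin":     {"firmware_update": True,
                  "settings": {"boot_order", "secure_boot"}},
}

def apply_access_level(level: str) -> dict:
    # Resolve the received access level into concrete capabilities.
    policy = ACCESS_POLICY[level]
    return {
        "firmware_update_enabled": policy["firmware_update"],
        "configurable_settings": sorted(policy["settings"]),
    }

state = apply_access_level("operator")
# firmware updates stay disabled; only boot_order is configurable
```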
Technologies for accelerator interface
Technologies for an accelerator interface over Ethernet are disclosed. In the illustrative embodiment, a network interface controller of a compute device may receive a data packet. If the network interface controller determines that the data packet should be pre-processed (e.g., decrypted) with a remote accelerator device, the network interface controller may encapsulate the data packet in an encapsulating network packet and send the encapsulating network packet to a remote accelerator device on a remote compute device. The remote accelerator device may pre-process the data packet (e.g., decrypt the data packet) and send it back to the network interface controller. The network interface controller may then send the pre-processed packet to a processor of the compute device.
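The decide/encapsulate/offload/forward sequence in this abstract can be sketched end to end; the header bytes, the `ENC:` marker, and the XOR stand-in for decryption are all assumptions made so the flow is runnable, not the patent's actual formats.

```python
# Sketch of the described flow: a NIC decides whether a packet needs
# remote pre-processing, encapsulates it, hands it to a (simulated)
# remote accelerator, and forwards the pre-processed result.

MAGIC = b"ACC0"   # hypothetical encapsulation header

def needs_preprocessing(packet: bytes) -> bool:
    # Assumed marker meaning "payload is encrypted".
    return packet.startswith(b"ENC:")

def encapsulate(packet: bytes) -> bytes:
    # Wrap the original packet in an encapsulating network packet.
    return MAGIC + packet

def remote_accelerator(encapsulated: bytes) -> bytes:
    # Decapsulate, then "decrypt" (stand-in: XOR with a fixed byte).
    inner = encapsulated[len(MAGIC):]
    return bytes(b ^ 0x2A for b in inner[len(b"ENC:"):])

def nic_receive(packet: bytes) -> bytes:
    # The NIC offloads only packets that need pre-processing; everything
    # else goes straight to the local processor.
    if needs_preprocessing(packet):
        return remote_accelerator(encapsulate(packet))
    return packet
```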