Patent classifications
G06F2212/50
Optimized adaptive routing to reduce number of hops
A switch is provided, which can receive a data communication at an edge of a network. The network may be made up of a plurality of switches. The switch may generate a flow channel based upon an identified source and destination for the data communication. The data communication can be routed across the plurality of switches, in accordance with the flow channel, so as to minimize the number of hops between a subset of the plurality of switches.
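The routing idea can be illustrated with a small sketch: a breadth-first search picks a minimum-hop path through a graph of switches, and the result is cached under a (source, destination) flow-channel key. The graph representation, the key, and the caching policy here are illustrative assumptions, not the patented implementation.

```python
from collections import deque

def shortest_hop_path(links, src, dst):
    """Breadth-first search: returns a path with the fewest hops, or None."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in links.get(node, ()):
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None

# Assumed: a flow channel keyed on (source, destination); the route is
# computed once and reused for subsequent packets of the same flow.
flow_channels = {}

def route(links, src, dst):
    key = (src, dst)
    if key not in flow_channels:
        flow_channels[key] = shortest_hop_path(links, src, dst)
    return flow_channels[key]

# Example fabric: four switches, two possible paths from A to D.
links = {"A": ["B", "C"], "B": ["D"], "C": ["B"], "D": []}
print(route(links, "A", "D"))   # ['A', 'B', 'D'] -- two hops, not three
```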
Emulating scratchpad functionality using caches in processor-based devices
Emulating scratchpad functionality using caches in processor-based devices is disclosed. In one aspect, each cache line within a cache of a processor-based device is associated with a corresponding scratchpad indicator indicating whether the corresponding cache line is exempt from the replacement policy used to select a cache line for eviction. Upon receiving data that corresponds to a memory access operation indicated as requiring scratchpad functionality, the cache controller stores the data in a cache line of the cache, and then sets the corresponding scratchpad indicator for the cache line. Subsequently, the cache controller emulates scratchpad functionality by allowing conventional memory read and write operations to be performed on the cache line, but does not apply its replacement policy to that cache line when selecting a cache line as a candidate for eviction. In this manner, the cache line may remain in the cache for use as scratchpad memory by software.
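A minimal sketch of the eviction exemption, assuming a toy fully associative cache with LRU replacement; the `scratchpad` flag plays the role of the per-line scratchpad indicator and simply removes a line from the pool of eviction candidates.

```python
class Cache:
    """Toy cache whose lines carry a scratchpad indicator; flagged lines
    are skipped by the (LRU) replacement policy and so stay resident."""
    def __init__(self, num_lines):
        self.lines = [{"tag": None, "data": None, "scratchpad": False}
                      for _ in range(num_lines)]
        self.lru = list(range(num_lines))  # least-recently used first

    def _touch(self, idx):
        self.lru.remove(idx)
        self.lru.append(idx)

    def fill(self, tag, data, scratchpad=False):
        # Pick an eviction candidate, skipping scratchpad-flagged lines.
        for idx in self.lru:
            if not self.lines[idx]["scratchpad"]:
                self.lines[idx] = {"tag": tag, "data": data,
                                   "scratchpad": scratchpad}
                self._touch(idx)
                return idx
        raise RuntimeError("all lines pinned as scratchpad")

    def read(self, tag):
        for idx, line in enumerate(self.lines):
            if line["tag"] == tag:
                self._touch(idx)    # reads and writes work as usual
                return line["data"]
        return None  # miss

cache = Cache(2)
cache.fill("sp0", b"scratch", scratchpad=True)  # pinned for software use
cache.fill("x", b"1")
cache.fill("y", b"2")          # evicts "x", never "sp0"
assert cache.read("sp0") == b"scratch"
```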
System and method for facilitating efficient packet injection into an output buffer in a network interface controller (NIC)
A network interface controller (NIC) capable of efficient packet injection into an output buffer is provided. The NIC can be equipped with an output buffer, a plurality of injectors, a prioritization logic block, and a selection logic block. The plurality of injectors can share the output buffer. The prioritization logic block can determine a priority associated with a respective injector based on a high watermark and a low watermark associated with the injector. The selection logic block can then determine, from the plurality of injectors, a subset of injectors associated with a buffer class and determine whether the subset includes a high-priority injector. Upon identifying a high-priority injector in the subset, the selection logic block can select that injector for injecting a packet into the output buffer.
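The watermark-driven selection can be sketched as below; the occupancy-based priority rule and the tie-break among equally ranked injectors are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Injector:
    name: str
    buffer_class: int
    occupancy: int      # packets currently queued in this injector
    low_wm: int
    high_wm: int

    @property
    def high_priority(self):
        # Hypothetical rule: above the high watermark the injector is
        # starved for buffer space and should drain first.
        return self.occupancy >= self.high_wm

def select_injector(injectors, buffer_class):
    """Pick an injector for the given buffer class, preferring high priority."""
    subset = [i for i in injectors if i.buffer_class == buffer_class]
    if not subset:
        return None
    high = [i for i in subset if i.high_priority]
    return max(high or subset, key=lambda i: i.occupancy)

injectors = [
    Injector("inj0", buffer_class=1, occupancy=3, low_wm=2, high_wm=8),
    Injector("inj1", buffer_class=1, occupancy=9, low_wm=2, high_wm=8),
]
print(select_injector(injectors, buffer_class=1).name)  # inj1
```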
Method and system for providing network egress fairness between applications
Methods and systems are provided to facilitate network egress fairness between applications. At an egress port of a network, an arbitrator can provide fairness-based traffic shaping to data associated with applications. The desired fairness-based traffic shaping can be provided based on bandwidth, traffic classes, or other parameters. Consequently, the egress link’s bandwidth can be allocated with fairness among the applications.
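Deficit round robin (DRR) is one well-known way to realize such fairness-based traffic shaping; the sketch below uses it purely as an example of allocating an egress link's bandwidth among application queues, not as the method claimed here.

```python
from collections import deque

def deficit_round_robin(queues, quantum, rounds):
    """Each application queue earns `quantum` bytes of credit per round
    and sends as many whole packets as its accumulated credit covers."""
    deficits = {app: 0 for app in queues}
    sent = []
    for _ in range(rounds):
        for app, q in queues.items():
            deficits[app] += quantum
            while q and q[0] <= deficits[app]:
                size = q.popleft()
                deficits[app] -= size
                sent.append((app, size))
            if not q:
                deficits[app] = 0   # idle queues do not hoard credit
    return sent

# Two applications with different packet sizes share one egress port.
queues = {"app_a": deque([1500, 1500]), "app_b": deque([500, 500, 500])}
for app, size in deficit_round_robin(queues, quantum=1500, rounds=2):
    print(app, size)
```

Per round, each application gets an equal byte budget regardless of its packet sizes, which is the bandwidth-fairness property the abstract describes.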
Switch device for facilitating switching in data-driven intelligent network
- Abdulla M. Bataineh,
- Jonathan P. Beecroft,
- Thomas L. Court,
- Anthony M. Ford,
- Edwin L. Froese,
- David Charles Hewson,
- Joseph G. Kopnick,
- Andrew S. Kopser,
- Duncan Roweth,
- Gregory Faanes,
- Michael Higgins,
- Timothy J. Johnson,
- Trevor Jones,
- James Reinhard,
- Edward J. Turner,
- Steven L. Scott,
- Robert L. Alverson
A switch architecture for a data-driven intelligent networking system is provided. The system can accommodate dynamic traffic with fast, effective congestion control. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow are acknowledged after reaching the egress point of the network, and the acknowledgement packets are sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform flow control on a per-flow basis.
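A toy model of the per-flow state machinery, assuming in-memory queues; a real switch would key flows on packet header fields and carry the acknowledgements in-band along the reverse data path.

```python
from collections import defaultdict, deque

class Switch:
    """Toy switch keeping per-flow state: a dedicated input queue per flow,
    set up on first sight of the flow and torn down once it is acknowledged."""
    def __init__(self, name):
        self.name = name
        self.flow_queues = defaultdict(deque)

    def ingress(self, flow_id, packet):
        self.flow_queues[flow_id].append(packet)   # flow-specific input queue

    def on_ack(self, flow_id):
        # ACKs travel back along the same data path, letting every switch
        # on the path retire its state for the flow.
        self.flow_queues.pop(flow_id, None)

path = [Switch("s1"), Switch("s2")]
for sw in path:
    sw.ingress(flow_id=("hostA", "hostB"), packet=b"payload")
# Packet reached the egress point: send the ACK back along the path.
for sw in reversed(path):
    sw.on_ack(flow_id=("hostA", "hostB"))
assert all(not sw.flow_queues for sw in path)
```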
Storage system and method for accessing same
A data access system is provided that includes a processor and a storage system including a main memory and a cache module. The cache module includes an FLC controller and a cache. The cache is configured as an FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. The processor generates, in response to data required by the processor not being in the levels of cache, a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address. The virtual address corresponds to a physical location within the FLC or the main memory. The cache module causes, in response to the virtual address not corresponding to the physical location within the FLC, the data required by the processor to be retrieved from the main memory.
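A rough sketch of the lookup path, assuming a dict-backed "main memory" and a trivial fill policy with no eviction; the physical-to-virtual mapping step stands in for the FLC controller's address translation described above.

```python
class FLCController:
    """Final-level cache (FLC) sketch: the processor's physical address is
    translated to an address inside the FLC; on a miss the data is fetched
    from main memory (here a plain dict standing in for DRAM)."""
    def __init__(self, main_memory, flc_size):
        self.main_memory = main_memory
        self.flc = {}           # virtual address -> data
        self.mapping = {}       # physical address -> FLC virtual address
        self.flc_size = flc_size
        self.next_vaddr = 0

    def read(self, phys_addr):
        vaddr = self.mapping.get(phys_addr)
        if vaddr is not None and vaddr in self.flc:
            return self.flc[vaddr]              # FLC hit
        data = self.main_memory[phys_addr]      # miss: go to main memory
        if len(self.flc) < self.flc_size:       # (no eviction in this sketch)
            vaddr, self.next_vaddr = self.next_vaddr, self.next_vaddr + 1
            self.mapping[phys_addr] = vaddr
            self.flc[vaddr] = data
        return data

dram = {0x1000: b"hello"}
flc = FLCController(dram, flc_size=4)
assert flc.read(0x1000) == b"hello"   # first read misses, fills the FLC
assert flc.read(0x1000) == b"hello"   # second read hits
```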
System and method for facilitating efficient host memory access from a network interface controller (NIC)
A network interface controller (NIC) capable of efficient memory access is provided. The NIC can be equipped with an operation logic block, a signaling logic block, and a tracking logic block. The operation logic block can maintain an operation group associated with packets requesting an operation on a memory segment of a host device of the NIC. The signaling logic block can determine whether a packet associated with the operation group has arrived at or departed from the NIC. Furthermore, the tracking logic block can determine that a request for releasing the memory segment has been issued. The tracking logic block can then determine whether at least one packet associated with the operation group is still being processed in the NIC. If no packet associated with the operation group is being processed in the NIC, the tracking logic block can notify the host device that the memory segment can be released.
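The release handshake can be sketched with a simple in-flight counter per operation group; names like `request_release` and `notify_host` are hypothetical, chosen only to mirror the abstract's steps.

```python
class MemorySegmentTracker:
    """Counts in-flight packets per operation group; a host's release
    request is deferred until no packet of the group is still in the NIC."""
    def __init__(self):
        self.in_flight = {}         # group id -> packet count
        self.release_pending = set()

    def packet_arrived(self, group):
        self.in_flight[group] = self.in_flight.get(group, 0) + 1

    def packet_departed(self, group):
        self.in_flight[group] -= 1
        if self.in_flight[group] == 0 and group in self.release_pending:
            self.release_pending.discard(group)
            self.notify_host(group)

    def request_release(self, group):
        if self.in_flight.get(group, 0) == 0:
            self.notify_host(group)     # safe to release immediately
        else:
            self.release_pending.add(group)

    def notify_host(self, group):
        print(f"segment for group {group} can be released")

t = MemorySegmentTracker()
t.packet_arrived("grp0")
t.request_release("grp0")   # deferred: one packet is still in the NIC
t.packet_departed("grp0")   # now the host is notified
```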
Technologies for execute only transactional memory
Technologies for execute only transactional memory include a computing device with a processor and a memory. The processor includes an instruction translation lookaside buffer (iTLB) and a data translation lookaside buffer (dTLB). In response to a page miss, the processor determines whether a page physical address is within an execute only transactional (XOT) range of the memory. If within the XOT range, the processor may populate the iTLB with the page physical address and prevent the dTLB from being populated with the page physical address. In response to an asynchronous change of control flow such as an interrupt, the processor determines whether a last iTLB translation is within the XOT range. If within the XOT range, the processor clears or otherwise secures the processor register state. The processor ensures that an XOT range starts execution at an authorized entry point. Other embodiments are described and claimed.
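A software model of the TLB-population rule and the interrupt path, assuming a single illustrative XOT range; real hardware would fault on a data-side access to an XOT page rather than silently skip the fill as this sketch does.

```python
XOT_START, XOT_END = 0x4000, 0x8000   # illustrative execute-only range

def in_xot_range(paddr):
    return XOT_START <= paddr < XOT_END

itlb, dtlb = {}, {}

def on_page_miss(vaddr, paddr, is_instruction_fetch):
    """On a miss, XOT pages may enter the iTLB but never the dTLB,
    so the code can be executed but not read or written as data."""
    if in_xot_range(paddr):
        if is_instruction_fetch:
            itlb[vaddr] = paddr
        # data-side access to an XOT page: dTLB is left unpopulated
    else:
        (itlb if is_instruction_fetch else dtlb)[vaddr] = paddr

def on_interrupt(last_itlb_paddr, registers):
    # Asynchronous change of control flow: if we were executing inside
    # the XOT range, scrub the register state before the handler runs.
    if in_xot_range(last_itlb_paddr):
        registers.clear()

on_page_miss(0x10, 0x4100, is_instruction_fetch=True)
assert 0x10 in itlb and 0x10 not in dtlb
regs = {"rax": 0xDEADBEEF}
on_interrupt(itlb[0x10], regs)
assert not regs
```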
Methods for distributing software-determined global load information
Systems and methods are provided for performing routing in a switch network or fabric. Switches can be configured in a hierarchical topology having a plurality of groups, where switches in a group are connected to one another, and groups are connected to other groups. Routing can be performed by maintaining per-group group load information. A packet can be routed between at least two groups using the per-group group load information to effect a set of routing decisions. The set of routing decisions can be biased towards or away from one or more paths.
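One way to bias routing decisions with per-group load information, assuming loads are normalized to [0, 1] and the most loaded group on a path dominates; the `bias` knob and the weighting formula are illustrative assumptions.

```python
import random

def choose_group_path(paths, group_load, bias=0.5):
    """Pick an inter-group path using per-group load information.
    `bias` scales how strongly the decision is pushed away from loaded
    groups (0 = ignore load, 1 = weight purely by inverse load)."""
    weights = []
    for path in paths:
        load = max(group_load.get(g, 0.0) for g in path)  # worst hop
        weights.append((1.0 - bias) + bias * (1.0 - load))
    return random.choices(paths, weights=weights, k=1)[0]

# Two candidate paths between group 0 and group 3, one via a loaded group.
group_load = {1: 0.9, 2: 0.1}       # software-distributed load, 0..1
paths = [(0, 1, 3), (0, 2, 3)]
picks = [choose_group_path(paths, group_load) for _ in range(1000)]
print(picks.count((0, 2, 3)))        # usually well above 500
```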