Patent classifications: G06F2212/621
Packet processing method and related device
A packet processing method and device are provided to reduce the CPU resources consumed by packet parsing. The method includes: parsing, by an intelligent network interface card, a received first packet to obtain an identifier of the first packet; updating, by the intelligent network interface card, a control field of a first memory buffer (mbuf) based on the identifier; storing, by the intelligent network interface card, the payload of the first packet (or both its header and payload) into a first address space through DMA, based on an aggregation position of the first packet; aggregating, by a host, first address information and at least one piece of second address information based on the updated control field in the first mbuf; and reading, by a virtual machine, the address information to obtain the data in the address space that the address information indicates.
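As a rough illustration of the flow this abstract describes, the C sketch below models the NIC side updating an mbuf control field per aggregation position and the host side walking the aggregated address information; the structure layout and field names (pkt_id, seg_count, segs) are assumptions for illustration, not the patent's actual format.

    /* Hypothetical sketch of the mbuf control-field update; field names
     * are invented, not the patent's actual layout. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define MAX_SEGS 8

    struct addr_info { uint64_t addr; uint32_t len; };

    struct mbuf {
        uint32_t pkt_id;                  /* identifier parsed from the packet */
        uint32_t seg_count;               /* control field: segments aggregated so far */
        struct addr_info segs[MAX_SEGS];  /* address info of DMA'd payloads */
    };

    /* NIC side: record where the payload at aggregation position `pos` was DMA'd. */
    static void nic_update_control(struct mbuf *m, uint32_t pkt_id,
                                   uint32_t pos, uint64_t dma_addr, uint32_t len)
    {
        m->pkt_id = pkt_id;
        m->segs[pos].addr = dma_addr;   /* aggregation position selects the slot */
        m->segs[pos].len  = len;
        if (pos + 1 > m->seg_count)
            m->seg_count = pos + 1;     /* control field tells host how much to read */
    }

    /* Host side: walk the aggregated address info and hand it to the VM. */
    static void host_aggregate(const struct mbuf *m)
    {
        for (uint32_t i = 0; i < m->seg_count; i++)
            printf("segment %u: addr=0x%llx len=%u\n", i,
                   (unsigned long long)m->segs[i].addr, m->segs[i].len);
    }

    int main(void)
    {
        struct mbuf m = {0};
        nic_update_control(&m, 42, 0, 0x1000, 1460);  /* first fragment */
        nic_update_control(&m, 42, 1, 0x2000, 1460);  /* second fragment */
        host_aggregate(&m);
        return 0;
    }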
NETWORK ENTITIES AND METHODS PERFORMED THEREIN FOR HANDLING CACHE COHERENCY
A method performed by a coordinating entity in a disaggregated data center architecture in which computing resources are separated into discrete resource pools and associated together to represent a functional server. The coordinating entity obtains a setup of processor cores that are logically coupled as the functional server, and determines an index indicating the identity of a cache coherency domain based on the obtained setup of processor cores. The coordinating entity further configures one or more communicating entities associated with that setup of processor cores to use the determined index when handling updated cache-related data.
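A minimal sketch of the two steps the abstract names, assuming the domain index is derived from the set of core IDs making up the functional server; the FNV-1a hash and the configure step are illustrative stand-ins, not the patent's actual mapping.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Derive a coherency-domain index from the core set (assumed scheme). */
    static uint32_t domain_index(const uint32_t *cores, size_t n)
    {
        uint32_t h = 2166136261u;            /* FNV-1a over the core IDs */
        for (size_t i = 0; i < n; i++) {
            h ^= cores[i];
            h *= 16777619u;
        }
        return h;
    }

    /* Configure each communicating entity to tag coherency traffic with idx. */
    static void configure_entities(const uint32_t *cores, size_t n, uint32_t idx)
    {
        for (size_t i = 0; i < n; i++)
            printf("core %u: tag cache updates with domain 0x%08x\n", cores[i], idx);
    }

    int main(void)
    {
        uint32_t server[] = {3, 7, 12};      /* cores pooled into one functional server */
        uint32_t idx = domain_index(server, 3);
        configure_entities(server, 3, idx);
        return 0;
    }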
Streaming engine with flexible streaming engine template supporting differing number of nested loops with corresponding loop counts and loop offsets
A streaming engine employed in a digital data processor specifies a fixed read-only data stream defined by plural nested loops. An address generator produces the addresses of data elements for the nested loops. A stream head register stores the data elements next to be supplied to functional units for use as operands. A stream template specifies a loop count and a loop dimension for each nested loop. A format definition field in the stream template specifies the number of loops and the stream template bits devoted to the loop counts and loop dimensions. This permits the same bits of the stream template to be interpreted differently, enabling a trade-off between the number of loops supported and the size of the loop counts and loop dimensions.
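The address generation the abstract describes can be sketched as an odometer over the active loops: each loop contributes (index x dimension) to the element address, and the format field fixes how many loops are active. The template layout below (counts and dims as plain arrays, dims in bytes) is an assumption for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_LOOPS 4

    struct stream_template {
        int      nloops;            /* format field: how many loops are active */
        uint32_t count[MAX_LOOPS];  /* iterations per loop, loop 0 innermost */
        int64_t  dim[MAX_LOOPS];    /* address step per iteration of each loop */
    };

    static void generate_addresses(uint64_t base, const struct stream_template *t)
    {
        uint32_t idx[MAX_LOOPS] = {0};
        for (;;) {
            uint64_t addr = base;
            for (int l = 0; l < t->nloops; l++)
                addr += (int64_t)idx[l] * t->dim[l];
            printf("element at 0x%llx\n", (unsigned long long)addr);

            int l = 0;                       /* odometer-style carry across loops */
            while (l < t->nloops && ++idx[l] == t->count[l])
                idx[l++] = 0;
            if (l == t->nloops)
                return;                      /* all loops exhausted */
        }
    }

    int main(void)
    {
        /* Two active loops: 4 contiguous 8-byte elements, repeated 3 times
         * with a 64-byte outer stride. */
        struct stream_template t = { .nloops = 2,
                                     .count = {4, 3}, .dim = {8, 64} };
        generate_addresses(0x1000, &t);
        return 0;
    }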
CACHE COHERENT SYSTEM IMPLEMENTING VICTIM BUFFERS
In accordance with various aspects of the invention, a recall transaction is issued when a tag filter entry must be freed for an incoming transaction. Directory entries chosen for a recall transaction are pushed into a fully associative structure called the victim buffer. If this structure becomes full, an entry is selected from within the victim buffer for the recall.
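A toy version of that flow, with an invented buffer size and a simple FIFO choice of which victim-buffer entry to recall; the patent's actual selection policy may differ.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define VB_SIZE 4

    struct victim_buffer {
        uint64_t tag[VB_SIZE];
        int      valid[VB_SIZE];
        int      next;               /* FIFO pointer used when the buffer is full */
    };

    /* Push a directory entry evicted by the tag filter into the victim buffer.
     * Returns a tag that must be recalled now, or 0 if none. */
    static uint64_t vb_push(struct victim_buffer *vb, uint64_t tag)
    {
        for (int i = 0; i < VB_SIZE; i++) {
            if (!vb->valid[i]) {            /* fully associative: any free slot */
                vb->valid[i] = 1;
                vb->tag[i] = tag;
                return 0;
            }
        }
        /* Buffer full: recall an existing entry and reuse its slot. */
        uint64_t recall = vb->tag[vb->next];
        vb->tag[vb->next] = tag;
        vb->next = (vb->next + 1) % VB_SIZE;
        return recall;
    }

    int main(void)
    {
        struct victim_buffer vb;
        memset(&vb, 0, sizeof vb);
        for (uint64_t t = 0x100; t <= 0x500; t += 0x100) {
            uint64_t r = vb_push(&vb, t);   /* fifth push overflows the buffer */
            if (r)
                printf("issue recall for tag 0x%llx\n", (unsigned long long)r);
        }
        return 0;
    }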
Cache unit useful for secure execution
A cache unit configured to retain a plurality of cache blocks, a plurality of owner indicators, and a plurality of validity marks. For each cache block there exists a corresponding owner indicator, which is capable of identifying the entity that caused the cache block to be fetched into the cache unit. For each cache block there also exists a corresponding validity mark, which indicates whether a validation process performed on the cache block upon fetching was successful. The cache unit may be useful for secure execution.
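A hypothetical layout of the per-block metadata (one owner indicator and one validity mark per block, as described). The XOR checksum stands in for whatever validation the patent intends, and the owner-gated read is an assumed use of the metadata, not a claim from the abstract.

    #include <stdint.h>
    #include <stdio.h>

    #define NBLOCKS 4
    #define BLOCK_BYTES 16

    struct cache_unit {
        uint8_t  data[NBLOCKS][BLOCK_BYTES];
        uint16_t owner[NBLOCKS];     /* entity that caused the fetch */
        uint8_t  valid[NBLOCKS];     /* 1 iff validation on fetch succeeded */
    };

    static int validate(const uint8_t *block)
    {
        uint8_t sum = 0;                        /* stand-in for a real integrity check */
        for (int i = 0; i < BLOCK_BYTES - 1; i++)
            sum ^= block[i];
        return sum == block[BLOCK_BYTES - 1];   /* last byte holds the toy checksum */
    }

    static void fetch_block(struct cache_unit *c, int idx,
                            const uint8_t *src, uint16_t owner)
    {
        for (int i = 0; i < BLOCK_BYTES; i++)
            c->data[idx][i] = src[i];
        c->owner[idx] = owner;                  /* remember who caused the fetch */
        c->valid[idx] = validate(c->data[idx]) ? 1 : 0;
    }

    /* Assumed secure read: only the owning entity may read a validated block. */
    static const uint8_t *secure_read(const struct cache_unit *c, int idx,
                                      uint16_t entity)
    {
        if (!c->valid[idx] || c->owner[idx] != entity)
            return NULL;
        return c->data[idx];
    }

    int main(void)
    {
        struct cache_unit c = {0};
        uint8_t line[BLOCK_BYTES] = {1, 2, 3};
        line[BLOCK_BYTES - 1] = 1 ^ 2 ^ 3;      /* make the toy checksum pass */
        fetch_block(&c, 0, line, /*owner=*/7);
        printf("owner 7 read: %s\n", secure_read(&c, 0, 7) ? "ok" : "denied");
        printf("owner 9 read: %s\n", secure_read(&c, 0, 9) ? "ok" : "denied");
        return 0;
    }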
Method and apparatus for controlling cache line storage in cache memory
A method and apparatus physically partition clean and dirty cache lines into separate memory partitions, such as one or more banks, so that during low-power operation a cache memory controller can reduce the power consumption of the cache memory containing only clean data. The cache memory controller controls the refresh operation so that refresh does not occur for clean-only banks, or occurs at a reduced rate. Partitions that store dirty data can also store clean data; other partitions, however, are designated for storing only clean data so that their refresh rate can be reduced or their refresh stopped for periods of time. When multiple DRAM dies or packages are employed, the partitioning can occur at the die or package level rather than at the bank level within a die.
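The refresh decision reduces to a per-partition flag, as in the sketch below; the per-bank granularity, flag name, and refresh intervals are invented for illustration.

    #include <stdio.h>

    #define NBANKS 4

    struct bank {
        int clean_only;       /* designated to hold only clean cache lines */
        int refresh_ms;       /* chosen refresh interval, 0 = refresh stopped */
    };

    static void set_low_power_refresh(struct bank *banks, int n)
    {
        for (int i = 0; i < n; i++) {
            if (banks[i].clean_only)
                banks[i].refresh_ms = 0;    /* clean data can be refetched, so
                                               refresh may stop or slow down */
            else
                banks[i].refresh_ms = 64;   /* dirty data must be preserved */
        }
    }

    int main(void)
    {
        struct bank banks[NBANKS] = {
            {.clean_only = 1}, {.clean_only = 0},
            {.clean_only = 1}, {.clean_only = 0},
        };
        set_low_power_refresh(banks, NBANKS);
        for (int i = 0; i < NBANKS; i++)
            printf("bank %d: refresh every %d ms%s\n", i, banks[i].refresh_ms,
                   banks[i].refresh_ms ? "" : " (stopped)");
        return 0;
    }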
Methods and apparatuses involving radar system data paths
Exemplary aspects concern a radar system having sensor circuitry with multiple radar sensors that provide sensor data via multiple virtual channels and multiple data types, a memory circuit with memory buffers, and a bus-interface circuit to control the bus interconnects for bus communications involving a radar signal transmitter and the memory circuit. Radar signals are received and processed via data acquisition path circuitry in multiple circuit paths, with streams of data responding to and accommodating the operations of the sensor circuitry. A master controller conveys data, via the bus-interface circuit, to the buffers for the sensor data, and generates selectable-type transactions to be linked in selected ones of the buffers, in response to the data provided from the sensor circuitry and based on the sensor data being provided via different ones of the multiple virtual channels and data types.
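The buffer-selection step can be pictured as a lookup keyed on the (virtual channel, data type) pair, as sketched below; the channel count, the two data types, and their labels are assumptions, not taken from the abstract.

    #include <stdio.h>

    #define NUM_VC    4
    #define NUM_TYPES 2   /* e.g. raw samples vs. metadata; labels are invented */

    struct buffer { int id; int pending; };

    static struct buffer buffers[NUM_VC][NUM_TYPES];

    /* Link one transaction into the buffer selected by channel and type. */
    static void link_transaction(int vc, int dtype, int nbytes)
    {
        struct buffer *b = &buffers[vc][dtype];
        b->pending += nbytes;
        printf("vc=%d type=%d -> buffer %d (%d bytes pending)\n",
               vc, dtype, b->id, b->pending);
    }

    int main(void)
    {
        for (int vc = 0; vc < NUM_VC; vc++)
            for (int t = 0; t < NUM_TYPES; t++)
                buffers[vc][t].id = vc * NUM_TYPES + t;

        link_transaction(0, 0, 512);   /* raw samples on channel 0 */
        link_transaction(0, 1, 64);    /* metadata on channel 0 */
        link_transaction(3, 0, 512);   /* raw samples on channel 3 */
        return 0;
    }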
Handling Memory Requests
A converter module is described that handles memory requests issued by a cache (e.g., an on-chip cache), where these memory requests include memory addresses defined within a virtual memory space. The converter module receives these requests, issues each request with a transaction identifier, and uses that identifier to track the status of the memory request. The converter module sends requests for address translation to a memory management unit and, where the translation is not available in the memory management unit, receives further memory requests from the memory management unit. The memory requests are issued to a memory via a bus, and the transaction identifier for a request is freed once the response has been received from the memory. When issuing memory requests onto the bus, memory requests received from the memory management unit may be prioritized over those received from the cache.
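A sketch of the identifier lifecycle and the MMU-over-cache arbitration the abstract mentions; the ID pool size and the two-counter queue model are assumptions made to keep the example small.

    #include <stdio.h>

    #define NUM_IDS 4

    static int id_busy[NUM_IDS];

    static int alloc_id(void)                 /* -1 when all IDs are in flight */
    {
        for (int i = 0; i < NUM_IDS; i++)
            if (!id_busy[i]) { id_busy[i] = 1; return i; }
        return -1;
    }

    static void free_id(int id) { id_busy[id] = 0; }  /* response arrived */

    /* Issue one request per call, preferring the MMU queue over the cache
     * queue. Returns the granted transaction ID, or -1. */
    static int issue(int *mmu_pending, int *cache_pending)
    {
        int id = alloc_id();
        if (id < 0)
            return -1;
        if (*mmu_pending > 0) {
            (*mmu_pending)--;
            printf("id %d: MMU request onto bus\n", id);
        } else if (*cache_pending > 0) {
            (*cache_pending)--;
            printf("id %d: cache request onto bus\n", id);
        } else {
            free_id(id);                      /* nothing to send */
            return -1;
        }
        return id;
    }

    int main(void)
    {
        int mmu = 1, cache = 2;
        int a = issue(&mmu, &cache);   /* MMU request wins arbitration */
        int b = issue(&mmu, &cache);   /* then the cache requests drain */
        free_id(a);                    /* memory responded; ID a is reusable */
        issue(&mmu, &cache);
        (void)b;
        return 0;
    }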
System and methods for cache coherent system using ownership-based scheme
A computer system includes a first core with a first local cache and a second core with a second local cache. The first core and the second core are coupled through a remote link. A shared cache is coupled to the first core and to the second core. The shared cache includes an ownership table with a plurality of entries indicating whether a cache line is stored solely in the first local cache or solely in the second local cache. The remote link includes a first link between the first core and the shared cache and a second link between the second core and the shared cache.
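A minimal sketch of the ownership table; the encoding (NONE, CORE0, CORE1, SHARED) and the rule that a read only crosses the remote link when the other core solely holds the line are assumptions consistent with, but not spelled out in, the abstract.

    #include <stdint.h>
    #include <stdio.h>

    enum owner { OWN_NONE, OWN_CORE0, OWN_CORE1, OWN_SHARED };

    #define TABLE_ENTRIES 16

    static enum owner table[TABLE_ENTRIES];

    static unsigned slot(uint64_t line_addr) { return line_addr % TABLE_ENTRIES; }

    /* A read from `core` only crosses the remote link when the line is held
     * solely by the other core. */
    static void read_line(int core, uint64_t line_addr)
    {
        enum owner o = table[slot(line_addr)];
        int other_only = (core == 0 && o == OWN_CORE1) ||
                         (core == 1 && o == OWN_CORE0);
        printf("core %d, line 0x%llx: %s\n", core,
               (unsigned long long)line_addr,
               other_only ? "fetch via remote link" : "serve locally");
    }

    int main(void)
    {
        table[slot(0x40)] = OWN_CORE1;    /* line held solely in core 1's cache */
        read_line(0, 0x40);               /* core 0 must use the remote link */
        read_line(1, 0x40);               /* core 1 hits its own local cache */
        return 0;
    }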
INTEGRATED CIRCUIT AND METHOD FOR EXECUTING CACHE MANAGEMENT OPERATION
An integrated circuit and a method for executing a cache management operation are provided. The integrated circuit includes a master interface, a slave interface, and a link connected between them; the link includes an A-channel, a B-channel, a C-channel, a D-channel, and an E-channel. The A-channel is configured to transmit a cache management operation message from the master interface to the slave interface, where the message is used to maintain data consistency between different data caches. The D-channel is configured to transmit a cache management operation acknowledgement message from the slave interface to the master interface.
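A toy model of the request/acknowledgement round trip over the five-channel link: the cache management operation travels master-to-slave on the A-channel and its acknowledgement returns on the D-channel. The opcode names and message layout are invented for illustration.

    #include <stdio.h>

    enum channel { CH_A, CH_B, CH_C, CH_D, CH_E };
    enum cmo_op  { CMO_CLEAN, CMO_FLUSH, CMO_INVALIDATE };

    struct msg { enum channel ch; enum cmo_op op; unsigned long addr; };

    /* Slave: consume a CMO from the A-channel, reply on the D-channel. */
    static struct msg slave_handle(struct msg req)
    {
        printf("slave: CMO op=%d on addr 0x%lx via A-channel\n", req.op, req.addr);
        struct msg ack = { CH_D, req.op, req.addr };     /* acknowledgement */
        return ack;
    }

    int main(void)
    {
        struct msg req = { CH_A, CMO_FLUSH, 0x80000 };   /* master issues a flush */
        struct msg ack = slave_handle(req);
        if (ack.ch == CH_D)
            printf("master: ack for addr 0x%lx received on D-channel\n", ack.addr);
        return 0;
    }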