Patent classifications
G06F2213/2806
Asymmetric read/write architecture for enhanced throughput and reduced latency
The present disclosure relates to asymmetric read/write architectures for enhanced throughput and reduced latency. One example embodiment includes an integrated circuit. The integrated circuit includes a network interface. The integrated circuit also includes a communication bus interface. The integrated circuit is configured to establish a communication link with a processor of a host computing device over the communication bus interface, which includes mapping to memory addresses associated with the processor of the host computing device. The integrated circuit is also configured to receive payload data for transmission over the network interface in response to the processor of the host computing device writing the payload data to the mapped memory addresses using one or more programmed input/output (PIO) operations. Further, the integrated circuit is configured to write payload data received over the network interface to memory of the host computing device using direct memory access (DMA).
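The asymmetry described above can be sketched in a few lines: the host pushes transmit payloads into device-mapped addresses (PIO-style writes), while the device pushes received payloads directly into host memory (DMA-style writes). The class and method names below are illustrative assumptions, not terms from the patent.

```python
# Sketch of an asymmetric read/write NIC: PIO for transmit, DMA for receive.

class AsymmetricNic:
    def __init__(self, mapped_words=64):
        self.mapped = [0] * mapped_words   # device memory exposed to the host
        self.tx_queue = []                 # payloads staged for the wire

    def pio_write(self, offset, payload):
        """Host-side programmed I/O: each write lands in device-mapped
        memory and immediately stages the payload for transmission."""
        self.mapped[offset] = payload
        self.tx_queue.append(payload)

    def dma_receive(self, host_memory, addr, payload):
        """Device-side DMA: write a received payload straight into host
        memory without involving the CPU in the data copy."""
        host_memory[addr:addr + len(payload)] = payload

nic = AsymmetricNic()
nic.pio_write(0, 0xBEEF)                  # host transmits via PIO
host_ram = bytearray(256)
nic.dma_receive(host_ram, 16, b"hello")   # device delivers via DMA
```

The point of the split is latency: small transmit payloads avoid a descriptor-fetch round trip, while bulk receive traffic still benefits from CPU-free DMA copies.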
COMPUTING SYSTEM FOR TRANSMITTING COMPLETION EARLY BETWEEN SERIALLY CONNECTED ELECTRONIC DEVICES
A computing system includes a host, a first electronic device including a memory and an accelerator, and a second electronic device including a direct memory access (DMA) engine. Based on a command transmitted from the host through the first electronic device, the DMA engine transmits data and completion information of the command to the first electronic device. The memory includes a data buffer storing the data and a completion queue buffer storing the completion information. The accelerator executes a calculation on the data. The DMA engine transmits the data to the first electronic device and then transmits the completion information to the first electronic device.
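The ordering in the abstract (data first, then completion) can be shown with a minimal simulation: the first device starts its accelerator only once a completion entry is visible, which is safe because the DMA engine always lands the data before posting the completion. All class and field names here are illustrative assumptions.

```python
# Sketch of early-completion signaling between serially connected devices.
from collections import deque

class FirstDevice:
    def __init__(self):
        self.data_buffer = []              # holds transferred data
        self.completion_queue = deque()    # holds completion entries

    def accelerate(self):
        """Run a stand-in calculation once a completion is visible."""
        if not self.completion_queue:
            return None
        self.completion_queue.popleft()
        return sum(self.data_buffer)

class DmaEngine:
    def transfer(self, dest, data, completion):
        dest.data_buffer.extend(data)             # step 1: data lands first
        dest.completion_queue.append(completion)  # step 2: completion follows

dev1 = FirstDevice()
DmaEngine().transfer(dev1, [1, 2, 3], completion="cmd-0 done")
result = dev1.accelerate()
```

Because the completion arrives immediately behind the data rather than after a round trip through the host, the accelerator can begin work earlier.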
FIXED ETHERNET FRAME DESCRIPTOR
Systems and techniques for a fixed Ethernet frame descriptor are described herein. A descriptor set-up message may be received at a network interface controller (NIC). Here, the descriptor set-up message includes an Ethernet frame descriptor. The NIC may then use the Ethernet frame descriptor to transmit, across a physical interface of the NIC, multiple Ethernet frames, each of which uses the same Ethernet frame descriptor from the set-up message.
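The mechanism above can be sketched as a NIC that stores one descriptor at set-up time and reuses it for every subsequent frame, instead of fetching a descriptor per frame. The descriptor fields chosen below are an assumption for illustration.

```python
# Sketch of a fixed (reusable) Ethernet frame descriptor.
from dataclasses import dataclass

@dataclass(frozen=True)
class FrameDescriptor:
    dst_mac: str
    src_mac: str
    ethertype: int

class Nic:
    def __init__(self):
        self.descriptor = None
        self.wire = []                     # (descriptor, payload) pairs sent

    def setup(self, descriptor):
        """Handle the one-time descriptor set-up message."""
        self.descriptor = descriptor

    def transmit(self, payload):
        # Every frame reuses the same stored descriptor; no per-frame fetch.
        self.wire.append((self.descriptor, payload))

nic = Nic()
nic.setup(FrameDescriptor("ff:ff:ff:ff:ff:ff", "02:00:00:00:00:01", 0x0800))
for payload in (b"a", b"b", b"c"):
    nic.transmit(payload)
```

Reusing one descriptor removes the per-frame descriptor fetch from the transmit path, which matters for streams of similarly shaped frames.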
Elastic method of remote direct memory access memory advertisement
Systems and methods for demand-based remote direct memory access buffer management. A method embodiment commences upon initially partitioning a memory pool at a computer that is to receive memory contents from a sender. The memory pool is partitioned into memory areas that comprise a plurality of different-sized buffers that serve as target buffers for one or more direct memory access data transfer operations from one or more data sources. An initial set of buffer apportionments is associated with each of the data sources, and those initial sets are advertised to the corresponding data sources. Over time, based on messages that have been loaded into the receiver's memory, the payload sizes of the messages are observed. Based on the observed demand for buffers that are used for the message payloads, the constituency of the advertised buffers can grow or shrink elastically as compared to previous advertisements.
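The grow-or-shrink behavior can be sketched as follows: the receiver partitions its pool into size classes, tallies which class each observed payload would need, and apportions the next advertisement proportionally to that demand. The size classes and the proportional rule are illustrative assumptions, not the patent's specific policy.

```python
# Sketch of elastic, demand-based RDMA buffer advertisement.
from collections import Counter

SIZE_CLASSES = [512, 4096, 65536]          # small / medium / large buffers

def classify(payload_len):
    """Smallest size class that fits the payload."""
    return next(s for s in SIZE_CLASSES if payload_len <= s)

def readvertise(observed_lengths, total_buffers=12):
    """Apportion the next advertisement proportionally to observed demand."""
    demand = Counter(classify(n) for n in observed_lengths)
    total = sum(demand.values())
    return {s: round(total_buffers * demand[s] / total) for s in SIZE_CLASSES}

# Mostly small messages: the small class grows, the large class shrinks.
ad = readvertise([100, 200, 300, 80, 5000, 60])
```

Here five of the six observed payloads fit the 512-byte class, so the next advertisement is weighted heavily toward small buffers; a later batch of large payloads would shift the apportionment back.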
SEMICONDUCTOR DEVICE AND SYSTEMS USING THE SAME
A semiconductor device capable of suppressing performance degradation, and systems using the same, are provided. The semiconductor device includes a plurality of processors CPU1 and CPU2; a scheduling device 10 (ID1) connected to the processors CPU1 and CPU2 for controlling the processors CPU1 and CPU2 to execute a plurality of tasks in real time; memories 17 and 18 accessed by the processors CPU1 and CPU2 to store data produced by executing the tasks; and access monitor circuits 15 for monitoring accesses to the memories by the processors CPU1 and CPU2. When an access to the memory is detected by the access monitor circuit 15, the data stored in the memory 18 is transferred based on destination information stored with that data.
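The monitor-then-transfer behavior can be sketched as a memory region that records a destination alongside each datum; when an access to the region is detected, the stored data is forwarded toward its recorded destination. Names and the dictionary-based model are illustrative assumptions.

```python
# Sketch of an access monitor that triggers destination-based transfers.

class MonitoredMemory:
    def __init__(self):
        self.store = {}        # addr -> (data, destination)
        self.transfers = []    # log of (destination, data) forwards

    def write(self, addr, data, destination):
        """Store data together with its destination information."""
        self.store[addr] = (data, destination)

    def read(self, addr):
        data, destination = self.store[addr]
        # Access monitor fires: transfer the data toward its destination.
        self.transfers.append((destination, data))
        return data

mem = MonitoredMemory()
mem.write(0x100, b"task-result", destination="memory17")
value = mem.read(0x100)
```

Triggering the transfer on the monitored access, rather than on a scheduler tick, keeps the data movement off the processors' critical path.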
Apparatus and method for processing burst read transactions
An apparatus and method are provided for processing burst read transactions. The apparatus has a master device and a slave device coupled to the master device via a connection medium. The master device comprises processing circuitry for initiating a burst read transaction that causes the master device to issue to the slave device, via the connection medium, an address transfer specifying a read address. The slave device is arranged to process the burst read transaction by causing a plurality of data items required by the burst read transaction to be obtained based on the read address specified by the address transfer, and by performing a plurality of data transfers over the connection medium in order to transfer the plurality of data items to the master device. The slave device has transfer identifier generation circuitry for generating, for each data transfer, a transfer identifier to be transmitted over the connection medium to identify which data item in the plurality of data items is being transferred by that data transfer. The master device has buffer circuitry to buffer data items received by the plurality of data transfers, and to employ the transfer identifier provided for each data transfer to cause the plurality of data items to be provided to the processing circuitry in a determined order irrespective of an order in which the data items are transferred to the master device via the plurality of data transfers. This can significantly reduce the overhead required to manage the supply of the data items to the processing circuitry in the required determined order.
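The reordering scheme above can be sketched in miniature: the slave tags each data transfer with its index within the burst, so the master can buffer arrivals by tag and hand items to its processing circuitry in burst order even when the interconnect delivers them out of order. The shuffle below stands in for a reordering interconnect; all function names are illustrative.

```python
# Sketch of tagged burst-read transfers with master-side reordering.
import random

def slave_burst_read(memory, base_addr, count):
    """Return (transfer_id, data_item) pairs, possibly out of order."""
    transfers = [(i, memory[base_addr + i]) for i in range(count)]
    random.shuffle(transfers)              # interconnect may reorder transfers
    return transfers

def master_receive(transfers, count):
    """Buffer items by transfer id, then release them in burst order."""
    buffer = {}
    for tid, item in transfers:
        buffer[tid] = item
    return [buffer[i] for i in range(count)]

memory = list(range(100, 132))
items = master_receive(slave_burst_read(memory, 8, 4), 4)
```

Because the tag carries the ordering, the master needs only a small reorder buffer rather than per-transfer handshaking to restore the required sequence.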
Bit manipulation capable direct memory access
A memory management circuit includes a direct memory access (DMA) channel. The DMA channel includes logic configured to receive a first buffer of data to be written using DMA. The DMA channel further includes logic to perform bit manipulation in real time during a DMA write cycle of the first buffer of data.
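The in-flight bit manipulation can be sketched as a copy loop that applies a per-word bit operation (mask, set, toggle) as each word moves, so no separate CPU pass over the buffer is needed afterward. The specific operation set shown is an illustrative assumption.

```python
# Sketch of bit manipulation applied during a DMA write cycle.

def dma_write_with_bitops(src, dest, dest_offset, op):
    """Copy src into dest, applying a bit manipulation to each word in flight."""
    for i, word in enumerate(src):
        dest[dest_offset + i] = op(word) & 0xFF   # manipulate as the data moves

dest = bytearray(8)
# Set bit 7 of every byte while the DMA engine moves the data.
dma_write_with_bitops(b"\x01\x02\x03", dest, 2, lambda w: w | 0x80)
```

Folding the bit operation into the write cycle saves the read-modify-write pass the CPU would otherwise perform on the landed buffer.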