Patent classifications
G06F2213/28
COMMUNICATION CONTROLLER AND COMMUNICATION CONTROL METHOD
A communications controller is disclosed. The communications controller includes a data transfer unit and a protocol engine. The communications controller further includes a circuit configured to control transfer of data from the data transfer unit to the protocol engine in dependence upon a process identifier which identifies a process entity requiring the protocol engine to transmit data for the process entity.
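A minimal sketch of the gating behavior described in this abstract, assuming invented structure and function names (process_entry_t, transfer_allowed): the control circuit is modeled in software as a check that allows a transfer from the data transfer unit to the protocol engine only when the process identifier matches a process entity that has requested transmission.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_PROCESSES 8

typedef struct {
    uint16_t process_id;   /* identifies the requesting process entity */
    bool     tx_requested; /* the entity requires the protocol engine to transmit */
} process_entry_t;

static process_entry_t table[MAX_PROCESSES];

/* Software model of the control circuit: returns true if a transfer from
 * the data transfer unit to the protocol engine may proceed. */
static bool transfer_allowed(uint16_t process_id)
{
    for (int i = 0; i < MAX_PROCESSES; i++) {
        if (table[i].process_id == process_id)
            return table[i].tx_requested;
    }
    return false;
}

int main(void)
{
    table[0] = (process_entry_t){ .process_id = 0x12, .tx_requested = true };

    uint8_t payload[16] = { 0 };
    if (transfer_allowed(0x12)) {
        /* stand-in for moving data from the data transfer unit to the protocol engine */
        memset(payload, 0xAB, sizeof payload);
        printf("transfer to protocol engine permitted for process 0x12\n");
    }
    return 0;
}
```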
USING A HARDWARE SEQUENCER IN A DIRECT MEMORY ACCESS SYSTEM OF A SYSTEM ON A CHIP
In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two-point and two-by-two-point lookups, and per-memory-bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce the programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller when performing dynamic region-based data movement operations.
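A rough illustration, not the actual NVIDIA interface, of the idea of a hardware sequencer walking a pre-programmed list of region descriptors so data can be moved tile by tile without a processing controller in the loop; the descriptor fields and the sequencer_run function are assumptions for the sketch.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

typedef struct {
    const uint8_t *src;
    uint8_t       *dst;
    size_t         width;       /* bytes per row of the 2D region */
    size_t         height;      /* number of rows */
    size_t         src_stride;
    size_t         dst_stride;
} region_desc_t;

/* The "sequencer": executes each descriptor in order, one 2D region per entry. */
static void sequencer_run(const region_desc_t *descs, size_t count)
{
    for (size_t d = 0; d < count; d++) {
        for (size_t row = 0; row < descs[d].height; row++) {
            memcpy(descs[d].dst + row * descs[d].dst_stride,
                   descs[d].src + row * descs[d].src_stride,
                   descs[d].width);
        }
    }
}

int main(void)
{
    uint8_t frame[8][16], tile[4][8];
    memset(frame, 0x5A, sizeof frame);

    /* one descriptor: copy a 4x8 tile out of the 8x16 frame */
    region_desc_t prog[1] = {
        { .src = &frame[2][4], .dst = &tile[0][0],
          .width = 8, .height = 4, .src_stride = 16, .dst_stride = 8 }
    };
    sequencer_run(prog, 1);
    printf("tile[0][0] = 0x%02X\n", tile[0][0]);
    return 0;
}
```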
Transfer device, information processing device, and data transfer method
A transfer device (230) for communicating with a first processing device (110 or 210) and a second processing device (210 or 110) by PCIe is provided. The transfer device (230) is provided with a direct memory access controller (DMAC) (233) for controlling a data transfer from a first memory (120 or 220) of the first processing device to a second memory (220 or 120) of the second processing device; a first transmission descriptor controller (235 or 237) for acquiring, from the first processing device, information relating to a first memory address in the first memory at which the data to be transferred is stored; and a first reception descriptor controller (234 or 236) for acquiring, from the second processing device, information relating to a second memory address in the second memory at which the data to be transferred should be stored.
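A hypothetical sketch of the descriptor flow in this abstract: the transmission descriptor controller supplies the source address from the first processing device, the reception descriptor controller supplies the destination address from the second processing device, and the DMAC pairs them into one transfer job. All field and function names here are illustrative, not taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t first_mem_addr;  /* where the data to be transferred is stored */
    uint32_t length;
} tx_descriptor_t;

typedef struct {
    uint64_t second_mem_addr; /* where the data should be stored */
    uint32_t capacity;
} rx_descriptor_t;

typedef struct {
    uint64_t src;
    uint64_t dst;
    uint32_t length;
} dma_job_t;

/* DMAC pairing step: build a transfer job from one descriptor of each kind. */
static int dmac_build_job(const tx_descriptor_t *tx, const rx_descriptor_t *rx,
                          dma_job_t *job)
{
    if (tx->length > rx->capacity)
        return -1;                 /* destination buffer too small */
    job->src = tx->first_mem_addr;
    job->dst = rx->second_mem_addr;
    job->length = tx->length;
    return 0;
}

int main(void)
{
    tx_descriptor_t tx = { .first_mem_addr = 0x1000, .length = 256 };
    rx_descriptor_t rx = { .second_mem_addr = 0x8000, .capacity = 4096 };
    dma_job_t job;

    if (dmac_build_job(&tx, &rx, &job) == 0)
        printf("DMA: copy %u bytes from 0x%llx to 0x%llx\n",
               job.length, (unsigned long long)job.src,
               (unsigned long long)job.dst);
    return 0;
}
```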
Memory access operation in distributed computing system
In one example, an apparatus comprises: a local on-chip memory; a computation engine configured to generate local data and to store the local data at the local on-chip memory; and a controller. The apparatus is configured to be coupled with a second device via an interconnect, the second device comprising a local memory. The controller is configured to: fetch the local data from the local on-chip memory; fetch remote data generated by another device from a local off-chip memory; generate output data based on combining the local data and the remote data; and store, via the interconnect, the output data at the local memory of the second device.
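An illustrative sketch of the controller behavior, with assumed names and with the combination step modeled as element-wise addition (as in a reduce step of a distributed workload): local data from on-chip memory is combined with remote data that landed in off-chip memory, and the result is written to the second device's memory over the interconnect (modeled here as a plain array write).

```c
#include <stdint.h>
#include <stdio.h>

#define N 8

static float local_on_chip[N];   /* written by the computation engine */
static float local_off_chip[N];  /* remote data placed here by another device */
static float peer_memory[N];     /* stands in for the second device's local memory */

static void controller_combine_and_forward(void)
{
    for (int i = 0; i < N; i++) {
        float out = local_on_chip[i] + local_off_chip[i]; /* combine local + remote */
        peer_memory[i] = out;  /* store via the interconnect (modeled as a write) */
    }
}

int main(void)
{
    for (int i = 0; i < N; i++) {
        local_on_chip[i]  = (float)i;
        local_off_chip[i] = 10.0f * (float)i;
    }
    controller_combine_and_forward();
    printf("peer_memory[3] = %.1f\n", peer_memory[3]);  /* 3 + 30 = 33.0 */
    return 0;
}
```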
TECHNIQUES FOR MANAGING CONTEXT INFORMATION FOR A STORAGE DEVICE
Disclosed herein are techniques for managing context information for data stored within a non-volatile memory of a computing device. According to some embodiments, the method can include (1) loading, into a volatile memory of the computing device, the context information from the non-volatile memory, where the context information is separated into a plurality of silos, (2) writing transactions into a log stored within the non-volatile memory, and (3) each time a condition is satisfied: (i) identifying a next silo of the plurality of silos to be written into the non-volatile memory, (ii) updating the next silo to reflect the transactions that apply to the next silo, and (iii) writing the next silo into the non-volatile memory. In turn, when an inadvertent shutdown of the computing device occurs, the silos that make up the context information can be sequentially accessed and restored in an efficient manner.
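A minimal sketch, with invented structures, of the silo rotation described above: transactions go to a log, and each time a condition is met (here, simply every LOG_LIMIT transactions), the next silo is updated with the logged transactions that apply to it and written back to non-volatile memory. For brevity the sketch clears the log after each flush, whereas the described method keeps the log in non-volatile memory for recovery.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_SILOS 4
#define LOG_LIMIT 8

typedef struct { uint32_t lba; uint32_t phys; } txn_t;
typedef struct { uint32_t map[64]; } silo_t;   /* one slice of the context information */

static silo_t silos[NUM_SILOS];   /* volatile copy of the context information */
static txn_t  log_buf[LOG_LIMIT];
static int    log_len;
static int    next_silo;

static void write_silo_to_nvm(int idx) { (void)idx; /* stand-in for the NVM write */ }

static void record_transaction(uint32_t lba, uint32_t phys)
{
    log_buf[log_len++] = (txn_t){ lba, phys };

    if (log_len == LOG_LIMIT) {                  /* condition satisfied */
        silo_t *s = &silos[next_silo];
        for (int i = 0; i < log_len; i++) {
            /* apply only the transactions that fall in this silo's range */
            if (log_buf[i].lba / 64 % NUM_SILOS == (uint32_t)next_silo)
                s->map[log_buf[i].lba % 64] = log_buf[i].phys;
        }
        write_silo_to_nvm(next_silo);
        next_silo = (next_silo + 1) % NUM_SILOS; /* rotate to the next silo */
        log_len = 0;
    }
}

int main(void)
{
    for (uint32_t i = 0; i < 32; i++)
        record_transaction(i, 0x1000 + i);
    printf("next silo to write: %d\n", next_silo);
    return 0;
}
```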
Apparatus and methods for varying output pulse-width modulation (PWM) control of an inverter
Apparatus and methods of providing digital varying-output pulse-width modulation control, such as sinusoidal PWM (SPWM) control, for an inverter comprising at least a first switch and a second switch are disclosed. The method comprises: generating a first binary control signal at a system modulation frequency; generating a second binary control signal at an M-times-higher carrier frequency; wherein generating the second binary control signal comprises: providing a periodic counter having a K-times-higher reset frequency; calculating M switch-off moments; determining, for each, a corresponding switch-off counter value and a corresponding counter sequence value; storing each switch-off counter value in a respective memory location corresponding to the respective counter sequence value, and dummy values in the remaining memory locations; and sequentially and periodically transferring the contents of the memory locations to at least one PWM value register.
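A rough sketch with assumed parameters (M, K, COUNTER_MAX, the sinusoidal duty profile) of the counter arithmetic described above: for one modulation period, M sinusoidal switch-off moments are computed, each is converted to a counter value of a periodic counter that resets K times per modulation period, and the value is stored at the memory location given by its counter sequence value, with dummy values elsewhere; the table would then be transferred sequentially into the PWM value register.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define M 16                 /* carrier periods per modulation period */
#define K 16                 /* counter resets per modulation period  */
#define COUNTER_MAX 1000     /* counts per counter period             */
#define DUMMY 0xFFFFu

int main(void)
{
    uint16_t mem[K];
    for (int i = 0; i < K; i++) mem[i] = DUMMY;

    for (int m = 0; m < M; m++) {
        /* sinusoidal duty for this carrier period, kept within 0.05..0.95 */
        double duty  = 0.5 + 0.45 * sin(2.0 * M_PI * m / M);
        double t_off = (m + duty) / M;                   /* fraction of the period */

        int counter_seq      = (int)(t_off * K);         /* which counter period    */
        uint16_t counter_val = (uint16_t)((t_off * K - counter_seq) * COUNTER_MAX);

        mem[counter_seq] = counter_val;                  /* store switch-off value  */
    }

    /* sequential, periodic transfer to the PWM value register (printed here) */
    for (int i = 0; i < K; i++)
        printf("slot %2d -> PWM value register: %u\n", i, mem[i]);
    return 0;
}
```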
Solid state drive (SSD) memory system improving the speed of a read operation using parallel DMA data transfers
There are provided a memory system and an operating method thereof. The memory system includes: a memory device for storing data in a program operation, and for reading stored data and temporarily storing the read data in a read operation; and a controller for transmitting data to the memory device, wherein the controller includes: a flash direct memory access (DMA) unit for reading and outputting the data temporarily stored in the memory device in the read operation; a buffer memory for storing the data output from the flash DMA; and a host DMA unit for reading the data stored in the buffer memory and outputting the read data to a host, wherein a first operation of storing the data temporarily stored in the memory device in the buffer memory and a second operation of outputting the data stored in the buffer memory to the host are performed in parallel.
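A simplified double-buffer model, not the vendor's firmware, of the parallel read path described here: while the flash DMA fills one half of the buffer memory from the memory device, the host DMA drains the other half toward the host, so the first and second operations overlap chunk by chunk. The buffer layout and function names are assumptions.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CHUNK  512
#define CHUNKS 4

static uint8_t flash_pages[CHUNKS][CHUNK];   /* data temporarily held in the memory device */
static uint8_t buffer_mem[2][CHUNK];         /* controller buffer memory, two slots */
static uint8_t host_mem[CHUNKS][CHUNK];      /* destination on the host */

static void flash_dma_read(int chunk, int slot) { memcpy(buffer_mem[slot], flash_pages[chunk], CHUNK); }
static void host_dma_write(int chunk, int slot) { memcpy(host_mem[chunk], buffer_mem[slot], CHUNK); }

int main(void)
{
    memset(flash_pages, 0xA5, sizeof flash_pages);

    flash_dma_read(0, 0);                    /* prime the pipeline */
    for (int c = 1; c < CHUNKS; c++) {
        /* in hardware these two run concurrently; here they are interleaved
         * per chunk to show the overlap structure */
        flash_dma_read(c, c & 1);            /* first operation: flash -> buffer */
        host_dma_write(c - 1, (c - 1) & 1);  /* second operation: buffer -> host */
    }
    host_dma_write(CHUNKS - 1, (CHUNKS - 1) & 1);

    printf("host_mem[3][0] = 0x%02X\n", host_mem[3][0]);
    return 0;
}
```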
SYNCHRONOUS INPUT/OUTPUT (I/O) CACHE LINE PADDING
A computer-implemented method for synchronous input/output (I/O) cache line padding is described. The cache line padding occurs between a server having a processor executing an operating system and a recipient control unit. The method can include receiving, via the processor at the recipient control unit, a partial line direct memory access (DMA) write request; fetching, via the processor, a device table entry (DTE) associated with the partial line DMA write request; determining, via the processor, a cache line size for a synchronous input/output (I/O) cache line; and writing a full cache line DMA write request by padding, via the processor, the partial line DMA write request with a padded portion, where the padded portion is based on the cache line size.
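A minimal sketch, with invented structures, of the padding step described above: look up the cache line size from the device table entry associated with the request, then extend the partial-line DMA write with a padded portion so that a full cache line is written.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { uint32_t cache_line_size; } device_table_entry_t;

typedef struct {
    uint64_t address;      /* cache-line-aligned target address */
    uint32_t length;       /* bytes of valid data, possibly < line size */
    uint8_t  data[256];
} dma_write_request_t;

static void pad_to_full_line(dma_write_request_t *req, const device_table_entry_t *dte)
{
    uint32_t line = dte->cache_line_size;
    if (req->length < line) {
        /* padded portion: fill the remainder of the cache line */
        memset(req->data + req->length, 0, line - req->length);
        req->length = line;
    }
}

int main(void)
{
    device_table_entry_t dte = { .cache_line_size = 128 };
    dma_write_request_t  req = { .address = 0x2000, .length = 40 };
    memcpy(req.data, "partial payload", 16);

    pad_to_full_line(&req, &dte);
    printf("DMA write length after padding: %u bytes\n", req.length);
    return 0;
}
```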
ACCELERATION FRAMEWORK TO CHAIN IPU ASIC BLOCKS
A method is described. The method includes receiving a first invocation for a first ASIC block on a semiconductor chip. The first invocation provides a value. The method includes receiving a second invocation for a second ASIC block on the semiconductor chip. The second invocation also provides the value. The method includes determining, from the first and second invocations both having provided the value, that the second ASIC block is to operate on output from the first ASIC block. The method includes using a first device driver for the first ASIC block and a second device driver for the second ASIC block to cause the second ASIC block to operate on the output from the first ASIC block.
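A hypothetical sketch of the chaining decision: two invocations carry the same value (a chaining token), the framework detects the match, and the first block's output is routed to the second block through their device drivers. The driver entry points and the transforms are stand-ins invented for the sketch.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t chain_value; int block_id; } invocation_t;

/* stand-ins for the two device drivers */
static void driver_block1_run(const uint8_t *in, uint8_t *out, size_t n)
{ for (size_t i = 0; i < n; i++) out[i] = in[i] ^ 0xFF; }          /* e.g. a transform */
static void driver_block2_run(const uint8_t *in, uint8_t *out, size_t n)
{ for (size_t i = 0; i < n; i++) out[i] = (uint8_t)(in[i] + 1); }

int main(void)
{
    invocation_t first  = { .chain_value = 0xC0FFEE, .block_id = 1 };
    invocation_t second = { .chain_value = 0xC0FFEE, .block_id = 2 };

    uint8_t input[8] = {0}, mid[8], out[8];

    if (first.chain_value == second.chain_value) {
        /* same value on both invocations: chain block 2 after block 1 */
        driver_block1_run(input, mid, sizeof input);
        driver_block2_run(mid, out, sizeof mid);
        printf("chained result: 0x%02X\n", out[0]);   /* 0x00 -> 0xFF -> 0x00 */
    }
    return 0;
}
```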
Network functions virtualization platforms with function chaining capabilities
A virtualization platform for Network Functions Virtualization (NFV) is provided. The virtualization platform may include a host processor coupled to an acceleration coprocessor. The acceleration coprocessor may be a reconfigurable integrated circuit to help provide improved flexibility and agility for the NFV. The coprocessor may include multiple virtual function hardware acceleration modules, each of which is configured to perform a respective accelerator function. A virtual machine running on the host processor may wish to perform multiple accelerator functions in succession at the coprocessor on given data. In one suitable arrangement, intermediate data output by each of the accelerator functions may be fed back to the host processor. In another suitable arrangement, the successive function calls may be chained together so that only the final resulting data is fed back to the host processor.
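An illustrative sketch (all function names are assumptions) contrasting the two arrangements in this abstract: either the host drives each accelerator call and receives intermediate data each time, or the calls are chained on the coprocessor so only the final result crosses back to the host.

```c
#include <stdint.h>
#include <stdio.h>

typedef uint32_t (*accel_fn)(uint32_t);

/* stand-in accelerator functions */
static uint32_t accel_compress(uint32_t x) { return x >> 1; }
static uint32_t accel_encrypt(uint32_t x)  { return x ^ 0xDEADBEEF; }

/* Arrangement 1: host drives each step; intermediate data returns each time. */
static uint32_t host_driven(uint32_t data)
{
    uint32_t intermediate = accel_compress(data);  /* copied back to the host */
    return accel_encrypt(intermediate);            /* sent to the coprocessor again */
}

/* Arrangement 2: the coprocessor chains the functions; one round trip. */
static uint32_t chained_on_coprocessor(uint32_t data)
{
    accel_fn chain[] = { accel_compress, accel_encrypt };
    for (size_t i = 0; i < sizeof chain / sizeof chain[0]; i++)
        data = chain[i](data);                     /* stays on the coprocessor */
    return data;                                   /* only the final result goes to the host */
}

int main(void)
{
    printf("host-driven: 0x%08X\n", host_driven(0x1000));
    printf("chained:     0x%08X\n", chained_on_coprocessor(0x1000));
    return 0;
}
```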