Patent classifications
G06F2213/2806
METHOD AND APPARATUS FOR PROTECTING A PCI DEVICE CONTROLLER FROM MASQUERADE ATTACKS BY MALWARE
A technique protects a PCI device controller from PCI BDF masquerade attacks by Ring-0 and Ring-3 malware. The technique may use virtualization technologies to create guest virtual machines, use a hypervisor to allocate ACPI information from ACPI tables to a secure VM, and use extended page tables (EPT) and VT-d policies to protect the MMIO memory range against illegal runtime events.
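As a rough illustration of the policy idea (not the patented hardware mechanism), the sketch below models a hypervisor-style table mapping each PCI Bus/Device/Function (BDF) to the MMIO window it legitimately owns; an access whose claimed BDF does not own the target address is rejected as a masquerade attempt. All names are hypothetical.

```python
class MmioPolicy:
    """Toy model of a per-BDF MMIO ownership policy (illustrative only)."""

    def __init__(self):
        self._ranges = {}  # bdf -> (base, limit)

    def allocate(self, bdf, base, size):
        """Record the MMIO window assigned to a device (as EPT/VT-d policies would)."""
        self._ranges[bdf] = (base, base + size)

    def check_access(self, claimed_bdf, addr):
        """Allow the access only if the claimed BDF actually owns the address."""
        window = self._ranges.get(claimed_bdf)
        if window is None:
            return False
        base, limit = window
        return base <= addr < limit

policy = MmioPolicy()
policy.allocate("00:1f.2", base=0xFEB00000, size=0x1000)

assert policy.check_access("00:1f.2", 0xFEB00010)      # legitimate device
assert not policy.check_access("00:1f.3", 0xFEB00010)  # masquerading BDF
```

In the real design this check is enforced in hardware by EPT and VT-d translation rather than in software.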
System and method for supporting a lazy sorting priority queue in a computing environment
A system and method can support queue processing in a computing environment. A lazy sorting priority queue in a concurrent system can include a priority queue and one or more buffers. The one or more buffers, which can be first-in first-out (FIFO) buffers, operate to store one or more requests received from one or more producers, and move at least one message to the priority queue when no consumer is waiting to process a request. Furthermore, the priority queue operates to prioritize one or more incoming requests received from the one or more buffers, and allows one or more consumers to pick up the requests based on priority.
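A minimal single-threaded sketch of the idea (names and details are assumptions): requests land in an unsorted FIFO first, and the sorting cost is only paid when no consumer is waiting to take work immediately.

```python
import heapq
from collections import deque

class LazySortingPriorityQueue:
    """Illustrative sketch: a FIFO buffer in front of a priority heap."""

    def __init__(self):
        self._fifo = deque()        # unsorted incoming requests
        self._heap = []             # sorted lazily
        self.waiting_consumers = 0  # set by consumers in a real concurrent system

    def put(self, priority, request):
        self._fifo.append((priority, request))
        if self.waiting_consumers == 0:
            self._drain_fifo()      # no consumer waiting: sort into the heap now

    def _drain_fifo(self):
        while self._fifo:
            heapq.heappush(self._heap, self._fifo.popleft())

    def get(self):
        if self._heap:
            self._drain_fifo()      # fold in any stragglers before picking
            return heapq.heappop(self._heap)
        if self._fifo:
            # A waiting consumer takes a buffered request directly, skipping the sort.
            return self._fifo.popleft()
        raise IndexError("empty queue")

q = LazySortingPriorityQueue()
q.put(3, "low")
q.put(1, "urgent")
assert q.get() == (1, "urgent")   # consumers pick up requests by priority
assert q.get() == (3, "low")
```

The point of the laziness is that when a consumer is already waiting, handing it the oldest buffered request avoids the heap operations entirely.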
Generating a non-deterministic finite automata (NFA) graph for regular expression patterns with advanced features
In an embodiment, a method of compiling a pattern into a non-deterministic finite automata (NFA) graph includes examining the pattern for a plurality of elements and a plurality of node types. Each node type can correspond to an element. Each element of the pattern can be matched at least zero times. The method further includes generating a plurality of nodes of the NFA graph. Each of the plurality of nodes can be configured to match one of the plurality of elements. The node can indicate the next node address in the NFA graph, a count value, and/or a node type corresponding to the element. The node can also indicate the element, representing a character, character class, or string. The character can also be a value or a letter.
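The node layout described above can be sketched as follows; the field names and the tuple-based compiler input are assumptions, not the patent's actual encoding.

```python
from dataclasses import dataclass

@dataclass
class NfaNode:
    """One NFA graph node: a node type, the element it matches, a count, and a link."""
    node_type: str   # e.g. "char", "char_class", "string", "fixed_count"
    element: object  # a character, a frozenset character class, or a string
    count: int       # how many times the element must match
    next_addr: int   # address of the next node in the graph

def compile_pattern(elements):
    """Compile a list of (node_type, element, count) tuples into a node-per-element graph."""
    graph = []
    for addr, (node_type, element, count) in enumerate(elements):
        graph.append(NfaNode(node_type, element, count, addr + 1))
    return graph  # the last node's next_addr points past the end (accept)

# Pattern roughly equivalent to "[a-c]{2}x": a counted character class, then a character.
graph = compile_pattern([
    ("fixed_count", frozenset("abc"), 2),
    ("char", "x", 1),
])
assert graph[0].next_addr == 1
assert graph[1].element == "x"
```

Storing the count in the node is what lets a single node stand in for a repeated element instead of duplicating nodes per repetition.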
PERIPHERAL COMPONENT INTERCONNECT EXPRESS (PCIe) DEVICE METHOD FOR DELAYING COMMAND OPERATIONS BASED ON GENERATED THROUGHPUT ANALYSIS INFORMATION
Provided are a Peripheral Component Interconnect Express (PCIe) device and a method of operating the same. The PCIe device may include a performance analyzer, a delay time information generator and a command fetcher. The performance analyzer may measure throughputs of a plurality of functions, and generate throughput analysis information indicating a comparison result between the throughputs of the plurality of functions and throughput limits corresponding to the plurality of functions. The delay time information generator may generate a delay time for delaying a command fetch operation for each of the plurality of functions based on the throughput analysis information. The command fetcher may fetch a target command from a host based on a delay time of a function corresponding to the target command.
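A hedged sketch of the analyzer-to-delay pipeline described above: per function, measured throughput is compared against its limit, and functions over their limit get a fetch delay. The proportional mapping from overshoot to delay is an assumption; the abstract leaves the exact rule open.

```python
def analyze_throughput(measured, limits):
    """Performance analyzer: return {function: over-limit ratio} (0.0 = within limit)."""
    return {fn: max(0.0, measured[fn] / limits[fn] - 1.0) for fn in measured}

def delay_times(analysis, base_delay_us=100):
    """Delay time information generator: map the analysis to per-function fetch delays."""
    return {fn: ratio * base_delay_us for fn, ratio in analysis.items()}

# Illustrative throughputs in MB/s for two PCIe functions.
measured = {"fn0": 1200, "fn1": 400}
limits   = {"fn0": 1000, "fn1": 800}

delays = delay_times(analyze_throughput(measured, limits))
assert delays["fn1"] == 0.0   # under its limit: command fetched immediately
assert delays["fn0"] > 0.0    # over its limit: command fetch is delayed
```

The command fetcher would then wait `delays[fn]` before fetching a target command belonging to function `fn`.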
Packet forwarding apparatus with buffer recycling and associated packet forwarding method
A packet forwarding apparatus includes a first storage device and a processor. The first storage device has a plurality of buffers allocated therein, and at least one buffer included in the plurality of buffers is arranged to buffer at least one packet. The processor is arranged to execute a Linux kernel to perform software-based packet forwarding associated with the at least one packet. The at least one buffer allocated in the first storage device is recycled through direct memory access (DMA) management, and is reused for buffering at least one other packet.
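A toy software analogy of the recycling idea (in the real apparatus, recycling happens through DMA management under the Linux kernel): buffers return to a free pool after a packet is forwarded and are reused for later packets instead of being reallocated.

```python
from collections import deque

class BufferPool:
    """Illustrative pool of fixed-size packet buffers that are recycled, not freed."""

    def __init__(self, count, size):
        self._free = deque(bytearray(size) for _ in range(count))

    def acquire(self):
        return self._free.popleft()   # reuse a recycled buffer

    def recycle(self, buf):
        self._free.append(buf)        # make the buffer available again

pool = BufferPool(count=1, size=2048)
buf = pool.acquire()
# ... the kernel forwards the packet held in buf ...
pool.recycle(buf)
assert pool.acquire() is buf          # same buffer object, reused for the next packet
```

Avoiding a free/allocate cycle per packet is what makes recycling attractive on a software forwarding path.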
Systems and methods for enabling concurrent applications to perform extreme wideband digital signal processing with multichannel coherency
A method for digital signal processing of sensor data includes receiving digitized samples of sensor signals via a network connection; converting the digitized samples into a standardized format; storing the converted digitized samples in a shared memory data structure in memory of a single instruction multiple data (SIMD) processor; and providing zero-copy read access to the converted digitized samples stored in the shared memory data structure to a plurality of applications.
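The zero-copy access pattern can be illustrated with standard-library pieces (the SIMD processor and network ingestion are omitted): samples are converted once into a standardized layout in a single shared buffer, and each consumer application reads through a `memoryview`, which exposes the same memory without copying it.

```python
import struct

shared = bytearray(8 * 4)                     # shared memory region: 4 float64 slots
samples = [0.5, -1.25, 3.0, 2.5]              # "digitized samples", illustrative values
struct.pack_into("4d", shared, 0, *samples)   # standardized format: native float64

def app_view(region):
    """Each application gets a zero-copy read view of the same region."""
    return memoryview(region).cast("d")

view_a, view_b = app_view(shared), app_view(shared)
assert list(view_a) == samples

struct.pack_into("d", shared, 0, 9.0)         # a writer updates the shared region...
assert view_a[0] == view_b[0] == 9.0          # ...and every view sees it: no copies exist
```

In the patented method the shared structure lives in SIMD processor memory, but the property shown is the same: readers see the producer's buffer directly rather than a per-application copy.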
Shared dynamic buffer in image signal processor
Embodiments relate to an image signal processor that includes image processing circuits, a buffer, and a rate limiter circuit. The image processing circuits perform operations associated with image signal processing. The buffer stores image data provided by the system memory and includes a shared section that is dynamically allocated among the image processing circuits. The rate limiter circuit arbitrates allocation of the shared section. The arbitration process includes allocating data credits for the shared section to an image processing circuit. The rate limiter circuit determines a first number of blocks in the shared section that are allocated for pending requests and a second number of blocks that hold data pending consumption by the image processing circuit. If the total number of allocated blocks occupied by the image processing circuit exceeds a throttling threshold, the image processing circuit is throttled by an exponential factor.
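A hypothetical sketch of the throttling rule: the limiter sums, per circuit, the blocks allocated for pending requests and the blocks holding unconsumed data, and past the threshold the grant rate is cut by an exponential factor. The base of the exponent and the exact formula are assumptions.

```python
def throttle_factor(pending_blocks, unconsumed_blocks, threshold, base=2):
    """Return the divisor applied to a circuit's request rate (1 = unthrottled)."""
    occupied = pending_blocks + unconsumed_blocks  # total allocated blocks for the circuit
    if occupied <= threshold:
        return 1                                   # within budget: no throttling
    return base ** (occupied - threshold)          # exponential back-off past the threshold

assert throttle_factor(3, 2, threshold=8) == 1     # 5 blocks, under threshold
assert throttle_factor(6, 4, threshold=8) == 4     # 2 blocks over: 2**2
```

The exponential shape makes the penalty mild just past the threshold but punishing for a circuit that keeps hoarding shared blocks, which pushes occupancy back under the limit quickly.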
METHODS FOR AN AI ACCELERATOR INTEGRATED CIRCUIT CHIP WITH INTEGRATED CELL-BASED FABRIC ADAPTER
An integrated circuit formed on (i) a single semiconductor die or (ii) a plurality of semiconductor dies that are integrated into a single package. The integrated circuit may include a communication interface including a serializer/deserializer (SerDes) interface; a fabric adapter communicatively coupled to the communication interface; a plurality of inference engine clusters, each inference engine cluster including a respective memory element and/or memory interface; and a data interconnect communicatively coupling each respective memory element and/or memory interface of the plurality of inference engine clusters to the fabric adapter. The fabric adapter may be configured to facilitate remote direct memory access (RDMA) read and write services and/or datagram communication over a cell-based switch fabric to and from the respective memory elements and/or memory interfaces of the plurality of inference engine clusters via the data interconnect.
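Purely as a software illustration of the data path (the patent describes hardware; every name here is an assumption), a fabric adapter can be modeled as an object that routes RDMA-style reads and writes to per-cluster memory elements.

```python
class FabricAdapter:
    """Toy model: route RDMA-style reads/writes to per-cluster memory elements."""

    def __init__(self, clusters):
        # cluster_id -> memory element, modeled as a plain bytearray
        self._mem = {cid: bytearray(size) for cid, size in clusters.items()}

    def rdma_write(self, cluster_id, offset, data):
        """RDMA write service: place data directly into a cluster's memory."""
        self._mem[cluster_id][offset:offset + len(data)] = data

    def rdma_read(self, cluster_id, offset, length):
        """RDMA read service: fetch data directly from a cluster's memory."""
        return bytes(self._mem[cluster_id][offset:offset + length])

adapter = FabricAdapter({"cluster0": 256, "cluster1": 256})
adapter.rdma_write("cluster1", 16, b"weights")
assert adapter.rdma_read("cluster1", 16, 7) == b"weights"
```

In the actual design the transfers traverse a cell-based switch fabric and the data interconnect; the model only shows the addressing relationship between the adapter and the cluster memories.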