Patent classifications
G06F9/327
Multi-Processor System with Distributed Mailbox Architecture and Processor Error Checking Method Thereof
A multi-processor system with a distributed mailbox architecture and a processor error checking method thereof are provided. The multi-processor system comprises a plurality of processors, each of which is configured with an exclusive mailbox and an exclusive channel, and the processor error checking method comprises the following steps. When a first processor of the processors needs to communicate with a second processor, the first processor writes data into the exclusive mailbox of the second processor through a public bus. When the exclusive mailbox of the second processor has received the data, the exclusive mailbox starts timing; when the timing result exceeds a threshold value, the exclusive mailbox sends a timeout signal to the second processor, and after receiving the timeout signal, the second processor resets the first processor.
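The mailbox watchdog described in this abstract can be modeled with a minimal C sketch. All names and the register layout are illustrative assumptions; the patent does not specify an implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of one processor's exclusive mailbox. */
typedef struct {
    uint32_t data;        /* payload written over the public bus   */
    uint32_t timer;       /* counts up once data has been received */
    uint32_t threshold;   /* timeout limit                         */
    bool     has_data;    /* set when a sender writes the mailbox  */
    bool     timeout_irq; /* raised toward the owning processor    */
} mailbox_t;

/* Sender side: the first processor writes into the second
 * processor's exclusive mailbox through the public bus. */
static void mailbox_write(mailbox_t *mb, uint32_t payload) {
    mb->data = payload;
    mb->has_data = true;
    mb->timer = 0;               /* timing starts on reception */
}

/* Mailbox hardware model, called once per tick: when the timing
 * result exceeds the threshold, signal a timeout to the owner. */
static void mailbox_tick(mailbox_t *mb) {
    if (mb->has_data && ++mb->timer > mb->threshold)
        mb->timeout_irq = true;
}

/* Owner side: on a timeout interrupt, clear the mailbox; the
 * caller would then reset the sending processor. */
static bool owner_handle_timeout(mailbox_t *mb) {
    if (!mb->timeout_irq)
        return false;
    mb->timeout_irq = false;
    mb->has_data = false;
    return true;
}
```

In this sketch the timeout signal is a flag the owner polls; in the patent it would be a real interrupt line from the mailbox to the second processor.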
DATA STORAGE DEVICE AND METHOD FOR SHARING MEMORY OF CONTROLLER THEREOF
A data storage device and a method for sharing the memory of a controller thereof are provided. The data storage device comprises a non-volatile memory and a controller, which is electrically coupled to the non-volatile memory and comprises an access interface, a redundant array of independent disks (RAID) error correcting code (ECC) engine and a central processing unit (CPU). The CPU has a first memory for storing temporary data, and the RAID ECC engine has a second memory. When the second memory is not fully used, the controller maps the unused memory space of the second memory to the first memory, virtualizing it as part of the first memory, so that the CPU can also use the unused memory space of the second memory to store the temporary data.
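The memory-sharing scheme can be sketched as a two-tier allocator: prefer the CPU's own memory, then borrow the unused tail of the ECC engine's memory. Sizes and names below are assumptions for illustration only.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sizes; real SRAM sizes are device-specific. */
enum { FIRST_MEM_SIZE = 64, SECOND_MEM_SIZE = 32 };

typedef struct {
    uint8_t first_mem[FIRST_MEM_SIZE];    /* CPU's own memory         */
    uint8_t second_mem[SECOND_MEM_SIZE];  /* RAID ECC engine's memory */
    size_t  first_used;
    size_t  second_used;  /* space the ECC engine (or CPU) has taken  */
} controller_t;

/* Allocate temporary-data space: use the first memory if it fits,
 * otherwise fall back to the unused part of the second memory,
 * which the controller has mapped in as part of the first memory. */
static uint8_t *alloc_temp(controller_t *c, size_t len) {
    if (c->first_used + len <= FIRST_MEM_SIZE) {
        uint8_t *p = c->first_mem + c->first_used;
        c->first_used += len;
        return p;
    }
    if (c->second_used + len <= SECOND_MEM_SIZE) {
        uint8_t *p = c->second_mem + c->second_used; /* borrowed space */
        c->second_used += len;
        return p;
    }
    return NULL;   /* both memories exhausted */
}
```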
PACKET PROCESSING WITH REDUCED LATENCY
Generally, this disclosure provides devices, methods, and computer readable media for packet processing with reduced latency. The device may include a data queue to store data descriptors associated with data packets, the data packets to be transferred between a network and a driver circuit. The device may also include an interrupt generation circuit to generate an interrupt to the driver circuit. The interrupt may be generated in response to a combination of an expiration of a delay timer and a non-empty condition of the data queue. The device may further include an interrupt delay register to enable the driver circuit to reset the delay timer, the reset postponing the interrupt generation.
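The coalescing condition — interrupt only when the delay timer has expired AND the queue is non-empty, with a driver-writable register that postpones the interrupt — can be sketched as follows. Field and function names are assumptions, not the patent's.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of the device's interrupt generation circuit. */
typedef struct {
    int      queue_len;    /* pending data descriptors              */
    uint32_t delay_timer;  /* counts down toward expiry             */
    uint32_t delay_reload; /* value loaded by the delay register    */
    bool     irq;          /* interrupt toward the driver circuit   */
} nic_t;

/* One hardware tick: the interrupt fires only on the combination of
 * an expired delay timer and a non-empty data queue. */
static void nic_tick(nic_t *n) {
    if (n->delay_timer > 0)
        n->delay_timer--;
    if (n->delay_timer == 0 && n->queue_len > 0)
        n->irq = true;
}

/* Driver writes the interrupt delay register: resetting the timer
 * postpones interrupt generation. */
static void nic_reset_delay(nic_t *n) {
    n->delay_timer = n->delay_reload;
    n->irq = false;
}
```

A driver that is already polling the queue would call `nic_reset_delay` to suppress interrupts it does not need, which is the latency/overhead trade-off the abstract targets.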
CACHE-BASED TRACE REPLAY BREAKPOINTS USING RESERVED TAG FIELD BITS
Performing breakpoint detection via a cache includes detecting an occurrence of a memory access and identifying whether any cache line of the cache matches an address associated with the memory access. When a cache line matches the address associated with the memory access, no breakpoint was encountered. When no cache line matches the address associated with the memory access, embodiments identify whether any cache line matches the address when one or more flag bits are ignored. When a cache line matches the address with the one or more flag bits ignored, embodiments perform a check for whether a breakpoint was encountered. Otherwise, embodiments process a cache miss.
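The two-stage lookup — exact tag match, then a match with the reserved flag bits masked off — can be sketched like this. The flag-bit mask and return codes are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Reserved tag bits repurposed as breakpoint flags (assumed mask). */
#define FLAG_BITS 0x3u

typedef struct { uint32_t tag; bool valid; } cache_line_t;

/* Returns: 0 = exact hit (no breakpoint encountered),
 *          1 = hit only when flag bits are ignored (check for a
 *              breakpoint on this line),
 *          2 = genuine miss (process a cache miss). */
static int check_access(const cache_line_t *lines, int n,
                        uint32_t addr_tag) {
    for (int i = 0; i < n; i++)                 /* stage 1: exact */
        if (lines[i].valid && lines[i].tag == addr_tag)
            return 0;
    for (int i = 0; i < n; i++)                 /* stage 2: masked */
        if (lines[i].valid &&
            (lines[i].tag & ~FLAG_BITS) == (addr_tag & ~FLAG_BITS))
            return 1;
    return 2;
}
```

A line whose flag bits are set fails the exact comparison but passes the masked one, which is what redirects the replay engine into the breakpoint check.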
HARDWARE STATE REPORTING USING INTERRUPT PINS
Configuration information is sent from a configuration controller to a processor module in a System On Chip (SOC) and associated with firmware. In response to receiving the configuration information, context switching associated with an interrupt pin in the processor module is disabled. Hardware state information is sent from a hardware functional module in the SOC to the interrupt pin. In response to receiving the hardware state information, the processor module determines, based at least in part on the hardware state information and after completing any active firmware operations per the disabled context switching, a response. The response is received at a responding module and the responding module performs a process associated with the response.
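The ordering constraint in this abstract — with context switching disabled, the processor module finishes any active firmware operation before acting on the latched hardware state — can be modeled as a small polled state machine. All fields and the response encoding are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical view of the relevant SOC state. */
typedef struct {
    bool     ctx_switch_disabled; /* set by the config controller  */
    bool     pin_asserted;        /* hardware state hit the pin    */
    uint32_t hw_state;            /* value from the hw func module */
    bool     firmware_busy;       /* an active firmware operation  */
} soc_t;

/* Returns a response code for the responding module, or 0 when no
 * response is due yet. With context switching disabled, a pending
 * hardware state waits until firmware work completes. */
static uint32_t poll_hw_state(soc_t *s) {
    if (!s->pin_asserted)
        return 0;
    if (s->ctx_switch_disabled && s->firmware_busy)
        return 0;                        /* finish firmware first */
    s->pin_asserted = false;
    return s->hw_state | 0x80000000u;    /* assumed encoding */
}
```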
PREEMPTIVE SCHEDULING OF IN-ENCLAVE THREADS
Preemptive scheduling enclaves as disclosed herein support both cooperative and preemptive scheduling of in-enclave (IE) thread execution. These preemptive scheduling enclaves may include a scheduler configured to be executed as part of normal hardware interrupt processing by enclave threads. The scheduler identifies an IE thread to be scheduled and modifies enclave data structures so that when the enclave thread resumes processing after a hardware interrupt, the identified IE thread is executed, rather than the interrupted IE thread.
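The core move — a scheduler run from the enclave's interrupt path that modifies enclave data structures so a different IE thread resumes — can be sketched as a round-robin pick. The structures below are illustrative; real enclaves would manipulate thread state save areas, not a plain index.

```c
#include <stdbool.h>

typedef struct { int id; bool runnable; } ie_thread_t;

typedef struct {
    ie_thread_t *threads;
    int          n;
    int          current;  /* index of the interrupted IE thread */
} enclave_t;

/* Run as part of hardware interrupt processing: pick the next
 * runnable IE thread and update the enclave data structure so the
 * enclave resumes *that* thread rather than the interrupted one. */
static int schedule_next(enclave_t *e) {
    for (int i = 1; i <= e->n; i++) {
        int cand = (e->current + i) % e->n;
        if (e->threads[cand].runnable) {
            e->current = cand;     /* redirect the resume point */
            return cand;
        }
    }
    return e->current;  /* nothing else runnable: cooperative fallback */
}
```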
CONFIGURABLE INTERCONNECT ADDRESS REMAPPER WITH EVENT RECOGNITION
Systems and methods are disclosed for a configurable interconnect address remapper with event detection. For example, an integrated circuit can include a processor core configured to execute instructions. The processor core includes pairs of region registers, each pair defined by a From Address range and a To Address; a register storing the number of regions defined in the integrated circuit; interrupt enable registers associated with each pair of region registers; and event flags associated with each pair of region registers. The integrated circuit further includes an interconnection system handling transactions from the processor core; an interconnect address remapper translating an address associated with a transaction using the one or more pairs of region registers; and an interrupt controller receiving an interrupt signal from the interconnect address remapper when the interrupt enable registers are enabled and at least one event flag is raised because at least one pair of region registers matches the transaction address.
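The remap-plus-event behavior can be sketched as a lookup over region registers: a matching From range rewrites the address relative to the To Address, raises the region's event flag, and, if enabled, asserts the interrupt. Field names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* One pair of region registers plus its associated state. */
typedef struct {
    uint32_t from_base, from_limit;  /* From Address range */
    uint32_t to_base;                /* To Address         */
    bool     irq_enable;             /* interrupt enable   */
    bool     event_flag;             /* raised on a match  */
} region_t;

/* Translate addr through the first matching region; raise its event
 * flag and, when enabled, the interrupt toward the controller. */
static uint32_t remap(region_t *regions, int n, uint32_t addr,
                      bool *irq) {
    for (int i = 0; i < n; i++) {
        region_t *r = &regions[i];
        if (addr >= r->from_base && addr < r->from_limit) {
            r->event_flag = true;
            if (r->irq_enable && irq)
                *irq = true;
            return r->to_base + (addr - r->from_base);
        }
    }
    return addr;  /* no region matches: pass the address through */
}
```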
METHOD AND APPARATUS FOR A SCALABLE INTERRUPT INFRASTRUCTURE
An apparatus and method for scalable interrupt reporting. For example, one embodiment of an apparatus comprises: a host processor to execute one or more processes having a corresponding one or more process contexts associated therewith; and a graphics processing engine to, upon initiating execution of a first process, determine a current process context associated with the first process including a first pointer to a first system memory region to store an interrupt status, a second pointer to a second system memory region to store interrupt enable and/or interrupt mask data for one or more interrupt events, and address/data values associated with a message signaled interrupt (MSI); the graphics processing engine, in response to an interrupt event, to evaluate the interrupt enable data from the second system memory region to determine whether the interrupt event is enabled, to report the interrupt event, if enabled, by writing a specified value to the first system memory region identified by the first pointer, and to generate a first MSI corresponding to the interrupt event by writing the MSI address/data values to an output accessible by the host processor.
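The per-process-context reporting path — check the enable data in the second memory region, write status through the first pointer, then emit the MSI — can be reduced to a small sketch. The bitmask layout and MSI handling are simplified assumptions; real MSIs carry separate address and data values.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical process context held by the graphics engine. */
typedef struct {
    uint32_t *status_mem;   /* first region (via the first pointer)  */
    uint32_t *enable_mem;   /* second region: interrupt enable bits  */
    uint32_t  msi_value;    /* MSI address/data, collapsed to one    */
} process_ctx_t;

/* On an interrupt event: report it only if enabled, by writing the
 * status region and generating the corresponding MSI. */
static bool report_event(process_ctx_t *ctx, unsigned event,
                         uint32_t *msi_out) {
    uint32_t bit = 1u << event;
    if (!(*ctx->enable_mem & bit))
        return false;              /* event masked: not reported */
    *ctx->status_mem |= bit;       /* write via the first pointer */
    *msi_out = ctx->msi_value;     /* MSI toward the host processor */
    return true;
}
```

Because the pointers live in the process context, each process gets its own status and enable regions, which is what makes the scheme scale with the number of processes.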
Method for manufacturing semiconductor device
A transistor including a semiconductor, a first conductor, a second conductor, a third conductor, a first insulator, and a second insulator is manufactured by forming a hard mask layer including a fourth conductor over the second insulator and a third insulator over the fourth conductor, forming an opening portion in the second insulator with the hard mask layer as a mask, eliminating the hard mask layer in the course of forming the opening portion, and forming the first insulator and the first conductor in the opening portion.