Patent classifications
G06F9/30189
Instruction and Logic for Configurable Arithmetic Logic Unit Pipeline
A processor includes a front end including circuitry to decode a first instruction, which sets a performance register for an execution unit, and a second instruction, and an allocator including circuitry to assign the second instruction to the execution unit for execution. The execution unit includes circuitry to select between a normal computation and an accelerated computation based on a mode field of the performance register, perform the selected computation, and select between a normal result associated with the normal computation and an accelerated result associated with the accelerated computation based on the mode field.
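To make the select-compute-select structure concrete, here is a minimal C sketch; the mode values, the register field name, and the reduced-precision "accelerated" path are illustrative assumptions, not details taken from the patent.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical values for the mode field of the performance register. */
enum alu_mode { MODE_NORMAL = 0, MODE_ACCELERATED = 1 };

/* Mirrors the abstract's structure: select a computation based on the mode
   field, perform it, then select the matching result. The "accelerated"
   path here is only an illustrative narrower-width variant. */
static uint32_t alu_execute(uint32_t a, uint32_t b, enum alu_mode mode)
{
    uint32_t normal_result = 0, accelerated_result = 0;

    if (mode == MODE_ACCELERATED)
        accelerated_result = (uint32_t)((uint16_t)a * (uint16_t)b); /* narrower multiply */
    else
        normal_result = a * b;                                      /* full-width multiply */

    return (mode == MODE_ACCELERATED) ? accelerated_result : normal_result;
}

int main(void)
{
    printf("%u\n", alu_execute(70000u, 3u, MODE_NORMAL));      /* 210000 */
    printf("%u\n", alu_execute(70000u, 3u, MODE_ACCELERATED)); /* 13392 (70000 truncated to 16 bits) */
    return 0;
}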
Guest-specific microcode
Embodiments of apparatuses, methods, and systems for modifying the behavior of a guest installed to run within a VM are disclosed. In one embodiment, an apparatus includes virtualization logic, first storage, second storage, decode logic, and multiplexing logic. The virtualization logic is to provide a mode in which to operate a virtual machine. The first storage is to store a first plurality of micro-instructions to control the apparatus. The second storage is to store a second plurality of micro-instructions to control the apparatus. The decode logic is to decode a macro-instruction into one of a first plurality and a second plurality of micro-instructions. The multiplexing logic is to cause the macro-instruction to be decoded into the second plurality of micro-instructions instead of the first plurality of micro-instructions only when issued from the virtual machine.
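One way to picture the multiplexing between the two micro-instruction pluralities is the following C sketch; the micro-op names and the boolean "issued from the VM" flag are hypothetical stand-ins for the hardware described above.

#include <stdbool.h>
#include <stdio.h>

/* Two hypothetical micro-instruction sequences for the same macro-instruction. */
static const char *host_uops[]  = { "uop_load", "uop_exec", NULL };
static const char *guest_uops[] = { "uop_load", "uop_vmcheck", "uop_exec", NULL };

/* Multiplexing logic: select the guest-specific sequence only when the
   macro-instruction was issued from the virtual machine. */
static const char **decode(bool issued_from_vm)
{
    return issued_from_vm ? guest_uops : host_uops;
}

int main(void)
{
    const char **seq = decode(true);   /* macro-instruction issued from the VM */
    for (int i = 0; seq[i]; i++)
        printf("%s\n", seq[i]);
    return 0;
}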
Data computing system
The present disclosure provides a data computing system. The data computing system comprises: a memory, a processor and an accelerator, wherein the memory is communicatively coupled to the processor and configured to store data to be computed and a computed result, the data being written by the processor; the processor is communicatively coupled to the accelerator and configured to control the accelerator; and the accelerator is communicatively coupled to the memory and configured to access the memory according to pre-configured control information, implement a computing process to produce the computed result and write the computed result back to the memory. The present disclosure also provides an accelerator and a method performed by an accelerator of a data computing system. The present disclosure can improve the execution efficiency of the processor and reduce the computing overhead of the processor.
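A minimal C sketch of the processor/accelerator split described above, assuming an illustrative control-information layout (source address, destination address, length) and a shared memory array; the actual control information and computation in the patent are not specified here.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical pre-configured control information for the accelerator. */
struct ctrl_info {
    uint32_t src_addr;   /* where the processor wrote the data to be computed */
    uint32_t dst_addr;   /* where the computed result is written back         */
    uint32_t length;     /* number of elements to process                     */
};

static uint32_t shared_mem[64];  /* stand-in for the shared memory */

/* The accelerator accesses memory according to the control information,
   performs the computation, and writes the result back on its own. */
static void accelerator_run(const struct ctrl_info *c)
{
    uint32_t sum = 0;
    for (uint32_t i = 0; i < c->length; i++)
        sum += shared_mem[c->src_addr + i];
    shared_mem[c->dst_addr] = sum;
}

int main(void)
{
    /* Processor side: write the input data and configure the accelerator. */
    for (uint32_t i = 0; i < 8; i++) shared_mem[i] = i + 1;
    struct ctrl_info c = { .src_addr = 0, .dst_addr = 32, .length = 8 };
    accelerator_run(&c);
    printf("result: %u\n", shared_mem[32]);  /* 36 */
    return 0;
}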
Caching device, cache, system, method and apparatus for processing data, and medium
A caching device, an instruction cache, a system for processing an instruction, a method and apparatus for processing data, and a medium are provided. The caching device includes a first queue, a second queue, a write port group, a read port, a first pop-up port, a second pop-up port and a press-in port. The write port group is configured to write cache data into a set storage address in the first queue and/or the second queue; the read port is configured to read all cache data from the first queue and/or the second queue at one time; the press-in port is configured to press cache data into the first queue and/or the second queue; the first pop-up port is configured to pop up cache data from the first queue; and the second pop-up port is configured to pop up cache data from the second queue.
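The port roles can be illustrated with a small C sketch of one such queue; the fixed depth, integer payload, and function names are assumptions for illustration, not the device's actual interface.

#include <stdio.h>

#define QUEUE_DEPTH 4

/* One of the two queues in the caching device (illustrative only). */
struct queue {
    int data[QUEUE_DEPTH];
    int count;
};

/* Press-in port: press cache data into a queue. */
static void press_in(struct queue *q, int value)
{
    if (q->count < QUEUE_DEPTH)
        q->data[q->count++] = value;
}

/* Pop-up port: pop up the oldest cache data from a queue. */
static int pop_up(struct queue *q)
{
    int v = q->data[0];
    for (int i = 1; i < q->count; i++)
        q->data[i - 1] = q->data[i];
    q->count--;
    return v;
}

/* Write port group: write cache data into a set storage address. */
static void write_at(struct queue *q, int addr, int value)
{
    q->data[addr] = value;
}

/* Read port: read all cache data from a queue at one time. */
static void read_all(const struct queue *q)
{
    for (int i = 0; i < q->count; i++)
        printf("%d ", q->data[i]);
    printf("\n");
}

int main(void)
{
    struct queue first = {0}, second = {0};
    press_in(&first, 10);
    press_in(&first, 20);
    press_in(&second, 30);
    write_at(&first, 1, 25);
    read_all(&first);                /* 10 25 */
    printf("%d\n", pop_up(&second)); /* 30    */
    return 0;
}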
System management mode trust establishment for OS level drivers
Various embodiments are generally directed to establishing trust in system management mode. An operating system management mode driver can invoke a system management mode and provide a signature to the system management mode with which to authenticate the driver. Additionally, a hash value of the driver can be used to determine whether the driver is authorized to invoke system management mode or particular operations or features of system management mode.
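A rough C sketch of only the hash-based authorization check is given below; the hash function, the allow-list contents, and the driver image name are placeholders, and the signature verification mentioned above is not modeled.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for a cryptographic hash of the driver image (FNV-1a). */
static uint32_t driver_hash(const char *image)
{
    uint32_t h = 2166136261u;
    for (; *image; image++) h = (h ^ (uint8_t)*image) * 16777619u;
    return h;
}

/* Hash values of drivers authorized to invoke system management mode (placeholder). */
static const uint32_t authorized_hashes[] = { 0x811c9dc5u };

/* SMM entry check: the driver is trusted only if its hash is on the list. */
static bool smm_authorize(const char *image)
{
    uint32_t h = driver_hash(image);
    for (size_t i = 0; i < sizeof authorized_hashes / sizeof *authorized_hashes; i++)
        if (authorized_hashes[i] == h)
            return true;
    return false;
}

int main(void)
{
    printf("authorized: %d\n", smm_authorize("os_mm_driver.sys"));
    return 0;
}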
Data selection for a processor pipeline using multiple supply lines
A method for a plurality of pipelines, each having a processing element with first and second inputs and first and second lines, wherein at least one of the pipelines includes first and second logic operable to select a respective line so that data is received at the first and second inputs respectively. A first mode is selected, and for the at least one pipeline the first and second lines of that pipeline are selected, such that the processing element of that pipeline receives data via the first and second lines of that pipeline, the first line being capable of supplying data that differs from the data on the second line. A second mode is selected, and for the at least one pipeline a line of another pipeline is selected together with the second line of the at least one pipeline, and the same data is supplied at the second line as at the first line.
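One possible reading of the two modes is sketched below in C; the add operation, the line names, and the interpretation that the shared mode feeds the same data to both inputs are assumptions made only for illustration.

#include <stdio.h>

enum mode { MODE_INDEPENDENT = 1, MODE_SHARED = 2 };

/* A processing element with two inputs; here it simply adds them. */
static int process(int in1, int in2) { return in1 + in2; }

/* Select which supply lines feed pipeline 1's processing element.
   own_line1/own_line2 belong to pipeline 1; other_line belongs to
   another pipeline (illustrative of the second mode). */
static int pipeline1_cycle(enum mode m, int own_line1, int own_line2, int other_line)
{
    if (m == MODE_INDEPENDENT)
        return process(own_line1, own_line2);    /* two different data values */
    /* Second mode: a line of another pipeline is used, and the same data
       is supplied at the second line as at the first. */
    return process(other_line, other_line);
}

int main(void)
{
    printf("%d\n", pipeline1_cycle(MODE_INDEPENDENT, 3, 4, 9)); /* 7  */
    printf("%d\n", pipeline1_cycle(MODE_SHARED, 3, 4, 9));      /* 18 */
    return 0;
}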
Cache systems and circuits for syncing caches or cache sets
A cache system, having a first cache, a second cache, and a logic circuit coupled to control the first cache and the second cache according to an execution type of a processor. When an execution type of a processor is a first type indicating non-speculative execution of instructions and the first cache is configured to service commands from a command bus for accessing a memory system, the logic circuit is configured to copy a portion of content cached in the first cache to the second cache. The cache system can include a configurable data bit. The logic circuit can be coupled to control the caches according to the bit. Alternatively, the caches can include cache sets. The caches can also include registers associated with the cache sets respectively. The logic circuit can be coupled to control the cache sets according to the registers.
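A minimal C sketch of the syncing condition follows; the cache layout, the "half of the lines" portion, and the boolean flags are illustrative assumptions rather than the circuit's actual behavior.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define CACHE_LINES 8

enum exec_type { EXEC_NON_SPECULATIVE, EXEC_SPECULATIVE };

struct cache { int line[CACHE_LINES]; };

/* Logic-circuit behavior sketched in software: when execution is
   non-speculative and the first cache is the one servicing the command
   bus, copy a portion of its content into the second cache. */
static void maybe_sync(enum exec_type t, bool first_services_bus,
                       const struct cache *first, struct cache *second)
{
    if (t == EXEC_NON_SPECULATIVE && first_services_bus)
        memcpy(second->line, first->line, sizeof(int) * (CACHE_LINES / 2));
}

int main(void)
{
    struct cache first = { { 1, 2, 3, 4, 5, 6, 7, 8 } }, second = { {0} };
    maybe_sync(EXEC_NON_SPECULATIVE, true, &first, &second);
    printf("%d %d\n", second.line[0], second.line[3]);  /* 1 4 */
    return 0;
}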
Secure predictors for speculative execution
Systems and methods are disclosed for secure predictors for speculative execution. Some implementations may eliminate or mitigate side-channel attacks, such as the Spectre-class of attacks, in a processor. For example, an integrated circuit (e.g., a processor) for executing instructions includes a predictor circuit that, when operating in a first mode, uses data stored in a set of predictor entries to generate predictions. For example, the integrated circuit may be configured to: detect a security domain transition for software being executed by the integrated circuit; responsive to the security domain transition, change a mode of the predictor circuit from the first mode to a second mode and invoke a reset of the set of predictor entries, wherein the second mode prevents the use of a first subset of the predictor entries of the set of predictor entries; and, after completion of the reset, change the mode back to the first mode.
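The mode change and entry reset around a security domain transition can be sketched in C as follows; the predictor layout, the "first half of the entries" as the restricted subset, and the static fallback prediction are illustrative assumptions.

#include <stdio.h>
#include <string.h>

#define ENTRIES 8

enum pred_mode { PRED_FULL, PRED_RESTRICTED };

struct predictor {
    int entries[ENTRIES];
    enum pred_mode mode;
};

/* In the second (restricted) mode, a first subset of the entries is not
   consulted; a static prediction is returned instead. */
static int predict(const struct predictor *p, int index)
{
    if (p->mode == PRED_RESTRICTED && index < ENTRIES / 2)
        return 0;
    return p->entries[index];
}

/* On a security domain transition: enter the restricted mode, reset the
   entry set, then return to the first mode once the reset has completed. */
static void on_domain_transition(struct predictor *p)
{
    p->mode = PRED_RESTRICTED;
    memset(p->entries, 0, sizeof p->entries);   /* reset of the predictor entries */
    p->mode = PRED_FULL;
}

int main(void)
{
    struct predictor p = { { 1, 1, 0, 1, 0, 0, 1, 1 }, PRED_FULL };
    printf("%d\n", predict(&p, 1));   /* 1, before the transition */
    on_domain_transition(&p);
    printf("%d\n", predict(&p, 1));   /* 0, entries have been reset */
    return 0;
}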
Apparatus and method for handling exception events
Processing circuitry has a plurality of exception states for handling exception events, the exception states including a base level exception state and at least one further level exception state. Each exception state has a corresponding stack pointer indicating the location within the memory of a corresponding stack data store. When the processing circuitry is in the base level exception state, stack pointer selection circuitry selects the base level stack pointer as a current stack pointer indicating a current stack data store for use by the processing circuitry. When the processing circuitry is in a further level exception state, the stack pointer selection circuitry selects either the base level stack pointer or the further level stack pointer corresponding to the current further level exception state as the current stack pointer.
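The stack pointer selection can be sketched in a few lines of C; the control flag that chooses between the two pointers in a further level exception state, and the pointer values, are hypothetical.

#include <stdint.h>
#include <stdio.h>

enum exc_level { LEVEL_BASE, LEVEL_FURTHER };

struct cpu_state {
    enum exc_level level;
    int use_base_sp_in_further;  /* illustrative selection control          */
    uint32_t base_sp;            /* base level stack pointer                */
    uint32_t further_sp;         /* further level stack pointer             */
};

/* Stack pointer selection: the base level exception state always uses the
   base level stack pointer; a further level exception state may select
   either the base level or the further level stack pointer. */
static uint32_t current_sp(const struct cpu_state *s)
{
    if (s->level == LEVEL_BASE || s->use_base_sp_in_further)
        return s->base_sp;
    return s->further_sp;
}

int main(void)
{
    struct cpu_state s = { LEVEL_FURTHER, 0, 0x20001000u, 0x20008000u };
    printf("0x%08x\n", (unsigned)current_sp(&s));  /* 0x20008000 */
    return 0;
}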
Data management across a persistent memory tier and a file system tier
Techniques are provided for data management across a persistent memory tier and a file system tier. A block within a persistent memory tier of a node is determined to have up-to-date data compared to a corresponding block within a file system tier of the node. The corresponding block may be marked as a dirty block within the file system tier. Location information of a location of the block within the persistent memory tier is encoded into a container associated with the corresponding block. In response to receiving a read operation, the location information is obtained from the container. The up-to-date data is retrieved from the block within the persistent memory tier using the location information for processing the read operation.
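A small C sketch of the dirty-block marking and the container-encoded location on the read path is given below; the container layout, tier arrays, and block indices are illustrative assumptions, not the actual on-disk or in-memory structures.

#include <stdbool.h>
#include <stdio.h>

/* Container metadata associated with a file-system-tier block (hypothetical layout). */
struct container {
    bool     dirty;           /* corresponding file-system-tier block is stale      */
    unsigned pmem_location;   /* encoded location of the up-to-date block in pmem   */
};

static int pmem_tier[16];     /* persistent memory tier (stand-in) */
static int fs_tier[16];       /* file system tier (stand-in)       */

/* Record that the persistent memory block is up to date and where it lives. */
static void mark_pmem_newer(struct container *c, unsigned pmem_loc)
{
    c->dirty = true;
    c->pmem_location = pmem_loc;
}

/* Read path: obtain the location from the container and retrieve the
   up-to-date data from the persistent memory tier when the corresponding
   file-system-tier block is marked dirty. */
static int read_block(const struct container *c, unsigned fs_loc)
{
    return c->dirty ? pmem_tier[c->pmem_location] : fs_tier[fs_loc];
}

int main(void)
{
    struct container c = { false, 0 };
    fs_tier[3]   = 100;        /* stale copy in the file system tier */
    pmem_tier[7] = 200;        /* up-to-date copy in the pmem tier   */
    mark_pmem_newer(&c, 7);
    printf("%d\n", read_block(&c, 3));  /* 200 */
    return 0;
}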