Patent classifications
G06F9/30189
Processing pipeline where fast data passes slow data
Various embodiments relate to an inline encryption engine in a memory controller configured to process data read from a memory, including: a first data pipeline configured to receive plaintext data and a first validity flag; a second data pipeline, having the same length as the first data pipeline, configured to: receive encrypted data and a second validity flag; decrypt the encrypted data from the memory and output decrypted plaintext data; an output multiplexer configured to select and output data from either the first pipeline or the second pipeline; and control logic configured to control the output multiplexer, wherein the control logic is configured to output valid data from the first pipeline when the second pipeline does not have valid output decrypted plaintext data available.
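The two-pipeline arrangement above can be sketched as a small cycle-accurate model: both paths are shift registers of equal depth, and the mux selects the decrypt path whenever its output slot is valid, otherwise the bypass path. This is a minimal illustration; the XOR "cipher", the depth of 3, and all names are assumptions, not the patented design.

```python
from collections import deque

KEY = 0x5A   # illustrative XOR "cipher" standing in for the real decrypt engine
DEPTH = 3    # both pipelines share the same length, as the abstract requires

def make_pipe():
    """A pipeline is a fixed-depth queue of (data, valid) slots."""
    return deque([(0, False)] * DEPTH)

def clock(plain_pipe, enc_pipe, plain_in=None, enc_in=None):
    """Advance both pipelines one cycle and return the mux output."""
    p_data, p_valid = plain_pipe.pop()   # slot leaving the plaintext path
    e_data, e_valid = enc_pipe.pop()     # slot leaving the encrypted path
    plain_pipe.appendleft((plain_in if plain_in is not None else 0,
                           plain_in is not None))
    enc_pipe.appendleft((enc_in if enc_in is not None else 0,
                         enc_in is not None))
    if e_valid:                          # decrypted data available: select it
        return (e_data ^ KEY, True)
    return (p_data, p_valid)             # otherwise pass the plaintext path
```

Because both paths have the same latency, data injected on the same cycle exits on the same cycle, and the control logic reduces to a check of the second pipeline's validity flag.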
Apparatuses, methods, and systems for instructions to request a history reset of a processor core
Systems, methods, and apparatuses relating to instructions to reset software thread runtime property histories in a hardware processor are described. In one embodiment, a hardware processor includes a hardware guide scheduler comprising a plurality of software thread runtime property histories; a decoder to decode a single instruction into a decoded single instruction, the single instruction having a field that identifies a model-specific register; and an execution circuit to execute the decoded single instruction to check that an enable bit of the model-specific register is set, and when the enable bit is set, to reset the plurality of software thread runtime property histories of the hardware guide scheduler.
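The execution step described above amounts to a gated bulk reset: the instruction does nothing unless the enable bit of the identified model-specific register is set. A minimal sketch, assuming a bit position and a no-op fallback that the abstract does not specify:

```python
MSR_ENABLE_BIT = 0  # illustrative bit position; the real MSR layout is not given

def execute_history_reset(msr_value, histories):
    """Clear every software thread runtime property history, but only
    when the enable bit of the identified MSR is set."""
    if (msr_value >> MSR_ENABLE_BIT) & 1:
        for h in histories:
            h.clear()
        return True    # reset performed
    return False       # enable bit clear: treated as a no-op here
```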
EFFICIENT EXCEPTION HANDLING IN TRUSTED EXECUTION ENVIRONMENTS
Systems, methods, and apparatuses relating to efficient exception handling in trusted execution environments are described. In an embodiment, a hardware processor includes a register, a decoder, and execution circuitry. The register has a field to be set to enable an architecturally protected execution environment at one of a plurality of contexts for code in an architecturally protected enclave in memory. The decoder is to decode an instruction having a format including a field for an opcode, the opcode to indicate that the execution circuitry is to perform a context change. The execution circuitry is to perform one or more operations corresponding to the instruction, the one or more operations including changing, within the architecturally protected enclave, from a first context to a second context.
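The key property is that the context change happens without leaving the enclave: the instruction checks the enabling register field, then switches the active context index among those the enclave holds. A toy model, with all names and the error behavior chosen for illustration:

```python
class Enclave:
    """Toy model of an architecturally protected enclave holding a
    plurality of contexts; the context-change opcode switches among
    them without exiting the enclave."""
    def __init__(self, num_contexts, enabled=True):
        self.enabled = enabled   # stands in for the register's enable field
        self.contexts = [{"regs": {}} for _ in range(num_contexts)]
        self.current = 0

    def context_change(self, target):
        if not self.enabled:
            raise PermissionError("protected execution environment not enabled")
        if not 0 <= target < len(self.contexts):
            raise IndexError("no such context in this enclave")
        self.current = target    # the switch stays inside the enclave
        return self.current
```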
METHOD AND APPARATUS FOR IMPLEMENTING POWER MODES IN MICROCONTROLLERS USING POWER PROFILES
A method and apparatus for implementing power modes in microcontrollers (MCUs) using power profiles. In one embodiment of the method, a central processing unit (CPU) of the MCU executes a first instruction for calling a subroutine stored in a memory of the MCU, wherein the first instruction comprises a first parameter to be passed to the subroutine. Thereafter the CPU writes a first value to a first special function register (SFR) of the MCU in response to executing the first instruction, wherein the first value is related to the first parameter. The MCU operates in a first power mode in response to the CPU writing the first value to the first SFR. The CPU also executes a second instruction for calling the subroutine, wherein the second instruction comprises a second parameter to be passed to the subroutine. The CPU then writes a second value to a second SFR of the MCU in response to executing the second instruction, wherein the second value is related to the second parameter. The MCU operates in a second power mode in response to the CPU writing the second value to the second SFR. The MCU consumes more power operating in the first power mode than it does when operating in the second power mode.
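The flow above is: one subroutine, two parameters, two SFR writes, two power modes. A minimal sketch of that dispatch, assuming register addresses, mode encodings, and current-draw figures that are purely illustrative (the abstract specifies only that the first mode draws more power than the second):

```python
# Illustrative SFR map and per-mode current draw; real addresses and
# encodings are device-specific and not given in the abstract.
SFR = {0x10: 0, 0x11: 0}            # two special function registers
POWER_DRAW_MA = {1: 12.0, 2: 3.5}   # mode 1 consumes more than mode 2
state = {"mode": None}

def set_power_profile(profile_param):
    """Subroutine sketch: the parameter passed in selects which SFR is
    written, and the SFR write in turn selects the power mode."""
    if profile_param == "performance":
        SFR[0x10] = 1               # first value -> first SFR -> first mode
        state["mode"] = 1
    elif profile_param == "low_power":
        SFR[0x11] = 1               # second value -> second SFR -> second mode
        state["mode"] = 2
    return POWER_DRAW_MA[state["mode"]]
```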
OFFLOADING COMPUTATION BASED ON EXTENDED INSTRUCTION SET ARCHITECTURE
The present disclosure describes techniques for offloading computation based on an extended instruction set architecture (ISA). The extended ISA may be created based on identifying functions executed multiple times by a central processing unit (CPU). The extended ISA may comprise hashes corresponding to the functions and identifiers of extended operations associated with the functions. The extended operations may be converted from original operations of the functions. The extended operations may be executable by a storage device. The storage device may be associated with at least one computational core. Code may be synthesized based at least in part on the extended ISA. Computation of the synthesized code may be offloaded into the storage device.
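The pipeline above has three stages: hash the hot functions, map each hash to an extended-operation identifier, then synthesize code by substituting recognized functions with their extended ops for offload. A sketch under assumed details (SHA-256 as the hash, sequential `XOP_n` identifiers, string function bodies); none of these specifics come from the abstract:

```python
import hashlib

def function_hash(body):
    """Hash of a function body, used as its key in the extended ISA."""
    return hashlib.sha256(body.encode()).hexdigest()[:16]

def build_extended_isa(hot_functions):
    """Map each frequently executed function to an extended-operation
    identifier. Sequential XOP_n names are illustrative only."""
    return {function_hash(body): f"XOP_{i}"
            for i, body in enumerate(hot_functions)}

def synthesize(code_functions, ext_isa):
    """Replace known function bodies with extended-op identifiers so the
    storage device's computational core can execute them; anything
    unrecognized stays on the CPU unchanged."""
    return [ext_isa.get(function_hash(body), body) for body in code_functions]
```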
MULTI-LAYER DATA CACHE TO PREVENT USER EXPERIENCE INTERRUPTS DURING FEATURE FLAG MANAGEMENT
There are provided systems and methods for a multi-layer cache to prevent user experience interrupts during feature flag management. A service provider may provide applications to computing devices of users including mobile applications. Use and availability of features in an application may be configured using feature flags; however, changing these feature flags may initiate an application refresh that affects user experiences with the application. To prevent interruptions, a multi-layer data cache may be used where feature flag data for the feature flags may initially be loaded, after a time period, to a first layer cache that is not used to update the application. When conditions exist for updating the application without affecting the user experience, such as if the user is no longer using a workflow, the feature flag data may be loaded to a second layer cache. The second layer cache may then be used for updating.
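The two layers split responsibilities: updates always land in a staging layer the application never reads, and are promoted to the serving layer only when the user is not mid-workflow. A minimal sketch, with class and method names invented for illustration:

```python
class FlagCache:
    """Two-layer feature-flag cache: new flag data lands in layer one
    (staging, never read by the UI) and is copied to layer two (the
    layer the application consults) only when promotion is safe."""
    def __init__(self, flags):
        self.layer1 = dict(flags)    # staging layer
        self.layer2 = dict(flags)    # serving layer used by the running app

    def receive_update(self, new_flags):
        self.layer1.update(new_flags)        # never interrupts the user

    def maybe_promote(self, user_in_workflow):
        if not user_in_workflow:             # safe to refresh the application
            self.layer2 = dict(self.layer1)

    def flag(self, name):
        return self.layer2.get(name)
```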
Deferral instruction for managing transactional aborts in transactional memory computing environments
A deferral instruction associated with a transaction is executed in a transaction execution computing environment with transactional memory. Based on executing the deferral instruction, a processor sets a defer-state indicating that pending disruptive events such as interrupts or conflicting memory accesses are to be deferred. A pending disruptive event is deferred based on the set defer-state, and the transaction is completed based on the disruptive event being deferred. The progress of the transaction may be monitored during a deferral period. The length of such deferral period may be specified by the deferral instruction. Whether the deferral period has expired may be determined based on the monitored progress of the transaction. If the deferral period has expired, the transaction may be aborted and the disruptive event may be processed.
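The defer-state described above can be modeled as a simple loop over cycles: disruptive events arriving inside the deferral period are queued and the transaction keeps running; an event arriving after the period expires aborts the transaction and is processed immediately. Cycle counts and the event representation are assumptions for illustration:

```python
def run_transaction(work_cycles, deferral_period, events):
    """Sketch of the defer-state. `events` maps cycle -> disruptive
    event (e.g. an interrupt or conflicting access). Events within the
    deferral period are deferred; after it expires, the transaction
    aborts and the event is handled."""
    deferred = []
    for cycle in range(work_cycles):
        if cycle in events:
            if cycle < deferral_period:
                deferred.append(events[cycle])      # defer, keep running
            else:
                return ("aborted", events[cycle])   # deferral period expired
    return ("committed", deferred)                  # deferred events now safe
```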
Fetching Instructions in an Instruction Fetch Unit
A method in an instruction fetch unit configured to initiate a fetch of an instruction bundle from a first memory and to initiate a fetch of an instruction bundle from a second memory, wherein a fetch from the second memory takes a predetermined fixed plurality of processor cycles, the method comprising: identifying that an instruction bundle is to be selected for fetching from the second memory in a predetermined future processor cycle; and initiating a fetch of the identified instruction bundle from the second memory a number of processor cycles prior to the predetermined future processor cycle, based upon the predetermined fixed plurality of processor cycles taken to fetch from the second memory.
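Because the second memory's latency is fixed and known, the scheduling reduces to subtraction: to have a bundle ready at cycle N, initiate its fetch at cycle N minus the latency. A sketch with an assumed latency of 3 cycles:

```python
FETCH_LATENCY = 3  # the fixed plurality of cycles a second-memory fetch takes
                   # (illustrative value; the abstract leaves it unspecified)

def fetch_initiation_cycle(needed_cycle):
    """Cycle at which the fetch must be initiated so the bundle arrives
    exactly when it is selected."""
    return needed_cycle - FETCH_LATENCY

def schedule(bundles_needed):
    """Map each bundle -> cycle-needed pair to its initiation cycle."""
    return {b: fetch_initiation_cycle(c) for b, c in bundles_needed.items()}
```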
CIRCUITRY AND METHODS FOR IMPLEMENTING CAPABILITY-BASED COMPARTMENT SWITCHES WITH DESCRIPTORS
Systems, methods, and apparatuses for implementing capability-based compartment switches with descriptors are described. In certain examples, a hardware processor core comprises a capability management circuit to check a capability for a memory access request, the capability comprising an address field and a bounds field that is to indicate a lower bound and an upper bound of an address range to which the capability authorizes access; a decoder circuit to decode a single instruction into a decoded single instruction, the single instruction comprising one or more fields to indicate a first compartment descriptor that identifies a first capability to a first state element in a first compartment of memory and a second capability to a second state element in the first compartment of the memory, and an opcode to indicate that an execution circuit is to load the first capability from the first compartment descriptor of the memory into a first register to enable the capability management circuit to determine whether a first bounds field of the first capability authorizes an access to the first state element in the first compartment of the memory, and load the second capability from the first compartment descriptor of the memory into a second register to enable the capability management circuit to determine that a second bounds field of the second capability authorizes an access to the second state element in the first compartment of the memory; and the execution circuit to execute the decoded single instruction according to the opcode.
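The single instruction above loads two capabilities out of a compartment descriptor and bounds-checks each against the state element it guards. A minimal model of that check, with the descriptor layout, field names, and fault behavior invented for illustration (real capability encodings carry permissions and sealing state omitted here):

```python
from dataclasses import dataclass

@dataclass
class Capability:
    address: int
    lower: int
    upper: int   # bounds field: authorizes access within [lower, upper)

    def authorizes(self, addr):
        return self.lower <= addr < self.upper

def compartment_switch(descriptor, state_addr_1, state_addr_2):
    """Load the two capabilities named by the compartment descriptor
    into 'registers' and check each bounds field before granting access
    to the compartment's state elements."""
    reg1, reg2 = descriptor["cap1"], descriptor["cap2"]
    if not (reg1.authorizes(state_addr_1) and reg2.authorizes(state_addr_2)):
        raise PermissionError("capability bounds check failed")
    return reg1, reg2
```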
DATA PROCESSING SYSTEM AND OPERATING METHOD THEREOF
A data processing system may be configured to include a memory device; a controller configured to access the memory device when a host requests offload processing of an application, and to process the application; and a sharing memory management component within the controller configured to: set controller owning rights of access to a target region of the memory device in response to the host storing, in the target region, data used for the requested offload processing of the application; and set the controller owning rights of access or the host owning rights of access to the target region based on a processing state of the application.
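The sharing memory management component is essentially an ownership state machine for the target region: the controller takes ownership when the host stores offload data there, and ownership reverts based on the application's processing state. A sketch with state names and transitions chosen for illustration:

```python
class SharedRegion:
    """Sketch of the sharing memory management component: ownership of
    the target region flips between host and controller depending on
    the offload's processing state."""
    def __init__(self):
        self.owner = "host"

    def host_stores_offload_data(self):
        self.owner = "controller"    # controller takes the region to process

    def on_state(self, processing_state):
        if processing_state == "done":
            self.owner = "host"      # results ready: hand back to the host
        elif processing_state == "running":
            self.owner = "controller"
        return self.owner
```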