Patent classifications
G06F9/544
Resource management unit for capturing operating system configuration states and offloading tasks
This disclosure describes methods, devices, systems, and procedures in a computing system for capturing a configuration state of an operating system executing on a central processing unit (CPU), and offloading resource-related tasks, based on the configuration state, to a resource management unit such as a system-on-chip (SoC). The resource management unit identifies a status of each resource based on the captured configuration state of the operating system. The resource management unit then processes tasks associated with the status of the resources, such as modifying a clock rate of a clocked component in the computing system. This can relieve the CPU of processing those tasks, thereby improving overall computing system performance and dynamics.
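For illustration, here is a minimal Python sketch of the offloading idea. The abstract does not specify data formats or interfaces, so every name here (ResourceStatus, ResourceManagementUnit, the snapshot layout) is a hypothetical stand-in:

```python
# Illustrative sketch only: the disclosure does not prescribe these APIs.
from dataclasses import dataclass

@dataclass
class ResourceStatus:
    name: str           # e.g. "gpu_core"
    utilization: float  # 0.0..1.0, derived from the captured OS state

class ResourceManagementUnit:
    """Offload target (e.g. an SoC) that processes resource tasks for the CPU."""

    def __init__(self, min_hz: int, max_hz: int):
        self.min_hz = min_hz
        self.max_hz = max_hz

    def identify_statuses(self, captured_state: dict) -> list:
        # Derive a per-resource status from the snapshot of the OS configuration.
        return [ResourceStatus(name, info["utilization"])
                for name, info in captured_state["resources"].items()]

    def process(self, status: ResourceStatus) -> int:
        # Example task: scale a clocked component between its minimum and
        # maximum rate in proportion to utilization, without involving the CPU.
        return int(self.min_hz + (self.max_hz - self.min_hz) * status.utilization)

# Usage: the CPU captures state once; the unit handles the clock decisions.
rmu = ResourceManagementUnit(min_hz=400_000_000, max_hz=1_600_000_000)
snapshot = {"resources": {"gpu_core": {"utilization": 0.75}}}
for s in rmu.identify_statuses(snapshot):
    print(s.name, "->", rmu.process(s), "Hz")
```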
Progressive error handling
Systems and methods herein describe receiving an identification of a data pipeline, accessing first data offset information for a first data origin and second data offset information for a second data origin, bisecting the first data offset information for the first data origin, processing the data pipeline with the bisected first data offset information and the second data offset information, receiving a notification indicating a data pipeline status, and causing presentation of the notification on a graphical user interface of a computing device.
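A minimal sketch of the bisection idea follows, assuming a pipeline that either succeeds or fails when run over a given offset range and that exactly one failing offset exists in the range; the function and parameter names are hypothetical, not an interface the disclosure prescribes:

```python
def bisect_failing_offset(run_pipeline, lo, hi, second_offsets):
    """Narrow the first origin's offset range [lo, hi) until the single
    failing offset remains. run_pipeline(lo, hi, second_offsets) returns
    True when the pipeline processes that range without error."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Re-run the pipeline with the bisected first-origin offsets and the
        # unchanged second-origin offset information.
        if run_pipeline(lo, mid, second_offsets):
            lo = mid     # lower half is clean; the failure is in [mid, hi)
        else:
            hi = mid     # the failure is in [lo, mid)
    return lo

# Usage with a stand-in pipeline that fails on offset 42 of the first origin.
ok = lambda lo, hi, _second: not (lo <= 42 < hi)
print(bisect_failing_offset(ok, 0, 128, second_offsets=(0, 128)))  # -> 42
```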
Prefetch mechanism for a cache structure
An apparatus and method are provided, the apparatus comprising a processor pipeline to execute instructions, a cache structure to store information for reference by the processor pipeline when executing said instructions, and prefetch circuitry to issue prefetch requests to the cache structure to cause the cache structure to prefetch information into the cache structure in anticipation of a demand request for that information being issued to the cache structure by the processor pipeline. The processor pipeline is arranged to issue a trigger to the prefetch circuitry on detection of a given event that will result in a reduced level of demand requests being issued by the processor pipeline, and the prefetch circuitry is configured to control issuing of prefetch requests in dependence on reception of the trigger.
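The following is a behavioral sketch, not RTL, of how prefetch issue might be gated by such a trigger; the class name, the next-line prefetch policy, and the specific response to the trigger are all illustrative assumptions rather than the claimed design:

```python
# Behavioral model: the pipeline's trigger changes the prefetch degree.
class PrefetchCircuitry:
    def __init__(self, cache, normal_degree=4, throttled_degree=1):
        self.cache = cache
        self.normal = normal_degree        # prefetches issued per demand miss
        self.throttled = throttled_degree
        self.degree = normal_degree

    def on_trigger(self, reduced_demand: bool):
        # The pipeline asserts the trigger on an event (e.g. a stall-inducing
        # instruction) that will lower the rate of demand requests; the
        # prefetcher adjusts how aggressively it issues prefetch requests.
        self.degree = self.throttled if reduced_demand else self.normal

    def on_demand_miss(self, addr: int, line: int = 64):
        # Issue next-line prefetch requests into the cache structure.
        for i in range(1, self.degree + 1):
            self.cache.add(addr + i * line)

cache = set()
pf = PrefetchCircuitry(cache)
pf.on_demand_miss(0x1000)            # normal mode: 4 lines prefetched
pf.on_trigger(reduced_demand=True)
pf.on_demand_miss(0x2000)            # after trigger: 1 line prefetched
print(sorted(hex(a) for a in cache))
```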
Scaling of an ordered event stream based on a writer group characteristic
Scaling of an ordered event stream (OES) based on a characteristic of one or more writer groups is disclosed. Scaling a portion of an OES contemporaneously with writing events into that portion can conserve computing resources in contrast to more conventional scaling techniques. Moreover, scaling an OES contemporaneously with writing events thereto can enable improved management of OES scaling for applications that can both read events from an input portion of an OES and, via interim events and interim portions of an OES, write events to an output portion of an OES. An application instance can therefore simultaneously act as both a reader group and a writer group and can manage data via interim OESs, such that effects of passing the data through the interim OESs can be cascaded into a scaling of the output portion of an OES based on the writer group characteristic.
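A toy sketch of scaling an output OES portion from one writer-group characteristic (here, aggregate write rate) follows; the threshold, the names, and the choice of characteristic are assumptions, as the disclosure does not fix a concrete policy:

```python
def target_segment_count(writer_rates_eps, per_segment_capacity_eps=1000):
    """Scale the output portion so each segment stays under capacity."""
    total = sum(writer_rates_eps)                         # writer-group characteristic
    return max(1, -(-total // per_segment_capacity_eps))  # ceiling division

# An application instance acting as both reader group and writer group:
# events read from the input portion pass through interim portions, and the
# interim write rates cascade into the scaling of the output portion.
interim_writer_rates = [800, 650, 900]   # events/second per writer
print(target_segment_count(interim_writer_rates))  # -> 3 segments
```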
Heterogeneous-computing based emulator
In an approach, a processor receives an input indicative of a set of registers, the set of registers being configured for obtaining output data from a design-under-test (DUT) in a field-programmable gate array (FPGA) module. A processor executes a set of instructions for monitoring the output data in the set of registers. A processor generates data indicative of at least a portion of the changes in the output data in the set of registers during the execution of the set of instructions. A processor causes a separate machine to analyze the data by utilizing an interface to send the data to the separate machine.
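A host-side sketch of such a monitoring loop is shown below, assuming a read_register() accessor for the FPGA module and a generic send() interface; both are hypothetical stand-ins for whatever the emulator actually exposes:

```python
def monitor_changes(read_register, register_ids, steps, send):
    last = {r: read_register(r) for r in register_ids}
    for step in range(steps):                 # runs while instructions execute
        for r in register_ids:
            value = read_register(r)
            if value != last[r]:
                # Record only the changed portion of the output data and
                # ship it to the separate analysis machine over the interface.
                send({"step": step, "register": r, "old": last[r], "new": value})
                last[r] = value

# Usage with a fake DUT whose register 0 increments on every read.
import itertools
fake = itertools.count()
monitor_changes(lambda r: next(fake) if r == 0 else 0,
                register_ids=[0, 1], steps=3, send=print)
```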
Method and system for detecting and resolving a write conflict
The disclosed systems and methods are directed to detecting and resolving write-write conflicts among a plurality of transactions received from master nodes of a multi-writer database system. The method includes receiving a plurality of REDO logs and storing the plurality of REDO logs in a buffer, each REDO log associated with one of the plurality of transactions; selecting one REDO log of the plurality of REDO logs; persisting the transaction associated with the one REDO log in a local storage when a write-write conflict is detected between the one REDO log and at least one other REDO log of the plurality of REDO logs prior to committing the transaction associated with the one REDO log; and transmitting a status of the transaction associated with the one REDO log to a global transaction manager (GTM).
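A simplified sketch of detecting a write-write conflict between buffered REDO logs, keyed by the rows each log touches, follows; the record layout and the GTM interface are assumptions for illustration:

```python
def detect_conflict(candidate, buffered_logs):
    """A write-write conflict exists if the candidate REDO log touches a row
    also written by another uncommitted transaction's REDO log."""
    for other in buffered_logs:
        if other["txn"] != candidate["txn"] and candidate["rows"] & other["rows"]:
            return other["txn"]
    return None

def process(candidate, buffered_logs, local_storage, notify_gtm):
    conflicting_txn = detect_conflict(candidate, buffered_logs)
    if conflicting_txn is not None:
        # Persist before commit so the conflict can be resolved later, then
        # report the transaction status to the global transaction manager.
        local_storage.append(candidate)
        notify_gtm({"txn": candidate["txn"], "status": "conflict",
                    "with": conflicting_txn})
    else:
        notify_gtm({"txn": candidate["txn"], "status": "ok"})

buffer = [{"txn": "T1", "rows": {("t", 7)}}, {"txn": "T2", "rows": {("t", 9)}}]
storage = []
process({"txn": "T3", "rows": {("t", 7), ("t", 8)}}, buffer, storage, print)
```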
Mission data file generator architecture and interface
A system and method are disclosed for the development and use of an architecture for a Mission Data File Generator (MDFG) for Electronic Warfare (EW) and other systems, applying state-of-the-art software architecture, workflow design, and Graphical User Interface (GUI) design methods. The resulting MDFG tools are user-friendly for the EW analyst and allow faster development of Mission Data Files (MDFs). The method and system are substantially more extensible and maintainable than current MDFG tools.
Pruning of buffering candidates for improved efficiency of evaluation
An integrated circuit (IC) design is accessed from a database in memory. The IC design comprises a net with a route connecting a source to a sink. A set of buffering candidates is generated for the net. A timing improvement associated with a buffering candidate in the set of buffering candidates is determined using a first timing model. The buffering candidate is pruned from the set of buffering candidates based on the timing improvement and a cost associated with the buffering candidate. The pruned set of buffering candidates is evaluated using a second timing model, and a buffering solution for the net is selected from the pruned set of buffering candidates based on a result of the evaluating. The IC design is updated to include the buffering solution selected for the net.
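A schematic sketch of the two-model flow follows: a cheap first timing model prunes candidates on (timing improvement, cost), and only the survivors are evaluated with the more expensive second model. The threshold, candidate fields, and both model stand-ins are invented for illustration:

```python
def prune(candidates, fast_model, min_gain_per_cost=0.5):
    kept = []
    for cand in candidates:
        gain = fast_model(cand)                   # first (approximate) model
        if gain / cand["cost"] >= min_gain_per_cost:
            kept.append(cand)                     # worth a precise evaluation
    return kept

def select(candidates, fast_model, slow_model):
    survivors = prune(candidates, fast_model)
    # The second (accurate) model runs only on the pruned set; the best
    # buffering solution is then committed back into the IC design.
    return max(survivors, key=slow_model)

cands = [{"id": "buf_x1", "cost": 2.0}, {"id": "buf_x4", "cost": 8.0}]
fast = lambda c: 3.0 if c["id"] == "buf_x1" else 3.5  # estimated gain (ps)
slow = lambda c: 2.8 if c["id"] == "buf_x1" else 3.2  # accurate gain (ps)
print(select(cands, fast, slow)["id"])  # buf_x4 is pruned; buf_x1 wins
```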
Method and apparatus for a logic-based filter engine
A cross-domain guard is disclosed that includes a field programmable gate array (FPGA). The FPGA includes a rule database containing one or more rules, a memory interconnect configured to send control data or rule processing data, media access control logic, and a plurality of filter engines configured to receive an incoming message and generate a processed message. Each of the plurality of filter engines may contain a message processing allocation element configured to receive and distribute the incoming message, and a plurality of rule processor kernels. Each of the plurality of rule processor kernels includes a rule processor kernel control element, a plurality of data operator kernels configured to perform a data comparison operation, a ternary lookup table processor configured to perform a logic operation based upon a result of the data comparison operation, and a processed message arbiter. A method for filtering incoming messages is also disclosed.
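A software sketch of one rule processor kernel is given below: data operator kernels each produce a comparison bit, and a lookup table keyed by the bit vector plays the role of the ternary lookup table processor. The rule, field names, and table encoding are invented, and don't-care bits of a true ternary table are omitted:

```python
def make_rule(operators, truth_table):
    """operators: list of message -> bool (data operator kernels);
    truth_table: maps bit tuples to a pass/drop decision."""
    def apply(message):
        bits = tuple(op(message) for op in operators)  # data comparison ops
        return truth_table.get(bits, False)            # logic operation
    return apply

# Hypothetical rule: pass if dest port is 443 AND payload length < 1024.
rule = make_rule(
    operators=[lambda m: m["port"] == 443, lambda m: m["length"] < 1024],
    truth_table={(True, True): True},
)
incoming = {"port": 443, "length": 512}
print("pass" if rule(incoming) else "drop")  # the processed-message result
```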
Shared automated execution platform in cloud
According to some embodiments, a system and method comprise a plurality of automation tools and a shared automation module coupled to the plurality of automation tools, the shared automation module including: a computer processor; and a computer memory, coupled to the computer processor, storing instructions that, when executed by the computer processor, cause the shared automation module to: receive a selection of a first automation tool of the plurality of automation tools; receive a selection of a second automation tool of the plurality of automation tools; execute the first automation tool to generate a first automation tool output; store the first automation tool output; and execute the second automation tool using the stored first automation tool output to generate a second automation tool output. Numerous other aspects are provided.
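A minimal sketch of the chaining behavior follows, assuming each automation tool is callable and the shared module stores the intermediate output; the tool registry and cloud deployment details are left out, and all names are hypothetical:

```python
class SharedAutomationModule:
    def __init__(self, tools):
        self.tools = tools            # plurality of automation tools, by name
        self.stored_output = None

    def run_chain(self, first, second, initial_input):
        # Execute the first selected tool, store its output, then feed the
        # stored output into the second selected tool.
        self.stored_output = self.tools[first](initial_input)
        return self.tools[second](self.stored_output)

module = SharedAutomationModule({
    "extract": lambda text: [line.strip() for line in text.splitlines()],
    "count":   lambda rows: len(rows),
})
print(module.run_chain("extract", "count", "a\nb\nc"))  # -> 3
```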