G06F9/544

DISTRIBUTED GEOMETRY

Systems, apparatuses, and methods for performing geometry work in parallel on multiple chiplets are disclosed. A system includes a chiplet processor with multiple chiplets for performing graphics work in parallel. Instead of relying on a central distributor to distribute work to the individual chiplets, each chiplet determines on its own the work to be performed. For example, during a draw call, each chiplet calculates which portions of one or more index buffer(s), corresponding to one or more graphics object(s) of the draw call, it should fetch and process. Once the portions are calculated, each chiplet fetches and processes the corresponding indices. The chiplets perform these tasks in parallel and independently of each other. When the index buffer(s) have been processed, one or more subsequent step(s) in the graphics rendering process are performed in parallel by the chiplets.
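The key idea, that every chiplet independently computes the same partition of the index buffer and takes its own slice, can be sketched as follows. This is a minimal illustration, not the patented method: the function name, the block partitioning scheme, and splitting on triangle boundaries are all assumptions.

```python
# Hypothetical sketch: each chiplet runs this same calculation locally,
# with no central distributor, and arrives at a disjoint slice of the
# index buffer to fetch and process.

def chiplet_slice(chiplet_id: int, num_chiplets: int, num_indices: int,
                  indices_per_prim: int = 3) -> range:
    """Return the index range this chiplet fetches, split on primitive
    boundaries so no triangle is divided across chiplets."""
    num_prims = num_indices // indices_per_prim
    # Even block partition of primitives; the remainder goes to the
    # lowest-numbered chiplets.
    base, rem = divmod(num_prims, num_chiplets)
    start_prim = chiplet_id * base + min(chiplet_id, rem)
    count = base + (1 if chiplet_id < rem else 0)
    return range(start_prim * indices_per_prim,
                 (start_prim + count) * indices_per_prim)

# All chiplets evaluate this in parallel; together the slices cover the
# whole index buffer exactly once.
slices = [chiplet_slice(i, 4, 30) for i in range(4)]
```

Because the computation depends only on the chiplet's own ID and shared draw-call parameters, no inter-chiplet communication is needed to agree on the partition.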

System and method for detecting malicious scripts

An endpoint system receives a target file for evaluation for malicious scripts. The original content of the target file is normalized and stored in a normalized buffer. Tokens in the normalized buffer are translated to symbols, which are stored in a tokenized buffer. Strings in the normalized buffer are stored in a string buffer. Tokens that are indicative of syntactical structure of the normalized content are extracted from the normalized buffer and stored in a structure buffer. The content of the tokenized buffer and counts of tokens represented as symbols in the tokenized buffer are compared against heuristic rules indicative of malicious scripts. The contents of the tokenized buffer and string buffer are compared against signatures of malicious scripts. The contents of the tokenized buffer, string buffer, and structure buffer are input to a machine learning model that has been trained to detect malicious scripts.
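The normalize/tokenize/compare pipeline can be sketched in miniature. Everything concrete here is an assumption for illustration: the normalization steps, the symbol alphabet, and the toy heuristic rule are not taken from the patent.

```python
import re

def analyze(script: str):
    """Toy pipeline: normalize content, fill a string buffer and a
    tokenized buffer, then compare symbol counts to a heuristic rule."""
    # Normalize the original content (lowercasing and whitespace folding
    # are assumed normalizations).
    normalized = re.sub(r"\s+", " ", script.strip().lower())

    # Pull quoted string literals into a separate string buffer.
    string_buffer = re.findall(r'"[^"]*"|\'[^\']*\'', normalized)

    # Translate tokens into single-character symbols in a tokenized buffer.
    symbol_map = {"eval": "E", "unescape": "U", "fromcharcode": "F"}
    tokens = re.findall(r"[a-z_][a-z0-9_]*", normalized)
    tokenized_buffer = [symbol_map.get(t, "I") for t in tokens]  # I = other identifier

    # Compare symbol counts against a toy heuristic rule: repeated use of
    # eval/unescape is treated as indicative of an obfuscated script.
    counts = {s: tokenized_buffer.count(s) for s in set(tokenized_buffer)}
    suspicious = counts.get("E", 0) + counts.get("U", 0) >= 2
    return tokenized_buffer, string_buffer, suspicious
```

A real system would additionally feed the buffers to signature matching and a trained model, as the abstract describes; the heuristic stage above is just the count-based comparison.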

PRIORITY ENCODER-BASED TECHNIQUES FOR COMPUTING THE MINIMUM OR THE MAXIMUM OF MULTIPLE VALUES

In various embodiments, the maximum or minimum of multiple input values is determined. For each of a set of possible values, a corresponding detection result is set to indicate whether at least one of the input values matches the possible value. The detection results are used to ascertain the maximum or minimum of the multiple input values.
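A software model of the technique makes the two stages visible: a bank of parallel comparators produces one detection bit per possible value, and a priority encoder then finds the lowest (minimum) or highest (maximum) set bit. The function below is an illustrative sketch, not hardware.

```python
def minmax_by_detection(values, num_possible=256):
    """For each possible value, set a detection result if any input
    matches it; then priority-encode the detection bitmap from each end
    to obtain the minimum and maximum of the inputs."""
    # One detection result per possible value (parallel comparators in HW).
    detected = [any(v == p for v in values) for p in range(num_possible)]
    # A priority encoder scanning from 0 yields the lowest set bit (min);
    # scanning from the top yields the highest set bit (max).
    minimum = next(p for p, d in enumerate(detected) if d)
    maximum = next(p for p in reversed(range(num_possible)) if detected[p])
    return minimum, maximum
```

Note the cost structure this models: the comparator array scales with the number of *possible* values rather than the number of inputs, which is the trade that makes a priority encoder attractive in hardware.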

SYSTEM AND METHOD OF CONFIGURING A NON-VOLATILE STORAGE DEVICE

In one or more embodiments, one or more systems, one or more methods, and/or one or more processes may determine that a staged job needs to be executed by a baseboard management controller (BMC) while an information handling system (IHS) is held in a power-on self-test; create a hybrid job associated with the staged job; reboot the IHS; launch an IHS firmware application in a pre-boot IHS firmware environment; provide, to the BMC, a command to execute a first portion of the hybrid job; obtain, by the BMC, an authentication key; provide, by the BMC, the authentication key to the non-volatile storage device; execute, by the BMC, the first portion of the hybrid job to configure the non-volatile storage device; and execute, by the IHS firmware application, a second portion of the hybrid job to poll the baseboard management controller for a result status of configuring the non-volatile storage device.
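The split between the two portions of the hybrid job can be sketched with a stand-in BMC object: the first portion (authentication and configuration) runs on the BMC, and the second portion is a firmware-side polling loop for the result status. All class and method names below are invented for illustration; the real interfaces are not specified in the abstract.

```python
import time

class FakeBMC:
    """Stand-in for a baseboard management controller (names invented)."""
    def __init__(self):
        self._status = None

    def execute_first_portion(self, auth_key: str):
        # The BMC presents the authentication key to the non-volatile
        # storage device and applies the configuration (simulated here
        # as an immediate result).
        self._status = "SUCCESS" if auth_key else "AUTH_FAILED"

    def result_status(self):
        return self._status

def run_hybrid_job(bmc: FakeBMC, auth_key: str, timeout_s: float = 1.0) -> str:
    """Second portion, as the pre-boot firmware application would run it:
    poll the BMC for the result of configuring the storage device."""
    bmc.execute_first_portion(auth_key)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = bmc.result_status()
        if status is not None:
            return status
        time.sleep(0.01)          # back off between polls
    return "TIMEOUT"
```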

LOW LATENCY AUGMENTED REALITY ARCHITECTURE FOR CAMERA ENABLED DEVICES

Systems and methods are disclosed that provide a low latency augmented reality architecture for camera enabled devices. Systems and methods of communication between system components that use a hybrid communication protocol are presented. Techniques include communications between system components that involve one-way transactions. A hardware message controller is disclosed that controls out-buffers and in-buffers to facilitate the hybrid communication protocol.
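A toy model of the message controller clarifies why one-way transactions reduce latency: the sender enqueues into an out-buffer and returns immediately, never blocking on a reply. The class below is an illustrative assumption, not the disclosed hardware.

```python
from collections import deque

class MessageController:
    """Toy model of a hardware message controller that owns an out-buffer
    and an in-buffer and supports one-way (fire-and-forget) transactions.
    All names are assumptions for illustration."""
    def __init__(self):
        self.out_buffer = deque()
        self.in_buffer = deque()

    def post(self, message):
        # One-way transaction: enqueue and return without waiting.
        self.out_buffer.append(message)

    def transfer(self):
        # Models the hardware draining the out-buffer into the peer's
        # in-buffer (here, the same controller's in-buffer for brevity).
        while self.out_buffer:
            self.in_buffer.append(self.out_buffer.popleft())

    def poll(self):
        # Receiver drains its in-buffer whenever it is ready.
        return self.in_buffer.popleft() if self.in_buffer else None
```

In a "hybrid" protocol, such one-way posts would coexist with conventional request/response exchanges, with latency-critical camera data taking the one-way path.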

MONITORING EXECUTION OF APPLICATION SCHEDULES IN COMPUTING SYSTEMS

One or more embodiments of the present disclosure relate to monitoring execution of runnables that may be executed by a computing system, the executing being based at least on a schedule. The monitoring may include one or more of: monitoring timing of execution of the runnables, monitoring one or more sequences of execution of the runnables, or monitoring health of at least a portion of the computing system executing the runnables. Additionally or alternatively, one or more embodiments may relate to determining compliance with respect to one or more execution constraints based at least in part on the monitoring.
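The two monitored properties named in the abstract, timing and sequence, map naturally onto simple constraint checks over an execution trace. The function below is a minimal sketch under assumed constraint forms (per-runnable start deadlines and a single expected order); the disclosure's actual constraints may differ.

```python
def check_schedule(events, expected_order, deadlines):
    """Check observed runnable executions against a schedule.
    events: list of (runnable_name, start_time) in observed order;
    deadlines: runnable_name -> latest allowed start time."""
    names = [n for n, _ in events]
    # Sequence constraint: runnables must execute in the scheduled order.
    order_ok = names == expected_order
    # Timing constraint: every runnable must start by its deadline.
    timing_ok = all(t <= deadlines[n] for n, t in events)
    return {"order_ok": order_ok, "timing_ok": timing_ok,
            "compliant": order_ok and timing_ok}
```

A health-monitoring dimension (the third item in the abstract) would add liveness checks on the executing hardware alongside these trace checks.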

CIRCUITRY AND METHODS FOR ACCELERATING STREAMING DATA-TRANSFORMATION OPERATIONS
20230100586 · 2023-03-30

Systems, methods, and apparatuses for accelerating streaming data-transformation operations are described. In one example, a system on a chip (SoC) includes a hardware processor core comprising a decoder circuit to decode an instruction comprising an opcode into a decoded instruction, the opcode to indicate an execution circuit is to generate a single descriptor and cause the single descriptor to be sent to an accelerator circuit coupled to the hardware processor core, and the execution circuit to execute the decoded instruction according to the opcode; and the accelerator circuit comprising a work dispatcher circuit and one or more work execution circuits to, in response to the single descriptor sent from the hardware processor core: when a field of the single descriptor is a first value, cause a single job to be sent by the work dispatcher circuit to a single work execution circuit of the one or more work execution circuits to perform an operation indicated in the single descriptor to generate an output, and when the field of the single descriptor is a second, different value, cause a plurality of jobs to be sent by the work dispatcher circuit to the one or more work execution circuits to perform the operation indicated in the single descriptor to generate the output as a single stream.
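The dispatcher's branch on the descriptor field can be modeled directly: one field value yields a single job on a single work execution circuit, the other splits the input across several circuits whose outputs are reassembled into one stream. The descriptor layout, field semantics, and the placeholder transformation below are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    opcode: str
    data: bytes
    split_field: int   # 0 -> single job; nonzero -> split into jobs (assumed)

def transform(data: bytes) -> bytes:
    # Placeholder "streaming data transformation" (byte-wise upper-casing
    # stands in for e.g. compression or encryption).
    return data.upper()

def dispatch(desc: Descriptor, num_engines: int) -> bytes:
    """Toy work dispatcher: route one descriptor to one or many work
    execution circuits, always producing a single output stream."""
    if desc.split_field == 0:
        # Single job to a single work execution circuit.
        return transform(desc.data)
    # Split the input into per-engine chunks, process each (in parallel on
    # real hardware, sequentially here), then concatenate the partial
    # outputs into a single stream.
    chunk = -(-len(desc.data) // num_engines)   # ceiling division
    parts = [desc.data[i:i + chunk] for i in range(0, len(desc.data), chunk)]
    return b"".join(transform(p) for p in parts)
```

Note that both paths produce the same output for this (chunk-independent) transformation, which is the point of the "single stream" requirement: splitting is an internal throughput optimization invisible to the consumer.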

SYSTEM AND METHODS FOR EFFICIENT EXECUTION OF A COLLABORATIVE TASK IN A SHADER SYSTEM

Methods and systems are disclosed for executing a collaborative task in a shader system. Techniques disclosed include receiving, by the system, input data and computing instructions associated with the collaborative task, as well as a configuration setting that causes the system to operate in a takeover mode. The system then launches, exclusively in one workgroup processor, a workgroup including wavefronts configured to execute the collaborative task.
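The effect of the takeover setting on wavefront placement can be sketched as a scheduling decision: in takeover mode every wavefront of the workgroup lands on one workgroup processor (WGP), so they can collaborate through shared local resources; otherwise wavefronts spread across WGPs. The function and the round-robin fallback are illustrative assumptions.

```python
def launch_collaborative_task(wavefronts, num_wgps, takeover: bool):
    """Assign a workgroup's wavefronts to workgroup processors.
    takeover=True models the assumed takeover mode: the whole workgroup
    launches exclusively on a single WGP. Names are illustrative."""
    if takeover:
        # Exclusive launch: all wavefronts on WGP 0, which is reserved
        # for this workgroup for the duration of the task.
        return {wf: 0 for wf in wavefronts}
    # Normal mode (assumed): spread wavefronts round-robin across WGPs.
    return {wf: i % num_wgps for i, wf in enumerate(wavefronts)}
```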

CONTAINERIZED POINT-OF-SALE (POS) SYSTEM AND TECHNIQUE FOR OPERATING

A Point-Of-Sale (POS) processing environment is encapsulated within a container running on a first Operating System (OS) of a transaction terminal. Peripheral drivers for connected peripherals run within a platform processing environment on a second and different OS of the transaction terminal. A socket interface is provided for communication between transaction applications of the POS processing environment and the peripheral drivers of the platform processing environment, allowing the transaction applications to access and control the connected peripherals during transactions performed at the transaction terminal.
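The socket interface between the containerized transaction application and a peripheral driver can be sketched with a socket pair standing in for the cross-OS connection. The newline-delimited command protocol and the driver behavior below are assumptions for illustration.

```python
import socket

# A socketpair stands in for the socket interface between the POS
# container (app side) and the platform environment (driver side).
app_sock, driver_sock = socket.socketpair()

def peripheral_driver(command: bytes) -> bytes:
    # Platform side: service a command for a connected peripheral
    # (e.g. a receipt printer) and return a status (behavior assumed).
    return b"OK:" + command

# The transaction application sends a peripheral command over the socket...
app_sock.sendall(b"PRINT receipt-123\n")
# ...the driver side receives and services it...
cmd = driver_sock.recv(1024).rstrip(b"\n")
driver_sock.sendall(peripheral_driver(cmd))
# ...and the application reads back the peripheral's status.
status = app_sock.recv(1024)
```

In the real system the two endpoints live on different operating systems, so a network or virtio-style socket would replace the local pair, but the application-visible interface is the same.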

Address translation data invalidation

A data processing system (2) including one or more translation buffers (16, 18, 20) storing address translation data executes translation buffer invalidation instructions TLBI within respective address translation contexts VMID, ASID, X. Translation buffer invalidation signals generated as a consequence of executing the translation buffer invalidation instructions are broadcast to the respective translation buffers and specify the address translation context of the translation buffer invalidation instruction that was executed. The address translation context specified within a translation buffer invalidation signal is used to gate whether or not a translation buffer which is a potential target for the invalidation is flushed upon receipt of that signal. The address translation context data provided within the translation buffer invalidation signals may also be used to control whether or not local memory transactions for a local transactional memory access are aborted upon receipt of the translation buffer invalidation signals.
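The context-gating behavior can be modeled with a toy TLB whose entries are tagged with their translation context: a broadcast invalidation carries a context, and only entries tagged with that context are flushed, while entries belonging to other contexts survive. The class layout and field names are illustrative assumptions (the X qualifier from the abstract is omitted for brevity).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    vmid: int   # virtual machine identifier
    asid: int   # address space identifier

class TranslationBuffer:
    """Toy translation buffer that gates invalidation signals on the
    address translation context carried in the signal."""
    def __init__(self):
        self.entries = {}   # virtual address -> (Context, physical address)

    def insert(self, va: int, ctx: Context, pa: int):
        self.entries[va] = (ctx, pa)

    def on_tlbi(self, signal_ctx: Context):
        # Gate: flush only entries tagged with the signalled context;
        # entries for other contexts are untouched by the broadcast.
        self.entries = {va: (ctx, pa)
                        for va, (ctx, pa) in self.entries.items()
                        if ctx != signal_ctx}
```

The same context comparison could gate the transactional-memory side mentioned in the abstract: a pending local transaction would be aborted only when the signalled context matches its own.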