GRAPHICS PROCESSING
Patent classification: G06F9/3869
There is disclosed an instruction that can be included in a graphics processor shader program to be executed by a group of execution threads. When executed, the instruction causes a group of execution lanes to enter an ‘active’ (e.g. SIMD) execution state in which processing operations can be performed using the group of plural execution lanes together. The processing operations are then performed using the execution lanes together in the active state, and the execution lanes are allowed or caused to return to their prior execution state once the processing operations have finished.
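As a rough illustration of the mechanism described above, here is a minimal Python sketch assuming a simple saved-state lane model; the names (`ExecutionLane`, `force_active`, `run_group_operation`) are invented for illustration and are not taken from the patent.

```python
from enum import Enum

class LaneState(Enum):
    ACTIVE = "active"      # lane participates in SIMD execution
    INACTIVE = "inactive"  # lane is masked off (e.g. a divergent branch)

class ExecutionLane:
    def __init__(self, lane_id, state):
        self.lane_id = lane_id
        self.state = state
        self.saved_state = None

def force_active(lanes):
    """Model of the disclosed instruction: save each lane's current
    execution state and force the whole group into the active state."""
    for lane in lanes:
        lane.saved_state = lane.state
        lane.state = LaneState.ACTIVE

def restore(lanes):
    """Return each lane to the execution state it had before the
    instruction executed."""
    for lane in lanes:
        lane.state = lane.saved_state
        lane.saved_state = None

def run_group_operation(lanes, values, op):
    """Perform an operation using all execution lanes of the group together."""
    force_active(lanes)
    results = [op(v) for v in values]  # every lane participates
    restore(lanes)
    return results

lanes = [ExecutionLane(i, LaneState.INACTIVE if i % 2 else LaneState.ACTIVE)
         for i in range(4)]
print(run_group_operation(lanes, [1, 2, 3, 4], lambda x: x * x))
print([lane.state for lane in lanes])  # prior states restored
```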
CI/CD PIPELINE TO CONTAINER CONVERSION
A method includes receiving, by a processing device, a definition of a CI/CD pipeline for executing a set of stages of the CI/CD pipeline. The CI/CD pipeline is associated with a first computer system. The method further includes converting, by the processing device, the definition into a container image file, and causing, by the processing device using the container image file, a second computer system to implement a container executing the CI/CD pipeline.
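A minimal sketch of one way such a conversion could work, assuming a dictionary-based pipeline definition and a Dockerfile as the container image file; the stage names, commands, and base image are all invented for illustration.

```python
# Hypothetical pipeline definition; stage names and commands are
# illustrative, not taken from the patent.
pipeline = {
    "name": "build-and-test",
    "stages": [
        {"name": "build", "commands": ["make build"]},
        {"name": "test", "commands": ["make test"]},
    ],
}

def pipeline_to_dockerfile(defn, base_image="ubuntu:22.04"):
    """Convert a CI/CD pipeline definition into a container image
    file (here a Dockerfile) whose build steps execute the stages."""
    lines = [f"FROM {base_image}", "WORKDIR /pipeline", "COPY . ."]
    for stage in defn["stages"]:
        lines.append(f"# stage: {stage['name']}")
        for cmd in stage["commands"]:
            lines.append(f"RUN {cmd}")
    return "\n".join(lines) + "\n"

with open("Dockerfile", "w") as f:
    f.write(pipeline_to_dockerfile(pipeline))
# A second computer system could then build and run the image, e.g.
# `docker build -t pipeline . && docker run pipeline`.
```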
Pipeline flattener with conditional triggers
A semiconductor device comprising a processor having a pipelined architecture and a pipeline flattener, and a method for operating a pipeline flattener in a semiconductor device, are provided. The processor comprises a pipeline having a plurality of pipeline stages and a plurality of pipeline registers that are coupled between the pipeline stages. The pipeline flattener comprises a plurality of trigger registers for storing a trigger, wherein the trigger registers are coupled between the pipeline stages.
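A toy Python model of a pipeline flattener under these assumptions: each pipeline register has a parallel trigger register, so a trigger raised when an instruction enters the pipe travels with it and emerges aligned with that instruction at retirement. The class and method names are illustrative, not from the patent.

```python
class Pipeline:
    """Toy pipelined processor with a pipeline flattener: each data
    (pipeline) register is paired with a trigger register coupled
    between the same stages."""

    def __init__(self, num_stages):
        self.data_regs = [None] * num_stages      # pipeline registers
        self.trigger_regs = [False] * num_stages  # trigger registers

    def clock(self, new_instr, new_trigger):
        """Advance one cycle; return (retired instruction, its trigger)."""
        retired = self.data_regs[-1]
        retired_trigger = self.trigger_regs[-1]
        # shift both register chains by one stage in lockstep
        self.data_regs = [new_instr] + self.data_regs[:-1]
        self.trigger_regs = [new_trigger] + self.trigger_regs[:-1]
        return retired, retired_trigger

pipe = Pipeline(num_stages=3)
stream = [("i0", False), ("i1", True), ("i2", False),
          (None, False), (None, False), (None, False)]
for instr, trig in stream:
    retired, retired_trig = pipe.clock(instr, trig)
    if retired is not None:
        print(retired, "triggered" if retired_trig else "")
```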
PERFORMING RESYNCHRONIZATION JOBS IN A DISTRIBUTED STORAGE SYSTEM BASED ON A PARALLELISM POLICY
The disclosure herein describes performing resynchronization (“resync”) jobs in a distributed storage system based on a parallelism policy. A resync job is obtained from a queue and input/output (I/O) resources that will be used during execution of the resync job are identified. Available bandwidth slots of each I/O resource of the identified I/O resources are determined. The parallelism policy is applied to the identified I/O resources and the available bandwidth slots. Based on the application of the parallelism policy, a bottleneck resource of the I/O resources is determined and a parallel I/O value is calculated based on the available bandwidth slots of the bottleneck resource, wherein the parallel I/O value indicates a quantity of I/O tasks that can be performed in parallel. The resync job is executed using the I/O resources, the execution of the resync job including performance of I/O tasks in parallel based on the parallel I/O value.
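A minimal sketch of the parallelism policy as described: the bottleneck is the I/O resource with the fewest available bandwidth slots, and the parallel I/O value is taken from that resource. The resource names, slot counts, and task bodies are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative I/O resource model; names and numbers are invented.
resources = {
    "source-disk": {"available_slots": 8},
    "network":     {"available_slots": 3},
    "dest-disk":   {"available_slots": 6},
}

def apply_parallelism_policy(resources):
    """Find the bottleneck resource (fewest available bandwidth slots)
    and derive the parallel I/O value from it: the number of I/O tasks
    the resync job may perform in parallel."""
    bottleneck = min(resources, key=lambda r: resources[r]["available_slots"])
    parallel_io = resources[bottleneck]["available_slots"]
    return bottleneck, parallel_io

def do_io_task(block):
    return f"resynced block {block}"

bottleneck, parallel_io = apply_parallelism_policy(resources)
print(f"bottleneck={bottleneck}, parallel I/O value={parallel_io}")

# Execute the resync job with at most `parallel_io` tasks in flight.
with ThreadPoolExecutor(max_workers=parallel_io) as pool:
    for result in pool.map(do_io_task, range(10)):
        print(result)
```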
Multiple dies hardware processors and methods
- Nevine Nassif
- Yen-Cheng Liu
- Krishnakanth V. Sistla
- Gerald Pasdast
- Siva Soumya Eachempati
- Tejpal Singh
- Ankush Varma
- Mahesh K. Kumashikar
- Srikanth Nimmagadda
- Carleton L. Molnar
- Vedaraman Geetha
- Jeffrey D. Chamberlain
- William R. Halleck
- George Z. Chrysos
- John R. Ayers
- Dheeraj R. Subbareddy
Methods and apparatuses relating to hardware processors with multiple interconnected dies are described. In one embodiment, a hardware processor includes a plurality of physically separate dies, and an interconnect to electrically couple the plurality of physically separate dies together. In another embodiment, a method to create a hardware processor includes providing a plurality of physically separate dies, and electrically coupling the plurality of physically separate dies together with an interconnect.
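A toy structural model of what the abstract describes, in Python for consistency with the other sketches; all class and field names are invented, and real dies are of course hardware, not objects.

```python
from dataclasses import dataclass, field

@dataclass
class Die:
    """One physically separate die of the processor."""
    die_id: int
    cores: int

@dataclass
class Interconnect:
    links: list = field(default_factory=list)

    def couple(self, a: Die, b: Die):
        """Electrically couple two physically separate dies."""
        self.links.append((a.die_id, b.die_id))

def create_processor(num_dies, cores_per_die):
    """Provide a plurality of dies and couple them together with an
    interconnect, mirroring the method the abstract describes."""
    dies = [Die(i, cores_per_die) for i in range(num_dies)]
    interconnect = Interconnect()
    for a, b in zip(dies, dies[1:]):
        interconnect.couple(a, b)
    return dies, interconnect

dies, interconnect = create_processor(num_dies=4, cores_per_die=8)
print(interconnect.links)  # [(0, 1), (1, 2), (2, 3)]
```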
Modular gated multiplier circuitry and multiplication technique
Various implementations described herein are related to a device having multiplier circuitry with an array of summation result cells that holds summation bit values for shifted arrays added together. The device may include latch circuitry having one or more gated elements disposed between the summation result cells, and the gated elements may be adapted to provide a portion of the summation bit values based on a gating signal.
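A rough software analogue of the described circuitry, assuming a shift-and-add multiplier: shifted copies of one operand are added into an array of summation result cells, and gated elements expose a portion of the summation bit values only while a gating signal is asserted. All names are illustrative.

```python
def shift_add_multiply(a, b, width=8):
    """Shift-and-add multiplication: for each set bit of `b`, add a
    shifted array of `a`'s bits into the summation result cells."""
    cells = [0] * (2 * width)  # one summation result cell per bit
    for shift in range(width):
        if (b >> shift) & 1:
            carry = 0
            for i in range(2 * width):
                bit_a = (a >> (i - shift)) & 1 if shift <= i < shift + width else 0
                total = cells[i] + bit_a + carry
                cells[i] = total & 1   # summation bit value
                carry = total >> 1     # carry to the neighbouring cell
    return cells

def read_gated(cells, gating_signal, lo, hi):
    """Model of the gated latch elements between summation cells: a
    portion of the bit values is provided only when the gating signal
    is asserted; otherwise the gated elements hold their outputs."""
    if not gating_signal:
        return None
    return cells[lo:hi]

cells = shift_add_multiply(13, 11)
print(sum(bit << i for i, bit in enumerate(cells)))  # 143
print(read_gated(cells, True, 0, 8))  # low byte of the product
```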
Pipeline including separate hardware data paths for different instruction types
A first processing element is implemented in a stage of a pipeline and configured to execute an instruction. A first array of multiplexers is to provide information associated with the instruction to the first processing element in response to the instruction being in a first set of instructions. A second array of multiplexers is to provide information associated with the instruction to the first processing element in response to the instruction being in a second set of instructions. A control unit is to gate at least one of power or a clock signal provided to the first array of multiplexers in response to the instruction being in the second set.
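A toy model of the described routing and gating, with invented instruction sets standing in for the first and second sets of instructions: the control unit gates the multiplexer array that the current instruction type does not use.

```python
SET_A = {"add", "sub"}  # first set of instructions (illustrative)
SET_B = {"mul", "mac"}  # second set of instructions (illustrative)

class MuxArray:
    def __init__(self, name):
        self.name = name
        self.gated = False  # True = power/clock gated by the control unit

    def select(self, operands):
        assert not self.gated, f"{self.name} is gated off"
        return operands

def control_unit(instr, mux_a, mux_b):
    """Gate the mux array that the current instruction does not use."""
    mux_a.gated = instr in SET_B
    mux_b.gated = instr in SET_A

def issue(instr, operands, mux_a, mux_b):
    control_unit(instr, mux_a, mux_b)
    mux = mux_a if instr in SET_A else mux_b
    return f"PE executes {instr} on {mux.select(operands)} via {mux.name}"

mux_a, mux_b = MuxArray("mux-array-A"), MuxArray("mux-array-B")
print(issue("add", (1, 2), mux_a, mux_b))  # mux-array-B is gated
print(issue("mul", (3, 4), mux_a, mux_b))  # mux-array-A is gated
```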
UNIFIED AUTOMATION OF APPLICATION DEVELOPMENT
Unified automation of application development and delivery is provided. An automation pipeline execution coordinator may define a pipeline specification that includes actions to be performed, a triggering event definition and specification for determining execution context. The coordinator may concurrently detect triggering events for multiple pipelines matching the pipeline specification, and responsive to the detecting, determine execution contexts for the pipelines. The coordinator may then execute the multiple pipelines, where execution may proceed independently for pipelines with differing execution contexts. For pipelines sharing an execution context, execution of various actions of the respective pipelines may be coordinated. Execution context may be determined according to the specification for determining execution context, which may include an overridable default specification that determines context by locations of source data related to the triggering event. Pipeline specifications may be defined using pipeline specification templates and input from users obtained via various user interfaces.
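A minimal sketch of the coordination logic as described, assuming the overridable default context specification (context derived from the location of the triggering event's source data); the pipeline records and context values are invented for illustration.

```python
from collections import defaultdict

# Invented triggered-pipeline records; each carries the location of
# the source data related to its triggering event.
triggered = [
    {"pipeline": "build-svc-a", "source": "repo-1"},
    {"pipeline": "deploy-svc-a", "source": "repo-1"},
    {"pipeline": "build-svc-b", "source": "repo-2"},
]

def default_context(event):
    """Overridable default: execution context is determined by the
    location of the source data related to the triggering event."""
    return event["source"]

def execute(coordinated_group):
    for event in coordinated_group:
        print("  running", event["pipeline"])

def coordinate(triggered, context_of=default_context):
    """Pipelines sharing an execution context are coordinated (run
    together, in order); groups with differing contexts proceed
    independently of one another."""
    groups = defaultdict(list)
    for event in triggered:
        groups[context_of(event)].append(event)
    for ctx, group in groups.items():
        print(f"context {ctx}: coordinated execution")
        execute(group)

coordinate(triggered)
```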
APPARATUS AND METHODS EMPLOYING A SHARED READ PORT REGISTER FILE
In some implementations, a processor includes a plurality of parallel instruction pipes and a register file that includes at least one shared read port configured to be shared across multiple pipes of the plurality of parallel instruction pipes. Control logic controls multiple parallel instruction pipes to read from the at least one shared read port. In certain examples, the at least one shared register file read port is coupled as a single read port for one of the parallel instruction pipes and as a shared register file read port for a plurality of other parallel instruction pipes.
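A toy model of a shared read port under one reading of the abstract: control logic drives the shared port once, and several parallel pipes consume the same value in a cycle. All names are illustrative.

```python
class RegisterFile:
    """Toy register file with a single shared read port. One pipe may
    use it as its only read port while other pipes share it under the
    control logic's schedule."""

    def __init__(self, size=16):
        self.regs = [0] * size
        self.shared_port_addr = None  # the one shared read port

    def drive_shared_port(self, addr):
        """Control logic sets the address on the shared read port."""
        self.shared_port_addr = addr

    def read_shared(self):
        return self.regs[self.shared_port_addr]

rf = RegisterFile()
rf.regs[5] = 42

# Control logic lets several parallel pipes read from the shared
# port in one cycle: the port is driven once, each pipe latches it.
rf.drive_shared_port(5)
pipe_values = {f"pipe{i}": rf.read_shared() for i in range(3)}
print(pipe_values)  # every pipe observes r5 == 42
```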