Patent classifications
G06F2209/484
Dynamic command scheduling for storage system
The subject technology provides for managing a data storage system. Commands are identified as either a first command type or a second command type. The commands identified as the first command type are assigned to a first queue, and the commands identified as the second command type are assigned to a second queue. After the commands from the first queue and the commands from the second queue are processed based on a scheduling ratio over a predetermined period of time, a write amplification factor, a number of host read commands, and a number of host write commands during the predetermined period of time are determined. The scheduling ratio is updated based on the write amplification factor, the number of host read commands, the number of host write commands, and a predetermined scheduling ratio factor. Subsequent commands are processed from the first queue and the second queue based on the updated scheduling ratio.
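The abstract does not give the update formula. A minimal Python sketch of the two-queue scheduler, in which the update rule, the queue roles, and all names are illustrative assumptions rather than the patent's method:

```python
from collections import deque

class DualQueueScheduler:
    """Sketch of the two-queue scheduler described above."""

    def __init__(self, ratio_factor=2.0, initial_ratio=1.0):
        self.read_queue = deque()   # first command type (assumed: host reads)
        self.write_queue = deque()  # second command type (assumed: host writes)
        self.ratio = initial_ratio  # target reads served per write served
        self.ratio_factor = ratio_factor  # predetermined scheduling ratio factor

    def update_ratio(self, write_amp, host_reads, host_writes):
        # Assumed rule: weight write service by the measured write
        # amplification, scaled by the predetermined ratio factor.
        if host_writes > 0:
            self.ratio = self.ratio_factor * host_reads / (host_writes * write_amp)

    def next_command(self, reads_served, writes_served):
        # Serve reads until the read/write service ratio reaches the target,
        # then serve a write; empty queues fall through to the other side.
        if self.read_queue and (not self.write_queue or
                                reads_served < self.ratio * max(writes_served, 1)):
            return self.read_queue.popleft()
        if self.write_queue:
            return self.write_queue.popleft()
        return None
```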
TECHNOLOGIES FOR PROVIDING EFFICIENT MIGRATION OF SERVICES AT A CLOUD EDGE
Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
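A minimal sketch of the prioritized migration step, assuming a simple numeric priority per service; the types, names, and ordering heuristic are illustrative, not the patent's accelerator circuitry:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    priority: int   # assumed: higher value = more critical
    state: bytes    # data utilized by the service

def migrate(services, send_to_peer, should_migrate):
    """Sketch of prioritized service migration between edge stations."""
    if not should_migrate():
        return
    # Send the most critical services first so the second edge station
    # can resume them with the least interruption.
    for svc in sorted(services, key=lambda s: s.priority, reverse=True):
        send_to_peer(svc.name, svc.state)
```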
Interleave-scheduling of correlated tasks and backfill-scheduling of depender tasks into a slot of dependee tasks
Methods and arrangements for assembling tasks in a progressive queue. At least one job is received, each job comprising a dependee set of tasks and a depender set of at least one task. The dependee tasks are assembled in a progressive queue for execution, and the dependee tasks are executed. Other variants and embodiments are broadly contemplated herein.
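A minimal sketch of assembling a progressive queue and backfilling depender tasks into slack slots left by dependee tasks; the task representation and the slack heuristic are assumptions:

```python
from collections import deque

def build_progressive_queue(jobs):
    """Assemble dependee tasks into a progressive queue and backfill
    depender tasks into idle slots (e.g., while a dependee waits on I/O)."""
    queue, backfill = deque(), deque()
    for job in jobs:
        queue.extend(job["dependees"])
        backfill.extend(job["dependers"])
    schedule = []
    while queue:
        task = queue.popleft()
        schedule.append(task)
        # Backfill: interleave a ready depender into this dependee's slot.
        if task.get("has_slack") and backfill:
            schedule.append(backfill.popleft())
    schedule.extend(backfill)  # remaining dependers run after all dependees
    return schedule
```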
User configurable task triggers
Systems and processes for user configurable task triggers are provided. In one example, at least one user input, including a selection of at least one condition of a plurality of conditions and a selection of at least one task of a plurality of tasks, is received. Stored context data corresponding to an electronic device is received. A determination is made as to whether the stored context data indicates an occurrence of the at least one selected condition. In response to determining that the stored context data indicates an occurrence of the at least one selected condition, the at least one selected task associated with the at least one selected condition is performed.
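A minimal sketch of the condition-to-task trigger evaluation; the trigger structure and the example condition and task are assumptions:

```python
def evaluate_triggers(triggers, context):
    """Run every user-configured task whose selected conditions the stored
    context data indicates have occurred."""
    performed = []
    for trigger in triggers:
        if all(cond(context) for cond in trigger["conditions"]):
            for task in trigger["tasks"]:
                task()          # perform the selected task
            performed.append(trigger)
    return performed

# Hypothetical usage: perform a task when the device context says "home".
triggers = [{
    "conditions": [lambda ctx: ctx.get("location") == "home"],
    "tasks": [lambda: print("Turning on lights")],
}]
evaluate_triggers(triggers, {"location": "home"})
```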
Data Processing Method and Computer Device
A data processing method implemented by a computer device includes generating a target task that is either a buffer application task or a buffer release task. When the target task is the buffer application task, a first buffer corresponding to the buffer application task is used when a second task is executed; when the target task is the buffer release task, a second buffer corresponding to the buffer release task is used when a first task is executed. The method further includes obtaining a buffer entry corresponding to the target task after a preceding task of the target task is executed and before a successive task of the target task is executed, where the buffer entry includes a memory size, a memory location, and a memory address of the buffer corresponding to the target task, and executing the target task to apply for or release the buffer.
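A minimal sketch of a buffer entry and of executing the target task against it; the allocator object and its methods are assumptions, not the patent's interface:

```python
from dataclasses import dataclass

@dataclass
class BufferEntry:
    # The three fields named in the abstract for the target task's buffer.
    size: int       # memory size of the buffer
    location: str   # memory location (e.g., which pool the buffer lives in)
    address: int    # memory address of the buffer

def execute_target_task(kind, entry, allocator):
    """Apply for or release the buffer described by the buffer entry."""
    if kind == "apply":
        # Buffer application task: reserve the buffer a later task will use.
        return allocator.reserve(entry.size, entry.location)
    if kind == "release":
        # Buffer release task: free the buffer an earlier task used.
        allocator.free(entry.address)
```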
HW Programmable Signal Path Event-Based DSP For Sensor Mixed Signal Devices
A hardware-programmable digital signal path component for processing events from sensor mixed signal devices. A system includes a mixed signal component and a reconfigurable signal path component. The mixed signal component includes a group of sensor devices and generates one or more events from among the group of sensor devices. The signal path component receives the event(s), and includes a control unit component and a digital signal processor (DSP) component. The control unit component includes a programmable function enable mechanism, and distributes the received event(s) in combination with one or more functions among a set of predefined functions enabled by the programmable function enable mechanism. The DSP component is configured to perform one or more operations associated with the distributed event(s) in accordance with the enabled function(s).
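A minimal software sketch of the control unit's programmable function-enable mechanism (the patent describes programmable hardware); the set-based enable mask, the function names, and the DSP interface are assumptions:

```python
PREDEFINED_FUNCTIONS = {
    "filter":    lambda event: f"filtered({event})",
    "decimate":  lambda event: f"decimated({event})",
    "threshold": lambda event: f"thresholded({event})",
}

class ControlUnit:
    """Distributes sensor events paired with enabled predefined functions."""

    def __init__(self):
        self.enabled = set()

    def enable(self, name):
        self.enabled.add(name)  # program which predefined functions run

    def distribute(self, event, dsp):
        # Hand each incoming sensor event, combined with every enabled
        # function, to the DSP component for processing.
        for name in self.enabled:
            dsp.process(event, PREDEFINED_FUNCTIONS[name])

class DSP:
    def process(self, event, fn):
        print(fn(event))  # perform the operation for the distributed event
```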
EFFICIENT THREAD GROUP SCHEDULING
A mechanism is described for facilitating intelligent thread scheduling at autonomous machines. A method of embodiments, as described herein, includes detecting dependency information relating to a plurality of threads corresponding to a plurality of workloads associated with tasks relating to a processor including a graphics processor. The method may further include generating a tree of thread groups based on the dependency information, where each thread group includes multiple threads, and scheduling one or more of the thread groups associated with a similar dependency to avoid dependency conflicts.
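A minimal sketch of grouping threads that share a dependency and scheduling the groups together; the grouping key and the ordering are assumed simplifications of the patent's dependency tree:

```python
from collections import defaultdict

def group_threads_by_dependency(threads):
    """Group threads whose dependency sets match, so each group can be
    scheduled as a unit without dependency conflicts."""
    groups = defaultdict(list)
    for thread in threads:
        groups[frozenset(thread["deps"])].append(thread)
    return groups

def schedule(groups, run_group):
    # Assumed order: groups with fewer dependencies first; a full
    # implementation would walk the generated tree of thread groups.
    for key in sorted(groups, key=len):
        run_group(groups[key])
```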
Dynamic sequencing of data partitions for optimizing memory utilization and performance of neural networks
Optimized memory usage and management are crucial to the overall performance of a neural network (NN) or deep neural network (DNN) computing environment. Using various characteristics of the input data dimensions, an apportionment sequence is calculated for the input data to be processed by the NN or DNN that optimizes the efficient use of the local and external memory components. The apportionment sequence can describe how to parcel the input data (and its associated processing parameters—e.g., processing weights) into one or more portions as well as how such portions of input data (and its associated processing parameters) are passed between the local memory, external memory, and processing unit components of the NN or DNN. Additionally, the apportionment sequence can include instructions to store generated output data in the local and/or external memory components so as to optimize the efficient use of the local and/or external memory components.
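A minimal sketch of computing an apportionment sequence; the even-split policy, the parameter names, and the output-placement choice are assumptions:

```python
def apportionment_sequence(num_rows, bytes_per_row, local_capacity):
    """Parcel the input data into portions that fit local memory and record,
    per step, which slice to stage and where its output should be stored."""
    rows_per_portion = max(1, local_capacity // bytes_per_row)
    sequence = []
    for start in range(0, num_rows, rows_per_portion):
        end = min(start + rows_per_portion, num_rows)
        # Each step: move this slice of input (and its weights) from
        # external to local memory, then place the generated output.
        sequence.append({"load_rows": (start, end), "store_output": "local"})
    return sequence

print(apportionment_sequence(num_rows=10, bytes_per_row=400, local_capacity=1024))
```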
Managing data segments in memory for context switching with standalone fetch and merge services
Methods and arrangements for managing data segments. At least one job is received, each job comprising a dependee set of tasks and a depender set of at least one task, and at least one of the dependee set of tasks is executed. From that executed task, there is extracted at least one service common to at least one other task in the dependee set. Other variants and embodiments are broadly contemplated herein.
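A minimal sketch of extracting a service common to two or more dependee tasks so it can run standalone (such as the fetch and merge services in the title); the task representation is an assumption:

```python
def extract_common_services(dependee_tasks):
    """Return every service that appears in at least two dependee tasks."""
    seen = set()
    common = set()
    for task in dependee_tasks:
        for service in task["services"]:
            if service in seen:
                common.add(service)  # shared by at least two tasks
            seen.add(service)
    return common

tasks = [{"services": {"fetch", "decode"}}, {"services": {"fetch", "merge"}}]
print(extract_common_services(tasks))  # {'fetch'}
```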
FACILITATING DYNAMIC PARALLEL SCHEDULING OF COMMAND PACKETS AT GRAPHICS PROCESSING UNITS ON COMPUTING DEVICES
A mechanism is described for facilitating parallel scheduling of multiple commands on computing devices. A method of embodiments, as described herein, includes detecting a command of a plurality of commands to be processed at a graphics processing unit (GPU), and acquiring one or more resources of a plurality of resources to process the command. The plurality of resources may include other resources being used to process other commands of the plurality of commands. The method may further include facilitating processing of the command using the one or more resources, wherein the command is processed in parallel with processing of the other commands using the other resources.
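A minimal sketch of acquiring per-command resources so that commands with disjoint resource needs proceed in parallel; locks stand in for GPU resources, and all names are assumptions:

```python
import threading

resources = {"engine0": threading.Lock(), "engine1": threading.Lock()}

def process_command(command, needed):
    """Acquire only the resources this command needs, then process it."""
    # Sorting the acquisition order prevents deadlock between commands
    # that contend for overlapping resource sets.
    locks = [resources[name] for name in sorted(needed)]
    for lock in locks:
        lock.acquire()
    try:
        print(f"processing {command} on {sorted(needed)}")
    finally:
        for lock in reversed(locks):
            lock.release()

# Two commands with disjoint resources run concurrently.
threads = [
    threading.Thread(target=process_command, args=("draw", {"engine0"})),
    threading.Thread(target=process_command, args=("compute", {"engine1"})),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```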