Patent classification: G06F2209/484
Method and Device for Anonymous Page Management, Terminal Device, and Readable Storage Medium
A method for anonymous page management and a terminal device therefor. The method includes: monitoring whether the terminal device contains any process whose priority changes, and obtaining information indicating the priority change of each process; determining a target to-be-executed process according to the information indicating the priority change of each process; detecting whether a target anonymous page corresponding to the target to-be-executed process is stored in a swap space, wherein the swap space is configured to recycle anonymous pages; and prefetching the target anonymous page from the swap space in response to the target anonymous page being stored in the swap space.
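The prefetch flow described above can be sketched as a toy model in Python. `SwapSpace`, `target_process`, and `prefetch` are illustrative names, and the rule of picking the process with the largest priority increase as the target to-be-executed process is an assumption made only to keep the example concrete.

```python
class SwapSpace:
    """Toy swap space holding anonymous pages reclaimed from memory."""

    def __init__(self):
        self.pages = {}            # pid -> list of anonymous page ids

    def store(self, pid, page_ids):
        self.pages[pid] = list(page_ids)

    def take(self, pid):
        # Remove and return the process's pages, empty if none were swapped out.
        return self.pages.pop(pid, [])


def target_process(priority_changes):
    """Pick the process whose priority rose the most (an assumed rule)."""
    return max(priority_changes, key=priority_changes.get)


def prefetch(swap, priority_changes, memory):
    """Prefetch the target process's anonymous pages if they sit in swap."""
    pid = target_process(priority_changes)
    pages = swap.take(pid)         # empty if nothing is stored in swap
    if pages:
        memory.setdefault(pid, []).extend(pages)
    return pid, pages
```

The check-then-prefetch step mirrors the abstract: the pages move from swap back into the process's memory only when they are actually found in the swap space.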
SYSTEM AND METHOD FOR IMPLEMENTING CLOUD BASED ASYNCHRONOUS PROCESSORS
Systems, apparatuses, and methods for scheduling the processing of job requests on a data processing platform that utilizes multiple processing elements. In one embodiment, each job request includes a set of attributes that are used to determine scheduling and handling. Such attributes may include job type, priority, priority time, dependency list, and a fail-on-dependency-failure flag. In one embodiment, job requests are started in an order determined by the job request attributes of priority and priority time. If a job request has an unresolved dependency, the job request may be removed from the ordered list. Thus, a lower-priority job request may overtake a higher-priority job request if the higher-priority job request has unfinished dependent job requests. Rules for interacting with job requests having these attributes may be customized according to user needs and desires.
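One reading of this ordering rule can be sketched in Python. The `Job` fields mirror the listed attributes; the heap-based ordering and the convention that a lower number means higher priority are assumptions, not details from the patent.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Job:
    priority: int                  # lower value = higher priority (assumed)
    priority_time: float           # tiebreaker within a priority level
    name: str = field(compare=False)
    deps: frozenset = field(compare=False, default=frozenset())


def start_order(jobs, finished):
    """Start jobs by (priority, priority_time); a job with an unresolved
    dependency is removed from the ordered list, letting lower-priority
    jobs overtake it."""
    heap = list(jobs)
    heapq.heapify(heap)
    started, deferred = [], []
    while heap:
        job = heapq.heappop(heap)
        if job.deps <= finished:   # all dependencies resolved
            started.append(job.name)
        else:
            deferred.append(job.name)
    return started, deferred
```

With an unfinished dependency on the highest-priority job, the lower-priority job starts first, matching the overtaking behavior the abstract describes.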
System and method for conditional task switching during ordering scope transitions
A data processing system includes a processor core and a hardware module, an ordering scope manager. The processor core performs tasks on data packets. The ordering scope manager stores a first value in a first storage location. The first value indicates that exclusive execution of a first task in a first ordering scope is enabled. In response to a relinquish indicator being received, the ordering scope manager stores a second value in the first storage location. The second value indicates that exclusive execution of the first task in the first ordering scope is disabled.
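A minimal sketch of this flag protocol, with invented names and a Python dict standing in for the hardware storage locations:

```python
EXCLUSIVE, NON_EXCLUSIVE = 1, 0      # the "first value" and "second value"


class OrderingScopeManager:
    """Toy model of the hardware module: one flag slot per ordering scope."""

    def __init__(self):
        self.slots = {}              # ordering scope -> flag value

    def begin_exclusive(self, scope):
        # Store the first value: exclusive execution is enabled.
        self.slots[scope] = EXCLUSIVE

    def relinquish(self, scope):
        # A relinquish indicator arrived: store the second value.
        self.slots[scope] = NON_EXCLUSIVE

    def is_exclusive(self, scope):
        return self.slots.get(scope) == EXCLUSIVE
```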
MULTI-GRAINED MEMORY OPERANDS
A system according to an exemplary embodiment receives a description of a first set of data elements referenced in a first operand, receives a description of a second set of data elements referenced in a second operand, selects a set of subsets of data elements that are included in both the first set of data elements and the second set of data elements, wherein selecting the set of subsets of data elements includes optimizing a size of the set of subsets of data elements, provides the set of subsets of data elements, and in response to a reference by the second operand that modifies the set of subsets of data elements, provides a respective mapping from each previous subset in the set of subsets to a respective new collection of subsets in the set of subsets.
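A toy interpretation in Python: operands are index sets, "optimizing the size of the set of subsets" is read as covering the intersection with as few contiguous runs as possible, and `remap` pairs each previous run with the new runs that overlap it. All of these concretizations are assumptions made for illustration.

```python
def runs(indices):
    """Cover a set of indices with the fewest maximal contiguous runs."""
    out, run = [], []
    for i in sorted(indices):
        if run and i == run[-1] + 1:
            run.append(i)
        else:
            if run:
                out.append((run[0], run[-1]))
            run = [i]
    if run:
        out.append((run[0], run[-1]))
    return out


def shared_subsets(first, second):
    """Subsets of data elements referenced by both operands."""
    return runs(set(first) & set(second))


def remap(old_runs, new_runs):
    """Map each previous run to the new runs that overlap it."""
    return {
        old: [new for new in new_runs
              if new[0] <= old[1] and old[0] <= new[1]]
        for old in old_runs
    }
```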
MEMORY OPERAND DESCRIPTORS
A system according to an exemplary embodiment receives an operand descriptor identifying characteristics of a set of data elements referenced by an operand to be accessed from a set of locations in a memory, wherein the operand descriptor describes an ordering of the set of data elements and respective locations in the memory for each respective data element in the set of data elements. The system further accesses the set of data elements in the memory based on the operand descriptor.
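A toy rendering of the descriptor idea in Python, assuming a flat byte-addressed memory; the field names are illustrative, not from the patent. The descriptor records, in operand order, where each data element lives, and access simply follows it.

```python
from dataclasses import dataclass


@dataclass
class OperandDescriptor:
    element_size: int    # bytes per data element
    addresses: tuple     # memory address of each element, in operand order


def access(memory, desc):
    """Gather the operand's data elements from memory per the descriptor."""
    return [memory[a:a + desc.element_size] for a in desc.addresses]
```

Because the descriptor carries both ordering and per-element locations, the same mechanism covers strided, gathered, or reordered operands.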
BATTERY MANAGEMENT SYSTEM AND CONTROLLING METHOD THEREOF
A battery management system in which each of a plurality of battery management systems performs an individual task and wirelessly transmits the results of the tasks to a master battery management system, the battery management system including: a task information storage unit including a list of tasks performed by each of the plurality of battery management systems, the performance time, performance cycle, and work priority of each task included in the list of tasks, and the communication priority among the plurality of battery management systems; a schedule determination unit configured to determine a work schedule on the basis of the data stored in the task information storage unit; and a priority changing unit configured to adjust the work priority of a task based on the work schedule determined by the schedule determination unit, wherein the schedule determination unit is further configured to adjust the work schedule according to the adjusted work priority.
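The determine-then-adjust loop can be sketched in Python with invented field names. The rule that a task missing its cycle deadline is promoted to the highest work priority is an assumption used only to make the feedback between the schedule determination unit and the priority changing unit concrete.

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    duration: int        # "performance time"
    cycle: int           # must finish within this many time units
    priority: int        # lower value = scheduled earlier (assumed)


def build_schedule(tasks):
    """Schedule determination: run tasks back-to-back in priority order."""
    start, schedule = 0, []
    for t in sorted(tasks, key=lambda t: t.priority):
        schedule.append((t.name, start))
        start += t.duration
    return schedule


def adjust_priorities(tasks):
    """Priority changing: promote any task that would miss its cycle,
    then rebuild the work schedule with the adjusted priorities."""
    starts = dict(build_schedule(tasks))
    for t in tasks:
        if starts[t.name] + t.duration > t.cycle:
            t.priority = min(x.priority for x in tasks) - 1
    return build_schedule(tasks)
```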
Power-efficient deep neural network module configured for parallel kernel and parallel input processing
A deep neural network (DNN) module utilizes parallel kernel and parallel input processing to decrease bandwidth utilization, reduce power consumption, improve neuron multiplier stability, and provide other technical benefits. Parallel kernel processing enables the DNN module to load input data only once for processing by multiple kernels. Parallel input processing enables the DNN module to load kernel data only once for processing with multiple input data. The DNN module can implement other power-saving techniques like clock-gating (i.e., removing the clock from) and power-gating (i.e., removing the power from) banks of accumulators based upon usage of the accumulators. For example, individual banks of accumulators can be power-gated when all accumulators in a bank are not in use and do not store data for a future calculation. Banks of accumulators can also be clock-gated when all accumulators in a bank are not in use but store data for a future calculation.
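The gating rule in the last two sentences reduces to a small decision function; the state names here are mine, not the patent's.

```python
def gate_state(in_use, holds_future_data):
    """Decide a bank's gating state from accumulator usage.

    An idle bank is power-gated (clock and power removed) if it holds
    nothing needed later, and merely clock-gated (state preserved,
    clock removed) if it still stores data for a future calculation.
    """
    if in_use:
        return "active"
    return "clock-gated" if holds_future_data else "power-gated"
```

Clock-gating keeps the accumulator contents alive at reduced dynamic power, while power-gating saves more but loses state, which is why the second condition matters.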
APPLICATION CONSTRUCTION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
An application construction method and apparatus, an electronic device and a storage medium are provided, which are related to the field of artificial intelligence. The application construction method includes: acquiring a service orchestration file of an application; and determining an execution program of the application based on the service orchestration file, wherein the service orchestration file includes at least one of the following contents corresponding to at least one task obtained by disassembling the application: information relating to a format of data transferred between tasks; information relating to syntax transformation of the data transferred between the tasks; information relating to logical processing between the tasks; and information relating to a model that is to be used by the task.
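One possible, entirely illustrative shape for a service orchestration file and the execution program derived from it, with "syntax transformation" modeled as a registry of named functions applied to the data passed between tasks:

```python
# Hypothetical registry of syntax transformations; names are invented.
TRANSFORMS = {
    "upper": str.upper,
    "strip": str.strip,
}


def build_program(orchestration):
    """Determine an execution program from an orchestration mapping:
    each task entry names the transform applied to the data handed to
    the next task."""
    steps = [TRANSFORMS[t["transform"]] for t in orchestration["tasks"]]

    def program(data):
        for step in steps:
            data = step(data)
        return data

    return program
```

A real orchestration file would also carry the data-format, logical-processing, and model information the abstract lists; this sketch keeps only the transformation chain.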
DEFERRED COMMAND EXECUTION
Deferred command execution by a command processor (CP) may be performed based on a determination that at least one command of a primary buffer is located between a first link of the primary buffer and a second link of the primary buffer. The first link and the second link may be to one or more secondary buffers that include a set of commands. The CP may initiate, before executing, a fetch of a first set of commands in the set of commands based on the first link, a fetch of the at least one command of the primary buffer, and a fetch of a second set of commands in the set of commands based on the second link. After initiating the fetch of the second set of commands, the CP may execute the first set of commands, the at least one command of the primary buffer, and the second set of commands.
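A toy model of the deferral, with invented buffer encodings: when the primary buffer interleaves commands and links, the command processor queues every fetch before executing anything, rather than executing each piece as soon as it arrives.

```python
def run(primary, secondaries, log):
    """Fetch everything reachable from the primary buffer, then execute.

    `primary` is a list of ("link", name) or ("cmd", command) entries;
    `secondaries` maps link names to command lists. Executed commands
    are appended to `log` in execution order.
    """
    fetched = []
    for kind, value in primary:
        if kind == "link":
            fetched.extend(secondaries[value])   # fetch via the link
        else:
            fetched.append(value)                # fetch the primary command
    for cmd in fetched:                          # execute only after all fetches
        log.append(cmd)
```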
X-ray computed tomography apparatus, image generation apparatus, and task management method
According to one embodiment, an X-ray computed tomography apparatus includes processing circuitry. The processing circuitry generates first tasks and second tasks for each of a plurality of reconstruction requests for image reconstruction. The processing circuitry manages an order of execution of the first tasks and the second tasks such that the second tasks are executed after the first tasks. The processing circuitry executes the first tasks and the second tasks in the managed order of execution, based on a projection data set.
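A minimal sketch of one ordering that satisfies the stated constraint: all first tasks run before any second task. The task representation is mine; the patent does not prescribe this particular schedule.

```python
def plan(requests):
    """Generate first/second tasks per reconstruction request and order
    them so every second task follows all first tasks."""
    firsts = [("first", r) for r in requests]
    seconds = [("second", r) for r in requests]
    return firsts + seconds
```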