Patent classifications
G06F9/4881
Technologies for providing shared memory for accelerator sleds
Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
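The translation step described above can be sketched as follows. This is a minimal illustration with hypothetical names (`SledMemoryController`, `region_map`), not the patent's actual implementation: the controller looks up which memory device owns the logical region and computes the physical address before routing.

```python
class SledMemoryController:
    def __init__(self, region_map):
        # region_map: logical base address -> (device id, physical base, region size)
        self.region_map = region_map

    def route(self, logical_addr):
        """Return (device, physical address) for a logical address."""
        for base, (device, phys_base, size) in self.region_map.items():
            if base <= logical_addr < base + size:
                # translate the offset within the region, then route
                return device, phys_base + (logical_addr - base)
        raise ValueError(f"unmapped logical address {logical_addr:#x}")
```

A request for logical address `0x4008` in a map where the second region starts at `0x4000` would be routed to the second device at physical offset `0x8`.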
Transaction-enabled systems and methods for royalty apportionment and stacking
Transaction-enabled systems and methods for royalty apportionment and stacking are disclosed. An example system may include a plurality of royalty generating elements (a royalty stack), each related to a corresponding one or more of a plurality of intellectual property (IP) assets (an aggregate stack of IP). The system may further include a royalty apportionment wrapper to interpret IP licensing terms and apportion royalties to a plurality of owning entities corresponding to the aggregate stack of IP in response to the IP licensing terms, and a smart contract wrapper. The smart contract wrapper is configured to access a distributed ledger, interpret an IP description value and IP addition request, add an IP asset to the aggregate stack of IP, and adjust the royalty stack.
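The apportion-and-adjust behavior can be sketched as below. The flat per-element rate model and all names are assumptions for illustration; the patent's actual licensing-term semantics and ledger interaction are richer.

```python
def apportion_royalties(payment, royalty_stack):
    """royalty_stack: list of (owning entity, royalty rate) pairs, one per
    royalty-generating element; rates are fractions of the payment."""
    payouts = {}
    for owner, rate in royalty_stack:
        payouts[owner] = payouts.get(owner, 0.0) + payment * rate
    return payouts

def add_ip_asset(royalty_stack, owner, rate):
    """Adjust the royalty stack when an IP asset is added to the aggregate
    stack (mirrors the smart contract wrapper's add-and-adjust step)."""
    return royalty_stack + [(owner, rate)]
```

Adding an asset extends the stack, and a later apportionment automatically reflects the new element's rate.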
MODEL GENERATION DEVICE, IN-VEHICLE DEVICE, AND MODEL GENERATION METHOD
Provided are: a selection information acquiring unit to acquire selection information for identifying a target model to be generated from among a plurality of generable neural network models; a model identification unit to identify the target model on the basis of the selection information acquired by the selection information acquiring unit; a weight acquiring unit to acquire a weight of the target model identified by the model identification unit; and a model generation unit to generate the target model identified by the model identification unit on the basis of the weight acquired by the weight acquiring unit and a weight map in which structure information on a structure of each of the plurality of neural network models and information for mapping a weight in the structure are defined.
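The selection-then-assembly flow can be illustrated with dict-based stand-ins. All structures here (`structures`, `weight_map`, `weight_store` and their keys) are assumptions; the point is that the weight map ties each layer of each generable model's structure to an entry in a shared weight store, so only the selected target model is materialized.

```python
def generate_model(selection_info, structures, weight_map, weight_store):
    """structures: model name -> ordered layer names;
    weight_map: (model name, layer name) -> weight store key."""
    target = selection_info["target_model"]      # model identification
    layers = structures[target]                  # structure information
    # model generation: attach the mapped weight to each layer
    return {layer: weight_store[weight_map[(target, layer)]]
            for layer in layers}
```

Note that two models can share a weight (e.g. a common first layer) by mapping to the same store key.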
SYSTEM AND METHOD FOR BATCH AND SCHEDULER MIGRATION IN AN APPLICATION ENVIRONMENT MIGRATION
A method of batch and scheduler migration assesses a batch job, scans its scheduling mechanism and components, ascertains a quantum change for migrating the batch job to a target batch service, and forecasts an assessment statistic that provides at least one functional readiness and a timeline to complete the migration of the batch job. The method generates a transformed batch job structure by breaking down the batch job according to the target batch service while retaining the scheduling mechanism. Further, it updates containerized batch service components of the target batch service as per the forecasted assessment statistic and the transformed batch job structure, and migrates the batch job to the target batch service by re-platforming the updated containerized batch service components.
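The assessment step can be sketched as a toy function. The names and the one-day-per-changed-component forecast are assumptions for illustration only, not the patent's actual assessment statistic.

```python
def assess_migration(job_components, target_supported):
    """job_components: components found by scanning the batch job;
    target_supported: components the target batch service runs as-is."""
    changes = [c for c in job_components if c not in target_supported]
    return {
        "quantum_change": len(changes),        # how much must change
        "functionally_ready": not changes,     # readiness indicator
        "timeline_days": len(changes),         # toy timeline forecast
    }
```

A job whose components are all supported by the target service reports zero quantum change and immediate readiness.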
METHOD FOR DATA PROCESSING, DEVICE, AND STORAGE MEDIUM
A method for data processing, an electronic device, and a computer-readable storage medium, which relate to the field of computers. The method includes: acquiring scheduling information for a perception model based on a user application; determining, based on the scheduling information for the perception model, a scheduling set of the perception model, where the scheduling set comprises one or more of a plurality of sub-models of the perception model; and running, based on perception data from a data collection device, the one or more sub-models of the scheduling set, so as to output one or more perception results corresponding to the one or more sub-models.
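The scheduling-set mechanism can be sketched by treating sub-models as plain callables (an assumption for illustration): the scheduling information names a subset, and only that subset runs on the perception data, each sub-model producing its own result.

```python
def run_perception(scheduling_info, sub_models, perception_data):
    """sub_models: name -> callable; scheduling_info selects the subset
    (the scheduling set) actually run for the current user application."""
    scheduling_set = scheduling_info["sub_models"]
    return {name: sub_models[name](perception_data)
            for name in scheduling_set}
```

An application that only needs lane detection schedules just that sub-model, leaving the rest of the perception model idle.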
METHOD AND APPARATUS FOR DYNAMICALLY ADJUSTING PIPELINE DEPTH TO IMPROVE EXECUTION LATENCY
Apparatus and method for managing pipeline depth of a data processing device. For example, one embodiment of an apparatus comprises: an interface to receive a plurality of work requests from a plurality of clients; and a plurality of engines to perform the plurality of work requests; wherein the work requests are to be dispatched to the plurality of engines from a plurality of work queues, the work queues to store a work descriptor per work request, each work descriptor to include information needed to perform a corresponding work request, wherein the plurality of work queues include a first work queue to store work descriptors associated with first latency characteristics and a second work queue to store work descriptors associated with second latency characteristics; engine configuration circuitry to configure a first engine to have a first pipeline depth based on the first latency characteristics and to configure a second engine to have a second pipeline depth based on the second latency characteristics.
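The queue-to-engine dispatch can be sketched in software (names assumed; the patent describes hardware circuitry): each work queue carries one latency class, and the engine serving it is configured with a pipeline depth matched to that class, shallow for low-latency work and deep for throughput work.

```python
from collections import deque

class Engine:
    def __init__(self, pipeline_depth):
        self.pipeline_depth = pipeline_depth  # set by engine configuration
        self.inflight = deque()               # work descriptors in the pipeline

def dispatch(work_queues, engines):
    """work_queues: latency class -> deque of work descriptors;
    engines: latency class -> Engine. Fill each engine's pipeline."""
    dispatched = []
    for latency_class, queue in work_queues.items():
        engine = engines[latency_class]
        while queue and len(engine.inflight) < engine.pipeline_depth:
            descriptor = queue.popleft()
            engine.inflight.append(descriptor)
            dispatched.append((latency_class, descriptor))
    return dispatched
```

With a depth-1 low-latency engine and a depth-4 throughput engine, one low-latency descriptor and up to four throughput descriptors are in flight at once.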
SYSTEMS AND METHODS OF HYBRID CENTRALIZED DISTRIBUTIVE SCHEDULING ON SHARED PHYSICAL HOSTS
Systems and methods for hybrid centralized distributive scheduling and conflict resolution of multiple scheduler instances that share physical resources in a cloud computing system. The cloud computing system includes a plurality of scheduler instances, a global resource manager (GRM) for high-level resource management and conflict resolution for the scheduler instances, and a plurality of physical hosts. Each physical host has a respective local resource manager (LRM). The scheduler instances are responsible for the initial processing of scheduling and resource allocation for resource requests, and for proposing candidate physical hosts (and respective resource allocations) for the resource requests to the GRM. The GRM is responsible for conflict resolution through its general conflict resolvers of filtering, sorting, and counting. The GRM decides which physical hosts among the candidate physical hosts will run the runtime instances of the resource requests after resolving conflicts among the scheduler instances.
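The GRM's filter/sort/count resolution can be sketched greedily. The names and the demand-descending policy are assumptions; the point is that the GRM commits at most one host per request and never over-commits a host's capacity, even when multiple schedulers propose conflicting placements.

```python
def resolve_conflicts(proposals, host_capacity):
    """proposals: list of (request id, candidate host, resource demand),
    possibly with several proposals per request from different schedulers."""
    committed = {host: 0 for host in host_capacity}          # counting
    placements = {}
    for request_id, host, demand in sorted(
            proposals, key=lambda p: p[2], reverse=True):    # sorting
        if request_id in placements:
            continue                  # conflict: this request already placed
        if committed[host] + demand <= host_capacity[host]:  # filtering
            committed[host] += demand
            placements[request_id] = host
    return placements
```

When two schedulers propose the same request on different hosts, the first feasible candidate wins and the other proposal is discarded.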
INSTRUCTION INTERPRETATION FOR WEB TASK AUTOMATION
A method of generating an instruction performance skeleton employs an instruction unit configured to receive a natural language instruction. From the natural language instruction, a sequence of clauses may be extracted. The instruction unit then determines a target website or websites on which to perform the task. Object models of the target website or websites are generated. A comparison of the sequence of clauses to the object model and its labelled hierarchical class structure is performed. Based on this comparison, an instruction performance skeleton is generated. Later, on the basis of a further natural language instruction that is similar to the previous natural language instruction, the instruction performance skeleton may be modified to generate a playback performance skeleton that arranges performance of the task.
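A toy version of the extract-and-compare steps is sketched below. The keyword matching and the flat `object_model` dict are assumptions standing in for the labelled hierarchical class structure; a real system would do far richer matching.

```python
import re

def extract_clauses(instruction):
    """Split a natural language instruction into clause strings."""
    parts = re.split(r",|\band then\b|\bthen\b", instruction)
    return [p.strip() for p in parts if p.strip()]

def build_skeleton(clauses, object_model):
    """object_model: element label -> selector; pair each clause with the
    first element whose label appears in it (toy comparison step)."""
    skeleton = []
    for clause in clauses:
        for label, selector in object_model.items():
            if label in clause.lower():
                skeleton.append((clause, selector))
                break
    return skeleton
```

Clauses that match no element are simply dropped from the skeleton in this sketch.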
METHOD AND APPARATUS FOR SCHEDULING TASKS IN MULTI-CORE PROCESSOR
An apparatus includes a plurality of processing cores, and a memory including a plurality of task queues corresponding to the plurality of processing cores, respectively, wherein at least one processing core of the plurality of processing cores is configured, by executing a scheduler, to determine execution of task rescheduling, based on states of the plurality of processing cores, tasks stored in the plurality of task queues, and at least one reference value, and, when the task rescheduling is executed, move a first task stored in a first task queue to a second task queue.
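The rescheduling decision can be sketched as a load-balancing move. The scalar load metric and single reference value are assumptions; the patent's decision considers richer core states, but the shape is the same: when imbalance exceeds a reference value, a task moves from a first task queue to a second.

```python
from collections import deque

def reschedule(task_queues, core_loads, reference_value):
    """task_queues: core id -> deque of tasks; core_loads: core id -> load."""
    busiest = max(core_loads, key=core_loads.get)
    idlest = min(core_loads, key=core_loads.get)
    if (core_loads[busiest] - core_loads[idlest] > reference_value
            and task_queues[busiest]):
        task = task_queues[busiest].popleft()   # first task, first queue
        task_queues[idlest].append(task)        # moved to second queue
        return task
    return None                                 # no rescheduling executed
```

If the load gap stays within the reference value, no task is moved.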
OPTIMIZING VM NUMA CONFIGURATION AND WORKLOAD PLACEMENT IN A HETEROGENEOUS CLUSTER
An example method of placing a virtual machine (VM) in a cluster of hosts is described. Each of the hosts has a hypervisor managed by a virtualization management server for the cluster, and the hosts are separated into a plurality of non-uniform memory access (NUMA) domains. The method includes: comparing a virtual central processing unit (vCPU) and memory configuration of the VM with the physical NUMA topologies of the hosts; selecting a set of the hosts spanning at least one of the NUMA domains, each host in the set having a physical NUMA topology that maximizes locality for the vCPU and memory resources of the VM as specified in the vCPU and memory configuration; and providing the set of hosts to a distributed resource scheduler (DRS) executing in the virtualization management server, the DRS being configured to place the VM in a host selected from the set.
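The host-selection step can be sketched as a locality filter. Field names are assumed for illustration: a host is a candidate when at least one of its NUMA nodes can satisfy the VM's vCPU and memory configuration locally; DRS would then place the VM within this candidate set.

```python
def select_numa_local_hosts(vm, hosts):
    """vm: {'vcpus': int, 'memory_gb': int};
    hosts: list of {'name': str, 'numa_nodes': [{'cpus', 'memory_gb'}]}."""
    candidates = []
    for host in hosts:
        # keep the host if some single NUMA node fits the whole VM,
        # maximizing vCPU/memory locality
        if any(node["cpus"] >= vm["vcpus"]
               and node["memory_gb"] >= vm["memory_gb"]
               for node in host["numa_nodes"]):
            candidates.append(host["name"])
    return candidates
```

A VM needing 6 vCPUs and 48 GB fits a host with one 8-CPU/64 GB node but not a host split into two 4-CPU/32 GB nodes, even though the latter has the same totals.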