H04L47/78

CONFIGURABLE LOGIC PLATFORM WITH RECONFIGURABLE PROCESSING CIRCUITRY
20230046107 · 2023-02-16

An architecture for load-balanced groups of multi-stage manycore processors shared dynamically among a set of software applications, with capabilities for destination-task-defined intra-application prioritization of inter-task communications (ITC), for architecture-based ITC performance isolation between the applications, and for prioritizing application task instances for execution on the cores of the manycore processors based at least in part on which task instances have available the input data, such as ITC data, that they need to execute.
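
The scheduling policy this abstract describes can be sketched in a few lines. The following is an illustrative model only, not the patented implementation: task instances whose required input (e.g. ITC) data has arrived are preferred for the limited cores, with intra-application priority as the tie-breaker. All names and fields are invented for the sketch.

```python
# Hypothetical sketch of input-availability-first core scheduling.
from dataclasses import dataclass

@dataclass
class TaskInstance:
    name: str
    priority: int        # intra-application priority (higher wins)
    inputs_ready: bool   # whether the required ITC data has arrived

def select_for_cores(instances, num_cores):
    """Pick up to num_cores instances, preferring ready inputs, then priority."""
    ranked = sorted(instances,
                    key=lambda t: (t.inputs_ready, t.priority),
                    reverse=True)
    return [t.name for t in ranked[:num_cores]]

tasks = [
    TaskInstance("decode", priority=2, inputs_ready=False),
    TaskInstance("filter", priority=1, inputs_ready=True),
    TaskInstance("mix",    priority=3, inputs_ready=True),
]
print(select_for_cores(tasks, 2))  # ['mix', 'filter']
```

Note that "decode" loses to lower-priority "filter" here because its inputs have not arrived, which is the point of data-availability-based prioritization.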

Systems and methods for provision of a guaranteed batch

Systems and methods for providing a guaranteed batch pool are described, including receiving a job request for execution on the pool of resources; determining an amount of time to be utilized for executing the job request based on available resources from the pool of resources and historical resource usage of the pool of resources; determining a resource allocation from the pool of resources, wherein the resource allocation spreads the job request over the amount of time; determining that the job request is capable of being executed for the amount of time; and executing the job request over the amount of time, according to the resource allocation.
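
The determination-and-spreading steps above can be sketched under simplifying assumptions: "historical resource usage" is reduced to a fixed expected-usage fraction, and the job is spread evenly over the computed window. All names and the capacity model are hypothetical.

```python
# Illustrative sketch of the guaranteed-batch planning flow.
import math

def plan_batch_job(job_units, pool_capacity, historical_usage_fraction):
    """Return (hours, per_hour_allocation) for running the job on spare capacity."""
    spare = pool_capacity * (1.0 - historical_usage_fraction)  # expected free resources
    if spare <= 0:
        raise ValueError("no spare capacity in the pool")
    hours = math.ceil(job_units / spare)   # time needed at the spare-capacity rate
    per_hour = job_units / hours           # spread the job evenly over that time
    return hours, per_hour

hours, rate = plan_batch_job(job_units=100, pool_capacity=50,
                             historical_usage_fraction=0.6)
print(hours, rate)  # 5 20.0
```

The even spread is one possible allocation policy; the abstract only requires that the allocation cover the computed amount of time.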

Technologies for providing shared memory for accelerator sleds

Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
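
The translate-then-route step can be modeled with a dict-backed address map. This is a minimal sketch of the idea, not the sled's actual controller logic; the device names and addresses are invented.

```python
# Minimal sketch: logical-to-physical translation followed by device routing.
address_map = {          # logical region base -> (memory device id, physical base)
    0x1000: ("dimm0", 0x8000),
    0x2000: ("dimm1", 0x4000),
}

def route_access(logical_base, offset):
    """Resolve a logical region to its memory device and physical address."""
    device, phys_base = address_map[logical_base]
    return device, phys_base + offset

print(route_access(0x2000, 0x10))  # ('dimm1', 0x4010)
```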

Transaction-enabled systems and methods for royalty apportionment and stacking

Transaction-enabled systems and methods for royalty apportionment and stacking are disclosed. An example system may include a plurality of royalty generating elements (a royalty stack) each related to a corresponding one or more of a plurality of intellectual property (IP) assets (an aggregate stack of IP). The system may further include a royalty apportionment wrapper to interpret IP licensing terms and apportion royalties to a plurality of owning entities corresponding to the aggregate stack of IP in response to the IP licensing terms and a smart contract wrapper. The smart contract wrapper is configured to access a distributed ledger, interpret an IP description value and IP addition request, to add an IP asset to the aggregate stack of IP, and to adjust the royalty stack.
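
The apportionment step can be sketched as a proportional split of a licensing payment across the royalty stack. This is a simplified stand-in for the wrapper's interpretation of IP licensing terms: each entry pairs an owning entity with a royalty rate, and rates are assumed additive. All data is illustrative.

```python
# Hypothetical sketch of royalty apportionment over an aggregate IP stack.
def apportion_royalties(payment, royalty_stack):
    """Split a payment across owning entities in proportion to royalty rates."""
    total = sum(rate for _, rate in royalty_stack)
    shares = {}
    for owner, rate in royalty_stack:   # one entry per royalty-generating element
        shares[owner] = shares.get(owner, 0.0) + payment * rate / total
    return shares

# Two IP assets owned by "alice", one by "bob"; rates in arbitrary units.
stack = [("alice", 5), ("bob", 3), ("alice", 2)]
print(apportion_royalties(1000.0, stack))  # {'alice': 700.0, 'bob': 300.0}
```

Adding an asset to the stack (the smart contract wrapper's role in the abstract) would simply append an entry, which automatically adjusts every owner's share.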

NETWORK SERVICE CONSTRUCTION SYSTEM AND NETWORK SERVICE CONSTRUCTION METHOD
20230040676 · 2023-02-09

Provided are a network service construction system and a network service construction method that are capable of flexibly constructing network services satisfying various needs. A purchase management module receives service requirement data indicating a service requirement. An E2EO module and an inventory management module identify, based on the service requirement data, a configuration of a functional unit group that achieves a network service. Based on the identified configuration and template data that accepts the configuration as a parameter, a CMaaS module, a service manager module, and a slice manager module identify a construction procedure for the functional unit group. The CMaaS module, the service manager module, the slice manager module, and a container management module construct the functional unit group by executing the identified construction procedure.
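
The requirement-to-procedure pipeline can be sketched in two steps: map the service requirement to a functional unit group, then instantiate a template per unit to obtain the construction procedure. The catalog, unit names, and template string below are invented for illustration and do not reflect the actual modules.

```python
# Hedged sketch: requirement -> functional unit group -> construction procedure.
def identify_configuration(requirement):
    """Map a service requirement to a functional unit group (illustrative catalog)."""
    catalog = {"5g-core": ["amf", "smf", "upf"], "voice": ["ims"]}
    return catalog[requirement["service"]]

def build_procedure(units, template="deploy {unit} container"):
    """Instantiate the template for each functional unit, in order."""
    return [template.format(unit=u) for u in units]

units = identify_configuration({"service": "5g-core"})
print(build_procedure(units))
# ['deploy amf container', 'deploy smf container', 'deploy upf container']
```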

Automated local scaling of compute instances

At a first compute instance run on a virtualization host, a local instance scaling manager is launched. The scaling manager determines, based on metrics collected at the host, that a triggering condition for redistributing one or more types of resources of the first compute instance has been met. The scaling manager causes virtualization management components to allocate a subset of the first compute instance's resources to a second compute instance at the host.
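
The trigger-and-redistribute behavior can be sketched as follows, assuming a single scalar metric and CPU counts as the redistributed resource; the threshold, fraction, and instance names are all hypothetical.

```python
# Illustrative sketch of a local instance scaling decision.
def rebalance(instances, metric, threshold, fraction=0.25):
    """Move `fraction` of instance a's CPUs to instance b if the metric triggers."""
    if metric < threshold:
        return instances                       # no triggering condition met
    moved = int(instances["a"] * fraction)     # subset of a's resources to reassign
    return {"a": instances["a"] - moved, "b": instances["b"] + moved}

print(rebalance({"a": 8, "b": 0}, metric=0.9, threshold=0.8))  # {'a': 6, 'b': 2}
```

In the abstract, the actual reallocation is performed by the virtualization management components; this sketch covers only the scaling manager's decision.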

Transaction-enabling systems and methods for customer notification regarding facility provisioning and allocation of resources

The present disclosure describes transaction-enabling systems and methods. A system can include a facility having a core task with a customer-relevant output, and a controller. The controller may include a facility description circuit to interpret a plurality of historical facility parameter values and corresponding facility outcome values, and a facility prediction circuit to operate an adaptive learning system, wherein the adaptive learning system is configured to train a facility production predictor in response to the historical facility parameter values and the corresponding outcome values. The facility description circuit also interprets a plurality of present-state facility parameter values; the trained facility production predictor determines a customer contact indicator in response to those values, and a customer notification circuit provides a notification to a customer in response.
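
The train-then-predict loop can be sketched with a deliberately trivial stand-in for the adaptive learning system: a nearest-neighbour predictor over historical (parameter values, outcome) pairs, queried with present-state parameter values. The data, distance metric, and indicator labels are all invented.

```python
# Minimal sketch: historical (parameters, outcome) pairs -> contact indicator.
def train(history):
    """'history' is a list of (parameter_vector, outcome) pairs."""
    return list(history)   # the trivial model just memorises its examples

def predict_contact(model, present):
    """Return the outcome of the closest historical parameter vector."""
    def dist(params):
        return sum((a - b) ** 2 for a, b in zip(params, present))
    _, outcome = min(model, key=lambda pair: dist(pair[0]))
    return outcome

model = train([((90, 5), "notify"), ((40, 1), "no-action")])
print(predict_contact(model, (85, 4)))  # notify
```

A real adaptive learning system would retrain as new outcome values arrive; the point here is only the shape of the interface between the description circuit, the predictor, and the notification circuit.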

Multi-tier resource, subsystem, and load orchestration

Electronic communications received via a network from a plurality of electronic devices may include signals of device interactions or data changes that correspond to process performances by process-performing resources, signals of conditions of loads, or signals of processes associated with the process-performing resources and the loads. Data composites may be formed from the electronic communications, with data portions collected and mapped to resource profile records and load profile records that may be updated with the collected data portions. For each load, one or more of the resource profile records and/or the load profile records may be used to map the process-performing resources to the load. Content nodes, each including respective linked content, resource specifications, or load specifications, may be linked in a network of content nodes. Access to the network of content nodes may be allowed via a control interface.
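
The resource-to-load mapping step can be sketched with profile records held as plain dictionaries: resources are matched to a load on the process they perform and their availability. The record fields, ids, and matching rule are hypothetical simplifications of what the profile records could carry.

```python
# Hypothetical sketch of mapping process-performing resources to a load.
resource_profiles = [
    {"id": "crew-1", "process": "delivery", "available": True},
    {"id": "crew-2", "process": "repair",   "available": True},
    {"id": "crew-3", "process": "delivery", "available": False},
]

def map_resources(load):
    """Return ids of available resources whose process matches the load's need."""
    return [r["id"] for r in resource_profiles
            if r["process"] == load["process"] and r["available"]]

print(map_resources({"id": "load-7", "process": "delivery"}))  # ['crew-1']
```

Updating a profile record from an incoming signal (e.g. flipping `available`) would change subsequent mappings, which mirrors the update-then-map flow in the abstract.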