H04L47/788

Transaction-enabled systems and methods for royalty apportionment and stacking

Transaction-enabled systems and methods for royalty apportionment and stacking are disclosed. An example system may include a plurality of royalty generating elements (a royalty stack), each related to a corresponding one or more of a plurality of intellectual property (IP) assets (an aggregate stack of IP). The system may further include a royalty apportionment wrapper to interpret IP licensing terms and apportion royalties to a plurality of owning entities corresponding to the aggregate stack of IP in response to the IP licensing terms, and a smart contract wrapper configured to access a distributed ledger, interpret an IP description value and an IP addition request, add an IP asset to the aggregate stack of IP, and adjust the royalty stack accordingly.
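A minimal sketch of the apportionment logic described above, assuming each royalty-generating element carries a flat revenue-share rate and a single owning entity; the data model and all names are illustrative, not taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class IPAsset:
    """One IP asset in the aggregate stack (illustrative data model)."""
    owner: str
    royalty_rate: float  # fraction of gross revenue owed for this asset

def add_asset(stack, asset):
    """Smart-contract-style addition request: return the adjusted royalty stack."""
    return stack + [asset]

def apportion(revenue, stack):
    """Apportion royalties to the owning entities across the whole stack."""
    payouts = {}
    for asset in stack:
        payouts[asset.owner] = payouts.get(asset.owner, 0.0) + revenue * asset.royalty_rate
    return payouts

stack = [IPAsset("entity_a", 0.02), IPAsset("entity_b", 0.03)]
stack = add_asset(stack, IPAsset("entity_a", 0.01))
print(apportion(1000.0, stack))  # per-owner payouts across the adjusted stack
```

In a deployed system the `add_asset` step would be a ledger transaction rather than a list append; the pro-rata arithmetic is the same.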

Transaction-enabling systems and methods for customer notification regarding facility provisioning and allocation of resources

The present disclosure describes transaction-enabling systems and methods. A system can include a facility having a core task that produces a customer-relevant output, and a controller. The controller may include a facility description circuit to interpret a plurality of historical facility parameter values and corresponding facility outcome values, and a facility prediction circuit to operate an adaptive learning system, wherein the adaptive learning system is configured to train a facility production predictor in response to the historical facility parameter values and the corresponding outcome values. The facility description circuit also interprets a plurality of present state facility parameter values, the trained facility production predictor determines a customer contact indicator in response to the plurality of present state facility parameter values, and a customer notification circuit provides a notification to a customer in response to the customer contact indicator.
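A rough sketch of the train-then-predict flow, substituting a 1-nearest-neighbour lookup for the adaptive learning system purely for illustration (all names and thresholds are invented):

```python
def train_predictor(history):
    """history: list of (facility_params, outcome_value) pairs.
    Returns a 1-NN predictor standing in for the trained facility production predictor."""
    def predict(present_params):
        def sq_dist(record):
            return sum((a - b) ** 2 for a, b in zip(record[0], present_params))
        return min(history, key=sq_dist)[1]  # outcome of the closest historical state
    return predict

def customer_contact_indicator(predict, present_params, threshold):
    """Flag the customer for notification when predicted output drops below threshold."""
    return predict(present_params) < threshold

# Historical (parameter values, outcome value) observations for the facility.
history = [((10.0, 0.9), 100.0), ((4.0, 0.4), 30.0)]
predictor = train_predictor(history)
print(customer_contact_indicator(predictor, (5.0, 0.5), threshold=50.0))
```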

GENERATING AUTOMATIC BANDWIDTH ADJUSTMENT POLICIES PER LABEL-SWITCHED PATH
20180006962 · 2018-01-04

A device may identify a plurality of first values associated with network traffic of a label-switched path of a plurality of label-switched paths. The device may determine an adjustment policy based on the plurality of first values. The adjustment policy may include one or more factors associated with a plurality of second values. The plurality of second values may be determined based on the plurality of first values. The device may implement the adjustment policy in association with the label-switched path. A bandwidth reservation of the label-switched path may be adjusted based on the adjustment policy. The adjustment policy may be implemented for fewer than all of the plurality of label-switched paths.
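A toy illustration of the first-values-to-second-values derivation, with made-up policy factors; the abstract does not specify these formulas:

```python
def derive_policy(first_values):
    """Derive per-LSP adjustment factors (second values) from observed traffic
    samples (first values). The factors and thresholds here are invented."""
    peak = max(first_values)
    average = sum(first_values) / len(first_values)
    bursty = peak > 2 * average
    return {
        "adjust_threshold_pct": 20 if bursty else 5,  # bursty paths adjust less often
        "minimum_bandwidth": average,
    }

def adjust_reservation(current_reservation, measured_rate, policy):
    """Re-reserve only when the measured rate deviates past the policy threshold."""
    deviation_pct = abs(measured_rate - current_reservation) / current_reservation * 100
    if deviation_pct >= policy["adjust_threshold_pct"]:
        return max(measured_rate, policy["minimum_bandwidth"])
    return current_reservation

policy = derive_policy([10, 12, 11, 40])   # bursty samples -> 20% threshold
print(adjust_reservation(100, 110, policy))  # 10% deviation < 20%: unchanged
```

Because the policy is derived per LSP, it can be applied to one congested path while the rest of the LSPs keep their defaults, matching the "fewer than all" language above.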

Transaction-enabled systems and methods for resource acquisition for a fleet of machines

The present disclosure describes transaction-enabling systems and methods. A system can include a controller and a fleet of machines, each having at least one of a compute task requirement, a networking task requirement, and an energy consumption task requirement. The controller may include a resource requirement circuit to determine an amount of a resource for each of the machines to service the task requirement for each machine, a forward resource market circuit to access a forward resource market, and a resource distribution circuit to execute an aggregated transaction of the resource on the forward resource market.
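The aggregate-then-distribute pattern can be sketched as follows, with an invented fleet schema standing in for the resource requirement and distribution circuits:

```python
def aggregate_requirements(fleet):
    """fleet: {machine_id: {resource: amount}} (illustrative schema).
    Sum per-machine requirements into one aggregated forward-market order."""
    totals = {}
    for requirements in fleet.values():
        for resource, amount in requirements.items():
            totals[resource] = totals.get(resource, 0) + amount
    return totals

def distribute(purchased, fleet, resource):
    """Split an executed aggregate purchase back to the machines pro rata."""
    needed = sum(reqs.get(resource, 0) for reqs in fleet.values())
    return {m: purchased * reqs.get(resource, 0) / needed for m, reqs in fleet.items()}

fleet = {"m1": {"energy": 30, "compute": 4}, "m2": {"energy": 70}}
print(aggregate_requirements(fleet))
print(distribute(100, fleet, "energy"))
```

Executing one aggregated transaction on the forward market, rather than one per machine, is what the abstract's resource distribution circuit is for; the pro-rata split here is one simple allocation rule among many possible ones.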

Methods and apparatus for supporting dynamic network scaling based on learned patterns and sensed data

Methods and apparatus for predicting the communications resources that will be needed at a venue, and then dynamically controlling the amount of available resources, are described. In various embodiments, real-time or near-real-time video of areas of the venue is used to predict the number of people in a portion of the venue and/or their direction of movement. Along with other information, such as the type of event and/or the event schedule, the collected information is supplied to a set of trained resource requirement models used to predict future resource needs at the venue, e.g., while an event is ongoing. Commands are sent to dynamically vary the amount of communications resources provided to one or more portions of the venue. Resources that can be varied include, but are not limited to, fixed wired WAN bandwidth, WiFi bandwidth, cellular bandwidth, network-based on-demand services, transcoding services, firewall services, etc.
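The predict-then-provision loop might look like the following sketch, with a fixed per-person demand table standing in for the trained resource requirement models (all figures are invented):

```python
PER_PERSON_MBPS = {"concert": 0.8, "sports": 0.5}  # invented per-event demand figures

def predict_demand_mbps(estimated_headcount, event_type):
    """Stand-in for the trained models: a crowd estimate (e.g. derived from
    venue video) scaled by a per-event-type demand figure."""
    return estimated_headcount * PER_PERSON_MBPS.get(event_type, 0.6)

def scaling_command(zone, current_mbps, predicted_mbps, headroom=1.25):
    """Command sent to dynamically vary resources for one portion of the venue."""
    target = predicted_mbps * headroom
    if target > current_mbps:
        return {"zone": zone, "action": "increase", "target_mbps": target}
    return {"zone": zone, "action": "hold", "target_mbps": current_mbps}

demand = predict_demand_mbps(2000, "concert")
print(scaling_command("north_stand", 1500.0, demand))
```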

Load adaptation architecture framework for orchestrating and managing services in a cloud computing system

According to one aspect of the concepts and technologies disclosed herein, a cloud computing system can include a load adaptation architecture framework that performs operations for orchestrating and managing one or more services that may operate within at least one of layers 4 through 7 of the Open Systems Interconnection (“OSI”) communication model. The cloud computing system also can include a virtual resource layer. The virtual resource layer can include a virtual network function that provides, at least in part, a service. The cloud computing system also can include a hardware resource layer. The hardware resource layer can include a hardware resource that is controlled by a virtualization layer. The virtualization layer can cause the virtual network function to be instantiated on the hardware resource so that the virtual network function can be used to support the service.
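The layering in this abstract can be sketched as a toy placement routine: the virtualization layer controls the hardware resources and instantiates a VNF on one of them so the VNF can support its service. Class and field names below are invented for illustration:

```python
class VirtualizationLayer:
    """Toy model: controls hardware resources and instantiates VNFs on them."""

    def __init__(self, hardware_resources):
        # e.g. [{"id": "hw-1", "free_cpus": 8}]
        self.hardware_resources = hardware_resources

    def instantiate(self, vnf):
        """Place the virtual network function on the first host with capacity."""
        for host in self.hardware_resources:
            if host["free_cpus"] >= vnf["cpus"]:
                host["free_cpus"] -= vnf["cpus"]
                return {"vnf": vnf["name"], "host": host["id"], "service": vnf["service"]}
        raise RuntimeError("no hardware resource can host this VNF")

layer = VirtualizationLayer([{"id": "hw-1", "free_cpus": 8}])
placement = layer.instantiate({"name": "vFirewall", "cpus": 4, "service": "L4 filtering"})
print(placement)
```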

METHOD AND DEVICE OF COMMUNICATION IN A COMMUNICATION SYSTEM USING AN OPEN RADIO ACCESS NETWORK

A method and apparatus are provided for supporting multiple-input multiple-output (MIMO) operation by a service management and orchestration (SMO) entity in a communication system using an open radio access network (O-RAN). The method includes receiving, from a first entity, first data, the first entity including at least one of an O-RAN centralized unit (O-CU) and an O-RAN distributed unit (O-DU), the first data including MIMO-related information collected from the first entity; determining, based on the first data, a configuration for applying at least one of single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO); and transmitting, to a second entity that controls the first entity in the O-RAN, information on the configuration.
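One plausible shape for the SMO's decision step is sketched below. The field names, thresholds, and the "near-RT-RIC" target are illustrative assumptions, not taken from the O-RAN specifications:

```python
def decide_mimo_config(first_data):
    """Choose SU-MIMO vs MU-MIMO from measurements reported by an O-CU/O-DU.
    Field names and thresholds are invented for illustration."""
    many_users = first_data["active_users"] >= 4
    # Low channel correlation suggests users are spatially separable.
    separable = first_data["channel_correlation"] < 0.3
    if many_users and separable:
        return {"mode": "MU-MIMO", "max_layers": 8}
    return {"mode": "SU-MIMO", "max_layers": 2}

def send_configuration(second_entity, config):
    """Stand-in for the SMO transmitting the configuration to the controlling entity."""
    return {"target": second_entity, "config": config}

config = decide_mimo_config({"active_users": 12, "channel_correlation": 0.1})
print(send_configuration("near-RT-RIC", config))
```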

SYSTEMS AND METHODS FOR MULTI-CLOUD VIRTUALIZED INSTANCE DEPLOYMENT AND EXECUTION

A system may receive a first definition for a virtualized instance of a network function. The first definition may include a first set of declarations in a first format that is different than respective formats supported by different virtualized environments. The system may select a first virtualized environment to run the virtualized instance based on requirements specified within the first definition, and may generate a second definition with a second set of declarations that map the first set of declarations from the first format to a second format supported by the first virtualized environment. The system may deploy the virtualized instance to the first virtualized environment using the second set of declarations from the second definition. Deploying the virtualized instance may include configuring its operation based on some of the second set of declarations matching a configuration format supported by the first virtualized environment.
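The select-translate-deploy flow can be sketched with invented per-environment key maps standing in for the second, environment-specific formats:

```python
ENV_KEY_MAPS = {
    # Invented mapping tables: generic declaration keys -> per-environment keys.
    "env_a": {"cpu": "compute.cores", "memory": "compute.ram_mb", "image": "boot.image"},
    "env_b": {"cpu": "spec.cpu", "memory": "spec.mem", "image": "spec.container"},
}

def select_environment(definition, environments):
    """Pick the first virtualized environment whose capacity meets the
    requirements specified within the first definition."""
    for name, capacity in environments.items():
        if capacity["cpu"] >= definition["cpu"]:
            return name
    raise RuntimeError("no environment satisfies the definition")

def translate_definition(definition, env_name):
    """Map the first set of declarations into the environment-specific format."""
    key_map = ENV_KEY_MAPS[env_name]
    return {key_map[k]: v for k, v in definition.items() if k in key_map}

definition = {"cpu": 4, "memory": 2048, "image": "vnf:1.2"}
env = select_environment(definition, {"env_a": {"cpu": 2}, "env_b": {"cpu": 8}})
print(env, translate_definition(definition, env))
```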

Enhanced selection of cloud architecture profiles

This document describes modeling and simulation techniques to select a cloud architecture profile based on correlations between application workloads and resource utilization. In some aspects, a method includes obtaining infrastructure data specifying utilization of computing resources of an existing computing system. Application workload data specifying tasks performed by one or more applications running on the existing computing system is obtained. One or more models are generated based on the infrastructure data and the application workload data. The model(s) define an impact on utilization of each computing resource in response to changes in workloads of the application(s). A workload is simulated, using the model(s), on a candidate cloud architecture profile that specifies a set of computing resources. A simulated utilization of each computing resource of the candidate cloud architecture profile is determined based on the simulation. An updated cloud architecture profile is generated based on the simulated utilization.
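As a minimal sketch of the model-simulate-update loop, the snippet below fits crude linear per-task coefficients in place of the models described above; the 0.7 utilization target and all names are assumptions:

```python
def build_model(infra_utilization, workload_count):
    """Fit per-task resource coefficients from the existing system's
    infrastructure data (resource units consumed per task)."""
    return {res: used / workload_count for res, used in infra_utilization.items()}

def simulate(model, workload_count, candidate_profile):
    """Simulated utilization fraction of each resource in the candidate profile."""
    return {res: model[res] * workload_count / candidate_profile[res] for res in model}

def update_profile(candidate_profile, utilization, target=0.7):
    """Resize each resource so simulated utilization lands near the target."""
    return {res: candidate_profile[res] * utilization[res] / target for res in utilization}

# Existing system: 1000 tasks consumed 8 cores and 24 GB.
model = build_model({"cpu_cores": 8.0, "memory_gb": 24.0}, workload_count=1000)
sim = simulate(model, workload_count=2000,
               candidate_profile={"cpu_cores": 32.0, "memory_gb": 64.0})
print(sim)
print(update_profile({"cpu_cores": 32.0, "memory_gb": 64.0}, sim))
```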

System, method, and computer program for determining a network situation in a communication network

A system, method, and computer program product are provided for determining a network situation in a communication network. In use, at least one threshold value of at least one operational parameter of a communication network is obtained, the at least one operational parameter representing at least one operational status of at least one of a computational device or a communication device. Additionally, log data of the communication network is obtained, the log data containing at least one value of the at least one operational parameter reported by at least one network entity of the communication network. The at least one value of the at least one operational parameter of the log data is compared with a corresponding threshold value to form a detection of a network situation. Further, the detection of the network situation is reported if the at least one value of the at least one operational parameter of the log data traverses the corresponding threshold value.
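The compare-and-report step reduces to a threshold sweep over the log data; the record schema below is an illustrative assumption, and "traverses" is simplified here to "exceeds":

```python
def detect_situations(thresholds, log_records):
    """Compare reported parameter values against their thresholds; a record
    whose value traverses (here: exceeds) its threshold becomes a detection."""
    detections = []
    for record in log_records:
        limit = thresholds.get(record["parameter"])
        if limit is not None and record["value"] > limit:
            detections.append({
                "entity": record["entity"],
                "parameter": record["parameter"],
                "value": record["value"],
                "threshold": limit,
            })
    return detections

thresholds = {"cpu_load": 0.9, "packet_loss": 0.01}
log = [
    {"entity": "router-1", "parameter": "cpu_load", "value": 0.95},
    {"entity": "router-2", "parameter": "packet_loss", "value": 0.002},
]
print(detect_situations(thresholds, log))  # only router-1 traverses its threshold
```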