H04L47/803

Device Group Partitions and Settlement Platform
20230040365 · 2023-02-09

Device group partitions and a settlement platform are provided. In some embodiments, device group partitions (e.g., partitions of devices based on associated device groups) are provided. In some embodiments, a settlement platform service is provided. In some embodiments, a settlement platform service is provided for partitioned devices. In some embodiments, collecting device-generated service usage information for one or more devices in wireless communication on a wireless network, and aggregating the device-generated service usage information for a settlement platform for the one or more devices in wireless communication on the wireless network, is provided. In some embodiments, a settlement platform implements a service billing allocation and/or a service/transactional revenue share among one or more partners. In some embodiments, service usage information includes micro-CDRs, which are used for CDR mediation or reconciliation that provides service usage accounting for any desired device activity. In some embodiments, each device activity that is to be associated with a billing event is assigned a micro-CDR transaction code, and a service processor of the device is programmed to account for the activity associated with that transaction code. In some embodiments, a service processor executing on a wireless communications device periodically reports (e.g., during each heartbeat or based on any other periodic, push, and/or pull communication technique(s)) micro-CDR usage measures to, for example, a service controller or some other network element for CDR mediation or reconciliation.
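As a minimal sketch of the accounting flow described above (device identifiers, transaction codes, rates, and the partner split are all illustrative assumptions, not details from the patent), micro-CDR aggregation and a revenue share might look like:

```python
from collections import defaultdict

def aggregate_micro_cdrs(reports):
    """Aggregate periodically reported micro-CDR usage measures per
    (device, transaction code), as CDR mediation/reconciliation would."""
    totals = defaultdict(int)
    for device_id, tx_code, usage_bytes in reports:
        totals[(device_id, tx_code)] += usage_bytes
    return dict(totals)

def revenue_share(totals, rate_per_byte, partner_split):
    """Allocate the billed amount among partners by a fixed split (illustrative)."""
    billed = sum(usage * rate_per_byte[tx] for (_, tx), usage in totals.items())
    return {partner: billed * share for partner, share in partner_split.items()}

# Each heartbeat, the device's service processor reports usage tagged with a
# micro-CDR transaction code ("TC01", "TC02" are assumed codes for two activities).
reports = [("dev1", "TC01", 500), ("dev1", "TC01", 300), ("dev1", "TC02", 200)]
totals = aggregate_micro_cdrs(reports)
shares = revenue_share(totals, {"TC01": 0.001, "TC02": 0.002},
                       {"carrier": 0.7, "partner": 0.3})
```

The per-activity transaction code is what lets mediation bill each device activity separately rather than as one undifferentiated usage total.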

Interoperable cloud based media processing using dynamic network interface
11496414 · 2022-11-08

A method of processing media content in Moving Picture Experts Group (MPEG) Network Based Media Processing (NBMP) includes obtaining a plurality of tasks for processing the media content, providing an interface between an NBMP workflow manager and a cloud manager by providing an NBMP Link application program interface (API), which links the plurality of tasks together, identifying an amount of network resources to be used for processing the media content, by using the NBMP Link API, and processing the media content in accordance with the identified amount of network resources.
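A rough sketch of the workflow-linking idea (the `Task` class, function names, and bandwidth figures are assumptions for illustration, not the actual MPEG NBMP Link API):

```python
class Task:
    """One NBMP-style media processing task with an assumed output rate."""
    def __init__(self, name, output_mbps):
        self.name = name
        self.output_mbps = output_mbps

def link_tasks(tasks):
    """Link consecutive tasks and record the bandwidth each link must carry."""
    return [(a.name, b.name, a.output_mbps) for a, b in zip(tasks, tasks[1:])]

def required_network_resources(links):
    """Amount of network resources the cloud manager should provision
    for the linked workflow."""
    return sum(bw for _, _, bw in links)

pipeline = [Task("decode", 80), Task("scale", 40), Task("encode", 8)]
links = link_tasks(pipeline)
total_mbps = required_network_resources(links)
```

Identifying the per-link bandwidth up front is what lets the workflow manager hand the cloud manager a concrete resource figure before processing starts.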

CONTROL APPARATUS, CONTROL METHOD AND PROGRAM

A control apparatus includes an acquisition unit that acquires application information related to at least one of a plurality of sessions at a plurality of timings for each of the plurality of sessions related to a plurality of types of applications that are communicating in a network, a search unit that searches for an allocation band of each of the plurality of sessions at the plurality of timings based on a QoE prediction value of each of the plurality of sessions derived from the application information and a condition for a band of the network, and a control unit that controls allocation of a band for each of the plurality of sessions at the plurality of timings based on the allocation band searched for by the search unit.
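A toy sketch of the search unit's job (the QoE model, application types, and exhaustive search are illustrative assumptions; a real apparatus would use a learned QoE predictor and a smarter search):

```python
import itertools

def predict_qoe(app_type, band):
    """Toy QoE prediction from application information: QoE saturates at 1.0
    once the session gets its assumed required band."""
    need = {"video": 8, "voice": 1, "web": 3}[app_type]
    return min(1.0, band / need)

def search_allocation(sessions, total_band, step=1):
    """Search per-session allocation bands that maximize total predicted QoE
    subject to the network band condition (sum <= total_band)."""
    best, best_score = None, -1.0
    choices = range(0, total_band + 1, step)
    for alloc in itertools.product(choices, repeat=len(sessions)):
        if sum(alloc) > total_band:
            continue
        score = sum(predict_qoe(s, b) for s, b in zip(sessions, alloc))
        if score > best_score:
            best, best_score = alloc, score
    return best, best_score

alloc, score = search_allocation(["video", "voice", "web"], total_band=12)
```

Re-running this search at each timing, as new application information arrives, gives the control unit an updated allocation to enforce.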

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD
20230095186 · 2023-03-30

An information processing device (10, 10A) includes a band requesting unit (162, 162A), an adjusting unit (164, 164A), and a transmitting unit (140). The band requesting unit (162, 162A) requests, according to a bandwidth necessary for transmitting information including a moving image, a use reservation of the bandwidth. The adjusting unit (164, 164A) adjusts, according to a result of the request by the band requesting unit and a reserved bandwidth, an information amount of information to be transmitted. The transmitting unit (140) converts the information with the adjusted information amount into a transmission signal and transmits the transmission signal.
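A minimal sketch of the request-then-adjust behavior (the partial-grant rule and the frame-scaling formula are assumptions for illustration):

```python
def request_reservation(requested_kbps, network_available_kbps):
    """Band requesting unit: request a use reservation of the bandwidth needed
    to transmit the moving image; the network may grant only part of it."""
    return min(requested_kbps, network_available_kbps)

def adjust_information_amount(frame_bytes, fps, reserved_kbps):
    """Adjusting unit: scale the per-frame information amount so the stream
    fits within the reserved bandwidth."""
    needed_kbps = frame_bytes * 8 * fps / 1000
    if needed_kbps <= reserved_kbps:
        return frame_bytes
    return int(frame_bytes * reserved_kbps / needed_kbps)

# Device asks for 4 Mbps but the network can only reserve 3 Mbps,
# so each 20 kB frame is shrunk before the transmitting unit sends it.
reserved = request_reservation(requested_kbps=4000, network_available_kbps=3000)
frame = adjust_information_amount(frame_bytes=20_000, fps=30, reserved_kbps=reserved)
```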

Smart cascading security functions for 6G or other next generation network

In a 6G network, microservices can be utilized in the absence of a core network. For example, after a mobile device has authenticated, through its carrier network, with a transport service layer, microservices can be allocated to the mobile device without having to be transmitted via the core network. Thus, removing the core network from the process can create a direct line of microservices from the transport layer to the end user. Furthermore, additional microservices and/or resources can be accessed through a microservices library. Consequently, packets can be securely transmitted by a wireless network, facilitating the sending of packet profile data from one node device to many in anticipation of the packet traversing the various node devices.

Background Data Transfer Policy Formulation Method, Apparatus, and System
20230098362 · 2023-03-30

Embodiments of this application disclose a background data transfer policy formulation method, apparatus, and system. The method includes: a first policy control network element sends, to a first network element, a first message used to request a background data transfer policy stored in the first network element, and obtains, from the first network element, the background data transfer policy stored in the first network element and second decision information that is used to formulate a second background data transfer policy. The first policy control network element formulates a first background data transfer policy based on first decision information used to formulate the first background data transfer policy, the second decision information, and the background data transfer policy stored in the first network element.
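One way to picture the final formulation step is as a merge of the three inputs. The rule below (keep the stored policy's transfer window, take the tighter volume limit from the two decision inputs) is purely an illustrative assumption:

```python
def formulate_bdt_policy(stored_policy, first_decision, second_decision):
    """Formulate a first background data transfer policy from the policy
    stored in the first network element plus both decision inputs
    (illustrative merge rule, not the 3GPP-defined procedure)."""
    return {
        "time_window": stored_policy["time_window"],
        "max_volume_mb": min(first_decision["max_volume_mb"],
                             second_decision["max_volume_mb"]),
    }

stored = {"time_window": "01:00-05:00", "max_volume_mb": 500}
policy = formulate_bdt_policy(stored,
                              first_decision={"max_volume_mb": 400},
                              second_decision={"max_volume_mb": 300})
```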

Enhanced redeploying of computing resources

Examples described herein relate to a method, a resource management system, and a non-transitory machine-readable medium for redeploying a computing resource. Data related to a performance parameter corresponding to a plurality of computing resources deployed on a plurality of host-computing nodes may be received. The performance parameter is associated with one or both of: communication between computing resources of the plurality of computing resources, or communication of the plurality of computing resources with a network device. Further, for a computing resource of the plurality of computing resources, a candidate host-computing node is determined from the plurality of host-computing nodes based on the data related to the performance parameter, and the computing resource may be redeployed on the candidate host-computing node.
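As a hedged sketch of the candidate-selection step (using communication latency as the performance parameter; the names and numbers are assumptions):

```python
def pick_candidate_host(resource, hosts, latency_ms):
    """Determine the candidate host-computing node for a computing resource:
    here, the node with the lowest measured communication latency."""
    return min(hosts, key=lambda h: latency_ms[(resource, h)])

# Received performance-parameter data: latency from resource "vm-1" to each node.
latency = {
    ("vm-1", "host-a"): 12.0,
    ("vm-1", "host-b"): 3.5,
    ("vm-1", "host-c"): 7.2,
}
target = pick_candidate_host("vm-1", ["host-a", "host-b", "host-c"], latency)
```

The resource would then be redeployed on `target`; a fuller system would also weigh resource-to-network-device communication, as the abstract notes.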

Centrally managed time-sensitive fog networks

The present disclosure envisages optimization of a time-sensitive fog network deployed in an industrial environment. The time-sensitive fog network comprises a plurality of fog nodes communicably coupled to a plurality of pieces of industrial equipment referenced as endpoints. Each fog node is embodied with a plurality of computer-based resources including computational resources, storage resources, security resources, network resources, application-specific resources, and device-specific resources. The resource constraints that warrant the endpoints to cooperate with specific fog nodes to access specific resources are manifested as a compute profile, a storage profile, a security profile, a network profile, an application-specific profile, and a device-specific profile. The endpoints are optimally provisioned to cooperate with the fog nodes and consume the computer-based resources embodied therein, based on a deployment model that optimally and deterministically correlates the plurality of computer-based resources embodied in each of the fog nodes to the resource profiles attributed to each of the endpoints.
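A toy version of the profile-matching deployment model (only four of the six profile kinds are shown, and the first-fit rule, node names, and capacities are illustrative assumptions):

```python
PROFILE_KEYS = ("compute", "storage", "security", "network")

def matches(fog_node, endpoint_profile):
    """A fog node can serve an endpoint if every profiled resource demand
    is within the node's corresponding capacity."""
    return all(fog_node[k] >= endpoint_profile[k] for k in PROFILE_KEYS)

def provision(endpoints, fog_nodes):
    """Deterministically assign each endpoint to the first fog node whose
    resources satisfy all of its resource profiles."""
    plan = {}
    for ep_name, profile in endpoints.items():
        for node_name, caps in fog_nodes.items():
            if matches(caps, profile):
                plan[ep_name] = node_name
                break
    return plan

fog_nodes = {
    "fog-1": {"compute": 4, "storage": 100, "security": 1, "network": 10},
    "fog-2": {"compute": 16, "storage": 500, "security": 2, "network": 40},
}
endpoints = {
    "plc-1": {"compute": 2, "storage": 10, "security": 1, "network": 5},
    "camera-1": {"compute": 8, "storage": 200, "security": 2, "network": 25},
}
plan = provision(endpoints, fog_nodes)
```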

CONTROLLING PLACEMENT OF WORKLOADS OF AN APPLICATION WITHIN AN APPLICATION ENVIRONMENT

A technique is directed toward controlling placement of workloads of an application within an application environment. The technique involves, while a first placement of workloads of the application is in a first deployment of resources within the application environment, generating a set of resource deployment changes that accommodates a predicted change in demand on the application. The technique further involves adjusting the first deployment of resources within the application environment to form a second deployment of resources within the application environment, the second deployment of resources being different from the first deployment of resources. The technique further involves providing a second placement of workloads of the application in the second deployment of resources to accommodate the predicted change in demand on the application, the second placement of workloads being different from the first placement of workloads.
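The generate-change/adjust/re-place sequence above can be sketched as follows (the replica-scaling rule and round-robin placement are illustrative assumptions, not the patented technique itself):

```python
import math

def plan_deployment(current_replicas, predicted_demand, capacity_per_replica):
    """Generate the resource deployment change that accommodates a predicted
    change in demand: scale the replica count to cover the demand."""
    needed = math.ceil(predicted_demand / capacity_per_replica)
    return {"replicas": needed, "change": needed - current_replicas}

def place_workloads(workloads, replicas):
    """Second placement of workloads: spread them round-robin across the
    adjusted deployment of resources."""
    placement = {f"replica-{i}": [] for i in range(replicas)}
    for i, w in enumerate(workloads):
        placement[f"replica-{i % replicas}"].append(w)
    return placement

# Demand is predicted to reach 900 units; each replica handles 200,
# so the deployment grows from 2 to 5 replicas before workloads are re-placed.
plan = plan_deployment(current_replicas=2, predicted_demand=900,
                       capacity_per_replica=200)
placement = place_workloads(["w1", "w2", "w3", "w4", "w5"], plan["replicas"])
```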

Method and apparatus for creating network slices
11606255 · 2023-03-14

Disclosed are a method and an apparatus for creating network slices. The method for creating network slices comprises: creating a slice-bundles link between a first node and a second node, wherein the slice-bundles link comprises at least one member link created between the first node and the second node; and configuring a packet service for the slice-bundles link.
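A minimal object sketch of the two claimed steps (class and attribute names, link identifiers, and the capacity figures are assumptions for illustration):

```python
class SliceBundleLink:
    """A slice-bundles link between a first node and a second node,
    aggregating one or more member links and carrying a packet service."""
    def __init__(self, first_node, second_node):
        self.first_node = first_node
        self.second_node = second_node
        self.member_links = []
        self.packet_service = None

    def add_member_link(self, link_id, capacity_gbps):
        self.member_links.append((link_id, capacity_gbps))

    def configure_packet_service(self, service):
        # The claim requires at least one member link in the bundle.
        if not self.member_links:
            raise ValueError("a slice-bundles link needs at least one member link")
        self.packet_service = service

    def capacity(self):
        return sum(c for _, c in self.member_links)

# Step 1: create the slice-bundles link with its member links;
# step 2: configure a packet service for it.
bundle = SliceBundleLink("nodeA", "nodeB")
bundle.add_member_link("link-1", 10)
bundle.add_member_link("link-2", 10)
bundle.configure_packet_service("l3vpn-slice-7")
```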