
SYSTEMS AND METHODS FOR DYNAMIC NETWORK FUNCTION RESOURCE ALLOCATION THROUGH THE NETWORK REPOSITORY FUNCTION

A device may include a processor configured to register a network function, of a core network associated with a radio access network, in a network function repository for the core network. The processor may be further configured to obtain load information for the network function, wherein the load information indicates a load associated with the network function during a time period; determine that the load associated with the network function has reached a threshold based on the load information; and send an alert to an orchestration system to adjust a capacity for the network function, in response to determining that the load associated with the network function has reached the threshold.
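The registration, load-reporting, and threshold-alerting steps above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class names, the 0.8 threshold, and the alert interface are all assumptions.

```python
# Hypothetical sketch: a repository registers network functions, records
# load samples over time, and alerts an orchestration system to adjust
# capacity once a load sample reaches a threshold.

LOAD_THRESHOLD = 0.8  # fraction of capacity; assumed value for illustration


class Orchestrator:
    """Stand-in for the orchestration system that adjusts NF capacity."""
    def __init__(self):
        self.alerts = []

    def alert(self, nf_id, load):
        # A real orchestrator would scale the network function here.
        self.alerts.append((nf_id, load))


class NetworkFunctionRepository:
    def __init__(self, orchestrator):
        self.registry = {}   # nf_id -> profile of the registered NF
        self.loads = {}      # nf_id -> load samples over the time period
        self.orchestrator = orchestrator

    def register(self, nf_id, profile):
        self.registry[nf_id] = profile

    def report_load(self, nf_id, load):
        """Record a load sample; alert if the threshold is reached."""
        self.loads.setdefault(nf_id, []).append(load)
        if load >= LOAD_THRESHOLD:
            self.orchestrator.alert(nf_id, load)
```

For example, registering an `"amf-1"` function and reporting a load of 0.9 would trigger one alert, while a load of 0.5 would not.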

Wireless communications in a system that supports a first subframe type having a first symbol duration and a second subframe type having a second symbol duration

Methods, systems, and devices are described for low latency communications within a wireless communications system. An eNB and/or a UE may be configured to operate within the wireless communications system and may send triggers to initiate communications using a dedicated resource in a wireless communications network that supports transmissions having a first subframe type and a second subframe type, the first subframe type comprising symbols of a first duration and the second subframe type comprising symbols of a second duration that is shorter than the first duration. Communications may be initiated by transmitting a trigger from the UE or eNB using the dedicated resource, and initiating communications following the trigger. The duration of time between the trigger and initiating communications can be significantly shorter than the time to initiate communications using legacy LTE communications.

Low latency wireless messaging
11516694 · 2022-11-29

Technology for wireless transmission of messages to remote receiving devices is disclosed. The technology includes receiving a message for transmission, determining transmission parameters for transmission of the message, and transmitting the message to a remote receiving device according to the determined transmission parameters. The technology may also include encoding the message to affect message latency and may be employed for message transmission via the ionosphere or other atmospheric layer at frequencies in the Medium Frequency (MF), High Frequency (HF), or Very High Frequency (VHF) spectrum. Further, the disclosed technology may be employed for message transmission to effect low latency execution of financial transactions, such as high-speed, high-frequency trading.

ALLOCATING ADDITIONAL BANDWIDTH TO RESOURCES IN A DATACENTER THROUGH DEPLOYMENT OF DEDICATED GATEWAYS

Some embodiments provide policy-driven methods for deploying edge forwarding elements in a public or private SDDC for tenants or applications. For instance, the method of some embodiments allows administrators to create different traffic groups for different applications and/or tenants, deploys edge forwarding elements for the different traffic groups, and configures forwarding elements in the SDDC to direct data message flows of the applications and/or tenants through the edge forwarding elements deployed for them. The policy-driven method of some embodiments also dynamically deploys edge forwarding elements in the SDDC for applications and/or tenants after detecting the need for the edge forwarding elements based on monitored traffic flow conditions.
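The per-group deployment step can be illustrated with a short sketch: one dedicated edge forwarding element is deployed per traffic group, and each application's flows are routed through the gateway for its group. The function and naming scheme here are hypothetical, not the disclosed policy engine.

```python
# Illustrative sketch: deploy a dedicated edge gateway per traffic group
# and build a routing table mapping each application to its gateway.

def deploy_edges(traffic_groups):
    """traffic_groups: {group_name: [app, ...]} -> {app: gateway_name}."""
    routes = {}
    for group, apps in traffic_groups.items():
        gateway = f"edge-gw-{group}"   # one dedicated gateway per group
        for app in apps:
            routes[app] = gateway      # direct this app's flows to it
    return routes
```

A dynamic variant would call `deploy_edges` again after traffic monitoring detects that a group newly needs its own gateway.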

Systems, devices, and methods for controlling operation of wearable displays during vehicle operation
11595878 · 2023-02-28

The present systems, devices, and methods generally relate to controlling wearable displays during vehicle operation, and particularly to detecting when a user is operating a vehicle and restricting operation of a wearable display to prevent the user from being distracted. At least one processor of a wearable display system receives user context data from at least one user context sensor, and determines whether the user is operating a vehicle based on the user context data. If the user is operating a vehicle, presentation of at least one user interface is restricted. Unrestricted access can be restored by inputting an unlock input to override the restriction, or by analysis of additional user context data at a later time.
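The detect-restrict-unlock flow above can be sketched as below. The sensor fields and the crude speed heuristic are assumptions for illustration; the actual classifier over user context data is not specified here.

```python
# Minimal sketch: decide from context-sensor data whether the user is
# operating a vehicle, restrict the display if so, and allow an explicit
# unlock input to override the restriction.

def is_operating_vehicle(context):
    # Hypothetical heuristic over context sensors (speed, seat position).
    return context.get("speed_kmh", 0) > 10 and context.get("in_driver_seat", False)


class WearableDisplay:
    def __init__(self):
        self.restricted = False

    def update(self, context):
        """Re-evaluate context data; restrict UI presentation if driving."""
        if is_operating_vehicle(context):
            self.restricted = True

    def unlock(self, user_input):
        # The user asserts they are not the driver (e.g., a passenger).
        if user_input == "override":
            self.restricted = False
```

Unrestricted access could also be restored by a later `update` call that re-analyzes fresh context data, per the abstract.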

AUTOMATIC SCALING FOR CONSUMER SERVERS IN A DATA PROCESSING SYSTEM

A system and method for automatically scaling consumer servers in a data processing system. To build an automatic scaling system, the present disclosure allows consumers to obtain additional information, e.g., the number of events waiting to be read from an aggregator, when receiving an event from the aggregator. This additional number provides a direct gauge for the data processing system to determine when the consumers are over-provisioned (i.e., when the number of events left to be read is close to zero) and when the consumers are under-provisioned (i.e., when the number of events left to be read continues to increase). As a result, the consumers can be automatically scaled to handle the dynamic data processing demand while providing optimal resource allocation.
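The backlog-based gauge described above reduces to a simple rule: scale down when the backlog is near zero, scale up when it keeps growing. A minimal sketch, with thresholds assumed purely for illustration:

```python
# Sketch of the scaling rule: the aggregator reports how many events
# remain to be read, and recent backlog readings drive the decision.

def scaling_decision(backlog_samples, low=5, grow_streak=3):
    """Return 'down', 'up', or 'hold' from recent backlog readings."""
    if backlog_samples[-1] <= low:
        return "down"    # over-provisioned: backlog is close to zero
    rising = all(earlier < later
                 for earlier, later in zip(backlog_samples, backlog_samples[1:]))
    if len(backlog_samples) >= grow_streak and rising:
        return "up"      # under-provisioned: backlog keeps increasing
    return "hold"
```

For example, readings of `[10, 20, 30]` indicate under-provisioning, while a latest reading of 3 indicates the consumers can be scaled down.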

BALANCING TRAFFIC OF MULTIPLE REALMS ACROSS MULTIPLE RESOURCES
20220368634 · 2022-11-17

Methods, computer readable media, and devices for balancing traffic of multiple realms across multiple resources such that a load balancing algorithm delivers equal flows of traffic to the multiple resources are disclosed. One method may include identifying a high risk realm, a first low risk realm, and a second low risk realm from among a plurality of realms; identifying three resources from among a plurality of resources; and distributing the three realms across the three resources such that the high risk realm and the first low risk realm share a first resource, the high risk realm and the second low risk realm share a second resource, the two low risk realms share a third resource, traffic of the high risk realm is load balanced equally, and traffic of the two low risk realms is load balanced unequally.
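The pairing described above can be sketched directly. The specific weight values below are assumptions chosen only to show an equal split for the high-risk realm and unequal splits for the low-risk realms:

```python
# Sketch of the placement: one high-risk realm and two low-risk realms
# spread across three resources, with the high-risk realm's traffic
# split equally and each low-risk realm's traffic split unequally.

def place_realms(high, low1, low2, resources):
    r1, r2, r3 = resources
    placement = {
        r1: [high, low1],  # high-risk shares resource 1 with low-risk 1
        r2: [high, low2],  # high-risk shares resource 2 with low-risk 2
        r3: [low1, low2],  # the two low-risk realms share resource 3
    }
    weights = {
        high: {r1: 0.5, r2: 0.5},    # equal split across two resources
        low1: {r1: 0.25, r3: 0.75},  # unequal split (assumed ratio)
        low2: {r2: 0.25, r3: 0.75},  # unequal split (assumed ratio)
    }
    return placement, weights
```

Each realm's weights sum to 1, so every flow is accounted for while no resource hosts the high-risk realm alone.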

LATENCY-AWARE LOAD BALANCER FOR TOPOLOGY-SHIFTING SOFTWARE DEFINED NETWORKS
20220368646 · 2022-11-17

Techniques are described for performing latency-aware load balancing. In some examples, a computing device communicably coupled to a plurality of service endpoints that are in motion with respect to the computing device may receive data to be processed. The computing device may select, based at least in part on a communication latency of each of the plurality of service endpoints and a predicted compute latency of each of the plurality of service endpoints, a service endpoint out of the plurality of service endpoints to process the data. The computing device may send the data to the selected service endpoint for processing.
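The selection step reduces to minimizing the sum of each endpoint's communication latency and predicted compute latency. A minimal sketch, with the endpoint record fields assumed for illustration:

```python
# Sketch of latency-aware selection: pick the service endpoint whose
# communication latency plus predicted compute latency is smallest.

def select_endpoint(endpoints):
    """endpoints: [{'id': ..., 'comm_ms': ..., 'compute_ms': ...}, ...]"""
    best = min(endpoints, key=lambda e: e["comm_ms"] + e["compute_ms"])
    return best["id"]
```

Because the endpoints are in motion relative to the computing device, the `comm_ms` values would be refreshed before each selection.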

SYSTEMS AND METHODS FOR TRACKING AND EXPORTING FLOWS IN SOFTWARE WITH AUGMENTED PROCESSING IN HARDWARE
20220360506 · 2022-11-10

Systems and methods are provided herein for using a network device's software (e.g., programs executed on a CPU) to maintain and export flow data while offloading network resource intensive tasks to the network device's hardware. This may be accomplished by a network device determining whether a new flow should be tracked using only the software table (e.g., table stored only on the CPU) of the network device or whether certain flow tracking tasks (e.g., counting/parsing) can be offloaded to a hardware table (e.g., counter table in a hardware flow cache) of the network device. The network device may use one or more conditions to determine whether the new flow should be tracked using the software table or by both the software and the hardware table. The conditions can relate to the characteristics of the new flow, resource information, prioritization of the new flow, etc.
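The offload decision above can be sketched as a simple predicate: every new flow is tracked in the software table, and counting/parsing is additionally offloaded to the hardware table only when conditions hold. The priority field and capacity check below are assumed examples of such conditions, not the claimed set:

```python
# Sketch: decide which tables track a new flow. Software (CPU) always
# tracks it; hardware offload depends on flow priority and on free
# entries remaining in the hardware counter table.

def place_flow(flow, hw_free_entries, min_priority=5):
    """Return the list of tables that should track this flow."""
    tables = ["software"]                       # CPU maintains/exports flow data
    if hw_free_entries > 0 and flow.get("priority", 0) >= min_priority:
        tables.append("hardware")               # offload counting/parsing
    return tables
```

Other conditions named in the abstract, such as flow characteristics and resource information, would slot into the same predicate.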