Patent classifications
H04L47/623
Apparatus and method for forwarding handover data in wireless communication system
A technique forwards handover data in a wireless communication system. A base station apparatus includes a first buffer that stores downlink data for a terminal; a handover agent that, when the terminal performs a handover, schedules the data stored in the first buffer for at least one terminal, including the terminal performing the handover, so that the interruption time of the at least one terminal is reduced when the data is forwarded to a target base station; and a communication unit that transmits the data according to the handover agent's scheduling result.
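As a rough illustration of the scheduling described above, the sketch below (Python; names such as `schedule_handover_forwarding` are hypothetical, since the patent specifies no code) orders buffered downlink data so that the handing-over terminal's data is forwarded to the target base station first, reducing its interruption time.

```python
import heapq

class BaseStationBuffer:
    """Sketch of the first buffer: per-terminal downlink data queues."""
    def __init__(self):
        self.queues = {}  # terminal_id -> list of buffered data chunks

    def enqueue(self, terminal_id, chunk):
        self.queues.setdefault(terminal_id, []).append(chunk)

def schedule_handover_forwarding(buffer, handover_terminal):
    """Yield buffered data in transmit order: the handing-over
    terminal's chunks first, then the rest in arrival order."""
    plan = []
    for terminal_id, chunks in buffer.queues.items():
        # Assumed priority model: 0 for the handover terminal, 1 otherwise.
        priority = 0 if terminal_id == handover_terminal else 1
        for seq, chunk in enumerate(chunks):
            heapq.heappush(plan, (priority, seq, terminal_id, chunk))
    while plan:
        _, _, terminal_id, chunk = heapq.heappop(plan)
        yield terminal_id, chunk  # the communication unit would transmit here

buf = BaseStationBuffer()
buf.enqueue("ue1", b"chunk-a"); buf.enqueue("ue2", b"chunk-b"); buf.enqueue("ue1", b"chunk-c")
for tid, chunk in schedule_handover_forwarding(buf, "ue1"):
    print(tid, chunk)  # ue1's data comes out before ue2's
```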
Controlling notification based on power expense and social factors
In one embodiment, a computer system receives an indication of a power state of a client device, identifies messages to be sent to the client device, determines a transmit cost and a value of each of the messages, and stores at least one of the messages in a queue based on the transmit cost and value of the message.
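A minimal sketch of this triage, assuming a simple value-to-cost ratio policy; the `threshold` parameter and the low-battery multiplier are illustrative assumptions, not taken from the abstract:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    text: str
    value: float          # estimated importance to the recipient
    transmit_cost: float  # estimated energy cost to deliver now

def triage(messages: List[Message], battery_low: bool, threshold: float = 1.0):
    """Send a message immediately only if its value justifies its
    transmit cost under the device's current power state; otherwise
    hold it in a queue for later delivery."""
    send_now, queued = [], []
    for m in messages:
        ratio = m.value / m.transmit_cost
        # Assumed policy: a low battery doubles the bar for waking the radio.
        if ratio >= (threshold * 2 if battery_low else threshold):
            send_now.append(m)
        else:
            queued.append(m)
    return send_now, queued

now, later = triage([Message("urgent", 5.0, 1.0), Message("digest", 0.5, 1.0)],
                    battery_low=True)
print([m.text for m in now], [m.text for m in later])  # ['urgent'] ['digest']
```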
COMMUNICATION DEVICE, COMMUNICATION SYSTEM AND COMMUNICATION METHOD
There is provided a communication device in a communication system in which a plurality of communication devices are coupled in series. The communication device includes a memory and a processor coupled with the memory, the processor configured to: receive a control signal included in a signal transmitted from a first communication device of the plurality of communication devices; control the output band in which the communication device transmits the signal, based on a weight value included in the received control signal; update the weight value; and transmit the signal, including the control signal with the updated weight value, to a second communication device of the plurality of communication devices through the controlled output band.
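The weight-relay behavior could look like the following sketch; the proportional-band rule and the weight decrement are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ControlSignal:
    weight: float  # share of the link bandwidth granted downstream

class ChainDevice:
    """One device in a series-coupled chain: read the upstream weight,
    set its own output band, update the weight, pass it on."""
    def __init__(self, name, demand):
        self.name = name
        self.demand = demand  # this device's own traffic demand

    def relay(self, ctrl: ControlSignal, link_capacity: float) -> ControlSignal:
        # Assumed rule: take an output band proportional to the received
        # weight, then shrink the weight by the fraction this hop consumed.
        my_band = min(self.demand, ctrl.weight * link_capacity)
        used_fraction = my_band / link_capacity
        print(f"{self.name}: output band = {my_band:.1f}")
        return ControlSignal(weight=max(ctrl.weight - used_fraction, 0.0))

ctrl = ControlSignal(weight=1.0)
for dev in [ChainDevice("A", 30), ChainDevice("B", 50), ChainDevice("C", 40)]:
    ctrl = dev.relay(ctrl, link_capacity=100.0)
```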
DEVICES AND METHODS FOR MANAGING NETWORK TRAFFIC FOR A DISTRIBUTED CACHE
A programmable switch includes ports and circuitry to receive cache messages for a distributed cache from client devices. The cache messages are queued for sending to memory devices from the ports. Queue occupancy information is generated and sent to a controller that determines, based at least in part on the queue occupancy information, at least one of a cache message transmission rate for a client device and one or more weights for the queues used by the programmable switch. In another aspect, the programmable switch extracts cache request information from a cache message. The cache request information indicates a cache usage and is sent to the controller, which determines, based at least in part on the extracted cache request information, at least one of a cache message transmission rate for a client device and one or more weights for queues used in determining an order for sending cache messages.
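A toy version of the controller's side of this feedback loop, assuming occupancy is reported as a fraction per queue (the exact policy below is hypothetical):

```python
def controller_update(occupancy, base_rate=1000.0):
    """Turn per-queue occupancy fractions reported by the switch into
    per-client send rates and per-queue scheduling weights. Assumed
    policy: throttle clients feeding fuller queues and weight emptier
    queues higher so all queues drain evenly."""
    rates, weights = {}, {}
    for q, occ in occupancy.items():
        rates[q] = round(base_rate * (1.0 - occ))     # messages/s cap for the client
        weights[q] = max(1, round(10 * (1.0 - occ)))  # weight for the switch scheduler
    return rates, weights

rates, weights = controller_update({"q0": 0.9, "q1": 0.2, "q2": 0.5})
print(rates)    # {'q0': 100, 'q1': 800, 'q2': 500}
print(weights)  # {'q0': 1, 'q1': 8, 'q2': 5}
```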
Online task dispatching and scheduling system and method thereof
The present disclosure relates to an online task dispatching and scheduling system. The system includes an end device; an access point (AP) configured to receive a task from the end device; and one or more edge servers configured to receive the task from the AP, each including a task waiting queue, a processing pool, a task completion queue, and a scheduler. The AP further includes a dispatcher that uses Online Learning (OL) to determine the real-time state of network conditions and server loads, and the AP selects the target edge server to which the task is to be dispatched. The scheduler uses Deep Reinforcement Learning (DRL) to generate a task scheduling policy for the one or more edge servers.
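As a stand-in for the OL dispatcher, the sketch below uses an epsilon-greedy bandit over per-server latency estimates; the DRL scheduler is out of scope here, and the class name, epsilon value, and latency model are all illustrative assumptions.

```python
import random

class OnlineDispatcher:
    """Epsilon-greedy stand-in for the AP's Online Learning dispatcher:
    it keeps a running latency estimate per edge server and usually
    dispatches to the best one, while still exploring occasionally."""
    def __init__(self, servers, epsilon=0.1):
        self.estimates = {s: 0.0 for s in servers}
        self.counts = {s: 0 for s in servers}
        self.epsilon = epsilon

    def pick(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.estimates))
        return min(self.estimates, key=self.estimates.get)  # lowest latency wins

    def observe(self, server, latency):
        # Incremental mean update of the latency estimate.
        self.counts[server] += 1
        n = self.counts[server]
        self.estimates[server] += (latency - self.estimates[server]) / n

dispatcher = OnlineDispatcher(["edge1", "edge2", "edge3"])
for _ in range(100):
    s = dispatcher.pick()
    simulated = {"edge1": 5, "edge2": 3, "edge3": 8}[s] + random.random()
    dispatcher.observe(s, simulated)
print(dispatcher.estimates)  # edge2 should end up with the lowest estimate
```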
SYSTEM AND METHOD FOR LATENCY CRITICAL QUALITY OF SERVICE USING CONTINUOUS BANDWIDTH CONTROL
A system and method are provided for a bandwidth manager for packetized data, designed to arbitrate access from multiple high-bandwidth ingress channels (sources) to one lower-bandwidth egress channel (sink). The system calculates which source to grant access to the sink on a word-by-word basis and intentionally corrupts/cuts packets if a source ever loses priority while sending. Each source is associated with a ranking that is recalculated every data word. When a source buffer sends enough words for its absolute rank value to rise above that of another source buffer waiting to send, the system “cuts” the current packet by forcing the sending buffer to stop mid-packet and selects a new, lower-ranked source buffer to send. When multiple requesting source buffers share the same rank, the system employs a weighted-priority randomized scheduler for buffer selection.
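The cut-and-select behavior can be illustrated with the following sketch, where the rank-update rule (rank grows by 1/weight per word sent) and the word granularity are simplifying assumptions:

```python
import random

class Source:
    def __init__(self, name, weight, packet):
        self.name = name
        self.weight = weight        # tie-break weight
        self.rank = 0.0             # recalculated every data word
        self.packet = list(packet)  # words left to send

    def send_word(self):
        word = self.packet.pop(0)
        # Assumed rank update: rank grows with words sent, faster for
        # low-weight sources, so heavy senders eventually yield.
        self.rank += 1.0 / self.weight
        return word

def arbitrate(sources):
    """Grant the sink one word at a time to the lowest-ranked source;
    if the current sender's rank rises above a waiting source's, the
    packet is cut mid-stream and the new source is selected. Ties are
    broken by a weighted random choice."""
    current = None
    while any(s.packet for s in sources):
        ready = [s for s in sources if s.packet]
        best = min(s.rank for s in ready)
        tied = [s for s in ready if s.rank == best]
        choice = random.choices(tied, weights=[s.weight for s in tied])[0]
        if current and choice is not current and current.packet:
            print(f"cut {current.name} mid-packet")
        current = choice
        print(current.name, current.send_word())

arbitrate([Source("A", 3, "aaaa"), Source("B", 1, "bb")])
```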
Programmable traffic management engine
Examples herein describe a programmable traffic management (PTM) engine that includes both programmable and non-programmable hardware components. The non-programmable hardware components generate features that can then be used to perform different traffic management algorithms. Depending on which traffic management algorithm the PTM engine is configured to perform, it may use a subset (or all) of the features. The programmable hardware components in the PTM engine can be customized by the user to perform a selected algorithm using some or all of the features provided by the non-programmable hardware components.
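In software terms, the split might look like the sketch below: a fixed feature-generation stage feeding user-selectable policies. The RED-like and CoDel-like drop policies and their thresholds are hypothetical examples chosen for illustration, not features claimed by the engine.

```python
# Fixed "hardware" feature generators: always computed, never reprogrammed.
def compute_features(packet_len, queue_depth, arrival_ts, now):
    return {
        "length": packet_len,
        "queue_depth": queue_depth,
        "sojourn": now - arrival_ts,  # time the packet spent queued
    }

# Programmable part: each algorithm consumes a subset of the features.
ALGORITHMS = {
    # Hypothetical RED-style policy: drop once the queue is deep.
    "red_like": lambda f: f["queue_depth"] > 80,
    # Hypothetical CoDel-style policy: drop when sojourn time is high.
    "codel_like": lambda f: f["sojourn"] > 0.005,
}

def ptm_decide(algorithm, **raw):
    features = compute_features(**raw)        # non-programmable stage
    return ALGORITHMS[algorithm](features)    # user-selected stage

print(ptm_decide("codel_like", packet_len=1500, queue_depth=10,
                 arrival_ts=0.000, now=0.007))  # True: drop this packet
```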
Dynamic client-server arbiter
An electronic apparatus includes functional circuitry configured to respond to requests from a plurality of client devices; data storage circuitry configured as a plurality of client queues, each storing pending requests from a respective client device; priority determination circuitry configured to assign a priority level to each client queue based at least in part on the requests stored in it; and arbiter circuitry configured to control access to the functional circuitry by the plurality of client devices. The arbiter circuitry monitors the priority level of each client queue and controls the passage of requests from the client queues to the functional circuitry based at least in part on the priority level assigned to each. The priority determination circuitry includes fill level detector circuitry configured to determine a fill level of each client queue.
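A compact model of the fill-level-driven arbitration, with an assumed "fuller queue wins" priority rule (the abstract does not fix the exact mapping from fill level to priority):

```python
from collections import deque

class ClientQueue:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.pending = deque()

    def fill_level(self):
        return len(self.pending) / self.capacity

def priority(queue):
    """Priority determination sketch: a fuller queue gets a higher
    priority so its client is served before the queue overflows."""
    return queue.fill_level()

def arbiter_step(queues):
    """Pass one request from the highest-priority non-empty queue to
    the functional circuitry (stubbed here as a print)."""
    ready = [q for q in queues if q.pending]
    if not ready:
        return
    q = max(ready, key=priority)
    request = q.pending.popleft()
    print(f"serving {q.name}: {request}")

a, b = ClientQueue("A", 4), ClientQueue("B", 4)
a.pending.extend(["a1", "a2", "a3"]); b.pending.extend(["b1"])
for _ in range(4):
    arbiter_step([a, b])  # A is served first while it is fuller
```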
SYSTEMS AND METHODS FOR DIFFERENTIATION OF SERVICE USING IN-BAND SIGNALING
An apparatus includes a network interface for connection to a network and a database configured to store traffic shaping parameters for a traffic shaping scheme for a plurality of classes of data packets. A database loading circuit is configured to obtain the traffic shaping parameters from in-band communication received in a data packet by the network interface and load the traffic shaping parameters into the database. One or more traffic shapers are configured to access the traffic shaping parameters in the database and apply the traffic shaping scheme according to the traffic shaping parameters to the plurality of classes of data packets received by the network interface.
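One way to picture the flow, using a token-bucket shaper per class and a made-up `class:rate:burst` in-band format; the real signaling format is not specified in the abstract.

```python
import time

class TokenBucket:
    """One shaper per traffic class; its parameters come from the
    database, which was loaded from in-band signaling."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst        # bytes/s, bytes
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, size):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

# "Database" of shaping parameters, keyed by traffic class.
shaper_db = {}

def load_inband_parameters(packet_payload):
    """Hypothetical in-band format: comma-separated class:rate:burst."""
    for entry in packet_payload.split(","):
        cls, rate, burst = entry.split(":")
        shaper_db[cls] = TokenBucket(float(rate), float(burst))

load_inband_parameters("voice:125000:1500,bulk:12500:9000")
print(shaper_db["voice"].allow(1200))  # True: within the voice burst
```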
BINDING APPLICATION TO NAMESPACE (NS) TO SET TO SUBMISSION QUEUE (SQ) AND ASSIGNING PERFORMANCE SERVICE LEVEL AGREEMENT (SLA) AND PASSING IT TO A STORAGE DEVICE
A host interface layer in a storage device is described. The host interface layer may include an arbitrator to select a first submission queue (SQ) from a set including at least the first SQ and a second SQ. The first SQ may be associated with a first Quality of Service (QoS) level, and the second SQ may be associated with a second QoS level. A command fetcher may retrieve an input/output (I/O) request from the first SQ. A command parser may place the I/O request in a first command queue from a set including at least the first command queue and a second command queue. The arbitrator may be configured to select the first SQ based at least in part on a first weight associated with the first SQ and a second weight associated with the second SQ. The first weight may be based at least in part on a first total storage capacity of at least one first namespace (NS) associated with the first QoS level, and the second weight may be based at least in part on a second total storage capacity of at least one second NS associated with the second QoS level.
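A sketch of the capacity-weighted arbitration, assuming weights are derived by scaling the total namespace capacity per QoS level (the scaling factor and queue contents are illustrative):

```python
from collections import deque

def ns_capacity_weights(namespaces):
    """Derive per-QoS-level totals from the capacity of the namespaces
    bound to each level, as the abstract describes."""
    totals = {}
    for ns in namespaces:
        totals[ns["qos"]] = totals.get(ns["qos"], 0) + ns["capacity_gb"]
    return totals

def weighted_round_robin(sqs, weights, rounds):
    """Sketch of the arbitrator: in each round, fetch up to `weight`
    commands from each submission queue before moving on."""
    for _ in range(rounds):
        for qos, queue in sqs.items():
            for _ in range(weights.get(qos, 1)):
                if queue:
                    print(f"fetch from {qos} SQ: {queue.popleft()}")

namespaces = [
    {"name": "ns1", "qos": "gold", "capacity_gb": 300},
    {"name": "ns2", "qos": "silver", "capacity_gb": 100},
]
# Assumed scaling: 1 weight unit per 100 GB of bound namespace capacity.
weights = {k: v // 100 for k, v in ns_capacity_weights(namespaces).items()}
sqs = {"gold": deque(["g1", "g2", "g3", "g4"]), "silver": deque(["s1", "s2"])}
weighted_round_robin(sqs, weights, rounds=2)  # gold gets 3 fetches per round
```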