H04L49/9084

Dynamically controlling a local buffer of a modem of a wireless device

Apparatuses, methods, and systems for dynamically controlling a local buffer of a modem of a wireless device are disclosed. One method includes receiving and queuing transmission packets in the local buffer of the modem of the wireless device for wireless transmission to a receiving device, purging each transmission packet from the local buffer after receiving an acknowledgement of reception of the transmission packet from the receiving device, and requesting acknowledgement from the receiving device when a queue of the transmission packets within the local buffer exceeds a threshold level, wherein the receiving device aggregates acknowledgment responses to a plurality of unpurged transmission packets in the local buffer and transmits an aggregated acknowledgment to the modem.
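The queue/purge/threshold interplay described in this abstract can be sketched as follows; the class, the sequence-number bookkeeping, and all names here are illustrative assumptions, not details taken from the claim:

```python
from collections import OrderedDict

class ModemBuffer:
    """Sketch of the modem's local buffer with threshold-triggered,
    aggregated acknowledgements (all names are illustrative)."""

    def __init__(self, ack_threshold):
        self.ack_threshold = ack_threshold
        self.queued = OrderedDict()  # seq -> packet, in transmission order

    def enqueue(self, seq, packet):
        # Queue a transmission packet for wireless transmission.
        self.queued[seq] = packet

    def needs_ack_request(self):
        # Request acknowledgement once the queue exceeds the threshold level.
        return len(self.queued) > self.ack_threshold

    def apply_aggregated_ack(self, acked_seqs):
        # The receiving device acknowledges several unpurged packets at
        # once; purge every packet the aggregate covers.
        for seq in acked_seqs:
            self.queued.pop(seq, None)
        return len(self.queued)
```

The aggregated acknowledgement lets the receiver amortize one response over many packets instead of acknowledging each one individually.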

CONTENT MANAGEMENT SYSTEM FRAMEWORK FOR CLOUD DEPLOYMENT AND OPERATION AS MICROSERVICES
20230029601 · 2023-02-02 ·

The disclosure provides a new content server framework in which functionalities of a content server are implemented as lightweight microservices. At startup of the content server framework, a content server container and a set of microservices are launched. The content server container only has a content server application programming interface (API), which has a controller that can instantiate controller applications, each having a master module and worker(s). When a request is received, the content server API routes it to an appropriate microservice, which stores the request in a repository. The master module retrieves the request from the repository and places it in a queue. The worker picks up the request from the queue and processes it. The controller keeps track of details of each controller application container that it instantiated (e.g., load and status) and automatically scales the number of instances up or down.
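The repository-to-queue-to-worker flow described above can be modeled in-process; the `Repository`, `Master`, and `Worker` classes below are illustrative stand-ins, not the framework's actual components:

```python
import queue

class Repository:
    """In-memory stand-in for the request repository (illustrative)."""
    def __init__(self):
        self._items = []
    def store(self, request):
        # The microservice stores an incoming request here.
        self._items.append(request)
    def drain(self):
        items, self._items = self._items, []
        return items

class Master:
    """Master-module sketch: pulls requests from the repository into a queue."""
    def __init__(self, repo, work_queue):
        self.repo = repo
        self.work_queue = work_queue
    def poll(self):
        for req in self.repo.drain():
            self.work_queue.put(req)

class Worker:
    """Worker sketch: picks a request off the queue and processes it."""
    def __init__(self, work_queue):
        self.work_queue = work_queue
    def run_once(self):
        try:
            req = self.work_queue.get_nowait()
        except queue.Empty:
            return None  # nothing queued
        return f"processed:{req}"
```

In the real framework the repository and queue would be shared services and the controller would launch additional worker containers as load grows; this sketch only shows the hand-off order.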

Data packet processing method and apparatus, and device

Embodiments of the present invention disclose a data packet processing method and apparatus, and a device. The method includes: when a first data packet is received, determining a first cache queue in a first buffer that is used to store the first data packet; buffering the first data packet in a second buffer if the state of the first cache queue is an invalid state, where the data amount of the first data packet is less than the capacity of the second buffer, and the first cache queue is set to the invalid state when the current data amount of the first buffer reaches the capacity of the first buffer; and when the data amount of the second buffer reaches its capacity, sending all data packets in the second buffer to a control plane device.
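A minimal model of this two-buffer scheme, with list lengths standing in for data amounts; the class name, the capacity accounting, and the single shared validity flag are illustrative simplifications:

```python
class TwoTierBuffer:
    """Sketch of the two-buffer scheme (illustrative): packets fill a
    first buffer until it is full, at which point its cache queue is
    marked invalid and further packets spill into a second buffer,
    which is flushed to the control plane when it fills."""

    def __init__(self, first_capacity, second_capacity):
        self.first_capacity = first_capacity
        self.second_capacity = second_capacity
        self.first = []       # packets held in the first buffer
        self.second = []      # overflow packets in the second buffer
        self.queue_valid = True
        self.flushed = []     # packets sent to the control-plane device

    def receive(self, packet):
        if self.queue_valid and len(self.first) < self.first_capacity:
            self.first.append(packet)
            if len(self.first) == self.first_capacity:
                # First buffer full: set the cache queue to the invalid state.
                self.queue_valid = False
        else:
            # Invalid state: buffer the packet in the second buffer instead.
            self.second.append(packet)
            if len(self.second) == self.second_capacity:
                # Second buffer full: send all its packets to the control plane.
                self.flushed.extend(self.second)
                self.second.clear()
```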

Method and system for effective use of internal and external memory for packet buffering within a network device

A mechanism is provided to maximize utilization of internal memory for packet queuing in network devices, while providing an effective use of both internal and external memory to achieve high performance, high buffering scalability, and minimizing power utilization. Embodiments initially store packet data received by the network device in queues supported by an internal memory. If internal memory utilization crosses a predetermined threshold, a background task performs memory reclamation by determining those queued packets that should be targeted for transfer to an external memory. Those selected queued packets are transferred to external memory and the internal memory is freed. Once the internal memory consumption drops below a threshold, the reclamation task stops.
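The hysteresis between the start and stop thresholds can be sketched like this; the function name, the two watermark parameters, and the pluggable `select` policy are assumptions for illustration:

```python
def reclaim_if_needed(internal, external, high_mark, low_mark, select):
    """Background-task sketch (illustrative names): triggered when
    internal-memory use crosses high_mark; transfers selected queued
    packets to external memory until use drops below low_mark."""
    if len(internal) <= high_mark:
        return  # utilization below the start threshold: nothing to do
    while len(internal) > low_mark:
        victim = select(internal)   # policy picks which packet to move
        internal.remove(victim)
        external.append(victim)
```

Keeping the stop threshold below the start threshold prevents the task from oscillating on and off around a single watermark.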

Communication apparatus and communication method

A first node and a second node transmit packets to a third node via a switch. The packets are buffered in a Tx buffer in the switch and then transmitted to the third node. When the third node detects a sign of congestion at the Tx buffer based on the reception frequency of the packets, it identifies, from the transmitter addresses included in the received packets, that the first node and the second node are the nodes transmitting to it, and transmits a control packet carrying a transmission stop request to both. On receiving the stop-request control packet, each of the first node and the second node stops transmission of only those packets addressed to the third node.
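The selective stop-request exchange can be sketched as two small state machines; the class names, the rate-threshold congestion test, and the control-message format are illustrative assumptions:

```python
class ReceiverNode:
    """Third-node sketch: detects a congestion sign from the packet
    reception frequency and issues stop requests (names illustrative)."""
    def __init__(self, rate_threshold):
        self.rate_threshold = rate_threshold
        self.senders = set()

    def on_packet(self, src_addr):
        # Learn the transmitting nodes from transmitter addresses.
        self.senders.add(src_addr)

    def check_congestion(self, observed_rate):
        # A reception rate above the threshold is taken as a congestion sign.
        if observed_rate > self.rate_threshold:
            # One stop-request control packet per known sender.
            return [("STOP_REQUEST", addr) for addr in sorted(self.senders)]
        return []

class SenderNode:
    """First/second-node sketch: on a stop request, pauses only the
    traffic addressed to the requesting node."""
    def __init__(self):
        self.paused_destinations = set()

    def on_control(self, msg, requester_addr):
        if msg == "STOP_REQUEST":
            self.paused_destinations.add(requester_addr)

    def may_send(self, dest_addr):
        # Traffic to other destinations is unaffected.
        return dest_addr not in self.paused_destinations
```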

METHOD FOR ALLOCATING RESOURCE FOR STORING VISUALIZATION INFORMATION, APPARATUS, AND SYSTEM
20230112747 · 2023-04-13 ·

A method for allocating a resource for storing visualization information, an apparatus, and a system are provided. The method includes: a first network device determines a first queue based on a constraint condition, where the first queue is a queue that needs to be visualized. Then, the first network device allocates a first storage resource to the first queue, where the first storage resource is used to store visualization information of the first queue, and the visualization information is information used to visualize the first queue. Therefore, occupation of storage resources in the first network device is reduced.
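The core idea (allocate visualization storage only for queues that satisfy the constraint condition, not for every queue) can be shown in a few lines; the function, the queue records, and the resource naming are all hypothetical:

```python
def allocate_visualization_storage(queues, needs_visualization):
    """Sketch (illustrative): allocate a storage resource only for
    queues that meet the constraint condition, reducing storage-resource
    occupation on the network device."""
    allocations = {}
    for q in queues:
        if needs_visualization(q):
            # One storage resource per queue that must be visualized.
            allocations[q["id"]] = f"vis-store-{q['id']}"
    return allocations
```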

DYNAMICALLY CONTROLLING A LOCAL BUFFER OF A MODEM OF A WIRELESS DEVICE

Apparatuses, methods, and systems for dynamically controlling a local buffer of a modem of a wireless device are disclosed. One method includes receiving transmission packets in the local buffer of the modem of the wireless device for wireless transmission to a receiving device, purging a transmission packet from the local buffer after receiving an acknowledgement of reception of the transmission packet from the receiving device, and requesting acknowledgement from the receiving device when a queue of the transmission packets within the local buffer exceeds a threshold level, wherein a time delay is introduced before the requesting of the acknowledgement, wherein the time delay is based at least on a propagation delay of the wireless transmission between the wireless device and the receiving device.
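The propagation-delay-based deferral can be captured in a single decision function; the round-trip formula and the optional margin are illustrative assumptions, since the claim only says the delay is "based at least on" the propagation delay:

```python
def ack_request_delay(queue_depth, threshold, propagation_delay_ms, margin_ms=0):
    """Sketch (illustrative names): decide whether an acknowledgement
    request is due and, if so, how long to wait before sending it."""
    if queue_depth <= threshold:
        return None  # queue below the threshold level: no request yet
    # Delay the request by at least one round trip so acknowledgements
    # already in flight can arrive and purge packets first.
    return 2 * propagation_delay_ms + margin_ms
```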

System and method for modem stabilization when waiting for AP-driven link recovery
11606316 · 2023-03-14 ·

Various embodiments of methods and systems for a modem-directed application processor boot flow in a portable computing device (“PCD”) are disclosed. An exemplary method includes an application processor that transitions into an idle state, such as a WFI state, for durations of time during a boot sequence that coincide with processing by a DMA engine and/or crypto engine. That is, the application processor may “sleep” while the DMA engine and/or crypto engine process workloads in response to instructions they received from the application processor.
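The sleep-while-offloaded pattern can be sketched with a thread standing in for the DMA/crypto engine and an event wait standing in for the WFI idle state; none of these names come from the patent:

```python
import threading

def boot_stage(engine_work):
    """Sketch (illustrative): the application processor hands a workload
    to a DMA/crypto engine and enters an idle wait (Event.wait plays the
    role of WFI), resuming only when the engine signals completion."""
    done = threading.Event()
    result = {}

    def engine():
        result["out"] = engine_work()  # engine processes the workload
        done.set()                     # completion signal wakes the AP

    threading.Thread(target=engine).start()
    done.wait()                        # AP "sleeps" until signalled
    return result["out"]
```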
