Patent classifications
H04L12/861
SYSTEM AND METHOD OF A HIGH BUFFERED HIGH BANDWIDTH NETWORK ELEMENT
A method and apparatus of a network element that processes a packet are described. In an exemplary embodiment, the network element receives, with a packet switch unit, a data packet that includes a destination address, wherein the packet was received by the network element on an ingress interface. The network element further determines whether the packet is to be stored in an external queue. In addition, the network element identifies the external queue for the packet based on one or more characteristics of the packet. The network element additionally forwards the packet to a packet storage unit, wherein the packet storage unit includes storage for the external queue. Furthermore, the network element receives the packet back from the packet storage unit and forwards it to an egress interface corresponding to the external queue.
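The flow in this abstract (spill to an external queue keyed by a packet characteristic, then drain back toward an egress interface) can be sketched as follows. This is a minimal illustration, not the patented implementation; the internal capacity limit and keying by a `priority` field are assumptions.

```python
from collections import deque

class PacketSwitch:
    """Sketch: packets overflow from internal buffering into external
    queues selected by a packet characteristic (here: priority)."""

    INTERNAL_CAPACITY = 2  # assumed internal buffering limit

    def __init__(self):
        self.internal = deque()
        self.external = {}  # external queues, one per characteristic value

    def receive(self, packet):
        if len(self.internal) < self.INTERNAL_CAPACITY:
            self.internal.append(packet)
        else:
            # identify the external queue from a packet characteristic
            key = packet["priority"]
            self.external.setdefault(key, deque()).append(packet)

    def drain_external(self, key):
        """Retrieve packets back from external storage, in order, for
        forwarding to the egress interface tied to this queue."""
        out = []
        q = self.external.get(key, deque())
        while q:
            out.append(q.popleft())
        return out
```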
Efficient screen image transfer
A system including an externally updatable source display, an image compression algorithm database, a network connection, and a frame transfer engine. The algorithm database comprises a plurality of image compression algorithms. The frame transfer engine is configured to receive a plurality of updates made to the source display, store at least some of the updates in a queue, and select an image compression algorithm from the algorithm database for the current transfer over the network connection based on a bandwidth of the network connection, a size of the update, and the sizes and times of updates currently present in the queue.
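The selection criterion described here (bandwidth, update size, queue backlog) could be realized in many ways; one plausible sketch picks the lightest compression that still meets a latency budget. The latency budget, the `(name, ratio)` algorithm representation, and measuring bandwidth in bytes per second are all assumptions for illustration.

```python
def select_algorithm(bandwidth_bytes_s, update_bytes, queued_bytes,
                     algorithms, latency_budget_s=0.1):
    """Pick a compression algorithm for the current transfer.

    algorithms: list of (name, compression_ratio) pairs.
    Tries algorithms from lightest to heaviest compression and returns
    the first whose estimated transfer time, added to the time needed
    to flush the queued updates, fits the latency budget."""
    backlog_s = queued_bytes / bandwidth_bytes_s
    for name, ratio in sorted(algorithms, key=lambda a: a[1]):
        transfer_s = (update_bytes / ratio) / bandwidth_bytes_s
        if backlog_s + transfer_s <= latency_budget_s:
            return name
    # nothing fits: fall back to the heaviest compression available
    return max(algorithms, key=lambda a: a[1])[0]
```

With an empty queue a fast link can ship the raw frame; as the queue backlog grows, the same update size pushes the selection toward heavier compression.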
Dynamic temporary use of packet memory as resource memory
In one embodiment, packet memory and resource memory of a memory are independently managed, with regions of packet memory being freed of packets and temporarily made available to resource memory. In one embodiment, packet memory regions are dynamically made available to resource memory so that in-service system upgrade (ISSU) of a packet switching device can be performed without having to statically allocate (as per prior systems) twice the memory space required by resource memory during normal packet processing operations. One embodiment dynamically collects fragments of packet memory stored in packet memory to form a contiguous region of memory that can be used by resource memory in a memory system that is shared between many clients in a routing complex. One embodiment assigns a contiguous region no longer used by packet memory to resource memory, and from resource memory to packet memory, dynamically without packet loss or pause.
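The ownership handoff described above (a packet-memory region is emptied of packets, lent to resource memory, and later returned) can be sketched with a small region table. The region granularity and the `lend`/`return` method names are illustrative assumptions, not the patent's API.

```python
class SharedMemory:
    """Sketch: regions of a shared memory are owned by either packet
    memory or resource memory, and ownership moves dynamically."""

    def __init__(self, n_regions):
        self.owner = ["packet"] * n_regions  # current owner per region
        self.in_use = [False] * n_regions    # packets stored in region?

    def free_region(self, i):
        """Mark a region as emptied of packets."""
        self.in_use[i] = False

    def lend_to_resource(self, i):
        # only a region already freed of packets may be reassigned,
        # so the handoff happens without packet loss
        if self.owner[i] == "packet" and not self.in_use[i]:
            self.owner[i] = "resource"
            return True
        return False

    def return_to_packet(self, i):
        if self.owner[i] == "resource":
            self.owner[i] = "packet"
```

This is what lets an ISSU borrow memory temporarily instead of statically reserving twice the resource-memory footprint.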
OPTIMIZED STORAGE OF MEDIA ITEMS
The present technology is for optimizing storage on a computing device. A media application on the computing device can allocate a minimum amount of storage on the computing device. The media application can further be configured to automatically download and store media items added to a media library of an account associated with the computing device. The combination of these features can put strain on computing devices with limited amounts of storage. Accordingly, the present technology can automatically delete media items in cache to allow media items to be automatically downloaded, or allow other uses of storage by other applications on the computing device, while also preserving the minimum amount of storage of media items on the computing device.
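The cache-eviction behavior described above can be sketched as follows. The least-recently-played ordering and the `(id, size, last_played)` item shape are assumptions for illustration; the abstract only fixes the constraint that a minimum amount of stored media is preserved.

```python
def evict_for_space(cache_items, needed_bytes, min_keep_bytes):
    """Delete cached media items until needed_bytes is freed, but never
    shrink total cached media below min_keep_bytes.

    cache_items: list of (item_id, size_bytes, last_played) tuples.
    Returns the ids of deleted items."""
    total = sum(size for _, size, _ in cache_items)
    freed = 0
    deleted = []
    # evict oldest-played first (assumed policy)
    for item_id, size, _ in sorted(cache_items, key=lambda x: x[2]):
        if freed >= needed_bytes or total - size < min_keep_bytes:
            break  # enough space freed, or the floor would be violated
        total -= size
        freed += size
        deleted.append(item_id)
    return deleted
```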
Generating and processing MAC-ehs protocol data units
A wireless transmit/receive unit (WTRU) may disassemble enhanced high speed medium access control (MAC-ehs) protocol data units (PDUs) to provide reordering PDUs. A reordering PDU may include a MAC-ehs service data unit (SDU) or a segment of the MAC-ehs SDU. The WTRU may reassemble the MAC-ehs SDU from segments of the MAC-ehs SDU disassembled from a reordering PDU. The WTRU may route the reassembled MAC-ehs SDU to a logical channel of a plurality of logical channels.
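The reassembly step can be sketched as below. The dict-based PDU fields (`is_segment`, `sdu_id`, `seg_index`, `is_last`) are illustrative stand-ins, not the MAC-ehs header layout, and segments are assumed to arrive in order after the reordering stage.

```python
def reassemble_sdus(reordering_pdus):
    """Sketch: each reordering PDU carries either a complete MAC-ehs SDU
    or a tagged segment of one; segments of the same SDU are collected
    and joined. Returns the list of reassembled SDUs."""
    segments = {}  # sdu_id -> {seg_index: data}
    complete = []
    for pdu in reordering_pdus:
        if pdu["is_segment"]:
            segs = segments.setdefault(pdu["sdu_id"], {})
            segs[pdu["seg_index"]] = pdu["data"]
            # last segment seen and no index missing: reassemble the SDU
            if pdu["is_last"] and len(segs) == pdu["seg_index"] + 1:
                complete.append(b"".join(segs[i] for i in sorted(segs)))
        else:
            complete.append(pdu["data"])  # PDU held a whole SDU
    return complete
```

Each reassembled SDU would then be routed to its logical channel, as the abstract describes.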
DATA PROCESSING DEVICE AND METHOD FOR OFFLOADING DATA TO REMOTE DATA PROCESSING DEVICE
The disclosure relates to a data processing device, comprising: a processing element configured to process a plurality of data packets according to a communication protocol to provide a plurality of processed data packets each comprising a first part and a second part; and an interface configured to offload the second parts of the plurality of processed data packets to a remote data processing device and configured to notify the remote data processing device of the offload of the second parts of the plurality of processed data packets.
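The split-and-notify pattern can be sketched as a small function. The `(head, tail)` packet split and the `send`/`notify` callbacks are illustrative assumptions standing in for the interface to the remote device.

```python
def offload(packets, send, notify):
    """Sketch: keep each packet's first part locally, offload the
    second part to a remote device, then notify the remote device
    how many second parts were offloaded."""
    kept = []
    for head, tail in packets:
        kept.append(head)  # first part stays on this device
        send(tail)         # second part is offloaded
    notify(len(packets))   # notification of the offload
    return kept
```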
PACKET DESCRIPTOR STORAGE IN PACKET MEMORY WITH CACHE
A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in an order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate storage of the moved LL elements changing from the second memory device to the first memory device.
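The head/middle/tail layout can be sketched with three deques standing in for the two memory devices. The capacity thresholds and the refill watermark are assumptions; the key behavior matched from the abstract is that the middle part refills the head before the head empties, and FIFO order is preserved end to end.

```python
from collections import deque

class HybridFifo:
    """Sketch: head and tail parts live in a fast first memory, the
    middle part in a larger second memory."""

    HEAD_MAX = 4   # assumed head-part capacity (first memory)
    TAIL_MAX = 4   # assumed tail-part capacity (first memory)
    REFILL_AT = 1  # refill head from middle before it runs dry

    def __init__(self):
        self.head = deque()    # first memory
        self.middle = deque()  # second memory
        self.tail = deque()    # first memory

    def push(self, x):
        if not self.middle and len(self.head) < self.HEAD_MAX:
            self.head.append(x)  # short queue stays in fast memory
        else:
            self.tail.append(x)
            if len(self.tail) > self.TAIL_MAX:
                # spill the oldest tail element to the second memory
                self.middle.append(self.tail.popleft())

    def pop(self):
        if len(self.head) <= self.REFILL_AT:
            # move middle-part elements to the head before it empties
            while self.middle and len(self.head) < self.HEAD_MAX:
                self.head.append(self.middle.popleft())
            # once the middle is drained, pull straight from the tail
            while not self.middle and self.tail and len(self.head) < self.HEAD_MAX:
                self.head.append(self.tail.popleft())
        return self.head.popleft()
```

A real controller would also update per-element linked-list parameters to record the change of memory device, which this sketch models only implicitly by which deque holds the element.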
METHOD AND COMPUTING DEVICE FOR MINIMIZING ACCESSES TO DATA STORAGE IN CONJUNCTION WITH MAINTAINING A B-TREE
Methods for modifying a B-tree are disclosed. According to an implementation, a computing device receives requests for updates to a B-tree, groups two or more requests destined for a particular node of the B-tree into a batch, but refrains from modifying the node until the buffer of the node above it is full (or will be full with this batch of requests). Once the buffer is full, the computing device provides the requests to that particular node. The techniques described herein may result in the computing device carrying out fewer reads from and writes to storage than existing B-tree maintenance techniques, thereby saving time and bandwidth. Reducing the number of reads and writes also saves money, particularly when the storage is controlled by a third-party SaaS provider that charges according to the number of transactions.
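The buffering idea (amortizing many updates into one storage write, as in buffered or Bε-trees) can be sketched with a single parent buffer flushing to one child. The buffer capacity and the dict standing in for the on-storage child node are assumptions for illustration.

```python
class BufferedNode:
    """Sketch: updates accumulate in a node's buffer and are applied to
    the child below only when the buffer fills, so one storage write
    covers a whole batch of updates."""

    BUFFER_CAPACITY = 3  # assumed buffer size

    def __init__(self):
        self.buffer = []  # pending (key, value) updates
        self.child = {}   # stands in for the child node on storage
        self.flushes = 0  # number of storage writes performed

    def update(self, key, value):
        self.buffer.append((key, value))
        if len(self.buffer) >= self.BUFFER_CAPACITY:
            self.flush()

    def flush(self):
        # one write to storage applies the entire batch
        for k, v in self.buffer:
            self.child[k] = v
        self.buffer.clear()
        self.flushes += 1
```

Six updates cost two storage writes here instead of six, which is the read/write (and transaction-fee) saving the abstract claims.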
TECHNIQUES FOR WARMING UP A NODE IN A DISTRIBUTED DATA STORE
In various embodiments, a node manager configures a “new” node as a replacement for an “unavailable” node that was previously included in a distributed data store. First, the node manager identifies a source node that stores client data that was also stored in the unavailable node. Subsequently, the node manager configures the new node to operate as a slave of the source node and streams the client data from the source node to the new node. Finally, the node manager configures the new node to operate as one of multiple master nodes in the distributed data store. Advantageously, by configuring the node to implement a hybrid of a master-slave replication scheme and a master-master replication scheme, the node manager enables the distributed data store to process client requests without interruption while automatically restoring the previous level of redundancy provided by the distributed data store.
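The two-phase warm-up can be sketched as below. The dict-based node representation and the item-by-item copy standing in for streaming replication are illustrative assumptions.

```python
def warm_up(new_node, source_node, cluster_masters):
    """Sketch of the hybrid scheme: the new node first acts as a slave
    of a source node holding the lost replica's data, then is promoted
    to one of the masters."""
    # phase 1: slave of the source node; stream its client data
    new_node["role"] = "slave"
    for key, value in source_node["data"].items():
        new_node["data"][key] = value
    # phase 2: promote the caught-up node to master
    new_node["role"] = "master"
    cluster_masters.append(new_node)
    return new_node
```

Because the source node keeps serving as a master throughout phase 1, client requests continue uninterrupted while redundancy is restored.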
Radio communication apparatus
A radio receiving apparatus for receiving variable-length RLC PDU data in an RLC layer includes a buffer memory sectioned into a plurality of areas having a predetermined maximum data length of the RLC PDU data. By referring to a sequence number SN included in each received RLC PDU, the radio receiving apparatus stores RLC PDU data having an identical sequence number SN in an identical area, and assembles RLC SDU data on the basis of the RLC PDU data stored in each area.
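The per-SN buffering and reassembly can be sketched as below. The `(sn, data)` PDU representation is an illustrative assumption; the RLC PDU format itself is defined by 3GPP.

```python
def reassemble(pdus, max_pdu_len=4):
    """Sketch: the buffer is sectioned into areas sized to the maximum
    PDU length; PDUs sharing a sequence number land in the same area,
    and the SDU is assembled by walking the areas in SN order."""
    areas = {}  # SN -> buffer area for that sequence number
    for sn, data in pdus:
        assert len(data) <= max_pdu_len  # area fits any single PDU
        areas.setdefault(sn, bytearray()).extend(data)
    # assemble the SDU from the areas in sequence-number order
    return b"".join(bytes(areas[sn]) for sn in sorted(areas))
```

Note that out-of-order arrival is handled naturally: each PDU is filed into its SN-keyed area, and ordering is imposed only at assembly time.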