Patent classifications
G06F2212/174
Monitoring memory usage in computing devices
In general, techniques are described for monitoring memory usage in computing devices. A computing device comprising a memory and a control unit that executes a kernel of an operating system having kernel sub-systems may implement the techniques. A memory manager kernel sub-system, in response to requests for amounts of the memory from the other kernel sub-systems, allocates a plurality of memory blocks of two or more different sizes from the memory. The memory manager, in response to requests to de-allocate one or more of the allocated memory blocks, determines the corresponding requested amounts of memory and the sizes of the de-allocated blocks. The memory manager then generates memory usage information based on the determined requested amounts of memory and the determined sizes. The memory usage information specifies usage of the memory in terms of the two or more different sizes of the allocated plurality of memory blocks.
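A minimal sketch of the size-class accounting this abstract describes: the memory manager rounds each requested amount up to a block size class and reports usage in terms of the different block sizes. All names (`MemoryManager`, `SIZE_CLASSES`) and the particular size classes are illustrative assumptions, not taken from the patent.

```python
# Illustrative model: usage is tracked both by requested bytes and by
# the size class of the block actually allocated. Size classes here
# are assumed values for the sketch.
SIZE_CLASSES = [64, 128, 256, 512, 1024]  # two or more different block sizes

class MemoryManager:
    def __init__(self):
        self.live = {}        # block id -> (requested amount, block size)
        self.usage = {size: 0 for size in SIZE_CLASSES}
        self.next_id = 0

    def allocate(self, requested):
        # pick the smallest size class that fits the requested amount
        block_size = next(s for s in SIZE_CLASSES if s >= requested)
        self.usage[block_size] += 1
        block_id = self.next_id
        self.next_id += 1
        self.live[block_id] = (requested, block_size)
        return block_id

    def deallocate(self, block_id):
        # recover both the requested amount and the block's size
        requested, block_size = self.live.pop(block_id)
        self.usage[block_size] -= 1
        return requested, block_size

    def usage_info(self):
        # memory usage expressed in terms of the different block sizes
        return {s: n for s, n in self.usage.items() if n}
```

For example, a 100-byte request is served from the 128-byte class, so the usage report distinguishes the 100 bytes asked for from the 128 bytes consumed.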
Device for packet processing acceleration
A device for packet processing acceleration includes a CPU, a tightly coupled memory (TCM), a buffer descriptor (BD) prefetch circuit, and a BD write back circuit. The BD prefetch circuit reads reception-end (RX) BDs from an RX BD ring of a memory to write them into an RX ring of the TCM, and reads RX header data from a buffer of the memory to write them into the RX ring. The CPU accesses the RX ring to process the RX BDs and RX header data, and generates transmission-end (TX) BDs and TX header data; afterwards, the CPU writes the TX BDs and TX header data into a TX ring of the TCM. The BD write back circuit reads the TX BDs and TX header data from the TX ring, writes the TX BDs into a TX BD ring of the memory, and writes the TX header data into the buffer.
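The RX prefetch and TX write-back paths above can be modeled as two copy loops between main-memory rings and small rings in tightly coupled memory (TCM). The function names, descriptor fields, and data layout below are assumptions made for this sketch, not the patent's definitions.

```python
from collections import deque

# Illustrative model of the two hardware engines: the prefetch engine
# copies buffer descriptors (BDs) and header bytes from main memory into
# a TCM ring so the CPU never stalls on DRAM; the write-back engine
# drains CPU-produced TX BDs and headers back to main memory.

def prefetch_rx(rx_bd_ring, buffers, tcm_rx_ring, count):
    """Copy up to `count` RX BDs plus their header data into the TCM ring."""
    for _ in range(min(count, len(rx_bd_ring))):
        bd = rx_bd_ring.popleft()                       # descriptor in main memory
        header = buffers[bd["buf_addr"]][:bd["hdr_len"]]  # header from the buffer
        tcm_rx_ring.append({"bd": bd, "header": header})

def write_back_tx(tcm_tx_ring, tx_bd_ring, buffers):
    """Drain TX BDs/headers from the TCM ring back to main memory."""
    while tcm_tx_ring:
        entry = tcm_tx_ring.popleft()
        buffers[entry["bd"]["buf_addr"]] = entry["header"]  # header to buffer
        tx_bd_ring.append(entry["bd"])                      # BD to TX BD ring
```

The CPU in between would read entries from the RX TCM ring, process them, and append results to the TX TCM ring.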
CACHE INDEX MAPPING
Systems and methods in accordance with various embodiments of the present disclosure provide approaches for mapping entries to a cache using a function, such as a cyclic redundancy check (CRC). The function can calculate a colored cache index based on a main memory address. The function may cause consecutive address cache indexes to be spread throughout the cache according to the indexes calculated by the function. In some embodiments, each data context may be associated with a different function, enabling different types of packets to be processed while sharing the same cache, reducing evictions of other data contexts and improving performance. Various embodiments can identify a type of packet as the packet is received, and look up a mapping function based on the type of packet. The function can then be used to look up the corresponding data context for the packet from the cache, for processing the packet.
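A minimal sketch of a CRC-derived "colored" cache index, assuming a per-context seed stands in for the abstract's per-data-context function; the set count and seed mechanism are illustrative choices, not the patent's.

```python
import zlib

# Illustrative colored cache index: a CRC over the main memory address
# scatters consecutive addresses across the cache rather than placing
# them in consecutive sets. A different context seed yields a different
# mapping function per data context.
NUM_SETS = 256  # assumed number of cache sets

def colored_index(address, context_seed=0):
    data = (address ^ context_seed).to_bytes(8, "little")
    return zlib.crc32(data) % NUM_SETS
```

Compared with a plain `address % NUM_SETS` mapping, the CRC spreads a run of consecutive addresses over distant sets, which is the spreading behavior the abstract describes.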
Elastic resource management in a network switch
An elastic memory system may include memory banks; clients that are configured to obtain access requests associated with input addresses; first address converters that are configured to convert the input addresses to intermediate addresses within a linear address space; address scramblers that are configured to convert the intermediate addresses to physical addresses while balancing a load between the memory banks; atomic operation units; and an interconnect that is configured to receive modified access requests that are associated with the physical addresses, and send the modified access requests downstream, wherein atomic modified access requests are sent to the atomic operation units; wherein the atomic operation units are configured to execute the atomic modified access requests; and wherein the memory banks are configured to respond to the atomic modified access requests and to non-atomic modified access requests.
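An address scrambler of the kind described can be sketched as a function from a linear intermediate address to a (bank, offset) physical address that keeps banks evenly loaded. The XOR fold below is one common scrambling choice made for this sketch; the patent does not specify the function.

```python
# Illustrative address scrambler: mixes higher address bits into the
# bank select so both unit-stride and power-of-two-stride streams
# spread across the banks instead of hammering one bank.
NUM_BANKS = 8  # assumed bank count

def scramble(intermediate_addr):
    offset = intermediate_addr // NUM_BANKS
    bank = (intermediate_addr ^ (intermediate_addr >> 3)) % NUM_BANKS
    return bank, offset
```

With a plain `addr % NUM_BANKS` select, a stride-8 stream would land entirely in bank 0; the XOR fold rotates such a stream across all eight banks.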
Packet processing cache
A data or packet processing device such as a network interface controller may include cache control logic that is configured to receive a first request for processing a first data packet associated with a queue identifier, and obtain a set of memory descriptors associated with the queue identifier from the memory. The set of descriptors can be stored in the cache. When a second request for processing a second data packet associated with the queue identifier is received, the cache control logic can determine that the cache is storing memory descriptors for processing the second data packet, and provide the memory descriptors used for processing the second packet.
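The caching behavior above amounts to fetch-on-miss keyed by queue identifier. In this sketch, `DESCRIPTOR_TABLE` stands in for descriptors resident in host memory, and all names are assumptions for illustration.

```python
# Illustrative descriptor cache: the first request for a queue fetches
# that queue's memory descriptors from host memory into the cache;
# subsequent requests for the same queue are served from the cache.
DESCRIPTOR_TABLE = {7: ["desc0", "desc1"], 9: ["desc2"]}  # stands in for host memory

class DescriptorCache:
    def __init__(self):
        self.cache = {}
        self.host_reads = 0   # counts fetches that had to cross to host memory

    def get(self, queue_id):
        if queue_id not in self.cache:                    # miss: fetch from host
            self.cache[queue_id] = DESCRIPTOR_TABLE[queue_id]
            self.host_reads += 1
        return self.cache[queue_id]                       # hit: served on-chip
```

The second packet on queue 7 costs no host-memory read, which is the latency saving the abstract is after.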
COMPUTING DEVICE
A computing device includes a first processor; a second processor; a network interface communicably coupling the first and second processors to a network; an interface bus communicably coupling the first processor to the second processor; a first interface communicably coupling the second processor to the interface bus; a second interface communicably coupling the second processor to the interface bus, the second interface being separate from the first interface, wherein the second interface is configured to provide the second processor with management functionality over one or more hardware components of the computing device; and storage means communicably coupled to the second processor, wherein the second processor regulates access of the first processor to the storage means.
LBA EVICTION IN PCM MEDIA
According to various aspects of the present disclosure, there is provided a method and an apparatus for writing and evicting data in a phase-change memory (PCM). In one embodiment, a logical block address (LBA) eviction candidate (LEC) list is stored in the PCM media. The LEC list employs a circular queue having a head end and a tail end, where new LBAs are inserted at the head end. In one embodiment, when data needs to be evicted from the PCM media, the tail-end LBA of the LEC list is evicted along with all subsequent LBAs on the LEC list whose write sequence numbers continue from that of the tail-end LBA.
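A minimal sketch of the LEC list's eviction rule: entries are (LBA, write sequence number) pairs, new entries enter at the head, and eviction takes the tail entry plus the run of entries whose sequence numbers continue consecutively from it. The class and method names are illustrative; in the patent the structure resides in the PCM media itself.

```python
from collections import deque

# Illustrative LEC list: index 0 of the deque is the tail end, the
# right end is the head where new LBAs are inserted.
class LECList:
    def __init__(self):
        self.queue = deque()

    def insert(self, lba, write_seq):
        self.queue.append((lba, write_seq))   # insert at the head end

    def evict(self):
        """Evict the tail LBA plus all LBAs with consecutive write
        sequence numbers continuing from the tail's."""
        if not self.queue:
            return []
        evicted = [self.queue.popleft()]
        while self.queue and self.queue[0][1] == evicted[-1][1] + 1:
            evicted.append(self.queue.popleft())
        return evicted
```

So tail entries written back-to-back (sequence 1, 2, ...) are evicted as one group, while an entry with a gap in its sequence number stays on the list.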
ELECTRONIC DEVICE AND TETHERING METHOD THEREOF
An electronic device and a tethering method are disclosed. The electronic device includes a housing, and a display exposed through a first portion of the housing. The electronic device also includes an electrical connector exposed through a second portion of the housing, and a wireless communication circuit. The electronic device further includes a first processor operably coupled to the display and the electrical connector and configured to use a first memory address region including a first plurality of addresses. The electronic device also includes a second processor operably coupled to the wireless communication circuit and configured to use a second memory address region including a second plurality of virtual addresses. The electronic device also includes electric circuitry operably coupled to the first processor and the second processor and configured to provide relations between the first plurality of addresses and the second plurality of virtual addresses.
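The relation the circuitry provides between the two processors' address regions can be sketched as a table of mapping windows, each relating a range of first-processor addresses to a second-processor virtual base. The window values and the function name `translate` are assumptions made for this sketch.

```python
# Illustrative address-relation table: each window is a tuple of
# (first-processor base, second-processor virtual base, size), and a
# first-processor address is translated by its offset within a window.
def translate(addr, windows):
    """Map a first-processor address into the second processor's
    virtual address region via (base, virt_base, size) windows."""
    for base, virt_base, size in windows:
        if base <= addr < base + size:
            return virt_base + (addr - base)
    raise ValueError("address not covered by any mapping window")
```

A shared buffer placed in one window is then reachable from both processors, each using its own address for the same location.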