Patent classifications
G06F2212/174
Allocation of buffer interfaces for moving data, and related systems, methods and devices
A buffer interface, data transport method, and computing system are described in which a buffer interface may be configured for communicating data samples to and from frame buffers defined in a memory. The configurable buffer interfaces and frame buffers provide a flexible and scalable platform for use with many applications.
INTERFACING WITH SYSTEMS, FOR PROCESSING DATA SAMPLES, AND RELATED SYSTEMS, METHODS AND APPARATUSES
Disclosed examples include an apparatus. The apparatus may include first interfaces, second interfaces, a bus interface, and a buffer interface. The first interfaces may be to communicate at first data widths via first interconnects for operable coupling with data sample sources. The second interfaces may be to communicate at second data widths via second interconnects for operable coupling with data sinks. The buffer interface may be to communicate with a system to process data samples sampled using different sampling rates according to processing frame durations. The buffer interface may include an uplink channel handler and a downlink channel handler. The uplink channel handler may be to receive data samples from the first interfaces at the first data widths and provide the data samples to the bus interface at a third data width. The downlink channel handler may be to receive processed data samples from the bus interface at the third data width and provide the processed data samples to the second interfaces at the second data widths. The bus interface may be to communicate at the third data width via a third interconnect for operative coupling with an allocated memory region utilized by the system to process data samples.
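The width conversion the channel handlers perform can be illustrated with a short sketch. This is not the patented implementation; the function names and the 16-bit/64-bit widths are assumptions chosen for the example, standing in for the abstract's "first" and "third" data widths.

```python
def pack_samples(samples, sample_width, bus_width):
    """Uplink direction: pack narrow data samples into wider bus words."""
    per_word = bus_width // sample_width        # samples per bus word
    mask = (1 << sample_width) - 1
    words = []
    for i in range(0, len(samples), per_word):
        word = 0
        for j, s in enumerate(samples[i:i + per_word]):
            word |= (s & mask) << (j * sample_width)
        words.append(word)
    return words

def unpack_samples(words, sample_width, bus_width, count):
    """Downlink direction: unpack bus words back into narrow samples."""
    per_word = bus_width // sample_width
    mask = (1 << sample_width) - 1
    samples = []
    for word in words:
        for j in range(per_word):
            samples.append((word >> (j * sample_width)) & mask)
    return samples[:count]                      # drop padding in the last word
```

For example, five 16-bit samples pack into two 64-bit bus words, and unpacking recovers the original sequence.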
Computing device
A computing device includes a first processor; a second processor; a network interface communicably coupling the first and second processors to a network; an interface bus communicably coupling the first processor to the second processor; a first interface communicably coupling the second processor to the interface bus; a second interface communicably coupling the second processor to the interface bus, the second interface being separate from the first interface, wherein the second interface is configured to provide the second processor with management functionality over one or more hardware components of the computing device; and storage means communicably coupled to the second processor, wherein the second processor regulates access of the first processor to the storage means.
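The claim's final element, a second processor regulating the first processor's access to the storage means, can be sketched in a few lines. This is a minimal illustration under assumed names (`StorageGate`, `grant`, `revoke`), not the device's actual mechanism.

```python
class StorageGate:
    """Illustrative gate: a management processor grants or revokes
    another processor's access to shared storage."""

    def __init__(self):
        self._granted = set()   # requesters currently allowed access
        self._storage = {}      # stand-in for the storage means

    def grant(self, requester):
        self._granted.add(requester)

    def revoke(self, requester):
        self._granted.discard(requester)

    def read(self, requester, key):
        if requester not in self._granted:
            raise PermissionError(f"{requester} has no storage access")
        return self._storage.get(key)

    def write(self, requester, key, value):
        if requester not in self._granted:
            raise PermissionError(f"{requester} has no storage access")
        self._storage[key] = value
```

A granted requester can read and write; after `revoke`, the same requests raise `PermissionError`.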
Electronic device and tethering method thereof
An electronic device and a tethering method are disclosed. The electronic device includes a housing, and a display exposed through a first portion of the housing. The electronic device also includes an electrical connector exposed through a second portion of the housing, and a wireless communication circuit. The electronic device further includes a first processor operably coupled to the display and the electrical connector and configured to use a first memory address region including a first plurality of addresses. The electronic device also includes a second processor operably coupled to the wireless communication circuit and configured to use a second memory address region including a second plurality of virtual addresses. The electronic device also includes electric circuitry operably coupled to the first processor and the second processor and configured to provide relations between the first plurality of addresses and the second plurality of virtual addresses.
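The relating circuitry between the two processors' address spaces amounts to a lookup structure; a hypothetical sketch follows. The class and method names (`AddressBridge`, `relate`, `to_virtual`) are invented for illustration and do not come from the patent.

```python
class AddressBridge:
    """Illustrative circuitry relating a first processor's addresses
    to a second processor's virtual addresses."""

    def __init__(self):
        self._relations = {}

    def relate(self, first_addr, second_virt_addr):
        # Record a relation between the two address spaces.
        self._relations[first_addr] = second_virt_addr

    def to_virtual(self, first_addr):
        # Resolve a first-space address to the second processor's
        # virtual address, or None if no relation was recorded.
        return self._relations.get(first_addr)
```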
Packet processing cache
A data or packet processing device such as a network interface controller may include cache control logic that is configured to obtain a set of memory descriptors associated with a queue from the memory. The set of descriptors can be stored in the cache. When a request for processing a data packet associated with the queue is received, the cache control logic can determine that the cache is storing memory descriptors for processing the data packet, and provide the memory descriptors used for processing the packet.
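The cache control logic described above, fetch a queue's memory descriptors from memory once and serve later packet requests from the cache, can be sketched as follows. The names and the callback shape are assumptions for the example, not the controller's actual interface.

```python
class DescriptorCache:
    """Sketch of cache control logic: descriptors for a queue are
    fetched from main memory once, then served from the cache."""

    def __init__(self, fetch_from_memory):
        self._fetch = fetch_from_memory   # callback into main memory
        self._cache = {}

    def descriptors_for(self, queue_id):
        if queue_id not in self._cache:               # cache miss
            self._cache[queue_id] = self._fetch(queue_id)
        return self._cache[queue_id]                  # cache hit
```

Repeated requests for the same queue touch main memory only once, which is the point of holding the descriptor set in the cache.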
Cache index mapping
Systems and methods in accordance with various embodiments of the present disclosure provide approaches for mapping entries to a cache using a function, such as cyclic redundancy check (CRC). The function can calculate a colored cache index based on a main memory address. The function may cause consecutive address cache indexes to be spread throughout the cache according to the indexes calculated by the function. In some embodiments, each data context may be associated with a different function, enabling different types of packets to be processed while sharing the same cache, reducing evictions of other data contexts and improving performance. Various embodiments can identify a type of packet as the packet is received, and lookup a mapping function based on the type of packet. The function can then be used to lookup the corresponding data context for the packet from the cache, for processing the packet.
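A minimal sketch of the CRC-based index calculation, assuming CRC32 as the concrete function and a per-context seed as the "color"; the parameter names are illustrative, not the patent's.

```python
import zlib

def colored_index(address, num_sets, color=0):
    """Map a main-memory address to a cache set index via CRC32.
    'color' stands in for a per-data-context seed, so two contexts
    sharing the same cache use different mappings."""
    data = address.to_bytes(8, "little")
    return zlib.crc32(data, color) % num_sets
```

Because CRC32 mixes all address bits, consecutive addresses land on spread-out set indexes rather than adjacent ones, and each context's seed gives it its own spread.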
Monitoring of health
A health monitoring system receives a health condition reading from a mobile device. The health condition reading is compared to ranges of data defining acceptable health conditions. If the health condition reading lies outside one of the ranges of data, then an alert may be generated and sent to the mobile device. The alert informs a user that the health condition reading is unacceptable, perhaps requiring medical attention.
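The range comparison at the core of the system can be shown in a few lines. This is a sketch under assumed names and example values; the real system's ranges and alert format are not specified in the abstract.

```python
def check_reading(reading, acceptable_ranges):
    """Return None if the reading falls inside any acceptable range,
    otherwise an alert message to send back to the mobile device."""
    for low, high in acceptable_ranges:
        if low <= reading <= high:
            return None
    return f"Reading {reading} is outside acceptable ranges"
```

For instance, with an acceptable body-temperature range of 97.0 to 99.0, a reading of 98.6 produces no alert, while 103.2 does.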
DATA SAMPLES TRANSPORT SCHEME, AND RELATED SYSTEMS, METHODS AND DEVICES
A buffer interface, data transport method, and computing system are described in which a buffer interface may be configured for communicating data samples to and from frame buffers defined in a memory. The configurable buffer interfaces and frame buffers provide a flexible and scalable platform for use with many applications.
Shared data management system
A shared data management system is configured to receive frames comprising data from one or more producer devices and to transmit reconstructed frames to one or more consumer devices, a producer device and a consumer device being connected to the shared data management system by way of a communication network using a communication protocol. The shared data management system comprises a memory system having one or more memories. The shared data management system advantageously comprises a central controller configured to store at least some of the data encapsulated in a frame received from a producer device in a target memory area of the memory system, the central controller being configured to compute, for each datum to be stored, the address in the target memory area based on an index associated with the datum in the received frame.
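The index-to-address computation the central controller performs can be sketched simply, assuming a linear layout (base address plus index times datum size); the function names and the dict standing in for the memory system are illustrative.

```python
def target_address(base, index, datum_size):
    """Address of a datum in the target memory area, computed from the
    index the datum carries in the received frame."""
    return base + index * datum_size

def store_frame(memory, base, frame, datum_size=1):
    """Store each datum of a received frame at its computed address.
    'memory' is a dict standing in for the memory system."""
    for index, datum in enumerate(frame):
        memory[target_address(base, index, datum_size)] = datum
```

With a base address of 0x100 and 4-byte data, the data of a three-datum frame land at 0x100, 0x104, and 0x108.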
RDMA-SSD dual-port unified memory and network controller
System and method for a unified memory and network controller for an all-flash array (AFA) storage blade in distributed flash storage clusters over a fabric network. The unified memory and network controller has 3-way control functions including unified memory buses to cache memories and DDR4-AFA controllers, a dual-port PCIE interconnection to two host processors of gateway clusters, and four switch fabric ports for interconnections with peer controllers (e.g., AFA blades and/or chassis) in the distributed flash storage network. The AFA storage blade includes dynamic random-access memory (DRAM) and magnetoresistive random-access memory (MRAM) configured as data read/write cache buffers, and flash memory DIMM devices as primary storage. Remote direct memory access (RDMA) for clients via the data caching buffers is enabled and controlled by the host processor interconnection(s), the switch fabric ports, and a unified memory bus from the unified controller to the data buffer and the flash SSDs.