Patent classifications
G06F15/167
METHODS, SYSTEMS AND COMPUTER READABLE MEDIA FOR IMPROVING REMOTE DIRECT MEMORY ACCESS PERFORMANCE
The subject matter described herein includes methods, systems, and computer readable media for improving remote direct memory access (RDMA) performance. A method for improving RDMA performance occurs at an RDMA node utilizing a user space and a kernel space for executing software. The method includes posting, by an application executing in the user space, an RDMA work request including a data element indicating a plurality of RDMA requests associated with the RDMA work request to be generated by software executing in the kernel space; and generating and sending, by the software executing in the kernel space, the plurality of RDMA requests to or via a system under test (SUT).
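The claimed fan-out (one user-space work request expanded by kernel-space software into many RDMA requests) can be sketched as follows. All names, fields, and the address-striding scheme are illustrative assumptions, not the patent's actual structures:

```python
from dataclasses import dataclass

@dataclass
class RdmaWorkRequest:
    # Hypothetical work-request fields: request_count is the data element
    # telling kernel-space software how many RDMA requests to generate.
    opcode: str
    request_count: int
    remote_addr: int
    length: int

def expand_work_request(wr: RdmaWorkRequest) -> list[dict]:
    """Kernel-side expansion sketch: one posted work request fans out
    into wr.request_count individual RDMA requests sent toward the SUT."""
    stride = wr.length // wr.request_count
    return [
        {"opcode": wr.opcode,
         "remote_addr": wr.remote_addr + i * stride,
         "length": stride}
        for i in range(wr.request_count)
    ]

wr = RdmaWorkRequest(opcode="RDMA_WRITE", request_count=4,
                     remote_addr=0x1000, length=4096)
reqs = expand_work_request(wr)
assert len(reqs) == 4 and reqs[1]["remote_addr"] == 0x1400
```

The point of the single posted request is that the user/kernel boundary is crossed once while many wire-level requests are generated below it.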
DATA PROCESSING METHOD AND APPARATUS AND HETEROGENEOUS SYSTEM
A data processing method and apparatus, and a heterogeneous system, pertaining to the field of computer technologies, are provided. The heterogeneous system includes a processor connected to an accelerator, and a secondary memory connected to the accelerator. The processor is configured to write to-be-processed data into the secondary memory and to trigger the accelerator to access and process the to-be-processed data stored in the secondary memory according to a processing instruction. The accelerator is configured to write a processing result of the to-be-processed data into the secondary memory and to trigger the processor to read the processing result. Processing efficiency is enhanced by reducing the number of interactions between the processor and the accelerator and by simplifying the data processing procedure.
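The round trip described in the abstract (processor writes into the accelerator's secondary memory, triggers processing, and is triggered back to read the result) can be sketched as follows. The queue-based trigger and dictionary "secondary memory" are stand-ins chosen for illustration, not the patent's mechanism:

```python
import queue
import threading

def heterogeneous_roundtrip(data, process):
    """Sketch of the claimed flow: the 'processor' writes to-be-processed
    data into a secondary memory attached to the 'accelerator', triggers
    it with a processing instruction, and the accelerator writes the
    result back and triggers the processor to read it."""
    secondary_memory = {}
    to_accel, to_proc = queue.Queue(), queue.Queue()

    def accelerator():
        instr = to_accel.get()                     # triggered by processor
        result = process(secondary_memory[instr])  # process data in place
        secondary_memory["result"] = result        # write result back
        to_proc.put("result")                      # trigger the processor

    t = threading.Thread(target=accelerator)
    t.start()
    secondary_memory["input"] = data   # processor writes into secondary memory
    to_accel.put("input")              # one trigger, no intermediate copies
    key = to_proc.get()                # processor is triggered to read
    t.join()
    return secondary_memory[key]

assert heterogeneous_roundtrip([1, 2, 3], process=sum) == 6
```

Because both sides read and write the same secondary memory, each direction needs only a single trigger rather than a multi-step copy-and-notify exchange.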
Application processor performing a dynamic voltage and frequency scaling operation, computing system including the same, and operation method thereof
A method of operating an application processor including a central processing unit (CPU) with at least one core and a memory interface includes measuring, during a first period, a core active cycle of a period in which the at least one core performs an operation to execute instructions and a core idle cycle of a period in which the at least one core is in an idle state, generating information about a memory access stall cycle of a period in which the at least one core accesses the memory interface in the core active cycle, correcting the core active cycle using the information about the memory access stall cycle to calculate a load on the at least one core using the corrected core active cycle, and performing a DVFS operation on the at least one core using the calculated load on the at least one core.
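The load computation described above can be sketched numerically. The correction and the threshold-stepping governor below are illustrative assumptions (the abstract specifies only that the active cycle is corrected using the memory-stall information before computing load):

```python
def core_load(active_cycles, idle_cycles, mem_stall_cycles):
    # Assumed correction: cycles the core spent stalled on memory access
    # are subtracted from the active count, so they do not inflate the
    # load (raising core frequency would not shorten memory stalls).
    corrected_active = active_cycles - mem_stall_cycles
    return corrected_active / (active_cycles + idle_cycles)

def next_frequency(load, freqs=(400, 800, 1200, 1600),
                   hi=0.8, lo=0.3, cur=1):
    # Illustrative DVFS step: raise frequency on high load, lower on low.
    if load > hi and cur < len(freqs) - 1:
        cur += 1
    elif load < lo and cur > 0:
        cur -= 1
    return freqs[cur]

load = core_load(active_cycles=900_000, idle_cycles=100_000,
                 mem_stall_cycles=400_000)
assert abs(load - 0.5) < 1e-9  # (900k - 400k) / 1M
```

Without the correction the same measurement window would report a load of 0.9 and could trigger a frequency increase that yields little benefit, since much of the "active" time was spent waiting on memory.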
Maintaining failure independence for storage of a set of encoded data slices
A method includes detecting a storage error associated with a first memory device of a storage unit of a set of storage units, where data is error encoded into a set of encoded data slices and stored in a plurality of memory devices of the set of storage units, and where the plurality of memory devices includes the first memory device. The method further includes determining attributes associated with the first memory device and determining attributes of other memory devices of the plurality of memory devices. The method further includes selecting a memory device from the other memory devices based on the attributes of the memory device comparing favorably to the attributes associated with the first memory device. The method further includes rebuilding an encoded data slice associated with the storage error and storing the rebuilt encoded data slice in the selected memory device.
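The device-selection step can be sketched as follows. The attribute semantics are assumptions: here "comparing favorably" is read as sharing the failed device's failure-domain attribute (so the rebuilt slice stays failure-independent of the devices holding the other slices) while having room for the slice; the patent does not specify these particular attributes:

```python
def select_replacement(failed_attrs, candidates):
    """Pick a memory device whose attributes compare favorably to the
    failed device's. Attribute names (failure_domain, free_bytes,
    slice_bytes) are illustrative assumptions."""
    eligible = [
        (attrs["free_bytes"], dev_id)
        for dev_id, attrs in candidates.items()
        if attrs["failure_domain"] == failed_attrs["failure_domain"]
        and attrs["free_bytes"] >= failed_attrs["slice_bytes"]
    ]
    # Among eligible devices, prefer the one with the most free capacity.
    return max(eligible)[1] if eligible else None

failed = {"failure_domain": "site-a", "slice_bytes": 1 << 20}
candidates = {
    "dev1": {"failure_domain": "site-b", "free_bytes": 8 << 20},
    "dev2": {"failure_domain": "site-a", "free_bytes": 4 << 20},
    "dev3": {"failure_domain": "site-a", "free_bytes": 16 << 20},
}
assert select_replacement(failed, candidates) == "dev3"
```

The rebuilt slice would then be written to the selected device, preserving the failure independence of the slice set.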
Analytics, Algorithm Architecture, and Data Processing System and Method
A system and method employing a distributed hardware architecture, either independently or in cooperation with an attendant data structure, in connection with various data processing strategies and data analytics implementations are disclosed. A compute node may be implemented independent of a host compute system to manage and to execute data processing operations. A unique algorithm architecture and processing system and method are also disclosed. Different types of nodes may likewise be implemented, either independently or in cooperation with an attendant data structure, in connection with such strategies and implementations.
METHOD AND SYSTEM FOR SHARING MEMORY
A method and system for sharing memory in a computer system includes placing one or more processors in the computer system in an idle state. The one or more processors are queried for associated memory space, and a shared physical memory address space is updated, wherein each processor in the system has access to the physical memory in the shared physical memory address space. The one or more processors are then removed from the idle state, and work is submitted to the one or more processors for execution.
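The claimed sequence (idle, query, publish, resume, submit) can be sketched as follows. The `Processor` class and its fields are illustrative assumptions standing in for the patent's actual structures:

```python
class Processor:
    def __init__(self, memory_regions):
        self.memory_regions = memory_regions  # assumed (base, size) tuples
        self.idle = False
        self.work = []

def share_memory(processors, shared_space):
    # Sketch of the sequence in the abstract, under the assumptions above.
    for p in processors:
        p.idle = True                          # 1. place processors in idle
    for p in processors:
        shared_space.extend(p.memory_regions)  # 2. query and publish memory
    for p in processors:
        p.idle = False                         # 3. remove from idle state
        p.work.append("task")                  # 4. submit work for execution
    return shared_space

procs = [Processor([(0x0, 4096)]), Processor([(0x1000, 4096)])]
space = share_memory(procs, [])
assert len(space) == 2 and not procs[0].idle
```

Quiescing the processors before the query step ensures no processor mutates its memory map while the shared address space is being assembled.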
Edge component computing system having integrated FaaS call handling capability
An apparatus is described. The apparatus includes logic circuitry embedded in at least one of a memory controller, network interface and peripheral control hub to process a function as a service (FaaS) function call embedded in a request. The request is formatted according to a protocol. The protocol allows a remote computing system to access a memory that is coupled to the memory controller without invoking processing cores of a local computing system that the memory controller is a component of.
PROCESSING ELEMENT AND NEURAL PROCESSING DEVICE INCLUDING SAME
The present disclosure describes a processing element and a neural processing device including the processing element. The processing element includes: a weight register configured to store a weight; an input activation register configured to store an input activation; a flexible multiplier configured to receive a first sub-weight of a first precision included in the weight and a first sub-input activation of the first precision included in the input activation, and to generate result data by multiplying the first sub-weight and the first sub-input activation at either the first precision or a second precision different from the first precision, the precision being selected according to the first sub-weight and the first sub-input activation; and a saturating adder configured to generate a partial sum using the result data.
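The multiply-accumulate step can be sketched numerically. The precision-selection rule and bit widths below are assumptions (the abstract says only that the precision is chosen according to the two sub-operands); in hardware the selection would steer a narrower multiplier datapath, while the numeric product is the same either way:

```python
INT32_MAX, INT32_MIN = 2**31 - 1, -2**31

def saturating_add(acc: int, x: int) -> int:
    # Saturating adder: clamp to the 32-bit range instead of wrapping.
    return max(INT32_MIN, min(INT32_MAX, acc + x))

def pe_step(sub_weight: int, sub_act: int, partial_sum: int,
            low_bits: int = 4):
    # Flexible-multiplier sketch. Assumed rule: use the low precision
    # when both sub-values fit in the low_bits signed range, otherwise
    # fall back to twice that width.
    lo, hi = -(1 << (low_bits - 1)), (1 << (low_bits - 1)) - 1
    in_range = lo <= sub_weight <= hi and lo <= sub_act <= hi
    precision = low_bits if in_range else 2 * low_bits
    product = sub_weight * sub_act
    return saturating_add(partial_sum, product), precision

assert pe_step(3, -2, 10) == (4, 4)    # both operands fit the 4-bit range
assert pe_step(100, 2, 0) == (200, 8)  # falls back to the wider precision
assert saturating_add(INT32_MAX, 1) == INT32_MAX
```

Saturation matters for long accumulation chains: a wrapped overflow would flip the sign of the partial sum, while a clamped one merely caps it.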
Methods, systems, and media for detecting the presence of a digital media device on a network
Methods, systems, and media for detecting the presence of a digital media device on a network are provided. In some embodiments, methods for detecting a presence of a particular type of digital media device are provided, the methods comprising: identifying cached device details for devices previously associated with the network; performing a simple service discovery protocol (SSDP) search on the network and substantially concurrently sending a unicast message, using hypertext transfer protocol (HTTP), to an address from the cached device details; and indicating the presence of a digital media device on the network in response to either (i) receiving a response to the unicast message, or (ii) determining that the type of a device discovered using SSDP matches the particular device type.
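The either/or detection logic can be sketched as follows. The SSDP search and HTTP probe are injected as callables (stubs below) so the concurrency and decision logic can be shown without real network traffic; the function names are illustrative, not from the patent:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_device(cached_addresses, target_type, ssdp_search, http_probe):
    """Run the SSDP search and HTTP unicast probes to cached addresses
    substantially concurrently; report presence if either path succeeds.
    ssdp_search() -> list of discovered device types (assumed signature);
    http_probe(addr) -> True on a response (assumed signature)."""
    with ThreadPoolExecutor() as pool:
        ssdp_future = pool.submit(ssdp_search)
        probe_futures = [pool.submit(http_probe, a) for a in cached_addresses]
        if any(f.result() for f in probe_futures):
            return True                      # (i) unicast message answered
        return target_type in ssdp_future.result()  # (ii) SSDP type match

# Stubs standing in for real SSDP/HTTP traffic:
assert detect_device(["192.168.1.20"], "MediaRenderer",
                     ssdp_search=lambda: ["MediaServer"],
                     http_probe=lambda addr: False) is False
assert detect_device(["192.168.1.20"], "MediaRenderer",
                     ssdp_search=lambda: [],
                     http_probe=lambda addr: True) is True
```

Probing cached addresses in parallel with the multicast search lets a previously seen device be confirmed quickly even when it no longer answers SSDP within the discovery window.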