G06F2212/2515

MEMORY SYSTEM INCLUDING MEMORY MODULE, MEMORY MODULE, AND OPERATING METHOD OF MEMORY MODULE

A memory system includes a nonvolatile memory module and a first controller configured to control the nonvolatile memory module. The nonvolatile memory module includes a volatile memory device, a nonvolatile memory device, and a second controller configured to control the volatile memory device and the nonvolatile memory device. The first controller may be configured to transmit a read request to the second controller. When normal data is not received from the nonvolatile memory device during a read operation according to the read request, the first controller may retransmit the read request to the second controller one or more times, without a limitation on the number of retransmissions.

Method and apparatus for managing memory

A method for managing a memory and an electronic device are provided. The method includes setting a list of an exclusive relationship between a page-unit memory allocation requester and a contiguous memory allocation requester, the relationship corresponding to a segment of a contiguous memory allocation region; receiving a memory allocation request; confirming whether the requester that issued the request is included in the list; and, if the requester is included in the list, allocating a page of the contiguous memory allocation region corresponding to the segment.
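The allocation decision can be sketched as a membership check against the exclusive-relationship list; the requester names and the fallback pool are hypothetical:

```python
def allocate_page(requester, exclusive_list, cma_pages, movable_pages):
    """Allocate from the contiguous memory allocation (CMA) segment only
    when the requester appears in the exclusive-relationship list;
    otherwise fall back to ordinary movable pages.
    """
    if requester in exclusive_list and cma_pages:
        return ("cma", cma_pages.pop())
    return ("movable", movable_pages.pop())

cma = [0x1000, 0x1001]
movable = [0x2000, 0x2001]
a = allocate_page("camera", {"camera", "gpu"}, cma, movable)
b = allocate_page("browser", {"camera", "gpu"}, cma, movable)
```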

MEMORY HAVING A STATIC CACHE AND A DYNAMIC CACHE

The present disclosure includes memory having a static cache and a dynamic cache. A number of embodiments include a memory, wherein the memory includes a first portion configured to operate as a static single level cell (SLC) cache and a second portion configured to operate as a dynamic SLC cache when the entire first portion of the memory has data stored therein.
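The static/dynamic cache relationship can be illustrated with a small placement sketch; capacities and the bypass behavior when both caches are full are assumptions, not part of the abstract:

```python
class SlcCache:
    """A static SLC cache plus a dynamic SLC cache that is used only
    once the entire static portion has data stored therein."""

    def __init__(self, static_blocks, dynamic_blocks):
        self.static_cap = static_blocks
        self.dynamic_cap = dynamic_blocks
        self.static = []
        self.dynamic = []

    def write(self, data):
        if len(self.static) < self.static_cap:
            self.static.append(data)
            return "static"
        # Static portion is entirely full: second portion now operates
        # as a dynamic SLC cache.
        if len(self.dynamic) < self.dynamic_cap:
            self.dynamic.append(data)
            return "dynamic"
        return "bypass"  # both caches full; write direct (assumption)

cache = SlcCache(static_blocks=2, dynamic_blocks=1)
placements = [cache.write(i) for i in range(4)]
```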

System-on-chips and operation methods thereof

A system-on-chip includes a magnetic random access memory and a security interface. The magnetic random access memory includes a plurality of memory areas, each of the plurality of memory areas having a different security level. The security interface includes circuitry configured to: identify a memory area from among the plurality of memory areas based on a received memory address associated with a received memory command; determine a security level associated with the identified memory area; and perform a memory operation on received data based on the received memory command and the determined security level.
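The area lookup step can be sketched as a range search over an area table; the table layout and level names are hypothetical:

```python
def identify_security_level(address, area_table):
    """Look up the security level of the memory area containing `address`.

    `area_table` holds (start, end, level) tuples describing the MRAM's
    memory areas, each with a different security level; areas are
    assumed non-overlapping.
    """
    for start, end, level in area_table:
        if start <= address < end:
            return level
    raise ValueError("address not mapped to any memory area")

AREAS = [(0x0000, 0x4000, "open"),
         (0x4000, 0x8000, "read-protected"),
         (0x8000, 0xC000, "encrypted")]
level = identify_security_level(0x4100, AREAS)
```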

NAMESPACE PERFORMANCE ACCELERATION BY SELECTIVE SSD CACHING
20190179802 · 2019-06-13

In one example, a method includes receiving metadata in the form of a modification to metadata represented by a file system namespace abstraction, where the file system namespace abstraction corresponds to less than the entire file system namespace. Next, the file system namespace abstraction is updated based on the received metadata. A determination is then made whether or not caching is enabled for the file system namespace abstraction. If caching is enabled, the updated file system namespace abstraction is cached in SSD storage.
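The update-then-maybe-cache flow can be sketched with dicts standing in for the abstraction and the SSD cache; the field names are hypothetical:

```python
def apply_metadata_update(abstraction, update, ssd_cache):
    """Fold a metadata modification into the namespace abstraction, then
    cache the updated abstraction in SSD storage when caching is enabled
    for that abstraction.
    """
    abstraction["metadata"].update(update)
    if abstraction["caching_enabled"]:
        ssd_cache[abstraction["name"]] = dict(abstraction["metadata"])
    return abstraction

ssd = {}
ns = {"name": "/backups", "caching_enabled": True, "metadata": {"files": 10}}
apply_metadata_update(ns, {"files": 11}, ssd)
```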

Combined Transparent/Non-Transparent Cache
20190171380 · 2019-06-06

In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
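The decoder's masking step can be sketched as bit arithmetic; this assumes, purely for illustration, that the programmable transparent size is a power of two:

```python
def decode_address(address, non_transparent, transparent_size):
    """Select a location for a request.

    Non-transparent requests access the memory directly via the input
    address; transparent requests have their upper address bits masked
    so only a location inside the transparent portion can be selected.
    `transparent_size` is assumed to be a power of two.
    """
    if non_transparent:
        return address
    return address & (transparent_size - 1)

masked = decode_address(0x1234, non_transparent=False, transparent_size=0x100)
direct = decode_address(0x1234, non_transparent=True, transparent_size=0x100)
```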

Distributed big data in a process control system

A distributed big data device in a process plant includes an embedded big data appliance configured to locally stream and store, as big data, data that is generated, received, or observed by the device, and to perform one or more learning analyses on at least a portion of the stored data. The embedded big data appliance generates learned knowledge based on a result of the learning analyses, which the device may use to modify its operation to control a process in real-time in the process plant, and/or which the device may transmit to other devices in the process plant. The distributed big data device may be a field device, a controller, an input/output device, or other process plant device, and may utilize learned knowledge created by other devices when performing its learning analyses.
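The stream-store-analyze cycle can be sketched minimally; the running mean below is a trivial stand-in for a learning analysis, not the analysis the patent describes:

```python
def learning_cycle(appliance, new_data):
    """Stream new data into the embedded appliance's local store, run a
    learning analysis over the stored data, and record the learned
    knowledge (here, a running mean of a sensor value as a stand-in).
    """
    appliance["store"].extend(new_data)
    knowledge = sum(appliance["store"]) / len(appliance["store"])
    appliance["learned"] = knowledge
    return knowledge

device = {"store": [], "learned": None}
learning_cycle(device, [10.0, 14.0])
k = learning_cycle(device, [12.0])
```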

TECHNOLOGIES FOR EFFICIENTLY PERFORMING SCATTER-GATHER OPERATIONS
20190121731 · 2019-04-25

Technologies for efficiently performing scatter-gather operations include a device with circuitry configured to associate, with a template identifier, a set of non-contiguous memory locations of a memory having a cross point architecture. The circuitry is additionally configured to access, in response to a request that identifies the non-contiguous memory locations by the template identifier, the memory locations.
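The template mechanism can be sketched as a mapping from a template identifier to non-contiguous locations; the class and method names are hypothetical:

```python
class TemplateMemory:
    """Associate a template id with non-contiguous memory locations, then
    access all of them in one request that names only the id."""

    def __init__(self, size):
        self.cells = [0] * size
        self.templates = {}

    def associate(self, template_id, locations):
        self.templates[template_id] = list(locations)

    def scatter(self, template_id, values):
        for loc, value in zip(self.templates[template_id], values):
            self.cells[loc] = value

    def gather(self, template_id):
        return [self.cells[loc] for loc in self.templates[template_id]]

mem = TemplateMemory(16)
mem.associate("t0", [1, 5, 11])   # non-contiguous locations
mem.scatter("t0", [7, 8, 9])
gathered = mem.gather("t0")
```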

Processor memory architecture

A processing device includes a first memory interface for accessing a first memory device of a main memory. The first memory interface is compatible with Low-Power Double-Data-Rate (LPDDR) signaling. The processing device further includes a second memory interface, which has different signaling characteristics from the first memory interface, for accessing a second memory device of the main memory. The second memory device has an access latency higher than the first memory device and lower than a secondary storage device. The first memory device and the second memory device may be used as a dual memory or a two-tiered memory.
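A two-tiered use of the two devices can be illustrated with a placement sketch; the hot/cold policy below is an assumption for illustration, not the claimed method:

```python
def place_allocation(size, hot, lpddr_free, second_free):
    """Pick a tier in a two-tiered main memory: hot data goes to the
    low-latency LPDDR device, colder or overflow data to the second,
    higher-latency device; otherwise spill to secondary storage.
    """
    if hot and lpddr_free >= size:
        return "lpddr"
    if second_free >= size:
        return "second"
    return "secondary_storage"

t1 = place_allocation(4096, hot=True, lpddr_free=8192, second_free=1 << 20)
t2 = place_allocation(4096, hot=False, lpddr_free=8192, second_free=1 << 20)
```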