Patent classifications
G06F2212/171
Apparatus and method to share host system RAM with mass storage memory RAM
A method includes, in one non-limiting embodiment, sending a request from a mass memory storage device to a host device, the request being one to allocate memory in the host device; writing data from the mass memory storage device to allocated memory of the host device; and subsequently reading the data from the allocated memory to the mass memory storage device. The memory may be embodied as flash memory, and the data may be related to a file system stored in the flash memory. The method enables the mass memory storage device to extend its internal volatile RAM to include RAM of the host device, enabling the internal RAM to be powered off while preserving data and context stored in the internal RAM.
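The request/write/read sequence described in this abstract can be sketched as a minimal host-allocation protocol. The `HostDevice` and `MassStorageDevice` classes and all method names below are illustrative assumptions, not the patent's actual interfaces:

```python
class HostDevice:
    """Host that grants RAM allocations to an attached mass storage device."""

    def __init__(self):
        self.allocations = {}

    def allocate(self, handle, size):
        # Reserve `size` bytes of host RAM for the requesting device.
        self.allocations[handle] = bytearray(size)

    def write(self, handle, data):
        self.allocations[handle][:len(data)] = data

    def read(self, handle):
        return bytes(self.allocations[handle])


class MassStorageDevice:
    """Device that parks its internal RAM contents in host RAM."""

    def __init__(self, host):
        self.host = host
        self.internal_ram = bytearray(b"file-system context")

    def save_to_host(self):
        # Send an allocation request, then write internal RAM to the
        # allocated host memory so internal RAM can be powered off.
        self.host.allocate("dev0", len(self.internal_ram))
        self.host.write("dev0", bytes(self.internal_ram))
        self.internal_ram = None  # internal RAM powered off; context preserved

    def restore_from_host(self):
        # Subsequently read the preserved data back into internal RAM.
        self.internal_ram = bytearray(self.host.read("dev0"))
```

The key point the sketch illustrates is that the data and context survive the round trip even though the device's own RAM is released in between.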
METHOD FOR DISPLAYING APPLICATION STORAGE SPACE AND TERMINAL
Embodiments of the present invention disclose a method for displaying application storage space and a terminal, so that a user can intuitively see the usage state of an application's storage space and clean the storage space in time, ensuring normal and efficient running of the terminal. The method in the embodiments of the present invention includes: first displaying, by a terminal, a first icon on a desktop in a first display mode; and, when determining that the storage space used by a first application corresponding to the first icon is greater than a preset storage threshold, displaying, by the terminal, the first icon in a preset display mode that is different from the first display mode.
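The threshold comparison driving the icon's display mode reduces to a single decision. The mode labels and the threshold value below are illustrative stand-ins for the patent's first and preset display modes:

```python
PRESET_STORAGE_THRESHOLD_MB = 500  # illustrative preset storage threshold

def icon_display_mode(used_storage_mb, threshold=PRESET_STORAGE_THRESHOLD_MB):
    # Show the icon in the first display mode by default, and in a
    # distinct preset display mode once the application's storage use
    # exceeds the preset threshold.
    return "preset" if used_storage_mb > threshold else "first"
```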
Method and apparatus for accessing data stored in a storage system that includes both a final level of cache and a main memory
A data access system including a processor and a storage system including a main memory and a cache module. The cache module includes a FLC controller and a cache. The cache is configured as a FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. The processor generates, in response to data required by the processor not being in the levels of cache, a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address. The virtual address corresponds to a physical location within the FLC or the main memory. The cache module causes, in response to the virtual address not corresponding to the physical location within the FLC, the data required by the processor to be retrieved from the main memory.
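The lookup order in this abstract (FLC first, main memory on a miss, with the FLC controller translating the processor's physical address into a virtual address) can be sketched as follows. The class layout and the translation policy are illustrative assumptions:

```python
class FLCModule:
    """Final level of cache (FLC) consulted before main memory."""

    def __init__(self, main_memory):
        self.main_memory = main_memory  # physical address -> data
        self.flc = {}                   # virtual address -> cached data
        self.translation = {}           # physical -> virtual address map

    def to_virtual(self, physical_address):
        # FLC controller: generate a virtual address from the physical
        # address produced by the processor on a CPU-cache miss.
        return self.translation.setdefault(physical_address,
                                           len(self.translation))

    def access(self, physical_address):
        virtual_address = self.to_virtual(physical_address)
        if virtual_address in self.flc:            # FLC hit
            return self.flc[virtual_address], "flc"
        data = self.main_memory[physical_address]  # miss: go to main memory
        self.flc[virtual_address] = data           # install for future hits
        return data, "main_memory"
```

A first access misses the FLC and is served from main memory; a repeated access to the same physical address hits the FLC.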
Embedded device and memory management method thereof
An embedded device and a memory management method of the embedded device are provided. The embedded device includes a system memory and a hardware memory. An operating system of the embedded device operates based on virtual memory addresses. The method includes: mapping the virtual memory addresses to indirect memory addresses by a first memory management unit; and mapping the indirect memory addresses to physical addresses of the hardware memory and selectively to physical addresses of the system memory by a second memory management unit, such that the operating system of the embedded device is able to access the hardware memory.
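The two-stage translation in this abstract composes two table lookups: the first MMU maps virtual to indirect addresses, and the second maps indirect addresses either to hardware memory or selectively to system memory. All addresses and the tuple encoding below are illustrative:

```python
class MemoryManagementUnit:
    """Single-stage address translation table."""

    def __init__(self, mapping):
        self.mapping = mapping

    def translate(self, address):
        return self.mapping[address]

# First MMU: virtual -> indirect. Second MMU: indirect -> physical address
# in hardware memory or, selectively, in system memory.
mmu1 = MemoryManagementUnit({0x1000: 0x20, 0x2000: 0x21})
mmu2 = MemoryManagementUnit({0x20: ("hardware", 0x400),
                             0x21: ("system", 0x800)})

def os_access(virtual_address):
    # The OS sees only virtual addresses; both stages are transparent.
    indirect = mmu1.translate(virtual_address)
    return mmu2.translate(indirect)
```

The indirection means the second MMU's table can be rewritten (e.g. to spill into system memory) without the OS's virtual addresses changing.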
MEMORY SYSTEM AND METHOD OF OPERATING THEREOF, STORAGE MEDIUM AND MEMORY CONTROLLER
According to one aspect of the present disclosure, a memory system is provided. The memory system may include at least one non-volatile memory device and a memory controller coupled to the at least one non-volatile memory device. A multi-level mapping table may be stored in the memory device. The multi-level mapping table may be configured to implement mapping from a logical address to a physical address. The memory controller may include a buffer, and a portion of the multi-level mapping table may be stored in the buffer. The memory controller may be configured to perform a random read operation on data stored in the memory device. In response to the random read range corresponding to the random read operation meeting a preset condition, the memory controller may be configured to adjust the buffer capacity allocated to storing different levels of the mapping table.
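The capacity adjustment described here can be sketched as a buffer that reapportions its entries between mapping-table levels when a random read range meets a preset condition. The two-level split and the specific rebalancing policy below are illustrative assumptions, not the patent's actual algorithm:

```python
class MappingTableBuffer:
    """Controller buffer holding portions of a multi-level mapping table."""

    def __init__(self, capacity_entries):
        self.capacity = capacity_entries
        # Default split favors the leaf (fine-grained) mapping level.
        self.share = {"upper": capacity_entries // 4,
                      "leaf": capacity_entries - capacity_entries // 4}

    def on_random_read(self, read_range, preset_range):
        # When the random read range meets the preset condition, cached
        # leaf entries are unlikely to be reused, so shift capacity toward
        # the upper-level table (the rebalancing ratio is illustrative).
        if read_range >= preset_range:
            half = self.capacity // 2
            self.share = {"upper": half, "leaf": self.capacity - half}
```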
DEVICE, DATA SAVING PROCESS METHOD AND MEDIUM FOR DATA SAVING PROCESS PROGRAM
A pre-copy creator creates a copy of a memory area used by an application that has been switched to the background, and updates a memory management table accordingly. At the time of transition to a hibernation mode, an uncopied-area saving unit refers to the memory management table and copies the memory areas used by the application, excluding any area whose copy has already been created by the pre-copy creator.
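The division of labor between the pre-copy creator and the uncopied-area saving unit can be sketched with a shared table of saved areas. The class and method names are illustrative:

```python
class SaveManager:
    """Memory management table tracking which areas already have copies."""

    def __init__(self):
        self.copies = {}  # (app, area name) -> saved bytes

    def pre_copy(self, app, areas):
        # Pre-copy creator: copy a backgrounded app's memory areas ahead
        # of time and record them in the memory management table.
        for name, data in areas.items():
            self.copies[(app, name)] = bytes(data)

    def save_uncopied(self, app, areas):
        # Uncopied-area saving unit: at hibernation time, consult the
        # table and copy only areas the pre-copy creator has not handled.
        newly_saved = 0
        for name, data in areas.items():
            if (app, name) not in self.copies:
                self.copies[(app, name)] = bytes(data)
                newly_saved += 1
        return newly_saved
```

The benefit is that most copying happens while the app is merely backgrounded, so the hibernation transition itself has little left to save.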
Secure Garbage Collection on a Mobile Device
Methods and systems for performing garbage collection involving sensitive information on a mobile device are described herein. Secure information is received at a mobile device over a wireless network. The sensitive information is extracted from the secure information. A software program operating on the mobile device uses an object to access the sensitive information. Secure garbage collection is performed upon the object after the object becomes unreachable.
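The core idea, scrubbing an object's sensitive contents before its memory is reclaimed, can be sketched as follows. The `SensitiveObject` wrapper and `secure_collect` function are illustrative; the patent concerns the mobile device's actual garbage collector:

```python
class SensitiveObject:
    """Object whose backing buffer holds extracted sensitive information."""

    def __init__(self, secret):
        self.buffer = bytearray(secret)

    def scrub(self):
        # Overwrite the sensitive bytes in place so they cannot be
        # recovered from memory after the object is collected.
        for i in range(len(self.buffer)):
            self.buffer[i] = 0

def secure_collect(unreachable_objects):
    # Secure garbage collection: scrub each unreachable object before
    # its memory is returned to the allocator.
    for obj in unreachable_objects:
        obj.scrub()
```

An ordinary collector would free the buffer with the secret still resident in RAM; the scrub step is what makes the collection "secure".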
Methods, apparatus, and systems for secure demand paging and other paging operations for processor devices
A secure demand paging system includes a processor operable for executing instructions, an internal memory for a first page in a first virtual machine context, an external memory for a second page in a second virtual machine context, and a security circuit coupled to the processor and to the internal memory for maintaining the first page secure in the internal memory. The processor is operable to execute sets of instructions representing: a central controller, an abort handler coupled to supply to the central controller at least one signal representing a page fault by an instruction in the processor, a scavenger responsive to the central controller and operable to identify the first page as a page to free, a virtual machine context switcher responsive to the central controller to change from the first virtual machine context to the second virtual machine context; and a swapper manager operable to swap in the second page from the external memory with decryption and integrity check, to the internal memory in place of the first page.
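The swapper manager's swap-in path (decrypt the externally stored page, then verify its integrity before installing it in internal memory) can be sketched as below. The XOR "cipher" is a deliberately trivial stand-in for real page encryption, and all names are illustrative:

```python
import hashlib

XOR_KEY = 0x5A  # stand-in for a real page-encryption key; illustrative only

def swap_out(page):
    # Encrypt the page for external memory and record an integrity digest.
    cipher = bytes(b ^ XOR_KEY for b in page)
    return cipher, hashlib.sha256(page).digest()

def swap_in(cipher, digest):
    # Swapper manager: decrypt the external page and check its integrity
    # before placing it in secure internal memory.
    page = bytes(b ^ XOR_KEY for b in cipher)
    if hashlib.sha256(page).digest() != digest:
        raise ValueError("page failed integrity check")
    return page
```

A page tampered with while in external memory fails the digest check and is rejected rather than being swapped into the secure internal memory.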
Data storage device and operating method thereof
A data storage device includes a plurality of memory apparatuses, a searching unit configured to search for k physical addresses mapped to k continuous logical addresses, and a processor configured to determine the numerical consecutiveness of i logical addresses mapped to i continuous physical addresses that follow the k-th physical address of the k physical addresses, and to transmit a first pre-read command for a first pre-read memory area corresponding to the i continuous physical addresses and to first read-estimated physical addresses consecutive to the i continuous physical addresses when the numerical consecutiveness is admitted.
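The consecutiveness check and the resulting pre-read region can be sketched as follows. The size chosen for the read-estimated region and the function names are illustrative assumptions:

```python
def is_numerically_consecutive(addresses):
    # Consecutiveness is admitted when the logical addresses form an
    # unbroken ascending run (e.g. 7, 8, 9).
    return all(b - a == 1 for a, b in zip(addresses, addresses[1:]))

def build_pre_read(logical_run, physical_start, i):
    # Cover the i continuous physical addresses plus the read-estimated
    # physical addresses consecutive to them (sizing the estimated region
    # equal to i is an illustrative choice).
    if not is_numerically_consecutive(logical_run):
        return None
    return list(range(physical_start, physical_start + 2 * i))
```

Pre-reading the estimated region ahead of demand hides media latency when the host's logical access pattern is, in fact, sequential.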