G06F13/1657

Memory system having memory controller

A memory system includes: a memory block including a plurality of pages, each comprising a plurality of memory cells connected to bit lines and to one of a plurality of word lines; an address manager configured to output addresses corresponding to the plurality of pages; and a system data manager configured to generate index data corresponding to each of the addresses, the index data indicating whether user data is inverted, and to output the index data and information on a memory cell in which the index data is to be stored. The system data manager is configured to determine memory cells connected to different bit lines, from among memory cells included in adjacent pages corresponding to consecutive addresses, as the memory cells in which the index data corresponding to the consecutive addresses are to be stored.
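The inversion-index scheme above resembles bus-invert-style coding. A minimal sketch, assuming a majority-of-ones inversion criterion and a simple modular placement rule for the index bit (neither detail is given in the abstract):

```python
def encode_page(user_bits):
    # Hypothetical inversion criterion: if more than half the bits are 1,
    # store the inverted data and set the index bit to record the inversion.
    ones = sum(user_bits)
    invert = ones > len(user_bits) // 2
    stored = [b ^ 1 for b in user_bits] if invert else list(user_bits)
    return stored, int(invert)

def index_bit_line(page_address, num_bit_lines):
    # Illustrative placement rule: consecutive page addresses map their
    # index bits to different bit lines, so the index bits of adjacent
    # pages never share a bit line.
    return page_address % num_bit_lines

def decode_page(stored, index_bit):
    # Undo the inversion recorded by the index bit.
    return [b ^ index_bit for b in stored]
```

Decoding XORs every stored bit with the index bit, so a round trip recovers the original user data regardless of whether inversion was applied.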

Register file segments for supporting code block execution by using virtual cores instantiated by partitionable engines
09842005 · 2017-12-12

A system for executing instructions using a plurality of register file segments for a processor. The system includes a global front end scheduler for receiving an incoming instruction sequence, wherein the global front end scheduler partitions the incoming instruction sequence into a plurality of code blocks of instructions and generates a plurality of inheritance vectors describing interdependencies between instructions of the code blocks. The system further includes a plurality of virtual cores of the processor coupled to receive code blocks allocated by the global front end scheduler, wherein each virtual core comprises a respective subset of resources of a plurality of partitionable engines, wherein the code blocks are executed by using the partitionable engines in accordance with a virtual core mode and in accordance with the respective inheritance vectors. A plurality of register file segments are coupled to the partitionable engines for providing data storage.
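The front-end behaviour described above (partition a sequence into code blocks and record which earlier block produced each consumed register) can be sketched as follows; the fixed block size and the dictionary representation of an inheritance vector are assumptions for illustration, not details from the abstract:

```python
def partition_with_inheritance(instructions, block_size):
    # instructions: list of (dest_register, [source_registers]) tuples.
    # Split into fixed-size code blocks and, for each block, record an
    # inheritance vector mapping each inherited source register to the
    # index of the earlier block that last wrote it (None means the
    # value predates the sequence). Intra-block dependencies are not
    # inheritance and are skipped.
    blocks = [instructions[i:i + block_size]
              for i in range(0, len(instructions), block_size)]
    last_writer = {}  # register -> block index of its latest producer
    vectors = []
    for idx, block in enumerate(blocks):
        vec = {}
        for dest, sources in block:
            for src in sources:
                w = last_writer.get(src)
                if src not in vec and w != idx:
                    vec[src] = w  # inter-block dependency (or None)
            last_writer[dest] = idx
        vectors.append(vec)
    return blocks, vectors
```

Each virtual core executing a block would consult that block's vector to know which other engine's register file segment holds each inherited value.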

Processor memory system
09836412 · 2017-12-05

A plurality of processing elements (PEs) includes memory local to at least one of the processing elements, with a data packet-switched network interconnecting the processing elements and the memory to enable any of the PEs to access the memory. The network consists of nodes arranged linearly or in a grid to connect the PEs and their local memories to a common controller. The processor performs memory accesses on data stored in the memory in response to control signals sent by the controller to the memory. The local memories share the same memory map or space. The packet-switched network supports multiple concurrent transfers between PEs and memory. Memory accesses include block and/or broadcast read and write operations, in which data can be replicated within the nodes and, depending on the operation, written into the shared memory or into the local PE memory.
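A toy model of the shared memory map and the block/broadcast write behaviour; the linear address interleaving across nodes and the class names are illustrative assumptions:

```python
class Node:
    # One network node: a PE with its local memory.
    def __init__(self, size):
        self.mem = [0] * size

class PENetwork:
    # Local memories share a single linear memory map: a global address
    # resolves to (node, offset) so any PE can reach any memory.
    def __init__(self, num_nodes, mem_per_node):
        self.nodes = [Node(mem_per_node) for _ in range(num_nodes)]
        self.mem_per_node = mem_per_node

    def write(self, addr, data):
        # Block write: route the data to the one node owning the address.
        node, off = divmod(addr, self.mem_per_node)
        for i, b in enumerate(data):
            self.nodes[node].mem[off + i] = b

    def broadcast_write(self, off, data):
        # Broadcast write: replicate the same data into every node's
        # local memory at the same offset.
        for node in self.nodes:
            for i, b in enumerate(data):
                node.mem[off + i] = b

    def read(self, addr, n):
        node, off = divmod(addr, self.mem_per_node)
        return self.nodes[node].mem[off:off + n]
```

The block write touches exactly one node, while the broadcast write is the replication-within-nodes case the abstract describes.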

PROCESSOR, SIGNAL ADJUSTMENT METHOD AND COMPUTER SYSTEM
20230176751 · 2023-06-08

A processor, a signal adjustment method, and a computer system including the processor are provided, pertaining to the field of computer technologies. The processor includes a memory controller. The memory controller includes a memory physical interface and a first processor core, and the first processor core is connected to the memory physical interface. After the computer system is started and during a running process of the computer system, the first processor core is configured to adjust a timing relationship between a target signal of the memory physical interface and a synchronization signal of the target signal. According to this application, timing alignment can be ensured between the target signal and the synchronization signal, thereby improving correctness of sampling performed on the target signal.
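The adjustment described resembles the delay ("eye") training commonly performed on memory PHY interfaces: sweep a programmable delay tap between the target signal and its synchronization signal, note which taps sample correctly, and centre the sampling point in the widest passing window. A sketch under that assumption; the abstract does not disclose the concrete algorithm, and `sample_at` abstracts the hardware sampling check:

```python
def center_delay(sample_at, num_taps):
    # Sweep every delay tap; sample_at(tap) -> bool reports whether the
    # target signal is sampled correctly at that tap.
    passing = [t for t in range(num_taps) if sample_at(t)]
    if not passing:
        return None  # no working alignment found
    # Find the longest run of consecutive passing taps (the "eye").
    best_start = best_len = run_start = run_len = 0
    prev = None
    for t in passing:
        if prev is not None and t == prev + 1:
            run_len += 1
        else:
            run_start, run_len = t, 1
        if run_len > best_len:
            best_start, best_len = run_start, run_len
        prev = t
    # Centre of the widest window maximizes margin on both sides.
    return best_start + best_len // 2
```

Running this after boot, and again periodically while the system runs, matches the abstract's point that alignment is maintained during operation, not only at startup.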

Adaptive memory transaction scheduling

Memory transactions in a computing device may be scheduled by forming subsets of a set of memory transactions corresponding to memory transaction requests directed to a DRAM. Each subset may include transactions identified by the same combination of direction (read or write) and DRAM rank as each other. The transactions selected for inclusion in each subset may be determined based on efficiency. One of the subsets may be selected based on a metric applied to each subset, and the transactions in the selected subset may be sent to the DRAM.
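The grouping-and-selection flow above can be sketched directly; the `efficiency` and `metric` functions below are caller-supplied stand-ins for the device-specific policies the abstract leaves open:

```python
from collections import defaultdict

def schedule(requests, efficiency, metric):
    # requests: list of (direction, rank, addr) tuples.
    # 1. Form subsets keyed by the same (direction, rank) combination.
    subsets = defaultdict(list)
    for req in requests:
        direction, rank, _ = req
        subsets[(direction, rank)].append(req)
    # 2. Order each subset by an efficiency score (e.g. row-hit
    #    likelihood in a real controller).
    for key in subsets:
        subsets[key].sort(key=efficiency, reverse=True)
    # 3. Pick the subset that maximizes the selection metric and send
    #    its transactions to the DRAM.
    best_key = max(subsets, key=lambda k: metric(subsets[k]))
    return subsets[best_key]
```

Because every transaction in the chosen subset shares a direction and rank, issuing them back-to-back avoids read/write turnaround and rank-switch penalties, which is the efficiency the abstract alludes to.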

SYSTEM ON A CHIP HAVING HIGH OPERATING CERTAINTY
20170300447 · 2017-10-19

The invention concerns a system on a chip (100) comprising a set of master modules which includes a main processing module (101a) and a direct memory access controller (DMA) (102a) associated with said module (101a), and at least one secondary processing module (101b) and a DMA (102b) associated with said module (101b), and slave modules; each master module being configured for connection to a clock source, a power supply, and slave modules which include a set of proximity peripherals (105a,b), at least one internal memory (104a,b) and a set (106) of peripherals and external memories shared by the master modules; said clock source, power supply, proximity peripherals (105a,b) and a cache memory (103a,b) of a master processing module and its DMA being dedicated to said master processing module and not shared with the other processing modules of the set of master modules; and said at least one internal memory (104a,b) of each master processing module and its DMA being dedicated to said master processing module, said main processing module (101a) being nevertheless able to access same.

Memory systems with multiple modules supporting simultaneous access responsive to common memory commands

Described are memory systems in which a memory controller issues commands and addresses to multiple memory modules that collectively support each read and write transaction. A common set of control signal lines from the controller communicates the same command and address signals to the modules. For write commands, the controller sends subsets of write data to each module over a respective subset of data lines. For read commands, each module responds with a subset of the requested data over the respective subset of data lines. The memory modules can be width configurable so that a single full-width module can connect to both subsets of data lines to convey full-width data, or two half-width modules can connect one each to the subsets of data lines.
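The data-line striping can be illustrated with an even split across modules; this assumes the symmetric half-width case, which is only one of the configurations the abstract permits:

```python
def split_write_data(data, num_modules):
    # One command/address goes to all modules over the shared control
    # lines; the write data is divided into per-module subsets, one per
    # group of data lines.
    chunk = len(data) // num_modules
    return [data[i * chunk:(i + 1) * chunk] for i in range(num_modules)]

def merge_read_data(subsets):
    # On a read, each module drives its own subset of data lines and the
    # controller reassembles the full-width word.
    out = []
    for s in subsets:
        out.extend(s)
    return out
```

A single full-width module is the degenerate case `num_modules == 1`, where the "split" is the whole word on both data-line subsets at once.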

Memory distribution across multiple non-uniform memory access nodes
09785581 · 2017-10-10

A system, method, and apparatus for determining memory distribution across multiple non-uniform memory access processing nodes are disclosed. An apparatus includes processing nodes, each including processing units and main memory serving as local memory. A bus connects the processing units of each processing node to the main memory of a different processing node as shared memory. Access to local memory has lower memory access latency than access to shared memory. The processing nodes execute threads distributed across the processing nodes and detect the memory accesses made from each processing node for each thread. The processing nodes determine locality values for each thread that represent the fraction of memory accesses made from each processing node, and determine processing time values for the threads over a sampling period. The processing nodes then determine weighted locality values for the threads and derive a memory distribution across the processing nodes from the weighted locality values.
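The locality/weighting computation can be sketched as follows. Weighting each thread's locality by its processing time is the combination the abstract states; the exact arithmetic (time-weighted sum, normalized to fractions) is an assumption for illustration:

```python
def memory_distribution(samples):
    # samples: thread -> (per-node access counts, processing time within
    # the sampling period). A thread's locality value on a node is the
    # fraction of its accesses made from that node.
    num_nodes = len(next(iter(samples.values()))[0])
    weighted = [0.0] * num_nodes
    for counts, cpu_time in samples.values():
        total = sum(counts) or 1
        for node, c in enumerate(counts):
            # Weight the locality value by the thread's processing time,
            # so busy threads steer placement more than idle ones.
            weighted[node] += (c / total) * cpu_time
    norm = sum(weighted) or 1.0
    # Suggested fraction of memory to place on each node.
    return [w / norm for w in weighted]
```

Placing pages in proportion to these fractions biases memory toward the nodes where time-dominant threads actually issue their accesses, cutting remote (shared-memory) latency.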

BOOT online upgrading device and method
20170286128 · 2017-10-05

The disclosure provides a BOOT online upgrading device. The device includes a logical gating unit, at least two embedded Central Processing Units (CPUs), and BOOT memories each corresponding to a respective CPU, all connected to the logical gating unit through access buses. Each embedded CPU includes BOOT upgrading drive modules for all the BOOT memories, and the BOOT upgrading drive modules are configured to execute BOOT version updating on the BOOT memories. The logical gating unit is configured to provide an access channel from any embedded CPU to any BOOT memory. Correspondingly, the disclosure also provides a BOOT online upgrading method. The problems of BOOT online upgrading failure, and of incapability of normal starting despite successful upgrading, are solved; BOOT online upgrading reliability is improved, and BOOT upgrading risk and later maintenance cost are reduced.

Sequential memory access operations
09778846 · 2017-10-03

Methods of operating a memory include performing a memory access operation, obtaining an address corresponding to a subsequent memory access operation prior to stopping the memory access operation, stopping the memory access operation, sharing charge between access lines used for the memory access operation and access lines to be used for the subsequent memory access operation, and performing the subsequent memory access operation.