G06F2212/251

DATA TRANSFER WITH CONTINUOUS WEIGHTED PPM DURATION SIGNAL
20230046980 · 2023-02-16 ·

A computer-implemented method for processing signals is provided, including generating a temporally continuous weighted pulse position modulation (CW PPM) duration signal from an input analog signal, converting the CW PPM duration signal to a memory access signal, executing a multiply-and-accumulate (MAC) operation with the memory access signal, and generating the input analog signal from a result of the MAC operation via an activation function (AF).
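
The MAC-plus-activation step can be illustrated with a minimal sketch. The PPM encoding itself is not modeled; the addresses and pulse weights below are illustrative stand-ins for values derived from a CW PPM duration signal, and ReLU stands in for the unspecified activation function.

```python
def activation(x):
    """ReLU stand-in for the activation function (AF); assumed, not from the source."""
    return max(0.0, x)

def mac(weights, addresses, pulses):
    """Multiply each pulse weight by the memory word it addresses and accumulate."""
    return sum(weights[a] * p for a, p in zip(addresses, pulses))

weights = [0.5, -1.0, 2.0, 0.25]   # memory contents (illustrative)
addresses = [2, 0, 3]              # addresses derived from the CW PPM signal
pulses = [1.0, 0.5, 2.0]           # pulse weights (illustrative)

result = activation(mac(weights, addresses, pulses))
# 2.0*1.0 + 0.5*0.5 + 0.25*2.0 = 2.75, then ReLU leaves it unchanged
```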

STORAGE SYSTEM AND METHOD FOR ACCESSING SAME
20230049799 · 2023-02-16 ·

A data access system includes a processor and a storage system having a main memory and a cache module. The cache module includes an FLC controller and a cache. The cache is configured as an FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. In response to data required by the processor not being in those levels of cache, the processor generates a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address; the virtual address corresponds to a physical location within the FLC or the main memory. In response to the virtual address not corresponding to a physical location within the FLC, the cache module causes the data required by the processor to be retrieved from the main memory.
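
The lookup flow can be sketched in a few lines. This is an assumed simplification: the FLC and the physical-to-virtual mapping are modeled as dictionaries, and `FLCModule` is an invented name, not the patent's structure.

```python
class FLCModule:
    """Toy model of the cache module: FLC plus physical-to-virtual mapping."""

    def __init__(self, main_memory):
        self.flc = {}             # virtual address -> cached data
        self.mapping = {}         # physical address -> virtual address
        self.main_memory = main_memory
        self.next_va = 0

    def read(self, phys_addr):
        va = self.mapping.get(phys_addr)
        if va is not None and va in self.flc:
            return self.flc[va]                # hit in the FLC
        # Miss: retrieve from main memory and install in the FLC.
        data = self.main_memory[phys_addr]
        va, self.next_va = self.next_va, self.next_va + 1
        self.mapping[phys_addr] = va
        self.flc[va] = data
        return data

mem = {0x1000: b"hello"}
cache = FLCModule(mem)
first = cache.read(0x1000)    # miss: fetched from main memory, installed
second = cache.read(0x1000)   # hit in the FLC
```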

Apparatus for Hardware Implementation of Heterogeneous Decompression Processing
20180011796 · 2018-01-11 ·

A processor includes a memory hierarchy, a buffer, and a decompressor. The decompressor includes circuitry to read elements to be decompressed according to a compression scheme, parse the elements to identify literals and matches, and, with the literals and matches, generate an intermediate token stream formatted for software-based copying of the literals and matches to produce decompressed data. The intermediate token stream is to include a format for multiple tokens that are to be written in parallel with each other, and another format for tokens that include a data dependency upon themselves.
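
The two token formats can be sketched as follows. This is a hedged illustration using LZ77-style conventions, not the patent's actual token layout: `("lit", bytes)` tokens carry literals that could be copied independently (in parallel), while `("match", distance, length)` tokens depend on output already produced, possibly including their own output (an overlapping copy).

```python
def apply_tokens(tokens):
    """Software-side copy loop consuming an intermediate token stream."""
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.extend(tok[1])              # independent literal copy
        else:                               # ("match", distance, length)
            _, dist, length = tok
            start = len(out) - dist
            for i in range(length):         # byte-by-byte copy handles the
                out.append(out[start + i])  # self-dependent (overlap) case
    return bytes(out)

# "abcabcabc": literal "abc" followed by an overlapping match
# (distance 3, length 6) that reads bytes it has just written.
decoded = apply_tokens([("lit", b"abc"), ("match", 3, 6)])
```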

MEMORY DEVICE AND OPERATING METHOD THEREOF
20230236967 · 2023-07-27 ·

A memory device executes an anneal computation with a first state and a second state. The memory device includes a first memory array, a second memory array, a control circuit, a sensing circuit, and a processing circuit. The control circuit selects a first horizontal row of memory units from the first memory array and selects a second horizontal row of memory units from the second memory array. The sensing circuit computes a local energy value of the first state according to the current generated by the memory units of the first horizontal row, and computes a local energy value of the second state according to the current generated by the memory units of the second horizontal row. The processing circuit updates the first state and/or the second state according to the local energy value of the first state and the local energy value of the second state.
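
The energy comparison can be sketched in software. This is an assumed analogy: summed row currents are modeled as weighted sums, and a Metropolis-style acceptance rule (a common annealing update, not stated in the abstract) decides between the two states.

```python
import math
import random

def local_energy(weights, state):
    """Analog of sensing the summed current from one selected row of memory units."""
    return sum(w * s for w, s in zip(weights, state))

def update(weights, state_a, state_b, temperature, rng):
    """Keep state_a or move to state_b based on their local energy values."""
    e_a = local_energy(weights, state_a)
    e_b = local_energy(weights, state_b)
    # Metropolis-style acceptance: always move downhill in energy; move
    # uphill with a probability that shrinks as the temperature drops.
    if e_b <= e_a or rng.random() < math.exp((e_a - e_b) / temperature):
        return state_b
    return state_a

weights = [1.0, 2.0, 3.0]
state = update(weights, [1, 1, 1], [1, -1, 1], 1.0, random.Random(0))
# energy of [1, -1, 1] is 2.0, lower than 6.0, so the move is accepted
```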

DETERMINISTIC MEMORY FOR TENSOR STREAMING PROCESSORS
20230024670 · 2023-01-26 ·

Embodiments are directed to a deterministic streaming system with one or more deterministic streaming processors each having an array of processing elements and a first deterministic memory coupled to the processing elements. The deterministic streaming system further includes a second deterministic memory with multiple data banks having a global memory address space, and a controller. The controller initiates retrieval of first data from the data banks of the second deterministic memory as a first plurality of streams, each stream of the first plurality of streams streaming toward a respective group of processing elements of the array of processing elements. The controller further initiates writing of second data to the data banks of the second deterministic memory as a second plurality of streams, each stream of the second plurality of streams streaming from the respective group of processing elements toward a respective data bank of the second deterministic memory.
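
The fan-out of data-bank contents into per-group streams can be sketched roughly. The round-robin bank-to-group assignment below is an assumption for illustration; the actual mapping between data banks and processing-element groups is not specified in the abstract.

```python
def fan_out(banks, num_groups):
    """Split data retrieved from memory banks into one stream per PE group."""
    streams = [[] for _ in range(num_groups)]
    for i, bank in enumerate(banks):
        streams[i % num_groups].extend(bank)   # assumed round-robin mapping
    return streams

banks = [[1, 2], [3, 4], [5, 6], [7, 8]]   # contents of four data banks
streams = fan_out(banks, 2)                # two groups of processing elements
```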

ELECTRONIC SYSTEM, OPERATING METHOD THEREOF, AND OPERATING METHOD OF MEMORY DEVICE
20230229493 · 2023-07-20 ·

Provided are an electronic system of a real-time operating system, an operating method thereof, and an operating method for a memory device. The operating method comprises: obtaining a call graph by performing static code analysis on at least one thread that corresponds to a task; obtaining a stack usage of the thread and a call probability for each node by performing runtime profiling of the call graph; allocating a threshold value of a stack size for a first memory area by taking into account the call graph, the call probability for each node, and the stack usage; expanding and storing a stack from the first memory area to a second memory area according to a comparison result between the threshold value and a stack usage of the first memory area; and returning the stack to the first memory area when execution is completed in the second memory area. The electronic system comprises a memory device configured to include the first memory area and the second memory area.
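
The spill policy can be sketched as a two-area stack. The class and all sizes are illustrative assumptions: frames go to the first area until the profiled threshold is reached, overflow into the second area, and unwind back so the stack returns to the first area.

```python
class TwoAreaStack:
    """Toy model: stack split across a first (fast) and second (overflow) area."""

    def __init__(self, threshold):
        self.threshold = threshold   # stack-size threshold from profiling
        self.first = []              # first memory area
        self.second = []             # second memory area

    def push(self, frame):
        if len(self.first) < self.threshold:
            self.first.append(frame)
        else:
            self.second.append(frame)    # expand into the second area

    def pop(self):
        if self.second:
            return self.second.pop()     # unwind the expanded portion first,
        return self.first.pop()          # returning the stack to the first area

stk = TwoAreaStack(threshold=2)
for frame in ["main", "f", "g"]:
    stk.push(frame)
overflow = list(stk.second)   # ["g"] spilled past the threshold
stk.pop()                     # execution completes in the second area
```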

ARRAY ACCESS WITH RECEIVER MASKING

Methods, systems, and devices for array access with receiver masking are described. A first device may issue to a second device a first sequence of write commands for a set of data. The first sequence of write commands may indicate different memory addresses in an order. After issuing the first sequence of write commands, the first device may issue to the second device a second sequence of read commands for the set of data. The second sequence of read commands may indicate the different memory addresses in the same order as the first sequence of write commands. Based on issuing the second sequence of read commands, the first device may receive the set of data from the second device.
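
The access pattern reads naturally as code. This minimal sketch simulates the second device as a dictionary-backed store; the point shown is only the ordering contract, writes and reads issued to the same addresses in the same order.

```python
class Device:
    """Stand-in for the second device holding the array."""
    def __init__(self):
        self.cells = {}
    def write(self, addr, value):
        self.cells[addr] = value
    def read(self, addr):
        return self.cells[addr]

device = Device()
addresses = [7, 3, 9, 1]          # different memory addresses, in an order
data = ["a", "b", "c", "d"]

for addr, value in zip(addresses, data):        # first sequence: writes
    device.write(addr, value)
readback = [device.read(a) for a in addresses]  # same addresses, same order
```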

Booting a secondary operating system kernel with reclaimed primary kernel memory

Methods that boot a secondary operating system (O/S) kernel with reclaimed primary kernel memory are disclosed herein. One method includes booting, via a processor performing a boot algorithm, a secondary kernel for an O/S in response to a primary kernel for the O/S going offline, in which the secondary kernel is configured to be loaded to a reserved memory area. The method further includes reclaiming memory space from the primary kernel for use in booting the secondary kernel in response to a determination that the reserved memory area includes insufficient memory space for completing the boot algorithm. Also disclosed herein are apparatus, systems, and computer program products that can include, perform, and/or implement the methods for providing a secondary kernel that includes a reserved area in memory.
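
The reclaim decision can be sketched as a small policy function. All names and sizes are hypothetical; the sketch captures only the rule in the abstract, reclaiming primary-kernel memory when the reserved area is insufficient for the boot.

```python
def boot_secondary(reserved_free, needed, primary_reclaimable):
    """Return (booted, reclaimed) for the secondary-kernel boot policy."""
    reclaimed = 0
    if reserved_free < needed:
        shortfall = needed - reserved_free
        if shortfall > primary_reclaimable:
            return (False, 0)          # not enough memory anywhere to boot
        reclaimed = shortfall          # reclaim from the offline primary kernel
    return (True, reclaimed)

ok_without = boot_secondary(64, 48, 0)     # reserved area suffices on its own
ok_with = boot_secondary(32, 48, 128)      # must reclaim 16 units from primary
fails = boot_secondary(32, 48, 8)          # shortfall exceeds reclaimable memory
```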

High-Throughput Algorithm For Multiversion Concurrency Control With Globally Synchronized Time
20230216921 · 2023-07-06 ·

Throughput is preserved in a distributed system while maintaining concurrency by pushing a commit wait period to client commit paths and to future readers. As opposed to servers performing commit waits, the servers assign timestamps, which are used to ensure that causality is preserved. When a server executes a transaction that writes data to a distributed database, the server acquires a user-level lock, and assigns the transaction a timestamp equal to a current time plus an interval corresponding to bounds of uncertainty of clocks in the distributed system. After assigning the timestamp, the server releases the user-level lock. Any client devices, before performing a read of the written data, must wait until the assigned timestamp is in the past.
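
The timestamp rule can be sketched directly. The uncertainty bound `EPSILON` below is an assumed constant standing in for the clock-uncertainty interval; a write is stamped with a future timestamp, and the commit wait is pushed onto the read path.

```python
import time

EPSILON = 0.005   # assumed bound on clock uncertainty, in seconds

def commit_write(store, key, value):
    """Stamp the write with now + uncertainty interval; no server-side wait."""
    ts = time.monotonic() + EPSILON
    store[key] = (value, ts)
    return ts

def read(store, key):
    """Reader must wait until the assigned timestamp is in the past."""
    value, ts = store[key]
    wait = ts - time.monotonic()
    if wait > 0:
        time.sleep(wait)              # commit wait on the client read path
    return value

store = {}
ts = commit_write(store, "x", 42)
value_read = read(store, "x")         # blocks until ts has passed
```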

Apparatuses and methods for concurrently accessing different memory planes of a memory

Apparatuses and methods for concurrently accessing different memory planes are disclosed herein. An example apparatus may include a controller associated with a queue configured to maintain respective information associated with each of a plurality of memory command and address pairs. The controller is configured to select a group of memory command and address pairs from the plurality of memory command and address pairs based on the information maintained by the queue. The example apparatus further includes a memory configured to receive the group of memory command and address pairs. The memory is configured to concurrently perform memory access operations associated with the group of memory command and address pairs.
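
The queue-based selection can be sketched as follows. The modulo plane mapping is an assumption for illustration; the sketch shows only the selection idea, choosing command/address pairs that target distinct planes so their accesses can proceed concurrently.

```python
def select_concurrent_group(queue, num_planes):
    """Select at most one queued (command, address) pair per memory plane."""
    chosen, planes_used = [], set()
    for cmd, addr in queue:
        plane = addr % num_planes       # assumed address-to-plane mapping
        if plane not in planes_used:
            planes_used.add(plane)
            chosen.append((cmd, addr))
    return chosen

queue = [("read", 0), ("read", 4), ("write", 1), ("read", 8)]
group = select_concurrent_group(queue, num_planes=4)
# addresses 0, 4, and 8 all map to plane 0, so only the first is selected
# for this group, alongside the write targeting plane 1
```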