G06F5/16

ELECTRONIC DEVICE AND METHOD TO REDUCE POWER CONSUMPTION DURING ACCESS
20250103284 · 2025-03-27 ·

An electronic device includes a first buffer, a second buffer, and a multiplexer. The first buffer receives and stores first data when the first buffer is not full, and performs a First-In-First-Out (FIFO) operation on the first data. The second buffer receives and stores second data when the first buffer is full, and performs the FIFO operation on the second data. The multiplexer is electrically connected between the first buffer and the second buffer, and receives either the first data from outside the electronic device or the second data from the second buffer. A depth of the first buffer is less than a depth of the second buffer.
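The two-buffer scheme can be sketched in software: a shallow first buffer absorbs incoming data, a deeper second buffer holds the overflow, and a multiplexer-like step refills the first buffer from the second on every pop so that global FIFO order is preserved. All names, depths, and the class itself are illustrative, not taken from the patent.

```python
from collections import deque

class TwoTierFifo:
    """Illustrative model of the abstract's first buffer (shallow),
    second buffer (deep), and multiplexer between them."""

    def __init__(self, first_depth=4, second_depth=16):
        assert first_depth < second_depth  # per the abstract
        self.first = deque()   # shallow buffer, serviced first
        self.second = deque()  # deep buffer, absorbs overflow
        self.first_depth = first_depth

    def push(self, item):
        # Input side of the multiplexer: new data enters the first
        # buffer unless it is full, in which case it is parked in
        # the second buffer.
        if len(self.first) < self.first_depth:
            self.first.append(item)
        else:
            self.second.append(item)

    def pop(self):
        # FIFO output comes from the first buffer; the multiplexer
        # then refills it from the second buffer when data is pending.
        item = self.first.popleft()
        if self.second:
            self.first.append(self.second.popleft())
        return item

fifo = TwoTierFifo(first_depth=2, second_depth=8)
for x in range(5):
    fifo.push(x)
print([fifo.pop() for _ in range(5)])  # FIFO order kept: [0, 1, 2, 3, 4]
```

The refill-on-pop step is what keeps arrival order intact even though items are physically split across two buffers of different depths.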

Data processing

A data processing apparatus comprises a processor having an internal state dependent upon execution of application program code, the processor being configured to generate display data relating to images to be displayed and to buffer display data relating to a most recent period of execution of a currently executing application. The apparatus includes RAM for storing temporary data relating to a current operational state of program execution. The apparatus also includes a data transfer controller configured to transfer, to a suspend data memory, data from the RAM relating to the currently executing application, data relating to a current internal state of the processor, and the buffered display data. The data transfer controller is further configured to transfer data from the suspend data memory back to the RAM and to the processor to recreate the execution state of the application at the time a suspend instruction was executed, and to retrieve the display data relating to the resumed application.
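The transfer controller's two paths can be sketched as a pair of functions: suspend bundles the application's RAM image, the processor's internal state, and the buffered display data into one suspend-memory record; resume unpacks that record to recreate the execution point and hand back the display frames. The record layout is an illustrative assumption, not taken from the claim.

```python
def suspend(ram, cpu_state, display_buffer):
    """Suspend path: copy RAM contents, processor state, and the
    buffered display data into a single suspend-memory record.
    The dict layout is illustrative."""
    return {
        "ram": dict(ram),
        "cpu": dict(cpu_state),
        "display": list(display_buffer),
    }

def resume(record):
    """Resume path: restore RAM and processor state so execution can
    recreate the point at which the suspend instruction ran, and
    return the buffered display data for immediate redisplay."""
    return (
        dict(record["ram"]),
        dict(record["cpu"]),
        list(record["display"]),
    )

rec = suspend({"heap": [1, 2]}, {"pc": 0x400}, ["frame0", "frame1"])
ram, cpu, frames = resume(rec)
print(cpu["pc"] == 0x400 and frames[-1] == "frame1")  # True
```

Copying on both paths mirrors the point that the suspend data memory holds a snapshot independent of the live RAM, so the running state can change after suspend without corrupting the record.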

EMBEDDING A STATE SPACE MODEL ON MODELS-ON-SILICON HARDWARE ARCHITECTURE

A state space model with selective updates, also referred to as a Mamba-based block, in a Mamba-based model can be embedded onto a silicon chip. Specialized hardware modules in a models-on-silicon chip, such as an optimized selective scan unit and an optimized 1D convolution unit, can perform the operations of the selective state space model of the Mamba-based model. These modules individually and collectively enhance processing speed, power efficiency, and overall performance. The parameters of the Mamba-based model, such as its weights, are arranged in sequential order in one or more sequential read memories according to a predetermined timing sequence. By embedding the selective state space model onto the models-on-silicon architecture, which excels at managing larger input context sizes, this solution makes the Mamba-based model a highly viable and efficient option for AI tasks performed on resource-constrained devices.
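The selective scan unit's core operation can be sketched as a linear recurrence whose coefficients vary per time step (the "selective" part of a Mamba-style block). In hardware, the discretized parameters would be streamed from the sequential read memories in the predetermined timing order; here they are plain per-step lists, and the scalar state is a simplifying assumption (real blocks carry a state vector per channel).

```python
def selective_scan(x, a_bar, b_bar, c):
    """Minimal scalar sketch of a selective scan:
        h_t = Ā_t · h_{t-1} + B̄_t · x_t
        y_t = C_t · h_t
    with coefficients that depend on the time step. Names are
    illustrative, not from the patent."""
    h, ys = 0.0, []
    for t in range(len(x)):
        # Parameters arrive in sequence, one set per step, mirroring
        # the sequential-read-memory access pattern.
        h = a_bar[t] * h + b_bar[t] * x[t]
        ys.append(c[t] * h)
    return ys

ys = selective_scan(
    x=[1.0, 2.0, 3.0],
    a_bar=[0.5, 0.5, 0.5],  # per-step decay of the hidden state
    b_bar=[1.0, 1.0, 1.0],  # per-step input gate
    c=[1.0, 1.0, 1.0],      # per-step readout
)
print(ys)  # [1.0, 2.5, 4.25]
```

Because each step consumes its parameters exactly once and in order, the access pattern is purely sequential, which is what makes a sequential read memory (rather than random-access storage) a natural fit for the weights.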

Matrix computing method, chip, and related device
12608172 · 2026-04-21 ·

This application provides a matrix computing method, a chip, and a related device. The chip includes a first buffer configured to buffer a first vector and a second buffer configured to buffer a second vector. A scheduling module generates a selection signal based on a bitmap of the first vector. The selection signal may cause a processing element to obtain, from the first buffer, a group of non-zero elements of the first vector, and to obtain, from the second buffer, a corresponding group of elements of the second vector. An operation between the first vector and the second vector is then performed on the group of non-zero elements of the first vector and the group of elements of the second vector. In this application, elements whose value is 0 in one vector may be excluded from computing, to reduce the amount of computation.
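The bitmap-driven selection can be sketched as a sparse dot product: a bitmap of the first vector marks its non-zero positions, and only the marked elements (and the matching elements of the second vector) are fetched and multiplied, so zero-valued products are never computed. Function and variable names are illustrative, not from the patent.

```python
def bitmap_dot(first_vec, second_vec):
    """Sketch of the bitmap scheme: skip every position where the
    first vector holds a zero. Names are illustrative."""
    # Scheduling step: build the bitmap of non-zero positions in
    # the first vector (the basis of the selection signal).
    bitmap = [int(v != 0) for v in first_vec]
    # Processing-element step: operate only where the bitmap is set,
    # fetching the matching element of the second vector each time.
    acc = 0
    for i, bit in enumerate(bitmap):
        if bit:
            acc += first_vec[i] * second_vec[i]
    return acc

# Three of the six elements are zero, so half the multiplications
# are skipped while the result matches a dense dot product.
print(bitmap_dot([0, 3, 0, 1, 0, 2], [5, 5, 5, 5, 5, 5]))  # 30
```

In hardware, the bitmap would be turned into a selection signal that addresses the two buffers, but the arithmetic saving is the same: the number of multiplications scales with the count of non-zero elements rather than with the vector length.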
