Patent classifications
G11C11/06
Semiconductor Device Package Die Stacking System and Method
A semiconductor memory device includes first memory dies stacked one upon another and electrically connected one to another by first bond wires, and covered with a first encapsulant. Second memory dies are disposed above the first memory dies, stacked one upon another and electrically connected one to another with second bond wires, and covered with a second encapsulant. A control die may be mounted on the top die in the second die stack. Vertical bond wires extend between the stacked die modules. A redistribution layer is formed over the top die stack and the control die to allow for electrical communication with the memory device. The memory device allows memory dies to be stacked in a manner that increases memory capacity without increasing the package form factor.
Hardware accelerator with analog-content addressable memory (a-CAM) for decision tree computation
Examples described herein relate to a decision tree computation system in which a hardware accelerator for a decision tree is implemented in the form of an analog Content Addressable Memory (a-CAM) array. The hardware accelerator accesses a decision tree. The decision tree comprises multiple paths, and each path of the multiple paths includes a set of nodes. Each node of the decision tree is associated with a feature variable of multiple feature variables of the decision tree. The hardware accelerator combines multiple nodes among the set of nodes that share a same feature variable into a single combined node. Wildcard values are substituted for feature variables not evaluated in each path. Each combined node associated with each feature variable is mapped to a corresponding column in the a-CAM array, and the multiple paths of the decision tree are mapped to rows of the a-CAM array.
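The path-to-row mapping described in the abstract can be sketched in software. This is a hypothetical model only; the names `path_to_row` and `tree_to_cam`, and the choice to represent each node condition as a numeric interval, are assumptions, not details from the patent. Nodes along a path that test the same feature are combined by intersecting their intervals, untested features receive a wildcard interval, and each root-to-leaf path becomes one a-CAM row.

```python
# Hypothetical sketch: mapping decision-tree paths onto an a-CAM layout.
WILDCARD = (float("-inf"), float("inf"))

def path_to_row(path, feature_order):
    """Collapse one root-to-leaf path into a single interval per feature.

    path: list of (feature, lo, hi) node conditions along the path.
    Nodes sharing a feature are combined by intersecting their intervals;
    features the path never tests keep the wildcard interval.
    """
    intervals = {}
    for feat, lo, hi in path:
        cur_lo, cur_hi = intervals.get(feat, WILDCARD)
        intervals[feat] = (max(cur_lo, lo), min(cur_hi, hi))
    return [intervals.get(f, WILDCARD) for f in feature_order]

def tree_to_cam(paths, feature_order):
    """One a-CAM row per path; one column per feature variable."""
    return [path_to_row(p, feature_order) for p in paths]
```

A path that tests `x > 0`, `x <= 5`, and later `x > 2` collapses to the single interval (2, 5) in the `x` column, which is the node-combination step the abstract describes.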
Temperature compensation in a memory system
A processing device in a memory sub-system stores data at a first voltage level in a memory cell in a first segment of the memory sub-system, and determines a temperature change between a current temperature associated with the memory cell and a new temperature. The processing device further determines a voltage level read from the memory cell at the new temperature, determines a difference between the voltage level read from the memory cell and the first voltage level, and determines a temperature compensation value based on the difference between the voltage level read from the memory cell and the first voltage level in view of the temperature change.
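The compensation arithmetic described above can be sketched as a simple linear model: the shift between the voltage read at the new temperature and the originally stored voltage, taken over the temperature change, yields a per-degree compensation value. The function name, units, and the linearity assumption are hypothetical, not taken from the patent.

```python
def temperature_compensation(v_programmed, v_read, t_programmed, t_read):
    """Per-degree compensation slope from the observed voltage shift.

    Assumes a linear voltage-versus-temperature relationship:
    the difference between the read voltage and the first (programmed)
    voltage, divided by the temperature change, gives a compensation
    value to apply when reading at other temperatures.
    """
    delta_t = t_read - t_programmed
    if delta_t == 0:
        return 0.0  # no temperature change, no compensation needed
    return (v_read - v_programmed) / delta_t
```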
Victim row refreshes for memories in electronic devices
An electronic device includes a memory having a plurality of memory rows and a memory refresh functional block that performs a victim row refresh operation. For the victim row refresh operation, the memory refresh functional block selects one or more victim memory rows that may be victims of data corruption caused by repeated memory accesses in a specified group of memory rows near each of the one or more victim memory rows. The memory refresh functional block then individually refreshes each of the one or more victim memory rows.
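The victim-row selection can be sketched in software. This is a minimal model assuming per-row access counters and a fixed neighbor distance; the names and the threshold mechanism are hypothetical, and a real memory controller would track aggressor rows in hardware rather than in a dictionary.

```python
def victim_rows(access_counts, num_rows, threshold, neighbor_distance=2):
    """Select rows adjacent to heavily accessed ("aggressor") rows.

    access_counts: dict mapping row index -> access count.
    Rows within neighbor_distance of any row whose count meets the
    threshold may suffer disturb-induced data corruption and are
    selected for an individual victim-row refresh.
    """
    victims = set()
    for row, count in access_counts.items():
        if count >= threshold:
            for d in range(1, neighbor_distance + 1):
                for v in (row - d, row + d):
                    if 0 <= v < num_rows:
                        victims.add(v)
    return sorted(victims)
```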
Memory device
According to one embodiment, a memory device includes a first memory cell, and a second memory cell adjacent to the first memory cell; and a sequencer configured to, when data is read from the first memory cell: perform a first read operation on the second memory cell; perform a second read operation on the first memory cell; perform a third read operation on the first memory cell by applying a voltage different from that applied in the second read operation to a gate of the second memory cell; and generate first data stored in the first memory cell and second data for correcting the first data, based on results of the first to third read operations.
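The three-read sequence can be modeled in software. This is a hypothetical sketch: `sense` stands in for the analog read operation, and selecting between the two target-cell reads based on the neighbor's state is one plausible way to use the three results to cancel cell-to-cell interference, not necessarily the patent's exact correction scheme.

```python
def corrected_read(sense, v_normal, v_shifted):
    """Model of the three-read correction sequence (hypothetical interface).

    sense(cell, neighbor_gate_v) models the sense operation: it returns
    the bit read from `cell` while the adjacent cell's gate is driven at
    neighbor_gate_v (None = default read voltage).
    """
    neighbor_bit = sense("second", None)       # 1st read: adjacent cell
    data_normal = sense("first", v_normal)     # 2nd read: target cell
    data_shifted = sense("first", v_shifted)   # 3rd read: shifted gate V
    # Keep the target read taken under the neighbor-gate voltage that
    # matches the neighbor's actual state; the other read serves as the
    # correction data for the stored value.
    corrected = data_shifted if neighbor_bit else data_normal
    return data_normal, corrected
```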
Resonance rotating spin-transfer torque memory device
A memory device includes a plurality of layers forming a stack. The plurality of layers include a spin polarization layer having a magnetic anisotropy approximately perpendicular to a plane of the spin polarization layer, an antiferromagnetic layer having an antiferromagnetic material, a ferromagnetic layer that is exchange coupled to the antiferromagnetic layer, where the antiferromagnetic layer is between the ferromagnetic layer and the spin polarization layer, and a storage layer having a magnetization direction that indicates a memory state of the storage layer. The memory state is switched by an amount of current through the stack. The spin polarization layer, the ferromagnetic layer, and the antiferromagnetic layer are configured to reduce the amount of current through the stack for switching the magnetization direction of the storage layer relative to an amount of current through a memory device without the spin polarization layer, the ferromagnetic layer, and the antiferromagnetic layer.
System and method for classifying data using neural networks with errors
A computing device includes one or more processors, random access memory (RAM), and a non-transitory computer-readable storage medium storing instructions for execution by the one or more processors. The computing device receives first data and classifies the first data using a neural network that includes at least one quantized layer. The classifying includes reading values from the random access memory for a set of weights of the at least one quantized layer of the neural network using first read parameters corresponding to a first error rate.
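The error-tolerant weight read can be modeled as random bit flips at a configurable rate. This is a hypothetical sketch: in hardware, the "read parameters" would be sense-amplifier timings or voltages that trade speed or energy against raw bit-error rate, not an explicit flip probability, and the function name is an assumption.

```python
import random

def read_weights(weight_bits, error_rate, rng=None):
    """Model reading quantized weight bits from RAM at a given bit-error rate.

    Each bit is flipped independently with probability error_rate,
    approximating the effect of aggressive (faster / lower-energy) read
    parameters on the stored quantized weights.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    return [b ^ (1 if rng.random() < error_rate else 0) for b in weight_bits]
```

Quantized networks can often tolerate a nonzero error rate here, which is what makes reading with relaxed parameters attractive.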