Patent classifications
G11C15/00
Memory devices configured to apply different weights to different strings of memory cells coupled to a data line and methods
A memory device has first and second strings of memory cells coupled to a data line. The first string is for storing a first bit having a first bit significance, and the second string is for storing a second bit having a second bit significance different than the first bit significance. A first resistor is coupled in series with the first string. A second resistor is coupled in series with the second string. The memory device is configured to set the first resistor to a first resistance based on the first bit significance and the second resistor to a second resistance based on the second bit significance so that the second resistance is different than the first resistance. The memory device is configured to compare a first bit of input data to the first bit and to compare a second bit of the input data to the second bit.
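The weighting scheme above can be sketched functionally: giving the string that stores a more significant bit a lower series resistance makes a mismatch on that string contribute more current to the shared data line. The following is a minimal model under assumed, illustrative values — the function name, the inverse-power-of-two resistance scaling, and the bit-line voltage are not from the patent.

```python
# Hypothetical model of per-string contributions to a shared data line
# when strings storing bits of different significance are given
# different series resistances. Values and names are illustrative only.

def match_line_current(stored_bits, input_bits, v_bitline=1.0):
    """stored_bits/input_bits are LSB-first lists of 0/1. A string
    conducts on a mismatch; its series resistance is scaled down for
    higher bit significance, so a mismatch in a more significant bit
    draws proportionally more current."""
    current = 0.0
    for significance, (s, q) in enumerate(zip(stored_bits, input_bits)):
        resistance = 1.0 / (2 ** significance)  # lower R for higher bits
        if s != q:                              # mismatching string conducts
            current += v_bitline / resistance
    return current
```

With this model an exact match draws no current, while a mismatch in the most significant of three bits draws four times the current of a mismatch in the least significant bit.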
Non-volatile (NV)-content addressable memory (CAM) (NV-CAM) cells employing differential magnetic tunnel junction (MTJ) sensing for increased sense margin
Non-volatile (NV)-content addressable memory (CAM) (NV-CAM) cells employing differential magnetic tunnel junction (MTJ) sensing for increased sense margin are disclosed. By employing MTJ differential sensing, the NV-CAM cells can generate differential cell voltages for match and mismatch conditions in response to search operations. The differential cell voltages are amplified to provide a larger match line voltage differential between match and mismatch conditions, thus providing a larger sense margin. For example, a cross-coupled transistor sense amplifier with positive feedback may be used to amplify the differential cell voltages. Providing NV-CAM cells with a larger sense margin can mitigate sensing issues and increase search operation reliability. One non-limiting example of an NV-CAM cell that employs MTJ differential sensing is a ten (10) transistor (10T), four (4) MTJ (10T-4MTJ) NV-TCAM cell.
Ternary content addressable memory
A ternary content addressable memory includes at least one first memory cell, at least one second memory cell, and at least one switch set. The first memory cell receives a first search signal and determines, according to the first search signal, whether to send first stored data to a common end. The second memory cell receives a second search signal and determines, according to the second search signal, whether to send second stored data to the common end. The switch set adjusts a resistance of a path between a match line and a reference ground according to a voltage on the common end and a third search signal.
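At a functional level, a ternary CAM search compares a key against stored entries in which each bit may be 0, 1, or "don't care". The circuit in the abstract implements this with cells and a switch set; the sketch below captures only the matching behavior, with a Python `None` standing in for the don't-care state — a functional illustration, not the disclosed circuit.

```python
# Functional sketch of ternary matching: each stored entry is a list
# of 0, 1, or None (don't-care). An entry matches when every
# specified (non-None) bit agrees with the search key.

def tcam_search(entries, key):
    """Return the indices of all entries that match the search key."""
    matches = []
    for idx, entry in enumerate(entries):
        if all(bit is None or bit == k for bit, k in zip(entry, key)):
            matches.append(idx)
    return matches
```

In hardware all entries are compared in parallel in a single cycle; the loop here is only a software stand-in for that parallel comparison.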
NON-VOLATILE MEMORY ACCELERATOR AND METHOD FOR SPEEDING UP DATA ACCESS
A non-volatile memory accelerator and a method for speeding up data access are provided. The non-volatile memory accelerator includes a data pre-fetching unit, a cache unit, and an access interface circuit. The data pre-fetching unit has a plurality of line buffers. One of the line buffers provides read data according to a read command, or the data pre-fetching unit reads at least one cache data as the read data according to the read command. The data pre-fetching unit further stores in at least one of the line buffers a plurality of pre-stored data with continuous addresses according to the read command. The cache unit stores the at least one cache data and the pre-stored data with the continuous addresses. The access interface circuit is configured to be an interface circuit of the non-volatile memory.
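The read path described above — serve a read from a line buffer when possible, otherwise fetch the requested address together with a run of continuous following addresses — can be sketched as follows. The line size, single-buffer simplification, and all names are assumptions for illustration; the actual accelerator uses multiple line buffers and a separate cache unit.

```python
# Minimal sketch of line-buffer prefetching, assuming a single buffer
# and an indexable backing store standing in for the non-volatile
# memory. Not the disclosed accelerator architecture.

LINE_SIZE = 4  # assumed number of continuous addresses per line

class PrefetchBuffer:
    def __init__(self, backing):
        self.backing = backing   # models the slow non-volatile memory
        self.base = None         # first address currently buffered
        self.line = []

    def read(self, addr):
        if self.base is not None and self.base <= addr < self.base + LINE_SIZE:
            return self.line[addr - self.base]   # buffer hit: no NV access
        # miss: fetch the word plus pre-stored data at continuous addresses
        self.base = addr
        self.line = [self.backing[a] for a in range(addr, addr + LINE_SIZE)]
        return self.line[0]
```

Sequential reads after a miss then hit the buffer, which is where the speed-up for address-continuous access patterns comes from.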
SYSTEM AND METHOD FOR ALLOWING MULTIPLE GLOBAL IDENTIFIER (GID) SUBNET PREFIX VALUES CONCURRENTLY FOR INCOMING PACKET PROCESSING IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT
System and method for using multiple global identifier (GID) subnet prefix values in a network switch environment in a high performance computing environment. A packet is received from a network fabric by a first Host Channel Adapter (HCA). The packet has a header portion including a destination subnet prefix identifying a destination subnet of the network fabric. The HCA is allowed to receive the packet at one of its ports by selectively determining a logical state of a flag and, when the flag is in a predetermined logical state, ignoring the destination subnet prefix identifying the destination subnet of the network fabric.
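The acceptance check described above reduces to a small predicate: when a per-port flag is in the predetermined state, the subnet prefix comparison is bypassed, so packets carrying other prefix values are still accepted. The sketch below is a minimal illustration; the function and parameter names are assumptions, not from the patent.

```python
# Hedged sketch of the ingress acceptance decision: a flag, when set,
# makes the port skip the comparison of the packet's destination GID
# subnet prefix against the port's own prefix.

def accept_packet(packet_prefix, port_prefix, ignore_prefix_flag):
    """Return True if the port should accept the incoming packet."""
    if ignore_prefix_flag:
        return True                      # prefix check bypassed
    return packet_prefix == port_prefix
```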
IN MEMORY MATRIX MULTIPLICATION AND ITS USAGE IN NEURAL NETWORKS
A method for an associative memory array includes storing each column of a matrix in an associated column of the associative memory array, where each bit in row j of the matrix is stored in row R-matrix-row-j of the array, and storing a vector in each associated column, where bit j of the vector is stored in an R-vector-bit-j row of the array. The method includes simultaneously activating a vector-matrix pair of rows R-vector-bit-j and R-matrix-row-j to concurrently receive a result of a Boolean function on all associated columns, using the results to calculate a product between the vector-matrix pair of rows, and writing the product to an R-product-j row in the array.
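For binary values, each simultaneous row-pair activation amounts to ANDing vector bit j with matrix row j across every column at once, and summing those per-column AND results over j yields the vector-matrix product. A minimal sketch, assuming 0/1 inputs and using a loop over j as a stand-in for the successive activations (within each activation, all columns operate concurrently in the array):

```python
# Sketch of the row-pair activation scheme for binary operands.
# matrix_rows[j][c] holds bit (row j, column c) of the matrix;
# vector_bits[j] holds bit j of the vector. Illustrative only.

def associative_matvec(matrix_rows, vector_bits):
    """Accumulate, per column, the Boolean AND of vector bit j and
    matrix row j over all j; the result is the product vector."""
    n_cols = len(matrix_rows[0])
    product = [0] * n_cols
    for j, v_bit in enumerate(vector_bits):
        # one activation: AND computed on all columns simultaneously
        for c in range(n_cols):
            product[c] += v_bit & matrix_rows[j][c]
    return product
```

For multi-bit operands the same idea extends with bit-serial arithmetic, which is why the method stores partial products in dedicated R-product rows.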
Ingress data placement
Server computers often include one or more input/output (I/O) adapter devices for communicating with a network or directly attached storage device. The data transfer latency for a request can be reduced by utilizing ingress data placement logic to bypass the processor of the I/O adapter device. For example, host memory descriptors can be stored in a content addressable memory unit of the I/O adapter device to facilitate placement of requested data.
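The descriptor-lookup idea can be sketched as follows: a fast lookup structure (a dict standing in for the CAM) maps a request identifier to a host memory descriptor, so the data path can place incoming data directly and only fall back to the adapter's processor on a miss. The class, field names, and descriptor format are invented for illustration.

```python
# Hedged sketch of ingress data placement. A dict models the CAM that
# maps request IDs to host memory descriptors (address, length).

class IngressPlacer:
    def __init__(self):
        self.cam = {}                    # request_id -> (host_addr, length)

    def register(self, request_id, host_addr, length):
        """Store a host memory descriptor ahead of the data's arrival."""
        self.cam[request_id] = (host_addr, length)

    def place(self, request_id, payload, host_memory):
        """Place payload directly into host memory on a CAM hit;
        return False on a miss (processor fallback path)."""
        entry = self.cam.get(request_id)
        if entry is None:
            return False
        addr, length = entry
        n = min(length, len(payload))
        host_memory[addr:addr + n] = payload[:n]
        return True
```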
SYSTEM AND METHOD FOR SUPPORTING SUBNET NUMBER ALIASING IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT
System and method for supporting subnet number aliasing in a high performance computing environment. In accordance with an embodiment, a fabric member can be assigned, by a global fabric manager, an alias fabric local subnet number in order to keep a fabric running after a fabric reconfiguration. The alias fabric local subnet number can be assigned for a period of time, the period of time being static, configurable, or indefinite.
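The aliasing behavior described above can be illustrated with a small model: a fabric member answers both to its assigned subnet number and to an unexpired alias, so traffic addressed with the pre-reconfiguration number keeps working. The class, the numeric timestamps, and the expiry convention (`None` meaning indefinite) are assumptions for illustration only.

```python
# Illustrative sketch of subnet number aliasing with an optional
# expiry time. Not the disclosed fabric-manager protocol.

class FabricMember:
    def __init__(self, subnet_number):
        self.subnet_number = subnet_number
        self.alias = None
        self.alias_expires = None        # None = indefinite alias

    def assign_alias(self, alias_number, expires_at=None):
        """Assigned by the global fabric manager after reconfiguration."""
        self.alias = alias_number
        self.alias_expires = expires_at

    def accepts(self, subnet_number, now=0):
        """True if this member answers to the given subnet number."""
        if subnet_number == self.subnet_number:
            return True
        if self.alias == subnet_number:
            return self.alias_expires is None or now < self.alias_expires
        return False
```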
SYSTEM AND METHOD FOR SUPPORTING FLEXIBLE FORWARDING DOMAIN BOUNDARIES IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT
System and method for supporting flexible forwarding domain boundaries in a high performance computing environment. In accordance with an embodiment, flexible forwarding domain boundaries can be supported by dividing/partitioning a physical switch into two or more logical switches, where each logical switch is logically in a different domain, and allowing a fabric to be decomposed into independent subnets with two or more physical end ports at the physical switch. By doing so, the same hierarchical forwarding structure and management structure between subnets can be provided as when complete physical switches are used as building blocks.