ENCODING AND DECODING OF DATA USING GENERALIZED LDPC CODES
20230370090 · 2023-11-16
Inventors
- Ariel Doubchak (Herzliya, IL)
- Avner Dor (Kfar Saba, IL)
- Yaron Shany (Kfar Saba, IL)
- Tal Philosof (Givatayim, IL)
- Yoav Shereshevski (Tel-Aviv, IL)
- Amit Berman (Binyamina, IL)
CPC Classification
H03M13/1111 (Electricity)
G06F11/1012 (Physics)
H03M13/1108 (Electricity)
H03M13/451 (Electricity)
H03M13/1174 (Electricity)
H03M13/3707 (Electricity)
H03M13/2963 (Electricity)
H03M13/616 (Electricity)
International Classification
Abstract
A method of correcting data stored in a memory device includes: applying an iterative decoder to the data; determining a total number of rows in first data the decoder attempted to correct; estimating first visible error rows among the total number that continue to have an error after the attempt; estimating residual error rows among the total number that no longer have an error after the attempt; determining second visible error rows in second data of the decoder that continue to have an error by permuting indices of the residual error rows according to a permutation; and correcting the first data using the first visible error rows.
Claims
1. A method of correcting data stored in a memory device, the method comprising: applying an iterative decoder to the data; determining a total number of rows in first data the iterative decoder attempted to correct; estimating first visible error rows among the total number that continue to have an error after the attempt; estimating residual error rows among the total number that no longer have an error after the attempt; determining second visible error rows in second data of the iterative decoder that continue to have an error by permuting indices of the residual error rows according to a permutation; and correcting the first data using the first visible error rows.
2. The method of claim 1, wherein the correcting the first data comprises: setting log-likelihood ratios (LLRs) of rows of the first data determined to be the first visible error rows to 0; increasing a magnitude of LLRs of the remaining rows of the first data to a value having a higher magnitude; and applying the first data and the LLRs to the iterative decoder.
3. The method of claim 1, further comprising: determining that the iterative decoder is oscillating, wherein the total number of rows in the first data the iterative decoder attempted to correct is determined in response to determining that the iterative decoder is oscillating.
4. The method of claim 3, wherein the iterative decoder is determined to be oscillating in response to the iterative decoder attempting to correct a first number of errors in the data to generate the first data, and the iterative decoder attempting to correct a second number of errors in the second data to restore the first data including the first number of errors.
5. The method of claim 1, wherein the correcting the first data comprises: determining a number of second hidden error rows in the second data based on a total number of the first visible error rows, a total number of corrections attempted on the first visible error rows by the iterative decoder, and a total number of corrections attempted on the second visible error rows by the iterative decoder.
6. The method of claim 5, wherein the correcting the first data further comprises: choosing one of the rows of the second data to represent a first one of the second hidden error rows; selecting the first visible error rows in the first data that have 3 known coordinates; using a Hamming decoder on the 3 known coordinates to deduce a 4th coordinate for each of the selected first visible error rows; and correcting the first data using the deduced coordinates.
7. The method of claim 6, wherein the correcting the first data further comprises: flipping bits of the first data having the 3 known coordinates.
8. The method of claim 7, wherein the coordinates are used in the correcting in response to the 4th coordinates being mapped to a same row in the second data.
9. The method of claim 1, wherein the iterative decoder is configured to decode a generalized low-density parity-check (GLDPC) code based on a Hamming code.
10. The method of claim 9, wherein the Hamming code is a shortened extended Hamming code.
11. A memory system comprising: a nonvolatile memory (NVM); a controller configured to read data from the NVM; and an accelerator comprising an iterative decoder, the accelerator being connected to the controller and configured to: apply the iterative decoder to the read data through the controller and determine whether the iterative decoder is oscillating; determine a total number of rows in first data the iterative decoder attempted to correct to generate second data; estimate first visible error rows among the total number that continue to have an error after the attempt; estimate residual error rows among the total number that no longer have an error after the attempt; determine second visible error rows in the second data of the iterative decoder that continue to have an error by permuting indices of the residual error rows according to a permutation; determine whether zero or more first hidden error rows are present in the first data from the second visible error rows; and correct the first data using the first visible error rows and the determined number of first hidden error rows when it is determined that the iterative decoder is oscillating, wherein each hidden error row has an error and is a valid Hamming codeword.
12. The memory system of claim 11, wherein the controller is configured to, in response to a request from a host, read the data and output the corrected data to the host.
13. The memory system of claim 11, wherein the iterative decoder is determined to be oscillating in response to the iterative decoder attempting to correct a first number of errors in the read data to generate the first data, permuting the first data to generate the second data, and attempting to correct a second number of errors in the second data to restore the first data including the first number of errors.
14. The memory system of claim 11, wherein, to correct the first data, the controller is further configured to: set log-likelihood ratios (LLRs) of rows of the first data determined to be the first visible error rows to 0; set LLRs of rows of the first data determined to be hidden error rows to 0; increase a magnitude of LLRs of the remaining rows of the first data to a value having a higher magnitude; and apply the first data and the LLRs to the iterative decoder.
15. The memory system of claim 11, wherein the iterative decoder is configured to: decode a generalized low-density parity-check (GLDPC) code based on a Hamming code.
16. The memory system of claim 15, wherein the Hamming code is a shortened extended Hamming code.
17. A nonvolatile memory (NVM) device comprising: a nonvolatile memory (NVM) array; an iterative decoder; and a logic circuit configured to: apply the iterative decoder to decode data read from the NVM array; determine a total number of rows in first data the iterative decoder attempted to correct; estimate first visible error rows among the total number that continue to have an error after the attempt; estimate residual error rows among the total number that no longer have an error after the attempt; determine second visible error rows in second data of the iterative decoder that continue to have an error by permuting indices of the residual error rows according to a permutation; and correct the first data using the first visible error rows.
18. The NVM device of claim 17, wherein the iterative decoder is determined to repeatedly change between two states in response to the iterative decoder attempting to correct a first number of errors in the read data to generate the first data, permuting the first data to generate the second data, and attempting to correct a second number of errors in the second data to restore the first data including the first number of errors.
19. The NVM device of claim 17, wherein, to correct the first data, the logic circuit is further configured to: set log-likelihood ratios (LLRs) of rows of the first data determined to be the first visible error rows to 0; increase a magnitude of LLRs of the remaining rows of the first data to a value having a higher magnitude; and apply the first data and the LLRs to the iterative decoder.
20. The NVM device of claim 17, wherein the iterative decoder is configured to: decode a generalized low-density parity-check (GLDPC) code based on a Hamming code.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
[0010] The present inventive concept will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
DETAILED DESCRIPTION
[0026] Hereinafter, exemplary embodiments of the inventive concept will be described in conjunction with the accompanying drawings. Below, details such as detailed configurations and structures are provided to aid a reader in understanding embodiments of the inventive concept. However, embodiments described herein may be variously changed or modified without departing from the inventive concept.
[0027] Modules in the drawings or the following detailed description may be connected with other modules in addition to the components described in the detailed description or illustrated in the drawings. Each connection between the modules or components may be a connection by communication or may be a physical connection.
[0028]
[0029] Referring to
[0030] The host controller 110 controls read and write operations of the memory controller 120 and may correspond to a central processing unit (CPU), for example. The memory controller 120 stores data when performing a write operation and outputs stored data when performing a read operation under the control of the host controller 110. The memory controller 120 includes a host interface 121 and an access controller 125. The host interface 121 and the access controller 125 may be connected to one another via an internal bus 127. The access controller 125 is configured to interface with a nonvolatile memory device 126. In an exemplary embodiment, the nonvolatile memory device 126 is implemented by a flash memory device. In an alternate embodiment, the nonvolatile memory device 126 is replaced with a volatile memory but is described herein as nonvolatile for ease of discussion.
[0031] The host interface 121 may be connected with a host (e.g., see 4100 in
[0032] The access controller 125 is configured to write data to the memory device 126, and read data from the memory device 126. The memory device 126 may include one or more non-volatile memory devices.
[0033] The host controller 110 exchanges signals with the memory controller 120 through the host interface 121. The access controller 125 controls an access operation on a memory in which data will be stored within the memory device 126 when a write operation is performed and controls an access operation on a memory in which data to be outputted is stored within the memory device 126 when a read operation is performed. The memory device 126 stores data when a write operation is performed and outputs stored data when a read operation is performed. The access controller 125 and the memory device 126 communicate with one another through a data channel 130. While only a single memory device 126 is illustrated in
[0034]
[0035] Referring to
[0036]
[0037] Herein, the term [n, k, d] code refers to a linear binary code of length n, dimension k, and a minimum Hamming distance d. Also, the term [n, k] code is an [n, k, d] code for some d.
Hamming Codes
[0038] A Hamming code Ham is a [2^m − 1, 2^m − m − 1, 3] code defined by an m × (2^m − 1) parity-check matrix whose 2^m − 1 columns are all non-zero binary vectors of length m, where m is some positive integer.
Extended Hamming Codes
[0039] An extended Hamming code eHam is the [2^m, 2^m − m − 1, 4] code obtained by adjoining a parity bit to all codewords of the Hamming code Ham.
Shortening
[0040] If C is an [n, k] code and I ⊆ {1, ..., n}, then the code obtained by shortening C on I is obtained by first taking only the codewords (c_1, ..., c_n) of C with c_i = 0 for all i ∈ I, and then deleting all coordinates in I (all of them zero coordinates). The resulting code has length n − |I| and dimension at least k − |I|. The dimension equals k − |I| if I is a subset of the information part in systematic encoding.
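Shortening can be illustrated by brute force on a small code. The sketch below (our names, not the patent's code) enumerates codewords from a parity-check matrix, which is feasible only at toy lengths, and then applies the two-step shortening operation defined above.

```python
import itertools
import numpy as np

def codewords(H):
    """All codewords of the binary code with parity-check matrix H
    (brute force; only practical for small lengths)."""
    n = H.shape[1]
    return [np.array(b) for b in itertools.product([0, 1], repeat=n)
            if not (H @ np.array(b) % 2).any()]

def shorten(code, I):
    """Shorten on the index set I (0-based): keep the codewords that are
    zero on I, then delete the coordinates in I."""
    keep = [j for j in range(len(code[0])) if j not in I]
    return [c[keep] for c in code if not c[sorted(I)].any()]
```

Shortening the [8, 4, 4] extended Hamming code on a single coordinate gives a [7, 3] code whose minimum distance is still at least 4, consistent with the length n − |I| and dimension at least k − |I| statements above.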
Shortened eH-codes
C is a shortened eH-code if it is obtained by shortening eHam on some subset.
eH-GLDPC Codes
[0041] For positive integers n ≤ 2^m and N, let C_rows be some fixed shortened eH-code of length n, and let π be a permutation on the coordinates of N × n matrices. The eH-GLDPC code is defined as the set of all N × n binary matrices M that satisfy the following conditions: [0042] 1. All the rows of M are in C_rows. [0043] 2. All the rows of πM are in C_rows, where πM is the matrix whose (i′, j′)-th entry is M_(i,j) iff (i′, j′) = π(i, j). The above conditions 1 and 2 refer to two different “views” of the same matrix M: in the first view (hereinafter referred to as “J_1”), M itself is referred to, while in the second view (hereinafter referred to as “J_2”), the permuted version πM is referred to.
[0044]
Property 1 (Line-Intersection Property)
[0045] The set of indices obtained by applying the permutation π to a row intersects each row at most once. Here, a row stands for a set of indices of the form {(i, 1), (i, 2), ..., (i, n)} for some i ∈ {1, ..., N}. It may be verified that π has the line-intersection property if and only if the inverse permutation π^-1 has the line-intersection property. Further, the property requires N ≥ n. The line-intersection property is illustrated in
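The line-intersection property can be checked directly for a permutation given as a function on index pairs; the following is an illustrative sketch with names of our choosing.

```python
def has_line_intersection_property(perm, N, n):
    """True iff, for every row i, the image {perm(i, j) : j = 0..n-1} of that
    row under the permutation meets each target row at most once, i.e. the
    row indices hit by the image are pairwise distinct."""
    for i in range(N):
        rows_hit = [perm(i, j)[0] for j in range(n)]
        if len(rows_hit) != len(set(rows_hit)):
            return False
    return True
```

The transpose π(i, j) = (j, i) on N × N matrices sends row i to column i and therefore has the property, while the identity permutation (for n > 1) leaves a whole row inside one row and does not.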
Pseudo-Errors
[0046] A certain type of error (hereinafter referred to as “pseudo-errors”) is the reason for the error floor in eH-GLDPC codes. Pseudo-errors can be thought of as a special case of near-codewords/trapping sets, i.e., low-weight errors that violate only a very small number of local constraints. They are special in the sense that they result in oscillations between J_1 and J_2.
[0047]
[0048] By definition, a pseudo-error is an error pattern (say, at J_1) that results in decoder oscillations. Pseudo-errors for which the post-decoding patterns at J_i (i = 1, 2) have only rows of weight 4 are considered herein. The pre-decoding pseudo-error at J_i (i = 1, 2) as illustrated in
[0049] In an embodiment, pseudo-errors with two properties are considered: i) in visible-error rows, there are only wrong corrections (e.g., all bits flipped by the decoder 228 should not have been flipped); and ii) all visible-error (wrong) corrections are mapped through π or π^-1 (depending on whether i equals 1 or 2, respectively) to rows without an “X”, where an X marks an error present both before and after the decoding.
[0050]
[0051] The method of
[0052] The method of
[0053] The method of
[0054] The choosing of the number of hidden error rows, the choosing of the number of visible errors rows and their locations, and the choosing of the number of residual errors rows and their locations, may be referred to as selecting parameters for a scan.
[0055] The method of
[0056] The method of
[0057] The method of
[0058]
[0059] The method of
[0060] The method of
[0061] If hidden error rows are not present, then step 801 includes completing a visible error row in the second data to a weight-4 vector using the first visible error rows of the first data (see right side of 801). Then it is verified whether the weight-4 vector is a valid codeword in step 802 (see right side of 802).
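The completion-and-verification step can be sketched under the extended-Hamming structure of the row code: for a weight-4 codeword, the parity-check columns at its four support positions must sum to zero mod 2, so three known support positions determine at most one candidate for the fourth. The code below is illustrative, with names of our choosing, and not the patent's implementation.

```python
import numpy as np

def complete_to_weight4(He, known):
    """Given three support coordinates `known` of a candidate weight-4
    codeword of the code with parity-check matrix He, deduce the fourth:
    its column must equal the mod-2 sum of the three known columns.
    Returns None when no distinct coordinate completes the triple."""
    target = He[:, list(known)].sum(axis=1) % 2
    for j in range(He.shape[1]):
        if j not in known and np.array_equal(He[:, j], target):
            return j
    return None
```

A completion found this way is a valid weight-4 codeword by construction; a None result screens out the hypothesized triple of coordinates.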
[0062] The method of
[0063] The method of
[0064] The method of
[0065] As discussed above, step 707 of
[0066] The method of
[0067] The method of
[0068] The method of
[0069]
[0070] The method of
[0071] As shown in
[0072] The method of
[0073] The method of
[0074] The method of
[0075] The method of
[0076] The method of
[0077] The method of
[0078]
[0079] In
[0080] In the general case, if the number of X rows of J_ℓ̅ is larger than 4, then an additional scan over
intersection options is required, and in what follows one considers the case where the scan hits the correct option. Note also that
is assumed to be known at this stage, since both
have been calculated from the current values of the scanned parameters.
[0081] In each instance of this scan over M options, for each of the
visible error rows of side
[0082] It is noted that all the X’s in the
visible error rows of
not coming from the hidden error row of J_ℓ come from the K_ℓ visible error rows of J_ℓ, and by assumption, each such row intersects each visible error row of
at most once, in a known coordinate.
[0083] In what follows, one can simultaneously recover, in
the X’s from the visible error rows of J_ℓ, and the identity of the hidden error row of J_ℓ (if it exists). Moreover, one can reconstruct some unknowns in several different ways, and checking if the resulting values for the same unknown are the same will be used as a criterion for screening out wrong assumptions.
[0084] A visible error row is fixed in
and it is assumed that the decoder flipped m_▱ ∈ {1, 2, 3} coordinates in this row. For example, in
a shortened-eH word of weight 4 has exactly the following “1”s: i) Up to one X from the hidden error row of J_ℓ: such an X is assumed iff H_ℓ = 1 and the current value of the scan over M options described above implies that this visible error row of
indeed intersects with the hidden error row of J_ℓ (e.g., m_h ∈ {0, 1} is written for the number of X’s from the hidden error row of J_ℓ); ii) exactly m_▱ ▱’s; and iii) exactly m_a := 4 − m_▱ − m_h X’s coming from the visible error rows of
each such X coming from a different row of
[0085] The algorithm may then run on all the visible error rows of
as follows: [0086] For each such row, the algorithm runs on choices of m_a rows out of the K_ℓ visible error rows of J_ℓ. [0087] For each choice in the above scan, if not all m_a chosen rows intersect with the fixed row of J
[0092] Typically, and with high probability, only the correct solution will not be screened out by the above process. In addition, if one of the fixed parameters from outer scans is incorrect, then typically all solutions will be screened out, and it will be clear that the decoder must proceed to the next hypothesis.
[0093] For example, in
choices of a single visible error row from J_1, and complete the X resulting from the intersection of this J_1 row with the J_2 row and the 2 □’s to a weight-4 shortened-eH codeword (if possible). This results in one additional X on the J_2 row. Similarly, for the visible J_2 row with a single □, one scans on
choices of two visible error rows from J_1, and again completes the 3 resulting coordinates, coming from 2 X’s mapped from J_1 and the single □, to a fourth coordinate of a weight-4 codeword. If the two completions from the two rows are mapped to the same row of J_1, then this option is retained. The situation after this stage is depicted in
[0094] As explained above, at this stage, the only unknown X’s (if any) are those of the
hidden error rows of side
For example, in
The 2nd Option
[0095] If
then there is nothing to solve, and the entire pattern is already known. If
then work is performed similarly to the above in order to find the single hidden error row of
and consequently all missing X’s. In an embodiment, one can find the hidden error row of
by completing triples of known coordinates in rows of J_ℓ to weight-4 codewords. These completions need to be mapped to the same row of
(verification), which is then the estimated hidden error row.
[0096] The case where
as in
[0097] Scan on hypotheses, row_1 = 1, ..., N, excluding the rows where the decoder acted. [0098] For each visible error row of J_ℓ that does intersect with row_1 and has a total of 3 known coordinates coming from: 1. Flippings of the decoder (O’s in
The 1st Option
[0106] In some cases, it is sufficient to consider only pseudo-errors that are allowed to have hidden error rows only on one side. For example, such cases may arise at an intermediate stage of pseudo-error decoding with hidden rows on both sides, as described in the previous section. As another example, when modifying some decoder parameters, it is possible to assure that practically all pseudo-errors have hidden error rows only on one side, at the cost of slightly decreasing the rBER coverage.
[0107] It is assumed that all hidden error rows appear only on one side. In this case, we first scan over two options for the side J_ℓ, ℓ ∈ {1, 2}, that might contain hidden rows. By assumption, there are no hidden error rows on side
This means that the decoder of side
acted exactly in the rows that contain the permutation-map of the pseudo-error at the output of J_ℓ’s decoder. Referring to the
in which the decoder acted as visible error rows, this suggests the following line of action:
[0108] For each row of J_ℓ, find its intersection with all visible error rows.
[0109] If for some row there are less than 4 intersections, discard this row.
[0110] Otherwise, if there are m ≥ 4 intersections, then check the choices of 4 indices out of the total of m intersections, and for each such choice check if it is a codeword of the shortened-eH code.
[0111] If not, discard this option. Otherwise, save this option as a potential part of the pseudo-error for the current J_ℓ row.
[0112] At this stage, there is typically a small number R of rows of J_ℓ for which there is at least one saved codeword. These rows include all visible error rows of J_ℓ.
[0113] We may now run on the hypothesized number r of rows of the pseudo-error on side J_ℓ, typically in the range r = 1, ..., 6.
[0114] For each choice of r, we may run on all options of choosing r candidate rows out of our R rows.
[0115] For each choice of r rows, we may now run on all possible choices of a weight-4 codeword from each one of the r rows.
[0116] At each scanning instance, we have a hypothesis for the pseudo-error at side J_ℓ (accounting only for the visible rows): r weight-4 codewords sitting in r rows.
[0117] We may map this pattern to and see whether it results exactly in the action of the decoder for the actual pseudo-error. If it does, then the r × 4 pattern in J_ℓ is a candidate for the pseudo-error.
[0118] As an alternative, one can set the output LLRs of all visible error rows of
to zero, set the magnitudes of output LLRs of all rows that are not visible error rows in
to their maximum possible value, and proceed with eH-GLDPC decoding iterations. Note that when we proceed with the eH-GLDPC decoding iterations, the first step is to map output LLRs from side
to side J_ℓ. In particular, in each row of J_ℓ, the zero LLRs mark exactly its intersection with the visible error rows of
and they are now the lowest LLRs of the row.
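The alternative described in paragraph [0118] amounts to an erase-and-saturate step on the per-row LLRs before further iterations. A minimal sketch follows; the array shape (one LLR per bit, one row per code row), the function name, and the saturation value are our assumptions rather than values from the patent.

```python
import numpy as np

def prepare_llrs_for_retry(llrs, visible_error_rows, max_mag=15.0):
    """Zero the LLRs of the rows flagged as visible error rows (treating them
    as erasures the decoder is free to re-decide) and pin every other row to
    its hard decision at the maximum magnitude."""
    out = np.where(np.sign(llrs) >= 0, max_mag, -max_mag)  # saturated copy
    out[list(visible_error_rows), :] = 0.0                 # erased rows
    return out
```

After permuting these LLRs to the other view, each row's zero entries mark exactly its intersections with the flagged rows, and those coordinates become the least reliable ones in the row, as described above.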
[0119] Referring back to
[0120] The above-described methods may be tangibly embodied on one or more computer readable medium(s) (i.e., program storage devices such as a hard disk, magnetic floppy disk, RAM, ROM, CD ROM, Flash Memory, etc., and executable by any device or machine comprising suitable architecture, such as a general purpose digital computer having a processor, memory, and input/output interfaces).
[0121]
[0122] The host 4100 may write data in the SSD 4200 or read data from the SSD 4200. The host controller 4120 may transfer signals SGL such as a command, an address, a control signal, and the like to the SSD 4200 via the host interface 4111. The DRAM 4130 may be a main memory of the host 4100.
[0123] The SSD 4200 may exchange signals SGL with the host 4100 via the host interface 4211, and may be supplied with a power via a power connector 4221. The SSD 4200 may include a plurality of nonvolatile memories 4201 through 420n, an SSD controller 4210, and an auxiliary power supply 4220. Herein, the nonvolatile memories 4201 to 420n may be implemented by NAND flash memory. The SSD controller 4210 may be implemented by the controller 125 of
[0124] The plurality of nonvolatile memories 4201 through 420n may be used as a storage medium of the SSD 4200. The plurality of nonvolatile memories 4201 to 420n may be connected with the SSD controller 4210 via a plurality of channels CH1 to CHn. One channel may be connected with one or more nonvolatile memories. Each of the channels CH1 to CHn may correspond to the data channel 130 depicted in
[0125] The SSD controller 4210 may exchange signals SGL with the host 4100 via the host interface 4211. Herein, the signals SGL may include a command (e.g., the CMD), an address (e.g., the ADDR), data, and the like. The SSD controller 4210 may be configured to write or read out data to or from a corresponding nonvolatile memory according to a command of the host 4100.
[0126] The auxiliary power supply 4220 may be connected with the host 4100 via the power connector 4221. The auxiliary power supply 4220 may be charged by a power PWR from the host 4100. The auxiliary power supply 4220 may be placed within the SSD 4200 or outside the SSD 4200. For example, the auxiliary power supply 4220 may be put on a main board to supply an auxiliary power to the SSD 4200.
[0127] While an embodiment with respect to
[0128]
[0129] Referring to
[0130] The main processor 1100 may control all operations of the system 1000, more specifically, operations of other components included in the system 1000. The main processor 1100 may be implemented as a general-purpose processor, a dedicated processor, or an application processor.
[0131] The main processor 1100 may include at least one CPU core 1110 and further include a controller 1120 configured to control the memories 1200a and 1200b and/or the storage devices 1300a and 1300b. In some embodiments, the main processor 1100 may further include an accelerator 1130, which is a dedicated circuit for a high-speed data operation, such as an artificial intelligence (AI) data operation. The accelerator 1130 may include a graphics processing unit (GPU), a neural processing unit (NPU) and/or a data processing unit (DPU) and be implemented as a chip that is physically separate from the other components of the main processor 1100. The accelerator 1130 may include the ECC encoder 222 and the ECC decoder 228 similar to the accelerator 128 illustrated in
[0132] The storage devices 1300a and 1300b may serve as non-volatile storage devices configured to store data regardless of whether power is supplied thereto, and have larger storage capacity than the memories 1200a and 1200b. The storage devices 1300a and 1300b may respectively include storage controllers(STRG CTRL) 1310a and 1310b and NVM(Non-Volatile Memory)s 1320a and 1320b configured to store data via the control of the storage controllers 1310a and 1310b. Although the NVMs 1320a and 1320b may include flash memories having a two-dimensional (2D) structure or a three-dimensional (3D) V-NAND structure, the NVMs 1320a and 1320b may include other types of NVMs, such as PRAM and/or RRAM.
[0133] The storage devices 1300a and 1300b may be physically separated from the main processor 1100 and included in the system 1000 or implemented in the same package as the main processor 1100. In addition, the storage devices 1300a and 1300b may be of a type such as a solid-state drive (SSD) or a memory card and be removably combined with other components of the system 1000 through an interface, such as the connecting interface 1480 that will be described below. The storage devices 1300a and 1300b may be devices to which a standard protocol, such as universal flash storage (UFS), embedded multi-media card (eMMC), or non-volatile memory express (NVMe), is applied, without being limited thereto.
[0134] The image capturing device 1410 may capture still images or moving images. The image capturing device 1410 may include a camera, a camcorder, and/or a webcam.
[0135] The user input device 1420 may receive various types of data input by a user of the system 1000 and include a touch pad, a keypad, a keyboard, a mouse, and/or a microphone.
[0136] The sensor 1430 may detect various types of physical quantities, which may be obtained from the outside of the system 1000, and convert the detected physical quantities into electric signals. The sensor 1430 may include a temperature sensor, a pressure sensor, an illuminance sensor, a position sensor, an acceleration sensor, a biosensor, and/or a gyroscope sensor.
[0137] The communication device 1440 may transmit and receive signals between other devices outside the system 1000 according to various communication protocols. The communication device 1440 may include an antenna, a transceiver, and/or a modem.
[0138] The display 1450 and the speaker 1460 may serve as output devices configured to respectively output visual information and auditory information to the user of the system 1000.
[0139] The power supplying device 1470 may appropriately convert power supplied from a battery (not shown) embedded in the system 1000 and/or an external power source, and supply the converted power to each of components of the system 1000.
[0140] The connecting interface 1480 may provide connection between the system 1000 and an external device, which is connected to the system 1000 and capable of transmitting and receiving data to and from the system 1000. The connecting interface 1480 may be implemented by using various interface schemes, such as advanced technology attachment (ATA), serial ATA (SATA), external SATA (e-SATA), small computer system interface (SCSI), serial attached SCSI (SAS), peripheral component interconnection (PCI), PCI express (PCIe), NVMe, IEEE 1394, a universal serial bus (USB) interface, a secure digital (SD) card interface, a multi-media card (MMC) interface, an eMMC interface, a UFS interface, an embedded UFS (eUFS) interface, and a compact flash (CF) card interface.
[0141]
[0142] Referring to
[0143] The application servers 3100 to 3100n may communicate with the storage servers 3200 to 3200m through a network 3300. The network 3300 may be implemented by using a fiber channel (FC) or Ethernet. In this case, the FC may be a medium used for relatively high-speed data transmission and use an optical switch with high performance and high availability. The storage servers 3200 to 3200m may be provided as file storages, block storages, or object storages according to an access method of the network 3300.
[0144] In an embodiment, the network 3300 may be a storage-dedicated network, such as a storage area network (SAN). For example, the SAN may be an FC-SAN, which uses an FC network and is implemented according to an FC protocol (FCP). As another example, the SAN may be an Internet protocol (IP)-SAN, which uses a transmission control protocol (TCP)/IP network and is implemented according to a SCSI over TCP/IP or Internet SCSI (iSCSI) protocol. In another embodiment, the network 3300 may be a general network, such as a TCP/IP network. For example, the network 3300 may be implemented according to a protocol, such as FC over Ethernet (FCoE), network attached storage (NAS), and NVMe over Fabrics (NVMe-oF).
[0145] Hereinafter, the application server 3100 and the storage server 3200 will mainly be described. A description of the application server 3100 may be applied to another application server 3100n, and a description of the storage server 3200 may be applied to another storage server 3200m.
[0146] The application server 3100 may store data, which is requested by a user or a client to be stored, in one of the storage servers 3200 to 3200m through the network 3300. Also, the application server 3100 may obtain data, which is requested by the user or the client to be read, from one of the storage servers 3200 to 3200m through the network 3300. For example, the application server 3100 may be implemented as a web server or a database management system (DBMS).
[0147] The application server 3100 may access a memory 3120n or a storage device 3150n, which is included in another application server 3100n, through the network 3300. Alternatively, the application server 3100 may access memories 3220 to 3220m or storage devices 3250 to 3250m, which are included in the storage servers 3200 to 3200m, through the network 3300. Thus, the application server 3100 may perform various operations on data stored in application servers 3100 to 3100n and/or the storage servers 3200 to 3200m. For example, the application server 3100 may execute an instruction for moving or copying data between the application servers 3100 to 3100n and/or the storage servers 3200 to 3200m. In this case, the data may be moved from the storage devices 3250 to 3250m of the storage servers 3200 to 3200m to the memories 3120 to 3120n of the application servers 3100 to 3100n directly or through the memories 3220 to 3220m of the storage servers 3200 to 3200m. The data moved through the network 3300 may be data encrypted for security or privacy.
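The two data-movement paths described in [0147] can be sketched as follows. This is a purely illustrative model, not part of the disclosed system: the function name `move_data`, the path labels, and the XOR stand-in for encryption are all hypothetical, chosen only to show a direct copy from a storage device to an application server's memory versus a copy staged through the storage server's memory, with optional encryption of data moved through the network.

```python
# Hypothetical sketch of the two data-movement paths in [0147]:
# a direct copy from a storage device to an application server's memory,
# or a copy staged through the storage server's memory. All names are
# illustrative; XOR with a fixed byte stands in for a real cipher.

def move_data(data: bytes, direct: bool, encrypt: bool = True):
    """Return (payload, path) as the data would arrive at the application server."""
    if encrypt:
        # Data moved through the network may be encrypted for security or privacy.
        data = bytes(b ^ 0x5A for b in data)
    if direct:
        path = ["storage_device", "network", "application_memory"]
    else:
        # Staged through the memory of the storage server first.
        path = ["storage_device", "storage_server_memory", "network",
                "application_memory"]
    return data, path
```

Applying the same XOR twice recovers the original payload, which is how this toy "cipher" remains invertible.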
[0148] The storage server 3200 will now be described as an example. An interface 3254 may provide a physical connection between a processor 3210 and a controller 3251 and a physical connection between a network interface card (NIC) 3240 and the controller 3251. For example, the interface 3254 may be implemented using a direct attached storage (DAS) scheme in which the storage device 3250 is directly connected with a dedicated cable. For example, the interface 3254 may be implemented by using various interface schemes, such as ATA, SATA, e-SATA, SCSI, SAS, PCI, PCIe, NVMe, IEEE 1394, a USB interface, an SD card interface, an MMC interface, an eMMC interface, a UFS interface, an eUFS interface, and/or a CF card interface.
[0149] The storage server 3200 may further include a switch 3230 and the NIC (network interface card) 3240. The switch 3230 may selectively connect the processor 3210 to the storage device 3250 or selectively connect the NIC 3240 to the storage device 3250 under the control of the processor 3210.
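The selective connection the switch 3230 provides in [0149] can be modeled as a minimal sketch. The class and method names below are hypothetical, introduced only to illustrate that the processor selects which endpoint (the processor itself or the NIC) is routed to the storage device at any given time.

```python
# Minimal, hypothetical model of the switch 3230 in [0149]: under processor
# control, it routes either the processor or the NIC to the storage device.

class Switch:
    def __init__(self):
        self.route = None  # no endpoint connected to storage initially

    def select(self, endpoint: str) -> None:
        # The processor controls which endpoint is connected to storage.
        if endpoint not in ("processor", "nic"):
            raise ValueError("unknown endpoint")
        self.route = endpoint

    def connected_to_storage(self) -> str:
        return self.route
```

Selecting one endpoint implicitly disconnects the other, reflecting the "selectively connect" wording of the paragraph above.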
[0150] In an embodiment, the NIC 3240 may include a network interface card or a network adapter. The NIC 3240 may be connected to the network 3300 by a wired interface, a wireless interface, a Bluetooth interface, or an optical interface. The NIC 3240 may include an internal memory, a digital signal processor (DSP), and a host bus interface, and may be connected to the processor 3210 and/or the switch 3230 through the host bus interface. The host bus interface may be implemented as one of the above-described examples of the interface 3254. In an embodiment, the NIC 3240 may be integrated with at least one of the processor 3210, the switch 3230, and the storage device 3250.
[0151] In the storage servers 3200 to 3200m or the application servers 3100 to 3100n, a processor may transmit a command to storage devices 3150 to 3150n and 3250 to 3250m or the memories 3120 to 3120n and 3220 to 3220m and program or read data. In this case, the data may be data of which an error is corrected by an ECC engine. The data may be data on which a data bus inversion (DBI) operation or a data masking (DM) operation is performed, and may include cyclic redundancy code (CRC) information. The data may be data encrypted for security or privacy.
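Two of the data treatments mentioned in [0151] can be illustrated concretely: appending cyclic redundancy code (CRC) information so the receiver can detect errors, and a data bus inversion (DBI) encoding that inverts a byte when inversion reduces the number of 1-bits driven on the bus. The sketch below uses Python's standard `zlib.crc32` and a DC-balancing DBI rule; the function names and the choice of CRC-32 are illustrative assumptions, not details of the disclosed system.

```python
import zlib

# Illustrative sketch of two data treatments from [0151]: DBI encoding
# and appending CRC information. Names and parameters are hypothetical.

def dbi_encode(byte: int):
    """DC-balancing DBI: invert the byte if it has more than four 1-bits.

    Returns (encoded_byte, dbi_flag); the flag tells the receiver whether
    to re-invert the byte to recover the original value.
    """
    if bin(byte).count("1") > 4:
        return byte ^ 0xFF, 1  # inverted payload, DBI flag set
    return byte, 0

def append_crc(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect transmission errors."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")
```

On receipt, the CRC over the payload is recomputed and compared with the appended value; a mismatch indicates corruption in transit.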
[0152] Storage devices 3150 to 3150n and 3250 to 3250m may transmit a control signal and a command/address signal to NAND flash memory devices 3252 to 3252m in response to a read command received from the processor. Thus, when data is read from the NAND flash memory devices 3252 to 3252m, a read enable (RE) signal may be input as a data output control signal, and thus, the data may be output to a DQ bus. A data strobe signal DQS may be generated using the RE signal. The command and the address signal may be latched in a page buffer depending on a rising edge or falling edge of a write enable (WE) signal.
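The read signalling sequence of [0152] can be sketched as a simple event model: data is driven onto the DQ bus as the read enable (RE) signal toggles, and the data strobe (DQS) is generated from RE. This is a hypothetical behavioral model only; real NAND interfaces define precise electrical timing that this sketch does not capture, and the function name and event representation are assumptions.

```python
# Hypothetical event model of the NAND read signalling in [0152]:
# each RE toggle clocks one byte onto the DQ bus, and DQS follows RE.

def read_page(page: bytes):
    """Return a list of (RE, DQS, DQ) tuples, one per RE toggle."""
    events = []
    re = 0
    for byte in page:
        re ^= 1            # read enable toggles once per output byte
        dqs = re           # the data strobe DQS is generated using RE
        events.append((re, dqs, byte))
    return events
```

In this model DQS tracks RE exactly, which is the relationship the paragraph above describes; command/address latching on write enable (WE) edges happens before this output phase and is not modeled here.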
[0153] The controller 3251 may control all operations of the storage device 3250. In an embodiment, the controller 3251 may include SRAM. In an embodiment, the controller 3251 may include the ECC encoder 222 and the ECC decoder 228 of
[0154] Although the present inventive concept has been described in connection with exemplary embodiments thereof, those skilled in the art will appreciate that various modifications can be made to these embodiments without substantially departing from the principles of the present inventive concept.