Treating circuit-level noise using a local and global decoding scheme
12057859 · 2024-08-06
Assignee
Inventors
- Christopher Chamberland (Pasadena, CA, US)
- Luis Goncalves (Pasadena, CA, US)
- Prasahnt Sivarajah (Pasadena, CA, US)
- Eric Christopher Peterson (Oakland, CA, US)
- Sebastian Johannes Grimberg (San Francisco, CA, US)
CPC classification
G06N10/60
PHYSICS
H03M13/611
ELECTRICITY
H03M13/159
ELECTRICITY
International classification
H03M13/00
ELECTRICITY
G06N10/60
PHYSICS
G06N10/70
PHYSICS
Abstract
Techniques for implementing a local neural network and global decoding scheme for quantum error correction of circuit-level noise within quantum surface codes such that the decoding schemes have fast decoding throughout and low latency times for quantum algorithms are disclosed. A local neural network decoder may be pre-trained via a supervised learning technique such that the local neural network decoder may be applied for error correction in the presence of circuit-level noise in arbitrarily sized surface codes in a local decoding stage. Prior to a global decoding stage, an intermediate stage may be used to remove vertical pairs of highlighted vertices within the matching graph, which may reduce a syndrome density within the matching graph to allow for faster decoding at the global decoding stage. Such an intermediate stage may include application of a syndrome collapse or vertical cleanup technique.
Claims
1. A system, comprising: one or more quantum hardware devices configured to implement a quantum surface code; and one or more computing devices configured to perform error correction for measurement results of a plurality of rounds of syndrome measurements of a quantum surface code, wherein: the measurement results form a measurement results volume bounded by first and second dimensions corresponding to dimensions of the quantum surface code and a third dimension corresponding to a number of rounds included in the plurality of rounds of syndrome measurements; and to perform the error correction, the one or more computing devices are further configured to: decode the measurement results volume, via a local neural network decoder configured to account for circuit-level noise, wherein the local neural network decoder incrementally decodes a first set of detected syndrome differences of a total number of syndrome differences in the measurement results volume; decode the decoded measurement results volume, via a global decoder, wherein the global decoder decodes a second set of detected syndrome differences remaining in the decoded measurement results volume subsequent to the decoding via the local neural network decoder; and provide error-corrected results of measurement results of the plurality of rounds of syndrome measurements of the quantum surface code.
2. The system of claim 1, wherein the circuit-level noise is based, at least in part, on one or more of the following: one or more errors pertaining to single qubit gates acting on given qubits of a given quantum surface code of the quantum surface codes; one or more errors pertaining to multi-qubit gates between given qubits of the given quantum surface code; one or more measurement errors pertaining to the plurality of rounds of syndrome measurements; one or more errors pertaining to ancilla qubit reset timesteps of the plurality of rounds of syndrome measurements; or one or more errors pertaining to idling of given qubits of the quantum surface code.
3. The system of claim 1, wherein the local neural network decoder has a three-dimensional, fully-convolutional, neural network architecture.
4. The system of claim 1, wherein the local neural network decoder has been trained, via a supervised learning technique, to account for circuit-level noise within the measurement results of the plurality of rounds of syndrome measurements of the quantum surface code.
5. The system of claim 1, wherein to perform the error correction, the one or more computing devices are further configured to: treat, prior to the decode the decoded measurement results volume via the global decoder, vertical pairs of highlighted vertices in the decoded measurement results volume, wherein to treat the vertical pairs of highlighted vertices causes a syndrome density in the decoded measurement results volume to be reduced.
6. The system of claim 5, wherein to treat the vertical pairs of highlighted vertices, the one or more computing devices are further configured to: perform a syndrome collapse technique; or perform a vertical cleanup technique.
7. The system of claim 1, wherein to decode the decoded measurement results volume, via the global decoder, the one or more computing devices are further configured to perform a graph-based decoding technique to decode the second set of detected syndrome differences.
8. A method, comprising: performing error correction for measurement results of a plurality of rounds of syndrome measurements of a quantum surface code, wherein: the measurement results form a measurement results volume bounded by first and second dimensions corresponding to dimensions of the quantum surface code and a third dimension corresponding to a number of rounds included in the plurality of rounds of syndrome measurements; and said performing the error correction comprises: decoding the measurement results volume, via a local neural network decoder, wherein: the local neural network decoder incrementally decodes a first set of detected syndrome differences of a total number of syndrome differences in the measurement results volume; and said decoding the measurement results volume accounts for circuit-level noise; and decoding the decoded measurement results volume, via a global decoder, wherein the global decoder decodes a second set of detected syndrome differences remaining in the decoded measurement results volume subsequent to the decoding via the local neural network decoder; and providing error-corrected results of measurement results of the plurality of rounds of syndrome measurements of the quantum surface code.
9. The method of claim 8, wherein the circuit-level noise is based, at least in part, on one or more of the following: one or more errors pertaining to single qubit gates acting on given qubits of a given quantum surface code of the quantum surface codes; one or more errors pertaining to multi-qubit gates between given qubits of the given quantum surface code; one or more measurement errors pertaining to the plurality of rounds of syndrome measurements; one or more errors pertaining to ancilla qubit reset timesteps of the plurality of rounds of syndrome measurements; or one or more errors pertaining to idling of given qubits of the quantum surface code.
10. The method of claim 8, wherein said performing the error correction further comprises: treating, prior to said decoding the decoded measurement results volume via the global decoder, vertical pairs of highlighted vertices in the decoded measurement results volume, wherein said treating the vertical pairs of highlighted vertices causes a syndrome density in the decoded measurement results volume to be reduced.
11. The method of claim 10, wherein the vertical pairs of highlighted vertices in the decoded measurement results volume are generated based, at least in part, on the decoding, via the local neural network decoder.
12. The method of claim 10, wherein the treating the vertical pairs of highlighted vertices comprises: performing syndrome collapse, wherein the performing the syndrome collapse comprises: partitioning the decoded measurement results volume into partitions in the third dimension of the decoded measurement results volume; and adding syndromes of respective syndrome densities in each partition modulo 2 to collapse the respective partitions, wherein said collapse results in a removal of the vertical pairs of highlighted vertices.
13. The method of claim 10, wherein the treating the vertical pairs of highlighted vertices comprises: performing vertical cleanup, wherein the performing vertical cleanup comprises: identifying vertical pairs of highlighted vertices in the decoded measurement results volume; and removing the vertical pairs of highlighted vertices.
14. The method of claim 13, wherein a temporal direction pertaining to the performing vertical cleanup is selected based, at least in part, on a distribution of syndrome differences within the syndrome density in the decoded measurement results volume.
15. The method of claim 8, further comprising: training, via a supervised learning technique, the local neural network decoder to be used for said performing the error correction for the measurement results of the plurality of rounds of syndrome measurements of the quantum surface code, wherein said training comprises: providing a training data set of measurement results corresponding to a simulated plurality of rounds of syndrome measurements for a simulated quantum surface code that form dimensions of a simulated measurement results volume; determining predictions of locations of alleged errors on data qubits within the training data set; comparing the predictions of locations of alleged errors to ground truth information, wherein the ground truth information comprises actual locations of errors on data qubits within the training data set; and providing the trained local decoder to be used for said performing the error correction for the measurement results of the plurality of rounds of syndrome measurements of the quantum surface code.
16. The method of claim 15, wherein the training data set further comprises at least one or more of the following: syndrome information for the simulated plurality of rounds of syndrome measurements; location information about qubit placement within the dimensions of the simulated measurement results volume; and temporal boundaries information about the simulated plurality of rounds of syndrome measurements.
17. The method of claim 15, further comprising: determining a homological equivalence convention to be applied to homologically equivalent errors within the actual locations of the errors on the data qubits within the training data set, wherein the homological equivalence convention comprises at least one or more of: one or more weight-reduction transformations; or one or more equivalence transformations; and applying the homological equivalence convention to the ground truth information.
18. A non-transitory, computer-readable, medium storing program instructions that, when executed on or across one or more processors, cause the one or more processors to: perform error correction for measurement results of a plurality of rounds of syndrome measurements of a quantum surface code, wherein: the measurement results form a measurement results volume bounded by first and second dimensions corresponding to dimensions of the quantum surface code and a third dimension corresponding to a number of rounds included in the plurality of rounds of syndrome measurements; and to perform the error correction, the program instructions further cause the one or more processors to: decode the measurement results volume, via a local neural network decoder, such that the local neural network decoder accounts for circuit-level noise, wherein the local neural network decoder incrementally decodes a first set of detected syndrome differences of a total number of syndrome differences in the measurement results volume; and decode the decoded measurement results volume, via a global decoder, wherein the global decoder decodes a second set of detected syndrome differences remaining in the decoded measurement results volume subsequent to the decoding via the local neural network decoder; and provide error-corrected results of measurement results of the plurality of rounds of syndrome measurements of the quantum surface code.
19. The non-transitory, computer-readable medium of claim 18, wherein the circuit-level noise is based, at least in part, on one or more of the following: one or more errors pertaining to single qubit gates acting on given qubits of a given quantum surface code of the quantum surface codes; one or more errors pertaining to multi-qubit gates between given qubits of the quantum surface code; one or more measurement errors pertaining to the plurality of rounds of syndrome measurements; one or more errors pertaining to ancilla qubit reset timesteps of the plurality of rounds of syndrome measurements; or one or more errors pertaining to idling of given qubits of the quantum surface code.
20. The non-transitory, computer-readable medium of claim 18, wherein to perform the error correction, the program instructions further cause the one or more processors to: treat, prior to the decode the decoded measurement results volume via the global decoder, vertical pairs of highlighted vertices in the decoded measurement results volume, wherein to treat the vertical pairs of highlighted vertices causes a syndrome density in the decoded measurement results volume to be reduced.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(28) While embodiments are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that embodiments are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope as defined by the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include," "including," and "includes" mean including, but not limited to. When used in the claims, the term "or" is used as an inclusive or and not as an exclusive or. For example, the phrase "at least one of x, y, or z" means any one of x, y, and z, as well as any combination thereof.
DETAILED DESCRIPTION
(29) Various embodiments of methods and apparatus are described for performing error correction for rounds of syndrome measurements of a surface code, wherein the rounds of syndrome measurements may form a three-dimensional measurement results volume (e.g., a surface code volume), using local and global decoding stages such that the error correction accounts for circuit-level noise. A person having ordinary skill in the art should understand that circuit-level noise may be defined as a noise model encompassing error types such as errors pertaining to single- and/or multi-qubit gates acting on qubits of a quantum surface code, measurement errors pertaining to rounds of syndrome measurements, errors pertaining to ancilla qubit reset timesteps in respective rounds of syndrome measurements, errors pertaining to idling of qubits of a quantum surface code, etc. In addition, circuit-level noise may encompass crosstalk errors (e.g., spatial and/or temporal crosstalk errors) and/or errors caused by leakage outside of the code space for a given qubit.
(30) A neural network architecture may be implemented as a local decoder, and may be trained such that the local decoder may detect error patterns within a given measurement results volume and treat errors up to a given sized error pattern. As this local decoding procedure may cause vertical pairs of highlighted vertices (e.g., pairs of syndrome differences that are temporally (e.g., vertically) separated by one round of syndrome measurements within a given measurement results volume) to be formed, the present written description also relates to methods and techniques for treating said vertical pairs, namely via syndrome collapse and/or vertical cleanup techniques. Treating vertical pairs may reduce a syndrome density in the measurement results volume prior to a global decoding stage, such that the methods and techniques described herein provide both fast decoding throughput and latency times, and efficient error correction for quantum surface codes.
(31) In some embodiments, quantum computers have the potential to implement families of algorithms with significant speedups relative to classical computers. One of the main challenges in building a quantum computer, however, is mitigating the effects of noise, which may introduce errors during a computation, thereby corrupting the results. Since the successful implementation of quantum algorithms requires that qubits, gates, and measurements fail with low probabilities, additional methods are required for detecting and correcting errors when they occur. Universal fault-tolerant quantum computers are one such strategy, wherein the low desired failure rates come at the cost of substantial extra qubit and gate overhead requirements.
(32) In some embodiments, stabilizer-based error correction may be defined as the encoding of logical qubits using a set of physical data qubits. The qubits may be encoded in a state which is a +1 eigenstate of all operators in a stabilizer group (e.g., an Abelian group of Pauli operators). Measuring operators in the stabilizer group, known as a syndrome measurement, may provide information on the possible errors afflicting the data qubits. The results of the syndrome measurements may then be fed to a classical decoding algorithm whose goal is to determine the most likely errors afflicting the data qubits. Improving the performance of error correcting codes and fault-tolerant quantum computing architectures in order to reduce the large overhead requirements arising from error correction, however, remains a challenge, along with determining classical decoding algorithms which operate on the very fast time scales required to avoid exponential backlogs during the implementation of a given quantum algorithm.
(33) In the past, some decoders have been proposed to attempt to meet the speed requirements imposed by quantum algorithms. Decoders such as cellular automata and renormalization group decoders are based on simple local update rules and have the potential of achieving fast runtimes when using distributed hardware resources. However, such decoders have not demonstrated the low logical failure rates required by algorithms in the circuit-level noise setting. Similarly, linear-time decoders, such as Union Find (UF) and a hierarchical implementation of Union Find with local update rules, have been proposed to reduce decoding complexity. However, the hierarchical implementation of Union Find with local update rules has not been shown to achieve the small throughput times required to run quantum algorithms in the circuit-level noise regime.
(34) The present written description overcomes these challenges through at least the use of neural network decoders. For neural network decoders to be a viable candidate in universal fault-tolerant quantum computing, they must be fast, scalable, and exhibit competitive performance in the presence of circuit-level noise. In some embodiments, a scalable neural network decoder, based on fully three-dimensional convolutions, may be used to treat circuit-level noise for rotated surface codes. Such a scalable neural network decoder may be implemented as a local decoder, which may then be applied to regions of the spacetime volume of a measurement results volume in order to correct errors arising from a given number of faults. The remaining faults (e.g., faults with longer error chain lengths within the spacetime volume, etc.) may then be corrected by applying a global decoder (e.g., a graph-based decoding technique such as minimum-weight perfect-matching, Union Find, etc.). By first applying a local neural network decoder, which treats a given number of errors afflicting the data qubits (e.g., via syndrome difference information), followed by applying a global decoder which treats remaining errors, overall decoding time for syndrome measurement rounds of a given surface code may be reduced.
(35) Furthermore, such a decoding scheme may reduce the syndrome density (e.g., via syndrome collapse and/or vertical cleanup, discussed below), resulting in a faster implementation of the global decoder. In some embodiments, the use of a local neural network decoder to account for circuit-level noise, such as those described herein, may result in the creation of vertical pairs of highlighted syndrome vertices. It may be advantageous to treat such vertical pairs, via syndrome collapse and/or vertical cleanup, before the global decoding stage in order to reduce the syndrome density. In some embodiments, syndrome collapse may be defined as a procedure to remove a subset of vertical pairs (after the application of a local neural network decoder) while also reducing the number of error syndromes used as input to the global decoder. Vertical cleanup may be defined as a procedure of directly removing vertical pairs after the application of the local neural network decoder, but prior to the implementation of the global decoder, according to some embodiments.
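As an informal illustration of the syndrome collapse and vertical cleanup procedures defined above, the following sketch operates on a hypothetical binary NumPy array of shape (rounds, rows, columns) holding syndrome differences; the array layout, function names, and partition size are assumptions for illustration, not details fixed by this description:

```python
import numpy as np

def syndrome_collapse(volume, partition_size):
    # Partition the (rounds, rows, cols) syndrome-difference volume along the
    # temporal dimension and add the syndromes in each partition modulo 2,
    # collapsing each partition to a single sheet (vertical pairs that fall
    # inside one partition cancel).
    rounds = volume.shape[0]
    parts = [volume[t:t + partition_size] for t in range(0, rounds, partition_size)]
    return np.array([p.sum(axis=0) % 2 for p in parts])

def vertical_cleanup(volume):
    # Sweep forward in time and directly cancel vertical pairs: highlighted
    # vertices at the same spatial site in consecutive rounds are removed in pairs.
    out = volume.copy()
    for t in range(out.shape[0] - 1):
        pair = out[t] & out[t + 1]
        out[t] ^= pair
        out[t + 1] ^= pair
    return out
```

Either routine reduces the syndrome density handed to the global decoder; syndrome collapse additionally shrinks the number of rounds the global decoder must consider.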
(36) This written description continues with a general description of the use of fast local and global decoders for quantum error-correcting codes to account for circuit-level noise. Examples of neural network architectures and their implementations as local decoders are discussed, followed by methods for training such local neural network decoders to treat circuit-level noise during error correction. Methods for performing syndrome collapse and vertical cleanup in preparation for a global decoding stage are then discussed. Finally, a description of an example computing system upon which the various components, modules, systems, devices, and/or decoding algorithms may be implemented is provided. Various examples are provided throughout the specification. A person having ordinary skill in the art should also understand that the previous and following description of local and global decoding schemes is not to be construed as limiting as to the implementation of said decoding schemes, or portions thereof.
(38) In some embodiments, error correction for syndrome measurement results of a surface code (e.g., surface code 200 described herein with regard to
(39) In block 102, a local neural network decoder may be used to sweep through a given measurement results volume (e.g., measurement results volume 300 described herein with regard to
(40) In some embodiments, vertical pairs of highlighted vertices may be generated during the decoding process described by block 102 (see also the description for
(41) In block 106, remaining errors in the measurement results volume are decoded using a global decoder, such as through a minimum-weight perfect-matching (MWPM) decoding technique, Union Find, or through another graph-based decoding technique. Then, in block 108, the results (e.g., a logical result) of the error correction may be provided.
(43) In some embodiments, a surface code may be used to correct errors during a quantum computation. A surface code, such as surface code 200, may resemble a two-dimensional planar version of the toric code. The code parameters of a surface code, such as surface code 200, may be defined as [d.sub.xd.sub.z, 1, min(d.sub.x, d.sub.z)], where d.sub.x and d.sub.z are the distances of minimum-weight representatives of the logical X and Z operators of the surface code (which may be referred to as the X and Z distance of the surface code). For example, surface code 200 may be referred to as a d.sub.x=d.sub.z=5 surface code, according to some embodiments. Furthermore, the logical
(44) The surface code belongs to the family of Calderbank-Shor-Steane (CSS) codes, wherein X and Z-type stabilizers in the bulk of the surface code lattice (e.g., plaquettes such as X-type stabilizer 202 and Z-type stabilizer 204 in surface code 200) correspond to weight-four operators, and there are additional weight-two operators along the boundary of the surface code lattice (e.g., the semi-circles shown in surface code 200). Data qubits, such as data qubit 208, are placed at vertices of the respective stabilizers, and ancilla qubits, such as ancilla qubit 210, are placed at the center of the respective stabilizers, as shown in
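As a simplified illustration of stabilizer syndrome extraction for a CSS code such as the one described above, a parity-check-matrix sketch follows; the matrix representation is a standard textbook convention, and the specific matrix below is a hypothetical toy example, not the surface code 200 layout:

```python
import numpy as np

def measure_syndrome(parity_check, error):
    # Rows of parity_check are stabilizers, columns are data qubits; a bit of
    # the syndrome is 1 wherever the error anticommutes with (overlaps oddly
    # with) the corresponding stabilizer.
    return (parity_check @ error) % 2
```

For example, an X-type error on a data qubit shared by two Z-type plaquettes flips both of those syndrome bits, which is what the decoder later interprets as a pair of highlighted vertices.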
(45) In some embodiments, error syndromes for Calderbank-Shor-Steane codes may be defined as follows: Let S.sub.X={g.sub.1.sup.(X), g.sub.2.sup.(X), . . . , g.sub.r.sub.X.sup.(X)} and S.sub.Z={g.sub.1.sup.(Z), g.sub.2.sup.(Z), . . . , g.sub.r.sub.Z.sup.(Z)} be generating sets of the X- and Z-type stabilizers of a Calderbank-Shor-Steane code, and suppose that the stabilizer measurements are repeated d.sub.m times (e.g., meaning that there are d.sub.m rounds of stabilizer measurements). Then, s.sub.X(d.sub.m) may be defined to be a bit string (e.sub.X.sup.(1) e.sub.X.sup.(2) . . . e.sub.X.sup.(d.sub.m)), wherein the bits of e.sub.X.sup.(k) record the outcomes of the X-type stabilizer measurements in round k; s.sub.Z(d.sub.m) may be defined analogously for the Z-type stabilizers.
(46) Note that the s.sub.X(d.sub.m) and s.sub.Z(d.sub.m) syndromes in the above definition of error syndromes for Calderbank-Shor-Steane codes may have non-zero bits due to both the presence of data qubit errors as well as measurement errors.
(47) In some embodiments, syndrome differences between consecutive rounds of stabilizer measurements may also be defined as follows: Given the syndromes s.sub.X(d.sub.m)=(e.sub.X.sup.(1) e.sub.X.sup.(2) . . . e.sub.X.sup.(d.sub.m)), s.sub.X.sup.diff(d.sub.m) may be defined as s.sub.X.sup.diff(d.sub.m)=(e.sub.X.sup.(1) {tilde over (e)}.sub.X.sup.(2) . . . {tilde over (e)}.sub.X.sup.(d.sub.m)), wherein {tilde over (e)}.sub.X.sup.(k)=e.sub.X.sup.(k)⊕e.sub.X.sup.(k−1), such that a bit of s.sub.X.sup.diff(d.sub.m) is non-zero only when the corresponding stabilizer measurement outcome changes between consecutive rounds; s.sub.Z.sup.diff(d.sub.m) may be defined analogously.
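The syndrome-difference construction above amounts to XORing each round's outcomes with the previous round's; a minimal sketch, assuming the outcomes are stored as a binary NumPy array with one row per round:

```python
import numpy as np

def syndrome_differences(syndromes):
    # syndromes: shape (d_m, n_stabilizers), one row of outcomes per round.
    # The first round is kept as-is; every later round is XORed with the
    # previous round, so a bit is set only where an outcome changed.
    diff = syndromes.copy()
    diff[1:] ^= syndromes[:-1]
    return diff
```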
(48) In some embodiments, the standard decoding protocol that may be used to correct errors with a surface code, such as surface code 200, is to perform minimum-weight perfect-matching (MWPM), such as by using the Edmonds Blossom algorithm. In minimum-weight perfect-matching, a graph G may be formed, wherein edges of graph G represent the data qubits of the corresponding surface code (e.g., data qubit 208 in surface code 200), and vertices of graph G represent the ancilla qubits of the corresponding surface code (e.g., ancilla qubit 210 in surface code 200). As described above, ancilla qubits may be associated with stabilizer measurement outcomes, wherein the outcomes may be encoded in the respective ancilla qubits, according to some embodiments.
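As a toy stand-in for the Edmonds Blossom algorithm referenced above, a brute-force minimum-weight perfect matching over a handful of highlighted vertices may be sketched as follows; the vertex representation and the `dist` callable (e.g., a matching-graph path weight) are hypothetical, and the exponential-time enumeration is only feasible for very small inputs:

```python
def min_weight_perfect_matching(vertices, dist):
    # Enumerate all perfect pairings of an even-sized vertex set and return
    # the pairing of minimum total weight, together with that weight.
    def pairings(items):
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for i, partner in enumerate(rest):
            remaining = rest[:i] + rest[i + 1:]
            for tail in pairings(remaining):
                yield [(first, partner)] + tail
    best, best_weight = None, float("inf")
    for matching in pairings(list(vertices)):
        weight = sum(dist(a, b) for a, b in matching)
        if weight < best_weight:
            best, best_weight = matching, weight
    return best, best_weight
```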
(49) In some embodiments, in order to distinguish measurement errors from data qubit errors, the error syndrome (e.g., the measurement of all stabilizers) may be repeated r times, wherein r is considered large enough to ensure fault-tolerance. Furthermore, let m.sup.(k)(g.sub.i)=1 if the stabilizer g.sub.i in round k is measured non-trivially and zero otherwise. Prior to implementing minimum-weight perfect-matching, a vertex v.sup.(k)(g.sub.i) in graph G associated with a stabilizer g.sub.i in the k'th syndrome measurement round may be highlighted if and only if m.sup.(k)(g.sub.i)≠m.sup.(k−1)(g.sub.i) (e.g., wherein highlighted refers to the syndrome measurement outcome of g.sub.i changing between rounds k−1 and k). More generally, for any fault location l.sub.k in the circuits used to measure the stabilizers of the surface code (e.g., CNOT gates, idling locations, state-preparation, and measurements), all possible Pauli errors P.sup.l.sup.
(50) In some embodiments which use a minimum-weight perfect-matching global decoder, for a distance d.sub.x=d.sub.z=d surface code with d rounds of syndrome measurements, the decoding complexity of minimum-weight perfect-matching may be referred to as O(n.sup.3), where n∝d.sup.2 corresponds to the number of highlighted vertices in graph G. In other embodiments which use a Union Find global decoder, the decoding complexity may be referred to as O(nα(n)), where α is the inverse of Ackermann's function.
(51) Although minimum-weight perfect-matching, Union Find, and/or other graph-based global decoders may have polynomial decoding time complexities, this still may not be fast enough for the time scales on which decoders need to operate for many practical quantum hardware architectures. Therefore, error correction of a surface code may be performed by local and global decoders, via the methods and techniques described herein. In some embodiments, scalable local neural network decoders may be used prior to using a global decoder (e.g., minimum-weight perfect-matching or Union Find). For example, a local neural network decoder that has an effective distance d and which can thus correct errors E of weight wt(E)≤(d−1)/2 may be used before a global decoder, which then may correct any remaining errors which were not corrected by the local decoder (see the description for
(52) The effect of using a local decoder is to reduce the value of n by removing a first set of errors afflicting the data qubits. Using the methods and techniques described herein, local neural network decoders may be used to correct not only for code capacity noise (e.g., where only data qubits can fail, and error syndromes only have to be measured once) and phenomenological noise (e.g., where measurements can fail in addition to data qubits), but also for circuit-level noise, which introduces additional and complex fault patterns that the local neural network decoder is trained to treat.
(53) For some embodiments described herein, circuit-level depolarizing noise (and the related circuit-level depolarizing noise model) may be considered as follows. In some embodiments, a circuit-level noise model may comprise the following definitions: Each single-qubit gate location is followed by a Pauli X, Y, or Z error, each with probability p/3. With probability p, each two-qubit gate is followed by a two-qubit Pauli error drawn uniformly and independently from {I, X, Y, Z}.sup.⊗2\{I⊗I}. With probability 2p/3, the preparation of the |0⟩ state is replaced by |1⟩=X|0⟩. Similarly, with probability 2p/3, the preparation of the |+⟩ state is replaced by |−⟩=Z|+⟩. With probability 2p/3, any single-qubit measurement has its outcome flipped. Lastly, with probability p, each idle gate location is followed by a Pauli error drawn uniformly and independently from {X, Y, Z}.
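The probabilities in the noise model above can be illustrated with a small sampling sketch; the function names and string-based Pauli representation are illustrative choices, not part of this description:

```python
import random

def sample_single_qubit_gate_error(p, rng=random):
    # After a single-qubit gate: X, Y, or Z, each with probability p/3.
    r = rng.random()
    if r < p:
        return "XYZ"[int(r / (p / 3))]
    return "I"

def sample_two_qubit_gate_error(p, rng=random):
    # After a two-qubit gate: with probability p, a pair drawn uniformly from
    # {I, X, Y, Z}^{⊗2} \ {I⊗I} (15 equally likely two-qubit Paulis).
    if rng.random() < p:
        pairs = [a + b for a in "IXYZ" for b in "IXYZ" if a + b != "II"]
        return rng.choice(pairs)
    return "II"

def sample_measurement_flip(p, rng=random):
    # A single-qubit measurement outcome is flipped with probability 2p/3.
    return rng.random() < 2 * p / 3
```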
Training Neural Networks as Local Decoders
(54) As described in
(55) In some embodiments, the local neural network decoder may have an effective distance d.sub.eff≤max(d.sub.x, d.sub.z), allowing the local decoder to remove errors arising from at most (d.sub.eff−1)/2 faults. By removing such errors at the local decoding step of the error correction process (e.g., block 100), the syndrome density may be reduced (e.g., the number of highlighted vertices in the matching graph G used to implement minimum-weight perfect-matching), thus resulting in a faster execution of the global decoding step. By training a three-dimensional fully-convolutional neural network, which will be further explained in the following paragraphs, the local neural network decoder may be able to correct for circuit-level noise (e.g., wherein repeated rounds of syndrome measurements are performed), as opposed to only code capacity noise and/or phenomenological noise. In addition, the encoding strategies described herein for training said three-dimensional fully-convolutional neural network allow the neural network to adapt to different boundaries of a surface code lattice, and enhance the neural network's ability to correct errors within the bulk of the measurement results volume.
(57) In some embodiments, decoding, such as with a local neural network decoder, may resemble a pattern recognition technique: for each physical data qubit q.sub.j used in the encoding of the given surface code, and given the syndrome measurements within some local volume (d.sub.x, d.sub.z, d.sub.m) of the surface code lattice, a classifier can predict whether or not there is an error afflicting data qubit q.sub.j.
(58) In embodiments described herein, a neural network classifier may be designed to take, as input, a local volume of size (d.sub.x, d.sub.z, d.sub.m), and train it to correct data-qubit errors arising from at most (d−1)/2 faults, wherein d=min(d.sub.x, d.sub.z). In order to promote scalability, such a neural network classifier may be designed using the methods and techniques described herein such that it corrects errors arising from at most (d.sub.eff−1)/2 faults even when applied to larger measurement results volumes (d.sub.x, d.sub.z, d.sub.m), where d.sub.eff≥d. In some embodiments, such as those shown in
(59)
(60) In some embodiments, the network architecture of the local neural network decoder may resemble an enhanced version of a multi-layer perceptron (MLP) with an input layer, hidden layer, and output layer, each of which is a fully connected layer where all inputs connect to each neuron in the layer. In some embodiments of a multi-layer perceptron network, the (d.sub.x, d.sub.z, d.sub.m) local volume serves as inputs to a set of N neurons in the input layer. The hidden layer may then take those N neurons as inputs for a set of H neurons, followed by the H hidden layer neuron outputs being inputs to the final layer neurons that produce the prediction. In some embodiments, such as in the network architectures of FIGS. 4 and 5, the methods and techniques described herein are used to implement a network architecture with two outputs, the occurrence of an X-type error, and the occurrence of a Z-type error, with Y-type errors occurring if both X-type and Z-type errors are present. The two outputs may then be compared to the two output targets defined by trainY (see the description for trainY below).
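The multi-layer perceptron structure above can be sketched as a forward pass. This is an illustrative sketch, not the patent's implementation: the layer sizes N and H, the random weights, the ReLU activations, and the sigmoid readout are all assumptions; the two outputs correspond to the X-type and Z-type error predictions described above:

```python
import numpy as np

# Hypothetical sketch of the described MLP: a flattened (d_x, d_z, d_m)
# local volume feeds N input neurons, then H hidden neurons, then 2 outputs
# (occurrence of an X-type error and of a Z-type error; Y-type = both).

rng = np.random.default_rng(0)
d_x, d_z, d_m = 5, 5, 5       # local volume dimensions (illustrative)
N, H = 64, 32                 # input/hidden layer widths (illustrative)

W1 = rng.normal(size=(d_x * d_z * d_m, N)) * 0.1
W2 = rng.normal(size=(N, H)) * 0.1
W3 = rng.normal(size=(H, 2)) * 0.1

def predict(local_volume):
    """Return (p_X_error, p_Z_error) for the data qubit under consideration."""
    x = local_volume.reshape(-1)          # flatten the local volume
    h1 = np.maximum(x @ W1, 0.0)          # fully connected input layer + ReLU
    h2 = np.maximum(h1 @ W2, 0.0)         # fully connected hidden layer + ReLU
    logits = h2 @ W3                      # output layer: two predictions
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid probabilities

p = predict(rng.integers(0, 2, size=(d_x, d_z, d_m)).astype(float))
```

During training, the two outputs would be compared against the corresponding trainY targets.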
(61) Said enhancements to the multi-layer perceptron network decoder may pertain to the network being fully-convolutional, wherein each layer consists of a set of convolution filters (e.g., see the filters shown in
(62) In some embodiments, a fully-convolutional neural network produces a prediction for the data qubit at the center of the local volume it analyzes, as it sweeps through a measurement results volume of a given surface code lattice (as shown in
(63) An additional enhancement to the neural network architecture may be to improve the representational power of the network by replacing the first layer of convolutional filters with multiple layers, while preserving the overall receptive field of the network. For example, if the first layer had filters of size (9, 9, 9), then 4 layers with filters of size (3, 3, 3) also have an effective filter size of (9, 9, 9), since each additional layer increases the effective filter width by 2 from the first layer's width of 3. If, hypothetically, each layer was linear, the resulting N outputs in the fourth layer would be mathematically equivalent to a single 9×9×9 layer with N outputs. In given embodiments, however, in which each layer is non-linear, with a nonlinear activation function (e.g., ReLu), the two networks may no longer be equivalent, and the neural network architecture with 4 layers of (3, 3, 3) filters may have more representational power, learning nonlinear combinations of features-of-features-of-features. Additionally, the hidden layer may be expanded with (1,1,1) filters to become multiple layers of (1,1,1) filters, which may increase the neural network's learning capacity.
(64)
(65) The neural network architecture shown in
(66) Neural network architectures, such as those shown in
(67) In order to train the local and fully-convolutional neural network decoders such as those discussed herein, training data sets may be prepared by performing Monte Carlo simulations using the circuit-level noise model described above, with the surface code circuit being used to compute the error syndrome. In some embodiments, the resulting training data sets may then be stored using the following format: The inputs to the local neural network decoder, which may be referred to collectively as trainX herein for ease of discussion, is a tensor of shape (m, d.sub.x, d.sub.z, d.sub.m, 5) for a surface code with X and Z distances d.sub.x and d.sub.z, with d.sub.m syndrome measurement rounds (with the last round being a round of perfect error correction wherein the data qubits are measured in some basis), and with m being the number of Monte Carlo simulations (e.g., the number of training samples) within the given training data set. The number 5 in trainX refers to five additional inputs which are described in the following paragraphs (e.g., syndrome information such as syndrome difference histories s.sub.X.sup.diff(d.sub.m) and s.sub.Z.sup.diff(d.sub.m), location information such as matrices enc(X) and enc(Z), and temporal boundaries of the given measurement results volume). Using the inputs of trainX, the local neural network decoder is trained to detect and decode a given set of errors within a given measurement results volume. Such output targets that the local neural network decoder is attempting to match to during training may be referred to as trainY herein, also for ease of discussion.
(68) Recalling the definition of syndrome differences described above, the first two of the five additional inputs to trainX contain the syndrome differences s.sub.X.sup.diff(d.sub.m) and s.sub.Z.sup.diff(d.sub.m) obtained for d.sub.m−1 rounds of noisy syndrome measurements, followed by one round of perfect error correction, according to some embodiments. Tracking changes in syndrome measurement outcomes between consecutive rounds may ensure that the average syndrome density remains constant across different syndrome measurement rounds. The next two inputs of trainX contain spatial information used to enable the local neural network decoder to associate syndrome measurement outcomes with data qubits in both the bulk and along boundaries of the given surface code that can influence the observed outcome. The data is represented as d.sub.x by d.sub.z binary matrices labelled enc(X) and enc(Z), where 1 values are inserted following a particular mapping, which is described in the following paragraphs with regard to
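Forming the syndrome-difference history amounts to XOR-ing consecutive rounds of syndrome outcomes, so that the first entry is the raw round-1 syndrome and every later entry records only what changed. A minimal sketch, assuming the per-round syndromes are stored as a binary array (shapes and names are illustrative):

```python
import numpy as np

def syndrome_differences(syndromes):
    """syndromes: (d_m, n_stab) binary array of per-round stabilizer
    outcomes. Returns the difference history: row 0 unchanged, row j (j>0)
    is round j XOR round j-1."""
    syndromes = np.asarray(syndromes, dtype=np.uint8)
    diffs = syndromes.copy()
    diffs[1:] ^= syndromes[:-1]   # change between consecutive rounds
    return diffs
```

A stabilizer that flips once and stays flipped contributes a single highlighted entry in the difference history, which is what keeps the average syndrome density constant across rounds.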
(69)
(70) In the following example embodiments pertaining to
(71) In
(72) In some embodiments, mapping to M.sub.syn.sub.
(73) In
(74) For the X-type stabilizers shown in
(75) As described above with regard to binary matrix M.sub.syn.sub.
(76) Continuing with the discussion of the five additional inputs to trainX, the second two of the five additional inputs may correspond to location information about qubit placements within the dimensions of the measurement results volume, such as matrices enc(X) and enc(Z), which may be identical in each syndrome measurement round unless the given surface code lattice changes shape. (Note that some example embodiments of performing a parity measurement via lattice surgery, in which a given surface code lattice does change shape, are discussed with regard to
(77)
where j∈{1, . . . , d.sub.m}.
(78) In some embodiments, matrices enc(X) and enc(Z) are provided for each syndrome measurement round, following the notation j∈{1, . . . , d.sub.m} described above. Matrices enc(X) and enc(Z) may be identical in each round (e.g., {1, . . . , d.sub.m}) unless the given surface code lattice changes shape between consecutive syndrome measurement rounds (e.g., during a given lattice surgery protocol; see the description for
(79) In some embodiments in which a local neural network decoder is decoding a subset of a total measurement results volume and the subset is located in the bulk of the measurement results volume (e.g., volume 302 or 306 when decoding measurement results volume 300), syndromes associated with a particular data qubit may change shape depending on which data qubit is observed (e.g., by comparing data qubits surrounding data qubit 626 versus data qubits surrounding data qubit 627 in mapping to M.sub.syn.sub.
(80)
(81) Similarly to providing locations of data qubits within the bulk, enc(X) and enc(Z) may allow the local neural network decoder to identify data qubits along the boundaries of a given measurement results volume (e.g., volume 308 when decoding measurement results volume 300). For embodiments described herein, boundary X-type data qubits may refer to data qubits located along the horizontal top and bottom boundaries of a given surface code lattice (e.g., boundary X-type data qubits (b.sub.X) 802 in surface code 800), and boundary Z-type data qubits may refer to data qubits located along the vertical left and right boundaries of a given surface code lattice (e.g., boundary Z-type data qubits (b.sub.Z) 808 in surface code 800).
(82) Providing such inputs (e.g., location information pertaining to dimension sizes of a given measurement results volume, locations of data qubits within the bulk and/or at the boundaries, etc.) to trainX improves performance as compared to only specifying locations pertaining to data qubits at the boundaries.
(83) Continuing with the discussion of the five additional inputs to trainX, the fifth of the five additional inputs may correspond to a specification of temporal boundaries of a given measurement results volume. In some embodiments, the last round of error correction for a given measurement results volume (e.g., measurement results volume 300) is a round of perfect error correction in which the data qubits may be measured in some basis, and therefore it is relevant to specify such information in trainX. For some embodiments described herein, a round of perfect error correction may be defined as a syndrome measurement round in which no new errors are introduced, and the perfect error correction round arises when the data qubits are measured directly in some basis (e.g., at the end of the computation). In some embodiments, a measurement error which occurs when the data qubits are measured directly (e.g., during a round of perfect error correction) is equivalent to an error on such data qubits in the prior round (e.g., the second-to-last round of syndrome measurements), and the syndrome measurement outcome may be compatible with the errors afflicting the data qubits arising from the second-to-last round. As such, since the last syndrome measurement round (e.g., a round of perfect error correction) behaves differently than other rounds of syndrome measurements, specifying the first and last rounds of syndrome measurements in trainX allows the trained local neural network decoder to generalize to measurement results volumes with arbitrary d.sub.m values (e.g., arbitrary numbers of rounds of syndrome measurements for arbitrarily sized surface code lattices).
(84) In some embodiments, an input to trainX pertaining to temporal boundaries may be represented using d.sub.x×d.sub.z binary matrices for each syndrome measurement round. For example, the encoding may resemble matrices which are filled with ones for rounds 1 and d.sub.m (e.g., the first and last rounds of syndrome measurements), and filled with zeros for rounds of syndrome measurements in between (e.g., rounds 2 to d.sub.m−1).
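This temporal-boundary channel can be built in a few lines. A sketch assuming the channel is stored round-major as a (d.sub.m, d.sub.x, d.sub.z) binary stack (the function name and axis order are illustrative):

```python
import numpy as np

def temporal_boundary_channel(d_x, d_z, d_m):
    """All-ones matrices for the first and last syndrome measurement rounds,
    all-zeros matrices for the rounds in between."""
    chan = np.zeros((d_m, d_x, d_z), dtype=np.uint8)
    chan[0] = 1      # round 1: first round of syndrome measurements
    chan[-1] = 1     # round d_m: the round of perfect error correction
    return chan
```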
(85) As introduced above, output targets in trainY may contain the locations of X and Z data errors afflicting data qubits of a given measurement results volume for syndrome measurement rounds 1 to d.sub.m that the local neural network decoder is attempting to predict during training. As trainY may contain the locations of actual data errors afflicting data qubits within the simulated measurement results volume, trainY may be referred to as ground truth information to be used during a supervised learning technique such as those described herein, and predicted data errors, decoded by the local neural network decoder during training, may be referred to as alleged data errors until the alleged data errors are verified and/or compared against the actual data errors in trainY. Such locations may be stored in trainY in a tensor of shape (m, d.sub.x, d.sub.z, d.sub.m, 2). In some embodiments, in order for the data stored in trainY to be compatible with trainX, changes in data qubit errors between consecutive syndrome measurement rounds are tracked rather than the data qubit errors themselves in each round, since trainX tracks changes in syndrome measurement outcomes between consecutive rounds (see the description for syndrome differences, and inputs to trainX s.sub.X.sup.diff(d.sub.m) and s.sub.Z.sup.diff(d.sub.m) described above). Tracking changes in data qubit errors may also ensure that the average error densities are independent of the number of syndrome measurement rounds, according to some embodiments. This is advantageous, as otherwise, the number of syndrome measurement rounds needed to train such local neural network decoders would be very large in order for said networks to generalize well to arbitrary values of d.sub.m. Tracking changes in data qubit errors between rounds reduces the number of syndrome measurement rounds needed to effectively train such local neural network decoders, according to some embodiments.
(86) A further implementation parameter that may be considered in the process of training a local neural network decoder such as those described herein is a homological equivalence convention for errors. In some embodiments, when performing Monte Carlo simulations to collect and prepare training data samples for training the local neural network decoder, there may be cases in which two errors, such as errors E.sub.1 and E.sub.2, have the same syndrome (e.g., s(E.sub.1)=s(E.sub.2)), with E.sub.1E.sub.2=g and where g is in the stabilizer group of the surface code. In such embodiments, E.sub.1 and E.sub.2 may be considered to be homologically equivalent for a given surface code if s(E.sub.1)=s(E.sub.2) and E.sub.1E.sub.2 is an element of the stabilizer group of the given surface code (e.g., E.sub.1 and E.sub.2 are homologically equivalent for a code if E.sub.1 and E.sub.2 have the same error syndrome, and are identical up to products of stabilizers). Determining a particular convention and/or fixed choice for representing homologically equivalent errors in trainY, such as the example errors E.sub.1 and E.sub.2, may lead to significant performance improvements of the local neural network decoder. (Such a convention is further described with regard to
(87) For a given training sample of the m training samples in trainY with a tensor of shape (m, d.sub.x, d.sub.z, d.sub.m, 2), the first channel may consist of d.sub.m binary d×d matrices M.sub.e.sup.(X.sup.
(88)
(89) In some embodiments, examples of how a homological equivalence convention may be determined for training a local neural network decoder and subsequently applied to a given surface code (e.g., surface codes 900 and 920) are described in embodiments shown in
(90) In some embodiments, fixEquivalenceX transformations may include the following transformations, wherein E.sub.x may be assumed to be a weight-2 X error with support on a weight-4 stabilizer g.sub.k.sup.(X) of surface code 900, wherein the top left qubit of the given stabilizer has coordinates (α, β): Suppose E.sub.x has support at the coordinates (α+1, β) and (α+1, β+1). Then fixEquivalenceX may map E.sub.x to a weight-2 error at coordinates (α, β) and (α, β+1). Thus, horizontal X errors at the bottom of g.sub.k.sup.(X) are mapped to horizontal X errors at the top of g.sub.k.sup.(X). Suppose E.sub.x has support at the coordinates (α, β) and (α+1, β). Then fixEquivalenceX may map E.sub.x to a weight-2 error at coordinates (α, β+1) and (α+1, β+1). Thus, vertical X errors at the left of g.sub.k.sup.(X) are mapped to vertical X errors at the right of g.sub.k.sup.(X). Suppose E.sub.x has support at the coordinates (α, β) and (α+1, β+1). Then fixEquivalenceX may map E.sub.x to a weight-2 error at coordinates (α, β+1) and (α+1, β). Thus, diagonal X errors from the top left to bottom right of g.sub.k.sup.(X) are mapped to diagonal X errors from the top right to bottom left of g.sub.k.sup.(X).
(91) Further embodiments of fixEquivalenceX transformations may include the following additional transformations, wherein g.sub.k.sup.(X) is assumed to be a weight-2 X-type stabilizer along the top boundary of surface code 900, with the left-most qubit in its support having coordinates (α, β). If E.sub.x is a weight-1 error at coordinates (α, β+1), fixEquivalenceX may map E.sub.x to a weight-1 error at coordinates (α, β). If g.sub.k.sup.(X) is a weight-2 X-type stabilizer along the bottom of the surface code lattice with the left-most qubit in its support having coordinates (α, β), and E.sub.x is a weight-1 error at coordinates (α, β), then fixEquivalenceX may map E.sub.x to a weight-1 error at coordinates (α, β+1).
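The three bulk rules for a weight-4 stabilizer can be sketched directly as a lookup. This is an illustration, not the patent's code: coordinates are assumed to be (row, column) with (r, c) the stabilizer's top-left qubit, matching the mappings described above:

```python
# Hypothetical sketch of the fixEquivalenceX convention for a weight-4
# X-type stabilizer with top-left data qubit at (r, c): homologically
# equivalent weight-2 errors are mapped to a canonical representative.

def fix_equivalence_x(error, r, c):
    """error: frozenset of 2 (row, col) coordinates supported on the
    stabilizer. Returns the canonical equivalent weight-2 error."""
    bottom = frozenset({(r + 1, c), (r + 1, c + 1)})
    left = frozenset({(r, c), (r + 1, c)})
    diag = frozenset({(r, c), (r + 1, c + 1)})
    if error == bottom:                      # bottom edge -> top edge
        return frozenset({(r, c), (r, c + 1)})
    if error == left:                        # left edge -> right edge
        return frozenset({(r, c + 1), (r + 1, c + 1)})
    if error == diag:                        # main diagonal -> anti-diagonal
        return frozenset({(r, c + 1), (r + 1, c)})
    return error                             # already in canonical form
```

The boundary rules for weight-2 stabilizers would be added as analogous one-line cases, and fixEquivalenceZ would be the 90-degree-rotated counterpart.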
(92) In some embodiments, weightReductionX and fixEquivalenceX functions may be applied to all X-type stabilizers of the given surface code lattice in each syndrome measurement round. It follows that the weightReductionX function may be applied first, then the fixEquivalenceX function, for efficiency purposes, according to some embodiments. For ease of discussion herein, such an application process of weightReductionX and fixEquivalenceX functions may be referred to collectively as a simplifyX function, wherein weightReductionX and fixEquivalenceX functions are applied to all X-type stabilizers of the given surface code lattice in each syndrome measurement round, with E.sub.x errors in round 1≤j≤d.sub.m being described by the binary matrix M.sub.e.sup.(X.sup.
(93) In some embodiments, similar functions, referred to herein for ease of discussion as weightReductionZ, which may reduce the weights of Z errors at each Z-type stabilizer in a given surface code lattice (e.g., surface code 920), and fixEquivalenceZ, which may be chosen such that it is rotationally symmetric to the function fixEquivalenceX under a 90-degree rotation of the surface code lattice, may also be defined as part of the given homological equivalence convention used when training the given local neural network decoder. In such embodiments, the homological equivalence convention for Z data qubit errors may be implemented by repeatedly calling the simplifyZ function (which may map matrices M.sub.e.sup.(Z.sup.
(94) Errors which may be invariant under transformations of the simplifyX and simplifyZ functions are shown in
(95) A person having ordinary skill in the art should understand that the example embodiments of weightReductionX, weightReductionZ, fixEquivalenceX, and fixEquivalenceZ described above are not meant to be limiting, and that other combinations of homological equivalence conventions may be applied for the methods and techniques described herein. Furthermore, in practice, a homological equivalence convention, such as those discussed herein, may be determined during training of a local neural network decoder and subsequently applied to ground truth information, such as trainY discussed herein, in order for a local neural network decoder to more efficiently learn relationships between syndromes and errors within a given training data set, according to some embodiments.
(96) Further considerations when training a local neural network decoder that may influence performance, in addition to the neural network architecture (see the descriptions for
(97)
(98) In some embodiments, the methods and techniques for training a local neural network decoder may resemble the process shown in
(99) In block 1002, a neural network architecture, such as those shown in
(100) In some embodiments, local neural network decoders may be implemented using classical computing hardware, as neural network evaluations involve little conditional logic, and therefore may maximize the use of pipelined data pathways. However, due to costs and design implementation considerations of many current hardware architectures, such as slow design iteration, custom manufacturing, bounded size, and integration with existing electronics, selection of such classical computing hardware may require special consideration. In some embodiments, candidate technologies for implementation of local neural network decoders may include application-specific integrated circuits (ASICs) and/or Field-Programmable Gate Arrays (FPGAs).
(101) Using application-specific integrated circuits (ASICs) allows implementations of local neural network decoders to perform on time scales sufficient for running quantum algorithms, such as those discussed herein. FPGAs may be used for neural network evaluation as well. An FPGA may include a set of components, such as flip-flops, look-up tables (LUTs), block RAM (BRAM), configurable logic blocks (CLBs), and digital signal processing (DSP) slices, each of whose inputs may be selectively routed into one another to perform complex computations ranging from fixed high-performance arithmetic circuits to entire programmable processors.
(102) In addition, a person having ordinary skill in the art should understand that computing devices, such as computing device 2000, may be used to generate training data sets (e.g., via Monte Carlo simulations) that may be used to train local neural network decoders such as those described herein. The same or different computing devices, such as the classical computing hardware described above, may then be used to implement trained local neural network decoders. Furthermore, same and/or additional computing devices, such as additional implementations of computing device 2000, may be used to perform error correction for arbitrarily-sized measurement results volumes using a local and global decoding scheme such as those described herein.
(103) Removing Vertical Pairs of Highlighted Vertices Following a Local Decoding Error Correction Step
(104) In some embodiments, decoding portions of the measurement results volume via a local neural network decoder (e.g., in the process described with regard to block 102) may contribute to the creation of pairs of highlighted vertices in the matching graph (e.g., see the description for matching graph G, which may be used to implement minimum-weight perfect-matching during the global decoding step herein). Such creation of pairs of highlighted vertices may be a result of the local decoder correctly identifying and correcting given errors via syndrome difference information (e.g., such as the CNOT gate failure described in
(105)
(106)
(107) The series of figures demonstrated in
(108)
(109) As illustrated by surface code at round j+1 1120 in
(110)
(111) In subset of matching graph 1122, the X error on data qubit 1110 during the j'th syndrome measurement round is marked on the lower edge of the subset, and ancilla qubit 1114 is marked as a highlighted vertex. (Note that the dashed lines in the upper half of subset of matching graph 1122 are meant to emphasize the focus on round j in
(112)
(113) In subset of matching graph 1124, in addition to the X error marked on the lower edge of the subset corresponding to round j, an additional X error is marked on the upper edge of the subset to represent the propagation of the CNOT gate failure into round j+1. From the perspective of the local neural network decoder, after receiving said information about measurements that took place during round j+1, the local neural network decoder may now correctly identify the fault pattern described in
(114)
(115) Subset of matching graph 1126 shows how matching graph G may transform after the local neural network decoder applies an error correction to the X error on data qubit 1110. As the local neural network decoder identifies that the given X error seen in both rounds j and j+1 is due to CNOT gate 1116, the highlighted vertices are transformed such that ancilla qubit 1112 is highlighted both in rounds j and j+1, forming a vertical pair of highlighted vertices. The reader may note that even though the local neural network decoder has made a correct error correction, the pair of highlighted vertices are still formed (and the syndrome density may not be reduced as a consequence of this particular type of space-time error detection). Furthermore, in some embodiments, since the local neural network decoder may receive information about syndrome differences of multiple rounds of syndrome measurements at once (e.g., as defined by volume size (d.sub.x, d.sub.z, d.sub.m)), the local decoder may perform a correction on a given data qubit in a round before the error actually occurs as part of a pattern recognition technique (e.g., a pattern that may extend into a second set of rounds of syndrome measurements that may not be viewed at the same time as a first set of rounds of syndrome measurements due at least in part to a volume size (d.sub.x, d.sub.z, d.sub.m) of the local decoder), leading to the creation of a vertical pair of highlighted vertices.
(116) In some embodiments, the creation of vertical pairs arising from a correction performed by the local neural network decoder due to a two-qubit gate failure, such as a series described in
(117) Removing Vertical Pairs of Highlighted Vertices: Performing Syndrome Collapse by Sheets
(118)
(119) In
(120) As a second step in the process of performing syndrome collapse, each sheet may then be collapsed into collapsed sheets (e.g., collapsed sheet 1204). Collapsing, or compressing, the sheets may refer to adding all of the syndromes in the given sheet modulo 2, as described in the following paragraphs, which may cause vertical pairs of highlighted vertices to be removed. Note that vertical pairs of highlighted vertices (e.g., corresponding highlighted vertices that are temporally separated by one round of syndrome measurements) within a given sheet may be removed via this process of collapsing the sheets in addition to one or more other types of vertically-separated pairs of highlighted vertices (e.g., corresponding highlighted vertices that are temporally separated by more than one round of syndrome measurements that still are within a given sheet).
(121) In some embodiments, the procedure visually demonstrated in
s.sub.X.sup.diff(d.sub.m)=(e.sub.X.sup.(1), {tilde over (e)}.sub.X.sup.(2), . . . , {tilde over (e)}.sub.X.sup.(d.sup.m.sup.))
(122) A syndrome collapse by sheets of size d.sub.s (e.g., sheet 1202), where d.sub.s denotes the number of consecutive syndrome measurement rounds grouped into each sheet, transforms s.sub.X.sup.diff(d.sub.m) as
s.sub.X.sup.diff(d.sub.m)→({tilde over (e)}′.sub.X.sup.(1), {tilde over (e)}′.sub.X.sup.(2), . . . , {tilde over (e)}′.sub.X.sup.(⌈d.sup.m.sup./d.sup.s.sup.⌉)),
where
(123) {tilde over (e)}′.sub.X.sup.(j)={tilde over (e)}.sub.X.sup.((j−1)d.sup.s.sup.+1)+{tilde over (e)}.sub.X.sup.((j−1)d.sup.s.sup.+2)+ . . . +{tilde over (e)}.sub.X.sup.(jd.sup.s.sup.),
with the sum being performed modulo 2. (In embodiments in which j=1, the first term in the above equation becomes e.sub.X.sup.(1).) In some embodiments in which d.sub.m is not a multiple of d.sub.s, there may be ⌈d.sub.m/d.sub.s⌉ sheets, with the last sheet having size d.sub.m−λd.sub.s, where λ=⌊d.sub.m/d.sub.s⌋. A person having ordinary skill in the art should understand that the above steps may also be applied for the transformation of syndrome difference s.sub.Z.sup.diff(d.sub.m).
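The syndrome collapse procedure can be sketched as follows, assuming the syndrome differences are stored as a binary array with one row per round (names and the last-sheet handling are illustrative; a smaller final sheet is kept when the round count is not a multiple of the sheet size):

```python
import numpy as np

def syndrome_collapse(diffs, sheet):
    """diffs: (d_m, n_stab) binary array of syndrome differences.
    Partition the rounds into sheets of `sheet` consecutive rounds (the
    last sheet may be smaller) and XOR each sheet into a single round."""
    diffs = np.asarray(diffs, dtype=np.uint8)
    d_m = diffs.shape[0]
    sheets = [diffs[j:j + sheet] for j in range(0, d_m, sheet)]
    # Summing modulo 2 cancels vertical pairs of highlights within a sheet.
    return np.stack([s.sum(axis=0) % 2 for s in sheets])
```

A vertical pair whose two highlights fall inside one sheet cancels; a pair straddling a sheet boundary survives, which is why pairing the sheet size to the local decoder's effective distance matters.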
(124) In some embodiments, performing a syndrome collapse by sheets may reduce the size of the original matching graph G (e.g., matching graph G prior to the syndrome collapse procedure) since G contained d.sub.m sheets prior to performing the collapse. For ease of description herein, the matching graph after performing syndrome collapse may be referred to as G.sub.sc. In the following paragraphs,
(125)
(126)
(127) Furthermore, by performing syndrome collapse on a given surface code of distance d after the application of the local neural network decoder, with sheets of size on the order of d.sub.eff (wherein d.sub.eff is the effective distance of the local neural network decoder, which depends on the local receptive field and size of the volume that the local neural network decoder was trained on), syndrome collapse may result in a global effective distance which is equal or close to d, according to some embodiments. This results from the assumption that errors contained within each sheet arising from less than or equal to (d.sub.eff−1)/2 faults are removed by the local neural network decoder using the methods and techniques described herein, resulting in a faster overall decoding time.
(128)
(129)
(130) In block 1400, a measurement results volume is decoded via a decoder (e.g., a local neural network decoder), such as those discussed herein with regard to at least
(131) A process of performing syndrome collapse may resemble processes such as those shown in blocks 1404 and 1406 and the visual steps shown in
(132) Removing Vertical Pairs of Highlighted Vertices: Performing Vertical Cleanup
(133)
(134) In some embodiments, vertical pairs of highlighted vertices may also be corrected using vertical cleanup.
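The vertical cleanup step can be sketched per matching-graph column (one column per stabilizer, one row per round): highlights in the same column are cancelled in pairs along the time axis, leaving at most one highlight when the column count is odd. This is an illustrative sketch; keeping the earliest highlight as the odd leftover is an assumed convention, not specified by the patent:

```python
import numpy as np

def vertical_cleanup(highlights):
    """highlights: (d_m, n_stab) binary array of highlighted vertices.
    Remove vertically aligned highlights in pairs; a column with an odd
    number of highlights keeps a single one (here: the earliest round)."""
    highlights = np.asarray(highlights, dtype=np.uint8)
    cleaned = np.zeros_like(highlights)
    parity = highlights.sum(axis=0) % 2      # odd columns keep one highlight
    for col in np.nonzero(parity)[0]:
        first = np.nonzero(highlights[:, col])[0][0]
        cleaned[first, col] = 1              # illustrative placement choice
    return cleaned
```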
(135) The first column of
(136) As seen by the plots in the last column of
(137) As also discussed above with regard to syndrome collapse, performing vertical cleanup without first applying a decoding step (e.g., via a local neural network decoder) may result in logical errors at the global decoding stage. An example of performing vertical cleanup without the use of a local neural network decoder is discussed with regard to
(138)
(139) In subset of matching graph before vertical cleanup 1600 for correcting X errors, horizontal edges (e.g., edge 1604) of subset of matching graph before vertical cleanup 1600 correspond to data qubits of the given d=5 surface code, and vertices (e.g., vertices 1606, 1608, 1610, and 1612) of subset of matching graph before vertical cleanup 1600 correspond to stabilizer measurement outcomes. The squares (e.g., square 1602) correspond to boundary vertices connected by edges of zero weight. In
(140) In subset of matching graph after vertical cleanup 1620 of
(141)
(142) In block 1700, a measurement results volume is decoded via a decoder (e.g., a local neural network decoder), such as those discussed herein with regard to at least
(143) Furthermore, performing vertical cleanup may be applied during a lattice surgery protocol (e.g., a protocol to merge two or more previously separate logical surface code patches together such that operations such as parity measurements may be performed on or across the merged logical surface code patches), according to some embodiments.
(144)
(145) In some embodiments, performing an X⊗X multi-qubit Pauli measurement using two surface code patches may resemble the two-dimensional slice of the matching graph used to correct Z-type errors during a lattice surgery protocol. The qubits of the routing space region may be prepared in an initial state, and a gauge fixing step may be performed, wherein the X-type operators are measured.
(146) In some embodiments, in the first round of the merge of logical patch 1800 and logical patch 1802, the X-type measurements performed in the routing space region may be random, but the product of all such measurements encodes the parity of the logical X⊗X operator being measured. However, measurement errors (marked by an m in
(147) In some embodiments, such as some of the embodiments shown in
(148)
(149)
(150)
(151) Furthermore, for d.sub.m syndrome measurement rounds (with d.sub.m being odd in the example embodiments shown in
(152) In some embodiments, in order to ensure that a given minimum-weight perfect-matching path does not map to a temporal boundary (e.g., top temporal boundary 1806 and bottom temporal boundary 1808 in
(153) In some embodiments, when applying a local neural network decoder, space-like and/or time-like error chains, such as the examples shown in
(154) Temporal Encoding of Lattice Surgery Protocol
(155) A key idea behind temporal encoding of lattice surgery (TELS) is to use fast, noisy lattice surgery operations, with this noise corrected by encoding the sequence of Pauli measurements within a classical error correction code. Thus, more noise can be tolerated in the Pauli measurements, requiring fewer rounds of syndrome measurements during a lattice surgery protocol, wherein logical failures arising during a sequence of Pauli measurements implemented via lattice surgery can be corrected using a classical error correcting code.
(156) This encoding can be thought of as taking place in the time domain, so the encoding does not directly lead to additional qubit overhead costs. There can be a small additive qubit cost when temporal encoding of lattice surgery (TELS) is used for magic state injection, with magic states needing to be stored for slightly longer times.
(157) Temporal Encoding of Lattice Surgery Protocol: Parallelizable Pauli Measurements
(158) In some embodiments, a sequence of Pauli measurements can be grouped in larger sets of parallelizable Pauli measurements. Let P.sub.[t,t+k]:={P.sub.t, P.sub.t+1, . . . , P.sub.t+k} be a sub-sequence of Pauli operators. P.sub.[t,t+k] is a parallelizable set if: all Pauli operators commute; and any Clifford corrections can be commuted to the end of the sub-sequence. For example, a parallelizable set is given when magic states are used to perform a T.sup.⊗k gate. Therefore, given a circuit with τ T-gates and T-depth δ, the Pauli measurement sequence can be split into a sequence of δ parallelizable sets of average size k:=τ/δ.
(159) In time-optimal Pauli based computation, an n-qubit computation of T-depth δ can be reduced to runtime O(n+δ). However, the space-time volume is not compressed by using the time-optimal approach, so reducing the algorithm runtime to 10% of a sequential Pauli-based computation (seqPBC) runtime would require at least a 10× increase in qubit cost.
(160) Temporal Encoding of Lattice Surgery Protocol: Encoding of Pauli Measurements
(161) In some embodiments, temporal encoding of lattice surgery takes advantage of parallelizable Pauli sets to speed up lattice surgery while maintaining a given level of certainty (e.g., a low error rate). However, unlike other approaches, it does not incur a multiplicative qubit overhead cost, and thus reduces an overall space-time cost of performing a quantum algorithm.
(162) Due to the properties of a parallelizable Pauli set, all Pauli operators within the set can be measured in any order. Furthermore, any set S that generates the group ⟨P.sub.t, P.sub.t+1, . . . , P.sub.t+k⟩ can be measured. If the set S is overcomplete, there will be some linear dependencies between the measurements that can be used to detect (and correct) for any errors in the lattice surgery measurements. For example, consider the simplest parallelizable set {P.sub.1, P.sub.2} and let d.sub.m be the required lattice surgery time, so performing both measurements takes 2(d.sub.m+1) error correction cycles. Instead, {P.sub.1, P.sub.2, P.sub.1P.sub.2} could be measured. If the third measurement outcome (e.g., of P.sub.1P.sub.2) is not equal to the product of the first two measurements (e.g., the product of P.sub.1 and P.sub.2), then it can be determined that something has gone wrong, and the measurements can be repeated to gain more certainty of the true values. By measuring the overcomplete set {P.sub.1, P.sub.2, P.sub.1P.sub.2}, an extra lattice surgery measurement has been performed. However, this extra measurement (resulting in an overcomplete set) allows a single lattice surgery failure to be tolerated without causing a logical error. This is because the single lattice surgery failure can be detected, and when re-measuring the original set {P.sub.1, P.sub.2}, a second lattice surgery failure would need to occur to produce a wrong measurement outcome. This allows each lattice surgery measurement to use fewer rounds of syndrome measurement, d′.sub.m with d′.sub.m<<d.sub.m, while still achieving a same overall success probability. Also, since the overall time in non-temporally encoded lattice surgery is 2(d.sub.m+1), if the reduced number of rounds is such that 3d′.sub.m<2d.sub.m, then the computation has been sped up.
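The parity check behind this simple overcomplete set can be sketched in Python (an illustrative toy model only, not part of the disclosed protocol; outcomes are recorded as bits, 0 for +1 and 1 for −1, and the function names are hypothetical):

```python
import random

def measure_overcomplete(s1, s2, error_index=None):
    """Measure {P1, P2, P1*P2}; outcomes recorded as bits (0 for +1, 1 for -1).

    s1, s2 are the ideal outcomes of P1 and P2; the ideal outcome of P1*P2
    is their parity. error_index optionally flips one lattice surgery outcome.
    """
    outcomes = [s1, s2, (s1 + s2) % 2]
    if error_index is not None:
        outcomes[error_index] ^= 1  # a single time-like lattice surgery error
    return outcomes

def consistent(outcomes):
    # The product P1*P2*(P1P2) is the identity, so the bits must sum to 0 mod 2.
    return sum(outcomes) % 2 == 0

ideal = measure_overcomplete(1, 0)
assert consistent(ideal)            # no error: parity check passes
faulty = measure_overcomplete(1, 0, error_index=random.randrange(3))
assert not consistent(faulty)       # any single error is detected
```

Any two errors, by contrast, restore the parity and go undetected, which is why the overcomplete set only detects (rather than corrects) a single failure.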
(163) In general, given a parallelizable Pauli set
P={P.sub.t,P.sub.t+1, . . . ,P.sub.t+k}
Pauli operators can be defined as
(164) Q[x]:=Π.sub.jP.sub.t+j.sup.x.sub.j,
where x is a length k binary column vector. Given a set S that generates all the required Pauli operators, such that ⟨S⟩=⟨P⟩, the elements of the set can be written as
S={Q[x.sup.1],Q[x.sup.2], . . . ,Q[x.sup.n]}
with superscripts denoting different vectors. Since this is a generating set, the vectors {x.sup.1, x.sup.2, . . . , x.sup.n} span the relevant space. Furthermore, a matrix G can be defined with these vectors as columns, and such a matrix specifies the temporal encoding of lattice surgery (TELS) protocol. In the simple k=2 example, with S={P.sub.1, P.sub.2, P.sub.1P.sub.2}, the matrix is
(165)
G = | 1 0 1 |
    | 0 1 1 |
(166) Notice that the rows of the above matrix generate the code words of the [3, 2, 2] classical code. In general, G can be considered as the generator matrix for the code words of an [n, k, d] classical code. This is referred to herein as the measurement code for the temporal encoding of lattice surgery (TELS) protocol. Note that k is the number of (unencoded) Pauli operators in the generating set. The G matrix is taken to be full-rank, so that k equals the number of rows in G. The number n represents how many Pauli measurements are physically performed in the encoded scheme and corresponds to the number of columns in G. The distance d is the weight of the lowest-weight non-zero vector in the row-span of G.
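As an illustration of the [3, 2, 2] measurement code above, the row-span and distance of G can be checked by brute force (a minimal Python sketch; the helper name row_span is illustrative, not part of the disclosure):

```python
from itertools import product

# Generator matrix for the k = 2 example: rows span the [3, 2, 2] code.
G = [[1, 0, 1],
     [0, 1, 1]]

def row_span(G):
    """All binary combinations of the rows of G (the code words)."""
    k, n = len(G), len(G[0])
    words = []
    for coeffs in product([0, 1], repeat=k):
        word = [sum(c * G[i][j] for i, c in enumerate(coeffs)) % 2 for j in range(n)]
        words.append(tuple(word))
    return words

words = row_span(G)
d = min(sum(w) for w in words if any(w))  # lowest non-zero weight in the row-span
assert (len(G[0]), len(G), d) == (3, 2, 2)  # an [n, k, d] = [3, 2, 2] code
```

The 2^k = 4 code words here are the self-consistent outcome patterns of the three lattice surgery measurements.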
(167) In order to show that the code distance d does in fact capture the ability of TELS to correct errors, the redundancy in lattice surgery measurements can be formalized as follows. For any length n binary vector u=(u.sub.1, u.sub.2, . . . , u.sub.n),
(168) Q[x.sup.1].sup.u.sup.1Q[x.sup.2].sup.u.sup.2 . . . Q[x.sup.n].sup.u.sup.n=Q[Σ.sub.ju.sub.jx.sup.j mod 2].
(169) Since the matrix G is full-rank and has more columns than rows, there will exist non-zero u such that Σ.sub.ju.sub.jx.sup.j=0 (mod 2). For these u it is true that
(170) Q[x.sup.1].sup.u.sup.1Q[x.sup.2].sup.u.sup.2 . . . Q[x.sup.n].sup.u.sup.n=Q[0]=I.
(171) Therefore, these u vectors describe redundancy in the measurements. The condition Σ.sub.ju.sub.jx.sup.j=0 can be rewritten compactly as Gu=0. Following the convention in coding theory, this set of u is called the dual of G and denoted as:
G.sup.⊥:={u: Gu=0 (mod 2)}.
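For the simple k=2 measurement code, the dual set can be computed by brute force over all length-n binary vectors (an illustrative sketch; the function name dual is hypothetical):

```python
from itertools import product

G = [[1, 0, 1],
     [0, 1, 1]]  # k = 2 measurement code from the example above

def dual(G):
    """The dual of G: {u such that G u = 0 (mod 2)}, by brute force over GF(2)^n."""
    k, n = len(G), len(G[0])
    return [u for u in product([0, 1], repeat=n)
            if all(sum(G[i][j] * u[j] for j in range(n)) % 2 == 0 for i in range(k))]

# The only non-trivial redundancy is u = (1, 1, 1): the product of all three
# measurements P1, P2, and P1P2 is the identity.
assert dual(G) == [(0, 0, 0), (1, 1, 1)]
```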
(172) Next, consider that this redundancy can be used to detect time-like lattice surgery errors. For example, let m={m.sub.1, m.sub.2, . . . , m.sub.n} be a binary vector denoting the outcomes of the lattice surgery Pauli measurements in the set S. That is, if a measurement of Q[x.sup.j] gives outcome +1, set m.sub.j=0, and when the measurement of Q[x.sup.j] gives −1, set m.sub.j=1. Given a u∈G.sup.⊥, the corresponding product of Pauli operators is the identity, so when there are no time-like lattice surgery errors we have
(173) u·m=Σ.sub.ju.sub.jm.sub.j=0 (mod 2).
(174) Conversely, if it is observed that
(175) u·m=1 (mod 2) for some u∈G.sup.⊥,
then it is known that a time-like lattice surgery error has occurred. For example, consider m=s+e, where s is the ideal measurement outcome and e is the measurement error. The ideal measurement outcomes are always self-consistent, and so they always satisfy u·s=0 for all u∈G.sup.⊥. Therefore, it can be seen that an error e is undetected if and only if u·e=0 for all u∈G.sup.⊥. This is equivalent to undetected errors e being in the row-span of G (since the dual of the dual is always the original space). Recall, the distance d denotes the weight of the lowest-weight non-zero vector in the row-span of G. Therefore, d also denotes the smallest number of time-like lattice surgery errors needed for them to be undetected by TELS. Consequently, if p is the probability of a single time-like error, TELS error-detection will fail with probability O(p.sup.d).
(176) As an example, take the Pauli set {P.sub.t, P.sub.t+1, . . . , P.sub.t+k} and measure each of these observables separately, and then measure the product of them so that the measurement code has the generator matrix
(177) G=[I.sub.k|1],
which is an identity matrix padded with an extra column that is an all 1 vector. Therefore, this corresponds to a [k+1, k, 2] classical code that detects a single error. Concatenating such a code m times gives a code with parameters [(k+1).sup.m, k.sup.m, 2.sup.m]. Another example that can be considered is an example wherein a simple [8, 4, 4] extended Hamming code is used as the measurement code with generator matrix
(178)
G = | 0 0 0 0 1 1 1 1 |
    | 1 1 1 1 0 0 0 0 |
    | 1 1 0 0 1 1 0 0 |
    | 1 0 1 0 1 0 1 0 |
This corresponds with replacing {P.sub.1, P.sub.2, P.sub.3, P.sub.4} with S containing the 8 operators
S={P.sub.2P.sub.3P.sub.4,P.sub.2P.sub.3,P.sub.2P.sub.4,P.sub.2,P.sub.1P.sub.3P.sub.4,P.sub.1P.sub.3,P.sub.1P.sub.4,P.sub.1}.
(179) Because the generator matrix has distance 4, this scheme will detect up to 3 errors. This Hamming code is the m=3 member of a family of [2.sup.m, 2.sup.m−m−1, 4] extended Hamming codes. There are several viable strategies to handle a detected error.
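The [8, 4, 4] property of this generator matrix can be checked by enumerating its row-span (an illustrative sketch; the matrix rows below are ordered P.sub.1 through P.sub.4 to match the set S above, and the helper name distance is not part of the disclosure):

```python
from itertools import product

# Generator matrix of the [8, 4, 4] extended Hamming code, with columns
# matching the 8 operators of S in the order given above (rows = P1..P4).
G = [[0, 0, 0, 0, 1, 1, 1, 1],   # P1 appears in the last four products
     [1, 1, 1, 1, 0, 0, 0, 0],   # P2 appears in the first four
     [1, 1, 0, 0, 1, 1, 0, 0],   # P3
     [1, 0, 1, 0, 1, 0, 1, 0]]   # P4

def distance(G):
    """Lowest non-zero weight over the row-span of G (brute force over 2^k sums)."""
    k, n = len(G), len(G[0])
    best = n + 1
    for coeffs in product([0, 1], repeat=k):
        if not any(coeffs):
            continue
        w = sum(sum(c * G[i][j] for i, c in enumerate(coeffs)) % 2 for j in range(n))
        best = min(best, w)
    return best

d = distance(G)
assert d == 4   # the scheme detects up to d - 1 = 3 lattice surgery errors
```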
(180) In some embodiments, the following detect/remeasure strategy is used: if a distance d measurement code is used with lattice surgery performed for time d.sub.m, then whenever an error is detected the Pauli operators are remeasured, but this time using the original Pauli set P={P.sub.t, P.sub.t+1, . . . , P.sub.t+k} instead of using the overcomplete set S. For the remeasure round, the lattice surgery is performed using an amount of time ⌈qd.sub.m⌉, where q is some constant scaling factor. The expected runtime to execute the protocol is then
T=n(d.sub.m+1)+p.sub.dk⌈qd.sub.m⌉,
where p.sub.d is the probability of detecting an error.
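Assuming the remeasure step takes ⌈q·d.sub.m⌉ cycles per Pauli operator as described above, the expected runtime can be sketched as follows (illustrative only; the function name and the example numbers are hypothetical):

```python
import math

def expected_runtime(n, k, d_m, p_d, q=1.0):
    """Expected error-correction cycles of the detect/remeasure strategy:
    n encoded measurements of (d_m + 1) cycles each, plus, with probability
    p_d, a remeasurement of the k original Paulis for ceil(q * d_m) cycles each.
    """
    return n * (d_m + 1) + p_d * k * math.ceil(q * d_m)

# [3, 2, 2] example: n = 3 encoded measurements for k = 2 logical Paulis.
t = expected_runtime(n=3, k=2, d_m=5, p_d=0.01, q=2.0)
assert t == 3 * 6 + 0.01 * 2 * 10
```

When p.sub.d is small, the remeasure term is a minor correction, so the dominant cost is the n encoded measurements at the reduced round count.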
(181) Embodiments of the present disclosure may be described in view of the following clauses:
(182) Clause 1. A method to train, via a supervised learning technique, a local decoder to be used to perform error correction for quantum surface codes, the method comprising: determining a three-dimensional, fully-convolutional neural network architecture for the local decoder such that the three-dimensional, fully-convolutional neural network architecture accounts for circuit-level noise; providing a training data set of simulated measurement results that form a simulated measurement results volume, bounded by first and second dimensions corresponding to dimensions of a simulated quantum surface code and a third dimension corresponding to a number of rounds included in a plurality of rounds of syndrome measurements, wherein the training data set further comprises: syndrome difference information for the number of rounds; location information about qubit placement within the first, second, and third dimensions of the simulated measurement results volume; and temporal boundaries information about the number of rounds; determining predictions of locations of alleged errors on data qubits within the training data set; comparing the predictions of locations of alleged errors to ground truth information, wherein the ground truth information comprises actual locations of errors on data qubits within the training data set; and providing the trained local decoder.
(183) Clause 2. The method of clause 1, wherein the circuit-level noise is based, at least in part, on one or more of the following: one or more errors pertaining to single qubit gates acting on given qubits of a given quantum surface code of the quantum surface codes; one or more errors pertaining to multi-qubit gates between given qubits of the given quantum surface code; one or more measurement errors pertaining to a given plurality of rounds of syndrome measurements of the given quantum surface code; one or more errors pertaining to ancilla qubit reset timesteps of the given plurality of rounds of syndrome measurements of the given quantum surface code; or one or more errors pertaining to idling of given qubits of the given quantum surface code.
(184) Clause 3. The method of clause 1, further comprising determining a homological equivalence convention to be applied to homologically equivalent errors within the actual locations of the errors on the data qubits within the training data set, wherein the homological equivalence convention comprises one or more of: one or more weight-reduction transformations; and one or more equivalence transformations.
(185) Clause 4. The method of clause 3, wherein the comparing the predictions of the locations of alleged errors to the ground truth information comprises applying the homological equivalence convention to the ground truth information.
(186) Clause 5. The method of clause 1, wherein: the determining the three-dimensional, fully-convolutional neural network architecture for the local decoder is based, at least in part, on respective sizes of the first, second, and third dimensions of the training data set of simulated measurement results; and one or more respective sizes of the three-dimensional fully-convolutional neural network architecture are smaller than one or more of the corresponding respective sizes of the first, second, and third dimensions of the training data set of simulated measurement results.
(187) Clause 6. The method of clause 1, further comprising generating the training data set, wherein the generating comprises: performing one or more Monte Carlo simulations to obtain the simulated measurement results, wherein the one or more Monte Carlo simulations are based, at least in part, on a circuit-level noise model.
(188) Clause 7. The method of clause 1, further comprising: performing error correction for other measurement results of a given quantum surface code of the quantum surface codes, wherein said performing the error correction comprises: decoding the other measurement results, via the trained local decoder, wherein the trained local decoder incrementally decodes a first set of detected syndrome differences of a total number of syndrome differences in the other measurement results.
(189) Clause 8. The method of clause 7, wherein the performing the error correction for the other measurement results further comprises: decoding the decoded other measurement results, via a global decoder, wherein the global decoder decodes a second set of detected syndrome differences in the decoded other measurement results subsequent to the decoding via the trained local decoder.
(190) Clause 9. The method of clause 7, wherein at least one dimension of the given quantum surface code and at least one dimension of the first and second dimensions of the simulated quantum surface code are different in size.
(191) Clause 10. The method of clause 7, wherein the third dimension of the simulated measurement results volume and a number of rounds of syndrome measurements in the other measurement results of the given quantum surface code are different.
(192) Clause 11. A system, comprising: one or more computing devices configured to perform training for a local decoder that is configured to account for circuit-level noise, wherein to perform the training, the one or more computing devices are further configured to: provide a training data set of simulated measurement results that form a simulated measurement results volume, bounded by first and second dimensions corresponding to dimensions of a simulated quantum surface code and a third dimension corresponding to a number of rounds included in a plurality of rounds of syndrome measurements, wherein the training data set further comprises: syndrome information for the number of rounds; location information about qubit placement within the first, second, and third dimensions of the simulated measurement results volume; and temporal boundaries information about the number of rounds; determine predictions of locations of alleged errors on data qubits within the training data set; compare the predictions of locations of alleged errors to ground truth information, wherein the ground truth information comprises actual locations of errors on data qubits within the training data set; and provide the trained local decoder for use in performing error correction for quantum surface codes.
(193) Clause 12. The system of clause 11, wherein the circuit-level noise is based, at least in part, on one or more of the following: one or more errors pertaining to single qubit gates acting on given qubits of a given quantum surface code of the quantum surface codes; one or more errors pertaining to multi-qubit gates between given qubits of the given quantum surface code; one or more measurement errors pertaining to a given plurality of rounds of syndrome measurements of the given quantum surface code; one or more errors pertaining to ancilla qubit reset timesteps of the given plurality of rounds of syndrome measurements of the given quantum surface code; or one or more errors pertaining to idling of given qubits of the given quantum surface code.
(194) Clause 13. The system of clause 11, further comprising: one or more quantum hardware devices configured to implement a given quantum surface code, wherein at least one of the one or more computing devices or one or more additional computing devices are configured to: perform a first error correction step for other measurement results of the given quantum surface code of the quantum surface codes, wherein to perform the error correction for the other measurement results, the at least one of the one or more computing devices or one or more additional computing devices are further configured to decode the other measurement results, via the trained local decoder.
(195) Clause 14. The system of clause 11, wherein the syndrome information for the number of rounds is configured such that respective syndrome densities for respective rounds of the plurality of rounds of syndrome measurements are not correlated with the number of rounds.
(196) Clause 15. The system of clause 11, wherein a given qubit placement at a boundary of the simulated measurement results volume may be determined based, at least in part, on the location information about qubit placement within the first, second, and third dimensions of the simulated measurement results volume.
(197) Clause 16. One or more non-transitory, computer-readable media storing program instructions that when executed on or across one or more processors, cause the one or more processors to: train, via a supervised learning technique, a local decoder to be used to perform error correction for quantum surface codes, wherein to train the local decoder, the program instructions further cause the one or more processors to: determine a three-dimensional, fully-convolutional neural network architecture for the local decoder such that the three-dimensional, fully-convolutional neural network architecture accounts for circuit-level noise; provide a training data set of simulated measurement results that form a simulated measurement results volume, bounded by first and second dimensions corresponding to dimensions of a simulated quantum surface code and a third dimension corresponding to a number of rounds included in a plurality of rounds of syndrome measurements, wherein the training data set further comprises: syndrome information for the number of rounds; location information about qubit placement within the first, second, and third dimensions of the simulated measurement results volume; and temporal boundaries information about the number of rounds; determine predictions of locations of alleged errors on data qubits within the training data set; compare the predictions of locations of alleged errors to ground truth information, wherein the ground truth information comprises actual locations of errors on data qubits within the training data set; and provide the trained local decoder.
(198) Clause 17. The non-transitory, computer-readable medium of clause 16, wherein the circuit-level noise is based, at least in part, on one or more of the following: one or more errors pertaining to single qubit gates acting on given qubits of a given quantum surface code of the quantum surface codes; one or more errors pertaining to multi-qubit gates between given qubits of the given quantum surface code; one or more measurement errors pertaining to a given plurality of rounds of syndrome measurements of the given quantum surface code; one or more errors pertaining to ancilla qubit reset timesteps of the given plurality of rounds of syndrome measurements of the given quantum surface code; or one or more errors pertaining to idling of given qubits of the given quantum surface code.
(199) Clause 18. The non-transitory, computer-readable medium of clause 16, wherein the program instructions further cause the one or more processors to: determine a homological equivalence convention to be applied to homologically equivalent errors within the actual locations of the errors on the data qubits within the training data set, wherein the homological equivalence convention comprises at least one or more of: one or more weight-reduction transformations; or one or more equivalence transformations; and apply the homological equivalence convention to the ground truth information.
(200) Clause 19. The non-transitory, computer-readable medium of clause 16, wherein the program instructions further cause the one or more processors to: generate the training data set, wherein to generate the training data set, the program instructions further cause the one or more processors to: perform one or more Monte Carlo simulations to obtain the simulated measurement results, wherein the one or more Monte Carlo simulations are based, at least in part, on a circuit-level noise model.
(201) Clause 20. The non-transitory, computer-readable medium of clause 16, wherein the syndrome information comprises syndrome differences between respective sets of consecutive rounds of the plurality of rounds of syndrome measurements.
(202) Clause 21. A system, comprising: one or more quantum hardware devices configured to implement a quantum surface code; and one or more computing devices configured to perform error correction for measurement results of a plurality of rounds of syndrome measurements of the quantum surface code, wherein: the measurement results form a measurement results volume bounded by first and second dimensions corresponding to dimensions of the quantum surface code and a third dimension corresponding to a number of rounds included in the plurality of rounds of syndrome measurements; and to perform the error correction, the one or more computing devices are further configured to: decode, via a first decoder, a subset of detected syndrome differences of a total number of syndrome differences in the measurement results volume; treat vertical pairs of highlighted vertices in the decoded measurement results volume, wherein to treat the vertical pairs of highlighted vertices causes a syndrome density in the decoded measurement results volume to be reduced; and provide a resulting measurement results volume to a global decoder.
(203) Clause 22. The system of clause 21, wherein to treat the vertical pairs of highlighted vertices, the one or more computing devices are further configured to: perform syndrome collapse, wherein to perform the syndrome collapse, the one or more computing devices are further configured to: partition the decoded measurement results volume into partitions of the decoded measurement results volume in the third dimension of the decoded measurement results volume; and add syndrome differences of respective syndrome densities in each partition modulo 2 to collapse the respective partitions, wherein said collapse results in a removal of the vertical pairs of highlighted vertices.
(204) Clause 23. The system of clause 21, wherein to treat the vertical pairs of highlighted vertices, the one or more computing devices are further configured to: perform vertical cleanup, wherein to perform the vertical cleanup, the one or more computing devices are further configured to: identify vertical pairs of highlighted vertices in the decoded measurement results volume; and remove the vertical pairs of highlighted vertices.
Clause 24. The system of clause 23, wherein: the quantum surface code comprises logical patches of surface code, merged via lattice surgery; a first half of the syndrome density in the decoded measurement results volume, comprising an initialization round of the plurality of rounds of syndrome measurements, has a larger density of syndromes with respect to a second half of the syndrome density, comprising a last round of the plurality of rounds of syndrome measurements; and a temporal direction pertaining to the remove the vertical pairs of highlighted vertices corresponds to moving from the last round of the plurality of rounds of syndrome measurements towards the initialization round of the plurality of rounds of syndrome measurements.
(205) Clause 25. The system of clause 23, wherein: the quantum surface code comprises logical patches of surface code, merged via lattice surgery; a first half of the syndrome density in the decoded measurement results volume, comprising an initialization round of the plurality of rounds of syndrome measurements, has a smaller density of syndromes with respect to a second half of the syndrome density, comprising a last round of the plurality of rounds of syndrome measurements; and a temporal direction pertaining to the remove the vertical pairs of highlighted vertices corresponds to moving from the initialization round of the plurality of rounds of syndrome measurements towards the last round of the plurality of rounds of syndrome measurements.
(206) Clause 26. The system of clause 23, wherein: the quantum surface code comprises logical patches of surface code, merged via lattice surgery; a first half of the syndrome density in the decoded measurement results volume, comprising an initialization round of the plurality of rounds of syndrome measurements, has a same density of syndromes with respect to a second half of the syndrome density, comprising a last round of the plurality of rounds of syndrome measurements; and a temporal direction pertaining to the remove the vertical pairs of highlighted vertices corresponds to moving from the initialization round of the plurality of rounds of syndrome measurements towards the last round of the plurality of rounds of syndrome measurements or to moving from the last round towards the initialization round.
(207) Clause 27. The system of clause 21, wherein: the first decoder is a local neural network decoder; and the local neural network decoder has been trained, via a supervised learning technique, to account for circuit-level noise within the measurement results of the plurality of rounds of syndrome measurements of the quantum surface code.
(208) Clause 28. The system of clause 27, wherein the circuit-level noise is based, at least in part, on one or more of the following: one or more errors pertaining to single qubit gates acting on given qubits of the quantum surface code; one or more errors pertaining to multi-qubit gates between given qubits of the quantum surface code; one or more measurement errors pertaining to the plurality of rounds of syndrome measurements; one or more errors pertaining to ancilla qubit reset timesteps of the plurality of rounds of syndrome measurements; or one or more errors pertaining to idling of given qubits of the quantum surface code.
(209) Clause 29. The system of clause 21, wherein the vertical pairs of highlighted vertices in the decoded measurement results volume are generated based, at least in part, on the decoding, via the first decoder, the subset of detected syndrome differences.
(210) Clause 30. A method, comprising: performing error correction for measurement results of a plurality of rounds of syndrome measurements of a quantum surface code, wherein: the measurement results form a measurement results volume bounded by first and second dimensions corresponding to dimensions of the quantum surface code and a third dimension corresponding to a number of rounds included in the plurality of rounds of syndrome measurements; and said performing the error correction comprises: decoding, via a first decoder, a subset of detected syndrome differences of a total number of syndrome differences in the measurement results volume; treating vertical pairs of highlighted vertices in the decoded measurement results volume, wherein said treating the vertical pairs of highlighted vertices causes a syndrome density in the decoded measurement results volume to be reduced; and providing a resulting measurement results volume to a global decoder.
(211) Clause 31. The method of clause 30, wherein the treating the vertical pairs of highlighted vertices comprises: performing syndrome collapse, wherein the performing the syndrome collapse comprises: partitioning the decoded measurement results volume into partitions of the decoded measurement results volume in the third dimension of the decoded measurement results volume; and adding syndrome differences of respective syndrome densities in each partition modulo 2 to collapse the respective partitions, wherein said collapse results in a removal of the vertical pairs of highlighted vertices.
(212) Clause 32. The method of clause 31, wherein said performing the error correction further comprises: decoding, via the global decoder, remaining syndrome differences in the collapsed partitions; and providing error-corrected results of the measurement results of the plurality of rounds of syndrome measurements of the quantum surface code.
(213) Clause 33. The method of clause 30, wherein the treating the vertical pairs of highlighted vertices comprises: performing vertical cleanup, wherein the performing vertical cleanup comprises: identifying vertical pairs of highlighted vertices in the decoded measurement results volume; and removing the vertical pairs of highlighted vertices.
(214) Clause 34. The method of clause 33, wherein a temporal direction pertaining to the treating the vertical pairs of highlighted vertices is determined based, at least in part, on a distribution of syndrome differences within the syndrome density in the decoded measurement results volume.
(215) Clause 35. The method of clause 30, wherein said performing the error correction further comprises: decoding, via the global decoder, remaining syndrome differences in the resulting measurement results volume; and providing error-corrected results of the measurement results of the plurality of rounds of syndrome measurements of the quantum surface code.
(216) Clause 36. The method of clause 30, wherein the vertical pairs of highlighted vertices in the decoded measurement results volume are generated based, at least in part, on the decoding, via the first decoder, the subset of detected syndrome differences.
(217) Clause 37. The method of clause 30, wherein: the first decoder is a local neural network decoder; and the local neural network decoder has been trained, via a supervised learning technique, to account for circuit-level noise within the measurement results of the plurality of rounds of syndrome measurements of the quantum surface code.
(218) Clause 38. A non-transitory, computer-readable, medium storing program instructions that, when executed on or across one or more processors, cause the one or more processors to: perform error correction for measurement results of a plurality of rounds of syndrome measurements of a quantum surface code, wherein: the measurement results form a measurement results volume bounded by first and second dimensions corresponding to dimensions of the quantum surface code and a third dimension corresponding to a number of rounds included in the plurality of rounds of syndrome measurements; and to perform the error correction, the program instructions further cause the one or more processors to: decode, via a decoder, a subset of detected syndrome differences of a total number of syndrome differences in the measurement results volume; treat vertical pairs of highlighted vertices in the decoded measurement results volume, wherein to treat the vertical pairs of highlighted vertices causes a syndrome density in the decoded measurement results volume to be reduced; and provide a resulting measurement results volume to a global decoder.
(219) Clause 39. The non-transitory, computer-readable medium of clause 38, wherein to treat the vertical pairs of highlighted vertices, the program instructions further cause the one or more processors to: perform syndrome collapse, wherein to perform the syndrome collapse, the program instructions further cause the one or more processors to: partition the decoded measurement results volume into partitions of the decoded measurement results volume in the third dimension of the decoded measurement results volume; and add syndrome differences of respective syndrome densities in each partition modulo 2 to collapse the respective partitions, wherein said collapse results in a removal of the vertical pairs of highlighted vertices.
(220) Clause 40. The non-transitory, computer-readable medium of clause 38, wherein to treat the vertical pairs of highlighted vertices, the program instructions further cause the one or more processors to: perform vertical cleanup, wherein to perform the vertical cleanup, the program instructions further cause the one or more processors to: identify vertical pairs of highlighted vertices in the decoded measurement results volume; and remove the vertical pairs of highlighted vertices.
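The vertical cleanup of clause 40 admits a similarly short sketch. Again, this is an illustrative assumption rather than the patented implementation: it treats a vertical pair as the same spatial vertex highlighted in two adjacent rounds, scans each vertical column of the volume, and removes both members of each such pair.

```python
import numpy as np

def vertical_cleanup(volume: np.ndarray) -> np.ndarray:
    """Remove vertical pairs of highlighted vertices from a volume.

    volume: boolean array of shape (rounds, d1, d2). For each vertical
    column (fixed spatial coordinate), a pair of highlighted vertices in
    adjacent rounds is identified and both members are removed, which
    lowers the syndrome density prior to global decoding.
    """
    cleaned = volume.copy()
    for r in range(cleaned.shape[0] - 1):
        # A vertical pair: the same vertex highlighted in rounds r and r+1.
        pair = cleaned[r] & cleaned[r + 1]
        cleaned[r] ^= pair
        cleaned[r + 1] ^= pair
    return cleaned
```

Note the greedy sweep: with three consecutive highlights in one column, the first two are removed as a pair and the third survives for the global decoder to match.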
Illustrative Computer System
(223) In various embodiments, computing device 2000 may be a uniprocessor system including one processor 2010, or a multiprocessor system including several processors 2010 (e.g., two, four, eight, or another suitable number). Processors 2010 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 2010 may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of processors 2010 may commonly, but not necessarily, implement the same ISA. In some implementations, graphics processing units (GPUs) may be used instead of, or in addition to, conventional processors.
(224) System memory 2020 may be configured to store instructions and data accessible by processor(s) 2010. In at least some embodiments, the system memory 2020 may comprise both volatile and non-volatile portions; in other embodiments, only volatile memory may be used. In various embodiments, the volatile portion of system memory 2020 may be implemented using any suitable memory technology, such as static random-access memory (SRAM), synchronous dynamic RAM (SDRAM), or any other type of memory. For the non-volatile portion of system memory (which may comprise one or more NVDIMMs, for example), in some embodiments flash-based memory devices, including NAND-flash devices, may be used. In at least some embodiments, the non-volatile portion of the system memory may include a power source, such as a supercapacitor or other power storage device (e.g., a battery). In various embodiments, memristor-based resistive random-access memory (ReRAM), three-dimensional NAND technologies, ferroelectric RAM, magnetoresistive RAM (MRAM), or any of various types of phase change memory (PCM) may be used at least for the non-volatile portion of system memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above, are shown stored within system memory 2020 as code 2025 and data 2026.
(225) In some embodiments, I/O interface 2030 may be configured to coordinate I/O traffic between processor 2010, system memory 2020, and any peripheral devices in the device, including network interface 2040 or other peripheral interfaces such as various types of persistent and/or volatile storage devices. In some embodiments, I/O interface 2030 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 2020) into a format suitable for use by another component (e.g., processor 2010). In some embodiments, I/O interface 2030 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 2030 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface 2030, such as an interface to system memory 2020, may be incorporated directly into processor 2010.
(226) Network interface 2040 may be configured to allow data to be exchanged between computing device 2000 and other devices 2060 attached to a network or networks 2050, such as other computer systems or devices. In various embodiments, network interface 2040 may support communication via any suitable wired or wireless general data networks, such as types of Ethernet network, for example. Additionally, network interface 2040 may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
(227) In some embodiments, system memory 2020 may represent one embodiment of a computer-accessible medium configured to store at least a subset of program instructions and data used for implementing the methods and apparatus discussed in the context of the preceding figures.
CONCLUSION
(228) Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g., SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc., as well as transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link.
(229) The various methods as illustrated in the Figures above and described herein represent exemplary embodiments of methods. The methods may be implemented in software, hardware, or a combination thereof. The order of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc.
(230) It will also be understood that, although the terms first, second, etc., may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present invention. The first contact and the second contact are both contacts, but they are not the same contact.
(231) Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. It is intended to embrace all such modifications and changes and, accordingly, the above description is to be regarded in an illustrative rather than a restrictive sense.