Decoding Quantum Low Density Parity Check Codes
Summary
TL;DR: Joschka Roffe's presentation provides a comprehensive overview of quantum LDPC codes and their significance in quantum computation. LDPC codes can offer a much more efficient alternative to surface codes, requiring significantly fewer physical qubits. Roffe discusses the structure of LDPC codes, how they can be derived from classical codes, and presents the challenges faced in decoding them. The talk covers the theoretical underpinnings of classical error correction, the complexities of quantum error correction, and the vital role of stabilizers in identifying and correcting errors. Additionally, Roffe outlines the existing techniques for decoding, such as belief propagation, and introduces concepts like circuit-level noise and trapping sets that complicate decoding efforts. The necessity for ongoing research into more efficient decoding algorithms is emphasized as an open problem in the field, highlighting the importance of further innovation in quantum error correction methods.
Takeaways
- 🧩 Quantum LDPC codes can significantly reduce qubit requirements in quantum computing.
- 💡 They can be decoded using techniques based on classical error correction methods.
- 🔍 Trapping sets can complicate decoding by leading to failures in error correction protocols.
- 📈 Belief propagation, although efficient, encounters challenges with complex noise models.
- 🔗 OSD adds complexity but is crucial for improving decoding accuracy in quantum systems.
- ⚙️ Circuit-level noise impacts error propagation and requires detailed decoding graphs.
- 💻 Ongoing research is vital for developing more efficient decoders for practical quantum computers.
Timeline
- 00:00:00 - 00:05:00
The introduction to the tutorial on decoding quantum LDPC codes emphasizes the efficiency benefits over traditional surface codes, potentially reducing the number of physical qubits needed per logical qubit significantly. Various quantum computing methods that can utilize such codes are discussed, including silicon spin qubits and ion traps.
- 00:05:00 - 00:10:00
A brief overview of quantum LDPC codes is presented, explaining their error-correcting capabilities and differences from surface codes. The speaker will outline a plan to delve into both classical and quantum error correction concepts, and explore the decoding challenges for quantum LDPC codes using a belief propagation approach.
- 00:10:00 - 00:15:00
The section covers classical binary error correction codes and how they can be visually represented as Tanner graphs. The relationship between codewords and a parity check matrix is outlined, showcasing how errors can be detected using syndrome measurements through the graph-based representation.
- 00:15:00 - 00:20:00
Shifting focus to quantum error correction, the discourse highlights additional challenges, specifically the need to handle phase flip errors along with bit flip errors. The concept of stabilizer codes is introduced, and the mechanics of syndrome measurement and recovery operations are explained to demonstrate the quantum decoding cycle.
- 00:20:00 - 00:25:00
The discussion elaborates on the CSS stabilizer codes, highlighting their ability to separately address bit flip and phase flip errors, using parity check matrices to define error recovery operations. The talk also introduces examples such as the Steane code and describes how this class of codes aids in the design of quantum error correction protocols.
- 00:25:00 - 00:30:00
The talk then transitions to describing the decoding problem in the simplest error model, analyzing X and Z error syndromes separately to derive recovery operations, emphasizing the importance of a rapid and efficient decoding process for practical quantum computing applications.
- 00:30:00 - 00:35:00
The decoding methods are evaluated, contrasting belief propagation with traditional surface code decoders, like minimum weight perfect matching. The benefits of belief propagation as a general decoding algorithm applicable to any sparse graph are outlined, paving the way for the following detailed technical analyses.
- 00:35:00 - 00:40:00
A deeper exploration of belief propagation is introduced, detailing its operation through message-passing on Tanner graphs, which allows for efficient probability distributions to be computed in decoding scenarios, enhancing the understanding of the error estimation process in quantum LDPC codes.
- 00:40:00 - 00:49:52
The final sections address the integration of order statistics with belief propagation, showing how this hybrid approach can significantly improve decoding performance while facing challenges from circuit level noise introduced in quantum systems. The runtime complexities of the various components are considered to ascertain practical applications in real-world quantum error correction strategies.
Video Q&A
What are quantum LDPC codes used for?
Quantum LDPC codes are used to perform quantum computation more efficiently than traditional methods like surface codes.
How many physical qubits does a quantum LDPC code require per logical qubit?
It is estimated that a quantum LDPC code could reduce the requirement to around 50 physical qubits per logical qubit.
What is the primary decoding technique discussed in the video?
The primary decoding technique discussed is belief propagation, enhanced with order statistics decoding.
What are trapping sets in quantum error correction?
Trapping sets refer to configurations of qubit errors that lead to decoding failures, often due to the presence of cycles.
What limitations does belief propagation face in quantum codes?
Belief propagation can struggle with complex noise models and is less effective in non-tree-like structures.
What is the challenge of using OSD with belief propagation?
The challenge is that OSD introduces cubic complexity due to Gaussian elimination, making it slower in practice.
What is the open problem in quantum LDPC code decoding?
The open problem is to improve or find an alternative to OSD to make decoders more efficient.
Can common decoding algorithms for surface codes be used for quantum LDPC codes?
No, traditional surface code decoders like union find and minimum weight perfect matching are not applicable due to the lack of matching properties in quantum LDPC codes.
What algorithm benefits from the structure of sparse graphs in decoding?
Belief propagation is an efficient algorithm that leverages the structure of sparse graphs for decoding.
What are the future directions suggested for quantum error correction?
Future research may build on general classical algorithms or integrate machine learning techniques to improve decoding performance.
- 00:00:00
- 00:00:32MODERATOR: Hey.
- 00:00:33Welcome back, everybody.
- 00:00:34So this is the second talk of today,
- 00:00:36and we are very happy to have Joschka Roffe giving us
- 00:00:39a tutorial on how to decode quantum LDPC codes.
- 00:00:41
- 00:00:47JOSCHKA ROFFE: OK.
- 00:00:47Thanks for the intro.
- 00:00:49So this talk will be a high level introduction
- 00:00:51to decoding quantum LDPC codes.
- 00:00:56OK.
- 00:00:57So I guess, why do we want to use quantum LDPC codes?
- 00:01:00And the main motivation is that we could potentially
- 00:01:03do quantum computation much more efficiently
- 00:01:05than is possible with the surface codes.
- 00:01:08So with the surface code, it's been
- 00:01:09estimated that we might need about a thousand
- 00:01:12physical qubits per logical qubit,
- 00:01:14whereas with quantum LDPC codes, this
- 00:01:16could be reduced considerably, perhaps to something like 50
- 00:01:19qubits per logical qubits.
- 00:01:22The disadvantage, of course, is that we
- 00:01:24go from a nice, structured layout of the surface code
- 00:01:28to something that looks like this pile of spaghetti.
- 00:01:30So we need something that can perform long range
- 00:01:34interactions.
- 00:01:37OK.
- 00:01:37And fortunately there are various quantum computing
- 00:01:39devices being developed around the world that
- 00:01:42will enable this.
- 00:01:43So examples include silicon spin qubits, neutral atoms,
- 00:01:46as we're seeing from the recent QuEra experiments.
- 00:01:49Ion traps, in principle, have the ability
- 00:01:51to perform arbitrary qubit-qubit interactions
- 00:01:54within the traps themselves.
- 00:01:56And I didn't used to include superconducting qubits
- 00:01:58on this list, but recent experiments or recent proposals
- 00:02:02from IBM and other groups have shown
- 00:02:04that it would be, in principle, possible to perform
- 00:02:07beyond nearest neighbor interactions using
- 00:02:103D integration and similar techniques.
- 00:02:12
- 00:02:14OK.
- 00:02:15So what is a quantum LDPC code?
- 00:02:16So this was nicely explained in the previous talk,
- 00:02:19but a bit of redundancy never does any harm.
- 00:02:22So a quantum LDPC code is an error correction code
- 00:02:26where each qubit is connected to a bounded number
- 00:02:28of stabilizers.
- 00:02:30So if we take this definition at face value,
- 00:02:32then technically speaking, the surface code
- 00:02:34is also a quantum low density parity check code
- 00:02:36because each qubit in surface code
- 00:02:38is only connected to at most four stabilizers.
- 00:02:42However, what we usually mean in practice
- 00:02:43when we refer to a quantum LDPC code is one of these codes
- 00:02:48where we construct the code using a product construction
- 00:02:52or otherwise where there's usually some sort of randomness
- 00:02:54in the construction.
- 00:02:56And this gives rise to the quantum
- 00:02:58low density parity check codes that
- 00:02:59have the good encoding densities and nice distance scaling
- 00:03:02properties as you increase the size of the code.
- 00:03:04
- 00:03:08OK.
- 00:03:08So just to give a quick overview of the talk.
- 00:03:13So I'll start off with some background.
- 00:03:14So go over the classical code, the classical binary code
- 00:03:18formalism, the Tanner graph way of representing this,
- 00:03:22and then explain how we can represent our quantum codes
- 00:03:26in terms of pairs of classical codes via the CSS stabilizer
- 00:03:30framework.
- 00:03:31Following this we'll have all the background
- 00:03:33we need to understand the quantum decoding problem,
- 00:03:35and then I'll go on to explain how this can be tackled
- 00:03:38with a modified form of the classical belief propagation
- 00:03:42decoder called belief propagation plus ordered
- 00:03:44statistics decoding.
- 00:03:47Following this, I'll talk briefly about circuit level
- 00:03:49noise before going on to discuss the open problem
- 00:03:53within the field, which is how we go beyond this belief
- 00:03:57propagation plus order statistics decoder
- 00:03:58and build a truly efficient decoder that
- 00:04:01could, in principle, be used on a real device.
- 00:04:03
- 00:04:07OK.
- 00:04:08So just to recap this again.
- 00:04:11So a binary linear code, a classical binary
- 00:04:15linear code is a redundant mapping from a k bit data
- 00:04:18string to an n bit code word.
- 00:04:20So here we have an example for the three bit repetition code.
- 00:04:23This maps a single bit to three bits.
- 00:04:27So 0 to 000 and 1 to 111.
- 00:04:31And any binary linear code can be represented
- 00:04:33in terms of a parity check matrix, which
- 00:04:35is defined such that all the code
- 00:04:37words are in its null space.
- 00:04:39So if the code word is uncorrupted, so it's just 000,
- 00:04:43then when we multiply it by the parity check matrix,
- 00:04:46we just map to the zero vector so we know that
- 00:04:48nothing's wrong.
- 00:04:49However, if our code word is corrupted,
- 00:04:53then it no longer maps to the zero code word
- 00:04:55and we get a so-called nontrivial syndrome that
- 00:04:57tells us that something's gone wrong with our encoding,
- 00:05:00and we have to do something about it.
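To see this mod-2 arithmetic concretely, here is a minimal NumPy sketch for the three-bit repetition code just described (the names are purely illustrative):

```python
import numpy as np

# Parity check matrix of the 3-bit repetition code: each row
# checks that a pair of adjacent bits agrees.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

codeword = np.array([1, 1, 1])  # the codeword for logical 1
error = np.array([0, 0, 1])     # a bit flip on the third bit

# An uncorrupted codeword lies in the null space of H...
print(H @ codeword % 2)                  # [0 0]
# ...while a corrupted one produces a nontrivial syndrome.
print(H @ ((codeword + error) % 2) % 2)  # [0 1]
```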
- 00:05:02And a nice way of representing binary linear codes
- 00:05:05of this type is with a Tanner graph, which we saw
- 00:05:08explained in the previous talk.
- 00:05:11I like to draw them like this where the round nodes represent
- 00:05:14the bits which map to the columns of the parity check
- 00:05:18matrix, and the square nodes represent
- 00:05:20the parity checks which are the rows of this matrix.
- 00:05:26And the rule is that each parity check,
- 00:05:30when you sum over the bits that it's connected to,
- 00:05:32that system should be equal to zero.
- 00:05:35OK?
- 00:05:36So if we set the bit nodes to be 111, so the second code word,
- 00:05:41then we see that this parity check, for example,
- 00:05:44is satisfied because 1 plus 1 mod 2 is zero.
- 00:05:48However, if we then flip a qubit--
- 00:05:50sorry.
- 00:05:51A bit, in this case.
- 00:05:53Then this check is no longer satisfied
- 00:05:56and we get a non-zero syndrome.
- 00:05:59OK.
- 00:06:00So that's classical error correction.
- 00:06:02Now on to quantum error correction.
- 00:06:04So as with many things with quantum computing,
- 00:06:06things get more complicated and annoying.
- 00:06:09For quantum error correction, the specific problem
- 00:06:11we're faced with is the fact that we
- 00:06:13have an additional error type.
- 00:06:14So in a very simplistic way of looking at things,
- 00:06:17in classical error correction we only really never need
- 00:06:19to worry about bit flip errors.
- 00:06:21In quantum error correction, we have an additional error type
- 00:06:24to worry about, which is the phase flip.
- 00:06:26So we can think of this as being a flip
- 00:06:31around the axis of the Bloch sphere
- 00:06:32that's perpendicular to the one that gives you bit flips.
- 00:06:35OK?
- 00:06:35So any quantum error correction code
- 00:06:37has to have the ability to detect and correct
- 00:06:40both of these error types.
- 00:06:42OK.
- 00:06:42And the way that we can do this is with a so-called stabilizer
- 00:06:45code.
- 00:06:46Here the idea is that we create an entangled logical state,
- 00:06:49and then we measure sequences of Pauli operators on the state
- 00:06:53that we can think of--
- 00:06:54that we call stabilizers, and we can
- 00:06:56think of these as the quantum analog of a parity check
- 00:07:00measurement.
- 00:07:01So these stabilizer measurements are measured out
- 00:07:04via non-destructive projective measurement
- 00:07:09using an auxiliary qubit.
- 00:07:11These are measured out to give us a syndrome.
- 00:07:13The syndrome is passed to a decoder,
- 00:07:15which is just an algorithm run on a classical computer.
- 00:07:19This tells us what the best recovery operation
- 00:07:22is given the measured syndrome.
- 00:07:24And then we can then apply that recovery operation
- 00:07:26and hopefully restore our quantum information
- 00:07:28to its uncorrupted state.
- 00:07:30So this is the full decoding cycle,
- 00:07:32and the important thing is that we do this full operation,
- 00:07:36so stabilizer measurement, decoding, and recovery
- 00:07:40within a time frame that's shorter than the time
- 00:07:42it takes for the qubits to decohere.
- 00:07:46OK?
- 00:07:47And we're going to focus on a specific type of stabilizer
- 00:07:50code called the CSS stabilizer codes.
- 00:07:56These are a type of stabilizer code
- 00:07:58where you can separate out the detection of bit flips
- 00:08:02and phase flips into separate stabilizer extraction
- 00:08:05processes.
- 00:08:06You then get a separate syndrome readout
- 00:08:10for bit flip errors, and another syndrome
- 00:08:13readout for phase flip errors.
- 00:08:14They can then be decoded separately
- 00:08:16to give you two recovery operations.
- 00:08:18
- 00:08:20OK.
- 00:08:21And the reason why we like to focus
- 00:08:22on this particular subclass of stabilizer codes, this CSS
- 00:08:29subclass, is that it provides a nice way
- 00:08:32of representing your quantum code
- 00:08:34as a pair of classical codes.
- 00:08:36So you can think of one of these CSS codes
- 00:08:39as being described or being a combination
- 00:08:42of two classical codes described by two parity check matrices.
- 00:08:46So one parity check matrix that detects
- 00:08:48phase flip errors and another that detects bit flip errors.
- 00:08:52OK.
- 00:08:53So a famous example of one of these codes
- 00:08:55is the Steane [[7,1,3]] code.
- 00:08:56This encodes one logical qubit in seven,
- 00:08:59and it's defined so that both this Hx and Hz components are
- 00:09:03the classical Hamming codes.
- 00:09:04And just like with the classical Tanner graph,
- 00:09:08we can create a quantum Tanner graph.
- 00:09:10So we have our seven qubits here, we add three stabilizers
- 00:09:14to detect phase flips, and then we
- 00:09:15can also add an additional three stabilizers
- 00:09:18to detect bit flips.
- 00:09:19
- 00:09:22OK.
- 00:09:23So this provides a nice way of viewing
- 00:09:25the construction of quantum codes
- 00:09:27in terms of the classical coding framework.
- 00:09:30Unfortunately it is impossible just
- 00:09:32to select your two favorite codes as the Hx and Hz matrix,
- 00:09:36because these Hx and Hz matrices actually
- 00:09:38correspond to physical operations
- 00:09:40that we measure on our quantum states.
- 00:09:42So when we're measuring the observables on a quantum state,
- 00:09:48we have to make sure that they commute.
- 00:09:51And we can make sure that this property is satisfied in terms
- 00:09:55of the CSS formalism simply by checking that this property is
- 00:09:59satisfied.
- 00:09:59I.e. when we multiply Hx by the transpose of Hz,
- 00:10:04we get the zero matrix (mod 2).
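As a concrete check, here is a short NumPy sketch for the Steane code mentioned above, assuming the standard parity check matrix of the classical [7,4] Hamming code for both Hx and Hz:

```python
import numpy as np

# Parity check matrix of the classical [7,4] Hamming code.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# The Steane code is the CSS code with Hx = Hz = H.
Hx, Hz = H, H

# CSS validity: the X and Z stabilizers commute iff
# Hx @ Hz^T = 0 (mod 2).
assert not np.any(Hx @ Hz.T % 2), "stabilizers do not commute"
```

The Hamming code happens to contain its own dual, which is why this particular pair passes the test; a generic pair of classical codes would not.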
- 00:10:06So this is a challenge in designing quantum codes
- 00:10:09because finding pairs of codes that give us this property is
- 00:10:12not always easy.
- 00:10:14So this is where the product constructions come in,
- 00:10:16and we can think of these as being
- 00:10:18a recipe for turning classical codes into quantum codes.
- 00:10:22So here's an explicit example.
- 00:10:24So here are two classical codes, and I've represented the parity
- 00:10:28check matrix using this image.
- 00:10:29Each black pixel is a one in the parity check matrix,
- 00:10:33and the blue background just represents all the locations
- 00:10:35where there's a zero.
- 00:10:37So you can see that both of these parity
- 00:10:39check matrices are actually two copies of the same parity check
- 00:10:42matrix.
- 00:10:42But we can see it's sparse because the vast majority
- 00:10:46of the pixels are blue.
- 00:10:48OK?
- 00:10:49And we can put it through a product construction
- 00:10:52to turn it into a CSS code.
- 00:10:54And this gives us two Hx and Hz matrices that look like this.
- 00:11:00So this gives us a [[416, 18, 20]] code.
- 00:11:04And there are two things I want you to note about this.
- 00:11:06First of all, the Hx and the Hz matrix are somewhat bigger.
- 00:11:09In this case, we've increased the number of bits
- 00:11:13necessary by a factor of about eight.
- 00:11:15The second thing to notice is that the Hx and Hz
- 00:11:18matrices are still sparse.
- 00:11:20So they're still LDPC, like the seed codes,
- 00:11:21and then can in principle be decoded
- 00:11:23using any existing classical technique.
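The talk doesn't write out the product formula, but a minimal sketch of the standard hypergraph product construction looks like this; the Kronecker-product structure guarantees the commutation condition for any two classical seed matrices:

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Hypergraph product of two classical parity check matrices.

    Returns a CSS pair (Hx, Hz) satisfying Hx @ Hz.T = 0 (mod 2)
    for *any* seeds, since Hx Hz^T = H1 (x) H2^T + H1 (x) H2^T = 0 mod 2.
    """
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)])
    Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))])
    return Hx % 2, Hz % 2

# Example: the product of two closed-loop repetition codes
# reproduces the toric code.
H_ring = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
Hx, Hz = hypergraph_product(H_ring, H_ring)
assert not np.any(Hx @ Hz.T % 2)
```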
- 00:11:26
- 00:11:29OK.
- 00:11:31So in the first instance, it's always
- 00:11:33useful to study the decoding problem
- 00:11:36in the simplest possible error model.
- 00:11:38We call this the code capacity error model.
- 00:11:43In the code capacity error model,
- 00:11:44we just assume that errors occur just before we
- 00:11:47measure the stabilizers.
- 00:11:50OK?
- 00:11:50And because we're measuring the x and the z stabilizer
- 00:11:54separately, we have two separate decoding problems.
- 00:11:58So we get our x and our z syndrome,
- 00:12:00and then we use the respective Hz and Hx CSS
- 00:12:03matrices to decode these as two independent, classical decoding
- 00:12:07problems.
- 00:12:09And what we get out from this is some sort of binary vector
- 00:12:12that we then map to a recovery operation which is then
- 00:12:15applied to the quantum state.
- 00:12:18OK.
- 00:12:19So given this process, how do we actually
- 00:12:22determine whether our decoding has been successful?
- 00:12:24Well, in simulation what we can do
- 00:12:26is we can calculate what we call the residual error.
- 00:12:28So our CSS decoder will return two Pauli operations,
- 00:12:34Pauli operation to correct x errors and one to correct
- 00:12:36z errors.
- 00:12:37And the residual error is just the combined effect
- 00:12:40of these recovery operations and the actual error.
- 00:12:45If we're doing a simulation, we know exactly what that is.
- 00:12:47And we say that decoding is successful
- 00:12:50if the action of this residual error
- 00:12:53is to map onto the plus 1 eigenspace
- 00:12:56of the logical states.
- 00:12:58OK?
- 00:12:58So this is the same as saying that the residual error is
- 00:13:01equivalent to a stabilizer.
- 00:13:03
- 00:13:05And conversely, decoding fails if the residual error
- 00:13:09is equivalent to a logical operator
- 00:13:11such that this logical state, the information
- 00:13:15encoded there has changed, so our error correction protocol
- 00:13:18has failed.
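In simulation, this success check is only a few lines. Here is a sketch for the X-error half of the problem, where Hz detects X errors and the rows of a hypothetical `Lz` matrix are representatives of the logical Z operators:

```python
import numpy as np

def decoding_failed(error, correction, Hz, Lz):
    """Classify the residual X error left after a correction (sketch).

    error, correction : binary vectors of X flips (actual vs. decoder output)
    Hz : Z-type parity check matrix (detects X errors)
    Lz : hypothetical matrix whose rows are logical Z operators
    """
    residual = (error + correction) % 2
    # The correction should at least clear the syndrome...
    assert not np.any(Hz @ residual % 2)
    # ...after which the residual is either a stabilizer (success)
    # or a logical operator (failure). It is logical exactly when
    # it anticommutes with some logical Z operator.
    return bool(np.any(Lz @ residual % 2))
```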
- 00:13:21OK, so what decoders can we use to do
- 00:13:25the decoding for these two separate error detection
- 00:13:29components?
- 00:13:30So one question you might have is,
- 00:13:32can we use the surface code decoders that have been around
- 00:13:35for a while and have been highly optimized
- 00:13:37and are now getting to the point where they might actually
- 00:13:39be fast enough to run on a real quantum computer decoding
- 00:13:43syndromes in real time?
- 00:13:44And the answer is unfortunately no.
- 00:13:46And the reason for this is that both union find
- 00:13:50and minimum weight perfect matching
- 00:13:52are matching algorithms, and these
- 00:13:54require a very specific structure
- 00:13:56from your quantum error correction codes.
- 00:13:57And the specific requirement they have
- 00:14:00is that your quantum error correction code
- 00:14:01produces pair like syndromes.
- 00:14:03So we can see that this is the case, for example,
- 00:14:05in the surface code.
- 00:14:07So here we have a bit flip error that lights up two syndromes,
- 00:14:10and as we increase the length of this error chain,
- 00:14:13the syndrome weight remains the same.
- 00:14:15So we see that the syndrome is still weight two,
- 00:14:17and if we increase it by one, it's still weight two.
- 00:14:20So decoding is just a process of joining
- 00:14:22these pairs of syndromes.
- 00:14:27OK?
- 00:14:27Unfortunately, quantum LDPC codes in general
- 00:14:31don't have this matching property.
- 00:14:33So we can see this very easily by looking at the Hamming code
- 00:14:36that we saw before.
- 00:14:37If we just put an error on the first qubit there,
- 00:14:39we see that three syndromes are activated.
- 00:14:42So we can't use a matching based decoder for this.
- 00:14:45What this means is that we need to use a much more
- 00:14:48general decoding algorithm, and I'll
- 00:14:50be focusing in this tutorial on the classical belief
- 00:14:53propagation decoder.
- 00:14:55
- 00:14:57OK.
- 00:14:58So why belief propagation?
- 00:14:59Well, it's a very general algorithm
- 00:15:01that can be applied to any sparse graph.
- 00:15:04So we don't need the matching structure.
- 00:15:06And the other reason is that it's extremely fast.
- 00:15:09So belief propagation runs in linear time,
- 00:15:11if run sequentially, and if you have
- 00:15:13access to a parallel architecture it
- 00:15:15can, in principle, be almost constant in its complexity.
- 00:15:19
- 00:15:21OK.
- 00:15:22So to provide some intuition for how this algorithm works,
- 00:15:24let's focus on the classical decoding case.
- 00:15:26Here we have a classical syndrome equation,
- 00:15:29and given the syndrome, the goal of decoding
- 00:15:31is to find some minimum weight estimates of the error given
- 00:15:36this measured syndrome.
- 00:15:38OK.
- 00:15:39And assuming that we have a uniformly distributed error
- 00:15:42model, as might be, for example, the case in the binary
- 00:15:47symmetric channel, then we can express this
- 00:15:49as a marginalization problem where
- 00:15:51we marginalize over a multivariate probability
- 00:15:54distribution that relates the probability of each bit being
- 00:15:57an error to the syndrome that's being measured.
- 00:16:01OK, so we wouldn't want to do this exhaustively
- 00:16:03because that would scale exponentially
- 00:16:06in its complexity.
- 00:16:08So the basic idea behind belief propagation
- 00:16:11is that we can leverage some of the structure
- 00:16:13in the probability distribution given to us by the Tanner graph
- 00:16:18to compute these marginals more efficiently.
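In symbols, this is the computation that BP approximates (a sketch, writing H for the parity check matrix, s for the measured syndrome, and e for the binary error pattern):

```latex
% Decoding as marginalization: for each bit i, sum the probabilities
% of all error patterns consistent with the measured syndrome s.
P(e_i = 1 \mid s) \;=\; \sum_{\substack{e \in \{0,1\}^n \\ e_i = 1,\; He = s \bmod 2}} P(e)
```

Evaluated exhaustively, the sum ranges over up to 2^(n-1) error patterns, which is the exponential cost just mentioned; BP instead estimates these marginals by passing messages along the sparse Tanner graph.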
- 00:16:21
- 00:16:27OK.
- 00:16:27So in practice, this is done using a message passing
- 00:16:32algorithm where information is exchanged
- 00:16:35between the check nodes and the variable nodes of the Tanner
- 00:16:39graph.
- 00:16:40So a type of belief propagation called the sum-product
- 00:16:45algorithm actually computes the marginals exactly
- 00:16:50if the graph is tree-like. In practice,
- 00:16:52quantum error correction codes usually aren't tree-like.
- 00:16:54However, what we find is that even if the graph has cycles
- 00:16:59in the classical case, at least, it usually
- 00:17:01works reasonably well.
- 00:17:04OK.
- 00:17:05So this animation here is designed
- 00:17:10to give you some sort of intuition as to how the belief
- 00:17:14propagation decoder works.
- 00:17:16So this here is the Tanner graph for a repetition code,
- 00:17:19and we have this error pattern here shown by the bits
- 00:17:24with x's on them.
- 00:17:26And that activates the syndrome that we see at the top.
- 00:17:32OK.
- 00:17:33
- 00:17:36Yes.
- 00:17:36In the first step of belief propagation,
- 00:17:38we assume that each bit in the Tanner graph
- 00:17:43has the same probability of being an error.
- 00:17:45So what this essentially means is
- 00:17:47that we set the probability of error
- 00:17:48to being the initial error channel.
- 00:17:52OK?
- 00:17:53And then we run the belief propagation,
- 00:17:56and information is exchanged between the nodes
- 00:17:58and this probability distribution
- 00:17:59redistributes itself.
- 00:18:01And eventually once we start seeing that the probability
- 00:18:03distribution isn't really changing very much,
- 00:18:05we terminate the iterations.
- 00:18:07And at that point, we can then make a hard decision
- 00:18:10on the basis of the output probability distribution
- 00:18:13by thresholding the probabilities such
- 00:18:16that all bits that have a probability
- 00:18:18above a certain value are set to 1.
- 00:18:21And then that gives us a hard decision,
- 00:18:26which is the recovery operation which can then be
- 00:18:28applied to remove the syndrome.
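Putting these steps together, here is a minimal sum-product BP sketch in NumPy (a dense, educational implementation; real decoders work edge-wise on the sparse graph):

```python
import numpy as np

def bp_decode(H, syndrome, p=0.1, max_iter=20):
    """Syndrome-based sum-product belief propagation (sketch).

    H: (m, n) binary parity check matrix; syndrome: length-m vector;
    p: prior probability that each bit is flipped.
    """
    m, n = H.shape
    mask = H.astype(bool)
    llr0 = np.log((1 - p) / p)          # prior log-likelihood ratio
    msg_vc = np.where(mask, llr0, 0.0)  # variable -> check messages
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # Check -> variable: combine the *other* incoming messages;
        # an unsatisfied check flips the sign of its replies.
        t = np.where(mask, np.tanh(msg_vc / 2), 1.0)
        ext = np.divide(t.prod(axis=1, keepdims=True), t,
                        out=np.ones_like(t), where=mask)
        sign = (-1.0) ** syndrome[:, None]
        msg_cv = np.where(mask,
                          sign * 2 * np.arctanh(np.clip(ext, -0.999999, 0.999999)),
                          0.0)
        # Posterior LLRs and fresh variable -> check messages.
        post = llr0 + msg_cv.sum(axis=0)
        msg_vc = np.where(mask, post[None, :] - msg_cv, 0.0)
        # Hard decision: a negative LLR means "probably flipped".
        e_hat = (post < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, syndrome):
            break                       # syndrome explained; stop early
    return e_hat

# Usage: 5-bit repetition code with one flipped bit.
H = np.array([[1,1,0,0,0],[0,1,1,0,0],[0,0,1,1,0],[0,0,0,1,1]])
error = np.array([0, 0, 1, 0, 0])
print(bp_decode(H, H @ error % 2))      # expected: [0 0 1 0 0]
```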
- 00:18:30
- 00:18:32OK.
- 00:18:32So the reason I chose this particular error configuration
- 00:18:36is that we can see something interesting happening
- 00:18:39when we start the iteration.
- 00:18:40
- 00:18:44OK.
- 00:18:45So we see that in the first iteration, what happens
- 00:18:49is that we immediately--
- 00:18:51the probability distribution immediately shifts to this bit
- 00:18:54here.
- 00:18:55And the reason for this is that in the first step--
- 00:18:57well, can you actually see the mouse?
- 00:18:59
- 00:19:03Yeah.
- 00:19:03In the first step, what happens is that this bit sends
- 00:19:06a message to this variable node and this variable node says,
- 00:19:10OK, I'm--
- 00:19:11sorry.
- 00:19:12This variable node sends a message to this check node.
- 00:19:14This check node says, OK, I'm not satisfied,
- 00:19:17so it sends a message back.
- 00:19:18And then that information reinforces the belief
- 00:19:22that this bit is an error.
- 00:19:24OK?
- 00:19:25Right.
- 00:19:26However, when we then run the algorithm further, what we find
- 00:19:32is that we get more information from other variable nodes
- 00:19:36and checks in the graph, and then eventually the information
- 00:19:39propagates from this syndrome through to this syndrome.
- 00:19:43But this one is also unsatisfied.
- 00:19:46And if we go to the step just before the final one,
- 00:19:49what we see is that we have--
- 00:19:54yes, all four.
- 00:19:55
- 00:19:59Yeah.
- 00:19:59Now we see that all four of these bits
- 00:20:02have high probability of being an error.
- 00:20:05What then happens is a message is sent to here.
- 00:20:07It says, OK, look.
- 00:20:08These three all have high probability.
- 00:20:12And if we just have these three bits being an error,
- 00:20:15then these two syndromes are removed.
- 00:20:17But that isn't compatible with this bit also being an error.
- 00:20:20So in the final step, that information
- 00:20:22is propagated to the final bit, and then its probability
- 00:20:27goes to zero.
- 00:20:27
- 00:20:31OK.
- 00:20:32And so this is great.
- 00:20:33This algorithm can be applied to any graph.
- 00:20:36So we can apply it in principle to a surface code decoding
- 00:20:43problem.
- 00:20:43So this is the Hx component of a distance five surface code.
- 00:20:48We can do the same thing again.
- 00:20:49Can initialize belief propagation and then run it.
- 00:20:54And we see that for this particular error configuration,
- 00:20:57it just works.
- 00:20:58So we can make a hard decision and then we
- 00:21:01see the combined effect of the initial error plus the recovery
- 00:21:04operation just gives us a stabilizer,
- 00:21:06and the syndrome is removed.
- 00:21:07So from this it appears that belief propagation actually
- 00:21:10works for the surface code.
- 00:21:12So let's look at what happens when
- 00:21:13we draw a threshold plot of a toric code
- 00:21:19decoded with belief propagation.
- 00:21:21So for those of you who haven't seen these sorts of plots
- 00:21:24before, this is a quantum error correction threshold plot.
- 00:21:29On the x-axis we have the qubit error rate,
- 00:21:31which is the physical error rate on each qubit.
- 00:21:34On the y-axis we have the logical error rates.
- 00:21:38And what we typically do is we draw lines
- 00:21:40on this plot for a number of different error correction
- 00:21:44codes of different size.
- 00:21:47Once we start looking at this particular decoding plot,
- 00:21:49we see that this error correction protocol,
- 00:21:52i.e. the toric code decoded with belief propagation,
- 00:21:54doesn't actually work that well.
- 00:21:56And the reason for this is we see
- 00:21:59that as we increase the size of the code--
- 00:22:01so this blue line is 162 qubit code
- 00:22:03and the red line is a 450 qubit code.
- 00:22:07And what we see is that as we increase the size of our error
- 00:22:10correction codes, the logical error rate
- 00:22:12is actually getting higher.
- 00:22:14So this is, in effect, an error worsening code rather than
- 00:22:17an error correction code.
- 00:22:18And this toric code decoded with the belief propagation decoder
- 00:22:22doesn't actually have a threshold.
- 00:22:24
- 00:22:30And the reason for this was touched on
- 00:22:32in the previous talk.
- 00:22:34It's because of the presence of these so-called trapping
- 00:22:36sets in quantum codes, and these arise
- 00:22:38when we have cycles that support degenerate errors.
- 00:22:44OK?
- 00:22:44So the classic example is this configuration
- 00:22:48in the surface code where we have a so-called corner error.
- 00:22:54So once we've run the leave propagation
- 00:22:56on this, what we find is that the probability
- 00:23:01distribution eventually converges
- 00:23:04to showing that there's high probability of these four
- 00:23:08qubits around this square or being errors.
- 00:23:10So what then happens when we make
- 00:23:12a hard decision is that we apply a z error on every single one
- 00:23:17of the qubits around the square, and it doesn't actually
- 00:23:19remove the [INAUDIBLE].
- 00:23:24So these trapping sets cause the belief propagation decoder
- 00:23:27to fail, and this is why we don't get a threshold.
- 00:23:29
- 00:23:35OK, so we have to find some way of dealing with these trapping
- 00:23:37sets.
- 00:23:38And perhaps the best way that we know of at the moment
- 00:23:41is the so-called order statistics
- 00:23:44post-processing technique.
- 00:23:46So the OSD decoder is a technique
- 00:23:49for decoding parity check matrix that
- 00:23:51always guarantees a solution that satisfies the syndrome.
- 00:23:55It's existed for a while.
- 00:23:56So it was initially developed as a method
- 00:23:58for lowering the error floor in classical LDPC codes,
- 00:24:02and it was first applied in the quantum setting by Panteleev
- 00:24:04and Kalachev in 2019.
- 00:24:08OK.
- 00:24:08And the idea behind the belief propagation
- 00:24:10plus order statistics decoder is we first
- 00:24:12run belief propagation.
- 00:24:14If the belief propagation output satisfies
- 00:24:16the syndrome, then great.
- 00:24:17We simply output the BP solution.
- 00:24:19If not, then we call the OSD decoder
- 00:24:23and output the OSD decoding.
- 00:24:27OK.
- 00:24:27And the basic idea behind OSD is that we can always
- 00:24:30solve the decoding problem using matrix inversion.
- 00:24:33So consider this parity check matrix here.
- 00:24:37Of course we can't immediately invert this matrix
- 00:24:41because it's not a full-rank square matrix.
- 00:24:42But what we can always do is we can
- 00:24:44choose a full rank subset of the columns of this matrix
- 00:24:47and decode that instead.
- 00:24:49OK.
- 00:24:49So the question now might be, why don't we
- 00:24:52just do this all the time?
- 00:24:53And the reason for this is that there
- 00:24:54are lots of different ways of choosing
- 00:24:56this subset of columns.
- 00:24:58So we could just pick four random linearly independent
- 00:25:01columns, and it would give you a solution to the decoding
- 00:25:03problem, but it's not necessarily
- 00:25:05going to be a minimum weight solution.
- 00:25:07And in the context of quantum error correction,
- 00:25:08that would probably mean that you're introducing
- 00:25:10a logical error, in most cases.
- 00:25:14OK.
- 00:25:15So how do we get around this?
- 00:25:16Well, the idea is to use the probability distribution that's
- 00:25:19output by BP to intelligently select this subset of columns.
- 00:25:24OK?
- 00:25:24So let's stick with this parity check matrix here.
- 00:25:27If we want to decode this syndrome,
- 00:25:29then we can put this into the belief propagation decoder
- 00:25:32and it might output this sequence
- 00:25:35of probabilities, error probabilities on each bit.
- 00:25:38So this fourth bit, in this case,
- 00:25:40would be the one that's most likely to be an error according
- 00:25:43to belief propagation.
- 00:25:45We can then use that belief propagation probability
- 00:25:50distribution output to rank our qubits or columns of the parity
- 00:25:57check matrix from most to least likely to be in error.
- 00:26:00OK?
- 00:26:00And then the idea is that we rearrange
- 00:26:02the columns of this parity check matrix
- 00:26:04according to this ordering.
- 00:26:06So this is the original ordering,
- 00:26:07and now this is the ordering such
- 00:26:09that this column corresponds to the bit that's
- 00:26:12most likely to be an error.
- 00:26:14At this point, we can then perform Gaussian elimination
- 00:26:17column wise on this matrix to find a full rank
- 00:26:20submatrix consisting of the bits that belief propagation has
- 00:26:25told us are highly likely to be in error.
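A sketch of this order-zero OSD step (column ranking followed by column-wise Gaussian elimination over GF(2)); the BP soft output is passed in as a vector of per-bit error probabilities:

```python
import numpy as np

def osd0(H, syndrome, bp_probs):
    """Order-zero ordered statistics decoding (sketch).

    Sorts the columns of H from most to least likely to be in error,
    Gauss-eliminates to find the first full-rank column subset, and
    solves for an error supported only on those columns. The output
    always satisfies the syndrome.
    """
    m, n = H.shape
    order = np.argsort(-bp_probs)           # most likely bits first
    A = np.hstack([H[:, order], syndrome.reshape(-1, 1)]) % 2
    pivots, row = [], 0
    for col in range(n):                    # column-wise elimination
        hits = np.nonzero(A[row:, col])[0]
        if len(hits) == 0:
            continue                        # linearly dependent column
        A[[row, row + hits[0]]] = A[[row + hits[0], row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]              # XOR row reduction over GF(2)
        pivots.append(col)
        row += 1
        if row == m:
            break
    # The reduced augmented column gives the error on the pivot
    # (most likely) bits; every other bit is set to zero.
    e = np.zeros(n, dtype=int)
    for r, col in enumerate(pivots):
        e[order[col]] = A[r, -1]
    return e
```

In a BP+OSD decoder, this function would only be called when BP's own hard decision fails to reproduce the syndrome.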
- 00:26:29OK.
- 00:26:29And this improves decoding.
- 00:26:31So this is just the decoding plot
- 00:26:32we saw before where we just had the belief propagation decoder.
- 00:26:35But we can turn on the OSD post-processing.
- 00:26:39And what we see is, first of all,
- 00:26:41we immediately get a considerable reduction
- 00:26:46in the logical error rate.
- 00:26:48So we get much more logical error suppression.
- 00:26:50So we see that this, at 1% we're suppressing
- 00:26:55by about five orders of magnitude more.
- 00:26:59The second thing is that we also now
- 00:27:01have a working error correction protocol in that we actually
- 00:27:03have a threshold.
- 00:27:04So remember the blue line is a smaller code
- 00:27:06and the red line is the biggest code.
- 00:27:08Now as we increase the size of our toric codes, what we see
- 00:27:11is we get improved logical error suppression.
- 00:27:14So our error correction protocol is now working.
- 00:27:17So this is the way in which we can hack belief propagation
- 00:27:20to work for quantum codes.
- 00:27:21
- 00:27:27OK.
- 00:27:28So the big advantage of this belief propagation
- 00:27:31plus ordered statistics decoder is
- 00:27:33that it can work for pretty much anything.
- 00:27:35So all the examples I've given up to this point
- 00:27:37have been for the topological codes.
- 00:27:40However, it can be used to decode pretty much anything
- 00:27:44that is LDPC.
- 00:27:45So examples include fractal codes, 3D color codes,
- 00:27:49various topological codes, as well as circuit level noise
- 00:27:53decoding graphs.
- 00:27:56Now for some advertisements.
- 00:27:57If you want to try out this decoder,
- 00:27:59we have an open source software package
- 00:28:02that allows you to go from defining a parity check
- 00:28:05matrix in Python using NumPy to actually doing some decoding
- 00:28:11in less than 10 lines of code.
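The package isn't named in the transcript, but the workflow described looks roughly like the following sketch, assuming the open-source `ldpc` package and its `bposd_decoder` interface (check the package documentation for the exact signature):

```python
import numpy as np
from ldpc import bposd_decoder  # assumed interface; see the package docs

# Define a parity check matrix with NumPy...
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])

# ...construct a BP+OSD decoder and decode a syndrome.
decoder = bposd_decoder(H, error_rate=0.05, max_iter=20, osd_order=0)
error = np.array([0, 0, 1, 0, 0])
syndrome = H @ error % 2
print(decoder.decode(syndrome))  # expected: [0 0 1 0 0]
```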
- 00:28:13Yeah?
- 00:28:15AUDIENCE: Maybe I missed something.
- 00:28:17Apologies if I did.
- 00:28:18I thought the reason you wanted to look at the belief
- 00:28:20propagation was that it's a local algorithm,
- 00:28:22but then this OSD where you have to sort the probabilities, that
- 00:28:26doesn't seem very local.
- 00:28:27JOSCHKA ROFFE: Sure.
- 00:28:28Yeah.
- 00:28:28We'll get on to that later.
- 00:28:30
- 00:28:34OK, so up to this point I've described
- 00:28:38decoding in the very idealized setting of circuit level noise
- 00:28:43and of code capacity noise, sorry.
- 00:28:46So of course, this isn't very realistic
- 00:28:50because we actually have to use quantum circuits
- 00:28:52to measure our stabilizers.
- 00:28:55So in the code capacity noise case, what we assume
- 00:28:58is that the errors only occur in a very specific location.
- 00:29:01So this is an example of an error correction circuit
- 00:29:03for one of the simplest quantum codes,
- 00:29:05which is the 4, 2, 2 error detection codes.
- 00:29:09Yep?
- 00:29:09AUDIENCE: What is the time complexity of this [INAUDIBLE]
- 00:29:12algorithm?
- 00:29:14JOSCHKA ROFFE: OK.
- 00:29:15I have a slide on that later, so we'll get to it.
- 00:29:19So this is a 4, 2, 2 error detection code.
- 00:29:23The Hx and Hz matrices are just these.
- 00:29:28So the first part is an error correction circuits
- 00:29:30is creating an encoder.
- 00:29:31We then get our logical state, and then
- 00:29:33we measure the x and the z stabilizers, which in this case
- 00:29:37just zzzz and xxxx out via projective measurements.
- 00:29:43OK?
- 00:29:43So what this looks like in a little more detail
- 00:29:45is that we can use CNOTs to measure these.
- 00:29:48So this is the encoder now.
- 00:29:50This is the measurement of the z stabilizers,
- 00:29:52and this is the measurement of the x stabilizers.
- 00:29:54And this is still a code capacity noise model,
- 00:29:56because we're assuming that errors only
- 00:29:57occur in this location.
- 00:29:59Of course this is completely unrealistic,
- 00:30:01because these auxiliary qubits that
- 00:30:05are used to measure out the stabilizers are also qubits,
- 00:30:08and are therefore just as susceptible to error
- 00:30:11as the qubits on the register.
- 00:30:14So what a more realistic picture of the circuit might look like,
- 00:30:18just for bit flip errors, is like this.
- 00:30:20So we have errors in every single location in the circuit,
- 00:30:23and we have to adapt our decoders
- 00:30:25to take account of this.
- 00:30:28And the reason this is problematic
- 00:30:30is because of error propagation through the circuit.
- 00:30:33To understand this, we need to understand
- 00:30:35two rules for the propagation of Pauli errors
- 00:30:39through CNOT gates.
- 00:30:41So the way that this works is if we
- 00:30:43have an x error at this location, then a CNOT gate
- 00:30:45will propagate it like this.
- 00:30:48So we get an x error at the control and an x error
- 00:30:52at the target.
- 00:30:53And similarly for z errors, what we find
- 00:30:55is that if we have a z error down here,
- 00:30:58it will climb up the CNOT to give you
- 00:31:01an error pattern like this.
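These two rules are easy to state in symplectic form, where each qubit's Pauli is a pair of bits (x, z). A minimal sketch:

```python
def propagate_cnot(x_ctrl, z_ctrl, x_tgt, z_tgt):
    """Conjugate a two-qubit Pauli error through a CNOT (sketch).

    X components copy control -> target; Z components copy
    target -> control. All other components are unchanged.
    """
    return x_ctrl, z_ctrl ^ z_tgt, x_tgt ^ x_ctrl, z_tgt

# X on the control spreads to both qubits:
print(propagate_cnot(1, 0, 0, 0))  # (1, 0, 1, 0): X on control and target
# Z on the target climbs up to the control:
print(propagate_cnot(0, 0, 0, 1))  # (0, 1, 0, 1): Z on control and target
```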
- 00:31:03So what this means is that single errors in our quantum
- 00:31:06error correction extraction circuit
- 00:31:08can spread to heavier weight errors
- 00:31:11and activate more syndromes than you might expect.
- 00:31:14OK.
- 00:31:15So as an example, what we find if we have an x error here
- 00:31:19in this part of the 4, 2, 2 syndrome extraction circuit
- 00:31:22is that it will propagate like this.
- 00:31:24So I'll activate both this syndrome measurement
- 00:31:27and this syndrome measurement.
- 00:31:30OK, so given this more complicated circuit level noise
- 00:31:34model, is it sufficient to decode the seed code using
- 00:31:40the techniques that we've been describing so far?
- 00:31:42I.e. just decoding the Hx and Hz matrices as classical codes.
- 00:31:46And the answer to this is unfortunately no.
- 00:31:49So what we need is we need a much more detailed decoding
- 00:31:52graph that relates each sequence of measurements here
- 00:31:56to each specific error mechanism in our circuit.
- 00:32:00
- 00:32:02OK.
- 00:32:03So what typically happens is that these circuit level
- 00:32:07noise decoding graphs get much bigger.
- 00:32:09However, one important thing to note
- 00:32:11is that they are still binary matrices, so they
- 00:32:13can be decoded using BP+OSD.
- 00:32:16And the other thing to note is that these decoding graphs
- 00:32:21get much bigger.
- 00:32:22So in this paper, which will be the subject of the next talk,
- 00:32:26they decoded a 144, 12, 12 code under circuit level noise
- 00:32:31and had to use an 8785 column decoding matrix.
- 00:32:36So we see that our decoding matrix
- 00:32:38increases from 144 to 8785.
- 00:32:42So that's a pretty huge expansion
- 00:32:44in the complexity of the decoding problem.
- 00:32:48OK.
- 00:32:49So this leads on nicely to the next section,
- 00:32:51which is the question as to whether BP+OSD is enough.
- 00:32:54And now to answer your question about the runtime
- 00:32:57of this algorithm.
- 00:32:58So to recap, the belief propagation component
- 00:33:02is extremely fast.
- 00:33:03So it runs in linear time in the block length and almost
- 00:33:08constant when run in parallel.
- 00:33:11However, the big bottleneck in our belief propagation
- 00:33:13plus order statistics decoder is the OSD component,
- 00:33:16because that requires Gauss elimination which is,
- 00:33:19by comparison, really slow.
- 00:33:21So it's cubic in the block length in the worst case
- 00:33:24and also very difficult to parallelize.
- 00:33:28So the OSD component is really the bottleneck,
- 00:33:30and that raises the question as to
- 00:33:33whether this particular decoder that
- 00:33:35works so well for simulations and for benchmarking quantum
- 00:33:38LDPC codes is the actual decoding technique that we'd
- 00:33:41want to use in a real quantum computer
- 00:33:44when we're decoding real syndromes.
- 00:33:48So the open problem is, is there an OSD alternative
- 00:33:51that improves performance with reduced complexity?
- 00:33:56OK.
- 00:33:56So the dream would be to have a completely local decoder using
- 00:34:01a combination of belief propagation and something
- 00:34:03like the flip decoder that was described in the previous talk.
- 00:34:08So the advantage of this would be
- 00:34:09that it would be a completely local algorithm.
- 00:34:11So the flip decoder only needs to know about individual bits
- 00:34:15in its immediate environment.
- 00:34:17So it could be arbitrarily parallelized,
- 00:34:19and this would, in combination with BP, which
- 00:34:22can also be very easily parallelized,
- 00:34:24lead to very fast decoders.
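For reference, a minimal parallel flip decoder sketch looks like this; every bit applies the same purely local rule, which is what makes the scheme so easy to parallelize (and also why it can get stuck on trapping sets):

```python
import numpy as np

def flip_decode(H, syndrome, max_rounds=100):
    """Parallel 'flip' decoding (sketch): flip every bit that touches
    more unsatisfied than satisfied checks, and repeat.
    """
    m, n = H.shape
    e = np.zeros(n, dtype=int)
    for _ in range(max_rounds):
        s = (H @ e + syndrome) % 2       # currently unsatisfied checks
        if not s.any():
            return e                     # syndrome cleared: done
        unsat = H.T @ s                  # per bit: unsatisfied neighbours
        deg = H.sum(axis=0)              # per bit: total neighbours
        flips = 2 * unsat > deg          # strict majority unsatisfied
        if not flips.any():
            break                        # stuck, e.g. in a trapping set
        e[flips] ^= 1
    return e                             # may not satisfy the syndrome
```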
- 00:34:26So this idea has been explored in various works.
- 00:34:30However, what we find is that there's usually
- 00:34:33a pretty serious performance penalty.
- 00:34:35So this is a plot from some work I'm currently
- 00:34:38doing with Tom Scruby from Okinawa University
- 00:34:42where we're comparing the performance of BP+OSD
- 00:34:44with the BP plus flip decoder.
- 00:34:48So the red line is BP+OSD and the green line
- 00:34:50is the BP flip decoder.
- 00:34:52And you see that we're reducing performance
- 00:34:55by about two orders of magnitude.
- 00:34:57OK?
- 00:34:57However, the interesting thing is
- 00:34:59that for this particular code, which is a very small quantum
- 00:35:01LDPC code, we're still getting improved performance compared
- 00:35:04to a comparably sized surface code decoded with matching.
- 00:35:08So I guess the question to be further explored is, yeah,
- 00:35:13what is the compromise-- what is the performance
- 00:35:16hit that we're willing to settle on?
- 00:35:18Is a faster decoder worth it
- 00:35:21if it means a reduction in the quality of the decoding?
- 00:35:27OK.
- 00:35:27The second approach that we could look at
- 00:35:29is repeat until success belief propagation.
- 00:35:32So here the idea is that we can run belief propagation
- 00:35:34multiple times, and after each run of BP,
- 00:35:38we modify something about our decoding problem.
- 00:35:41So one example is to modify the error
- 00:35:43channel and other techniques.
- 00:35:46Modify the decoding graph itself.
- 00:35:49On the right here, I have an example
- 00:35:51from this paper, the stabilizer inactivation decoding technique.
- 00:35:55And here, the idea is that you, after every round of BP,
- 00:35:58delete some problematic check nodes that
- 00:36:01are involved in trapping sets.
- 00:36:02And for a particular code family,
- 00:36:04they actually showed that you can
- 00:36:06improve upon the performance of belief propagation plus
- 00:36:08ordered statistics decoding.
- 00:36:10What isn't explored, however, is this decoder's performance
- 00:36:13in a wider variety of settings or under circuit level noise.
- 00:36:20Another approach is to abandon the idea of decoding your bit
- 00:36:26flip and phase flip errors separately,
- 00:36:29because this comes with a performance penalty.
- 00:36:31So some decoders have been proposed where you instead
- 00:36:34decode over a quaternary graph rather than a binary graph.
- 00:36:39So in some settings, this can give us performance
- 00:36:42that is comparable to BP+OSD.
- 00:36:45What I find, however, is that these decoders are
- 00:36:48very difficult to calibrate and they're often
- 00:36:50very dependent upon the initial conditions.
- 00:36:54And the final--
- 00:36:56AUDIENCE: [INAUDIBLE]
- 00:36:57
- 00:37:00JOSCHKA ROFFE: Because the belief propagation
- 00:37:01update rules get more complicated.
- 00:37:05AUDIENCE: [INAUDIBLE]
- 00:37:08JOSCHKA ROFFE: I don't know the exact number,
- 00:37:10but yeah, it's more.
- 00:37:11It's slower.
- 00:37:14So the final thing I'll mention are this idea
- 00:37:19to use machine learning to actually improve the belief
- 00:37:21propagation algorithm itself.
- 00:37:23So as mentioned previously, belief propagation
- 00:37:27works by exchanging information between check
- 00:37:29nodes and variable nodes.
- 00:37:31So the idea here is to use some sort
- 00:37:33of machine learning technique to learn better update rules.
- 00:37:38So here's one paper where they trained a belief propagation
- 00:37:43decoder using a graph neural network,
- 00:37:45and they do get performance that is approaching the BP+OSD line.
- 00:37:49The problem, of course, with this
- 00:37:50is that, as with many things to do with neural networks,
- 00:37:54scalability can be a problem.
- 00:37:56Can you ensure that it still works
- 00:37:57once you start increasing the size of the nodes?
- 00:37:59
- 00:38:02OK.
- 00:38:03And that's the end of my talk.
- 00:38:04So to summarize, quantum LDPC codes
- 00:38:07require general decoders that go beyond union
- 00:38:09find and minimum weight perfect matching.
- 00:38:13BP+OSD works reasonably well and gives us nice decoding plots
- 00:38:17for papers in our early simulations
- 00:38:20of these quantum LDPC code protocols.
- 00:38:23However, because of the cubic complexity
- 00:38:26of the post-processing, it's unlikely it will actually
- 00:38:29be practical when we're building real quantum computers based
- 00:38:34on LDPC code architectures.
- 00:38:36So the open problem is, can we improve upon OSD?
- 00:38:42[APPLAUSE]
- 00:38:44
- 00:38:52AUDIENCE: I was curious if the BP+OSD algorithm is
- 00:38:57relatively easy to include information
- 00:38:59about the noise model?
- 00:39:00So if you have bias noise, where do you put that in?
- 00:39:05JOSCHKA ROFFE: Yes, it's very easy to include.
- 00:39:07So when you start belief propagation,
- 00:39:10you set all the bits to be equal to the initial error channel.
- 00:39:13If that initial error channel is biased,
- 00:39:15you just set the initial probability distribution to be biased.
- 00:39:18It works out of the box, basically.
- 00:39:20
- 00:39:22AUDIENCE: So I'm kind of curious.
- 00:39:24I think Calderbank has a paper on spatially coupled codes
- 00:39:28where he just runs belief propagation.
- 00:39:32And I think the Tanner graphs have girth eight,
- 00:39:35I want to say.
- 00:39:36And it seems to perform pretty well.
- 00:39:39What do you think the prospects are
- 00:39:41for getting rid of any post-processing completely
- 00:39:45in both the ideal setting where you're not
- 00:39:48worrying about circuit level noise,
- 00:39:50and then in the setting of circuit level noise
- 00:39:52where you actually have to worry about quite a bit more?
- 00:39:56JOSCHKA ROFFE: Yeah.
- 00:39:57I think this is a good strategy.
- 00:39:59So yeah, if you can design your code such
- 00:40:01that it works well in the first instance,
- 00:40:05then that's the best way of going about it.
- 00:40:07So yeah, I guess this is the motivation behind many
- 00:40:10of these biased noise papers.
- 00:40:12As you get more bias, it becomes more classical,
- 00:40:14therefore easier to decode.
- 00:40:16So yeah, that is the best strategy
- 00:40:19if we can find a technique to actually do that.
- 00:40:23AUDIENCE: If I may comment.
- 00:40:25So when you add the circuit level
- 00:40:28noise, then the girth of the graph,
- 00:40:31no matter what the original graph was, it becomes three.
- 00:40:35So the generators for the circuit level noise
- 00:40:39always have-- all of them have three or two.
- 00:40:42The two you can eliminate, but the three remain,
- 00:40:45and that makes it very difficult.
- 00:40:49Basically out of the box belief propagation always fails.
- 00:40:53BP+OSD requires a lot of extra work because the original BP
- 00:41:02fails very often.
- 00:41:03
- 00:41:05MODERATOR: There's a question in the back.
- 00:41:06AUDIENCE: Yeah.
- 00:41:07So what is the argument that BP converges in a linear time?
- 00:41:14You said like BP is a [INAUDIBLE]..
- 00:41:17JOSCHKA ROFFE: Yeah.
- 00:41:18AUDIENCE: How can we guarantee that things--
- 00:41:20iteration converges after one time?
- 00:41:24JOSCHKA ROFFE: OK.
- 00:41:25There are proofs of this.
- 00:41:27Yeah.
- 00:41:27I can't reproduce them on the fly.
- 00:41:30AUDIENCE: Another question: is BP+OSD possibly better
- 00:41:33than minimum weight perfect matching in the toric code
- 00:41:35case?
- 00:41:36JOSCHKA ROFFE: No.
- 00:41:37So yeah, I guess this is the price
- 00:41:40to pay for the fact it's a very general decoder.
- 00:41:42So yeah.
- 00:41:44For the specific case of the surface codes
- 00:41:47having a decoder that's optimized specifically
- 00:41:51for that decoding problem, it can improve the performance
- 00:41:55of the matching decoder by combining it
- 00:41:56with belief propagation.
- 00:41:58So yeah.
- 00:42:00Oscar, who is also here, proposed
- 00:42:03that combining belief propagation [INAUDIBLE]
- 00:42:05decoder improves the performance even more.
- 00:42:10
- 00:42:15AUDIENCE: So I have a question about criteria for a decoder.
- 00:42:18So this decoding problem is an estimation
- 00:42:21of recovery operation from syndrome.
- 00:42:26If I understand correctly, you describe the protocol
- 00:42:28to construct the estimator in this talk.
- 00:42:31But there must be some criteria on the decoder.
- 00:42:34For example, random guess may also provide some estimator,
- 00:42:38but we don't call it as a decoder.
- 00:42:40It must be some criteria to call some procedure as decoder.
- 00:42:45Do you have some criteria in your mind for how
- 00:42:50BP+OSD satisfies the criteria?
- 00:42:52
- 00:42:54JOSCHKA ROFFE: Sorry, I don't really understand the question.
- 00:42:56So criteria for?
- 00:43:00AUDIENCE: Estimating recovery operations as a decoder.
- 00:43:03This estimation needs to satisfy some condition in order
- 00:43:08for this estimator to grow--
- 00:43:10in order to call some protocol.
- 00:43:12JOSCHKA ROFFE: OK.
- 00:43:13Yeah.
- 00:43:14AUDIENCE: What's the criteria?
- 00:43:16JOSCHKA ROFFE: I guess-- yes.
- 00:43:17So in this particular setting, we're
- 00:43:20just assuming that the decoding matrices are classical matrices
- 00:43:24and we're just trying to find the lowest weight
- 00:43:27solution in the hope that this will not introduce a logical error.
- 00:43:32Much clearer and more accurate decoders
- 00:43:35can look at the actual cosets and try and differentiate
- 00:43:38between the logical classes that you have
- 00:43:41and separate the different parts of the code space.
- 00:43:46So some of the tensor network decoders and the surface codes,
- 00:43:50for example, are much more accurate, but also much slower.
- 00:43:53Is that the--
- 00:43:54
- 00:43:57AUDIENCE: OK, you--
- 00:43:59JOSCHKA ROFFE: Perhaps we can--
- 00:44:01AUDIENCE: So maybe this is a basis
- 00:44:05of your decoding procedures.
- 00:44:07Is that right?
- 00:44:08JOSCHKA ROFFE: Yeah.
- 00:44:09I guess the approach here is to make a decoder
- 00:44:12and then see whether it works in terms of the decoding graphs.
- 00:44:15Yeah.
- 00:44:15If it does.
- 00:44:17But yeah, it's very difficult to actually prove things
- 00:44:19about belief propagation.
- 00:44:20AUDIENCE: [INAUDIBLE].
- 00:44:22Thank you.
- 00:44:22
- 00:44:25AUDIENCE: You mentioned union find
- 00:44:27can't be applied to general LDPC codes,
- 00:44:29but there's this generalization for more general codes.
- 00:44:33So is there a combination of belief propagation
- 00:44:36with this general union find?
- 00:44:38Just like belief propagation with matching?
- 00:44:42JOSCHKA ROFFE: Yes.
- 00:44:43So belief propagation plus matching decoders
- 00:44:45have been proposed.
- 00:44:48The generalization of the union find
- 00:44:51algorithm proposed does have the problem that it's
- 00:44:54quite slow because you have to do
- 00:44:56repeated Gauss elimination on all the clusters as it grows.
- 00:44:59AUDIENCE: Yeah.
- 00:45:00Yeah.
- 00:45:00
- 00:45:07AUDIENCE: The algorithms like belief propagation,
- 00:45:09but they also have--
- 00:45:11can they be naturally implemented
- 00:45:12on channels like amplitude damping?
- 00:45:15JOSCHKA ROFFE: Potentially.
- 00:45:16That's not something I've explored, though.
- 00:45:18
- 00:45:21AUDIENCE: So in the measurement circuit
- 00:45:23you show, you always keep the x measurements
- 00:45:26and the z measurements apart.
- 00:45:29And I can imagine that this kind of arrangement
- 00:45:32could be good at correcting one type of errors
- 00:45:38but might be bad on correcting the other type of errors.
- 00:45:44So is there any reason you're not mixing the measurement
- 00:45:50schedules of the two types of errors?
- 00:45:54JOSCHKA ROFFE: For simplicity.
- 00:45:55That was the reason for this particular example, but yes.
- 00:45:58Scheduling is important.
- 00:46:00The actual order in which you measure these syndromes
- 00:46:03can also affect the distance of the decoding graph.
- 00:46:07So yeah.
- 00:46:08In the next talk we may see some discussion about this.
- 00:46:12They explored lots of different ways
- 00:46:15of extracting these stabilizers in several other ways.
- 00:46:19
- 00:46:23AUDIENCE: Some of these machine learning decoders,
- 00:46:25how easy is it to, say, train on one code
- 00:46:28and transfer it to a different code?
- 00:46:30Are they all trained for the specific code
- 00:46:32that they're trying to decode?
- 00:46:33Are they?
- 00:46:34JOSCHKA ROFFE: So yeah.
- 00:46:35My understanding is that for each instance of an LDPC code,
- 00:46:38you'd have to train a specific model.
- 00:46:43Yeah.
- 00:46:44This is another problem with LDPC codes.
- 00:46:46It's just random.
- 00:46:47It's not like a surface code where
- 00:46:49once you figured out how to decode a small patch,
- 00:46:51you can essentially do it everywhere.
- 00:46:52
- 00:46:55Any more questions?
- 00:46:56
- 00:46:59MODERATOR: Yes?
- 00:47:01AUDIENCE: So you said matching based algorithms might not
- 00:47:04work in general cases, but as far as I understand,
- 00:47:11this matching comes from the fact
- 00:47:13that surface code is based on the homology
- 00:47:17and the homology is present in every quantum code in general.
- 00:47:25So does it make sense to find minimum weight cycles
- 00:47:30or paths in general product codes?
- 00:47:35JOSCHKA ROFFE: There are some ways
- 00:47:36of hacking more general homological codes
- 00:47:41to work with matching.
- 00:47:44It's not something I'm too familiar with.
- 00:47:48Amanda Quintavalle, who's sitting to your right,
- 00:47:50is working on that exact problem,
- 00:47:52so it'd be best to discuss with her.
- 00:47:54
- 00:47:57AUDIENCE: I think that the problem
- 00:47:59is how the syndrome graph relates to the minimum weight solution
- 00:48:06of the decoding problem.
- 00:48:08So in the surface code you always have each qubit error
- 00:48:12exciting exactly two syndromes, so you can map
- 00:48:14the problem of finding a minimum weight recovery
- 00:48:18operator to the one of obtaining a minimum weight
- 00:48:22path between the two syndromes.
- 00:48:24But for general codes, where an error can light up more
- 00:48:28than two syndromes, you don't have the same mapping.
- 00:48:33OK.
- 00:48:34It's not [INAUDIBLE], to guess what the number of--
- 00:48:39but there are a lot of matching people in this.
- 00:48:42
- 00:48:46AUDIENCE: Yeah, one quick question
- 00:48:48to poke at the open problem a little bit.
- 00:48:51I mean, I think perfectly decoding to minimum weight
- 00:48:56of a general CSS code is known to be NP-hard.
- 00:49:00Right?
- 00:49:01So the question is where do we find hope
- 00:49:06to look for widely applicable decoders?
- 00:49:12You know what I mean?
- 00:49:13Like, where might we need to look?
- 00:49:19JOSCHKA ROFFE: Well, my hope is that some technique
- 00:49:21from the classical literature, as
- 00:49:24of yet been untested in the quantum setting,
- 00:49:26will work well.
- 00:49:28So OSD, for example, was such a technique.
- 00:49:32But yeah, I don't know specifically
- 00:49:33how it's going to be solved.
- 00:49:35
- 00:49:38MODERATOR: OK.
- 00:49:40Any more questions?
- 00:49:43If not, let's thank Joschka again.
- 00:49:45[APPLAUSE]
- 00:49:49
- Quantum LDPC Codes
- Error Correction
- Belief Propagation
- Circuit Level Noise
- Stabilizer Codes
- Quantum Computation
- Trapping Sets
- OSD
- Decoding Techniques
- Machine Learning