[ITW 2023] Neural belief propagation decoding of quantum LDPC codes using overcomplete check matrices

00:21:11
https://www.youtube.com/watch?v=2QMHzfX55WE

Summary

TLDR The video presentation discusses neural belief propagation decoding of quantum LDPC codes using overcomplete check matrices, presented by a PhD student at the Karlsruhe Institute of Technology. It highlights the complexities and performance challenges of quantum error correction, particularly with stabilizer codes. The speaker reviews the basic definitions, decoding strategies (including quaternary belief propagation), and the introduction of neural approaches to optimize the decoding process. By employing overcomplete check matrices, significant gains in decoding performance are achieved, along with fewer iterations and more parallel operation. The research aims to enhance the effectiveness of quantum error correction and offers openly accessible implementation resources for further exploration of quantum coding techniques.

Takeaways

  • 🧠 **Neural BP Decoders**: Combining belief propagation with neural networks for enhanced decoding performance.
  • 🛠️ **Error Correction Focus**: Targeted efforts on improving quantum error correction methods.
  • 📊 **Performance Gains**: Introduction of overcomplete check matrices leads to better decoding efficiency.
  • 📜 **Background Knowledge**: Understanding stabilizer codes is crucial for quantum error correction.
  • 🎓 **PhD Research**: The work was presented at the Information Theory Workshop 2023.
  • 🔍 **Quaternary BP Decoder**: Examined as a more effective alternative to traditional binary approaches.
  • 💻 **Open Source**: Implementation resources available on GitHub for public access.
  • 🚀 **Low Latency Decoding**: New methods allow for faster operations in quantum settings.
  • 🔗 **Input-Output Optimization**: Emphasis on efficient message passing in decoding algorithms.
  • 📈 **Training Challenges**: Addressed the local minima issue in neural decoding optimization.

Timeline

  • 00:00:00 - 00:05:00

The speaker introduces their research on decoding quantum LDPC codes using overcomplete check matrices, highlighting its relevance to error correction in quantum computers. The speaker outlines the challenges faced when decoding quantum codes and sets the stage for a new decoding method that combines neural belief propagation with overcomplete check matrices.

  • 00:05:00 - 00:10:00

This portion provides insight into the decoding process for stabilizer codes. It explains the operation of a syndrome-based decoder, emphasizing that decoding succeeds as long as the combined error-plus-correction operator is a stabilizer. The relationship between the code's dual matrix and verifying decoding success is established as a key principle.

  • 00:10:00 - 00:15:00

In the next segment, the speaker delves deeper into the refined BP4 decoder and its mechanisms, illustrating the significance of initialization and message passing between variable and check nodes. This segment also introduces neural-network-inspired iterative decoding with trainable weights, potentially improving decoder performance.

  • 00:15:00 - 00:21:11

    The final part discusses the implementation of overcomplete check matrices in enhancing decoder performance. The results indicate that these matrices improve decoding efficiency and reduce the number of iterations needed. The talk concludes by summarizing the advances made in the decoding process and providing resources for further information on the research presented.



Video Q&A

  • What is the main focus of the research presented?

The research focuses on neural belief propagation decoding of quantum LDPC codes using overcomplete check matrices.

What is a stabilizer code in quantum error correction?

An [[n,k]] stabilizer code uses n physical qubits to encode k logical qubits.

  • What decoding approach is discussed for quantum LDPC codes?

    The presentation discusses using belief propagation (BP) decoders, particularly a quaternary BP decoder.

How do overcomplete check matrices benefit decoding performance?

They provide a decoding gain, reduce the number of iterations needed, and avoid the complications of training a very deep neural network.

  • Where can the implementation of the presented techniques be found?

    The implementation can be found on a GitHub repository linked in the video.

  • What was a key challenge faced when training the neural BP decoder?

    The key challenge was the presence of many local minima in the loss function due to the degeneracy of quantum codes.

Transcript (en)
  • 00:00:00
    hello everybody in this video I'm going
  • 00:00:02
    to talk about our work neural belief
  • 00:00:04
    propagation decoding of quantum LDPC
  • 00:00:06
    codes using overcomplete check matrices
  • 00:00:09
    I'm currently a PhD
  • 00:00:12
    student at the Karlsruhe Institute of
  • 00:00:13
    Technology and this is a joint
  • 00:00:17
    work with my colleagues we have
  • 00:00:19
    presented this at the Information Theory
  • 00:00:21
    Workshop 2023
  • 00:00:23
    okay
  • 00:00:24
    I would like to start with some
  • 00:00:27
    background the big picture of this work
  • 00:00:29
    so we are looking at
  • 00:00:32
    error correction for quantum computers
  • 00:00:34
    and more specifically we look at
  • 00:00:37
    decoding quantum LDPC codes which are a
  • 00:00:40
    promising candidate for this task and
  • 00:00:44
    quantum LDPC codes can be decoded just
  • 00:00:47
    like their classical counterparts
  • 00:00:50
    with a belief propagation decoder the problem
  • 00:00:53
    is that due to the special properties of
  • 00:00:56
    quantum codes the decoding performance
  • 00:00:59
    is usually not satisfying
  • 00:01:02
    and
  • 00:01:03
    inspired by the success of neural BP for
  • 00:01:07
    classical codes people have applied
  • 00:01:10
    neural BP to quantum LDPC codes but so
  • 00:01:13
    far only on top of the suboptimal binary
  • 00:01:18
    belief propagation decoders
  • 00:01:21
    and this is not as good as the
  • 00:01:25
    quaternary BP decoder which we will be
  • 00:01:29
    looking at in this work
  • 00:01:31
    and also we want to combine this with
  • 00:01:35
    the overcomplete check matrices which
  • 00:01:37
    give us both decoding gain and
  • 00:01:39
    benefits in training
  • 00:01:41
    before I talk about the details I would
  • 00:01:45
    like to start by defining some notations we
  • 00:01:48
    call a stabilizer code which uses n
  • 00:01:51
    physical qubits to
  • 00:01:55
    encode k logical qubits an [[n,k]]
  • 00:01:59
    stabilizer code
  • 00:02:01
    and to derive a stabilizer code we
  • 00:02:03
    can work with additive codes over
  • 00:02:06
    GF(4) and we want to define them with
  • 00:02:09
    their check matrix S
  • 00:02:12
    which is an m x n matrix and m is n minus
  • 00:02:15
    k the number of checks so we call
  • 00:02:19
    every row s_i of S a check of the code
  • 00:02:23
    and each of the s_i corresponds to a
  • 00:02:27
    stabilizer generator S_i
  • 00:02:30
    which can be used to produce a
  • 00:02:32
    syndrome for us and for example we can
  • 00:02:35
    use this mapping from GF(4) elements to
  • 00:02:39
    the Pauli operators and
  • 00:02:43
    to make sure that this
  • 00:02:45
    stabilizer code is indeed a valid
  • 00:02:49
    code we need to make sure that S is self
  • 00:02:51
    orthogonal with respect to the trace
  • 00:02:54
    inner product or to say the Hermitian
  • 00:02:56
    inner product this corresponds to the
  • 00:02:59
    commutativity of the stabilizer group
  • 00:03:02
    um
  • 00:03:03
    what do I mean by that
  • 00:03:05
    so
  • 00:03:08
    we say that S is self orthogonal that
  • 00:03:11
    means any two rows of S
  • 00:03:15
    are orthogonal to each other more specifically
  • 00:03:18
    that means that if we take the sum of
  • 00:03:21
    the element-wise trace inner products
  • 00:03:23
    the sum is zero modulo two and this
  • 00:03:28
    element-wise trace inner product is given
  • 00:03:31
    in this table and we can see that this
  • 00:03:34
    result is exactly the same as the
  • 00:03:37
    commutativity relationship of the Pauli
  • 00:03:40
    operators that is to say if
  • 00:03:43
    in this presentation I say two
  • 00:03:46
    rows or two operators commute
  • 00:03:49
    with each other then it means that the
  • 00:03:52
    corresponding GF(4) elements have trace
  • 00:03:56
    inner product zero
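
For concreteness, here is a minimal sketch (mine, not from the talk) of this commutation check, assuming the usual (x|z) binary representation of GF(4)/Pauli symbols; all names are illustrative.

```python
import numpy as np

# Sketch only: represent each GF(4) symbol (equivalently, a Pauli I/X/Z/Y)
# as a pair of bits, so a length-n row becomes a binary vector (x|z) of
# length 2n.

def trace_inner_product(a, b, n):
    """GF(4) trace inner product realized as the symplectic form
    a_x . b_z + a_z . b_x (mod 2); it is zero iff the Paulis commute."""
    ax, az = a[:n], a[n:]
    bx, bz = b[:n], b[n:]
    return int(ax @ bz + az @ bx) % 2

def is_self_orthogonal(S, n):
    """True iff every pair of rows of S commutes, i.e. the rows generate a
    valid (abelian) stabilizer group."""
    return all(trace_inner_product(S[i], S[j], n) == 0
               for i in range(len(S)) for j in range(len(S)))
```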
  • 00:03:59
    okay let's look at an example code this
  • 00:04:02
    is the [[7,1]] Steane code constructed from
  • 00:04:04
    the [7,4] BCH code which is classical and
  • 00:04:09
    you can see that this is the check
  • 00:04:10
    matrix of this code which has six rows
  • 00:04:14
    this corresponds to the six stabilizer
  • 00:04:17
    generators which generate the
  • 00:04:19
    stabilizer group
  • 00:04:20
    and more specifically we call a
  • 00:04:24
    stabilizer code which has a check matrix
  • 00:04:26
    in this form so you can see like this
  • 00:04:28
    one where H_X and H_Z are both
  • 00:04:32
    binary matrices then we call it
  • 00:04:35
    a CSS code which is named after the
  • 00:04:39
    inventors of this code and
  • 00:04:44
    we are looking at CSS codes and they
  • 00:04:48
    have a sparse check matrix which means
  • 00:04:51
    that the row and column weights are
  • 00:04:52
    upper bounded by a small constant which
  • 00:04:55
    is independent of the block length
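
To illustrate the CSS form just described, here is a hedged sketch building the [[7,1]] Steane code check matrix from the [7,4] Hamming/BCH parity-check matrix, verifying validity with `is_self_orthogonal` from the sketch above.

```python
import numpy as np  # assumes trace_inner_product / is_self_orthogonal above

# The [[7,1]] Steane code in CSS form: S = [[H_X, 0], [0, H_Z]] in the (x|z)
# convention, with H_X = H_Z = the [7,4] Hamming parity-check matrix.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n = 7
S = np.block([[H, np.zeros_like(H)],
              [np.zeros_like(H), H]])  # 6 rows = 6 stabilizer generators

assert is_self_orthogonal(S, n)        # H H^T = 0 (mod 2), so S is valid
```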
  • 00:04:59
    okay now let's look at decoding quantum
  • 00:05:01
    LDPC codes this psi is the logical
  • 00:05:05
    quantum state that we are trying to
  • 00:05:07
    protect it will be encoded into an n-qubit
  • 00:05:10
    state
  • 00:05:12
    then it will go through a
  • 00:05:15
    quantum channel for example if it is
  • 00:05:17
    sitting in a quantum memory it
  • 00:05:20
    could be the
  • 00:05:22
    environment that causes
  • 00:05:25
    corruption over time so this can be
  • 00:05:28
    modeled as the depolarizing channel
  • 00:05:31
    which introduces X, Z, and Y
  • 00:05:34
    errors with equal probability epsilon
  • 00:05:37
    over three
  • 00:05:38
    and then the corrupted quantum state is E
  • 00:05:43
    applied to it which we can pass into a syndrome
  • 00:05:47
    measurement circuit which produces the
  • 00:05:52
    syndrome which is a length-m binary
  • 00:05:56
    vector then we can use that to
  • 00:06:00
    decode by using a syndrome belief
  • 00:06:04
    propagation decoder which operates on
  • 00:06:06
    the check matrix of this code and gives
  • 00:06:09
    us an estimate
  • 00:06:11
    E_hat of the error and we write the
  • 00:06:15
    calligraphic
  • 00:06:17
    letter to denote that this is
  • 00:06:20
    a Pauli operator because in the quantum
  • 00:06:24
    system we use the quantum operators so
  • 00:06:29
    this E_hat will be applied as the reverse
  • 00:06:33
    operation or to say as the correction of
  • 00:06:36
    the error E
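
A small illustrative sketch of this classical model of the channel and syndrome measurement (my own, reusing `trace_inner_product` from above):

```python
import numpy as np

def depolarizing_error(n, eps, rng=np.random.default_rng(1)):
    """Sample an n-qubit Pauli error in (x|z) form: X, Z, or Y each with
    probability eps/3, identity otherwise."""
    kind = rng.choice(4, size=n, p=[1 - eps, eps / 3, eps / 3, eps / 3])
    x = ((kind == 1) | (kind == 3)).astype(int)  # X or Y sets the x part
    z = ((kind == 2) | (kind == 3)).astype(int)  # Z or Y sets the z part
    return np.concatenate([x, z])

def syndrome(S, e, n):
    """Length-m binary syndrome: bit j records whether the error
    anti-commutes with stabilizer generator j."""
    return np.array([trace_inner_product(row, e, n) for row in S])
```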
  • 00:06:38
    so if the decoding is
  • 00:06:42
    successful we will have E_hat E applied to the
  • 00:06:45
    encoded state being the same state so as
  • 00:06:48
    I write here so notice this is different
  • 00:06:51
    from classical coding where we actually
  • 00:06:53
    need to estimate e_hat to be the same as e
  • 00:06:57
    and in this case because of the
  • 00:07:00
    definition of stabilizer codes we only
  • 00:07:02
    need to make sure that E_hat E is a
  • 00:07:05
    stabilizer and because applying a
  • 00:07:08
    stabilizer to a quantum state does not
  • 00:07:10
    change the state we will have a
  • 00:07:13
    successful decoding
  • 00:07:15
    and to check this so in optimizing a
  • 00:07:19
    decoder we want to evaluate that the
  • 00:07:22
    decoding is indeed successful what we
  • 00:07:24
    can do is to use the dual matrix of S
  • 00:07:28
    which we call S_dual so this is the
  • 00:07:31
    kernel of S with respect to the trace
  • 00:07:34
    inner product that we defined earlier
  • 00:07:37
    and because S_dual is the kernel of S
  • 00:07:40
    it is sufficient to say if e
  • 00:07:44
    plus e_hat is orthogonal to every
  • 00:07:48
    row of S_dual then we can say that e
  • 00:07:53
    plus e_hat is indeed a stabilizer
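
This success criterion translates directly into code; a minimal sketch, again reusing `trace_inner_product` and assuming `S_dual` is given as a binary matrix in (x|z) form:

```python
def decoding_success(e, e_hat, S_dual, n):
    """Decoding succeeds iff the residual error e + e_hat commutes with
    every row of the dual matrix, i.e. it lies in the stabilizer group."""
    residual = (e + e_hat) % 2
    return all(trace_inner_product(residual, row, n) == 0 for row in S_dual)
```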
  • 00:07:57
    okay that is the framework of the
  • 00:08:00
    decoding now let's take a closer look at
  • 00:08:02
    the belief propagation decoder
  • 00:08:06
    this is still the example code that we
  • 00:08:08
    introduced earlier the Steane code
  • 00:08:12
    first of all we need to draw a
  • 00:08:13
    Tanner graph in this case we have edges
  • 00:08:17
    of different colors which correspond to
  • 00:08:19
    the GF(4) coefficient of the element
  • 00:08:23
    then
  • 00:08:25
    we have different options to decode the
  • 00:08:28
    first option is the simple one we could
  • 00:08:31
    treat
  • 00:08:34
    H_X and H_Z as two separate binary
  • 00:08:37
    matrices which enables us to use the
  • 00:08:40
    simple binary BP decoder however in this
  • 00:08:44
    case some errors are not possible to
  • 00:08:46
    correct and therefore the error
  • 00:08:48
    floor is higher
  • 00:08:50
    and the performance is not so good
  • 00:08:52
    the second option we have is to use
  • 00:08:55
    the quaternary belief propagation decoder
  • 00:08:57
    which jointly decodes the X and Z errors
  • 00:09:01
    and will have a better performance but
  • 00:09:04
    also a higher complexity because of the
  • 00:09:07
    message passing which uses vector
  • 00:09:10
    messages
  • 00:09:12
    fortunately we have the recently
  • 00:09:15
    proposed refined BP4 decoder by Kuo
  • 00:09:19
    and Lai and in this refined BP4 we can
  • 00:09:24
    compress the vector messages to scalars and
  • 00:09:27
    also the output of the decoding will be
  • 00:09:30
    exactly the same as the original BP4 decoder
  • 00:09:33
    but the complexity is only comparable to
  • 00:09:36
    the binary BP2 decoder which is beneficial if we
  • 00:09:40
    want to introduce neural belief
  • 00:09:42
    propagation decoding
  • 00:09:44
    okay now I will introduce the belief
  • 00:09:49
    propagation decoding with the refined BP
  • 00:09:53
    first of all we need to do the
  • 00:09:54
    initialization
  • 00:09:56
    for the variable nodes the
  • 00:10:00
    only input is the channel statistics
  • 00:10:02
    which we estimate from the channel note
  • 00:10:06
    that because we cannot look at
  • 00:10:08
    the quantum state we will not have any
  • 00:10:10
    a priori values
  • 00:10:14
    for the individual qubits but instead just a
  • 00:10:18
    constant for every node then we have a
  • 00:10:23
    value for
  • 00:10:25
    errors of the three types then what we
  • 00:10:29
    can do is to use the specific belief
  • 00:10:32
    quantization which is introduced in the
  • 00:10:35
    refined BP to compress the message into a
  • 00:10:38
    scalar message
  • 00:10:39
    so I will not go into the details here but
  • 00:10:43
    we can note that the messages that
  • 00:10:48
    we are passing in the refined BP are the
  • 00:10:51
    beliefs of the i-th error commuting
  • 00:10:54
    with
  • 00:10:56
    the GF(4) element of the edge it is being
  • 00:10:59
    passed to
  • 00:11:01
    okay that's the initialization then we
  • 00:11:04
    will pass the messages to the check nodes
  • 00:11:06
    which perform the usual box
  • 00:11:10
    plus operation over the extrinsic
  • 00:11:12
    messages and the syndrome is used to
  • 00:11:18
    flip the sign of the message if a
  • 00:11:20
    certain check node is said to be
  • 00:11:22
    unsatisfied by the measurement we did
  • 00:11:26
    earlier
  • 00:11:28
    then we can pass the messages
  • 00:11:31
    to the variable nodes which take the sum
  • 00:11:34
    of the channel information together with
  • 00:11:37
    the extrinsic messages then again we do
  • 00:11:39
    the belief quantization to compress them
  • 00:11:42
    into scalar messages hard decisions are
  • 00:11:45
    also performed to estimate the error here
  • 00:11:47
    and we repeat this process
  • 00:11:51
    and so on until we have the
  • 00:11:54
    syndrome matched or the maximum number
  • 00:11:57
    of iterations has been reached
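
To make the schedule concrete, here is a schematic binary syndrome-BP loop with flooding scheduling (an illustration only; the refined BP4 of Kuo and Lai additionally performs the belief quantization that compresses the GF(4) vector messages into the scalars described above):

```python
import numpy as np

def syndrome_bp(H, syn, llr0, max_iter=32):
    """Schematic syndrome-based BP over a binary check matrix H, with syn a
    binary numpy syndrome vector and a constant channel LLR llr0 per node."""
    m, n = H.shape
    v2c = llr0 * H.astype(float)                 # variable-to-check messages
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # check-node update: box-plus over extrinsic messages, with the
        # sign flipped for unsatisfied checks (syndrome bit = 1)
        t = np.tanh(np.clip(v2c, -20, 20) / 2)
        t[H == 0] = 1.0
        prod = np.prod(t, axis=1, keepdims=True)
        c2v = 2 * np.arctanh(np.clip(prod / t, -0.999999, 0.999999))
        c2v *= (1 - 2 * syn)[:, None]
        c2v[H == 0] = 0.0
        # variable-node update: channel LLR plus extrinsic check messages
        total = llr0 + c2v.sum(axis=0)
        v2c = (total[None, :] - c2v) * H
        # hard decision and early stop once the syndrome is matched
        e_hat = (total < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, syn):
            break
    return e_hat
```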
  • 00:12:00
    now we are ready to introduce our neural
  • 00:12:02
    belief propagation decoder so the idea is
  • 00:12:06
    if we unroll the belief propagation
  • 00:12:09
    decoder it will look like a neural
  • 00:12:11
    network and what we can do is to add
  • 00:12:13
    trainable weights which are dependent on
  • 00:12:16
    the iteration l
  • 00:12:19
    so more specifically in our case
  • 00:12:22
    for the variable
  • 00:12:25
    nodes we introduce weights on the channel
  • 00:12:26
    information so we can see that
  • 00:12:30
    every variable node has a weight
  • 00:12:33
    together with the channel statistics as
  • 00:12:35
    input and
  • 00:12:38
    the weights depend on the
  • 00:12:41
    index of the variable node and the
  • 00:12:43
    index of the
  • 00:12:47
    current iteration and also for the check
  • 00:12:49
    node we introduce a weight that is
  • 00:12:52
    applied directly to the message of the
  • 00:12:54
    check node
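
Schematically, the trainable weights enter the updates of the loop above roughly as follows; this is a hedged sketch with illustrative names (`w_ch[l]` holds per-variable-node weights on the channel LLRs in iteration l, `w_cn[l]` per-check-node weights on the messages):

```python
import numpy as np

def weighted_vn_update(llr0, c2v, H, w_ch, w_cn, l):
    """Neural-BP style variable-node update: trainable weights on the
    channel information and on the check-node messages, both depending on
    the iteration index l. Unrolled over iterations, this is a neural
    network whose weights can be trained with stochastic gradient descent."""
    c2v = w_cn[l][:, None] * c2v                 # weight check messages
    total = w_ch[l] * llr0 + c2v.sum(axis=0)     # weight channel information
    v2c = (total[None, :] - c2v) * H             # extrinsic messages out
    return total, v2c
```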
  • 00:12:56
    then we can optimize these weights using
  • 00:12:59
    stochastic gradient descent by
  • 00:13:01
    minimizing a loss function which we
  • 00:13:04
    propose here
  • 00:13:05
    this is the loss per error pattern and
  • 00:13:09
    it is a bit big but I will walk you
  • 00:13:12
    through it step by step
  • 00:13:15
    this is the notation in case you
  • 00:13:17
    forgot some of it
  • 00:13:18
    so what we are doing here is to use
  • 00:13:22
    the condition for verifying the decoding
  • 00:13:26
    success that we introduced earlier we
  • 00:13:29
    said earlier that if we have e plus e
  • 00:13:32
    hat being orthogonal to every row of
  • 00:13:36
    S_dual it means that e plus e_hat is
  • 00:13:39
    inside the stabilizer space and then
  • 00:13:43
    the decoding is successful what we
  • 00:13:47
    do here is to break this condition into
  • 00:13:50
    element-wise sums
  • 00:13:52
    that is to say for a certain row s_j of
  • 00:13:56
    S_dual we compute the probability of
  • 00:14:01
    e_i plus e_hat_i anti-commuting
  • 00:14:05
    with the i-th element of s_j dual
  • 00:14:09
    which can be estimated in the variable
  • 00:14:13
    node update then because we know that
  • 00:14:17
    the number of
  • 00:14:21
    anti-commuting elements here should be
  • 00:14:24
    close to an even number we use the f
  • 00:14:27
    function here which does something like
  • 00:14:30
    a soft modulo two then
  • 00:14:34
    if this row s_j dual is satisfied we
  • 00:14:38
    have the loss function for this row
  • 00:14:41
    approaching zero then we can sum the
  • 00:14:44
    loss over all rows of S_dual
  • 00:14:47
    to get the loss of this error pattern
  • 00:14:51
    and then we can apply stochastic
  • 00:14:54
    gradient descent by training over
  • 00:14:57
    small mini batches
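
One plausible reading of this loss in code, assuming the "soft modulo two" is the standard soft-XOR (odd-parity probability); the exact f used in the paper may differ:

```python
import numpy as np

def soft_mod2(p):
    """Probability that an odd number of independent events with
    probabilities p occur -- a differentiable 'soft modulo two'."""
    return 0.5 * (1.0 - np.prod(1.0 - 2.0 * np.asarray(p)))

def loss_per_pattern(P):
    """P[j, i]: estimated probability that the i-th component of e + e_hat
    anti-commutes with the i-th element of dual row j. A satisfied row
    (even parity) contributes a loss near zero; sum over all dual rows."""
    return sum(soft_mod2(row) for row in P)
```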
  • 00:14:59
    however we found that this plain
  • 00:15:03
    NBP decoder does not give a good
  • 00:15:06
    performance
  • 00:15:07
    indeed the gain is only negligible if we
  • 00:15:10
    apply it directly because we see that
  • 00:15:13
    the loss decreases but it will soon reach
  • 00:15:17
    some plateau and stop decreasing and
  • 00:15:21
    the reason is that this loss
  • 00:15:24
    function has many local minima and it
  • 00:15:27
    is not really a problem of the loss
  • 00:15:29
    function but because of the degeneracy
  • 00:15:33
    of the quantum codes
  • 00:15:34
    so what we do here is
  • 00:15:37
    introduce something additional called
  • 00:15:39
    overcomplete check matrices it looks
  • 00:15:41
    like this here what we do is we
  • 00:15:44
    modify the check matrix this is the
  • 00:15:46
    original full-rank matrix and this is
  • 00:15:49
    the overcomplete one so you can see
  • 00:15:51
    that the original matrix is kept here
  • 00:15:54
    but we additionally introduce some
  • 00:15:57
    redundant rows
  • 00:15:59
    which are low-weight rows selected from
  • 00:16:02
    the row space of the original matrix
  • 00:16:05
    and this method has been investigated
  • 00:16:09
    in classical coding and we use it here
  • 00:16:13
    because for one thing it gives us a good
  • 00:16:15
    decoding gain which means that we are
  • 00:16:17
    closer to the optimum
  • 00:16:20
    and second it reduces the number of
  • 00:16:22
    iterations which means more operations
  • 00:16:24
    can be done in parallel which is
  • 00:16:27
    very nice where
  • 00:16:29
    low latency decoding is desired as in
  • 00:16:32
    quantum error correction and the third
  • 00:16:36
    thing is that we can avoid training a
  • 00:16:38
    very deep neural network which can be
  • 00:16:40
    problematic
  • 00:16:41
    and
  • 00:16:43
    to construct the overcomplete check matrix
  • 00:16:46
    we need to search for low-weight words
  • 00:16:47
    in the row space of H_X and H_Z which
  • 00:16:51
    is itself a hard problem but
  • 00:16:53
    we have algorithms such as
  • 00:16:56
    the algorithm by Leon which enable
  • 00:17:00
    us to do this for block lengths up to a
  • 00:17:02
    few thousand very easily
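
A toy sketch of the construction (my own illustration: real constructions find low-weight words with dedicated algorithms such as Leon's; the random sparse combinations below only show the shape of the result):

```python
import numpy as np

def overcomplete_matrix(H, n_extra, max_weight, rng=np.random.default_rng(0)):
    """Keep the original rows of H and append n_extra redundant low-weight
    rows drawn from its row space (may loop long if max_weight is small)."""
    m, _ = H.shape
    extra = []
    while len(extra) < n_extra:
        mask = rng.random(m) < 2.0 / m            # combine a few random rows
        row = H[mask].sum(axis=0) % 2
        if 0 < row.sum() <= max_weight:
            extra.append(row)
    return np.vstack([H] + extra)
```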
  • 00:17:04
    okay now let's look at how it works we
  • 00:17:08
    consider generalized bicycle codes
  • 00:17:11
    because for one thing they are famous
  • 00:17:13
    codes and the second thing is that they
  • 00:17:15
    naturally have a set of
  • 00:17:17
    overcomplete checks
  • 00:17:20
    note that no training has yet been
  • 00:17:23
    applied on this slide and we only
  • 00:17:27
    look at plain
  • 00:17:29
    BP4 decoding with 32 iterations if we
  • 00:17:32
    don't specify the number of iterations
  • 00:17:35
    and flooding scheduling is used
  • 00:17:39
    so here is a code of length 48
  • 00:17:43
    encoding six logical qubits
  • 00:17:46
    and we see that this line is the
  • 00:17:49
    original decoding performance and now we
  • 00:17:52
    expand the check matrix to 2000 rows
  • 00:17:55
    which is very big compared to the
  • 00:17:58
    original one but we can see that the
  • 00:18:01
    decoding performance is much better and
  • 00:18:03
    also we reduce the number of iterations
  • 00:18:07
    needed to only three
  • 00:18:10
    for another code with a lower
  • 00:18:14
    rate we also observe similar
  • 00:18:17
    behavior here we have 800 rows in
  • 00:18:21
    the overcomplete check matrix and here we
  • 00:18:25
    only use six iterations but again it is
  • 00:18:27
    similar
  • 00:18:28
    we also have results for two bigger
  • 00:18:32
    codes here we can observe a similar gain
  • 00:18:35
    but here we did not find a large
  • 00:18:39
    number of redundant checks for these
  • 00:18:42
    codes so we only had 28 low-weight
  • 00:18:45
    checks but we can still get a good
  • 00:18:49
    performance gain if we keep the number
  • 00:18:52
    of iterations unchanged
  • 00:18:55
    now let's look at the neural belief
  • 00:18:57
    propagation decoding results
  • 00:19:00
    these are the two small codes that I
  • 00:19:02
    showed earlier with the plain BP4
  • 00:19:05
    decoder with the original BP4 and the
  • 00:19:07
    overcomplete one and here we can see this is
  • 00:19:11
    the
  • 00:19:12
    serial BP plus ordered
  • 00:19:16
    statistics decoding as post processing
  • 00:19:20
    if we use overcomplete check matrices
  • 00:19:23
    they perform similarly
  • 00:19:25
    but we have the advantage that we can
  • 00:19:29
    finish the decoding in a very small
  • 00:19:31
    number of iterations
  • 00:19:33
    then we can apply training based on the
  • 00:19:37
    BP over the overcomplete check matrix we
  • 00:19:40
    can see that we can still get another
  • 00:19:44
    good decoding gain
  • 00:19:46
    and it is interesting that after
  • 00:19:50
    training we actually also see that the
  • 00:19:53
    weights for the low-weight checks are bigger
  • 00:19:57
    than the weights for the high-weight checks
  • 00:20:00
    which matches our expectation
  • 00:20:04
    okay this is the end of my presentation
  • 00:20:07
    I would like to conclude now so in this
  • 00:20:09
    work we investigated neural belief
  • 00:20:12
    propagation decoding for quantum
  • 00:20:14
    LDPC codes and we combined it
  • 00:20:18
    with overcomplete check matrices
  • 00:20:21
    and together we get a very good decoding
  • 00:20:25
    gain which is better than the state of
  • 00:20:27
    the art
  • 00:20:29
    moreover if you want to see the
  • 00:20:33
    implementation of the neural belief
  • 00:20:35
    propagation decoder you can go
  • 00:20:38
    to this GitHub
  • 00:20:40
    repository or scan the QR code or if you
  • 00:20:44
    are just getting into
  • 00:20:46
    quantum coding and you
  • 00:20:50
    want to see how you can simulate
  • 00:20:52
    quantum error correction codes you can
  • 00:20:55
    also look here
  • 00:20:57
    okay thank you for your attention
  • 00:20:59
    this is all of my presentation if you
  • 00:21:02
    have any questions or comments feel free
  • 00:21:04
    to leave them in the
  • 00:21:07
    comments below or contact us bye
Tags
  • quantum error correction
  • LDPC codes
  • neural decoding
  • propagation decoder
  • error correction
  • stabilizer codes
  • overcomplete check matrices
  • quaternary BP decoder
  • information theory
  • quantum computing