But what is a Fourier series? From heat flow to drawing with circles | DE4

00:24:47
https://www.youtube.com/watch?v=r6sGWTCMz2k

Summary

TL;DR: This video explores the mathematics of complex Fourier series and their use in creating intricate animations by summing rotating vectors. Each vector rotates at a constant integer frequency, and added tip to tail they can trace out a wide array of shapes over time. The discussion starts from the heat equation, linking the history of Fourier series to real-world phenomena. Complex numbers and exponentials streamline the calculations and allow real-time visualization of how functions decompose into simpler oscillations. The video emphasizes how tuning the initial conditions shapes the outcome, and why infinite sums are needed to express certain functions, such as discontinuous ones.

Takeaways

  • 📈 Complex Fourier series use rotating vectors to create intricate shapes.
  • 🧊 The heat equation is foundational in understanding Fourier series.
  • 🔗 Fourier series connects mathematical theory to real-world applications.
  • ⚙️ Infinite sums allow for expressing discontinuous functions.
  • 🌀 Complex exponentials simplify calculations in Fourier series.
  • 🔄 Initial conditions can drastically alter outcomes in Fourier representations.
  • 📊 Each coefficient in a Fourier series is determined through integral calculations.
  • 📉 High-frequency terms decay faster, smoothing complex functions over time.
  • 🎨 Animations are generated by calculating Fourier series coefficients from paths.
  • 💡 Understanding these concepts paves the way for solving differential equations.

Timeline

  • 00:00:00 - 00:05:00

    The video introduces the complex Fourier series, showing how individually simple rotating vectors combine to trace complex shapes over time. The viewer is invited to appreciate an intricate animation built from 300 arrows, which underscores how surprisingly coordinated these simple motions are in forming detailed figures, and how completely the mathematics lets us control the process.

  • 00:05:00 - 00:10:00

    The narrative turns to the origins of Fourier series in solving the heat equation: when the initial temperature distribution of a rod is a cosine wave, its evolution is simple, with the wave just scaling down exponentially over time. Combining such waves reveals a critical property: higher-frequency oscillations decay faster, so sums of waves smooth out over time, and the full evolution of a heat distribution is captured by the decay rates of its frequency components.

  • 00:10:00 - 00:15:00

    The video emphasizes Fourier's pivotal question: can an arbitrary function, like a step function, be expressed as a sum of sine waves? Even though finite sums cannot replicate discontinuities, Fourier's infinite series can converge to them exactly, showcasing the power of limits when modeling phenomena such as heat flow with awkward initial conditions.

  • 00:15:00 - 00:24:47

    Finally, it explains the advancements in understanding Fourier series through complex functions, particularly using the complex exponential as a foundational element. The computation of coefficients for representing arbitrary functions as sums of these rotating vectors is clarified, leading to practical applications in numerical integration and graphics rendering, while foreshadowing the importance of exponential functions and complex numbers in solving differential equations.

Video Q&A

  • What is a complex Fourier series?

    A complex Fourier series is a representation of a function as a sum of rotating vectors, facilitating the creation of intricate shapes over time.

  • How does the heat equation relate to Fourier series?

    The heat equation describes how a temperature distribution evolves over time, and Fourier series provide the tool for expressing arbitrary initial conditions so that the equation can be solved.

  • Why is Fourier series considered important?

    Fourier series allow complex functions to be broken down into simpler oscillations, making them invaluable in mathematics, physics, and engineering.

  • What is an infinite sum in the context of Fourier series?

    An infinite sum is the limit of the partial sums of infinitely many terms; in Fourier series, such limits can equal functions, like discontinuous step functions, that no finite sum of waves could.

  • How are coefficients in a Fourier series determined?

    Each coefficient is found by multiplying the function by a complex exponential that holds the corresponding rotating vector still, then integrating (averaging) over the input range, which kills every other term; see the formula displayed after this Q&A list.

  • What role do complex numbers play in Fourier series?

    Complex numbers provide a cleaner way to handle calculations in Fourier series by representing the values as rotating vectors in the complex plane.

  • What is the impact of different initial conditions in Fourier series?

    Adjusting the initial size and angle of each rotating vector, which is to say the initial conditions, produces a wide array of shapes and functions, showcasing the flexibility of Fourier series.

  • Can all functions be represented by a Fourier series?

    Most functions encountered in practice can be represented, although convergence requires certain technical conditions, which is the kind of question real analysis is built to answer.

  • How are these animations generated in the video?

    The animations are generated by calculating coefficients of the Fourier series for predefined paths, typically defined in SVG format.

  • What is the significance of the exponential function in Fourier series?

    The exponential function, particularly the complex exponential, is fundamental in expressing oscillations and solving differential equations.
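
For reference, the coefficient formula mentioned above, in the notation developed in the transcript below (with the input interval normalized to [0, 1]):

```latex
c_n = \int_0^1 f(t)\, e^{-n \cdot 2\pi i t}\, dt
```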

Subtitles (en)
  • 00:00:05
    Here, we look at the math behind an animation like this one,
  • 00:00:08
    what's known as a complex Fourier series.
  • 00:00:11
    Each little vector is rotating at some constant integer frequency,
  • 00:00:15
    and when you add them together, tip to tail, the final tip draws out some shape over time.
  • 00:00:21
    By tweaking the initial size and angle of each vector,
  • 00:00:24
    we can make it draw pretty much anything we want, and here you'll see how.
  • 00:00:31
    Before diving into it all, I want you to take
  • 00:00:33
    a moment to just linger on how striking this is.
  • 00:00:37
    This particular animation has 300 rotating arrows in total.
  • 00:00:41
    Go full screen for this if you can, the intricacy is worth it.
  • 00:00:50
    Think about this, the action of each individual arrow is perhaps
  • 00:00:54
    the simplest thing you could imagine, rotation at a steady rate.
  • 00:00:58
    And yet the collection of all added together is anything but simple,
  • 00:01:02
    and the mind-boggling complexity is put into an even sharper focus the farther we
  • 00:01:06
    zoom in, revealing the contributions of the littlest, quickest,
  • 00:01:09
    and downright frenetic arrows.
  • 00:01:12
    When you consider the chaotic frenzy you're looking at,
  • 00:01:15
    and the clockwork rigidity underlying all the motions,
  • 00:01:18
    it's bizarre how the swarm acts with a kind of coordination to trace
  • 00:01:21
    out some very specific shape.
  • 00:01:23
    And unlike much of the emergent complexity you find elsewhere in nature,
  • 00:01:27
    this is something that we have the math to describe and to control completely.
  • 00:01:31
    Just by tuning the starting conditions, nothing more,
  • 00:01:34
    we can make this swarm conspire in all of the right ways to draw anything you want,
  • 00:01:39
    provided you have enough little arrows.
  • 00:01:42
    What's even crazier is that the ultimate formula for all of this is incredibly short.
  • 00:01:52
    Now often, Fourier series are described in terms of something that looks a little
  • 00:01:56
    different, functions of real numbers being broken down as a sum of sine waves.
  • 00:02:01
    That turns out to be a special case of this more general rotating vector
  • 00:02:04
    phenomenon that we'll build up to, but it's where Fourier himself started,
  • 00:02:07
    and there's good reason for us to start the story there as well.
  • 00:02:11
    Technically, this is the third video in a sequence about the heat equation,
  • 00:02:14
    what Fourier was working on when he developed his big idea.
  • 00:02:18
    I would like to teach you about Fourier series in a way that doesn't depend on
  • 00:02:21
    you coming from those chapters, but if you have at least a high-level idea for
  • 00:02:25
    the problem from physics which originally motivated this piece of math,
  • 00:02:28
    it gives some indication for just how unexpectedly far-reaching Fourier series are.
  • 00:02:32
    All you need to know is that we had a certain equation which tells us
  • 00:02:36
    how the temperature distribution on a rod would evolve over time,
  • 00:02:40
    and incidentally it also describes many other phenomena unrelated to heat.
  • 00:02:44
    While it's hard to directly use this equation to figure out what will happen to an
  • 00:02:49
    arbitrary heat distribution, there's a simple solution if the initial function just
  • 00:02:53
    happens to look like a cosine wave, with the frequency tuned so that it's flat at each
  • 00:02:57
    end point.
  • 00:02:58
    Specifically, as you graph what happens over time,
  • 00:03:01
    these waves simply get scaled down exponentially,
  • 00:03:04
    with higher frequency waves having a faster exponential decay.
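
In symbols, this is the solution family being described. A sketch, writing the heat equation as $\partial_t T = \alpha\, \partial_x^2 T$, where the proportionality constant $\alpha$ is an assumed name not used in the video:

```latex
T(x, t) = \cos(\omega x)\, e^{-\alpha \omega^2 t}
```

Doubling the frequency $\omega$ quadruples the decay rate, which is why high-frequency components die out first.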
  • 00:03:10
    The heat equation happens to be what's known in the business as a linear equation,
  • 00:03:15
    meaning if you know two solutions and add them up, that sum is a new solution.
  • 00:03:20
    You can even scale them each by some constant,
  • 00:03:23
    which gives you some dials to turn to construct a custom function solving the equation.
  • 00:03:29
    This is a fairly straightforward property that you can verify for yourself,
  • 00:03:32
    but it's incredibly important.
  • 00:03:34
    It means we can take our infinite family of solutions,
  • 00:03:37
    these exponentially decaying cosine waves, scale a few of them by some
  • 00:03:41
    custom constants of our choosing, and combine them to get a solution for a new,
  • 00:03:45
    tailor-made initial condition, which is some combination of cosine waves.
  • 00:03:50
    One important thing I'd like you to notice is that when you combine these waves,
  • 00:03:54
    because the higher frequency ones decay faster,
  • 00:03:57
    the sum you construct will tend to smooth out over time,
  • 00:04:00
    as all the high frequency terms quickly go to zero,
  • 00:04:02
    leaving only the low frequency terms dominating.
  • 00:04:06
    So in a funny way, all of the complexity in the evolution of this heat
  • 00:04:09
    distribution which the heat equation implies is captured by this
  • 00:04:12
    difference in the decay rates for the different pure frequency components.
  • 00:04:18
    It's at this point that Fourier gains immortality.
  • 00:04:21
    I think most normal people at this stage would say, well,
  • 00:04:24
    I can solve the heat equation when the initial distribution just happens to look like
  • 00:04:28
    a wave, or a sum of waves, but what a shame it is that most real world distributions
  • 00:04:32
    don't at all look like that.
  • 00:04:34
    I mean, for example, let's say you brought together two rods
  • 00:04:37
    which were each at some uniform temperature, and you wanted
  • 00:04:40
    to know what happens immediately after they come into contact.
  • 00:04:45
    To make the numbers simple, let's say the temperature of the left rod is 1 degree,
  • 00:04:49
    and the right rod is negative 1 degree, and that the total length,
  • 00:04:53
    L, of the combined two rods is 1.
  • 00:04:54
    What this means is our initial temperature distribution is a step function,
  • 00:04:59
    which is so obviously different from a sine wave, or the sum of sine waves,
  • 00:05:03
    don't you think?
  • 00:05:05
    I mean, it's almost entirely flat, not wavy, and for god's sake it's even discontinuous!
  • 00:05:10
    And yet Fourier thought to ask a question which seems absurd.
  • 00:05:14
    How do you express this as a sum of sine waves?
  • 00:05:17
    Even more boldly, how do you express any initial distribution as a sum of sine waves?
  • 00:05:21
    And it's more constrained than just that!
  • 00:05:24
    You have to restrict yourself to adding waves which satisfy a certain boundary condition,
  • 00:05:28
    and as we saw last video, that means working with these cosine functions whose
  • 00:05:32
    frequencies are all some whole number multiple of a given base frequency.
  • 00:05:36
    And by the way, if you were working with some different boundary condition,
  • 00:05:40
    say that the endpoints have to stay fixed, you'd have a different set of waves at
  • 00:05:44
    your disposal to piece together, in this case replacing that cosine expression with
  • 00:05:48
    a sine.
  • 00:05:49
    It's strange how often progress in math looks more like
  • 00:05:52
    asking a new question rather than simply answering old ones.
  • 00:05:56
    Fourier really does have a kind of immortality now,
  • 00:05:58
    with his name essentially synonymous with the idea of breaking
  • 00:06:01
    down functions and patterns as combinations of simple oscillations.
  • 00:06:05
    It's really hard to overstate just how important and far-reaching that idea
  • 00:06:09
    turned out to be, well beyond anything Fourier himself could have imagined.
  • 00:06:13
    And yet, the origin of all this is a piece of physics which,
  • 00:06:16
    at first glance, has nothing to do with frequencies and oscillations.
  • 00:06:21
    If nothing else, this should give you a hint about
  • 00:06:23
    the general applicability of Fourier series.
  • 00:06:26
    Now hang on, I hear some of you saying, none of these sums of sine waves that
  • 00:06:29
    you're showing are actually the step function, they're all just approximations.
  • 00:06:33
    And it's true, any finite sum of sine waves will never be perfectly flat,
  • 00:06:37
    except for a constant function, nor will it be discontinuous.
  • 00:06:42
    But Fourier thought more broadly, considering infinite sums.
  • 00:06:46
    In the case of our step function, it turns out to be equal to this infinite sum,
  • 00:06:51
    where the coefficients are 1, negative one third, plus one fifth, minus one seventh,
  • 00:06:57
    and so on for all the odd frequencies, and all of it is rescaled by 4 divided by pi.
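
Written out, with the rod length normalized to 1 so the step function is $+1$ on $[0, 0.5)$ and $-1$ on $(0.5, 1]$, the series being described is:

```latex
\mathrm{step}(x) = \frac{4}{\pi}\left(\cos(\pi x) - \frac{1}{3}\cos(3\pi x) + \frac{1}{5}\cos(5\pi x) - \frac{1}{7}\cos(7\pi x) + \cdots\right)
```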
  • 00:07:03
    I'll explain where those numbers come from in a moment.
  • 00:07:06
    Before that, it's worth being clear about what we mean by a phrase like infinite sum,
  • 00:07:10
    which runs the risk of being a little vague.
  • 00:07:13
    Consider the simpler context of numbers, where you could say,
  • 00:07:17
    for example, that this infinite sum of fractions equals pi divided by 4.
  • 00:07:21
    As you keep adding the terms one by one, at all times what you have is rational,
  • 00:07:26
    it never actually equals the irrational pi divided by 4.
  • 00:07:30
    But this sequence of partial sums approaches pi over 4, which is to say,
  • 00:07:34
    the numbers you see, while never equaling pi over 4,
  • 00:07:37
    get arbitrarily close to that value, and they stay arbitrarily close to that value.
  • 00:07:43
    That's all a mouthful to say, so instead we abbreviate
  • 00:07:46
    and just say the infinite sum equals pi over 4.
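
The infinite sum of fractions in question is the Leibniz series:

```latex
1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \frac{\pi}{4}
```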
  • 00:07:50
    With functions, you're doing the same thing, but with many different values in parallel.
  • 00:07:55
    Consider a specific input, and the value of all
  • 00:07:58
    of these scaled cosine functions for that input.
  • 00:08:02
    If that input is less than 0.5, as you add more and more terms, the sum will approach 1.
  • 00:08:10
    If that input is greater than 0.5, as you add more and more terms,
  • 00:08:13
    it would approach negative 1.
  • 00:08:17
    At the input 0.5 itself, all of the cosines are 0,
  • 00:08:20
    so the limit of the partial sums is also 0.
  • 00:08:24
    That means that, somewhat awkwardly, for this infinite sum to be strictly true,
  • 00:08:28
    we have to prescribe the value of this step function at the point of
  • 00:08:32
    discontinuity to be 0, sort of halfway along the jump.
  • 00:08:36
    Analogous to an infinite sum of rational numbers being irrational,
  • 00:08:40
    the infinite sum of wavy continuous functions can equal a discontinuous flat function.
  • 00:08:47
    Getting limits into the game allows for qualitative changes,
  • 00:08:50
    which finite sums alone never could.
  • 00:08:53
    There are multiple technical nuances that I'm sweeping under the rug here.
  • 00:08:56
    Does the fact that we're forced into a certain value for the step function
  • 00:08:59
    at the point of discontinuity make any difference for the heat flow problem?
  • 00:09:03
    For that matter, what does it really mean to solve
  • 00:09:06
    a PDE with a discontinuous initial condition?
  • 00:09:09
    Can we be sure that the limit of solutions to the heat equation is also a solution?
  • 00:09:13
    And can we be sure that all functions actually have a Fourier series like this?
  • 00:09:17
    If not, when not?
  • 00:09:19
    These are exactly the kind of questions which real analysis is built to answer,
  • 00:09:22
    but it falls a bit deeper in the weeds than I'd like to go here,
  • 00:09:25
    so I'll relegate that all to links in the video's description.
  • 00:09:28
    The upshot is that when you take the heat equation solutions associated with
  • 00:09:33
    these cosine waves and add them all up, all infinitely many of them,
  • 00:09:37
    you do get an exact solution describing how the step function will evolve over time,
  • 00:09:41
    and if you had done this in 1822, you would have become immortal for doing so.
  • 00:09:47
    The key challenge in all of this, of course, is to find these coefficients.
  • 00:09:53
    So far, we've been thinking about functions with real number outputs,
  • 00:09:57
    but for the computations, I'd like to show you something more general than what
  • 00:10:00
    Fourier originally did, applying to functions whose output can be any complex number
  • 00:10:04
    in the 2D plane, which is where all these rotating vectors from the opening come
  • 00:10:08
    back into play.
  • 00:10:10
    Why the added complexity?
  • 00:10:12
    Well, aside from being more general, in my view, the computations become cleaner,
  • 00:10:16
    and it's easier to understand why they actually work.
  • 00:10:20
    More importantly, it sets a good foundation for the ideas that will come up later on
  • 00:10:24
    in the series, like the Laplace transform, and the importance of exponential functions.
  • 00:10:29
    We'll still think of functions whose input is some real number on a finite interval,
  • 00:10:33
    say from 0 up to 1 for simplicity, but whereas something like a temperature
  • 00:10:37
    function will have outputs on the real number line,
  • 00:10:40
    this broader view will let the outputs wander anywhere in the 2D complex plane.
  • 00:10:45
    You might think of such a function as a drawing,
  • 00:10:47
    with a pencil tip tracing out different points in the complex plane as the
  • 00:10:51
    input ranges from 0 to 1.
  • 00:10:53
    And instead of sine waves being the fundamental building block,
  • 00:10:56
    as you saw at the start, we'll focus on breaking these functions down
  • 00:10:59
    as a sum of little vectors, all rotating at some constant integer frequency.
  • 00:11:03
    Functions with real number outputs are essentially really boring drawings,
  • 00:11:09
    a one-dimensional pencil sketch.
  • 00:11:11
    You might not be used to thinking of them like this,
  • 00:11:14
    since usually we visualize such a function with a graph,
  • 00:11:17
    but right now the path being drawn is only in the output space.
  • 00:11:25
    If you do one of these decompositions into rotating vectors for a boring one-dimensional
  • 00:11:30
    drawing, what will happen is that the vectors with frequency 1 and negative 1 will
  • 00:11:34
    have the same length, and they'll be horizontal reflections of each other.
  • 00:11:39
    When you just look at the sum of these two as they rotate,
  • 00:11:42
    that sum stays fixed on the real number line, and it oscillates like a sine wave.
  • 00:11:46
    If you haven't seen it before, this might be a really weird way to think about what a
  • 00:11:51
    sine wave is, since we're used to looking at its graph rather than the output alone
  • 00:11:55
    wandering on the real number line, but in the broader context of functions with complex
  • 00:11:59
    number outputs, this oscillation on the horizontal line is what a sine wave looks like.
  • 00:12:04
    Similarly, the pair of rotating vectors with frequencies 2 and negative 2 will
  • 00:12:09
    add another sine wave component, and so on, with the sine waves we were looking
  • 00:12:14
    for earlier now corresponding to pairs of vectors rotating in opposite directions.
  • 00:12:19
    So the context that Fourier originally studied,
  • 00:12:22
    breaking down real-valued functions into sine waves,
  • 00:12:25
    is a special case of the more general idea of 2D drawings and rotating vectors.
  • 00:12:34
    And at this point, maybe you don't trust me that widening our view to
  • 00:12:37
    complex functions makes things easier to understand, but bear with me,
  • 00:12:41
    it's really worth the added effort to see the fuller picture,
  • 00:12:44
    and I think you'll be pleased with how clean the actual computation is in
  • 00:12:47
    this broader context.
  • 00:12:49
    You may also wonder why, if we're going to bump things up into two dimensions,
  • 00:12:52
    we don't just talk about 2D vectors, what does the square root
  • 00:12:55
    of negative one have to do with anything?
  • 00:12:58
    Well, the heart and soul of Fourier series is the complex exponential, e to the i times t.
  • 00:13:04
    As the input t ticks forward with time, this value walks
  • 00:13:07
    around the unit circle at a rate of one unit per second.
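
This is Euler's formula; the real and imaginary parts of the walk trace out cosine and sine:

```latex
e^{it} = \cos t + i \sin t, \qquad \left|e^{it}\right| = 1
```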
  • 00:13:12
    In the next video you'll see a quick intuition for why exponentiating imaginary
  • 00:13:16
    numbers walks around circles like this from the perspective of differential equations.
  • 00:13:20
    And beyond that, as the series progresses, I hope to give you some
  • 00:13:23
    sense for why complex exponentials like this are actually very important.
  • 00:13:27
    In theory, you could describe all of the Fourier series stuff purely in terms of vectors,
  • 00:13:31
    and never breathe a word of i, the square root of negative one.
  • 00:13:35
    The formulas would become more convoluted, but beyond that,
  • 00:13:38
    leaving out the function e to the x would somehow no longer authentically
  • 00:13:42
    reflect why this idea turns out to be so useful for solving differential equations.
  • 00:13:47
    For right now, if you want, you can think of e to the i t as a
  • 00:13:50
    notational shorthand for describing rotating vectors,
  • 00:13:53
    but just keep in the back of your mind that it is more significant than mere shorthand.
  • 00:13:58
    You'll notice I'm being a little loose with language using the words vector and complex
  • 00:14:02
    numbers somewhat interchangeably, in large part because thinking of complex numbers
  • 00:14:06
    as little arrows makes the idea of adding a lot of them together easier to visualize.
  • 00:14:11
    Alright, armed with the function e to the i times t,
  • 00:14:13
    let's write down a formula for each of these rotating vectors we're working with.
  • 00:14:18
    For right now, think of each of them as starting
  • 00:14:20
    pointing one unit to the right at the number 1.
  • 00:14:23
    The easiest vector to describe is the constant one, which stays at the number 1,
  • 00:14:27
    never moving, or if you prefer, it's quote-unquote rotating just at a frequency of 0.
  • 00:14:33
    Then there will be the vector rotating one cycle every second,
  • 00:14:36
    which we write as e to the 2 pi i times t.
  • 00:14:39
    That 2 pi is there because as t goes from 0 to 1,
  • 00:14:42
    it needs to cover a distance of 2 pi along the circle.
  • 00:14:47
    Technically in what's being shown, it's actually one cycle every 10 seconds
  • 00:14:50
    so things aren't too dizzying, I'm slowing everything down by a factor of 10.
  • 00:14:55
    We also have a vector rotating at one cycle per second in the other direction,
  • 00:14:59
    e to the negative 2 pi i times t.
  • 00:15:04
    Similarly, the one going two rotations per second is e to the 2 times 2 pi i times t,
  • 00:15:10
    where that 2 times 2 pi in the exponent describes how much distance is covered in one
  • 00:15:16
    second.
  • 00:15:20
    And we go on like this over all integers, both positive and negative,
  • 00:15:25
    with a general formula of e to the n times 2 pi times i t.
  • 00:15:29
    Notice, this makes it more consistent to write that constant vector
  • 00:15:32
    as e to the 0 times 2 pi times i t, which feels like an awfully
  • 00:15:35
    complicated way to write the number 1, but at least it fits the pattern.
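
So the full family of building blocks, one for each integer frequency $n$ (counterclockwise for positive $n$, clockwise for negative), is:

```latex
e^{n \cdot 2\pi i t}, \qquad n = \ldots, -2, -1, 0, 1, 2, \ldots
```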
  • 00:15:40
    The control that we have, the set of knobs and dials we get to turn,
  • 00:15:43
    is the initial size and direction of each of these numbers.
  • 00:15:47
    The way we control that is by multiplying each one by some complex constant,
  • 00:15:51
    which I'll call c sub n.
  • 00:15:53
    For example, if we wanted the constant vector not to be at the number 1,
  • 00:15:58
    but to have a length of 0.5, c sub 0 would be 0.5.
  • 00:16:02
    If we wanted the vector rotating at 1 cycle per second to start off at an
  • 00:16:06
    angle of 45 degrees, we'd multiply it by a complex number which has the effect
  • 00:16:10
    of rotating it by that much, which you can write as e to the pi fourths times i.
  • 00:16:15
    And if its initial length needed to be 0.3, then the
  • 00:16:18
    coefficient c sub 1 would be 0.3 times that amount.
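
As a small worked example, combining that length and rotation into one complex constant:

```latex
c_1 = 0.3\, e^{i\pi/4} = 0.3\left(\cos\tfrac{\pi}{4} + i\sin\tfrac{\pi}{4}\right) \approx 0.212 + 0.212\, i
```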
  • 00:16:22
    Likewise, everyone in our infinite family of rotating vectors has some complex constant
  • 00:16:27
    being multiplied into it, which determines its initial angle and its total magnitude.
  • 00:16:32
    Our goal is to express any arbitrary function f of t,
  • 00:16:36
    say this one that draws an eighth note as t goes from 0 to 1,
  • 00:16:40
    as a sum of terms like this, so we need some way of picking out these constants
  • 00:16:46
    one by one, given the data of the function itself.
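
In symbols, the goal is to find constants $c_n$ such that:

```latex
f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{n \cdot 2\pi i t}
```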
  • 00:16:51
    The easiest of these to find is the constant term.
  • 00:16:55
    This term represents a sort of center of mass for the full drawing.
  • 00:16:59
    If you were to sample a bunch of evenly spaced values for the
  • 00:17:02
    input t as it ranges from 0 to 1, the average of all the outputs
  • 00:17:06
    of the function for those samples would be the constant term c0.
  • 00:17:11
    Or more accurately, as you consider finer and finer samples,
  • 00:17:14
    the average of the outputs for these samples approaches c0 in the limit.
  • 00:17:20
    What I'm describing, finer and finer sums of a function for samples of
  • 00:17:24
    t from the input range, is an integral, an integral of f of t from 0 to 1.
  • 00:17:30
    Normally, since I'm framing this all in terms of averages,
  • 00:17:33
    you would divide the integral by the length of the input range,
  • 00:17:37
    but that length is 1, so in this case, taking an integral and taking an
  • 00:17:40
    average are the same thing.
  • 00:17:42
    There's a very nice way to think about why this integral would pull out c0.
  • 00:17:47
    Remember, we want to think of this function as a sum of rotating vectors,
  • 00:17:51
    so consider this integral, this continuous average, as being applied to that whole sum.
  • 00:17:57
    The average of a sum like this is the same as the sum over the averages of each part.
  • 00:18:06
    You can read this move as a sort of subtle shift in perspective.
  • 00:18:09
    Rather than looking at the sum of all the vectors at each point in time
  • 00:18:13
    and taking the average value they sweep out, look at the average of an
  • 00:18:17
    individual vector as t goes from 0 to 1, and then add up all these averages.
  • 00:18:22
    But each of these vectors just makes a whole number of rotations around 0,
  • 00:18:27
    so its average value as t ranges from 0 to 1 will be 0.
  • 00:18:31
    The only exception is the constant term.
  • 00:18:33
    Since it stays static and doesn't rotate, its average value
  • 00:18:37
    is just whatever number it happened to start on, which is c0.
  • 00:18:41
    So doing this average over the whole function is a
  • 00:18:44
    sort of clever way to kill all the terms that aren't c0.
  • 00:18:48
    But here's the actual clever part.
  • 00:18:49
    Let's say you wanted to compute a different term, like c2,
  • 00:18:52
    sitting in front of the vector rotating two cycles per second.
  • 00:18:56
    The trick is to first multiply f of t by something that makes that vector hold still,
  • 00:19:01
    sort of the mathematical equivalent of giving a smartphone to an overactive child.
  • 00:19:06
    Specifically, if you multiply the whole function by e to the negative
  • 00:19:10
    2 times 2 pi i times t, think about what happens to each term.
  • 00:19:16
    Since multiplying exponentials results in adding what's in the exponent,
  • 00:19:21
    the frequency term in each of our exponents gets shifted down by 2.
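
Concretely, for each term of the sum, multiplying exponentials adds what's in the exponents:

```latex
e^{-2 \cdot 2\pi i t} \cdot c_n\, e^{n \cdot 2\pi i t} = c_n\, e^{(n-2) \cdot 2\pi i t}
```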
  • 00:19:29
    So now, as we do our averages of each term, that c-1
  • 00:19:33
    vector spins around negative 3 times with an average of 0.
  • 00:19:37
    The c0 vector, previously constant, now rotates twice as t ranges from 0 to 1,
  • 00:19:43
    so its average is also 0.
  • 00:19:46
    And likewise, all vectors other than the c2 term make some whole number of rotations,
  • 00:19:51
    meaning they average out to be 0.
  • 00:19:55
    So taking the average of this modified function is
  • 00:19:58
    a clever way to kill all the terms other than c2.
  • 00:20:02
    And of course, there's nothing special about the number 2 here,
  • 00:20:05
    you could replace it with any other n, and you have a general formula for cn,
  • 00:20:08
    which is what we're looking for.
  • 00:20:10
    Out of context, this expression might look complicated, but remember,
  • 00:20:14
    you can read it as first modifying our function, our 2d drawing,
  • 00:20:17
    so as to make the nth little vector hold still,
  • 00:20:20
    and then performing an average which kills all the moving vectors and
  • 00:20:23
    leaves you only with the still part.
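
Putting it together: averaging the modified function term by term, every vector that still makes a whole number of rotations integrates to zero, leaving only the held-still term:

```latex
\int_0^1 f(t)\, e^{-n \cdot 2\pi i t}\, dt
= \sum_{k=-\infty}^{\infty} c_k \int_0^1 e^{(k-n) \cdot 2\pi i t}\, dt
= c_n
```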
  • 00:20:26
    Isn't that crazy?
  • 00:20:27
    All of the complexity in these decompositions you're seeing of drawings into
  • 00:20:31
    sums of many rotating vectors is entirely captured in this little expression.
  • 00:20:36
    So when I'm rendering these animations, that's exactly what I'm having the computer do.
  • 00:20:41
    It treats the path like a complex function, and for a certain range of values n,
  • 00:20:45
    it computes this integral to find the coefficient c of n.
  • 00:20:51
    For those of you curious about where the data for a path itself comes from,
  • 00:20:54
    I'm going the easy route and just having the program read in an SVG,
  • 00:20:57
    which is a file format that defines the image in terms of mathematical curves rather
  • 00:21:01
    than with pixel values.
  • 00:21:03
    So the mapping f of t from a time parameter to points in space basically comes predefined.
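
A minimal sketch of that path-reading step, assuming the third-party svgpathtools package (the video does not say which library its renderer uses); its Path.point method maps a parameter in [0, 1] to a point in the complex plane, which is exactly the f(t) we need:

```python
# Sketch: turn an SVG file into a complex-valued function f(t).
# Assumes the third-party `svgpathtools` package; illustrative only.
from svgpathtools import svg2paths

paths, _attributes = svg2paths("drawing.svg")  # hypothetical file name
path = paths[0]  # take the first curve defined in the file

def f(t: float) -> complex:
    """Point of the drawing at time t, for t in [0, 1]."""
    return path.point(t)  # svgpathtools returns a complex number
```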
  • 00:21:10
    In what's shown right now, I'm using 101 rotating vectors,
  • 00:21:14
    computing the values of n from negative 50 up to 50.
  • 00:21:18
    In practice, each of these integrals is computed numerically,
  • 00:21:21
    basically meaning it chops up the unit interval into many small pieces of size delta t,
  • 00:21:26
    and then adds up this value, f of t times e to the negative n 2 pi i t times delta t,
  • 00:21:31
    for each one of them.
  • 00:21:33
    There are fancier methods for more efficient numerical integration,
  • 00:21:36
    but this gives the basic idea.
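
Here is a minimal sketch of that Riemann-sum computation and the reconstruction that follows, assuming `f` maps [0, 1] to complex numbers, like the SVG sampler above (the names and sample count are illustrative, not the video's actual code):

```python
import cmath

def fourier_coefficients(f, max_freq=50, n_samples=1000):
    """Approximate c_n = integral over [0, 1] of f(t) * e^(-n*2*pi*i*t) dt
    with a Riemann sum, for each n from -max_freq to max_freq."""
    dt = 1.0 / n_samples
    return {
        n: sum(f(k * dt) * cmath.exp(-n * 2j * cmath.pi * k * dt) * dt
               for k in range(n_samples))
        for n in range(-max_freq, max_freq + 1)
    }

def reconstruct(coeffs, t):
    """Tip of the final arrow at time t: the sum of all rotating vectors."""
    return sum(c * cmath.exp(n * 2j * cmath.pi * t) for n, c in coeffs.items())
```

With max_freq=50 this gives the 101 vectors described above, and sampling reconstruct(coeffs, t) as t runs from 0 to 1 traces out the approximation of the original path.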
  • 00:21:38
    And after you compute these 101 constants, each one determines an initial
  • 00:21:42
    angle and magnitude for the little vectors, and then you just set them all rotating,
  • 00:21:47
    adding them tip to tail as they go, and the path drawn out by the final
  • 00:21:51
    tip is some approximation of the original path you fed in.
  • 00:21:55
    As the number of vectors used approaches infinity,
  • 00:21:57
    the approximation path gets more and more accurate.
  • 00:22:14
    To bring this all back down to earth, consider the example we were looking at earlier,
  • 00:22:18
    of a step function, which remember was useful for modeling the heat dissipation
  • 00:22:22
    between two rods at different temperatures after they come into contact.
  • 00:22:26
    Like any real number valued function, the step function
  • 00:22:29
    is like a boring drawing that's confined to one dimension.
  • 00:22:33
    But this one is an especially dull drawing, since for inputs between 0 and 0.5,
  • 00:22:38
    the output just stays static at the number 1, and then it discontinuously
  • 00:22:42
    jumps to negative 1 for inputs between 0.5 and 1.
  • 00:22:46
    So in the Fourier series approximation, the vector sum stays really
  • 00:22:49
    close to 1 for the first half of the cycle, then quickly jumps to negative 1,
  • 00:22:53
    and stays close to that for the second half of the cycle.
  • 00:22:57
    And remember, each pair of vectors rotating in opposite directions
  • 00:23:01
    corresponds to one of the cosine waves we were looking at earlier.
  • 00:23:06
    To find the coefficients, you would need to compute this integral,
  • 00:23:09
    and for the ambitious viewers among you itching to work out some integrals by hand,
  • 00:23:13
    this is one where you can actually do the calculus to get an exact answer,
  • 00:23:16
    rather than just having a computer do it numerically for you.
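
For reference, the integral in question splits at the jump (set up only; evaluating it is the exercise being posed):

```latex
c_n = \int_0^1 \mathrm{step}(t)\, e^{-n \cdot 2\pi i t}\, dt
= \int_0^{1/2} e^{-n \cdot 2\pi i t}\, dt - \int_{1/2}^{1} e^{-n \cdot 2\pi i t}\, dt
```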
  • 00:23:19
    I'll leave it as an exercise to work this out,
  • 00:23:22
    and to relate it back to the idea of cosine waves by pairing off the vectors
  • 00:23:26
    that rotate in opposite directions.
  • 00:23:28
    And for the even more ambitious, I'll leave another exercise up on the screen for
  • 00:23:32
    how to relate this more general computation with what you might see in a textbook
  • 00:23:36
    describing Fourier series only in terms of real valued functions with sines and cosines.
  • 00:23:41
    By the way, if you're looking for more Fourier series content,
  • 00:23:44
    I highly recommend the videos by Mathologer and The Coding Train,
  • 00:23:48
    and I'd also recommend this blog post, links of course in the description.
  • 00:23:53
    So on the one hand, this concludes our discussion of the heat equation,
  • 00:23:57
    which was a little window into the study of partial differential equations.
  • 00:24:01
    But on the other hand, this foray into Fourier series is a first glimpse at a deeper idea.
  • 00:24:06
    Exponential functions, including their generalization into complex
  • 00:24:09
    numbers and even matrices, play a very important role for differential equations,
  • 00:24:13
    especially when it comes to linear equations.
  • 00:24:16
    What you just saw, breaking down a function as a combination
  • 00:24:19
    of these exponentials and using that to solve a differential equation,
  • 00:24:23
    comes up again and again in different shapes and forms.
  • 00:24:44
    Thank you.
Tags
  • Fourier series
  • complex numbers
  • heat equation
  • mathematics
  • animations
  • differential equations
  • exponential functions
  • oscillation
  • real analysis
  • integrals