Big O Notation

00:08:36
https://www.youtube.com/watch?v=v4cd1O4zkGw

Summary

TL;DR: In this video, Gayle Laakmann McDowell explains Big O notation, a key concept in algorithmic efficiency. She shares a humorous story about a company that tested data transfer speeds between the internet and a carrier pigeon, illustrating the difference between constant time (O(1)) and linear time (O(n)). McDowell provides examples of how to analyze algorithm efficiency and introduces four essential rules for understanding Big O: adding runtimes, dropping constants, using different variables for different inputs, and dropping non-dominant terms. She emphasizes the importance of mastering Big O for technical interviews.

Takeaways

  • 🐦 The pigeon transfer speed is O(1), constant time.
  • 🌐 Internet transfer time is O(n), linear time.
  • 📊 Big O describes how runtime scales with input size.
  • ➕ Add runtimes for different steps in an algorithm.
  • 🚫 Drop constants in Big O notation.
  • 🔄 Use different variables for different inputs.
  • 📉 Drop non-dominant terms in runtime analysis.
  • 📝 Understanding Big O is crucial for coding interviews.
  • 💡 Practice exercises to solidify your understanding.
  • 🎯 Mastering Big O can enhance your problem-solving skills.

Timeline

  • 00:00:00 - 00:08:36

    Gayle Laakmann McDowell introduces the concept of Big O notation, which measures algorithmic efficiency. She shares a story about a South African company that tested data transfer speeds using carrier pigeons versus the internet, illustrating that while internet transfer time increases with data size (O(n)), pigeon transfer time remains constant (O(1)). This sets the stage for understanding how Big O describes runtime scaling with input size. McDowell explains basic examples of Big O, including a function that checks an array for a value (O(n)) and one that prints all pairs of values (O(n^2)). She also discusses real-world examples, such as mowing a square plot of land, which can be described in terms of area (O(a)) or side length (O(s^2)). McDowell outlines four important rules for Big O: 1) Add runtimes for different steps, 2) Drop constants, 3) Use different variables for different inputs, and 4) Drop non-dominant terms. She emphasizes the importance of mastering Big O for interviews and encourages practice to solidify understanding.
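Two of the rules summarized above can be sketched in code. These are my own illustrations, not the video's exact examples: the first shows dropping constants (two passes over one array is still O(n), not "O(2n)"), and the second shows using separate variables for separate inputs (O(a × b), not O(n²)).

```python
def min_and_max(nums):
    # Two separate passes over the same array: informally "O(2n)",
    # but constants are dropped, so this is O(n).
    smallest = min(nums)
    largest = max(nums)
    return smallest, largest

def count_common(a, b):
    # Nested loops over two *different* arrays: O(a * b),
    # where a and b are the lengths of the two inputs.
    count = 0
    for x in a:
        for y in b:
            if x == y:
                count += 1
    return count

print(min_and_max([3, 1, 4, 1, 5]))        # (1, 5)
print(count_common([1, 2, 3], [2, 3, 4]))  # 2
```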

Video Q&A

  • What is Big O notation?

    Big O notation describes the runtime efficiency of an algorithm in relation to the size of its input.

  • What does O(1) mean?

    O(1) indicates constant time complexity, meaning the runtime does not change with the size of the input.

  • What does O(n) mean?

    O(n) indicates linear time complexity, meaning the runtime increases linearly with the size of the input.

  • What are the four rules of Big O?

    1. Add runtimes for different steps. 2. Drop constants. 3. Use different variables for different inputs. 4. Drop non-dominant terms.

  • Why is it important to understand Big O for interviews?

    Understanding Big O is crucial for evaluating algorithm efficiency, which is a common topic in technical interviews.
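The O(1)-versus-O(n) distinction from the answers above can be caricatured with the video's pigeon story. The speeds below are made up; only the scaling behavior matters.

```python
def pigeon_transfer_hours(data_gb):
    # O(1): the pigeon flies the same 50 miles regardless of data size.
    return 1.0  # hypothetical flight time in hours

def internet_transfer_hours(data_gb):
    # O(n): transfer time grows linearly with the amount of data.
    return 0.25 * data_gb  # hypothetical hours per gigabyte

# For a sufficiently large file, the constant-time pigeon wins:
print(pigeon_transfer_hours(100) < internet_transfer_hours(100))  # True
```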

Transcript (en)
  • 00:00:00
    Hi, I'm Gayle Laakmann McDowell, author of Cracking the Coding Interview. Today we're
  • 00:00:03
    going to cover one of my favorite topics, which is big O, or algorithmic
  • 00:00:07
    efficiency. I'm going to start off with a story. Back in 2009 there was this
  • 00:00:12
    company in South Africa and they had really slow internet that they were really,
  • 00:00:16
    really frustrated by, and they actually had two offices about 50 miles apart, so they
  • 00:00:21
    set up this little test to see if it would be faster to transfer big chunks of
  • 00:00:26
    data over the internet with their really slow internet service provider or
  • 00:00:31
    via carrier pigeon.
  • 00:00:33
    Literally they took a bunch of USB sticks or probably actually one giant
  • 00:00:36
    one, strapped it to a pigeon, and flew it from one office to the next, and they got
  • 00:00:41
    a ton of press over this test, and the media kind of loves this stuff, right? And of
  • 00:00:47
    course the pigeon won, right? It wouldn't be a funny story otherwise. And so they
  • 00:00:51
    got a whole bunch of press over it, and people were like, oh my god, I can't
  • 00:00:54
    believe that a pigeon beat the internet. But the problem was it was really
  • 00:01:00
    actually kind of a ridiculous test because here's the thing.
  • 00:01:03
    Transferring data over the Internet takes longer and longer the
  • 00:01:08
    more data there is. With certain simplifications and assumptions, that's
  • 00:01:12
    not really the case with pigeons. Pigeons might be really slow or fast but it
  • 00:01:16
    always basically takes the same amount of time for a pigeon to
  • 00:01:20
    transfer some chunk of data from one office to the next. It just has to fly
  • 00:01:24
    50 miles. It's not going to take longer because that USB stick contains more
  • 00:01:29
    data, so of course for a sufficiently large file the pigeon's going to win. So in big O,
  • 00:01:37
    the way we talk about this is we describe the pigeon transfer speed as
  • 00:01:42
    O of 1; it's constant time with respect to the size of the input. It
  • 00:01:47
    doesn't take longer with more input. Again with certain simplifications and
  • 00:01:52
    assumptions about the pigeon's ability to carry a USB stick. But the internet
  • 00:01:57
    transfer time is O of n. It scales linearly with respect to the
  • 00:02:02
    amount of input. Twice the amount of data is going to take roughly twice as much time.
  • 00:02:06
    So that's what Big O is. Big O is essentially an equation that describes
  • 00:02:12
    how the runtime
  • 00:02:13
    scales with respect to some input variables. So I'm going to talk about four
  • 00:02:20
    specific rules, but first let me show you what this might look like in code. So
  • 00:02:25
    let's consider this simple function that just walks through an array and checks if
  • 00:02:28
    it contains a particular value. So you would describe this as O of n, where n is
  • 00:02:33
    the size of the array.
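The function being described might look like the following (my reconstruction, since the on-screen code isn't in the transcript):

```python
def contains(array, value):
    # Worst case, every element is checked once -> O(n),
    # where n is the size of the array.
    for element in array:
        if element == value:
            return True
    return False

print(contains([10, 5, 8], 5))  # True
print(contains([10, 5, 8], 7))  # False
```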
  • 00:02:35
    What about this method that walks through an array and prints all pairs of
  • 00:02:39
    values in the array? Note that this will print if it has the elements 5 and 2 in
  • 00:02:44
    the array it'll print both 2 comma 5 and 5 comma 2. So you describe this
  • 00:02:49
    probably as of O of n squared and where n is the size of the array.
  • 00:02:53
    You can see that because the array has n elements, it has n
  • 00:02:58
    squared pairs, so the amount of time it's going to take you to run that
  • 00:03:03
    function is going to increase with respect to n squared.
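A plausible version of the pair-printing method (reconstructed, and collected into a list here so the result is easy to inspect; this version also includes (x, x) pairs, but either way the count grows as n squared):

```python
def all_pairs(array):
    # Nested loops over the same array: n * n iterations -> O(n^2).
    pairs = []
    for x in array:
        for y in array:
            pairs.append((x, y))
    return pairs

print(all_pairs([5, 2]))  # includes both (2, 5) and (5, 2)
```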
  • 00:03:08
    Ok, so that's kind of the basics of Big O. Let's take another real-world example.
  • 00:03:14
    How would you describe the run time to mow a square plot of land? So a square
  • 00:03:21
    plot of grass. So you have this plot of grass and you need to mow all of it.
  • 00:03:25
    What's the runtime of this? Well it's kind of an interesting question, but you
  • 00:03:29
    could describe it one of two ways. One way you can describe it is O
  • 00:03:34
    of a where a is the amount of area in that plot of land. Remember that this is
  • 00:03:41
    just an equation that expresses how the time increases so you don't have to use
  • 00:03:47
    n in your equation, you can use other variables and often it makes sense to do
  • 00:03:51
    that. So you could just describe this as O of a where a is the amount of area
  • 00:03:55
    there. You could also describe this as O of s squared where s is the
  • 00:04:01
    length of one side, because it is a square plot of land so
  • 00:04:07
    the amount of area is s squared. So it's important to realize that O of a and
  • 00:04:13
    O of s squared are obviously saying essentially the same thing, just like when
  • 00:04:17
    you describe the area of a square.
  • 00:04:19
    Well, you could describe it as 25 or you could describe it as 5 squared; just because it
  • 00:04:24
    has a square in it doesn't make it bigger. So both are valid
  • 00:04:27
    ways of describing the runtime. It just depends on what might be clearer for
  • 00:04:31
    that problem. So with that said, let me go on to 4 important rules to know with big O. The
  • 00:04:38
    first one is if you have two different steps in your algorithm, you add up those
  • 00:04:44
    steps, so if you have a first step that takes O of a time and a
  • 00:04:49
    second step that takes O of b you would add up those run times and you get O of a plus
  • 00:04:53
    b. So for example, you have this algorithm that first
  • 00:05:02
    walks through one array and then walks through the other array. It's going to be the amount of
  • 00:05:07
    time it takes to walk through each of them. So you'll add up their runtimes.
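The two-step algorithm being described might be sketched like this (my own version, which records the elements it visits so the behavior is inspectable):

```python
def walk_both(first, second):
    visited = []
    # Step 1: walk the first array -> O(a) work.
    for x in first:
        visited.append(x)
    # Step 2: walk the second array -> O(b) work.
    for y in second:
        visited.append(y)
    # Total runtime: O(a + b) -- the two steps' runtimes add.
    return visited

print(walk_both([1, 2], [3, 4, 5]))  # [1, 2, 3, 4, 5]
```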
  • 00:05:11
    The second rule is that you drop constants. So let's look at
  • 00:05:19
    these two different ways that you can print out the min and max
  • 00:05:25
    element in an array. One finds the min element and then finds the max element, the other
  • 00:05:30
    one finds the min and the max simultaneously. Now these are fundamentally doing the
  • 00:05:34
    same thing, they're doing essentially the exact same operations, just doing them in
  • 00:05:38
    slightly different orders. Both of these get described as O of n, where n is the
  • 00:05:44
    size of the array. Now it's really tempting for people to sometimes see two
  • 00:05:48
    different loops and describe it as O of 2n and so it's important that you remember
  • 00:05:51
    that you drop constants in big O. You do not describe things as O of 2n or O of 3n
  • 00:05:56
    because you're just looking for how things scale roughly. Is it a linear
  • 00:06:00
    relationship, is it a quadratic relationship, so you always drop
  • 00:06:03
    constants. The third rule is that if you have different inputs you're usually
  • 00:06:09
    going to use different variables to represent them. So let's take this
  • 00:06:13
    example where we have two arrays and we're walking through them to figure out
  • 00:06:17
    the number of elements in common between the two arrays. Some people
  • 00:06:21
    mistakenly call this O of n squared but that's not correct.
  • 00:06:25
    When you talk about runtime, if you describe things as O of n or O of n squared
  • 00:06:30
    or O of n log n, n must have a meaning, it's not like things are inherently
  • 00:06:35
    one
  • 00:06:36
    or the other. So when you describe this as O of n squared, it really doesn't
  • 00:06:39
    make sense, because what does n mean? n is not the
  • 00:06:44
    size of the array if there are two different arrays. What you want to describe
  • 00:06:47
    instead is O of a times b. Again fundamentally, big O is an equation that
  • 00:06:54
    expresses how the runtime changes, how it scales, so you describe this as O of a
  • 00:06:59
    times b. The fourth rule is that you drop non-dominant terms. So let's suppose you
  • 00:07:07
    have this code here that walks through an array and prints out the biggest element
  • 00:07:12
    and then for some reason it goes and prints all pairs of values in the array.
  • 00:07:17
    So that first step takes O of n time, the second step takes O of n squared
  • 00:07:22
    time, so first, as a less simplified form, you could describe this as
  • 00:07:30
    O of n plus n squared. But note: compare this to the runtime of O of n
  • 00:07:36
    squared, and to the runtime
  • 00:07:40
    of O of n squared plus n squared. Both of those runtimes reduce down to n
  • 00:07:46
    squared, and what's interesting here is that O of n plus n squared should logically
  • 00:07:50
    be between them. So this n squared thing kind of dominates this O of n thing, and
  • 00:07:55
    so you drop that non dominant term. It's the n squared that's really going to
  • 00:08:00
    drive how the runtime changes. And if you want you can look up a more academic
  • 00:08:06
    explanation for when exactly you drop things and when exactly you can't, but
  • 00:08:10
    this is kind of a layman's explanation for it.
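A sketch of the code being described (reconstructed; the output is collected into a list rather than printed, to keep it inspectable):

```python
def biggest_then_pairs(array):
    lines = []
    # Step 1: the biggest element -- one pass, O(n).
    lines.append(str(max(array)))
    # Step 2: every ordered pair -- nested loops, O(n^2).
    for x in array:
        for y in array:
            lines.append(f"{x},{y}")
    # Unsimplified: O(n + n^2). The n^2 term dominates, so this is O(n^2).
    return lines

print(len(biggest_then_pairs([5, 2])))  # 1 + 2*2 = 5 lines of output
```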
  • 00:08:12
    We've now covered the very basic pieces of big O. This is an incredibly
  • 00:08:19
    important concept to master for your interviews so don't be lazy here, make
  • 00:08:23
    sure you really really understand this and try a whole bunch of exercises to
  • 00:08:27
    solidify your understanding.
  • 00:08:29
    Good luck.
Tags
  • Big O
  • algorithmic efficiency
  • O(1)
  • O(n)
  • runtime
  • data transfer
  • interview preparation
  • algorithm analysis
  • efficiency rules
  • coding interview