Asyncio in Python - Full Tutorial

00:24:58
https://www.youtube.com/watch?v=Qb9s3UiMSTA

Summary

TL;DR: Asynchronous programming in Python with asyncio allows tasks to begin without waiting for others to finish, increasing efficiency, especially for operations with long wait times like network or file access. Choosing the right concurrency model is crucial: async I/O for high-wait tasks, threads for IO-bound tasks that share data, and processes for CPU-intensive tasks. At the core is the event loop, which manages task execution. Key concepts include coroutines, tasks, and synchronization tools such as locks, semaphores, and events. Coroutine functions, defined with 'async def', return coroutine objects that must be awaited to execute. Tasks enable concurrent execution of coroutines and can be managed with functions like 'gather'. Synchronization primitives ensure orderly task execution and data integrity. Understanding these concepts is essential for writing efficient asynchronous Python code.

Highlights

  • 🚀 Asynchronous programming allows tasks to run independently, starting before other tasks finish, enhancing efficiency.
  • 🕒 Async I/O is most beneficial for tasks with long wait times, such as network or file operations.
  • 💼 Threads are ideal for IO-bound tasks that might also share data, running in parallel.
  • ⚙️ Processes are suited for CPU-heavy tasks, using multiple cores for maximum performance.
  • 🔄 The Event Loop in Python’s asyncio manages task distribution asynchronously, keeping the program efficient.
  • 🔧 A coroutine function is defined with 'async def'; calling it returns a coroutine object that must be awaited to run.
  • ✍️ Tasks allow scheduling of multiple coroutines, enabling concurrent operations for efficient performance.
  • 📊 Gather function can run multiple awaitable tasks concurrently and collect the results in a list.
  • 🔐 Locks and Semaphores in async programming control task access to shared resources, avoiding conflicts.
  • 🛠️ Events serve as synchronization tools for simple task flagging operations.

Timeline

  • 00:00:00 - 00:05:00

    In traditional synchronous programming, tasks are handled linearly, causing delays if one task stalls. Asynchronous programming allows tasks to be executed concurrently, enhancing efficiency by not waiting unnecessarily. It's particularly useful for operations with inherent waiting times, such as network requests. The speaker outlines how asyncio, threads, and processes can be chosen based on task needs, highlighting the role of the event loop in managing and distributing tasks efficiently in asynchronous programming.

  • 00:05:00 - 00:10:00

    The creation and management of an event loop in Python's asyncio is discussed. It starts with importing the asyncio module and calling asyncio.run with the object returned by a coroutine function, which is a coroutine object. The importance of awaiting coroutine objects for execution is emphasized. The speaker introduces the 'await' keyword for executing coroutines and uses examples to illustrate differences in execution timing depending on how and when coroutines are awaited.

  • 00:10:00 - 00:15:00

    The speaker explains tasks in asynchronous programming, highlighting how they schedule coroutines to run as soon as possible without waiting for prior tasks to complete. This optimizes efficiency by switching tasks when one is idle. Methods like create_task and gather are shown to run coroutines concurrently. The concept of a task group is introduced for organizing multiple tasks together, with an emphasis on error handling and task scheduling.

  • 00:15:00 - 00:24:58

    Synchronization primitives such as locks, semaphores, and events are introduced to manage access to shared resources in complex programs. Locks ensure only one coroutine accesses a critical section at a time. Semaphores allow limited concurrent access to resources. Events are described as simple boolean flags that block code execution until a condition is met. These tools help maintain organized and error-free asynchronous operations, with tasks managing concurrent executions efficiently.

Q&A

  • What is asynchronous programming?

    Asynchronous programming allows multiple tasks to be started and potentially run in parallel, without waiting for each task to finish before moving on to the next.

  • When should async I/O be used?

    Async I/O is ideal for tasks involving long wait times, such as network requests or reading files, without much CPU usage.

  • What are Python coroutines?

    Coroutines in Python are created by functions defined with 'async def'; calling such a function returns a coroutine object, which requires 'await' to execute.

  • What role does the event loop play in async programming?

    The event loop in Python's asyncio handles and distributes tasks efficiently by managing their execution asynchronously.

  • How do tasks differ from coroutines?

    Tasks schedule coroutines to run as soon as possible, allowing multiple coroutines to run concurrently.

Subtitles (en)
  • 00:00:00

    Imagine programming as a journey from point A to point D. In traditional synchronous programming we travel in a straight line, stopping at each point before moving to the next; if there's a delay at any point, everything pauses until we can move on. Asynchronous programming changes the game: it allows us to start tasks at B, C, and D even if the task at A isn't finished yet. It's like sending out scouts to explore multiple paths at once, without waiting for the first scout to return before sending out the next. This way our program can handle multiple tasks simultaneously, making it more efficient, especially when dealing with operations that have waiting times, like loading a web page. That's the essence of asynchronous programming: making our code more efficient by doing multiple things at once, without the unnecessary waiting.
  • 00:00:48

    Now let's quickly discuss when we should use asyncio, because when we build software, choosing the right concurrency model, picking between asyncio, threads, or processes, is crucial for performance and efficiency. Asyncio is your choice for tasks that wait a lot, like network requests or reading files. It excels at handling many tasks concurrently without using much CPU power, which makes your application more efficient and responsive when you're waiting on a lot of different tasks. Threads are suited for tasks that may need to wait but also share data; they can run in parallel within the same application, making them useful for tasks that are IO-bound (input/output-bound) but less CPU-intensive. For CPU-heavy tasks, processes are the way to go: each process operates independently, maximizing CPU usage by running in parallel across multiple cores, which is ideal for intensive computations. In summary: choose asyncio for managing many waiting tasks efficiently, threads for parallel tasks that share data with minimal CPU use, and processes for maximizing performance on CPU-intensive tasks. Now that we know when to use asyncio, let's dive into the five key concepts that we need to understand.
  • 00:02:04

    The first concept is the event loop. In Python's asyncio, the event loop is the core that manages and distributes tasks. Think of it as a central hub with tasks circling around it, waiting for their turn to be executed. Each task takes its turn in the center, where it's either executed immediately or paused if it's waiting for something, like data from the internet. When a task awaits, it steps aside, making room for another task to run, ensuring the loop is always efficiently utilized. Once the awaited operation is complete, the task resumes, ensuring a smooth and responsive program flow. That's how asyncio's event loop keeps your Python program running efficiently, handling multiple tasks asynchronously.
  • 00:02:47

    A quick pause here for anyone serious about becoming a software developer: if you want to be like Max, who landed a $70k-per-year job in just 4 months of work, consider checking out my program with Course Careers. It teaches you the fundamentals of programming but also lets you pick a specialization, taught by an industry expert, in frontend, backend, or DevOps. Beyond that, we help you prepare your resume, give you tips to optimize your LinkedIn profile, and show you how to prepare for interviews; we really only succeed if our students actually get jobs, and that's the entire goal of the program. If that's of interest to you, there's a free introduction course with a ton of value, no obligation, no strings attached, that you can check out from the link in the description.
  • 00:03:29

    Now that we understand what the event loop is, it's time to look at how we create one, and then talk about the next important concept: coroutines. Whenever we start writing asynchronous code in Python, we begin by importing the asyncio module. This is built into Python; you don't need to install it. For the purpose of this video I'll be referencing the features in Python version 3.11 and above, so if you're using an older version of Python, make sure you update it, because some things have changed in recent versions. We begin by importing the module, then we use the line asyncio.run and pass to it something known as a coroutine object, which is what a coroutine function returns. asyncio.run starts our event loop by running that coroutine. In our case there are two kinds of "coroutine" we're concerned with: the coroutine function itself, and what's returned when you call a coroutine function. It may seem a bit strange, but when you call main() when it's defined using the async keyword, it returns something known as a coroutine object. That coroutine object is what we pass to asyncio.run, which waits for it to finish and starts the event loop for us, where it handles all of our asynchronous programming. So, to recap: import the module; define an asynchronous function (async plus the function name), known as a coroutine function; then call the function and pass the result to asyncio.run. That starts your event loop and lets you run asynchronous code from that entry point.
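The recap above (import the module, define a coroutine function, pass the coroutine object to asyncio.run) can be sketched as follows; the function name and return value are illustrative, not taken from the video:

```python
import asyncio

async def main() -> str:
    # 'async def' makes this a coroutine function.
    return "start of main coroutine"

# Calling main() only builds a coroutine object; asyncio.run()
# starts the event loop, awaits that object, and returns its result.
result = asyncio.run(main())
print(result)
```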
  • 00:05:12

    To illustrate this a bit further, let's look at the difference between an asynchronous function, something defined with the async keyword, and a normal function. Watch what happens if I simply call this function. Some of you may assume it's simply going to print out "start of main coroutine", but you'll see that's actually not the case; the terminal (a little messy here) says "coroutine 'main' was never awaited". The reason we get that warning is that when we call the function, what we're actually doing is generating a coroutine object, and that coroutine object needs to be awaited in order for us to actually get the result of its execution. To see this even more visually, we can print out what we get when we call the main function, and notice that we actually get a coroutine object. So when you call a function defined with the async keyword, it returns a coroutine object, and that coroutine object needs to be awaited in order for it to actually execute. That's why we use the asyncio.run syntax: it handles awaiting that coroutine and then allows us to write more asynchronous code.
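The "never awaited" behaviour described above, reconstructed as a runnable sketch (names are illustrative):

```python
import asyncio

async def main() -> None:
    print("start of main coroutine")

coro = main()                # nothing printed: this only builds a coroutine object
print(type(coro).__name__)   # prints: coroutine
asyncio.run(coro)            # awaiting it (via run) finally executes the body
```

If the object were simply discarded instead of passed to asyncio.run, Python would emit the "coroutine 'main' was never awaited" RuntimeWarning seen in the video.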
  • 00:06:21

    The next thing we need to look at is the await keyword. The await keyword is what we use to await a coroutine, allowing it to execute and giving us its result. The catch is that we can only use the await keyword inside an asynchronous function, that is, inside a coroutine. So let's write another coroutine and see how we would await it and get its result. I've included a slightly more complex example where we're actually waiting on a different coroutine. Notice that we have a coroutine that aims to simulate some input/output-bound operation; that could be going to the network and retrieving some data, or trying to read a file, something that's not controlled by our program and that we're going to wait on for a result. In this case we fetch some data: we delay (sleeping for a certain number of seconds to simulate the IO-bound operation), then get the data and return it. We know this is a coroutine because we've defined it as an asynchronous function. Remember that for a coroutine to actually be executed, it needs to be awaited. Here we create a variable holding the coroutine object; at this point in time it is not yet executing, because it hasn't been awaited. What I'm trying to show you is that when you call an asynchronous function, it returns a coroutine, and that coroutine needs to be awaited before it will actually start executing. So here we await it; it starts executing, and we wait for it to finish before we move on to the rest of the code in our program. Running the code, you can see it prints "start of main coroutine", then "data fetched", then receives the result, and then prints "end of main coroutine".
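A reconstruction of the example described above; the exact names, delay, and returned value in the video may differ:

```python
import asyncio

async def fetch_data(delay: float) -> dict:
    print("fetching data")
    await asyncio.sleep(delay)   # stand-in for a network or file wait
    print("data fetched")
    return {"data": "some data"}

async def main() -> dict:
    print("start of main coroutine")
    coro = fetch_data(0.1)       # coroutine object; not running yet
    result = await coro          # starts it and waits for the result
    print("end of main coroutine")
    return result

result = asyncio.run(main())
```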
  • 00:08:08

    Now let's look at a slightly different example. Let's take this result code and move it to the end of the function, so now we have: print "start of main coroutine", create the coroutine object, print "end of main coroutine", and then await the coroutine. I just want to show you the difference in the result. Running the code, notice we get "start of main coroutine", "end of main coroutine", and only then "fetching data", "data fetched", and the result. The reason is that we only created the coroutine object; we didn't yet await it, so it wasn't until we hit the await line that we waited for its execution to finish before moving on to the next line. It's really important to understand that a coroutine doesn't start executing until it's awaited, or until we wrap it in something like a task, which we'll look at later. I've made a slight variation to the last example: now we're creating two different coroutine objects and then awaiting them. Pause the video and guess what the output will be and how long it will take to execute. When I run this, you'll see we get "fetching data id1", "data fetched id1", we receive the result, and then we fetch for id2. It takes 2 seconds to fetch the first result and another 2 seconds to fetch the second. This might seem counterintuitive: you may have guessed that when we created these two coroutine objects they would start running concurrently, so it would take a total of only 2 seconds and we'd get both results at once. But remember, a coroutine doesn't start running until it's awaited, so we actually wait for the first coroutine to finish, and only once it has finished do we even start executing the second. We haven't really gained any performance benefit here; we've just created a way to wait for a task to finish. Now that we understand this concept, we can move on to tasks and see how to actually speed up an operation like this and run both of these coroutines at the same time.
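The sequential-await timing experiment above can be sketched like this; the delays are shortened from the video's 2 seconds to 0.2 seconds, and the names are illustrative:

```python
import asyncio
import time

async def fetch_data(task_id: int, delay: float) -> dict:
    print(f"fetching data id{task_id}")
    await asyncio.sleep(delay)
    print(f"data fetched id{task_id}")
    return {"id": task_id}

async def main() -> None:
    # Two coroutine objects awaited one after the other: the second
    # doesn't even start until the first finishes, so the total time
    # is the SUM of the delays, not their maximum.
    coro1 = fetch_data(1, 0.2)
    coro2 = fetch_data(2, 0.2)
    print(await coro1)
    print(await coro2)

start = time.perf_counter()
asyncio.run(main())
elapsed = time.perf_counter() - start
```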
  • 00:10:34

    Now we're moving on to the next important concept: a task. A task is a way to schedule a coroutine to run as soon as possible, and to allow us to run multiple coroutines concurrently. The issue we saw previously is that we needed to wait for one coroutine to finish before we could start executing the next. With a task we don't have that issue: as soon as a coroutine is sleeping, or waiting on something that's not in control of our program, we can move on and start executing another task. We're never executing these tasks at the exact same time, we're not using multiple CPU cores, but if one task isn't doing something, if it's idle, blocked, or waiting on something, we can switch over and start working on another task. The whole goal is that our program optimizes its efficiency: we're always attempting to do something, and when we're waiting on something that's not in control of our program, we switch over to another task and start working on that. Here's a quick example showing how to optimize the previous example, using the simple create_task function. There are a few other ways to make tasks, which I'll show you in a second, but this is the simplest: we say task1 = asyncio.create_task(...) and pass in a coroutine object (it's a coroutine object because we call a coroutine function), passing an ID and a time delay. If this ran synchronously, so we had to wait for each of these tasks in turn, it would take 2 seconds plus 3 seconds plus 1 second, a total of 6 seconds. However, you'll see that we can execute this code in just 3 seconds, because as soon as one of the tasks is idle, waiting on its sleep, we can go and start another task. I still need to await these tasks to finish, so I await them all in line and collect their results. Bring the terminal up and run the code: notice that it starts all three coroutines pretty much immediately, and then we get all of the data back at once, in about 3 seconds. That differs from plain coroutines without tasks, where we'd have to wait for each of them to finish before moving on to the next one. As a quick recap: when we create a task, we're essentially scheduling a coroutine to run as quickly as possible, and allowing multiple coroutines to run at the same time; as soon as one coroutine is waiting on some operation, we can switch to another and start executing that. All of that is handled by the event loop; it's not something we need to manually take care of. However, if we do want to wait on one task to finish before moving to the next, we can use the await syntax: awaiting the first two tasks before creating the third means we start the first and second coroutines, but we won't start the third until the first and second are done. Asynchronous programming gives us that control and allows us to synchronize our code in whatever manner we see fit.
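A sketch of the create_task example described above; the IDs and delays follow the 2/3/1-second pattern mentioned, scaled down to 0.2/0.3/0.1 seconds:

```python
import asyncio
import time

async def fetch_data(task_id: int, delay: float) -> dict:
    print(f"fetching data id{task_id}")
    await asyncio.sleep(delay)
    print(f"data fetched id{task_id}")
    return {"id": task_id}

async def main() -> list:
    # create_task schedules each coroutine on the event loop right
    # away; while one sleeps, the loop runs the others.
    task1 = asyncio.create_task(fetch_data(1, 0.2))
    task2 = asyncio.create_task(fetch_data(2, 0.3))
    task3 = asyncio.create_task(fetch_data(3, 0.1))
    return [await task1, await task2, await task3]

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # roughly the longest delay, not the sum
```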
  • 00:13:41

    Now a quick example of something known as the gather function. The gather function is a quick way to concurrently run multiple coroutines, just like we did manually before. Rather than creating a task for every single coroutine with create_task, we can simply use gather and it will automatically run them concurrently for us and collect the results in a list. The way it works is that we pass multiple coroutines as arguments; these are automatically scheduled to run concurrently, so we don't need to wait for one to finish before starting the next, and the results are gathered in a list in the order in which we provided the coroutines: the result of the first is the first element in the list, then the second element, the third element, and so on. It waits for all of them to finish when we use the await keyword, which simplifies the process and gives us all of the results in one place, so we can parse through them with a for loop. Run the code and you'll see it starts all three coroutines, we wait 3 seconds, and then we get all of the results. One thing you should know about gather is that it's not that great at error handling: it won't automatically cancel other coroutines if one of them fails. The reason I bring that up is that the next example provides some built-in error handling, which is why it's typically preferred over gather; but it's worth noting that if an error occurs in one of these coroutines, the others won't be cancelled, which means you could get some weird state in your application if you're not manually handling the exceptions and errors that could occur.
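The gather pattern described above, as a minimal sketch (names and delays are illustrative):

```python
import asyncio

async def fetch_data(task_id: int, delay: float) -> dict:
    await asyncio.sleep(delay)
    return {"id": task_id}

async def main() -> list:
    # gather schedules all coroutines concurrently and returns their
    # results in the order the coroutines were passed in.  Note: by
    # default an exception in one coroutine propagates, but the
    # sibling coroutines are NOT cancelled.
    return await asyncio.gather(
        fetch_data(1, 0.2),
        fetch_data(2, 0.1),
        fetch_data(3, 0.3),
    )

results = asyncio.run(main())
for r in results:
    print(r)
```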
  • 00:15:17

    Now the last example in the topic of tasks, something relatively new known as a task group. This is a slightly more preferred way to create multiple tasks and organize them together, because it provides some built-in error handling: if any of the tasks inside our task group fails, it will automatically cancel all of the other tasks, which is typically preferable when we're dealing with larger applications where we want to be a bit more robust. The fetch_data function has not changed at all; all we've done is start using asyncio.TaskGroup. Notice that I'm using async with: this is what's known as an asynchronous context manager. You don't need to have seen context managers before; what this does is give us access to the tg variable (we create a task group "as tg"), and to create a task we say tg.create_task, just like in that first example. We can then add each task to something like a tasks list if we care about its result. Once we get past the async with block, down to where I have the comment, all of these tasks will have already been executed. The idea is that this is a little bit cleaner: it automatically executes all of the tasks we add inside the task group, and once all of those tasks have finished, the block stops blocking, meaning we can move down to the next line of code, at which point we can retrieve the different results from our tasks. There are various ways to write this type of code, but the idea is: you create a task, and as soon as it's created inside the task group, we need to wait for it and all the other tasks to finish before we unblock from this block of code; once they're all finished, we move on to the next lines. Similarly to any other task we looked at before, these all run concurrently: if one task is sleeping, we can go on, start another task, and work on something else. So those are tasks; there's obviously a lot more you can do here, but understand that you use tasks when you want to execute code concurrently and you want multiple different operations happening at the same time.
  • 00:17:33
    the fourth important concept which is a
  • 00:17:35
    future now it's worth noting that a
  • 00:17:37
    future is not something that you're
  • 00:17:39
    expected to write on your own it's
  • 00:17:41
    typically utilized in lower level
  • 00:17:43
    libraries but it's good to just be
  • 00:17:44
    familiar with the concept in case you
  • 00:17:46
    see it in asynchronous programming so
  • 00:17:48
    I'll go through this fairly quickly but
  • 00:17:50
    really what a future is is a promise of
  • 00:17:52
    a future result so all it's saying is
  • 00:17:54
    that a result is to come in the future
  • 00:17:57
    you don't know exactly when that's going
  • 00:17:58
    to be that's all a future is so in this
  • 00:18:01
    case you can see that we actually create
  • 00:18:02
    a future and we await its value what we
  • 00:18:05
    do is we actually get the event Loop you
  • 00:18:07
    don't need to do this you'll probably
  • 00:18:09
    never write this type of code we create
  • 00:18:11
    our own future we then have a new task
  • 00:18:14
    that we create using asyncio and you
  • 00:18:16
    can see the task is set future result
  • 00:18:19
    inside here we wait for 2 seconds so
  • 00:18:21
    this is some blocking operation and then
  • 00:18:24
    we set the result of the future and we
  • 00:18:26
    print out the result here we await the
  • 00:18:29
    future and then we print the result now
  • 00:18:32
    notice we didn't actually await the task
  • 00:18:34
    to finish we awaited the future object
  • 00:18:37
    so inside of the task we set the value
  • 00:18:40
    of the future and we awaited that which
  • 00:18:43
    means as soon as we get the value of the
  • 00:18:45
    future this task may or may not actually
  • 00:18:47
    be complete so this is slightly
  • 00:18:49
    different than using a task when we use
  • 00:18:51
    a future we're just waiting for some
  • 00:18:53
    value to be available we're not waiting
  • 00:18:56
    for an entire task or an entire
  • 00:18:58
    coroutine to finish that's all I really
  • 00:19:00
    want to show you here I don't want to
  • 00:19:01
    get into too many details that's a
  • 00:19:03
    future really just a promise of an
  • 00:19:04
    eventual result so now we're moving on
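Before that, the future pattern described above can be sketched minimally. The function name set_future_result is illustrative, not from the video's exact source:

```python
import asyncio

async def set_future_result(future: asyncio.Future, value: str) -> None:
    await asyncio.sleep(0.1)   # simulate work before the value exists
    future.set_result(value)   # fulfil the promise

async def main() -> str:
    loop = asyncio.get_running_loop()
    future = loop.create_future()  # a promise of a future result
    task = asyncio.create_task(set_future_result(future, "done"))
    # We await the future, not the task: we resume as soon as the
    # value is available, whether or not the task itself is finished.
    return await future

result = asyncio.run(main())
```

As the transcript notes, you will rarely write this yourself; futures mostly appear inside lower-level libraries.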
  • 00:19:07
    and talking about synchronization
  • 00:19:08
    Primitives now these are tools that
  • 00:19:10
    allow us to synchronize the execution of
  • 00:19:12
    various coroutines especially when we
  • 00:19:14
    have larger more complicated programs
  • 00:19:17
    now let's look at this example so we can
  • 00:19:19
    understand how we use the first
  • 00:19:20
    synchronization tool which is lock let's
  • 00:19:23
    say that we have some shared resource
  • 00:19:25
    maybe this is a database maybe it's a
  • 00:19:27
    table maybe it's a file doesn't matter
  • 00:19:29
    what it is but the idea is that it might
  • 00:19:31
    take a fair amount of time for us to
  • 00:19:33
    actually modify or do some operation on
  • 00:19:35
    this shared resource and we want to make
  • 00:19:37
    sure that no two coroutines are working
  • 00:19:40
    on this at the same time the reason for
  • 00:19:42
    that is if two coroutines were say
  • 00:19:44
    modifying the same file if they're
  • 00:19:45
    writing something to the database we
  • 00:19:47
    could get some kind of error where we
  • 00:19:49
    get a mutated state or just weird
  • 00:19:52
    results end up occurring because we have
  • 00:19:54
    kind of different operations happening
  • 00:19:55
    at different times and they're
  • 00:19:56
    simultaneously occurring when we really
  • 00:19:58
    want to wait for one entire operation to
  • 00:20:00
    finish before the next one begins
  • 00:20:03
    that might seem a little bit confusing
  • 00:20:05
    but the idea is we have something and we
  • 00:20:06
    want to lock it off and only be using it
  • 00:20:09
    from one coroutine at a time so what
  • 00:20:12
    we can do for that is we can create a
  • 00:20:13
    lock now when we create a lock we have
  • 00:20:16
    the ability to acquire the lock and we
  • 00:20:18
    do that with this code right here which
  • 00:20:20
    is async with lock now this again is an
  • 00:20:23
    asynchronous context manager and what
  • 00:20:25
    this will do is it will check if any
  • 00:20:27
    other coroutine is currently using
  • 00:20:29
    the lock if it is it's going to wait
  • 00:20:32
    until that coroutine is finished if
  • 00:20:34
    it's not it's going to go into this
  • 00:20:35
    block of code now the idea is whatever
  • 00:20:38
    we put inside of this context manager
  • 00:20:40
    needs to finish executing before the
  • 00:20:43
    lock will be released which means we can
  • 00:20:45
    do some critical part of modification we
  • 00:20:48
    can have some kind of code occurring in
  • 00:20:49
    here that we know will happen all at
  • 00:20:51
    once before we move on to a different
  • 00:20:53
    task or to a different coroutine the
  • 00:20:56
    reason that's important is because we
  • 00:20:57
    have something like an await maybe we're
  • 00:20:59
    awaiting a network operation to save
  • 00:21:01
    something else that could trigger a
  • 00:21:03
    different task to start running in this
  • 00:21:05
    case we're saying hey within this lock
  • 00:21:08
    wait for all of this to finish before we
  • 00:21:10
    release the lock which means that even
  • 00:21:12
    though another task could potentially be
  • 00:21:14
    executing when the sleep occurs it can't
  • 00:21:16
    start executing this critical part of
  • 00:21:19
    code until all of this is finished and
  • 00:21:21
    the lock is released so all the lock is
  • 00:21:24
    really doing is it's synchronizing our
  • 00:21:26
    different coroutines so that they
  • 00:21:28
    can't be using this block of code or
  • 00:21:30
    executing this block of code while
  • 00:21:32
    another coroutine is executing it
  • 00:21:34
    that's all it's doing it's locking off
  • 00:21:36
    access to in this case a critical
  • 00:21:38
    resource that we only want to be
  • 00:21:39
    accessed one at a time so in this case
  • 00:21:42
    you can see that we create five
  • 00:21:43
    different instances of this coroutine
  • 00:21:46
    we then are accessing the lock and then
  • 00:21:48
    again once we get down here we're going
  • 00:21:49
    to release it so if we bring up the
  • 00:21:52
    terminal here and we start executing
  • 00:21:54
    this you'll see that we have resource
  • 00:21:55
    before modification resource after
  • 00:21:58
    before after before after and the idea
  • 00:22:00
    is even though we've executed these
  • 00:22:01
    coroutines concurrently we're gating them
  • 00:22:04
    off and we're locking their access to
  • 00:22:06
    this resource so that only one can be
  • 00:22:08
    accessing it at a time moving on the
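The lock pattern just described can be sketched concretely. This is a minimal sketch, not the video's exact code: shared_log stands in for the shared resource, and the function names are illustrative.

```python
import asyncio

shared_log = []  # stand-in for a shared resource (a file, a DB row, ...)

async def modify(lock: asyncio.Lock, name: str) -> None:
    # Only one coroutine may hold the lock at a time, so the
    # before/after pair below is never interleaved with another's.
    async with lock:
        shared_log.append(f"{name} before")
        await asyncio.sleep(0.05)  # simulated slow modification
        shared_log.append(f"{name} after")

async def main() -> None:
    lock = asyncio.Lock()
    await asyncio.gather(*(modify(lock, f"task{i}") for i in range(3)))

asyncio.run(main())
```

Without the lock, the sleep inside each coroutine would let the others interleave their before/after lines; with it, each pair always appears together.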
  • 00:22:10
    next synchronization primitive to cover
  • 00:22:12
    is known as the semaphore now a
  • 00:22:14
    semaphore is something that works very
  • 00:22:16
    similarly to a lock however it allows
  • 00:22:19
    multiple coroutines to have access to
  • 00:22:21
    the same object at the same time but we
  • 00:22:23
    can decide how many we want that to be
  • 00:22:26
    so in this case we create a semaphore
  • 00:22:28
    and we give it a limit of two that
  • 00:22:30
    means only two coroutines can
  • 00:22:32
    access some resource at the exact same
  • 00:22:34
    time and the reason we would do that is
  • 00:22:36
    to make sure that we kind of throttle
  • 00:22:38
    our program and we don't overload some
  • 00:22:40
    kind of resource so it's possible that
  • 00:22:42
    we're going to send a bunch of different
  • 00:22:43
    network requests we can do a few of them
  • 00:22:46
    at the same time but we can't do maybe a
  • 00:22:48
    thousand or 10,000 at the same time so
  • 00:22:50
    in that case we would create a semaphore
  • 00:22:52
    we'd say okay our limit is maybe five at
  • 00:22:54
    a time and this way now we have the
  • 00:22:56
    event loop automatically handle this and
  • 00:22:58
    throttle our code intentionally to only
  • 00:23:00
    send a maximum of five requests at a time
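A minimal sketch of this throttling pattern follows. The counters are only there to demonstrate that concurrency never exceeds the semaphore's limit, and fetch is a hypothetical stand-in for a network request:

```python
import asyncio

active = 0  # how many coroutines are inside the guarded section
peak = 0    # the most that were ever inside at once

async def fetch(sem: asyncio.Semaphore, i: int) -> None:
    global active, peak
    async with sem:  # at most two coroutines pass this point at once
        active += 1
        peak = max(peak, active)
        await asyncio.sleep(0.05)  # simulated network request
        active -= 1

async def main() -> None:
    sem = asyncio.Semaphore(2)  # throttle: a limit of two at a time
    await asyncio.gather(*(fetch(sem, i) for i in range(6)))

asyncio.run(main())
```

All six coroutines start concurrently, but the semaphore admits only two into the guarded block at any moment, so peak never exceeds the limit.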
  • 00:23:03
    anyways let's bring up our terminal here
  • 00:23:05
    and run this code so python3
  • 00:23:09
    semaphore.py and you can see that we can access
  • 00:23:10
    the resource kind of two at a time and
  • 00:23:13
    modify it but we can't have any more
  • 00:23:15
    than that now moving on to the last
  • 00:23:17
    primitive we're going to talk about this
  • 00:23:19
    is the event now the event is something
  • 00:23:21
    that's a little bit more basic and
  • 00:23:22
    allows us to do some simpler
  • 00:23:24
    synchronization in this case we can
  • 00:23:26
    create an event and what we can do is we
  • 00:23:28
    can await the event to be set and we can
  • 00:23:31
    set the event and this acts as a simple
  • 00:23:33
    Boolean flag and it allows us to block
  • 00:23:36
    other areas of our code until we've set
  • 00:23:39
    this flag to be true so it's really just
  • 00:23:41
    like setting a variable to true or false
  • 00:23:43
    in this case it's just doing it in the
  • 00:23:44
    asynchronous way so you can see we have
  • 00:23:47
    some Setter function maybe it takes two
  • 00:23:49
    seconds to be able to set some result we
  • 00:23:51
    then set the result and as soon as that
  • 00:23:53
    result has been set we can come up here
  • 00:23:55
    we await that so we wait for this to
  • 00:23:57
    finish and then we can go ahead and
  • 00:23:59
    print the event has been set continue
  • 00:24:01
    execution so we can bring this up here
  • 00:24:04
    and quickly have a look at this so
  • 00:24:05
    python3 if we spell that correctly
  • 00:24:08
    event.py and you'll see it says
  • 00:24:10
    awaiting the event to be set event has
  • 00:24:12
    been set event has been set continuing
  • 00:24:14
    the execution okay pretty
  • 00:24:16
    straightforward it's just a Boolean flag
  • 00:24:18
    that allows us to wait at certain points
  • 00:24:20
    in our program there's lots of different
  • 00:24:21
    times when you would want to use this
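One such use can be sketched minimally. This is not the video's exact code; the function names and messages are illustrative:

```python
import asyncio

log = []

async def waiter(event: asyncio.Event) -> None:
    log.append("waiting for the event")
    await event.wait()  # blocks here until the flag is set
    log.append("event set, continuing")

async def setter(event: asyncio.Event) -> None:
    await asyncio.sleep(0.1)  # some work before flipping the flag
    event.set()               # the simple boolean flag goes true

async def main() -> None:
    event = asyncio.Event()
    await asyncio.gather(waiter(event), setter(event))

asyncio.run(main())
```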
  • 00:24:23
    but I just wanted to quickly show you
  • 00:24:24
    that we do have something like this that
  • 00:24:25
    exists now there is another type of
  • 00:24:27
    primitive here that's a bit more
  • 00:24:29
    complicated called the condition I'm not
  • 00:24:31
    going to get into that in this video in
  • 00:24:33
    fact I'm going to leave the video here
  • 00:24:35
    if you guys enjoyed this make sure you
  • 00:24:37
    leave a like subscribe to the channel
  • 00:24:39
    and consider checking out my premium
  • 00:24:40
    software development course with course
  • 00:24:42
    careers if you enjoy this teaching style
  • 00:24:44
    and you're serious about becoming a
  • 00:24:46
    developer anyways I will see you guys in
  • 00:24:48
    another YouTube
  • 00:24:50
    video
标签
  • Asynchronous programming
  • Python
  • Concurrency
  • Event Loop
  • Coroutines
  • Async I/O
  • Threads
  • Processes
  • Synchronization
  • Concurrency models