What are Autoregressive (AR) Models

00:05:00
https://www.youtube.com/watch?v=Mc6sBAUdDP4

Summary

TLDR: The video explains autoregressive (AR) models in the context of time series, highlighting the key importance of stationarity. AR models try to forecast a series based solely on its previous values, called lags. A model that depends on only one previous value is called an AR(1). When multiple previous values are considered, we have higher-order AR models, such as AR(p). These models are considered 'long memory' models because the effects of past values, although they diminish, never disappear entirely. Stationarity is essential to ensure that these effects fade and that predictions stay consistent.

Takeaways

  • 🔄 AR models are used to forecast the future based on a time series' past.
  • 📏 Stationarity ensures that AR models are reliable.
  • 📉 AR(1) models use a single previous value for prediction.
  • 🔢 AR(p) models can include multiple previous values.
  • ⏳ The effects of past events shrink but never disappear completely.
  • 🔍 AR models make it possible to study the temporal dependence in data.
  • 🔗 'Long memory' models retain the influence of more distant past events.
  • 🧩 Stationary models require stationarity to work correctly.
  • 📊 AR models can grow more complex as more lags are added.
  • 🕰️ The study of time series is essential for forecasting the future.

Timeline

  • 00:00:00 - 00:05:00

    The series returns to the idea of modeling, focusing today on autoregressive (AR) models. The importance of stationarity is explained: the distribution should depend only on differences in time, not on specific locations in time. The most popular stationary models are autoregressive models, which forecast a series based only on its previous values. A model that depends only on the previous value is known as an AR model of order 1, or AR(1). This model uses past values to predict the target value, involving an intercept and a coefficient. Progressively more previous values can be used to improve the forecast, and these are conceptualized as long memory models, where effects dissipate over time but never fully disappear. AR(p) models, with multiple lags, are introduced, and a possible counterpart to be presented in future videos is mentioned. A minimal simulation sketch of this dissipating-shock idea follows below.
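The video works from charts rather than code, but the dissipating-shock behavior is easy to see in a quick simulation. A minimal sketch, assuming NumPy; all names and parameter values here are chosen for illustration and do not come from the video:

    import numpy as np

    rng = np.random.default_rng(42)

    # AR(1): y_t = c + phi * y_{t-1} + e_t; |phi| < 1 gives stationarity
    c, phi, n = 1.0, 0.7, 200
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = c + phi * y[t - 1] + rng.normal(scale=0.5)

    # A one-time shock k periods ago contributes phi**k of its size today,
    # so its effect shrinks geometrically but never hits exactly zero.
    print([round(phi ** k, 3) for k in range(8)])
    # [1.0, 0.7, 0.49, 0.343, 0.24, 0.168, 0.118, 0.082]

With phi = 0.7, the very first observation still influences the last one, but only through a factor of 0.7 raised to a large power, which matches the video's point that old shocks matter less and less without ever vanishing.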


Video Q&A

  • What is an autoregressive (AR) model?

    An autoregressive model uses the previous values of a time series to predict its current value.

  • Why is stationarity important in time series?

    Stationarity is crucial for the classes of models that depend on it, known as stationary models, because it keeps their predictions consistent over time.

  • How does stationarity affect AR models?

    Stationarity ensures that past shocks have a diminishing effect on the present, keeping predictions consistent.

  • What does 'AR(1)' mean when talking about AR models?

    An AR(1) is an autoregressive model that uses only the immediately preceding value of the time series for prediction.

  • Is it possible to use more than one previous value in an AR model?

    Yes, several previous values can be used; this is known as a higher-order AR model, for example AR(p). A fitting sketch follows after this list.

  • What is meant by 'long memory' models?

    'Long memory' models are ones where the effects of past events never disappear completely, although they do diminish over time.
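The Q&A stops at the definitions; as a hedged sketch of actually fitting a higher-order AR model, one common route is statsmodels' AutoReg. The toy series and all parameter values below are assumptions for illustration, not from the video:

    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    # Simulate a toy AR(2) series so the example is self-contained
    rng = np.random.default_rng(0)
    y = np.zeros(300)
    for t in range(2, len(y)):
        y[t] = 0.5 + 0.4 * y[t - 1] + 0.3 * y[t - 2] + rng.normal(scale=0.5)

    # Regress y_t on its previous two lags (an AR(2) fit)
    res = AutoReg(y, lags=2).fit()
    print(res.params)  # intercept followed by the two lag coefficients

    # Forecast the next 5 periods from the fitted model
    print(res.predict(start=len(y), end=len(y) + 4))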


Transcript (en)
  • 00:00:00
    Finally, our series on time series gets back to the idea of modeling. Specifically, today: what are autoregressive (AR) models? Well, first of all, let's remember stationarity. We can't have independence in time series, but we still want consistency, and that means we want the distribution to depend only on differences in time, not on locations in time. Of course, if you're not quite sure about stationarity, you can always go back and watch my video on stationarity. The reason stationarity is so important is that there are entire classes of models that depend on stationarity, and these models are, well, called stationary models. I know, very complicated. One of the most popular of these stationary models is the autoregressive model.
  • 00:00:45
    Autoregressive models try to forecast a series based solely on the past values of the series; we call these lags. A model that only depends on the previous value, the previous lag, or one lag in the past, is called an AR model of order 1, or an AR(1) model, as you can see here. So we have the target variable Y that we're still interested in, but we're essentially using previous values of Y to predict this target variable Y; you can imagine it like a lagged target. And then, of course, it's not going to be a perfect prediction, so we have some error as well. Now, there's an intercept and there's a coefficient; we'll deal with the coefficient in a little bit.
  • 00:01:26
    But let's talk a little bit more about this lagged target. Maybe you're having a hard time conceptualizing this, so let's zoom in a little more on my sales chart on the right-hand side. Again, let's say I want to forecast my sales on a weekly basis. Well, how can I use the previous week to help me out? Let's take a look at one observation: this one observation would essentially be looking back at the previous week's sales to try and predict it. So again, I'm using a previous week to predict a current week. Now, of course, I'm not going to do this for just one observation; I'm doing this for all of the observations in my dataset. Every observation is looking back at the week before it, except, of course, for the very first observation. It's lonely; it doesn't have one before it. But don't worry, first observation, you'll be important, just wait.
  • 00:02:13
    And so if you keep doing this whole idea of recursion, where today depends on the time period before, and that depends on the time period before that, and that depends on the time period before that, you can recursively plug this in in a mathematical way, which is why these are called long memory models.
  • 00:02:31
    Let me show you what I mean mathematically. Let's say that today, YT, depends on YT minus 1. Well, that means that YT minus 1 depends on YT minus 2, and so I can actually plug these equations into each other and show, in a roundabout way, that YT depends on YT minus 2, with a little extra error and a slightly different intercept. I still have that YT minus 2 is somewhat influential on YT. OK, well, if I know that YT is dependent on YT minus 2, let's think about YT minus 2. Well, YT minus 2 depends on YT minus 3. Time to repeat the process: I can plug these two equations into each other and, again in a roundabout way, show that YT minus 3 has some minor effect on YT today, again with a little extra error and a slightly different intercept. Of course, I can repeat this over and over again; in fact, I can do this all the way back to the very first observation. Essentially, plug in a bunch of math and you get what you have here at the very bottom of the screen. And if you take a look, the first observation does matter: the first observation, in a roundabout way, still has a small effect on what's going on today. Now, the whole point of stationarity is that this effect goes away, so shocks that happened long ago will have only a little effect on today if that coefficient, in absolute value, is less than 1.
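    The on-screen algebra isn't in the transcript; a sketch of the substitution being described, in the same assumed notation, is

    $$ y_t = c + \phi\, y_{t-1} + \varepsilon_t \quad\text{and}\quad y_{t-1} = c + \phi\, y_{t-2} + \varepsilon_{t-1} $$
    $$ \Rightarrow\; y_t = c(1+\phi) + \phi^2 y_{t-2} + \varepsilon_t + \phi\, \varepsilon_{t-1}, $$

    and after $k$ substitutions

    $$ y_t = c\sum_{i=0}^{k-1}\phi^i + \phi^k\, y_{t-k} + \sum_{i=0}^{k-1}\phi^i\, \varepsilon_{t-i}, $$

    so an observation or shock $k$ periods back enters today's value with weight $\phi^k$, which shrinks geometrically when $|\phi| < 1$ but never reaches exactly zero.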
  • 00:04:02
    Again, this is why these are called long memory models: the effect doesn't ever fully go away, but it does dissipate over time, hence the idea of stationarity. Now, of course, why stop with only one lag? You could easily have two lags, where, looking at this week's sales, I can look back at the previous two weeks to try and predict this week's sales.
  • 00:04:27
    Of course, again, I'm not doing this for one observation; I'm doing this for all of the observations. But if you try and draw this out, it gets a little complicated. And again, why stop with two? Why not have as many lags in the model as you'd like? So you can go up to p lags into the past; these are called AR(p) models.
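    In the same assumed notation, an AR(p) model extends this to $p$ lags:

    $$ y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t $$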
  • 00:04:45
    And these models can get rather large. If only they had a counterpart to help them out! Oh, you'll have to tune in to our next video to figure that one out. But for right now, what are autoregressive models? Those are autoregressive models, in under five minutes.
Tags
  • autoregressive
  • stationarity
  • long memory
  • lags
  • time series
  • stationary models