Meta Goes ALL IN on AMD's MI300X AI Chip!

00:12:29
https://www.youtube.com/watch?v=cRcEeZpmOfs

Summary

TLDR: In the podcast, the host discusses Meta's (Mark Zuckerberg's) bet on AMD, arguing that while analysts downgrade the stock amid concerns, AMD holds a substantial advantage over Nvidia in the inference market. Meta's exclusive use of the MI300X for Llama inference serves as an endorsement of AMD's technology, which the host argues could drive AMD's growth as the inference market expands significantly. Executives from AMD and Meta highlight their collaboration and alignment on open software to meet evolving AI demands. The podcast ultimately suggests that AMD's stock could see substantial gains in the near future as demand for AI inference grows.

Key Points

  • 📈 Mark Zuckerberg's Meta has gone all in on AMD hardware.
  • 📉 Analysts are downgrading AMD based on training market performance.
  • 🌟 AMD's MI300X excels at AI inference workloads.
  • 💡 Inference market potential is greater than training market.
  • 🔄 AMD's chiplet platform offers flexibility for AI tasks.
  • 🤝 Partnership between AMD and Meta is strong and fruitful.
  • 🧠 AI applications are becoming more dominant in the tech landscape.
  • 💰 AMD's stock could be entering a profitable phase.
  • ⚙️ Continuous innovation is key for AMD's success.
  • 🔍 Meta's loyalty to AMD illustrates confidence in the technology.

Timeline

  • 00:00:00 - 00:05:00

    The podcast discusses AMD's current market position, emphasizing Meta's and Mark Zuckerberg's commitment to the company despite negative analyst views. The host argues that AMD has a strategic advantage over Nvidia in the inference market, which is not fully recognized by analysts. The importance of AMD's chiplet platform, which allows efficient adaptation to various AI workloads, is underscored as a key factor for future revenue growth.

  • 00:05:00 - 00:12:29

    A detailed conversation highlights how AMD's MI300X is used exclusively by Meta for inference, showcasing its competitive edge in the AI market. The collaboration between Meta and AMD is marked by a shared commitment to open software and rapid product iteration, indicating strong potential for future growth. The podcast concludes by asserting that AMD is likely to see significant success as the inference market expands, making it a promising investment opportunity.


Video Q&A

  • Why is Mark Zuckerberg investing in AMD?

    He believes in AMD's potential in the inference market, which is poised to grow larger than the training market.

  • What is the MI300X?

    It is AMD's chip designed specifically for AI workloads and inference tasks.

  • How does AMD compare to Nvidia?

    While Nvidia is currently dominant in the training market, AMD has an edge in inference capabilities.

  • What concerns do analysts have regarding AMD?

    Some analysts have downgraded AMD's stock due to uncertainty, focusing mainly on its training market performance.

  • What makes AMD's chiplet platform advantageous?

    It allows for flexible, cost-effective adaptations for various AI workloads, enhancing performance for inference.

Subtitles (en)
  • 00:00:00

    Hello everyone, and welcome back to the podcast. While everyone is crying about AMD stock, guess who's all in on the company? Well, you guessed it: Mark Zuckerberg. While analysts are downgrading the stock, citing some very feeble research and some very unfounded conclusions, Mark Zuckerberg is all in on this company, and a lot of people are selling, losing their money, and essentially missing out on a huge long-term opportunity, which I'm going to explain in this podcast. Essentially, what's happening is that AMD has an advantage over Nvidia in the inference market, which in turn will be much larger than the training market. Analysts are looking for signs of traction on the training side of the equation. AMD is doing all right there, though obviously not as big as Nvidia, but they're missing the opportunity ahead. This is how Lisa Su turned the company around: when her board was saying, hey, you should get into tablets and the like, Lisa was betting on the non-obvious business opportunity that lay ahead and executing successfully. Exactly the same thing is going on now, and people are just looking in the wrong place. So let's get deep into it, and let me explain why this company is going to do so incredibly well in the coming few years.
  • 00:01:08

    Analysts are increasingly unsure about AMD, with HSBC's Frank Lee double-downgrading the stock last week from buy to sell. I think he had a price target of just over $200, and now it's down to $100, which in any case says this analyst had no idea what he was doing from the beginning, because fundamentally the company hasn't changed between those two price targets; obviously something fishy is going on there one way or another. Meanwhile, Meta's Mark Zuckerberg has gone all in on AMD by running Llama inference on AMD's MI300X exclusively. Zuckerberg's move is indicative of AMD's superiority on the inference side of the equation, which the market currently fails to understand. The inference market is going to be much, much larger than the training market, because once you train an AI model, you make many inferences with it. The demand for training GPUs is behind Nvidia's meteoric revenue growth, which you can see depicted in the graph below. Hence, if AMD is as well positioned for the inference market as I believe, we are likely to see a similar revenue growth curve in the years ahead.
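To make the "train once, infer many times" point concrete, here is a back-of-envelope sketch; it is an added illustration, not from the video. It uses the common rules of thumb of roughly 6 × parameters × training-tokens FLOPs for training and 2 × parameters FLOPs per generated token for inference; the daily serving volume is a hypothetical placeholder.

```python
# Back-of-envelope: when does cumulative inference compute overtake training compute?
# Rules of thumb: training  ~ 6 * params * training_tokens FLOPs,
#                 inference ~ 2 * params * generated_tokens FLOPs.
# The serving volume below is a hypothetical assumption, not a figure from the video.

PARAMS = 405e9              # a Llama 3.1 405B-class model
TRAIN_TOKENS = 15e12        # ~15T training tokens (order of the published Llama 3 corpus)
DAILY_SERVED_TOKENS = 1e12  # hypothetical tokens generated per day across an app family

train_flops = 6 * PARAMS * TRAIN_TOKENS
infer_flops_per_day = 2 * PARAMS * DAILY_SERVED_TOKENS

breakeven_days = train_flops / infer_flops_per_day
print(f"training compute:          {train_flops:.2e} FLOPs")
print(f"inference compute per day: {infer_flops_per_day:.2e} FLOPs")
print(f"inference overtakes training after ~{breakeven_days:.0f} days of serving")
```

Under these assumptions, cumulative inference compute passes the one-off training cost after roughly a month and a half of serving and keeps growing from there, which is the crux of the host's market-size argument.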
  • 00:02:15

    My view of AMD's positioning for the inference market stems from a first-principles understanding of the physics behind the chips. All chips do is move electrons around to perform arithmetic operations. Since AI models are quite large, the closer you can place the memory engine, where you actually store the AI model, to the compute engine, where the model gets used, the less distance the electrons have to travel. This equates to lower latency and, ultimately and very importantly, faster and cheaper inference.
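The claim that inference speed comes down to how quickly weights can be moved to the compute units can be sanity-checked with a simple roofline-style estimate: for single-stream decoding, every generated token has to stream the model's weights through memory, so peak tokens per second is roughly memory bandwidth divided by the model's size in bytes. The sketch below is an added illustration, not from the video; the bandwidth figures are the vendors' published peak numbers, and everything else is deliberately simplified.

```python
# Roofline-style estimate of memory-bound decode throughput (batch size 1):
# each generated token streams all weights once, so
#   tokens/s <= memory_bandwidth / model_bytes.
# Simplification: ignores KV-cache traffic, batching, and multi-GPU sharding.

def max_decode_tokens_per_s(params: float, bytes_per_param: float, bw_bytes_per_s: float) -> float:
    model_bytes = params * bytes_per_param
    return bw_bytes_per_s / model_bytes

PARAMS = 70e9        # a 70B-parameter model as an example
BYTES_PER_PARAM = 2  # FP16/BF16 weights

published_peak_bandwidth = {
    "MI300X (5.3 TB/s HBM3)": 5.3e12,
    "H100 SXM (3.35 TB/s HBM3)": 3.35e12,
}

for name, bw in published_peak_bandwidth.items():
    limit = max_decode_tokens_per_s(PARAMS, BYTES_PER_PARAM, bw)
    print(f"{name}: ~{limit:.0f} tokens/s upper bound per GPU")
```

The point is not the exact numbers but the shape of the bound: once a model is resident in HBM, decode throughput scales with memory bandwidth, which is why memory placement and bandwidth, rather than raw FLOPs, dominate large-language-model inference cost.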
  • 00:02:45

    What gives AMD the advantage on the inference side is the chiplet platform, which allows AMD to mix and match different compute engines at a marginal cost. For the MI300X, this platform has enabled them to put more memory on chip than Nvidia's alternative, which is ultimately why Meta has picked the MI300X to run Llama inference exclusively. Again, I place particular emphasis on that term, because I don't believe people understand that Meta is using AMD's hardware to actually perform inference with its top model across all of its family of apps.
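The "more memory on chip" point is easy to quantify: the MI300X carries 192 GB of HBM3 per GPU versus 80 GB on an H100 SXM, and at 16-bit precision a 405-billion-parameter model needs roughly 810 GB for its weights alone. The arithmetic below is an added illustration, not from the video; the per-GPU capacities are the vendors' published figures, and the rest is simplified (KV cache and activation memory are ignored).

```python
# Does a 405B-parameter model fit in a single eight-GPU node at FP16/BF16?
# HBM capacities are the vendors' published per-GPU figures; KV-cache and
# activation memory are ignored here to keep the comparison simple.

PARAMS = 405e9
BYTES_PER_PARAM = 2                         # FP16/BF16
weight_gb = PARAMS * BYTES_PER_PARAM / 1e9  # ~810 GB of weights

nodes = {
    "8x MI300X (192 GB each)": 8 * 192,
    "8x H100 SXM (80 GB each)": 8 * 80,
}

for name, capacity_gb in nodes.items():
    headroom = capacity_gb - weight_gb
    status = f"fits, ~{headroom:.0f} GB left for KV cache" if headroom > 0 else "does not fit"
    print(f"{name}: {capacity_gb} GB total vs ~{weight_gb:.0f} GB of weights -> {status}")
```

On these numbers, an eight-GPU MI300X node holds the unquantized 405B weights with headroom to spare, while an 80 GB-per-GPU node has to shard across more than one node or drop to lower precision, which is consistent with the memory-capacity rationale quoted later in the video.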
  • 00:03:20

    In the interview that I'm going to show you next, Lisa explains that AMD has oriented the MI300 to capitalize on the inference opportunity. She also explains that there won't be a single chip for all AI workloads, but rather a broad range of them, each excelling at specific AI workloads. Thus, while legacy analysts are looking for signs of traction for AMD GPUs on the training side, they are completely missing the point: the platform not only gives AMD an advantage on the inference side, it will also enable them to capitalize on emerging AI workloads in a way that other competitors likely won't be able to.
  • 00:03:59

    [Clip: Lisa Su] MI300? MI300, you got it, you heard it here first. Performance-wise, this is going to be competitive with the H100 or exceed the H100. It is definitely going to be competitive for training-type workloads, but one of the things we've done is recognize that in the AI market there is no one-size-fits-all when it comes to chips. There are some that are going to be exceptional for training, there are some that are going to be exceptional for inference, and that depends on how you put it together. What we've done with MI300 is build an exceptional product for inference, especially large language model inference. When we look forward, much of the work being done right now is companies training and deciding what their models are going to be, but going forward we actually think inference is going to be a larger market, and that plays well into some of what we've designed MI300 for.
  • 00:04:56

    Meta going all in on the MI300X is a testament to the power of AMD's platform. AMD's chiplet platform enables them to repurpose the chip for any specific AI workload at a marginal cost. This means that as demand for specialized workloads arises, we'll see AMD get ahead, as we are seeing with inference now. Indeed, legacy analysts are failing to understand not only AMD's edge on the inference side but also the long-term implications of its platform. As evidence of this, consider the difference between the MI300A and the MI300X: the MI300A contains three CPU tiles where the MI300X contains three GPU tiles, ultimately catering to decidedly disparate AI workloads. The cost of this modification is marginal for AMD, because the chiplet platform enables them to swap compute units within the chip easily.

    People on X had a tough time believing that Meta's Llama runs inference exclusively on the MI300X this week, so I had to upload the snippet that I'm going to show you now from AMD's Advancing AI event in October 2024. During this clip you will hear Meta's Kevin Salvadori, VP of Infrastructure, Supply Chain and Engineering, say, quote unquote, that all Meta live traffic has been served using the MI300X exclusively due to its large memory capacity and TCO, which stands for total cost of ownership. By "live traffic" he's referring to inference, and his words essentially confirm that my thesis for AMD's competitive advantage stemming from its unique platform is correct and factual.
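Since TCO is the metric Meta cites, it helps to see how a per-token serving cost falls out of it: amortized hardware cost plus power, divided by the tokens a deployment actually serves. The sketch below is an added illustration with entirely hypothetical inputs; the video quotes no figures, and the two accelerators are labeled generically rather than as specific products.

```python
# Minimal TCO-per-million-tokens model for an inference fleet.
# ALL inputs are hypothetical placeholders; the video gives no actual figures.

def cost_per_million_tokens(gpu_price_usd: float,
                            useful_life_years: float,
                            power_kw: float,
                            usd_per_kwh: float,
                            tokens_per_s: float,
                            utilization: float) -> float:
    hours = useful_life_years * 365 * 24
    capex_per_hour = gpu_price_usd / hours       # straight-line amortization
    power_per_hour = power_kw * usd_per_kwh      # energy cost while serving
    tokens_per_hour = tokens_per_s * 3600 * utilization
    return (capex_per_hour + power_per_hour) / tokens_per_hour * 1e6

# Accelerator "B" is assumed cheaper per unit and higher-throughput per replica,
# e.g. because larger memory means fewer GPUs are needed to hold the model.
print(f"A: ${cost_per_million_tokens(30_000, 4, 0.70, 0.10, 1500, 0.6):.2f} per 1M tokens")
print(f"B: ${cost_per_million_tokens(20_000, 4, 0.75, 0.10, 2500, 0.6):.2f} per 1M tokens")
```

The structure of the calculation, not the invented numbers, is the takeaway: anything that lets fewer GPUs serve more tokens, such as fitting the whole model in one GPU's memory, shows up directly as a lower cost per million tokens, which is what a "TCO advantage" means in this context.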
  • 00:06:34

    Right after that statement, you'll pick up on three critical data points, which I want you to remember: one, Meta is working on deploying the MI300X for training workloads; two, both Meta and AMD are culturally aligned around the idea of open software versus Nvidia's closed ecosystem; and three, the feedback loop between the two companies is fast across the full stack, with them already collaborating on the MI350 and MI400 series.
  • 00:07:02

    [Clip: Lisa Su and Kevin Salvadori at AMD's Advancing AI event]

    Lisa Su: We are also so excited about our AI work together. One of the things I've been incredibly impressed by is just how fast you've adopted and ramped MI300 for your production workloads. Can you tell us more about how you're using MI300?

    Kevin Salvadori: I can. As you know, we like to move fast at Meta, and the deep collaboration between our teams from top to bottom, combined with a really rigorous optimization of our workloads, has enabled us to get MI300 qualified and deployed into production very, very quickly. The collective teamwork to get through whatever challenge came up along the way has been amazing to see; the teams work really well together. MI300X in production has been really instrumental in helping us scale our AI infrastructure, particularly powering inference with very high efficiency. And as you know, we're super excited about Llama and its growth, particularly in July, when we launched Llama 405B, the first frontier-level open-source AI model with 405 billion parameters, and all Meta live traffic has been served using MI300X exclusively due to its large memory capacity and TCO advantage. It's been a great partnership, and based on that success we're continuing to find new areas where Instinct can offer competitive TCO for us, so we're already working on several training workloads. What we love is that culturally we're really aligned from a software perspective, around PyTorch, Triton, and our Llama models, which has been really key for our engineers to land the products and services we want in production quickly. It's just been great to see.

    Lisa Su: I really have to say, Kevin, when I think about Meta, we do so much on the day-to-day trying to ensure that the infrastructure is good, but one of the things I like to say is that you guys are really good at providing feedback, and I think we're pretty good at listening to some of that feedback. Look, we're talking about roadmap today; Meta had substantial input to our Instinct roadmap, and I think that's so necessary when you're talking about all of the innovation in hardware and software. Can you share a little bit about that work?

    Kevin Salvadori: Sure. The problems we're trying to solve as we scale and develop these new AI experiences are really difficult, and it only makes sense for us to work together on what those problems are and align on what you can build into future products. What we love is that we're doing that across the full stack, from silicon to systems and hardware to software to applications, from top to bottom. We've really appreciated the deep engagement of your team, and you guys do listen, and we love that. What that means is we're pretty excited that the Instinct roadmap is going to address more and more use cases and continue to enhance performance and efficiency as we go forward and scale. We're already collaborating together on the MI350 and MI400 series platforms, and we think that's ultimately going to lead to AMD building better products, and for Meta it helps us continue to deliver industry-leading AI experiences for the world.

    Lisa Su: We're really excited about that. Kevin, thank you so much for your partnership, thank you to your teams for all the hard work that we're doing together, and we look forward to doing a lot more together in the future.

    Kevin Salvadori: Thank you, Lisa. Thank you.

    Lisa Su: All right, wonderful. I hope you've heard a little bit from our customers and partners about how we really like to bring co-innovation together, because yes, it's about our roadmap, but it's also about how we work together to really optimize across the stack.
  • 00:11:00
    really optimize across the stack all of
  • 00:11:03
    the things that I've explained
  • 00:11:04
    previously in this video means that AMD
  • 00:11:06
    is likely to have a quote unquote Nvidia
  • 00:11:08
    moment in the next few years as the
  • 00:11:10
    inference Market explodes in my mongod
  • 00:11:13
    DB Deep dive which you can find here or
  • 00:11:15
    in my substack or X or whatever I
  • 00:11:17
    learned that AI apps have not achieved
  • 00:11:20
    widespread traction yet but I believe
  • 00:11:22
    meta evolution is a taste of things to
  • 00:11:24
    come their apps have become incredibly
  • 00:11:27
    addictive over the past 2 years is
  • 00:11:29
    essentially driven by AI inferences
  • 00:11:32
    therefore I don't think it will take
  • 00:11:33
    long for the rest of the economy to
  • 00:11:35
    become inference driven as I predicted
  • 00:11:37
    in my original AMD Deep dive greatly
  • 00:11:40
    benefiting AMD in turn the conversation
  • 00:11:42
    between Lisa Sue and mettis Kevin
  • 00:11:44
    salvatori also reveals that the two
  • 00:11:47
    companies are working together for the
  • 00:11:49
    long term an underappreciated
  • 00:11:51
    characteristic of amd's platform is that
  • 00:11:53
    it enables rapid iteration AMD dethroned
  • 00:11:56
    Intel by working closely with customers
  • 00:11:59
    and using their feedback to make better
  • 00:12:01
    products the same Dynamic ises it play
  • 00:12:03
    with a market that's set to grow
  • 00:12:05
    explosively so my take from this update
  • 00:12:08
    is that one it's a very bad idea to bet
  • 00:12:10
    against this company at present two I
  • 00:12:12
    believe that this stock is now entering
  • 00:12:14
    the fortune making zone all right so
  • 00:12:16
    that's it for today as always if you
  • 00:12:18
    enjoyed it can I please ask you to share
  • 00:12:20
    this with one friend these deep Dives
  • 00:12:21
    are for free so the only way this grows
  • 00:12:23
    is with your help thank you very much in
  • 00:12:25
    advance take care and until next time
Tags
  • AMD
  • Nvidia
  • Mark Zuckerberg
  • AI
  • Inference Market
  • MI300X
  • Chiplet Architecture
  • Meta
  • Lisa Su
  • Stock Analysis