How China’s New AI Model DeepSeek Is Threatening U.S. Dominance

00:40:24
https://www.youtube.com/watch?v=WEBiebbeNCA

Summary

TLDR: In a surprising turn of events, China's AI lab Deepseek has unveiled an open-source AI model that has outperformed U.S.-based models from giants like OpenAI, Google, and Meta, achieving this remarkable feat with significantly lower costs and development time. With an investment of just $5.6 million and a development period of only two months, Deepseek's model has disrupted the status quo in Silicon Valley, where top firms spend billions to develop their technologies. Despite facing hardware restrictions imposed by the U.S. government, Deepseek has demonstrated exceptional efficiency and innovation, raising concerns about the future of AI leadership and the dynamics of the global tech landscape. Observers warn that the widespread adoption of cost-effective and powerful models like Deepseek's may undermine U.S. dominance in the field, while also highlighting the risks associated with AI deployed under authoritarian control.

Takeaways

  • 🚀 Deepseek has unveiled a powerful open-source AI model that outperforms several top U.S. models.
  • 💰 Developed at a fraction of the cost, only $5.6 million for their latest model.
  • 🔍 The model has surpassed GPT-4o, Llama, and other leading AIs in accuracy tests.
  • 🖥️ Deepseek managed to overcome U.S. semiconductor restrictions creatively.
  • 🌍 This development signals a shift in the AI competition landscape, challenging U.S. dominance.
  • 📉 Open-source models could democratize access to AI technology and innovation.
  • 🇨🇳 There are ethical concerns regarding AI models developed within China's regulatory environment.
  • 📈 China has shown rapid advances in AI, catching up within a short period.
  • 💡 Necessity born of resource constraints has driven Chinese innovation in AI.
  • 🤖 The implications of a Chinese open-source AI model could reshape global tech dynamics.

Timeline

  • 00:00:00 - 00:05:00

    China's AI breakthrough, led by Deepseek, has caught the attention of Silicon Valley by outperforming major players like OpenAI and Google with significantly lower costs and faster development times.

  • 00:05:00 - 00:10:00

    Deepseek's model was developed in just two months for approximately $5.6 million, contrasting with the billions spent by American firms. This achievement is reshaping perceptions of China's capabilities in AI.

  • 00:10:00 - 00:15:00

    Deepseek's advanced models have shown superior performance in various benchmarks compared to models from major tech companies, indicating that China's AI landscape is evolving rapidly and competitively.

  • 00:15:00 - 00:20:00

    The restrictions on semiconductor exports to China have not hindered Deepseek's success; instead, they have innovatively utilized available resources to create efficient AI models.

  • 00:20:00 - 00:25:00

    Deepseek's foundation remains somewhat enigmatic, with little known about its creators, posing intriguing questions about transparency and collaboration in the evolving AI landscape.

  • 00:25:00 - 00:30:00

    A broader trend in China’s AI development is emerging, with other companies, especially startups, achieving significant results with limited funding and challenging the prior belief that the U.S. held a substantial lead.

  • 00:30:00 - 00:35:00

    There’s a growing consensus that the open-source AI models from China could disrupt traditional closed-source models in the U.S., leading to a potential shift in the dynamics of global AI development.

  • 00:35:00 - 00:40:24

    Experts argue that the distinction between open-source and closed-source models will become increasingly important, and how firms respond to these competitive pressures will shape the future landscape of AI development.

Video Q&A

  • What is Deepseek?

    Deepseek is a Chinese AI lab that created a free, open-source AI model that outperforms several leading models from U.S. companies.

  • How much did Deepseek spend on their AI model?

    Deepseek reportedly spent just $5.6 million to develop their model.

  • How does Deepseek's model compare to OpenAI's models?

    Deepseek's model outperformed OpenAI's GPT-4o on accuracy in various tests while being significantly cheaper to develop.

  • What challenges has Deepseek faced due to U.S. restrictions?

    Despite U.S. semiconductor restrictions, Deepseek has managed to develop competitive AI models using less powerful hardware efficiently.

  • What are the implications of Deepseek's success?

    Deepseek's breakthrough raises questions about the future of AI competition, the viability of open-source models, and the impact on U.S. technological leadership.

  • What role does open-source play in AI development?

    Open-source models like Deepseek's could democratize AI development, allowing smaller teams to build on existing models without large capital investments.

  • Why is the global tech landscape changing?

    Deepseek's advancements may encourage a shift towards open-source models, impacting the competitive landscape for AI development across nations.

  • What are the concerns over AI models developed in China?

    Models from China may adhere to state-imposed values and censorship, raising ethical concerns about the information they provide.

Subtitles (en)

  • 00:00:00
    China's latest AI
  • 00:00:01
    breakthrough has leapfrogged
  • 00:00:03
    the world.
  • 00:00:04
    I think we should take the
  • 00:00:05
    development out of China
  • 00:00:06
    very, very seriously.
  • 00:00:08
    A game changing move that
  • 00:00:09
    does not come from OpenAI,
  • 00:00:11
    Google or Meta.
  • 00:00:13
    There is a new model that
  • 00:00:14
    has all of the valley
  • 00:00:16
    buzzing.
  • 00:00:17
    But from a Chinese lab
  • 00:00:18
    called Deepseek.
  • 00:00:20
    It's opened a lot of eyes of
  • 00:00:22
    like what is actually
  • 00:00:23
    happening in AI in China.
  • 00:00:25
    What took Google and OpenAI
  • 00:00:26
    years and hundreds of
  • 00:00:27
    millions of dollars to
  • 00:00:28
    build... Deepseek says took
  • 00:00:30
    it just two months and less
  • 00:00:32
    than $6 million.
  • 00:00:34
    They have the best open
  • 00:00:35
    source model, and all the
  • 00:00:37
    American developers are
  • 00:00:38
    building on that.
  • 00:00:39
    I'm Deirdre Bosa with the
  • 00:00:40
    tech check take... China's
  • 00:00:42
    AI breakthrough.
  • 00:00:53
    It was a technological leap
  • 00:00:55
    that shocked Silicon Valley.
  • 00:00:57
    A newly unveiled free,
  • 00:00:59
    open-source AI model that
  • 00:01:01
    beats some of the most
  • 00:01:02
    powerful ones on the market.
  • 00:01:03
    But it wasn't a new launch
  • 00:01:04
    from OpenAI or model
  • 00:01:06
    announcement from Anthropic.
  • 00:01:07
    This one was built in the
  • 00:01:09
    East by a Chinese research
  • 00:01:11
    lab called Deepseek.
  • 00:01:13
    And the details behind its
  • 00:01:14
    development stunned top AI
  • 00:01:16
    researchers here in the U.S.
  • 00:01:17
    First: the cost.
  • 00:01:19
    The AI lab reportedly spent
  • 00:01:20
    just $5.6 million to
  • 00:01:22
    build Deepseek version 3.
  • 00:01:24
    Compare that to OpenAI,
  • 00:01:26
    which is spending $5 billion
  • 00:01:27
    a year, and Google,
  • 00:01:28
    which expects capital
  • 00:01:30
    expenditures in 2024 to soar
  • 00:01:32
    to over $50 billion.
  • 00:01:34
    And then there's Microsoft
  • 00:01:35
    that shelled out more than
  • 00:01:36
    $13 billion just to invest
  • 00:01:39
    in OpenAI.
  • 00:01:40
    But even more stunning how
  • 00:01:42
    Deepseek's scrappier model
  • 00:01:43
    was able to outperform the
  • 00:01:45
    lavishly-funded American
  • 00:01:46
    ones.
  • 00:01:47
    To see the Deepseek,
  • 00:01:49
    new model. It's super
  • 00:01:51
    impressive in terms of both
  • 00:01:52
    how they have really
  • 00:01:53
    effectively done an
  • 00:01:55
    open-source model that does
  • 00:01:56
    what is this inference time
  • 00:01:58
    compute. And it's super
  • 00:01:59
    compute efficient.
  • 00:02:00
    It beat Meta's Llama,
  • 00:02:02
    OpenAI's GPT-4o and
  • 00:02:04
    Anthropic's Claude Sonnet
  • 00:02:05
    3.5 on accuracy on
  • 00:02:07
    wide-ranging tests.
  • 00:02:08
    A subset of 500 math
  • 00:02:09
    problems, an AI math
  • 00:02:11
    evaluation, coding
  • 00:02:12
    competitions, and a test of
  • 00:02:14
    spotting and fixing bugs in
  • 00:02:16
    code. Quickly following that
  • 00:02:18
    up with a new reasoning
  • 00:02:19
    model called R1,
  • 00:02:20
    which just as easily
  • 00:02:22
    outperformed OpenAI's
  • 00:02:23
    cutting-edge o1 in some of
  • 00:02:25
    those third-party tests.
  • 00:02:26
    Today we released Humanity's
  • 00:02:29
    Last Exam, which is a new
  • 00:02:31
    evaluation or benchmark of
  • 00:02:32
    AI models that we produced
  • 00:02:34
    by getting math,
  • 00:02:36
    physics, biology,
  • 00:02:37
    chemistry professors to
  • 00:02:39
    provide the hardest
  • 00:02:40
    questions they could
  • 00:02:41
    possibly imagine. Deepseek,
  • 00:02:42
    which is the leading Chinese
  • 00:02:44
    AI lab, their model is
  • 00:02:47
    actually the top performing,
  • 00:02:48
    or roughly on par with the
  • 00:02:50
    best American models.
  • 00:02:51
    They accomplished all that
  • 00:02:52
    despite the strict
  • 00:02:53
    semiconductor restrictions
  • 00:02:54
    that the U.S. government
  • 00:02:55
    has imposed on China,
  • 00:02:57
    which has essentially
  • 00:02:58
    shackled the amount of
  • 00:02:59
    computing power. Washington
  • 00:03:01
    has drawn a hard line
  • 00:03:02
    against China in the AI
  • 00:03:03
    race. Cutting the country
  • 00:03:05
    off from receiving America's
  • 00:03:06
    most powerful chips like...
  • 00:03:08
    Nvidia's H100 GPUs.
  • 00:03:10
    Those were once thought to
  • 00:03:11
    be essential to building a
  • 00:03:13
    competitive AI model.
  • 00:03:15
    With startups and big tech
  • 00:03:16
    firms alike scrambling to
  • 00:03:17
    get their hands on any
  • 00:03:18
    available. But Deepseek
  • 00:03:20
    turned that on its head.
  • 00:03:21
    Side-stepping the rules by
  • 00:03:22
    using Nvidia's less
  • 00:03:24
    performant H800s to build
  • 00:03:27
    the latest model and showing
  • 00:03:29
    that the chip export
  • 00:03:30
    controls were not the
  • 00:03:31
    chokehold D.C. intended.
  • 00:03:33
    They
  • 00:03:33
    were able to take whatever
  • 00:03:34
    hardware they were trained
  • 00:03:36
    on, but use it way more
  • 00:03:37
    efficiently.
  • 00:03:38
    But just who's behind
  • 00:03:40
    Deepseek anyway? Despite its
  • 00:03:42
    breakthrough, very,
  • 00:03:43
    very little is known about
  • 00:03:45
    its lab and its founder,
  • 00:03:46
    Liang Wenfeng.
  • 00:03:48
    According to Chinese media
  • 00:03:49
    reports, Deepseek was born
  • 00:03:50
    out of a Chinese hedge fund
  • 00:03:52
    called High Flyer Quant.
  • 00:03:53
    That manages about $8
  • 00:03:55
    billion in assets.
  • 00:03:56
    The mission, on its
  • 00:03:57
    developer site, it reads
  • 00:03:58
    simply: "unravel the mystery
  • 00:04:00
    of AGI with curiosity.
  • 00:04:03
    Answer the essential
  • 00:04:04
    question with long-termism."
  • 00:04:06
    The leading American AI
  • 00:04:08
    startups, meanwhile – OpenAI
  • 00:04:09
    and Anthropic – they have
  • 00:04:11
    detailed charters and
  • 00:04:12
    constitutions that lay out
  • 00:04:13
    their principles and their
  • 00:04:14
    founding missions,
  • 00:04:15
    like these sections on AI
  • 00:04:17
    safety and responsibility.
  • 00:04:19
    Despite several attempts to
  • 00:04:20
    reach someone at Deepseek,
  • 00:04:22
    we never got a response.
  • 00:04:24
    How did they actually
  • 00:04:26
    assemble this talent?
  • 00:04:27
    How did they assemble all
  • 00:04:28
    the hardware? How did they
  • 00:04:29
    assemble the data to do all
  • 00:04:30
    this? We don't know, and
  • 00:04:32
    it's never been publicized,
  • 00:04:33
    and hopefully we can learn
  • 00:04:34
    that.
  • 00:04:35
    But the mystery brings into
  • 00:04:36
    sharp relief just how urgent
  • 00:04:38
    and complex the AI face off
  • 00:04:40
    against China has become.
  • 00:04:42
    Because it's not just
  • 00:04:43
    Deepseek. Other,
  • 00:04:44
    more well-known Chinese AI
  • 00:04:45
    models have carved out
  • 00:04:47
    positions in the race with
  • 00:04:48
    limited resources as well.
  • 00:04:50
    Kai Fu Lee, he's one of the
  • 00:04:51
    leading AI researchers in
  • 00:04:53
    China, formerly leading
  • 00:04:54
    Google's operations there.
  • 00:04:55
    Now, his startup,
  • 00:04:57
    "Zero One Dot AI," it's
  • 00:04:59
    attracting attention,
  • 00:05:00
    becoming a unicorn just
  • 00:05:01
    eight months after founding
  • 00:05:02
    and bringing in almost $14
  • 00:05:04
    million in revenue in 2024.
  • 00:05:06
    The thing that shocks my
  • 00:05:08
    friends in the Silicon
  • 00:05:09
    Valley is not just our
  • 00:05:10
    performance, but that we
  • 00:05:12
    trained the model with only
  • 00:05:14
    $3 million, and GPT-4 was
  • 00:05:17
    trained by $80 to $100
  • 00:05:18
    million.
  • 00:05:19
    Trained with just three
  • 00:05:20
    million dollars. Alibaba's
  • 00:05:22
    Qwen, meanwhile, cut costs
  • 00:05:23
    by as much as 85% on its
  • 00:05:25
    large language models in a
  • 00:05:27
    bid to attract more
  • 00:05:27
    developers and signaling
  • 00:05:29
    that the race is on.
  • 00:05:37
    China's breakthrough
  • 00:05:38
    undermines the lead that our
  • 00:05:40
    AI labs were once thought to
  • 00:05:42
    have. In early 2024,
  • 00:05:44
    former Google CEO Eric
  • 00:05:45
    Schmidt. He predicted China
  • 00:05:46
    was 2 to 3 years behind the
  • 00:05:48
    U.S. in AI.
  • 00:05:50
    But now , Schmidt is singing
  • 00:05:51
    a different tune.
  • 00:05:52
    Here he is on ABC's "This
  • 00:05:54
    Week."
  • 00:05:55
    I used to think we were a
  • 00:05:56
    couple of years ahead of
  • 00:05:57
    China, but China has caught
  • 00:05:59
    up in the last six months in
  • 00:06:01
    a way that is remarkable.
  • 00:06:02
    The fact of the matter is
  • 00:06:03
    that a couple of the Chinese
  • 00:06:06
    programs, one,
  • 00:06:07
    for example, is called
  • 00:06:08
    Deepseek, looks like they've
  • 00:06:10
    caught up.
  • 00:06:11
    It raises major questions
  • 00:06:12
    about just how wide open
  • 00:06:15
    AI's moat really is.
  • 00:06:16
    Back when OpenAI released
  • 00:06:17
    ChatGPT to the world in
  • 00:06:19
    November of 2022,
  • 00:06:21
    it was unprecedented and
  • 00:06:22
    uncontested.
  • 00:06:24
    Now, the company faces not
  • 00:06:25
    only the international
  • 00:06:26
    competition from Chinese
  • 00:06:27
    models, but fierce domestic
  • 00:06:29
    competition from Google's
  • 00:06:30
    Gemini, Anthropic's Claude,
  • 00:06:32
    and Meta's open source Llama
  • 00:06:33
    Model. And now the game has
  • 00:06:35
    changed. The widespread
  • 00:06:36
    availability of powerful
  • 00:06:38
    open-source models allows
  • 00:06:40
    developers to skip the
  • 00:06:41
    demanding, capital-intensive
  • 00:06:43
    steps of building and
  • 00:06:45
    training models themselves.
  • 00:06:46
    Now they can build on top of
  • 00:06:48
    existing models,
  • 00:06:49
    making it significantly
  • 00:06:51
    easier to jump to the
  • 00:06:52
    frontier, that is the front
  • 00:06:53
    of the race, with a smaller
  • 00:06:55
    budget and a smaller team.
  • 00:06:57
    In the last two weeks,
  • 00:06:59
    AI research teams have
  • 00:07:01
    really opened their eyes and
  • 00:07:04
    have become way more
  • 00:07:05
    ambitious on what's possible
  • 00:07:07
    with a lot less capital.
  • 00:07:09
    So previously,
  • 00:07:11
    to get to the frontier,
  • 00:07:13
    you would have to think
  • 00:07:13
    about hundreds of millions
  • 00:07:14
    of dollars of investment and
  • 00:07:16
    perhaps a billion dollars of
  • 00:07:17
    investment. What Deepseek
  • 00:07:18
    has now done here in Silicon
  • 00:07:19
    Valley is it's opened our
  • 00:07:21
    eyes to what you can
  • 00:07:22
    actually accomplish with 10,
  • 00:07:24
    15, 20, or 30 million
  • 00:07:26
    dollars.
  • 00:07:27
    It also means any company,
  • 00:07:28
    like OpenAI, that claims the
  • 00:07:30
    frontier today ...could lose
  • 00:07:32
    it tomorrow. That's how
  • 00:07:34
    Deepseek was able to catch
  • 00:07:35
    up so quickly. It started
  • 00:07:36
    building on the existing
  • 00:07:37
    frontier of AI,
  • 00:07:39
    its approach focusing on
  • 00:07:40
    iterating on existing
  • 00:07:41
    technology rather than
  • 00:07:43
    reinventing the wheel.
  • 00:07:44
    They can take a really good
  • 00:07:47
    big model and use a process
  • 00:07:49
    called distillation. And
  • 00:07:50
    what distillation is,
  • 00:07:51
    basically you use a very
  • 00:07:53
    large model to help your
  • 00:07:56
    small model get smart at the
  • 00:07:57
    thing that you want it to
  • 00:07:58
    get smart at. And that's
  • 00:07:59
    actually very cost
  • 00:08:00
    efficient.
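
As a rough illustration of the distillation idea described above (a small student model trained to match a larger teacher's output distribution), here is a minimal, hypothetical PyTorch sketch. The tiny linear teacher and student are placeholders rather than anyone's actual models, and the loss shown is the standard softened-KL formulation, not Deepseek's published recipe.

```python
# Minimal knowledge-distillation sketch (illustrative only, not Deepseek's recipe).
# A frozen "teacher" model's softened output distribution supervises a small "student".
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, temperature = 1000, 2.0

teacher = nn.Linear(64, vocab_size)   # placeholder for a large pretrained model
student = nn.Linear(64, vocab_size)   # placeholder for a much smaller model
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

def distillation_step(inputs):
    with torch.no_grad():              # the teacher is not updated
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    # KL divergence between temperature-softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(distillation_step(torch.randn(8, 64)))
```
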
  • 00:08:01
    It closed the gap by using
  • 00:08:03
    available data sets,
  • 00:08:04
    applying innovative tweaks,
  • 00:08:06
    and leveraging existing
  • 00:08:07
    models. So much so,
  • 00:08:09
    that Deepseek's model has
  • 00:08:10
    run into an identity crisis.
  • 00:08:13
    It's convinced that its
  • 00:08:14
    ChatGPT, when you ask it
  • 00:08:16
    directly, "what model are
  • 00:08:17
    you?" Deepseek responds...
  • 00:08:19
    I'm an AI language model
  • 00:08:20
    created by OpenAI,
  • 00:08:22
    specifically based on the
  • 00:08:23
    GPT-4 architecture.
  • 00:08:25
    Leading OpenAI CEO Sam
  • 00:08:26
    Altman to post in a thinly
  • 00:08:28
    veiled shot at Deepseek just
  • 00:08:30
    days after the model was
  • 00:08:31
    released. "It's relatively
  • 00:08:33
    easy to copy something that
  • 00:08:34
    you know works.
  • 00:08:35
    It's extremely hard to do
  • 00:08:36
    something new,
  • 00:08:37
    risky, and difficult when
  • 00:08:39
    you don't know if it will
  • 00:08:40
    work." But that's not
  • 00:08:42
    exactly what Deepseek did.
  • 00:08:44
    It emulated GPT by
  • 00:08:45
    leveraging OpenAI's existing
  • 00:08:47
    outputs and architecture
  • 00:08:48
    principles, while quietly
  • 00:08:49
    introducing its own
  • 00:08:50
    enhancements, really
  • 00:08:51
    blurring the line between
  • 00:08:53
    itself and ChatGPT.
  • 00:08:55
    It all puts pressure on a
  • 00:08:56
    closed source leader like
  • 00:08:57
    OpenAI to justify its
  • 00:08:58
    costlier model as more
  • 00:09:00
    potentially nimbler
  • 00:09:01
    competitors emerge.
  • 00:09:02
    Everybody copies everybody
  • 00:09:03
    in this field.
  • 00:09:05
    You can say Google did the
  • 00:09:07
    transformer first. It's not
  • 00:09:08
    OpenAI and OpenAI just
  • 00:09:10
    copied it. Google built the
  • 00:09:12
    first large language models.
  • 00:09:13
    They didn't productise it,
  • 00:09:14
    but OpenAI did it into a
  • 00:09:16
    productized way. So you can
  • 00:09:19
    say all this in many ways.
  • 00:09:21
    It doesn't matter.
  • 00:09:22
    So if everyone is copying
  • 00:09:24
    one another, it raises the
  • 00:09:25
    question, is massive spend
  • 00:09:28
    on individual LLMs even a
  • 00:09:31
    good investment anymore?
  • 00:09:32
    Now, no one has as much at
  • 00:09:34
    stake as OpenAI.
  • 00:09:35
    The startup raised over $6
  • 00:09:36
    billion in its last funding
  • 00:09:38
    round alone. But,
  • 00:09:39
    the company has yet to turn
  • 00:09:41
    a profit. And with its core
  • 00:09:43
    business centered on
  • 00:09:44
    building the models -
  • 00:09:45
    it's much more exposed than
  • 00:09:46
    companies like Google and
  • 00:09:47
    Amazon, who have cloud and
  • 00:09:49
    ad businesses bankrolling
  • 00:09:51
    their spend. For OpenAI,
  • 00:09:53
    reasoning will be key.
  • 00:09:54
    A model that thinks before
  • 00:09:56
    it generates a response,
  • 00:09:57
    going beyond pattern
  • 00:09:58
    recognition to analyze,
  • 00:09:59
    draw logical conclusions,
  • 00:10:01
    and solve really complex
  • 00:10:02
    problems. For now,
  • 00:10:04
    the startup's o1 reasoning
  • 00:10:05
    model is still cutting edge.
  • 00:10:08
    But for how long?
  • 00:10:09
    Researchers at Berkeley
  • 00:10:10
    showed that they could build
  • 00:10:11
    a reasoning model for $450
  • 00:10:13
    just last week. So you can
  • 00:10:15
    actually create these models
  • 00:10:16
    that do thinking for much,
  • 00:10:18
    much less. You don't need
  • 00:10:19
    those huge amounts of compute to
  • 00:10:21
    pre-train the models. So I
  • 00:10:22
    think the game is shifting.
  • 00:10:24
    It means that staying on top
  • 00:10:26
    may require as much
  • 00:10:27
    creativity as capital.
  • 00:10:29
    Deepseek's breakthrough also
  • 00:10:31
    comes at a very tricky time
  • 00:10:32
    for the AI darling.
  • 00:10:33
    Just as OpenAI is moving to
  • 00:10:35
    a for-profit model and
  • 00:10:37
    facing unprecedented brain
  • 00:10:39
    drain. Can it raise more
  • 00:10:41
    money at ever higher
  • 00:10:42
    valuations if the game is
  • 00:10:43
    changing? As Chamath
  • 00:10:44
    Palihapitiya puts it...
  • 00:10:46
    let me say the quiet part
  • 00:10:47
    out loud: AI model building
  • 00:10:49
    is a money trap.
  • 00:10:58
    Those chip restrictions from
  • 00:10:59
    the U.S. government, they
  • 00:11:00
    were intended to slow down
  • 00:11:03
    the race. To keep American
  • 00:11:04
    tech on American ground,
  • 00:11:06
    to stay ahead in the race.
  • 00:11:07
    What we want to do is we
  • 00:11:08
    want to keep it in this
  • 00:11:09
    country. China is a
  • 00:11:10
    competitor and others are
  • 00:11:11
    competitors.
  • 00:11:12
    So instead, the restrictions
  • 00:11:14
    might have been just what
  • 00:11:15
    China needed.
  • 00:11:16
    Necessity is the mother of
  • 00:11:17
    invention.
  • 00:11:19
    Because they had to go
  • 00:11:22
    figure out workarounds,
  • 00:11:25
    they actually ended up
  • 00:11:25
    building something a lot
  • 00:11:26
    more efficient.
  • 00:11:27
    It's really remarkable the
  • 00:11:28
    amount of progress they've
  • 00:11:29
    made with as little capital
  • 00:11:32
    as it's taken them to make
  • 00:11:33
    that progress.
  • 00:11:34
    It drove them to get
  • 00:11:35
    creative. With huge
  • 00:11:36
    implications. Deepseek is an
  • 00:11:38
    open-source model, meaning
  • 00:11:39
    that developers have full
  • 00:11:41
    access and they can
  • 00:11:42
    customize its weights or
  • 00:11:43
    fine -tune it to their
  • 00:11:44
    liking.
  • 00:11:45
    It's known that once open
  • 00:11:46
    -source is caught up or
  • 00:11:48
    improved over closed source
  • 00:11:49
    software, all developers
  • 00:11:52
    migrate to that.
  • 00:11:53
    But
  • 00:11:53
    key is that it's also
  • 00:11:55
    inexpensive. The lower the
  • 00:11:57
    cost, the more attractive it
  • 00:11:58
    is for developers to adopt.
  • 00:12:00
    The bottom line is our
  • 00:12:01
    inference cost is 10 cents
  • 00:12:03
    per million tokens,
  • 00:12:05
    and that's 1/30th of what
  • 00:12:07
    the typical comparable model
  • 00:12:08
    charges. Where's it going?
  • 00:12:10
    It's well, the 10 cents
  • 00:12:11
    would lead to building apps
  • 00:12:14
    for much lower costs.
  • 00:12:15
    So if you wanted to build a
  • 00:12:17
    You.com or Perplexity or some
  • 00:12:19
    other app, you can either
  • 00:12:21
    pay OpenAI $4.40 per million
  • 00:12:23
    tokens, or if you have our
  • 00:12:26
    model, it costs you just 10
  • 00:12:27
    cents.
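
For a sense of what that per-token gap means in practice, here is a back-of-the-envelope comparison using the two prices quoted above; the monthly token volume is an assumed figure for illustration only.

```python
# Back-of-the-envelope app cost comparison using the per-million-token prices
# quoted above; the monthly token volume is a hypothetical assumption.
TOKENS_PER_MONTH = 5_000_000_000      # assume an app that serves 5B tokens/month
PRICE_CLOSED = 4.40                   # $ per million tokens (figure quoted above)
PRICE_OPEN = 0.10                     # $ per million tokens (figure quoted above)

closed_bill = TOKENS_PER_MONTH / 1_000_000 * PRICE_CLOSED
open_bill = TOKENS_PER_MONTH / 1_000_000 * PRICE_OPEN

print(f"closed-model bill: ${closed_bill:,.0f}/month")   # $22,000/month
print(f"open-model bill:   ${open_bill:,.0f}/month")     # $500/month
print(f"ratio: {closed_bill / open_bill:.0f}x cheaper")  # 44x
```
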
  • 00:12:28
    It could mean that the
  • 00:12:29
    prevailing model in global
  • 00:12:30
    AI may be open-source,
  • 00:12:32
    as organizations and nations
  • 00:12:34
    come around to the idea that
  • 00:12:35
    collaboration and
  • 00:12:36
    decentralization,
  • 00:12:37
    those things can drive
  • 00:12:38
    innovation faster and more
  • 00:12:40
    efficiently than
  • 00:12:41
    proprietary, closed
  • 00:12:42
    ecosystems. A cheaper,
  • 00:12:44
    more efficient, widely
  • 00:12:45
    adopted open-source model
  • 00:12:47
    from China that could lead
  • 00:12:49
    to a major shift in
  • 00:12:51
    dynamics.
  • 00:12:52
    That's more dangerous,
  • 00:12:54
    because then they get to own
  • 00:12:56
    the mindshare, the
  • 00:12:58
    ecosystem.
  • 00:12:59
    In other words, the adoption
  • 00:13:00
    of a Chinese open-source
  • 00:13:01
    model at scale that could
  • 00:13:02
    undermine U.S. leadership
  • 00:13:04
    while embedding China more
  • 00:13:06
    deeply into the fabric of
  • 00:13:07
    global tech infrastructure.
  • 00:13:09
    There's always a point where
  • 00:13:10
    open source can stop being
  • 00:13:12
    open-source, too,
  • 00:13:13
    right? So, the licenses are
  • 00:13:15
    very favorable today,
  • 00:13:16
    but-it could close it.
  • 00:13:17
    Exactly, over time,
  • 00:13:19
    they can always change the
  • 00:13:20
    license. So, it's important
  • 00:13:22
    that we actually have people
  • 00:13:24
    here in America building,
  • 00:13:26
    and that's why Meta is so
  • 00:13:27
    important.
  • 00:13:28
    Another consequence of
  • 00:13:29
    China's AI breakthrough is
  • 00:13:31
    giving its Communist Party
  • 00:13:32
    control of the narrative.
  • 00:13:34
    AI models built in China
  • 00:13:35
    they're forced to adhere to a
  • 00:13:36
    certain set of rules set by
  • 00:13:37
    the state. They must embody
  • 00:13:39
    "core socialist values."
  • 00:13:41
    Studies have shown that
  • 00:13:42
    models created by Tencent
  • 00:13:44
    and Alibaba, they will
  • 00:13:45
    censor historical events
  • 00:13:46
    like Tiananmen Square,
  • 00:13:48
    deny human rights abuse,
  • 00:13:50
    and filter criticism of
  • 00:13:51
    Chinese political leaders.
  • 00:13:53
    That contest is about
  • 00:13:54
    whether we're going to have
  • 00:13:55
    democratic AI informed by
  • 00:13:56
    democratic values,
  • 00:13:58
    built to serve democratic
  • 00:14:00
    purposes, or we're going to
  • 00:14:01
    end up with autocratic
  • 00:14:03
    AI.
  • 00:14:03
    If developers really begin
  • 00:14:04
    to adopt these models en
  • 00:14:06
    masse because they're more
  • 00:14:07
    efficient, that could have a
  • 00:14:08
    serious ripple effect.
  • 00:14:10
    Trickle down to even
  • 00:14:11
    consumer-facing AI
  • 00:14:12
    applications, and influence
  • 00:14:13
    how trustworthy those
  • 00:14:15
    AI-generated responses from
  • 00:14:16
    chatbots really are.
  • 00:14:18
    And there's really only two
  • 00:14:19
    countries right now in the
  • 00:14:20
    world that can build this at
  • 00:14:22
    scale, you know,
  • 00:14:23
    and that is the U.S.
  • 00:14:25
    and China, and so,
  • 00:14:27
    you know, the consequences
  • 00:14:28
    of the stakes in and around
  • 00:14:30
    this are just enormous.
  • 00:14:32
    Enormous stakes,
  • 00:14:33
    enormous consequences,
  • 00:14:35
    and hanging in the balance:
  • 00:14:37
    America's lead.
  • 00:14:42
    For a topic so complex and
  • 00:14:44
    new, we turn to an expert
  • 00:14:45
    who's actually building in
  • 00:14:47
    the space, and
  • 00:14:48
    model-agnostic. Perplexity
  • 00:14:50
    co-founder and CEO Aravind
  • 00:14:51
    Srinivas – who you heard
  • 00:14:52
    from throughout our piece.
  • 00:14:54
    He sat down with me for more
  • 00:14:55
    than 30 minutes to discuss
  • 00:14:56
    Deepseek and its
  • 00:14:57
    implications, as well as
  • 00:14:59
    Perplexity's roadmap.
  • 00:15:00
    We think it's worth
  • 00:15:01
    listening to that whole
  • 00:15:02
    conversation, so here it is.
  • 00:15:04
    So first I want to know what
  • 00:15:05
    the stakes are. What,
  • 00:15:07
    like describe the AI race
  • 00:15:09
    between China and the U.S.
  • 00:15:11
    and what's at stake.
  • 00:15:13
    Okay, so first of all,
  • 00:15:14
    China has a lot of
  • 00:15:16
    disadvantages in competing
  • 00:15:18
    with the U.S. Number one is,
  • 00:15:21
    the fact that they don't get
  • 00:15:22
    access to all the hardware
  • 00:15:24
    that we have access to here.
  • 00:15:27
    So they're kind of working
  • 00:15:28
    with lower end GPUs than us.
  • 00:15:31
    It's almost like working
  • 00:15:32
    with the previous generation
  • 00:15:33
    GPUs, scrappily.
  • 00:15:35
    So the fact that the
  • 00:15:38
    bigger models tend to be
  • 00:15:39
    smarter, naturally puts
  • 00:15:42
    them at a disadvantage.
  • 00:15:43
    But the flip side of this is
  • 00:15:46
    that necessity is the mother
  • 00:15:47
    of invention, because they
  • 00:15:51
    had to go figure out
  • 00:15:53
    workarounds. They actually
  • 00:15:55
    ended up building something
  • 00:15:56
    a lot more efficient.
  • 00:15:58
    It's like saying, "hey look,
  • 00:15:59
    you guys really got to get a
  • 00:16:01
    top notch model, and I'm not
  • 00:16:04
    going to give you resources
  • 00:16:05
    and figure out something,"
  • 00:16:07
    right? Unless it's
  • 00:16:08
    impossible, unless it's
  • 00:16:09
    mathematically possible to
  • 00:16:11
    prove that it's impossible
  • 00:16:13
    to do so, you can always try
  • 00:16:14
    to like come up with
  • 00:16:15
    something more efficient.
  • 00:16:17
    But that is likely to make
  • 00:16:20
    them come up with a more
  • 00:16:21
    efficient solution than
  • 00:16:22
    America. And of course,
  • 00:16:24
    they have open-sourced it,
  • 00:16:25
    so we can still adopt
  • 00:16:27
    something like that here.
  • 00:16:28
    But that kind of talent
  • 00:16:30
    they're building to do that
  • 00:16:32
    will become an edge for them
  • 00:16:33
    over time right?
  • 00:16:35
    The leading open-source
  • 00:16:36
    model in America is Meta's
  • 00:16:38
    Llama family. It's really
  • 00:16:40
    good. It's kind of like a
  • 00:16:41
    model that you can run on
  • 00:16:42
    your computer.
  • 00:16:43
    But even though it got
  • 00:16:45
    pretty close to GPT-4,
  • 00:16:48
    and at the time of its
  • 00:16:50
    release, the model that was
  • 00:16:51
    closest in quality was the
  • 00:16:54
    giant 405B, not the 70B that
  • 00:16:56
    you could run on your
  • 00:16:56
    computer. And so there was
  • 00:16:59
    still not a small,
  • 00:17:01
    cheap, fast, efficient,
  • 00:17:02
    open-source model that
  • 00:17:04
    rivaled the most powerful
  • 00:17:06
    closed models from OpenAI,
  • 00:17:07
    Anthropic. Nothing from
  • 00:17:09
    America, nothing from
  • 00:17:11
    Mistral AI either.
  • 00:17:12
    And then these guys come
  • 00:17:13
    out, with like a crazy model
  • 00:17:16
    that's like 10x cheaper in
  • 00:17:19
    API pricing than GPT-4 and
  • 00:17:19
    15x cheaper than Sonnet,
  • 00:17:21
    I believe. Really fast,
  • 00:17:23
    16 tokens per second–60
  • 00:17:24
    tokens per second,
  • 00:17:26
    and pretty much equal or
  • 00:17:29
    better in some benchmarks
  • 00:17:30
    and worse in some others.
  • 00:17:31
    But like roughly in that
  • 00:17:32
    ballpark of 4o's quality.
  • 00:17:35
    And they did it all with
  • 00:17:36
    like approximately just
  • 00:17:39
    2,048 H800 GPUs, which is
  • 00:17:41
    actually equivalent to like
  • 00:17:42
    somewhere around 1,500 or
  • 00:17:44
    1,000 to 1,500 H100 GPUs.
  • 00:17:47
    That's like 20 to 30x lower
  • 00:17:50
    than the amount of GPUs that
  • 00:17:52
    GPT-4 is usually trained
  • 00:17:53
    on, and roughly $5 million
  • 00:17:56
    in total compute budget.
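
As a rough sanity check on that figure, the arithmetic below combines the GPU count discussed here with the roughly two-month training window mentioned elsewhere in the piece; the per-GPU-hour rental rate is an assumption for illustration, not a reported number.

```python
# Rough sanity check of the "~$5 million compute budget" claim.
# GPU count and duration follow the figures discussed in the piece; the
# rental rate per GPU-hour is a hypothetical assumption for illustration.
NUM_GPUS = 2_048            # H800s, as discussed above
TRAINING_DAYS = 60          # "two months", per the piece
RATE_PER_GPU_HOUR = 2.00    # assumed $/GPU-hour rental price

gpu_hours = NUM_GPUS * TRAINING_DAYS * 24
cost = gpu_hours * RATE_PER_GPU_HOUR
print(f"{gpu_hours:,} GPU-hours -> ~${cost / 1e6:.1f} million")
# 2,949,120 GPU-hours -> ~$5.9 million
```
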
  • 00:17:59
    They did it with so little
  • 00:18:00
    money and such an amazing
  • 00:18:02
    model, gave it away for
  • 00:18:04
    free, wrote a technical
  • 00:18:04
    paper, and definitely it
  • 00:18:06
    makes us all question like,
  • 00:18:09
    "okay, like if we have the
  • 00:18:10
    equivalent of Doge for like
  • 00:18:12
    model training,
  • 00:18:14
    this is an example of that,
  • 00:18:15
    right?"
  • 00:18:16
    Right. Yeah. Efficiency,
  • 00:18:18
    is what you're getting at.
  • 00:18:19
    So, fraction of the price,
  • 00:18:21
    fraction of the time.
  • 00:18:22
    Yeah. Dumbed-down GPUs
  • 00:18:23
    essentially. What was your
  • 00:18:25
    surprise when you understood
  • 00:18:27
    what they had done?
  • 00:18:28
    So my surprise was that when
  • 00:18:30
    I actually went through the
  • 00:18:31
    technical paper,
  • 00:18:33
    the amount of clever
  • 00:18:35
    solutions they came up with,
  • 00:18:38
    first of all, they train a
  • 00:18:39
    mixture of experts model.
  • 00:18:40
    It's not that easy to train,
  • 00:18:43
    there's a lot of like,
  • 00:18:44
    the main reason people find
  • 00:18:46
    it difficult to catch up
  • 00:18:46
    with OpenAI, especially on
  • 00:18:48
    the MoE architecture,
  • 00:18:49
    is that there's a lot of,
  • 00:18:52
    irregular loss spikes.
  • 00:18:54
    The numerics are not stable,
  • 00:18:56
    so often, like,
  • 00:18:57
    you've got to restart the
  • 00:18:59
    training checkpoint again,
  • 00:19:00
    and a lot of infrastructure
  • 00:19:01
    needs to be built for that.
  • 00:19:03
    And they came up with very
  • 00:19:04
    clever solutions to balance
  • 00:19:06
    that without adding
  • 00:19:07
    additional hacks.
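
For context, the checkpoint-restart pattern being described can be sketched roughly as below. This is a generic illustration of the practice (roll back to the last good weights when the loss spikes), not Deepseek's actual fix, and every name and threshold in it is hypothetical.

```python
# Generic sketch of the loss-spike guard described above: roll back to the
# last good checkpoint when the loss jumps. Names and thresholds are illustrative.
import copy

SPIKE_FACTOR = 3.0        # assumed threshold: loss exceeding 3x its recent average
CHECKPOINT_EVERY = 100    # assumed checkpoint interval, in steps

def train_with_spike_guard(model, optimizer, batches, train_step):
    checkpoint = copy.deepcopy(model.state_dict())
    recent_losses = []
    for step, batch in enumerate(batches):
        loss = train_step(model, optimizer, batch)          # returns a float loss
        average = sum(recent_losses) / len(recent_losses) if recent_losses else loss
        if loss > SPIKE_FACTOR * average:
            model.load_state_dict(checkpoint)               # irregular spike: roll back
            continue
        recent_losses = (recent_losses + [loss])[-50:]      # short moving window
        if step % CHECKPOINT_EVERY == 0:
            checkpoint = copy.deepcopy(model.state_dict())  # last known-good weights
    return model
```
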
  • 00:19:09
    They also figured out
  • 00:19:12
    floating-point 8-bit
  • 00:19:13
    training, at least for some
  • 00:19:15
    of the numerics. And they
  • 00:19:17
    cleverly figured out which
  • 00:19:18
    has to be in higher
  • 00:19:19
    precision, which has to be
  • 00:19:20
    in lower precision. To my
  • 00:19:22
    knowledge, I think floating
  • 00:19:24
    point 8-bit training is not that
  • 00:19:26
    well understood. Most of the
  • 00:19:27
    training in America is still
  • 00:19:28
    running in FP16.
  • 00:19:30
    Maybe OpenAI and some of the
  • 00:19:31
    people are trying to explore
  • 00:19:32
    that, but it's pretty
  • 00:19:33
    difficult to get it right.
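
Since true FP8 training needs recent hardware and specialized kernels, the sketch below uses PyTorch's widely supported FP16/BF16 mixed-precision path as an analogue of the idea being described: matrix multiplies run in lower precision while master weights and loss scaling stay in higher precision. It is an illustrative sketch under those assumptions, not Deepseek's FP8 recipe.

```python
# Mixed-precision training sketch using PyTorch's FP16/BF16 autocast path
# (an analogue of the FP8 idea discussed above, not Deepseek's recipe).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
low_precision = torch.float16 if device == "cuda" else torch.bfloat16

model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)      # master weights stay FP32
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))  # guards small FP16 grads

x = torch.randn(32, 512, device=device)
target = torch.randn(32, 512, device=device)

for _ in range(3):
    optimizer.zero_grad()
    # Matmuls and activations run in low precision inside autocast; reductions
    # and the loss are kept in higher precision where it matters for stability.
    with torch.autocast(device_type=device, dtype=low_precision):
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```
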
  • 00:19:35
    So because necessity is the
  • 00:19:36
    mother of invention, because
  • 00:19:37
    they don't have that much
  • 00:19:38
    memory, that many GPUs.
  • 00:19:40
    They figured out a lot
  • 00:19:41
    of
  • 00:19:42
    numerical stability stuff
  • 00:19:44
    that makes their training
  • 00:19:45
    work. And they claimed in
  • 00:19:46
    the paper that the majority
  • 00:19:48
    of the training was stable.
  • 00:19:50
    Which means what? They can
  • 00:19:51
    always rerun those training
  • 00:19:53
    runs again and on more data
  • 00:19:57
    or better data. And then,
  • 00:20:00
    it only trained for 60 days.
  • 00:20:02
    So that's pretty amazing.
  • 00:20:04
    Safe to say you were
  • 00:20:05
    surprised.
  • 00:20:05
    So I was definitely
  • 00:20:06
    surprised. Usually the
  • 00:20:08
    wisdom or, like I wouldn't
  • 00:20:11
    say, wisdom, the myth, is
  • 00:20:12
    that Chinese are just good
  • 00:20:14
    at copying. So if we start
  • 00:20:16
    stop writing research papers
  • 00:20:18
    in America, if we stop
  • 00:20:20
    describing the details of
  • 00:20:22
    our infrastructure or
  • 00:20:23
    architecture, and stop open
  • 00:20:25
    sourcing, they're not going
  • 00:20:27
    to be able to catch up. But
  • 00:20:29
    the reality is, some of the
  • 00:20:30
    details in Deepseek v3 are
  • 00:20:33
    so good that I wouldn't be
  • 00:20:34
    surprised if Meta took a
  • 00:20:36
    look at it and incorporated
  • 00:20:38
    some of that –tried to copy
  • 00:20:38
    them. Right.
  • 00:20:41
    I wouldn't necessarily say
  • 00:20:42
    copy. It's all like,
  • 00:20:43
    you know, sharing science,
  • 00:20:45
    engineering, but the point
  • 00:20:47
    is like, it's changing.
  • 00:20:48
    Like, it's not like China is
  • 00:20:50
    just copycat. They're also
  • 00:20:51
    innovating.
  • 00:20:52
    We don't know exactly the
  • 00:20:53
    data that it was trained on
  • 00:20:55
    right? Even though it's open
  • 00:20:56
    -source, we know some of the
  • 00:20:57
    ways and things it was
  • 00:20:59
    trained on, but not
  • 00:20:59
    everything. And there's this
  • 00:21:01
    idea that it was trained on
  • 00:21:02
    public ChatGPT outputs,
  • 00:21:05
    which would mean it just was
  • 00:21:06
    copied. But you're saying it
  • 00:21:07
    goes beyond that? There's
  • 00:21:08
    real innovation in there?
  • 00:21:09
    Yeah,
  • 00:21:09
    look, I mean, they've
  • 00:21:11
    trained it on 14.8 trillion
  • 00:21:13
    tokens. The internet has so
  • 00:21:15
    much ChatGPT. If you
  • 00:21:16
    actually go to any LinkedIn
  • 00:21:18
    post or X post.
  • 00:21:19
    Now, most of the comments
  • 00:21:21
    are written by AI. You can
  • 00:21:22
    just see it, like people are
  • 00:21:24
    just trying to write. In
  • 00:21:25
    fact, even within X,
  • 00:21:28
    there's like a Grok tweet
  • 00:21:30
    enhancer, or in LinkedIn
  • 00:21:31
    there's an AI enhancer,
  • 00:21:33
    or in Google Docs and Word.
  • 00:21:37
    There are AI tools to like
  • 00:21:38
    rewrite your stuff. So if
  • 00:21:40
    you do something there and
  • 00:21:41
    copy paste somewhere on the
  • 00:21:43
    internet, it's naturally
  • 00:21:44
    going to have some elements
  • 00:21:45
    of a ChatGPT like training,
  • 00:21:48
    right? And there's a lot of
  • 00:21:49
    people who don't even bother
  • 00:21:51
    to strip away that I'm a
  • 00:21:53
    language model, right?
  • 00:21:55
    –part. So, they just paste
  • 00:21:56
    it somewhere and it's very
  • 00:21:58
    difficult to control for
  • 00:21:59
    this. I think xAI has spoken
  • 00:22:01
    about this too, so I
  • 00:22:02
    wouldn't like disregard
  • 00:22:04
    their technical
  • 00:22:05
    accomplishment just because
  • 00:22:07
    like for some prompts like
  • 00:22:08
    who are you, or like which
  • 00:22:10
    model are you at response
  • 00:22:11
    like that? It doesn't even
  • 00:22:12
    matter in my opinion.
  • 00:22:13
    For a long
  • 00:22:13
    time we thought, I don't
  • 00:22:14
    know if you agreed with us,
  • 00:22:15
    China was behind in AI,
  • 00:22:17
    what does this do to that
  • 00:22:18
    race? Can we say that China
  • 00:22:20
    is catching up or has it
  • 00:22:22
    caught up?
  • 00:22:23
    I mean, like if we say
  • 00:22:25
    Meta is catching up to
  • 00:22:27
    OpenAI and Anthropic,
  • 00:22:28
    if you make that claim,
  • 00:22:31
    then the same claim can be
  • 00:22:32
    made for China catching up
  • 00:22:33
    to America.
  • 00:22:34
    A lot of papers from China
  • 00:22:36
    that have tried to replicate
  • 00:22:37
    o1, in fact, I saw more
  • 00:22:39
    papers from China after o1
  • 00:22:42
    announcement that tried to
  • 00:22:43
    replicate it than from
  • 00:22:44
    America. Like,
  • 00:22:46
    and the amount of compute
  • 00:22:48
    Deepseek has access to is
  • 00:22:50
    roughly similar to what PhD
  • 00:22:52
    students in the U.S.
  • 00:22:54
    have access to. By the way,
  • 00:22:55
    this is not meant to
  • 00:22:56
    criticize others like even
  • 00:22:57
    for ourselves, like,
  • 00:22:59
    you know, I for Perplexity,
  • 00:23:00
    we decided not to train
  • 00:23:01
    models because we thought
  • 00:23:02
    it's like a very expensive
  • 00:23:03
    thing. And we thought like,
  • 00:23:07
    there's no way to catch up
  • 00:23:08
    with the rest.
  • 00:23:09
    But will you incorporate
  • 00:23:10
    Deepseek into Perplexity?
  • 00:23:12
    Oh, we already are beginning
  • 00:23:13
    to use it.
  • 00:23:15
    I think they have an API,
  • 00:23:16
    and they also have
  • 00:23:18
    open-source weights, so we
  • 00:23:18
    can host it ourselves, too.
  • 00:23:20
    And it's good to, like,
  • 00:23:21
    try to start using that
  • 00:23:23
    because it's actually,
  • 00:23:24
    allows us to do a lot of the
  • 00:23:25
    things at lower cost.
  • 00:23:27
    But what I'm kind of
  • 00:23:28
    thinking is beyond that,
  • 00:23:30
    which is like, okay, if
  • 00:23:31
    these guys actually could
  • 00:23:33
    train such a great model
  • 00:23:34
    with a good team like,
  • 00:23:37
    and there's no excuse
  • 00:23:38
    anymore for companies in the
  • 00:23:39
    U.S., including ourselves,
  • 00:23:41
    to like, not try to do
  • 00:23:42
    something like that.
  • 00:23:43
    You hear a lot in public
  • 00:23:44
    from a lot of, you know,
  • 00:23:45
    thought leaders in
  • 00:23:46
    generative AI, both on the
  • 00:23:47
    research side, on the
  • 00:23:48
    entrepreneurial side,
  • 00:23:50
    like Elon Musk and others
  • 00:23:51
    say that China can't catch
  • 00:23:53
    up. Like the stakes are
  • 00:23:55
    too big. The geopolitical
  • 00:23:56
    stakes, whoever dominates AI
  • 00:23:58
    is going to kind of dominate
  • 00:23:59
    the economy, dominate the
  • 00:24:01
    world. You know,
  • 00:24:02
    it's been talked about in
  • 00:24:03
    those massive terms. Are you
  • 00:24:04
    worried about what China
  • 00:24:06
    proved it was able to do?
  • 00:24:08
    Firstly, I don't know if
  • 00:24:09
    Elon ever said China can't
  • 00:24:10
    catch up.
  • 00:24:11
    I'm not – just the threat of
  • 00:24:13
    China. He's only identified
  • 00:24:14
    the threat of letting China,
  • 00:24:16
    and you know, Sam Altman has
  • 00:24:17
    said similar things, we
  • 00:24:18
    can't let China win the
  • 00:24:20
    race.
  • 00:24:20
    You know, it's all I think
  • 00:24:22
    you got to decouple what
  • 00:24:25
    someone like Sam says to
  • 00:24:26
    like what is in his
  • 00:24:27
    self-interest. Right?
  • 00:24:30
    Look, I think my point
  • 00:24:34
    is, like, whatever you did
  • 00:24:37
    to not let them catch up
  • 00:24:38
    didn't even matter. They
  • 00:24:40
    ended up catching up anyway.
  • 00:24:42
    Necessity is the mother of
  • 00:24:43
    invention
  • 00:24:44
    like you said. And it's
  • 00:24:46
    actually, you know what's
  • 00:24:48
    more dangerous than trying
  • 00:24:49
    to do all the things to not
  • 00:24:51
    let them catch up and, you
  • 00:24:52
    know, all this stuff is
  • 00:24:54
    what's more dangerous is
  • 00:24:55
    they have the best
  • 00:24:56
    open-source model. And all
  • 00:24:57
    the American developers are
  • 00:24:59
    building on that. Right.
  • 00:25:00
    That's more dangerous
  • 00:25:02
    because then they get to own
  • 00:25:05
    the mindshare, the
  • 00:25:06
    ecosystem.
  • 00:25:07
    If the entire American AI
  • 00:25:09
    ecosystem look,
  • 00:25:10
    in general, it's known that
  • 00:25:12
    once open-source is caught
  • 00:25:13
    up or improved over closed
  • 00:25:15
    source software, all
  • 00:25:18
    developers migrate to that.
  • 00:25:20
    It's historically known,
  • 00:25:21
    right?
  • 00:25:21
    When Llama was being built
  • 00:25:23
    and becoming more widely
  • 00:25:24
    used, there was this
  • 00:25:25
    question should we trust
  • 00:25:26
    Zuckerberg? But now the
  • 00:25:27
    question is should we trust
  • 00:25:29
    China? That's a very–You
  • 00:25:30
    should
  • 00:25:30
    trust open-source, that's
  • 00:25:31
    the like it's not about who,
  • 00:25:33
    is it Zuckerberg, or is it.
  • 00:25:35
    Does it matter then if it's
  • 00:25:37
    Chinese, if it's
  • 00:25:37
    open-source?
  • 00:25:39
    Look, it doesn't matter in
  • 00:25:41
    the sense that you still
  • 00:25:43
    have full control.
  • 00:25:45
    You run it as your own,
  • 00:25:47
    like set of weights on your
  • 00:25:48
    own computer, you are in
  • 00:25:50
    charge of the model. But,
  • 00:25:52
    it's not a great look for
  • 00:25:54
    our own, like, talent to
  • 00:25:57
    rely on software built by
  • 00:25:59
    others.
  • 00:26:00
    Even if it's open-source,
  • 00:26:01
    there's always, like, a
  • 00:26:04
    point where open-source can
  • 00:26:05
    stop being open-source, too,
  • 00:26:07
    right? So the licenses are
  • 00:26:09
    very favorable today,
  • 00:26:10
    but if – you can close it –
  • 00:26:11
    exactly, over time,
  • 00:26:13
    they can always change the
  • 00:26:15
    license. So, it's important
  • 00:26:16
    that we actually have people
  • 00:26:18
    here in America building,
  • 00:26:20
    and that's why Meta is so
  • 00:26:21
    important. Like I look I
  • 00:26:23
    still think Meta will build
  • 00:26:25
    a better model than
  • 00:26:26
    Deepseek v3 and open-source it,
  • 00:26:28
    and they'll call it Llama 4
  • 00:26:29
    or 3 point something,
  • 00:26:31
    doesn't matter, but I think
  • 00:26:33
    what is more key is that we
  • 00:26:35
    don't try to focus all our
  • 00:26:38
    energy on banning them,
  • 00:26:41
    stopping them, and just try
  • 00:26:42
    to outcompete and win them.
  • 00:26:43
    That's just that's just the
  • 00:26:44
    American way of doing things
  • 00:26:45
    just be
  • 00:26:46
    better. And it feels like
  • 00:26:47
    there's, you know, we hear a
  • 00:26:48
    lot more about these Chinese
  • 00:26:49
    companies who are developing
  • 00:26:51
    in a similar way, a lot more
  • 00:26:52
    efficiently, a lot more cost
  • 00:26:53
    effectively right? –Yeah,
  • 00:26:55
    again, like, look,
  • 00:26:56
    it's hard to fake scarcity,
  • 00:26:58
    right? If you raise $10
  • 00:27:01
    billion and you decide to
  • 00:27:02
    spend 80% of it on a compute
  • 00:27:04
    cluster, it's hard for you
  • 00:27:06
    to come up with the exact
  • 00:27:07
    same solution that someone
  • 00:27:08
    with $5 million would do.
  • 00:27:10
    And there's no point,
  • 00:27:13
    no need to, like, sort of
  • 00:27:14
    berate those who are putting
  • 00:27:15
    more money. They're trying
  • 00:27:17
    to do it as fast as they
  • 00:27:18
    can.
  • 00:27:18
    When we say open-source,
  • 00:27:19
    there's so many different
  • 00:27:20
    versions. Some people
  • 00:27:21
    criticize Meta for not
  • 00:27:22
    publishing everything,
  • 00:27:23
    and even Deepseek itself
  • 00:27:24
    isn't totally transparent.
  • 00:27:26
    Yeah, you can go to the
  • 00:27:27
    limits of open-source and
  • 00:27:28
    say, I should exactly be
  • 00:27:30
    able to replicate your
  • 00:27:31
    training run. But first of
  • 00:27:33
    all, how many people even
  • 00:27:34
    have the resources to do
  • 00:27:36
    that? And I think the amount
  • 00:27:40
    of detail they've shared in
  • 00:27:41
    the technical report,
  • 00:27:43
    actually Meta did that too,
  • 00:27:44
    by the way, Meta's Llama 3.3
  • 00:27:46
    technical report is
  • 00:27:47
    incredibly detailed,
  • 00:27:48
    and very great for science.
  • 00:27:51
    So the amount of details
  • 00:27:52
    these people are
  • 00:27:53
    sharing is already a lot
  • 00:27:54
    more than what the other
  • 00:27:56
    companies are doing right
  • 00:27:57
    now.
  • 00:27:57
    When you think about how
  • 00:27:58
    much it costs Deepseek to do
  • 00:27:59
    this, less than $6 million,
  • 00:28:01
    I think about what OpenAI
  • 00:28:03
    has spent to develop GPT
  • 00:28:05
    models. What does that mean
  • 00:28:07
    for the closed source model,
  • 00:28:09
    ecosystem trajectory,
  • 00:28:10
    momentum? What does it mean
  • 00:28:12
    for OpenAI?
  • 00:28:13
    I mean, it's very clear that
  • 00:28:15
    we'll have an open-source
  • 00:28:17
    version of 4o, or even better
  • 00:28:19
    than that, and much cheaper
  • 00:28:21
    than that open-source,
  • 00:28:22
    like completely this year.
  • 00:28:24
    Made by OpenAI?
  • 00:28:26
    Probably not. Most likely
  • 00:28:27
    not. And I don't think they
  • 00:28:29
    care if it's not made by
  • 00:28:30
    them. I think they've
  • 00:28:32
    already moved to a new
  • 00:28:33
    paradigm called the o1
  • 00:28:34
    family of models.
  • 00:28:38
    Like
  • 00:28:41
    Ilya Sutskever came and
  • 00:28:42
    said, pre-training is a
  • 00:28:44
    wall, right?
  • 00:28:45
    So, I mean, he didn't
  • 00:28:48
    exactly use the word, but he
  • 00:28:49
    clearly said–yeah–the age of
  • 00:28:50
    pre-training is over.
  • 00:28:51
    –many people have said that
  • 00:28:52
    .
  • 00:28:52
    Right? So, that doesn't mean
  • 00:28:55
    scaling has hit a wall.
  • 00:28:56
    I think we're scaling on
  • 00:28:58
    different dimensions now.
  • 00:28:59
    The amount of time model
  • 00:29:00
    spends thinking at test
  • 00:29:01
    time. Reinforcement
  • 00:29:03
    learning, like trying to,
  • 00:29:04
    like, make the model,
  • 00:29:06
    okay, if it doesn't know
  • 00:29:07
    what to do for a new prompt,
  • 00:29:09
    it'll go and reason and
  • 00:29:10
    collect data and interact
  • 00:29:11
    with the world,
  • 00:29:13
    use a bunch of tools.
  • 00:29:14
    I think that's where things
  • 00:29:15
    are headed, and I feel like
  • 00:29:16
    OpenAI is more focused on
  • 00:29:17
    that right now. Yeah.
  • 00:29:19
    –Instead of just the
  • 00:29:20
    bigger, better model?
  • 00:29:21
    Correct. –Reasoning
  • 00:29:22
    capacities. But didn't you
  • 00:29:23
    say that Deepseek is likely
  • 00:29:24
    to turn their attention to
  • 00:29:25
    reasoning?
  • 00:29:26
    100%, I think they will.
  • 00:29:28
    And that's why I'm pretty
  • 00:29:31
    excited about what they'll
  • 00:29:32
    produce next.
  • 00:29:34
    I guess that's then my
  • 00:29:35
    question is sort of what's
  • 00:29:37
    OpenAI's moat now?
  • 00:29:39
    Well, I still think that,
  • 00:29:41
    no one else has produced a
  • 00:29:42
    system similar to the o1
  • 00:29:45
    yet, exactly.
  • 00:29:47
    I know that there's debates
  • 00:29:49
    about whether o1 is actually
  • 00:29:50
    worth it. You know,
  • 00:29:53
    on maybe a few prompts,
  • 00:29:54
    it's really better. But like
  • 00:29:55
    most of the times, it's not
  • 00:29:56
    producing any differentiated
  • 00:29:57
    output from Sonnet.
  • 00:29:59
    But, at least the results
  • 00:30:01
    they showed in o3 where,
  • 00:30:03
    they had like,
  • 00:30:05
    competitive coding
  • 00:30:06
    performance and almost like
  • 00:30:08
    an AI software engineer
  • 00:30:09
    level.
  • 00:30:10
    Isn't it just a matter of
  • 00:30:11
    time, though, before the
  • 00:30:12
    internet is filled with
  • 00:30:13
    reasoning data that.
  • 00:30:16
    –yeah– Deepseek.
  • 00:30:17
    Again, it's possible.
  • 00:30:19
    Nobody knows yet.
  • 00:30:20
    Yeah. So until it's done,
  • 00:30:23
    it's still uncertain right?
  • 00:30:24
    Right. So maybe that
  • 00:30:26
    uncertainty is their moat.
  • 00:30:27
    That, like, no one else has
  • 00:30:28
    the same, reasoning
  • 00:30:30
    capability yet,
  • 00:30:32
    but will by end of this
  • 00:30:34
    year, will there be multiple
  • 00:30:36
    players even in the
  • 00:30:37
    reasoning arena?
  • 00:30:38
    I absolutely think so.
  • 00:30:40
    So are we seeing the
  • 00:30:41
    commoditization of large
  • 00:30:43
    language models?
  • 00:30:44
    I think we will see a
  • 00:30:45
    similar trajectory,
  • 00:30:49
    just like how in
  • 00:30:50
    pre-training and
  • 00:30:51
    post-training, that sort
  • 00:30:52
    of system is getting
  • 00:30:54
    commoditized, this year there will
  • 00:30:57
    be a lot more
  • 00:30:57
    commoditization there.
  • 00:30:59
    I think the reasoning kind
  • 00:31:00
    of models will go through a
  • 00:31:02
    similar trajectory where in
  • 00:31:04
    the beginning, 1 or 2
  • 00:31:05
    players really know how to
  • 00:31:06
    do it, but over time
  • 00:31:07
    –That's.
  • 00:31:08
    and who knows right? Because
  • 00:31:10
    OpenAI could make another
  • 00:31:11
    advancement to focus on.
  • 00:31:13
    But right now reasoning is
  • 00:31:14
    their moat.
  • 00:31:14
    By the way, if advancements
  • 00:31:16
    keep happening again and
  • 00:31:18
    again and again, like,
  • 00:31:20
    I think the meaning of the
  • 00:31:21
    word advancement also loses
  • 00:31:23
    some of its value, right?
  • 00:31:24
    Totally. Even now it's very
  • 00:31:25
    difficult, right. Because
  • 00:31:26
    there's pre-training
  • 00:31:27
    advancements. Yeah.
  • 00:31:28
    And then we've moved into a
  • 00:31:29
    different phase.
  • 00:31:30
    Yeah, so what is guaranteed
  • 00:31:32
    to happen is whatever models
  • 00:31:33
    exist today, that level of
  • 00:31:36
    reasoning, that level of
  • 00:31:37
    multimodal capability in
  • 00:31:40
    like 5 or 10x cheaper
  • 00:31:41
    models, open source,
  • 00:31:43
    all that's going to happen.
  • 00:31:45
    It's just a matter of time.
  • 00:31:46
    What is unclear is if
  • 00:31:49
    something like a model that
  • 00:31:50
    reasons at test time will be
  • 00:31:53
    extremely cheap enough that
  • 00:31:55
    we can just run it on our
  • 00:31:56
    phones. I think that's not
  • 00:31:58
    clear to me yet.
  • 00:31:58
    It feels like so much of the
  • 00:31:59
    landscape has changed with
  • 00:32:00
    what Deepseek was able to
  • 00:32:02
    prove. Could you call it
  • 00:32:03
    China's ChatGPT moment?
  • 00:32:07
    Possible,
  • 00:32:07
    I mean, I think it certainly
  • 00:32:10
    probably gave them a lot of
  • 00:32:11
    confidence that, like,
  • 00:32:14
    you know, we're not really
  • 00:32:16
    behind no matter what you do
  • 00:32:17
    to restrict our compute.
  • 00:32:19
    Like, we can always figure
  • 00:32:21
    out some workarounds.
  • 00:32:22
    And, yeah, I'm sure the team
  • 00:32:23
    feels pumped about the
  • 00:32:25
    results.
  • 00:32:26
    How does this change,
  • 00:32:27
    like the investment
  • 00:32:28
    landscape, the hyperscalers
  • 00:32:30
    that are spending tens of
  • 00:32:32
    billions of dollars a year
  • 00:32:33
    on CapEx have just ramped it
  • 00:32:34
    up huge. And OpenAI and
  • 00:32:36
    Anthropic that are raising
  • 00:32:37
    billions of dollars for
  • 00:32:38
    GPUs, essentially.
  • 00:32:39
    But what Deepseek told us is
  • 00:32:41
    you don't need, you don't
  • 00:32:42
    necessarily need that.
  • 00:32:44
    Yeah.
  • 00:32:45
    I mean, look, I think it's
  • 00:32:47
    very clear that they're
  • 00:32:48
    going to go even harder on
  • 00:32:50
    reasoning because they
  • 00:32:53
    understand that, like,
  • 00:32:53
    whatever they were building
  • 00:32:54
    in the previous two years is
  • 00:32:56
    getting extremely cheap,
  • 00:32:57
    so it doesn't make sense
  • 00:32:58
    to justify raising that–
  • 00:33:01
    Is the spending
  • 00:33:02
    proposition the same? Do
  • 00:33:03
    they need the same amount
  • 00:33:05
    of, you know, high end GPUs,
  • 00:33:07
    or can you reason using the
  • 00:33:08
    lower end ones that
  • 00:33:09
    Deepseek–
  • 00:33:10
    Again, it's hard to say no
  • 00:33:11
    until it's proven that it's not.
  • 00:33:14
    But I guess, like in the
  • 00:33:17
    spirit of moving fast,
  • 00:33:19
    you would want to use the
  • 00:33:20
    high end chips, and you
  • 00:33:22
    would want to, like, move
  • 00:33:24
    faster than your
  • 00:33:24
    competitors. I think,
  • 00:33:26
    like the best talent still
  • 00:33:27
    wants to work in the team
  • 00:33:28
    that made it happen first.
  • 00:33:31
    You know, there's always
  • 00:33:32
    some glory to like, who did
  • 00:33:33
    this, actually? Like, who's
  • 00:33:34
    the real pioneer? Versus
  • 00:33:36
    who's the fast follower, right?
  • 00:33:38
    That was kind of like
  • 00:33:39
    Sam Altman's tweet, a kind of
  • 00:33:41
    veiled response to what
  • 00:33:43
    Deepseek has been able to do;
  • 00:33:44
    he kind of implied that they
  • 00:33:45
    just copied, and anyone can
  • 00:33:46
    copy.
  • 00:33:47
    Right? Yeah, but then you
  • 00:33:48
    can always say that, like,
  • 00:33:50
    everybody copies everybody
  • 00:33:51
    in this field.
  • 00:33:53
    You can say Google did the
  • 00:33:54
    transformer first, not
  • 00:33:56
    OpenAI, and OpenAI just
  • 00:33:57
    copied it. Google built the
  • 00:33:59
    first large language models.
  • 00:34:01
    They didn't productize it,
  • 00:34:02
    but OpenAI did it in a
  • 00:34:04
    productized way. So you can
  • 00:34:06
    say all this in many ways,
  • 00:34:09
    it doesn't matter.
  • 00:34:09
    I remember asking you being
  • 00:34:11
    like, you know, why don't
  • 00:34:12
    you want to build the model?
  • 00:34:13
    Yeah, that's,
  • 00:34:14
    you know, the glory. And a
  • 00:34:16
    year later, just one year
  • 00:34:18
    later, you look very,
  • 00:34:19
    very smart to not engage in
  • 00:34:21
    that extremely expensive
  • 00:34:23
    race that has become so
  • 00:34:24
    competitive. And you kind of
  • 00:34:25
    have this lead now in what
  • 00:34:27
    everyone wants to see now,
  • 00:34:28
    which is like real world
  • 00:34:30
    applications, killer
  • 00:34:31
    applications of generative
  • 00:34:33
    AI. Talk a little bit about
  • 00:34:35
    like that decision and how
  • 00:34:37
    that's sort of guided you
  • 00:34:39
    where you see Perplexity
  • 00:34:40
    going from here.
  • 00:34:41
    Look, one year ago,
  • 00:34:43
    I don't even think we had
  • 00:34:45
    something like,
  • 00:34:47
    this is what, like 2024
  • 00:34:51
    beginning, right? I feel
  • 00:34:54
    like we didn't even have
  • 00:34:54
    something like Sonnet 3.5,
  • 00:34:56
    right? We had GPT-4,
  • 00:34:58
    I believe, and
  • 00:35:00
    nobody else was able to
  • 00:35:01
    catch up to it. Yeah.
  • 00:35:03
    But there was no multimodal
  • 00:35:05
    nothing, and my sense was
  • 00:35:08
    like, okay, if people with
  • 00:35:09
    way more resources and way
  • 00:35:10
    more talent cannot catch up,
  • 00:35:12
    it's very difficult to play
  • 00:35:14
    that game. So let's play a
  • 00:35:15
    different game. Anyway,
  • 00:35:17
    people want to use these
  • 00:35:18
    models. And there's one use
  • 00:35:21
    case of asking questions and
  • 00:35:22
    getting accurate answers
  • 00:35:23
    with sources, with real time
  • 00:35:25
    information, accurate
  • 00:35:27
    information.
  • 00:35:28
    There's still a lot of work
  • 00:35:30
    there to do outside the
  • 00:35:31
    model, and making sure the
  • 00:35:33
    product works reliably,
  • 00:35:34
    keep scaling it up to meet usage.
  • 00:35:36
    Keep building custom UIs,
  • 00:35:38
    there's just a lot of work
  • 00:35:39
    to do, and we would focus on
  • 00:35:40
    that, and we would benefit
  • 00:35:42
    from all the tailwinds of
  • 00:35:43
    models getting better and
  • 00:35:44
    better. That's essentially
  • 00:35:46
    what happened, in fact, I
  • 00:35:47
    would say, Sonnet 3.5 made
  • 00:35:50
    our product so good,
  • 00:35:51
    in the sense that if you use
  • 00:35:54
    Sonnet 3.5 as the model
  • 00:35:56
    choice within Perplexity,
  • 00:35:59
    it's very difficult to find
  • 00:36:00
    a hallucination. I'm not
  • 00:36:01
    saying it's impossible,
  • 00:36:04
    but it dramatically reduced
  • 00:36:06
    the rate of hallucinations,
  • 00:36:08
    which meant, the problem of
  • 00:36:10
    question-answering,
  • 00:36:11
    asking a question, getting
  • 00:36:12
    an answer, doing fact
  • 00:36:13
    checks, research, going and
  • 00:36:15
    asking anything out there
  • 00:36:16
    because almost all the
  • 00:36:17
    information is on the
  • 00:36:18
    web, was such a big unlock.
  • 00:36:22
    And that helped us grow 10x
  • 00:36:24
    over the course of the year
  • 00:36:24
    in terms of usage.
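
The "questions with accurate answers and sources" workflow described above is essentially a retrieve-then-generate pattern. Below is a minimal sketch of that pattern, assuming a hypothetical `web_search` retrieval backend and a hypothetical `call_llm` completion function; both names are illustrative stand-ins, not Perplexity's actual implementation.

```python
# Minimal retrieve-then-generate sketch. `web_search` and `call_llm` are
# hypothetical stand-ins for a search backend and a chat-model API.
from dataclasses import dataclass

@dataclass
class Snippet:
    url: str
    text: str

def web_search(query: str, k: int = 5) -> list[Snippet]:
    """Hypothetical retrieval step: return the top-k web snippets for a query."""
    raise NotImplementedError("plug in a real search/retrieval backend")

def call_llm(prompt: str) -> str:
    """Hypothetical model call (whichever model the user selects)."""
    raise NotImplementedError("plug in a real model API")

def answer_with_sources(question: str) -> str:
    snippets = web_search(question)
    # Number the sources so the model can cite them inline as [1], [2], ...
    context = "\n".join(f"[{i + 1}] {s.url}\n{s.text}" for i, s in enumerate(snippets))
    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite sources inline as [n]. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)
```

In this framing, most of the work the conversation mentions, reliability, custom UIs, and scaling with usage, lives outside the single model call above.
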
  • 00:36:25
    And you've made huge strides
  • 00:36:27
    in terms of users,
  • 00:36:28
    and you know, we hear on
  • 00:36:29
    CNBC a lot, like big
  • 00:36:30
    investors who are huge fans.
  • 00:36:32
    Yeah. Jensen Huang himself
  • 00:36:33
    right? He mentioned it the
  • 00:36:34
    other, in his keynote.
  • 00:36:35
    Yeah. The other night.
  • 00:36:37
    He's a pretty regular user,
  • 00:36:38
    actually, he's not just
  • 00:36:39
    saying it. He's actually a
  • 00:36:40
    pretty regular user.
  • 00:36:42
    So, a year ago we weren't
  • 00:36:43
    even talking about
  • 00:36:44
    monetization because you
  • 00:36:45
    guys were just so new and
  • 00:36:46
    you wanted to, you know,
  • 00:36:48
    get yourselves out there and
  • 00:36:49
    build some scale, but now
  • 00:36:50
    you are looking at things
  • 00:36:51
    like that, increasingly an
  • 00:36:53
    ad model, right?
  • 00:36:54
    Yeah, we're experimenting
  • 00:36:55
    with it.
  • 00:36:56
    I know there's some
  • 00:36:58
    controversy on like,
  • 00:37:00
    why should we do ads?
  • 00:37:01
    Whether you can have a
  • 00:37:03
    truthful answer engine
  • 00:37:04
    despite having ads.
  • 00:37:06
    And in my opinion,
  • 00:37:08
    we've been pretty
  • 00:37:10
    proactively thoughtful about
  • 00:37:11
    it where we said,
  • 00:37:13
    okay, as long as the answer
  • 00:37:14
    is always accurate,
  • 00:37:15
    unbiased and not corrupted
  • 00:37:17
    by someone's advertising
  • 00:37:19
    budget. You only get to see
  • 00:37:21
    some sponsored questions,
  • 00:37:23
    and even the answers to
  • 00:37:24
    those sponsored questions
  • 00:37:25
    are not influenced by them,
  • 00:37:27
    and questions are also not
  • 00:37:30
    picked in a way where it's
  • 00:37:31
    manipulative. Sure,
  • 00:37:34
    there are some things that
  • 00:37:35
    the advertiser also wants,
  • 00:37:36
    which is they want you to
  • 00:37:37
    know about their brand, and
  • 00:37:38
    they want you to know the
  • 00:37:39
    best parts of their brand,
  • 00:37:41
    just like how, when you go
  • 00:37:42
    and introduce
  • 00:37:43
    yourself to someone, you
  • 00:37:44
    want them to see the
  • 00:37:45
    best parts of you, right?
  • 00:37:47
    So that's all there.
  • 00:37:48
    But you still don't have to
  • 00:37:50
    click on a sponsored
  • 00:37:51
    question. You can ignore it.
  • 00:37:53
    And we're only charging them
  • 00:37:54
    CPM right now.
  • 00:37:55
    So we ourselves
  • 00:37:57
    are not even incentivized to
  • 00:37:58
    make you click yet.
  • 00:38:00
    So I think considering all
  • 00:38:02
    this, we're actually trying
  • 00:38:03
    to get it right long term.
  • 00:38:05
    Instead of going the Google
  • 00:38:06
    way of forcing you to click
  • 00:38:08
    on links. I remember when
  • 00:38:08
    people were talking about
  • 00:38:09
    the commoditization of
  • 00:38:10
    models a year ago and you
  • 00:38:11
    thought, oh, it was
  • 00:38:12
    controversial, but now it's
  • 00:38:14
    not controversial. It's kind
  • 00:38:15
    of like that's happening, and
  • 00:38:16
    keeping your eye on
  • 00:38:17
    that is smart.
  • 00:38:19
    By the way, we benefit a lot
  • 00:38:20
    from model commoditization,
  • 00:38:22
    except we also need to
  • 00:38:23
    figure out something to
  • 00:38:24
    offer to the paid users,
  • 00:38:26
    like a more sophisticated
  • 00:38:27
    research agent that can do
  • 00:38:29
    like multi-step reasoning,
  • 00:38:30
    go and do, like, 15
  • 00:38:31
    minutes worth of searching
  • 00:38:32
    and give you like an
  • 00:38:34
    analysis, an analyst type of
  • 00:38:35
    answer. All that's going to
  • 00:38:37
    come, all that's going to
  • 00:38:38
    stay in the product. Nothing
  • 00:38:39
    changes there. But there's a
  • 00:38:41
    ton of questions every free
  • 00:38:43
    user asks on a day-to-day basis
  • 00:38:45
    that need quick,
  • 00:38:46
    fast answers, like it
  • 00:38:48
    shouldn't be slow,
  • 00:38:49
    and all that will be free,
  • 00:38:51
    whether you like it or not,
  • 00:38:52
    it has to be free. That's
  • 00:38:53
    what people are used to.
  • 00:38:55
    And that means like figuring
  • 00:38:57
    out a way to make that free
  • 00:38:58
    traffic also monetizable.
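
The "more sophisticated research agent" mentioned above can be sketched as an iterative search loop with a fixed step budget, followed by a synthesis pass. The sketch below reuses the hypothetical `web_search` and `call_llm` stubs from the earlier example and is only an illustration of the idea, not Perplexity's product.

```python
# Rough sketch of a multi-step research loop: search, take notes, let the
# model decide whether to search again, then synthesize an analyst-style
# answer. Reuses the hypothetical web_search / call_llm stubs shown earlier.
def research_agent(question: str, max_steps: int = 5) -> str:
    notes: list[str] = []
    query = question
    for _ in range(max_steps):
        for snippet in web_search(query, k=3):
            notes.append(f"{snippet.url}: {snippet.text}")
        # Ask the model whether more searching is needed and, if so, what to search for.
        followup = call_llm(
            "Given the question and the notes so far, reply with DONE if the "
            "notes are sufficient, otherwise reply with a single follow-up "
            f"search query.\nQuestion: {question}\nNotes:\n" + "\n".join(notes)
        )
        if followup.strip().upper() == "DONE":
            break
        query = followup.strip()
    # Final synthesis pass over everything collected.
    return call_llm(
        "Write an analyst-style answer to the question from these notes, "
        "noting which note each claim comes from and flagging thin evidence.\n"
        f"Question: {question}\nNotes:\n" + "\n".join(notes)
    )
```
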
  • 00:39:00
    So you're not trying to
  • 00:39:01
    change user habits. But it's
  • 00:39:02
    interesting because you are
  • 00:39:03
    kind of trying to teach new
  • 00:39:04
    habits to advertisers.
  • 00:39:05
    They can't have everything
  • 00:39:07
    that they have in a Google
  • 00:39:08
    ten blue links search.
  • 00:39:09
    What's the response been
  • 00:39:10
    from them so far? Are they
  • 00:39:11
    willing to accept some of
  • 00:39:12
    the trade offs?
  • 00:39:13
    Yeah, I mean that's why they
  • 00:39:14
    are trying stuff. Like, Intuit
  • 00:39:17
    is working with us.
  • 00:39:18
    And then there's many other
  • 00:39:20
    brands. Dell, like all these
  • 00:39:23
    people are working with us
  • 00:39:24
    to test, right?
  • 00:39:26
    They're also excited about,
  • 00:39:28
    look, everyone knows that,
  • 00:39:30
    like, whether you like it or
  • 00:39:31
    not, 5 or 10 years from now,
  • 00:39:33
    most people are going to be
  • 00:39:34
    asking AIs most of the
  • 00:39:36
    things, and not on the
  • 00:39:37
    traditional search engine,
  • 00:39:38
    everybody understands that.
  • 00:39:40
    So everybody wants to be
  • 00:39:43
    early adopters of the new
  • 00:39:45
    platforms, new UX,
  • 00:39:47
    and learn from it,
  • 00:39:48
    and build things together.
  • 00:39:49
    They're not viewing
  • 00:39:51
    it as like, okay, you guys
  • 00:39:52
    go figure out everything
  • 00:39:53
    else and then we'll come
  • 00:39:54
    later.
  • 00:39:55
    I'm smiling because it goes
  • 00:39:56
    back perfectly to the point
  • 00:39:57
    you made when you first sat
  • 00:39:58
    down today, which is
  • 00:40:00
    necessity is the mother of
  • 00:40:01
    all invention,
  • 00:40:03
    right? And that's what
  • 00:40:03
    advertisers are essentially
  • 00:40:04
    looking at. They're saying
  • 00:40:05
    this field is changing.
  • 00:40:06
    We have to learn to adapt
  • 00:40:07
    with it. Okay,
  • 00:40:09
    Arvind, I took up so much of
  • 00:40:10
    your time. Thank you so much
  • 00:40:11
    for taking the time.
Tags
  • AI
  • Deepseek
  • Open Source
  • China
  • Silicon Valley
  • GPT-4
  • Model Efficiency
  • Nvidia
  • Tech Competition
  • AI Ethics