OpenAI DevDay 2024 | Virtual AMA with Sam Altman, moderated by Harry Stebbings, 20VC

00:47:44
https://www.youtube.com/watch?v=Hn27upT2m_o

Summary

TLDR: In an in-depth, wide-ranging interview between OpenAI's Sam Altman and Harry Stebbings, key topics were discussed, including the future of AI models, the development of no-code tools for non-technical founders, and OpenAI's strategy for staying ahead in a competitive industry. Altman stressed the importance of reasoning models and how they can unlock new potential in science and technology. He also shared his views on using AI in product innovation and on how OpenAI handles challenges such as global concerns over semiconductor supply and the development of larger, better models. The interview highlights OpenAI's optimistic vision for the future of AI while keeping the focus on improving system capability and innovation.

Highlights

  • 👔 Sam Altman has adapted to his busy life.
  • 🧠 Reasoning models are OpenAI's main focus going forward.
  • 🔧 No-code tools will be developed for non-technical founders.
  • 🔓 OpenAI sees an important place for open-source models in the AI ecosystem.
  • 🚀 OpenAI is confident its models will keep improving.
  • 🤖 Future AI agents will be more capable and flexible.
  • ⚖️ AI model pricing may be based on compute usage.
  • 🌎 OpenAI is aware of global semiconductor supply concerns.
  • 💡 AI innovation can unlock potential in education and healthcare.
  • 📈 Technology and society advance at different rates.

Timeline

  • 00:00:00 - 00:05:00

    Harry Stebbings of 20VC opens OpenAI Dev Day with an interview with Sam Altman, asking a question Sam is rarely asked: how he is doing. Sam says that although life is extremely busy, it now feels normal to him. Sam discusses OpenAI's future, emphasizing better models with a focus on reasoning models that could open new frontiers in science and coding.

  • 00:05:00 - 00:10:00

    Sam Altman talks about providing no-code AI building tools for non-technical founders. He explains that while such tools are in development, they will initially help people who already code. Sam also notes that as OpenAI's models improve, businesses built on patching the current models' shortcomings will become less relevant.

  • 00:10:00 - 00:15:00

    The discussion turns to OpenAI's potential to "steamroll" parts of the market, and Sam acknowledges that good AI models will make building great products easier. He notes a shift among startups from betting against to betting on rapid model improvement. Sam is optimistic about the market value AI will create.

  • 00:15:00 - 00:20:00

    Sam and Harry discuss the role of open source in AI. Sam recognizes the importance of open source and its place in the AI ecosystem. He also shares his view of AI agents, saying an agent's job is to carry out long-duration tasks with minimal supervision, and gives examples of how they could be used in the real world.

  • 00:20:00 - 00:25:00

    Sam discusses how AI can increase economic value by making complicated things easier. He stresses that AI's real power lies not in the headline figures of value created but in the economic accessibility it will expand. AI's role in industries such as healthcare and education is also expected to be enormous.

  • 00:25:00 - 00:30:00

    OpenAI is focused on improving its models' reasoning capability, alongside multimodal work. Sam discusses the role of reasoning in AI development and expresses confidence in rapid progress. Harry raises the common view of models as depreciating assets, but Sam is confident in their long-term value and in the justification for investing in their development.

  • 00:30:00 - 00:35:00

    Sam shares OpenAI's experience in achieving better reasoning capability through experimentation and learning from past mistakes while developing models. He also stresses the importance of an organization's ability to execute on something new and unproven, something he is proud of in OpenAI's culture.

  • 00:35:00 - 00:40:00

    Sam talks about AI's potential to elevate human talent worldwide, acknowledging that many talented individuals never reach their potential because of various barriers. He reflects on how his leadership style has changed over the past 10 years, particularly in building a fast-growing company, and on advice he has received about focusing on long-term growth while keeping up with day-to-day duties.

  • 00:40:00 - 00:47:44

    Sam answers questions about the challenges and uncertainty of decision-making in a fast-growing AI organization with complex systems. He emphasizes a varied approach to seeking advice and the importance of navigating semiconductor supply amid international tensions, along with its potential impact on the industry. Sam also observes that AI differs from previous technological revolutions.

Video Q&A

  • How does Sam Altman stay fresh with such a packed schedule?

    Sam Altman says he has adapted to the busy lifestyle and now considers it his new normal.

  • What is OpenAI's focus for future development?

    OpenAI is focused on reasoning models that can help drive scientific progress and write complex code.

  • Will OpenAI provide no-code tools for non-technical founders?

    Yes, OpenAI plans to offer high-quality no-code tools so that non-technical people can build and scale AI apps.

  • What is OpenAI's view of open source in AI?

    OpenAI sees an important place for open-source models in the AI ecosystem, and customers will pick the delivery mechanism that works for them.

  • What differentiates OpenAI's models from competitors'?

    OpenAI focuses on reasoning capability as its key differentiator, supporting the next big leap in value created.

  • How does OpenAI view the role of future AI agents?

    AI agents are expected to carry out long-duration tasks with minimal supervision and collaborate like smart senior coworkers.

  • Does OpenAI believe models will keep growing and improving?

    Yes, OpenAI believes the trajectory of improving model capability will continue for a long time.

  • How does OpenAI handle semiconductor supply challenges?

    Although there are concerns about the semiconductor supply chain, OpenAI treats it as one of many ecosystem complexities it has to manage.

  • What is OpenAI's approach to AI model delivery and pricing?

    One view is that pricing will be based on the amount of compute used to solve a problem rather than on individual seats.

  • Why does OpenAI prioritize reasoning in AI?

    Reasoning is the key to progress across a wide range of applications and to the value AI models deliver.

Subtitles (en)
  • 00:00:13
    hello everyone welcome to open AI Dev
  • 00:00:15
    day I am Harry Stebbings of 20VC and I
  • 00:00:19
    am very very excited to interview Sam
  • 00:00:22
    Altman welcome Sam Sam thank you for
  • 00:00:26
    letting me do this today with you thanks
  • 00:00:28
    for doing now we have many many
  • 00:00:31
    questions from the audience and so I
  • 00:00:33
    wanted to start with one which I don't
  • 00:00:35
    think people ask you actually very often
  • 00:00:36
    in interviews which is firstly like how
  • 00:00:38
    are you you are one of the busiest
  • 00:00:40
    people on the planet you also always
  • 00:00:42
    look remarkably fresh how are
  • 00:00:46
    you fine I
  • 00:00:50
    think yeah yeah I think kind of get used
  • 00:00:52
    to anything and it has been like a sort
  • 00:00:54
    of crazy busy last couple of years but
  • 00:00:56
    now it just feels like normal life and I
  • 00:00:58
    forget that it used to be otherwise okay
  • 00:01:00
    listen I want to start by kind of diving
  • 00:01:02
    in we had a lot of fantastic questions
  • 00:01:04
    from the audience across a number of
  • 00:01:06
    different kind of areas and I want to
  • 00:01:08
    start with actually the question of when
  • 00:01:10
    we look forward is the future of open AI
  • 00:01:13
    more models like o1 or is it more larger
  • 00:01:18
    models that we would maybe have expected
  • 00:01:20
    of old how do we think about
  • 00:01:23
    that I mean we want to make things
  • 00:01:26
    better across the board but this
  • 00:01:27
    direction of reasoning models is a
  • 00:01:30
    particular importance to us I think
  • 00:01:32
    reasoning will unlock I hope reasoning
  • 00:01:34
    will unlock a lot of the things that
  • 00:01:35
    we've been waiting years to do and the
  • 00:01:39
    the ability for models like this to for
  • 00:01:41
    example contribute to new science uh
  • 00:01:43
    help write a lot more very difficult
  • 00:01:46
    code uh that I think can drive things
  • 00:01:48
    forward to a significant degree so you
  • 00:01:50
    should expect rapid Improvement in the O
  • 00:01:53
    Series of models and it's of great
  • 00:01:55
    strategic importance to us so another
  • 00:01:59
    one that I thought was really important
  • 00:02:00
    for us to touch on was when we look
  • 00:02:02
    forward to open ai's future plans how do
  • 00:02:05
    you think about developing no code tools
  • 00:02:07
    for non-technical Founders to build and
  • 00:02:10
    scale AI apps how do you think about
  • 00:02:12
    that it'll get there for sure uh I I
  • 00:02:15
    think the the first step will be tools
  • 00:02:18
    that make people who know how to code
  • 00:02:20
    well more productive but eventually I
  • 00:02:22
    think we can offer really high quality
  • 00:02:24
    no code tools and already there's some
  • 00:02:26
    out there that make sense but you can't
  • 00:02:29
    you can't sort of in a no code way say I
  • 00:02:30
    have like a full startup I want to build
  • 00:02:33
    um that's going to take a while so when
  • 00:02:36
    we look at where we are in the stack
  • 00:02:38
    today open AI sits in a certain place
  • 00:02:41
    how far up the stack is open AI going to
  • 00:02:43
    go I think it's a brilliant question but
  • 00:02:45
    if you're spending a lot of time tuning
  • 00:02:47
    your rag system is this a waste of time
  • 00:02:49
    because open AI ultimately thinks
  • 00:02:51
    they'll own this part of the application
  • 00:02:53
    layer or is it not and how do you answer
  • 00:02:55
    a Founder who has that question
  • 00:03:00
    the the the general answer we try to
  • 00:03:02
    give is and you have to assume that
  • 00:03:04
    we're biased here and talking our book
  • 00:03:06
    and may be wrong but the general answer
  • 00:03:07
    we try to give
  • 00:03:10
    is we are going to try our hardest and
  • 00:03:13
    believe we will succeed at making our
  • 00:03:15
    models better and better and better and
  • 00:03:18
    if you are building a business that
  • 00:03:20
    patches some current small
  • 00:03:23
    shortcomings if we do our job right uh
  • 00:03:26
    then that will not be as important in
  • 00:03:29
    the future
  • 00:03:30
    if on the other hand you build a company
  • 00:03:33
    that benefits from the model getting
  • 00:03:35
    better and better if you know an oracle
  • 00:03:38
    told you today that
  • 00:03:40
    o4 was going to be just absolutely
  • 00:03:42
    incredible and do all of these things
  • 00:03:45
    that right now feel impossible and you
  • 00:03:47
    were happy about that then you know
  • 00:03:50
    maybe we're wrong but at least that's
  • 00:03:52
    what we're going for and if instead you
  • 00:03:54
    say okay there's this area where there
  • 00:03:57
    are many but you pick one of the many
  • 00:03:59
    areas where o1-preview underperforms and
  • 00:04:01
    say I'm going to patch this and just
  • 00:04:02
    barely get it to work then you're sort
  • 00:04:05
    of assuming that the next turn of the
  • 00:04:07
    model crank won't be as good as we think
  • 00:04:08
    it will be and that is the general
  • 00:04:12
    philosophical message we try to get out
  • 00:04:14
    to startups like we we believe that we
  • 00:04:17
    are on a pretty a quite steep trajectory
  • 00:04:20
    of improvement and that the current
  • 00:04:22
    shortcomings of the models today um will
  • 00:04:26
    just be taken care of by Future
  • 00:04:28
    generations and
  • 00:04:30
    you know I encourage people to be
  • 00:04:31
    aligned with that so we did an interview
  • 00:04:34
    before with Brad and sorry it's not
  • 00:04:37
    quite on schedule but I think the show
  • 00:04:38
    has always been successful when we kind
  • 00:04:40
    of go a little bit off schedule there
  • 00:04:42
    was this brilliant yeah sorry for that
  • 00:04:44
    uh but there was this brilliant kind of
  • 00:04:45
    meme that came out of it and I felt a
  • 00:04:47
    little bit guilty but you you said
  • 00:04:49
    wearing this 20VC jumper which is
  • 00:04:50
    incredibly proud moment for me uh for
  • 00:04:53
    certain segments like the one you
  • 00:04:54
    mentioned there there would be the
  • 00:04:56
    potential to steamroll if you're
  • 00:04:58
    thinking as a founder stay building
  • 00:05:00
    where is open AI going to potentially
  • 00:05:03
    come and steamroll versus where they're
  • 00:05:04
    not also for me as an investor trying to
  • 00:05:07
    invest in opportunities that aren't
  • 00:05:08
    going to get damaged how should Founders
  • 00:05:12
    and me as an investor think about
  • 00:05:14
    that there will be many trillions of
  • 00:05:16
    dollars of market cap that gets created
  • 00:05:19
    new market cap that gets created by
  • 00:05:22
    using AI to build products and services
  • 00:05:24
    that were either impossible or quite
  • 00:05:26
    impractical before and the
  • 00:05:31
    there's this one set of areas where
  • 00:05:33
    we're going to try
  • 00:05:35
    to make irrelevant which is you know we
  • 00:05:38
    just want the models to be really really
  • 00:05:40
    good such that you don't have to like
  • 00:05:42
    fight so hard to get them to do what you
  • 00:05:44
    want to do but all of this other stuff
  • 00:05:47
    uh which is building these incredible
  • 00:05:49
    products and services on top of this new
  • 00:05:52
    technology we think that just gets
  • 00:05:54
    better and better um one of the
  • 00:05:57
    surprises to me early on was and this is
  • 00:06:00
    no longer the case but in like the GPT
  • 00:06:03
    3.5 days it felt like 95% of startups
  • 00:06:06
    something like that wanted to bet
  • 00:06:09
    against the models getting way better
  • 00:06:11
    and so and they were doing these things
  • 00:06:13
    where we could already see GPT-4 coming
  • 00:06:15
    and we're like man it's going to be so
  • 00:06:17
    good it's not going to have these
  • 00:06:18
    problems if you're building a tool just
  • 00:06:21
    to get around this one shortcoming of
  • 00:06:23
    the model that's going to become less
  • 00:06:25
    and less relevant and we forget how bad
  • 00:06:29
    the models were a couple of years ago it
  • 00:06:31
    hasn't been that long on the calendar
  • 00:06:33
    but there were there were just a lot of
  • 00:06:35
    things and so it seemed like these good
  • 00:06:36
    areas to build a thing uh to like to
  • 00:06:41
    plug a hole rather than to build
  • 00:06:43
    something to go deliver like the great
  • 00:06:46
    AI tutor or the great AI medical adviser
  • 00:06:48
    or whatever
  • 00:06:50
    and and so I felt like 95% of people
  • 00:06:53
    that were were like betting against the
  • 00:06:55
    models getting better 5% of the people
  • 00:06:57
    were betting for the models getting
  • 00:06:58
    better I I think that's now reversed I
  • 00:07:01
    think people have
  • 00:07:02
    like internalized the rate of
  • 00:07:04
    improvement and have heard us on what we
  • 00:07:08
    intend to do
  • 00:07:11
    um so it's it no longer seems to be such
  • 00:07:14
    an issue but it was something we used to
  • 00:07:16
    fret about a lot because we kind of we
  • 00:07:18
    saw it was going to happen to all of
  • 00:07:20
    these very hardworking people you you
  • 00:07:21
    said about the trillions of dollars of
  • 00:07:23
    value to be created there and then I
  • 00:07:24
    promise we will return to these
  • 00:07:25
    brilliant questions I'm sure you saw I'm
  • 00:07:28
    not sure if you saw but Masa sat on
  • 00:07:30
    stage and say we will have I'm not going
  • 00:07:31
    to do an accent cuz my accents are
  • 00:07:33
    terrible um but there will be 9 trillion
  • 00:07:36
    dollars of value created every single
  • 00:07:39
    year which will offset the 9 trillion
  • 00:07:42
    capex that he thought would be needed
  • 00:07:45
    I'm just intrigued how did you think
  • 00:07:47
    about that when you saw that how do you
  • 00:07:49
    reflect on
  • 00:07:51
    that uh I can't put it down to like any
  • 00:07:57
    I think like if we can get it right with
  • 00:07:58
    an order of magnitude that's that's good
  • 00:08:00
    enough for now there's clearly going to
  • 00:08:01
    be a lot of capex spent and clearly a
  • 00:08:03
    lot of value created this happens with
  • 00:08:05
    every other Mega technological
  • 00:08:07
    revolution of which this is clearly one
  • 00:08:10
    um
  • 00:08:12
    but uh you know like next year will be a
  • 00:08:15
    big push for us
  • 00:08:17
    into these next Generation systems you
  • 00:08:20
    talked about when there could be like a
  • 00:08:22
    no code software agent I don't know how
  • 00:08:24
    long that's going to take but if we use
  • 00:08:26
    that as an example and imagine forward
  • 00:08:28
    to towards it think about what think
  • 00:08:31
    about how much economic value gets
  • 00:08:33
    unlocked for the world if anybody can
  • 00:08:35
    just describe like a whole company's
  • 00:08:38
    worth of software that they want this is
  • 00:08:39
    a ways away obviously but when we get
  • 00:08:41
    there and have it happen um think about
  • 00:08:45
    how difficult and how expensive that is
  • 00:08:46
    now think about how much value it
  • 00:08:47
    creates if you keep the same amount of
  • 00:08:49
    value but make it wildly more accessible
  • 00:08:51
    and less expensive that that's really
  • 00:08:53
    powerful and I think we'll see many
  • 00:08:55
    other examples like that we I mentioned
  • 00:08:58
    earlier like healthcare and education
  • 00:09:00
    but those are two that are both like
  • 00:09:03
    trillions of dollars of value to the
  • 00:09:04
    world to get right if you and if AI can
  • 00:09:07
    really really truly enable this to
  • 00:09:09
    happen in a different way than it has
  • 00:09:11
    before
  • 00:09:13
    I I don't think big numbers are the
  • 00:09:15
    point and they also the debate about
  • 00:09:17
    whether it's 9 trillion or 1 trillion or
  • 00:09:19
    whatever like you know I'll leave it to smarter
  • 00:09:23
    people than me to figure that
  • 00:09:24
    out
  • 00:09:26
    but but the value creation does seem
  • 00:09:28
    just unbelievable here we're going to
  • 00:09:32
    get to agents in terms of kind of how
  • 00:09:33
    that value is delivered in terms of like
  • 00:09:35
    the delivery mechanism for which
  • 00:09:37
    it's valued open source is an incredibly
  • 00:09:39
    prominent method through which it could
  • 00:09:41
    be how do you think about the role of
  • 00:09:42
    Open Source in the future of AI and how
  • 00:09:45
    does internal discussions look like for
  • 00:09:48
    you when the question comes should we
  • 00:09:50
    open-source any models or some models
  • 00:09:54
    there there's clearly a really important
  • 00:09:55
    place in the ecosystem for open source
  • 00:09:57
    models there's also really good open
  • 00:09:59
    source models that now exist
  • 00:10:02
    um I think there's also a place for like
  • 00:10:06
    nicely offered well integrated services
  • 00:10:08
    and apis and you know I think it's I
  • 00:10:12
    think it makes sense that all of the
  • 00:10:14
    stuff is on offer and people will pick
  • 00:10:15
    what what works for
  • 00:10:17
    them as a delivery mechanism we have the
  • 00:10:20
    open source as a kind of on-ramp to
  • 00:10:22
    customers and a way to deliver that we
  • 00:10:24
    can have agents I think there's a lot of
  • 00:10:27
    uh kind of semantic confusion around
  • 00:10:29
    what an agent is how do you think about
  • 00:10:31
    the definition of Agents today and what
  • 00:10:32
    is an agent to you and what is it
  • 00:10:35
    not
  • 00:10:38
    um this is like my off-the-cuff answer
  • 00:10:41
    it's not well considered but something
  • 00:10:43
    that I can give a long duration task to
  • 00:10:47
    and provide minimal
  • 00:10:49
    supervision during execution for what do
  • 00:10:52
    you think people think about agents that
  • 00:10:55
    actually they get
  • 00:10:58
    wrong well it's more like I don't I
  • 00:11:01
    don't think any of us yet have an
  • 00:11:02
    intuition for what this is going to be
  • 00:11:05
    like you know we're all gesturing at
  • 00:11:06
    something that seems
  • 00:11:10
    important maybe I can give the following
  • 00:11:13
    example when people talk about an AI
  • 00:11:17
    agent acting on their behalf uh the the
  • 00:11:20
    main example they seem to give fairly
  • 00:11:23
    consistently is oh you can
  • 00:11:26
    like you know you can like ask the agent
  • 00:11:29
    to go book you a restaurant
  • 00:11:32
    reservation um and either it can like
  • 00:11:34
    use open table or it can like call the
  • 00:11:36
    restaurant or or whatever and you know
  • 00:11:40
    it's like okay sure that's that's like a
  • 00:11:42
    mildly annoying thing to have to do and
  • 00:11:45
    it maybe like saves you some
  • 00:11:47
    work one of the things that I think is
  • 00:11:50
    interesting is a world where
  • 00:11:53
    uh you can just do things that you
  • 00:11:55
    wouldn't or couldn't do as a human so
  • 00:11:57
    what if what if instead of of calling uh
  • 00:12:01
    one restaurant to make a reservation my
  • 00:12:03
    agent would call me like 300 and figure
  • 00:12:05
    out which one had the best food for me
  • 00:12:06
    or some special thing available or
  • 00:12:08
    whatever and then you would say well
  • 00:12:09
    that's like really annoying if your
  • 00:12:10
    agent is calling 300 restaurants but if
  • 00:12:13
    if it's an agent answering each of those
  • 00:12:15
    300 300 places then no problem and it
  • 00:12:17
    can be this like massively parallel
  • 00:12:19
    thing that a human can't do so that's
  • 00:12:21
    like a trivial example
  • 00:12:24
    but there are these like limitations to
  • 00:12:27
    human bandwidth that maybe these agents
  • 00:12:29
    won't
  • 00:12:30
    have the category I think though is more
  • 00:12:33
    interesting is not the one that people
  • 00:12:36
    normally talk about where you have this
  • 00:12:37
    thing calling restaurants for you
  • 00:12:41
    um but something that's more like a
  • 00:12:45
    really smart senior
  • 00:12:48
    coworker um where you can like
  • 00:12:50
    collaborate on a project with and the
  • 00:12:52
    agent can go do like a two-day task or
  • 00:12:55
    two week task really well and you know
  • 00:12:58
    pinging you when it has questions but
  • 00:13:00
    come back to you with like a great work
  • 00:13:03
    product does this fundamentally change
  • 00:13:05
    the way that SAS is priced when you
  • 00:13:07
    think about extraction of value bluntly
  • 00:13:11
    and normally it's on a per seat basis
  • 00:13:12
    but now you're actually kind of
  • 00:13:14
    replacing labor so to speak how do you
  • 00:13:16
    think about the future of pricing with
  • 00:13:19
    that in mind when you are such a core
  • 00:13:20
    part of an Enterprise
  • 00:13:22
    Workforce like how will price or what it
  • 00:13:24
    will do for people who are no like how
  • 00:13:27
    will it price oh
  • 00:13:33
    um like will we always have per seat
  • 00:13:40
    pricing we look I can make I can like
  • 00:13:42
    I'll speculate here for fun but we
  • 00:13:44
    really have no idea this I mean Sam I'm
  • 00:13:46
    a venture investor for a living so we
  • 00:13:48
    speculate for fun all the time it's
  • 00:13:50
    okay um I mean I could imagine a world
  • 00:13:54
    where you can say like I want one GPU or
  • 00:13:56
    10 GPUs or 100 GPUs to just be like
  • 00:13:59
    churning on my problems all the
  • 00:14:01
    time and it's not like you're not like
  • 00:14:06
    paying per seat or even per agent but
  • 00:14:09
    you're like it's priced based off the
  • 00:14:11
    amount of compute that's like working on
  • 00:14:13
    a you know on your problems all the time
  • 00:14:16
    do we need to build specific models for
  • 00:14:18
    agentic use or do we not how do you
  • 00:14:22
    think about
  • 00:14:23
    that
  • 00:14:25
    um there's a huge amount of
  • 00:14:27
    infrastructure and Scaffolding to build
  • 00:14:28
    for
  • 00:14:30
    but I think o1 points the way to a model
  • 00:14:32
    that is capable of doing great agentic
  • 00:14:36
    tasks I hate the word agentic by the way
  • 00:14:38
    i' I'd love it if we could come up with
  • 00:14:40
    a what would you like this is your
  • 00:14:42
    chance to coin a new word I don't have
  • 00:14:45
    this could be a spoiler of that that
  • 00:14:46
    that really is something I I'll keep
  • 00:14:48
    thinking while we
  • 00:14:50
    talk on the model side Sam everyone says
  • 00:14:54
    that uh models are depreciating assets
  • 00:14:56
    the commoditization of models is so
  • 00:14:59
    rife how do you respond and think about
  • 00:15:02
    that and when you think about the
  • 00:15:04
    increasing Capital intensity to train
  • 00:15:06
    models are we actually seeing the
  • 00:15:07
    reversion of that where it requires so
  • 00:15:10
    much money that actually very few people
  • 00:15:11
    can do
  • 00:15:12
    it uh it's definitely true that they're
  • 00:15:15
    depreciating assets um this thing that
  • 00:15:18
    they're not though worth as much as they
  • 00:15:21
    cost to train that seems totally wrong
  • 00:15:24
    um to say nothing of the fact that
  • 00:15:26
    there's like a there's a positive
  • 00:15:29
    compounding effect as you learn to train
  • 00:15:30
    these models you get better at training
  • 00:15:32
    the next one but the actual like Revenue
  • 00:15:34
    we can make from a model I think
  • 00:15:36
    justifies the investment um I to be fair
  • 00:15:41
    uh I don't think that's true for
  • 00:15:43
    everyone and there's a lot of there are
  • 00:15:46
    probably too many people training very
  • 00:15:48
    similar models and if you're a little
  • 00:15:50
    behind or if you don't have
  • 00:15:54
    a product with the sort of normal rules
  • 00:15:57
    of business that make that product
  • 00:15:59
    sticky and valuable then yeah maybe you
  • 00:16:03
    can't maybe it's harder to get a return
  • 00:16:06
    on the investment we're very fortunate
  • 00:16:08
    to have chat GPT and hundreds of
  • 00:16:10
    millions of people that use our models
  • 00:16:11
    and so even if it costs a lot we get to
  • 00:16:13
    like amortise that cost across a lot of
  • 00:16:15
    people how do you think about how open
  • 00:16:17
    AI models continue to differentiate over
  • 00:16:20
    time and where you most want to focus to
  • 00:16:22
    expand that
  • 00:16:25
    differentiation uh reasoning is our
  • 00:16:29
    current most important area of focus I
  • 00:16:31
    think this is what unlocks the next like
  • 00:16:34
    massive Leap Forward in in value created
  • 00:16:37
    so that's we'll improve them in lots of
  • 00:16:40
    ways uh we will
  • 00:16:42
    do multimodal work uh we will do other
  • 00:16:46
    features in the models that we think are
  • 00:16:47
    super important
  • 00:16:50
    to the ways that people want to use
  • 00:16:52
    these things how do you think how do you
  • 00:16:54
    think about reasoning in multimodal work
  • 00:16:56
    like there the challenge is what you
  • 00:16:58
    want to achieve love to understand that
  • 00:17:01
    reasoning in multimodality specific
  • 00:17:04
    yeah I hope it's just going to work I
  • 00:17:06
    mean it obviously takes some doing to
  • 00:17:08
    get done but uh you know
  • 00:17:11
    like people like when they're babies and
  • 00:17:14
    toddlers before they're good at language
  • 00:17:16
    can still do quite complex visual
  • 00:17:18
    reasoning so clearly this is
  • 00:17:20
    possible totally is um how will Vision
  • 00:17:23
    capabilities scale with new inference
  • 00:17:26
    time paradigm set by o1
  • 00:17:36
    uh without spoiling anything I would
  • 00:17:38
    expect rapid progress in
  • 00:17:41
    image based
  • 00:17:44
    models that's a spoiler isn't
  • 00:17:46
    it it's a bit of a carrot
  • 00:17:49
    Sam okay um going off schedule is one
  • 00:17:53
    thing trying to tease that out might get
  • 00:17:54
    me in real trouble um GPT's output is
  • 00:17:57
    generally I I like the this one a lot
  • 00:17:59
    yeah I I don't think that we are nearly
  • 00:18:01
    British enough in a lot of GPT's output
  • 00:18:03
    GPT's output is generally American in
  • 00:18:05
    spelling and tone how do we
  • 00:18:08
    think about
  • 00:18:10
    internationalization with models
  • 00:18:12
    different cultures different languages
  • 00:18:14
    and how important that
  • 00:18:16
    is it's interesting I you know I don't
  • 00:18:19
    use British English I haven't tried but
  • 00:18:20
    I would have guessed that it's really
  • 00:18:22
    good at doing British English is it
  • 00:18:26
    not okay well we'll look at that
  • 00:18:30
    we can get you your s's I'm sure there
  • 00:18:33
    we go um how does open AI make
  • 00:18:35
    breakthroughs in terms of like core
  • 00:18:38
    reasoning do we need to start pushing
  • 00:18:39
    into reinforcement learning as a pathway
  • 00:18:42
    or other new techniques aside from the
  • 00:18:47
    Transformer uh I mean there's two
  • 00:18:49
    questions in there there's how we do it
  • 00:18:51
    and then you know there's everyone's
  • 00:18:53
    favorite question which is what comes
  • 00:18:54
    beyond the Transformer
  • 00:18:57
    the how we do it is our special sauce
  • 00:19:00
    it's easy it's really easy to copy
  • 00:19:02
    something you know Works uh and one of
  • 00:19:03
    the reasons that people don't talk about
  • 00:19:05
    why it's so easy is you have the
  • 00:19:06
    conviction to know it's possible and so
  • 00:19:10
    after after a research lab does
  • 00:19:12
    something even if you don't know exactly
  • 00:19:14
    how they did it
  • 00:19:15
    it's I won't say easy but it's doable to go off
  • 00:19:18
    and copy it and you can see this in the
  • 00:19:20
    replications of GPT-4 and I'm sure you'll
  • 00:19:23
    see this in replications of
  • 00:19:25
    o1 what is really hard and the thing
  • 00:19:27
    that I'm most proud of about our culture
  • 00:19:30
    is the repeated ability to go off and do
  • 00:19:35
    something new and
  • 00:19:37
    totally unproven and a lot of
  • 00:19:41
    organizations I'm not talking about AI
  • 00:19:43
    research just generally a lot of
  • 00:19:45
    organizations talk about the ability to
  • 00:19:47
    do this there are very few that do um
  • 00:19:50
    across any field and in some sense I
  • 00:19:53
    think this is one of the most important
  • 00:19:55
    inputs to human progress so one of the
  • 00:19:58
    like retirement things I fantasize about
  • 00:20:01
    doing is writing a book of everything
  • 00:20:03
    I've learned about how to build an
  • 00:20:05
    organization and a culture that does
  • 00:20:07
    this thing not the organization that
  • 00:20:09
    just copies what everybody else has done
  • 00:20:11
    because I think this is something that
  • 00:20:13
    the world could have a lot more of it's
  • 00:20:15
    limited by human talent but there's a
  • 00:20:17
    huge amount of wasted human talent
  • 00:20:20
    because this is
  • 00:20:21
    not an organization style or culture
  • 00:20:25
    whatever you want to call it that we are
  • 00:20:27
    all good at building so I'd love way more
  • 00:20:30
    of that and that is I think the thing
  • 00:20:32
    most special about us Sam how is human
  • 00:20:34
    Talent
  • 00:20:35
    wasted oh there's just a lot of really
  • 00:20:38
    talented people in the world that are
  • 00:20:39
    not working to their full potential um
  • 00:20:42
    because they work at a bad company or
  • 00:20:44
    they live in a country that doesn't
  • 00:20:45
    support any good companies uh
  • 00:20:48
    or a long list of other things I mean
  • 00:20:51
    the
  • 00:20:52
    the one of the things I'm most excited
  • 00:20:54
    about with AI is I hope it'll get us
  • 00:20:57
    much better than we are now at helping
  • 00:20:59
    get everyone to their Max potential
  • 00:21:01
    which we are nowhere nowhere near
  • 00:21:04
    there's a lot of people in the world
  • 00:21:05
    that I'm sure would be phenomenal AI
  • 00:21:08
    researchers had their life paths just
  • 00:21:09
    gone a little bit
  • 00:21:11
    differently Sam you've had an incredible
  • 00:21:13
    journey again sorry for the off-script
  • 00:21:15
    you've had an incredible journey over
  • 00:21:17
    the last few years through you know
  • 00:21:20
    unbelievable hypergrowth you say about
  • 00:21:22
    writing a book there in retirement if
  • 00:21:24
    you reflect back on the 10 years of
  • 00:21:26
    leadership change that you've undergone
  • 00:21:29
    how have you changed your leadership
  • 00:21:31
    most
  • 00:21:34
    significantly
  • 00:21:38
    well I think the thing that has been
  • 00:21:41
    most unusual for me about these last
  • 00:21:43
    couple of
  • 00:21:44
    years
  • 00:21:46
    is just the rate at
  • 00:21:48
    which things have changed at a normal
  • 00:21:51
    company you get time to go from zero to 100
  • 00:21:55
    million in Revenue 100 million to a
  • 00:21:56
    billion billion to 10 billion you don't
  • 00:21:58
    have to do that in like a 2-year period
  • 00:22:01
    and you don't have to like build the
  • 00:22:03
    company we had the research but we
  • 00:22:05
    really didn't have a company in the
  • 00:22:06
    sense of a traditional Silicon Valley
  • 00:22:08
    startup that's you know scaling and
  • 00:22:09
    serving lots of customers whatever um
  • 00:22:12
    having to do that
  • 00:22:15
    so quickly there was just like a lot of
  • 00:22:17
    stuff that I was supposed to get more
  • 00:22:19
    time to learn than I got and what did
  • 00:22:23
    you not know that you would have liked
  • 00:22:25
    more time to learn
  • 00:22:30
    I I I mean I would say like what did I
  • 00:22:32
    know um
  • 00:22:37
    the one of the things that just came to
  • 00:22:40
    mind out of like a rolling list of a 100
  • 00:22:42
    is how hard it is how much active work
  • 00:22:46
    it takes to get the company to focus not
  • 00:22:50
    on how you grow the next 10% but the
  • 00:22:52
    next 10x and growing the next 10% it's
  • 00:22:55
    the same things that worked before will
  • 00:22:56
    work again but to go from a company
  • 00:22:58
    doing say like a billion to 10 billion
  • 00:23:00
    dollars in revenue
  • 00:23:02
    requires a whole lot of change and it is
  • 00:23:05
    not the sort of like let's do next week
  • 00:23:08
    what we did this week mindset and in a
  • 00:23:11
    world where people don't get time to
  • 00:23:14
    even get caught up on the basics because
  • 00:23:16
    growth is just
  • 00:23:18
    so rapid uh I I badly underappreciated
  • 00:23:24
    the amount of work it took to be able to
  • 00:23:26
    like keep charging at the next big step
  • 00:23:30
    forward while still not neglecting
  • 00:23:32
    everything else that we have to do um
  • 00:23:36
    there's a big piece of internal
  • 00:23:37
    communication around that and how you
  • 00:23:39
    sort of share information how you build
  • 00:23:41
    the structures to like get the company
  • 00:23:44
    to get good at thinking about 10x more
  • 00:23:47
    stuff or bigger stuff or more complex
  • 00:23:49
    stuff every eight months 12 months
  • 00:23:52
    whatever
  • 00:23:53
    um there's a big piece in there about
  • 00:23:56
    planning about how you
  • 00:23:58
    balance what has to happen today and
  • 00:24:01
    next month with the the long lead pieces
  • 00:24:03
    you need in place for to be able to
  • 00:24:06
    execute in a year or two years with you
  • 00:24:08
    know build out of compute or even you
  • 00:24:11
    know things that are more normal like
  • 00:24:13
    planning ahead enough for like office
  • 00:24:15
    space in a city like San Francisco is
  • 00:24:18
    surprisingly hard at this kind of rate
  • 00:24:21
    so I I think
  • 00:24:24
    the there was either no playbook for
  • 00:24:27
    this or someone had a secret Playbook
  • 00:24:28
    they didn't give me um or all of us
  • 00:24:31
    like we've all just sort of fumbled our
  • 00:24:33
    way through this but there's been a lot
  • 00:24:34
    to learn on the
  • 00:24:36
    Fly God I don't know if I'm going to get
  • 00:24:38
    into trouble for this but screw it I'll ask
  • 00:24:40
    it anyway and if so I'll deal with it
  • 00:24:42
    later um Keith Rabois uh did a talk and
  • 00:24:46
    he said about you should hire incredibly
  • 00:24:48
    young people under 30 and that is what
  • 00:24:51
    Peter Thiel taught him and that is the
  • 00:24:52
    secret to building great companies and
  • 00:24:55
    it got a little bit of resistance uh to
  • 00:24:57
    say the least
  • 00:24:58
    um I'm intrigued when you think about
  • 00:25:01
    this book that you write in retirement
  • 00:25:03
    and that advice you build great
  • 00:25:05
    companies by hiring incredibly young
  • 00:25:08
    hungry ambitious people who are under 30
  • 00:25:11
    and that is the mechanism how do you
  • 00:25:13
    feel I think I was 30 when we started
  • 00:25:14
    OpenAI or at least thereabouts so
  • 00:25:17
    you know I wasn't that
  • 00:25:24
    young seem to work okay so far
  • 00:25:28
    worth a try uh uh going back uh is the
  • 00:25:33
    question like the question is how do you
  • 00:25:35
    think about hiring incredibly young
  • 00:25:36
    under 30s as this like Trojan Horse of
  • 00:25:40
    Youth
  • 00:25:42
    energy ambition but less experience or
  • 00:25:45
    the much more experienced I know how to
  • 00:25:48
    do this I've done it
  • 00:25:49
    before um I mean the obvious answer is
  • 00:25:53
    you can succeed with hiring both classes
  • 00:25:56
    of people like we have
  • 00:25:59
    I was just like right before this I was
  • 00:26:01
    sending someone a slack message about
  • 00:26:04
    there was a guy that we recently hired
  • 00:26:05
    on one of the teams I don't know how old
  • 00:26:07
    he is but low 20s probably doing just
  • 00:26:09
    insanely amazing work and I was like can
  • 00:26:11
    we find a lot more people like this this
  • 00:26:13
    is just like off the charts brilliant I
  • 00:26:14
    don't get how these people can be so
  • 00:26:16
    good so young but it clearly happens and
  • 00:26:18
    when you can find those people uh they
  • 00:26:21
    bring amazing fresh perspective energy
  • 00:26:24
    whatever else on the other hand uh when
  • 00:26:27
    you're like
  • 00:26:30
    designing some of the most complex and
  • 00:26:33
    massively
  • 00:26:34
    expensive computer systems that Humanity
  • 00:26:36
    has ever built actually like pieces of
  • 00:26:39
    infrastructure of any sort then I would
  • 00:26:42
    not be comfortable taking a bet on
  • 00:26:44
    someone who is just sort of like
  • 00:26:46
    starting out uh where the stakes are
  • 00:26:48
    higher so you
  • 00:26:51
    want you want both uh and I think what
  • 00:26:55
    you really want is just like an
  • 00:26:58
    extremely high talent bar of people at any
  • 00:27:00
    age
  • 00:27:02
    and a strategy that said I'm only going
  • 00:27:05
    to
  • 00:27:06
    hire younger people or I'm only going to
  • 00:27:08
    hire older people I believe would be
  • 00:27:11
    misguided uh I I think it's like somehow
  • 00:27:15
    just not it's not quite the framing that
  • 00:27:17
    resonates with me but the part of it
  • 00:27:19
    that does is and one of the things that
  • 00:27:22
    I feel most
  • 00:27:24
    grateful about Y Combinator for is
  • 00:27:28
    inexperience does not inherently mean
  • 00:27:31
    not valuable and there are
  • 00:27:35
    incredibly high potential people at the
  • 00:27:37
    very beginning of their career that can
  • 00:27:39
    create huge amounts of value and uh we
  • 00:27:44
    as a society should bet on those people
  • 00:27:46
    and it's a great
  • 00:27:47
    thing I am going to return to some
  • 00:27:49
    semblance of the schedule as I'm I'm
  • 00:27:51
    really going to get told off but
  • 00:27:52
    anthropics models have been sometimes
  • 00:27:55
    cited as being better for coding
  • 00:27:58
    why is that do you think that's fair
  • 00:28:01
    and how should developers think about
  • 00:28:03
    when to pick OpenAI versus a different
  • 00:28:07
    provider yeah they have a model that is
  • 00:28:09
    great at coding for sure uh and it's
  • 00:28:11
    impressive work
  • 00:28:14
    I I think developers use multiple models
  • 00:28:18
    most of the time and I'm not sure how
  • 00:28:21
    that's all going to evolve as we head
  • 00:28:23
    towards this more agentified world um
  • 00:28:27
    but
  • 00:28:29
    I sort of think there's just going to be
  • 00:28:30
    a lot of AI everywhere and something
  • 00:28:33
    about the way that we currently talk
  • 00:28:35
    about it or think about it
  • 00:28:39
    feels wrong uh may maybe if I had to
  • 00:28:43
    describe it we will shift from talking
  • 00:28:44
    about models to talking about systems
  • 00:28:47
    but that'll take a
  • 00:28:48
    while when we think about scaling models
  • 00:28:52
    how many more model iterations do you
  • 00:28:54
    think scaling laws will hold true for it
  • 00:28:57
    was the kind of
  • 00:28:58
    common refrain that it won't last for
  • 00:29:00
    long and it seems to be proving to last
  • 00:29:03
    longer than people
  • 00:29:05
    think
  • 00:29:08
    uh without going into detail about how
  • 00:29:10
    it's going to happen the the the core of
  • 00:29:13
    the question that you're getting at is
  • 00:29:16
    is the trajectory of model capability
  • 00:29:19
    Improvement going to keep going like it
  • 00:29:21
    has been going and the answer that I
  • 00:29:25
    believe is yes for a long time
  • 00:29:28
    have you ever doubted that totally why
  • 00:29:33
    uh we have had well we've had
  • 00:29:35
    like Behavior we don't understand we've
  • 00:29:38
    had failed training runs we've had all sorts
  • 00:29:39
    of things we've had to figure out new
  • 00:29:41
    paradigms when we kind of get to towards
  • 00:29:43
    the end of one and have to figure out
  • 00:29:45
    the next what was the hardest one to
  • 00:29:49
    navigate um could be a new paradigm
  • 00:29:52
    could be a training
  • 00:29:53
    run which one do you remember most fondly
  • 00:29:56
    and how did you get through that
  • 00:30:00
    well when we started working on GPT-4
  • 00:30:02
    there were some issues that caused us a
  • 00:30:04
    lot of consternation that we really
  • 00:30:05
    didn't know how to solve we figured it
  • 00:30:08
    out but there was there was definitely a
  • 00:30:10
    time period where we just didn't know
  • 00:30:11
    how we were going to do that model um
  • 00:30:14
    and then in this shift to o1 and the
  • 00:30:18
    idea of reasoning models uh that was
  • 00:30:21
    something we had been excited about for
  • 00:30:23
    a long time but it was like a long and
  • 00:30:26
    Winding Road of research to get
  • 00:30:29
    here is it difficult to maintain morale
  • 00:30:33
    when it is long and winding roads when
  • 00:30:34
    training runs can fail how do you
  • 00:30:37
    maintain morale in those
  • 00:30:40
    times you know we have a lot of people
  • 00:30:43
    here who are excited to build AGI and
  • 00:30:45
    that that's a very motivating thing and
  • 00:30:48
    no one expects that to be easy and a
  • 00:30:50
    straight line to success but uh it does
  • 00:30:53
    feel like
  • 00:30:59
    there's a famous quote from history it's
  • 00:31:01
    something like I'm going to get this
  • 00:31:03
    totally wrong but the spirit of it is
  • 00:31:06
    like I never pray and ask for God to be
  • 00:31:09
    on my side you know I pray and hope to
  • 00:31:11
    be on God's side and there is something
  • 00:31:14
    about betting on deep learning that
  • 00:31:16
    feels like being on the side of the
  • 00:31:17
    angels
  • 00:31:18
    and you kind of just it eventually seems
  • 00:31:21
    to work out even though you hit some big
  • 00:31:23
    stumbling blocks along the way and so
  • 00:31:25
    like a deep belief in that has been good
  • 00:31:27
    for us
  • 00:31:30
    can I ask you a really weird one I had a
  • 00:31:32
    great quote the other day and it was the
  • 00:31:34
    heaviest things in life are not iron or
  • 00:31:36
    gold but unmade
  • 00:31:38
    decisions what unmade decision weighs on
  • 00:31:41
    your mind
  • 00:31:44
    most it's different every day like I
  • 00:31:46
    don't there's not one big
  • 00:31:50
    one I mean I guess there are some big
  • 00:31:52
    ones like about are we gonna bet on this
  • 00:31:56
    next product or that next product uh or
  • 00:31:59
    are we going to like build our next
  • 00:32:01
    computer this way or that way they are
  • 00:32:03
    kind of like really high stakes oneway
  • 00:32:06
    doorish that like everybody else I
  • 00:32:07
    probably delay for too long but but
  • 00:32:11
    mostly the hard part is every day it
  • 00:32:14
    feels like there are a few
  • 00:32:17
    new 51/49
  • 00:32:19
    decisions that come up
  • 00:32:22
    that kind of make it to me because they
  • 00:32:25
    were 51/49 in the first place and that I
  • 00:32:28
    don't feel particularly likely that I
  • 00:32:30
    can do better than somebody else would
  • 00:32:33
    have done but I kind of have to make them
  • 00:32:34
    anyway and it's it's the volume of them
  • 00:32:37
    it is not
  • 00:32:40
    anyone is there a commonality in the
  • 00:32:42
    person that you call when it's
  • 00:32:45
    51/49 no um
  • 00:32:50
    I I think the wrong way to do that is to
  • 00:32:52
    have one person you lean on for
  • 00:32:54
    everything and the right way to at least
  • 00:32:56
    for me the right way to do it is to have
  • 00:32:57
    like like 15 or 20 people Each of which
  • 00:33:00
    you have come to believe has good
  • 00:33:02
    instincts and good context in a
  • 00:33:04
    particular way and you get to like phone
  • 00:33:07
    a friend to the best expert rather than
  • 00:33:09
    try to have just one across the
  • 00:33:11
    board in terms of hard decisions I do
  • 00:33:13
    want to touch ON Semiconductor Supply
  • 00:33:15
    chains how worried are you about
  • 00:33:18
    semiconductor Supply chains and
  • 00:33:20
    international tensions
  • 00:33:23
    today I don't know how to quantify that
  • 00:33:26
    worried of course is the answer
  • 00:33:29
    uh it's probably not it's well I guess I
  • 00:33:30
    could quantify it this way it is not my
  • 00:33:32
    top worry but it is in like the top 10%
  • 00:33:35
    of all
  • 00:33:38
    worries am I allowed to ask what's your
  • 00:33:40
    top
  • 00:33:41
    worry I'm I'm in so much I've got past
  • 00:33:43
    the stage of being in trouble for this
  • 00:33:45
    one
  • 00:33:48
    um it's something
  • 00:33:53
    about the the sort of generalized
  • 00:33:56
    complexity of all we as a whole field
  • 00:33:59
    are trying to do
  • 00:34:02
    the and it feels like a I think it's all
  • 00:34:06
    going to work out fine but it feels like
  • 00:34:08
    a very complex system now this kind of
  • 00:34:12
    like works fractally at every level so
  • 00:34:13
    you can say that's also true like inside
  • 00:34:16
    of OpenAI itself uh that's also
  • 00:34:18
    true inside of any one team um but you
  • 00:34:23
    know an example of this since you were
  • 00:34:24
    just talking about semiconductors is you
  • 00:34:26
    got to balance the
  • 00:34:28
    power availability with the right
  • 00:34:30
    networking decisions with being able to
  • 00:34:31
    like get enough chips in time and
  • 00:34:33
    whatever risk there's going to be there
  • 00:34:35
    um with the ability to have the research
  • 00:34:37
    ready to intersect that so you don't
  • 00:34:39
    either like be caught totally flat
  • 00:34:41
    footed or have a system that you can't
  • 00:34:43
    utilize um with the right product that
  • 00:34:47
    is going to use that research to be able
  • 00:34:49
    to like pay the eye watering cost of
  • 00:34:51
    that system so
  • 00:34:55
    the it's supply chain makes it
  • 00:34:58
    sound too much like a pipeline but but
  • 00:35:00
    yeah the overall ecosystem complexity at
  • 00:35:03
    every level of like the fractal scale is
  • 00:35:06
    unlike anything I have seen in any
  • 00:35:08
    industry
  • 00:35:10
    before uh and some version of that is
  • 00:35:14
    probably my top
  • 00:35:16
    worry you said unlike anything we've
  • 00:35:18
    seen before a lot of people I think
  • 00:35:19
    compare this you know wave to the
  • 00:35:22
    internet bubble uh in terms of you know
  • 00:35:25
    the excitement and the exuberance and I
  • 00:35:26
    think the thing that's different is the
  • 00:35:27
    amount that people are spending Larry
  • 00:35:29
    Ellison said that it will cost a hundred
  • 00:35:31
    billion dollars to enter the foundation
  • 00:35:33
    model race as a starting point do you
  • 00:35:36
    agree with that statement and when you
  • 00:35:37
    saw that we like yeah that makes
  • 00:35:40
    sense uh no I think it will cost less
  • 00:35:42
    than that but there there's an
  • 00:35:47
    interesting there's an interesting point
  • 00:35:49
    here um which is everybody likes to
  • 00:35:54
    use previous examples of a technology
  • 00:35:57
    Revolution to talk about to put a new
  • 00:35:59
    one into more familiar context and a I
  • 00:36:04
    think that's a bad habit on the whole
  • 00:36:06
    and but I understand why people do it
  • 00:36:09
    and B I think the ones people pick
  • 00:36:12
    for analogizing AI are particularly bad
  • 00:36:16
    so the internet was obviously quite
  • 00:36:19
    different than Ai and you brought up
  • 00:36:20
    this one thing about cost and whether it
  • 00:36:22
    cost like 10 billion or 100 billion or
  • 00:36:24
    whatever to be competitive it was very
  • 00:36:26
    like one of the defining things about
  • 00:36:28
    the
  • 00:36:29
    internet Revolution was it was actually
  • 00:36:33
    really easy to get started now another
  • 00:36:36
    thing that cuts more towards the
  • 00:36:38
    internet is
  • 00:36:41
    mostly for many companies this will just
  • 00:36:43
    be like a continuation of the Internet
  • 00:36:45
    it's just like someone else makes these
  • 00:36:46
    AI models and you get to use them to
  • 00:36:49
    build all sorts of great stuff and it's
  • 00:36:51
    like a new primitive for Building
  • 00:36:53
    Technology but if you're trying to build
  • 00:36:54
    the AI itself that's pretty different
  • 00:36:57
    another example people uses electricity
  • 00:37:00
    um which I think doesn't make sense for
  • 00:37:02
    a ton of reasons the one I like the most
  • 00:37:05
    caveated by my
  • 00:37:07
    earlier comment that I don't think
  • 00:37:09
    people should be doing this or trying to
  • 00:37:11
    like use these analogies too seriously is
  • 00:37:14
    the
  • 00:37:16
    transistor it was a new discovery of
  • 00:37:18
    physics it had incredible scaling
  • 00:37:21
    properties it seeped everywhere pretty
  • 00:37:24
    quickly you know we had things like Moore's law
  • 00:37:27
    in a way that we could now imagine like
  • 00:37:29
    a bunch of laws for AI that tell us
  • 00:37:31
    something about how quickly it's going
  • 00:37:32
    to get better um and everyone kind of
  • 00:37:36
    like the whole tech industry kind of
  • 00:37:38
    benefited from it and there's a lot of
  • 00:37:41
    transistors involved in the products and
  • 00:37:43
    delivery of services that you use but
  • 00:37:45
    you don't really think
  • 00:37:46
    of them as transistor
  • 00:37:49
    companies um it's there's a very complex
  • 00:37:53
    very expensive industrial process around
  • 00:37:55
    it with a massive supply chain and
  • 00:37:57
    and you know the the incredible progress
  • 00:38:01
    based off of this very simple discovery
  • 00:38:03
    of physics led to this gigantic uplift
  • 00:38:05
    of the whole economy for a long time
  • 00:38:07
    even though most of the time you didn't
  • 00:38:09
    have to think about it and you don't say oh
  • 00:38:11
    this is a transistor product it's just
  • 00:38:13
    like all right this thing can like
  • 00:38:14
    process information for me you don't
  • 00:38:17
    even really think about that it's just
  • 00:38:19
    expected Sam I'd love to do a quick fire
  • 00:38:22
    round with you so I'm going to say so
  • 00:38:24
    I'm going to say a short statement you
  • 00:38:25
    give me your immediate thoughts okay
  • 00:38:28
    okay so you are building today as a
  • 00:38:32
    whatever 23 24 year old with the
  • 00:38:34
    infrastructure that we have
  • 00:38:36
    today what do you choose to build if you
  • 00:38:39
    started
  • 00:38:42
    today uh some AI enabled vertical I'll
  • 00:38:46
    I'll I'll use tutors as an example but
  • 00:38:48
    like the the the best AI tutoring
  • 00:38:51
    product or the you know that I could
  • 00:38:53
    possibly imagine to teach people to
  • 00:38:54
    learn any category like that could be
  • 00:38:56
    the AI lawyer could could be the sort of
  • 00:38:58
    like AI CAD engineer whatever you
  • 00:39:02
    mentioned your book that will be coming
  • 00:39:04
    out once it's written in retirement no I
  • 00:39:07
    said I think about it I I don't know if
  • 00:39:09
    I'll actually get around to it but I
  • 00:39:10
    think it's an interesting idea if you
  • 00:39:12
    were to write a book what would you call
  • 00:39:15
    it this is so unfair sorry Sam um I
  • 00:39:20
    don't have the title really I haven't
  • 00:39:22
    thought about this book other than like
  • 00:39:24
    I wish something existed because I think
  • 00:39:25
    it could unlock a lot of human potential
  • 00:39:28
    so maybe I think it would be something
  • 00:39:29
    about human
  • 00:39:30
    potential what in AI does no one focus
  • 00:39:33
    on that everyone should spend more time
  • 00:39:37
    on what is not hot should be
  • 00:39:44
    hot I think there's people focused on
  • 00:39:47
    everything uh what I would love to
  • 00:39:50
    see there's a lot of different ways to
  • 00:39:52
    solve this problem but something about
  • 00:39:54
    an AI that can understand your whole
  • 00:39:55
    life it doesn't have to literally be the
  • 00:39:58
    infinite context but some way that you
  • 00:40:00
    can have an AI agent that like knows
  • 00:40:02
    everything there is to know about you
  • 00:40:03
    has access to all of your data things
  • 00:40:06
    like that what was one thing that
  • 00:40:08
    surprised you in the last month
  • 00:40:12
    Sam it's a research result I can't talk
  • 00:40:15
    about but it is breathtakingly
  • 00:40:18
    good which competitor do you most
  • 00:40:21
    respect why
  • 00:40:24
    then uh I mean I kind of respect
  • 00:40:27
    everybody in the space right now I think
  • 00:40:29
    there's like really amazing work coming
  • 00:40:31
    from the whole field
  • 00:40:37
    there are incredibly talented incredibly
  • 00:40:39
    hardworking people I don't mean this to
  • 00:40:41
    be a question Dodge it's like I can
  • 00:40:42
    point to super talented people doing
  • 00:40:45
    super great work everywhere in the
  • 00:40:49
    field is that
  • 00:40:53
    one not
  • 00:40:56
    really uh tell me what's your favorite
  • 00:40:58
    OpenAI
  • 00:41:00
    API I think the new real time API is
  • 00:41:03
    pretty awesome but we have a lot of I
  • 00:41:05
    mean we have a we have a big API
  • 00:41:07
    business at this point so there's a lot
  • 00:41:08
    of good stuff in there what are the
  • 00:41:10
    biggest constraints on llama from it
  • 00:41:12
    being open
  • 00:41:16
    source I don't know it seems like a
  • 00:41:18
    better question for
  • 00:41:21
    them I'm doing well with this quick fire uh
  • 00:41:24
    who do you most respect in AI today Sam
  • 00:41:31
    like ever or doing current work doing
  • 00:41:33
    current
  • 00:41:34
    work like a person
  • 00:41:38
    yeah
  • 00:41:45
    um well I feel like I I feel like I
  • 00:41:47
    can't just go name a bunch of OpenAI
  • 00:41:49
    people because that would
  • 00:41:51
    be unfair uh and it would sound like I'm
  • 00:41:54
    just biased so let me try to think of a
  • 00:41:56
    non-OpenAI
  • 00:41:58
    person uh let me give a shout out to the
  • 00:42:00
    cursor team I mean there's a lot of
  • 00:42:02
    people doing incredible work in AI but I
  • 00:42:04
    think to really do what they've
  • 00:42:07
    done and built I thought about like a
  • 00:42:09
    bunch of researchers I could name um
  • 00:42:12
    but in terms of using AI to deliver a
  • 00:42:16
    really magical uh experience that
  • 00:42:20
    creates a lot of value in a way that
  • 00:42:22
    people just didn't quite manage to put
  • 00:42:23
    the pieces together I think that's it's
  • 00:42:25
    really quite remarkable and I
  • 00:42:27
    specifically left anybody at OpenAI out
  • 00:42:29
    as I was thinking through it otherwise
  • 00:42:30
    it would have been a long list of
  • 00:42:32
    OpenAI people first how do you think about
  • 00:42:34
    the trade-off between latency and
  • 00:42:40
    accuracy you need a dial to change
  • 00:42:43
    between them like in the same way
  • 00:42:46
    that you want to do a rapid fire thing
  • 00:42:48
    now and I'm not even going that quick
  • 00:42:50
    but I'm you know trying not to think for
  • 00:42:51
    multiple minutes uh in this context
  • 00:42:54
    latency is what you want if you but if
  • 00:42:57
    you were like hey Sam I want you to go
  • 00:42:59
    like make a new important Discovery in
  • 00:43:01
    physics you'd probably be happy to wait
  • 00:43:03
    a couple of years and the answer is it
  • 00:43:07
    should be user
  • 00:43:10
    controllable can I ask when you think
  • 00:43:12
    about insecurity and Leadership I think
  • 00:43:13
    it's something that everyone has uh it's
  • 00:43:15
    something we don't often talk about um
  • 00:43:17
    when you think about maybe an insecurity
  • 00:43:19
    in leadership an area of your leadership
  • 00:43:21
    that you'd like to improve where would
  • 00:43:23
    you most like to improve as a leader and
  • 00:43:25
    a CEO today
  • 00:43:29
    [Music]
  • 00:43:31
    um it's a long list I'm trying to scan
  • 00:43:33
    for the top one
  • 00:43:38
    here the thing I'm struggling with most
  • 00:43:41
    this week is I feel more uncertain than
  • 00:43:45
    I have in the past
  • 00:43:47
    about
  • 00:43:49
    uh what are like the details of what our
  • 00:43:52
    product strategy should be um I think
  • 00:43:55
    that product is a weakness of mine in
  • 00:43:58
    general um and it's something that right
  • 00:44:02
    now the company like needs stronger and
  • 00:44:05
    clearer vision on from me like we have a
  • 00:44:07
    wonderful head of product and a great
  • 00:44:08
    product team but it's an area that I
  • 00:44:10
    wish I were a lot stronger on and
  • 00:44:13
    acutely feeling the miss right now
  • 00:44:15
    you hired Kevin um I've known Kevin for
  • 00:44:17
    years he's exceptional Kevin's amazing
  • 00:44:21
    what makes Kevin worldclass as a product
  • 00:44:23
    leader to
  • 00:44:24
    you um this was the first word that came
  • 00:44:28
    to mind huh in terms of
  • 00:44:32
    focus focus what we're going to say no
  • 00:44:34
    to like really trying to speak on behalf
  • 00:44:37
    of the user about why we would do
  • 00:44:38
    something or not do something like
  • 00:44:40
    really trying to be rigorous about not
  • 00:44:43
    not having like Fantastical
  • 00:44:47
    dreams Sam you've done a lot of
  • 00:44:49
    interviews I want to finish with one
  • 00:44:51
    which is what question are you not often
  • 00:44:53
    or never asked that you often leave an
  • 00:44:56
    interview and think God I should have
  • 00:44:58
    been asked that or I wish I was asked
  • 00:45:05
    that I mean this is such a meta answer
  • 00:45:07
    but I've been asked that question so
  • 00:45:08
    many times that I've like used up all
  • 00:45:10
    time that is that is so
  • 00:45:12
    massive okay I'll change it then we have
  • 00:45:16
a 5-year horizon for OpenAI and a
  • 00:45:20
    10-year if you have a magic wand and can
  • 00:45:23
    paint that scenario for the 5 year and
  • 00:45:25
    the 10 year can you paint that canvas
  • 00:45:27
    for me for the 5 and 10
  • 00:45:35
    year I mean I can easily do it for like
  • 00:45:37
    the next two years but if we are right
  • 00:45:41
    and we start to make systems that
  • 00:45:45
    are so good at you know for example
  • 00:45:48
    helping us with scientific
  • 00:45:50
    advancement actually I I will just say
  • 00:45:52
    it I think in five years it looks like
  • 00:45:54
    we have
  • 00:45:58
    an unbelievably rapid rate of
  • 00:46:00
    improvement in technology itself you
  • 00:46:03
    know people are like man the AGI moment
  • 00:46:06
    came and went whatever the like the the
  • 00:46:09
    pace of progress is like totally crazy
  • 00:46:12
    and we're discovering all this new stuff
  • 00:46:14
    both about AI research and also about
  • 00:46:16
    all the rest of
  • 00:46:17
science and that
  • 00:46:20
    feels like if we could sit here now and
  • 00:46:22
    look at it it would seem like it should
  • 00:46:25
    be very crazy and then the second part
  • 00:46:27
of the prediction is that society itself
  • 00:46:31
    actually changes surprisingly
  • 00:46:34
    little an example of this would be that
  • 00:46:37
    I think if you asked people five years
  • 00:46:39
    ago if computers were going to pass the
  • 00:46:41
    Turing test they would say no and then
  • 00:46:43
    if you said well what if an oracle told
  • 00:46:44
    you it was going to they would say well
  • 00:46:46
    it would somehow be like just this crazy
  • 00:46:49
    breathtaking change for society and we
  • 00:46:52
did kind of satisfy the Turing test
  • 00:46:54
    roughly speaking of course and so
  • 00:46:56
society didn't change that much it just
  • 00:46:58
    sort of went whooshing
  • 00:47:00
    by and that's kind of uh example of what
  • 00:47:06
    I expect to keep happening which is
  • 00:47:08
    progress scientific progress keeps
  • 00:47:12
    going outperforming all expectations and
  • 00:47:15
society in a way that I think is good
  • 00:47:17
    and healthy um changes not that
  • 00:47:20
    much at least not that much in the long
  • 00:47:22
    term it will hugely change five or 10
  • 00:47:26
    years you've been amazing I had this
  • 00:47:28
    list of questions I I didn't really
  • 00:47:29
    stick to them uh thank you for putting
  • 00:47:32
up with my meandering around different
  • 00:47:34
    questions thank you everyone for coming
  • 00:47:36
    I'm so thrilled that we were able to do
  • 00:47:37
    this today and Sam thank you for making
  • 00:47:39
it happen man thank you
Tag
  • AI
  • OpenAI
  • Sam Altman
  • no-code
  • agent AI
  • reasoning
  • innovation
  • large models
  • open source
  • semiconductors