OpenAI DevDay 2024 | Virtual AMA with Sam Altman, moderated by Harry Stebbings, 20VC
Summary
TLDR: In an in-depth, wide-ranging interview between OpenAI's Sam Altman and Harry Stebbings, the key topics discussed include the future of AI models, the development of no-code tools for non-technical founders, and OpenAI's strategy for staying ahead in a competitive industry. Altman emphasizes the importance of reasoning models and how they can unlock new potential in science and technology. He also shares his views on using AI for product innovation and on how OpenAI handles challenges such as global concerns over semiconductor supply and the development of larger, better models. The interview highlights OpenAI's optimistic vision for the future of AI while keeping the focus on improving system capability and innovation.
Key Takeaways
- 👔 Sam Altman has adapted to his busy life.
- 🧠 Reasoning models are OpenAI's main focus going forward.
- 🔧 No-code tools will be developed for non-technical founders.
- 🔓 OpenAI sees an important place for open-source models in the AI ecosystem.
- 🚀 OpenAI is confident its models will keep getting better.
- 🤖 Future AI agents will be more capable and flexible.
- ⚖️ AI model pricing may come to be based on compute usage.
- 🌎 OpenAI is aware of global semiconductor supply concerns.
- 💡 AI innovation can unlock potential in education and healthcare.
- 📈 Technology and society evolve at different rates.
Timeline
- 00:00:00 - 00:05:00
Harry Stebbings of 20VC opens OpenAI Dev Day with an interview with Sam Altman, starting with a question rarely put to Sam: how he is doing. Sam says that although life is very busy, it now feels normal to him. Sam discusses OpenAI's future, emphasizing better models with a particular focus on reasoning models that could open new frontiers in science and coding.
- 00:05:00 - 00:10:00
Sam Altman talks about providing no-code AI building tools for non-technical founders. He explains that while such tools are being developed, they will initially help coders become more productive. Sam also explains that improvements to OpenAI's models will make businesses that rely on patching current model shortcomings less relevant in the future.
- 00:10:00 - 00:15:00
The discussion turns to OpenAI's potential to "steamroll" parts of the market, and Sam acknowledges that better AI models will make building great products much easier. He notes a shift among startups from betting against to betting on rapid model improvement. Sam is optimistic about the market value AI will create.
- 00:15:00 - 00:20:00
Sam and Harry discuss the role of open source in AI. Sam acknowledges the importance of open source and its place in the AI ecosystem. He also shares his view of AI agents, saying an agent's job is to carry out long-duration tasks with minimal supervision, and gives examples of how they could be used in the real world.
- 00:20:00 - 00:25:00
Sam discusses how AI can increase economic value by making complicated things simple. He stresses that the real power of AI lies not in headline figures for value created but in the economic accessibility it will expand. AI's role in industries such as healthcare and education is also expected to be enormous.
- 00:25:00 - 00:30:00
OpenAI is focused on improving its models' reasoning capabilities, along with multimodal work. Sam discusses the role of reasoning in AI development and expresses confidence in rapid progress. Harry raises the common view of models as depreciating assets, but Sam is confident in their long-term value and argues that revenue justifies the investment in developing them.
- 00:30:00 - 00:35:00
Sam shares OpenAI's experience in achieving better reasoning capability through experimentation and learning from past mistakes while developing models. He also emphasizes the importance of an organization's ability to execute something new and unproven, something he is proud of in OpenAI's culture.
- 00:35:00 - 00:40:00
Sam talks about AI's potential to elevate human talent around the world, acknowledging that many talented individuals never reach their potential because of various barriers. He reflects on how his leadership style has changed over the last 10 years, especially while building a company through hypergrowth, and on focusing on the next big step while keeping up with day-to-day work.
- 00:40:00 - 00:47:44
Sam answers questions about the challenges and uncertainty of decision-making in a fast-growing AI organization with complex systems. He emphasizes seeking advice from diverse sources and the importance of navigating semiconductor supply amid international tensions, along with its potential impact on the industry. Sam also notes that AI differs from previous technological revolutions.
Video Q&A
How does Sam Altman stay fresh with such a packed schedule?
Sam Altman says he has adapted to the busy lifestyle and considers it the new normal.
What is OpenAI's focus for future development?
OpenAI is focused on reasoning models that can contribute to scientific progress and write complex code.
Will OpenAI provide no-code tools for non-technical founders?
Yes, OpenAI plans to provide high-quality no-code tools so that non-technical people can build and scale AI applications.
What is OpenAI's view of open source in AI?
OpenAI sees an important place for open-source models in the AI ecosystem, and customers will pick the delivery mechanism that suits them.
What differentiates OpenAI's models from competitors?
OpenAI treats reasoning capability as its key differentiator, supporting the next major leaps in the value created.
How does OpenAI view the role of future AI agents?
AI agents are expected to carry out long-duration tasks with minimal supervision and to collaborate like smart senior coworkers.
Does OpenAI believe models will keep growing and improving?
Yes, OpenAI believes the trajectory of improving model capability will continue for a long time.
How does OpenAI handle semiconductor shortage challenges?
Despite concerns about the semiconductor supply chain, OpenAI treats it as one of many ecosystem complexities that must be managed.
What is OpenAI's approach to distributing and pricing AI models?
One view is that pricing will be based on the amount of compute applied to a problem rather than on per-seat licenses.
Why does OpenAI prioritize reasoning in AI?
Reasoning is the key to advances across many applications and to the value delivered by AI models.
- 00:00:13hello everyone welcome to open AI Dev
- 00:00:15day I am Harry stebbings of 20 VC and I
- 00:00:19am very very excited to interview Sam
- 00:00:22ultman welcome Sam Sam thank you for
- 00:00:26letting me do this today with you thanks
- 00:00:28for doing now we have many many
- 00:00:31questions from the audience and so I
- 00:00:33wanted to start with one which I don't
- 00:00:35think people ask you actually very often
- 00:00:36in interviews which is firstly like how
- 00:00:38are you you are one of the busiest
- 00:00:40people on the planet you also always
- 00:00:42look remarkably fresh how are
- 00:00:46you fine I
- 00:00:50think yeah yeah I think kind of get used
- 00:00:52to anything and it has been like a sort
- 00:00:54of crazy busy last couple of years but
- 00:00:56now it just feels like normal life and I
- 00:00:58forget that it used to be otherwise okay
- 00:01:00listen I want to start by kind of diving
- 00:01:02in we had a lot of fantastic questions
- 00:01:04from the audience across a number of
- 00:01:06different kind of areas and I want to
- 00:01:08start with actually the question of when
- 00:01:10we look forward is the future of open AI
- 00:01:13more models like 01 or is it more larger
- 00:01:18models that we would maybe have expected
- 00:01:20of old how do we think about
- 00:01:23that I mean we want to make things
- 00:01:26better across the board but this
- 00:01:27direction of reasoning models is a
- 00:01:30particular importance to us I think
- 00:01:32reasoning will unlock I hope reasoning
- 00:01:34will unlock a lot of the things that
- 00:01:35we've been waiting years to do and the
- 00:01:39the ability for models like this to for
- 00:01:41example contribute to new science uh
- 00:01:43help write a lot more very difficult
- 00:01:46code uh that I think can drive things
- 00:01:48forward to a significant degree so you
- 00:01:50should expect rapid Improvement in the O
- 00:01:53Series of models and it's of great
- 00:01:55strategic importance to us so another
- 00:01:59one that I thought was really important
- 00:02:00for us to touch on was when we look
- 00:02:02forward to open ai's future plans how do
- 00:02:05you think about developing no code tools
- 00:02:07for non-technical Founders to build and
- 00:02:10scale AI apps how do you think about
- 00:02:12that it'll get there for sure uh I I
- 00:02:15think the the first step will be tools
- 00:02:18that make people who know how to code
- 00:02:20well more productive but eventually I
- 00:02:22think we can offer really high quality
- 00:02:24no code tools and already there's some
- 00:02:26out there that Mak sense but you can't
- 00:02:29you can't sort of in a no code way say I
- 00:02:30have like a full startup I want to build
- 00:02:33um that's going to take a while so when
- 00:02:36we look at where we are in the stat
- 00:02:38today open AI sits in a certain place
- 00:02:41how far up the stack is open AI going to
- 00:02:43go I think it's a brilliant question but
- 00:02:45if you're spending a lot of time tuning
- 00:02:47your rag system is this a waste of time
- 00:02:49because open AI ultimately thinks
- 00:02:51they'll own this part of the application
- 00:02:53layer or is it not and how do you answer
- 00:02:55a Founder who has that question
- 00:03:00the the the general answer we try to
- 00:03:02give is and you have to assume that
- 00:03:04we're biased here and talking our book
- 00:03:06and may be wrong but the general answer
- 00:03:07we try to give
- 00:03:10is we are going to try our hardest and
- 00:03:13believe we will succeed at making our
- 00:03:15models better and better and better and
- 00:03:18if you are building a business that
- 00:03:20patches some current small
- 00:03:23shortcomings if we do our job right uh
- 00:03:26then that will not be as important in
- 00:03:29the future
- 00:03:30if on the other hand you build a company
- 00:03:33that benefits from the model getting
- 00:03:35better and better if you know an oracle
- 00:03:38told you today that
- 00:03:4004 was going to be just absolutely
- 00:03:42incredible and do all of these things
- 00:03:45that right now feel impossible and you
- 00:03:47were happy about that then you know
- 00:03:50maybe we're wrong but at least that's
- 00:03:52what we're going for and if instead you
- 00:03:54say okay there's this area where there
- 00:03:57are many but you pick one of the many
- 00:03:59areas where oh preview underperforms and
- 00:04:01say I'm going to patch this and just
- 00:04:02barely get it to work then you're sort
- 00:04:05of assuming that the next turn of the
- 00:04:07model crank won't be as good as we think
- 00:04:08it will be and that is the general
- 00:04:12philosophical message we try to get out
- 00:04:14to startups like we we believe that we
- 00:04:17are on a pretty a quite steep trajectory
- 00:04:20of improvement and that the current
- 00:04:22shortcomings of the models today um will
- 00:04:26just be taken care of by Future
- 00:04:28generations and
- 00:04:30you know I encourage people to be
- 00:04:31aligned with that so we did an interview
- 00:04:34before with Brad and sorry it's not
- 00:04:37quite on schedule but I think the show
- 00:04:38has always been successful when we kind
- 00:04:40of go a little bit off schedule there
- 00:04:42was this brilliant yeah sorry for that
- 00:04:44uh but there was this brilliant kind of
- 00:04:45meme that came out of it and I felt a
- 00:04:47little bit guilty but you you said
- 00:04:49wearing this 20 VC jump which is
- 00:04:50incredibly proud moment for me uh for
- 00:04:53certain segments like the one you
- 00:04:54mentioned there there would be the
- 00:04:56potential to steamroll if you're
- 00:04:58thinking as a founder stay building
- 00:05:00where is open AI going to potentially
- 00:05:03come and steamroll versus where they're
- 00:05:04not also for me as an investor trying to
- 00:05:07invest in opportunities that aren't
- 00:05:08going to get damaged how should Founders
- 00:05:12and me as an investor think about
- 00:05:14that there will be many trillions of
- 00:05:16dollars of market cap that gets created
- 00:05:19new market cap that gets created by
- 00:05:22using AI to build products and services
- 00:05:24that were either impossible or quite
- 00:05:26impractical before and the
- 00:05:31there's this one set of areas where
- 00:05:33we're going to try
- 00:05:35to make it relevant which is you know we
- 00:05:38just want the models to be really really
- 00:05:40good such that you don't have to like
- 00:05:42fight so hard to get them to do what you
- 00:05:44want to do but all of this other stuff
- 00:05:47uh which is building these incredible
- 00:05:49products and services on top of this new
- 00:05:52technology we think that just gets
- 00:05:54better and better um one of the
- 00:05:57surprises to me early on was and this is
- 00:06:00no longer the case but in like the GPT
- 00:06:033.5 days it felt like 95% of startups
- 00:06:06something like that wanted to bet
- 00:06:09against the models getting way better
- 00:06:11and so and they were doing these things
- 00:06:13where we could already see gp4 coming
- 00:06:15and we're like man it's going to be so
- 00:06:17good it's not going to have these
- 00:06:18problems if you're building a tool just
- 00:06:21to get around this one shortcoming of
- 00:06:23the model that's going to become less
- 00:06:25and less relevant and we forget how bad
- 00:06:29the model were a couple of years ago it
- 00:06:31hasn't been that long on the calendar
- 00:06:33but there were there were just a lot of
- 00:06:35things and so it seemed like these good
- 00:06:36areas to build a thing uh to like to
- 00:06:41plug a hole rather than to build
- 00:06:43something to go deliver like the great
- 00:06:46AI tutor or the great AI medical adviser
- 00:06:48or whatever
- 00:06:50and and so I felt like 95% of people
- 00:06:53that were were like betting against the
- 00:06:55models getting better 5% of the people
- 00:06:57were betting for the models getting
- 00:06:58better I I think that's now reversed I
- 00:07:01think people have
- 00:07:02like internalized the rate of
- 00:07:04improvement and have heard us on what we
- 00:07:08intend to do
- 00:07:11um so it's it no longer seems to be such
- 00:07:14an issue but it was something we used to
- 00:07:16fret about a lot because we kind of we
- 00:07:18saw it was going to happen to all of
- 00:07:20these very hardworking people you you
- 00:07:21said about the trillions of dollars of
- 00:07:23value to be created there and then I
- 00:07:24promise we will return to these
- 00:07:25brilliant questions I'm sure you saw I'm
- 00:07:28not sure if you saw but Massa sit on
- 00:07:30stage and say we will have I'm not going
- 00:07:31to do an accent cuz my accents are
- 00:07:33terrible um but there will be9 trillion
- 00:07:36dollar of value created every single
- 00:07:39year which will offset the9 trillion
- 00:07:42capex that he thought would be needed
- 00:07:45I'm just intrigued how did you think
- 00:07:47about that when you saw that how do you
- 00:07:49reflect on
- 00:07:51that uh I can't put it down to like any
- 00:07:57I think like if we can get it right with
- 00:07:58an orders of to that's that's good
- 00:08:00enough for now there's clearly going to
- 00:08:01be a lot of capex spent and clearly a
- 00:08:03lot of value created this happens with
- 00:08:05every other Mega technological
- 00:08:07revolution of which this is clearly one
- 00:08:10um
- 00:08:12but uh you know like next year will be a
- 00:08:15big push for us
- 00:08:17into these next Generation systems you
- 00:08:20talked about when there could be like a
- 00:08:22no code software agent I don't know how
- 00:08:24long that's going to take but if we use
- 00:08:26that as an example and imagine forward
- 00:08:28to towards it think about what think
- 00:08:31about how much economic value gets
- 00:08:33unlocked for the world if anybody can
- 00:08:35just describe like a whole company's
- 00:08:38worth of software that they want this is
- 00:08:39a ways away obviously but when we get
- 00:08:41there and have it happen um think about
- 00:08:45how difficult and how expensive that is
- 00:08:46now think about how much value it
- 00:08:47creates if you keep the same amount of
- 00:08:49value but make it wildly more accessible
- 00:08:51and less expensive that that's really
- 00:08:53powerful and I think we'll see many
- 00:08:55other examples like that we I mentioned
- 00:08:58earlier like healthcare and education
- 00:09:00but those are two that are both like
- 00:09:03trillions of dollars of value to the
- 00:09:04world to get right if you and if AI can
- 00:09:07really really truly enable this to
- 00:09:09happen in a different way than it has
- 00:09:11before
- 00:09:13I I don't think big numbers are the
- 00:09:15point and they also the debate about
- 00:09:17whether it's n trillion or 1 trillion or
- 00:09:19whatever like you know I don't smarter
- 00:09:23people than me it takes to figure that
- 00:09:24out
- 00:09:26but but the value creation does seem
- 00:09:28just unbelievable here we're going to
- 00:09:32get to agents in terms of kind of how
- 00:09:33that value is delivered in terms of like
- 00:09:35the delivery mechan mechanism for which
- 00:09:37it's valued open source is an incredibly
- 00:09:39prominent method through which it could
- 00:09:41be how do you think about the role of
- 00:09:42Open Source in the future of AI and how
- 00:09:45does internal discussions look like for
- 00:09:48you when the question comes should we
- 00:09:50open- Source any models or some models
- 00:09:54there there's clearly a really important
- 00:09:55place in the ecosystem for open source
- 00:09:57models there's also really good open
- 00:09:59source models that now exist
- 00:10:02um I think there's also a place for like
- 00:10:06nicely offered well integrated services
- 00:10:08and apis and you know I think it's I
- 00:10:12think it makes sense that all of the
- 00:10:14stuff is an offer and people will pick
- 00:10:15what what works for
- 00:10:17them as a delivery mechanism we have the
- 00:10:20open source as of kind of androp to
- 00:10:22customers and a way to deliver that we
- 00:10:24can have agents I think there's a lot of
- 00:10:27uh kind of semantic confusion around
- 00:10:29what an agent is how do you think about
- 00:10:31the definition of Agents today and what
- 00:10:32is an agent to you and what is it
- 00:10:35not
- 00:10:38um this is like my offthe cuff answer
- 00:10:41it's not well considered but something
- 00:10:43that I can give a long duration task to
- 00:10:47and provide minimal
- 00:10:49supervision during execution for what do
- 00:10:52you think people think about agents that
- 00:10:55actually they get
- 00:10:58wrong well it's more like I don't I
- 00:11:01don't think any of us yet have an
- 00:11:02intuition for what this is going to be
- 00:11:05like you know we're all gesturing at
- 00:11:06something that seems
- 00:11:10important maybe I can give the following
- 00:11:13example when people talk about an AI
- 00:11:17agent acting on their behalf uh the the
- 00:11:20main example they seem to give fairly
- 00:11:23consistently is oh you can
- 00:11:26like you know you can like ask the agent
- 00:11:29to go book you a restaurant
- 00:11:32reservation um and either it can like
- 00:11:34use open table or it can like call the
- 00:11:36restaurant or or whatever and you know
- 00:11:40it's like okay sure that's that's like a
- 00:11:42mildly annoying thing to have to do and
- 00:11:45it maybe like saves you some
- 00:11:47work one of the things that I think is
- 00:11:50interesting is a world where
- 00:11:53uh you can just do things that you
- 00:11:55wouldn't or couldn't do as a human so
- 00:11:57what if what if instead of of calling uh
- 00:12:01one restaurant to make a reservation my
- 00:12:03agent would call me like 300 and figure
- 00:12:05out which one had the best food for me
- 00:12:06or some special thing available or
- 00:12:08whatever and then you would say well
- 00:12:09that's like really annoying if your
- 00:12:10agent is calling 300 restaurants but if
- 00:12:13if it's an agent answering each of those
- 00:12:15300 300 places then no problem and it
- 00:12:17can be this like massively parallel
- 00:12:19thing that a human can't do so that's
- 00:12:21like a trivial example
- 00:12:24but there are these like limitations to
- 00:12:27human bandwidth that maybe these agents
- 00:12:29won't
- 00:12:30have the category I think though is more
- 00:12:33interesting is not the one that people
- 00:12:36normally talk about where you have this
- 00:12:37thing calling restaurants for you
- 00:12:41um but something that's more like a
- 00:12:45really smart senior
- 00:12:48coworker um where you can like
- 00:12:50collaborate on a project with and the
- 00:12:52agent can go do like a two-day task or
- 00:12:55two week task really well and you know
- 00:12:58paying you at when it has questions but
- 00:13:00come back to you with like a great work
- 00:13:03product does this fundamentally change
- 00:13:05the way that SAS is priced when you
- 00:13:07think about extraction of value bluntly
- 00:13:11and normally it's on a per seat basis
- 00:13:12but now you're actually kind of
- 00:13:14replacing labor so to speak how do you
- 00:13:16think about the future of pricing with
- 00:13:19that in mind when you are such a core
- 00:13:20part of an Enterprise
- 00:13:22Workforce like how will price or what it
- 00:13:24will do for people who are no like how
- 00:13:27will it price oh
- 00:13:33um like we always have per seat
- 00:13:40pricing we look I can make I can like
- 00:13:42I'll speculate here for fun but we
- 00:13:44really have no idea this I mean Sam I'm
- 00:13:46a venture investor for a living so we
- 00:13:48speculate for fun all the time it's
- 00:13:50okay um I mean I could imagine a world
- 00:13:54where you can say like I want one GPU or
- 00:13:5610 gpus or 100 gpus to just be like
- 00:13:59turning on my problems all the
- 00:14:01time and it's not like you're not like
- 00:14:06paying per seat or even per agent but
- 00:14:09you're like it's priced based off the
- 00:14:11amount of compute that's like working on
- 00:14:13a you know on your problems all the time
- 00:14:16do we need to build specific models for
- 00:14:18agentic use or do we not how do you
- 00:14:22think about
- 00:14:23that
- 00:14:25um there's a huge amount of
- 00:14:27infrastructure and Scaffolding to build
- 00:14:28for
- 00:14:30but I think 01 points the way to a model
- 00:14:32that is capable of doing great agentic
- 00:14:36tasks I hate the word agentic by the way
- 00:14:38i' I'd love it if we could come up with
- 00:14:40a what would you like this is your
- 00:14:42chance to coin a new word I don't have
- 00:14:45this could be a spoiler of that that
- 00:14:46that really is something I I'll keep
- 00:14:48thinking while we
- 00:14:50talk on the model side Sam everyone says
- 00:14:54that uh models are depreciating assets
- 00:14:56the commoditization of models is so
- 00:14:59R how do you respond and think about
- 00:15:02that and when you think about the
- 00:15:04increasing Capital intensity to train
- 00:15:06models are we actually seeing the
- 00:15:07reversion of that where it requires so
- 00:15:10much money that actually very few people
- 00:15:11can do
- 00:15:12it uh it's definitely true that they're
- 00:15:15depreciating assets um this thing that
- 00:15:18they're not though worth as much as they
- 00:15:21cost to train that seems totally wrong
- 00:15:24um to say nothing of the fact that
- 00:15:26there's like a there's a positive
- 00:15:29compounding effect as you learn to train
- 00:15:30these models you get better at training
- 00:15:32the next one but the actual like Revenue
- 00:15:34we can make from a model I think
- 00:15:36justifies the investment um I to be fair
- 00:15:41uh I don't think that's true for
- 00:15:43everyone and there's a lot of there are
- 00:15:46probably too many people training very
- 00:15:48similar models and if you're a little
- 00:15:50behind or if you don't have
- 00:15:54a product with the sort of normal rules
- 00:15:57of business that make that product
- 00:15:59sticky and valuable then yeah maybe you
- 00:16:03can't maybe it's harder to get a return
- 00:16:06on the investment we're very fortunate
- 00:16:08to have chat GPT and hundreds of
- 00:16:10millions of people that use our models
- 00:16:11and so even if it costs a lot we get to
- 00:16:13like amortise that cost across a lot of
- 00:16:15people how do you think about how open
- 00:16:17AI models continue to differentiate over
- 00:16:20time and where you most want to focus to
- 00:16:22expand that
- 00:16:25differentiation uh reasoning is our
- 00:16:29current most important area of focus I
- 00:16:31think this is what unlocks the next like
- 00:16:34massive Leap Forward in in value created
- 00:16:37so that's we'll improve them in lots of
- 00:16:40ways uh we will
- 00:16:42do multimodal work uh we will do other
- 00:16:46features in the models that we think are
- 00:16:47super important
- 00:16:50to the ways that people want to use
- 00:16:52these things how do you think how do you
- 00:16:54think about reasoning in multimodal work
- 00:16:56like there the challenge is what you
- 00:16:58want to achieve love to understand that
- 00:17:01reasoning in multimodality specific
- 00:17:04yeah I hope it's just going to work I
- 00:17:06mean it obviously takes some doing to
- 00:17:08get done but uh you know
- 00:17:11like people like when they're babies and
- 00:17:14toddlers before they're good at language
- 00:17:16can still do quite complex visual
- 00:17:18reasoning so clearly this is
- 00:17:20possible totally is um how will Vision
- 00:17:23capabilities scale with new inference
- 00:17:26time Paradigm set by 01
- 00:17:36uh without spoiling anything I would
- 00:17:38expect rapid progress in
- 00:17:41image based
- 00:17:44models that's a spoiler isn't
- 00:17:46it it's a bit of a carrot
- 00:17:49Sam okay um going off schedule is one
- 00:17:53thing trying to tease that out might get
- 00:17:54me in real trouble um gpt's output is
- 00:17:57generally I I like the this one a lot
- 00:17:59yeah I I don't think that we are nearly
- 00:18:01British enough in a lot of gpt's output
- 00:18:03gpt's output is generally American in
- 00:18:05spelling and spelling and tone how do we
- 00:18:08think about
- 00:18:10internationalization with models
- 00:18:12different cultures different languages
- 00:18:14and how important that
- 00:18:16is it's interesting I you know I don't
- 00:18:19use British English I haven't tried but
- 00:18:20I would have guessed that it's really
- 00:18:22good at doing British English is it
- 00:18:26not okay well we'll look at that
- 00:18:30we can get you your s's I'm sure there
- 00:18:33we go um how does open AI make
- 00:18:35breakthroughs in terms of like core
- 00:18:38reasoning do we need to start pushing
- 00:18:39into reinforcement learning as a pathway
- 00:18:42or other new techniques aside from the
- 00:18:47transforma uh I mean there's two
- 00:18:49questions in there there's how we do it
- 00:18:51and then you know there's everyone's
- 00:18:53favorite question which is what comes
- 00:18:54beyond the Transformer
- 00:18:57the how we do it is our special sauce
- 00:19:00it's easy it's really easy to copy
- 00:19:02something you know Works uh and one of
- 00:19:03the reasons that people don't talk about
- 00:19:05about why it's so easy is you have the
- 00:19:06conviction to know it's possible and so
- 00:19:10after after a research lab does
- 00:19:12something even if you don't know exactly
- 00:19:14how they did it
- 00:19:15it's say easy but it's doable to go off
- 00:19:18and copy it and you can see this in the
- 00:19:20replications of gp4 and I'm sure you'll
- 00:19:23see this in replications of
- 00:19:2501 what is really hard and the thing
- 00:19:27that I'm most proud of about our culture
- 00:19:30is the repeated ability to go off and do
- 00:19:35something new and
- 00:19:37totally unproven and a lot of
- 00:19:41organizations I'm not talking about AI
- 00:19:43research just generally a lot of
- 00:19:45organizations talk about the ability to
- 00:19:47do this there are very few that do um
- 00:19:50across any field and in some sense I
- 00:19:53think this is one of the most important
- 00:19:55inputs to human progress so one of the
- 00:19:58like retirement things I fantasize about
- 00:20:01doing is writing a book of everything
- 00:20:03I've learned about how to build an
- 00:20:05organization and a culture that does
- 00:20:07this thing not the organization that
- 00:20:09just copies what everybody else has done
- 00:20:11because I think this is something that
- 00:20:13the world could have a lot more of it's
- 00:20:15limited by human talent but there's a
- 00:20:17huge amount of wasted human talent
- 00:20:20because this is
- 00:20:21not an organization style or culture
- 00:20:25whatever you want to call it that we are
- 00:20:27all good at building so I love way more
- 00:20:30of that and that is I think the thing
- 00:20:32most special about us Sam how is human
- 00:20:34Talent
- 00:20:35wasted oh there's just a lot of really
- 00:20:38talented people in the world that are
- 00:20:39not working to their full potential um
- 00:20:42because they work at a bad company or
- 00:20:44they live in a country that doesn't
- 00:20:45support any good companies uh
- 00:20:48or a long list of other things I mean
- 00:20:51the
- 00:20:52the one of the things I'm most excited
- 00:20:54about with AI is I hope it'll get us
- 00:20:57much better than we are now at helping
- 00:20:59get everyone to their Max potential
- 00:21:01which we are nowhere nowhere near
- 00:21:04there's a lot of people in the world
- 00:21:05that I'm sure would be phenomenal AI
- 00:21:08researchers had their life paths just
- 00:21:09gone a little bit
- 00:21:11differently Sam you've had an incredible
- 00:21:13journey again sorry for the off-script
- 00:21:15you've had an incredible journey over
- 00:21:17the last few years through you know
- 00:21:20unbelievable hypergrowth you say about
- 00:21:22writing a book there in retirement if
- 00:21:24you reflect back on the 10 years of
- 00:21:26leadership change that you've under gone
- 00:21:29how have you changed your leadership
- 00:21:31most
- 00:21:34significantly
- 00:21:38well I think the thing that has been
- 00:21:41most unusual for me about these last
- 00:21:43couple of
- 00:21:44years
- 00:21:46is just the rate at
- 00:21:48which things have changed at a normal
- 00:21:51company you get time to go from zero to 100
- 00:21:55million in Revenue 100 million to a
- 00:21:56billion billion to 10 billion you don't
- 00:21:58have to do that in like a 2-year period
- 00:22:01and you don't have to like build the
- 00:22:03company we had the research but we
- 00:22:05really didn't have a company in the
- 00:22:06sense of a traditional Silicon Valley
- 00:22:08startup that's you know scaling and
- 00:22:09serving lots of customers whatever um
- 00:22:12having to do that
- 00:22:15so quickly there was just like a lot of
- 00:22:17stuff that I was supposed to get more
- 00:22:19time to learn than I got and what did
- 00:22:23you not know that you would have liked
- 00:22:25more time to learn
- 00:22:30I I I mean I would say like what did I
- 00:22:32know um
- 00:22:37the one of the things that just came to
- 00:22:40mind out of like a rolling list of a 100
- 00:22:42is how hard it is how much active work
- 00:22:46it takes to get the company to focus not
- 00:22:50on how you grow the next 10% but the
- 00:22:52next 10x and growing the next 10% it's
- 00:22:55the same things that worked before will
- 00:22:56work again but to go from a company
- 00:22:58doing say like a billion to 10 billion
- 00:23:00dollars in revenue
- 00:23:02requires a whole lot of change and it is
- 00:23:05not the sort of like let's do next week
- 00:23:08what we did last week mindset and in a
- 00:23:11world where people don't get time to
- 00:23:14even get caught up on the basics because
- 00:23:16growth is just
- 00:23:18so rapid uh I I badly underappreciated
- 00:23:24the amount of work it took to be able to
- 00:23:26like keep charging at the next big step
- 00:23:30forward while still not neglecting
- 00:23:32everything else that we have to do um
- 00:23:36there's a big piece of internal
- 00:23:37communication around that and how you
- 00:23:39sort of share information how you build
- 00:23:41the structures to like get the company
- 00:23:44to get good at thinking about 10x more
- 00:23:47stuff or bigger stuff or more complex
- 00:23:49stuff every eight months 12 months
- 00:23:52whatever
- 00:23:53um there's a big piece in there about
- 00:23:56planning about how you
- 00:23:58balance what has to happen today and
- 00:24:01next month with the the long lead pieces
- 00:24:03you need in place to be able to
- 00:24:06execute in a year or two years with you
- 00:24:08know build out of compute or even you
- 00:24:11know things that are more normal like
- 00:24:13planning ahead enough for like office
- 00:24:15space in a city like San Francisco is
- 00:24:18surprisingly hard at this kind of rate
- 00:24:21so I I think
- 00:24:24the there was either no playbook for
- 00:24:27this or someone had a secret playbook
- 00:24:28they didn't give me um or all of us
- 00:24:31like we've all just sort of fumbled our
- 00:24:33way through this but there's been a lot
- 00:24:34to learn on the
- 00:24:36Fly God I don't know if I'm going to get
- 00:24:38into trouble for this but sod it I'll ask
- 00:24:40it anyway and if so I'll deal with it
- 00:24:42later um Keith Rabois uh did a talk and
- 00:24:46he said about you should hire incredibly
- 00:24:48young people under 30 and that is what
- 00:24:51Peter Thiel taught him and that is the
- 00:24:52secret to building great companies and
- 00:24:55it got a little bit of resistance uh to
- 00:24:57say the least
- 00:24:58um I'm intrigued when you think about
- 00:25:01this book that you write in retirement
- 00:25:03and that advice you build great
- 00:25:05companies by building incredibly young
- 00:25:08hungry ambitious people who are under 30
- 00:25:11and that is the mechanism how do you
- 00:25:13feel I think I was 30 when we started
- 00:25:14OpenAI or at least thereabouts so
- 00:25:17you know I wasn't that
- 00:25:24young seemed to work okay so far
- 00:25:28worth a try uh uh going back uh is the
- 00:25:33question like the question is how do you
- 00:25:35think about hiring incredibly young
- 00:25:36under 30s as this like Trojan Horse of
- 00:25:40Youth
- 00:25:42energy ambition but less experience or
- 00:25:45the much more experienced I know how to
- 00:25:48do this I've done it
- 00:25:49before um I mean the obvious answer is
- 00:25:53you can succeed with hiring both classes
- 00:25:56of people like we have
- 00:25:59I was just like right before this I was
- 00:26:01sending someone a slack message about
- 00:26:04there was a guy that we recently hired
- 00:26:05on one of the teams I don't know how old
- 00:26:07he is but low 20s probably doing just
- 00:26:09insanely amazing work and I was like can
- 00:26:11we find a lot more people like this this
- 00:26:13is just like off the charts brilliant I
- 00:26:14don't get how these people can be so
- 00:26:16good so young but it clearly happens and
- 00:26:18when you can find those people uh they
- 00:26:21bring amazing fresh perspective energy
- 00:26:24whatever else on the other hand uh when
- 00:26:27you're like
- 00:26:30designing some of the most complex and
- 00:26:33massively
- 00:26:34expensive computer systems that Humanity
- 00:26:36has ever built actually like pieces of
- 00:26:39infrastructure of any sort then I would
- 00:26:42not be comfortable taking a bet on
- 00:26:44someone who is just sort of like
- 00:26:46starting out uh where the stakes are
- 00:26:48higher so you
- 00:26:51want you want both uh and I think what
- 00:26:55you really want is just like an extremely
- 00:26:58high talent bar of people at any
- 00:27:00age
- 00:27:02and a strategy that said I'm only going
- 00:27:05to
- 00:27:06hire younger people or I'm only going to
- 00:27:08hire older people I believe would be
- 00:27:11misguided uh I I think it's like somehow
- 00:27:15just not it's not quite the framing that
- 00:27:17resonates with me but the part of it
- 00:27:19that does is and one of the things that
- 00:27:22I feel most
- 00:27:24grateful to Y Combinator for is
- 00:27:28inexperience does not inherently mean
- 00:27:31not valuable and there are
- 00:27:35incredibly high potential people at the
- 00:27:37very beginning of their career that can
- 00:27:39create huge amounts of value and uh we
- 00:27:44as a society should bet on those people
- 00:27:46and it's a great
- 00:27:47thing I am going to return to some
- 00:27:49semblance of the schedule as I'm I'm
- 00:27:51really going to get told off but
- 00:27:52anthropics models have been sometimes
- 00:27:55cited as being better for coding
- 00:28:01why is that do you think that's fair
- 00:28:01and how should developers think about
- 00:28:03when to pick open AI versus a different
- 00:28:07provider yeah they have a model that is
- 00:28:09great at coding for sure uh and it's
- 00:28:11impressive work
- 00:28:14I I think developers use multiple models
- 00:28:18most of the time and I'm not sure how
- 00:28:21that's all going to evolve as we head
- 00:28:23towards this more agentic world um
- 00:28:27but
- 00:28:29I sort of think there's just going to be
- 00:28:30a lot of AI everywhere and something
- 00:28:33about the way that we currently talk
- 00:28:35about it or think about it
- 00:28:39feels wrong uh may maybe if I had to
- 00:28:43describe it we will shift from talking
- 00:28:44about models to talking about systems
- 00:28:47but that'll take a
- 00:28:48while when we think about scaling models
- 00:28:52how many more model iterations do you
- 00:28:54think scaling laws will hold true for it
- 00:28:57was the kind of
- 00:28:58common refrain that it won't last for
- 00:29:00long and it seems to be proving to last
- 00:29:03longer than people
- 00:29:05think
- 00:29:08uh without going into detail about how
- 00:29:10it's going to happen the the the core of
- 00:29:13the question that you're getting at is
- 00:29:16is the trajectory of model capability
- 00:29:19Improvement going to keep going like it
- 00:29:21has been going and the answer that I
- 00:29:25believe is yes for a long time
- 00:29:28have you ever doubted that totally why
- 00:29:33uh we have had well we've had
- 00:29:35like Behavior we don't understand we've
- 00:29:38had failed training runs we've had all sorts
- 00:29:39of things we've had to figure out new
- 00:29:41paradigms when we kind of get towards
- 00:29:43the end of one and have to figure out
- 00:29:45the next what was the hardest one to
- 00:29:49navigate um could be a new paradigm
- 00:29:52could be a training
- 00:29:53run which one do you remember most fondly
- 00:29:56and how did you get through that
- 00:30:00well when we started working on gp4
- 00:30:02there were some issues that caused us a
- 00:30:04lot of consternation that we really
- 00:30:05didn't know how to solve we figured it
- 00:30:08out but there was there was definitely a
- 00:30:10time period where we just didn't know
- 00:30:11how we were going to do that model um
- 00:30:14and then in this shift to o1 and the
- 00:30:18idea of reasoning models uh that was
- 00:30:21something we had been excited about for
- 00:30:23a long time but it was like a long and
- 00:30:26Winding Road of research to get
- 00:30:29here is it difficult to maintain morale
- 00:30:33when it is long and winding roads when
- 00:30:34training runs can fail how do you
- 00:30:37maintain morale in those
- 00:30:40times you know we have a lot of people
- 00:30:43here who are excited to build AGI and
- 00:30:45that that's a very motivating thing and
- 00:30:48no one expects that to be easy and a
- 00:30:50straight line to success but uh it does
- 00:30:53feel like
- 00:30:59there's a famous quote from history it's
- 00:31:01something like I'm going to get this
- 00:31:03totally wrong but the spirit of it is
- 00:31:06like I never pray and ask for God to be
- 00:31:09on my side you know I pray and hope to
- 00:31:11be on God's side and there is something
- 00:31:14about betting on deep learning that
- 00:31:16feels like being on the side of the
- 00:31:17angels
- 00:31:18and you kind of just it eventually seems
- 00:31:21to work out even though you hit some big
- 00:31:23stumbling blocks along the way and so
- 00:31:25like a deep belief in that has been good
- 00:31:27for us
- 00:31:30can I ask you a really weird one I had a
- 00:31:32great quote the other day and it was the
- 00:31:34heaviest things in life are not iron or
- 00:31:36gold but unmade
- 00:31:38decisions what unmade decision weighs on
- 00:31:41your mind
- 00:31:44most it's different every day like I
- 00:31:46don't there's not one big
- 00:31:50one I mean I guess there are some big
- 00:31:52ones like about are we gonna bet on this
- 00:31:56next product or that next product uh or
- 00:31:59are we going to like build our next
- 00:32:01computer this way or that way they are
- 00:32:03kind of like really high stakes one-way
- 00:32:06doorish that like everybody else I
- 00:32:07probably delay for too long but but
- 00:32:11mostly the hard part is every day it
- 00:32:14feels like there are a few
- 00:32:17new 51/49
- 00:32:19decisions that come up
- 00:32:22that kind of make it to me because they
- 00:32:25were 51/49 in the first place and that I
- 00:32:28don't feel like particularly likely I
- 00:32:30can do better than somebody else would
- 00:32:33have done but I kind of have to make them
- 00:32:34anyway and it's it's the volume of them
- 00:32:37it is not
- 00:32:40any one is there a commonality in the
- 00:32:42person that you go to when it's
- 00:32:4551/49 no um
- 00:32:50I I think the wrong way to do that is to
- 00:32:52have one person you lean on for
- 00:32:54everything and the right way to at least
- 00:32:56for me the right way to do it is to have
- 00:32:57like like 15 or 20 people each of whom
- 00:33:00you have come to believe has good
- 00:33:02instincts and good context in a
- 00:33:04particular way and you get to like phone
- 00:33:07a friend to the best expert rather than
- 00:33:09try to have just one across the
- 00:33:11board in terms of hard decisions I do
- 00:33:13want to touch ON Semiconductor Supply
- 00:33:15chains how worried are you about
- 00:33:18semiconductor Supply chains and
- 00:33:20international tensions
- 00:33:23today I don't know how to quantify that
- 00:33:26worried of course is the answer
- 00:33:29uh it's probably not it's well I guess I
- 00:33:30could quantify it this way it is not my
- 00:33:32top worry but it is in like the top 10%
- 00:33:35of all
- 00:33:38worries am I allowed to ask what's your
- 00:33:40top
- 00:33:41worry I'm I'm in so much I've got past
- 00:33:43the stage of being in trouble for this
- 00:33:45one
- 00:33:48um it's something
- 00:33:53about the the sort of generalized
- 00:33:56complexity of all we as a whole field
- 00:33:59are trying to do
- 00:34:02the and it feels like a I think it's all
- 00:34:06going to work out fine but it feels like
- 00:34:08a very complex system now this kind of
- 00:34:12like works fractally at every level so
- 00:34:13you can say that's also true like inside
- 00:34:16of OpenAI itself uh that's also
- 00:34:18true inside of any one team um but you
- 00:34:23know an example of this since you were
- 00:34:24just talking about semiconductors is you
- 00:34:26got to balance the
- 00:34:28power availability with the right
- 00:34:30networking decisions with being able to
- 00:34:31like get enough chips in time and
- 00:34:33whatever risk there's going to be there
- 00:34:35um with the ability to have the research
- 00:34:37ready to intersect that so you don't
- 00:34:39either like be caught totally flat
- 00:34:41footed or have a system that you can't
- 00:34:43utilize um with the right product that
- 00:34:47is going to use that research to be able
- 00:34:49to like pay the eye watering cost of
- 00:34:51that system so
- 00:34:55the saying supply chain makes it
- 00:34:58sound too much like a pipeline but but
- 00:35:00yeah the overall ecosystem complexity at
- 00:35:03every level of like the fractal scale is
- 00:35:06unlike anything I have seen in any
- 00:35:08industry
- 00:35:10before uh and some version of that is
- 00:35:14probably my top
- 00:35:16worry you said unlike anything we've
- 00:35:18seen before a lot of people I think
- 00:35:19compare this you know wave to the
- 00:35:22internet bubble uh in terms of you know
- 00:35:25the excitement and the exuberance and I
- 00:35:26think the thing that's different is the
- 00:35:27amount that people are spending Larry
- 00:35:29Ellison said that it will cost a hundred
- 00:35:31billion dollars to enter the foundation
- 00:35:33model race as a starting point do you
- 00:35:36agree with that statement and when you
- 00:35:37saw that we like yeah that makes
- 00:35:40sense uh no I think it will cost less
- 00:35:42than that but there there's an
- 00:35:47interesting there's an interesting point
- 00:35:49here um which is everybody likes to
- 00:35:54use previous examples of a technology
- 00:35:57Revolution to talk about to put a new
- 00:35:59one into more familiar context and A I
- 00:36:04think that's a bad habit on the whole
- 00:36:06and but I understand why people do it
- 00:36:09and B I think the ones people pick
- 00:36:12for analogizing AI are particularly bad
- 00:36:16so the internet was obviously quite
- 00:36:19different than Ai and you brought up
- 00:36:20this one thing about cost and whether it
- 00:36:22cost like 10 billion or 100 billion or
- 00:36:24whatever to be competitive it was very
- 00:36:26like one of the defining things about
- 00:36:28the
- 00:36:29internet Revolution was it was actually
- 00:36:33really easy to get started now another
- 00:36:36thing that cuts more towards the
- 00:36:38internet is
- 00:36:41mostly for many companies this will just
- 00:36:43be like a continuation of the Internet
- 00:36:45it's just like someone else makes these
- 00:36:46AI models and you get to use them to
- 00:36:49build all sorts of great stuff and it's
- 00:36:51like a new primitive for Building
- 00:36:53Technology but if you're trying to build
- 00:36:54the AI itself that's pretty different
- 00:36:57another example people use is electricity
- 00:37:00um which I think doesn't make sense for
- 00:37:02a ton of reasons the one I like the most
- 00:37:05caveat by my
- 00:37:07earlier comment that I don't think
- 00:37:09people should be doing this or trying to
- 00:37:11like use these analogies to seriously is
- 00:37:14the
- 00:37:16transistor it was a new discovery of
- 00:37:18physics it had incredible scaling
- 00:37:21properties it seeped everywhere pretty
- 00:37:24quickly you know we had things like Moore's law
- 00:37:27in a way that we could now imagine like
- 00:37:29a bunch of laws for AI that tell us
- 00:37:31something about how quickly it's going
- 00:37:32to get better um and everyone kind of
- 00:37:36like the whole tech industry kind of
- 00:37:38benefited from it and there's a lot of
- 00:37:41transistors involved in the products and
- 00:37:43delivery of services that you use but
- 00:37:45you don't really think
- 00:37:46of them as transistor
- 00:37:49companies um it's there's a very complex
- 00:37:53very expensive industrial process around
- 00:37:55it with a massive supply chain and
- 00:37:57and you know the the incredible progress
- 00:38:01based off of this very simple discovery
- 00:38:03of physics led to this gigantic uplift
- 00:38:05of the whole economy for a long time
- 00:38:07even though most of the time you didn't
- 00:38:09to think about it and you don't say oh
- 00:38:11this is a transistor product it's just
- 00:38:13like all right this thing can like
- 00:38:14process information for me you don't
- 00:38:17even really think about that it's just
- 00:38:19expected Sam I'd love to do a quick fire
- 00:38:22round with you so I'm going to say so
- 00:38:24I'm going to say a short statement you
- 00:38:25give me your immediate thoughts okay
- 00:38:28okay so you are building today as a
- 00:38:32whatever 23 24 year old with the
- 00:38:34infrastructure that we have
- 00:38:36today what do you choose to build if you
- 00:38:39started
- 00:38:42today uh some AI enabled vertical I'll
- 00:38:46I'll I'll use tutors as an example but
- 00:38:48like the the the best AI tutoring
- 00:38:51product or the you know that I could
- 00:38:53possibly imagine to teach people to
- 00:38:54learn any category like that could be
- 00:38:56the AI lawyer could could be the sort of
- 00:38:58like AI CAD engineer whatever you
- 00:39:02mentioned your book that will be coming
- 00:39:04out once it's written in retirement no I
- 00:39:07said I think about it I I don't know if
- 00:39:09I'll actually get around to it but I
- 00:39:10think it's an interesting idea if you
- 00:39:12were to write a book what would you call
- 00:39:15it this is so unfair sorry Sam um I
- 00:39:20don't have the title really I haven't
- 00:39:22thought about this book other than like
- 00:39:24I wish something existed because I think
- 00:39:25it could unlock a lot of human potential
- 00:39:28so maybe I think it would be something
- 00:39:29about human
- 00:39:30potential what in AI does no one focus
- 00:39:33on that everyone should spend more time
- 00:39:37on what is not hot but should be
- 00:39:44hot I think there's people focused on
- 00:39:47everything uh what I would love to
- 00:39:50see there's a lot of different ways to
- 00:39:52solve this problem but something about
- 00:39:54an AI that can understand your whole
- 00:39:55life doesn't have to literally be the
- 00:39:58infinite context but some way that you
- 00:40:00can have an AI agent that like knows
- 00:40:02everything there is to know about you
- 00:40:03has access to all of your data things
- 00:40:06like that what was one thing that
- 00:40:08surprised you in the last month
- 00:40:12Sam it's a research result I can't talk
- 00:40:15about but it is breathtakingly
- 00:40:18good which competitor do you most
- 00:40:21respect why
- 00:40:24then uh I mean I kind of respect
- 00:40:27everybody in the space right now I think
- 00:40:29there's like really amazing work coming
- 00:40:31from the whole field
- 00:40:37there are incredibly talented incredibly
- 00:40:39hardworking people I don't mean this to
- 00:40:41be a question Dodge it's like I can
- 00:40:42point to super talented people doing
- 00:40:45super great work everywhere in the
- 00:40:49field is that
- 00:40:53one not
- 00:40:56really uh tell me what's your favorite
- 00:41:00OpenAI
- 00:41:00API I think the new Realtime API is
- 00:41:03pretty awesome but we have a lot of I
- 00:41:05mean we have a we have a big API
- 00:41:07business at this point so there's a lot
- 00:41:08of good stuff in there what are the
- 00:41:10biggest constraints on llama from it
- 00:41:12being open
- 00:41:16source I don't know it seems like a
- 00:41:18better question for
- 00:41:21them I'm doing well with this quick fire uh
- 00:41:24who do you most respect in AI today Sam
- 00:41:31like ever or doing current work doing
- 00:41:33current
- 00:41:34work like a person
- 00:41:38yeah
- 00:41:45um well I feel like I I feel like I
- 00:41:47can't just go name a bunch of open AI
- 00:41:49people because that would
- 00:41:51be unfair uh and it would sound like I'm
- 00:41:54just biased so let me try to think of a
- 00:41:56non-OpenAI
- 00:41:58person uh let me give a shout out to the
- 00:42:00cursor team I mean there's a lot of
- 00:42:02people doing incredible work in AI but I
- 00:42:04think to really do what they've
- 00:42:07done and built I thought about like a
- 00:42:09bunch of researchers I could name um
- 00:42:12but in terms of using AI to deliver a
- 00:42:16really magical uh experience that
- 00:42:20creates a lot of value in a way that
- 00:42:22people just didn't quite manage to put
- 00:42:23the pieces together I think that's it's
- 00:42:25really quite remarkable and I
- 00:42:27specifically left anybody at open AI out
- 00:42:29as I was thinking through it otherwise
- 00:42:30it would have been a long list of open
- 00:42:32AI people first how do you think about
- 00:42:34the trade-off between latency and
- 00:42:40accuracy you need a dial to change
- 00:42:43between them like in the same way
- 00:42:46that you want to do a rapid fire thing
- 00:42:48now and I'm not even going that quick
- 00:42:50but I'm you know trying not to think for
- 00:42:51multiple minutes uh in this context
- 00:42:54latency is what you want but if
- 00:42:57you were like hey Sam I want you to go
- 00:42:59like make a new important Discovery in
- 00:43:01physics you'd probably be happy to wait
- 00:43:03a couple of years and the answer is it
- 00:43:07should be user
- 00:43:10controllable can I ask when you think
- 00:43:12about insecurity and Leadership I think
- 00:43:13it's something that everyone has uh it's
- 00:43:15something we don't often talk about um
- 00:43:17when you think about maybe an insecurity
- 00:43:19in leadership an area of your leadership
- 00:43:21that you'd like to improve where would
- 00:43:23you most like to improve as a leader and
- 00:43:25a CEO today
- 00:43:31um it's a long list I'm trying to scan
- 00:43:33for the top one
- 00:43:38here the thing I'm struggling with most
- 00:43:41this week is I feel more uncertain than
- 00:43:45I have in the past
- 00:43:47about
- 00:43:49uh what are like the details of what our
- 00:43:52product strategy should be um I think
- 00:43:55that product is a weakness of mine in
- 00:43:58general um and it's something that right
- 00:44:02now the company like needs stronger and
- 00:44:05clearer vision on from me like we have a
- 00:44:07wonderful head of product and a great
- 00:44:08product team but it's an area that I
- 00:44:10wish I were a lot stronger on and
- 00:44:13acutely feeling the miss right now
- 00:44:15you hired Kevin um I've known Kevin for
- 00:44:17years he's exceptional Kevin's amazing
- 00:44:21what makes Kevin worldclass as a product
- 00:44:23leader to
- 00:44:24you um this was the first word that came
- 00:44:28to mind huh in terms of
- 00:44:32focus focus what we're going to say no
- 00:44:34to like really trying to speak on behalf
- 00:44:37of the user about why we would do
- 00:44:38something or not do something like
- 00:44:40really trying to be rigorous about not
- 00:44:43not having like Fantastical
- 00:44:47dreams Sam you've done a lot of
- 00:44:49interviews I want to finish with one
- 00:44:51which is what question are you not often
- 00:44:53or never asked that you often leave an
- 00:44:56interview and think God I should have
- 00:44:58been asked that or I wish I was asked
- 00:45:05that I mean this is such a meta answer
- 00:45:07but I've been asked that question so
- 00:45:08many times that I've like used up all of them
- 00:45:10that is that is so
- 00:45:12massive okay I'll change it then we have
- 00:45:16a 5year horizon for open Ai and a
- 00:45:2010-year if you have a magic wand and can
- 00:45:23paint that scenario for the 5 year and
- 00:45:25the 10 year can you paint that canvas
- 00:45:27for me for the 5 and 10
- 00:45:35year I mean I can easily do it for like
- 00:45:37the next two years but if we are right
- 00:45:41and we start to make systems that
- 00:45:45are so good at you know for example
- 00:45:48helping us with scientific
- 00:45:50advancement actually I I will just say
- 00:45:52it I think in five years it looks like
- 00:45:54we have
- 00:45:58an unbelievably rapid rate of
- 00:46:00improvement in technology itself you
- 00:46:03know people are like man the AGI moment
- 00:46:06came and went whatever the like the the
- 00:46:09pace of progress is like totally crazy
- 00:46:12and we're discovering all this new stuff
- 00:46:14both about AI research and also about
- 00:46:16all the rest of
- 00:46:17Science and that
- 00:46:20feels like if we could sit here now and
- 00:46:22look at it it would seem like it should
- 00:46:25be very crazy and then the second part
- 00:46:27of the prediction is that Society itself
- 00:46:31actually changes surprisingly
- 00:46:34little an example of this would be that
- 00:46:37I think if you asked people five years
- 00:46:39ago if computers were going to pass the
- 00:46:41Turing test they would say no and then
- 00:46:43if you said well what if an oracle told
- 00:46:44you it was going to they would say well
- 00:46:46it would somehow be like just this crazy
- 00:46:49breathtaking change for society and we
- 00:46:52did kind of satisfy the Turing test
- 00:46:54roughly speaking of course and so
- 00:46:56Society didn't change that much it just
- 00:46:58sort of went whooshing
- 00:47:00by and that's kind of uh example of what
- 00:47:06I expect to keep happening which is
- 00:47:08progress scientific progress keeps
- 00:47:12going outperforming all expectations and
- 00:47:15Society in a way that I think is good
- 00:47:17and healthy um changes not that
- 00:47:20much at least not that much in the long
- 00:47:22term it will hugely change five or 10
- 00:47:26years you've been amazing I had this
- 00:47:28list of questions I I didn't really
- 00:47:29stick to them uh thank you for putting
- 00:47:32up with my Meandering around different
- 00:47:34questions thank you everyone for coming
- 00:47:36I'm so thrilled that we were able to do
- 00:47:37this today and Sam thank you for making
- 00:47:39it happen thank you
- AI
- OpenAI
- Sam Altman
- no-code
- agent AI
- reasoning
- innovation
- large models
- open source
- semiconductors