Google CEO ERIC SCHMIDT BANNED Interview LEAKED: "Future is SCARY" (AI Pep Talk)
Summary
TL;DR: In a since-removed Stanford interview, Eric Schmidt, former CEO of Google, discussed the unprecedented potential impact of AI technologies. He anticipates that large context windows, LLM agents, and text-to-action will revolutionize how we interact with AI, creating changes as significant as those brought by social media, if not greater. Schmidt addressed the scale of resources needed to reach Artificial General Intelligence (AGI), suggesting partnerships with Canada for its energy resources. He also criticized Google for prioritizing work-life balance over competitive edge. Discussing global AI leadership, Schmidt stressed the need for massive investment as the technology evolves. He acknowledged issues like misinformation and its impact on democracy, urging improved public critical thinking and effective misinformation regulation. He foresees a future where AI substantially augments programming but may eventually reduce the need for human coders as AI systems become more autonomous.
Key Insights
- 🌍 AI will have a significant global-scale impact.
- 🔗 Large context windows enhance AI's capabilities.
- 💡 Wholesale recreation of apps is still limited by security walls around data.
- ⚡ Energy demand is critical for AI advancement.
- 👥 Human programmers' roles might evolve with AI.
- ⚖ Google's culture prioritizes balance over competition.
- 🛑 Misinformation is a key threat in AI usage.
- 🤖 Future AI may learn independently of programmers.
- 🏭 Collaboration with energy-rich countries is vital.
- ⚔ AI-driven robotics could sharply reduce the cost of warfare.
Timeline
- 00:00:00 - 00:05:00
The video begins with a discussion on Eric Schmidt's interview about future technological impacts, particularly from large context windows and LLM agents in AI. Schmidt emphasizes the transformative potential of these technologies, likening it to or even surpassing the impact of social media, enabling complex tasks like building applications based on simple commands.
- 00:05:00 - 00:10:00
Eric Schmidt talks about the energy requirements for AI development, especially for achieving AGI, suggesting collaboration with Canada for hydro power. The host counters that ever more data may not be necessary for AGI, since smarter use of data could suffice. Schmidt critiques Google for prioritizing work-life balance over competitive drive, contrasting startup cultures that demand intense work ethics.
- 00:10:00 - 00:15:00
Schmidt discusses the geopolitical implications of AI development, highlighting the competition between the US and China. AI and tech advancement are seen as pivotal arenas of global dominance. Schmidt notes the impact of cheap, effective warfare technologies such as drones in Ukraine, suggesting a shift in military strategy driven by AI.
- 00:15:00 - 00:20:00
Schmidt describes the complexities and potential vulnerabilities in AI models, comparing them to unpredictable teenagers. He suggests the need for adversarial AI to test and strengthen system defenses. He also acknowledges the significant investment in AI, indicating both potential and risks in the tech industry's future.
- 00:20:00 - 00:27:30
Future implications of AI are addressed, emphasizing context windows, agents, and text-to-action capabilities. There is discussion on misinformation challenges, and how social media platforms struggle to manage false content. Schmidt concludes by reflecting on the future of programming and education, suggesting eventual integration with AI developments.
FAQ
Where was Eric Schmidt's controversial interview uploaded and what happened after?
The video was initially uploaded by Stanford but later removed.
What does Eric Schmidt predict for AI's future?
Eric Schmidt believes that large context windows, LLM agents, and text-to-action will revolutionize AI's impact globally.
What are Schmidt's views on energy requirements for AI?
Schmidt highlights the energy demands for AI development and suggests cooperation with Canada due to its energy resources.
What controversial statement did Schmidt make about Google's work culture?
Schmidt criticizes Google for prioritizing work-life balance over aggressive competition.
How does Schmidt illustrate the complexity of understanding AI systems?
He compares AI systems to teenagers: we know they are human but cannot fully figure out what they are thinking, yet we can still understand their boundaries.
What does Schmidt say about AI competition between countries?
He suggests that AI will necessitate vast data and resources, potentially limiting it to well-funded countries.
How can AI's influence on misinformation and public opinion be managed?
Maintaining critical thinking and regulating social media are essential to address misinformation.
What implications does Schmidt foresee for the future of programming and education?
Schmidt predicts AI systems will enhance productivity but also that programmer roles may evolve.
- 00:00:00 "When they are delivered at scale, it's going to have an impact on the world at a scale that no one understands yet." Eric Schmidt, the former CEO of Google, just did an interview at Stanford where he talked about a lot of controversial stuff. Initially the interview was uploaded on Stanford's YouTube channel, but a couple of days later it was taken down from YouTube and everywhere else. Today, after spending multiple hours, I was somehow able to access the interview video, so let's watch it together and dissect some important parts of it.
- 00:00:26 Schmidt: "In the next year you're going to see very large context windows, agents, and text-to-action. When they are delivered at scale, it's going to have an impact on the world at a scale that no one understands yet, much bigger than the horrific impact we've had from social media, in my view. Here's why: you can basically use a context window as short-term memory, and I was shocked that context windows could get this long; the technical reasons have to do with the fact that it's hard to serve, hard to calculate, and so forth. The interesting thing about short-term memory is that when you ask it a question like 'read 20 books', you give it the text of the books as the query and you say 'tell me what they say', it forgets the middle, which is exactly how human brains work too."
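As an aside, the "forgets the middle" behavior Schmidt describes is a measurable effect, often called "lost in the middle", and a probe for it is simple to sketch. Everything below is a hypothetical illustration, not code from the interview; scoring a real model's answers at each depth would trace the recall curve he is alluding to.

```python
# A minimal probe for the "lost in the middle" effect: place a known fact
# (the needle) at different depths of a long context and ask for it back.
# The filler documents and the needle are made up for illustration.

def build_probe(filler_docs, needle, depth):
    """Insert `needle` at fractional `depth` (0.0 = start, 1.0 = end)."""
    docs = list(filler_docs)
    docs.insert(round(depth * len(docs)), needle)
    context = "\n\n".join(docs)
    return f"{context}\n\nQuestion: what is the secret code?\nAnswer:"

filler = [f"Background document {i}: nothing relevant here." for i in range(100)]
needle = "The secret code is 7319."

# One prompt per needle depth; feeding these to a model and checking for
# "7319" in the answers would measure recall at the start, middle, and end.
prompts = {d: build_probe(filler, needle, d) for d in (0.0, 0.5, 1.0)}
```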
- 00:01:19 "That's where we are with respect to agents. There are people who are now building essentially LLM agents, and the way they do it is they read something like chemistry, they discover the principles of chemistry, then they test it, and then they add that back into their understanding. That's extremely powerful. And the third thing, as I mentioned, is text-to-action. I'll give you an example: the government is in the process of trying to ban TikTok; we'll see if that actually happens. If TikTok is banned, here's what I propose each and every one of you do: say to your LLM the following: 'Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it's not viral, do something different along the same lines.' That's the command. Boom, boom, boom, boom, right? You understand how powerful that is? If you can go from arbitrary language to arbitrary digital command, which is essentially what Python in this scenario is, imagine that each and every human on the planet has their own programmer that actually does what they want, as opposed to the programmers that work for me who don't do what I ask. The programmers here know what I'm talking about. So imagine a non-arrogant programmer that actually does what you want, that you don't have to pay all that money to, and there's an infinite supply of these programmers. This is all within the next year or two, very soon."
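Schmidt's "arbitrary language to arbitrary digital command" idea can be sketched in a few lines. The `llm_generate_code` stub below is a hypothetical stand-in for a real model call, not an actual API, and the sandbox is deliberately crude:

```python
# Minimal sketch of text-to-action: natural language in, generated Python
# out, executed in a restricted namespace. The "LLM" is a canned stub.

def llm_generate_code(instruction: str) -> str:
    """Stand-in for a model that turns an instruction into Python source."""
    canned = {
        "sum the numbers 1 through 10": "result = sum(range(1, 11))",
    }
    return canned[instruction]

def text_to_action(instruction: str):
    code = llm_generate_code(instruction)
    # Crude sandbox: expose only the two builtins the generated code needs.
    namespace = {"__builtins__": {"sum": sum, "range": range}}
    exec(code, namespace)   # a production system would need real isolation
    return namespace["result"]

print(text_to_action("sum the numbers 1 through 10"))  # 55
```

A real version would swap the stub for a model endpoint and run the generated code in an isolated worker rather than `exec`.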
- 00:02:57 We've already discussed a number of different versions of this on this channel, whether you're talking about Aider, Devin, Pythagora, or just using agents that collaborate with each other on code; there are so many great options for coding assistance right now. However, AI coders that can actually build full-stack, complex applications? We're not quite there yet, but hopefully soon. Also, what he's describing, just saying "download all the music and the secrets and recreate it", isn't really possible right now; obviously all of that is behind security walls and you can't just download it. But if he's saying "recreate the functionality", you can certainly do that. Schmidt: "Those three things, and I'm quite convinced it's the union of those three things, will happen in the next wave."
- 00:03:43 "So, you asked what else is going to happen. Every six months I oscillate; it's an even-odd oscillation. At the moment, the gap between the frontier models, of which there are now only three, and everybody else appears to me to be getting larger. Six months ago I was convinced the gap was getting smaller, so I invested lots of money in the little companies; now I'm not so sure. And I'm talking to the big companies, and the big companies are telling me that they need $10 billion, $20 billion, $50 billion, $100 billion. Stargate is, what, $100 billion? These are very, very hard. Sam Altman is a close friend; he believes it's going to take about $300 billion, maybe more. I pointed out to him that I'd done the calculation on the amount of energy required, and then, in the spirit of full disclosure, I went to the White House on Friday and told them that we need to become best friends with Canada, because Canada has really nice people, helped invent AI, and has lots of hydro power, and because we as a country do not have enough power to do this. The alternative is to have the Arabs fund it, and I like the Arabs personally, I've spent lots of time there, but they're not going to adhere to our national security rules, whereas Canada and the US are part of a triumvirate where we all agree. So for these $100 billion, $300 billion data centers, electricity starts becoming the scarce resource."
- 00:05:18 Now, first of all, we definitely don't have enough energy resources to achieve AGI; it's just not possible right now. Eric is also assuming that we're going to need more and more data and larger models to reach AGI, and I think that's not actually true. Sam Altman has said similar things: that we need to be able to do more with less, or with the same amount of data, because we've already used nearly all the data humanity has ever created; there's really not much left. So we're going to need to either figure out how to create synthetic data that is valuable, not just derivative, or do more with the data that we do have.
- 00:05:51 Interviewer: "You were at Google for a long time, and they invented the Transformer architecture (it's all Peter's fault), thanks to brilliant people over there like Peter and Jeff Dean and everyone. But now it doesn't seem like they have the initiative; they've kind of lost it to OpenAI, and on the last leaderboard I saw, Anthropic's Claude was at the top of the list. I asked Sundar this and didn't really get a very sharp answer; maybe you have a sharper or more objective explanation for what's going on there." Schmidt: "I'm no longer a Google employee, in the spirit of full disclosure. Google decided that work-life balance and going home early and working from home was more important than winning."
- 00:06:40 Okay, so that is the line that got him in trouble. It was everywhere: all over Twitter, all over the news. He said Google prioritized work-life balance, going home early, and not working as hard as the competitors over winning; they chose work-life balance over winning. And that's actually a pretty common perception of Google.
- 00:06:56 "The reason startups work is because the people work like hell. I'm sorry to be so blunt, but the fact of the matter is, if you all leave the university and go found a company, you're not going to let people work from home and only come in one day a week if you want to compete against the other startups." Interviewer: "In the early days of Google, Microsoft was like that." Schmidt: "Exactly. But now there's a long history in my industry, our industry I guess, of companies winning in a genuinely creative way and really dominating a space, and then not making the next transition. It's very well documented, and I think the truth is: founders are special. The founders need to be in charge. The founders are difficult to work with; they push people hard. As much as we can dislike Elon's personal behavior, look at what he gets out of people. I had dinner with him in Montana, and he was flying that night at 10:00 p.m. to have a midnight meeting with xAI. Think about it. I was in Taiwan, a different country, a different culture, and this is TSMC, who I'm very impressed with, and they have a rule that starting PhDs, and these are good physicists, work in the factory on the basement floor. Can you imagine getting American physicists with PhDs to do that? Highly unlikely. A different work ethic. The reason I'm being so harsh about work is that these are systems with network effects, so time matters a lot. In most businesses time doesn't matter that much: you have lots of time, Coke and Pepsi will still be around, the fight between Coke and Pepsi will continue, and it's all glacial. When I dealt with telcos, the typical telco deal would take 18 months to sign. There's no reason to take 18 months to do anything. Get it done. We're in a period of maximum growth, maximum gain."
- 00:09:06 Here he was asked about competition with China on AI and AGI, and his answer was: we're ahead, we need to stay ahead, and money is going to play a role in the competition with China as well. "I was the chairman of an AI commission that looked at this very carefully. You can read it, it's about 752 pages, and I'll summarize it by saying: we're ahead, we need to stay ahead, and we need lots of money to do so. Our customers were the Senate and the House, and out of that came the CHIPS Act and a lot of other stuff like that. A rough scenario is that if you assume the frontier models drive forward, along with a few of the open-source models, it's likely that a very small number of companies, countries, excuse me, can play this game. What are those countries, or who are they? Countries with a lot of money and a lot of talent, strong educational systems, and a willingness to win. The US is one of them; China is another. How many others are there? Are there any others? I don't know, maybe. But certainly, in your lifetimes, the battle between the US and China for knowledge supremacy is going to be the big fight. The US government banned, essentially, the Nvidia chips, although they weren't allowed to say that was what they were doing, from going into China. We have roughly a 10-year chip advantage in terms of sub-DUV, that is, sub-5-nanometer chips. So, an example: today we're a couple of years ahead of China; my guess is we'll get a few more years ahead of China, and the Chinese are hopping mad about this, hugely upset about it."
- 00:10:54talk to about a real war that's going on
- 00:10:56I know that uh something you've been
- 00:10:57very involved in is uh
- 00:11:00the Ukraine war and in particular uh I
- 00:11:03don't know how much you can talk about
- 00:11:04white stor and your your goal of having
- 00:11:07500,000 $500 drones destroy $5 million
- 00:11:12tanks so so how's that changing Warfare
- 00:11:14so I worked for the Secretary of Defense
- 00:11:16for seven years and and Tred to change
- 00:11:21the way we run our military I'm I'm not
- 00:11:23a particularly big fan of the military
- 00:11:24but it's very expensive and I wanted to
- 00:11:26see if I could be helpful and I think in
- 00:11:28my view I failed they gave me a medal so
- 00:11:32they must give medals to failure or you
- 00:11:35know whatever but my self-criticism was
- 00:11:38nothing has really changed and the
- 00:11:40system in America is not going to lead
- 00:11:43to real
- 00:11:44Innovation so watching the Russians use
- 00:11:48tanks to destroy apartment buildings
- 00:11:50with little old ladies and kids just
- 00:11:52drove me crazy so I decided to work on a
- 00:11:55company with your friend Sebastian thrun
- 00:11:57and a as a former faculty member here
- 00:11:59here and a whole bunch of Stanford
- 00:12:01people and the idea basically is to do
- 00:12:05two things use Ai and complicated
- 00:12:07powerful ways for these essentially
- 00:12:09robotic War and the second one is to
- 00:12:11lower the cost of the robots now you sit
- 00:12:14there and you go why would a good
- 00:12:16liberal like me do that and the answer
- 00:12:18is that the
- 00:12:20whole theory of armies is tanks
- 00:12:23artilleries and mortar and we can
- 00:12:25eliminate all of them so here what he's
- 00:12:27talking about is that UK Ukraine has
- 00:12:29been able to create really cheap and
- 00:12:31simple drones by spending just a couple
- 00:12:33hundred dollar Ukraine is creating 3D
- 00:12:35printed drones they carry a bomb drop it
- 00:12:38on a million dooll tank and they've been
- 00:12:39able to do that over and over again so
- 00:12:42there's this asymmetric Warfare
- 00:12:44happening between drones and more
- 00:12:46traditional artillery so there was an
- 00:12:48article that you and Henry Kissinger and
- 00:12:50Dan hleer uh wrote last year about the
- 00:12:54nature of knowledge and how it's
- 00:12:55evolving I had a discussion the other
- 00:12:57night about this as well so
- 00:12:59for most of History humans sort of had a
- 00:13:02mystical understanding of the universe
- 00:13:04and then there's the Scientific
- 00:13:05Revolution and the enlightenment um and
- 00:13:08in your article you argue that now these
- 00:13:10models are becoming so complicated and
- 00:13:14uh uh difficult to understand that we
- 00:13:17don't really know what's going on in
- 00:13:19them I'll take a quote from Richard fean
- 00:13:21he says what I cannot create I do not
- 00:13:23understand the saw this quote the other
- 00:13:25day but now people are creating things
- 00:13:26they do not that that they can create
- 00:13:28but they don't really understand what's
- 00:13:29inside of them is the nature of
- 00:13:31knowledge changing in a way are we going
- 00:13:33to have to start just taking the word
- 00:13:35for these models let them able being
- 00:13:37able to explain it to us the analogy I
- 00:13:39would offer is to teenagers if you have
- 00:13:42a teenager you know that they're human
- 00:13:44but you can't quite figure out what
- 00:13:45they're
- 00:13:46thinking um but somehow we've managed in
- 00:13:49society to adapt to the presence of
- 00:13:50teenagers right and they eventually grow
- 00:13:52out of it and this serious so it's
- 00:13:56probably the case that we're going to
- 00:13:58have knowledge systems that we cannot
- 00:14:01fully characterize M but we understand
- 00:14:04their boundaries right we understand the
- 00:14:06limits of what they can do and that's
- 00:14:08probably the best outcome we can get do
- 00:14:10you think we'll understand the
- 00:14:12limits we we'll get pretty good at it
- 00:14:14he's referencing the way that large
- 00:14:16language models work which is really
- 00:14:17essentially a blackbox you put in a
- 00:14:20prompt you get a response but we don't
- 00:14:21know why certain nodes within the
- 00:14:23algorithm light up and we don't know
- 00:14:25exactly how the answers come to be it is
- 00:14:27really a black box there's a lot lot of
- 00:14:29work being done right now trying to kind
- 00:14:31of unveil what is going on behind the
- 00:14:32curtain but we just don't know the
- 00:14:35consensus of my group that meets on uh
- 00:14:37every week is that eventually the way
- 00:14:40you'll do this uh it's called so-called
- 00:14:42adversarial AI is that there will there
- 00:14:45will actually be companies that you will
- 00:14:47hire and pay money to to break your AI
- 00:14:50system te so it'll be the red instead of
- 00:14:52human red teams which is what they do
- 00:14:54today you'll have whole companies and a
- 00:14:57whole industry of AI systems whose jobs
- 00:15:00are to break the existing AI systems and
- 00:15:02find their vulnerabilities especially
- 00:15:04the knowledge that they have that we
- 00:15:05can't figure out that makes sense to me
- 00:15:08it's also a great project for you here
- 00:15:10at Stanford because if you have a
- 00:15:12graduate student who has to figure out
- 00:15:13how to attack one of these large models
- 00:15:16and understand what it does that is a
- 00:15:18great skill to build the Next Generation
- 00:15:20so it makes sense to me that the two
- 00:15:22will travel together all right let's
- 00:15:24take some questions from the student
- 00:15:26there's one right there in the back just
- 00:15:27say your name
- 00:15:29you mentioned and this is related to
- 00:15:31comment right now I'm getting AI that
- 00:15:33actually does what you want you just
- 00:15:34mentioned adversarial AI I'm wondering
- 00:15:37if you could elaborate on that more so
- 00:15:38it seems to be besides obviously compute
- 00:15:41will increase and get more performant
- 00:15:43models but getting them to do what you
- 00:15:46want issue seems largely unanswered my
- 00:15:50well you have to assume that the current
- 00:15:52hallucination problems become less right
- 00:15:56in as the technology gets better and so
- 00:15:58forth I'm not suggesting it goes away
- 00:16:01and then you also have to assume that
- 00:16:03there are tests for E efficacy so there
- 00:16:05has to be a way of knowing that the
- 00:16:07thing exceeded so in the example that I
- 00:16:09gave of the Tik Tock competitor and by
- 00:16:11the way I was not arguing that you
- 00:16:12should illegally steal everybody's music
- 00:16:15what you would do if you're a Silicon
- 00:16:16Valley entrepreneur which hopefully all
- 00:16:18of you will be is if it took off then
- 00:16:20you'd hire a whole bunch of lawyers to
- 00:16:21go clean the mess up right but if if
- 00:16:24nobody uses your product it doesn't
- 00:16:26matter that you stole all the content
- 00:16:28and do not quote me right right you're
- 00:16:31you're on camera yeah that's right but
- 00:16:34but you see my point in other words
- 00:16:35Silicon Valley will run these tests and
- 00:16:37clean up the mess and that's typically
- 00:16:40how those things are done so so my own
- 00:16:42view is that you'll see more and more um
- 00:16:46performative systems with even better
- 00:16:48tests and eventually adversarial tests
- 00:16:50and that'll keep it within a box the
- 00:16:52technical term is called Chain of
- 00:16:54Thought reasoning and people believe
- 00:16:57that in the next few years you'll be
- 00:16:58able to generate a thousand steps of
- 00:17:00Chain of Thought reasoning right do this
- 00:17:03do this it's like building recipes right
- 00:17:05that the recipes you can run the recipe
- 00:17:07and you can actually test that It
- 00:17:09produced the correct outcome now that
- 00:17:11was maybe not my exact understanding of
- 00:17:13Chain of Thought reasoning my
- 00:17:14understanding of Chain of Thought
- 00:17:15reasoning which I think is accurate is
- 00:17:17when you break a problem down into its
- 00:17:19basic steps and you solve each step
- 00:17:21allowing for progression into the next
- 00:17:23step not only it allows you to kind of
- 00:17:25replay the steps it's more of how do you
- 00:17:27break problems down and then think
- 00:17:28through them step by step the amounts of
- 00:17:31money being thrown around are
- 00:17:34mindboggling and um I've chose I I
- 00:17:37essentially invest in everything because
- 00:17:39I can't figure out who's going to win
- 00:17:41and the amounts of money that are
- 00:17:43following me are so large I think some
- 00:17:46of it is because the early money has
- 00:17:48been made and the big money people who
- 00:17:50don't know what they're doing have to
- 00:17:52have an AI component and everything is
- 00:17:54now an AI investment so they can't tell
- 00:17:56the difference I Define ai as Learning
- 00:17:58System
- 00:17:59systems that actually learn so I think
- 00:18:01that's one of them the second is that
- 00:18:02there are very sophisticated new
- 00:18:05algorithms that are sort of post
- 00:18:07Transformers my friend my collaborator
- 00:18:09for a long time has invented a new non-
- 00:18:11Transformer architecture there's a group
- 00:18:13that I'm funding in Paris that has
- 00:18:15claims to have done the same thing so
- 00:18:17there there's enormous uh invention
- 00:18:19there a lot of things at Stanford and
- 00:18:21the final thing is that there is a
- 00:18:23belief in the market that the invention
- 00:18:26of intelligence has infinite return
- 00:18:29so let's say you have you put $50
- 00:18:31billion of capital into a company you
- 00:18:34have to make an awful lot of money from
- 00:18:36intelligence to pay that back so it's
- 00:18:38probably the case that we'll go through
- 00:18:40some huge investment bubble and then
- 00:18:43it'll sort itself out that's always been
- 00:18:44true in the past and it's likely to be
- 00:18:47true here and what you said earlier yeah
- 00:18:50so there's been something like a
- 00:18:52trillion dollars already invested into
- 00:18:54artificial intelligence and only 30
- 00:18:56billion of Revenue I think those are
- 00:18:57accurate numbers and really there just
- 00:19:00hasn't been a return on investment yet
- 00:19:02but again as he just mentioned that's
- 00:19:03been the theme on previous waves of
- 00:19:05Technology huge upfront investment and
- 00:19:08then it pays off in the end well I don't
- 00:19:10know what he's talking about here cuz
- 00:19:11didn't he run Google and Google has
- 00:19:13always been about being closed source
- 00:19:15and always tried to protect the
- 00:19:16algorithm at all costs so I don't know
- 00:19:18what he's referring to there you think
- 00:19:20that the leaders are pulling away from
- 00:19:22right now and
- 00:19:24and this is a
- 00:19:26really the question is um roughly the
- 00:19:29following there's a company called mrr
- 00:19:31in France they've done a really good job
- 00:19:34um and I'm I'm obviously an investor um
- 00:19:36they have produced their second version
- 00:19:38their third model is likely to be closed
- 00:19:41because it's so expensive they need
- 00:19:43revenue and they can't give their model
- 00:19:45away so this open source versus closed
- 00:19:48Source debate in our industry is huge
- 00:19:51and um my entire career was based on
- 00:19:55people being willing to share software
- 00:19:57in open source everything about me is
- 00:20:00open source much of Google's
- 00:20:02underpinnings were open source
- 00:20:04everything I've done technically what
- 00:20:06didn't he run Google and Google was all
- 00:20:08about staying closed source and
- 00:20:09everything about Google was Kept Secret
- 00:20:11at all times so I don't know what he's
- 00:20:13referring to there everything I've done
- 00:20:15technically and yet it may be that the
- 00:20:18capital costs which are so immense
- 00:20:21fundamentally Chang this how software is
- 00:20:22built you and I were talking um my own
- 00:20:26view of software programmers is that
- 00:20:27software programmers productivity will
- 00:20:29at least double MH there are three or
- 00:20:31four software companies that are trying
- 00:20:33to do that I've invested in all of them
- 00:20:36in the spirit and they're all trying to
- 00:20:38make software programmers more
- 00:20:40productive the most interesting one that
- 00:20:41I just met with is called augment and I
- 00:20:44I always think of an individual
- 00:20:45programmer and they said that's not our
- 00:20:46Target our Target are these 100 person
- 00:20:48software programming teams on millions
- 00:20:50of lines of code where nobody knows
- 00:20:52what's going on well that's a really
- 00:20:54good AI thing will they make money I
- 00:20:57hope so
- 00:20:59 So, a lot of questions here. Hi, at the very beginning you mentioned that the combination of the context window expansion, the agents, and the text-to-action is going to have unimaginable impacts. First of all, why is the combination important? And second, I know that you're not a crystal ball and you can't necessarily tell the future, but why do you think it's beyond anything that we could imagine?
- 00:21:25 I think largely because the context window allows you to solve the problem of recency. The current models take roughly a year to train; there are 18 months in all: six months of preparation, six months of training, six months of fine-tuning. So they're always out of date. With the context window you can feed in what just happened; you can ask it questions about the Hamas-Israel war, right, in context. That's very powerful; it becomes current, like Google.
- 00:21:54 Yeah, so that's essentially how SearchGPT works, for example, the new search from OpenAI: it can scour the web, scrape the web, and then take all of that information and put it into the context window. That is the recency he's talking about.
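The recency trick described here, scraping fresh documents and packing them into the model's context window, can be sketched in a few lines. This is a hedged illustration of the general retrieval idea, not OpenAI's actual SearchGPT internals; every name below is hypothetical.

```python
# Sketch of the "recency" idea: stuff fresh documents into the prompt so a
# model whose training ended a year ago can still answer current questions.
# All names here are made up for illustration.

def build_current_prompt(question: str, fresh_documents: list[str],
                         max_context_chars: int = 4000) -> str:
    """Pack recent documents into the context window ahead of the question."""
    context_parts: list[str] = []
    used = 0
    for doc in fresh_documents:          # e.g. freshly scraped articles
        if used + len(doc) > max_context_chars:
            break                        # respect the context-window budget
        context_parts.append(doc)
        used += len(doc)
    context = "\n---\n".join(context_parts)
    return (
        "Use only the documents below, which are more recent than your "
        "training data.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

docs = ["2024-08-01: headline A ...", "2024-08-02: headline B ..."]
prompt = build_current_prompt("What happened this week?", docs)
```

A real system would then send `prompt` to a chat-completion API; the key point is that freshness comes from the prompt, not from retraining the model.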
- 00:22:08 I'll give you an example with agents. I set up a foundation which is funding a nonprofit. I don't know if there are chemists in the room, and I don't really understand chemistry, but there's a tool called ChemCrow, an LLM-based system that learned chemistry. What they do is run it to generate chemistry hypotheses about proteins, and they have a lab which runs the tests overnight, and then it learns. That's a huge accelerant in chemistry, materials science, and so forth. So that's an agent model. And I think text-to-action can be understood as just having a lot of cheap programmers, right? And I don't think we understand what happens, and this is again your area of expertise, what happens when everyone has their own programmer. And I'm not talking about turning the lights on and off.
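The agent pattern in the ChemCrow example above, propose a hypothesis, test it overnight, feed the result back, reduces to a simple loop. The sketch below uses hypothetical stand-in functions; a real system would call an LLM and a robotic lab where these toys sit.

```python
# Minimal sketch of the hypothesize -> test -> learn agent loop described
# for ChemCrow. Every function is a hypothetical stand-in for illustration.
import random

def propose_hypothesis(knowledge: list[str]) -> str:
    """Stand-in for an LLM generating a new hypothesis from past results."""
    return f"hypothesis-{len(knowledge)}"

def run_overnight_experiment(hypothesis: str) -> bool:
    """Stand-in for the lab that tests the hypothesis overnight."""
    random.seed(hypothesis)              # deterministic toy 'experiment'
    return random.random() > 0.5

def agent_loop(iterations: int) -> list[str]:
    knowledge: list[str] = []            # accumulated experimental results
    for _ in range(iterations):
        h = propose_hypothesis(knowledge)
        outcome = run_overnight_experiment(h)
        # feed the lab result back so the next proposal can build on it
        knowledge.append(f"{h}: {'confirmed' if outcome else 'refuted'}")
    return knowledge

results = agent_loop(3)
```

The acceleration Schmidt describes comes from closing this loop without a human in the middle: each night's results become next morning's context.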
- 00:22:59 You know, imagine another example: for some reason you don't like Google, so you say, build me a Google competitor. Yeah, you personally. You don't build it; you say: build me a Google competitor, search the web, build a UI, make a good copy, add generative AI in an interesting way, do it in 30 seconds, and see if it works. Right? So a lot of people believe that the incumbents, including Google, are vulnerable to this kind of an attack. Now we'll see.
- 00:23:31 How can we stop AI from influencing public opinion with misinformation, especially during the upcoming election? What are the short- and long-term solutions?
- 00:23:40 Most of the misinformation in this upcoming election, and globally, will be on social media, and the social media companies are not organized well enough to police it. If you look at TikTok, for example, there are lots of accusations that TikTok is favoring one kind of misinformation over another, and there are many people who claim, without proof that I'm aware of, that the Chinese are forcing them to do it. I think we just have a mess here, and the country is going to have to learn critical thinking. That may be an impossible challenge for the US, but the fact that somebody told you something does not mean that it's true. I think the greatest threat to democracy is misinformation, because we're going to get really good at it. When I managed YouTube, the biggest problems we had were that people would upload false videos and people would die as a result. And we had a no-death policy. Shocking.
- 00:24:37 Yeah, and it's not even about making deepfakes or that kind of misinformation; just muddying the waters is enough to make the entire topic kind of untouchable.
- 00:24:45 I'm really curious
about the text-to-action and its impact on, for example, computer science education. I'm wondering what thoughts you have on how CS education should transform to meet the age.
- 00:25:00 Well, I'm assuming that computer scientists as a group in undergraduate school will always have a programmer buddy with them. So when you learn your first for loop and so forth, you'll have a tool that will be your natural partner, and that's how the teaching will go on: the professor, he or she, will talk about the concepts, but you'll engage with the tool that way. That's my guess. Yes, ma'am, behind
you.
- 00:25:29 So here I have a slightly different view. I think in the long run there probably isn't going to be a need for programmers; eventually the LLMs will become so sophisticated that they're writing their own kind of code, and maybe it gets to a point where we can't even read that code anymore. So there is a world in which it is not necessary to have programmers, researchers, or computer scientists. I'm not sure that's the way it's going to be, but there is a timeline in which that happens.
- 00:25:52 The most interesting country is India, because the top AI people come from India to the US, and we should let India keep some of its top talent; not all of them, but some of them. They don't have the kind of training facilities and programs that we so richly have here. To me, India is the big swing state in that regard. China's lost; it's not going to come back, and they're not going to change the regime, as much as people wish them to. Japan and Korea are clearly in our camp. Taiwan is a fantastic country whose software is terrible, so that's not going to work; amazing hardware, though. And in the rest of the world there are not a lot of other good choices that are big. Germany? Europe is screwed up because of Brussels. It's not a new fact; I spent 10 years fighting them, and I worked really hard to get them to fix the EU AI Act, and they still have all the restrictions that make it very difficult to do our kind of research in Europe. My French friends have spent all their time battling Brussels, and Macron, who's a personal friend, is fighting hard for this, so France, I think, has a chance. I don't see Germany coming along, and the rest is not big enough.
- 00:27:00 Given the capabilities that you
envision these models having, should we still spend time learning to code?
- 00:27:05 Yeah. So she asked, should we still learn to code? Because ultimately it's the old question of why you study English if you can already speak English: you get better at it, right? You really do need to understand how these systems work, and I feel very strongly yes. Yes, sir?
- 00:27:21 So these were the most important parts of the interview, and with that being said, this is it for today's video. See you again next week with another video.
- AI impact
- Eric Schmidt
- Context windows
- Artificial intelligence
- AGI
- Misinformation
- Programming futures
- Work culture
- Global AI competition