Google CEO ERIC SCHMIDT BANNED Interview LEAKED: "Future is SCARY" (AI Pep Talk)

00:27:30
https://www.youtube.com/watch?v=8_2yFCm5sSM

Summary

TLDRIn a removed interview with Stanford, Eric Schmidt, former CEO of Google, discussed the unprecedented potential impact of AI technologies. He anticipates that large context windows, LLM agents, and text actions will revolutionize how we interact with AI, creating changes as significant as those made by social media, if not more. Schmidt addressed the scale of resources needed to reach Artificial General Intelligence (AGI), suggesting partnerships with Canada for its energy resources. He also criticized Google for prioritizing work-life balance over competitive edge. Discussing global AI leadership, Schmidt stressed the need for massive investments as AI technology evolves. Additionally, Schmidt acknowledged issues like misinformation and its impact on democracy, urging improved public critical thinking and effective misinformation regulation. He foresees a future where AI substantially augments programming but may eventually reduce the need for human coders as AI systems become more autonomous.

Takeaways

  • 🌍 AI will influence global scales significantly.
  • πŸ”— Large context windows enhance AI's capabilities.
  • πŸ’‘ AI-generated content is still limited by security.
  • ⚑ Energy demand is critical for AI advancement.
  • πŸ‘₯ Human programmers' roles might evolve with AI.
  • βš– Google's culture prioritizes balance over competition.
  • πŸ›‘ Misinformation is a key threat in AI usage.
  • πŸ€– Future AI may learn independently of programmers.
  • 🏭 Collaboration with energy-rich countries is vital.
  • βš” Reducing warfare cost through AI-driven robotics.

Timeline

  • 00:00:00 - 00:05:00

    The video begins with a discussion on Eric Schmidt's interview about future technological impacts, particularly from large context windows and LLM agents in AI. Schmidt emphasizes the transformative potential of these technologies, likening it to or even surpassing the impact of social media, enabling complex tasks like building applications based on simple commands.

  • 00:05:00 - 00:10:00

    Eric Schmidt talks about the energy requirements for AI development, especially for achieving AGI, suggesting collaboration with Canada for hydro power. He argues using more data isn’t necessary for AGI, as smart data usage could suffice. Schmidt critiques Google for prioritizing work-life balance over competitive drive, contrasting startup cultures that demand intense work ethics.

  • 00:10:00 - 00:15:00

    Discusses the geopolitical implications of AI development, highlighting the competition between the US and China. AI and tech advancements are seen as pivotal areas of global dominance. Schmidt mentions the impact of cheap, effective warfare technologies like drones in Ukraine, suggesting a shift in military strategies due to AI interventions.

  • 00:15:00 - 00:20:00

    Schmidt describes the complexities and potential vulnerabilities in AI models, comparing them to unpredictable teenagers. He suggests the need for adversarial AI to test and strengthen system defenses. He also acknowledges the significant investment in AI, indicating both potential and risks in the tech industry's future.

  • 00:20:00 - 00:27:30

    Future implications of AI are addressed, emphasizing context windows, agents, and text-to-action capabilities. There is discussion on misinformation challenges, and how social media platforms struggle to manage false content. Schmidt concludes by reflecting on the future of programming and education, suggesting eventual integration with AI developments.

Show more

Mind Map

Video Q&A

  • Where was Eric Schmidt's controversial interview uploaded and what happened after?

    The video was initially uploaded by Stanford but later removed.

  • What does Eric Schmidt predict for AI's future?

    Eric Schmidt believes that large context windows, LLM agents, and text actions will revolutionize AI's impact globally.

  • What are Schmidt's views on energy requirements for AI?

    Schmidt highlights the energy demands for AI development and suggests cooperation with Canada due to its energy resources.

  • What controversial statement did Schmidt make about Google's work culture?

    Schmidt criticizes Google for prioritizing work-life balance over aggressive competition.

  • How does Schmidt illustrate the complexity of understanding AI systems?

    He compares it to teenagers whose behaviors are understandable despite some unpredictability.

  • What does Schmidt say about AI competition between countries?

    He suggests that AI will necessitate vast data and resources, potentially limiting it to well-funded countries.

  • How can AI's influence on misinformation and public opinion be managed?

    Maintaining critical thinking and regulating social media are essential to address misinformation.

  • What implications does Schmidt foresee for the future of programming and education?

    Schmidt predicts AI systems will enhance productivity but also that programmer roles may evolve.

View more video summaries

Get instant access to free YouTube video summaries powered by AI!
Subtitles
en
Auto Scroll:
  • 00:00:00
    when they are delivered at scale it's
  • 00:00:02
    going to have an impact on the world at
  • 00:00:03
    a scale that no one understands yet Eric
  • 00:00:06
    Schmidt the former CEO of Google just
  • 00:00:08
    did an interview at Stanford where he
  • 00:00:10
    talked about a lot of controversial
  • 00:00:12
    stuff initially the interview was
  • 00:00:13
    uploaded on Stanford's YouTube channel
  • 00:00:15
    but a couple days later the interview
  • 00:00:16
    was taken down from YouTube and
  • 00:00:18
    everywhere else but today I was somehow
  • 00:00:20
    able to access the interview video after
  • 00:00:22
    spending multiple hours so let's watch
  • 00:00:24
    it together and dissect some important
  • 00:00:26
    parts of the interview in the next year
  • 00:00:28
    you're going to see very large context
  • 00:00:31
    Windows agents and text action when they
  • 00:00:36
    are delivered at scale it's going to
  • 00:00:38
    have an impact on the world at a scale
  • 00:00:40
    that no one understands yet much bigger
  • 00:00:43
    than the horrific impact we've had on by
  • 00:00:45
    social media right in my view so here's
  • 00:00:48
    why in a context window you can
  • 00:00:51
    basically use that as short-term memory
  • 00:00:54
    and I was shocked that context Windows
  • 00:00:57
    could get this long the technical
  • 00:00:58
    reasons have to do with the fact it's
  • 00:01:00
    hard to serve hard to calculate and so
  • 00:01:01
    forth the interesting thing about
  • 00:01:03
    short-term memory is when you feed the
  • 00:01:06
    the you ask it a question read 20 books
  • 00:01:10
    you give it the text of the books is the
  • 00:01:12
    query and you say tell me what they say
  • 00:01:14
    it forgets the middle which is exactly
  • 00:01:16
    how human brains work too right that's
  • 00:01:19
    where we are with respect to agents
  • 00:01:22
    there are people who are now building
  • 00:01:24
    essentially llm agents and the way they
  • 00:01:27
    do it is they read something like
  • 00:01:29
    chemistry they discover the principles
  • 00:01:31
    of chemistry and then they test it and
  • 00:01:34
    then they add that back into their
  • 00:01:36
    understanding right that's extremely
  • 00:01:39
    powerful and then the third thing as I
  • 00:01:41
    mentioned is text action so I'll give
  • 00:01:44
    you an example the government is in the
  • 00:01:46
    process of trying to ban Tik Tok we'll
  • 00:01:48
    see if that actually happens if Tik Tok
  • 00:01:51
    is banned here's what I propose each and
  • 00:01:53
    every one of you do say to your LL the
  • 00:01:57
    following make me a copy of Tik Tok
  • 00:02:01
    steal all the users steal all the music
  • 00:02:04
    put my preferences in it produce this
  • 00:02:07
    program in the next 30 seconds release
  • 00:02:10
    it and in one hour if it's not viral do
  • 00:02:13
    something different along the same lines
  • 00:02:15
    that's the command boom boom boom boom
  • 00:02:20
    right you understand how powerful that
  • 00:02:23
    is if you can go from arbitrary language
  • 00:02:26
    to arbitrary digital command which is
  • 00:02:28
    essentially what python in this scario
  • 00:02:30
    is imagine that each and every human on
  • 00:02:33
    the planet has their own programmer that
  • 00:02:36
    actually does what they want as opposed
  • 00:02:38
    to the programmers that work for me who
  • 00:02:39
    don't do what I ask
  • 00:02:42
    right the programmers here know what I'm
  • 00:02:44
    talking about so imagine a non arrogant
  • 00:02:46
    programmer that actually does what you
  • 00:02:48
    want and you don't have to pay all that
  • 00:02:50
    money to and there's infinite supply of
  • 00:02:53
    these programs this is all within the
  • 00:02:54
    next year or two very soon so we've
  • 00:02:57
    already discussed on this channel a
  • 00:02:58
    number of different versions of this
  • 00:03:00
    whether you're talking about ader Devon
  • 00:03:02
    pythagora or just using agents to
  • 00:03:04
    collaborate with each other in code
  • 00:03:06
    there are just so many great options for
  • 00:03:08
    coding assistance right now however AI
  • 00:03:10
    coders that can actually build full
  • 00:03:12
    stack complex applications were not
  • 00:03:14
    quite there yet but hopefully soon and
  • 00:03:16
    also what he's describing of just saying
  • 00:03:18
    download all the music and the secrets
  • 00:03:20
    and recreate that's not really possible
  • 00:03:22
    right now obviously all of that stuff is
  • 00:03:25
    behind security walls and you can't just
  • 00:03:26
    download all that stuff so if he's
  • 00:03:28
    saying hey rce the functionality you can
  • 00:03:31
    certainly do that those three things and
  • 00:03:35
    I'm quite convinced it's the union of
  • 00:03:37
    those three
  • 00:03:38
    things it will happen in the next
  • 00:03:41
    wave so you asked about what else is
  • 00:03:43
    going to happen um every six months I
  • 00:03:47
    oscillate so we're on a it's an even odd
  • 00:03:49
    oscillation so at the moment the gap
  • 00:03:53
    between the frontier models which
  • 00:03:55
    they're now only three a few who they
  • 00:03:58
    are and everybody else
  • 00:04:00
    appears to me to be getting larger 6
  • 00:04:03
    months ago I was convinced that the Gap
  • 00:04:05
    was getting smaller so I invested lots
  • 00:04:07
    of money in the little companies now I'm
  • 00:04:09
    not so
  • 00:04:10
    sure and I'm talking to the big
  • 00:04:12
    companies and the big companies are
  • 00:04:14
    telling me that they need 10 billion 20
  • 00:04:17
    billion 50 billion 100
  • 00:04:20
    billion Stargate is a what 100 billion
  • 00:04:23
    right they're very very hard I talked
  • 00:04:26
    Sam Alman is a close friend he believes
  • 00:04:29
    that it's going to take about 300
  • 00:04:30
    billion maybe more I pointed out to him
  • 00:04:34
    that i' done the calculation on the
  • 00:04:35
    amount of energy required and I and I
  • 00:04:39
    then in the spirit of full disclosure
  • 00:04:41
    went to the White House on Friday and
  • 00:04:43
    told them that we need to become best
  • 00:04:45
    friends with Canada because Canada has
  • 00:04:48
    really nice people helped invent Ai and
  • 00:04:52
    lots of hydr power because we as a
  • 00:04:54
    country do not have enough power to do
  • 00:04:56
    this the alternative is to have the
  • 00:04:59
    Arabs it and I like the Arabs personally
  • 00:05:02
    I spent lots of time there right but
  • 00:05:04
    they're not going to adhere to our
  • 00:05:06
    national security rules whereas Canada
  • 00:05:08
    and the US are part of a Triumph it
  • 00:05:10
    where we all agree so these hundred
  • 00:05:12
    billion $300 billion do data centers
  • 00:05:14
    electricity starts becoming the scarce
  • 00:05:16
    resource now first of all we definitely
  • 00:05:18
    don't have enough energy resources to
  • 00:05:20
    achieve AGI it's just not possible right
  • 00:05:22
    now and Eric is also assuming that we're
  • 00:05:24
    going to need more and more data and
  • 00:05:26
    larger models to reach AGI and I think
  • 00:05:29
    that's also not actually true Sam Alman
  • 00:05:31
    has said similar things he has said that
  • 00:05:33
    we need to be able to do more with less
  • 00:05:35
    or even the same amount of data because
  • 00:05:37
    we've already used all the data that
  • 00:05:39
    Humanity has ever created there's really
  • 00:05:41
    no more left so we're going to need to
  • 00:05:43
    either figure out how to create
  • 00:05:44
    synthetic data that is valuable not just
  • 00:05:46
    derivative and we're also going to have
  • 00:05:48
    to do more with the data that we do have
  • 00:05:51
    um you were at Google for a long time
  • 00:05:53
    and uh they invented the Transformer
  • 00:05:56
    architecture um it's all Peter's fault
  • 00:05:59
    thanks to to brilliant people over there
  • 00:06:01
    like Peter and Jeff Dean and everyone um
  • 00:06:04
    but now it doesn't seem like
  • 00:06:06
    they're they they've kind of lost the
  • 00:06:08
    initiative to open Ai and even the last
  • 00:06:10
    leaderboard I saw anthropics Claud was
  • 00:06:11
    at the top of the list um I asked SAR
  • 00:06:15
    this you didn't really give me a very
  • 00:06:17
    sharp answer maybe maybe you have a a
  • 00:06:19
    sharper or more objective uh explanation
  • 00:06:22
    for what's going on there I'm no longer
  • 00:06:24
    a Google employee yes um in the spirit
  • 00:06:26
    of whole disclosure um Google decided
  • 00:06:29
    that work life balance and going home
  • 00:06:32
    early and working from home was more
  • 00:06:34
    important than
  • 00:06:38
    winning okay so that is the line that
  • 00:06:40
    got him in trouble it was everywhere all
  • 00:06:42
    over Twitter all over the news when he
  • 00:06:44
    said Google prioritized work life
  • 00:06:46
    balance going home early not working as
  • 00:06:49
    hard as the competitor to winning they
  • 00:06:51
    chose work life balance over winning and
  • 00:06:53
    that's actually a pretty common
  • 00:06:54
    perception of Google and the startups
  • 00:06:56
    the reason startups work is because the
  • 00:06:58
    people work like hell and I'm sorry to
  • 00:07:01
    be so blunt but the fact of the matter
  • 00:07:04
    is if you all leave the university and
  • 00:07:06
    go found a company you're not going to
  • 00:07:09
    let people work from home and only come
  • 00:07:11
    in one day a week if you want to compete
  • 00:07:13
    against the other startups when when in
  • 00:07:16
    the early days of Google Microsoft was
  • 00:07:18
    like that exactly but now it seems to be
  • 00:07:21
    and there's there's a long history of in
  • 00:07:24
    my industry our industry I guess of
  • 00:07:26
    companies winning in a genuinely
  • 00:07:29
    creative way and really dominating a
  • 00:07:31
    space and not making this the next
  • 00:07:33
    transition it's very well documented and
  • 00:07:37
    I think that the truth is Founders are
  • 00:07:40
    special the founders need to be in
  • 00:07:42
    charge the founders are difficult to
  • 00:07:44
    work with they push people hard um as
  • 00:07:47
    much as we can dislike elon's personal
  • 00:07:49
    Behavior look at what he gets out of
  • 00:07:51
    people uh I had dinner with him and he
  • 00:07:53
    was flying he I was in Montana He was
  • 00:07:56
    flying that night at 10:00 p.m. to have
  • 00:07:58
    a meeting at midnight with x. a right
  • 00:08:02
    think about it I was in Taiwan different
  • 00:08:05
    country different culture and they said
  • 00:08:07
    that and this is tsmc who I'm very
  • 00:08:10
    impressed with and they have a rule that
  • 00:08:12
    the starting phds coming out of the
  • 00:08:16
    they're good good physicists work in the
  • 00:08:19
    factory on the basement floor now can
  • 00:08:22
    you imagine getting American physicist
  • 00:08:24
    to do that with phds highly unlikely
  • 00:08:27
    different work ethic and the problem
  • 00:08:29
    here the the reason I'm being so harsh
  • 00:08:31
    about work is that these are systems
  • 00:08:34
    which have Network effects so time
  • 00:08:36
    matters a lot and in most businesses
  • 00:08:40
    time doesn't matter that much right you
  • 00:08:42
    have lots of time you know Coke and
  • 00:08:44
    Pepsi will still be around and the fight
  • 00:08:46
    between Coke and Pepsi will continue to
  • 00:08:48
    go along and it's all glacial right when
  • 00:08:51
    I dealt with Telos the typical Telco
  • 00:08:53
    deal would take 18 months to sign right
  • 00:08:58
    there's no reason to take 18 months to
  • 00:09:00
    do anything get it done just we're in a
  • 00:09:03
    period of Maximum growth maximum gain so
  • 00:09:06
    here he was asked about competition with
  • 00:09:08
    China's Ai and AGI and that's his answer
  • 00:09:11
    we're ahead we need to stay ahead and we
  • 00:09:13
    need money is going to play a role or
  • 00:09:16
    competition with China as well so I was
  • 00:09:18
    the chairman of an AI commission that
  • 00:09:20
    sort of looked at this very
  • 00:09:21
    carefully and um you can read it it's
  • 00:09:24
    about 752 pages and I'll just summarize
  • 00:09:27
    it by saying we're ahead we need to stay
  • 00:09:29
    ahead and we need lots of money to do so
  • 00:09:32
    our customers were the senate in the
  • 00:09:34
    house um and out of that came the chips
  • 00:09:38
    act and a lot of other stuff like that
  • 00:09:40
    um the a rough scenario is that if you
  • 00:09:44
    assume the frontier models drive forward
  • 00:09:47
    and a few of the open source models it's
  • 00:09:49
    likely that a very small number of
  • 00:09:51
    companies can play this game countries
  • 00:09:53
    excuse me what are those countries or
  • 00:09:56
    who are they countries with a lot of
  • 00:09:58
    money and a lot of t tent strong
  • 00:10:00
    Educational Systems and a willingness to
  • 00:10:02
    win the US is one of them China is
  • 00:10:05
    another one how many others are there
  • 00:10:07
    are there any
  • 00:10:09
    others I don't know maybe but certainly
  • 00:10:12
    the the in your lifetimes the battle
  • 00:10:14
    between you the US and China for
  • 00:10:17
    knowledge Supremacy is going to be the
  • 00:10:19
    big fight right so the US government
  • 00:10:22
    banned uh essentially the Nvidia chips
  • 00:10:24
    although they weren't allowed to say
  • 00:10:26
    that was what they were doing but they
  • 00:10:27
    actually did that into China
  • 00:10:29
    um they have about a 10year chip advant
  • 00:10:32
    we have a a roughly 10-year chip
  • 00:10:34
    advantage in terms of subdv that is sub5
  • 00:10:37
    n years roughly 10 years wow um and so
  • 00:10:41
    you're going to have so an example would
  • 00:10:43
    be today we're a couple of years ahead
  • 00:10:45
    of China my guess is we'll get a few
  • 00:10:47
    more years ahead of China and the
  • 00:10:49
    Chinese are whopping mad about this it's
  • 00:10:51
    like hugely upset about it well let's
  • 00:10:54
    talk to about a real war that's going on
  • 00:10:56
    I know that uh something you've been
  • 00:10:57
    very involved in is uh
  • 00:11:00
    the Ukraine war and in particular uh I
  • 00:11:03
    don't know how much you can talk about
  • 00:11:04
    white stor and your your goal of having
  • 00:11:07
    500,000 $500 drones destroy $5 million
  • 00:11:12
    tanks so so how's that changing Warfare
  • 00:11:14
    so I worked for the Secretary of Defense
  • 00:11:16
    for seven years and and Tred to change
  • 00:11:21
    the way we run our military I'm I'm not
  • 00:11:23
    a particularly big fan of the military
  • 00:11:24
    but it's very expensive and I wanted to
  • 00:11:26
    see if I could be helpful and I think in
  • 00:11:28
    my view I failed they gave me a medal so
  • 00:11:32
    they must give medals to failure or you
  • 00:11:35
    know whatever but my self-criticism was
  • 00:11:38
    nothing has really changed and the
  • 00:11:40
    system in America is not going to lead
  • 00:11:43
    to real
  • 00:11:44
    Innovation so watching the Russians use
  • 00:11:48
    tanks to destroy apartment buildings
  • 00:11:50
    with little old ladies and kids just
  • 00:11:52
    drove me crazy so I decided to work on a
  • 00:11:55
    company with your friend Sebastian thrun
  • 00:11:57
    and a as a former faculty member here
  • 00:11:59
    here and a whole bunch of Stanford
  • 00:12:01
    people and the idea basically is to do
  • 00:12:05
    two things use Ai and complicated
  • 00:12:07
    powerful ways for these essentially
  • 00:12:09
    robotic War and the second one is to
  • 00:12:11
    lower the cost of the robots now you sit
  • 00:12:14
    there and you go why would a good
  • 00:12:16
    liberal like me do that and the answer
  • 00:12:18
    is that the
  • 00:12:20
    whole theory of armies is tanks
  • 00:12:23
    artilleries and mortar and we can
  • 00:12:25
    eliminate all of them so here what he's
  • 00:12:27
    talking about is that UK Ukraine has
  • 00:12:29
    been able to create really cheap and
  • 00:12:31
    simple drones by spending just a couple
  • 00:12:33
    hundred dollar Ukraine is creating 3D
  • 00:12:35
    printed drones they carry a bomb drop it
  • 00:12:38
    on a million dooll tank and they've been
  • 00:12:39
    able to do that over and over again so
  • 00:12:42
    there's this asymmetric Warfare
  • 00:12:44
    happening between drones and more
  • 00:12:46
    traditional artillery so there was an
  • 00:12:48
    article that you and Henry Kissinger and
  • 00:12:50
    Dan hleer uh wrote last year about the
  • 00:12:54
    nature of knowledge and how it's
  • 00:12:55
    evolving I had a discussion the other
  • 00:12:57
    night about this as well so
  • 00:12:59
    for most of History humans sort of had a
  • 00:13:02
    mystical understanding of the universe
  • 00:13:04
    and then there's the Scientific
  • 00:13:05
    Revolution and the enlightenment um and
  • 00:13:08
    in your article you argue that now these
  • 00:13:10
    models are becoming so complicated and
  • 00:13:14
    uh uh difficult to understand that we
  • 00:13:17
    don't really know what's going on in
  • 00:13:19
    them I'll take a quote from Richard fean
  • 00:13:21
    he says what I cannot create I do not
  • 00:13:23
    understand the saw this quote the other
  • 00:13:25
    day but now people are creating things
  • 00:13:26
    they do not that that they can create
  • 00:13:28
    but they don't really understand what's
  • 00:13:29
    inside of them is the nature of
  • 00:13:31
    knowledge changing in a way are we going
  • 00:13:33
    to have to start just taking the word
  • 00:13:35
    for these models let them able being
  • 00:13:37
    able to explain it to us the analogy I
  • 00:13:39
    would offer is to teenagers if you have
  • 00:13:42
    a teenager you know that they're human
  • 00:13:44
    but you can't quite figure out what
  • 00:13:45
    they're
  • 00:13:46
    thinking um but somehow we've managed in
  • 00:13:49
    society to adapt to the presence of
  • 00:13:50
    teenagers right and they eventually grow
  • 00:13:52
    out of it and this serious so it's
  • 00:13:56
    probably the case that we're going to
  • 00:13:58
    have knowledge systems that we cannot
  • 00:14:01
    fully characterize M but we understand
  • 00:14:04
    their boundaries right we understand the
  • 00:14:06
    limits of what they can do and that's
  • 00:14:08
    probably the best outcome we can get do
  • 00:14:10
    you think we'll understand the
  • 00:14:12
    limits we we'll get pretty good at it
  • 00:14:14
    he's referencing the way that large
  • 00:14:16
    language models work which is really
  • 00:14:17
    essentially a blackbox you put in a
  • 00:14:20
    prompt you get a response but we don't
  • 00:14:21
    know why certain nodes within the
  • 00:14:23
    algorithm light up and we don't know
  • 00:14:25
    exactly how the answers come to be it is
  • 00:14:27
    really a black box there's a lot lot of
  • 00:14:29
    work being done right now trying to kind
  • 00:14:31
    of unveil what is going on behind the
  • 00:14:32
    curtain but we just don't know the
  • 00:14:35
    consensus of my group that meets on uh
  • 00:14:37
    every week is that eventually the way
  • 00:14:40
    you'll do this uh it's called so-called
  • 00:14:42
    adversarial AI is that there will there
  • 00:14:45
    will actually be companies that you will
  • 00:14:47
    hire and pay money to to break your AI
  • 00:14:50
    system te so it'll be the red instead of
  • 00:14:52
    human red teams which is what they do
  • 00:14:54
    today you'll have whole companies and a
  • 00:14:57
    whole industry of AI systems whose jobs
  • 00:15:00
    are to break the existing AI systems and
  • 00:15:02
    find their vulnerabilities especially
  • 00:15:04
    the knowledge that they have that we
  • 00:15:05
    can't figure out that makes sense to me
  • 00:15:08
    it's also a great project for you here
  • 00:15:10
    at Stanford because if you have a
  • 00:15:12
    graduate student who has to figure out
  • 00:15:13
    how to attack one of these large models
  • 00:15:16
    and understand what it does that is a
  • 00:15:18
    great skill to build the Next Generation
  • 00:15:20
    so it makes sense to me that the two
  • 00:15:22
    will travel together all right let's
  • 00:15:24
    take some questions from the student
  • 00:15:26
    there's one right there in the back just
  • 00:15:27
    say your name
  • 00:15:29
    you mentioned and this is related to
  • 00:15:31
    comment right now I'm getting AI that
  • 00:15:33
    actually does what you want you just
  • 00:15:34
    mentioned adversarial AI I'm wondering
  • 00:15:37
    if you could elaborate on that more so
  • 00:15:38
    it seems to be besides obviously compute
  • 00:15:41
    will increase and get more performant
  • 00:15:43
    models but getting them to do what you
  • 00:15:46
    want issue seems largely unanswered my
  • 00:15:50
    well you have to assume that the current
  • 00:15:52
    hallucination problems become less right
  • 00:15:56
    in as the technology gets better and so
  • 00:15:58
    forth I'm not suggesting it goes away
  • 00:16:01
    and then you also have to assume that
  • 00:16:03
    there are tests for E efficacy so there
  • 00:16:05
    has to be a way of knowing that the
  • 00:16:07
    thing exceeded so in the example that I
  • 00:16:09
    gave of the Tik Tock competitor and by
  • 00:16:11
    the way I was not arguing that you
  • 00:16:12
    should illegally steal everybody's music
  • 00:16:15
    what you would do if you're a Silicon
  • 00:16:16
    Valley entrepreneur which hopefully all
  • 00:16:18
    of you will be is if it took off then
  • 00:16:20
    you'd hire a whole bunch of lawyers to
  • 00:16:21
    go clean the mess up right but if if
  • 00:16:24
    nobody uses your product it doesn't
  • 00:16:26
    matter that you stole all the content
  • 00:16:28
    and do not quote me right right you're
  • 00:16:31
    you're on camera yeah that's right but
  • 00:16:34
    but you see my point in other words
  • 00:16:35
    Silicon Valley will run these tests and
  • 00:16:37
    clean up the mess and that's typically
  • 00:16:40
    how those things are done so so my own
  • 00:16:42
    view is that you'll see more and more um
  • 00:16:46
    performative systems with even better
  • 00:16:48
    tests and eventually adversarial tests
  • 00:16:50
    and that'll keep it within a box the
  • 00:16:52
    technical term is called Chain of
  • 00:16:54
    Thought reasoning and people believe
  • 00:16:57
    that in the next few years you'll be
  • 00:16:58
    able to generate a thousand steps of
  • 00:17:00
    Chain of Thought reasoning right do this
  • 00:17:03
    do this it's like building recipes right
  • 00:17:05
    that the recipes you can run the recipe
  • 00:17:07
    and you can actually test that It
  • 00:17:09
    produced the correct outcome now that
  • 00:17:11
    was maybe not my exact understanding of
  • 00:17:13
    Chain of Thought reasoning my
  • 00:17:14
    understanding of Chain of Thought
  • 00:17:15
    reasoning which I think is accurate is
  • 00:17:17
    when you break a problem down into its
  • 00:17:19
    basic steps and you solve each step
  • 00:17:21
    allowing for progression into the next
  • 00:17:23
    step not only it allows you to kind of
  • 00:17:25
    replay the steps it's more of how do you
  • 00:17:27
    break problems down and then think
  • 00:17:28
    through them step by step the amounts of
  • 00:17:31
    money being thrown around are
  • 00:17:34
    mindboggling and um I've chose I I
  • 00:17:37
    essentially invest in everything because
  • 00:17:39
    I can't figure out who's going to win
  • 00:17:41
    and the amounts of money that are
  • 00:17:43
    following me are so large I think some
  • 00:17:46
    of it is because the early money has
  • 00:17:48
    been made and the big money people who
  • 00:17:50
    don't know what they're doing have to
  • 00:17:52
    have an AI component and everything is
  • 00:17:54
    now an AI investment so they can't tell
  • 00:17:56
    the difference I Define ai as Learning
  • 00:17:58
    System
  • 00:17:59
    systems that actually learn so I think
  • 00:18:01
    that's one of them the second is that
  • 00:18:02
    there are very sophisticated new
  • 00:18:05
    algorithms that are sort of post
  • 00:18:07
    Transformers my friend my collaborator
  • 00:18:09
    for a long time has invented a new non-
  • 00:18:11
    Transformer architecture there's a group
  • 00:18:13
    that I'm funding in Paris that has
  • 00:18:15
    claims to have done the same thing so
  • 00:18:17
    there there's enormous uh invention
  • 00:18:19
    there a lot of things at Stanford and
  • 00:18:21
    the final thing is that there is a
  • 00:18:23
    belief in the market that the invention
  • 00:18:26
    of intelligence has infinite return
  • 00:18:29
    so let's say you have you put $50
  • 00:18:31
    billion of capital into a company you
  • 00:18:34
    have to make an awful lot of money from
  • 00:18:36
    intelligence to pay that back so it's
  • 00:18:38
    probably the case that we'll go through
  • 00:18:40
    some huge investment bubble and then
  • 00:18:43
    it'll sort itself out that's always been
  • 00:18:44
    true in the past and it's likely to be
  • 00:18:47
    true here and what you said earlier yeah
  • 00:18:50
    so there's been something like a
  • 00:18:52
    trillion dollars already invested into
  • 00:18:54
    artificial intelligence and only 30
  • 00:18:56
    billion of Revenue I think those are
  • 00:18:57
    accurate numbers and really there just
  • 00:19:00
    hasn't been a return on investment yet
  • 00:19:02
    but again as he just mentioned that's
  • 00:19:03
    been the theme on previous waves of
  • 00:19:05
    Technology huge upfront investment and
  • 00:19:08
    then it pays off in the end well I don't
  • 00:19:10
    know what he's talking about here cuz
  • 00:19:11
    didn't he run Google and Google has
  • 00:19:13
    always been about being closed source
  • 00:19:15
    and always tried to protect the
  • 00:19:16
    algorithm at all costs so I don't know
  • 00:19:18
    what he's referring to there you think
  • 00:19:20
    that the leaders are pulling away from
  • 00:19:22
    right now and
  • 00:19:24
    and this is a
  • 00:19:26
    really the question is um roughly the
  • 00:19:29
    following there's a company called mrr
  • 00:19:31
    in France they've done a really good job
  • 00:19:34
    um and I'm I'm obviously an investor um
  • 00:19:36
    they have produced their second version
  • 00:19:38
    their third model is likely to be closed
  • 00:19:41
    because it's so expensive they need
  • 00:19:43
    revenue and they can't give their model
  • 00:19:45
    away so this open source versus closed
  • 00:19:48
    Source debate in our industry is huge
  • 00:19:51
    and um my entire career was based on
  • 00:19:55
    people being willing to share software
  • 00:19:57
    in open source everything about me is
  • 00:20:00
    open source much of Google's
  • 00:20:02
    underpinnings were open source
  • 00:20:04
    everything I've done technically what
  • 00:20:06
    didn't he run Google and Google was all
  • 00:20:08
    about staying closed source and
  • 00:20:09
    everything about Google was Kept Secret
  • 00:20:11
    at all times so I don't know what he's
  • 00:20:13
    referring to there everything I've done
  • 00:20:15
    technically and yet it may be that the
  • 00:20:18
    capital costs which are so immense
  • 00:20:21
    fundamentally Chang this how software is
  • 00:20:22
    built you and I were talking um my own
  • 00:20:26
    view of software programmers is that
  • 00:20:27
    software programmers productivity will
  • 00:20:29
    at least double MH there are three or
  • 00:20:31
    four software companies that are trying
  • 00:20:33
    to do that I've invested in all of them
  • 00:20:36
    in the spirit and they're all trying to
  • 00:20:38
    make software programmers more
  • 00:20:40
    productive the most interesting one that
  • 00:20:41
    I just met with is called augment and I
  • 00:20:44
    I always think of an individual
  • 00:20:45
    programmer and they said that's not our
  • 00:20:46
    Target our Target are these 100 person
  • 00:20:48
    software programming teams on millions
  • 00:20:50
    of lines of code where nobody knows
  • 00:20:52
    what's going on well that's a really
  • 00:20:54
    good AI thing will they make money I
  • 00:20:57
    hope so
  • 00:20:59
    so a lot of questions here hi um so at
  • 00:21:02
    the very beginning yes ma um at the very
  • 00:21:05
    beginning you mentioned that there's the
  • 00:21:07
    combination of the context window
  • 00:21:09
    expansion the agents and the text to
  • 00:21:11
    action is going to have unimaginable
  • 00:21:13
    impacts first of all why is the
  • 00:21:16
    combination important and second of all
  • 00:21:18
    I know that you know you're not like a
  • 00:21:20
    crystal ball and you can't necessarily
  • 00:21:22
    tell the future but why do you think
  • 00:21:23
    it's beyond anything that we could
  • 00:21:25
    imagine I think largely because the
  • 00:21:27
    context window allows you to solve the
  • 00:21:29
    problem of recency the current models
  • 00:21:32
    take a year to train roughly six six
  • 00:21:35
    there's 18 months six months of
  • 00:21:37
    preparation six months of training six
  • 00:21:39
    months of fine-tuning so they're always
  • 00:21:41
    out of date contact window you can feed
  • 00:21:44
    what happened like you can ask it
  • 00:21:46
    questions about the um the Hamas Israel
  • 00:21:50
    war right in a context that's very
  • 00:21:52
    powerful it becomes current like Google
  • 00:21:54
    yeah so that's essentially how search
  • 00:21:55
    GPT works for example the new search
  • 00:21:58
    from open AI can scour the web scrape
  • 00:22:01
    the web and then take all of that
  • 00:22:02
    information and put it into the context
  • 00:22:04
    text window that is the recency he's
  • 00:22:06
    talking about um in the case of Agents
  • 00:22:08
    I'll give you an example I set up a
  • 00:22:10
    foundation which is funding a nonprofit
  • 00:22:13
    which starts there's a u i don't know if
  • 00:22:15
    there's Chemists in the room that I
  • 00:22:16
    don't really understand chemistry
  • 00:22:18
    there's a a tool called chem cro C which
  • 00:22:22
    was an llm based system that learned
  • 00:22:24
    chemistry and what they do is they run
  • 00:22:27
    it to generate chemistry hypotheses
  • 00:22:29
    about proteins and they have a lab which
  • 00:22:32
    runs the tests overnight and then it
  • 00:22:34
    learns that's a huge acceleration
  • 00:22:37
    accelerant in chemistry Material Science
  • 00:22:39
    and so forth so that's that's an agent
  • 00:22:42
    model and I think the text to action can
  • 00:22:44
    be understood by just having a lot of
  • 00:22:47
    cheap programmers right um and I don't
  • 00:22:49
    think we understand what happens and
  • 00:22:52
    this is again your area of expertise
  • 00:22:54
    what happens when everyone has their own
  • 00:22:55
    programmer and I'm not talking about
  • 00:22:57
    turning on and off the light
  • 00:22:59
    you know I imagine another example um
  • 00:23:02
    for some reason you don't like Google so
  • 00:23:04
    you say build me a Google competitor
  • 00:23:06
    yeah you personally you don't build me a
  • 00:23:08
    Google
  • 00:23:08
    competitor uh search the web build a UI
  • 00:23:12
    make a good copy um add generative AI in
  • 00:23:16
    an interesting way do it in 30 seconds
  • 00:23:20
    and see if it
  • 00:23:21
    works
  • 00:23:23
    right so a lot of people believe that
  • 00:23:25
    the incumbents including Google are
  • 00:23:28
    vulnerable to this kind of an attack now
  • 00:23:31
    we'll see how can we stop AI from
  • 00:23:33
    influencing public opinion
  • 00:23:35
    misinformation especially during the
  • 00:23:36
    upcoming election what are the short and
  • 00:23:38
    long-term solutions
  • 00:23:40
    from most of the misinformation in this
  • 00:23:43
    upcoming election and globally will be
  • 00:23:45
    on social media and the social media
  • 00:23:47
    companies are not organized well enough
  • 00:23:49
    to police it if you look at Tik Tok for
  • 00:23:52
    example there are lots of accusations
  • 00:23:55
    that Tik Tok is favoring one kind of
  • 00:23:57
    misinformation over another and there
  • 00:23:58
    are many people who claim without proof
  • 00:24:01
    that I'm aware of that the Chinese are
  • 00:24:03
    forcing them to do it I think we just we
  • 00:24:06
    have a mess here and
  • 00:24:08
    um the country is going to have to learn
  • 00:24:11
    critical
  • 00:24:12
    thinking that may be an impossible
  • 00:24:14
    challenge for the us but but the fact
  • 00:24:17
    that somebody told you something does
  • 00:24:18
    not mean that it's true I think that the
  • 00:24:20
    the greatest threat to democracy is
  • 00:24:22
    misinformation because we're going to
  • 00:24:24
    get really good at it um when Ian man
  • 00:24:27
    managed YouTube
  • 00:24:29
    the biggest problems we had on YouTube
  • 00:24:30
    were that people would upload false
  • 00:24:33
    videos and people would die as a result
  • 00:24:35
    and we had a no death policy shocking
  • 00:24:37
    yeah and also it's not even about
  • 00:24:39
    potentially making deep fakes or kind of
  • 00:24:41
    misinformation just muddying the waters
  • 00:24:43
    is enough to make the entire topic kind
  • 00:24:45
    of Untouchable um I'm really curious
  • 00:24:48
    about the text to action and its impact
  • 00:24:51
    on for example Computer Science
  • 00:24:54
    Education wondering what you have
  • 00:24:55
    thoughts on like how cus education
  • 00:24:59
    should
  • 00:25:00
    transform kind of Meet the age well I'm
  • 00:25:03
    assuming that computer scientists as a
  • 00:25:05
    group in undergraduate school will
  • 00:25:08
    always have a programmer buddy with them
  • 00:25:10
    so when you when you learn learn your
  • 00:25:12
    first for Loop and so forth and so on
  • 00:25:15
    you'll have a tool that will be your
  • 00:25:17
    natural partner and then that's how the
  • 00:25:19
    teaching will go on that the professor
  • 00:25:21
    you know here or she will talk about the
  • 00:25:23
    concepts but you'll engage with it that
  • 00:25:25
    way and that's my guess yes ma'am behind
  • 00:25:27
    you so so here I have a slightly
  • 00:25:29
    different view I think in the long run
  • 00:25:31
    there probably isn't going to be the
  • 00:25:32
    need for programmers eventually the llms
  • 00:25:35
    will become so sophisticated they're
  • 00:25:37
    writing their own kind of code maybe it
  • 00:25:39
    gets to a point where we can't even read
  • 00:25:41
    that code anymore so there is this world
  • 00:25:43
    in which it is not necessary to have
  • 00:25:44
    programmers researchers or computer
  • 00:25:47
    scientists I'm not sure that's the way
  • 00:25:48
    it's going to be but there is a timeline
  • 00:25:50
    in which that happens the most
  • 00:25:52
    interesting country is India because the
  • 00:25:55
    top AI people come from India to the US
  • 00:25:58
    and we should let India keep some of its
  • 00:26:00
    top talent not all of them but some of
  • 00:26:02
    them um and they don't have the kind of
  • 00:26:04
    training facilities and programs that we
  • 00:26:06
    so richly have here to me India is the
  • 00:26:08
    big swing state in that regard China's
  • 00:26:10
    Lost it's not going to not going to come
  • 00:26:12
    back they're not going to change the
  • 00:26:14
    regime as much as people wish them to do
  • 00:26:17
    Japan and Korea are clearly in our camp
  • 00:26:20
    Taiwan is a fantastic country whose
  • 00:26:22
    software is terrible so that's not going
  • 00:26:24
    to going to work um amazing hardware and
  • 00:26:28
    and in the rest of the world there are
  • 00:26:30
    not a lot of other good choices that are
  • 00:26:31
    big German the EUR Europe is screwed up
  • 00:26:34
    because of Brussels it's not a new fact
  • 00:26:36
    I spent 10 years fighting them and I
  • 00:26:39
    worked really hard to get them to fix
  • 00:26:42
    the a the EU act and they still have all
  • 00:26:44
    the restrictions that make it very
  • 00:26:46
    difficult to do our kind of research in
  • 00:26:47
    Europe my French friends have spent all
  • 00:26:50
    their time battling Brussels and mcon
  • 00:26:52
    who's a personal friend is fighting hard
  • 00:26:55
    for this and so France I think has a
  • 00:26:57
    chance I don't see in I don't see
  • 00:26:58
    Germany coming and the rest is not big
  • 00:27:00
    enough given the capabilities that you
  • 00:27:03
    envision these models having should we
  • 00:27:05
    still spend time learning to code yeah
  • 00:27:07
    so here she asked should we still learn
  • 00:27:09
    to code because because ultimately it's
  • 00:27:11
    it's the old thing of why do you study
  • 00:27:12
    English if you can speak English you get
  • 00:27:15
    better at it right you really do need to
  • 00:27:17
    understand how these systems work and I
  • 00:27:19
    feel very strongly yes sir so these were
  • 00:27:21
    the most important parts of the
  • 00:27:22
    interview and with that being said this
  • 00:27:24
    is it for today's video see you again
  • 00:27:25
    next week with another video
Tags
  • AI impact
  • Eric Schmidt
  • Google
  • Context windows
  • Artificial intelligence
  • AGI
  • Misinformation
  • Programming futures
  • Work culture
  • Global AI competition