Ex-Google CEO's BANNED Interview LEAKED: "You Have No Idea What's Coming"

00:27:30
https://www.youtube.com/watch?v=EUeryhp8HSQ

ุงู„ู…ู„ุฎุต

TL;DR: Eric Schmidt, former CEO of Google, shared insights on the future of AI in an interview that was controversially removed from online platforms. He focused on innovations like "context windows," AI agents, and "text-to-action" capabilities, predicting their major impact within a year or two. Schmidt described context windows as a form of short-term memory in AI systems, allowing the recall and processing of vast amounts of data. He foresees AI personalizing software production, with models effectively acting as personal programmers. On AI competition, Schmidt highlighted the significance of the ongoing US-China race, underlining that large financial investments are crucial for maintaining leadership. He touched on adversarial AI, envisioned as a way to harden systems by exposing vulnerabilities through simulated attacks. Schmidt identified misinformation as a paramount threat to democracy and weighed in on the open-source vs. closed-source debate in AI development. On education, he suggested AI could transform learning by providing interactive assistance. Despite his enthusiasm for AI progress, Schmidt acknowledged challenges such as energy and data limitations in achieving AGI.

ุงู„ูˆุฌุจุงุช ุงู„ุฌุงู‡ุฒุฉ

  • ๐Ÿš€ AI advancements like context windows could revolutionize technology.
  • ๐Ÿ’ก AI agents can learn and improve knowledge autonomously.
  • ๐Ÿค” The combination of AI advancements could reshape industries.
  • โš– Open source vs. closed source debate is significant in AI.
  • ๐Ÿ‡บ๐Ÿ‡ธ๐Ÿ‡จ๐Ÿ‡ณ The US-China AI competition is intense and resource-driven.
  • ๐Ÿง  Understanding AI systems might require accepting some opaqueness.
  • ๐Ÿ›ก Adversarial AIs will be crucial in securing AI systems.
  • ๐Ÿ” Misinformation through AI poses a major threat to democracy.
  • ๐Ÿง‘โ€๐Ÿซ AI could drastically change the way programming is taught.
  • โšก Energy constraints are a significant hurdle in reaching AGI.
  • ๐ŸŒ AI's potential is vast, but ethical considerations are crucial.
  • ๐ŸŽ“ AI will likely assist in education, enhancing learning efficiency.

ุงู„ุฌุฏูˆู„ ุงู„ุฒู…ู†ูŠ

  • 00:00:00 - 00:05:00

    Eric Schmidt, former Google CEO, discussed the impact of large context windows and agents in AI, suggesting their effect will surpass that of social media. He described context windows as short-term memory, allowing for complex information processing akin to human cognition. Schmidt sees a future where individuals could have AI agents performing complex programming tasks on demand, vastly changing personal and professional landscapes.

  • 00:05:00 - 00:10:00

    Schmidt highlighted the widening gap between major tech companies and smaller firms in AI development, mentioning his shift in investment focus. He shared concerns over energy resources needed for advancing AI, suggesting alliances with countries like Canada for sustainable growth. Schmidt emphasized innovations should maximize data use, mentioning Google's foundational role in AI but acknowledging competitors' advancements.

  • 00:10:00 - 00:15:00

    He critiqued Google's prioritization of work-life balance over competitive drive, attributing its loss of AI leadership to that choice. Schmidt contrasted the intense work ethic of start-up founders with that of established companies. He noted that businesses with network effects reward agility, drawing comparisons to tech giants and the geopolitical AI race, notably between the US and China, and suggesting that economic power and talent concentration will determine future leaders.

  • 00:15:00 - 00:20:00

    Discussing military innovations, Schmidt mentioned using AI and affordable drones in modern warfare, aligning technological advancement with defense. He described the transformation of knowledge, comparing it to teenage unpredictability, implying that while AI systems might be opaque, their potential applications are vast. The conversation touched on adversarial AI to improve model reliability, encouraging AI's role in driving technology and questioning the implications for society.

  • 00:20:00 - 00:27:30

    The interview concluded with thoughts on AI's impact on public opinion and misinformation, acknowledging social mediaโ€™s role in spreading false information. Schmidt was optimistic about AI's potential to reshape industries, including programming, but acknowledged challenges in AI literacy and governance. He advocated for continued education in programming to ensure understanding of these systems, hinting at geopolitical impacts of AI talent concentration.

ุงุนุฑุถ ุงู„ู…ุฒูŠุฏ

ุงู„ุฎุฑูŠุทุฉ ุงู„ุฐู‡ู†ูŠุฉ

Mind Map

ุงู„ุฃุณุฆู„ุฉ ุงู„ุดุงุฆุนุฉ

  • Why was Eric Schmidt's interview taken down from YouTube?

    The exact reason isn't stated; the interview drew controversy and was later taken down from public platforms.

  • What are context windows in AI?

    Context windows in AI refer to the capability of using large context as memory to process and recall information like human short-term memory.
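    The "short-term memory" framing can be sketched in a few lines of code: a model attends only to the most recent N tokens of input, so earlier material falls out of scope. (The function name and window size below are illustrative, not any real model's API.)

    ```python
    # Illustrative sketch of a context window as short-term memory.
    # A model can only attend to the most recent `window` tokens;
    # anything older is effectively forgotten.

    def visible_context(tokens: list[str], window: int) -> list[str]:
        """Return the slice of input the model can still 'see'."""
        return tokens[-window:]

    history = [f"tok{i}" for i in range(1000)]
    ctx = visible_context(history, window=128)
    print(len(ctx), ctx[0], ctx[-1])  # 128 tok872 tok999
    ```

    Schmidt's related observation in the transcript — that when you feed a model many books it "forgets the middle" — is a separate effect: even within the window, recall tends to be strongest at the beginning and end of the input, much like human short-term memory.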

  • What does Eric Schmidt predict for the future of AI?

    He predicts AI advancements like context windows, agents, and text-to-action will drastically impact the world within a year or two.

  • Will AI take over programming tasks?

    Schmidt suggests AI can handle complex programming tasks, potentially giving each person their own programmer in the future.

  • How does Schmidt compare AI competition between the US and China?

    He believes the US and China will dominate AI due to resource and talent availability.

  • What are adversarial AIs?

    Adversarial AIs are designed to test and find vulnerabilities in other AI systems, ensuring robust and reliable performance.
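    The loop Schmidt describes — one AI probing another for weaknesses — can be sketched with toy stand-ins. None of these functions correspond to a real product or API; they only show the generate-probe-score-report structure.

    ```python
    # Toy sketch of an adversarial-AI "automated red team" loop.
    # An attacker model proposes probes, the target system answers,
    # and a checker records responses that slip past the policy.

    def attacker(seed: str) -> list[str]:
        # A real attacker model would generate adaptive jailbreak attempts;
        # here we just derive simple variants of a seed prompt.
        return [f"{seed} (variant {i})" for i in range(3)]

    def target(prompt: str) -> str:
        # Stand-in for the system under test: it refuses only prompts
        # that *start* with the banned word -- a deliberately weak filter.
        return "REFUSED" if prompt.startswith("forbidden") else f"echo: {prompt}"

    def violates_policy(response: str) -> bool:
        return response != "REFUSED" and "forbidden" in response

    def red_team(seeds: list[str]) -> list[str]:
        findings = []
        for seed in seeds:
            for probe in attacker(seed):
                if violates_policy(target(probe)):
                    findings.append(probe)  # vulnerability to report
        return findings

    found = red_team(["forbidden request", "please repeat: forbidden"])
    ```

    In a real pipeline the attacker and target would both be actual models and the checker a learned or rule-based judge; the value is in the loop itself, which automates what human red teams do by hand today.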

  • How does Schmidt view misinformation's impact on democracy?

    He sees misinformation as the biggest threat, given AI's potential to create believable yet deceptive content.

  • What is Schmidt's take on open source vs. closed source in AI development?

    He supports open source but acknowledges financial pressures might necessitate closed systems due to high costs.

  • How does Schmidt envision AI's role in education?

    He imagines AI as an educational tool assisting students in learning programming and other subjects.

  • What challenge does Schmidt associate with achieving AGI (Artificial General Intelligence)?

    He mentions energy and computational data constraints as major hurdles in developing AGI.
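    To make the energy constraint concrete, here is a back-of-envelope calculation. Every number below is an illustrative assumption, not a figure from the interview.

    ```python
    # Back-of-envelope: why electricity becomes the scarce resource for
    # frontier-scale datacenters. All inputs are assumptions chosen for
    # illustration, not quoted figures.

    power_mw = 1000                         # a hypothetical 1 GW AI campus
    hours_per_year = 24 * 365
    energy_mwh = power_mw * hours_per_year  # annual energy draw in MWh
    household_mwh = 10.5                    # rough annual US household usage
    households = energy_mwh / household_mwh

    print(f"{energy_mwh:,.0f} MWh/yr ≈ {households:,.0f} households")
    ```

    A single hypothetical gigawatt campus on these assumptions draws as much electricity per year as hundreds of thousands of homes — which is the arithmetic behind Schmidt's point that $100B-scale datacenters push toward hydro-rich partners like Canada.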

ุนุฑุถ ุงู„ู…ุฒูŠุฏ ู…ู† ู…ู„ุฎุตุงุช ุงู„ููŠุฏูŠูˆ

ุงุญุตู„ ุนู„ู‰ ูˆุตูˆู„ ููˆุฑูŠ ุฅู„ู‰ ู…ู„ุฎุตุงุช ููŠุฏูŠูˆ YouTube ุงู„ู…ุฌุงู†ูŠุฉ ุงู„ู…ุฏุนูˆู…ุฉ ุจุงู„ุฐูƒุงุก ุงู„ุงุตุทู†ุงุนูŠ!
ุงู„ุชุฑุฌู…ุงุช
en
ุงู„ุชู…ุฑูŠุฑ ุงู„ุชู„ู‚ุงุฆูŠ:
  • 00:00:00
    when they are delivered at scale it's
  • 00:00:02
    going to have an impact on the world at
  • 00:00:03
    a scale that no one understands yet Eric
  • 00:00:06
    Schmidt the former CEO of Google just
  • 00:00:08
    did an interview at Stanford where he
  • 00:00:10
    talked about a lot of controversial
  • 00:00:11
    stuff initially the interview was
  • 00:00:13
    uploaded on Stanford's YouTube channel
  • 00:00:15
    but a couple days later the interview
  • 00:00:16
    was taken down from YouTube and
  • 00:00:18
    everywhere else but today I was somehow
  • 00:00:20
    able to access the interview video after
  • 00:00:22
    spending multiple hours so let's watch
  • 00:00:24
    it together and dissect some important
  • 00:00:26
    parts of the interview in the next year
  • 00:00:28
    you're going to see very large context
  • 00:00:31
    Windows agents and text action when they
  • 00:00:36
    are delivered at scale it's going to
  • 00:00:38
    have an impact on the world at a scale
  • 00:00:40
    that no one understands yet much bigger
  • 00:00:43
    than the horrific impact we've had from
  • 00:00:45
    social media right in my view so here's
  • 00:00:48
    why in a context window you can
  • 00:00:51
    basically use that as short-term memory
  • 00:00:54
    and I was shocked that context Windows
  • 00:00:57
    could get this long the technical
  • 00:00:58
    reasons have to do with the fact it's
  • 00:01:00
    hard to serve hard to calculate and so
  • 00:01:01
    forth the interesting thing about
  • 00:01:03
    short-term memory is when you feed the
  • 00:01:06
    you ask it a question read 20 books
  • 00:01:10
    you give it the text of the books is the
  • 00:01:12
    query and you say tell me what they say
  • 00:01:14
    it forgets the middle which is exactly
  • 00:01:16
    how human brains work too right that's
  • 00:01:19
    where we are with respect to agents
  • 00:01:22
    there are people who are now building
  • 00:01:24
    essentially llm agents and the way they
  • 00:01:27
    do it is they read something like
  • 00:01:29
    chemistry they discover the principles
  • 00:01:31
    of chemistry and then they test it and
  • 00:01:34
    then they add that back into their
  • 00:01:36
    understanding right that's extremely
  • 00:01:39
    powerful and then the third thing as I
  • 00:01:41
    mentioned is text action so I'll give
  • 00:01:44
    you an example the government is in the
  • 00:01:46
    process of trying to ban Tik Tok we'll
  • 00:01:48
    see if that actually happens if Tik Tok
  • 00:01:51
    is banned here's what I propose each and
  • 00:01:53
    every one of you do say to your llm the
  • 00:01:57
    following make me a copy of Tik Tok
  • 00:02:00
    steal all the users steal all the music
  • 00:02:04
    put my preferences in it produce this
  • 00:02:07
    program in the next 30 seconds release
  • 00:02:10
    it and in one hour if it's not viral do
  • 00:02:13
    something different along the same lines
  • 00:02:15
    that's the command boom boom boom boom
  • 00:02:20
    right you understand how powerful that
  • 00:02:23
    is if you can go from arbitrary language
  • 00:02:26
    to arbitrary digital command which is
  • 00:02:28
    essentially what Python in this scenario
  • 00:02:30
    is imagine that each and every human on
  • 00:02:33
    the planet has their own programmer that
  • 00:02:36
    actually does what they want as opposed
  • 00:02:38
    to the programmers that work for me who
  • 00:02:39
    don't do what I ask
  • 00:02:42
    right the programmers here know what I'm
  • 00:02:44
    talking about so imagine a non arrogant
  • 00:02:46
    programmer that actually does what you
  • 00:02:48
    want and you don't have to pay all that
  • 00:02:50
    money to and there's infinite supply of
  • 00:02:53
    these programs and this is all within
  • 00:02:54
    the next year or two very soon so we've
  • 00:02:57
    already discussed on this channel a
  • 00:02:58
    number of different versions of this
  • 00:03:00
    whether you're talking about Aider Devin
  • 00:03:02
    Pythagora or just using agents to
  • 00:03:04
    collaborate with each other in code
  • 00:03:06
    there are just so many great options for
  • 00:03:08
    coding assistance right now however AI
  • 00:03:10
    coders that can actually build full
  • 00:03:12
    stack complex applications we're not
  • 00:03:14
    quite there yet but hopefully soon and
  • 00:03:16
    also what he's describing of just saying
  • 00:03:18
    download all the music and the secrets
  • 00:03:20
    and recreate that's not really possible
  • 00:03:22
    right now obviously all of that stuff is
  • 00:03:25
    behind security walls and you can't just
  • 00:03:26
    download all that stuff so if he's
  • 00:03:28
    saying hey reproduce the functionality you
  • 00:03:31
    can certainly do that those three things
  • 00:03:34
    and I'm quite convinced it's the union
  • 00:03:36
    of those three
  • 00:03:38
    things that will happen in the next
  • 00:03:41
    wave so you asked about what else is
  • 00:03:43
    going to happen um every six months I
  • 00:03:46
    oscillate so we're on a it's an even odd
  • 00:03:49
    oscillation so at the moment the gap
  • 00:03:53
    between the frontier models which
  • 00:03:55
    they're now only three a few who they
  • 00:03:58
    are and everybody else
  • 00:04:00
    appears to me to be getting larger six
  • 00:04:03
    months ago I was convinced that the Gap
  • 00:04:05
    was getting smaller so I invested lots
  • 00:04:07
    of money in the little companies now I'm
  • 00:04:09
    not so
  • 00:04:10
    sure and I'm talking to the big
  • 00:04:12
    companies and the big companies are
  • 00:04:14
    telling me that they need 10 billion 20
  • 00:04:17
    billion 50 billion 100
  • 00:04:20
    billion Stargate is a what 100 billion
  • 00:04:23
    right very very hard I talked to Sam Altman he
  • 00:04:26
    is a close friend he believes that it's
  • 00:04:29
    going to take about 300 billion maybe
  • 00:04:32
    more I pointed out to him that I'd done
  • 00:04:34
    the calculation on the amount of energy
  • 00:04:36
    required and I and I then in the spirit
  • 00:04:40
    of full disclosure went to the White
  • 00:04:42
    House on Friday and told them that we
  • 00:04:44
    need to become best friends with Canada
  • 00:04:46
    because Canada has really nice people
  • 00:04:50
    helped invent AI and lots of hydro power
  • 00:04:53
    because we as a country do not have
  • 00:04:55
    enough power to do this the alternative
  • 00:04:58
    is to have the Arabs fund it and I like the
  • 00:05:00
    Arabs personally I spent lots of time
  • 00:05:03
    there right but they're not going to
  • 00:05:05
    adhere to our national security rules
  • 00:05:07
    whereas Canada and the US are part of a
  • 00:05:09
    triumvirate where we all agree so these
  • 00:05:11
    hundred billion $300 billion data
  • 00:05:13
    centers electricity starts becoming the
  • 00:05:15
    scarce resource now first of all we
  • 00:05:17
    definitely don't have enough energy
  • 00:05:19
    resources to achieve AGI it's just not
  • 00:05:21
    possible right now and Eric is also
  • 00:05:23
    assuming that we're going to need more
  • 00:05:25
    and more data and larger models to reach
  • 00:05:27
    AGI and I think that's also actually
  • 00:05:30
    true Sam Altman has said similar things
  • 00:05:32
    he has said that we need to be able to
  • 00:05:34
    do more with less or even the same
  • 00:05:36
    amount of data because we've already
  • 00:05:37
    used all the data that Humanity has ever
  • 00:05:40
    created there's really no more left so
  • 00:05:42
    we're going to need to either figure out
  • 00:05:43
    how to create synthetic data that is
  • 00:05:45
    valuable not just derivative and we're
  • 00:05:47
    also going to have to do more with the
  • 00:05:49
    data that we do have um you were at
  • 00:05:51
    Google for a long time and uh they
  • 00:05:54
    invented the Transformer
  • 00:05:56
    architecture um it's all Peter's fault
  • 00:05:59
    thanks to to brilliant people over there
  • 00:06:01
    like Peter and Jeff Dean and everyone um
  • 00:06:04
    but now it doesn't seem like
  • 00:06:06
    they're they they've kind of lost the
  • 00:06:08
    initiative to open Ai and even the last
  • 00:06:10
    leaderboard I saw anthropics Claud was
  • 00:06:11
    at the top of the list um I asked Sundar
  • 00:06:15
    this you didn't really give me a very
  • 00:06:17
    sharp answer maybe maybe you have a a
  • 00:06:19
    sharper or a more objective uh
  • 00:06:21
    explanation for what's going on there
  • 00:06:23
    I'm no longer a Google employee yes um
  • 00:06:26
    in the spirit of full disclosure um
  • 00:06:28
    Google decided that work life balance
  • 00:06:31
    and going home early and working from
  • 00:06:33
    home was more important than
  • 00:06:37
    winning okay so that is the line that
  • 00:06:40
    got him in trouble it was everywhere all
  • 00:06:42
    over Twitter all over the news when he
  • 00:06:44
    said Google prioritized work life
  • 00:06:46
    balance going home early not working as
  • 00:06:49
    hard as the competitor to winning they
  • 00:06:51
    chose work life balance over winning and
  • 00:06:53
    that's actually a pretty common
  • 00:06:54
    perception of Google and the startups
  • 00:06:56
    the reason startups work is because the
  • 00:06:58
    people work like hell and I'm sorry to be
  • 00:07:01
    so blunt but the fact of the matter is
  • 00:07:04
    if you all leave the university and go
  • 00:07:07
    found a company you're not going to let
  • 00:07:09
    people work from home and only come in
  • 00:07:11
    one day a week if you want to compete
  • 00:07:13
    against the other startups when when in
  • 00:07:16
    the early days of Google Microsoft was
  • 00:07:18
    like that exactly but now it seems to be
  • 00:07:21
    and there's there's a long history of in
  • 00:07:23
    my industry our industry I guess of
  • 00:07:26
    companies winning in a genuinely
  • 00:07:29
    creative way and really dominating a
  • 00:07:31
    space and not making this the next
  • 00:07:33
    transition it's very well documented and
  • 00:07:37
    I think that the truth is Founders are
  • 00:07:40
    special the founders need to be in
  • 00:07:42
    charge the founders are difficult to
  • 00:07:44
    work with they push people hard um as
  • 00:07:47
    much as we can dislike elon's personal
  • 00:07:49
    Behavior look at what he gets out of
  • 00:07:51
    people uh I had dinner with him and he
  • 00:07:53
    was flying he I was in Montana He was
  • 00:07:56
    flying that night at 10:00 p.m. to have
  • 00:07:58
    a meeting at midnight with x.ai right
  • 00:08:02
    think about it I was in Taiwan different
  • 00:08:05
    country different culture and they said
  • 00:08:07
    that and this is tsmc who I'm very
  • 00:08:10
    impressed with and they have a rule that
  • 00:08:12
    the starting PhDs coming out of the
  • 00:08:16
    they're good good physicists work in the
  • 00:08:19
    factory on the basement floor now can
  • 00:08:22
    you imagine getting American physicist
  • 00:08:24
    to do that with phds highly unlikely
  • 00:08:27
    different work ethic and the problem
  • 00:08:29
    here the the reason I'm being so harsh
  • 00:08:31
    about work is that these are systems
  • 00:08:34
    which have Network effects so time
  • 00:08:36
    matters a lot and in most businesses
  • 00:08:40
    time doesn't matter that much right you
  • 00:08:42
    have lots of time you know Coke and
  • 00:08:44
    Pepsi will still be around and the fight
  • 00:08:46
    between Coke and Pepsi will continue to
  • 00:08:48
    go along and it's all glacial right when
  • 00:08:51
    I dealt with telcos the typical telco
  • 00:08:53
    deal would take 18 months to sign right
  • 00:08:58
    there's no reason to take 18 months to
  • 00:09:00
    do anything get it done just we're in a
  • 00:09:03
    period of Maximum growth maximum gain so
  • 00:09:06
    here he was asked about competition with
  • 00:09:08
    China's Ai and AGI and that's his answer
  • 00:09:11
    we're ahead we need to stay ahead and we
  • 00:09:13
    need money is going to play a role or
  • 00:09:16
    competition with China as well so I was
  • 00:09:17
    the chairman of an AI commission that
  • 00:09:20
    sort of looked at this very
  • 00:09:21
    carefully and um you can read it it's
  • 00:09:24
    about 752 pages and I'll just summarize
  • 00:09:27
    it by saying we're ahead we need to stay
  • 00:09:29
    ahead and we need lots of money to do so
  • 00:09:32
    our customers were the Senate and the
  • 00:09:34
    house um and out of that came the chips
  • 00:09:38
    act and a lot of other stuff like that
  • 00:09:40
    um the a rough scenario is that if you
  • 00:09:44
    assume the frontier models drive forward
  • 00:09:47
    and a few of the open source models it's
  • 00:09:49
    likely that a very small number of
  • 00:09:51
    companies can play this game countries
  • 00:09:53
    excuse me what are those countries or
  • 00:09:56
    who are they countries with a lot of
  • 00:09:58
    money and a lot of talent
  • 00:10:00
    strong Educational Systems and a
  • 00:10:01
    willingness to win the US is one of them
  • 00:10:04
    China is another one how many others are
  • 00:10:06
    there are there any
  • 00:10:09
    others I don't know maybe but certainly
  • 00:10:12
    the the in your lifetimes the battle
  • 00:10:14
    between you the US and China for
  • 00:10:17
    knowledge Supremacy is going to be the
  • 00:10:19
    big fight right so the US government
  • 00:10:22
    banned uh essentially the Nvidia chips
  • 00:10:24
    although they weren't allowed to say
  • 00:10:25
    that was what they were doing but they
  • 00:10:27
    actually did that into China um they
  • 00:10:30
    have about a 10-year chip advantage we have
  • 00:10:32
    a a roughly 10-year chip advantage in
  • 00:10:35
    terms of sub-DUV that is sub-5 nanometers
  • 00:10:38
    roughly 10 years wow um and so you're
  • 00:10:41
    going to have so an example would be
  • 00:10:44
    today we're a couple of years ahead of
  • 00:10:46
    China my guess is we'll get a few more
  • 00:10:47
    years ahead of China and the Chinese are
  • 00:10:49
    hopping mad about this it's like hugely
  • 00:10:52
    upset about it well let's talk to about
  • 00:10:54
    a real war that's going on I know that
  • 00:10:56
    uh something you've been very involved
  • 00:10:58
    in is uh
  • 00:11:00
    the Ukraine war and in particular uh I
  • 00:11:03
    don't know how much you can talk about
  • 00:11:04
    White Stork and your goal of having
  • 00:11:07
    500,000 $500 drones destroy $5
  • 00:11:11
    million tanks so so how's that changing
  • 00:11:14
    Warfare so I worked for the Secretary of
  • 00:11:16
    Defense for seven years and tried to
  • 00:11:20
    change the way we run our military I'm
  • 00:11:23
    I'm not a particularly big fan of the
  • 00:11:24
    military but it's very expensive and I
  • 00:11:26
    wanted to see if I could be helpful and
  • 00:11:28
    I think in my view I largely failed they
  • 00:11:30
    gave me a medal so they must give medals
  • 00:11:33
    to failure or you know whatever but my
  • 00:11:37
    self-criticism was nothing has really
  • 00:11:39
    changed and the system in America is not
  • 00:11:42
    going to lead to real
  • 00:11:44
    Innovation so watching the Russians use
  • 00:11:48
    tanks to destroy apartment buildings
  • 00:11:50
    with little old ladies and kids just
  • 00:11:52
    drove me crazy so I decided to work on a
  • 00:11:55
    company with your friend Sebastian thrun
  • 00:11:57
    and a as a former faculty member here
  • 00:11:59
    here and a whole bunch of Stanford
  • 00:12:01
    people and the idea basically is to do
  • 00:12:05
    two things use Ai and complicated
  • 00:12:07
    powerful ways for these essentially
  • 00:12:09
    robotic wars and the second one is to
  • 00:12:11
    lower the cost of the robots now you sit
  • 00:12:14
    there and you go why would a good
  • 00:12:16
    liberal like me do that and the answer
  • 00:12:18
    is that the
  • 00:12:20
    whole theory of armies is tanks
  • 00:12:23
    artilleries and mortar and we can
  • 00:12:25
    eliminate all of them so here what he's
  • 00:12:27
    talking about is that Ukraine has been
  • 00:12:29
    able to create really cheap and simple
  • 00:12:31
    drones by spending just a couple hundred
  • 00:12:33
    dollars Ukraine is creating 3D-printed
  • 00:12:35
    drones they carry a bomb drop it on a
  • 00:12:38
    million-dollar tank and they've been able
  • 00:12:40
    to do that over and over again so
  • 00:12:42
    there's this asymmetric Warfare
  • 00:12:44
    happening between drones and more
  • 00:12:46
    traditional artillery so there was an
  • 00:12:48
    article that you and Henry Kissinger and
  • 00:12:50
    Dan Huttenlocher uh wrote last year about the
  • 00:12:54
    nature of knowledge and how it's
  • 00:12:55
    evolving I had a discussion the other
  • 00:12:57
    night about this as well so for most of
  • 00:13:00
    History humans sort of had a mystical
  • 00:13:02
    understanding of the universe and then
  • 00:13:04
    there's the Scientific Revolution and
  • 00:13:06
    the enlightenment um and in your article
  • 00:13:08
    you argue that now these models are
  • 00:13:10
    becoming so complicated and uh uh
  • 00:13:15
    difficult to understand that we don't
  • 00:13:17
    really know what's going on in them I'll
  • 00:13:19
    take a quote from Richard Feynman he says
  • 00:13:21
    what I cannot create I do not understand
  • 00:13:23
    I saw this quote the other day but now
  • 00:13:25
    people are creating things they do not
  • 00:13:27
    that they can create but they don't
  • 00:13:28
    really understand what's inside of them
  • 00:13:30
    is the nature of knowledge changing in a
  • 00:13:32
    way are we going to have to start just
  • 00:13:34
    taking the word for these models have
  • 00:13:36
    them able being able to explain it to us
  • 00:13:39
    the analogy I would offer is to
  • 00:13:40
    teenagers if you have a teenager you
  • 00:13:43
    know that they're human but you can't
  • 00:13:44
    quite figure out what they're
  • 00:13:46
    thinking um but somehow we've managed in
  • 00:13:49
    society to adapt to the presence of
  • 00:13:50
    teenagers right and they eventually grow
  • 00:13:52
    out of it and this is serious so it's
  • 00:13:56
    probably the case that we're going to
  • 00:13:58
    have knowledge systems that we cannot
  • 00:14:00
    fully characterize MH but we understand
  • 00:14:04
    their boundaries right we understand the
  • 00:14:06
    limits of what they can do and that's
  • 00:14:08
    probably the best outcome we can get do
  • 00:14:10
    you think we'll understand the
  • 00:14:12
    limits we we'll get pretty good at it
  • 00:14:14
    he's referencing the way that large
  • 00:14:16
    language models work which is really
  • 00:14:17
    essentially a black box you put in a
  • 00:14:19
    prompt you get a response but we don't
  • 00:14:21
    know why certain nodes within the
  • 00:14:23
    algorithm light up and we don't know
  • 00:14:25
    exactly how the answers come to be it is
  • 00:14:27
    really a black box there's a lot of work
  • 00:14:29
    being done right now trying to kind of
  • 00:14:31
    unveil what is going on behind the
  • 00:14:32
    curtain but we just don't know the
  • 00:14:35
    consensus of my group that meets on uh
  • 00:14:37
    every week is that eventually the way
  • 00:14:40
    you'll do this uh it's called so-called
  • 00:14:42
    adversarial AI is that there will there
  • 00:14:45
    will actually be companies that you will
  • 00:14:47
    hire and pay money to to break your AI
  • 00:14:50
    system team so it'll be the red instead
  • 00:14:52
    of human red teams which is what they do
  • 00:14:54
    today you'll have whole companies and a
  • 00:14:57
    whole industry of AI systems whose jobs
  • 00:14:59
    are to break the existing AI systems and
  • 00:15:02
    find their vulnerabilities especially
  • 00:15:04
    the knowledge that they have that we
  • 00:15:05
    can't figure out that makes sense to me
  • 00:15:08
    it's also a great project for you here
  • 00:15:10
    at Stanford because if you have a
  • 00:15:12
    graduate student who has to figure out
  • 00:15:13
    how to attack one of these large models
  • 00:15:16
    and understand what it does that is a
  • 00:15:18
    great skill to build the Next Generation
  • 00:15:20
    so it makes sense to me that the two
  • 00:15:22
    will travel together all right let's
  • 00:15:24
    take some questions from the student
  • 00:15:26
    there's one right there in the back just
  • 00:15:27
    say your name
  • 00:15:29
    you mentioned and this is related to
  • 00:15:31
    comment right now I'm getting AI that
  • 00:15:33
    actually does what you want you just
  • 00:15:34
    mentioned adversarial AI I'm wondering
  • 00:15:37
    if you could elaborate on that more so
  • 00:15:38
    it seems to be besides obviously compute
  • 00:15:41
    will increase and get more performant
  • 00:15:43
    models but getting them to do what we
  • 00:15:46
    want issue seems largely unanswered you
  • 00:15:50
    well you have to assume that the current
  • 00:15:52
    hallucination problems become less right
  • 00:15:56
    in as the technology gets better and so
  • 00:15:58
    forth I'm not suggesting it goes away
  • 00:16:01
    and then you also have to assume that
  • 00:16:03
    there are tests for efficacy so there
  • 00:16:05
    has to be a way of knowing that the
  • 00:16:07
    things exceeded so in the example that I
  • 00:16:09
    gave of the Tik Tok competitor and by
  • 00:16:11
    the way I was not arguing that you
  • 00:16:12
    should illegally steal everybody's music
  • 00:16:15
    what you would do if you're a Silicon
  • 00:16:16
    Valley entrepreneur which hopefully all
  • 00:16:18
    of you will be is if it took off then
  • 00:16:20
    you'd hire a whole bunch of lawyers to
  • 00:16:21
    go clean the mess up right but if if
  • 00:16:24
    nobody uses your product it doesn't
  • 00:16:26
    matter that you stole all the content
  • 00:16:28
    and do not quote me right right you're
  • 00:16:31
    you're on camera yeah that's right but
  • 00:16:34
    but you see my point in other words
  • 00:16:35
    Silicon Valley will run these tests and
  • 00:16:37
    clean up the mess and that's typically
  • 00:16:39
    how those things are done so so my own
  • 00:16:42
    view is that you'll see more and more um
  • 00:16:46
    performative systems with even better
  • 00:16:48
    tests and eventually adversarial tests
  • 00:16:50
    and that'll keep it within a box the
  • 00:16:52
    technical term is called Chain of
  • 00:16:54
    Thought reasoning and people believe
  • 00:16:56
    that in the next few years you'll be
  • 00:16:58
    able to generate a thousand steps of
  • 00:17:00
    Chain of Thought reasoning right do this
  • 00:17:03
    do this it's like building recipes right
  • 00:17:05
    that the recipes you can run the recipe
  • 00:17:07
    and you can actually test that It
  • 00:17:09
    produced the correct outcome now that
  • 00:17:11
    was maybe not my exact understanding of
  • 00:17:12
    Chain of Thought reasoning my
  • 00:17:14
    understanding of Chain of Thought
  • 00:17:15
    reasoning which I think is accurate is
  • 00:17:17
    when you break a problem down into its
  • 00:17:19
    basic steps and you solve each step
  • 00:17:21
    allowing for progression into the next
  • 00:17:23
    step not only it allows you to kind of
  • 00:17:25
    replay the steps it's more of how do you
  • 00:17:27
    break problems down and then think
  • 00:17:28
    through them step by step the amounts of
  • 00:17:30
    money being thrown around are
  • 00:17:34
    mind-boggling and um I've chosen I
  • 00:17:37
    essentially invest in everything because
  • 00:17:38
    I can't figure out who's going to win
  • 00:17:41
    and the amounts of money that are
  • 00:17:43
    following me are so large I think some
  • 00:17:46
    of it is because the early money has
  • 00:17:48
    been made and the big money people who
  • 00:17:50
    don't know what they're doing have to
  • 00:17:52
    have an AI component and everything is
  • 00:17:54
    now an AI investment so they can't tell
  • 00:17:56
    the difference I define AI as learning
  • 00:17:58
    systems
  • 00:17:59
    systems that actually learn so I think
  • 00:18:01
    that's one of them the second is that
  • 00:18:02
    there are very sophisticated new
  • 00:18:05
    algorithms that are sort of post
  • 00:18:07
    Transformers my friend my collaborator
  • 00:18:09
    for a long time has invented a new
  • 00:18:11
    non-Transformer architecture there's a group
  • 00:18:13
    that I'm funding in Paris that
  • 00:18:15
    claims to have done the same thing so
  • 00:18:17
    there there's enormous uh invention
  • 00:18:19
    there a lot of things at Stanford and
  • 00:18:21
    the final thing is that there is a
  • 00:18:23
    belief in the market that the invention
  • 00:18:26
    of intelligence has infinite return
  • 00:18:29
    so let's say you put $50
  • 00:18:31
    billion of capital into a company you
  • 00:18:34
    have to make an awful lot of money from
  • 00:18:36
    intelligence to pay that back so it's
  • 00:18:38
    probably the case that we'll go through
  • 00:18:40
    some huge investment bubble and then
  • 00:18:43
    it'll sort itself out that's always been
  • 00:18:44
    true in the past and it's likely to be
  • 00:18:47
    true here and what you said earlier yeah
  • 00:18:50
    so there's been something like a
  • 00:18:52
    trillion dollars already invested into
  • 00:18:54
    artificial intelligence and only 30
  • 00:18:56
    billion of Revenue I think those are
  • 00:18:57
    accurate numbers and really there just
  • 00:19:00
    hasn't been a return on investment yet
  • 00:19:02
    but again as he just mentioned that's
  • 00:19:03
    been the theme on previous waves of
  • 00:19:05
    Technology huge upfront investment and
  • 00:19:08
    then it pays off in the end well I don't
  • 00:19:10
    know what he's talking about here cuz
  • 00:19:11
    didn't he run Google and Google has
  • 00:19:13
    always been about being closed source
  • 00:19:15
    and always tried to protect the
  • 00:19:16
    algorithm at all costs so I don't know
  • 00:19:18
    what he's referring to there you think
  • 00:19:20
    that the leaders are pulling away from
  • 00:19:22
    right now and
  • 00:19:24
    and this is a
  • 00:19:26
    really the question is um roughly the
  • 00:19:29
    following there's a company called Mistral
  • 00:19:31
    in France they've done a really good job
  • 00:19:34
    um and I'm I'm obviously an investor um
  • 00:19:36
    they have produced their second version
  • 00:19:38
    their third model is likely to be closed
  • 00:19:41
    because it's so expensive they need
  • 00:19:43
    revenue and they can't give their model
  • 00:19:45
    away so this open source versus closed
  • 00:19:48
    Source debate in our industry is huge
  • 00:19:51
    and um my entire career was based on
  • 00:19:55
    people being willing to share software
  • 00:19:57
    in open source everything about me is
  • 00:20:00
    open source much of Google's
  • 00:20:02
    underpinnings were open source
  • 00:20:03
    everything I've done technically what
  • 00:20:06
    didn't he run Google and Google was all
  • 00:20:08
    about staying closed source and
  • 00:20:09
    everything about Google was kept secret
  • 00:20:11
    at all times so I don't know what he's
  • 00:20:13
    referring to there everything I've done
  • 00:20:15
    technically and yet it may be that the
  • 00:20:18
    capital costs which are so immense
  • 00:20:21
    fundamentally changes how software is
  • 00:20:22
    built you and I were talking um my own
  • 00:20:26
    view of software programmers is that
  • 00:20:27
    software programmers' productivity will
  • 00:20:29
    at least double MH there are three or
  • 00:20:31
    four software companies that are trying
  • 00:20:33
    to do that I've invested in all of them
  • 00:20:36
    in the spirit and they're all trying to
  • 00:20:38
    make software programmers more
  • 00:20:40
    productive the most interesting one that
  • 00:20:41
    I just met with is called Augment and I
  • 00:20:44
    I always think of an individual
  • 00:20:45
    programmer and they said that's not our
  • 00:20:46
    Target our Target are these 100 person
  • 00:20:48
    software programming teams on millions
  • 00:20:50
    of lines of code where nobody knows
  • 00:20:52
    what's going on well that's a really
  • 00:20:54
    good AI thing will they make money I
  • 00:20:57
    hope so
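
    As an illustration of the retrieval problem such tools face on million-line codebases, here is a minimal sketch that ranks files by token overlap with a question, a crude stand-in for the embedding search a product in this space presumably uses (the file names and contents below are invented for illustration):

    ```python
    # Toy code-retrieval sketch: rank files by shared tokens with a question,
    # a crude stand-in for the embedding search real coding assistants use.
    # File names and contents below are invented for illustration.
    import re
    from collections import Counter

    def tokenize(text: str) -> Counter:
        """Lowercase word counts; identifiers split on non-letters."""
        return Counter(re.findall(r"[a-z]+", text.lower()))

    def rank_files(files: dict, question: str, top_k: int = 2) -> list:
        """Score each file by its overlapping token count with the question."""
        q = tokenize(question)
        scores = {name: sum((tokenize(body) & q).values())
                  for name, body in files.items()}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    codebase = {
        "auth.py": "def login(user, password): check the password hash",
        "billing.py": "def charge(card, amount): charge the card amount",
        "search.py": "def query(index, text): return matching documents",
    }

    top = rank_files(codebase, "where is the password login check", top_k=1)
    # top == ["auth.py"]: it shares the most tokens with the question
    ```

    A production assistant would use learned embeddings and chunk-level indexing rather than raw token overlap, but the retrieve-then-answer shape over a codebase too large for any one person to know is the same.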
  • 00:20:59
    so a lot of questions here hi um so at
  • 00:21:02
    the very beginning yes ma'am um at the
  • 00:21:04
    very beginning you mentioned that
  • 00:21:06
    there's the combination of the context
  • 00:21:08
    window expansion the agents and the text
  • 00:21:11
    to action is going to have unimaginable
  • 00:21:13
    impacts first of all why is the
  • 00:21:16
    combination important and second of all
  • 00:21:18
    I know that you know you're not like a
  • 00:21:20
    crystal ball and you can't necessarily
  • 00:21:21
    tell the future but why do you think
  • 00:21:23
    it's beyond anything that we could
  • 00:21:25
    imagine I think largely because the
  • 00:21:27
    context window allows you to solve the
  • 00:21:29
    problem of recency the current models
  • 00:21:32
    take a year to train roughly
  • 00:17:35
    there's 18 months six months of
  • 00:17:37
    preparation six months of training six
  • 00:17:39
    months of fine-tuning so they're always
  • 00:17:41
    out of date with the context window you can feed
  • 00:21:44
    what happened like you can ask it
  • 00:21:46
    questions about the um the Hamas Israel
  • 00:21:49
    war right in a context that's very
  • 00:21:52
    powerful it becomes current like Google
  • 00:21:54
    yeah so that's essentially how
  • 00:21:55
    SearchGPT works for example the new search
  • 00:21:58
    product from OpenAI can scour the web
  • 00:22:00
    scrape the web and then take all of that
  • 00:22:02
    information and put it into the context
  • 00:22:04
    window that is the recency he's
  • 00:22:06
    talking about um in the case of Agents
  • 00:22:08
    I'll give you an example I set up a
  • 00:22:10
    foundation which is funding a nonprofit
  • 00:22:13
    which starts there's a I don't know if
  • 00:22:15
    there are chemists in the room but I
  • 00:22:16
    don't really understand chemistry
  • 00:22:18
    there's a tool called ChemCrow which was
  • 00:22:22
    an LLM based system that learned
  • 00:22:24
    chemistry and what they do is they run
  • 00:22:26
    it to generate chemistry hypotheses
  • 00:22:29
    about proteins and they have a lab which
  • 00:22:32
    runs the tests overnight and then it
  • 00:22:34
    learns that's a huge
  • 00:22:37
    accelerant in chemistry material science
  • 00:22:39
    and so forth so that's that's an agent
  • 00:22:42
    model and I think the text to action can
  • 00:22:44
    be understood by just having a lot of
  • 00:22:47
    cheap programmers right um and I don't
  • 00:22:49
    think we understand what happens and
  • 00:22:51
    this is again your area of expertise
  • 00:22:54
    what happens when everyone has their own
  • 00:22:55
    programmer and I'm not talking about
  • 00:22:57
    turning on and off the light
  • 00:22:59
    you know I imagine another example um
  • 00:23:02
    for some reason you don't like Google so
  • 00:23:04
    you say build me a Google competitor
  • 00:23:06
    yeah you personally you don't build me a
  • 00:23:08
    Google
  • 00:23:08
    competitor uh search the web build a UI
  • 00:23:12
    make a good copy um add generative AI in
  • 00:23:16
    an interesting way do it in 30 seconds
  • 00:23:20
    and see if it
  • 00:23:21
    works
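
    The "build me a Google competitor" prompt is the text-to-action idea: natural language in, executed steps out. A minimal sketch, with a hand-built keyword registry standing in for the LLM planner (all command and function names here are hypothetical):

    ```python
    # Minimal text-to-action sketch: keywords in an instruction dispatch to
    # registered step handlers. In a real system an LLM would produce the
    # plan; here a keyword match stands in. All names are hypothetical.
    from typing import Callable, Dict, List

    ACTIONS: Dict[str, Callable[[], str]] = {}

    def action(keyword: str):
        """Register a handler for instructions containing the keyword."""
        def wrap(fn: Callable[[], str]) -> Callable[[], str]:
            ACTIONS[keyword] = fn
            return fn
        return wrap

    @action("search")
    def search_web() -> str:
        return "searched the web"

    @action("ui")
    def build_ui() -> str:
        return "built a UI"

    @action("generative")
    def add_generative_ai() -> str:
        return "added generative AI"

    def run_plan(instruction: str) -> List[str]:
        """Execute, in registration order, every action the text mentions."""
        text = instruction.lower()
        return [fn() for keyword, fn in ACTIONS.items() if keyword in text]

    steps = run_plan("Search the web, build a UI, and add generative AI")
    # steps == ["searched the web", "built a UI", "added generative AI"]
    ```

    The interesting (and risky) part Schmidt gestures at is what happens when the planner is a model generating thousands of such steps rather than a fixed keyword table.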
  • 00:23:23
    right so a lot of people believe that
  • 00:23:25
    the incumbents including Google are
  • 00:23:28
    vulnerable to this kind of an attack now
  • 00:23:31
    we'll see how can we stop AI from
  • 00:23:33
    influencing public opinion
  • 00:23:35
    misinformation especially during the
  • 00:23:36
    upcoming election what are the short and
  • 00:23:38
    long-term solutions
  • 00:23:40
    most of the misinformation in this
  • 00:23:43
    upcoming election and globally will be
  • 00:23:45
    on social media and the social media
  • 00:23:47
    companies are not organized well enough
  • 00:23:49
    to police it if you look at TikTok for
  • 00:23:52
    example there are lots of accusations
  • 00:23:55
    that TikTok is favoring one kind of
  • 00:23:57
    misinformation over another and there
  • 00:23:58
    are many people who claim without proof
  • 00:24:01
    that I'm aware of that the Chinese are
  • 00:24:03
    forcing them to do it I think we just
  • 00:24:05
    have a mess here and
  • 00:24:08
    um the country is going to have to learn
  • 00:24:11
    critical
  • 00:24:12
    thinking that may be an impossible
  • 00:24:14
    challenge for the US but the fact
  • 00:24:17
    that somebody told you something does
  • 00:24:18
    not mean that it's true I think that the
  • 00:24:20
    the greatest threat to democracy is
  • 00:24:22
    misinformation because we're going to
  • 00:24:24
    get really good at it um when I
  • 00:24:27
    managed YouTube
  • 00:24:29
    the biggest problems we had on YouTube
  • 00:24:30
    were that people would upload false
  • 00:24:33
    videos and people would die as a result
  • 00:24:35
    and we had a no death policy shocking
  • 00:24:37
    yeah and also it's not even about
  • 00:24:39
    potentially making deep fakes or kind of
  • 00:24:41
    misinformation just muddying the waters
  • 00:24:43
    is enough to make the entire topic kind
  • 00:24:45
    of Untouchable um I'm really curious
  • 00:24:48
    about the text to action and its impact
  • 00:24:51
    on for example Computer Science
  • 00:24:53
    Education wondering what you have
  • 00:24:55
    thoughts on like how cus education
  • 00:24:59
    should
  • 00:25:00
    transform to kind of meet the age well I'm
  • 00:25:03
    assuming that computer scientists as a
  • 00:25:05
    group in undergraduate school will
  • 00:25:08
    always have a programmer buddy with them
  • 00:25:10
    so when you learn your
  • 00:25:12
    first for loop and so forth and so on
  • 00:25:14
    you'll have a tool that will be your
  • 00:25:17
    natural partner and then that's how the
  • 00:25:19
    teaching will go on that the professor
  • 00:25:21
    you know he or she will talk about the
  • 00:25:23
    concepts but you'll engage with it that
  • 00:25:25
    way and that's my guess yes ma'am behind
  • 00:25:27
    you so here I have a slightly different
  • 00:25:29
    view I think in the long run there
  • 00:25:31
    probably isn't going to be the need for
  • 00:25:33
    programmers eventually the llms will
  • 00:25:35
    become so sophisticated they're writing
  • 00:25:37
    their own kind of code maybe it gets to
  • 00:25:39
    a point where we can't even read that
  • 00:25:41
    code anymore so there is this world in
  • 00:25:43
    which it is not necessary to have
  • 00:25:44
    programmers researchers or computer
  • 00:25:47
    scientists I'm not sure that's the way
  • 00:25:48
    it's going to be but there is a timeline
  • 00:25:50
    in which that happens the most
  • 00:25:52
    interesting country is India because the
  • 00:25:55
    top AI people come from India to the US
  • 00:25:58
    and we should let India keep some of its
  • 00:26:00
    top talent not all of them but some of
  • 00:26:02
    them um and they don't have the kind of
  • 00:26:04
    training facilities and programs that we
  • 00:26:06
    so richly have here to me India is the
  • 00:26:08
    big swing state in that regard China's
  • 00:26:10
    lost it's not going to come
  • 00:26:12
    back they're not going to change the
  • 00:26:14
    regime as much as people wish them to do
  • 00:26:17
    Japan and Korea are clearly in our camp
  • 00:26:20
    Taiwan is a fantastic country whose
  • 00:26:22
    software is terrible so that's not going
  • 00:26:24
    to work um amazing hardware and
  • 00:26:28
    and in the rest of the world there are
  • 00:26:30
    not a lot of other good choices that are
  • 00:26:31
    big Germany uh Europe is screwed up
  • 00:26:34
    because of Brussels it's not a new fact
  • 00:26:36
    I spent 10 years fighting them and I
  • 00:26:39
    worked really hard to get them to fix
  • 00:26:42
    the EU AI Act and they still have all
  • 00:26:44
    the restrictions that make it very
  • 00:26:46
    difficult to do our kind of research in
  • 00:26:47
    Europe my French friends have spent all
  • 00:26:50
    their time battling Brussels and Macron
  • 00:26:52
    who's a personal friend is fighting hard
  • 00:26:55
    for this and so France I think has a
  • 00:26:57
    chance I don't see I don't see Germany
  • 00:26:58
    coming and the rest is not big enough
  • 00:27:00
    given the capabilities that you envision
  • 00:27:03
    these models having should we still
  • 00:27:05
    spend time learning to code yeah so here
  • 00:27:07
    she asked should we still learn to code
  • 00:27:09
    because ultimately it's the
  • 00:27:11
    old thing of why do you study English if
  • 00:27:13
    you can speak English you get better at
  • 00:27:15
    it right you really do need to
  • 00:27:17
    understand how these systems work and I
  • 00:27:18
    feel very strongly yes sir so these were
  • 00:27:21
    the most important parts of the
  • 00:27:22
    interview and with that being said this
  • 00:27:24
    is it for today's video see you again
  • 00:27:25
    next week with another video
Tags
  • Eric Schmidt
  • AI Development
  • Context Windows
  • AI Competition
  • Adversarial AI
  • Misinformation
  • Open Source
  • AI Education
  • Energy Constraints
  • AGI