THE TEN RECKONINGS OF AGI // THE RECKONING OF POWER: Ben Goertzel in Conversation with James Barrat

00:12:34
https://www.youtube.com/watch?v=pRL2FFtVyLM

Summary

TLDR: In a conversation between Ben Goertzel, CEO of SingularityNET, and filmmaker James Barrat, the two discuss the complexities and implications of artificial intelligence (AI) and the pursuit of artificial general intelligence (AGI). They highlight the existential risks of AI, including potential misalignment with human values, and the ethical considerations surrounding its development. Both express concern about the motivations of those in the AI field, emphasizing the need to focus on beneficial outcomes rather than profit. They also explore the impact of AGI on human work, questioning whether it will enhance or diminish human purpose and autonomy. Ultimately, they agree that AGI development demands a cautious and ethical approach, and that capitalism both drives innovation and poses risks.

Key Takeaways

  • 🤖 Ben Goertzel is a pioneer in AGI development since the 1980s.
  • 🎥 James Barrat is a filmmaker and author focused on AI's implications.
  • ⚠️ AI poses existential risks if misaligned with human values.
  • 💡 Understanding AI technology is crucial for its safe development.
  • 🔍 The motivations behind AI development often prioritize profit over ethics.
  • 🛑 A pause in AI development is suggested to assess risks.
  • 💼 AGI could change the nature of work and human purpose.
  • 🌍 Capitalism drives AI innovation but can lead to ethical dilemmas.
  • 🤝 Collaboration is needed for a beneficial AGI future.
  • 📉 The majority of AI resources are not aimed at achieving a positive singularity.

Timeline

  • 00:00:00 - 00:05:00

    Ben Goertzel, CEO of SingularityNET and other AI entities, discusses his long-term work toward AGI, ongoing since the 1980s. He argues that generative AI's incomprehensibility is exaggerated, comparing it to other scientific fields, such as quantum engineering and vaccine discovery, where trial and error is common. James Barrat, a filmmaker and author, raises concerns about the existential risks of AI and argues for a pause in its development until it can be aligned with human values. He calls for ethical considerations in AI development, contrasting AI with technologies whose dangers are clear-cut, like nuclear weapons.

  • 00:05:00 - 00:12:34

    The conversation shifts to the implications of AGI on human purpose and autonomy. Ben argues that a beneficial AGI could enhance human life, allowing for creativity and social engagement, while James warns that removing the need for work could lead to dissatisfaction. They both agree that the current focus in AI development is often on profit rather than societal benefit, questioning the necessity of billionaires in this context. The discussion concludes with an acknowledgment of the complexities of the capitalist economy and the need for a more thoughtful approach to AI.


Video Q&A

  • Who is Ben Goertzel?

    Ben Goertzel is the CEO of SingularityNET and the ASI Alliance, and has been working towards artificial general intelligence (AGI) since the 1980s.

  • What is James Barrat's background?

    James Barrat is a filmmaker and author, known for his work with National Geographic and PBS, and for writing the book 'Our Final Invention: Artificial Intelligence and the End of the Human Era.'

  • What are the risks associated with AI?

    The risks include AI becoming unmanageable, misaligned with human values, and the potential for existential threats.

  • What is the main concern about AGI development?

    The main concern is that AGI may not be aligned with human values and could lead to negative consequences if not carefully managed.

  • How do the speakers view the current state of AI technology?

    Barrat stresses that even experts cannot fully explain how generative AI works, while Goertzel counters that this incomprehensibility is exaggerated and that much of science, from vaccines to quantum devices, advances by trial and error.

  • What is the potential impact of AGI on human work?

    There is concern that AGI could free humans from drudgery but also lead to a loss of purpose, autonomy, and responsibility.

  • What do the speakers suggest about the future of work with AGI?

    They suggest that while AGI could change the nature of work, humans will still find purpose and meaning in life through various activities.

  • What is the role of capitalism in AI development?

    Capitalism has driven innovation in AI but has also led to a focus on profit over beneficial outcomes for society.

  • What do the speakers agree on regarding AI resources?

    They agree that the majority of resources in AI are not focused on achieving a beneficial singularity.

  • What is the conclusion of their discussion?

    They conclude that the development of AGI must be approached with caution and ethical considerations.

Transcript
  • 00:00:03
    Uh, I'm Ben Goertzel. I'm the CEO of
  • 00:00:06
    Singularity Net and the
  • 00:00:08
    ASI Alliance and True AGI and more other
  • 00:00:14
    AI oriented entities that I'm going to
  • 00:00:16
    list right now. And uh, I've been
  • 00:00:18
    working on building toward AGI really
  • 00:00:22
    since the
  • 00:00:25
    1980s. you know, the the fun is really
  • 00:00:28
    starting now. And I'm James Barrat. I'm
  • 00:00:31
    a primarily a filmmaker and uh I I do a
  • 00:00:35
    lot of stuff for National Geographic and
  • 00:00:37
    and PBS and uh affiliates in America and
  • 00:00:41
    around the world. But I'm also an author
  • 00:00:43
    and I years ago I wrote a book called
  • 00:00:45
    Our Final Invention: Artificial
  • 00:00:47
    Intelligence and the End of the Human
  • 00:00:48
    Era. And it was during that writing of
  • 00:00:51
    that book that I had the pleasure of
  • 00:00:53
    meeting Ben
  • 00:00:55
    Goertzel. I was uh interviewing a bunch
  • 00:00:59
    of experts for upcoming book and many of
  • 00:01:01
    them said, "Isn't it ironic that we're
  • 00:01:03
    plunging headlong into this technology
  • 00:01:06
    and nobody can really explain how it
  • 00:01:07
    works?" I'm talking about generative AI.
  • 00:01:10
    Um a lot of very prominent people have
  • 00:01:13
    said we really really don't know how it
  • 00:01:15
    works. So why should we spend billions
  • 00:01:17
    of dollars and uh expose ourselves to to
  • 00:01:22
    a lot of threats over technology that we
  • 00:01:25
    really don't
  • 00:01:26
    understand? Interrogating the particulars
  • 00:01:30
    of LLMs and such is it's interesting
  • 00:01:33
    it's important to do I don't think it's
  • 00:01:36
    exactly to the point regarding the first
  • 00:01:39
    AGIs. That
  • 00:01:41
    said I think their
  • 00:01:44
    incomprehensibility
  • 00:01:45
    is a bit exaggerated actually. I mean I
  • 00:01:49
    mean we it's true we don't know exactly
  • 00:01:51
    what output they're going to give when
  • 00:01:52
    given a certain query but I mean we do
  • 00:01:56
    we can probe inside transformer neural
  • 00:01:58
    nets and and we design new versions all
  • 00:02:02
    the time and there's a lot of other
  • 00:02:04
    important things in technology that are
  • 00:02:08
    not fully comprehensible to us in
  • 00:02:10
    different ways like quantum mechanics is
  • 00:02:12
    very hard for folks to understand and
  • 00:02:14
    there's a lot of trial and error
  • 00:02:16
    involved in designing quantum based
  • 00:02:18
    machinery as as well as math. We don't
  • 00:02:20
    understand how the immune system works.
  • 00:02:22
    Like every vaccine discovered has been
  • 00:02:24
    discovered by trial and error. That's
  • 00:02:26
    just very often how science works,
  • 00:02:30
    right? Existential risk is a big reason
  • 00:02:34
    why we should stop not not forever, but
  • 00:02:36
    pause AI. Could AI become unmanageable,
  • 00:02:40
    uncontrollable, misaligned with human
  • 00:02:42
    values? Right now, uh AI is not aligned
  • 00:02:46
    with human values. I think that people
  • 00:02:48
    that are very deep into it as you are
  • 00:02:51
    don't understand that we can stop and we
  • 00:02:53
    have a history of stopping technologies
  • 00:02:55
    that aren't really uh that are very very
  • 00:02:57
    dangerous. Um I don't know that we have
  • 00:02:59
    a history of stopping technologies that
  • 00:03:01
    are as widespread and easy to do and
  • 00:03:05
    delivering so much economic value. I'm
  • 00:03:08
    not sure that's true. I mean like
  • 00:03:10
    nuclear bombs just don't have or
  • 00:03:13
    biological weapons don't have the
  • 00:03:17
    positive uses and immediate economic
  • 00:03:20
    benefit that that AI has. Well uh
  • 00:03:23
    asbestos was a darn good insulator.
  • 00:03:26
    Chlorofluorocarbons were, you know,
  • 00:03:29
    incredibly effective. AI is vastly more
  • 00:03:34
    widespread. It can help countries with
  • 00:03:36
    military superiority. It can help it can
  • 00:03:39
    help every big
  • 00:03:40
    company make more money. It can
  • 00:03:42
    transform every domain of industry.
  • 00:03:46
    Right? So it's it's it's not as
  • 00:03:49
    isolated a
  • 00:03:52
    thing. Transformer neural net is simply
  • 00:03:56
    a
  • 00:03:57
    large predictor which is trained on a
  • 00:04:00
    bunch of data like given a sequence of
  • 00:04:03
    things which could be words in many
  • 00:04:05
    cases. given a sequence of things, make
  • 00:04:07
    a guess as to what will the next thing
  • 00:04:09
    be in the sequence, right? And so people
  • 00:04:11
    train these sequence predictors on like
  • 00:04:14
    a huge proportion of the text on the
  • 00:04:16
    internet and they became quite good at
  • 00:04:19
    predicting the next token in the
  • 00:04:21
    sequence, the next word in the sentence,
  • 00:04:23
    the next the next program command in the
  • 00:04:25
    program, right? And they just do that by
  • 00:04:28
    training them by sort of reinforcement
  • 00:04:30
    over and over and over again on a lot of
  • 00:04:32
    data. So then why the why why these
  • 00:04:35
    things make the next predict why these
  • 00:04:37
    things predict the next word is 'the' or
  • 00:04:39
    the next item in the code is a semicolon
  • 00:04:41
    is a complex story involving many many
  • 00:04:44
    billions of nodes and links on an
  • 00:04:46
    internal neural net which is the part we
  • 00:04:50
    don't know you don't know exactly why it
  • 00:04:52
    made that prediction and you can try to
  • 00:04:56
    make systems more and more sort of
  • 00:04:59
    deterministic in what they do but I I
  • 00:05:03
    think and there's a bunch of math that
  • 00:05:04
    would back this up that we can't go
  • 00:05:06
    into, but I think
  • 00:05:08
    that's got severe limitations, but I
  • 00:05:11
    mean you can you can put what are called
  • 00:05:13
    probes in the inside of the network to
  • 00:05:14
    try to measure measure what it does to
  • 00:05:16
    give a certain output. You can make some
  • 00:05:18
    progress there, but by and large as we
  • 00:05:21
    make things smarter and smarter, they're
  • 00:05:23
    getting it's getting harder and harder
  • 00:05:24
    to tell a story about what happened
  • 00:05:27
    inside. they may be getting more and
  • 00:05:30
    more uh deceptive that you know uh
  • 00:05:33
    neural nets are notoriously bad liars.
  • 00:05:36
    Well, they're notoriously good liars.
  • 00:05:38
    They lie. They lie a lot. I mean, a
  • 00:05:39
    neural net is a very flexible
  • 00:05:41
    technology. So, that's sort of like
  • 00:05:43
    saying computer programs are bad liars.
  • 00:05:47
    Now the actual world situation of course
  • 00:05:51
    is quite different because none of us is
  • 00:05:54
    in charge of the global economy and
  • 00:05:58
    military situation. Right? So from the
  • 00:06:01
    standpoint of an individual or group
  • 00:06:04
    developing AI right now like the
  • 00:06:07
    question for me or for meta or
  • 00:06:10
    Tencent even, the question isn't
  • 00:06:13
    really what should the human species as
  • 00:06:16
    a whole be doing because none of us is
  • 00:06:19
    in no one is in control of the world
  • 00:06:21
    right the question is more given what
  • 00:06:24
    everybody else is doing on the planet
  • 00:06:27
    right now like what's the most
  • 00:06:29
    beneficial thing for me as a as an actor
  • 00:06:33
    in that network to be to be doing. So I
  • 00:06:36
    mean for me
  • 00:06:38
    personally I feel like trying to develop
  • 00:06:42
    AGI which
  • 00:06:43
    is beneficial and compassionate and is
  • 00:06:48
    guided and controlled in a participatory
  • 00:06:51
    way by a broad global network of people.
  • 00:06:53
    I feel that injecting that into the
  • 00:06:57
    situation has a better chance of
  • 00:07:00
    achieving good than me, you know,
  • 00:07:03
    sitting back and playing music or
  • 00:07:06
    working on on math theory right now. My
  • 00:07:09
    point is that you're doing good things
  • 00:07:10
    and you're the kind of a AGI person we
  • 00:07:13
    need. Uh but you're not the kind of AGI
  • 00:07:15
    person we have. Uh we have a few, but
  • 00:07:18
    mostly we have um people that really
  • 00:07:21
    don't don't care about about human
  • 00:07:23
    suffering. And so they're causing it. Uh
  • 00:07:26
    they don't care about about liberty.
  • 00:07:28
    They don't they uh they don't care that
  • 00:07:31
    we might lose control of these systems.
  • 00:07:34
    They're not thinking about that. What
  • 00:07:35
    they're thinking about are dollars. Boy,
  • 00:07:37
    every time we pull out a new a new shiny
  • 00:07:40
    object, people will go for it. and
  • 00:07:42
    they'll go for it no matter what the
  • 00:07:43
    consequences are, especially if it pays
  • 00:07:45
    money or kills their enemies. And we're
  • 00:07:48
    just flawed that way. And until we're
  • 00:07:50
    until we've got a way to unflaw
  • 00:07:52
    ourselves or to to think about this in a
  • 00:07:54
    productive way, I think we should just
  • 00:07:56
    lay it down and and
  • 00:07:58
    and continue to research it, but but not
  • 00:08:01
    use it. I mean the people I know who
  • 00:08:05
    used to work for me who were in the team
  • 00:08:07
    at Google developing transformer neural
  • 00:08:10
    nets like those guys are just heads down
  • 00:08:14
    thinking about how do you predict the
  • 00:08:16
    next token right and they're just
  • 00:08:17
    thinking about the computer science and
  • 00:08:20
    software and math math problems of it
  • 00:08:23
    and they they certainly they certainly
  • 00:08:26
    are not doing a moral calculation of
  • 00:08:29
    like will this technology be used to
  • 00:08:31
    kill people or How much will it be used
  • 00:08:33
    for good? But they should but they
  • 00:08:35
    should.
  • 00:08:37
    I I have done that sort of calculation
  • 00:08:39
    which is part of the reason I haven't
  • 00:08:40
    taken a big tech job in my life in spite
  • 00:08:42
    of plenty of lucrative offers. Right.
  • 00:08:44
    But but I think they're not thinking
  • 00:08:46
    about it that way.
  • 00:08:50
    People assume that AGI will free us from
  • 00:08:52
    from h from drudgery.
  • 00:08:55
    But will it if it if it frees us from
  • 00:08:58
    drudgery and our and our boring boring
  • 00:09:00
    jobs, will it also free us from purpose,
  • 00:09:02
    autonomy, and responsibility? I there's
  • 00:09:05
    nothing worse to me than having some
  • 00:09:07
    something else do my job. And I I'm sure
  • 00:09:09
    a lot of people feel that way. To me,
  • 00:09:12
    it's pretty clear when you look at how
  • 00:09:15
    human motivations work and everything we
  • 00:09:17
    know about human psychology. I mean I I
  • 00:09:20
    think if we were to succeed in creating
  • 00:09:24
    an AGI which is compassionate and
  • 00:09:27
    beneficially oriented toward humans
  • 00:09:30
    which is what I'm working on with my
  • 00:09:31
    colleagues at SingularityNET, ASI Alliance,
  • 00:09:34
    true AGI and so on and if that system
  • 00:09:36
    then improved itself toward super
  • 00:09:39
    intelligence keeping compassion and
  • 00:09:41
    benefit in mind as it drives this
  • 00:09:44
    process like if this succeeded and we
  • 00:09:46
    got a beneficial AGI I mean I I have
  • 00:09:49
    little doubt that after a transition
  • 00:09:53
    period humans would adapt to it and
  • 00:09:56
    would find plenty of purpose, meaning
  • 00:09:59
    and joy in life going forward in that
  • 00:10:02
    situation. I mean we can we can create
  • 00:10:05
    art. We can do science and math for it
  • 00:10:08
    for its its own sake. We can explore the
  • 00:10:10
    world and climb up mountains. We will
  • 00:10:13
    still be engaged in our own social
  • 00:10:15
    network and chasing girlfriends and
  • 00:10:17
    boyfriends and playing sports against
  • 00:10:19
    each other. Like there's all manner of
  • 00:10:21
    ways we can find purpose and meaning and
  • 00:10:24
    have our own will and autonomy within
  • 00:10:26
    our social network of humans and and our
  • 00:10:29
    creations and interactions. These are
  • 00:10:31
    subtle problems, but um universal basic
  • 00:10:35
    income and what we do when AGI is here
  • 00:10:37
    and we're all, you know, unemployed is
  • 00:10:40
    someday will be very relevant and it's
  • 00:10:42
    relevant to a bunch of people right now.
  • 00:10:44
    Um I I think it's relevant because uh I
  • 00:10:48
    think it's a it's a miscalculation. I
  • 00:10:50
    think if you pay everybody to do
  • 00:10:52
    nothing, uh you'll get you'll get very
  • 00:10:54
    very very unhappy people. Um, I think a
  • 00:10:58
    lot of people get a lot of meaning and
  • 00:11:00
    value and pleasure out of work, the work
  • 00:11:03
    that they get paid for. Um, and I think
  • 00:11:06
    if you I think if you take that away and
  • 00:11:07
    you just say, you know, you just be you
  • 00:11:09
    just be you, you be you. And I don't
  • 00:11:12
    think you'll find that suddenly
  • 00:11:14
    there'll be a lot more poets and a lot
  • 00:11:15
    more authors. I think there'll be, you
  • 00:11:17
    know, probably a lot more drug drug
  • 00:11:19
    users, uh, a lot more alcoholics. there
  • 00:11:22
    will be a hell of a lot more musicians
  • 00:11:24
    because almost all musicians I know have
  • 00:11:25
    day jobs and they'll be really happy
  • 00:11:27
    just to play music instead. Right.
  • 00:11:29
    That's true and that's a good thing.
  • 00:11:31
    More musicians you can always use.
  • 00:11:35
    Well, one thing we agree on is the the
  • 00:11:38
    bulk of resources in the AI field is not
  • 00:11:42
    explicitly pushing toward a beneficial
  • 00:11:44
    singularity. Right. It's like pushing We
  • 00:11:46
    agree on that. Yes. I get more power or
  • 00:11:48
    money for myself and then the
  • 00:11:50
    singularity will sort of happen as a
  • 00:11:53
    side effect and that's pretty ridiculous
  • 00:11:57
    from a bigger bigger point of view.
  • 00:11:59
    Yeah. How many how many billionaires do
  • 00:12:01
    we need?
  • 00:12:03
    Yeah. Well, we don't we don't need any
  • 00:12:05
    billionaires. That that's right. On the
  • 00:12:07
    other hand, the capitalist economy has
  • 00:12:11
    also led to a lot of wonderful things.
  • 00:12:13
    Right. So that's the the world that
  • 00:12:15
    we've self-organized. Yeah, it's
  • 00:12:17
    complicated. Really complex. And I think
  • 00:12:20
    I think we we are out of out of time
  • 00:12:22
    now. But Well, it was a pleasure. It was
  • 00:12:25
    a pleasure talking to you, Ben. I look
  • 00:12:27
    forward to talking to you again before
  • 00:12:29
    long.
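
The one genuinely technical stretch of the conversation (roughly 00:03:52 to 00:05:21) describes a transformer as a large sequence predictor trained to guess the next token, with "probes" placed inside the network to examine why it produced a given output. Below is a minimal sketch of that next-token-prediction objective only, assuming a tiny invented corpus and a bigram frequency model in place of a transformer; the function name and the final "probe-like" inspection are likewise invented for illustration.

```python
# Toy illustration of the next-token-prediction objective described at 00:03:52:
# given a sequence of things, guess what comes next. This is NOT a transformer --
# it is a bigram frequency model -- but "training" (tally what follows what over
# the data) and "inference" (pick the likeliest continuation) mirror the same
# objective at miniature scale.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count which token follows each token in the corpus.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the continuation seen most often after `token` during training."""
    counts = follow_counts.get(token)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("sat"))   # -> "on"
print(predict_next("the"))   # -> the most frequent follower of "the" in the corpus

# A crude analogue of the "probes" mentioned at 00:05:11: this toy model's
# internal state is a small, readable table, so we can see exactly why a
# prediction was made. A real transformer's internal state is billions of
# learned weights, which is why interpretability there is so much harder.
print(dict(follow_counts["the"]))
```

In an actual LLM, the counting table is replaced by a transformer with billions of parameters and the tally loop by gradient-based training on a huge proportion of the text on the internet, which is the gap between this sketch and the systems discussed in the video.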
Tags
  • AI
  • AGI
  • Ben Goertzel
  • James Barrat
  • existential risk
  • technology
  • ethics
  • capitalism
  • human values
  • automation