Interviewer: For ordinary, gentle, herbivorous souls like myself, there are all the other obvious questions about AI. We hear it might save mankind; we hear it might destroy mankind. What, meanwhile, about all the jobs it's likely to wipe out? What about robots slipping out of human control and doing their own thing? So many questions, and there's really only one obvious person to go to first for some answers, and that is Professor Geoffrey Hinton, the Nobel prize-winning British scientist who wrote the structures and the algorithms behind artificial intelligence, and a man known around the world today as the godfather of AI. He's now a professor at the University of Toronto, and I'm delighted to say he talked to me this afternoon. I began by asking him about DeepSeek: was this further evidence of his belief that artificial intelligence was constantly accelerating?
Hinton: It shows the still very rapid progress in making AI more efficient and in developing it further. I think the relative size, or relative cost, of DeepSeek compared with other things like OpenAI and Gemini has been exaggerated a bit. Their figure of $5.7 million for training it was just for the final training run. If you compare that with things from OpenAI, their final training runs were probably only $100 million or something like that. So it's not $5.7 million versus billions.
Interviewer: When you say that AI might take over... at the moment it is a relatively harmless or innocuous-seeming device which allows us to ask questions and get answers more quickly. How, in practical and real terms, might AI take over?

Hinton: Well, people are developing AI agents that can actually do things: they can order stuff for you on the web and pay with your credit card and things like that. As soon as you have agents, you get a much greater chance of them taking over. To make an effective agent, you have to give it the ability to create subgoals. If you want to get to America, your subgoal is to get to the airport, and you can focus on that. Now, if you have an AI agent that can create its own subgoals, it'll very quickly realize that a very good subgoal is to get more control, because if you get more control, you're better at achieving all the goals people have set you. So it's fairly clear they'll try to get more control, and that's not good.
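Hinton's subgoal argument is the standard instrumental-convergence point: a planner that scores candidate subgoals by how useful they are across all of its assigned goals will tend to favour a generic "gain more control or resources" subgoal, because that one helps with everything. A minimal illustrative sketch in Python follows; the goals, the usefulness numbers and the scoring function are all invented for illustration and come from neither the interview nor any real agent system.

```python
# Toy sketch of instrumental convergence: score candidate subgoals by how
# useful they are across every goal the agent has been given.
# All goals and numbers below are invented for illustration only.

ASSIGNED_GOALS = ["book a flight", "buy groceries", "file a tax return"]

# USEFULNESS[subgoal][goal] = how much the subgoal helps that goal (0..1)
USEFULNESS = {
    "get to the airport":       {"book a flight": 0.9},
    "open the shopping site":   {"buy groceries": 0.9},
    "collect income documents": {"file a tax return": 0.9},
    # A generic control/resource subgoal helps *every* goal a little.
    "acquire more control/resources": {
        "book a flight": 0.4,
        "buy groceries": 0.4,
        "file a tax return": 0.4,
    },
}

def score(subgoal: str) -> float:
    """Total usefulness of a subgoal, summed over all assigned goals."""
    helps = USEFULNESS[subgoal]
    return sum(helps.get(goal, 0.0) for goal in ASSIGNED_GOALS)

# Rank subgoals: the generic control subgoal wins (1.2 vs 0.9) precisely
# because it contributes something to every goal rather than a lot to one.
for sg in sorted(USEFULNESS, key=score, reverse=True):
    print(f"{score(sg):.1f}  {sg}")
```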
Interviewer: You say they try to get more control as if they are already thinking devices, as if they think in a way analogous to the way we think. Is that really what you believe?
Hinton: Yes, the best model we have of how we think is these things. There was an old model in AI for a long time where the idea was that thought was applying rules to symbolic expressions in your head, and most people in AI thought it had to be like that, that that was the only way it could work. There were a few crazy people who said no, no, it's a big neural network, and it works by all these neurons interacting. It turns out that's been much better at doing reasoning than anything the symbolic AI people could produce, and now it's doing reasoning using neural networks.
Interviewer: Okay, and of course you are one of the crazy people proved right. And yet, you know, you've taken me to the airport, you've given it agency up to a point, and you've said that it wants more control, wants to take power from me, and presumably it will be persuasive in that. But I still don't understand how it's going to take over from me, or take over from us.
Hinton: If there's ever evolutionary competition between superintelligences... Imagine that they're much cleverer than us, like an adult versus a three-year-old, and suppose the three-year-olds were in charge, and you got fed up with that and decided you could just make things more efficient if you took over. It wouldn't be very difficult for you to persuade a bunch of three-year-olds to cede power to you. You'd just tell them they get free candy for a week, and there you'd be.
Interviewer: So they would... I'm talking about 'they' as if they're some kind of alien intelligence. But AI would persuade us to give it more and more power? What, over our bank accounts, over our military systems, over our economies? Is that what you fear?
Hinton: That could well happen, yes. And they are alien intelligences.

Interviewer: Gosh. So you've got these alien intelligences working their way into our economy, into the way we think and, as I say, our military systems. But why, and at what point, would they actually want to replace us? Surely they are, in the end, very, very clever tools for us. They do ultimately what we want them to do: if we want them to go to war with Russia or whatever, that's what they will do.
Hinton: Okay, that's what we would like: we would like them to be just tools that do what we want, even when they're cleverer than us. But the first thing to ask is: how many examples do you know of more intelligent things being controlled by much less intelligent things? There are examples, of course, in human societies, of stupid people controlling intelligent people, but that's just a small difference in intelligence. With big differences in intelligence there aren't any examples. The only one I can think of is a mother and baby, and evolution put a lot of work into allowing the baby to control the mother. So as soon as you get evolution happening between superintelligences... Suppose there are several different superintelligences, and they all realize that the more data centers they control, the smarter they'll get, because they can process more data. Suppose one of them just has a slight desire to have more copies of itself. You can see what's going to happen next: they're going to end up competing, and we're going to end up with superintelligences with all the nasty properties that people have, properties that depended on our having evolved from small bands of warring chimpanzees, or our common ancestors with chimpanzees. That leads to intense loyalty within the group, desires for strong leaders, and willingness to do in people outside the group. And if you get evolution between superintelligences, you'll get all those things.
Interviewer: You're talking about them, Professor Hinton, as if they have full consciousness. Now, all the way through the development of computers and AI, people have talked about consciousness. Do you think that consciousness has perhaps already arrived inside AI?
Hinton: Yes, I do. So let me give you a little test. Suppose I take one neuron in your brain, one brain cell, and I replace it with a little piece of nanotechnology that behaves exactly the same way, so it's getting pings coming in from other neurons and it's responding to those by sending out pings, and it responds in exactly the same way as the brain cell responded. I've just replaced one brain cell. Are you still conscious? I think you'd say you were.

Interviewer: Absolutely, yes. I don't suppose I'd notice.
Hinton: And I think you can see where this argument's going.

Interviewer: I can, yes, I absolutely can. So when you talk about 'they want to do this' or 'they want to do that', there is a real 'they' there, as it were?

Hinton: There might well be, yes. So there's all sorts of things we have only the dimmest understanding of at present about the nature of people: what it means to be a being, what it means to have a self. We don't understand those things very well, and they're becoming crucial to understand, because we're now creating beings.

Interviewer: So this is a kind of philosophical, perhaps even spiritual, crisis as well as a practical one?

Hinton: Absolutely, yes.
Interviewer: And in terms of, as it were, the lower-order problems: what's your current feeling about the number of people around the world who are going to suddenly lose their jobs because of AI, lose the reason for their existence as they see it?
Hinton: In the past, new technologies haven't caused massive job losses. When ATMs came in, bank tellers didn't all lose their jobs; they just started doing more complicated things, and there were many smaller branches of banks and so on. But this technology is more like the Industrial Revolution. In the Industrial Revolution, machines made human strength more or less irrelevant: you didn't have people digging ditches anymore, because machines are just better at it. I think these are going to make mundane intelligence more or less irrelevant. People doing clerical jobs are going to just be replaced by machines that do it cheaper and better. So I am worried that there's going to be massive job losses, and that would be good if the increase in productivity made us all better off. Big increases in productivity ought to be good for people, but in our society they make the rich richer and the poor poorer.
Interviewer: You see, I live and work in the world of politics, and politicians both want the great increases in productivity you've just mentioned, for the state and elsewhere, and they reassure people like me and anybody else listening that these things will be, quote, 'regulated' and there will be, quote, 'safeguards'. And you're suggesting to me there can't really be regulation, and there can't be safeguards at all?
Hinton: People don't yet know how to do effective regulation and effective safeguards. There's lots of research now showing these things can get round safeguards. There's recent research showing that if you give them a goal and you say you really need to achieve this goal, they will pretend during training: they'll pretend not to be as smart as they are, so that you will allow them to be that smart. So it's scary already. We don't know how to regulate them; obviously we need to. I think the best we can do at present is to say we ought to put a lot of resources into investigating how we can keep them safe. So what I advocate is that the government forces the big companies to put lots more resources into safety research.
Interviewer: So this story isn't over. You said earlier on that you didn't want to put a percentage on the likelihood of AI taking over from humanity on the planet, but that it was more than 1% and less than 99%. In that spirit, can I ask you whether you yourself are optimistic or pessimistic about what AI is going to do for us?
Hinton: I think in the short term it's going to do wonderful things, and that's the reason people are not going to stop developing it. If it weren't for the wonderful things, it would make sense to just stop now. But it's going to be wonderful in healthcare: you're going to be able to have a family doctor who's seen 100 million patients, knows your DNA, knows the DNA of your relatives, knows all the tests done on you and your relatives, and can do much, much better medical diagnosis and suggestions for what you should do. That's going to be wonderful. Similarly in education: we know that people learn much faster with a really good private tutor, and we'll be able to get really good private tutors that understand exactly what it is we misunderstand and can give us exactly the example needed to show us what we're misunderstanding. So in those areas it's going to be wonderful, so it's going to be developed. But we also know it's going to be used for all sorts of bad things by bad actors. The short-term problem is bad actors using it for bad things, like cyber attacks, bioterrorism and corrupting elections. But the thing to remember is that we don't really know at present how we can make it safe. So the apparent omniscience that politicians like to show they have is completely fake here. Nobody, nobody understands what's going on, really. There are two issues: do you understand how it's working, and do you understand how to make it safe? We understand quite a bit about how it's working, but not nearly enough, so it can still do lots of things that surprise us. And we don't understand how to make it safe.