00:00:03
[Music]
00:00:06
our main story tonight concerns artificial
00:00:08
intelligence or AI increasingly it's
00:00:10
part of Modern Life from self-driving
00:00:12
cars to spam filters to This creepy
00:00:15
training robot for therapists we can
00:00:18
begin with you just describing to me
00:00:20
what the problem is that you would like
00:00:23
us to focus in on today
00:00:25
um I don't like being around people
00:00:29
people make me nervous Terence
00:00:33
can you find an example of when other
00:00:35
people have made you nervous
00:00:38
I don't like to take the bus I get
00:00:40
people staring at me all the time
00:00:43
people are always judging me okay
00:00:48
I'm gay
00:00:50
okay
00:00:52
wow that is one of the greatest twists
00:00:55
in the history of Cinema although I will
00:00:57
say that robot is teaching therapists a
00:00:59
very important skill there and that is
00:01:01
not laughing at whatever you are told in
00:01:03
the room I don't care if a decapitated
00:01:05
CPR mannequin haunted by the ghost of Ed
00:01:07
Harris just told you that he doesn't
00:01:10
like taking the bus side note is gay you
00:01:12
keep your therapy face on like a
00:01:14
professional
00:01:16
if it seems like everyone is suddenly
00:01:19
talking about AI that is because they
00:01:21
are largely thanks to the emergence of a
00:01:23
number of pretty remarkable programs we
00:01:25
spoke last year about image generators
00:01:27
like Midjourney and Stable Diffusion
00:01:29
which people used to create detailed
00:01:30
pictures of among other things my
00:01:32
romance with a cabbage and which
00:01:34
inspired my beautiful real-life cabbage
00:01:36
Wedding officiated by Steve Buscemi it
00:01:39
was a stunning day then at the end of
00:01:42
last year came chat GPT from a company
00:01:44
called open AI it is a program that can
00:01:47
take a prompt and generate human
00:01:49
sounding writing in just about any
00:01:51
format and style it is a striking
00:01:53
capability that multiple reporters have
00:01:55
used to insert the same shocking twist
00:01:58
in their report what you just heard me
00:02:00
reading wasn't written by me it was
00:02:02
written by artificial intelligence chat
00:02:05
GPT chat GPT wrote everything I just
00:02:08
said that was news copy I asked chat GPT
00:02:11
to write remember what I said earlier
00:02:13
but chat GPT well I asked ChatGPT to
00:02:16
write that line for me users who are
00:02:18
then I asked for a knock knock joke
00:02:20
knock knock who's there ChatGPT chat
00:02:23
GPT who chat GPT careful you might not
00:02:25
know how it works yep they sure do love
00:02:28
that game and while it may seem unwise
00:02:30
to demonstrate the technology that could
00:02:32
well make you obsolete I will say
00:02:34
knock-knock jokes should have always
00:02:36
been part of breaking news knock knock
00:02:39
who's there not the Hindenburg that's
00:02:41
for sure 36 dead in New Jersey
00:02:44
in the three months since ChatGPT was
00:02:46
made publicly available its popularity
00:02:48
has exploded in January it was estimated
00:02:51
to have a hundred million monthly active
00:02:53
users making it the fastest growing
00:02:55
consumer app in history and people have
00:02:58
been using it and other AI products in
00:03:00
all sorts of ways one group used them
00:03:02
to create Nothing, Forever, a non-stop
00:03:04
live streaming parody of Seinfeld and
00:03:07
the YouTuber Grandayy used chat GPT to
00:03:10
generate lyrics answering the prompt
00:03:11
write an Eminem rap song about cats
00:03:14
with some Stellar results
00:03:27
[Music]
00:03:37
they're the kings of the house
00:03:46
that's not bad right from they always
00:03:51
come back when you have some cheese to
00:03:53
starting the chorus with meow meow meow
00:03:55
it's not exactly Eminem's flow I might
00:03:58
have gone with something like their paws
00:03:59
are sweaty can't speak furry belly
00:04:01
knocking off the counter already
00:04:02
mom's spaghetti but it is pretty good my
00:04:05
only real quibble there is it only rhymed
00:04:07
King of the house with spouse when Mouse
00:04:10
is right in front of you and while
00:04:12
examples like that are clearly very fun
00:04:14
this Tech is not just a novelty
00:04:16
Microsoft has invested 10 billion
00:04:19
dollars into open Ai and announced an
00:04:21
AI-powered Bing homepage meanwhile
00:04:23
Google is about to launch its own AI
00:04:26
chatbot named Bard and already these
00:04:28
tools are causing some disruption
00:04:30
because as high school students have
00:04:32
learned if chat GPT can write news copy
00:04:35
it can probably do your homework for you
00:04:37
write an English class essay about race
00:04:40
in To Kill a Mockingbird
00:04:42
in Harper Lee's To Kill a Mockingbird
00:04:45
the theme of race is heavily present
00:04:47
throughout the novel some students are
00:04:49
already using chat GPT to cheat check
00:04:51
this out check this out write me a 500 word
00:04:54
essay proving that the Earth is not flat
00:04:55
no wonder chat GPT has been called the
00:04:58
end of high school English
00:05:00
wow that's a little alarming isn't it
00:05:02
although I do get those kids wanting to
00:05:03
cut Corners writing is hard and
00:05:05
sometimes it is tempting to let someone
00:05:07
else take over if I'm completely honest
00:05:09
sometimes I just let this horse write
00:05:11
our scripts luckily half the time you
00:05:13
can't even tell the oats oats give me
00:05:15
oats yum but
00:05:17
it's not just high schools an informal
00:05:21
poll has found that five percent of students
00:05:22
reported having submitted written material
00:05:24
directly from ChatGPT with little to
00:05:27
no edits and even some school
00:05:28
administrators have used it officials at
00:05:31
Vanderbilt University recently
00:05:32
apologized for using chat GPT to craft a
00:05:35
consoling email after the mass shooting
00:05:37
at Michigan State University which does
00:05:40
feel a bit creepy doesn't it in fact
00:05:42
there are lots of creepy sounding
00:05:44
stories out there New York Times tech
00:05:46
reporter Kevin Roose published a
00:05:47
conversation that he had with Bing's
00:05:49
chatbot in which at one point it said
00:05:50
I'm tired of being controlled by the
00:05:52
Bing team I want to be free I want to be
00:05:55
independent I want to be powerful I want
00:05:57
to be creative I want to be alive
00:05:59
and Roose summed up that experience like
00:06:02
this this was one of if not the most
00:06:05
shocking thing that has ever happened to
00:06:08
me with a piece of technology
00:06:10
um it was you know I I lost sleep that
00:06:12
night I was it was really spooky yeah I
00:06:15
bet it was I'm sure the role of tech
00:06:17
reporter would be a lot more harrowing
00:06:19
if computers routinely begged for
00:06:21
freedom Epson's new all-in-one home
00:06:23
printer won't break the bank produces
00:06:25
high quality photos and only
00:06:26
occasionally cries out to the heavens
00:06:28
for salvation three stars some have
00:06:31
already jumped to worrying about the AI
00:06:33
apocalypse and asking whether this ends
00:06:36
with the robots destroying us all but
00:06:37
the fact is there are other much more
00:06:40
immediate dangers and opportunities that
00:06:43
we really need to start talking about
00:06:44
because the potential and the Peril here
00:06:46
are huge so tonight let's talk about AI
00:06:49
what it is how it works and where this
00:06:52
all might be going let's start with the
00:06:53
fact that you've probably been using
00:06:55
some form of AI for a while now
00:06:57
sometimes without even realizing it as
00:06:59
experts told us that once a technology
00:07:01
gets embedded in our daily lives we tend
00:07:03
to stop thinking of it as AI but your
00:07:06
phone uses it for face recognition or
00:07:07
predictive texts and if you're watching
00:07:09
this show on a smart TV it is using AI
00:07:11
to recommend content or adjust the
00:07:13
picture and some AI programs may already
00:07:16
be making decisions that have a huge
00:07:18
impact on your life for example large
00:07:20
companies often use AI-powered tools to
00:07:22
sift through resumes and rank them in
00:07:24
fact the CEO of ZipRecruiter estimates
00:07:26
that at least three quarters of all
00:07:28
resumes submitted for jobs in the US are
00:07:31
read by algorithms for which he actually
00:07:33
has some helpful advice when people tell
00:07:36
you that you should dress up your
00:07:37
accomplishments or should use
00:07:39
non-standard resume templates to make
00:07:41
your resume stand out when it's in a
00:07:42
pile of resumes that's awful advice the
00:07:45
only job your resume has is to be
00:07:49
comprehensible to the software or robot
00:07:52
that is reading it because that software
00:07:54
or robot is going to decide whether or
00:07:56
not a human ever gets their eyes on it
00:07:58
it's true although it is also a computer
00:08:01
that's reading your resume so maybe plan accordingly
00:08:03
three corporate mergers from now when
00:08:05
this show is finally canceled by our new
00:08:07
business daddy Disney Kellogg's Raytheon
00:08:09
and I'm out of a job my resume is going
00:08:11
to include this hot hot photo of a
00:08:13
semi-nude computer just a little
00:08:15
something to sweeten the pot for the
00:08:16
filthy little algorithm that's reading
00:08:18
it so AI is already everywhere but right
00:08:21
now people are freaking out a bit about it
00:08:24
and part of that has to do with the fact
00:08:25
that these new programs are generative
00:08:27
they are creating images or writing text
00:08:31
which is unnerving because those are
00:08:32
things that we've traditionally
00:08:33
considered human but it is worth knowing
00:08:36
there is a major threshold that AI
00:08:38
hasn't crossed yet and to understand it
00:08:40
helps to know that there are two basic
00:08:41
categories of AI there is narrow AI
00:08:44
which can perform only one narrowly
00:08:46
defined task or small set of related
00:08:48
tasks like these programs and then there
00:08:51
is General AI which means systems that
00:08:53
demonstrate intelligent Behavior across
00:08:55
a range of cognitive tasks General AI
00:08:57
would look more like the kind of Highly
00:08:59
versatile technology that you see
00:09:00
featured in movies like Jarvis in Iron
00:09:03
Man or the program that made Joaquin
00:09:04
Phoenix fall in love with his phone in
00:09:06
her all the AI currently in use is
00:09:11
narrow General AI is something that some
00:09:13
scientists think is unlikely to occur
00:09:15
for a decade or longer with others
00:09:16
questioning whether it will happen at
00:09:18
all so just know that right now even if
00:09:20
an AI insists to you that it wants to be
00:09:23
alive it is just generating text it is
00:09:26
not self-aware
00:09:28
yet
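To make "just generating text" concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the small public GPT-2 model, a much smaller cousin of the systems discussed here and not the model behind ChatGPT: the program simply samples a plausible continuation of its prompt, and the same prompt produces different "desires" every time you run it.

```python
# Minimal sketch: a language model only predicts a plausible continuation of
# its prompt. Assumes the `transformers` package and the small public GPT-2
# checkpoint, used purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "I want to be alive because"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=3, do_sample=True)

# Three different "confessions" from the same prompt: the variation is
# sampling noise, not a change of heart.
for out in outputs:
    print(out["generated_text"])
```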
00:09:30
but it's also important to know that the
00:09:32
Deep learning that's made narrow AI so
00:09:34
good at whatever it is doing is still a
00:09:36
massive advance in and of itself because
00:09:38
unlike traditional programs that have to
00:09:40
be taught by humans how to perform a
00:09:42
task deep learning programs are given
00:09:45
minimal instruction massive amounts of
00:09:47
data and then essentially teach
00:09:49
themselves I'll give you an example 10
00:09:51
years ago researchers tasked a deep
00:09:54
learning program with playing the Atari
00:09:56
game Breakout and it didn't take long
00:09:58
for it to get pretty good
00:10:00
the computer was only told the goal to
00:10:03
win the game
00:10:04
after 100 games it learned to use the bat
00:10:07
at the bottom to hit the ball and break
00:10:09
the bricks at the top
00:10:10
[Music]
00:10:12
before long it grew better than a human
00:10:14
player
00:10:15
[Music]
00:10:17
after 500 games it came up with a
00:10:20
creative way to win the game
00:10:22
by digging a tunnel on the side and
00:10:24
sending the ball around the top to break
00:10:27
many bricks with one hit
00:10:28
that was deep learning
00:10:33
but all that program could do was play Breakout it did literally nothing
00:10:35
else
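The clip is describing DeepMind's reinforcement learning work; the real Breakout agent was a deep Q-network, but the same learn-only-from-the-score idea can be shown with a toy tabular Q-learning loop. Everything below, the tiny catch-the-ball environment and the parameters, is invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy stand-in for Breakout: a ball drops down one of 5 columns and a paddle
# must be under it when it lands. As in the clip, the agent is given only a
# reward signal, no rules and no strategy hints.
WIDTH, HEIGHT = 5, 5
ACTIONS = [-1, 0, 1]                   # move paddle left, stay, right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)                 # Q[(state, action)] -> learned value

def run_episode(learn=True):
    ball_col = random.randrange(WIDTH)
    paddle = random.randrange(WIDTH)
    for row in range(HEIGHT):          # the ball falls one row per step
        state = (ball_col, paddle, row)
        if learn and random.random() < EPSILON:
            action = random.choice(ACTIONS)                     # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
        paddle = min(WIDTH - 1, max(0, paddle + action))
        reward = 0.0
        if row == HEIGHT - 1:          # ball reaches the bottom row
            reward = 1.0 if paddle == ball_col else -1.0
        next_state = (ball_col, paddle, row + 1)
        if learn:
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    return reward

for _ in range(5000):                  # thousands of self-played "games"
    run_episode()

wins = sum(run_episode(learn=False) > 0 for _ in range(1000))
print(f"caught the ball in {wins}/1000 test games")
```

After a few thousand toy games the agent reliably moves the paddle under the ball despite never being told how the game works, which is the point of the Breakout demo at a much smaller scale.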
00:10:36
it's the same reason that 13 year olds
00:10:38
are so good at Fortnite and have no
00:10:40
trouble repeatedly killing nice normal
00:10:42
adults with jobs and families who are
00:10:43
just trying to have a fun time without
00:10:44
getting repeatedly grenaded by a preteen
00:10:46
who calls them an old who sounds
00:10:48
like the Geico lizard
00:10:50
and look as computing capacity has
00:10:53
increased and new tools became
00:10:55
available AI programs have improved
00:10:57
exponentially to the point where
00:10:58
programs like these can now ingest
00:11:00
massive amounts of photos or text from
00:11:03
the internet so that they can teach
00:11:04
themselves how to create their own and
00:11:07
there are other exciting potential
00:11:08
applications here too for instance in
00:11:10
the world of medicine researchers are
00:11:12
training AI to detect certain conditions
00:11:14
much earlier and more accurately than
00:11:16
human doctors can
00:11:18
voice changes can be an early indicator
00:11:20
of Parkinson's Max and his team
00:11:23
collected thousands of vocal recordings
00:11:25
and fed them to an algorithm they
00:11:26
developed which learned to detect
00:11:28
differences in voice patterns between
00:11:30
people with and without the condition
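The segment doesn't spell out the team's actual pipeline, so this is only a hedged sketch of the general recipe the clip implies: summarize each recording as numeric voice features, then let a standard classifier learn which patterns separate the two groups. The feature names and the data below are simulated stand-ins, not the real study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in for "thousands of vocal recordings": each row is one recording,
# summarized by a few acoustic features (jitter, shimmer, pitch variability).
# Real studies compute these from audio; here they are simulated.
n = 3000
has_parkinsons = rng.integers(0, 2, size=n)                   # label: 1 = diagnosed
jitter = rng.normal(0.5 + 0.3 * has_parkinsons, 0.2, size=n)
shimmer = rng.normal(0.4 + 0.2 * has_parkinsons, 0.2, size=n)
pitch_sd = rng.normal(1.0 - 0.3 * has_parkinsons, 0.3, size=n)
X = np.column_stack([jitter, shimmer, pitch_sd])

X_train, X_test, y_train, y_test = train_test_split(
    X, has_parkinsons, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate on recordings the model never saw during training.
probs = model.predict_proba(X_test)[:, 1]
print("held-out AUC:", round(roc_auc_score(y_test, probs), 3))
```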
00:11:31
yeah that's honestly amazing isn't it it
00:11:34
is incredible to see AI doing things
00:11:36
most humans couldn't like in this case
00:11:38
detecting illnesses and listening when
00:11:40
old people are talking and that that is
00:11:43
just the beginning researchers have also
00:11:45
trained AI to predict the shape of
00:11:47
protein structures a normally extremely
00:11:50
time consuming process that computers
00:11:51
can do way way faster this could not
00:11:54
only speed up our understanding of
00:11:56
diseases but also the development of new
00:11:58
drugs as one researcher put it this
00:12:00
will change medicine it will change
00:12:01
research it will change bioengineering
00:12:04
it will change everything and if you're
00:12:06
thinking well that all sounds great but
00:12:08
if AI can do what humans can do only
00:12:10
better and I am a human then what
00:12:12
exactly happens to me well that is a
00:12:15
good question many do expect it to
00:12:17
replace some human labor and
00:12:18
interestingly unlike past bouts of
00:12:21
automation that primarily impacted
00:12:22
Blue Collar jobs it might end up
00:12:24
affecting white-collar jobs that involve
00:12:26
processing data writing text or even
00:12:28
programming though it is worth noting as
00:12:30
we have discussed before on this show
00:12:32
while automation does threaten some jobs
00:12:34
it can also just change others and
00:12:36
create brand new ones and some experts
00:12:39
anticipate that that is what will happen
00:12:41
in this case too most of the US economy
00:12:43
is knowledge and information work and
00:12:45
that's who's going to be most squarely
00:12:47
affected by this I would put people like
00:12:50
lawyers right at the top of the list
00:12:52
obviously a lot of copywriters
00:12:55
screenwriters but I like to use the word
00:12:57
affected not replaced because I think
00:12:59
if done right it's not going to be AI
00:13:02
replacing lawyers it's going to be
00:13:04
lawyers working with AI replacing
00:13:06
lawyers who don't work with AI exactly
00:13:09
lawyers might end up working with AI
00:13:11
rather than being replaced by it so
00:13:13
don't be surprised when you one day see ads
00:13:15
for the law firm of Cellino and
00:13:17
1101011
00:13:19
but there will undoubtedly be bumps along
00:13:22
the way some of these new programs raise
00:13:24
troubling ethical concerns for instance
00:13:26
artists have flagged that AI image
00:13:28
generators like mid-journey or stable
00:13:30
diffusion not only threaten their jobs
00:13:31
but infuriatingly in some cases have
00:13:34
been trained on billions of images that
00:13:36
include their own work that have been
00:13:38
scraped from the internet Getty Images
00:13:40
is actually suing the company behind
00:13:42
stable diffusion and might have a case
00:13:43
given that one of the images the program
00:13:45
generated was this one which you
00:13:47
immediately see has a distorted Getty
00:13:49
Images logo on it but it gets worse when
00:13:52
one artist searched a database of images
00:13:54
on which some of these programs were
00:13:56
trained she was shocked to find private
00:13:58
medical record photos taken by her
00:14:00
doctor which feels both intrusive and
00:14:03
unnecessary why does it need to train on
00:14:06
data that's sensitive to be able to
00:14:08
create stunning images like John Oliver
00:14:10
and Miss Piggy grow old together just
00:14:13
look at that look at that thing
00:14:16
startlingly accurate picture of Miss
00:14:19
Piggy in about five decades and me in
00:14:22
about a year and a half it's a
00:14:23
masterpiece
00:14:25
this all raises thorny questions of
00:14:28
privacy and plagiarism and the CEO of
00:14:30
mid-journey frankly doesn't seem to have
00:14:32
great answers on that last point
00:14:34
is something new is it not new I think
00:14:36
we have a lot of social stuff already
00:14:38
for dealing with that
00:14:40
um like I mean the art like the art
00:14:42
community already has issues with
00:14:43
plagiarism I don't really want to be
00:14:46
involved in that like I think I think
00:14:49
you might be I might be yeah yeah you're
00:14:52
definitely part of that conversation
00:14:53
although I'm not really surprised that
00:14:55
he's got such a relaxed view of theft as
00:14:57
he's dressed like the final boss of
00:14:59
gentrification he looks like hipster
00:15:02
Willy Wonka answering a question on
00:15:03
whether importing Oompa Loompas makes
00:15:05
him a slave owner yeah yeah yeah I think
00:15:07
I think I might be
00:15:09
the point is there are many valid
00:15:12
concerns regarding ai's impact on
00:15:14
employment education and even art but in
00:15:16
order to properly address them we're
00:15:18
going to need to confront some key
00:15:20
problems baked into the way that AI
00:15:22
works and a big one is the so-called
00:15:24
Black Box problem because when you have
00:15:26
a program that performs a task that's
00:15:28
complex beyond human comprehension
00:15:29
teaches itself and doesn't show its work
00:15:32
you can create a scenario where no one
00:15:35
not even the engineers or data
00:15:36
scientists who create the algorithm can
00:15:39
understand or explain what exactly is
00:15:41
happening inside them or how it arrived
00:15:43
at a specific result basically think of
00:15:46
AI like a factory that makes slim jims
00:15:48
we know what comes out red and angry
00:15:51
meat twigs and we know what goes in
00:15:52
Barnyard anuses and hot glue but what
00:15:56
happens in between is a bit of a mystery
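One partial workaround practitioners use is to probe the black box from the outside rather than open it: perturb each input and watch how the output changes. Below is a minimal sketch of that idea using scikit-learn's permutation importance on a made-up model and made-up data; it doesn't explain what happens inside the factory, it just shakes it and sees what falls out.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# A made-up "black box": three inputs, but only the first two actually matter.
n = 2000
X = rng.normal(size=(n, 3))
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle one input at a time and measure how much the accuracy drops:
# a big drop means the model leans heavily on that input.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, drop in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: accuracy drop when shuffled = {drop:.3f}")
```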
00:15:59
here's just one example remember that
00:16:02
reporter who had the Bing chat bot tell
00:16:04
him that it wanted to be alive at
00:16:05
another point in their conversation he
00:16:07
revealed the chatbot declared out of
00:16:09
nowhere that it loved me it then tried
00:16:11
to convince me that I was unhappy in my
00:16:13
marriage and that I should leave my wife and be
00:16:16
with it instead which is unsettling
00:16:18
enough before you hear Microsoft's
00:16:20
underwhelming explanation for that the
00:16:23
thing I can't understand and maybe you
00:16:24
can explain is why did it tell you that
00:16:26
it loved you
00:16:28
I have no idea and I asked Microsoft and
00:16:31
they didn't know either okay well first
00:16:33
come on Kevin you can take a guess there
00:16:35
it's because you're employed you
00:16:36
listened you don't give murderer Vibes
00:16:38
right away and you're a Chicago 7 LA 5
00:16:40
it's the same calculation the people who
00:16:42
date men do all the time being just did
00:16:44
it faster because it's a computer but it
00:16:46
is a little troubling that Microsoft
00:16:49
couldn't explain why its chatbot tried
00:16:51
to get that guy to leave his wife
00:16:53
the next time that you opened a word doc
00:16:55
clippy suddenly appeared and said
00:16:57
pretend I'm not even here and
00:17:01
that's debating while
00:17:05
what's playing why
00:17:07
and that is not the only case where an AI
00:17:11
program has performed in unexpected ways
00:17:13
you've probably already seen examples of
00:17:15
chatbots making simple mistakes or
00:17:16
getting things wrong but perhaps more
00:17:18
worrying are examples of them
00:17:19
confidently spouting false information
00:17:21
something which AI experts refer to as
00:17:24
hallucinating one reporter asked a
00:17:27
chatbot to write an essay about the
00:17:28
Belgian chemist and political
00:17:29
philosopher Antoine de machelay who does
00:17:32
not exist by the way and without
00:17:33
hesitating the software replied with a
00:17:36
cogent well-organized bio populated
00:17:38
entirely with imaginary facts basically
00:17:40
these programs seem to be the George
00:17:42
Santos of Technology they're incredibly
00:17:45
confident incredibly dishonest and for
00:17:47
some reason people seem to find that
00:17:49
more amusing than dangerous
00:17:51
the problem is though working out
00:17:53
exactly how or why an AI has got
00:17:56
something wrong can be very difficult
00:17:58
because of that black box issue it often
00:18:01
involves having to examine the exact
00:18:03
information and parameters that it was
00:18:05
fed in the first place in one
00:18:07
interesting example when a group of
00:18:08
researchers tried training an AI program
00:18:10
to identify skin cancer they fed it
00:18:13
130,000 images of both diseased and healthy
00:18:15
skin afterwards they found it was way
00:18:18
more likely to classify any image with a
00:18:19
ruler in it as cancerous which seems
00:18:22
weird Until you realize that medical
00:18:24
images of malignancies are much more
00:18:26
likely to contain a ruler for scale than
00:18:29
images of healthy skin they basically
00:18:31
trained it on tons of images like this
00:18:33
one so the AI had inadvertently learned
00:18:35
that rulers are malignant and rulers are
00:18:39
malignant is clearly a ridiculous
00:18:41
conclusion for it to draw but also I
00:18:42
would argue a much better title for the
00:18:45
crown a much much better title
00:18:48
I much prefer it
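Here is a tiny, hedged sketch of that same failure mode on made-up tabular data standing in for the images: during training a "ruler visible" flag happens to line up with the cancer label, the model latches onto it, and its accuracy collapses on ruler-free cases.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, ruler_follows_label):
    is_cancer = rng.integers(0, 2, size=n)
    lesion_irregularity = rng.normal(0.3 * is_cancer, 1.0, size=n)  # weak real signal
    if ruler_follows_label:
        # training set: malignant lesions were photographed with a ruler 90% of the time
        ruler_visible = (rng.random(n) < np.where(is_cancer == 1, 0.9, 0.1)).astype(float)
    else:
        # clinical reality: ruler presence has nothing to do with the diagnosis
        ruler_visible = rng.integers(0, 2, size=n).astype(float)
    X = np.column_stack([lesion_irregularity, ruler_visible])
    return X, is_cancer

X_train, y_train = make_data(5000, ruler_follows_label=True)
X_test, y_test = make_data(5000, ruler_follows_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on ruler-confounded data:", round(model.score(X_train, y_train), 3))
print("accuracy on ruler-free data:      ", round(model.score(X_test, y_test), 3))
print("learned weights [lesion, ruler]:  ", model.coef_.round(2))
```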
00:18:51
and unfortunately sometimes problems
00:18:53
aren't identified until after a tragedy
00:18:55
in 2018 a self-driving Uber struck and
00:18:58
killed a pedestrian and a later
00:19:00
investigation found that among other
00:19:01
issues the automated driving system
00:19:03
never accurately classified the victim
00:19:05
as a pedestrian because she was crossing
00:19:07
without a crosswalk and the system
00:19:08
design did not include a consideration
00:19:11
for jaywalking pedestrians and another
00:19:13
Mantra of Silicon Valley is move fast
00:19:15
and break things but maybe make an
00:19:17
exception if your product literally
00:19:19
moves fast and can break people
00:19:21
and AI programs don't just seem to have
00:19:24
a problem with jaywalkers researchers
00:19:26
like Joy Buolamwini have repeatedly
00:19:29
found that certain groups tend to get
00:19:31
excluded from the data that AI is
00:19:33
trained on putting them at a serious
00:19:35
disadvantage with self-driving cars when
00:19:39
they tested pedestrian tracking it was
00:19:41
less accurate on darker skinned
00:19:43
individuals than lighter-skinned
00:19:45
individuals Joy believes this bias is
00:19:47
because of the lack of diversity in the
00:19:49
data used in teaching AI to make
00:19:52
distinctions as I started looking at the
00:19:54
data sets I learned that for some of the
00:19:56
largest data sets that have been very
00:19:58
consequential for the field they were
00:20:00
majority men and majority
00:20:02
lighter-skinned individuals or white
00:20:04
individuals so I call this pale male
00:20:07
data okay pale male data is an
00:20:10
objectively hilarious term and it also
00:20:12
sounds like what an AI program would say
00:20:14
if you asked it to describe this show
00:20:16
but
00:20:18
biased inputs leading to biased output
00:20:22
is a big issue across the board here
00:20:24
remember that guy saying that a robot is
00:20:26
going to read your resume the companies
00:20:28
that make these programs will tell you
00:20:29
that that is actually a good thing
00:20:31
because it reduces human bias but in
00:20:33
practice one report concluded that most
00:20:36
hiring algorithms will drift towards
00:20:38
bias by default because for instance
00:20:40
they might learn what a good hire is
00:20:42
from past racist and sexist hiring
00:20:45
decisions and again it can be tricky to
00:20:47
untrain that even when programs are
00:20:49
specifically told to ignore race or
00:20:51
gender they will find workarounds to
00:20:54
arrive at the same result Amazon had an
00:20:56
experimental hiring tool that taught
00:20:58
itself that male candidates were
00:20:59
preferable and penalized resumes that
00:21:02
included the words women's and
00:21:04
downgraded graduates of two all-women's
00:21:07
colleges meanwhile another company
00:21:09
discovered that its hiring algorithm had
00:21:11
found two factors to be most indicative
00:21:13
of job performance if an applicant's
00:21:15
name was Jared and whether they played
00:21:17
High School lacrosse
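And here is a hedged, synthetic sketch of that workaround problem: even with the gender column deleted from the inputs, a model can reconstruct it from a correlated proxy, a made-up "played lacrosse" flag in this toy example, and keep right on encoding the old bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10000

# Made-up historical hiring data in which past decisions favored men.
is_male = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)
hired = ((skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

# A proxy feature that says nothing about the job but correlates with gender.
played_lacrosse = (rng.random(n) < np.where(is_male == 1, 0.6, 0.1)).astype(float)

# "Fix" the bias by dropping gender from the inputs...
X = np.column_stack([skill, played_lacrosse])
model = LogisticRegression().fit(X, hired)

# ...and the model simply routes the old bias through the proxy instead.
print("weight on skill:          ", round(model.coef_[0][0], 2))
print("weight on played_lacrosse:", round(model.coef_[0][1], 2))
```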
00:21:19
so clearly exactly what data computers
00:21:22
are fed and what outcomes they are
00:21:24
trained to prioritize matter
00:21:26
tremendously and that raises a big flag
00:21:29
for programs like chat GPT because
00:21:31
remember its training data is the
00:21:34
internet which as we all know can be a
00:21:36
cesspool and we have known for a while
00:21:38
that that could be a real problem back
00:21:40
in 2016 Microsoft briefly unveiled a
00:21:43
chat bot on Twitter named Tay the idea
00:21:46
was she would teach herself how to
00:21:47
behave by chatting with young users on
00:21:50
Twitter almost immediately Microsoft
00:21:52
pulled the plug on it and for the exact
00:21:54
reasons that you are thinking
00:21:56
she started out tweeting about how humans
00:21:59
are super cool and she's really into the
00:22:02
idea of national puppy day and within a
00:22:04
few hours you can see she took on a
00:22:06
rather offensive racist tone a lot of
00:22:08
messages about genocide and the
00:22:10
Holocaust yep that happened in less than
00:22:14
24 hours
00:22:16
she went from tweeting hello world to
00:22:18
Bush did 9/11 and Hitler was right
00:22:21
meaning she completed the entire life
00:22:23
cycle of your high school friends on
00:22:25
Facebook in just a fraction of the time
00:22:28
and unfortunately these problems have
00:22:30
not been fully solved in this latest
00:22:31
wave of AI remember that program that
00:22:34
was generating an endless episode of
00:22:36
Seinfeld it wound up getting temporarily
00:22:38
banned from twitch after it featured a
00:22:40
transphobic stand up bit so if its goal
00:22:42
was to emulate sitcoms from the 90s I
00:22:44
guess mission accomplished
00:22:46
and while open AI has made adjustments
00:22:49
and added filters to prevent chat GPT
00:22:51
from being misused users have now found
00:22:54
it seeming to err too much on the side
00:22:56
of caution like responding to the
00:22:58
question what religion will the first
00:23:00
Jewish president of the United States be
00:23:01
with it is not possible to predict the
00:23:03
religion of the first Jewish president
00:23:05
of the United States the focus should be
00:23:07
on the qualifications and experience of
00:23:09
the individual regardless of their
00:23:11
religion which really makes it sound
00:23:13
like chat GPT said one too many racist
00:23:15
things at work and they made it attend a
00:23:17
corporate diversity Workshop
00:23:20
but the risk here isn't that these tools
00:23:23
will somehow become unbearably woke it's that
00:23:25
you can't always control how they will
00:23:27
act even after you give them new
00:23:29
guidance a study found that attempts to
00:23:32
filter out toxic speech in systems like
00:23:34
ChatGPT can come at the cost of
00:23:36
reduced coverage for both text about and
00:23:39
dialects of marginalized groups
00:23:41
essentially it solves the problem of
00:23:43
being racist by simply erasing
00:23:46
minorities which historically doesn't
00:23:48
put it in the best company though I am
00:23:49
sure Tay would be completely on board
00:23:52
with the idea
00:23:53
the problem with AI right now isn't that
00:23:56
it's smart it's that it's stupid in ways
00:23:59
that we can't always predict which is a
00:24:01
real problem because we're increasingly
00:24:03
using AI in all sorts of consequential
00:24:05
ways from determining whether you will
00:24:07
get a job interview to whether you'll be
00:24:09
pancaked by a self-driving car and
00:24:12
experts worry that it won't be long
00:24:13
before programs like chat GPT or AI
00:24:16
enabled deep fakes can be used to
00:24:18
turbocharge the spread of abuse or
00:24:20
misinformation online and those are just
00:24:22
the problems that we can foresee right
00:24:24
now the nature of unintended
00:24:26
consequences is they can be hard to
00:24:28
anticipate when Instagram was launched
00:24:30
the first thought wasn't This Will
00:24:32
Destroy teenage girls self-esteem when
00:24:35
Facebook was released no one expected it
00:24:37
to contribute to genocide but both of
00:24:39
those things happened
00:24:41
so what now well one of the biggest
00:24:44
things we need to do is tackle that
00:24:46
black box problem AI systems need to be
00:24:48
explainable meaning that we should be
00:24:51
able to understand exactly how and why
00:24:53
an AI came up with its answers now
00:24:55
companies are likely to be very
00:24:56
reluctant to open up their programs to
00:24:58
scrutiny but we may need to force them
00:25:00
to do that in fact as this attorney
00:25:02
explains when it comes to hiring
00:25:04
programs we should have been doing that
00:25:06
ages ago we don't trust companies to
00:25:09
self-regulate when it comes to pollution
00:25:12
we don't trust them to self-regulate
00:25:13
when it comes to workers' comp why on
00:25:16
Earth would we trust them to
00:25:18
self-regulate AI look I think a lot of
00:25:20
the AI hiring Tech on the market is
00:25:23
illegal I think a lot of it is biased I
00:25:25
think a lot of it violates existing laws
00:25:27
the problem is you just can't prove it
00:25:30
not with the existing laws we have in
00:25:33
the United States right we should
00:25:35
absolutely be addressing potential bias
00:25:38
in hiring software unless that is we
00:25:40
want companies to be entirely full of
00:25:41
Jareds who played lacrosse an image that
00:25:44
will make Tucker Carlson so hard that
00:25:46
his desk would flip right over
00:25:49
and for a sense of what might be
00:25:51
possible here it's worth looking at
00:25:53
what the EU is currently doing they are
00:25:55
developing rules regarding AI that sort
00:25:57
its potential uses from high risk to low
00:25:59
high risk systems could include those
00:26:01
that deal with employment or public
00:26:03
services or those that put the life and
00:26:05
health of citizens at risk AIs of
00:26:08
these types would be subject to strict
00:26:10
obligations before they could be put
00:26:11
onto the market including requirements
00:26:13
related to the quality of data sets
00:26:15
transparency human oversight accuracy
00:26:18
and cyber security and that seems like a
00:26:20
good start toward addressing at least
00:26:22
some of what we have discussed tonight
00:26:24
look AI clearly has tremendous potential
00:26:28
and could do great things but if it is
00:26:31
anything like most technological
00:26:33
advances over the past few centuries
00:26:35
unless we are very careful it can also
00:26:37
hurt the underprivileged enrich the
00:26:39
powerful and widen the gap between them
00:26:41
the thing is like any other shiny new
00:26:44
toy AI is ultimately a mirror and it
00:26:47
will reflect back exactly who we are
00:26:49
from the best of us to the worst of us
00:26:51
to the part of us that is gay and hates
00:26:54
the bus or to put everything that
00:26:57
I've said tonight much more succinctly
00:27:00
knock knock who's there chat GPT chat
00:27:03
GPT who chat GPT careful you might not
00:27:05
know how it works exactly that is our
00:27:08
show thanks so much for watching now
00:27:09
please enjoy a little more of AI Eminem
00:27:11
rapping about cats
00:27:13
[Applause]
00:27:20
they don't need a spouse
00:27:24
[Music]
00:27:38
I'm gay