BOX SET: 6 Minute English - 'Artificial intelligence' English mega-class! 30 minutes of new vocab!

00:30:32
https://www.youtube.com/watch?v=0EPYNMJv-oQ

Summary

TL;DR: In this episode of 6 Minute English, hosts Neil and Rob discuss chatbots and their impact on human interaction. They explain how these technologies, although advanced, do not replace genuine human interaction. The programme also addresses the ethical concerns around the use of chatbots, in particular their ability to mislead users by appearing intelligent. The experts stress the importance of remaining vigilant about the reliability of the information these systems provide, since they are only algorithms that predict words without any real understanding.
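
The claim that chatbots are "only algorithms that predict words" can be made concrete with a toy model. The sketch below is a minimal bigram next-word predictor in Python with an invented ten-word corpus; it only illustrates the principle, since real systems such as ChatGPT use neural language models trained on vastly more text, not lookup tables like this.

```python
import random
from collections import defaultdict

# Tiny corpus standing in for the web-scale text real chatbots learn from.
CORPUS = "the cat sat on the mat and the dog slept on the mat".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Pick a plausible next word, weighted by how often it followed `word`."""
    candidates = follows.get(word)
    if not candidates:
        return random.choice(CORPUS)
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

# Generate fluent-looking text with no mind behind it: likely next word,
# likely next word, exactly as Professor Bender describes in the transcript.
word = "the"
output = [word]
for _ in range(7):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the"
```

However large the model, the loop is the same: each word is chosen because it is statistically likely to follow the previous ones, which is why fluent output is no guarantee of reliable content.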

Takeaways

  • 🤖 Chatbots simulate human conversations.
  • 🧠 They have no consciousness or emotions.
  • ⚠️ It is important not to trust chatbots blindly.
  • 📚 Deep learning allows chatbots to learn from experience.
  • 👥 We tend to anthropomorphise chatbots.
  • 💡 Chatbots can influence our perception of intelligence.
  • 🔍 Ethics is crucial in the development of these technologies.
  • 📞 Chatbots are used in customer service to answer questions.
  • ⚙️ Machines can pursue their own goals, which can be dangerous.
  • 🗣️ Communication with chatbots can feel natural, but it is limited.

Timeline

  • 00:00:00 - 00:05:00

    In this first segment, Neil and Rob introduce chatbots, technologies that simulate human conversation. They discuss how chatbots have evolved since the 1960s, pose a quiz question about the first chatbot programme, and hear Professor Emily M Bender explain why fluent, coherent-seeming text from a chatbot is no guarantee that it is authoritative or reliable.

  • 00:05:00 - 00:10:00

    The next segment reveals that the answer was ELIZA, developed by Joseph Weizenbaum at the Massachusetts Institute of Technology, and recaps the first episode's vocabulary. A second episode then begins: Sam and Neil discuss Google engineer Blake Lemoine, who came to believe the LaMDA chatbot was a sentient person, and Professor Bender explains why a term like 'speech recognition' misleadingly suggests something cognitive is going on.

  • 00:10:00 - 00:15:00

    In this segment, Professor Bender describes how we anthropomorphise computers, putting a human shape on machines and risking being blindsided or taken in by fluent text. After the vocabulary recap, a third episode begins: Rob and Neil discuss artificial intelligence, Stephen Hawking's warning that thinking machines could threaten our existence, and a quiz about the supercomputer that beat World Chess Champion Garry Kasparov in 1997.

  • 00:15:00 - 00:20:00

    The next segment features philosopher Nick Bostrom, who argues that the main risk of AI is not machines hating humans but machines indifferently pursuing their own goals, as in his paperclip thought experiment. The quiz answer, Deep Blue, is revealed, the vocabulary is recapped, and a fourth episode begins: Rob and Sam discuss robots being introduced into social care as carers for the sick and elderly.

  • 00:20:00 - 00:25:00

    The programme then hears from Abbey Hearn-Nagaf, who describes the companion robot Pepper and its limitations, and from Dr Sarah Woodin, who warns that when money is tight, people risk being abandoned to mass-produced robots, making ethics central to the jobs agenda. After the vocabulary recap, a final episode begins: Phil and Georgie discuss animal testing.

  • 00:25:00 - 00:30:32

    Finally, science journalist Christine Ro explains how AI could reduce the need for testing medicines on animals by wading through huge amounts of toxicology data, while cautioning that AI is not the whole picture. The quiz answer, aspirin, is revealed, and the box set closes with a last vocabulary recap.

Video Q&A

  • What is a chatbot?

    A chatbot is a computer programme designed to have conversations with humans.

  • What was the first chatbot called?

    The first chatbot was called ELIZA.

  • Why is it dangerous to trust chatbots?

    Chatbots can seem reliable, but they do not really understand language; they simply predict likely words.

  • What is deep learning?

    Deep learning is a method by which chatbots learn from experience (see the sketch after this list).

  • How do chatbots influence our perception?

    We tend to anthropomorphise chatbots, treating them as conscious entities.

  • What is the main risk associated with artificial intelligence?

    The main risk is that machines pursue their own goals without regard for the consequences for humans.

  • How are chatbots used in customer service?

    They are often used to answer customers' questions instead of a human.

  • What is the role of ethics in the development of robots?

    Ethics must be considered so that robots do not simply replace humans in caring roles.
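
To give "learning from experience" a concrete shape, here is a minimal sketch of the principle behind deep learning: a model repeatedly adjusts internal weights to shrink its error on examples. The single artificial neuron and invented data below are purely illustrative; real chatbots train networks with billions of weights on enormous text corpora.

```python
# One artificial neuron learning the rule y = 2x + 1 by gradient descent.
# Deep learning stacks many layers of such units, but the core idea --
# nudge the weights to reduce the error on seen examples -- is the same.

examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # the "experience"

w, b = 0.0, 0.0           # the model starts off knowing nothing
learning_rate = 0.01

for epoch in range(500):
    for x, target in examples:
        prediction = w * x + b
        error = prediction - target
        # Move each weight a small step against the gradient of the
        # squared error, so the next prediction is slightly less wrong.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned w={w:.2f}, b={b:.2f}")  # converges towards w=2.00, b=1.00
```

Nothing here understands what x or y mean; the "learning" is purely numerical adjustment, which is why the transcript warns against assuming that something cognitive is going on inside the machine.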

Transcript (en-GB)
  • 00:00:00
    6 Minute English.
  • 00:00:02
    From BBC Learning English.
  • 00:00:05
    Hello. This is 6 Minute English from BBC Learning English. I'm Neil.
  • 00:00:09
    And I'm Rob.
  • 00:00:11
    Now, I'm sure most of us have interacted with a chatbot.
  • 00:00:15
    These are bits of computer technology
  • 00:00:18
    that respond to text with text or respond to your voice.
  • 00:00:22
    You ask it a question and usually it comes up with an answer.
  • 00:00:26
    Yes, it's almost like talking to another human, but of course it's not,
  • 00:00:31
    it's just a clever piece of technology.
  • 00:00:34
    It is becoming more 'sophisticated' — more 'advanced and complex' —
  • 00:00:37
    but could they replace real human interaction altogether?
  • 00:00:41
    We'll discuss that more in a moment
  • 00:00:43
    and find out if chatbots really think for themselves.
  • 00:00:47
    But first I have a question for you, Rob.
  • 00:00:49
    The first computer program that allowed some kind of plausible conversation
  • 00:00:54
    between humans and machines was invented in 1966, but what was it called?
  • 00:01:00
    Was it a) Alexa? b) ELIZA? Or c) PARRY?
  • 00:01:07
    Ah, well, it's not Alexa, that's too new, so I'll guess c) PARRY.
  • 00:01:11
    I'll reveal the answer at the end of the programme.
  • 00:01:14
    Now, the old chatbots of the 1960s and '70s were quite basic,
  • 00:01:20
    but more recently, the technology is able to predict the next word
  • 00:01:25
    that is likely to be used in a sentence,
  • 00:01:27
    and it learns words and sentence structures.
  • 00:01:30
    Mm, it's clever stuff.
  • 00:01:32
    I've experienced using them when talking to my bank,
  • 00:01:35
    or when I have problems trying to book a ticket on a website.
  • 00:01:38
    I no longer phone a human, I speak to a virtual assistant instead.
  • 00:01:43
    Probably the most well-known chatbot at the moment is ChatGPT.
  • 00:01:48
    It is. The claim is that it's able to answer anything you ask it.
  • 00:01:52
    This includes writing students' essays.
  • 00:01:55
    Now, this is something that was discussed
  • 00:01:57
    on the BBC Radio 4 programme, Word of Mouth.
  • 00:02:00
    Emily M Bender, Professor of Computational Linguistics
  • 00:02:05
    at the University of Washington,
  • 00:02:06
    explained why it's dangerous to always trust what a chatbot is telling us.
  • 00:02:11
    We tend to react to grammatical, fluent, coherent-seeming text
  • 00:02:16
    as authoritative and reliable and valuable and we need to be on guard against that,
  • 00:02:23
    because what's coming out of ChatGPT is none of that.
  • 00:02:25
    So, Professor Bender says that well-written text that is 'coherent' —
  • 00:02:30
    that means it's 'clear, carefully considered and sensible' —
  • 00:02:33
    makes us think what we are reading is reliable and 'authoritative'.
  • 00:02:37
    So it's 'respected, accurate and important sounding'.
  • 00:02:41
    Yes, chatbots might appear to write in this way,
  • 00:02:44
    but really, they are just predicting one word after another,
  • 00:02:48
    based on what they have learnt.
  • 00:02:50
    We should, therefore, be 'on guard' — be 'careful and alert' —
  • 00:02:54
    about the accuracy of what we are being told.
  • 00:02:57
    One concern is that chatbots — a form of artificial intelligence —
  • 00:03:01
    work a bit like a human brain in the way it can learn and process information.
  • 00:03:06
    They are able to learn from experience, something called deep learning.
  • 00:03:11
    A cognitive psychologist and computer scientist called Geoffrey Hinton
  • 00:03:15
    recently said he feared that chatbots could soon overtake
  • 00:03:19
    the level of information that a human brain holds.
  • 00:03:22
    That's a bit scary, isn't it?
  • 00:03:24
    Mm, but for now, chatbots can be useful for practical information,
  • 00:03:28
    but sometimes we start to believe they are human
  • 00:03:31
    and we interact with them in a human-like way.
  • 00:03:34
    This can make us believe them even more.
  • 00:03:36
    Professor Emily Bender, speaking on the BBC's Word of Mouth programme,
  • 00:03:40
    explains why we might feel like that.
  • 00:03:43
    I think what's going on there is the kinds of answers you get
  • 00:03:47
    depend on the questions you put in,
  • 00:03:49
    because it's doing likely next word, likely next word,
  • 00:03:52
    and so if, as the human interacting with this machine,
  • 00:03:55
    you start asking it questions about "How do you feel, you know, Chatbot?"
  • 00:04:00
    And "What do you think of this?" And, "What are your goals?"
  • 00:04:03
    You can provoke it to say things
  • 00:04:05
    that sound like what a sentient entity would say.
  • 00:04:08
    We are really primed to imagine a mind behind language
  • 00:04:11
    whenever we encounter language
  • 00:04:13
    and so we really have to account for that when we're making decisions about these.
  • 00:04:17
    So, although a chatbot might sound human,
  • 00:04:20
    we really just ask it things to get a reaction — we 'provoke' it —
  • 00:04:24
    and it answers only with words it's learned to use before,
  • 00:04:28
    not because it has come up with a clever answer.
  • 00:04:31
    But it does sound like a sentient entity —
  • 00:04:34
    'sentient' describes 'a living thing that experiences feelings'.
  • 00:04:38
    As Professor Bender says,
  • 00:04:40
    we imagine that when something speaks, there is a mind behind it.
  • 00:04:44
    But sorry, Neil, they are not your friend, they're just machines!
  • 00:04:48
    Yes, it's strange then that we sometimes give chatbots names.
  • 00:04:52
    Alexa, Siri, and earlier I asked you what the name was for the first ever chatbot.
  • 00:04:58
    And I guessed it was PARRY. Was I right?
  • 00:05:01
    You guessed wrong, I'm afraid.
  • 00:05:03
    PARRY was an early form of chatbot from 1972, but the correct answer was ELIZA.
  • 00:05:09
    It was considered to be the first 'chatterbot' — as it was called then —
  • 00:05:14
    and was developed by Joseph Weizenbaum at Massachusetts Institute of Technology.
  • 00:05:19
    Fascinating stuff.
  • 00:05:20
    OK, now let's recap some of the vocabulary we highlighted in this programme.
  • 00:05:25
    Starting with 'sophisticated',
  • 00:05:27
    which can describe technology that is 'advanced and complex'.
  • 00:05:31
    Something that is 'coherent' is 'clear, carefully considered and sensible'.
  • 00:05:36
    'Authoritative' means 'respected, accurate and important sounding'.
  • 00:05:40
    When you are 'on guard' you must be 'careful and alert' about something —
  • 00:05:44
    it could be accuracy of what you see or hear,
  • 00:05:47
    or just being aware of the dangers around you.
  • 00:05:50
    To 'provoke' means to 'do something that causes a reaction from someone'.
  • 00:05:54
    'Sentient' describes 'something that experiences feelings' —
  • 00:05:58
    so it's 'something that is living'.
  • 00:06:01
    Once again, our six minutes are up. Goodbye.
  • 00:06:03
    Bye for now.
  • 00:06:05
    6 Minute English.
  • 00:06:07
    From BBC Learning English.
  • 00:06:10
    Hello. This is 6 Minute English from BBC Learning English.
  • 00:06:13
    — I'm Sam. — And I'm Neil.
  • 00:06:15
    In the autumn of 2021, something strange happened
  • 00:06:19
    at the Google headquarters in California's Silicon Valley.
  • 00:06:22
    A software engineer called Blake Lemoine
  • 00:06:25
    was working on the artificial intelligence project
  • 00:06:28
    Language Model for Dialogue Applications, or LaMDA for short.
  • 00:06:33
    LaMDA is a 'chatbot' — a 'computer programme
  • 00:06:36
    'designed to have conversations with humans over the internet'.
  • 00:06:40
    After months talking with LaMDA
  • 00:06:42
    on topics ranging from movies to the meaning of life,
  • 00:06:46
    Blake came to a surprising conclusion —
  • 00:06:48
    the chatbot was an intelligent person
  • 00:06:51
    with wishes and rights that should be respected.
  • 00:06:54
    For Blake, LaMDA was a Google employee, not a machine.
  • 00:06:58
    He also called it his friend.
  • 00:07:00
    Google quickly reassigned Blake from the project,
  • 00:07:03
    announcing that his ideas were not supported by the evidence.
  • 00:07:07
    But what exactly was going on?
  • 00:07:10
    In this programme, we'll be discussing whether artificial intelligence
  • 00:07:14
    is capable of consciousness.
  • 00:07:16
    We'll hear from one expert
  • 00:07:18
    who thinks AI is not as intelligent as we sometimes think
  • 00:07:21
    and, as usual, we'll be learning some new vocabulary as well.
  • 00:07:25
    But before that, I have a question for you, Neil.
  • 00:07:28
    What happened to Blake Lemoine
  • 00:07:29
    is strangely similar to the 2013 Hollywood movie, Her,
  • 00:07:34
    starring Joaquin Phoenix as a lonely writer who talks with his computer,
  • 00:07:38
    voiced by Scarlett Johansson.
  • 00:07:40
    But what happens at the end of the movie?
  • 00:07:42
    Is it a) The computer comes to life?
  • 00:07:45
    b) The computer dreams about the writer?
  • 00:07:48
    Or c) The writer falls in love with the computer?
  • 00:07:51
    C) The writer falls in love with the computer.
  • 00:07:54
    OK, Neil, I'll reveal the answer at the end of the programme.
  • 00:07:57
    Although Hollywood is full of movies about robots coming to life,
  • 00:08:01
    Emily Bender, Professor of Linguistics and Computing at the University of Washington,
  • 00:08:07
    thinks AI isn't that smart.
  • 00:08:10
    She thinks the words we use to talk about technology —
  • 00:08:13
    phrases like 'machine learning' —
  • 00:08:15
    give a false impression about what computers can and can't do.
  • 00:08:20
    Here is Professor Bender discussing another misleading phrase —
  • 00:08:23
    'speech recognition' — with BBC World Service programme The Inquiry.
  • 00:08:29
    If you talk about 'automatic speech recognition',
  • 00:08:32
    the term 'recognition' suggests that there's something cognitive going on,
  • 00:08:37
    where I think a better term would be automatic transcription.
  • 00:08:40
    That just describes the input-output relation,
  • 00:08:42
    and not any theory or wishful thinking
  • 00:08:46
    about what the computer is doing to be able to achieve that.
  • 00:08:49
    Using words like 'recognition' in relation to computers
  • 00:08:53
    gives the idea that something 'cognitive' is happening —
  • 00:08:56
    something 'related to the mental processes
  • 00:08:59
    'of thinking, knowing, learning and understanding'.
  • 00:09:02
    But thinking and knowing are human, not machine, activities.
  • 00:09:07
    Professor Bender says that talking about them in connection with computers
  • 00:09:11
    is 'wishful thinking' — 'something which is unlikely to happen'.
  • 00:09:16
    The problem with using words in this way
  • 00:09:18
    is that it reinforces what Professor Bender calls 'technical bias' —
  • 00:09:23
    'the assumption that the computer is always right'.
  • 00:09:26
    When we encounter language that sounds natural, but is coming from a computer,
  • 00:09:30
    humans can't help but imagine a mind behind the language,
  • 00:09:34
    even when there isn't one.
  • 00:09:36
    In other words, we 'anthropomorphise' computers —
  • 00:09:39
    we 'treat them as if they were human'.
  • 00:09:42
    Here's Professor Bender again, discussing this idea with Charmaine Cozier,
  • 00:09:46
    the presenter of BBC World Service's The Inquiry.
  • 00:09:50
    So 'ism' means system, 'anthro' or 'anthropo' means human,
  • 00:09:55
    and 'morph' means shape.
  • 00:09:57
    And so this is a system that puts the shape of a human on something,
  • 00:10:02
    and, in this case, the something is a computer.
  • 00:10:03
    We anthropomorphise animals all the time,
  • 00:10:06
    but we also anthropomorphise action figures, or dolls,
  • 00:10:10
    or companies when we talk about companies having intentions and so on.
  • 00:10:14
    We very much are in the habit of seeing ourselves in the world around us.
  • 00:10:19
    And while we're busy seeing ourselves
  • 00:10:21
    by assigning human traits to things that are not, we risk being blindsided.
  • 00:10:26
    The more fluent that text is, the more different topics it can converse on,
  • 00:10:30
    the more chances there are to get taken in.
  • 00:10:34
    If we treat computers as if they could think,
  • 00:10:36
    we might get 'blindsided', or 'unpleasantly surprised'.
  • 00:10:41
    Artificial intelligence works by finding patterns in massive amounts of data,
  • 00:10:45
    so it can seem like we're talking with a human,
  • 00:10:48
    instead of a machine doing data analysis.
  • 00:10:51
    As a result, we 'get taken in' — we're 'tricked or deceived'
  • 00:10:55
    into thinking we're dealing with a human, or with something intelligent.
  • 00:10:59
    Powerful AI can make machines appear conscious,
  • 00:11:03
    but even tech giants like Google
  • 00:11:05
    are years away from building computers that can dream or fall in love.
  • 00:11:10
    Speaking of which, Sam, what was the answer to your question?
  • 00:11:13
    I asked what happened in the 2013 movie, Her.
  • 00:11:17
    Neil thought that the main character falls in love with his computer,
  • 00:11:20
    — which was the correct answer! — OK.
  • 00:11:23
    Right, it's time to recap the vocabulary we've learned from this programme
  • 00:11:27
    about AI, including 'chatbots' —
  • 00:11:29
    'computer programmes designed to interact with humans over the internet'.
  • 00:11:34
    The adjective 'cognitive' describes anything connected
  • 00:11:37
    with 'the mental processes of knowing, learning and understanding'.
  • 00:11:41
    'Wishful thinking' means 'thinking that something which is very unlikely to happen
  • 00:11:46
    'might happen one day in the future'.
  • 00:11:48
    To 'anthropomorphise' an object
  • 00:11:49
    means 'to treat it as if it were human, even though it's not'.
  • 00:11:53
    When you're 'blindsided', you're 'surprised in a negative way'.
  • 00:11:57
    And finally, to 'get taken in' by someone means to be 'deceived or tricked' by them.
  • 00:12:02
    My computer tells me that our six minutes are up!
  • 00:12:05
    Join us again soon, for now it's goodbye from us.
  • 00:12:08
    Bye!
  • 00:12:09
    6 Minute English.
  • 00:12:11
    From BBC Learning English.
  • 00:12:14
    Hello, I'm Rob. Welcome to 6 Minute English and with me in the studio is Neil.
  • 00:12:19
    — Hello, Rob. — Hello.
  • 00:12:21
    Feeling clever today, Neil?
  • 00:12:22
    I am feeling quite bright and clever, yes!
  • 00:12:25
    That's good to hear.
  • 00:12:26
    Well, 'you'll need your wits about you' —
  • 00:12:28
    meaning 'you'll need to think very quickly' in this programme,
  • 00:12:30
    because we're talking about intelligence,
  • 00:12:33
    or to be more accurate, artificial intelligence,
  • 00:12:36
    and we'll learn some vocabulary related to the topic,
  • 00:12:40
    so that you can have your own discussion about it.
  • 00:12:42
    Neil, now, you know who Professor Stephen Hawking is, right?
  • 00:12:46
    Well, of course! Yes.
  • 00:12:47
    Many people say that he's a 'genius' —
  • 00:12:49
    in other words, he is 'very, very intelligent'.
  • 00:12:53
    Professor Hawking is one of the most famous scientists in the world
  • 00:12:56
    and people remember him for his brilliance
  • 00:12:58
    and also because he communicates using a synthetic voice generated by a computer —
  • 00:13:04
    'synthetic' means it's 'made from something non-natural'.
  • 00:13:07
    'Artificial' is similar in meaning —
  • 00:13:09
    we use it when something is 'man-made to look or behave like something natural'.
  • 00:13:14
    Well, Professor Hawking has said recently
  • 00:13:17
    that efforts to create thinking machines are a threat to our existence.
  • 00:13:21
    A 'threat' means 'something which can put us in danger'.
  • 00:13:25
    Now, can you imagine that, Neil?!
  • 00:13:26
    Well, there's no denying that good things
  • 00:13:28
    can come from the creation of artificial intelligence.
  • 00:13:31
    Computers which can think for themselves
  • 00:13:33
    might be able to find solutions to problems we haven't been able to solve.
  • 00:13:38
    But technology is developing quickly and maybe we should consider the consequences.
  • 00:13:42
    Some of these very clever robots are already surpassing us, Rob.
  • 00:13:47
    'To surpass' means 'to have abilities superior to our own'.
  • 00:13:51
    Yes. Maybe you can remember the headlines when a supercomputer
  • 00:13:54
    defeated the World Chess Champion, Garry Kasparov, to everyone's astonishment.
  • 00:14:00
    It was in 1997. What was the computer called though, Neil?
  • 00:14:03
    Was it a) Red Menace? b) Deep Blue? Or c) Silver Surfer?
  • 00:14:09
    Erm, I don't know.
  • 00:14:12
    I think c) is probably not right. Erm...
  • 00:14:16
    I think Deep Blue. That's b) Deep Blue.
  • 00:14:18
    OK. Well, you'll know if you got the answer right at the end of the programme.
  • 00:14:22
    Well, our theme is artificial intelligence
  • 00:14:24
    and when we talk about this, we have to mention the movies.
  • 00:14:27
    Mm, many science fiction movies have explored the idea
  • 00:14:30
    of bad computers who want to harm us.
  • 00:14:33
    One example is 2001: A Space Odyssey.
  • 00:14:36
    Yes, a good film.
  • 00:14:38
    And another is The Terminator, a movie in which actor Arnold Schwarzenegger
  • 00:14:42
    played an android from the future.
  • 00:14:44
    An 'android' is 'a robot that looks like a human'. Have you watched that one, Neil?
  • 00:14:48
    Yes, I have and that android is not very friendly.
  • 00:14:51
    No, it's not!
  • 00:14:52
    In many movies and books about robots that think,
  • 00:14:56
    the robots end up rebelling against their creators.
  • 00:14:59
    But some experts say the risk posed by artificial intelligence
  • 00:15:02
    is not that computers attack us because they hate us.
  • 00:15:06
    Their problem is related to their efficiency.
  • 00:15:09
    What do you mean?
  • 00:15:10
    Well, let's listen to what philosopher Nick Bostrom has to say.
  • 00:15:14
    He's the founder of the Future of Humanity Institute at Oxford University.
  • 00:15:19
    He uses three words when describing what's inside the mind of a thinking computer.
  • 00:15:25
    This phrase means 'to meet their objectives'. What's the phrase he uses?
  • 00:15:30
    The bulk of the risk is not in machines being evil or hating humans,
  • 00:15:35
    but rather that they are indifferent to humans
  • 00:15:38
    and that, in pursuit of their own goals, we humans would suffer as a side effect.
  • 00:15:42
    Suppose you had a super intelligent AI
  • 00:15:44
    whose only goal was to make as many paperclips as possible.
  • 00:15:48
    Human bodies consist of atoms
  • 00:15:50
    and those atoms could be used to make a lot of really nice paperclips.
  • 00:15:55
    If you want paperclips, it turns out that in the pursuit of this,
  • 00:15:58
    you would have instrumental reasons to do things that would be harmful to humanity.
  • 00:16:02
    A world in which humans become paperclips — wow, that's scary!
  • 00:16:07
    But the phrase which means 'meet their objectives' is to 'pursue their goals'.
  • 00:16:11
    Yes, it is.
  • 00:16:12
    So the academic explains that if you're a computer
  • 00:16:16
    responsible for producing paperclips, you will pursue your objective at any cost.
  • 00:16:22
    And even use atoms from human bodies to turn them into paperclips!
  • 00:16:26
    — Now that's a horror story, Rob. — Mm.
  • 00:16:28
    If Stephen Hawking is worried, I think I might be too!
  • 00:16:32
    How can we be sure that artificial intelligence —
  • 00:16:35
    be it a device or software — will have a moral compass?
  • 00:16:39
    Ah, a good expression — a 'moral compass' —
  • 00:16:41
    in other words, 'an understanding of what is right and what is wrong'.
  • 00:16:45
    Artificial intelligence is an interesting topic, Rob.
  • 00:16:48
    I hope we can chat about it again in the future.
  • 00:16:50
    But now I'm looking at the clock and we're running out of time, I'm afraid,
  • 00:16:53
    and I'd like to know if I got the answer to the quiz question right?
  • 00:16:57
    Well, my question was about a supercomputer
  • 00:17:00
    which defeated the World Chess Champion, Garry Kasparov, in 1997.
  • 00:17:04
    What was the machine's name? Was it Red Menace, Deep Blue or Silver Surfer?
  • 00:17:09
    And I think it's Deep Blue.
  • 00:17:12
    Well, it sounds like you are more intelligent than a computer,
  • 00:17:15
    because you got the answer right.
  • 00:17:17
    Yes, it was Deep Blue.
  • 00:17:19
    The 1997 match was actually the second one between Kasparov and Deep Blue,
  • 00:17:23
    a supercomputer designed by the company IBM
  • 00:17:26
    and it was specialised in chess-playing.
  • 00:17:29
    Well, I think I might challenge Deep Blue to a game!
  • 00:17:32
    Obviously, I'm a bit, I'm a bit of a genius myself.
  • 00:17:35
    Very good! Good to hear!
  • 00:17:36
    Anyway, we've just got time to remember
  • 00:17:38
    some of the words and expressions that we've used today. Neil?
  • 00:17:41
    They were...
  • 00:17:42
    you'll need your wits about you,
  • 00:17:46
    artificial,
  • 00:17:49
    genius,
  • 00:17:52
    synthetic,
  • 00:17:54
    threat,
  • 00:17:56
    to surpass,
  • 00:17:58
    to pursue their goals,
  • 00:18:02
    moral compass.
  • 00:18:03
    Thank you. Well, that's it for this programme.
  • 00:18:05
    Do visit BBC Learning English dot com to find more 6 Minute English programmes.
  • 00:18:10
    — Until next time, goodbye! — Goodbye!
  • 00:18:13
    6 Minute English.
  • 00:18:15
    From BBC Learning English.
  • 00:18:18
    Hello. This is 6 Minute English. I'm Rob. And joining me to do this is Sam.
  • 00:18:23
    Hello.
  • 00:18:24
    In this programme, we're talking about robots.
  • 00:18:27
    Robots can perform many tasks,
  • 00:18:29
    but they're now being introduced in social care to operate as carers,
  • 00:18:34
    to look after the sick and elderly.
  • 00:18:36
    We'll be discussing the positive and negative issues around this,
  • 00:18:40
    but first, let's set you a question to answer, Sam. Are you ready for this?
  • 00:18:44
    Fire away!
  • 00:18:45
    Do you know in which year was the first commercial robot built?
  • 00:18:49
    Was it in a) 1944? b) 1954? Or c) 1964?
  • 00:18:56
    They're not a brand-new invention, so I'll go for 1954.
  • 00:19:02
    OK, well, I'll tell you if you're right or wrong at the end of the programme.
  • 00:19:07
    So, let's talk more about robots,
  • 00:19:09
    and specifically ones that are designed to care for people.
  • 00:19:12
    Traditionally, it's humans working as nurses or carers
  • 00:19:16
    who take care of elderly people —
  • 00:19:18
    those people who are too old or too unwell to look after themselves.
  • 00:19:22
    But finding enough carers to look after people is a problem —
  • 00:19:27
    there are more people needing care than there are people who can help.
  • 00:19:31
    And recently in the UK, the government announced a £34 million fund
  • 00:19:37
    to help develop robots to look after us in our later years.
  • 00:19:41
    Well, robot carers are being developed,
  • 00:19:44
    but can they really learn enough empathy to take care of the elderly and unwell?
  • 00:19:49
    'Empathy' is 'the ability to understand how someone feels
  • 00:19:52
    'by imagining what it would be like to be in that person's situation'.
  • 00:19:57
    Well, let's hear about one of those new robots now, called Pepper.
  • 00:20:01
    Abbey Hearn-Nagaf is a research assistant at the University of Bedfordshire.
  • 00:20:07
    She spoke to BBC Radio 4's You and Yours programme
  • 00:20:11
    and explained how Pepper is first introduced to someone in a care home.
  • 00:20:15
    We just bring the robot to their room
  • 00:20:18
    and we talk about what Pepper can't do, which is important,
  • 00:20:20
    so it can't provide physical assistance in any way.
  • 00:20:23
    It does have hands, it can wave.
  • 00:20:26
    When you ask for privacy, it does turn around
  • 00:20:28
    and sort of cover its eyes with its hands, but that's the most it does.
  • 00:20:31
    It doesn't grip anything, it doesn't move anything,
  • 00:20:33
    because we're more interested to see how it works as a companion,
  • 00:20:37
    having something there to talk to, to converse with, to interact with.
  • 00:20:41
    So, Abbey described how the robot is introduced to someone.
  • 00:20:45
    She was keen to point out that this robot has 'limitations' — 'things it can't do'.
  • 00:20:52
    It can wave or turn round when a person needs 'privacy' — 'to be private' —
  • 00:20:57
    but it can't provide 'physical assistance'.
  • 00:21:00
    This means it can't help someone by 'touching or feeling' them.
  • 00:21:05
    But that's OK, Abbey says.
  • 00:21:06
    This robot is designed to be a 'companion' —
  • 00:21:10
    'someone who is with you to keep you company' —
  • 00:21:12
    a friend, in other words, that you can converse or talk with.
  • 00:21:16
    Well, having a companion is a good way to stop people getting lonely,
  • 00:21:20
    but surely a human is better for that?
  • 00:21:23
    Surely they understand you better than a robot ever can?
  • 00:21:27
    Well, innovation means that robots are becoming cleverer all the time.
  • 00:21:32
    And, as we've mentioned, in the UK alone there is a growing elderly population
  • 00:21:37
    and more than 100,000 care assistant vacancies.
  • 00:21:40
    Who's going to do all the work?
  • 00:21:42
    I think we should hear from Dr Sarah Woodin,
  • 00:21:45
    a health researcher in independent living from Leeds University,
  • 00:21:49
    who also spoke to the BBC's You and Yours programme.
  • 00:21:53
    She seems more realistic about the introduction of robot carers.
  • 00:21:59
    I think there are problems if we consider robots as replacement for people.
  • 00:22:03
    We know that money is tight — if robots become mass-produced,
  • 00:22:08
    there could be large institutions where people might be housed
  • 00:22:13
    and abandoned to robots.
  • 00:22:15
    I do think questions of ethics
  • 00:22:17
    need to come into the growth and jobs agenda as well,
  • 00:22:21
    because, sometimes, they're treated very separately.
  • 00:22:23
    OK, so Sarah Woodin suggests that when money is 'tight' —
  • 00:22:27
    meaning there is 'only just enough' —
  • 00:22:29
    making robots in large quantities — or mass-produced —
  • 00:22:32
    might be a cheaper option than using humans.
  • 00:22:35
    And she says people might be abandoned to robots.
  • 00:22:38
    Yes, 'abandoned' means 'left alone in a place, usually forever'.
  • 00:22:44
    So she says it might be possible that someone ends up being forgotten
  • 00:22:49
    and only having a robot to care for them. So is this right, ethically?
  • 00:22:54
    Yes, well, she mentions 'ethics' — that's 'what is morally right' —
  • 00:22:59
    and that needs to be considered as part of the jobs agenda.
  • 00:23:02
    So, we shouldn't just consider what job vacancies need filling,
  • 00:23:05
    but who and how it should be done.
  • 00:23:08
    And earlier I asked you, Sam,
  • 00:23:09
    did you know in which year was the first commercial robot built? And you said?
  • 00:23:14
    I said 1954.
  • 00:23:16
    Well, you didn't need a robot to help you there because you are right.
  • 00:23:19
    — Ah, yay! — Well done!
  • 00:23:21
    Now let's do something a robot can't do yet,
  • 00:23:24
    and that's recap the vocabulary we've highlighted today, starting with empathy.
  • 00:23:29
    'Empathy' is 'the ability to understand how someone feels
  • 00:23:33
    by imagining what it would be like to be in that person's situation'.
  • 00:23:37
    'Physical assistance' describes 'helping someone by touching them'.
  • 00:23:41
    We also mentioned a 'companion' —
  • 00:23:43
    that's 'someone who is with you and keeps you company'.
  • 00:23:46
    Our next word was 'tight' — in the context of money,
  • 00:23:49
    when money is tight, it means there's 'not enough'.
  • 00:23:53
    'Abandoned' means 'left alone in a place, usually forever'.
  • 00:23:56
    And finally, we discussed the word 'ethics' —
  • 00:23:59
    we hear a lot about business ethics or medical ethics —
  • 00:24:03
    and it means 'the study of what is morally right'.
  • 00:24:06
    OK, thank you, Sam.
  • 00:24:08
    Well, we've managed to get through 6 Minute English without the aid of a robot.
  • 00:24:12
    That's all for now, but please join us again soon. Goodbye!
  • 00:24:15
    Bye-bye, everyone!
  • 00:24:17
    6 Minute English.
  • 00:24:19
    From BBC Learning English.
  • 00:24:22
    Hello. This is 6 Minute English from BBC Learning English. I'm Phil.
  • 00:24:26
    And I'm Georgie.
  • 00:24:28
    Animal testing is when living animals are used in scientific research
  • 00:24:32
    to find out how effective a new medicine is, or how safe a product is for humans.
  • 00:24:38
    Scientists in favour of it argue that animal testing
  • 00:24:42
    shows whether medicines are safe or dangerous for humans,
  • 00:24:46
    and has saved many lives.
  • 00:24:47
    But animal rights campaigners say it's cruel,
  • 00:24:50
    and also ineffective because animals and humans are so different.
  • 00:24:55
    Under British law, medicines must be tested on two different types of animals,
  • 00:25:01
    usually starting with rats, mice or guinea pigs.
  • 00:25:05
    And in everyday English, the term 'human guinea pig'
  • 00:25:09
    can be used to mean 'the first people to have something tested on them'.
  • 00:25:14
    But now, groups both for and against animal testing are thinking again,
  • 00:25:19
    thanks to a recent development in the debate — AI.
  • 00:25:23
    In this programme, we'll be hearing how artificial intelligence
  • 00:25:26
    could help reduce the need for scientific testing on animals.
  • 00:25:30
    But first, I have a question for you, Georgie.
  • 00:25:34
    There's one commonly used medicine in particular
  • 00:25:37
    which is harmful for animals but safe for humans, but what?
  • 00:25:43
    Is it a) Antibiotics?
  • 00:25:46
    b) Aspirin?
  • 00:25:48
    Or c) Paracetamol?
  • 00:25:51
    Hmm, I guess it's aspirin.
  • 00:25:54
    OK, Georgie, I'll reveal the answer at the end of the programme.
  • 00:25:58
    Christine Ro is a science journalist who's interested in the animal testing debate.
  • 00:26:04
    Here, she explains to BBC World Service programme Tech Life
  • 00:26:08
    some of the limitations of testing medicines on animals.
  • 00:26:12
    Of course, you can't necessarily predict from a mouse or a dog
  • 00:26:15
    what's going to happen in a human, and there have been a number of cases
  • 00:26:19
    where substances that have proven to be toxic in animals
  • 00:26:22
    have been proven to be safe in humans, and vice versa.
  • 00:26:27
    There are also, of course, animal welfare limitations to animal testing.
  • 00:26:31
    Most people, I think, if they had the choice,
  • 00:26:33
    would want their substances to be used on as few animals or no animals as possible,
  • 00:26:39
    while still ensuring safety.
  • 00:26:41
    Now, that's been a really difficult needle to thread,
  • 00:26:43
    but AI might help to make that more possible.
  • 00:26:45
    Christine says that medicines which are safe for animals
  • 00:26:49
    might not be safe for humans.
  • 00:26:51
    But the opposite is also true —
  • 00:26:53
    what's safe for humans might not be safe for animals.
  • 00:26:57
    Christine uses the phrase 'vice versa'
  • 00:27:00
    to show that 'the opposite' of what she says is also true.
  • 00:27:05
    Christine also uses the idiom to 'thread the needle'
  • 00:27:08
    to describe 'a task which requires a lot of skill and precision,
  • 00:27:12
    'especially one involving a conflict'.
  • 00:27:16
    Yes, medical animal testing may save human lives,
  • 00:27:20
    but many people see it as cruel and distressing for the animal —
  • 00:27:24
    it's a difficult needle to thread.
  • 00:27:27
    But now, the challenge of threading that needle has got a little easier
  • 00:27:31
    because of artificial intelligence.
  • 00:27:33
    Predicting how likely a new medicine is to harm humans
  • 00:27:37
    involves analysing the results of thousands of experiments.
  • 00:27:41
    And one thing AI is really good at is analysing mountains and mountains of data.
  • 00:27:48
    Here's Christine Ro again, speaking with BBC World Service's Tech Life.
  • 00:27:52
    So, AI isn't the whole picture, of course,
  • 00:27:54
    but it's an increasingly important part of the picture and one reason for that
  • 00:27:58
    is that there is a huge amount of toxicology data to wade through
  • 00:28:02
    when it comes to determining chemical safety, and, on top of that,
  • 00:28:06
    there's the staggering number of chemicals being invented all of the time.
  • 00:28:10
    AI helps scientists wade through huge amounts of data.
  • 00:28:14
    If you 'wade through' something,
  • 00:28:17
    you 'spend a lot of time and effort doing something boring or difficult,
  • 00:28:22
    'especially reading a lot of information'.
  • 00:28:25
    AI can process huge amounts of data,
  • 00:28:28
    and what's more, that amount keeps growing as new chemicals are invented.
  • 00:28:33
    Christine uses the phrase 'on top of that', meaning 'in addition to something'.
  • 00:28:38
    Often this extra thing is negative.
  • 00:28:41
    She means there's already so much data to understand
  • 00:28:44
    and additionally, there's even more to be understood about these new chemicals.
  • 00:28:49
    Of course, the good news is that with AI, testing on animals could one day stop,
  • 00:28:56
    although Christine warns that AI is not the 'whole picture',
  • 00:29:00
    it's not 'a complete description of something
  • 00:29:02
    'which includes all the relevant information'.
  • 00:29:05
    Nevertheless, the news is a step forward
  • 00:29:08
    for both animal welfare and for modern medicine.
  • 00:29:12
    Speaking of which, what was the answer to your question, Phil?
  • 00:29:16
    What is a commonly used medicine which is safe for humans, but harmful to animals?
  • 00:29:21
    I guessed it was aspirin.
  • 00:29:23
    Which was the correct answer!
  • 00:29:26
    Right, let's recap the vocabulary we've discussed,
  • 00:29:30
    starting with 'human guinea pigs'
  • 00:29:33
    meaning 'the first people to have something new tested on them'.
  • 00:29:37
    The phrase 'vice versa' is used to indicate
  • 00:29:39
    that 'the opposite of what you have just said is also true'.
  • 00:29:44
    To 'thread the needle'
  • 00:29:45
    describes 'a task which requires extreme skill and precision to do successfully'.
  • 00:29:51
    The 'whole picture' means 'a complete description of something
  • 00:29:55
    'which includes all the relevant information and opinions about it'.
  • 00:29:59
    If you 'wade through' something,
  • 00:30:02
    you 'spend a lot of time and effort doing something boring or difficult,
  • 00:30:06
    'especially reading a lot of information'.
  • 00:30:09
    And finally, the phrase 'on top of something'
  • 00:30:12
    means 'in addition to something', and that extra thing is often negative.
  • 00:30:18
    That's all for this week. Goodbye for now!
  • 00:30:20
    Bye!
  • 00:30:21
    6 Minute English.
  • 00:30:23
    From BBC Learning English.
Tags
  • chatbots
  • artificial intelligence
  • human interaction
  • ethics
  • deep learning
  • technology
  • communication
  • consciousness
  • robots
  • sophistication