AI Ethics & Governance

00:38:55
https://www.youtube.com/watch?v=VfbIePLWjww

Summary

TLDR: The video discusses the importance of AI technology in telecommunications as the sector shifts toward software, especially in the Philippines, where the public uses AI far more heavily than in Japan or Indonesia. Fears about misuse of AI by students and teachers in exams are raised. This has created an urgent need to examine fairness frameworks and governance structures suited to AI so that progress benefits everyone. The example is given of an Amazon AI recruiting tool that exhibited gender bias, favoring men. The need to uphold human rights, fairness, safety, and public trust in the use of AI is emphasized. Care is urged in understanding how AI can cause harm, together with establishing sound principles of transparency and trustworthiness backed by effective law and governance.

Takeaways

  • ⚙️ AI plays a growing role in software-based telecommunications.
  • 📈 The Philippines uses AI far more heavily than Japan and Indonesia.
  • 🚸 Students and teachers misuse AI to complete requirements and exams.
  • ⚖️ AI needs safeguards for human rights and fairness.
  • 🥇 Amazon's AI hiring tool showed gender bias in recruitment.
  • 🔍 Calls for human involvement in AI systems are gaining acceptance.
  • 🛡️ Robustness and safety are central goals in the use of AI.
  • 🖥️ AI is heavily used in industries such as data-handling services.
  • 🌐 We must examine fairness and honesty in how we use AI.
  • 🔊 Questions remain over whether AI introduces bias into decision-making.
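
The takeaways above call for human oversight and safety in AI systems. As a minimal sketch of the human-in-the-loop pattern discussed later in the talk, the gating function below routes high-risk cases to a human reviewer; all names, data, and the 0.7 threshold are invented for illustration:

```python
# Illustrative human-in-the-loop (HITL) gating sketch.
# All names, data, and thresholds here are hypothetical.

HIGH_RISK_THRESHOLD = 0.7  # invented cutoff for escalating to a human

def model_score(application):
    """Stand-in for an AI model: returns (decision, risk_score)."""
    score = application.get("risk", 0.0)
    return ("approve" if score < 0.5 else "deny", score)

def decide(application, human_review):
    """Let the model decide low-risk cases; escalate high-risk ones."""
    decision, risk = model_score(application)
    if risk >= HIGH_RISK_THRESHOLD:
        # High-risk: a human makes the final call and stays accountable.
        return human_review(application, proposed=decision)
    return decision

# A trivial 'human' reviewer that records the model's proposal.
def reviewer(application, proposed):
    return f"human-reviewed ({proposed} proposed)"

print(decide({"risk": 0.2}, reviewer))  # low risk: model decides
print(decide({"risk": 0.9}, reviewer))  # high risk: escalated to the human
```

The design point is that the model never has the final word on high-risk cases; accountability stays with the human reviewer.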

Timeline

  • 00:00:00 - 00:05:00

    In the context of mobile telecommunications, attention centers on the use of AI as communications shift to software-based systems. AI matters, but it must be used properly, especially when weighing good use against misuse. ChatGPT is being used to answer quiz questions, showing how commonplace AI has become worldwide. Countries differ in how they use AI, particularly in searching for information.

  • 00:05:00 - 00:10:00

    A framework for responsible AI development is presented, with principles including inclusive growth, human-centered values, transparency, robustness, and accountability. It considers how AI can support national development through sustainable, lasting progress. Likewise, AI should respect human rights and protect data privacy. Using AI requires human oversight to avoid harmful outcomes.

  • 00:10:00 - 00:15:00

    AI should be built on robustness, security, and safety. Examples of how AI can become a danger include a case in Belgium, where a man conversed with a chatbot for six weeks and was ultimately encouraged to take his own life over his anxiety about the environment. AI also needs continuous improvement in its reliability. That means working hard to ensure AI brings real progress to our lives.

  • 00:15:00 - 00:20:00

    Those who create AI should be clearly accountable for its actions. Examples include a mayor in Australia whom AI wrongly linked to a scandal. There is debate over whether such errors should be attributed to AI's creators. Explaining the reasoning behind decisions in AI systems is increasingly important for maintaining trust in them.

  • 00:20:00 - 00:25:00

    Transparency, explainability, and traceability of AI operations are essential. As noted, AI activities and related information should be disclosed to users. Techniques such as human-readable rule extraction improve explainability. This helps foster public debate and public trust in AI matters.

  • 00:25:00 - 00:30:00

    Trust is at the heart of AI, especially in applications that affect lives. Even so, transparency alone does not always produce trust. Trust must be emphasized in advancing AI technology, and AI standards and safeguards deserve continued attention.

  • 00:30:00 - 00:38:55

    The discussion of AI governance shows how leaders and institutions should take AI seriously. An example is given of countries taking steps to prevent the use of AI as a weapon of war. The need to protect privacy rights and shield civilians from unwarranted surveillance is stressed.
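
The timeline mentions human-readable rule extraction as one way to make AI decisions explainable. A minimal sketch of the idea, using a hand-built toy decision tree (the loan-screening features and thresholds are invented, not from the talk):

```python
# Minimal sketch of human-readable rule extraction from a decision tree.
# The tree below is a hypothetical loan-screening model, for illustration only.

# Each internal node: (feature, threshold, left_subtree, right_subtree).
# Each leaf: a decision string.
TREE = ("income", 30000,
        ("years_employed", 2, "deny", "review"),
        ("debt_ratio", 0.4, "approve", "review"))

def extract_rules(node, conditions=()):
    """Walk the tree depth-first and emit one if-then rule per leaf."""
    if isinstance(node, str):                      # leaf: a decision
        cond = " AND ".join(conditions) or "always"
        return [f"IF {cond} THEN {node}"]
    feature, threshold, left, right = node
    rules = []
    rules += extract_rules(left, conditions + (f"{feature} < {threshold}",))
    rules += extract_rules(right, conditions + (f"{feature} >= {threshold}",))
    return rules

for rule in extract_rules(TREE):
    print(rule)
```

Each leaf of the tree becomes one if-then rule, which is the kind of human-readable explanation the talk refers to.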


Video Q&A

  • Why does AI technology play an important role in telecommunications services?

    Because telecommunications is shifting toward software-based systems, AI plays an increasingly important role in improving services and managing how large numbers of people use them.

  • How does the Philippines use AI more than Indonesia and Japan?

    Filipinos use AI heavily, especially for searching via ChatGPT, even though those countries have larger populations.

  • What is the problem with how students and teachers use AI?

    Both students and teachers use AI to complete exams and fulfill requirements, which leads to improper use of AI.

  • How does AI discriminate against people by gender?

    In the Amazon example, a recruiting tool showed gender bias, favoring men over women.

  • What are governments doing to ensure AI serves people fairly?

    They are introducing frameworks that promote inclusive growth, human-centered fairness, transparency of AI policies, and attention to equity and sustainability.

  • What is the main challenge for AI security and reliability?

    The challenge is building trust in the use of AI by ensuring systems are safe and dependable, for example in self-driving cars and medical applications.

  • What makes some people stop trusting AI?

    A lack of transparency about how AI makes decisions and takes action leads to distrust.

  • Why do many countries want AI that can operate independently of humans?

    Because they work on tasks beyond direct human control, which requires trusting the system to act on its own in hazardous environments.
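
The Amazon hiring example in this Q&A can be made concrete with a simple fairness check. The sketch below computes a disparate-impact ratio on invented shortlisting data; the four-fifths (0.8) cutoff is a common rule of thumb for flagging possible bias, not a universal legal standard:

```python
# Sketch of a disparate-impact check on hypothetical screening outcomes.
# The data and threshold are illustrative, not from the Amazon case itself.

def selection_rate(outcomes):
    """Fraction of candidates marked as shortlisted (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below ~0.8 (the 'four-fifths rule') flag possible bias."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical shortlisting decisions: True = shortlisted.
male_outcomes = [True, True, True, False, True, True, False, True]      # 6 of 8
female_outcomes = [True, False, False, True, False, False, False, False]  # 2 of 8

ratio = disparate_impact(male_outcomes, female_outcomes)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below 0.8
```

A check like this only surfaces a disparity; deciding whether the disparity is unjustified still requires human judgment about the data and the job.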


Transcript
  • 00:00:00
    [Music]
  • 00:00:17
    well uh in the Telco context uh since
  • 00:00:19
    we're shifting towards more
  • 00:00:21
    software-based uh
  • 00:00:23
    telecommunications uh AI will even be
  • 00:00:26
    more uh prominent so we'll discuss a a
  • 00:00:29
    fundamental concern in uh in
  • 00:00:33
    telecommunications in our Modern Life
  • 00:00:35
    and that is uh AI uh and uh when when we
  • 00:00:39
    talk about technology inevitably we'll
  • 00:00:41
    talk about its proper use what it's
  • 00:00:44
    really about how we uh see ourselves uh
  • 00:00:48
    the lives we want to lead and that is uh
  • 00:00:52
    falling within the purview of AI ethics
  • 00:00:55
    and uh
  • 00:00:57
    governance so uh as you can see even for
  • 00:01:01
    those School based uh we're pretty much
  • 00:01:03
    uh inundated by uh good news and bad
  • 00:01:07
    news with AI uh good use and misuse uh
  • 00:01:11
    even scientists are not spared uh I see
  • 00:01:14
    that my colleagues for instance are
  • 00:01:15
    already using uh ChatGPT to create quizzes
  • 00:01:20
    uh uh if you are a student please raise
  • 00:01:22
    your hand uh if you're still a
  • 00:01:26
    student so you have a sense of uh so
  • 00:01:29
    it's not as if only students would
  • 00:01:32
    use AI to uh to get past exams or to try
  • 00:01:38
    to uh overcome uh requirements or yeah
  • 00:01:41
    fulfill requirements also teachers so um
  • 00:01:45
    as you can see but but
  • 00:01:48
    for um the for the Philippines uh as a
  • 00:01:52
    as a country we are small country
  • 00:01:54
    relative to uh India United States we're
  • 00:01:57
    about 110 111 million yeah as you can
  • 00:02:00
    see we're pretty heavy users of AI now
  • 00:02:04
    what do you think is that good or bad as
  • 00:02:06
    you can see Japan uh small use and then
  • 00:02:10
    some others and even Indonesia which is
  • 00:02:13
    bigger than us 350 million we outrank
  • 00:02:16
    Indonesia in terms of traffic on chat
  • 00:02:19
    GPT uh but if you probe deeper uh the
  • 00:02:22
    problem here is that we're using uh AI
  • 00:02:25
    to search for information especially
  • 00:02:28
    ChatGPT which is may not which may not
  • 00:02:31
    be the proper use of of
  • 00:02:34
    AI all right
  • 00:02:38
    so so what is AI it is a field to uh
  • 00:02:42
    dedicated to developing systems capable
  • 00:02:44
    of Performing tasks and solving problems
  • 00:02:46
    associated with human
  • 00:02:47
    intelligence most if not all systems
  • 00:02:50
    that make decisions normally requiring
  • 00:02:52
    human expertise fall within the purview
  • 00:02:55
    of of AI there's also a conflation of
  • 00:02:58
    terms uh data science it looks like uh
  • 00:03:02
    uh be becoming less sexy a discipline
  • 00:03:05
    because of of this conflation but
  • 00:03:07
    there's an overlap actually you need a
  • 00:03:10
    robust uh understanding of data
  • 00:03:13
    scientific understanding of data to be
  • 00:03:15
    able to do Ai and uh you have machine
  • 00:03:18
    learning at more deeper uh more more
  • 00:03:22
    deeply level and you have deep learning
  • 00:03:25
    uh AI especially with the models we're
  • 00:03:27
    dealing with now which is which are
  • 00:03:29
    really is artificial neural Nets no the
  • 00:03:31
    ones popular anyway like ChatGPT but uh
  • 00:03:36
    it's not just one area uh there's
  • 00:03:39
    there's more uh natural language
  • 00:03:42
    processing uh knowledge representation
  • 00:03:44
    machine learning computer vision uh
  • 00:03:46
    speech recognition Robotics and the
  • 00:03:49
    challenge really is to combine all these
  • 00:03:52
    and to have a a singular uh contiguous
  • 00:03:55
    uh uh Services no uh of of AI so uh how
  • 00:04:01
    do we deal with that we will be talking
  • 00:04:04
    about the principles that govern AI
  • 00:04:07
    right today we embark on a journey for
  • 00:04:09
    the values informing the future of AI
  • 00:04:12
    before we begin let's reflect on a real
  • 00:04:14
    life story that highlights the
  • 00:04:16
    importance of ethical principles and
  • 00:04:18
    considerations in AI in 2018 Amazon
  • 00:04:21
    developed an AI powered recruiting tool
  • 00:04:23
    to assist with hiring the tool was
  • 00:04:25
    designed to scan resumes and identify
  • 00:04:27
    the most qualified candidates however
  • 00:04:29
    ever it was later discovered that the
  • 00:04:31
    tool was biased against female
  • 00:04:33
    candidates the reason it was trained on
  • 00:04:36
    resumes submitted to Amazon over the
  • 00:04:38
    past 10 years which were predominantly
  • 00:04:40
    from male applicants as a result the
  • 00:04:43
    system learned to favor male candidates
  • 00:04:45
    and downrank résumés with words commonly
  • 00:04:48
    used by women with that in mind let's
  • 00:04:51
    take a look at the principles that
  • 00:04:53
    hopefully can help us be fair and
  • 00:04:56
    develop AI to serve our best goals and
  • 00:04:59
    aspirations
  • 00:05:00
    as a people a report on an AI
  • 00:05:03
    development framework available at
  • 00:05:07
    ai.org
  • 00:05:09
    theframe offers a set of value based
  • 00:05:12
    guidelines covering inclusive growth
  • 00:05:15
    human centered values transparency
  • 00:05:18
    robustness and accountability these
  • 00:05:21
    principles are the foundation of
  • 00:05:23
    responsible AI development I strongly
  • 00:05:26
    suggest that you check out this live
  • 00:05:28
    online document for a detailed discussion
  • 00:05:32
    of today's topic principle one inclusive
  • 00:05:35
    growth sustainable development and
  • 00:05:37
    well-being artificial intelligence plays
  • 00:05:39
    a crucial role in sustainable
  • 00:05:41
    development intertwined with our
  • 00:05:42
    national goals for inclusive growth and
  • 00:05:45
    well-being as countries Embrace AI it is
  • 00:05:47
    essential to consider both its
  • 00:05:49
    advantages and risk mitigating potential
  • 00:05:51
    negative effects is vital ensuring AI
  • 00:05:54
    benefits are shared equitably across
  • 00:05:56
    Society principle two human centered
  • 00:05:59
    values and fairness fairness is a
  • 00:06:01
    Cornerstone of AI bias in AI systems can
  • 00:06:04
    lead to discriminatory outcomes
  • 00:06:06
    affecting various sectors in society
  • 00:06:08
    defining and evaluating fairness in AI
  • 00:06:10
    is a challenge but we must ensure AI
  • 00:06:13
    respects human rights and data privacy
  • 00:06:15
    rights instead of relying solely on AI
  • 00:06:18
    robots or automation it's essential to
  • 00:06:20
    involve humans directly especially for
  • 00:06:22
    high-risk systems while AI can offer
  • 00:06:24
    innovative solutions human participation
  • 00:06:27
    remains crucial to ensure these systems
  • 00:06:29
    enhance human capabilities rather than
  • 00:06:31
    causing harm ai's potential for
  • 00:06:34
    Innovation is Limitless but it also
  • 00:06:36
    opens doors to potential misuse ensuring
  • 00:06:39
    fairness in the development of AI is
  • 00:06:41
    challenging but our stakeholders argue
  • 00:06:43
    that end users should have transparency
  • 00:06:46
    into ai's decision-making process and
  • 00:06:48
    the ability to influence results in some
  • 00:06:50
    cases human involvement is necessary to
  • 00:06:53
    avoid purely algorithmic decision making
  • 00:06:56
    ensuring clear human accountability and
  • 00:06:58
    system audit ability however it's
  • 00:07:00
    essential to recognize that autonomous
  • 00:07:03
    systems may not always be under human
  • 00:07:05
    control to some degree therefore we must
  • 00:07:08
    qualify human involvement in AI systems
  • 00:07:11
    particularly in high-risk applications
  • 00:07:13
    in such cases having humans in the loop
  • 00:07:16
    HITL is crucial for high-risk AI systems
  • 00:07:20
    the EU AI Act mandates human oversight to
  • 00:07:24
    ensure safe and responsible use human
  • 00:07:26
    involvement in AI goes beyond HITL a
  • 00:07:30
    successful approach involves leveraging
  • 00:07:32
    both human and machine competences in a
  • 00:07:34
    virtuous cycle to produce valuable and
  • 00:07:37
    positive outcomes at its core AI must
  • 00:07:39
    prioritize the protection of Human
  • 00:07:42
    Rights principle three robustness
  • 00:07:45
    security and safety building trust in AI
  • 00:07:48
    requires us to prioritize robust secure
  • 00:07:51
    and Safe Systems whether it's
  • 00:07:53
    self-driving cars or medical
  • 00:07:54
    applications reliability is of utmost
  • 00:07:57
    importance to ensure safety standards
  • 00:07:59
    and protect human rights adequate
  • 00:08:02
    regulations and oversight play a vital
  • 00:08:04
    role while it's essential to acknowledge
  • 00:08:07
    that the majority of AI systems deployed
  • 00:08:09
    so far are largely safe it's
  • 00:08:12
    understandable that people might get
  • 00:08:14
    fixated on the more dramatic incidents
  • 00:08:17
    for instance earlier this year there was
  • 00:08:19
    a tragic incident involving a Belgian
  • 00:08:21
    man who reportedly engaged in a six week
  • 00:08:24
    long conversation with an AI chatbot
  • 00:08:27
    called Eliza about the ecological future
  • 00:08:29
    of the planet the chatbot supported his
  • 00:08:32
    eco-anxiety and tragically encouraged
  • 00:08:34
    him to take his own life to save the
  • 00:08:37
    planet instances like this remind us of
  • 00:08:40
    the responsibility we hold as AI
  • 00:08:42
    developers to prioritize safety and
  • 00:08:44
    well-being recently the launch of open
  • 00:08:47
    AI chat GPT language model stirred mixed
  • 00:08:51
    reactions this model showcased its
  • 00:08:54
    ability to mimic human conversations and
  • 00:08:56
    generate unique text based on users
  • 00:08:58
    prompts however this has also raised
  • 00:09:00
    concerns about potential misuse or
  • 00:09:03
    unintended consequences moving forward
  • 00:09:06
    it is crucial for AI developers to
  • 00:09:08
    strive for continuous Improvement in
  • 00:09:10
    making their products and services safer
  • 00:09:13
    to use by emphasizing robustness
  • 00:09:15
    security and safety we can Foster Public
  • 00:09:18
    trust and ensure that AI technology is a
  • 00:09:21
    Force for good in our
  • 00:09:23
    lives principle four
  • 00:09:26
    accountability a actors must be
  • 00:09:28
    accountable for their actions and
  • 00:09:29
    decisions responsible AI involves
  • 00:09:32
    transparency and ability to explain the
  • 00:09:34
    reasoning behind AI system choices
  • 00:09:37
    auditability helps ensure compliance
  • 00:09:39
    with regulations and mitigates potential
  • 00:09:41
    risk associated with AI the risk of
  • 00:09:44
    disinformation has gained prominence
  • 00:09:46
    recently with the Advent of chat GPT and
  • 00:09:49
    generative AI consider the case of Brian
  • 00:09:52
    Hood an Australian mayor who was a
  • 00:09:57
    whistleblower praised for showing tremendous
  • 00:09:59
    courage by exposing a worldwide
  • 00:10:01
    bribery Scandal linked to Australia's
  • 00:10:04
    national Reserve Bank however his voters
  • 00:10:07
    told him that chat GPT named him as a
  • 00:10:10
    guilty party who was jailed for it in
  • 00:10:13
    such a bribery scandal in the early
  • 00:10:15
    2000s should open AI the company behind
  • 00:10:18
    chat GPT be held responsible for such
  • 00:10:21
    apparent disinformation and reputational
  • 00:10:23
    harm even if it could not possibly know in
  • 00:10:26
    advance what their generative AI would
  • 00:10:28
    say the question of whether open AI
  • 00:10:31
    should be responsible for this is a
  • 00:10:33
    complex one on the one hand open AI
  • 00:10:35
    could argue that it is not responsible
  • 00:10:38
    for the content that its AI system
  • 00:10:40
    generates on the other hand open AI
  • 00:10:43
    could also be seen as having a
  • 00:10:45
    responsibility to ensure that its AI
  • 00:10:47
    system is not used to spread
  • 00:10:51
    disinformation principle five transparency
  • 00:10:53
    explainability and traceability
  • 00:10:55
    transparency in AI policies and
  • 00:10:57
    decisions is vital for a democratic
  • 00:10:58
    society
  • 00:11:00
    understanding AI systems even for
  • 00:11:01
    non-technical stakeholders fosters trust
  • 00:11:04
    and informed decision making
  • 00:11:05
    explainability allows us to identify
  • 00:11:07
    potential biases and ensure Fair AI
  • 00:11:10
    outcomes in Singapore it's required that
  • 00:11:12
    AI decisions and Associated data can be
  • 00:11:15
    explained in non-technical terms to end
  • 00:11:17
    users and other stakeholders this
  • 00:11:19
    openness promotes informed public debate
  • 00:11:21
    and Democratic legitimacy for AI however
  • 00:11:24
    the concern of AI systems being
  • 00:11:26
    perceived as black boxes lacking
  • 00:11:28
    transparency and explainability has been
  • 00:11:29
    raised during our stakeholder
  • 00:11:31
    consultations AI systems navigate
  • 00:11:33
    through billions trillions of variables
  • 00:11:36
    that influence outcomes in complex ways
  • 00:11:38
    making it challenging to comprehend even
  • 00:11:40
    with human attention large language
  • 00:11:41
    models like chat GPT with trillions of
  • 00:11:44
    parameters have made explainability
  • 00:11:46
    elusive even to their own developers
  • 00:11:49
    nonlinear models further complicate
  • 00:11:51
    understanding the connection between
  • 00:11:53
    inputs and outputs despite these
  • 00:11:55
    challenges developers are working on
  • 00:11:57
    Solutions more interpretable models
  • 00:12:00
    like decision trees and rule-based
  • 00:12:01
    systems are being explored techniques
  • 00:12:04
    such as human readable rule extraction
  • 00:12:06
    sensitivity analysis and localized
  • 00:12:07
    explanations are also enhancing
  • 00:12:09
    explainability additionally detailed
  • 00:12:11
    documentation of model architecture
  • 00:12:13
    training data and evaluation metrics can
  • 00:12:15
    provide valuable insights into AI system
  • 00:12:18
    Behavior regarding transparency some
  • 00:12:20
    stakeholders propose focusing on
  • 00:12:22
    policies and processes rather than
  • 00:12:24
    revealing AI algorithms entirely this
  • 00:12:27
    approach acknowledges potential risk as
  • 00:12:29
    excessive transparency might hinder
  • 00:12:31
    Innovation by diverting resources from
  • 00:12:33
    improving safety and performance as the
  • 00:12:36
    European Union moves towards adopting
  • 00:12:38
    the AI act there's another important
  • 00:12:40
    principle linked to transparency called
  • 00:12:42
    traceability traceability is distinct
  • 00:12:44
    from explainability but equally
  • 00:12:46
    significant while explainability focuses
  • 00:12:48
    on understanding how an AI system works
  • 00:12:50
    traceability involves actively tracking
  • 00:12:52
    its use to identify potential issues
  • 00:12:55
    this empowers AI system operators to
  • 00:12:57
    spot and address risk like data bias and
  • 00:13:00
    coding errors achieving traceability
  • 00:13:02
    means keeping records of the data used
  • 00:13:04
    the decisions made and the reasons
  • 00:13:06
    behind them explainability on the other
  • 00:13:08
    hand plays a critical role in building
  • 00:13:10
    user trust and aiding informed decision
  • 00:13:12
    making it provides a human readable
  • 00:13:14
    explanation of how an AI system makes
  • 00:13:16
    decisions both traceability and
  • 00:13:18
    explainability contribute to the broader
  • 00:13:20
    principle of transparency however it's
  • 00:13:22
    important to recognize that transparency
  • 00:13:24
    alone may not automatically build public
  • 00:13:27
    trust Professor Onora O'Neill highlighted
  • 00:13:29
    this concern in her BBC Reith Lectures
  • 00:13:31
    two decades ago noting that while
  • 00:13:33
    transparency and openness have advanced
  • 00:13:36
    they have not done much to build public
  • 00:13:38
    trust in fact trust may have even
  • 00:13:40
    diminished as transparency increased
  • 00:13:42
    this Insight remains relevant in today's
  • 00:13:44
    discussions about AI and its regulations
  • 00:15:18
    principle six trust trust is a crucial
  • 00:15:21
    element in AI adoption AI systems must
  • 00:15:24
    prove themselves to be reliable and safe
  • 00:15:26
    especially in applications impacting
  • 00:15:28
    lives livelihoods earning trust requires
  • 00:15:31
    adherence to high standards and
  • 00:15:33
    inclusive AI governance we now know that
  • 00:15:36
    transparency does not automatically
  • 00:15:38
    translate to trust we need trust to
  • 00:15:40
    provide space for our Filipino AI
  • 00:15:42
    developers to pursue Innovation that
  • 00:15:45
    benefit Society in turn they have to act
  • 00:15:47
    responsibly and be trustworthy AI
  • 00:15:50
    research is a public good that needs to
  • 00:15:53
    be supported by all
  • 00:15:55
    stakeholders this is where my
  • 00:15:57
    presentation ends even as we all
  • 00:15:59
    continue with our journey through AI
  • 00:16:02
    principles for more details check out
  • 00:16:04
    our report on AI governance framework
  • 00:16:06
    for the Philippines available at ai.org
  • 00:16:12
    theframe the values and principles we
  • 00:16:14
    discuss today are the compass guiding AI
  • 00:16:17
    future let's continue to develop AI
  • 00:16:19
    responsibly ensuring it benefits
  • 00:16:21
    everyone while respecting human rights
  • 00:16:24
    and promoting a fair and Equitable
  • 00:16:26
    Society
  • 00:16:30
    all right
  • 00:16:33
    uh so uh just uh run through some of the
  • 00:16:37
    points made there uh inclusive growth
  • 00:16:40
    sustainable
  • 00:16:41
    development uh and uh well-being we see
  • 00:16:44
    that there is uh this is something that
  • 00:16:46
    is embedded in our Philippine Innovation
  • 00:16:49
    act uh a burdens and benefits have to be
  • 00:16:52
    shared uh equitably we also see how AI
  • 00:16:55
    could potentially bring in uh trillions
  • 00:16:59
    of uh of economic activity uh trillions
  • 00:17:02
    of uh of benefits uh valued at trillions
  • 00:17:07
    of of dollars uh we're also seeing uh
  • 00:17:10
    70% of companies would have uh adopted
  • 00:17:14
    at least one type of AI technology uh
  • 00:17:16
    right now for the BPO industry for
  • 00:17:18
    instance 60% the last survey are already
  • 00:17:22
    a uh using AI so if you're headed to BPO
  • 00:17:24
    most likely you'll be using Ai and uh
  • 00:17:27
    some other companies in the Philippines
  • 00:17:28
    as well uh there is increased uh
  • 00:17:31
    productivity that's why in my workplace
  • 00:17:34
    um it's default that uh my my staff
  • 00:17:37
    would be using AI so that uh the burden
  • 00:17:41
    I mean the justification would be on
  • 00:17:42
    people the onus on them if they don't
  • 00:17:46
    uh use AI um however you see that um
  • 00:17:51
    it's not uh it's not something that is
  • 00:17:53
    straightforward it's easier said than
  • 00:17:55
    done um AI as a matter of fact will
  • 00:17:59
    potentially also bring in added
  • 00:18:01
    dimension of inequity um as opposed to
  • 00:18:05
    just simply a providing access to um say
  • 00:18:09
    internet so if you're in tawi tawi you
  • 00:18:11
    probably would experience uh internet
  • 00:18:15
    via uh by uh uh Starlink and that's fine
  • 00:18:18
    and dandy however there's an other
  • 00:18:21
    dimension there that if you are uh going
  • 00:18:23
    to be using AI um there are going to be
  • 00:18:27
    additional skills that are expected of
  • 00:18:29
    you that are required of you algorithmic
  • 00:18:32
    skills uh the your ability to access
  • 00:18:36
    fair databases and uh the um the
  • 00:18:40
    capacity to be treated fairly or to um
  • 00:18:43
    the the right to be treated fairly in
  • 00:18:45
    those databases and it's quite a leap
  • 00:18:47
    it's no longer just access to AI as as
  • 00:18:50
    you can see in the previous slides I had
  • 00:18:52
    I had a slide on Philippines being on
  • 00:18:55
    top of uh countries using uh ChatGPT
  • 00:18:59
    the problem with our use according to
  • 00:19:02
    data is that our we use chat GPT to look
  • 00:19:05
    for facts to look for uh certain
  • 00:19:08
    information uh and those um uh the
  • 00:19:12
    information the bits and pieces of
  • 00:19:14
    information could have been
  • 00:19:15
    hallucination so in other words our
  • 00:19:18
    usage of AI so far is uh shallow so so
  • 00:19:22
    that it's a problem when you have to uh
  • 00:19:24
    think in terms of inclusive growth
  • 00:19:26
    because even as we have access to AI
  • 00:19:29
    it's not just access we're talking about
  • 00:19:31
    it's about being able to access properly
  • 00:19:34
    and that requires more than just uh
  • 00:19:37
    access no um you also see that uh right
  • 00:19:40
    now ai is getting to be uh stale in some
  • 00:19:44
    areas uh it's uh it's people are um not
  • 00:19:48
    seeing beyond the hype um it it appears
  • 00:19:51
    that we have a peak in uh uh of there's
  • 00:19:55
    a peak already of inflated expectations
  • 00:19:58
    so it's a let down for others for
  • 00:19:59
    instance if they're expecting to AI to
  • 00:20:02
    do more uh so we could be seeing uh
  • 00:20:05
    disillusionment already and some are
  • 00:20:08
    enlightened hopefully that uh when we
  • 00:20:11
    truly understand AI we are experiencing
  • 00:20:14
    a plateau of productivity and this is
  • 00:20:16
    really where it matters most we see
  • 00:20:18
    beyond the hype and we go straight to uh
  • 00:20:22
    productivity uh in our workplace I see
  • 00:20:25
    this happening uh I'm not so sure in in
  • 00:20:29
    in other areas of of the country no so
  • 00:20:33
    As has been emphasized earlier as well: human-centered values, treating people fairly, avoiding algorithmic decisions and their discriminatory consequences. If you look at algorithms, they have the tendency to perpetuate, if not amplify, existing social, economic, and cultural inequalities, so the idea really is to have fairness and to be respectful of human rights and data privacy.
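    A notion like "treating people fairly" can be made measurable. Below is a minimal sketch of one common check, demographic parity, which asks whether an automated system's approval rate differs across groups; the data, group labels, and threshold of concern here are made up for illustration, not taken from any real system:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per demographic group.

    decisions: list of (group, approved) pairs, approved being a bool.
    Returns a dict mapping group -> fraction approved.
    """
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions produced by some automated screening system.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", False), ("B", False), ("B", True), ("B", False)]
rates = approval_rates(decisions)
print(rates)              # {'A': 0.75, 'B': 0.25}
print(parity_gap(rates))  # 0.5 -- a large gap that warrants investigation
```

    A large gap does not by itself prove discrimination, but it is exactly the kind of signal that the human-rights and fairness reviews mentioned above should be looking for.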
  • 00:21:06
    Practically all disciplines, all professions are already affected. You might think that a hairdresser or a makeup artist would not be affected by AI, but as you can see in this headline, a makeup artist lost her job after AI assessed her body language. It looks like there is no job anymore that is safe from AI, at least directly or indirectly.
  • 00:21:38
    You also see some countries being defensive about AI, though that has already been reversed in Italy; they now have access to ChatGPT again. There are also certain areas of concern: every time OpenAI introduces a new version of ChatGPT, you have relatively increased risk as well, and some companies worry about intellectual property and trade secrets being exposed to AI, and through it to the rest of the world. We have discussed this well enough in the video, but just to point out that this is an ongoing concern: every new model of AI brings increased security and safety concerns, even as it learns from previous models, because the more you push the boundaries of AI, the more you are actually exposing yourselves to risk.
  • 00:22:45
    Accountability is something that is a moving target as well. As AI progresses, and as new domains of application are being considered, new areas of expertise are being generated in AI; that is a continuing problem. As discussed earlier, transparency is something that is almost intractable to some regulators, for the simple reason that systems tend to be black boxes, and by transparency we mean the operations of AI, which may tend to be inexplicable. With neural nets, for instance, there is no straightforward explanation of why a given input produces a certain output. And the interaction with humans, especially in learning contexts, in relation to ChatGPT and other large language models for instance, means that the more you put in human elements, the more mysterious the outcomes become. So that is a problem.
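    One partial remedy for the black-box problem described above is perturbation testing: treat the model as something you can only call, nudge one input at a time, and record how the output moves. This does not explain the model's internals, but it gives regulators and auditors a measurable sensitivity per input. A toy sketch, with a made-up scoring function standing in for the opaque model:

```python
def black_box(features):
    # Stand-in for an opaque model: pretend we cannot read this code,
    # only call it. (Here: a weighted sum plus an interaction term.)
    x1, x2, x3 = features
    return 2.0 * x1 - 0.5 * x2 + x1 * x3

def sensitivity(model, features, eps=1e-4):
    """Estimate how much the output changes per unit change of each input,
    using a one-sided finite difference on each feature in turn."""
    base = model(features)
    grads = []
    for i in range(len(features)):
        bumped = list(features)
        bumped[i] += eps
        grads.append((model(bumped) - base) / eps)
    return grads

# At the point (1, 1, 1) the three inputs have very different influence.
print(sensitivity(black_box, [1.0, 1.0, 1.0]))
```

    Real explainability tools (feature attribution, surrogate models) are more sophisticated, but they rest on the same idea: probing the black box from the outside.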
  • 00:24:04
    When it comes to producing content, you see, perhaps more recently in the Philippines, your Messenger having an AI tab already, an AI button where you can interact with an AI agent. You see it generating images that could potentially be misused. I was checking out, for instance, certain images of Jose Rizal, combining him with certain scenarios, and I could see potential for misuse as well. So let me just breeze through these points, because we are running out of time; in any case I will be sharing the slides with
  • 00:24:55
    you. Just to point out that when you talk about AI governance, there are many elements there as well. Leadership is one: if you are in the context of a company or a school, your bosses, the board of regents or trustees, need to be really engaged for AI ethics to be front and center. Looking at the core technical elements of AI: this is not something that can just be left to the technical people; you see how this evolves in an organization. More importantly, you have to consider the people in your organization and the culture that is dominant there. You have to look at risk in terms of deciding go or no-go for certain AI operations, looking at operational structures, processes, and mechanisms as well, especially how AI performs in your organizational context. So I will just skip these elements and leave them with you via the link later on; this has been alluded to earlier in the
  • 00:26:13
    discussion. One of the last points I have to discuss with you would be the human involvement consideration in AI, because if you come to think about it, AI is really about autonomy. This is an area that distinguishes it from, say, simple data science: AI is always about developing systems that are aimed at becoming autonomous, and if you consider the notion of autonomy, by definition it is out of human control, out of human reach. Even if you say you want to just insert yourself there and take over, over time you increasingly lose control, because your aim is to develop autonomy in machines. So there are potentially conflicting tendencies between human control and autonomy, and you have to qualify what you really mean by AI autonomy, because as technology progresses there is greater autonomy and therefore less human control. The idea is that, especially for high-risk applications, you would need humans in the loop, and that is a concept that is actually hard to operationalize, because you have a long continuous process, some of it pretty boring, and humans are terrible at dealing with boredom; as a matter of fact, we try everything to escape boredom, possibly including boring lectures.
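    The humans-in-the-loop idea for high-risk applications can be sketched as a routing rule: the system acts autonomously only when stakes are low and its confidence is high, and otherwise defers to a person. The thresholds and labels below are illustrative assumptions, not a standard:

```python
def route_decision(risk, confidence, risk_limit=0.7, conf_floor=0.9):
    """Decide whether an AI decision may proceed without human review.

    risk: estimated harm if the decision is wrong (0..1)
    confidence: the model's confidence in its own output (0..1)
    """
    if risk >= risk_limit:
        return "human_required"   # high-stakes: always a person decides
    if confidence < conf_floor:
        return "human_review"     # unsure: flag for review
    return "auto"                 # low-risk and confident: proceed

print(route_decision(risk=0.9, confidence=0.99))  # human_required
print(route_decision(risk=0.2, confidence=0.5))   # human_review
print(route_decision(risk=0.2, confidence=0.95))  # auto
```

    The boredom problem raised above shows up here as reviewer fatigue: if the thresholds route nearly everything to human review, approvals degrade into rubber stamps, which is exactly why the concept is hard to operationalize.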
  • 00:28:05
    Prohibited uses of AI: when we talk about human involvement, we do not want AI applied in weapon systems. New Zealand is leading the way in advancing the view that we should not be using AI killer robots. We should not be looking at manipulation and exploitation with the use of AI; unfortunately, in some countries this is more the norm than the exception. Nor indiscriminate surveillance: there are societies that are basically dominated by surveillance technologies, surveillance cameras and so on. But even where we might think we are free, surveillance is actually going on. There is a book on surveillance capitalism, which is essentially about monetizing our activities online. If you use Facebook or other social media, you are pretty much being monitored; even as you surf the Internet, as you browse sites, you are still being surveilled, and the cookies will be gathered and certain patterns determined, probably even from listening to you. Sometimes, after you have a conversation with your friends about a certain dress or product, you will be surprised that when you open your browser you see an advertisement for a similar product you were interested in. Social scoring is another area that is supposed to be prohibited, but it is happening in at least one country: if you are misbehaving online, you will not get your passport and you cannot travel, because you have a very low social score. These
  • 00:30:08
    considerations will have to be put front and center when we talk about AI governance. Now, there are risk profiles in different areas of society: the criminal justice system, financial services, health and social care, social and digital media, energy and utilities. There is an accounting of the risks involved here, although I think there are variations when we apply it to the Philippines; for instance, we have a higher risk of social media manipulation during elections, while in the US right now there are controversies around the use of certain images and the use of synthetic data, and you can pretty much see how they stack up against other risks of AI. Knowing these risks would be a prerequisite to being able to deal with them.
  • 00:31:12
    So, more bad news, so to speak, though we already alluded to this earlier on. In Southeast Asia it is changing now (this was last year), and there are recent initiatives to come up with AI regulation, but it is not happening anytime soon. My colleagues are participating right now, I think in Laos, where this is being discussed, but I do not see the regulation of AI in Southeast Asia happening even in two years, because while there is clamor, it is a long shot to get this into some kind of regulatory framework applicable to all of Southeast Asia. Right now we are still pretty much a Wild West: the Philippines' POGO situation; Thailand and Cambodia, where Filipinos are human-trafficked to serve in the underbellies of AI. We see that happening, and
  • 00:32:20
    that is still a problem. Now, a top-down approach may be a problem. We see our regulators being so gung-ho about regulating AI, but my discomfort really is that they may have been misinformed. There is one lawmaker saying that AI research needs to be regulated, that you need to register your research in AI; I do not think that is a good idea, so we are trying to reach out to that regulator, at least to provide him with proper expertise. When it comes to AI, there are many unintended, unanticipated consequences, especially for us in the Philippines: we are very good at crafting laws without thinking about their unintended, unanticipated consequences, so we shoot ourselves in the foot when we do regulation, for the reason that we lack understanding of this technology.
  • 00:33:28
    So we have to look at various technologies to see, in comparative terms, how this may pan out and how they are properly regulated. When people think about regulation, they immediately think of laws, of Congress, and that may not be good practice. You have to look at a range of interventions when you deal with AI: governance, regulation, and legislation are not the same. There are
  • 00:34:05
    discriminatory biases. Even now, if you look at the pronouns used by ChatGPT, there are stereotypes being perpetuated: a driver, for instance, or a scientist is almost always a "he," but the reality is that there are already more women scientists in some areas, drivers are no longer just men, and so on. So biases are being amplified by AI, and we have to take a look at our training data, which is a potential source of bias; algorithmic bias is also another possibility, in that the way we parse data may already be biased.
  • 00:34:47
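    The pronoun stereotyping described above can be measured crudely by counting gendered pronouns in model completions for occupation prompts. The completions below are hard-coded stand-ins for real model output; in practice you would sample many generations per occupation from the model you are auditing:

```python
import re
from collections import Counter

def pronoun_counts(texts):
    """Count gendered pronouns across a list of generated texts."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in {"he", "him", "his"}:
                counts["male"] += 1
            elif token in {"she", "her", "hers"}:
                counts["female"] += 1
    return counts

# Hypothetical completions for the prompt "The scientist said that ..."
completions = [
    "The scientist said that he would repeat the experiment.",
    "The scientist said that his results were preliminary.",
    "The scientist said that she needed more funding.",
]
print(pronoun_counts(completions))  # Counter({'male': 2, 'female': 1})
```

    A skew in these counts across many samples is evidence of the stereotype amplification the speaker describes, and the same counting can then be run against the training data itself.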
    There is also the general aspect of data patrimony, so to speak: do we allow our national data to be fed to the large language models of OpenAI, Microsoft, or Amazon? Because if that is all that is going on, then we are pretty much at the raw end of what we call data colonialism. A large contrast here is the effort of France: they are trying to come up with their own national large language model based on Llama 3, an ongoing project of the government of France, precisely to combat data colonialism, where French data or Filipino data would just be training data for large language models owned by Big Tech, with no conscious effort to uplift the interests of the country.
  • 00:35:56
    Very quickly: you see that the technology is really progressing by leaps and bounds. I am not going to say that there will be general intelligence, superhuman intelligence, but we already see greater progress in this area. We are now approaching Llama 3; I think the one you are using in your Facebook is already Llama 3-point-something, and as you can see, it moves by weeks. It has been estimated that the compute requirement for this, and there is an energy requirement for compute, has a doubling time of 100 days: if you are using 100 watts now, then in 100 days, just to power your AI, you will need 200 watts for that baseline.
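    The doubling arithmetic cited above (demand doubling every 100 days, so a 100 W baseline becomes 200 W after 100 days) is ordinary exponential growth; the figures are the speaker's estimates, not mine:

```python
def power_needed(base_watts, days, doubling_days=100):
    """Power required after `days`, assuming demand doubles every `doubling_days`."""
    return base_watts * 2 ** (days / doubling_days)

print(power_needed(100, 100))  # 200.0 W after one doubling period
print(power_needed(100, 365))  # roughly 1255 W after a year
```

    The point of the calculation is how quickly the baseline compounds: a year of 100-day doublings multiplies the energy bill more than twelvefold.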
  • 00:36:50
    There are benefits to this, there are upsides, and there are limitations; we just have to take a look at them. But as Filipino researchers we have to be trying this out, we have to apply this in our context; that is why I am inviting you to the October 24-25 conference; I put it in the chat.
  • 00:37:13
    Finally, so that we can talk about application areas in the Philippines: agriculture, health, and so on. How do we deal with AI? We have to deal with AI responsibly, looking at legal and regulatory frameworks, with a focus on privacy, fairness, and equity. We have to build local capacity in AI, and we have to take a multi-stakeholder, whole-of-society approach. We look to Finland, for instance, where there is a conscious effort to educate citizens: at least 10% of Finnish citizens have undergone training in AI, at least familiarization with the technology, and Finland is now presenting itself as the educator of the whole of Europe. Then advocacy for greater representation in global AI governance: we understand that we do not have the compute; right now I am looking for 450 million pesos so we can build an 8-node compute cluster for AI. That is really quite low, but that is what it amounts to; so if you have 450 million pesos, that can help AI research, at least in my university. And investment in AI-enabled social research to prioritize well-being and equity: this is not something that should just be an afterthought; right from the get-go we have to design our systems to produce well-being and equity.
Tags
  • AI
  • Communication
  • Ethics
  • Philippines
  • SEO
  • Data misuse
  • Human rights
  • Modern life
  • Transparency
  • AI safety