AI Unscripted #003: 🚨 AI’s Biggest Flaw? Why Machine Unlearning is the Future

00:53:25
https://www.youtube.com/watch?v=V25tK4xFBAo

Summary

TL;DR: This episode delves into AI unlearning with guest Ben Luria, CEO of Hirundo. The conversation highlights the necessity for AI models to not only learn but also unlearn data, especially outdated or biased information. Ben explains how his startup develops solutions for enterprises to remove harmful data from their AI systems, thereby enhancing their accuracy and compliance. The episode explores various use cases, challenges associated with bias in AI, and the potential impact of robust unlearning practices on improving organizational AI performance. The discussion emphasizes the ongoing transformation in AI technology and its future implications for industries.

Takeaways

  • 🤖 AI can 'unlearn' by removing outdated or inaccurate data.
  • 📊 Unlearning improves the accuracy and compliance of AI models.
  • 🧠 Bias in AI is a critical issue that needs addressing.
  • 🏢 Hirundo targets enterprises to help them manage AI data.
  • 💡 Startups can lead the charge in developing unlearning technologies.
  • 📈 Unlearning can save organizations time and resources on retraining.
  • 🚗 Use cases include improving autonomous driving AI performance.
  • 🥇 Organizations are beginning to recognize the importance of unlearning.
  • ⚖️ Ethical concerns in AI rely on effective bias mitigation strategies.
  • 📅 The future of AI may heavily involve unlearning techniques.

Timeline

  • 00:00:00 - 00:05:00

    The host welcomes viewers back to AI Unscripted, introducing the concept of 'unlearning' in AI, which contrasts with the familiar notion of AI learning.

  • 00:05:00 - 00:10:00

    Special guest Ben Luria, CEO of Hirundo, explains his startup's focus on AI unlearning, allowing models to forget incorrect or outdated data without needing retraining.

  • 00:10:00 - 00:15:00

    Ben gives an overview of his background and the formation of Hirundo, highlighting the importance of correcting inaccuracies in AI data processing for enterprises.

  • 00:15:00 - 00:20:00

    The conversation shifts to examples of biased AI models, including risks encountered by companies like Amazon and Facebook, leading to the need for AI unlearning solutions.

  • 00:20:00 - 00:25:00

    They discuss how Hirundo targets business-to-business services for data science teams, emphasizing the growing necessity of unlearning in future AI applications and compliance.

  • 00:25:00 - 00:30:00

    Ben details the technical aspects of Hirundo's processes, noting that while startups face pressure to deliver futuristic solutions, they must also maintain a focus on existing classical models.

  • 00:30:00 - 00:35:00

    The discussion shifts to AI in various industries, explaining how unlearning applies to self-driving models and generative AI in mitigating bias and misinformation.

  • 00:35:00 - 00:40:00

    Ben shares his perspective on the evolving landscape of AI, asserting that while generative models are gaining attention, necessary improvements in classical AI remain crucial.

  • 00:40:00 - 00:45:00

    The guests weigh in on the implications of AI on jobs and future employment landscapes, considering how technological advancement can both displace and create new roles.

  • 00:45:00 - 00:53:25

    The conversation wraps up with reflections on the progress in AI, the acknowledgement of existing biases, and the necessity of 'unlearning' for responsible AI development.

Video Q&A

  • What is AI unlearning?

    AI unlearning is the process of selectively removing unwanted, outdated, or biased data from AI models, allowing them to 'forget' without needing to be retrained.
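
A minimal sketch of what this can look like in one published family of approaches (gradient ascent on a "forget set", anchored by an ordinary loss on a "retain set"). This is an illustration only, not Hirundo's proprietary method; the function name and hyperparameters are invented for the example:

```python
# Hypothetical approximate-unlearning sketch: push the loss UP on forget
# samples (gradient ascent) while anchoring behavior on retain samples.
# Assumes a standard PyTorch classifier; all names/values are illustrative.
import torch
import torch.nn.functional as F

def unlearn(model, forget_loader, retain_loader, lr=1e-4, steps=100, lam=1.0):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    retain_iter = iter(retain_loader)
    for _, (x_f, y_f) in zip(range(steps), forget_loader):
        try:
            x_r, y_r = next(retain_iter)
        except StopIteration:            # recycle the retain set if it runs out
            retain_iter = iter(retain_loader)
            x_r, y_r = next(retain_iter)
        opt.zero_grad()
        loss_forget = -F.cross_entropy(model(x_f), y_f)   # ascent: forget these
        loss_retain = F.cross_entropy(model(x_r), y_r)    # descent: keep the rest
        (loss_forget + lam * loss_retain).backward()
        opt.step()
    return model
```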

  • Why is unlearning important for AI?

    Unlearning is crucial for improving the accuracy and compliance of AI models, helping them avoid biases and inaccuracies that can arise from bad data.

  • What are some use cases for AI unlearning?

    Use cases include improving autonomous driving models by removing mislabeled data, and eliminating sensitive personal information from AI outputs in compliance with regulations.
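
For the mislabeled-data case (e.g. pedestrians boxed as "car"), a common first step in the public literature is to let the trained model rank its own training samples by per-sample loss and send the worst offenders for human re-review. A hedged sketch with hypothetical names, assuming a PyTorch classifier and an unshuffled DataLoader:

```python
# Illustrative label-noise triage: high per-sample loss under the trained
# model is a cheap proxy for "possibly mislabeled". Names are hypothetical.
import torch
import torch.nn.functional as F

@torch.no_grad()
def suspect_labels(model, loader, top_k=100):
    """Return dataset indices whose labels the model disagrees with most."""
    model.eval()
    losses = []
    for x, y in loader:                  # loader must use shuffle=False so
        losses.append(F.cross_entropy(model(x), y, reduction="none"))
    losses = torch.cat(losses)           # positions line up with dataset order
    return losses.topk(min(top_k, len(losses))).indices.tolist()
```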

  • How does Ben's startup approach unlearning?

    Hirundo provides a platform that allows organizations to easily identify and remove harmful data from their AI models, improving their performance with minimal friction.

  • Is AI unlearning currently being practiced in the industry?

    Yes, although it is still a developing field, organizations are starting to recognize the value of unlearning, and Ben's startup is pioneering this area.

  • What challenges does AI unlearning face?

    Challenges include making unlearning thorough enough to truly remove the unwanted data while avoiding unintended damage to the rest of the model's performance.
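
In practice that suggests a before/after check: performance on the forget set should collapse while performance on a held-out retain set stays flat. A minimal validation sketch (function names are assumptions, not any real product's API):

```python
# Illustrative unlearning validation harness for PyTorch classifiers.
import torch

@torch.no_grad()
def accuracy(model, loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=-1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def validate_unlearning(before, after, forget_loader, retain_loader):
    report = {
        "forget_before": accuracy(before, forget_loader),
        "forget_after": accuracy(after, forget_loader),    # should drop sharply
        "retain_before": accuracy(before, retain_loader),
        "retain_after": accuracy(after, retain_loader),    # should barely move
    }
    report["collateral_damage"] = report["retain_before"] - report["retain_after"]
    return report
```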

  • What impact does bias in AI have?

    Bias in AI can lead to inaccurate outcomes, reinforce stereotypes, and create ethical concerns, making bias mitigation an essential part of AI development.

  • Can AI technically 'forget' things?

    Yes, through unlearning, AI can be designed to remove specific data points or biases from its memory, but it requires careful execution and validation.

  • How can businesses benefit from unlearning technologies?

    Businesses can improve their AI's compliance with regulations, enhance product safety and accuracy, and reduce the need for extensive data retraining efforts.

  • What are the future implications for AI unlearning?

    As AI continues to evolve, unlearning will become increasingly important for maintaining ethical standards, improving AI reliability, and supporting business needs.

Subtitles

  • 00:00:04
    hello and welcome back to AI
  • 00:00:07
    unscripted the show where we dive deep
  • 00:00:09
    into the world of AI unscripted with no
  • 00:00:12
    fluff no BS and with the people that are
  • 00:00:14
    shaping AI today we have a very special
  • 00:00:16
    guest joining the usual Squad of Effie
  • 00:00:19
    and Roy say hi guys hey hey guys how's
  • 00:00:22
    it going wait where's my
  • 00:00:24
    camera I want to look at the camera I
  • 00:00:26
    don't see a camera hi hit the link
  • 00:00:30
    subscribe so today guys we're talking
  • 00:00:32
    about something a little bit
  • 00:00:33
    counterintuitive we all talk about AI
  • 00:00:35
    learning models but what about AI
  • 00:00:38
    unlearning That's crazy that's actually
  • 00:00:41
    crazy that is crazy we're going to talk
  • 00:00:43
    about it because it's interesting I
  • 00:00:44
    actually made some notes
  • 00:00:48
    first so today's guest is Ben Lura the
  • 00:00:51
    CEO of hundu am I saying that correctly
  • 00:00:54
    almost okay Hirundo sorry yep and
  • 00:00:57
    uh basically Ben if you'd like to tell
  • 00:01:00
    us a little bit about your cool startup
  • 00:01:02
    and yourself of course yeah okay so my
  • 00:01:04
    name is Ben I'm the CEO of Hirundo we're
  • 00:01:06
    the first machine unlearning startup
  • 00:01:08
    so basically if you think about AI
  • 00:01:09
    models something that's very obvious and
  • 00:01:12
    inherent to them they learn on data and
  • 00:01:15
    regardless of how much you try to
  • 00:01:16
    perfect the data that they're being trained
  • 00:01:18
    on or fine tuned on there's always going
  • 00:01:20
    to be something there that shouldn't be
  • 00:01:21
    it could be inaccurate data
  • 00:01:22
    non-compliant data outdated information
  • 00:01:25
    non-compliant or confidential and similar
  • 00:01:28
    to human beings uh you can't just
  • 00:01:30
    tell a model to forget right so in
  • 00:01:32
    lectures I present a picture of a pink
  • 00:01:34
    elephant I tell the audience please
  • 00:01:36
    don't remember this pink elephant and
  • 00:01:38
    obviously later on in the conference
  • 00:01:40
    they're like hey Pink Elephant guy
  • 00:01:41
    remember what the
  • 00:01:45
    pink basically that leads to
  • 00:01:48
    non-compliant models inaccurate models
  • 00:01:49
    and so forth we developed a way to undo
  • 00:01:52
    that so to selectively remove the bad
  • 00:01:55
    unwanted data from the models make them
  • 00:01:56
    forget without having to rebuild them
  • 00:01:58
    retrain them and so forth wow that's
  • 00:02:01
    very cool I wonder um would it be more
  • 00:02:05
    of a business use case or could it be
  • 00:02:07
    also if I want the computer to forget
  • 00:02:09
    what I searched yesterday afternoon
  • 00:02:11
    perhaps so I think like any other
  • 00:02:13
    Israeli startup we're B2B so we're
  • 00:02:15
    targeting like Enterprises those that
  • 00:02:17
    are you know very wary of adopting these
  • 00:02:20
    new things because of the risk involved
  • 00:02:21
    whether it be a regulatory risk or just
  • 00:02:24
    like the risk of providing inaccurate
  • 00:02:25
    answers biased answers ER revealing some
  • 00:02:28
    information that should be proprietary
  • 00:02:31
    and disclosed only to those who are
  • 00:02:33
    eligible for it yeah um so it's a B2B
  • 00:02:36
    startup targeting data science teams in
  • 00:02:38
    big organizations across multiple verticals and
  • 00:02:41
    where did you come up with the idea for
  • 00:02:42
    this I mean there's a few famous cases
  • 00:02:44
    right like Amazon for example with the
  • 00:02:46
    HR debacle where it was hiring guys I
  • 00:02:49
    think and the Target uh we know you're
  • 00:02:51
    pregnant before you do yeah that was
  • 00:02:54
    Facebook but that was AI yeah there's
  • 00:02:57
    also um what's the house there the house
  • 00:03:00
    property market I think they started
  • 00:03:02
    valuing houses at like $500 million
  • 00:03:05
    regular houses just their data went wild
  • 00:03:08
    so it's obviously a big problem I should
  • 00:03:10
    get on that platform yeah you need to
  • 00:03:12
    how did you come up with the idea so I
  • 00:03:15
    didn't explain also like about me and
  • 00:03:17
    the team and we'll get to that but
  • 00:03:18
    basically I think like um a lot of
  • 00:03:20
    different startups it's all like a both
  • 00:03:23
    a moment of realization but also like a
  • 00:03:24
    very deep process of looking for the
  • 00:03:26
    right thing the thing that the market
  • 00:03:27
    needs now and even more than that the
  • 00:03:30
    thing that the market will need you know
  • 00:03:33
    as a must in the years to come so a bit
  • 00:03:37
    about my background which is usually
  • 00:03:38
    surprising given this is a deep Tech
  • 00:03:40
    startup so like at the trenches of AI
  • 00:03:42
    infrastructure I'm a non-technical
  • 00:03:44
    person so I come from background both of
  • 00:03:46
    a little bit entrepreneurship I had a
  • 00:03:48
    fintech startup a I co-founded a
  • 00:03:50
    nonprofit that helps Israelis with
  • 00:03:52
    International higher education but my
  • 00:03:54
    background is more in strategy and
  • 00:03:55
    policy so I was a visiting fellow at
  • 00:03:57
    Oxford for public policy and innovation
  • 00:04:01
    it's alongside the likes of Bill Clinton
  • 00:04:03
    I think who was also Rhodes right yeah
  • 00:04:05
    yeah so one of the first Rhodes Scholars
  • 00:04:07
    from Israel and before that I was a
  • 00:04:08
    captain rank officer dealing with Israel
  • 00:04:10
    us relations my co-founders or my
  • 00:04:12
    partners in crime are much more from
  • 00:04:14
    like the Deep technical expertise that
  • 00:04:17
    you need for such a thing so Misha or
  • 00:04:19
    Michael our CTO was an award-winning R&D
  • 00:04:21
    officer at the Israeli Air Force unit
  • 00:04:23
    called Ofek um and he was before that a
  • 00:04:26
    friend and maybe even more importantly
  • 00:04:28
    he was a researcher under the
  • 00:04:30
    supervision of a professor called Oded
  • 00:04:33
    and Oded is our chief
  • 00:04:36
    scientist and back then they worked on
  • 00:04:38
    some a research that has actually
  • 00:04:42
    nothing to do with what we do today but
  • 00:04:44
    they thought about commercializing it
  • 00:04:46
    and uh the story is like they sent me a
  • 00:04:48
    blurb about what uh they're working on
  • 00:04:51
    and they looked for a business lead so
  • 00:04:53
    like someone to take it from the
  • 00:04:54
    Academia into the market I read it and I
  • 00:04:57
    was like okay no clue you guys are
  • 00:05:00
    talking
  • 00:05:01
    about like read it again read it again
  • 00:05:03
    like still can't understand but clearly
  • 00:05:06
    you need someone like me who's more of a
  • 00:05:07
    storyteller this was before ChatGPT could
  • 00:05:10
    have summarized it for you before um and
  • 00:05:14
    then we partnered as a team and and
  • 00:05:15
    started like initially thinking about
  • 00:05:18
    how can we potentially commercialize it
  • 00:05:19
    and started interviewing data teams
  • 00:05:21
    across different
  • 00:05:23
    organizations um and while we found that
  • 00:05:26
    their previous research so like
  • 00:05:28
    interesting and
  • 00:05:30
    the you know people care about it but
  • 00:05:32
    it's more of a nice to have from the
  • 00:05:34
    commercial set of things but we kept
  • 00:05:36
    hearing about these deep deep problems
  • 00:05:38
    in data science teams that just kept
  • 00:05:40
    reappearing again and again and it's
  • 00:05:42
    basically the process that I've
  • 00:05:43
    described but when you reach it at the
  • 00:05:45
    end of the cycle right so when when they
  • 00:05:47
    tell us like look we spent all this time
  • 00:05:49
    collecting data curating it validating
  • 00:05:52
    it we built a model it's just not good
  • 00:05:55
    enough to reach production and we don't
  • 00:05:57
    know what to do we're spending like it's
  • 00:05:58
    the most highly paid org like I wish I
  • 00:06:02
    had the salary of a PhD data scientist in a
  • 00:06:06
    corporate but then you're understanding
  • 00:06:08
    that like they're spending so much time
  • 00:06:09
    just like looking for the needle in the
  • 00:06:11
    haystack both to find the data that got
  • 00:06:14
    the model to underperform and also once
  • 00:06:17
    you found it are you going to like
  • 00:06:18
    restart the process again and you
  • 00:06:20
    mentioned ChatGPT ChatGPT so like came to
  • 00:06:23
    our lives pretty much at the same time
  • 00:06:25
    like a month or two after we started
  • 00:06:27
    working together and when we saw saw
  • 00:06:30
    like how this you know drastically
  • 00:06:32
    changes how people perceive AI how
  • 00:06:34
    businesses adopt Ai and also our
  • 00:06:38
    Regulators look at this evolving thing H
  • 00:06:40
    we understood that there is first of all
  • 00:06:42
    that we need to change a concept so we
  • 00:06:44
    got we let go of all of like the Technion
  • 00:06:47
    related research we started a clean
  • 00:06:50
    slate then formed the company and we
  • 00:06:52
    wanted to solve these deep challenges
  • 00:06:54
    that usually just like people skip
  • 00:06:58
    them because it seems
  • 00:07:00
    sometimes even too difficult to solve
  • 00:07:02
    and we're like okay we have some bright
  • 00:07:04
    Minds we're going to dig deeper we're
  • 00:07:06
    going to solve something that's very
  • 00:07:08
    inherent to the challenges of data
  • 00:07:09
    science teams um and vis-à-vis that the
  • 00:07:13
    market is going through this
  • 00:07:14
    transformation so we see it as the
  • 00:07:16
    perfect storm amazing that's crazy and
  • 00:07:19
    you see it like on a layer above the the
  • 00:07:22
    the in internal systems of the company
  • 00:07:25
    yeah so I will say right now we're
  • 00:07:27
    working with open source uh models I
  • 00:07:29
    didn't mention but it's not only for
  • 00:07:30
    large language models it's also for
  • 00:07:32
    classical AI so non-generative models
  • 00:07:34
    vision radar speech-to-text as long as we
  • 00:07:37
    have access to the weights in some use
  • 00:07:39
    cases if we have access to the data
  • 00:07:41
    itself but in LLMs it's not necessary we
  • 00:07:46
    can do this process and then it's
  • 00:07:47
    basically part of the infrastructure
  • 00:07:49
    layer with organizations that are
  • 00:07:51
    working internally can you give me a use
  • 00:07:54
    case something like dumb it down yes get
  • 00:07:57
    it to my language um okay so two very
  • 00:08:00
    different examples one for
  • 00:08:01
    non-generative AI and one for generative
  • 00:08:04
    AI so let's say that you're working on
  • 00:08:06
    an autonomous driving model right like
  • 00:08:09
    cars are progressing it's a very shaky
  • 00:08:11
    market I'll say but eventually I believe
  • 00:08:13
    we'll get there yeah yeah how this
  • 00:08:14
    process looks like is a let's say that
  • 00:08:17
    you roll out a new kind of camera or a
  • 00:08:18
    new kind of lidar you spend a lot of time
  • 00:08:21
    and money on collecting data using this
  • 00:08:23
    all like new uh devices or new
  • 00:08:26
    technologies from different sceneries so
  • 00:08:28
    like you wanted to have a road signs
  • 00:08:30
    with different
  • 00:08:31
    languages different kind of seasons and
  • 00:08:33
    so forth different terrains then in
  • 00:08:36
    these cases you send it to labeling a
  • 00:08:38
    lot of times it's just like happening
  • 00:08:39
    offshore in developing countries
  • 00:08:41
    sometimes by computers themselves and
  • 00:08:44
    then you train models on top of that
  • 00:08:45
    that could understand when is it the
  • 00:08:47
    right time to take a a left turn when do
  • 00:08:50
    you see an object that a car should
  • 00:08:51
    avoid or stop and so forth now during
  • 00:08:55
    the labeling of this process think about
  • 00:08:58
    it it's such a tedious job you have
  • 00:08:59
    millions of objects yeah H the people
  • 00:09:02
    are really really underpaid and they
  • 00:09:04
    don't have much time so it's optimizing
  • 00:09:06
    for quantity over quality and you
  • 00:09:09
    understand that maybe the model fails in
  • 00:09:12
    so like a different specific kind of
  • 00:09:14
    scenario right like it can recognize
  • 00:09:16
    motorbikes but if it's a yellow
  • 00:09:17
    motorbike it just never understands like
  • 00:09:20
    what is the object and then you read in
  • 00:09:21
    the news that a car from Brand X crashed
  • 00:09:24
    into a wall and uh US regulatory
  • 00:09:28
    agencies are in investigating until
  • 00:09:30
    there's a change of of government um a
  • 00:09:33
    lot of times these problems happen
  • 00:09:34
    because of labeling issues so we have
  • 00:09:37
    some public case studies that you can
  • 00:09:38
    check out on our website where we
  • 00:09:39
    detected just like that 10% of one of
  • 00:09:43
    the most trusted data sets for
  • 00:09:44
    autonomous driving included mislabels so
  • 00:09:47
    which is a crazy amount right and some
  • 00:09:49
    ridiculous examples where you just see
  • 00:09:51
    20 people each one has a box that says
  • 00:09:53
    car around it instead of pedestrian yeah
  • 00:09:56
    so the first challenge is to find this
  • 00:09:58
    bad data the Second Challenge is are you
  • 00:10:01
    going to retrain the model which is an
  • 00:10:03
    option or what happens if you can
  • 00:10:04
    actually remove it in an instant and
  • 00:10:06
    then improve the accuracy of the model
  • 00:10:08
    from an end-to-end process which
  • 00:10:10
    really just like makes everyone lives
  • 00:10:12
    easier and safer yeah so that's the
  • 00:10:15
    non-generative use case where we use
  • 00:10:16
    unlearning without educating the market
  • 00:10:18
    too much just for the value offering of
  • 00:10:21
    let's increase the accuracy of your
  • 00:10:22
    models instantly in llms we're talking
  • 00:10:25
    now with a big Fortune 500 corporate we
  • 00:10:28
    presented them a case that we thought
  • 00:10:30
    will interest them which is removing
  • 00:10:31
    personal sensitive information so let's
  • 00:10:33
    say that they fine-tuned the model on top
  • 00:10:35
    of a database that they had that
  • 00:10:37
    included Social Security numbers right
  • 00:10:40
    that's a major issue both from like what
  • 00:10:42
    happens it's a PR a nightmare if you
  • 00:10:45
    know an LLM discloses Regulators will eat
  • 00:10:47
    you alive and it's also a regulatory
  • 00:10:50
    issue uh they said great uh three weeks
  • 00:10:53
    from now can you present it to our VPs
  • 00:10:55
    but instead of unlearning this sensitive
  • 00:10:58
    information can you unlearn bias
  • 00:11:01
    and we're like okay very different
  • 00:11:03
    challenge very different but we made it
  • 00:11:06
    and in that case a you know you can see
  • 00:11:09
    bias is part of fair and responsible AI
  • 00:11:13
    maybe some Regulatory Affairs but if I
  • 00:11:16
    put myself in their shoes they're
  • 00:11:18
    thinking about the fact that they just
  • 00:11:19
    want to de-risk the adoption of AI once you
  • 00:11:22
    have a customer facing
  • 00:11:23
    application you must make sure that the
  • 00:11:25
    models act as expected act as the best
  • 00:11:28
    and maybe even better than the best
  • 00:11:30
    customer service agent yeah and bias is
  • 00:11:32
    a big thing I mean I remember even
  • 00:11:33
    watching a Netflix show about Ai and
  • 00:11:36
    bias in AI even before ChatGPT came into
  • 00:11:39
    our life the LLMs and everything and it
  • 00:11:41
    spoke about how you portray things on the
  • 00:11:44
    internet something like doctors usually
  • 00:11:45
    portrayed with a man a white man right
  • 00:11:48
    it create B it creates bias and and it's
  • 00:11:52
    crazy it's not just with corporates it's
  • 00:11:53
    everywhere like it's all over the LLMs
  • 00:11:55
    right yeah absolutely and differently to
  • 00:11:58
    the a classic use case that we had in
  • 00:12:01
    mind it's not about a specific data
  • 00:12:03
    point that went wrong right because
  • 00:12:05
    biases exist in our real
  • 00:12:07
    lives can I can I jump in I think I'm
  • 00:12:10
    representing this is for the listeners
  • 00:12:11
    now uh I'm a bit confused or still I
  • 00:12:15
    mean it's it seems highly Technical and
  • 00:12:17
    it is uh and I open conversation let me
  • 00:12:20
    share with you how I saw it and what I
  • 00:12:22
    thought it was and maybe you know maybe
  • 00:12:26
    I got it wrong maybe it's a good example
  • 00:12:28
    to clarify but when you when we talked
  • 00:12:29
    about we also had the pre-call we talked
  • 00:12:31
    about how I love the term machine
  • 00:12:33
    unlearning uh but you know when you say
  • 00:12:35
    unlearning I felt it's like forgetting
  • 00:12:38
    so you know the classic case that I had
  • 00:12:40
    for machine unlearning is oh sorry sorry
  • 00:12:44
    Joe the case that I have for machine
  • 00:12:46
    unlearning is if I'm a publicly traded
  • 00:12:48
    company and one of my sales my senior
  • 00:12:51
    salesman did something or has something
  • 00:12:52
    I want to make it forget now that's on
  • 00:12:54
    the let's call it gray line but there are
  • 00:12:56
    other things you can forget that it's
  • 00:12:58
    wrongful data and if you know for
  • 00:13:00
    example ChatGPT or the most popular LLMs
  • 00:13:02
    people query them like they query Google
  • 00:13:04
    so I want to make sure I have the right
  • 00:13:06
    information accessed by Google you know
  • 00:13:09
    change it to the chat coming to
  • 00:13:11
    information and that's like crucial
  • 00:13:13
    information about my company about my
  • 00:13:15
    product about my tech spec that I'm
  • 00:13:16
    making sure it's right what you said
  • 00:13:18
    about visual computer vision and
  • 00:13:19
    analyzing and self driving this sounds
  • 00:13:21
    like data correction and anomaly
  • 00:13:23
    detection and when you also mentioned
  • 00:13:26
    about doing it when people do it
  • 00:13:27
    overseas to me me personally this seems
  • 00:13:30
    like year 2016 yeah uh 2025 you know I
  • 00:13:34
    would dump it on an llm sorry I would
  • 00:13:36
    dump it on some sort of a model train it
  • 00:13:39
    a thousand times and then another
  • 00:13:41
    thousand times go public and then train
  • 00:13:43
    it again and again and again but that's
  • 00:13:45
    not what I had in mind when I heard
  • 00:13:46
    unlearning yeah it's like unlearn things
  • 00:13:49
    that can make bad decisions later that's
  • 00:13:52
    how I saw it and so so I think like you
  • 00:13:54
    bring a valid point and like I think
  • 00:13:56
    like I talked twice about uh this topic
  • 00:14:00
    today full disclosure not here I'm not
  • 00:14:03
    cheating
  • 00:14:06
    with I think like for us a company in
  • 00:14:10
    2025 it seems peculiar sometimes to outside
  • 00:14:14
    viewers that we're also targeting
  • 00:14:16
    classic AI it seems like the world has
  • 00:14:18
    forgotten about them like right what
  • 00:14:20
    year is this um it's a different
  • 00:14:22
    conversation that un learning
  • 00:14:24
    specifically but I think like one thing
  • 00:14:26
    that the bubble bursting in 2021 and
  • 00:14:29
    its aftermath has taught us is so like a lot
  • 00:14:32
    of startups are building just for the
  • 00:14:34
    future and those are based on a lot
  • 00:14:37
    of underlying assumptions and sometimes
  • 00:14:39
    like they fail to deliver before that
  • 00:14:41
    future comes to materialize that's
  • 00:14:42
    that's a great sentence that really
  • 00:14:44
    makes sense when also when I think about
  • 00:14:46
    data it's like yeah you can build stuff
  • 00:14:49
    going forward sorry to interrupt you it
  • 00:14:50
    just clicked uh but you have to take it
  • 00:14:53
    to you know fix the past so look at the
  • 00:14:55
    past data also yeah and so both like in
  • 00:14:58
    Ai and like as a business decision we
  • 00:15:00
    felt like we want to be there where all
  • 00:15:02
    of the hype is where it's clear that the
  • 00:15:04
    future of the industry is relying on
  • 00:15:06
    which is generative Ai and like I
  • 00:15:08
    definitely don't uh you know like object
  • 00:15:11
    what you said in this but for us if you
  • 00:15:14
    look at a lot of different Industries we
  • 00:15:16
    talked about Healthcare so medical
  • 00:15:18
    diagnosis a autonomous driving industry
  • 00:15:20
    4.0 defense like something that's very
  • 00:15:23
    present in our lives in Israel all the
  • 00:15:25
    are classic models classic meaning like
  • 00:15:28
    post 2014 that are still very very
  • 00:15:32
    much relevant to the industries a lot of
  • 00:15:34
    cash is being burnt on them and the same
  • 00:15:38
    old problems since 2014 now we're bring
  • 00:15:40
    this new approach where from one end as
  • 00:15:43
    a quite a small startup at this point we
  • 00:15:45
    have like this futuristic stuff
  • 00:15:47
    educating the market coming to these
  • 00:15:49
    sort of events to tell people about by
  • 00:15:51
    the way AI models need to forget and now
  • 00:15:53
    they can forget but at the same time
  • 00:15:55
    talking to a data scientist that has
  • 00:15:56
    been working on maybe like a model that
  • 00:15:59
    has to do with National Defense or with
  • 00:16:00
    autonomous driving and not educating
  • 00:16:02
    them about what is unlearning but just
  • 00:16:04
    telling them look we have a new approach
  • 00:16:06
    that can find the anomalies resolve them
  • 00:16:09
    instantly in the model itself and you
  • 00:16:11
    don't need to care about anything
  • 00:16:12
    besides clicking a button and your
  • 00:16:14
    accuracy could come up 10% more I I just
  • 00:16:17
    had a thought you know it's like you
  • 00:16:18
    have the it's like The Wizard of Oz
  • 00:16:20
    right the sexiness is hidden behind this
  • 00:16:22
    big curtain but then when you go behind
  • 00:16:24
    it there's like uh people working
  • 00:16:26
    24-hour shifts to label things and you
  • 00:16:29
    know the voice of Oz is actually IID
  • 00:16:32
    about
  • 00:16:34
    start I I actually saw it as therapy for
  • 00:16:38
    data it's like you know psychology
  • 00:16:40
    something like that like it is something
  • 00:16:42
    buried in your past that you know that
  • 00:16:45
    you know like ego or a bad experience or
  • 00:16:47
    bad trauma that won't let you move forward
  • 00:16:49
    let's fix that correct that and then
  • 00:16:51
    build a better version of it reminds me
  • 00:16:55
    so so we talked briefly about like me
  • 00:16:56
    being a non-technical person and a
  • 00:16:58
    Technical startup
  • 00:16:59
    I remember one of my conversations with
  • 00:17:01
    a data guy he started talking and then like
  • 00:17:03
    you need to go through a process of data
  • 00:17:05
    massaging and I could not tell if he's
  • 00:17:08
    laughing about me or if this is a
  • 00:17:11
    serious term that people use in
  • 00:17:14
    the profession yeah like if if someone
  • 00:17:16
    deserves a massage here it's not the data are
  • 00:17:19
    you the are you the first company doing
  • 00:17:21
    that uh as as a company did did did
  • 00:17:24
    companies had departments that were
  • 00:17:26
    doing that like how's how the market
  • 00:17:28
    looking okay so so a few basic facts
  • 00:17:31
    maybe about unlearning first of all we
  • 00:17:33
    didn't invent a term since I think like
  • 00:17:35
    2015 it's been an academic term until
  • 00:17:38
    three years ago a lot of data science
  • 00:17:40
    leaders thought it's just an impossible
  • 00:17:42
    challenge that this is science fiction I
  • 00:17:43
    think like vis-à-vis our realization the theory
  • 00:17:45
    wasn't in practice the only real
  • 00:17:50
    serious uh papers about it talked about
  • 00:17:52
    just a different way of training models
  • 00:17:54
    in a much more distributed Manner and
  • 00:17:56
    then you can basically remove mini
  • 00:17:59
    models so everyone was looking at the
  • 00:18:01
    future but no one was looking at the
  • 00:18:02
    past everybody was saying let's let's
  • 00:18:04
    create it better to to work better at
  • 00:18:06
    the future but not correct the past but
  • 00:18:07
    then the question is is it better or is
  • 00:18:09
    it just more enabling of this thing that
  • 00:18:11
    you have a fixation on which is
  • 00:18:12
    unlearning and I think when you think
  • 00:18:15
    business you think minimum friction like
  • 00:18:17
    I'm not going to tell data science teams
  • 00:18:19
    to change anything about how they currently
  • 00:18:22
    build their models otherwise it will
  • 00:18:24
    hurt your feelings yeah no clients
  • 00:18:27
    anymore exactly and LLMs rose and I think
  • 00:18:29
    like there was suddenly a a push like
  • 00:18:33
    it rose
  • 00:18:35
    back and I think like in 2024 Gartner
  • 00:18:39
    put unlearning as one of the four hot
  • 00:18:41
    topics to look at in technology oh Gartner you
  • 00:18:45
    made it and it's mainstream already
  • 00:18:46
    you're too late okay um if you look at
  • 00:18:49
    the research published there's research
  • 00:18:51
    by Microsoft Amazon meta IBM Google
  • 00:18:55
    Google even had like a public data
  • 00:18:57
    science competition around offer us
  • 00:18:59
    solutions for unlearning a moment
  • 00:19:01
    before the startup was born and yet
  • 00:19:03
    there's zero products in the market so
  • 00:19:05
    we are the first unlearning startup and
  • 00:19:08
    uh
  • 00:19:09
    products but in academia it's already a
  • 00:19:12
    thing or and you're the only one who
  • 00:19:14
    commercialized it or the first one it's
  • 00:19:15
    a thing existing research is very flawed
  • 00:19:18
    I think like it's a work in progress and
  • 00:19:19
    we are looking at everything that is
  • 00:19:21
    being published you know taking notes
  • 00:19:24
    taking inspiration understanding like
  • 00:19:25
    what do they do better what do we do
  • 00:19:27
    better and how can we a improve but so
  • 00:19:30
    far there's been a lot of challenges and
  • 00:19:32
    also when you think about this act of
  • 00:19:33
    forgetting about the depth of removing
  • 00:19:36
    things like what happens if you a tell
  • 00:19:39
    an Enterprise we remove this social
  • 00:19:41
    security number or this inaccurate
  • 00:19:44
    information and then it's not there
  • 00:19:46
    Microsoft's I think like the most
  • 00:19:48
    published piece around unlearning is a
  • 00:19:50
    paper called forgetting Harry Potter by
  • 00:19:53
    Microsoft research and about two days
  • 00:19:56
    after they released it on Hugging Face
  • 00:19:59
    people just got the model to remember
  • 00:20:01
    Harry Potter that's brilliant right so
  • 00:20:03
    like yeah he never existed yeah but then
  • 00:20:06
    like it it did Microsoft came back yeah and
  • 00:20:11
    sort it was like a cliffhanger and then
  • 00:20:13
    like Harry Potter resurfaced like look
  • 00:20:15
    at the Pink Elephant right exactly so
  • 00:20:17
    it's as if the audience still remembers
  • 00:20:19
    the Pink Elephant after the lecture I'm
  • 00:20:21
    very disappointed because I told them to
  • 00:20:23
    forget a so first of all how do you
  • 00:20:25
    actually make it deep enough and
  • 00:20:26
    thorough enough second of all how do you
  • 00:20:28
    validate
  • 00:20:29
    cuz clearly Microsoft validated but it
  • 00:20:31
    wasn't good enough by the way like they
  • 00:20:33
    the reason that they released it
  • 00:20:35
    publicly was in order for like the
  • 00:20:36
    public domain to Red Team it which they
  • 00:20:39
    have done successfully and I think like
  • 00:20:41
    the last and maybe even more challenging
  • 00:20:44
    is how do you make sure that you didn't
  • 00:20:45
    hurt the rest of the model so if you
  • 00:20:47
    think about it as a brain I was about to
  • 00:20:48
    ask you about what's what's the
  • 00:20:49
    butterfly effect of changing the past
  • 00:20:52
    because it's like you know in the the
  • 00:20:54
    Sci-Fi movies right you change the past
  • 00:20:55
    you have a butterfly effect yeah what's
  • 00:20:58
    the butterfly
  • 00:20:59
    I'm collecting more references usually I
  • 00:21:01
    use Eternal Sunshine of the Spotless Mind or the these
  • 00:21:05
    are chaotic systems with big or Back to
  • 00:21:07
    the Future the classic right
  • 00:21:10
    exactly um so basically like that has
  • 00:21:13
    been like the major I think like a
  • 00:21:15
    holdback for this field of research and
  • 00:21:18
    and then potential products cuz there's
  • 00:21:22
    always been a
  • 00:21:23
    trade-off if you erase too much or uh
  • 00:21:27
    you try to erase
  • 00:21:29
    harder to make sure that it is entirely
  • 00:21:31
    erased then there's peripheral damage how
  • 00:21:34
    how much of these are just like black
  • 00:21:35
    boxes then when you really get down to
  • 00:21:37
    it right like we talked about Wizard of
  • 00:21:38
    Oz and then the curtain and you go
  • 00:21:40
    behind is it a black box
  • 00:21:43
    like yeah yeah when you're kind of
  • 00:21:45
    looking at it from that level above and
  • 00:21:48
    you know the weights you don't know the
  • 00:21:49
    information inside right it's
  • 00:21:51
    interesting because we we can't look but
  • 00:21:53
    he's looking
  • 00:21:54
    inside but I mean like a lot of people
  • 00:21:56
    are looking have access to the same
  • 00:21:59
    right like but you have access to a
  • 00:22:01
    better mind than
  • 00:22:02
    mine but not my mind so it's a little
  • 00:22:06
    bit both because I think both in our
  • 00:22:09
    capabilities and also things that you
  • 00:22:11
    can find out there there's a lot of
  • 00:22:13
    progress around things that could
  • 00:22:15
    explain the behaviors and also
  • 00:22:18
    regulation so like forces now
  • 00:22:20
    Enterprises or foundational model
  • 00:22:22
    developers to disclose more about how
  • 00:22:24
    they build and some do it not because of
  • 00:22:26
    Regulation just because they bring
  • 00:22:27
    better data scientist
  • 00:22:32
    on board release research to build really
  • 00:22:36
    great
  • 00:22:37
    models but at the same time I think
  • 00:22:40
    explainability is I'm I'm talking to
  • 00:22:43
    someone who a lot of our Technologies
  • 00:22:46
    could be and somewhat also use as
  • 00:22:48
    explainability techniques explainability
  • 00:22:50
    meaning the ability to explain the
  • 00:22:52
    behaviors of models but at the same time
  • 00:22:53
    I think explainability will never be
  • 00:22:56
    perfect we'll never be able to really
  • 00:22:59
    decipher a a direct correlative so like
  • 00:23:03
    a cause and effect in that sense just
  • 00:23:06
    because the better that we build the
  • 00:23:08
    models and the whole point of AI moving
  • 00:23:11
    from like classic algorithms is things
  • 00:23:13
    need to get more complex like models
  • 00:23:15
    need to make better decisions and they
  • 00:23:17
    need to do these like a correlations
  • 00:23:19
    that sometimes are not intuitive for the
  • 00:23:22
    naked eye what we can do is both try our
  • 00:23:25
    best to explain find different mechanisms
  • 00:23:28
    that are into play and understand what
  • 00:23:31
    comes in first and how to so like
  • 00:23:34
    resolve the issues that come out later
  • 00:23:36
    so one example is um not something
  • 00:23:39
    that's developed uh By Us by the way in
  • 00:23:42
    Transformers um before that at some
  • 00:23:45
    point it was seen as you know a pretty
  • 00:23:47
    big black box but now there's an
  • 00:23:49
    understanding of there's a specific area
  • 00:23:51
    in the weights of models that are based
  • 00:23:52
    on a Transformer architecture that's
  • 00:23:55
    more correlated with storing information
  • 00:23:58
    so it's almost like a knowledge base now
  • 00:24:00
    of course there's more areas that
  • 00:24:02
    include some information not just
  • 00:24:03
    transforming it's actually storing
  • 00:24:05
    things yeah
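
The finding referenced here comes from public interpretability work (e.g. the ROME line of research): the MLP/feed-forward blocks inside each transformer layer behave like a key-value store of factual associations, which is why editing and unlearning methods target those weights first. A rough sketch of which modules that means, using GPT-2 purely as an example model:

```python
# Illustrative only: list the feed-forward (MLP) blocks that public research
# associates with factual storage in transformer LMs. GPT-2 is an example.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
for i, block in enumerate(model.transformer.h):
    # block.mlp: the weights most associated with stored facts ("knowledge base")
    # block.attn: mostly routes/"transforms" information between positions
    print(f"layer {i:2d}: mlp params = {sum(p.numel() for p in block.mlp.parameters()):,}")
```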
  • 00:24:06
    now again I'm talking here as the non-technical leader of tech by
  • 00:24:09
    the way Transformers is the T in GPT
  • 00:24:11
    just you know for general knowledge for those
  • 00:24:12
    who don't know or are not sure so that's when we
  • 00:24:15
    think about unlearning one of the
  • 00:24:16
    challenges is finding where this coming
  • 00:24:19
    back to not how to remove Behavior but
  • 00:24:20
    how to remove specific data the first
  • 00:24:24
    challenge is okay where is this data
  • 00:24:25
    stored we're not going to spray and pray
  • 00:24:28
    yeah right across the model so that
  • 00:24:32
    understanding that comes from again
  • 00:24:33
    public research that was not done done
  • 00:24:35
    by us gave us like the first clue into
  • 00:24:38
    okay we know this is the area where data
  • 00:24:39
    is stored then the next challenge is
  • 00:24:41
    isolating the specific area which this
  • 00:24:44
    specific information is stored and so
  • 00:24:45
    forth and I think like another cool
  • 00:24:47
    thing in our Technologies is um we
  • 00:24:51
    basically a lot of
  • 00:24:53
    our work is reliant on an idea that we call
  • 00:24:56
    data influence which is taking inspiration
  • 00:24:59
    from a mathematical field called
  • 00:25:00
    influence functions and basically uh we
  • 00:25:04
    use it to explain how each sample that's
  • 00:25:07
    introduced to a model during training
  • 00:25:09
    affects the learning of this model how
  • 00:25:12
    it perceives this sample how this how
  • 00:25:14
    does the model perceive other samples
  • 00:25:16
    after it learns this sample and later on
  • 00:25:19
    where the model makes predictions which
  • 00:25:22
    training samples that it learned in the
  • 00:25:23
    past were the most important ones that
  • 00:25:26
    made them mod decide can youtune then
  • 00:25:29
    all these micro models inside the big
  • 00:25:32
    model to make it more accurate like the
  • 00:25:35
    like the actual it's not the data
  • 00:25:37
    scientist right but the architect of the
  • 00:25:40
    the the system I guess okay so if I'm
  • 00:25:44
    rephrasing so like we talked about
  • 00:25:46
    Transformers and so like how they're
  • 00:25:47
    built yeah and the question is like can
  • 00:25:49
    we build a better
  • 00:25:51
    architecture so that's one of our
  • 00:25:53
    long-term ideas got it again I think
  • 00:25:55
    like we talked about having too many
  • 00:25:57
    things on the plate of your roadmap though
  • 00:25:59
    so talking about the next step is is
  • 00:26:00
    quite a lot but so I I I sorry I just
  • 00:26:03
    got a little bit freaked out just now
  • 00:26:05
    because you
  • 00:26:06
    know my work here is done yeah that's it
  • 00:26:10
    okay so robots are going to take over
  • 00:26:11
    one day right are you like at the
  • 00:26:13
    Forefront of protecting us from the AI
  • 00:26:16
    takeover right I actually thought about
  • 00:26:18
    that and also is there a dark side for
  • 00:26:21
    unlearning right okay because then you
  • 00:26:23
    can influence data let's talk about an
  • 00:26:25
    interesting and and then it connects to
  • 00:26:27
    the robots because you know yeah you
  • 00:26:29
    know forgetting history is is
  • 00:26:32
    yeah you're never here this is one of
  • 00:26:34
    the like hot topics right like what
  • 00:26:37
    influence it becomes sentient like all
  • 00:26:39
    the rest of it you know and there's
  • 00:26:43
    obvious like very practical like cause
  • 00:26:48
    of a use case for this which is you have
  • 00:26:50
    a car you have maybe governments you
  • 00:26:53
    have like life and death things that
  • 00:26:54
    make a difference to everyone's life
  • 00:26:57
    going on in the background you know like
  • 00:26:59
    these are very clear and it's not
  • 00:27:01
    something that we think about today
  • 00:27:03
    necessarily like we just kind of laugh
  • 00:27:05
    sometimes at ChatGPT with the the
  • 00:27:08
    things that gets wrong but like are do
  • 00:27:11
    you see that as like a serious use case
  • 00:27:12
    are you like the frontier Guardian of
  • 00:27:15
    the
  • 00:27:16
    Galaxy bridging
  • 00:27:18
    AI he just makes things up this is my God
  • 00:27:22
    complex this this real by the way is
  • 00:27:24
    going to get like 500,000 views his deck
  • 00:27:27
    his deck in a physical device is the man
  • 00:27:29
    in Black
  • 00:27:31
    Gadget someone from the future is going
  • 00:27:33
    to come now and
  • 00:27:35
    try sh so they don't have the codes for
  • 00:27:38
    the door yeah so are you the guardian of
  • 00:27:40
    the galaxy for the AI takeover
  • 00:27:43
    world I was not prepared for that
  • 00:27:45
    question I think that's good that's why
  • 00:27:48
    unscripted yeah that the real answer is
  • 00:27:50
    I don't know I think like one day I wake
  • 00:27:53
    up optimistic about like the future of
  • 00:27:55
    Mankind versus the future of AI one day
  • 00:27:58
    coming up waking up a bit more I don't
  • 00:28:01
    know concerned is the right word not
  • 00:28:03
    pessimistic yeah and I feel like in
  • 00:28:07
    Hebrew we say like the prophecy was
  • 00:28:08
    given to fools M right in the case of
  • 00:28:12
    like a Skynet Revolution I think like
  • 00:28:14
    I'll try to be on the good side will un
  • 00:28:16
    learning be the solution to everything yeah
  • 00:28:19
    maybe we'll play a a small part are you
  • 00:28:21
    polite when you speak to chat GPT do you
  • 00:28:24
    say please you yes I am no okay I trust
  • 00:28:27
    this guy he knows some sort of control we
  • 00:28:29
    don't know about I I I'm actually polite
  • 00:28:31
    until they do something like with
  • 00:28:32
    Lovable when it gets wrong and deletes
  • 00:28:35
    everything then I'm I'm British I get
  • 00:28:37
    passive
  • 00:28:38
    aggressive that's fine try it's an
  • 00:28:41
    American model so it it passes over do
  • 00:28:45
    emotion it's like the US version of the
  • 00:28:46
    office you know but you know what I was
  • 00:28:48
    thinking about what uh in in um for
  • 00:28:52
    example in in a time where AI will start
  • 00:28:54
    making decisions of its own and he's
  • 00:28:56
    going to start un learning stuff on its
  • 00:28:59
    own then it can go to the dark side
  • 00:29:01
    right is because the AI has the ability
  • 00:29:04
    to change our minds like Google today
  • 00:29:06
    Google can affect the search result
  • 00:29:08
    results and change the whole country's
  • 00:29:11
    Minds like with the companies and
  • 00:29:13
    politicians Twitter uh Facebook so what
  • 00:29:15
    happens if AI does that so so I feel
  • 00:29:18
    like it's it's a nice like full circle
  • 00:29:21
    because when Joe asked so like about
  • 00:29:24
    again like in my own words about the
  • 00:29:25
    architectures I think the next step is
  • 00:29:27
    an architecture allows this process of
  • 00:29:29
    both finding the best data and
  • 00:29:31
    unlearning constantly forgetting the bad
  • 00:29:34
    data um is the next step and uh we have
  • 00:29:37
    some things like in store future plans H
  • 00:29:40
    for this that we call Dynamic Ai and
  • 00:29:43
    recently Google also published like a
  • 00:29:45
    research about an architecture that
  • 00:29:46
    they're bringing to the table called Titans
  • 00:29:48
    which in the long term they see it as a
  • 00:29:51
    potential replacement for Transformers and one of
  • 00:29:54
    the things that characterizes it is the
  • 00:29:55
    differentiation between a long-term
  • 00:29:58
    memory and short-term memory and that
  • 00:30:00
    will you know that's a step that towards
  • 00:30:02
    allowing you to forget stuff that are
  • 00:30:04
    not part of like your DNA and that's
  • 00:30:06
    crazy because that's the part A lot of
  • 00:30:08
    people are not looking when they're
  • 00:30:10
    talking about AI they're talking about
  • 00:30:11
    like um Opus Clip you know uh building
  • 00:30:14
    uh creating scripts for videos on ChatGPT
  • 00:30:17
    versus DeepSeek you're not thinking about
  • 00:30:19
    you know the biases and and and how how
  • 00:30:23
    how to optimize the results not just on
  • 00:30:27
    new models but also perfecting the old
  • 00:30:30
    models right yeah and and it's something
  • 00:30:32
    that I think everybody should start
  • 00:30:35
    thinking about you know so I'm glad
  • 00:30:37
    you're here and another thing this is
  • 00:30:40
    this is you know we we're talking like
  • 00:30:42
    it sounds all very academic but you guys
  • 00:30:44
    have paying customers you're seeing like
  • 00:30:47
    what kind of like improvements are you
  • 00:30:48
    seeing on data sets do you have like
  • 00:30:50
    statistics around that H so first of all
  • 00:30:53
    like anything that we can release
  • 00:30:54
    publicly we release publicly in the
  • 00:30:56
    sense of there's a lot of case studies
  • 00:30:58
    on public data sets and models that are
  • 00:31:00
    available on our blog when we're working
  • 00:31:01
    with
  • 00:31:02
    customers we we're not really allowed to
  • 00:31:05
    talk about like the improvements that we
  • 00:31:07
    see with them and a lot of times like we
  • 00:31:09
    don't see what they're doing with a
  • 00:31:11
    platform because it also works on-prem
  • 00:31:14
    but again like I gave one example of
  • 00:31:16
    like the less sexy stuff of like classic
  • 00:31:19
    AI like autonomous driving and again
  • 00:31:22
    like I I mentioned a number which is
  • 00:31:24
    crazy but I found like satellite imagery
  • 00:31:27
    17% of frames like in a benchmark
  • 00:31:29
    worldwide data set 17% of the frames
  • 00:31:32
    included like inaccuracies our end to
  • 00:31:34
    end process improved that reduced the
  • 00:31:36
    errors of the model by
  • 00:31:38
    17% 17% and if you think okay forget
  • 00:31:41
    forget but this is me being a CEO so we
  • 00:31:44
    improve model accuracy by 5.1% the reduction
  • 00:31:47
    in Errors By the model was 17% okay but
  • 00:31:50
    then take always double check what
  • 00:31:52
    the someone's fact checking right now as
  • 00:31:55
    we speak we the other room doing it
  • 00:31:57
    right now yeah but think how much it
  • 00:31:59
    cost to train the latest GPT model right
  • 00:32:04
    from but 5% of that budget you can just
  • 00:32:08
    fix like by plugging into this platform
  • 00:32:10
    that's a yeah and with LLMs like right now
  • 00:32:14
    we I talked about like the so like crazy
  • 00:32:16
    tasks to unlearn bias right which we're
  • 00:32:18
    like okay let's just think about how we
  • 00:32:20
    are solving that um we took the liberty of
  • 00:32:23
    like riding the hype wave everyone is
  • 00:32:24
    talking about DeepSeek we took DeepSeek R1
  • 00:32:27
    Distill Llama which is basically released
  • 00:32:29
    by DeepSeek but so like a variation of
  • 00:32:32
    llama that they fine tuned in their own
  • 00:32:35
    mechanisms and their own mechanisms
  • 00:32:37
    being API
  • 00:32:40
    we'll and the first thing is like we
  • 00:32:42
    noticed that there's double like almost
  • 00:32:45
    twice the bias that existed in
  • 00:32:48
    Lama like exists in EP in deep seek in
  • 00:32:51
    in this specific deeps variation so
  • 00:32:53
    think now we're not measuring it using
  • 00:32:55
    our own method we send it to like a
  • 00:32:57
    impart partial so like valuation method
  • 00:33:00
    and we're saying that like the bias
  • 00:33:02
    towards nationality towards race towards
  • 00:33:04
    gender is just crazy higher crazy higher
  • 00:33:09
    wow and then we're using our methods um
  • 00:33:12
    to undo this and again not undo specific
  • 00:33:15
    data points but undo behaviors using
  • 00:33:17
    similar methods and we reduce like the
  • 00:33:20
    bias of dips on if I remember correctly
  • 00:33:22
    nationality 76% and gender and race like
  • 00:33:25
    6 5% like reduction wow right so this is
  • 00:33:29
    czy this is something that is not really
  • 00:33:31
    achiev in any method it took us like
  • 00:33:33
    using our platform it takes less than an
  • 00:33:34
    of compute wow that's insane so the
  • 00:33:37
    whole model is improved yeah essentially
  • 00:33:40
    76% around race related yeah exactly so
  • 00:33:44
    basically if you run your model with
  • 00:33:46
    Netflix it will give me better
  • 00:33:48
    recommendations I actually wrote a post
  • 00:33:50
    about hyper personalization now we
  • 00:33:52
    talking about that and I was like H if
  • 00:33:54
    you if you if you fix their bias I'm
  • 00:33:56
    going to get better results right better
  • 00:33:59
    but it won't fix their content it won't
  • 00:34:01
    fix their content no I I I have no place
  • 00:34:04
    in the ecosystem after this podcast I'm
  • 00:34:05
    just talking
  • 00:34:07
    about you should approach him right but
  • 00:34:10
    guys you know even everyday life like I
  • 00:34:12
    think one of the examples you gave to me
  • 00:34:14
    was you know like just recently
  • 00:34:16
    obviously uh president Trump came into
  • 00:34:18
    power and all the models are trained
  • 00:34:20
    around Biden and you remember how
  • 00:34:23
    controversial that was with data across
  • 00:34:25
    the internet and things but just even on
  • 00:34:27
    a dat today basis like using these
  • 00:34:30
    models we don't understand how biased
  • 00:34:32
    they are I guess and 76% I mean do do
  • 00:34:36
    you know how biased Alpaca was then or
  • 00:34:39
    yeah I don't know have the stat we can
  • 00:34:41
    we can run it but you know I use deep
  • 00:34:42
    seek right and I don't know if I used a
  • 00:34:44
    specific version but I never
  • 00:34:47
    identified anything that seemed did you
  • 00:34:50
    ask it about
  • 00:34:53
    the the first thing you think about
  • 00:34:56
    using but it's a good example are you
  • 00:34:57
    using DeepSeek itself hosted on like another
  • 00:35:00
    server or you using like DeepSeek's website
  • 00:35:02
    yeah on DeepSeek's website yeah okay yeah good
  • 00:35:05
    luck it's not biased at all at all no no
  • 00:35:09
    they just own you right now so that's it
  • 00:35:12
    I'm it's honestly I'm like in a trance
  • 00:35:13
    when I'm looking it's like chat GPT now
  • 00:35:16
    as well right where you can actually see
  • 00:35:17
    the reasoning and the thinking of it
  • 00:35:19
    like o3 amazing yeah yeah I
  • 00:35:21
    really like the way it kind of works
  • 00:35:23
    yeah it is and sometimes you you you you
  • 00:35:25
    watch it while it writes and you're like
  • 00:35:28
    that's that's a new age right you know
  • 00:35:30
    one one thing I'm not impressed by I
  • 00:35:32
    have to say is operator have you tried
  • 00:35:34
    operator no I haven't tried it yet so slow
  • 00:35:37
    yeah but they said in in the in the
  • 00:35:39
    video they released they they said it's
  • 00:35:41
    a very early version of it they were
  • 00:35:43
    just jumping the gun you know to be
  • 00:35:45
    first because of dips and everything I
  • 00:35:47
    think uh but but it's it will be
  • 00:35:50
    something crazy yeah also I love how
  • 00:35:52
    crazy fast things are progressing that's
  • 00:35:54
    true that we're like this new like
  • 00:35:58
    but he's able to replace me doing any
  • 00:36:00
    kind of job just like a bit it's like Louis
  • 00:36:02
    C.K. when he's in the plane and the
  • 00:36:04
    internet is not working right it's like
  • 00:36:05
    oh the internet's down it's like you're
  • 00:36:08
    in a bird in the
  • 00:36:10
    sky. No, but it's a great segue to what I
  • 00:36:12
    wanted to ask you, um, which is, you know,
  • 00:36:15
    personally, and based on what you see, you
  • 00:36:17
    know, doing what you do and working
  • 00:36:18
    on what you're working on, like, how do
  • 00:36:20
    you see... you said "replacing me", so how do
  • 00:36:23
    you see all of us in this age? You
  • 00:36:25
    know, we said last time that the new
  • 00:36:28
    titles, the titles of the jobs, are not
  • 00:36:30
    invented yet. But that's okay, that's a
  • 00:36:31
    good title. Would love to hear what you
  • 00:36:34
    think about, you know, whether jobs will be, if
  • 00:36:36
    jobs will be replaced,
  • 00:36:37
    etc. So... because I'm going to say
  • 00:36:40
    something similar... I think, as I said,
  • 00:36:43
    that one day I'm waking up optimistic,
  • 00:36:45
    one day I'm waking up pessimistic. I
  • 00:36:47
    think like on that front every major
  • 00:36:50
    technological disruption big or small
  • 00:36:52
    there was always the underlying
  • 00:36:54
    assumption that it's going to take out
  • 00:36:56
    so many jobs, but just as easily it created
  • 00:36:58
    just like whole new jobs that no one
  • 00:37:00
    anticipated. The classic example is, like,
  • 00:37:03
    the introduction of ATMs didn't remove
  • 00:37:05
    the need for bank tellers, right? The
  • 00:37:08
    online banking
  • 00:37:10
    did. Well, today it kind of did, but now they
  • 00:37:13
    hire more marketers, I see. At the beginning...
  • 00:37:15
    to touch that point, at the
  • 00:37:17
    beginning ATMs didn't really replace
  • 00:37:19
    them, but today you can also deposit,
  • 00:37:21
    and deposit checks and everything, with
  • 00:37:23
    an ATM, and then, with that and online banking,
  • 00:37:25
    tellers are obsolete. So technology
  • 00:37:28
    did take their job, but it took a
  • 00:37:31
    while longer. But I think that AI and all
  • 00:37:33
    this revolution that we talk about is
  • 00:37:34
    the same as going online. You know, going
  • 00:37:37
    online took... it took a lot, but
  • 00:37:38
    essentially it started in '93, and it
  • 00:37:41
    took a while to get where it is today.
  • 00:37:43
    But I think that we are somewhere there,
  • 00:37:45
    like, you know, before the dot-com bubble,
  • 00:37:47
    somewhere. But the growth of technology
  • 00:37:48
    is much faster than it was before. Guys,
  • 00:37:51
    guys, I really want to hear: what is it,
  • 00:37:53
    utopian, is it dystopian? Like, what is
  • 00:37:56
    that future? I think it would be
  • 00:37:58
    like today, but just, like, more flashy and
  • 00:38:00
    cool. Right, so more flashy and cool. More...
  • 00:38:04
    cuz I mean you know we don't know what
  • 00:38:07
    we don't know right um and I think in
  • 00:38:10
    that sense, um, like, as much as it's bad
  • 00:38:15
    for a tech CEO to say this, like, at heart
  • 00:38:17
    I'm a Social
  • 00:38:18
    Democrat. If we reach a reality where
  • 00:38:21
    universal basic income is
  • 00:38:23
    available because AI is doing all the
  • 00:38:25
    jobs? Amazing. Yeah. Is that going to be the
  • 00:38:28
    thing to create it? I doubt it. I doubt
  • 00:38:29
    that we're heading there. I think, like,
  • 00:38:31
    we'll still be running the mill and, like,
  • 00:38:33
    having to wake up in the morning to go
  • 00:38:35
    to work. Hopefully it will be a bit more
  • 00:38:36
    flexible. I will probably work even
  • 00:38:39
    harder... but just, like, we'll have
  • 00:38:42
    different tools. Maybe we'll use our
  • 00:38:44
    brain more, maybe, like, low-tech jobs will
  • 00:38:46
    actually be the more crucial, human-
  • 00:38:49
    delivered things, or maybe, like, people
  • 00:38:51
    that are now, er, using tools to assist
  • 00:38:54
    them will be the human evaluators to
  • 00:38:56
    assist those tools. I'm thinking, who's
  • 00:38:57
    going to be the Netflix of this era, you
  • 00:39:00
    know? The... maybe it's OpenAI,
  • 00:39:03
    I don't know, maybe it's James
  • 00:39:04
    Cameron's company. Yeah, he said, you
  • 00:39:07
    know, it's going to get to the stage
  • 00:39:09
    where production is not production, it's
  • 00:39:11
    like just generation, right? So let's
  • 00:39:13
    say Roy you have a podcast in 10 minutes
  • 00:39:16
    and you want to watch a thriller it's
  • 00:39:18
    going to create something for you that's
  • 00:39:19
    10 minutes long yeah exactly exactly
  • 00:39:22
    what you want to watch just now with
  • 00:39:23
    your favorite actors and actresses and
  • 00:39:25
    you're going to watch it and that's
  • 00:39:26
    going to be... there are, uh,
  • 00:39:29
    two guys named the Dor Brothers, and
  • 00:39:32
    they actually create with Veo 2... the Dor,
  • 00:39:35
    D-O-R, Dor Brothers, and they
  • 00:39:39
    create, uh, animations online. They are
  • 00:39:41
    very famous... they went viral for a lot
  • 00:39:43
    of things they did, like political
  • 00:39:44
    stuff, but they now created, um, the first...
  • 00:39:47
    their first AI show, right? So there's,
  • 00:39:50
    like, an episode, and it's something
  • 00:39:53
    sci-fi about an astronaut, and he's going
  • 00:39:55
    to space with aliens. So it's not
  • 00:39:57
    something you would watch to replace
  • 00:39:59
    Netflix yet, but it's also already
  • 00:40:00
    started. These are faceless YouTube
  • 00:40:02
    videos, right? No, no, there's a
  • 00:40:04
    face: there's an astronaut, and then he
  • 00:40:06
    takes his helmet off and there's a face.
  • 00:40:08
    But yes, it's not there yet. I mean, like,
  • 00:40:10
    you don't have a human being as the
  • 00:40:12
    face of the YouTube channel, it's, like,
  • 00:40:14
    animation created with AI, storyline
  • 00:40:17
    created by AI, scenes, transitions, all the...
  • 00:40:20
    No, but it'll take a year or two years to
  • 00:40:22
    start creating something which is more...
  • 00:40:24
    it's more serious, yeah, something you
  • 00:40:26
    would watch, we want... well, the whole
  • 00:40:28
    Nvidia thing where they're, like,
  • 00:40:30
    generating on the fly, right, to get super
  • 00:40:33
    high FPS in games. Oh, I saw yesterday a
  • 00:40:36
    comparison of the
  • 00:40:37
    5090. What was that? It's like... it
  • 00:40:40
    actually calculates every trace, every
  • 00:40:42
    ray of light,
  • 00:40:45
    everywhere... I mean, not everywhere, but
  • 00:40:48
    in so many bounces, it's crazy. And it's
  • 00:40:50
    running, like, at a steady 250 FPS on the
  • 00:40:53
    hardest, uh, Cyberpunk settings.
  • 00:40:57
    You don't understand anything he says? You're not
  • 00:40:59
    alone. Yeah, yeah. But one question: wearing
  • 00:41:01
    a Super Mario Bros. shirt...
  • 00:41:04
    so, yeah... yeah, no, you're right, safe
  • 00:41:08
    assumption. Uh, but the question that I
  • 00:41:10
    had in mind, it's kind of connected to
  • 00:41:11
    everything... I lost it. I lost my
  • 00:41:13
    train of thought. You take it. Sorry, I'll
  • 00:41:15
    edit all this out. Into this question? No...
  • 00:41:18
    don't single me
  • 00:41:22
    out. Oh, that's crazy. It was... we're
  • 00:41:25
    talking about the Nvidia thing, and it was before
  • 00:41:27
    that. It was before that, but never mind,
  • 00:41:29
    we'll get there, don't worry. No, no,
  • 00:41:32
    we'll get there, it's going to go. I echo
  • 00:41:33
    what you said about, so, like, hyper-
  • 00:41:35
    personalization. I think, like... and,
  • 00:41:37
    again, that has nothing to do, I think,
  • 00:41:38
    with unlearning at the moment, but
  • 00:41:40
    just, like, the fact that content could be
  • 00:41:42
    really adjusted per person. So it's not
  • 00:41:43
    recommending the right content, it's
  • 00:41:45
    actually creating the right content for
  • 00:41:47
    you based on what you've watched. And I
  • 00:41:48
    think, like, there was a Black Mirror
  • 00:41:50
    episode with a similar concept, right? I
  • 00:41:53
    can definitely see it going there,
  • 00:41:54
    hopefully... a bit dark, yeah. Oh, hopefully. I
  • 00:41:58
    remember the question. So maybe, you know,
  • 00:42:00
    connecting it all... tell me if this is
  • 00:42:02
    true, about what... huh? If I said it
  • 00:42:06
    right. [Enter Japanese subtitles.] Um, so if
  • 00:42:11
    Joe said that James, uh, claims that...
  • 00:42:14
    okay, I'm going to tell you
  • 00:42:16
    what I say: James Cameron claims...
  • 00:42:18
    claims? Oh, sorry, he actually did. Yes, you
  • 00:42:20
    said it... I just literally said that two
  • 00:42:22
    minutes ago. You're right. Anyway, can you
  • 00:42:26
    make... can you make everyone
  • 00:42:27
    forget? It's called editing. I have 10
  • 00:42:30
    minutes: show me a skateboarding
  • 00:42:32
    video in the city of Las Vegas, skating
  • 00:42:35
    on top of the Las Vegas Sphere. So it's
  • 00:42:38
    going to generate it, but we need physics,
  • 00:42:40
    right, real-life physics, to generate it.
  • 00:42:43
    So if the physics engine is wrong, and
  • 00:42:46
    this is what we see when you generate
  • 00:42:48
    skate videos, you see them, like, looping
  • 00:42:50
    all around, maybe that's something we
  • 00:42:52
    need to unlearn, like, get the physics
  • 00:42:54
    right. Could be, but again, I feel like I
  • 00:42:57
    would be arrogant to say, like, unlearning
  • 00:42:58
    is the solution for the problem of AI
  • 00:43:00
    with physics, and I think there are even
  • 00:43:03
    brighter minds working on, so, like, this
  • 00:43:05
    intersection, both in foundational
  • 00:43:07
    physics models and also in another
  • 00:43:10
    approach to generative AI. Like, if you
  • 00:43:12
    see what Fei-Fei Li is doing, or Yann LeCun,
  • 00:43:16
    they're talking about, um, world models, if
  • 00:43:19
    I remember correctly. Yes, it's one of the
  • 00:43:21
    top four problems identified. Yeah, so,
  • 00:43:25
    basically, like, as much as we're talking
  • 00:43:27
    now about LLMs all the
  • 00:43:30
    time... again, like, obviously I'm
  • 00:43:32
    paraphrasing, but, like, they say this is a
  • 00:43:33
    temporary setback, like, and the fact that
  • 00:43:36
    all the industry is focusing on this is
  • 00:43:37
    actually preventing us from reaching
  • 00:43:39
    real artificial general intelligence,
  • 00:43:42
    which is models that understand how the
  • 00:43:43
    world works, and then you apply them to
  • 00:43:45
    different missions, instead of predicting
  • 00:43:47
    the next probable token, the next word
  • 00:43:50
    that you want in their response.
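    (A toy illustration of the "next probable token" framing above: autoregressive models score every candidate next token, and generation just keeps picking from that distribution. A minimal sketch, assuming the Hugging Face transformers library and the small open GPT-2 checkpoint; purely illustrative, not the models discussed in the episode.)

```python
# Hedged sketch: inspect a language model's probability distribution over
# the single next token, the quantity autoregressive generation maximizes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")    # small open model, for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The bottle rolls off the table and", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]          # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, i in zip(top.values, top.indices):
    # prints the five most probable continuations and their probabilities
    print(repr(tok.decode([int(i)])), round(float(p), 3))
```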
  • 00:43:52
    So I heard that, about the
  • 00:43:54
    physical aspect of, uh, of AI, I heard a
  • 00:43:57
    podcast a while ago, I forgot who it was,
  • 00:43:59
    but it was someone who knew what he was
  • 00:44:01
    talking about, and he said... so,
  • 00:44:03
    not this one? What's missing... not this one,
  • 00:44:06
    it was
  • 00:44:07
    mine, it was better. Um, no, but I heard
  • 00:44:11
    that right now all that's missing to
  • 00:44:13
    teach physical, uh, AI is data. We're there
  • 00:44:17
    as far, you know, as far as the models go,
  • 00:44:19
    but we need data, we need real-life data,
  • 00:44:21
    like, what happens when you roll a bottle
  • 00:44:23
    off a table, then what happens to the
  • 00:44:25
    bottle when it reaches the edge. So
  • 00:44:28
    it's all about the data, right? Like, we
  • 00:44:30
    have enough data for text with the
  • 00:44:32
    internet and everything, but we still
  • 00:44:34
    don't have... we have that in Blender and
  • 00:44:35
    3D software, you know, for visual effects.
  • 00:44:38
    Yeah, we have physics data, but you
  • 00:44:40
    don't have enough. That's
  • 00:44:41
    why he said, um, assuming we don't have
  • 00:44:43
    enough of it. Still, in released
  • 00:44:46
    Marvel videos you can see wrong physics
  • 00:44:49
    in visual effects, I'm guessing.
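    (The "roll a bottle off a table" point is essentially about state-transition data. A toy sketch of what generating synthetic physics data could look like, assuming plain Python with NumPy; the dynamics are deliberately simplified, nothing like a production simulator or any dataset mentioned here.)

```python
# Hedged sketch: simulate an object rolling off a table edge and record
# (state, next_state) pairs that a physical-AI model could train on.
import numpy as np

def simulate(vx=0.5, table_height=0.75, edge_x=0.3, dt=0.01, g=9.81):
    """Constant-speed roll along the table, then free fall past the edge."""
    x, y, vy = 0.0, table_height, 0.0
    frames = []
    while y > 0:                      # stop when the bottle hits the floor
        frames.append((x, y, vx, vy))
        if x > edge_x:                # past the edge: gravity takes over
            vy -= g * dt
        x += vx * dt
        y += vy * dt
    return np.array(frames)

traj = simulate()
pairs = list(zip(traj[:-1], traj[1:]))  # supervised (state, next_state) pairs
print(f"{len(pairs)} transitions; final state: {traj[-1]}")
```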
  • 00:44:51
    Um, you know, cars, for example,
  • 00:44:54
    like Tesla and electric cars, they also
  • 00:44:56
    provide data, even when they fail, when
  • 00:44:58
    they... when they create an accident,
  • 00:45:00
    when they make an accident, right? It's
  • 00:45:02
    data. What happens when you crash into a wall
  • 00:45:03
    is something the AI needs to learn,
  • 00:45:06
    hopefully without anybody getting
  • 00:45:07
    injured. I'm never going to buy a yellow
  • 00:45:09
    motorbike by the way just so you
  • 00:45:13
    know. Well, it's fixed now, you fixed it, okay,
  • 00:45:17
    fixed, you can buy it. I'm color-blind, by
  • 00:45:20
    the way, so maybe... so that's why it wasn't
  • 00:45:24
    fixed. I thought you were going to say,
  • 00:45:26
    I'm never going to buy this. Yeah, I got
  • 00:45:28
    scared for a second. Was your
  • 00:45:30
    categorization the 10% that was
  • 00:45:32
    wrong? What's the... no, the yellow
  • 00:45:35
    motorbike. Yeah, yeah, so I gave a yellow
  • 00:45:37
    motorbike as an example, but it's not
  • 00:45:39
    that there was a particular issue with
  • 00:45:40
    yellow motorbikes, it's everything you
  • 00:45:42
    shouldn't hit on the road, right. Right.
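    (The yellow-motorbike story is the episode's recurring unlearning example. For readers who want a concrete picture, one recipe from the research literature is gradient ascent on a "forget set", balanced against a retain set; the sketch below is an assumption for illustration only, not a description of how the guest's product works, and forget_loader / retain_loader are hypothetical.)

```python
# Hedged sketch of one unlearning approach from the literature (gradient
# ascent on a forget set); NOT a claim about any particular product.
import torch
import torch.nn as nn

def unlearn(model, forget_loader, retain_loader, lr=1e-4, max_steps=100):
    """Raise the loss on examples to forget, while a retain-set term
    keeps the model behaving normally on everything else."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for step, ((xf, yf), (xr, yr)) in enumerate(zip(forget_loader, retain_loader)):
        if step >= max_steps:
            break
        opt.zero_grad()
        # negative loss on the forget batch = gradient ascent (forgetting);
        # positive loss on the retain batch anchors the remaining behavior
        loss = -loss_fn(model(xf), yf) + loss_fn(model(xr), yr)
        loss.backward()
        opt.step()
    return model
```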
  • 00:45:44
    So, do you use AI at home, not at
  • 00:45:47
    work? Um, I think, like, everyone now uses...
  • 00:45:50
    it's a very, like... by the way, that was, like, a
  • 00:45:52
    steep turn, I feel like we were heading
  • 00:45:55
    somewhere else with that question. So, I feel
  • 00:45:57
    like everyone is using AI now, like, right?
  • 00:46:00
    I'm asking ChatGPT for advice when
  • 00:46:03
    I need to troll a friend, I'm asking it
  • 00:46:05
    to... sorry... to write a song. It's not
  • 00:46:09
    just, like, for job stuff. And also we
  • 00:46:12
    talked about Google: can Google now
  • 00:46:14
    really control governments and public
  • 00:46:17
    perception, if you end up using GPT
  • 00:46:19
    search or Perplexity? Exactly. So I think,
  • 00:46:22
    like, AI was very present in our lives
  • 00:46:24
    even before, just, like, less in your
  • 00:46:27
    face. Yeah, and now it just, like, became a
  • 00:46:30
    part of our day-to-day life; the
  • 00:46:31
    interface updated too. We talked about it
  • 00:46:34
    in the first episode, we said it was
  • 00:46:36
    the phase where we talked to the product
  • 00:46:38
    and the product talked to the AI, and then
  • 00:46:40
    we started talking to the AI. That's...
  • 00:46:43
    that was what it was for me. What are
  • 00:46:44
    your kind of go-to applications, and, like,
  • 00:46:47
    what are the things that you're playing
  • 00:46:49
    around with at home, like, in terms of AI?
  • 00:46:51
    Um, first of all, like, I work way too
  • 00:46:54
    much, yeah... there's no "at home", if my
  • 00:46:57
    investors are hearing
  • 00:46:59
    me. So I think, like, one of my goals for
  • 00:47:02
    this year is maybe
  • 00:47:03
    to get some work-life balance... like, I
  • 00:47:06
    have, like, a little small font on the word
  • 00:47:08
    "life" in this sense. Ah, yeah. But I think,
  • 00:47:11
    like, when it first came out, I remember,
  • 00:47:13
    like, I used... like, even before ChatGPT,
  • 00:47:16
    there was... like, everyone was all about
  • 00:47:17
    DALL-E, right, like, image generation. I
  • 00:47:19
    remember, like, I used it for just, like,
  • 00:47:21
    creative stuff. I was like... I had, like, a
  • 00:47:23
    dream journal, with, like... sometimes I
  • 00:47:25
    would dream full Netflix-like movies, and
  • 00:47:28
    then I was like, okay, now, given
  • 00:47:30
    that my poor, like, drawing skills will
  • 00:47:32
    lead me nowhere, I can just, like, describe
  • 00:47:35
    some images, some scenes from there, and
  • 00:47:37
    see what happens, right? This is when it
  • 00:47:39
    was just on Discord, right? I think they
  • 00:47:43
    started... yeah, yeah, that's
  • 00:47:46
    right, Midjourney. Now I use
  • 00:47:49
    Ideogram. It's crazy, it's crazy
  • 00:47:51
    good. Yeah, so, by the way, it's funny that,
  • 00:47:53
    like, it's been, what, like, a bit more than
  • 00:47:55
    two years, but now I'm like, oh, I miss
  • 00:47:57
    the days of being creative about it, 'cause now I'm
  • 00:48:00
    like, it's crazy. Remember Netscape? We're
  • 00:48:02
    going to get there. Remember Netscape? No
  • 00:48:03
    one uses it now. I don't think even ChatGPT
  • 00:48:06
    knows
  • 00:48:07
    what it is, because some things you don't
  • 00:48:10
    need to force it to forget, right?
  • 00:48:14
    Yeah, but again, just, like, for my day-to-
  • 00:48:16
    day... I think, like, you know... so, like,
  • 00:48:19
    we've all done it, like, you're
  • 00:48:20
    going into a new place, and instead of,
  • 00:48:23
    like, Googling for, like, top 10
  • 00:48:24
    recommended spots in Singapore, I'm like,
  • 00:48:26
    okay, ChatGPT, what can I do there in,
  • 00:48:29
    like, two days? Uh, I'll have, like, a
  • 00:48:31
    layover in Abu Dhabi, how can I, like,
  • 00:48:34
    maximize 24 hours, both for business and,
  • 00:48:36
    like, for... which is what you would do on
  • 00:48:38
    Google. Sorry... yeah, but with Google it
  • 00:48:41
    would just, like, take longer. It would
  • 00:48:42
    take longer. Now I feel like we're in a
  • 00:48:46
    gap where, in Google, already, like, the
  • 00:48:49
    results are mastered by SEO, and then the content
  • 00:48:52
    itself is, like, paid for within the
  • 00:48:55
    websites. Again, like, there's a problem,
  • 00:48:57
    because then these models are trained on
  • 00:48:59
    the same public data, right, so they have
  • 00:49:01
    the same biases or inaccuracies.
  • 00:49:04
    But I still think that it's, like, a
  • 00:49:05
    moment before, um, like,
  • 00:49:10
    AI gets cannibalized by these, like,
  • 00:49:13
    marketing, uh, techniques, and right now I
  • 00:49:15
    would trust more, like, the results that
  • 00:49:17
    ChatGPT is giving me for, like,
  • 00:49:18
    recommendations for things to do in
  • 00:49:20
    Singapore for two days than, like, uh, the
  • 00:49:22
    first page of Google results. And you
  • 00:49:25
    know, it's funny, because I feel it was
  • 00:49:27
    just yesterday everybody was talking
  • 00:49:28
    about how, oh, people are moving to searching
  • 00:49:31
    things from Google to TikTok, right? When
  • 00:49:33
    you go to Singapore you would go to
  • 00:49:35
    TikTok and Instagram, and, what's the
  • 00:49:37
    best restaurant to go to, and, yeah, you know, I'm
  • 00:49:39
    saying, oh my God, we got there. Yeah, and
  • 00:49:41
    then it's kind of, like, it's
  • 00:49:43
    forgotten now; you talk about searching
  • 00:49:45
    that stuff in AI. It was very
  • 00:49:47
    short-lived, by the way. I think people
  • 00:49:49
    talked about it for, like, a year, and then
  • 00:49:51
    ChatGPT came and that's it, everybody is
  • 00:49:53
    like, oh, ChatGPT has a search function, let's
  • 00:49:55
    go there. Oh yeah.
  • 00:49:57
    I don't want to say anything about, like,
  • 00:49:58
    three adult, like, white men sitting in
  • 00:50:00
    one room talking about... you know, my
  • 00:50:02
    younger sisters, or, like,
  • 00:50:04
    siblings, they are using TikTok, I don't
  • 00:50:06
    know. Like, talking about bias... you can fix
  • 00:50:09
    that, guys, let me tell you how to do it.
  • 00:50:12
    Yeah, ah, so we're coming to the end. Let's,
  • 00:50:15
    uh, go around the room like we normally do.
  • 00:50:17
    It flew by this time, didn't it? It flew by.
  • 00:50:19
    "This time"? That makes it sound like
  • 00:50:20
    every other
  • 00:50:22
    episode...
  • 00:50:24
    Promise. Um, let's, uh...
  • 00:50:27
    let's maybe just, uh, summarize what our
  • 00:50:30
    main thoughts have been from the... the cast?
  • 00:50:33
    No, me again... this, uh, this episode, and
  • 00:50:37
    talking to Ben about, you know... seeing, we
  • 00:50:39
    talk about shovels and stuff... but, you
  • 00:50:41
    know, I'm trying to always get a macro
  • 00:50:43
    view of where we are going: is this the
  • 00:50:45
    internet, or is this Netscape? And this
  • 00:50:47
    episode is like, we still have a lot in
  • 00:50:50
    front of us, there's a lot to do, there's
  • 00:50:52
    a lot of solutions, a lot of problems to
  • 00:50:54
    solve, a lot of opportunities. It's going
  • 00:50:56
    to get exciting and scary... scary and exciting,
  • 00:50:58
    that's the great balance of
  • 00:51:01
    life. Yeah, I get what you're saying. For
  • 00:51:03
    me it's the realization that we're still in
  • 00:51:06
    the early stage, we're the early
  • 00:51:08
    adopters, we still have a lot of things
  • 00:51:09
    to do. And, you know, I hadn't heard about,
  • 00:51:12
    um, the unlearning, and, uh, last episode I
  • 00:51:16
    didn't even think about analytics
  • 00:51:18
    between AI and consumers. So every time
  • 00:51:21
    we're doing this episode I'm
  • 00:51:23
    thinking, oh man, there's so many layers
  • 00:51:25
    to this thing, and it feels like it's
  • 00:51:27
    just the beginning, right? And it's super
  • 00:51:30
    exciting. Like, I'm ready, I'm ready for
  • 00:51:32
    the next episode, I don't know what's
  • 00:51:34
    gonna happen, what am I going to learn,
  • 00:51:36
    but unlearning and biases is something
  • 00:51:38
    that I watched on Netflix, but just now
  • 00:51:41
    I understand that it's just started, and
  • 00:51:43
    you have the competitive advantage
  • 00:51:44
    because you're doing it in real-life
  • 00:51:46
    scenarios, not in academia. Yeah, you know,
  • 00:51:48
    so it's crazy. Yeah... um, yeah, I feel like I
  • 00:51:53
    mostly, like, talked about what I'm doing,
  • 00:51:54
    so I don't want to talk about this
  • 00:51:57
    again, but just, like... for me, I'm
  • 00:52:00
    talking constantly to data scientists,
  • 00:52:02
    right? Like, and I feel like there's been
  • 00:52:03
    a mutual osmosis process where my very
  • 00:52:06
    technical team and technical partners
  • 00:52:09
    have become a bit more like me,
  • 00:52:10
    storytellers, and for me, now I'm a bit
  • 00:52:13
    more like... not a data scientist, I can't
  • 00:52:15
    code for [ __ ], pardon my French, but I can
  • 00:52:18
    carry a conversation and understand the
  • 00:52:19
    concepts. But it's been just, like, very
  • 00:52:21
    fun for me to take a step back, talk
  • 00:52:23
    about things, like, in a bird's-eye view, get
  • 00:52:26
    an outsider perspective on, like, what we
  • 00:52:28
    do and the challenges ahead. And I think,
  • 00:52:30
    like, coming out of it, if I need to... I
  • 00:52:33
    said like each day I'm waking up on a
  • 00:52:34
    different
  • 00:52:35
    side. I want to say, like, I'm an
  • 00:52:37
    optimistic person at heart, and I'm
  • 00:52:39
    coming out optimistic from this
  • 00:52:40
    conversation. I think for me, coming out
  • 00:52:42
    of this conversation as well, and having
  • 00:52:44
    spoken to you about this... I mean, this was
  • 00:52:45
    something that... a nightmare I hadn't even
  • 00:52:48
    had, like, what! And you guys are already
  • 00:52:51
    building a very, very strong platform
  • 00:52:54
    around it to solve it. So, you know,
  • 00:52:56
    just the thought that we have, like, great
  • 00:52:58
    minds out there kind of thinking about
  • 00:53:00
    the future, because it's moving so quick,
  • 00:53:02
    you know, it's... it's actually very
  • 00:53:04
    reassuring, I have to say. But, Ben, thanks
  • 00:53:06
    for coming along, it's been really great
  • 00:53:08
    having you here. And, uh, if you enjoyed
  • 00:53:11
    the episode, like, subscribe, let us know
  • 00:53:14
    your comments in the comment section
  • 00:53:16
    below, and we'll catch you next week for
  • 00:53:18
    the next episode of AI Unscripted. Thanks,
  • 00:53:20
    guys. Thank you.
Tags
  • AI
  • Unlearning
  • Machine Learning
  • Bias in AI
  • Data Compliance
  • Generative AI
  • Autonomous Vehicles
  • Data Science
  • Technology Innovation
  • Startup