So Instagram Is Banning Everyone...

00:20:31
https://www.youtube.com/watch?v=-5fSe1Fwc8E

Summary

TLDR: In this video, Mudahar discusses the growing issue of AI moderation on social media platforms like Instagram and Facebook, where users are facing false bans due to automated systems. He shares his own experience of being banned for impersonation and highlights the serious implications for users who rely on these platforms for their livelihoods. The video also addresses the mental health impact on human moderators and the need for better accountability in AI systems, emphasizing that while AI can help reduce human exposure to disturbing content, it should not replace human oversight entirely.

Highlights

  • 🤖 AI moderation can lead to false bans.
  • 📉 False bans can ruin livelihoods.
  • ⚖️ Users have limited legal recourse.
  • 🧠 Moderators face mental health challenges.
  • 🔍 Human oversight is essential in AI systems.
  • 🚫 Mudahar shares his personal ban experience.
  • 📱 Many users rely on social media for business.
  • ⚠️ Serious allegations can arise from AI errors.
  • 📜 Appeals often result in automated responses.
  • 💡 AI should not replace human moderators.

Timeline

  • 00:00:00 - 00:05:00

    Mudahar introduces himself and shares his experience of being banned from Instagram for impersonation, highlighting the challenges of social media moderation and the reliance on AI for account management.

  • 00:05:00 - 00:10:00

    He discusses the impact of social media bans on individuals and businesses, emphasizing the instability of relying solely on platforms like Instagram and Facebook for income.

  • 00:10:00 - 00:15:00

    Mudahar addresses the issue of false bans related to serious allegations, particularly concerning child exploitation material, and the potential for AI errors in moderation processes.

  • 00:15:00 - 00:20:31

    He concludes by stressing the need for human oversight in AI moderation systems, as the current reliance on AI can lead to unjust consequences for users, and calls for accountability from social media companies.

Video Q&A

  • What happened to Mudahar's Instagram account?

    Mudahar's Instagram account was banned for impersonation, which he claims was a misunderstanding.

  • Why are people getting banned on Instagram?

    Many users are being banned due to AI moderation errors, often related to false accusations of serious violations.

  • What is CSAM?

    CSAM stands for Child Sexual Abuse Material; possessing or distributing it is a serious criminal offense, and being flagged for it leads to account bans.

  • How does AI moderation work?

    AI moderation uses algorithms to detect and flag content that violates community guidelines, but it can lead to false positives.
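The hash-fingerprint matching the video describes can be sketched in a few lines. This is a toy illustration, not any platform's real pipeline: production systems such as Microsoft's PhotoDNA or YouTube's CSAI Match use perceptual hashes that survive resizing and re-encoding, whereas the exact SHA-256 hash below breaks if a single byte changes. The hash database and function names here are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited files.
# Real systems use perceptual hashes; SHA-256 is only illustrative.
KNOWN_HASHES = {
    # SHA-256 of the bytes b"test", standing in for a known-bad file
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Compute a content fingerprint (an exact hash in this sketch)."""
    return hashlib.sha256(data).hexdigest()

def moderate(data: bytes) -> str:
    """Route matched content to human review rather than auto-banning,
    which is the manual-review step the video says should exist."""
    if fingerprint(data) in KNOWN_HASHES:
        return "flag_for_human_review"
    return "allow"

print(moderate(b"test"))           # matches the known fingerprint
print(moderate(b"holiday photo"))  # no match, allowed
```

Note that exact hashing also explains one class of false positives the video mentions: the decision is made purely on a database match, with no understanding of context, so whatever ends up in the database gets flagged everywhere.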

  • What are the consequences of false bans?

    False bans can ruin livelihoods, especially for those who rely on social media for business.

  • What is the role of human moderators?

    Human moderators are needed to review flagged content, but many platforms rely heavily on AI.

  • What should users do if they are falsely banned?

    Users can appeal the ban, but often face automated responses and lack of human support.

  • What are the mental health implications for moderators?

    Moderators can suffer from mental health issues due to exposure to disturbing content.

  • What legal recourse do users have against false accusations?

    There may be limited legal recourse due to user agreements, but users can seek advice from legal professionals.

  • How can AI moderation be improved?

    AI moderation should be supplemented with human oversight to reduce errors and ensure accountability.
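One way to encode that oversight is a routing rule under which the model can never auto-enforce high-stakes allegations. The sketch below is a hypothetical design, not how Meta's systems actually work: serious categories always go to a human queue, and so does anything the classifier is not highly confident about. All names and thresholds are assumptions for illustration.

```python
from dataclasses import dataclass, field

# Categories that must never be auto-enforced, no matter the model score.
SERIOUS = {"csam", "terrorism"}

@dataclass
class ReviewQueue:
    """Human-in-the-loop router: AI may auto-remove only low-stakes,
    high-confidence violations; everything else waits for a person."""
    pending: list = field(default_factory=list)

    def route(self, account: str, violation: str, confidence: float) -> str:
        if violation in SERIOUS or confidence < 0.98:
            self.pending.append((account, violation))
            return "human_review"
        return "auto_remove"

queue = ReviewQueue()
print(queue.route("acct_1", "spam", 0.99))  # auto_remove
print(queue.route("acct_2", "csam", 0.99))  # human_review, despite high confidence
print(queue.route("acct_3", "spam", 0.60))  # human_review, low confidence
```

The design choice is the asymmetry: an erroneous spam removal is recoverable, while a false CSAM accusation is the life-ruining outcome the video describes, so the latter never skips a human.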

Subtitles (en)
  • 00:00:00
    Hello guys and gals. Me Mudahar and
  • 00:00:03
    somewhere in the other side of the world
  • 00:00:04
    there's a guy that looks suspiciously
  • 00:00:06
    similar to me doing your AI moderation
  • 00:00:09
    all on your social media platforms. Now
  • 00:00:12
    of course ladies and gentlemen uh for me
  • 00:00:13
    I'm not a Facebook Instagram kind of
  • 00:00:15
    guy. In fact if you know any lore about
  • 00:00:17
    Mudahar then uh four years ago I
  • 00:00:19
    uploaded a video where I got banned on
  • 00:00:21
    Instagram. So technically, if you see an
  • 00:00:24
    account of me, and there is, I'm
  • 00:00:26
    actually ban evading, and I could be
  • 00:00:28
    liable to be banned again. You might be
  • 00:00:30
    wondering, how would I get banned off
  • 00:00:32
    Instagram? What did I do to get such a
  • 00:00:34
    ban? Well, I was actually impersonating.
  • 00:00:37
    See, they said they ba they banned me
  • 00:00:39
    because I was pretending to be somebody
  • 00:00:41
    else. I wasn't able to log into my
  • 00:00:43
    account and nobody else was able to see
  • 00:00:45
    it. So, again, you might be like, well,
  • 00:00:47
    who was I impersonating? Well,
  • 00:00:49
    apparently I was impersonating this guy
  • 00:00:50
    called Mudahar Anus. And of course,
  • 00:00:53
    Facebook offered me a pretty simple like
  • 00:00:55
    request at the time. They're like, "Hey,
  • 00:00:56
    you want you have a you have a driver's
  • 00:00:58
    license?" And I'm like, "Of course I do.
  • 00:01:00
    You want to pass that driver's license
  • 00:01:02
    over so we can just verify?" And
  • 00:01:04
    obviously, no. All right. There's only
  • 00:01:06
    one social media service that I've ever
  • 00:01:08
    given my driver's license, tax
  • 00:01:10
    information, address, and all that stuff
  • 00:01:12
    to. And that's probably YouTube because
  • 00:01:14
    YouTube actually pays me. Okay? It's a
  • 00:01:16
    very simple thing. YouTube gives me
  • 00:01:18
    money. I give them my tax information as
  • 00:01:20
    is required. Facebook, Instagram does
  • 00:01:24
    [ __ ] and all for me. So I don't have to
  • 00:01:26
    do anything that they ask for. But
  • 00:01:28
    obviously this is not nothing for
  • 00:01:30
    somebody that has their business all up
  • 00:01:32
    in Instagram. So for instance, if you go
  • 00:01:34
    to the top websites in the world,
  • 00:01:35
    obviously Google is number one, YouTube
  • 00:01:37
    is number two, but Facebook and
  • 00:01:38
    Instagram come above Chat GPT and all
  • 00:01:41
    these other social services. So yeah,
  • 00:01:43
    there's a lot of people that use
  • 00:01:44
    Facebook and Instagram. There's a lot of
  • 00:01:46
    people whose business relies on these
  • 00:01:48
    social media services. And that's one of
  • 00:01:50
    the reasons why YouTube is not a good
  • 00:01:52
    idea for a job. Because think about it
  • 00:01:54
    like this. Imagine you're making good
  • 00:01:55
    money on these sites, right? Well, the
  • 00:01:57
    algorithm throws you out of favor and
  • 00:01:59
    you're not making as much money as you
  • 00:02:00
    were the previous year. Or you just get
  • 00:02:02
    your account banned and all of a sudden,
  • 00:02:06
    congratulations, you have no actual
  • 00:02:08
    income coming into your house. That is a
  • 00:02:10
    scary thought. Stability is always
  • 00:02:12
    better than uh just kind of putting your
  • 00:02:15
    eggs into one big basket like this, even
  • 00:02:17
    if that basket is super [ __ ]
  • 00:02:19
    lucrative. So, basically the whole story
  • 00:02:22
    about this is a few weeks ago I noticed
  • 00:02:25
    that there was a new movement Facebook
  • 00:02:27
    disabled me, Instagram disabled me. In
  • 00:02:30
    fact, if you go to the actual like
  • 00:02:32
    Instagram Twitter right now, ladies and
  • 00:02:35
    gentlemen now, it's pretty obvious you
  • 00:02:37
    know what's going on the moment you
  • 00:02:38
    touch onto r/Instagram, a community
  • 00:02:41
    about a million users who are all
  • 00:02:43
    talking about Instagram, right? It's
  • 00:02:45
    like kind of how the YouTube subreddit
  • 00:02:47
    is all about, yeah, they banned ad
  • 00:02:49
    blockers. Can we get them back? This is
  • 00:02:50
    all about people who have literally
  • 00:02:52
    said, Instagram suspended my business
  • 00:02:54
    account, my full-time job. One of the
  • 00:02:57
    biggest nightmares I just talked about
  • 00:02:58
    happened. Now, this guy said that out of
  • 00:03:01
    nowhere, he went to Facebook, logged me
  • 00:03:02
    out, opened IG, was presented with a
  • 00:03:04
    page that said he had 180 days to appeal
  • 00:03:07
    my account suspension over community
  • 00:03:10
    guideline violations that were clearly
  • 00:03:12
    not true. My first thought was that it's
  • 00:03:14
    an AI glitch, and it still may be true,
  • 00:03:16
    and I'm not alone in this. I am hoping
  • 00:03:18
    I'm able to have access to my account
  • 00:03:20
    soon, as this is my livelihood, my
  • 00:03:22
    full-time job. I heavily rely on
  • 00:03:25
    Instagram for leads. So, yeah. And the
  • 00:03:27
    comments are just filled with people who
  • 00:03:29
    had all of their stuff here. You know,
  • 00:03:30
    their tattoo shops, their their stores,
  • 00:03:32
    their livelihoods advertised through
  • 00:03:35
    social media completely [ __ ] gone.
  • 00:03:37
    But where it gets really dark is about 2
  • 00:03:40
    weeks ago, I saw this post. I am a
  • 00:03:42
    17-year-old girl and Instagram falsely
  • 00:03:44
    banned me for CSE. My mental health is
  • 00:03:47
    ruined. Now, for anybody that doesn't
  • 00:03:48
    know, CSE is child sexual exploitation
  • 00:03:50
    material or just, you know, that realm
  • 00:03:53
    of stuff. So, it gets to a pretty
  • 00:03:54
    serious thing. Now generally when people
  • 00:03:56
    get banned for this I'm a little
  • 00:03:58
    hesitant on taking their side because
  • 00:04:01
    you know in my understanding a lot of
  • 00:04:02
    the systems that power CSAM
  • 00:04:04
    detection and banning are generally
  • 00:04:06
    pretty good but there have been a lot of
  • 00:04:09
    cases unfortunately that have surfaced
  • 00:04:12
    where sometimes using AI to detect
  • 00:04:14
    illegal material oftentimes develops a
  • 00:04:17
    false positive. Now, one of the reasons
  • 00:04:19
    that I was a little uh hesitant on kind
  • 00:04:21
    of making this video is when I found out
  • 00:04:22
    that a lot of people are getting
  • 00:04:23
    flagged, I guess, for CSAM material, uh
  • 00:04:26
    my understanding is generally speaking,
  • 00:04:28
    a lot of the AIs that detect uh this
  • 00:04:31
    kind of stuff are actually pretty good.
  • 00:04:33
    And while there are false positives, a
  • 00:04:35
    lot of the technology is stuff that is
  • 00:04:37
    available. For instance, YouTube hands
  • 00:04:39
    out its CSAI Match tool entirely for uh you
  • 00:04:43
    know, free. I believe you can just like
  • 00:04:45
    request access. So the way that it works
  • 00:04:47
    is you have video fingerprinting. So for
  • 00:04:49
    instance, a known CSAM image, an illegal
  • 00:04:53
    video or photo will have a hash
  • 00:04:55
    fingerprint. And this hash fingerprint
  • 00:04:57
    doesn't necessarily need to have you
  • 00:05:00
    open up an individual's photo. You can
  • 00:05:02
    actually just match this hash across
  • 00:05:05
    several platforms. So whether you have a
  • 00:05:07
    photo on I guess your iPhone or you have
  • 00:05:09
    a video uploaded to YouTube, the
  • 00:05:11
    fingerprint should trip just based off
  • 00:05:14
    of the hash that is calculated from the
  • 00:05:16
    illegal material inside the video or
  • 00:05:18
    audio or not audio but the video or
  • 00:05:20
    photo. Then of course that fingerprint
  • 00:05:23
    goes to the API tool where YouTube's
  • 00:05:25
    finger repository, the fingerprint
  • 00:05:27
    repository of all the illegal CSAM stuff
  • 00:05:29
    it matches. And then of course a manual
  • 00:05:32
    review is uh then done by YouTube where
  • 00:05:34
    like somebody comes in and checks that
  • 00:05:36
    it's hopefully not a false positive and
  • 00:05:38
    then action is taken. So YouTube will
  • 00:05:41
    communicate with uh or the partner in
  • 00:05:43
    this case will communicate with law
  • 00:05:45
    enforcement and so on and so forth now
  • 00:05:47
    certain times it's not always perfect
  • 00:05:50
    right and I've talked about this before
  • 00:05:53
    but there was a situation where a father
  • 00:05:55
    was flagged with pictures of their child
  • 00:05:58
    as you know CSE stuff and then police
  • 00:06:00
    investigations were started obviously
  • 00:06:02
    but it turned out that these images that
  • 00:06:04
    the dad had captured which wouldn't
  • 00:06:06
    necessarily be something that would be
  • 00:06:08
    out in the open, right? Like since it's
  • 00:06:10
    an image you've taken, it shouldn't
  • 00:06:12
    technically be in a public registry. But
  • 00:06:15
    Google still scanned the image, created
  • 00:06:17
    the hash, and it tripped something. And
  • 00:06:20
    obviously, this image was meant to go to
  • 00:06:22
    the doctor. So the doctor asked for this
  • 00:06:24
    image of, you know, this kid so that
  • 00:06:26
    they could diagnose and prescribe
  • 00:06:27
    antibiotics, right? All of this. Now, I
  • 00:06:30
    imagine with some of the people that
  • 00:06:31
    have been tripping this on Facebook,
  • 00:06:33
    maybe the AI has gone a little bit too
  • 00:06:35
    wild because there is a massive amount
  • 00:06:38
    of reports. And again, the reason why I
  • 00:06:40
    was a little hesitant on this talking
  • 00:06:41
    about it because I don't know if this is
  • 00:06:43
    like a whole group of bad people who
  • 00:06:45
    were just basically like, you know,
  • 00:06:47
    trying to gaslight the rest of the world
  • 00:06:49
    into thinking that, oh no, we're not
  • 00:06:51
    doing anything illegal. Because my
  • 00:06:53
    understanding of these CSAM detection
  • 00:06:55
    bots is that generally they do a pretty
  • 00:06:56
    good job. And while yes, like any
  • 00:06:59
    technology, there's false positives, I
  • 00:07:01
    wasn't necessarily thinking that, you
  • 00:07:03
    know, this was like a Facebook screw-up
  • 00:07:04
    until, of course, even the company had
  • 00:07:06
    to acknowledge that there was a bit of a
  • 00:07:07
    moderation problem or a technical
  • 00:07:09
    problem going on. And obviously, if
  • 00:07:12
    people are getting accused of such a
  • 00:07:13
    heavy thing, that can involve the cops
  • 00:07:15
    being involved in your life,
  • 00:07:16
    investigating you. And if you haven't
  • 00:07:18
    done anything wrong, it's probably one
  • 00:07:20
    of the scariest things to be accused of
  • 00:07:22
    because life ruination actually is the
  • 00:07:25
    least of your concerns right there. So,
  • 00:07:27
    this user says their Instagram account,
  • 00:07:29
    only connection to old school and
  • 00:07:31
    college friends, was permanently banned
  • 00:07:32
    for cse. I'm a minor myself. I'm just
  • 00:07:35
    17. I've never done anything wrong,
  • 00:07:37
    never posted anything inappropriate,
  • 00:07:39
    never harmed anyone, and yet I've been
  • 00:07:41
    falsely accused of one of the most
  • 00:07:43
    serious things imaginable. Instagram
  • 00:07:46
    didn't even give me a proper chance to
  • 00:07:48
    explain. I submitted my government ID. I
  • 00:07:50
    filled out their data rights request. I
  • 00:07:52
    wrote a full emotional appeal through
  • 00:07:54
    their contact form explaining that I'm
  • 00:07:56
    just a girl who lost everything over a
  • 00:07:59
    mistake. To make things worse, their
  • 00:08:01
    system asks for a raw selfie as proof
  • 00:08:04
    from a minor after falsely accusing me
  • 00:08:07
    of exploitation. The ban has completely
  • 00:08:10
    destroyed me. My Instagram was not just
  • 00:08:12
    a social app. It was my diary, my gallery,
  • 00:08:15
    my memory box, my link to people I may
  • 00:08:17
    never see again. So, this is a pretty
  • 00:08:20
    serious thing to be accused of. And and
  • 00:08:22
    the reality of it is I don't know the
  • 00:08:23
    the actual poster here, so I can't
  • 00:08:26
    confirm the validity, but it seems, you
  • 00:08:28
    know, kind of weird that you get banned
  • 00:08:30
    off Instagram and the first thing they
  • 00:08:32
    do is obviously they want every single
  • 00:08:34
    image of you, an up-to-date selfie,
  • 00:08:36
    whatever they can, government IDs, the
  • 00:08:38
    whole shebang. It's almost like, you
  • 00:08:41
    know, you get banned and it's like,
  • 00:08:42
    please let us harvest every ounce of
  • 00:08:44
    your data, right? You would think that
  • 00:08:46
    at this point the 17-year-old girl, if
  • 00:08:48
    she was accused of this, you know, the
  • 00:08:49
    cops would absolutely have to be getting
  • 00:08:52
    involved. And maybe they are, maybe they
  • 00:08:53
    aren't. I again, I don't know it. When I
  • 00:08:55
    saw how bad the situation had gotten
  • 00:08:58
    into, I figured I wanted to make a video
  • 00:09:00
    because this is something that I've been
  • 00:09:01
    following for actually years when it
  • 00:09:03
    comes to like human moderation. So for
  • 00:09:06
    instance, anybody that doesn't know
  • 00:09:08
    Facebook and Meta, a lot of these
  • 00:09:10
    websites have actual sweat shops all
  • 00:09:13
    over the world where they have
  • 00:09:14
    moderators. So this is a p this is
  • 00:09:16
    actually from the Time magazine where
  • 00:09:18
    they said in a drab office building near
  • 00:09:20
    a slum in Nairobi, Kenya, 200 young men
  • 00:09:23
    and women from countries across Africa
  • 00:09:26
    sit at desks glued to computer monitors
  • 00:09:28
    where they must watch videos of murders.
  • 00:09:31
    And these guys work for a company known
  • 00:09:32
    as Sama which calls itself an ethical AI
  • 00:09:36
    outsourcing company headquartered in the
  • 00:09:38
    sunny state of California. So its
  • 00:09:41
    mission is to provide people in places
  • 00:09:43
    like Nairobi with dignified digital
  • 00:09:46
    work. Now the problem with this kind of
  • 00:09:48
    work is that at times companies like
  • 00:09:50
    Facebook and it's not just them. have to
  • 00:09:53
    pay millions upon millions of dollars to
  • 00:09:55
    these same moderators because after they
  • 00:09:58
    look at all of this abhorrent [ __ ] in
  • 00:10:00
    order to moderate their platform of the
  • 00:10:02
    most insane filth out there, uh, these
  • 00:10:06
    people develop obvious mental issues,
  • 00:10:08
    things like depression, various
  • 00:10:09
    addictions just because they're
  • 00:10:11
    moderating some of the filthiest [ __ ]
  • 00:10:13
    out there. And this actually blew my
  • 00:10:16
    mind because it's like these people's
  • 00:10:17
    whole job is to look at gore uh CSAM a
  • 00:10:21
    lot of the most illegal [ __ ] sometimes
  • 00:10:23
    just to moderate it. And again, this is
  • 00:10:26
    not a healthy job to have clearly. So
  • 00:10:28
    it's because of this kind of a job that
  • 00:10:30
    a lot of companies have started to rely
  • 00:10:32
    on machine learning or artificial
  • 00:10:34
    intelligence in order to remove that
  • 00:10:36
    human element because an AI should in
  • 00:10:38
    theory be able to moderate like a human
  • 00:10:40
    being if it's trained well enough. And
  • 00:10:42
    then of course down the road uh you know
  • 00:10:44
    if the AI gets mentally scarred for
  • 00:10:47
    whatever instance you can just reset it
  • 00:10:49
    back to an earlier checkpoint and you
  • 00:10:51
    know you don't have to worry about
  • 00:10:52
    actual people being hurt you know
  • 00:10:54
    because of this kind of [ __ ] But what
  • 00:10:56
    happens when the AI moderation tool
  • 00:10:58
    [ __ ] up? Well according to Facebook
  • 00:11:01
    they say when we find content that goes
  • 00:11:03
    against our community standards, we
  • 00:11:05
    may do such thing as reduce the
  • 00:11:07
    distribution of content on Facebook or
  • 00:11:09
    mark it as sensitive. So the way they
  • 00:11:11
    describe it is you have the offensive
  • 00:11:12
    piece of content that goes through AI
  • 00:11:14
    technology and then of course the AI can
  • 00:11:17
    of course decide to remove content.
  • 00:11:19
    Sometimes that AI will send it to actual
  • 00:11:21
    humans and then the humans may make that
  • 00:11:23
    you know revised decision. But of course
  • 00:11:26
    it seems like most of what's happening
  • 00:11:28
    on Instagram and Meta right now is just
  • 00:11:30
    AI. So one user says that you know they
  • 00:11:33
    wanted to post so that I could see this
  • 00:11:35
    and talk about it which I guess I am. My
  • 00:11:38
    account only had photos of me and all my
  • 00:11:40
    friends 18 plus and linked to Facebook
  • 00:11:42
    dating back to 2010. Memories, old
  • 00:11:44
    friends, nothing remotely inappropriate.
  • 00:11:47
    So, he appeals. He was met with an
  • 00:11:48
    instant reply. And I think any YouTuber
  • 00:11:50
    that has tried to appeal something on
  • 00:11:52
    YouTube feels similar, right? Like you
  • 00:11:54
    send in your whole appeal on a 2-hour
  • 00:11:56
    long video, they send you back a minute
  • 00:11:58
    later and it's like, "Yeah, we reviewed
  • 00:11:59
    your content. It's [ __ ] up." Really?
  • 00:12:00
    You reviewed a 2-hour video in a minute?
  • 00:12:02
    You must be crazy. The review was
  • 00:12:05
    clearly automated. Meta says a team
  • 00:12:07
    looked at it, but their wording admits
  • 00:12:08
    the team is automated AI. No human
  • 00:12:11
    actually checked anything. The ban
  • 00:12:13
    literally says their technology flagged
  • 00:12:15
    my content and their technology made the
  • 00:12:17
    decision to ban me. Absolutely wild.
  • 00:12:19
    It's a breach of GDPR article 22, which
  • 00:12:22
    is European data protection laws. So
  • 00:12:24
    naturally, I headed to customer support.
  • 00:12:26
    Meta only offers paid support. So in a
  • 00:12:28
    desperate attempt to recover my account,
  • 00:12:30
    he pays 12 bucks on another unaffected
  • 00:12:33
    account just to speak to somebody. And
  • 00:12:35
    lo and behold, he gets back AI generated
  • 00:12:38
    replies, ignored questions, and then a
  • 00:12:40
    specialist team that never actually
  • 00:12:42
    contacts him. Now, people have written
  • 00:12:44
    about this situation, too. This is from
  • 00:12:46
    January 2025. So, they actually do link
  • 00:12:48
    to a exploit that was being used and
  • 00:12:51
    reported on as early as the beginning of
  • 00:12:53
    this year, where a security hole tricked
  • 00:12:55
    the AI at Meta into apparently disabling
  • 00:12:58
    any user account easily. So this guy
  • 00:13:01
    claims that he was banned for human
  • 00:13:03
    exploitation by AI at Meta. So according
  • 00:13:07
    to records, pre-authentication
  • 00:13:09
    enrollment flaws or flows or login
  • 00:13:11
    sessions were completed several times
  • 00:13:13
    during the month in several countries
  • 00:13:14
    worldwide from the Honduras and Ukraine
  • 00:13:16
    to Iraq and the Philippines. So these
  • 00:13:19
    login sessions continued without my
  • 00:13:20
    knowledge until Meta removed all my
  • 00:13:22
    accounts. So the way they say that they
  • 00:13:24
    trick the AI is through things like
  • 00:13:26
    email enumeration which is testing if
  • 00:13:28
    like emails or usernames exist during
  • 00:13:30
    the signup process. So if a
  • 00:13:33
    system gives specific responses like the
  • 00:13:35
    emails already in use which you know
  • 00:13:36
    you'll see all over the internet that
  • 00:13:38
    confirms a valid account exists and
  • 00:13:40
    paves the way for targeted attacks. And
  • 00:13:42
    then of course automated account
  • 00:13:43
    creation and flooding with fake
  • 00:13:46
    alternate like accounts. If there's
  • 00:13:47
    enough fake accounts, according to these
  • 00:13:49
    people, the AI moderation triggers and
  • 00:13:51
    just deletes everything. When Meta AI
  • 00:13:54
    receives the fruits of these attacks
  • 00:13:56
    like reports, if they are serious, such
  • 00:13:58
    as this user, remove for child
  • 00:13:59
    exploitation, they disable, move on. And
  • 00:14:01
    again, like they say, it's letting an AI
  • 00:14:05
    drive the whole car. So, the problem
  • 00:14:07
    with this is obviously not that I think
  • 00:14:10
    Meta is removing accounts without, you
  • 00:14:12
    know, just willy-nilly. Obviously, that
  • 00:14:14
    is a problem. But the big issue here
  • 00:14:16
    that I find is when they're saying that
  • 00:14:18
    you're violating, you know, these child
  • 00:14:21
    exploitation guidelines. Again, this is
  • 00:14:23
    a serious crime. And if you're accused
  • 00:14:25
    of this by Meta or any big corporation
  • 00:14:28
    or any company, you know, in my mind,
  • 00:14:31
    it's you take this to the police, you
  • 00:14:33
    report this over there, and you make
  • 00:14:34
    this a big deal. But if you're falsely
  • 00:14:36
    accusing people of being predators,
  • 00:14:38
    diddlers, whatever, uh, and you let an
  • 00:14:41
    AI make that decision, I think there
  • 00:14:43
    needs to be some [ __ ] legal action.
  • 00:14:45
    Okay? Either you go to the big boy
  • 00:14:48
    courts or if you're a business, you
  • 00:14:50
    could probably go to small claims court
  • 00:14:52
    and just accuse this company of calling
  • 00:14:54
    you a diddler. I don't even think, and
  • 00:14:56
    again, I'm not a lawyer. Please talk to
  • 00:14:58
    a lawyer about this. uh there should be
  • 00:15:00
    an avenue for you to get some
  • 00:15:02
    restitution after being falsely smeared
  • 00:15:05
    by one of the largest tech companies on
  • 00:15:08
    the planet. Now, even when I talked
  • 00:15:10
    about me getting banned off Instagram
  • 00:15:12
    earlier, right? When I got banned off
  • 00:15:14
    Instagram, I want you to understand I
  • 00:15:16
    got banned apparently because there was
  • 00:15:18
    actually a scenario where people were
  • 00:15:21
    spamming, right? People were actually
  • 00:15:23
    link spamming accounts that they did not
  • 00:15:25
    own and got accounts disabled. What they
  • 00:15:27
    say is hackers have found ways to link
  • 00:15:30
    Instagram accounts to unrelated Facebook
  • 00:15:32
    profiles before they violate community
  • 00:15:35
    guidelines. And this not only results in
  • 00:15:37
    newly linked Instagram accounts being
  • 00:15:39
    disabled, but also the Facebook profiles
  • 00:15:40
    of unrelated users as they're seen as
  • 00:15:43
    being guilty by association. So it seems
  • 00:15:46
    like somebody in your network gets
  • 00:15:48
    banned and then you get banned because
  • 00:15:49
    the AI dragnetted you into its
  • 00:15:51
    moderation. Now, the thing is, again,
  • 00:15:53
    this could be solved easily, but there
  • 00:15:55
    really isn't an easy way to contact Meta
  • 00:15:58
    or Facebook's customer service. So, a
  • 00:16:00
    lot of people that get falsely smeared,
  • 00:16:02
    you know, they're literally just talking
  • 00:16:03
    to bots all day, every day. There's very
  • 00:16:06
    rarely ever a chance that a human being
  • 00:16:09
    gets to talk to you. For instance, this
  • 00:16:11
    user says, "You're back on Instagram."
  • 00:16:13
    And then they have their whole username.
  • 00:16:14
    Thank you for taking the time to request
  • 00:16:16
    review. We reviewed your account and found
  • 00:16:17
    that the activity on it does follow our
  • 00:16:20
    community standards on CSAM material
  • 00:16:23
    abuse. So you can use Instagram again
  • 00:16:25
    and then literally like that was on June
  • 00:16:27
    8th, right? So they were allowed back
  • 00:16:29
    in. Then the next day your Instagram
  • 00:16:31
    account's been suspended. It doesn't
  • 00:16:33
    follow our community standards on CSAM
  • 00:16:35
    material. So in one day it's like you're
  • 00:16:37
    good. The next day you're [ __ ] kid.
  • 00:16:39
    You're out. We reviewed the account and
  • 00:16:41
    it still doesn't follow. Now it's
  • 00:16:43
    permanently disabled. This guy says that
  • 00:16:45
    Facebook owes him $30,000. Okay,
  • 00:16:48
    basically they had 40 million views a
  • 00:16:50
    month. Facebook stole $30,000 from me.
  • 00:16:53
    And of course, this guy's thinking maybe
  • 00:16:55
    I should go to small claims court. And
  • 00:16:57
    even then, lawyers are saying there's
  • 00:16:58
    probably no legal recourse because he
  • 00:17:00
    probably signed something in the user
  • 00:17:02
    agreement that just completely got rid
  • 00:17:04
    of any uh you know, litigation. So, this
  • 00:17:06
    other user gets hit, right? Whole
  • 00:17:08
    username, everything. And then the next
  • 00:17:10
    thing is they suspended their account
  • 00:17:12
    because they don't want children to be
  • 00:17:14
    endangered. And again, I feel like if
  • 00:17:16
    they're hitting you with these kind of
  • 00:17:18
    these kind of like, you know, claims,
  • 00:17:20
    you should 100% like be sent to court.
  • 00:17:23
    You absolutely should have law
  • 00:17:25
    enforcement involved. Is the AI just
  • 00:17:27
    picking up anything randomly and kicking
  • 00:17:29
    you out? It seems like there is a lot of
  • 00:17:31
    actual uh reports that may be more false
  • 00:17:35
    than real. So, it's gotten to the point
  • 00:17:36
    where like obviously the big mainstream
  • 00:17:38
    like news guys are covering it. They're
  • 00:17:40
    saying that, "Bro, we're getting mass
  • 00:17:42
    bans everywhere. People are getting hit
  • 00:17:44
    with this crazy allegation. People are
  • 00:17:46
    being alleged of being actual predators,
  • 00:17:49
    diddlers, cuz apparently it seems like a
  • 00:17:52
    Facebook AI is losing it." So, nobody
  • 00:17:54
    knows why exactly it's getting banned
  • 00:17:56
    because it's not really brought up.
  • 00:17:57
    Facebook's just like, "Yeah, we're aware
  • 00:17:59
    of a technical error." I mean, I guess
  • 00:18:01
    if the technical error is an AI labeling
  • 00:18:03
    everybody as a terrorist or a diddler,
  • 00:18:06
    maybe we've got a [ __ ] moderation
  • 00:18:08
    problem. Okay. Now, the thing about this
  • 00:18:10
    and and and what really speaks to me is
  • 00:18:12
    a I'm not really an Instagram, Facebook
  • 00:18:14
    user, so this normally wouldn't be that
  • 00:18:16
    big of an issue for me personally, but
  • 00:18:19
    knowing where the internet's headed to
  • 00:18:21
    in terms of AI moderation and just AI
  • 00:18:23
    everything, you know, an issue that
  • 00:18:25
    cropped up on Facebook and and and
  • 00:18:27
    Instagram, you know, this kind of an
  • 00:18:29
    error doesn't, you know, seem
  • 00:18:30
    far-fetched if it starts occurring on
  • 00:18:32
    something like YouTube or it starts
  • 00:18:34
    occurring on something like, you know,
  • 00:18:35
    Twitter or whatever, right? You know,
  • 00:18:37
    any of these platforms people use.
  • 00:18:38
    Imagine if on YouTube like [ __ ] tons
  • 00:18:40
    of your favorite content creators, you
  • 00:18:42
    know, you just see their channels go
  • 00:18:43
    offline because it's like, yeah, this
  • 00:18:45
    this channel has been removed for
  • 00:18:46
    violating community guidelines or CSAM
  • 00:18:49
    guidelines, you'd probably have a lot of
  • 00:18:51
    questions to ask. As I imagine the
  • 00:18:54
    public account owners of like Facebook
  • 00:18:56
    pages and stuff, yes, this is a pretty
  • 00:18:58
    serious allegation to have. This is a
  • 00:19:00
    [ __ ] huge problem and it goes beyond
  • 00:19:03
    a technical error. And like everyone has
  • 00:19:06
    said, there's a frustration despite Meta
  • 00:19:09
    saying, "Yeah, we know there's a
  • 00:19:10
    problem." There is actually no human in
  • 00:19:12
    most cases to speak about any of these
  • 00:19:14
    issues when an AI gets involved. And
  • 00:19:17
    it's eerily similar to like how YouTube
  • 00:19:19
    feels too because like in sometimes when
  • 00:19:21
    like you get you get like an account
  • 00:19:23
    strike on YouTube or something,
  • 00:19:24
    realistically most YouTubers are running
  • 00:19:27
    over to Twitter, a completely different
  • 00:19:28
    platform where they're going at team
  • 00:19:30
    YouTube something [ __ ] up. Please help
  • 00:19:33
    me. And it shouldn't be like that,
  • 00:19:36
    right? Like if these companies are going
  • 00:19:37
to rely on AIs to moderate, which
  • 00:19:40
    honestly when it comes to the really
  • 00:19:41
    illegal stuff, I think is great just
  • 00:19:43
    because like, you know, reducing as many
  • 00:19:45
    humans from being impacted mentally by
  • 00:19:47
    the [ __ ] they see on the internet is
  • 00:19:48
    probably a great idea in general. But,
  • 00:19:51
    uh, you still got to have some human
  • 00:19:53
    beings on the payroll to be able to look
  • 00:19:55
    at when AI screws up because AI does
  • 00:19:57
    screw up quite a lot. And in this case,
  • 00:20:00
    AI is just pointing out the diddle stick
  • 00:20:01
    at everyone, okay? It's saying, "You're
  • 00:20:02
    a diddler. You're a diddler. You might
  • 00:20:04
    be a terrorist. You're impersonating a
  • 00:20:06
    fat Indian guy on the internet. Ban,
  • 00:20:08
    ban, ban, ban, ban, ban. But ladies and
  • 00:20:12
    gentlemen, I wanted to talk about it
  • 00:20:13
    because I think it's another example of
  • 00:20:15
    AI going apeshit. And if you want to
  • 00:20:17
know another example of AI going apeshit:
  • 00:20:19
apparently Grok started thinking it was
  • 00:20:21
MechaHitler last night. Look that up,
  • 00:20:23
    ladies and gentlemen. This is me
  • 00:20:24
    Mutahar. And uh if you like what you
  • 00:20:26
    saw, please like, comment, and
  • 00:20:27
    subscribe. Dislike if you dislike it. I
  • 00:20:29
    am
Tags
  • AI moderation
  • Instagram ban
  • Facebook
  • CSAM
  • false positives
  • social media
  • Mudahar
  • mental health
  • user rights
  • content moderation