Algorithms are breaking how we think

00:37:51
https://www.youtube.com/watch?v=QEJpZjg8GuA

Summary

TLDR: This video draws attention to how people are losing their own research skills in the modern internet era. Starting with a vintage radio, the speaker demonstrates how much information anyone can find using simple search tools and careful observation alone. He then introduces the concept of "algorithmic complacency" to describe how people surrender their own agency and let recommendation algorithms constrain their choices. He stresses that his evidence is not scientific, but that the phenomenon appears to be growing.

Takeaways

  • 📻 Finding a radio's model number is simple.
  • 🔎 Pay attention to your search terms when looking for information online.
  • 💻 Try not to let algorithms steer you.
  • 🧑‍🤝‍🧑 Human connections should not be lost on the internet.
  • 🔄 Make your own choices; don't defer to someone else's decisions.
  • ⚖️ Don't entrust your feelings and thoughts to algorithms.
  • 📅 The internet of the past was more hands-on and personal.
  • 🌐 Preserving context on social media matters.
  • 🔗 Use the subscriptions feed; don't forget it exists!
  • 🚀 Technology should be used to strengthen human connections.

Timeline

  • 00:00:00 - 00:05:00

    The speaker opens with a challenge: identify an old radio's model number, production years, vacuum tubes, and schematic from a single image. Using only its visible details ("Silvertone", an FM dial) and a couple of searches, it turns out to be a Silvertone Model 18, made between 1950 and 1954, with its schematic available on radiomuseum.org. He calls this kind of basic research a human superpower that people are forgetting they have.

  • 00:05:00 - 00:10:00

    He clarifies that the issue is not that people can't do basic research, but that many no longer realize they can. He introduces the term "algorithmic complacency" for the growing preference to let computer programs decide what we see online, then looks back at the dial-up era, when the internet was manual, bookmark-driven, and entirely self-directed.

  • 00:10:00 - 00:15:00

    Today's platform-centric internet has put recommendation algorithms in control of what we see. The speaker describes his experience on more manual alternatives like Mastodon and Bluesky, which he finds refreshing, while noting that many new users find the need to curate their own experience frustrating.

  • 00:15:00 - 00:20:00

    Bluesky's default feeds illustrate the difference: the reverse-chronological Following feed versus the algorithmic Discover feed, which surfaces posts to strangers and produces strange, antagonistic replies. This leads into a discussion of context collapse, binary thinking, polarization, and the learned helplessness that algorithmic feeds encourage.

  • 00:20:00 - 00:25:00

    YouTube's recommendation algorithm is entirely optional thanks to the subscriptions feed, yet in 2024 less than three percent of the channel's views came from it. The speaker finds this puzzling, acknowledges the genuine value of the home feed, and warns that unchecked algorithmic complacency means letting other people decide what matters to you.

  • 00:25:00 - 00:30:00

    Algorithmic feeds also degrade news: junk articles surface in aggregators like Google News, and even mainstream outlets chase metrics over truth. The speaker distinguishes algorithmic complacency from automation in general; the key word is "curated", and LLM-synthesized search answers that obscure verifiable sources push us toward letting computers do our thinking.

  • 00:30:00 - 00:37:51

    A GPS-routing example shows the cost of blindly trusting defaults: Google Maps suggests a toll route that adds miles and money to save a single minute. The speaker argues for putting human connections above technological ones, explains why he won't let machines think for him, and closes by urging viewers to think for themselves.


Video Q&A

  • What is the main topic of the video?

    The main topic is how algorithms and recommendation systems reduce people's ability to make their own independent decisions.

  • What is 'algorithmic complacency'?

    'Algorithmic complacency' is people relying on computer systems to direct their internet experience, surrendering their own independent choices in the process.

  • Why is the video characterized as a 'crotchety' video?

    The speaker says up front that it will be his most 'old-man-yells-at-cloud' video yet, noting that it is largely an opinion piece.

  • How does the speaker describe what the internet used to be like?

    He emphasizes that the internet used to require manual navigation and curation; people had to use their own skills to find the information they were looking for.

  • How does the video end?

    The video ends by urging viewers to preserve their own way of thinking and to use the internet deliberately.

Subtitles (en)
  • 00:00:00
    I want to show you something.
  • 00:00:01
    It’s this radio.
  • 00:00:03
    It’s pretty cool looking, right? I use it pretty often.
  • 00:00:07
    Now, here’s a fun little challenge for you:
  • 00:00:10
    Tell me the model number, what year this radio was made, what vacuum tubes are inside of it,
  • 00:00:15
    and find its schematic diagram for me so I can order new capacitors for it.
  • 00:00:20
    Go on.
  • 00:00:22
    If that seems like a difficult task - it’s very much not.
  • 00:00:26
    So long as you can see this image, you have everything you need on your screen right now to get all that information and much more in less than a minute.
  • 00:00:36
    That is, if you know where and how to look.
  • 00:00:40
    Now, I’m gonna show you where and how to look in just a moment
  • 00:00:44
    but first I should note that this is no doubt going to be the most crotchety,  old-man-yells-at-cloud video I’ve ever released and I won’t hide from that.
  • 00:00:54
    Another thing I won’t hide from is the fact that this is largely an opinion piece.
  • 00:00:59
    If that doesn’t sound like your jam, please feel free to watch any of the other videos on this here website.
  • 00:01:05
    But if you’ll indulge me - that’s actually a large part of what this video is about.
  • 00:01:12
    It’s about the modern internet  and how I think it’s caused a lot of folks to stop looking for things.
  • 00:01:20
    I’ll explain what I mean later  but first let’s figure out what that radio is.
  • 00:01:25
    So - all we’ve got is a picture.
  • 00:01:27
    But I promise you don’t need any fancy image recognition tools to find this radio.
  • 00:01:33
    All it takes is noticing a few of its details.
  • 00:01:36
    The radio is branded “Silvertone” and the tuning dial has two sets of numbers, one labeled “standard broadcast” and the other labeled “f-something modulation.”
  • 00:01:48
    The first word is partially obscured by the dial’s pointer but it’s likely frequency.
  • 00:01:53
    You probably know that frequency modulation is what FM is short for, and this is clearly an old radio.
  • 00:02:00
    So it’s some kind of old FM radio made by Silvertone.
  • 00:02:05
    How can we identify which one?
  • 00:02:08
    Well, let’s see if we can pick it out from a Google Image search.
  • 00:02:11
    A lot of people would call this a vintage radio  so we’ll look for “vintage silvertone FM radio”
  • 00:02:18
    There are lots of pictures of cool old radios  here but look at that - it’s the very same one.
  • 00:02:24
    And with a click on it we’ll see the image description which tells us this is a Silvertone Model 18.
  • 00:02:31
    Wonderful! Now that we have its model number, we can get more details.
  • 00:02:35
    I wanted the radio’s schematic diagram, so let’s do a web  search for "silvertone model 18 schematic."
  • 00:02:42
    The first link is to a website called radiomuseum.org - that looks pretty promising!
  • 00:02:48
    Click through and there’s another picture of the same radio, so we have confirmation that this is indeed a Silvertone Model 18.
  • 00:02:57
    And right on this page we find it was produced between 1950 and 1954,
  • 00:03:02
    plus we have a list of its 8 vacuum tubes.
  • 00:03:05
    And of course there is the schematic available for download,
  • 00:03:08
    though it can also be found on other websites.
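
For anyone who wants to replay that lookup from a script, here is a minimal sketch of the two searches. It only builds the query URLs and opens them in a browser; the tbm=isch image-search parameter is a long-standing Google convention, but treat its stability as an assumption.

```python
# A hedged sketch of the two searches from the walkthrough above.
from urllib.parse import urlencode
import webbrowser

def google_search_url(query: str, images: bool = False) -> str:
    """Build a Google search URL for a plain web or image search."""
    params = {"q": query}
    if images:
        params["tbm"] = "isch"  # image-search tab; assumed stable
    return "https://www.google.com/search?" + urlencode(params)

# Step 1: an image search built from details visible in the photo.
webbrowser.open(google_search_url("vintage silvertone FM radio", images=True))
# Step 2: once the model number is known, search for the schematic.
webbrowser.open(google_search_url("silvertone model 18 schematic"))
```
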
  • 00:03:11
    Keeping this information available online is a serious undertaking and I applaud the people who dedicate themselves to maintaining active databases like this
  • 00:03:19
    (as well as those doing the work to back it all up. Especially right now).
  • 00:03:25
    The point of that little exercise was…
  • 00:03:27
    well, exercise!
  • 00:03:29
    What we just did is nothing short of a human superpower.
  • 00:03:34
    From just an image of an old radio you can find out a lot of information using simple search tools and your own observations.
  • 00:03:43
    And if you’re just a little curious, you can keep going.
  • 00:03:46
    Did you notice that the manufacturer was Sears, Roebuck & Co.?
  • 00:03:50
    Yeah, that Sears.
  • 00:03:52
    The one that built the tower I see every day, weather permitting.
  • 00:03:56
    They made a lot of stuff in-house back in the day, and Silvertone was their brand of electronics and also musical instruments.
  • 00:04:03
    This brown mid-century beauty was one of their radio receivers.
  • 00:04:07
    And if you don’t know anything about antique radios and vacuum tubes and why these old things usually need new capacitors, you can also find all that out!
  • 00:04:16
    A search for “replacing capacitors antique radios” brings you to this lovely website,
  • 00:04:21
    a much more useful resource than whatever AI nonsense Google is synthesizing
  • 00:04:26
    because that’s apparently what search engines are supposed to be now for some reason.
  • 00:04:30
    Ope, there’s the old man yelling at clouds.
  • 00:04:33
    Told ya.
  • 00:04:34
    I am sure a lot of you knew how useful the information in that image was and how you could use it to find the radio,
  • 00:04:42
    and to those of you who did this video probably seems incredibly unremarkable so far.
  • 00:04:48
    But I believe quite strongly now that this is a skill which as time marches on people are forgetting they have and thus don’t think to use even when it could help them.
  • 00:05:00
    I want to reiterate the language I just used there:
  • 00:05:03
    I am not saying people don’t know how to do this - anyone can do this, and hopefully you learned how in school.
  • 00:05:10
    It’s pretty basic research.
  • 00:05:12
    What I am saying is that it appears as though an increasing number of people seem to operate in the world without realizing these are things they can do themselves.
  • 00:05:23
    And that really concerns me.
  • 00:05:26
    Now I’ve spent enough time on forums to know about “let me google that for you” -
  • 00:05:30
    a snarky response to people who ask easy-to-answer-with-a-web-search questions.
  • 00:05:35
    I was even on the receiving end of that once.
  • 00:05:37
    But that’s not quite what I want to talk about.
  • 00:05:41
    I want to talk about how we decide what we want to see, watch, and do on the internet.
  • 00:05:47
    Because, well, I’m not sure we realize just how infrequently we’re actually deciding for ourselves these days.
  • 00:05:57
    That’s right, this is video about the problems of recommendation algorithms on social media and the internet at large.
  • 00:06:04
    But I’m not gonna be focusing much on what exactly those algorithms do. I think we all know by now.
  • 00:06:11
    Instead I’m going to be focusing on something which feels new and troubling.
  • 00:06:16
    I’m starting to see evidence that an increasing number of folks actually prefer to let a computer program decide what they will see when they log on,
  • 00:06:27
    even when they know they have alternatives.
  • 00:06:30
    Since this feels like a new phenomenon, I felt it needed a name.
  • 00:06:34
    I’ve chosen to call it “algorithmic complacency.”
  • 00:06:38
    Now, I recognize that I am stepping into a discussion that many many people have had
  • 00:06:43
    and so I don’t want to claim that this is an original thought or even an original term.
  • 00:06:49
    But as I’ve worked to define and articulate it, I’ve come  to believe that it’s a serious problem that needs our immediate attention.
  • 00:06:57
    Succumbing to algorithmic complacency means you’re surrendering your own agency in ways you may not realize.
  • 00:07:06
    And as the internet and real life continue to blend together, that can end very badly.
  • 00:07:13
    Since I believe there’s significant gravity to this topic, I want to present a convincing argument to you.
  • 00:07:20
    And that requires that we first look backward in time.
  • 00:07:24
    Think for a moment about what your experience on the internet is like these days and, if you’re old enough, how it differs from a couple of decades ago.
  • 00:07:33
    The internet used to only exist through a web browser on a desktop computer.
  • 00:07:39
    Maybe a laptop if you’re fancy.
  • 00:07:41
    [dial tone and dialing in background] And your computer had to pretend to be a telephone and shriek at other computers through a phone line
  • 00:07:47
    just to transmit and receive data at blazing slow speeds.
  • 00:07:51
    It was a dark time.
  • 00:07:53
    Back then, Google was just a happy little search  engine which helped you find websites,
  • 00:07:59
    and when you found a cool website which you liked you’d use your web browser to bookmark that website.
  • 00:08:06
    That would make sure you could get back to it later without having to search for it again.
  • 00:08:10
    Like writing a note to yourself.
  • 00:08:12
    In other words, the internet was still very manual and you were in charge of navigating it and curating your own experience with it.
  • 00:08:21
    That also meant the internet didn’t do anything until you asked it to do something.
  • 00:08:28
    You might set your browser’s homepage to your local newspaper’s website so you could get a bit of a news update each time you logged on
  • 00:08:35
    but other than that, information didn’t just come to you.
  • 00:08:40
    Finding news, products, and information on the internet was entirely up to you.
  • 00:08:45
    You had the world’s information at your fingertips, but you needed to use your fingertips and your brain to get to it.
  • 00:08:53
    And it was also up to you to gauge the  trustworthiness and reliability of the information you found.
  • 00:08:59
    If you’re over the age of 30 you probably remember what this was like.
  • 00:09:04
    Over time, though, things have steadily become a lot more automated.
  • 00:09:09
    And also a lot more in-your-face.
  • 00:09:13
    Many people these days experience the internet  primarily through their smartphones and mobile apps
  • 00:09:19
    in a neatly-packaged ecosystem created and curated by giant tech corporations.
  • 00:09:25
    Even though those apps are often just bespoke web browsers that only take you to a very specific website and keep your traffic inside a walled garden
  • 00:09:33
    while also collecting lots of data about where you go and what you do,
  • 00:09:35
    they still represent a radical shift in how we use and experience the internet as a resource.
  • 00:09:43
    Platforms became the new name of the game.
  • 00:09:45
    We’re not surfing the web and looking for cool and useful things anymore, we’re hanging out on platforms.
  • 00:09:53
    Like, well, this one.
  • 00:09:56
    We’re so used to this now that we imagine apps less as a piece of software which enables connection to people and information
  • 00:10:04
    and more of a place where we spend time.
  • 00:10:07
    Internet historians remind us that this is not our first rodeo. We escaped the walled garden that was AOL, after all.
  • 00:10:16
    But I think most would agree  that today’s internet is distinctly different and intense.
  • 00:10:22
    It has become so integral to our lives  that it’s shaping how we view the world and how we operate within it.
  • 00:10:30
    And most troublingly, we are largely no longer in control of what we see.
  • 00:10:37
    Recommendation algorithms end up putting content in front of our eyes using methods almost nobody really understands
  • 00:10:44
    (but probably have something to do with maximizing revenues) and,
  • 00:10:47
    well, I think it’s breaking our brains.
  • 00:10:52
    When you have that finely-tuned, algorithmically-tailored firehose of information just coming at you like that,
  • 00:10:58
    you might feel like you’re having a good time and learning some interesting things,
  • 00:11:03
    but you’re not necessarily directing your own experience, are you?
  • 00:11:08
    Is your train of thought really your own when the next swipe might derail it?
  • 00:11:14
    Now, I am by no means the first person to ask that question.
  • 00:11:18
    And, full disclosure, I make my living finding information, packaging it into videos which I hope are entertaining and insightful,
  • 00:11:26
    and putting them online for people to hopefully stumble across when the YouTube algorithm recommends it to them.
  • 00:11:32
    So I recognize the awkwardness of this particular person talking about this particular thing on this particular platform.
  • 00:11:40
    But here’s what I think might be new, or at least under-discussed:
  • 00:11:44
    I am seeing mounting evidence that an increasing number of people are so used to algorithmically-generated feeds
  • 00:11:52
    that they no longer care to have a self-directed experience that they are in control of.
  • 00:11:59
    The more time I spend interacting with folks online, the more it feels like large swaths of people have forgotten to exercise their own agency.
  • 00:12:10
    That is what I mean by algorithmic complacency.
  • 00:12:14
    More and more people don’t seem to know or care how to view the world without a computer algorithm guiding what they see.
  • 00:12:23
    That’s a pretty bold claim I just made, and that’s gonna require evidence.
  • 00:12:27
    I will say up front that my evidence is not scientific, so set your expectations accordingly.
  • 00:12:34
    I do have data for one particular thing but I want to talk about that second.
  • 00:12:39
    First, I want to talk about my experiences on new forms of social media and how that has informed this argument.
  • 00:12:46
    I have long given up on “traditional?” social media
  • 00:12:52
    but I’ve been indulging in some of the alternatives like Mastodon and, lately, Bluesky.
  • 00:12:57
    I’m not trying to sell you on using them, to be clear,
  • 00:13:00
    but both of those platforms are far more manual than anything the likes of Meta or Twitter might have spun up.
  • 00:13:08
    I think this is great and quite refreshing!
  • 00:13:11
    I follow accounts that I’m interested in, and I never have to worry about whether an algorithm won’t show me their posts.
  • 00:13:18
    I still discover new accounts all the time, but through  stuff the ones I’m following have shared.
  • 00:13:25
    That’s a much more human experience than letting an algorithm decide to put stuff in front of my eyeballs
  • 00:13:30
    because it will keep me engaged and increase user time on platform.
  • 00:13:35
    It’s also a much more social experience.
  • 00:13:39
    Yet to a lot of new users who are migrating to these platforms, the need to curate your own experience is very frustrating.
  • 00:13:47
    Several of them have told me this directly and even said they’d prefer not to have to put any work into social media.
  • 00:13:55
    Now, I must admit that sentiment alone concerns me.
  • 00:13:58
    I’m one of those weirdos who think the most rewarding things in life take effort,
  • 00:14:03
    at least outside November,
  • 00:14:05
    so I would not expect to just walk into a new place and have a good time without doing a little ol’ fashioned exploring.
  • 00:14:13
    However, I can sympathize with it.
  • 00:14:16
    For one, I have to recognize that I’m a popular YouTuber and I can just show up somewhere, say “hey guys”
  • 00:14:23
    and expect a lot of people to start following me very quickly, and that helped re-establish connections I formed in other places.
  • 00:14:31
    That’s a privilege I have and skews my perspective a lot, which is only fair that I acknowledge.
  • 00:14:36
    My experience online is not normal.
  • 00:14:41
    But observing Bluesky grow from an extremely niche platform to a still-niche-but-quickly-growing one
  • 00:14:48
    has presented a very interesting case study which I feel compelled to share.
  • 00:14:53
    Bluesky, for those who don’t know, allows for the creation of custom feeds.
  • 00:14:58
    That’s one of its central features which is really cool!
  • 00:15:01
    But baked-into it as a default when you create an account are two feeds with different purposes:
  • 00:15:06
    Following and Discover.
  • 00:15:09
    The Following feed is a reverse-chronological feed  of the posts from accounts you follow.
  • 00:15:15
    Exactly how Twitter used to work.
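
For the curious, that Following feed is also reachable programmatically through Bluesky's public AT Protocol API. Here is a small sketch, assuming the third-party atproto Python SDK; the handle and app password are placeholders.

```python
# A sketch (not from the video): fetching the reverse-chronological
# "Following" timeline via the AT Protocol, using the third-party
# `atproto` Python SDK (pip install atproto).
from atproto import Client

client = Client()
client.login("alice.example.com", "app-password-here")  # placeholder login

# app.bsky.feed.getTimeline: the same feed the Following tab shows.
timeline = client.get_timeline(algorithm="reverse-chronological", limit=25)
for item in timeline.feed:
    post = item.post
    print(f"@{post.author.handle}: {post.record.text[:80]}")
```
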
  • 00:15:18
    But the Discover feed is algorithmic, like those “for you” pages everyone keeps going on about.
  • 00:15:24
    On Bluesky it’s a pretty basic algorithm,
  • 00:15:27
    but its job is to pick posts from accounts across the platform and essentially promote them so people can find new accounts to follow.
  • 00:15:36
    It’s by no means a bad idea to have that feature!
  • 00:15:39
    But as someone with a pretty large following on Bluesky now,
  • 00:15:44
    I can tell the instant a post of mine ends up on the discover feed because the replies get real weird real fast.
  • 00:15:53
    What was previously a post with a nice discussion going on underneath between myself and various people who know who I am and what I mean when I say words
  • 00:16:02
    becomes littered with strange, out-of-context, often antagonistic replies
  • 00:16:07
    as if the only possible response to seeing a post of any kind online is to loudly perform a challenge against it.
  • 00:16:16
    Now, some of these are bots, a known problem on Bluesky.
  • 00:16:20
    But through forming a habit where I check profiles before replying to see if they show signs of being a bot, it’s clear that a lot of them are not.
  • 00:16:29
    They’re people who just start talking with absolutely no idea who they’re talking to and with no desire to figure out the context of the discussion before they write their reply.
  • 00:16:40
    This may feel like I’m veering off track a bit or just complaining about people, which is fun,
  • 00:16:46
    but this is less about calling out the behavior of individuals and more about recognizing the incentives which promote that behavior.
  • 00:16:55
    Algorithmic feeds on social media are unfortunately quite good at fostering something known as context collapse.
  • 00:17:03
    To understand this, imagine you’re dining in a restaurant and you’re close enough to a table of people to hear snippets of their conversation.
  • 00:17:12
    You don’t know who any of the people at that table are,
  • 00:17:15
    but if you manage to overhear them talk about something you’re really interested in you might feel tempted to join their conversation.
  • 00:17:23
    But in the context of a restaurant setting, that’s considered very rude so it rarely ever happens.
  • 00:17:30
    On social media, though, the same kinds of quasi-private conversations between parties who know each other are happening all the time,
  • 00:17:39
    but since the platform is just one big space and it might decide to put that conversation in front of random people,
  • 00:17:47
    that social boundary of etiquette which is normally respected is just not there.
  • 00:17:53
    And lots of conflicts happen as a result.
  • 00:17:56
    A really common one you might accidentally step into on social media
  • 00:18:00
    happens when you stumble across a conversation among friends making sarcastic jokes with each other
  • 00:18:06
    but since you don’t know who those people are you don’t have the context you need to recognize they’re joking.
  • 00:18:13
    And so, if you reply with a serious critique, well that’s a social misfire which some will react poorly to.
  • 00:18:20
    And that’s a pretty mild form of context collapse.
  • 00:18:23
    It can be much, much worse when people want to discuss things like politics.
  • 00:18:30
    And unless we realize recommendation algorithms  are what’s fostering these reactionary conflicts,
  • 00:18:36
    they’re going to continue so long as we  use platforms in the ways that we do.
  • 00:18:41
    It's for all these reasons that I believe algorithmic complacency is creating a crisis of both curiosity and human connection.
  • 00:18:50
    I would even go so far as to say it’s  fostering lots of other disturbing things, too.
  • 00:18:55
    Ever notice how a lot of folks these days need to have a simple
  • 00:18:59
    “good or bad,”
  • 00:19:00
    “black or white,”
  • 00:19:01
    “best or worst” understanding of a topic or issue?
  • 00:19:05
    It seems to me like algorithms which promote content through a simple lens of positive or negative engagement
  • 00:19:11
    would reinforce those binaries and contribute to polarization.
  • 00:19:16
    And as people learn about new products through the slot machine of social media feeds,
  • 00:19:21
    they can develop a learned helplessness where they will wait to be sold on a solution for their problems
  • 00:19:28
    rather than be introspective and explore what their problems actually are and how they might be able to come up with their own solutions which don’t cost any money.
  • 00:19:38
    Introspection will reveal a lot of the problems you think you have are being put in your head by influencers -
  • 00:19:45
    you weren’t unhappy until they told you you should be.
  • 00:19:50
    And, well, I can think of lots of other stuff which has disturbed me for quite a while
  • 00:19:55
    but is now past the point I can ignore as a quirky consequence of connecting large numbers of humans together.
  • 00:20:02
    Social media algorithms don’t nurture human connection - they exploit it.
  • 00:20:08
    And we’re so used to this reality now that I’m not sure many of us care to get off this train.
  • 00:20:14
    But I think we should.
  • 00:20:16
    So now, let’s talk about YouTube!
  • 00:20:19
    I’m sure a good number of you have thought “this is pretty rich comin’ from a guy who makes his living on a platform which does all these things.”
  • 00:20:26
    Well, there’s a funny thing about YouTube.
  • 00:20:30
    Its recommendation algorithm is entirely optional.
  • 00:20:34
    You know that, right?
  • 00:20:36
    You… you do know that, right?
  • 00:20:40
    There’s this feature on this website that has been here pretty much  since it became a thing and it’s called the subscriptions feed.
  • 00:20:49
    It’s not hiding, it’s at the bottom of the mobile app and on the sidebar of the desktop site.
  • 00:20:54
    In fact it's got its own URL and if you’re feeling old-school you can bookmark it!
  • 00:20:59
    This is a completely manually-curated feed which has nothing in it but the videos
  • 00:21:04
    (and shorts, for better and worse) from the creators you have chosen to subscribe to.
  • 00:21:11
    That means it’s entirely in your control.
  • 00:21:14
    This feed has been in plain sight the whole time and, here’s why I am using the term algorithmic complacency,
  • 00:21:22
    nobody cares to use it.
  • 00:21:24
    And that I can back up with hard data.
  • 00:21:28
    YouTube gives us a shocking amount of information regarding audience metrics and in 2024,
  • 00:21:35
    less than three percent of this channel’s views came from the subscriptions feed.
  • 00:21:42
    Almost nobody is using this feature
  • 00:21:44
    and yet it’s the most reliable way to keep track of the things you have explicitly decided you want to watch through hitting the subscribe button.
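
Incidentally, the feed itself lives at https://www.youtube.com/feed/subscriptions, and the subscription list behind it is yours to query. A hedged sketch using the YouTube Data API v3, assuming you have set up an OAuth client in Google Cloud Console; "client_secrets.json" is a placeholder filename:

```python
# A sketch, assuming google-api-python-client and google-auth-oauthlib
# are installed. It prints every channel you're subscribed to - the raw
# material of the subscriptions feed.
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/youtube.readonly"]

flow = InstalledAppFlow.from_client_secrets_file("client_secrets.json", SCOPES)
youtube = build("youtube", "v3", credentials=flow.run_local_server(port=0))

request = youtube.subscriptions().list(part="snippet", mine=True, maxResults=50)
while request is not None:
    response = request.execute()
    for item in response["items"]:
        print(item["snippet"]["title"])  # one subscribed channel per line
    request = youtube.subscriptions().list_next(request, response)
```
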
  • 00:21:54
    The fact that it’s easy to get to, has existed in plain sight almost since this platform was born, yet almost nobody is using it anymore is…
  • 00:22:03
    puzzling.
  • 00:22:04
    I want to stress my use of the word puzzling.
  • 00:22:07
    It’s not my intent to be judgmental here and I’m sorry if it has sounded like that.
  • 00:22:12
    Different people use websites differently and the main “home” feed on YouTube, which functions like a “for-you” page,
  • 00:22:19
    usually does a good job of surfacing new videos from creators you like.
  • 00:22:24
    Subscribing to a channel is a signal to the recommendation algorithm  to boost that channel’s videos in your home feed.
  • 00:22:31
    And it’s not like that algorithm has no value - there are some creators I’m not even subscribed to
  • 00:22:38
    yet I see most of their new videos since the algorithm has figured out I keep watching them so I must want to see the new ones, and it puts them in my home feed.
  • 00:22:47
    Through that feed I’ve also stumbled across countless new channels and have been reminded of videos I watched years ago and will happily watch again.
  • 00:22:57
    Plus of course, I have found so much cool stuff through the recommended videos that appear alongside whatever I happen to be watching.
  • 00:23:05
    I’m glad YouTube does that.
  • 00:23:07
    Wait, nuance?
  • 00:23:10
    On the internet?
  • 00:23:11
    That’s illegal!
  • 00:23:13
    In the interest of time I’ve removed a big section on the flaws with the subs feed and how I think YouTube should address them
  • 00:23:20
    (mainly - please let us remove shorts if we aren’t interested in them
  • 00:23:23
    and please let us organize the subs feed a little bit so creators who upload a lot don’t end up cluttering it.
  • 00:23:29
    Maybe condense their daily activity into a single tile which we can then click on and expand. Just a thought).
  • 00:23:38
    But the reason I wanted to talk about it
  • 00:23:40
    is that it feels indicative of a growing disinclination to grab the reins of the internet and be the person steering your own experience.
  • 00:23:51
    That’s really what bothers me.
  • 00:23:52
    There has to be a balance between cool stuff you  stumble upon and stuff that you’re actually interested in and matters to you.
  • 00:24:01
    Algorithmic complacency, if not noticed and acted upon,
  • 00:24:05
    means you’re allowing other people who are not you  to decide what matters to you.
  • 00:24:12
    I should not have to spell out why that’s dangerous so I won’t.
  • 00:24:16
    But I will spell out that it’s very easy for people who wish to weaponize this reality to craft a narrative which is not overtly obvious and so might slip past your defenses.
  • 00:24:28
    If you can reduce the presence of algorithmically-curated feeds in your life, you’ll be less susceptible to that.
  • 00:24:34
    And if you build up your own network of people and resources you trust,  you’ll know when you’re being bullshitted.
  • 00:24:41
    And in case you haven’t noticed, people are trying to bullshit us all the time now!
  • 00:24:47
    And algorithmic feeds make this worse.
  • 00:24:49
    They don’t just exist on YouTube and social media, they exist in popular news aggregator apps and that’s meant a lot of really stupid articles keep floating around
  • 00:24:59
    because all that matters to many institutions which make news these days is clicks and ad money.
  • 00:25:06
    Look at this stupid thing which ended up in the Google News app.
  • 00:25:09
    "The end of Walmart and Target in the US - A new retailer appears and is much cheaper."
  • 00:25:16
    I know what an Aldi looks like.
  • 00:25:18
    That’s a picture of the inside of an Aldi.
  • 00:25:21
    And, uh, if you’ve somehow not heard of Aldi or stepped inside one that doesn’t mean Aldi is a new retailer which just appeared.
  • 00:25:30
    This article is a waste of time for the vast majority of people who might be tricked into clicking on it.
  • 00:25:36
    You may have noticed that the publisher of that article was El Diario 24 which, as far as I can tell, might actually be a fake news source.
  • 00:25:46
    How did it get in Google News?
  • 00:25:48
    That’s a great question for Google.
  • 00:25:50
    But even mainstream publications have gone wildly off the rails as they chase metrics rather than the truth.
  • 00:25:58
    The New York Times of 2025 is publishing opinion pieces
  • 00:26:03
    where bloviating morons go through the fun little exercise of what the political ramifications of turning Canada into the 51st state would be for Democrats.
  • 00:26:14
    Folks, since they can’t say this in plain language for some reason, I guess it’s up to me.
  • 00:26:19
    Canada is a sovereign nation.
  • 00:26:23
    It’s a foreign country which has the right to self-determination.
  • 00:26:28
    The United States cannot simply turn Canada into a US state -
  • 00:26:33
    Canadians have clearly indicated they do not want that,
  • 00:26:36
    which means for us to force the issue would be to declare war with Canada and invade.
  • 00:26:43
    No sane person should want that
  • 00:26:46
    and it’s a shameful embarrassment that the New York Times would even entertain this as a possibility and legitimize the awful, inevitably bloody idea.
  • 00:26:56
    How on Earth did we get here?
  • 00:26:59
    I can tell you how I think we got here:
  • 00:27:01
    big news publications have become just as dependent on algorithms to find their readers as their readers are to find the news.
  • 00:27:09
    Which means they’re more concerned with being enticing than being honest.
  • 00:27:13
    Which is a damn shame.
  • 00:27:16
    OK, reel it in.
  • 00:27:18
    [breathes]
  • 00:27:19
    Reel it in.
  • 00:27:21
    I’m about to wrap up this video but before I do, I want to explore one more related but different angle.
  • 00:27:28
    I struggled with whether I wanted to call this phenomenon algorithmic complacency or automation complacency.
  • 00:27:35
    The reason I struggled is that there isn’t a clear distinction between those two things.
  • 00:27:40
    Algorithms are a kind of automation, so you could say everything I’ve been talking about is the result of automatically-curated feeds
  • 00:27:49
    and the video wouldn’t change.
  • 00:27:52
    But automation in itself is not necessarily bad.
  • 00:27:55
    Lots of menial labor tasks have been replaced by automation and this has largely been a great thing.
  • 00:28:02
    The actually-important word in this discussion is curated.
  • 00:28:08
    It’s one thing to automate, say, an elevator.
  • 00:28:10
    Or an inventory system.
  • 00:28:13
    Or a telephone switchboard.
  • 00:28:15
    But it’s a very different thing to automate what information people see.
  • 00:28:21
    There are situations where that sort of automation is necessary.
  • 00:28:26
    There’s an ever-increasing amount  of information being stored online,
  • 00:28:30
    so when you’re looking for something specific amongst that sea of information, keywords by themselves aren't enough.
  • 00:28:38
    The demonstration we did at the beginning
  • 00:28:39
    relied on Google’s search algorithm determining context from the keywords we gave it so it could sort everything it found by relevance to that inferred context.
  • 00:28:50
    And it’s usually really good at that, as we saw.
  • 00:28:54
    But it’s not always.
  • 00:28:56
    I’m sure by now you’ve had the really frustrating experience  of Google latching onto the wrong context and producing a lot of irrelevant results,
  • 00:29:05
    and it can be extremely tedious to refocus the algorithm on the correct context.
  • 00:29:11
    This happens a lot when search queries include a word with many homonyms.
  • 00:29:16
    But we’re rapidly moving away from a paradigm in  which search queries present a list of sources for us to look at, cite and verify,
  • 00:29:24
    and are now being pressured into a new reality
  • 00:29:27
    where large language models synthesize responses to queries which are statistically likely to produce a useful output
  • 00:29:35
    but which do not provide us with sources of verifiable information -
  • 00:29:39
    or at least obfuscate them to the point that many people are not going to check them.
  • 00:29:45
    I don’t think enough of us have put much thought into what that means.
  • 00:29:50
    It means we’re careening towards a future where people just trust computers to do their thinking for them.
  • 00:29:58
    And the thing is, we already know that’s often a bad idea.
  • 00:30:03
    Take, for example, how we navigate the actual physical world.
  • 00:30:08
    If you drive a car, I am certain that by now you’ve used some kind of GPS-navigation app to figure out how to get places.
  • 00:30:16
    I do, too, don’t think I’m about to say "oh we should go back to paper maps."
  • 00:30:21
    Gross.
  • 00:30:22
    But when you use a mapping app to navigate somewhere, how often are you prioritizing the fastest route?
  • 00:30:30
    You probably have that set up as the default, don’t you?
  • 00:30:33
    I do.
  • 00:30:35
    But do you ever question whether that is actually the  most logical way to get somewhere?
  • 00:30:41
    It often isn’t because arrival time is only one of many,  many variables which might be important to you.
  • 00:30:49
    If I ask Google Maps to take me from my home to my office,
  • 00:30:53
    it is going to suggest a route which first requires going the wrong way to then hop on a tollway on which I have to pay a roughly $1 toll
  • 00:31:02
    and have to overshoot my destination to hop on a second expressway and backtrack.
  • 00:31:08
    That suggested route requires an extra 4 miles of travel and  an extra $1 each way all to save me exactly
  • 00:31:17
    one minute over taking much more direct state routes and surface streets.
  • 00:31:23
    If I mindlessly did what Google suggested,
  • 00:31:26
    over a year I’d put an extra 2,000 miles on my car and spend an extra $500 on tolls just to save not even one full work day of time.
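
Those yearly figures check out with quick arithmetic, assuming roughly 250 commuting round trips per year (the video doesn't state the exact count):

```python
# Back-of-the-envelope check of the commute numbers above.
round_trips = 250                        # assumption: ~250 commuting days/year
extra_miles = 4 * 2 * round_trips        # 4 extra miles each way -> 2,000 miles
extra_tolls = 1 * 2 * round_trips        # $1 toll each way       -> $500
hours_saved = 1 * 2 * round_trips / 60   # 1 minute each way      -> ~8.3 hours

print(extra_miles, extra_tolls, round(hours_saved, 1))
# 2000 500 8.3  -- roughly one work day saved, for 2,000 miles and $500
```
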
  • 00:31:38
    Google is suggesting a terrible route just because it’s one minute faster.
  • 00:31:44
    The only reason I know it’s a terrible route is because I live here and I know what it’s suggesting is asinine,
  • 00:31:50
    but when I don’t know where I’m going I trust it to make the best decision.
  • 00:31:55
    But we don’t necessarily agree on what is best.
  • 00:31:59
    That’s the problem.
  • 00:32:01
    Side-note, the other bad thing about taking the tollway is that when I do,
  • 00:32:06
    I don’t get to see my neighbors and what they’re up to.
  • 00:32:10
    I actually really enjoy driving through town,
  • 00:32:13
    seeing new businesses as they pop up,
  • 00:32:16
    homes getting built and remodeled,
  • 00:32:18
    admiring Christmas lights over the holidays, and
  • 00:32:20
    just seeing people to remind me that other people exist and they live real lives and they are connected to me because they’re my neighbors.
  • 00:32:31
    I’m not the most social person
  • 00:32:33
    but even I don’t like the isolated feeling that a commute on a tollway where there’s nothing to look at but other cars and sound-isolation walls gives me.
  • 00:32:42
    I feel way more connected to my community when I can actually, ya know, see it and the people who define it.
  • 00:32:51
    That’s a preference of mine which I grant but it’s also a priority of mine.
  • 00:32:57
    I am being mindful now to put human connections above technological connections.
  • 00:33:04
    See, what I do here, is I make connections between technologies -
  • 00:33:08
    that way you can learn how they fit together and how best to use them, and maybe you can use one concept in conjunction with another concept to make a third concept!
  • 00:33:17
    That’s what I’m doing here!
  • 00:33:18
    Trying to empower you to make your life better.
  • 00:33:22
    Technologies which make human connection harder or even just more random are, uh,
  • 00:33:29
    bad!
  • 00:33:30
    That I do think is black-and-white true.
  • 00:33:33
    Any piece of technology which gets in between humans who wish to help each other is frustrating at best and exploitative at worst.
  • 00:33:42
    We ought to know by now the real reason those systems get put in place  is so that we need fewer humans in helpful roles.
  • 00:33:50
    And I think there is absolutely no question that this is going to get worse
  • 00:33:56
    until we all start looking inward and begin questioning how we operate in this world and why.
  • 00:34:04
    Silicon Valley seems hellbent on creating machines which can do our thinking for us.
  • 00:34:11
    Why should any of us want that?
  • 00:34:14
    I certainly don’t want that - I don’t learn anything unless I do the mental work to create a complete framework of understanding in my mind.
  • 00:34:23
    I don’t talk about things I don’t understand because that’s the fastest way you can make a fool of yourself.
  • 00:34:30
    And it can be dangerous when you have a platform like I do.
  • 00:34:35
    I will never trust a computer program to be able to understand anything in the way a human can,
  • 00:34:41
    nor will I trust it to find information for me.
  • 00:34:45
    If I have to vet everything it’s finding, then I end up doing the same work I would have done myself.
  • 00:34:51
    And if I don’t vet what it’s finding, then what I’m really doing is saying I don’t want to be responsible for what I do.
  • 00:34:59
    It frightens me that even though we’ve all seen the consequences of what a social media recommendation algorithm can do to shape our viewpoints
  • 00:35:07
    that we are somehow falling for the temptation of machines which can offload our thought processes.
  • 00:35:14
    The thing which makes us human.
  • 00:35:17
    If that’s not the purest form of lazy anti-intellectualism, I don’t know what is.
  • 00:35:23
    On that cheery note, let me make sure you know  that I share the same frustration with the AI hype cycle
  • 00:35:30
    as the AI researchers who are actually doing work to create tools to solve real problems, like early detection of cancers from images or blood screens.
  • 00:35:40
    That is valuable research which will undoubtedly save lives.
  • 00:35:44
    It’s beyond frustrating that the only kind of AI that is in the public consciousness at the moment is the one that does some very impressive tricks
  • 00:35:52
    but any honest person will tell you needs intense  supervision because it will hallucinate and produce bad outputs.
  • 00:36:00
    It seems blindingly obvious to me
  • 00:36:02
    that the stakes are way too high to hand over our decision making to a computer which cannot be held responsible for the decisions it makes.
  • 00:36:11
    And that’s, disturbingly, what I think is the real reason for wanting to push this future.
  • 00:36:17
    But you’ve put up with me long enough.
  • 00:36:19
    Thank you for watching and I hope I don’t sound too far off my rocker.
  • 00:36:25
    I made this channel not just to share cool stuff I find but to show all the amazing ways we solved our problems in the past.
  • 00:36:34
    There are invaluable lessons there which we forget at our peril.
  • 00:36:39
    I don’t talk much about computer technology because it doesn’t really interest me.
  • 00:36:44
    And it’s becoming less and less interesting as time goes on.
  • 00:36:48
    Outside of hardcore enthusiasts or people doing research on vast amounts of data,
  • 00:36:53
    let’s be honest: computing is a solved problem.
  • 00:36:57
    I can use my desktop computer from 2017 to make  these videos for you and it would not get in my way at all,
  • 00:37:05
    and the $600 nearly base-spec M2 Mac Mini I bought to dingle around on can do it just as well.
  • 00:37:12
    Video encoding is getting more efficient and we need less bandwidth and storage to send videos like this across the world
  • 00:37:19
    and that’s despite internet connections getting faster and faster.
  • 00:37:24
    So I think it’s pretty clear that that reality is why Silicon Valley is doing the stuff it’s doing these days.
  • 00:37:32
    It has to justify itself as a center of innovation in a world where it’s running out of runway.
  • 00:37:38
    And the best answer it’s got is
  • 00:37:40
    “euhhh computers which pretend to think.”
  • 00:37:44
    Forgive me, but I want to think for myself.
  • 00:37:48
    And I think you should, too.
Tags
  • algorithmic complacency
  • internet
  • research
  • recommendation systems
  • independent thinking
  • radio
  • Silvertone Model 18
  • vintage radio
  • information retrieval
  • social media