Algorithms are breaking how we think
Summary
TLDR: This video draws attention to how people are losing their own research skills in today's internet era. The speaker opens with an old radio, demonstrating how much information anyone can find using simple search tools and careful observation. He then explains the concept of "algorithmic complacency," arguing that people are surrendering their own agency and letting algorithms constrain their choices. He describes it as a phenomenon that isn't scientifically measured, but is growing.
Takeaways
- 📻 Identifying the radio's model is simple.
- 🔎 Pay attention to your search terms when looking for information online.
- 💻 Try not to let algorithms steer you.
- 🧑‍🤝‍🧑 Human connection should not be lost on the internet.
- 🔄 Make your own choices; don't defer to other people's decisions.
- ⚖️ Don't entrust your feelings and thoughts to algorithms.
- 📅 The internet of the past was more hands-on and personal.
- 🌐 Preserving context on social media matters.
- 🔗 Use the subscriptions feed; don't forget it exists!
- 🚀 Technology should be used to strengthen human connection.
Timeline
- 00:00:00 - 00:05:00
The speaker shows an old radio and challenges viewers to identify its model, production years, vacuum tubes, and schematic from an image alone. Using only the visible details ("Silvertone," a dial marked "frequency modulation"), a Google image search, and radiomuseum.org, he identifies it as a Silvertone Model 18, produced between 1950 and 1954, with 8 vacuum tubes and a downloadable schematic. He frames this kind of basic research as a human superpower.
- 00:05:00 - 00:10:00
He clarifies that he is not saying people can't do basic research, only that a growing number don't seem to realize they can, which concerns him. He introduces the real subject: how rarely we decide for ourselves what we see online, and he names the phenomenon "algorithmic complacency." He then looks back at the early, manual internet of dial-up, search engines, and bookmarks, when users curated their own experience, and contrasts it with today's ecosystem of apps and platforms.
- 00:10:00 - 00:15:00
Today's platform-centric internet shapes how we view the world, and recommendation algorithms now control what we see using methods almost nobody understands. He acknowledges the awkwardness of a YouTuber making this argument, then presents his admittedly unscientific evidence: on more manual platforms like Mastodon and Bluesky, curating his own feed feels more human and social, yet many new users find that need to curate frustrating. He notes that Bluesky allows custom feeds.
- 00:15:00 - 00:20:00
He describes Bluesky's two default feeds: Following (a reverse-chronological feed of accounts you follow) and the algorithmic Discover feed. Whenever one of his posts lands on the Discover feed, the replies turn strange, out-of-context, and antagonistic. He explains this through "context collapse," using a restaurant analogy: algorithmic feeds drop quasi-private conversations in front of strangers, erasing the social boundaries that normally apply. This fosters reactionary conflict, binary good-or-bad thinking, polarization, and a learned helplessness where people wait to be sold solutions to problems influencers put in their heads.
- 00:20:00 - 00:25:00
Social media algorithms, he argues, exploit human connection rather than nurture it. Turning to YouTube, he points out that its recommendation algorithm is entirely optional: the subscriptions feed is a fully manual, user-controlled feed that has existed almost since the platform began, yet in 2024 less than three percent of his channel's views came from it. He grants that the algorithmic home feed has real value for discovery, but sees the neglected subscriptions feed as indicative of a growing disinclination to steer one's own experience.
- 00:25:00 - 00:30:00
Algorithmic complacency means letting people who are not you decide what matters to you, which makes it easier for bad actors to slip narratives past your defenses. News aggregator apps make this worse: he shows a junk Google News article claiming Aldi is a "new retailer," and criticizes mainstream outlets like the New York Times for chasing metrics over truth. He then distinguishes algorithmic complacency from automation complacency: automation itself is not bad, and the important word is "curated."
- 00:30:00 - 00:37:51
Automating an elevator or an inventory system is one thing; automating what information people see is very different, even though some curation, like search-engine relevance ranking, is genuinely necessary. The video closes with the speaker urging viewers to protect their own way of thinking, make their own choices, and use the internet deliberately rather than deferring to algorithmic feeds.
Video Q&A
What is the main topic of the video?
The main topic of the video is how algorithms and recommendation systems reduce people's ability to make their own independent decisions.
What is 'algorithmic complacency'?
'Algorithmic complacency' is people relying on computer systems to direct their internet experience, surrendering their own independent choices.
Why is the video characterized as a 'crotchety' video?
The speaker says the video will be in the style of an old man yelling at clouds, and notes that it is largely an opinion piece.
How does the speaker describe the internet of the past?
The speaker emphasizes that the internet used to require manual searching and navigation: people had to use their own skills to find the information they were looking for.
How does the video end?
The video ends with advice to viewers to protect their own way of thinking and to use the internet deliberately.
- 00:00:00I want to show you something.
- 00:00:01It’s this radio.
- 00:00:03It’s pretty cool looking, right? I use it pretty often.
- 00:00:07Now, here’s a fun little challenge for you:
- 00:00:10Tell me the model number, what year this radio was made, what vacuum tubes are inside of it,
- 00:00:15and find its schematic diagram for me so I can order new capacitors for it.
- 00:00:20Go on.
- 00:00:22If that seems like a difficult task - it’s very much not.
- 00:00:26So long as you can see this image, you have everything you need on your screen right now to get all that information and much more in less than a minute.
- 00:00:36That is, if you know where and how to look.
- 00:00:40Now, I’m gonna show you where and how to look in just a moment
- 00:00:44but first I should note that this is no doubt going to be the most crotchety, old-man-yells-at-cloud video I’ve ever released and I won’t hide from that.
- 00:00:54Another thing I won’t hide from is the fact that this is largely an opinion piece.
- 00:00:59If that doesn’t sound like your jam, please feel free to watch any of the other videos on this here website.
- 00:01:05But if you’ll indulge me - that’s actually a large part of what this video is about.
- 00:01:12It’s about the modern internet and how I think it’s caused a lot of folks to stop looking for things.
- 00:01:20I’ll explain what I mean later but first let’s figure out what that radio is.
- 00:01:25So - all we’ve got is a picture.
- 00:01:27But I promise you don’t need any fancy image recognition tools to find this radio.
- 00:01:33All it takes is noticing a few of its details.
- 00:01:36The radio is branded “Silvertone” and the tuning dial has two sets of numbers, one labeled “standard broadcast” and the other labeled “f-something modulation.”
- 00:01:48The first word is partially obscured by the dial’s pointer but it’s likely frequency.
- 00:01:53You probably know that frequency modulation is what FM is short for, and this is clearly an old radio.
- 00:02:00So it’s some kind of old FM radio made by Silvertone.
- 00:02:05How can we identify which one?
- 00:02:08Well, let’s see if we can pick it out from a Google Image search.
- 00:02:11A lot of people would call this a vintage radio so we’ll look for “vintage silvertone FM radio”
- 00:02:18There are lots of pictures of cool old radios here but look at that - it’s the very same one.
- 00:02:24And with a click on it we’ll see the image description which tells us this is a Silvertone Model 18.
- 00:02:31Wonderful! Now that we have its model number, we can get more details.
- 00:02:35I wanted the radio’s schematic diagram, so let’s do a web search for "silvertone model 18 schematic."
- 00:02:42The first link is to a website called radiomuseum.org - that looks pretty promising!
- 00:02:48Click through and there’s another picture of the same radio, so we have confirmation that this is indeed a Silvertone Model 18.
- 00:02:57And right on this page we find it was produced between 1950 and 1954,
- 00:03:02plus we have a list of its 8 vacuum tubes.
- 00:03:05And of course there is the schematic available for download,
- 00:03:08though it can also be found on other websites.
- 00:03:11Keeping this information available online is a serious undertaking and I applaud the people who dedicate themselves to maintaining active databases like this
- 00:03:19(as well as those doing the work to back it all up. Especially right now).
- 00:03:25The point of that little exercise was…
- 00:03:27well, exercise!
- 00:03:29What we just did is nothing short of a human superpower.
- 00:03:34From just an image of an old radio you can find out a lot of information using simple search tools and your own observations.
- 00:03:43And if you’re just a little curious, you can keep going.
- 00:03:46Did you notice that the manufacturer was Sears, Roebuck & Co.?
- 00:03:50Yeah, that Sears.
- 00:03:52The one that built the tower I see everyday, weather permitting.
- 00:03:56They made a lot of stuff in-house back in the day, and Silvertone was their brand of electronics and also musical instruments.
- 00:04:03This brown mid-century beauty was one of their radio receivers.
- 00:04:07And if you don’t know anything about antique radios and vacuum tubes and why these old things usually need new capacitors, you can also find all that out!
- 00:04:16A search for “replacing capacitors antique radios” brings you to this lovely website,
- 00:04:21a much more useful resource than whatever AI nonsense Google is synthesizing
- 00:04:26because that’s apparently what search engines are supposed to be now for some reason.
- 00:04:30Ope, there’s the old man yelling at clouds.
- 00:04:33Told ya.
- 00:04:34I am sure a lot of you knew how useful the information in that image was and how you could use it to find the radio,
- 00:04:42and to those of you who did this video probably seems incredibly unremarkable so far.
- 00:04:48But I believe quite strongly now that this is a skill which as time marches on people are forgetting they have and thus don’t think to use even when it could help them.
- 00:05:00I want to reiterate the language I just used there:
- 00:05:03I am not saying people don’t know how to do this - anyone can do this, and hopefully you learned how in school.
- 00:05:10It’s pretty basic research.
- 00:05:12What I am saying is that it appears as though an increasing number of people seem to operate in the world without realizing these are things they can do themselves.
- 00:05:23And that really concerns me.
- 00:05:26Now I’ve spent enough time on forums to know about “let me google that for you” -
- 00:05:30a snarky response to people who ask easy-to-answer-with-a-web search questions.
- 00:05:35I was even on the receiving end of that once.
- 00:05:37But that’s not quite what I want to talk about.
- 00:05:41I want to talk about how we decide what we want to see, watch, and do on the internet.
- 00:05:47Because, well, I’m not sure we realize just how infrequently we’re actually deciding for ourselves these days.
- 00:05:57That’s right, this is video about the problems of recommendation algorithms on social media and the internet at large.
- 00:06:04But I’m not gonna be focusing much on what exactly those algorithms do. I think we all know by now.
- 00:06:11Instead I’m going to be focusing on something which feels new and troubling.
- 00:06:16I’m starting to see evidence that an increasing number of folks actually prefer to let a computer program decide what they will see when they log on,
- 00:06:27even when they know they have alternatives.
- 00:06:30Since this feels like a new phenomenon, I felt it needed a name.
- 00:06:34I’ve chosen to call it “algorithmic complacency.”
- 00:06:38Now, I recognize that I am stepping into a discussion that many many people have had
- 00:06:43and so I don’t want to claim that this is an original thought or even an original term.
- 00:06:49But as I’ve worked to define and articulate it, I’ve come to believe that it’s a serious problem that needs our immediate attention.
- 00:06:57Succumbing to algorithmic complacency means you’re surrendering your own agency in ways you may not realize.
- 00:07:06And as the internet and real life continue to blend together, that can end very badly.
- 00:07:13Since I believe there’s significant gravity to this topic, I want to present a convincing argument to you.
- 00:07:20And that requires that we first look backward in time.
- 00:07:24Think for a moment about what your experience on the internet is like these days and, if you’re old enough, how it differs from a couple of decades ago.
- 00:07:33The internet used to only exist through a web browser on a desktop computer.
- 00:07:39Maybe a laptop if you’re fancy.
- 00:07:41[dial tone and dialing in background] And your computer had to pretend to be a telephone and shriek at other computers through a phone line
- 00:07:47just to transmit and receive data at blazing slow speeds.
- 00:07:51It was a dark time.
- 00:07:53Back then, Google was just a happy little search engine which helped you find websites,
- 00:07:59and when you found a cool website which you liked you’d use your web browser to bookmark that website.
- 00:08:06That would make sure you could get back to it later without having to search for it again.
- 00:08:10Like writing a note to yourself.
- 00:08:12In other words, the internet was still very manual and you were in charge of navigating it and curating your own experience with it.
- 00:08:21That also meant the internet didn’t do anything until you asked it to do something.
- 00:08:28You might set your browser’s homepage to your local newspaper’s website so you could get a bit of a news update each time you logged on
- 00:08:35but other than that, information didn’t just come to you.
- 00:08:40Finding news, products, and information on the internet was entirely up to you.
- 00:08:45You had the world’s information at your fingertips, but you needed to use your fingertips and your brain to get to it.
- 00:08:53And it was also up to you to gauge the trustworthiness and reliability of the information you found.
- 00:08:59If you’re over the age of 30 you probably remember what this was like.
- 00:09:04Over time, though, things have steadily become a lot more automated.
- 00:09:09And also a lot more in-your-face.
- 00:09:13Many people these days experience the internet primarily through their smartphones and mobile apps
- 00:09:19in a neatly-packaged ecosystem created and curated by giant tech corporations.
- 00:09:25Even though those apps are often just bespoke web browsers that only take you to a very specific website and keep your traffic inside a walled garden
- 00:09:33while also collecting lots of data about where you go and what you do,
- 00:09:35they still represent a radical shift in how we use and experience the internet as a resource.
- 00:09:43Platforms became the new name of the game.
- 00:09:45We’re not surfing the web and looking for cool and useful things anymore, we’re hanging out on platforms.
- 00:09:53Like, well, this one.
- 00:09:56We’re so used to this now that we imagine apps less as a piece of software which enables connection to people and information
- 00:10:04and more of a place where we spend time.
- 00:10:07Internet historians remind us that this is not our first rodeo. We escaped the walled garden that was AOL, after all.
- 00:10:16But I think most would agree that today’s internet is distinctly different and intense.
- 00:10:22It has become so integral to our lives that it’s shaping how we view the world and how we operate within it.
- 00:10:30And most troublingly, we are largely no longer in control of what we see.
- 00:10:37Recommendation algorithms end up putting content in front of our eyes using methods almost nobody really understands
- 00:10:44(but probably have something to do with maximizing revenues) and,
- 00:10:47well, I think it’s breaking our brains.
- 00:10:52When you have that finely-tuned, algorithmically-tailored firehose of information just coming at you like that,
- 00:10:58you might feel like you’re having a good time and learning some interesting things,
- 00:11:03but you’re not necessarily directing your own experience, are you?
- 00:11:08Is your train of thought really your own when the next swipe might derail it?
- 00:11:14Now, I am by no means the first person to ask that question.
- 00:11:18And, full disclosure, I make my living finding information, packaging it into videos which I hope are entertaining and insightful,
- 00:11:26and putting them online for people to hopefully stumble across when the YouTube algorithm recommends it to them.
- 00:11:32So I recognize the awkwardness of this particular person talking about this particular thing on this particular platform.
- 00:11:40But here’s what I think might be new, or at least under-discussed:
- 00:11:44I am seeing mounting evidence that an increasing number of people are so used to algorithmically-generated feeds
- 00:11:52that they no longer care to have a self-directed experience that they are in control of.
- 00:11:59The more time I spend interacting with folks online, the more it feels like large swaths of people have forgotten to exercise their own agency.
- 00:12:10That is what I mean by algorithmic complacency.
- 00:12:14More and more people don’t seem to know or care how to view the world without a computer algorithm guiding what they see.
- 00:12:23That’s a pretty bold claim I just made, and that’s gonna require evidence.
- 00:12:27I will say up front that my evidence is not scientific, so set your expectations accordingly.
- 00:12:34I do have data for one particular thing but I want to talk about that second.
- 00:12:39First, I want to talk about my experiences on new forms of social media and how that has informed this argument.
- 00:12:46I have long given up on “traditional” social media
- 00:12:52but I’ve been indulging in some of the alternatives like Mastodon and, lately, Bluesky.
- 00:12:57I’m not trying to sell you on using them, to be clear,
- 00:13:00but both of those platforms are far more manual than anything the likes of Meta or Twitter might have spun up.
- 00:13:08I think this is great and quite refreshing!
- 00:13:11I follow accounts that I’m interested in, and I never have to worry about whether an algorithm won’t show me their posts.
- 00:13:18I still discover new accounts all the time, but through stuff the ones I’m following have shared.
- 00:13:25That’s a much more human experience than letting an algorithm decide to put stuff in front of my eyeballs
- 00:13:30because it will keep me engaged and increase user time on platform.
- 00:13:35It’s also a much more social experience.
- 00:13:39Yet to a lot of the new users migrating to these platforms, the need to curate your own experience is very frustrating.
- 00:13:47Several of them have told me this directly and even said they’d prefer not to have to put any work into social media.
- 00:13:55Now, I must admit that sentiment alone concerns me.
- 00:13:58I’m one of those weirdos who think the most rewarding things in life take effort,
- 00:14:03at least outside November,
- 00:14:05so I would not expect to just walk into a new place and have a good time without doing a little ol’ fashioned exploring.
- 00:14:13However, I can sympathize with it.
- 00:14:16For one, I have to recognize that I’m a popular YouTuber and I can just show up somewhere, say “hey guys”
- 00:14:23and expect a lot of people to start following me very quickly, and that helped re-establish connections I formed in other places.
- 00:14:31That’s a privilege I have and skews my perspective a lot, which is only fair that I acknowledge.
- 00:14:36My experience online is not normal.
- 00:14:41But observing Bluesky grow from an extremely niche platform to a still-niche-but-quickly-growing one
- 00:14:48has presented a very interesting case study which I feel compelled to share.
- 00:14:53Bluesky, for those who don’t know, allows for the creation of custom feeds.
- 00:14:58That’s one of its central features which is really cool!
- 00:15:01But baked-into it as a default when you create an account are two feeds with different purposes:
- 00:15:06Following and Discover.
- 00:15:09The Following feed is a reverse-chronological feed of the posts from accounts you follow.
- 00:15:15Exactly how Twitter used to work.
- 00:15:18But the Discover feed is algorithmic, like those “for you” pages everyone keeps going on about.
- 00:15:24On Bluesky it’s a pretty basic algorithm,
- 00:15:27but its job is to pick posts from accounts across the platform and essentially promote them so people can find new accounts to follow.
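The difference between the two default feeds comes down to how posts are selected and ordered. Here is a minimal sketch of that contrast; the field names and the engagement-based scoring are illustrative assumptions, not Bluesky's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    created_at: float    # unix timestamp
    engagement: int = 0  # likes + reposts; only the algorithmic feed uses this

def following_feed(posts, followed):
    """Following: only accounts you chose, newest first - fully user-controlled."""
    return sorted((p for p in posts if p.author in followed),
                  key=lambda p: p.created_at, reverse=True)

def discover_feed(posts, followed):
    """Discover: promotes high-engagement posts from accounts you don't follow."""
    return sorted((p for p in posts if p.author not in followed),
                  key=lambda p: p.engagement, reverse=True)
```

The Following feed answers to your choices alone, while the Discover feed answers to a platform-wide scoring signal you don't control.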
- 00:15:36It’s by no means a bad idea to have that feature!
- 00:15:39But as someone with a pretty large following on Bluesky now,
- 00:15:44I can tell the instant a post of mine ends up on the discover feed because the replies get real weird real fast.
- 00:15:53What was previously a post with a nice discussion going on underneath between myself and various people who know who I am and what I mean when I say words
- 00:16:02becomes littered with strange, out-of-context, often antagonistic replies
- 00:16:07as if the only possible response to seeing a post of any kind online is to loudly perform a challenge against it.
- 00:16:16Now, some of these are bots, a known problem on Bluesky.
- 00:16:20But through forming a habit where I check profiles before replying to see if they show signs of being a bot, it’s clear that a lot of them are not.
- 00:16:29They’re people who just start talking with absolutely no idea who they’re talking to and with no desire to figure out the context of the discussion before they write their reply.
- 00:16:40This may feel like I’m veering off track a bit or just complaining about people, which is fun,
- 00:16:46but this is less about calling out the behavior of individuals and more about recognizing the incentives which promote that behavior.
- 00:16:55Algorithmic feeds on social media are unfortunately quite good at fostering something known as context collapse.
- 00:17:03To understand this, imagine you’re dining in a restaurant and you’re close enough to a table of people to hear snippets of their conversation.
- 00:17:12You don’t know who any of the people at that table are,
- 00:17:15but if you manage to overhear them talk about something you’re really interested in you might feel tempted to join their conversation.
- 00:17:23But in the context of a restaurant setting, that’s considered very rude so it rarely ever happens.
- 00:17:30On social media, though, the same kinds of quasi-private conversations between parties who know each other are happening all the time,
- 00:17:39but since the platform is just one big space and it might decide to put that conversation in front of random people,
- 00:17:47that social boundary of etiquette which is normally respected is just not there.
- 00:17:53And lots of conflicts happen as a result.
- 00:17:56A really common one you might accidentally step into on social media
- 00:18:00happens when you stumble across a conversation among friends making sarcastic jokes with each other
- 00:18:06but since you don’t know who those people are you don’t have the context you need to recognize they’re joking.
- 00:18:13And so, if you reply with a serious critique, well that’s a social misfire which some will react poorly to.
- 00:18:20And that’s a pretty mild form of context collapse.
- 00:18:23It can be much, much worse when people want to discuss things like politics.
- 00:18:30And unless we realize recommendation algorithms are what’s fostering these reactionary conflicts,
- 00:18:36they’re going to continue so long as we use platforms in the ways that we do.
- 00:18:41It's for all these reasons that I believe algorithmic complacency is creating a crisis of both curiosity and human connection.
- 00:18:50I would even go so far as to say it’s fostering lots of other disturbing things, too.
- 00:18:55Ever notice how a lot of folks these days need to have a simple
- 00:18:59“good or bad,”
- 00:19:00“black or white,”
- 00:19:01“best or worst” understanding of a topic or issue?
- 00:19:05It seems to me like algorithms which promote content through a simple lens of positive or negative engagement
- 00:19:11would reinforce those binaries and contribute to polarization.
- 00:19:16And as people learn about new products through the slot machine of social media feeds,
- 00:19:21they can develop a learned helplessness where they will wait to be sold on a solution for their problems
- 00:19:28rather than be introspective and explore what their problems actually are and how they might be able to come up with their own solutions which don’t cost any money.
- 00:19:38Introspection will reveal a lot of the problems you think you have are being put in your head by influencers -
- 00:19:45you weren’t unhappy until they told you you should be.
- 00:19:50And, well, I can think of lots of other stuff which has disturbed me for quite a while
- 00:19:55but is now past the point I can ignore as a quirky consequence of connecting large numbers of humans together.
- 00:20:02Social media algorithms don’t nurture human connection - they exploit it.
- 00:20:08And we’re so used to this reality now that I’m not sure many of us care to get off this train.
- 00:20:14But I think we should.
- 00:20:16So now, let’s talk about YouTube!
- 00:20:19I’m sure a good number of you have thought “this is pretty rich comin’ from a guy who makes his living on a platform which does all these things.”
- 00:20:26Well, there’s a funny thing about YouTube.
- 00:20:30Its recommendation algorithm is entirely optional.
- 00:20:34You know that, right?
- 00:20:36You… you do know that, right?
- 00:20:40There’s this feature on this website that has been here pretty much since it became a thing and it’s called the subscriptions feed.
- 00:20:49It’s not hiding, it’s at the bottom of the mobile app and on the sidebar of the desktop site.
- 00:20:54In fact it's got its own URL and if you’re feeling old-school you can bookmark it!
- 00:20:59This is a completely manually-curated feed which has nothing in it but the videos
- 00:21:04(and shorts, for better and worse) from the creators you have chosen to subscribe to.
- 00:21:11That means it’s entirely in your control.
- 00:21:14This feed has been in plain sight the whole time and, here’s why I am using the term algorithmic complacency,
- 00:21:22nobody cares to use it.
- 00:21:24And that I can back up with hard data.
- 00:21:28YouTube gives us a shocking amount of information regarding audience metrics and in 2024,
- 00:21:35less than three percent of this channel’s views came from the subscriptions feed.
- 00:21:42Almost nobody is using this feature
- 00:21:44and yet it’s the most reliable way to keep track of the things you have explicitly decided you want to watch through hitting the subscribe button.
- 00:21:54The fact that it’s easy to get to, has existed in plain sight almost since this platform was born, yet almost nobody is using it anymore is…
- 00:22:03puzzling.
- 00:22:04I want to stress my use of the word puzzling.
- 00:22:07It’s not my intent to be judgmental here and I’m sorry if it has sounded like that.
- 00:22:12Different people use websites differently and the main “home” feed on YouTube, which functions like a “for-you” page,
- 00:22:19usually does a good job of surfacing new videos from creators you like.
- 00:22:24Subscribing to a channel is a signal to the recommendation algorithm to boost that channel’s videos in your home feed.
- 00:22:31And it’s not like that algorithm has no value - there are some creators I’m not even subscribed to
- 00:22:38yet I see most of their new videos since the algorithm has figured out I keep watching them so I must want to see the new ones, and it puts them in my home feed.
- 00:22:47Through that feed I’ve also stumbled across countless new channels and have been reminded of videos I watched years ago and will happily watch again.
- 00:22:57Plus of course, I have found so much cool stuff through the recommended videos that appear alongside whatever I happen to be watching.
- 00:23:05I’m glad YouTube does that.
- 00:23:07Wait, nuance?
- 00:23:10On the internet?
- 00:23:11That’s illegal!
- 00:23:13In the interest of time I’ve removed a big section on the flaws with the subs feed and how I think YouTube should address them
- 00:23:20(mainly - please let us remove shorts if we aren’t interested in them
- 00:23:23and please let us organize the subs feed a little bit so creators who upload a lot don’t end up cluttering it.
- 00:23:29Maybe condense their daily activity into a single tile which we can then click on and expand. Just a thought).
- 00:23:38But the reason I wanted to talk about it
- 00:23:40is that it feels indicative of a growing disinclination to grab the reins of the internet and be the person steering your own experience.
- 00:23:51That’s really what bothers me.
- 00:23:52There has to be a balance between cool stuff you stumble upon and stuff that you’re actually interested in and matters to you.
- 00:24:01Algorithmic complacency, if not noticed and acted upon,
- 00:24:05means you’re allowing other people who are not you to decide what matters to you.
- 00:24:12I should not have to spell out why that’s dangerous so I won’t.
- 00:24:16But I will spell out that it’s very easy for people who wish to weaponize this reality to craft a narrative which is not overtly obvious and so might slip past your defenses.
- 00:24:28If you can reduce the presence of algorithmically-curated feeds in your life, you’ll be less susceptible to that.
- 00:24:34And if you build up your own network of people and resources you trust, you’ll know when you’re being bullshitted.
- 00:24:41And in case you haven’t noticed, people are trying to bullshit us all the time now!
- 00:24:47And algorithmic feeds make this worse.
- 00:24:49They don’t just exist on YouTube and social media, they exist in popular news aggregator apps and that’s meant a lot of really stupid articles keep floating around
- 00:24:59because all that matters to many institutions which make news these days is clicks and ad money.
- 00:25:06Look at this stupid thing which ended up in the Google News app.
- 00:25:09"The end of Walmart and Target in the US - A new retailer appears and is much cheaper."
- 00:25:16I know what an Aldi looks like.
- 00:25:18That’s a picture of the inside of an Aldi.
- 00:25:21And, uh, if you’ve somehow not heard of Aldi or stepped inside one that doesn’t mean Aldi is a new retailer which just appeared.
- 00:25:30This article is a waste of time for the vast majority of people who might be tricked into clicking on it.
- 00:25:36You may have noticed that the publisher of that article was El Diario 24 which, as far as I can tell, might actually be a fake news source.
- 00:25:46How did it get in Google News?
- 00:25:48That’s a great question for Google.
- 00:25:50But even mainstream publications have gone wildly off the rails as they chase metrics rather than the truth.
- 00:25:58The New York Times of 2025 is publishing opinion pieces
- 00:26:03where bloviating morons go through the fun little exercise of what the political ramifications of turning Canada into the 51st state would be for Democrats.
- 00:26:14Folks, since they can’t say this in plain language for some reason, I guess it’s up to me.
- 00:26:19Canada is a sovereign nation.
- 00:26:23It’s a foreign country which has the right to self-determination.
- 00:26:28The United States cannot simply turn Canada into a US state -
- 00:26:33Canadians have clearly indicated they do not want that,
- 00:26:36which means for us to force the issue would be to declare war with Canada and invade.
- 00:26:43No sane person should want that
- 00:26:46and it’s a shameful embarrassment that the New York Times would even entertain this as a possibility and legitimize the awful, inevitably bloody idea.
- 00:26:56How on Earth did we get here?
- 00:26:59I can tell you how I think we got here:
- 00:27:01big news publications have become just as dependent on algorithms to find their readers as their readers are on algorithms to find the news.
- 00:27:09Which means they’re more concerned with being enticing than being honest.
- 00:27:13Which is a damn shame.
- 00:27:16OK, reel it in.
- 00:27:18[breathes]
- 00:27:19Reel it in.
- 00:27:21I’m about to wrap up this video but before I do, I want to explore one more related but different angle.
- 00:27:28I struggled with whether I wanted to call this phenomenon algorithmic complacency or automation complacency.
- 00:27:35The reason I struggled is that there isn’t a clear distinction between those two things.
- 00:27:40Algorithms are a kind of automation, so you could say everything I’ve been talking about is the result of automatically-curated feeds
- 00:27:49and the video wouldn’t change.
- 00:27:52But automation in itself is not necessarily bad.
- 00:27:55Lots of menial labor tasks have been replaced by automation and this has largely been a great thing.
- 00:28:02The actually-important word in this discussion is curated.
- 00:28:08It’s one thing to automate, say, an elevator.
- 00:28:10Or an inventory system.
- 00:28:13Or a telephone switchboard.
- 00:28:15But it’s a very different thing to automate what information people see.
- 00:28:21There are situations where that sort of automation is necessary.
- 00:28:26There’s an ever-increasing amount of information being stored online,
- 00:28:30so when you’re looking for something specific amongst that sea of information, keywords by themselves aren't enough.
- 00:28:38The demonstration we did at the beginning
- 00:28:39relied on Google’s search algorithm determining context from the keywords we gave it so it could sort everything it found by relevance to that inferred context.
- 00:28:50And it’s usually really good at that, as we saw.
- 00:28:54But it’s not always.
- 00:28:56I’m sure by now you’ve had the really frustrating experience of Google latching onto the wrong context and producing a lot of irrelevant results,
- 00:29:05and it can be extremely tedious to refocus the algorithm on the correct context.
- 00:29:11This happens a lot when search queries include a word with many homonyms.
- 00:29:16But we’re rapidly moving away from a paradigm in which search queries present a list of sources for us to look at, cite and verify,
- 00:29:24and are now being pressured into a new reality
- 00:29:27where large language models synthesize responses to queries which are statistically likely to produce a useful output
- 00:29:35but which do not provide us with sources of verifiable information -
- 00:29:39or at least obfuscate them to the point that many people are not going to check them.
- 00:29:45I don’t think enough of us have put much thought into what that means.
- 00:29:50It means we’re careening towards a future where people just trust computers to do their thinking for them.
- 00:29:58And the thing is, we already know that’s often a bad idea.
- 00:30:03Take, for example, how we navigate the actual physical world.
- 00:30:08If you drive a car, I am certain that by now you’ve used some kind of GPS-navigation app to figure out how to get places.
- 00:30:16I do, too; don’t think I’m about to say "oh we should go back to paper maps."
- 00:30:21Gross.
- 00:30:22But when you use a mapping app to navigate somewhere, how often are you prioritizing the fastest route?
- 00:30:30You probably have that set up as the default, don’t you?
- 00:30:33I do.
- 00:30:35But do you ever question whether that is actually the most logical way to get somewhere?
- 00:30:41It often isn’t, because arrival time is only one of many, many variables which might be important to you.
- 00:30:49If I ask Google Maps to take me from my home to my office,
- 00:30:53it is going to suggest a route which first requires going the wrong way to then hop on a tollway on which I have to pay a roughly $1 toll
- 00:31:02and have to overshoot my destination to hop on a second expressway and backtrack.
- 00:31:08That suggested route requires an extra 4 miles of travel and an extra $1 each way all to save me exactly
- 00:31:17one minute over taking much more direct state routes and surface streets.
- 00:31:23If I mindlessly did what Google suggested,
- 00:31:26over a year I’d put an extra 2,000 miles on my car and spend an extra $500 on tolls just to save not even one full work day of time.
- 00:31:38Google is suggesting a terrible route just because it’s one minute faster.
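The yearly figures above can be checked with quick back-of-the-envelope arithmetic. This is a sketch: the per-trip numbers come from the video, but the count of roughly 250 commuting days per year (two trips per day) is an assumption made to reproduce the stated totals.

```python
# Back-of-the-envelope check of the tollway-route cost described above.
# Assumption: ~250 commuting days per year, two trips (there and back) per day.
COMMUTE_DAYS = 250
TRIPS_PER_DAY = 2

extra_miles_per_trip = 4     # extra distance on the suggested tollway route
toll_per_trip = 1.00         # dollars, roughly, each way
minutes_saved_per_trip = 1   # the tollway route's time advantage

trips = COMMUTE_DAYS * TRIPS_PER_DAY
extra_miles = trips * extra_miles_per_trip           # total extra mileage
extra_tolls = trips * toll_per_trip                  # total extra tolls
hours_saved = trips * minutes_saved_per_trip / 60    # total time saved

print(f"Extra miles per year: {extra_miles}")        # 2000
print(f"Extra tolls per year: ${extra_tolls:.0f}")   # $500
print(f"Time saved per year:  {hours_saved:.1f} hours")
```

Under these assumptions the totals match the video's "extra 2,000 miles and $500 in tolls," and the time saved comes out to roughly 8.3 hours, i.e. about one work day.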
- 00:31:44The only reason I know it’s a terrible route is because I live here and I know what it’s suggesting is asinine,
- 00:31:50but when I don’t know where I’m going I trust it to make the best decision.
- 00:31:55But we don’t necessarily agree on what is best.
- 00:31:59That’s the problem.
- 00:32:01Side-note, the other bad thing about taking the tollway is that when I do,
- 00:32:06I don’t get to see my neighbors and what they’re up to.
- 00:32:10I actually really enjoy driving through town,
- 00:32:13seeing new businesses as they pop up,
- 00:32:16homes getting built and remodeled,
- 00:32:18admiring Christmas lights over the holidays, and
- 00:32:20just seeing people to remind me that other people exist and they live real lives and they are connected to me because they’re my neighbors.
- 00:32:31I’m not the most social person
- 00:32:33but even I don’t like the isolated feeling that a commute on a tollway where there’s nothing to look at but other cars and sound-isolation walls gives me.
- 00:32:42I feel way more connected to my community when I can actually, ya know, see it and the people who define it.
- 00:32:51That’s a preference of mine, I grant, but it’s also a priority of mine.
- 00:32:57I am being mindful now to put human connections above technological connections.
- 00:33:04See, what I do here, is I make connections between technologies -
- 00:33:08that way you can learn how they fit together and how best to use them, and maybe you can use one concept in conjunction with another concept to make a third concept!
- 00:33:17That’s what I’m doing here!
- 00:33:18Trying to empower you to make your life better.
- 00:33:22Technologies which make human connection harder or even just more random are, uh,
- 00:33:29bad!
- 00:33:30That I do think is black-and-white true.
- 00:33:33Any piece of technology which gets in between humans who wish to help each other is frustrating at best and exploitative at worst.
- 00:33:42We ought to know by now the real reason those systems get put in place is so that we need fewer humans in helpful roles.
- 00:33:50And I think there is absolutely no question that this is going to get worse
- 00:33:56until we all start looking inward and begin questioning how we operate in this world and why.
- 00:34:04Silicon Valley seems hellbent on creating machines which can do our thinking for us.
- 00:34:11Why should any of us want that?
- 00:34:14I certainly don’t want that - I don’t learn anything unless I do the mental work to create a complete framework of understanding in my mind.
- 00:34:23I don’t talk about things I don’t understand because that’s the fastest way you can make a fool of yourself.
- 00:34:30And it can be dangerous when you have a platform like I do.
- 00:34:35I will never trust a computer program to be able to understand anything in the way a human can,
- 00:34:41nor will I trust it to find information for me.
- 00:34:45If I have to vet everything it’s finding, then I end up doing the same work I would have done myself.
- 00:34:51And if I don’t vet what it’s finding, then what I’m really doing is saying I don’t want to be responsible for what I do.
- 00:34:59It frightens me that even though we’ve all seen the consequences of what a social media recommendation algorithm can do to shape our viewpoints,
- 00:35:07we are somehow falling for the temptation of machines which can offload our thought processes.
- 00:35:14The thing which makes us human.
- 00:35:17If that’s not the purest form of lazy anti-intellectualism, I don’t know what is.
- 00:35:23On that cheery note, let me make sure you know that I share the same frustration with the AI hype cycle
- 00:35:30as the AI researchers who are actually doing work to create tools to solve real problems, like early detection of cancers from images or blood screens.
- 00:35:40That is valuable research which will undoubtedly save lives.
- 00:35:44It’s beyond frustrating that the only kind of AI in the public consciousness at the moment is the one that does some very impressive tricks
- 00:35:52but, as any honest person will tell you, needs intense supervision because it will hallucinate and produce bad outputs.
- 00:36:00It seems blindingly obvious to me
- 00:36:02that the stakes are way too high to hand over our decision making to a computer which cannot be held responsible for the decisions it makes.
- 00:36:11And that’s, disturbingly, what I think is the real reason for wanting to push this future.
- 00:36:17But you’ve put up with me long enough.
- 00:36:19Thank you for watching and I hope I don’t sound too far off my rocker.
- 00:36:25I made this channel not just to share cool stuff I find but to show all the amazing ways we solved our problems in the past.
- 00:36:34There are invaluable lessons there which we forget at our peril.
- 00:36:39I don’t talk much about computer technology because it doesn’t really interest me.
- 00:36:44And it’s becoming less and less interesting as time goes on.
- 00:36:48Outside of hardcore enthusiasts or people doing research on vast amounts of data,
- 00:36:53let’s be honest: computing is a solved problem.
- 00:36:57I can use my desktop computer from 2017 to make these videos for you and it would not get in my way at all,
- 00:37:05and the $600 nearly base-spec M2 Mac Mini I bought to dingle around on can do it just as well.
- 00:37:12Video encoding is getting more efficient and we need less bandwidth and storage to send videos like this across the world
- 00:37:19and that’s despite internet connections getting faster and faster.
- 00:37:24So I think it’s pretty clear that that reality is why Silicon Valley is doing the stuff it’s doing these days.
- 00:37:32It has to justify itself as a center of innovation in a world where it’s running out of runway.
- 00:37:38And the best answer it’s got is
- 00:37:40“euhhh computers which pretend to think.”
- 00:37:44Forgive me, but I want to think for myself.
- 00:37:48And I think you should, too.
- algorithmic complacency
- internet
- research
- recommendation systems
- independent thinking
- radio
- Silvertone Model 18
- vintage radio
- information retrieval
- social media