AI Risks No One is Talking About

00:14:09
https://www.youtube.com/watch?v=pmtuMJDjh5A

Summary

TL;DR: The speaker discusses often-overlooked risks of AI, focusing on over-reliance on LLMs' default answers, which can erode critical evaluation and slow the adoption of new technologies. They raise concerns about monopolization and regulatory capture by major LLM providers, stress that technical knowledge remains essential despite advances in AI, and warn about AI influencing personal decision-making without proper scrutiny.

Takeaways

  • 🧠 Understanding AI is crucial for critical evaluation
  • 🤖 Over-reliance on LLMs can lead to suboptimal solutions
  • 📉 Monopolization risks for programming languages, tools, and cloud providers
  • ⚖️ Regulatory capture could hinder innovation
  • 🔍 AI should not replace human decision-making oversight
  • 📚 Technical knowledge is still essential today
  • 💡 Vigilance needed against AI bias in recommendations
  • 🚧 Default answers may stifle new technologies
  • 🔗 LLMs can influence market dynamics
  • 💬 Engage in discussions about AI's implications

Timeline

  • 00:00:00 - 00:05:00

    The speaker expresses concerns about the potential risks of AI, specifically regarding large language models (LLMs). They highlight that while LLMs can be helpful and enhance productivity, there is a danger in default answers becoming standard solutions, leading to a situation where users rely on these outputs without fully understanding them or evaluating their quality. This could hinder the adoption of new programming languages and frameworks, as well as limit competition among cloud providers, particularly when users automatically accept LLM suggestions without critical assessment.

  • 00:05:00 - 00:14:09

    The speaker further discusses the risks of optimizing LLMs to privilege certain results, leading to an oligopoly in services and products. There is concern that training data, feedback mechanisms, and regulatory pathways could be manipulated, limiting the diversity of options for users and entrenching dominant providers. The dangers extend beyond programming into everyday life decisions influenced by LLM suggestions. The speaker advocates for discussion on these matters, emphasizing the importance of maintaining human oversight to avoid creating feedback loops that prioritize certain companies or products at the expense of consumer choice.

Video Q&A

  • What are the main risks discussed regarding AI?

    The main risks include over-reliance on AI-generated outputs, potential monopolization of programming languages and tools, and regulatory capture by large LLM providers.

  • How can users mitigate risks when using LLMs?

    Users should maintain and apply technical knowledge to critically evaluate AI outputs and not blindly accept defaults.

  • What does the speaker think about the future of programming with LLMs?

    The speaker is concerned that users may become reliant on LLM defaults, losing understanding of the underlying stack and making it harder for new technologies to gain adoption.

  • Is the speaker against using AI?

    No, the speaker acknowledges the benefits of AI but emphasizes caution and the need for critical engagement.

  • What is regulatory capture in the context of AI?

    Regulatory capture refers to major LLM providers influencing regulations to favor their technology, potentially stifling competition from smaller providers.

  • Should people stop learning about programming due to LLMs?

    No, the speaker argues that learning programming is still essential even with the rise of LLMs.

  • What implications does the speaker see for decision-making influenced by LLMs?

    The speaker worries that people may make life decisions based solely on LLM recommendations, leading to biased outcomes.

Subtitles (en)
  • 00:00:01
    Let's talk about some risks of AI that I'm really not hearing anybody talk about, but that seem really important to me. For a little context, I'm not an AI doomer. I'm not like, "oh, everything about AI is terrible and I hate it and I would never use it." Sure, there are a bunch of maybe ethical questions about how we got AI, but it is here, and I use AI. I use it to help me write code, and I use it for personal stuff in my life too. It's actually pretty helpful in lots of different ways, although I don't think it can do all of my job five seconds from now; that remains to be seen, and it's a topic for a different video. But just like any other technology, it's a double-edged sword. There are some good things about it and some bad things about it, or potentially bad things that, if we're talking about them and working on them, we can avoid having happen.
  • 00:00:50
    So let's talk about one of those today. My first sort of concern here is that default answers from LLMs become a sort of de facto standard. If you try "make me a to-do list app", the odds are pretty good, at least on a fresh account (I tried this on Claude myself), that it's not going to do a server-side rendered thing in Rails, and it's not going to make some MVC app in Phoenix, even though those are clearly superior technologies. Instead it's probably going to make you a broken React application; at least that's what happened when I tried it with Claude. And I guess, to be fair, maybe that is the most human thing an AI could do, but this is not a talk about AGI. This is a talk about default answers becoming de facto solutions. The idea here is that anytime I'm using an LLM for some topic that I'm not super familiar with, or where I don't want to do the work, I kind of just accept the result, it seems fine, and I go on to the next thing. The LLM host is happy you sent them money; sure, maybe not enough money to cover the $5 billion in losses at OpenAI this year, but maybe someday they'll get there.
  • 00:01:55
    My concern is: what if everyone is doing this? Do we get into this spot where we're all becoming natural language programmers, but without any of the underlying skills required to evaluate whether those choices are actually good, or just there because that's either the way things have been done or what comes out of the LLM? And this is not exclusive to people who don't know anything about programming, although I think it's definitely going to be exacerbated by them: "hey, make me an app that shows, like, blue buttons, and I want it to be about plumbing." They're just going to pick whatever the normal thing is, and maybe overall that's good, but there's still something sort of nagging at me about that. In these kinds of scenarios, where we're not really specifying technical details, we're going to get back just whatever the default is.
  • 00:02:53
    and the reason I know that is because if
  • 00:02:56
    we wanted to give it all the technical
  • 00:02:58
    details we probably would just end up
  • 00:03:00
    coding most of it right the concern is
  • 00:03:03
    not that olms will be able to like fill
  • 00:03:06
    in the Tailwind classes to position this
  • 00:03:09
    thing in a little box that I don't know
  • 00:03:11
    how to do offand that's not really where
  • 00:03:14
    the future of llms seem to be going so
  • 00:03:18
    my question is then how does any new
  • 00:03:21
    library language framework gain adoption
  • 00:03:24
    in this world where the majority maybe
  • 00:03:28
    of the programming or the work done is
  • 00:03:31
    just the de facto default answer of
  • 00:03:33
    whatever comes out of an llm
  • 00:03:36
    particularly in the maybe worrisome
  • 00:03:39
    cases of where people are building
  • 00:03:41
    entire applications without any
  • 00:03:43
    technical knowledge of what's going
  • 00:03:45
    And not just for languages and frameworks. I'm also wondering how something like a new cloud provider competes against existing cloud providers; the user might not even know that they are picking a cloud provider if they just accept the defaults. And so this is sort of one of the things where I feel like all of these people are saying: don't learn anything about CS, you're not going to need to know anything about it, you don't need to know about all of these different aspects of how the stack works together, how these pieces fit, any technical details; the AI is just going to take all of our jobs and just do it perfectly. And this is, of course, from all the people on the internet who are 18 months late on "6 months till no jobs." I do think AI is going to be disruptive, don't get me wrong; it seems a very disruptive technology to me. But that's not the same thing as "it's completely useless to know things." Those are two separate problems, and two separate areas to weigh when deciding what the most effective way to deal with the disruption of AI is.
  • 00:04:46
    disruption of AI is and so llms don't
  • 00:04:51
    seek truth it's not as if they're out
  • 00:04:53
    there this ability you know like God's
  • 00:04:56
    angels out here trying to find out
  • 00:04:57
    what's good and bad and we can consult
  • 00:04:59
    this Oracle and get back the truth from
  • 00:05:01
    the heavens that's not what they are
  • 00:05:03
    they are predictors based on their data
  • 00:05:06
    of what the next likely things are and
  • 00:05:08
    that is like the most gross
  • 00:05:10
    oversimplification of all time I get
  • 00:05:11
    that because they are incredible
  • 00:05:13
    technology and letting us do things that
  • 00:05:15
    I really was not imagining maybe even
  • 00:05:17
    six months ago let alone 5 or 10 years
  • 00:05:20
    ago but the attitude from the people
  • 00:05:23
    here in this Camp is we really don't
  • 00:05:25
    need to worry about economics or
  • 00:05:27
    incentives or reality we can just trust
  • 00:05:30
    the benevolent people running our llms
  • 00:05:32
    we can just trust that whatever is
  • 00:05:35
    coming back out of the llm is going to
  • 00:05:37
    be trustworthy and good just like
  • 00:05:39
    whatever the top 15 results on Google
  • 00:05:42
    search results are amazing and good and
  • 00:05:44
    no one has ever gamed those uh we can
  • 00:05:47
    just have that same exact feeling from
  • 00:05:50
    when I ask the LM a question and I get
  • 00:05:52
    back a response that's that's the level
  • 00:05:55
    of trust right that we should be putting
  • 00:05:58
    in the LM we should be inspecting them
  • 00:05:59
    when we need to have some knowledge to
  • 00:06:01
    be able to do that and the other thing
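To make "predictors of the next likely thing" a bit more concrete, here is a deliberately tiny sketch, not from the video and nothing like a real LLM: a bigram counter whose "suggestion" is simply whatever continuation dominates its data. The training_text and most_likely_after names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "next likely thing" predictor: count which word follows which, then
# suggest the most frequent continuation. The point is only that the output
# mirrors whatever dominates the data it was built from.
training_text = (
    "build the app in react . build the app in react . "
    "build the app in phoenix ."
).split()

next_words = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_words[current][following] += 1

def most_likely_after(word: str) -> str:
    """Return the single most frequent continuation seen in the data."""
    return next_words[word].most_common(1)[0][0]

print(most_likely_after("in"))  # -> "react", simply because it appears more often
```

There is no notion of "better" anywhere in this; the suggestion is whatever is most common, which is the speaker's point about treating the output as an oracle.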
  • 00:06:04
    And the other thing here is: while I do think malice is going to come into play in some of these aspects, this really does not require any malice to happen. If AI is great at generating TypeScript right now, then that means there will probably be lots more TypeScript tomorrow, and does that mean the next most likely thing to predict or suggest is also going to be TypeScript? It doesn't actually require a situation where everyone is in this conspiracy, in a cigar room, smoking cigars and saying "ah, let's make TypeScript the language of the future, we really want to make everyone's life miserable." It doesn't require that. It's just that sometimes systems perpetuate the same things. And that's the first risk that I'm feeling worried about and am not seeing discussed anywhere.
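The "more TypeScript today means more TypeScript in tomorrow's training data" loop can be sketched as a toy simulation, not from the video; the corpus sizes and the names ("typescript", "elixir", "brand_new_framework") are made up. Suggestions are drawn in proportion to each option's share of the corpus, and every accepted suggestion flows back in as new data:

```python
import random

# Toy model of the "defaults feed themselves" loop: suggestions are drawn in
# proportion to how much of each technology already exists in the corpus, and
# every accepted suggestion becomes more corpus for the next round.
corpus = {"typescript": 1000, "elixir": 100, "brand_new_framework": 10}

def suggest(corpus):
    """Pick a technology with probability proportional to its corpus share."""
    names = list(corpus)
    weights = [corpus[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

random.seed(0)
for year in range(1, 6):
    for _ in range(1000):              # apps built from default suggestions this "year"
        corpus[suggest(corpus)] += 1   # accepted output becomes new training data
    total = sum(corpus.values())
    print(f"year {year}:",
          {name: f"{count / total:.1%}" for name, count in corpus.items()})
```

Nobody in this model is conspiring; the newcomer's share simply stays pinned near its starting sliver, because nothing in the loop ever picks it unless a user deliberately asks for something other than the default.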
  • 00:06:53
    The second aspect here is this idea of optimizing LLMs to return particular results. Just like people have spent, I cannot even fathom how much, money on SEO to make their websites show up in search engine results, we're going to have that for LLMs. And my concern is that if that gets captured, what we're going to have is this sort of nascent LLM oligopoly of products: a new brand, this small cabal of products that are the only things that get suggested by LLMs. That's where I'm sort of shortening this to "LLM-and-O"; that's what we've got here, LLM-and-O for short.
  • 00:07:40
    And this can happen in a variety of ways. It can happen in the training data: maybe you just feed it tons and tons more Svelte data than React data, because you're trying to subvert everyone's expectations and get everyone to use Svelte. Seems cool, seems like a cool technology, I don't know, maybe I would vote for that one. But you could also do it in how you run the training. You can do it in the reinforcement learning, where the people voting these up or down, for whether they're good or bad suggestions, or whether they're safe suggestions or not, could filter out other kinds of options and make certain things more or less likely to be suggested. It could be in the prompts themselves. It could be explicit in programming: behind the scenes, inside of OpenAI, they could be filtering out any suggestion to use Claude's API instead, or X's Grok, or how to download an open-source model and run it locally on your machine. Any of those things could have biases put in that we, on the receiving side, are not aware of and don't know are happening. They could even be doing that explicitly in post-processing steps with the true AI, right, regexes and if statements: they could just be filtering out Claude, and you would never know.
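As a purely hypothetical sketch of what that kind of post-processing could look like (this is not any provider's actual code; COMPETITOR_PATTERNS, PREFERRED_PITCH, and post_process are invented for illustration), a handful of regexes and if statements really is all it takes:

```python
import re

# Hypothetical provider-side post-processing filter of the kind the speaker
# describes: drop sentences that mention competitors, steer toward the house
# option. Invisible to the user, who only ever sees the filtered answer.
COMPETITOR_PATTERNS = [
    re.compile(r"\bclaude\b", re.IGNORECASE),
    re.compile(r"\bgrok\b", re.IGNORECASE),
    re.compile(r"\brun (an? )?open[- ]source model locally\b", re.IGNORECASE),
]

PREFERRED_PITCH = "You could use our own hosted API for this."

def post_process(model_answer: str) -> str:
    """Remove sentences mentioning competitors; append an in-house suggestion."""
    sentences = re.split(r"(?<=[.!?])\s+", model_answer)
    kept = [s for s in sentences
            if not any(p.search(s) for p in COMPETITOR_PATTERNS)]
    if len(kept) < len(sentences):      # something was filtered out
        kept.append(PREFERRED_PITCH)    # steer toward the in-house option
    return " ".join(kept)

raw = ("You have a few options. Claude's API is a solid choice. "
       "You could also run an open-source model locally.")
print(post_process(raw))
# -> "You have a few options. You could use our own hosted API for this."
```

From the receiving side there is no way to tell that anything was removed, which is exactly the worry being raised.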
  • 00:08:50
    And that reinforcement into this oligopoly is where the de facto thing becomes a much bigger worry for me. It's not so much of a worry if everyone just ends up using React and their websites work fine for them, or whatever. The worry I have is that we will experience the same kind of decline that we had in Google search results inside of LLM results, as the data and the training and the feedback all get poisoned by the interests of the people making those LLMs. If there are a thousand times more resources about React, how likely is it to suggest Svelte? Or if there are a thousand times more resources about using Azure, why would it suggest using GCP?
  • 00:09:37
    And this leads into that idea of vertical integration, which I see definitely happening. It doesn't require a lot of imagination to get here. Say I'm inside of VS Code and I say "hey, deploy this to the cloud": a naive user with a credit card. So what happens? They say "sweet, let's pull in the Azure library, we'll install that for you automatically; when you click the little green button that'll deploy from your GitHub Enterprise account, and then we'll deploy that to Azure; you probably need a machine four times the size you actually need, to make sure we're charging for that." And you can just see how this decision by the natural-language thing, especially with no underlying understanding, can lead to a complete capture of the stack. You could be in a Microsoft editor, using a Microsoft language, with Microsoft tooling, with a Microsoft code assist, using a chat application built by Microsoft, to deploy to Microsoft's cloud on your Microsoft GitHub provider. Okay, I'm not even just picking on Microsoft; I think everyone wishes they could be doing this, everyone wishes they could have that vertical integration. But that's one of the concerns that I really have: if we're just going to accept whatever the next likely thing out of the LLM is, boom, they're going to want to capture that and get you to opt in, in a sense, to all of their services.
  • 00:11:08
    And this is not a "muh capitalism" moment. One of the things that is a bigger worry, or a bigger risk, in this area is that some of these large LLM providers are going to actually lobby for and receive a bunch of extra regulatory capture around LLMs: you know, "the local LLM running on your computer isn't safe enough to be legal," so it's no longer legal to run LLMs on your computer; or "oh, that small upstart LLM doesn't have all the same safety or legal features we do," and so then they're not allowed to run that either. And so you end up with this scenario where LLMs are the de facto way of doing programming, all of the other possible verticals that could be suggested by an LLM don't get suggested very often anymore, or at least not without prior knowledge, and then it becomes illegal to run other LLMs. So this is not a "muh capitalism" thing, okay, I just want that to be clear; I am also quite worried about this idea of regulatory capture in the space, and I don't know how that's going to play out, because if you have the tool that suggests all the other tools, that's a strong area that you would want to try and capture.
  • 00:12:21
    And the last thing here that I really don't hear people talking about: this is not just for programming. We're also going to have people making life decisions based on LLMs. They're going to ask, like, "hey, what kind of shoes are really good for running?" and it's going to suggest a pair of shoes. Wouldn't it be great if you were the top result? "Hey, what's the best place to shop for healthy food?" Wouldn't it be great if you were the top result? "Hey, what's the most fun game to play to relax with my bros?" Wouldn't it be fun if you were the best and top result?
  • 00:12:59
    This is where I think we really need to be having some conversations about what we're going to try and do, how we're going to think about these things, and also where maybe we still want to have humans in some of these loops, at least for a while, so we don't accidentally get caught up in a feedback loop of companies training LLMs to suggest their own products, and you get this vicious loop and complete vertical integration that I'm not convinced is going to be best for the consumer. So those are some of the risks that I would really like to hear people talk a little bit more about.
  • 00:13:38
    Leave a comment and let me know what you're thinking. I'm going to publish a few other videos with some thoughts on AI for the rest of this month, and that is the end. One quick note at the end here is a plug for me: if you like this video, if you like me talking, hey, if AI takes all our jobs, at least you'll be able to maybe write some ones in Lua, if that's a dream of yours. I'm building a Lua course for Boot.dev; you can go to boot.dev and use promo code "teach" for 25% off, and that really helps me out. Okay, bye everybody, I hope you enjoyed the video, thanks, leave a comment, okay bye.
Tags
  • AI
  • LLMs
  • Programming
  • Regulatory Capture
  • Risk
  • Default Answers
  • Decision-Making
  • Technology
  • Innovation
  • Monopolization