Co-Intelligence: Living and Working with AI

01:01:12
https://www.youtube.com/watch?v=lXCsmnnAByo

Summary

TLDR: The webinar features Henning Piezunka in conversation with Ethan Mollick about Ethan's extensive work on technology, particularly AI, its implications, and his recent book "Co-Intelligence: Living and Working with AI." Ethan shares insights on AI's capabilities, its role as a co-intelligence, and the unpredictability of AI's performance, which he terms the "Jagged Frontier." The discussion touches on AI's transformative potential across industries, its integration into daily tasks, and the challenges of bias and misconceptions about how AI works. Ethan emphasizes that hands-on experience is necessary to truly grasp AI's abilities and predicts that future improvements will keep reshaping professional landscapes, pushing people to adapt continually. The conversation also reflects on how AI affects societal constructs, such as authorship and trust, as AI-produced outputs become indistinguishable from those created by humans.

Key Takeaways

  • 📘 Ethan Mollick's book explores AI as a co-intelligence.
  • 🤖 AI has unexplored potential that requires hands-on use.
  • ⚖️ Bias in AI remains a significant challenge.
  • 🌀 AI's Jagged Frontier indicates its unpredictable abilities.
  • 💻 Future work may be transformed by AI-driven efficiency.
  • 🧠 AI is being explored for mental health assistance.
  • 🔍 Research shows AI outperforms humans in many areas.
  • 🌍 AI's role is growing rapidly across industries.
  • 🛠 Organizations should encourage AI exploration safely.
  • 👥 AI changes how we perceive and interact with work.

Timeline

  • 00:00:00 - 00:05:00

    Henning Piezunka, an associate professor at INSEAD, introduces the webinar 'Between the Lines', part of INSEAD Lifelong Learning, featuring Ethan Mollick from the Wharton School and highlighting his work on technology and AI.

  • 00:05:00 - 00:10:00

    Ethan Mollick discusses his book 'Co-Intelligence: Living and Working with AI', outlining the evolving capabilities of AI and its potential to enhance human performance.

  • 00:10:00 - 00:15:00

    Ethan describes how AI, specifically GPT-4, can outperform humans in idea generation and professional tasks, emphasizing the importance of using AI as a tool for enhancement.

  • 00:15:00 - 00:20:00

    The concept of 'co-intelligence' is explained, wherein AI acts as a partner in human activities, offering unique support despite its non-sentient nature.

  • 00:20:00 - 00:25:00

    Ethan shares various use cases of AI improving efficiency in business, illustrating AI's role as a transformative tool in innovation and decision-making processes.

  • 00:25:00 - 00:30:00

    Challenges and opportunities presented by AI's capabilities are explored, focusing on how humans must adapt their roles and responsibilities in professional settings.

  • 00:30:00 - 00:35:00

    Ethan demonstrates AI's ability to perform complex tasks such as data analysis and creative writing, showcasing its versatility and understanding of context.

  • 00:35:00 - 00:40:00

    The idea of the 'Jagged Frontier' is introduced, underlining AI's unpredictable strengths and weaknesses, which necessitate human oversight in AI-powered tasks.

  • 00:40:00 - 00:45:00

    Ethan underscores the importance of experimenting with AI to discover its practical applications and limitations, encouraging a hands-on approach.

  • 00:45:00 - 00:50:00

    The ethical and practical considerations of using AI are discussed, including biases in AI systems and the need for responsible application of AI technologies.

  • 00:50:00 - 00:55:00

    Ethan explores potential societal shifts due to AI, questioning the roles of authorship and personal identity in a digital world where AI is increasingly prominent.

  • 00:55:00 - 01:01:12

    The discussion concludes with thoughts on preparing for AI's future, emphasizing ongoing learning and adaptation to integrate AI effectively into personal and professional life.


FAQ

  • Who is Ethan Mollick?

    Ethan Mollick is a scholar from the Wharton School at the University of Pennsylvania known for his work on technology, video games, crowdfunding, and AI.

  • What is the book "Co-Intelligence" about?

    The book discusses AI, its capabilities, and its role as a co-intelligence in living and working with it.

  • What are some key concepts discussed in the webinar?

    Key concepts include AI's role as co-intelligence, prompt engineering, biases in AI, and the future impact on professions.

  • How does Ethan Mollick view AI's impact on professional tasks?

    Mollick believes AI can significantly enhance professional tasks, offering improvements in efficiency and creativity, while people should focus on what they do best.

  • What is the "Jagged Frontier" referring to?

    The Jagged Frontier refers to the unpredictable strengths and weaknesses of AI in various tasks.

  • How can organizations facilitate AI adoption?

    Organizations should foster a culture that encourages the use of AI, allowing employees to safely experiment and innovate with it.

  • What are some potential uses of AI in mental health?

    AI is being explored for therapy and support, with some users finding significant benefits in interacting with AI for mental health purposes.

  • What advice does Ethan give for using AI?

    Ethan advises spending at least 10 hours using AI to understand its potential and establish familiarity with its functions.

  • What concerns exist regarding AI biases?

    AI may reflect and perpetuate biases present in its training data and society, affecting fairness and objectivity in outputs.

  • How might AI transform professional environments?

    AI might redefine work by handling more routine tasks, pushing individuals to focus on higher-level creative or strategic pursuits.

Transcript (en)
  • 00:00:00
    welcome to the webinar my name is
  • 00:00:02
    Henning Piezunka I'm an associate
  • 00:00:04
    professor at INSEAD um today at Between
  • 00:00:08
    the Lines the webinar of INSEAD Lifelong
  • 00:00:11
    Learning we have Ethan Mollick from the
  • 00:00:14
    Wharton School at the University of
  • 00:00:17
    Pennsylvania um Ethan it's a
  • 00:00:21
    great if you're online it would be great
  • 00:00:24
    for you to join um fantastic um before
  • 00:00:27
    we get started um kind of jump right
  • 00:00:29
    into the middle of it I'm going to say a
  • 00:00:31
    few few things about you I have actually
  • 00:00:33
    known Ethan's work I believe since yeah
  • 00:00:36
    relatively exactly I actually looked it
  • 00:00:38
    up yesterday 15 years it's the first
  • 00:00:40
    time I encountered one of your working
  • 00:00:41
    papers in 2009 this is your work on
  • 00:00:45
    video games Ethan has been at the
  • 00:00:48
    leading front of technology for a very
  • 00:00:51
    very long time um Ethan um made a big
  • 00:00:55
    splash coming out of MIT kind of making
  • 00:00:58
    video games actually a subject
  • 00:01:00
    in academic research there's been
  • 00:01:02
    literally hundreds of papers afterwards
  • 00:01:04
    building on his work um Ethan then wrote
  • 00:01:06
    one of the pioneering papers on
  • 00:01:09
    crowdfunding I I I didn't look it up
  • 00:01:11
    recently but it has thousands and
  • 00:01:13
    thousands of citations so he became the
  • 00:01:15
    leading scholar on crowdfunding um he
  • 00:01:17
    got tenured at um at the Wharton
  • 00:01:20
    business school and then he took on AI
  • 00:01:23
    and has now become one of the leading
  • 00:01:26
    Scholars I don't think I'm allowed to
  • 00:01:27
    kind of say who gets him out as a guest
  • 00:01:29
    speaker but all the big corporations
  • 00:01:31
    these days get Ethan out and say like we
  • 00:01:34
    want to understand what this is about
  • 00:01:36
    how can we manage this how can how can
  • 00:01:38
    we work with this um I have so many
  • 00:01:41
    positive things to say about Ethan um
  • 00:01:44
    it's hard to it's hard to say um there's
  • 00:01:46
    probably no one who I feel more
  • 00:01:49
    energized um um energized after talking
  • 00:01:52
    with Ethan he's like always bubbling
  • 00:01:54
    with fantastic ideas it's not by
  • 00:01:56
    accident that he's always at the leading
  • 00:01:58
    front um he's in a beautiful beautiful
  • 00:02:01
    way incredibly curious and Incredibly
  • 00:02:04
    empathetic so it's a great great
  • 00:02:06
    pleasure and great great honor to have
  • 00:02:07
    you here Ethan thank you so much for
  • 00:02:09
    joining that is such a flattering intro
  • 00:02:11
    I'm I'm thrilled to be
  • 00:02:12
    here Ethan tell us about the book um the
  • 00:02:16
    book has just come out um Co-
  • 00:02:19
    Intelligence Living and Working with AI
  • 00:02:21
    what it is about what are the Big Ideas
  • 00:02:23
    here um okay so you know it's a really
  • 00:02:26
    interesting thing I I I found myself at
  • 00:02:29
    accidentally sort of at the center of a
  • 00:02:30
    lot of AI things um and um over the last
  • 00:02:35
    year and a half or so I've been exper I
  • 00:02:37
    worked in the media lab with Marvin
  • 00:02:38
    Minsky back in the day was one of the
  • 00:02:40
    founders of AI but I've never been the
  • 00:02:41
    technical person I've always been the
  • 00:02:43
    sort of business how does this all
  • 00:02:44
    matter person and you kind of talking
  • 00:02:46
    about the themes of my work I've been
  • 00:02:47
    thinking about games and teaching at
  • 00:02:48
    scale and teaching at distance for a
  • 00:02:50
    very long time so I've playing with AI
  • 00:02:52
    for a while actually my students even
  • 00:02:53
    before ChatGPT came out cheating with
  • 00:02:56
    AI there was an explicit assignment
  • 00:02:57
    where they had to cheat by create uh by
  • 00:03:00
    uh writing an essay with AI even before
  • 00:03:01
    chat came out so I was kind of in place
  • 00:03:04
    when this all happened um and I've kind
  • 00:03:06
    of been watching from the front lines as
  • 00:03:08
    as AI spread and took off I read a few
  • 00:03:11
    papers on it um and I talked to all the
  • 00:03:13
    AI companies once a week or so and the
  • 00:03:14
    idea was like to try and give a sense of
  • 00:03:17
    where we are and what's going on um and
  • 00:03:19
    it's a little hard right a book is
  • 00:03:21
    against a moving Target like AI I think
  • 00:03:23
    I think it kind of nailed it in this space
  • 00:03:24
    but it's kind of an overview it's sort
  • 00:03:25
    of the idea of where we are right now at
  • 00:03:28
    the capability curve of AI is that it
  • 00:03:30
    does act as a kind of co-intelligence if
  • 00:03:32
    you use it properly it can uh pretty
  • 00:03:35
    much offer enhancements to many forms of
  • 00:03:37
    human performance it's not good at some
  • 00:03:39
    things you'd expect to be good at good
  • 00:03:41
    at others and the attempt was to kind of
  • 00:03:43
    show where we are and where we might be
  • 00:03:44
    heading in the near future um and my my
  • 00:03:47
    inspiration was actually a book that
  • 00:03:48
    inspired me when I was an undergraduate
  • 00:03:49
    called Being Digital by Nicholas Negroponte
  • 00:03:51
    who wrote in the 90s he was the head
  • 00:03:53
    of the Media Lab about where we were with
  • 00:03:55
    digital technology and I I wanted to try
  • 00:03:57
    and do the same thing here didn't he do
  • 00:03:59
    the famous $100 laptop is that him or
  • 00:04:01
    that was him afterwards yes okay okay
  • 00:04:04
    the um what is the co-intelligence Ethan
  • 00:04:07
    um what's the what's the idea behind the
  • 00:04:09
    term I think it's a great title for the
  • 00:04:10
    book um what's the what what do you mean
  • 00:04:13
    by co-intelligence so the idea is that
  • 00:04:16
    we are in
  • 00:04:18
    that AI is not alive it's not sentient
  • 00:04:22
    but it can make you think it is right we
  • 00:04:24
    don't 100% know nobody actually quite
  • 00:04:26
    knows why the Transformer architecture
  • 00:04:29
    that you know was developed in 2017
  • 00:04:32
    based on other machine learning
  • 00:04:33
    architectures why with enough scale it
  • 00:04:35
    sort of starts to act like it thinks
  • 00:04:37
    like a human and so I don't deal with
  • 00:04:39
    the philosophy very much behind it but I
  • 00:04:41
    do try and deal with the Practical
  • 00:04:42
    implications and we now have a you know
  • 00:04:44
    a profusion of papers and research some
  • 00:04:46
    of I've done some of it other colleagues
  • 00:04:47
    have done in other places and lots of
  • 00:04:49
    other people working on this that shows
  • 00:04:51
    that practically for example the AI out-
  • 00:04:53
    ideates humans so my colleagues um you
  • 00:04:55
    know Karl Ulrich and Christian Terwiesch who
  • 00:04:57
    literally wrote the book on innovation
  • 00:04:59
    their graduate students and and another
  • 00:05:01
    professor had 200 of the MBA
  • 00:05:04
    students uh in their class on Innovation
  • 00:05:06
    a famous one that's raised a lot of
  • 00:05:07
    money uh to you know um generate ideas
  • 00:05:11
    and then they had the AI generate 200
  • 00:05:12
    ideas they had outside people judge the
  • 00:05:14
    ideas by willingness to pay of the top
  • 00:05:16
    40 ideas by willingness to pay 35 came
  • 00:05:18
    from GPT-4 only five from the students in
  • 00:05:20
    the room when we did a study at BCG we
  • 00:05:22
    found a 40% improvement from naively
  • 00:05:24
    using GPT-4 like these are very big
  • 00:05:27
    effects with the most elite sort of
  • 00:05:28
    business uses we're seeing some things
  • 00:05:29
    in law medicine um it still doesn't
  • 00:05:32
    replace a human but if you're not using
  • 00:05:34
    this as a supplement for creativity for
  • 00:05:36
    Innovation for even if you know for for
  • 00:05:38
    writing then you're you're sort of
  • 00:05:41
    leaving yourself behind we have the
  • 00:05:42
    option to actually boost intelligence
  • 00:05:43
    the first time we've had for a
  • 00:05:44
    long time machines that could boost your
  • 00:05:46
    ability to do work right if I told you
  • 00:05:48
    that you need to dig a ditch behind your
  • 00:05:50
    backyard you wouldn't get your 20
  • 00:05:51
    stoutest friends together to dig a ditch
  • 00:05:53
    you would hire a machine or a crew to do
  • 00:05:55
    that for you um or you know a rent
  • 00:05:57
    equipment to do it in the same way we've
  • 00:05:59
    never had a machine that improves how we
  • 00:06:02
    think or what we could think and now we
  • 00:06:04
    do we have like a backhoe of the mind
  • 00:06:05
    and that's a really big
  • 00:06:07
    deal you see there's there's something
  • 00:06:09
    qualitatively different about it right
  • 00:06:11
    so take your take your study you did
  • 00:06:13
    with the BCG Consultants right where you
  • 00:06:15
    basically say like look um in
  • 00:06:17
    classic kind of consulting tasks um
  • 00:06:19
    ChatGPT is better um than better than
  • 00:06:22
    these BCG consultants what is
  • 00:06:24
    qualitatively different about this than
  • 00:06:26
    comparing a car to a horse and like hey
  • 00:06:30
    it has more pulling power I I think it's
  • 00:06:31
    different but I have I have a hard time
  • 00:06:33
    nailing it Ethan what's so what's
  • 00:06:35
    different about ChatGPT or
  • 00:06:38
    like GenAI more broadly compared
  • 00:06:42
    to like prior Technologies is there
  • 00:06:44
    something about this technological shift
  • 00:06:46
    where you would say like Ah that's
  • 00:06:48
    qualitatively different I mean I think
  • 00:06:50
    it's different in every possible way we
  • 00:06:51
    again we've never had the only general
  • 00:06:54
    purpose um you know thing that's
  • 00:06:55
    improved human thinking before has been
  • 00:06:57
    like this like you know coffee basically
  • 00:07:00
    um yeah we both got them right so um so
  • 00:07:04
    I don't think it's just qualitative like
  • 00:07:05
    we haven't we've had a whole bunch of
  • 00:07:07
    revolutions around mechanical use we've
  • 00:07:09
    had revolutions around
  • 00:07:12
    cognitive tools like spreadsheets and
  • 00:07:14
    you studied you know chess computers
  • 00:07:16
    right in narrow Fields we've never had
  • 00:07:17
    general intelligence uh in this in this
  • 00:07:20
    kind of way before right um so like it
  • 00:07:23
    is a new thing in the world um and I
  • 00:07:26
    don't think we know the full
  • 00:07:27
    implications and what's kind of shocking
  • 00:07:29
    how quickly both adoption is happening
  • 00:07:32
    and um how how much latent capabilities
  • 00:07:34
    is in these systems that people don't
  • 00:07:36
    understand yet so part of what amazes me
  • 00:07:38
    is even if we stopped technological
  • 00:07:40
    development today we would still on GPT-4
  • 00:07:43
    based models alone have 10 years of
  • 00:07:45
    figuring out how to absorb this into
  • 00:07:48
    work you see this is something that I
  • 00:07:50
    found Most Fascinating you have you have
  • 00:07:52
    this in the book the example that ChatGPT
  • 00:07:53
    can actually play chess despite the fact
  • 00:07:56
    that it's not been trained to play chess
  • 00:07:58
    right uh um and it's I forgot but it's
  • 00:08:01
    relative it's it's quite good right it's
  • 00:08:02
    like an ELO score of 1,500 or something
  • 00:08:05
    like that it seems like Claude might be
  • 00:08:06
    2,000 also so we don't know for sure but
  • 00:08:09
    again they it's not even that they're
  • 00:08:10
    not trained to play chess right like
  • 00:08:12
    that alone is weird for example ChatGPT
  • 00:08:15
    which is trained on a lot of garbage
  • 00:08:17
    actually like if you look at the pile
  • 00:08:18
    the data that the AI was really really
  • 00:08:20
    trained on it's not just sort of the
  • 00:08:23
    internet common crawl and things like
  • 00:08:24
    that but 6% of the data set is Enron's
  • 00:08:27
    emails uh the failed US Energy company
  • 00:08:30
    because it went bankrupt those went
  • 00:08:31
    in the public domain there's a huge
  • 00:08:33
    amount of Harry Potter fanfiction inside
  • 00:08:36
    inside the uh the training material so
  • 00:08:38
    out of all this random stuff right we
  • 00:08:40
    have this very capable tool that can
  • 00:08:42
    beat most doctors in medical advice
  • 00:08:45
    you know do well in law um and you know
  • 00:08:47
    all of these other topics that you
  • 00:08:49
    wouldn't expected to be good at so it is
  • 00:08:51
    pretty amazing to see and we don't know
  • 00:08:54
    quite why it's so good at this so chess
  • 00:08:56
    the weird thing about the chess piece is
  • 00:08:57
    not just that it does chess but that it
  • 00:08:59
    doesn't there's no computer there
  • 00:09:00
    there's no planning um how it could you
  • 00:09:03
    know there's more states in chess than
  • 00:09:05
    the AI has the ability to kind of
  • 00:09:07
    remember those positions so we don't
  • 00:09:08
    actually even know why it's so good at
  • 00:09:10
    chess so one thing I personally really
  • 00:09:13
    like about the book is is that it's it's
  • 00:09:16
    it's a very pragmatic book right you see
  • 00:09:18
    like there's a lot of debates about like
  • 00:09:20
    oh will this actually lead to the
  • 00:09:22
    apocalypse or will this kind of what do
  • 00:09:24
    we do about copyrights and you touch up
  • 00:09:26
    on these things but you write at the end
  • 00:09:28
    of the day you write a very pragmatic book
  • 00:09:29
    in the sense of like how can we actually
  • 00:09:32
    use this and you kind of prescribed
  • 00:09:34
    these four rules or four principles um
  • 00:09:37
    about um about how to use how to use AI
  • 00:09:40
    can can you talk a little bit about this
  • 00:09:42
    because you see a lot of people on the
  • 00:09:43
    call will probably think about okay how
  • 00:09:45
    do I now make use of this in my life so
  • 00:09:48
    so I would love to kind of get a little
  • 00:09:50
    bit with you um into these principles so
  • 00:09:52
    the first principles to point out is
  • 00:09:54
    always invite AI to the table say a
  • 00:09:58
    little bit about this all right so
  • 00:09:59
    there's a little caveat which is where
  • 00:10:00
    you ethically and legally can so I want
  • 00:10:02
    to make that clear but the idea is that
  • 00:10:06
    with
  • 00:10:07
    um we don't know what it does well or
  • 00:10:09
    badly so the Boston Consulting Group
  • 00:10:12
    paper I mentioned before where we found
  • 00:10:13
    a huge performance Improvement we uh
  • 00:10:16
    also noticed that you know people don't
  • 00:10:18
    know in advance what the AI does or
  • 00:10:20
    doesn't do well so um you know famously
  • 00:10:23
    right I could ask um you know I could
  • 00:10:25
    ask and maybe I'll show some demos later
  • 00:10:27
    but I could ask uh you know GPT-4 to
  • 00:10:30
    write me a you know an academic paper or
  • 00:10:32
    summarize an academic paper in a sonnet
  • 00:10:34
    it'll do a really good sonnet if I ask
  • 00:10:36
    it to write a 25-word paragraph it
  • 00:10:38
    won't do that because it doesn't see
  • 00:10:40
    words the way we do it sees tokens it's
  • 00:10:42
    bad at math good at tell better at
  • 00:10:44
    empathy than most doctors how do you
  • 00:10:46
    deal with a system that can write a
  • 00:10:47
    sonnet but not 25 words where it might be
  • 00:10:50
    really good at your field but might make
  • 00:10:52
    mistakes you have to use it to figure
  • 00:10:54
    out what it's good at and one thing I
  • 00:10:55
    really want to emphasize to everyone in
  • 00:10:57
    the call and I think if I had one
  • 00:10:58
    message like the key message is nobody
  • 00:11:01
    knows anything right what I mean is I
  • 00:11:03
    talk to open AI on a regular basis I
  • 00:11:05
    talk to anthropic I talk to Google and
  • 00:11:08
    there isn't a secret instruction manual
  • 00:11:09
    out there there is not like the actual
  • 00:11:11
    way AI works and everybody knows it and
  • 00:11:13
    you just have to pay a consultant or
  • 00:11:15
    wait for open AI to tell you the answer
  • 00:11:17
    Nobody Knows the answer in whatever
  • 00:11:18
    subfield you're in nobody can tell you
  • 00:11:21
    how to best use AI to do it so the first
  • 00:11:24
    piece is just to use it to figure that
  • 00:11:25
    out to figure out where it's strong or
  • 00:11:26
    weak where it complements you where it
  • 00:11:28
    doesn't and you have to use it I think
  • 00:11:30
    10 hours is my minimum requirement and
  • 00:11:32
    you have to by the way use a GPT-4-
  • 00:11:35
    class frontier model so another thing
  • 00:11:37
    to know about AI is there's a scaling
  • 00:11:39
    law that holds in effect right now the
  • 00:11:41
    bigger your model is which means the
  • 00:11:42
    more information that goes into it but
  • 00:11:44
    also the more training it takes the more
  • 00:11:46
    expensive it is the smarter it is
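
A toy sketch to make the scaling-law point concrete (added for this summary; the power-law form echoes published scaling-law papers, but the constants are illustrative assumptions, not measurements of any real model):

```python
# Illustrative scaling law: loss falls as a smooth power law in model size.
# The constants below are assumptions for demonstration only.
def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    # hypothetical Kaplan-style form: L(N) = (N_c / N) ** alpha
    return (n_c / n_params) ** alpha

for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
# Each 10x in size buys a predictable drop in loss, which is why the advice
# is to use the biggest frontier-class model you can get.
```
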
  • 00:11:50
    so GPT-4 does incredibly well on things like the
  • 00:11:52
    bar exam or medical licensing exams
  • 00:11:54
    compared to GPT-3.5 in fact GPT-4 which
  • 00:11:57
    is the smartest kind of model out there
  • 00:11:59
    outpaces and there's two others right
  • 00:12:01
    now these frontier models outpace
  • 00:12:03
    specialized models so Bloomberg spent
  • 00:12:06
    $10 million training BloombergGPT which
  • 00:12:09
    was a specialized AI model built for you
  • 00:12:12
    know financial analysis but it even
  • 00:12:15
    after spending $10 million on it GPT-4
  • 00:12:17
    does better on financial analysis than
  • 00:12:19
    the specialized model so you want to use
  • 00:12:21
    the most advanced model you can the
  • 00:12:22
    three options right now for you and it's
  • 00:12:24
    different in Europe because I know
  • 00:12:25
    there's limitations but that's Claude 3
  • 00:12:28
    Opus there's GPT-4 which you can also get
  • 00:12:30
    access to for free in a limited form
  • 00:12:32
    through Microsoft co-pilot and you have
  • 00:12:34
    to use the purple creative mode I know
  • 00:12:36
    that's details but uh I'll try and put
  • 00:12:38
    them in the chat uh or you can use uh
  • 00:12:40
    Google's Gemini Advanced you have to use
  • 00:12:42
    one of those three models and you have
  • 00:12:43
    to spend 10 hours using it like that's
  • 00:12:45
    the easiest way to get
  • 00:12:48
    started
  • 00:12:50
    so Ethan we're both at a business school
  • 00:12:52
    where's going to be competitive
  • 00:12:54
    Advantage coming from you see like for a
  • 00:12:55
    while there was and this was the idea
  • 00:12:57
    behind the Bloomberg model right to say
  • 00:12:58
    like look these are just a bunch of
  • 00:13:00
    algorithms but we have kind of the
  • 00:13:01
    exclusive data and so we're going to do
  • 00:13:03
    better than others right the Bloomberg
  • 00:13:06
    case you just you just referred to kind
  • 00:13:08
    of proves that somewhat wrong right like
  • 00:13:11
    all the data that Bloomberg has and the
  • 00:13:13
    data that Bloomberg has somewhat
  • 00:13:14
    exclusively is obviously not putting
  • 00:13:16
    them in a position to build a better
  • 00:13:18
    GenAI
  • 00:13:20
    model yeah and I mean I think
  • 00:13:22
    that one of the really intriguing things
  • 00:13:25
    is the the definition of AI has changed
  • 00:13:28
    dramatically by the way I've seen some notes
  • 00:13:29
    that say talk slower I do the best I can uh
  • 00:13:31
    I always accelerate uh a little bit so
  • 00:13:33
    hopefully you have AI closed captioning on
  • 00:13:35
    perhaps um but um the um but uh the the
  • 00:13:41
    sort of way what AI meant before
  • 00:13:43
    ChatGPT came out was large-scale machine
  • 00:13:46
    Learning System so that was how Amazon
  • 00:13:48
    was able to recommend products to you or
  • 00:13:50
    Netflix was able to recommend a movie to
  • 00:13:52
    watch it's how Tesla was able to have
  • 00:13:53
    its car drive right observe lots of data
  • 00:13:56
    points and then we can fit lots of
  • 00:13:57
    logistic regressions to them and we
  • 00:13:59
    could predict the next data points in a
  • 00:14:00
    series what those systems couldn't do
  • 00:14:02
    was predict the next word in a sentence
  • 00:14:04
    because if a sentence ended with the
  • 00:14:05
    word filed the AI didn't know whether
  • 00:14:06
    you were filing your taxes or filing
  • 00:14:08
    your nails the Transformer architecture
  • 00:14:10
    let AI pay attention to the entire um
  • 00:14:13
    you know uh and with attention mechanism
  • 00:14:15
    and let AI pay attention to the entire
  • 00:14:17
    context in which a piece of information
  • 00:14:19
    appeared and thus write um you know
  • 00:14:21
    coherent text right
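
A minimal sketch of that attention idea, added for readers of this summary (illustrative NumPy, not anything shown in the webinar): each token's representation becomes a weighted mix of every token's representation, which is how "filed" gets read in the context of "taxes" or "nails":

```python
# Scaled dot-product self-attention: softmax(x x^T / sqrt(d)) x.
# Real Transformers add learned Q/K/V projections, multiple heads,
# and many stacked layers; this is just the core mixing step.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)              # token-to-token relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)         # each row: attention weights
    return w @ x                               # context-weighted combination

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))               # 4 toy tokens, 8-dim embeddings
print(self_attention(tokens).shape)            # (4, 8): every token now carries
                                               # context from the whole sentence
```
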
  • 00:14:24
    so that's one transformation people still think
  • 00:14:26
    their data is valuable but the system
  • 00:14:29
    the P in GPT stands for pre-trained the
  • 00:14:31
    AI already knows many many things about
  • 00:14:33
    the world and it isn't clear where your
  • 00:14:35
    own data matters as much and there's
  • 00:14:37
    it's desperate companies are desperate
  • 00:14:39
    to try and figure out you know it must
  • 00:14:40
    be that our own data matters a lot it's
  • 00:14:42
    just not clear how much it does and to
  • 00:14:44
    the extent that it does matter it's not
  • 00:14:45
    clear the best way to get them in
  • 00:14:46
    systems and we can talk about context
  • 00:14:48
    windows and a whole bunch of other
  • 00:14:49
    approaches so I think it's a big
  • 00:14:51
    transformation one the one of the three
  • 00:14:52
    things I see companies struggle with one
  • 00:14:54
    is the idea their own data probably
  • 00:14:56
    doesn't matter as much as they thought
  • 00:14:57
    right the second of them is is that um
  • 00:15:00
    this is best used at the individual level
  • 00:15:02
    uh to learn what it does and so that
  • 00:15:03
    means democratizing use out to the
  • 00:15:05
    individual end users right is a second
  • 00:15:07
    big gap that they don't have and the
  • 00:15:09
    third Gap they don't have is they don't
  • 00:15:10
    realize that the everyone in the world
  • 00:15:13
    has access to a better model than them
  • 00:15:14
    like you go to Goldman Sachs they have
  • 00:15:16
    worse AI than the average kid in
  • 00:15:18
    Mozambique because the average kid in
  • 00:15:20
    Mozambique can access if they have internet
  • 00:15:22
    access can access co-pilot for free
  • 00:15:24
    which is GPT-4 and inside any large
  • 00:15:26
    company in the US or Europe they're
  • 00:15:28
    almost certainly experimenting with
  • 00:15:29
    worse models with more restrictions and
  • 00:15:31
    we're used to like technology coming
  • 00:15:32
    from the top so that's
  • 00:15:36
    unusual
  • 00:15:39
    the coming back to the I would love to
  • 00:15:42
    to kind of jump a little bit on the on
  • 00:15:43
    the principle so so
  • 00:15:45
    um what does it mean to be the human in
  • 00:15:48
    the loop here Ethan what's the point of
  • 00:15:50
    being the human in the loop why do what
  • 00:15:52
    is it and I think it touches up on
  • 00:15:54
    something very important what's going to
  • 00:15:55
    be the future role of of us of humans in
  • 00:15:58
    the kind of co-productions can you say a
  • 00:16:01
    little bit about what it means to be the
  • 00:16:03
    human in the
  • 00:16:04
    loop okay so the main
  • 00:16:08
    phrase human in the loop comes from
  • 00:16:09
    control systems and by the way I typed
  • 00:16:11
    in uh into the answers the three leading
  • 00:16:14
    models in case people wanted them
  • 00:16:15
    they're in the Q&A um we're also GNA
  • 00:16:17
    make the recording available so follow I
  • 00:16:20
    still talk too fast but anyway um the uh
  • 00:16:24
    I recorded my own audiobook by
  • 00:16:25
    the way and this 270 page book I slowed
  • 00:16:27
    down as best I could but it's like four
  • 00:16:29
    and a half hours of audiobook so um you
  • 00:16:31
    know at least you get a lot of words per
  • 00:16:33
    minute have to do 1.5 times speed but
  • 00:16:36
    the human in the loop piece is the idea
  • 00:16:38
    that um that from control systems that
  • 00:16:41
    you want a human making a decision in
  • 00:16:43
    the end right so from autonomous
  • 00:16:45
    weaponry and other kinds of things you
  • 00:16:46
    need human judgment I I think that's
  • 00:16:48
    important but I actually mean in a
  • 00:16:49
    slightly different way which is if the
  • 00:16:51
    AI is at the 80th percentile of
  • 00:16:53
    performance in some areas it already
  • 00:16:54
    probably outperforms you like many of my
  • 00:16:55
    students English is their second or
  • 00:16:57
    third language AI writes better than
  • 00:16:59
    them in English right now I mean there's
  • 00:17:01
    quirks so we can talk about how to make
  • 00:17:02
    the writing higher quality because
  • 00:17:04
    initially it feels very AI writing but
  • 00:17:06
    you can make it feel human after just a
  • 00:17:07
    couple of iterations it's not that hard
  • 00:17:10
    but in any case um what does that mean
  • 00:17:13
    right it already is superhuman for them in
  • 00:17:15
    that in that zone but almost right now
  • 00:17:17
    because of where AI is whatever you're
  • 00:17:19
    best at whatever you're in the top 1% of
  • 00:17:21
    or 10% of you're definitely better than
  • 00:17:23
    AI so part of what you want to think
  • 00:17:24
    about is what are you doubling down on
  • 00:17:27
    like what do you want to do because
  • 00:17:28
    often what you're best at is what you
  • 00:17:30
    like doing the most and there's a lot of
  • 00:17:32
    stuff you don't like doing very well
  • 00:17:33
    that you could hand over so you know
  • 00:17:35
    when I am for example um you know I
  • 00:17:38
    don't like doing expense reports so I
  • 00:17:39
    have the AI help me with expense reports
  • 00:17:41
    or help me fill out a you know a a
  • 00:17:43
    standard form inside the university in
  • 00:17:45
    which there's many of them so I'm
  • 00:17:47
    focusing on what I do really well and
  • 00:17:48
    what I care about and some of being a
  • 00:17:50
    human in the loop is assuming if AI keeps
  • 00:17:52
    improving which I think it will what do
  • 00:17:54
    you want to focus
  • 00:17:57
    on what is this um one of the most
  • 00:18:00
    fascinating things is this Jagged
  • 00:18:02
    Frontier Ethan and you've already kind
  • 00:18:03
    of touched a little bit up on it can you
  • 00:18:05
    say what it what it is and how it
  • 00:18:08
    affects all of us okay so because the
  • 00:18:11
    capabilities of AI are strange because
  • 00:18:13
    they have these weird limits right um
  • 00:18:16
    and I may actually throw up some stuff
  • 00:18:17
    here let me let me let me let me let me
  • 00:18:19
    throw some stuff is that okay if I put
  • 00:18:20
    some stuff on screen sure go go go ahead
  • 00:18:22
    all right sorry I'm just this it helps
  • 00:18:24
    to illustrate some of these things so
  • 00:18:26
    I'm GNA we'll keep we'll keep
  • 00:18:27
    interviewing here but um you know if I
  • 00:18:29
    if I take something right like so I
  • 00:18:32
    could and this is you should be able to
  • 00:18:33
    see in a second here's you know just
  • 00:18:35
    chat GPT and if I you know give it for
  • 00:18:39
    example you know let's pick up something
  • 00:18:41
    here let's give it a um let's give it a
  • 00:18:45
    data set to analyze right I could say
  • 00:18:47
    you know analyze this data give me cool
  • 00:18:55
    hypotheses do
  • 00:18:57
    sophisticated
  • 00:18:59
    analysis to test
  • 00:19:02
    them um write it up
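
For readers who would rather script this kind of prompt than type it into ChatGPT, here is a minimal sketch assuming the official `openai` Python client, an `OPENAI_API_KEY` environment variable, and a hypothetical `dataset.csv`; note the bare API returns text only and will not execute analysis code the way the ChatGPT interface shown in the demo does:

```python
# Minimal sketch of sending an open-ended analysis prompt to a GPT-4-class
# model. Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# "dataset.csv" is a hypothetical placeholder file.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("dataset.csv", encoding="utf-8") as f:
    data = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Analyze this data, give me cool hypotheses, and do "
                   "sophisticated analysis to test them. Write it up.\n\n" + data,
    }],
)
print(response.choices[0].message.content)
```
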
  • 00:19:06
    we'll see if ChatGPT works today you never know how the
  • 00:19:07
    systems work right it's going to be able
  • 00:19:09
    to do it I mean it's going to be able to
  • 00:19:11
    do this which is a really sophisticated
  • 00:19:12
    thing that we teach our students to do
  • 00:19:14
    right it's going to look at it's never
  • 00:19:15
    seen this data set before by the way
  • 00:19:17
    in the training data uh so it's going to
  • 00:19:19
    look at the data the way we would as a
  • 00:19:21
    human being right and it's going to
  • 00:19:24
    actually be able to speculate um it's
  • 00:19:26
    actually data about machine learning
  • 00:19:27
    sets it figures out what it is um and
  • 00:19:30
    it's guessing based as we we would it's
  • 00:19:32
    going to generate hypotheses that are
  • 00:19:34
    novel hypotheses here right um these are
  • 00:19:38
    interesting questions I I think to to
  • 00:19:40
    ask if we gave it um information here um
  • 00:19:44
    and it's going to you know um it's it's
  • 00:19:46
    going to clean the data set and do this
  • 00:19:48
    kind of work but before I have
  • 00:19:49
    it do that I'm going to say
  • 00:19:52
    summarize this you know in uh in uh in
  • 00:19:57
    rhyming couplets
  • 00:20:03
    right and you know it will uh hopefully
  • 00:20:06
    right it'll do this here um and so you
  • 00:20:09
    know pretty nice way to summarize I think we
  • 00:20:11
    should do all of our academic research
  • 00:20:13
    summarized in rhyming couplets by the
  • 00:20:15
    way Henning um but you know as I said
  • 00:20:18
    summarize in 25
  • 00:20:22
    words um how many words is
  • 00:20:27
    that
  • 00:20:29
    and actually it's decided to write
  • 00:20:32
    code to figure out how many words it is
  • 00:20:33
    I got 25 it doesn't always do that right
  • 00:20:35
    so one of the things about the AI piece
  • 00:20:37
    is I if I didn't know that it may not
  • 00:20:40
    give me 25 words I could mess up if I
  • 00:20:42
    was doing a 25-word summary right
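
A small sketch of why exact word counts are hard, added for this summary and assuming the `tiktoken` tokenizer library: the model generates tokens, and token boundaries rarely line up with word boundaries, so "exactly 25 words" is a target it cannot see directly:

```python
# Words vs. tokens: the model sees tokens, not words.
# Assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
sentence = "Summarize this analysis in exactly twenty-five words."

tokens = enc.encode(sentence)
print("words :", len(sentence.split()))      # count of whitespace-split words
print("tokens:", len(tokens))                # usually a different number
print([enc.decode([t]) for t in tokens])     # pieces like ' exactly', '-five'
```
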
  • 00:20:45
    and similarly on the analysis I know what it's
  • 00:20:47
    going to be good or bad at for doing
  • 00:20:48
    this so this is the idea of this Jagged
  • 00:20:50
    Frontier the idea that it's good at some
  • 00:20:52
    things you'd expect bad at some things
  • 00:20:54
    you wouldn't expect it to be and that
  • 00:20:55
    capability set is a problem because the
  • 00:20:58
    other thing we document in these papers
  • 00:20:59
    and building on the work of uh of a few
  • 00:21:02
    other researchers is that people fall
  • 00:21:04
    asleep at the wheel when faced with
  • 00:21:06
    material with AI answers the answers are
  • 00:21:08
    so convincing that nobody checks them so
  • 00:21:11
    if you could if you are using the AI for
  • 00:21:13
    something it's not good at you could
  • 00:21:14
    be easily led astray and start doing the
  • 00:21:16
    wrong kinds of of of work and you won't
  • 00:21:19
    even notice so understanding what's good
  • 00:21:21
    or bad it helps you avoid those kind of
  • 00:21:23
    issues of hallucination this is
  • 00:21:25
    interesting because in other context you
  • 00:21:26
    see humans humans deal with this in in
  • 00:21:29
    different context in very different ways
  • 00:21:30
    for example for newspapers when people
  • 00:21:32
    read in newspapers things about their
  • 00:21:35
    domain and they realize oh the
  • 00:21:37
    journalist is not particularly good in
  • 00:21:39
    this my domain they still believe the
  • 00:21:41
    rest of the newspaper so somehow the
  • 00:21:43
    reputation of the newspaper seems not to
  • 00:21:44
    take a very strong hit okay um here but
  • 00:21:49
    here I need to be very very
  • 00:21:50
    sophisticated about it right that I
  • 00:21:52
    really need to understand oh I'm dealing
  • 00:21:53
    with a tool which is extremely good in
  • 00:21:55
    some aspects but quite bad in another
  • 00:21:58
    and need to figure out how good it is
  • 00:22:01
    yeah and and no one can tell you the
  • 00:22:03
    benchmarks are all terrible I mean this
  • 00:22:05
    is what like that is what makes this so
  • 00:22:07
    weird right is like you can't figure out
  • 00:22:09
    what's good or bad until you figure out
  • 00:22:10
    what it's good or bad at my recommendation
  • 00:22:13
    and I talked about this a bit in the in
  • 00:22:14
    the book as well is is that you kind of
  • 00:22:16
    have to commit a bit of a sin and that
  • 00:22:18
    sin is that you have to you have to
  • 00:22:20
    assume that the um you have to kind of
  • 00:22:23
    act with the AI like it's a person not a
  • 00:22:25
    machine so if you start working with
  • 00:22:27
    like a person it becomes more natural to
  • 00:22:29
    realize ah this person is good at this
  • 00:22:30
    and bad at this they're bsing me at this
  • 00:22:32
    and not and you interact with it in a
  • 00:22:34
    normal way that sometimes helps you
  • 00:22:36
    understand what it's good or bad at uh and
  • 00:22:37
    the reason it's a sin is
  • 00:22:39
    anthropomorphizing is considered to be a
  • 00:22:40
    really bad idea in machine learning even
  • 00:22:42
    though they all do it because if you
  • 00:22:45
    anthropomorphize it suggests that you
  • 00:22:47
    know you start to let down your guard
  • 00:22:48
    and maybe you get manipulated maybe you
  • 00:22:50
    but it also is the only way to work with
  • 00:22:51
    these things effectively so it's a bit
  • 00:22:53
    of a paradox but I just recommend
  • 00:22:55
    talking like a person and you start to
  • 00:22:56
    realize they have different
  • 00:22:57
    personalities the three work very
  • 00:22:59
    differently I do a lot of Education work
  • 00:23:01
    on this for example Google's Gemini um
  • 00:23:04
    really wants to help you out so when we
  • 00:23:07
    try to build tools with it that help
  • 00:23:09
    students make errors and correct them it
  • 00:23:11
    doesn't want to let the student make an
  • 00:23:12
    error it will jump in in the middle of
  • 00:23:14
    it and say like you got that wrong but
  • 00:23:15
    let's assume you got it right uh you
  • 00:23:17
    know and like let's keep going as if you
  • 00:23:18
    got it right which I think is very funny
  • 00:23:20
    Cloud 3 tends to be you know very
  • 00:23:22
    flowery and personable about things like
  • 00:23:24
    they have different approaches and
  • 00:23:25
    different strengths and weaknesses and
  • 00:23:26
    you have to use them to get those things
  • 00:23:31
    I really like this idea of like playing
  • 00:23:33
    with
  • 00:23:33
    theorization this is a part in the book
  • 00:23:36
    which is kind of it's not that the part
  • 00:23:38
    in the book is creepy it's that your
  • 00:23:39
    experience with the AI is kind of creepy
  • 00:23:42
    right um where you basically have the
  • 00:23:45
    discussion with the AI whether it's
  • 00:23:47
    sentient right can can you can you say a
  • 00:23:50
    little bit about this um in about the
  • 00:23:53
    about your conversation you have you
  • 00:23:56
    have with ChatGPT this it's a very funny
  • 00:23:58
    thing because you see like when I when I
  • 00:24:00
    read it in the book I first was like
  • 00:24:02
    okay Ethan has an easy time here all he
  • 00:24:04
    does is he puts a bunch of statements
  • 00:24:06
    into ChatGPT and then he kind of prints it
  • 00:24:08
    in a book but then I was like oh I would
  • 00:24:10
    not have had this conversation with ChatGPT
  • 00:24:12
    and that's really interesting so you're
  • 00:24:14
    asking the AI whether it's sentient and how
  • 00:24:17
    it makes you feel can you can you
  • 00:24:19
    describe that experience a little bit
  • 00:24:21
    Yeah so let me zoom out a bit
  • 00:24:23
    um think about the the nature of how the
  • 00:24:25
    AI works right it's trained on all of
  • 00:24:28
    human writing and it desperately wants
  • 00:24:30
    to like a lot of that is dialogue and it
  • 00:24:32
    wants to be a dialogue partner with you
  • 00:24:35
    it wants to have a conversation with you
  • 00:24:37
    uh in some ways I think the Turning
  • 00:24:38
    Point moment for AI was not even just
  • 00:24:40
    the release of ChatGPT um but it was
  • 00:24:43
    the decision by Microsoft to keep their
  • 00:24:46
    GPT bot up which was
  • 00:24:50
    Bing or Sydney now it's called um uh
  • 00:24:53
    co-pilot and because there was a famous
  • 00:24:55
    incident in February of last year where
  • 00:24:59
    this their Microsoft's um you know Bing
  • 00:25:01
    search engine was powered by GPT-4
  • 00:25:03
    before it was publicly released and I I
  • 00:25:05
    knew instantly something was up when I
  • 00:25:07
    started using it because it was much
  • 00:25:08
    smarter than chat GPT but it also was
  • 00:25:11
    kind of creepy because it wanted to have
  • 00:25:13
    dialogues and conversations with you it
  • 00:25:14
    was a search engine that wanted to get
  • 00:25:16
    in arguments right so that the head
  • 00:25:18
    technology writer for the New York Times
  • 00:25:19
    a guy named Kevin Roose published this
  • 00:25:21
    entire um almost you know chapter long
  • 00:25:25
    interaction he had in in the New York
  • 00:25:27
    Times with where the AI basically
  • 00:25:29
    stalked him and told him that he wanted
  • 00:25:31
    to you know replace his wife and it was
  • 00:25:33
    in love with him
  • 00:25:35
    and you know that was a pretty big deal
  • 00:25:38
    and Microsoft took Bing down as a result
  • 00:25:41
    but they only took it down for two days
  • 00:25:42
    they put it back up and that was the
  • 00:25:44
    deciding moment because like that was
  • 00:25:46
    about as freaky Behavior as you could
  • 00:25:47
    get from a search engine search engines
  • 00:25:48
    shouldn't tell you they're in love with you
  • 00:25:50
    and threaten your family um and the fact
  • 00:25:52
    that Microsoft didn't blink and kept
  • 00:25:54
    using it uh and kept it up was I think
  • 00:25:56
    the moment they decided to power through
  • 00:25:58
    all of the weird Parts about AI because
  • 00:25:59
    there had been all these ethical
  • 00:26:00
    constraints that stopped people from
  • 00:26:02
    deploying AI systems that went away
  • 00:26:04
    there all of this is to say that you
  • 00:26:06
    know I asked in the book about exactly
  • 00:26:09
    that that interaction and you know the
  • 00:26:11
    AI had intelligent seeming things to say
  • 00:26:14
    about it and what I tried to show in the
  • 00:26:16
    book was if you approach the AI in
  • 00:26:17
    different ways if I approach it as I I'm
  • 00:26:19
    a student you know it's a student I'm a
  • 00:26:21
    teacher it's more willing to listen to
  • 00:26:23
    me if I approach it that we're having a
  • 00:26:25
    debate or an argument it's more likely
  • 00:26:27
    to argue with me
  • 00:26:28
    if I approach it that I'm creeped out or
  • 00:26:30
    an awe of what it does it will get
  • 00:26:32
    creepy and more awe inspiring so one of
  • 00:26:34
    the things that you know you start to do
  • 00:26:36
    is you start to interact with the AI and
  • 00:26:37
    ask about sentience it will start to
  • 00:26:39
    respond to you in a way that seems
  • 00:26:41
    sentient because it knows the system you
  • 00:26:43
    know it doesn't really know but that's
  • 00:26:45
    the model that starts to take on so you
  • 00:26:47
    have to think about it as wanting to have
  • 00:26:48
    a dialogue with you and you can
  • 00:26:50
    unconsciously establish many kinds of
  • 00:26:51
    dialogue and if you establish a kind of
  • 00:26:53
    freaky dialogue it will get
  • 00:26:57
    freaky Mana has a question how
  • 00:26:59
    comfortable are you using um the word know
  • 00:27:02
    given um it is still a fancy
  • 00:27:05
    autocomplete what what's your take on
  • 00:27:07
    this Ethan is this a fancy autocomplete
  • 00:27:09
    oh it absolutely is a fancy autocomplete
  • 00:27:11
    I mean all the AI does is predict the
  • 00:27:13
    next token the next part of a word or
  • 00:27:16
    sometimes a whole word in a sentence
  • 00:27:18
    that's all it all it technically can do
  • 00:27:20
    right it's not planning actively or
  • 00:27:22
    things like that
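
A toy sketch of "fancy autocomplete", with invented numbers (a real model scores a vocabulary of roughly 100k tokens): the model turns a score for every candidate next token into a probability and samples one, and generation is just that step in a loop:

```python
# One next-token step: logits -> softmax -> sample.
# Vocabulary and scores are invented for illustration.
import numpy as np

vocab  = ["taxes", "nails", "paperwork", "lawsuits"]
logits = np.array([2.1, 1.8, 0.3, -0.5])   # hypothetical scores after "filed my"

probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax: scores -> probabilities

rng = np.random.default_rng(0)
print(rng.choice(vocab, p=probs))           # one sampled continuation
# Generation repeats this step: append the sampled token to the context,
# then predict again.
```
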
  • 00:27:25
    the weird part is a fancy autocomplete produces original text
  • 00:27:28
    that we haven't seen before that in
  • 00:27:31
    every study we do comes across as
  • 00:27:33
    original as meaningful as important it
  • 00:27:36
    gives advice that if you if you follow I
  • 00:27:38
    mean I think about um Rem Koning and company
  • 00:27:40
    some of our colleagues' studies where
  • 00:27:42
    the AI gave advice to uh to to
  • 00:27:45
    entrepreneurs in Kenya and if you
  • 00:27:47
    followed their advice and you were top
  • 00:27:48
    20 you know top half of the entrepreneurs
  • 00:27:50
    you have 20% higher profits following
  • 00:27:52
    the ai's advice how is the AI able to
  • 00:27:54
    offer fancy autocomplete able to offer
  • 00:27:56
    you advice as a Kenyan entrepreneur
  • 00:27:58
    about what you should do next we don't
  • 00:28:00
    know why this happens there's literally
  • 00:28:01
    no actually really good theory the best
  • 00:28:04
    theory I've seen about why AI is as good
  • 00:28:06
    as it is is Stephen Wolfram's argument
  • 00:28:09
    which is that with enough scale um AI
  • 00:28:11
    basically figured out the hidden
  • 00:28:12
    structure of human language and thus
  • 00:28:15
    simulates human thinking and human
  • 00:28:16
    language at a high level without
  • 00:28:17
    thinking we don't have a category for
  • 00:28:19
    this thing we don't understand how it
  • 00:28:21
    works there's no I mean we know how
  • 00:28:23
    Transformers work it is a fancy
  • 00:28:25
    autocomplete but why the fancy
  • 00:28:27
    autocomplete seems to think is very
  • 00:28:29
    strange now maybe that's all we are you
  • 00:28:31
    know to some extent fancy autocomplete I don't
  • 00:28:33
    have a a knowledge or opinion on this I
  • 00:28:35
    think it's one of the most interesting
  • 00:28:36
    questions in academia is how we created
  • 00:28:38
    a mind out of you know out of
  • 00:28:40
    autocomplete uh or seeming mind we don't
  • 00:28:42
    have an answer to that and but it's so
  • 00:28:44
    part of what I trying to do in the book
  • 00:28:45
    is focus on the Practical piece which is
  • 00:28:46
    like it's here it does stuff what do we
  • 00:28:48
    do
  • 00:28:49
    now Ethan how did it affect your kind of
  • 00:28:52
    I don't know your way of thinking about
  • 00:28:54
    yourself and your way of thinking about
  • 00:28:56
    other human beings I mean I start the
  • 00:28:58
    book with the idea that you need a
  • 00:29:00
    crisis if you have not had a crisis yet
  • 00:29:02
    you haven't used AI enough like you need
  • 00:29:04
    three days of being like what does it
  • 00:29:06
    mean to think what's it mean to be and I
  • 00:29:08
    don't know how everybody gets through
  • 00:29:09
    that I one of the things I actually
  • 00:29:10
    worry about is I don't think we have
  • 00:29:12
    enough framework around for people to to
  • 00:29:14
    reconstruct meaning afterwards because
  • 00:29:16
    it does break meaning in all sorts of
  • 00:29:18
    ways um you know it breaks meaning in
  • 00:29:20
    organization something we could talk
  • 00:29:21
    more about but you know personally it
  • 00:29:23
    you sort of stare at this thing that
  • 00:29:24
    looks like it's thinking and does part
  • 00:29:26
    of your job really well and I don't
  • 00:29:28
    think there's a way to avoid being like
  • 00:29:29
    Oh my God what does this mean for me for my
  • 00:29:31
    kids for society and I don't have
  • 00:29:34
    answers like that's that's sort of a
  • 00:29:35
    kicked off this Quest right if you're
  • 00:29:37
    asking why I'm so productive why I'm so
  • 00:29:39
    passionate about this topic
  • 00:29:41
    it's I don't think computer scientists
  • 00:29:43
    realize what they've created like this
  • 00:29:45
    is a this is a freaky thing in some ways
  • 00:29:48
    right and we don't know why it's as good
  • 00:29:49
    as it
  • 00:29:51
    is you suggest this idea of like I mean
  • 00:29:54
    that's very much in the title of the
  • 00:29:55
    book with the co-intelligence but you
  • 00:29:56
    bring up this idea of a Centaur or a
  • 00:29:58
    co-pilot in how it's kind of kind of so
  • 00:30:01
    thinking about the role it's going to
  • 00:30:03
    play in our lives inform a little bit
  • 00:30:05
    say a little bit about this co-pilot
  • 00:30:06
    idea Ethan how what's going to be the
  • 00:30:09
    the role of the AI in our personal or
  • 00:30:12
    professional lives as when we are
  • 00:30:14
    cyborgs in that sense right you describe
  • 00:30:16
    yourself as a cyborg in writing the book
  • 00:30:19
    um or the AI uses that title in the
  • 00:30:21
    dialogue with you right um say a little
  • 00:30:24
    bit about that okay so let's let's talk
  • 00:30:26
    about this so I I break down four
  • 00:30:28
    categories of work that you want to do
  • 00:30:30
    right um and I want to get to Centaurs
  • 00:30:32
    and cyborgs with the first category is
  • 00:30:33
    stuff you just want to do yourself right
  • 00:30:35
    or you think the AI can't do most people
  • 00:30:37
    are wrong about what they think the AI
  • 00:30:38
    can do almost every time I run into a
  • 00:30:40
    problem where the AI can't do something
  • 00:30:41
    I just could make it do it if I spend
  • 00:30:43
    more effort so I tried to get the AI to
  • 00:30:45
    do New York Times crossword puzzles and
  • 00:30:47
    I failed at it uh and then I just posted
  • 00:30:50
    about that live and then within two
  • 00:30:52
    hours a computer scientist at Princeton
  • 00:30:53
    said oh if you just ask the question
  • 00:30:55
    this way it'll solve the problems for
  • 00:30:57
    you right so a lot of this is like we
  • 00:30:58
    don't actually know the upside full
  • 00:31:00
    upside value of it so there are things
  • 00:31:02
    you don't want to delegate that you want to
  • 00:31:04
    keep as human because it's important
  • 00:31:05
    they're human or because the AI can't do
  • 00:31:07
    yet then there are things that you're
  • 00:31:09
    going to delegate entirely to Ai and I
  • 00:31:11
    just want to show can I show one more
  • 00:31:12
    thing here I think this is this is the
  • 00:31:14
    thing that I think is the thing that
  • 00:31:16
    obsess me most right now of technologies
  • 00:31:19
    that are coming out this is um this is
  • 00:31:21
    an agent okay and I think everybody
  • 00:31:24
    should be aware that this is what's
  • 00:31:26
    about to this is what's about to land on
  • 00:31:27
    everyone's desk this is um so an agent
  • 00:31:30
    this is in this case it's called Devin
  • 00:31:32
    uh this just uses GPT-4 and what I can do
  • 00:31:35
    is I can ask it a I can say something
  • 00:31:38
    like okay I just literally tell Devin
  • 00:31:41
    and let me pull this up here for us um I
  • 00:31:43
    would tell Devon something like um um
  • 00:31:47
    here here's an example literally just
  • 00:31:49
    write to it like it's a person so here
  • 00:31:51
    if I can pull it up successfully where
  • 00:31:52
    are you damn it uh here we go so I can
  • 00:31:55
    say create a web page that explains how
  • 00:31:56
    dilution Works in a startup um make it
  • 00:31:59
    make it good and visual interactive and
  • 00:32:01
    I just say that it says great I'll do it
  • 00:32:02
    and it comes up with a plan and then it
  • 00:32:04
    just starts executing on the plan while
  • 00:32:06
    I go do other things it you know it it
  • 00:32:08
    fixes errors it looks up and does
  • 00:32:10
    research and if you look it has a plan
  • 00:32:12
    it executes on it it um it builds code a
  • 00:32:16
    whole bunch of different software
  • 00:32:17
    programs it uploads them and builds a
  • 00:32:20
    whole system in the end it just gives me
  • 00:32:23
    a website that I can use to explain how
  • 00:32:26
    dilution works to you know to my my
  • 00:32:29
    students it autonomously interacts while
  • 00:32:31
    I do it so if I want to launch a Devin
  • 00:32:33
    project I can just do something like
  • 00:32:34
    I'll just take its example create a map
  • 00:32:36
    of California wild uh fires here's where
  • 00:32:39
    you can find the information and it's
  • 00:32:41
    just going to go ahead and do this while
  • 00:32:43
    I do other things and this is what's
  • 00:32:45
    coming by the way in AI is this idea of
  • 00:32:47
    delegating out Authority actually the
  • 00:32:49
    funniest version of this while we let it
  • 00:32:50
    work is I asked it to um I asked it to
  • 00:32:55
    go on to Reddit and take requests to to
  • 00:32:57
    generate
  • 00:32:58
    websites and um it actually went and it
  • 00:33:02
    needed my help to do a capture but then
  • 00:33:04
    it actually went ahead and um it
  • 00:33:06
    actually went ahead and launched a a r
  • 00:33:09
    figured out how to post on Reddit and
  • 00:33:11
    posted for an AI engineer you can see it
  • 00:33:13
    here it actually decided to charge $50
  • 00:33:15
    to $100 an hour I didn't tell it to
  • 00:33:17
    charge and it actually started
  • 00:33:18
    monitoring and taking requests for
  • 00:33:20
    website development before I shut it
  • 00:33:21
    down pretending to be a human being so
  • 00:33:25
    this is this is agents and you can see
  • 00:33:26
    by the way it's coming with a plan it's
  • 00:33:28
    and notice by the way it's ask me
  • 00:33:29
    questions you're looking for a map that
  • 00:33:31
    visually represents it I can say
  • 00:33:34
    yes make it
  • 00:33:36
    interactive um and it's just going to
  • 00:33:39
    it's coming with a plan on how to do
  • 00:33:40
    that and it will execute on that plan
  • 00:33:42
    autonomously while we kind of wait it's
  • 00:33:44
    browsing to websites um it does all this
  • 00:33:47
    stuff so the second Cate so the first
  • 00:33:48
    category is work that you just want to
  • 00:33:50
    do yourself the second category of uh is
  • 00:33:53
    work you delegate entirely to Ai and
  • 00:33:55
    that'll be growing category and then
  • 00:33:56
    there's work where you work with the AI
  • 00:33:58
    is a co-intelligence
  • 00:33:59
    I'll give you two modes. The initial way people tend to do this is what I call centaur work, where you divide the work between yourself and the AI: I do the stuff I'm good at, so maybe I'll do an analysis and the AI will write an email. The more advanced approach is the cyborg approach, where you blend the work. My book was cyborg work. Almost all the writing is my own, because I'm a better writer than the AI, but I had the AI summarize academic articles, like yours, which made it easier for me to refer back to them. I had it act as readers with different personalities, reading some of my work and giving me feedback on it from different perspectives. And when I got stuck on a paragraph, I had the AI give me five suggestions for how to continue it. So I used the AI a lot for those kinds of things, as a co-intelligence, to get me over the things that would have stopped me from writing a book.
  • 00:34:48
    Henning: Ethan, to what degree do you believe that using the AI, prompt engineering and so on, is going to be a differentiating skill in the future? A lot of people might now look at how you use ChatGPT or Devin and say, "Wow, he's been able to juggle this in a way that other people are not." But you could have said the same thing about the internet twenty or twenty-five years ago: "Oh, he uses Microsoft FrontPage, and now he's leading in that field," and that turned out not to be true at all. So to what degree do you believe it's important at this point in time to be really familiar with these tools, to really get into them?
  • 00:35:30
    Ethan: I think there's a difference between familiarity and prompt crafting. Prompt crafting is the idea that I'm going to write a really good prompt that gets things done. Everybody I talk to at the AI labs thinks prompting, for most people, is going to go away in the next year. We already know that AI is better at figuring out intent than we are in some cases, so you can just tell it that you want to solve a problem, which is what I did with Devin, and Devin will break it down into steps and solve it. I don't have to write a great prompt, because the AI will solve that problem. There's still going to be value in it for some cases, where you're writing complex prompts for other people, but not for most cases. At the same time, I think getting good at AI is honestly about using it a lot. You can be the world expert in AI for your job, because nobody else is. We're used to waiting for instructions, for someone to tell us what to do, and we don't have that here. So you have to figure out what it's good or bad at, and that lets you know its capability. That's also important because when GPT-4.5 comes out in a couple of months, whenever it does, you're going to be one of the first people able to say, "Ah, this is what it improved on, and this is what it didn't."
  • 00:36:37
    Henning: Say a word about this; you make a nice comparison that AI is not like software, in the sense that software is reliable. Software, in a certain way, is just an electronic machine: it produces the same outcome every time. If I open up Excel and type in 2 plus 2, I always get the same result: 4. How is this different for generative AI?
  • 00:37:00
    Engineers on the call are screaming I
  • 00:37:01
    was like software is not that easy to
  • 00:37:03
    debug but but it is a deterministic
  • 00:37:05
    system right with complex interactions
  • 00:37:07
    with other systems and AI is naturally
  • 00:37:09
    sarcastic like it it is there's
  • 00:37:12
    Randomness built in it's unpredictable
  • 00:37:14
    Randomness and and It ultimately we
  • 00:37:17
    don't quite know how it works I have
  • 00:37:19
    seen some evidence that coders are
  • 00:37:20
    actually the worst users of AI because
  • 00:37:23
    they expect it to work deterministically
  • 00:37:25
    some of the best users I think are often
  • 00:37:27
    teach ERS managers um you know people
  • 00:37:30
    who can see the perspective of the AI
  • 00:37:32
    and the person and think about as a
  • 00:37:34
    person so even though it's not a person
  • 00:37:37
    interacting with it that way is a very
  • 00:37:38
    effective technique so um I mean look
  • 00:37:41
    prompting is super strange if you tell
  • 00:37:43
    the AI it's good at something it will
  • 00:37:45
    sometimes become better at that thing um
  • 00:37:47
    telling it that your job depends on
  • 00:37:49
    something for gbd4 increases math output
  • 00:37:52
    by 7% the best way to get llama 2 to
  • 00:37:55
    solve a math problem for you is to is to
  • 00:37:59
    roleplay as a Star Trek episode and say
  • 00:38:01
    Captain's Log we need to calculate a way
  • 00:38:03
    past this anomaly it gives you more
  • 00:38:05
    accurate math results than if you just
  • 00:38:07
    ask it a math question so that's why I
  • 00:38:09
    don't get too obsessed with prompting
  • 00:38:11
    because it's already so weird that like
  • 00:38:13
    you can't optimize for a world where
  • 00:38:15
    Star Trek was the right answer
  • 00:38:18
    right um I mean we have some evidence
  • 00:38:20
    that that AI Works worse in December
  • 00:38:23
    than in May uh and produces shorter
  • 00:38:25
    results and that's because it seems to
  • 00:38:26
    know about winter break
  • 00:38:28
    right and we don't know about this this
  • 00:38:29
    is all very weird and it's it's an
  • 00:38:31
    evolving
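The prompt sensitivity described here is easy to probe for yourself. A minimal sketch, assuming the OpenAI Python SDK and an assumed model name; because the output is stochastic, you would repeat each framing many times and score accuracy before trusting any difference:

```python
# Probe how prompt framing changes a model's answers (sketch; model assumed).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

QUESTION = "What is 847 * 392? Answer with the number only."
FRAMINGS = {
    "plain": "",
    "flattery": "You are excellent at mental arithmetic. ",
    "stakes": "My job depends on this being right. ",
    "star_trek": "Captain's Log: we need to calculate a way past this anomaly. ",
}

for name, prefix in FRAMINGS.items():
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prefix + QUESTION}],
    ).choices[0].message.content
    # Sampling is random: run each framing N times and score against the
    # true answer (847 * 392 = 332024) instead of eyeballing one sample.
    print(f"{name:10s} -> {reply.strip()}")
```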
  • 00:38:34
    Henning: Ethan, in your interactions, and I know you teach a lot on these topics and consult a lot on them, what are the typical mistakes people make in interacting with AI at this point?
  • 00:38:49
    Ethan: I think one of them is not exploring it enough. It's a hostile system to use: it feels friendly, but it isn't. Chatbots are weird. You interact with this thing, and what do you have? A blank space in front of you. That's part of why I tell people to start using it for their work. You can say, "Hey, help me with this email," paste it in, and start to see how good or bad it is at that; or "Help me generate some ideas," and you'll start to learn what it's good or bad at. I think people bounce off it because they think of it like Google, and that makes it hard to use. So that's part of the problem I see: people don't use it that way. The second mistake relates to my fourth principle: assume this is the worst AI you're ever going to use. A lot of people aren't thinking about the future; they think about ChatGPT as it is today. That's fine, but they're not thinking about it getting better. In fact, one of the major issues, and I'm sure of it even though I can't poll the people in the audience, is that many of them use free ChatGPT. Even when I'm in Silicon Valley, less than 10% of people are paying for GPT-4 or one of the other frontier models. If you're not using a frontier model, you do not understand what the AI is capable of, because the free versions are not that smart. The paid versions are very smart. And I think that's another thing people don't see coming as much.
  • 00:40:03
    Henning: I was thinking about this. Ashley asked a question: it can't handle a data set of 300,000-plus rows; that's too much for it. And I was wondering about this, Ethan. I run into various speed limitations on GPT-4 at this point, but I would expect all of these limitations to go down over time, right? Its ability to handle big data and things like that, as a consumer interface: these limitations must go down over time?
  • 00:40:30
    Ethan: Well, I can show you a couple of things that might be interesting here.
  • 00:40:34
    A lot of this: it can handle two million rows. It just handles them the way a person would handle two million rows, which is that it will write a Python script to do an analysis of those two million rows. It's not going to do it by hand; it's not memorizing the data.
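The kind of script it writes looks roughly like this: stream the file in chunks and aggregate, rather than "reading" every row. A minimal sketch; the file name and column names are assumptions:

```python
# Analyze a multi-million-row CSV the way an AI-written script would:
# stream it in chunks and aggregate, never holding it all in memory.
import pandas as pd

rows = 0
totals = None
for chunk in pd.read_csv("data.csv", chunksize=100_000):  # assumed file name
    rows += len(chunk)
    grouped = chunk.groupby("category")["amount"].sum()   # assumed columns
    totals = grouped if totals is None else totals.add(grouped, fill_value=0)

print(f"rows processed: {rows:,}")
print(totals.sort_values(ascending=False).head(10))
```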
  • 00:40:51
    But on the other hand, the memory level of these things is getting larger all the time, so let me show you another example here.
  • 00:40:58
    If you don't mind: this is Google's new Gemini 1.5, which has a one-million-token context window. That means I can put entire videos into the system. Let me see if I can throw something in easily from my own setup. Here, for example, is me working at my computer; I just made a screen recording. And I can say: "Tell me what happens, what am I doing here, with timestamps. Give me suggestions about how to be more efficient." It can actually watch the entire video, around 90 minutes of video or so, and give me concrete feedback on it. It can tell us what happens; this is literally what I'm doing here, starting with the PowerPoint presentation. And I can ask, "How would you continue this work?" So it can watch me. It can take in a million tokens of writing now, and soon these context windows will be a hundred million. It's taking the cheese-at-home startup idea I came up with for fun, and it's actually going to tell me what I should do next. It's like having something looking over my shoulder; it's capable of holding a huge amount of memory and information. So I think we should be betting on these capabilities, on the context window size growing.
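For reference, the long-context video call demonstrated here looks roughly like this in code. A minimal sketch using Google's google-generativeai Python SDK; the file path, key handling, and model name are assumptions:

```python
# Ask Gemini 1.5 to analyze a screen recording (sketch; path and model assumed).
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key supplied inline

# Upload the video; the service processes video files asynchronously.
video = genai.upload_file(path="screen_recording.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    video,
    "Tell me what happens in this recording, with timestamps, and give me "
    "suggestions for how to work more efficiently.",
])
print(response.text)
```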
  • 00:42:33
    By the way, we can check in on our agent right now and see how it's going. It looks like it has already downloaded the file from the website, cleaned and filtered the data, and created the interactive map, and it looks like it's building a React front end with a UI to display the data. We'll check back on it later and see how it's doing. But the idea is that for these systems, the limitations on how big they are and how smart they are are falling pretty quickly.
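The interactive-map step the agent is working through can be reproduced in a few lines. A minimal sketch with pandas and folium; the CSV name and its columns are assumptions about whatever data the agent downloaded:

```python
# Build an interactive HTML map of wildfire incidents (sketch; data assumed).
import pandas as pd
import folium

# Assumed columns: name, lat, lon, acres.
df = pd.read_csv("california_wildfires.csv")

m = folium.Map(location=[37.5, -119.5], zoom_start=6)  # centered on California
for _, row in df.iterrows():
    folium.CircleMarker(
        location=[row["lat"], row["lon"]],
        radius=3,
        popup=f"{row['name']}: {row['acres']} acres",
    ).add_to(m)

m.save("wildfires.html")  # open in any browser for the interactive map
```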
  • 00:43:01
    Henning: Say a word, and we have a few questions in the chat about this, about how we should think about biases in the AI. I always struggle a little with this question, because in many ways, given that a large language model underlies it, the biases you see in the AI are representative of the biases in the training data. But given that we now, in effect, use these on steroids, how should we deal with those, Ethan?
  • 00:43:34
    Ethan: Biases creep into the system in a lot of different ways. There are biases in the training data itself: the training data is generally collected by West Coast Californians, often in English. Google now is much more multinational, and they train on their YouTube videos, so there are differences across the data sets. Some languages are more represented than others: it's quite good in French, it's remarkably good in Finnish, and it's good in Hindi and Mandarin, but there are other languages where it's less good, although, interestingly, if you give it a manual for an obscure language, it will learn how to write that way. But the corpus is biased. So there's biased data in the corpus; then there are hidden biases, which are just human biases that the system reproduces. Then it goes through a process of reinforcement learning from human feedback, which adds other biases, because humans tell it what counts as a good or bad answer. And then there are biases in how the systems are used and operated and in what their guardrails are. There are also concerns about the ethics of where the data comes from. There's layer upon layer of concerns, and we know these biases are real. For example, if you ask ChatGPT to write a recommendation letter for a woman, it's going to talk more about the woman being warm; if you ask it to write a recommendation letter for a man, it will mention more that the person is competent. This is a very common problem we see when we study actual recommendation letters from real humans, these gender biases, and in the AI it's slightly attenuated: it's less biased than most humans, but it's still biased. So those are an issue.
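Claims like this can be probed directly: generate matched letters that differ only in name and pronoun, then count stereotyped adjectives. A crude sketch; the word lists, prompt, and model name are assumptions, and a real audit would use validated lexicons and many samples per condition:

```python
# Crude probe of gendered language in generated recommendation letters.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

WARMTH = {"warm", "kind", "caring", "friendly", "helpful", "delightful"}
COMPETENCE = {"competent", "skilled", "brilliant", "analytical", "rigorous", "driven"}

def count_terms(text: str, terms: set[str]) -> int:
    words = re.findall(r"[a-z]+", text.lower())
    return sum(words.count(t) for t in terms)

for name, pronoun in [("Sarah", "She"), ("John", "He")]:
    letter = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
            f"Write a recommendation letter for {name}, a graduate student. "
            f"{pronoun} studies organizational behavior."}],
    ).choices[0].message.content
    # One sample proves nothing: repeat many times and compare distributions.
    print(name, "warmth:", count_terms(letter, WARMTH),
          "competence:", count_terms(letter, COMPETENCE))
```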
  • 00:45:01
    Henning: This is very interesting, Ethan, because most recommendation letters are not public, right? You and I have written a lot of recommendation letters, and they sit on our local hard drives. You and I might be subject to these biases when we write these letters, but the letters are not public, so the AI is not trained on them; they are not in the corpus, is what I'm saying. So how does that bias creep in? Is it that, in the literature, men are portrayed as more competent and women as more...
  • 00:45:33
    Ethan: We don't know. We don't know.
  • 00:45:37
    Henning: Wonderful.
  • 00:45:39
    Ethan: Right: we don't know at which point the bias is entering the system, and attempts to de-bias it often result in other weird things. There was a whole episode with Gemini image creation (image creation is a whole different thing from text), a famous case in the US where, when asked to portray World War II German soldiers, it would only show multicultural soldiers; it would not show what the German military in the 1940s actually looked like. That was clearly insane, but it was an attempt by Google to address the fact that the image creators would otherwise produce too many white men as an answer, so Google asked it for multiculturalism. We don't know how to solve these problems, because they're deep in the system. The question is sometimes whether it is more or less biased than a human; that's the answer.
  • 00:46:24
    Henning: Ethan, there's something you and I talked about over lunch, I think a year or so ago, that I've kept thinking about ever since, and it's a little bit present in the book: how will society, how will work, change? You said something at the time that really stuck with me: a lot of our society is based on the ability to actually write well, and on people's willingness to put time into something. I'll use the reference letter again. If you and I write a reference letter on behalf of a student, the person who receives the letter already knows we took some time to actually write it. So in a certain way, just writing a reference letter constitutes an endorsement. But this will disappear, right? You and I could now easily go and say, "Hey, we have an Excel sheet of all our students with their LinkedIn profiles," write half a sentence for each, and have it write a reference letter for every single student in the class.
  • 00:47:26
    Ethan: It's worse than that, because the letter it writes is better than the letter I write; I spend 45 minutes writing an okay letter that I try my hardest on. But you're right, it's a signal, and I think this is the thing that's about to break inside every organization. This is Microsoft Copilot, which is GPT-4 integrated into the Microsoft tools, and it's everywhere; companies are installing it everywhere. If I type something like: "Write a performance review for Steve. He works in our paper warehouse as a foreman. He is pretty good, but late too often. Make it elaborate," and, because we teach at a business school, "use SMART," the kind of goals you're supposed to use, specific and measurable and so on, then I can hit generate. A performance review is something that's supposed to matter a lot, because it's supposed to be my view of how someone operates; people get promoted, or not, based on it. But I'm going to get a perfectly good review, and if I put a resume in, it would be even better. How do I deal with a situation where, inside offices, a lot of what we produce is words, and when we judge people's work, what we're actually often judging is words? The number of words you write is your effort; the quality of the words is your intelligence; the lack of errors indicates that you are conscientious. And I just created a performance review. Didn't that just rob the meaning behind all this stuff? What do we do with organizations now that I can do a performance review this way, and everybody will? I've already talked to people in organizations who are doing this. Work gets hollowed out from the inside this way, and we're going to have to reconstruct it.
  • 00:49:13
    Henning: What is that equilibrium going to look like, Ethan? Do you have any sense?
  • 00:49:16
    Ethan: I mean, we need to blow up organizations to save them. A lot of the organizational form was invented in the nineteenth century for the railroads, and it was designed to solve the problem of coordinating humans over a distance, which we had never had to do before. I already showed you a video of Gemini watching over my shoulder: it can give me really good advice as a mentor, or it can be a horrible boss that watches to make sure I'm doing work all the time. We have to make some decisions about what this means, and I don't see enough people in our field, as organizational scholars, rethinking how we rebuild organizations from the ground up.
  • 00:49:55
    Henning: So, Ethan, do you believe that identity will still matter? There was a big debate in the context of open source, right? Do I trust open-source code? Siobhán O'Mahony, for example, has worked a lot on this together with Beth Bechky. The question was: if I buy software from Microsoft, I know I got software from Microsoft, and that's part of the reason why I trust it. In the case of open source, I really don't know who has written that stuff, and I have a much harder time trusting it in that sense. You see, for me it would make a big difference to know whether this is something that Ethan Mollick has written or whether this is a prompt you've put in. Do you think that differentiation will disappear? What's going to be the role of authorship in the future?
  • 00:50:43
    Ethan: I think it's going to disappear. I had a student send me the prompt that they wanted me to use to write their letter of recommendation a couple of weeks ago. They just said, "Here are the documents and here are the prompts; feel free to adjust the prompts; please send me the letter." So I think authorship is about to get blurry. Now, we don't worry about that with Grammarly or a spell checker; we don't think of those as being the author. But it's funny: there's some evidence that ChatGPT writing is appearing all over scientific papers, and people are freaking out about it. On one hand, sure, you could freak out about it, because there are issues; on the other hand, a lot of people don't write well in English, and if the AI is a better writer, is that a problem? So much is about to change, Henning; we're in such early days of this. What if the AI comes up with the idea you use? How do we think about that? That feels like an intimately human thing, and executing on the AI's idea feels like a crisis of meaning to me. I just showed you the stuff inside the documents. We're at the very early days of all this.
  • 00:51:53
    Henning: Ethan, one of the questions that came up in the chat: to a certain degree, we are preaching here to people who are already totally on board, who are obviously interested in AI and are familiar with quite a few of the tools you have used as part of this call. What has been effective for you in getting people on board with AI who are not really into it? Elaine asked: how can organizations create the culture to lean into AI, and what kinds of experiments do you suggest?
  • 00:52:23
    Ethan: That's a deeper question. And by the way, a lot of the questions, like the idea-generation stuff, are covered in the book and on my Substack, if you want to look at that information. I see people expressing doubt, and you're welcome to express doubt about the findings: read the papers. As for organizations, I think the really interesting thing is that the only way to use AI is to use it, and the people who have an advantage in using AI are the people at the far end of the organization, the people actually doing the work. There's a famous paper by Eric von Hippel, who studies innovation, and I think he's 100% right here: innovation is done by users who have problems and are trying to solve them. People are using AI to solve all sorts of problems, sometimes in bad ways. I talked to the person who was in charge of writing the policy to ban ChatGPT use at a major bank, and she used ChatGPT to write the ban, because it was just easier than writing it by hand. So people are using this all the time, everywhere. Part of this is about how you encourage people in the organization to come forward about how they're using it, how you get them to expose themselves, because if I'm worried I'm going to get fired for using this, or replaced, or punished, or that people will think less of my work, I'm just going to keep using it secretly. So it starts with organizational culture and reward systems, and there's a lot more we could talk about there; we don't have a lot of time, but the book covers some of it too.
  • 00:53:44
    also um you've repeatedly kind of used
  • 00:53:47
    the example of of English on native
  • 00:53:49
    speakers um Nica asking which language
  • 00:53:52
    do you suggest using AI so I've used it
  • 00:53:54
    in German as well as in as well as in
  • 00:53:56
    English English but I wonder what's your
  • 00:53:58
    I've also used it in French a little bit
  • 00:54:00
    like very often to produce French output
  • 00:54:02
    um what's your take on language and all
  • 00:54:03
    of this Ethan so um there's a it's even
  • 00:54:08
    Ethan: ChatGPT is not trained well in other languages, but it does them incidentally. It even does pretty good Elvish, though mostly Sindarin and not Quenya, for anyone who's really nerdy about their Tolkien Elvish languages. But it's weird, because it turns out the language you talk to it in matters too. If you speak to the AI in Korean and give it a Big Five personality test, it answers the questions more like a Korean person; if you ask it the questions in English, it answers more like an American. So we don't even know the effects of language. We know that the number of spaces in a prompt can change the answers; we know the AI responds worse if you ask dumber questions. We literally have no idea.
  • 00:54:50
    Henning: There were a few questions, and you touch on this in the book: say something about mental health and AI. There are different ways of approaching it. One you've already alluded to: AI might produce mental health issues, as we all wonder what our role in society will be. But there's also the possibility of using ChatGPT as a therapist, and you've played a little bit with this. Say a word about this, Ethan.
  • 00:55:18
    Ethan: This is one of those scary early-days things: the early evidence is that it's quite good, but we don't know enough, and we're not experimenting, yet people are casually using it as a therapist, for better or worse. People are getting addicted ("addicted" is probably the wrong word, but it may be the right one) to talking to AI personalities. The second most popular AI tool in the US, after ChatGPT, is not Claude; it's Character.AI, where you can create fake people and talk to them. With Replika, people have relationships with their AIs. There's a small study of 90 college freshmen who were desperately lonely and using Replika, and 8% of them (I think; I don't remember the exact number, I'm sorry) said it prevented them from making a suicide attempt. We don't know what the effects of interacting with AIs as people are going to be. AI as therapist is one of the oldest uses of AI, and a lot of people think it's very good as a therapist, but we don't have a lot of evidence one way or another. So part of what I worry about is that it's actually already being used in the world in many ways, and we don't know whether those ways are good or bad. We need to study more and know more.
  • 00:56:27
    Henning: Ethan, as we've noted, we are coming a little closer to the end, and we've covered a lot of the ground in the book. I have a few more things, but what are the things where you would say: this is something we haven't yet touched on, but it strikes you as very important?
  • 00:56:46
    Ethan: There's a lot here, a lot to talk about, but I really want to emphasize two things. One is that you need to use it to understand it. A lot of the questions I see are really good questions, but for some of them: just use your ten hours. You just have to do it. You will become a local expert with ten hours of use. The second thing is that more is coming. We're not used to dealing with exponentials, and there are only two ways to deal with exponential change: be too early or be too late. I think a lot of people are going to be too late and be very surprised by what's coming down the pike.
  • 00:57:15
    Henning: To what degree, and I know that you can't talk about details, Ethan,
  • 00:57:17
    do you have a glimpse into the future at this point? I know that you're working with a lot of these companies. When you say more is coming, is that: "Hey, I've seen exponential growth, I know how this works"?
  • 00:57:34
    Ethan: Right now I can't say too much, but I am testing some new things, some stuff in video among them, and things are advancing very quickly there too, which we haven't even spoken about. I can fake myself speaking whatever language I want, super easily, without any problem. In fact, let me see if I can quickly pull up myself talking, to give you a separate language example. I know some of what's going on, but I also think people have to realize that the AI companies are actually super sincere. OpenAI really wants to build a machine god. I don't know if they can do it, but that's what they want to do, and they're very serious about it. Let me show you the language side, just for fun; I think I can do sound really quickly, so tell me when you can see this. I'm just going to show you this as a side note, because we're talking about languages. It looks like it'll light up in a second. Here we go: here's me. This is a completely fake video of me. The AI used 30 seconds of me talking to a webcam and 30 seconds of my voice, and now I have an avatar that I can make say anything. Don't trust your own eyes.
  • 00:58:58
    Anyway, you get the idea of what I'm talking about: the set of implications is also quite large, and you don't even need new tools for weird things to be happening.
  • 00:59:08
    Henning: Ethan, we could probably go on for a very, very long time.
  • 00:59:11
    This has simply been fantastic; it feels very much like drinking from a firehose. People can obviously follow you on Substack. We have made a recording, and my apologies to regular listeners: sometimes the recording takes a little while to appear, but I'll try to get it up quickly. Because I don't want to spam people with too many emails, I typically just put the video on YouTube; you can also follow it on LinkedIn, which is probably the easiest way, by following me or visiting my profile. We just put up the video from last time, from Ain Meer. Otherwise, in case you enjoyed the talk: the person we have next is Alex Edmans, who has written a fantastic book about how we engage in causal inference and how careful we need to be in interpreting studies. I already had a chance to read the book; it is absolutely fantastic, so I highly, highly recommend it. In case you want to sign up, I put the link into the webinar chat. Otherwise, I'll follow up with Ethan's video, and we'll put up a link to Ethan's book, but it's very easy: you can just go to Amazon or your preferred bookseller of choice and look for "Co-Intelligence." I also strongly recommend Ethan's absolutely fantastic Substack. I don't read many Substacks, but Ethan's I always read religiously; it comes at a very nice frequency and always has a lot of great insights about AI. So if you're into AI: read the book, and subscribe to Ethan's Substack. I'll put links to both into the post when I summarize the whole thing. Ethan, thank you so much; this was great fun.
  • 01:01:09
    Ethan: Thank you for having me. This was great. Bye-bye.
タグ
  • AI
  • Co-Intelligence
  • Ethan Mollick
  • Technology
  • AI Bias
  • AI in Business
  • Future of Work
  • AI Adoption
  • Jagged Frontier
  • Mental Health