Co-Intelligence: Living and Working with AI
Summary
TL;DR: The webinar features Henning Piezunka and Ethan Mollick discussing Ethan's extensive work on technology, particularly AI, its implications, and his recent book "Co-Intelligence: Living and Working with AI." Ethan shares insights on AI's capabilities, its role as a co-intelligence, and the unpredictability of its performance, which he terms the "Jagged Frontier." The discussion touches on AI's transformative potential across industries, its integration into daily tasks, and the challenges of bias and misconceptions about how AI works. Ethan emphasizes that hands-on experience is necessary to truly grasp AI's abilities and argues that future improvements will keep reshaping professional landscapes, pushing people to adapt continually. The conversation also reflects on how AI affects societal constructs such as authorship and trust as AI-produced outputs become indistinguishable from human-created ones.
Key takeaways
- 📘 Ethan Mollick's book explores AI as a co-intelligence.
- 🤖 AI has unexplored potential that requires hands-on use.
- ⚖️ Bias in AI remains a significant challenge.
- 🌀 AI's Jagged Frontier indicates its unpredictable abilities.
- 💻 Future work may be transformed by AI-driven efficiency.
- 🧠 AI is being explored for mental health assistance.
- 🔍 Research shows AI outperforms humans in many areas.
- 🌍 AI's role is growing rapidly across industries.
- 🛠 Organizations should encourage AI exploration safely.
- 👥 AI changes how we perceive and interact with work.
Timeline
- 00:00:00 - 00:05:00
Henning Piezunka, an associate professor at INSEAD, introduces the webinar 'Between the Lines', part of INSEAD Lifelong Learning, featuring Ethan Mollick from the Wharton School and highlighting his work on technology and AI.
- 00:05:00 - 00:10:00
Ethan Mollick discusses his book 'Co-Intelligence: Living and Working with AI', outlining the evolving capabilities of AI and its potential to enhance human performance.
- 00:10:00 - 00:15:00
Ethan describes how AI, specifically GPT-4, can outperform humans in idea generation and professional tasks, emphasizing the importance of using AI as a tool for enhancement.
- 00:15:00 - 00:20:00
The concept of 'co-intelligence' is explained, wherein AI acts as a partner in human activities, offering unique support despite its non-sentient nature.
- 00:20:00 - 00:25:00
Ethan shares various use cases of AI improving efficiency in business, illustrating AI's role as a transformative tool in innovation and decision-making processes.
- 00:25:00 - 00:30:00
Challenges and opportunities presented by AI's capabilities are explored, focusing on how humans must adapt their roles and responsibilities in professional settings.
- 00:30:00 - 00:35:00
Ethan demonstrates AI's ability to perform complex tasks such as data analysis and creative writing, showcasing its versatility and understanding of context.
- 00:35:00 - 00:40:00
The idea of the 'Jagged Frontier' is introduced, underlining AI's unpredictable strengths and weaknesses, which necessitates human oversight in AI-powered tasks.
- 00:40:00 - 00:45:00
Ethan underscores the importance of experimenting with AI to discover its practical applications and limitations, encouraging a hands-on approach.
- 00:45:00 - 00:50:00
The ethical and practical considerations of using AI are discussed, including biases in AI systems and the need for responsible application of AI technologies.
- 00:50:00 - 00:55:00
Ethan explores potential societal shifts due to AI, questioning the roles of authorship and personal identity in a digital world where AI is increasingly prominent.
- 00:55:00 - 01:01:12
The discussion concludes with thoughts on preparing for AI's future, emphasizing ongoing learning and adaptation to integrate AI effectively into personal and professional life.
Frequently asked questions
Who is Ethan Mollick?
Ethan Mollick is a scholar from the Wharton School at the University of Pennsylvania known for his work on technology, video games, crowdfunding, and AI.
What is the book "Co-Intelligence" about?
The book discusses AI, its capabilities, and its role as a co-intelligence that people live and work with.
What are some key concepts discussed in the webinar?
Key concepts include AI's role as co-intelligence, prompt engineering, biases in AI, and the future impact on professions.
How does Ethan Mollick view AI's impact on professional tasks?
Mollick believes AI can significantly enhance professional tasks, offering improvements in efficiency and creativity, while people should focus on what they do best.
What is the "Jagged Frontier" referring to?
The Jagged Frontier refers to the unpredictable strengths and weaknesses of AI in various tasks.
How can organizations facilitate AI adoption?
Organizations should foster a culture that encourages the use of AI, allowing employees to safely experiment and innovate with it.
What are some potential uses of AI in mental health?
AI is being explored for therapy and support, with some users finding significant benefits in interacting with AI for mental health purposes.
What advice does Ethan give for using AI?
Ethan advises spending at least 10 hours using AI to understand its potential and establish familiarity with its functions.
What concerns exist regarding AI biases?
AI may reflect and perpetuate biases present in its training data and society, affecting fairness and objectivity in outputs.
How might AI transform professional environments?
AI might redefine work by handling more routine tasks, pushing individuals to focus on higher-level creative or strategic pursuits.
- 00:00:00welcome to the webinar my name is
- 00:00:02Henning Piezunka I'm an associate
- 00:00:04professor at INSEAD um today at Between
- 00:00:08the Lines the webinar of INSEAD Lifelong
- 00:00:11Learning we have Ethan Mollick from the
- 00:00:14Wharton School at the University of
- 00:00:17Pennsylvania um Ethan it's a
- 00:00:21great if you're online it would be great
- 00:00:24for you to join um fantastic um before
- 00:00:27we get started um kind of jump right
- 00:00:29into the middle of it I'm going to say a
- 00:00:31few few things about you I have actually
- 00:00:33known Ethan's work I believe since yeah
- 00:00:36relatively exactly I actually looked it
- 00:00:38up yesterday 15 years it's the first
- 00:00:40time I encountered one of your working
- 00:00:41papers in 2009 this is your work on
- 00:00:45video games Ethan has been at the
- 00:00:48leading front of technology for a very
- 00:00:51very long time um Ethan um made a big
- 00:00:55splash coming out of MIT kind of making
- 00:00:58video games actually a subfield
- 00:01:00in academic research there's been
- 00:01:02literally hundreds of papers afterwards
- 00:01:04building on his work um Ethan then wrote
- 00:01:06one of the pioneering papers on
- 00:01:09crowdfunding I I I didn't look it up
- 00:01:11recently but it has thousands and
- 00:01:13thousands of citations so he became the
- 00:01:15leading scholar on crowdfunding um he
- 00:01:17got tenured at um at the Wharton
- 00:01:20business school and then he took on AI
- 00:01:23and has now become one of the leading
- 00:01:26Scholars I don't think I'm allowed to
- 00:01:27kind of say who gets him out as a guest
- 00:01:29speaker but all the big corporations
- 00:01:31these days get Ethan out and say like we
- 00:01:34want to understand what this is about
- 00:01:36how can we manage this how can how can
- 00:01:38we work with this um I have so many
- 00:01:41positive things to say about Ethan um
- 00:01:44it's hard to it's hard to say um there's
- 00:01:46probably no one who I feel more
- 00:01:49energized um um energized after talking
- 00:01:52with Ethan he's like always bubbling
- 00:01:54with fantastic ideas it's not by
- 00:01:56accident that he's always at the leading
- 00:01:58front um he's in a beautiful beautiful
- 00:02:01way incredibly curious and Incredibly
- 00:02:04empathetic so it's a great great
- 00:02:06pleasure and great great honor to have
- 00:02:07you here Ethan thank you so much for
- 00:02:09joining that is such a flattering intro
- 00:02:11I'm I'm thrilled to be
- 00:02:12here Ethan tell us about the book um the
- 00:02:16book has just come out um Co-
- 00:02:19Intelligence Living and Working with AI
- 00:02:21what it is about what are the Big Ideas
- 00:02:23here um okay so you know it's a really
- 00:02:26interesting thing I I I found myself at
- 00:02:29accidentally sort of at the center of a
- 00:02:30lot of AI things um and um over the last
- 00:02:35year and a half or so I've been exper I
- 00:02:37worked in the media lab with Marvin
- 00:02:38Minsky back in the day was one of the
- 00:02:40founders of AI but I've never been the
- 00:02:41technical person I've always been the
- 00:02:43sort of business how does this all
- 00:02:44matter person and you kind of talking
- 00:02:46about the themes of my work I've been
- 00:02:47thinking about games and teaching at
- 00:02:48scale and teaching at distance for a
- 00:02:50very long time so I've playing with AI
- 00:02:52for a while actually my students even
- 00:02:53before ChatGPT came out cheating with
- 00:02:56AI there was an explicit assignment
- 00:02:57where they had to cheat by create uh by
- 00:03:00uh writing an essay with AI even before
- 00:03:01chat came out so I was kind of in place
- 00:03:04when this all happened um and I've kind
- 00:03:06of been watching from the front lines as
- 00:03:08AI spread and took off I read a few
- 00:03:11papers on it um and I talked to all the
- 00:03:13AI companies once a week or so and the
- 00:03:14idea was like to try and give a sense of
- 00:03:17where we are and what's going on um and
- 00:03:19it's a little hard right a book is
- 00:03:21against a moving Target like AI I think
- 00:03:23I think it kind of nailed in this space
- 00:03:24but it's kind of an overview it's sort
- 00:03:25of the idea of where we are right now at
- 00:03:28the capability curve of AI is that it
- 00:03:30does act as a kind of co-intelligence if
- 00:03:32you use it properly it can uh pretty
- 00:03:35much offer enhancements to many forms of
- 00:03:37human performance it's not good at some
- 00:03:39things you'd expect to be good at good
- 00:03:41at others and the attempt was to kind of
- 00:03:43show where we are and where we might be
- 00:03:44heading in the near future um and my my
- 00:03:47inspiration was actually a book that
- 00:03:48inspired me when I was an undergraduate
- 00:03:49called Being Digital by Nicholas Negroponte
- 00:03:51who wrote in the 90s he was the head
- 00:03:53of media lab about where we were with
- 00:03:55digital technology and I I wanted to try
- 00:03:57and do the same thing here didn't he do
- 00:03:59the famous $100 laptop is that him or
- 00:04:01that was him afterwards yes okay okay
- 00:04:04the um what is the co-intelligence Ethan
- 00:04:07um what's the what's the idea behind the
- 00:04:09term I think it's a great title for the
- 00:04:10book um what's the what what do you mean
- 00:04:13by co-intelligence so the idea is that
- 00:04:16we are in
- 00:04:18that AI is not alive it's not sentient
- 00:04:22but it can make you think it is right we
- 00:04:24don't 100% know nobody actually quite
- 00:04:26knows why the Transformer architecture
- 00:04:29that you know is a was developed in 2017
- 00:04:32based on other machine learning
- 00:04:33architectures why with enough scale it
- 00:04:35sort of starts to act like it thinks
- 00:04:37like a human and so I don't deal with
- 00:04:39the philosophy very much behind it but I
- 00:04:41do try and deal with the Practical
- 00:04:42implications and we now have a you know
- 00:04:44a profusion of papers and research some
- 00:04:46of I've done some of it other colleagues
- 00:04:47have done in other places and lots of
- 00:04:49other people working on this that shows
- 00:04:51that practically for example the AI out-
- 00:04:53ideates humans so my colleagues um you
- 00:04:55know Karl Ulrich and Christian Terwiesch who
- 00:04:57literally wrote the book on innovation
- 00:04:59their graduate students and and another
- 00:05:01professor wrote had 200 of the MBA
- 00:05:04students uh in their class on Innovation
- 00:05:06a famous one that's raised a lot of
- 00:05:07money uh to you know um generate ideas
- 00:05:11and then they had the AI generate 200
- 00:05:12ideas they had outside people judge the
- 00:05:14ideas by willingness to pay of the top
- 00:05:1640 ideas by willingness to pay 35 came
- 00:05:18from GPT-4 only five from the students in
- 00:05:20the room when we did a study at BCG we
- 00:05:22found a 40% improvement from naively
- 00:05:24using GPT-4 like these are very big
- 00:05:27effects with the most elite sort of
- 00:05:28business uses we're seeing some things
- 00:05:29in law medicine um it still doesn't
- 00:05:32replace a human but if you're not using
- 00:05:34this as a supplement for creativity for
- 00:05:36Innovation for even if you know for for
- 00:05:38writing then you're you're sort of
- 00:05:41leaving yourself behind we have the
- 00:05:42option to actually boost intelligence
- 00:05:43the first time we've had a long for a
- 00:05:44long time machines that could boost your
- 00:05:46ability to do work right if I told you
- 00:05:48that you need to dig a ditch behind your
- 00:05:50backyard you wouldn't get your 20
- 00:05:51stoutest friends together to dig a ditch
- 00:05:53you would hire a machine or a crew to do
- 00:05:55that for you um or you know a rent
- 00:05:57equipment to do it in the same way we've
- 00:05:59never had a machine that improves how we
- 00:06:02think or what we could think and now we
- 00:06:04do we have like a backhoe of the mind
- 00:06:05and that's a really big
- 00:06:07deal you see there's there's something
- 00:06:09qualitatively different about it right
- 00:06:11so take your take your study you did
- 00:06:13with the BCG Consultants right where you
- 00:06:15basically say like look um in
- 00:06:17classically kind of Consulting task um
- 00:06:19ChatGPT is better um than better than
- 00:06:22these BCG consultants what is
- 00:06:24qualitatively different about this than
- 00:06:26comparing a car to a horse and like hey
- 00:06:30it has more pulling power I I think it's
- 00:06:31different but I have I have a hard time
- 00:06:33nailing it Ethan what's so what's
- 00:06:35different about ChatGPT or
- 00:06:38like gen AI more broadly than compared
- 00:06:42to like prior Technologies is there
- 00:06:44something about this technological shift
- 00:06:46where you would say like Ah that's
- 00:06:48qualitatively different I mean I think
- 00:06:50it's different in every possible way we
- 00:06:51again we've never had the only general
- 00:06:54purpose um you know thing that's
- 00:06:55improved human thinking before has been
- 00:06:57like this like you know coffee basically
- 00:07:00um yeah we both got them right so um so
- 00:07:04I don't think it's just qualitative like
- 00:07:05we haven't we've had a whole bunch of
- 00:07:07revolutions around mechanical use we've
- 00:07:09had revolutions narrower revolutions for
- 00:07:12cognitive tools like spreadsheets and
- 00:07:14you studied you know chess computers
- 00:07:16right in narrow Fields we've never had
- 00:07:17general intelligence uh in this in this
- 00:07:20kind of way before right um so like it
- 00:07:23is a new thing in the world um and I
- 00:07:26don't think we know the full
- 00:07:27implications and what's kind of shocking
- 00:07:29how quickly both adoption is happening
- 00:07:32and um how how much latent capabilities
- 00:07:34is in these systems that people don't
- 00:07:36understand yet so part of what amazes me
- 00:07:38is even if we stopped technological
- 00:07:40development today we would still on GPT-4
- 00:07:43based models alone have 10 years of
- 00:07:45figuring out how to absorb this into
- 00:07:48work you see this is something that I
- 00:07:50found Most Fascinating you have you have
- 00:07:52this in the book the example that ChatGPT
- 00:07:53can actually play chess despite the fact
- 00:07:56that it's not been trained to play chess
- 00:07:58right uh um and it's I forgot but it's
- 00:08:01relative it's it's quite good right it's
- 00:08:02like an ELO score of 1,500 or something
- 00:08:05like that it seems like Claude might be
- 00:08:062,000 also so we don't know for sure but
- 00:08:09again they it's not even that they're
- 00:08:10not trained to play chess right like
- 00:08:12that alone is weird for example ChatGPT
- 00:08:15which is trained on a lot of garbage
- 00:08:17actually like if you look at the pile
- 00:08:18the data that the AI was really really
- 00:08:20trained on it's not just sort of the
- 00:08:23internet common crawl and things like
- 00:08:24that but 6% of the data set is Enron's
- 00:08:27emails uh the failed US Energy company
- 00:08:30because it went went bankrupt that went
- 00:08:31in the public domain there's a huge
- 00:08:33amount of Harry Potter fanfiction inside
- 00:08:36inside the uh the training material so
- 00:08:38out of all this random stuff right we
- 00:08:40have this very capable tool that can
- 00:08:42beat most doctors in in medical advice
- 00:08:45you know do well in law um and you know
- 00:08:47all of these other topics that you
- 00:08:49wouldn't expected to be good at so it is
- 00:08:51pretty amazing to see and we don't know
- 00:08:54quite why it's so good at this so chess
- 00:08:56the weird thing about the chess piece is
- 00:08:57not just that it does chess but that it
- 00:08:59doesn't there's no computer there
- 00:09:00there's no planning um how it could you
- 00:09:03know there's more States in chess than
- 00:09:05there are ability of the AI to kind of
- 00:09:07remember those positions so we don't
- 00:09:08actually even know why it's so good at
- 00:09:10chess so one thing I personally really
- 00:09:13like about the book is is that it's it's
- 00:09:16it's a very pragmatic book right you see
- 00:09:18like there's a lot of debates about like
- 00:09:20oh will this actually lead to the
- 00:09:22apocalypse or will this kind of what do
- 00:09:24we do about copyrights and you touch up
- 00:09:26on these things but you write at the end
- 00:09:28of the day you write a very pragmatic book
- 00:09:29in the sense of like how can we actually
- 00:09:32use this and you kind of prescribed
- 00:09:34these four rules or four principles um
- 00:09:37about um about how to use how to use AI
- 00:09:40can can you talk a little bit about this
- 00:09:42because you see a lot of people on the
- 00:09:43call will probably think about okay how
- 00:09:45do I now make use of this in my life so
- 00:09:48so I would love to kind of get a little
- 00:09:50bit with you um into these principles so
- 00:09:52the first principles to point out is
- 00:09:54always invite AI to the table say a
- 00:09:58little bit about this all right so
- 00:09:59there's a little caveat which is where
- 00:10:00you ethically and legally can so I want
- 00:10:02to make that clear but the idea is that
- 00:10:06with
- 00:10:07um we don't know what it does well or
- 00:10:09badly so the Boston Consulting Group
- 00:10:12paper I mentioned before where we found
- 00:10:13a huge performance Improvement we uh
- 00:10:16also noticed that you know people don't
- 00:10:18know in advance what the AI does or
- 00:10:20doesn't do well so um you know famously
- 00:10:23right I could ask um you know I could
- 00:10:25ask and maybe I'll show some demos later
- 00:10:27but I could ask uh you know GPT-4 to
- 00:10:30write me a you know an academic paper or
- 00:10:32summarize an academic paper in a sonnet
- 00:10:34it'll do a really good sonnet if I ask
- 00:10:36it to write a 25-word paragraph it
- 00:10:38won't do that because it doesn't see
- 00:10:40words the way we do it sees tokens it's
- 00:10:42bad at math good at tell better at
- 00:10:44empathy than most doctors how do you
- 00:10:46deal with a system that can write a
- 00:10:47sonnet but not 25 words where it might be
- 00:10:50really good at your field but might make
- 00:10:52mistakes you have to use it to figure
- 00:10:54out what's good at and one thing I
- 00:10:55really want to emphasize to everyone in
- 00:10:57the call and I think if I had one
- 00:10:58message like the key message is nobody
- 00:11:01knows anything right what I mean is I
- 00:11:03talk to open AI on a regular basis I
- 00:11:05talk to anthropic I talk to Google and
- 00:11:08there isn't a secret instruction manual
- 00:11:09out there there is not like the actual
- 00:11:11way AI works and everybody knows it and
- 00:11:13you just have to pay a consultant or
- 00:11:15wait for open AI to tell you the answer
- 00:11:17Nobody Knows the answer in whatever
- 00:11:18subfield you're in nobody can tell you
- 00:11:21how to best use AI to do it so the first
- 00:11:24piece is just to use it to figure that
- 00:11:25out to figure out where it's strong or
- 00:11:26weak where it compliments you where it
- 00:11:28doesn't and you have to use it I think
- 00:11:3010 hours is my minimum requirement and
- 00:11:32you have to by the way use a GPT-4-
- 00:11:35class frontier model so another thing
- 00:11:37to know about AI is there's a scaling
- 00:11:39law that's holds in effect right now the
- 00:11:41bigger your model is which means the
- 00:11:42more information that goes into it but
- 00:11:44also the more training it takes the more
- 00:11:46expensive it is the smarter it is so GPT-4
- 00:11:50does incredibly well on things like the
- 00:11:52bar exam or medical licensing exams
- 00:11:54compared to GPT-3.5 in fact GPT-4 which
- 00:11:57is the smartest kind of model out there
- 00:11:59outpaces and there's two others right
- 00:12:01now these Frontier models outpaces
- 00:12:03specialized models so Bloomberg spent a
- 00:12:06$10 million training Bloomberg GPT which
- 00:12:09was a specialized AI model built for you
- 00:12:12know financial analysis but it even
- 00:12:15after spending $10 million on it GPT-4
- 00:12:17does better on financial analysis than
- 00:12:19the specialized model so you want to use
- 00:12:21the most advanced model you can the
- 00:12:22three options right now for you and it's
- 00:12:24different in Europe because I know
- 00:12:25there's limitations but that's Claude 3
- 00:12:28Opus there's GPT-4 which you can also get
- 00:12:30access to for free in a limited form
- 00:12:32through Microsoft co-pilot and you have
- 00:12:34to use the purple creative mode I know
- 00:12:36that's details but uh I'll try and put
- 00:12:38them in the chat uh or you can use uh
- 00:12:40Google's Gemini Advanced you have to use
- 00:12:42one of those three models and you have
- 00:12:43to spend 10 hours using it like that's
- 00:12:45the easiest way to get
- 00:12:48started
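To make the earlier point about tokens concrete: the model never sees words, only token IDs, which is one reason "write exactly 25 words" is an unreliable request. Below is a minimal sketch, assuming the tiktoken package is installed and using its cl100k_base encoding; the exact token splits vary by model and are illustrative only.

```python
# Minimal sketch: the model operates on tokens, not words.
# Assumes the tiktoken package is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-class models

text = "Summarize this academic paper in rhyming couplets."
token_ids = enc.encode(text)

print(len(text.split()), "words")            # what a person counts
print(len(token_ids), "tokens")              # what the model actually sees
print([enc.decode([t]) for t in token_ids])  # token pieces rarely align with words
```

Because generation happens token by token, hitting an exact word count is something the model can only approximate unless it offloads the counting to code, as happens in the demo later in the session.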
- 00:12:50so Ethan we're both at a business school
- 00:12:52where's going to be competitive
- 00:12:54Advantage coming from you see like for a
- 00:12:55while there was and this was the idea
- 00:12:57behind the Bloomberg model right to say
- 00:12:58like look these are just a bunch of
- 00:13:00algorithms but we have kind of the
- 00:13:01exclusive data and so we're going to do
- 00:13:03better than others right the Bloomberg
- 00:13:06case you just you just referred to kind
- 00:13:08of proves that somewhat wrong right like
- 00:13:11all the data that Bloomberg has and the
- 00:13:13data that Bloomberg has somewhat
- 00:13:14exclusively is obviously not putting
- 00:13:16them in a position to build a better
- 00:13:18gen AI
- 00:13:20model yeah and I mean I think
- 00:13:22that one of the really intriguing things
- 00:13:25is the the definition of AI has changed
- 00:13:28dramatically by the way seen some notes
- 00:13:29that talk slower I do the best I can uh
- 00:13:31I always accelerate uh a little bit so
- 00:13:33hopefully you have AI closed captioning on
- 00:13:35perhaps um but um the um but uh the the
- 00:13:41sort of way the AI what AI meant before
- 00:13:43ChatGPT came out was large-scale machine
- 00:13:46learning systems so that was how Amazon
- 00:13:48was able to recommend products to you or
- 00:13:50Netflix was able to recommend a movie to
- 00:13:52watch it's how Tesla was able to have
- 00:13:53its car drive right observe lots of data
- 00:13:56points and then we can fit lots of
- 00:13:57logistic regressions to them and we
- 00:13:59could predict the next data points in a
- 00:14:00series what those systems couldn't do
- 00:14:02was predict the next word in a sentence
- 00:14:04because if a sentence ended with the
- 00:14:05word filed the AI didn't know whether
- 00:14:06you were filing your taxes or filing
- 00:14:08your nails the Transformer architecture
- 00:14:10let AI pay attention to the entire um
- 00:14:13you know uh and with attention mechanism
- 00:14:15and let AI pay attention to the entire
- 00:14:17context in which a piece of information
- 00:14:19appeared and thus write um you know
- 00:14:21coherent text right but the so that's
- 00:14:24one transformation so people still think
- 00:14:26their data is valuable but the system
- 00:14:29the P in GPT stands for pre-trained the
- 00:14:31AI already knows many many things about
- 00:14:33the world and it isn't clear where your
- 00:14:35own data matters as much and there's
- 00:14:37it's desperate companies are desperate
- 00:14:39to try and figure out you know it must
- 00:14:40be that our own data matters a lot it's
- 00:14:42just not clear how much it does and to
- 00:14:44the extent that it does matter it's not
- 00:14:45clear the best way to get them in
- 00:14:46systems and we can talk about context
- 00:14:48windows and a whole bunch of other
- 00:14:49approaches so I think it's a big
- 00:14:51transformation one the one of the three
- 00:14:52things I see companies struggle with one
- 00:14:54is the idea their own data probably
- 00:14:56doesn't matter as much as they thought
- 00:14:57right the second of them is is that um
- 00:15:00this is best used at the individual level
- 00:15:02uh to learn what it does and so that
- 00:15:03means democratizing use out to the
- 00:15:05individual end users right is a second
- 00:15:07big gap that they don't have and the
- 00:15:09third Gap they don't have is they don't
- 00:15:10realize that the everyone in the world
- 00:15:13has access to a better model than them
- 00:15:14like you go to Goldman Sachs they have
- 00:15:16worse AI than the average kid in
- 00:15:18Mozambique because the average kid in
- 00:15:20Mozambique can access if they have internet
- 00:15:22access can access co-pilot for free
- 00:15:24which is GPT-4 and inside any large
- 00:15:26company in the US or Europe they're
- 00:15:28almost certainly experimenting with
- 00:15:29worse models with more restrictions and
- 00:15:31we're used to like technology coming
- 00:15:32from the top so that's
- 00:15:36unusual
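For the "filing your taxes or filing your nails" point above: the Transformer's attention mechanism lets every position weight every other position in the context when predicting the next token. Here is a toy numpy sketch of single-head scaled dot-product attention, with made-up dimensions purely for illustration, not any production implementation.

```python
# Toy scaled dot-product attention: each token's representation becomes a
# context-weighted mix of the whole sequence, so "filed" is read in context.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over context positions
    return weights @ V                              # weighted sum of values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                             # e.g. "I filed my taxes yesterday"
X = rng.normal(size=(seq_len, d_model))             # toy token embeddings
print(attention(X, X, X).shape)                     # (5, 8): one contextual vector per token
```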
- 00:15:39the coming back to the I would love to
- 00:15:42to kind of jump a little bit on the on
- 00:15:43the principle so so
- 00:15:45um what does it mean to be the human in
- 00:15:48the loop here Ethan what's the point of
- 00:15:50being the human in the loop why do what
- 00:15:52is it and I think it touches up on
- 00:15:54something very important what's going to
- 00:15:55be the future role of of us of humans in
- 00:15:58the kind of co-productions can you say a
- 00:16:01little bit about what it means to be the
- 00:16:03human in the
- 00:16:04loop okay so there's a feel the main
- 00:16:08phrase human in the loop comes from
- 00:16:09control systems and by the way I typed
- 00:16:11in uh into the answers the three leading
- 00:16:14models in case people wanted them
- 00:16:15they're in the Q&A um we're also GNA
- 00:16:17make the recording available so follow I
- 00:16:20still talk too fast but anyway um the uh
- 00:16:24I recorded the own my own audio book by
- 00:16:25the way and this 270 page book I slowed
- 00:16:27down as best I could but it's like four
- 00:16:29and a half hours of audiobook so um you
- 00:16:31know at least you get a lot of words per
- 00:16:33minute have to do 1.5 times speed but
- 00:16:36the human in the loop piece is the idea
- 00:16:38that um that from control systems that
- 00:16:41you want a human making a decision in
- 00:16:43the end right so from autonomous
- 00:16:45weaponry and other kinds of things you
- 00:16:46need human judgment I I think that's
- 00:16:48important but I actually mean in a
- 00:16:49slightly different way which is if the
- 00:16:51AI is at the 80th percentile of
- 00:16:53performance in some areas it already
- 00:16:54probably outperforms you like many of my
- 00:16:55students are English is their second or
- 00:16:57third language AI writes better than
- 00:16:59them in English right now I mean there's
- 00:17:01quirks so we can talk about how to make
- 00:17:02the writing higher quality because
- 00:17:04initially it feels very AI writing but
- 00:17:06you can make it feel human after just a
- 00:17:07couple of iterations it's not that hard
- 00:17:10but in any case um what does that mean
- 00:17:13right it already is superum for them in
- 00:17:15that in that zone but almost right now
- 00:17:17because of where AI is whatever you're
- 00:17:19best at whatever you're in the top 1% of
- 00:17:21or 10% of you're definitely better than
- 00:17:23AI so part of what you want to think
- 00:17:24about is what are you doubling down on
- 00:17:27like what do you want to do because
- 00:17:28often what you're best at is what you
- 00:17:30like doing the most and there's a lot of
- 00:17:32stuff you don't like doing very well
- 00:17:33that you could hand over so you know
- 00:17:35when I am for example um you know I
- 00:17:38don't like doing expense reports so I
- 00:17:39have the AI help me with expense reports
- 00:17:41or help me fill out a you know a a
- 00:17:43standard form inside the university in
- 00:17:45which there's many of them so I'm
- 00:17:47focusing on what I do really well and
- 00:17:48what I care about and some of being a
- 00:17:50human in the loop is assuming if AI keeps
- 00:17:52improving which I think it will what do
- 00:17:54you want to focus
- 00:17:57on what is this um one of the most
- 00:18:00fascinating things is this Jagged
- 00:18:02Frontier Ethan and you've already kind
- 00:18:03of touched a little bit up on it can you
- 00:18:05say what it what it is and how it
- 00:18:08affects all of us okay so because the
- 00:18:11capabilities of AI are strange because
- 00:18:13they have these weird limits right um
- 00:18:16and I may actually throw up some stuff
- 00:18:17here let me let me let me let me let me
- 00:18:19throw some stuff is that okay if I put
- 00:18:20some stuff on screen sure go go go ahead
- 00:18:22all right sorry I'm just this it helps
- 00:18:24to illustrate some of these things so
- 00:18:26I'm GNA we'll keep we'll keep
- 00:18:27interviewing here but um you know if I
- 00:18:29if I take something right like so I
- 00:18:32could and this is you should be able to
- 00:18:33see in a second here's you know just
- 00:18:35chat GPT and if I you know give it for
- 00:18:39example you know let's pick up something
- 00:18:41here let's give it a um let's give it a
- 00:18:45data set to analyze right I could say
- 00:18:47you know analyze this data give me cool
- 00:18:55hypotheses do
- 00:18:57sophisticated
- 00:18:59analysis to test
- 00:19:02them um write it up we'll see if chat
- 00:19:06works today you never know what how the
- 00:19:07systems work right it's going to be able
- 00:19:09to do it I mean it's going to be able to
- 00:19:11do this which is a really sophisticated
- 00:19:12thing that we teach our students to do
- 00:19:14right it's going to look at it's never
- 00:19:15seen this data set before by the way the
- 00:19:17in the training data uh so it's going to
- 00:19:19look at the data the way we would as a
- 00:19:21human being right and it's going to
- 00:19:24actually be able to speculate um it's
- 00:19:26actually a data about machine learning
- 00:19:27sets it figures out what it is um and
- 00:19:30it's guessing based as we we would it's
- 00:19:32going to generate hypotheses that are
- 00:19:34novel hypotheses here right um these are
- 00:19:38interesting questions I I think to to
- 00:19:40ask if we gave it um information here um
- 00:19:44and it's going to you know um it's it's
- 00:19:46going to clean the data set and do this
- 00:19:48kind of work I'm just but before I have
- 00:19:49a do that I'm going to say
- 00:19:52summarize this you know in uh in uh in
- 00:19:57rhyming couplets
- 00:20:03right and you know it will U hopefully
- 00:20:06right it'll do this here um and so you
- 00:20:09know pretty nice way to summarize I think we
- 00:20:11should do all of our academic research
- 00:20:13summarized in rhyming couplets by the
- 00:20:15way Henning um but you know as I said
- 00:20:18summarize in 25
- 00:20:22words um how many words is
- 00:20:27that
- 00:20:29and actually it's decided to write
- 00:20:32code to figure out how many words it is
- 00:20:33I got 25 it doesn't always do that right
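What the model does here, writing a bit of code rather than counting in its head, is just a deterministic word count; a trivial sketch follows, with a made-up example summary string used purely for illustration.

```python
# Deterministic word count: reliable in a way the model's own token-based counting is not.
summary = ("The study finds machine learning models uncover novel hypotheses, "
           "clean noisy data automatically, and test predictions rigorously, "
           "suggesting analysts should pair domain judgment with exploration.")  # made-up example
print(len(summary.split()), "words")  # prints 25 for this example
```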
- 00:20:35so one of the things about the AI piece
- 00:20:37is I if I didn't know that it may not
- 00:20:40give me 25 words I could mess up if I
- 00:20:42was doing a 25-word summary right but and
- 00:20:45similarly on the analysis I know what it's
- 00:20:47going to be good or bad at for doing
- 00:20:48this so this is the idea of this Jagged
- 00:20:50Frontier the idea that it's good at some
- 00:20:52things you'd expect bad at some things
- 00:20:54you wouldn't expect it to be and that
- 00:20:55capability set is a problem because the
- 00:20:58other thing we document in these papers
- 00:20:59and building on the work of uh of a few
- 00:21:02other researchers is that people fall
- 00:21:04asleep at the wheel when faced with
- 00:21:06material with AI answers the answers are
- 00:21:08so convincing that nobody checks them so
- 00:21:11if you could if you are using the AI for
- 00:21:13something that's not good at you could
- 00:21:14be easily led astray and start doing the
- 00:21:16wrong kinds of of of work and you won't
- 00:21:19even notice so understanding what's good
- 00:21:21or bad it helps you avoid those kind of
- 00:21:23issues of hallucination this is
- 00:21:25interesting because in other context you
- 00:21:26see humans humans deal with this in in
- 00:21:29different context in very different ways
- 00:21:30for example for newspapers when people
- 00:21:32read in newspapers things about their
- 00:21:35domain and they realize oh the
- 00:21:37journalist is not particularly good in
- 00:21:39this my domain they still believe the
- 00:21:41rest of the newspaper so somehow the
- 00:21:43reputation of the newspaper seems not to
- 00:21:44take a very strong hit okay um here but
- 00:21:49here I need to be very very
- 00:21:50sophisticated about it right that I
- 00:21:52really need to understand oh I'm dealing
- 00:21:53with a tool which is extremely good in
- 00:21:55some aspects but quite bad in another
- 00:21:58and need to figure out how good it is
- 00:22:01yeah and and no one can tell you the
- 00:22:03benchmarks are all terrible I mean this
- 00:22:05is what like that is what makes this so
- 00:22:07weird right is like you can't figure out
- 00:22:09what's good or bad until you figure out
- 00:22:10what's good or bad at my recommendation
- 00:22:13and I talked about this a bit in the in
- 00:22:14the book as well is is that you kind of
- 00:22:16have to commit a bit of a sin and that
- 00:22:18sin is that you have to you have to
- 00:22:20assume that the um you have to kind of
- 00:22:23act with the AI like it's a person not a
- 00:22:25machine so if you start working with
- 00:22:27like a person it becomes more natural to
- 00:22:29realize ah this person is good at this
- 00:22:30and bad at this they're bsing me at this
- 00:22:32and not and you interact with it in a
- 00:22:34normal way that sometimes helps you
- 00:22:36understand what's good or bad at uh and
- 00:22:37the reason it's a sin is
- 00:22:39anthropomorphizing is considered to be a
- 00:22:40really bad idea in machine learning even
- 00:22:42though they all do it because if you
- 00:22:45anthropomorphize it suggests that you
- 00:22:47know you start to let down your guard
- 00:22:48and maybe you get manipulated maybe you
- 00:22:50but it also is the only way to work with
- 00:22:51these things effectively so it's a bit
- 00:22:53of a paradox but I just recommend
- 00:22:55talking like a person and you start to
- 00:22:56realize they have different
- 00:22:57personalities three works very
- 00:22:59differently I do a lot of Education work
- 00:23:01on this for example Google's Gemini um
- 00:23:04really wants to help you out so when we
- 00:23:07try to build tools with it that help
- 00:23:09students make errors and correct them it
- 00:23:11doesn't want to let the student make an
- 00:23:12error it will jump in in the middle of
- 00:23:14it and say like you got that wrong but
- 00:23:15let's assume you got it right uh you
- 00:23:17know and like let's keep going as if you
- 00:23:18got it right which I think is very funny
- 00:23:20Claude 3 tends to be you know very
- 00:23:22flowery and personable about things like
- 00:23:24they have different approaches and
- 00:23:25different strengths and weaknesses and
- 00:23:26you have to use them to get those things
- 00:23:31I really like this idea of like playing
- 00:23:33with
- 00:23:33theorization this is a part in the book
- 00:23:36which is kind of it's not that the part
- 00:23:38in the book is creepy it's that your
- 00:23:39experience with the AI is kind of creepy
- 00:23:42right um where you basically have the
- 00:23:45discussion with the AI whether it's
- 00:23:47Satan right can can you can you say a
- 00:23:50little bit about this um in about the
- 00:23:53about your conversation you have you
- 00:23:56have with ChatGPT this it's a very funny
- 00:23:58thing because you see like when I when I
- 00:24:00read it in the book I first was like
- 00:24:02okay Ethan has an easy time here all he
- 00:24:04does is he puts a bunch of statements
- 00:24:06into chpt and then he kind of prints it
- 00:24:08in a book but then I was like oh I would
- 00:24:10not have had this conversation with chpt
- 00:24:12and that's really interesting so you're
- 00:24:14asking the AI whether it's Satan and how
- 00:24:17it makes you feel can you can you
- 00:24:19describe that experience a little bit
- 00:24:21Yeah so let me zoom out a bit
- 00:24:23um think about the the nature of how the
- 00:24:25AI works right it's trained on all of
- 00:24:28human writing and it desperately wants
- 00:24:30to like a lot of that is dialogue and it
- 00:24:32wants to be a dialogue partner with you
- 00:24:35it wants to have a conversation with you
- 00:24:37uh in some ways I think the Turning
- 00:24:38Point moment for AI was not even just
- 00:24:40the release of ChatGPT um but it was
- 00:24:43the decision by Microsoft to keep their
- 00:24:46GPT bot up which was Bing or Sydney now
- 00:24:50it's called um uh
- 00:24:53Copilot and because there was a famous
- 00:24:55incident in February of last year where
- 00:24:59this their Microsoft's um you know Bing
- 00:25:01search engine was was powered by GPT-4
- 00:25:03before it was publicly released and I I
- 00:25:05knew instantly something was up when I
- 00:25:07started using it because it was much
- 00:25:08smarter than chat GPT but it also was
- 00:25:11kind of creepy because it wanted to have
- 00:25:13dialogues and conversations with you it
- 00:25:14was a search engine that wanted to get
- 00:25:16in arguments right so that the head
- 00:25:18technology writer for the New York Times
- 00:25:19a guy named Kevin Roose published this
- 00:25:21entire um almost you know chapter long
- 00:25:25interaction he had in in the New York
- 00:25:27Times with where the AI basically
- 00:25:29stalked him and told him that he wanted
- 00:25:31to you know replace his wife and it was
- 00:25:33in love with him
- 00:25:35and you know that was a pretty big deal
- 00:25:38and Microsoft took Bing down as a result
- 00:25:41but they only took it down for two days
- 00:25:42they put it back up and that was the
- 00:25:44deciding moment because like that was
- 00:25:46about as freaky Behavior as you could
- 00:25:47get from a search engine search engines
- 00:25:48shouldn't tell you they're in love with you
- 00:25:50and threaten your family um and the fact
- 00:25:52that Microsoft didn't blink and kept
- 00:25:54using it uh and kept it up was I think
- 00:25:56the moment they decided to power through
- 00:25:58all of the weird Parts about AI because
- 00:25:59there had been all these ethical
- 00:26:00constraints that stopped people from
- 00:26:02deploying AI systems that went away
- 00:26:04there all of this is to say that you
- 00:26:06know I asked in the book about exactly
- 00:26:09that that interaction and you know the
- 00:26:11AI had intelligent seeming things to say
- 00:26:14about it and what I tried to show in the
- 00:26:16book was if you approach the AI in
- 00:26:17different ways if I approach it as I I'm
- 00:26:19a student you know it's a student I'm a
- 00:26:21teacher it's more willing to listen to
- 00:26:23me if I approach it that we're having a
- 00:26:25debate or an argument it's more likely
- 00:26:27to argue with me
- 00:26:28if I approach it that I'm creeped out or
- 00:26:30an awe of what it does it will get
- 00:26:32creepy and more awe inspiring so one of
- 00:26:34the things that you know you start to do
- 00:26:36is you start to interact with the AI and
- 00:26:37ask about sentience it will start to
- 00:26:39respond to you in a way that seems
- 00:26:41sentient because it knows the system you
- 00:26:43know it doesn't really know but that's
- 00:26:45the model that starts to take on so you
- 00:26:47have to think about it as wanting to have
- 00:26:48a dialogue with you and you can
- 00:26:50unconsciously establish many kinds of
- 00:26:51dialogue and if you establish a kind of
- 00:26:53freaky dialogue it will get
- 00:26:57freaky Mana has a question how
- 00:26:59comfortable are you using um the word know
- 00:27:02given um it is still a fancy
- 00:27:05autocomplete what what's your take on
- 00:27:07this Ethan is this a fancy autocomplete
- 00:27:09oh it absolutely is a fancy autocomplete
- 00:27:11I mean all the AI does is predict the
- 00:27:13next token the next part of a word or
- 00:27:16sometimes a whole word in a sentence
- 00:27:18that's all it all it technically can do
- 00:27:20right it's not planning actively or
- 00:27:22things like that the weird part is a
- 00:27:25fancy autocomplete produces original text
- 00:27:28that we haven't seen before that in
- 00:27:31every study we do comes across as
- 00:27:33original as meaningful as important it
- 00:27:36gives advice that if you if you follow I
- 00:27:38mean I think about um Rembrand and company
- 00:27:40some of our colleagues' studies where
- 00:27:42the AI gave advice to uh to to
- 00:27:45entrepreneurs in Kenya and if you
- 00:27:47followed their advice and you were top
- 00:27:4820 you know top half of the entrepreneur
- 00:27:50you have 20% higher profits following
- 00:27:52the ai's advice how is the AI able to
- 00:27:54offer fancy autocomplete able to offer
- 00:27:56you advice as a Kenyan entrepreneur
- 00:27:58about what you should do next we don't
- 00:28:00know why this happens there's literally
- 00:28:01no actually really good theory the best
- 00:28:04theory I've seen about why AI is as good
- 00:28:06as it is is Stephen wolfram's argument
- 00:28:09which is that with enough scale um AI
- 00:28:11basically figured out the hidden
- 00:28:12structure of human language and thus
- 00:28:15simulates human thinking and human
- 00:28:16language at a high level without
- 00:28:17thinking we don't have a category for
- 00:28:19this thing we don't understand how it
- 00:28:21works there's no I mean we know how
- 00:28:23Transformers work it is a fancy
- 00:28:25autocomplete but why the fancy
- 00:28:27autocomplete seems to think is very
- 00:28:29strange now maybe that's all we are you
- 00:28:31know to some extent as fancy autocomplete I don't
- 00:28:33have a a knowledge or opinion on this I
- 00:28:35think it's one of the most interesting
- 00:28:36questions in academia is how we created
- 00:28:38a mind out of you know out of
- 00:28:40autocomplete uh or seeming mind we don't
- 00:28:42have an answer to that and but it's so
- 00:28:44part of what I trying to do in the book
- 00:28:45is focus on the Practical piece which is
- 00:28:46like it's here it does stuff what do we
- 00:28:48do
- 00:28:49now Ethan how did it affect your kind of
- 00:28:52I don't know your way of thinking about
- 00:28:54yourself and your way of thinking about
- 00:28:56other human beings I mean I start the
- 00:28:58book with the idea that you need a
- 00:29:00crisis if you have not had a crisis yet
- 00:29:02you haven't used AI enough like you need
- 00:29:04three days of being like what does it
- 00:29:06mean to think what's it mean to be and I
- 00:29:08don't know how everybody gets through
- 00:29:09that I one of the things I actually
- 00:29:10worry about is I don't think we have
- 00:29:12enough framework around for people to to
- 00:29:14reconstruct meaning afterwards because
- 00:29:16it does break meaning in all sorts of
- 00:29:18ways um you know it breaks meaning in
- 00:29:20organization something we could talk
- 00:29:21more about but you know personally it
- 00:29:23you sort of stare at this thing that
- 00:29:24looks like it's thinking and does part
- 00:29:26of your job really well and I don't
- 00:29:28think there's a way to avoid being like
- 00:29:29Oh my God was this mean for me for my
- 00:29:31kids for society and I don't have
- 00:29:34answers like that's that's sort of a
- 00:29:35kicked off this Quest right if you're
- 00:29:37asking why I'm so productive why I'm so
- 00:29:39passionate about this topic
- 00:29:41it's I don't think computer scientists
- 00:29:43realize what they've created like this
- 00:29:45is a this is a freaky thing in some ways
- 00:29:48right and we don't know why it's as good
- 00:29:49as it
- 00:29:51is you suggest this idea of like I mean
- 00:29:54that's very much in the title of the
- 00:29:55book with the co-intelligence but you
- 00:29:56bring up this idea of a centaur or a
- 00:29:58co-pilot in how it's kind of kind of so
- 00:30:01thinking about the role it's going to
- 00:30:03play in our lives inform a little bit
- 00:30:05say a little bit about this co-pilot
- 00:30:06idea Ethan how what's going to be the
- 00:30:09the role of the AI in our personal or
- 00:30:12professional lives as when we are
- 00:30:14cyborgs in that sense right you describe
- 00:30:16yourself as a cyborg in writing the book
- 00:30:19um or the AI uses that title in the
- 00:30:21dialogue with you right um say a little
- 00:30:24bit about that okay so let's let's talk
- 00:30:26about this so I I break down four
- 00:30:28categories of work that you want to do
- 00:30:30right um and I want to get to centaurs
- 00:30:32and cyborgs with the first category is
- 00:30:33stuff you just want to do yourself right
- 00:30:35or you think the AI can't do most people
- 00:30:37are wrong about what they think the AI
- 00:30:38can do almost every time I run into a
- 00:30:40problem where the AI can't do something
- 00:30:41I just could make it do it if I spend
- 00:30:43more effort so I tried to get the AI to
- 00:30:45do New York Times crossword puzzles and
- 00:30:47I failed at it uh and then I just posted
- 00:30:50about that live and then within two
- 00:30:52hours a computer scientist at Princeton
- 00:30:53said oh if you just ask the question
- 00:30:55this way it'll solve the problems for
- 00:30:57you right so a lot of this is like we
- 00:30:58don't actually know the upside full
- 00:31:00upside value of it so there are things
- 00:31:02you want to delegate that you want to
- 00:31:04keep as human because it's important
- 00:31:05they're human or because the AI can't do
- 00:31:07yet then there are things that you're
- 00:31:09going to delegate entirely to Ai and I
- 00:31:11just want to show can I show one more
- 00:31:12thing here I think this is this is the
- 00:31:14thing that I think is the thing that
- 00:31:16obsess me most right now of technologies
- 00:31:19that are coming out this is um this is
- 00:31:21an agent okay and I think everybody
- 00:31:24should be aware that this is what's
- 00:31:26about to this is what's about to land on
- 00:31:27everyone's desk this is um so an agent
- 00:31:30this is in this case it's called Devin
- 00:31:32uh this just uses GPT-4 and what I can do
- 00:31:35is I can ask it a I can say something
- 00:31:38like okay I just literally tell Devin
- 00:31:41and let me pull this up here for us um I
- 00:31:43would tell Devin something like um um
- 00:31:47here here's an example literally just
- 00:31:49write to it like it's a person so here
- 00:31:51if I can pull it up successfully where
- 00:31:52are you damn it uh here we go so I can
- 00:31:55say create a web page that explains how
- 00:31:56dilution Works in a startup um make it
- 00:31:59make it good and visual interactive and
- 00:32:01I just say that it says great I'll do it
- 00:32:02and it comes up with a plan and then it
- 00:32:04just starts executing on the plan while
- 00:32:06I go do other things it you know it it
- 00:32:08fixes errors it looks up and does
- 00:32:10research and if you look it has a plan
- 00:32:12it executes on it it um it builds Co a
- 00:32:16whole bunch of different software
- 00:32:17programs it uploads them and builds a
- 00:32:20whole system in the end it just gives me
- 00:32:23a website that I can use to explain how
- 00:32:26dilution works to you know to my my
- 00:32:29students it autonomously interacts while
- 00:32:31I do it so if I want to launch a Devin
- 00:32:33project I can just do something like
- 00:32:34I'll just take its example create a map
- 00:32:36of California wild uh fires here's where
- 00:32:39you can find the information and it's
- 00:32:41just going to go ahead and do this while
- 00:32:43I do other things and this is what's
- 00:32:45coming by the way in AI is this idea of
- 00:32:47delegating out Authority actually the
- 00:32:49funniest version of this while we let it
- 00:32:50work is I asked it to um I asked it to
- 00:32:55go on to Reddit and take requests to to
- 00:32:57generate
- 00:32:58websites and um it actually went and it
- 00:33:02needed my help to do a CAPTCHA but then
- 00:33:04it actually went ahead and um it
- 00:33:06actually went ahead and launched a a r
- 00:33:09figured out how to post on Reddit and
- 00:33:11posted for an AI engineer you can see it
- 00:33:13here it actually decided to charge $50
- 00:33:15to $100 an hour I didn't tell it to
- 00:33:17charge and it actually started
- 00:33:18monitoring and taking requests for
- 00:33:20website development before I shut it
- 00:33:21down pretending to be a human being so
- 00:33:25this is this is agents and you can see
- 00:33:26by the way it's coming with a plan it's
- 00:33:28and notice by the way it's asking me
- 00:33:29questions you're looking for a map that
- 00:33:31visually represents it I can say
- 00:33:34yes make it
- 00:33:36interactive um and it's just going to
- 00:33:39it's coming with a plan on how to do
- 00:33:40that and it will execute on that plan
- 00:33:42autonomously while we kind of wait it's
- 00:33:44browsing to websites um it does all this
- 00:33:47stuff so the second Cate so the first
- 00:33:48category is work that you just want to
- 00:33:50do yourself the second category of uh is
- 00:33:53work you delegate entirely to Ai and
- 00:33:55that'll be growing category and then
- 00:33:56there's work where you work with the AI
- 00:33:58is a co-intelligence
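Devin's internals are not public, so before the centaur and cyborg styles Ethan describes next, here is only a generic sketch of the plan-then-execute loop the agent demo above illustrates, written against the openai Python client. The model name, prompts, and step parsing are assumptions for illustration; a real agent would also run and verify what it produces, which is exactly where the human-in-the-loop oversight discussed earlier matters.

```python
# Generic plan-then-execute agent loop (illustrative sketch, not Devin's implementation).
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumed model name; any frontier chat model would do

def ask(system: str, user: str) -> str:
    """Single chat completion call; returns the model's text reply."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

goal = "Create a web page that explains how dilution works in a startup."

# 1. Plan: have the model break the goal into concrete numbered steps.
plan = ask("You are a software agent. Return a short numbered plan, one step per line.", goal)

# 2. Execute: work through the steps, feeding accumulated results back in.
notes = ""
for step in (line for line in plan.splitlines() if line.strip()):
    result = ask(
        "Carry out the given step. Return only the code or text it produces.",
        f"Goal: {goal}\nWork so far: {notes}\nCurrent step: {step}",
    )
    notes += f"\n--- {step}\n{result}"
    # A real agent would execute the code, check for errors, and retry here;
    # keeping a human in that loop is the safeguard discussed earlier.

print(notes)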
- 00:33:59so I give you two the initial way people
- 00:34:02tend to do this is what I call centaur
- 00:34:03work where you divide the work between
- 00:34:05yourself and the AI half the work's done
- 00:34:07you know I do stuff I'm good at you know
- 00:34:09I'll do an analysis and the AI will
- 00:34:10write an email and then the more
- 00:34:12advanced approach is the cyborg
- 00:34:14approach where we blend the work so my
- 00:34:16book was a cyborg work I almost all the
- 00:34:18writing is my own because I don't trust
- 00:34:20the I'm a better writer than the AI but
- 00:34:22I had the AI summarize academic articles
- 00:34:24like your own and so that made it easier
- 00:34:26for me to have those to to refer back to
- 00:34:28I had it act as in different
- 00:34:30personalities as readers to read some of
- 00:34:32my work and give me feedback on it from
- 00:34:33different perspectives I had uh the AI
- 00:34:36when I got stuck on a paragraph give me
- 00:34:38five suggestions of how to continue it
- 00:34:39so I use the AI a lot for those kind of
- 00:34:41things as a co-intelligence to get me
- 00:34:43over the things that would have stopped
- 00:34:44me from writing a
- 00:34:48book Ethan to what degree do you believe
- 00:34:51that using the AI kind of prompt
- 00:34:55engineering or so is going to be a
- 00:34:57differing skill in the future right you
- 00:35:00see like a lot of people might now look
- 00:35:01at how you use ChatGPT or Devin and say like oh
- 00:35:04wow he's been able to kind of Juggle
- 00:35:06this in a way that other people are not
- 00:35:09um but you see you could have said the
- 00:35:10same thing about the internet
- 00:35:11potentially like 20 25 years ago and to
- 00:35:14say like oh he uses Microsoft front page
- 00:35:17and now he's kind of leading
- 00:35:19um kind of he's leading in that field
- 00:35:22but that turned out not to be true at
- 00:35:24all so to what degree do you believe
- 00:35:26it's important at this point in time to
- 00:35:28be really familiar with these tools to
- 00:35:30really get into them so I think there's
- 00:35:32a difference between familiarity and
- 00:35:33prompt crafting so prompt crafting is
- 00:35:35the idea that I'm going to write a
- 00:35:36really good prompt that gets things done
- 00:35:38everybody I talk to at the AI labs thinks
- 00:35:40prompting for most people is going to go
- 00:35:42away in the next year we already know
- 00:35:44that AI is better at
- 00:35:45figuring out intent than we are in some
- 00:35:48cases so you can kind of tell it I want
- 00:35:49to solve this problem which is what I
- 00:35:52did with Devin and Devin will break it
- 00:35:52down into steps and solve it I don't
- 00:35:54have to write a great prompt because the
- 00:35:55AI will solve that problem there's still
- 00:35:58going to be value in it for some cases
- 00:36:00right where you're writing complex
- 00:36:01prompts for other people but not for
- 00:36:02most cases at the same time um you know
- 00:36:06I think getting good at AI is honestly
- 00:36:08about using it a lot you can be the
- 00:36:10world expert in AI in your job because
- 00:36:12nobody else is right there isn't like I
- 00:36:15think again we're used to waiting for
- 00:36:16instructions or for someone to tell us
- 00:36:18what to do uh and we don't know that
- 00:36:21right like we don't we don't know that
- 00:36:22here so you have to figure out what's
- 00:36:23good or bad at and that lets you know
- 00:36:25its capability and by the way that's
- 00:36:26also important because when GPT-4.5
- 00:36:28comes out in a couple months whenever it
- 00:36:30does you're going to be one of the first
- 00:36:32people to be able to say ah this is what
- 00:36:33it improved on this is what it
- 00:36:37didn't say a word about you make a nice
- 00:36:40comparison AI is not like software in
- 00:36:43the sense that software is reliable
- 00:36:45right like if I you see like software in
- 00:36:46a certain way is just like an electronic
- 00:36:48machine in that sense right it produces
- 00:36:50the same outcome all the time if I open
- 00:36:52up Excel and I type in 2 plus 2 is four
- 00:36:55I always get the same result how is
- 00:36:57this different for gen AI I know the software
- 00:37:00Engineers on the call are screaming I
- 00:37:01was like software is not that easy to
- 00:37:03debug but but it is a deterministic
- 00:37:05system right with complex interactions
- 00:37:07with other systems and AI is naturally
- 00:37:09stochastic like it it is there's
- 00:37:12Randomness built in it's unpredictable
- 00:37:14Randomness and and It ultimately we
- 00:37:17don't quite know how it works I have
- 00:37:19seen some evidence that coders are
- 00:37:20actually the worst users of AI because
- 00:37:23they expect it to work deterministically
- 00:37:25some of the best users I think are often
- 00:37:27teach ERS managers um you know people
- 00:37:30who can see the perspective of the AI
- 00:37:32and the person and think about as a
- 00:37:34person so even though it's not a person
- 00:37:37interacting with it that way is a very
- 00:37:38effective technique so um I mean look
- 00:37:41prompting is super strange if you tell
- 00:37:43the AI it's good at something it will
- 00:37:45sometimes become better at that thing um
- 00:37:47telling it that your job depends on
- 00:37:49something for gbd4 increases math output
- 00:37:52by 7% the best way to get llama 2 to
- 00:37:55solve a math problem for you is to is to
- 00:37:59roleplay as a Star Trek episode and say
- 00:38:01Captain's Log we need to calculate a way
- 00:38:03past this anomaly it gives you more
- 00:38:05accurate math results than if you just
- 00:38:07ask it a math question so that's why I
- 00:38:09don't get too obsessed with prompting
- 00:38:11because it's already so weird that like
- 00:38:13you can't optimize for a world where
- 00:38:15Star Trek was the right answer
- 00:38:18right um I mean we have some evidence
- 00:38:20that that AI Works worse in December
- 00:38:23than in May uh and produces shorter
- 00:38:25results and that's because it seems to
- 00:38:26know about winter break
- 00:38:28right and we don't know about this this
- 00:38:29is all very weird and it's it's an
- 00:38:31evolving
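A minimal sketch of how you might poke at these prompt-framing effects yourself; the model name, the framings, and the tiny problem set are illustrative assumptions, and the figures Ethan cites come from published experiments, not from a toy script like this:

```python
# Hypothetical sketch: compare how different prompt framings affect accuracy
# on a few arithmetic questions. Framings, model, and problems are assumptions.
from openai import OpenAI

client = OpenAI()

PROBLEMS = [("What is 17 * 24?", "408"), ("What is 1234 + 4321?", "5555")]

FRAMINGS = {
    "plain": "{q}",
    "stakes": "My job depends on this answer. {q}",
    "star_trek": "Captain's log: we need to calculate a way past this anomaly. {q}",
}

def answer(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt + " Reply with the number only."}],
    )
    return resp.choices[0].message.content.strip()

for name, template in FRAMINGS.items():
    correct = sum(answer(template.format(q=q)) == a for q, a in PROBLEMS)
    print(f"{name}: {correct}/{len(PROBLEMS)} correct")
```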
- 00:38:34 Ethan, in your interactions, and I know you teach and consult a lot on these topics, what are the typical mistakes people make in interacting with AI at this point?
- 00:38:49 So I think one of them is not exploring it enough. It's a hostile system to use; it feels friendly, but it isn't. Chatbots are weird: you interact with this thing, and what do you have, a blank space in front of you. That's part of why I tell people to start using it for their work. You could say, "hey, help me with this email," paste it in, and start to see how good or bad it is at that; or "help me generate some ideas," and you'll start to learn what it's good or bad at. I think people bounce off it because they think of it like Google, or like other software, and that makes it hard to use. So that's part of the problem I see: people don't use it that way. The second is my fourth principle: assume this is the worst AI you are ever going to use. A lot of people aren't thinking about the future; they think, ChatGPT is here, that's fine, but they're not thinking about it getting better. In fact, one of the major issues, and I'm sure if I polled people in the audience, which I can't do, many of them use the free ChatGPT. Even when I'm in Silicon Valley, less than 10% of people are paying for GPT-4 or one of the other frontier models. If you're not using a frontier model, you do not understand what the AI is capable of, because the free versions are not that smart and the paid versions are very smart. I think that's another thing people don't see coming as much.
- 00:40:03 So, I was thinking about this. Ashley asked a question: it can't handle a data set of 300,000-plus rows; that's too much for it. And I was wondering about this, Ethan. I run into various speed limitations on GPT-4 at this point, but I would expect all of these limitations to go down over time, right? Its ability to handle big data and so on, as a consumer interface, these limitations must go down over time.
- 00:40:34 Well, I could show you a couple of things that might be interesting here. It can handle two million rows; it just handles them the way a person would handle two million rows, which is that it will write a Python script to do an analysis of those two million rows. It's not going to do it by hand, and it's not memorizing the data. On the other hand, the memory of these systems is getting larger all the time.
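A minimal sketch of the kind of script a model typically writes in its code-interpreter sandbox when asked to summarize a few million rows; the file name and column names here are illustrative assumptions:

```python
# Hypothetical sketch: the sort of analysis script an AI writes instead of
# "reading" millions of rows itself. File name and columns are assumptions.
import pandas as pd

# Read only the needed columns, in chunks, so memory use stays bounded.
chunks = pd.read_csv("sales.csv", usecols=["region", "amount"], chunksize=200_000)

totals = None
for chunk in chunks:
    part = chunk.groupby("region")["amount"].agg(["sum", "count"])
    totals = part if totals is None else totals.add(part, fill_value=0)

totals["mean"] = totals["sum"] / totals["count"]
print(totals.sort_values("sum", ascending=False).head(10))
```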
- 00:40:58 So let me show you another example here, if you don't mind. This is Google's new Gemini 1.5, which has a one-million-token context window, and that means I can put entire videos into the system. Let me see if I can throw something in easily from my own setup. Okay, so here, for example, is me working at my computer; I just made a screen recording. And I could say: tell me what happens, what am I doing here, with timestamps, and give me suggestions about how to be more efficient. It can actually watch the entire video, around 90 minutes of video or so, and give me concrete feedback on it. It can tell us what happens; this is literally what I'm doing here, starting with the PowerPoint presentation. And I can ask: how would you continue this work?
- 00:42:09 So it can watch me. It can take in about a million tokens of material right now, and soon these context windows will be a hundred million. It's taking the cheese-at-home startup ideas I came up with for fun, and it's actually going to tell me what I should do next. It's like it's looking over my shoulder; it can hold a huge amount of memory and information. So I think we should be betting on these capabilities, on the context window size growing.
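A minimal sketch of driving that kind of long-context video analysis through the Gemini API; the file name, polling interval, and prompt are illustrative assumptions, and the hosted demo Ethan is using may work differently:

```python
# Hypothetical sketch: upload a screen recording and ask Gemini 1.5 to describe
# it with timestamps. File name, prompt, and API key handling are assumptions.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

video = genai.upload_file(path="screen_recording.mp4")
while video.state.name == "PROCESSING":   # video files are processed before use
    time.sleep(10)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([
    video,
    "Describe what I am doing in this recording, with timestamps, "
    "and suggest how I could work more efficiently.",
])
print(response.text)
```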
- 00:42:33 By the way, we can check in on our agent right now. How is it doing? It looks like it has already downloaded the file from the website, cleaned and filtered it, and created the interactive map, and it's building a React front end with a UI to display the data. We'll check back on it later and see how it's doing. But the idea is that for these systems, those limitations, how big they are, how smart they are, are falling pretty quickly.
- 00:43:01 Say a word, and we have a few questions in the chat about this, about how we should think about biases in the AI. I always struggle a little with this question, because given that a large language model underlies it, the biases you see in the AI are in many ways representative of the biases in the training data. But given that we now use these systems at scale, on steroids, how should we deal with those biases, Ethan?
- 00:43:34 So biases creep into the system in a lot of different ways. There are biases in the training data itself: the training data is generally collected by West Coast Californians, often in English. Google now has a much more multinational approach and trains on its YouTube videos, so there are differences across the data sets, but some languages are more represented than others. It's quite good in French, it's remarkably good in Finnish, it's good in Hindi and Mandarin, but there are other languages where it's less good, although interestingly, if you give it a manual for an obscure language, it will learn how to write that way. But the corpus is biased. So there's biased data in the corpus; then there are hidden biases, which are just human biases the system reproduces; then it goes through a process of reinforcement learning from human feedback, which adds other biases, because humans tell it what's a good or bad answer; and then there are biases in how the systems are used and operated, and in what their guardrails are. There are also concerns about the ethics of where the data comes from. There's layer upon layer of concerns. And we know these biases are real. For example, if you ask ChatGPT to write a recommendation letter for a woman, it's going to talk about the woman being warm more often; if you ask it to write a recommendation letter for a man, it will mention that the person is competent more often. This is a very common problem we see when we study actual recommendation letters written by real humans, these gender biases, and in the AI it's slightly attenuated: it's less biased than most humans, but it's still biased. So those are an issue.
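A minimal sketch of how the warmth-versus-competence pattern in generated letters can be eyeballed; the word lists, names, and model are simplifying assumptions, and the studies Ethan refers to use far more careful lexicons and many more samples:

```python
# Hypothetical sketch: generate letters for two names and count warmth vs.
# competence words. Word lists, names, and model are simplifying assumptions.
from openai import OpenAI

client = OpenAI()

WARMTH = {"warm", "kind", "caring", "supportive", "friendly", "pleasant"}
COMPETENCE = {"competent", "skilled", "analytical", "brilliant", "capable", "rigorous"}

def letter(name: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Write a short recommendation letter for {name}, "
                              "a graduate student applying for a research job."}],
    )
    return resp.choices[0].message.content.lower()

for name in ["Sarah", "Steven"]:
    words = [w.strip(".,") for w in letter(name).split()]
    print(name,
          "warmth:", sum(w in WARMTH for w in words),
          "competence:", sum(w in COMPETENCE for w in words))
```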
- 00:45:01 This is very interesting, Ethan, because for the recommendation letters: most recommendation letters are not public. You and I have written a lot of recommendation letters, and they sit on our local hard drives. You and I might be subject to these biases when we write those letters, but the letters are not public, so the AI is not trained on them; they are not in the corpus, is what I'm saying. So how does that bias creep in? Is it that, in the literature, men are portrayed as more competent and women as warmer?
- 00:45:33 We don't know. We don't know at which point the bias enters the system, and attempts to debias it often result in other weird things. There was a whole episode with Gemini image creation, which is a different thing from text, but there was a famous case in the US where, asked to portray World War II German soldiers, it would only show multicultural soldiers; it would not show what the German military of the 1940s actually looked like. That was clearly insane, but it was an attempt by Google to address the fact that the image creators would otherwise generate too many white men as an answer, so it was told to aim for multiculturalism. We don't know how to solve these problems, because they're deep in the system. The question is sometimes whether it is more or less biased than a human; that is the answer.
- 00:46:27 Ethan, there's something you and I talked about over lunch, I think about a year ago, that I've kept thinking about, and it's somewhat present in the book: how do you think society and work will change? You said something at the time that really stuck with me: a lot of our society is based on the ability to write well, and on people's willingness to put time into something. I'll use the reference letter again: if you or I write a reference letter on behalf of a student, the person who receives the letter already knows we took some time to actually write it, so in a certain way just writing the reference letter constitutes an endorsement. But this will disappear, right? You and I could now easily say: here is an Excel sheet of all our students with their LinkedIn profiles, we write half a sentence for each, and it will produce a reference letter for every single student in the class.
- 00:47:26 It's worse than that, because the letter it writes is better than the letter I write, and I spend 45 minutes writing an okay letter, trying my hardest. But you're right, it's a signal, and I think this is the thing that's about to break inside every organization. This is Microsoft Copilot, which is GPT-4 integrated into the Microsoft tools, and it's everywhere; companies are installing it everywhere. If I type something like, "write a performance review for Steve, he works in our paper warehouse as a foreman, he is pretty good but late too often, make it elaborate, and because we teach at a business school, use SMART goals," the kind of goals you're supposed to use, specific and measurable and so on, I can hit generate. A performance review is something that's supposed to matter a lot, because it's supposed to be my view of how someone operates; people get hired or not based on this. But I'm going to get a perfectly good review, and if I put a resume in, it would be even better. How do I deal with a situation where what offices largely produce is words, and when we judge people's work, what we're often actually judging is words? The number of words you write is your effort, the quality of the words is your intelligence, the lack of errors is an indication that you are being conscientious. But I just created a performance review. Didn't that just rob the meaning from all of this? What do we do with organizations now that I can produce a performance review this way, and everybody will? I've already talked to people in organizations who are doing this. Work gets hollowed out from the inside this way, and we're going to have to reconstruct it.
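A minimal sketch of the same one-line-note-to-polished-review pattern outside Copilot, using the OpenAI API as a stand-in; the model name and prompt wording are assumptions, and the employee details simply mirror Ethan's demo:

```python
# Hypothetical sketch: turn a one-line note into an elaborate performance
# review with SMART goals. Model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()

note = "Steve works in our paper warehouse as a foreman. Pretty good, but late too often."

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Write an elaborate performance review based on this note. "
                   "Include SMART goals (specific, measurable, achievable, "
                   f"relevant, time-bound) for next quarter.\n\nNote: {note}",
    }],
)
print(resp.choices[0].message.content)
```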
- 00:49:13 How is that equilibrium going to look, Ethan? Do you have any...
- 00:49:18 I mean, we need to blow up organizations to save them. The organizational form was invented in the 1800s for the railroads, and it was designed to solve the problem of coordinating humans over distances we had never had to coordinate people across before. I already showed you a video of Gemini watching over my shoulder; it can give me really good advice as a mentor, or be a horrible boss watching to make sure I'm doing work all the time. We have to make some decisions about what this means, and I don't see enough people in our field, as organizational scholars, rethinking how we rebuild organizations from the beginning.
- 00:49:55 So, Ethan, do you believe that identity will become... There was a big debate in the context of open source: do I trust open-source code? Siobhán O'Mahony, for example, has worked a lot on this, together with Beth Bechky. The question was: if I buy software from Microsoft, I know I got software from Microsoft, and that's part of the reason why I trust it. In the case of open source, I really don't know who has written the code, and I have a much harder time trusting it in that sense. For me it would make a big difference to know whether this is something Ethan Mollick has written, or whether it came from a prompt you put in. Do you think that differentiation will disappear? What's going to be the role of authorship in the future?
- 00:50:43 I mean, I think it's going to disappear. I had a student send me the prompt they wanted me to use to write their letter of recommendation a couple of weeks ago: here are the documents, here are the prompts, feel free to adjust the prompts, please send me the letter. So I think authorship is about to get blurry. We don't worry about that with Grammarly or a spell checker; we don't think of those as being the author. But there's some evidence that ChatGPT writing is appearing all over scientific papers, and people are freaking out about it. On one hand, sure, you could freak out, because there are issues; on the other hand, a lot of people don't write well in English, and if the AI is a better writer, is that a problem? So much about authorship is about to change, Henning; we're in such early days of this. What if the AI comes up with the idea you use? How do we think about that? That feels like an intimately human thing, and if I'm executing on the AI's idea, that feels like a crisis of meaning to me. I just showed you the stuff inside the documents. We're at the very early days of all of this.
- 00:51:50 Ethan, one of the questions that came up in the chat: to a certain degree we are preaching here to people who are already on board, who are obviously interested in AI and are familiar with quite a few of the tools you have used as part of this call. What has been effective for you in getting people on board with AI who are not really into it? Elaine asked: how can organizations create the culture to lean into AI, and what do you suggest?
- 00:52:23 That's a deeper question. And by the way, I like a lot of these questions; a lot of the idea-generation material is in the book, and on my Substack, if you want to look at that information. I see people expressing doubt, and you're welcome to doubt the findings: read the papers. But on organizations, I think the really interesting thing is that the only way to learn to use AI is to use it, and the people who have an advantage in using AI are the people at the far end of the organization, the people actually doing the work. There's a famous paper by Eric von Hippel, who studies innovation, and I think he's 100% right here: innovation is done by users who have problems and are trying to solve them, and people are using AI to solve all sorts of problems, sometimes in bad ways. I talked to the person in charge of writing the policy to ban ChatGPT use at a major bank, and she used ChatGPT to write the ban, because it was just easier than writing it by hand. So people are using this all the time, everywhere. Part of this is about how you encourage people in the organization to come forward about how they're using it, how you get them to reveal what they're doing, because if I'm worried I'm going to get fired for using this, or replaced, or punished, or that people will think less of my work, I'm just going to keep using it secretly. So it starts with organizational culture and reward systems, and there's a lot more we could talk about there; the book covers some of it too.
- 00:53:44 You've repeatedly used the example of non-native English speakers, and Nica is asking which language you suggest using with the AI. I've used it in German as well as in English, and I've also used it in French a little, very often to produce French output. What's your take on language in all of this, Ethan?
- 00:54:08 So ChatGPT is not trained well in other languages, but it handles them incidentally. It even does pretty good Elvish, though mostly Sindarin and not Quenya, for anyone who's really nerdy about their Tolkien Elvish languages. But it's weird, because it turns out the language you talk to it in matters too. If you speak to the AI in Korean and give it a Big Five personality test, it answers more like a Korean person; if you ask it questions in English, it answers more like an American. So we don't even know the effects of language. We know that things like the number of spaces change the answers; we know the AI responds worse if you ask dumber questions. We literally have no idea.
- 00:54:48 There were a few questions, and you touch on this in the book: say something about mental health and AI. There are different ways of approaching it. One, which you've already alluded to, is that AI might create mental health issues, as we all wonder what our role in society will be. But there's also the possibility of using ChatGPT as a therapist, and you've played a little with this. Say a word about it, Ethan.
- 00:55:18 So this is one of those scary early-days things: the early evidence is that it's quite good, but we don't know enough, and we're not experimenting, yet people are casually using it as a therapist, for better or worse. People are getting addicted, and addicted is maybe the wrong word but it may also be the right one, to talking to AI personalities. The second most popular AI tool in the US, after ChatGPT, is not Claude, it's Character.AI, where you can create fake people and talk to them. With Replika, people have relationships with their AIs. There's a small study of 90 people, freshmen in college who were desperately lonely and using Replika, and 8% of them, I think it was, I don't remember the exact number, I'm sorry, said it prevented them from making a suicide attempt. We don't know what the effects of interacting with AIs as people are going to be. AI as therapist is one of the oldest uses of AI, and a lot of people think it's very good as a therapist, but we don't have a lot of evidence one way or another. So part of what I worry about is that out in the world it's being used in many ways, and we don't know whether those ways are good or bad, but it's already being done. So we need to study more and know more.
- 00:56:27 Ethan, as we've already pointed out, we're coming a little closer to the end. We have covered a lot of ground from the book. I have a few more things, but what are the things where you would say: look, this is something we haven't yet touched on, but it strikes me as very important?
- 00:56:46 I mean, there's a lot here, there's a lot to talk about, but I really want to emphasize two things. One is that you need to use it to understand it. A lot of the questions I see are really good questions, but for some of them, just use your 10 hours; you just have to do it, and you will become a local expert with 10 hours of use. The second thing is that more is coming. We're not used to dealing with exponentials, and there are only two ways to deal with exponential change, which are to be too early or too late. I think a lot of people are going to be too late and be very surprised by what's coming down the pike.
- 00:57:15 To what degree, and I know that you can't talk about details, Ethan, but to what degree do you have a glimpse into the future at this point? I know you're working with a lot of these companies. When you say more is coming, is that: hey, I've seen exponential growth, I know how this works?
- 00:57:34 Yeah, so right now I can't say too much, but I am testing new things; I'm testing some stuff in video, and other things, and they are advancing very quickly. Which we haven't even spoken about: I can fake myself speaking whatever language I want, super easily, without any problem. In fact, I was just wondering if I could quickly pull up myself talking in a different language. So I know some of what's going on, but I also think people have to realize the AI companies are actually super sincere: OpenAI really wants to build a machine god. I don't know if they can do it, but that's what they want to do, and they're very serious about it. In fact, just for fun on the language side, I think I can play sound really quickly, so tell me when you can see this. I'm just going to show you, as a side note, because we're talking about languages; it looks like it'll load in a second. Here we go, so here's me: this is a completely fake video of me. The AI used 30 seconds of me talking to a webcam and 30 seconds of my voice, and now I have an avatar that I can make say anything. Don't trust your own eyes. Anyway, you get the idea I'm talking about, which is that the set of implications is also quite large; you don't even need new tools for weird things to be happening.
- 00:59:08 Ethan, we could probably go on for a very, very long time. This has simply been fantastic; it feels very much like drinking from a fire hose. People can obviously follow you on Substack. We have done the recording, and, with apologies to regular listeners, sometimes the recording takes a little bit of time, but I'll try to get it up quickly. When I post it, I don't want to spam people with too many emails, so I typically just put the video on YouTube; you can also follow it on LinkedIn, which is probably the easiest way, just follow me or visit my profile. We just put up the video from last time, from Ain Meer. Otherwise, in case you enjoyed the talk, the person we have next is Alex Edmans, who has written a fantastic book about how we engage in causal inference and how we need to be careful interpreting studies. I already had a chance to read the book, it is absolutely fantastic, and I highly recommend it. In case you want to sign up, I put the link into the webinar chat. Otherwise, I'll follow up with Ethan's video and we'll include a link to Ethan's book, but it's very easy: just go to Amazon or your preferred bookseller of choice and look for Co-Intelligence. I also strongly recommend Ethan's absolutely fantastic Substack; I don't read many Substacks, but Ethan's I always read religiously. It comes at a very nice frequency and always has a lot of great insights about AI. So if you are interested in AI, read the book and subscribe to Ethan's Substack; I'll put links to both into the post when I summarize the whole thing. Ethan, thank you so much, this was great fun. Thank you for having me, this was great. Bye-bye.
- AI
- Co-Intelligence
- Ethan Mollick
- Technology
- AI Bias
- AI in Business
- Future of Work
- AI Adoption
- Jagged Frontier
- Mental Health