OpenAI's Altman and Makanju on Global Implications of AI

00:38:47
https://www.youtube.com/watch?v=GRioxTshI2E

Summary

TL;DR: OpenAI announced new guidelines for the use of AI in elections, aiming to enhance transparency and prevent misuse. These include banning the use of ChatGPT in political campaigns and introducing cryptographic watermarks to establish the provenance of AI-generated images. The company plans to enforce the guidelines through robust safety systems and partnerships with authoritative bodies. The discussion highlights the potential impact of AI on democratic processes and OpenAI's commitment to responsible AI use, and touches on the broader implications of AI for jobs and societal inequality; OpenAI is optimistic that AI will serve as a productivity tool rather than displace jobs. The dialogue also covers publisher relations, copyright management, and collaboration with military entities on non-violent projects such as cybersecurity. Throughout, OpenAI emphasizes balancing innovation with ethical considerations, preparing for more capable AI while respecting regulatory frameworks and societal norms.

Takeaways

  • 📜 New AI guidelines ban ChatGPT in elections.
  • 🔗 Cryptographic watermarks for AI images introduced.
  • 🛡️ Strong safety systems to enforce guidelines.
  • 🤝 Partnerships to provide credible voting information.
  • 🗳️ AI's role in elections needs cautious monitoring.
  • 🛑 AI impacts jobs by transforming roles, not displacing them.
  • 📚 Favor learning from high-quality data over quantity.
  • 🤖 OpenAI collaborates on non-violent military projects.
  • ⚙️ Explore innovative AI model developments carefully.
  • 🚫 Focus on protecting copyright and data rights.

Timeline

  • 00:00:00 - 00:05:00

    OpenAI has announced new guidelines for the use of AI in elections, including banning the use of ChatGPT in political campaigns and introducing cryptographic watermarks for AI-generated images to ensure transparency and authenticity. Although larger platforms like Facebook and YouTube struggle with enforcement, OpenAI leverages its safety systems and partnerships, such as with the National Association of Secretaries of State, to effectively enforce these guidelines.
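OpenAI's announced watermarking is based on signed provenance metadata (the company has said it follows the C2PA open standard for DALL·E images). As a loose illustration of the general idea only, not OpenAI's actual implementation, the sketch below binds a keyed signature to an image's content hash, so altering either the image or its metadata invalidates the record. The function names, metadata fields, and the use of a shared-key HMAC are illustrative assumptions:

```python
import hashlib
import hmac
import json

def sign_provenance(image_bytes: bytes, metadata: dict, key: bytes) -> dict:
    """Produce a provenance record binding metadata to the exact image content."""
    content_hash = hashlib.sha256(image_bytes).hexdigest()
    record = {"sha256": content_hash, **metadata}
    # Sign a canonical (sorted-key) serialization so verification is deterministic.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(image_bytes: bytes, record: dict, key: bytes) -> bool:
    """True only if the image is unmodified and the record was signed with `key`."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(image_bytes).hexdigest() != unsigned.get("sha256"):
        return False  # image content was altered after signing
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)
```

Real C2PA manifests use public-key signatures with certificate chains rather than a shared HMAC secret, so anyone can verify a manifest without being able to forge one; the sketch above only conveys the tamper-evidence idea.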

  • 00:05:00 - 00:10:00

    There is ongoing anxiety and focus within OpenAI regarding its role in upcoming elections, emphasizing the importance of monitoring and a proactive approach. The mention of past technological cycles suggests that existing dynamics, such as those seen with previous tech advancements, will inform their strategies. The seriousness with which OpenAI approaches the ethical development of AI is clear, drawing lessons from past incidents such as Cambridge Analytica.

  • 00:10:00 - 00:15:00

    The discussion extends to the implications of AI in the political sphere, recognizing that AI could become a significant social and political issue. AI could exacerbate inequalities or technological divides — the kind of dislocation and working-class anger that political figures like Donald Trump have tapped into. However, OpenAI views its technology as a productivity tool that enhances human capabilities, and expresses cautious optimism about its impact on jobs and society.

  • 00:15:00 - 00:20:00

    OpenAI prohibits the use of its technology to develop weapons or cause harm, while permitting military work in areas like cybersecurity and veteran suicide prevention, reflecting ethical considerations. Recent policy adjustments aim to improve clarity and transparency, consistent with the company's stated mission of keeping democracies in the lead on AI technology.

  • 00:20:00 - 00:25:00

    Collaboration with the Department of Defense is highlighted, focusing on non-destructive applications like cybersecurity. OpenAI is also building out the GPT Store, which has invited creativity akin to the early mobile app stores. Copyright issues and partnerships with publishers are being navigated toward cooperative content use, respecting copyright law while advancing AI capabilities.

  • 00:25:00 - 00:30:00

    The conversation touches upon the challenges and opportunities in AI-related copyright issues, with OpenAI fostering partnerships and addressing the complexities of data usage. They remain committed to working with artists, considering solutions for concerns about style imitation. This reflects their vision to create beneficial tools through dialogue and collaboration.

  • 00:30:00 - 00:38:47

    OpenAI's unique corporate structure, which includes a nonprofit board, was tested during recent internal governance turmoil. The employee-driven return of Sam Altman as CEO underscored the company's mission-driven focus. The discussion also covers regulation, safety, and potential tensions between AI progress and energy requirements, with an emphasis on balancing innovation with responsible governance.



Frequently Asked Questions

  • What new guidelines were announced by OpenAI?

    OpenAI announced new guidelines banning the use of ChatGPT in political campaigns and introducing cryptographic watermarks for AI-generated images.

  • How does OpenAI plan to enforce these AI guidelines?

    OpenAI plans to leverage strong safety systems and partnerships with organizations like the National Association of Secretaries of State to enforce AI guidelines.

  • What are the potential impacts of AI on elections?

    AI could impact the dissemination of information and campaign strategies, requiring careful monitoring and enforcement to maintain fair elections.

  • How does OpenAI view its role compared to distribution platforms like Facebook and TikTok?

    OpenAI sees its role as distinct but complementary to distribution platforms, focusing on generating content while platforms manage distribution.

  • What is OpenAI's stance on AI's impact on job displacement?

    OpenAI believes AI changes the nature of jobs rather than causing massive job displacement, offering tools that enhance productivity.

  • How important are publisher relations to OpenAI's business?

    Publisher relations are important for content access and partnerships, though OpenAI is focusing on learning from smaller amounts of higher-quality data.

  • What are OpenAI's policies on military use of AI?

    OpenAI prohibits the development of weapons but collaborates on cybersecurity tools and other non-violent applications with military agencies.

  • What are the anticipated challenges in regulating AI?

    Regulating AI involves ensuring innovation isn't stifled while establishing effective safety and ethical guidelines across industries.

  • What does OpenAI expect from future AI model developments?

    OpenAI anticipates more capable and efficient models, cautiously integrating advancements into society.

  • How does OpenAI handle copyright issues with AI models?

    OpenAI focuses on not training models with unauthorized data and aims to respect publisher rights while exploring data partnerships.
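In the interview, Altman suggests that if a publisher supplies a database of its articles, OpenAI "can probably do a pretty good job" of not emitting matches — at its core a near-duplicate detection problem. The sketch below shows one common, simple approach (word n-gram shingle overlap); the function names, the 8-gram size, and the 0.5 threshold are illustrative assumptions, not OpenAI's actual system:

```python
def ngrams(text: str, n: int = 8) -> set:
    """All n-word shingles of the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, reference: str, n: int = 8) -> float:
    """Fraction of the candidate's n-grams that also appear in the reference."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0  # candidate shorter than n words: nothing to match
    return len(cand & ngrams(reference, n)) / len(cand)

def blocks_output(candidate: str, reference_corpus: list, threshold: float = 0.5) -> bool:
    """Flag a model output that substantially reproduces any reference text."""
    return any(overlap_ratio(candidate, ref) >= threshold for ref in reference_corpus)
```

As Altman notes, the problem is harder than it sounds in a vacuum: unattributed copies of an article scattered across the web mean the reference corpus is incomplete, and occasional near-matches will still slip through, so a filter like this can only drive the rate "down and down" rather than to zero.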

Transcript (en)
  • 00:00:00
    you guys made some news today announcing
  • 00:00:02
    some new guidelines around the use of AI
  • 00:00:04
    in elections I'm sure it's all uh stuff
  • 00:00:07
    that the the the davo set loved to hear
  • 00:00:11
    uh you banned the use of chat GPT in
  • 00:00:13
    political campaigns you introduced
  • 00:00:16
    cryptographic watermarks or images
  • 00:00:18
    created by do to to create kind of
  • 00:00:20
    Providence and transparency around the
  • 00:00:23
    use of AI generated images I read it and
  • 00:00:25
    I thought you know this is great some of
  • 00:00:28
    these principles are shared by much
  • 00:00:29
    larger platforms like Facebook and Tik
  • 00:00:31
    Tok and YouTube and they have struggled
  • 00:00:34
    to enforce it how do
  • 00:00:36
    you make it
  • 00:00:38
    real I mean these a lot of these are
  • 00:00:40
    things that we've been doing for a long
  • 00:00:42
    time and we have a really strong safety
  • 00:00:44
    systems team that um not only sort of
  • 00:00:48
    has monitoring but we're actually able
  • 00:00:49
    to leverage our own tools in order to
  • 00:00:51
    scale our enforcement which gives us I
  • 00:00:53
    think a significant Advantage um but uh
  • 00:00:56
    so there this there also some some
  • 00:01:00
    really important Partnerships like with
  • 00:01:01
    the National Association with the
  • 00:01:02
    secretaries of state so we can Surface
  • 00:01:04
    authoritative voting information so we
  • 00:01:06
    have quite a few ways that we are able
  • 00:01:08
    to enforce this mean Sam are you does
  • 00:01:10
    this put your mind at ease that we don't
  • 00:01:13
    that that open AI doesn't move the
  • 00:01:15
    needle in in some 77 upcoming critical
  • 00:01:19
    Democratic elections in 2024 no we're
  • 00:01:22
    quite focused on it uh and I think it's
  • 00:01:24
    good that our mind is not at EAS I think
  • 00:01:25
    it's good that we have a lot of anxiety
  • 00:01:26
    and are going to do everything we can to
  • 00:01:28
    get it as right as we can um I think our
  • 00:01:31
    role is very different than the role of
  • 00:01:33
    a distribution platform but still
  • 00:01:35
    important we'll have to work with them
  • 00:01:36
    too uh it'll you know it's like you
  • 00:01:38
    generate here and distribute here uh and
  • 00:01:41
    there needs to be a good conversation
  • 00:01:43
    between them but we also have the
  • 00:01:45
    benefit of having watched what's
  • 00:01:47
    happened in previous Cycles with
  • 00:01:49
    previous uh you know Technologies and I
  • 00:01:53
    don't think this will be the same as
  • 00:01:54
    before I I think it's always a mistake
  • 00:01:55
    to try to fight the last war but we do
  • 00:01:58
    get to take away some learnings from
  • 00:01:59
    that
  • 00:02:00
    and so I I wouldn't you know I I think
  • 00:02:03
    it'd be terrible if I said oh yeah I'm
  • 00:02:04
    not worried I feel great like we're
  • 00:02:05
    going to have to watch this incredibly
  • 00:02:07
    closely this year super tight monitoring
  • 00:02:09
    super tight feedback loop Anna you you
  • 00:02:11
    were at Facebook for open
  • 00:02:13
    AI so so I almost apologize for asking
  • 00:02:16
    it this in this way uh probably a
  • 00:02:18
    trigger phrase but do you worry about
  • 00:02:20
    another Cambridge analytical analytica
  • 00:02:22
    moment I think as Sam alluded to there
  • 00:02:25
    are a lot of learnings that we can
  • 00:02:27
    leverage but also open the eye from its
  • 00:02:29
    Inception has been a company that thinks
  • 00:02:31
    about these issues that it was one of
  • 00:02:33
    the reasons that it was founded so I
  • 00:02:35
    think I'm a lot less concerned because
  • 00:02:37
    these are issues that our teams have
  • 00:02:38
    been thinking about from the beginning
  • 00:02:40
    of uh our building of these tools Sam
  • 00:02:44
    Donald Trump just won the Iowa caucus
  • 00:02:47
    yesterday uh we are now sort of
  • 00:02:49
    confronted with the reality of this
  • 00:02:50
    upcoming election what do you think is
  • 00:02:52
    at
  • 00:02:53
    stake in the in the US election for for
  • 00:02:57
    Tech and for the safe stewardship of AI
  • 00:02:59
    do you feel like that's a a critical
  • 00:03:01
    issue that voters should and will have
  • 00:03:04
    to consider in this election I think the
  • 00:03:05
    now confronted as part of the problem uh
  • 00:03:07
    I actually think most people who come to
  • 00:03:09
    D say that again I didn't quite get that
  • 00:03:10
    I think part of the problem is we're
  • 00:03:11
    saying we're now confronted you know it
  • 00:03:13
    never occurred to us that what Trump is
  • 00:03:15
    saying might be resonating with a lot of
  • 00:03:16
    people and now all of a sudden after
  • 00:03:18
    this performance in Iowa oh man um it's
  • 00:03:22
    a very like Davos Centric you know um
  • 00:03:25
    I've been here for two days I guess
  • 00:03:28
    just uh so I I would love if we had a
  • 00:03:32
    lot more reflection and if we started it
  • 00:03:34
    a lot sooner um about and we didn't feel
  • 00:03:37
    now confronted but uh I think there's a
  • 00:03:39
    lot at stake at this election I think
  • 00:03:41
    elections are you know huge deals I
  • 00:03:44
    believe that America is going to be fine
  • 00:03:47
    no matter what happens in this election
  • 00:03:49
    I believe that AI is going to be fine no
  • 00:03:51
    matter what happens in this election and
  • 00:03:52
    we will have to work very hard to make
  • 00:03:54
    it so um but this is not you know no one
  • 00:03:59
    wants to sit up here and like hear me
  • 00:04:01
    rant about politics I'm going to stop
  • 00:04:02
    after this um but I think there has been
  • 00:04:07
    a real failure to sort of learn lessons
  • 00:04:11
    about what what's kind of like working
  • 00:04:13
    for the citizens of America and what's
  • 00:04:15
    not Anna I want to ask you the same
  • 00:04:17
    question uh um you know taking your
  • 00:04:19
    political background into account what
  • 00:04:21
    do you feel like for Silicon Valley for
  • 00:04:24
    AI is at stake in the US election I
  • 00:04:27
    think what has struck me and has been
  • 00:04:29
    really remarkable is that the
  • 00:04:30
    conversation around AI has remained very
  • 00:04:34
    bipartisan and so you know I think that
  • 00:04:37
    the one concern I have is that somehow
  • 00:04:39
    both parties hate
  • 00:04:42
    it no but you know this is like an area
  • 00:04:45
    where um
  • 00:04:46
    you Republicans tend to of course have a
  • 00:04:50
    an approach where they are not as in
  • 00:04:52
    favor of Regulation but on this I think
  • 00:04:54
    there's agreement on both parties that
  • 00:04:55
    they are consider they believe that
  • 00:04:57
    something is needed on this technology
  • 00:05:00
    you know Senator Schumer has this
  • 00:05:01
    bipartisan effort that he is running
  • 00:05:03
    with his Republican counterparts again
  • 00:05:05
    uh when we speak to people in DC on both
  • 00:05:08
    sides of the aisle for now it seems like
  • 00:05:11
    they're on the same page and do you feel
  • 00:05:13
    like all the existing campaigns are
  • 00:05:15
    equally articulate about the about the
  • 00:05:18
    issues relating to Ai No know that AI
  • 00:05:20
    has really been a campaign issue to date
  • 00:05:22
    so it will be interesting to see how
  • 00:05:24
    that if we're right about what's going
  • 00:05:25
    to happen here this is like bigger than
  • 00:05:28
    just a technological re ution in some
  • 00:05:30
    sense I mean sort of like all
  • 00:05:31
    technological revolutions or societal
  • 00:05:33
    revolutions but this one feels like it
  • 00:05:35
    can be much more of that than usual and
  • 00:05:39
    so it it is going to become uh a social
  • 00:05:43
    issue a political issue um it already
  • 00:05:45
    has in some ways but I think it is
  • 00:05:48
    strange to both of us that it's not more
  • 00:05:50
    of that already but with what we expect
  • 00:05:52
    to happen this year not with the
  • 00:05:53
    election but just with the the increase
  • 00:05:55
    in the capabilities of the products uh
  • 00:05:58
    and as people really
  • 00:06:00
    catch up with what's going to happen
  • 00:06:02
    what is happening what's already
  • 00:06:04
    happened uh there's like a lot of a Nur
  • 00:06:05
    always in society well I mean there are
  • 00:06:07
    political figures in the US and around
  • 00:06:08
    the world like Donald Trump who have
  • 00:06:11
    successfully tapped into a feeling of
  • 00:06:13
    yeah
  • 00:06:14
    dislocation uh anger of the working
  • 00:06:17
    class the feeling of you know
  • 00:06:19
    exacerbating inequality or technology
  • 00:06:22
    leaving people behind is there the
  • 00:06:24
    danger that uh you know AI furthers
  • 00:06:27
    those Trends yes for sure I think that's
  • 00:06:29
    something to think about but one of the
  • 00:06:32
    things that surprised us very pleasantly
  • 00:06:34
    on the upside uh cuz you know when you
  • 00:06:36
    start building a technology you start
  • 00:06:37
    doing research you you kind of say well
  • 00:06:39
    we'll follow where the science leads us
  • 00:06:40
    and when you put a product you'll say
  • 00:06:42
    this is going to co-evolve with society
  • 00:06:43
    and we'll follow where users lead us but
  • 00:06:46
    it's not you get you get to steer it but
  • 00:06:48
    only somewhat there's some which is just
  • 00:06:50
    like this is what the technology can do
  • 00:06:53
    this is how people want to use it and
  • 00:06:55
    this is what it's capable of and this
  • 00:06:57
    has been much more of a tool than I
  • 00:06:59
    think we expected it is not yet and
  • 00:07:02
    again in the future it'll it'll get
  • 00:07:04
    better but it's not yet like replacing
  • 00:07:06
    jobs in the way to the degree that
  • 00:07:08
    people thought it was going to it is
  • 00:07:10
    this incredible tool for productivity
  • 00:07:13
    and you can see people magnifying what
  • 00:07:14
    they can do um by a factor of two or
  • 00:07:17
    five or in some way that doesn't even
  • 00:07:19
    talk to makes sense to talk about a
  • 00:07:20
    number because they just couldn't do the
  • 00:07:21
    things at all before and that is I think
  • 00:07:25
    quite exciting this this new vision of
  • 00:07:28
    the future that we didn't really see
  • 00:07:30
    when we started we kind of didn't know
  • 00:07:31
    how it was going to go and very thankful
  • 00:07:33
    the technology did go in this direction
  • 00:07:35
    but where this is a tool that magnifies
  • 00:07:37
    what humans do lets people do their jobs
  • 00:07:39
    better lets the AI do parts of jobs and
  • 00:07:42
    of course jobs will change and of course
  • 00:07:43
    some jobs will totally go away but the
  • 00:07:46
    human drives are so strong and the sort
  • 00:07:48
    of way that Society works is so strong
  • 00:07:50
    that I think and I can't believe I'm
  • 00:07:52
    saying this because it would have
  • 00:07:54
    sounded like an ungrammatical sentence
  • 00:07:56
    to me at some point but I think AGI will
  • 00:07:58
    get developed in the reasonably
  • 00:08:00
    close-ish future and it'll change the
  • 00:08:02
    world much less than we all think it'll
  • 00:08:03
    change jobs much less than we all think
  • 00:08:06
    and again that sounds I may be wrong
  • 00:08:08
    again now but that wouldn't have even
  • 00:08:10
    compiled for me as a sentence at some
  • 00:08:11
    point given my conception then of how
  • 00:08:13
    AGI was going to go as you've watched
  • 00:08:15
    the technology develop have you both
  • 00:08:17
    changed your views on how significant
  • 00:08:19
    the job dislocation and disruption will
  • 00:08:22
    be as AGI comes into Focus so this is
  • 00:08:25
    actually an area that we know we have a
  • 00:08:26
    policy research team that studies this
  • 00:08:28
    and they've seen pretty significant
  • 00:08:30
    impact in terms of changing the way
  • 00:08:31
    people do jobs rather than job
  • 00:08:33
    dislocation and I think that's actually
  • 00:08:35
    going to accelerate and that it's going
  • 00:08:36
    to change more people's jobs um but as
  • 00:08:39
    Sam said so far it hasn't been the
  • 00:08:41
    significant a replacement of jobs you
  • 00:08:44
    know you hear a coder say okay I'm like
  • 00:08:46
    two times more productive three times
  • 00:08:48
    more productive whatever than they used
  • 00:08:49
    to be and I like can never code again
  • 00:08:50
    without this tool you mostly hear that
  • 00:08:52
    from the younger ones but
  • 00:08:54
    um it turns out and I think this will be
  • 00:08:57
    true for a lot of Industries the world
  • 00:08:58
    just needs a lot more code than we have
  • 00:09:00
    people to write right now and so it's
  • 00:09:02
    not like we run out of demand it's that
  • 00:09:04
    people can just do more expectations go
  • 00:09:06
    up but ability goes up
  • 00:09:08
    too goes up I want to ask you about
  • 00:09:10
    another news report today that suggested
  • 00:09:13
    that open AI was relaxing its
  • 00:09:15
    restrictions around the use of AI in
  • 00:09:18
    military projects and developing weapons
  • 00:09:21
    can you say more about that and you what
  • 00:09:24
    work are you doing with the US
  • 00:09:25
    Department of Defense and other military
  • 00:09:27
    agencies so a lot of these policies were
  • 00:09:30
    written um before we even knew what
  • 00:09:32
    these people would use our tools for so
  • 00:09:34
    what this was not actually just the
  • 00:09:37
    adjustment of the military use case
  • 00:09:38
    policies but across the board to make it
  • 00:09:40
    more clear so that people understand
  • 00:09:42
    what is possible what is not possible
  • 00:09:43
    but specifically on this um area we
  • 00:09:47
    actually still prohibit the development
  • 00:09:49
    of weapons um the destruction of
  • 00:09:51
    property harm to individuals but for
  • 00:09:53
    example we've been doing work with the
  • 00:09:55
    Department of Defense on um cyber
  • 00:09:57
    security tools for uh open source
  • 00:09:59
    software that secures critical
  • 00:10:01
    infrastructure we've been exploring
  • 00:10:02
    whether it can assist with veteran
  • 00:10:04
    suicide and because we previously had a
  • 00:10:06
    what essentially was a blanket
  • 00:10:07
    prohibition on Military many people felt
  • 00:10:10
    like that would have prohibited any of
  • 00:10:12
    these use cases which we think are very
  • 00:10:13
    much aligned with what we want to see in
  • 00:10:15
    the world has the US government asked
  • 00:10:17
    you to restrict the level of cooperation
  • 00:10:20
    with uh militaries in other
  • 00:10:23
    countries um they haven't asked us but
  • 00:10:25
    we certainly are not you know right for
  • 00:10:28
    now actually our discussion are focused
  • 00:10:29
    on um United States national security
  • 00:10:32
    agencies and um you know I think we have
  • 00:10:35
    always believed that democracies need to
  • 00:10:37
    be in the lead on this technology uh Sam
  • 00:10:39
    changing topics uh give us an update on
  • 00:10:41
    the GPT store and are you seeing maybe
  • 00:10:44
    probably explain it briefly and are you
  • 00:10:45
    seeing the same kind of explosion of
  • 00:10:47
    creativity we saw in the early days of
  • 00:10:50
    the mobile app stores yeah the same
  • 00:10:52
    level of creativity and the same level
  • 00:10:53
    of crap but it I mean that happens in
  • 00:10:56
    the early days as people like feel out a
  • 00:10:57
    technology there's some incredible stuff
  • 00:10:59
    in there too um give us an example the
  • 00:11:01
    gpts should I say what gpts are first
  • 00:11:04
    yeah sure um so gpts are a way to do a
  • 00:11:06
    very lightweight customization of chat
  • 00:11:08
    GPT and if you want it to behave in a
  • 00:11:11
    particular way to use particular data to
  • 00:11:13
    be able to call out to an external
  • 00:11:14
    service um you can make this thing and
  • 00:11:17
    you can do all sorts of like uh great
  • 00:11:19
    stuff with it um and then we just
  • 00:11:21
    recently launched a store where you can
  • 00:11:23
    see what other people have built and you
  • 00:11:24
    can share it and um I mean personally
  • 00:11:27
    one that I have loved is Al Trails I
  • 00:11:29
    have this like every other weekend I
  • 00:11:31
    would like to like go for a long hike
  • 00:11:33
    and there's always like the version of
  • 00:11:34
    Netflix that other people have where
  • 00:11:35
    it's like takes an hour to figure out
  • 00:11:37
    what to watch it takes me like two hours
  • 00:11:38
    to figure out what hike to do and the
  • 00:11:40
    all Trails thing to like say I want this
  • 00:11:42
    I want that you know I've already done
  • 00:11:44
    this one and like here's a great hike
  • 00:11:46
    it's been I it's sounds silly but I love
  • 00:11:48
    that one have you added any gpts of your
  • 00:11:50
    own have I made any yeah um I have not
  • 00:11:53
    put any in the store maybe I will great
  • 00:11:57
    um can you give us an update on the
  • 00:11:58
    volume or or the pace at which you're
  • 00:12:00
    seeing new gpts um the number I know is
  • 00:12:03
    that there had been 3 million created
  • 00:12:04
    before we launched the store I have been
  • 00:12:05
    in the middle of this trip around the
  • 00:12:07
    world that has been quite hectic and I
  • 00:12:08
    have not been doing my normal daily
  • 00:12:10
    metrics tracking so I don't know how
  • 00:12:12
    it's gone since launch but I'll tell you
  • 00:12:13
    by the slowness of chat GPT it's
  • 00:12:15
    probably doing really
  • 00:12:18
    well um I want to ask you about open
  • 00:12:20
    ai's copyright issues uh how important
  • 00:12:22
    are publisher relations to open ai's
  • 00:12:25
    business considering for example the
  • 00:12:27
    lawsuit last month file against open AI
  • 00:12:29
    by the New York Times They are important
  • 00:12:32
    but not for the reason people think um
  • 00:12:34
    there is this belief held by some people
  • 00:12:36
    that man you need all of my training
  • 00:12:38
    data and my training data is so valuable
  • 00:12:40
    and actually uh that is generally not
  • 00:12:43
    the case we do not want to train on the
  • 00:12:45
    New York Times data for example um and
  • 00:12:48
    all more generally we're getting to a
  • 00:12:49
    world where it's been like data data
  • 00:12:51
    data you just need more you need more
  • 00:12:53
    you need more you're going to run out of
  • 00:12:54
    that at some point anyway so a lot of
  • 00:12:55
    our research has been how can we learn
  • 00:12:57
    more from smaller amounts of very high
  • 00:12:59
    quality data and I think the world is
  • 00:13:01
    going to figure that out what we want to
  • 00:13:02
    do with Publishers if they want is when
  • 00:13:05
    one of our users says what happened to
  • 00:13:08
    Davos today be able to say here's an
  • 00:13:10
    article from blueberg here's an article
  • 00:13:11
    from New York Times and here you know
  • 00:13:12
    here's like a little snippet or probably
  • 00:13:14
    not a snippet there's probably some
  • 00:13:15
    cooler thing that we can do with the
  • 00:13:16
    technology and you know some people want
  • 00:13:18
    to partner with us some people don't
  • 00:13:20
    we've been striking a lot of great
  • 00:13:21
    Partnerships and we have a lot more
  • 00:13:23
    coming um and then you know some people
  • 00:13:26
    don't want want to uh we'd rather they
  • 00:13:28
    just say we don't want to do that rather
  • 00:13:30
    than Sue us but like we'll defend
  • 00:13:32
    ourselves that's fine too I just heard
  • 00:13:34
    you say you don't want to train on the
  • 00:13:36
    New York Times does that mean given the
  • 00:13:38
    the legal exposure you would have done
  • 00:13:40
    things differently as you trained your
  • 00:13:41
    model here's a tricky thing about that
  • 00:13:43
    um people the web is a big thing and
  • 00:13:46
    there are people who like copy from The
  • 00:13:47
    New York Times and put an article
  • 00:13:49
    without attribution up on some website
  • 00:13:51
    and you don't know that's a New York
  • 00:13:52
    Times article if the New York Times
  • 00:13:54
    wants to give us a database of all their
  • 00:13:55
    articles or someone else does and say
  • 00:13:57
    hey don't put anything out that's like a
  • 00:13:58
    match for this we can probably do a
  • 00:14:00
    pretty good job and um solve we don't
  • 00:14:03
    want to regurgitate someone else's
  • 00:14:05
    content um but the problem is not as
  • 00:14:07
    easy as it sounds in a vacuum I think we
  • 00:14:10
    can get that number down and down and
  • 00:14:11
    down have it be quite low and that seems
  • 00:14:13
    like a super reasonable thing to
  • 00:14:15
    evaluate us on you know if you have
  • 00:14:17
    copyrighted content whether or not it
  • 00:14:20
    got put into someone else's thing
  • 00:14:22
    without our knowledge and you're willing
  • 00:14:23
    to show us what it is and say don't
  • 00:14:25
    don't put this stuff as a direct
  • 00:14:26
    response we should be able to do that
  • 00:14:29
    um again it won't like thousand you know
  • 00:14:32
    monkeys thousand typewriters whatever it
  • 00:14:33
    is once in a while the model will just
  • 00:14:35
    generate something very close but on the
  • 00:14:36
    whole we should be able to do a great
  • 00:14:38
    job with this um so there's like there's
  • 00:14:41
    all the negatives of this people like ah
  • 00:14:43
    you know don't don't do this but the
  • 00:14:44
    positives are I think there's going to
  • 00:14:46
    be great new ways to consume and
  • 00:14:49
    monetize news and other published
  • 00:14:51
    content and for every one New York Times
  • 00:14:54
    situation we have we have many more
  • 00:14:56
    Super productive things about people
  • 00:14:57
    that are excited to to build the future
  • 00:14:59
    and not do their theatrics and and what
  • 00:15:03
    and what about DOI I mean there have
  • 00:15:05
    been artists who have been upset with
  • 00:15:07
    Dolly 2 Dolly 3 what what has that
  • 00:15:09
    taught you and how will you do things
  • 00:15:11
    differently we engage with the artist
  • 00:15:12
    Community a lot and uh you know we we
  • 00:15:15
    try to like do the requests so one is
  • 00:15:16
    don't don't generate in my style um even
  • 00:15:20
    if you're not training on my data super
  • 00:15:22
    reasonable so we you know Implement
  • 00:15:23
    things like that
  • 00:15:25
    um you know let me opt out of training
  • 00:15:27
    even if my images are all over the
  • 00:15:28
    Internet and you don't know what they
  • 00:15:29
    are what I'm and so there's a lot of
  • 00:15:31
    other things too what I'm really excited
  • 00:15:32
    to do and the technology isn't here yet
  • 00:15:34
    but get to a point where rather than the
  • 00:15:36
    artist say I don't want this thing for
  • 00:15:38
    these reasons be able to deliver
  • 00:15:40
    something where an artist can make a
  • 00:15:41
    great version of DALL-E in their style
  • 00:15:44
    sell access to that if they want don't
  • 00:15:46
    if they don't want just use it for
  • 00:15:47
    themselves uh or get some sort of
  • 00:15:49
    economic benefit or otherwise when
  • 00:15:52
    someone does use their stuff um and it's
  • 00:15:54
    not just training on their images it
  • 00:15:55
    really is like you know it really is
  • 00:15:59
    about style uh and and that's that's the
  • 00:16:02
    thing that at least in the artist
  • 00:16:03
    conversations I've had that people are
  • 00:16:05
    super interested in so for now it's like
  • 00:16:07
    all right let's know what people don't
  • 00:16:08
    want make sure that we respect that um
  • 00:16:11
    of course you can't make everybody happy
  • 00:16:12
    but try to like make the community feel
  • 00:16:14
    like we're being a good partner um but
  • 00:16:17
    what what I what I think will be better
  • 00:16:18
    and more exciting is when we can do
  • 00:16:20
    things that artists are like that's
  • 00:16:23
    awesome Anna you are OpenAI's ambassador
  • 00:16:27
    to Washington other capitals around the
  • 00:16:30
    world I I'm curious what you've taken
  • 00:16:32
    from your experience in Facebook what
  • 00:16:34
    you've taken from the tense relations
  • 00:16:37
    between a lot of tech companies and
  • 00:16:39
    governments and Regulators over the past
  • 00:16:42
    few decades and how you're putting that
  • 00:16:43
    to use now at OpenAI I mean so I
  • 00:16:46
    think one thing that I really learned
  • 00:16:48
    working in government and of course I
  • 00:16:49
    worked in the White House during the
  • 00:16:51
    2016 Russia election interference and
  • 00:16:53
    people think that that was the first
  • 00:16:54
    time we'd ever heard of it but it was
  • 00:16:56
    something that we had actually been
  • 00:16:57
    working on for years and thinking you
  • 00:16:59
    know we know that this happens what do
  • 00:17:01
    we do about it and one thing I never did
  • 00:17:03
    during that period is go out and talk to
  • 00:17:05
    the companies because it's not actually
  • 00:17:07
    typical thing you do in government and
  • 00:17:08
    was much more rare back then especially
  • 00:17:11
    with you know these emerging tools and I
  • 00:17:13
    thought about that a lot as I entered
  • 00:17:15
    the tech space that I regretted that and
  • 00:17:16
    that I wanted governments to be able to
  • 00:17:18
    really understand the technology and how
  • 00:17:19
    the decisions are made by these
  • 00:17:21
    companies and also just honestly when I
  • 00:17:23
    first joined OpenAI no one of course had
  • 00:17:25
    heard of OpenAI in government for the
  • 00:17:27
    most part
  • 00:17:28
    and I thought every time I used it I
  • 00:17:31
    thought my God if I'd had this for the 8
  • 00:17:33
    years I was in the administration I
  • 00:17:35
    could have gotten 10 times more done so
  • 00:17:37
    for me it was really how do I get my
  • 00:17:38
    colleagues to use it um especially with
  • 00:17:40
    OpenAI's mission to make sure these
  • 00:17:42
    tools benefit everyone I don't think
  • 00:17:44
    that'll ever happen unless governments
  • 00:17:45
    are incorporating it to serve citizens
  • 00:17:47
    more efficiently and faster and so this
  • 00:17:49
    is actually one of the things I've been
  • 00:17:51
    most excited about is to just really get
  • 00:17:53
    governments to use it for everyone's
  • 00:17:55
    benefit I mean I'm hearing like a lot of
  • 00:17:56
    sincerity in that pitch are Regulators
  • 00:17:59
    receptive to it it feels like a lot are
  • 00:18:01
    coming to the conversation probably with
  • 00:18:04
    a good deal of skepticism because of
  • 00:18:06
    past interactions with Silicon Valley I
  • 00:18:08
    think I mostly don't even really get to
  • 00:18:09
    talk about it because for the most part
  • 00:18:11
    people are interested in governance and
  • 00:18:12
    Regulation and I think that they know um
  • 00:18:16
    theoretically that there is a lot of
  • 00:18:17
    benefit the government many governments
  • 00:18:19
    are not quite ready to incorporate I
  • 00:18:20
    mean there are exceptions obviously
  • 00:18:22
    people who are really at the Forefront
  • 00:18:24
    so it's not you know I think often I
  • 00:18:26
    just don't even really get to that
  • 00:18:27
    conversation
  • 00:18:29
    so I want to ask you both about the
  • 00:18:31
    dramatic turn of events in uh November
  • 00:18:34
    Sam one day the window on these
  • 00:18:36
    questions will close um that is not you
  • 00:18:39
    think they
  • 00:18:40
    will I think at some point they probably
  • 00:18:43
    will but it hasn't happened yet so it
  • 00:18:45
    doesn't doesn't matter um I guess my
  • 00:18:47
    question is is you know have you
  • 00:18:50
    addressed the governance
  • 00:18:52
    issues the very unique uh corporate
  • 00:18:56
    structure at OpenAI with the nonprofit
  • 00:18:58
    board and the cap profit arm that led to
  • 00:19:02
    your ouster we're going to focus first on
  • 00:19:05
    putting a great full board in place um I
  • 00:19:08
    expect us to make a lot of progress on
  • 00:19:09
    that in the coming months uh and then
  • 00:19:11
    after that the new board uh will take a
  • 00:19:13
    look at the governance structure but I
  • 00:19:15
    think we debated both what does that
  • 00:19:17
    mean is it should open AI be a
  • 00:19:19
    traditional Silicon Valley for-profit
  • 00:19:21
    company we'll never be a traditional
  • 00:19:23
    company but the structure I I think we
  • 00:19:25
    should take a look at the structure
  • 00:19:27
    maybe the answer we have now is right
  • 00:19:28
    but I think we should be willing to
  • 00:19:30
    consider other things but I think this
  • 00:19:32
    is not the time for it and the focus on
  • 00:19:33
    the board first and then we'll go look
  • 00:19:35
    at it from all angles I mean presumably
  • 00:19:37
    you have investors including Microsoft
  • 00:19:40
    including uh your Venture Capital
  • 00:19:42
    supporters um your employees who uh over
  • 00:19:46
    the long term are seeking a return on
  • 00:19:48
    their investment um I think one of the
  • 00:19:51
    things that's difficult to express about
  • 00:19:54
    OpenAI is the degree to which our team
  • 00:19:57
    and the people around us investors
  • 00:19:58
    Microsoft whatever are committed to this
  • 00:20:01
    Mission um in the middle of that crazy
  • 00:20:04
    few days uh at one point I think like 97
  • 00:20:09
    something like that 98% of the company
  • 00:20:11
    signed uh a letter saying you know we're
  • 00:20:14
    all going to resign and go to something
  • 00:20:16
    else and that would have torched
  • 00:20:18
    everyone's equity and for a lot of our
  • 00:20:20
    employees like this is all or the great
  • 00:20:22
    majority of their wealth and people
  • 00:20:24
    being willing to go do that I think is
  • 00:20:27
    quite unusual our investors who also
  • 00:20:29
    were about to like watch their Stakes go
  • 00:20:31
    to zero which just like how can we
  • 00:20:33
    support you and whatever is best for for
  • 00:20:35
    the mission Microsoft too um I feel very
  • 00:20:37
    very fortunate about that uh of course
  • 00:20:40
    also would like to make all of our
  • 00:20:42
    shareholders a bunch of money but it was
  • 00:20:44
    very clear to me what people's
  • 00:20:45
    priorities were and uh that meant a lot
  • 00:20:47
    I I I sort of smiled because you came to
  • 00:20:49
    the Bloomberg Tech Conference in last
  • 00:20:51
    June and Emily Chang asked uh it was
  • 00:20:54
    something along along the lines of why
  • 00:20:56
    should we trust you and you very
  • 00:20:58
    candidly says you shouldn't and you said
  • 00:21:00
    the board should be able to fire me if
  • 00:21:02
    if they want and of course then they did
  • 00:21:05
    and you quite uh adeptly orchestrated
  • 00:21:08
    your return actually let me tell you
  • 00:21:09
    something um I the board did that I was
  • 00:21:12
    like I think this is wild super confused
  • 00:21:16
    super caught off guard but this is the
  • 00:21:17
    structure and I immediately just went to
  • 00:21:19
    go thinking about what I was going to do
  • 00:21:20
    next it was not until some board members
  • 00:21:22
    called me the next morning that I even
  • 00:21:24
    thought about really coming back um when
  • 00:21:27
    they asked you don't want you want to
  • 00:21:28
    come back uh you want to talk about that
  • 00:21:31
    but like the board did have all of the
  • 00:21:33
    Power there now you know what I'm not
  • 00:21:36
    going to say that next thing but I I I
  • 00:21:39
    think you should continue I think I no I
  • 00:21:42
    would I would also just say that I think
  • 00:21:44
    that there's a lot of narratives out
  • 00:21:45
    there it's like oh well this was
  • 00:21:46
    orchestrated by all these other forces
  • 00:21:48
    it's not accurate I mean it was the
  • 00:21:50
    employees of OpenAI that wanted this
  • 00:21:54
    and that thought that it was the right
  • 00:21:55
    thing for Sam to be back the you know
  • 00:21:57
    like yeah the thing I will say is uh I
  • 00:21:59
    think it's important that I have an
  • 00:22:01
    entity that like can fire me but that
  • 00:22:04
    entity has got to have some
  • 00:22:05
    accountability too and that is a clear
  • 00:22:08
    issue with what happened right Anna you
  • 00:22:11
    wrote a remarkable letter to employees
  • 00:22:13
    during The Saga and one of the many
  • 00:22:15
    reasons I was excited to to have you on
  • 00:22:17
    stage today was just to ask you what
  • 00:22:20
    were those five days like for you and
  • 00:22:22
    why did you step up and write that uh
  • 00:22:25
    Anna can clearly answer this if she
  • 00:22:26
    wants to but like is really what you
  • 00:22:28
    want to spend our time on like the soap
  • 00:22:30
    opera rather than like what AI is going
  • 00:22:32
    to do I mean I'm wrapping it up but but
  • 00:22:35
    um I mean go I think people are
  • 00:22:36
    interested okay well we can leave it
  • 00:22:38
    here if you want no no yeah let's let's
  • 00:22:40
    answer that question and we'll we'll we
  • 00:22:42
    can move on I would just say uh for
  • 00:22:45
    color that it happened the day before
  • 00:22:46
    the entire company was supposed to take
  • 00:22:48
    a week off so we were all on Friday uh
  • 00:22:50
    preparing to you know have a restful
  • 00:22:52
    week after an insane year so then you
  • 00:22:54
    know many of us slept on the floor of
  • 00:22:56
    the office for a week right there's a
  • 00:22:58
    question here that I think is a a really
  • 00:23:00
    good one we are at Davos climate change
  • 00:23:03
    is on the agenda um the question is does
  • 00:23:06
    do well I'm going to give it a different
  • 00:23:08
    spin considering the compute costs and
  • 00:23:12
    the the need for chips does the
  • 00:23:14
    development of AI and the path to AGI
  • 00:23:16
    threaten to take us in the opposite
  • 00:23:19
    direction on the climate
  • 00:23:23
    um we do need way more energy in the
  • 00:23:27
    world than I think we thought we needed
  • 00:23:29
    before my my whole model of the world is
  • 00:23:32
    that the two important currencies of the
  • 00:23:34
    future are compute/intelligence and
  • 00:23:37
    energy um you know the ideas that we
  • 00:23:40
    want and the ability to make stuff
  • 00:23:42
    happen and uh the ability to like run
  • 00:23:44
    the compute and I think we still don't
  • 00:23:47
    appreciate the energy needs of this
  • 00:23:50
    technology um the good news to the
  • 00:23:53
    degree there's good news is there's no
  • 00:23:55
    way to get there without a breakthrough
  • 00:23:57
    we need Fusion or we need like radically
  • 00:23:59
    cheaper solar Plus Storage or something
  • 00:24:02
    at massive scale like a scale that no
  • 00:24:04
    one is really planning for um so
  • 00:24:08
    we it's totally fair to say that AI is
  • 00:24:11
    going to need a lot of energy but it
  • 00:24:13
    will force us I think to invest more in
  • 00:24:16
    the technologies that can deliver this
  • 00:24:18
    none of which are the ones that are
  • 00:24:19
    burning the carbon like that'll be those
  • 00:24:21
    all those unbelievable number of fuel
  • 00:24:23
    trucks and by the way you back one or
  • 00:24:25
    more nuclear yeah I I personally think
  • 00:24:29
    that
  • 00:24:31
    is either the most likely or the second
  • 00:24:33
    most likely approach feel like the world
  • 00:24:36
    is more receptive to that technology now
  • 00:24:38
    certainly historically not in the US um
  • 00:24:40
    I think the world is still
  • 00:24:43
    unfortunately pretty negative on fission
  • 00:24:46
    super positive on Fusion it's a much
  • 00:24:48
    easier story um but I wish the world
  • 00:24:51
    would embrace fission much more look I
  • 00:24:55
    I may be too optimistic about this but I
  • 00:24:56
    think
  • 00:24:58
    I I think we have paths now to
  • 00:25:02
    massive a massive energy transition away
  • 00:25:05
    from burning carbon it'll take a while
  • 00:25:07
    those cars are going to keep driving
  • 00:25:08
    there you know there's all the transport
  • 00:25:10
    stuff it'll be a while till there's like
  • 00:25:12
    a fusion reactor in every cargo ship um
  • 00:25:14
    but if if we can drop the cost of energy
  • 00:25:16
    as dramatically as I hope we can then
  • 00:25:19
    the math on carbon capture just so
  • 00:25:22
    changes uh I still expect unfortunately
  • 00:25:26
    the world is on a path where we're going
  • 00:25:27
    to have to do something dramatic with
  • 00:25:29
    climate like geoengineering as
  • 00:23:32
    a Band-Aid as a stopgap but I
  • 00:25:34
    think we do now see a path to the
  • 00:25:36
    long-term solution so I I want to just
  • 00:25:38
    go back to my question in terms of
  • 00:25:40
    moving in the opposite direction it
  • 00:25:42
    sounds like the answer is potentially
  • 00:25:44
    yes on the demand side unless we take
  • 00:25:49
    drastic action on the supply side but
  • 00:25:51
    there there is no I I see no way to
  • 00:25:54
    supply this with to to manage the supply
  • 00:25:56
    side without
  • 00:25:58
    a really big breakthrough right which is
  • 00:26:01
    this is does this frighten you guys
  • 00:26:03
    because um you know the world hasn't
  • 00:26:05
    been that versatile when it comes to
  • 00:26:08
    supply but AI as you know you have
  • 00:26:10
    pointed out is not going to take its
  • 00:26:12
    time until we start generating enough
  • 00:26:14
    power it motivates us to go invest more
  • 00:26:16
    in fusion and invest more in new
  • 00:26:18
    storage and not only the technology
  • 00:26:21
    but what it's going to take to deliver
  • 00:26:23
    this at the scale that AI needs and that
  • 00:26:26
    the whole globe needs so I think it
  • 00:26:28
    would be not helpful for us to just sit
  • 00:26:30
    there and be nervous um we're just like
  • 00:26:32
    hey we see what's coming with very high
  • 00:26:34
    conviction it's coming how can we use
  • 00:26:37
    our
  • 00:26:38
    abilities uh our Capital our whatever
  • 00:26:40
    else to do this and in the process of
  • 00:26:42
    that hopefully deliver a solution for
  • 00:26:44
    the rest of the world not just AI
  • 00:26:46
    training workloads or inference
  • 00:26:47
    workloads Anna it felt like in 2023 we
  • 00:26:50
    had the beginning of a almost
  • 00:26:53
    hypothetical conversation about
  • 00:26:54
    regulating AI what what should we expect
  • 00:26:58
    in 2024 and you know does it do do do
  • 00:27:02
    governments act does it does it become
  • 00:27:04
    real and what is what is AI safety look
  • 00:27:06
    like so I think we it is becoming real
  • 00:27:09
    you know the EU is uh on the cusp of
  • 00:27:12
    actually finalizing this regulation
  • 00:27:14
    which is going to be quite extensive and
  • 00:27:16
    the Biden Administration uh wrote the
  • 00:27:18
    longest executive order I think in the
  • 00:27:20
    history of executive orders uh covering
  • 00:27:22
    this technology and is being implemented
  • 00:27:24
    in 2024 because they gave agencies you
  • 00:27:26
    know a bunch of homework for how to
  • 00:27:29
    implement this and govern this
  • 00:27:30
    technology and and it's happening so I
  • 00:27:32
    think it is really moving forward um but
  • 00:27:35
    what exactly safety looks like of what
  • 00:27:37
    it even is I think this is still a
  • 00:27:38
    conversation we haven't bottomed out on
  • 00:27:41
    you know we founded this Frontier Model
  • 00:27:42
    Forum in part yeah maybe explain what
  • 00:27:44
    that is so this is um for now this is um
  • 00:27:47
    Microsoft openai anthropic and um Google
  • 00:27:50
    but it will I think expand to other
  • 00:27:52
    Frontier Labs but really right now all
  • 00:27:55
    of us are working on safety we all red
  • 00:27:57
    team our models um we all do a lot of this
  • 00:28:00
    work but we really don't have even a
  • 00:28:01
    common vocabulary um or a standardized
  • 00:28:04
    approach and to the extent that people
  • 00:28:06
    think like well this is just industry
  • 00:28:08
    but uh this is in part in response to
  • 00:28:10
    many governments that have asked us for
  • 00:28:12
    this very thing so like what is it
  • 00:28:14
    across industry that you think are
  • 00:28:16
    viable best practices is there a risk
  • 00:28:20
    that regulation starts to discourage
  • 00:28:23
    entrepreneurial activity in in AI I mean
  • 00:28:26
    I think people are terrified of this um
  • 00:28:28
    this is why I think Germany and France
  • 00:28:30
    and Italy interjected into the EU um
  • 00:28:34
    AI act discussion because they are
  • 00:28:36
    really concerned about their own
  • 00:28:37
    domestic Industries being sort of
  • 00:28:39
    undercut before they've even had a
  • 00:28:41
    chance to develop were you satisfied
  • 00:28:44
    with your old boss's executive order and
  • 00:28:46
    was was there anything in there that uh
  • 00:28:48
    you had lobbied against no and in fact
  • 00:28:52
    you know I think it's it was really good
  • 00:28:54
    in that it wasn't just these are the
  • 00:28:56
    restrictions it's like and then also
  • 00:28:58
    please go and think about how your
  • 00:29:00
    agency will actually leverage this to do
  • 00:29:02
    your work better so I was really
  • 00:29:04
    encouraged that they actually did have a
  • 00:29:07
    balanced
  • 00:29:08
    approach um Sam first time at Davos
  • 00:29:11
    first time okay is um uh you mentioned
  • 00:29:15
    that uh you'd prefer to spend more of our
  • 00:29:16
    time here on stage talking about AGI
  • 00:29:19
    what is the message you're bringing to
  • 00:29:21
    political leaders and other Business
  • 00:29:22
    Leaders here if you could distill it
  • 00:29:24
    thank you
  • 00:29:26
    um
  • 00:29:28
    so I think 2023 was a year where the
  • 00:29:31
    world woke up to the possibility of
  • 00:29:34
    these systems becoming increasingly
  • 00:29:36
    capable and increasingly general but
  • 00:29:39
    GPT-4 I think is best understood as a
  • 00:29:42
    preview and it was more over the bar
  • 00:29:46
    than we expected of utility for more
  • 00:29:48
    people in more ways but you know it's
  • 00:29:51
    easy to point out the limitations and
  • 00:29:53
    again we're thrilled that people love it
  • 00:29:55
    and use it as much as they do but this
  • 00:29:58
    is progress here is not linear and this
  • 00:30:01
    is the thing that I think is really
  • 00:30:03
    tricky humans have horrible intuition
  • 00:30:06
    for exponentials at least speaking for
  • 00:30:07
    myself but it seems like a common part
  • 00:30:09
    of the human condition um what does it
  • 00:30:12
    mean if GPT-5 is as much better than GPT-4
  • 00:30:15
    as 4 was to 3 and 6 is to 5
  • 00:30:17
    and what does it mean if we're just on
  • 00:30:18
    this trajectory now um what you know on
  • 00:30:23
    the question of Regulation I think it's
  • 00:30:24
    great that different countries are going
  • 00:30:25
    to try different things some countries
  • 00:30:27
    will probably ban AI some countries will
  • 00:30:29
    probably say no guard rails at all both
  • 00:30:31
    of those I think will turn out to be
  • 00:30:32
    suboptimal and we'll we'll get to see
  • 00:30:34
    different things work but as these
  • 00:30:36
    systems become more powerful um as they
  • 00:30:41
    as they become more deeply integrated
  • 00:30:42
    into the economy as they become
  • 00:30:43
    something we all used to do our work and
  • 00:30:45
    then as things beyond that happen as
  • 00:30:47
    they become capable of discovering new
  • 00:30:50
    scientific knowledge for
  • 00:30:52
    Humanity even as they become capable of
  • 00:30:54
    doing AI research at some point um the
  • 00:30:57
    world is going
  • 00:30:58
    to change more slowly and then more
  • 00:31:01
    quickly than than we might imagine but
  • 00:31:03
    the world is going to change um this is
  • 00:31:06
    you know a thing I I always say to
  • 00:31:08
    people is no one knows what happens next
  • 00:31:09
    and I really believe that and I think
  • 00:31:10
    keeping the humility about that is
  • 00:31:12
    really important you can see a few steps
  • 00:31:14
    in front of you but not too many
  • 00:31:17
    um but when cognition the when the cost
  • 00:31:20
    of cognition Falls by a factor of a
  • 00:31:23
    thousand or a million when the
  • 00:31:24
    capability of it becomes uh it augments
  • 00:31:28
    Us in ways we can't even imagine you
  • 00:31:30
    know uh like one example I I try to give
  • 00:31:33
    to people is what if everybody in the
  • 00:31:35
    world had a really competent company of
  • 00:31:38
    10,000 great virtual employees experts
  • 00:31:41
    in every area they never fought with
  • 00:31:42
    each other they didn't need to rest they
  • 00:31:45
    got really smart they got smarter at
  • 00:31:46
    this rapid Pace what would we be able to
  • 00:31:48
    create for each other what would that do
  • 00:31:50
    to the world that we experience and the
  • 00:31:52
    answer is none of us know of course and
  • 00:31:55
    none of us have strong intuitions for
  • 00:31:56
    that I can imagine it sort of but it's
  • 00:31:59
    not like a clear picture um and this is
  • 00:32:03
    going to happen uh it doesn't mean we
  • 00:32:06
    don't get to steer it it doesn't mean we
  • 00:32:07
    don't get to work really hard to make it
  • 00:32:09
    safe and to do it in a responsible way
  • 00:32:11
    but we are going to go to the Future and
  • 00:32:13
    I think the best way to get there in a
  • 00:32:15
    way that works
  • 00:32:17
    is the level of Engagement we now have
  • 00:32:20
    part of the reason a big part of the
  • 00:32:21
    reason we believe in iterative
  • 00:32:23
    deployment of our technology is that
  • 00:32:25
    people need time to gradually get used
  • 00:32:28
    to it to understand it we need time to
  • 00:32:30
    make mistakes while the stakes are low
  • 00:32:32
    governments need time to make some
  • 00:32:33
    policy mistakes and also technology and
  • 00:32:36
    Society have to co-evolve in a case like
  • 00:32:39
    this uh so technology is going to change
  • 00:32:41
    with each iteration but so is the way
  • 00:32:43
    Society works and that's got to be this
  • 00:32:45
    interactive iterative process um and we
  • 00:32:48
    need to embrace it but have caution
  • 00:32:51
    without fear and how long do we have for
  • 00:32:53
    this iterative process to play I I think
  • 00:32:56
    it's surprisingly continuous I don't
  • 00:32:58
    like if I try to think about
  • 00:33:00
    discontinuities I can sort of see one
  • 00:33:02
    when AI can do really good AI research
  • 00:33:05
    um and I can see a few others too but
  • 00:33:07
    that's like an evocative example um but
  • 00:33:09
    on the whole I don't think it's about
  • 00:33:12
    like Crossing this one line I think it's
  • 00:33:14
    about this continuous exponential curve
  • 00:33:17
    we climb together and so how long do we
  • 00:33:19
    have like no time at all and
  • 00:33:24
    infinite I saw GPT-5 trending on X
  • 00:33:28
    earlier this week and I clicked and I
  • 00:33:30
    you know couldn't I it sounded uh you
  • 00:33:33
    know probably misinformed but what what
  • 00:33:35
    can you tell us about GPT-5 and is it an
  • 00:33:40
    exponential uh you know improvement over
  • 00:33:43
    what we've seen look I don't know what
  • 00:33:44
    we're going to call our next model um I
  • 00:33:45
    don't know when are you going to get
  • 00:33:46
    creative with the uh the naming process
  • 00:33:49
    uh I don't want to be like shipping
  • 00:33:52
    iPhone
  • 00:33:53
    27 um so you know it's not my style
  • 00:33:57
    quite uh but I I think the next model we
  • 00:34:02
    release uh I expect it to be very
  • 00:34:04
    impressive to do new things that were
  • 00:34:06
    not possible with GPT-4 to do a lot of
  • 00:34:08
    things better and I expect us to like
  • 00:34:10
    take our time and make sure we can
  • 00:34:11
    launch something that we feel good about
  • 00:34:14
    and responsible about within OpenAI
  • 00:34:16
    some employees consider themselves to be
  • 00:34:20
    quote building God is that I haven't
  • 00:34:23
    heard that okay is um I mean I've heard
  • 00:34:27
    like people say that facetiously but uh I
  • 00:34:31
    think almost all employees would say
  • 00:34:34
    they're building a tool more so than
  • 00:34:36
    they thought they were going to be which
  • 00:34:38
    they're thrilled about you know this
  • 00:34:39
    confusion in the industry of Are We
  • 00:34:41
    building a creature are we building a
  • 00:34:42
    tool um I think we're much more building
  • 00:34:45
    a tool and that's much
  • 00:34:46
    better uh to transition to something
  • 00:34:49
    yeah goad no no no no you finish your
  • 00:34:51
    thought oh I was just going to say like
  • 00:34:53
    the
  • 00:34:54
    the we think of ourselves as tool
  • 00:34:57
    Builders um AI is much more of a tool
  • 00:35:01
    than a product and much much more of a
  • 00:35:03
    tool than this like entity and uh one of
  • 00:35:08
    the most wonderful things about last
  • 00:35:10
    year was seeing just how much people
  • 00:35:12
    around the world could do with that tool
  • 00:35:14
    and they astonished us and I think we'll
  • 00:35:16
    just see more and more and human
  • 00:35:17
    creativity uh and ability to like do
  • 00:35:21
    more with better tools is remarkable and
  • 00:35:23
    and before we have to start wrapping up
  • 00:35:25
    you know there was a report that you
  • 00:35:26
    were working with Jony Ive on an AI
  • 00:35:29
    powered device either within OpenAI
  • 00:35:32
    perhaps as a separate company you know I
  • 00:35:34
    bring it up because CES was earlier this
  • 00:35:37
    month and AI powered devices were the
  • 00:35:39
    the talk of of the conference you know
  • 00:35:42
    can you give us an update on that and
  • 00:35:44
    are we approach does AI bring us to the
  • 00:35:46
    beginning of the end of the smartphone
  • 00:35:48
    era smartphones are fantastic I don't
  • 00:35:51
    think smartphones are going anywhere uh
  • 00:35:53
    I think what they do they do really
  • 00:35:54
    really well and they're very general if
  • 00:35:56
    if there is a new thing to make uh I
  • 00:35:59
    don't think it replaces a smartphone in
  • 00:36:01
    the way that I don't think smartphones
  • 00:36:02
    replace computers but if there's a new
  • 00:36:04
    thing to make that helps us do more
  • 00:36:06
    better you know in a in a new way given
  • 00:36:08
    that we have this unbelievable change
  • 00:36:11
    like I don't think we quite I don't
  • 00:36:13
    spend enough time I think like marveling
  • 00:36:14
    at the fact that we can now talk to
  • 00:36:16
    computers and they understand us and do
  • 00:36:18
    stuff for us like it is a new affordance
  • 00:36:20
    a new way to use a computer and if we
  • 00:36:22
    can do something great there uh a new
  • 00:36:25
    kind of computer we should do that and
  • 00:36:27
    if it turns out that the smartphone's
  • 00:36:28
    really good and this is all software
  • 00:36:29
    then fine but I bet there is something
  • 00:36:32
    great to be done and um the partnership
  • 00:36:35
    with Jony Ive is that an OpenAI effort is
  • 00:36:38
    that another company I have not heard
  • 00:36:39
    anything official about a partnership
  • 00:36:41
    with
  • 00:36:42
    Jony okay um Anna I'm going to give
  • 00:36:46
    you the last word as you and Sam meet
  • 00:36:48
    with business and world leaders here at
  • 00:36:50
    Davos what's the message you want to
  • 00:36:52
    leave them
  • 00:36:54
    with um I think that there is a
  • 00:36:58
    trend where people feel more fear than
  • 00:37:01
    excitement about this technology and I
  • 00:37:03
    understand that we have to work very
  • 00:37:04
    hard to make sure that the best version
  • 00:37:06
    of this technology is realized but I do
  • 00:37:08
    think that many people are engaging with
  • 00:37:11
    this via the leaders here and that they
  • 00:37:13
    really have a responsibility to make
  • 00:37:15
    sure that um they are sending a balanced
  • 00:37:17
    message so that um people can really
  • 00:37:20
    actually engage with it and realize the
  • 00:37:22
    benefit of this technology can I have 20
  • 00:37:24
    seconds absolutely one one of the things
  • 00:37:26
    that I think OpenAI has not always done
  • 00:37:28
    right in the field hasn't either is find
  • 00:37:30
    a way to build these tools in a way uh
  • 00:37:33
    and also talk about them that don't
  • 00:37:36
    don't get that kind of response I think
  • 00:37:38
    ChatGPT one of the best things it did
  • 00:37:39
    is it shifted the conversation to the
  • 00:37:41
    positive not because we said trust us
  • 00:37:43
    it'll be great but because people used
  • 00:37:44
    it and are like oh I get this I use this
  • 00:37:47
    in a very natural way the smartphone was
  • 00:37:48
    cool cuz I didn't even have to use a
  • 00:37:49
    keyboard and phone I could use it more
  • 00:37:51
    naturally talking is even more natural
  • 00:37:53
    um speaking of Jony Jony is a genius
  • 00:37:55
    and one of the things that I think he
  • 00:37:57
    has done again and again about computers
  • 00:37:59
    is figuring out a way to make them very
  • 00:38:03
    human compatible and I think that's
  • 00:38:05
    super important with this technology
  • 00:38:07
    making this feel like uh you know not
  • 00:38:10
    this mystical thing from sci-fi not this
  • 00:38:11
    scary thing from sci-fi but this this
  • 00:38:14
    new way to use a computer that you love
  • 00:38:16
    and that really feels like I still
  • 00:38:18
    remember the first iMac I got and what
  • 00:38:21
    that felt like to me
  • 00:38:23
    relative it was heavy but the fact that
  • 00:38:25
    it had that handle even though it is
  • 00:38:26
    like a kid it was very heavy to carry um
  • 00:38:29
    it did mean that I was like I had a
  • 00:38:31
    different relationship with it because
  • 00:38:32
    of that handle and because of the way it
  • 00:38:34
    looked I was like oh I can move this
  • 00:38:36
    thing around I could unplug it and throw
  • 00:38:38
    it out the window if it tried to like
  • 00:38:39
    wake up and take over that's nice um and
  • 00:38:42
    I think the way we design our technology
  • 00:38:44
    and our products really does matter
Tags
  • AI guidelines
  • Elections
  • AI in politics
  • ChatGPT ban
  • Cryptographic watermarks
  • OpenAI
  • AI enforcement
  • Job displacement
  • Publisher relations
  • AI regulation