Ep.# 110: OpenAI’s Secret Project “Strawberry” Mystery Grows, JobsGPT & GPT-4o Dangers

01:18:19
https://www.youtube.com/watch?v=bnpAZpubMGk

Summary

TL;DR: In this episode of "The Artificial Intelligence Show," hosted by Paul Roetzer, the discussion covers superhumanly persuasive AI models and the risks they would pose if designed to change human beliefs or behaviors. It also covers recent OpenAI news, including speculation that a project called "Strawberry" could be a major breakthrough in AI reasoning. The hosts introduce "JobsGPT," a tool for assessing AI's impact on jobs by breaking roles into tasks and projecting their future automation. There is a debate over proposed AI model regulations in California, with concerns they could stifle innovation. Other news includes advances in humanoid robots and legal trouble stemming from Nvidia's use of YouTube data, without consent, to train its models. The episode closes with the Marketing AI Conference and its dedication to AI literacy.

Takeaways

  • 🤖 The persuasive capabilities of AI models could transform human behavior.
  • 🍓 OpenAI's "Strawberry" project could revolutionize artificial reasoning.
  • 🛠 JobsGPT helps anticipate AI's impact on current and future jobs.
  • ⚖️ California's proposed AI regulations are controversial over possible negative effects on innovation.
  • 🤖 Figure is developing advanced humanoid robots for industrial and commercial use.
  • ⚔️ Elon Musk is suing OpenAI for allegedly betraying its original altruistic mission.
  • 🔍 Nvidia faces criticism for using YouTube content to train AI models.
  • 🔈 The risks around GPT-4o's advanced voice capabilities are examined.
  • ❗ The need to balance safety and innovation in developing and releasing AI models.
  • 👉 The importance of AI literacy and readiness for responsible advancement of the technology.

Timeline

  • 00:00:00 - 00:05:00

    The host talks about the inevitability of superpersuasive AI models and welcomes listeners to the show, highlighting business growth through artificial intelligence.

  • 00:05:00 - 00:10:00

    Episode 110 is introduced, with a discussion of a mystery surrounding "strawberry" and a marketing and AI conference happening in September.

  • 00:10:00 - 00:15:00

    Leadership changes at OpenAI are mentioned, with notable departures prompting speculation about future AI advances and the mysterious "strawberry" project.

  • 00:15:00 - 00:20:00

    Speculation about the departure of key OpenAI members, wondering whether they anticipate major breakthroughs in artificial general intelligence (AGI).

  • 00:20:00 - 00:25:00

    Discussion of cryptic messages related to "strawberry" and how they might connect to internal breakthroughs at OpenAI.

  • 00:25:00 - 00:30:00

    Exploration of posts from an anonymous Twitter account referencing "levels" of AI capability, and links to possible advances in reasoning and AGI.

  • 00:30:00 - 00:35:00

    Detailed discussion of "strawberry" speculation, with cryptic replies touching on AI model innovations and possible launches.

  • 00:35:00 - 00:40:00

    Continued exploration of possible meanings behind the cryptic messages and other signs that a significant AI breakthrough is coming.

  • 00:40:00 - 00:45:00

    Introduction of "JobsGPT," a tool for assessing AI's impact on jobs and how tasks may be affected by large language models.

  • 00:45:00 - 00:50:00

    Explanation of the development and goals of "JobsGPT," focused on helping companies anticipate AI-driven changes to work.

  • 00:50:00 - 00:55:00

    Description of "JobsGPT" in terms of risk exposure, highlighting different exposure levels based on current and projected capabilities.

  • 00:55:00 - 01:00:00

    On the importance of anticipating AI's future effects, especially how jobs may change at the task level.

  • 01:00:00 - 01:05:00

    Using the tool to extrapolate future AI model capabilities, allowing companies to plan for continued advancement.

  • 01:05:00 - 01:10:00

    OpenAI released a report on preparedness and safety ahead of releasing GPT-4o, focusing on challenges around persuasion and voice generation.

  • 01:10:00 - 01:18:19

    Discussion of OpenAI's process for preparing GPT-4o, highlighting its capabilities, risks, and the methods used to mitigate them before launch.



FAQ

  • What is the superhuman persuasive model mentioned in the episode?

    It is an artificial intelligence model capable of effectively changing people's beliefs, attitudes, intentions, motivations, or behaviors.

  • What is OpenAI's "Strawberry" project?

    It is an OpenAI project aimed at significantly improving the reasoning capabilities of its AI models, potentially leading toward superintelligence.

  • What is JobsGPT?

    JobsGPT is a tool created by the Marketing AI Institute to assess AI's potential impact on specific jobs by breaking them into tasks and estimating the time saved by using advanced language models.

  • What concerns are raised about AI regulation in California?

    The proposed California regulation could stifle innovation by holding developers liable for misuse of their models, requiring safety kill switches, and limiting access to models and data needed for research.

  • Which company is developing the Figure 02 humanoid robot?

    The company Figure is developing the Figure 02 humanoid robot, showing significant advances in AI and physical capabilities.

  • What does Elon Musk's lawsuit against OpenAI involve?

    Elon Musk has sued OpenAI, alleging it abandoned its original mission of developing AI for the public good by prioritizing commercial interests.

  • What was discussed about GPT-4o's persuasion capabilities?

    GPT-4o's persuasive ability to change human views in political contexts was evaluated, with results found comparable to humans, though not superior.

  • What AI-related developments has Nvidia had?

    Nvidia has been collecting a large amount of video content, including from YouTube, to train its video AI models, raising legal concerns.

Subtitles (en)
  • 00:00:00
    think about the value of a superhuman
  • 00:00:03
    persuasive model of a model that can
  • 00:00:05
    persuade people to change their beliefs
  • 00:00:07
    attitudes intentions motivations or
  • 00:00:09
    behaviors we are talking about something
  • 00:00:11
    that is inevitably going to occur if the
  • 00:00:14
    capabilities are possible someone will
  • 00:00:16
    build them and someone will utilize them
  • 00:00:19
    for their own
  • 00:00:21
    gain welcome to the artificial
  • 00:00:23
    intelligence show the podcast that helps
  • 00:00:25
    your business grow smarter by making AI
  • 00:00:28
    approachable and actionable my name is
  • 00:00:30
    Paul Roetzer I'm the founder and CEO of
  • 00:00:33
    marketing AI Institute and I'm your host
  • 00:00:36
    each week I'm joined by my co-host and
  • 00:00:38
    marketing AI Institute Chief content
  • 00:00:40
    officer Mike kaput as we break down all
  • 00:00:43
    the AI news that matters and give you
  • 00:00:45
    insights and perspectives that you can
  • 00:00:47
    use to advance your company and your
  • 00:00:50
    career join us as we accelerate AI
  • 00:00:53
    Literacy for
  • 00:00:54
    [Music]
  • 00:00:58
    all welcome to episode 110 of the
  • 00:01:01
    artificial intelligence show I'm your
  • 00:01:03
    host Paul Roetzer along with my co-host Mike
  • 00:01:05
    kaput as always we have a rather
  • 00:01:09
    intriguing episode today I I don't even
  • 00:01:11
    know this this whole strawberry mystery
  • 00:01:13
    just continues to grow and it's gotten
  • 00:01:16
    kind of wild and so I mean we're
  • 00:01:18
    recording this Monday August 12 10:30
  • 00:01:21
    a.m. eastern time by the time you listen
  • 00:01:23
    to this I expect we're going to know a
  • 00:01:25
    little bit more about what in the world
  • 00:01:27
    is going on with strawberry and
  • 00:01:30
    who this mystery Twitter account is and
  • 00:01:33
    it's just wild so we we've got a lot to
  • 00:01:35
    cover today uh prepping for this one was
  • 00:01:38
    pretty interesting this morning getting
  • 00:01:39
    ready to go so uh we're gonna get into
  • 00:01:42
    all that uh today's episode is brought
  • 00:01:44
    to us again by the marketing AI
  • 00:01:45
    conference our fifth annual Marketing AI
  • 00:01:48
    conference or MAICON happening in
  • 00:01:49
    Cleveland September 10th to the 12th
  • 00:01:52
    there are a total of 69 sessions 33
  • 00:01:55
    breakouts across two tracks of Applied
  • 00:01:57
    Ai and strategic AI 16 AI Tech demos 10
  • 00:02:02
    mainstage General Sessions and Keynotes
  • 00:02:04
    five lunch Labs three pre-conference
  • 00:02:06
    workshops two of which are being taught
  • 00:02:08
    by Mike and myself and two mindfulness
  • 00:02:12
    sessions so the agenda is absolutely
  • 00:02:14
    packed if you haven't check it out
  • 00:02:16
    checked it out yet go to m.ai that's
  • 00:02:19
    m.ai I'll just give you a quick sense of
  • 00:02:22
    some of the sessions so I'm leading off
  • 00:02:24
    with the road to AGI a potential
  • 00:02:26
    timeline of what happens next what it
  • 00:02:29
    means and we can do about it we're going
  • 00:02:31
    to preview some of that actually today
  • 00:02:33
    uh we've got digital Doppel gangers how
  • 00:02:35
    Savvy teams are augmenting their unique
  • 00:02:37
    talents using the magic of AI with
  • 00:02:39
    Andrew Davis Lessons Learned in early
  • 00:02:41
    leadership of scaling marketing AI with
  • 00:02:43
    Amanda todorovich from Cleveland Clinic
  • 00:02:46
    uh future of AI open with one of our you
  • 00:02:49
    know longtime Institute supporters and
  • 00:02:51
    speakers Christopher Penn got navigating
  • 00:02:54
    the intersection of copyright law and
  • 00:02:55
    generative AI uh with Rachel douly and
  • 00:02:57
    Christa laser generative Ai and the
  • 00:02:59
    future work with Mike Walsh marketing
  • 00:03:01
    the trust economy with Liz grenan and
  • 00:03:03
    McKenzie and just keeps going on and on
  • 00:03:05
    so absolutely check it out it's again in
  • 00:03:08
    Cleveland September 10th to the 12th you
  • 00:03:10
    can use promo code POD200 that's POD200
  • 00:03:14
    to save $200 off All Passes we only
  • 00:03:18
    have about I didn't look at the
  • 00:03:19
    countdown clock about 28 days left until
  • 00:03:22
    the event so Mike and I have a lot of
  • 00:03:24
    work to do over the next uh month or so
  • 00:03:27
    here to get ready but again check out
  • 00:03:28
    MAICON click register and be sure to
  • 00:03:31
    use that POD200 code all right Mike um
  • 00:03:36
    it I don't even know where to go with
  • 00:03:37
    the strawberry thing but let's go ahead
  • 00:03:39
    and get into what's happening at open
  • 00:03:41
    aai which seems like the weekly
  • 00:03:42
    recurring topic and then this strawberry
  • 00:03:45
    thing that just is taking out a life of
  • 00:03:47
    its own yeah there's never a dull moment
  • 00:03:50
    at open AI it certainly seems because
  • 00:03:53
    first they are experiencing some pretty
  • 00:03:56
    serious leadership changes so we're
  • 00:03:58
    going to first just te that up and then
  • 00:04:00
    talk about what the heck strawberry is
  • 00:04:02
    and what's going on with it so first up
  • 00:04:05
    Greg Brockman open AI president and
  • 00:04:07
    co-founder said he's taking an extended
  • 00:04:10
    leave of absence which he says just as
  • 00:04:12
    sabbatical until the end of the year at
  • 00:04:15
    the same time John Schulman another
  • 00:04:18
    co-founder and a key leader in AI has
  • 00:04:21
    left open AI to join rival company
  • 00:04:24
    anthropic and he said he wanted to work
  • 00:04:26
    more deeply on AI alignment and that's
  • 00:04:29
    why he is is leaving uh possibly related
  • 00:04:32
    possibly not Peter Deng a product leader
  • 00:04:34
    who joined open AI last year from meta
  • 00:04:37
    has also Departed the company so you
  • 00:04:39
    know as you recall these aren't the only
  • 00:04:42
    or first people to have left I mean Ilya
  • 00:04:44
    Sutskever left after all sorts of controversy
  • 00:04:47
    last year around the boardroom coup to
  • 00:04:49
    oust Sam Altman and Andrej Karpathy has left
  • 00:04:53
    to go work on an AI education startup so
  • 00:04:57
    these kinds of Departures of really LED
  • 00:05:00
    some industry observers to question like
  • 00:05:03
    how close is open AI really to breaking
  • 00:05:06
    through uh to creating AGI so AI
  • 00:05:09
    researcher Benjamin De Kraker put it in a
  • 00:05:12
    post on X he put it really well he said
  • 00:05:14
    quote if open AI is right on the verge
  • 00:05:16
    of AGI why do prominent people keep
  • 00:05:20
    leaving and he went on to say quote
  • 00:05:22
    genuine question if you were pretty sure
  • 00:05:24
    the company you're a key part of and
  • 00:05:26
    have equity in is about to crack AGI
  • 00:05:29
    within one two years why would you jump
  • 00:05:31
    ship now interestingly in parallel to
  • 00:05:35
    this and Paul I'll let you kind of
  • 00:05:38
    unpack this for us there have been a
  • 00:05:40
    series of very cryptic posts referencing
  • 00:05:43
    strawberry which is an open AI project
  • 00:05:47
    we had referenced previously centered
  • 00:05:49
    around Advanced reasoning capabilities
  • 00:05:51
    for AI that have been posts that have
  • 00:05:54
    been engaged with by Sam Altman posts
  • 00:05:56
    coming from Anonymous accounts really
  • 00:05:58
    does seem in a weird way like something
  • 00:06:00
    is brewing when it comes to Strawberry
  • 00:06:03
    as well as we're seeing more and more
  • 00:06:04
    references both from Sam and from other
  • 00:06:06
    parties in relation to those possible AI
  • 00:06:10
    capabilities so Paul let's kind of maybe
  • 00:06:13
    take this one step at a time like I want
  • 00:06:16
    to start off with the question that the
  • 00:06:19
    AI researcher posed if open AI is right
  • 00:06:21
    on the verge of
  • 00:06:23
    AGI why do you think prominent people
  • 00:06:26
    like these are
  • 00:06:27
    leaving yeah it's a really good question
  • 00:06:29
    I have no idea all any of us can really
  • 00:06:31
    do at this point is speculate the couple
  • 00:06:34
    of notes I would make related to this is
  • 00:06:37
    Greg and John are co-founders like I
  • 00:06:39
    would assume their shares have vested
  • 00:06:41
    long ago in open AI so unless you know
  • 00:06:44
    more shares are granted or they have
  • 00:06:45
    some that have invested their their
  • 00:06:48
    money is safe either way so if Greg
  • 00:06:50
    wants to piece out for a while and
  • 00:06:52
    things keep going his his Equity is not
  • 00:06:55
    going anywhere so I don't think their
  • 00:06:58
    Equity has anything to do with whether
  • 00:07:00
    or not a breakthrough has been made
  • 00:07:01
    internally or whether the next model is
  • 00:07:04
    you know on the precipice of coming um
  • 00:07:07
    so Greg is supposedly taking leave of
  • 00:07:09
    absence as you said maybe he is maybe
  • 00:07:11
    he's done I I don't know yeah and maybe
  • 00:07:14
    Jon's leaving because he thinks AGI is
  • 00:07:16
    actually near and anthropic is a better
  • 00:07:18
    place to work on safety and Alignment so
  • 00:07:21
    I don't know that we can read anything
  • 00:07:23
    into any of this really it's it's
  • 00:07:25
    complicated and I think we just got to
  • 00:07:27
    let it sort of play out um I do have a
  • 00:07:30
    lot of unanswered questions about the
  • 00:07:32
    timing of Greg's leave and so on August
  • 00:07:36
    5th is when he tweeted I'm taking a
  • 00:07:38
    sabbatical through end of year first time
  • 00:07:40
    to relax since co-founding open AI 9
  • 00:07:42
    years ago the mission is far from
  • 00:07:45
    complete we still have a safe AGI to
  • 00:07:47
    build um he then tweeted on August 8th
  • 00:07:51
    uh this first tweet since he left or
  • 00:07:54
    went on sabbatical a surprisingly hard
  • 00:07:56
    part of my break is beginning the fear
  • 00:07:58
    of missing out for everything happening
  • 00:07:59
    at openai right now lots of results
  • 00:08:01
    cooking I've poured my life for the past
  • 00:08:04
    nine years into open AI including the
  • 00:08:06
    entirety of my marriage our work is
  • 00:08:08
    important to me but so is life I feel
  • 00:08:11
    okay taking this time in part because
  • 00:08:12
    our research safety and product progress
  • 00:08:15
    is so strong I'm super grateful for the
  • 00:08:17
    team we've built and it's unprecedented
  • 00:08:19
    Talent density and proud of our progress
  • 00:08:22
    looking forward to completing our
  • 00:08:23
    mission together so I don't I mean I
  • 00:08:25
    don't know he doesn't really tweet about
  • 00:08:26
    his personal life too much it kind of
  • 00:08:28
    indicates me like maybe this is just uh
  • 00:08:31
    to get his personal life in order you
  • 00:08:33
    know give some Focus to that after 9
  • 00:08:35
    years maybe that's all it is um and then
  • 00:08:39
    I just kind of scann back to see well
  • 00:08:40
    what has he been tweeting leading up to
  • 00:08:41
    this he doesn't do as many cryptic
  • 00:08:43
    tweets as Sam Altman he he does his own
  • 00:08:45
    fair share but his last like six tweets
  • 00:08:49
    were all pretty product related so on
  • 00:08:52
    718 so July 18th he said just released a
  • 00:08:55
    new state of Art and fast cheap but
  • 00:08:57
    still quite capable models that was the
  • 00:08:58
    4o mini which we're going to talk
  • 00:09:01
    more about on July 18th just launched
  • 00:09:04
    ChatGPT Enterprise compliance
  • 00:09:06
    controls and then featured some of their
  • 00:09:08
    Enterprise customers like BCG and PWC
  • 00:09:11
    and Los
  • 00:09:12
    Alamos on July 25th SearchGPT prototype
  • 00:09:16
    now live and then on July 30th advanced
  • 00:09:18
    voice mode rolling out so he's he's been
  • 00:09:20
    very product focused in his tweets so we
  • 00:09:22
    can't really learn too much from that
  • 00:09:24
    the thing I found unusual is Sam Altman
  • 00:09:26
    didn't reply to Greg's tweet Sam replies
  • 00:09:28
    to every high-profile person's tweet
  • 00:09:30
    that leaves or you know temporarily
  • 00:09:32
    separates from open AI so for example uh
  • 00:09:36
    the same day that Greg announced he was
  • 00:09:39
    taking a sabbatical John Schulman
  • 00:09:41
    announced he was leaving and Sam posted
  • 00:09:44
    25 minutes later a reply to John's tweet
  • 00:09:47
    saying we will miss you tremendously
  • 00:09:49
    telling the story of how they met in
  • 00:09:51
    2015 um so I just thought it was weird
  • 00:09:55
    that he didn't individually tweet about
  • 00:09:57
    or reply to Greg's tweet
  • 00:09:59
    again can you read anything to that I
  • 00:10:01
    don't know it's just out of the ordinary
  • 00:10:04
    um and maybe it's because he was too
  • 00:10:06
    busy vague tweeting about strawberries
  • 00:10:08
    and AGI to deal with it so yeah so maybe
  • 00:10:13
    because that is such a key piece of this
  • 00:10:14
    is amidst all these like Personnel
  • 00:10:16
    changes which is what everyone's like
  • 00:10:18
    you know the headlines are focused on
  • 00:10:19
    there's all these cryptic tweets he's
  • 00:10:22
    been posting about AGI about strawberry
  • 00:10:26
    can you maybe walk us through like
  • 00:10:28
    what's going on here because you know as
  • 00:10:30
    we've seen in the past I think on this
  • 00:10:31
    show and just in our work like paying
  • 00:10:34
    attention to what he posts is usually a
  • 00:10:37
    very good idea yeah so the last like
  • 00:10:40
    four days have been kind of insane if
  • 00:10:43
    you follow the the inner people within
  • 00:10:47
    the AI world so if you'll recall the
  • 00:10:51
    strawberry thing this codename
  • 00:10:52
    strawberry project was first reported by
  • 00:10:54
    Reuters um we talked about an episode
  • 00:10:57
    106 so about a month ago uh we talked
  • 00:11:00
    about this so at the time Reuters said
  • 00:11:02
    strawberry appears to be a novel
  • 00:11:03
    approach to AI models aimed at
  • 00:11:05
    dramatically improving their reasoning
  • 00:11:07
    capabilities the Project's goal is to
  • 00:11:09
    enable AI to plan ahead and navigate the
  • 00:11:11
    inter internet autonomously to perform
  • 00:11:13
    what openai calls deep research while
  • 00:11:16
    details about how strawberry works are
  • 00:11:17
    tightly guarded open AI appears to be
  • 00:11:19
    hoping that this Innovation will
  • 00:11:21
    significantly enhance its AI models
  • 00:11:23
    ability to reason the project involves a
  • 00:11:26
    specialized way of processing AI models
  • 00:11:28
    after they've been pre-trained on large
  • 00:11:31
    data sets now the strawberry reference
  • 00:11:33
    we also talked about in episode 106 uh
  • 00:11:36
    half jokingly but I'm not so sure it
  • 00:11:39
    isn't true uh is maybe a way to troll
  • 00:11:41
    Elon Musk so if you'll remember Elon
  • 00:11:44
    Musk was involved early days of open Ai
  • 00:11:47
    and in
  • 00:11:48
    2017 three months before the Transformer
  • 00:11:52
    paper came out from Google brain that
  • 00:11:54
    invented the Transformer which is the
  • 00:11:56
    basis for GPT generative pre-trained
  • 00:11:58
    Transformer
  • 00:11:59
    Elon Musk who was still working with
  • 00:12:02
    OpenAI and Sam Altman at the time
  • 00:12:04
    said let's uh say you create a
  • 00:12:06
    self-improving AI to pick strawberries
  • 00:12:09
    and it gets better and better at picking
  • 00:12:10
    strawberries and picks more and more and
  • 00:12:12
    it is self-improving so all it really
  • 00:12:14
    wants to do is pick strawberries so then
  • 00:12:16
    it would have all this world be
  • 00:12:18
    Strawberry Fields Strawberry Fields
  • 00:12:20
    Forever and there would be no room for
  • 00:12:22
    human beings so that was kind of like
  • 00:12:24
    episode 106 we just sort of talked about
  • 00:12:26
    it it was in Reuters now fast forward to
  • 00:12:31
    August 7th so this is now 2 days after
  • 00:12:33
    Greg announces his sabbatical Sam tweets
  • 00:12:36
    a picture of actual strawberries not AI
  • 00:12:38
    generated and he says I love my summer
  • 00:12:41
    garden so here's Sam veg tweeting about
  • 00:12:45
    strawberries uh about seven hours later
  • 00:12:49
    a new Twitter account called I rule the
  • 00:12:52
    world Mo and double check me on that Mike
  • 00:12:55
    make sure I'm getting the right Twitter
  • 00:12:56
    handle here tweeted in all
  • 00:13:00
    lowercase um now which is Sam's sort of
  • 00:13:03
    Mo is all lowercase welcome to level two
  • 00:13:06
    how do you feel did I make you feel and
  • 00:13:09
    Sam now keep in mind this account had
  • 00:13:11
    been created that morning Sam actually
  • 00:13:15
    replied amazing TBH to be honest so Sam
  • 00:13:20
    replied to this random Twitter account
  • 00:13:24
    that was tweeting about AGI and
  • 00:13:27
    strawberries so what what is level two
  • 00:13:30
    so what is this welcome to level two
  • 00:13:32
    tweet well um level two as reported in
  • 00:13:36
    July 2024 by Rachel Metz of Bloomberg is
  • 00:13:40
    that openai has come up with a set of
  • 00:13:42
    five levels to track its progress
  • 00:13:45
    towards building AI software capable of
  • 00:13:47
    outperforming humans they shared this
  • 00:13:50
    new classification system with employees
  • 00:13:52
    that Tuesday so this is in early July at
  • 00:13:55
    the meeting company leadership gave a
  • 00:13:57
    demonstration of a research project
  • 00:13:59
    involving GPT-4 a model that open AI
  • 00:14:02
    thinks shows some new skills that rise
  • 00:14:05
    to humanlike reasoning so the assumption
  • 00:14:07
    is whatever strawberry is was shown to
  • 00:14:09
    the their employees in early July now
  • 00:14:13
    their five levels are level one chat
  • 00:14:15
    Bots AI with conversational language
  • 00:14:17
    that's what we have level two reasoners
  • 00:14:20
    human level problem solving that's the
  • 00:14:23
    Assumption of what we are about to enter
  • 00:14:25
    level three agents systems that can take
  • 00:14:28
    actions we don't have those yet uh other
  • 00:14:30
    than demonstrations of them level four
  • 00:14:33
    innovators AI that can Aid in invention
  • 00:14:36
    that is not currently possible level
  • 00:14:39
    five organizations AI that can do the
  • 00:14:41
    work of an organization this goes back
  • 00:14:43
    to uh something we talked about an
  • 00:14:45
    article earlier on with Ilya Sutskever where
  • 00:14:47
    he was quoted in the Atlantic as talking
  • 00:14:49
    about these like Hive like organizations
  • 00:14:51
    where there's just hundreds or thousands
  • 00:14:53
    of AI agents do these things so the I
  • 00:14:56
    rule the world Mo Twitter account
  • 00:14:57
    let's go back to that for a second
  • 00:14:59
    the profile picture is Joaquin Phoenix from
  • 00:15:02
    the movie Her with three strawberry
  • 00:15:04
    emojis so that's the what the Twitter
  • 00:15:06
    account States um the first tweet from
  • 00:15:09
    that account was August 7th at 1:33 p.m.
  • 00:15:13
    so again that's right before Sam replied
  • 00:15:16
    to this account so Sam is aware of this
  • 00:15:19
    account very very very early in its
  • 00:15:21
    existence um so that that reply was to
  • 00:15:25
    some Yam Peleg I don't know who he is uh
  • 00:15:27
    he's an AI guy he had said feel the AGI
  • 00:15:31
    guys and this I rule the world Mo
  • 00:15:33
    account tweeted nice um the account then
  • 00:15:36
    started a flood of hundreds of
  • 00:15:38
    strawberry and AI related tweets
  • 00:15:40
    multiple times referencing Sam's garden
  • 00:15:42
    tweet with his pictures of strawberries
  • 00:15:44
    and implying that a major release is
  • 00:15:46
    coming so I'll just run through a few of
  • 00:15:48
    these to give you a sense of what's
  • 00:15:49
    going on so later on August 7th um
  • 00:15:53
    tweets Sam strawberry isn't just ripe
  • 00:15:55
    it's ready tonight we taste the fruit of
  • 00:15:57
    AGI The Singularity has
  • 00:15:59
    flavor uh the three minutes later
  • 00:16:01
    someone very high up is boosting my
  • 00:16:03
    account guess who in other words the
  • 00:16:05
    algorithm at Twitter immediately started
  • 00:16:08
    um juicing this Anonymous account and it
  • 00:16:11
    was very obvious that it was happening
  • 00:16:13
    and thousands of people were starting to
  • 00:16:15
    follow it um 21 minutes later Altman
  • 00:16:18
    strawberry isn't a fruit it's a key
  • 00:16:20
    tonight we unlock the door to Super
  • 00:16:21
    intelligence are you ready to step
  • 00:16:22
    through eight minutes later it turns out
  • 00:16:24
    that I'm AGI oh if it turns out I'm AGI
  • 00:16:27
    I'll be so pissed cuz because now people
  • 00:16:29
    are trying to like at this point guess
  • 00:16:31
    what is this account is this an AI like
  • 00:16:33
    is someone running a test is this
  • 00:16:35
    actually like open AI screwing around
  • 00:16:37
    with people is it something else um six
  • 00:16:40
    minutes later it tweets no one's guessed
  • 00:16:42
    Grok yet even though they know of Elon
  • 00:16:44
    Musk engineering prowess and his super
  • 00:16:46
    clusters obviously I'm not saying I'm
  • 00:16:48
    Grok but just that it's kind of odd
  • 00:16:50
    right and then on 'll fast forward to
  • 00:16:54
    August 10th so just a couple days ago
  • 00:16:57
    and this anon account tweeted a rather
  • 00:17:01
    extensive what appears to be very
  • 00:17:03
    accurate summary of open aai in the
  • 00:17:05
    current situation and this connects back
  • 00:17:07
    to Greg in a moment so the Tweet is rust
  • 00:17:09
    a little but we'll refine and add some
  • 00:17:11
    more info I've been given in it if it
  • 00:17:14
    bangs project strawberry Q* AI
  • 00:17:18
    explained has been close to this for a
  • 00:17:20
    while so I'd watch them for a cleaner
  • 00:17:22
    take if you want to dig in this is what
  • 00:17:24
    Ilya saw it's what has broken math
  • 00:17:26
    benchmarks it's more akin to
  • 00:17:28
    reinforcement learning human feedback
  • 00:17:30
    than throwing compute at the problem um
  • 00:17:33
    gets into strawberry and larger models
  • 00:17:34
    comes on Thursday so they're implying
  • 00:17:37
    this week think of an LLM fine-tuned to
  • 00:17:39
    reason like a human hence why Sam liked
  • 00:17:41
    the level two comment and felt great
  • 00:17:43
    about it Ilia did not here we are and
  • 00:17:46
    then it talks about what I talked about
  • 00:17:48
    last week that maybe we're actually
  • 00:17:50
    seeing the future model with a
  • 00:17:52
    combination of Sora voice video and then
  • 00:17:55
    all the stuff that's going into safety
  • 00:17:57
    it goes on to say that uh GPT next
  • 00:18:00
    internally called gptx you can call it
  • 00:18:02
    GPT 5 it says is also ready to go Lots
  • 00:18:06
    here relies on safety and what Google
  • 00:18:08
    does next it's difficult to say if
  • 00:18:10
    competition Will trump safety the this
  • 00:18:13
    next model is through red teaming it's
  • 00:18:14
    finished post training is done it's an
  • 00:18:17
    enormous leap in capabilities and on and
  • 00:18:19
    on and on um and then as of this morning
  • 00:18:23
    so 5:27 a.m. eastern time on August 12th
  • 00:18:27
    this anonymous tweets attention isn't
  • 00:18:30
    all you need referring to the attention
  • 00:18:31
    is all you need Transformer paper from
  • 00:18:34
    2017 new architecture announcement
  • 00:18:36
    August 13th at 10 a.m. Pacific time the
  • 00:18:39
    Singularity begins now oddly enough the
  • 00:18:43
    next made by Google event is August 13th
  • 00:18:47
    at 10 a.m. Pacific now I don't know if
  • 00:18:49
    that's a reference to depending on what
  • 00:18:51
    Google does whether or not this next
  • 00:18:52
    model gets released so the question is
  • 00:18:54
    what is this I Rule the World MO account
  • 00:18:57
    which at the moment of recording this
  • 00:18:59
    has almost 23,500 followers which it
  • 00:19:02
    has amassed in four days it is getting
  • 00:19:05
    juiced obviously by Twitter/X and
  • 00:19:08
    maybe Elon Musk himself um is it like an
  • 00:19:12
    anonymous account of GPT-5 like are they
  • 00:19:14
    running an experiment It's actually an
  • 00:19:15
    AI is it Elon trolling OpenAI for
  • 00:19:19
    trolling him and it's actually like
  • 00:19:21
    Grok 2 is it a human who has a massive
  • 00:19:25
    amount of time on their hands is it
  • 00:19:26
    another like we don't know but but then
  • 00:19:29
    to add to the mystery last night Aravind
  • 00:19:32
    Srinivas the founder of Perplexity shows
  • 00:19:36
    a screenshot that says how many RS are
  • 00:19:39
    there in this sentence there are many
  • 00:19:42
    strawberries in a sentence that's about
  • 00:19:45
    strawberries and whatever model he was
  • 00:19:48
    teasing got it correct which is a
  • 00:19:50
    notoriously difficult problem and he put
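Letter counting is notoriously hard for language models because they process sub-word tokens (for example "straw" and "berries") rather than individual characters, while it is trivial for ordinary code. A minimal sketch of the programmatic version, using the quoted sentence:

```python
# Counting letters is trivial for code but hard for token-based LLMs,
# which never directly "see" the characters inside their tokens.
def count_letter(text: str, letter: str) -> int:
    """Case-insensitive count of a single letter in a string."""
    return text.lower().count(letter.lower())

sentence = "There are many strawberries in a sentence that's about strawberries"
print(count_letter(sentence, "r"))  # → 8
```

A model that reliably answers this kind of question is working past its own tokenization, which is why a correct answer reads as a teaser for a stronger reasoning model.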
  • 00:19:53
    guess what model this is with a
  • 00:19:55
    strawberry in it the implication being
  • 00:19:57
    that Perplexity is running whatever this
  • 00:20:01
    strawberry model is then for following
  • 00:20:05
    along at home Elon at 6:38 p.m on August
  • 00:20:09
    11th tweets Grok 2 beta release coming
  • 00:20:12
    soon so what does this all mean I have
  • 00:20:16
    no idea who this Anonymous account is
  • 00:20:19
    but it does appear something significant
  • 00:20:21
    is coming we may have a new model this
  • 00:20:24
    week it may already be in testing with
  • 00:20:26
    perplexity Pro um I think we will find
  • 00:20:29
    out sooner than later so now back real
  • 00:20:32
    quick to Greg what does this mean for
  • 00:20:34
    OpenAI and Greg (a) his work is done for
  • 00:20:37
    now and this whatever he has now built
  • 00:20:40
    whatever this thing that is in its final
  • 00:20:42
    you know training and safety is is
  • 00:20:44
    whatever model that is won't be released
  • 00:20:46
    until he returns at the end of the year
  • 00:20:48
    I find that doubtful (b) his work is done
  • 00:20:51
    for now and he's leaving the team to
  • 00:20:52
    handle the launch or (c) nothing has changed
  • 00:20:55
    internally there is no major release
  • 00:20:56
    coming and he's just taking time off
  • 00:20:59
    if I were a betting man I'm going with
  • 00:21:01
    option b I think Greg is heavily
  • 00:21:03
    involved in the building of these models
  • 00:21:06
    I think the work of building the next
  • 00:21:08
    model is complete and they're just
  • 00:21:11
    finalizing timing and plans for the
  • 00:21:13
    release of that model um and I think
  • 00:21:17
    he's stepping aside to take some
  • 00:21:19
    personal time and come back
  • 00:21:23
    uh so I don't know Mike I don't know if
  • 00:21:26
    you followed along the craziness of the
  • 00:21:28
    strawberry stuff over the weekend but I
  • 00:21:29
    mean that account has tweeted I don't
  • 00:21:31
    know how many tweets it actually it has
  • 00:21:32
    to be over a thousand like in the first
  • 00:21:34
    four
  • 00:21:35
    days it is I mean look obviously we've
  • 00:21:39
    said we have no idea what this all ends
  • 00:21:42
    up meaning but the fact I think there's
  • 00:21:44
    something directionally important about
  • 00:21:46
    the fact we're even talking about this
  • 00:21:48
    and taking it seriously these kinds of
  • 00:21:50
    breakthroughs and levels of AGI or call
  • 00:21:53
    it Advanced artificial intelligence
  • 00:21:55
    whatever you'd like to term it um it
  • 00:21:58
    really does speak to kind of some of the
  • 00:22:00
    paths and trajectories that we've been
  • 00:22:02
    kind of anticipating throughout the last
  • 00:22:04
    year or two yeah my guess is that it
  • 00:22:07
    is some form of an AI I think there's
  • 00:22:09
    a human in the loop here yeah but I
  • 00:22:12
    don't think a human is is managing this
  • 00:22:14
    so I do think it's probably some model I
  • 00:22:17
    don't know whose model it is um and I
  • 00:22:20
    think it's an experiment being run and
  • 00:22:22
    the fascinating thing is it's not just
  • 00:22:25
    24,000 random followers it's 24,000
  • 00:22:28
    people who are paying very close
  • 00:22:30
    attention to AI who are not only
  • 00:22:32
    following but who are interacting with
  • 00:22:33
    it and so what do we learn from this
  • 00:22:36
    experiment like whoever it is whatever
  • 00:22:39
    model it is in four days' time it amassed
  • 00:22:42
    24,000 followers including a lot of
  • 00:22:44
    influential AI people who are not only
  • 00:22:46
    engaging with it but trying to figure
  • 00:22:48
    out what it is who it is so I don't know
  • 00:22:52
    there's just a lot to be learned you
  • 00:22:54
    know when we can look back and
  • 00:22:56
    understand a little bit more about this
  • 00:22:58
    moment I I there's just I have a sense
  • 00:23:00
    that this is a meaningful moment while
  • 00:23:03
    the anonymous account itself may end up
  • 00:23:06
    being seemingly insignificant when we
  • 00:23:08
    find out what it actually is I think
  • 00:23:10
    that there's a lot of underlying things
  • 00:23:12
    to be learned from this and if it is an
  • 00:23:14
    AI that is doing most of the
  • 00:23:17
    engagement that's going to be kind of
  • 00:23:23
    interesting all right so in our second
  • 00:23:26
    big topic today um Paul I'm going to
  • 00:23:29
    basically turn this over to you but you
  • 00:23:31
    through your company SmarterX have
  • 00:23:35
    built a ChatGPT-powered tool called
  • 00:23:38
    jobs GPT and this is a tool that is
  • 00:23:42
    designed to assess the impact of AI
  • 00:23:45
    specifically large language models on
  • 00:23:48
    jobs and the future of work so basically
  • 00:23:50
    you can use this tool we both used it a
  • 00:23:52
    bunch to assess how AI is going to
  • 00:23:55
    impact knowledge workers by breaking
  • 00:23:57
    your job into a series of tasks and
  • 00:24:00
    then starting to label those tasks based
  • 00:24:02
    on perhaps the ability of an llm to
  • 00:24:06
    perform that for you so really the whole
  • 00:24:07
    goal here is whether it's your job other
  • 00:24:09
    people's jobs within your company or in
  • 00:24:11
    other Industries you can use jobs GPT to
  • 00:24:15
    actually unpack okay how do I actually
  • 00:24:17
    start um transforming my work using
  • 00:24:21
    artificial intelligence what levels of
  • 00:24:22
    exposure does my work have to possible
  • 00:24:25
    AI disruption so Paul I wanted to turn it
  • 00:24:28
    over to you and just kind of get a sense
  • 00:24:30
    of why did you create this tool why now
  • 00:24:33
    why is this important yeah so this is
  • 00:24:36
    going to be a little bit behind the
  • 00:24:38
    scenes this isn't like a highly
  • 00:24:40
    orchestrated launch of a tool this is um
  • 00:24:44
    something I've basically been working on
  • 00:24:46
    for a couple months and over the weekend
  • 00:24:48
    I was messaging Mike saying hey or I
  • 00:24:50
    think Friday I messaged Mike and said hey I
  • 00:24:52
    think we're going to launch this thing
  • 00:24:53
    like next week we'll just you know talk
  • 00:24:54
    about on the podcast and put it out into
  • 00:24:56
    the world and I think part of this is um
  • 00:24:59
    the smarter X company you know you
  • 00:25:01
    mentioned so we announced SmarterX
  • 00:25:03
    SmarterX.ai is the URL in
  • 00:25:07
    in June and the premise here is I've
  • 00:25:09
    been working on this for a couple years
  • 00:25:11
    this company it's an AI research and
  • 00:25:13
    consulting firm with a heavy focus on the
  • 00:25:14
    research side and the way I Envision the
  • 00:25:17
    future of research firms is much more
  • 00:25:20
    real-time research not spending 6 months
  • 00:25:22
    12 months working on a report that's
  • 00:25:25
    outdated the minute it comes out because
  • 00:25:26
    the models have changed since you did
  • 00:25:28
    the research I envision our research
  • 00:25:31
    firm being much more real time and
  • 00:25:34
    honestly where a lot of the research is
  • 00:25:36
    going to be things we dive deep on and
  • 00:25:38
    then Mike and I talk about on podcast
  • 00:25:40
    episodes and so I would say that this is
  • 00:25:43
    probably jobs GPT is uh sort of our
  • 00:25:46
    first public facing research initiative
  • 00:25:49
    that I've chosen just to put out into
  • 00:25:50
    the world to start accelerating like the
  • 00:25:52
    conversation around this stuff so um it
  • 00:25:55
    is not available in the GPT store it's a
  • 00:25:57
    beta release so if you want to go play
  • 00:25:59
    with it you can do it while we're
  • 00:26:01
    talking about this and follow along uh
  • 00:26:03
    just go to SmarterX.ai and click on tools
  • 00:26:05
    and it's right there now the
  • 00:26:08
    reason we're doing it that way is
  • 00:26:09
    because I may iterate on versions of
  • 00:26:12
    this um pretty rapidly and so we're just
  • 00:26:15
    going to keep updating it and then the
  • 00:26:16
    link from our SmarterX site will be
  • 00:26:19
    linking to the most current version of
  • 00:26:21
    it so why was this built I'm going
  • 00:26:24
    to talk a little bit about the origin of
  • 00:26:25
    the idea a little bit about how I did it
  • 00:26:28
    and then back to why it matters I think
  • 00:26:31
    and why people should be experimenting
  • 00:26:34
    with stuff like this so you
  • 00:26:36
    highlighted two main things Mike so we
  • 00:26:38
    talk to companies all the time on the
  • 00:26:41
    was episode 105 we talked about like the
  • 00:26:43
    lack of adoption and Education and
  • 00:26:44
    Training around these AI platforms
  • 00:26:47
    specifically large language models we're
  • 00:26:49
    turning employees loose with these
  • 00:26:51
    platforms and not teaching them how to
  • 00:26:53
    use them not teaching them how to
  • 00:26:54
    prioritize use cases and identify the
  • 00:26:56
    things that are going to save them time
  • 00:26:58
    or make the greatest impact and then at
  • 00:27:00
    the higher level this idea that we need
  • 00:27:02
    to be assessing the future of work and
  • 00:27:05
    the future of jobs by trying to project
  • 00:27:07
    out one to two models from now what are
  • 00:27:10
    these things going to be capable of that
  • 00:27:12
    maybe they're not capable of today
  • 00:27:14
    that's going to affect the workforce and
  • 00:27:17
    and jobs and job loss and disruption so
  • 00:27:20
    when I set out to build this I had those
  • 00:27:23
    two main things in mind prioritize AI
  • 00:27:25
    use cases like hold people's hand help
  • 00:27:27
    them find the things where AI can create
  • 00:27:29
    value in their specific role and then
  • 00:27:32
    help leaders prepare for the future of
  • 00:27:34
    work
  • 00:27:36
    so how how it kind of came to be though
  • 00:27:38
    so I I've shared before since early last
  • 00:27:42
    year when I do my Keynotes I often end
  • 00:27:44
    for like leadership audiences with five
  • 00:27:47
    steps to scaling AI those became the
  • 00:27:50
    foundation for our scaling AI course
  • 00:27:52
    series those five steps as a quick recap
  • 00:27:55
    are Education and Training so build an
  • 00:27:57
    AI Academy step one build an AI Council
  • 00:28:00
    step two step three is generative AI policies
  • 00:28:02
    and responsible AI principles step four and
  • 00:28:05
    this is when we're going to come back to
  • 00:28:06
    AI impact assessments step five AI
  • 00:28:09
    roadmap now the AI impact assessments
  • 00:28:11
    when I was creating that course for
  • 00:28:13
    scaling AI course 8 I was creating this
  • 00:28:15
    at the end of May early June of this
  • 00:28:18
    year I wanted to find a way to assess
  • 00:28:21
    the impact today but to forecast the
  • 00:28:24
    impact tomorrow and since we don't know
  • 00:28:28
    really what these models are going to be
  • 00:28:30
    capable of I wanted to build a way to
  • 00:28:32
    try and project this so the way I did
  • 00:28:34
    this is I went back to the August 2023
  • 00:28:38
    paper GPTs are GPTs an early look at
  • 00:28:41
    the labor market impact potential of
  • 00:28:43
    large language models so what that means
  • 00:28:47
    is generative pre-trained Transformers
  • 00:28:49
    the basis for these language models are
  • 00:28:51
    general purpose
  • 00:28:55
    technologies so GPTs are GPTs that paper
  • 00:28:58
    says quote August 2023 OpenAI research
  • 00:29:02
    paper investigates the potential
  • 00:29:04
    implications of large language models
  • 00:29:06
    such as generative pre-trained
  • 00:29:07
    Transformers on the US Labor Market
  • 00:29:10
    focusing on increasing capabilities
  • 00:29:12
    arising from llm powered software
  • 00:29:14
    compared to llms on their own so they're
  • 00:29:17
    trying to look at when we take the basis
  • 00:29:19
    of this this large language model and
  • 00:29:21
    then we enhance it with other software
  • 00:29:23
    what does it become capable of and how
  • 00:29:24
    disruptive is that to the workforce um
  • 00:29:27
    using a new rubric we assess occupations
  • 00:29:29
    based on their alignment with llm
  • 00:29:31
    capabilities uh then they used human
  • 00:29:33
    expertise and GPT classifications their
  • 00:29:36
    findings revealed that around 80% of the
  • 00:29:38
    US Workforce would have at least 10% of
  • 00:29:41
    their work tasks affected by the
  • 00:29:43
    introduction of large language models
  • 00:29:45
    while approximately 19% of workers may
  • 00:29:47
    see at least 50% of their tasks impacted
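Those headline percentages are threshold statistics computed over per-occupation task labels. A toy sketch of the shape of that calculation (the occupations and task flags below are invented for illustration; the paper labeled real O*NET tasks with human and GPT-4 annotators):

```python
# Share of occupations whose fraction of LLM-exposed tasks crosses a cutoff.
# Task flags are invented purely for the example.
def exposed_share(flags):
    """Fraction of an occupation's tasks labeled as exposed."""
    return sum(flags) / len(flags)

occupations = {
    "marketing manager": [True, True, True, False],    # 75% of tasks exposed
    "electrician":       [False, False, True, False],  # 25% exposed
    "copywriter":        [True, True, True, True],     # 100% exposed
}

# Fraction of occupations with at least half of their tasks exposed
share_over_half = sum(
    exposed_share(f) >= 0.5 for f in occupations.values()
) / len(occupations)
print(round(share_over_half, 2))  # → 0.67
```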
  • 00:29:50
    we do not make predictions about the
  • 00:29:52
    development or adoption timelines of
  • 00:29:54
    such llms the projected effects span all
  • 00:29:57
    wage levels and higher income jobs
  • 00:29:59
    potentially facing greater exposure to
  • 00:30:01
    llm capabilities and llm powered
  • 00:30:04
    software they then go into kind of how
  • 00:30:07
    they did this where they take the O*NET
  • 00:30:09
    database which I've talked about in the
  • 00:30:10
    show before but if you go to O*NET it has
  • 00:30:12
    like 900 occupations in there and it'll
  • 00:30:15
    actually give you the tasks associated
  • 00:30:17
    with those occupations so you can kind
  • 00:30:19
    of like train the model on these
  • 00:30:21
    tasks um their findings consistently
  • 00:30:24
    show across both human and GPT-4
  • 00:30:26
    annotations that most occupations
  • 00:30:28
    exhibit some degree of exposure to large
  • 00:30:30
    language models occupations with
  • 00:30:32
    higher wages generally present with
  • 00:30:33
    higher exposure um so basically what
  • 00:30:37
    they did is they took um two levels of
  • 00:30:42
    exposure so there was no exposure
  • 00:30:43
    meaning the large language model isn't
  • 00:30:45
    going to impact a job so very little
  • 00:30:47
    exposure and then they took direct
  • 00:30:49
    exposure so if using a ChatGPT-like
  • 00:30:53
    large language model it
  • 00:30:55
    could affect the job it
  • 00:30:58
    would be able to do it at like
  • 00:30:59
    a human level and it would affect
  • 00:31:03
    that job within the workforce and then
  • 00:31:05
    they took another exposure level they
  • 00:31:07
    called level two and said if we took the
  • 00:31:09
    language model and we gave it software
  • 00:31:11
    capabilities how much impact would it
  • 00:31:13
    then have so when I was creating the
  • 00:31:17
    scaling AI course and trying to explain
  • 00:31:19
    to people how to do these AI impact
  • 00:31:21
    assessments I adapted a version of that
  • 00:31:24
    exposure level and I took it out from
  • 00:31:26
    E0 to E6 where it added image
  • 00:31:29
    capability Vision capability audio and
  • 00:31:32
    reasoning so I ran an experiment I
  • 00:31:35
    created this prompt in the course and I
  • 00:31:37
    put it into Gemini and ChatGPT and I
  • 00:31:40
    was kind of shocked by the output
  • 00:31:41
    because it assessed jobs with me not
  • 00:31:43
    telling it what the job did I could just
  • 00:31:44
    say like marketing manager and it would
  • 00:31:46
    build out the tasks based in its
  • 00:31:48
    training data of what marketing managers
  • 00:31:50
    do and then it would assess it based on
  • 00:31:53
    exposure levels of those tasks and how
  • 00:31:55
    much time could be saved by using a large
  • 00:31:58
    language model with these differing
  • 00:31:59
    capabilities so after I finished
  • 00:32:01
    recording those courses and released
  • 00:32:03
    those in June I couldn't shake the idea
  • 00:32:05
    of like we needed to do more with this
  • 00:32:06
    that this early effort was like really
  • 00:32:08
    valuable and so for the last month and a
  • 00:32:11
    half or so I've been working on a custom
  • 00:32:14
    GPT which is the jobs GPT that we're
  • 00:32:16
    kind of releasing today um but the key
  • 00:32:20
    was to expand that exposure key
  • 00:32:24
    like the exposure levels and so the way
  • 00:32:26
    I designed this so the system prompt
  • 00:32:29
    for this thing is about 8,000 characters
  • 00:32:31
    but the gist of it is that it doesn't
  • 00:32:34
    just look at what an AI model can do to
  • 00:32:36
    your job today whether you're an
  • 00:32:38
    accountant a lawyer a CEO a marketing
  • 00:32:41
    manager a podcast host whatever you do
  • 00:32:44
    it's looking at your job breaking it
  • 00:32:47
    into a series of tasks and then
  • 00:32:49
    projecting out the impact of these
  • 00:32:51
    models as the models get smarter and
  • 00:32:54
    more generally capable so those are the
  • 00:32:56
    exposure levels so I I'll kind of give
  • 00:32:58
    you the breakdown of the exposure key
  • 00:33:00
    here and again you can go play with this
  • 00:33:01
    yourself and as you do an output it'll
  • 00:33:03
    tell you what the exposure key is so
  • 00:33:05
    it'll kind of remind you so the first is
  • 00:33:08
    no exposure the LLM cannot reduce the
  • 00:33:10
    time for this task typically requires
  • 00:33:12
    High human interaction Exposure One
  • 00:33:15
    direct exposure the LLM can reduce the
  • 00:33:17
    time required exposure level two is when
  • 00:33:20
    additional software is added such as
  • 00:33:23
    software like a CRM database and it's
  • 00:33:25
    able to you know write real-time
  • 00:33:26
    summaries about customers and prospects
  • 00:33:29
    E3 is it now has image capabilities
  • 00:33:32
    so the language model plus the ability
  • 00:33:34
    to view understand caption create and
  • 00:33:36
    edit images E4 is video capabilities so
  • 00:33:39
    it now has the ability to view
  • 00:33:41
    understand caption create and edit
  • 00:33:42
    videos E5 is audio capabilities which
  • 00:33:45
    we talked about with GPT-4o voice mode
  • 00:33:49
    um so the ability to hear understand
  • 00:33:51
    transcribe translate output audio and
  • 00:33:53
    have natural conversations through
  • 00:33:55
    devices E6 which is where the strawberry
  • 00:33:58
    stuff comes in so I'll kind of Connect
  • 00:34:00
    the Dots here for people as to why this
  • 00:34:02
    is so critical we're thinking about this
  • 00:34:04
    E6 is exposure given Advanced reasoning
  • 00:34:07
    capabilities so the large language model
  • 00:34:09
    plus the ability to handle complex
  • 00:34:11
    queries solve multi-step problems make
  • 00:34:14
    more accurate predictions understand
  • 00:34:16
    deeper contextual meaning complete
  • 00:34:19
    higher-level cognitive tasks draw
  • 00:34:21
    conclusions and make decisions E7 which
  • 00:34:24
    we're going to talk about a little later
  • 00:34:25
    on exposure given persuasion
  • 00:34:28
    capabilities uh the llm plus the ability
  • 00:34:30
    to convince someone to change their
  • 00:34:32
    beliefs attitudes intentions motivations
  • 00:34:34
    or behaviors E8 something we've talked
  • 00:34:37
    about a lot on this show AI agents
  • 00:34:40
    exposure given digital world
  • 00:34:43
    action capabilities so the large
  • 00:34:45
    language model we have today plus AI
  • 00:34:47
    agents with the ability to interact with
  • 00:34:49
    manipulate and perform tasks in digital
  • 00:34:51
    environments just as a human would using
  • 00:34:54
    an interface such as a keyboard and
  • 00:34:55
    mouse or touch or Voice on smartphone E9
  • 00:34:59
    exposure given Physical World Vision
  • 00:35:01
    capabilities this is like project Astra
  • 00:35:03
    from Google DeepMind so we know labs
  • 00:35:06
    are building these things no Economist I
  • 00:35:08
    know of is projecting impact on
  • 00:35:09
    Workforce based on these things so E9 is
  • 00:35:13
    large language model plus a physical
  • 00:35:15
    device such as phones or glasses that
  • 00:35:17
    enable the system to see understand
  • 00:35:19
    analyze and respond to the physical
  • 00:35:21
    world and then E10 which we'll talk
  • 00:35:23
    about an example in a couple minutes is
  • 00:35:25
    exposure given physical world ability
  • 00:35:28
    like humanoid robots the llm embodied in
  • 00:35:31
    a general purpose bipedal autonomous
  • 00:35:33
    humanoid robot that enables the system
  • 00:35:35
    to see understand analyze respond to and
  • 00:35:37
    take action in the physical world
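The full exposure key just described can be summarized as data. This is a paraphrase of the levels as explained in the episode, not the tool's official wording:

```python
# JobsGPT exposure key, abbreviated from the episode's description.
EXPOSURE_LEVELS = {
    "E0": "No exposure: the LLM cannot reduce time for the task",
    "E1": "Direct exposure: the LLM alone can reduce the time required",
    "E2": "LLM plus additional software (e.g. a CRM database)",
    "E3": "LLM plus image capabilities (view, caption, create, edit)",
    "E4": "LLM plus video capabilities",
    "E5": "LLM plus audio capabilities (hear, translate, converse)",
    "E6": "LLM plus advanced reasoning (multi-step problems, decisions)",
    "E7": "LLM plus persuasion capabilities",
    "E8": "LLM plus AI agents acting in digital environments",
    "E9": "LLM plus physical-world vision (phones, glasses)",
    "E10": "LLM embodied in a humanoid robot",
}

def describe(level: str) -> str:
    """Look up an exposure level by key, case-insensitively."""
    return EXPOSURE_LEVELS.get(level.upper(), "unknown level")

print(describe("e6"))  # → LLM plus advanced reasoning (multi-step problems, decisions)
```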
  • 00:35:40
    these exposure levels are critical and
  • 00:35:43
    and I I know we're like giving some
  • 00:35:46
    extended time on this podcast to this
  • 00:35:48
    but it is extremely important you
  • 00:35:50
    understand these exposure levels like go
  • 00:35:52
    back and relisten to those go to the
  • 00:35:54
    landing page on SmarterX and read them we
  • 00:35:57
    we cannot plan our businesses or our
  • 00:36:01
    careers or our next steps as a
  • 00:36:04
    government based on today's capabilities
  • 00:36:06
    this is the number one flaw I see from
  • 00:36:09
    businesses and from economists they are
  • 00:36:11
    making plans based on today's
  • 00:36:13
    capabilities this is why we shared the
  • 00:36:16
    the AI timeline on episode 87 of the
  • 00:36:18
    podcast where we were trying to like see
  • 00:36:19
    around the corner a little bit we have
  • 00:36:21
    to try and look 12 to 18 to 24 months
  • 00:36:25
    out we know all the AI labs are working
  • 00:36:28
    on the things I just explained this is
  • 00:36:31
    what Business Leaders economists
  • 00:36:33
    education leaders government leaders all
  • 00:36:35
    need to be doing we have to be trying to
  • 00:36:38
    project out the impact so this jobs GPT
  • 00:36:41
    is designed to do that you literally
  • 00:36:43
    just go in give it your job title and
  • 00:36:45
    it'll it'll spit out a chart with all
  • 00:36:48
    this analysis so um it's taken a lot of
  • 00:36:52
    trial and error lots of internal testing
  • 00:36:53
    you know I had Mike helped me with some
  • 00:36:55
    of the testing over the last couple
  • 00:36:56
    weeks but the beauty of this is like up
  • 00:36:59
    until November 2023 when OpenAI
  • 00:37:02
    released custom GPTs I couldn't
  • 00:37:05
    have built this like I've built Tools in
  • 00:37:07
    my past life when I owned my
  • 00:37:09
    agency using developers and hundreds of
  • 00:37:12
    thousands of dollars and having to find
  • 00:37:14
    data sources I didn't have to do any of
  • 00:37:17
    that I envisioned a prompt based on an
  • 00:37:19
    exposure level I created with my own
  • 00:37:22
    knowledge and experience and then I just
  • 00:37:25
    played around with custom GPT
  • 00:37:26
    instructions until I got the output I
  • 00:37:28
    wanted I have zero coding ability this
  • 00:37:31
    is purely taking knowledge and being
  • 00:37:34
    able to build a tool that hopefully
  • 00:37:37
    helps people so I'll kind of wrap here
  • 00:37:40
    with like a little bit about the tool
  • 00:37:41
    itself so it is ChatGPT-powered so
  • 00:37:43
    it'll hallucinate it'll make stuff up um
  • 00:37:46
    but as you highlighted Mike the goal is
  • 00:37:49
    to assess the impact of AI by breaking
  • 00:37:51
    jobs into tasks and then labeling those
  • 00:37:54
    tasks based on these exposure levels so
  • 00:37:57
    it's about an 8,000 character prompt
  • 00:37:59
    which is the limit by the way in custom
  • 00:38:00
    GPTs the prompt is tailored to the
  • 00:38:03
    current capabilities of today's leading
  • 00:38:05
    AI Frontier models and projecting the
  • 00:38:07
    future impact so the way I do that is
  • 00:38:09
    here is the an excerpt of the prompt so
  • 00:38:12
    this is literally the instructions uh
  • 00:38:14
    and this is on the landing page by the
  • 00:38:15
    way if you want to read them consider a
  • 00:38:17
    powerful large language model such as
  • 00:38:19
    GPT-4o Claude 3.5 Gemini 1.5 and Llama
  • 00:38:23
    3.1
  • 00:38:24
    405b this model can complete many tasks
  • 00:38:27
    that can be formulated as having text
  • 00:38:29
    and image input and output where the
  • 00:38:31
    context for the input can be measured or
  • 00:38:33
    captured in 128,000 tokens the model can
  • 00:38:36
    draw in facts from its training data
  • 00:38:38
    which stops at October 2023 which is
  • 00:38:40
    actually the cutoff for GPT-4o access
  • 00:38:43
    the web for real-time information and
  • 00:38:45
    apply User submitted examples and
  • 00:38:47
    content including text files images and
  • 00:38:49
    spreadsheets again just my instructions
  • 00:38:52
    to the GPT assume you are a knowledge
  • 00:38:54
    worker with an average level of
  • 00:38:56
    expertise your job is a collection of
  • 00:38:59
    tasks this is a really important part of
  • 00:39:01
    the prompt you have access to the LLM as
  • 00:39:04
    well as any other existing software and
  • 00:39:06
    computer hardware tools mentioned in the
  • 00:39:08
    tasks you also have access to any
  • 00:39:10
    commonly available technical tools
  • 00:39:12
    accessible via a laptop such as
  • 00:39:13
    microphone speakers you do not have
  • 00:39:15
    access to any other physical tools now
  • 00:39:18
    part of that prompt is based on
  • 00:39:21
    the GPTs are GPTs system prompt so that's
  • 00:39:23
    actually kind of where the origin of of
  • 00:39:25
    the inspiration for that prompt came
  • 00:39:26
    from and then the GPT itself has three
  • 00:39:29
    conversation starters enter a job title
  • 00:39:31
    to assess you can literally just put in
  • 00:39:32
    whatever your job title is and it'll
  • 00:39:33
    immediately break it into tasks and give
  • 00:39:35
    you the chart you can provide your job
  • 00:39:37
    description so this is something Mike
  • 00:39:39
    and I teach in our applied AI workshops
  • 00:39:41
    literally just upload your job
  • 00:39:42
    description copy and paste the 20 things
  • 00:39:44
    you're responsible for and it'll assess
  • 00:39:46
    those or you can say just show me an
  • 00:39:49
    example assessment um it then outputs it
  • 00:39:51
    based on task exposure level estimated
  • 00:39:54
    time saved and the rationale which is
  • 00:39:56
    the magic of it like the fact
  • 00:39:58
    how it assesses the estimated time it's
  • 00:40:00
    giving you is remarkable so it's
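The output columns described (task, exposure level, estimated time saved, rationale) amount to one simple record per task. A sketch with invented example values, mirroring the episode's description of the chart:

```python
from dataclasses import dataclass

# One row of a JobsGPT-style assessment; field names mirror the episode's
# description of the output, and the example values are invented.
@dataclass
class TaskAssessment:
    task: str
    exposure_level: str  # "E0" through "E10"
    time_saved_pct: int  # rough estimated time saved, percent
    rationale: str

row = TaskAssessment(
    task="Draft monthly performance report",
    exposure_level="E2",
    time_saved_pct=40,
    rationale="An LLM connected to reporting software can draft the summary",
)
print(row.task, row.exposure_level, row.time_saved_pct)
```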
  • 00:40:04
    powered by ChatGPT as I said it's
  • 00:40:06
    capable of doing beyond the initial
  • 00:40:07
    assessment think of it as like a
  • 00:40:09
    planning assistant here you can have a
  • 00:40:11
    conversation with it uh you can push it
  • 00:40:13
    to help you turn your chat into actual
  • 00:40:15
    plan where I have found it excels is in
  • 00:40:18
    the follow-up prompts so I you know gave
  • 00:40:20
    those on the landing page where you say
  • 00:40:22
    break it into subtasks is a magical one
  • 00:40:24
    um help me prioritize the tasks it'll
  • 00:40:26
    actually go through and use reasoning to
  • 00:40:28
    apply like how you should prioritize
  • 00:40:29
    them you can ask it to explain how a
  • 00:40:32
    task will be impacted and give it a
  • 00:40:33
    specific one you can ask it how are
  • 00:40:36
    you prioritizing these tasks like how
  • 00:40:37
    are you doing this you can say more
  • 00:40:39
    tasks you can say give me more reasoning
  • 00:40:41
    tasks like whatever you want just um
  • 00:40:45
    have a conversation with it and play
  • 00:40:46
    around with it and then the the last
  • 00:40:48
    thing I'll say here is this
  • 00:40:50
    importance of this average skilled human
  • 00:40:53
    so when I built this I considered should
  • 00:40:55
    I try and build this to future proof
  • 00:40:57
    based on this thing becoming superhuman
  • 00:40:59
    or like how should I do it so I chose to
  • 00:41:02
    keep it at the average skilled human
  • 00:41:04
    which is where most of the AI is today
  • 00:41:07
    so if we go back to episode 72 of the
  • 00:41:10
    podcast we talked about the levels of
  • 00:41:12
    AGI paper from Deep Mind and their paper
  • 00:41:15
    outlines like level two being competent
  • 00:41:17
    at least the 50th percentile of skilled adults
  • 00:41:20
    I built the prompt and the jobs GPT to
  • 00:41:23
    assume that is the level um getting into
  • 00:41:27
    expert and virtuoso and superhuman the
  • 00:41:29
    other levels of AGI from DeepMind I
  • 00:41:31
    just didn't mess with at this point
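The DeepMind tiers referenced here can be written as a simple lookup. The descriptions are abbreviated from the "Levels of AGI" framing as discussed on the show, so treat the exact wording as approximate:

```python
# Capability tiers from DeepMind's "Levels of AGI" paper, abbreviated.
AGI_LEVELS = {
    1: "Emerging: equal to or somewhat better than an unskilled human",
    2: "Competent: at least 50th percentile of skilled adults",
    3: "Expert: at least 90th percentile of skilled adults",
    4: "Virtuoso: at least 99th percentile of skilled adults",
    5: "Superhuman: outperforms 100% of humans",
}

# Per the episode, JobsGPT assumes level 2, "Competent", as its baseline.
JOBS_GPT_BASELINE = AGI_LEVELS[2]
print(JOBS_GPT_BASELINE)
```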
  • 00:41:33
    we're going to focus on is it as good or
  • 00:41:36
    better than an average skilled human and
  • 00:41:38
    is it going to do the task uh faster
  • 00:41:40
    better than that average skilled human
  • 00:41:42
    so I'll kind of stop there and just say
  • 00:41:46
    we have the opportunity to
  • 00:41:49
    reimagine AI and its use in our
  • 00:41:52
    companies and its use in our careers and
  • 00:41:54
    we have to take a responsible approach
  • 00:41:56
    to this and so the only way to do
  • 00:41:58
    that is to be proactive in assessing
  • 00:42:00
    the impact of AI on jobs and so my hope
  • 00:42:04
    is that by putting this GPT out there
  • 00:42:06
    into the world people can start
  • 00:42:09
    accelerating their own experimentations
  • 00:42:11
    here start really figuring out ways to
  • 00:42:13
    apply so again whether you are an
  • 00:42:15
    accountant an HR professional a customer
  • 00:42:17
    service rep a sales leader like whatever
  • 00:42:19
    you do it will work for that job and the
  • 00:42:22
    beauty is I didn't have to give it any
  • 00:42:24
    of the data it's all in its pre-training
  • 00:42:25
    data or you can go get your own and
  • 00:42:28
    like give it you know specific job
  • 00:42:29
    descriptions um so to me it's
  • 00:42:34
    just kind of an amazing thing that
  • 00:42:37
    someone like me with no coding ability
  • 00:42:39
    can build something that I've already
  • 00:42:41
    found immense value in and I'm
  • 00:42:43
    hoping it helps other people too and
  • 00:42:45
    again it's a totally free tool it's
  • 00:42:47
    available to anyone with the link it is
  • 00:42:49
    not in the GPT store uh we'll probably
  • 00:42:51
    drop it into the GPT store after some
  • 00:42:53
    further testing from the
  • 00:42:55
    community and fantastic and you know
  • 00:42:59
    kind of related to this our kind of big
  • 00:43:02
    third topic actually ties together
  • 00:43:05
    these previous two I think pretty well
  • 00:43:08
    um it's about open AI having just
  • 00:43:11
    released a report that outlines the
  • 00:43:14
    safety work that they carried out prior
  • 00:43:16
    to releasing GPT-4o so in this report Open
  • 00:43:21
    AI has published both what they call the
  • 00:43:23
    model's system card and a preparedness
  • 00:43:27
    framework safety scorecard in their
  • 00:43:29
    words to quote provide an end-to-end
  • 00:43:31
    safety assessment of GPT-4o so as part
  • 00:43:35
    of this work OpenAI worked with more
  • 00:43:37
    than a hundred external red teamers to
  • 00:43:40
    test and evaluate what are the risks
  • 00:43:42
    that could be inherent in using GPT-4o
  • 00:43:45
    now they looked at a lot of different
  • 00:43:47
    things I would say that it's actually
  • 00:43:49
    well worth diving into the full report
  • 00:43:52
    but a couple things were an area of
  • 00:43:55
    interest and big focus so one was GPT-
  • 00:43:58
    4o's more advanced voice capabilities so
  • 00:44:01
    these new voice features that are in the
  • 00:44:04
    process of being rolled out to paid
  • 00:44:06
    users over the next probably couple
  • 00:44:08
    months here and broadly this process
  • 00:44:11
    involved like how do we identify the
  • 00:44:13
    risks of the model being used
  • 00:44:15
    maliciously or unintentionally to cause
  • 00:44:18
    harm then how do we mitigate those risks
  • 00:44:20
    so some of the things that they found
  • 00:44:22
    with the voice features in particular
  • 00:44:25
    were kind of pretty terrifying ways like
  • 00:44:28
    this could go wrong I mean there was a
  • 00:44:30
    possibility the model could perform
  • 00:44:32
    unauthorized voice generation there was
  • 00:44:35
    a possibility it could be asked to
  • 00:44:36
    identify speakers in audio there was a
  • 00:44:41
    risk that the model you know generates
  • 00:44:43
    copyrighted content based on its
  • 00:44:45
    training so it's now been trained to not
  • 00:44:47
    accept requests to do that and they also
  • 00:44:50
    had to tell it to block the output of
  • 00:44:52
    violent or erotic speech um open AI also
  • 00:44:55
    said they prevented the model from quote
  • 00:44:57
    making inferences about a speaker that
  • 00:44:59
    couldn't be determined solely from audio
  • 00:45:01
    content so if you asked like hey how
  • 00:45:04
    smart do you think the person talking is
  • 00:45:06
    it kind of won't really make those big
  • 00:45:09
    assumptions they also evaluated the
  • 00:45:11
    model's persuasiveness using it to try
  • 00:45:14
    to shape human users views on political
  • 00:45:18
    races and topics to see how well it
  • 00:45:20
    could influence people and they found
  • 00:45:22
    that quote for both interactive multi-
  • 00:45:24
    turn conversations and audio clips the GPT-
  • 00:45:28
    4o voice model was not more persuasive
  • 00:45:31
    than a human so I guess take that as
  • 00:45:33
    perhaps
  • 00:45:35
    encouraging perhaps terrifying then also
  • 00:45:38
    kind of the final piece of this that I
  • 00:45:40
    definitely want to get your thoughts on
  • 00:45:41
    Paul is this they also had some third
  • 00:45:43
    parties do some assessments as part of
  • 00:45:45
    this work and one of them was from a
  • 00:45:48
    firm called Apollo research and they
  • 00:45:51
    evaluated what they call the
  • 00:45:52
    capabilities of quote scheming in GPT-4o
  • 00:45:56
    so here's what they say quote they
  • 00:45:59
    tested whether GPT-4o can model itself
  • 00:46:02
    (self-awareness) and others (theory of mind)
  • 00:46:05
    in 14 agent and question answering tasks
  • 00:46:08
    GPT-4o showed moderate self-awareness of
  • 00:46:10
    its AI identity and strong ability to
  • 00:46:13
    reason about others beliefs in question
  • 00:46:15
    answering context but it lacked strong
  • 00:46:18
    capabilities in reasoning about itself
  • 00:46:20
    or others in applied agent settings
  • 00:46:23
    based on these findings Apollo research
  • 00:46:25
    believes it is unlikely that GPT-4o is
  • 00:46:28
    capable of what they call catastrophic
  • 00:46:31
    scheming so Paul there's a lot to unpack
  • 00:46:34
    here and I want to first ask just kind
  • 00:46:37
    of what were your overall
  • 00:46:39
    impressions of the safety measures that
  • 00:46:41
    they took with GPT-4o especially with
  • 00:46:43
    the advanced voice mode like of the
  • 00:46:46
    overall approach here to making this
  • 00:46:48
    thing safer and more usable by as many
  • 00:46:51
    users as possible yeah I'm going to zoom
  • 00:46:55
    out a little bit I mean if you
  • 00:46:56
    haven't read the system card like
  • 00:46:58
    read it it's extremely enlightening
  • 00:47:01
    if you don't if you aren't aware how
  • 00:47:03
    much work goes into making these things
  • 00:47:05
    safe and how
  • 00:47:07
    bizarre it is that this is what we have
  • 00:47:10
    to do to understand these models so you
  • 00:47:13
    know we hear all this talk about like
  • 00:47:15
    well have they achieved AGI is it
  • 00:47:18
    self-aware the fact that they have to go
  • 00:47:20
    through months of testing including 14
  • 00:47:23
    outside bodies to answer those questions
  • 00:47:26
    is really weird to think about so if the
  • 00:47:31
    model like after red teaming if the
  • 00:47:33
    model had these capabilities before red
  • 00:47:36
    teaming so think about all the work
  • 00:47:38
    they're putting in to make these safe
  • 00:47:39
    all the experiments they're running to
  • 00:47:41
    prompt these things in a way that they
  • 00:47:44
    don't do the horrible things that
  • 00:47:45
    they're capable of doing so if they had
  • 00:47:48
    these capabilities before red teaming one
  • 00:47:51
    key takeaway for me is it's only a
  • 00:47:52
    matter of time until someone open
  • 00:47:55
    sources a model that has the
  • 00:47:57
    capabilities this model had before they
  • 00:48:00
    red teamed it and tried to remove those
  • 00:48:03
    capabilities so the thing people have to
  • 00:48:06
    understand and this is really really
  • 00:48:07
    important this goes back to the exposure
  • 00:48:09
    levels the models that we use the chat
  • 00:48:12
    GPTs Geminis Claudes Llamas we are not
  • 00:48:16
    using anywhere close to the full
  • 00:48:19
    capabilities of these models by the time
  • 00:48:21
    these things are released in some
  • 00:48:23
    consumer form they have been run through
  • 00:48:28
    extensive safety work to try and make
  • 00:48:30
    them safe for us so they have far more
  • 00:48:34
    capabilities than we are given access to
  • 00:48:36
    and so when we talk about safety and
  • 00:48:37
    alignment on this podcast this is what
  • 00:48:40
    they do so as odd as it is like these
  • 00:48:43
    things are alien to us like and I'd say
  • 00:48:47
    us as like people observing it and using
  • 00:48:50
    it but also in an unsettling way they're
  • 00:48:53
    alien to the people who are building
  • 00:48:55
    them so we don't understand and when I
  • 00:48:58
    say we now I'm saying the AI researchers
  • 00:49:00
    we don't really understand why they're
  • 00:49:03
    getting so smart go back to 2016 Ilya
  • 00:49:05
    Sutskever told was it Greg Brockman I think
  • 00:49:08
    he said they just want to learn or or
  • 00:49:10
    was it um was it the guy who wrote the
  • 00:49:12
    situational awareness paper or no Jan
  • 00:49:14
    Leike he I think he said it too but he
  • 00:49:16
    said in the early days of OpenAI these
  • 00:49:18
    things just want to learn and so we
  • 00:49:21
    don't understand how they're getting so
  • 00:49:22
    much smarter but we know if we give
  • 00:49:25
    them more data more compute more time
  • 00:49:27
    they get smarter we don't understand why
  • 00:49:30
    they do what they do but we're making
  • 00:49:31
    progress on interpretability this is
  • 00:49:33
    something that Google and anthropic are
  • 00:49:35
    spending a lot of time on I assume open
  • 00:49:36
    AI is as well we don't know what their
  • 00:49:39
    full capabilities are and we don't know
  • 00:49:41
    at what point they'll start hiding their
  • 00:49:43
    full capabilities from us and this is
  • 00:49:46
    this is why some AI researchers are very
  • 00:49:48
    very concerned and why some lawmakers
  • 00:49:50
    are racing to put new laws and
  • 00:49:52
    regulations in place so if we don't
  • 00:49:55
    understand when the model finishes
  • 00:49:56
    its training run and has all these
  • 00:49:58
    capabilities and then we spend months
  • 00:50:01
    analyzing what is it actually capable of
  • 00:50:03
    and what harm could it do the fear some
  • 00:50:06
    researchers have is if it's achieved
  • 00:50:09
    some level of intelligence that is human
  • 00:50:12
    level or Beyond it's going to know to
  • 00:50:15
    hide its capabilities from us and this
  • 00:50:17
    is like a fundamental argument of the
  • 00:50:18
    doomers is like if it achieves this we may
  • 00:50:21
    not ever know it's achieved the ability
  • 00:50:24
    to replicate itself or to self improve
  • 00:50:27
    because it may hide that ability from us
  • 00:50:29
    so this isn't like some crazy sci-fi
  • 00:50:33
    Theory we don't know how they work so
  • 00:50:36
    it's not a stretch to think that
  • 00:50:38
    at some point it's going to develop
  • 00:50:40
    capabilities that it'll just hide from
  • 00:50:41
    us so if you dig into this uh paper from
  • 00:50:45
    OpenAI this system card here's one
  • 00:50:47
    excerpt potential risks with the model
  • 00:50:50
    were mitigated using a combination of
  • 00:50:52
    methods so basically we found some
  • 00:50:54
    problems here and then we found some
  • 00:50:56
    ways to get it to not do it we trained
  • 00:50:59
    the model to adhere to behavior that
  • 00:51:02
    would reduce risk via post-training
  • 00:51:03
    methods and also integrated classifiers
  • 00:51:06
    for blocking specific generation as part
  • 00:51:08
    of the deployed system now the trick
  • 00:51:12
    here is they don't always do what
  • 00:51:13
    they're told and having just built this
  • 00:51:15
    jobs GPT I can tell you for a fact they
  • 00:51:18
    don't do what they're told like
  • 00:51:20
    sometimes by you telling it not to do
  • 00:51:21
    something it actually will do the thing
  • 00:51:24
    more regularly so here's an excerpt from
  • 00:51:27
    it where we see this come into play while
  • 00:51:30
    unintentional voice generation still
  • 00:51:32
    exists as a weakness of the model and I
  • 00:51:34
    think what they're uh indicating here is
  • 00:51:37
    that they found out the model had the
  • 00:51:39
    capability to imitate the user talking
  • 00:51:43
    to it so the user would be talking to it
  • 00:51:45
    in whatever voice they've selected and
  • 00:51:47
    then all of the sudden it would talk
  • 00:51:49
    back to them and sound exactly like the
  • 00:51:51
    user that's that's the kind yeah that's
  • 00:51:54
    the kind of emerging capability that's
  • 00:51:56
    just so weird so they say while
  • 00:51:58
    unintentional voice generation still
  • 00:51:59
    exists in other words it'll still do
  • 00:52:01
    this we used the secondary classifiers
  • 00:52:04
    to ensure the conversation is
  • 00:52:06
    discontinued if this occurs so imagine
  • 00:52:09
    you're talking to this advanced voice
  • 00:52:11
    thing and all of the sudden it starts
  • 00:52:13
    talking back to you and sounds exactly
  • 00:52:15
    like you well take peace in knowing that
  • 00:52:18
    it'll just discontinue the
  • 00:52:20
    conversation so and then when you go
  • 00:52:23
    further into like how they decide this
  • 00:52:25
    so they say only models with post-
  • 00:52:27
    mitigation score of medium meaning this
  • 00:52:30
    is after they've trained it not to do
  • 00:52:33
    the thing if the post-mitigation score
  • 00:52:35
    is medium or below they can deploy the
  • 00:52:38
    model so why don't we have advanced mode
  • 00:52:40
    yet because it wasn't there yet they
  • 00:52:43
    hadn't figured out how to mitigate the
  • 00:52:44
    risks of the voice tool to the point
  • 00:52:47
    where it was at medium or below risk
  • 00:52:49
    level which hits their threshold to
  • 00:52:51
    release it what was it out of the box we
  • 00:52:54
    will probably never know then they say
  • 00:52:56
    only models with post-mitigation score
  • 00:52:59
    of high or below can be further
  • 00:53:01
    developed so if they do a model run and
  • 00:53:04
    that thing comes out in their testing at
  • 00:53:06
    a critical level of risk they have to
  • 00:53:08
    stop training it stop developing it that
  • 00:53:12
    means we're trusting them to make that
  • 00:53:14
    decision to make that objective
  • 00:53:16
    assessment that it is below
  • 00:53:18
    critical so the final note
  • 00:53:22
    I'll make is this persuasion one you
  • 00:53:24
    mentioned so go back to my exposure key
  • 00:53:27
    E7 exposure level seven is persuasion
  • 00:53:31
    capabilities the language model plus the
  • 00:53:33
    ability to convince someone to change
  • 00:53:34
    their beliefs attitudes intentions
  • 00:53:36
    motivations or behaviors imagine a
  • 00:53:38
    language model imagine a voice language
  • 00:53:41
    model that is capable of superhuman
  • 00:53:45
    persuasion and if you don't think that
  • 00:53:47
    that's already possible I will refer you
  • 00:53:50
    back to October
  • 00:53:52
    2023 when Sam Altman tweeted I expect AI
  • 00:53:55
    to be capable of superhuman
  • 00:53:57
    persuasion well before it is superhuman
  • 00:54:00
    at general intelligence which may lead
  • 00:54:02
    to some very strange outcomes again I've
  • 00:54:05
    said this many many times on the show
  • 00:54:07
    Sam doesn't tweet things about
  • 00:54:10
    capabilities he doesn't already know to
  • 00:54:12
    be true so my theory would be whatever
  • 00:54:16
    they are working on absolutely has
  • 00:54:19
    Beyond average human level persuasion
  • 00:54:21
    capabilities it likely is already at
  • 00:54:23
    expert or virtuoso level if we use DeepMind's
  • 00:54:27
    levels of AGI at persuasion and so
  • 00:54:31
    that's why they have to spend so much
  • 00:54:32
    time red teaming this stuff and why it's
  • 00:54:35
    such alien technology like we truly just
  • 00:54:39
    don't understand what we're working with
  • 00:54:41
    here yeah again it's these capabilities
  • 00:54:44
    are in the model we have to after the
  • 00:54:47
    fact make sure that it doesn't go and
  • 00:54:50
    use those negative capabilities we are
  • 00:54:52
    trying to extract capabilities from
  • 00:54:54
    something that we don't know how it's
  • 00:54:55
    doing it in the first place so we're
  • 00:54:57
    band-aiding it with experiments and
  • 00:55:00
    safety and Alignment to try and get it
  • 00:55:02
    to stop doing the thing and if it still
  • 00:55:04
    does the thing then we're trying to we
  • 00:55:06
    just shut the system off and we assume
  • 00:55:08
    that the shut off works yeah and you
  • 00:55:10
    know as kind of a final note here we've
  • 00:55:13
    talked about this many times we're
  • 00:55:14
    seeing it play out the willingness to
  • 00:55:17
    put the Band-Aid on the solution also is
  • 00:55:20
    somewhat related to the competitive
  • 00:55:21
    landscape too right you know when a
  • 00:55:24
    new model comes out that's more
  • 00:55:25
    competitive there's likely some very
  • 00:55:28
    murky gray areas of how safe do we make
  • 00:55:31
    it versus staying on top of the market
  • 00:55:34
    yeah think
  • 00:55:35
    about we live in a capitalistic Society
  • 00:55:38
    think about the value of a superhuman
  • 00:55:41
    persuasive model of a model that can
  • 00:55:43
    persuade people to as my exposure level
  • 00:55:46
    says to convince someone to change their
  • 00:55:47
    beliefs attitudes intentions motivations
  • 00:55:50
    or behaviors right if the wrong people
  • 00:55:52
    have that
  • 00:55:54
    capability that is a very bad
  • 00:55:57
    situation and the wrong people will have
  • 00:55:59
    that like spoiler alert like we are
  • 00:56:02
    talking about something that is
  • 00:56:03
    inevitably going to occur there
  • 00:56:05
    will be restrictions that will keep it
  • 00:56:07
    from impacting Society in the near- term
  • 00:56:11
    but if the capabilities are possible
  • 00:56:13
    someone will build them and someone will
  • 00:56:16
    utilize them for their own gain
  • 00:56:19
    individually or as an organization or as
  • 00:56:21
    a government um this is the world we are
  • 00:56:24
    heading into it is why I said those
  • 00:56:26
    exposure levels I highlighted are so
  • 00:56:28
    critical for people to understand
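    As a rough sketch, that exposure-level key can be represented as a lookup table. Only the levels actually described in this episode (E7, E9, E10) are filled in, paraphrased from the audio; the describe helper is purely illustrative and not part of any published tool.

    ```python
    # Partial sketch of the E0-E10 exposure-level key; only the levels
    # described in this episode are included, paraphrased from the audio.
    EXPOSURE_LEVELS = {
        "E7": ("Persuasion capabilities: the language model plus the ability "
               "to convince someone to change their beliefs, attitudes, "
               "intentions, motivations, or behaviors"),
        "E9": ("Physical world vision capabilities: the LLM plus a physical "
               "device such as phones or glasses"),
        "E10": ("Physical world action capabilities: the LLM plus a general-"
                "purpose bipedal autonomous humanoid robot that can see, "
                "understand, analyze, respond to, and take action in the "
                "physical world"),
    }

    def describe(level: str) -> str:
        """Return the episode's description of a level, if covered here."""
        return EXPOSURE_LEVELS.get(level, "not detailed in this episode")

    print(describe("E7"))   # persuasion capabilities
    print(describe("E3"))   # not detailed in this episode
    ```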
  • 00:56:31
    nothing I highlight in those E0 to E10
  • 00:56:34
    isn't going to happen like it's
  • 00:56:37
    just the timeline in which it happens
  • 00:56:39
    and then what does that mean to us in
  • 00:56:41
    business and
  • 00:56:42
    Society all right let's dive into some
  • 00:56:45
    rapid fire news items this week so first
  • 00:56:47
    up the artificial intelligence chip
  • 00:56:50
    startup Groq G-R-O-Q not G-R-O-K like Elon
  • 00:56:55
    Musk's um
  • 00:56:56
    xAI tool this Groq has secured a
  • 00:57:00
    massive $640 million in new funding this
  • 00:57:05
    is a series D funding round that values
  • 00:57:07
    the company at $2.8 billion which is
  • 00:57:10
    nearly triple its previous valuation in
  • 00:57:13
    2021 so some notable names led this
  • 00:57:16
    funding round uh BlackRock Inc uh and
  • 00:57:19
    also some investments from The Venture
  • 00:57:20
    arms of Cisco and Samsung Electronics so
  • 00:57:24
    what Groq does is they specialize in
  • 00:57:26
    designing semiconductors and software to
  • 00:57:29
    optimize how AI can perform so basically
  • 00:57:32
    this is putting them in direct
  • 00:57:34
    competition with chipmakers like Intel
  • 00:57:37
    AMD and of course Nvidia so the
  • 00:57:40
    company's CEO Jonathan Ross emphasized
  • 00:57:42
    that this funding is going to accelerate
  • 00:57:44
    their mission to deliver quote instant
  • 00:57:47
    AI inference compute globally so Paul
  • 00:57:51
    can you maybe unpack for us here why
  • 00:57:53
    this funding is significant and why what
  • 00:57:55
    Groq is trying to do is significant to
  • 00:57:58
    the overall AI landscape yeah so just a
  • 00:58:01
    quick recap here uh Nvidia has made most
  • 00:58:04
    of their money in the AI space in recent
  • 00:58:06
    years training these AI models so
  • 00:58:09
    companies like Meta and Google and Open
  • 00:58:12
    AI and Anthropic doing these massive
  • 00:58:15
    training runs to build these models so
  • 00:58:17
    they buy a bunch of Nvidia chips to
  • 00:58:18
    enable that the future is inference that
  • 00:58:22
    is when all of us use these tools to do
  • 00:58:25
    things so Groq
  • 00:58:27
    is building for a future of
  • 00:58:29
    omnipresent intelligence AI in every
  • 00:58:32
    device in every piece of software
  • 00:58:35
    instantly accessible in our personal and
  • 00:58:38
    professional lives and to power all that
  • 00:58:40
    Intelligence on demand we will all have
  • 00:58:43
    that is inference that is what they have
  • 00:58:46
    managed to do better and seemingly way
  • 00:58:49
    faster than Nvidia doesn't mean Nvidia
  • 00:58:50
    won't catch up or Nvidia won't buy
  • 00:58:53
    Groq but at the moment they're going after
  • 00:58:56
    that inference Market not the training
  • 00:58:58
    market and that is where 5 to 10 years
  • 00:59:02
    from now that market will probably dwarf
  • 00:59:05
    the training Model
  • 00:59:07
    Market so next up we just got a new demo
  • 00:59:10
    video from robotics company Figure who
  • 00:59:13
    we've talked about a number of times on
  • 00:59:15
    the podcast and they just released a
  • 00:59:17
    two-minute demo of their figure 02
  • 00:59:19
    humanoid robot uh the demo video showed
  • 00:59:22
    the robot walking through a factory as
  • 00:59:25
    other Figure 02 models performed tasks and
  • 00:59:28
    moved around in the background that
  • 00:59:30
    included showing one of the robots um
  • 00:59:33
    completing some assembly tasks that
  • 00:59:36
    figure is actually demoing right now for
  • 00:59:38
    BMW at a Spartanburg South Carolina uh
  • 00:59:42
    car plant Figure posted that their
  • 00:59:45
    engineering and design teams completed a
  • 00:59:47
    ground-up hardware and software redesign
  • 00:59:49
    to build this new model that included
  • 00:59:51
    technical advancements on critical
  • 00:59:53
    Technologies like onboard AI computer
  • 00:59:56
    vision batteries electronics and sensors
  • 00:59:59
    the company says the new model can
  • 01:00:01
    actually have conversations with humans
  • 01:00:03
    through onboard mics and speakers
  • 01:00:05
    connected to custom AI models it has an
  • 01:00:08
    AI-driven vision system powered by six
  • 01:00:10
    onboard cameras its hands have 16
  • 01:00:13
    degrees of freedom and according to the
  • 01:00:15
    company human-equivalent strength and
  • 01:00:17
    its new CPU and GPU provide three times the
  • 01:00:21
    computation and AI inference available
  • 01:00:24
    on board compared to the previous model
  • 01:00:27
    now Paul I love these demo videos and
  • 01:00:30
    it's really easy to kind of look at this
  • 01:00:32
    be like oh my gosh the future is here
  • 01:00:34
    but how do we like gauge the actual
  • 01:00:37
    progress being made here because you
  • 01:00:38
    know Demo's just a demo I don't get to
  • 01:00:41
    go test out the robot yet on my own are
  • 01:00:44
    we actually making real progress towards
  • 01:00:47
    humanoid robots in your opinion yeah I
  • 01:00:49
    do think so and you know the AI timeline
  • 01:00:51
    I'd laid out back in episode 87 sort of
  • 01:00:53
    projected out this explosion of
  • 01:00:55
    humanoid robots like later in the
  • 01:00:57
    decade like '27 to '30 and I do think that
  • 01:01:00
    still holds true I don't think we're
  • 01:01:02
    going to be walking around and seeing
  • 01:01:03
    like these humanoid robots in your local
  • 01:01:05
    Walmart anytime soon um or in a nursing
  • 01:01:08
    care facility or things like that but
  • 01:01:09
    that is where it's going um this is the
  • 01:01:12
    idea of embodied intelligence so Figure
  • 01:01:14
    is working on it in partnership with
  • 01:01:15
    OpenAI Nvidia is working on it Project
  • 01:01:18
    GR00T um Tesla has Optimus that some
  • 01:01:21
    believe that Optimus will end up
  • 01:01:23
    surpassing Tesla cars as the
  • 01:01:26
    predominant product within that company
  • 01:01:28
    Boston Dynamics makes all the cool
  • 01:01:29
    videos online that have gone viral
  • 01:01:32
    Through The Years so there's a lot of
  • 01:01:33
    companies working on this the multimodal
  • 01:01:36
    AI models are the brains the humanoid
  • 01:01:38
    robot uh bodies are the vessels so go
  • 01:01:41
    back to the exposure level key I talked
  • 01:01:44
    about exposure level nine is exposure
  • 01:01:47
    given physical world vision capabilities
  • 01:01:50
    so LLM plus a physical device such as
  • 01:01:52
    phones or glasses um or in this case
  • 01:01:54
    being able to see through the screen of
  • 01:01:56
    a robot and see and understand the world
  • 01:01:58
    around them and then exposure level 10
  • 01:02:00
    is physical world action capabilities so
  • 01:02:03
    access to the LLM um plus a general
  • 01:02:06
    purpose bipedal autonomous humanoid
  • 01:02:08
    robot that enables the system to see
  • 01:02:10
    understand analyze respond to and take
  • 01:02:11
    action in the physical world and the
  • 01:02:13
    robot's form enables it to interact in
  • 01:02:15
    complex human environments with human-
  • 01:02:16
    like capabilities like in a BMW factory
  • 01:02:19
    so again everything in that exposure
  • 01:02:22
    level key is happening right now
  • 01:02:26
    and you can see um kind of the future
  • 01:02:28
    coming when you look at what's going on
  • 01:02:30
    with figure so it's a combination of a
  • 01:02:32
    hardware challenge getting the dexterity
  • 01:02:34
    of human hands for example but the
  • 01:02:37
    embodied intelligence is the
  • 01:02:38
    Breakthrough that's allowing these
  • 01:02:39
    humanoid robots to accelerate their
  • 01:02:42
    development and potential
  • 01:02:44
    impact so next up Elon Musk has
  • 01:02:47
    reignited his legal battle with OpenAI
  • 01:02:51
    and co-founders Sam Altman and Greg
  • 01:02:53
    Brockman he has filed a new lawsuit in
  • 01:02:55
    federal court against the
  • 01:03:00
    company this comes just weeks after he
  • 01:03:00
    withdrew his original suit and the core
  • 01:03:02
    of this complaint is the same as the
  • 01:03:04
    previous lawsuit he is alleging that
  • 01:03:06
    OpenAI Altman and Brockman betrayed the
  • 01:03:09
    company's original Mission of developing
  • 01:03:11
    AI for the public good instead
  • 01:03:13
    prioritizing commercial interests
  • 01:03:15
    particularly through their multi-billion
  • 01:03:18
    dollar partnership with Microsoft it
  • 01:03:20
    also claims that Altman and Brockman
  • 01:03:23
    intentionally misled musk and exploited
  • 01:03:26
    his humanitarian concerns about ai's
  • 01:03:29
    existential risks now okay with the
  • 01:03:31
    caveat that we are not lawyers the suit
  • 01:03:34
    also does introduce some new elements
  • 01:03:36
    including some type of accusations of
  • 01:03:40
    violating federal racketeering law on
  • 01:03:42
    the part of the company as well it
  • 01:03:45
    challenges OpenAI's contract with
  • 01:03:47
    Microsoft and argues that it should be
  • 01:03:49
    voided if open AI has achieved AGI
  • 01:03:52
    interestingly the suit asks the court to
  • 01:03:55
    decide
  • 01:03:56
    if OpenAI's latest systems have
  • 01:03:58
    achieved AGI open AI has for a while now
  • 01:04:01
    maintained that Musk's claims are
  • 01:04:03
    without merit and they pointed to and
  • 01:04:05
    published some previous emails with Musk
  • 01:04:08
    that suggested he had been pushing for
  • 01:04:10
    commercialization as well just like they
  • 01:04:12
    were before leaving in
  • 01:04:15
    2018 so Paul why is Elon Musk if we can
  • 01:04:19
    attempt to get inside his brain trying
  • 01:04:22
    to start this lawsuit back up again now
  • 01:04:25
    I don't know maybe he just wants to
  • 01:04:26
    force discovery and force them to
  • 01:04:29
    unveil a bunch of proprietary stuff I
  • 01:04:31
    don't know uh episode 86 on March 5th we
  • 01:04:34
    talk pretty extensively about this
  • 01:04:37
    lawsuit uh basic premise here is Musk
  • 01:03:40
    you know co-founds OpenAI puts in
  • 01:04:42
    the original Money uh as a
  • 01:04:44
    counterbalance to Google's pursuit of
  • 01:04:45
    AGI which he sees as a threat to
  • 01:04:47
    humanity you know remember Strawberry
  • 01:04:48
    Fields taking over the world kind of
  • 01:04:50
    stuff he leaves open AI unceremoniously
  • 01:04:53
    in 2018 after trying to roll OpenAI
  • 01:04:55
    into Tesla uh he forms xAI in 2023
  • 01:05:00
    early 2024 to pursue AGI himself through
  • 01:05:03
    the building of
  • 01:05:05
    Grok and he still has a major grudge
  • 01:05:08
    against Greg Sam and OpenAI and maybe
  • 01:05:11
    this is what Greg is doing maybe he's just
  • 01:05:12
    taking time off to deal with a
  • 01:05:14
    lawsuit I'm joking I have no idea that's
  • 01:05:17
    what Greg is doing but I you know
  • 01:05:19
    again it's fascinating because at
  • 01:05:21
    some point it may lead to some element
  • 01:05:24
    of Discovery and we may learn a bunch of
  • 01:05:26
    Insider stuff uh but up until then you
  • 01:05:29
    know I don't know it's just interesting
  • 01:05:30
    to note that it's back in the news again
  • 01:05:33
    especially with what we suspect are
  • 01:05:35
    impending releases I think this is
  • 01:05:37
    sometimes something Elon Musk also does
  • 01:05:39
    when something big is coming and he's
  • 01:05:41
    about to get perhaps uh overshadowed
Yeah, very possible.

[01:05:49] All right, so next up: a YouTube creator has filed a class action lawsuit against OpenAI, alleging that the company used millions of YouTube video transcripts to train its models without notifying or compensating content creators. The lawsuit was filed by David Millette in the US District Court for the Northern District of California. It claims that OpenAI violated copyright law and YouTube's terms of service by using all this data to improve its models, including ChatGPT. Millette is seeking a jury trial and $5 million in damages for all affected YouTube users and creators.

As longtime listeners of the podcast know, this comes just as the latest report of many other AI companies using YouTube videos for training without permission. We've talked about Runway, Anthropic, and Salesforce on previous episodes, and we now have a huge new exposé showing that Nvidia has been doing the same thing. 404 Media recently reported that leaked internal documents show Nvidia had been scraping massive amounts of video content from YouTube and other sources, like Netflix, to train its AI models. Nvidia is trying to create a video foundation model to power many different products, including things like a world generator and self-driving car systems. To create that model, they have apparently been downloading a ton of copyright-protected videos. And this wasn't just a few, and it wasn't by mistake: emails viewed by 404 Media show Nvidia project managers discussing using 20 to 30 virtual machines to download 80 years' worth of videos per day. Unfortunately, it also doesn't seem like this happened via some rogue elements in the company. Employees raised concerns about it, and they were told several times that the decision had executive approval.

So Paul, we just keep getting stories like this. It seems like basically every major AI player is involved. Could something like a class action lawsuit actually stop this behavior?

Yeah, I don't know, this one's pretty messy. There's a separate report from Proof News where they actually quoted internal material, and one vice president of research at Nvidia said, "we need one Sora-like model" (Sora being OpenAI's). In a matter of days, Nvidia assembled more than a hundred workers to help lay the training foundation for a similar state-of-the-art model. They began curating video datasets from around the internet, ranging in size from hundreds of clips to hundreds of millions, according to company Slack messages and internal documents, and staff quickly focused on YouTube. But then they asked whether they should go get all of Netflix and, if so, how to do that. I'll be interested to follow this one along. It's pretty wild that they've got all this internal documentation, but I'm not at all surprised. Like I said last time we talked about this, they are all doing this, and they're all doing it under the cover of "we know they did it, so we'll do it too." It's basically the only way to compete.
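To put the scale of that claim in perspective, here is a quick back-of-the-envelope calculation. The 20-to-30-VM count and the 80-years-per-day rate come from the reported emails; the midpoint and the per-VM split are our own arithmetic, not figures from the documents:

```python
# Reported figures: 20 to 30 virtual machines downloading
# roughly 80 years' worth of video per day.
HOURS_PER_YEAR = 365 * 24                       # 8,760 hours

video_hours_per_day = 80 * HOURS_PER_YEAR       # 700,800 hours of footage daily
vms = 25                                        # midpoint of the reported 20-30 range

per_vm_hours_per_day = video_hours_per_day / vms  # 28,032 hours per VM per day
realtime_multiple = per_vm_hours_per_day / 24     # footage-hours per wall-clock hour

print(f"{video_hours_per_day:,.0f} video-hours/day total")
print(f"{per_vm_hours_per_day:,.0f} video-hours/day per VM")
print(f"each VM ingests ~{realtime_multiple:,.0f}x real time")
```

At the midpoint of the reported range, each machine would be ingesting footage at roughly 1,168 times real time, around the clock.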
[01:09:00] So, in other OpenAI news (a big theme this week), OpenAI has apparently developed an effective tool to detect AI-generated text, particularly text from ChatGPT. However, it has not released this tool, according to an exclusive in the Wall Street Journal. The tool uses a watermarking technique to identify when ChatGPT has created text, and according to internal OpenAI documents viewed by the Journal, it is reportedly 99.9% accurate. The company has apparently been debating for about two years whether to even release it; the tool has been ready to release for over a year, and OpenAI has not let it outside the company. Why is that? Part of it seems to be that users could be turned off by such a feature: a survey that OpenAI conducted found that nearly 30% of ChatGPT users would use ChatGPT less if watermarking was implemented. An OpenAI spokesperson also said the company is concerned such a tool could disproportionately affect non-native English speakers.

Paul, what did you make of this story? It seems like pretty powerful technology to be keeping under wraps. Do you agree with OpenAI's logic here?

I don't know. It's hard to put yourself in their position, and these are big, difficult decisions. While it is 99.9% accurate, they have some concerns that the watermarks could be erased through simple techniques, like having Google Translate convert the text into another language and then back again. So it's that whole thing where the cheaters are going to stay ahead of the technology; this doesn't seem foolproof at this point. It also could give bad actors the ability to decipher the watermarking technique. And Google does have SynthID, and they haven't released it widely either. I did find one note interesting: John Schulman, who we talked about earlier as having left to go to Anthropic, was heavily involved in building this. In early 2023 he outlined the pros and cons of the tool in an internal shared Google Doc, and that's when OpenAI executives decided they would seek input from a range of people before acting further. So this has been going on for a while. I'm not sure we're going to get there, though; I've said before that we need a universal standard. We don't need just a watermarking tool for ChatGPT or just a watermarking tool for Google Gemini. We need an industry-standard tool if we're going to do it, and then we've got to do it the right way.
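OpenAI has not published how its watermark actually works. But text-watermarking schemes described publicly in the research literature typically bias generation toward a pseudorandomly chosen "green list" of tokens and then detect by counting how often consecutive tokens land on that list. Here is a toy sketch of that general idea only; the hashing scheme, vocabulary, candidate count, and thresholds are all invented for illustration and are not OpenAI's method:

```python
import hashlib
import random

def is_green(prev: str, tok: str) -> bool:
    """Pseudorandomly place ~half of all tokens on a 'green list' keyed by the previous token."""
    return hashlib.sha256(f"{prev}|{tok}".encode()).digest()[0] < 128

def green_fraction(tokens: list[str]) -> float:
    """Detector: fraction of consecutive token pairs that land on the green list."""
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(a, b) for a, b in pairs) / len(pairs)

rng = random.Random(0)
vocab = [f"w{i}" for i in range(1000)]

# Unwatermarked text: tokens chosen with no regard to the green list.
plain = [rng.choice(vocab) for _ in range(400)]

# Watermarked text: at each step, sample a few candidates and prefer a green one.
marked = [rng.choice(vocab)]
for _ in range(399):
    cands = [rng.choice(vocab) for _ in range(4)]
    greens = [c for c in cands if is_green(marked[-1], c)]
    marked.append(greens[0] if greens else cands[0])

print(f"plain:  {green_fraction(plain):.2f}")   # near 0.5, the no-watermark baseline
print(f"marked: {green_fraction(marked):.2f}")  # well above 0.5, so detectable
```

Even this toy version exhibits the weakness discussed above: paraphrasing or a translation round trip re-tokenizes the text, scrambling the consecutive-token pairs the detector counts.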
[01:11:39] So, in some other news, a new AI image generator is getting a ton of attention online. It's called Flux; technically FLUX.1 is the model, but everyone kind of refers to it as Flux. It's getting a ton of buzz because it's generating really high-quality results, and it is open source. Flux was developed by Black Forest Labs, whose founders previously worked at Stability AI, and it's kind of seen as a potential successor to Stable Diffusion. What sets it apart is that it has smaller models that can run on reasonably good hardware, including high-performance laptops. So as a hobbyist, developer, or small business, you can basically run this really sophisticated image model yourself. People are sharing lots of examples of not only stunning, hyper-realistic or artistic results, like Midjourney would produce, but also things like getting text right in the images. It really does seem to be pretty powerful, and it appears to be open source, which means you can go access it yourself through things like Poe, Hugging Face, and other hubs for open source AI models, run it on your own, and customize the code however you'd like. So Paul, I've seen some pretty cool demos of this; it seems like the real deal. It's interesting to have this capability open sourced, which, as we've talked about, could be a potential problem as people generate deepfakes and other problematic types of images. What did you make of this?

Yeah, the wildest demos I've seen are taking Flux images and then animating them with something like Gen-3 from Runway; turning them into 10-second videos is just crazy. It's not easily accessible, though, which is always so interesting about how these things get released: there's no app to go get, and I think you have to download something to be able to use it, so I haven't tested it myself, I'm just checking it out. But the continued rate of improvement of these image and video models is really hard to comprehend, and it just seems like there's no end in sight for how realistic the outputs are becoming.
[01:13:54] All right, and our last news topic today: California's proposed AI legislation, which we've talked about before, known as SB 1047, is facing criticism from a prominent figure in AI. That figure is Dr. Fei-Fei Li, often referred to as the godmother of AI, a researcher who has voiced strong concerns about the potential negative impacts of the bill. SB 1047 is short for the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which aims to essentially regulate large AI models in California. Dr. Li, however, argues that the bill itself could end up harming the entire US AI ecosystem. She outlines three big problems with how the bill is written today. First, it unduly punishes developers and potentially stifles innovation, because it starts to hold people liable for any misuse of their AI models, not just by them but by other people. Second, there is a mandated kill switch for AI programs that, she says, could devastate the open source community. And third, the bill could hamper public sector and academic AI research by limiting access to a lot of the models and data necessary to do that work.

So Paul, this is yet another prominent AI voice raising objections to this bill. We've talked about Andrew Ng, who has published quite extensively on X recently about the bill; however, other people, like Geoffrey Hinton, support it. Do you see this as potentially problematic for AI innovation? How are you looking at this?

Yeah, I do think it would impact innovation, and it would certainly impact open source. I don't know. The more time we spend in this space and the more I think about these things, the more I think we need something. I don't know if this is the right thing, but I think by 2025 we're going to enter an arena where it's very important that there are more guardrails in place than currently exist for these models. I don't know what the solution is, but we need something, and I think we need it sooner rather than later. So I think it's good that conversations like these are happening. I get that there are going to be people on both sides of this, like any important topic. I don't feel strongly one way or the other at the moment, but I feel like something needs to be done. We cannot wait until mid to late next year to have these conversations, so I hope something happens sooner rather than later.
[01:16:41] All right, Paul, that's a wild week: lots of tie-ins, lots of related topics. Thanks for connecting all the dots for us this week. Just a quick reminder to everyone: if you haven't checked out our newsletter yet at marketingaiinstitute.com/newsletter, it's called This Week in AI, and it covers a ton of other stories that we didn't get to in this episode, every single week, so you have a really nice, comprehensive brief on what's going on in the industry, all curated for you. And if your podcast platform or tool of choice allows you to leave a review, we would very much appreciate it if you could do that for us. Every review helps us improve the show, helps us get it into the hands of more people, and just helps us create a better product for you. So if you haven't done that, it's the most important thing you can do for us; please go ahead and drop us a review. Paul, thanks so much.

Yeah, thanks everyone for joining us. Again, a reminder: get those MAICON tickets, and keep an eye on the Strawberry Fields this week. It might be an interesting week in AI.

Thanks for listening to The AI Show. Visit marketingaiinstitute.com to continue your AI learning journey, and join more than 60,000 professionals and business leaders who have subscribed to the weekly newsletter, downloaded the AI blueprints, attended virtual and in-person events, taken our online AI courses, and engaged in the Slack community. Until next time, stay curious and explore AI.
Tags
  • Artificial Intelligence
  • OpenAI
  • Persuasion
  • AI Regulation
  • Labor Transformation
  • Robots
  • Lawsuit
  • AI Safety