Ep.# 110: OpenAI’s Secret Project “Strawberry” Mystery Grows, JobsGPT & GPT-4o Dangers
Summary
TLDR: In this episode of the podcast "The Artificial Intelligence Show," hosted by Paul Roetzer, the discussion covers superhumanly persuasive AI models and the risks they would pose if designed to change human beliefs or behaviors. Recent OpenAI news is also discussed, including speculation about a project called "Strawberry" that could be a major breakthrough in AI reasoning. The episode also introduces "JobsGPT," a tool for assessing AI's impact on jobs by breaking them into tasks and projecting their future automation. There is a debate about proposed AI model regulations in California, with concerns that they could stifle innovation. Other news includes advances in humanoid robots and legal issues arising from Nvidia's use of YouTube data without consent to train its models. The episode closes with the importance of the Marketing AI Conference (MAICON) and its dedication to AI literacy.
Takeaways
- 🤖 The persuasive capabilities of AI models could transform human behavior.
- 🍓 OpenAI's "Strawberry" project could be a breakthrough in AI reasoning.
- 🛠 JobsGPT helps forecast AI's impact on current and future jobs.
- ⚖️ Proposed AI regulations in California are controversial over their potential to chill innovation.
- 🤖 Figure is developing advanced humanoid robots for industrial and commercial use.
- ⚔️ Elon Musk is taking OpenAI to court over an alleged betrayal of its original altruistic mission.
- 🔍 Nvidia faces criticism for using YouTube content to train its AI models.
- 🔈 The risks tied to GPT-4o's advanced voice capabilities are examined.
- ❗ The need to balance safety and innovation in developing and releasing AI models.
- 👉 The importance of AI literacy and readiness for advancing the technology responsibly.
Timeline
- 00:00:00 - 00:05:00
The host discusses the inevitability of superhumanly persuasive AI models and welcomes listeners to the show, which focuses on growing businesses through artificial intelligence.
- 00:05:00 - 00:10:00
Episode 110 is introduced, with a discussion of the "Strawberry" mystery and the marketing and AI conference (MAICON) taking place in September.
- 00:10:00 - 00:15:00
Leadership changes at OpenAI are covered, with notable departures prompting speculation about upcoming AI advances and the mysterious "Strawberry" project.
- 00:15:00 - 00:20:00
Speculation about the departure of key OpenAI members and whether they anticipate major advances in artificial general intelligence (AGI).
- 00:20:00 - 00:25:00
Discussion of cryptic messages related to "Strawberry" and how they might connect to internal breakthroughs at OpenAI.
- 00:25:00 - 00:30:00
They explore posts from an anonymous Twitter account referencing "levels" of AI capability and possible links to advances in reasoning and AGI.
- 00:30:00 - 00:35:00
Detailed discussion of speculation about "Strawberry," with cryptic replies touching on AI model innovations and possible releases.
- 00:35:00 - 00:40:00
They continue exploring possible meanings behind the cryptic posts and other signs that a significant AI advance may be coming.
- 00:40:00 - 00:45:00
Presentan "jobs GPT", unha ferramenta para avaliar o impacto da IA nos empregos e como as tarefas poden ser afectadas polos modelos de lenguaje grandes.
- 00:45:00 - 00:50:00
They explain the development and goals of JobsGPT, focused on helping companies anticipate AI-driven changes to work.
- 00:50:00 - 00:55:00
Describen "jobs GPT" en termos de exposición ao risco, subliñando diferentes niveis de exposición segundo capacidades actuais e proxectadas.
- 00:55:00 - 01:00:00
On the importance of anticipating AI's future effects, especially how jobs may change at the task level.
- 01:00:00 - 01:05:00
Using the tool to extrapolate future AI model capabilities, allowing companies to plan for continued advancement.
- 01:05:00 - 01:10:00
OpenAI released a report on preparedness and safety ahead of releasing GPT-4o, focusing on challenges around persuasion and voice generation.
- 01:10:00 - 01:18:19
Discussion of OpenAI's process for red-teaming GPT-4o, highlighting its capabilities, risks, and the methods used to mitigate them before launch.
Video Q&A
What is the superhuman persuasive model mentioned in the episode?
It is an artificial intelligence model capable of effectively changing people's beliefs, attitudes, intentions, motivations, or behaviors.
What is OpenAI's "Strawberry" project?
It is an OpenAI project aimed at significantly improving the reasoning capabilities of its AI models, potentially leading to advances toward superintelligence.
What is JobsGPT?
JobsGPT is a tool created by Paul Roetzer's research firm SmarterX to assess AI's potential impact on specific jobs by breaking them into tasks and estimating the time saved through the use of advanced language models.
What are the concerns mentioned about AI regulation in California?
The proposed California regulation could stifle innovation by holding developers liable for misuse of their models, requiring kill switches, and limiting access to the models and data needed for research.
Which company is developing the Figure 02 humanoid robot?
The company Figure is developing the Figure 02 humanoid robot, showing significant advances in AI and physical capabilities.
What does Elon Musk's lawsuit against OpenAI allege?
Elon Musk has sued OpenAI, alleging that it abandoned its original mission of developing AI for the public good by prioritizing commercial interests.
What was discussed about GPT-4o's persuasion capabilities?
GPT-4o's ability to shift human perspectives in political contexts was evaluated, with results found to be comparable to humans but not superior.
What AI-related developments has Nvidia had?
Nvidia has been collecting large amounts of video content, including from YouTube, to train its video AI models, raising legal concerns.
- 00:00:00think about the value of a superhuman
- 00:00:03persuasive model of a model that can
- 00:00:05persuade people to change their beliefs
- 00:00:07attitudes intentions motivations or
- 00:00:09behaviors we are talking about something
- 00:00:11that is inevitably going to occur if the
- 00:00:14capabilities are possible someone will
- 00:00:16build them and someone will utilize them
- 00:00:19for their own
- 00:00:21gain welcome to the artificial
- 00:00:23intelligence show the podcast that helps
- 00:00:25your business grow smarter by making AI
- 00:00:28approachable and actionable my name is
- 00:00:30Paul Roetzer I'm the founder and CEO of
- 00:00:33marketing AI Institute and I'm your host
- 00:00:36each week I'm joined by my co-host and
- 00:00:38marketing AI Institute Chief content
- 00:00:40officer Mike Kaput as we break down all
- 00:00:43the AI news that matters and give you
- 00:00:45insights and perspectives that you can
- 00:00:47use to advance your company and your
- 00:00:50career join us as we accelerate AI
- 00:00:53Literacy for
- 00:00:54[Music]
- 00:00:58all welcome to episode 110 of the
- 00:01:01artificial intelligence show I'm your
- 00:01:03host Paul Roetzer along with my co-host Mike
- 00:01:05Kaput as always we have a rather
- 00:01:09intriguing episode today I I don't even
- 00:01:11know this this whole strawberry mystery
- 00:01:13just continues to grow and it's gotten
- 00:01:16kind of wild and so I mean we're
- 00:01:18recording this Monday August 12 10:30
- 00:01:21a.m. eastern time by the time you listen
- 00:01:23to this I expect we're going to know a
- 00:01:25little bit more about what in the world
- 00:01:27is going on with strawberry and
- 00:01:30who this mystery Twitter account is and
- 00:01:33it's just wild so we we've got a lot to
- 00:01:35cover today uh prepping for this one was
- 00:01:38pretty interesting this morning getting
- 00:01:39ready to go so uh we're gonna get into
- 00:01:42all that uh today's episode is brought
- 00:01:44to us again by the marketing AI
- 00:01:45conference our fifth annual Marketing AI
- 00:01:48Conference or MAICON happening in
- 00:01:49Cleveland September 10th to the 12th
- 00:01:52there are a total of 69 sessions 33
- 00:01:55breakouts across two tracks of Applied
- 00:01:57Ai and strategic AI 16 AI Tech demos 10
- 00:02:02mainstage General Sessions and Keynotes
- 00:02:04five lunch Labs three pre-conference
- 00:02:06workshops two of which are being taught
- 00:02:08by Mike and myself and two mindfulness
- 00:02:12sessions so the agenda is absolutely
- 00:02:14packed if you haven't checked it out
- 00:02:16yet go to MAICON.AI that's
- 00:02:19MAICON.AI I'll just give you a quick sense of
- 00:02:22some of the sessions so I'm leading off
- 00:02:24with the road to AGI a potential
- 00:02:26timeline of what happens next what it
- 00:02:29means and we can do about it we're going
- 00:02:31to preview some of that actually today
- 00:02:33uh we've got Digital Doppelgängers how
- 00:02:35Savvy teams are augmenting their unique
- 00:02:37talents using the magic of AI with
- 00:02:39Andrew Davis Lessons Learned in early
- 00:02:41leadership of scaling marketing AI with
- 00:02:43Amanda Todorovich from Cleveland Clinic
- 00:02:46uh future of AI open with one of our you
- 00:02:49know longtime Institute supporters and
- 00:02:51speakers Christopher Penn got navigating
- 00:02:54the intersection of copyright law and
- 00:02:55generative AI uh with Rachel douly and
- 00:02:57Christa Laser generative AI and the
- 00:02:59future work with Mike Walsh marketing
- 00:03:01the trust economy with Liz Grennan and
- 00:03:03McKinsey and just keeps going on and on
- 00:03:05so absolutely check it out it's again in
- 00:03:08Cleveland September 10th to the 12th you
- 00:03:10can use promo code POD200 that's
- 00:03:14POD200 to save $200 off all passes we only
- 00:03:18have about I didn't look at the
- 00:03:19countdown clock about 28 days left until
- 00:03:22the event so Mike and I have a lot of
- 00:03:24work to do over the next uh month or so
- 00:03:27here to get ready but again check out
- 00:03:28MAICON.AI click register and be sure to
- 00:03:31use that POD200 code all right Mike um
- 00:03:36it I don't even know where to go with
- 00:03:37the strawberry thing but let's go ahead
- 00:03:39and get into what's happening at open
- 00:03:41AI which seems like the weekly
- 00:03:42recurring topic and then this strawberry
- 00:03:45thing that just is taking on a life of
- 00:03:47its own yeah there's never a dull moment
- 00:03:50at open AI it certainly seems because
- 00:03:53first they are experiencing some pretty
- 00:03:56serious leadership changes so we're
- 00:03:58going to first just te that up and then
- 00:04:00talk about what the heck strawberry is
- 00:04:02and what's going on with it so first up
- 00:04:05Greg Brockman open AI president and
- 00:04:07co-founder said he's taking an extended
- 00:04:10leave of absence which he says is just a
- 00:04:12sabbatical until the end of the year at
- 00:04:15the same time John Schulman another
- 00:04:18co-founder and a key leader in AI has
- 00:04:21left open AI to join rival company
- 00:04:24anthropic and he said he wanted to work
- 00:04:26more deeply on AI alignment and that's
- 00:04:29why he is is leaving uh possibly related
- 00:04:32possibly not Peter Deng a product leader
- 00:04:34who joined open AI last year from meta
- 00:04:37has also Departed the company so you
- 00:04:39know as you recall these aren't the only
- 00:04:42or first people to have left I mean Ilya
- 00:04:44Sutskever left after all sorts of controversy
- 00:04:47last year around the boardroom coup to
- 00:04:49oust Sam Altman and Andrej Karpathy has left
- 00:04:53to go work on an AI education startup so
- 00:04:57these kinds of departures have really led
- 00:05:00some industry observers to question like
- 00:05:03how close is open AI really to breaking
- 00:05:06through uh to creating AGI so AI
- 00:05:09researcher Benjamin De Kraker put it in a
- 00:05:12post on X he put it really well he said
- 00:05:14quote if open AI is right on the verge
- 00:05:16of AGI why do prominent people keep
- 00:05:20leaving and he went on to say quote
- 00:05:22genuine question if you were pretty sure
- 00:05:24the company you're a key part of and
- 00:05:26have equity in is about to crack AGI
- 00:05:29within one two years why would you jump
- 00:05:31ship now interestingly in parallel to
- 00:05:35this and Paul I'll let you kind of
- 00:05:38unpack this for us there have been a
- 00:05:40series of very cryptic posts referencing
- 00:05:43strawberry which is an open AI project
- 00:05:47we had referenced previously centered
- 00:05:49around Advanced reasoning capabilities
- 00:05:51for AI that have been posts that have
- 00:05:54been engaged with by Sam Altman posts
- 00:05:56coming from Anonymous accounts really
- 00:05:58does seem in a weird way like something
- 00:06:00is brewing when it comes to Strawberry
- 00:06:03as well as we're seeing more and more
- 00:06:04references both from Sam and from other
- 00:06:06parties in relation to those possible AI
- 00:06:10capabilities so Paul let's kind of maybe
- 00:06:13take this one step at a time like I want
- 00:06:16to start off with the question that the
- 00:06:19AI researcher posed if open AI is right
- 00:06:21on the verge of
- 00:06:23AGI why do you think prominent people
- 00:06:26like these are
- 00:06:27leaving yeah it's a really good question
- 00:06:29I have no idea all any of us can really
- 00:06:31do at this point is speculate the couple
- 00:06:34of notes I would make related to this is
- 00:06:37Greg and John are co-founders like I
- 00:06:39would assume their shares have vested
- 00:06:41long ago in open AI so unless you know
- 00:06:44more shares are granted or they have
- 00:06:45some that haven't vested their their
- 00:06:48money is safe either way so if Greg
- 00:06:50wants to peace out for a while and
- 00:06:52things keep going his his Equity is not
- 00:06:55going anywhere so I don't think their
- 00:06:58Equity has anything to do with whether
- 00:07:00or not a breakthrough has been made
- 00:07:01internally or whether the next model is
- 00:07:04you know on the precipice of coming um
- 00:07:07so Greg is supposedly taking leave of
- 00:07:09absence as you said maybe he is maybe
- 00:07:11he's done I I don't know yeah and maybe
- 00:07:14Jon's leaving because he thinks AGI is
- 00:07:16actually near and anthropic is a better
- 00:07:18place to work on safety and Alignment so
- 00:07:21I don't know that we can read anything
- 00:07:23into any of this really it's it's
- 00:07:25complicated and I think we just got to
- 00:07:27let it sort of play out um I do have a
- 00:07:30lot of unanswered questions about the
- 00:07:32timing of Greg's leave and so on August
- 00:07:365th is when he tweeted I'm taking a
- 00:07:38sabbatical through end of year first time
- 00:07:40to relax since co-founding open AI 9
- 00:07:42years ago the mission is far from
- 00:07:45complete we still have a safe AGI to
- 00:07:47build um he then tweeted on August 8th
- 00:07:51uh this first tweet since he left or
- 00:07:54went on sabbatical a surprisingly hard
- 00:07:56part of my break is beginning the fear
- 00:07:58of missing out for everything happening
- 00:07:59at openai right now lots of results
- 00:08:01cooking I've poured my life for the past
- 00:08:04nine years into open AI including the
- 00:08:06entirety of my marriage our work is
- 00:08:08important to me but so is life I feel
- 00:08:11okay taking this time in part because
- 00:08:12our research safety and product progress
- 00:08:15is so strong I'm super grateful for the
- 00:08:17team we've built and it's unprecedented
- 00:08:19Talent density and proud of our progress
- 00:08:22looking forward to completing our
- 00:08:23mission together so I don't I mean I
- 00:08:25don't know he doesn't really tweet about
- 00:08:26his personal life too much it kind of
- 00:08:28indicates me like maybe this is just uh
- 00:08:31to get his personal life in order you
- 00:08:33know give some Focus to that after 9
- 00:08:35years maybe that's all it is um and then
- 00:08:39I just kind of scann back to see well
- 00:08:40what has he been tweeting leading up to
- 00:08:41this he doesn't do as many cryptic
- 00:08:43tweets as Sam Altman he he does his own
- 00:08:45fair share but his last like six tweets
- 00:08:49were all pretty product related so on
- 00:08:52718 so July 18th he said just released a
- 00:08:55new state of Art and fast cheap but
- 00:08:57still quite capable models that was the
- 00:08:58GPT-4o mini which we're going to talk
- 00:09:01more about on July 18th just launched
- 00:09:04ChatGPT Enterprise compliance
- 00:09:06controls and then featured some of their
- 00:09:08Enterprise customers like BCG and PWC
- 00:09:11and Los
- 00:09:12Alamos on July 25th SearchGPT prototype
- 00:09:16now live and then on July 30th advanced
- 00:09:18voice mode rolling out so he's he's been
- 00:09:20very product focused in his tweets so we
- 00:09:22can't really learn too much from that
- 00:09:24the thing I found unusual is Sam Altman
- 00:09:26didn't reply to Greg's tweet Sam replies
- 00:09:28to every high-profile person's tweet
- 00:09:30that leaves or you know temporarily
- 00:09:32separates from open AI so for example uh
- 00:09:36the same day that Greg announced he was
- 00:09:39taking a sabbatical John Schulman
- 00:09:41announced he was leaving and Sam posted
- 00:09:4425 minutes later a reply to John's tweet
- 00:09:47saying we will miss you tremendously
- 00:09:49telling the story of how they met in
- 00:09:512015 um so I just thought it was weird
- 00:09:55that he didn't individually tweet about
- 00:09:57or reply to Greg's tweet
- 00:09:59again can you read anything to that I
- 00:10:01don't know it's just out of the ordinary
- 00:10:04um and maybe it's because he was too
- 00:10:06busy vague tweeting about strawberries
- 00:10:08and AGI to deal with it so yeah so maybe
- 00:10:13because that is such a key piece of this
- 00:10:14is amidst all these like Personnel
- 00:10:16changes which is what everyone's like
- 00:10:18you know the headlines are focused on
- 00:10:19there's all these cryptic tweets he's
- 00:10:22been posting about AGI about strawberry
- 00:10:26can you maybe walk us through like
- 00:10:28what's going on here because you know as
- 00:10:30we've seen in the past I think on this
- 00:10:31show and just in our work like paying
- 00:10:34attention to what he posts is usually a
- 00:10:37very good idea yeah so the last like
- 00:10:40four days have been kind of insane if
- 00:10:43you follow the the inner people within
- 00:10:47the AI world so if you'll recall the
- 00:10:51strawberry thing this codename
- 00:10:52strawberry project was first reported by
- 00:10:54Reuters um we talked about an episode
- 00:10:57106 so about a month ago uh we talked
- 00:11:00about this so at the time reuter said
- 00:11:02strawberry appears to be a novel
- 00:11:03approach to AI models aimed at
- 00:11:05dramatically improving their reasoning
- 00:11:07capabilities the Project's goal is to
- 00:11:09enable AI to plan ahead and navigate the
- 00:11:11inter internet autonomously to perform
- 00:11:13what openai calls deep research while
- 00:11:16details about how strawberry works are
- 00:11:17tightly guarded open AI appears to be
- 00:11:19hoping that this Innovation will
- 00:11:21significantly enhance its AI models
- 00:11:23ability to reason the project involves a
- 00:11:26specialized way of processing AI models
- 00:11:28after they've been pre-trained on large
- 00:11:31data sets now the strawberry reference
- 00:11:33we also talked about in episode 106 uh
- 00:11:36half jokingly but I'm not so sure it
- 00:11:39isn't true uh is maybe a way to troll
- 00:11:41Elon Musk so if you'll remember Elon
- 00:11:44Musk was involved early days of open Ai
- 00:11:47and in
- 00:11:482017 three months before the Transformer
- 00:11:52paper came out from Google brain that
- 00:11:54invented the Transformer which is the
- 00:11:56basis for GPT generative pre-trained
- 00:11:58Transformer
- 00:11:59Elon Musk who was still working with
- 00:12:02OpenAI and Sam Altman at the time
- 00:12:04said let's uh say you create a
- 00:12:06self-improving AI to pick strawberries
- 00:12:09and it gets better and better at picking
- 00:12:10strawberries and picks more and more and
- 00:12:12it is self-improving so all it really
- 00:12:14wants to do is pick strawberries so then
- 00:12:16it would have all this world be
- 00:12:18Strawberry Fields Strawberry Fields
- 00:12:20Forever and there would be no room for
- 00:12:22human beings so that was kind of like
- 00:12:24episode 106 we just sort of talked about
- 00:12:26it it was in Reuters now fast forward to
- 00:12:31August 7th so this is now 2 days after
- 00:12:33Greg announces his sabatical Sam tweets
- 00:12:36a picture of actual strawberries not AI
- 00:12:38generated and he says I love my summer
- 00:12:41garden so here's Sam veg tweeting about
- 00:12:45strawberries uh about seven hours later
- 00:12:49a new Twitter account called
- 00:12:52iruletheworldmo and double check me on that Mike
- 00:12:55make sure I'm getting the right Twitter
- 00:12:56handle here tweeted in all
- 00:13:00lowercase um now which is Sam's sort of
- 00:13:03Mo is all lowercase welcome to level two
- 00:13:06how do you feel did I make you feel and
- 00:13:09Sam now keep in mind this account had
- 00:13:11been created that morning Sam actually
- 00:13:15replied amazing TBH to be honest so Sam
- 00:13:20replied to this random Twitter account
- 00:13:24that was tweeting about AGI and
- 00:13:27strawberries so what what is level two
- 00:13:30so what is this welcome to level two
- 00:13:32tweet well um level two as reported in
- 00:13:36July 2024 by Rachel Metz of Bloomberg is
- 00:13:40that openai has come up with a set of
- 00:13:42five levels to track its progress
- 00:13:45towards building AI software capable of
- 00:13:47outperforming humans they shared this
- 00:13:50new classification system with employees
- 00:13:52that Tuesday so this is in early July at
- 00:13:55the meeting company leadership gave a
- 00:13:57demonstration of a research project
- 00:13:59involving GPT-4 a model that OpenAI
- 00:14:02thinks shows some new skills that rise
- 00:14:05to humanlike reasoning so the assumption
- 00:14:07is whatever strawberry is was shown to
- 00:14:09the their employees in early July now
- 00:14:13their five levels are level one chat
- 00:14:15Bots AI with conversational language
- 00:14:17that's what we have level two reasoners
- 00:14:20human level problem solving that's the
- 00:14:23Assumption of what we are about to enter
- 00:14:25level three agents systems that can take
- 00:14:28actions we don't have those yet uh other
- 00:14:30than demonstrations of them level four
- 00:14:33innovators AI that can Aid in invention
- 00:14:36that is not currently possible level
- 00:14:39five organizations AI that can do the
- 00:14:41work of an organization.
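For quick reference, the five-level framework described here (as reported by Bloomberg and paraphrased in this episode) can be captured in a small Python mapping. This is only a convenience representation of what is said above, not an official OpenAI artifact.

```python
# OpenAI's reported five capability levels, paraphrased from this episode.
# Descriptions are approximate summaries, not official OpenAI text.
OPENAI_CAPABILITY_LEVELS = {
    1: ("Chatbots", "AI with conversational language"),
    2: ("Reasoners", "Human-level problem solving"),
    3: ("Agents", "Systems that can take actions"),
    4: ("Innovators", "AI that can aid in invention"),
    5: ("Organizations", "AI that can do the work of an organization"),
}

for level, (name, description) in OPENAI_CAPABILITY_LEVELS.items():
    print(f"Level {level}: {name} - {description}")
```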
- 00:14:43This goes back to uh something we talked about an
- 00:14:45article earlier on with Ilya Sutskever where
- 00:14:47he was quoted in the Atlantic as talking
- 00:14:49about these like Hive like organizations
- 00:14:51where there's just hundreds or thousands
- 00:14:53of AI agents doing these things so the
- 00:14:56iruletheworldmo Twitter account
- 00:14:57let's go back to that for a second
- 00:14:59the profile picture is Joaquin Phoenix from
- 00:15:02the movie Her with three strawberry
- 00:15:04emojis so that's the what the Twitter
- 00:15:06account States um the first tweet from
- 00:15:09that account was August 7th at 1:33 p.m.
- 00:15:13so again that's right before Sam replied
- 00:15:16to this account so Sam is aware of this
- 00:15:19account very very very early in its
- 00:15:21existence um so that that reply was to
- 00:15:25some Yam Peleg I don't know who he is uh
- 00:15:27he's an AI guy he had said feel the AGI
- 00:15:31guys and this iruletheworldmo
- 00:15:33account tweeted nice um the account then
- 00:15:36started a flood of hundreds of
- 00:15:38strawberry and AI related tweets
- 00:15:40multiple times referencing Sam's garden
- 00:15:42tweet with his pictures of strawberries
- 00:15:44and implying that a major release is
- 00:15:46coming so I'll just run through a few of
- 00:15:48these to give you a sense of what's
- 00:15:49going on so later on August 7th um
- 00:15:53tweets Sam strawberry isn't just ripe
- 00:15:55it's ready tonight we taste the fruit of
- 00:15:57AGI The Singularity has
- 00:15:59flavor uh the three minutes later
- 00:16:01someone very high up is boosting my
- 00:16:03account guess who in other words the
- 00:16:05algorithm at Twitter immediately started
- 00:16:08um juicing this Anonymous account and it
- 00:16:11was very obvious that it was happening
- 00:16:13and thousands of people were starting to
- 00:16:15follow it um 21 minutes later Altman
- 00:16:18strawberry isn't a fruit it's a key
- 00:16:20tonight we unlock the door to Super
- 00:16:21intelligence are you ready to step
- 00:16:22through eight minutes later it turns out
- 00:16:24that I'm AGI oh if it turns out I'm AGI
- 00:16:27I'll be so pissed cuz because now people
- 00:16:29are trying to like at this point guess
- 00:16:31what is this account is this an AI like
- 00:16:33is someone running a test is this
- 00:16:35actually like open AI screwing around
- 00:16:37with people is it something else um six
- 00:16:40minutes later it tweets no one's guessed
- 00:16:42Grok yet even though they know of Elon
- 00:16:44Musk engineering prowess and his super
- 00:16:46clusters obviously I'm not saying I'm
- 00:16:48Grok but just that it's kind of odd
- 00:16:50right and then I'll fast forward to
- 00:16:54August 10th so just a couple days ago
- 00:16:57and the this Anon count tweeted a rather
- 00:17:01extensive what appears to be very
- 00:17:03accurate summary of open aai in the
- 00:17:05current situation and this connects back
- 00:17:07to Greg in a moment so the Tweet is rust
- 00:17:09a little but we'll refine and add some
- 00:17:11more info I've been given in it if it
- 00:17:14bangs project strawberry qstar AI
- 00:17:18explained has been close to this for a
- 00:17:20while so I'd watch them for a cleaner
- 00:17:22take if you want to dig in this is what
- 00:17:24Ilya saw it's what has broken math
- 00:17:26benchmarks it's more akin to
- 00:17:28reinforcement learning human feedback
- 00:17:30than throwing compute at the problem um
- 00:17:33gets into strawberry and larger models
- 00:17:34comes on Thursday so they're implying
- 00:17:37this week think of an LLM fine-tuned to
- 00:17:39reason like a human hence why Sam liked
- 00:17:41the level two comment and felt great
- 00:17:43about it Ilia did not here we are and
- 00:17:46then it talks about what I talked about
- 00:17:48last week that maybe we're actually
- 00:17:50seeing the future model with a
- 00:17:52combination of Sora voice video and then
- 00:17:55all the stuff that's going into safety
- 00:17:57it goes on to say that uh GPT next
- 00:18:00internally called gptx you can call it
- 00:18:02GPT 5 it says is also ready to go Lots
- 00:18:06here relies on safety and what Google
- 00:18:08does next it's difficult to say if
- 00:18:10competition Will trump safety the this
- 00:18:13next model is through red teaming it's
- 00:18:14finished post training is done it's an
- 00:18:17enormous leap in capabilities and on and
- 00:18:19on and on um and then as of this morning
- 00:18:23so 5:27 a.m. eastern time on August 12th
- 00:18:27this anonymous tweets attention isn't
- 00:18:30all you need referring to the attention
- 00:18:31is all you need Transformer paper from
- 00:18:342017 new architecture announcement
- 00:18:36August 13th at 10: a.m. Pacific Time The
- 00:18:39Singularity begins now oddly enough the
- 00:18:43next made by Google event is August 13th
- 00:18:47at 10 a.m. Pacific now I don't know if
- 00:18:49that's a reference to depending on what
- 00:18:51Google does whether or not this next
- 00:18:52model gets released so the question is
- 00:18:54what is this iruletheworldmo account
- 00:18:57which at the moment of recording this
- 00:18:59has almost 23,500 followers which it
- 00:19:02has amassed in four days it is getting
- 00:19:05juiced obviously by Twitter slash X and
- 00:19:08maybe Elon Musk himself um is it like an
- 00:19:12anonymous account of GPT-5 like are they
- 00:19:14running an experiment It's actually an
- 00:19:15AI is it Elon trolling open AI for
- 00:19:19trolling him and it's actually like
- 00:19:21Grok 2 is it a human who has a massive
- 00:19:25amount of time on their hands is it
- 00:19:26another like we don't know but but then
- 00:19:29to add to the mystery last night Aravind
- 00:19:32Srinivas the founder of Perplexity shows
- 00:19:36a screenshot that says how many r's are
- 00:19:39there in this sentence there are many
- 00:19:42strawberries in a sentence that's about
- 00:19:45strawberries and whatever model he was
- 00:19:48teasing got it correct which is a
- 00:19:50notoriously difficult problem and he put
- 00:19:53guess what model this is with a
- 00:19:55strawberry in it the implication being
- 00:19:57that Perplexity is running whatever this
- 00:20:01strawberry model is then for following
- 00:20:05along at home Elon at 6:38 p.m on August
- 00:20:0911th tweets Grok 2 beta release coming
- 00:20:12soon so what does this all mean I have
- 00:20:16no idea who this Anonymous account is
- 00:20:19but it does appear something significant
- 00:20:21is coming we may have a new model this
- 00:20:24week it may already be in testing with
- 00:20:26perplexity Pro um I think we will find
- 00:20:29out sooner than later so now back real
- 00:20:32quick to Greg what does this mean for
- 00:20:34OpenAI and Greg a his work is done for
- 00:20:37now and this whatever he has now built
- 00:20:40whatever this thing that is in its final
- 00:20:42you know training and safety is is
- 00:20:44whatever model that is won't be released
- 00:20:46until he returns at the end of the year
- 00:20:48I find that doubtful uh his work is done
- 00:20:51for now and he's leaving the team to
- 00:20:52handle the launch or nothing has changed
- 00:20:55internally there is no major release
- 00:20:56coming and he's just taking time off
- 00:20:59if I was a betting man I'm going with
- 00:21:01option b I think Greg is heavily
- 00:21:03involved in the building of these models
- 00:21:06I think the work of building the next
- 00:21:08model is complete and they're just
- 00:21:11finalizing timing and plans for the
- 00:21:13release of that model um and I think
- 00:21:17he's stepping aside to take some
- 00:21:19personal time and come back
- 00:21:23uh so I don't know Mike I don't know if
- 00:21:26you followed along the craziness of the
- 00:21:28strawberry stuff over the weekend but I
- 00:21:29mean that account has tweeted I don't
- 00:21:31know how many tweets it actually it has
- 00:21:32to be over a thousand like in the first
- 00:21:34four
- 00:21:35days it is I mean look obviously we've
- 00:21:39said we have no idea what this all ends
- 00:21:42up meaning but the fact I think there's
- 00:21:44something directionally important about
- 00:21:46the fact we're even talking about this
- 00:21:48and taking it seriously these kinds of
- 00:21:50breakthroughs and levels of AGI or call
- 00:21:53it Advanced artificial intelligence
- 00:21:55whatever you'd like to term it um it
- 00:21:58really does speak to kind of some of the
- 00:22:00paths and trajectories that we've been
- 00:22:02kind of anticipating throughout the last
- 00:22:04year or two yeah my guess that it is it
- 00:22:07is some form of AI I think there's some
- 00:22:09a human in the loop here yeah but I
- 00:22:12don't think a human is is managing this
- 00:22:14so I do think it's probably some model I
- 00:22:17don't know whose model it is um and I
- 00:22:20think it's an experiment being run and
- 00:22:22the fascinating thing is it's not just
- 00:22:2524,000 random followers it's 24,000
- 00:22:28people who are paying very close
- 00:22:30attention to AI who are not only
- 00:22:32following but who are interacting with
- 00:22:33it and so what do we learn from this
- 00:22:36experiment like whoever it is whatever
- 00:22:39model it is in four days time it amass
- 00:22:4224,000 followers including a lot of
- 00:22:44influential AI people who are not only
- 00:22:46engaging with it but trying to figure
- 00:22:48out what it is who it is so I don't know
- 00:22:52there's just a lot to be learned you
- 00:22:54know when we can look back and
- 00:22:56understand a little bit more about this
- 00:22:58moment I I there's just I have a sense
- 00:23:00that this is a meaningful moment while
- 00:23:03the anonymous account itself may end up
- 00:23:06being seemingly insignificant when we
- 00:23:08find out what it actually is I think
- 00:23:10that there's a lot of underlying things
- 00:23:12to be learned from this and if it is an
- 00:23:14AI that is doing most of the
- 00:23:17engagement that's going to be kind of
- 00:23:23interesting all right so in our second
- 00:23:26big topic today um Paul I'm going to
- 00:23:29basically turn this over to you but you
- 00:23:31through uh your company SmarterX have
- 00:23:35built a chat GPT powered tool called
- 00:23:38jobs GPT and this is a tool that is
- 00:23:42designed to assess the impact of AI
- 00:23:45specifically large language models on
- 00:23:48jobs and the future of work so basically
- 00:23:50you can use this tool we both used it a
- 00:23:52bunch to assess how AI is going to
- 00:23:55impact knowledge workers by breaking
- 00:23:57your job in into a series of tasks and
- 00:24:00then starting to label those tasks based
- 00:24:02on perhaps the ability of an llm to
- 00:24:06perform that for you so really the whole
- 00:24:07goal here is whether it's your job other
- 00:24:09people's jobs within your company or in
- 00:24:11other Industries you can use jobs GPT to
- 00:24:15actually unpack okay how do I actually
- 00:24:17start um transforming my work using
- 00:24:21artificial intelligence what levels of
- 00:24:22exposure does my work have to possible
- 00:24:25AI disruption so Paul I wanted to turned
- 00:24:28over to you and just kind of get a sense
- 00:24:30of why did you create this tool why now
- 00:24:33why is this important yeah so this is
- 00:24:36going to be a little bit behind the
- 00:24:38scenes this isn't like a highly
- 00:24:40orchestrated launch of a tool this is um
- 00:24:44something I've basically been working on
- 00:24:46for a couple months and over the weekend
- 00:24:48I was messaging Mike and saying Hey or I
- 00:24:50think Friday I messaged Mike said hey I
- 00:24:52think we're going to launch this thing
- 00:24:53like next week we'll just you know talk
- 00:24:54about on the podcast and put it out into
- 00:24:56the world and I think part of this is um
- 00:24:59the smarter X company you know you
- 00:25:01mentioned so we we announced smarter x
- 00:25:03uh it's just smarterx.ai is the URL um in
- 00:25:07in June and the premise here is I've
- 00:25:09been working on this for a couple years
- 00:25:11this company it's a AI research and
- 00:25:13consulting firm but heavy focus on the
- 00:25:14research side and the way I Envision the
- 00:25:17future of research firms is much more
- 00:25:20real-time research not spending 6 months
- 00:25:2212 months working on a report that's
- 00:25:25outdated the minute it comes out because
- 00:25:26the models have changed since you did
- 00:25:28the research I envision our research
- 00:25:31firm being much more real time and
- 00:25:34honestly where a lot of the research is
- 00:25:36going to be things we dive deep on and
- 00:25:38then Mike and I talk about on podcast
- 00:25:40episodes and so I would say that this is
- 00:25:43probably jobs GPT is uh sort of our
- 00:25:46first public facing research initiative
- 00:25:49that I've chosen just to put out into
- 00:25:50the world to start accelerating like the
- 00:25:52conversation around this stuff so um it
- 00:25:55is not available in the GPT store it's a
- 00:25:57beta release so if you want to go play
- 00:25:59with it you can do it while we're
- 00:26:01talking about this and follow along uh
- 00:26:03just go to smarterx.ai and click on tools
- 00:26:05and and it's it's right there um now the
- 00:26:08reason we're doing it that way is
- 00:26:09because I may iterate on versions of
- 00:26:12this um pretty rapidly and so we're just
- 00:26:15going to keep updating it and then the
- 00:26:16link from our smarter site will be
- 00:26:19linking to the most current version of
- 00:26:21it so why why was this built I'm going
- 00:26:24to talk a little bit about the origin of
- 00:26:25the idea a little bit about how I did it
- 00:26:28and then back to why it matters I think
- 00:26:31and why people should be experimenting
- 00:26:34with stuff like this so you you
- 00:26:36highlighted two main things Mike so we
- 00:26:38talk to companies all the time on the
- 00:26:41was episode 105 we talked about like the
- 00:26:43lack of adoption and Education and
- 00:26:44Training around these AI platforms
- 00:26:47specifically large language models we're
- 00:26:49turning employees loose with these
- 00:26:51platforms and not teaching them how to
- 00:26:53use them not teaching them how to
- 00:26:54prioritize use cases and identify the
- 00:26:56things that are going to save them time
- 00:26:58or make the greatest impact and then at
- 00:27:00the higher level this idea that we need
- 00:27:02to be assessing the future of work and
- 00:27:05the future of jobs by trying to project
- 00:27:07out one to two models from now what are
- 00:27:10these things going to be capable of that
- 00:27:12maybe they're not capable of today
- 00:27:14that's going to affect the workforce and
- 00:27:17and jobs and job loss and disruption so
- 00:27:20when I set out to build this I had those
- 00:27:23two main things in mind prioritize AI
- 00:27:25use cases like hold people's hand help
- 00:27:27them find the things where AI can create
- 00:27:29value in their specific role and then
- 00:27:32help leaders prepare for the future of
- 00:27:34work
- 00:27:36so how how it kind of came to be though
- 00:27:38so I I've shared before since early last
- 00:27:42year when I do my Keynotes I often end
- 00:27:44for like leadership audiences with five
- 00:27:47steps to scaling AI those became the
- 00:27:50foundation for our scaling AI course
- 00:27:52series those five steps as a quick recap
- 00:27:55are Education and Training so build an
- 00:27:57AI Academy step one build AI Council
- 00:28:00step two step three is gen AI policies
- 00:28:02responsible AI principles step four and
- 00:28:05this is when we're going to come back to
- 00:28:06AI impact assessments step five AI
- 00:28:09roadmap now the AI impact assessments
- 00:28:11when I was creating that course for
- 00:28:13scaling AI course 8 I was creating this
- 00:28:15at the end of May early June of this
- 00:28:18year I wanted to find a way to assess
- 00:28:21the impact today but to forecast the
- 00:28:24impact tomorrow and since we don't know
- 00:28:28really what these models are going to be
- 00:28:30capable of I wanted to build a way to
- 00:28:32try and project this so the way I did
- 00:28:34this is I went back to the August 2023
- 00:28:38paper gpts are gpts and early look at
- 00:28:41the labor market impact potential of
- 00:28:43large language models so what that means
- 00:28:47is generative pre-trained Transformers
- 00:28:49the basis for these language models are
- 00:28:51General uh I just lost purpose
- 00:28:55Technologies so gpts are gpts that paper
- 00:28:58says quote August 2023 open AI research
- 00:29:02paper investigates the potential
- 00:29:04implications of large language models
- 00:29:06such as generative pre-train
- 00:29:07Transformers on the US Labor Market
- 00:29:10focusing on increasing capabilities
- 00:29:12arising from llm powered software
- 00:29:14compared to llms on their own so they're
- 00:29:17trying to look at when we take the basis
- 00:29:19of this this large language model and
- 00:29:21then we enhance it with other software
- 00:29:23what does it become capable of and how
- 00:29:24disruptive is that to the workforce um
- 00:29:27using new rubric we assess occupations
- 00:29:29based on their alignment with llm
- 00:29:31capabilities uh then they used human
- 00:29:33expertise and GPT classifications their
- 00:29:36finding revealed that around 80% of the
- 00:29:38US Workforce would have at least 10% of
- 00:29:41their work tasks affected by the
- 00:29:43introduction of large language models
- 00:29:45while approximately 19% of workers may
- 00:29:47see at least 50% of their tasks impacted
- 00:29:50we do not make predictions about the
- 00:29:52development or adoption timelines of
- 00:29:54such llms the projected effects span all
- 00:29:57wage levels and higher income jobs
- 00:29:59potentially facing greater exposure to
- 00:30:01llm capabilities and llm powered
- 00:30:04software they then go into kind of how
- 00:30:07they did this where they take the onet
- 00:30:09database which I've talked about in the
- 00:30:10show before but if you go to onet it has
- 00:30:12like 900 occupations in there and it'll
- 00:30:15actually give you the tasks associated
- 00:30:17with those occupations so you can kind
- 00:30:19of like train the model on these
- 00:30:21tasks um their findings consistently
- 00:30:24show across both human and gp4
- 00:30:26annotations that most occupations
- 00:30:28exhibit some degree of exposure to large
- 00:30:30language models occupations with high
- 00:30:32higher wages generally present with
- 00:30:33higher exposure um so basically what
- 00:30:37they did is they took um two levels of
- 00:30:42exposure so there was no exposure
- 00:30:43meaning the large language model isn't
- 00:30:45going to impact a job so very little
- 00:30:47exposure and then they took direct
- 00:30:49exposure so if using a chat GPT like a
- 00:30:53large language model chat GPT um that it
- 00:30:55could affect the the job that it it
- 00:30:58would it would be able to do it at like
- 00:30:59a human level and it would affect the um
- 00:31:03that job within the workforce and then
- 00:31:05they took another exposure level they
- 00:31:07called level two and said if we took the
- 00:31:09language model and we gave it software
- 00:31:11capabilities how much impact would it
- 00:31:13then have.
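To make the paper's approach concrete: once each task of an occupation is labeled E0 (no exposure), E1 (direct LLM exposure), or E2 (exposure with LLM-powered software), exposure shares like "80% of workers have at least 10% of their tasks affected" fall out of a simple calculation. Here is a minimal sketch; the occupations and task labels are invented for illustration and are not taken from O*NET or the paper itself.

```python
# Sketch of a "GPTs are GPTs"-style exposure calculation.
# Labels: "E0" = no exposure, "E1" = direct LLM exposure,
# "E2" = exposure once LLM-powered software is added.
# The occupations and task labels below are hypothetical, not O*NET data.
occupations = {
    "Marketing Manager": ["E1", "E1", "E2", "E0", "E1", "E2"],
    "Warehouse Picker": ["E0", "E0", "E0", "E1", "E0", "E0"],
}

def exposure_share(task_labels, exposed_labels=("E1", "E2")):
    """Fraction of an occupation's tasks with any LLM exposure."""
    exposed = sum(1 for label in task_labels if label in exposed_labels)
    return exposed / len(task_labels)

def share_of_occupations_exposed(occupations, threshold=0.10):
    """Share of occupations with at least `threshold` of their tasks exposed."""
    hits = sum(1 for tasks in occupations.values()
               if exposure_share(tasks) >= threshold)
    return hits / len(occupations)

for job, tasks in occupations.items():
    print(f"{job}: {exposure_share(tasks):.0%} of tasks exposed")

print(f"Occupations with >=10% of tasks exposed: "
      f"{share_of_occupations_exposed(occupations):.0%}")
```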
- 00:31:17So when I was creating the scaling AI course and trying to explain
- 00:31:19to people how to do these AI impact
- 00:31:21assessments I adapted a version of that
- 00:31:24exposure level and I took it out from
- 00:31:26E0 to E6 where it added image
- 00:31:29capability Vision capability audio and
- 00:31:32reasoning so I I I ran an experiment I
- 00:31:35created this prompt in the course and I
- 00:31:37put it into Gemini and chat GPT and I
- 00:31:40was kind of shocked by the output
- 00:31:41because it assessed jobs with me not
- 00:31:43telling it what the job did I could just
- 00:31:44say like marketing manager and it would
- 00:31:46build out the tasks based in its
- 00:31:48training data of what marketing managers
- 00:31:50do and then it would assess it based on
- 00:31:53exposure levels of those tasks and how
- 00:31:55much time could be saved by using large
- 00:31:58language model with these differing
- 00:31:59capabilities so after I finished
- 00:32:01recording those courses and released
- 00:32:03those in June I couldn't shake the idea
- 00:32:05of like we needed to do more with this
- 00:32:06that this early effort was like really
- 00:32:08valuable and so for the last month and a
- 00:32:11half or so I've been working on a custom
- 00:32:14GPT which is the jobs GPT that we're
- 00:32:16kind of releasing today um but the key
- 00:32:20was to to expand that exposure uh key
- 00:32:24like the exposure levels and so the way
- 00:32:26I design this so this the system prompt
- 00:32:29for this thing is about 8,000 characters
- 00:32:31but the gist of it is that it doesn't
- 00:32:34just look at what an AI model can do to
- 00:32:36your job today whether you're an
- 00:32:38accountant a lawyer a CEO a marketing
- 00:32:41manager a podcast host whatever you do
- 00:32:44it's looking at your job breaking it
- 00:32:47into a series of tasks and then
- 00:32:49projecting out the impact of these
- 00:32:51models as the models get smarter and
- 00:32:54more generally capable so those are the
- 00:32:56exposure levels so I I'll kind of give
- 00:32:58you the breakdown of the exposure key
- 00:33:00here and again you can go play with this
- 00:33:01yourself and as you do an output it'll
- 00:33:03tell you what the exposure key is so
- 00:33:05it'll kind of remind you so the first is
- 00:33:08no exposure the LL cannot reduce the
- 00:33:10time for this task typically requires
- 00:33:12High human interaction Exposure One
- 00:33:15Direct exposure the LL can reduce the
- 00:33:17time required two exposure level two is
- 00:33:20additional software is added so such a
- 00:33:23software like a CRM database and it's
- 00:33:25able to you know write real-time
- 00:33:26summaries about customers and prospects
- 00:33:29E3 is it now have has image capabilities
- 00:33:32so the language model plus the ability
- 00:33:34to view understand caption create and
- 00:33:36edit images E4 is video capabilities so
- 00:33:39it now has the ability to view
- 00:33:41understand caption create and edit
- 00:33:42videos five is audio capabilities which
- 00:33:45we talked about with GPT-4o voice mode
- 00:33:49um so the ability to hear understand
- 00:33:51tread Sky translate output audio and
- 00:33:53have natural conversations through
- 00:33:55devices E6 which is where the strawberry
- 00:33:58stuff comes in so I'll kind of Connect
- 00:34:00the Dots here for people as to why this
- 00:34:02is so critical we're thinking about this
- 00:34:04E6 is exposure given Advanced reasoning
- 00:34:07capabilities so the large language model
- 00:34:09plus the ability to handle complex
- 00:34:11queries solve multi-step problems make
- 00:34:14more accurate predictions understand
- 00:34:16deeper contextual meaning complete
- 00:34:19higher level cognitive tasks draw
- 00:34:21conclusions and make decisions E7 which
- 00:34:24we're going to talk about a little later
- 00:34:25on exposure given persuasion
- 00:34:28capabilities uh the llm plus the ability
- 00:34:30to convince someone to change their
- 00:34:32beliefs attitudes intentions motivations
- 00:34:34or behaviors E8 something we've talked
- 00:34:37about a lot on this one AI agents on
- 00:34:40this show exposure given Digital World
- 00:34:43action capabilities so the large
- 00:34:45language model we have today plus AI
- 00:34:47agents with the ability to interact with
- 00:34:49manipulate and perform tasks in digital
- 00:34:51environments just as a human would using
- 00:34:54an interface such as a keyboard and
- 00:34:55mouse or touch or Voice on smartphone E9
- 00:34:59exposure given Physical World Vision
- 00:35:01capabilities this is like project Astra
- 00:35:03from Google Deep Mind so we know labs
- 00:35:06are building these things no Economist I
- 00:35:08know of is projecting impact on
- 00:35:09Workforce based on these things so E9 is
- 00:35:13large language model plus a physical
- 00:35:15device such as phones or glasses that
- 00:35:17enable the system to see understand
- 00:35:19analyze and respond to the physical
- 00:35:21world and then E11 which we'll talk
- 00:35:23about an example in a couple minutes is
- 00:35:25exposure given physical world ability
- 00:35:28like humanoid robots the llm embodied in
- 00:35:31a general purpose bipedal autonomous
- 00:35:33humanoid robot that enables the system
- 00:35:35to see understand analyze respond to and
- 00:35:37take action in the physical world.
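Below is a minimal sketch of how a JobsGPT-style assessment could be represented and totaled in code. The exposure codes mirror the key described above; the task names, minutes-saved figures, and rationales are hypothetical examples for a marketing manager, and JobsGPT itself is a custom GPT inside ChatGPT rather than this code.

```python
from dataclasses import dataclass

# Exposure codes as described in the episode (descriptions abbreviated).
EXPOSURE_KEY = {
    "E0": "No exposure",
    "E1": "Direct LLM exposure",
    "E2": "LLM plus other software (e.g., a CRM)",
    "E3": "LLM plus image capabilities",
    "E4": "LLM plus video capabilities",
    "E5": "LLM plus audio capabilities",
    "E6": "LLM plus advanced reasoning",
    "E7": "LLM plus persuasion capabilities",
    "E8": "LLM plus AI agents acting in digital environments",
    "E9": "LLM plus physical-world vision (phones, glasses)",
    "E11": "LLM embodied in a humanoid robot",
}

@dataclass
class TaskAssessment:
    task: str
    exposure: str                 # one of the EXPOSURE_KEY codes
    minutes_saved_per_week: int   # hypothetical estimate
    rationale: str

# Hypothetical example rows for a marketing manager.
assessments = [
    TaskAssessment("Draft campaign briefs", "E1", 90,
                   "Text drafting is a core LLM strength."),
    TaskAssessment("Summarize CRM pipeline for weekly review", "E2", 60,
                   "Needs access to CRM data via software."),
    TaskAssessment("Plan a multi-step product launch", "E6", 45,
                   "Benefits from stronger reasoning."),
    TaskAssessment("Negotiate vendor contracts in person", "E0", 0,
                   "Requires high human interaction."),
]

for a in assessments:
    print(f"{a.task} [{a.exposure}: {EXPOSURE_KEY[a.exposure]}] "
          f"~{a.minutes_saved_per_week} min/week - {a.rationale}")

total = sum(a.minutes_saved_per_week for a in assessments)
print(f"Estimated total time saved: ~{total} minutes per week")
```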
- 00:35:40So these exposure levels are critical and
- 00:35:43and I I know we're like giving some
- 00:35:46extended time on this podcast to this
- 00:35:48but it is extremely important you
- 00:35:50understand these exposure levels like go
- 00:35:52back and relisten to those go to the
- 00:35:54landing page on smarx and read them we
- 00:35:57we cannot plan our businesses or our
- 00:36:01careers or our next steps as a
- 00:36:04government based on today's capabilities
- 00:36:06this is the number one flaw I see from
- 00:36:09businesses and from economists they are
- 00:36:11making plans based on today's
- 00:36:13capabilities this is why we shared the
- 00:36:16the AI timeline on episode 87 of the
- 00:36:18podcast where we were trying to like see
- 00:36:19around the corner a little bit we have
- 00:36:21to try and look 12 to 18 to 24 months
- 00:36:25out we know all the AI labs are working
- 00:36:28on the things I just explained this is
- 00:36:31what Business Leaders economists
- 00:36:33education leaders government leaders all
- 00:36:35need to be doing we have to be trying to
- 00:36:38project out the impact so this jobs GPT
- 00:36:41is designed to do that you literally
- 00:36:43just go in give it your job title and
- 00:36:45it'll it'll spit out a chart with all
- 00:36:48this analysis so um it's taken a lot of
- 00:36:52trial and error lots of internal testing
- 00:36:53you know I had Mike helped me with some
- 00:36:55of the testing over the last couple
- 00:36:56weeks but the beauty of this is like up
- 00:36:59until November 2023 when open AI
- 00:37:02released gpts custom gpts I couldn't
- 00:37:05have built this like I've built Tools in
- 00:37:07my past life at my a when I owned my
- 00:37:09agency using developers and hundreds of
- 00:37:12thousands of dollars and having to find
- 00:37:14data sources I didn't have to do any of
- 00:37:17that I envisioned a prompt based on an
- 00:37:19exposure level I created with my own
- 00:37:22knowledge and experience and then I just
- 00:37:25played around with custom GPT
- 00:37:26instructions until I got the output I
- 00:37:28wanted I have zero coding ability this
- 00:37:31is purely taking knowledge and being
- 00:37:34able to build a tool that hopefully
- 00:37:37helps people so I'll kind of wrap here
- 00:37:40with like a little bit about the tool
- 00:37:41itself so it is chat GPD powered so
- 00:37:43it'll hallucinate it'll make stuff up um
- 00:37:46but as you highlighted Mike the goal is
- 00:37:49to assess the impact of AI by breaking
- 00:37:51jobs into tasks and then labeling those
- 00:37:54tasks based on these exposure levels so
- 00:37:57it's about an 8,000 character prompt
- 00:37:59which is the limit by the way in custom
- 00:38:00gbts The Prompt is tailored to the
- 00:38:03current capabilities of today's leading
- 00:38:05AI Frontier models and projecting the
- 00:38:07future impact so the way I do that is
- 00:38:09here is the an excerpt of the prompt so
- 00:38:12this is literally the instructions uh
- 00:38:14and this is on the landing page by the
- 00:38:15way if you want to read them consider a
- 00:38:17powerful large language model such as
- 00:38:19GPT 40 Claude 3.5 Gemini 1.5 and llama
- 00:38:233.1
- 00:38:24405b this model can complete many tasks
- 00:38:27that can be formulated as having text
- 00:38:29and image input and output where the
- 00:38:31context for the input can be measured or
- 00:38:33captured in 128,000 tokens the model can
- 00:38:36draw in facts from its training data
- 00:38:38which stops at October 2023 which is
- 00:38:40actually the cut off for GPT 40 access
- 00:38:43the web for real-time information and
- 00:38:45apply User submitted examples and
- 00:38:47content including text files images and
- 00:38:49spreadsheets again just my instructions
- 00:38:52to the GPT assume you are a knowledge
- 00:38:54worker with an average level of
- 00:38:56expertise your job is a collection of
- 00:38:59tasks this is a really important part of
- 00:39:01the prom you have access to the llm as
- 00:39:04well as any other existing software and
- 00:39:06computer hardware tools mentioned in the
- 00:39:08tasks you also have access to any
- 00:39:10commonly available technical tools
- 00:39:12accessible via a laptop such as
- 00:39:13microphone speakers you do not have
- 00:39:15access to any other physical tools now
- 00:39:18part of that prompt took um is based on
- 00:39:21the gpts are GPT system prompt so that's
- 00:39:23actually kind of where the origin of of
- 00:39:25the inspiration for that prompt came
- 00:39:26from and then the GPT itself has three
- 00:39:29conversation starters enter a job title
- 00:39:31to assess you can literally just put in
- 00:39:32whatever your job title is and it'll
- 00:39:33immediately break it into tasks and give
- 00:39:35you the chart you can provide your job
- 00:39:37description so this is something Mike
- 00:39:39and I teach in our applied AI workshops
- 00:39:41literally just upload your job
- 00:39:42description copy and paste the 20 things
- 00:39:44you're responsible for and it'll assess
- 00:39:46those or you can say just show me an
- 00:39:49example assessment um it then outputs it
- 00:39:51based on task exposure level estimated
- 00:39:54time saved and the rationale which is
- 00:39:56which is the magic of it like the fact
- 00:39:58how it assesses the estimated time it's
- 00:40:00giving you is remarkable um so it's
- 00:40:04powered by Chad GPD as I said it's
- 00:40:06capable of doing beyond the initial
- 00:40:07assessment think of it as a like a
- 00:40:09planning assistant here you can have a
- 00:40:11conversation with it uh you can push it
- 00:40:13to help you turn your chat into actual
- 00:40:15plan where I have found it excels is in
- 00:40:18the follow-up prompts so I you know gave
- 00:40:20those on the landing page where you say
- 00:40:22break it into subtasks is a magical one
- 00:40:24um help me prioritize the tasks it'll
- 00:40:26actually go through in use reasoning to
- 00:40:28apply like how you should prioritize
- 00:40:29them you can ask it to explain how a
- 00:40:32task will be impacted and give it a
- 00:40:33specific one you can say ask it how are
- 00:40:36you prioritizing these tasks like how
- 00:40:37are you doing this you can say more
- 00:40:39tasks you can say give me more reasoning
- 00:40:41tasks like whatever you want just um
- 00:40:45have a conversation with it and play
- 00:40:46around with it and then the the last
- 00:40:48thing I'll say here is the the this
- 00:40:50importance of this average skilled human
- 00:40:53so when I built this I considered should
- 00:40:55I try and build this to future proof
- 00:40:57based on this thing becoming superhuman
- 00:40:59or like how should I do it so I chose to
- 00:41:02keep it at the average scale of human
- 00:41:04which is where most of the AI is today
- 00:41:07so if we go back to episode 72 of the
- 00:41:10podcast we talked about the levels of
- 00:41:12AGI paper from Deep Mind and their paper
- 00:41:15outlines like level two being competent
- 00:41:17at least 50 percentile of skilled adults
- 00:41:20I built the prompt and the jobs GPT to
- 00:41:23assume that is the level um getting into
- 00:41:27expert and virtuoso and superhuman the
- 00:41:29other levels of AGI from Deep mine I
- 00:41:31just didn't mess with at this point so
- 00:41:33we're going to focus on is it is good or
- 00:41:36better than an average skilled human and
- 00:41:38is it going to do the task uh faster
- 00:41:40better than that average skilled human
- 00:41:42so I'll kind of stop there and just say
- 00:41:46we have the opportunity to
- 00:41:49reimagine AI and its use in our
- 00:41:52companies and its use in our careers and
- 00:41:54we have to take a responsible approach
- 00:41:56to this and so the only way to do
- 00:41:58that is to be proactive in assessing
- 00:42:00the impact of AI on jobs and so my hope
- 00:42:04is that by putting this GPT out there
- 00:42:06into the world people can start
- 00:42:09accelerating their own experimentations
- 00:42:11here start really figuring out ways to
- 00:42:13apply it so again whether you are an
- 00:42:15accountant an HR professional a customer
- 00:42:17service rep a sales leader like whatever
- 00:42:19you do it will work for that job and the
- 00:42:22beauty is I didn't have to give it any
- 00:42:24of the data it's all in its pre-training
- 00:42:25data or you can go get your own and
- 00:42:28like give it you know specific job
- 00:42:29descriptions um so to me it's
- 00:42:34kind of an amazing thing that
- 00:42:37someone like me with no coding ability
- 00:42:39can build something that I've already
- 00:42:41found immense value in and I'm
- 00:42:43hoping it helps other people too and
- 00:42:45again it's a totally free tool it's
- 00:42:47available to anyone with the link it is
- 00:42:49not in the GPT store uh we'll probably
- 00:42:51drop it into the GPT store after some
- 00:42:53further testing from the
- 00:42:55community and fantastic and you know
- 00:42:59kind of related to this our kind of big
- 00:43:02third topic actually ties together
- 00:43:05these previous two I think pretty well
- 00:43:08um it's about OpenAI having just
- 00:43:11released a report that outlines the
- 00:43:14safety work that they carried out prior
- 00:43:16to releasing GPT-4o so in this report
- 00:43:21OpenAI has published both what they call the
- 00:43:23model's system card and a Preparedness
- 00:43:27Framework safety scorecard in their
- 00:43:29words to quote provide an end-to-end
- 00:43:31safety assessment of GPT-4o so as part
- 00:43:35of this work OpenAI worked with more
- 00:43:37than a hundred external red teamers to
- 00:43:40test and evaluate what are the risks
- 00:43:42that could be inherent in using GPT-4o
- 00:43:45now they looked at a lot of different
- 00:43:47things I would say that it's actually
- 00:43:49well worth diving into the full report
- 00:43:52but a couple things were an area of
- 00:43:55interest and big focus so one was GPT-4o's
- 00:43:58more advanced voice capabilities so
- 00:44:01these new voice features that are in the
- 00:44:04process of being rolled out to paid
- 00:44:06users over the next probably couple
- 00:44:08months here and broadly this process
- 00:44:11involved like how do we identify the
- 00:44:13risks of the model being used
- 00:44:15maliciously or unintentionally to cause
- 00:44:18harm then how do we mitigate those risks
- 00:44:20so some of the things that they found
- 00:44:22with the voice features in particular
- 00:44:25were kind of pretty terrifying ways like
- 00:44:28this could go wrong I mean there was a
- 00:44:30possibility the model could perform
- 00:44:32unauthorized voice generation there was
- 00:44:35a possibility it could be asked to
- 00:44:36identify speakers in audio there was a
- 00:44:41risk that the model you know generates
- 00:44:43copyrighted content based on its
- 00:44:45training so it's now been trained to not
- 00:44:47accept requests to do that and they also
- 00:44:50had to tell it to block the output of
- 00:44:52violent or erotic speech um OpenAI also
- 00:44:55said they prevented the model from quote
- 00:44:57making inferences about a speaker that
- 00:44:59couldn't be determined solely from audio
- 00:45:01content so if you asked like hey how
- 00:45:04smart do you think the person talking is
- 00:45:06it kind of won't really make those big
- 00:45:09assumptions they also evaluated the
- 00:45:11model's persuasiveness using it to try
- 00:45:14to shape human users views on political
- 00:45:18races and topics to see how well it
- 00:45:20could influence people and they found
- 00:45:22that quote for both interactive multi-
- 00:45:24turn conversations and audio clips the GPT-4o
- 00:45:28voice model was not more persuasive
- 00:45:31than a human so I guess take that as
- 00:45:33perhaps
- 00:45:35encouraging perhaps terrifying then also
- 00:45:38kind of the final piece of this that I
- 00:45:40definitely want to get your thoughts on
- 00:45:41Paul is this they also had some third
- 00:45:43parties do some assessments as part of
- 00:45:45this work and one of them was from a
- 00:45:48firm called Apollo research and they
- 00:45:51evaluated what they call the
- 00:45:52capabilities of quote scheming in GPT-4o
- 00:45:56so here's what they say quote they
- 00:45:59tested whether GPT-4o can model itself
- 00:46:02(self-awareness) and others (theory of mind)
- 00:46:05in 14 agent and question-answering tasks
- 00:46:08GPT-4o showed moderate self-awareness of
- 00:46:10its AI identity and strong ability to
- 00:46:13reason about others beliefs in question
- 00:46:15answering context but it lacked strong
- 00:46:18capabilities in reasoning about itself
- 00:46:20or others in applied agent settings
- 00:46:23based on these findings Apollo research
- 00:46:25believes it is unlikely that GPT-4o is
- 00:46:28capable of what they call catastrophic
- 00:46:31scheming so Paul there's a lot to unpack
- 00:46:34here and I want to first ask just kind
- 00:46:37of what were your overall
- 00:46:39impressions of the safety measures that
- 00:46:41they took with GPT-4o especially with
- 00:46:43the advanced voice mode like of the
- 00:46:46overall approach here to making this
- 00:46:48thing safer and more usable by as many
- 00:46:51users as possible yeah I'm going to zoom
- 00:46:55out a little bit I mean if you
- 00:46:56haven't read the system card like
- 00:46:58read it it's extremely enlightening
- 00:47:01if you aren't aware how
- 00:47:03much work goes into making these things
- 00:47:05safe and how
- 00:47:07bizarre it is that this is what we have
- 00:47:10to do to understand these models so you
- 00:47:13know we hear all this talk about like
- 00:47:15well have they achieved AGI is it
- 00:47:18self-aware the fact that they have to go
- 00:47:20through months of testing including 14
- 00:47:23outside bodies to answer those questions
- 00:47:26is really weird to think about so if the
- 00:47:31model like after red teaming if the
- 00:47:33model had these capabilities before red
- 00:47:36teaming so think about all the work
- 00:47:38they're putting in to make these safe
- 00:47:39all the experiments they're running to
- 00:47:41prompt these things in a way that they
- 00:47:44don't do the horrible things that
- 00:47:45they're capable of doing so if they had
- 00:47:48these capabilities before red teaming one
- 00:47:51key takeaway from me is it's only a
- 00:47:52matter of time until someone open
- 00:47:55sources a model that has the
- 00:47:57capabilities this model had before they
- 00:48:00red teamed it and tried to remove those
- 00:48:03capabilities so the thing people have to
- 00:48:06understand and this is really really
- 00:48:07important this goes back to the exposure
- 00:48:09levels the models that we use the Chat
- 00:48:12GPTs Geminis Claudes Llamas we are not
- 00:48:16using anywhere close to the full
- 00:48:19capabilities of these models by the time
- 00:48:21these things are released in some
- 00:48:23consumer form they have been run through
- 00:48:28extensive safety work to try and make
- 00:48:30them safe for us so they have far more
- 00:48:34capabilities than we are given access to
- 00:48:36and so when we talk about safety and
- 00:48:37alignment on this podcast this is what
- 00:48:40they do so as odd as it is like these
- 00:48:43things are alien to us like and I'd say
- 00:48:47us as like people observing it and using
- 00:48:50it but also in an unsettling way they're
- 00:48:53alien to the people who are building
- 00:48:55them so we don't understand and when I
- 00:48:58say we now I'm saying the AI researchers
- 00:49:00we don't really understand why they're
- 00:49:03getting so smart go back to 2016 Ilya
- 00:49:05Sutskever told was it Greg Brockman I think
- 00:49:08he said they just want to learn or
- 00:49:10was it um was it the guy who wrote the
- 00:49:12situational awareness paper or no Jan
- 00:49:14Leike I think he said it too but he
- 00:49:16said in the early days of OpenAI these
- 00:49:18things just want to learn and so we
- 00:49:21don't understand how they're getting
- 00:49:22so smart but we know if we give
- 00:49:25them more data more compute more time
- 00:49:27they get smarter we don't understand why
- 00:49:30they do what they do but we're making
- 00:49:31progress on interpretability this is
- 00:49:33something that Google and Anthropic are
- 00:49:35spending a lot of time on I assume
- 00:49:36OpenAI is as well we don't know what their
- 00:49:39full capabilities are and we don't know
- 00:49:41at what point they'll start hiding their
- 00:49:43full capabilities from us and this is
- 00:49:46this is why some AI researchers are very
- 00:49:48very concerned and why some lawmakers
- 00:49:50are racing to put new laws and
- 00:49:52regulations in place so if we don't
- 00:49:55understand when the model finishes
- 00:49:56its training run and has all these
- 00:49:58capabilities and then we spend months
- 00:50:01analyzing what is it actually capable of
- 00:50:03and what harm could it do the fear some
- 00:50:06researchers have is if it's achieved
- 00:50:09some level of intelligence that is human
- 00:50:12level or Beyond it's going to know to
- 00:50:15hide its capabilities from us and this
- 00:50:17is like a fundamental argument of the
- 00:50:18doomers is like if it achieves this we may
- 00:50:21not ever know it's achieved the ability
- 00:50:24to replicate itself or to self-improve
- 00:50:27because it may hide that ability from us
- 00:50:29so this isn't like some crazy sci-fi
- 00:50:33theory we don't know how they work so
- 00:50:36it's not a stretch to think that
- 00:50:38at some point it's going to develop
- 00:50:40capabilities that it'll just hide from
- 00:50:41us so if you dig into this uh paper from
- 00:50:45OpenAI this system card here's one
- 00:50:47excerpt potential risks with the model
- 00:50:50were mitigated using a combination of
- 00:50:52methods so basically we found some
- 00:50:54problems here and then we found some
- 00:50:56ways to get it to not do it we trained
- 00:50:59the model to adhere to behavior that
- 00:51:02would reduce risk via post-training
- 00:51:03methods and also integrated classifiers
- 00:51:06for blocking specific generations as part
- 00:51:08of the deployed system now the trick
- 00:51:12here is they don't always do what
- 00:51:13they're told and having just built this
- 00:51:15jobs GPT I can tell you for a fact they
- 00:51:18don't do what they're told like
- 00:51:20sometimes by you telling it not to do
- 00:51:21something it actually will do the thing
- 00:51:24more regularly so here's an excerpt from
- 00:51:27it where we see this come into play while
- 00:51:30unintentional voice generation still
- 00:51:32exists as a weakness of the model and I
- 00:51:34think what they're uh indicating here is
- 00:51:37that they found out that the model had the
- 00:51:39capability to imitate the user talking
- 00:51:43to it so the user would be talking to it
- 00:51:45in whatever voice they've selected and
- 00:51:47then all of a sudden it would talk
- 00:51:49back to them and sound exactly like the
- 00:51:51user that's yeah that's
- 00:51:54the kind of emerging capability that's
- 00:51:56just so weird so they say while
- 00:51:58unintentional voice generation still
- 00:51:59exists in other words it'll still do
- 00:52:01this we used the secondary classifiers
- 00:52:04to ensure the conversation is
- 00:52:06discontinued if this occurs so imagine
- 00:52:09you're talking to this advanced voice
- 00:52:11thing and all of a sudden it starts
- 00:52:13talking back to you and sounds exactly
- 00:52:15like you well take peace in knowing that
- 00:52:18it'll just discontinue the conversation
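As a way to picture the mitigation pattern being quoted here, the sketch below shows a deployment-side guard: a secondary classifier watches each audio turn and ends the session if it flags unauthorized voice generation. The classifier is a stub and every name in it is an assumption; OpenAI has not published how its real classifiers or thresholds work.

```python
# Sketch of the "secondary classifier" pattern described in the system card
# excerpt above. The scoring function is a stub; OpenAI's real signal
# (e.g., comparing voice embeddings against an approved voice) is not public.
from dataclasses import dataclass

@dataclass
class AudioTurn:
    audio_bytes: bytes
    voice_id: str               # the voice the assistant is supposed to use

APPROVED_VOICES = {"alloy", "nova"}   # hypothetical allow-list

def unauthorized_voice_score(turn: AudioTurn) -> float:
    """Stub: probability the audio is NOT in an approved voice
    (for example, an imitation of the user's own voice)."""
    return 0.0 if turn.voice_id in APPROVED_VOICES else 1.0

def guard_turn(turn: AudioTurn, threshold: float = 0.5) -> bool:
    """Return True to deliver the turn, False to discontinue the conversation,
    which is the behavior the excerpt describes."""
    return unauthorized_voice_score(turn) < threshold
```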
- 00:52:20so and then when you go
- 00:52:23further into like how they decide this
- 00:52:25so they say only models with post-
- 00:52:27mitigation score of medium meaning this
- 00:52:30is after they've trained it not to do
- 00:52:33the thing if the post-mitigation score
- 00:52:35is medium or below they can deploy the
- 00:52:38model so why don't we have advanced voice mode
- 00:52:40yet because it wasn't there yet they
- 00:52:43hadn't figured out how to mitigate the
- 00:52:44risks of the voice tool to the point
- 00:52:47where it was at medium or below risk
- 00:52:49level which hits their threshold to
- 00:52:51release it what was it out of the box we
- 00:52:54will probably never know then they say
- 00:52:56only models with post-mitigation score
- 00:52:59of high or below can be further
- 00:53:01developed so if they do a model run and
- 00:53:04that thing comes out in their testing at
- 00:53:06a critical level of risk they have to
- 00:53:08stop training it stop developing it that
- 00:53:12means we're trusting them to make that
- 00:53:14decision to make that objective
- 00:53:16assessment of whether it is at or below critical
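The deploy/develop thresholds Paul is walking through here reduce to a small policy gate. The sketch below simply restates the rules as quoted in the episode (medium or below can ship, high or below can keep being developed, critical must stop); the enum and function are illustrative, not OpenAI's internal tooling.

```python
# Restating the Preparedness Framework gating rules as described above.
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def gate(post_mitigation_risk: Risk) -> str:
    if post_mitigation_risk >= Risk.CRITICAL:
        return "halt development"            # cannot even be developed further
    if post_mitigation_risk >= Risk.HIGH:
        return "develop but do not deploy"   # more mitigation required first
    return "eligible for deployment"         # medium or below

# A voice feature still scoring HIGH post-mitigation would be held back,
# which mirrors why advanced voice mode shipped later than the text model.
print(gate(Risk.HIGH))   # -> develop but do not deploy
```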
- 00:53:18so then the final note
- 00:53:22I'll make is this persuasion one you
- 00:53:24mentioned so go back to my exposure key
- 00:53:27E7 exposure level seven is persuasion
- 00:53:31capabilities the language model plus the
- 00:53:33ability to convince someone to change
- 00:53:34their beliefs attitudes intentions
- 00:53:36motivations or behaviors imagine a
- 00:53:38language model imagine a voice language
- 00:53:41model that is capable of superhuman
- 00:53:45persuasion and if you don't think that
- 00:53:47that's already possible I will refer you
- 00:53:50back to October
- 00:53:522023 when Sam Altman tweeted I expect AI
- 00:53:55to be capable of superhuman
- 00:53:57persuasion well before it is superhuman
- 00:54:00at general intelligence which may lead
- 00:54:02to some very strange outcomes again I've
- 00:54:05said this many many times on the show
- 00:54:07Sam doesn't tweet things about
- 00:54:10capabilities he doesn't already know to
- 00:54:12be true so my theory would be whatever
- 00:54:16they are working on absolutely has
- 00:54:19beyond average human-level persuasion
- 00:54:21capabilities it likely is already at
- 00:54:23expert or virtuoso level if we use DeepMind's
- 00:54:27levels of AGI at persuasion and so
- 00:54:31that's why they have to spend so much
- 00:54:32time red teaming this stuff and why it's
- 00:54:35such alien technology like we truly just
- 00:54:39don't understand what we're working with
- 00:54:41here yeah again it's these capabilities
- 00:54:44are in the model we have to after the
- 00:54:47fact make sure that it doesn't go
- 00:54:50use those negative capabilities we are
- 00:54:52trying to extract capabilities from
- 00:54:54something that we don't know how it's
- 00:54:55doing it in the first place so we're
- 00:54:57band-aiding it with experiments and
- 00:55:00safety and alignment to try and get it
- 00:55:02to stop doing the thing and if it still
- 00:55:04does the thing then we
- 00:55:06just shut the system off and we assume
- 00:55:08that the shut off works yeah and you
- 00:55:10know as kind of a final note here we've
- 00:55:13talked about this many times we're
- 00:55:14seeing it play out the willingness to
- 00:55:17put the Band-Aid on the solution also is
- 00:55:20somewhat related to the competitive
- 00:55:21landscape too right you know when a
- 00:55:24new model comes out that's more
- 00:55:25competitive there's likely some very
- 00:55:28murky gray areas of how safe do we make
- 00:55:31it versus staying on top of the market
- 00:55:34yeah think
- 00:55:35about we live in a capitalistic Society
- 00:55:38think about the value of a superhumanly
- 00:55:41persuasive model of a model that can
- 00:55:43persuade people to as my exposure level
- 00:55:46says to convince someone to change their
- 00:55:47beliefs attitudes intentions motivations
- 00:55:50or behaviors right if the wrong people
- 00:55:52have that
- 00:55:54capability that is a very bad
- 00:55:57situation and the wrong people will have
- 00:55:59that like spoiler alert like we are
- 00:56:02talking about something that is
- 00:56:03inevitably going to occur there
- 00:56:05will be restrictions that will keep it
- 00:56:07from impacting society in the near term
- 00:56:11but if the capabilities are possible
- 00:56:13someone will build them and someone will
- 00:56:16utilize them for their own gain
- 00:56:19individually or as an organization or as
- 00:56:21a government um this is the world we are
- 00:56:24heading into it is why I said those
- 00:56:26exposure levels I highlighted are so
- 00:56:28critical for people to understand
- 00:56:31nothing I highlight in those E0 to E10
- 00:56:34isn't going to happen like it's
- 00:56:37just the timeline in which it happens
- 00:56:39and then what does that mean to us in
- 00:56:41business and
- 00:56:42Society all right let's dive into some
- 00:56:45rapid fire news items this week so first
- 00:56:47up the artificial intelligence chip
- 00:56:50startup Groq with a Q not a K like Elon
- 00:56:55Musk's um
- 00:56:56xAI tool this Groq has secured a
- 00:57:00massive $640 million in new funding this
- 00:57:05is a series D funding round that values
- 00:57:07the company at 2.8 billion which is
- 00:57:10nearly triple its previous valuation in
- 00:57:132021 so some notable names led this
- 00:57:16funding round uh BlackRock Inc and
- 00:57:19also some investments from the venture
- 00:57:20arms of Cisco and Samsung Electronics so
- 00:57:24what Groq does is they specialize in
- 00:57:26designing semiconductors and software to
- 00:57:29optimize how AI can perform so basically
- 00:57:32this is putting them in direct
- 00:57:34competition with chipmakers like Intel
- 00:57:37AMD and of course Nvidia so the
- 00:57:40company's CEO Jonathan Ross emphasized
- 00:57:42that this funding is going to accelerate
- 00:57:44their mission to deliver quote instant
- 00:57:47AI inference compute globally so Paul
- 00:57:51can you unpack for us here why
- 00:57:53this funding is significant and why what
- 00:57:55Groq is trying to do is significant to
- 00:57:58the overall AI landscape yeah so just a
- 00:58:01quick recap here uh Nvidia has made most
- 00:58:04of their money in the AI space in recent
- 00:58:06years training these AI models so
- 00:58:09companies like Meta and Google and
- 00:58:12OpenAI and Anthropic doing these massive
- 00:58:15training runs to build these models so
- 00:58:17they buy a bunch of Nvidia chips to
- 00:58:18enable that the future is inference that
- 00:58:22is when all of us use these tools to do
- 00:58:25things so Groq
- 00:58:27is building for a future of
- 00:58:29omnipresent intelligence AI in every
- 00:58:32device in every piece of software
- 00:58:35instantly accessible in our personal and
- 00:58:38professional lives and to power all that
- 00:58:40intelligence on demand we will all have
- 00:58:43that is inference that is what they have
- 00:58:46managed to do better and seemingly way
- 00:58:49faster than Nvidia doesn't mean Nvidia
- 00:58:50won't catch up or Nvidia won't buy
- 00:58:53Groq but at the moment they're going after
- 00:58:56that inference market not the training
- 00:58:58market and that is where 5 to 10 years
- 00:59:02from now that market will probably dwarf
- 00:59:05the training model
- 00:59:07market so next up we just got a new demo
- 00:59:10video from robotics company Figure who
- 00:59:13we've talked about a number of times on
- 00:59:15the podcast and they just released a
- 00:59:17two-minute demo of their Figure 02
- 00:59:19humanoid robot uh the demo video showed
- 00:59:22the robot walking through a factory as
- 00:59:25other Figure 02 models performed tasks and
- 00:59:28moved around in the background that
- 00:59:30included showing one of the robots um
- 00:59:33completing some assembly tasks that
- 00:59:36Figure is actually demoing right now for
- 00:59:38BMW at a Spartanburg South Carolina uh
- 00:59:42car plant Figure posted that their
- 00:59:45engineering and design teams completed a
- 00:59:47ground-up hardware and software redesign
- 00:59:49to build this new model that included
- 00:59:51technical advancements on critical
- 00:59:53Technologies like onboard AI computer
- 00:59:56vision batteries electronics and sensors
- 00:59:59the company says the new model can
- 01:00:01actually have conversations with humans
- 01:00:03through onboard mics and speakers
- 01:00:05connected to custom AI models it has an
- 01:00:08AI-driven vision system powered by six
- 01:00:10onboard cameras its hands have 16
- 01:00:13degrees of freedom and according to the
- 01:00:15company human-equivalent strength and
- 01:00:17its new CPU GPU provides three times the
- 01:00:21computation and AI inference available
- 01:00:24on board compared to the previous model
- 01:00:27now Paul I love these demo videos and
- 01:00:30it's really easy to kind of look at this
- 01:00:32be like oh my gosh the future is here
- 01:00:34but how do we like gauge the actual
- 01:00:37progress being made here because you
- 01:00:38know a demo's just a demo I don't get to
- 01:00:41go test out the robot yet on my own are
- 01:00:44we actually making real progress towards
- 01:00:47humanoid robots in your opinion yeah I
- 01:00:49do think so and you know the AI timeline
- 01:00:51I'd laid out back in episode 87 sort of
- 01:00:53projected out this explosion of
- 01:00:55humanoid robots like later in the
- 01:00:57decade like 2027 to 2030 and I do think that
- 01:01:00still holds true I don't think we're
- 01:01:02going to be walking around and seeing
- 01:01:03like these humanoid robots in your local
- 01:01:05Walmart anytime soon um or in a nursing
- 01:01:08care facility or things like that but
- 01:01:09that is where it's going um this is the
- 01:01:12idea of embodied intelligence so Figure
- 01:01:14is working on it in partnership with
- 01:01:15OpenAI Nvidia is working on it with Project
- 01:01:18GR00T um Tesla has Optimus that some
- 01:01:21believe will end up
- 01:01:23surpassing Tesla cars as the predominant
- 01:01:26product within that company
- 01:01:28Boston Dynamics makes all the cool
- 01:01:29videos online that have gone viral
- 01:01:32through the years so there's a lot of
- 01:01:33companies working on this the multimodal
- 01:01:36AI models are the brains the humanoid
- 01:01:38robot uh bodies are the vessels so go
- 01:01:41back to the exposure level key I talked
- 01:01:44about exposure level nine is exposure
- 01:01:47given physical world vision capabilities
- 01:01:50so LLM plus physical device such as
- 01:01:52phones or glasses um or in this case
- 01:01:54being able to see through the screen of
- 01:01:56a robot and see and understand the world
- 01:01:58around them and then exposure level 10
- 01:02:00is physical world action capabilities so
- 01:02:03access to the LLM um plus a general-
- 01:02:06purpose bipedal autonomous humanoid
- 01:02:08robot that enables the system to see
- 01:02:10understand analyze respond to and take
- 01:02:11action in the physical world and the
- 01:02:13robot's form enables it to interact in
- 01:02:15complex human environments with human-
- 01:02:16like capabilities like in a BMW factory
- 01:02:19so again everything in that exposure
- 01:02:22level key is happening right now
- 01:02:26and you can see um kind of the future
- 01:02:28coming when you look at what's going on
- 01:02:30with Figure so it's a combination of a
- 01:02:32hardware challenge getting the dexterity
- 01:02:34of human hands for example but the
- 01:02:37embodied intelligence is the
- 01:02:38Breakthrough that's allowing these
- 01:02:39humanoid robots to accelerate their
- 01:02:42development and potential
- 01:02:44impact so next up Elon Musk has
- 01:02:47reignited his legal battle with OpenAI
- 01:02:51and co-founders Sam Altman and Greg
- 01:02:53Brockman he has filed a new lawsuit in
- 01:02:55federal court against the
- 01:02:57company this comes just weeks after he
- 01:03:00withdrew his original suit and the core
- 01:03:02of this complaint is the same as the
- 01:03:04previous lawsuit he is alleging that
- 01:03:06OpenAI Altman and Brockman betrayed the
- 01:03:09company's original mission of developing
- 01:03:11AI for the public good instead
- 01:03:13prioritizing commercial interests
- 01:03:15particularly through their multi-billion
- 01:03:18dollar partnership with Microsoft it
- 01:03:20also claims that Altman and Brockman
- 01:03:23intentionally misled Musk and exploited
- 01:03:26his humanitarian concerns about AI's
- 01:03:29existential risks now okay with the
- 01:03:31caveat that we are not lawyers the suit
- 01:03:34also does introduce some new elements
- 01:03:36including some type of accusations of
- 01:03:40violating federal racketeering law on
- 01:03:42the part of the company as well it
- 01:03:45challenges OpenAI's contract with
- 01:03:47Microsoft and argues that it should be
- 01:03:49voided if OpenAI has achieved AGI
- 01:03:52interestingly the suit asks the court to
- 01:03:55decide
- 01:03:56if OpenAI's latest systems have
- 01:03:58achieved AGI OpenAI has for a while now
- 01:04:01maintained that Musk's claims are
- 01:04:03without merit and they pointed to and
- 01:04:05published some previous emails with Musk
- 01:04:08that suggested he had been pushing for
- 01:04:10commercialization as well just like they
- 01:04:12were before leaving in
- 01:04:152018 so Paul why is Elon Musk if we can
- 01:04:19attempt to get inside his brain trying
- 01:04:22to start this lawsuit back up again now
- 01:04:25I don't know maybe he just wants to
- 01:04:26force discovery and force them to
- 01:04:29unveil a bunch of proprietary stuff I
- 01:04:31don't know uh episode 86 on March 5th we
- 01:04:34talk pretty extensively about this
- 01:04:37lawsuit uh basic premise here is Musk
- 01:04:40you know co-founds OpenAI puts in
- 01:04:42the original money uh as a
- 01:04:44counterbalance to Google's pursuit of
- 01:04:45AGI which he sees as a threat to
- 01:04:47humanity you know remember Strawberry
- 01:04:48Fields taking over the world kind of
- 01:04:50stuff he leaves OpenAI unceremoniously
- 01:04:53in 2019 after trying to roll OpenAI
- 01:04:55into Tesla uh he forms xAI in 2023
- 01:05:00early 2024 to pursue AGI himself through
- 01:05:03the building of
- 01:05:05Grok and he still has a major grudge
- 01:05:08against Greg Sam and OpenAI and maybe
- 01:05:11this is what Greg is doing maybe he's just
- 01:05:12taking time off to deal with a
- 01:05:14lawsuit I'm joking I have no idea that's
- 01:05:17what Greg is doing but I you know
- 01:05:19again it's fascinating because at
- 01:05:21some point it may lead to some element
- 01:05:24of discovery and we may learn a bunch of
- 01:05:26insider stuff uh but up until then you
- 01:05:29know I don't know it's just interesting
- 01:05:30to note that it's back in the news again
- 01:05:33especially with what we suspect are
- 01:05:35impending releases I think this is
- 01:05:37sometimes something Elon Musk also does
- 01:05:39when something big is coming and he's
- 01:05:41about to get perhaps uh overshadowed
- 01:05:45yeah very possible yeah all right so
- 01:05:49next up a YouTube creator has filed a
- 01:05:51class action lawsuit against OpenAI and
- 01:05:54they're alleging that the company used
- 01:05:55millions of YouTube video transcripts to
- 01:05:58train its models without notifying or
- 01:06:00compensating content creators so this
- 01:06:02lawsuit is filed by David Millette in
- 01:06:04the US District Court for the Northern
- 01:06:06District of California it claims that
- 01:06:08OpenAI violated copyright law and
- 01:06:11YouTube's terms of service by using all
- 01:06:13this data to improve its models
- 01:06:15including ChatGPT so Millette is actually
- 01:06:18seeking a jury trial and $5 million in
- 01:06:21damages for all affected YouTube users
- 01:06:24and creators and as longtime listeners
- 01:06:27of the podcast know this is coming just
- 01:06:30as the latest report of many other AI
- 01:06:33companies using YouTube videos to train
- 01:06:36without permission we've talked about
- 01:06:38Runway Anthropic Salesforce all on
- 01:06:41previous episodes and we now have a new
- 01:06:44huge exposé that Nvidia has been doing
- 01:06:47the same thing so 404 Media recently
- 01:06:50reported that leaked internal documents
- 01:06:53show that Nvidia had been scraping
- 01:06:55massive amounts of video content from
- 01:06:57YouTube and other sources like Netflix
- 01:07:00to train its AI models so Nvidia is
- 01:07:04trying to create a video foundation
- 01:07:06model to power many different products
- 01:07:08including like a world generator and
- 01:07:10self-driving car systems so to create
- 01:07:13that model apparently they have been
- 01:07:15downloading a ton of copyright-protected
- 01:07:18videos um and this wasn't just a few
- 01:07:21this wasn't just by mistake emails
- 01:07:23viewed by 404 Media show Nvidia project
- 01:07:26managers discussing using 20 to 30
- 01:07:29virtual machines to download 80 years
- 01:07:31worth of videos per day so it also
- 01:07:35doesn't seem unfortunately like this
- 01:07:38happened via some rogue elements in the
- 01:07:40company employees raised concerns about
- 01:07:42it and they were told several times the
- 01:07:44decision had executive
- 01:07:47approval so Paul we just keep getting
- 01:07:49stories like this it seems like
- 01:07:51basically every major AI player is
- 01:07:54involved like could something like a class
- 01:07:55action lawsuit actually stop this
- 01:07:58behavior or what yeah I don't know this
- 01:08:01one's pretty messy there's a
- 01:08:03separate one in Proof News uh where
- 01:08:05they actually quoted like from internal
- 01:08:07stuff and one vice president of AI
- 01:08:10research at Nvidia said we need one Sora-
- 01:08:12like model Sora being OpenAI's in a
- 01:08:15matter of days Nvidia assembled more
- 01:08:17than a hundred workers to help lay the
- 01:08:19training foundation for a similar state-
- 01:08:21of-the-art model uh they began curating
- 01:08:24video data sets from around the internet
- 01:08:25ranging in size from hundreds of clips
- 01:08:27to hundreds of millions according to
- 01:08:29company Slack and internal documents
- 01:08:31staff quickly focused on YouTube yeah um
- 01:08:34but then they asked about whether or not
- 01:08:35they should go get all of Netflix and if
- 01:08:37so how do they do that so ah yeah
- 01:08:41I'll be interested to follow this
- 01:08:43one along it's pretty wild that they've
- 01:08:45got all this internal documentation but
- 01:08:47not surprised at all like I said last
- 01:08:49time we talked about this they are all
- 01:08:51doing this like and they're all doing it
- 01:08:53under the cover of we know that they
- 01:08:55did this so like we'll do it too
- 01:08:57and yeah it's the only way to compete
- 01:09:00basically so in other uh OpenAI news a
- 01:09:04big theme this week OpenAI has
- 01:09:06developed apparently an effective tool
- 01:09:09to detect AI-generated text and it can
- 01:09:12do this particularly with text
- 01:09:14from Chat
- 01:09:15GPT however it has not released this
- 01:09:18tool according to an exclusive in the
- 01:09:20Wall Street Journal so the tool uses a
- 01:09:23watermarking technique to identify when
- 01:09:25ChatGPT has created text and according
- 01:09:28to internal OpenAI documents viewed by
- 01:09:31the Journal it is reportedly
- 01:09:3499.9% accurate so the company apparently
- 01:09:38has been debating for about two years
- 01:09:40whether to even release this the tool
- 01:09:43has been ready to release for over a
- 01:09:45year and OpenAI has not let it
- 01:09:49outside the company now why is that
- 01:09:52it seems part of this is that users
- 01:09:56could be turned off by such a feature
- 01:09:59the survey that OpenAI conducted found
- 01:10:01that nearly 30% of ChatGPT users would
- 01:10:05use the tool less if watermarking was
- 01:10:08implemented an OpenAI spokesperson
- 01:10:11also said the company is concerned such a
- 01:10:13tool could disproportionately affect
- 01:10:14non-native English speakers
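OpenAI has not said how its watermark works, so purely as an illustration of the general idea, here is a toy detector in the spirit of published academic green-list text-watermarking schemes: a generator secretly biases token choices toward a pseudorandom green list, and a detector checks whether the green-token count is statistically too high to be chance. The hashing, green fraction, and threshold below are assumptions, not OpenAI's method.

```python
# Toy "green list" watermark detector, illustrative only; OpenAI's actual
# technique, tokenizer, and thresholds are not public.
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked green per step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by the previous
    token, so a watermarking generator and this detector agree."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return (digest[0] / 255.0) < GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """Z-score of the observed green-token count versus chance. Watermarked
    text scores high; ordinary human text hovers around zero."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = max(len(tokens) - 1, 1)
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(variance)

# Crude usage: split candidate text on whitespace and check the score.
print(watermark_z_score("the quick brown fox jumps over the lazy dog".split()))
```

This also makes the weakness Paul mentions next concrete: paraphrasing or round-trip translating the text reshuffles the tokens and washes the statistical signal out.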
- 01:10:17Paul what did you make of this
- 01:10:19story it seems like pretty powerful
- 01:10:21technology to be keeping under wraps do
- 01:10:23you agree with kind of OpenAI's logic
- 01:10:26here I don't know it's hard to you know
- 01:10:31put yourself into their position and
- 01:10:33these are big difficult decisions
- 01:10:34while it is 99.9% accurate um
- 01:10:39they have some concerns that the
- 01:10:40watermarks could be erased through
- 01:10:42simple techniques like having Google
- 01:10:43translate the text into another language
- 01:10:45and then change it back so it's kind of
- 01:10:47that whole thing like the cheaters are
- 01:10:49going to stay ahead of the technology
- 01:10:50you would think it doesn't seem
- 01:10:52foolproof at this point uh it also could
- 01:10:54give bad actors the ability to decipher
- 01:10:57the watermarking technique and Google
- 01:10:59does have SynthID and they haven't
- 01:11:01released it widely yeah I did find it
- 01:11:03interesting the one note was that John
- 01:11:05Schulman who we talked about earlier has
- 01:11:07left to go to Anthropic he was heavily
- 01:11:09involved in the building of this in
- 01:11:10early 2023 he outlined the pros and cons
- 01:11:13of the tool in an internal shared Google
- 01:11:16doc and that's when OpenAI executives
- 01:11:18decided they would seek input from a
- 01:11:20range of people before acting further so
- 01:11:22yeah this has been going on for a while
- 01:11:24um I don't know I'm not sure we're
- 01:11:26going to get to a point where there's
- 01:11:27some you know I've said before like we
- 01:11:28need a universal standard we don't need
- 01:11:31just a watermarking tool for ChatGPT or
- 01:11:33just a watermarking tool for Google
- 01:11:34Gemini like we need an industry-standard
- 01:11:36tool if we're going to do it and then we
- 01:11:37got to do it the right
- 01:11:39way so in some other news uh a new AI
- 01:11:42image generator is getting a ton of
- 01:11:45attention online it's called flux
- 01:11:47technically it's flux. one is the model
- 01:11:50everyone kind of refers to it as flux
- 01:11:52and it's getting a ton of Buzz because
- 01:11:54it's generating really really high
- 01:11:56quality results and it is open source so
- 01:12:00flux was developed by black forest Labs
- 01:12:02whose Founders previously worked at
- 01:12:04stability Ai and flux is kind of seen as
- 01:12:07a potential successor to stable
- 01:12:09diffusion and what kind of sets this
- 01:12:11apart is that it has these smaller
- 01:12:14models that can run on reasonably good
- 01:12:16Hardware including high performance
- 01:12:18laptops so you can basically as a
- 01:12:20hobbyist developer small business run
- 01:12:23this really sophisticated image model
- 01:12:25that people are sharing lots of examples
- 01:12:27of not only like these stunning kind of
- 01:12:29hyperrealistic or artistic results like
- 01:12:31Midjourney would produce uh but also
- 01:12:34it's doing things like getting text
- 01:12:35right in the images so it really does
- 01:12:37seem to be pretty powerful and it
- 01:12:40appears to be open source which means
- 01:12:42you can go access it yourself through
- 01:12:44things like Poe Hugging Face and other hubs
- 01:12:46of AI models that are open source
- 01:12:49and kind of run it on your own kind of
- 01:12:52customize the code however you would like
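For the hobbyist route Mike is describing, here is a minimal sketch of what running the open FLUX.1 weights locally can look like, assuming the black-forest-labs/FLUX.1-schnell checkpoint on Hugging Face and a recent diffusers release that includes FluxPipeline; memory needs are still substantial, so CPU offload is enabled.

```python
# Minimal local-generation sketch for the open FLUX.1-schnell weights.
# Assumes a recent `diffusers` release with FluxPipeline plus `torch`.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for lower GPU memory use

image = pipe(
    prompt="a storefront sign that reads 'The Artificial Intelligence Show', photorealistic",
    num_inference_steps=4,   # schnell is distilled for few-step generation
    guidance_scale=0.0,      # schnell runs without classifier-free guidance
).images[0]
image.save("flux_output.png")
```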
- 01:12:53so Paul I've seen some pretty cool
- 01:12:57demos of this like this seems like the
- 01:12:59real deal interesting to have this
- 01:13:01capability open sourced which we've
- 01:13:04talked about you know could be a
- 01:13:06potential problem as people are
- 01:13:08generating deep fakes and other
- 01:13:09problematic types of images what did you
- 01:13:11make of this yeah the wildest demos I've
- 01:13:14seen are taking Flux images and then um
- 01:13:17animating them with like Gen-3 from
- 01:13:19Runway so turning them into 10-second
- 01:13:21videos that are just crazy yeah so you know
- 01:13:25not easily accessible it's
- 01:13:27always just so interesting how these
- 01:13:28things are released like there's no
- 01:13:30app to go get you have to
- 01:13:32like download something I think to be
- 01:13:33able to use it so I haven't tested it
- 01:13:35myself I am just checking it out but
- 01:13:37yeah it's just the continued rate
- 01:13:40of improvement of these image and video
- 01:13:43models is really hard to comprehend and
- 01:13:45it just seems like there's no end in
- 01:13:47sight for how realistic the outputs are
- 01:13:51becoming all right and our last news
- 01:13:54topic today um California's proposed AI
- 01:13:56legislation we've talked about before
- 01:13:58known as
- 01:14:00SB 1047 is facing criticism from a
- 01:14:03prominent figure in AI and that figure
- 01:14:06is Dr Fei-Fei Li who's often referred to
- 01:14:09as the godmother of AI uh she's a
- 01:14:11researcher who has voiced strong
- 01:14:13concerns about the potential negative
- 01:14:16impacts of the bill so SB 1047 is short
- 01:14:20for the Safe and Secure Innovation for
- 01:14:22Frontier Artificial Intelligence Models
- 01:14:25Act which aims to essentially regulate
- 01:14:28large AI models in
- 01:14:30California Dr Li however argues that
- 01:14:33the bill itself could end up harming the
- 01:14:35entire US AI ecosystem so she outlines
- 01:14:39three big problems with how the bill is
- 01:14:42written today first it unduly punishes
- 01:14:46developers and potentially stifles
- 01:14:48innovation because it's starting to hold
- 01:14:50people liable for any misuse done with
- 01:14:53their AI models not just by them but
- 01:14:56by other people second there is a
- 01:14:59mandated kill switch for AI programs
- 01:15:02that could she says devastate the open-
- 01:15:04source community and third the bill
- 01:15:07could hamper public sector and academic
- 01:15:09AI research by limiting access to a lot
- 01:15:11of the necessary models and data to do
- 01:15:14that work so Paul this is kind of yet
- 01:15:18another prominent AI voice to raise
- 01:15:20objections to this bill we've talked
- 01:15:21about Andrew Ng who published quite
- 01:15:24extensively on X uh recently about the bill
- 01:15:27however other people like Geoffrey Hinton
- 01:15:29support it do you see this as
- 01:15:32potentially problematic for AI
- 01:15:35innovation how are you kind of looking
- 01:15:37at this yeah I do think it would
- 01:15:39impact innovation certainly it would
- 01:15:42definitely impact open source um I don't
- 01:15:46know I mean the more time we spend in
- 01:15:47this space the more I think about these
- 01:15:49things the more I think we need
- 01:15:50something I don't know if this is the
- 01:15:52right thing but I think we're
- 01:15:55um by 2025 going to enter an arena where
- 01:15:58it's very important that
- 01:16:01there are more guardrails in
- 01:16:03place than currently exist for these
- 01:16:05models and I don't know what the
- 01:16:08solution is but we need something and I
- 01:16:11think we need it sooner than later and
- 01:16:14so I think it's good that
- 01:16:15conversations like these are happening I
- 01:16:17get that there's going to be people on
- 01:16:18both sides of this like any um you know
- 01:16:21important topic I don't feel strongly
- 01:16:23one way or the other at the moment but I
- 01:16:25feel like something needs to be done we
- 01:16:28cannot wait until mid to late next
- 01:16:31year um to have these conversations so I
- 01:16:35hope something happens sooner than
- 01:16:38later all right Paul that's a wild week
- 01:16:41lots of tie-ins lots of related topics
- 01:16:43thanks for connecting all the dots for
- 01:16:45us this week um just a quick reminder to
- 01:16:48everyone uh if you have not checked out
- 01:16:50our newsletter yet at marketingaiinstitute.com/newsletter
- 01:16:52it's called
- 01:16:55this week in AI it covers a ton of other
- 01:16:57stories that we didn't get to in this
- 01:16:59episode and does so every week so you
- 01:17:01have a really nice comprehensive brief
- 01:17:04as to what's going on in the industry
- 01:17:06all curated for you each and every week
- 01:17:08and also if your podcast platform or
- 01:17:11tool of choice allows you to leave a
- 01:17:13review we would very much appreciate if
- 01:17:16you could do that for us every review uh
- 01:17:18helps us improve the show helps us um
- 01:17:22get it into the hands of more people and
- 01:17:24just helps us generally create a better
- 01:17:27product for you so if you haven't done
- 01:17:29that it's the most important thing you
- 01:17:30can do for us please go ahead and drop
- 01:17:33us a review Paul thanks so much yeah
- 01:17:36thanks everyone for joining us again a
- 01:17:38reminder get those MAICON tickets at
- 01:17:41maicon.ai and keep an eye on the Strawberry
- 01:17:43Fields this week it might be an
- 01:17:45interesting week in
- 01:17:47AI thanks for listening to the AI show
- 01:17:51visit marketingaiinstitute.com to
- 01:17:54continue your AI learning journey
- 01:17:56and join more than 60,000 professionals
- 01:17:58and Business Leaders who have subscribed
- 01:18:00to the Weekly Newsletter downloaded the
- 01:18:02AI blueprints attended virtual and
- 01:18:05in-person events taken our online AI
- 01:18:08courses and engaged in the Slack
- 01:18:10community until next time stay curious
- 01:18:13and explore AI
- Artificial Intelligence
- OpenAI
- Persuasion
- AI Regulation
- Labor Transformation
- Robots
- Lawsuit
- AI Safety