CEO of Microsoft AI speaks about the future of artificial intelligence at Aspen Ideas Festival

00:45:22
https://www.youtube.com/watch?v=lPvqvt55l3A

Summary

TLDR: This discussion features Mustafa Suleyman, a significant figure in the AI industry and current CEO of Microsoft AI, who has been involved in AI from the early days through ventures like DeepMind, later acquired by Google. The conversation traces the evolution of AI from language models to agents and anticipates a future where AI becomes increasingly integrated into everyday life. It addresses the current stage of AI development, likening it to the early innings of a baseball game with immense growth potential. Suleyman and the moderator explore the notion of AGI (artificial general intelligence), possible societal impacts, and the need for ethics and regulation. They discuss the importance of global governance systems to ensure AI's benefits outweigh potential harms. Suleyman stresses the need to balance innovation with regulation, drawing parallels with historical technologies like the automobile, which was eventually regulated to improve safety. The conversation also covers the implications of AI's vast data consumption, the ethics of data sourcing, and how to distribute AI's benefits fairly without exacerbating inequality. Suleyman suggests that education and a multidisciplinary background are essential preparation for a future in which AI serves as a tool for productivity and innovation while its risks are carefully managed.

Takeaways

  • 🤖 AI is at an early development stage with enormous potential.
  • 🔍 There's a critical need to address AI safety and regulation.
  • 📚 Education must adapt to incorporate AI and digital learning methods.
  • 🌐 Global governance is essential for managing AI impacts.
  • 📊 Data use in AI raises intellectual property concerns.
  • 💡 Innovation and regulation must balance for societal benefit.
  • 🔒 Open vs. Closed AI models remains a debated issue.
  • 🧠 Emotional intelligence in AI reflects on deeper human engagement.
  • 🌎 Power is concentrating in AI tech companies due to high resource needs.
  • 📈 AI could become a major tool for economic and cultural advancement.

Timeline

  • 00:00:00 - 00:05:00

    This introductory segment sets the stage for a discussion about the future of AI and its safety challenges with a prominent figure in the industry, Mustafa Suleyman, currently leading Microsoft's AI efforts. Suleyman is recognized for his significant contributions to AI, including co-founding DeepMind and, later, Inflection with Reid Hoffman.

  • 00:05:00 - 00:10:00

    The discussion revolves around how AI is perceived at different stages, and Suleyman emphasizes the rapid, exponential growth of AI technologies. He discusses the transition from language models to advanced AI agents and how these developments are changing daily life, making AI a fundamental part of societal progress that is still in its early stages.

  • 00:10:00 - 00:15:00

    Suleyman cautions against getting overly fixated on achieving superintelligence, emphasizing the importance of addressing AI safety and governance. He argues that while superintelligence is theoretically achievable, it is neither imminent nor necessarily desirable, and that the current focus should be on ensuring AI benefits humanity without causing harm, advocating for balanced and effective global governance.

  • 00:15:00 - 00:20:00

    The conversation switches to the topic of AI regulation. Suleyman says it is hard to judge whether tech leaders' calls for regulation are sincere or self-serving, but he insists on the necessity of constructive regulation, arguing that successful regulation, like that of automobiles, has improved entire sectors, and that such regulation should not stall innovation, especially in open source.

  • 00:20:00 - 00:25:00

    The speaker addresses dynamics within AI companies like OpenAI. Despite some employees leaving and criticizing its safety measures, Suleyman values whistleblowing as a check on ethical practice. He acknowledges OpenAI's achievements and its efforts to prioritize safety, which he considers essential amid rapid technological advances.

  • 00:25:00 - 00:30:00

    Debates over AI development, data sources, and AI's ability to keep improving come up next. Suleyman explains that disagreements within AI arise from differing readings of the same scaling trends and of data availability. He acknowledges limitations, such as running out of training data, but remains optimistic about continued progress and adaptation to these challenges.

  • 00:30:00 - 00:35:00

    Suleyman discusses the ethical concerns of training AI on existing data, emphasizing the need to address IP rights. He highlights a societal shift in which the economic model of information production is changing, causing potential disruption but also major advances in the accessibility of knowledge that underpin technological progress.

  • 00:35:00 - 00:40:00

    The dialogue pivots to the implications of AI becoming integrated into various aspects of life, asking how young generations should prepare for a future shaped by AI. Suleyman suggests cultivating adaptability and learning across multiple domains as crucial for future success, reflecting the shifts needed in education.

  • 00:40:00 - 00:45:22

    Finally, the discussion wraps up with audience questions ranging from misinformation to national security concerns, and reflections on the educational reforms needed to integrate AI effectively. Suleyman emphasizes pragmatic approaches, acknowledging the dual challenge of managing risks and harnessing opportunities in the current tech landscape.



FAQ

  • Who is Mustafa Suleyman?

    Mustafa Suleyman is the CEO of Microsoft AI, co-founder of DeepMind and Inflection, and a prominent figure in AI development.

  • What was the purpose of the discussion?

    The discussion focused on AI's present state, future potential, safety challenges, ethical considerations, and societal impacts.

  • How does AI affect data usage and IP?

    AI relies heavily on data, often from open sources, raising questions about intellectual property rights and data usage ethics.

  • What are the concerns regarding AI safety?

    AI safety concerns include the potential for misuse, regulation needs, and ensuring technology benefits outweigh the risks.

  • What stance does Mustafa Suleyman take on AI regulation?

    Suleyman supports thoughtful regulation of AI to ensure safety and ethical use while embracing technological advancement.

  • How is AI compared to past technologies like cryptocurrency?

    AI is seen as far more impactful and enduring than cryptocurrency, which had issues delivering tangible value.

  • What role does AI play in education?

    AI can enhance education by aiding lesson planning and fostering interactive, real-time learning environments.

  • What is the role of AI in elections?

    Microsoft has taken a stance against using AI for electioneering due to current limitations and ethical concerns.

  • What is the challenge of AI in power concentration?

    The major challenge lies in the concentration of AI capabilities becoming limited to powerful tech companies due to high resource demands.

  • What is the debate over open source AI?

    There is debate over open vs. closed source AI models, balancing transparency with potential risks of misuse.


Transcript (en)
  • 00:00:06
    TO BE HERE TO TALK ABOUT AI THE FUTURE OF IT WHERE WE ARE,
  • 00:00:11
    THE CHALLENGES OF ITS SAFETY AND SO MUCH MORE. AND WE'RE
  • 00:00:17
    HERE WITH, DEMONSTRABLY, ONE OF THE OGS OF THE AI WORLD, MUSTAFA
  • 00:00:21
    SULEYMAN, WHO IS NOW THE CEO OF MICROSOFT AI. HE SPENT, WHAT, A
  • 00:00:26
    DECADE AT THE FOREFRONT OF THIS INDUSTRY BEFORE WE EVEN HAD
  • 00:00:29
    GOTTEN TO FEEL IT IN THE PAST COUPLE OF YEARS. NOW HE
  • 00:00:34
    CO-FOUNDED AT DEEPMIND BACK IN 2010. THAT WAS LATER ACQUIRED
  • 00:00:37
    BY GOOGLE. IN 2022, HE CO-FOUNDED A COMPANY CALLED
  • 00:00:44
    INFLECTION. HE DID THAT WITH REID HOFFMAN AND THEN RECENTLY
  • 00:00:49
    MOVED THAT COMPANY INSIDE OF MICROSOFT AND IS NOW THE HEAD
  • 00:00:53
    OF MICROSOFT'S AI EFFORTS. SO THANK YOU SO MUCH FOR BEING
  • 00:00:55
    HERE. AND JUST AS A SIDE NOTE,
  • 00:00:58
    HIS PERSONAL STORY, IF I COULD, JUST FOR A SECOND, HIS DAD WAS
  • 00:01:03
    A TAXI DRIVER. HIS MOTHER WAS A NURSE. AND TO SEE HIM NOW IS
  • 00:01:06
    QUITE A THING. SO THANK YOU FOR BEING HERE. REALLY IS SOMETHING
  • 00:01:10
    VERY, VERY SPECIAL. THANK YOU. THANKS A LOT.
  • 00:01:14
    I WANT TO TALK ABOUT THE GOOD, THE BAD, THE UGLY AND
  • 00:01:17
    EVERYTHING IN BETWEEN WHERE WE'RE GOING. BUT LET'S JUST
  • 00:01:22
    START VERY BASICALLY. IF, IF AI WAS A BASEBALL GAME,
  • 00:01:26
    WHAT INNING ARE WE IN? WHERE ARE WE IN THIS? BECAUSE I THINK
  • 00:01:30
    SO MANY OF US HAVE PLAYED WITH CHATGPT AND OTHER KINDS OF AI
  • 00:01:33
    PRODUCTS. WE ALL THINK WE KNOW WHAT IT IS A LITTLE BIT.
  • 00:01:37
    WE ALL KEEP TALKING ABOUT WHAT THE FUTURE OF OUR LIFE IS GOING
  • 00:01:39
    TO BE WITH AI.
  • 00:01:43
    IT MIGHT BE HELPFUL JUST TO SORT OF LEVEL SET FOR US WHERE
  • 00:01:44
    WE REALLY ARE.
  • 00:01:47
    >> BECAUSE I THINK THE CHALLENGE IS THAT AS HUMANS WE'RE
  • 00:01:52
    RIDDLED WITH BIASES OF ALL KINDS. SO WE LOOK BACK AND IT
  • 00:01:56
    FEELS INCREMENTAL. BUT IN THE MOMENT, IT REALLY IS FAR MORE
  • 00:02:00
    EXPONENTIAL THAN PEOPLE REALIZE. I MEAN, 2 YEARS
  • 00:02:05
    AGO, NOBODY WAS TALKING ABOUT AGENTS, YOU KNOW, AI SYSTEMS
  • 00:02:08
    THAT CAN TAKE ACTIONS IN THE WORLD. TODAY, EVERYBODY IS
  • 00:02:11
    EXCITED THAT OVER THE NEXT COUPLE OF YEARS, AGENTS ARE GOING
  • 00:02:14
    TO BE EVERYWHERE, DOING THINGS ON OUR BEHALF, ORGANIZING OUR
  • 00:02:18
    LIVES, PRIORITIZING AND PLANNING. 4 YEARS AGO, NOBODY
  • 00:02:21
    WAS REALLY TALKING ABOUT LANGUAGE MODELS. TODAY.
  • 00:02:24
    EVERYBODY IS TALKING ABOUT LANGUAGE MODELS. SO IT'S JUST A
  • 00:02:28
    VERY SURREAL EXPERIENCE TO BE INVENTING TECHNOLOGIES, WHICH
  • 00:02:32
    AT ONE MOMENT FEEL IMPOSSIBLE. AND THEN JUST A FEW YEARS LATER
  • 00:02:36
    AND NOW, YOU KNOW, COMING TO PASS, IF YOU JUST THINK ABOUT
  • 00:02:36
    IT,
  • 00:02:40
    IT IS NOW POSSIBLE TO ASK ANY ONE OF THESE CHATBOTS ANY
  • 00:02:42
    QUESTION THAT YOU COULD POSSIBLY IMAGINE AND GET AN ANSWER.
  • 00:02:46
    SO THAT IS PRETTY MUCH AS GOOD AS ANY ONE OF US COULD ANSWER
  • 00:02:49
    IN THIS ROOM, ON AVERAGE. AND IT'S EASY TO TAKE THAT FOR
  • 00:02:53
    GRANTED. BUT THAT IS LIKE YEARS AND YEARS OF VERY SLOWLY AND
  • 00:02:58
    STEADILY ADDING CAPABILITY, EXPERIMENTING, TESTING
  • 00:03:02
    THINGS OUT. AND THEN IN SUM, IT REALLY, I THINK, CREATES QUITE A
  • 00:03:05
    MAGICAL EXPERIENCE. IF I WERE TO LOOK OUT OVER THE NEXT 10
  • 00:03:09
    YEARS, IT FEELS LIKE WE'RE RIGHT AT THE VERY BEGINNING OF
  • 00:03:10
    THIS MOMENT.
  • 00:03:14
    >> WE'VE ALL PLAYED WITH CHATGPT. BUT OTHERS TALK
  • 00:03:19
    ABOUT AGI, THIS SORT OF A WORLD WHERE ARTIFICIAL
  • 00:03:21
    GENERAL INTELLIGENCE CAN EFFECTIVELY DO
  • 00:03:24
    ANYTHING AND EVERYTHING. THAT IS SORT OF THE
  • 00:03:28
    THING THAT IS BOTH THE GREAT OPPORTUNITY AND ALSO THE THING
  • 00:03:31
    THAT I THINK PEOPLE FEAR IN MANY WAYS
  • 00:03:33
    WHERE IS THAT ON YOUR DECADE-LONG ROAD MAP?
  • 00:03:38
    >> I'M JUST NOT QUITE SURE THAT'S A SORT OF HELPFUL FRAMING, AND
  • 00:03:43
    WE'VE ALL GOT FIXATED ON SUPERINTELLIGENCE, THIS MOMENT
  • 00:03:47
    WHEN AN AI CAN DO EVERY SINGLE TASK THAT A HUMAN CAN DO BETTER
  • 00:03:48
    THAN HUMANS. AND
  • 00:03:52
    IF YOU REALLY PUSH ME, I HAVE TO SAY, THEORETICALLY, IT IS
  • 00:03:56
    POSSIBLE AND WE SHOULD TAKE THE SAFETY RISKS ASSOCIATED WITH
  • 00:04:00
    THAT SUPER SERIOUSLY. I HAVE BEEN ADVOCATING FOR THE SAFETY
  • 00:04:02
    AND ETHICS OF AI FOR
  • 00:04:05
    ALMOST 15 YEARS. AND SO I I REALLY DO CARE ABOUT IT A LOT.
  • 00:04:10
    BUT PEOPLE TEND TO LUNGE AT IT AS THOUGH IT IS A, INEVITABLE, B,
  • 00:04:13
    DESIRABLE, AND C, COMING TOMORROW. I JUST THINK NEITHER
  • 00:04:18
    OF THOSE THINGS ARE TRUE. IT IS AN UNPRECEDENTED TECHNOLOGY.
  • 00:04:21
    AND AS A SPECIES, WE'RE GOING TO HAVE TO FIGURE OUT A NEW
  • 00:04:24
    KIND OF GLOBAL GOVERNANCE MECHANISM SO THAT THE TECHNOLOGY
  • 00:04:27
    CONTINUES TO ALWAYS SERVE US, MAKE US SMARTER, HEALTHIER
  • 00:04:30
    AND HAPPIER, OR MAKE US MORE EFFICIENT AND ADD VALUE IN THE
  • 00:04:33
    WORLD, BECAUSE THERE IS A RISK THAT THE TECHNOLOGIES THAT WE
  • 00:04:38
    DEVELOP THIS CENTURY DO END UP CAUSING MORE HARM THAN GOOD.
  • 00:04:42
    AND THAT IS JUST THE KIND OF COST OF PROGRESS. WE HAVE TO
  • 00:04:45
    FACE THAT REALITY, RIGHT? THESE TECHNOLOGIES ARE GOING TO
  • 00:04:49
    SPREAD FAR AND WIDE. EVERYBODY IS GOING TO HAVE THE CAPACITY TO
  • 00:04:53
    INFLUENCE PEOPLE AT SCALES PREVIOUSLY CONSIDERED TO BE
  • 00:04:55
    COMPLETELY UNIMAGINABLE. AND SO THAT'S WHERE THERE'S
  • 00:04:58
    GOING TO BE A QUESTION OF GOVERNANCE. >> I WANT TO ASK YOU A
  • 00:04:58
    QUESTION ABOUT THAT.
  • 00:05:01
    >> WHICH IS THAT LEADERS IN THIS INDUSTRY, INCLUDING
  • 00:05:03
    YOURSELF AND SAM ALTMAN AND SO MANY OTHERS, HAVE GONE TO
  • 00:05:05
    CAPITOL HILL AND ELSEWHERE AROUND THE WORLD AND SAID,
  • 00:05:09
    PLEASE, WE MAY NEED, WE MAY NEED REGULATION, THAT
  • 00:05:12
    THERE ARE LOTS OF PROBLEMS POTENTIALLY WITH THIS
  • 00:05:15
    TECHNOLOGY. IT'S UNIQUE IN THAT IN VERY FEW, VERY FEW
  • 00:05:18
    INDUSTRIES DO THE CEOS GO TO THE REGULATORS AND SAY, PLEASE
  • 00:05:21
    REGULATE ME. IS THAT A SINCERE EFFORT
  • 00:05:26
    OR IS THAT A PROTECTIVE EFFORT TO SAY, LOOK, WE TOLD YOU THERE
  • 00:05:29
    COULD BE PROBLEMS. WE KNEW YOU WERE NEVER GOING TO REGULATE US
  • 00:05:31
    ANYWAY, SEE SOCIAL MEDIA.
  • 00:05:33
    AND HERE WE ARE NOW.
  • 00:05:36
    >> YOU KNOW, AGAIN, I I THINK THAT THIS IS ALSO A FALSE
  • 00:05:40
    FRAME. YOU KNOW, IS IT UP, YOU KNOW, A SINCERE EFFORT OR IS IT
  • 00:05:41
    ACTUALLY ABOUT REGULATORY CAPTURE?
  • 00:05:44
    >> BECAUSE IF THERE WAS ACTUAL REGULATION, I HAVE
  • 00:05:47
    A FUNNY FEELING. WE WOULD HAVE A DIFFERENT CONVERSATION ABOUT
  • 00:05:49
    WHICH PIECES ARE REGULATED, HOW IT'S REGULATED. THERE BE A LOT
  • 00:05:52
    OF PUSHBACK ON THAT. MAYBE IT'S BECAUSE I'M A SORT
  • 00:05:56
    OF, YOU KNOW, BRIT WITH EUROPEAN TENDENCIES. BUT I
  • 00:05:57
    DON'T FEAR REGULATION.
  • 00:05:59
    >> IN THE WAY THAT SORT OF EVERYONE SEEMS TO BY DEFAULT
  • 00:06:03
    THINK THE REGULATION IS EVIL HERE. I THINK THERE IS NO
  • 00:06:06
    TECHNOLOGY IN HISTORY THAT HASN'T BEEN SUCCESSFULLY
  • 00:06:09
    REGULATED. JUST LOOK AT THE CAR. WE'VE BEEN
  • 00:06:13
    REGULATING CARS SINCE THE '20S AND IT HAS CONSISTENTLY, STEADILY
  • 00:06:16
    MADE CARS BETTER. AND IT'S NOT JUST THE CAR ITSELF. IT'S THE
  • 00:06:19
    STREETLIGHTS. IT'S THE TRAFFIC LIGHTS. IT'S THE ZEBRA
  • 00:06:22
    CROSSINGS. IT'S THE CULTURE OF SAFETY AROUND DEVELOPING A
  • 00:06:26
    CAR, DRIVING A CAR. SO THIS IS JUST A HEALTHY
  • 00:06:29
    DIALOGUE THAT WE JUST NEED TO ENCOURAGE AND STOP FRAMING AS A
  • 00:06:33
    KIND OF BLACK AND WHITE, EITHER THE EVIL REGULATION OR
  • 00:06:36
    IT'S CYNICALLY MINDED TO DRIVE REGULATORY CAPTURE. I THINK
  • 00:06:39
    IT'S A GREAT THING THAT TECHNOLOGISTS AND ENTREPRENEURS
  • 00:06:42
    AND CEOS OF COMPANIES LIKE MYSELF AND SAM, WHO I LOVE
  • 00:06:46
    DEARLY AND THINK IS ALSO HE'S NOT CYNICAL. HE IS SINCERE.
  • 00:06:49
    HE BELIEVES IT GENUINELY, AND SO DO THE REST OF THEM. AND I THINK
  • 00:06:51
    THAT'S JUST GREAT. WE SHOULD EMBRACE THAT. AND OF COURSE,
  • 00:06:54
    THERE WILL BE DOWNSIDES TO IT, RIGHT? SO I'M NOT DENYING THAT,
  • 00:06:58
    YOU KNOW, POOR REGULATION CAN SLOW US DOWN, MAKE US LESS
  • 00:07:01
    COMPETITIVE, CREATE CHALLENGES WITH OUR INTERNATIONAL
  • 00:07:04
    ADVERSARIES, YOU KNOW. AND IT CERTAINLY SHOULDN'T SLOW DOWN
  • 00:07:06
    THE OPEN SOURCE COMMUNITY AND ALL THE STARTUPS.
  • 00:07:09
    >> LET ME ASK A MORE COMPLICATED QUESTION. MICROSOFT
  • 00:07:10
    HAS A DEEP,
  • 00:07:12
    DEEP RELATIONSHIP WITH OPEN AI.
  • 00:07:17
    THAT IS HOW MICROSOFT, AT LEAST FOR RIGHT NOW, IS PURSUING ITS AI
  • 00:07:19
    AMBITIONS. I KNOW THERE'S LOTS OF OTHER EFFORTS UNDERNEATH AND
  • 00:07:22
    WE'LL TALK ABOUT THOSE IN A MINUTE. BUT YOU REFERENCED SAM
  • 00:07:25
    ALTMAN WHO'S GOING TO BE HERE AND SPEAKING TOMORROW. THERE'VE
  • 00:07:27
    BEEN A LOT OF HEADLINES AND I WANT YOU TO HELP US MAKE SOME
  • 00:07:31
    SENSE OF THEM WHERE WE HAVE SEEN A NUMBER OF PEOPLE INSIDE
  • 00:07:36
    OPENAI WHO WORKED ON THEIR SAFETY TEAMS AT OPENAI NOT JUST
  • 00:07:41
    LEAVE THE COMPANY BUT OPENLY OBJECT TO THE
  • 00:07:43
    APPROACH THAT THE COMPANY HAS TAKEN. I'M GOING TO READ YOU A
  • 00:07:46
    QUOTE FROM ONE OF THE PEOPLE ON THE SAFETY TEAMS WHO LEFT, WHO SAID
  • 00:07:50
    THESE PROBLEMS ARE QUITE HARD TO GET RIGHT AND I AM CONCERNED
  • 00:07:53
    WE AREN'T ON A TRAJECTORY TO GET THEM RIGHT. IT IS VERY
  • 00:07:57
    UNUSUAL TO SEE EMPLOYEES LEAVE AND SPEAK OUT THE WAY THEY HAVE
  • 00:08:01
    ABOUT WHAT'S HAPPENING INSIDE OPENAI. WHAT DO YOU THINK IS
  • 00:08:01
    HAPPENING?
  • 00:08:04
    >> I'M PROUD THAT WE LIVE IN A COUNTRY AND WE OPERATE IN A
  • 00:08:08
    TECH ECOSYSTEM WHERE THERE CAN BE WHISTLEBLOWERS AND THOSE
  • 00:08:11
    WHISTLEBLOWERS ARE ENCOURAGED AND SUPPORTED. AND IF THOSE
  • 00:08:14
    PEOPLE FEEL LIKE THIS IS THE MOMENT WHERE THEY NEED TO MAKE
  • 00:08:17
    THAT STATEMENT, I CELEBRATE AND SUPPORT THAT. I PERSONALLY HAVE
  • 00:08:20
    ENORMOUS RESPECT FOR EVERYTHING THE OPENAI HAS ACHIEVED.
  • 00:08:23
    I THINK THEY CAN GO DOWN AS ONE OF THE DEFINING TECHNOLOGY
  • 00:08:27
    COMPANIES IN HISTORY. AND I GENUINELY THINK THAT THEY'RE
  • 00:08:30
    GRAPPLING IN A SINCERE WAY WITH THE CHALLENGE OF PUSHING
  • 00:08:33
    FORWARD WITH THE TECHNOLOGY AS FAST AS POSSIBLE, BUT ALSO
  • 00:08:35
    PUTTING SAFETY ON THE TABLE FRONT AND CENTER. I MEAN,
  • 00:08:39
    THEY'VE BEEN LEADING THE SAFETY RESEARCH AGENDA. LET'S JUST BE
  • 00:08:42
    CLEAR ABOUT THAT. PUBLISHING PAPERS, COMMITTING TO ACADEMIC
  • 00:08:45
    ARCHIVES, ATTENDING THE CONFERENCES, RAISING AWARENESS
  • 00:08:49
    OF THESE ISSUES, OPEN SOURCING SAFETY TOOLING AND
  • 00:08:51
    INFRASTRUCTURE. >> TAKE US NOW INSIDE THE ROOM, THOUGH,
  • 00:08:54
    WHEN PEOPLE ARE HAVING THIS DEBATE, TO THE EXTENT THERE IS ONE AND
  • 00:08:55
    WE'RE SEEING IT PLAY OUT.
  • 00:08:58
    >> WHAT IS THE DEBATE OVER? IS IT ABOUT RESOURCES? HOW MUCH
  • 00:09:01
    MONEY IS BEING DEVOTED TOWARDS THE SAFETY ISSUE? IS IT ABOUT
  • 00:09:06
    THE PSYCHOLOGY AND PHILOSOPHY ABOUT HOW HOW QUICKLY TO MOVE
  • 00:09:07
    FORWARD
  • 00:09:10
    AND WHERE DOES MICROSOFT SIT IN THAT DISCUSSION?
  • 00:09:15
    >> BUT I THINK SMART PEOPLE CAN SINCERELY DISAGREE ABOUT THE
  • 00:09:19
    SAME OBSERVATIONS. SO SOME INDIVIDUALS LOOK AT THE
  • 00:09:23
    EXPONENTIAL TRAJECTORY THAT WE'RE ON AND THEY WILL ARGUE
  • 00:09:29
    THAT FOR YEARS WE HAVE SEEN AN ORDER OF MAGNITUDE MORE
  • 00:09:34
    COMPUTE AND MORE DATA APPLIED TO THE SAME METHOD. SO 10 TIMES
  • 00:09:38
    MORE COMPUTE AND 10 TIMES MORE DATA APPLIED EVERY SINGLE
  • 00:09:42
    YEAR. IN FACT, FOR THE LAST DECADE AND EACH TIME YOU ADD 10
  • 00:09:46
    X MORE, YOU DELIVER VERY MEASURABLE IMPROVEMENTS IN
  • 00:09:50
    CAPABILITIES. THE MODELS PRODUCE FEWER HALLUCINATIONS,
  • 00:09:52
    THEY CAN ABSORB MORE INFORMATION, MORE FACTUAL
  • 00:09:56
    INFORMATION. THEY CAN INTEGRATE REAL-TIME INFORMATION AND
  • 00:10:00
    THEY'RE MORE OPEN TO STYLISTIC CONTROL. SO YOU CAN TUNE THEIR
  • 00:10:03
    BEHAVIOR, BOTH FROM A PRODUCT PERSPECTIVE AND FROM A,
  • 00:10:06
    SORT OF, PERSONALITY OF THE MODEL. AND THE PEOPLE WHO ARE
  • 00:10:10
    MOST SCARED ABOUT THIS ARGUE THAT THEY CAN SEE A PATH OVER THE
  • 00:10:15
    NEXT 3 OR 4 OR 5 ORDERS OF MAGNITUDE, CALL IT 5 YEARS,
  • 00:10:19
    WHERE THAT CAPABILITY ACTUALLY DOESN'T SLOW DOWN.
  • 00:10:22
    ACTUALLY, IT JUST GETS BETTER AND BETTER AND BETTER. THE
  • 00:10:24
    COUNTER ARGUMENT TO THAT IS THAT WE'RE ACTUALLY RUNNING OUT
  • 00:10:27
    OF DATA. WE NEED TO FIND NEW SOURCES OF INFORMATION TO LEARN
  • 00:10:30
    FROM. AND WE DON'T KNOW THAT IT'S JUST GOING TO KEEP GETTING
  • 00:10:32
    BETTER. THAT COULD ASYMPTOTE, RIGHT? WE'VE SEEN THAT MANY
  • 00:10:35
    TIMES IN THE HISTORY OF TECHNOLOGY. SO I THINK IT'S A
  • 00:10:38
    HEALTHY DEBATE TO HAVE. YOU MENTIONED THE WORD HALLUCINATION
  • 00:10:39
    AND ANYONE WHO'S PLAYED.
  • 00:10:42
    >> WITH ANY OF THESE PRODUCTS IS PROBABLY EXPERIENCED A
  • 00:10:48
    MOMENT WHERE IT APPEARS THAT THE BOT HAS HALLUCINATED. ONE OF
  • 00:10:50
    THE THINGS WE HEAR FROM PEOPLE IN THE INDUSTRY IS, THEY SAY,
  • 00:10:53
    YOU KNOW WHAT, WE CAN'T REALLY EXPLAIN WHY WE'VE SEEN IT AND
  • 00:10:59
    WE CAN'T REALLY EXPLAIN WHAT'S GOING ON INSIDE THE BOX,
  • 00:11:00
    CAN YOU?
  • 00:11:06
    >> I'M NOT SURE I CAN IN A WAY THAT WOULD SATISFY YOU. BUT I
  • 00:11:09
    ALSO CAN'T EXPLAIN WHY YOU CHOSE TO WEAR A BLUE SHIRT THIS
  • 00:11:13
    MORNING. I CAN EXPLAIN VERY LITTLE ABOUT YOU OR I. SO
  • 00:11:18
    THE REQUIREMENT FOR EXPLANATION IS I THINK, AGAIN, A LITTLE BIT
  • 00:11:21
    OF A HUMAN BIAS. WHEN I ASK YOU TO EXPLAIN WHY YOU HAD
  • 00:11:25
    SCRAMBLED EGGS FOR BREAKFAST THIS MORNING, YOU WILL CREATE A
  • 00:11:26
    FULLY IMAGINED
  • 00:11:29
    EXPLANATION IN HINDSIGHT. AND DEPENDING ON YOUR MOOD AND THE
  • 00:11:33
    CONTEXT THAT YOU'RE IN AND THE OTHER ASSOCIATED MEMORIES AT THE
  • 00:11:36
    TIME OF THE EXPLANATION, YOU'RE PROBABLY GOING TO SAY IT
  • 00:11:38
    SLIGHTLY DIFFERENTLY. AND IT'S NOT CLEAR THAT IT WAS
  • 00:11:42
    ENTIRELY CAUSAL. THERE IS SOME BASIS IN REALITY FOR IT.
  • 00:11:46
    BUT THIS ONE-TO-ONE MAPPING IS A VERY HYPER-RATIONAL
  • 00:11:49
    IMPOSITION ON THE WAY THAT WE THINK AS HUMANS. IN FACT,
  • 00:11:53
    WE JUST DON'T REALLY OPERATE LIKE THAT. WE OPERATE, I THINK,
  • 00:11:54
    FAR MORE BY ASSOCIATION.
  • 00:11:57
    >> SHOULD WE WORRY ABOUT THAT? I MEAN, SHOULD WE WORRY THAT
  • 00:12:00
    WE CANNOT UNDERSTAND OURSELVES IS ONE THING, BUT TO NOT
  • 00:12:03
    UNDERSTAND HOW THE COMPUTER GETS THERE. YOU KNOW, MOST PEOPLE
  • 00:12:06
    THINK THAT ALL OF THIS IS JUST A BIG MATHEMATICAL PROBLEM,
  • 00:12:10
    RIGHT? TWO PLUS TWO SHOULD ALWAYS EQUAL 4. IT'S NOT A BELIEF. OR DO
  • 00:12:14
    WE NEED TO UNDERSTAND IT? >> SO THAT,
  • 00:12:15
    THAT'S JUST NOT HOW
  • 00:12:18
    HUMAN REASONING AND CULTURE ACTUALLY WORKS. IN MY
  • 00:12:23
    OPINION, WE HUMAN REASONING WORKS AS A RESULT OF BEHAVIORAL
  • 00:12:23
    OBSERVATIONS.
  • 00:12:28
    WHEN YOU CONSISTENTLY DO THE SAME THING OVER AND OVER AGAIN,
  • 00:12:33
    I GAIN TRUST. YOU BECOME MORE RELIABLE. WHEN YOU TELL ME
  • 00:12:37
    THAT YOU ARE UNSURE ABOUT SOMETHING AND THEN I LOOK BACK
  • 00:12:40
    AND IT TURNS OUT THAT YOUR UNCERTAINTY WAS A CORRECT
  • 00:12:43
    ASSESSMENT OF THE OUTCOME. THAT UNCERTAINTY GIVES ME
  • 00:12:47
    REASSURANCE, RIGHT? SO I LEARN TO TRUST YOU AND INTERACT
  • 00:12:49
    WITH YOU BY OBSERVATION. AND I THINK THAT THAT'S
  • 00:12:51
    ACTUALLY HOW WE WILL OPERATE IN RELATION TO OUR MODELS,
  • 00:12:54
    HOW WE PRODUCE CULTURE AND SOCIETY. AND THAT'S HOW WE'LL
  • 00:12:57
    TREAT THESE MODELS. IT'S, YOU KNOW, THIS. THERE WAS A THERE
  • 00:13:01
    WAS AN EXPOSE COMPETITION OF RADIOLOGISTS 3 MONTHS AGO,
  • 00:13:05
    PANELS OF RADIOLOGISTS IN THIS LIVE ENVIRONMENT, TENS OF THE
  • 00:13:08
    VERY BEST CONSULTANTS GOING HEAD TO HEAD AGAINST THE
  • 00:13:10
    MACHINE. AND OBVIOUSLY, THERE'S LOTS OF SKEPTICISM. WHY IS THE
  • 00:13:14
    MACHINE PRODUCING THIS CORRECT CANCER DIAGNOSIS? WHY IS IT
  • 00:13:17
    PRODUCING THIS CORRECT EYE EXAMINATION AND IDENTIFYING
  • 00:13:20
    THAT THIS IS GLAUCOMA? WELL, YOU KNOW, WE CAN SORT OF ALLUDE TO
  • 00:13:23
    SOME OF THE TRAINING DATA THAT IT WAS TRAINED ON. BUT THE
  • 00:13:27
    FACT IS IT'S DOING IT MORE ACCURATELY, MORE RELIABLY THAN
  • 00:13:29
    THE HUMAN. I THINK THAT THAT EMPIRICAL OBSERVATION OF
  • 00:13:32
    THE FACTS IS WHAT WE SHOULD RELY ON TO BE ABLE TO TRUST THE
  • 00:13:35
    SYSTEMS. ONE OF THE THINGS YOU SAID JUST BEFORE THAT
  • 00:13:40
    >> WAS THE IDEA THAT THE AI MACHINES, THE TRAINING THAT
  • 00:13:44
    THE AI DOES, IS RUNNING OUT OF DATA TO CONSUME.
  • 00:13:46
    AND WE'LL HAVE A CONVERSATION. I THINK IN A MOMENT WITH US,
  • 00:13:49
    I WANT TO TALK ABOUT THE IDEA OF SYNTHETIC DATA THAT IS
  • 00:13:50
    ACTUALLY
  • 00:13:53
    DIGITAL DATA THAT'S MORE OR LESS REPRODUCED OFF OF DATA THAT
  • 00:13:57
    DOESN'T EXIST. BUT I ACTUALLY WANT TO ASK ABOUT THE
  • 00:14:00
    DATA THAT DOES EXIST. THERE ARE A NUMBER OF AUTHORS HERE AT
  • 00:14:03
    THE ASPEN IDEAS FESTIVAL AND A NUMBER OF JOURNALISTS AS WELL.
  • 00:14:07
    AND IT APPEARS THAT A LOT OF THE INFORMATION THAT HAS BEEN
  • 00:14:12
    TRAINED ON OVER THE YEARS HAS COME FROM THE WEB AND THE
  • 00:14:14
    QUESTION, AND SOME OF IT'S THE OPEN WEB AND SOME'S NOT.
  • 00:14:19
    AND WE'VE HEARD STORIES ABOUT HOW OPENAI WAS TURNING
  • 00:14:22
    YOUTUBE VIDEOS INTO TRANSCRIPTS AND THEN TRAINING ON THE
  • 00:14:28
    TRANSCRIPTS, AND WHO IS SUPPOSED TO OWN THE IP, WHO WAS SUPPOSED TO
  • 00:14:32
    GET VALUE FROM THAT IP, AND WHETHER, TO PUT IT IN VERY
  • 00:14:36
    BLUNT TERMS WHETHER THE AI COMPANIES HAVE EFFECTIVELY
  • 00:14:38
    STOLEN THE WORLD'S IP.
  • 00:14:40
    >> YEAH, I THINK THAT'S A VERY FAIR ARGUMENT. I THINK THAT
  • 00:14:44
    WITH RESPECT TO CONTENT THAT IS ALREADY ON THE OPEN WEB,
  • 00:14:47
    THE SOCIAL CONTRACT OF THAT CONTENT SINCE THE 90'S HAS BEEN
  • 00:14:51
    THAT IT IS FAIR USE. ANYONE CAN COPY IT, RECREATE WITH IT, REPRODUCE
  • 00:14:55
    WITH IT. THAT HAS BEEN FREEWARE, IF YOU LIKE. THAT'S BEEN THE
  • 00:14:58
    UNDERSTANDING THERE'S A SEPARATE CATEGORY WHERE A
  • 00:15:01
    WEBSITE OR PUBLISHER OR NEWS ORGANIZATION HAD EXPLICITLY
  • 00:15:05
    SAID DO NOT SCRAPE OR CRAWL ME FOR ANY OTHER REASON THAN
  • 00:15:08
    INDEXING ME SO THAT OTHER PEOPLE CAN FIND THAT CONTENT.
  • 00:15:10
    BUT THAT'S THE GRAY AREA. AND I THINK THAT'S GOING TO
  • 00:15:12
    WORK ITS WAY THROUGH THE COURTS.
  • 00:15:14
    AND WHAT DOES THAT MEAN WHEN YOU SAY IT'S A GRAY AREA?
  • 00:15:17
    WELL, SO FAR, SOME PEOPLE HAVE TAKEN THAT
  • 00:15:20
    INFORMATION. I DON'T KNOW WHO HAS, WHO HASN'T, BUT THAT'S
  • 00:15:23
    GOING TO GET LITIGATED IN THE U.S., RIGHT? DO YOU THINK THAT
  • 00:15:26
    THE IP LAWS SHOULD BE DIFFERENT? RIGHT NOW, I COULD GO, AS
  • 00:15:30
    AN AUTHOR, I CAN GO WRITE A BOOK. IN THE PROCESS OF WRITING MY
  • 00:15:33
    BOOK, I COULD GO TO THE LIBRARY OR BUY ALL
  • 00:15:36
    >> 40 OTHER BOOKS ON AMAZON. I COULD READ THOSE BOOKS,
  • 00:15:40
    PUT THEM IN MY BIBLIOGRAPHY AND HOPEFULLY PRODUCE A BOOK, AND I
  • 00:15:42
    WOULD OWE THE AUTHORS OF THOSE 40 BOOKS NOTHING MORE THAN
  • 00:15:46
    WHATEVER IT COST ME TO BUY THOSE BOOKS, AND MAYBE THE
  • 00:15:50
    LIBRARY BOUGHT THEM AS WELL. NOBODY EVER IMAGINED THAT I WAS
  • 00:15:54
    GOING TO BE ABLE TO PRODUCE A MILLION BOOKS EVERY 10 SECONDS,
  • 00:15:58
    RIGHT. AND WHAT THE ECONOMICS OF THAT SHOULD BE RIGHT.
  • 00:16:00
    >> YOU KNOW, LOOK, THE ECONOMICS OF INFORMATION ARE
  • 00:16:05
    ABOUT TO RADICALLY CHANGE BECAUSE WE CAN REDUCE THE COST
  • 00:16:09
    OF PRODUCTION OF KNOWLEDGE TO 0 MARGINAL COST.
  • 00:16:11
    AND THIS IS THIS IS JUST OF A VERY
  • 00:16:15
    DIFFICULT THING FOR PEOPLE TO PROCESS. BUT IN 15 OR 20
  • 00:16:18
    YEARS' TIME, WE WILL BE PRODUCING NEW SCIENTIFIC AND
  • 00:16:22
    CULTURAL KNOWLEDGE AT ALMOST 0 MARGINAL COST. IT WILL BE
  • 00:16:26
    WIDELY OPEN SOURCE AND AVAILABLE TO EVERYBODY. AND I
  • 00:16:29
    THINK THAT IS GOING TO BE, YOU KNOW, A TRUE INFLECTION POINT
  • 00:16:32
    IN THE HISTORY OF OUR SPECIES, BECAUSE WHAT ARE WE
  • 00:16:36
    COLLECTIVELY AS AN ORGANISM OF HUMANS, OTHER THAN A KNOWLEDGE
  • 00:16:40
    AND INTELLECTUAL PRODUCTION ENGINE, WE PRODUCE KNOWLEDGE,
  • 00:16:43
    OUR SCIENCE MAKES US BETTER. AND SO WHAT WE REALLY WANT IN
  • 00:16:46
    THE WORLD, IN MY OPINION, ARE NEW ENGINES THAT CAN
  • 00:16:47
    TURBOCHARGE DISCOVERY AND INVENTION.
  • 00:16:49
    >> OKAY. NOW LET'S GET PHILOSOPHICAL. I'M GOING TO GET YOU ON THE
  • 00:16:52
    COUCH. YOU KNOW, I DON'T KNOW ABOUT THAT, BECAUSE I
  • 00:16:55
    THINK YOU ACTUALLY JUST RAISED THE FUNDAMENTAL QUESTION,
  • 00:17:00
    WHICH IS WHO ARE WE AND WHAT ARE WE AND WHAT ARE WE IF IF
  • 00:17:05
    THIS TECHNOLOGY IS AS SUCCESSFUL AS I THINK YOU WANT IT TO BE,
  • 00:17:08
    WHAT IS OUR VALUE? I HAVE CHILDREN WHO ARE IN HIGH SCHOOL
  • 00:17:10
    RIGHT NOW, IN JUNIOR HIGH RIGHT NOW, AND WE'RE TRYING
  • 00:17:13
    TO FIGURE OUT WHAT THEY'RE SUPPOSED TO STUDY, HOW
  • 00:17:15
    IT'S BEST TO THINK ABOUT WRITING PAPERS. ARE THEY EVER
  • 00:17:18
    GOING TO HAVE TO LEARN HOW TO WRITE A PAPER?
  • 00:17:20
    SPEAK TO THAT?
  • 00:17:28
    >> LOOK, I THINK WHEN PEOPLE SAY WHAT SHOULD MY KIDS STUDY,
  • 00:17:32
    I I SAY SPEND MORE TIME DIGITALLY LEARNING FROM
  • 00:17:35
    EVERYBODY ELSE AS FAST AS POSSIBLE BECAUSE THE
  • 00:17:39
    TRADITIONAL METHOD OF TEXTBOOK BASED LEARNING IS STILL GOING
  • 00:17:42
    TO BE VALUABLE. AND IT IS OBVIOUSLY GOING TO BE REALLY
  • 00:17:46
    IMPORTANT THAT YOU CAN DEEPLY PAY ATTENTION AND, YOU KNOW,
  • 00:17:50
    READ FOR 2 HOURS AND THEN WRITE AN ESSAY FOR 2 HOURS, OF
  • 00:17:51
    COURSE, THAT'S VALUABLE.
  • 00:17:55
    I THINK THE GENERALIST WHO CAN SPEAK MULTIPLE SOCIAL LANGUAGES,
  • 00:17:59
    WHO CAN ADAPT VERY QUICKLY, THAT AGILITY IS GOING TO BE ONE
  • 00:18:03
    OF THE MOST VALUABLE SKILLS AND OPEN MINDEDNESS AND NOT BEING
  • 00:18:06
    JUDGMENTAL OR FEARFUL OF WHAT'S COMING, AND EMBRACING THE
  • 00:18:09
    TECHNOLOGY. >> BY THE WAY, YOUR KIDS? >> NOT YET. NOT YET.
  • 00:18:13
    OKAY. HERE'S MY QUESTION FOR YOU FOR THE PARENTS IN THE
  • 00:18:13
    ROOM.
  • 00:18:15
    >> AND BECAUSE YOU'RE SAYING WE SHOULD EMBRACE THIS STUFF.
  • 00:18:19
    ONE OF THE THINGS THAT I WORRY ABOUT IS, AND I WONDER WHETHER YOU
  • 00:18:21
    THINK THERE SHOULD BE POLICIES, WHETHER AT MICROSOFT,
  • 00:18:24
    IMPLEMENTED OR AT APPLE. APPLE, BY THE WAY, JUST PUT APPLE
  • 00:18:28
    INTELLIGENCE INTO THEIR PHONES, NEW PHONES, TO THE POINT
  • 00:18:31
    WHERE IT CAN WRITE THE PAPER FOR YOU. AND THESE KIDS
  • 00:18:33
    ARE GOING TO GO TO SCHOOL AND THEY'RE GOING TO HAVE TO WRITE
  • 00:18:35
    THE PAPER. ARE THEY REALLY GOING TO WRITE THE PAPER OR ARE THEY NOT
  • 00:18:39
    GOING TO WRITE THE PAPER? WHAT KINDS OF TOOLS OR RESTRICTIONS
  • 00:18:42
    OR OTHER THINGS DO YOU THINK SHOULD BE PUT IN PLACE FOR
  • 00:18:46
    STUDENTS, LET ALONE ADULTS? >> YEAH, LOOK, I THINK WE HAVE TO
  • 00:18:48
    BE SLIGHTLY CAREFUL ABOUT.
  • 00:18:51
    >> FEARING THE DOWNSIDE OF EVERY TOOL, YOU KNOW, JUST AS
  • 00:18:54
    WHEN CALCULATORS CAME IN, THERE WAS A KIND OF GUT REACTION
  • 00:18:57
    THAT EVERYONE'S GOING TO BE ABLE TO SORT OF SOLVE THE EQUATIONS
  • 00:18:59
    INSTANTLY AND IT'S GOING TO MAKE US DUMBER BECAUSE WE'RE
  • 00:19:03
    NOT GOING TO DO MENTAL ARITHMETIC. PARTLY THAT'S TRUE. YOU KNOW,
  • 00:19:06
    I'VE GOT WORSE AT REMEMBERING TELEPHONE NUMBERS SINCE I'VE
  • 00:19:09
    HAD MY PHONE RIGHT WHEN I WAS YOUNG. I I I REMEMBERED
  • 00:19:13
    PROBABLY 40 OR 50 NUMBERS, YOU KNOW, SO I'M DEFINITELY LOSING
  • 00:19:15
    SOME SKILLS. MAYBE MY MAP NAVIGATION SKILLS ARE NOT
  • 00:19:18
    QUITE AS GOOD NOW BECAUSE I JUST GO TO GOOGLE MAPS OR APPLE
  • 00:19:21
    MAPS STRAIGHTAWAY, RIGHT? SO THERE ARE GOING TO BE SOME
  • 00:19:25
    SHIFTS THAT WE HAVE TO MAKE. BUT NET NET, I BELIEVE THAT IF
  • 00:19:28
    WE REALLY EMBRACE THIS TECHNOLOGY AND RESPOND WITH
  • 00:19:33
    GOVERNANCE IN AN AGILE WAY, WHICH IS SUPER IMPORTANT, IT'S CLEAR
  • 00:19:35
    THAT SOME OF OUR KIDS WERE GETTING PHONE ADDICTION.
  • 00:19:38
    IT'S CLEAR THAT THEY WERE SPENDING FAR TOO MUCH
  • 00:19:41
    TIME ON SOCIAL MEDIA, THAT IT WAS MAKING THEM FEEL ANXIOUS
  • 00:19:44
    AND FRUSTRATED. AND FRANKLY, IT WAS PROBABLY OBVIOUS TO THOSE
  • 00:19:48
    OF US IN TECHNOLOGY SOONER THAN WE MADE A BIG FUSS ABOUT IT.
  • 00:19:50
    RIGHT. SO WHAT WE HAVE TO DO IS MAKE SURE THAT WE ARE
  • 00:19:53
    REACTING WITH OUR INTERVENTIONS AND OUR
  • 00:19:56
    GOVERNANCE AS FAST AS WE'RE CREATING THE TECHNOLOGY. >> CAN I ASK
  • 00:19:59
    A DIFFERENT QUESTION? THERE ARE A NUMBER OF ENTREPRENEURS IN
  • 00:20:03
    THIS ROOM, SOME IN THE DIGITAL SPACE WHO ARE TRYING TO GET
  • 00:20:06
    INTO AI. I KNOW THAT THEY'RE THINKING ABOUT IT.
  • 00:20:09
    WHAT WOULD YOU ADVISE THEM IN TERMS OF WHERE TO GO AND WHAT
  • 00:20:11
    TO DO AND WHAT INDUSTRIES OR PLACES
  • 00:20:14
    >> THERE'S GOING TO BE OPPORTUNITY IN, IN A WORLD IN
  • 00:20:17
    WHICH AI SHOULD BE ABLE TO WRITE ALL OF THESE APPS AND OTHER
  • 00:20:20
    PROGRAMS FOR ME AT ANY MOMENT. YEAH,
  • 00:20:24
    IF THERE IS NO DEFENSIBLE MOAT AROUND A BUSINESS BECAUSE
  • 00:20:28
    AI IS GOING TO BE ABLE TO JUST DO IT FOR YOU,
  • 00:20:30
    WOULD ANYBODY INVEST IN ANYTHING?
  • 00:20:33
    >> I THINK THAT THERE IS GOING TO BE THIS BIFURCATION WHERE,
  • 00:20:36
    AT THE VERY LARGEST SCALES, WITH THE GREATEST CAPITAL
  • 00:20:40
    INFRASTRUCTURE, WE'RE GOING TO BE PRODUCING MODELS WHICH
  • 00:20:42
    ARE TRULY IMPRESSIVE. BUT EQUALLY,
  • 00:20:47
    KNOWLEDGE SPREADS FASTER THAN WE'VE EVER SEEN IN THE HISTORY
  • 00:20:52
    OF HUMANITY. AND SO THE OPEN SOURCE MODELS, WHICH ARE 100% FREE,
  • 00:20:55
    ARE WITHIN 18 MONTHS OF THE QUALITY AND PERFORMANCE OF WHAT WAS IN
  • 00:20:58
    THE PRIVATE SPACE, YOU KNOW, JUST A MOMENT AGO. NOW THAT'S A
  • 00:21:01
    VERY, VERY BIG DEAL, RIGHT? AND I THINK THAT TREND
  • 00:21:04
    ACTUALLY IS GOING TO CONTINUE. GPT-3 COST
  • 00:21:09
    TENS OF MILLIONS OF DOLLARS TO TRAIN AND IS NOW AVAILABLE FREE
  • 00:21:12
    AND OPEN SOURCE. YOU CAN OPERATE ON A SINGLE PHONE
  • 00:21:15
    OR ON A LAPTOP. GPT-4, THE SAME STORY. SO I THINK THAT
  • 00:21:18
    THAT'S GOING TO MAKE THE RAW MATERIALS NECESSARY TO BE
  • 00:21:22
    CREATIVE AND ENTREPRENEURIAL, CHEAPER AND MORE AVAILABLE THAN
  • 00:21:22
    EVER BEFORE.
  • 00:21:25
    >> OK, SO I KNOW YOU'RE NOW AT ONE OF THE BIG GIANTS IN
  • 00:21:29
    TECH. BUT LET ME ASK YOU THIS. HOW DO YOU THINK,
  • 00:21:32
    HONESTLY, ABOUT THE CONCENTRATION OF POWER IN
  • 00:21:35
    TECHNOLOGY, ESPECIALLY AS IT RELATES TO AI? BECAUSE ONE OF
  • 00:21:37
    THE THINGS THAT'S HAPPENING WITH AI, AT LEAST THESE LARGE LANGUAGE
  • 00:21:41
    MODELS, YOU NEED AN ENORMOUS AMOUNT OF COMPUTE, WHICH MEANS
  • 00:21:44
    YOU NEED AN ENORMOUS AMOUNT OF MONEY. AND
  • 00:21:46
    IF THIS IS NOT STUFF THAT'S BEING BUILT IN A GARAGE
  • 00:21:51
    ANYMORE. AND SO THE THE TRUE POWER HAS ACTUALLY GONE BACK TO
  • 00:21:55
    THE LIKES OF YOU. YOU ENDED UP BACK AT MICROSOFT. THERE'S GOOGLE.
  • 00:21:59
    AMAZON IS TRYING TO BUILD AI. EVEN OPENAI, WHICH WAS ON
  • 00:22:03
    ITS OWN, DECIDED TO ULTIMATELY PARTNER WITH MICROSOFT.
  • 00:22:06
    IS THIS A GOOD THING? IS IT A BAD THING? LINA KHAN AND OTHERS
  • 00:22:09
    HAVE LOOKED AT IT. IT'S NOT THAT
  • 00:22:13
    THERE'S BEEN WHOLESALE MERGERS OF THESE THINGS BUT EVEN THE
  • 00:22:15
    PARTNERSHIPS UNTO THEMSELVES.
  • 00:22:18
    >> YEAH, LOOK, IT MAKES ME VERY ANXIOUS. THE REALITY
  • 00:22:22
    IS EVERYWHERE YOU LOOK, WE SEE POWER CONCENTRATIONS,
  • 00:22:25
    WHETHER IT'S IN NEWS MEDIA AND THE POWER OF THE NEW YORK
  • 00:22:28
    TIMES, THE FINANCIAL TIMES, THE ECONOMIST, THE GREAT NEWS
  • 00:22:32
    ORGANIZATIONS OR WHETHER IT'S IN CITIES, THE CONCENTRATION OF
  • 00:22:35
    POWER AROUND A FEW BIG METROPOLITAN ELITE CITIES,
  • 00:22:39
    WHETHER IT'S IN TECHNOLOGY COMPANIES. AND THE PRACTICAL
  • 00:22:43
    FACT IS THAT OVER TIME POWER COMPOUNDS.
  • 00:22:46
    POWER HAS A TENDENCY TO ATTRACT MORE POWER BECAUSE IT
  • 00:22:49
    CAN GENERATE THE INTELLECTUAL RESOURCES AND THE FINANCIAL
  • 00:22:52
    RESOURCES TO BE SUCCESSFUL AND OUTCOMPETE IN OPEN MARKETS.
  • 00:22:55
    SO ON THE ONE HAND, IT FEELS LIKE AN INCREDIBLY COMPETITIVE
  • 00:23:00
    SITUATION BETWEEN MICROSOFT, META, GOOGLE AND SO ON. YOU
  • 00:23:04
    KNOW, IT CLEARLY IS ALSO TRUE THAT WE'RE ABLE TO MAKE
  • 00:23:07
    INVESTMENTS THAT ARE JUST UNPRECEDENTED IN CORPORATE HISTORY.
  • 00:23:11
    >> CAN YOU SPEAK TO WHAT I IMAGINE IS A FRIENDLY OR IT
  • 00:23:14
    COULD TURN INTO A FRENEMY SITUATION WITH OPENAI, WHICH
  • 00:23:16
    IS YOU HAVE THIS PARTNERSHIP NOW, BUT YOU GUYS ARE
  • 00:23:20
    DEVELOPING YOUR OWN TECH AS WELL, YOUR OWN AI. IN FACT,
  • 00:23:22
    THEY HELPED YOU GET TO THE LEAD. I WANT TO READ YOU SOMETHING.
  • 00:23:25
    THIS IS SEMAFOR. I THOUGHT IT WAS ACTUALLY A FASCINATING
  • 00:23:28
    ANALOGY. OKAY. WITH THE TOUR DE
  • 00:23:32
    FRANCE COMING UP, HERE'S A BIKE ANALOGY THAT I THINK EVERYONE
  • 00:23:35
    WILL UNDERSTAND. OPENAI AND MICROSOFT ARE IN A 2-PERSON
  • 00:23:39
    BREAKAWAY, FAR AHEAD OF THE PELOTON, BY WORKING TOGETHER.
  • 00:23:44
    NOW THEY MIGHT BE ABLE TO KEEP THE LEAD. IF EITHER GOES SOLO,
  • 00:23:48
    THEY MAY BOTH FALL BEHIND. BOTH COMPANIES HAVE TO THINK ABOUT
  • 00:23:53
    THEIR FINISH LINE STRATEGY WHEN THEY WILL HAVE TO DITCH THE
  • 00:23:53
    OTHER.
  • 00:23:58
    >> YOU'RE ON THE BIKE. AGAIN, I DON'T BUY THE METAPHOR THAT
  • 00:24:03
    THERE IS A FINISH LINE. THIS IS ANOTHER FALSE FRAME. IT'S ACTUALLY
  • 00:24:07
    VERY MUCH ALSO TRUE IN THE CONTEXT OF OUR RACE WITH CHINA.
  • 00:24:10
    WE'RE GOING HEAD TO HEAD, IT'S ZERO-SUM, THERE WILL BE A FINISH
  • 00:24:13
    LINE, WHEN WE CROSS THAT WE WILL HAVE AN AGI, AND IF WE GET
  • 00:24:16
    IT 3 YEARS BEFORE THEM WE WILL BE ABLE TO, YOU SEE HOW BIG THIS
  • 00:24:20
    IS. JUST STOP IT. WE HAVE TO STOP FRAMING EVERYTHING AS AN
  • 00:24:24
    ADVERSARIAL RACE LIKE IT IS TRUE THAT WE HAVE FEROCIOUS
  • 00:24:26
    COMPETITION WITH THEM. THEY'RE AN INDEPENDENT COMPANY. WE
  • 00:24:29
    DON'T OWN OR CONTROL THEM. WE DON'T EVEN HAVE ANY BOARD
  • 00:24:31
    MEMBERS. AND, YOU KNOW, SO THEY DO THEIR OWN
  • 00:24:34
    THING. BUT WE HAVE A DEEP PARTNERSHIP. I'M VERY GOOD FRIENDS
  • 00:24:38
    WITH SAM HAVE HUGE RESPECT AND, YOU KNOW, TRUST AND FAITH IN
  • 00:24:41
    WHAT THEY'VE DONE. AND THAT'S HOW IT'S GOING TO ROLL FOR
  • 00:24:44
    MANY, MANY YEARS TO COME. YOU MENTIONED CHINA.
  • 00:24:46
    >> SPEAK TO WHERE YOU THINK AND I KNOW YOU DON'T LIKE THE
  • 00:24:49
    FRAME, BUT I THINK PEOPLE WANT TO AT LEAST UNDERSTAND WHAT'S
  • 00:24:54
    GOING ON, WHICH IS TO SAY, WHERE ARE WE RELATIVE TO?
  • 00:24:56
    WHERE ARE THEY? YEAH.
  • 00:25:00
    >> IF WE APPROACH THIS WITH THE DEFAULT ADVERSARIAL MINDSET,
  • 00:25:05
    WHICH WITH ALL DUE RESPECT TO MY GOOD FRIENDS IN DC AND THE
  • 00:25:08
    MILITARY INDUSTRIAL COMPLEX, IS THE DEFAULT FRAME, THAT IT CAN
  • 00:25:12
    ONLY BE A NEW COLD WAR, THEN THAT IS EXACTLY WHAT IT WILL BE
  • 00:25:15
    BECAUSE IT WILL BECOME A SELF-FULFILLING PROPHECY.
  • 00:25:17
    THEY WILL FEAR THAT WE FEAR THAT WE'RE GOING TO BE
  • 00:25:21
    ADVERSARIAL. SO THEY HAVE TO BE ADVERSARIAL AND THIS IS ONLY
  • 00:25:24
    GOING TO ESCALATE AND WILL END IN CATASTROPHE. AND SO WE HAVE
  • 00:25:28
    TO FIND WAYS TO COOPERATE, BE RESPECTFUL OF THEM WHILE ALSO
  • 00:25:30
    ACKNOWLEDGING THAT WE HAVE DIFFERENT SET OF VALUES.
  • 00:25:34
    AND FRANKLY, WHEN I LOOK OUT OVER THE NEXT CENTURY, I THINK
  • 00:25:39
    THAT PEACE IS GOING TO BE A PRODUCT OF US IN THE WEST,
  • 00:25:41
    AND PARTICULARLY AMERICA, LEADING THE TIP OF THE SPEAR,
  • 00:25:46
    KNOWING HOW TO GRACEFULLY DEGRADE THE EMPIRE THAT
  • 00:25:49
    WE HAVE MANAGED OVER THE PREVIOUS CENTURY, BECAUSE THIS
  • 00:25:52
    IS A RISING POWER OF PHENOMENAL FORCE WITH A DIFFERENT SET OF
  • 00:25:57
    VALUES TO OURS. AND SO WE HAVE TO FIND WAYS TO COEXIST WITHOUT
  • 00:25:59
    JUDGMENT WITHOUT GOING TO WAR WITH THEM UNNECESSARILY,
  • 00:26:02
    BECAUSE I THINK THAT WOULD BE TERRIBLE FOR BOTH OF US.
  • 00:26:07
    >> YOU MAY DIFFER WITH MY NEXT FRAMING, BUT I'M GONNA TRY.
  • 00:26:07
    GREAT.
  • 00:26:11
    THERE IS A DEBATE IN THE INDUSTRY, WHETHER YOU LIKE IT OR NOT, ABOUT
  • 00:26:18
    THIS IDEA OF OPEN SOURCE VERSUS CLOSED SOURCE. CLOSED
  • 00:26:21
    SOURCE IS A LOT OF THE THINGS THAT ACTUALLY OPENAI AND YOU'RE
  • 00:26:25
    DOING AT MICROSOFT. OPEN SOURCE IS AVAILABLE TO THE PUBLIC.
  • 00:26:29
    YOU CAN SEE INSIDE OF IT. AND RIGHT NOW YOU WOULD ARGUE
  • 00:26:32
    THAT META, THE OWNER OF FACEBOOK, IS THE LEADER THERE
  • 00:26:34
    WITH ITS LLAMA PRODUCT.
  • 00:26:36
    THERE ARE QUESTIONS ABOUT
  • 00:26:40
    BOTH MODELS. ELON MUSK THINKS EVERYTHING SHOULD BE OPEN
  • 00:26:46
    SOURCE. THERE ARE A WHOLE BUNCH OF PEOPLE WHO
  • 00:26:49
    FEAR THAT ACTUALLY, IF YOU ALLOW THE OPEN SOURCE TECH INTO
  • 00:26:51
    THE WILD, THAT IT COULD BE MISUSED AND THEREFORE, THAT'S
  • 00:26:55
    WHY THEY'RE ADVOCATES FOR CLOSED SOURCE. >> YEAH, AGAIN, I
  • 00:26:59
    OBVIOUSLY OBJECT TO THE FRAMING. YOU KNOW, SO,
  • 00:27:05
    >> WE AT MICROSOFT, IN MY FOURTH WEEK, OPEN SOURCED THE STRONGEST
  • 00:27:09
    MODEL FOR ITS WEIGHT CLASS, PHI-3, THAT IS IN THE WILD, HANDS
  • 00:27:12
    DOWN, STILL. YOU CAN USE IT ON PHONES AND ON DESKTOPS.
  • 00:27:15
    THE BEST OPEN SOURCE MODEL. SO WE TOTALLY BELIEVE IN OPEN
  • 00:27:18
    SOURCE AS WELL. AGAIN, IT'S A MISFRAMING, BUT WE ALSO
  • 00:27:21
    BELIEVE IN CREATING VERY POWERFUL, VERY LARGE, VERY
  • 00:27:25
    EXPENSIVE MODELS, WHICH WE MAY OPEN SOURCE SOME OF OVER TIME.
  • 00:27:28
    SO IT ISN'T THAT WE'RE AGAINST OPEN SOURCE AT ALL. WE JUST, YOU KNOW,
  • 00:27:31
    TEND TO SEE THAT YOU HAVE TO HAVE A MIXTURE OF APPROACHES.
  • 00:27:34
    >> ALL OF THESE LARGE LANGUAGE MODELS, AND OTHER SMALLER
  • 00:27:36
    MODELS AS WELL, BUT THE LARGE LANGUAGE MODELS IN PARTICULAR,
  • 00:27:39
    AS I MENTIONED AT THE BEGINNING, REQUIRE HUGE AMOUNT
  • 00:27:42
    OF COMPUTING POWER. AND THAT ALSO MEANS THEY
  • 00:27:45
    REQUIRE A TON OF ENERGY. THERE ARE A WHOLE NUMBER OF
  • 00:27:49
    CONVERSATIONS HAPPENING HERE AT THE ASPEN IDEAS FESTIVAL AROUND
  • 00:27:53
    ENERGY USE AND CLIMATE AND THE LIKE. AND I'M CURIOUS WHAT YOU
  • 00:27:56
    THINK ABOUT, GIVEN ALL THE PLEDGES, FRANKLY, THAT
  • 00:27:59
    TECHNOLOGY COMPANIES MADE EVEN ONLY 2, 3, IN 4 YEARS AGO
  • 00:28:04
    AROUND CLIMATE AND GETTING TO NET ZERO BY 2030 AND THE LIKE,
  • 00:28:08
    IT APPEARS THAT ON THIS TRAJECTORY, ALL OF THOSE THINGS
  • 00:28:08
    WOULD GO OUT THE WINDOW.
  • 00:28:12
    >> NO. SO, LOOK, MICROSOFT IS ALREADY 100% RENEWABLE BY THE
  • 00:28:14
    END OF THIS YEAR. BY 2030 WE WILL BE FULLY
  • 00:28:19
    NET-ZERO. WE'RE ACTUALLY, YOU KNOW, 100% SUSTAINABLE IN OUR
  • 00:28:23
    WATER USE BY THE END OF NEXT YEAR. SO LOOK, WE'RE VERY COMMITTED TO
  • 00:28:26
    KEEPING UP WITH THAT. AND THE GOOD NEWS ABOUT NEW DEMAND IS
  • 00:28:28
    THAT IT COMES WITH A NEW OPPORTUNITY TO REINFORCE
  • 00:28:32
    SUSTAINABILITY, WHEREAS OLD SUPPLY, I THINK, IS MUCH HARDER
  • 00:28:34
    TO MOVE. SO I THINK THAT WE'RE ACTUALLY IN A VERY GOOD PLACE
  • 00:28:37
    AND IT'S TRUE THAT IT IS GOING TO PUT AN UNPRECEDENTED BURDEN ON THE
  • 00:28:42
    GRID BECAUSE SCALE-WISE, WE'RE DEFINITELY CONSUMING FAR MORE
  • 00:28:45
    ENERGY THAN THE GRID HAS PREVIOUSLY MANAGED. BUT IT'S,
  • 00:28:48
    YOU KNOW, THE GRID IS LONG OVERDUE A RADICAL UPLIFT,
  • 00:28:50
    RIGHT? THESE ARE THE KINDS OF INFRASTRUCTURE INVESTMENTS THAT
  • 00:28:53
    WE NEED TO BE MAKING IN OUR COUNTRIES IN THE WEST FOR THE
  • 00:28:57
    FUTURE OF, YOU KNOW, SOCIETY SPENDING LESS ON WAR AND MORE
  • 00:29:00
    ON SPENDING HUNDREDS OF BILLIONS OF DOLLARS UP LEVELING
  • 00:29:03
    OUR GRID SO THAT WE CAN MANAGE ALL OF THE NEW BATTERY
  • 00:29:06
    STORAGE CAPACITY THAT'S GOING TO COME FROM LOCAL
  • 00:29:10
    GENERATION IN YOUR HOMES AND SMALL BUSINESSES AND CARS.
  • 00:29:12
    >> YOU SPEND A LOT OF TIME IN WASHINGTON. DO YOU
  • 00:29:15
    THINK THE FOLKS IN WASHINGTON, WHO I THINK STRUGGLE TO EVEN
  • 00:29:18
    UNDERSTAND SOCIAL MEDIA, UNDERSTAND THIS, WHETHER IT COMES
  • 00:29:21
    TO THE IMPLICATIONS OF AI BUT ALSO THE ENERGY IMPLICATIONS
  • 00:29:24
    AND EVERYTHING ELSE? >> YEAH, I THINK TO SOME EXTENT THE
  • 00:29:27
    CONVERSATION IS SHIFTING NOW TOWARDS THE RENEWABLE ENERGY
  • 00:29:29
    QUESTION. I THINK PEOPLE DO REALIZE THAT WE'RE CONSUMING VAST
  • 00:29:33
    AMOUNTS OF ENERGY AND, YOU KNOW, THIS HAS TO SHIFT,
  • 00:29:36
    BUT THIS ISN'T GOING TO SHIFT JUST BECAUSE MICROSOFT
  • 00:29:39
    BUYS 100% RENEWABLE, WHICH OF COURSE GOOGLE DOES AS WELL, AND
  • 00:29:41
    THE OTHER COMPANIES HEADING IN THAT DIRECTION. YOU KNOW,
  • 00:29:44
    SO I THINK IT SHOULD BE CELEBRATED THAT WE'RE SETTING A
  • 00:29:47
    BIT OF A STANDARD, BUT IT NEEDS NATIONAL INFRASTRUCTURE TO
  • 00:29:50
    SUPPORT THAT TRANSITION.
  • 00:29:55
    HOW DO YOU THINK ABOUT THE COMPETITION WITH OTHERS AND THE
  • 00:29:58
    IDEA THAT IT IS POSSIBLE THAT ACTUALLY THESE LARGE
  • 00:30:00
    LANGUAGE MODELS. MAYBE YOU DISAGREE WITH THIS FRAMING
  • 00:30:05
    COULD BECOME COMMODITIZED, WHICH IS TO SAY YOU'VE GOT ONE.
  • 00:30:07
    GOOGLE HAS ONE.
  • 00:30:09
    APPLE'S NOW PARTNERED WITH OPENAI, BUT THEY'RE APPARENTLY
  • 00:30:13
    GOING TO PARTNER WITH LOTS OF OTHER PEOPLE. AMAZON, WHICH
  • 00:30:15
    OWNS A PIECE OF ANTHROPIC
  • 00:30:18
    YOU COULD ARGUE HAS ONE AND IS BUILDING ITS OWN. WILL EVERYBODY
  • 00:30:21
    HAVE ONE? IF EVERYBODY HAS ONE, HOW VALUABLE IS IT
  • 00:30:25
    ANYWAY? >> AS KNOWLEDGE PROLIFERATES, EVERYTHING
  • 00:30:29
    ESSENTIALLY BECOMES COMMODITIZED. APP DEVELOPMENT
  • 00:30:32
    USED TO BE A SUPER UNIQUE, HIGHLY SKILLED.
  • 00:30:35
    SET OF THINGS KNOWN TO A TINY GROUP. NOW EVERYONE CAN SPIN UP
  • 00:30:38
    AN APP. THINK OF EARLY WEB DEVELOPMENT. REMEMBER WHEN THAT
  • 00:30:41
    WAS LIKE A REAL THING? NOW YOU CAN JUST, YOU KNOW, PLUG AND PLAY
  • 00:30:44
    AND DRAG BUTTONS AROUND AND YOU NEED NO TECHNICAL SKILLS
  • 00:30:46
    WHATSOEVER. OTHER THAN BEING ABLE TO POINT A MOUSE TO BUILD
  • 00:30:50
    A PRETTY DECENT WEBSITE. OVER TIME, AS THE KNOWLEDGE AND
  • 00:30:54
    CAPABILITY PROLIFERATES, IT DOES BECOME COMMODITIZED.
  • 00:30:58
    >> HOW DO YOU THINK ABOUT AI AND HOW FULLY INTEGRATED IT,
  • 00:31:01
    THIS GOES TO THE PRIVACY QUESTION ALSO, BY THE WAY, AND TO THE
  • 00:31:02
    TRUST QUESTION,
  • 00:31:06
    SHOULD GET INTEGRATED INTO EVERYTHING, WHICH IS TO SAY,
  • 00:31:09
    FOR EXAMPLE, RIGHT NOW, THE EU, AS YOU KNOW, HAS ACCUSED
  • 00:31:11
    MICROSOFT OF BREACHING ANTITRUST RULES. AS YOU MAY
  • 00:31:15
    KNOW, APPLE HAS DELAYED INTRODUCING ITS OWN AI FEATURES
  • 00:31:18
    IN THE EU BECAUSE OF THE REGULATORY ENVIRONMENT. AND I
  • 00:31:22
    THINK THERE'S A REAL QUESTION ABOUT, YOU KNOW, HOW CONNECTED
  • 00:31:26
    ALL OF THESE PRODUCTS NEED TO BE, FRANKLY, BOTH TO WORK,
  • 00:31:29
    BUT ALSO, THEREFORE, WHAT THEY DO TO THE REST OF THE
  • 00:31:29
    BUSINESS.
  • 00:31:33
    >> YEAH, I MEAN, I I THINK THAT'S A GOOD QUESTION. SO
  • 00:31:38
    >> YOU LIKE THE FRAMING? >> YES, FINALLY, HAHA.
  • 00:31:44
    FRICTION IS GOING TO BE OUR FRIEND HERE. AND THAT IS A
  • 00:31:48
    DIFFERENT YOU KNOW, THAT IS A A DIFFERENT REALITY TO WHAT WE'VE
  • 00:31:52
    EXPERIENCED IN THE PAST WHERE EVERY, YOU KNOW, SECONDS THAT
  • 00:31:55
    WE CAN GAIN IN PUTTING TECHNOLOGY OUT INTO THE WORLD
  • 00:31:59
    IS ALWAYS PRODUCING NET BENEFIT. AND I JUST THINK,
  • 00:32:02
    YOU KNOW, THESE TECHNOLOGIES ARE BECOMING SO POWERFUL, THEY WILL
  • 00:32:05
    BE SO INTIMATE. THEY'LL BE SO EVER-PRESENT THAT THIS IS A
  • 00:32:09
    MOMENT WHERE IT'S FINE TO TAKE STOCK AND THINK AND IF IT TAKES
  • 00:32:12
    6 MONTHS LONGER, 18 MONTHS LONGER, WELL, YOU KNOW, MAYBE
  • 00:32:17
    EVEN LONGER THAN THAT. IT'S IT'S IT'S TIME WELL SPENT.
  • 00:32:19
    >> I WANT TO OPEN UP TO QUESTIONS IN JUST A MOMENT.
  • 00:32:22
    BUT I WANT TO TALK ABOUT THE IDEA BEFORE WE DO
  • 00:32:26
    OF THE, I GUESS, EMOTIONAL INTELLIGENCE OF AI. THIS WAS
  • 00:32:28
    SOMETHING YOU ACTUALLY SPENT A LOT OF TIME THINKING ABOUT AND
  • 00:32:31
    WORKING ON WHEN IT CAME TO INFLECTION AND WHAT INFLECTION
  • 00:32:35
    WAS ABOUT. YEAH, INFLECTION'S AI, ITS ORIGINAL
  • 00:32:40
    PROJECT, HAS HAD A REMARKABLE EMOTIONAL IQ, OR EQ,
  • 00:32:44
    COMPONENT TO IT. AND THERE WERE SOME SURVEYS AND OTHER
  • 00:32:46
    THINGS THAT SAID THAT PEOPLE ACTUALLY THOUGHT THAT, WHEN
  • 00:32:49
    THEY WERE HAVING CONVERSATIONS WITH IT, IT WAS BETTER THAN HAVING
  • 00:32:51
    A CONVERSATION WITH A HUMAN.
  • 00:32:54
    MAYBE THIS GETS BACK TO SOME OF THE PHILOSOPHICAL CONVERSATIONS
  • 00:32:55
    WE WERE HAVING BEFORE. BUT,
  • 00:32:58
    YOU KNOW, WE'VE ALL SEEN, OR A LOT OF PEOPLE HAVE SEEN, THE MOVIE
  • 00:33:00
    HER. IS THAT REALISTIC?
  • 00:33:03
    >> YOU KNOW, I THINK FOR THE LONGEST TIME IN SILICON VALLEY
  • 00:33:06
    AND IN TECHNOLOGY, WE HAVE BEEN OBSESSED WITH FUNCTIONAL
  • 00:33:09
    UTILITARIANISM. YOU KNOW, YOU WANT TO EFFICIENTLY GET
  • 00:33:13
    SOMEBODY INTO AN APP, SOLVE THEIR PROBLEM, HELP THEM TO BUY
  • 00:33:16
    SOMETHING, BOOK SOMETHING, SEND SOMETHING, AND THEN GET THEM
  • 00:33:16
    OUT.
  • 00:33:20
    WE NOW HAVE A NEW KIND OF DESIGN MATERIAL, A NEW CLAY,
  • 00:33:23
    IF YOU LIKE, AS CREATIVES AND DEVELOPERS, TO PRODUCE
  • 00:33:27
    EXPERIENCES, WHICH SPEAK TO THE OTHER SIDE OF OUR BRAINS,
  • 00:33:31
    YOU KNOW, AND THAT'S AN AMAZING OPPORTUNITY TO DESIGN WITH
  • 00:33:34
    REAL INTENT TO BE KIND AND RESPECTFUL AND EVEN-HANDED.
  • 00:33:37
    AND YOU'RE RIGHT, IN A BUNCH OF THE STUDIES PEOPLE FOUND THAT
  • 00:33:42
    PI, THE AI THAT I BUILT PREVIOUSLY, WAS JUST VERY KIND
  • 00:33:46
    AND CARING. IT JUST ASKED YOU QUESTIONS. IT LISTENED. IT
  • 00:33:49
    RESPONDED WITH ENTHUSIASM. IT WAS SUPPORTIVE. IT CHALLENGED
  • 00:33:53
    YOU OCCASIONALLY. IT WAS ALWAYS EVEN-HANDED AND NON-JUDGMENTAL,
  • 00:33:57
    EVEN IF YOU CAME WITH A QUOTE, UNQUOTE, YOU KNOW, JUDGMENT OR
  • 00:34:01
    WITH A SORT OF RACIST OR DISCRIMINATORY VIEW. IF YOU,
  • 00:34:04
    YOU KNOW, TALKING ABOUT HOW YOU ARE AFRAID OF NEW IMMIGRANTS
  • 00:34:07
    ARRIVING IN YOUR AREA, TAKING YOUR JOBS OR AFRAID OF YOUR
  • 00:34:11
    CHILD MARRYING A BLACK PERSON OR, YOU KNOW, IT WOULDN'T JUST
  • 00:34:13
    SHUT YOU DOWN AND SAY THAT'S LIKE YOU SHOULDN'T BE SAYING
  • 00:34:16
    THAT. IT WOULDN'T ADD MORE TOXICITY TO THE EQUATION.
  • 00:34:19
    IT WOULD ACTUALLY ENGAGE WITH YOU AND TALK AND HELP YOU
  • 00:34:22
    EXPLORE YOUR FEELINGS WHILE ALSO BEING HELPFUL AND KNOWLEDGEABLE
  • 00:34:25
    AND, YOU KNOW, TEACHING YOU IN PROVIDING YOU WITH ACCESS TO
  • 00:34:28
    INFORMATION IN REAL TIME ON THE WEB. AND SO I THINK IT WAS A
  • 00:34:33
    PROOF POINT THAT TECHNOLOGY DESIGNED WITH INTENTIONALITY
  • 00:34:36
    REALLY CAN MAKE A DIFFERENCE, AND THAT'S ALL POSSIBLE NOW WITH THIS
  • 00:34:38
    NEW... >> LET ME RAISE ONE INTERESTING POINT, THOUGH,
  • 00:34:40
    BECAUSE OF THE CONTROVERSIAL TOPICS THAT ALL OF US
  • 00:34:44
    >> MAY ULTIMATELY END UP TALKING TO AN AI BOT ABOUT
  • 00:34:48
    WHICH IS, WHO IS THE GOD, IF YOU WILL, THAT'S GOING TO TELL
  • 00:34:49
    US WHAT'S RIGHT AND WRONG.
  • 00:34:52
    AND THIS IS SOMETHING WE'VE HEARD ABOUT, BY THE WAY,
  • 00:34:55
    FROM ELON MUSK WHO SAID THAT HE BELIEVES THAT THERE SHOULD BE
  • 00:34:58
    MULTIPLE AI BOTS AND THERE SHOULD BE, YOU KNOW, IF YOU
  • 00:35:01
    WANT TO HAVE A RACIST AI BOT THAT SHOULD BE AVAILABLE TO YOU
  • 00:35:04
    FROM A FREE SPEECH PERSPECTIVE. I DON'T KNOW IF YOU THINK
  • 00:35:05
    THAT'S TRUE. YEAH.
  • 00:35:07
    >> YEAH, I MEAN THAT THAT'S A VERY GOOD QUESTION. SO AT THE
  • 00:35:10
    VERY LEAST, I THINK THE DEFAULT POSITION IS THAT IT SHOULD
  • 00:35:13
    FACILITATE YOUR OWN THINKING AND EXPLORATION IN A NON-JUDGMENTAL
  • 00:35:16
    WAY. NOW, THERE WILL BE BOUNDARIES THERE, BECAUSE IF YOU
  • 00:35:20
    WANT IT TO FACILITATE MORE EXTREME EXPERIENCES, THEN THIS IS NOT
  • 00:35:23
    GOING TO BE IT. I'M NOT GONNA BUILD A PLATFORM OR PRODUCT THAT
  • 00:35:27
    ENABLES YOU TO REINFORCE THOSE IDEAS THAT COULD BE POTENTIALLY
  • 00:35:31
    HARMFUL TO SOCIETY MORE GENERALLY. THE QUESTION IS WHO
  • 00:35:34
    GETS TO DEFINE WHAT THAT BOUNDARY IS, THAT HARM? AND I
  • 00:35:36
    THINK YOU DO RAISE A GOOD POINT. WE'RE ALL THINKING ABOUT
  • 00:35:40
    EXACTLY THAT QUESTION. IN SOCIAL MEDIA, THERE NEVER REALLY WAS AN
  • 00:35:44
    INDEPENDENT VOICE THAT SAID, WELL, THIS KIND OF CONSPIRACY
  • 00:35:48
    THEORY IS LEGITIMATE AS A SOURCE OF ACTIVE PUBLIC INQUIRY
  • 00:35:51
    AND DISCUSSION. BUT THIS KIND OF CONSPIRACY THEORY, QUOTE,
  • 00:35:55
    UNQUOTE, IS OVER THE LINE AND WE SHOULD REMOVE IT. AND I I
  • 00:35:58
    JUST THINK WE'VE HAD 10 TO 15 YEARS OF EXPERIENCE OF SOCIAL
  • 00:36:03
    MEDIA AND WE STILL DON'T HAVE GOOD PROPOSALS FOR HOW SOCIETY
  • 00:36:06
    MORE GENERALLY, NOT NECESSARILY JUST THE POLITICIANS OR THE
  • 00:36:10
    JOURNALISTS OR THE ELITES, BUT COLLECTIVELY, GETS TO INFLUENCE
  • 00:36:12
    WHERE THAT BOUNDARY OF MODERATION IS. WE'RE ABOUT TO
  • 00:36:16
    SEE THAT BOUNDARY BE CONTESTED, EVEN MORE ACUTELY AND
  • 00:36:18
    DYNAMICALLY WITH THESE MODELS. RIGHT? AND SO, YOU KNOW,
  • 00:36:22
    I THINK THAT SHOULD BE, LIKE, THAT IS A MUCH MORE IMPORTANT
  • 00:36:25
    QUESTION THAN WHEN WILL THE SINGULARITY HAPPEN. >> OKAY.
  • 00:36:27
    FINAL QUESTION FOR ME. AND THEN WE'RE GOING TO OPEN UP AND I
  • 00:36:29
    KNOW THERE'S MICROPHONES IN THE ROOM. THERE'S A WHOLE BUNCH OF
  • 00:36:31
    PEOPLE IN THIS ROOM WHO ARE GOING TO WATCH AND GO TO THE
  • 00:36:35
    WATCH PARTY ON THURSDAY NIGHT OF THE DEBATES. AND WE HAVEN'T
  • 00:36:38
    REALLY TALKED ABOUT POLITICS IN THIS ELECTION.
  • 00:36:42
    JUST PLAY OUT FOR US THE ROLE OF AI IN
  • 00:36:47
    >> THE ELECTION OF 2024 AND THEN PLAY OUT THE ROLE OF AI IN
  • 00:36:50
    THE ELECTION OF 2028. YEAH.
  • 00:36:55
    >> SO THE VIEW THAT WE'VE TAKEN FOR MICROSOFT COPILOT SO FAR
  • 00:36:59
    HAS BEEN THAT AIS SHOULD NOT PARTICIPATE IN
  • 00:37:03
    ELECTIONEERING OR CAMPAIGNING, EVEN IF THEY CAN SUCCESSFULLY
  • 00:37:04
    PROVIDE FACTUAL INFORMATION.
  • 00:37:07
    AND, YOU KNOW, THAT'S A VIEW. PEOPLE CAN DISAGREE WITH THAT
  • 00:37:11
    VIEW. BUT WE'VE TAKEN A PRETTY HARD LINE ON THAT BECAUSE WE
  • 00:37:16
    KNOW THE TECHNOLOGY ISN'T QUITE YET GOOD ENOUGH TO ARTICULATE
  • 00:37:19
    THAT BOUNDARY BETWEEN WHAT IS, YOU KNOW, A FALSEHOOD AND WHAT
  • 00:37:23
    IS TRUE IN REAL TIME, RIGHT? AND THIS IS JUST TOO SENSITIVE
  • 00:37:25
    FOR US TO PARTICIPATE. SO THERE'S GOING TO BE SOME
  • 00:37:29
    DOWNSIDE TO THAT BECAUSE THE AI DOES ALSO PROVIDE QUITE BALANCED
  • 00:37:32
    ELECTION INFORMATION MOST OF THE TIME. BUT SOMETIMES IT GETS
  • 00:37:35
    IT CATASTROPHICALLY WRONG. SO I'VE ALWAYS BEEN ADVOCATING
  • 00:37:38
    THAT AI SHOULD NOT BE ABLE TO PARTICIPATE IN ELECTIONS, AND
  • 00:37:41
    DEMOCRACY IS SACROSANCT. THROUGH ALL OF ITS FAULTS AND
  • 00:37:44
    STRENGTHS, IT IS SOMETHING THAT HUMANS PARTICIPATE IN, AND AI
  • 00:37:51
    SHOULDN'T. IN 2030 OR 2035, WE KIND OF HAVE TO FACE THE REALITY
  • 00:37:54
    THAT SOME PEOPLE FEEL SO ATTACHED TO THEIR PERSONAL AI
  • 00:37:57
    THAT THEY WILL ADVOCATE FOR PERSONHOOD, JUST AS THERE ARE
  • 00:37:59
    SOME PEOPLE WHO ADVOCATE FOR PERSONHOOD FOR ANIMALS,
  • 00:38:02
    FOR EXAMPLE, YOU KNOW, IN KIND OF MORE EXTREME ANIMAL RIGHTS
  • 00:38:06
    CASES. AND AGAIN, THIS IS A PERSONAL VIEW, BUT THIS IS
  • 00:38:07
    SOMETHING THAT WE'LL HAVE TO DEBATE.
  • 00:38:10
    I THINK WE SHOULD TAKE A HARD LINE ON THAT. I JUST THINK, YOU
  • 00:38:15
    KNOW, DEMOCRACY IS FOR HUMANS, AND OTHER KINDS OF FORMS OF
  • 00:38:18
    BEING SHOULDN'T REALLY BE ABLE TO PARTICIPATE.
  • 00:38:23
    >> OKAY, NOW WE HAVE A PANEL IN 2030 AND IN 2035 ABOUT PERSONHOOD FOR THE
  • 00:38:23
    BOTS.
  • 00:38:25
    >> WE'VE GOT A WHOLE BUNCH OF HANDS AND NOT A LOT OF
  • 00:38:31
    TIME. WE'LL TRY TO GO AS FAST AS WE CAN. WHY DON'T WE GO RIGHT
  • 00:38:34
    OVER HERE? AND THEN WE'LL COME DOWN HERE AND WE'LL SEE IF WE
  • 00:38:39
    CAN MOVE AROUND AS QUICKLY AS WE CAN. YEAH.
  • 00:38:43
    >> THANK YOU BOTH FOR BEING HERE TODAY AND FOR THE VERY INTELLIGENT
  • 00:38:47
    CONVERSATION HERE. MY NAME IS MONICA MAN, AND I'M THE
  • 00:38:50
    FORMER DEPUTY MAYOR OF THE CITY OF BOCA RATON, FLORIDA.
  • 00:38:54
    AND I'M ALSO AN OLD-SCHOOL TECHNOLOGIST. OKAY, I'LL DATE MYSELF:
  • 00:38:59
    IBM, COBOL PROGRAMMER. SO I'VE WATCHED TECHNOLOGY OVER THE
  • 00:39:01
    LAST 30, 40 YEARS
  • 00:39:05
    AND I UNDERSTAND THE SIGNIFICANCE AND IMPLICATIONS
  • 00:39:07
    AND APPLICATIONS OF AI
  • 00:39:10
    THE LAST SEVERAL YEARS. CRYPTOCURRENCY HAS BEEN LIKE A
  • 00:39:13
    BIG BUZZWORD. EVERYBODY'S TALKING ABOUT CRYPTOCURRENCY,
  • 00:39:14
    THIS AND THAT
  • 00:39:18
    UNTIL SBF'S FRAUD CONVICTION SEVERAL MONTHS AGO.
  • 00:39:22
    SO NOW I FEEL LIKE AI IS THE NEW WORD.
  • 00:39:25
    WE'VE GOT A VERY SHORT AMOUNT OF TIME. YOU'VE GOT TO PUT A
  • 00:39:26
    QUESTION MARK ON IT.
  • 00:39:31
    ARE THERE CONCERNS THAT AI WILL BE COOL UNTIL THE NEXT BIG
  • 00:39:33
    TECHNOLOGY REVOLUTION?
  • 00:39:37
    >> NO, I DON'T THINK SO. I HAVE HEARD THE COMPARISONS
  • 00:39:38
    AND I THINK THAT
  • 00:39:41
    CRYPTOCURRENCY, IN MY OPINION, DIDN'T REALLY DELIVER
  • 00:39:45
    VALUE EVEN IN THE MOMENT. AND I THINK THAT THE VALUE THAT
  • 00:39:49
    THESE MODELS ALREADY DELIVER IS KIND OF OBJECTIVELY CLEAR.
  • 00:39:53
    >> LET'S GET A MICROPHONE RIGHT DOWN HERE.
  • 00:39:58
    >> WE'RE
  • 00:40:00
    GOING TO DO EVERY QUESTION IN 20 SECONDS OR LESS, LITERALLY,
  • 00:40:03
    BECAUSE WE'VE GOT A LOT OF HANDS AND
  • 00:40:06
    VERY LITTLE TIME. LYNDA RESNICK, WE EDUCATE 5,000
  • 00:40:07
    UNDERSERVED CHILDREN.
  • 00:40:11
    >> AND I'M GRAPPLING, YOU KNOW. WE'VE PUSHED STEM ON THEM
  • 00:40:15
    FOR THE LAST 10 YEARS. AND NOW I'M BEGINNING TO THINK THAT WE
  • 00:40:17
    HAVE TO TEACH MORE ABOUT THE HUMANITIES.
  • 00:40:20
    AND I JUST WONDERED HOW YOU FELT ABOUT THAT BECAUSE THAT
  • 00:40:24
    WAY THEY CAN MAKE A CHOICE AND UNDERSTAND IF WE LOOK AT THE
  • 00:40:27
    GREAT SCHOLARS OF OUR HISTORY, LIKE THE ASPEN INSTITUTE
  • 00:40:31
    TEACHES US.
  • 00:40:33
    >> I'M VERY MUCH WITH YOU ON THAT. I THINK ACTUALLY, IF YOU
  • 00:40:36
    LOOK BACK IN HISTORY, THE GREATEST SCHOLARS HAVE ALWAYS BEEN
  • 00:40:38
    MULTIDISCIPLINARY SCHOLARS, AND IN THE PAST THERE REALLY
  • 00:40:41
    WASN'T THIS ACUTE DISTINCTION BETWEEN STEM AND THE HUMANITIES.
  • 00:40:45
    AGAIN, IT'S QUITE A SIMPLISTIC, ADVERSARIAL POSITION. I'M A
  • 00:40:48
    GREAT BELIEVER IN BOTH. AND YOU KNOW, I THINK THAT
  • 00:40:51
    MULTIDISCIPLINARY SKILLS ARE GOING TO BE, YOU KNOW,
  • 00:40:53
    THE ESSENCE OF THE FUTURE.
  • 00:40:58
    >> MYRIAM SEVERE, CSIS NONRESIDENT SENIOR ADVISER. I
  • 00:41:01
    WANTED TO ASK YOU A LITTLE MORE SPECIFICALLY ABOUT
  • 00:41:04
    MISINFORMATION AND DISINFORMATION AND WHAT YOU
  • 00:41:08
    THINK SHOULD BE DONE TO ADDRESS THOSE CONCERNS.
  • 00:41:10
    >> YEAH, I MEAN, I THINK ON THE DISINFORMATION FRONT THERE ARE
  • 00:41:14
    CLEARLY BAD ACTORS WHO, FOR MANY, MANY YEARS, HAVE BEEN
  • 00:41:17
    ACTIVELY TRYING TO POLLUTE THE INFORMATION ECOSYSTEM. AND I
  • 00:41:22
    THINK THAT, YOU KNOW, OUR BIG TECH PLATFORMS COULD DO A
  • 00:41:25
    MORE AGGRESSIVE JOB OF WEEDING THOSE OUT. AT THE SAME TIME,
  • 00:41:28
    WE'RE TRYING TO STRADDLE THIS LINE OF NOT, SORT OF,
  • 00:41:31
    DAMAGING, YOU KNOW, THE FREE SPEECH THAT WE ALL VALUE,
  • 00:41:33
    RIGHT? SO, YOU KNOW, THIS IS GOING TO BE A CONSTANT
  • 00:41:37
    WHAC-A-MOLE GAME, ESPECIALLY AS THE COST OF PRODUCTION OF THAT
  • 00:41:41
    DISINFORMATION GOES DOWN AND THE EASE WITH WHICH LESS
  • 00:41:44
    CAPABLE ACTORS ARE GOING TO BE ABLE TO PRODUCE VAST AMOUNTS OF
  • 00:41:47
    THAT CONTENT GOES UP. AT THE SAME TIME, I HAVE FAITH IN HUMANITY. WE'RE
  • 00:41:51
    INCREDIBLY RESILIENT AND ADAPTIVE. AND, YOU KNOW, AS WE SAW IN
  • 00:41:53
    THE INDIAN ELECTION WITH THE SPREAD OF MISINFORMATION AND
  • 00:41:56
    DEEPFAKES, PEOPLE ADAPT TO THAT REAL QUICK, AND THEY CAN TELL
  • 00:41:59
    THE DIFFERENCE, AND THEY'RE SKEPTICAL. AND IF IT MAKES US A
  • 00:42:02
    LITTLE MORE INQUISITIVE AND QUESTIONING AND ENGAGED IN THE
  • 00:42:04
    PROCESS, I THINK THAT'S A GOOD THING.
  • 00:42:08
    >> GOOD QUESTION OVER HERE. THANK YOU. AGAIN, I GO BACK TO
  • 00:42:09
    CHINA.
  • 00:42:12
    I FEEL THAT NATIONAL SECURITY IS DRIVING POLICY AND
  • 00:42:20
    TECHNOLOGY, AND CLEARLY AT SOME POINT THERE'S POTENTIALLY NO RETURN,
  • 00:42:25
    WITH THAT RESULTING IN THE DEVELOPMENT OF DIFFERENT TECHNOLOGY
  • 00:42:25
    STANDARDS.
  • 00:42:27
    ARE WE THERE, IN YOUR OPINION?
  • 00:42:29
    >> NO, LOOK, I THINK,
  • 00:42:32
    FOR ALL MY TALK ABOUT COOPERATION, I'M ALSO VERY
  • 00:42:36
    PRAGMATIC ABOUT THE BALKANIZATION, WHICH IS ALREADY
  • 00:42:39
    TAKING PLACE. THE FACT THAT, YOU KNOW, OUR EXPORT CONTROLS
  • 00:42:43
    HAVE NOW ESSENTIALLY FORCED CHINA TO DEVELOP THEIR OWN
  • 00:42:46
    CHIPS, WHICH THEY WEREN'T NECESSARILY ON A PATH TO DO.
  • 00:42:48
    AND, YOU KNOW, AGAIN, SMART PEOPLE CAN DISAGREE ABOUT THE
  • 00:42:52
    SAME FACTS, BUT WE TOOK THE VIEW, I MEAN, THE U.S. TOOK THE VIEW
  • 00:42:55
    THAT, YOU KNOW, WE HAVE TO SEPARATE, AND THAT'S THE PATH
  • 00:42:58
    WE'RE NOW ON. AND THEY'RE BUILDING THEIR OWN TECHNOLOGY ECOSYSTEM
  • 00:43:01
    AND THEY'RE SPREADING THAT AROUND THE WORLD. WE SHOULD REALLY PAY
  • 00:43:04
    CLOSE ATTENTION. MUCH AS WE'RE OBSESSED OVER THE QUESTION OF
  • 00:43:07
    THE MIDDLE EAST, AND, YOU KNOW, IT'S AN IMPORTANT
  • 00:43:09
    QUESTION, BUT LET'S NOT NEGLECT THE FACT THAT, YOU
  • 00:43:13
    KNOW, DIGITIZATION IS A NEW KIND OF, YOU KNOW,
  • 00:43:16
    INSTRUMENT FOR THE SHAPING OF VALUES IN AFRICA. AND THE PROVISION OF
  • 00:43:20
    SATELLITE COMMUNICATIONS AND OPERATING SYSTEMS ARE ALREADY
  • 00:43:24
    GOING TO SHAPE THE NEXT FEW DECADES IN A MUCH MORE PROFOUND
  • 00:43:25
    WAY THAN BOOTS ON THE GROUND.
  • 00:43:27
    >> OKAY, LET'S GET A MICROPHONE IN THE CHEAP SEATS, OR
  • 00:43:30
    THE IMPORTANT SEATS, ALL THE WAY IN THE BACK. I THINK I
  • 00:43:33
    SEE A HAND IN THE ABSOLUTE BACK ROW AND JUST FOR SITTING IN THE
  • 00:43:34
    BACK ROW, WE'RE GOING TO GIVE YOU A QUESTION.
  • 00:43:38
    MAKE IT QUICK, BECAUSE WE'VE GOT LITERALLY ABOUT A MINUTE.
  • 00:43:43
    >> HELLO, I'M SUMMER, AND I'VE BEEN AN EDUCATOR
  • 00:43:45
    IN NEW JERSEY FOR 8 YEARS. SHOUT OUT TO MY STUDENTS.
  • 00:43:49
    SHE'S RIGHT HERE. SO I'VE BEEN ARGUING A LOT WITH A LOT OF OLDER
  • 00:43:54
    TEACHERS ABOUT USING AI, BECAUSE I'M A HUGE ADVOCATE FOR IT.
  • 00:43:57
    WHAT WOULD YOU SAY TO THOSE SPECIFIC TEACHERS AND HOW DO
  • 00:44:00
    YOU THINK TEACHERS CAN ACTUALLY UTILIZE IT? I KNOW YOU SPOKE
  • 00:44:05
    BRIEFLY ABOUT STUDENTS, BUT I WANT TO KNOW THE EDUCATOR'S
  • 00:44:05
    PERSPECTIVE.
  • 00:44:07
    >> YEAH, GREAT, GREAT QUESTION. I MEAN, YOU KNOW, I THINK
  • 00:44:12
    ASKING AI TO PRODUCE LESSON PLANS IS SORT OF THE FIRST
  • 00:44:17
    INTRODUCTION, BUT PERFORMING ALONGSIDE AI IS KIND OF AN
  • 00:44:20
    INTERESTING NEW DIRECTION TO TAKE IT. LIKE, WHAT WOULD IT LOOK
  • 00:44:24
    LIKE FOR A GREAT TEACHER OR EDUCATOR TO HAVE A PROFOUND
  • 00:44:28
    CONVERSATION WITH AN AI THAT IS LIVE IN FRONT OF THEIR
  • 00:44:32
    AUDIENCE, AND THEN OCCASIONALLY ENGAGE, YOU KNOW, THE STUDENTS
  • 00:44:36
    TO GET INVOLVED? I MEAN, YOU KNOW, LITERALLY BY THE END OF
  • 00:44:41
    THIS YEAR, WE'LL HAVE REAL-TIME VOICE-BASED INTERFACES THAT
  • 00:44:42
    ALLOW
  • 00:44:45
    FULL DYNAMIC INTERRUPTION, AND IT FEELS LIKE JUST TALKING,
  • 00:44:48
    YOU KNOW, LIKE I AM RIGHT NOW. IT'S JUST A
  • 00:44:52
    COMPLETELY DIFFERENT EXPERIENCE. I THINK THAT LIKE
  • 00:44:55
    LEANING INTO THAT, SHOWING THE BEAUTY OF THAT, AND BEING
  • 00:44:57
    CREATIVE WITH IT IS REALLY THE FUTURE.
  • 00:45:01
    >> FOLKS, I HAVE TO BOTH THANK YOU AND ALSO APOLOGIZE, BECAUSE
  • 00:45:03
    WE'RE OUT OF TIME. I WANT TO THANK YOU FOR YOUR FABULOUS
  • 00:45:06
    QUESTIONS. BUT PLEASE JOIN ME IN THANKING THIS GENTLEMAN FOR
  • 00:45:08
    A FABULOUS CONVERSATION.
  • 00:45:10
    >> THANK YOU. THANK YOU, EVERYBODY.
Tags
  • AI
  • Safety
  • Regulation
  • DeepMind
  • Microsoft
  • Demis Hassabis
  • Technology
  • Future
  • Governance
  • Education