"RDF and the future of LLMs" by Luke VanderHart
Summary
TL;DR: This talk explores the intersection of RDF (Resource Description Framework) and language models, examining how the two can be integrated and why that is useful. The speaker opens with two deliberately opposed claims: language models are a genuine technological advance, yet the way AI is being used has harmful societal impacts. The discussion then turns to RDF's approach to knowledge representation, linked data, and the ideals of the semantic web, acknowledging the problems that have historically held RDF back: verbose early formats and costly tools. Language models, powered by the Transformer architecture, revolutionized natural language processing by capturing grammar, semantics, and pragmatics. Emphasizing how RDF's structure aligns with how language models work, the speaker argues that RDF supports precise data handling, query validation, and inferential logic. This approach falls under the broader neurosymbolic AI agenda, which aims to combine neural networks with logical symbols. The talk closes with a call to use AI technology ethically and well, so that its benefits outweigh its drawbacks and it delivers real societal value.
Takeaways
- 📊 RDF tackles complex knowledge representation issues.
- 🧠 Language models revolutionize natural language processing.
- 📉 RDF perceived as outdated but still valuable.
- 🔗 Semantic web overhyped, underdelivered due to effort and incentives.
- 🛠️ Language models function by predicting the next token in sequences.
- 🔍 RDF aids in precise data queries and reasoning.
- 🌐 Integration of RDF and language models enhances data handling.
- 💡 Neurosymbolic AI aims to merge neural and logical systems.
- 🤔 Mixed feelings about AI's societal impacts.
- 🔨 Ethical and beneficial use of AI remains crucial.
Timeline
- 00:00:00 - 00:05:00
The speaker introduces the talk by acknowledging that it has been previously given to a smaller group and was well received. They express their interest in explaining the basic concepts of RDF and language models and their integration, starting with controversial statements about the value and challenges of AI technology, particularly its societal impacts.
- 00:05:00 - 00:10:00
The speaker discusses the Resource Description Framework (RDF) as a solution for knowledge representation and reasoning, conceived by the same group responsible for internet specifications. They outline the technical achievements of RDF and its core components, despite its perceived obsolescence compared to newer technologies.
- 00:10:00 - 00:15:00
In this segment, the speaker explores criticisms and challenges RDF faces, such as its association with outdated formats and costly enterprise solutions. However, they note RDF's continued utility in complex data modeling across various industries, and address the misconceptions stemming from the overhype of the semantic web.
- 00:15:00 - 00:20:00
The speaker describes the elemental RDF concepts—resources and their descriptions—and how unique identifiers (IRIs) help overcome natural language ambiguity. They extend the discussion by linking RDF's principles to philosophical concepts of meaning and semiotics, highlighting symbolic connections and precision offered by RDF.
- 00:20:00 - 00:25:00
RDF's use of triples is explained as architecturally simple yet capable of detailed and precise data representation, which can integrate with various database formats. The speaker also touches on the ability to compose datasets without losing meaning, exemplifying RDF's flexibility and utility in unifying disparate data sources.
- 00:25:00 - 00:30:00
Attention is turned to entailment in RDF, a way of reasoning over data to derive new insights and ensure data validity. The talk diverges into a historical context, linking RDF development to symbolic AI's history, and touches upon its aspirations to encompass logical reasoning within interconnected systems referencing standards like first-order logic.
- 00:30:00 - 00:35:00
The speaker transitions to language models, starting with the breakthrough 'Attention is All You Need' paper that enabled modern NLP. Language models' applications are briefly showcased, along with their operational simplicity and underlying complexity. The speaker encourages exploring their operation via educational resources for deeper understanding.
- 00:35:00 - 00:41:53
The talk concludes with a practical view on using language models, positing that all advancements revolve around refining model inputs to produce superior outputs. By integrating RDF data in prompts, models can potentially bridge the gap between structured data and language processing, ultimately enhancing utility and reasoning capabilities.
FAQ
What are the two controversial statements made by the speaker?
The first statement is that language models are beneficial and innovative. The second is that AI and language models have negative societal impacts such as environmental harm and misinformation.
What is RDF?
RDF stands for Resource Description Framework, a model to represent data about resources in the form of relations, using triples: subject, predicate, and object.
Why does the speaker have mixed feelings about language models?
The speaker appreciates their technological advancement but is concerned about their environmental impact, effect on art and labor, and the negative culture around AI.
What is the relation between RDF and language models?
RDF provides a framework for structured data which can be useful when integrated with language models, facilitating precise queries and data handling.
What is a language model?
A language model is a computational model that predicts the next token in a sequence, capturing grammar, syntax, semantics, and pragmatics of language.
What are the difficulties with RDF today?
Some see RDF as outdated: it started with complex formats like RDF/XML and is associated with over-architected libraries and costly, enterprise-grade tools.
How does the speaker propose to make AI technology beneficial?
By being mindful of impacts, avoiding hasty discussions on platforms like social media, and striving to make the technology as good as possible within its trade-offs.
What is the 'semantic web'?
The semantic web is an idea to link data globally, allowing data to be easily traversed and united. It was overhyped and didn’t gain traction due to manual effort and lack of immediate benefits.
Transcript
- 00:00:04 Thanks for coming, everyone. This is a version of a talk that I gave to a much smaller group several months ago, and it was pretty well received. It's what I'm working on right now, so I'm hopefully going to explain some of the basic concepts and ideas. We'll go over what RDF is, what language models are, and then why they go well together, which I think is unintuitive to most people unless you've already been thinking along these lines. So we'll go from there.
- 00:00:30 I'm going to start with a controversial statement. Actually, I'm going to make two controversial statements that are mutually exclusive, so you'll most likely find yourself in agreement with one and in disagreement with the other, or maybe you're like me and you feel the tension. The first one is: language models are pretty neat and I like them. We never imagined that we'd be able to compute over unstructured text like this, and it really feels like a technical advance. I like technical advances; I think that technology on net is a real force for good, especially if you can get the societal structures aligned, so definitely go vote in a few days. But the interest I have in this specific tech is also deeply personal. My academic background is in philosophy and linguistics, and all my work has been in computers and data, so having them merge like this is very interesting to me personally. We have a real Chinese room in the room with us right now, which is wild, for those of you who are familiar with philosophy.
- 00:01:31 But the second controversial statement is: language models suck. Or rather, AI sucks, and specifically the way our culture has been using it. It's bad for the environment, it's bad for art, it's bad for labor, or at least artists and labor are upset about many aspects of it. We are drowning in slop and spam and disinformation, and the ecosystem at large has some good people in it, but it also attracts a lot of the absolute worst sorts. So I have very mixed feelings about working on this stuff.
- 00:02:01 Here's how I've decided to approach it. First of all, just be mindful of the impact. We as technologists don't have the luxury of doing something just because it's cool or just because we can; you really have to think about how the products we build are going to affect the world. Also, for the most part, do not talk about this stuff on social media; nobody there has a reasoned, well-thought-out opinion, and these are topics for longer, more thoughtful conversations. And the final thing, which is what this talk is about: how can we make the tech actually good? If it comes with all these trade-offs, if there are all these negative externalities, how good does it have to be before it becomes a net good for the world, a positive force? It seems like it's here and people are going to be using it, so let's make it as good as it can possibly be. But how?
- 00:02:44 Technology is good, language models are technology, so how do we make language models useful? My answer is the Resource Description Framework. Hopefully, by the end of this talk, you will sort of see what I see. But first, let's talk a little bit about RDF: what it is and where it came from. RDF is an attempt to tackle some of the hardest problems of knowledge representation and reasoning. It came about just after the internet, from the same group of people that put together all the internet specifications, the W3C, along with a lot of people who came out of the symbolic AI boom of the '80s, who were trying to reach AI through logic.
- 00:03:22 These people are all giant nerds, and they were really trying to do the right thing and build the ultimate data framework. It has its pros and cons, but I think to a large degree they succeeded. The definitive document is the "Concepts and Abstract Syntax" specification (because of course it's abstract). I really love this document; it's quite readable, and if you're enough of a nerd you might find it sort of inspiring. I think RDF in general, at its core, is a pretty brilliant technical achievement, for its clarity and vision around how we might work with knowledge on computer systems. They were trying to solve this at a very high level.
- 00:04:04 However, RDF today: many of you may be surprised to see it in a talk at a contemporary tech conference. A lot of people see it the same way they see SOAP or EJBs or Visual Basic or React; it's one of those technologies that's just not that cool anymore. Of course, it's gone through the whole hype cycle, and now it's very much not in the hype cycle. There are a few reasons for this that I want to go over before we get into the meat of it. One is RDF/XML, which is one of the initial formats. I don't know how many of you remember the early 2000s well, but everyone was just doing lines of XML at work all the time; there was a lot going on there. It's a verbose, complex format, and it's honestly not great, but in the early days of RDF it got really strongly associated with it, even though it's only one of many, many formats you can use.
- 00:04:52 It's also a product of its time that all the libraries written for it came out of the early 2000s. And what are libraries from the early 2000s like? Massively over-architected object-oriented programming. It's a historical accident that all the RDF libraries look this way, just because they co-occurred in history; there's nothing about RDF that makes it a good fit for object-oriented programming. Clojure is actually a way better fit, so I wish there were a good Clojure library for this.
- 00:05:25 Another reason is that there actually are a lot of pretty good, really solid, robust, enterprise-grade RDF implementations out there, and they all cost real money. You cannot just start using them, so there's a real gap in the market for quality, accessible tools. But the existence of all these heavyweight enterprise tools for RDF does tell us something, which is that there's actually a quite established market for this stuff. RDF is being used really productively in science, heavy industry, and government, anywhere you need to model super-complex information with precision. There's a surprising number of domains where RDF-based standards are the standard for modeling data.
- 00:06:09 Another reason is that the semantic web was, quite simply, overhyped for its time. It's a great idea: we'll link all our data, we can traverse all the data, everything will be unified, and we will live in a data utopia. But the problem was that it required constant manual effort to publish and consume, and all the benefits were abstract; there was no immediate financial incentive for anyone to actually implement all their data as linked data and expose it in this way. So despite tons and tons of books (because it's a great idea, and everyone got super excited about it), nobody actually did it, and then people just came to the conclusion that it's a bad idea.
- 00:06:40 So this is really what I want: I wish that book had gotten popular and that I lived in that history. But the truth is, RDF is all good parts, because it's all separable; the things that are not good parts are just the surrounding ecosystem, which we can replace or choose not to use. So hopefully I can convince you that RDF is great. What is the elegant core? I'll describe what the actual technology is and how you use it.
- 00:07:11 It's the Resource Description Framework, so let's talk about resources first. What is a resource? A resource is anything in the world that you can talk about: people, fictional entities, abstract concepts, material objects, data. Anything that can be the subject of language can be the subject of RDF. So how do we represent anything in a computer? Computers can't represent "anything"; they have bits. Well, literals are easy, because they are bits: anything that is itself can be a resource, and computers do know how to have numbers and strings as resources. We can also represent things abstractly, without saying what they are. These are like variables, or pronouns: you can talk about something without ever trying to say exactly what it is, which can be useful sometimes.
- 00:08:03 And then sometimes we do want to name things, and "internationalized resource identifier" is a lot of letters to say "a unique ID." Really, that's all it is: it needs to be a unique ID. So what can serve as an IRI? UUIDs, right; they're unique, we know that. Anything that has a known naming authority: we know the ISBN system is guaranteed to be unique. Most commonly you'll see URLs, and URLs are great for uniqueness because they establish an authority in their domain, and whoever owns that domain takes on the responsibility and authority for making sure there are no duplications or ambiguity in the rest of the URL.
- 00:08:42 These IRIs can be resolvable: you can actually hit this one in your browser, and it'll ask you what format you want to download Abraham Lincoln's data in. They can also be non-resolvable. For example, this is an IRI that I made up just this morning, and it denotes my concept of excellence. I haven't written it down anywhere; nobody knows what it is except for me, but I can still make an IRI for it and talk about it. You can't resolve it, but I can still use it to refer to that concept in the RDF vocabulary. And of course, because IRIs are pretty long and cumbersome, you can shorten them; every syntax has shortened prefixes, kind of like Clojure keywords, so when you see a bunch of IRIs around in practice, it usually looks a lot cleaner.
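To make the kinds of resources above concrete, here is a minimal sketch using Python's rdflib library. The example IRIs and the ex: prefix are invented for illustration and are not from the talk.

```python
from rdflib import Graph, Namespace, URIRef, BNode, Literal
from rdflib.namespace import FOAF, XSD

# A made-up vocabulary; binding it to a prefix keeps the serialized output readable.
EX = Namespace("http://example.com/vocab/")

g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

lincoln = URIRef("http://dbpedia.org/resource/Abraham_Lincoln")  # a resolvable IRI
excellence = EX.Excellence        # a non-resolvable IRI, meaningful only to its author
someone = BNode()                 # a blank node: "something" we never name
birth_year = Literal(1809, datatype=XSD.integer)  # a literal is just its own bits

g.add((lincoln, FOAF.name, Literal("Abraham Lincoln")))
g.add((lincoln, EX.birthYear, birth_year))
g.add((someone, EX.admires, lincoln))
g.add((someone, EX.embodies, excellence))

print(g.serialize(format="turtle"))  # prefixes make the Turtle output compact
```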
- 00:09:26 So why IRIs? Why is a unique identifier so important? Why do we put so much focus into making sure resources are unique? It's because, in language, context is everything. Take the sentence "My friend Joe..." If I just start a sentence that way, what do you need to know in order to make sense of it? You need to know who's speaking. You need to know whether you know Joe or not, which could change the context. Which Joe? I haven't said his last name; I just said Joe. Natural language is inherently ambiguous, and we rely a ton on context to fix it. The problem is that we do this with programming too. In most programming systems, when you get data, it comes to you like this: I have a name, I have a handle, I have an ID, and now I can process it. But in order to process it as a programmer, I need to supply the context: I need to understand the system, I need to understand where the data is coming from, I need to understand what it means, and then maybe I can write code against it.
- 00:10:20 The goal of IRIs in RDF is that they carry their context with them. My friend Joe's handle? Well, that's his LinkedIn handle, and even if you just hand me a scrap of paper, when I see it, it says: this is a LinkedIn handle, and that's his social security number, that's the ID (it's not his real one). It carries its own context; the data brings its own context with it.
- 00:10:48 This also gets super philosophical really quickly, which is probably why I like it. IRIs are very closely related to the philosophical field of semiotics, which is really important for logic, philosophy, linguistics, and literature; lots of fields use it. There's a ton of thought about how a sign or a symbol can be about something in the real world. That's what the famous painting is about: is it a pipe, is it a picture of a pipe, is it me talking about a picture of a pipe? What are the layers of indirection? How do you dereference a pointer in your brain to something in the real world? That's the field of semiotics, and for RDF the question is how you dereference an identifier in a computer to something in the real world. Of course you don't actually dereference it, but you can use it in systems with the understanding that, conceptually, it does dereference to something.
- 00:11:32 This is also very similar to the work of Ferdinand de Saussure, one of the foundational figures in modern linguistics, who wrote the "Course in General Linguistics" in 1916. His concept of meaning was that everything is a semantic network: every sign, every word, gains meaning by its opposition and relation to every other sign in the vocabulary. That was his definition of meaning; there's really nothing much more to meaning than that, except that it's this network and everything is defined in relation to everything else. So it's a densely connected graph of all the words, which sounds kind of familiar: it sounds a little bit like linked data, maybe, and it also maybe sounds like some other things we'll talk about later.
- 00:12:11 Okay, so we have resources. How do we get descriptions? We've got resources and IRIs, so how do we express relations between different resources? I think everyone in this room is pretty comfortable with triples: subject, predicate, object; entity, attribute, value; in English it's subject, verb, object. It's one of the most granular ways of representing a single piece of information; you can't really decompose it further. You can have a single resource, but then it's just a name; if I want to say anything about that name, this is about as small as you can get.
- 00:12:45 So here's a bunch of things I'm saying about Joe, and they're all different kinds of things: his real name, his name according to LinkedIn, that he knows me, that he's the same as some other resource. We can just say a bunch of stuff about Joe. And it's important that the subject, the predicate, and the object of an RDF triple, all three parts of it, are very, very precise. We can be more precise than English. Sorry the text is a little small, but there are two statements here: one is "Luke loves their child" and the other is "Luke loves Clojure." Normally in English there's semantic ambiguity again, but in RDF I actually have the IRIs of two separate dictionary subheadings of the word "love," so I can be very precise about what I mean, and if one weren't in the dictionary I could make up my own IRI that carried the nuances I wanted to attach to the statement. So we're packing a lot of meaning into the IRIs, and we can be much more precise than English, and actually much more precise than most other data formats.
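As a rough sketch of the triple examples above, in Python with rdflib; the namespaces, names, and the two senses of "love" are placeholders invented for illustration, not the speaker's actual slide data.

```python
from rdflib import Graph, Namespace, Literal, URIRef

PEOPLE = Namespace("http://example.com/people/")          # hypothetical namespaces
LOVE = Namespace("http://example.com/dictionary/love#")
LINKEDIN = Namespace("http://example.com/linkedin/terms/")

g = Graph()
joe, luke = PEOPLE.joe, PEOPLE.luke

# Several independent statements about the same subject.
g.add((joe, PEOPLE.realName, Literal("Joseph (placeholder)")))
g.add((joe, LINKEDIN.handle, Literal("joe-placeholder-handle")))
g.add((joe, PEOPLE.knows, luke))

# Two distinct dictionary senses of "love" make each statement precise.
g.add((luke, LOVE.familial, PEOPLE.lukesChild))
g.add((luke, LOVE.enthusiasmFor, URIRef("http://dbpedia.org/resource/Clojure")))
```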
- 00:13:44 This generality of triples also means that you can go back and forth between almost any other data format: relational databases, key-value stores, column stores, document stores. All of these can be converted to triples, and the operation is conceptually similar: add the context. In a relational database, for example, the context is: what are all the tables, what is the structure, where does the database live? You can compress all of that information into the semantics of the IRIs themselves, and then of course you can go back in the opposite direction. What does this mean? It means that an RDF dataset is a set, in the Clojure sense or the mathematical sense: just a bunch of triples that are not duplicated, because every IRI and every triple is guaranteed to be unique, and we don't need to know anything else about what tables or folders or trees or directory structures there are. We just have sets. And that means we can also safely union sets. This is where the concept of the semantic web comes from: I can take your data, I can take my data, I can slam them together with a set union, and it's still something meaningful and intelligible, which is not true of most other database systems. It's federation for free.
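A small sketch of "federation for free" with rdflib; the two source files are hypothetical, and the point is only that merging is a set union of triples.

```python
from rdflib import Graph

# Two independently published datasets (file names are placeholders).
mine = Graph().parse("my_data.ttl", format="turtle")
yours = Graph().parse("your_data.ttl", format="turtle")

# Union is just "add every triple"; duplicates vanish because a graph is a set.
merged = Graph()
for triple in mine:
    merged.add(triple)
for triple in yours:
    merged.add(triple)

print(len(merged))  # at most len(mine) + len(yours); shared facts are never double-counted
```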
- 00:14:55 So we've described the core of RDF, and that's really it: resources and triples. Nothing conceptually difficult there. But it's worth saying something else about what the designers envisioned. They call it a framework, and with a framework they want you to build things. We can describe data, but what more is there? This guy is actually the primary contributor to the RDF standard: Aristotle. I'm not even joking; he invented the words subject, predicate, and object in the sense in which we're using them now. The first chapter of this book lists all the types of things that exist in the world according to Aristotle, and the rest of the book is the foundation of modern Western logic. He builds it all starting from: what can you say about what kinds of things?
- 00:15:43 RDF is really bigger than just data, or storing data the way we normally think of it. It's more than a spreadsheet or a table or a bucket I can put data in; it's about representing knowledge. And knowledge is not limited to the things I've written down; it also includes the things I know because of the things I wrote down. RDF as a system is designed to make it possible to talk about all the things I know, whether I know them concretely, or abstractly, or in theory but never bothered to actually think through and calculate. The actual data sitting on bits in a database is largely incidental to a lot of uses of RDF.
- 00:16:21 So this is all about entailment. Entailment means that, given a set of triples, I can derive other triples from them, conceptually, either lazily or proactively; it doesn't matter. Or, given a set of triples, I can tell whether it is valid according to some definition of validity. That can be really useful. Because there are so many different ways to do this, the RDF community has a number of what they call entailment profiles. The gold standard for entailment profiles is the entirety of first-order logic, which is beyond the scope of this talk to explain, but those are the symbols on the slide: the full suite of if, then, else, and not, composed with any level of complexity. First-order logic is great, but it's not great to use as a programmer, because it happens to be NP-complete; it is the definitional case of a very, very hard problem to solve efficiently in computer science. So we have a lot of other profiles that do less and are less expressive, but are also calculable over large datasets before the heat death of the universe.
- 00:17:25 The most important thing we use these for is to get back some level of a schema: to tell what sorts of statements are meaningful and what aren't. My date of birth cannot be the color purple, so if I have a dataset that says my date of birth is the color purple, I can use entailment over the schema to say: no it's not, I don't accept that data into my database. And sameAs is important because it really helps with federation. I can say: these two concepts started out as separate concepts, but now, after the fact, I'm bringing in a third statement that says they are actually the same concept, and that means I can now query and reason across them really effectively.
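A tiny hand-rolled illustration of entailment: one forward-chaining pass of an RDFS-style subPropertyOf rule over an rdflib graph. The family predicates are invented for the example; a real system would use a reasoner rather than this loop.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.com/family/")  # hypothetical vocabulary

g = Graph()
g.add((EX.childOf, RDFS.subPropertyOf, EX.descendantOf))  # schema-level statement
g.add((EX.luke, EX.childOf, EX.lukesParent))              # concrete fact

# Rule: (s childOf o) and (childOf subPropertyOf descendantOf) entails (s descendantOf o).
for sub_prop, _, super_prop in list(g.triples((None, RDFS.subPropertyOf, None))):
    for s, _, o in list(g.triples((None, sub_prop, None))):
        g.add((s, super_prop, o))

print((EX.luke, EX.descendantOf, EX.lukesParent) in g)  # True: a derived triple
```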
- 00:18:00 And really, there's a large sense in which RDF, and all the entailment and logic associated with the RDF ecosystem, is the culmination of 20th-century AI, which was all about symbol manipulation, formal logic, and rule-based expert systems. You had Cyc trying to build a database of every fact in the universe so that intelligence would emerge, and people were very optimistic about that, and all these things were getting very well funded. Then it turned out that it didn't actually lead to general intelligence. It's very useful in programming systems, but it doesn't lead to intelligence. So these people all went and built RDF instead, and they brought those concepts in, specifically in a way that works with the internet era, where everything is networked and everything has the potential to be linked. So that's RDF.
- 00:18:44 Twenty years later, a little paper comes out of Google: "Attention Is All You Need." This paper defines the Transformer architecture, which is the underlying breakthrough that allows all of today's language models to work. It has an attention mechanism that basically allows it to train on tokens while taking their position into consideration, but also independently of their position in a sequence, and once you can train with that kind of flexibility, it unlocks everything that language models can do today. Natural language processing as a discipline was immediately revolutionized. ChatGPT came out just five years later, which is lightning speed: the paper was demonstrated on a tiny test dataset, and then they built something giant off of it really, really fast. I don't need to describe how big a mania it is right now; they're eating the world, at least from a hype point of view, if not yet from an actual productivity point of view.
- 00:19:35 How do they work? I can't tell you. Well, I can tell you, but not in the next 20 minutes. If you want to learn, go to this URL; there's a really great Karpathy walkthrough, about 16 hours of dense video, where he live-codes a mini GPT. I followed along and did it in Clojure; you can too, and you will deeply understand this when you're done. I'm not going to talk any more about the internals of how the model actually works; it's out of scope. What I do care about is defining them: how do we use them, and how should we think about them as software developers?
- 00:20:07 The etymology is actually very straightforward: you take "measure" and form the diminutive of it, a small measure. A model is a small measure of something, and this is really important for understanding what these things are. What is a model? It's a measurement of something. What we're doing is taking language, measuring it, analyzing every aspect of it, quantifying it as much as we can, and specifying the distances between all the different concepts. We're putting language on a bench, building a small copy, and measuring it along every dimension. There turn out to be hundreds of billions of dimensions, which is why there are hundreds of billions of parameters.
- 00:20:46 The act of generating from a generative language model is to create replicas based on those measurements: let's emit some language that fits these measurements. It looks and acts like language because it's based on the same measurements we took. And interestingly, an RDF dataset is also called a model; it fits more with the second dictionary sense, but it's also a set of measurements about the world, or a set of things I've chosen to say about the world.
- 00:21:14 So what are we modeling? What aspects of language are we measuring and capturing? We're capturing grammar and syntax, and we've actually been modeling grammar and syntax since long before we had computers; you can build a simple rule-based generative grammar, and we'll talk about that more later. Language models absolutely capture grammar and syntax. They also capture a lot of the semantics: how the words stand in relation to each other. Remember Saussure; it's almost like we've built a model of that definition of words with respect to each other. It captures a lot of semantics, and the attention mechanism actually lets you capture the semantics contextually, which matters: it's not just defining the words, it's the semantics of a word in different situations. That's language; that's what we're building a model of. Also the pragmatics: how it's used in practice, what the colloquialisms are, how people tend to talk.
- 00:22:08 It also captures a lot of patterns, and this is where we can get in trouble. I don't want to say too much about this because it's a bit of a rabbit hole, but it will pick up on fact patterns. If it sees a pattern enough in the wild, it will be able to reproduce it pretty reliably, but that's not the same thing as knowing a fact; it has been trained on a fact pattern. And it has reasoning patterns: if it sees a certain way of thinking enough in its training data, it can reproduce it with a fair amount of accuracy, or even produce things by analogy or extension. Does that count as true reasoning? Not in the way somebody writing an inference engine would think of it, and it's certainly not 100% reliable.
- 00:22:47 For a programmer, what's the API? I want to use them: we have a model, we have all the measurements of language, so how do I get things out of it? This is the entire API of a language model: it's a pure function that predicts the next token. All I've omitted is a bit of high-school-level algebra in those three dots, and then I get the probabilities of the next token given the sequence of all the previous tokens. That really works, and if I want many tokens, I just iterate and choose the most probable one at each step: a simple recursive function generating text. Now, about what I omitted: it's true that it's high-school-level algebra, but it happens to be a trillion floating-point operations and hundreds of billions of constants, so it really doesn't fit on the slide. Still, that's all a model is: a pure function that does math and has some constants in it. And that's what training is: finding the constants for the function.
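A minimal sketch of that "pure function plus a loop" view in Python. next_token_probs is a stand-in for the trillion-operation function being described, not a real API.

```python
from typing import Dict, List

def next_token_probs(tokens: List[str]) -> Dict[str, float]:
    """Stand-in for the model: maps a token sequence to next-token probabilities."""
    raise NotImplementedError  # in reality, billions of learned constants and a lot of algebra

def generate(prompt_tokens: List[str], max_new_tokens: int = 20) -> List[str]:
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)   # the pure function
        best = max(probs, key=probs.get)   # greedy choice: the most probable next token
        tokens.append(best)                # feed it back in and repeat
    return tokens
```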
- 00:23:46 So how does it work? I give it a sequence. I say "Mary had a little," and because it has seen that pattern all over the internet, it says "lamb"; that is by far the most likely answer, because it's a very common little rhyme in the English language. I say "Mozart had a little": well, it's not "lamb," that doesn't make sense; it's "sister." Why does it say "sister"? I don't know; it could have been anything. "Brother" and "star" are less common than "sister," but they're up there; "star" is up there because Mozart wrote the music for "Twinkle Twinkle Little Star," which is presumably captured on the internet. It turns out the reason "sister" is up there and "brother" is not is that Mozart did have a sister and did not have a brother. So you can see we're starting to capture fact patterns from the training data, but not in a 100% reliable way; it's all just in the stats. Incidentally, the sister was older, so "little sister" is an incorrect fact; but just because he had a sister, that bumps up the probability of that word.
- 00:24:48 So I want to add this to my toolbox. I have a bunch of tools, and it's great that I can use this model to academically understand language, but I'm still trying to figure out how to use it, how to get it to do what I want, and to do something useful beyond a chatbot. Chatbots are great, and I think we've all fully explored the capabilities of ChatGPT on our own; now we want to build more interesting things that maybe provide a little more societal value. Well, if I have a pure function and I want different output, what are my options? I could find a different function, but I don't have millions of dollars to train a new one. So my option is: change the input. It is literally, mathematically, the only thing I can do to get different or better results out of a language model. And this is the entire field of quote-unquote "AI programming": putting the right stuff into the model to try to get out the stuff that you want.
- 00:25:47 Where does this input come from? You can have human input, like a chatbot, or a programmer manually writing a prompt. You can have an old-fashioned program, a regular program that builds a bunch of strings, concatenates things, and sends them off to the model. You can have the result of one model feed into another model, or the same model invoked recursively. And really, any combination of the above. All of AI programming, unless you're working on the models themselves, is some combination of these: altering the inputs of the function in various ways, and building programs to programmatically alter the inputs of the function.
- 00:26:19 There are a bunch of patterns for this; this is just descriptive of how people are using these out in the world. The simplest one is prompt engineering. If I have user input, and maybe a history of past messages, how do I get the language model to act differently or emit different things? I give it a system prompt. I might say "talk like a pirate" or "emit JSON." That's the system prompt; it's not really engineering, it's just trying stuff and experimenting with the model.
- 00:26:47 This next one is really gaining a lot of popularity: sometimes you want to feed real information into the model that it maybe can't reliably get out of its internals. So you take the user input, pass it to a search engine (keyword-based or semantic, it doesn't really matter), and you pass the top N results along with your system prompt and the user input into the model. Assuming your search is good, and the data you want the model to relay is actually there, models do a good job; the attention mechanism is pretty reliable at zeroing in on the relevant parts of the input data. Of course, if your search wasn't good, and you don't have good recall, and the answer you actually wanted is not in the top N results, the model is back to bullshitting and will not be able to give you reliable information.
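A minimal sketch of that retrieval pattern; search and llm_complete are hypothetical stand-ins for whatever search index and model API are actually used.

```python
from typing import List

def search(query: str, top_n: int = 5) -> List[str]:
    """Stand-in for a keyword or semantic search over your own documents."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to a language model."""
    raise NotImplementedError

def answer_with_retrieval(user_question: str) -> str:
    passages = search(user_question)              # retrieval step
    context = "\n\n".join(passages)
    prompt = (
        "Answer using only the context below. If the answer is not there, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {user_question}"
    )
    return llm_complete(prompt)                   # the model only ever sees its input
```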
- 00:27:34 An extension of that: we invoke the model twice. We invoke the model, give it our database schema and a question from the user, and say: write some SQL that answers this question. This actually works for simple queries and simple databases. You get the results from the database, then you call the model again with those results and the user input, and it works. But it's completely reliant on the model's ability to generate SQL correctly, and you can't code-review the query before it runs if you're using it in production. So there are some issues with this, but people are using it effectively, and if you can keep your queries and your schema simple enough, you can get the reliability up there.
- 00:28:14 Another big thing (OpenAI just released this feature a few weeks ago) is tool use. I can alter my inputs such that the model emits JSON matching a certain pattern, which I can then pass off to an API that may cause a side effect in the world; it could order a pizza. Then the result can go back and feed into the model again. People talk about tool use as if the model is doing something incredible, but all it is is telling the model to emit API calls; some external system has to observe those calls and actually execute them.
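A sketch of that observe-and-execute loop; the tool name, the JSON shape, and llm_complete are all invented for illustration.

```python
import json

def llm_complete(prompt: str) -> str:
    """Stand-in for a model call that has been instructed to emit a JSON tool call."""
    raise NotImplementedError

def order_pizza(size: str, toppings: list) -> str:
    return f"ordered a {size} pizza with {', '.join(toppings)}"

TOOLS = {"order_pizza": order_pizza}

def run_one_tool_step(user_request: str) -> str:
    raw = llm_complete(f"Emit a JSON object with 'tool' and 'args' for: {user_request}")
    call = json.loads(raw)                        # the model only emitted text; we parse it
    result = TOOLS[call["tool"]](**call["args"])  # the external system does the real work
    # Feed the observation back in so the model can respond to the user.
    return llm_complete(f"The tool returned: {result}. Summarize this for the user.")
```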
- 00:28:48 And the other big thing that is really highly hyped these days (there are a billion startups, and people say this is going to lead to AGI and whatnot) is the concept of agents. All agents are is arbitrary combinations of the above patterns, invoking language models recursively. That's it. At the end of the day, each model invocation is still a pure function from input to output, and we're still just trying to marshal up the correct inputs at each phase. I actually think building a good agent is closer to traditional software engineering than it is to magic AI programming.
- 00:29:28 It is different from traditional programming in one way: did it work? The model output is always going to be a bit opaque. It is deterministic, but it will be opaque, and it can be non-deterministic if you decide to turn up the randomness. It's not like regular programming, where once a function works on a variety of test cases you can be pretty sure it works; it needs to work across all the cases you care about, and the only way to validate that is statistically. You have to apply experimental techniques: give it a variety of inputs, see what your success rate is, and do data-driven analysis of the results. And you need to know your problem domain: for some domains 90% accuracy may be great; for others you may need five nines of accuracy, which you're probably never going to get from a language model, but you need to know what that number is.
- 00:30:20 All right, so that's the state of language model programming today. There's a huge amount of activity, and I can't keep up with all of it, but everything I have kept up with and read falls into two categories: either improving the models themselves and the core data science used to train them, or working on techniques for giving the models better inputs so we get better outputs, and on what kinds of programs we can write as scaffolding around the models to formulate those prompts. Mixed success; it's a very active field; sometimes they work well, sometimes they don't.
- 00:31:02 One thing we do well in programming, that we all do well here, is data and logic. We've been working with data and logic for a long time: business logic, databases; we're all very comfortable with those, and we write programs that work between them. We also now have language; we can work with language using the techniques I just described. But the question is still: how do I get my data to meet my language? I can shove it in the prompt, and in fact I have to shove it in the prompt: the context, the input to the pure function, is the only lever I have. There is no other way to make my data accessible to the system. So what's the best way to do that? What technology lives at the intersection of data and logic and language, with a foot in each world, such that I can work with it in a very data-oriented way on the data side and a very language-oriented way on the language side? Obviously this is a leading question: it's RDF.
- 00:32:06 So we should be putting RDF data in our prompts, and when we ask models for more structured output, we should be asking for it in RDF format. This works quite well. At a syntactic level: well, let's step back and talk about Noam Chomsky; I always love to do that. Good old Noam, he's still kicking around. His book establishes the concept of a generative grammar, which is a language model: a simple language model that fits on a page. That math right there is his language model; it's a generative grammar of language and how it works. He built it by observing many languages and trying to figure out the essence of language, of particular languages or of all languages. Chomsky believed there is a biological basis, that these rules are actually in human brains, that there was some mutation that gave us these rules, and that's why humans are language-using creatures.
- 00:33:04 The details are different for every actual language, but one thing that is super foundational is subjects, predicates, and objects, or subjects, verbs, and objects. There are languages that stretch the definition one way or another, or leave it a bit blurrier, but there's something pretty fundamental to cognition in this, and that's what Chomsky explores in the book. So when you go back to a language model: because the language model is trained on language, those concepts are also sort of baked into it. They are captured; they are measured as part of the model-making process.
- 00:33:38 And so language models are surprisingly good at going back and forth between natural language and RDF. You can take any snippet of text, paste it into ChatGPT, and say "give me the facts here in RDF format." If you want it to do an even better job, you can say "give me the facts, and here are the predicates I care about: given this list of predicates, find anything here that maps to them and give it to me in RDF format." The other direction works well too: you can give it RDF data and just have a conversation with that data really easily. In my experience it works better than uploading CSVs or spreadsheets or any of the other ways of getting structured data into a model, because triples are just statements, and the difference between a statement in language and a statement in RDF is not that big a conceptual leap.
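A sketch of the extraction direction; the prompt wording, the predicate list, and llm_complete are assumptions for illustration, and the returned Turtle is parsed so malformed output fails loudly.

```python
from rdflib import Graph

def llm_complete(prompt: str) -> str:
    """Stand-in for a language model call."""
    raise NotImplementedError

def extract_rdf(text: str, predicates: list) -> Graph:
    prompt = (
        "Extract the facts in the text below as RDF in Turtle syntax. "
        f"Use only these predicates: {', '.join(predicates)}.\n\n{text}"
    )
    turtle = llm_complete(prompt)
    g = Graph()
    g.parse(data=turtle, format="turtle")  # raises if the model emitted invalid Turtle
    return g

# Usage (hypothetical vocabulary):
# facts = extract_rdf(article_text, ["ex:bornIn", "ex:childOf", "ex:worksFor"])
```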
- 00:34:23 Tool use: it also does a good job with tool use. Take something people currently use tools for: I have the question "who are Luke's parents?" and I want to ask it of the model, using a variant of tool use, which is query generation. Say I want to do this with RDF. The model can emit an RDF query: Luke is the child of whom? It can convert that English question into an RDF query.
- 00:34:54 And here's where RDF shines. So far this is not that different from SQL; it's just a different query format. But what if my RDF implementation supports reasoning? Now the language model can ask a different question: who is Luke a descendant of? It's a different question, but the language model doesn't know any different; to the language model this is exactly the same sort of question, querying about a property of Luke, even though under the hood there are probably a bunch of Datalog rules firing to answer it and return the result set. The key point is that all of that complexity is abstracted away from the model. If you did something like that in SQL, you would have to put all of it into the model and make its tool use much more complex. So RDF really simplifies tool use, at least as far as tool use involves calculating over data.
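The two questions might look like this as SPARQL run through rdflib. The family vocabulary and data file are made up, and the second query only returns extra ancestors if the store applies inference rules; plain rdflib does not.

```python
from rdflib import Graph

g = Graph().parse("genealogy.ttl", format="turtle")  # hypothetical dataset

# Direct question: who are Luke's parents?
parents = g.query("""
    PREFIX ex: <http://example.com/family/>
    SELECT ?parent WHERE { ex:luke ex:childOf ?parent . }
""")

# Same shape of question, but ex:descendantOf may be entailed from chains
# of ex:childOf by a reasoning-capable store.
ancestors = g.query("""
    PREFIX ex: <http://example.com/family/>
    SELECT ?ancestor WHERE { ex:luke ex:descendantOf ?ancestor . }
""")

for row in ancestors:
    print(row.ancestor)
```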
- 00:35:44 And you can even ask it super-complex, much more open-ended questions over structured data: who am I, or what is the relationship between Luke and Rembrandt? That's a very open-ended question, and a language model can't answer it. But if I have a full genealogical database and the correct inference rules in there, the query can answer it precisely: Rembrandt is a thirteen-times-removed great-uncle, which is true; he's actually way back there in the family tree. That's structured data; that is a true fact, not a fact pattern the model maybe saw somewhere on the internet. It's actual logic and reasoning that gives me that answer.
- 00:36:25 Let's talk about semantics. We know language models model semantics; how do they do it? It's hard to get into in our remaining time, but stated briefly: models have what's called a latent space, which is a high-dimensional vector space. Technically it's a semantic field with a distance metric, but it's basically a high-dimensional mathematical construct such that two points that are close in that space are also, somehow, conceptually closely related. This is really abstract, though. The latent space of a model is pretty opaque; there's a lot of research on how to observe and interpret these spaces, but they're mostly opaque to humans, even though they are a measurement of language. It's just a mathematical object with thousands and thousands of dimensions, and the human brain doesn't easily wrap around it. Well, how does RDF model data? RDF models data as a graph: concepts linked to other concepts. The human brain is pretty good at grasping data in this format.
- 00:37:37 So: a knowledge graph, linked resources, a bunch of information. What you can do conceptually (and I'm still working on the best ways to do this in practice) is project your RDF conceptual map into that latent space. You can embed your concrete logical symbols and concepts into the model's conceptual space, and this does a few things. It gives you interpretability of the latent space: you can say, this region of the space is where that fact landed, and that tells me something about the topology of the space; I can overlay the two on top of each other. It also gives me insights. I might say: these two entities that I embedded landed pretty close together, and I'd never thought of them as close before; maybe I should explore that relationship. It's a soft signal, so if you're doing exploratory work based on the data you have, that can be really interesting.
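One way to sketch that projection idea: embed a short natural-language rendering of each RDF entity and compare distances. The embed function is a stand-in for whatever embedding model is used, and cosine similarity is just one common choice of metric.

```python
import math
from typing import Dict, List, Tuple

def embed(text: str) -> List[float]:
    """Stand-in for an embedding model mapping text into the latent space."""
    raise NotImplementedError

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def surprising_neighbors(entity_labels: Dict[str, str]) -> List[Tuple[float, str, str]]:
    """entity_labels maps an IRI to a short description; returns pairs sorted by similarity."""
    vectors = {iri: embed(label) for iri, label in entity_labels.items()}
    iris = list(vectors)
    pairs = [(cosine(vectors[a], vectors[b]), a, b)
             for i, a in enumerate(iris) for b in iris[i + 1:]]
    return sorted(pairs, reverse=True)  # unexpectedly similar entities float to the top
```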
- 00:38:41 The other thing you can do is soft inference. What's one of the reasons the semantic web failed, or indeed why did the attempts to solve AI with rule-based systems and logic alone fail? It's because the world is more full of rules than anyone ever has the patience to write down. There are so many aspects of the world, so much common knowledge, so many implicit assumptions, that it is impossible for a human to enumerate them all. People tried; go look up the Cyc project, they really tried. But it's hard, and even if I did it, my RDF graph would soon grow intractably large.
- 00:39:24 But what the language model can do is cover the really implicit, obvious things. I can just ask the language model to give me RDF expressing the relationship between arbitrary objects, and it will spit out a set of facts that are most likely true, because they're implicit in the world and implicit in what has been trained into the model. They may not be 100% accurate; again, models are probabilistic, which is why I call this soft inference. But it means we now have the hard inference of our reasoning system, our inference engine, and the soft inference of the language model, and if you combine them, you can do a lot of reasoning that you couldn't do with either one alone. I think that's pretty compelling.
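A sketch of combining the two: hard facts from a curated store plus soft, model-suggested triples. llm_complete and the prompt are assumptions, and in practice the model-derived triples would be tracked separately (for example in their own named graph) because they are only probable.

```python
from rdflib import Graph

def llm_complete(prompt: str) -> str:
    """Stand-in for a language model call."""
    raise NotImplementedError

def soft_triples(a_iri: str, b_iri: str) -> Graph:
    """Ask the model for likely, not guaranteed, relationships between two resources."""
    turtle = llm_complete(
        f"In Turtle syntax, state the most likely relationships between <{a_iri}> "
        f"and <{b_iri}>. Emit only triples you are confident are true."
    )
    g = Graph()
    g.parse(data=turtle, format="turtle")
    return g

hard = Graph().parse("curated.ttl", format="turtle")  # hypothetical curated dataset
combined = Graph()
for t in hard:                                        # hard facts
    combined.add(t)
for t in soft_triples("http://example.com/a", "http://example.com/b"):
    combined.add(t)                                   # soft, model-suggested facts
```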
- 00:40:04 This is the central insight behind what's called neurosymbolic AI, a small subfield of AI research. It's basically any research that tries to combine the abstract, fuzzy neural network with hard, concrete logical symbols, and there are a bunch of different approaches. Some people use Prolog to do this, in much the same way I've described for RDF, but with Prolog instead. Other people are trying to encode symbols as items in the high-dimensional vector space and use that for training the models. There are a lot of complicated things people are doing. But if you're interested in this: the approach of using RDF to interact with language models is a specific approach to neurosymbolic AI.
- 00:40:53 So, finally: the biggest problem I think we have with AI is that a lot of people are using it for the wrong things. I don't want it to do my writing for me, or my singing, or my music playing. I don't even necessarily want it to do my coding for me; I find the code it produces kind of sloppy. But a lot of programming is dishes and laundry, the dishes and laundry of data, just like you were saying earlier, particularly on the data side. So I think this is a tool to actually automate a lot of the dishes and laundry of working with data, and I'm trying to build it. This is what I'm working on now, and if anyone is interested in talking about that, I'd love to chat with you. So yeah, we're going to bring back the semantic web, with some AI under the hood. All right, thanks.
Tags: RDF, language models, AI ethics, technology, knowledge representation, semantic web, neurosymbolic AI, Transformer, data modeling