Build Specialized Fine-Tuned AI Agents | No Code

00:36:49
https://www.youtube.com/watch?v=VcelnXSWhzg

Summary

TL;DR: The video tutorial guides viewers through creating AI agents with fine-tuned models, without writing code. It emphasizes the underestimated power of fine-tuning, which can dramatically improve an AI's performance on specific tasks, such as replicating a brand's tone across social media channels. The presenter offers step-by-step instructions for setting up fine-tuned models, using platforms such as Relevance AI and OpenAI, to create versatile content creation agents. He illustrates this with a case where a client used agents to handle content across platforms like LinkedIn and Instagram. The video also touches on other potential applications for fine-tuning, including customer service. Throughout, the tutorial highlights the setup's potential to streamline content management and digital strategy by automating and optimizing content generation, helping businesses engage more effectively with their audience.

Takeaways

  • 🤖 Fine-tuning AI can significantly optimize content creation.
  • 🔍 No-code tutorials make advanced AI setups accessible.
  • 📊 Fine-tuned models adapt content to different platforms seamlessly.
  • 🛠️ Using OpenAI and Relevance AI simplifies fine-tuning processes.
  • 💬 AI agents can maintain brand voice across social media platforms.
  • 🚀 Companies can scale content creation with minimal effort.
  • 🎯 Fine-tuning aligns AI outputs with specific business goals.
  • 💡 Learn to customize AI models to your brand’s tone of voice.
  • 🌐 Explore diverse use cases for fine-tuned AI models.
  • 🧠 Understanding fine-tuning's potential is crucial for digital expansion.

Timeline

  • 00:00:00 - 00:05:00

    In this video, the creator aims to demystify the process of fine-tuning AI models for creating specialized agents without coding. The main focus is to show the potential of fine-tuned AI models for various business applications, particularly in content creation across different social media platforms. Fine-tuning allows for customized AI interactions, a practice that's underutilized but highly beneficial in aligning AI outputs with specific brand styles and tones.

  • 00:05:00 - 00:10:00

    The creator shares an overview of a content management system built on Relevance AI. This system uses fine-tuned models for each social media platform, effectively managing and repurposing content through Slack. By integrating AI agents within existing platforms like Slack, the workflow resembles human interaction, creating content or repurposing existing media based on simple commands. This setup minimizes the need for new software solutions, leveraging existing communication channels.

  • 00:10:00 - 00:15:00

    Each social media platform has a specialized AI agent with tools tailored for specific tasks, like transcription or content scraping, enabling the AI to create platform-specific posts. A demonstration agent for LinkedIn uses a fine-tuned model to produce content closely mimicking a brand's past posts. The aim is to enhance consistency and engagement through tailored AI output, showcasing the stark difference between generic and fine-tuned model outputs.

  • 00:15:00 - 00:20:00

    Fine-tuning refines pre-trained models with specific datasets to improve task-specific accuracy. While fine-tuning increases proficiency on certain tasks, it reduces generalization beyond the trained data. The creator emphasizes the crucial balance between dataset size and task specificity to avoid overfitting and catastrophic forgetting, both of which limit a model's flexibility and performance.

  • 00:20:00 - 00:25:00

    Fine-tuning is highlighted as beneficial for customizing style, improving reliability for specific tasks, and managing edge cases, with practical use cases like content creation and customer service. Limitations include overfitting and the potential loss of generalized knowledge, necessitating careful dataset preparation—ideally between 100-1000 data points for optimal tone adaptation in content generation tasks.

  • 00:25:00 - 00:30:00

    Steps for fine-tuning include preparing specific datasets, converting them to a recognized format (JSONL), training the model, and integrating it with Relevance AI for real-world application. Detailed guidance is provided on leveraging tools and platforms like Replicate and OpenAI for effective, cost-efficient fine-tuning processes. The importance of high-quality and correctly formatted datasets is stressed for successful training outcomes.

  • 00:30:00 - 00:36:49

    Finally, connecting the fine-tuned models with application tools like API setups in platforms such as Relevance AI is explained, ensuring optimized integration and usage of the models. The potential of fine-tuning for enhancing AI tasks and tone adaptation is praised, along with an announcement about a new online community for further collaboration and knowledge sharing with people interested in AI advancements.
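The Slack workflow described in the timeline comes down to one routing decision: a message containing a link to long-form content goes to the repurposing flow, while a plain-text message is treated as a post idea. A minimal sketch of that step, with made-up trigger names (the video's actual trigger configuration is not shown):

```python
import re

# Matches the first URL in a Slack message, if any.
URL_RE = re.compile(r"https?://\S+")

def route_slack_message(text: str) -> tuple[str, str]:
    """Return (trigger_name, payload) for a raw Slack message.

    Trigger names are illustrative placeholders, not Relevance AI identifiers.
    """
    match = URL_RE.search(text)
    if match:
        # A long-form source was shared: hand the URL to the repurposing flow.
        return ("repurpose_content", match.group(0))
    # No link found: treat the whole message as a post idea.
    return ("generate_from_idea", text.strip())
```

For example, `route_slack_message("Turn this into posts: https://youtu.be/abc123")` routes to the repurposing flow with the URL as payload.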
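The dataset-preparation step above (converting past posts to JSONL) can be sketched as follows, assuming an OpenAI chat model is being fine-tuned: each training example becomes one JSON object per line with a `messages` array of system/user/assistant turns. The example pairs and system prompt are placeholders:

```python
import json

def posts_to_jsonl(pairs: list[tuple[str, str]], system_prompt: str) -> str:
    """Convert (idea, finished_post) pairs into chat-format JSONL lines."""
    lines = []
    for idea, post in pairs:
        record = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": idea},       # what you would ask the agent
                {"role": "assistant", "content": post},  # the on-brand post it should produce
            ]
        }
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)
```

The resulting file is what gets uploaded to the fine-tuning platform; the video does this through the no-code UI rather than a script.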
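Once training finishes, the integration step amounts to pointing an API call at the fine-tuned model id. A sketch of the JSON body a custom API step (for example in Relevance AI) would send to OpenAI's chat completions endpoint; the model id shown is a made-up placeholder, and the real one is returned by the fine-tuning job:

```python
def build_chat_request(model_id: str, system_prompt: str, user_message: str) -> dict:
    """Build a chat completions request payload for a fine-tuned model."""
    return {
        "model": model_id,  # fine-tuned model ids start with "ft:"
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,  # arbitrary illustrative value
    }
```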

Frequently Asked Questions

  • What is the main focus of the video?

    The video focuses on creating highly specialized AI agents using fine-tuned models without coding.

  • Why is fine-tuning considered powerful according to the video?

    Fine-tuning is powerful because it allows increased accuracy for specific tasks and can be tailored to different use cases, such as content creation on social media.

  • What example is given for using fine-tuned models?

    An example given is a content creation agent for a client that utilizes fine-tuned models to align with their brand's tone across different social media platforms.

  • How is fine-tuning done without code in this video?

    Fine-tuning without code is achieved using platforms like OpenAI and tools like Replicate, where users can upload their data sets and configure models easily.

  • What are the possible use cases mentioned for fine-tuned models?

    Use cases include content creation for social media, customer service agents, and specialized systems requiring brand-aligned responses or knowledge-rich interactions.
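Before uploading a dataset, it is worth sanity-checking it. A hedged sketch of such a check: confirm every JSONL line parses and carries a `messages` array, and that the example count sits in the roughly 100-1000 range the video recommends for tone adaptation. The bounds are the video's rule of thumb, not a hard API limit:

```python
import json

def check_finetune_dataset(jsonl_text: str,
                           min_examples: int = 100,
                           max_examples: int = 1000) -> tuple[bool, int]:
    """Return (size_in_recommended_range, example_count); raise on bad lines."""
    lines = [ln for ln in jsonl_text.splitlines() if ln.strip()]
    for i, ln in enumerate(lines, 1):
        record = json.loads(ln)  # raises json.JSONDecodeError on malformed JSON
        if "messages" not in record:
            raise ValueError(f"line {i}: missing 'messages' key")
    return min_examples <= len(lines) <= max_examples, len(lines)
```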

Transcript

  • 00:00:00
    hey guys so in this video I'm going to
  • 00:00:02
    show you how to create highly
  • 00:00:03
    specialized AI agents by giving them
  • 00:00:05
    access to your own fine tuned models now
  • 00:00:08
    I really wanted to make this video
  • 00:00:09
    because in my opinion fine tuning has
  • 00:00:11
    been really underused and is actually
  • 00:00:13
    extremely powerful for many different
  • 00:00:15
    use cases and I think it is because most
  • 00:00:18
    people are a bit intimidated by the
  • 00:00:20
    setup and most tutorials out there are
  • 00:00:22
    code based so in this video I'm going to
  • 00:00:24
    show you how to set up your own fine-tuned
  • 00:00:26
    models without code and secondly I'll
  • 00:00:29
    show you how you can get give your AI
  • 00:00:30
    agents access to multiple specialized
  • 00:00:33
    fine tuned models which opens up a whole
  • 00:00:35
    range of very interesting use cases I
  • 00:00:37
    recently for example delivered a Content
  • 00:00:40
    creation agent to a client that had
  • 00:00:42
    access to five different optimized
  • 00:00:44
    fine-tuned models each trained on the past
  • 00:00:47
    posts, tone of voice and style of each of
  • 00:00:50
    their different social media platforms
  • 00:00:52
    LinkedIn X Instagram Facebook and their
  • 00:00:55
    blog and the results were pretty amazing
  • 00:00:57
    both to me and the company uh and by
  • 00:01:00
    giving one agent access to all these
  • 00:01:02
    different fine tuned models companies
  • 00:01:04
    can create and post hyper optimized and
  • 00:01:07
    brand aligned content across all their
  • 00:01:09
    platforms by simply interacting inside a
  • 00:01:11
    one chat uh but you can imagine there
  • 00:01:13
    are many more interesting use cases and
  • 00:01:16
    possibilities with this setup so if you
  • 00:01:18
    finally want to learn how to fine-tune the
  • 00:01:20
    easy way and learn how to build
  • 00:01:22
    specialized AI agents stick with me and
  • 00:01:24
    I'll show you how to do it so I've
  • 00:01:26
    broken this video down into four steps
  • 00:01:28
    first I'll show you the content team
  • 00:01:30
    system overview because I think this
  • 00:01:32
    system will give you a good Insight of
  • 00:01:34
    the possibilities for this setup and um
  • 00:01:37
    the use cases for this setup then I'll
  • 00:01:39
    give you a quick example of a fine-tuned
  • 00:01:41
    LinkedIn post writer agent that I just
  • 00:01:43
    recreated for myself quickly so you can
  • 00:01:45
    also see the difference between a normal
  • 00:01:47
    model output and a fine-tuned model
  • 00:01:49
    output then I'll give you a very quick
  • 00:01:51
    practical breakdown on what fine tuning
  • 00:01:52
    actually is when to use it and why to
  • 00:01:54
    use it and lastly I'll show you step by
  • 00:01:56
    step how you can fine-tune your own
  • 00:01:58
    model now before starting uh I'm very
  • 00:02:00
    excited to announce that I just launched
  • 00:02:02
    my community I'll be sharing everything
  • 00:02:05
    I've learned all my templates will be
  • 00:02:06
    available on there too and really I'm
  • 00:02:08
    there to help out with anything I can
  • 00:02:11
    and but it really is about for anyone
  • 00:02:13
    who sees the potential in this and wants
  • 00:02:15
    to become one of the first experts in
  • 00:02:17
    the AI agent field uh because if you
  • 00:02:19
    jump on this bandwagon right now
  • 00:02:22
    you will be one of the first experts
  • 00:02:24
    which in my opinion ideally positions
  • 00:02:26
    yourself when this Market matures and
  • 00:02:28
    the AI space actually takes off so if
  • 00:02:31
    you're interested in something like this
  • 00:02:32
    I have more information uh in a link in
  • 00:02:34
    the description below and uh I would
  • 00:02:36
    love to see you there anyway that aside
  • 00:02:38
    let me get to the content team system
  • 00:02:40
    overview as always I built this system
  • 00:02:43
    on relevance AI if you're completely new
  • 00:02:45
    to my channel um this might go over your
  • 00:02:47
    head a little bit uh it's not the
  • 00:02:49
    easiest setup so if it's hard to follow
  • 00:02:51
    make sure to check out my beginner
  • 00:02:53
    tutorial and on relevance AI I do have
  • 00:02:55
    many other tutorials on relevance AI too
  • 00:02:57
    but uh I'll try to keep it
  • 00:02:59
    straightforward and and simple now if
  • 00:03:01
    you've seen my previous video on the AI
  • 00:03:02
    agent social media team this is quite a
  • 00:03:04
    similar setup um with a a few slight
  • 00:03:07
    adjustments but the way this system is
  • 00:03:10
    uh set up is here we have our triggers
  • 00:03:12
    here we have our content manager agent
  • 00:03:14
    and his tools and here we have the sub
  • 00:03:16
    agents with their uh specific tools
  • 00:03:19
    right so in this case I've actually uh
  • 00:03:22
    set this agent up inside of their slack
  • 00:03:24
    which I noticed is a very interesting use
  • 00:03:26
    case for delivering these AI agents to
  • 00:03:27
    clients or to companies because instead
  • 00:03:29
    of having to manage another software
  • 00:03:31
    they could just uh whenever they need
  • 00:03:32
    their agent they could just open their
  • 00:03:34
    slack channel of their agent and
  • 00:03:36
    basically instruct them what to do and
  • 00:03:38
    it sort of makes sense because uh most
  • 00:03:40
    people already manage sort of their
  • 00:03:41
    employees or their colleagues
  • 00:03:44
    inside of slack so it makes sense to
  • 00:03:46
    sort of put these agents inside of their
  • 00:03:48
    slack too so the way the system uh is
  • 00:03:51
    set up it could basically do two things
  • 00:03:53
    right it can either repurpose content uh
  • 00:03:55
    based on long form content like uh
  • 00:03:57
    YouTube videos uh podcasts or blog posts
  • 00:04:00
    and basically sort of repurpose that
  • 00:04:02
    across all the different social channels
  • 00:04:04
    or it can generate uh posts from ideas
  • 00:04:08
    right so someone could just have an idea
  • 00:04:10
    write it in the slack Channel and it
  • 00:04:12
    will start generating posts for the
  • 00:04:13
    different platforms based on that idea
  • 00:04:15
    so the way this in practice works out is
  • 00:04:18
    in the following way you can see here I
  • 00:04:19
    have the two different triggers right
  • 00:04:20
    generate from idea or repurpose content
  • 00:04:23
    right now that will be sent to the
  • 00:04:24
    content manager agent who has three main
  • 00:04:27
    responsibilities now the first one as
  • 00:04:29
    always
  • 00:04:30
    is to delegate this task to the right
  • 00:04:32
    sub agent and you can see each of the
  • 00:04:34
    sub agent is basically a specialist in
  • 00:04:37
    each of the platforms right so we have
  • 00:04:38
    our blog writer agent our X writer agent
  • 00:04:41
    LinkedIn writer agent Facebook writer
  • 00:04:42
    agent and Instagram writer agent now
  • 00:04:45
    they're all set up in a very similar way
  • 00:04:46
    so for example if we look at the
  • 00:04:47
    LinkedIn writer agent he has access to
  • 00:04:49
    three tools right and if this is really
  • 00:04:52
    where the magic happens because in his
  • 00:04:54
    first tool he will have access to the
  • 00:04:56
    fine-tuned model right so the first tool
  • 00:04:58
    is the LinkedIn writer and that fine-tuned
  • 00:05:00
    model is trained on all the past
  • 00:05:02
    LinkedIn post of this company to mimic
  • 00:05:06
    the style, tone of voice and sort of length
  • 00:05:09
    as much as possible on this specific
  • 00:05:11
    platform in this case LinkedIn right
  • 00:05:13
    then the other tools he has are more for
  • 00:05:15
    the repurposing right we have the
  • 00:05:16
    YouTube transcription tool so if we
  • 00:05:18
    actually want to repurpose it will need
  • 00:05:20
    context on the YouTube video so it will
  • 00:05:21
    first transcribe to understand the
  • 00:05:23
    context and then write the LinkedIn post
  • 00:05:25
    right and the same for podcast with the
  • 00:05:26
    audio transcription and I actually
  • 00:05:28
    forgot one which is uh scraper tool
  • 00:05:30
    where it can for example if they share a
  • 00:05:32
    blog post link then it can scrape that
  • 00:05:35
    blog post to understand the context and
  • 00:05:36
    then write the LinkedIn uh post now
  • 00:05:39
    basically this is the way all of these
  • 00:05:40
    agents are set up but with the
  • 00:05:42
    difference that each of these uh agents
  • 00:05:44
    has access to a different fine-tuned
  • 00:05:47
    model right that's trained on that
  • 00:05:49
    specific platform's data right so that's
  • 00:05:53
    the way this uh system is set up so
  • 00:05:54
    let's say our LinkedIn writer agent writes a
  • 00:05:57
    post and then he reports it back to the
  • 00:05:59
    content manager agent and then basically
  • 00:06:02
    his second responsibility comes into
  • 00:06:03
    play which is we've instructed him to
  • 00:06:04
    always first report back that uh post to
  • 00:06:08
    us and that's why we have here a tool
  • 00:06:10
    that's called send Slack message on Slack
  • 00:06:13
    um to make sure that everything's all
  • 00:06:15
    right we can check if we like the post
  • 00:06:17
    if we want to change something um and if
  • 00:06:19
    we approve we send it back to the
  • 00:06:21
    content manager agent and then he can
  • 00:06:24
    perform his third responsibility which
  • 00:06:25
    is actually posting it to the different
  • 00:06:27
    platforms and that's why you can see he
  • 00:06:28
    has these other tools post to X post to
  • 00:06:31
    LinkedIn post to blog post to Instagram
  • 00:06:34
    and again I forgot one which is post to
  • 00:06:36
    Facebook right so that's how it's set up
  • 00:06:39
    um you can see quite powerful sort of
  • 00:06:41
    setup uh and this is just one use case
  • 00:06:44
    of using sort of these fine-tuned models
  • 00:06:46
    inside of agent systems but I can
  • 00:06:49
    imagine there are many more uh very
  • 00:06:51
    interesting use cases for this you can
  • 00:06:52
    for example imagine a customer service
  • 00:06:55
    agent that has access to a fine Tu model
  • 00:06:58
    so it can actually really resp respond
  • 00:06:59
    in the tone of voice of a company which
  • 00:07:01
    is still really hard to do with normal
  • 00:07:03
    LLMs and you can also imagine that a
  • 00:07:06
    combination of a knowledge base or a rag
  • 00:07:08
    so getting the the context of the
  • 00:07:10
    information context of the company
  • 00:07:12
    combined with the tone of voice through
  • 00:07:13
    a fine-tuned model can also be a very very
  • 00:07:15
    powerful uh um solution for companies so
  • 00:07:20
    unfortunately but understandably the uh
  • 00:07:22
    the company I built this for actually
  • 00:07:23
    didn't want me to share this uh inside
  • 00:07:26
    of inside of this YouTube channel and to
  • 00:07:28
    recreate the whole
  • 00:07:30
    uh system for myself would be quite a
  • 00:07:31
    big job especially considering that I
  • 00:07:33
    have to fine tune each of these
  • 00:07:34
    different social media platforms which
  • 00:07:36
    would be quite a big job so what I did
  • 00:07:38
    is I basically recreated the LinkedIn
  • 00:07:40
    writer agent for myself and through that
  • 00:07:43
    example I'll also share that template
  • 00:07:45
    with you uh on the on the community um
  • 00:07:49
    and also through that example you
  • 00:07:51
    understand how to do this for other
  • 00:07:52
    platforms if you want to and also how
  • 00:07:54
    you could recreate this entire system
  • 00:07:56
    because again it's very similar to the
  • 00:07:58
    social media
  • 00:07:59
    um agent team template that I do have
  • 00:08:02
    available uh so if you understand how to
  • 00:08:04
    do this you could set this system up for
  • 00:08:06
    yourself if you're interested now let me
  • 00:08:08
    show you a quick example of the LinkedIn
  • 00:08:09
    writer agent I set up for myself so here
  • 00:08:12
    we are in my relevance AI dashboard um
  • 00:08:15
    now here I have my fine-tuned LinkedIn
  • 00:08:16
    writer agent and basically I copied that
  • 00:08:18
    one and recreated another one LinkedIn
  • 00:08:21
    writer one which basically has access to
  • 00:08:22
    a normal GPT-4o model so not a fine
  • 00:08:25
    tuned one just so we can check the
  • 00:08:27
    differences in output now for context
  • 00:08:30
    it's good to know that I trained this
  • 00:08:32
    fine-tuned model based on LinkedIn posts
  • 00:08:34
    that I like which are sort of in the AI
  • 00:08:36
    space that are the characteristics are
  • 00:08:39
    sort of like these short Punchy
  • 00:08:41
    sentences very value driven and uh with
  • 00:08:44
    a strong sort of uh call to action if
  • 00:08:47
    you're on LinkedIn you probably know
  • 00:08:48
    what I mean uh but let's check so here
  • 00:08:50
    we have our non fine tuned uh agent
  • 00:08:53
    right so normally of course this would
  • 00:08:56
    be uh instructed this agent would be
  • 00:08:57
    instructed by the manager agent but now
  • 00:08:59
    we just uh for example purposes do it on
  • 00:09:02
    uh directly to the LinkedIn writer agent
  • 00:09:04
    and let's say in this case we want to
  • 00:09:06
    generate a blog post based on an idea
  • 00:09:08
    not repurposing uh we can say something
  • 00:09:11
    like right uh
  • 00:09:13
    LinkedIn
  • 00:09:15
    post
  • 00:09:17
    on uh let's
  • 00:09:19
    say in the era of
  • 00:09:23
    AI English is the new programming
  • 00:09:27
    language
  • 00:09:31
    and maybe
  • 00:09:33
    say uh end with a
  • 00:09:36
    CTA to my free prompting
  • 00:09:42
    guide all right let's say let's copy
  • 00:09:45
    this and put in exactly the same uh
  • 00:09:47
    prompt inside of the fine-tuned LinkedIn writer
  • 00:09:50
    agent so let's
  • 00:09:54
    check and now let's check the
  • 00:09:57
    difference so here we have non-
  • 00:09:59
    fine-tuned LinkedIn post so in the era
  • 00:10:02
    of AI English becoming the new
  • 00:10:03
    programming language as artificial
  • 00:10:05
    intelligence continues to evolve the
  • 00:10:06
    ability to communicate effectively in
  • 00:10:08
    English is more crucial than ever I'm
  • 00:10:10
    already seeing that it didn't understand
  • 00:10:12
    what I was trying to say just this
  • 00:10:13
    coding languages like Python and
  • 00:10:15
    JavaScript have shaped the tech
  • 00:10:16
    landscape the way we articulate our
  • 00:10:18
    thoughts and ideas in English is now a
  • 00:10:19
    key skill in leveraging AI
  • 00:10:21
    tools and Technologies whether you're
  • 00:10:22
    crafting prompts for AI models or
  • 00:10:24
    collaborating with teams across the
  • 00:10:26
    globe mastering English can okay so yes
  • 00:10:28
    you can see didn't really understand uh
  • 00:10:31
    uh what I was trying to say and second I
  • 00:10:33
    don't really like the tone of voice
  • 00:10:34
    especially for LinkedIn post it's it's
  • 00:10:37
    uh it's not very engaging and you've
  • 00:10:39
    probably seen this right when you try to
  • 00:10:41
    create content with uh with ChatGPT for
  • 00:10:44
    for your social media or for your
  • 00:10:45
    company maybe it's very hard to get the
  • 00:10:47
    tone of voice right uh even with a lot
  • 00:10:50
    of prompting it's very hard to get that
  • 00:10:51
    tone of voice uh and that's why I think
  • 00:10:54
    it's such a good use case for these
  • 00:10:56
    fine-tuned models now let's check the fine-tuned model
  • 00:10:59
    so here's a LinkedIn Post in the era you
  • 00:11:01
    can already see by the way that the the
  • 00:11:02
    sort of layout is very different right
  • 00:11:04
    in the era of AI English is the new
  • 00:11:06
    programming language the better you
  • 00:11:08
    communicate with AI the more you can
  • 00:11:09
    automate the more you automate the more
  • 00:11:10
    you can scale the more you scale the
  • 00:11:12
    more you win to help you win I've
  • 00:11:15
    created a free guide to help you master
  • 00:11:16
    the art prompting I love it you can see
  • 00:11:18
    the difference in style it's it's very
  • 00:11:20
    different and way more the style I'm
  • 00:11:22
    looking for it covers the three types of
  • 00:11:24
    prompts how to create effective prompts
  • 00:11:26
    how to use AI to create your prompts how
  • 00:11:28
    to use AI to refine your prompts how
  • 00:11:29
    to use AI to create your own prompt
  • 00:11:31
    Library the the guide also includes a
  • 00:11:33
    video tutorial and a Notion template to get
  • 00:11:35
    access simply click my name plus follow
  • 00:11:37
    plus Bell okay I can already tell you
  • 00:11:39
    this part directly comes from the
  • 00:11:41
    training data because this is sort of
  • 00:11:43
    the style that I've trained this uh
  • 00:11:45
    model on comment prompting on this post
  • 00:11:48
    I'll send you the link PS if you don't
  • 00:11:49
    want to wait you can grab it here so yes
  • 00:11:51
    you can see the difference and the power
  • 00:11:52
    of these fine-tuned models you can really you
  • 00:11:55
    know enhance that tone of voice and
  • 00:11:57
    really get it lots more like you you
  • 00:11:58
    actually want it so I want to go over
  • 00:12:01
    quickly what is actually fine-tuning
  • 00:12:03
    when to use it and how we can set this
  • 00:12:05
    up so I want to go very quickly over the
  • 00:12:07
    basics of fine tuning I want to keep it
  • 00:12:08
    practical so it's not going to be long
  • 00:12:10
    but you do need to understand it a
  • 00:12:11
    little bit so fine tuning basically
  • 00:12:13
    refines pre-trained AI models with
  • 00:12:15
    specific data right so if you see the
  • 00:12:17
    diagram here uh we have a base model
  • 00:12:19
    just like GPT-4o for example that of
  • 00:12:21
    course is trained on a huge data set and
  • 00:12:24
    then we have our base model and what we
  • 00:12:25
    do when we fine tuning is we basically
  • 00:12:27
    refine that base model with a smaller
  • 00:12:29
    data set to make it specialized on a
  • 00:12:32
    specific task right so and through that
  • 00:12:35
    sort of specialization we can of course
  • 00:12:36
    get increased accuracy on those specific
  • 00:12:38
    tasks now important to note you get
  • 00:12:40
    increased accuracy on those specific
  • 00:12:42
    tasks right because you actually get
  • 00:12:44
    decreased accuracy on any task that's
  • 00:12:46
    not related to your training data and
  • 00:12:48
    you'll lose that you get that decreased
  • 00:12:51
    accuracy on these other tasks very
  • 00:12:52
    quickly right even with a small data set
  • 00:12:54
    like 500 or a thousand data points now
  • 00:12:57
    lastly some people have heard have some
  • 00:12:59
    confusion between fine tuning and rag
  • 00:13:02
    now the the difference very easy fine
  • 00:13:03
    tuning actually adjusts the underlying
  • 00:13:05
    language model while rag or knowledge
  • 00:13:07
    base just adds context um or knowledge
  • 00:13:11
    to the base model right so it doesn't
  • 00:13:13
    actually change the underlying model it
  • 00:13:15
    just adds context to it and why do we
  • 00:13:18
    actually fine-tune now I got this from
  • 00:13:19
    the OpenAI website uh these sort of but
  • 00:13:22
    I tried to put in an example for each
  • 00:13:24
    one now of course the first one is the
  • 00:13:26
    one I've been using which is setting the
  • 00:13:27
    style tone format or other qualitative
  • 00:13:30
    aspects right of course I think this is
  • 00:13:32
    one of the really powerful use cases
  • 00:13:34
    that's actually already working pretty
  • 00:13:36
    well uh which is you know trying to get
  • 00:13:38
    brand-aligned or persona-aligned
  • 00:13:40
    content out right or you can even
  • 00:13:42
    imagine inside of chat Bots this could
  • 00:13:43
    also be a very interesting use case
  • 00:13:45
    right now uh secondly uh there are more
  • 00:13:48
    right but secondly you have improving
  • 00:13:50
    reliability at producing a
  • 00:13:52
    desired output right so you can imagine
  • 00:13:54
    if you're looking for uh always have to
  • 00:13:57
    have a always have a very specific
  • 00:13:59
    output you can imagine that if you train
  • 00:14:01
    a model based on that specific output
  • 00:14:03
    you get high reliability in getting that
  • 00:14:05
    desired output right so for example an
  • 00:14:07
    e-commerce product description right uh
  • 00:14:09
    it always needs to be in the exact same
  • 00:14:11
    format length the different sections
  • 00:14:14
    right and you're using a model only to
  • 00:14:16
    do that well then it makes sense to
  • 00:14:18
    fine-tune it to get that reliability um
  • 00:14:21
    higher right and then uh the third one
  • 00:14:23
    is correcting failures to follow complex
  • 00:14:25
    prompts you can imagine you've already
  • 00:14:27
    experienced it probably language model
  • 00:14:29
    struggle with very long prompts and uh if
  • 00:14:32
    you need to do very complex things you
  • 00:14:34
    can probably better fine-tune a model or
  • 00:14:36
    uh to to get those sort of specifics
  • 00:14:39
    inside of a very large prompt I think
  • 00:14:40
    some of these use cases you can also
  • 00:14:42
    attack with chain prompting of course
  • 00:14:44
    breaking down a larger prompt into
  • 00:14:45
    smaller prompts and put it in a chain
  • 00:14:47
    but uh you can imagine they for example
  • 00:14:49
    had an example here detailed technical
  • 00:14:51
    manuals right and then uh the fourth one
  • 00:14:54
    is handling many edge cases in specific
  • 00:14:56
    ways you can imagine these base models
  • 00:14:58
    are are uh better at sort of this broad
  • 00:15:01
    knowledge and and generalized knowledge
  • 00:15:04
    so if you need to do something which
  • 00:15:07
    involves a lot of edge cases you can
  • 00:15:09
    imagine it's probably better to train a
  • 00:15:11
    a specific model on that sort of uh
  • 00:15:14
    smaller data set with the more Edge case
  • 00:15:17
    um uh data right for example diagnosing
  • 00:15:20
    rare or complex medical cases that fall
  • 00:15:22
    outside of the norm right you can
  • 00:15:23
    imagine a base model would be would have
  • 00:15:25
    a hard time with it because it would
  • 00:15:27
    first uh look at the general and the
  • 00:15:30
    broad picture while if you're really
  • 00:15:31
    looking for those specific identifying
  • 00:15:33
    those specific uh medical cases that
  • 00:15:36
    fall outside of the norm you can better
  • 00:15:37
    have a fine-tuned model that's
  • 00:15:38
    specialized in recognizing those uh
  • 00:15:41
    diseases right and then the last one is
  • 00:15:43
    uh performing a new skill or task
  • 00:15:45
    that's hard to articulate in a prompt
  • 00:15:47
    now a lot of the fine tuning that people
  • 00:15:49
    are doing right now is with images and
  • 00:15:51
    this is a very good use case for that
  • 00:15:53
    you can imagine it's very difficult to
  • 00:15:54
    describe in a prompt you've also probably
  • 00:15:57
    also experienced it um to to uh to to
  • 00:16:01
    let a model generate a specific type of
  • 00:16:03
    image you want right it's hard to do
  • 00:16:05
    that in language so you can better train
  • 00:16:07
    or fine-tune a model by showing it the
  • 00:16:09
    actual images of the type of images you
  • 00:16:12
    want in your output then doing it
  • 00:16:14
    through language with prompting right uh
  • 00:16:16
    so I think this is also a very
  • 00:16:18
    interesting use case especially for the
  • 00:16:19
    image uh image generation fine-tuned models
  • 00:16:23
    so you can see here customizing model to
  • 00:16:24
    create artwork in a specific
  • 00:16:26
    hard-to-describe style now what are some of the
  • 00:16:28
    limitations right now of fine-tuning uh
  • 00:16:30
    because there are limitations still and
  • 00:16:33
    the main ones are really first of all
  • 00:16:35
    overfitting and overfitting basically
  • 00:16:36
    means overly specialized right so it
  • 00:16:40
    basically as soon as you start sort of
  • 00:16:41
    training this on this specific task it
  • 00:16:45
    very quickly even again with 500 to
  • 00:16:47
    1,000 data points start to struggle with
  • 00:16:50
    generalization or new or unseen tasks so
  • 00:16:53
    any task you haven't traded on and
  • 00:16:55
    therefore these models become very
  • 00:16:57
    limited in terms of their flexibility
  • 00:16:58
    it's it's very important to keep these
  • 00:17:00
    models very specific to a specific task
  • 00:17:03
    right and then the second thing is which
  • 00:17:05
    also happens quite quick is forgetting
  • 00:17:07
    right models very quickly lose some of
  • 00:17:10
    the broad knowledge they originally had
  • 00:17:12
    but because they're focusing so narrowly
  • 00:17:14
    on that specific data set right so you
  • 00:17:16
    can see this also happening I've seen it
  • 00:17:18
    happening you know with 700 800 data
  • 00:17:21
    points it already starts hallucinating
  • 00:17:23
    completely stop making sense and uh
  • 00:17:26
    putting in weird symbols and things like
  • 00:17:27
    this so this will happen quite quickly
  • 00:17:30
    so again you have to sort of find this
  • 00:17:33
    uh data set size and I'll I'll tell you
  • 00:17:34
    my recommendations later uh to to get
  • 00:17:37
    these models right right and then the
  • 00:17:38
    last one is of course data dependency
  • 00:17:40
    the output of of or the the the the
  • 00:17:43
    model output of your fine model only
  • 00:17:45
    depends on the quality of your data your
  • 00:17:47
    quality the quality of your data has to
  • 00:17:48
    be good if you expect a good outcome and
  • 00:17:51
    second as I said the quantity of data is
  • 00:17:53
    very important right so too much data
  • 00:17:56
    will mean more forgetting and more
  • 00:17:57
    overfitting so more specialized which
  • 00:17:59
    could be good for some use cases but if
  • 00:18:01
    you still need some broad knowledge or
  • 00:18:03
    general knowledge then you need to go
  • 00:18:05
    lower right but if you go too low uh
  • 00:18:07
    then it will not perform the way you
  • 00:18:09
    want of course so you have to sort of
  • 00:18:11
    find that sweet spot and I'll tell you
  • 00:18:14
    the sweet spot I found for getting sort
  • 00:18:16
    of that tone of voice uh right so how do
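There is no evaluation step in this no-code flow, but one cheap guardrail (not from the video, purely a suggestion) is to hold a few examples out of the training file; after training, you can compare the fine-tuned model's answers on those held-out prompts against the base model to spot overfitting and forgetting early. A minimal split sketch:

```python
import random

def split_dataset(examples, holdout_frac=0.1, seed=42):
    """Shuffle, then hold a slice of examples out of the training file so you
    can compare fine-tuned vs. base-model outputs on unseen prompts later."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = max(1, int(len(shuffled) * holdout_frac))
    return shuffled[cut:], shuffled[:cut]  # (train, holdout)
```

With 300 scraped posts, a 10% holdout leaves 270 for training, which still sits inside the 100-300 sweet spot discussed below.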
  • 00:18:19
    So how do we actually fine-tune, and how do we fine-tune without code? There are two main ways. The first is with Replicate. I really like Replicate: it makes it easy to fine-tune your models, and the cool thing is you can use all the open-source models. You can use Llama, you can use Mistral, you can choose whichever model you want to fine-tune, and you can also work with images and video; you can really do anything you want here. They also make it really easy to export your own fine-tuned LLM as an API, which you can then use wherever you want; you could, for example, bring it into Relevance AI. You can see there are already a lot of people making fine-tuned models and publishing them on Replicate ("take photos in style" and so on); they have many of these image models, so check it out, there are some cool fine-tuned models already available. But for this specific use case, in this video I'm going to do it on OpenAI. Why? Because fine-tuning the new models, GPT-4o and GPT-4o mini, is actually completely free on OpenAI until the end of September, and it's quite an easy process, even easier than Replicate I'd say. So we're going to do it on OpenAI.
  • 00:19:34
    And lastly, how do we actually set this up? There are basically four steps involved, and the first one is the hardest: preparing our dataset. The second is converting that dataset to a JSONL file; a JSONL file is what these models expect to be trained on, and I'll explain in detail later how to do this, it's not that difficult. Then we have to actually train our model, and lastly I'll show you how to give your trained, fine-tuned model to your agent in Relevance AI.
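For orientation before we build it: each line of that JSONL file is one self-contained chat example in OpenAI's fine-tuning format. A sketch of a single record (the role prompt and post text are invented for illustration):

```python
import json

# One training example in OpenAI's chat fine-tuning format; the JSONL file
# is one such JSON object per line. Role prompt and post text are invented.
record = {
    "messages": [
        {"role": "system", "content": "You are a world-class LinkedIn post writer."},
        {"role": "user", "content": ""},  # prompt left empty, as recommended later in the video
        {"role": "assistant", "content": "We just shipped a new feature. Here's what we learned..."},
    ]
}
line = json.dumps(record)  # one line of the JSONL training file
```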
  • 00:20:05
    Now, before doing that, we have to decide on the size of our dataset. As I said, take this with a grain of salt: there's no exact science and you have to play around with it a bit, but I've been doing this for quite some weeks now. In my experience, for this specific use case, which is matching a tone of voice, we need a dataset of between 100 and 1,000 data points. That's still quite a broad range, so I wanted to get even more specific, because what I've been seeing is that even if you go close to a thousand, or into the upper hundreds, the model already starts losing a lot of its general knowledge; it sometimes starts doing weird things or putting out strange symbols. So if content generation from ideas is your main use case for this fine-tuned model, you want to stay a little toward the lower end of the range. Why? Because we still need a bit of that broad, general knowledge; we don't want it to start hallucinating and stop making sense. If you want to do it like I did in my example, a dataset between 100 and 300 will probably work better for you: it will capture the tone of voice pretty well, and it will keep making sense. For repurposing, though, we can train on a somewhat bigger dataset. Why? Because when we repurpose content, for example a blog post, we give the model a lot more context about what we want our post to be about, so it has to do less thinking in terms of generating and needs less knowledge. We can therefore get away with a bit less broad knowledge and train on a larger dataset; I'd put that at 300 to 600. Even if you go over that, it starts to degrade; I was pretty surprised how quickly it loses that general knowledge and starts doing weird things.
  • 00:22:02
    Now I'll take you step by step through setting up this LinkedIn dataset, actually fine-tuning your model, and giving it to your agent. As I said, the hardest part is preparing the dataset. Of course, we don't want to start manually copying and pasting LinkedIn posts (or whatever posts we're gathering) into the dataset, so for this LinkedIn case I set up a tool that helps you scrape LinkedIn posts and automatically adds them to a knowledge base inside Relevance AI; we can then export that to a CSV, which we'll later transform into JSONL. The reason I could build this in Relevance AI is that they have a built-in LinkedIn post scraper. I'm going to share this tool with you so you can use it yourself; it will basically let you create this dataset very quickly. You will have to change one or two things, which I'll show you right now, and I'll put the link to the tool in the description below. For the other platforms (Instagram, Facebook, X, and blog posts) I actually use make.com to get the data into a spreadsheet, and I'll make sure to share those templates in the community.
  • 00:23:13
    Anyway, once you've cloned the tool, you'll come to a screen like this: the build screen. The tool works like this. We have three user inputs. First, you choose whether to scrape a company or a user LinkedIn URL, i.e. which type of profile you want to scrape. Let's say in this case we want a company. Then we simply add the LinkedIn URLs of the companies we want to scrape. Say, just as an example, we're going to scrape two companies: HubSpot and maybe Pipedrive. Not a great example, of course, since these are competitors, but it's just an example; so we put those LinkedIn URLs in here. Then we can decide the number of days in the past to retrieve posts from: if we only want posts from the last four months, we put in 120 days. This tool is already set up for you; the only thing you'll have to change is the third step, called "insert data to knowledge", because this is what actually uploads the scraped LinkedIn posts to your knowledge base. Of course it currently points to my knowledge base, which won't be available in your Relevance AI account, so click "create new", call it whatever you want, say "fine-tuning LinkedIn", and create the table; it will then upload to that knowledge base in your own account. Here I actually used a little bit of code to transform the scraped data into an array of objects, which we need for uploading it to our Relevance AI knowledge base. You will also have to change the number of LinkedIn posts to retrieve per LinkedIn URL: it's set to 50, but you can change it if you want a different number. Let's say I want 100 for now, 50 for each company. Then we just run it. It scrapes all the profiles, transforms them into the right array of objects to upload to our knowledge base, and inserts them, and now we can see all the scraped LinkedIn posts here in the knowledge base. From here we can just export it into a CSV, so that's what we do.
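That "array of objects" is just a list of records the knowledge-base insert step can ingest. A minimal sketch of what that transformation code might look like (the field names here are my assumptions, not Relevance AI's actual schema):

```python
def to_knowledge_rows(posts_by_url, limit_per_url=50):
    """Trim the scraped posts per profile and shape them into a list of
    objects (dicts) for the knowledge-base insert step.
    `posts_by_url` maps each LinkedIn URL to its list of scraped post texts."""
    rows = []
    for url, url_posts in posts_by_url.items():
        for post in url_posts[:limit_per_url]:  # the per-URL limit from the tool
            rows.append({"profile_url": url, "post_text": post})
    return rows
```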
  • 00:25:58
    Now that we've downloaded it, we have to do one more thing before the next step of transforming this into a JSONL file: adding two empty columns. We open Google Sheets and import the CSV we just exported (it takes a moment to load). Then we can delete the "assistant" header row; we don't need it. These posts are basically the model outputs, but to train these models we also need the prompt that belongs to each output, and we also need the system message, i.e. the system prompt, which in this case is essentially the role prompt. We're going to leave those empty for now, because we'll add them in later; we just add two empty columns on the left. And like this, we export it again, naming it something like "LinkedIn fine-tune HubSpot".
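If you'd rather skip the Google Sheets detour, the same two empty columns can be prepended in a few lines of Python; this is just a convenience sketch of the manual step, not part of the video's workflow:

```python
import csv

def add_empty_columns(src_path, dst_path):
    """Prepend two empty columns (system prompt, user prompt) to every row
    of the exported CSV, mirroring the Google Sheets step."""
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            writer.writerow(["", ""] + row)
```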
  • 00:27:24
    So we export it to CSV, and now we have to convert that into JSONL. The easiest way I've found to do that is through a website called Novelcrafter (I'll share the link in the description below too), which has a very simple tool for this. We just upload our file, and you can see that it automatically put all the posts into the assistant message, which is basically our output; we also have the user message and the system message, i.e. the system prompt. Now, do we actually have to add a prompt? I've experimented a lot with this. I've tried things like describing what each post is about, to maybe give it a bit more context, but in my experience it actually works best to leave the prompt field empty while training on this data. So we leave that empty, and we just add a system prompt. In this case you could add something like "You are a world-class LinkedIn post writer": you give it a role, plus some characteristics of the posts it writes. Normally you'd check what kinds of posts you like and what their characteristics are; in this case I haven't, so let's just say "You write for HubSpot and Pipedrive, and you always use emojis". A really, really bad prompt, but you understand what to do here. For another one, for example, I had things like "use short, punchy sentences", "value-driven", "always include a CTA"; you look at the posts and describe their characteristics inside the system message. Once you've done that, click "set for all" and it will automatically set that system message for all of these posts; the user message we leave empty. Of course you could try putting in a prompt if that works better in your experience, but leaving it empty also saves time. Then we just click "download JSONL", and that's the type of file we need to actually start fine-tuning our model.
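The website is the no-code route; if you're comfortable with a short script, the same CSV-to-JSONL conversion is a few lines of Python. The column layout (two empty columns, then the post text) matches the spreadsheet we just built, but treat the index and the role prompt as assumptions:

```python
import csv
import json

SYSTEM_PROMPT = "You are a world-class LinkedIn post writer."  # example role prompt

def csv_to_jsonl(src_path, dst_path, post_column=2):
    """Convert the prepared CSV into JSONL, one chat example per row:
    fixed system prompt, empty user prompt, post text as the assistant turn."""
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for row in csv.reader(src):
            if not row[post_column].strip():
                continue  # skip rows with no post text
            example = {"messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": ""},
                {"role": "assistant", "content": row[post_column]},
            ]}
            dst.write(json.dumps(example) + "\n")
```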
  • 00:29:49
    Now that we have our JSONL file, all we do is go to platform.openai.com. I imagine you have an OpenAI account by now; if you don't, you can create a free one. Then go to the dashboard and click "fine-tuning"; again, this is free to do until the end of September, so I really recommend trying it out. All we do here is click "Create". There we can first choose which base model we want to fine-tune on, and then upload our training data, which is of course our JSONL file. (I don't know why it wasn't loading at first; I selected the JSONL file again and then it worked.) You can choose your model here; I'd recommend GPT-4o, the best model, and it's free anyway. Even when you do have to pay, on datasets like the ones we're using, of a few hundred examples, it will be very cheap, probably no more than a few dollars. So you upload your JSONL file, and the rest you don't really have to touch. You can change the name here if you want, say something like "Pipedrive". The seed you can ignore for now; of course you can play around with it as you get better, but you don't really have to touch it. Then you just click "create", and it starts fine-tuning the model.
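Because a single malformed line can make the upload fail, it's worth sanity-checking the JSONL locally first. This check isn't in the video; it's a small defensive sketch:

```python
import json

def validate_jsonl(path):
    """Check that every line parses as JSON and ends with an assistant turn;
    return the number of examples. A malformed line is a common upload error."""
    count = 0
    with open(path, encoding="utf-8") as fh:
        for n, line in enumerate(fh, 1):
            example = json.loads(line)  # raises ValueError on broken JSON
            messages = example["messages"]
            assert messages[-1]["role"] == "assistant", f"line {n}: no assistant turn"
            count += 1
    return count
```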
  • 00:31:23
    Now, this will actually take some time: on a dataset like this, of around 100 examples, it probably takes 20 to 30 minutes, but with a bigger dataset it can take up to a few hours. When it's done you can play around with it here in the OpenAI playground, and you can also compare it against the original base model to see the differences and whether you like the model. Once you like the model and actually want to give it to an agent, I'm going to show you right now how you can do that.
  • 00:31:52
    It's not extremely straightforward in Relevance AI to get access to your own fine-tuned model, but I'll explain right now how to do it. So, in your Relevance AI dashboard (I will also share this tool in the description below, which makes it a lot easier for you to do this yourself, but I'll still walk through it because you will have to adjust some things): unfortunately, we can't use the LLM step inside Relevance AI. Normally in an LLM step you can choose your model like this, but the problem is it won't allow us to add our own fine-tuned model, only the major models. So instead of the LLM step, we actually have to use an API step.
  • 00:32:38
    Again, I will share this template, and since all of this is already configured for you, the only thing you have to change here is the API key: behind "Bearer" you add your personal API key from OpenAI. If you don't know where to find it, go back to OpenAI; there's a section with API keys where you can generate one. You add it behind "Bearer", making sure there is a space before the key. The rest you can leave the same: the content type "application/json", the authorization header, and the body type, which is "raw". Then we add this JSON string, which basically has a few parts. The first is the model, which you do have to change: you swap this part out for your own fine-tuned model. Where do you find that identifier? Really easy: go to your fine-tuned model, and there you have the name of the model; you copy it and paste it in there. Then you have the system prompt, which you can write directly in here too; you'll probably want to change this as well. And lastly, there's the content, which is of course our prompt: the prompt we're going to send to our fine-tuned model. In this case I've given my agent the job of writing it, so the agent just fills the prompt into the user-input variable, and then we pass that prompt to the API to actually get the output of our fine-tuned model.
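What the API step is assembling there is the standard OpenAI chat-completions request. A sketch of the same payload in Python (the model id and key are placeholders; the real id is the `ft:...` name copied from the dashboard):

```python
import json

def build_request(model_id, system_prompt, user_prompt, api_key,
                  max_tokens=1024, temperature=0.2):
    """Assemble the headers and raw JSON body the API step sends to the
    chat-completions endpoint. All argument values are placeholders."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # note the space after 'Bearer'
    }
    body = json.dumps({
        "model": model_id,  # e.g. "ft:gpt-4o-2024-08-06:my-org::abc123" (hypothetical)
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    })
    return headers, body
```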
  • 00:34:08
    Now, I actually added one more step, because unfortunately this API step, the way we're doing it, can't accept special characters or formatting like bold; it will give an error. So before it actually calls our fine-tuned model, we pass the prompt it gets from the agent through one more step; below, you'll see its prompt, whose only job is to take the special formatting out of the prompt below without changing, adding, or taking away any text. Then we actually pass this variable into the API call, simply by placing it inside the string as a variable. That's the way to do it. You can also change the max tokens here (we can raise that a little), and you can change the temperature, though raising it is not recommended, especially with these fine-tuned models, because they will start hallucinating very quickly.
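In the video this cleanup is done with an extra LLM step; for plain markdown decoration, a deterministic pass does the same job more predictably. A sketch (the pattern list is my assumption about what breaks the raw-JSON body):

```python
import re

def strip_formatting(prompt):
    """Remove common markdown decoration (bold, italics, inline code) without
    adding or removing any of the underlying text."""
    cleaned = re.sub(r"\*\*(.+?)\*\*", r"\1", prompt)  # **bold**
    cleaned = re.sub(r"\*(.+?)\*", r"\1", cleaned)     # *italics*
    cleaned = re.sub(r"`(.+?)`", r"\1", cleaned)       # `inline code`
    return cleaned
```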
  • 00:35:00
    That should be it. Lastly, you can change the step's output: with a normal API call you get back a response body, which is essentially a piece of JSON, but all we really want back from this step is the message, the output of our fine-tuned LLM. The way you do that: when you run this, you'll see two other outputs, the response body and the status. You can just remove those by clicking on the icon here, then add a new one ("add new input") and select the message there; you'll find it among your variables. (I can't find it right now because I haven't run it yet, but you'll find it once you have.) You can also change the name here to "message", and then the step will only output the message, which is the output of our fine-tuned LLM; we don't really want anything else back.
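The response body being trimmed there is the standard chat-completions response shape; pulling the message out of a raw body looks roughly like this (the body here is a trimmed mock, not real API output):

```python
import json

# A trimmed mock of the chat-completions response body the API step returns.
raw_body = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "Here is your LinkedIn post..."}}
    ]
})

def extract_message(body):
    """Pull just the assistant text out of the response body, which is all
    we surface to the agent."""
    return json.loads(body)["choices"][0]["message"]["content"]
```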
  • 00:35:58
    So that's it: that's how you set it up and give it to your agent. Quite straightforward and easy, I think; the hardest part, again, is preparing the dataset. But these are really interesting use cases, especially for tone of voice, and you can really start implementing these things in your builds to make them better. I think these models will only get better, and this fine-tuning will only get better, so I see a bright future in fine-tuning these models. As I said before, by giving agents access to this sort of thing, knowledge bases, tools, and fine-tuned models, we can make these agent systems even more powerful. Now, if you're still with me, thank you so much for watching. If you're interested, please check out my community; I'm really excited to get started and work together with you guys there. So check it out if you're interested, and again, thank you so much; I'll see you in the next one.
Tags
  • AI agents
  • fine-tuning
  • content creation
  • no-code tutorial
  • OpenAI
  • Relevance AI
  • digital strategy
  • social media marketing
  • automation
  • customer service