why you suck at prompt engineering (and how to fix it)

00:56:39
https://www.youtube.com/watch?v=3jxfk6nH5qk

Summary

TLDR: The video critiques common pitfalls in prompt engineering and argues for moving beyond simplistic templates toward strategic, research-backed approaches. It uses the 'midwit' meme, in which people in the middle overcomplicate tasks and miss the direct solutions reached at both ends of the IQ curve, to frame prompt engineering as a progression from casual ChatGPT use to expert, single-shot system design. The creator walks through techniques such as role prompting, chain-of-thought prompting, emotional stimuli, and few-shot prompting, which improve output quality and reduce reliance on more powerful models. Treating English as a new programming language, he covers prompt structure (including markdown formatting) and the measured effect of emotional stimuli on accuracy, and challenges viewers to craft prompts good enough to run on cheaper, faster models, promising significant accuracy gains and business value.

Takeaways

  • 🧠 Understand the significance of good prompt engineering for effective AI solutions.
  • 🔄 Move beyond rigid templates to thoughtful prompt structuring.
  • 🏆 Strive for efficient and effective AI system design.
  • 🔍 Utilize techniques like role and emotional prompting for better results.
  • 💡 Recognize English as the new programming language for AI.
  • 🏗️ Incorporate markdown formatting for clear AI instruction.
  • 📝 Implement few-shot prompting with examples to enhance accuracy.
  • 🎯 Focus on cheaper, faster AI models with optimized prompts.
  • 💼 Extend prompt engineering strategies to various AI applications.
  • 🚀 Aim for accuracy improvement and better AI task performance.

Timeline

  • 00:00:00 - 00:05:00

    The speaker introduces prompt engineering through the 'midwit' meme, in which people in the middle of the IQ curve overcomplicate simple processes while those at either end arrive at the same simple solution. The aim is to move viewers from being 'midwits' who rely on generic prompt templates to becoming insightful prompt engineers.

  • 00:05:00 - 00:10:00

    The speaker highlights how understanding the science and principles behind prompt engineering can make one a proficient AI system builder. This expertise allows for extracting maximum value from large language models (LLMs), making AI systems more efficient and useful.

  • 00:10:00 - 00:15:00

    The speaker contrasts conversational prompt engineering with single-shot prompting. Conversational prompting involves interactive prompts with follow-up questions and is typically used for personal productivity; single-shot prompting involves building reliable, automated systems that run with no further human intervention.

  • 00:15:00 - 00:20:00

    The speaker emphasizes the importance of mastering single-shot prompt engineering to build scalable AI systems that can execute tasks reliably, highlighting its potential to generate substantial economic value. The necessity of English proficiency in prompt creation is explained, citing it as the new 'programming language'.

  • 00:20:00 - 00:25:00

    The speaker asserts that mastering prompt engineering is crucial for working with AI voice systems, AI agents, AI task automations, and building custom AI tools. Such skills ensure the creation of valuable, efficient systems. They contrast effective prompt engineers, who optimize prompts to use cheaper AI models, with those who rely on more expensive models due to inefficient prompts.

  • 00:25:00 - 00:30:00

    A 'prompt formula' is introduced, consisting of role, task, specifics, context, examples, and notes (a sketch of this structure appears after the timeline below). Each component is backed by research that improves prompt performance, such as role prompting, chain-of-thought prompting, and emotional stimuli. The goal is to raise the user's prompt-writing ability, making AI systems more effective and efficient.

  • 00:30:00 - 00:35:00

    Detailed techniques for each part of the prompt formula are discussed. Role prompting can enhance accuracy by roughly 10-25%. Task instructions benefit from chain-of-thought prompting, giving a 10-90% accuracy boost depending on task complexity. Emotional stimuli improve accuracy on complex tasks by up to 115% and also make outputs more truthful and informative.

  • 00:35:00 - 00:40:00

    The importance of providing examples is emphasized, with few-shot prompts significantly increasing prompt accuracy. Giving strategic examples helps the model understand desired input-output relationships. Notes are also critical for reiterating key issues and structuring prompts optimally, leveraging findings like the 'lost in the middle' effect to enhance performance.

  • 00:40:00 - 00:45:00

    Incorporating markdown for structured, readable prompts is advised, as format and structure contribute significantly to better AI responses. While no definitive research is cited, evidence from OpenAI's own practices suggests structured inputs improve performance. Practical benefits of markdown include easier maintenance and readability.

  • 00:45:00 - 00:50:00

    The speaker highlights real-world AI systems built beyond simple chat prompts and encourages moving from GPT-4 to cheaper models like GPT-3.5: with effective prompting, cheaper models can match the output quality of more expensive ones, delivering economic benefits without sacrificing performance.

  • 00:50:00 - 00:56:39

    The conversation with the CTO reveals practical insights into enhancing AI model efficacy with prompt engineering, such as using confusing examples to train models more effectively. The speaker concludes by motivating viewers to apply these strategies to become adept at creating efficient, scalable, and economically viable AI-driven solutions.
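
As a rough illustration of the role/task/specifics/context/examples/notes structure described in the 00:25:00 entry above, here is a minimal sketch in Python. The section contents are placeholders of my own, not the exact prompt shown in the video.

```python
# Skeleton of the six-part prompt formula: role, task, specifics, context, examples, notes.
# All section text below is illustrative; fill each part with content specific to your system.
PROMPT_TEMPLATE = """{role}

{task}

Specifics:
{specifics}

Context:
{context}

Examples:
{examples}

Notes:
{notes}"""

prompt = PROMPT_TEMPLATE.format(
    role="You are an experienced email classification expert...",
    # The inner {email_content} placeholder survives this format() call and is filled at run time.
    task="Classify the following email into ignore, opportunity, or needs attention: {email_content}",
    specifics="- This task is vital to our business.\n- Reply with the label only.",
    context="Our company provides AI solutions to businesses...",
    examples="Q: <example email>\nA: opportunity",
    notes="- If unsure, prefer 'needs attention'.",
)
```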


Video Q&A

  • Why is prompt engineering crucial for AI systems?

    Prompt engineering is essential because it directly influences the effectiveness and efficiency of AI models, determining how well they provide valuable outputs from inputs.

  • What is the 'midwit' problem in prompt engineering?

    The 'midwit' problem refers to individuals who overcomplicate AI tasks instead of simplifying approaches, hindering their efficiency in building AI solutions.

  • How can understanding different prompting techniques improve AI outputs?

    By understanding techniques like role prompting, chain of thought, and few-shot prompting, users can significantly boost the accuracy and reliability of AI responses.

  • What are the benefits of single-shot prompting compared to conversational prompting?

    Single-shot prompting can be baked into automated systems, producing consistent, reliable outputs without human intervention, whereas conversational prompting is more forgiving because a human can always follow up.

  • Why is English considered the new programming language?

    English is viewed as the new programming language because well-written natural-language prompts can often replace traditional scripting for AI tasks.

  • How does markdown formatting enhance prompt engineering?

    Markdown formatting helps organize prompts clearly, aiding both the engineer in structuring the task and the model in parsing the instructions (see the sketch after this Q&A).

  • What impact does emotional prompting have on AI task performance?

    Emotional prompting can improve model performance by up to 115% on complex tasks, and it also makes outputs more truthful and informative.

  • How can prompt engineering help save costs and improve efficiency in AI solutions?

    Effective prompt engineering lets cheaper, faster AI models deliver high-quality outputs without falling back on more expensive alternatives (the sketch after this Q&A pairs a structured prompt with a cheaper model).

  • What role do examples play in improving AI performance?

    Providing examples, or few-shot prompting, helps the AI model understand desired output formats and styles, which can dramatically enhance accuracy.

  • Can prompt engineering be applied beyond text to AI systems like voice agents or business tools?

    Yes, prompt engineering principles can extend to areas like voice agents and business automation tools, enhancing their functionality and integration effectiveness.
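
To make the markdown-formatting and cheaper-model points above concrete, here is a hedged sketch using the OpenAI Python client. The markdown headings are illustrative, and the model name is simply one example of a cheaper, faster choice, not a recommendation from the video.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Markdown headings and lists give both the maintainer and the model a clear structure.
MARKDOWN_PROMPT = """# Role
You are an experienced email classification expert.

# Task
Classify the email below into one label: ignore, opportunity, or needs attention.

# Specifics
- This task is vital to our business; I value your thoughtful analysis.
- Reply with the label only.

# Email
{email_content}
"""

def classify(email_content: str) -> str:
    # A well-structured prompt often lets a cheaper, faster model do the job.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{"role": "user", "content": MARKDOWN_PROMPT.format(email_content=email_content)}],
    )
    return response.choices[0].message.content.strip()
```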

Transcript (en)

  • 00:00:00
    You probably suck at prompt engineering, and in this video I'm going to tell you why, how you can fix it, and how you can avoid being the guy in the middle of this midwit meme. That might seem a little off topic, but give me a second and I'll explain how it applies to the majority of people trying to do prompt engineering and build AI systems, and why it's probably holding you back: you're stuck in the midwit range. If you haven't seen the meme before, the low-IQ person and the high-IQ person converge on the same solution. We have the guy using Apple Notes on one side and the genius using Apple Notes on the other, and in the middle the midwit who's overcomplicating it, making it difficult and painful for himself. Same thing with Nescafé Classic on both sides, while the midwit in the middle struggles with all the fancy coffee methods.
  • 00:00:44
    So how does this apply to prompt engineering? On the far left we have the person who just uses ChatGPT and prompts it however they wish, throwing things in there. On the far right is where we're trying to get you after this video: someone who has a toolkit of prompts and understands the science behind them. And in the middle is probably you right now. No disrespect to the other YouTubers here, because I've made videos on prompting myself, so I'm part of the problem, but those videos are all about ChatGPT prompt templates: taking the thinking away from you and putting it in the hands of a template someone else created. I'm not going to dump on my own videos too much, because they were more conceptual, so I'd say I'm on that line. The point of this video is to take you from that plateau of someone trying to do prompt engineering without actually understanding the science behind it, and get you up to a very capable prompt engineer who can do great things with these language models. That matters because your ability to prompt these models and give them instructions directly impacts your ability to get value out of them. If there's this amazing new technology called LLMs and you're better at using them, you're going to go further in the AI space, and further in life, because you can send better instructions to these models.
  • 00:02:13
    Continuing on, you may be wondering why the new style, why the camera is on a different side, why everything is so casual. That's because I've been spending a lot of time on my videos lately, as you may have noticed, and some of you are starting to think of me as a YouTuber. I've never really thought of myself as a YouTuber; I'm a businessman, and YouTube is how I get clients for my business. As much as I love making videos and teaching you, what I really like doing is working on my business and with my team: the software we're building through Agentive, the work at Morningside, and the things we do in my education community, teaching people how to start their own businesses. So expect fewer fancy videos that require a lot of time and editing. If I have something interesting to share, like in this video, I'll talk about it. This video is coming out of me seeing so many people I talk to in my community not understanding this fundamental skill. It is so fundamental, yet people have the misconception that they already know how to do it, which I'm going to break down (absolutely destroy, honestly) and then rebuild your skills as a prompt engineer. You may also be wondering why I do this at all. It's because I have a SaaS that helps agency owners build AI solutions for businesses. If I don't teach you how to do prompt engineering, you're never going to use my SaaS, so I have to do this to succeed and make the money I want to make with it. You get the byproduct of me trying to build my SaaS, which is helping you learn these things.
  • 00:03:46
    So, why you're probably bad at prompt engineering: there's conversational prompt engineering versus single-shot. Conversational is what everyone thinks prompt engineering is. They go onto ChatGPT with a cool prompt template, chuck it in there, get some responses, think "man, I'm so good at this", switch off, and believe they're a prompt engineer. This, of course, is human-operated. There are follow-up prompts you can send, like "could you please modify this a little bit", and because of those follow-ups it's very forgiving in terms of what you say and how you tweak it to get the right responses. It's really just good for personal use. If you're working a job and want to streamline some of your workload, great. ChatGPT is incredible software and I use it all the time, so I'm not knocking it, but that is conversational prompting. On the other side is single-shot prompting, which is something we can actually bake into a system that is automated and part of an ongoing flow with an AI task embedded in it. There are no follow-up prompts, because in most cases there's no human involved, and there's no room for error: you can't have the model outputting "hey, here is the answer" and then the answer. It just needs to give you the answer, in the right form, every single time, or the system breaks. If we can prompt it into something reliable, we can have a very scalable system with AI built into it, which is ideal for these AI-assisted systems, and this is really how you create value. The benefit of conversational prompting skills, which many of you will have, is that they might make you better at your job, make your boss a bit more money because you can do more work, maybe make you a bit more money in the process. But the benefit of these single-shot systems, where we build an AI task that performs a specific function reliably every single time, is that they allow you to build AI systems worth potentially thousands of dollars apiece, as I've done and as many people in my community have done. If you don't believe me, I don't care.
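
To make the contrast concrete, here is a minimal sketch (my own illustration, not code from the video) of what baking a single-shot prompt into a system can look like, using the email-classification example the video introduces later and the OpenAI Python client. The helper name and prompt wording are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Classify the following email into exactly one of these labels: "
    "IGNORE, OPPORTUNITY, NEEDS_ATTENTION. Reply with the label only.\n\n"
    "Email:\n{email}"
)

def classify_email(email_text: str) -> str:
    """Single-shot call: no human follow-up; the output is consumed by the next automation step."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep the output as stable as possible for a reliable pipeline
        messages=[{"role": "user", "content": PROMPT.format(email=email_text)}],
    )
    return response.choices[0].message.content.strip()

# The label feeds an automation (e.g. a router in Make.com) rather than a person.
label = classify_email("Hi, we'd like a quote for an AI chatbot for our dental clinic...")
```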
  • 00:05:38
    Furthermore, on the point of why you should take prompt engineering seriously: Andrej Karpathy says "the hottest new programming language is English", and this is no dummy. He's a founding member of OpenAI and a leading AI researcher. What he means is that being able to write instructions in English lets you, one, generate code if you want to (translating from English to code is one way of programming in English), but another way is that if you can write effective prompts, you can replace the programming a massive script would have required: you can write a prompt that effectively does everything that script would have done. You can replace large blocks of code with a well-written prompt, which is really what I want you to focus on. You can have the abilities of a developer if you can write these prompts well and use LLMs properly. And furthermore, this guy, Liam Ottley: I've founded a couple of AI companies, I have my own AI agency, Morningside AI, my own AI education community and accelerator, and my AI SaaS called Agentive, which is really where my focus is right now. I've got some pretty smart people working for me. I'm not the brains of the operation anymore (I hope I was at one point), and my CTO Spencer has five or six years of NLP experience and does some really cool stuff for us. A lot of what I'm going to share here about how you should be doing your prompt engineering, what I've learned and what I now use, comes from him. So you might think I'm just some goofball who's been doing YouTube for twelve months, but I do have a team, and I've paid people a lot smarter than me to give me this knowledge, and now I'm giving it to you. So remember this: a well-written prompt can replace hundreds of lines of code. I think that's my quote, but I'll just say someone said it, because someone must have, and that's essentially what you can do if you write a well-written prompt. Now, here's an example.
  • 00:07:27
    prompt um now here's an example so
  • 00:07:29
    there's there a video that will have
  • 00:07:30
    just gone out recently on my channel
  • 00:07:31
    where I manage my phone finances with AI
  • 00:07:33
    I set up a system where my assistant can
  • 00:07:35
    send money I can send screenshots these
  • 00:07:37
    things here through the system and out
  • 00:07:39
    comes the other side a tracker for all
  • 00:07:41
    my expenses within my notion um it
  • 00:07:43
    automatically extracts the extracts the
  • 00:07:46
    transactions from the screenshots
  • 00:07:47
    categorizes them stores them in my
  • 00:07:49
    expense data database within the notion
  • 00:07:52
    and this is kind of the system here you
  • 00:07:54
    can pause and take a look but basically
  • 00:07:55
    so it took me 2 hours to write a very
  • 00:07:57
    good prompt that can success categorized
  • 00:08:00
    format and then pass the data over to
  • 00:08:01
    notion um and that's ended up saving 8
  • 00:08:03
    hours per month for my system so example
  • 00:08:06
    there not the best one but you get the
  • 00:08:08
    idea um if you write a good prompt you
  • 00:08:10
    can replace what would have taken like
  • 00:08:12
    to me for me to do this expensive system
  • 00:08:14
    with code would have taken a a whole lot
  • 00:08:16
    longer and it would have been extremely
  • 00:08:17
    messy um but the AI can just throw all
  • 00:08:19
    the information at it say hey look this
  • 00:08:20
    is what I want you to do with it and
  • 00:08:21
    outcomes the transactions ready to go
  • 00:08:23
    into notion and no we're still not ready
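
For illustration, a sketch of the kind of single prompt that can stand in for that parsing-and-categorizing code. The field names, categories, and function name are hypothetical, not taken from the video.

```python
# Sketch only: one prompt replacing what would otherwise be messy extraction code.
EXPENSE_PROMPT = """You are a meticulous bookkeeping assistant.
Extract every transaction from the bank-app screenshot text below and return JSON only:
a list of objects with keys "date", "merchant", "amount", "currency", "category".
Allowed categories: Food, Transport, Software, Travel, Other.

Screenshot text:
{ocr_text}
"""

def build_expense_prompt(ocr_text: str) -> str:
    """Fill the single variable; the JSON output would be passed on to Notion by the automation."""
    return EXPENSE_PROMPT.format(ocr_text=ocr_text)
```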
  • 00:08:25
    And no, we're still not ready to move forward, because you need to understand what happens if you get this one skill right, a skill many people don't actually have even though they think conversational prompt engineering will be enough for them to go and build these systems. AI voice systems are all the rage right now (I've done a ton of videos on them you can watch on my channel): if you can't prompt correctly, if you don't have good prompt engineering skills, you can't do AI voice systems. If you don't have good prompt engineering skills, you can't create AI agents like GPTs, you can't build AI tasks into automations on Zapier, Make, and the like, and you can't build custom AI tools on Relevance AI, Stack AI, and these other platforms. So just get this one thing right and watch the rest of this video. I'm not dangling a retention hook for your TikTok brain, and I don't care if you watch the rest of it, but I'm telling you: if you don't take the time to actually soak in the information I'm about to give you and get good at this prompt engineering skill, you are not going to make any money in AI, because everything depends on it.
  • 00:09:18
    comparison of the two different types of
  • 00:09:20
    people you can be you can either watch
  • 00:09:22
    this video and come out on the right
  • 00:09:23
    side here or you can continue to do your
  • 00:09:26
    whatever you think you're doing when
  • 00:09:27
    you're prompt engineering um and you can
  • 00:09:28
    be like the guy so go on the left the
  • 00:09:30
    midw he has a handy bag of prompt
  • 00:09:32
    templates he gets stuck when something
  • 00:09:34
    doesn't work because he doesn't
  • 00:09:37
    understand what what the template's even
  • 00:09:38
    doing so then he uses a more expensive
  • 00:09:41
    and a smarter model like he moves from
  • 00:09:42
    3.5 turbo to four turbo and he goes oh
  • 00:09:45
    yeah well now it works because he gets
  • 00:09:48
    the models to do the work inad of
  • 00:09:49
    himself so by doing this he creates
  • 00:09:51
    slower and more expensive systems and
  • 00:09:54
    therefore he struggles to create systems
  • 00:09:56
    that are actually valuable for the
  • 00:09:57
    clients cuz if it's costing them a lot
  • 00:09:59
    and they're really slow there's less
  • 00:10:01
    value for the client right and then
  • 00:10:03
    number six he gives up on trying to
  • 00:10:05
    start an AI business and get into this
  • 00:10:06
    AI solution space and then like some of
  • 00:10:08
    you guys in the comments they become a
  • 00:10:10
    triaa as a scam goofball and blame it on
  • 00:10:12
    the model and not your inability to
  • 00:10:13
    learn how to write English and then on
  • 00:10:15
    the right we have the guy that you want
  • 00:10:17
    to be uh he has a toolkit of prompt
  • 00:10:19
    components and methods based on Research
  • 00:10:21
    which I'm going to take you through in
  • 00:10:22
    this video he approaches problems like
  • 00:10:25
    an engineer he skillfully applies these
  • 00:10:28
    techniques he achieves the desired
  • 00:10:30
    performance with fastest and cheapest
  • 00:10:32
    model available so he uses the cheapest
  • 00:10:34
    model he can get and uses his skills to
  • 00:10:36
    make it do what he needed to do
  • 00:10:38
    therefore he's able to create lightning
  • 00:10:40
    quick and affordable AI systems for
  • 00:10:41
    clients that create actual value because
  • 00:10:44
    they're cheap and they're fast and then
  • 00:10:46
    therefore he actually makes money
  • 00:10:47
    because these clients like wow this
  • 00:10:49
    thing is awesome and number seven this
  • 00:10:50
    guy then finds other AI Chads like him
  • 00:10:52
    who know how to do prompt engineering
  • 00:10:54
    and are making money with AI and with
  • 00:10:56
    him and his friends they all get AI Rich
  • 00:10:58
    um yes I'm selling the dream there but
  • 00:11:00
    that is what's possible if you can get
  • 00:11:01
    this thing right and that is what myself
  • 00:11:03
    and a bunch of the other guys that I was
  • 00:11:04
    just namam with they're all doing it uh
  • 00:11:06
    it's happening um whether you like it or
  • 00:11:08
    not so be like this guy don't be like
  • 00:11:10
    this guy um yeah there you go so now we
  • 00:11:13
    Now we get into the perfect prompt formula for building AI systems, which is the meat and potatoes of this video. Beware of "The Prompt Formula": as I mentioned, you don't want to be the guy who relies on a formula, and while I am giving you one in this video, I've put it in asterisks and capital letters so you understand I'm taking the piss out of formulas a little. What I'm teaching you is the science behind them, so that if you run into an issue you'll understand, "look, I can apply this technique to try and fix it". You'll be able to write good prompts forever if you understand and actually absorb this. The components of the prompt are role, task, specifics, context, examples, and notes, and behind each component is a related scientific paper, piece of research, or prompting technique that has been discovered and backed up with a paper: on screen here we have role prompting, chain-of-thought prompting, EmotionPrompt, few-shot prompting, and "lost in the middle". All of these are covered in the next section. Before we jump in: each of these techniques comes with an increase in accuracy or performance for your prompts, and I'm going to retention-hook you with all these question marks, because over time we'll reveal just how much performance improvement you can get. If you stack all of these together you get a large increase in performance on your prompt, and a lot of them are very easy to implement. I'm not going to tell you how much yet, but it's a huge increase just from applying these simple techniques.
  • 00:12:37
    We're going to use an example throughout this video: an email classification system. The AI task here in the middle is where we'll be sending our prompt. In this case, someone comes onto a company's website and fills out a form; the form submission gets sent by email to the company, to the CEO or the ops guy, who normally has to read through it, classify it, and take action from there. What we're going to do is imagine a system where there's an AI task or AI node, in Make.com or whatever you want to use, where the email comes in and is classified by our prompt into an "opportunity", "needs attention", or "ignore" label. A super basic system that I wanted to use as the running example.
  • 00:13:19
    Let's get into it. We're going to build up a prompt over time, applying these techniques to make it perform better. Starting off, we have the typical ChatGPT prompt. If you asked any midwit (well, not even the midwit, this is the guy on the far left), any regular bottom-feeder ChatGPT user, they'd probably give you a prompt like "classify the following email into ignore, opportunity, or needs attention labels" and then paste in the email. That's our starting point, the typical ChatGPT prompt, and it's as far left on the IQ scale as you can go.
  • 00:13:51
    component we're starting off with the
  • 00:13:52
    rooll I know for you Tik Tok brains here
  • 00:13:53
    you're probably going to look at this
  • 00:13:54
    and be like ah there a lot of writing
  • 00:13:55
    but uh can you just pause this video uh
  • 00:13:57
    I'm not going to go over all of it I
  • 00:13:59
    think some of you already know some of
  • 00:14:00
    these components Ro prompting is
  • 00:14:01
    something that you've definitely done
  • 00:14:02
    before but I want to draw attention here
  • 00:14:03
    to the research results with this little
  • 00:14:05
    rocket ship to show that it's increasing
  • 00:14:06
    the accuracy uh when you assign an
  • 00:14:09
    advantageous role in your role prompting
  • 00:14:10
    by saying you are an email
  • 00:14:11
    classification expert uh trained to be
  • 00:14:14
    the assist this it can increase the
  • 00:14:15
    accuracy of your prompts and the
  • 00:14:16
    performance of them by 10.3% and
  • 00:14:19
    secondly if you give complimentary
  • 00:14:20
    descriptions of your abilities to
  • 00:14:22
    further increase accuracy you can get up
  • 00:14:24
    to 15 to 25% increase in total so this
  • 00:14:26
    is as simple as here's the example you
  • 00:14:28
    are a highly skilled in Creative short
  • 00:14:29
    form content script writer that is the
  • 00:14:31
    role with a knack for crafting engaging
  • 00:14:34
    informative and concise videos so you
  • 00:14:36
    add a role and then you give it key
  • 00:14:38
    qualities like engaging informative and
  • 00:14:40
    concise and you basically hype it up and
  • 00:14:42
    tell it man you're so amazing at this
  • 00:14:44
    this this so you need have a role that
  • 00:14:46
    is strong and tells it that is
  • 00:14:47
    advantageous to what it's doing so if
  • 00:14:49
    you're solving a math problem you are an
  • 00:14:51
    expert math teacher and then you can
  • 00:14:53
    give it some more examples after that of
  • 00:14:54
    the key quality so takeaways here select
  • 00:14:56
    the role that is advantageous for the
  • 00:14:58
    specific task EG math teacher for math
  • 00:14:59
    problems and then enrich the rooll I
  • 00:15:01
    like that word enrich the rooll with
  • 00:15:03
    with additional words to highlight how
  • 00:15:05
    good it is at that task super simple um
  • 00:15:08
    that's Ro prompting so this is what
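
A minimal sketch of what that looks like in practice, echoing the video's classifier example; the exact enriched wording is my own illustration.

```python
# Role prompting: an advantageous role plus complimentary qualities, prepended to the task.
ROLE = (
    "You are an experienced email classification expert who accurately and reliably "
    "categorizes inbound business emails based on their content and potential business impact."
)

# The {email_content} placeholder is left intact and filled in later with .format().
PROMPT_WITH_ROLE = (
    ROLE
    + "\n\nClassify the following email into ignore, opportunity, or needs attention labels.\n\n"
    + "{email_content}"
)
```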
  • 00:15:09
    Here's what we're going to do to tie everything together in this video: a before and after. This was the low-IQ one, remember, our starting point, and here's what happens after we add in the role. You'll need to pause as this thing gets bigger, because it's hard to fit the whole prompt on screen, but in the before and after you'll see the role prompt highlighted (well, low-lighted in black) so you can see what we've changed. We've still got the task, we've still got the bit from before, but it's now part of a longer prompt with the role included as well: "You are an experienced email classification system that accurately categorizes emails based on their content and potential business impact." Great.
  • 00:15:48
    So, the task. Going back there, that's pretty helpful: this is actually the task. The thing most people put into ChatGPT, or into the prompt, is the task itself, basically just telling it what it's going to do, usually starting with a verb ("generate a...", "analyze this", "write this"), being as descriptive as possible while also keeping it brief. An example: "Generate engaging and casual outreach messages for users looking to promote their services in the dental industry, especially focusing on the integration of AI tools to scale businesses. Your messages should be direct." It tells it what it should do and uses a verb; nothing too crazy here. But what I will mention is that, because we're building these single-shot systems, the task is where we need to insert values: the prompt is written once, and then we throw different inputs at it. In this case the email content is the variable we need to put in place. In the outreach example you can see I have the dental industry as the niche and, in pink, the integration of AI tools as the offer; that's from an earlier video I've done. Within the task is where you insert the variables that will be used throughout the system. If you go back a little, we have the email content variable, and you can see it has already become part of the task: "classify the following email", with the email content as the variable-based input that we want.
  • 00:16:57
    with the task component um and that is
  • 00:16:59
    Chain of Thought prompting this is
  • 00:17:00
    something that's fairly common now and
  • 00:17:02
    pretty widely known um it involves
  • 00:17:04
    telling the model to think step by step
  • 00:17:05
    without our instructions or B yet you
  • 00:17:07
    can provide it with step-by-step
  • 00:17:08
    instructions uh for it to work through
  • 00:17:10
    each time which is my kind of preferred
  • 00:17:11
    way of doing it so here's the example um
  • 00:17:14
    we take this script writer example as
  • 00:17:16
    well um and in this case if you just
  • 00:17:18
    give it a list of six points so hook the
  • 00:17:20
    viewer in briefly explain provide one
  • 00:17:22
    two F standing facts described so we're
  • 00:17:24
    giving it step-by-step instructions on
  • 00:17:26
    how it should perform the task and the
  • 00:17:27
    research results of of thought prompting
  • 00:17:29
    being incorporated into your prompts
  • 00:17:31
    it's a 10% accuracy boost on simple
  • 00:17:33
    problems I me that's like very very
  • 00:17:34
    simple problems like solve this or 4
  • 00:17:37
    plus 2 equals blah BL blah uh but 90%
  • 00:17:39
    accuracy on complex multi-state problems
  • 00:17:41
    which is likely what many of you are
  • 00:17:42
    going to be uh dealing with with the
  • 00:17:44
    system that you're trying to build so
  • 00:17:46
    90% accuracy boost is pretty insane and
  • 00:17:48
    uh considering you only have to write up
  • 00:17:50
    a little list of what it should do chain
  • 00:17:52
    of th promting something you should uh
  • 00:17:53
    you should really incorporate uh key
  • 00:17:55
    takeaway here the more complex the
  • 00:17:57
    problem the more dramatic the
  • 00:17:58
    Improvement using chain of Thor
  • 00:17:59
    prompting so that's the task if we go
  • 00:18:02
    across now you see that we've included a
  • 00:18:04
    chain of Thor component to the task so
  • 00:18:06
    the old one which was just the chat GPT
  • 00:18:08
    uh low IQ person is this and we've added
  • 00:18:11
    on the roll prompt and we've also added
  • 00:18:13
    in a section for how it should approach
  • 00:18:16
    a task a step-by-step Chain of Thought
  • 00:18:18
    prompting method that we've Incorporated
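
A sketch of appending such step-by-step instructions to the classifier's task. The steps are my paraphrase of the approach, not the exact list shown in the video, and in the full prompt they would sit below the role and task.

```python
# Chain-of-thought style instructions: an explicit procedure the model works through every run.
CHAIN_OF_THOUGHT_STEPS = (
    "Approach the task step by step:\n"
    "1. Read the email and identify the sender's intent.\n"
    "2. Decide whether it describes a potential sale or partnership (opportunity),\n"
    "   a problem or question that requires a reply (needs attention), or neither (ignore).\n"
    "3. Double-check the choice against these definitions.\n"
    "4. Output only the final label."
)
```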
  • 00:18:20
    Next we have the specifics section, which sits below the task and is really an addition to it. To avoid bloating the task component, you can add important bullet points that reiterate further instructions or important notes about how to execute the task. Using the outreach message generator as the example, specifics might be: "each message should have an intro, body, and outro", "use a tone that's informal", "use placeholders like this". It's a list of additional points outside the core of the task, which is handy when you're modifying or editing the prompt: if you think it's not doing something correctly, you can just add another bullet point. This is where I do most of my modification when I'm writing prompts. The technique associated with specifics is called EmotionPrompt, which refers to adding short phrases containing emotional stimuli to enhance prompt performance. Emotional stimuli can be things like "this is very important to my career", "this task is vital to my career", and "I really value your thoughtful analysis". This continues on from role prompting a bit, because you keep hyping the model up: "I really appreciate how good you are at this", "you being part of this business is so important", "this has massive implications for me, my business, and society as a whole". The more you hype it up and tell it the world is going to fall apart if it doesn't get this right, the better the performance you can get out of it. The research results: adding emotional stimuli as short as those two little phrases ("this is very important to my career", "this is vital to my career") increased performance by 8% on simple tasks and 115% on complex tasks compared to zero-shot prompts. That's a huge increase on complex tasks, which is likely what you'll be building prompts for anyway. It also enhanced the truthfulness and informativeness of LLM outputs by an average of 19% and 12% respectively, so not only does the model get the right output more often, it's also more truthful and informative, which may sound fluffy but is probably a good thing. The ROI of adding a few of these words is ridiculous; there's no reason not to throw in a couple of these emotional lines, like "this is very important" or "this is such a key part of the business you are part of". Key takeaway: simple phrases like these can encourage the model to engage in more thorough and deliberate processing, which is especially beneficial for complex tasks that require careful thought and analysis. How does this fit into our prompt? Below the task section we have the specifics: "this task is critical to the success of our business", "if the email contains...", and so on, just a list of additional instructions, with the emotion prompt thrown in as well. That's specifics; you can see the prompt is coming together.
  • 00:21:01
    of coming together here then we jump
  • 00:21:03
    into context this is kind of
  • 00:21:05
    self-explanatory but just giving the
  • 00:21:06
    model a better idea of the environment
  • 00:21:08
    in which it's operating in and why can
  • 00:21:10
    be helpful to increase performance and
  • 00:21:12
    this also gives us an opportunity to
  • 00:21:13
    really further instill the role
  • 00:21:15
    prompting that we did at the start and
  • 00:21:17
    also the emersion prompting that we've
  • 00:21:18
    done in the specific so an example here
  • 00:21:20
    from our email classification system
  • 00:21:21
    could be our company provides AI
  • 00:21:23
    solutions to businesses across various
  • 00:21:24
    Industries but Accord about who the
  • 00:21:26
    business is we receive a high volume of
  • 00:21:28
    emails from potential clients through
  • 00:21:29
    our website contact form Your Role again
  • 00:21:32
    role prompting we're incorporating again
  • 00:21:34
    reminding it of the role that it has is
  • 00:21:35
    classifying this emails is essential
  • 00:21:37
    emotion prompt for our sales team to
  • 00:21:40
    prioritize the efforts and respond to
  • 00:21:41
    inquires inquiries in a timely manner by
  • 00:21:44
    accurately identifying motion prompt
  • 00:21:46
    again Etc so you can read the rest of
  • 00:21:47
    that but we're we're heading up with a
  • 00:21:49
    ro prompt again we're giving it context
  • 00:21:50
    on the system that it belongs to and
  • 00:21:52
    here's here's my general notes I'm
  • 00:21:53
    getting here to myself but General notes
  • 00:21:55
    for context is to provide context on the
  • 00:21:57
    business including the types of
  • 00:21:58
    customers types Services products values
  • 00:22:01
    Etc then you can provide context on the
  • 00:22:03
    system that it is part of as you can see
  • 00:22:04
    here we're saying this is part of our
  • 00:22:05
    sales process and we get a lot of emails
  • 00:22:08
    and then you can provide a little bit of
  • 00:22:09
    context on the importance of the task
  • 00:22:11
    and the impact on the business um so you
  • 00:22:13
    directly contribute to the growth and
  • 00:22:15
    success of our company therefore we
  • 00:22:17
    greatly value your careful consideration
  • 00:22:18
    and attention to classification so just
  • 00:22:20
    kind of reiterating a lot of the stuff
  • 00:22:22
    that we've done in the role and also in
  • 00:22:24
    the uh in the specific section as well
  • 00:22:25
    here's the before and after we've added
  • 00:22:27
    this context section section down the
  • 00:22:28
    bottom uh not rocket science the example
  • 00:22:31
    section kind of self-explanatory but we
  • 00:22:33
    want to give examples to the model on
  • 00:22:35
    how it should perform and and how it
  • 00:22:37
    should be replying to it so you given
  • 00:22:39
    input output pairs is what you usually
  • 00:22:40
    refer to them as um and this goes on to
  • 00:22:43
    the technique of few shot prompting uh
  • 00:22:45
    single shot one shot prompting um and in
  • 00:22:48
    this case we're going to be talking
  • 00:22:49
    about few shot prompting because that's
  • 00:22:50
    giving more than one example so uh I'll
  • 00:22:53
    give you a little bit of a a look into
  • 00:22:54
    the research results here um now all of
  • 00:22:56
    these research results attached to
  • 00:22:59
    Scientific papers that i' I've gone
  • 00:23:00
    through and and found and and put in
  • 00:23:01
    here for you so if you want to get
  • 00:23:03
    access to all of those research papers
  • 00:23:04
    I'll put it on a figma or put it on in
  • 00:23:06
    the description so you can have a look
  • 00:23:07
    at the papers themselves I'm not pulling
  • 00:23:08
    these out of my ass uh these are coming
  • 00:23:10
    from papers where people have actually
  • 00:23:11
    studied these things so um and this
  • 00:23:14
    graph here shows the effect of adding
  • 00:23:17
    these input output examples on the
  • 00:23:18
    performance and accuracy of the prompt
  • 00:23:20
    so zero shot prompting is on the far
  • 00:23:22
    left we have 10% accuracy for these 175
  • 00:23:25
    billion parameters version of gpt3 as
  • 00:23:28
    soon as you add one example to this it
  • 00:23:29
    jumps up from 10 to nearly 50 to 45%
  • 00:23:32
    accuracy and then we get sort of a a
  • 00:23:35
    diminishing returns as we continue to
  • 00:23:37
    increase up to here is 10 examples so
  • 00:23:40
    this is 10 input output pairs so a QA QA
  • 00:23:43
    QA one QA and one example of an input
  • 00:23:46
    and an output that is a a a shock with a
  • 00:23:48
    one shock prompt we got a 45% accuracy
  • 00:23:51
    and as we got up to 10 we got a 60% and
  • 00:23:54
    kind of flattened off after there so the
  • 00:23:56
    research results uh is that GB3 175
  • 00:23:59
    billion parameters achieved an average
  • 00:24:01
    14.4% improvement over its zero shot
  • 00:24:04
    accuracy of 57.4 when using 32 examples
  • 00:24:07
    per task so that's way up here um and
  • 00:24:10
    using a lot of them and it kind of crept
  • 00:24:11
    its way up uh but for us the key
  • 00:24:13
    takeaways is that providing just a few
  • 00:24:15
    examples literally going from zero
  • 00:24:17
    examples to one massively increases the
  • 00:24:20
    performance compared to zero shot
  • 00:24:21
    prompting when it doesn't have any
  • 00:24:22
    examples so accuracy scales with the
  • 00:24:24
    number of examples but it shows
  • 00:24:26
    diminishing returns most of the gains
  • 00:24:28
    can be achieved between uh 10 to 32 well
  • 00:24:31
    crafted examples and personally I go for
  • 00:24:33
    like 3 to 5 I don't really want to be
  • 00:24:34
    sitting there all day writing all these
  • 00:24:35
    examples and the more examples you give
  • 00:24:37
    the more tokens you're putting in the
  • 00:24:39
    input of your prompt and therefore the
  • 00:24:40
    more expensive it is every time every
  • 00:24:42
    time you call that prompt so if it's
  • 00:24:43
    part of this email classification system
  • 00:24:45
    and we have 32 examples we're going to
  • 00:24:47
    have 32 examples worth of context and
  • 00:24:49
    token usage in our Automation and that
  • 00:24:52
    means every single time an email comes
  • 00:24:53
    in it's going to be sending off huge
  • 00:24:55
    amounts of tokens uh as part of the
  • 00:24:57
    input and going to be charged on those
  • 00:24:59
    import tokens as well so 10 to 32 is is
  • 00:25:02
    a sweet spot according to this paper
  • 00:25:04
    just do 3 to 5 it does a job enough um
  • 00:25:06
    and at least in my experience and and
  • 00:25:08
    the stuff that we do at morning side as
  • 00:25:09
    well so a little bit more on examples I
  • 00:25:10
    won't bore you too much here but this is
  • 00:25:12
    kind of the key part here that these
  • 00:25:13
    guys doing these these uh these papers
  • 00:25:15
    and doing the research they documented
  • 00:25:17
    roughly predictable Trends and scaling
  • 00:25:18
    and performance without using fine
  • 00:25:20
    tuning so by giving examples you are
  • 00:25:22
    kind of impr prompt fine-tuning these
  • 00:25:24
    models uh and people talk about fine
  • 00:25:26
    tuning and everyone thinks that you need
  • 00:25:27
    to do it I personally for me and my
  • 00:25:30
    development company we build these AI
  • 00:25:32
    solutions for businesses and we've never
  • 00:25:33
    had to use fine tuning because we're
  • 00:25:35
    actually good at prpt engineering and
  • 00:25:37
    there's only a very limited number of
  • 00:25:38
    use cases where fine shunting actually
  • 00:25:40
    gives you an advantage um and that's
  • 00:25:42
    just from our experience so if you want
  • 00:25:43
    to avoid doing the messy stuff of data
  • 00:25:45
    collection and fine tuning and all that
  • 00:25:47
    crap uh just get good at prompting get
  • 00:25:49
    get good at writing these examples and
  • 00:25:51
    you can achieve the roughly similar uh
  • 00:25:54
    performance increases um as fine tuning
  • 00:25:56
    without fine tuning so this graph here
  • 00:25:58
    shows an interesting uh bit of data that
  • 00:26:00
    I do want to share is getting a little
  • 00:26:02
    bit Ticky but uh this graph on the right
  • 00:26:03
    here shows a significant increase in
  • 00:26:05
    performance from zero shot which is the
  • 00:26:06
    blue to few short completions so if you
  • 00:26:08
    add in some examples you're going to
  • 00:26:10
    jump up from I think it was 42 up to
  • 00:26:13
    nearly 55 60 a big jump immediately just
  • 00:26:17
    by adding a few examples but
  • 00:26:18
    interestingly the gold labels here so
  • 00:26:20
    these orange pillars these orange bars
  • 00:26:23
    uh that refers to the tests done where
  • 00:26:25
    the labels were correct so maybe if the
  • 00:26:27
    email classification was um here's the
  • 00:26:29
    email here's classification and we gave
  • 00:26:30
    it correct examples the performance
  • 00:26:33
    increase within the study was shown
  • 00:26:35
    regardless of whether those labels were
  • 00:26:36
    correct so this tells us something
  • 00:26:37
    interesting that the llm is not strictly
  • 00:26:39
    learning new information. So by
  • 00:26:42
    giving it few-shot examples that have
  • 00:26:44
    the correct labels it's not necessarily
  • 00:26:45
    learning that information it's actually
  • 00:26:47
    just learning from the format and
  • 00:26:49
    structure uh and that helps to increase
  • 00:26:51
    the accuracy of the outputs overall the
  • 00:26:53
    accuracy of the label itself does not
  • 00:26:55
    actually appear to matter too much uh on
  • 00:26:57
    the on the overall performance so you
  • 00:26:58
    can have incorrect labels and it's still
  • 00:27:00
    going to perform just as well um because
  • 00:27:02
    you've given it some examples on how it
  • 00:27:03
    should respond so long story short
  • 00:27:04
    throwing in three to five examples is
  • 00:27:06
    going to greatly increase the accuracy
  • 00:27:08
    and the performance of your prompt um
  • 00:27:10
    and it should also be thought of more
  • 00:27:11
    as teaching it how to structure the
  • 00:27:13
    output so this is very important if
  • 00:27:14
    you're not getting the structure you
  • 00:27:15
    want and throwing in a whole bunch of
  • 00:27:17
    other rubbish like oh well this is the
  • 00:27:18
    answer to the question if you just give
  • 00:27:20
    it a few examples of how it should
  • 00:27:22
    respond it's going to look very closely
  • 00:27:23
    at that and it's going to perform much
  • 00:27:25
    better for you so think of it as fine
  • 00:27:27
    tuning of the style, the tone and the length
  • 00:27:29
    and the structure of the output um and I
  • 00:27:31
    think this is something that a lot of
  • 00:27:32
    people miss out on when they don't add
  • 00:27:33
    these things in because it's it's so
  • 00:27:35
    important. If you just want it to give you
  • 00:27:36
    one word and you kind of try to tell it
  • 00:27:38
    in the task to just give one-word
  • 00:27:39
    responses sure it might listen to it but
  • 00:27:41
    if you give five examples of input and
  • 00:27:44
    then just a one word output like in our
  • 00:27:45
    case opportunity or needs attention
  • 00:27:48
    or ignore these labels for our email
  • 00:27:49
    classification system uh it's going to
  • 00:27:51
    perform so much better so here's a
  • 00:27:53
    before and after again we're getting a
  • 00:27:55
    little bit small here so I'll allow you
  • 00:27:56
    to pause this on screen as you wish but
  • 00:27:59
    we've given it a couple examples you can
  • 00:28:00
    see how I've done it here in this case
  • 00:28:02
    it's email, label, um I usually tend to go
  • 00:28:05
    for a Q and
  • 00:28:08
    A, uh, that's usually my go-to strategy or
  • 00:28:11
    input, output, um but that's
  • 00:28:13
    basically how we do it we go example one
  • 00:28:15
    uh we give the QA and then we give a
  • 00:28:17
    space, example two. Sometimes you don't even
  • 00:28:19
    need to put these on, um, you can just
  • 00:28:20
    leave it as that and it sort of figures
  • 00:28:22
    it out, uh, but that's few-shot
  • 00:28:25
    prompting and examples and how we've
  • 00:28:27
    compared them
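To make that concrete, here is a minimal sketch of the Q-and-A style few-shot block described above, written as a Python string. The three labels match the email-classification example from the video; the sample emails and the helper name are made up for illustration and are not the author's actual prompt.

```python
# Minimal sketch of a few-shot block in the Q/A style described above.
# The three labels match the email-classification example from the video;
# the sample emails and the helper name are made up for illustration.

FEW_SHOT_EXAMPLES = """\
# EXAMPLES

## Example 1
Q: Hi team, we'd love a quote for automating our support inbox.
A: new opportunity

## Example 2
Q: Your last invoice charged us twice, please fix this ASAP.
A: needs attention

## Example 3
Q: Weekly newsletter: 10 productivity tips for your Monday.
A: ignore
"""

def build_prompt(email_body: str) -> str:
    """Append the incoming email as a final Q and leave the A open,
    nudging the model to reply with just the label."""
    return f"{FEW_SHOT_EXAMPLES}\nQ: {email_body}\nA:"

if __name__ == "__main__":
    print(build_prompt("Can we book a call about an AI voice agent for our clinic?"))
```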
  • 00:28:29
    now getting on to the final bit stick
  • 00:28:30
    with me because you are learning some
  • 00:28:31
    very good stuff here uh the notes
  • 00:28:33
    section is the final part and this is
  • 00:28:34
    our last chance to remind the llm of key
  • 00:28:36
    aspects of the task and add any final
  • 00:28:38
    details or tweaks uh this is something
  • 00:28:40
    that you'll end up using a lot as you're
  • 00:28:42
    actually doing the prompt engineering
  • 00:28:43
    workflow um in the list I usually end up
  • 00:28:46
    having things like output formatting
  • 00:28:48
    notes like you should put your output in
  • 00:28:49
    X format or do not do X like if it's
  • 00:28:52
    doing something wrong as I do a test, this is
  • 00:28:53
    kind of where I'm iterating on
  • 00:28:55
    the prompt, so if it gives me an
  • 00:28:56
    output and it is doing something way
  • 00:28:58
    wrong, I'll just say at the bottom, in the
  • 00:29:00
    note section, do not do X, or you are
  • 00:29:03
    not supposed to do this never include it
  • 00:29:04
    in your output uh these kind of things
  • 00:29:06
    are very easy to slap onto the note
  • 00:29:08
    section at the bottom um small tone
  • 00:29:10
    tweaks reminders of key points from the
  • 00:29:11
    task or specifics is really what I use
  • 00:29:14
    the note section for um and and as I say
  • 00:29:16
    here it usually starts out quite skinny
  • 00:29:18
    because if you do all of the prompt
  • 00:29:19
    correctly you'll have, well, I've got
  • 00:29:20
    nothing else to say in the prompt,
  • 00:29:22
    I've got nothing else to say at this
  • 00:29:23
    bottom section then you give it a spin
  • 00:29:25
    you throw some inputs at it and it
  • 00:29:26
    starts doing some wacky stuff and you
  • 00:29:28
    come back and go oh well, it just needs to be
  • 00:29:30
    reminded of some things I've said
  • 00:29:31
    earlier on and you start to add this
  • 00:29:33
    list of things to the notes now don't
  • 00:29:34
    let it become too long uh because it's
  • 00:29:36
    going to start to sort of water it down
  • 00:29:37
    you'll notice that it'll start
  • 00:29:38
    forgetting earlier notes if you put too
  • 00:29:39
    many notes in um but less is more here
  • 00:29:42
    and it's really just to tweak
  • 00:29:43
    these outputs to get the right
  • 00:29:45
    kind of responses without refactoring
  • 00:29:47
    the whole thing and restructuring how
  • 00:29:49
    you did the task and the specifics so it's
  • 00:29:50
    just kind of a lazy way of tacking
  • 00:29:52
    things on to just get it nudged towards
  • 00:29:53
    where you want it to go um now we have
  • 00:29:55
    the note section and it's based off the
  • 00:29:56
    Lost in the middle effect which is from
  • 00:29:58
    another scientific like research paper
  • 00:30:00
    um and this lost in the middle effect is
  • 00:30:03
    most famous kind of for this graph here
  • 00:30:05
    uh which shows that language models
  • 00:30:07
    perform best when relevant information
  • 00:30:09
    is at the very beginning Primacy I'm
  • 00:30:11
    learning new stuff here as well or end
  • 00:30:13
    recency of the input context so
  • 00:30:15
    performance significantly worsens when
  • 00:30:16
    the critical information is in the
  • 00:30:18
    middle of a long context and this effect
  • 00:30:20
    occurs even when the models are designed
  • 00:30:22
    for long input sequences so yes GPT-4
  • 00:30:25
    32k back in the day was designed for
  • 00:30:28
    32,000 tokens but it didn't really
  • 00:30:29
    listen to anything in the middle um
  • 00:30:31
    luckily the models that we work with now
  • 00:30:34
    um are much better at retrieving
  • 00:30:35
    information over large context um but
  • 00:30:38
    you should still keep this in mind
  • 00:30:39
    because it still seems to apply um and
  • 00:30:41
    this is why the note section is at the
  • 00:30:42
    end this little graph here basically
  • 00:30:44
    shows you that uh when you place the
  • 00:30:46
    information at the start the accuracy is
  • 00:30:47
    higher and when you place it in the
  • 00:30:49
    middle the accuracy is lower and when
  • 00:30:50
    you place it at the end the accuracy is
  • 00:30:52
    higher but not as high as the start so
  • 00:30:54
    it really listens to the stuff at the
  • 00:30:55
    start so the role prompt it takes it
  • 00:30:57
    very seriously and that's why we have
  • 00:30:58
    our task up the top as well that's why
  • 00:31:00
    we have the context in the middle
  • 00:31:01
    because it's not as important. So, see,
  • 00:31:03
    you're starting to knit together all this
  • 00:31:05
    information, understand how all
  • 00:31:07
    these different uh techniques knit
  • 00:31:09
    together so the way that I've structured
  • 00:31:11
    this prompt and the way my team have
  • 00:31:12
    structured it, I'm really just
  • 00:31:14
    retelling you what we do at Morningside
  • 00:31:16
    by adding these things all in together
  • 00:31:18
    uh you see how it starts to fit together
  • 00:31:20
    into a proper strategy and not just
  • 00:31:21
    throwing over the wall and having some
  • 00:31:23
    kind of prompt formula it's actually
  • 00:31:25
    based off the science, um, and I like
  • 00:31:27
    to talk about science these days, so uh
  • 00:31:29
    that is lost in the middle. I think I have
  • 00:31:31
    a little more
  • 00:31:32
    here the research results of course that
  • 00:31:34
    you've been anxiously waiting for is
  • 00:31:36
    that when a relevant document is at the
  • 00:31:38
    beginning or the end of a context, GPT-3.5
  • 00:31:39
    Turbo achieves around 75%
  • 00:31:43
    accuracy on a QA task, um, an increase of
  • 00:31:45
    20 to 25% compared to when the document
  • 00:31:47
    was placed in the middle um so the key
  • 00:31:49
    takeaways from this is instructions
  • 00:31:51
    given at the start and the end of The
  • 00:31:52
    Prompt are listened to by the LLM far
  • 00:31:53
    more than anything in the middle um for
  • 00:31:56
    this reason the note section is a handy place
  • 00:31:58
    to append reminders uh for anything that
  • 00:32:00
    happened in the task or the specifics
  • 00:32:02
    that you notice it maybe isn't listening
  • 00:32:03
    to and you need to reiterate um but be
  • 00:32:06
    aware that increasing the context length
  • 00:32:08
    alone does not ensure better performance
  • 00:32:09
    still having less context or fluff will
  • 00:32:12
    mean the remaining instructions are more
  • 00:32:14
    likely to be followed so while lost in
  • 00:32:16
    the middle refers to okay where should
  • 00:32:17
    we put where should we structure the
  • 00:32:19
    prompt to include uh the right
  • 00:32:21
    information to be listened to, what's the most
  • 00:32:22
    important thing in the prompt and where
  • 00:32:23
    should we put it yes that does that but
  • 00:32:25
    it also it also gives us information on
  • 00:32:27
    how we should try to keep our prompt as
  • 00:32:30
    short as possible because it's over
  • 00:32:31
    longer context periods that these things
  • 00:32:33
    start to get bad so the shorter you can
  • 00:32:35
    keep the prompt, in general, the better it can
  • 00:32:36
    listen to the whole thing very very well
  • 00:32:38
    but as soon as you've like really made
  • 00:32:39
    it bloated um it's going to be losing
  • 00:32:42
    some of that stuff in the middle so less
  • 00:32:43
    is more um and having less less fluff is
  • 00:32:46
    always going to make your prompts
  • 00:32:47
    perform better so here you can see in
  • 00:32:48
    the note section uh please provide the
  • 00:32:50
    email classification label and only the
  • 00:32:52
    label as your response so again
  • 00:32:53
    reiterating the format we want the
  • 00:32:54
    output to be in um do not include any
  • 00:32:56
    personal information in your response if
  • 00:32:58
    you're unsure uh err on the side of caution
  • 00:33:00
    and assign the needs attention label so
  • 00:33:02
    little reminders as we've gone through
  • 00:33:03
    and and we tweaking this email
  • 00:33:05
    classification prompt you will add those
  • 00:33:06
    things in over time so getting back to
  • 00:33:08
    this little diagram here we have the
  • 00:33:09
    role prompting covered off you know how
  • 00:33:11
    to use that technique is tell it a role
  • 00:33:13
    and and tell it how good it is at that
  • 00:33:14
    role Chain of Thought give it a list of
  • 00:33:16
    things that it should do and how it
  • 00:33:17
    should break down the task; emotion
  • 00:33:18
    prompting, tell it how good it is, tell it
  • 00:33:20
    how important everything is that it's
  • 00:33:21
    doing; few-shot prompting, give it
  • 00:33:23
    examples so that it knows the kind of
  • 00:33:24
    output format you want lost in the
  • 00:33:26
    middle kind of tells you how to
  • 00:33:27
    structure everything and where to put
  • 00:33:29
    the right information and you can add on
  • 00:33:30
    a couple little uh things at the bottom
  • 00:33:32
    so that it really listens to them at the
  • 00:33:33
    end and finally here we have markdown
  • 00:33:36
    formatting. Man, I'm talking a mile a minute
  • 00:33:39
    here and I'm getting really hot. Anyway,
  • 00:33:42
    markdown formatting is kind of the final
  • 00:33:43
    piece of this puzzle and tied all
  • 00:33:44
    together and I learned this from my CTO
  • 00:33:46
    Spencer he put me onto this technique
  • 00:33:48
    and I use it all the time now so uh
  • 00:33:50
    markdown formatting is a way that we can
  • 00:33:51
    structure our prompts um for both our
  • 00:33:54
    sake so that it's more readable cuz when
  • 00:33:55
    you write these large prompts it can get
  • 00:33:57
    a little bit messy and like there's a lot of
  • 00:33:58
    stuff going on so for our sake it allows
  • 00:34:01
    us to structure the prompt better but
  • 00:34:03
    also it allows the llm to understand the
  • 00:34:06
    structure a little bit better as well
  • 00:34:07
    while I don't have any research to back
  • 00:34:08
    that up uh my only data on why we should
  • 00:34:12
    be doing this and why it may perform
  • 00:34:13
    better is because you can see over here
  • 00:34:17
    uh someone managed to extract out the
  • 00:34:18
    system prompt from within ChatGPT
  • 00:34:21
    and open AI themselves are actually
  • 00:34:23
    using uh this markdown
  • 00:34:26
    formatting so you can see uh a pound
  • 00:34:28
    symbol here and then Tools so these are
  • 00:34:29
    markdown headings as we're going to go
  • 00:34:31
    into in a second but if open AI is using
  • 00:34:32
    it um to train their systems and to
  • 00:34:35
    prompt their own systems we should
  • 00:34:36
    probably be using it as well which is
  • 00:34:38
    kind of why we're doing it here so uh
  • 00:34:41
    basically markdown gives us a few new
  • 00:34:42
    tools to structure um you may notice if
  • 00:34:45
    you're writing a prompt you've just got plain
  • 00:34:46
    text, you don't have any method to
  • 00:34:48
    signal what a heading would look like or
  • 00:34:50
    what bold would look like, but markdown
  • 00:34:52
    gives us uh those techniques, so we
  • 00:34:54
    have headings: heading one is the
  • 00:34:56
    largest, heading two is the second largest,
  • 00:34:58
    heading three is the third largest, so you
  • 00:34:59
    now have different layers of headings, so
  • 00:35:02
    you can have like role, task, all these in
  • 00:35:04
    heading one, so just H1 as a
  • 00:35:06
    pound symbol and then a space
  • 00:35:09
    and then whatever you want after it,
  • 00:35:10
    which you'll see in a second, um, but then
  • 00:35:12
    if you have little subsets or
  • 00:35:14
    subsections, like an examples heading and
  • 00:35:16
    then you want example one, you can have
  • 00:35:17
    example one as a heading three or a
  • 00:35:19
    heading two, so you have different layers
  • 00:35:20
    of heading and importance, uh, you also
  • 00:35:22
    have bold, italics, underlines, lists,
  • 00:35:24
    horizontal rules and more so if you want
  • 00:35:26
    to jump into the fancy stuff I'll teach
  • 00:35:28
    you the basics here of markdown but you
  • 00:35:29
    can also do these other things I'm not
  • 00:35:31
    sure what the effectiveness is um of
  • 00:35:33
    bolds and italics and stuff but I tend
  • 00:35:35
    to just use the headings as
  • 00:35:37
    a structure tool so key takeaways on
  • 00:35:39
    markdown formatting is use these H1 tags
  • 00:35:41
    single pound symbol uh to Mark each of
  • 00:35:43
    the components for your prompt and then
  • 00:35:45
    you can use the H2 or three tags or even
  • 00:35:47
    bolds and stuff to sort of add add
  • 00:35:49
    additional additional structure to other
  • 00:35:51
    parts of it so here's an example of how you
  • 00:35:52
    should add it in: heading one role,
  • 00:35:54
    heading one task, specifics, context and
  • 00:35:57
    then Within context I've added in here
  • 00:35:58
    look you might want to break the context
  • 00:36:00
    into subsections of okay let's use a
  • 00:36:02
    heading 2 and go about the business
  • 00:36:03
    about our system so you don't need to do
  • 00:36:05
    that all the time but this is how you
  • 00:36:06
    can start to use other types of headings
  • 00:36:09
    in like H2 or H3 tags to to split up uh
  • 00:36:12
    some of the other subsections under each
  • 00:36:14
    of your main headings and then again
  • 00:36:16
    examples we can have an example one as a
  • 00:36:18
    as a heading three and give the examples
  • 00:36:20
    and the notes. So that's roughly it, and you
  • 00:36:22
    come in here and obviously, you are a blah
  • 00:36:26
    blah blah, um,
  • 00:36:28
    generate blah blah blah, you get what I'm
  • 00:36:30
    doing, you get what I'm saying. And so
  • 00:36:32
    what this all looks like when we tie it
  • 00:36:34
    together um we now have our completed
  • 00:36:38
    prompt which this is the before remember
  • 00:36:40
    this is where we started this is the uh
  • 00:36:42
    the guy who doesn't know how
  • 00:36:43
    to prompt, this is what we started
  • 00:36:45
    with and this is what we have after when
  • 00:36:47
    we apply all of these techniques now
  • 00:36:48
    this is a little bit overkill for an email
  • 00:36:50
    classification system but what I want to
  • 00:36:52
    show you is that this is how you would
  • 00:36:54
    apply it to a simple task like this so
  • 00:36:55
    we have the role that's wrapped in the
  • 00:36:57
    H1 tag, we have the H1 tag here,
  • 00:37:00
    Etc um and we have all of these
  • 00:37:01
    different components role task specifics
  • 00:37:03
    context examples and notes all
  • 00:37:06
    integrating the uh techniques that we've
  • 00:37:08
    been over in this video and now stacking
  • 00:37:10
    up all of the increases in accuracy that
  • 00:37:12
    we get from these different techniques
  • 00:37:14
    we can see that we don't know how much
  • 00:37:15
    markdown formatting gives us uh but the
  • 00:37:17
    total is potentially above 300% increase
  • 00:37:21
    in accuracy then the final step here is
  • 00:37:22
    we can add up all of the different
  • 00:37:24
    increases and and the performance
  • 00:37:25
    increases that we get from these
  • 00:37:26
    techniques and we can sum it up to a
  • 00:37:28
    300% or more increase in performance.
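As a rough illustration of how these pieces stack together, here is a sketch of an assembled prompt using the markdown headings and the component order described in this video. The wording is illustrative, assumed for this example, and is not the exact Morningside prompt shown on screen.

```python
# Sketch of the assembled structure: markdown H1 headings for each component,
# examples in the middle, and the notes (reminders) at the very end.
# The wording is illustrative, not the exact prompt shown on screen.

EMAIL_CLASSIFIER_PROMPT = """\
# ROLE
You are a world-class email triage assistant, and you are brilliant at reading
an email once and assigning it the single most appropriate label.

# TASK
Classify the email below as exactly one of: new opportunity, needs attention, ignore.
Think it through step by step before you answer.

# SPECIFICS
- This classification is very important to the business, so take your time.
- Respond with the label only.

# CONTEXT
## About the business
We are an AI development agency. Sales enquiries are "new opportunity",
billing or client issues are "needs attention", and newsletters are "ignore".

# EXAMPLES
## Example 1
Q: We'd love a quote for an AI chatbot on our website.
A: new opportunity

# NOTES
- Provide the email classification label and only the label as your response.
- Do not include any personal information in your response.
- If you are unsure, err on the side of caution and assign "needs attention".

# EMAIL
{email}
"""

print(EMAIL_CLASSIFIER_PROMPT.format(email="Your subscription renews next week."))
```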
  • 00:37:31
    So, you can listen to me, or you can
  • 00:37:34
    just ignore it, or you can use these
  • 00:37:36
    piece by piece wherever you think you
  • 00:37:37
    need it um but considering emotion
  • 00:37:39
    prompting is literally just a few words
  • 00:37:41
    saying you're the best and this is
  • 00:37:42
    really important to me and role prompting
  • 00:37:44
    is like one or two lines and lost in the
  • 00:37:46
    middle is really just more of a an
  • 00:37:48
    understanding of where to put the right
  • 00:37:49
    information in your prompt, you've now got a
  • 00:37:51
    toolkit and going back to this guy over
  • 00:37:53
    here look at this guy he's got a toolkit
  • 00:37:56
    he understands the science understands
  • 00:37:58
    from research papers at why these things
  • 00:38:00
    work the way they do and because he has
  • 00:38:02
    this this deeper understanding of what
  • 00:38:04
    makes llms do the right things that they
  • 00:38:06
    want them to do he's better able to
  • 00:38:08
    perform and as you can see he is on the
  • 00:38:10
    upper end of the spectrum here so this
  • 00:38:12
    is the guy that you should be now all
  • 00:38:14
    you need to do is take these and apply
  • 00:38:16
    it and you'll start to see and and
  • 00:38:18
    connect them go okay okay so lost in the
  • 00:38:20
    middle um that's not doing what I want
  • 00:38:22
    maybe I need to change the stuff at the
  • 00:38:23
    start and the end okay uh it's giving me
  • 00:38:25
    the wrong structure and style okay maybe
  • 00:38:27
    maybe I give some more few-shot examples
  • 00:38:29
    of how it should be responding and I I
  • 00:38:31
    take my time and I write them carefully
  • 00:38:32
    and I tell it the kind of style and
  • 00:38:34
    structure of the response I want it's
  • 00:38:35
    really not rocket science and people
  • 00:38:37
    have already done the hard work by doing
  • 00:38:38
    the the research to get these kind of
  • 00:38:40
    results so um to wrap up this video I've
  • 00:38:42
    given oh actually we have a
  • 00:38:43
    considerations page here uh context
  • 00:38:46
    length and costs as I mentioned earlier
  • 00:38:47
    for high volume tasks um like this
  • 00:38:50
    example of email classification system
  • 00:38:52
    uh I guess it's not too high volume but
  • 00:38:53
    if this thing is doing like 50 to 100
  • 00:38:56
    reps a day, it's really being put through
  • 00:38:58
    the wringer and there's a lot of volume
  • 00:39:00
    going through the task that you're
  • 00:39:01
    building you need to focus on making
  • 00:39:03
    that prompt as short and succinct as
  • 00:39:04
    possible uh because every time you run
  • 00:39:06
    it you are charged for the input and the
  • 00:39:07
    output tokens so while you may only be
  • 00:39:09
    outputting a label, in this case just
  • 00:39:11
    new opportunity or
  • 00:39:14
    needs attention or ignore, you're also
  • 00:39:17
    charged for the input tokens as well so
  • 00:39:18
    all the prompt that you put in you're
  • 00:39:20
    going to be charged for plus the
  • 00:39:21
    inserted variables as well so you've got
  • 00:39:22
    the prompt then you're inserting the
  • 00:39:24
    email content, you're getting all of that
  • 00:39:26
    information in there, and you're
  • 00:39:28
    going to get charged on that so uh keep
  • 00:39:30
    in mind that if you're doing a lot of
  • 00:39:31
    volume try to use a a cheaper model as
  • 00:39:33
    we're going into next but also keep the
  • 00:39:35
    The Prompt shorter as well the choice of
  • 00:39:36
    model is important as well better prompt
  • 00:39:38
    engineering and the skills that I've
  • 00:39:39
    just taught you on this going back to
  • 00:39:40
    this guy here he has better prompt
  • 00:39:42
    engineering skills and can get better
  • 00:39:43
    performance out of Cheaper models this
  • 00:39:45
    guy doesn't have the skills so he relies
  • 00:39:46
    on the more expensive and slower models
  • 00:39:48
    which are not good for the client um to
  • 00:39:51
    get the performance that he needs
  • 00:39:52
    because he doesn't have the skills to
  • 00:39:53
    get it to do what he wants and that
  • 00:39:54
    brings me back to this choice of model
  • 00:39:56
    point which is where possible you need
  • 00:39:57
    to use your skills and use your
  • 00:39:59
    advantage to bend the cheapest and
  • 00:40:01
    fastest model to execute the task
  • 00:40:02
    successfully so 3.5 turbo is basically
  • 00:40:05
    free like this thing open AI has made
  • 00:40:07
    that so cheap and whenever
  • 00:40:09
    you're watching this video might be
  • 00:40:10
    different but the cheapest fastest model
  • 00:40:12
    should be your goto and if you can't get
  • 00:40:14
    it working there then you can go up but
  • 00:40:16
    you have the skills now um if it has
  • 00:40:18
    high volume and requires fast responses
  • 00:40:20
    this is when your skills will shine
  • 00:40:21
    because you can create prompts that do
  • 00:40:23
    and perform um fast and cheap then we
  • 00:40:26
    have the temperature and and other model
  • 00:40:27
    settings: if you're doing creative writing,
  • 00:40:29
    ideation, etc., then test higher levels, so
  • 00:40:31
    0.5 to 1, uh, but anything else, if you're
  • 00:40:34
    building systems like this where it's
  • 00:40:35
    classification or the AI is kind of doing a
  • 00:40:37
    fixed piece of the puzzle, uh,
  • 00:40:40
    you want it to be on zero, just set it to that.
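As a minimal sketch of those settings, assuming the OpenAI Python SDK and the gpt-3.5-turbo model mentioned in the video, a classification call with the temperature pinned to zero might look like this; frequency penalty and top p are simply left at their defaults, per the advice here:

```python
# Minimal sketch: calling a cheap, fast model with temperature pinned to 0
# for a classification task (OpenAI Python SDK v1; model name from the video).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_email(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # cheapest/fastest option discussed in the video
        temperature=0,          # fight the natural randomness for fixed tasks
        messages=[{"role": "user", "content": prompt}],
    )
    # Frequency penalty and top_p are left at their defaults, as suggested above.
    return response.choices[0].message.content.strip()
```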
  • 00:40:42
    we're trying to fight against the
  • 00:40:44
    inconsistency and and natural randomness
  • 00:40:46
    of these models and in order to do that
  • 00:40:48
    we need to uh set that temperature to
  • 00:40:50
    zero and that's going to make the system
  • 00:40:52
    a lot more consistent uh so zero is what
  • 00:40:54
    I typically use for basically anything
  • 00:40:55
    apart from creative writing or, uh,
  • 00:40:57
    script writing prompts. The
  • 00:40:59
    other model settings like frequency
  • 00:41:00
    penalty and top P are not needed in my
  • 00:41:02
    experience just play around with the the
  • 00:41:03
    temperature that's all you need to worry
  • 00:41:04
    about what I'm going to jump to now is
  • 00:41:05
    actually having a chat with my CTO
  • 00:41:06
    Spencer um and he's going to share what
  • 00:41:08
    we've done at morning side on one of our
  • 00:41:10
    projects where we had to go from GPT 4
  • 00:41:12
    uh which was doing the job great and
  • 00:41:14
    then the client wanted to change to GPT
  • 00:41:15
    3.5 turbo to save money and then we had
  • 00:41:17
    to kind of rebuild everything in order
  • 00:41:19
    to get it working so uh we're going to
  • 00:41:20
    jump to that and you get to here for
  • 00:41:21
    Spencer again lot smarter than me and a
  • 00:41:23
    lot of the stuff that I'm sharing
  • 00:41:24
    actually came from what he's learned uh
  • 00:41:26
    learned on the job and what he does at
  • 00:41:27
    Morningside so everyone if you haven't
  • 00:41:29
    met Spencer already this Spencer my CTO
  • 00:41:31
    he's a lot smarter than I so I'm
  • 00:41:32
    bringing him on to chip into this prompt
  • 00:41:34
    engineering video just briefly because
  • 00:41:36
    um a lot of the stuff that I've just
  • 00:41:37
    told you about has actually come from
  • 00:41:39
    his big brain here he's been sharing a
  • 00:41:40
    lot of the the research papers
  • 00:41:42
    particularly within our slack across the
  • 00:41:44
    companies we're on the same page so
  • 00:41:46
    Spencer I wanted to bring you on here
  • 00:41:47
    particularly because we've been working
  • 00:41:48
    with one of our biggest clients ever
  • 00:41:51
    today uh and I want to particularly focus
  • 00:41:53
    on how I was talking in this video about
  • 00:41:56
    the prompt engineering skills allowing you
  • 00:41:57
    to get more out of uh lesser and cheaper
  • 00:41:59
    models um and how we've had to switch
  • 00:42:01
    from a GPT-4 based SaaS that we built over
  • 00:42:04
    to GPT-3.5 Turbo and the
  • 00:42:07
    difficulties in transitioning that so if
  • 00:42:08
    you just want to um give any notes on
  • 00:42:10
    the on the presentation prior but also
  • 00:42:11
    specifically on uh getting more out of
  • 00:42:14
    these lesser models really which
  • 00:42:15
    is what I'm trying to teach people in
  • 00:42:16
    this video yeah yeah definitely so um
  • 00:42:21
    yeah it's an interesting one I usually
  • 00:42:23
    uh like to try and break things down so
  • 00:42:26
    um when going through these paths the key
  • 00:43:29
    is that you obviously want to use the
  • 00:43:30
    cheaper models first, so 3.5 comes
  • 00:42:33
    first to mind um in this case
  • 00:42:35
    specifically for this client there's a
  • 00:42:37
    lot of complex uh kind of information
  • 00:42:39
    that they were synthesizing out of it so
  • 00:42:41
    we made the decision to start off with
  • 00:42:44
    gp4 um to to make sure that we were
  • 00:42:46
    getting the responses that we wanted now
  • 00:42:49
    once it kind of got closer to uh to
  • 00:42:52
    release there we realized that the the
  • 00:42:53
    cost that was Associated um with running
  • 00:42:55
    these models was going to be prohibitive so we
  • 00:42:58
    had to yeah kind of take that transition
  • 00:43:00
    now and downgrade to 3.5 so
  • 00:43:02
    whenever I'm doing that specific task
  • 00:43:06
    the key one that I'm looking at is yes
  • 00:43:07
    prompt engineering one um and then two
  • 00:43:10
    is scope reduction um gp4 is really good
  • 00:43:14
    at a bunch of different things uh and
  • 00:43:17
    and understanding kind of the hidden
  • 00:43:18
    context that uh that's in the words that
  • 00:43:20
    you're doing uh 3.5 is is much less so
  • 00:43:23
    so um you almost want to break it down
  • 00:43:26
    into smaller kind of component size
  • 00:43:28
    chunks for the task um and then use
  • 00:43:32
    those as kind of contributive to to get
  • 00:43:35
    the same results as you would with four
  • 00:43:37
    um so that was the steps that we're
  • 00:43:39
    taking in this particular project
  • 00:43:41
    another good tactic to use as well and
  • 00:43:43
    and one that I would highly recommend is
  • 00:43:44
    using gp4 first and then taking the
  • 00:43:47
    input and output pairings as training
  • 00:43:49
    data to fine-tune a 3.5 model as well um
  • 00:43:53
    because we found that that's that's
  • 00:43:54
    really helpful uh for getting your cost
  • 00:43:56
    down but keeping up that GPT-4 level
  • 00:43:59
    quality. Yeah, I kind of just bashed
  • 00:44:01
    fine tuning earlier in this video
  • 00:44:02
    because I say it's it's unnecessary in
  • 00:44:04
    almost every case um so I mean using few
  • 00:44:07
    short examples is essentially a way of
  • 00:44:09
    of fine-tuning via prompting so if you
  • 00:44:10
    just give a few few-shot examples of GPT-4
  • 00:44:13
    outputs or human-written outputs, would that
  • 00:44:15
    not do a lot in terms of getting more
  • 00:44:17
    towards the outputs that you're looking
  • 00:44:19
    for yeah 100% and you're completely
  • 00:44:22
    right on that one fine tuning for I
  • 00:44:24
    would say a vast amount of use cases
  • 00:44:26
    isn't really necessary, you can get, I
  • 00:44:28
    would say 90 even 95% of the way with uh
  • 00:44:31
    with just good old fashioned prompt
  • 00:44:32
    engineering and few-shot prompting
  • 00:44:34
    here, um. With few-shot prompting there's an
  • 00:44:38
    interesting paper that came out last
  • 00:44:39
    year um and I can't remember the
  • 00:44:42
    specific name of it but uh it talks
  • 00:44:44
    about the decision boundary so there's
  • 00:44:46
    an important uh kind of lesson to learn
  • 00:44:49
    on that, is that for the few-shot prompts that
  • 00:44:51
    you're giving the important part is to
  • 00:44:54
    give ones that are confusing to the
  • 00:44:56
    model itself so the ones that you notice
  • 00:44:58
    that it's getting wrong consistently if
  • 00:45:00
    you actually categorize those and take
  • 00:45:02
    those in and take the one to five hardest
  • 00:45:05
    examples that you get and then use those
  • 00:45:09
    as the uh yeah as the examples in there
  • 00:45:12
    you'll actually get a lot better
  • 00:45:13
    results coming out of your model too
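A rough sketch of the workflow Spencer is describing, under the assumption that you already have a small set of labelled emails and the prompt and classification helpers sketched earlier: run the model over the labelled set, keep the ones it gets wrong, and promote a handful of those to few-shot examples.

```python
# Rough sketch of the "use the confusing ones" idea: run the model zero-shot
# over emails you have already labelled, collect the ones it gets wrong, and
# promote a handful of those misses to few-shot examples in the prompt.
# `build_prompt` and `classify_email` are the helpers sketched earlier;
# `labelled_emails` is assumed to be a list of (email_text, correct_label) pairs.

def collect_hard_examples(labelled_emails, build_prompt, classify_email, limit=5):
    misses = []
    for email_text, correct_label in labelled_emails:
        predicted = classify_email(build_prompt(email_text))
        if predicted.lower() != correct_label.lower():
            misses.append((email_text, correct_label))
        if len(misses) >= limit:
            break
    return misses  # feed these back in as Example 1..N in the few-shot block
```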
  • 00:45:15
    well that's that's I'm learning
  • 00:45:17
    something on this on this call in this
  • 00:45:18
    video as well because uh I mean I'd
  • 00:45:20
    always, in my few-shot examples,
  • 00:45:21
    have kind of like the most common ones,
  • 00:45:23
    you might chuck a curveball in there as
  • 00:45:24
    well but I just kind of put the five
  • 00:45:26
    three to five common ones um but knowing
  • 00:45:28
    that we should try to figure out when
  • 00:45:29
    it's stuffing up and then and put those
  • 00:45:31
    in as examples is great so any other
  • 00:45:33
    notes you have on the content, just
  • 00:45:35
    taking a look at the presentation, but the
  • 00:45:36
    markdown formatting aspect um any of the
  • 00:45:39
    other, any other techniques? I know emotion
  • 00:45:40
    prompting, anything more you want to add, so anything
  • 00:45:42
    that you got there yeah uh markdown is
  • 00:45:45
    one that we use extensively um I'm a
  • 00:45:48
    huge nerd so I like writing in markdown
  • 00:45:50
    anyways just because most of the the
  • 00:45:51
    notebooks uh Jupyter notebooks if
  • 00:45:53
    there's any other uh data nerds out
  • 00:45:55
    there like myself, um, so it's rather,
  • 00:45:58
    um, yeah,
  • 00:45:59
    consistent and familiar for myself. Is there any
  • 00:46:02
    data or or papers that you've seen with
  • 00:46:05
    the uh the markdown base because in the
  • 00:46:07
    presentation just before I was like look
  • 00:46:08
    I can't find any research papers, but
  • 00:46:10
    I'm sure there probably are some, but uh it's
  • 00:46:12
    more like if open AI using it you'd be
  • 00:46:14
    pretty stupid not to do it and even just
  • 00:46:15
    functionally for us as as writing these
  • 00:46:17
    prompts it's so much more useful to at
  • 00:46:19
    least have some kind of structure to it
  • 00:46:21
    so purely on our side you'd use it
  • 00:46:22
    regardless just to make it easier on
  • 00:46:24
    your on your end yeah absolutely so I
  • 00:46:28
    definitely remember reading I think at
  • 00:46:29
    least a couple papers about structured
  • 00:46:31
    uh structured inputs in markdown format
  • 00:46:33
    and there's other ones as well that you
  • 00:46:34
    can use um but even intuitively so when
  • 00:46:37
    they're doing the fine-tuning or fine
  • 00:46:40
    tuning in terms of
  • 00:46:42
    uh uh reinforcement learning with human
  • 00:46:45
    feedback, RLHF, um what they're doing is
  • 00:46:47
    they're actually providing markdown
  • 00:46:49
    based formatting and that's how they're
  • 00:46:51
    structuring these prompts that they're
  • 00:46:52
    giving to it in order to fine-tune it so
  • 00:46:54
    intuitively of course if it's seen it
  • 00:46:57
    more it's going to do better when it
  • 00:46:58
    sees more of the same that it's been
  • 00:47:00
    trained off um the cool part about using
  • 00:47:03
    markdown as well is you get to actually
  • 00:47:04
    use semantic information so if you're
  • 00:47:07
    writing a Word document if you want to
  • 00:47:08
    put bold in there if you want to put
  • 00:47:10
    something in italics titles subtitles
  • 00:47:12
    all these things it makes it into a much
  • 00:47:14
    more structured format and that Nuance
  • 00:47:16
    comes through on the other side to be
  • 00:47:18
    able to uh yeah make better better
  • 00:47:21
    prompts to to get better outputs the
  • 00:47:24
    other one that uh I would suggest as
  • 00:47:26
    well is they like small little things so
  • 00:47:29
    uh being very encouraging towards uh an
  • 00:47:32
    llm can help so uh I usually start off
  • 00:47:34
    with you're a world class X and you know
  • 00:47:38
    you are an absolute star doing this it
  • 00:47:40
    seems a little bit ridiculous at the
  • 00:47:41
    time that I'm giving this positive
  • 00:47:43
    feedback to a machine but uh very
  • 00:47:46
    helpful um the other one's telling the
  • 00:47:49
    model to take a deep breath and to think
  • 00:47:51
    it through step by step before
  • 00:47:52
    responding I'm 100% serious has been
  • 00:47:56
    proven to actually increase the quality
  • 00:47:58
    of your responses and that also doubles
  • 00:48:00
    as a great one when your
  • 00:48:01
    significant other is angry, usually,
  • 00:48:05
    that... yeah, yeah, I would not suggest that,
  • 00:48:08
    uh, I'll be honest, following that
  • 00:48:11
    up by telling them to calm down.
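As a tiny illustrative sketch of the two tips Spencer just mentioned, encouragement plus "take a deep breath and think it through step by step", appended to whatever role line you are already using; the exact wording is an assumption, not a fixed formula:

```python
# Tiny sketch of the two tips above: encouragement plus "take a deep breath
# and think it through step by step", appended to an existing role line.
# The exact wording is an assumption, not a fixed formula.

ENCOURAGEMENT = (
    "You are a world-class expert at this and you are brilliant at it. "
    "This task is very important to me. "
    "Take a deep breath and think it through step by step before responding."
)

def with_encouragement(role_line: str) -> str:
    return f"{role_line}\n\n{ENCOURAGEMENT}"
```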
  • 00:48:14
    Anyway, it's good you mention that,
  • 00:48:17
    sorry, the hyping the model up, I
  • 00:48:19
    talked about this just earlier in the
  • 00:48:20
    video, that look, this emotion prompting
  • 00:48:22
    thing where you can get I think 115%
  • 00:48:24
    increase in your in your accuracy it's
  • 00:48:26
    just by being like wow you well firstly
  • 00:48:27
    on the role prompting being like wow you
  • 00:48:29
    are like the best at this and then
  • 00:48:31
    enriching it with additional
  • 00:48:32
    words to reinforce like how good it
  • 00:48:34
    is at that task, and then the other, I
  • 00:48:37
    think... so, um, anyway, back to what
  • 00:48:40
    you said yeah I and it's actually funny
  • 00:48:43
    as well Persona based uh thing so if you
  • 00:48:46
    uh not only tell it it's a world class X
  • 00:48:49
    if you actually use names of specific
  • 00:48:51
    people especially people who have
  • 00:48:52
    written over the Internet or uh you know
  • 00:48:54
    if you say you are Albert Einstein
  • 00:48:58
    it will actually come out with higher
  • 00:48:59
    quality outputs um that are very much in
  • 00:49:02
    the style of writing the the person that
  • 00:49:04
    you're talking about I use it for
  • 00:49:06
    programming personalities so Theo he he
  • 00:49:08
    does the T3 stack um and I'll constantly
  • 00:49:11
    say you're Theo show me how to refactor
  • 00:49:14
    my code like Theo would and that
  • 00:49:16
    actually goes really really well um and
  • 00:49:19
    then the other kind of last one in here
  • 00:49:21
    is on the positivity route, but not using
  • 00:49:24
    negative uh feedback, so a lot of the
  • 00:49:27
    time your first impulse is going to
  • 00:49:29
    be like stop doing this don't do this
  • 00:49:31
    don't do that if you instead focus on do
  • 00:49:35
    this or do that, um, the negatively connoted
  • 00:49:40
    uh words actually are associated with
  • 00:49:43
    worse outcomes than positively framed ones.
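A small illustrative pair showing the reframing Spencer describes, stating what the model should do rather than what it should not do; both strings are made up for illustration:

```python
# Illustrative pair showing the reframing described above: tell the model what
# to do rather than what not to do. Both strings are made up for illustration.

NEGATIVELY_FRAMED = "Do not use technical jargon and do not write more than one paragraph."
POSITIVELY_FRAMED = "Write in plain, everyday language and keep your answer to a single paragraph."
```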
  • 00:49:46
    yeah, it's just interesting because
  • 00:49:48
    in the research for this, I was
  • 00:49:49
    trying to put together okay like
  • 00:49:50
    negative prompting is this a real thing
  • 00:49:51
    it seems like the consensus is that it
  • 00:49:53
    doesn't actually uh do much, but I've
  • 00:49:55
    anecdotally found
  • 00:49:57
    the contrary, which is uh if it's
  • 00:49:59
    doing something incorrectly I'll usually
  • 00:50:01
    just put at the very bottom in the notes
  • 00:50:02
    section just never do this in your
  • 00:50:05
    output and it usually tends to work so I
  • 00:50:06
    mean there's both sides there it works
  • 00:50:09
    for me sometimes but it's probably
  • 00:50:10
    something a lack of my skills as well um
  • 00:50:12
    that I should be doing it further up but
  • 00:50:14
    yeah there's some really good things I
  • 00:50:15
    think you guys can use, and as Spence said, that's
  • 00:50:18
    another gem that I'll be
  • 00:50:19
    incorporating into my prompting, which is
  • 00:50:20
    giving it a name, giving the role a name,
  • 00:50:22
    um, and that's something, obviously you just say
  • 00:50:24
    you're an expert this this this um but
  • 00:50:26
    if you have an example of a real person
  • 00:50:28
    or that someone that the internet would
  • 00:50:30
    have had information about um you can
  • 00:50:31
    throw that in there as
  • 00:50:33
    well yeah absolutely um yeah I think
  • 00:50:37
    those are the the big topl line ones for
  • 00:50:39
    me at least right yeah no that's really
  • 00:50:41
    helpful again this is why I brought
  • 00:50:42
    Spencer on even I've I've learned
  • 00:50:44
    something here um but yeah we can jump
  • 00:50:45
    back to the video thank you Spencer
  • 00:50:47
    thanks so much then so I hope that's
  • 00:50:48
    drilled in the importance of prompt
  • 00:50:49
    engineering and and being able to use
  • 00:50:51
    these cheaper and faster models to
  • 00:50:53
    achieve the outcomes that your clients
  • 00:50:54
    want otherwise you're not going to make
  • 00:50:55
    any money uh but going back to to this I
  • 00:50:57
    just want to say look everything that
  • 00:50:58
    I've just taught you here can be applied
  • 00:51:00
    to all these different types of systems
  • 00:51:01
    and what I want to leave you off with at
  • 00:51:02
    the end of this is examples of things so
  • 00:51:05
    AI agents, like GPTs, are a good
  • 00:51:07
    example of this um or the building AI
  • 00:51:09
    agents on my own platform in my own
  • 00:51:11
    software agentive if you want to check
  • 00:51:13
    it out, we're only on waitlist at the
  • 00:51:14
    moment so you can check that out in the
  • 00:51:15
    description uh but agentive allows you
  • 00:51:17
    to build AI agents, as does the GPT
  • 00:51:20
    Builder on the ChatGPT site, but what we
  • 00:51:22
    want to do if we're modifying this prompt
  • 00:51:23
    formula for this use case of AI agents
  • 00:51:26
    is to modify to include how to use the
  • 00:51:27
    knowledge how to use the tools and your
  • 00:51:29
    answer then you can provide examples of
  • 00:51:31
    response styles and tone so you can pause
  • 00:51:33
    that take a look see but here most
  • 00:51:35
    important things to point out is that
  • 00:51:37
    I've added in, uh, so you can see role, task
  • 00:51:39
    specifics and then tools so the tools
  • 00:51:42
    here if you are adding custom tools into
  • 00:51:43
    your uh into your gpts or into your AI
  • 00:51:46
    agents you can add a little section uh
  • 00:51:48
    using the same kind of format right we
  • 00:51:50
    have a heading and say you have two
  • 00:51:51
    tools to use one I like to include the
  • 00:51:54
    knowledge base if I've added any
  • 00:51:55
    knowledge to my AI agent, I'll tell
  • 00:51:57
    it to use the knowledge base because
  • 00:51:59
    that's actually how it's working, they
  • 00:52:00
    use it as a knowledge base tool, they
  • 00:52:02
    just don't really tell you that it's
  • 00:52:03
    a tool, um, so you instruct it:
  • 00:52:05
    knowledge base is one of the tools you
  • 00:52:07
    have you can use it when you're
  • 00:52:08
    answering AI business related questions
  • 00:52:10
    and number two is a cosine similarity
  • 00:52:11
    tool, it could be another tool that's
  • 00:52:13
    calling relevance or something, uh, but
  • 00:52:15
    tell it how to use each of the tools
  • 00:52:16
    that's involved and then examples of
  • 00:52:18
    okay here's a question someone asks the
  • 00:52:20
    agent here's how you should respond uh
  • 00:52:22
    etc. So, not rocket science, you guys
  • 00:52:24
    can use that, uh, but that's how I
  • 00:52:25
    adapt this formula to do AI agent
  • 00:52:28
    prompts and it works really well next is
  • 00:52:30
    voice agents you need to modify the
  • 00:52:32
    prompt formula to include a script
  • 00:52:33
    outline if necessary, uh, so Synthflow, Bland AI,
  • 00:52:36
    Air, all these things that are popping
  • 00:52:37
    off right now uh you can modify the same
  • 00:52:39
    prompt template uh to do uh really good
  • 00:52:42
    voice agents for you so role task but in
  • 00:52:45
    the task here we're giving you an
  • 00:52:46
    outline of how it should talk and the
  • 00:52:47
    steps involved uh then we have the
  • 00:52:49
    specifics then we have context about the
  • 00:52:51
    business uh this is an example for a
  • 00:52:53
    restaurant um I'm just giving a bit of
  • 00:52:54
    context on the restaurant there then we
  • 00:52:56
    have examples of how it should respond
  • 00:52:58
    to the most common questions as I said
  • 00:53:00
    before you can also come in here and add
  • 00:53:01
    in a script section and add in like a
  • 00:53:05
    rough outline of how the script would go
  • 00:53:06
    but I've kind of included that in this
  • 00:53:08
    uh in this section here from
  • 00:53:10
    a high level at least so voice agents
  • 00:53:13
    same sort of thing modify it to to do
  • 00:53:15
    the job then we have ai automations
  • 00:53:16
    which can be using Zapier, Make, Airtable,
  • 00:53:18
    Airtable now has AI which is cool, uh,
  • 00:53:20
    but you can create powerful AI tasks and
  • 00:53:22
    businesses that can be relied upon to
  • 00:53:23
    handle thousands of operations a month
  • 00:53:25
    uh what we just built in the email
  • 00:53:27
    classifier is an example of an
  • 00:53:28
    automation so I don't really need to go
  • 00:53:29
    over this but here's another example at
  • 00:53:31
    the end here you can see sometimes I
  • 00:53:33
    like to throw this in um is after I've
  • 00:53:35
    given examples at the bottom I'll go q
  • 00:53:38
    and then I'll put the constraint in or
  • 00:53:39
    in this case the variable uh in again
  • 00:53:42
    and then I'll leave the A open, put a
  • 00:53:44
    space, and then it's just going to kind
  • 00:53:45
    of autofill that and it's a it's another
  • 00:53:46
    technique you can use to to get it to
  • 00:53:48
    only output uh the exact kind of uh
  • 00:53:50
    output style that you want so feel free
  • 00:53:52
    to use that as you need AI tools um you
  • 00:53:56
    may not know what I mean by tools but
  • 00:53:57
    basically we can set up a bunch of
  • 00:53:59
    inputs, say okay, niche, offer, then we can
  • 00:54:01
    insert that into a, uh, into a
  • 00:54:04
    pre-written prompt and then that's going
  • 00:54:06
    to be allowed to connect to either gpts
  • 00:54:08
    or you can build it um on on a on a
  • 00:54:11
    landing page and it can be used to speed
  • 00:54:12
    up workflows so there's so many
  • 00:54:14
    different ways you can use it um here's
  • 00:54:16
    an example again you can pause that this
  • 00:54:18
    an example um here you can see I'm
  • 00:54:19
    inserting the variables uh we have lots
  • 00:54:21
    of input output Pairs and then I'm
  • 00:54:24
    screaming at the end here because
  • 00:54:25
    because it wasn't doing what I wanted so uh
  • 00:54:28
    yeah take those I'll I'll leave a link
  • 00:54:30
    to this presentation down on uh I think
  • 00:54:32
    it'll be on my school community so you
  • 00:54:33
    just find this video um there'll be a a
  • 00:54:35
    resource for this thing in the YouTube
  • 00:54:37
    Tab and you can find this video pull
  • 00:54:39
    this up and then use this as you
  • 00:54:41
    wish so I want to bring you back to this
  • 00:54:43
    um here's a lollipop um because you get
  • 00:54:45
    a lollipop for now completing this
  • 00:54:47
    course and you're now a successful and a
  • 00:54:50
    a genius level. I'm not even sure what
  • 00:54:51
    this guy's name is
  • 00:54:52
    supposed to be but he looks like a
  • 00:54:53
    genius to me he looks like a Jedi or
  • 00:54:55
    something cool so you now this guy and
  • 00:54:57
    you didn't end up being stuck in this uh
  • 00:55:00
    this midwit territory, so here's your
  • 00:55:01
    little lollipop and I'm proud of you for
  • 00:55:03
    getting through this because the skills
  • 00:55:04
    that I just taught you as I say affect
  • 00:55:06
    every different thing you're trying to
  • 00:55:07
    sell in this AI space if you don't have
  • 00:55:09
    this nailed um you're not going to be
  • 00:55:10
    able to build things and you're not
  • 00:55:11
    going to create value for your clients
  • 00:55:12
    cuz, even if
  • 00:55:14
    you're kind of okay, if you can't get
  • 00:55:16
    the cheaper model to do what you need it
  • 00:55:18
    to do then you're not going to be able
  • 00:55:19
    to succeed long term and I mean you put
  • 00:55:22
    yourself up if if someone was offering
  • 00:55:24
    the same AI service and you said Hey
  • 00:55:25
    look it's going to cost you this much a
  • 00:55:27
    month and it's going to take 10 seconds
  • 00:55:29
    to respond and some other guy goes okay
  • 00:55:30
    it's going to cost you one tenth of that and
  • 00:55:32
    it's going to take a quarter of the time
  • 00:55:35
    um, who's going to win there? So, as
  • 00:55:37
    much PVP... there's not much PVP going on
  • 00:55:39
    in the space right now because there's
  • 00:55:40
    very few people selling AI
  • 00:55:42
    solutions as agencies, so we're still
  • 00:55:43
    very early to it but over time if you
  • 00:55:46
    don't have these skills you're going to
  • 00:55:46
    get wiped out by people who do um and
  • 00:55:50
    yeah keep in mind there's so much
  • 00:55:52
    potential to be squeezed out of these
  • 00:55:53
    prompts and out of these models if
  • 00:55:55
    you just apply these techniques and squeeze out every
  • 00:55:57
    300% increase. I'm going to be making a
  • 00:55:59
    couple more of these Style videos if you
  • 00:56:00
    did like this, if you like me being a lot
  • 00:56:01
    more nerdy and just telling you
  • 00:56:04
    this stuff, then let me know in the comments
  • 00:56:06
    because I much prefer doing these kind
  • 00:56:07
    of videos even though I'm now getting
  • 00:56:09
    super hot and sweaty and my cat's here but
  • 00:56:11
    I like making this, personally it's a
  • 00:56:12
    lot more fun than my normal videos,
  • 00:56:15
    but uh yeah you get the idea if you've
  • 00:56:16
    enjoyed please let me know down below
  • 00:56:18
    and uh subscribe to the channel if you
  • 00:56:19
    haven't already I'm probably going to
  • 00:56:20
    have a couple more videos like this on
  • 00:56:21
    core things that I think you need to
  • 00:56:23
    understand because if you don't learn
  • 00:56:24
    this then you can't use my SaaS and I
  • 00:56:26
    can't make money so I'm very selfishly
  • 00:56:29
    teaching you this stuff so that one day
  • 00:56:30
    you can use my SaaS and I can sell my
  • 00:56:31
    SaaS for hundreds of millions of dollars
  • 00:56:33
    so forgive me for being selfish but you
  • 00:56:35
    get to win along the way um but yeah see
  • 00:56:37
    you in the next one
Tags
  • Prompt Engineering
  • AI Systems
  • Effectiveness
  • Efficiency
  • Role Prompting
  • Chain of Thought
  • Few-shot Prompting
  • Markdown Formatting
  • Emotional Prompting
  • AI in Business