Complete Guide to K8sGPT | Simplify Kubernetes Troubleshooting with AI

00:48:16
https://www.youtube.com/watch?v=eKsWS7OM5oY

Summary

TLDR: The video covers the recent surge in AI adoption across industries, emphasizing tools like ChatGPT, and discusses integrating AI with Kubernetes to boost productivity and simplify complex Kubernetes environments. The host introduces K8sGPT, a powerful tool designed to facilitate Kubernetes cluster management, debugging, and troubleshooting using AI. The video provides a comprehensive guide to installing K8sGPT using tools like Helm, setting up the necessary configuration, and leveraging the CLI for effective analysis of errors within Kubernetes clusters. Viewers are walked through authenticating K8sGPT with AI backends, using commands to pinpoint and explain errors, and maximizing the tool's potential with features like anonymization to protect sensitive data. The video also touches on the integration of K8sGPT with other tools in the cloud-native ecosystem, such as Trivy and Kyverno, while illustrating how continuous scanning through the K8sGPT operator can significantly improve cluster management. Emphasis is placed on the community-driven development of K8sGPT and the potential it holds for the future of comprehensive Kubernetes management.

Conclusions

  • 📈 AI adoption has risen by 60% in recent years.
  • 🤖 AI can significantly improve Kubernetes management.
  • 🛠️ K8sGPT helps in debugging and troubleshooting Kubernetes.
  • 💻 Helm is needed for installing K8sGPT.
  • 🔒 The anonymize flag in K8sGPT protects sensitive data.
  • 🔄 K8sGPT integrates with tools like Trivy and Kyverno.
  • 🧩 Filters in K8sGPT help in targeted resource scanning.
  • 🌐 K8sGPT can be authenticated with multiple AI backends.
  • 🔍 The K8sGPT operator allows continuous scanning.
  • 🚀 K8sGPT highlights AI's role in complex IT environments.

Timeline

  • 00:00:00 - 00:05:00

    The video begins by discussing the significant growth in AI adoption across industries, emphasizing how tools like ChatGPT are enhancing productivity and streamlining processes. AI's impact leads to questions about integrating AI with Kubernetes, a widely used container orchestration platform. The speaker proposes using AI to manage Kubernetes environments more efficiently by automating tasks and diagnosing problems.

  • 00:05:00 - 00:10:00

    The video introduces K8sGPT, a tool that applies AI to simplify troubleshooting and debugging Kubernetes clusters. The speaker intends to demonstrate its capabilities, starting with the prerequisites for using the tool, such as having a Kubernetes cluster and Helm installed. The video promises an exploration of how this tool can enhance Kubernetes management.
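
    As a minimal sketch of those prerequisites (assuming minikube as the local distribution, as used in the video; kind, k3d, or k3s work just as well):

        # spin up a single-node cluster and confirm the tooling
        minikube start
        kubectl get nodes   # expect one node in the Ready state
        helm version        # Helm must be pre-installed for the later Helm charts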

  • 00:10:00 - 00:15:00

    The speaker guides through installing K8sGPT, detailing two installation methods: CLI and operator. The setup requires configuring an AI backend for K8sGPT to facilitate AI processing. The video covers using different AI backends and highlights integration flexibility with local tools like Ollama, which allows running models without API fees.
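
    For reference, a hedged sketch of the macOS CLI install via Homebrew (the tap name follows the K8sGPT installation guide; check the docs for other platforms):

        brew tap k8sgpt-ai/k8sgpt
        brew install k8sgpt
        k8sgpt version   # the video shows a 0.3.x release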

  • 00:15:00 - 00:20:00

    The video dives into the initial setup of K8sGPT, focusing on authenticating the tool to an AI backend. The speaker demonstrates using the CLI to authenticate K8sGPT with Ollama, a free way to run AI models locally, explaining different options for backend integration and setting the stage for leveraging AI in Kubernetes error analysis.
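
    Pieced together from the walkthrough, the authentication step looks roughly like this (11434 is Ollama's default port; flag spellings can vary across K8sGPT releases, so verify with k8sgpt auth add --help):

        # register Ollama as the AI backend, pointing at the locally served llama3 model
        k8sgpt auth add --backend ollama --model llama3 --baseurl http://localhost:11434
        k8sgpt auth list   # ollama should now be listed as an active provider
        ollama serve       # start the local server so K8sGPT can reach the API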

  • 00:20:00 - 00:25:00

    The demonstration progresses by creating an error in a Kubernetes pod. The speaker uses K8sGPT to identify and explain the error, comparing it with results from traditional Kubernetes commands, and shows how K8sGPT suggests solutions. This illustrates the tool's potential to simplify error diagnosis and provide actionable solutions.
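
    A sketch of that demo, reconstructed from the captions (everything beyond the pod name, the busybox image, and the oversized CPU request of 1,000 cores is an assumption):

        cat <<'EOF' > broken-pod.yaml
        apiVersion: v1
        kind: Pod
        metadata:
          name: hungry-pod
        spec:
          containers:
            - name: busybox
              image: busybox
              command: ["sleep", "3600"]
              resources:
                requests:
                  cpu: "1000"   # far beyond the node's 11 cores, so scheduling fails
        EOF
        kubectl apply -f broken-pod.yaml
        k8sgpt analyze                      # lists the pending pod: insufficient cpu
        k8sgpt analyze --explain -b ollama  # asks the backend for plain-English fixes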

  • 00:25:00 - 00:30:00

    A further example is given where a service configuration error is analyzed using K8sGPT, demonstrating the tool's ability to identify non-obvious errors by automating analysis processes. The example showcases the tool's utility in simplifying Kubernetes management for users of varying expertise levels by providing direct solutions.
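
    A minimal reconstruction of that scenario (the exact misspelling in the video is hard to make out from the captions; here the service selector carries the typo while the pod is labeled correctly):

        cat <<'EOF' | kubectl apply -f -
        apiVersion: v1
        kind: Service
        metadata:
          name: pod-svc
        spec:
          selector:
            app: ngnx        # typo: never matches the pod's app=nginx label
          ports:
            - port: 80
        ---
        apiVersion: v1
        kind: Pod
        metadata:
          name: nginx-pod
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx
        EOF
        k8sgpt analyze   # reports: Service has no endpoints, expected label app=ngnx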

  • 00:30:00 - 00:35:00

    The speaker explains how to refine error analysis using K8sGPT by applying filters to narrow down the scope of searches to specific resources. This feature helps in managing large Kubernetes environments by focusing analyses on particular namespaces or resource types, thereby reducing the noise from irrelevant error messages.
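
    The filter and namespace flags from this segment, as shown in the video (the filter names come from k8sgpt filters list):

        k8sgpt filters list                            # resource types available as filters
        k8sgpt analyze --filter=Pod                    # scan pods only
        k8sgpt analyze --filter=Pod --namespace=demo   # restrict the scan to one namespace
        k8sgpt analyze --filter=Pod,Service,Node       # combine filters with commas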

  • 00:35:00 - 00:40:00

    Integrations with other tools, such as Trivy for vulnerability scanning, and K8sGPT's ability to analyze configuration security expand its utility. The speaker demonstrates activating integrations and using them to enhance cluster security, providing insight into how K8sGPT fits into a broader ecosystem of Kubernetes management tools.
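
    Roughly, the Trivy integration commands used in the video (activating the integration installs the Trivy operator via Helm and adds two new filters):

        k8sgpt integration list             # built-in integrations and their status
        k8sgpt integration activate trivy   # installs the Trivy operator in-cluster
        k8sgpt filters list                 # now also lists VulnerabilityReport and ConfigAuditReport
        k8sgpt analyze --filter=VulnerabilityReport --explain -b ollama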

  • 00:40:00 - 00:48:16

    Finally, the video covers advanced usage of K8sGPT through the operator, which enables continuous monitoring and analysis. This automated approach contrasts with manual CLI usage, facilitating proactive Kubernetes management. The video also highlights community and development aspects, encouraging contributions to the tool's growth.
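
    The operator install itself is not captured in this transcript; as a hedged sketch based on the K8sGPT operator's published Helm chart (chart URL, release name, and namespace are assumptions to verify against the docs):

        helm repo add k8sgpt https://charts.k8sgpt.ai/
        helm repo update
        helm install release k8sgpt/k8sgpt-operator \
          -n k8sgpt-operator-system --create-namespace
        # the operator then scans continuously, instead of one-off CLI runs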

Frequently asked questions

  • What is the focus of the video?

    The focus is on the integration of AI, particularly K8sGPT, with Kubernetes for better debugging and troubleshooting.

  • Who is the host of the video?

    The host's name is Kunal.

  • Why is AI integration suggested for Kubernetes?

    AI can improve management efficiency, automate tasks, and streamline decision-making in Kubernetes environments.

  • What tool is introduced for Kubernetes troubleshooting?

    The tool introduced is K8sGPT.

  • What prerequisites are needed to install K8sGPT?

    You need a Kubernetes cluster and Helm pre-installed.

  • What purpose does the 'analyze' command serve in K8sGPT?

    It identifies errors in a Kubernetes cluster and provides potential explanations and solutions.

  • How does K8sGPT ensure data privacy?

    It uses an 'anonymize' flag to mask sensitive information before processing.
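
    For example, combining the flags shown in the video (--anonymize masks resource names and labels before they leave the cluster; the masked values are visible in the JSON output):

        k8sgpt analyze --explain --anonymize -b ollama --output json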

  • What advanced features does K8sGPT offer?

    K8sGPT offers integrations with tools like Trivy and Kyverno, and supports multiple AI backends.

  • What development stage is the K8sGPT Kyverno integration at?

    The Kyverno integration is at an early stage, and users might encounter issues.

  • Where can you find more resources or documentation for K8s GPT?

    Resources and documentation are available on its official website and linked documentation.

Captions (en)
  • 00:00:00
    all right I think we all agree that in
  • 00:00:02
    these recent years AI has taken the
  • 00:00:04
    world by storm right I mean tools like
  • 00:00:06
    ChatGPT and different products by Open
  • 00:00:09
    AI are being used in different
  • 00:00:11
    Industries right and they are actually
  • 00:00:13
    making our work much easier providing us
  • 00:00:15
    insights they're making the entire work
  • 00:00:17
    process a lot faster therefore
  • 00:00:19
    boosting our productivity as well now
  • 00:00:21
    according to the state of the AI report
  • 00:00:22
    by McKinsey AI adoption has increased
  • 00:00:25
    around 60% in the last year alone I mean
  • 00:00:28
    that's a huge number right and
  • 00:00:29
    businesses are continuously using AI to
  • 00:00:32
    improve their processes and stay ahead
  • 00:00:34
    in terms of innovation now this
  • 00:00:36
    particular Point really got me thinking
  • 00:00:38
    because if AI has this kind of significance
  • 00:00:40
    and this impact in all of these
  • 00:00:42
    industries why don't we use AI with
  • 00:00:44
    Kubernetes Kubernetes is one of the most
  • 00:00:47
    widely adopted open source container
  • 00:00:49
    orchestration platform out there if you
  • 00:00:50
    don't know about Kubernetes I mean if you've
  • 00:00:52
    been following the channel for a while
  • 00:00:54
    you would definitely know what Kubernetes
  • 00:00:55
    is but if you don't there's an entire
  • 00:00:57
    workshop on Kubernetes 101 that teaches you
  • 00:01:00
    the basic to advanced concepts about
  • 00:01:02
    Kubernetes you can check out the description
  • 00:01:04
    box down below for its link the point is
  • 00:01:06
    that Kubernetes is one of the largest open
  • 00:01:08
    source projects out there but we all
  • 00:01:10
    agree that when Kubernetes environments grow
  • 00:01:12
    larger it really becomes a challenge
  • 00:01:14
    when it comes to debugging and
  • 00:01:16
    troubleshooting any issues overall it becomes
  • 00:01:18
    really difficult to manage so I think AI
  • 00:01:20
    can make a big difference here right by
  • 00:01:22
    adding AI to a normal Kubernetes
  • 00:01:24
    workflow we can make the management
  • 00:01:26
    process much faster and much more efficient
  • 00:01:29
    AI can help us quickly diagnose and
  • 00:01:31
    resolve any issues that we have it can
  • 00:01:33
    automate any routine tasks that we have
  • 00:01:35
    in our Kubernetes environment and overall can
  • 00:01:37
    improve the decision-making process of
  • 00:01:39
    different companies right now we have
  • 00:01:41
    already seen in the cloud native
  • 00:01:43
    ecosystem that a lot of products have
  • 00:01:45
    started adopting AI in some way or the
  • 00:01:47
    other but there is one tool out there
  • 00:01:49
    which was released very recently and I
  • 00:01:52
    believe that this particular tool brings
  • 00:01:53
    the capabilities of AI to the next level
  • 00:01:56
    hey everyone my name is Kunal and in
  • 00:01:58
    this particular video we are going to
  • 00:01:59
    learn about K8sGPT which is a really
  • 00:02:02
    powerful tool that brings the
  • 00:02:04
    capabilities of AI to Kubernetes
  • 00:02:06
    troubleshooting and debugging and makes
  • 00:02:08
    it really easy for you to solve any
  • 00:02:10
    problems in your Kubernetes cluster if this
  • 00:02:12
    sounds interesting to you hit that like
  • 00:02:14
    button and let's get
  • 00:02:15
    [Music]
  • 00:02:19
    started now before we move ahead and
  • 00:02:22
    install K8sGPT onto our system there are
  • 00:02:24
    a few things that you need to configure
  • 00:02:25
    number one is you need to have a
  • 00:02:27
    Kubernetes cluster up and running so I
  • 00:02:29
    already have gone ahead and created one
  • 00:02:30
    using minikube so if I write k get
  • 00:02:33
    nodes I have a single node Kubernetes
  • 00:02:34
    cluster running you can use any of the
  • 00:02:36
    Kubernetes distributions out there kind k3d
  • 00:02:39
    k3s or any other of your choice but you
  • 00:02:41
    need one Kubernetes cluster the next thing
  • 00:02:43
    we will need is Helm so we'll actually
  • 00:02:45
    be installing a few Helm charts while we
  • 00:02:47
    go through this particular tutorial so
  • 00:02:49
    you would definitely need Helm
  • 00:02:50
    pre-installed you can go ahead to the
  • 00:02:52
    Helm's website and you can go ahead and
  • 00:02:55
    install Helm from their documentation so
  • 00:02:57
    once we have these two configured and
  • 00:02:59
    installed right let us install K8sGPT now
  • 00:03:02
    so let's head over to the documentation
  • 00:03:04
    which is docs.k8sgpt.ai I'll put the link in
  • 00:03:07
    the description box down below so you
  • 00:03:08
    all can check it out but the important
  • 00:03:10
    thing to understand here is that there
  • 00:03:12
    are two ways in which you can install
  • 00:03:14
    K8sGPT and use it with your Kubernetes cluster
  • 00:03:16
    number one is using the CLI right and
  • 00:03:19
    that is the simplest and the most
  • 00:03:21
    basic way you can use K8sGPT the next is
  • 00:03:23
    using the K8sGPT operator which we'll
  • 00:03:25
    install in our Kubernetes cluster and it will
  • 00:03:28
    constantly keep monitoring our cluster for
  • 00:03:30
    any changes right now as for the
  • 00:03:31
    operator how to install it how to use it
  • 00:03:33
    we have a completely different section
  • 00:03:35
    for that but let's start with the most
  • 00:03:37
    basic which is the CLI right so to
  • 00:03:39
    install the CLI you can go ahead and
  • 00:03:41
    check out the installation guide if you
  • 00:03:43
    have a Mac you can install using Brew
  • 00:03:45
    I've already gone ahead and done that so
  • 00:03:46
    if I write k8sgpt version so you can see I
  • 00:03:49
    have the latest
  • 00:03:51
    0.3.0 which is the latest as I'm recording
  • 00:03:54
    this particular video and if you do
  • 00:03:57
    k8sgpt you can see that these are all the
  • 00:04:00
    bunch of commands that are available to
  • 00:04:01
    us we'll actually be exploring quite a
  • 00:04:03
    bunch of these commands today so it's
  • 00:04:04
    going to be an exciting
  • 00:04:06
    ride all right so I hope you have already
  • 00:04:08
    gone ahead and installed K8sGPT onto your
  • 00:04:10
    system the next thing we would want to
  • 00:04:12
    do is authenticate K8sGPT to an AI backend
  • 00:04:15
    okay so if we head over to the reference
  • 00:04:17
    I'll show you what I mean head over to
  • 00:04:19
    the providers so in simple terms a
  • 00:04:21
    backend or a provider is a way for
  • 00:04:23
    K8sGPT to talk to the large language model
  • 00:04:25
    or the AI model that we'll be using
  • 00:04:28
    right so we need to authenticate and
  • 00:04:30
    choose one of the models that we want to
  • 00:04:31
    use right now as of recording this video
  • 00:04:34
    K8sGPT supports 11 backends right so we
  • 00:04:36
    have all the famous ones like OpenAI we
  • 00:04:39
    have Azure OpenAI we have Amazon Google
  • 00:04:42
    Gemini is also there you can go ahead and use
  • 00:04:44
    the OpenAI provider for that you would
  • 00:04:47
    need the OpenAI API key right but let us
  • 00:04:49
    say you are just testing right like me
  • 00:04:51
    so you don't want to spend money on any
  • 00:04:53
    API keys so we'll use Ollama to keep things
  • 00:04:56
    simple now if you don't know about Ollama
  • 00:04:58
    Ollama is basically a tool that can help
  • 00:05:00
    you to run these large language models
  • 00:05:02
    on your local system free of cost right
  • 00:05:05
    so if you just see the list of models
  • 00:05:07
    that are supported so you can see these
  • 00:05:10
    are the list of all the models that you
  • 00:05:11
    can run using Ollama so we have the very
  • 00:05:14
    latest llama 3.1 at the time of
  • 00:05:17
    recording this particular video you can
  • 00:05:18
    say it was updated 7 days ago we have
  • 00:05:20
    all these particular models that we can
  • 00:05:22
    use all of them are open source large
  • 00:05:24
    language models you can use any of them
  • 00:05:26
    right now cool so all in all Ollama is a pretty
  • 00:05:30
    great way for you to run large language
  • 00:05:32
    models if you don't want to spend any
  • 00:05:34
    money so that's what we are going to use
  • 00:05:35
    K8sGPT with right now so you can go ahead
  • 00:05:38
    and download Ollama for your particular
  • 00:05:40
    operating system there are a bunch of
  • 00:05:41
    different options for Linux Mac and
  • 00:05:43
    windows I've already gone ahead and done
  • 00:05:45
    that so if I write ollama so you can see
  • 00:05:45
    the Ollama CLI is already up and
  • 00:05:53
    running okay once you have installed Ollama
  • 00:05:56
    right let us see how we can authenticate
  • 00:05:57
    K8sGPT with the AI backend right for that
  • 00:06:00
    we have a command called k8sgpt auth let us
  • 00:06:03
    write this so k8sgpt auth provides the
  • 00:06:06
    necessary credentials to authenticate
  • 00:06:08
    with the chosen back end now there are
  • 00:06:09
    actually a bunch of options we can use
  • 00:06:11
    here so if I write k8sgpt auth list this
  • 00:06:14
    actually gives a list of all the
  • 00:06:16
    supported AI backends right so you can
  • 00:06:18
    see that we have a lot of them right
  • 00:06:20
    here and the default one is OpenAI and
  • 00:06:23
    we also have Ollama listed here
  • 00:06:25
    so we want to use Ollama so let us
  • 00:06:26
    authenticate K8sGPT with Ollama for that
  • 00:06:28
    we'll use k8sgpt
  • 00:06:31
    auth add and you can see it has already
  • 00:06:34
    given us the command so I'll
  • 00:06:35
    autocomplete it let me just walk you
  • 00:06:37
    through it quickly the number one flag
  • 00:06:39
    we are providing is the backend we are
  • 00:06:41
    selecting the backend from the list of
  • 00:06:43
    backends that I just showed you we want
  • 00:06:45
    Ollama the particular model that we want to
  • 00:06:47
    use right now we want to use Llama 3
  • 00:06:49
    which is second to the latest one right
  • 00:06:51
    now the latest one is Llama 3.1 we can
  • 00:06:53
    use that as well but let's just stick to
  • 00:06:55
    Llama 3 right now and this is the base
  • 00:06:57
    URL right so the base URL is basically
  • 00:07:00
    the API endpoint where your large
  • 00:07:03
    language model is running now this
  • 00:07:04
    basically depends on the AI provider
  • 00:07:06
    that you're using on some of the AI
  • 00:07:08
    providers that are listed out here you
  • 00:07:10
    wouldn't need to use this base URL now
  • 00:07:12
    why we are using this base URL is
  • 00:07:14
    because we are running this locally so
  • 00:07:16
    this is basically at Local Host if
  • 00:07:18
    you're using a provider such as Google
  • 00:07:19
    Gemini or OpenAI you would not need to
  • 00:07:22
    provide this you would just need to
  • 00:07:23
    provide the authentication key that is
  • 00:07:25
    the API key right but right now as we
  • 00:07:27
    are running this locally we would need
  • 00:07:29
    to provide it as the base URL that's how
  • 00:07:31
    K8sGPT will be able to contact the Ollama API
  • 00:07:35
    so let us hit enter it will say Ollama is
  • 00:07:38
    added to the AI backend now if you just
  • 00:07:40
    do k8sgpt auth list you would see that Ollama is
  • 00:07:45
    an active AI provider right now so now
  • 00:07:47
    whenever we use K8sGPT right to analyze
  • 00:07:49
    our Kubernetes cluster in the backend it's
  • 00:07:51
    using the Llama 3 model and how is it
  • 00:07:53
    doing it it's using Ollama as the backend
  • 00:07:55
    provider there's one last thing that is
  • 00:07:57
    left to do which is starting the Ollama
  • 00:07:59
    server that is also important right so
  • 00:08:02
    we just have to write ollama serve and this
  • 00:08:05
    will basically start the local server of
  • 00:08:07
    Ollama and you can see that it's
  • 00:08:09
    listening on localhost and the port
  • 00:08:12
    number which we have actually configured
  • 00:08:14
    in K8sGPT right so on this particular
  • 00:08:17
    terminal you'll actually see the request
  • 00:08:19
    that K8sGPT makes with the Ollama backend
  • 00:08:21
    and we'll definitely have a look into it
  • 00:08:22
    as well okay so as I mentioned
  • 00:08:24
    previously right the main purpose of
  • 00:08:27
    K8sGPT is it uses the power of AI to scan
  • 00:08:31
    our kubernetes cluster and it helps us
  • 00:08:33
    to troubleshoot and debug any errors
  • 00:08:35
    right so before we move on to testing it
  • 00:08:38
    let us create one error explicitly so I
  • 00:08:40
    have this example pod manifest here
  • 00:08:42
    which is named by the name hungry pod
  • 00:08:44
    and you'll definitely understand in a
  • 00:08:46
    bit why we have named this the hungry
  • 00:08:48
    pod right so this is a simple pod that
  • 00:08:50
    will use the busybox container image
  • 00:08:52
    and we have given the CPU value as 1,000
  • 00:08:55
    this is a pretty simple pod now if you
  • 00:08:56
    look at this right now it doesn't look
  • 00:08:58
    like it will create an error right
  • 00:09:00
    but let us apply this into our cluster
  • 00:09:02
    and see what happens so we'll do kubectl
  • 00:09:04
    I can just do simple k apply -f
  • 00:09:09
    and we do broken-pod.yaml right so the
  • 00:09:12
    hungry pod has been created and it's in
  • 00:09:14
    the default namespace so if I do k get
  • 00:09:17
    po by the way k is short for kubectl I
  • 00:09:19
    have created an alias for this it actually
  • 00:09:22
    becomes a little bit simpler when you
  • 00:09:23
    just use K right so you can see that the
  • 00:09:26
    hungry pod is created but it's in the
  • 00:09:28
    pending state now if you're familiar
  • 00:09:30
    with the basic concept of Kubernetes right if
  • 00:09:32
    you know your way around a Kubernetes cluster
  • 00:09:35
    your default approach would be okay the
  • 00:09:37
    Pod is in the pending State we can use
  • 00:09:39
    the kubectl describe command and we can
  • 00:09:42
    use hungry pod right and we can just
  • 00:09:44
    check the pod events and see what's the
  • 00:09:47
    problem but let's switch this game right
  • 00:09:49
    and see how far we can go with AI so
  • 00:09:52
    instead of using the kubectl command we
  • 00:09:54
    use k8sgpt
  • 00:09:56
    analyze right and let us see what
  • 00:09:59
    it gives us so k8sgpt analyze is a command
  • 00:10:02
    that will list down all the errors in
  • 00:10:04
    your Kubernetes cluster right now we're
  • 00:10:06
    actually not concerned with these
  • 00:10:07
    default resources because these were
  • 00:10:09
    created by minikube we are concerned
  • 00:10:10
    with our pod right in the default
  • 00:10:12
    namespace which is the hungry pod so the
  • 00:10:14
    error is that zero of one nodes are
  • 00:10:18
    available insufficient CPU okay so
  • 00:10:21
    seeing this I can understand that there
  • 00:10:23
    are no nodes available with the
  • 00:10:25
    sufficient CPU to accommodate or to
  • 00:10:27
    schedule that particular pod okay cool I
  • 00:10:29
    understand the error but Kunal I mean
  • 00:10:32
    this is something that we can also do
  • 00:10:33
    with kubectl describe right if we do
  • 00:10:36
    kubectl
  • 00:10:37
    describe po and we do hungry pod you can
  • 00:10:41
    see that the events also gives the same
  • 00:10:44
    error right so what's the difference
  • 00:10:46
    here's where things get interesting let
  • 00:10:48
    us use k8sgpt analyze again now that we
  • 00:10:51
    know the error right we can use the
  • 00:10:53
    explain flag to list down the solution
  • 00:10:56
    for that particular error let me show
  • 00:10:57
    you how we just write k8sgpt
  • 00:11:01
    analyze
  • 00:11:02
    --explain and -b is the flag that we
  • 00:11:06
    use to specify the back end so right now
  • 00:11:09
    we are using Ollama right and we just do
  • 00:11:13
    that so now is when it will contact the
  • 00:11:17
    backend API right and you can see here
  • 00:11:20
    that something is happening in the Ollama
  • 00:11:22
    server because K8sGPT is actually trying
  • 00:11:24
    to contact the API and you can see that
  • 00:11:26
    multiple post requests have been sent to
  • 00:11:29
    this particular address it is localhost
  • 00:11:31
    /api/generate and this is what K8sGPT is
  • 00:11:33
    doing in the back end right let us see
  • 00:11:35
    the results now now it has actually
  • 00:11:36
    given us a bunch of things which are
  • 00:11:38
    helpful the number one is it has
  • 00:11:40
    simplified the error message so it says
  • 00:11:42
    insufficient CPU resources available to
  • 00:11:44
    schedule the pods with one node having
  • 00:11:46
    insufficient CPU and no preemption
  • 00:11:48
    victims found basically the error that
  • 00:11:50
    we got but it has simplified it and
  • 00:11:52
    basically presented it in plain English
  • 00:11:55
    right and it's much easier to understand
  • 00:11:56
    now and the interesting part is it has
  • 00:11:58
    given us all the possible solutions that
  • 00:12:00
    we can use to fix this particular error
  • 00:12:02
    right so the number one is you can check
  • 00:12:04
    the CPU utilization of that particular
  • 00:12:06
    node you can scale down or delete the
  • 00:12:08
    less critical pods to free up the
  • 00:12:10
    resources and there are a bunch of
  • 00:12:11
    things that you can try out now this is
  • 00:12:12
    pretty interesting because let us say
  • 00:12:14
    you are a beginner right and you don't
  • 00:12:16
    know your way around a Kubernetes cluster
  • 00:12:18
    that much this is pretty helpful for you
  • 00:12:20
    because this just give you a head start
  • 00:12:22
    right and you can see what are the
  • 00:12:24
    different solutions that I can use to
  • 00:12:25
    solve this particular error and this
  • 00:12:27
    also helps in your learnings as well
  • 00:12:28
    right so before solving it right let us
  • 00:12:30
    see what is the CPU capacity of a node
  • 00:12:33
    right which caused the error in the first
  • 00:12:35
    place so if we do kubectl get
  • 00:12:36
    nodes if I do kubectl describe nodes
  • 00:12:40
    minikube
  • 00:12:42
    so in this particular section if
  • 00:12:46
    you scroll up a bit we'll see a section
  • 00:12:48
    named which is capacity right here and
  • 00:12:51
    you can see the CPU limit is 11 cores
  • 00:12:54
    right and if you remember our pod we
  • 00:12:56
    have given it 1,000 core CPU right so
  • 00:12:59
    that's what was causing the error in the
  • 00:13:01
    first place right and now if you look at
  • 00:13:04
    the solutions see the fifth one adjust
  • 00:13:06
    the pod's resource requests and limits to
  • 00:13:08
    better match available node resources so
  • 00:13:10
    the available node resources is 11 CPU
  • 00:13:13
    there's a cap there right so let us say
  • 00:13:16
    we change this to five so that our node
  • 00:13:18
    can accommodate this particular pod so
  • 00:13:20
    in order to apply these changes we'd
  • 00:13:22
    have to delete our pod first right so I
  • 00:13:24
    can do K so let us do K get pods
  • 00:13:29
    then we'll do kubectl
  • 00:13:32
    delete pods we'll do hungry pod and
  • 00:13:36
    we'll also use the force flag this is
  • 00:13:37
    optional by the way so now the Pod has
  • 00:13:39
    been deleted let's reapply the Pod now
  • 00:13:42
    then we do k apply -f broken-pod the
  • 00:13:45
    pod has been applied and if we do kubectl
  • 00:13:48
    get pods it's in the container
  • 00:13:51
    creating state we do it again yep the pod
  • 00:13:54
    is running successfully so we have fixed
  • 00:13:55
    that particular error now if we do k8sgpt
  • 00:13:58
    analyze right without the explain flag
  • 00:14:00
    if you remember k8sgpt analyze gives you
  • 00:14:02
    all the errors in your particular
  • 00:14:04
    cluster and now if you have fixed that
  • 00:14:05
    particular error it shouldn't give it to us
  • 00:14:08
    again so if we do analyze right now see
  • 00:14:11
    there are only four errors and the error
  • 00:14:12
    it was showing with our pod has gone
  • 00:14:14
    from here okay so I hope you understood
  • 00:14:16
    a very basic example of how this works
  • 00:14:18
    let us try one more example which will
  • 00:14:20
    give you a lot more clarity right so let
  • 00:14:22
    us create two more resources number one
  • 00:14:24
    is we are creating a service right which
  • 00:14:26
    by the name pod-svc and I am also
  • 00:14:29
    creating a pod which is the nginx pod
  • 00:14:31
    and we are attaching the Pod with the
  • 00:14:33
    service using these labels right so let
  • 00:14:36
    us apply
  • 00:14:37
    them we'll use kubectl apply -f
  • 00:14:42
    and what was the name it was pod
  • 00:14:46
    SVC yep let us apply them so you can see
  • 00:14:50
    the service has been created and the Pod
  • 00:14:52
    has also been created so if we do kubectl
  • 00:14:54
    get service so I can see the service
  • 00:14:58
    is right here and if we do kubectl get
  • 00:15:01
    pods the nginx pod is currently
  • 00:15:04
    creating the
  • 00:15:05
    container okay so the Pod is already
  • 00:15:07
    created but interestingly you'll notice
  • 00:15:09
    that there are no errors right I mean
  • 00:15:11
    just having a look at it you cannot see
  • 00:15:13
    any errors everything is running
  • 00:15:14
    perfectly but we all agree that looks
  • 00:15:17
    can be a little bit deceiving right so
  • 00:15:19
    let us use k8sgpt analyze to see if we
  • 00:15:22
    have any hidden errors that might have
  • 00:15:25
    occurred so let us see
  • 00:15:29
    okay so if we just scroll down to the
  • 00:15:31
    list you can see that there is an error
  • 00:15:34
    with the service so it says that service
  • 00:15:36
    has no endpoints expected label app
  • 00:15:38
    ngnx so there is some kind of error
  • 00:15:40
    with the service and it has to do
  • 00:15:42
    something with the labels right I'm not
  • 00:15:44
    going to check how you can solve it
  • 00:15:46
    because that's where K8sGPT would come
  • 00:15:48
    into the picture we can use K8sGPT again
  • 00:15:51
    we'll use the same command k8sgpt analyze
  • 00:15:54
    --explain
  • 00:15:55
    -b ollama so it has given us a bunch of
  • 00:15:58
    solutions let us scroll up okay so it
  • 00:16:01
    says the service has no endpoints
  • 00:16:03
    expected label app ngnx let us see
  • 00:16:05
    what are the solutions it's saying check
  • 00:16:07
    if the service is created successfully
  • 00:16:09
    it is to my knowledge we have just
  • 00:16:11
    checked it right now verify that the
  • 00:16:13
    pods are running and have the correct
  • 00:16:15
    label app ngnx okay ensure that the
  • 00:16:18
    service is referencing the correct port
  • 00:16:20
    in the spec try deleting and recreating
  • 00:16:22
    the service uh and updating the pods
  • 00:16:25
    labels to match the expected label uh
  • 00:16:28
    okay cool so let us see the second one
  • 00:16:31
    right it's saying verify the pods are
  • 00:16:32
    running and have the correct labels app
  • 00:16:34
    ngnx okay so let us check what are
  • 00:16:37
    the labels being used in the service
  • 00:16:40
    right I'll do k get service -o wide and here
  • 00:16:43
    you can see that the label is app ngnx
  • 00:16:46
    right and what is something we are using
  • 00:16:49
    with the Pod okay here we are using
  • 00:16:51
    nginx and here you can see there is a
  • 00:16:54
    spelling mistake it's saying nginx
  • 00:16:57
    without the i right let us say you
  • 00:16:59
    copied any manifest blindly from the
  • 00:17:01
    internet and it got applied and there
  • 00:17:02
    are no errors to be seen But at the end
  • 00:17:04
    of the day the service won't work right
  • 00:17:06
    because the labels are incorrect and
  • 00:17:08
    this was something that you couldn't
  • 00:17:10
    see with your eyes right because
  • 00:17:11
    everything was working fine but K8sGPT
  • 00:17:13
    was able to recognize this error cool so
  • 00:17:16
    now we know what the error is we can
  • 00:17:17
    quickly fix it we can do kubectl label
  • 00:17:22
    pod and it has already given us the
  • 00:17:24
    command so it's nginx pod we'll
  • 00:17:26
    change the label app to match it with
  • 00:17:30
    the service and we'll also use the
  • 00:17:31
    override flag which will override the
  • 00:17:33
    existing
  • 00:17:34
    labels so the Pod has been correctly
  • 00:17:36
    labeled we can just verify it we can do
  • 00:17:40
    kubectl describe pods nginx
  • 00:17:45
    pod right and if we just scroll up and
  • 00:17:48
    verify the label from here it is app
  • 00:17:51
    ngnx and I think this was the one
  • 00:17:53
    that the service was also using right
  • 00:17:56
    app ngnx cool so now the labels have
  • 00:17:59
    been matched right now if we run k8sgpt
  • 00:18:03
    analyze it shouldn't give us any error
  • 00:18:05
    with our service or our pod see
  • 00:18:09
    the error is gone so this is how you can
  • 00:18:10
    use the K8sGPT CLI right this is a very basic
  • 00:18:13
    use case of using the CLI right you can
  • 00:18:16
    quickly check for errors you can quickly
  • 00:18:18
    debug these errors and troubleshoot them
  • 00:18:20
    now the examples I showed you are pretty
  • 00:18:22
    simple you would say if you have prior
  • 00:18:24
    knowledge of Kubernetes but imagine any
  • 00:18:26
    complex example so I think K8sGPT
  • 00:18:29
    in that particular case would be a very
  • 00:18:31
    handy tool to First locate that
  • 00:18:32
    particular error and then give you
  • 00:18:34
    potential solutions that you can use to
  • 00:18:35
    solve it as well right now before we
  • 00:18:37
    move on one thing I would definitely
  • 00:18:39
    want to point out is that when we use
  • 00:18:41
    k8sgpt analyze with the explain flag the
  • 00:18:44
    accuracy of the solutions it provides us
  • 00:18:47
    right it depends on the AI model that
  • 00:18:49
    you're using it depends on the llm that
  • 00:18:51
    you're using so if you are using any
  • 00:18:53
    advanced model with K8sGPT for example
  • 00:18:56
    OpenAI because it has got GPT-4 the
  • 00:18:58
    most advanced model out there right
  • 00:19:01
    these results would be more accurate so
  • 00:19:03
    all in all the bottom line is that the
  • 00:19:05
    more accurate large language models you
  • 00:19:07
    will use the more accurate you'll get
  • 00:19:09
    the results right of your particular
  • 00:19:11
    issues this is something that I wanted
  • 00:19:12
    to mention here but right now we are
  • 00:19:14
    testing it locally and Ollama is working
  • 00:19:16
    perfectly for us to be honest but yeah
  • 00:19:17
    this is the reason that OpenAI is the
  • 00:19:19
    first backend that K8sGPT supported and
  • 00:19:22
    it's the recommended way for you to use
  • 00:19:24
    K8sGPT so one thing you might have noticed
  • 00:19:26
    when we use the k8sgpt analyze command is
  • 00:19:28
    it scans the entire cluster and it was
  • 00:19:30
    actually giving us all of the errors
  • 00:19:32
    present right so if we do k8sgpt
  • 00:19:36
    analyze so you can see that it is giving
  • 00:19:39
    us all the errors with all the different
  • 00:19:42
    components that we have now because this
  • 00:19:44
    is a demo scenario and we are just
  • 00:19:46
    learning how to use this particular tool
  • 00:19:48
    that's completely fine but imagine if
  • 00:19:50
    you have thousands of Kubernetes pods
  • 00:19:52
    running right and in that you would
  • 00:19:54
    definitely need to focus and filter out
  • 00:19:57
    particular resources for example you
  • 00:19:59
    just want to filter out all the pods and
  • 00:20:01
    just run the scan on those particular
  • 00:20:03
    pods or you might have some services or
  • 00:20:05
    you might have deployments and you just
  • 00:20:07
    want to run scans on those particular
  • 00:20:09
    deployments right let's say you just
  • 00:20:11
    want to focus on a particular namespace
  • 00:20:13
    for example right now this particular
  • 00:20:14
    scan is not namespace scoped right it is
  • 00:20:18
    giving me all the results from the kube
  • 00:20:20
    system namespace it also gave us
  • 00:20:22
    result from the default namespace when
  • 00:20:23
    we ran it previously so we want to
  • 00:20:25
    filter out resources and we want to
  • 00:20:27
    filter out namespace as well right
  • 00:20:29
    interestingly K8sGPT supports this so it
  • 00:20:31
    provides us with filters so filters are
  • 00:20:34
    actually a way for selecting which
  • 00:20:36
    resource you wish to be a part of the
  • 00:20:37
    default analysis right so if we just do
  • 00:20:41
    k8sgpt filters so the filters command
  • 00:20:44
    allows you to manage filters that are
  • 00:20:46
    used to analyze kubernetes resources you
  • 00:20:48
    can list the available filters to
  • 00:20:50
    analyze resources right so again this is
  • 00:20:52
    a way for you to filter out different
  • 00:20:54
    Kubernetes resources which you want to
  • 00:20:57
    be a part of the scan and you
  • 00:20:58
    can just keep the results clean and only
  • 00:21:00
    focus on what is needed at that
  • 00:21:02
    particular time right so if you do k8sgpt
  • 00:21:06
    filters list so this basically gives you
  • 00:21:09
    a list of all the available filters that
  • 00:21:11
    you can use right so we have deployment
  • 00:21:13
    pod service these basically include all
  • 00:21:16
    the main Kubernetes resources right and we
  • 00:21:18
    can use these resource names as a filter
  • 00:21:20
    with the default scan right let me show
  • 00:21:22
    you how we can do it okay so to test it
  • 00:21:24
    out I'll quickly create a bunch of
  • 00:21:25
    Errors so I'll quickly apply some
  • 00:21:28
    malicious pods here one is broken pod
  • 00:21:31
    which is hungry pod that's a pretty
  • 00:21:33
    funny name you would definitely agree
  • 00:21:35
    with me I hope and then we'll also apply
  • 00:21:38
    the service and the Pod as well so now
  • 00:21:40
    if we do kubectl get pods you can see
  • 00:21:43
    that we have the hungry pod which is
  • 00:21:44
    again in the pending State because I
  • 00:21:46
    already changed the CPU limit to 1,000
  • 00:21:49
    again because we want K8sGPT to detect
  • 00:21:51
    the error right and then we have the
  • 00:21:52
    nginx pod which is already running
  • 00:21:55
    and we'll also have the service which is
  • 00:21:59
    already running right so again if you
  • 00:22:01
    remember the label is not correct right
  • 00:22:03
    in the pod it is nginx and in the
  • 00:22:05
    service it is ngnx without the i
  • 00:22:10
    right so K8sGPT will actually catch this
  • 00:22:13
    particular error right now if we do k8sgpt
  • 00:22:20
    analyze so you can see that it has
  • 00:22:22
    caught the error in the service and also
  • 00:22:25
    the Pod but let us say we want to filter
  • 00:22:28
    out only the pods right so what we can
  • 00:22:32
    do is we can first see the list of
  • 00:22:34
    filters which are
  • 00:22:36
    available right so let's filter out all
  • 00:22:38
    the pods and that's what we want K8sGPT to
  • 00:22:41
    analyze right so we'll do
  • 00:22:45
    k8sgpt
  • 00:22:47
    analyze
  • 00:22:48
    oops
  • 00:22:50
    do
  • 00:22:51
    analyze and then we'll use the filter
  • 00:22:54
    flag and then we'll write the name of
  • 00:22:56
    that particular filter cool so you
  • 00:22:58
    can see that it has filtered out all the
  • 00:23:00
    different errors that we were getting in
  • 00:23:01
    our cluster right and now we are only
  • 00:23:03
    just focused with our particular pod
  • 00:23:05
    right now if we want to provide again
  • 00:23:07
    any solution we can just write the
  • 00:23:09
    explain flag and we'll just use -b
  • 00:23:11
    ollama and it will only give us the solution
  • 00:23:13
    for all the pods let us say you want to
  • 00:23:16
    change the filter to service we can do
  • 00:23:18
    that we can just type service
  • 00:23:21
    here and it will filter out all the
  • 00:23:23
    services that you have running in your
  • 00:23:25
    cluster I only have one so that's why
  • 00:23:27
    it's giving me one but let's say you have
  • 00:23:29
    multiple Services running so it will
  • 00:23:31
    scan all these services and only output
  • 00:23:33
    the names of the service which may have
  • 00:23:35
    some error in them and that's what it's
  • 00:23:36
    doing right now so the filters make it
  • 00:23:38
    pretty interesting right now there is
  • 00:23:40
    one more interesting flag that I would
  • 00:23:42
    definitely want to mention is the
  • 00:23:43
    namespace flag which you can use to
  • 00:23:45
    restrict your scan to a particular
  • 00:23:46
    namespace let us say you have a
  • 00:23:48
    malicious pod running in a different
  • 00:23:50
    namespace right because by default k8sgpt
  • 00:23:52
    would analyze the default namespace
  • 00:23:54
    right so let us say we create a new
  • 00:23:56
    namespace we do k create
  • 00:23:59
    namespace and we
  • 00:24:02
    write demo we can you know check as well
  • 00:24:07
    so the demo name space has been created
  • 00:24:10
    right now we can create the pod in the
  • 00:24:13
    demo namespace and we write -n
  • 00:24:16
    demo so if we do k get po -n demo
  • 00:24:20
    so you can see the Pod has been created
  • 00:24:21
    in the demo namespace and this is again
  • 00:24:23
    the hungry pod right it has got some
  • 00:24:25
    error so if we just use the k8sgpt analyze
  • 00:24:28
    command with the filter right with the
  • 00:24:32
    pod filter so this will list down the
  • 00:24:34
    pod which is running in the default
  • 00:24:35
    namespace and the one which is also running
  • 00:24:37
    in the demo namespace but I only want
  • 00:24:39
    to restrict it to the demo namespace
  • 00:24:41
    right so what I can do is I can type the
  • 00:24:45
    namespace flag and I can only just give
  • 00:24:48
    it the name of the
  • 00:24:50
    namespace so it has filtered out and
  • 00:24:52
    given me only the pod which is running
  • 00:24:55
    in the demo namespace right let us say
  • 00:24:57
    we try to filter out service right so we
  • 00:25:00
    don't have any service which is running
  • 00:25:01
    in the demo namespace so it will give
  • 00:25:03
    us no problems detected because number
  • 00:25:06
    one we don't have any service which is
  • 00:25:07
    running in the demo namespace and
  • 00:25:09
    number two even if there was any service
  • 00:25:11
    which was running if the service does
  • 00:25:13
    not have any errors it will not show the
  • 00:25:15
    errors here so if K8sGPT is not able to
  • 00:25:17
    find any errors in your cluster it will
  • 00:25:19
    give you this particular result no
  • 00:25:21
    problem detected that means your cluster
  • 00:25:23
    is fully secure and there are no errors
  • 00:25:25
    for you to solve cool something you can
  • 00:25:26
    also try is you can use multiple filters
  • 00:25:29
    so let us say if I quickly see the
  • 00:25:31
    filters list so let us say I want to use
  • 00:25:33
    the Pod service and I also want to use
  • 00:25:35
    the node filter right in one single
  • 00:25:36
    command so you don't have to write it
  • 00:25:38
    separately you can just separate them by
  • 00:25:40
    a comma and you can just write filter
  • 00:25:42
    and you can also write node so it will
  • 00:25:45
    only give you the pods service or the
  • 00:25:47
    node which has some error right now
  • 00:25:49
    we can see that the node doesn't have
  • 00:25:51
    any errors so K8sGPT won't show us but we
  • 00:25:54
    can see that we have errors in pod and
  • 00:25:56
    service and that's what K8sGPT has given
  • 00:25:58
    us right and we can also make it
  • 00:26:01
    namespace scoped so we can just write
  • 00:26:02
    default namespace it will not give us
  • 00:26:04
    the pod which is in the demo namespace
  • 00:26:06
    right now there are a bunch of other
  • 00:26:07
    interesting flags that you can try out
  • 00:26:09
    with the analyze command the namespace
  • 00:26:11
    flag we already looked into there is
  • 00:26:12
    another flag which is the output you can
  • 00:26:14
    change the format of the output you want
  • 00:26:16
    right if we can just do
  • 00:26:18
    k8sgpt analyze --
  • 00:26:21
    explain and we use ollama and we'll write
  • 00:26:25
    --output json so it will give us the
  • 00:26:28
    result in a JSON format like this right
  • 00:26:31
    so this particular output is pretty
  • 00:26:32
    useful if you're looking to integrate
  • 00:26:34
    K8sGPT with any other tool or if you're
  • 00:26:36
    looking to automate any task by
  • 00:26:38
    integrating K8sGPT right so the JSON format is
  • 00:26:41
    something which is followed and parsed
  • 00:26:44
    in a lot of different tools out there
  • 00:26:45
    right so this particular output format
  • 00:26:47
    can definitely help you in that case
  • 00:26:49
    another one that I would definitely want
  • 00:26:51
    to mention is anonymize now as I
  • 00:26:54
    mentioned previously when we are using
  • 00:26:56
    K8sGPT the user will send the command to
  • 00:26:59
    the K8sGPT CLI and the K8sGPT CLI will send
  • 00:27:02
    all the information of that particular
  • 00:27:03
    command and your system to the AI
  • 00:27:05
    provider and that's how AI provider will
  • 00:27:07
    then process it return it back to the
  • 00:27:09
    CLI and then we'll get the result and it
  • 00:27:11
    will return back to the user now let us
  • 00:27:13
    say you are concerned with the security
  • 00:27:15
    of your system right you may not want to
  • 00:27:17
    send potentially sensitive data to open
  • 00:27:19
    AI or any other AI back end right I mean
  • 00:27:22
    you don't want to send that you don't
  • 00:27:23
    want to do it so there is a flag which
  • 00:27:26
    you can use which is called anonymize
  • 00:27:28
    so we can just use this flag with the
  • 00:27:30
    analyze command and I'll also put the
  • 00:27:33
    Json output so I can explain it in a
  • 00:27:35
    better way to you now what actually
  • 00:27:36
    happens in the back end when you use the
  • 00:27:38
    anonymize flag now when we send a request
  • 00:27:40
    to K8sGPT it will retrieve all the
  • 00:27:42
    information right and additionally it
  • 00:27:44
    will mask out or hide all the sensitive
  • 00:27:47
    information that could be names of Kubernetes
  • 00:27:49
    resources labels or that could be Kubernetes
  • 00:27:52
    secrets as well right so K8sGPT will mask
  • 00:27:54
    all this information before sending it
  • 00:27:56
    to the AI backend now the AI backend
  • 00:27:58
    will normally process this information
  • 00:28:00
    it will return it to the K8sGPT CLI it
  • 00:28:03
    will again unmask all the hidden values
  • 00:28:05
    and it will replace it with the original
  • 00:28:07
    names of the Kubernetes resources and
  • 00:28:08
    that's what the user sees in front of us
  • 00:28:10
    so at the backend K8sGPT is actually
  • 00:28:13
    hiding all this kind of sensitive data
  • 00:28:15
    right before sending it to the AI and
  • 00:28:17
    you don't have to worry about any
  • 00:28:18
    sensitive data getting compromised in
  • 00:28:20
    your cluster right so if you just see
  • 00:28:22
    the output here with the anonymized
  • 00:28:25
    flag here you can see that because we
  • 00:28:27
    had some kind of labels with our
  • 00:28:29
    service right so it has created a
  • 00:28:32
    separate section which is called as
  • 00:28:33
    sensitive and this is the unmasked
  • 00:28:35
    version and this is the mask version so
  • 00:28:38
    the mask version of this particular
  • 00:28:39
    label or the hidden version is what K8sGPT
  • 00:28:42
    is sending to the AI provider right you
  • 00:28:44
    can scroll down and see more such
  • 00:28:46
    examples right so we have the kube API
  • 00:28:48
    server as well this is also masked so
  • 00:28:51
    this is the value that is being sent to
  • 00:28:53
    the AI backend by K8sGPT and for example
  • 00:28:55
    if we just use the anonymize flag and
  • 00:28:57
    we don't use the output flag you can see
  • 00:29:00
    that we are able to see all the original
  • 00:29:01
    names right and that's what K8sGPT actually
  • 00:29:03
    does in between it replaces all the masked
  • 00:29:06
    values with the actual names of the
  • 00:29:07
    Kubernetes resources cool so I think we
  • 00:29:10
    have covered a lot of important flags
  • 00:29:12
    and some additional functionalities you
  • 00:29:14
    can use with the analyze command if you
  • 00:29:15
    want to check out more you can just type
  • 00:29:17
    k8sgpt analyze --help you can check out more
  • 00:29:20
    interesting options that you can use you
  • 00:29:21
    can also change the language of the
  • 00:29:23
    output right so K8sGPT supports a bunch of
  • 00:29:25
    languages as well this is pretty
  • 00:29:26
    interesting but I hope that by now you
  • 00:29:28
    would definitely have a lot of clarity
  • 00:29:30
    on how K8sGPT actually works right
  • 00:29:33
    and what are the different flags and
  • 00:29:34
    what are the different
  • 00:29:36
    options you can use with the
  • 00:29:37
    analyze command and how you can
  • 00:29:39
    customize your scan to fit your
  • 00:29:40
    particular use case okay so let us talk
  • 00:29:43
    about Integrations right and how you can
  • 00:29:45
    integrate K8sGPT with other tools in the
  • 00:29:48
    ecosystem and I think when we are
  • 00:29:50
    talking about the cloud native ecosystem
  • 00:29:51
    right the main value lies in how well a
  • 00:29:55
    particular tool can integrate with the
  • 00:29:57
    other tools that are involved in the
  • 00:29:58
    CNCF landscape and K8sGPT actually
  • 00:30:00
    provides us a very simple way to
  • 00:30:02
    integrate with a bunch of tools so let
  • 00:30:04
    us take a look how we can work with
  • 00:30:06
    Integrations in K CPT so for that we
  • 00:30:08
    have a pretty simple command called as K
  • 00:30:10
    CPT Integrations and here are a bunch of
  • 00:30:13
    options that we can use we can activate
  • 00:30:15
    deactivate and list the built-in
  • 00:30:16
    Integrations so let us first list the
  • 00:30:19
    available Integrations right now so we
  • 00:30:22
    can do kgpt integration list and these
  • 00:30:24
    are all the available Integrations at
  • 00:30:27
    the time of recording this particular
  • 00:30:28
    video and in K8sGPT version 0.3.39 right
  • 00:30:34
    so we have Trivy Prometheus AWS Keda and
  • 00:30:38
    we have Kyverno which is the very latest
  • 00:30:40
    one so how you can install these
  • 00:30:42
    Integrations and how you can use them
  • 00:30:45
    you can find it in this particular
  • 00:30:47
    documentation right here link is in the
  • 00:30:49
    description but in this particular video
  • 00:30:50
    we are going to see how we can use the
  • 00:30:52
    Trivy plus the Kyverno integration as
  • 00:30:56
    well okay so let us start with Trivy
  • 00:30:58
    Trivy is basically a vulnerability
  • 00:31:00
    scanning open source tool by Aqua
  • 00:31:02
    Security and it helps us find any
  • 00:31:02
    vulnerabilities misconfigurations
  • 00:31:07
    SBOMs in containers and all that stuff
  • 00:31:07
    right so it can basically scan container
  • 00:31:08
    images file system git repositories
  • 00:31:11
    virtual machine images Kubernetes any
  • 00:31:13
    resources on AWS and all that stuff and
  • 00:31:16
    it will list down all the potential
  • 00:31:18
    vulnerabilities of that particular
  • 00:31:19
    resource right it's a pretty interesting
  • 00:31:21
    tool and if this is your first time make
  • 00:31:23
    sure to check it out I'll leave the link
  • 00:31:24
    in the description box so in order to
  • 00:31:26
    use Trivy we just need to
  • 00:31:27
    activate that integration so we can just
  • 00:31:29
    use k8sgpt integration activate trivy and
  • 00:31:33
    what this will do is it will install the
  • 00:31:34
    trivy operator itself onto our
  • 00:31:36
    kubernetes cluster so as you can see
  • 00:31:38
    most of the resources for me were
  • 00:31:40
    already present so it has skipped
  • 00:31:41
    everything but essentially at the end it
  • 00:31:44
    says that the trivy operator k8sgpt is
  • 00:31:46
    installed we can also actually check
  • 00:31:48
    this using Helm so if we do Helm list
  • 00:31:51
    you can see that the trivy operator
  • 00:31:53
    k8sgpt is deployed right now cool now if
  • 00:31:56
    you remember in the previous section we
  • 00:31:58
    discussed about filters right and
  • 00:32:00
    filters are a way for you to customize
  • 00:32:02
    your scan based on certain resources now
  • 00:32:04
    any particular integration you add with
  • 00:32:06
    kgpt right it will add more resources to
  • 00:32:09
    the list of filters which we can use
  • 00:32:11
    with the basic Command right now if you
  • 00:32:14
    just confirm whether the integration has
  • 00:32:15
    been added or not we can just write kgpt
  • 00:32:18
    integration list and you can see that
  • 00:32:20
    the trivy integration is active right
  • 00:32:21
    now cool now if you just write kgpt
  • 00:32:25
    filters list you can see that two more
  • 00:32:28
    resources have been added here number
  • 00:32:30
    one is config audit report and the
  • 00:32:32
    number two is vulnerability report these
  • 00:32:35
    were the part of the trivy integration
  • 00:32:37
    right and now we can use these
  • 00:32:39
    particular resources as a filter with
  • 00:32:41
    the kgpt analyze command cool so let me
  • 00:32:44
    show you how it works let's see if we
  • 00:32:46
    have already a bunch of resources
  • 00:32:48
    running yep we already do okay first
  • 00:32:50
    let's copy the name from the filters
  • 00:32:54
    list let's say we want to use the
  • 00:32:56
    vulnerability report filter cool so we
  • 00:32:59
    can write
  • 00:33:00
    k8sgpt
  • 00:33:04
    analyze and we can just write filter
  • 00:33:08
    we'll paste the name of the
  • 00:33:10
    vulnerability report
  • 00:33:11
    filter and we'll use it so now if you
  • 00:33:14
    see we used k8sgpt analyze which gives us
  • 00:33:18
    all the errors in our particular cluster
  • 00:33:20
    right but we used the filter as
  • 00:33:22
    vulnerability report this is pretty
  • 00:33:24
    interesting right because at first we
  • 00:33:26
    are able to use K8sGPT to analyze all of
  • 00:33:28
    our errors in the cluster and then we
  • 00:33:30
    use trivy to get the vulnerability
  • 00:33:32
    report of all the errors right now if
  • 00:33:34
    you have used trivy before you would
  • 00:33:36
    know that it categorizes all the
  • 00:33:38
    vulnerabilities into three categories
  • 00:33:40
    which is normal medium and critical so
  • 00:33:42
    by default K8sGPT will only display the
  • 00:33:44
    critical vulnerabilities found and
  • 00:33:46
    another thing which is interesting is
  • 00:33:47
    you can head over to one of these links
  • 00:33:49
    and you can learn more about this
  • 00:33:52
    particular vulnerability as well so I
  • 00:33:54
    think this is pretty cool right and
  • 00:33:56
    again we can use the explain flag so
We'll just wait a couple of seconds for it to do its thing. Okay, this definitely took some time, but it has now provided a lot more information about that particular vulnerability: what exactly it is about, its root cause, the potential risks, and the potential solutions we can use to mitigate it. Similarly, you can also use the ConfigAuditReport filter with your `k8sgpt analyze` command; that's how you filter the resources accordingly. That's pretty interesting.
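For example, a sketch combining the two ideas:

```bash
# Explain critical vulnerabilities using the configured AI backend
k8sgpt analyze --filter VulnerabilityReport --explain

# The same pattern works for Trivy's configuration audits
k8sgpt analyze --filter ConfigAuditReport --explain
```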
Okay, so let us see what the Kyverno integration looks like, because that is the latest integration, shipped in one of the recent 0.3.x releases. You can also find the pull request and check out how it was made, because this integration is at a very early stage. Now, if you remember, when we used the Trivy integration, it automatically installed the Trivy operator onto our cluster. But for some advanced integrations, for example Prometheus, you would need to have the Prometheus operator already configured and installed on your cluster before you can activate the integration and use it with K8sGPT. The same goes for Kyverno: before we can use the Kyverno integration, we need to install and set up the Kyverno operator in our cluster.
By the way, if you're not familiar with Kyverno, it is a policy engine for Kubernetes that you can use to validate your resources. I'll leave a link in the description box if this is your first time hearing about it, and you can check out the documentation as well. But let us install the Kyverno operator onto our cluster first. For that, we'll run three commands: first we add the Kyverno Helm repo, then we do a repo update, and then we install the Kyverno operator into the kyverno namespace, creating that namespace at the same time.
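A sketch of those three commands, based on the Kyverno installation docs:

```bash
# Add the Kyverno Helm repository and refresh the index
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update

# Install Kyverno into its own namespace, creating the namespace as we go
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
```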
This will do a bunch of things. I already have the Kyverno chart, so it detected that and deployed Kyverno into our cluster. If we run `kubectl get ns`, you can see a new namespace has been created, and if we run `kubectl get all -n kyverno`, you can see the bunch of resources Kyverno has created for us: deployments, pods, and services, all running.
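To verify, something like:

```bash
# Confirm the kyverno namespace exists and inspect what was deployed
kubectl get ns
kubectl get all -n kyverno
```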
We don't need to worry about them right now; you can check the description for more resources. The essential thing is that the Kyverno operator is installed, and now we can integrate it with K8sGPT. We can run `k8sgpt integration list` and then activate the Kyverno integration with `k8sgpt integration activate kyverno`; you can see it has added Kyverno to the active list. And if we now run `k8sgpt filters list`, you can see two more resources we can use as filters with the analyze command: one is ClusterPolicyReport and the other is PolicyReport.
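As a sketch:

```bash
# Activate the Kyverno integration (the Kyverno operator must already be installed)
k8sgpt integration activate kyverno

# ClusterPolicyReport and PolicyReport should now show up as filters
k8sgpt filters list
```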
Now, before we move on, we also need to create a validation policy against which Kyverno will check for errors. I've already gone ahead and created one from the documentation itself: this particular policy checks whether a pod has the label `team` or not, and if the pod doesn't have that label, Kyverno will not let the pod be applied to the cluster. So let us just apply this policy. This is, by the way, taken directly from the documentation; you can also create your own policy if you wish.
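A minimal sketch of such a policy, based on the require-labels example in the Kyverno documentation (the policy and rule names here are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce   # block non-compliant pods instead of just auditing
  rules:
    - name: check-team
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "label 'team' is required"
        pattern:
          metadata:
            labels:
              team: "?*"             # any non-empty value for the 'team' label
```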
So the policy has been applied, and we can quickly test it. Let's say we run an nginx pod and don't give it the `team` label: you can see that we receive an error from the server saying that the label `team` is required. And if we add a pod with the label `team` set to, say, `app`, the pod gets created.
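For instance (pod names here are just examples):

```bash
# Rejected: no 'team' label, so the validation policy blocks admission
kubectl run nginx --image=nginx

# Accepted: the 'team' label satisfies the policy
kubectl run nginx2 --image=nginx --labels=team=app
```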
You can see the nginx2 pod is running now, so the validation policy is also working. We have the Kyverno operator installed and a validation policy set up, and there are already a bunch of resources running in the cluster. Now we want K8sGPT to scan our entire cluster and find only the resources that violate this particular policy.
What we can do is find the filter name first and then run `k8sgpt analyze`, and instead of the vulnerability filter we use the PolicyReport filter. You can see that we have a whole bunch of resources in our cluster that do not comply with this particular validation policy. That's how the Kyverno integration works.
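As a sketch:

```bash
# Surface only the resources that violate Kyverno policies
k8sgpt analyze --filter PolicyReport
```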
You can definitely use the `--explain` flag here to get a suggested solution as well, but we already know the fix: these resources are missing the `team` label. All in all, this is how the Kyverno integration looks right now. Something I definitely want to mention is that, as the Kyverno integration is at a very early stage at the time of recording this video, you may run into some errors; sometimes it may not work when you add additional filters. It's at a pretty early stage and the community is actively working on it, so if you encounter a new problem, feel free to open an issue on the K8sGPT GitHub repo; I'll link it in the description as well. I think that's the benefit of it being an open-source project: we can all help it improve. Cool, so I think we have covered a bunch of stuff; let's see how the K8sGPT operator works now.
  • 00:39:26
    so I think we have spent an extensive
  • 00:39:28
    amount of time exploring the kgpt CLI
  • 00:39:31
    right and I think this is a great way to
  • 00:39:33
    get started and perform quick scans in
  • 00:39:35
    your cluster now what if you want to
  • 00:39:37
    continuously scan your cluster right you
  • 00:39:39
    don't want to mess with the CLI you
  • 00:39:41
    don't want all these manual processes so
  • 00:39:43
    for that we have the kgpt operator which
  • 00:39:45
    you can install in your cuties cluster
  • 00:39:47
    itself and it will continuously scan for
  • 00:39:50
    any errors and any issues in your
  • 00:39:52
    cluster and you can continuously view
  • 00:39:54
    those results and resolve the issues as
  • 00:39:56
    well so this is basically the
  • 00:39:57
    architecture of what are the different
  • 00:39:59
    components that are included in the kgbt
  • 00:40:01
    operator so you can see we have the
  • 00:40:03
    operator we have the community's custom
  • 00:40:05
    resource definition and then it creates
  • 00:40:07
    a bunch of KCBD deployment which talks
  • 00:40:09
    to the API server and that's how it
  • 00:40:12
    communicates with the communities
  • 00:40:13
    operator optionally you can also set
  • 00:40:16
    Prometheus to scrape out the Matrix of
  • 00:40:19
    the particular operator and you can
  • 00:40:20
    monitor the performance of the operator
  • 00:40:22
    itself as well so installing the
Installing the operator is pretty simple: we just use Helm to add the repo, update it, and then install the operator itself into a new namespace. I already installed it, so it's already running in my cluster and I won't repeat this. If I run `kubectl get ns`, you can see the new namespace has already been created, and if I run `kubectl get all` in that namespace, you can see I have a deployment, a service, and a replica set up and running; these are the resources that make up the K8sGPT operator.
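A sketch of the install, following the k8sgpt-operator README (the release and namespace names below are the README's examples):

```bash
# Add the K8sGPT Helm charts and install the operator
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator \
  -n k8sgpt-operator-system --create-namespace

# Inspect what the operator deployed
kubectl get all -n k8sgpt-operator-system
```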
Cool. Now, according to the guide, the next step is to deploy the API key of the backend we are using as a secret. Right now I'm using Ollama, so we don't have a real secret to deploy, but the documentation says this step is required, so here's what I did: if I run `k8sgpt auth list`, you can see the active backend is now OpenAI, because I replaced Ollama with OpenAI using an API key. You can create your account on OpenAI to get one, or you can run `k8sgpt generate`, which will automatically take you to the page where you create your API key.
So now that we have our API key ready, we can create the Kubernetes secret. Before that, I'll just export it as an environment variable; you don't need to copy this key, because it won't work after the tutorial, and I'll be sure to delete it.
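A sketch of the export and secret creation, following the operator README (the secret name and key shown are the README's defaults):

```bash
# Export the OpenAI API key (replace with your own; never commit real keys)
export OPENAI_TOKEN=<your-api-key>

# Store it as a Kubernetes secret for the operator to consume
kubectl create secret generic k8sgpt-sample-secret \
  --from-literal=openai-api-key=$OPENAI_TOKEN \
  -n k8sgpt-operator-system
```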
Now let's create our secret. Oops, it said "failed to create secret: already exists". If I list secrets across all namespaces, I can see it's in a different namespace; yep, here it is. So let us just delete it; that's the simplest solution I can think of right now. We run `kubectl delete secret` with `-n` for that namespace and the secret name, plus the `--force` flag (you don't have to use that). Cool, the error should be gone now, so we can create the secret again. Okay, the secret has been created, and now we can deploy the K8sGPT custom resource.
Before we apply it, let us examine the file first: the kind is K8sGPT, the model we are using is gpt-3.5-turbo, the AI provider is OpenAI, and it references the name of the secret we just created. The only thing we need to change here is the version: make sure you replace it with the latest K8sGPT version you are on. Everything looks good, so we can just apply this resource, and you can see that the k8sgpt-sample resource has been created.
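For reference, the manifest looks roughly like the sample in the operator documentation (the version value below is a placeholder; set it to your installed K8sGPT release):

```yaml
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gpt-3.5-turbo
    backend: openai
    secret:
      name: k8sgpt-sample-secret   # the secret created above
      key: openai-api-key
  noCache: false
  version: v0.3.41                 # placeholder: use your K8sGPT version
```

You would apply it with `kubectl apply -f` as usual.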
Okay, so once you have applied that particular resource, you can go ahead and run `kubectl get results` in JSON format. Right now it's empty, because this step takes a little while; as mentioned in the documentation, once the initial scans have completed after several minutes, you will be presented with the Result custom resources. This is what the results look like: each one gives you the details along with the error itself.
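A sketch of that query (assuming the README's namespace):

```bash
# Results appear only after the initial scan completes,
# which can take several minutes
kubectl get results -n k8sgpt-operator-system -o json
```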
Now, an interesting way to view these particular results is via Prometheus. If we go to the references and head over to the options, there is a bunch of settings you can use to customize how the K8sGPT operator behaves. For example, as shown in the architecture diagram, we can use Prometheus to scrape the in-cluster metrics and then use Grafana to view them. This is something I'm not going to show in this particular video, because it has already been a long one and we have covered plenty of interesting stuff, but I won't leave you hanging: I'll leave a link in the description box to a resource on setting up Prometheus and enabling Grafana. And that's how you can use the K8sGPT operator to its full extent.
Cool, I think we have covered a lot of interesting stuff. A few things I definitely want to mention before we part ways: if you are new to K8sGPT, you can check out the K8sGPT CLI tutorial on Killercoda. It's a great way for you to just get started, get to know how the CLI actually works, and play around with the commands, so shout-out to the community for creating it.
Two additional things for you to explore. The first is how you can create custom analyzers. If you remember, when we run `k8sgpt filters list`, the Kubernetes resources you see in that particular list are actually called analyzers, and if you head over to the K8sGPT repo you can find the analyzer.go file with the list of all the analyzers built into K8sGPT. One interesting thing is that you can write your own custom analyzer that matches your specific use case; there's a guide that walks you through writing your own analyzer, and it covers a bunch of things, so you can definitely check it out.
Another interesting thing you can check out is how to integrate the K8sGPT operator with Slack. This is useful because, for example, if your team works in Slack and you want constant updates from K8sGPT, you can configure the operator to use Slack, and it will keep sending you updates about any issues that occur. It's pretty simple and straightforward: you just have to uncomment the relevant section and add your Slack webhook URL. This is definitely something for you to explore, and by the way, all the steps are also covered in the documentation, which you can check out and follow along.
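In the operator's K8sGPT resource, the relevant section looks roughly like this, per the docs (the webhook value is a placeholder):

```yaml
spec:
  sink:
    type: slack
    webhook: <your-slack-webhook-url>   # placeholder: your incoming-webhook URL
```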
Cool, so I think we have covered a lot in this particular tutorial. I hope you were able to follow along and learn how K8sGPT makes the entire process of debugging and troubleshooting a Kubernetes cluster really easy and really simple. I definitely believe this is just the start: we are already seeing the huge impact AI has been making in DevOps, cloud native, and the rest of the tech industry, and I believe K8sGPT is one such tool that really shows promise and really shows the impact AI can make in managing a complex Kubernetes environment.
A quick summary of what we covered in this particular video: we covered what K8sGPT is, then went through the installation, and saw how K8sGPT helps you analyze your cluster for any errors. We then took a look at how K8sGPT connects to an AI backend provider to give you potential solutions in simple language. We also saw how to integrate K8sGPT with different tools in the cloud-native ecosystem; I think that's a really powerful feature, and the community is actively working to include more integrations in the codebase. And lastly, we had a nice overview of how to use the K8sGPT operator, along with additional features like the Slack integration and creating your own custom analyzers.
Now, these are some features I want you to try out; let me know in the comment section down below what you all think about them. Lastly, all the resources I showed you in this particular video, plus some additional ones, will be linked in the description box down below, so you all can check them out. The K8sGPT community is growing, so give it a try and don't hesitate to get involved in its development as well. But yeah, if you really enjoyed the video, make sure to hit the subscribe button (I think you'll find it somewhere here) and hit that like button for the algorithm; it really helps us out. But yeah, I'll see you all in another video. Bye!
Tags
  • AI
  • Kubernetes
  • ChatGPT
  • K8s GPT
  • troubleshooting
  • integration
  • K8s CLI
  • cloud-native
  • Trivy
  • Kyverno