Complete Guide to K8sGPT | Simplify Kubernetes Troubleshooting with AI
Summary
TL;DR: The video covers the recent surge in AI adoption across industries, highlighting tools like ChatGPT, and discusses integrating AI with Kubernetes to boost productivity and simplify complex Kubernetes environments. The host introduces K8sGPT, a powerful tool designed to facilitate Kubernetes cluster management, debugging, and troubleshooting using AI. The video provides a comprehensive guide to installing K8sGPT, setting up the necessary configuration (including Helm where needed), and using the CLI to analyze errors in Kubernetes clusters. Viewers are walked through authenticating K8sGPT with AI backends, using commands to pinpoint and explain errors, and getting the most out of features like anonymization to protect sensitive data. The video also touches on K8sGPT's integrations with other tools in the cloud-native ecosystem, such as Trivy and Kyverno, and shows how continuous scanning via the K8sGPT operator can significantly improve cluster management. Emphasis is placed on K8sGPT's community-driven development and the potential it holds for the future of comprehensive Kubernetes management.
Takeaways
- 📈 AI adoption has increased by around 60% in the last year alone.
- 🤖 AI can significantly improve Kubernetes management.
- 🛠️ K8sGPT helps in debugging and troubleshooting Kubernetes.
- 💻 Helm is needed for installing K8sGPT's Helm charts.
- 🔐 The anonymize flag in K8sGPT protects sensitive data.
- 🔄 K8sGPT integrates with tools like Trivy and Kyverno.
- 🧩 Filters in K8sGPT enable targeted resource scanning.
- 🌐 K8sGPT can be authenticated with multiple AI backends.
- 🔍 The K8sGPT operator allows continuous scanning.
- 🚀 K8sGPT highlights AI's role in complex IT environments.
Timeline
- 00:00:00 - 00:05:00
The video begins by discussing the significant growth in AI adoption across industries, emphasizing how tools like ChatGPT enhance productivity and streamline processes. AI's impact raises the question of integrating it with Kubernetes, the widely used container orchestration platform. The speaker proposes using AI to manage Kubernetes environments more efficiently by automating tasks and diagnosing problems.
- 00:05:00 - 00:10:00
The video introduces K8sGPT, a tool that applies AI to simplify troubleshooting and debugging of Kubernetes clusters. The speaker demonstrates its capabilities, starting with the prerequisites for using the tool: a Kubernetes cluster and Helm installed. The video promises an exploration of how this tool can enhance Kubernetes management.
- 00:10:00 - 00:15:00
The speaker walks through installing K8sGPT, detailing two installation methods: the CLI and the operator. The setup requires configuring an AI backend for K8sGPT to handle the AI processing. The video covers the different AI backends available and highlights the flexibility of integrating with local tools like Ollama, which lets you run models without API fees.
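For reference, installing the K8sGPT CLI on macOS as described in this segment looks roughly like this. This is a sketch following the video; the Homebrew tap name matches the official install docs, but check them for other platforms:

```shell
# Install the K8sGPT CLI via Homebrew (macOS, also works on Linux)
brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt

# Verify the installation and list the available subcommands
k8sgpt version
k8sgpt --help
```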
- 00:15:00 - 00:20:00
The video dives into the initial setup of K8sGPT, focusing on authenticating the tool to an AI backend. The speaker demonstrates using the CLI to authenticate K8sGPT with Ollama, a free local AI model runner, explaining the different backend integration options and setting the stage for leveraging AI in Kubernetes error analysis.
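The authentication step described here can be sketched as follows. The `--backend`, `--model`, and `--baseurl` flags belong to K8sGPT's `auth add` command; the exact model name and port (11434 is Ollama's default) depend on your local setup:

```shell
# Register Ollama as the AI backend, using llama3 served locally
k8sgpt auth add --backend ollama --model llama3 --baseurl http://localhost:11434

# Confirm Ollama now appears as an active provider
k8sgpt auth list

# In a separate terminal, start the Ollama server so K8sGPT can reach it
ollama serve
```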
- 00:20:00 - 00:25:00
The demonstration progresses by deliberately creating an error in a Kubernetes pod. The speaker uses K8sGPT to identify and explain the error, compares the result with traditional kubectl commands, and shows how K8sGPT suggests solutions. This illustrates the tool's potential to simplify error diagnosis and provide actionable fixes.
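The failing-pod demo can be reproduced with a manifest like the one below. The pod and file names and the CPU figure follow the video; treat the manifest itself as an illustrative sketch:

```shell
# A pod requesting far more CPU (1000 cores) than any node can offer,
# so the scheduler leaves it Pending with "insufficient cpu"
cat <<'EOF' > broken-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hungry-pod
spec:
  containers:
    - name: hungry
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: "1000"
EOF

kubectl apply -f broken-pod.yaml    # pod will sit in Pending
k8sgpt analyze                      # lists the scheduling error
k8sgpt analyze --explain -b ollama  # asks the configured backend for an explanation and fixes
```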
- 00:25:00 - 00:30:00
A further example analyzes a service configuration error with K8sGPT, demonstrating the tool's ability to surface non-obvious errors by automating the analysis. The example showcases the tool's utility in simplifying Kubernetes management for users of varying expertise by providing direct solutions.
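A selector/label mismatch like the one analyzed in this segment can be reproduced roughly as follows. The `ngnx` typo is illustrative (the video shows a misspelling of "nginx"); any mismatch between a Service selector and a Pod's labels behaves the same way:

```shell
# Service selects app=nginx, but the pod's label is misspelled,
# so the service ends up with no endpoints
cat <<'EOF' > pod-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: pod-svc
spec:
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: ngnx   # typo: should be "nginx"
spec:
  containers:
    - name: nginx
      image: nginx
EOF

kubectl apply -f pod-svc.yaml
k8sgpt analyze --explain -b ollama                 # flags "Service has no endpoints"
kubectl label pod nginx-pod app=nginx --overwrite  # fix the label to match the selector
```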
- 00:30:00 - 00:35:00
The speaker explains how to refine error analysis in K8sGPT by applying filters to narrow the scope of a scan to specific resources. This helps in managing large Kubernetes environments by focusing analysis on particular namespaces or resource types, reducing noise from irrelevant error messages.
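The filtering described here maps onto the `filters` subcommand and the `--filter`/`--namespace` flags of `k8sgpt analyze`; a sketch:

```shell
k8sgpt filters list                                  # show available filters (Pod, Service, Deployment, ...)
k8sgpt analyze --filter Pod                          # scan only pods
k8sgpt analyze --filter Service --namespace default  # scope the scan to one namespace
```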
- 00:35:00 - 00:40:00
Integrations with other tools, such as Trivy for vulnerability scanning, expand K8sGPT's utility to analyzing configuration security. The speaker demonstrates activating integrations and using them to enhance cluster security, showing how K8sGPT fits into the broader ecosystem of Kubernetes management tools.
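Activating an integration goes through K8sGPT's `integration` subcommand. A sketch for Trivy follows; the extra filter name that appears after activation (such as `VulnerabilityReport`) is taken from the K8sGPT docs and may differ across versions:

```shell
k8sgpt integration list            # see the available integrations (e.g. trivy)
k8sgpt integration activate trivy  # enable the Trivy integration in the cluster
k8sgpt filters list                # new Trivy-backed filters now appear
k8sgpt analyze --filter VulnerabilityReport --explain -b ollama
```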
- 00:40:00 - 00:48:16
Finally, the video covers advanced usage of K8sGPT through the operator, which enables continuous monitoring and analysis. This automated approach contrasts with manual CLI usage and facilitates proactive Kubernetes management. The video also highlights the community and development aspects, encouraging contributions to the tool's growth.
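Installing the operator with Helm, as described above, can be sketched as follows. The chart repository URL and chart name follow the official K8sGPT operator docs; the release and namespace names are illustrative. After installation you configure it with a `K8sGPT` custom resource pointing at your AI backend (see the docs for the resource schema):

```shell
# Add the K8sGPT Helm repository and install the operator
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator \
  -n k8sgpt-operator-system --create-namespace
```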
FAQ
What is the focus of the video?
The focus is on integrating AI, specifically K8sGPT, with Kubernetes for better debugging and troubleshooting.
Who is the host of the video?
The host's name is Kunal.
Why is AI integration suggested for Kubernetes?
AI can improve management efficiency, automate tasks, and streamline decision-making in Kubernetes environments.
What tool is introduced for Kubernetes troubleshooting?
The tool introduced is K8sGPT.
What prerequisites are needed to install K8sGPT?
You need a Kubernetes cluster and Helm pre-installed.
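The two prerequisites can be verified quickly. The video uses minikube, but any distribution (kind, k3d, k3s) works:

```shell
# Spin up a local cluster (minikube is what the video uses)
minikube start

# Verify both prerequisites
kubectl get nodes  # cluster is reachable
helm version       # Helm is installed
```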
What purpose does the 'analyze' command serve in K8sGPT?
It identifies errors in a Kubernetes cluster and provides potential explanations and solutions.
How does K8sGPT ensure data privacy?
It provides an 'anonymize' flag that masks sensitive information before it is sent to the AI backend.
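Based on this answer, the flag is passed alongside the explain flag; a minimal sketch, assuming the Ollama backend configured earlier:

```shell
# Mask identifiers such as pod and service names before they reach the AI backend
k8sgpt analyze --explain --anonymize -b ollama
```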
What advanced features does K8sGPT offer?
K8sGPT offers integrations with tools like Trivy and Kyverno, and supports multiple AI backends.
What development stage is the K8sGPT Kyverno integration at?
The Kyverno integration is at an early stage, and users might encounter issues.
Where can you find more resources or documentation for K8s GPT?
Resources and documentation are available on the official K8sGPT website and its documentation site.
Transcript
- 00:00:00 All right, I think we all agree that in these recent years AI has taken the world by storm. Tools like ChatGPT and other products by OpenAI are being used across industries, and they are making our work easier, providing us insights, and making entire workflows a lot faster, boosting our productivity as well. According to the State of AI report by McKinsey, AI adoption has increased by around 60% in the last year alone — a huge number — and businesses are continuously using AI to improve their processes and stay ahead in terms of innovation. That point really got me thinking: if AI has this kind of significance and impact across all of these industries, why don't we use AI with Kubernetes? Kubernetes is one of the most widely adopted open source container orchestration platforms out there. If you've been following the channel for a while you definitely know what Kubernetes is, but if you don't, there's an entire Kubernetes 101 workshop that teaches you the basic to advanced concepts; check the description box down below for its link. The point is that Kubernetes is one of the largest open source projects out there, but we all agree that when Kubernetes environments grow larger, it really becomes a challenge to debug and troubleshoot issues, and overall the cluster becomes really difficult to manage. I think AI can make a big difference here: by adding AI to a normal Kubernetes workflow we can make the management process much faster and more efficient. AI can help us quickly diagnose and resolve any issues we have, automate routine tasks in our Kubernetes environment, and overall improve companies' decision-making. We have already seen a lot of products in the cloud-native ecosystem start adopting AI in some way or another, but there is one tool that was released very recently which I believe takes the capabilities of AI to the next level.
- 00:01:56 Hey everyone, my name is Kunal, and in this video we are going to learn about K8sGPT, a really powerful tool that brings the capabilities of AI to Kubernetes troubleshooting and debugging and makes it really easy for you to solve any problems in your Kubernetes cluster. If this sounds interesting to you, hit that like button and let's get started.
- 00:02:19 Now, before we move ahead and install K8sGPT onto our system, there are a few things you need to configure. Number one, you need a Kubernetes cluster up and running. I've already gone ahead and created one using minikube, so if I write `k get nodes` I have a single-node Kubernetes cluster running. You can use any of the Kubernetes distributions out there — kind, k3d, k3s, or any other of your choice — but you need one cluster. The next thing we'll need is Helm: we'll be installing a few Helm charts as we go through this tutorial, so you'll definitely need Helm pre-installed; you can head to the Helm website and install it from their documentation. Once we have these two configured and installed, let's install K8sGPT. Head over to the documentation at docs.k8sgpt.ai — I'll put the link in the description box down below. The important thing to understand here is that there are two ways to install K8sGPT and use it with your Kubernetes cluster. Number one is the CLI, which is the simplest and most basic way to use K8sGPT. The other is the K8sGPT operator, which we'll install into our Kubernetes cluster so that it constantly keeps monitoring for any changes; we have a completely separate section on how to install and use the operator, but let's start with the most basic, the CLI. To install the CLI you can check out the installation guide; on a Mac you can install it using brew. I've already gone ahead and done that, so if I write `k8sgpt version` you can see I have 0.3.0, the latest as I'm recording this video, and if you run `k8sgpt` you can see the whole set of commands available to us. We'll be exploring quite a few of these commands today, so it's going to be an exciting ride.
- 00:04:06 All right, so I hope you have already gone ahead and installed K8sGPT onto your system. The next thing we want to do is authenticate K8sGPT to an AI backend. If we head over to the reference section of the docs, under providers: in simple terms, a backend or provider is the way K8sGPT talks to the large language model — the AI model — that we'll be using, so we need to authenticate and choose one of the models we want to use. As of recording this video, K8sGPT supports 11 backends, including all the famous ones like OpenAI, Azure OpenAI, Amazon, and Google Gemini. You can use the OpenAI provider, for which you'd need an OpenAI API key, but let's say you are just testing, like me, and don't want to spend money on API keys — so we'll use Ollama to keep things simple. If you don't know about Ollama, it's basically a tool that helps you run large language models on your local system free of cost. If you look at the list of supported models, you can see all the models you can run using Ollama: we have the very latest Llama 3.1, which at the time of recording this video was updated 7 days ago, and all of these are open source large language models you can use. All in all, Ollama is a pretty great way to run large language models if you don't want to spend any money, and that's what we're going to use with K8sGPT right now. You can download Ollama for your operating system — there are options for Linux, Mac, and Windows. I've already gone ahead and done that, so if I write `ollama` you can see the Ollama CLI is already up and running.
- 00:05:53 Okay, once you have installed Ollama, let's see how we can authenticate K8sGPT with the AI backend. For that we have the `k8sgpt auth` command, which provides the necessary credentials to authenticate with the chosen backend. There are actually a bunch of options here: `k8sgpt auth list` gives a list of all the supported AI backends, and you can see we have a lot of them — the default one is OpenAI, and Ollama is also listed. We want Ollama, so let's authenticate K8sGPT with it using `k8sgpt auth add`. Let me quickly walk you through the command. The number one flag is the backend: we're selecting Ollama from the list of backends I just showed you. Next is the model we want to use: we'll use Llama 3, which is second to the latest (the latest right now is Llama 3.1 — we could use that as well, but let's just stick with Llama 3). And then there's the base URL, which is basically the API endpoint where your large language model is running. Whether you need it depends on the AI provider: with a provider such as Google Gemini or OpenAI you would not provide a base URL, just the authentication key, that is, the API key. But because we are running the model locally, we need to provide the base URL — it's on localhost — and that's how K8sGPT will be able to contact the Ollama API. Hit enter, and it says Ollama has been added to the AI backends. Now if you do `k8sgpt auth list` you'll see that Ollama is an active AI provider, so whenever we use K8sGPT to analyze our Kubernetes cluster, in the backend it's using the Llama 3 model, via Ollama as the backend provider. There's one last thing left to do, which is starting the Ollama server — that is also important. We just write `ollama serve`, and this starts the local Ollama server; you can see it's listening on localhost on the port we configured in K8sGPT. On this terminal you'll actually see the requests K8sGPT makes to the Ollama backend, and we'll definitely have a look at that as well.
- 00:08:22 Okay, so as I mentioned previously, the main purpose of K8sGPT is to use the power of AI to scan our Kubernetes cluster and help us troubleshoot and debug any errors. Before we move on to testing it, let's create one error explicitly. I have this example pod manifest here named hungry-pod — you'll definitely understand in a bit why we've named it that. It's a simple pod that uses the busybox container image, and we have given the CPU value as 1000. Looking at it right now, it doesn't seem like it would create an error, but let's apply it to our cluster and see what happens: `k apply -f` with the broken-pod file. The hungry pod has been created, and it's in the default namespace, so if I do `k get po` — by the way, `k` is short for `kubectl`; I've created an alias for it, which makes things a bit simpler — you can see the hungry pod is created but it's stuck in the Pending state. Now, if you're familiar with the basic concepts of Kubernetes and know your way around a cluster, your default approach would be: the pod is in the Pending state, so use the `kubectl describe` command on hungry-pod and check the pod events to see what the problem is. But let's switch up the game and see how far we can go with AI.
- 00:09:52 Instead of using the kubectl command, we use `k8sgpt analyze` and see what it gives us. `k8sgpt analyze` is a command that lists all the errors in your Kubernetes cluster. We're actually not concerned with the default resources here, because those were created by minikube; we're concerned with our pod in the default namespace, the hungry pod. The error is that 0/1 nodes are available: insufficient CPU. Seeing this, I can understand that there are no nodes with sufficient CPU available to accommodate, or schedule, that particular pod. Okay, cool, I understand the error — "but Kunal, this is something we can also do with kubectl describe, right?" Indeed, if we do `kubectl describe po hungry-pod`, you can see the events give the same error. So what's the difference?
- 00:10:46 Here's when things get interesting. Let's use K8sGPT analyze again — now that we know the error, we can use the explain flag to list solutions for it. Let me show you: we write `k8sgpt analyze --explain`, and `-b` is the flag we use to specify the backend, which right now is Ollama. Now is when it contacts the backend API, and you can see something happening in the Ollama server: K8sGPT is actually trying to contact the API, and multiple POST requests have been sent to the localhost `/api/generate` address — that's what K8sGPT is doing in the background. Let's see the results now. It has given us a bunch of helpful things. Number one, it has simplified the error message: it says insufficient CPU resources are available to schedule the pod, with one node having insufficient CPU and no preemption victims found — basically the error we got, but simplified and presented in plain English, which is much easier to understand. And the interesting part is that it has given us all the possible solutions we can use to fix this particular error: number one, check the CPU utilization of that node; scale down or delete less critical pods to free up resources; and a bunch of other things to try. This is pretty interesting, because let's say you're a beginner who doesn't know their way around a Kubernetes cluster that well — this is pretty helpful, because it gives you a head start, shows you the different solutions you can use to solve the error, and helps your learning as well.
- 00:12:28 Before solving it, let's see the CPU capacity of the node, which caused the error in the first place. If I do `kubectl describe nodes minikube` and scroll up a bit, we'll see a section named Capacity right here, and you can see the CPU capacity is 11 cores. If you remember, we gave our pod a CPU value of 1,000 cores, and that's what was causing the error. Now look at the solutions — see the fifth one: adjust the pod's resource requests and limits to better match the available node resources. The available node capacity is 11 CPUs — there's a cap there — so let's change the request to 5 so that our node can accommodate this particular pod. To apply these changes we have to delete our pod first, so let's do `k get pods`, then `kubectl delete po hungry-pod`, also using the force flag (optional, by the way). Now the pod has been deleted; let's reapply it with `kubectl apply -f` on the broken-pod file. The pod has been applied, and if we do `kubectl get pods` it's in the ContainerCreating state; run it again and yep, the pod is running successfully, so we've fixed that particular error. Now, if we do `k8sgpt analyze` without the explain flag — which, remember, lists all the errors in your cluster — it shouldn't show that error anymore. And if we analyze right now, see, there are only four errors, and the error it was showing for our pod is gone. Okay, so I hope you understood a very basic example of how this works.
- 00:14:18 Let us try one more example, which will give you a lot more clarity. We'll create two more resources: number one, a service named pod-svc, and I'm also creating a pod, the nginx pod, and we're attaching the pod to the service using labels. Let's apply them: `kubectl apply -f` — what was the file name? pod-svc — yep, let's apply them. You can see the service has been created and the pod has also been created. If we do `kubectl get service` I can see the service right here, and if we do `kubectl get pods` the nginx pod is currently creating its container. Okay, so the pod is created, but interestingly you'll notice there are no errors — just having a look at it, you cannot see any errors; everything seems to be running perfectly. But we all agree that looks can be a little deceiving, so let's use `k8sgpt analyze` to see if any hidden errors might have occurred. If we scroll down the list, you can see there is an error with the service: it says the service has no endpoints, expected label app=nginx. So there is some kind of error with the service, and it has something to do with the labels. I'm not going to work out how to solve it myself, because that's where K8sGPT comes into the picture. We use the same command again, `k8sgpt analyze --explain -b ollama`, and it has given us a bunch of solutions; let's scroll up. It says the service has no endpoints, expected label app=nginx, and the solutions are: check if the service was created successfully (it was, to my knowledge — we just checked); verify that the pods are running and have the correct label app=nginx; ensure that the service references the correct port in the spec; try deleting and recreating the service; and update the pod's labels to match the expected label. Okay, cool, so let's take the second one: verify the pods are running and have the correct label app=nginx. Let's check the labels being used by the service: I'll do `kubectl get service -o wide`, and here you can see the label selector is app=nginx. And what are we using with the pod? Here you can see there is a spelling mistake — it says "nginx" without the "i". Let's say you blindly copied a manifest from the internet and it applied with no errors to be seen — at the end of the day the service won't work, because the labels are incorrect, and that's something you couldn't see with your eyes, since everything appeared fine. But K8sGPT was able to recognize this error. Cool, so now that we know what the error is, we can quickly fix it with `kubectl label pod` — it has already given us the command — on the nginx pod, changing the app label to match the service, along with the overwrite flag, which overwrites the existing labels. The pod has been correctly labeled; we can verify with `kubectl describe pods` on the nginx pod, and if we scroll up and check the label, it is app=nginx — the one the service was also using. Cool, so now the labels match, and if we run `k8sgpt analyze` it shouldn't give us any error for our service or our pod — see, the error is gone.
- 00:18:10 So this is how you can use the K8sGPT CLI — a very basic use case where you can quickly check for errors, debug them, and troubleshoot them. The examples I showed you are pretty simple, you might say, if you have prior knowledge of Kubernetes, but imagine a complex scenario: there, I think K8sGPT would be a very handy tool to first locate the error and then give you potential solutions you can use to solve it. Before we move on, one thing I definitely want to point out is that when we use `k8sgpt analyze` with the explain flag, the accuracy of the solutions it provides depends on the AI model — the LLM — you're using. If you use a more advanced model with K8sGPT, for example OpenAI with GPT-4, the most advanced model out there, the results will be more accurate. So, all in all, the bottom line is: the more capable the large language model you use, the more accurate the results for your particular issues. This is something I wanted to mention here — right now we are testing locally and Ollama is working perfectly for us, to be honest — but this is the reason OpenAI was the first backend K8sGPT supported, and it's the recommended way to use K8sGPT.
- 00:19:26 One thing you might have noticed when we use the `k8sgpt analyze` command is that it scans the entire cluster and reports every error present. If we run `k8sgpt analyze`, you can see it gives us all the errors across all the different components we have. Because this is a demo scenario and we're just learning how to use this particular tool, that's completely fine, but imagine you have thousands of Kubernetes pods running. Then you would definitely need to focus the scan and filter for particular resources: for example, filter just the pods and run the scan on those, or run scans only on your services or only on your deployments. Or say you want to focus on a particular namespace. Right now this particular scan is not namespace-scoped: it's giving me results from the kube-system namespace, and it also gave us results from the default namespace when we ran it previously. So we want to filter by resource, and we want to filter by namespace as well.
- 00:20:29 Interestingly, K8sGPT supports this: it provides us with filters. Filters are a way of selecting which resources you wish to be part of the default analysis. If we run `k8sgpt filters`, the help text says the filters command allows you to manage the filters used to analyze Kubernetes resources, and that you can list the available filters. So this is a way for you to choose which Kubernetes resources are part of the scan, keep the results clean, and only focus on what is needed at that particular time. If you run `k8sgpt filters list`, it gives you a list of all the available filters: Deployment, Pod, Service, and so on, covering the main Kubernetes resources. We can use these resource names as filters with the default scan. Let me show you how.
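Put together, the filter workflow described above looks like this (filter names are case-sensitive resource kinds; run `k8sgpt filters list` yourself, since the exact set depends on your version and active integrations):

```shell
# List the analyzers (resource kinds) available as filters
k8sgpt filters list

# Scan only Pods instead of the entire cluster
k8sgpt analyze --filter=Pod

# Or scan only Services
k8sgpt analyze --filter=Service
```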
- 00:21:22 Okay, so to test it out I'll quickly create a bunch of errors, so I'll quickly apply some broken pods. One is the "hungry pod", which is a pretty funny name, you'll definitely agree with me, I hope, and then we'll also apply a Service and a Pod as well. Now if we do `kubectl get pods`, you can see the hungry pod is once again in the Pending state, because I already changed its CPU request to 1000 again, since we want K8sGPT to detect the error. Then we have the nginx pod, which is already running, and the Service, which is also already in place. And if you remember, the label doesn't match: in the Pod it is `nginx`, and in the Service it's spelled without the "i".
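The two broken objects can be reproduced with manifests roughly like these (the names, the CPU figure, and the misspelled selector are illustrative stand-ins for the files used in the video):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hungry-pod            # stays Pending: no node can satisfy this request
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "1000"           # deliberately absurd CPU request
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: ngnx                 # typo ("nginx" without the i): matches no pods
  ports:
  - port: 80
EOF
```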
- 00:22:10 So K8sGPT will actually catch this particular error. Now if we run `k8sgpt analyze`, you can see it has caught the error in the Service and also in the Pod. But let's say we want to filter down to only the pods. What we can do is first check the list of filters that are available, then filter on Pods, since that's what we want K8sGPT to analyze. So we'll run `k8sgpt analyze`, then use the `--filter` flag with the name of that particular filter. Cool, so you can see it has filtered out all the other errors we were getting in our cluster, and now we're only focused on our particular pod.
- 00:23:05 Now if we also want solutions, we can just add the explain flag with the Ollama backend, and it will give us the solutions for the pods only. Say you want to change the filter to Service: we can do that by just typing `Service` there instead, and it will report on the services you have running in your cluster. I only have one, which is why it's giving me one, but say you have multiple services running: it will scan all of them and only output the names of the services that have some error in them, and that's exactly what it's doing right now. So the filters make this pretty interesting.
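Filter plus explanation in one command, as described (assumes Ollama has already been configured as a backend via `k8sgpt auth`):

```shell
# Scan only Pods and ask the configured backend to explain each error
k8sgpt analyze --filter=Pod --explain --backend ollama

# Same idea for Services
k8sgpt analyze --filter=Service --explain --backend ollama
```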
- 00:23:38 Now, there is one more interesting flag I would definitely want to mention: the namespace flag, which you can use to restrict your scan to a particular namespace. Say you have a misbehaving pod running in a different namespace; so far everything we've scanned has been in the default namespace. So let's create a new namespace with `kubectl create namespace demo`, and we can check that the demo namespace has been created. Now we can create the pod in the demo namespace by adding `-n demo`, and if we do `kubectl get po -n demo`, you can see the pod has been created in the demo namespace; it's the hungry pod again, so it has got some error. Now, if we just use the `k8sgpt analyze` command with the Pod filter, it lists both the pod running in the default namespace and the one also running in the demo namespace. But I only want to restrict it to the demo namespace, so what I can do is add the `--namespace` flag and just give it the name of the namespace. And with that it has filtered things down and given me only the pod which is running in the demo namespace.
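The namespace-scoped run above, sketched end to end (`hungry-pod.yaml` stands in for whatever broken manifest you are testing with):

```shell
kubectl create namespace demo
kubectl apply -f hungry-pod.yaml -n demo
kubectl get po -n demo                      # pod is Pending in the demo namespace

# Without --namespace the Pod filter reports matching pods everywhere
k8sgpt analyze --filter=Pod

# Restrict the scan to the demo namespace only
k8sgpt analyze --filter=Pod --namespace=demo
```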
- 00:24:57 Now let's say we try to filter on Service. We don't have any service running in the demo namespace, so it gives us "no problems detected": number one, there is no service running in the demo namespace, and number two, even if a service were running there, it would only show up here if it had errors. So whenever K8sGPT is not able to find any errors in your cluster, it gives you this particular "no problems detected" result, meaning there are no errors for you to solve. Cool. Something you can also try is using multiple filters. If I quickly check the filters list, let's say I want to use the Pod, Service, and Node filters in one single command: you don't have to run them separately, you can just separate them with a comma in the `--filter` flag. It will then only report the pods, services, or nodes that have some error. Now we can see the node doesn't have any errors, so K8sGPT won't show it, but we do have errors in a pod and a service, and that's what K8sGPT gives us. And we can also make it namespace-scoped: if we add the default namespace, it will not show us the pod that's in the demo namespace.
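Multiple filters and a namespace scope combine into a single invocation:

```shell
# Comma-separate filters; only resources that actually have problems are shown
k8sgpt analyze --filter=Pod,Service,Node

# Scope the same multi-filter scan to one namespace
k8sgpt analyze --filter=Pod,Service,Node --namespace=default
```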
- 00:26:06 Now, there are a bunch of other interesting flags you can try out with the analyze command. The namespace flag we already looked into; another flag is the output flag, with which you can change the format of the output. If we run `k8sgpt analyze --explain` with the Ollama backend and add `--output json`, it gives us the result in JSON format, like this. This particular output is pretty useful if you're looking to integrate K8sGPT with some other tool, or if you're looking to automate a task by building on K8sGPT, because JSON is a format that a lot of different tools out there consume and parse, so this particular output format can definitely help you in that case.
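A sketch of the machine-readable output (the `jq` pipeline is illustrative and assumes the JSON shape current at recording time, where failing objects appear under a `results` array):

```shell
# Emit the analysis as JSON instead of the human-readable table
k8sgpt analyze --explain --backend ollama --output json

# Example of handing it to another tool: extract just the failing object names
k8sgpt analyze --output json | jq -r '.results[].name'
```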
- 00:26:49 Another one that I would definitely want to mention is anonymize. Now, as I mentioned previously, when we use K8sGPT, the user sends a command to the K8sGPT CLI, the CLI sends the relevant information about that particular command and your system to the AI provider, the AI provider processes it and returns it to the CLI, and the result comes back to the user. Now let's say you are concerned with the security of your system: you may not want to send potentially sensitive data to OpenAI or any other AI backend; you simply don't want to do it. So for that there is a flag you can use called `--anonymize`. We can just use this flag with the analyze command, and I'll also add the JSON output so I can explain it to you in a better way. Now, what actually happens behind the scenes when you use the anonymize flag? When we send a request, K8sGPT retrieves all the information and additionally masks out, or hides, anything sensitive: that could be names of Kubernetes resources, labels, or Kubernetes Secrets as well. K8sGPT masks all this information before sending it to the AI backend. The AI backend processes this information as normal and returns it to the K8sGPT CLI, which then unmasks all the hidden values and replaces them with the original names of the Kubernetes resources, and that's what the user sees in front of them. So behind the scenes K8sGPT is actually hiding this kind of sensitive data before sending it to the AI, and you don't have to worry about any sensitive data in your cluster getting compromised.
- 00:28:20 So if you just look at the output with the anonymize flag here, because we had some labels on our Service, it has created a separate section called "sensitive", with the unmasked version and the masked version side by side. The masked, or hidden, version of this particular label is what K8sGPT is sending to the AI provider. You can scroll down and see more such examples: we have the kube-apiserver entry as well, which is also masked, and that masked value is what K8sGPT sends to the AI backend. And if we just use the anonymize flag without the output flag, you can see we're still able to see all the original names, because that's what K8sGPT actually does in between: it replaces all the masked values with the actual names.
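The masking round trip described above, as commands:

```shell
# Mask resource names, labels, and secrets before anything leaves for the AI backend
k8sgpt analyze --explain --anonymize

# JSON output exposes the "sensitive" section, pairing each unmasked value
# with the masked token that was actually sent to the provider
k8sgpt analyze --explain --anonymize --output json
```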
- 00:29:07 Cool, so I think we have covered a lot of important flags and some additional functionality you can use with the analyze command. If you want to check out more, you can just type `k8sgpt analyze --help` and check out more interesting options. You can also change the language of the output: K8sGPT supports a bunch of languages as well, which is pretty interesting. But I hope that by now you definitely have a lot of clarity on how K8sGPT actually works, what the different flags and options are that you can use with the analyze command, and how you can customize your scan to fit your particular use case.
- 00:29:43 Okay, so let us talk about integrations, and how you can integrate K8sGPT with other tools in the ecosystem. I think when we are talking about the cloud-native ecosystem, a lot of the value lies in how well a particular tool can integrate with the other tools in the CNCF landscape, and K8sGPT actually provides us a very simple way to integrate with a bunch of tools. So let's take a look at how we can work with integrations in K8sGPT. For that we have a pretty simple command called `k8sgpt integrations`, and there are a few options we can use with it: we can activate, deactivate, and list the built-in integrations. So let us first list the integrations available right now. We can do `k8sgpt integrations list`, and these are all the available integrations at the time of recording this particular video, on K8sGPT version 0.3.39: we have Trivy, Prometheus, AWS, Keda, and Kyverno, which is the very latest one. How you can install these integrations and how you can use them you can find in this particular documentation right here, link in the description, but in this particular video we are going to see how we can use the Trivy and the Kyverno integrations as well.
- 00:30:54 Okay, so let us start with Trivy. Trivy is an open-source vulnerability scanner by Aqua Security, and it helps us find vulnerabilities, misconfigurations, SBOMs, and so on in containers and more. It can basically scan container images, file systems, Git repositories, virtual machine images, Kubernetes resources, resources on AWS, and it will list all the potential vulnerabilities of that particular resource. It's a pretty interesting tool, and if this is your first time hearing of it, make sure to check it out; I'll leave the link in the description box. So in order to use Trivy, we just need to activate that integration: we can run `k8sgpt integration activate trivy`, and what this will do is install the Trivy operator itself onto our Kubernetes cluster. As you can see, most of the resources for me were already present, so it has skipped everything, but essentially at the end it says that the trivy-operator-k8sgpt release is installed. We can also actually check this using Helm: if we do `helm list`, you can see the trivy-operator-k8sgpt release is deployed right now. Cool. Now, if you remember, in the previous section we discussed filters, which are a way for you to customize your scan around certain resources. Any particular integration you add to K8sGPT will add more resources to the list of filters we can use with the basic command. Now, if you want to confirm whether the integration has been added, just run `k8sgpt integration list`, and you can see that the Trivy integration is active right now. Cool. And if you run `k8sgpt filters list`, you can see that two more resources have been added here: number one is ConfigAuditReport and number two is VulnerabilityReport. These came as part of the Trivy integration, and now we can use these particular resources as filters with the `k8sgpt analyze` command.
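The Trivy steps as commands (the Helm release name shown is the one reported in the video; confirm with `helm list` on your own cluster):

```shell
# Activate the integration; this installs the Trivy operator via Helm
k8sgpt integration activate trivy

# Confirm the integration and the two analyzers it registers
k8sgpt integration list
k8sgpt filters list       # now includes VulnerabilityReport and ConfigAuditReport
helm list                 # shows the trivy-operator-k8sgpt release

# Use the new analyzer like any other filter
k8sgpt analyze --filter=VulnerabilityReport
```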
- 00:32:41 Cool, so let me show you how it works. Let's see if we already have a bunch of resources running; yep, we already do. Okay, first let's copy the name from the filters list; say we want to use the VulnerabilityReport filter. Cool, so we can run `k8sgpt analyze` with `--filter` and paste in the name of the VulnerabilityReport filter. So now if you see, we used `k8sgpt analyze`, which gives us all the errors in our particular cluster, but with the filter set to VulnerabilityReport. This is pretty interesting, because at first we were able to use K8sGPT to analyze all of our errors in the cluster, and now we're using Trivy to get the vulnerability report for them. Now, if you have used Trivy before, you would know that it categorizes vulnerabilities by severity; by default K8sGPT will only display the critical vulnerabilities found. And another thing which is interesting is that you can head over to one of these links and learn more about that particular vulnerability as well. So I think this is pretty cool, and again we can use the explain flag; we'll just wait a couple of seconds for it to do its thing. Okay, this definitely took some time, but now it has provided us with a lot more information about that particular vulnerability as well: you can see it has given us what exactly the vulnerability is all about, what the root cause is, what the potential risks are, and what potential solutions we can use to mitigate it. So this is pretty cool, and similarly you can also use the ConfigAuditReport filter with your `k8sgpt analyze` command, and that's how you can filter the resources accordingly. So that's pretty interesting.
- 00:34:31 Okay, so let us see what the Kyverno integration looks like, because that is the latest integration, which came with the 0.3.39 version. You can also find the pull request for it and check out how it was made, because this particular integration is at its very initial stages. Now, if you remember, when we used the Trivy integration it automatically installed the Trivy operator onto our cluster, but for some integrations, for example Prometheus, you would need to have the operator previously configured and installed on your cluster before you can activate the integration and use it with K8sGPT. That is the same case with Kyverno: before we use the Kyverno integration, we would need to install Kyverno in our cluster and set that up before we can use it with K8sGPT.
- 00:35:18 Cool, by the way, if you're not familiar with Kyverno, it is a policy engine that you can use with Kubernetes to validate your resources. I'll leave the link in the description box if this is your first time hearing about it, and you can check out the documentation as well. But let us install Kyverno onto our cluster first. For that we have these three commands that we'll run: number one, we'll add the Kyverno Helm repo; then we'll do a repo update; and then we'll install Kyverno into the kyverno namespace, also creating that new namespace as part of the install. So it will do a bunch of things; I already had the Kyverno chart, so it detected that and has deployed Kyverno in our cluster. If we do `kubectl get ns`, you can see a new namespace has been created, and if we do `kubectl get all -n kyverno`, you can see the bunch of resources that Kyverno has created for us: we have some deployments, and pods and services which are running. We don't need to worry about those right now; you can check the description for more resources. But the essential thing is that Kyverno is installed, and now we can integrate it with K8sGPT.
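The three install commands, sketched with Helm (repo URL and chart name as published by the Kyverno project; check the Kyverno docs for the current values):

```shell
# 1. Add the Kyverno chart repository
helm repo add kyverno https://kyverno.github.io/kyverno/

# 2. Refresh the local chart index
helm repo update

# 3. Install Kyverno into its own namespace, creating it if needed
helm install kyverno kyverno/kyverno -n kyverno --create-namespace

# Verify what got deployed
kubectl get ns
kubectl get all -n kyverno
```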
- 00:36:31 So we can just do `k8sgpt integrations list`, and then we can add the Kyverno integration: we do `k8sgpt integration activate kyverno`, and you can see that it has added Kyverno to the active list. And now if we do `k8sgpt filters list`, you can see we have two new resources that we can use as filters with the analyze command: one is ClusterPolicyReport and one is PolicyReport.
- 00:36:58 Now, before we move on, we would also need to create a validation policy against which Kyverno would check for any errors. So I've already gone ahead and created one from the documentation itself, and this particular policy will check whether a pod has the label `team` or not; if the pod doesn't have that particular label, Kyverno would not let the pod be applied to the cluster. Cool, so let us just apply this policy; this is, by the way, directly from the documentation, and you can also create your own policy if you wish to. So the policy has been applied and we can just quickly test it. Let us say we run an nginx pod and we don't give it the label `team`: you can see that we received an error from the server, and it says that the label `team` is required. And if we add a pod with the label `team` set to, let's say, `app`, the pod will get created. You can see the nginx2 pod is running now, so the validation policy is also working.
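The policy in question resembles the `require-labels` example from the Kyverno quick-start documentation (shown here as a sketch; `validationFailureAction: Enforce` is what makes the API server reject non-compliant pods outright):

```shell
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce    # reject non-compliant pods at admission
  rules:
  - name: check-team
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "label 'team' is required"
      pattern:
        metadata:
          labels:
            team: "?*"                # any non-empty value
EOF

kubectl run nginx --image nginx                       # rejected: label missing
kubectl run nginx2 --image nginx --labels team=app    # admitted
```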
- 00:37:57 So we have the Kyverno operator installed and we have set up a validation policy, and there are a bunch of resources already running in the cluster. Now we want K8sGPT to scan our entire cluster and find only the resources which violate this particular policy. So what we can do is run `k8sgpt analyze` (let me just find the name first), and instead of the VulnerabilityReport filter we can just use the PolicyReport filter, and you can see that we have a whole bunch of resources in our cluster which are not following this particular validation policy. So this is how the Kyverno integration works, and you can definitely use the explain flag here to find the solutions as well, though we already know the solution: we don't have the `team` labels on these particular resources. But all in all, this is what the Kyverno integration looks like.
- 00:38:51 Now, something I would definitely want to mention is that, as the Kyverno integration is at a very early stage at the time of recording this video, you may find some kind of errors; sometimes it may not work when you add additional filters to it. This is at a pretty initial stage and the community is actively working on it, so if you encounter any new problem, feel free to open an issue on the K8sGPT GitHub repo; I've linked it down in the description as well. And I think that's the benefit of it being an open-source project: we can all help it improve.
- 00:39:22 Cool, so I think we have covered a bunch of stuff; let's see how the K8sGPT operator works now. We have spent an extensive amount of time exploring the K8sGPT CLI, and I think that is a great way to get started and perform quick scans in your cluster. Now, what if you want to continuously scan your cluster, without messing with the CLI and all these manual processes? For that we have the K8sGPT operator, which you can install in your Kubernetes cluster itself; it will continuously scan for any errors and any issues in your cluster, and you can continuously view those results and resolve the issues as well. This is basically the architecture of the different components included in the K8sGPT operator: you can see we have the operator, we have the Kubernetes custom resource definition, and then it creates a K8sGPT deployment, which talks to the API server, and that's how it communicates with the Kubernetes operator. Optionally, you can also set up Prometheus to scrape the metrics of the operator, so you can monitor the performance of the operator itself as well.
- 00:40:22 So installing the operator is pretty simple: we are just using Helm to add the repo, update it, and then install the operator itself into a new namespace. I already installed it, so it's already running on my system and I'll not do this again. If I just do `kubectl get ns`, you can see that the new namespace is already created, and if I do `kubectl get all`, you can see I have a deployment, a service, and a ReplicaSet already up and running, and these are the resources that are part of the K8sGPT operator.
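The operator install, sketched (chart repo and release naming follow the K8sGPT operator docs; double-check the current chart URL there):

```shell
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator \
  -n k8sgpt-operator-system --create-namespace

# The deployment, service, and replica set seen in the video live here
kubectl get all -n k8sgpt-operator-system
```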
- 00:41:00 Cool. Now, according to the guide, the next step we would need to do is deploy the API key of the backend that we are using as a Secret. Right now I'm using Ollama, so we don't have an API key to deploy, but according to the documentation this step is required, so this is what I did: if I run `k8sgpt auth list`, you can see the active backend is now OpenAI; I replaced Ollama with OpenAI using an API key. You can create your account on OpenAI to get one, or you can also run `k8sgpt generate`, which will automatically lead you to that particular page where you create your API key. So now that we have our API key ready, we can create this particular Kubernetes Secret, and before that I'll just export it as an environment variable, so we can just write `export`. You don't need to copy this key, because it won't work after the tutorial; I'll be sure to delete it. And now let's create our Secret... it said "failed to create secret: already exists". Oops. If I do `kubectl get secrets`, okay, I can list secrets in all namespaces, so I think it's in a different namespace; yep, here it is. So let us maybe delete it; that's the simplest possible solution I can think of right now. We do `kubectl delete secret` with `-n` for that namespace and the name of the secret we want to delete, and we'll use the force flag (you don't have to use it). Cool, I think now the error should be gone.
- 00:42:44 So now we can create the Secret. Okay, the Secret has been created; now we can deploy the custom resource of K8sGPT. Before we apply it, let us examine the file first: it's of kind K8sGPT, the model we are using is gpt-3.5-turbo, the AI provider we are using is OpenAI, and it references the name of the Secret which we have just created. The only thing we need to change here is the version: make sure it matches the latest version of K8sGPT that you are on. Cool, and I think everything looks good, so now we can just apply this resource, and you can see that the resource `k8sgpt-sample` has been created. Okay, so once you have applied that particular resource, you can just go ahead and do `kubectl get results` in JSON format. Right now it's empty, because this particular step takes a little bit of time; it is also mentioned in the documentation that once the initial scans have been completed, after several minutes, you will be presented with the Result custom resources. So this is what the results look like: they will give you the details, and they will also give you the error as well.
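The Secret-plus-custom-resource step, sketched (field names follow the K8sGPT operator docs; the API key, Secret name, and version string are placeholders to adapt):

```shell
# Store the OpenAI key where the operator can read it
export OPENAI_TOKEN=<your-api-key>
kubectl create secret generic k8sgpt-sample-secret \
  --from-literal=openai-api-key=$OPENAI_TOKEN \
  -n k8sgpt-operator-system

# The K8sGPT custom resource that drives continuous scanning
kubectl apply -f - <<'EOF'
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gpt-3.5-turbo
    backend: openai
    secret:
      name: k8sgpt-sample-secret
      key: openai-api-key
  version: v0.3.39              # match the k8sgpt version you are running
EOF

# After the initial scans complete (several minutes), results appear as CRs
kubectl get results -n k8sgpt-operator-system -o json
```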
- 00:44:04 Now, an interesting way in which you can view these particular results is via Prometheus as well. If we go to the references and head over to the options here, these are a bunch of options that you can set to customize how you want to use the K8sGPT operator. For example, as it's mentioned in the diagram as well, we can use Prometheus to scrape the in-cluster metrics, and then we can use Grafana to view those particular metrics. Now, this is something I'm not going to show in this particular video, because I think it has already been a long one and we have covered a bunch of interesting stuff, but I'll not leave you hanging: I'll leave a link to a resource in the description box that you can check out if you want to know how to set up Prometheus and how to enable Grafana as well. And that's how you can use the K8sGPT operator to its full extent.
- 00:44:52a lot of interesting stuff a few things
- 00:44:54that I would definitely want to mention
- 00:44:56before we part ways if you are new to
- 00:44:58kgbt you can check out the kgpt c
- 00:45:00tutorial which is with killer Koda right
- 00:45:04so this is a great way for you to just
- 00:45:07get started get to know how the CLI is
- 00:45:09actually working just play around with
- 00:45:10the commands right and see how they are
- 00:45:12working right so shout out to the
- 00:45:14community for creating this two
- 00:45:15 Two additional things for you to explore. The first is how you can create custom analyzers. If you remember, when we run `k8sgpt filters list`, the Kubernetes resources you see in that particular list are actually called analyzers. And if you head over to the K8sGPT repo, you can find the analyzer.go file, which lists all the analyzers built into K8sGPT. One interesting thing is that you can write your own custom analyzer that matches your specific use case: this particular guide shows you how to write your own analyzer, and it covers a bunch of things, so you can definitely check it out.
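The custom-analyzer guide has you implement K8sGPT's analyzer contract as a small gRPC service. As a rough, self-contained sketch of the underlying idea (the types below are illustrative stand-ins, not K8sGPT's real schema package), an analyzer is essentially a Run method that inspects resources and returns a list of failures:

```go
package main

import "fmt"

// Result mirrors the shape of an analyzer finding: which object failed and why.
// These types are simplified stand-ins for K8sGPT's real analyzer schema.
type Result struct {
	Kind  string
	Name  string
	Error []string
}

// Analyzer is the core contract: examine resources, report failures.
type Analyzer interface {
	Run() ([]Result, error)
}

// ZeroReplicaAnalyzer is a hypothetical custom analyzer that flags
// Deployments scaled to zero replicas. It works on stubbed data here;
// a real analyzer would query the Kubernetes API.
type ZeroReplicaAnalyzer struct {
	replicas map[string]int // deployment name -> desired replicas
}

func (a ZeroReplicaAnalyzer) Run() ([]Result, error) {
	var out []Result
	for name, n := range a.replicas {
		if n == 0 {
			out = append(out, Result{
				Kind:  "Deployment",
				Name:  name,
				Error: []string{"deployment is scaled to zero replicas"},
			})
		}
	}
	return out, nil
}

func main() {
	a := ZeroReplicaAnalyzer{replicas: map[string]int{"api": 0, "web": 3}}
	results, _ := a.Run()
	for _, r := range results {
		fmt.Printf("%s/%s: %s\n", r.Kind, r.Name, r.Error[0])
	}
}
```

In the real guide, this logic is exposed over gRPC so the k8sgpt CLI can invoke your analyzer alongside the built-in ones.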
- 00:45:57 The second thing you can check out is how to integrate the K8sGPT operator with Slack. This is interesting because, for example, if your team works on Slack and you want constant updates from K8sGPT, you can configure the operator to use Slack and it will send you updates about any issues that occur. It is pretty simple and straightforward: you just have to uncomment this particular section and add your Slack webhook URL. And by the way, all the steps are also covered in the documentation, which you can check out and follow along.
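Concretely, the "uncomment this particular section" step refers to the sink block of the K8sGPT custom resource. The fragment below is a sketch of what the uncommented configuration might look like; the field names follow the operator's custom resource but should be verified against the documentation for your operator version.

```yaml
# Hypothetical K8sGPT custom resource with the Slack sink enabled
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    backend: openai
    model: gpt-3.5-turbo
    secret:
      name: k8sgpt-sample-secret
      key: openai-api-key
  sink:
    type: slack
    webhook: https://hooks.slack.com/services/...   # your incoming-webhook URL
```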
- 00:46:31 Cool, so I think we have covered a lot in this particular tutorial. I hope you were able to follow along and learn how K8sGPT makes the entire process of debugging and troubleshooting a Kubernetes cluster really easy and really simple. I definitely believe this is just the start: we are already seeing the huge impact AI has been making in DevOps, cloud native, and all the other tech industries, and I believe K8sGPT is one such tool that really shows promise and shows the impact AI can make in managing a complex Kubernetes environment. A quick summary of everything we covered in this particular video: we covered what K8sGPT is, and then went on to the installation; we saw how K8sGPT helps you analyze your cluster for any errors; we then took a look at how K8sGPT connects to an AI backend provider to give you potential solutions in simple language; we also saw how to integrate K8sGPT with different tools in the cloud-native ecosystem, which I think is a really powerful feature, and the community is actively working to include more integrations in the codebase; and lastly, we had a nice overview of how to use the K8sGPT operator and some of its additional features, like the Slack integration and creating your own custom analyzers. These are some features I want you to try out; let me know in the comment section down below what you all think.
- 00:47:48 Lastly, all the resources I showed you in this particular video, along with some additional ones, will be linked in the description box down below so you can check them out. The K8sGPT community is growing, so give it a try and do not hesitate to get involved in its development as well. But yeah, if you really enjoyed the video, make sure to hit the subscribe button (I think you will find it somewhere here) and hit that like button for the algorithm; it really helps us out. I will see you all in another video. Bye!
- AI
- Kubernetes
- ChatGPT
- K8s GPT
- troubleshooting
- integration
- K8s CLI
- cloud-native
- Trivy
- Kyverno