├── CNAME
├── docs
│   ├── CNAME
│   ├── imgs
│   │   ├── bedrock.png
│   │   ├── grafana.png
│   │   ├── operator.png
│   │   ├── icon-logo-01.png
│   │   ├── generate-token.png
│   │   └── icon-logo-01.svg
│   ├── tutorials
│   │   ├── index.md
│   │   ├── playground.md
│   │   ├── observability.md
│   │   ├── slack-integration.md
│   │   ├── content-collection
│   │   │   └── content-collection.md
│   │   ├── custom-analyzers.md
│   │   └── custom-rest-backend.md
│   ├── build
│   │   ├── Dockerfile
│   │   └── requirements.txt
│   ├── reference
│   │   ├── guidelines
│   │   │   ├── community.md
│   │   │   ├── guidelines.md
│   │   │   └── privacy.md
│   │   ├── cli
│   │   │   ├── serve-mode.md
│   │   │   ├── debugging.md
│   │   │   ├── index.md
│   │   │   └── filters.md
│   │   ├── operator
│   │   │   ├── advanced-installation.md
│   │   │   └── overview.md
│   │   └── providers
│   │       └── backend.md
│   ├── index.md
│   ├── getting-started
│   │   ├── Community.md
│   │   ├── in-cluster-operator.md
│   │   ├── installation.md
│   │   └── getting-started.md
│   └── explanation
│       ├── integrations.md
│       └── caching.md
├── .codespellignore
├── .github
│   └── workflows
│       ├── typos.yml
│       └── mkdocs.yml
├── Makefile
├── README.md
├── mkdocs.yml
└── LICENSE
/CNAME:
--------------------------------------------------------------------------------
1 | docs.k8sgpt.ai
2 |
--------------------------------------------------------------------------------
/docs/CNAME:
--------------------------------------------------------------------------------
1 | docs.k8sgpt.ai
2 |
--------------------------------------------------------------------------------
/docs/imgs/bedrock.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/k8sgpt-ai/docs/HEAD/docs/imgs/bedrock.png
--------------------------------------------------------------------------------
/docs/imgs/grafana.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/k8sgpt-ai/docs/HEAD/docs/imgs/grafana.png
--------------------------------------------------------------------------------
/docs/imgs/operator.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/k8sgpt-ai/docs/HEAD/docs/imgs/operator.png
--------------------------------------------------------------------------------
/docs/imgs/icon-logo-01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/k8sgpt-ai/docs/HEAD/docs/imgs/icon-logo-01.png
--------------------------------------------------------------------------------
/docs/imgs/generate-token.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/k8sgpt-ai/docs/HEAD/docs/imgs/generate-token.png
--------------------------------------------------------------------------------
/.codespellignore:
--------------------------------------------------------------------------------
1 | k8sgpt
2 |
3 | k8sgptdocs
4 |
5 | K8sGPT
6 |
7 | helm
8 |
9 | Kubernetes
10 |
11 | kubectl
--------------------------------------------------------------------------------
/docs/tutorials/index.md:
--------------------------------------------------------------------------------
1 | # Tutorials
2 |
3 | This section provides:
4 |
5 | * end-to-end tutorials on specific use cases
6 | * a collection of user- and contributor-created content
--------------------------------------------------------------------------------
/.github/workflows/typos.yml:
--------------------------------------------------------------------------------
1 | name: typos
2 |
3 | on:
4 |   pull_request:
5 |     paths-ignore:
6 |       - '*.md'
7 |
8 | jobs:
9 |   build:
10 |     name: Detect typos
11 |     runs-on: ubuntu-latest
12 |
13 |     steps:
14 |       - uses: actions/checkout@v3
15 |       - name: Run typo checks
16 |         run: make typos
--------------------------------------------------------------------------------
/docs/build/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM squidfunk/mkdocs-material:latest
2 |
3 | ## If you want to see exactly the same version as is published to GitHub pages
4 | ## use a private image for insiders, which requires authentication.
5 |
6 | # docker login -u ${GITHUB_USERNAME} -p ${GITHUB_TOKEN} ghcr.io
7 | # FROM ghcr.io/squidfunk/mkdocs-material-insiders
8 |
9 | COPY requirements.txt .
10 | RUN pip install -r requirements.txt
11 |
--------------------------------------------------------------------------------
/docs/reference/guidelines/community.md:
--------------------------------------------------------------------------------
1 | # Community Information
2 |
3 | All community-related information is kept in a separate repository from the docs.
4 |
5 | Link to the repository: [k8sgpt-ai/community](https://github.com/k8sgpt-ai/community)
6 |
7 | There you will find information on
8 |
9 | * The Charter
10 | * Adopters List
11 | * Code of Conduct
12 | * Community Members
13 | * Subprojects
14 |
15 | and much more.
--------------------------------------------------------------------------------
/docs/tutorials/playground.md:
--------------------------------------------------------------------------------
1 | # K8sGPT Playground
2 |
3 | If you want to try out K8sGPT, we highly suggest following this Killercoda example:
4 |
5 | Link: [**K8sGPT CLI Tutorial**](https://killercoda.com/matthisholleville/scenario/k8sgpt-cli)
6 |
7 | This tutorial covers:
8 |
9 | - Run a simple analysis and explore possible options
10 | - Discover how the AI-powered explanation works
11 | - Stay on the down-low with the anonymize option (because we don't want any trouble with the feds)
12 | - Filter resources like a boss
13 | - Use Integrations
14 |
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
1 | # K8sGPT Documentation
2 |
3 | K8sGPT gives Kubernetes SRE superpowers to everyone
4 |
5 | **The documentation provides the following**
6 |
7 | * Getting started: Guides to install and use K8sGPT
8 | * Tutorials: End-to-end tutorials on specific use cases
9 | * Reference: Specific documentation on the features
10 | * Explanation: Additional explanations on the design and use of the CLI
11 |
12 | ## Documentation enhancements
13 |
14 | If anything is unclear, please create an issue in the [docs repository](https://github.com/k8sgpt-ai/docs).
--------------------------------------------------------------------------------
/.github/workflows/mkdocs.yml:
--------------------------------------------------------------------------------
1 | name: mkdocs
2 | on:
3 |   push:
4 |     branches:
5 |       - main
6 | permissions:
7 |   contents: write
8 | jobs:
9 |   deploy:
10 |     runs-on: ubuntu-latest
11 |     steps:
12 |       - uses: actions/checkout@v3
13 |       - uses: actions/setup-python@v4
14 |         with:
15 |           python-version: 3.12
16 |       - uses: actions/cache@v3
17 |         with:
18 |           key: ${{ github.ref }}
19 |           path: .cache
20 |       - run: |
21 |           pip install -r docs/build/requirements.txt
22 |       - run: mkdocs gh-deploy --force
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
1 | MKDOCS_IMAGE := ghcr.io/k8sgpt-ai/k8sgptdocs:dev
2 | MKDOCS_PORT := 8000
3 |
4 | .PHONY: mkdocs-serve
5 | mkdocs-serve:
6 | 	docker build -t $(MKDOCS_IMAGE) -f docs/build/Dockerfile docs/build
7 | 	docker run --name mkdocs-serve --rm -v $(PWD):/docs -p $(MKDOCS_PORT):8000 $(MKDOCS_IMAGE)
8 |
9 | .PHONY: typos
10 | typos:
11 | 	which codespell || pip install codespell
12 | 	codespell -S _examples,.tfsec,.terraform,.git,go.sum --ignore-words .codespellignore -f
13 |
14 | .PHONY: fix-typos
15 | fix-typos:
16 | 	which codespell || pip install codespell
17 | 	codespell -S .terraform,go.sum --ignore-words .codespellignore -f -w -i1
--------------------------------------------------------------------------------
/docs/build/requirements.txt:
--------------------------------------------------------------------------------
1 | click==8.1.7
2 | csscompressor==0.9.5
3 | ghp-import==2.1.0
4 | htmlmin==0.1.12
5 | importlib-metadata==7.0.1
6 | Jinja2==3.1.2
7 | jsmin==3.0.1
8 | Markdown==3.5.1
9 | MarkupSafe==2.1.3
10 | mergedeep==1.3.4
11 | mike==2.0.0
12 | mkdocs==1.5.3
13 | mkdocs-macros-plugin==1.0.5
14 | mkdocs-material==9.5.3
15 | mkdocs-material-extensions==1.3.1
16 | mkdocs-minify-plugin==0.7.2
17 | mkdocs-redirects==1.2.1
18 | packaging==23.2
19 | Pygments==2.17.2
20 | pymdown-extensions==10.7
21 | pyparsing==3.1.1
22 | python-dateutil==2.8.2
23 | PyYAML==6.0.1
24 | pyyaml_env_tag==0.1
25 | six==1.16.0
26 | termcolor==2.4.0
27 | verspec==0.1.0
28 | watchdog==3.0.0
29 | zipp==3.17.0
30 |
--------------------------------------------------------------------------------
/docs/reference/guidelines/guidelines.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | ## Contributing to the Documentation
4 |
5 | This documentation follows the [Diataxis](https://diataxis.fr/) framework.
6 | If you are proposing completely new content to the documentation, please familiarise yourself with the framework first.
7 |
8 | The documentation is created with [mkdocs](https://www.mkdocs.org/), specifically the [Material for MkDocs theme](https://squidfunk.github.io/mkdocs-material/getting-started/).
9 |
10 | ## Contributing projects in the K8sGPT organisation
11 |
12 | All projects in the K8sGPT organisation follow our [contributing guidelines](https://github.com/k8sgpt-ai/k8sgpt/blob/main/CONTRIBUTING.md).
--------------------------------------------------------------------------------
/docs/getting-started/Community.md:
--------------------------------------------------------------------------------
1 | ## GitHub
2 |
3 | The [k8sgpt source code](https://github.com/k8sgpt-ai/k8sgpt) and other related projects are managed on GitHub in the [k8sgpt-ai](https://github.com/k8sgpt-ai) organization.
4 |
5 |
6 | ## Slack Channel
7 |
8 | You can join the Slack channel using this link: [Slack](https://join.slack.com/t/k8sgpt/shared_invite/zt-276pa9uyq-pxAUr4TCVHubFxEvLZuT1Q)
9 |
10 |
11 | ## Community Meetings / Office Hours
12 |
13 | These happen on the 1st and 3rd Thursday of the month, 12:00-13:00 (Europe/London). Joining info:
14 |
15 | Google Meet: [link](https://meet.google.com/beu-kbdx-dfa)
16 |
17 | Calendar: [calendar schedule](https://calendar.google.com/calendar/u/0?cid=YmE2NzAyZTNkMTIxYjYxN2Q4NmMzYjBjMmE2ZTAzYzgwMTg0NGRiYmMwOTY3MjAzNzJkNDBhZWZjOWJhZGNlNUBncm91cC5jYWxlbmRhci5nb29nbGUuY29t)
18 |
19 | ## Contributing
20 |
21 | Thanks for your interest in contributing! We welcome all types of contributions and encourage you to read our [contribution guidelines](https://github.com/k8sgpt-ai/k8sgpt/blob/main/CONTRIBUTING.md) for next steps.
22 |
23 |
24 |
--------------------------------------------------------------------------------
/docs/tutorials/observability.md:
--------------------------------------------------------------------------------
1 | # Integrating Observability with K8sGPT
2 |
3 | Enhance your Kubernetes observability by integrating Prometheus and Grafana with K8sGPT. Follow these steps to set up and visualize your cluster's insights:
4 | ## Prerequisites
5 | - Prometheus: Install using Helm via Prometheus Community Helm Charts.
6 | - Grafana Dashboard: Ensure Grafana is installed and accessible in your environment.
7 |
8 | ## Installation Steps
9 |
10 | Install the K8sGPT Operator with observability features enabled:
11 | ```bash
12 | helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace --set interplex.enabled=true --set grafanaDashboard.enabled=true --set serviceMonitor.enabled=true
13 | ```
14 | This command:
15 | - Creates a ServiceMonitor to integrate with Prometheus.
16 | - Automatically configures and populates data into your Grafana dashboard.
17 |
18 | Once set up, you can explore key metrics like:
19 | - Results identified by K8sGPT.
20 | - Operator workload details, providing insight into resource usage and efficiency.
21 |
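To sanity-check the installation, you can confirm that the monitoring objects exist before opening Grafana (a minimal sketch; exact resource names depend on your Helm release name):

```bash
# Verify the ServiceMonitor that Prometheus will scrape
kubectl get servicemonitors -n k8sgpt-operator-system

# Verify the dashboard ConfigMap created for Grafana
kubectl get configmaps -n k8sgpt-operator-system
```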
22 |
23 | _See example of a K8sGPT Grafana dashboard_
24 |
25 | 
--------------------------------------------------------------------------------
/docs/imgs/icon-logo-01.svg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/k8sgpt-ai/docs/HEAD/docs/imgs/icon-logo-01.svg
--------------------------------------------------------------------------------
/docs/tutorials/slack-integration.md:
--------------------------------------------------------------------------------
1 | # Integrate K8sGPT operator with Slack
2 | ## Slack prerequisites
3 | - Create a Slack channel
4 | - Create a Slack app
5 | - Enable and create an incoming webhook
6 | - Copy the webhook URL value
7 |
8 | You can follow Slack's documentation to create the [webhook](https://api.slack.com/messaging/webhooks).
9 |
10 | ## Configure the K8sGPT operator
11 |
12 | Install the operator with Helm:
13 | ```bash
14 | helm repo add k8sgpt https://charts.k8sgpt.ai/
15 | helm repo update
16 | helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace
17 | ```
18 | Create the OpenAI secret:
19 | ```bash
20 | kubectl create secret generic k8sgpt-sample-secret --from-literal=openai-api-key=$OPENAI_TOKEN -n k8sgpt-operator-system
21 | ```
22 |
23 | Last but not least, deploy your K8sGPT Custom Resource
24 | ```yaml
25 | kubectl apply -f - << EOF
26 | apiVersion: core.k8sgpt.ai/v1alpha1
27 | kind: K8sGPT
28 | metadata:
29 |   name: k8sgpt-sample
30 |   namespace: k8sgpt-operator-system
31 | spec:
32 |   ai:
33 |     enabled: true
34 |     model: gpt-4o-mini
35 |     backend: openai
36 |     secret:
37 |       name: k8sgpt-sample-secret
38 |       key: openai-api-key
39 |   noCache: false
40 |   version: v0.3.8
41 |   sink:
42 |     type: slack
43 |     webhook: <WEBHOOK_URL>
44 | EOF
45 | ```
46 |
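Once the operator begins reporting, results are pushed to the configured webhook. As a quick check that results are being produced at all (the same command used in the getting-started docs):

```bash
kubectl get results -n k8sgpt-operator-system
```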
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # K8sGPT Documentation
2 |
3 | K8sGPT gives Kubernetes SRE superpowers to everyone
4 |
5 | **The documentation provides the following**
6 |
7 | * Getting started: Guides to install and use K8sGPT
8 | * Tutorials: End-to-end tutorials on specific use cases
9 | * Reference: Specific documentation on the features
10 | * Explanation: Additional explanations on the design and use of the CLI
11 |
12 | ## Documentation enhancements
13 |
14 | If anything is unclear, please create an issue in the [docs repository](https://github.com/k8sgpt-ai/docs).
15 |
16 | ## Running the documentation locally
17 |
18 | **Prerequisites**
19 |
20 | * Docker installed and running
21 |
22 | Clone the repository
23 | ```bash
24 | git clone git@github.com:k8sgpt-ai/docs.git
25 | ```
26 |
27 | And then from within the docs repository, you can start the development server:
28 | ```bash
29 | make mkdocs-serve
30 | ```
31 |
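Once the container is up, the site is served on the port mapped in the Makefile (8000 by default):

```bash
# Confirm the locally served docs respond (port per MKDOCS_PORT in the Makefile)
curl -I http://localhost:8000
```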
32 | ## Contributing
33 | This documentation follows the [Diataxis](https://diataxis.fr/) framework.
34 | If you are proposing completely new content to the documentation, please familiarise yourself with the framework first. The different directories in the documentation correspond to sections explored in the framework.
35 |
36 | The contribution guidelines to the documentation are the same as for the main project: [K8sGPT contributing guidelines](https://github.com/k8sgpt-ai/k8sgpt/blob/main/CONTRIBUTING.md)
--------------------------------------------------------------------------------
/docs/explanation/integrations.md:
--------------------------------------------------------------------------------
1 | # Integrations
2 |
3 | Integrations in K8sGPT allow you to manage and configure various integrations with external tools and services.
4 |
5 | These integrations enhance the functionality of K8sGPT by providing additional capabilities for scanning, diagnosing, and triaging issues in your Kubernetes clusters.
6 |
7 | ## Description
8 |
9 | The `integration` command in the k8sgpt CLI enables seamless integration with external tools and services. It allows you to activate, configure, and manage integrations that complement the functionalities of k8sgpt.
10 |
11 | Integrations are designed to interact with external systems and tools that complement the functionalities of k8sgpt. These integrations include vulnerability scanners, monitoring services, incident management platforms, and more.
12 |
13 | By using the following command, users can access all K8sGPT CLI options related to integrations:
14 |
15 | ```bash
16 | k8sgpt integrations --help
17 | ```
18 |
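For example, a typical flow might look like this (a sketch using the Trivy integration; the commands mirror the CLI reference, and the filter name shown is illustrative):

```bash
# Activate an integration, then analyze using the filter it registers
k8sgpt integrations activate trivy
k8sgpt analyze --filter=VulnerabilityReport
```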
19 | By leveraging the `integration` feature in the K8sGPT CLI, users can extend the functionality of K8sGPT by incorporating various external tools and services.
20 | This collaboration enhances the ability to diagnose and triage issues in Kubernetes clusters more effectively.
21 |
22 | For more information about each `integration` and its specific configuration options, refer to the [reference](https://docs.k8sgpt.ai/reference/cli/filters/) documentation provided for the integration.
23 |
--------------------------------------------------------------------------------
/docs/tutorials/content-collection/content-collection.md:
--------------------------------------------------------------------------------
1 | # Content Collection
2 |
3 | This section provides a collection of videos, blog posts and more on K8sGPT, posted on external sites.
4 |
5 | ## Blogs
6 | Have a look at the K8sGPT blog on the [website](https://k8sgpt.ai/blog/).
7 |
8 | Additionally, here are several blogs created by the community:
9 |
10 | * [K8sGPT + LocalAI: Unlock Kubernetes superpowers for free!](https://itnext.io/k8sgpt-localai-unlock-kubernetes-superpowers-for-free-584790de9b65) by Tyler Gillson
11 | * [K8sGPT: Simplifying Kubernetes Diagnostics with Natural Language Processing](https://www.kubetools.io/kubernetes/k8sgpt-simplifying-kubernetes-diagnostics-with-natural-language-processing/) by Karan Singh
12 | * [Kubernetes + ChatGPT = K8sGpt](https://medium.com/@vijulpatel865/kubernetes-chatgpt-k8sgpt-a9199363dd38) by Vijul Patel
13 | * [ChatGPT for your Kubernetes Cluster — k8sgpt](https://medium.com/techbeatly/chatgpt-for-your-kubernetes-cluster-k8sgpt-649f2cad1bd5) by Renjith Ravindranathan
14 | * [Using the Trivy K8sGPT plugin](https://medium.com/techbeatly/k8sgpt-integration-with-aquasec-trivy-22f53c6730bb) by Renjith Ravindranathan
15 |
16 | ## Videos
17 |
18 | * [Kubernetes + OpenAI = K8sGPT, giving you AI superpowers!](https://youtu.be/7WA8XVrod2Y)
19 | * [k8sgpt Getting Started (2023)](https://youtu.be/yhTS1Dlqygc)
20 | * [Debugging Kubernetes with AI: k8sGPT || AI-Powered Debugging for Kubernetes](https://youtu.be/tgt26P4UmmU)
21 |
22 | ## Contributing
23 |
24 | If you have created any content around K8sGPT, then please add it to this collection.
25 |
--------------------------------------------------------------------------------
/docs/reference/cli/serve-mode.md:
--------------------------------------------------------------------------------
1 | # K8sGPT Serve mode
2 |
3 | ## Prerequisites
4 |
5 | 1. Have [grpcurl](https://github.com/fullstorydev/grpcurl) installed
6 |
7 | ## Run `k8sgpt` serve mode
8 |
9 | ```bash
10 | $ k8sgpt serve
11 | {"level":"info","ts":1684309627.113916,"caller":"server/server.go:83","msg":"binding metrics to 8081"}
12 | {"level":"info","ts":1684309627.114198,"caller":"server/server.go:68","msg":"binding api to 8080"}
13 | ```
14 |
15 | This command starts two servers:
16 |
17 | 1. The health server runs on port 8081 by default and serves metrics and health endpoints.
18 | 2. The API server runs on port 8080 (gRPC) by default and serves the analysis handler.
19 |
20 | For more details about the gRPC implementation, refer to this [link](https://buf.build/k8sgpt-ai/k8sgpt/docs/main).
21 |
22 | ## Analyze your cluster with `grpcurl`
23 |
24 | Make sure you are connected to a Kubernetes cluster:
25 |
26 | ```bash
27 | kubectl get nodes
28 | ```
29 |
30 | Next, run the following command:
31 |
32 | ```bash
33 | grpcurl -plaintext localhost:8080 schema.v1.ServerService/Analyze
34 | ```
35 |
36 | This command provides a list of issues in your Kubernetes cluster. If there are no issues identified, you should receive a status of `OK`.
37 |
38 | ## Analyze with parameters
39 |
40 | You can specify parameters using the following command:
41 |
42 | ```bash
43 | grpcurl -plaintext -d '{"explain": false, "filters": ["Ingress"], "namespace": "k8sgpt"}' localhost:8080 schema.v1.ServerService/Analyze
44 | ```
45 |
46 | In this example, the analyzer will only consider the `Ingress` filter within the `k8sgpt` namespace, and will skip the AI explanation.
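
Because the examples above work without supplying `.proto` files, the server exposes gRPC reflection; you can use it to explore the API surface:

```bash
# List the available services and inspect the Analyze method's schema
grpcurl -plaintext localhost:8080 list
grpcurl -plaintext localhost:8080 describe schema.v1.ServerService
```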
--------------------------------------------------------------------------------
/docs/reference/guidelines/privacy.md:
--------------------------------------------------------------------------------
1 | # Privacy
2 |
3 | K8sGPT is a privacy-first tool, and we believe transparency is key to understanding how we use your data. We have created this page to help you understand how we collect, use, share and protect your data.
4 |
5 | ## Data we collect
6 |
7 | K8sGPT will collect data from Analyzers and either display it directly to you or,
8 | with the `--explain` flag, send it to the selected AI backend.
9 |
10 | The type of data collected depends on the Analyzer you are using. For example, the Pod analyzer (`k8sgpt analyze --filter=Pod`) will collect the following data:
11 | - Container status message
12 | - Pod name
13 | - Pod namespace
14 | - Event message
15 |
16 | ## Data we share
17 |
18 | As mentioned, K8sGPT will share data with the selected AI backend **only** when you use
19 | `--explain` and have authenticated (`auth`) against that backend. The data shared is the same as the data collected.
20 |
21 | To learn more about the privacy policy of our default AI backend, OpenAI, please visit [their privacy policy](https://openai.com/policies/privacy-policy).
22 |
23 |
24 | ## Data we protect
25 |
26 | When you are sending data through the `--explain` option, there is the capability of anonymising some of that data. This is done by using the `--anonymize` flag. In the example of the Deployment Analyzer, this will obfuscate the following data:
27 |
28 | - Deployment name
29 | - Deployment namespace
30 |
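As a concrete illustration (flags as documented in the CLI reference):

```bash
# Request AI explanations while masking identifying fields
k8sgpt analyze --explain --anonymize
```
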
31 | ## Data we don't collect
32 |
33 | - Logs
34 | - API Server data other than the primitives used within our Analyzers.
35 |
36 | ### Contact
37 |
38 | If you have any questions about our privacy policy, please [contact us](https://k8sgpt.ai/contact/).
39 |
--------------------------------------------------------------------------------
/docs/reference/cli/debugging.md:
--------------------------------------------------------------------------------
1 | # Debugging K8sGPT
2 |
3 | If you are experiencing issues that you believe are related to K8sGPT,
4 | please open an issue [here](https://github.com/k8sgpt-ai/k8sgpt/issues) and upload your K8sGPT dump file.
5 |
6 | To create a K8sGPT dump file, run `k8sgpt dump`.
7 | This will create a `dump_<timestamp>.json` file which you can attach to your GitHub issue.
8 |
9 | ```
10 | ❯ cat dump_20241112200820.json
11 | {
12 |   "AIConfiguration": {
13 |     "Providers": [
14 |       {
15 |         "Name": "openai",
16 |         "Model": "gpt-3.5-turbo",
17 |         "Password": "sk-p***",
18 |         "BaseURL": "",
19 |         "ProxyEndpoint": "",
20 |         "ProxyPort": "",
21 |         "EndpointName": "",
22 |         "Engine": "",
23 |         "Temperature": 0.7,
24 |         "ProviderRegion": "",
25 |         "ProviderId": "",
26 |         "CompartmentId": "",
27 |         "TopP": 0.5,
28 |         "TopK": 50,
29 |         "MaxTokens": 2048,
30 |         "OrganizationId": "",
31 |         "CustomHeaders": []
32 |       }
33 |     ],
34 |     "DefaultProvider": ""
35 |   },
36 |   "ActiveFilters": [
37 |     "Deployment",
38 |     "StatefulSet",
39 |     "ReplicaSet",
40 |     "Ingress",
41 |     "ValidatingWebhookConfiguration",
42 |     "PersistentVolumeClaim",
43 |     "CronJob",
44 |     "MutatingWebhookConfiguration",
45 |     "Gateway"
46 |   ],
47 |   "KubenetesServerVersion": {
48 |     "major": "1",
49 |     "minor": "31",
50 |     "gitVersion": "v1.31.0",
51 |     "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
52 |     "gitTreeState": "clean",
53 |     "buildDate": "2024-08-13T07:28:49Z",
54 |     "goVersion": "go1.22.5",
55 |     "compiler": "gc",
56 |     "platform": "linux/amd64"
57 |   },
58 |   "K8sGPTInfo": {
59 |     "Version": "dev",
60 |     "Commit": "HEAD",
61 |     "Date": "unknown"
62 |   }
63 | }
64 | ```
--------------------------------------------------------------------------------
/docs/explanation/caching.md:
--------------------------------------------------------------------------------
1 | # Caching
2 |
3 | Remote caching is a mechanism used to store and retrieve frequently accessed data in a location separate from the primary system. In the context of `K8sGPT`, it allows users to offload cached data to a remote storage service, like AWS S3, rather than managing it on the local machine.
4 | This approach offers several benefits, such as reducing local storage requirements and ensuring cache persistence even when the local environment is updated or restarted.
5 |
6 | ## AWS S3 Integration
7 |
8 | K8sGPT provides seamless integration with **AWS S3**, a widely adopted and reliable object storage service offered by **Amazon Web Services**. By leveraging this integration, users can take advantage of AWS S3's scalability, durability, and availability to store their cached data remotely.
9 |
10 | ### Prerequisites
11 |
12 | To use the remote caching feature with AWS S3 in **K8sGPT**, you need to have the following prerequisites set up:
13 |
14 | - `AWS_ACCESS_KEY_ID`: An access key ID is required to authenticate with AWS S3 programmatically.
15 |
16 | - `AWS_SECRET_ACCESS_KEY`: The corresponding secret access key that pairs with the AWS access key ID.
17 |
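For example, assuming you use static credentials, they can be exported as environment variables before invoking the CLI (placeholder values shown):

```bash
# Placeholders -- substitute your own IAM credentials
export AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY>
```
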
18 | _Adding a Remote Cache_:
19 |
20 | Users can easily add a remote cache to the K8sGPT CLI by executing the following command:
21 |
22 | ```bash
23 | k8sgpt cache add --region <AWS_REGION> --bucket <BUCKET_NAME>
24 | ```
25 |
26 | The command above will create a new bucket in the specified AWS region if it does not already exist. The created bucket will serve as the remote cache for K8sGPT.
27 |
28 | _Listing Cache Items_:
29 |
30 | To view the items stored in the remote cache, users can use the following command:
31 |
32 | ```bash
33 | k8sgpt cache list
34 | ```
35 |
36 | _Removing the Remote Cache_:
37 |
38 | If users wish to remove the remote cache without deleting the associated AWS S3 bucket, they can use the following command:
39 |
40 | ```bash
41 | k8sgpt cache remove --bucket <BUCKET_NAME>
42 | ```
43 |
44 | This command ensures that the cache items are removed from the K8sGPT CLI, but the bucket and its contents in AWS S3 will remain intact for potential future usage.
45 |
--------------------------------------------------------------------------------
/mkdocs.yml:
--------------------------------------------------------------------------------
1 | site_name: k8sgpt
2 | site_url: https://k8sgpt.ai/docs
3 | site_description: K8sGPT gives Kubernetes SRE superpowers to everyone
4 | docs_dir: docs/
5 | repo_name: GitHub
6 | repo_url: https://github.com/k8sgpt-ai/k8sgpt
7 | edit_uri: https://github.com/k8sgpt-ai/docs/edit/main/docs/
8 |
9 | nav:
10 |   - Getting Started:
11 |       - Overview: index.md
12 |       - Getting Started Guide: getting-started/getting-started.md
13 |       - Installation: getting-started/installation.md
14 |       - In-Cluster Operator: getting-started/in-cluster-operator.md
15 |       - Community: getting-started/Community.md
16 |   - Tutorials:
17 |       - Overview: tutorials/index.md
18 |       - Content Collection: tutorials/content-collection/content-collection.md
19 |       - K8sGPT Playground: tutorials/playground.md
20 |       - Custom Analyzers: tutorials/custom-analyzers.md
21 |       - Slack Integration: tutorials/slack-integration.md
22 |       - Observability: tutorials/observability.md
23 |       - Custom Rest Backend: tutorials/custom-rest-backend.md
24 |   - Reference:
25 |       - CLI:
26 |           - Overview: reference/cli/index.md
27 |           - Integration and Filter: reference/cli/filters.md
28 |           - Serve mode: reference/cli/serve-mode.md
29 |           - Debugging: reference/cli/debugging.md
30 |       - Providers:
31 |           - Overview: reference/providers/backend.md
32 |       - Operator:
33 |           - Overview: reference/operator/overview.md
34 |           - Advanced Installation Options: reference/operator/advanced-installation.md
35 |       - Guidelines & Community:
36 |           - Guidelines: reference/guidelines/guidelines.md
37 |           - Community: reference/guidelines/community.md
38 |           - Privacy: reference/guidelines/privacy.md
39 |   - Explanation:
40 |       - Integration: explanation/integrations.md
41 |       - Caching: explanation/caching.md
42 |
43 | theme:
44 |   name: material
45 |   language: "en"
46 |   logo: imgs/icon-logo-01.svg
47 |   features:
48 |     - navigation.tabs
49 |     - navigation.tabs.sticky
50 |     - navigation.sections
51 |     - content.tabs.link
52 |
53 | markdown_extensions:
54 |   - admonition
55 |
56 | plugins:
57 |   - search
58 |   - macros
59 |
--------------------------------------------------------------------------------
/docs/reference/operator/advanced-installation.md:
--------------------------------------------------------------------------------
1 | # Advanced Operator installation options
2 |
3 | This documentation lists advanced installation options for the K8sGPT Operator.
4 |
5 | ## ArgoCD
6 |
7 | ArgoCD is a continuous deployment tool that implements GitOps best practices to install and manage Kubernetes resources.
8 |
9 | ### Prerequisites
10 |
11 | To install and manage K8sGPT through ArgoCD, ensure that you have ArgoCD installed and running inside your cluster.
12 | The ArgoCD [getting-started-guide](https://argo-cd.readthedocs.io/en/stable/getting_started/) provides detailed information.
13 |
14 | ### Installing K8sGPT
15 |
16 | K8sGPT can be installed through ArgoCD by applying an `Application` CRD to the ArgoCD namespaces in your cluster (with ArgoCD running):
17 |
18 | K8sGPT Application CRD:
19 |
20 | ```yaml
21 | apiVersion: argoproj.io/v1alpha1
22 | kind: Application
23 | metadata:
24 |   name: k8sgpt
25 |   namespace: argocd
26 | spec:
27 |   project: default
28 |   source:
29 |     chart: k8sgpt-operator
30 |     repoURL: https://charts.k8sgpt.ai/
31 |     targetRevision: <RELEASE_VERSION>
32 |     helm:
33 |       values: |
34 |         serviceMonitor:
35 |           enabled: true
36 |         grafanaDashboard:
37 |           enabled: true
38 |   destination:
39 |     server: https://kubernetes.default.svc
40 |     namespace: k8sgpt-operator-system
41 |   syncPolicy:
42 |     automated:
43 |       prune: true
44 |       selfHeal: true
45 |     syncOptions:
46 |       - CreateNamespace=true
47 | ```
48 |
49 | Note:
50 |
51 | * Ensure that the `namespace` is correctly set to your ArgoCD namespace.
52 | * Ensure that `targetRevision` is set to the [K8sGPT Operator Release Version](https://github.com/k8sgpt-ai/k8sgpt-operator/releases) that you want to use.
53 | * Modify the `helm.values` section with the Helm Values that you would like to overwrite. Check the [values.yaml](https://github.com/k8sgpt-ai/k8sgpt-operator/tree/main/chart/operator) file of the Operator for options.
54 |
55 | Applying the resource:
56 |
57 | ```bash
58 | kubectl apply -f application.yaml
59 | ```
60 |
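After applying it, you can watch ArgoCD reconcile the Application (a sketch using plain kubectl against the Application CRD):

```bash
# Wait for the sync and health status to settle
kubectl get application k8sgpt -n argocd -w
```
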
61 | ### Installing the remaining Operator resources
62 |
63 | You will still need to install the
64 |
65 | * K8sGPT Operator CRD
66 | * K8sGPT secret to access the AI backend
67 |
68 | that are both detailed in the Operator installation page. The above Application resource only installs the Operator itself, not these additional resources. Note that you could also manage those resources through ArgoCD. Please refer to the official [ArgoCD documentation](https://argo-cd.readthedocs.io/en/stable/getting_started/) for further information.
69 |
--------------------------------------------------------------------------------
/docs/reference/cli/index.md:
--------------------------------------------------------------------------------
1 | # CLI Reference
2 |
3 | This section provides an overview of the different `k8sgpt` CLI commands.
4 |
5 | **Prerequisites**
6 |
7 | * You need to be connected to a Kubernetes cluster; K8sGPT will access it through your kubeconfig.
8 | * [Signed up for OpenAI ChatGPT](https://openai.com/)
9 | * Have the [K8sGPT CLI installed](../../getting-started/installation.md)
10 |
11 | ## Commands
12 |
13 | _Run a scan with the default analyzers_
14 |
15 | ```
16 | k8sgpt generate
17 | k8sgpt auth add
18 | k8sgpt analyze --explain
19 | ```
20 |
21 | _Filter on resource_
22 |
23 | ```
24 | k8sgpt analyze --explain --filter=Service
25 | ```
26 |
27 | _Filter by namespace_
28 | ```
29 | k8sgpt analyze --explain --filter=Pod --namespace=default
30 | ```
31 |
32 | _Output to JSON_
33 |
34 | ```
35 | k8sgpt analyze --explain --filter=Service --output=json
36 | ```
37 |
38 | _Anonymize during explain_
39 |
40 | ```
41 | k8sgpt analyze --explain --filter=Service --output=json --anonymize
42 | ```
43 |
44 | ## Additional commands
45 |
46 | _List configured backends_
47 |
48 | ```
49 | k8sgpt auth list
50 | ```
51 |
52 | _Remove configured backends_
53 |
54 | ```
55 | k8sgpt auth remove --backend $MY_BACKEND
56 | ```
57 |
58 | _List integrations_
59 |
60 | ```
61 | k8sgpt integrations list
62 | ```
63 |
64 | _Activate integrations_
65 |
66 | ```
67 | k8sgpt integrations activate [integration(s)]
68 | ```
69 |
70 | _Use integration_
71 |
72 | ```
73 | k8sgpt analyze --filter=[integration(s)]
74 | ```
75 |
76 | _Deactivate integrations_
77 |
78 | ```
79 | k8sgpt integrations deactivate [integration(s)]
80 | ```
81 |
82 | _Serve mode with gRPC_
83 |
84 | ```
85 | k8sgpt serve
86 | ```
87 |
88 | _Analysis with gRPC serve mode_
89 | ```
90 | grpcurl -plaintext localhost:8080 schema.v1.ServerService/Analyze
91 | ```
92 |
93 | _Serve mode with gRPC and non-default backend (amazonbedrock)_
94 |
95 | ```
96 | k8sgpt serve -b amazonbedrock
97 | ```
98 |
99 | _Analysis with gRPC serve mode and non-default backend (amazonbedrock)_
100 | ```
101 | grpcurl -plaintext -d '{"explain": true, "backend": "amazonbedrock"}' localhost:8080 schema.v1.ServerService/Analyze
102 | ```
103 |
104 | _Serve mode with REST API_
105 | ```
106 | k8sgpt serve --http
107 | ```
108 |
109 | _Analysis with REST API serve mode_
110 | ```
111 | curl -X POST "http://localhost:8080/v1/analyze"
112 | ```
113 |
114 | _Serve mode with REST API and non-default backend (amazonbedrock)_
115 | ```
116 | k8sgpt serve --http -b amazonbedrock
117 | ```
118 |
119 | _Analysis with REST API serve mode and non-default backend (amazonbedrock)_
120 | ```
121 | curl -X POST "http://localhost:8080/v1/analyze?explain=true&backend=amazonbedrock"
122 | ```
123 |
--------------------------------------------------------------------------------
/docs/reference/operator/overview.md:
--------------------------------------------------------------------------------
1 | # K8sGPT Operator
2 |
3 | K8sGPT can run as a Kubernetes Operator inside the cluster.
4 | The scan results are provided as Kubernetes YAML manifests.
5 |
6 | This section will only detail how to configure the operator. For installation instructions, please see the [getting-started section.](../../getting-started/in-cluster-operator.md)
7 |
8 | ## Architecture
9 |
10 | The diagram below showcases the different components that the K8sGPT Operator installs and manages:
11 |
12 | 
13 |
14 | ## Customising the Operator
15 |
16 | As with other Helm Charts, the K8sGPT Operator can be customised by modifying the [`values.yaml`](https://github.com/k8sgpt-ai/k8sgpt/blob/main/charts/k8sgpt/values.yaml) file.
17 |
18 | The following fields can be customised in the Helm Chart Deployment:
19 |
20 |
21 | | Parameter | Description | Default |
22 | | ------------------------ | ----------------------- | -------------- |
23 | | `serviceMonitor.enabled` | | `false` |
24 | | `serviceMonitor.additionalLabels` | | `{}` |
25 | | `grafanaDashboard.enabled` | | `false` |
26 | | `grafanaDashboard.folder.annotation` | | `"grafana_folder"` |
27 | | `grafanaDashboard.folder.name` | | `"ai"` |
28 | | `grafanaDashboard.label.key` | | `"grafana_dashboard"` |
29 | | `grafanaDashboard.label.value` | | `"1"` |
30 | | `controllerManager.kubeRbacProxy.containerSecurityContext.allowPrivilegeEscalation` | | `false` |
31 | | `controllerManager.kubeRbacProxy.containerSecurityContext.capabilities.drop` | | `["ALL"]` |
32 | | `controllerManager.kubeRbacProxy.image.repository` | | `"gcr.io/kubebuilder/kube-rbac-proxy"` |
33 | | `controllerManager.kubeRbacProxy.image.tag` | | `"v0.0.19"` |
34 | | `controllerManager.kubeRbacProxy.resources.limits.cpu` | | `"500m"` |
35 | | `controllerManager.kubeRbacProxy.resources.limits.memory` | | `"128Mi"` |
36 | | `controllerManager.kubeRbacProxy.resources.requests.cpu` | | `"5m"` |
37 | | `controllerManager.kubeRbacProxy.resources.requests.memory` | | `"64Mi"` |
38 | | `controllerManager.manager.sinkWebhookTimeout` | | `"30s"` |
39 | | `controllerManager.manager.containerSecurityContext.allowPrivilegeEscalation` | | `false` |
40 | | `controllerManager.manager.containerSecurityContext.capabilities.drop` | | `["ALL"]` |
41 | | `controllerManager.manager.image.repository` | | `"ghcr.io/k8sgpt-ai/k8sgpt-operator"` |
42 | | `controllerManager.manager.image.tag` | | `"v0.0.19"` |
43 | | `controllerManager.manager.resources.limits.cpu` | | `"500m"` |
44 | | `controllerManager.manager.resources.limits.memory` | | `"128Mi"` |
45 | | `controllerManager.manager.resources.requests.cpu` | | `"10m"` |
46 | | `controllerManager.manager.resources.requests.memory` | | `"64Mi"` |
47 | | `controllerManager.replicas` | | `1` |
48 | | `kubernetesClusterDomain` | | `"cluster.local"` |
49 | | `metricsService.ports` | | `[{"name": "https", "port": 8443, "protocol": "TCP", "targetPort": "https"}]` |
50 | | `metricsService.type` | | `"ClusterIP"` |
51 |
52 | ### For example: In-cluster metrics
53 |
54 | It is possible to enable metrics of the operator so that they can be scraped through Prometheus.
55 |
56 | This is the configuration required in the `values.yaml` manifest:
57 | ```yaml
58 | serviceMonitor:
59 |   enabled: true
60 | ```
61 |
62 | The new `values.yaml` manifest can then be provided upon installing the Operator inside the cluster:
63 | ```bash
64 | helm upgrade --install release k8sgpt/k8sgpt-operator --values values.yaml
65 | ```
66 |
--------------------------------------------------------------------------------
/docs/getting-started/in-cluster-operator.md:
--------------------------------------------------------------------------------
1 | # K8sGPT Operator
2 |
3 | ## Prerequisites
4 |
5 | - To begin you will require a Kubernetes cluster available and `KUBECONFIG` set.
6 | - You will also need to install helm v3. See the [Helm documentation](https://helm.sh/docs/intro/install/) for more information.
7 |
8 | ## Operator Installation
9 |
10 | To install the operator, run the following command:
11 |
12 | ```bash
13 | helm repo add k8sgpt https://charts.k8sgpt.ai/
14 | helm repo update
15 | helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace
16 | ```
17 |
18 | This will install the Operator into the cluster, which will await a `K8sGPT` resource before anything happens.
19 |
20 | ## Deploying an OpenAI secret
21 |
22 | Whilst there are multiple backends supported (OpenAI, Azure OpenAI and Local), in this example we'll use OpenAI.
23 | Whatever backend you are using, you need to make sure to have a secret that works with the backend.
24 |
25 | For instance, this means you will need to install your OpenAI token as a secret into the cluster:
26 |
27 | ```bash
28 | kubectl create secret generic k8sgpt-sample-secret --from-literal=openai-api-key=$OPENAI_TOKEN -n k8sgpt-operator-system
29 | ```
30 |
31 | ## Deploying a K8sGPT resource
32 |
33 | To deploy a K8sGPT resource, apply a manifest with the following contents:
34 |
35 | ```yaml
36 | kubectl apply -f - << EOF
37 | apiVersion: core.k8sgpt.ai/v1alpha1
38 | kind: K8sGPT
39 | metadata:
40 |   name: k8sgpt-sample
41 |   namespace: k8sgpt-operator-system
42 | spec:
43 |   ai:
44 |     enabled: true
45 |     model: gpt-4o-mini
46 |     backend: openai
47 |     secret:
48 |       name: k8sgpt-sample-secret
49 |       key: openai-api-key
50 |     # anonymized: false
51 |     # language: english
52 |   noCache: false
53 |   version: v0.3.41
54 |   # filters:
55 |   #   - Ingress
56 |   # sink:
57 |   #   type: slack
58 |   #   webhook: <WEBHOOK_URL>
59 |   # extraOptions:
60 |   #   backstage:
61 |   #     enabled: true
62 | EOF
63 | ```
64 |
65 | Please replace the `version` field with the [current release of K8sGPT](https://github.com/k8sgpt-ai/k8sgpt/releases). At the time of writing this is `v0.3.41`.
66 |
67 | ### Regarding out-of-cluster traffic to AI backends
68 |
69 | In the above example `spec.ai.enabled` is set to `true`.
70 | This option allows the cluster deployment to use the `backend` to filter and improve the responses to the user.
71 | Those responses will appear as `details` within the `Result` custom resources that are created.
72 |
73 | The default backend in this example is [OpenAI](https://openai.com/) and allows for additional details to be generated and solutions provided for issues.
74 | If you wish to disable out-of-cluster communication and any Artificial Intelligence processing through models, simply set `spec.ai.enabled` to `false`.
75 |
76 | _It should also be noted that `localai` and `azureopenai` are supported, and in-cluster models will be supported in the near future._
77 |
78 | ## Viewing the results
79 |
80 | Once the initial scans have completed after several minutes, you will be presented with `Result` custom resources.
81 |
82 | ```bash
83 | ❯ kubectl get results -o json -n k8sgpt-operator-system | jq .
84 | {
85 |   "apiVersion": "v1",
86 |   "items": [
87 |     {
88 |       "apiVersion": "core.k8sgpt.ai/v1alpha1",
89 |       "kind": "Result",
90 |       "metadata": {
91 |         "creationTimestamp": "2023-04-26T09:45:02Z",
92 |         "generation": 1,
93 |         "name": "placementoperatorsystemplacementoperatorcontrollermanagermetricsservice",
94 |         "namespace": "k8sgpt-operator-system",
95 |         "resourceVersion": "108371",
96 |         "uid": "f0edd4de-92b6-4de2-ac86-5bb2b2da9736"
97 |       },
98 |       "spec": {
99 |         "details": "The error message means that the service in Kubernetes doesn't have any associated endpoints, which should have been labeled with \"control-plane=controller-manager\". \n\nTo solve this issue, you need to add the \"control-plane=controller-manager\" label to the endpoint that matches the service. Once the endpoint is labeled correctly, Kubernetes can associate it with the service, and the error should be resolved.",
100 | ...
101 | ```
102 |
--------------------------------------------------------------------------------
/docs/getting-started/installation.md:
--------------------------------------------------------------------------------
1 | # Installation
2 |
3 | This page provides further information on installation guidelines.
4 |
5 | ## Linux/Mac via brew
6 |
7 | ### Prerequisites
8 |
9 | Ensure that you have Homebrew installed:
10 |
11 | - Homebrew for Mac
12 | - Homebrew for Linux (also works on WSL)
14 |
15 | ### Homebrew
16 |
17 | Install K8sGPT on your machine with the following commands:
18 | ```bash
19 | brew tap k8sgpt-ai/k8sgpt
20 | brew install k8sgpt
21 | ```
22 | ## Other Installation Options
23 |
24 | ### RPM-based installation (RedHat/CentOS/Fedora)
25 |
26 | **32 bit:**
27 |
28 | ```bash
29 | curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.24/k8sgpt_386.rpm
30 | sudo rpm -ivh k8sgpt_386.rpm
31 | ```
32 |
33 | **64 bit:**
34 |
35 | ```bash
36 | curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.24/k8sgpt_amd64.rpm
37 | sudo rpm -ivh k8sgpt_amd64.rpm
38 | ```
39 |
40 | ### DEB-based installation (Ubuntu/Debian)
41 |
42 | **32 bit:**
43 |
44 | ```bash
45 | curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.24/k8sgpt_386.deb
46 | sudo dpkg -i k8sgpt_386.deb
47 | ```
48 |
49 | **64 bit:**
50 |
51 | ```bash
52 | curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.24/k8sgpt_amd64.deb
53 | sudo dpkg -i k8sgpt_amd64.deb
54 | ```
55 |
56 | ### APK-based installation (Alpine)
57 |
58 | **32 bit:**
59 |
60 | ```bash
61 | curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.24/k8sgpt_386.apk
62 | apk add k8sgpt_386.apk
63 | ```
64 |
65 | **64 bit:**
66 |
67 | ```bash
68 | curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.24/k8sgpt_amd64.apk
69 | apk add k8sgpt_amd64.apk
70 | ```
71 |
72 | ## Windows
73 |
74 | * Download the latest Windows binaries of **k8sgpt** from the [Release](https://github.com/k8sgpt-ai/k8sgpt/releases)
75 | tab based on your system architecture.
76 | * Extract the downloaded package to your desired location. Configure the system *path* variable with the binary location.
77 |
78 | ## Verify installation
79 |
80 | Verify that K8sGPT is installed correctly:
81 |
82 | ```bash
83 | k8sgpt version
84 |
85 | k8sgpt version 0.2.7
86 | ```
87 |
88 | ## Common Issues
89 |
90 | ### Failing installation on WSL or Linux (missing gcc)
91 |
92 | When installing Homebrew on WSL or Linux, you may encounter the following error:
93 |
94 | ```bash
95 | ==> Installing k8sgpt from k8sgpt-ai/k8sgpt Error: The following formula cannot be installed from bottle and must be
96 | built from source. k8sgpt Install Clang or run brew install gcc.
97 | ```
98 |
99 | If you install gcc as suggested, the problem will persist. Instead, you need to install the build-essential package:
100 |
101 | ```bash
102 | sudo apt-get update
103 | sudo apt-get install build-essential
104 | ```
105 |
120 | ## Running K8sGPT through a container
121 |
122 | If you are running K8sGPT through a container, the CLI will not be able to open the website for the OpenAI token.
123 |
124 | You can find the latest container image for K8sGPT in the packages of the GitHub organisation: [Link](https://github.com/k8sgpt-ai/k8sgpt/pkgs/container/k8sgpt)
125 |
126 | A volume can then be mounted to the image through e.g. [Docker Compose](https://docs.docker.com/storage/volumes/).
127 | Below is an example:
128 |
129 | ```yaml
130 | version: '2'
131 | services:
132 |   k8sgpt:
133 |     image: ghcr.io/k8sgpt-ai/k8sgpt:dev-202304011623
134 |     volumes:
135 |       - /home/$(whoami)/.k8sgpt.yaml:/home/root/.k8sgpt.yaml
136 | ```
137 |
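With that file in place, a hedged sketch of running an analysis from the container (this assumes the image's entrypoint is the `k8sgpt` binary):

```bash
docker compose run --rm k8sgpt analyze
```
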
138 | ## Installing the K8sGPT Operator Helm Chart
139 |
140 | K8sGPT can be installed as an Operator inside the cluster.
141 | For further information, see the [K8sGPT Operator](in-cluster-operator.md) documentation.
142 |
143 | ## Installing the K8sGPT Operator via Glasskube
144 |
145 | [Glasskube](https://glasskube.dev/) is a Kubernetes package manager that simplifies the installation process of the k8sgpt-operator and automatically ensures it stays up-to-date with the latest version. For detailed instructions on installing Glasskube, refer to the [Glasskube Installation](https://glasskube.dev/docs/getting-started/install/).
146 |
147 | Install k8sgpt via the CLI:
148 | ```bash
149 | glasskube install k8sgpt-operator --value "openaiApiKey=<YOUR_OPENAI_API_KEY>"
150 | ```
151 | Alternatively, configure the package via the Glasskube UI, where you can easily customize the operator to anonymize data, choose the output language, and define the OpenAI API key seamlessly.
152 |
153 | ## Upgrading the brew installation
154 |
155 | To upgrade the K8sGPT brew installation run the following command:
156 |
157 | ```bash
158 | brew upgrade k8sgpt
159 | ```
160 |
--------------------------------------------------------------------------------
/docs/getting-started/getting-started.md:
--------------------------------------------------------------------------------
1 | # Getting Started Guide
2 |
3 | You can either get started with K8sGPT in your own environment, as detailed below, or use our [Playground example on Killercoda](../tutorials/playground.md).
4 |
5 | !!! tip
6 |     Please only use K8sGPT on environments where you are authorized to modify Kubernetes resources.
7 |
8 | ## Prerequisites
9 |
10 | 1. Ensure `k8sgpt` is installed correctly on your environment by following the [installation guide](./installation.md).
11 | 2. You need to be connected to a Kubernetes cluster. Below is the documentation for setting up a new KinD Kubernetes cluster; make sure that `kubectl` is already installed.
12 |
13 | ### Setting up a Kubernetes cluster
14 |
15 | To give `k8sgpt` a try, set up a basic Kubernetes cluster, such as KinD or Minikube (if you are not connected to any other cluster).
16 |
17 | - The [KinD documentation](https://kind.sigs.k8s.io/docs/user/quick-start/) provides several installation options to set up a local cluster with two commands.
18 |
19 | - The [Minikube documentation](https://minikube.sigs.k8s.io/docs/start/) covers different Operating Systems and Architectures to set up a local Kubernetes cluster running on a Container or Virtual Machine.
20 |
21 |
22 | **Creating a KinD Kubernetes Cluster**
23 |
24 | Install KinD first:
25 | ```bash
26 | brew install kind
27 | ```
28 |
29 | Create a new Kubernetes cluster:
30 | ```bash
31 | kind create cluster --name k8sgpt-demo
32 | ```
33 |
34 | ## Using K8sGPT
35 |
36 | You can view the different command options through
37 |
38 | ```bash
39 | k8sgpt --help
40 | Kubernetes debugging powered by AI
41 |
42 | Usage:
43 |   k8sgpt [command]
44 |
45 | Available Commands:
46 |   analyze          This command will find problems within your Kubernetes cluster
47 |   auth             Authenticate with your chosen backend
48 |   cache            For working with the cache the results of an analysis
49 |   completion       Generate the autocompletion script for the specified shell
50 |   custom-analyzer  Manage a custom analyzer
51 |   dump             Creates a dumpfile for debugging issues with K8sGPT
52 |   filters          Manage filters for analyzing Kubernetes resources
53 |   generate         Generate Key for your chosen backend (opens browser)
54 |   help             Help about any command
55 |   integration      Integrate another tool into K8sGPT
56 |   serve            Runs k8sgpt as a server
57 |   version          Print the version number of k8sgpt
58 |
59 | Flags:
60 |       --config string        Default config file (default is $HOME/.k8sgpt.yaml)
61 |   -h, --help                 help for k8sgpt
62 |       --kubeconfig string    Path to a kubeconfig. Only required if out-of-cluster.
63 |       --kubecontext string   Kubernetes context to use. Only required if out-of-cluster.
64 |
65 | Use "k8sgpt [command] --help" for more information about a command.
66 | ```
67 |
68 | ## Authenticate with OpenAI
69 |
70 | First, you will need to authenticate with your chosen backend. The backend is the AI provider such as OpenAI's ChatGPT.
71 |
72 | [Ensure that you have created an account with OpenAI.](https://platform.openai.com/login)
73 |
74 | Next, generate a token from the backend:
75 |
76 | ```bash
77 | k8sgpt generate
78 | ```
79 |
80 | This will provide you with a URL to generate a token; follow the URL from the command line to your browser to generate the token.
81 |
82 | 
83 |
84 | Copy the token for the next step.
85 |
86 | Then, authenticate with the following command:
87 |
88 | ```bash
89 | k8sgpt auth add --backend openai --model gpt-4o-mini
90 | ```
91 |
92 | This will request the token that has just been generated. Paste the token into the command line.
93 |
94 | You should then see the following success message:
95 | > Enter openai Key: openai added to the AI backend provider list
96 |
97 | ## Analyze your cluster
98 |
99 | Ensure that you are connected to the correct Kubernetes cluster; for this initial example it is preferable to use KinD or Minikube, as discussed earlier.
100 |
101 | ```bash
102 | kubectl config current-context
103 | ```
104 |
105 | ```bash
106 | kubectl get nodes
107 | ```
108 |
109 | We will now create a new "broken Pod", simply create a new YAML file named `broken-pod.yml` with the following contents:
110 | ```yaml
111 | apiVersion: v1
112 | kind: Pod
113 | metadata:
114 |   name: broken-pod
115 |   namespace: default
116 | spec:
117 |   containers:
118 |     - name: broken-pod
119 |       image: nginx:1.a.b.c
120 |       livenessProbe:
121 |         httpGet:
122 |           path: /
123 |           port: 81
124 |         initialDelaySeconds: 3
125 |         periodSeconds: 3
126 | ```
127 | You might have noticed that this Pod has an invalid image tag. That is fine for this example; we simply want an issue in our cluster. Then simply run:
128 |
129 | ```bash
130 | kubectl apply -f broken-pod.yml
131 | ```
132 |
133 | This will create the "broken Pod" in the cluster. You can verify this by running:
134 |
135 | ```bash
136 | kubectl get pods
137 |
138 | NAME READY STATUS RESTARTS AGE
139 | broken-pod 0/1 ErrImagePull 0 5s
140 | ```
141 |
142 | Now, you can go ahead and analyse your cluster:
143 |
144 | ```bash
145 | k8sgpt analyze
146 | ```
147 |
148 | Executing this command will generate a list of issues present in your Kubernetes cluster. In the case of our example, a message should be displayed highlighting the problem related to the container image.
149 |
150 | ```bash
151 | 0 default/broken-pod(broken-pod)
152 | - Error: Back-off pulling image "nginx:1.a.b.c"
153 | ```
154 |
155 | !!! info
156 |     To become acquainted with the available flags supported by the `analyze` command, type `k8sgpt analyze -h` for more information. This will provide you with a comprehensive list of all the flags that can be utilized.
157 |
158 | For a more engaging experience and a better understanding of the capabilities of `k8sgpt` and LLMs (Large Language Models), run the following command:
159 |
160 | ```bash
161 | k8sgpt analyze --explain
162 | ```
163 |
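When you are finished experimenting, you can clean up the demo resources:

```bash
kubectl delete -f broken-pod.yml
kind delete cluster --name k8sgpt-demo
```
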
164 | Congratulations! You have successfully created a local Kubernetes cluster, deployed a "broken Pod" and analyzed it using `k8sgpt`.
165 |
--------------------------------------------------------------------------------
/docs/tutorials/custom-analyzers.md:
--------------------------------------------------------------------------------
1 | # Custom Analyzers
2 |
3 | In this tutorial, we will learn how to create custom analyzers for K8sGPT.
4 | We will create a custom analyzer that checks a Linux host for resource issues and provides recommendations.
5 |
6 | [Full example code](https://github.com/k8sgpt-ai/go-custom-analyzer)
7 |
8 | ## Why?
9 |
10 | There are use cases where you might want to create custom analyzers to check for specific issues in your environment. This would be in conjunction with the K8sGPT built-in analyzers.
11 | For example, you may wish to scan the Kubernetes cluster nodes more deeply to understand if there are underlying issues that are related to issues in the cluster.
12 |
13 | ## Prerequisites
14 |
15 | - [K8sGPT CLI](https://github.com/k8sgpt-ai/k8sgpt.git)
16 | - [Golang](https://golang.org/doc/install) go1.22 or higher
17 |
18 | ### Writing a simple analyzer
19 |
20 | The K8sGPT CLI, operator and custom analyzers all use a GRPC API to communicate with each other. The API is defined in the [buf.build/k8sgpt-ai/k8sgpt](https://buf.build/k8sgpt-ai/k8sgpt/docs/main:schema.v1) repository. Buf is a tool that helps you manage Protobuf files. You can install it by following the instructions [here](https://docs.buf.build/installation).
21 | Another advantage of buf is that when you import a Protobuf file, it will automatically download the dependencies for you. This is useful when you are working with Protobuf files that have dependencies on other Protobuf files. Additionally, you'll always be able to get the latest version of the Protobuf files.
22 |
23 | ### Project setup
24 |
25 | Let's create a new simple golang project. We will use the following directory structure:
26 |
27 | ```bash
28 | mkdir -p custom-analyzer
29 | cd custom-analyzer
30 | go mod init github.com/<your-username>/custom-analyzer
31 | ```
32 |
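With the module initialised, you can pre-fetch the dependencies used by the snippets below (module paths taken from their import blocks):

```bash
go get buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go
go get buf.build/gen/go/k8sgpt-ai/k8sgpt/protocolbuffers/go
go get google.golang.org/grpc
go get github.com/ricochet2200/go-disk-usage/du
```
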
33 | Once we have this structure let's create a simple main.go file with the following content:
34 |
35 | ```go
36 | // main.go
37 | package main
38 |
39 | import (
40 | 	"errors"
41 | 	"fmt"
42 | 	"net"
43 | 	"net/http"
44 |
45 | 	rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
46 | 	"github.com/k8sgpt-ai/go-custom-analyzer/pkg/analyzer"
47 | 	"google.golang.org/grpc"
48 | 	"google.golang.org/grpc/reflection"
49 | )
50 |
51 | func main() {
52 | 	fmt.Println("Starting!")
53 | 	var err error
54 | 	address := fmt.Sprintf(":%s", "8085")
55 | 	lis, err := net.Listen("tcp", address)
56 | 	if err != nil {
57 | 		panic(err)
58 | 	}
59 | 	grpcServer := grpc.NewServer()
60 | 	reflection.Register(grpcServer)
61 | 	aa := analyzer.Analyzer{}
62 | 	rpc.RegisterCustomAnalyzerServiceServer(grpcServer, aa.Handler)
63 | 	if err := grpcServer.Serve(
64 | 		lis,
65 | 	); err != nil && !errors.Is(err, http.ErrServerClosed) {
66 | 		return
67 | 	}
68 | }
69 | ```
70 |
71 | The most important part of this file is here:
72 |
73 | ```go
74 | aa := analyzer.Analyzer{}
75 | 	rpc.RegisterCustomAnalyzerServiceServer(grpcServer, aa.Handler)
76 | ```
77 |
78 | Let's go ahead and create the `analyzer` package with the following structure:
79 |
80 | ```bash
81 | mkdir -p pkg/analyzer
82 | ```
83 |
84 | Now let's create the `analyzer.go` file with the following content:
85 |
86 | ```go
87 | // analyzer.go
88 |
89 | package analyzer
90 |
91 | import (
92 | "context"
93 | "fmt"
94 |
95 | rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
96 | v1 "buf.build/gen/go/k8sgpt-ai/k8sgpt/protocolbuffers/go/schema/v1"
97 | "github.com/ricochet2200/go-disk-usage/du"
98 | )
99 |
100 | type Handler struct {
101 | rpc.CustomAnalyzerServiceServer
102 | }
103 | type Analyzer struct {
104 | Handler *Handler
105 | }
106 |
107 | func (a *Handler) Run(context.Context, *v1.RunRequest) (*v1.RunResponse, error) {
108 | response := &v1.RunResponse{
109 | Result: &v1.Result{
110 | Name: "example",
111 | Details: "example",
112 | Error: []*v1.ErrorDetail{
113 | 				{
114 | Text: "This is an example error message!",
115 | },
116 | },
117 | },
118 | }
119 |
120 | return response, nil
121 | }
122 | ```
123 |
124 | This file contains the `Handler` struct, which implements the `Run` method. This method is called when the analyzer is run; in this example, we are returning an error message.
125 | The `Run` method takes a context and a `RunRequest` as arguments and returns a `RunResponse` and an error. Find the available API [here](https://buf.build/k8sgpt-ai/k8sgpt/file/1379a5a1889d4bf49494b2e2b8e36164:schema/v1/custom_analyzer.proto).
126 |
127 | ### Implementing some custom logic
128 |
129 | Now that we have the basic structure in place, let's implement some custom logic. We will check the disk usage on the host and report it back; the sketch after the snippet shows how to flag an error only when usage is above a certain threshold.
130 |
131 | ```go
132 | // analyzer.go
133 | func (a *Handler) Run(context.Context, *v1.RunRequest) (*v1.RunResponse, error) {
134 | 	fmt.Println("Running analyzer")
135 | 	usage := du.NewDiskUsage("/")
136 | 	diskUsage := int((usage.Size() - usage.Free()) * 100 / usage.Size())
137 | 	return &v1.RunResponse{
138 | 		Result: &v1.Result{
139 | 			Name:    "diskuse",
140 | 			Details: fmt.Sprintf("Disk usage is %d%%", diskUsage),
141 | 			Error: []*v1.ErrorDetail{
142 | 				{
143 | 					Text: fmt.Sprintf("Disk usage is %d%%", diskUsage),
144 | },
145 | },
146 | },
147 | }, nil
148 | }
149 | ```
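150 | 
151 | The snippet above always attaches the usage as an error. A minimal way to honour the threshold mentioned earlier is sketched below; the 90% value is an arbitrary choice and not part of the original example:
152 | 
153 | ```go
154 | // Hypothetical threshold gating: only attach an ErrorDetail when
155 | // usage crosses the limit, instead of reporting unconditionally.
156 | const diskUsageThreshold = 90 // percent
157 | 
158 | errorDetails := []*v1.ErrorDetail{}
159 | if diskUsage > diskUsageThreshold {
160 | 	errorDetails = append(errorDetails, &v1.ErrorDetail{
161 | 		Text: fmt.Sprintf("Disk usage is %d%%, above the %d%% threshold", diskUsage, diskUsageThreshold),
162 | 	})
163 | }
164 | // ...then set Error: errorDetails on the returned v1.Result.
165 | ```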
150 |
151 | ### Testing it out
152 |
153 | To test this with K8sGPT, we need to update the local K8sGPT CLI configuration to point to the custom analyzer. We can do this by running the following command:
154 |
155 | ```bash
156 | k8sgpt custom-analyzer add -n diskuse
157 | ```
158 |
159 | This will add the custom analyzer `diskuse` to the list of available analyzers in the K8sGPT CLI.
160 |
161 | ```bash
162 | k8sgpt custom-analyzer list
163 | Active:
164 | > diskuse
165 | ```
166 |
167 | To execute the analyzer, run the following commands:
168 | 
169 | - run the custom analyzer
170 |
171 | ```bash
172 | go run main.go
173 | ```
174 |
175 | - execute the analyzer
176 |
177 | ```bash
178 | k8sgpt analyze --custom-analysis
179 | ```
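180 | 
181 | Since the server registers gRPC reflection, you can optionally sanity-check that it is listening before running the analysis (this assumes [grpcurl](https://github.com/fullstorydev/grpcurl) is installed; it is not required by the tutorial):
182 | 
183 | ```bash
184 | grpcurl -plaintext localhost:8085 list
185 | ```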
180 |
181 | ## What's next?
182 |
183 | Now that you've got the basics of writing a custom analyzer, you can extend it to check for other issues on your hosts or in your Kubernetes cluster. You can also create more complex analyzers that check for multiple issues and provide more detailed recommendations.
184 |
--------------------------------------------------------------------------------
/docs/tutorials/custom-rest-backend.md:
--------------------------------------------------------------------------------
1 | # Custom Rest Backend
2 | This tutorial guides you through the process of integrating a custom backend with k8sgpt using a RESTful API. This setup is particularly useful when you want to integrate Retrieval-Augmented Generation (RAG) or an AI agent with k8sgpt.
3 | In this tutorial, we will store a CNCF Q&A dataset for knowledge retrieval, build a simple RAG application on top of it, and integrate that application with k8sgpt.
4 |
5 | ## API Specification
6 | To ensure k8sgpt can interact with your custom backend, implement the following API endpoint using the OpenAPI schema:
7 |
8 | ### OpenAPI Specification
9 | ```yaml
10 | openapi: 3.0.0
11 | info:
12 | title: Custom REST Backend API
13 | version: 1.0.0
14 | paths:
15 | /v1/completions:
16 | post:
17 | summary: Generate a text-based response from the custom backend
18 | requestBody:
19 | required: true
20 | content:
21 | application/json:
22 | schema:
23 | type: object
24 | properties:
25 | model:
26 | type: string
27 | description: The name of the model to use.
28 | prompt:
29 | type: string
30 | description: The textual prompt to send to the model.
31 | options:
32 | type: object
33 |                   additionalProperties: true
35 | description: Model-specific options, such as temperature.
36 | required:
37 | - model
38 | - prompt
39 | responses:
40 | "200":
41 | description: Successful response
42 | content:
43 | application/json:
44 | schema:
45 | type: object
46 | properties:
47 | model:
48 | type: string
49 | description: The model name that generated the response.
50 | created_at:
51 | type: string
52 | format: date-time
53 | description: The timestamp of the response.
54 | response:
55 | type: string
56 | description: The textual response itself.
57 | required:
58 | - model
59 | - created_at
60 | - response
61 | "400":
62 | description: Bad Request
63 | "500":
64 | description: Internal Server Error
65 | ```
66 | ### Example Interaction
67 |
68 | #### Request
69 | ```json
70 | {
71 | "model": "gpt-4",
72 | "prompt": "Explain the process of photosynthesis.",
73 | "options": {
74 | "temperature": 0.7,
75 | "max_tokens": 150
76 | }
77 | }
78 | ```
79 |
80 | #### Response
81 | ```json
82 | {
83 | "model": "gpt-4",
84 | "created_at": "2025-01-14T10:00:00Z",
85 | "response": "Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize foods with the help of chlorophyll."
86 | }
87 | ```
88 |
89 | ### Implementation Notes
90 |
91 | - **Endpoint Configuration**: Ensure the completions endpoint (`/v1/completions` in the schema above) is reachable and adheres to the provided schema. The concrete path is whatever you configure as the k8sgpt base URL; the example service later in this tutorial serves `/completion`.
92 |
93 | - **Error Handling**: Implement robust error handling to manage invalid requests or processing failures.
94 |
95 | By following this specification, your custom REST service will integrate seamlessly with k8sgpt, enabling powerful and customizable AI-driven functionality.
96 | ## Prerequisites
97 |
98 | - [K8sGPT CLI](https://github.com/k8sgpt-ai/k8sgpt.git)
99 | - [Golang](https://golang.org/doc/install) go1.22 or higher
100 | - [langchaingo](https://github.com/tmc/langchaingo) library for building RAG applications
101 | - [gin](https://github.com/gin-gonic/gin) for handling RESTful APIs in Go
102 | - [Qdrant](https://github.com/qdrant/qdrant) vector database for storing and searching through knowledge bases
103 | - [Ollama](https://github.com/ollama/ollama) service to run large language models
104 |
105 | ## Writing a simple RAG backend
106 | ### Setup
107 | Let's create a simple new Go project.
108 | ```bash
109 | mkdir -p custom-backend
110 | cd custom-backend
111 | go mod init github.com//custom-backend
112 | ```
113 | Install the necessary dependencies for the RAG application and the RESTful API:
114 |
115 | ```bash
116 | go get -u github.com/tmc/langchaingo
117 | go get -u github.com/gin-gonic/gin
118 | ```
119 | Once we have this structure, let's create a simple main.go file with the following content:
120 | ```golang
121 | // main.go
122 | package main
123 |
124 | import (
125 | "context"
126 | "fmt"
127 | "net/http"
128 | "net/url"
129 | "strings"
130 | "time"
131 |
132 | "github.com/gin-gonic/gin"
133 | "github.com/tmc/langchaingo/embeddings"
134 | "github.com/tmc/langchaingo/llms"
135 | "github.com/tmc/langchaingo/llms/ollama"
136 | "github.com/tmc/langchaingo/vectorstores"
137 | "github.com/tmc/langchaingo/vectorstores/qdrant"
138 | )
139 |
140 | var (
141 | ollama_url = "http://localhost:11434"
142 | listenAddr = ":8090"
143 | )
144 |
145 | func main() {
146 | server := gin.Default()
147 | server.POST("/completion", func(c *gin.Context) {
148 | var req CustomRestRequest
149 | if err := c.ShouldBindJSON(&req); err != nil {
150 | c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
151 | return
152 | }
153 | content, err := rag(ollama_url, req)
154 | if err != nil {
155 | c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
156 | return
157 | }
158 | resp := CustomRestResponse{
159 | Model: req.Model,
160 | CreatedAt: time.Now(),
161 | Response: content,
162 | }
163 | c.JSON(http.StatusOK, resp)
164 | })
165 | // start backend server
166 | err := server.Run(listenAddr)
167 | if err != nil {
168 | 		fmt.Printf("Error: %v\n", err)
169 | }
170 | }
171 | ```
172 | This basic implementation sets up a RESTful API endpoint `/completion` that receives a `CustomRestRequest` from k8sgpt and returns a `CustomRestResponse`. The `rag` function handles the RAG logic. The structures of the request and response are as follows:
173 |
174 | ```golang
175 | type CustomRestRequest struct {
176 | Model string `json:"model"`
177 |
178 | // Prompt is the textual prompt to send to the model.
179 | Prompt string `json:"prompt"`
180 |
181 | // Options lists model-specific options. For example, temperature can be
182 | // set through this field, if the model supports it.
183 | Options map[string]interface{} `json:"options"`
184 | }
185 |
186 | type CustomRestResponse struct {
187 | // Model is the model name that generated the response.
188 | Model string `json:"model"`
189 |
190 | // CreatedAt is the timestamp of the response.
191 | CreatedAt time.Time `json:"created_at"`
192 |
193 | // Response is the textual response itself.
194 | Response string `json:"response"`
195 | }
196 | ```
197 |
198 | ### Implementing a simple RAG
199 | Now, we will build the RAG pipeline using `langchaingo`. The RAG application will query a knowledge base stored in `Qdrant` and use a large language model from `ollama` to generate responses.
200 | First, ensure that you have `ollama` and `Qdrant` running locally.
201 | ```bash
202 | # run Ollama
203 | ollama run llama3.1
204 |
205 | # run Qdrant
206 | docker run -p 6333:6333 -p 6334:6334 \
207 | -v $(pwd)/qdrant_storage:/qdrant/storage:z \
208 | qdrant/qdrant
209 |
210 | ```
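211 | You can optionally verify that Qdrant is reachable before loading any data (a simple check against Qdrant's REST API):
212 | ```bash
213 | curl http://localhost:6333/collections
214 | ```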
211 | We can download the `CNCF Q&A dataset` from [huggingface](https://huggingface.co/datasets/Kubermatic/cncf-question-and-answer-dataset-for-llm-training), and then load it into `Qdrant` using the Python script below.
212 | ```python
213 | from langchain_community.embeddings import OllamaEmbeddings
214 | from langchain_community.document_loaders import CSVLoader
215 | from langchain_qdrant import QdrantVectorStore
216 |
217 | embeddings = OllamaEmbeddings(base_url="http://localhost:11434", model="llama3.1")
218 | loader = CSVLoader(file_path='./cncf_qa.csv', csv_args={
219 | 'delimiter': ',',
220 | 'quotechar': '"',
221 | 'fieldnames': ['Question', 'Answer', 'Project', 'Filename', 'Subcategory', 'Category']
222 | })
223 | data = loader.load()
224 | qdrant = QdrantVectorStore.from_documents(
225 | data,
226 | embeddings,
227 | url="localhost:6333",
228 | prefer_grpc=False,
229 | collection_name="my_documents",
230 | )
233 | ```
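234 | The script above assumes the corresponding LangChain packages are installed; something along these lines should work (exact package names may vary with your LangChain version):
235 | ```bash
236 | pip install langchain langchain-community langchain-qdrant
237 | ```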
234 | Next, implement the RAG pipeline logic.
235 | ```golang
236 | func rag(serverURL string, req CustomRestRequest) (string, error) {
237 | model := req.Model
238 | llm, err := ollama.New(ollama.WithServerURL(serverURL), ollama.WithModel(model))
239 | if err != nil {
240 | return "", err
241 | }
242 |
243 | embedder, err := embeddings.NewEmbedder(llm)
244 | if err != nil {
245 | return "", err
246 | }
247 |
248 | url, err := url.Parse("http://localhost:6333")
249 | if err != nil {
250 | return "", err
251 | }
252 |
253 | 	// create a vector store client
254 | store, err := qdrant.New(
255 | qdrant.WithURL(*url),
256 | qdrant.WithCollectionName("my_documents"),
257 | qdrant.WithEmbedder(embedder),
258 | qdrant.WithContentKey("page_content"),
259 | )
260 | if err != nil {
261 | 		return "", err
262 | }
263 |
264 | optionsVector := []vectorstores.Option{
265 | vectorstores.WithScoreThreshold(0.6),
266 | }
267 |
268 | retriever := vectorstores.ToRetriever(store, 10, optionsVector...)
269 | 	errMessage, _ := req.Options["message"].(string)
270 | // search local knowledge
271 | resDocs, err := retriever.GetRelevantDocuments(context.Background(), errMessage)
272 | if err != nil {
273 | return "", err
274 | }
275 |
276 | // get content
277 | x := make([]string, len(resDocs))
278 | for i, doc := range resDocs {
279 | x[i] = doc.PageContent
280 | }
281 |
282 | // generate content by LLM
283 | 	ragPromptTemplate := `Based on the context: %s;
284 | 	Please generate a response to the following query without including the context in the response. If the context is empty, answer using the model's own knowledge and capabilities: %s`
285 | prompt := fmt.Sprintf(ragPromptTemplate, strings.Join(x, "; "), req.Prompt)
286 | ctx := context.Background()
287 | completion, err := llms.GenerateFromSinglePrompt(ctx, llm, prompt)
288 | if err != nil {
289 | return "", err
290 | }
291 | fmt.Println("Error: "+errMessage, "Answer: "+completion)
292 | 	return completion, nil
293 | }
294 | ```
295 |
296 | ### Testing it out
297 | To test this with K8sGPT, we need to add a `customrest` AI backend configuration pointing at this RAG service; note that `--baseurl` carries the full path of the completion endpoint. We can do this by running the following command:
298 | ```bash
299 | ./k8sgpt auth add --backend customrest --baseurl http://localhost:8090/completion --model llama3.1
300 | ```
301 | This will add the custom RAG service to the list of available backends in the K8sGPT CLI.
302 | To explain the analysis results using the custom RAG pipeline, we can run the following command:
303 | ```bash
304 | ./k8sgpt analyze --backend customrest --explain
305 | ```
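306 | You can also exercise the backend directly with `curl`, bypassing k8sgpt, to confirm the request/response contract (this assumes the tutorial service is listening on :8090; the `message` option mirrors what the `rag` function reads):
307 | ```bash
308 | curl -s -X POST http://localhost:8090/completion \
309 |   -H "Content-Type: application/json" \
310 |   -d '{"model": "llama3.1", "prompt": "What is Kubernetes?", "options": {"message": "example error"}}'
311 | ```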
306 |
307 | ## What's next?
308 | Now that you've got the basics of writing a custom AI backend, you can extend it to use a private dataset for knowledge retrieval. You can also build more complex AI pipelines to explain the results produced by the `Analyzers` and provide more detailed recommendations.
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 |
2 | Apache License
3 | Version 2.0, January 2004
4 | http://www.apache.org/licenses/
5 |
6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
7 |
8 | 1. Definitions.
9 |
10 | "License" shall mean the terms and conditions for use, reproduction,
11 | and distribution as defined by Sections 1 through 9 of this document.
12 |
13 | "Licensor" shall mean the copyright owner or entity authorized by
14 | the copyright owner that is granting the License.
15 |
16 | "Legal Entity" shall mean the union of the acting entity and all
17 | other entities that control, are controlled by, or are under common
18 | control with that entity. For the purposes of this definition,
19 | "control" means (i) the power, direct or indirect, to cause the
20 | direction or management of such entity, whether by contract or
21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
22 | outstanding shares, or (iii) beneficial ownership of such entity.
23 |
24 | "You" (or "Your") shall mean an individual or Legal Entity
25 | exercising permissions granted by this License.
26 |
27 | "Source" form shall mean the preferred form for making modifications,
28 | including but not limited to software source code, documentation
29 | source, and configuration files.
30 |
31 | "Object" form shall mean any form resulting from mechanical
32 | transformation or translation of a Source form, including but
33 | not limited to compiled object code, generated documentation,
34 | and conversions to other media types.
35 |
36 | "Work" shall mean the work of authorship, whether in Source or
37 | Object form, made available under the License, as indicated by a
38 | copyright notice that is included in or attached to the work
39 | (an example is provided in the Appendix below).
40 |
41 | "Derivative Works" shall mean any work, whether in Source or Object
42 | form, that is based on (or derived from) the Work and for which the
43 | editorial revisions, annotations, elaborations, or other modifications
44 | represent, as a whole, an original work of authorship. For the purposes
45 | of this License, Derivative Works shall not include works that remain
46 | separable from, or merely link (or bind by name) to the interfaces of,
47 | the Work and Derivative Works thereof.
48 |
49 | "Contribution" shall mean any work of authorship, including
50 | the original version of the Work and any modifications or additions
51 | to that Work or Derivative Works thereof, that is intentionally
52 | submitted to Licensor for inclusion in the Work by the copyright owner
53 | or by an individual or Legal Entity authorized to submit on behalf of
54 | the copyright owner. For the purposes of this definition, "submitted"
55 | means any form of electronic, verbal, or written communication sent
56 | to the Licensor or its representatives, including but not limited to
57 | communication on electronic mailing lists, source code control systems,
58 | and issue tracking systems that are managed by, or on behalf of, the
59 | Licensor for the purpose of discussing and improving the Work, but
60 | excluding communication that is conspicuously marked or otherwise
61 | designated in writing by the copyright owner as "Not a Contribution."
62 |
63 | "Contributor" shall mean Licensor and any individual or Legal Entity
64 | on behalf of whom a Contribution has been received by Licensor and
65 | subsequently incorporated within the Work.
66 |
67 | 2. Grant of Copyright License. Subject to the terms and conditions of
68 | this License, each Contributor hereby grants to You a perpetual,
69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
70 | copyright license to reproduce, prepare Derivative Works of,
71 | publicly display, publicly perform, sublicense, and distribute the
72 | Work and such Derivative Works in Source or Object form.
73 |
74 | 3. Grant of Patent License. Subject to the terms and conditions of
75 | this License, each Contributor hereby grants to You a perpetual,
76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
77 | (except as stated in this section) patent license to make, have made,
78 | use, offer to sell, sell, import, and otherwise transfer the Work,
79 | where such license applies only to those patent claims licensable
80 | by such Contributor that are necessarily infringed by their
81 | Contribution(s) alone or by combination of their Contribution(s)
82 | with the Work to which such Contribution(s) was submitted. If You
83 | institute patent litigation against any entity (including a
84 | cross-claim or counterclaim in a lawsuit) alleging that the Work
85 | or a Contribution incorporated within the Work constitutes direct
86 | or contributory patent infringement, then any patent licenses
87 | granted to You under this License for that Work shall terminate
88 | as of the date such litigation is filed.
89 |
90 | 4. Redistribution. You may reproduce and distribute copies of the
91 | Work or Derivative Works thereof in any medium, with or without
92 | modifications, and in Source or Object form, provided that You
93 | meet the following conditions:
94 |
95 | (a) You must give any other recipients of the Work or
96 | Derivative Works a copy of this License; and
97 |
98 | (b) You must cause any modified files to carry prominent notices
99 | stating that You changed the files; and
100 |
101 | (c) You must retain, in the Source form of any Derivative Works
102 | that You distribute, all copyright, patent, trademark, and
103 | attribution notices from the Source form of the Work,
104 | excluding those notices that do not pertain to any part of
105 | the Derivative Works; and
106 |
107 | (d) If the Work includes a "NOTICE" text file as part of its
108 | distribution, then any Derivative Works that You distribute must
109 | include a readable copy of the attribution notices contained
110 | within such NOTICE file, excluding those notices that do not
111 | pertain to any part of the Derivative Works, in at least one
112 | of the following places: within a NOTICE text file distributed
113 | as part of the Derivative Works; within the Source form or
114 | documentation, if provided along with the Derivative Works; or,
115 | within a display generated by the Derivative Works, if and
116 | wherever such third-party notices normally appear. The contents
117 | of the NOTICE file are for informational purposes only and
118 | do not modify the License. You may add Your own attribution
119 | notices within Derivative Works that You distribute, alongside
120 | or as an addendum to the NOTICE text from the Work, provided
121 | that such additional attribution notices cannot be construed
122 | as modifying the License.
123 |
124 | You may add Your own copyright statement to Your modifications and
125 | may provide additional or different license terms and conditions
126 | for use, reproduction, or distribution of Your modifications, or
127 | for any such Derivative Works as a whole, provided Your use,
128 | reproduction, and distribution of the Work otherwise complies with
129 | the conditions stated in this License.
130 |
131 | 5. Submission of Contributions. Unless You explicitly state otherwise,
132 | any Contribution intentionally submitted for inclusion in the Work
133 | by You to the Licensor shall be under the terms and conditions of
134 | this License, without any additional terms or conditions.
135 | Notwithstanding the above, nothing herein shall supersede or modify
136 | the terms of any separate license agreement you may have executed
137 | with Licensor regarding such Contributions.
138 |
139 | 6. Trademarks. This License does not grant permission to use the trade
140 | names, trademarks, service marks, or product names of the Licensor,
141 | except as required for reasonable and customary use in describing the
142 | origin of the Work and reproducing the content of the NOTICE file.
143 |
144 | 7. Disclaimer of Warranty. Unless required by applicable law or
145 | agreed to in writing, Licensor provides the Work (and each
146 | Contributor provides its Contributions) on an "AS IS" BASIS,
147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
148 | implied, including, without limitation, any warranties or conditions
149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
150 | PARTICULAR PURPOSE. You are solely responsible for determining the
151 | appropriateness of using or redistributing the Work and assume any
152 | risks associated with Your exercise of permissions under this License.
153 |
154 | 8. Limitation of Liability. In no event and under no legal theory,
155 | whether in tort (including negligence), contract, or otherwise,
156 | unless required by applicable law (such as deliberate and grossly
157 | negligent acts) or agreed to in writing, shall any Contributor be
158 | liable to You for damages, including any direct, indirect, special,
159 | incidental, or consequential damages of any character arising as a
160 | result of this License or out of the use or inability to use the
161 | Work (including but not limited to damages for loss of goodwill,
162 | work stoppage, computer failure or malfunction, or any and all
163 | other commercial damages or losses), even if such Contributor
164 | has been advised of the possibility of such damages.
165 |
166 | 9. Accepting Warranty or Additional Liability. While redistributing
167 | the Work or Derivative Works thereof, You may choose to offer,
168 | and charge a fee for, acceptance of support, warranty, indemnity,
169 | or other liability obligations and/or rights consistent with this
170 | License. However, in accepting such obligations, You may act only
171 | on Your own behalf and on Your sole responsibility, not on behalf
172 | of any other Contributor, and only if You agree to indemnify,
173 | defend, and hold each Contributor harmless for any liability
174 | incurred by, or claims asserted against, such Contributor by reason
175 | of your accepting any such warranty or additional liability.
176 |
177 | END OF TERMS AND CONDITIONS
178 |
179 | APPENDIX: How to apply the Apache License to your work.
180 |
181 | To apply the Apache License to your work, attach the following
182 | boilerplate notice, with the fields enclosed by brackets "[]"
183 | replaced with your own identifying information. (Don't include
184 | the brackets!) The text should be enclosed in the appropriate
185 | comment syntax for the file format. We also recommend that a
186 | file or class name and description of purpose be included on the
187 | same "printed page" as the copyright notice for easier
188 | identification within third-party archives.
189 |
190 | Copyright 2023 The k8sgpt Authors
191 |
192 | Licensed under the Apache License, Version 2.0 (the "License");
193 | you may not use this file except in compliance with the License.
194 | You may obtain a copy of the License at
195 |
196 | http://www.apache.org/licenses/LICENSE-2.0
197 |
198 | Unless required by applicable law or agreed to in writing, software
199 | distributed under the License is distributed on an "AS IS" BASIS,
200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
201 | See the License for the specific language governing permissions and
202 | limitations under the License.
203 |
--------------------------------------------------------------------------------
/docs/reference/cli/filters.md:
--------------------------------------------------------------------------------
1 | # Using Integration and Filters in K8sGPT
2 |
3 | K8sGPT offers integration with other tools. Once an integration is added to K8sGPT, it is possible to use its resources as additional filters.
4 |
5 | * Filters are a way of selecting which resources you wish to be part of your default analysis.
6 | * Integrations are a way to add resources to the filter list.
7 |
8 | Use the following command to access all K8sGPT CLI options related to integrations:
9 | ```bash
10 | k8sgpt integration
11 | ```
12 |
13 |
14 | ## Prerequisites
15 |
16 | To use the K8sGPT integrations, please ensure that you have the latest version of the [K8sGPT CLI](https://docs.k8sgpt.ai/getting-started/installation/) installed.
17 | Also, please make sure that you are connected to a Kubernetes cluster.
18 |
19 |
20 | ## Activating an Integration
21 |
22 | **Prerequisites**
23 |
24 | * You are connected to a running Kubernetes cluster; any cluster will work for demonstration purposes.
25 |
26 | To list all integrations run the following command:
27 | ```bash
28 | k8sgpt integration list
29 | ```
30 |
31 | This will provide you with a list of available integrations.
32 |
33 |
34 | ## Trivy
35 |
36 | The first integration that has been added is Trivy.
37 | [Trivy](https://github.com/aquasecurity/trivy) is an open source, cloud native security scanner, maintained by Aqua Security.
38 |
39 | Activate the Trivy integration:
40 | ```bash
41 | k8sgpt integration activate trivy
42 | ```
43 |
44 | Once activated, you should see the following success message displayed:
45 | ```
46 | Activated integration trivy
47 | ```
48 |
49 | This will install the Trivy Kubernetes Operator into the Kubernetes cluster and make it possible for K8sGPT to interact with the results of the Operator.
50 |
51 | Once the Trivy Operator is installed inside the cluster, K8sGPT will have access to VulnerabilityReports and ConfigAuditReports:
52 | ```bash
53 | ❯ k8sgpt filters list
54 |
55 | Active:
56 | > VulnerabilityReport (integration)
57 | > Pod
58 | > ConfigAuditReport (integration)
59 | Unused:
60 | > PersistentVolumeClaim
61 | > Service
62 | > CronJob
63 | > Node
64 | > MutatingWebhookConfiguration
65 | > Deployment
66 | > StatefulSet
67 | > ValidatingWebhookConfiguration
68 | > ReplicaSet
69 | > Ingress
70 | > HorizontalPodAutoScaler
71 | > PodDisruptionBudget
72 | > NetworkPolicy
73 | ```
74 |
75 | More information can be found in the official [Trivy-Operator documentation](https://aquasecurity.github.io/trivy-operator/latest/docs/crds/).
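76 | 
77 | If you no longer need the integration, it can be deactivated again:
78 | ```bash
79 | k8sgpt integration deactivate trivy
80 | ```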
76 |
77 | ### Using the new filters to analyze your cluster
78 |
79 | Any of the filters listed in the previous section can be used as part of the `k8sgpt analyze` command.
80 |
81 | To use the `VulnerabilityReport` filter from the Trivy integration, set it through the `--filter` flag:
82 | ```bash
83 | k8sgpt analyze --filter VulnerabilityReport
84 | ```
85 |
86 | This command will analyze your cluster's vulnerabilities through K8sGPT. Depending on the VulnerabilityReports available in your cluster, the result of the report will look different:
87 | ```bash
88 | ❯ k8sgpt analyze --filter VulnerabilityReport
89 |
90 | 0 demo/nginx-deployment-7bcfc88bbf(Deployment/nginx-deployment)
91 | - Error: critical Vulnerability found ID: CVE-2023-23914 (learn more at: https://avd.aquasec.com/nvd/cve-2023-23914)
92 | - Error: critical Vulnerability found ID: CVE-2023-27536 (learn more at: https://avd.aquasec.com/nvd/cve-2023-27536)
93 | - Error: critical Vulnerability found ID: CVE-2023-23914 (learn more at: https://avd.aquasec.com/nvd/cve-2023-23914)
94 | - Error: critical Vulnerability found ID: CVE-2023-27536 (learn more at: https://avd.aquasec.com/nvd/cve-2023-27536)
95 | - Error: critical Vulnerability found ID: CVE-2019-8457 (learn more at: https://avd.aquasec.com/nvd/cve-2019-8457)
96 | ```
97 |
98 | ## Prometheus
99 |
100 | K8sGPT supports a [Prometheus](https://prometheus.io) integration. Prometheus is an open source monitoring solution.
101 |
102 | The Prometheus integration does not deploy resources in your cluster. Instead,
103 | it detects a running Prometheus stack in the provided namespace using the
104 | `--namespace` flag. If you do not have Prometheus running, you can install it
105 | using [prometheus-operator](https://prometheus-operator.dev) or [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus).
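106 | 
107 | For example, one common way to get a Prometheus stack running is the community Helm chart (shown here as a sketch; this guide does not prescribe a specific installation method):
108 | ```bash
109 | helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
110 | helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
111 | ```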
106 |
107 | Activate the [Prometheus](https://prometheus.io) integration:
108 | ```bash
109 | k8sgpt integration activate prometheus --namespace
110 | ```
111 |
112 | If successful, you should see the following success message displayed:
113 | ```
114 | Activating prometheus integration...
115 | Found existing installation
116 | Activated integration prometheus
117 | ```
118 |
119 | Otherwise, it will report an error:
120 | ```
121 | Activating prometheus integration...
122 | Prometheus installation not found in namespace: .
123 | Please ensure Prometheus is deployed to analyze.
124 | Error: no prometheus installation found
125 | ```
126 |
127 | Once activated, K8sGPT will have access to new filters:
128 | ```bash
129 | ❯ k8sgpt filters list
130 |
131 | Active:
132 | > PersistentVolumeClaim
133 | > Service
134 | > ValidatingWebhookConfiguration
135 | > MutatingWebhookConfiguration
136 | > PrometheusConfigRelabelReport (integration)
137 | > Deployment
138 | > CronJob
139 | > Node
140 | > Pod
141 | > PrometheusConfigValidate (integration)
142 | > Ingress
143 | > StatefulSet
144 | > PrometheusConfigReport
145 | > ReplicaSet
146 | Unused:
147 | > HorizontalPodAutoScaler
148 | > PodDisruptionBudget
149 | > NetworkPolicy
150 | > Log
151 | > GatewayClass
152 | > Gateway
153 | > HTTPRoute
154 | ```
155 |
156 | ### Using the new filters to analyze your cluster
157 |
158 | Any of the filters listed in the previous section can be used as part of the `k8sgpt analyze` command.
159 |
160 | The `PrometheusConfigValidate` analyzer does a basic "sanity-check" on your
161 | Prometheus configuration to ensure it is formatted correctly and that Prometheus
162 | can load it properly. For example, if Prometheus is deployed in the `monitoring`
163 | namespace and has a bad config, we can analyze the issue using the `--filter` flag:
164 | ```bash
165 | ❯ k8sgpt analyze --filter PrometheusConfigValidate --namespace monitoring --explain
166 |
167 | 0 monitoring/prometheus-test-0(StatefulSet/prometheus-test)
168 | - Error: error validating Prometheus YAML configuration: unknown relabel action "keeps"
169 | Error: Unknown relabel action "keeps" in Prometheus configuration.
170 |
171 | Solution:
172 | 1. Check the Prometheus documentation for valid relabel actions.
173 | 2. Correct the relabel action to a valid one, such as "keep" or "drop".
174 | 3. Ensure the relabel configuration is correct and matches the intended behavior.
175 | 4. Restart Prometheus to apply the changes.
176 | ```
177 |
178 | The `PrometheusConfigRelabelReport` analyzer parses your Prometheus relabeling
179 | rules and reports groups of labels needed by your targets to be scraped successfully.
180 | ```bash
181 | ❯ k8sgpt analyze --filter PrometheusConfigRelabelReport --namespace monitoring --explain
182 |
183 | Discovered and parsed Prometheus scrape configurations.
184 | For targets to be scraped by Prometheus, ensure they are running with
185 | at least one of the following label sets:
186 | - Job: prom-example
187 | - Service Labels:
188 | - app.kubernetes.io/name=prom-example
189 | - Pod Labels:
190 | - app.kubernetes.io/name=prom-example
191 | - Namespaces:
192 | - default
193 | - Ports:
194 | - metrics
195 | - Containers:
196 | - prom-example
197 | - Job: collector
198 | - Service Labels:
199 | - app.kubernetes.io/name=collector
200 | - Pod Labels:
201 | - app.kubernetes.io/name=collector
202 | - Namespaces:
203 | - monitoring
204 | - Ports:
205 | - prom-metrics
206 | - Containers:
207 | - collector
208 | ```
209 |
210 | Note: the LLM prompt includes a subset of your Prometheus relabeling rules to
211 | avoid using too many tokens, so you may not see every label set in the output.
212 |
213 | ## AWS
214 |
215 | The AWS Operator is a tool that allows Kubernetes to manage AWS resources directly, making it easier to integrate AWS services with other Kubernetes applications. This integration helps K8sGPT to interact with the AWS resources managed by the Operator. As a result, you can use K8sGPT to analyze and manage not only your Kubernetes resources but also your AWS resources that are under the management of the AWS Operator.
216 |
217 | Activate the AWS integration:
218 | ```bash
219 | k8sgpt integration activate aws
220 | ```
221 | Once activated, you should see the following success message displayed:
222 | ```
223 | Activated integration aws
224 | ```
225 |
226 | This will enable the AWS Kubernetes Operator integration in the Kubernetes cluster and make it possible for K8sGPT to interact with the results of the Operator.
227 |
228 | Once the AWS integration is activated inside the cluster, K8sGPT will have access to EKS:
229 | ```bash
230 | ❯ k8sgpt filters list
231 |
232 | Active:
233 | > StatefulSet
234 | > Ingress
235 | > Pod
236 | > Node
237 | > ValidatingWebhookConfiguration
238 | > Service
239 | > EKS (integration)
240 | > PersistentVolumeClaim
241 | > MutatingWebhookConfiguration
242 | > CronJob
243 | > Deployment
244 | > ReplicaSet
245 | Unused:
246 | > Log
247 | > GatewayClass
248 | > Gateway
249 | > HTTPRoute
250 | > HorizontalPodAutoScaler
251 | > PodDisruptionBudget
252 | > NetworkPolicy
253 | ```
254 |
255 | More information can be found on the official [AWS-Operator documentation](https://aws.amazon.com/blogs/opensource/aws-service-operator-kubernetes-available/).
256 |
257 | ### Using the new filters to analyze your cluster
258 |
259 | Any of the filters listed in the previous section can be used as part of the `k8sgpt analyze` command.
260 |
261 | > **Note:** Ensure the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables are set as outlined in the [AWS CLI environment variables documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html).
262 |
263 | To use the `EKS` filter from the AWS integration, specify it with the `--filter` flag:
264 | ```bash
265 | k8sgpt analyze --filter EKS
266 | ```
267 |
268 | This command analyzes your cluster's EKS resources using K8sGPT. Make sure your EKS cluster is working in the specified namespace. The report's results will vary based on the EKS reports available in your cluster.
269 |
270 |
271 | ## Kyverno
272 |
273 | [Kyverno](https://kyverno.io/) is a policy engine designed for Kubernetes.
274 |
275 | Kyverno must be installed prior to using this integration.
276 |
277 | To activate the Kyverno integration:
278 | ```
279 | k8sgpt integration activate kyverno
280 |
281 | k8sgpt integration list
282 | Active:
283 | > kyverno
284 | Unused:
285 | > trivy
286 | > prometheus
287 | > aws
288 | > keda
289 | ```
290 |
291 | The following filters will become available:
292 |
293 | * PolicyReport
294 | * ClusterPolicyReport
295 |
296 | ```
297 | k8sgpt filters list
298 | Active:
299 | > ClusterPolicyReport (integration)
300 | > ReplicaSet
301 | > Service
302 | > StatefulSet
303 | > PersistentVolumeClaim
304 | > ValidatingWebhookConfiguration
305 | > MutatingWebhookConfiguration
306 | > PolicyReport (integration)
307 | > Node
308 | > Pod
309 | > Deployment
310 | > Ingress
311 | > CronJob
312 | Unused:
313 | > Log
314 | > GatewayClass
315 | > Gateway
316 | > HTTPRoute
317 | > HorizontalPodAutoScaler
318 | > PodDisruptionBudget
319 | > NetworkPolicy
320 | ```
321 |
322 | Policy reports are generated and managed by Kyverno. You can learn more in the [Kyverno policy reports documentation](https://kyverno.io/docs/policy-reports/).
323 |
324 | Kyverno is currently only supported via the CLI; an operator integration is being developed.
325 |
326 |
327 | ## Adding and removing default filters
328 |
329 | _Add default filters_
330 |
331 | ```
332 | k8sgpt filters add [filter(s)]
333 | ```
334 |
335 | - Single filter: `k8sgpt filters add Service`
336 | - Multiple filters: `k8sgpt filters add Ingress,Pod`
337 |
338 |
339 | _Remove default filters_
340 |
341 | ```
342 | k8sgpt filters remove [filter(s)]
343 | ```
344 |
345 | - Single filter: `k8sgpt filters remove Service`
346 | - Multiple filters: `k8sgpt filters remove Ingress,Pod`
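347 | 
348 | After adding or removing filters, you can verify the active set at any time:
349 | 
350 | ```
351 | k8sgpt filters list
352 | ```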
--------------------------------------------------------------------------------
/docs/reference/providers/backend.md:
--------------------------------------------------------------------------------
1 | # K8sGPT AI Backends
2 |
3 | A Backend (also called a Provider) is a service that provides access to an AI language model. There are many different backends available for K8sGPT. Each backend has its own strengths and weaknesses, so it is important to choose the one that is right for your needs.
4 |
5 | Currently, we have a total of 12 backends available:
6 |
7 | - [OpenAI](https://openai.com/)
8 | - [Cohere](https://cohere.com/)
9 | - [Amazon Bedrock](https://aws.amazon.com/bedrock/)
10 | - [Amazon SageMaker](https://aws.amazon.com/sagemaker/)
11 | - [Azure OpenAI](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service)
12 | - [Google Gemini](https://ai.google.dev/docs/gemini_api_overview)
13 | - [Google Vertex AI](https://cloud.google.com/vertex-ai)
14 | - [Hugging Face](https://huggingface.co)
15 | - [IBM watsonx.ai](https://www.ibm.com/products/watsonx-ai)
16 | - [LocalAI](https://github.com/go-skynet/LocalAI)
17 | - [Ollama](https://github.com/ollama/ollama)
18 | - FakeAI
19 |
20 | ## OpenAI
21 |
22 | OpenAI is the default backend for K8sGPT. We recommend starting with OpenAI if you are new to K8sGPT and have an account on [OpenAI](https://openai.com/). OpenAI provides access to powerful language models such as GPT-4. If you are looking for a powerful and easy-to-use language modeling service, OpenAI is a great option.
23 |
24 | - To use OpenAI, you'll need an OpenAI token for authentication purposes. To generate a token, use:
25 | ```bash
26 | k8sgpt generate
27 | ```
28 | - To set the token in K8sGPT, use the following command:
29 | ```bash
30 | k8sgpt auth add
31 | ```
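32 | - Optionally, you can specify the backend and model explicitly via flags (a hedged variant; the model name here is illustrative):
33 | ```bash
34 | k8sgpt auth add --backend openai --model gpt-4o-mini
35 | ```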
32 | - Run the following command to analyze issues within your cluster using OpenAI:
33 | ```bash
34 | k8sgpt analyze --explain
35 | ```
36 |
37 | ## Cohere
38 |
39 | Cohere allows building conversational apps. It provides a Retrieval-Augmented Generation (RAG) toolkit that improves the accuracy of LLM answers.
40 |
41 | - To use Cohere, visit [Cohere dashboard](https://dashboard.cohere.ai/api-keys).
42 | - To configure the backend in K8sGPT, use the following command:
43 | ```bash
44 | k8sgpt auth add --backend cohere --model command-nightly
45 | ```
46 | - Run the following command to analyze issues within your cluster using Cohere:
47 | ```bash
48 | k8sgpt analyze --explain --backend cohere
49 | ```
50 |
51 | ## Amazon Bedrock
52 |
53 | Amazon Bedrock allows building and scaling generative AI applications.
54 |
55 | - To use Bedrock, make sure you have access to the Bedrock API and models; e.g., in the AWS Console you should see something like this:
56 |
57 | 
58 |
59 | - You will need to set the following local environment variables:
60 | ```
61 | - AWS_ACCESS_KEY
62 | - AWS_SECRET_ACCESS_KEY
63 | - AWS_DEFAULT_REGION
64 | ```
65 |
66 | - To configure the backend in K8sGPT, use the auth command:
67 | ```bash
68 | k8sgpt auth add --backend amazonbedrock --model anthropic.claude-v2
69 | ```
70 | - Run the following command to analyze issues within your cluster using Amazon Bedrock:
71 | ```bash
72 | k8sgpt analyze --explain --backend amazonbedrock
73 | ```
74 |
75 | ## Amazon SageMaker
76 |
77 | The Amazon SageMaker backend allows you to leverage self-deployed and managed language models (LLMs) on Amazon SageMaker.
78 |
79 | An example of how to deploy Amazon SageMaker with CDK is available in the [llm-sagemaker-jumpstart-cdk](https://github.com/zaremb/llm-sagemaker-jumpstart-cdk) repo.
80 |
81 | - To use SageMaker, make sure you have [the AWS CLI configured on your machine](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
82 | - You need to have [an Amazon SageMaker instance set up](https://github.com/zaremb/llm-sagemaker-jumpstart-cdk).
83 | - Run the following command to add SageMaker:
84 | ```bash
85 | k8sgpt auth add --backend amazonsagemaker --providerRegion eu-west-1 --endpointname endpoint-xxxxxxxxxx
86 | ```
87 | - Now you are ready to analyze with the Amazon SageMaker backend:
88 | ```bash
89 | k8sgpt analyze --explain --backend amazonsagemaker
90 | ```
91 |
92 | ## Azure OpenAI
93 |
94 | The Azure OpenAI Provider provides REST API access to OpenAI's powerful language models. It gives users advanced language AI with powerful models, combined with the security and enterprise promise of Azure.
95 |
96 | - The Azure OpenAI Provider requires a deployment as a prerequisite. You can visit their [documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource) to create your own.
97 | To authenticate with k8sgpt, you will need your tenant's Azure OpenAI endpoint `https://your Azure OpenAI Endpoint`, the API key to access your deployment, the deployment name of your model, and the model name itself.
98 |
99 | - Run the following command to authenticate with Azure OpenAI:
100 | ```bash
101 | k8sgpt auth add --backend azureopenai --baseurl https:// --engine --model
102 | ```
103 | - Now you are ready to analyze with the Azure OpenAI backend:
104 | ```bash
105 | k8sgpt analyze --explain --backend azureopenai
106 | ```
107 |
108 | ## Google Gemini
109 |
110 | Google [Gemini](https://blog.google/technology/ai/google-gemini-ai/#performance) provides generative AI capabilities with a multimodal approach (it is capable of understanding not only text, but also code, audio, images and video). With the Gemini models, a new [API](https://ai.google.dev/docs/gemini_api_overview) was introduced, and this is what is now built into K8sGPT. This API also works against the [Google Cloud Vertex AI](https://ai.google.dev/docs/migrate_to_cloud) service. See also [Google AI Studio](https://ai.google.dev/tutorials/ai-studio_quickstart) to get started.
111 |
112 | > NOTE: The Gemini API might still be rolling out to some regions. See the [available regions](https://ai.google.dev/available_regions) for details.
113 |
114 | - To use Google Gemini API in K8sGPT, obtain [the API key](https://ai.google.dev/tutorials/setup).
115 | - To configure the Google backend in K8sGPT with the `gemini-pro` model (see all available [models](https://ai.google.dev/models)), use the auth command:
116 | ```bash
117 | k8sgpt auth add --backend google --model gemini-pro --password ""
118 | ```
119 | - Run the following command to analyze issues within your cluster with the Google provider:
120 | ```bash
121 | k8sgpt analyze --explain --backend google
122 | ```
123 |
124 | ## Google Gemini via Vertex AI
125 |
126 | Google [Gemini](https://blog.google/technology/ai/google-gemini-ai/#performance) provides generative AI capabilities with a multimodal approach (it is capable of understanding not only text, but also code, audio, images and video).
127 |
128 | - To use [Google Vertex AI](https://cloud.google.com/vertex-ai?#build-with-gemini), you need to be authorized via the [Google Cloud SDK](https://cloud.google.com/sdk/install).
129 | The [Vertex AI API](https://console.cloud.google.com/apis/library/vertex-ai.googleapis.com) needs to be enabled.
130 |
131 | > Note: The Vertex AI Gemini API is currently available in these [regions](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini?hl=de#http_request); verify whether these work for your environment.
132 |
133 | - Open a terminal or command prompt and run the following command to authenticate using your Google Cloud credentials:
134 | ```bash
135 | gcloud auth application-default login
136 | ```
137 |
138 | - To configure the Google backend in K8sGPT with the `gemini-pro` model (see all available [models](https://ai.google.dev/models)), use the auth command:
139 | ```bash
140 | k8sgpt auth add --backend googlevertexai --model "gemini-pro" --providerRegion "us-central1" --providerId ""
141 | ```
142 | - Run the following command to analyze issues within your cluster with the Google provider:
143 | ```bash
144 | k8sgpt analyze --explain --backend googlevertexai
145 | ```
146 |
147 | ## HuggingFace
148 |
149 | Hugging Face is a versatile backend for K8sGPT, offering access to a wide range of pre-trained language models. It provides easy-to-use interfaces for both training and inference tasks. Refer to the Hugging Face [documentation](https://huggingface.co/docs) for further insights into model usage and capabilities.
150 |
151 | - To use Hugging Face API in K8sGPT, obtain [the API key](https://huggingface.co/settings/tokens).
152 | - Configure the HuggingFace backend in K8sGPT by specifying the desired model (see all [models](https://huggingface.co/models) here) using auth command:
153 | ```bash
154 | k8sgpt auth add --backend huggingface --model
155 | ```
156 | > NOTE: Since the default gpt-4o-mini model is not available on Hugging Face, a valid backend model is required.
157 |
158 | - Once configured, you can analyze issues within your cluster using the Hugging Face provider with the following command:
159 | ```bash
160 | k8sgpt analyze --explain --backend huggingface
161 | ```
162 |
163 | ## IBM watsonx.ai
164 |
165 | IBM® watsonx.ai™ AI studio is part of the IBM watsonx™ AI and data platform, bringing together new generative AI (gen AI) capabilities powered by foundation models and traditional machine learning (ML) into a powerful studio spanning the AI lifecycle. Tune and guide models with your enterprise data to meet your needs with easy-to-use tools for building and refining performant prompts. With watsonx.ai, you can build AI applications in a fraction of the time and with a fraction of the data.
166 |
167 | - To use [IBM watsonx.ai](https://dataplatform.cloud.ibm.com/login?context=wx), you'll need a watsonx API key and project ID for authentication.
168 |
169 | - You will need to set the following local environment variables:
170 | ```
171 | - WATSONX_API_KEY
172 | - WATSONX_PROJECT_ID
173 | ```
174 | - To configure the backend in K8sGPT, use the auth command:
175 | ```bash
176 | k8sgpt auth add --backend watsonxai --model ibm/granite-13b-chat-v2
177 | ```
178 | - Run the following command to analyze issues within your cluster using IBM watsonx.ai:
179 | ```bash
180 | k8sgpt analyze --explain --backend watsonxai
181 | ```
182 |
183 | ## LocalAI
184 |
185 | LocalAI is a local inference server that exposes an OpenAI-compatible API. It uses llama.cpp and ggml to run inference on consumer-grade hardware. Models supported by LocalAI include, for instance, Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J and Koala.
186 |
187 | - To run local inference, you need to download the models first; for instance, you can find `ggml`-compatible models on [huggingface.com](https://huggingface.co/models?search=ggml) (for example Vicuna, Alpaca and Koala).
188 | - To start the API server, follow the instructions in [LocalAI](https://github.com/go-skynet/LocalAI#example-use-gpt4all-j-model).
189 | - Authenticate K8sGPT with LocalAI:
190 | ```bash
191 | k8sgpt auth add --backend localai --model --baseurl http://localhost:8080/v1
192 | ```
193 | - Analyze with a LocalAI backend:
194 | ```bash
195 | k8sgpt analyze --explain --backend localai
196 | ```
197 |
198 | ## Ollama (via LocalAI backend)
199 |
200 | Ollama is a local inference server with an OpenAI-compatible API. It supports the models listed in the [Ollama library](https://ollama.com/library).
201 |
202 | - To start the API server, follow the instructions in the [Ollama docs](https://github.com/ollama/ollama?tab=readme-ov-file#quickstart).
203 | - Authenticate K8sGPT with LocalAI:
204 | ```bash
205 | k8sgpt auth add --backend localai --model --baseurl http://localhost:11434
206 | ```
207 | - Analyze with a LocalAI backend:
208 | ```bash
209 | k8sgpt analyze --explain --backend localai
210 | ```
211 |
212 | ## Ollama
213 |
214 | Ollama gets you up and running locally with large language models. It runs Llama 2, Code Llama, and other models.
215 |
216 | - To start the Ollama server, follow the instructions in [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#start-ollama):
217 | ```bash
218 | ollama serve
219 | ```
220 | It can also run as a Docker image; follow the instructions in the [Ollama blog](https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image):
221 | ```bash
222 | docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
223 | ```
224 |
225 | - Authenticate K8sGPT with Ollama:
226 | ```bash
227 | k8sgpt auth add --backend ollama --model llama2 --baseurl http://localhost:11434
228 | ```
229 | - Analyze with an Ollama backend:
230 | ```bash
231 | k8sgpt analyze --explain --backend ollama
232 | ```
233 | ## FakeAI
234 |
235 | FakeAI, or the NoOpAiProvider, might be useful in situations where you need to test a new feature or simulate the behaviour of an AI-based system without actually invoking it. It can help you with local development, testing and troubleshooting.
236 | The NoOpAiProvider does not actually perform any AI-based operations but simulates them by echoing the input given as a problem.
237 |
238 | Follow the steps outlined below to learn how to utilize the NoOpAiProvider:
239 |
240 | - Authorize k8sgpt with `noopai` or `noop` as the Backend Provider:
241 | ```
242 | k8sgpt auth add -b noopai
243 | ```
244 | - For the auth token, you can leave it blank, as the NoOpAiProvider works fine with or without a token.
245 |
246 | - Use the analyze and explain commands to check for errors in your Kubernetes cluster; the NoOpAiProvider should return the error as the solution itself:
247 | ```
248 | k8sgpt analyze --explain --backend noopai
249 | ```
250 |
251 |
--------------------------------------------------------------------------------