SpinKube is an open source, Kubernetes native project that streamlines developing,
9 | deploying, and operating WebAssembly (Wasm) workloads in Kubernetes - delivering smaller, more portable applications with exciting compute performance benefits.
10 | {{< blocks/link-down color="info" >}}
11 | {{< /blocks/cover >}}
12 |
13 |
14 | {{% blocks/lead color="secondary" %}}
15 |
16 | SpinKube combines the Spin Operator, Containerd Shim Spin, and Runtime Class Manager (formerly KWasm) open source projects with contributions from Microsoft, SUSE,
20 | Liquid Reply, and Fermyon. By running applications at the Wasm abstraction layer, SpinKube gives
21 | developers a more powerful, efficient and scalable way to optimize application delivery on
22 | Kubernetes.
23 |
24 |
25 | ### Made with Contributions from:
26 |
29 |
30 | ### Overview
31 |
32 | [**Spin Operator**](https://github.com/spinframework/spin-operator/) is a Kubernetes operator that enables
33 | deploying and running Spin applications in Kubernetes. It houses the SpinApp and SpinAppExecutor CRDs,
34 | which configure individual workloads and workload execution settings such as the
35 | runtime class. Spin Operator introduces a host of functionality such as resource-based scaling,
36 | event-driven scaling and much more.
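
As a sketch of what the operator consumes, a minimal SpinApp manifest, based on the examples elsewhere on this site, might look like the following (the app name and image reference here are placeholders):

```yaml
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-app            # placeholder name
spec:
  # OCI reference of the packaged Spin application (placeholder)
  image: "ghcr.io/example/hello-app:v1"
  executor: containerd-shim-spin
  replicas: 2
```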
37 |
38 | [**Containerd Shim Spin**](https://github.com/spinframework/containerd-shim-spin) provides a shim for running Spin
39 | workloads managed by containerd. Spin workloads use this shim via a Kubernetes runtime class, enabling
40 | them to function similarly to container workloads running in Pods.
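
In practice this is wired up with a Kubernetes `RuntimeClass`. A sketch: the class name `wasmtime-spin-v2` matches the executor configuration shown elsewhere on this site, while the `spin` handler name is an assumption about how the shim is registered with containerd on the nodes:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
# handler must match the shim name registered in the node's containerd config
handler: spin
```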
41 |
42 | [**Runtime Class Manager**](https://github.com/spinframework/runtime-class-manager) is an operator that
43 | automates and manages the lifecycle of containerd shims in a Kubernetes environment. This includes tasks
44 | like installation, update, removal, and configuration of shims, reducing manual errors and improving
45 | reliability in managing WebAssembly (Wasm) workloads and other containerd extensions.
46 |
47 | [**Spin Kube Plugin**](https://github.com/spinframework/spin-plugin-kube) is a plugin for the [Spin](https://developer.fermyon.com/spin/v3/index) CLI
48 | that streamlines scaffolding, deploying, and inspecting Spin workloads in Kubernetes.
49 |
50 | ### Get Involved
51 |
52 | We have bi-weekly [community calls](https://docs.google.com/document/d/10is2YoNC0NpXw4_5lSyTfpPph9_A9wBissKGrpFaIrI/edit?usp=sharing) and a [Slack channel](https://cloud-native.slack.com/archives/C06PC7JA1EE). We would love to have you join us!
53 |
54 | Check out the [contribution guidelines](/docs/contrib/) to learn how to get involved with the project.
55 |
56 | ---
57 |
58 | {{% /blocks/lead %}}
59 |
60 | {{% blocks/section %}}
61 |
62 |
66 |
67 |
18 |
19 | **Containerd Shim Spin**
20 |
21 | The [Containerd Shim Spin repository](https://github.com/spinframework/containerd-shim-spin) provides
22 | shim implementations for running WebAssembly ([Wasm](https://webassembly.org/)) / Wasm System
23 | Interface ([WASI](https://github.com/WebAssembly/WASI)) workloads using
24 | [runwasi](https://github.com/deislabs/runwasi) as a library, whereby workloads built using the [Spin
25 | framework](https://github.com/fermyon/spin) can function similarly to container workloads in a
26 | Kubernetes environment.
27 |
28 |
29 |
30 | **Runtime Class Manager**
31 |
32 | The [Runtime Class Manager, also known as the Containerd Shim Lifecycle
33 | Operator](https://github.com/spinframework/runtime-class-manager), is designed to automate and manage the
34 | lifecycle of containerd shims in a Kubernetes environment. This includes tasks like installation,
35 | update, removal, and configuration of shims, reducing manual errors and improving reliability in
36 | managing WebAssembly (Wasm) workloads and other containerd extensions.
37 |
38 |
39 |
40 | **Spin Plugin for Kubernetes**
41 |
42 | The [Spin plugin for Kubernetes](https://github.com/spinframework/spin-plugin-kube), known as `spin
43 | kube`, facilitates the translation of existing [Spin
44 | applications](https://developer.fermyon.com/spin) into the Kubernetes custom resource that will be
45 | deployed and managed on your cluster. The plugin takes your Spin application manifest and
46 | scaffolds it into a Kubernetes YAML manifest, which can be deployed and managed with `kubectl`. This allows
47 | Kubernetes to manage and run Wasm workloads in a way similar to traditional container workloads.
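
The workflow described above might look like this (a hedged sketch: the image reference is a placeholder, and the flag assumes the `spin kube scaffold` command documented in the plugin's README):

```shell
# Scaffold a SpinApp manifest from a published Spin app image (placeholder ref)
spin kube scaffold --from ghcr.io/example/hello-app:v1 > app.yaml

# Deploy and inspect it with kubectl like any other Kubernetes resource
kubectl apply -f app.yaml
kubectl get spinapps
```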
48 |
49 |
50 |
51 | **Spin Operator**
52 |
53 | The [Spin Operator](https://github.com/spinframework/spin-operator/) enables deploying Spin applications
54 | to Kubernetes. The foundation of this project is built using the
55 | [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) framework. Spin Operator defines the
56 | SpinApp Custom Resource Definition (CRD). It watches SpinApp Custom Resources (e.g., the Spin app
57 | image, replicas, schedulers, and other user-defined values) and realizes the desired state in the
58 | Kubernetes cluster. Spin Operator introduces a host of functionality such as resource-based scaling,
59 | event-driven scaling, and much more.
60 |
61 | {{% /blocks/section %}}
62 |
--------------------------------------------------------------------------------
/content/en/blog/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Blog
3 | menu: {main: {weight: 30}}
4 | ---
5 |
6 | This is the **blog** section. Files in these directories will be listed in reverse chronological order.
7 |
--------------------------------------------------------------------------------
/content/en/blog/community/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Community
3 | weight: 30
4 | ---
5 |
--------------------------------------------------------------------------------
/content/en/blog/community/spinkube-kind-rd/featured-background.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/community/spinkube-kind-rd/featured-background.jpeg
--------------------------------------------------------------------------------
/content/en/blog/community/spinkube-kind-rd/lima-kind-create-cluster.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/community/spinkube-kind-rd/lima-kind-create-cluster.png
--------------------------------------------------------------------------------
/content/en/blog/community/spinkube-kind-rd/spikube-app-port-nix.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/community/spinkube-kind-rd/spikube-app-port-nix.png
--------------------------------------------------------------------------------
/content/en/blog/community/spinkube-kind-rd/spikube-app-port.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/community/spinkube-kind-rd/spikube-app-port.png
--------------------------------------------------------------------------------
/content/en/blog/community/spinkube-kind-rd/wsl-kind-create-cluster.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/community/spinkube-kind-rd/wsl-kind-create-cluster.png
--------------------------------------------------------------------------------
/content/en/blog/news/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: News
3 | weight: 20
4 | ---
5 |
--------------------------------------------------------------------------------
/content/en/blog/news/first-post/featured-background.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/news/first-post/featured-background.jpeg
--------------------------------------------------------------------------------
/content/en/blog/news/first-post/spinkube-diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/news/first-post/spinkube-diagram.png
--------------------------------------------------------------------------------
/content/en/blog/news/first-post/spinkube-scaling.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/news/first-post/spinkube-scaling.mp4
--------------------------------------------------------------------------------
/content/en/blog/news/five-new-things/featured-background.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/news/five-new-things/featured-background.jpeg
--------------------------------------------------------------------------------
/content/en/blog/news/five-new-things/index.md:
--------------------------------------------------------------------------------
1 | ---
2 | date: 2024-11-12
3 | title: Five New Things in SpinKube
4 | linkTitle: Five New Things in SpinKube
5 | description: >
6 | Catching up on what's new in SpinKube
7 | author: The SpinKube Team ([@SpinKube](https://mastodon.social/@SpinKube))
8 | resources:
9 | - src: "**.{png,jpg}"
10 | title: "Image #:counter"
11 | ---
12 |
13 | Since we publicly [released](/blog/2024/03/13/introducing-spinkube/) SpinKube in March, we've been hard at work steadily making it better. Spin Operator [`v0.4.0`](https://github.com/spinframework/spin-operator/releases/tag/v0.4.0), Containerd shim for Spin [`v0.17.0`](https://github.com/spinframework/containerd-shim-spin/releases/tag/v0.17.0), and `spin kube` plugin [`v0.3.0`](https://github.com/spinframework/spin-plugin-kube/releases/tag/v0.3.0) have all just been released. To celebrate that, here are five new things in SpinKube you should know about.
14 |
15 | ## Selective Deployments
16 |
17 | SpinKube now supports selectively deploying a subset of a Spin app's components. Consider this simple example Spin application (named `salutations` in the [example repo](https://github.com/spinframework/spin-operator/tree/main/apps/salutations)) composed of two HTTP-triggered components: `hello` and `goodbye`. In the newly added `components` field, you can select which components you would like to be part of the deployment. Here's an example of what the YAML for a selectively deployed app might look like:
18 |
19 | ```yaml
20 | apiVersion: core.spinkube.dev/v1alpha1
21 | kind: SpinApp
22 | metadata:
23 | name: salutations
24 | spec:
25 | image: "ghcr.io/spinkube/spin-operator/salutations:20241105-223428-g4da3171"
26 | executor: containerd-shim-spin
27 | replicas: 1
28 | components:
29 | - hello
30 | ```
31 |
32 | We're really excited about this feature because it makes developing microservices easier. Locally develop your application in one code base. Then, when you go to production, you can split your app based on the characteristics of each component. For example, you can run your front end closer to the end user while keeping your back end colocated with your database.
33 |
34 | If you want to learn more about how to use selective deployments in SpinKube, check out this [tutorial](https://www.spinkube.dev/docs/topics/selective-deployments/).
35 |
36 | ## OpenTelemetry Support
37 |
38 | Spin has had [OpenTelemetry](https://opentelemetry.io/) support for a while now, and it's now available in SpinKube. OpenTelemetry is an observability standard that makes understanding your applications running in production much easier via traces and metrics.
39 |
40 | To configure a `SpinApp` to send telemetry data to an OpenTelemetry collector you need to modify the `SpinAppExecutor` custom resource.
41 |
42 | ```yaml
43 | apiVersion: core.spinkube.dev/v1alpha1
44 | kind: SpinAppExecutor
45 | metadata:
46 | name: otel-shim-executor
47 | spec:
48 | createDeployment: true
49 | deploymentConfig:
50 | runtimeClassName: wasmtime-spin-v2
51 | installDefaultCACerts: true
52 | otel:
53 | exporter_otlp_endpoint: http://otel-collector.default.svc.cluster.local:4318
54 | ```
55 |
56 | Now any Spin app using this executor will send telemetry to the collector at `otel-collector.default.svc.cluster.local:4318`. For full details on how to use OpenTelemetry in SpinKube, check out this [tutorial](/docs/topics/monitoring-your-app).
57 |
58 | 
59 |
60 | ## MQTT Trigger Support
61 |
62 | The Containerd Shim for Spin has added support for [MQTT triggers](https://github.com/spinframework/spin-trigger-mqtt). [MQTT](https://mqtt.org/) is a lightweight, publish-subscribe messaging protocol that enables devices to send and receive messages through a broker. It's used all over the place to enable Internet of Things (IoT) designs.
63 |
64 | If you want to learn more about how to use this new trigger, check out this [blog post](https://www.fermyon.com/blog/mqtt_trigger_spinkube) by Kate Goldenring.
65 |
66 | ## Spintainer Executor
67 |
68 | In SpinKube there is a concept of an executor. An executor is defined by the `SpinAppExecutor` CRD, and it configures how a `SpinApp` is run. Typically, you'll want to define an executor that uses the Containerd shim for Spin.
69 |
70 | ```yaml
71 | apiVersion: core.spinkube.dev/v1alpha1
72 | kind: SpinAppExecutor
73 | metadata:
74 | name: containerd-shim-spin
75 | spec:
76 | createDeployment: true
77 | deploymentConfig:
78 | runtimeClassName: wasmtime-spin-v2
79 | installDefaultCACerts: true
80 | ```
81 |
82 | However, it can also be useful to run your Spin application directly in a container. You might want to:
83 |
84 | - Use a specific version of Spin.
85 | - Use a custom trigger or plugin.
86 | - Work around a lack of cluster permissions to install the shim.
87 |
88 | This is enabled by the new executor we've dubbed 'Spintainer'.
89 |
90 | ```yaml
91 | apiVersion: core.spinkube.dev/v1alpha1
92 | kind: SpinAppExecutor
93 | metadata:
94 | name: spintainer
95 | spec:
96 | createDeployment: true
97 | deploymentConfig:
98 | installDefaultCACerts: true
99 | spinImage: ghcr.io/fermyon/spin:v3.0.0
100 | ```
101 |
102 | Learn more about the Spintainer executor [here](/docs/misc/spintainer-executor).
103 |
104 | ## Gaining Stability
105 |
106 | SpinKube and its constituent sub-projects are all still in alpha as we iron out the kinks. However, SpinKube has made great strides in stability and polish. Across the releases we've cut for each sub-project, we've squashed many bugs and sanded down plenty of rough edges.
107 |
108 | One more example of SpinKube's growing stability is the domain migration we've completed in Spin Operator. As of the `v0.4.0` release, we have migrated the Spin Operator CRDs from the `spinoperator.dev` domain to `spinkube.dev`[^1]. This change was made to better align the Spin Operator with the overall SpinKube project. While this is a breaking change (upgrade steps can be found [here](/docs/misc/upgrading-to-v0.4.0/)), we're now in a better position to support this domain going forward. This is just one step towards SpinKube eventually moving out of alpha.
109 |
110 | ## More To Come
111 |
112 | We hope this has gotten you as excited about SpinKube as we are. Stay tuned as we continue to make SpinKube better. If you'd like to get involved in the community, we'd love to have you — check out our [community page](https://www.spinkube.dev/community/).
113 |
114 | [^1]: This was also a great opportunity to exercise the [SKIP](https://github.com/spinframework/skips/tree/main/proposals/004-crd-domains) (SpinKube Improvement Proposal) process.
115 |
--------------------------------------------------------------------------------
/content/en/blog/news/five-new-things/otel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/blog/news/five-new-things/otel.png
--------------------------------------------------------------------------------
/content/en/cloud-native-computing.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/content/en/community/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Community
3 | menu: {main: {weight: 40}}
4 | ---
5 |
6 |
7 |
--------------------------------------------------------------------------------
/content/en/docs/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: SpinKube documentation
3 | linkTitle: Docs
4 | menu: {main: {weight: 20}}
5 | ---
6 |
7 | Everything you need to know about SpinKube.
8 |
9 | ## First steps
10 |
11 | To get started with SpinKube, follow our [Quickstart guide]({{< ref "quickstart" >}}).
12 |
13 | ## How the documentation is organized
14 |
15 | SpinKube has a lot of documentation. A high-level overview of how it's organized will help you know
16 | where to look for certain things.
17 |
18 | - [Installation guides]({{< relref "install" >}}) cover how to install SpinKube on various
19 | platforms.
20 | - [Topic guides]({{< relref "topics" >}}) discuss key topics and concepts at a fairly high level and
21 | provide useful background information and explanation.
22 | - [Reference guides]({{< relref "reference" >}}) contain technical reference for APIs and other
23 | aspects of SpinKube's machinery. They describe how it works and how to use it but assume that you
24 | have a basic understanding of key concepts.
25 | - [Contributing guides]({{< relref "contrib" >}}) show how to contribute to the SpinKube project.
26 | - [Miscellaneous guides]({{< relref "misc" >}}) cover topics that don't fit neatly into either of
27 | the above categories.
28 |
--------------------------------------------------------------------------------
/content/en/docs/contrib/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: How to get involved
3 | description: How to contribute to the SpinKube project.
4 | weight: 99
5 | aliases:
6 | - /docs/contribution-guidelines
7 | ---
8 |
9 | SpinKube is an open source community-driven project. You can contribute in many ways, either to the
10 | project or to the wider community.
11 |
12 | ## Community Calls
13 |
14 | Join our bi-weekly community calls to connect with the team and other contributors. These calls are a great opportunity to listen in, ask questions, or share your ideas!
15 |
16 | * When: Bi-weekly at 8:00 am PT / 11:00 am ET / 5:00 pm CET
17 |
18 | * Meeting Document: [Community Call Notes](https://docs.google.com/document/d/10is2YoNC0NpXw4_5lSyTfpPph9_A9wBissKGrpFaIrI/edit?usp=sharing)
19 |
20 | * Zoom Link: Join [the Zoom Meeting](https://us06web.zoom.us/j/83473056051?pwd=gDFNQCxUG1OMzdBqTNXIQamlvHjhFo.1)
21 |
22 | ## Slack
23 |
24 | Connect with the community on Slack for real-time discussions and updates:
25 |
26 | Visit the `#spinkube` channel on the [CNCF
27 | Slack](https://cloud-native.slack.com/archives/C06PC7JA1EE).
28 |
29 | We look forward to seeing you there!
--------------------------------------------------------------------------------
/content/en/docs/contrib/new-contributors.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Advice for new contributors
3 | description: Are you a contributor and not sure what to do? Want to help but just don't know how to get started? This is the section for you.
4 | weight: 1
5 | aliases:
6 | - /docs/contrib/feedback
7 | - /docs/spin-operator/support/feedback
8 | ---
9 |
10 | This page contains more general advice on ways you can contribute to SpinKube, and how to approach
11 | that.
12 |
13 | If you are looking for a reference on the details of making code contributions, see the [Writing
14 | code]({{< ref "writing-code" >}}) documentation.
15 |
16 | ## First steps
17 |
18 | Start with these steps to be successful as a contributor to SpinKube.
19 |
20 | ### Join the conversation
21 |
22 | It can be argued that collaboration and communication are the most crucial aspects of open source
23 | development. Gaining consensus on the direction of the project, and that your work is aligned with
24 | that direction, is key to getting your work accepted. This is why it is important to join the
25 | conversation early and often.
26 |
27 | To join the conversation, visit the `#spinkube` channel on the [CNCF
28 | Slack](https://cloud-native.slack.com/archives/C06PC7JA1EE).
29 |
30 | ### Read the documentation
31 |
32 | The SpinKube documentation is a great place to start. It contains information on how to get started
33 | with the project, how to contribute, and how to use the project. The documentation is also a great
34 | place to find information on the project's architecture and design.
35 |
36 | SpinKube's documentation is great but it is not perfect. If you find something that is unclear or
37 | incorrect, please submit a pull request to fix it. See the guide on [writing documentation]({{< ref
38 | "writing-documentation" >}}) for more information.
39 |
40 | ### Triage issues
41 |
42 | If an issue reports a bug, try and reproduce it. If you can reproduce it and it seems valid, make a
43 | note that you confirmed the bug. Make sure the issue is labeled properly. If you cannot reproduce
44 | the bug, ask the reporter for more information.
45 |
46 | ### Write tests
47 |
48 | Consider writing a test for the bug's behavior, even if you don't fix the bug itself.
49 |
50 | Issues labeled `good first issue` are a great place to start. These issues are specifically tagged
51 | as being good for new contributors to work on.
52 |
53 | ## Guidelines
54 |
55 | As a newcomer on a large project, it's easy to experience frustration. Here's some advice to make
56 | your work on SpinKube more useful and rewarding.
57 |
58 | ### Pick a subject area that you care about, that you are familiar with, or that you want to learn about
59 |
60 | You don't have to be an expert on the area you want to work on already; you become an expert through
61 | your ongoing contributions to the code.
62 |
63 | ### Start small
64 |
65 | It's easier to get feedback on a little issue than on a big one, especially as a new contributor;
66 | the maintainers are more likely to have time to review a small change.
67 |
68 | ### If you're going to engage in a big task, make sure that your idea has support first
69 |
70 | This means getting someone else to confirm that a bug is real before you fix the issue, and ensuring
71 | that there's consensus on a proposed feature before you go implementing it.
72 |
73 | ### Be bold! Leave feedback!
74 |
75 | Sometimes it can be scary to put your opinion out to the world and say "this issue is correct" or
76 | "this patch needs work", but it's the only way the project moves forward. The contributions of the
77 | broad SpinKube community ultimately have a much greater impact than that of any one person. We can't
78 | do it without you!
79 |
80 | ### Err on the side of caution when marking things ready for review
81 |
82 | If you're really not certain if a pull request is ready for review, don't mark it as such. Leave a
83 | comment instead, letting others know your thoughts. If you're mostly certain, but not completely
84 | certain, you might also try asking on [Slack](https://cloud-native.slack.com/archives/C06PC7JA1EE)
85 | to see if someone else can confirm your suspicions.
86 |
87 | ### Wait for feedback, and respond to feedback that you receive
88 |
89 | Focus on one or two issues, see them through from start to finish, and repeat. The shotgun approach
90 | of taking on lots of issues and letting some fall by the wayside ends up doing more harm than good.
91 |
92 | ### Be rigorous
93 |
94 | When we say "this pull request must have documentation and tests", we mean it. If a patch doesn't
95 | have documentation and tests, there had better be a good reason. Arguments like "I couldn't find any
96 | existing tests of this feature" don't carry much weight; while it may be true, that means you have
97 | the extra-important job of writing the very first tests for that feature, not that you get a pass
98 | from writing tests altogether.
99 |
100 | ### Be patient
101 |
102 | It's not always easy for your issue or your patch to be reviewed quickly. This isn't personal. There
103 | are a lot of issues and pull requests to get through.
104 |
105 | Keeping your patch up to date is important. Review the pull request on GitHub to ensure that you've
106 | addressed all review comments.
107 |
--------------------------------------------------------------------------------
/content/en/docs/contrib/writing-code.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Writing code
3 | description: Fix a bug, or add a new feature. You can make a pull request and see your code in the next version of SpinKube!
4 | weight: 10
5 | ---
6 |
7 | Interested in giving back to the community a little? Maybe you've found a bug in SpinKube that you'd
8 | like to see fixed, or maybe there's a small feature you want added.
9 |
10 | Contributing back to SpinKube itself is the best way to see your own concerns addressed. This may
11 | seem daunting at first, but it's a well-traveled path with documentation, tooling, and a community
12 | to support you. We'll walk you through the entire process, so you can learn by example.
13 |
14 | ## Who's this tutorial for?
15 |
16 | For this tutorial, we expect that you have at least a basic understanding of how SpinKube works.
17 | This means you should be comfortable going through the existing tutorials on deploying your first
18 | app to SpinKube. It is also worthwhile learning a bit of Rust, since many of SpinKube's projects are
19 | written in Rust. If you don't, [Learn Rust](https://www.rust-lang.org/learn) is a great place to
20 | start.
21 |
22 | Those of you who are unfamiliar with `git` and GitHub will find that this tutorial and its links
23 | include just enough information to get started. However, you'll probably want to read some more
24 | about these different tools if you plan on contributing to SpinKube regularly.
25 |
26 | For the most part though, this tutorial tries to explain as much as possible, so that it can be of
27 | use to the widest audience.
28 |
29 | ## Code of Conduct
30 |
31 | As a contributor, you can help us keep the SpinKube community open and inclusive. Please read and
32 | follow our [Code of Conduct](https://github.com/spinframework/governance/blob/main/CODE_OF_CONDUCT.md).
33 |
34 | ## Install git
35 |
36 | For this tutorial, you'll need Git installed to download the current development version of SpinKube
37 | and to generate a branch for the changes you make.
38 |
39 | To check whether or not you have Git installed, enter `git` into the command line. If you get
40 | messages saying that this command could not be found, you'll have to download and install it. See
41 | [Git's download page](https://git-scm.com/download) for more information.
42 |
43 | If you're not that familiar with Git, you can always find out more about its commands (once it's
44 | installed) by typing `git help` into the command line.
45 |
46 | ## Fork the repository
47 |
48 | SpinKube is hosted on GitHub, and you'll need a GitHub account to contribute. If you don't have one,
49 | you can sign up for free at [GitHub](https://github.com).
50 |
51 | SpinKube's repositories are organized under the [spinframework GitHub
52 | organization](https://github.com/spinframework). Once you have an account, fork one of the repositories
53 | by visiting the repository's page and clicking "Fork" in the upper right corner.
54 |
55 | Then, from the command line, clone your fork of the repository. For example, if you forked the
56 | `spin-operator` repository, you would run:
57 |
58 | ```shell
59 | git clone https://github.com/YOUR-USERNAME/spin-operator.git
60 | ```
61 |
62 | ## Read the README
63 |
64 | Each repository in the SpinKube organization has a README file that explains what the project does
65 | and how to get started. This is a great place to start, as it will give you an overview of the
66 | project and how to run the test suite.
67 |
68 | ## Run the test suite
69 |
70 | When contributing to a project, it's very important that your code changes don't introduce bugs. One
71 | way to check that the project still works after you make your changes is by running the project's
72 | test suite. If all the tests still pass, then you can be reasonably sure that your changes work and
73 | haven't broken other parts of the project. If you've never run the project's test suite before, it's
74 | a good idea to run it once beforehand to get familiar with its output.
75 |
76 | Most projects have a command to run the test suite. This is usually something like `make test` or
77 | `cargo test`. Check the project's README file for instructions on how to run the test suite. If
78 | you're not sure, you can always ask for help in the `#spinkube` channel [on
79 | Slack](https://cloud-native.slack.com/archives/C06PC7JA1EE).
80 |
81 | ## Find an issue to work on
82 |
83 | If you're not sure where to start, you can look for issues labeled `good first issue` in the
84 | repository you're interested in. These issues are often much simpler in nature and specifically
85 | tagged as being good for new contributors to work on.
86 |
87 | ## Create a branch
88 |
89 | Before making any changes, create a new branch for the issue:
90 |
91 | ```shell
92 | git checkout -b issue-123
93 | ```
94 |
95 | Choose any name that you want for the branch. `issue-123` is an example. All changes made in this
96 | branch will be specific to the issue and won't affect the main copy of the code that we cloned
97 | earlier.
98 |
99 | ## Write some tests for your issue
100 |
101 | If you're fixing a bug, write a test (or multiple tests) that reproduces the bug. If you're adding a
102 | new feature, write a test that verifies the feature works as expected. This will help ensure that
103 | your changes work as expected and don't break other parts of the project.
104 |
105 | ## Confirm the tests fail
106 |
107 | Now that we've written a test, we need to confirm that it fails. This is important because it
108 | verifies that the test is actually testing what we think it is. If the test passes, then it's not
109 | actually testing the issue we're trying to fix.
110 |
111 | To run the test suite, refer to the project's README or reach out on
112 | [Slack](https://cloud-native.slack.com/archives/C06PC7JA1EE).
113 |
114 | ## Make the changes
115 |
116 | Now that we have a failing test, we can make the changes to the code to fix the issue. This is the
117 | fun part! Use your favorite text editor to make the changes.
118 |
119 | ## Confirm the tests pass
120 |
121 | After making the changes, run the test suite again to confirm that the tests pass. If the tests
122 | pass, then you can be reasonably sure that your changes work as expected.
123 |
124 | Once you've verified that your changes and test are working correctly, it's a good idea to run the
125 | entire test suite to verify that your change hasn't introduced any bugs into other areas of the
126 | project. While successfully passing the entire test suite doesn't guarantee your code is bug free,
127 | it does help identify many bugs and regressions that might otherwise go unnoticed.
128 |
129 | ## Commit your changes
130 |
131 | Once you've made your changes and confirmed that the tests pass, commit your changes to your branch:
132 |
133 | ```shell
134 | git add .
135 | git commit -m "Fix issue 123"
136 | ```
137 |
138 | ## Push your changes
139 |
140 | Now that you've committed your changes to your branch, push your branch to your fork on GitHub:
141 |
142 | ```shell
143 | git push origin issue-123
144 | ```
145 |
146 | ## Create a pull request
147 |
148 | Once you've pushed your changes to your fork on GitHub, you can create a pull request. This is a
149 | request to merge your changes into the main copy of the code. To create a pull request, visit your
150 | fork on GitHub and click the "New pull request" button.
151 |
152 | ## Write documentation
153 |
154 | If your changes introduce new features or change existing behavior, it's important to update the
155 | documentation. This helps other contributors understand your changes and how to use them.
156 |
157 | See the guide on [writing documentation]({{< ref "writing-documentation" >}}) for more information.
158 |
159 | ## Next steps
160 |
161 | Congratulations! You've made a contribution to SpinKube.
162 |
163 | After a pull request has been submitted, it needs to be reviewed by a maintainer. Reach out on the
164 | `#spinkube` channel on the [CNCF Slack](https://cloud-native.slack.com/archives/C06PC7JA1EE) to ask
165 | for a review.
166 |
--------------------------------------------------------------------------------
/content/en/docs/contrib/writing-documentation.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Writing documentation
3 | description: Our goal is to keep the documentation informative and thorough. You can help to improve the documentation and keep it relevant as the project evolves.
4 | weight: 11
5 | ---
6 |
7 | We place high importance on the consistency and readability of documentation. We treat our
8 | documentation like we treat our code: we aim to improve it as often as possible.
9 |
10 | Documentation changes generally come in two forms:
11 |
12 | 1. General improvements: typo corrections, error fixes and better explanations through clearer
13 | writing and more examples.
14 | 1. New features: documentation of features that have been added to the project since the last
15 | release.
16 |
17 | This section explains how writers can craft their documentation changes in the most useful and least
18 | error-prone ways.
19 |
20 | ## How documentation is written
21 |
22 | Though SpinKube's documentation is intended to be read as HTML at https://spinkube.dev/docs, we edit
23 | it as a collection of plain text files written in [Markdown](https://en.wikipedia.org/wiki/Markdown)
24 | for maximum flexibility.
25 |
26 | SpinKube's documentation uses a documentation system known as [docsy](https://www.docsy.dev/), which
27 | in turn is based on the [Hugo web framework](https://gohugo.io/). The basic idea is that
28 | lightly-formatted plain-text documentation is transformed into HTML through a process known as
29 | [Static Site Generation (SSG)](https://en.wikipedia.org/wiki/Static_site_generator).
30 |
31 | ### Previewing your changes locally
32 |
33 | If you want to run your own local Hugo server to preview your changes as you work:
34 |
35 | 1. Fork the [`spinframework/spinkube-docs`](https://github.com/spinframework/spinkube-docs) repository on
36 | GitHub.
37 | 1. Clone your fork to your computer.
38 | 1. Read the `README.md` file for instructions on how to build the site from source.
39 | 1. Continue with the usual development workflow to edit files, commit them, push changes up to your
40 | fork, and create a pull request. If you're not sure how to do this, see [writing code]({{< ref
41 | "writing-code" >}}) for tips.
42 |
43 | ## Making quick changes
44 |
45 | If you’ve just spotted something you’d like to change while using the documentation, the website has
46 | a shortcut for you:
47 |
48 | 1. Click **Edit this page** in the top right-hand corner of the page.
49 | 1. If you don't already have an up-to-date fork of the project repo, you are prompted to get one:
50 | click **Fork this repository and propose changes** or **Update your Fork** to get an up-to-date
51 | version of the project to edit.
52 |
53 | ## Filing issues
54 |
55 | If you've found a problem in the documentation, but you're not sure how to fix it yourself, please
56 | file an issue in the [documentation repository](https://github.com/spinframework/spinkube-docs/issues).
57 | You can also file an issue about a specific page by clicking the **Create Issue** button in the top
58 | right-hand corner of the page.
59 |
--------------------------------------------------------------------------------
/content/en/docs/install/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Installation
3 | description: Before you can use SpinKube, you'll need to get it installed. We have several complete installation guides that cover all the possibilities; these guides walk you through the process of installing SpinKube on your Kubernetes cluster.
4 | weight: 20
5 | ---
6 |
--------------------------------------------------------------------------------
/content/en/docs/install/azure-kubernetes-service.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Installing on Azure Kubernetes Service
3 | description: In this tutorial you'll learn how to deploy SpinKube on Azure Kubernetes Service (AKS).
4 | date: 2024-02-16
5 | categories: [Spin Operator]
6 | tags: [Tutorials]
7 | weight: 5
8 | aliases:
9 | - /docs/spin-operator/tutorials/deploy-on-azure-kubernetes-service
10 | ---
11 |
12 | In this tutorial, you install Spin Operator on an Azure Kubernetes Service (AKS) cluster and deploy
13 | a simple Spin application. You will learn how to:
14 |
15 | - Deploy an AKS cluster
16 | - Install Spin Operator Custom Resource Definition and Runtime Class
17 | - Install and verify containerd shim via Kwasm
18 | - Deploy a simple Spin App custom resource on your cluster
19 |
20 |
21 |
22 | ## Prerequisites
23 |
24 | Please ensure you have the following tools installed before continuing:
25 |
26 | - [kubectl](https://kubernetes.io/docs/tasks/tools/) - the Kubernetes CLI
27 | - [Helm](https://helm.sh) - the package manager for Kubernetes
28 | - [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) - cross-platform CLI
29 | for managing Azure resources
30 |
31 | ## Provisioning the necessary Azure Infrastructure
32 |
33 | Before you dive into deploying Spin Operator on Azure Kubernetes Service (AKS), the underlying cloud
34 | infrastructure must be provisioned. For the sake of this article, you will provision a simple AKS
35 | cluster. (Alternatively, you can set up the AKS cluster following [this guide from
36 | Microsoft](https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli).)
37 |
38 | ```shell
39 | # Login with Azure CLI
40 | az login
41 |
42 | # Select the desired Azure Subscription
43 | az account set --subscription <subscription-id>
44 |
45 | # Create an Azure Resource Group
46 | az group create --name rg-spin-operator \
47 | --location germanywestcentral
48 |
49 | # Create an AKS cluster
50 | az aks create --name aks-spin-operator \
51 | --resource-group rg-spin-operator \
52 | --location germanywestcentral \
53 | --node-count 1 \
54 | --tier free \
55 | --generate-ssh-keys
56 | ```
57 |
58 | Once the AKS cluster has been provisioned, use the `aks get-credentials` command to download
59 | credentials for `kubectl`:
60 |
61 | ```shell
62 | # Download credentials for kubectl
63 | az aks get-credentials --name aks-spin-operator \
64 | --resource-group rg-spin-operator
65 | ```
66 |
67 | For verification, you can use `kubectl` to browse common resources inside of the AKS cluster:
68 |
69 | ```shell
70 | # Browse namespaces in the AKS cluster
71 | kubectl get namespaces
72 |
73 | NAME STATUS AGE
74 | default Active 3m
75 | kube-node-lease Active 3m
76 | kube-public Active 3m
77 | kube-system Active 3m
78 | ```
79 |
80 | ## Deploying the Spin Operator
81 |
82 | First, the [Custom Resource Definition (CRD)]({{< ref "glossary#custom-resource-definition-crd" >}})
83 | and the [Runtime Class]({{< ref "glossary#runtime-class" >}}) for `wasmtime-spin-v2` must be
84 | installed.
85 |
86 | ```shell
87 | # Install the CRDs
88 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.crds.yaml
89 |
90 | # Install the Runtime Class
91 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.runtime-class.yaml
92 | ```
93 |
94 | The following installs [cert-manager](https://github.com/cert-manager/cert-manager), which is
95 | required to automatically provision and manage the TLS certificates used by Spin Operator's
96 | admission webhook system.
97 |
98 | ```shell
99 | # Install cert-manager CRDs
100 | kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.crds.yaml
101 |
102 | # Add and update Jetstack repository
103 | helm repo add jetstack https://charts.jetstack.io
104 | helm repo update
105 |
106 | # Install the cert-manager Helm chart
107 | helm install cert-manager jetstack/cert-manager \
108 | --namespace cert-manager \
109 | --create-namespace \
110 | --version v1.14.3
111 | ```
112 |
113 | The Spin Operator chart also has a dependency on [Kwasm](https://kwasm.sh/), which you use to
114 | install `containerd-wasm-shim` on the Kubernetes node(s):
115 |
116 |
119 | ```shell
120 | # Add Helm repository if not already done
121 | helm repo add kwasm http://kwasm.sh/kwasm-operator/
122 | helm repo update
123 |
124 | # Install KWasm operator
125 | helm install \
126 | kwasm-operator kwasm/kwasm-operator \
127 | --namespace kwasm \
128 | --create-namespace \
129 | --set kwasmOperator.installerImage=ghcr.io/spinframework/containerd-shim-spin/node-installer:v0.19.0
130 |
131 | # Provision Nodes
132 | kubectl annotate node --all kwasm.sh/kwasm-node=true
133 | ```
134 |
135 | To verify `containerd-wasm-shim` installation, you can inspect the logs from the Kwasm Operator:
136 |
137 | ```shell
138 | # Inspect logs from the Kwasm Operator
139 | kubectl logs -n kwasm -l app.kubernetes.io/name=kwasm-operator
140 |
141 | {"level":"info","node":"aks-nodepool1-31687461-vmss000000","time":"2024-02-12T11:23:43Z","message":"Trying to Deploy on aks-nodepool1-31687461-vmss000000"}
142 | {"level":"info","time":"2024-02-12T11:23:43Z","message":"Job aks-nodepool1-31687461-vmss000000-provision-kwasm is still Ongoing"}
143 | {"level":"info","time":"2024-02-12T11:24:00Z","message":"Job aks-nodepool1-31687461-vmss000000-provision-kwasm is Completed. Happy WASMing"}
144 | ```
145 |
146 | The following installs the chart with the release name `spin-operator` in the `spin-operator`
147 | namespace:
148 |
149 | ```shell
150 | helm install spin-operator \
151 | --namespace spin-operator \
152 | --create-namespace \
153 | --version 0.5.0 \
154 | --wait \
155 | oci://ghcr.io/spinframework/charts/spin-operator
156 | ```
157 |
158 | Lastly, create the [shim executor]({{< ref "glossary#spin-app-executor-crd" >}}):
159 |
160 | ```console
161 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.shim-executor.yaml
162 | ```
163 |
164 | ## Deploying a Spin App to AKS
165 |
166 | To validate the Spin Operator deployment, you will deploy a simple Spin App to the AKS cluster. The
167 | following command will install a simple Spin App using the `SpinApp` CRD you provisioned in the
168 | previous section:
169 |
170 | ```shell
171 | # Deploy a sample Spin app
172 | kubectl apply -f https://raw.githubusercontent.com/spinframework/spin-operator/main/config/samples/simple.yaml
173 | ```
174 |
175 | ## Verifying the Spin App
176 |
177 | Configure port forwarding from port `8080` of your local machine to port `80` of the Kubernetes
178 | service which points to the Spin App you installed in the previous section:
179 |
180 | ```shell
181 | kubectl port-forward services/simple-spinapp 8080:80
182 | Forwarding from 127.0.0.1:8080 -> 80
183 | Forwarding from [::1]:8080 -> 80
184 | ```
185 |
186 | Send an HTTP request to [http://127.0.0.1:8080/hello](http://127.0.0.1:8080/hello) using
187 | [`curl`](https://curl.se/):
188 |
189 | ```shell
190 | # Send an HTTP GET request to the Spin App
191 | curl -iX GET http://localhost:8080/hello
192 | HTTP/1.1 200 OK
193 | transfer-encoding: chunked
194 | date: Mon, 12 Feb 2024 12:23:52 GMT
195 |
196 | Hello world from Spin!%
197 | ```
198 |
199 | ## Removing the Azure infrastructure
200 |
201 | To delete the Azure infrastructure created as part of this article, use the following command:
202 |
203 | ```shell
204 | # Remove all Azure resources
205 | az group delete --name rg-spin-operator \
206 | --no-wait \
207 | --yes
208 | ```
209 |
--------------------------------------------------------------------------------
/content/en/docs/install/compatibility-matrices.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Executor Compatibility Matrices
3 | description: A set of compatibility matrices for each SpinKube executor
4 | date: 2024-11-08
5 | categories: [Spin Operator]
6 | tags: [reference]
7 | weight: 99
8 | ---
9 |
10 | ## `containerd-shim-spin` Executor
11 |
12 | The [Spin containerd shim](https://github.com/spinframework/containerd-shim-spin) project is a containerd shim implementation for Spin.
13 |
14 | ### Spin Operator and Shim Feature Map
15 |
16 | If a feature is configured in a `SpinApp` that is not supported in the version of the shim being
17 | used, the application may not execute as expected. The following maps out the versions of the [Spin
18 | containerd shim](https://github.com/spinframework/containerd-shim-spin), Spin Operator, and `spin kube`
19 | plugin that have support for specific features.
20 |
21 | | Feature | SpinApp field | Shim Version | Spin Operator Version | `spin kube` plugin version |
22 | | -------------------- | ------------- | ------------ | --------------------- | -------------------------- |
23 | | OTEL Traces | `otel` | v0.15.0 | v0.3.0 | NA |
24 | | Selective Deployment | `components` | v0.17.0 | v0.4.0 | v0.3.0 |
25 |
26 | > NA indicates that the feature is not yet available in that project.
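
As a concrete sketch of the Selective Deployment row above, a `SpinApp` can list the components to run (assuming shim v0.17.0 and Spin Operator v0.4.0 or later); the image reference and component name below are placeholders:

```yaml
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: selective-app
spec:
  # Placeholder image reference; substitute your own application
  image: "ghcr.io/example/selective-app:v0.1.0"
  executor: containerd-shim-spin
  # Selective Deployment: run only the named components
  # (requires shim >= v0.17.0 and Spin Operator >= v0.4.0)
  components:
    - hello-component
```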
27 |
28 | ### Spin and Spin Containerd Shim Version Map
29 |
30 | For tracking the availability of Spin features and compatibility of Spin SDKs, the following
31 | indicates which versions of the Spin runtime the [Spin containerd
32 | shim](https://github.com/spinframework/containerd-shim-spin) uses.
33 |
34 | | Shim Version | Spin Version |
35 | |-------------|-------------|
36 | | **Spin v3.x** | |
37 | | *v0.19.0* | [Spin v3.2.0](https://github.com/fermyon/spin/releases/tag/v3.2.0) |
38 | | *v0.18.0* | [Spin v3.1.2](https://github.com/fermyon/spin/releases/tag/v3.1.2) |
39 | | *v0.17.0* | [Spin v3.0.0](https://github.com/fermyon/spin/releases/tag/v3.0.0) |
40 | | **Spin v2.x** | |
41 | | *v0.16.0* | [Spin v2.6.0](https://github.com/fermyon/spin/releases/tag/v2.6.0) |
42 | | *v0.15.1* | [Spin v2.6.0](https://github.com/fermyon/spin/releases/tag/v2.6.0) |
43 | | *v0.15.0* | [Spin v2.6.0](https://github.com/fermyon/spin/releases/tag/v2.6.0) |
44 | | *v0.14.1* | [Spin v2.4.3](https://github.com/fermyon/spin/releases/tag/v2.4.3) |
45 | | *v0.14.0* | [Spin v2.4.2](https://github.com/fermyon/spin/releases/tag/v2.4.2) |
46 | | *v0.13.0* | [Spin v2.3.1](https://github.com/fermyon/spin/releases/tag/v2.3.1) |
47 | | *v0.12.0* | [Spin v2.2.0](https://github.com/fermyon/spin/releases/tag/v2.2.0) |
48 |
--------------------------------------------------------------------------------
/content/en/docs/install/installing-with-helm.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Installing with Helm
3 | description: This guide walks you through the process of installing SpinKube using [Helm](https://helm.sh).
4 | date: 2024-02-16
5 | tags: [Installation]
6 | weight: 4
7 | aliases:
8 | - /docs/spin-operator/installation/installing-with-helm
9 | ---
10 |
11 | ## Prerequisites
12 |
13 | For this guide in particular, you will need:
14 |
15 | - [kubectl](https://kubernetes.io/docs/tasks/tools/) - the Kubernetes CLI
16 | - [Helm](https://helm.sh) - the package manager for Kubernetes
17 |
18 | ## Install Spin Operator With Helm
19 |
20 | The following instructions install Spin Operator using its Helm chart (via `helm install`).
22 |
23 | ### Prepare the Cluster
24 |
25 | Before installing the chart, you'll need to ensure the following are installed:
26 |
27 | - [cert-manager](https://github.com/cert-manager/cert-manager) to automatically provision and manage
28 | TLS certificates (used by spin-operator's admission webhook system). For detailed installation
29 | instructions see [the cert-manager documentation](https://cert-manager.io/docs/installation/).
30 |
31 | ```shell
32 | kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml
33 | ```
34 |
35 | - [Kwasm Operator](https://github.com/kwasm/kwasm-operator) is required to install WebAssembly shims
36 | on Kubernetes nodes that don't already include them. Note that in the future this will be replaced
37 | by [runtime class manager]({{< ref "architecture#runtime-class-manager" >}}).
38 |
39 | ```shell
40 | # Add Helm repository if not already done
41 | helm repo add kwasm http://kwasm.sh/kwasm-operator/
42 |
43 | # Install KWasm operator
44 | helm install \
45 | kwasm-operator kwasm/kwasm-operator \
46 | --namespace kwasm \
47 | --create-namespace \
48 | --set kwasmOperator.installerImage=ghcr.io/spinframework/containerd-shim-spin/node-installer:v0.19.0
49 |
50 | # Provision Nodes
51 | kubectl annotate node --all kwasm.sh/kwasm-node=true
52 | ```
53 |
54 | ### Chart prerequisites
55 |
56 | Now that we have our dependencies installed, we can start installing the operator. This involves
57 | a couple of steps that allow for further customization of Spin Applications in the cluster over
58 | time, but here we install the defaults.
59 |
60 | - First ensure the [Custom Resource Definitions (CRD)]({{< ref
61 | "glossary#custom-resource-definition-crd" >}}) are installed. This includes the SpinApp CRD
62 | representing Spin applications to be scheduled on the cluster.
63 |
64 | ```shell
65 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.crds.yaml
66 | ```
67 |
68 | - Next we create a [RuntimeClass]({{< ref "glossary#runtime-class" >}}) called `wasmtime-spin-v2`
69 |   that points to the `spin` handler. If you are deploying to a production cluster where only a
70 |   subset of nodes have the shim installed, you'll need to add a `nodeSelector:` to the RuntimeClass:
71 |
72 | ```shell
73 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.runtime-class.yaml
74 | ```
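
If you do need to scope the RuntimeClass to particular nodes, a minimal sketch looks like the following; the `spin-shim-installed: "true"` label is a placeholder for whatever label marks your shim-equipped nodes:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
# The containerd handler name configured for the Spin shim
handler: spin
scheduling:
  # Placeholder label: only schedule Spin workloads onto nodes
  # that actually have the shim installed
  nodeSelector:
    spin-shim-installed: "true"
```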
75 |
76 | - Finally, we create a `containerd-spin-shim` [SpinAppExecutor]({{< ref
77 | "glossary#spin-app-executor-crd" >}}). This tells the Spin Operator to use the RuntimeClass we
78 | just created to run Spin Apps:
79 |
80 | ```shell
81 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.shim-executor.yaml
82 | ```
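
For reference, the executor manifest applied above is roughly equivalent to the following sketch (consult the release asset for the authoritative contents):

```yaml
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinAppExecutor
metadata:
  name: containerd-shim-spin
spec:
  # Let the operator create Deployments for SpinApps
  createDeployment: true
  deploymentConfig:
    # Must match the RuntimeClass created in the previous step
    runtimeClassName: wasmtime-spin-v2
```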
83 |
84 | ### Installing the Spin Operator Chart
85 |
86 | The following installs the chart with the release name `spin-operator`:
87 |
88 | ```shell
89 | # Install Spin Operator with Helm
90 | helm install spin-operator \
91 | --namespace spin-operator \
92 | --create-namespace \
93 | --version 0.5.0 \
94 | --wait \
95 | oci://ghcr.io/spinframework/charts/spin-operator
96 | ```
97 |
98 | ### Upgrading the Chart
99 |
100 | Note that you may also need to upgrade the spin-operator CRDs in tandem with upgrading the Helm
101 | release:
102 |
103 | ```shell
104 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.crds.yaml
105 | ```
106 |
107 | To upgrade the `spin-operator` release, run the following:
108 |
109 | ```shell
110 | # Upgrade Spin Operator using Helm
111 | helm upgrade spin-operator \
112 | --namespace spin-operator \
113 | --version 0.5.0 \
114 | --wait \
115 | oci://ghcr.io/spinframework/charts/spin-operator
116 | ```
117 |
118 | ### Uninstalling the Chart
119 |
120 | To delete the `spin-operator` release, run:
121 |
122 | ```shell
123 | # Uninstall Spin Operator using Helm
124 | helm delete spin-operator --namespace spin-operator
125 | ```
126 |
127 | This removes all Kubernetes resources associated with the chart and deletes the Helm release.
128 |
129 | To completely uninstall all resources related to spin-operator, you may want to delete the
130 | corresponding CRD resources and the RuntimeClass:
131 |
132 | ```shell
133 | kubectl delete -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.shim-executor.yaml
134 | kubectl delete -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.runtime-class.yaml
135 | kubectl delete -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.crds.yaml
136 | ```
137 |
--------------------------------------------------------------------------------
/content/en/docs/install/linode-kubernetes-engine.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Installing on Linode Kubernetes Engine (LKE)
3 | description: This guide walks you through the process of installing SpinKube on [LKE](https://www.linode.com/docs/products/compute/kubernetes/).
4 | date: 2024-07-23
5 | tags: [Installation]
6 | weight: 6
7 | ---
8 |
9 | This guide walks through the process of installing and configuring SpinKube on Linode Kubernetes
10 | Engine (LKE).
11 |
12 | ## Prerequisites
13 |
14 | This guide assumes that you have an [Akamai Linode](https://login.linode.com/) account that is
15 | configured and has sufficient permissions for creating a new LKE cluster.
16 |
17 | You will also need recent versions of `kubectl` and `helm` installed on your system.
18 |
19 | ## Creating an LKE Cluster
20 |
21 | LKE has a managed control plane, so you only need to create the pool of worker nodes. In this
22 | tutorial, we will create a 2-node LKE cluster using the smallest available worker nodes. This should
23 | be fine for installing SpinKube and running up to around 100 Spin apps.
24 |
25 | You may prefer to run a larger cluster if you plan on mixing containers and Spin apps, because
26 | containers consume substantially more resources than Spin apps do.
27 |
28 | In the Linode web console, click on `Kubernetes` in the right-hand navigation, and then click
29 | `Create Cluster`.
30 |
31 | 
32 |
33 | You will only need to make a few choices on this screen. Here's what we have done:
34 |
35 | * **Cluster name**: We named the cluster `spinkube-lke-1`. You should name it according to whatever
36 | convention you prefer.
37 | * **Region**: We chose the `Chicago, IL (us-ord)` region, but you can choose any region you prefer.
38 | * **Kubernetes Version**: The latest supported Kubernetes version is `1.30`, so we chose that.
39 | * **High Availability Control Plane**: For this testing cluster, we chose `No` on `HA Control Plane`
40 | because we do not need high availability.
41 | * **Node Pool configuration**: In `Add Node Pools`, we added two `Dedicated 4 GB` nodes simply to
42 |   show a cluster running more than one node. Two nodes are sufficient for Spin apps, though you may
43 |   prefer the more traditional 3-node cluster. Click `Add` to add these, and ignore the warning
44 |   about minimum sizes.
45 |
46 | Once you have set things to your liking, click `Create Cluster`.
47 |
48 | This will take you to a screen that shows the status of the cluster. Initially, you will want to
49 | wait for all of the nodes in your `Node Pool` to start up. Once all of the nodes are online,
50 | download the `kubeconfig` file, which will be named something like `spinkube-lke-1-kubeconfig.yaml`.
51 |
52 | > The `kubeconfig` file will have the credentials for connecting to your new LKE cluster. Do not
53 | > share that file or put it in a public place.
54 |
55 | For all of the subsequent operations, you will want to use the `spinkube-lke-1-kubeconfig.yaml` as
56 | your main Kubernetes configuration file. The best way to do that is to set the environment variable
57 | `KUBECONFIG` to point to that file:
58 |
59 | ```console
60 | $ export KUBECONFIG=/path/to/spinkube-lke-1-kubeconfig.yaml
61 | ```
62 |
63 | You can test this using the command `kubectl config view`:
64 |
65 | ```
66 | $ kubectl config view
67 | apiVersion: v1
68 | clusters:
69 | - cluster:
70 | certificate-authority-data: DATA+OMITTED
71 | server: https://REDACTED.us-ord-1.linodelke.net:443
72 | name: lke203785
73 | contexts:
74 | - context:
75 | cluster: lke203785
76 | namespace: default
77 | user: lke203785-admin
78 | name: lke203785-ctx
79 | current-context: lke203785-ctx
80 | kind: Config
81 | preferences: {}
82 | users:
83 | - name: lke203785-admin
84 | user:
85 | token: REDACTED
86 | ```
87 |
88 | This shows our cluster config. You should be able to cross-reference the `lkeNNNNNN` identifier
89 | with what you see on your Akamai Linode dashboard.
90 |
91 | ## Install SpinKube Using Helm
92 |
93 | At this point, [install SpinKube with Helm](installing-with-helm). As long as your `KUBECONFIG`
94 | environment variable is pointed at the correct cluster, the installation method documented there
95 | will work.
96 |
97 | Once you are done following the installation steps, return here to install a first app.
98 |
99 | ## Creating a First App
100 |
101 | We will use the `spin kube` plugin to scaffold out a new app. If you run the following command and
102 | the `kube` plugin is not installed, you will first be prompted to install the plugin. Choose `yes`
103 | to install.
104 |
105 | We'll point to an existing Spin app, a [Hello World program written in
106 | Rust](https://github.com/fermyon/spin/tree/main/examples/http-rust), compiled to Wasm, and stored in
107 | GitHub Container Registry (GHCR):
108 |
109 | ```console
110 | $ spin kube scaffold --from ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0 > hello-world.yaml
111 | ```
112 |
113 | > Note that Spin apps, which are WebAssembly, can be [stored in most container
114 | > registries](https://developer.fermyon.com/spin/v2/registry-tutorial) even though they are not
115 | > Docker containers.
116 |
117 | This will write the following to `hello-world.yaml`:
118 |
119 | ```yaml
120 | apiVersion: core.spinkube.dev/v1alpha1
121 | kind: SpinApp
122 | metadata:
123 | name: spin-rust-hello
124 | spec:
125 | image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
126 | executor: containerd-shim-spin
127 | replicas: 2
128 | ```
129 |
130 | Using `kubectl apply`, we can deploy that app:
131 |
132 | ```console
133 | $ kubectl apply -f hello-world.yaml
134 | spinapp.core.spinkube.dev/spin-rust-hello created
135 | ```
136 |
137 | With SpinKube, SpinApps will be deployed as `Pod` resources, so we can see the app using `kubectl
138 | get pods`:
139 |
140 | ```console
141 | $ kubectl get pods
142 | NAME READY STATUS RESTARTS AGE
143 | spin-rust-hello-f6d8fc894-7pq7k 1/1 Running 0 54s
144 | spin-rust-hello-f6d8fc894-vmsgh 1/1 Running 0 54s
145 | ```
146 |
147 | Status is listed as `Running`, which means our app is ready.
148 |
149 | ## Making An App Public with a NodeBalancer
150 |
151 | By default, Spin apps will be deployed with an internal service. But with Linode, you can provision
152 | a [NodeBalancer](https://www.linode.com/docs/products/networking/nodebalancers/) using a `Service`
153 | object. Here is a `hello-world-service.yaml` that provisions a `nodebalancer` for us:
154 |
155 | ```yaml
156 | apiVersion: v1
157 | kind: Service
158 | metadata:
159 | name: spin-rust-hello-nodebalancer
160 | annotations:
161 | service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
162 | labels:
163 | core.spinkube.dev/app-name: spin-rust-hello
164 | spec:
165 | type: LoadBalancer
166 | ports:
167 | - name: http
168 | port: 80
169 | protocol: TCP
170 | targetPort: 80
171 | selector:
172 | core.spinkube.dev/app.spin-rust-hello.status: ready
173 | sessionAffinity: None
174 | ```
175 |
176 | When LKE receives a `Service` whose `type` is `LoadBalancer`, it will provision a NodeBalancer for
177 | you.
178 |
179 | > You can customize this for your app simply by replacing all instances of `spin-rust-hello` with
180 | > the name of your app.
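
For example, the substitution can be done with `sed`; `my-other-app` is a placeholder name:

```shell
# Preview the substitution on one line of the manifest
printf 'name: spin-rust-hello-nodebalancer\n' | sed 's/spin-rust-hello/my-other-app/'
# → name: my-other-app-nodebalancer
```

To rewrite the whole manifest, run the same expression over the file, e.g. `sed 's/spin-rust-hello/my-other-app/g' hello-world-service.yaml > my-other-app-service.yaml`.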
181 |
182 | We can create the NodeBalancer by running `kubectl apply` on the above file:
183 |
184 | ```console
185 | $ kubectl apply -f hello-world-service.yaml
186 | service/spin-rust-hello-nodebalancer created
187 | ```
188 |
189 | Provisioning the new NodeBalancer may take a few moments, but we can get the IP address using
190 | `kubectl get service spin-rust-hello-nodebalancer`:
191 |
192 | ```console
193 | $ kubectl get service spin-rust-hello-nodebalancer
194 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
195 | spin-rust-hello-nodebalancer LoadBalancer 10.128.235.253 172.234.210.123 80:31083/TCP 40s
196 | ```
197 |
198 | The `EXTERNAL-IP` field tells us what the NodeBalancer is using as a public IP. We can now test this
199 | out over the Internet using `curl` or by entering the URL `http://172.234.210.123/hello` into your
200 | browser.
201 |
202 | ```console
203 | $ curl 172.234.210.123/hello
204 | Hello world from Spin!
205 | ```
206 |
207 | ## Deleting Our App
208 |
209 | To delete this sample app, we will first delete the NodeBalancer, and then delete the app:
210 |
211 | ```console
212 | $ kubectl delete service spin-rust-hello-nodebalancer
213 | service "spin-rust-hello-nodebalancer" deleted
214 | $ kubectl delete spinapp spin-rust-hello
215 | spinapp.core.spinkube.dev "spin-rust-hello" deleted
216 | ```
217 |
218 | > If you delete the NodeBalancer out of the Linode console, it will not automatically delete the
219 | > `Service` record in Kubernetes, which will cause inconsistencies. So it is best to use `kubectl
220 | > delete service` to delete your NodeBalancer.
221 |
222 | If you are also done with your LKE cluster, the easiest way to delete it is to log into the Akamai
223 | Linode dashboard, navigate to `Kubernetes`, and press the `Delete` button. This will destroy all of
224 | your worker nodes and deprovision the control plane.
225 |
--------------------------------------------------------------------------------
/content/en/docs/install/lke-spinkube-create.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/docs/install/lke-spinkube-create.png
--------------------------------------------------------------------------------
/content/en/docs/install/microk8s.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Installing on Microk8s
3 | description: This guide walks you through the process of installing SpinKube using [Microk8s](https://microk8s.io/).
4 | date: 2024-06-19
5 | tags: [Installation]
6 | weight: 8
7 | ---
8 |
9 | This guide walks through the process of installing and configuring Microk8s and SpinKube.
10 |
11 | ## Prerequisites
12 |
13 | This guide assumes you are running Ubuntu 24.04, and that you have Snap enabled (which is the
14 | default).
15 |
16 | > The testing platform for this installation was an Akamai Edge Linode with 4 GB of memory and 2
17 | > CPU cores.
18 |
19 | ## Installing Spin
20 |
21 | You will need to [install Spin](https://developer.fermyon.com/spin/quickstart). The easiest way is
22 | to use the following one-liner, which installs the latest version of Spin:
23 |
24 | ```console { data-plausible="copy-quick-deploy-sample" }
25 | $ curl -fsSL https://developer.fermyon.com/downloads/install.sh | bash
26 | ```
27 |
28 | Typically you will then want to move `spin` to `/usr/local/bin` or somewhere else on your `$PATH`:
29 |
30 | ```console { data-plausible="copy-quick-deploy-sample" }
31 | $ sudo mv spin /usr/local/bin/spin
32 | ```
33 |
34 | You can test that it's on your `$PATH` with `which spin`. If this prints nothing, you will need to
35 | adjust your `$PATH` variable or move `spin` somewhere that is already on your `$PATH`.
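
As a quick sanity check, the lookup can be scripted; this is a small sketch (the fallback hint simply mirrors the earlier `mv` step):

```sh
# Report where spin resolves from, or print a hint if it is not on $PATH
if command -v spin >/dev/null 2>&1; then
  echo "spin found at: $(command -v spin)"
else
  echo "spin not on PATH; try: sudo mv spin /usr/local/bin/spin"
fi
```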
36 |
37 | ## A Script To Do This
38 |
39 | If you would rather work with a shell script, you may find [this
40 | Gist](https://gist.github.com/kate-goldenring/47950ccb30be2fa0180e276e82ac3593#file-spinkube-on-microk8s-sh)
41 | a great place to start. It installs Microk8s and SpinKube, and configures both.
42 |
43 | ## Installing Microk8s on Ubuntu
44 |
45 | Use `snap` to install microk8s:
46 |
47 | ```console { data-plausible="copy-quick-deploy-sample" }
48 | $ sudo snap install microk8s --classic
49 | ```
50 |
51 | This will install Microk8s and start it. You may want to read the [official installation
52 | instructions](https://microk8s.io/docs/getting-started) before proceeding. Wait for a moment or two,
53 | and then ensure Microk8s is running with the `microk8s status` command.
54 |
55 | Next, enable the TLS certificate manager:
56 |
57 | ```console { data-plausible="copy-quick-deploy-sample" }
58 | $ microk8s enable cert-manager
59 | ```
60 |
61 | Now we’re ready to install the SpinKube environment for running Spin applications.
62 |
63 | ### Installing SpinKube
64 |
65 | SpinKube provides the entire toolkit for running Spin serverless apps. You may want to familiarize
66 | yourself with the [SpinKube quickstart](https://www.spinkube.dev/docs/install/quickstart/) guide
67 | before proceeding.
68 |
69 | First, we need to apply a runtime class and the CRDs for SpinKube:
70 |
71 | ```console { data-plausible="copy-quick-deploy-sample" }
72 | $ microk8s kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.runtime-class.yaml
73 | $ microk8s kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.crds.yaml
74 | ```
75 |
76 | Both of these should apply immediately.
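
For reference, the runtime class applied above essentially boils down to this (a sketch based on the upstream file, which may differ in details):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin   # matches the containerd shim Spin handler name
```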
77 |
78 | We then need to install KWasm because it is not yet included with Microk8s:
79 |
80 | ```console { data-plausible="copy-quick-deploy-sample" }
81 | $ microk8s helm repo add kwasm http://kwasm.sh/kwasm-operator/
82 | $ microk8s helm install kwasm-operator kwasm/kwasm-operator --namespace kwasm --create-namespace --set kwasmOperator.installerImage=ghcr.io/spinframework/containerd-shim-spin/node-installer:v0.19.0
83 | $ microk8s kubectl annotate node --all kwasm.sh/kwasm-node=true
85 | ```
86 |
87 | > The last command annotates every node in the cluster (just one node in this case), telling the
88 | > KWasm operator to install the Spin shim there so those nodes can run Spin applications.
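
In other words, after the annotate command each node object carries an annotation along these lines (fragment shown for illustration; the node name is hypothetical):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: my-microk8s-node   # illustrative node name
  annotations:
    kwasm.sh/kwasm-node: "true"   # KWasm installs the shim on annotated nodes
```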
89 |
90 | Next, we need to install SpinKube’s operator using Helm (which is included with Microk8s).
91 |
92 | ```console { data-plausible="copy-quick-deploy-sample" }
93 | $ microk8s helm install spin-operator --namespace spin-operator --create-namespace --version 0.5.0 --wait oci://ghcr.io/spinframework/charts/spin-operator
95 | ```
96 |
97 | Now we have the main operator installed. There is just one more step: we need to install the shim
98 | executor, a `SpinAppExecutor` resource that tells the operator how to run Spin workloads and allows us to use multiple executors for WebAssembly.
99 |
100 | ```console { data-plausible="copy-quick-deploy-sample" }
101 | $ microk8s kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.shim-executor.yaml
103 | ```
104 |
105 | Now SpinKube is installed!
106 |
107 | ### Running an App in SpinKube
108 |
109 | Next, we can run a simple Spin application inside of Microk8s.
110 |
111 | While we could write regular deployments or pod specifications, the easiest way to deploy a Spin app
112 | is by creating a simple `SpinApp` resource. Let's use the simple example from SpinKube:
113 |
114 | ```console { data-plausible="copy-quick-deploy-sample" }
115 | $ microk8s kubectl apply -f https://raw.githubusercontent.com/spinframework/spin-operator/main/config/samples/simple.yaml
116 | ```
117 |
118 | The above applies a simple `SpinApp` resource that looks like this:
119 |
120 | ```yaml
121 | apiVersion: core.spinkube.dev/v1alpha1
122 | kind: SpinApp
123 | metadata:
124 | name: simple-spinapp
125 | spec:
126 | image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
127 | replicas: 1
128 | executor: containerd-shim-spin
129 | ```
130 |
131 | You can read up on the definition [in the
132 | documentation](https://www.spinkube.dev/docs/reference/spin-app/).
133 |
134 | It may take a moment or two to get started, but you should be able to see the app with `microk8s
135 | kubectl get pods`.
136 |
137 | ```console { data-plausible="copy-quick-deploy-sample" }
138 | $ microk8s kubectl get po
139 | NAME READY STATUS RESTARTS AGE
140 | simple-spinapp-5c7b66f576-9v9fd 1/1 Running 0 45m
141 | ```
142 |
143 | ### Troubleshooting
144 |
145 | If `STATUS` gets stuck in `ContainerCreating`, it is possible that KWasm did not install correctly.
146 | Try doing a `microk8s stop`, waiting a few minutes, and then running `microk8s start`. You can also
147 | try the command:
148 |
149 | ```console { data-plausible="copy-quick-deploy-sample" }
150 | $ microk8s kubectl logs -n kwasm -l app.kubernetes.io/name=kwasm-operator
151 | ```
152 |
153 | ### Testing the Spin App
154 |
155 | The easiest way to test our Spin app is to port-forward its service to the host:
156 |
157 | ```console { data-plausible="copy-quick-deploy-sample" }
158 | $ microk8s kubectl port-forward services/simple-spinapp 8080:80
159 | ```
160 |
161 | In another terminal, you can then run `curl localhost:8080/hello`:
162 |
163 | ```console { data-plausible="copy-quick-deploy-sample" }
164 | $ curl localhost:8080/hello
165 | Hello world from Spin!
166 | ```
167 |
168 | ### Where to go from here
169 |
170 | So far, we have installed Microk8s, SpinKube, and a single Spin app. For a more production-ready
171 | setup, you might want to:
172 |
173 | - Generate TLS certificates and attach them to your Spin app to add HTTPS support. If you are using
174 | an ingress controller (see below), [here is the documentation for TLS
175 | config](https://kubernetes.github.io/ingress-nginx/user-guide/tls/).
176 | - Configure a [cluster ingress](https://microk8s.io/docs/addon-ingress)
177 | - Set up another Linode Edge instance and create a [two-node Microk8s
178 | cluster](https://microk8s.io/docs/clustering).
179 |
180 | ### Bonus: Configuring Microk8s ingress
181 |
182 | Microk8s includes an NGINX-based ingress controller that works great with Spin applications.
183 |
184 | Enable the ingress controller: `microk8s enable ingress`
185 |
186 | Now we can create an ingress that routes our traffic to the `simple-spinapp` app. Create the file
187 | `ingress.yaml` with the following content. Note that the `service.name` field is
188 | the name of our Spin app.
189 |
190 | ```yaml
191 | apiVersion: networking.k8s.io/v1
192 | kind: Ingress
193 | metadata:
194 | name: http-ingress
195 | spec:
196 | rules:
197 | - http:
198 | paths:
199 | - path: /
200 | pathType: Prefix
201 | backend:
202 | service:
203 | name: simple-spinapp
204 | port:
205 | number: 80
206 | ```
207 |
208 | Install the above with `microk8s kubectl apply -f ingress.yaml`. After a moment or two, you should
209 | be able to run `curl localhost/hello` and see `Hello world from Spin!`.
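
To add HTTPS later, the same ingress could grow a `tls` block along these lines (a sketch: it assumes you control `example.com` and have already created a Secret named `simple-spinapp-tls` holding the certificate and key):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-ingress
spec:
  tls:
    - hosts:
        - example.com            # illustrative hostname
      secretName: simple-spinapp-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: simple-spinapp
                port:
                  number: 80
```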
210 |
211 | ## Conclusion
212 |
213 | In this guide we've installed Spin, Microk8s, and SpinKube and then run a Spin application.
214 |
215 | To learn more about the many things you can do with Spin apps, go to [the Spin developer
216 | docs](https://developer.fermyon.com/spin). You can also look at a variety of examples at [Spin Up
217 | Hub](https://developer.fermyon.com/hub).
218 |
219 | Or to try out different Kubernetes configurations, check out [other installation
220 | guides](https://www.spinkube.dev/docs/install/).
221 |
--------------------------------------------------------------------------------
/content/en/docs/install/quickstart.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Quickstart
3 | description: Learn how to set up a Kubernetes cluster, install SpinKube and run your first Spin App.
4 | weight: 2
5 | aliases:
6 | - /docs/quickstart
7 | - /docs/spin-operator/quickstart
8 | ---
9 |
10 | This Quickstart guide demonstrates how to set up a new Kubernetes cluster, install SpinKube, and
11 | deploy your first Spin application.
12 |
13 | ## Prerequisites
14 |
15 | For this Quickstart guide, you will need:
16 |
17 | - [kubectl](https://kubernetes.io/docs/tasks/tools/) - the Kubernetes CLI
18 | - [Rancher Desktop](https://rancherdesktop.io/) or [Docker
19 | Desktop](https://docs.docker.com/get-docker/) for managing containers and Kubernetes on your
20 | desktop
21 | - [k3d](https://k3d.io/v5.6.0/?h=installation#installation) - a lightweight Kubernetes distribution
22 | that runs on Docker
23 | - [Helm](https://helm.sh/docs/intro/install/) - the package manager for Kubernetes
24 |
25 | ### Set up Your Kubernetes Cluster
26 |
27 | 1. Create a Kubernetes cluster with a k3d image that includes the
28 | [containerd-shim-spin](https://github.com/spinframework/containerd-shim-spin) prerequisite already
29 | installed:
30 |
31 | ```console { data-plausible="copy-quick-create-k3d" }
32 | k3d cluster create wasm-cluster \
33 | --image ghcr.io/spinframework/containerd-shim-spin/k3d:v0.19.0 \
34 | --port "8081:80@loadbalancer" \
35 | --agents 2
36 | ```
37 |
38 | > Note: Spin Operator requires a few Kubernetes resources that are installed globally to the
39 | > cluster. We create these directly through `kubectl` as a best practice, since their lifetimes are
40 | > usually managed separately from a given Spin Operator installation.
41 |
42 | 2. Install cert-manager
43 |
44 | ```console { data-plausible="copy-quick-install-cert-manager" }
45 | kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
46 | kubectl wait --for=condition=available --timeout=300s deployment/cert-manager-webhook -n cert-manager
47 | ```
48 |
49 | 3. Apply the [Runtime
50 | Class](https://github.com/spinframework/spin-operator/blob/main/config/samples/spin-runtime-class.yaml)
51 | used for scheduling Spin apps onto nodes running the shim:
52 |
53 | > Note: In a production cluster you likely want to customize the Runtime Class with a `nodeSelector`
54 | > that matches nodes that have the shim installed. However, in the k3d example, the shim is installed
55 | > on every node.
56 |
57 | ```console { data-plausible="copy-quick-apply-runtime-class" }
58 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.runtime-class.yaml
59 | ```
60 |
61 | 4. Apply the [Custom Resource Definitions]({{< ref "glossary#custom-resource-definition-crd" >}})
62 | used by the Spin Operator:
63 |
64 | ```console { data-plausible="copy-quick-apply-crd" }
65 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.crds.yaml
66 | ```
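
The production customization mentioned in the note to step 3 could look roughly like this (a sketch; the `spin-shim` label name is illustrative, not a SpinKube convention):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin
scheduling:
  nodeSelector:
    spin-shim: "true"   # schedule Spin apps only onto labeled nodes
```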
67 |
68 | ## Deploy the Spin Operator
69 |
70 | Execute the following command to install the Spin Operator on the K3d cluster using Helm. This will
71 | create all of the Kubernetes resources required by Spin Operator under the Kubernetes namespace
72 | `spin-operator`. It may take a moment for the installation to complete as dependencies are installed
73 | and pods are spinning up.
74 |
75 | ```console { data-plausible="copy-quick-deploy-operator" }
76 | # Install Spin Operator with Helm
77 | helm install spin-operator \
78 | --namespace spin-operator \
79 | --create-namespace \
80 | --version 0.5.0 \
81 | --wait \
82 | oci://ghcr.io/spinframework/charts/spin-operator
83 | ```
84 |
85 | Lastly, create the [shim executor]({{< ref "glossary#spin-app-executor-crd" >}}):
86 |
87 | ```console { data-plausible="copy-quick-create-shim-executor" }
88 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.5.0/spin-operator.shim-executor.yaml
89 | ```
90 |
91 | ## Run the Sample Application
92 |
93 | You are now ready to deploy Spin applications onto the cluster!
94 |
95 | 1. Create your first application in the same `spin-operator` namespace in which the operator is running:
96 |
97 | ```console { data-plausible="copy-quick-deploy-sample" }
98 | kubectl apply -f https://raw.githubusercontent.com/spinframework/spin-operator/main/config/samples/simple.yaml
99 | ```
100 |
101 | 2. Forward a local port to the application pod so that it can be reached:
102 |
103 | ```console { data-plausible="copy-quick-forward-local-port" }
104 | kubectl port-forward svc/simple-spinapp 8083:80
105 | ```
106 |
107 | 3. In a different terminal window, make a request to the application:
108 |
109 | ```console { data-plausible="copy-quick-make-request" }
110 | curl localhost:8083/hello
111 | ```
112 |
113 | You should see:
114 |
115 | ```bash
116 | Hello world from Spin!
117 | ```
118 |
119 | ## Next Steps
120 |
121 | Congrats on deploying your first SpinApp! Recommended next steps:
122 |
123 | - Scale your [Spin Apps with Horizontal Pod Autoscaler (HPA)]({{< ref "scaling-with-hpa" >}})
124 | - Scale your [Spin Apps with Kubernetes Event Driven Autoscaler (KEDA)]({{< ref "scaling-with-keda"
125 | >}})
126 |
--------------------------------------------------------------------------------
/content/en/docs/install/rancher-desktop-certificates.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/docs/install/rancher-desktop-certificates.png
--------------------------------------------------------------------------------
/content/en/docs/install/rancher-desktop-cluster.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/docs/install/rancher-desktop-cluster.png
--------------------------------------------------------------------------------
/content/en/docs/install/rancher-desktop-contexts.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/docs/install/rancher-desktop-contexts.png
--------------------------------------------------------------------------------
/content/en/docs/install/rancher-desktop-hello.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/docs/install/rancher-desktop-hello.png
--------------------------------------------------------------------------------
/content/en/docs/install/rancher-desktop-kubernetes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/docs/install/rancher-desktop-kubernetes.png
--------------------------------------------------------------------------------
/content/en/docs/install/rancher-desktop-preferences.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/docs/install/rancher-desktop-preferences.png
--------------------------------------------------------------------------------
/content/en/docs/install/rancher-desktop.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Installing on Rancher Desktop
3 | description: This tutorial shows how to integrate SpinKube and Rancher Desktop.
4 | date: 2024-02-16
5 | categories: [Spin Operator]
6 | tags: [Tutorials]
7 | weight: 7
8 | aliases:
9 | - /docs/spin-operator/tutorials/integrating-with-rancher-desktop
10 | ---
11 |
12 | [Rancher Desktop](https://rancherdesktop.io/) is an open-source application that provides all the
13 | essentials to work with containers and Kubernetes on your desktop.
14 |
15 | ### Prerequisites
16 |
17 | - An operating system compatible with Rancher Desktop (Windows, macOS, or Linux).
18 | - Administrative or superuser access on your computer.
19 |
20 | ### Step 1: Installing Rancher Desktop
21 |
22 | 1. **Download Rancher Desktop**:
23 | - Navigate to the [Rancher Desktop releases
24 | page](https://github.com/rancher-sandbox/rancher-desktop/releases/tag/v1.14.0).
25 | - Select the appropriate installer for your operating system (version 1.14.0).
26 | 2. **Install Rancher Desktop**:
27 | - Run the downloaded installer and follow the on-screen instructions to complete the
28 | installation.
29 |
30 | ### Step 2: Configure Rancher Desktop
31 |
32 | - Open Rancher Desktop.
33 | - Navigate to the **Preferences** -> **Kubernetes** menu.
34 | - Ensure that **Enable Kubernetes** is selected and that the **Enable Traefik** and
35 | **Install Spin Operator** options are checked. Make sure to **Apply** your changes.
36 |
37 | 
38 |
39 | - Make sure to select `rancher-desktop` from the `Kubernetes Contexts` configuration in your
40 | toolbar.
41 |
42 | 
43 |
44 | - Make sure that the **Enable Wasm** option is checked in the **Preferences** → **Container Engine**
45 | section. Remember to apply your changes.
46 |
47 | 
48 |
49 | - Once your changes have been applied, go to the **Cluster Dashboard** → **More Resources** →
50 | **Cert Manager** section and click on **Certificates**. You will see that the
51 | `spin-operator-serving-cert` certificate is ready.
52 |
53 | 
54 |
55 | ### Step 3: Creating a Spin Application
56 |
57 | 1. **Open a terminal** (Command Prompt, Terminal, or equivalent based on your OS).
58 | 2. **Create a new Spin application**: This command creates a new Spin application using the
59 | `http-js` template, named `hello-k3s`.
60 |
61 | ```bash
62 | $ spin new -t http-js hello-k3s --accept-defaults
63 | $ cd hello-k3s
64 | ```
65 | 3. We can edit the `src/index.js` file and make the workload return the string "Hello from Rancher
66 | Desktop":
67 |
68 | ```javascript
69 | export async function handleRequest(request) {
70 | return {
71 | status: 200,
72 | headers: {"content-type": "text/plain"},
73 | body: "Hello from Rancher Desktop" // <-- This changed
74 | }
75 | }
76 | ```
77 |
78 | ### Step 4: Deploying Your Application
79 |
80 | 1. **Push the application to a registry**:
81 |
82 | ```bash
83 | $ npm install
84 | $ spin build
85 | $ spin registry push ttl.sh/hello-k3s:0.1.0
86 | ```
87 |
88 | Replace `ttl.sh/hello-k3s:0.1.0` with your registry URL and tag.
89 |
90 | 2. **Scaffold Kubernetes resources**:
91 |
92 | ```bash
93 | $ spin kube scaffold --from ttl.sh/hello-k3s:0.1.0
94 |
95 | apiVersion: core.spinkube.dev/v1alpha1
96 | kind: SpinApp
97 | metadata:
98 | name: hello-k3s
99 | spec:
100 | image: "ttl.sh/hello-k3s:0.1.0"
101 | executor: containerd-shim-spin
102 | replicas: 2
103 | ```
104 |
105 | This command prepares the necessary Kubernetes deployment configurations.
106 |
107 | 3. **Deploy the application to Kubernetes**:
108 |
109 | ```bash
110 | $ spin kube deploy --from ttl.sh/hello-k3s:0.1.0
111 | ```
112 |
113 | If we click on Rancher Desktop’s “Cluster Dashboard”, we can see `hello-k3s:0.1.0` running inside
114 | the “Workloads” dropdown section:
115 |
116 | 
117 |
118 | To access our app from outside the cluster, we can forward the port so that we can reach the application
119 | from our host machine:
120 |
121 | ```bash
122 | $ kubectl port-forward svc/hello-k3s 8083:80
123 | ```
124 |
125 | To test locally, we can make a request as follows:
126 |
127 | ```bash
128 | $ curl localhost:8083
129 | Hello from Rancher Desktop
130 | ```
131 |
132 | The above `curl` command, or a quick visit to `localhost:8083` in your browser, will return the "Hello
133 | from Rancher Desktop" message:
134 |
135 | 
136 |
--------------------------------------------------------------------------------
/content/en/docs/install/spin-kube-plugin.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Installing the `spin kube` plugin
3 | description: Learn how to install the `kube` plugin.
4 | categories: [guides]
5 | tags: [plugins, kubernetes, spin]
6 | weight: 3
7 | aliases:
8 | - /docs/spin-plugin-kube/installation
9 | ---
10 |
11 | The `kube` plugin for `spin` (the Spin CLI) provides a first-class experience for working with Spin
12 | apps in the context of Kubernetes.
13 |
14 | ## Prerequisites
15 |
16 | Ensure you have the Spin CLI ([version 2.3.1 or
17 | newer](https://developer.fermyon.com/spin/v2/upgrade)) installed on your machine.
18 |
19 | ## Install the plugin
20 |
21 | Before you install the plugin, you should fetch the list of latest Spin plugins from the
22 | spin-plugins repository:
23 |
24 | ```sh
25 | # Update the list of latest Spin plugins
26 | spin plugins update
27 | Plugin information updated successfully
28 | ```
29 |
30 | Go ahead and install the `kube` plugin using `spin plugins install`:
31 |
32 | ```sh
33 | # Install the latest kube plugin
34 | spin plugins install kube
35 | ```
36 |
37 | At this point you should see the `kube` plugin when querying the list of installed Spin plugins:
38 |
39 | ```sh
40 | # List all installed Spin plugins
41 | spin plugins list --installed
42 |
43 | cloud 0.7.0 [installed]
44 | cloud-gpu 0.1.0 [installed]
45 | kube 0.1.1 [installed]
46 | pluginify 0.6.0 [installed]
47 | ```
48 |
49 | ### Compiling from source
50 |
51 | As an alternative to the plugin manager, you can download and manually install the plugin. Manual
52 | installation is commonly used to test in-flight changes. For most users, installing the plugin via
53 | Spin's plugin manager is the better option.
54 |
55 | Please refer to the [spin-plugin-kube GitHub
56 | repository](https://github.com/spinframework/spin-plugin-kube) for instructions on how to compile the
57 | plugin from source.
58 |
--------------------------------------------------------------------------------
/content/en/docs/misc/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Miscellaneous
3 | description: Documentation that we can't find a more organized place for. Like that drawer in your kitchen with the scissors, batteries, duct tape, and other junk.
4 | weight: 80
5 | ---
6 |
--------------------------------------------------------------------------------
/content/en/docs/misc/compatibility.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Compatibility
3 | description: A list of compatible Kubernetes distributions and platforms for running SpinKube.
4 | categories: [Spin Operator]
5 | tags: []
6 | aliases:
7 | - /docs/compatibility
8 | ---
9 |
10 | See the following list of compatible Kubernetes distributions and platforms for running the [Spin
11 | Operator](https://github.com/spinframework/spin-operator/):
12 |
13 | - [Amazon Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/)
14 | - [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/products/kubernetes-service)
15 | - [Civo Kubernetes](https://www.civo.com/kubernetes)
16 | - [Digital Ocean Kubernetes (DOKS)](https://www.digitalocean.com/products/kubernetes)
17 | - [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine)
18 | - [k3d](https://k3d.io)
19 | - [minikube](https://minikube.sigs.k8s.io/docs/) (explicitly pass `--container-runtime=containerd`
20 | and ensure you're on minikube version `>= 1.33`)
21 | - [Scaleway Kubernetes Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/)
22 |
23 | > **Disclaimer**: Please note that this is a working list of compatible Kubernetes distributions and
24 | > platforms. For managed Kubernetes services, it's important to be aware that cloud providers may
25 | > choose to discontinue support for specific dependencies, such as container runtimes. While we
26 | > strive to maintain the accuracy of this documentation, it is ultimately your responsibility to
27 | > verify with your Kubernetes provider whether the required dependencies are still supported.
28 |
29 | ### How to validate Spin Operator Compatibility
30 |
31 | If you would like to validate Spin Operator's compatibility with a specific Kubernetes
32 | distribution or platform, or simply want to test one of the platforms listed above yourself, follow
33 | these steps:
34 |
35 | 1. **Install the Spin Operator**: Begin by installing the Spin Operator within the Kubernetes
36 | cluster. This involves deploying the necessary dependencies and the Spin Operator itself. (See
37 | [Installing with Helm]({{< ref "installing-with-helm" >}}))
38 |
39 | 2. **Create, Package, and Deploy a Spin App**: Proceed by creating a Spin App, packaging it, and
40 | successfully deploying it within the Kubernetes environment. (See [Package and Deploy Spin
41 | Apps]({{< ref "packaging" >}}))
42 |
43 | 3. **Invoke the Spin App**: Once the Spin App is deployed, ensure at least one request was
44 | successfully served by the Spin App.
45 |
46 | ## Container Runtime Constraints
47 |
48 | The Spin Operator requires that nodes running Spin applications use containerd
49 | version [`1.6.26+`](https://github.com/containerd/containerd/releases/tag/v1.6.26) or
50 | [`1.7.7+`](https://github.com/containerd/containerd/releases/tag/v1.7.7).
51 |
52 | Use the `kubectl get nodes -o wide` command to see which container runtime is installed per node:
53 |
54 | ```shell
55 | # Inspect container runtimes per node
56 | kubectl get nodes -o wide
57 | NAME STATUS VERSION OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
58 | generalnp-vmss000000 Ready v1.27.9 Ubuntu 22.04.4 LTS 5.15.0-1056-azure containerd://1.7.7-1
59 | generalnp-vmss000001 Ready v1.27.9 Ubuntu 22.04.4 LTS 5.15.0-1056-azure containerd://1.7.7-1
60 | generalnp-vmss000002 Ready v1.27.9 Ubuntu 22.04.4 LTS 5.15.0-1056-azure containerd://1.7.7-1
62 | ```
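
To check a reported version against the constraint without eyeballing it, a `sort -V` comparison works; a minimal sketch, assuming the version number has already been parsed out of the `containerd://1.7.7-1` form (this checks one branch of the `1.6.26+` / `1.7.7+` constraint):

```sh
# Succeeds (and prints a confirmation) when $have >= $need
need="1.7.7"
have="1.7.7"   # illustrative value taken from the CONTAINER-RUNTIME column
if [ "$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]; then
  echo "containerd $have satisfies the >= $need constraint"
fi
```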
63 |
--------------------------------------------------------------------------------
/content/en/docs/misc/integrations.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Integrations
3 | description: A high level overview of the SpinKube integrations.
4 | categories: [SpinKube]
5 | tags: [Integrations]
6 | aliases:
7 | - /docs/integrations
8 | ---
9 |
10 | # SpinKube Integrations
11 |
12 | ## KEDA
13 |
14 | [Kubernetes Event-Driven Autoscaling (KEDA)](https://keda.sh/) provides event-driven autoscaling for
15 | Kubernetes workloads. It allows Kubernetes to automatically scale applications in response to
16 | external events such as messages in a queue, enabling more efficient resource utilization and
17 | responsive scaling based on actual demand, rather than static metrics. KEDA serves as a bridge
18 | between Kubernetes and various event sources, making it easier to scale applications dynamically in
19 | a cloud-native environment. If you would like to see how SpinKube integrates with KEDA, please read
20 | the ["Scaling With KEDA" tutorial]({{< ref "scaling-with-keda" >}}) which deploys a SpinApp and the
21 | KEDA ScaledObject instance onto a cluster. The tutorial also uses Bombardier to generate traffic to
22 | test how well KEDA scales our SpinApp.
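
As a flavor of what that integration looks like, a KEDA `ScaledObject` targeting the Deployment behind a SpinApp might be sketched like this (names and the threshold are illustrative; see the tutorial for a working setup):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: spinapp-autoscaler
spec:
  scaleTargetRef:
    name: simple-spinapp        # Deployment created by Spin Operator for the SpinApp
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "50"             # scale out above 50% average CPU
```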
23 |
24 | ## Rancher Desktop
25 |
26 | The [release of Rancher Desktop
27 | 1.13.0](https://www.suse.com/c/rancher_blog/rancher-desktop-1-13-with-support-for-webassembly-and-more/)
28 | comes with basic support for running WebAssembly (Wasm) containers and deploying them to Kubernetes.
29 | Rancher Desktop, by SUSE, is an open-source application that provides all the essentials to work with
30 | containers and Kubernetes on your desktop. If you would like to see how SpinKube integrates with
31 | Rancher Desktop, please read the ["Integrating With Rancher Desktop" tutorial]({{< ref
32 | "/docs/install/rancher-desktop" >}}) which walks through the steps of installing the necessary
33 | components for SpinKube (including cert-manager for TLS, the CRDs, and the KWasm runtime class manager
34 | using Helm charts). The tutorial then demonstrates how to create a simple Spin JavaScript
35 | application and deploys the application within Rancher Desktop's local cluster.
36 |
--------------------------------------------------------------------------------
/content/en/docs/misc/spintainer-executor.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Spintainer Executor
3 | description: An overview of what the Spintainer Executor does and how it can be used.
4 | categories: [SpinKube]
5 | tags: []
6 | ---
7 |
8 | # The Spintainer Executor
9 |
10 | The Spintainer (a play on the words Spin and container) executor is a [SpinAppExecutor](../../reference/spin-app-executor) that runs Spin applications directly in a container rather than via the [shim](../../topics/architecture#containerd-shim-spin). This is useful for a number of reasons:
11 |
12 | - Provides the flexibility to:
13 | - Use any Spin version you want.
14 | - Use any custom triggers or plugins you want.
15 | - Allows you to use SpinKube even if you don't have the cluster permissions to install the shim.
16 |
17 | > Note: We recommend using the shim for most use cases. The spintainer executor is best reserved as a workaround.
18 |
19 | ## How to create a spintainer executor
20 |
21 | The following is some sample configuration for a spintainer executor:
22 |
23 | ```yaml
24 | apiVersion: core.spinkube.dev/v1alpha1
25 | kind: SpinAppExecutor
26 | metadata:
27 | name: spintainer
28 | spec:
29 | createDeployment: true
30 | deploymentConfig:
31 | installDefaultCACerts: true
32 | spinImage: ghcr.io/fermyon/spin:v2.7.0
33 | ```
34 |
35 | Save this into a file named `spintainer-executor.yaml` and then apply it to the cluster.
36 |
37 | ```bash
38 | kubectl apply -f spintainer-executor.yaml
39 | ```
40 |
41 | ## How to use a spintainer executor
42 |
43 | To use the spintainer executor you must reference it as the executor of your `SpinApp`.
44 |
45 | ```yaml
46 | apiVersion: core.spinkube.dev/v1alpha1
47 | kind: SpinApp
48 | metadata:
49 | name: simple-spinapp
50 | spec:
51 | image: "ghcr.io/spinkube/containerd-shim-spin/examples/spin-rust-hello:v0.13.0"
52 | replicas: 1
53 | executor: spintainer
54 | ```
55 |
56 | ## How the spintainer executor works
57 |
58 | The spintainer executor runs your Spin application in a container created from the image specified by `.spec.deploymentConfig.spinImage`. The container image must have a Spin binary as its [entrypoint](https://docs.docker.com/reference/dockerfile/#entrypoint). It will be started with the following arguments:
59 |
60 | ```
61 | up --listen {spin-operator-defined-port} -f {spin-operator-defined-image} --runtime-config-file {spin-operator-defined-config-file}
62 | ```
63 |
64 | For ease of use, you can use the images published by the Spin project [here](https://github.com/fermyon/spin/pkgs/container/spin). Alternatively, you can craft images for your own unique needs.
65 |
--------------------------------------------------------------------------------
/content/en/docs/misc/upgrading-to-v0.4.0.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Upgrading to v0.4.0
3 | description: Instructions on how to navigate the breaking changes v0.4.0 introduces.
4 | categories: [Spin Operator]
5 | tags: []
6 | ---
7 |
8 | Spin Operator v0.4.0 introduces a breaking API change: the SpinApp and SpinAppExecutor CRDs move from the `spinoperator.dev` domain to `spinkube.dev`. Because of this, upgrading to v0.4.0 requires a re-install of the Spin Operator.
9 |
10 | ## Migration steps
11 |
12 | 1. Uninstall any existing SpinApps.
13 |
14 | > Note: Back them up first:
15 | >
16 | > ```sh
17 | > kubectl get spinapps.core.spinoperator.dev -o yaml > spinapps.yaml
18 | > ```
19 |
20 | ```sh
21 | kubectl delete spinapp.core.spinoperator.dev --all
22 | ```
23 |
24 | 2. Uninstall any existing SpinAppExecutors.
25 | ```sh
26 | kubectl delete spinappexecutor.core.spinoperator.dev --all
27 | ```
28 | 3. Uninstall the old Spin Operator.
29 | > Note: If you used a different release name or namespace when installing the Spin Operator you'll have to adjust the command accordingly. Alternatively, if you used something other than Helm to install the Spin Operator, you'll need to uninstall it following whatever approach you used to install it.
30 | ```sh
31 | helm uninstall spin-operator --namespace spin-operator
32 | ```
33 | 4. Uninstall the old CRDs.
34 | ```sh
35 | kubectl delete crd spinapps.core.spinoperator.dev
36 | kubectl delete crd spinappexecutors.core.spinoperator.dev
37 | ```
38 | 5. Modify your SpinApps to use the new `apiVersion`.
39 | Replace `core.spinoperator.dev/v1alpha1` with `core.spinkube.dev/v1alpha1` in the `apiVersion` of each of your SpinApps.
40 | > Note: If you don't have your SpinApps tracked in source control, then you will have backed them up to a file named `spinapps.yaml` in step 1. In that case, replace the `apiVersion` in the `spinapps.yaml` file. Here's a command that can help with that:
41 | ```sh
42 | sed 's|apiVersion: core.spinoperator.dev/v1alpha1|apiVersion: core.spinkube.dev/v1alpha1|g' spinapps.yaml > modified-spinapps.yaml
43 | ```
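
You can sanity-check the substitution against a throwaway manifest before running it on your real backup (the demo file below is purely illustrative):

```shell
# Create a minimal old-style SpinApp manifest to test against
cat > /tmp/spinapp-demo.yaml <<'EOF'
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: demo
EOF

# The same substitution as above; only the apiVersion line changes
sed 's|apiVersion: core.spinoperator.dev/v1alpha1|apiVersion: core.spinkube.dev/v1alpha1|g' /tmp/spinapp-demo.yaml
```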
44 | 6. Install the new CRDs.
45 | ```sh
46 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.4.0/spin-operator.crds.yaml
47 | ```
48 | 7. Re-install the SpinAppExecutor.
49 | ```sh
50 | kubectl apply -f https://github.com/spinframework/spin-operator/releases/download/v0.4.0/spin-operator.shim-executor.yaml
51 | ```
52 | If you had other executors, you'll need to install them too.
53 | 8. Install the new Spin Operator.
54 | ```sh
55 | # Install Spin Operator with Helm
56 | helm install spin-operator \
57 | --namespace spin-operator \
58 | --create-namespace \
59 | --version 0.4.0 \
60 | --wait \
61 | oci://ghcr.io/spinkube/charts/spin-operator
62 | ```
63 | 9. Re-apply your modified SpinApps.
64 | Follow whatever pattern you normally use to get your SpinApps into the cluster, e.g. kubectl, Flux, Helm, etc.
65 | > Note: If you backed up your SpinApps in step 1, you can re-apply them using the command below:
66 | >
67 | > ```sh
68 | > kubectl apply -f modified-spinapps.yaml
69 | > ```
70 | 10. Upgrade your `spin kube` plugin.
71 | If you're using the `spin kube` plugin, you'll need to upgrade it to the new version so that the scaffolded apps remain valid.
72 | ```sh
73 | spin plugins upgrade kube
74 | ```
75 |
--------------------------------------------------------------------------------
/content/en/docs/overview.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Overview
3 | description: A high level overview of the SpinKube sub-projects.
4 | weight: 1
5 | categories: [SpinKube]
6 | tags: []
7 | ---
8 |
9 | # Project Overview
10 |
11 | [SpinKube](https://github.com/spinframework) is an open source project that streamlines the
12 | experience of deploying and operating Wasm workloads on Kubernetes, using [Spin
13 | Operator](https://github.com/spinframework/spin-operator) in tandem with
14 | [runwasi](https://github.com/containerd/runwasi) and [runtime class
15 | manager](https://github.com/spinframework/runtime-class-manager).
16 |
17 | With SpinKube, you can leverage the advantages of using WebAssembly (Wasm) for your workloads:
18 |
19 | - Artifacts are significantly smaller in size compared to container images.
20 | - Artifacts can be quickly fetched over the network and started much faster (\*Note: We are aware of
21 | several optimizations that still need to be implemented to enhance the startup time for
22 | workloads).
23 | - Substantially fewer resources are required during idle times.
24 |
25 | Thanks to SpinKube, we can do all of this while integrating with Kubernetes primitives including
26 | DNS, probes, autoscaling, metrics, and many more cloud native and CNCF projects.
27 |
28 | 
29 |
30 | SpinKube watches [Spin App Custom Resources]({{< ref "glossary#spinapp-manifest" >}}) and realizes
31 | the desired state in the Kubernetes cluster. The foundation of this project was built using the
32 | [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) framework and contains a Spin App
33 | Custom Resource Definition (CRD) and controller.
34 |
35 | SpinKube is a [Cloud Native Computing Foundation](https://www.cncf.io/) sandbox project.
36 |
37 | To get started, check out our [Quickstart guide]({{< ref "quickstart" >}}).
38 |
--------------------------------------------------------------------------------
/content/en/docs/reference/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: API Reference
3 | description: Technical references for APIs and other aspects of SpinKube's machinery.
4 | weight: 70
5 | ---
6 |
--------------------------------------------------------------------------------
/content/en/docs/reference/cli-reference.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: CLI Reference
3 | description: Spin Plugin kube CLI Reference.
4 | weight: 3
5 | categories: [reference]
6 | tags: [plugins]
7 | aliases:
8 | - /docs/spin-plugin-kube/reference
9 | ---
10 |
11 | ## spin kube completion
12 |
13 | ```bash
14 | spin kube completion --help
15 | Generate the autocompletion script for kube for the specified shell.
16 | See each sub-command's help for details on how to use the generated script.
17 |
18 | Usage:
19 | kube completion [command]
20 |
21 | Available Commands:
22 | bash Generate the autocompletion script for bash
23 | fish Generate the autocompletion script for fish
24 | powershell Generate the autocompletion script for powershell
25 | zsh Generate the autocompletion script for zsh
26 |
27 | Flags:
28 | -h, --help help for completion
29 | ```
30 |
31 | ### spin kube completion bash
32 |
33 | ```bash
34 | spin kube completion bash --help
35 | Generate the autocompletion script for the bash shell.
36 |
37 | This script depends on the 'bash-completion' package.
38 | If it is not installed already, you can install it via your OS's package manager.
39 |
40 | To load completions in your current shell session:
41 |
42 | source <(kube completion bash)
43 |
44 | To load completions for every new session, execute once:
45 |
46 | #### Linux:
47 |
48 | kube completion bash > /etc/bash_completion.d/kube
49 |
50 | #### macOS:
51 |
52 | kube completion bash > $(brew --prefix)/etc/bash_completion.d/kube
53 |
54 | You will need to start a new shell for this setup to take effect.
55 |
56 | Usage:
57 | kube completion bash
58 |
59 | Flags:
60 | -h, --help help for bash
61 | --no-descriptions disable completion descriptions
62 | ```
63 |
64 | ### spin kube completion fish
65 |
66 | ```bash
67 | spin kube completion fish --help
68 | Generate the autocompletion script for the fish shell.
69 |
70 | To load completions in your current shell session:
71 |
72 | kube completion fish | source
73 |
74 | To load completions for every new session, execute once:
75 |
76 | kube completion fish > ~/.config/fish/completions/kube.fish
77 |
78 | You will need to start a new shell for this setup to take effect.
79 |
80 | Usage:
81 | kube completion fish [flags]
82 |
83 | Flags:
84 | -h, --help help for fish
85 | --no-descriptions disable completion descriptions
86 | ```
87 |
88 | ### spin kube completion powershell
89 |
90 | ```bash
91 | spin kube completion powershell --help
92 | Generate the autocompletion script for powershell.
93 |
94 | To load completions in your current shell session:
95 |
96 | kube completion powershell | Out-String | Invoke-Expression
97 |
98 | To load completions for every new session, add the output of the above command
99 | to your powershell profile.
100 |
101 | Usage:
102 | kube completion powershell [flags]
103 |
104 | Flags:
105 | -h, --help help for powershell
106 | --no-descriptions disable completion descriptions
107 | ```
108 |
109 | ### spin kube completion zsh
110 |
111 | ```bash
112 | spin kube completion zsh --help
113 | Generate the autocompletion script for the zsh shell.
114 |
115 | If shell completion is not already enabled in your environment you will need
116 | to enable it. You can execute the following once:
117 |
118 | echo "autoload -U compinit; compinit" >> ~/.zshrc
119 |
120 | To load completions in your current shell session:
121 |
122 | source <(kube completion zsh)
123 |
124 | To load completions for every new session, execute once:
125 |
126 | #### Linux:
127 |
128 | kube completion zsh > "${fpath[1]}/_kube"
129 |
130 | #### macOS:
131 |
132 | kube completion zsh > $(brew --prefix)/share/zsh/site-functions/_kube
133 |
134 | You will need to start a new shell for this setup to take effect.
135 |
136 | Usage:
137 | kube completion zsh [flags]
138 |
139 | Flags:
140 | -h, --help help for zsh
141 | --no-descriptions disable completion descriptions
142 | ```
143 |
144 | ## spin kube help
145 |
146 | ```bash
147 | spin kube --help
148 | Manage apps running on Kubernetes
149 |
150 | Usage:
151 | kube [command]
152 |
153 | Available Commands:
154 | completion Generate the autocompletion script for the specified shell
155 | help Help about any command
156 | scaffold scaffold SpinApp manifest
157 | version Display version information
158 |
159 | Flags:
160 | -h, --help help for kube
161 | --kubeconfig string the path to the kubeconfig file
162 | -n, --namespace string the namespace scope
163 | -v, --version version for kube
164 | ```
165 |
166 | ## spin kube scaffold
167 |
168 | ```bash
169 | spin kube scaffold --help
170 | scaffold SpinApp manifest
171 |
172 | Usage:
173 | kube scaffold [flags]
174 |
175 | Flags:
176 | --autoscaler string The autoscaler to use. Valid values are 'hpa' and 'keda'
177 | --autoscaler-target-cpu-utilization int32 The target CPU utilization percentage to maintain across all pods (default 60)
178 | --autoscaler-target-memory-utilization int32 The target memory utilization percentage to maintain across all pods (default 60)
179 | --cpu-limit string The maximum amount of CPU resource units the Spin application is allowed to use
180 | --cpu-request string The amount of CPU resource units requested by the Spin application. Used to determine which node the Spin application will run on
181 | --executor string The executor used to run the Spin application (default "containerd-shim-spin")
182 | -f, --from string Reference in the registry of the Spin application
183 | -h, --help help for scaffold
184 | -s, --image-pull-secret strings secrets in the same namespace to use for pulling the image
185 | --max-replicas int32 Maximum number of replicas for the spin app. Autoscaling must be enabled to use this flag (default 3)
186 | --memory-limit string The maximum amount of memory the Spin application is allowed to use
187 | --memory-request string The amount of memory requested by the Spin application. Used to determine which node the Spin application will run on
188 | -o, --out string path to file to write manifest yaml
189 | -r, --replicas int32 Minimum number of replicas for the spin app (default 2)
190 | -c, --runtime-config-file string path to runtime config file
191 | ```
192 |
193 | ## spin kube version
194 |
195 | ```bash
196 | spin kube version
197 | ```
198 |
--------------------------------------------------------------------------------
/content/en/docs/reference/spin-app-executor.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: SpinAppExecutor
3 | weight: 2
4 | description: Custom Resource Definition (CRD) reference for `SpinAppExecutor`
5 | categories: [Spin Operator]
6 | tags: [reference]
7 | ---
8 | Resource Types:
9 |
10 | - [SpinAppExecutor](#spinappexecutor)
11 |
12 | ## SpinAppExecutor
13 |
14 | SpinAppExecutor is the Schema for the spinappexecutors API
15 |
16 |
| Name | Type | Description | Required |
|------|------|-------------|----------|
| spec | object | SpinAppExecutorSpec defines the desired state of SpinAppExecutor | false |
| status | object | SpinAppExecutorStatus defines the observed state of SpinAppExecutor | false |
56 |
57 |
58 |
59 |
60 | ### `SpinAppExecutor.spec`
61 | [back to parent](#spinappexecutor)
62 |
63 |
64 | SpinAppExecutorSpec defines the desired state of SpinAppExecutor
65 |
66 |
67 |
68 |
69 |
| Name | Type | Description | Required |
|------|------|-------------|----------|
| createDeployment | boolean | CreateDeployment specifies whether the Executor wants the SpinKube operator to create a deployment for the application or if it will be realized externally. | true |
| deploymentConfig | object | DeploymentConfig specifies how the deployment should be configured when createDeployment is true. | false |
91 |
92 |
93 |
94 |
95 | ### `SpinAppExecutor.spec.deploymentConfig`
96 | [back to parent](#spinappexecutorspec)
97 |
98 |
99 | DeploymentConfig specifies how the deployment should be configured when
100 | createDeployment is true.
101 |
102 |
103 |
104 |
105 |
| Name | Type | Description | Required |
|------|------|-------------|----------|
| caCertSecret | string | CACertSecret specifies the name of the secret containing the CA certificates to be mounted to the deployment. | false |
| installDefaultCACerts | boolean | InstallDefaultCACerts specifies whether the default CA certificate bundle should be generated. When set a new secret will be created containing the certificates. If no secret name is defined in `CACertSecret` the secret name will be `spin-ca`. | false |
| runtimeClassName | string | RuntimeClassName is the runtime class name that should be used by pods created as part of a deployment. This should only be defined when SpinImage is not defined. | false |
| spinImage | string | SpinImage points to an image that will run Spin in a container to execute your SpinApp. This is an alternative to using the shim to execute your SpinApp. This should only be defined when RuntimeClassName is not defined. When specified, application images must be available without authentication. | false |

### `SpinAppExecutor.spec.deploymentConfig.otel`
[back to parent](#spinappexecutorspecdeploymentconfig)

| Name | Type | Description | Required |
|------|------|-------------|----------|
| exporter_otlp_endpoint | string | ExporterOtlpEndpoint configures the default combined otlp endpoint for sending telemetry | false |
| exporter_otlp_logs_endpoint | string | ExporterOtlpLogsEndpoint configures the logs-specific otlp endpoint | false |
| exporter_otlp_metrics_endpoint | string | ExporterOtlpMetricsEndpoint configures the metrics-specific otlp endpoint | false |
| exporter_otlp_traces_endpoint | string | ExporterOtlpTracesEndpoint configures the trace-specific otlp endpoint | false |
202 |
203 |
204 |
205 |
206 |
--------------------------------------------------------------------------------
/content/en/docs/spinkube-overview-diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/docs/spinkube-overview-diagram.png
--------------------------------------------------------------------------------
/content/en/docs/topics/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Using SpinKube
3 | description: Introductions to all the key parts of SpinKube you’ll need to know.
4 | weight: 30
5 | ---
6 |
--------------------------------------------------------------------------------
/content/en/docs/topics/architecture.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: SpinKube at a glance
3 | description: A high level overview of the SpinKube sub-projects.
4 | weight: 80
5 | ---
6 |
7 | ## spin-operator
8 |
9 | [Spin Operator](https://github.com/spinframework/spin-operator/) is a [Kubernetes
10 | operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) which empowers platform
11 | engineers to deploy [Spin applications](https://developer.fermyon.com/spin) as custom resources to
12 | their Kubernetes clusters. Spin Operator provides an elegant solution for platform engineers looking
13 | to improve efficiency without compromising on performance while maintaining workload portability.
14 |
15 | ### Why Spin Operator?
16 |
17 | By bringing the power of the Spin framework to Kubernetes clusters, Spin Operator provides
18 | application developers and platform engineers with the best of both worlds. For developers, this
19 | means easily building portable serverless functions that leverage the power and performance of Wasm
20 | via the Spin developer tool. For platform engineers, this means using idiomatic Kubernetes
21 | primitives (secrets, autoscaling, etc.) and tooling to manage these workloads at scale in a
22 | production environment, improving their overall operational efficiency.
23 |
24 | ### How Does Spin Operator Work?
25 |
26 | Built with the [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) framework, Spin
27 | Operator is a Kubernetes operator. Kubernetes operators are used to extend Kubernetes automation to
28 | new objects, defined as custom resources, without modifying the Kubernetes API. The Spin Operator is
29 | composed of two main components:
30 | - A controller that defines and manages Wasm workloads on k8s.
31 | - The "SpinApps" Custom Resource Definition (CRD).
32 |
33 | 
34 |
35 | SpinApps CRDs can be [composed manually]({{< ref "glossary#custom-resource-definition-crd" >}}) or
36 | generated automatically from an existing Spin application using the [`spin kube
37 | scaffold`](#spin-plugin-kube) command. The former approach lends itself well to CI/CD systems,
38 | whereas the latter is a better fit for local testing as part of a local developer workflow.
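
A manually composed `SpinApp` can be as small as the following sketch (the image reference is illustrative; the `executor` must name a `SpinAppExecutor` installed in the cluster):

```yaml
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: my-app
spec:
  image: ghcr.io/example/my-app:v1.0.0
  replicas: 2
  executor: containerd-shim-spin
```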
39 |
40 | Once an application deployment begins, Spin Operator handles scheduling the workload on the
41 | appropriate nodes (thanks to the [Runtime Class Manager](#runtime-class-manager), previously known
42 | as Kwasm) and managing the resource's lifecycle. There is no need to fetch the
43 | [`containerd-shim-spin`](#containerd-shim-spin) binary or mutate node labels. This is all managed
44 | via the Runtime Class Manager, which you will install as a dependency when setting up Spin Operator.
45 |
46 | ## containerd-shim-spin
47 |
48 | The [`containerd-shim-spin`](https://github.com/spinframework/containerd-shim-spin) is a [containerd
49 | shim](https://github.com/containerd/containerd/blob/main/core/runtime/v2/README.md#runtime-shim)
50 | implementation for [Spin](https://developer.fermyon.com/spin), which enables running Spin workloads
51 | on Kubernetes via [runwasi](https://github.com/deislabs/runwasi). This means that by installing this
52 | shim onto Kubernetes nodes, we can add a [runtime
53 | class](https://kubernetes.io/docs/concepts/containers/runtime-class/) to Kubernetes and schedule
54 | Spin workloads on those nodes. Your Spin apps can act just like container workloads!
55 |
56 | The `containerd-shim-spin` is specifically designed to execute applications built with
57 | [Spin](https://www.fermyon.com/spin) (a developer tool for building and running serverless Wasm
58 | applications). The shim ensures that Wasm workloads can be managed effectively within a Kubernetes
59 | environment, leveraging containerd's capabilities.
60 |
61 | In a Kubernetes cluster, specific nodes can be bootstrapped with Wasm runtimes and labeled
62 | accordingly to facilitate the scheduling of Wasm workloads. `RuntimeClasses` in Kubernetes are used
63 | to schedule Pods to specific nodes and target specific runtimes. By defining a `RuntimeClass` with
64 | the appropriate `NodeSelector` and handler, Kubernetes can direct Wasm workloads to nodes equipped
65 | with the necessary Wasm runtimes and ensure they are executed with the correct runtime handler.
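
Concretely, such a `RuntimeClass` might look like the following sketch (SpinKube installations typically name the class `wasmtime-spin-v2`; a `scheduling.nodeSelector` can additionally be set to target only shim-equipped nodes):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
# "spin" maps to the containerd-shim-spin binary registered with containerd
handler: spin
```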
66 |
67 | Overall, the Containerd Shim Spin represents a significant advancement in integrating Wasm workloads
68 | into Kubernetes clusters, enhancing the versatility and capabilities of container orchestration.
69 |
70 | ## runtime-class-manager
71 |
72 | The [Runtime Class Manager, also known as the Containerd Shim Lifecycle
73 | Operator](https://github.com/spinframework/runtime-class-manager), is designed to automate and manage the
74 | lifecycle of containerd shims in a Kubernetes environment. This includes tasks like installation,
75 | update, removal, and configuration of shims, reducing manual errors and improving reliability in
76 | managing WebAssembly (Wasm) workloads and other containerd extensions.
77 |
78 | The Runtime Class Manager provides a robust and production-ready solution for installing, updating,
79 | and removing shims, as well as managing node labels and runtime classes in a Kubernetes environment.
80 |
81 | By automating these processes, the runtime-class-manager enhances reliability, reduces human error,
82 | and simplifies the deployment and management of containerd shims in Kubernetes clusters.
83 |
84 | ## spin-plugin-kube
85 |
86 | The [Kubernetes plugin for Spin](https://github.com/spinframework/spin-plugin-kube) is designed to
87 | enhance Spin by enabling the execution of Wasm modules directly within a Kubernetes cluster.
88 | Specifically, it is a tool designed for Kubernetes integration with the Spin command-line interface. This
89 | plugin works by integrating with containerd shims, allowing Kubernetes to manage and run Wasm
90 | workloads in a way similar to traditional container workloads.
91 |
92 | The Kubernetes plugin for Spin allows developers to use the Spin command-line interface for
93 | deploying Spin applications; it provides a seamless workflow for building, pushing, deploying, and
94 | managing Spin applications in a Kubernetes environment. It includes commands for scaffolding new
95 | components as Kubernetes manifests, and deploying and retrieving information about Spin applications
96 | running in Kubernetes. This plugin is an essential tool for developers looking to streamline their
97 | Spin application deployment on Kubernetes platforms.
98 |
--------------------------------------------------------------------------------
/content/en/docs/topics/assigning-variables.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Assigning variables
3 | description: Configure Spin Apps using values from Kubernetes ConfigMaps and Secrets.
4 | date: 2024-02-16
5 | categories: [Spin Operator]
6 | tags: [Tutorials]
7 | weight: 11
8 | aliases:
9 | - /docs/spin-operator/tutorials/assigning-variables
10 | ---
11 |
12 |
13 | By using variables, you can alter application behavior without recompiling your SpinApp. When
14 | running in Kubernetes, you can either provide constant values for variables, or reference them from
15 | Kubernetes primitives such as `ConfigMaps` and `Secrets`. This tutorial guides you through the
16 | process of assigning variables to your `SpinApp`.
17 |
18 | > Note: If you'd like to learn how to configure your application with an external variable provider
19 | > like [Vault](https://vaultproject.io) or [Azure Key
20 | > Vault](https://azure.microsoft.com/en-us/products/key-vault), see the [External Variable Provider
21 | > guide](./external-variable-providers.md)
22 |
23 | ## Build and Store SpinApp in an OCI Registry
24 |
25 | We’re going to build the SpinApp and store it inside of a [ttl.sh](http://ttl.sh) registry. Move
26 | into the
27 | [apps/variable-explorer](https://github.com/spinframework/spin-operator/blob/main/apps/variable-explorer)
28 | directory and build the SpinApp we’ve provided:
29 |
30 | ```bash
31 | # Build and publish the sample app
32 | cd apps/variable-explorer
33 | spin build
34 | spin registry push ttl.sh/variable-explorer:1h
35 | ```
36 |
37 | Note that the tag at the end of [ttl.sh/variable-explorer:1h](http://ttl.sh/variable-explorer:1h)
38 | indicates how long the image will last, e.g. `1h` (1 hour). The maximum is `24h`, and you will need to
39 | push the image again once that time-to-live expires.
40 |
41 | For demonstration purposes, we use the [variable
42 | explorer](https://github.com/spinframework/spin-operator/blob/main/apps/variable-explorer) sample app. It
43 | reads three different variables (`log_level`, `platform_name` and `db_password`) and prints their
44 | values to the `STDOUT` stream as shown in the following snippet:
45 |
46 | ```rust
47 | let log_level = variables::get("log_level")?;
48 | let platform_name = variables::get("platform_name")?;
49 | let db_password = variables::get("db_password")?;
50 |
51 | println!("# Log Level: {}", log_level);
52 | println!("# Platform name: {}", platform_name);
53 | println!("# DB Password: {}", db_password);
54 | ```
55 |
56 | Those variables are defined as part of the Spin manifest (`spin.toml`), and access to them is
57 | granted to the `variable-explorer` component:
58 |
59 | ```toml
60 | [variables]
61 | log_level = { default = "WARN" }
62 | platform_name = { default = "Fermyon Cloud" }
63 | db_password = { required = true }
64 |
65 | [component.variable-explorer.variables]
66 | log_level = "{{ log_level }}"
67 | platform_name = "{{ platform_name }}"
68 | db_password = "{{ db_password }}"
69 | ```
70 |
71 | For further reading on defining variables in the Spin manifest, see the [Spin Application Manifest
72 | Reference](https://developer.fermyon.com/spin/v2/manifest-reference#the-variables-table).
73 |
74 | ## Configuration data in Kubernetes
75 |
76 | In Kubernetes, you use `ConfigMaps` for storing non-sensitive, and `Secrets` for storing sensitive
77 | configuration data. The deployment manifest (`config/samples/variable-explorer.yaml`) contains
78 | specifications for both a `ConfigMap` and a `Secret`:
79 |
80 | ```yaml
81 | kind: ConfigMap
82 | apiVersion: v1
83 | metadata:
84 | name: spinapp-cfg
85 | data:
86 | logLevel: INFO
87 | ---
88 | kind: Secret
89 | apiVersion: v1
90 | metadata:
91 | name: spinapp-secret
92 | data:
93 | password: c2VjcmV0X3NhdWNlCg==
94 | ```
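
The `Secret` data is base64-encoded; the `password` value above is the encoding of `secret_sauce` (including the trailing newline that `echo` appends):

```shell
echo 'secret_sauce' | base64
# c2VjcmV0X3NhdWNlCg==
```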
95 |
96 | ## Assigning variables to a SpinApp
97 |
98 | When creating a `SpinApp`, you can choose from different approaches for specifying variables:
99 |
100 | 1. Providing constant values
101 | 2. Loading configuration values from ConfigMaps
102 | 3. Loading configuration values from Secrets
103 |
104 | The `SpinApp` specification contains the `variables` array, which you use to specify variables
105 | (See `kubectl explain spinapp.spec.variables`).
106 |
107 | The deployment manifest (`config/samples/variable-explorer.yaml`) specifies a static value for
108 | `platform_name`. The value of `log_level` is read from the `ConfigMap` called `spinapp-cfg`, and the
109 | `db_password` is read from the `Secret` called `spinapp-secret`:
110 |
111 | ```yaml
112 | kind: SpinApp
113 | apiVersion: core.spinkube.dev/v1alpha1
114 | metadata:
115 | name: variable-explorer
116 | spec:
117 | replicas: 1
118 | image: ttl.sh/variable-explorer:1h
119 | executor: containerd-shim-spin
120 | variables:
121 | - name: platform_name
122 | value: Kubernetes
123 | - name: log_level
124 | valueFrom:
125 | configMapKeyRef:
126 | name: spinapp-cfg
127 | key: logLevel
128 | optional: true
129 | - name: db_password
130 | valueFrom:
131 | secretKeyRef:
132 | name: spinapp-secret
133 | key: password
134 | optional: false
135 | ```
136 |
137 | As the deployment manifest outlines, you can use the `optional` property (as you would when
138 | specifying environment variables for a regular Kubernetes `Pod`) to control whether Kubernetes should
139 | prevent the SpinApp from starting if the referenced configuration source does not exist.
140 |
141 | You can deploy all resources by executing the following command:
142 |
143 | ```bash
144 | kubectl apply -f config/samples/variable-explorer.yaml
145 |
146 | configmap/spinapp-cfg created
147 | secret/spinapp-secret created
148 | spinapp.core.spinkube.dev/variable-explorer created
149 | ```
150 |
151 | ## Inspecting runtime logs of your SpinApp
152 |
153 | To verify that all variables are passed correctly to the SpinApp, you can configure port forwarding
154 | from your local machine to the corresponding Kubernetes `Service`:
155 |
156 | ```bash
157 | kubectl port-forward services/variable-explorer 8080:80
158 |
159 | Forwarding from 127.0.0.1:8080 -> 80
160 | Forwarding from [::1]:8080 -> 80
161 | ```
162 |
163 | When port forwarding is established, you can send an HTTP request to the variable-explorer from
164 | within an additional terminal session:
165 |
166 | ```bash
167 | curl http://localhost:8080
168 | Hello from Kubernetes
169 | ```
170 |
171 | Finally, you can use `kubectl logs` to see all logs produced by the variable-explorer at runtime:
172 |
173 | ```bash
174 | kubectl logs -l core.spinkube.dev/app-name=variable-explorer
175 |
176 | # Log Level: INFO
177 | # Platform name: Kubernetes
178 | # DB Password: secret_sauce
179 | ```
180 |
--------------------------------------------------------------------------------
/content/en/docs/topics/autoscaling/_index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Autoscaling your apps
3 | description: Guides on autoscaling your applications with SpinKube.
4 | weight: 20
5 | ---
6 |
--------------------------------------------------------------------------------
/content/en/docs/topics/autoscaling/autoscaling.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Using the `spin kube` plugin
3 | description: A tutorial to show how autoscaler support can be enabled via the `spin kube` command.
4 | date: 2024-02-26
5 | categories: [guides]
6 | tags: [tutorial, autoscaling]
7 | aliases:
8 | - /docs/spin-plugin-kube/tutorials/autoscaler-support
9 | ---
10 |
11 | ## Horizontal autoscaling support
12 |
13 | In Kubernetes, a horizontal autoscaler automatically updates a workload resource (such as a
14 | Deployment or StatefulSet) with the aim of automatically scaling the workload to match demand.
15 |
16 | Horizontal scaling means that the response to increased load is to deploy more resources. This is
17 | different from vertical scaling, which for Kubernetes would mean assigning more memory or CPU to the
18 | resources that are already running for the workload.
19 |
20 | If the load decreases, and the number of resources is above the configured minimum, a horizontal
21 | autoscaler would instruct the workload resource (the Deployment, StatefulSet, or other similar
22 | resource) to scale back down.
23 |
24 | The Kubernetes plugin for Spin includes autoscaler support, which allows you to tell Kubernetes when
25 | to scale your Spin application up or down based on demand. This tutorial will show you how to enable
26 | autoscaler support via the `spin kube scaffold` command.
27 |
28 | ### Prerequisites
29 |
30 | Regardless of what type of autoscaling is used, you must determine how you want your application to
31 | scale by answering the following questions:
32 |
33 | 1. Do you want your application to scale based upon system metrics (CPU and memory utilization) or
34 | based upon events (like messages in a queue or rows in a database)?
35 | 1. If your application scales based on system metrics, how much CPU and memory does each
36 |    instance of your application need to operate?
37 |
38 | ### Choosing an autoscaler
39 |
40 | The Kubernetes plugin for Spin supports two types of autoscalers: Horizontal Pod Autoscaler (HPA)
41 | and Kubernetes Event-driven Autoscaling (KEDA). The choice of autoscaler depends on the requirements
42 | of your application.
43 |
44 | #### Horizontal Pod Autoscaling (HPA)
45 |
46 | Horizontal Pod Autoscaler (HPA) scales Kubernetes pods based on CPU or memory utilization. This HPA
47 | scaling can be implemented via the Kubernetes plugin for Spin by setting the `--autoscaler hpa`
48 | option. This page deals exclusively with autoscaling via the Kubernetes plugin for Spin.
49 |
50 | ```sh
51 | spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi
52 | ```
53 |
54 | Horizontal Pod Autoscaling is built-in to Kubernetes and does not require the installation of a
55 | third-party runtime. For more general information about scaling with HPA, please see the Spin
56 | Operator's [Scaling with HPA section]({{< ref "scaling-with-hpa" >}}).
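
For reference, HPA scaling of a Spin app is driven by an ordinary `HorizontalPodAutoscaler` resource targeting the app's Deployment. A minimal sketch of such a manifest (the names and values here are illustrative, not literal plugin output):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-name-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-name        # the Deployment created for your SpinApp
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```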
57 |
58 | #### Kubernetes Event-driven Autoscaling (KEDA)
59 |
60 | Kubernetes Event-driven Autoscaling (KEDA) is an extension of Horizontal Pod Autoscaling (HPA). In
61 | addition to scaling based on CPU or memory utilization, KEDA can scale based on events from various
62 | sources, such as messages in a queue or the number of rows in a database.
63 |
64 | KEDA can be enabled by setting the `--autoscaler keda` option:
65 |
66 | ```sh
67 | spin kube scaffold --from user-name/app-name:latest --autoscaler keda --cpu-limit 100m --memory-limit 128Mi --replicas 1 --max-replicas 10
68 | ```
69 |
70 | Using KEDA to autoscale your Spin applications requires the installation of the [KEDA
71 | runtime](https://keda.sh/) into your Kubernetes cluster. For more information about scaling with
72 | KEDA in general, please see the Spin Operator's [Scaling with KEDA section]({{< ref
73 | "scaling-with-keda" >}}).
74 |
75 | ### Setting min/max replicas
76 |
77 | The `--replicas` and `--max-replicas` options can be used to set the minimum and maximum number of
78 | replicas for your application. The `--replicas` option defaults to 2 and the `--max-replicas` option
79 | defaults to 3.
80 |
81 | ```sh
82 | spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi --replicas 1 --max-replicas 10
83 | ```
84 |
85 | ### Setting CPU/memory limits and CPU/memory requests
86 |
87 | If the node where an application is running has enough of a resource available, it's possible (and
88 | allowed) for that application to use more of that resource than its resource request specifies.
89 | However, an application is never allowed to use more than its resource limit.
90 |
91 | For example, if you set a memory request of 256 MiB, and that application is scheduled to a node
92 | with 8 GiB of memory and no other applications, then the application can try to use more RAM.
93 |
94 | If you set a memory limit of 4 GiB for that application, the WebAssembly runtime will enforce that
95 | limit. The runtime prevents the application from using more than the configured resource limit. For
96 | example: when a process in the application tries to consume more than the allowed amount of memory,
97 | the WebAssembly runtime terminates the process that attempted the allocation with an out-of-memory
98 | (OOM) error.
99 |
100 | The `--cpu-limit`, `--memory-limit`, `--cpu-request`, and `--memory-request` options can be used to
101 | set the CPU and memory limits and requests for your application. The `--cpu-limit` and
102 | `--memory-limit` options are required, while the `--cpu-request` and `--memory-request` options are
103 | optional.
104 |
105 | It is important to note the following:
106 |
107 | - CPU/memory requests are optional and will default to the CPU/memory limit if not set.
108 | - CPU/memory requests must be lower than their respective CPU/memory limit.
109 | - If you specify a limit for a resource, but do not specify any request, and no admission-time
110 | mechanism has applied a default request for that resource, then Kubernetes copies the limit you
111 | specified and uses it as the requested value for the resource.
112 |
113 | ```sh
114 | spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi --cpu-request 50m --memory-request 64Mi
115 | ```
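
Assuming standard Kubernetes conventions, these flags surface in the scaffolded manifest as a conventional `resources` block. A rough sketch of the relevant fragment (values match the command above):

```yaml
spec:
  resources:
    limits:
      cpu: 100m
      memory: 128Mi
    requests:
      cpu: 50m
      memory: 64Mi
```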
116 |
117 | ### Setting target utilization
118 |
119 | Target utilization is the percentage of the resource that you want to be used before the autoscaler
120 | kicks in. The autoscaler will check the current resource utilization of your application against the
121 | target utilization and scale your application up or down based on the result.
122 |
123 | Target utilization is based on the average resource utilization across all instances of your
124 | application. For example, if you have 3 instances of your application, the target CPU utilization is
125 | 50%, and each instance is averaging 80% CPU utilization, the autoscaler will continue adding
126 | instances until the average CPU utilization across all instances falls to the 50% target.
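
The scaling decision follows the standard formula from the Kubernetes HPA documentation: `desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)`. A small sketch of the arithmetic for the example above:

```python
import math

def desired_replicas(current_replicas: int, current_utilization: float,
                     target_utilization: float) -> int:
    """Standard HPA scaling rule: ceil(current * currentUtil / targetUtil)."""
    return math.ceil(current_replicas * current_utilization / target_utilization)

# 3 instances averaging 80% CPU against a 50% target -> scale to 5 replicas.
print(desired_replicas(3, 80, 50))  # prints 5
```

Note that at 5 replicas the same total load averages 48% per instance, which is below the target, so the autoscaler holds steady rather than scaling further.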
127 |
128 | To scale based on CPU utilization, use the `--autoscaler-target-cpu-utilization` option:
129 |
130 | ```sh
131 | spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi --autoscaler-target-cpu-utilization 50
132 | ```
133 |
134 | To scale based on memory utilization, use the `--autoscaler-target-memory-utilization` option:
135 |
136 | ```sh
137 | spin kube scaffold --from user-name/app-name:latest --autoscaler hpa --cpu-limit 100m --memory-limit 128Mi --autoscaler-target-memory-utilization 50
138 | ```
139 |
--------------------------------------------------------------------------------
/content/en/docs/topics/connecting-to-a-sqlite-database.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Connecting to a SQLite database
3 | description: Connect your Spin App to an external SQLite database
4 | date: 2024-07-17
5 | categories: [Spin Operator]
6 | tags: [Tutorials]
7 | weight: 14
8 | ---
9 |
10 | Spin applications can utilize a [standardized API for persisting data in a SQLite
11 | database](https://developer.fermyon.com/spin/v2/sqlite-api-guide). A default database is created by
12 | the Spin runtime on the local filesystem, which is great for getting an application up and running.
13 | However, this on-disk solution may not be preferable for an app running in the context of SpinKube,
14 | where apps are often scaled beyond just one replica.
15 |
16 | Thankfully, Spin supports configuring an application with an [external SQLite database provider via
17 | runtime
18 | configuration](https://developer.fermyon.com/spin/v2/dynamic-configuration#libsql-storage-provider).
19 | External providers include any [libSQL](https://libsql.org/) databases that can be accessed over
20 | HTTPS.
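
For comparison, when running locally with `spin up`, the same provider is configured through a runtime configuration file. A sketch of the equivalent `runtime-config.toml` (the URL and token values are placeholders):

```toml
[sqlite_database.default]
type = "libsql"
url = "https://your-database.turso.io"
token = "your-auth-token"
```

The SpinApp manifest you will assemble later in this tutorial expresses the same configuration declaratively under `spec.runtimeConfig.sqliteDatabases`.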
21 |
22 | ## Prerequisites
23 |
24 | To follow along with this tutorial, you'll need:
25 |
26 | - A Kubernetes cluster running SpinKube. See the [Installation]({{< relref "install" >}}) guides for
27 | more information.
28 | - The [kubectl CLI](https://kubernetes.io/docs/tasks/tools/#kubectl)
29 | - The [spin CLI](https://developer.fermyon.com/spin/v2/install)
30 |
31 | ## Build and publish the Spin application
32 |
33 | For this tutorial, we'll use the [HTTP CRUD Go
34 | SQLite](https://github.com/fermyon/enterprise-architectures-and-patterns/tree/main/http-crud-go-sqlite)
35 | sample application. It is a Go-based app implementing CRUD (Create, Read, Update, Delete) operations
36 | via the SQLite API.
37 |
38 | First, clone the repository locally and navigate to the `http-crud-go-sqlite` directory:
39 |
40 | ```bash
41 | git clone git@github.com:fermyon/enterprise-architectures-and-patterns.git
42 | cd enterprise-architectures-and-patterns/http-crud-go-sqlite
43 | ```
44 |
45 | Now, build and push the application to a registry you have access to. Here we'll use
46 | [ttl.sh](https://ttl.sh):
47 |
48 | ```bash
49 | export IMAGE_NAME=ttl.sh/$(uuidgen):1h
50 | spin build
51 | spin registry push ${IMAGE_NAME}
52 | ```
53 |
54 | ## Create a LibSQL database
55 |
56 | If you don't already have a LibSQL database that can be used over HTTPS, you can follow along as we
57 | set one up via [Turso](https://turso.tech/).
58 |
59 | Before proceeding, install the [turso CLI](https://docs.turso.tech/quickstart) and sign up for an
60 | account, if you haven't done so already.
61 |
62 | Create a new database and save its HTTP URL:
63 |
64 | ```bash
65 | turso db create spinkube
66 | export DB_URL=$(turso db show spinkube --http-url)
67 | ```
68 |
69 | Next, create an auth token for this database:
70 |
71 | ```bash
72 | export DB_TOKEN=$(turso db tokens create spinkube)
73 | ```
74 |
75 | ## Create a Kubernetes Secret for the database token
76 |
77 | The database token is a sensitive value and thus should be created as a Secret resource in
78 | Kubernetes:
79 |
80 | ```bash
81 | kubectl create secret generic turso-auth --from-literal=db-token="${DB_TOKEN}"
82 | ```
83 |
84 | ## Prepare the SpinApp manifest
85 |
86 | You're now ready to assemble the SpinApp custom resource manifest.
87 |
88 | - Note the `image` value uses the reference you published above.
89 | - All of the SQLite database config is set under `spec.runtimeConfig.sqliteDatabases`. See the
90 | [sqliteDatabases reference guide]({{< ref
91 | "docs/reference/spin-app#spinappspecruntimeconfigsqlitedatabasesindex" >}}) for more details.
92 | - Here we configure the `default` database to use the `libsql` provider type and under `options`
93 |   supply the database URL and auth token (via its Kubernetes secret).
94 |
95 | Plug the `$IMAGE_NAME` and `$DB_URL` values into the manifest below and save as `spinapp.yaml`:
96 |
97 | ```yaml
98 | apiVersion: core.spinkube.dev/v1alpha1
99 | kind: SpinApp
100 | metadata:
101 | name: http-crud-go-sqlite
102 | spec:
103 | image: "$IMAGE_NAME"
104 | replicas: 1
105 | executor: containerd-shim-spin
106 | runtimeConfig:
107 | sqliteDatabases:
108 | - name: "default"
109 | type: "libsql"
110 | options:
111 | - name: "url"
112 | value: "$DB_URL"
113 | - name: "token"
114 | valueFrom:
115 | secretKeyRef:
116 | name: "turso-auth"
117 | key: "db-token"
118 | ```
119 |
120 | ## Create the SpinApp
121 |
122 | Apply the resource manifest to your Kubernetes cluster:
123 |
124 | ```bash
125 | kubectl apply -f spinapp.yaml
126 | ```
127 |
128 | The Spin Operator will handle the creation of the underlying Kubernetes resources on your behalf.
129 |
130 | ## Test the application
131 |
132 | Now you are ready to test the application and verify connectivity and data storage to the configured
133 | SQLite database.
134 |
135 | Configure port forwarding from your local machine to the corresponding Kubernetes `Service`:
136 |
137 | ```bash
138 | kubectl port-forward services/http-crud-go-sqlite 8080:80
139 |
140 | Forwarding from 127.0.0.1:8080 -> 80
141 | Forwarding from [::1]:8080 -> 80
142 | ```
143 |
144 | When port forwarding is established, you can send HTTP requests to the http-crud-go-sqlite app from
145 | within an additional terminal session. Here are a few examples to get you started.
146 |
147 | Get current items:
148 |
149 | ```bash
150 | $ curl -X GET http://localhost:8080/items
151 | [
152 | {
153 | "id": "8b933c84-ee60-45a1-848d-428ad3259e2b",
154 | "name": "Full Self Driving (FSD)",
155 | "active": true
156 | },
157 | {
158 | "id": "d660b9b2-0406-46d6-9efe-b40b4cca59fc",
159 | "name": "Sentry Mode",
160 | "active": true
161 | }
162 | ]
163 | ```
164 |
165 | Create a new item:
166 |
167 | ```bash
168 | $ curl -X POST -d '{"name":"Engage Thrusters","active":true}' localhost:8080/items
169 | {
170 | "id": "a5efaa73-a4ac-4ffc-9c5c-61c5740e2d9f",
171 | "name": "Engage Thrusters",
172 | "active": true
173 | }
174 | ```
175 |
176 | Get items and see the newly added item:
177 |
178 | ```bash
179 | $ curl -X GET http://localhost:8080/items
180 | [
181 | {
182 | "id": "8b933c84-ee60-45a1-848d-428ad3259e2b",
183 | "name": "Full Self Driving (FSD)",
184 | "active": true
185 | },
186 | {
187 | "id": "d660b9b2-0406-46d6-9efe-b40b4cca59fc",
188 | "name": "Sentry Mode",
189 | "active": true
190 | },
191 | {
192 | "id": "a5efaa73-a4ac-4ffc-9c5c-61c5740e2d9f",
193 | "name": "Engage Thrusters",
194 | "active": true
195 | }
196 | ]
197 | ```
198 |
--------------------------------------------------------------------------------
/content/en/docs/topics/https-requests.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Making HTTPS Requests
3 | description: Configure Spin Apps to allow HTTPS requests.
4 | date: 2024-09-03
5 | categories: [Spin Operator]
6 | tags: [Tutorials]
7 | weight: 11
8 | aliases:
9 | - /docs/spin-operator/tutorials/https-requests
10 | ---
11 |
12 | To enable HTTPS requests, the [executor](https://www.spinkube.dev/docs/glossary/#spin-app-executor-crd) must be configured to use certificates. SpinKube can be configured to use either default or custom certificates.
13 |
14 | If you make a request without properly configured certificates, you'll encounter an error message that reads: `error trying to connect: unexpected EOF (unable to get local issuer certificate)`.
15 |
16 | ## Using default certificates
17 |
18 | SpinKube can generate a default CA certificate bundle by setting `installDefaultCACerts` to `true`. This creates a secret named `spin-ca` populated with curl's [default bundle](https://curl.se/ca/cacert.pem). You can specify a custom secret name by setting `caCertSecret`.
19 |
20 | ```yaml
21 | apiVersion: core.spinkube.dev/v1alpha1
22 | kind: SpinAppExecutor
23 | metadata:
24 | name: containerd-shim-spin
25 | spec:
26 | createDeployment: true
27 | deploymentConfig:
28 | runtimeClassName: wasmtime-spin-v2
29 | installDefaultCACerts: true
30 | ```
31 |
32 | Apply the executor using kubectl:
33 |
34 | ```console
35 | kubectl apply -f myexecutor.yaml
36 | ```
37 |
38 | ## Using custom certificates
39 |
40 | Create a secret from your certificate file:
41 |
42 | ```console
43 | kubectl create secret generic my-custom-ca --from-file=ca-certificates.crt
44 | ```
45 |
46 | Configure the executor to use the custom certificate secret:
47 |
48 | ```yaml
49 | apiVersion: core.spinkube.dev/v1alpha1
50 | kind: SpinAppExecutor
51 | metadata:
52 | name: containerd-shim-spin
53 | spec:
54 | createDeployment: true
55 | deploymentConfig:
56 | runtimeClassName: wasmtime-spin-v2
57 | caCertSecret: my-custom-ca
58 | ```
59 |
60 | Apply the executor using kubectl:
61 |
62 | ```console
63 | kubectl apply -f myexecutor.yaml
64 | ```
65 |
--------------------------------------------------------------------------------
/content/en/docs/topics/monitoring-your-app.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Monitoring your app
3 | description: How to view telemetry data from your Spin apps running in SpinKube.
4 | weight: 13
5 | ---
6 |
7 | This topic guide shows you how to configure SpinKube so your Spin apps export observability data. The data is sent to an OpenTelemetry collector, which forwards it to Jaeger.
8 |
9 | ## Prerequisites
10 |
11 | Please ensure you have the following tools installed before continuing:
12 |
13 | - A Kubernetes cluster running SpinKube. See the [installation guides](https://www.spinkube.dev/docs/install/) for more information
14 | - The [kubectl CLI](https://kubernetes.io/docs/tasks/tools/)
15 | - The [Helm CLI](https://helm.sh)
16 |
17 | ## About OpenTelemetry Collector
18 |
19 | From the OpenTelemetry [documentation](https://opentelemetry.io/docs/collector/):
20 |
21 | > The OpenTelemetry Collector offers a vendor-agnostic implementation of how to receive, process and export telemetry data. It removes the need to run, operate, and maintain multiple agents/collectors. This works with improved scalability and supports open source observability data formats (e.g. Jaeger, Prometheus, Fluent Bit, etc.) sending to one or more open source or commercial backends.
22 |
23 | In our case, the OpenTelemetry collector serves as a single endpoint to receive and route telemetry data, letting us monitor metrics, traces, and logs via our preferred UIs.
24 |
25 | ## About Jaeger
26 |
27 | From the Jaeger [documentation](https://www.jaegertracing.io/docs/):
28 |
29 | > Jaeger is a distributed tracing platform released as open source by Uber Technologies. With Jaeger you can: Monitor and troubleshoot distributed workflows, Identify performance bottlenecks, Track down root causes, Analyze service dependencies
30 |
31 | Here, we have the OpenTelemetry collector send the trace data to Jaeger.
32 |
33 | ## Deploy OpenTelemetry Collector
34 |
35 | First, add the OpenTelemetry collector Helm repository:
36 |
37 | ```sh
38 | helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
39 | helm repo update
40 | ```
41 |
42 | Next, deploy the OpenTelemetry collector to your cluster:
43 |
44 | ```sh
45 | helm upgrade --install otel-collector open-telemetry/opentelemetry-collector \
46 | --set image.repository="otel/opentelemetry-collector-k8s" \
47 | --set nameOverride=otel-collector \
48 | --set mode=deployment \
49 | --set config.exporters.otlp.endpoint=http://jaeger-collector.default.svc.cluster.local:4317 \
50 | --set config.exporters.otlp.tls.insecure=true \
51 | --set config.service.pipelines.traces.exporters\[0\]=otlp \
52 | --set config.service.pipelines.traces.processors\[0\]=batch \
53 | --set config.service.pipelines.traces.receivers\[0\]=otlp \
54 | --set config.service.pipelines.traces.receivers\[1\]=jaeger
55 | ```
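
The `--set` flags above amount, roughly, to the following collector configuration (simplified; the Helm chart merges these values into its default config, which also supplies the `otlp` and `jaeger` receiver and `batch` processor definitions):

```yaml
exporters:
  otlp:
    endpoint: http://jaeger-collector.default.svc.cluster.local:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp, jaeger]
      processors: [batch]
      exporters: [otlp]
```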
56 |
57 | ## Deploy Jaeger
58 |
59 | Next, add the Jaeger Helm repository:
60 |
61 | ```sh
62 | helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
63 | helm repo update
64 | ```
65 |
66 | Then, deploy Jaeger to your cluster:
67 |
68 | ```sh
69 | helm upgrade --install jaeger jaegertracing/jaeger \
70 | --set provisionDataStore.cassandra=false \
71 | --set allInOne.enabled=true \
72 | --set agent.enabled=false \
73 | --set collector.enabled=false \
74 | --set query.enabled=false \
75 | --set storage.type=memory
76 | ```
77 |
78 | ## Configure the SpinAppExecutor
79 |
80 | The `SpinAppExecutor` resource determines how Spin applications are deployed in the cluster. The following configuration will ensure that any `SpinApp` resource using this executor will send telemetry data to the OpenTelemetry collector. To see a comprehensive list of OTel options for the `SpinAppExecutor`, see the [API reference](https://www.spinkube.dev/docs/reference/spin-app-executor/).
81 |
82 | Create a file called `executor.yaml` with the following content:
83 |
84 | ```yaml
85 | apiVersion: core.spinkube.dev/v1alpha1
86 | kind: SpinAppExecutor
87 | metadata:
88 | name: otel-shim-executor
89 | spec:
90 | createDeployment: true
91 | deploymentConfig:
92 | runtimeClassName: wasmtime-spin-v2
93 | installDefaultCACerts: true
94 | otel:
95 | exporter_otlp_endpoint: http://otel-collector.default.svc.cluster.local:4318
96 | ```
97 |
98 | To deploy the executor, run:
99 |
100 | ```sh
101 | kubectl apply -f executor.yaml
102 | ```
103 |
104 | ## Deploy a Spin app to observe
105 |
106 | With everything in place, we can now deploy a `SpinApp` resource that uses the executor `otel-shim-executor`.
107 |
108 | Create a file called `app.yaml` with the following content:
109 |
110 | ```yaml
111 | apiVersion: core.spinkube.dev/v1alpha1
112 | kind: SpinApp
113 | metadata:
114 | name: otel-spinapp
115 | spec:
116 | image: ghcr.io/spinkube/spin-operator/cpu-load-gen:20240311-163328-g1121986
117 | executor: otel-shim-executor
118 | replicas: 1
119 | ```
120 |
121 | Deploy the app by running:
122 |
123 | ```sh
124 | kubectl apply -f app.yaml
125 | ```
126 |
127 | Congratulations! You now have a Spin app exporting telemetry data.
128 |
129 | Next, we need to generate telemetry data for the Spin app to export. Use the below command to port-forward the Spin app:
130 |
131 | ```sh
132 | kubectl port-forward svc/otel-spinapp 3000:80
133 | ```
134 |
135 | In a new terminal window, execute a `curl` request:
136 |
137 | ```sh
138 | curl localhost:3000
139 | ```
140 |
141 | The request will take a couple of moments to run, but once it's done, you should see an output similar to this:
142 |
143 | ```
144 | fib(43) = 433494437
145 | ```
146 |
147 | ## Interact with Jaeger
148 |
149 | To view the traces in Jaeger, use the following port-forward command:
150 |
151 | ```sh
152 | kubectl port-forward svc/jaeger-query 16686:16686
153 | ```
154 |
155 | Then, open your browser and navigate to `localhost:16686` to interact with Jaeger's UI.
156 |
--------------------------------------------------------------------------------
/content/en/docs/topics/packaging.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Packaging and deploying apps
3 | description: Learn how to package and distribute Spin Apps using either public or private OCI compliant registries.
4 | date: 2024-02-16
5 | categories: [Spin Operator]
6 | tags: [Tutorials]
7 | weight: 10
8 | aliases:
9 | - /docs/spin-operator/tutorials/package-and-deploy
10 | ---
11 |
12 | This article explains how Spin Apps are packaged and distributed via both public and private
13 | registries. You will learn how to:
14 |
15 | - Package and distribute Spin Apps
16 | - Deploy Spin Apps
17 | - Scaffold Kubernetes Manifests for Spin Apps
18 | - Use private registries that require authentication
19 |
20 | ## Prerequisites
21 |
22 | For this tutorial in particular, you need:
23 |
24 | - [TinyGo](https://tinygo.org/) - for building the Spin app
25 | - [kubectl](https://kubernetes.io/docs/tasks/tools/) - the Kubernetes CLI
26 | - [spin](https://developer.fermyon.com/spin/v2/install) - the Spin CLI
27 | - [spin kube](/docs/spin-plugin-kube/installation) - the Kubernetes plugin for `spin`
28 |
29 | ## Creating a new Spin App
30 |
31 | You use the `spin` CLI to create a new Spin App. The `spin` CLI provides different templates, which
32 | you can use to quickly create different kinds of Spin Apps. For demonstration purposes, you will use
33 | the `http-go` template to create a simple Spin App.
34 |
35 | ```shell
36 | # Create a new Spin App using the http-go template
37 | spin new --accept-defaults -t http-go hello-spin
38 |
39 | # Navigate into the hello-spin directory
40 | cd hello-spin
41 | ```
42 |
43 | The `spin` CLI created all necessary files within `hello-spin`. Besides the Spin Manifest
44 | (`spin.toml`), you can find the actual implementation of the app in `main.go`:
45 |
46 | ```go
47 | package main
48 |
49 | import (
50 | "fmt"
51 | "net/http"
52 |
53 | spinhttp "github.com/fermyon/spin/sdk/go/v2/http"
54 | )
55 |
56 | func init() {
57 | spinhttp.Handle(func(w http.ResponseWriter, r *http.Request) {
58 | w.Header().Set("Content-Type", "text/plain")
59 | fmt.Fprintln(w, "Hello Fermyon!")
60 | })
61 | }
62 |
63 | func main() {}
64 | ```
65 |
66 | This implementation will respond to any incoming HTTP request, and return an HTTP response with a
67 | status code of 200 (`OK`) and send `Hello Fermyon!` as the response body.
68 |
69 | You can test the app on your local machine by invoking the `spin up` command from within the
70 | `hello-spin` folder.
71 |
72 | ## Packaging and Distributing Spin Apps
73 |
74 | Spin Apps are packaged and distributed as OCI artifacts. By leveraging OCI artifacts, Spin Apps can
75 | be distributed using any registry that implements the [Open Container Initiative Distribution
76 | Specification](https://github.com/opencontainers/distribution-spec) (a.k.a. "OCI Distribution
77 | Spec").
78 |
79 | The `spin` CLI simplifies packaging and distribution of Spin Apps and provides an atomic command for
80 | this (`spin registry push`). You can package and distribute the `hello-spin` app that you created as
81 | part of the previous section like this:
82 |
83 | ```shell
84 | # Package and Distribute the hello-spin app
85 | spin registry push --build ttl.sh/hello-spin:24h
86 | ```
87 |
88 | > It is a good practice to add the `--build` flag to `spin registry push`. It prevents you from
89 | > accidentally pushing an outdated version of your Spin App to your registry of choice.
90 |
91 | ## Deploying Spin Apps
92 |
93 | To deploy Spin Apps to a Kubernetes cluster which has Spin Operator running, you use the `kube`
94 | plugin for `spin`. Use the `spin kube deploy` command as shown here to deploy the `hello-spin` app
95 | to your Kubernetes cluster:
96 |
97 | ```shell
98 | # Deploy the hello-spin app to your Kubernetes Cluster
99 | spin kube deploy --from ttl.sh/hello-spin:24h
100 |
101 | spinapp.core.spinkube.dev/hello-spin created
102 | ```
103 | > You can deploy a subset of components in your Spin Application using [Selective Deployments](./selective-deployments.md).
104 |
105 | ## Scaffolding Spin Apps
106 |
107 | In the previous section, you deployed the `hello-spin` app using the `spin kube deploy` command.
108 | Although this is handy, you may want to inspect, or alter the Kubernetes manifests before applying
109 | them. You use the `spin kube scaffold` command to generate Kubernetes manifests:
110 |
111 | ```shell
112 | spin kube scaffold --from ttl.sh/hello-spin:24h
113 | apiVersion: core.spinkube.dev/v1alpha1
114 | kind: SpinApp
115 | metadata:
116 | name: hello-spin
117 | spec:
118 | image: "ttl.sh/hello-spin:24h"
119 | replicas: 2
120 | ```
121 |
122 | By default, the command will print all Kubernetes manifests to `STDOUT`. Alternatively, you can
123 | specify the `--out` flag to store the manifests in a file:
124 |
125 | ```shell
126 | # Scaffold manifests to spinapp.yaml
127 | spin kube scaffold --from ttl.sh/hello-spin:24h \
128 | --out spinapp.yaml
129 |
130 | # Print contents of spinapp.yaml
131 | cat spinapp.yaml
132 | apiVersion: core.spinkube.dev/v1alpha1
133 | kind: SpinApp
134 | metadata:
135 | name: hello-spin
136 | spec:
137 | image: "ttl.sh/hello-spin:24h"
138 | replicas: 2
139 | ```
140 |
141 | You can then deploy the Spin App by applying the manifest with the `kubectl` CLI:
142 |
143 | ```shell
144 | kubectl apply -f spinapp.yaml
145 | ```
146 |
147 | ## Distributing and Deploying Spin Apps via private registries
148 |
149 | It is quite common to distribute Spin Apps through private registries that require some sort of
150 | authentication. To publish a Spin App to a private registry, you have to authenticate using the
151 | `spin registry login` command.
152 |
153 | For demonstration purposes, you will now distribute the Spin App via GitHub Container Registry
154 | (GHCR). You can follow [this guide by
155 | GitHub](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic)
156 | to create a new personal access token (PAT), which is required for authentication.
157 |
158 | ```shell
159 | # Store PAT and GitHub username as environment variables
160 | export GH_PAT=YOUR_TOKEN
161 | export GH_USER=YOUR_GITHUB_USERNAME
162 |
163 | # Authenticate spin CLI with GHCR
164 | echo $GH_PAT | spin registry login ghcr.io -u $GH_USER --password-stdin
165 |
166 | Successfully logged in as YOUR_GITHUB_USERNAME to registry ghcr.io
167 | ```
168 |
169 | Once authentication succeeded, you can use `spin registry push` to push your Spin App to GHCR:
170 |
171 | ```shell
172 | # Push hello-spin to GHCR
173 | spin registry push --build ghcr.io/$GH_USER/hello-spin:0.0.1
174 |
175 | Pushing app to the Registry...
176 | Pushed with digest sha256:1611d51b296574f74b99df1391e2dc65f210e9ea695fbbce34d770ecfcfba581
177 | ```
178 |
179 | In Kubernetes, you store registry authentication information as a secret of type `docker-registry`. The
180 | following snippet shows how to create such a secret with `kubectl`, leveraging the environment
181 | variables you specified in the previous section:
182 |
183 | ```shell
184 | # Create Secret in Kubernetes
185 | kubectl create secret docker-registry ghcr \
186 | --docker-server ghcr.io \
187 | --docker-username $GH_USER \
188 |   --docker-password $GH_PAT
189 |
190 | secret/ghcr created
191 | ```
192 |
193 | Scaffold the necessary `SpinApp` Custom Resource (CR) using `spin kube scaffold`:
194 |
195 | ```shell
196 | # Scaffold the SpinApp manifest
197 | spin kube scaffold --from ghcr.io/$GH_USER/hello-spin:0.0.1 \
198 | --out spinapp.yaml
199 | ```
200 |
201 | Before deploying the manifest with `kubectl`, update `spinapp.yaml` and link the `ghcr` secret you
202 | previously created using the `imagePullSecrets` property. Your `SpinApp` manifest should look like
203 | this:
204 |
205 | ```yaml
206 | apiVersion: core.spinkube.dev/v1alpha1
207 | kind: SpinApp
208 | metadata:
209 | name: hello-spin
210 | spec:
211 | image: ghcr.io/$GH_USER/hello-spin:0.0.1
212 | imagePullSecrets:
213 | - name: ghcr
214 | replicas: 2
215 | executor: containerd-shim-spin
216 | ```
217 |
218 | > `$GH_USER` should match the actual username provided while running through the previous sections
219 | > of this article.
220 |
221 | Finally, you can deploy the app using `kubectl apply`:
222 |
223 | ```shell
224 | # Deploy the spinapp.yaml using kubectl
225 | kubectl apply -f spinapp.yaml
226 | spinapp.core.spinkube.dev/hello-spin created
227 | ```
229 |
--------------------------------------------------------------------------------
/content/en/docs/topics/routing.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Connecting to your app
3 | description: Learn how to connect to your application.
4 | weight: 13
5 | ---
6 |
7 | This topic guide shows you how to connect to your application deployed to SpinKube, including how to
8 | use port-forwarding for local development, or Ingress rules for a production setup.
9 |
10 | ## Run the sample application
11 |
12 | Let's deploy a sample application to your Kubernetes cluster. We will use this application
13 | throughout the tutorial to demonstrate how to connect to it.
14 |
15 | Refer to the [quickstart guide]({{< ref "quickstart" >}}) if you haven't set up a Kubernetes cluster
16 | yet.
17 |
18 | ```shell
19 | kubectl apply -f https://raw.githubusercontent.com/spinkube/spin-operator/main/config/samples/simple.yaml
20 | ```
21 |
22 | When SpinKube deploys the application, it creates a Kubernetes Service that exposes the application
23 | to the cluster. You can check the status of the deployment with the following command:
24 |
25 | ```shell
26 | kubectl get services
27 | ```
28 |
29 | You should see a service named `simple-spinapp` with a type of `ClusterIP`. This means that the
30 | service is only accessible from within the cluster.
31 |
32 | ```shell
33 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
34 | simple-spinapp   ClusterIP   10.43.152.184   <none>        80/TCP    1m
35 | ```
36 |
37 | We will use this service to connect to your application.
38 |
39 | ## Port forwarding
40 |
41 | This option is useful for debugging and development. It allows you to forward a local port to the
42 | service.
43 |
44 | Forward port 8083 to the service so that it can be reached from your computer:
45 |
46 | ```shell
47 | kubectl port-forward svc/simple-spinapp 8083:80
48 | ```
49 |
50 | You should be able to reach it from your browser at [http://localhost:8083](http://localhost:8083):
51 |
52 | ```shell
53 | curl http://localhost:8083
54 | ```
55 |
56 | You should see a message like "Hello world from Spin!".
57 |
58 | This is one of the simplest ways to test your application. However, it is not suitable for
59 | production use. The next section will show you how to expose your application to the internet using
60 | an Ingress controller.
61 |
62 | ## Ingress
63 |
64 | Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
65 | Traffic routing is controlled by rules defined on the Ingress resource.
66 |
67 | Here is a simple example where an Ingress sends all its traffic to one Service:
68 |
69 | 
70 |
71 | (source: [Kubernetes
72 | documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/))
73 |
74 | An Ingress may be configured to give applications externally-reachable URLs, load balance traffic,
75 | terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for
76 | fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router
77 | or additional frontends to help handle the traffic.
78 |
79 | ### Prerequisites
80 |
81 | You must have an [Ingress
82 | controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) to satisfy
83 | an Ingress rule. Creating an Ingress rule without a controller has no effect.
84 |
85 | Ideally, all Ingress controllers should fit the reference specification. In reality, the various
86 | Ingress controllers operate slightly differently. Make sure you review your Ingress controller's
87 | documentation to understand the specifics of how it works.
88 |
89 | [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/) is a popular Ingress controller,
90 | so we will use it in this tutorial:
91 |
92 | ```shell
93 | helm upgrade --install ingress-nginx ingress-nginx \
94 | --repo https://kubernetes.github.io/ingress-nginx \
95 | --namespace ingress-nginx --create-namespace
96 | ```
97 |
98 | Wait for the ingress controller to be ready:
99 |
100 | ```shell
101 | kubectl wait --namespace ingress-nginx \
102 | --for=condition=ready pod \
103 | --selector=app.kubernetes.io/component=controller \
104 | --timeout=120s
105 | ```
106 |
107 | ### Check the Ingress controller's external IP address
108 |
109 | If your Kubernetes cluster is a "real" cluster that supports services of type `LoadBalancer`, it
110 | will have allocated an external IP address or FQDN to the ingress controller.
111 |
112 | Check the IP address or FQDN with the following command:
113 |
114 | ```shell
115 | kubectl get service ingress-nginx-controller --namespace=ingress-nginx
116 | ```
117 |
118 | It will be the `EXTERNAL-IP` field. If that field shows `<pending>`, this means that your
119 | Kubernetes cluster wasn't able to provision the load balancer. Generally, this is because it
120 | doesn't support services of type `LoadBalancer`.
121 |
122 | Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Refer to your
123 | DNS provider's documentation on how to add a new DNS record to your domain.
124 |
125 | You will want to create an A record that points to the external IP address. For example, if your
126 | external IP address were `203.0.113.0` (a placeholder documentation address), you would create a
127 | record like this:
128 |
129 | ```shell
130 | A    myapp.spinkube.local    203.0.113.0
131 | ```
131 |
132 | Once you've added a DNS record to your domain and it has propagated, proceed to create an ingress
133 | resource.
134 |
135 | ### Create an Ingress resource
136 |
137 | Create an Ingress resource that routes traffic to the `simple-spinapp` service. The following
138 | example assumes that you have set up a DNS record for `myapp.spinkube.local`:
139 |
140 | ```shell
141 | kubectl create ingress simple-spinapp --class=nginx --rule="myapp.spinkube.local/*=simple-spinapp:80"
142 | ```
143 |
144 | A couple of notes about the above command:
145 |
146 | - `simple-spinapp` is the name of the Ingress resource.
147 | - `myapp.spinkube.local` is the hostname that the Ingress will route traffic to. This is the DNS
148 | record you set up earlier.
149 | - `simple-spinapp:80` is the Service that SpinKube created for us. The application listens for
150 | requests on port 80.
151 |
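If you prefer declarative configuration, the `kubectl create ingress` command above corresponds roughly to the following manifest (a sketch of the equivalent resource; apply it with `kubectl apply -f` instead of running the imperative command):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-spinapp
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.spinkube.local
      http:
        paths:
          # The "/*" in the --rule flag maps to a Prefix path of "/"
          - path: /
            pathType: Prefix
            backend:
              service:
                name: simple-spinapp
                port:
                  number: 80
```
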
152 | Assuming DNS has propagated correctly, you should see a message like "Hello world from Spin!" when
153 | you connect to http://myapp.spinkube.local/.
154 |
155 | Congratulations, you are serving a public website hosted on a Kubernetes cluster! 🎉
156 |
157 | ### Connecting with kubectl port-forward
158 |
159 | This is a quick way to test your Ingress setup without setting up DNS records or on clusters without
160 | support for services of type `LoadBalancer`.
161 |
162 | Open a new terminal and forward a port from localhost port 8080 to the Ingress controller:
163 |
164 | ```shell
165 | kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
166 | ```
167 |
168 | Then, in another terminal, test the Ingress setup:
169 |
170 | ```shell
171 | curl --resolve myapp.spinkube.local:8080:127.0.0.1 http://myapp.spinkube.local:8080/hello
172 | ```
173 |
174 | You should see a message like "Hello world from Spin!".
175 |
176 | If you want to see your app running in the browser, update your `/etc/hosts` file to resolve
177 | requests from `myapp.spinkube.local` to the ingress controller:
178 |
179 | ```shell
180 | 127.0.0.1 myapp.spinkube.local
181 | ```
182 |
--------------------------------------------------------------------------------
/content/en/docs/topics/selective-deployments.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Selective Deployments in Spin
3 | description: Learn how to deploy a subset of components from your SpinApp using Selective Deployments.
4 | date: 2024-11-10
5 | categories: [Spin Operator]
6 | tags: [Tutorials]
7 | weight: 10
8 | ---
9 |
10 | This article explains how to selectively deploy a subset of components from your Spin App using Selective Deployments. You will learn how to:
11 |
12 | - Scaffold a Specific Component from a Spin Application into a Custom Resource
13 | - Run a Selective Deployment
14 |
15 | Selective Deployments allow you to control which components within a Spin app are active for a specific instance of the app. With Component Selectors, Spin and SpinKube can declare at runtime which components should be activated, letting you deploy a single, versioned artifact while choosing which parts to enable at startup. This approach separates developer goals (building a well-architected app) from operational needs (optimizing for specific infrastructure).
16 |
17 | ## Prerequisites
18 |
19 | For this tutorial, you’ll need:
20 |
21 | - [kubectl](https://kubernetes.io/docs/tasks/tools/) - the Kubernetes CLI
22 | - Kubernetes cluster with the Spin Operator v0.4 and Containerd Spin Shim v0.17 - follow the [Quickstart](../install/quickstart.md) if needed
23 | - `spin kube` plugin v0.3 - follow [Installing the `spin kube` plugin](../install/spin-kube-plugin.md) if needed
24 |
25 | ## Scaffold a Specific Component from a Spin Application into a Custom Resource
26 |
27 | We’ll use a sample application called "Salutations", which demonstrates greetings via two components, each responding to a unique HTTP route. If we take a look at the [application manifest](https://github.com/spinframework/spin-operator/blob/main/apps/salutations/spin.toml), we’ll see that this Spin application is comprised of two components:
28 |
29 | - `Hello` component triggered by the `/hi` route
30 | - `Goodbye` component triggered by the `/bye` route
31 |
32 | ```toml
33 | spin_manifest_version = 2
34 |
35 | [application]
36 | name = "salutations"
37 | version = "0.1.0"
38 | authors = ["Kate Goldenring"]
39 | description = "An app that gives salutations"
40 |
41 | [[trigger.http]]
42 | route = "/hi"
43 | component = "hello"
44 |
45 | [component.hello]
46 | source = "../hello-world/main.wasm"
47 | allowed_outbound_hosts = []
48 | [component.hello.build]
49 | command = "cd ../hello-world && tinygo build -target=wasi -gc=leaking -no-debug -o main.wasm main.go"
50 | watch = ["**/*.go", "go.mod"]
51 |
52 | [[trigger.http]]
53 | route = "/bye"
54 | component = "goodbye"
55 |
56 | [component.goodbye]
57 | source = "main.wasm"
58 | allowed_outbound_hosts = []
59 | [component.goodbye.build]
60 | command = "tinygo build -target=wasi -gc=leaking -no-debug -o main.wasm main.go"
61 | watch = ["**/*.go", "go.mod"]
62 | ```
63 |
64 | With Selective Deployments, you can choose to deploy only specific components without modifying the source code. For this example, we’ll deploy just the `hello` component.
65 |
66 | > Note that if you had a Spin application with more than two components, you could choose to deploy multiple components selectively.
67 |
68 | To Selectively Deploy, we first need to turn our application into a SpinApp Custom Resource with the `spin kube scaffold` command, using the optional `--component` field to specify which component we’d like to deploy:
69 |
70 | ```bash
71 | spin kube scaffold --from ghcr.io/spinkube/spin-operator/salutations:20241105-223428-g4da3171 --component hello --replicas 1 --out spinapp.yaml
72 | ```
73 |
74 | Now if we take a look at our `spinapp.yaml`, we should see that only the hello component will be deployed via Selective Deployments:
75 |
76 | ```yaml
77 | apiVersion: core.spinkube.dev/v1alpha1
78 | kind: SpinApp
79 | metadata:
80 | name: salutations
81 | spec:
82 | image: "ghcr.io/spinkube/spin-operator/salutations:20241105-223428-g4da3171"
83 | executor: containerd-shim-spin
84 | replicas: 1
85 | components:
86 | - hello
87 | ```
88 |
89 | ## Run a Selective Deployment
90 |
91 | Now you can deploy your app using `kubectl` as you normally would:
92 |
93 | ```bash
94 | # Deploy the spinapp.yaml using kubectl
95 | kubectl apply -f spinapp.yaml
96 | spinapp.core.spinkube.dev/salutations created
97 | ```
98 |
99 | We can test that only our `hello` component is running by port-forwarding its service.
100 |
101 | ```bash
102 | kubectl port-forward svc/salutations 8083:80
103 | ```
104 |
105 | Now let’s call the `/hi` route in a separate terminal:
106 |
107 | ```bash
108 | curl localhost:8083/hi
109 | ```
110 |
111 | If the hello component is running correctly, we should see a response of "Hello Fermyon!":
112 |
113 | ```bash
114 | Hello Fermyon!
115 | ```
116 |
117 | Next, let’s try the `/bye` route. This should return nothing, confirming that only the `hello` component was deployed:
118 |
119 | ```bash
120 | curl localhost:8083/bye
121 | ```
122 |
123 | There you have it! You selectively deployed a subset of your Spin application to SpinKube with no modifications to your source code. This approach lets you easily deploy only the components you need, which can improve efficiency in environments where only specific services are required.
124 |
--------------------------------------------------------------------------------
/content/en/docs/topics/spin-operator-diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/docs/topics/spin-operator-diagram.png
--------------------------------------------------------------------------------
/content/en/docs/topics/using-a-key-value-store.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Using a key value store
3 | description: Connect your Spin App to a key value store
4 | date: 2024-07-29
5 | categories: [Spin Operator]
6 | tags: [Tutorials]
7 | weight: 14
8 | ---
9 |
10 | Spin applications can utilize a [standardized API for persisting data in a key value
11 | store](https://developer.fermyon.com/spin/v2/kv-store-api-guide). The default key value store in
12 | Spin is an SQLite database, which is great for quickly utilizing non-relational local storage
13 | without any infrastructure set-up. However, this solution may not be preferable for an app running
14 | in the context of SpinKube, where apps are often scaled beyond just one replica.
15 |
16 | Thankfully, Spin supports configuring an application with an [external key value
17 | provider](https://developer.fermyon.com/spin/v2/dynamic-configuration#key-value-store-runtime-configuration).
18 | External providers include [Redis](https://redis.io/) or [Valkey](https://valkey.io/) and [Azure
19 | Cosmos DB](https://azure.microsoft.com/en-us/products/cosmos-db).
20 |
21 | ## Prerequisites
22 |
23 | To follow along with this tutorial, you'll need:
24 |
25 | - A Kubernetes cluster running SpinKube. See the [Installation]({{< relref "install" >}}) guides for
26 | more information.
27 | - The [kubectl CLI](https://kubernetes.io/docs/tasks/tools/#kubectl)
28 | - The [spin CLI](https://developer.fermyon.com/spin/v2/install)
29 |
30 | ## Build and publish the Spin application
31 |
32 | For this tutorial, we'll use a [Spin key/value
33 | application](https://github.com/fermyon/spin-go-sdk/tree/main/examples/key-value) written with the
34 | Go SDK. The application serves a CRUD (Create, Read, Update, Delete) API for managing key/value
35 | pairs.
36 |
37 | First, clone the repository locally and navigate to the `examples/key-value` directory:
38 |
39 | ```bash
40 | git clone git@github.com:fermyon/spin-go-sdk.git
41 | cd spin-go-sdk/examples/key-value
42 | ```
43 |
44 | Now, build and push the application to a registry you have access to. Here we'll use
45 | [ttl.sh](https://ttl.sh):
46 |
47 | ```bash
48 | export IMAGE_NAME=ttl.sh/$(uuidgen):1h
49 | spin build
50 | spin registry push ${IMAGE_NAME}
51 | ```
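For context, [ttl.sh](https://ttl.sh) is an anonymous, ephemeral registry: the image tag sets how long the image is retained (`1h` here). The reference built above looks like this (the UUID below is a made-up example; in the tutorial it comes from `uuidgen`):

```shell
# Compose an ephemeral ttl.sh image reference. The UUID is a placeholder
# example; the tag ("1h") encodes the retention period on ttl.sh.
uuid="0f4cf2a2-4b7d-4f9e-9a0e-2f0d6e9b1c11"
IMAGE_NAME="ttl.sh/${uuid}:1h"
echo "$IMAGE_NAME"
```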
52 |
53 | ## Configure an external key value provider
54 |
55 | Since we have access to a Kubernetes cluster already running SpinKube, we'll choose
56 | [Valkey](https://valkey.io/) for our key value provider and install this provider via Bitnami's
57 | [Valkey Helm chart](https://github.com/bitnami/charts/tree/main/bitnami/valkey). Valkey is swappable
58 | for Redis in Spin, though note that the connection URL must use the `redis://` scheme rather than
59 | `valkey://`.
60 |
61 | ```bash
62 | helm install valkey --namespace valkey --create-namespace oci://registry-1.docker.io/bitnamicharts/valkey
63 | ```
64 |
65 | As mentioned in the notes shown after successful installation, be sure to capture the valkey
66 | password for use later:
67 |
68 | ```bash
69 | export VALKEY_PASSWORD=$(kubectl get secret --namespace valkey valkey -o jsonpath="{.data.valkey-password}" | base64 -d)
70 | ```
71 |
72 | ## Create a Kubernetes Secret for the Valkey URL
73 |
74 | The runtime configuration will require the Valkey URL so that it can connect to this provider. As
75 | this URL contains the sensitive password string, we will create it as a Secret resource in
76 | Kubernetes:
77 |
78 | ```bash
79 | kubectl create secret generic kv-secret --from-literal=valkey-url="redis://:${VALKEY_PASSWORD}@valkey-master.valkey.svc.cluster.local:6379"
80 | ```
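The URL stored in the secret follows the standard Redis connection string format, `redis://[user]:password@host:port`, with an empty username before the colon. A quick local sketch of how that string is assembled (the password below is a placeholder; the real one comes from the `VALKEY_PASSWORD` step above):

```shell
# Assemble the Valkey connection URL. Placeholder password for illustration;
# note the empty username before the ':' in the authority section.
VALKEY_PASSWORD='example-password'
VALKEY_URL="redis://:${VALKEY_PASSWORD}@valkey-master.valkey.svc.cluster.local:6379"
echo "$VALKEY_URL"
```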
81 |
82 | ## Prepare the SpinApp manifest
83 |
84 | You're now ready to assemble the SpinApp custom resource manifest for this application.
85 |
86 | - All of the key value config is set under `spec.runtimeConfig.keyValueStores`. See the
87 | [keyValueStores reference guide]({{< ref
88 | "docs/reference/spin-app#spinappspecruntimeconfigkeyvaluestoresindex" >}}) for more details.
89 | - Here we configure the `default` store to use the `redis` provider type and under `options` supply
90 | the Valkey URL (via its Kubernetes secret)
91 |
92 | Plug the `$IMAGE_NAME` value into the manifest below and save it as `spinapp.yaml`:
93 |
94 | ```yaml
95 | apiVersion: core.spinkube.dev/v1alpha1
96 | kind: SpinApp
97 | metadata:
98 | name: kv-app
99 | spec:
100 | image: "$IMAGE_NAME"
101 | replicas: 1
102 | executor: containerd-shim-spin
103 | runtimeConfig:
104 | keyValueStores:
105 | - name: "default"
106 | type: "redis"
107 | options:
108 | - name: "url"
109 | valueFrom:
110 | secretKeyRef:
111 | name: "kv-secret"
112 | key: "valkey-url"
113 | ```
114 |
115 | ## Create the SpinApp
116 |
117 | Apply the resource manifest to your Kubernetes cluster:
118 |
119 | ```bash
120 | kubectl apply -f spinapp.yaml
121 | ```
122 |
123 | The Spin Operator will handle the creation of the underlying Kubernetes resources on your behalf.
124 |
125 | ## Test the application
126 |
127 | Now you are ready to test the application and verify connectivity and key value storage to the
128 | configured provider.
129 |
130 | Configure port forwarding from your local machine to the corresponding Kubernetes `Service`:
131 |
132 | ```bash
133 | kubectl port-forward services/kv-app 8080:80
134 |
135 | Forwarding from 127.0.0.1:8080 -> 80
136 | Forwarding from [::1]:8080 -> 80
137 | ```
138 |
139 | When port forwarding is established, you can send HTTP requests to the application from within an
140 | additional terminal session. Here are a few examples to get you started.
141 |
142 | Create a `test` key with value `ok!`:
143 |
144 | ```bash
145 | $ curl -i -X POST -d "ok!" localhost:8080/test
146 | HTTP/1.1 200 OK
147 | content-length: 0
148 | date: Mon, 29 Jul 2024 19:58:14 GMT
149 | ```
150 |
151 | Get the value for the `test` key:
152 |
153 | ```bash
154 | $ curl -i -X GET localhost:8080/test
155 | HTTP/1.1 200 OK
156 | content-length: 3
157 | date: Mon, 29 Jul 2024 19:58:39 GMT
158 |
159 | ok!
160 | ```
161 |
162 | Delete the value for the `test` key:
163 |
164 | ```bash
165 | $ curl -i -X DELETE localhost:8080/test
166 | HTTP/1.1 200 OK
167 | content-length: 0
168 | date: Mon, 29 Jul 2024 19:59:18 GMT
169 | ```
170 |
171 | Attempt to get the value for the `test` key:
172 |
173 | ```bash
174 | $ curl -i -X GET localhost:8080/test
175 | HTTP/1.1 500 Internal Server Error
176 | content-type: text/plain; charset=utf-8
177 | x-content-type-options: nosniff
178 | content-length: 12
179 | date: Mon, 29 Jul 2024 19:59:44 GMT
180 |
181 | no such key
182 | ```
183 |
--------------------------------------------------------------------------------
/content/en/logo-fermyon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/logo-fermyon.png
--------------------------------------------------------------------------------
/content/en/logo-liquidreply.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/logo-liquidreply.png
--------------------------------------------------------------------------------
/content/en/logo-microsoft.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/logo-microsoft.png
--------------------------------------------------------------------------------
/content/en/logo-suse.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/content/en/logo-suse.png
--------------------------------------------------------------------------------
/content/en/search.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Search Results
3 | layout: search
4 | ---
5 |
--------------------------------------------------------------------------------
/crd-reference/check.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | # Checks to make sure that the reference docs in the content/en/docs/reference directory
4 | # are up-to-date with the Spin Operator CRDs
5 |
6 | script_dir=$(dirname "$0")
7 |
8 | cd "$script_dir"
9 |
10 | echo "Checking for changes..."
11 | cp ../content/en/docs/reference/spin-app-executor.md spin-app-executor.yaml.bak
12 | cp ../content/en/docs/reference/spin-app.md spin-app.yaml.bak
13 |
14 | ./generate.sh > /dev/null 2>&1
15 |
16 | has_changes=false
17 | diff ../content/en/docs/reference/spin-app-executor.md spin-app-executor.yaml.bak > /dev/null
18 | if [ $? -ne 0 ]; then
19 | has_changes=true
20 | echo "Changes found in spin-app-executor.md"
21 | fi
22 | diff ../content/en/docs/reference/spin-app.md spin-app.yaml.bak > /dev/null
23 | if [ $? -ne 0 ]; then
24 | has_changes=true
25 | echo "Changes found in spin-app.md"
26 | fi
27 |
28 | mv spin-app-executor.yaml.bak ../content/en/docs/reference/spin-app-executor.md
29 | mv spin-app.yaml.bak ../content/en/docs/reference/spin-app.md
30 |
31 | if $has_changes; then
32 | echo "Changes found. Please run ./generate.sh"
33 | exit 1
34 | fi
35 |
36 | echo "No changes found"
--------------------------------------------------------------------------------
/crd-reference/generate.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | # Generates the CRD reference docs for SpinAppExecutor and SpinApp and places them in the
4 | # content/en/docs/reference directory
5 |
6 | set -eo pipefail
7 |
8 | script_dir=$(dirname "$0")
9 |
10 | cd "$script_dir"
11 |
12 | SPIN_OPERATOR_RELEASE=${SPIN_OPERATOR_RELEASE:-v0.5.0}
13 |
14 | echo "Installing crdoc"
15 | go install fybrik.io/crdoc@latest
16 |
17 | echo "Downloading Spin Operator CRDs ($SPIN_OPERATOR_RELEASE)"
18 | spin_operator_crds_file=$(mktemp)
19 | wget https://github.com/spinframework/spin-operator/releases/download/$SPIN_OPERATOR_RELEASE/spin-operator.crds.yaml -O ${spin_operator_crds_file}
20 |
21 | # Generate SpinAppExecutor Reference Docs
22 | echo "Generating CRD reference docs for SpinAppExecutor"
23 | crdoc -r ${spin_operator_crds_file} \
24 | -o ../content/en/docs/reference/spin-app-executor.md \
25 | --toc ./spin-app-executor-toc.yaml \
26 | --template ./spin-operator.tmpl
27 |
28 | echo "Generating CRD reference docs for SpinApp"
29 | # Generate SpinApp Reference Docs
30 | crdoc -r ${spin_operator_crds_file} \
31 | -o ../content/en/docs/reference/spin-app.md \
32 | --toc ./spin-app-toc.yaml \
33 | --template ./spin-operator.tmpl
34 |
35 | # Remove temporary file
36 | rm ${spin_operator_crds_file}
37 |
38 | echo "Done."
39 |
--------------------------------------------------------------------------------
/crd-reference/spin-app-executor-toc.yaml:
--------------------------------------------------------------------------------
1 | metadata:
2 | title: "SpinAppExecutor"
3 | weight: 2
4 | description: "Custom Resource Definition (CRD) reference for `SpinAppExecutor`"
5 | category: Spin Operator
6 | groups:
7 | - group: core.spinkube.dev
8 | version: v1alpha1
9 | kinds:
10 | - name: SpinAppExecutor
11 |
--------------------------------------------------------------------------------
/crd-reference/spin-app-toc.yaml:
--------------------------------------------------------------------------------
1 | metadata:
2 | title: "SpinApp"
3 | weight: 1
4 | description: "Custom Resource Definition (CRD) reference for `SpinApp`"
5 | groups:
6 | - group: core.spinkube.dev
7 | version: v1alpha1
8 | kinds:
9 | - name: SpinApp
10 |
--------------------------------------------------------------------------------
/crd-reference/spin-operator.tmpl:
--------------------------------------------------------------------------------
1 | ---
2 | title: {{or .Metadata.Title "API Reference"}}
3 | weight: {{or .Metadata.Weight 1 }}
4 | {{- if .Metadata.Description}}
5 | description: {{.Metadata.Description}}
6 | {{- end}}
7 | categories: [Spin Operator]
8 | tags: [reference]
9 | ---
10 | {{- range .Groups }}
11 | {{- $group := . }}
12 | Resource Types:
13 | {{range .Kinds}}
14 | - [{{.Name}}](#{{ anchorize .Name }})
15 | {{- end}}{{/* range .Kinds */}}
16 |
17 | {{- range .Kinds}}
18 | {{$kind := .}}
19 | ## {{.Name}}
20 | {{- range .Types}}
21 | {{- if not .IsTopLevel}}
22 | ### `{{.Name}}`
23 | {{if .ParentKey}}[back to parent](#{{.ParentKey}}){{end}}
24 | {{end}}
25 |
26 | {{.Description}}
27 |
28 |
63 | {{.Description}}
64 | {{- if or .Schema.XValidations .Schema.Format .Schema.Enum .Schema.Default .Schema.Minimum .Schema.Maximum }}
65 |
66 | {{- end}}
67 | {{- if .Schema.XValidations }}
68 | Validations:
69 | {{- range .Schema.XValidations -}}
70 |
{{ .Rule }}: {{ .Message }}
71 | {{- end -}}
72 | {{- end }}
73 | {{- if .Schema.Format }}
74 | Format: {{ .Schema.Format }}
75 | {{- end }}
76 | {{- if .Schema.Enum }}
77 | Enum: {{ .Schema.Enum | toStrings | join ", " }}
78 | {{- end }}
79 | {{- if .Schema.Default }}
80 | Default: {{ .Schema.Default }}
81 | {{- end }}
82 | {{- if .Schema.Minimum }}
83 | Minimum: {{ .Schema.Minimum }}
84 | {{- end }}
85 | {{- if .Schema.Maximum }}
86 | Maximum: {{ .Schema.Maximum }}
87 | {{- end }}
88 |
89 |
{{.Required}}
90 |
91 | {{- end -}}
92 |
93 |
94 |
95 | {{end}}{{/* range .Types */}}
96 | {{- end}}{{/* range .Kinds */}}
97 | {{- end}}{{/* range .Groups */}}
98 |
--------------------------------------------------------------------------------
/docker-compose.yaml:
--------------------------------------------------------------------------------
1 | version: "3.8"
2 |
3 | services:
4 |
5 | site:
6 | image: docsy/docsy-example
7 | build:
8 | context: .
9 | command: server
10 | ports:
11 | - "1313:1313"
12 | volumes:
13 | - .:/src
14 |
--------------------------------------------------------------------------------
/docsy.work:
--------------------------------------------------------------------------------
1 | go 1.19
2 |
3 | use .
4 | use ../docsy/ // Local docsy clone resides in sibling folder to this project
5 | // use ./themes/docsy/ // Local docsy clone resides in themes folder
6 |
--------------------------------------------------------------------------------
/docsy.work.sum:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/spinframework/spinkube-docs/a7a5852d0d7e36ed5965d211a704297006c1278a/docsy.work.sum
--------------------------------------------------------------------------------
/go.mod:
--------------------------------------------------------------------------------
1 | module github.com/spinframework/spinkube-docs
2 |
3 | go 1.22
4 |
5 | require (
6 | github.com/FortAwesome/Font-Awesome v0.0.0-20240402185447-c0f460dca7f7 // indirect
7 | github.com/google/docsy v0.10.0 // indirect
8 | github.com/twbs/bootstrap v5.3.3+incompatible // indirect
9 | )
10 |
--------------------------------------------------------------------------------
/go.sum:
--------------------------------------------------------------------------------
1 | github.com/FortAwesome/Font-Awesome v0.0.0-20240108205627-a1232e345536/go.mod h1:IUgezN/MFpCDIlFezw3L8j83oeiIuYoj28Miwr/KUYo=
2 | github.com/FortAwesome/Font-Awesome v0.0.0-20240402185447-c0f460dca7f7 h1:2aWEKCRLqQ9nPyXaz4/IYtRrDr3PzEiX0DUSUr2/EDs=
3 | github.com/FortAwesome/Font-Awesome v0.0.0-20240402185447-c0f460dca7f7/go.mod h1:IUgezN/MFpCDIlFezw3L8j83oeiIuYoj28Miwr/KUYo=
4 | github.com/google/docsy v0.9.0 h1:FDbPR9UvKqBFJmN3cT5pNB2rTL+f51YyTRo+g2cJui4=
5 | github.com/google/docsy v0.9.0/go.mod h1:saOqKEUOn07Bc0orM/JdIF3VkOanHta9LU5Y53bwN2U=
6 | github.com/google/docsy v0.10.0 h1:6tMDacPwAyRWNCfvsn/9qGOZDQ8b0aRzjRZvnZPY5dg=
7 | github.com/google/docsy v0.10.0/go.mod h1:c0nIAqmRTOuJ01F85U/wJPQtc3Zj9N58Kea9bOT2AJc=
8 | github.com/twbs/bootstrap v5.2.3+incompatible/go.mod h1:fZTSrkpSf0/HkL0IIJzvVspTt1r9zuf7XlZau8kpcY0=
9 | github.com/twbs/bootstrap v5.3.3+incompatible h1:goFoqinzdHfkeegpFP7pvhbd0g+A3O2hbU3XCjuNrEQ=
10 | github.com/twbs/bootstrap v5.3.3+incompatible/go.mod h1:fZTSrkpSf0/HkL0IIJzvVspTt1r9zuf7XlZau8kpcY0=
11 |
--------------------------------------------------------------------------------
/hugo.toml:
--------------------------------------------------------------------------------
1 | baseURL = "https://spinkube.dev"
2 | title = "SpinKube"
3 |
4 | # Language settings
5 | contentDir = "content/en"
6 | defaultContentLanguage = "en"
7 | defaultContentLanguageInSubdir = false
8 | # Useful when translating.
9 | enableMissingTranslationPlaceholders = true
10 |
11 | enableRobotsTXT = true
12 |
13 | # Will give values to .Lastmod etc.
14 | enableGitInfo = true
15 |
16 | # Comment out to enable taxonomies in Docsy
17 | # disableKinds = ["taxonomy", "taxonomyTerm"]
18 |
19 | # You can add your own taxonomies
20 | [taxonomies]
21 | tag = "tags"
22 | category = "categories"
23 |
24 | [params.taxonomy]
25 | # set taxonomyCloud = [] to hide taxonomy clouds
26 | taxonomyCloud = ["tags", "categories"]
27 |
28 | # If used, must have same length as taxonomyCloud
29 | taxonomyCloudTitle = ["Tags", "Categories"]
30 |
31 | # set taxonomyPageHeader = [] to hide taxonomies on the page headers
32 | taxonomyPageHeader = ["tags", "categories"]
33 |
34 |
35 | # Highlighting config
36 | pygmentsCodeFences = true
37 | pygmentsUseClasses = false
38 | # Use the new Chroma Go highlighter in Hugo.
39 | pygmentsUseClassic = false
40 | #pygmentsOptions = "linenos=table"
41 | # See https://help.farbox.com/pygments.html
42 | pygmentsStyle = "tango"
43 |
44 | # Configure how URLs look like per section.
45 | [permalinks]
46 | blog = "/:section/:year/:month/:day/:slug/"
47 |
48 | # Image processing configuration.
49 | [imaging]
50 | resampleFilter = "CatmullRom"
51 | quality = 75
52 | anchor = "smart"
53 |
54 | # Language configuration
55 |
56 | [languages]
57 | [languages.en]
58 | languageName ="English"
59 | # Weight used for sorting.
60 | weight = 1
61 | [languages.en.params]
62 | title = "SpinKube"
63 | description = "A new open source project that streamlines the experience of developing, deploying, and operating Wasm workloads on Kubernetes."
64 |
65 | time_format_default = "02.01.2006"
66 | time_format_blog = "02.01.2006"
67 |
68 | [markup]
69 | [markup.goldmark]
70 | [markup.goldmark.parser.attribute]
71 | block = true
72 | [markup.goldmark.renderer]
73 | unsafe = true
74 | [markup.highlight]
75 | # See a complete list of available styles at https://xyproto.github.io/splash/docs/all.html
76 | style = "rose-pine"
77 | # Uncomment if you want your chosen highlight style used for code blocks without a specified language
78 | # guessSyntax = "true"
79 |
80 | # Everything below this are Site Params
81 |
82 | # Comment out if you don't want the "print entire section" link enabled.
83 | [outputs]
84 | section = ["HTML", "print", "RSS"]
85 |
86 | [params]
87 | privacy_policy = "https://policies.google.com/privacy"
88 |
89 | # First one is picked as the Twitter card image if not set on page.
90 | # images = ["images/project-illustration.png"]
91 |
92 | # Menu title if your navbar has a versions selector to access old versions of your site.
93 | # This menu appears only if you have at least one [params.versions] set.
94 | version_menu = "Releases"
95 |
96 | # Flag used in the "version-banner" partial to decide whether to display a
97 | # banner on every page indicating that this is an archived version of the docs.
98 | # Set this flag to "true" if you want to display the banner.
99 | archived_version = false
100 |
101 | # The version number for the version of the docs represented in this doc set.
102 | # Used in the "version-banner" partial to display a version number for the
103 | # current doc set.
104 | version = "0.0.0"
105 |
106 | # A link to latest version of the docs. Used in the "version-banner" partial to
107 | # point people to the main doc site.
108 | url_latest_version = "https://spinkube.dev"
109 |
110 | # Repository configuration (URLs for in-page links to opening issues and suggesting changes)
111 | github_repo = "https://github.com/spinframework/spinkube-docs"
112 | # An optional link to a related project repo. For example, the sibling repository where your product code lives.
113 | github_project_repo = "https://github.com/spinframework/spin-operator"
114 |
115 | # Specify a value here if your content directory is not in your repo's root directory
116 | # github_subdir = ""
117 |
118 | # Uncomment this if your GitHub repo does not have "main" as the default branch,
119 | # or specify a new value if you want to reference another branch in your GitHub links
120 | github_branch = "main"
121 |
122 | # Google Custom Search Engine ID. Remove or comment out to disable search.
123 | # gcs_engine_id = "d774df95371c74907"
124 |
125 | # Enable Lunr.js offline search
126 | offlineSearch = true
127 | offlineSearchSummaryLength = 200
128 | offlineSearchMaxResults = 25
129 |
130 | # Enable syntax highlighting and copy buttons on code blocks with Prism
131 | prism_syntax_highlighting = false
132 |
133 | [params.copyright]
134 | authors = "SpinKube Authors | [CC BY 4.0](https://creativecommons.org/licenses/by/4.0) | "
135 | from_year = 2024
136 |
137 | # User interface configuration
138 | [params.ui]
139 | # Set to true to disable breadcrumb navigation.
140 | breadcrumb_disable = false
141 | # Set to false if you don't want to display a logo (/assets/icons/logo.svg) in the top navbar
142 | navbar_logo = true
143 | # Set to true if you don't want the top navbar to be translucent when over a `block/cover`, like on the homepage.
144 | navbar_translucent_over_cover_disable = false
145 | # Enable to show the side bar menu in its compact state.
146 | sidebar_menu_compact = false
147 | sidebar_menu_foldable = true
148 | # Set to true to hide the sidebar search box (the top nav search box will still be displayed if search is enabled)
149 | sidebar_search_disable = true
150 |
151 | # Adds a H2 section titled "Feedback" to the bottom of each doc. The responses are sent to Google Analytics as events.
152 | # This feature depends on [services.googleAnalytics] and will be disabled if "services.googleAnalytics.id" is not set.
153 | # If you want this feature, but occasionally need to remove the "Feedback" section from a single page,
154 | # add "hide_feedback: true" to the page's front matter.
155 | [params.ui.feedback]
156 | enable = true
157 | # The responses that the user sees after clicking "yes" (the page was helpful) or "no" (the page was not helpful).
158 | yes = 'Glad to hear it!'
159 | no = 'Sorry to hear that. Please tell us how we can improve.'
160 |
161 | # Adds a reading time to the top of each doc.
162 | # If you want this feature, but occasionally need to remove the Reading time from a single page,
163 | # add "hide_readingtime: true" to the page's front matter
164 | [params.ui.readingtime]
165 | enable = false
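# As the comments above note, both the Feedback and Reading time sections can be
# overridden per page via front matter. A hypothetical page might use:
#
#   ---
#   title: "My page"
#   hide_feedback: true
#   hide_readingtime: true
#   ---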
166 |
167 | [params.links]
168 | # End-user relevant links. These will show up on the left side of the footer and on the community page if you have one.
169 | [[params.links.user]]
170 | name = "Email"
171 | url = "mailto:spinkubemaintainers@gmail.com"
172 | icon = "fa fa-envelope"
173 | desc = "SpinKube Maintainers email."
174 | [[params.links.user]]
175 | name = "Twitter"
176 | url = "https://twitter.com/spinkube"
177 | icon = "fab fa-twitter"
178 | desc = "Follow us on Twitter for the latest SpinKube news!"
179 | [[params.links.user]]
180 | name = "Mastodon"
181 | url = "https://mastodon.social/@SpinKube"
182 | icon = "fab fa-mastodon"
183 | desc = "Follow us on Mastodon to get the latest SpinKube news!"
184 | #[[params.links.user]]
185 | # name = "Stack Overflow"
186 | # url = "https://example.org/stack"
187 | # icon = "fab fa-stack-overflow"
188 | # desc = "Practical questions and curated answers"
189 | # Developer relevant links. These will show up on the right side of the footer and on the community page if you have one.
190 | [[params.links.developer]]
191 | name = "GitHub"
192 | url = "https://github.com/spinframework"
193 | icon = "fab fa-github"
194 | desc = "Development takes place here!"
195 | [[params.links.developer]]
196 | name = "Slack"
197 | url = "https://cloud-native.slack.com/archives/C06PC7JA1EE"
198 | icon = "fab fa-slack"
199 | desc = "Chat with the SpinKube community"
200 | [[params.links.developer]]
201 | name = "Email"
202 | url = "mailto:spinkubemaintainers@gmail.com"
203 | icon = "fa fa-envelope"
204 | desc = "SpinKube Maintainers email."
205 |
206 | # hugo module configuration
207 |
208 | [module]
209 | # Uncomment the next line to build and serve using local docsy clone declared in the named Hugo workspace:
210 | # workspace = "docsy.work"
211 | [module.hugoVersion]
212 | extended = true
213 | min = "0.110.0"
214 | [[module.imports]]
215 | path = "github.com/google/docsy"
216 | disable = false
217 |
--------------------------------------------------------------------------------
/layouts/404.html:
--------------------------------------------------------------------------------
1 | {{ define "main" -}}
2 |
3 | <h1>Not found</h1>
4 | <p>Oops! This page doesn't exist. Try going back to the <a href="{{ "/" | relLangURL }}">home page</a>.</p>
5 | <p>You can learn how to make a 404 page like this in <a href="https://gohugo.io/templates/404/">Custom 404 Pages</a>.</p>