<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[JoshDMoore's Blog]]></title><description><![CDATA[Software engineering, home automation and country living]]></description><link>https://joshdmoore.com/</link><image><url>https://joshdmoore.com/favicon.png</url><title>JoshDMoore&apos;s Blog</title><link>https://joshdmoore.com/</link></image><generator>Ghost 5.75</generator><lastBuildDate>Tue, 05 May 2026 23:02:21 GMT</lastBuildDate><atom:link href="https://joshdmoore.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[OpenClaw on Kubernetes: A Practical Installation Guide]]></title><description><![CDATA[A practical guide to installing OpenClaw on Kubernetes as part of a broader AI Delivery Workflow, including storage choices, Helm configuration, and first access.]]></description><link>https://joshdmoore.com/openclaw-on-kubernetes-practical-installation-guide/</link><guid isPermaLink="false">699f82a4dfe55900013451f8</guid><category><![CDATA[AI Delivery Workflow]]></category><category><![CDATA[OpenClaw]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Homelab]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Thu, 02 Apr 2026 01:09:00 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2026/02/openclaw-logo.png" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2026/02/openclaw-logo.png" alt="OpenClaw on Kubernetes: A Practical Installation Guide"><p>In my last post, I laid out why I&#x2019;m building an <strong>AI Delivery Workflow</strong> instead of relying on a pile of disconnected AI tools. This article is the first practical step in that direction: getting OpenClaw running on Kubernetes.</p><p>For me, OpenClaw is not interesting as just another chat interface. It becomes useful when it lives inside a real delivery stack alongside GitHub, coding agents, and ArgoCD. That is why I want it running as part of real infrastructure, not sitting off to the side as a disconnected experiment.</p><p>This guide is for someone who already knows their way around a Kubernetes cluster and wants a clean, repeatable OpenClaw install. I&#x2019;m assuming you already have a working cluster, basic <code>kubectl</code> access, Helm installed, and a general sense of how you handle storage, secrets, and service access in your own environment.</p><p>This is not a Kubernetes primer. 
The goal here is simpler than that: get OpenClaw running in a way that is understandable, debuggable, and easy to repeat.</p><h2 id="what-this-guide-covers">What this guide covers</h2><p>This guide walks through:</p><ul><li>getting a working provider token</li><li>creating the Kubernetes secret</li><li>choosing a storage path that matches your cluster</li><li>creating a working Helm values file</li><li>installing OpenClaw</li><li>verifying that the deployment actually works</li></ul><p>The goal is that someone with a Kubernetes cluster can come away with a running OpenClaw install.</p><h2 id="prerequisites">Prerequisites</h2><p>Before you begin, you should already have:</p><ul><li>a working Kubernetes cluster with <code>kubectl</code> access</li><li>Helm 3 installed</li><li>persistent storage available for workloads in your cluster</li><li>a model provider you plan to use</li></ul><p>For storage, I would treat one of these as a prerequisite:</p><ul><li>a default <code>StorageClass</code> that can dynamically provision a PVC for OpenClaw, or</li><li>a PV/PVC arrangement you have already prepared for this cluster</li></ul><p>I&#x2019;m intentionally not turning this into a distro-specific storage setup guide. That varies too much across managed Kubernetes, bare metal, k3s, Talos, and everything in between. This article assumes you already know how storage is handled in your environment.</p><p>I&#x2019;m using GitHub Copilot as the provider example in this guide because its authentication flow is the least obvious part of the setup. If you plan to use another provider, the overall install pattern is similar, but the secret values and model configuration will change.</p><h2 id="step-1-create-a-namespace">Step 1: Create a namespace</h2><p>Create a dedicated namespace for OpenClaw:</p><pre><code>kubectl create namespace openclaw</code></pre><h2 id="step-2-get-a-github-copilot-oauth-token">Step 2: Get a GitHub Copilot OAuth token</h2><p>If you are using GitHub Copilot, this is the part most likely to trip you up.</p><p>A GitHub Personal Access Token with the <code>copilot</code> scope is not enough for model access here. You need an OAuth token from GitHub&#x2019;s device flow.</p><p>Request a device code:</p><pre><code>curl -s -X POST https://github.com/login/device/code \
  -H &quot;Accept: application/json&quot; \
  -d &quot;client_id=Iv1.b507a08c87ecfe98&amp;scope=read:user&quot;</code></pre><p>This returns a <code>device_code</code>, <code>user_code</code>, and <code>verification_uri</code>.</p><p>Open <code>https://github.com/login/device</code> in your browser and enter the <code>user_code</code>.</p><p>Then poll for the access token:</p><pre><code>curl -s -X POST https://github.com/login/oauth/access_token \
  -H &quot;Accept: application/json&quot; \
  -d &quot;client_id=Iv1.b507a08c87ecfe98&amp;device_code=&lt;DEVICE_CODE&gt;&amp;grant_type=urn:ietf:params:oauth:grant-type:device_code&quot;</code></pre><p>Repeat that command until the response includes an <code>access_token</code> beginning with <code>ghu_</code>.</p><p>To verify the token works:</p><pre><code>curl -s -o /dev/null -w &quot;%{http_code}&quot; \
  https://api.github.com/copilot_internal/v2/token \
  -H &quot;Authorization: token ghu_YOUR_TOKEN&quot;</code></pre><p>A <code>200</code> response confirms the token is valid.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">These ghu_ tokens can expire. If OpenClaw later loses provider access, this is one of the first things to check.</div></div><h2 id="step-3-discover-available-model-ids">Step 3: Discover available model IDs</h2><p>The model names you see in a UI do not always match the exact model IDs you need in configuration.</p><p>You can query the available models directly:</p><pre><code># Get a session token
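# (requires jq; use the ghu_ OAuth token from Step 2)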
COPILOT_SESSION=$(curl -s https://api.github.com/copilot_internal/v2/token \
  -H &quot;Authorization: token ghu_YOUR_TOKEN&quot;)

# Extract the API endpoint and token
API_URL=$(echo &quot;$COPILOT_SESSION&quot; | jq -r &apos;.endpoints.api&apos;)
SESSION_TOKEN=$(echo &quot;$COPILOT_SESSION&quot; | jq -r &apos;.token&apos;)

# List models
curl -s &quot;$API_URL/models&quot; \
  -H &quot;Authorization: Bearer $SESSION_TOKEN&quot; \
  -H &quot;Copilot-Integration-Id: vscode-chat&quot; | jq &apos;.data[].id&apos;</code></pre><p>When you configure OpenClaw, prefix the model ID with <code>github-copilot/</code>.</p><p>For example:</p><pre><code>github-copilot/gpt-5.4</code></pre><h2 id="step-4-create-the-kubernetes-secrets">Step 4: Create the Kubernetes secrets</h2><p>Create one secret for your provider token:</p><pre><code>kubectl create secret generic openclaw-provider-secret -n openclaw \
  --from-literal=GITHUB_TOKEN=&apos;ghu_YOUR_OAUTH_TOKEN&apos;</code></pre><p>Then create a second secret for the OpenClaw gateway password:</p><pre><code>kubectl create secret generic openclaw-auth-secret -n openclaw \
  --from-literal=OPENCLAW_GATEWAY_PASSWORD=&apos;YOUR_GATEWAY_PASSWORD&apos;</code></pre><p>In this example:</p><ul><li><code>GITHUB_TOKEN</code> is the GitHub Copilot OAuth token from the device flow</li><li><code>OPENCLAW_GATEWAY_PASSWORD</code> is the password you will use to sign in to the OpenClaw control UI</li></ul><p>If you are using another model provider, substitute the correct provider credential in the provider secret.</p><h2 id="step-5-confirm-your-storage-path">Step 5: Confirm your storage path</h2><p>The key storage requirement is simple: OpenClaw needs writable persistent state under <code>/home/node/.openclaw</code>.</p><p>Treat storage as a prerequisite here, not a side quest. The practical question is whether your cluster can give OpenClaw a writable home directory cleanly and predictably.</p><p>For chart version <code>openclaw/openclaw 1.5.10</code>, the detail that matters most is that <code>/home/node/.openclaw</code> must be mounted writable.</p><p>If you miss that, the pod can crash during init because it tries to write <code>/home/node/.openclaw/openclaw.json</code> onto the read-only root filesystem.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">1.5.10 was the current chart version at the time of writing. Check for a newer release with helm search repo openclaw and use the latest available version.</div></div><h3 id="option-a-non-persistent-smoke-test">Option A: Non-persistent smoke test</h3><p>Use this if you want the fastest proven path to a working install.</p><p>This was the cleanest path I validated on <code>lernaean-dev</code>. The trick is that disabling <code>app-template.persistence.data</code> by itself is <strong>not</strong> enough for this chart version. You also need to mount a writable <code>emptyDir</code> at both <code>/tmp</code> and <code>/home/node/.openclaw</code>.</p><p>If you choose this path, you do not need to create a PV or PVC in this step.</p><h3 id="option-b-persistent-storage">Option B: Persistent storage</h3><p>Use this if you want OpenClaw to keep its local state between restarts.</p><p>The exact implementation depends on your cluster. If your cluster has a <code>StorageClass</code> with dynamic provisioning, the storage driver handles directory creation and ownership &#x2014; no extra steps needed. If you are using a static <code>hostPath</code> PV with <code>DirectoryOrCreate</code>, Kubernetes creates the directory as <code>root:root</code> on first use. Since OpenClaw runs as uid <code>1000</code>, the <code>init-config</code> container will fail with <code>Permission denied</code> and the pod will crash-loop. In that case, add the <code>fix-permissions</code> init container shown in the values snippet below.</p><p>First check whether your cluster already has a storage class:</p><pre><code>kubectl get storageclass</code></pre><p>If your cluster has a default storage class, or a storage class you want to use, Helm may be able to create the PVC for you during install.</p><p>If your cluster does <strong>not</strong> have a usable storage class, create storage first. For example:</p><pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: openclaw
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: &quot;&quot;
  claimRef:
    namespace: openclaw
    name: openclaw
  hostPath:
    path: /var/openclaw-data
  type: DirectoryOrCreate</code></pre><p>Apply it:</p><pre><code>kubectl apply -f openclaw-storage.yaml</code></pre><p>Then verify the PVC is ready:</p><pre><code>kubectl get pvc openclaw -n openclaw</code></pre><p>The success condition is simple: the PVC should show <code>Bound</code>.</p><p>Three practical caveats from testing:</p><ul><li>this chart version expected the PVC name to be <code>openclaw</code> when persistence stayed enabled</li><li><code>existingClaim</code> under <code>app-template.persistence.data</code> works, but requires <code>--skip-schema-validation</code> at install time</li><li>if using a static <code>hostPath</code> PV with <code>DirectoryOrCreate</code> and a fresh directory, include the <code>fix-permissions</code> init container in the values snippet. If your storage class or an existing directory is already owned by uid <code>1000</code>, you can omit it.</li></ul><h2 id="step-6-create-the-values-file">Step 6: Create the values file</h2><p>Create a file named <code>openclaw-values.yaml</code>.</p><p>The goal of this values file is to keep the deployment explicit, simple, and easy to debug. Start with this shared base config for either storage path:</p><pre><code>app-template:
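  # everything under app-template is passed through to the underlying chart (see the schema note in Step 7)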
  configMode: merge
  controllers:
    main:
      containers:
        chromium:
          enabled: false
        main:
          envFrom:
            - secretRef:
                name: openclaw-provider-secret
            - secretRef:
                name: openclaw-auth-secret
  configMaps:
    config:
      enabled: true
      data:
        openclaw.json: |
          {
            &quot;gateway&quot;: {
              &quot;port&quot;: 18789,
              &quot;mode&quot;: &quot;local&quot;,
              &quot;auth&quot;: {
                &quot;mode&quot;: &quot;password&quot;
              },
              &quot;controlUi&quot;: {
                &quot;dangerouslyAllowHostHeaderOriginFallback&quot;: true
              }
            },
            &quot;browser&quot;: {
              &quot;enabled&quot;: false
            },
            &quot;agents&quot;: {
              &quot;defaults&quot;: {
                &quot;workspace&quot;: &quot;/home/node/.openclaw/workspace&quot;,
                &quot;model&quot;: {
                  &quot;primary&quot;: &quot;github-copilot/gpt-5.4&quot;
                },
                &quot;userTimezone&quot;: &quot;UTC&quot;,
                &quot;timeoutSeconds&quot;: 600,
                &quot;maxConcurrent&quot;: 1
              },
              &quot;list&quot;: [
                {
                  &quot;id&quot;: &quot;main&quot;,
                  &quot;default&quot;: true,
                  &quot;identity&quot;: {
                    &quot;name&quot;: &quot;OpenClaw&quot;,
                    &quot;emoji&quot;: &quot;&#x1F99E;&quot;
                  }
                }
              ]
            },
            &quot;session&quot;: {
              &quot;scope&quot;: &quot;per-sender&quot;,
              &quot;store&quot;: &quot;/home/node/.openclaw/sessions&quot;,
              &quot;reset&quot;: {
                &quot;mode&quot;: &quot;idle&quot;,
                &quot;idleMinutes&quot;: 60
              }
            },
            &quot;logging&quot;: {
              &quot;level&quot;: &quot;info&quot;,
              &quot;consoleLevel&quot;: &quot;info&quot;,
              &quot;consoleStyle&quot;: &quot;compact&quot;,
              &quot;redactSensitive&quot;: &quot;tools&quot;
            },
            &quot;tools&quot;: {
              &quot;profile&quot;: &quot;full&quot;,
              &quot;web&quot;: {
                &quot;search&quot;: {
                  &quot;enabled&quot;: false
                },
                &quot;fetch&quot;: {
                  &quot;enabled&quot;: true
                }
              }
            }
          }</code></pre><p>If you chose the non-persistent smoke-test path, use this storage section:</p><pre><code>  persistence:
    data:
      enabled: false
    tmp:
      enabled: true
      type: emptyDir
      advancedMounts:
        main:
          init-config:
            - path: /tmp
            - path: /home/node/.openclaw
          init-skills:
            - path: /tmp
            - path: /home/node/.openclaw
          main:
            - path: /tmp
            - path: /home/node/.openclaw</code></pre><p>If you chose the persistent path, use this storage section instead:</p><pre><code>  controllers:
    main:
      initContainers:
        fix-permissions:
          image:
            repository: busybox
            tag: &quot;1.36&quot;
          command:
            - sh
            - -c
            - |
              set -eu
              mkdir -p /home/node/.openclaw
              chown -R 1000:1000 /home/node/.openclaw
          securityContext:
            runAsUser: 0
            runAsGroup: 0
            runAsNonRoot: false
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            seccompProfile:
              type: RuntimeDefault
            capabilities:
              drop:
                - ALL
              add:
                - CHOWN
  persistence:
    data:
      enabled: true
      type: persistentVolumeClaim
      accessMode: ReadWriteOnce
      size: 5Gi
      advancedMounts:
        main:
          fix-permissions:
            - path: /home/node/.openclaw
          init-config:
            - path: /home/node/.openclaw
          init-skills:
            - path: /home/node/.openclaw
          main:
            - path: /home/node/.openclaw
    tmp:
      enabled: true
      type: emptyDir</code></pre><p>The <code>tmp</code> emptyDir is still needed even with persistent storage. OpenClaw writes temporary files outside of <code>/home/node/.openclaw</code> during init, so the root filesystem must have a writable <code>/tmp</code> even when the PVC is handling persistent state.</p><p>A few practical notes:</p><ul><li><code>OPENCLAW_GATEWAY_PASSWORD</code> comes from <code>openclaw-auth-secret</code></li><li>the browser sidecar is disabled here to keep the install lighter</li><li>if you use another model provider, update both the credentials and the configured model name</li><li>the smoke-test path is ephemeral; pod recreation will discard local state</li></ul><h2 id="step-7-install-openclaw">Step 7: Install OpenClaw</h2><p>Add the Helm repo:</p><pre><code>helm repo add openclaw https://serhanekicii.github.io/openclaw-helm
helm repo update</code></pre><p>Then install the chart:</p><pre><code>helm install openclaw openclaw/openclaw -n openclaw \
  -f openclaw-values.yaml \
  --skip-schema-validation</code></pre><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">--skip-schema-validation is required here. The wrapper chart schema can reject values that still work correctly with the underlying chart.</div></div><h2 id="step-8-wait-for-the-deployment">Step 8: Wait for the deployment</h2><p>Wait for the rollout:</p><pre><code>kubectl rollout status deployment openclaw -n openclaw --timeout=600s</code></pre><p>You can also check the pods directly:</p><pre><code>kubectl get pods -n openclaw</code></pre><p>You want to see the OpenClaw pod reach <code>Running</code>.</p><p>If it does not, inspect the pod and recent events:</p><pre><code>kubectl describe pod -n openclaw &lt;pod-name&gt;
kubectl get events -n openclaw --sort-by=.lastTimestamp</code></pre><h2 id="step-9-access-the-control-ui">Step 9: Access the control UI</h2><p>The simplest way to test a fresh install is with port forwarding:</p><pre><code>kubectl port-forward -n openclaw deployment/openclaw 18789:18789</code></pre><p>Then open:</p><pre><code>http://localhost:18789</code></pre><p>Sign in with the password you stored in the <code>openclaw-auth-secret</code> secret.</p><p>You can expose OpenClaw through ingress or a load balancer later if you want always-on remote access. For the first install, I would treat that as an operational choice, not a prerequisite. Port forwarding is the fastest way to verify that everything works.</p><h2 id="quick-troubleshooting-notes">Quick troubleshooting notes</h2><p>If the install does not work the first time, these are the first places I would check:</p><ol><li><strong>Provider token:</strong> if you are using GitHub Copilot, make sure you used the OAuth device flow token, not a PAT.</li><li><strong>Model ID:</strong> query the API instead of guessing the exact model name.</li><li><strong>Writable home directory:</strong> for chart <code>1.5.10</code>, make sure <code>/home/node/.openclaw</code> is mounted writable.</li><li><strong>Storage:</strong> if your PVC stays pending, check whether your cluster has a default storage class, whether you need to set one explicitly, or whether you need to create a PV first.</li><li><strong>Chart schema:</strong> the wrapper chart schema can reject values that look reasonable. If using <code>existingClaim</code> under <code>app-template.persistence.data</code>, add <code>--skip-schema-validation</code> at install time.</li><li><strong>Cold start time:</strong> initial image pulls can take a while, especially on smaller or ARM64 systems.</li><li><strong>Token expiry:</strong> GitHub Copilot OAuth tokens can expire and may need to be refreshed later.</li></ol><h2 id="why-this-matters-in-the-bigger-workflow">Why this matters in the bigger workflow</h2><p>Getting OpenClaw running is not the whole point. The point is to make it one reliable part of a larger delivery system.</p><p>In the workflow I&#x2019;m building, GitHub holds the work queue, coding agents help implement changes, ArgoCD handles deployment, and OpenClaw acts as the operating layer tying those pieces together. Infrastructure only matters if it supports a repeatable way to move work from idea to production. That is the role I want OpenClaw to play.</p><p>This guide covers the first part of that path: getting OpenClaw running cleanly so the rest of the workflow has a solid base.</p><h2 id="conclusion">Conclusion</h2><p>OpenClaw makes sense on Kubernetes when you want your AI operating layer to live alongside the rest of your delivery infrastructure instead of floating outside it.</p><p>Once provider credentials, storage, and chart configuration are sorted out, the deployment itself is fairly straightforward. The real friction is usually around authentication, model naming, and choosing the right storage path for your cluster.</p><p>If you are trying to move from scattered AI tooling toward a usable AI Delivery Workflow, this is where the system starts to become real. 
A clean OpenClaw install does not finish the job, but it gives you a solid operating base for everything that comes next.</p>]]></content:encoded></item><item><title><![CDATA[Why I’m Building an AI Delivery Workflow]]></title><description><![CDATA[Why I am trying to turn scattered AI, GitHub, and deployment tools into one practical delivery workflow.]]></description><link>https://joshdmoore.com/why-im-building-an-ai-delivery-workflow/</link><guid isPermaLink="false">69bd62dedfe5590001345222</guid><category><![CDATA[AI Delivery Workflow]]></category><category><![CDATA[OpenClaw]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Mon, 23 Mar 2026 03:10:38 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2026/03/file_16---d7eae2ca-9b15-4e92-9d7f-074b57409fb8.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2026/03/file_16---d7eae2ca-9b15-4e92-9d7f-074b57409fb8.jpg" alt="Why I&#x2019;m Building an AI Delivery Workflow"><p>AI tooling has gotten good enough to be useful, but for most technical builders it still does not feel like a real operating system.</p><p>You can open ChatGPT, Claude, Copilot, Codex, or another coding agent and get something helpful. You can ask for code, explanations, refactors, commands, and plans. You can connect pieces of your stack. You can automate parts of your workflow.</p><p>But most of the time, it still feels scattered.</p><p>That is the problem I care about right now.</p><p>I am not especially interested in AI as a toy, a gimmick, or a source of endless screenshots. I am interested in whether it can become a practical delivery layer for real work.</p><p>I want something that helps move a project from idea to issue, from issue to implementation, from implementation to review, and from review to deployment without turning the whole process into chaos.</p><p>That is what I mean by an <strong>AI Delivery Workflow</strong>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://joshdmoore.com/content/images/2026/03/tmp_ai_delivery_workflow_inline-4.svg" class="kg-image" alt="Why I&#x2019;m Building an AI Delivery Workflow" loading="lazy" width="1400" height="700"><figcaption><span style="white-space: pre-wrap;">The goal is not more AI tabs. It is a clearer path from planned work to shipped work.</span></figcaption></figure><h2 id="not-%E2%80%9Cai-does-everything%E2%80%9D">Not &#x201C;AI does everything&#x201D;</h2><p>When people hear language like this, it is easy to imagine a fully autonomous setup that replaces judgment, skips review, and magically runs the whole software lifecycle on its own.</p><p>That is not what I am building.</p><p>I do not think the goal is to hand over the keys and hope for the best. I think the goal is to build a workflow where AI is genuinely useful inside a system that still has structure, boundaries, and human decision points.</p><p>In practice, that means:</p><ul><li>issues still matter</li><li>review still matters</li><li>deployment boundaries still matter</li><li>human approval still matters</li></ul><p>The value is not that AI replaces the workflow. 
The value is that AI becomes productive <em>inside</em> the workflow.</p><h2 id="the-problem-with-the-current-tool-landscape">The problem with the current tool landscape</h2><p>Right now, there are a lot of individually impressive tools:</p><ul><li>coding agents that can implement real changes</li><li>systems like OpenClaw that can act more like an operating layer than a chat box</li><li>GitHub issues and pull requests that already provide a clean work queue</li><li>GitOps tools like ArgoCD that create a sane deployment path</li></ul><p>But if you are a technical builder, platform engineer, founder, or operator trying to actually use these tools together, the path is still fuzzy.</p><p>You can usually get one piece working. You can often get two or three pieces working.</p><p>What is harder is getting the overall system to feel coherent.</p><p>That is where most of the friction lives:</p><ul><li>What tool should do what?</li><li>Where should work begin?</li><li>How do you keep agents from becoming disconnected chat assistants?</li><li>How do you make GitHub the queue instead of a side effect?</li><li>How do you preserve review and deployment discipline?</li><li>How do you make the whole thing feel usable instead of fragile?</li></ul><p>That is the gap I want to close.</p><h2 id="what-i%E2%80%99m-actually-building">What I&#x2019;m actually building</h2><p>I am working toward a practical operating model built around a few core ideas:</p><ul><li><strong>OpenClaw</strong> as the coordinating layer</li><li><strong>GitHub issues</strong> as the work queue</li><li><strong>coding agents</strong> as implementation helpers, not independent bosses</li><li><strong>pull requests and review</strong> as quality and control points</li><li><strong>ArgoCD and Kubernetes</strong> as the deployment path</li></ul><p>That stack will not be right for everyone. It is opinionated. It assumes some technical comfort. It is not a beginner course and it is not trying to be one.</p><p>But for the kind of builder I care about here, it solves a real problem: how to turn a pile of promising AI and infrastructure tools into a workflow you can actually trust.</p><h2 id="why-this-matters-to-me">Why this matters to me</h2><p>I do not want to spend my time bouncing between disconnected tools, each of which is impressive in isolation but awkward in combination.</p><p>I want a workflow that helps me do delivery work with more leverage and more clarity.</p><p>I want to be able to:</p><ul><li>capture work cleanly</li><li>delegate parts of implementation to agents</li><li>review changes with clear boundaries</li><li>ship through a real deployment path</li><li>understand what the system is doing and why</li></ul><p>That last point matters a lot.</p><p>I am not trying to build a black box. I am trying to build a workflow that increases confidence.</p><h2 id="what-this-series-will-cover">What this series will cover</h2><p>This is the frame for a broader set of writing and operator material I plan to publish.</p><p>Some of it will stay public. Some of the more structured playbooks, checklists, and deeper workflow material will eventually live as paid member content. But the goal is the same across all of it: make this stack more understandable, more usable, and more practical.</p><p>The first implementation article in that series will be about getting OpenClaw running on Kubernetes in a way that fits this broader workflow direction.</p><p>That matters because I do not want the install guide to feel like an isolated technical note. 
I want it to sit inside a more complete point of view:</p><blockquote>AI becomes much more useful when it is part of a delivery workflow instead of just another chat window.</blockquote><h2 id="what-this-is-not">What this is not</h2><p>To keep this grounded, it is worth saying what I am <em>not</em> trying to do.</p><ul><li>I am not promising full automation.</li><li>I am not saying AI replaces engineering judgment.</li><li>I am not building a generic prompt guide.</li><li>I am not treating Kubernetes, GitHub, and agent tooling like magic.</li><li>I am not trying to create a hypey &#x201C;one weird trick&#x201D; system.</li></ul><p>I am trying to create a workflow that a technically capable person can actually operate with confidence.</p><h2 id="the-real-goal">The real goal</h2><p>If this work goes well, the outcome is not just &#x201C;I have OpenClaw installed.&#x201D;</p><p>The outcome is something better than that:</p><ul><li>I understand the stack</li><li>I know what each part is for</li><li>I can move work through it with less friction</li><li>I trust it enough to use it on real projects</li></ul><p>That is the standard I care about.</p><p>That is why I am building an AI Delivery Workflow.</p><p>And that is the direction this series is going next.</p>]]></content:encoded></item><item><title><![CDATA[Kubernetes Home Automation Cluster – Part 3: Core Apps]]></title><description><![CDATA[<p>Now that we have a running Kubernetes cluster, we need to make it a bit more usable. For my home cluster, I wanted to ensure it was fault-tolerant, so I decided to implement a load balancer, an ingress controller, and an application manager. For these components, I chose MetalLB, Nginx,</p>]]></description><link>https://joshdmoore.com/kubernetes-home-automation-cluster-part-3-batteries/</link><guid isPermaLink="false">6790930b502fbe0001fb6b5a</guid><category><![CDATA[Home Automation]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Fri, 28 Feb 2025 01:45:44 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2025/03/Screenshot-2025-03-05-at-10.47.21-PM.png" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2025/03/Screenshot-2025-03-05-at-10.47.21-PM.png" alt="Kubernetes Home Automation Cluster &#x2013; Part 3: Core Apps"><p>Now that we have a running Kubernetes cluster, we need to make it a bit more usable. For my home cluster, I wanted to ensure it was fault-tolerant, so I decided to implement a load balancer, an ingress controller, and an application manager. For these components, I chose MetalLB, Nginx, and Kubeapps.</p><p>In this guide, we will use Helm to install all the core applications. We&#x2019;ll start with MetalLB for load balancing, then move on to Nginx as the ingress controller, and finally install Kubeapps. Installing Kubeapps last ensures that we have a UI available once everything else is set up.</p><p>Another suggestion I would make is to get a UI-based kubectl IDE like Lens or Headlamp. This will give you an easy way to view and interact with your cluster. Be aware, though, that the CPU and memory usage information will be missing until we install metrics-server. 
Talos has an excellent tutorial on how to install metrics-server, but I&#x2019;ll explain some of the basics here as well.</p><h3 id="preparing-your-cluster-for-metrics-server">Preparing Your Cluster for Metrics-Server</h3><p>To deploy metrics-server on a Talos-based Kubernetes cluster, you&#x2019;ll need to make some changes to the machine configuration for each node. The Talos documentation goes into great detail about the reasons for these changes, so I won&#x2019;t rehash that here. Instead, I&#x2019;ll focus on how to make these updates.</p><p>If you have a configuration file saved for each of your nodes, you can simply update the file and reapply it using the <code>apply-config</code> command as we did earlier. If you don&#x2019;t have a machine config file saved or prefer to edit it directly, you can use the following Talos CLI command:</p><pre><code class="language-bash">talosctl edit machineconfig -e &lt;IP_ADDRESS&gt; -n &lt;IP_ADDRESS&gt;</code></pre><p>This will open the configuration file in your preferred editor, allowing you to make the necessary changes. Specifically, you&#x2019;ll need to enable certificate rotation for the kubelet by adding the following configuration:</p><pre><code class="language-yaml">machine:
  kubelet:
    extraArgs:
      rotate-server-certificates: &quot;true&quot;
</code></pre><p>Once the changes are saved, Talos will automatically apply the change to the node.</p><h3 id="automating-certificate-approval">Automating Certificate Approval</h3><p>With certificate rotation enabled, you&#x2019;ll also need a mechanism to automatically approve the new certificates generated by the kubelets. The Kubelet Serving Certificate Approver automates this process. Deploy the Kubelet Serving Certificate Approver directly using the following command:</p><pre><code class="language-bash">kubectl apply -f https://raw.githubusercontent.com/alex1989hu/kubelet-serving-cert-approver/main/deploy/standalone-install.yaml
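</code></pre><p>Once the approver is running, new kubelet serving certificate requests should be approved automatically. A quick way to confirm (CSR names will vary by cluster):</p><pre><code class="language-bash"># Recent kubelet serving CSRs should show Approved,Issued
kubectl get csr --sort-by=.metadata.creationTimestamp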
</code></pre><h3 id="deploying-the-metrics-server">Deploying the Metrics Server</h3><p>After enabling certificate rotation and setting up automatic approval, you can deploy the Metrics Server by applying its official manifest:</p><pre><code class="language-bash">kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
</code></pre><p>Alternatively, you can include both the Kubelet Serving Certificate Approver and the Metrics Server in your cluster&#x2019;s bootstrap process by adding them to the <code>extraManifests</code> section of your cluster configuration:</p><pre><code class="language-yaml">cluster:
  extraManifests:
    - https://raw.githubusercontent.com/alex1989hu/kubelet-serving-cert-approver/main/deploy/standalone-install.yaml
    - https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
</code></pre><h3 id="installing-metallb">Installing MetalLB</h3><p>We&#x2019;ll start by installing MetalLB to provide load balancing capabilities to your cluster. Before doing so, you need to make some permission changes to the <code>metallb-system</code> namespace:</p><ol><li>Create the namespace:</li></ol><pre><code class="language-bash">kubectl create --save-config -f - &lt;&lt;EOF
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
EOF</code></pre><ol start="2"><li>Install MetalLB using Helm:</li></ol><pre><code class="language-bash">helm repo add metallb https://metallb.github.io/metallb
helm install metallb --namespace metallb-system metallb/metallb
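</code></pre><p>Before applying the address pool below, make sure the MetalLB controller and speaker pods have come up:</p><pre><code class="language-bash">kubectl get pods -n metallb-system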
</code></pre><p>Once installed, you&#x2019;ll need to configure a Layer 2 or BGP mode IP range by applying the following IPAddressPool and L2Advertisement resources:</p><pre><code class="language-bash">kubectl apply -f - &lt;&lt;EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-lb-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.46-192.168.0.50
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: metaladvertisement
  namespace: metallb-system
EOF
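</code></pre><p>To sanity-check the new pool, you can expose a throwaway deployment and watch its service receive an address from the range (<code>nginx-test</code> is just a placeholder name):</p><pre><code class="language-bash">kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80
# EXTERNAL-IP should come from 192.168.0.46-192.168.0.50
kubectl get svc nginx-test
# Clean up when done
kubectl delete service,deployment nginx-test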
</code></pre><p>With the pool and L2 advertisement in place, MetalLB can assign IP addresses to load-balanced services.</p><h3 id="installing-nginx">Installing Nginx</h3><p>Next, we&#x2019;ll install Nginx as the ingress controller. Add the Helm repository and deploy it:</p><pre><code class="language-bash">helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.publishService.enabled=true
</code></pre><p>Monitor the pods in the <code>ingress-nginx</code> namespace to ensure everything is running smoothly:</p><pre><code class="language-bash">kubectl get pods -n ingress-nginx
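</code></pre><p>The ingress controller&#x2019;s service should also pick up an address from the MetalLB pool; that external IP is the one we&#x2019;ll point host entries at later. For a release named <code>ingress-nginx</code>, the chart names the service <code>ingress-nginx-controller</code>:</p><pre><code class="language-bash">kubectl get svc -n ingress-nginx ingress-nginx-controller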
</code></pre><h3 id="create-storage-class-for-the-nvme-drive">Create Storage Class for the NVME drive</h3><p>Run the following command</p><pre><code>kubectl apply -f - &lt;&lt;EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF

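</code></pre><p>Because <code>kubernetes.io/no-provisioner</code> never creates volumes on its own, each claim against <code>ssd-storage</code> needs a matching PersistentVolume created by hand. Here is a minimal sketch for the Kubeapps database claim below, assuming the NVMe drive is mounted at <code>/var/mnt/extra</code> as configured in Part 2 (the PV name and subdirectory are illustrative; adjust them to your nodes):</p><pre><code class="language-bash">kubectl apply -f - &lt;&lt;EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kubeapps-postgres-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd-storage
  # hostPath pins the data to whichever node the pod lands on; acceptable for a homelab
  hostPath:
    path: /var/mnt/extra/kubeapps-postgres
    type: DirectoryOrCreate
EOF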
</code></pre><h3 id="installing-kubeappsoptional">Installing Kubeapps - optional</h3><h3 id></h3><p>Finally, let&#x2019;s install Kubeapps to provide a UI for managing applications via Helm charts. First, we need to add a PVC that our Kubeapps application can use for storage.</p><pre><code class="language-bash">kubectl apply -f - &lt;&lt;EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubeapps-postgres-pvc
  namespace: kubeapps
spec:
  storageClassName: ssd-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
</code></pre><p>Next, we need to add the Helm repository for Kubeapps and install it in a dedicated namespace:</p><pre><code class="language-bash">helm repo add bitnami https://charts.bitnami.com/bitnami
helm install kubeapps bitnami/kubeapps \
  --namespace kubeapps --create-namespace \
  --set ingress.enabled=true \
  --set postgresql.primary.persistence.enabled=true \
  --set postgresql.primary.persistence.existingClaim=&quot;kubeapps-postgres-pvc&quot;</code></pre><p>You then need to create a credential that can be used to access the dashboard.</p><pre><code class="language-bash">kubectl create --namespace default serviceaccount kubeapps-operator
kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator
cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kubeapps-operator-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: kubeapps-operator
type: kubernetes.io/service-account-token
EOF
</code></pre><p>You can now retrieve the token we created and use it to sign in to the dashboard.</p><pre><code class="language-bash">kubectl get --namespace default secret kubeapps-operator-token -o go-template=&apos;{{.data.token | base64decode}}&apos;</code></pre><p>For now, you will need to add a host entry to your hosts file to access the UI through the ingress. You can find the IP address for your ingress by running the following command:</p><pre><code class="language-bash">kubectl get -n kubeapps ingress</code></pre><p>You should get something that looks like this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2025/03/Screenshot-2025-03-01-at-8.30.19-PM.png" class="kg-image" alt="Kubernetes Home Automation Cluster &#x2013; Part 3: Core Apps" loading="lazy" width="890" height="84" srcset="https://joshdmoore.com/content/images/size/w600/2025/03/Screenshot-2025-03-01-at-8.30.19-PM.png 600w, https://joshdmoore.com/content/images/2025/03/Screenshot-2025-03-01-at-8.30.19-PM.png 890w" sizes="(min-width: 720px) 720px"></figure><p>Then edit your /etc/hosts file and add an entry for your kubeapps ingress:</p><pre><code>&lt;ip-address&gt;    kubeapps.local</code></pre><p>Then you should be able to access Kubeapps by opening a browser and navigating to <a href="http://kubeapps.local/?ref=joshdmoore.com" rel="noreferrer">http://kubeapps.local</a>.</p><h3 id="conclusion">Conclusion</h3><p>By installing MetalLB, Nginx, and Kubeapps, we&#x2019;ve transformed our Kubernetes cluster into a fault-tolerant and user-friendly platform. These tools, along with a UI-based kubectl IDE and metrics-server, make managing and monitoring your cluster a breeze. Happy clustering!</p>]]></content:encoded></item><item><title><![CDATA[Home Automation Kubernetes Cluster – Part 2: Installing Talos Linux]]></title><description><![CDATA[<p>This is part 2 of my Kubernetes Home Automation Cluster series. In <a href="https://joshdmoore.com/edge-kubernetes-cluster/" rel="noreferrer">the first part</a>, we talked about how to assemble the hardware. In this part, we&#x2019;ll go over how to get Kubernetes up and running.</p><h3 id="why-talos-linux">Why <a href="https://www.talos.dev/?ref=joshdmoore.com" rel="noreferrer">Talos</a> Linux?</h3><p>As I said in Part 1, I really liked</p>]]></description><link>https://joshdmoore.com/home-automation-kubernetes-cluster-part-2-talos/</link><guid isPermaLink="false">677f811f502fbe0001fb6991</guid><category><![CDATA[Home Automation]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Wed, 15 Jan 2025 07:37:27 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2025/01/The-Box.png" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2025/01/The-Box.png" alt="Home Automation Kubernetes Cluster &#x2013; Part 2: Installing Talos Linux"><p>This is part 2 of my Kubernetes Home Automation Cluster series. In <a href="https://joshdmoore.com/edge-kubernetes-cluster/" rel="noreferrer">the first part</a>, we talked about how to assemble the hardware. In this part, we&#x2019;ll go over how to get Kubernetes up and running.</p><h3 id="why-talos-linux">Why <a href="https://www.talos.dev/?ref=joshdmoore.com" rel="noreferrer">Talos</a> Linux?</h3><p>As I said in Part 1, I really liked the Talos Linux project when I started searching for the best way to run Kubernetes on an SBC.
I did a proof of concept on several different hardware/software combinations during my search for the best edge Kubernetes clusters for my home automation needs but ultimately settled on Talos. I won&#x2019;t go into all my POCs right now because this article is going to be long enough already. I tried both purpose-built Kubernetes distributions and Kubernetes on standard Linux, but Talos stood out as the best fit for my needs.</p><h3 id="prerequisites">Prerequisites</h3><p>Before we begin, here&#x2019;s what you&#x2019;ll need:</p><ul><li>A Rock 4 SE SBC or compatible hardware</li><li>An SD card (at least 16 GB)</li><li>A computer with an SD card reader</li><li>Ethernet cable and access to your network&#x2019;s DHCP server</li><li>A tool to flash disk images (I use <a href="https://www.balena.io/etcher/?ref=joshdmoore.com">Balena Etcher</a>)</li></ul><h3 id="downloading-and-flashing-the-talos-image">Downloading and Flashing the <a href="https://www.talos.dev/?ref=joshdmoore.com" rel="noreferrer">Talos </a>Image</h3><p>Let&#x2019;s get started by visiting the <a href="https://factory.talos.dev/?ref=joshdmoore.com" rel="noreferrer">Talos Image Factory</a>:</p><ol><li>Select the <strong>&quot;Single Board Computer&quot;</strong> option.</li><li>Choose the version of Talos you want to install. The current version, as of the time of writing, is 1.9.1.</li><li>Next, select your SBC board. In our case, that&#x2019;s the <strong>Rock 4 SE</strong>.</li><li>Click <strong>&quot;Next&quot;</strong> on the <em>System Extension</em> and <em>Kernel Arguments</em> screens until you reach the download page.</li></ol><p>Download the disk image file from the &quot;Disk Image&quot; link. Then, flash it to your SD card using your favorite image flashing tool. I personally like Balena Etcher because it makes flashing an image to an SD card easy.</p><h3 id="booting-the-board">Booting the Board</h3><p>Once you&#x2019;ve flashed the SD card with the Talos image, insert it into your Rock 4 SE. Plug in the Ethernet cable and power up the board. You should see the board&#x2019;s lights come on, indicating activity.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x2705;</div><div class="kg-callout-text">On Rock 4 SE boards, a successful boot is indicated by the network indicator blinking rapidly for a few seconds, turning off briefly, and then resuming to show normal traffic activity.</div></div><h3 id="finding-your-device%E2%80%99s-ip-address">Finding Your Device&#x2019;s IP Address</h3><p>Next, we need to find the IP address assigned to the board by your network&#x2019;s DHCP server. How you do this depends on your router or network setup:</p><ul><li><strong>Using Your Router</strong>: Log in to your router&#x2019;s admin interface and look for connected devices.</li><li><strong>Using Network Tools</strong>: Tools like <code>nmap</code> or Angry IP Scanner can help you find the new device.</li><li><strong>Omada Users</strong>: I use TP-Link Omada networking and can see the device&#x2019;s IP address in the interface.</li></ul><p>Once you&#x2019;ve located the IP, you can confirm it&#x2019;s your Talos system by running the following command:</p><pre><code class="language-bash">talosctl -e &lt;IP_ADDRESS&gt; -n &lt;IP_ADDRESS&gt; get disks --insecure</code></pre><p>This command should list all the disks on the machine. You can also confirm your NVMe drive is detected. 
You should see something like this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2025/01/Screenshot-2025-01-10-at-11.38.35-PM.png" class="kg-image" alt="Home Automation Kubernetes Cluster &#x2013; Part 2: Installing Talos Linux" loading="lazy" width="2000" height="324" srcset="https://joshdmoore.com/content/images/size/w600/2025/01/Screenshot-2025-01-10-at-11.38.35-PM.png 600w, https://joshdmoore.com/content/images/size/w1000/2025/01/Screenshot-2025-01-10-at-11.38.35-PM.png 1000w, https://joshdmoore.com/content/images/size/w1600/2025/01/Screenshot-2025-01-10-at-11.38.35-PM.png 1600w, https://joshdmoore.com/content/images/size/w2400/2025/01/Screenshot-2025-01-10-at-11.38.35-PM.png 2400w" sizes="(min-width: 720px) 720px"></figure><h3 id="configuring-talos">Configuring <a href="https://www.talos.dev/?ref=joshdmoore.com" rel="noreferrer">Talos</a></h3><p>At this point, <a href="https://www.talos.dev/?ref=joshdmoore.com" rel="noreferrer">Talos</a> is booted in maintenance mode and isn&#x2019;t fully operational yet. The next step is to apply a configuration file to the node so it can boot into the appropriate role. We&#x2019;re essentially following the Talos <a href="https://www.talos.dev/v1.9/introduction/getting-started/?ref=joshdmoore.com" rel="noreferrer">Getting Started guide</a>.</p><h4 id="choosing-the-cluster-endpoint">Choosing the Cluster Endpoint</h4><p>The <code>cluster-endpoint</code> is the URL the cluster uses to reach the Kubernetes API server, in the form <code>https://&lt;IP&gt;:6443</code>. In an all-metal setup like this, you need a stable IP for the control plane node. Talos uses a <strong>VIP (Virtual IP)</strong> to manage this. A VIP is just an unassigned IP address on your network, and Talos ensures it&#x2019;s always assigned to the active control plane node using <code>etcd</code>.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">Ensure you select an IP address that is not managed by DHCP or any other network service.</div></div><p>Then, run this command, using the VIP in the endpoint URL (for example, <code>https://192.168.0.45:6443</code>):</p><pre><code class="language-bash">talosctl gen config &lt;cluster-name&gt; &lt;cluster-endpoint&gt;</code></pre><h3 id="editing-the-control-plane-file">Editing the Control Plane File</h3><p>The <code>gen config</code> command generates several files with very good defaults. The one we&#x2019;re currently concerned with is the <code>controlplane.yaml</code> file. We need to edit this file to configure our bare-metal cluster. Below are the changes we need to make:</p><h3 id="network-changes">Network Changes</h3><p>We need to update the network configuration to ensure the node is set up correctly for the cluster. Refer to the <a href="https://www.talos.dev/?ref=joshdmoore.com" rel="noreferrer">Talos documentation</a> for more details, but here are the sections to focus on: <code>DeviceSelector</code>, <code>DHCP</code>, and <code>VIP</code>.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">The board&apos;s IP address must be reserved once the node is added to the cluster. I handle this by reserving the assigned DHCP IP address to the board&apos;s MAC address using my TP-Link Omada network. Alternatively, you can use static IP addresses, but this will require creating multiple controlplane.yaml files to configure your control plane nodes.
For more details on static configuration options, refer to the <a href="https://www.talos.dev/?ref=joshdmoore.com" target="_new" rel="noopener">Talos documentation</a>.</div></div><p>Here&#x2019;s an example of the network portion:</p><pre><code class="language-yaml">network:
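  # deviceSelector matches the NIC by driver, so the same file works across identical boards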
  interfaces:
    - deviceSelector:
        driver: rk*
      dhcp: true
      vip:
        ip: 192.168.0.45 # Specifies the IP address to be used.
  nameservers:
    - 8.8.8.8
    - 1.1.1.1</code></pre><h3 id="setting-the-installation-disk">Setting the Installation Disk</h3><p>To ensure the OS installs on the correct media, we&#x2019;ll use a <code>diskSelector</code> instead of specifying the disk by name. This makes the setup resilient to changes in disk naming conventions:</p><pre><code class="language-yaml">install:
  image: ghcr.io/siderolabs/installer:v1.9.0 # The image used for installation.
  wipe: false # Indicates if the installation disk should be wiped.
  diskSelector:
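    # 'sd' targets the SD card; Talos also accepts disk types such as nvme, ssd, and hdd here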
    type: sd</code></pre><h3 id="mounting-the-nvme-drive">Mounting the NVMe Drive</h3><p>Next, we&#x2019;ll configure the NVMe drive to mount at a usable location. Update the <code>disks</code> section of the file:</p><pre><code class="language-yaml">disks:
  - device: /dev/nvme0n1 # The name of the disk to use.
    partitions:
      - mountpoint: /var/mnt/extra # Where to mount the partition.</code></pre><p>This configuration ensures the NVMe drive is mounted at <code>/var/mnt/extra</code>.</p><h3 id="allowing-workloads-on-control-plane-nodes">Allowing Workloads on Control Plane Nodes</h3><p>Finally, enable workloads to run on control-plane nodes by updating this setting:</p><pre><code class="language-yaml">allowSchedulingOnControlPlanes: true</code></pre><p>With these changes, your control-plane file is ready to bootstrap your bare-metal cluster. </p><h2 id="control-plane-node-setup">Control Plane Node Setup</h2><p>The following steps are adapted from the Talos <a href="https://www.talos.dev/v1.9/introduction/getting-started/?ref=joshdmoore.com#kubernetes-bootstrap" rel="noopener">Getting Started guide</a>:</p><h3 id="configuring-the-first-control-plane-node">Configuring the First Control Plane Node</h3><p>First, we need to apply the configuration to our control plane node using the <code>talosctl apply-config</code> command. Replace <code>&lt;IP_ADDRESS&gt;</code> with the IP address of your node:</p><pre><code class="language-bash">talosctl apply-config -e &lt;IP_ADDRESS&gt; -n &lt;IP_ADDRESS&gt; --talosconfig ./talosconfig --file controlplane.yaml --insecure</code></pre><p>This step applies the settings from your <code>controlplane.yaml</code> file, including network, disk, and scheduling configurations, ensuring the node is properly set up to join the cluster.</p><h3 id="bootstrapping-the-control-plane-node">Bootstrapping the Control Plane Node</h3><p>Next, bootstrap the first node of the control plane cluster. This step sets up the etcd cluster and starts the Kubernetes control plane components:</p><pre><code class="language-bash">talosctl bootstrap -e &lt;IP_ADDRESS&gt; -n &lt;IP_ADDRESS&gt; --talosconfig ./talosconfig</code></pre><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Important:</strong></b> The bootstrap operation should only be called <b><strong style="white-space: pre-wrap;">once</strong></b> on a <b><strong style="white-space: pre-wrap;">single control plane node</strong></b>. If you have multiple control plane nodes, it doesn&#x2019;t matter which one you issue the bootstrap command against.</div></div><h3 id="verifying-the-node">Verifying the Node</h3><p>After a few moments, you should be able to download the Kubernetes client configuration and start using your cluster:</p><pre><code class="language-bash">talosctl kubeconfig --nodes &lt;IP_ADDRESS&gt; --endpoints &lt;IP_ADDRESS&gt; --talosconfig=./talosconfig</code></pre><p>This command merges the cluster configuration into your default Kubernetes configuration file. 
If you&#x2019;d like to save it to a different file, you can specify an alternative filename:</p><pre><code class="language-bash">talosctl kubeconfig alternative-kubeconfig --nodes &lt;IP_ADDRESS&gt; --endpoints &lt;IP_ADDRESS&gt; --talosconfig ./talosconfig</code></pre><p>To make sure that everything is running correctly, I like to issue the services command via <code>talosctl</code>:</p><pre><code class="language-bash">talosctl services -e &lt;VIP_IP_ADDRESS&gt; -n &lt;NODE_IP_ADDRESS&gt; --talosconfig ./talosconfig</code></pre><p>You should get something like this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2025/01/Screenshot-2025-01-15-at-12.04.22-AM.png" class="kg-image" alt="Home Automation Kubernetes Cluster &#x2013; Part 2: Installing Talos Linux" loading="lazy" width="1374" height="344" srcset="https://joshdmoore.com/content/images/size/w600/2025/01/Screenshot-2025-01-15-at-12.04.22-AM.png 600w, https://joshdmoore.com/content/images/size/w1000/2025/01/Screenshot-2025-01-15-at-12.04.22-AM.png 1000w, https://joshdmoore.com/content/images/2025/01/Screenshot-2025-01-15-at-12.04.22-AM.png 1374w" sizes="(min-width: 720px) 720px"></figure><p>You can also run the &quot;<em>health</em>&quot; command:</p><pre><code class="language-bash">talosctl health -e &lt;VIP_IP_ADDRESS&gt; -n &lt;NODE_IP_ADDRESS&gt; --talosconfig ./talosconfig</code></pre><p>You should get something like this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2025/01/Screenshot-2025-01-15-at-12.07.13-AM.png" class="kg-image" alt="Home Automation Kubernetes Cluster &#x2013; Part 2: Installing Talos Linux" loading="lazy" width="2000" height="943" srcset="https://joshdmoore.com/content/images/size/w600/2025/01/Screenshot-2025-01-15-at-12.07.13-AM.png 600w, https://joshdmoore.com/content/images/size/w1000/2025/01/Screenshot-2025-01-15-at-12.07.13-AM.png 1000w, https://joshdmoore.com/content/images/size/w1600/2025/01/Screenshot-2025-01-15-at-12.07.13-AM.png 1600w, https://joshdmoore.com/content/images/size/w2400/2025/01/Screenshot-2025-01-15-at-12.07.13-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Now, you can connect to your Kubernetes cluster and verify that your nodes are up and running:</p><pre><code class="language-bash">kubectl get nodes</code></pre><p>You should see your control plane node listed and ready.</p><h3 id="adding-additional-nodes">Adding Additional Nodes</h3><p>Adding the remaining SBC boards to your cluster is as simple as scaling up. Ideally, you&#x2019;ll want at least three control plane nodes for redundancy. To scale your cluster, simply apply your <code>controlplane.yaml</code> file to two additional nodes:</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x2757;</div><div class="kg-callout-text">The board&apos;s IP address must be reserved once the node is added to the cluster. I handle this by reserving the assigned DHCP IP address to the board&apos;s MAC address using my TP-Link Omada network. Alternatively, you can use static IP addresses, but this will require creating multiple controlplane.yaml files to configure your control plane nodes.
For more details on static configuration options, refer to the <a href="https://www.talos.dev/?ref=joshdmoore.com" target="_new" rel="noopener">Talos documentation</a>.</div></div><pre><code class="language-bash">talosctl apply-config -e &lt;VIP_IP_ADDRESS&gt; -n &lt;NODE_IP_ADDRESS&gt; --talosconfig ./talosconfig --file controlplane.yaml --insecure</code></pre><p>Again, a good way to check node health is to run the &quot;talosctl services&quot; command:</p><pre><code class="language-bash">talosctl services -e &lt;VIP_IP_ADDRESS&gt; -n &lt;NODE_IP_ADDRESS&gt; --talosconfig ./talosconfig</code></pre><p>You should get something that looks like this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2025/01/Screenshot-2025-01-15-at-12.42.06-AM.png" class="kg-image" alt="Home Automation Kubernetes Cluster &#x2013; Part 2: Installing Talos Linux" loading="lazy" width="1220" height="316" srcset="https://joshdmoore.com/content/images/size/w600/2025/01/Screenshot-2025-01-15-at-12.42.06-AM.png 600w, https://joshdmoore.com/content/images/size/w1000/2025/01/Screenshot-2025-01-15-at-12.42.06-AM.png 1000w, https://joshdmoore.com/content/images/2025/01/Screenshot-2025-01-15-at-12.42.06-AM.png 1220w" sizes="(min-width: 720px) 720px"></figure><p>If instead you get something that looks like this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2025/01/Screenshot-2025-01-15-at-12.41.05-AM.png" class="kg-image" alt="Home Automation Kubernetes Cluster &#x2013; Part 2: Installing Talos Linux" loading="lazy" width="2000" height="279" srcset="https://joshdmoore.com/content/images/size/w600/2025/01/Screenshot-2025-01-15-at-12.41.05-AM.png 600w, https://joshdmoore.com/content/images/size/w1000/2025/01/Screenshot-2025-01-15-at-12.41.05-AM.png 1000w, https://joshdmoore.com/content/images/size/w1600/2025/01/Screenshot-2025-01-15-at-12.41.05-AM.png 1600w, https://joshdmoore.com/content/images/size/w2400/2025/01/Screenshot-2025-01-15-at-12.41.05-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>it usually means the node has not finished provisioning. Rebooting the node typically resolves the issue and allows provisioning to complete. You can do this with the following command:</p><pre><code class="language-bash">talosctl reboot -e &lt;NODE_IP_ADDRESS&gt; -n &lt;NODE_IP_ADDRESS&gt; --talosconfig ./talosconfig</code></pre><p>Once you have configured the additional nodes, you can verify their status by running the following command:</p><pre><code class="language-bash">kubectl get nodes</code></pre><h3 id="adding-additional-worker-nodes">Adding Additional Worker Nodes</h3><p>If you have more than three nodes to add to your cluster, you can make similar edits to the <code>worker.yaml</code> file as you did to the <code>controlplane.yaml</code> file. There are a few exceptions to note:</p><ul><li>You do <strong>not</strong> need to set the <code>VIP</code> in the worker configuration file.</li><li>You also do <strong>not</strong> need to set the <code>allowSchedulingOnControlPlanes</code> property.</li></ul><h3 id="conclusion">Conclusion</h3><p>I had originally planned for this to be a two-article series, but after writing this, I feel this is a good stopping point. If you&#x2019;re already familiar with Kubernetes, you can stop here. However, if you&#x2019;re looking for a &quot;batteries included&quot; version of Kubernetes, I encourage you to check out my next article.
There, I&#x2019;ll explain how to set up an application manager, a load balancer, and an ingress controller.</p>]]></content:encoded></item><item><title><![CDATA[Home Automation Kubernetes Cluster]]></title><description><![CDATA[<p>So, I will be honest with you: this article right here is the whole reason I started my blog. I have a need to move all of my home automation components to a more reliable and scalable solution. Given my current background, I decided that would be a home Kubernetes</p>]]></description><link>https://joshdmoore.com/edge-kubernetes-cluster/</link><guid isPermaLink="false">651e054ca88a60000166c472</guid><category><![CDATA[Home Automation]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Platform Engineering]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Wed, 18 Dec 2024 07:03:25 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2024/12/The-Box.png" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2024/12/The-Box.png" alt="Home Automation Kubernetes Cluster"><p>So, I will be honest with you: this article right here is the whole reason I started my blog. I have a need to move all of my home automation components to a more reliable and scalable solution. Given my current background, I decided that would be a home Kubernetes cluster. This whole journey started when I realized that I needed to run InfluxDB for my home energy monitoring and then continued on when I realized my Raspberry Pi running Home Assistant was less fault-tolerant than I desired. With that in mind, I started building my own home Kubernetes (K8s) cluster.</p><p>When looking for an OS to run, I was really looking for something that was Kubernetes-specific. That&#x2019;s all that I want to run&#x2014;I don&#x2019;t need the rest of a typical Linux system and didn&#x2019;t want the headache of having to keep it patched. There are lots of options out there, but I landed on Talos Linux (<a href="https://www.talos.dev/?ref=joshdmoore.com" rel="noopener">https://www.talos.dev/</a>). I don&#x2019;t want to muddy this article with all the reasons for my decision, but what I will say is that Talos Linux is a very good, immutable, purpose-built Linux distribution.</p><p>Once I decided what Linux OS I wanted to run, it was time to decide what SBC board I wanted to use. Since I had decided to use Talos, I started with the boards that they had support for. Looking at the specs for the boards they support, I decided I really liked the Rock 4 line of boards. I settled on the <a href="https://radxa.com/products/rock4/4se/?ref=joshdmoore.com" rel="noreferrer">Rock 4SE</a>. They have the same footprint as a Raspberry Pi but offer better specs and are a bit more readily available. They also support POE with the addition of a simple hat and allow for an M.2 SSD. I wanted to stay with the footprint of a Raspberry Pi because it made selecting cases a bit easier.</p><p>When it came to case selection, there were lots of options, but I wanted something that I could easily swap boards in and that had a spot for a POE switch. I ended up settling on the <a href="https://www.uctronics.com/?ref=joshdmoore.com" rel="noreferrer">Uctronics</a> Raspberry Pi enclosure. 
For this build, I went with the <a href="https://www.uctronics.com/raspberry-pi/uctronics-upgraded-complete-enclosure-for-raspberry-pi-cluster.html?ref=joshdmoore.com" rel="noreferrer">desktop</a> model, but they also make a <a href="https://www.uctronics.com/raspberry-pi/uctronics-19-1u-raspberry-pi-rack-mount-with-ssd-mounting-brackets.html?ref=joshdmoore.com" rel="noreferrer">rack-mount</a> model.</p><p>For my Kubernetes cluster, I chose a PoE switch from <a href="https://www.tp-link.com/?ref=joshdmoore.com" rel="noreferrer">TP-Link</a> due to its reliable performance and compatibility with the setup&apos;s requirements. <a href="https://www.tp-link.com/?ref=joshdmoore.com" rel="noreferrer">TP-Link</a> also stood out for its support of <a href="https://csa-iot.org/all-solutions/matter/?ref=joshdmoore.com" rel="noreferrer">Matter</a>, a critical standard in smart home interoperability, which aligns with my focus on building systems that integrate seamlessly into modern connected environments. Additionally, I&#x2019;ve found their <a href="https://www.tp-link.com/us/home-networking/smart-switch/?ref=joshdmoore.com" rel="noreferrer">smart light switches</a> to be particularly effective in my own experience, offering functionality that fits well into a home automation ecosystem.</p><p>There are more components I needed, but once I selected the major components, the subcomponents were easier to choose, so I won&#x2019;t go into detail about the reasoning behind those. Maybe in a future article, I will cover why I made some of these decisions, but if I did that here, this article would be way too long.</p><p>Let&#x2019;s get to the good stuff and build a Kubernetes cluster. First, I&#x2019;m going to list all the major components needed to build this. Then, I will get into the build details, and in a follow-up post, I will talk about the software installation and some of the ongoing issues that need to be addressed. Without further ado, here is the build list.</p><h1 id="product-list">Product List</h1>
<table>
<thead>
<tr>
<th>Product</th>
<th>Quantity</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td>Rock 4 SE</td>
<td>4</td>
<td><a href="https://us.rs-online.com/product/okdo/rs114se-d4w2p1/73649569/?ref=joshdmoore.com">Link</a></td>
</tr>
<tr>
<td>M.2 extension board</td>
<td>4</td>
<td><a href="https://us.rs-online.com/product/okdo/ra001/74063359/?ref=joshdmoore.com">Link</a></td>
</tr>
<tr>
<td>Rock 4 SE POE hat</td>
<td>4</td>
<td><a href="https://shop.allnetchina.cn/products/rock-pi-4b-poe-hat?srsltid=AfmBOooOBgUEpa5H1SGYDrmrBPztvs8UD6Az6NFH8Gmfi_YDZrAwQg4l&amp;ref=joshdmoore.com">Link</a></td>
</tr>
<tr>
<td>128 GB SD card</td>
<td>4</td>
<td><a href="https://www.amazon.com/gp/product/B07FCMKK5X/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&amp;psc=1&amp;ref=joshdmoore.com">Link</a></td>
</tr>
<tr>
<td>1 TB NVMe SSD</td>
<td>4</td>
<td><a href="https://www.amazon.com/gp/product/B0CP9CXCXG/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&amp;th=1&amp;ref=joshdmoore.com">Link</a></td>
</tr>
<tr>
<td>Uctronics case</td>
<td>1</td>
<td><a href="https://www.amazon.com/UCTRONICS-Upgraded-Enclosure-Raspberry-Compatible/dp/B09S11Q684/ref=sr_1_4?crid=2T6FP4CPSHCPF&amp;keywords=uctronics+pi+case&amp;qid=1733633525&amp;sprefix=uctronics+pi+case%2Caps%2C139&amp;sr=8-4&amp;ref=joshdmoore.com">Link</a></td>
</tr>
<tr>
<td>POE switch</td>
<td>1</td>
<td><a href="https://www.amazon.com/gp/product/B076HZFY3F/ref=ppx_yo_dt_b_asin_title_o07_s00?ie=UTF8&amp;psc=1&amp;ref=joshdmoore.com">Link</a></td>
</tr>
<tr>
<td>Heatsinks</td>
<td>1</td>
<td><a href="https://www.amazon.com/gp/product/B014KKY3KI/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&amp;psc=1&amp;ref=joshdmoore.com">Link</a></td>
</tr>
<tr>
<td>0.5 ft POE Cat 6 cables</td>
<td>1</td>
<td><a href="https://www.amazon.com/GearIT-24-Pack-Ethernet-Cable-Snagless/dp/B00XIFJSEI/ref=sr_1_4?crid=1HT8921DDEME4&amp;keywords=.5+foot+poe+cat+6&amp;qid=1733634141&amp;sprefix=5+foot+poe+cat+6%2Caps%2C135&amp;sr=8-4&amp;ref=joshdmoore.com">Link</a></td>
</tr>
</tbody>
</table>
<p><strong>Note:</strong> You might also need some standoffs, like these: <a href="https://www.amazon.com/Standoffs-Assortment-Threaded-Circuit-Motherboard/dp/B0BZYTC581/ref=sr_1_2?crid=26LVN20ZB4J9J&amp;keywords=sbc+standoffs&amp;qid=1733633954&amp;sprefix=sbc+standoffs%2Caps%2C126&amp;sr=8-2&amp;ref=joshdmoore.com">Link</a>.</p>
<p>Now comes the time for assembly. The case is built for Raspberry Pis, so the Rock 4SE fits fairly well. Really, the only issue is that the case is designed for a standard 2.5&quot; SSD on the back, not an M.2 SSD board. It is also a bit tight when finally assembled, it could use just a little more room to swap boards out easily. With that in mind, I proceeded to mount the Rock 4SE board with the M.2 extension attached so that I could measure where to put standoffs on the backside of the sled.</p><p>With the Rock 4SE board attached and the M.2 extension connected but free-hanging, I wrapped the M.2 extension around to the back and marked where to drill holes with a pencil. I don&#x2019;t know exactly what size holes I drilled in the sled to mount the standoffs for the M.2 extension board because I have a whole box of drill bits and just used the one that was closest. Then I drilled the holes that I had marked.</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2024/12/IMG_3624.jpeg" class="kg-image" alt="Home Automation Kubernetes Cluster" loading="lazy" width="2000" height="2667" srcset="https://joshdmoore.com/content/images/size/w600/2024/12/IMG_3624.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2024/12/IMG_3624.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2024/12/IMG_3624.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2024/12/IMG_3624.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2024/12/IMG_3625.jpeg" class="kg-image" alt="Home Automation Kubernetes Cluster" loading="lazy" width="2000" height="2667" srcset="https://joshdmoore.com/content/images/size/w600/2024/12/IMG_3625.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2024/12/IMG_3625.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2024/12/IMG_3625.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2024/12/IMG_3625.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><p>Once those holes are drilled, assembly can begin. First, we need to install some standoffs for the M.2 extension board on the back of the sled. 
I&#x2019;m not sure of the exact length of the standoffs because I had them in a kit that wasn&#x2019;t labeled, but they were just slightly shorter than the lip of the sled, like this.</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2024/12/IMG_3616.jpeg" class="kg-image" alt="Home Automation Kubernetes Cluster" loading="lazy" width="2000" height="1500" srcset="https://joshdmoore.com/content/images/size/w600/2024/12/IMG_3616.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2024/12/IMG_3616.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2024/12/IMG_3616.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2024/12/IMG_3616.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><p>I used nuts on the back because the thickness of the sled is not enough to hold the standoffs on its own.</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2024/12/IMG_3617.jpeg" class="kg-image" alt="Home Automation Kubernetes Cluster" loading="lazy" width="2000" height="1500" srcset="https://joshdmoore.com/content/images/size/w600/2024/12/IMG_3617.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2024/12/IMG_3617.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2024/12/IMG_3617.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2024/12/IMG_3617.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><p>Now for the sled assembly. Start by securing the Rock 4SE board to the front of the sled with the M.2 board connected but hanging freely. Once you have the Rock 4SE board secured, you can then connect the M.2 extension to the back of the sled on the standoffs that were installed. After that, you can put the POE hat on the Rock 4SE. It should look something like this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2024/12/IMG_3618.jpeg" class="kg-image" alt="Home Automation Kubernetes Cluster" loading="lazy" width="2000" height="1500" srcset="https://joshdmoore.com/content/images/size/w600/2024/12/IMG_3618.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2024/12/IMG_3618.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2024/12/IMG_3618.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2024/12/IMG_3618.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2024/12/IMG_3619.jpeg" class="kg-image" alt="Home Automation Kubernetes Cluster" loading="lazy" width="2000" height="1500" srcset="https://joshdmoore.com/content/images/size/w600/2024/12/IMG_3619.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2024/12/IMG_3619.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2024/12/IMG_3619.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2024/12/IMG_3619.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2024/12/IMG_3623.jpeg" class="kg-image" alt="Home Automation Kubernetes Cluster" loading="lazy" width="2000" height="1500" srcset="https://joshdmoore.com/content/images/size/w600/2024/12/IMG_3623.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2024/12/IMG_3623.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2024/12/IMG_3623.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2024/12/IMG_3623.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><p>Build that 4 times and you 
will have a fully assembled K8s cluster similar to this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2024/12/IMG_3641.jpeg" class="kg-image" alt="Home Automation Kubernetes Cluster" loading="lazy" width="2000" height="1500" srcset="https://joshdmoore.com/content/images/size/w600/2024/12/IMG_3641.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2024/12/IMG_3641.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2024/12/IMG_3641.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2024/12/IMG_3641.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><p>In conclusion, the hardware assembly for my home Kubernetes cluster is now complete, and the setup is coming together nicely. In the next article, I&#x2019;ll dive into the software installation process, ensuring everything is configured properly and running smoothly. Stay tuned for more details, and thanks for following along on this journey! </p>]]></content:encoded></item><item><title><![CDATA[Home Assistant Touchscreen]]></title><description><![CDATA[<p>One of the things that I desired to create early on for my <a href="https://www.home-assistant.io/?ref=joshdmoore.com">Home Assistant</a> installation was a touchscreen control.  I wanted to mount this touchscreen somewhere centrally in my house so that the whole family could easily interface with the smart home. After looking at many different touchscreens available</p>]]></description><link>https://joshdmoore.com/home-assistant-touch-screen/</link><guid isPermaLink="false">6577c2cdf625a900017a8373</guid><category><![CDATA[Home Automation]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Fri, 15 Dec 2023 05:52:49 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2023/12/shutterstock_2273812327.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2023/12/shutterstock_2273812327.jpg" alt="Home Assistant Touchscreen"><p>One of the things that I desired to create early on for my <a href="https://www.home-assistant.io/?ref=joshdmoore.com">Home Assistant</a> installation was a touchscreen control.  I wanted to mount this touchscreen somewhere centrally in my house so that the whole family could easily interface with the smart home. After looking at many different touchscreens available online and not knowing exactly what I wanted, I was conversing with a friend of mine. I was telling him about this next home automation project I was working on when he mentioned that he had just removed a touchscreen from a kiosk he was servicing, and it just happened to still be in working order. Voil&#xE0;, I had found my touchscreen.</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/12/IMG_3154.jpeg" class="kg-image" alt="Home Assistant Touchscreen" loading="lazy" width="2000" height="3352" srcset="https://joshdmoore.com/content/images/size/w600/2023/12/IMG_3154.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2023/12/IMG_3154.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2023/12/IMG_3154.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2023/12/IMG_3154.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><p>This is an EFFINET 27&apos;&apos; TFT LCD touchscreen monitor. You can find similar monitors on <a href="https://www.ebay.com/itm/325303421700?ref=joshdmoore.com">eBay</a>. They are typically used in the gaming and kiosk industries. 
The link above goes to one that sold on eBay. I was easily able to mount this to my wall with an LCD TV wall mount available at most electronics stores.</p><p>Problem number one was: how do I get my Home Assistant displayed on this wonderful monitor in my possession? Luckily, with an easy Google search, I realized I could use one of my gifted Raspberry Pi 3s to do exactly that. There is a great OS out there for Raspberry Pis called <a href="https://github.com/guysoft/FullPageOS?ref=joshdmoore.com">FullPageOS</a>. Formatting an SD card for FullPageOS is very easy with the <a href="https://www.raspberrypi.com/software/?ref=joshdmoore.com">Raspberry Pi Imager</a>. There is very good documentation on how to set up and configure the OS to load the webpage of your choice. In this instance, that webpage is our local Home Assistant. I won&apos;t go into a lot of detail about that because it is very well documented, and we have other problems to solve.</p><p>Problem number two was that I wanted to turn this monitor lengthwise. So, I had to figure out how to rotate the display 90 degrees. To make this even more difficult, since it&#x2019;s a touchscreen, I also had to figure out how to rotate the touch interface 90 degrees. I did lots of Googling this time. There are many different ways to accomplish this through Linux commands, but I was lucky enough to stumble across a script that made it really easy. I have tried recently to find where I found this script so that I could give the author proper credit. Unfortunately, I was not able to find the posting for this particular script. The script references the gist and author it was created from, but not the author who made the modifications. I will post the script here and publish it on my <a href="https://github.com/orgs/bytecode-tech/dashboard?ref=joshdmoore.com">GitHub</a>, but if the original author ever wants credit for this, I will gladly include that if they email me.</p><pre><code class="language-bash">#!/bin/bash
#  Due to the new display driver in the pi 4 the /boot/config.txt method for screen rotation doesn&apos;t work this is a work around for the time being. 
#  Taken from this gist https://gist.github.com/mildmojo/48e9025070a2ba40795c#gistcomment-2694429
#  Adds the ability to rotate the screen with a single command or in a user created addition at build time
#
if [ -z &quot;$1&quot; ] ; then
  echo &quot;Usage: $0 [normal|inverted|left|right]&quot;
  echo &quot; &quot;
  exit 1
fi

function do_rotate
{
  xrandr --output $1 --rotate $2

  TRANSFORM=&apos;Coordinate Transformation Matrix&apos;

  POINTERS=`xinput | grep &apos;slave  pointer&apos;`
  POINTERS=`echo $POINTERS | sed s/&#x21B3;\ /\$/g`
  POINTERS=`echo $POINTERS | sed s/\ id=/\@/g`
  POINTERS=`echo $POINTERS | sed s/\ \\\[slave\ pointer/\#/g`
  iIndex=2
  POINTER=`echo $POINTERS | cut -d &quot;@&quot; -f $iIndex | cut -d &quot;#&quot; -f 1`
  while [ &quot;$POINTER&quot; != &quot;&quot; ] ; do
    POINTER=`echo $POINTERS | cut -d &quot;@&quot; -f $iIndex | cut -d &quot;#&quot; -f 1`
    POINTERNAME=`echo $POINTERS | cut -d &quot;$&quot; -f $iIndex | cut -d &quot;@&quot; -f 1`
    #if [ &quot;$POINTER&quot; != &quot;&quot; ] &amp;&amp; [[ $POINTERNAME = *&quot;TouchPad&quot;* ]]; then    # ==&gt; uncomment to transform only touchpads
    #if [ &quot;$POINTER&quot; != &quot;&quot; ] &amp;&amp; [[ $POINTERNAME = *&quot;TrackPoint&quot;* ]]; then  # ==&gt; uncomment to transform only trackpoints
    #if [ &quot;$POINTER&quot; != &quot;&quot; ] &amp;&amp; [[ $POINTERNAME = *&quot;Digitizer&quot;* ]]; then   # ==&gt; uncomment to transform only digitizers (touch)
    #if [ &quot;$POINTER&quot; != &quot;&quot; ] &amp;&amp; [[ $POINTERNAME = *&quot;MOUSE&quot;* ]]; then       # ==&gt; uncomment to transform only optical mice
    if [ &quot;$POINTER&quot; != &quot;&quot; ] ; then                                         # ==&gt; uncomment to transform all pointer devices
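        # each case below applies the 3x3 coordinate transformation matrix
        # that remaps touch input to match the chosen screen rotation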
        case &quot;$2&quot; in
            normal)
              [ ! -z &quot;$POINTER&quot; ]    &amp;&amp; xinput set-prop &quot;$POINTER&quot; &quot;$TRANSFORM&quot; 1 0 0 0 1 0 0 0 1
              ;;
            inverted)
              [ ! -z &quot;$POINTER&quot; ]    &amp;&amp; xinput set-prop &quot;$POINTER&quot; &quot;$TRANSFORM&quot; -1 0 1 0 -1 1 0 0 1
              ;;
            left)
              [ ! -z &quot;$POINTER&quot; ]    &amp;&amp; xinput set-prop &quot;$POINTER&quot; &quot;$TRANSFORM&quot; 0 -1 1 1 0 0 0 0 1
              ;;
            right)
              [ ! -z &quot;$POINTER&quot; ]    &amp;&amp; xinput set-prop &quot;$POINTER&quot; &quot;$TRANSFORM&quot; 0 1 0 -1 0 1 0 0 1
              ;;
        esac      
    fi
    iIndex=$[$iIndex+1]
  done
}

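# detect the primary display; if none is marked primary, fall back to the first connected output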
XDISPLAY=`xrandr --current | grep primary | sed -e &apos;s/ .*//g&apos;`
if [ &quot;$XDISPLAY&quot; == &quot;&quot; ] || [ &quot;$XDISPLAY&quot; == &quot; &quot; ] ; then
  XDISPLAY=`xrandr --current | grep connected | sed -e &apos;s/ .*//g&apos; | head -1`
fi

do_rotate $XDISPLAY $1</code></pre><p>Now that I had an easy way to rotate the display and interface, I had to find a way to make that happen when FullPageOS starts up. Luckily, there is a scripts directory in the home directory of FullPageOS located at /home/pi/scripts. This scripts directory has a file called startup_gui. This file configures and runs all the scripts needed by FullPageOS. To ensure that the screen is rotated the direction I want, all I had to do was create a file in the scripts directory for our script that rotates the display. I created a file called /home/pi/scripts/rotate.sh.</p><pre><code class="language-bash">touch /home/pi/scripts/rotate.sh</code></pre><p>Edit this file with your favorite editor and add the previous script.</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">Don&apos;t forget to make this script executable!!</div></div><pre><code class="language-bash"> chmod +x rotate.sh </code></pre><p>Before you can execute this script, you will also need to install the xinput dependency and export your display as follows.</p><pre><code class="language-bash">sudo apt-get install xinput
export DISPLAY=:0</code></pre><p>The last thing to do is make sure that this script is run at startup. This is easily done by editing /home/pi/scripts/startup_gui. The last line of this file runs the binary to start FullPageOS. I simply inserted a line right before that to run my rotate command.</p><pre><code class="language-bash">...
/home/pi/scripts/rotate.sh right

/home/pi/scripts/run_onepageos</code></pre><p>I rotated my screen 90 degrees to the right, but you can rotate it as you please.</p><p>Here are a few more pictures of the monitor and my installation. </p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/12/IMG_3153.jpeg" class="kg-image" alt="Home Assistant Touchscreen" loading="lazy" width="2000" height="3367" srcset="https://joshdmoore.com/content/images/size/w600/2023/12/IMG_3153.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2023/12/IMG_3153.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2023/12/IMG_3153.jpeg 1600w, https://joshdmoore.com/content/images/2023/12/IMG_3153.jpeg 2393w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/12/IMG_3153-1.jpeg" class="kg-image" alt="Home Assistant Touchscreen" loading="lazy" width="2000" height="3367" srcset="https://joshdmoore.com/content/images/size/w600/2023/12/IMG_3153-1.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2023/12/IMG_3153-1.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2023/12/IMG_3153-1.jpeg 1600w, https://joshdmoore.com/content/images/2023/12/IMG_3153-1.jpeg 2393w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/12/IMG_3155-1.jpeg" class="kg-image" alt="Home Assistant Touchscreen" loading="lazy" width="2000" height="1500" srcset="https://joshdmoore.com/content/images/size/w600/2023/12/IMG_3155-1.jpeg 600w, https://joshdmoore.com/content/images/size/w1000/2023/12/IMG_3155-1.jpeg 1000w, https://joshdmoore.com/content/images/size/w1600/2023/12/IMG_3155-1.jpeg 1600w, https://joshdmoore.com/content/images/size/w2400/2023/12/IMG_3155-1.jpeg 2400w" sizes="(min-width: 720px) 720px"></figure><p>I still need to add a plug behind the monitor and frame it with some nice trim. Hopefully, I will have time to do that soon. The Raspberry Pi is attached to the back with some double-sided industrial tape.</p><p>I hope this article on how I accomplished my Home Assistant touchscreen control is helpful!</p>]]></content:encoded></item><item><title><![CDATA[Home Automation - journey to my blog]]></title><description><![CDATA[<p>When I embarked on the journey of writing this blog, it was more than just a chronicle of my side projects; it was a pathway to share my passion for integrating technology into everyday life. The initial articles of this blog detailed the complexities of setting up Kubernetes, a foundational</p>]]></description><link>https://joshdmoore.com/my-journey-to-a-blog/</link><guid isPermaLink="false">65669803f625a900017a82d5</guid><category><![CDATA[Home Automation]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Wed, 06 Dec 2023 02:09:34 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2023/12/shutterstock_2064549800.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2023/12/shutterstock_2064549800.jpg" alt="Home Automation - journey to my blog"><p>When I embarked on the journey of writing this blog, it was more than just a chronicle of my side projects; it was a pathway to share my passion for integrating technology into everyday life. 
The initial articles of this blog detailed the complexities of setting up Kubernetes, a foundational step that paved the way for my real passion: home automation.</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/11/PNG-image-429C-8C1B-0A-0.png" class="kg-image" alt="Home Automation - journey to my blog" loading="lazy" width="398" height="398"></figure><p>Living in a country home, I ventured into home automation out of a need for security and energy savings. The idea of keeping my outdoor lights on throughout the night always felt like a safeguard against the unseen &apos;boogeymen&apos; lurking in the dark. However, letting these lights burn during the daylight hours was a clear waste of energy and resources. I yearned for a solution that offered more sophistication and control than the standard dusk-to-dawn lights could provide. It wasn&#x2019;t just about turning the lights on at night and off during the day; I wanted a system that adapted to my lifestyle and offered me the flexibility and control I desired.</p><p>The heart of my home automation system is Home Assistant, chosen for its Python-based architecture and excellent performance on Raspberry Pi devices. This platform is not just a tool; it&apos;s an entire operating system designed for Raspberry Pi, ensuring seamless updates and a robust user experience. The open-source nature of Home Assistant, combined with its extensive community-driven integrations, allows for a high degree of customization. This flexibility is crucial for tailoring the system to fit specific needs and preferences.</p><p>To get started with Home Assistant on a Raspberry Pi, a few essential items are needed: a Raspberry Pi 4 or 3 Model B, a suitable power supply, a Micro SD Card (32 GB or larger), an SD card reader, and an Ethernet cable for a reliable connection during installation. I used a Raspberry Pi 3 because I was gifted some that were being cycled out and it has worked great. The setup process is straightforward and well-documented on the <a href="https://www.home-assistant.io/installation/raspberrypi?ref=joshdmoore.com">Home Assistant website</a>. It involves preparing the Raspberry Pi, writing the Home Assistant OS image to the SD card, starting up the Raspberry Pi, and accessing Home Assistant through a browser. While simple, the process might require troubleshooting, especially if the Home Assistant page does not show up after installation.</p><p>The benefits of this journey into home automation are numerous. The most tangible is the cost savings &#x2013; managing energy usage more intelligently means lower bills and a reduced environmental impact. More importantly, it&apos;s about the comfort and convenience of a home that understands and responds to your unique lifestyle. A home that doesn&apos;t just function smartly but also intuitively.</p><p>As I delve deeper into the world of home automation, future articles will explore specific projects and implementations. I&apos;ll share the practical applications, the challenges encountered, and the innovative solutions that have emerged. 
This journey is not just about technology; it&apos;s about creating a living space that is as intelligent as it is comfortable.</p>]]></content:encoded></item><item><title><![CDATA[Building an application platform with LKE and Argo CD - Part 2]]></title><description><![CDATA[<p>In my first part, I introduced how to set up a basic Linode Kubernetes(LKE) cluster using Argo CD autopilot and configuring Traefik ingress and ExternalDNS for routing using Linode Domains. &#xA0;In this article, we are going to add automatic HTTPS for the ingress of our applications. &#xA0;To</p>]]></description><link>https://joshdmoore.com/building-part-2/</link><guid isPermaLink="false">63fec6df285d54000110caa1</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Platform Engineering]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Thu, 18 May 2023 06:08:25 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2023/03/linode-argo-2.png" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2023/03/linode-argo-2.png" alt="Building an application platform with LKE and Argo CD - Part 2"><p>In my first part, I introduced how to set up a basic Linode Kubernetes (LKE) cluster using Argo CD Autopilot and configuring Traefik ingress and ExternalDNS for routing using Linode Domains. &#xA0;In this article, we are going to add automatic HTTPS for the ingress of our applications. &#xA0;To do this we will be using <a href="https://cert-manager.io/?ref=joshdmoore.com">cert-manager</a> in conjunction with <a href="https://letsencrypt.org/?ref=joshdmoore.com">Let&apos;s Encrypt</a>. &#xA0;</p><h3 id="certificates-with-cert-manger">Certificates with cert-manager</h3><p>Kubernetes cert-manager adds the capability to manage certificates for your services in an automated way. &#xA0;The easiest and cheapest way to get certificates for your services is to configure cert-manager to get certificates from <a href="https://letsencrypt.org/?ref=joshdmoore.com">Let&apos;s Encrypt</a>. &#xA0;Let&apos;s Encrypt provides free trusted certificates to websites to make encryption easy and cheap. &#xA0;</p><p>Cert-manager uses the ACME protocol to verify domains. &#xA0;You can verify domains in two different ways:</p><ul><li>HTTP request to a well-known URL</li><li>DNS record created and verified</li></ul><p>We will be using the second mechanism, DNS record created and verified. &#xA0;This mechanism has the advantage of not requiring a configured, working ingress before domain ownership can be verified. &#xA0;Cert-manager integrates with many DNS providers. &#xA0;However, they DO NOT provide native integration with Linode Domains. &#xA0;They do allow third parties to integrate through a webhook that is provided. &#xA0;This is the mechanism that we will use and is documented on their site <a href="https://cert-manager.io/docs/configuration/acme/dns01/?ref=joshdmoore.com#webhook">here</a> with the implementation of the webhook <a href="https://github.com/slicen/cert-manager-webhook-linode?ref=joshdmoore.com">here</a>.</p><p>As we did in the previous article, we will need to add an application spec to Argo CD and the corresponding manifest for cert-manager. &#xA0;In this instance, we are going to be loading a helm chart that will merge the cert-manager dependency with the <a href="https://github.com/slicen/cert-manager-webhook-linode?ref=joshdmoore.com">cert-manager-webhook-linode</a> dependency. 
</p><p>First, let&apos;s look at the ArgoCD application. &#xA0;Again, this will be created in the bootstrap dir and will look something like this.</p><pre><code class="language-yaml">#./bootstrap/cert-manager.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.kubernetes.io/managed-by: argocd-autopilot
    app.kubernetes.io/name: cert-manager
  name: cert-manager
  namespace: argocd
spec:
  destination:
    namespace: cert-manager
    server: https://kubernetes.default.svc
  ignoreDifferences:
  - group: argoproj.io
    jsonPointers:
    - /status
    kind: Application
  project: default
  source:
    path: bootstrap/cert-manager
    repoURL: https://github.com/owner/repo.git
  syncPolicy:
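    # automated sync keeps the cluster reconciled with git:
    # prune deletes resources removed from git, selfHeal reverts manual drift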
    automated:
      allowEmpty: true
      prune: true
      selfHeal: true
    syncOptions:
    - allowEmpty=true
    - CreateNamespace=true
status:
  health: {}
  summary: {}
  sync:
    comparedTo:
      destination: {}
      source:
        repoURL: &quot;&quot;
    status: &quot;&quot;</code></pre><p>In this file, you will need to change the source repo to match your git repo.</p><pre><code class="language-yaml">repoURL: https://github.com/owner/repo.git</code></pre><p>Next, we need to create a cert-manager directory to put the cert-manager HELM files in like this:</p><pre><code class="language-bash">mkdir ./bootstrap/cert-manager</code></pre><p>Then we need to create a master HELM chart to combine cert-manager and the Linode webhook.</p><pre><code class="language-yaml">#./bootstrap/cert-manager/Chart.yaml
apiVersion: v2
name: cert-manager
description: A Helm chart for Kubernetes

# A chart can be either an &apos;application&apos; or a &apos;library&apos; chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They&apos;re included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: &quot;1.16.0&quot;

dependencies:
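# cert-manager is pulled from the jetstack chart repository;
# the Linode webhook chart is vendored locally under ./chart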
- name: &quot;cert-manager&quot;
  version: 1.10.1
  repository: https://charts.jetstack.io
- name: &quot;cert-manager-webhook-linode&quot;
  version: 0.2.0
  repository: &quot;file://./chart/cert-manager-webhook-linode&quot;</code></pre><p>The main thing to notice in this file is that it simply merges 2 dependencies. &#xA0;It combines cert-manager and the cert-manager-webhook-linode.</p><p>Cert-manager is pulled from the jetstack helm chart repository. &#xA0;This makes things very easy to update. &#xA0;When jetstack releases a new version of the cert-manager helm chart, you simply edit this file and update the version of the cert-manager dependency.</p><p>Unfortunately, cert-manager-webhook-linode is not stored in a HELM repo and is only accessible on GitHub as a code repository. &#xA0;The only way to reference dependencies in HELM is to reference a repo or a subdirectory. &#xA0;For this reason, we will need to copy version v0.2.0 of <a href="https://github.com/slicen/cert-manager-webhook-linode/releases?ref=joshdmoore.com">cert-manager-webhook-linode</a> to our ./bootstrap/cert-manager/chart/cert-manager-webhook-linode dir.</p><pre><code class="language-ssh">mkdir ./bootstrap/cert-manager/chart</code></pre><p>Now copy the cert-manager-webhook-linode dir from your downloaded resource to the chart dir.</p><p>Unfortunately, v0.2.0 of the cert-manager-webhook-linode has a bit of an error in its helm chart. &#xA0;We will need to fix this in order for the container to launch correctly. &#xA0;Edit the file ./bootstrap/cert-manager/chart/cert-manager-webhook-linode/values.yaml and comment out the &quot;logLevel: 6&quot; line. &#xA0;The deployment section will then look like this.</p><pre><code class="language-yaml"> #./bootstrap/cert-manager/chart/cert-manager-webhook-linode/values.yaml 
deployment:
  secretName: linode-credentials
  secretKey: token
  # logLevel: 6</code></pre><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x1F937;&#x200D;&#x2642;&#xFE0F;</div><div class="kg-callout-text">I have not had a chance to dig into the code behind the cert-manager-webhook-linode container to see why it does not accept the logLevel argument anymore, but I will when I have some time. For this article, this is the best solution I could find.</div></div><p>Next, we need to create a values file to set the variables in our dependent HELM charts.</p><pre><code class="language-yaml">#./bootstrap/cert-manager/values.yaml
chartVersion:
keyID:

cert-manager:
  installCRDs: true

cert-manager-webhook-linode:
  api:
    groupName: acme.&lt;your.domain&gt;
  image:
    tag: v0.2.0
  deployment:
    secretName: linode-api-token</code></pre><p>There are a few variables being set in this values file. &#xA0;First is the cert-manager CRD install. &#xA0;The CRDs will be needed for our use cases.</p><pre><code class="language-yaml">cert-manager:
  installCRDs: true</code></pre><p>Next, a few variables for the cert-manager-webhook-linode process need to be set. &#xA0;The api.groupName needs to be filled in with your domain name. &#xA0;Then the image needs to be set to the same version as the download: v0.2.0. Also, the deployment secret needs to be set to our Linode token name: linode-api-token.</p><pre><code class="language-yaml">cert-manager-webhook-linode:
  api:
    groupName: acme.&lt;your.domain&gt;
  image:
    tag: v0.2.0
  deployment:
    secretName: linode-api-token</code></pre><h3 id="api-token">API Token</h3><p>By default, secrets are only accessible within the same namespace in which they are created. This is because secrets are stored as Kubernetes API objects, which are scoped to a particular namespace. This means that secrets cannot be accessed by pods or services in other namespaces, even if those namespaces are part of the same cluster. Because of this, we need to copy our Linode API token to the cert-manager namespace.</p><pre><code class="language-ssh">kubectl create namespace cert-manager
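# copy the token secret into cert-manager by stripping its namespace field and re-applying it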
kubectl get secret linode-api-token --namespace=external-dns -oyaml | grep -v &apos;^\s*namespace:\s&apos; | kubectl apply --namespace=cert-manager -f -</code></pre><h3 id="cluster-issuers">Cluster Issuers</h3><p>For cert-manager to work properly with Let&apos;s Encrypt, we need to configure a cert-manager ClusterIssuer. &#xA0;We are going to create more than one ClusterIssuer to allow testing our setup before using production Let&apos;s Encrypt certificates. This is useful because there is a rate limit on the production Let&apos;s Encrypt certificate issuer. </p><p>First, let&apos;s create the Let&apos;s Encrypt staging file. &#xA0;This points to the Let&apos;s Encrypt staging server. &#xA0;Its certificates are not trusted, but the server is not rate-limited. &#xA0;Use this configuration when you are testing sequences that will cause the issuance of more certificates than are allowed for Let&apos;s Encrypt trusted certificates.</p><pre><code class="language-yaml">#./bootstrap/cert-manager/templates/letsencrypt-staging.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: &lt;your@email&gt;
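    # staging endpoint: certificates are untrusted, but rate limits are generous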
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - dns01:
          webhook:
            solverName: linode
            groupName: acme.&lt;your.domain&gt;</code></pre><p>The next file to create is the Let&apos;s Encrypt production ClusterIssuer file. &#xA0;This is the configuration to use when you want to issue a production Let&apos;s Encrypt certificate.</p><pre><code class="language-yaml">#./bootstrap/cert-manager/templates/letsencrypt-prod.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: &lt;your@email&gt;
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
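      # dns01 challenges are solved by the cert-manager-webhook-linode webhook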
      - dns01:
          webhook:
            solverName: linode
            groupName: acme.&lt;your.domain&gt;</code></pre><p>In both of these files, you will need to update the groupName to match the groupName defined in the values file of the deployment.</p><pre><code class="language-yaml">groupName: acme.&lt;your.domain&gt;</code></pre><h3 id="update-our-ingress">Update our Ingress</h3><p>The next thing we need to do to manage ingress certificates automatically is open up the ingress SSL port. &#xA0;In our case, we are also going to automatically redirect all traffic to the appropriate SSL interface so that we can enforce secure traffic.</p><p>In my previous post, I showed how to configure Traefik to accept Kubernetes ingress traffic. &#xA0;We must add the following container arguments to our traefik configuration in the file ./traefik/traefik.yaml.</p><pre><code class="language-yaml">- --entrypoints.web.http.redirections.entrypoint.to=websecure
- --entrypoints.websecure.address=:443
- --entrypoints.websecure.http.tls</code></pre><p>We will need to open the SSL port on the container running the Traefik service. &#xA0;We will need to add the following to the container.ports section of the yaml.</p><pre><code class="language-yaml">- name: websecure
  containerPort: 443</code></pre><p>The SSL port for the Traefik service will also need to be opened for the ingress to work. &#xA0;This will require adding the following to the traefik-web-service LoadBalancer.</p><pre><code class="language-yaml">    - name: websecure
      targetPort: websecure
      port: 443</code></pre><p>The complete file will look like this.</p><pre><code class="language-yaml">./traefik/traefik.yaml</code></pre><pre><code class="language-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-account
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role

rules:
  - apiGroups:
      - &quot;&quot;
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role-binding

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-role
subjects:
  - kind: ServiceAccount
    name: traefik-account
    namespace: traefik
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  labels:
    app: traefik

spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
        - name: traefik
          image: traefik:v2.9
          args:
            - --api.insecure
            - --api.dashboard=true
            - --providers.kubernetesingress
            - --providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik-web-service
            - --entrypoints.web.address=:80
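            # redirect all plain HTTP traffic to the TLS entrypoint defined below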
            - --entrypoints.web.http.redirections.entrypoint.to=websecure
            - --entrypoints.websecure.address=:443
            - --entrypoints.websecure.http.tls
          ports:
            - name: web
              containerPort: 80
            - name: websecure
              containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service
spec:
  type: LoadBalancer
  ports:
    - name: web
      targetPort: web
      port: 80
    - name: websecure
      targetPort: websecure
      port: 443
  selector:
    app: traefik</code></pre><p>With all these edits in place, we will need to commit our code changes to our git repo like this:</p><pre><code class="language-bash">git add .
git commit -m &quot;adding cert-manager and updating traefik to force ssl&quot;
git push origin</code></pre><h2 id="lets-test-it">Let&apos;s test it!</h2><p>Let&apos;s update our whoami application to use SSL with a certificate from Let&apos;s Encrypt. &#xA0;We will use the test certificate store to make sure that everything is working correctly before we switch to the production certificate store. &#xA0;To do that we will need to update our whoami ingress in the ./apps/whoami/base/install.yaml file to add the following annotations.</p><pre><code class="language-yaml">  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    traefik.ingress.kubernetes.io/router.tls: &quot;true&quot;
    traefik.ingress.kubernetes.io/router.entrypoints: websecure</code></pre><p>We will also need to add TLS to the ingress definition.</p><pre><code class="language-yaml">  tls:
  - hosts:
      - whoami.mydomain.com
    secretName: secure-whoami-cert</code></pre><p>The full file will look like this:</p><pre><code class="language-yaml">#./apps/whoami/base/install.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami
  labels:
    app: whoami

spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - name: http
      port: 80

  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  annotations:
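    # external-dns publishes the DNS record, cert-manager issues the certificate,
    # and traefik routes the host on the websecure (TLS) entrypoint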
    external-dns.alpha.kubernetes.io/hostname: whoami.yourdomain.com
    cert-manager.io/cluster-issuer: letsencrypt-staging
    traefik.ingress.kubernetes.io/router.tls: &quot;true&quot;
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  tls:
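  # cert-manager stores the issued certificate in this secret for traefik to serve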
  - hosts:
      - whoami.yourdomain.com
    secretName: secure-whoami-cert
  rules:
  - host: whoami.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name:  whoami
            port:
              number: 80</code></pre><p>Commit the changes:</p><pre><code class="language-ssh">git add .
git commit -m &quot;update whoami ingress to use SSL&quot;
git push origin</code></pre><p>It may take a few seconds/minutes for the verification process and for the certificate to be issued. &#xA0; You can check on the process by watching the cert-manager application in the Argo CD UI. &#xA0;</p><p>You can also check the output for the web service using this curl command:</p><pre><code class="language-ssh">curl -k https://whoami.yourdomain.com</code></pre><p>Success!! You should have a secure whoami service. &#xA0;We have one problem, though: we are using the untrusted staging Let&apos;s Encrypt certificate. &#xA0;We need to make one last change to our ingress to use the trusted production Let&apos;s Encrypt certificate. We need to change the following line:</p><pre><code class="language-yaml">cert-manager.io/cluster-issuer: letsencrypt-staging</code></pre><p>To the following:</p><pre><code class="language-yaml">cert-manager.io/cluster-issuer: letsencrypt-prod</code></pre><p>Commit the change:</p><pre><code class="language-ssh">git add .
git commit -m &quot;update whoami ingress to use production let&apos;s encrypt&quot;
git push origin</code></pre><p>Yay!! We are finally done! &#xA0;You should now have cert-manager configured to provide your services with both staging and production certificates from Let&apos;s Encrypt. </p>]]></content:encoded></item><item><title><![CDATA[Building an application platform with LKE and Argo CD]]></title><description><![CDATA[<p></p><h3 id="tldr">TL;DR</h3><p>This article is intended to show how to set up an LKE cluster, bootstrap it with Argo CD Autopilot and install ExternalDNS and Traefik.</p><p>I have been a customer of Linode for a while now and being that I am a platform engineer at my day job, I</p>]]></description><link>https://joshdmoore.com/linode-argocd/</link><guid isPermaLink="false">63d87469285d54000110c55e</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Platform Engineering]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Thu, 02 Mar 2023 01:08:53 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2023/03/linode-argo.png" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2023/03/linode-argo.png" alt="Building an application platform with LKE and Argo CD"><h3 id="tldr">TL;DR</h3><p>This article is intended to show how to set up an LKE cluster, bootstrap it with Argo CD Autopilot and install ExternalDNS and Traefik.</p><p>I have been a customer of Linode for a while now, and being that I am a platform engineer at my day job, I wanted to build myself a platform on LKE (Linode Kubernetes) for several of my home projects and side hustles.  Doing what any engineer my age does first, I &quot;googled&quot; it.  <em>On a side note, I want to mention that it looks like Safari is switching to DuckDuckGo for its default search engine.  We can talk more about that later.</em> While I did find many good articles and videos in my search, I did not find one that centers around GitOps and, more specifically, Argo CD.  I am a huge fan of both GitOps and Argo CD, so I wanted to start my platform with those in mind.  </p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">Since there are many great articles out there on how to set up several of the components I am using, I will be referencing other articles for some of the basic setup.&#xA0;</div></div><p>Wanting to start my platform with Argo CD and having used Argo CD before, I knew they had a project called Argo CD Autopilot.  It is specifically intended for bootstrapping a new Kubernetes cluster with Argo CD and providing a good opinionated project code structure.  I wanted to give the Argo CD Autopilot project a try to see if it would help get me moving quicker.</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x26A0;&#xFE0F;</div><div class="kg-callout-text">As I was writing this article, <a href="https://www.akamai.com/newsroom/press-release/akamai-to-acquire-linode?ref=joshdmoore.com">Linode was purchased by Akamai</a>. Hopefully, this just means better things for the platform. However, I&apos;m still going to call them Linode because I like it better. &#x1F603;</div></div><h2 id="deploying-your-lke-cluster">Deploying your LKE cluster</h2><p>Linode has many good docs and guides.  They have several on how to set up your LKE cluster and configure your local kubectl config.  I will leave that up to them to explain as they will do a much better job than I will.  
</p><p>I used the following article, but they have many others as well: <a href="https://www.linode.com/docs/guides/lke-continuous-deployment-part-3?ref=joshdmoore.com">https://www.linode.com/docs/guides/lke-continuous-deployment-part-3</a></p><p>For the rest of this article, I will assume you have a running LKE cluster.  The size of the cluster is not important as this should run on their minimum-sized cluster.</p><h2 id="bootstrapping-with-argo-cd-autopilot">Bootstrapping with Argo CD Autopilot</h2><p>Next, we will want to get our new Kubernetes cluster started right by bootstrapping all of the applications with Argo CD.  Argo CD has a project called Autopilot with the specific goal of making GitOps easier.  </p><p>Here is a link to their homepage: <a href="https://argocd-autopilot.readthedocs.io/en/stable/?ref=joshdmoore.com">https://argocd-autopilot.readthedocs.io/en/stable/</a></p><h3 id="to-get-started-using-argo-cd-autopilot-you-will-need">To get started using Argo CD Autopilot you will need:</h3>
<ul>
<li>a git repo (I use <a href="https://github.com/?ref=joshdmoore.com">GitHub</a>)</li>
<li>a <a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token?ref=joshdmoore.com">token</a> for your git repo</li>
<li>the argocd-autopilot <a href="https://argocd-autopilot.readthedocs.io/en/stable/Installation-Guide/?ref=joshdmoore.com">command</a> for your OS installed</li>
<li>a kubernetes cluster (In our case <a href="https://www.linode.com/docs/products/compute/kubernetes/get-started/?ref=joshdmoore.com">LKE</a>)</li>
<li>kubectl <a href="https://www.containiq.com/post/kubectl-config-set-context-tutorial-and-best-practices?ref=joshdmoore.com">configured</a> to connect to your kubernetes cluster</li>
</ul>
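<p>Before running the bootstrap, it is worth a quick sanity check that kubectl is pointed at the intended cluster and that the autopilot CLI is installed. Something like this will confirm both:</p><pre><code class="language-bash"># confirm kubectl is talking to your LKE cluster
kubectl config current-context
kubectl get nodes

# confirm the argocd-autopilot CLI is on your path
argocd-autopilot version</code></pre>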
<p>Argo CD Autopilot does a good job of documenting their cli and we will be following their getting started guide here: <a href="https://argocd-autopilot.readthedocs.io/en/stable/Getting-Started/?ref=joshdmoore.com">https://argocd-autopilot.readthedocs.io/en/stable/Getting-Started/</a></p><p>First, you will need to export your git token:</p><pre><code class="language-bash">export GIT_TOKEN=ghp_PcZ...IP0
</code></pre><p>Next, you will need to export the git repo you would like to store your code in.</p><pre><code class="language-bash">export GIT_REPO=https://github.com/owner/name/some/relative/path
</code></pre><p>Then you will simply execute the bootstrap command to get your new LKE cluster up and running with Argo CD</p><pre><code class="language-bash">argocd-autopilot repo bootstrap
</code></pre><p>Congratulations!! You should have Argo CD running on your cluster and connected to your git repository to deploy automatically based on your git commits.  The code that was generated will be pushed to the git repository that was specified with the GIT_REPO environment variable. You should be able to connect to the Argo CD UI using a local forward like this:</p><pre><code class="language-bash">kubectl port-forward -n argocd svc/argocd-server 8080:80</code></pre><p>Use a browser to access <a href="http://localhost:8080/?ref=joshdmoore.com">http://localhost:8080</a> and log in with the username <strong><em>admin</em></strong> and the password given during the bootstrap command to reach the Argo CD UI.</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/Screenshot-2023-02-17-at-7.15.41-PM.png" class="kg-image" alt="Building an application platform with LKE and Argo CD" loading="lazy" width="2000" height="1192" srcset="https://joshdmoore.com/content/images/size/w600/2023/03/Screenshot-2023-02-17-at-7.15.41-PM.png 600w, https://joshdmoore.com/content/images/size/w1000/2023/03/Screenshot-2023-02-17-at-7.15.41-PM.png 1000w, https://joshdmoore.com/content/images/size/w1600/2023/03/Screenshot-2023-02-17-at-7.15.41-PM.png 1600w, https://joshdmoore.com/content/images/size/w2400/2023/03/Screenshot-2023-02-17-at-7.15.41-PM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>If you missed the password at setup, you should be able to use this command to retrieve it:</p><pre><code class="language-bash">kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=&quot;{.data.password}&quot; | base64 -d</code></pre><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x1F937;&#x200D;&#x2642;&#xFE0F;</div><div class="kg-callout-text">When issuing the above command, if the token returned ends in a %, exclude the % from the password (the % is just your shell indicating there is no trailing newline).</div></div><p>If you look at the git repo you specified in the GIT_REPO export, you should have a directory structure that looks like this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/Screenshot-2023-02-17-at-2.49.13-PM.png" class="kg-image" alt="Building an application platform with LKE and Argo CD" loading="lazy" width="672" height="592" srcset="https://joshdmoore.com/content/images/size/w600/2023/03/Screenshot-2023-02-17-at-2.49.13-PM.png 600w, https://joshdmoore.com/content/images/2023/03/Screenshot-2023-02-17-at-2.49.13-PM.png 672w"></figure><p>Argo CD Autopilot gives a good starting code structure to organize your code.  It has the following base dirs:</p><ul><li>apps - This is where deployed application specifications live.</li><li>bootstrap - This is where all of the applications and manifests live to bootstrap the cluster with Argo CD, including Argo CD itself.</li><li>projects - This is where all the Argo CD projects are defined.</li></ul><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">You should clone your argocd git repo; it will be needed later.</div></div><pre><code class="language-bash">git clone git@github.com:owner/name/some/relative/path.git</code></pre><h2 id="adding-externaldns-to-the-mix">Adding ExternalDNS to the mix</h2><p>Argo CD and GitOps give us a great start, but it would be nice if every service that is started on our platform was able to be accessed by name.  
<h2 id="adding-externaldns-to-the-mix">Adding ExternalDNS to the mix</h2><p>Argo CD and GitOps give us a great start, but it would be nice if every service started on our platform could be accessed by name.  Enter <a href="https://github.com/kubernetes-sigs/external-dns?ref=joshdmoore.com">ExternalDNS</a>!  In their own words, &quot;ExternalDNS makes Kubernetes resources discoverable via public DNS servers. Like KubeDNS&quot;.  ExternalDNS has many integrations with DNS providers, but we are going to couple it with <a href="https://www.linode.com/docs/products/networking/dns-manager/?ref=joshdmoore.com">Linode Domains</a>.  Linode Domains is the DNS service Linode provides for managing DNS for your domains, and it&apos;s FREE!!! </p><p><strong>To get started we need the following:</strong></p><ul><li>A domain set up in <a href="https://www.linode.com/docs/products/tools/cloud-manager/guides/cloud-domains/">Linode Domains</a></li><li>A <a href="https://www.linode.com/docs/products/tools/api/guides/manage-api-tokens/?ref=joshdmoore.com">Linode API key</a></li></ul><p>The integration between Linode Domains and ExternalDNS is fairly well documented <a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/linode.md?ref=joshdmoore.com">here</a>, but I will repost the RBAC deployment as I made some changes.</p><pre><code class="language-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;services&quot;,&quot;endpoints&quot;,&quot;pods&quot;]
  verbs: [&quot;get&quot;,&quot;watch&quot;,&quot;list&quot;]
- apiGroups: [&quot;extensions&quot;,&quot;networking.k8s.io&quot;]
  resources: [&quot;ingresses&quot;]
  verbs: [&quot;get&quot;,&quot;watch&quot;,&quot;list&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;nodes&quot;]
  verbs: [&quot;list&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: external-dns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.13.2
        args:
        - --source=ingress
        - --source=service
        - --provider=linode
        - --domain-filter=example.com # (optional) limit to only example.com 
        - --txt-prefix=xdns-
        env:
        - name: LINODE_TOKEN
          valueFrom:
            secretKeyRef:
              name: linode-api-token
              key: token</code></pre><p>There are a few things I would like to note about my configuration as compared to the configuration in the Linode ExternalDNS example.  If you look at the &quot;args&quot; section of the spec you will notice that I have added 2 more args, the first of which is another <em>source</em> arg.</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">- --source=ingress</div></div><p>This is done so that when we start creating ingresses, the ExternalDNS controller will watch ingress definitions also.  I find this helpful so that I can place all my annotations related to ingress and DNS in one spot on the ingress definition.</p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">- --domain-filter=example.com # (optional) limit to only example.com</div></div><p>This option is in the example but I think it&apos;s very important to mention here.  If you are running multiple instances of ExternalDNS against a single DNS provider you will want to set this filter.  If you do not, the ExternalDNS instances will collide with each other and modify each other&apos;s DNS entries.  This can happen, for instance, if you are running multiple clusters. </p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">- --txt-prefix=xdns-</div></div><p>The last additional arg specifies a prefix for the TXT DNS records that are added.  This is used so that there will not be collisions if a CNAME record is created in Linode, because CNAME records and TXT records <strong>CAN NOT</strong> be named the same.  ExternalDNS uses some logic to decide whether to create a CNAME or an A record depending on whether it detects a load balancer (this is explained in this <a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/faq.md?ref=joshdmoore.com#im-using-an-elb-with-txt-registry-but-the-cname-record-clashes-with-the-txt-record-how-to-avoid-this">article</a>).  CNAME and TXT collisions showed up in my log files when I was originally doing the integration but have since gone away.  I believe they were related to <a href="https://github.com/kubernetes-sigs/external-dns/pull/2716?ref=joshdmoore.com">this issue</a>, which should have been fixed in version 0.12.2, but I was still seeing the issue.  I need to investigate more.</p>
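<p>Once ExternalDNS is deployed (we will do that with Argo CD in a moment), its logs are the quickest way to see record operations and any CNAME/TXT collisions like the ones I mentioned above.  A minimal check, assuming the deployment lands in the external-dns namespace as configured here:</p><pre><code class="language-bash"># Follow the ExternalDNS controller logs and watch for record create/update messages
kubectl -n external-dns logs deployment/external-dns -f</code></pre>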
<p>To deploy the ExternalDNS manifest we are going to use Argo CD.  To accomplish this we will need to add an Argo CD application specification, along with the ExternalDNS manifest we just looked at, to the bootstrap directory of our Argo CD repository.  The Argo CD Autopilot CLI has many handy features for creating projects and applications, but unfortunately it does not have one for creating bootstrap applications, so we will have to do this manually.</p><p>Create the following files in the locations specified.</p><pre><code class="language-yaml">#./bootstrap/external-dns.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.kubernetes.io/managed-by: argocd-autopilot
    app.kubernetes.io/name: external-dns
  name: external-dns
  namespace: argocd
spec:
  destination:
    namespace: external-dns
    server: https://kubernetes.default.svc
  ignoreDifferences:
  - group: argoproj.io
    jsonPointers:
    - /status
    kind: Application
  project: default
  source:
    path: bootstrap/external-dns
    repoURL: https://github.com/owner/repo.git
  syncPolicy:
    automated:
      allowEmpty: true
      prune: true
      selfHeal: true
    syncOptions:
    - allowEmpty=true
    - CreateNamespace=true
status:
  health: {}
  summary: {}
  sync:
    comparedTo:
      destination: {}
      source:
        repoURL: &quot;&quot;
    status: &quot;&quot;
</code></pre><p>In this file, you will need to change the source repo to match your git repo.</p><pre><code class="language-yaml">repoURL: https://github.com/owner/repo.git</code></pre><p>Then we will need to add the ExternalDNS manifest to a new directory we create under the bootstrap directory like this.</p><pre><code class="language-bash">mkdir ./bootstrap/external-dns</code></pre><pre><code class="language-yaml">#./bootstrap/external-dns/external-dns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;services&quot;,&quot;endpoints&quot;,&quot;pods&quot;]
  verbs: [&quot;get&quot;,&quot;watch&quot;,&quot;list&quot;]
- apiGroups: [&quot;extensions&quot;,&quot;networking.k8s.io&quot;]
  resources: [&quot;ingresses&quot;]
  verbs: [&quot;get&quot;,&quot;watch&quot;,&quot;list&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;nodes&quot;]
  verbs: [&quot;list&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: external-dns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.13.2
        args:
        - --source=ingress
        - --source=service
        - --provider=linode
        - --domain-filter=example.com # (optional) limit to only example.com 
        - --txt-prefix=xdns-
        env:
        - name: LINODE_TOKEN
          valueFrom:
            secretKeyRef:
              name: linode-api-token
              key: token
</code></pre><p>You will notice in the external-dns manifest we specified the <a href="https://www.linode.com/docs/products/tools/api/guides/manage-api-tokens/?ref=joshdmoore.com">Linode API token</a> needed by external-dns to access your Linode account like this:</p><pre><code class="language-yaml">env:
- name: LINODE_TOKEN
  valueFrom:
    secretKeyRef:
      name: linode-api-token
      key: token
</code></pre><p>The <em>valueFrom.secretKeyRef</em> field lets us tell Kubernetes to pull this env value from a secret key.  We will use this mechanism so that we do not have to commit our secret Linode token to source control. </p><div class="kg-card kg-callout-card kg-callout-card-grey"><div class="kg-callout-emoji">&#x26A0;&#xFE0F;</div><div class="kg-callout-text">NOTE: For now you will just need to create the secret manually. We will tackle external-secrets in a future post.</div></div><p>We will create our Linode secret using kubectl, connected to the appropriate Kubernetes context.  We will also create the namespace first.  Normally Argo CD would do this, but in this instance we need the secret in place before Argo CD has run its GitOps sync.</p><pre><code class="language-bash">kubectl create namespace external-dns
kubectl create secret generic linode-api-token -n external-dns --from-literal=token=&apos;token_goes_here&apos;
</code></pre>
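<p>It is worth a quick sanity check that the secret landed where ExternalDNS expects it, with the key name the deployment references.  A minimal check with plain kubectl:</p><pre><code class="language-bash"># Confirm the secret exists in the external-dns namespace and exposes a &quot;token&quot; key
kubectl -n external-dns describe secret linode-api-token</code></pre>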
<p>After this secret is created you are ready to commit the new files to the repository that Argo CD watches.  You can issue these commands from the repository you cloned previously. </p><pre><code class="language-bash">git add .
git commit -m &quot;adding externalDNS&quot;
git push origin</code></pre><p>Argo CD will see the commit and perform all of its GitOps duties.  Once deployed you should see something like this from <a href="http://localhost:8080/?ref=joshdmoore.com">localhost:8080</a> if you are still running the port-forward command to Argo CD&apos;s service.</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/Screenshot-2023-02-17-at-7.16.27-PM.png" class="kg-image" alt="Building an application platform with LKE and Argo CD" loading="lazy" width="2000" height="1192" srcset="https://joshdmoore.com/content/images/size/w600/2023/03/Screenshot-2023-02-17-at-7.16.27-PM.png 600w, https://joshdmoore.com/content/images/size/w1000/2023/03/Screenshot-2023-02-17-at-7.16.27-PM.png 1000w, https://joshdmoore.com/content/images/size/w1600/2023/03/Screenshot-2023-02-17-at-7.16.27-PM.png 1600w, https://joshdmoore.com/content/images/size/w2400/2023/03/Screenshot-2023-02-17-at-7.16.27-PM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>Congratulations!! You now have a working ExternalDNS.  </p>
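<p>If you prefer to verify from the command line instead of the UI, a quick check (assuming the manifests were synced into the external-dns namespace as configured above):</p><pre><code class="language-bash"># The deployment should be available and its pod Running
kubectl -n external-dns get deployment,pods</code></pre>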
<h2 id="adding-ingress">Adding Ingress</h2><p>The next thing needed for a well-functioning platform is an ingress implementation.  There are many Kubernetes ingress implementations out there, but I have come to like <a href="https://traefik.io/traefik/?ref=joshdmoore.com">Traefik</a>.</p><p>Implementing Traefik is fairly simple, and that is one of the main reasons I like using it.  Just like in our ExternalDNS implementation, we will need to add a few files to the Argo CD bootstrap directory.  First, we will add the application specification for Traefik, like so:</p><pre><code class="language-yaml">#./bootstrap/traefik.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.kubernetes.io/managed-by: argocd-autopilot
    app.kubernetes.io/name: traefik
  name: traefik
  namespace: argocd
spec:
  destination:
    namespace: traefik
    server: https://kubernetes.default.svc
  ignoreDifferences:
  - group: argoproj.io
    jsonPointers:
    - /status
    kind: Application
  project: default
  source:
    path: bootstrap/traefik
    repoURL: https://github.com/owner/repo.git
  syncPolicy:
    automated:
      allowEmpty: true
      prune: true
      selfHeal: true
    syncOptions:
    - allowEmpty=true
    - CreateNamespace=true
status:
  health: {}
  summary: {}
  sync:
    comparedTo:
      destination: {}
      source:
        repoURL: &quot;&quot;
    status: &quot;&quot;
</code></pre><p>Again, in this file you will need to change the source repo to match your git repo.</p><pre><code class="language-yaml">repoURL: https://github.com/owner/repo.git</code></pre><p>Next, we need to create a traefik directory to put the Traefik manifest in, like this:</p><pre><code class="language-bash">mkdir ./bootstrap/traefik</code></pre><p>Then we need to create the Traefik manifest.</p><pre><code class="language-yaml">#./bootstrap/traefik/traefik.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-account
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role

rules:
  - apiGroups:
      - &quot;&quot;
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-role-binding

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-role
subjects:
  - kind: ServiceAccount
    name: traefik-account
    namespace: traefik
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  labels:
    app: traefik

spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
        - name: traefik
          image: traefik:v2.9
          args:
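            # --api.insecure enables the Traefik API/dashboard without authentication;
            # fine for this lab setup, but lock it down before production use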
            - --api.insecure
            - --api.dashboard=true
            - --providers.kubernetesingress
            - --providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik-web-service
            - --entrypoints.web.address=:80
          ports:
            - name: web
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service
spec:
  type: LoadBalancer
  ports:
    - name: web
      targetPort: web
      port: 80
  selector:
    app: traefik</code></pre><p>The key things to notice in the traefik.yaml file are the args and the ports.  Let&apos;s talk about the args first.  Here are the ones we want to pay special attention to:</p><pre><code>- --providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik-web-service
- --entrypoints.web.address=:80
</code></pre>
<p>The &quot;publishedservice&quot; arg takes a namespace/name reference to the service that is responsible for publishing the ingress addresses.</p><p>The &quot;entrypoints&quot; args are used to enable access from the outside and associate it with a port.  For now we will just open up port 80; we will address SSL in a future post.  </p><p>Again, we will need to commit our code changes to our git repo like this:</p><pre><code class="language-bash">git add .
git commit -m &quot;adding Traefik ingress&quot;
git push origin</code></pre><p>Once you commit these files and Argo CD refreshes you should see something similar to this:</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/Screenshot-2023-02-17-at-10.33.28-PM.png" class="kg-image" alt="Building an application platform with LKE and Argo CD" loading="lazy" width="2000" height="1079" srcset="https://joshdmoore.com/content/images/size/w600/2023/03/Screenshot-2023-02-17-at-10.33.28-PM.png 600w, https://joshdmoore.com/content/images/size/w1000/2023/03/Screenshot-2023-02-17-at-10.33.28-PM.png 1000w, https://joshdmoore.com/content/images/size/w1600/2023/03/Screenshot-2023-02-17-at-10.33.28-PM.png 1600w, https://joshdmoore.com/content/images/size/w2400/2023/03/Screenshot-2023-02-17-at-10.33.28-PM.png 2400w" sizes="(min-width: 720px) 720px"></figure>
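<p>Before deploying an app, it is also handy to confirm that Linode provisioned a NodeBalancer for the Traefik service; the external IP is what your DNS records will ultimately point at.  A minimal check with plain kubectl:</p><pre><code class="language-bash"># The EXTERNAL-IP column should show the NodeBalancer address once provisioning finishes
kubectl -n traefik get svc traefik-web-service</code></pre>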
<h2 id="lets-deploy-an-app">Let&apos;s Deploy an App</h2><p>So that we can check all of our hard work, let&apos;s deploy a simple app using Argo CD.  As I said before, Argo CD Autopilot has a good CLI and we can use it to create our Argo CD project and application.  </p><p>Let&apos;s first create a project like this:</p><pre><code class="language-bash">argocd-autopilot project create test</code></pre><p>Next, we will need to create the application specification that Argo CD will use to manage the application.  Currently, Autopilot only supports creating applications from a Kustomization specification.  Some people see this as a limitation, but as mentioned in this <a href="https://github.com/argoproj-labs/argocd-autopilot/issues/38?ref=joshdmoore.com">issue</a>, Kustomize natively allows importing Helm charts if that is how the application is specified.  </p><p>The Argo CD Autopilot CLI needs a few things to create the application, one of which is the initial Kustomization file for your application.  It can read the Kustomization from a git repo or a local file system; we are going to use the local file system to make things easier.  Create the following files on your local system somewhere outside of the Argo CD Autopilot repo.</p><pre><code class="language-yaml">#kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml</code></pre><pre><code class="language-yaml">#deployment.yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami
  labels:
    app: whoami

spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  ports:
    - name: http
      port: 80

  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  annotations:
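    # ExternalDNS watches this annotation and creates the matching DNS record in Linode Domains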
    external-dns.alpha.kubernetes.io/hostname: whoami.yourdomain.com
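    # Bind this Ingress to the Traefik web entrypoint we opened on port 80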
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: whoami.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name:  whoami
            port:
              number: 80
</code></pre><p>A few things to mention here.  You will need to change:</p><pre><code class="language-yaml">whoami.yourdomain.com</code></pre><p>To a domain that you own and which is configured in Linode Domains.</p><p>Next, we will run the command to create the application in your Argo CD repo.</p><pre><code class="language-bash">argocd-autopilot app create whoami --app ./path/to/kustomization/dir --project test</code></pre><p>You should now be able to access http://whoami.yourdomain.com in your browser of choice. </p><h2 id="closing">Closing</h2><p>Yay!! You did it.  You should now have a platform that can:</p><ul><li>Deploy an application using GitOps</li><li>Automatically create DNS records</li><li>Automatically set up Ingress for a service</li></ul><p>In my next post, I will show how to add automatic SSL to the mix with cert-manager and Let&apos;s Encrypt.</p>]]></content:encoded></item><item><title><![CDATA[What on earth is going on here??]]></title><description><![CDATA[<p>Here is a bit of a back story about why I created a blog in 2023.</p><h2 id="tldr">TL;DR</h2><p>I&apos;m a tech junky with a computer science degree living in the country, full-time teleworking, who has been putting off creating a tech blog for 20 years.</p><p></p><p>Where do I</p>]]></description><link>https://joshdmoore.com/what-on-earth-is-going-on-here/</link><guid isPermaLink="false">63d2d8ce285d54000110c392</guid><category><![CDATA[About Me]]></category><dc:creator><![CDATA[Josh Moore]]></dc:creator><pubDate>Fri, 27 Jan 2023 14:20:12 GMT</pubDate><media:content url="https://joshdmoore.com/content/images/2023/03/IMG_2557-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://joshdmoore.com/content/images/2023/03/IMG_2557-1.png" alt="What on earth is going on here??"><p>Here is a bit of a back story about why I created a blog in 2023.</p><h2 id="tldr">TL;DR</h2><p>I&apos;m a tech junky with a computer science degree living in the country, full-time teleworking, who has been putting off creating a tech blog for 20 years.</p><p></p><p>Where do I start? &#xA0;First I guess I will start by telling you a little about me. &#xA0;I&apos;m a software engineer/devops engineer/platform engineer and just a general tech junky. I&apos;m in my mid-forties, which means that I spent my youth in the best decade ever, &#xA0;the 90s!!! &#xA0;I grew up in a small town somewhere in middle America, out in the country; hunting, camping, going to the lake, and shooting guns were the things I did for fun. </p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/1920px-nes-console-set.jpg" class="kg-image" alt="What on earth is going on here??" loading="lazy" width="1200" height="652" srcset="https://joshdmoore.com/content/images/size/w600/2023/03/1920px-nes-console-set.jpg 600w, https://joshdmoore.com/content/images/size/w1000/2023/03/1920px-nes-console-set.jpg 1000w, https://joshdmoore.com/content/images/2023/03/1920px-nes-console-set.jpg 1200w" sizes="(min-width: 720px) 720px"></figure><p>Even though I lived in the country I still grew up as a part of the original Nintendo generation, and I have always had a scientific mind. &#xA0;When I was around 12 years old I got my first computer for Christmas. &#xA0;I was so excited; some of my friends had a computer and I had wanted one for quite a while. It was a 286 and it was awesome! 
</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/IMG_2559-1.png" class="kg-image" alt="What on earth is going on here??" loading="lazy" width="398" height="398"></figure><p>I did my fair share of playing games as most kids did, but I was also very interested in making the computer do things I wanted it to do. &#xA0;One of the first things I did was to write a menu system for our 286. &#xA0;At the time Windows did not exist, so you were stuck using MS-DOS. &#xA0;My mom would take me to the library and I would check out programming books. Yes, I had to check out actual books; the internet was not a thing for the common man yet. &#xA0;With help from my library books, I was able to write a simple menu system that made it easier for my family to use the computer. &#xA0;It was simple: the user just typed in a number to get to one of the programs that were loaded on the system. &#xA0;That&apos;s all it took and I was hooked; I knew I wanted to be a software engineer or rock star. &#xA0;We will talk more about that in a future post, maybe.</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/IMG_2560.png" class="kg-image" alt="What on earth is going on here??" loading="lazy" width="398" height="398"></figure><p>Fast forward a few years and I was in college pursuing my software engineering degree at a mid-sized state college and downloading illegal music off of Napster, of course. I was working at a well-known internet company at the time doing tech support, and that would lead me to meet my future wife online. &#xA0;This was way before dating sites and the like, but we met in what would have been a &quot;chat room&quot; at the time. &#xA0;I graduated with my computer science degree and worked for a few start-ups that allowed me to move out of my small town and travel around the country. &#xA0;</p><p>A few years later, my now-wife and I were looking to move back home to the state we were from, and I was lucky enough to find a job at a government agency where the work seemed interesting. &#xA0;I never really thought I would stay at a company for a long time, as I like to move around and do different parts of the tech industry. &#xA0;Nor had I ever really thought about working for the government, what that really meant, or some of the connotations associated with it. &#xA0;However, fast-forward about 2 decades and it turns out I am still happily working for that same government agency. &#xA0;In my career at the agency, I have been fortunate enough to move around and be allowed to do many different interesting and cutting-edge things. &#xA0;I have been a software engineer, an application security SME, a DevOps engineer, and I am currently the lead platform engineer for our new Kubernetes platform. &#xA0;The agency I work for started letting IT people telework many years ago, and I was allowed to start teleworking full-time a few years before the 2020 covid lockdown. &#xA0;I take pride in the work that I do for the government and enjoy my job and the people I work with.</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/IMG_2561.png" class="kg-image" alt="What on earth is going on here??" loading="lazy" width="398" height="398"></figure><p>I always knew I would move back to the country, and a little over a decade ago my wife and I were able to purchase our dream property and build our dream home. 
&#xA0;Many of the people I work with are fascinated that I am a tech guy living on 20 acres somewhere out in the country, but I wouldn&apos;t have it any other way. &#xA0;I have used my tech knowledge to become a home automation junky, and I am always trying to automate or modernize some aspect of my family&apos;s daily living. &#xA0;</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/IMG_2562.png" class="kg-image" alt="What on earth is going on here??" loading="lazy" width="398" height="398"></figure><p>OK, long story short: I have always wanted to create a tech blog but never took the time to write one; notice I registered my joshdmoore.com domain in 2002. &#xA0;</p><!--kg-card-begin: markdown--><pre><code>Domain Name: JOSHDMOORE.COM
Registry Domain ID: 87627451_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.godaddy.com
Registrar URL: https://www.godaddy.com
Updated Date: 2022-06-21T14:37:59Z
Creation Date: 2002-06-17T21:17:45Z
Registrar Registration Expiration Date: 2024-06-17T21:17:45Z
Registrar: GoDaddy.com, LLC
</code></pre>
<!--kg-card-end: markdown--><p>My 2023 resolution is to do some of the things I have wanted to do but have not done for one reason or another. &#xA0;This will mainly be a tech blog but will cover other things such as country living and a few other aspects of my life. &#xA0;</p><figure class="kg-card kg-image-card"><img src="https://joshdmoore.com/content/images/2023/03/IMG_2563.png" class="kg-image" alt="What on earth is going on here??" loading="lazy" width="398" height="398"></figure>]]></content:encoded></item></channel></rss>