ci: gate x86 build until amd64 runner exists; ARM64 release self-sufficient
Some checks failed
CI / Go Tests (push) Successful in 2m30s
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Successful in 1m37s
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Successful in 2m0s
CI / Shellcheck (push) Failing after 10m50s
Release / Build x86_64 ISO + disk image (push) Blocked by required conditions
ARM64 Build / Build generic ARM64 disk image (push) Failing after 1h6m52s
Release / Test (push) Successful in 1m59s
Release / Build Binaries (linux-amd64) (push) Successful in 1m33s
Release / Build Binaries (linux-arm64) (push) Successful in 1m40s
Release / Build ARM64 disk image (push) Successful in 1h11m43s
Release / Publish Gitea Release (push) Successful in 3m1s

v0.3.1's first release.yaml run exposed two issues:

1. The `ubuntu-latest` label resolved to the Odroid (the only runner
   registered with that label), which is arm64. `apt-get install
   grub-efi-amd64-bin` then failed because ports.ubuntu.com only ships
   arm64 packages — the amd64 GRUB binaries don't exist in the arm64
   repo. Building x86 ISOs on an arm64 host requires either a native
   amd64 runner or qemu-user-static emulation; neither is set up.

2. The `arm64-linux:host` runner runs jobs directly on the Odroid host
   (no Docker), and actions/checkout@v4 is a JS action needing Node 20+
   in $PATH. The Odroid had no Node installed at all, so checkout failed.

Fixes:

- `build-iso-amd64` is gated with `if: false` and retargeted to
  `runs-on: amd64-linux`. The job stays in the workflow as a placeholder
  for when an amd64 runner is eventually registered; remove the
  `if: false` line at that point and it starts working.

- The `release` job no longer depends on `build-iso-amd64`, so the
  workflow completes with just the ARM64 image + Go binaries. It runs
  under `if: always()` with explicit `needs.<job>.result == 'success'`
  checks for the jobs we actually require.

- Release body no longer promises x86 artifacts that aren't there.
  Replaced with a clear note about how to build x86 from source at the
  release tag.

Operator action required for the Odroid runner:
  curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
  sudo apt install -y nodejs
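After running those commands, a quick sanity check confirms the runner can execute JS actions; the `node_major` version-parsing helper below is illustrative, not something from the repo:

```shell
#!/bin/sh
# Post-install sanity check sketch: actions/checkout@v4 needs Node 20+ on
# $PATH when a runner executes jobs directly on the host (no Docker).
node_major() {
  # "v20.11.1" -> "20": strip the leading "v", keep up to the first dot.
  v=${1#v}
  printf '%s\n' "${v%%.*}"
}

if command -v node >/dev/null 2>&1; then
  major=$(node_major "$(node --version)")
  if [ "$major" -ge 20 ]; then
    echo "ok: Node $major on PATH"
  else
    echo "too old: Node $major, checkout@v4 needs 20+"
  fi
else
  echo "node not on PATH; checkout will fail on this runner"
fi
```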

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-15 16:48:58 -06:00
parent 81b29fd237
commit eb39787cf3


@@ -76,7 +76,14 @@ jobs:
   build-iso-amd64:
     name: Build x86_64 ISO + disk image
-    runs-on: ubuntu-latest
+    # Routes to a runner with the `amd64-linux` label. As of v0.3.x no such
+    # runner exists in this Gitea instance — the only runner is the Odroid
+    # which is arm64 and would fail apt-installing grub-efi-amd64-bin /
+    # syslinux because those packages aren't in the arm64 ports repo. The
+    # job stays in the workflow (so it auto-runs once an amd64 runner is
+    # registered) but is gated and the release job continues without it.
+    if: false # remove this line once an amd64-linux runner is registered
+    runs-on: amd64-linux
     needs: build-binaries
     steps:
       - uses: actions/checkout@v4
@@ -147,7 +154,15 @@ jobs:
   release:
     name: Publish Gitea Release
     runs-on: ubuntu-latest
-    needs: [build-binaries, build-iso-amd64, build-disk-arm64]
+    # build-iso-amd64 is gated `if: false` in v0.3.x (no amd64 runner yet);
+    # don't block the release on it. build-disk-arm64 is required — that's
+    # the headline artifact for v0.3.x. build-binaries is required since
+    # the Go binaries are core to every release.
+    needs: [build-binaries, build-disk-arm64]
+    # `if: always()` so the release publishes even if the gated x86 job
+    # somehow ran-and-failed instead of being skipped. The downstream
+    # `find` in the Flatten step ignores missing files gracefully.
+    if: always() && needs.build-binaries.result == 'success' && needs.build-disk-arm64.result == 'success'
     steps:
       - uses: actions/checkout@v4
@@ -190,32 +205,42 @@ jobs:
           ### Downloads
-          - \`kubesolo-os-${DISPLAY}.iso\` — bootable x86_64 ISO
-          - \`kubesolo-os-${DISPLAY}.img.xz\` — x86_64 raw disk image (A/B GPT, GRUB)
           - \`kubesolo-os-${DISPLAY}.arm64.img.xz\` — ARM64 raw disk image (A/B GPT, UEFI)
           - \`kubesolo-cloudinit-linux-{amd64,arm64}\` — standalone cloud-init parser
           - \`kubesolo-update-linux-{amd64,arm64}\` — standalone update agent
           - \`SHA256SUMS\` — checksums for every artifact above
+          > **x86_64 ISO + disk image**: not built automatically yet. The
+          > release workflow's amd64 build job needs an amd64-linux runner,
+          > which this Gitea instance doesn't have yet. To produce them
+          > yourself, clone the repo at this tag and run \`make iso disk-image\`
+          > on any Linux amd64 host.
           ### Verify
           \`\`\`
           sha256sum -c SHA256SUMS
           \`\`\`
-          ### Quick start
+          ### Quick start (ARM64)
           \`\`\`
-          # x86_64 in QEMU/KVM
-          xz -d kubesolo-os-${DISPLAY}.img.xz
-          qemu-system-x86_64 -m 2048 -smp 2 -enable-kvm \\
-            -drive file=kubesolo-os-${DISPLAY}.img,format=raw,if=virtio \\
-            -nographic
-          # ARM64 on Graviton/Ampere or under qemu-system-aarch64
+          # On Graviton/Ampere/any UEFI ARM64 host:
           xz -d kubesolo-os-${DISPLAY}.arm64.img.xz
-          dd if=kubesolo-os-${DISPLAY}.arm64.img of=/dev/sdX bs=4M status=progress
+          sudo dd if=kubesolo-os-${DISPLAY}.arm64.img of=/dev/sdX bs=4M status=progress
+          # Under qemu-system-aarch64 (Apple Silicon w/ HVF):
+          UEFI_FW=\$(brew --prefix qemu)/share/qemu/edk2-aarch64-code.fd
+          qemu-system-aarch64 -M virt -accel hvf -cpu host -m 2048 -smp 2 \\
+            -nographic -bios "\$UEFI_FW" \\
+            -drive file=kubesolo-os-${DISPLAY}.arm64.img,format=raw,if=virtio,media=disk \\
+            -device virtio-rng-pci \\
+            -net nic,model=virtio \\
+            -net user,hostfwd=tcp::6443-:6443,hostfwd=tcp::8080-:8080
           \`\`\`
+          Then from the host: \`curl http://localhost:8080 > ~/.kube/kubesolo-config\`
+          and \`kubectl --kubeconfig ~/.kube/kubesolo-config get nodes\`.
           EOF
           cat release-body.md