kubesolo-os/.gitea/workflows/release.yaml
Adolfo Delorenzo eb39787cf3
Some checks failed
CI / Go Tests (push) Successful in 2m30s
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Successful in 1m37s
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Successful in 2m0s
CI / Shellcheck (push) Failing after 10m50s
Release / Build x86_64 ISO + disk image (push) Blocked by required conditions
ARM64 Build / Build generic ARM64 disk image (push) Failing after 1h6m52s
Release / Test (push) Successful in 1m59s
Release / Build Binaries (linux-amd64) (push) Successful in 1m33s
Release / Build Binaries (linux-arm64) (push) Successful in 1m40s
Release / Build ARM64 disk image (push) Successful in 1h11m43s
Release / Publish Gitea Release (push) Successful in 3m1s
ci: gate x86 build until amd64 runner exists; ARM64 release self-sufficient
v0.3.1's first release.yaml run exposed two issues:

1. The `ubuntu-latest` label resolved to the Odroid (the only runner
   registered with that label), which is arm64. `apt-get install
   grub-efi-amd64-bin` then failed because ports.ubuntu.com only ships
   arm64 packages — the amd64 grub binaries don't exist in the arm64
   repo. Building x86 ISOs on an arm64 host requires either a native
   amd64 runner or qemu-user-static emulation; neither is set up.
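
   If qemu-user-static emulation is ever set up instead, a preflight
   along these lines could let a build script fail fast. This is a
   sketch, not code from the repo: `can_build_amd64` is a hypothetical
   helper, and the binfmt path assumes the registration name Debian's
   qemu-user-static package uses.

   ```shell
   # Report whether this host can run an x86_64 build: either natively,
   # or via a qemu-user-static binfmt_misc registration on arm64.
   can_build_amd64() {
     case "$(uname -m)" in
       x86_64)
         echo "native amd64" ;;
       aarch64)
         if [ -e /proc/sys/fs/binfmt_misc/qemu-x86_64 ]; then
           echo "emulated amd64 via qemu-user-static"
         else
           echo "no amd64 capability"
           return 1
         fi ;;
       *)
         echo "no amd64 capability"
         return 1 ;;
     esac
   }
   can_build_amd64 || echo "x86 job must stay gated on this host"
   ```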

2. The `arm64-linux:host` runner runs jobs directly on the Odroid host
   (no Docker), and actions/checkout@v4 is a JS action needing Node 20+
   in $PATH. The Odroid had no Node installed at all, so checkout failed.

Fixes:

- `build-iso-amd64` gated `if: false` and `runs-on: amd64-linux`. The job
  stays in the workflow as a placeholder for when an amd64 runner is
  eventually registered. Flip the `if: false` line at that time and it
  starts working.

- `release` job no longer depends on build-iso-amd64, so the workflow
  completes with just the ARM64 image + Go binaries. It gates on
  `if: always() && needs.X.result == 'success'` for the jobs we
  actually require.

- Release body no longer promises x86 artifacts that aren't there.
  Replaced with a clear note about how to build x86 from source at the
  release tag.

Operator action required for the Odroid runner:
  curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
  sudo apt install -y nodejs
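
After those two commands, a check along these lines can confirm the host
meets checkout@v4's Node 20+ requirement before re-running the workflow
(a sketch; `parse_major` is a hypothetical helper, not part of the repo):

```shell
# Extract the major version from a `node --version` string, e.g. "v20.11.1" -> 20.
parse_major() {
  v=${1#v}
  echo "${v%%.*}"
}
if command -v node >/dev/null 2>&1; then
  major=$(parse_major "$(node --version)")
  if [ "$major" -ge 20 ]; then
    echo "node $major: ok for actions/checkout@v4"
  else
    echo "node $major: too old, need >= 20"
  fi
else
  echo "node not found on PATH"
fi
```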

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-15 16:48:58 -06:00


name: Release
# Triggered by `git push origin vX.Y.Z`. Builds Go binaries (amd64+arm64),
# x86_64 ISO + disk image, ARM64 disk image, computes SHA256SUMS over all
# artifacts, and posts a Gitea release with everything attached via the
# Gitea API.
#
# Notes for future-you:
# - upload-artifact / download-artifact are pinned to @v3 because Gitea's
#   act_runner v1.0.x doesn't fully implement v4 yet.
# - The release step uses curl against Gitea's own /api/v1/repos/.../releases
#   instead of a third-party action (softprops/action-gh-release et al);
#   act_runner doesn't reliably proxy GitHub.com-targeted actions.
# - The arm64 disk-image build runs on the Odroid self-hosted runner via
#   the `arm64-linux` label. Docs in docs/ci-runners.md.

on:
  push:
    tags:
      - 'v*'
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Test cloud-init
        run: cd cloud-init && go test ./... -count=1
      - name: Test update agent
        run: cd update && go test ./... -count=1
  build-binaries:
    name: Build Binaries (${{ matrix.suffix }})
    runs-on: ubuntu-latest
    needs: test
    strategy:
      matrix:
        include:
          - goos: linux
            goarch: amd64
            suffix: linux-amd64
          - goos: linux
            goarch: arm64
            suffix: linux-arm64
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Get version
        id: version
        run: echo "version=${GITHUB_REF#refs/tags/v}" >> $GITHUB_OUTPUT
      - name: Build cloud-init
        run: |
          CGO_ENABLED=0 GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }} \
            go build -ldflags="-s -w -X main.version=${{ steps.version.outputs.version }}" \
            -o kubesolo-cloudinit-${{ matrix.suffix }} ./cmd/
        working-directory: cloud-init
      - name: Build update agent
        run: |
          CGO_ENABLED=0 GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }} \
            go build -ldflags="-s -w -X main.version=${{ steps.version.outputs.version }}" \
            -o kubesolo-update-${{ matrix.suffix }} .
        working-directory: update
      - name: Upload binaries
        uses: actions/upload-artifact@v3
        with:
          name: binaries-${{ matrix.suffix }}
          path: |
            cloud-init/kubesolo-cloudinit-${{ matrix.suffix }}
            update/kubesolo-update-${{ matrix.suffix }}
  build-iso-amd64:
    name: Build x86_64 ISO + disk image
    # Routes to a runner with the `amd64-linux` label. As of v0.3.x no such
    # runner exists in this Gitea instance — the only runner is the Odroid,
    # which is arm64 and would fail apt-installing grub-efi-amd64-bin /
    # syslinux because those packages aren't in the arm64 ports repo. The
    # job stays in the workflow (so it auto-runs once an amd64 runner is
    # registered) but is gated, and the release job continues without it.
    if: false # remove this line once an amd64-linux runner is registered
    runs-on: amd64-linux
    needs: build-binaries
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Install build deps
        run: |
          sudo apt-get update
          sudo apt-get install -y --no-install-recommends \
            cpio gzip genisoimage isolinux syslinux syslinux-common \
            syslinux-utils xorriso xz-utils wget squashfs-tools \
            dosfstools e2fsprogs fdisk parted libarchive-tools \
            grub-common grub-efi-amd64-bin grub-pc-bin kpartx \
            busybox-static iptables nftables
      - name: Build kernel + ISO + disk-image
        run: |
          make kernel
          make build-cloudinit build-update-agent
          make rootfs initramfs
          make iso
          make disk-image
      - name: Compress disk image
        # The raw .img is 4 GB sparse; xz takes it to ~50-300 MB depending
        # on the preset level. Use -6 (the default) for memory safety on
        # the GitHub-Actions-style runner.
        run: |
          xz -k -T0 --memlimit-compress=1500MiB -6 output/*.img
          ls -lh output/
      - name: Upload x86_64 artifacts
        uses: actions/upload-artifact@v3
        with:
          name: image-amd64
          path: |
            output/*.iso
            output/*.img.xz
  build-disk-arm64:
    name: Build ARM64 disk image
    runs-on: arm64-linux
    needs: test
    steps:
      - uses: actions/checkout@v4
      - name: Show host info
        run: |
          uname -a
          nproc
          free -h
          df -h /
      - name: Build kernel + rootfs + disk-image
        # Runner runs as root via systemd; explicit sudo is harmless but
        # documented as such in docs/ci-runners.md.
        run: |
          make kernel-arm64
          make build-cross
          make rootfs-arm64
          make disk-image-arm64
      - name: Compress disk image
        run: |
          xz -k -T0 --memlimit-compress=1500MiB -6 output/*.arm64.img
          ls -lh output/
      - name: Upload ARM64 artifacts
        uses: actions/upload-artifact@v3
        with:
          name: image-arm64
          path: output/*.arm64.img.xz
  release:
    name: Publish Gitea Release
    runs-on: ubuntu-latest
    # build-iso-amd64 is gated `if: false` in v0.3.x (no amd64 runner yet);
    # don't block the release on it. build-disk-arm64 is required — that's
    # the headline artifact for v0.3.x. build-binaries is required since
    # the Go binaries are core to every release.
    needs: [build-binaries, build-disk-arm64]
    # `if: always()` so the release publishes even if the gated x86 job
    # somehow ran-and-failed instead of being skipped. The downstream
    # `find` in the Flatten step ignores missing files gracefully.
    if: always() && needs.build-binaries.result == 'success' && needs.build-disk-arm64.result == 'success'
    steps:
      - uses: actions/checkout@v4
      - name: Get version
        id: version
        # `cat VERSION` could disagree with the pushed tag; deriving the
        # version from the tag ref itself is unambiguous.
        run: echo "version=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
      - name: Download all artifacts
        uses: actions/download-artifact@v3
        with:
          path: artifacts
      - name: Flatten artifacts + compute checksums
        run: |
          mkdir -p release
          # Each upload-artifact wrote into artifacts/<name>/...
          find artifacts -type f \( \
            -name "*.iso" -o \
            -name "*.img.xz" -o \
            -name "kubesolo-*" \
            \) -exec cp {} release/ \;
          (cd release && sha256sum * | sort > SHA256SUMS)
          ls -lh release/
          cat release/SHA256SUMS
      - name: Install release tooling
        run: sudo apt-get update && sudo apt-get install -y jq curl
      - name: Render release body
        id: body
        run: |
          VERSION="${{ steps.version.outputs.version }}"
          # Strip the leading 'v' for cosmetic display in the body.
          DISPLAY="${VERSION#v}"
          cat > release-body.md <<EOF
          See [docs/release-notes-${DISPLAY}.md](./docs/release-notes-${DISPLAY}.md)
          and [CHANGELOG.md](./CHANGELOG.md) for the full release notes.

          ### Downloads

          - \`kubesolo-os-${DISPLAY}.arm64.img.xz\` — ARM64 raw disk image (A/B GPT, UEFI)
          - \`kubesolo-cloudinit-linux-{amd64,arm64}\` — standalone cloud-init parser
          - \`kubesolo-update-linux-{amd64,arm64}\` — standalone update agent
          - \`SHA256SUMS\` — checksums for every artifact above

          > **x86_64 ISO + disk image**: not built automatically yet. The
          > release workflow's amd64 build job needs an amd64-linux runner,
          > which this Gitea instance doesn't have yet. To produce them
          > yourself, clone the repo at this tag and run \`make iso disk-image\`
          > on any Linux amd64 host.

          ### Verify

          \`\`\`
          sha256sum -c SHA256SUMS
          \`\`\`

          ### Quick start (ARM64)

          \`\`\`
          # On Graviton/Ampere/any UEFI ARM64 host:
          xz -d kubesolo-os-${DISPLAY}.arm64.img.xz
          sudo dd if=kubesolo-os-${DISPLAY}.arm64.img of=/dev/sdX bs=4M status=progress

          # Under qemu-system-aarch64 (Apple Silicon w/ HVF):
          UEFI_FW=\$(brew --prefix qemu)/share/qemu/edk2-aarch64-code.fd
          qemu-system-aarch64 -M virt -accel hvf -cpu host -m 2048 -smp 2 \\
            -nographic -bios "\$UEFI_FW" \\
            -drive file=kubesolo-os-${DISPLAY}.arm64.img,format=raw,if=virtio,media=disk \\
            -device virtio-rng-pci \\
            -net nic,model=virtio \\
            -net user,hostfwd=tcp::6443-:6443,hostfwd=tcp::8080-:8080
          \`\`\`

          Then from the host: \`curl http://localhost:8080 > ~/.kube/kubesolo-config\`
          and \`kubectl --kubeconfig ~/.kube/kubesolo-config get nodes\`.
          EOF
          cat release-body.md
      - name: Create release via Gitea API
        env:
          # Gitea's act_runner auto-populates this with repo-write scope.
          # If not, set a personal access token as a secret named GITEA_TOKEN
          # on the org and swap the var name below.
          TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          set -euo pipefail
          TAG="${{ steps.version.outputs.version }}"
          REPO_API="${{ github.api_url }}/repos/${{ github.repository }}"
          # 1. Create the release. The API is GitHub-compatible in request
          #    shape; the response includes the numeric release id we need
          #    for asset uploads.
          PAYLOAD=$(jq -n \
            --arg tag "$TAG" \
            --arg name "KubeSolo OS $TAG" \
            --rawfile body release-body.md \
            '{tag_name: $tag, name: $name, body: $body, draft: false, prerelease: false}')
          echo "==> Creating release for $TAG against $REPO_API"
          CREATE_RESP=$(curl -fsSL -X POST \
            -H "Authorization: token $TOKEN" \
            -H "Content-Type: application/json" \
            -d "$PAYLOAD" \
            "$REPO_API/releases")
          RELEASE_ID=$(echo "$CREATE_RESP" | jq -r '.id')
          if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
            echo "ERROR: Could not extract release id from response:"
            echo "$CREATE_RESP" | jq . || echo "$CREATE_RESP"
            exit 1
          fi
          echo "==> Release id: $RELEASE_ID"
          # 2. Upload each asset. `?name=` names the attachment; we use the
          #    basename so users see the same filename the build produced.
          for f in release/*; do
            [ -f "$f" ] || continue
            name=$(basename "$f")
            echo "==> Uploading $name ($(du -h "$f" | cut -f1))"
            curl -fsSL -X POST \
              -H "Authorization: token $TOKEN" \
              -F "attachment=@$f" \
              "$REPO_API/releases/$RELEASE_ID/assets?name=$name" >/dev/null
          done
          echo "==> Release published: ${{ github.server_url }}/${{ github.repository }}/releases/tag/$TAG"