kubesolo-os/.gitea/workflows/release.yaml
Adolfo Delorenzo 76ed2ffc14
fix(arm64): resolve dual-glibc loading that triggers stack-canary aborts
Second nft crash report from QEMU virt:

  failed to set up pod masquerade
    nft add table ip kubesolo-masq:
      signal: aborted (output: *** stack smashing detected ***: terminated)

Root cause: two glibcs are visible to dynamically-linked binaries in the
rootfs. piCore64 ships glibc at /lib/libc.so.6; we copy the build host's
glibc (for the iptables-nft / nft / xtables-modules family) to
/lib/$LIB_ARCH/libc.so.6. The dynamic linker can resolve one binary's
NEEDED libc.so.6 to piCore's and another (via transitive load through
e.g. libnftables.so.1) to ours. Each libc has its own __stack_chk_guard
global; stack frames whose canary was written by code from libc-A and
checked by code from libc-B trip "stack smashing detected" → SIGABRT.
This didn't fire before nft was added because no host-installed
dynamically linked binary was actually invoked before the point where
kubesolo crashed at first-boot preflight.
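The layout that produces the split can be mocked in a few lines. This is
purely illustrative: the LIB_ARCH value is an assumption and the libraries
are empty stubs, but the path shape matches the description above.

```shell
# Mock of the dual-libc rootfs layout -- stub files, illustrative paths.
# LIB_ARCH=aarch64-linux-gnu is an assumed value, not taken from the script.
ROOTFS=$(mktemp -d)
LIB_ARCH=aarch64-linux-gnu
mkdir -p "$ROOTFS/lib/$LIB_ARCH"
touch "$ROOTFS/lib/libc.so.6"            # piCore64's glibc
touch "$ROOTFS/lib/$LIB_ARCH/libc.so.6"  # glibc copied from the build host
# Two files satisfy the soname libc.so.6; which one a given binary maps
# depends on that binary's search order, so one process can end up with
# code from both:
find "$ROOTFS/lib" -name 'libc.so.6' | sort
```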

Three layered fixes in inject-kubesolo.sh:

1. Bundle the full glibc family (was just libc.so.6 + ld). Now also
   libpthread, libdl, libm, libresolv, librt, libanl, libgcc_s. Without
   these, transitively-loaded host libs could pull them in from piCore's
   /lib and re-introduce the split.

2. After bundling, delete piCore's duplicates from /lib/ where our copy
   exists in /lib/$LIB_ARCH/. The dynamic linker's search now has
   exactly one match per soname.

3. Write /etc/ld.so.conf giving /lib/$LIB_ARCH precedence over /lib, and
   run `ldconfig -r "$ROOTFS"` to bake an explicit /etc/ld.so.cache.
   The runtime linker uses the cache (when present) instead of falling
   back to compiled-in default paths, making lookup order deterministic.
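The three steps can be sketched like this, against a mock rootfs. The real
logic lives in inject-kubesolo.sh; here $ROOTFS is a temp dir, the
host-glibc copies are stubbed with touch, and ldconfig is shown but
skipped because the stubs are not real ELF files.

```shell
# Sketch of the three layered fixes (mock rootfs, stub libraries).
set -eu
ROOTFS=$(mktemp -d)
LIB_ARCH=aarch64-linux-gnu   # assumed value, illustrative only
mkdir -p "$ROOTFS/lib/$LIB_ARCH" "$ROOTFS/etc"
touch "$ROOTFS/lib/libc.so.6" "$ROOTFS/lib/libpthread.so.0"  # piCore's

# 1. Bundle the full glibc family, not just libc + ld.
for lib in libc.so.6 libpthread.so.0 libdl.so.2 libm.so.6 \
           libresolv.so.2 librt.so.1 libanl.so.1 libgcc_s.so.1; do
  touch "$ROOTFS/lib/$LIB_ARCH/$lib"   # real script copies from the host
done

# 2. Delete piCore's duplicates from /lib where our copy exists.
for f in "$ROOTFS/lib/$LIB_ARCH"/*; do
  dup="$ROOTFS/lib/$(basename "$f")"
  if [ -e "$dup" ]; then rm "$dup"; fi
done

# 3. Give /lib/$LIB_ARCH precedence over /lib and bake the cache.
printf '/lib/%s\n/lib\n' "$LIB_ARCH" > "$ROOTFS/etc/ld.so.conf"
# ldconfig -r "$ROOTFS"   # real script runs this; skipped on stub files

find "$ROOTFS/lib" -name 'libc.so.6'   # exactly one match remains
```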

Also done (follow-ups from the previous commit):

- build/Dockerfile.builder gains nftables so docker-build picks up nft.
- .gitea/workflows/release.yaml's amd64 build job installs iptables +
  nftables (previously only listed iptables-related libs but not the
  CLIs themselves).

Verified by shellcheck. End-to-end QEMU verification on the Odroid next.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-15 07:56:49 -06:00


name: Release
# Triggered by `git push origin vX.Y.Z`. Builds Go binaries (amd64+arm64),
# x86_64 ISO + disk image, ARM64 disk image, computes SHA256SUMS over all
# artifacts, and posts a Gitea release with everything attached via the
# Gitea API.
#
# Notes for future-you:
# - upload-artifact / download-artifact are pinned to @v3 because Gitea's
#   act_runner v1.0.x doesn't fully implement v4 yet.
# - The release step uses curl against Gitea's own /api/v1/repos/.../releases
#   instead of a third-party action (softprops/action-gh-release et al);
#   act_runner doesn't reliably proxy GitHub.com-targeted actions.
# - The arm64 disk-image build runs on the Odroid self-hosted runner via
#   the `arm64-linux` label. Docs in docs/ci-runners.md.

on:
  push:
    tags:
      - 'v*'

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Test cloud-init
        run: cd cloud-init && go test ./... -count=1
      - name: Test update agent
        run: cd update && go test ./... -count=1

  build-binaries:
    name: Build Binaries (${{ matrix.suffix }})
    runs-on: ubuntu-latest
    needs: test
    strategy:
      matrix:
        include:
          - goos: linux
            goarch: amd64
            suffix: linux-amd64
          - goos: linux
            goarch: arm64
            suffix: linux-arm64
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Get version
        id: version
        run: echo "version=${GITHUB_REF#refs/tags/v}" >> $GITHUB_OUTPUT
      - name: Build cloud-init
        run: |
          CGO_ENABLED=0 GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }} \
            go build -ldflags="-s -w -X main.version=${{ steps.version.outputs.version }}" \
            -o kubesolo-cloudinit-${{ matrix.suffix }} ./cmd/
        working-directory: cloud-init
      - name: Build update agent
        run: |
          CGO_ENABLED=0 GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }} \
            go build -ldflags="-s -w -X main.version=${{ steps.version.outputs.version }}" \
            -o kubesolo-update-${{ matrix.suffix }} .
        working-directory: update
      - name: Upload binaries
        uses: actions/upload-artifact@v3
        with:
          name: binaries-${{ matrix.suffix }}
          path: |
            cloud-init/kubesolo-cloudinit-${{ matrix.suffix }}
            update/kubesolo-update-${{ matrix.suffix }}

  build-iso-amd64:
    name: Build x86_64 ISO + disk image
    runs-on: ubuntu-latest
    needs: build-binaries
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Install build deps
        run: |
          sudo apt-get update
          sudo apt-get install -y --no-install-recommends \
            cpio gzip genisoimage isolinux syslinux syslinux-common \
            syslinux-utils xorriso xz-utils wget squashfs-tools \
            dosfstools e2fsprogs fdisk parted libarchive-tools \
            grub-common grub-efi-amd64-bin grub-pc-bin kpartx \
            busybox-static iptables nftables
      - name: Build kernel + ISO + disk-image
        run: |
          make kernel
          make build-cloudinit build-update-agent
          make rootfs initramfs
          make iso
          make disk-image
      - name: Compress disk image
        # The raw .img is 4 GB sparse; xz takes it to ~50-300 MB depending
        # on dictionary level. Use -6 (default) for memory safety on the
        # GitHub-Actions-style runner.
        run: |
          xz -k -T0 --memlimit-compress=1500MiB -6 output/*.img
          ls -lh output/
      - name: Upload x86_64 artifacts
        uses: actions/upload-artifact@v3
        with:
          name: image-amd64
          path: |
            output/*.iso
            output/*.img.xz

  build-disk-arm64:
    name: Build ARM64 disk image
    runs-on: arm64-linux
    needs: test
    steps:
      - uses: actions/checkout@v4
      - name: Show host info
        run: |
          uname -a
          nproc
          free -h
          df -h /
      - name: Build kernel + rootfs + disk-image
        # Runner runs as root via systemd; explicit sudo is harmless but
        # documented as such in docs/ci-runners.md.
        run: |
          make kernel-arm64
          make build-cross
          make rootfs-arm64
          make disk-image-arm64
      - name: Compress disk image
        run: |
          xz -k -T0 --memlimit-compress=1500MiB -6 output/*.arm64.img
          ls -lh output/
      - name: Upload ARM64 artifacts
        uses: actions/upload-artifact@v3
        with:
          name: image-arm64
          path: output/*.arm64.img.xz

  release:
    name: Publish Gitea Release
    runs-on: ubuntu-latest
    needs: [build-binaries, build-iso-amd64, build-disk-arm64]
    steps:
      - uses: actions/checkout@v4
      - name: Get version
        id: version
        # `cat VERSION` would be stale on tag pushes (VERSION already bumped
        # for the tag, but using ref_name is unambiguous).
        run: echo "version=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
      - name: Download all artifacts
        uses: actions/download-artifact@v3
        with:
          path: artifacts
      - name: Flatten artifacts + compute checksums
        run: |
          mkdir -p release
          # Each upload-artifact wrote into artifacts/<name>/...
          find artifacts -type f \( \
            -name "*.iso" -o \
            -name "*.img.xz" -o \
            -name "kubesolo-*" \
          \) -exec cp {} release/ \;
          (cd release && sha256sum * | sort > SHA256SUMS)
          ls -lh release/
          cat release/SHA256SUMS
      - name: Install release tooling
        run: sudo apt-get update && sudo apt-get install -y jq curl
      - name: Render release body
        id: body
        run: |
          VERSION="${{ steps.version.outputs.version }}"
          # Strip the leading 'v' for cosmetic display in the body.
          DISPLAY="${VERSION#v}"
          cat > release-body.md <<EOF
          See [docs/release-notes-${DISPLAY}.md](./docs/release-notes-${DISPLAY}.md)
          and [CHANGELOG.md](./CHANGELOG.md) for the full release notes.
          ### Downloads
          - \`kubesolo-os-${DISPLAY}.iso\` — bootable x86_64 ISO
          - \`kubesolo-os-${DISPLAY}.img.xz\` — x86_64 raw disk image (A/B GPT, GRUB)
          - \`kubesolo-os-${DISPLAY}.arm64.img.xz\` — ARM64 raw disk image (A/B GPT, UEFI)
          - \`kubesolo-cloudinit-linux-{amd64,arm64}\` — standalone cloud-init parser
          - \`kubesolo-update-linux-{amd64,arm64}\` — standalone update agent
          - \`SHA256SUMS\` — checksums for every artifact above
          ### Verify
          \`\`\`
          sha256sum -c SHA256SUMS
          \`\`\`
          ### Quick start
          \`\`\`
          # x86_64 in QEMU/KVM
          xz -d kubesolo-os-${DISPLAY}.img.xz
          qemu-system-x86_64 -m 2048 -smp 2 -enable-kvm \\
            -drive file=kubesolo-os-${DISPLAY}.img,format=raw,if=virtio \\
            -nographic
          # ARM64 on Graviton/Ampere or under qemu-system-aarch64
          xz -d kubesolo-os-${DISPLAY}.arm64.img.xz
          dd if=kubesolo-os-${DISPLAY}.arm64.img of=/dev/sdX bs=4M status=progress
          \`\`\`
          EOF
          cat release-body.md
      - name: Create release via Gitea API
        env:
          # Gitea's act_runner auto-populates this with repo-write scope.
          # If not, set a personal access token as a secret named GITEA_TOKEN
          # on the org and swap the var name below.
          TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          set -euo pipefail
          TAG="${{ steps.version.outputs.version }}"
          REPO_API="${{ github.api_url }}/repos/${{ github.repository }}"
          # 1. Create the release. The API is GitHub-compatible at the
          #    request shape; the response includes the numeric release id we
          #    need for asset uploads.
          PAYLOAD=$(jq -n \
            --arg tag "$TAG" \
            --arg name "KubeSolo OS $TAG" \
            --rawfile body release-body.md \
            '{tag_name: $tag, name: $name, body: $body, draft: false, prerelease: false}')
          echo "==> Creating release for $TAG against $REPO_API"
          CREATE_RESP=$(curl -fsSL -X POST \
            -H "Authorization: token $TOKEN" \
            -H "Content-Type: application/json" \
            -d "$PAYLOAD" \
            "$REPO_API/releases")
          RELEASE_ID=$(echo "$CREATE_RESP" | jq -r '.id')
          if [ -z "$RELEASE_ID" ] || [ "$RELEASE_ID" = "null" ]; then
            echo "ERROR: Could not extract release id from response:"
            echo "$CREATE_RESP" | jq . || echo "$CREATE_RESP"
            exit 1
          fi
          echo "==> Release id: $RELEASE_ID"
          # 2. Upload each asset. asset?name= names the attachment; we use
          #    the basename so users see the same filename the build produced.
          for f in release/*; do
            [ -f "$f" ] || continue
            name=$(basename "$f")
            echo "==> Uploading $name ($(du -h "$f" | cut -f1))"
            curl -fsSL -X POST \
              -H "Authorization: token $TOKEN" \
              -F "attachment=@$f" \
              "$REPO_API/releases/$RELEASE_ID/assets?name=$name" >/dev/null
          done
          echo "==> Release published: $REPO_API/../releases/tag/$TAG"