Commit Graph

14 Commits

76ed2ffc14 fix(arm64): resolve dual-glibc loading that triggers stack-canary aborts
Some checks failed
ARM64 Build / Build generic ARM64 disk image (push) Failing after 5s
CI / Go Tests (push) Successful in 1m49s
CI / Shellcheck (push) Successful in 56s
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Successful in 1m43s
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Successful in 1m54s
Second nft crash report from QEMU virt:

  failed to set up pod masquerade
    nft add table ip kubesolo-masq:
      signal: aborted (output: *** stack smashing detected ***: terminated)

Root cause: two glibcs are visible to dynamically-linked binaries in the
rootfs. piCore64 ships glibc at /lib/libc.so.6; we copy the build host's
glibc (for the iptables-nft / nft / xtables-modules family) to
/lib/$LIB_ARCH/libc.so.6. The dynamic linker can resolve one binary's
NEEDED libc.so.6 to piCore's and another (via transitive load through
e.g. libnftables.so.1) to ours. Each libc has its own __stack_chk_guard
global; stack frames whose canary was written by code from libc-A and
checked by code from libc-B trip "stack smashing detected" → SIGABRT.
This didn't fire before nft was added because no host-installed,
dynamically linked binary was actually invoked before kubesolo crashed at
first-boot preflight.

Three layered fixes in inject-kubesolo.sh:

1. Bundle the full glibc family (was just libc.so.6 + ld). Now also
   libpthread, libdl, libm, libresolv, librt, libanl, libgcc_s. Without
   these, transitively-loaded host libs could pull them in from piCore's
   /lib and re-introduce the split.

2. After bundling, delete piCore's duplicates from /lib/ where our copy
   exists in /lib/$LIB_ARCH/. The dynamic linker's search now has
   exactly one match per soname.

3. Write /etc/ld.so.conf giving /lib/$LIB_ARCH precedence over /lib, and
   run `ldconfig -r "$ROOTFS"` to bake an explicit /etc/ld.so.cache.
   The runtime linker uses the cache (when present) instead of falling
   back to compiled-in default paths, making lookup order deterministic.
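A condensed sketch of the three fixes as they might appear in
inject-kubesolo.sh (function names are hypothetical; the soname list and
paths come from the description above):

```shell
LIB_ARCH="${LIB_ARCH:-aarch64-linux-gnu}"

dedupe_glibc() {    # fixes 1 + 2
    src="$1" rootfs="$2"
    mkdir -p "$rootfs/lib/$LIB_ARCH"
    for so in libc.so.6 libpthread.so.0 libdl.so.2 libm.so.6 \
              libresolv.so.2 librt.so.1 libanl.so.1 libgcc_s.so.1; do
        [ -e "$src/$so" ] || continue            # merged-libc hosts don't split all of these out
        cp "$src/$so" "$rootfs/lib/$LIB_ARCH/$so"
        rm -f "$rootfs/lib/$so"                  # leave exactly one match per soname
    done
}

bake_linker_cache() {    # fix 3
    rootfs="$1"
    mkdir -p "$rootfs/etc"
    printf '/lib/%s\n/lib\n' "$LIB_ARCH" > "$rootfs/etc/ld.so.conf"
    # ldconfig -r needs root (it chroots); degrade gracefully on unprivileged builds
    if command -v ldconfig >/dev/null 2>&1; then
        ldconfig -r "$rootfs" 2>/dev/null || echo "warning: ld.so.cache not baked" >&2
    fi
}
```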

Also done (followups from previous commit):

- build/Dockerfile.builder gains nftables so docker-build picks up nft.
- .gitea/workflows/release.yaml's amd64 build job installs iptables +
  nftables (previously only listed iptables-related libs but not the
  CLIs themselves).

Verified by shellcheck. End-to-end QEMU verification on the Odroid next.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-15 07:56:49 -06:00
51c1f78aea fix(arm64): bundle nft binary + always show access banner
Some checks failed
ARM64 Build / Build generic ARM64 disk image (push) Failing after 5s
CI / Go Tests (push) Successful in 1m55s
CI / Shellcheck (push) Successful in 53s
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Failing after 1m0s
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Successful in 2m18s
Two real v0.3.0 bugs that surface on first-boot:

1. KubeSolo v1.1.4+ owns its pod-masquerade rules directly via
     nft add table ip kubesolo-masq
   instead of going through kube-proxy/CNI. Without the standalone nft
   CLI in PATH, KubeSolo FATALs at startup with:
     "nft": executable file not found in $PATH
   then the init exits and the kernel panics on PID 1 death.

   inject-kubesolo.sh now also copies /usr/sbin/nft and the shared libraries
   it needs that weren't already bundled (libnftables, libedit, libjansson,
   libgmp, libtinfo, libbsd, libmd). The iptables-nft block above already
   covered libmnl, libnftnl, libxtables, libc, and ld.

2. The host-access banner ("From your host machine, run: curl -s
   http://localhost:8080 ...") was gated on the kubeconfig appearing
   within 120s. When KubeSolo crashed early (bug 1 above) or simply took
   longer than the wait window, the user never saw the connection
   instructions.

   90-kubesolo.sh now:
     - writes the banner to /etc/motd so it shows on any later shell
       (SSH session, emergency shell, console login)
     - prints the banner to console unconditionally, after the wait
       loop, regardless of whether the kubeconfig was found
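A sketch of the 90-kubesolo.sh banner change (banner text abbreviated as in
the message above; the motd path is a parameter here only so the sketch is
self-contained):

```shell
print_banner() {
    cat <<'EOF'
From your host machine, run: curl -s http://localhost:8080 ...
EOF
}

write_banner() {
    print_banner > "${1:-/etc/motd}"   # any later shell (SSH, emergency, console) sees it
}

# after the kubeconfig wait loop, unconditionally:
# print_banner > /dev/console
```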

Both fixes are pure rootfs changes — no kernel rebuild required.
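The nft bundling in fix 1 amounts to a "binary plus its dynamic deps" copy.
A generic helper in that spirit (helper name and destination path are
assumptions, not the actual inject-kubesolo.sh code):

```shell
copy_with_libs() {
    bin="$1" rootfs="$2"
    mkdir -p "$rootfs/usr/sbin"
    cp "$bin" "$rootfs/usr/sbin/"
    # ldd prints "libfoo.so.1 => /path/libfoo.so.1 (0x...)" per dependency
    ldd "$bin" 2>/dev/null | awk '$2 == "=>" && $3 ~ /^\// { print $3 }' |
    while IFS= read -r lib; do
        mkdir -p "$rootfs$(dirname "$lib")"
        [ -e "$rootfs$lib" ] || cp "$lib" "$rootfs$lib"   # don't clobber libs bundled earlier
    done
}
```

Note ldd does not list the dynamic loader this way; per the message, ld (and
libc) were already covered by the iptables-nft block.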

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-15 07:16:12 -06:00
1de36289a5 fix(arm64): tr -d '[:space:]' is parsed as literal char-set by busybox 1.30.1
Some checks failed
CI / Go Tests (push) Successful in 1m39s
CI / Shellcheck (push) Failing after 44s
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Failing after 1m13s
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Failing after 1m31s
Ubuntu's busybox-static 1.30.1 (which we use for the ARM64 rootfs after
piCore64's BusyBox crashes in QEMU virt) doesn't recognize POSIX character
classes. `tr -d '[:space:]'` is interpreted as "delete any of the literal
characters [, :, s, p, a, c, e, ]" — so every s/p/a/c/e in module names and
sysctl keys gets eaten.

Symptoms in the boot log:
  virtio_net  -> virtio_nt   (e dropped)
  overlay     -> ovrly       (e, a dropped)
  bridge      -> bridg       (e dropped)
  nf_conntrack -> nf_onntrk  (c, a, c dropped)
  net.bridge.bridge-nf-call-iptables -> nt.bridg.bridg-nf-ll-itbl

Fix: use explicit whitespace chars `tr -d ' \t\r\n'` in both
30-kernel-modules.sh and 40-sysctl.sh. Works under any tr implementation.
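A minimal demonstration of the portable form (tr expands the \t \r \n
escapes itself; the single quotes keep the shell out of it):

```shell
strip_ws() {
    tr -d ' \t\r\n'
}

mod="$(printf '  virtio_net \n' | strip_ws)"
# under busybox 1.30.1, tr -d '[:space:]' would have produced "virtio_nt"
```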

Also: filter functions.sh out of the init.d stage-copy loop. It's a shared
library (sourced by init.sh), not a numbered stage. With it in init.d the
main loop runs it as a stage after stage 90, then panics with "Init
completed without exec'ing KubeSolo".
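One way the stage-copy filter might look (the loop shape is an assumption;
only the functions.sh exclusion is the point):

```shell
stage_copy() {
    srcdir="$1" destdir="$2"
    mkdir -p "$destdir"
    for f in "$srcdir"/*.sh; do
        case "${f##*/}" in
            functions.sh) continue ;;   # shared library sourced by init.sh, not a numbered stage
        esac
        cp "$f" "$destdir/"
    done
}
```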

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 16:02:21 -06:00
06e12a79bd fix(arm64): override piCore64's BusyBox with host's static busybox
Some checks failed
CI / Go Tests (push) Successful in 1m26s
CI / Shellcheck (push) Failing after 36s
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Failing after 1m15s
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Failing after 1m14s
piCore64 v15.0.0 ships BusyBox built with ARM instructions that QEMU virt
cannot emulate even under -cpu max — applets like mkdir, uname, readlink
SIGILL on first invocation (el0_undef in the panic trace). mount works
because piCore's busybox.suid happens to use a different code path.

Fix: when building the arm64 rootfs, replace piCore's bin/busybox and
bin/busybox.suid with /bin/busybox from the build host (Ubuntu's
busybox-static, statically linked, built for generic ARMv8-A).
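The override itself is small; a sketch (the guard and function shape are
assumptions, the paths come from the message):

```shell
override_busybox() {
    rootfs="$1" host_bb="${2:-/bin/busybox}"
    [ -x "$host_bb" ] || { echo "no static busybox on build host" >&2; return 1; }
    cp "$host_bb" "$rootfs/bin/busybox"
    cp "$host_bb" "$rootfs/bin/busybox.suid"
}
```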

Also add busybox-static to Dockerfile.builder so the Docker-based build
flow has the same fallback available.

Long-term: source a known-good ARM64 BusyBox build (Alpine, or our own
from upstream BusyBox) so we don't depend on the build host's package
manager. Tracked as future work.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 15:38:05 -06:00
5cf81049f6 fix: install our staged init at /init too, not just /sbin/init
Some checks failed
CI / Go Tests (push) Successful in 1m29s
CI / Shellcheck (push) Failing after 33s
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Failing after 1m7s
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Failing after 1m12s
The kernel ALWAYS runs /init when booting from an initramfs. If /init doesn't
exist, the kernel falls back to the legacy root-mount path (looking for a real
root partition via root= cmdline), which we don't want — our system IS the
initramfs.

Previous fix removed piCore's /init to stop it from being run; that caused the
kernel to skip the initramfs entrypoint entirely and panic with 'Cannot open
root device' (error -6).

Correct fix: replace piCore's /init with a copy of our init.sh. The kernel
runs /init -> our staged boot, which is exactly what we want. Keep
/sbin/init as well (some boot paths exec it directly, e.g. via an init=
cmdline override) and keep the existing init=/sbin/init in grub-arm64.cfg
as a belt-and-suspenders measure.
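The dual install can be sketched as (install -D creates leading
directories; the variable names are illustrative):

```shell
install_init() {
    staged="$1" rootfs="$2"
    install -D -m 0755 "$staged" "$rootfs/init"        # kernel's initramfs entry point
    install -D -m 0755 "$staged" "$rootfs/sbin/init"   # init= override / direct exec paths
}
```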

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 15:01:20 -06:00
863f498cc2 fix: kernel must use /sbin/init, not piCore's /init
Some checks failed
CI / Go Tests (push) Failing after 53s
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Has been skipped
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Has been skipped
CI / Shellcheck (push) Failing after 27s
Root cause of the 'Run /init as init process' -> immediate SIGSEGV panic on
the generic ARM64 boot: piCore64's rootfs ships a /init script at the rootfs
root, and the kernel's init search order picks /init over /sbin/init. piCore's
init then exec's something incompatible with our environment and segfaults.

Two fixes:
1. inject-kubesolo.sh now removes the upstream /init after replacing
   /sbin/init. This is the structural fix — the rootfs no longer has the
   conflicting entry-point.
2. grub-arm64.cfg passes init=/sbin/init explicitly. Belt-and-suspenders in
   case any future rootfs source re-introduces /init.
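The grub-arm64.cfg side of fix 2 might look something like this (kernel
path, console args, and menu title are assumptions; only init=/sbin/init is
the point):

```
menuentry "KubeSolo OS" {
    linux /boot/vmlinuz console=ttyAMA0 init=/sbin/init
    initrd /boot/initramfs.gz
}
```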

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 14:43:35 -06:00
d51618badb build: separate generic ARM64 from Raspberry Pi kernel builds
Splits the ARM64 build into two tracks per docs/arm64-architecture.md:

Generic ARM64 (mainline kernel.org, UEFI, virtio, GRUB):
- New build/scripts/build-kernel-arm64.sh builds mainline LTS (6.12.x by default)
  from arm64 defconfig + shared container fragment + arm64-virt enables
  (VIRTIO_*, EFI_STUB, NVMe). Output: build/cache/kernel-arm64-generic/.
- New Makefile targets: kernel-arm64, rootfs-arm64 (now consumes the mainline
  kernel modules via TARGET_VARIANT=generic).
- versions.env: pin MAINLINE_KERNEL_VERSION=6.12.10, declare cdn.kernel.org URL
  and SHA256 placeholder.

Raspberry Pi (raspberrypi/linux fork, custom DTBs, autoboot.txt):
- build-kernel-arm64.sh (RPi-flavoured) renamed to build-kernel-rpi.sh; cache
  dir renamed from custom-kernel-arm64 to custom-kernel-rpi.
- New Makefile targets: kernel-rpi, rootfs-arm64-rpi (uses TARGET_VARIANT=rpi).
- rpi-image now depends on rootfs-arm64-rpi + kernel-rpi instead of the generic
  rootfs-arm64.
- create-rpi-image.sh + inject-kubesolo.sh updated to reference the new cache
  path. inject-kubesolo.sh now takes a TARGET_VARIANT env var (rpi|generic) to
  select which ARM64 kernel modules to consume.

Shared substrate:
- rpi-kernel-config.fragment renamed to kernel-container.fragment. The contents
  were never RPi-specific (cgroup, namespaces, AppArmor, netfilter) — just
  misnamed. Extended with extra subsystem disables (KVM, WLAN, CFG80211,
  INFINIBAND, PCMCIA, HAMRADIO, ISDN, ATM, INPUT_JOYSTICK, INPUT_TABLET, FPGA)
  and CONFIG_LSM=lockdown,yama,apparmor.
- build-kernel.sh (x86) refactored to apply the shared fragment via a generic
  apply_fragment function (two-pass for the TC stock config security dance),
  killing ~50 lines of inline config duplication.
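A rough shape for the shared apply_fragment helper (a sketch only — the
real two-pass dance also re-runs olddefconfig so dependent symbols settle;
this shows just the merge step, and the helper shape is an assumption):

```shell
# Merge each symbol from a kconfig fragment into .config, replacing any
# existing setting so the fragment wins.
apply_fragment() {
    config="$1" fragment="$2"
    while IFS= read -r line; do
        case "$line" in
            CONFIG_*=*|"# CONFIG_"*" is not set")
                sym=${line#"# "}       # strip the "is not set" comment prefix, if any
                sym=${sym%%[= ]*}      # bare symbol name
                sed -i -e "/^$sym[= ]/d" -e "/^# $sym is not set/d" "$config"
                printf '%s\n' "$line" >> "$config"
                ;;
        esac
    done < "$fragment"
}
```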

Note: rename detection shows build-kernel-arm64.sh as 'modified' because the
new file at that path is the mainline build, while the old RPi-flavoured
content lives in build-kernel-rpi.sh (which appears as a new file). The git
log for build-kernel-rpi.sh is empty; the RPi history is preserved at the
original path until this commit.

No actual kernel build runs in this commit — that's Phase 3 work.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-14 10:30:11 -06:00
ba4812f637 fix: complete ARM64 RPi build pipeline
Some checks failed
CI / Go Tests (push) Has been cancelled
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Has been cancelled
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Has been cancelled
CI / Shellcheck (push) Has been cancelled
Release / Test (push) Has been cancelled
Release / Build Binaries (amd64, linux, linux-amd64) (push) Has been cancelled
Release / Build Binaries (arm64, linux, linux-arm64) (push) Has been cancelled
Release / Build ISO (amd64) (push) Has been cancelled
Release / Create Release (push) Has been cancelled
- fetch-components.sh: download ARM64 KubeSolo binary (kubesolo-arm64)
- inject-kubesolo.sh: use arch-specific binaries for KubeSolo, cloud-init,
  and update agent; detect KVER from custom kernel when rootfs has none;
  cross-arch module resolution via find fallback when modprobe fails
- create-rpi-image.sh: kpartx support for Docker container builds
- Makefile: rootfs-arm64 depends on build-cross, includes pack-initramfs
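The cross-arch find fallback might reduce to something like this
(hypothetical helper — the build host's modprobe can't resolve modules in a
foreign-arch tree, so locate the .ko file directly):

```shell
resolve_module() {
    moddir="$1" mod="$2"
    find "$moddir" -name "${mod}.ko*" 2>/dev/null | head -n 1
}
```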

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 17:20:04 -06:00
efc7f80b65 feat: add security hardening, AppArmor, and ARM64 Raspberry Pi support (Phase 6)
Some checks failed
CI / Go Tests (push) Has been cancelled
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Has been cancelled
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Has been cancelled
CI / Shellcheck (push) Has been cancelled
Security hardening: bind kubeconfig server to localhost, mount hardening
(noexec/nosuid/nodev on tmpfs), sysctl network hardening, kernel module
loading lock after boot, SHA256 checksum verification for downloads,
kernel AppArmor + Audit support, complain-mode AppArmor profiles for
containerd and kubelet, and security integration test.

ARM64 Raspberry Pi support: piCore64 base extraction, RPi kernel build
from raspberrypi/linux fork, RPi firmware fetch, SD card image with 4-
partition GPT and tryboot A/B mechanism, BootEnv Go interface abstracting
GRUB vs RPi boot environments, architecture-aware build scripts, QEMU
aarch64 dev VM and boot test.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 13:08:17 -06:00
d9ac58418d fix: macOS dev VM, CA certs, DNS fallback, Portainer Edge integration
Some checks failed
CI / Go Tests (push) Has been cancelled
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Has been cancelled
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Has been cancelled
CI / Shellcheck (push) Has been cancelled
- dev-vm.sh: rewrite for macOS (bsdtar ISO extraction, Homebrew mkfs.ext4
  detection, direct kernel boot, TCG acceleration, port 8080 forwarding)
- inject-kubesolo.sh: add CA certificates bundle from builder so containerd
  can verify TLS when pulling from registries (Docker Hub, etc.)
- 50-network.sh: add DNS fallback (10.0.2.3 + 8.8.8.8) when DHCP client
  doesn't populate /etc/resolv.conf
- 90-kubesolo.sh: serve kubeconfig via HTTP on port 8080 for reliable
  retrieval from host, add 127.0.0.1 and 10.0.2.15 to API server SANs
- portainer.go: add headless Service to Edge Agent manifest (required for
  agent peer discovery DNS lookup)
- 10-parse-cmdline.sh + init.sh: add kubesolo.edge_id/edge_key boot params
- 20-persistent-mount.sh: auto-format unformatted data disks on first boot
- hack/fix-portainer-service.sh: helper to patch running cluster
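The 50-network.sh DNS fallback is likely just a guard on an empty
resolv.conf; a sketch with the path parameterized (addresses from the list
above):

```shell
ensure_dns_fallback() {
    resolv="$1"
    [ -s "$resolv" ] && return 0    # DHCP client already populated it; leave alone
    printf 'nameserver 10.0.2.3\nnameserver 8.8.8.8\n' > "$resolv"
}
```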

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 02:11:31 -06:00
39732488ef feat: custom kernel build + boot fixes for working container runtime
Build a custom Tiny Core 17.0 kernel (6.18.2) with missing configs
that the stock kernel lacks for container workloads:
- CONFIG_CGROUP_BPF=y (cgroup v2 device control via BPF)
- CONFIG_DEVTMPFS=y (auto-create /dev device nodes)
- CONFIG_DEVTMPFS_MOUNT=y (auto-mount devtmpfs)
- CONFIG_MEMCG=y (memory cgroup controller for memory.max)
- CONFIG_CFS_BANDWIDTH=y (CPU bandwidth throttling for cpu.max)

Also strips unnecessary subsystems (sound, GPU, wireless, Bluetooth,
KVM, etc.) for minimal footprint on a headless K8s edge appliance.

Init system fixes for successful boot-to-running-pods:
- Add switch_root in init.sh to escape initramfs (runc pivot_root)
- Add mountpoint guards in 00-early-mount.sh (skip if already mounted)
- Create essential device nodes after switch_root (kmsg, console, etc.)
- Enable cgroup v2 controller delegation with init process isolation
- Mount BPF filesystem for cgroup v2 device control
- Add mknod fallback from sysfs in 20-persistent-mount.sh for /dev/vda
- Move KubeSolo binary to /usr/bin (avoid /usr/local bind mount hiding)
- Generate /etc/machine-id in 60-hostname.sh (kubelet requires it)
- Pre-initialize iptables tables before kube-proxy starts
- Add nft_reject, nft_fib, xt_nfacct to kernel modules list
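The mountpoint guards in 00-early-mount.sh can reduce to a check against
/proc/mounts (helper name is an assumption; it avoids relying on a
mountpoint applet being compiled into busybox):

```shell
is_mounted() {
    # match the target against the second field of /proc/mounts
    awk -v t="$1" '$2 == t { found = 1 } END { exit !found }' /proc/mounts
}

# e.g.: is_mounted /sys || mount -t sysfs sysfs /sys
```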

Build system changes:
- New build-kernel.sh script for custom kernel compilation
- Dockerfile.builder adds kernel build deps (flex, bison, libelf, etc.)
- Selective kernel module install (only modules.list + transitive deps)
- Install iptables-nft (xtables-nft-multi) + shared libs in rootfs

Tested: ISO boots in QEMU, node reaches Ready in ~35s, CoreDNS and
local-path-provisioner pods start and run successfully.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 23:13:31 -06:00
8d25e1890e feat: add A/B partition updates with GRUB and Go update agent (Phase 3)
Implement atomic OS updates via A/B partition scheme with automatic
rollback. GRUB bootloader manages slot selection with a 3-attempt
boot counter that auto-rolls back on repeated health check failures.

GRUB boot config:
- A/B slot selection with boot_counter/boot_success env vars
- Automatic rollback when counter reaches 0 (3 failed boots)
- Debug, emergency shell, and manual slot-switch menu entries

Disk image (refactored):
- 4-partition GPT layout: EFI + System A + System B + Data
- GRUB EFI/BIOS installation with graceful fallbacks
- Both system partitions populated during image creation

Update agent (Go, zero external deps):
- pkg/grubenv: read/write GRUB env vars (grub-editenv + manual fallback)
- pkg/partition: find/mount/write system partitions by label
- pkg/image: HTTP download with SHA256 verification
- pkg/health: post-boot checks (containerd, API server, node Ready)
- 6 CLI commands: check, apply, activate, rollback, healthcheck, status
- 37 unit tests across all 4 packages
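The manual grubenv fallback has a simple shell equivalent (a sketch, not
the pkg/grubenv Go code: GRUB's environment block is plain key=value lines
padded with '#' characters to 1024 bytes):

```shell
grubenv_get() {
    file="$1" key="$2"
    sed -n "s/^${key}=//p" "$file"
}
```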

Deployment:
- K8s CronJob for automatic update checks (every 6 hours)
- ConfigMap for update server URL
- Health check Job for post-boot verification

Build pipeline:
- build-update-agent.sh compiles static Linux binary (~5.9 MB)
- inject-kubesolo.sh includes update agent in initramfs
- Makefile: build-update-agent, test-update-agent, test-update targets

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 11:12:46 -06:00
d900fa920e feat: add cloud-init Go parser (Phase 2)
Implement a lightweight cloud-init system for first-boot configuration:
- Go parser for YAML config (hostname, network, KubeSolo settings)
- Static/DHCP network modes with DNS override
- KubeSolo extra flags and API server SAN configuration
- Portainer Edge Agent and air-gapped deployment support
- New init stage 45-cloud-init.sh runs before network/hostname stages
- Stages 50/60 skip gracefully when cloud-init has already applied
- Build script compiles static Linux/amd64 binary (~2.7 MB)
- 17 unit tests covering parsing, validation, and example files
- Full documentation at docs/cloud-init.md
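A config in the spirit of the documented fields might look like this (key
names and values are illustrative assumptions; docs/cloud-init.md is
authoritative):

```yaml
hostname: edge-node-01
network:
  mode: static            # or dhcp
  address: 192.168.1.50/24
  gateway: 192.168.1.1
  dns: [1.1.1.1, 8.8.8.8]
kubesolo:
  extra_flags: "--debug"
  api_sans: [edge.example.com]
```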

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:39:05 -06:00
e372df578b feat: initial Phase 1 PoC scaffolding for KubeSolo OS
Complete Phase 1 implementation of KubeSolo OS — an immutable, bootable
Linux distribution built on Tiny Core Linux for running KubeSolo
single-node Kubernetes.

Build system:
- Makefile with fetch, rootfs, initramfs, iso, disk-image targets
- Dockerfile.builder for reproducible builds
- Scripts to download Tiny Core, extract rootfs, inject KubeSolo,
  pack initramfs, and create bootable ISO/disk images

Init system (10 POSIX sh stages):
- Early mount (proc/sys/dev/cgroup2), cmdline parsing, persistent
  mount with bind-mounts, kernel module loading, sysctl, DHCP
  networking, hostname, clock sync, containerd prep, KubeSolo exec

Shared libraries:
- functions.sh (device wait, IP lookup, config helpers)
- network.sh (static IP, config persistence, interface detection)
- health.sh (containerd, API server, node readiness checks)
- Emergency shell for boot failure debugging

Testing:
- QEMU boot test with serial log marker detection
- K8s readiness test with kubectl verification
- Persistence test (reboot + verify state survives)
- Workload deployment test (nginx pod)
- Local storage test (PVC + local-path provisioner)
- Network policy test
- Reusable run-vm.sh launcher
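The serial-log marker detection probably reduces to a polling grep over the
QEMU serial log; a sketch (marker string, timeout, and helper name are
assumptions):

```shell
wait_for_marker() {
    log="$1" marker="$2" timeout="${3:-120}"
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        grep -q "$marker" "$log" 2>/dev/null && return 0
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1
}
```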

Developer tools:
- dev-vm.sh (interactive QEMU with port forwarding)
- rebuild-initramfs.sh (fast iteration)
- inject-ssh.sh (dropbear SSH for debugging)
- extract-kernel-config.sh + kernel-audit.sh

Documentation:
- Full design document with architecture research
- Boot flow documentation covering all 10 init stages
- Cloud-init examples (DHCP, static IP, Portainer Edge, air-gapped)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:18:42 -06:00