fix(arm64): bundle nft binary + always show access banner
Some checks failed
ARM64 Build / Build generic ARM64 disk image (push) Failing after 5s
CI / Go Tests (push) Successful in 1m55s
CI / Shellcheck (push) Successful in 53s
CI / Build Go Binaries (amd64, linux, linux-amd64) (push) Failing after 1m0s
CI / Build Go Binaries (arm64, linux, linux-arm64) (push) Successful in 2m18s

Two real v0.3.0 bugs that surface on first boot:

1. KubeSolo v1.1.4+ owns its pod-masquerade rules directly via
     nft add table ip kubesolo-masq
   instead of going through kube-proxy/CNI. Without the standalone nft
   CLI in PATH, KubeSolo FATALs at startup with:
     "nft": executable file not found in $PATH
   then the init exits and the kernel panics on PID 1 death.

   inject-kubesolo.sh now also copies /usr/sbin/nft and the shared
   libraries it needs that weren't already copied (libnftables, libedit,
   libjansson, libgmp, libtinfo, libbsd, libmd). The iptables-nft block
   above already covered libmnl, libnftnl, libxtables, libc, ld.
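   That library delta can be derived mechanically instead of by hand. A
   minimal sketch using ldd; the `list_deps` helper is illustrative and not
   part of inject-kubesolo.sh, and `/bin/ls` stands in for the real target
   binary:

```shell
#!/bin/sh
# Sketch: print the resolved shared-library paths a dynamic binary needs,
# so they can be copied into a minimal rootfs alongside the binary itself.
# list_deps and the /bin/ls example are hypothetical stand-ins.
list_deps() {
    # ldd lines of the form "libfoo.so => /path/libfoo.so (0x...)"
    # carry the resolved path in field 3
    ldd "$1" 2>/dev/null | awk '/=> \//{print $3}' | sort -u
}

list_deps /bin/ls
```

   Running the same helper against the builder's /usr/sbin/nft and diffing
   against what the iptables-nft block already copied yields exactly the
   kind of library list the commit hardcodes.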

2. The host-access banner ("From your host machine, run: curl -s
   http://localhost:8080 ...") was gated on the kubeconfig appearing
   within 120s. When KubeSolo crashed early (bug 1 above) or simply took
   longer than the wait window, the user never saw the connection
   instructions.

   90-kubesolo.sh now:
     - writes the banner to /etc/motd so it shows on any later shell
       (SSH ext, emergency shell, console login)
     - prints the banner to console unconditionally, after the wait
       loop, regardless of whether the kubeconfig was found
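   The revised flow can be sketched as one small function. Names, the
   timeout, and the motd path are illustrative stand-ins; the real logic
   lives inline in 90-kubesolo.sh:

```shell
#!/bin/sh
# Sketch of the fixed banner flow: bounded wait for the kubeconfig, then
# persist the banner to motd AND print it to the console unconditionally.
# wait_then_banner and MOTD_PATH are hypothetical names for this sketch.
MOTD_PATH=${MOTD_PATH:-/etc/motd}

wait_then_banner() {
    file=$1
    banner=$2
    waited=0
    while [ ! -f "$file" ] && [ "$waited" -lt 120 ]; do
        sleep 1
        waited=$((waited + 1))
    done
    # motd: visible to any later shell (SSH ext, emergency shell, console)
    printf '%s\n' "$banner" > "$MOTD_PATH" 2>/dev/null || true
    # console: printed whether or not the kubeconfig ever appeared
    printf '%s\n' "$banner"
}
```

   Called as `wait_then_banner "$KUBECONFIG_PATH" "$ACCESS_BANNER"`, the
   instructions now reach the console even on the timeout path that
   previously swallowed them.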

Both fixes are pure rootfs changes — no kernel rebuild required.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-15 07:16:12 -06:00
parent f8c308d9b7
commit 51c1f78aea
2 changed files with 54 additions and 8 deletions


@@ -420,6 +420,30 @@ else
echo " WARN: xtables-nft-multi not found in builder (install iptables package)"
fi
# Install nft (nftables CLI). KubeSolo v1.1.4+ uses `nft add table ip
# kubesolo-masq` to own pod masquerade rules directly instead of going
# through kube-proxy/CNI. Without nft in PATH, KubeSolo FATALs at startup
# with: nft: executable file not found in $PATH.
echo " Installing nft (nftables CLI) from builder..."
if [ -f /usr/sbin/nft ]; then
cp /usr/sbin/nft "$ROOTFS/usr/sbin/"
# nft pulls in libnftables + a few extras beyond what iptables-nft needed.
# libmnl, libnftnl, libxtables already copied by the iptables-nft block.
for lib in \
"/lib/$LIB_ARCH/libnftables.so.1"* \
"/lib/$LIB_ARCH/libedit.so.2"* \
"/lib/$LIB_ARCH/libjansson.so.4"* \
"/lib/$LIB_ARCH/libgmp.so.10"* \
"/lib/$LIB_ARCH/libtinfo.so.6"* \
"/lib/$LIB_ARCH/libbsd.so.0"* \
"/lib/$LIB_ARCH/libmd.so.0"*; do
[ -e "$lib" ] && cp -aL "$lib" "$ROOTFS${lib}" 2>/dev/null || true
done
echo " Installed nft + shared libs"
else
echo " WARN: nft not found in builder (install nftables package) — KubeSolo v1.1.4+ pod masquerade will fail"
fi
# Kernel modules list (for init to load at boot)
if [ "$INJECT_ARCH" = "arm64" ]; then
cp "$PROJECT_ROOT/build/config/modules-arm64.list" "$ROOTFS/usr/lib/kubesolo-os/modules.list"


@@ -76,6 +76,29 @@ while [ ! -f "$KUBECONFIG_PATH" ] && [ $WAIT -lt 120 ]; do
fi
done
# Render the access banner. Written to /etc/motd so it's visible to anyone
# who later shells in (SSH extension, emergency shell, console login), and
# printed unconditionally to console below so the user sees it even when
# KubeSolo hasn't yet finished generating the kubeconfig.
ACCESS_BANNER="$(cat <<'BANNER'
============================================================
KubeSolo OS — host access
From your host machine, run:
curl -s http://localhost:8080 > ~/.kube/kubesolo-config
kubectl --kubeconfig ~/.kube/kubesolo-config get nodes
Notes:
- port 8080 serves the kubeconfig (admin) over HTTP
- port 6443 serves the Kubernetes API (HTTPS)
- Both ports are forwarded under QEMU's `-net user,hostfwd=…` config
============================================================
BANNER
)"
printf '%s\n' "$ACCESS_BANNER" > /etc/motd 2>/dev/null || true
if [ -f "$KUBECONFIG_PATH" ]; then
log_ok "KubeSolo is running (PID $KUBESOLO_PID)"
@@ -95,18 +118,17 @@ if [ -f "$KUBECONFIG_PATH" ]; then
done) &
log_ok "Kubeconfig available via HTTP on port 8080"
echo ""
echo "============================================================"
echo " From your host machine, run:"
echo ""
echo " curl -s http://localhost:8080 > ~/.kube/kubesolo-config"
echo " kubectl --kubeconfig ~/.kube/kubesolo-config get nodes"
echo "============================================================"
echo ""
else
log_warn "Kubeconfig not found after ${WAIT}s — KubeSolo may still be starting"
log_warn "Check manually: cat $KUBECONFIG_PATH"
fi
# Show the banner regardless of kubeconfig state: the HTTP server above only
# starts on success, but printing the instructions during the long first-boot
# wait is useful and harmless (user retries the curl until it 200s).
echo ""
printf '%s\n' "$ACCESS_BANNER"
echo ""
# Keep init alive — wait on KubeSolo process
wait $KUBESOLO_PID