kubesolo-os/init/lib/50-network.sh
Adolfo Delorenzo 39732488ef feat: custom kernel build + boot fixes for working container runtime
Build a custom kernel (6.18.2) for Tiny Core 17.0, enabling configs
the stock kernel lacks for container workloads:
- CONFIG_CGROUP_BPF=y (cgroup v2 device control via BPF)
- CONFIG_DEVTMPFS=y (auto-create /dev device nodes)
- CONFIG_DEVTMPFS_MOUNT=y (auto-mount devtmpfs)
- CONFIG_MEMCG=y (memory cgroup controller for memory.max)
- CONFIG_CFS_BANDWIDTH=y (CPU bandwidth throttling for cpu.max)
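A small sanity check can catch a stock kernel missing these options before a build is even attempted. The helper below is not part of the commit; it is a hedged sketch that greps a kernel `.config` for the five options listed above.

```shell
#!/bin/sh
# Hypothetical helper (not in the commit): verify the container-related
# kernel options are enabled in a given .config file.
check_container_configs() {
    cfg="$1"
    rc=0
    for opt in CONFIG_CGROUP_BPF CONFIG_DEVTMPFS CONFIG_DEVTMPFS_MOUNT \
               CONFIG_MEMCG CONFIG_CFS_BANDWIDTH; do
        # Anchored match so e.g. CONFIG_MEMCG does not match CONFIG_MEMCG_SWAP
        if ! grep -q "^${opt}=y" "$cfg"; then
            echo "missing: $opt"
            rc=1
        fi
    done
    return $rc
}
```

Run against the configured kernel tree (`check_container_configs linux-6.18.2/.config`) before invoking make, so a misconfigured build fails fast instead of at pod start.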

Also strips unnecessary subsystems (sound, GPU, wireless, Bluetooth,
KVM, etc.) for minimal footprint on a headless K8s edge appliance.

Init system fixes for successful boot-to-running-pods:
- Add switch_root in init.sh to escape initramfs (runc pivot_root)
- Add mountpoint guards in 00-early-mount.sh (skip if already mounted)
- Create essential device nodes after switch_root (kmsg, console, etc.)
- Enable cgroup v2 controller delegation with init process isolation
- Mount BPF filesystem for cgroup v2 device control
- Add mknod fallback from sysfs in 20-persistent-mount.sh for /dev/vda
- Move KubeSolo binary to /usr/bin (avoid /usr/local bind mount hiding)
- Generate /etc/machine-id in 60-hostname.sh (kubelet requires it)
- Pre-initialize iptables tables before kube-proxy starts
- Add nft_reject, nft_fib, xt_nfacct to kernel modules list
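Of these fixes, the machine-id generation is simple to illustrate. Below is a minimal sketch, assuming the actual code in 60-hostname.sh may differ: kubelet expects a non-empty 32-character lowercase-hex ID in /etc/machine-id. The target path is parameterized here purely for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of machine-id generation (the real 60-hostname.sh
# implementation may differ). kubelet requires a populated /etc/machine-id.
generate_machine_id() {
    target="${1:-/etc/machine-id}"
    # Keep an existing non-empty ID stable across boots
    [ -s "$target" ] && return 0
    # 16 random bytes, hex-encoded to 32 lowercase characters
    od -An -tx1 -N16 /dev/urandom | tr -d ' \n' > "$target"
    echo >> "$target"
}
```

Reusing an existing ID matters: regenerating it on every boot would make the node look like a new machine to kubelet.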

Build system changes:
- New build-kernel.sh script for custom kernel compilation
- Dockerfile.builder adds kernel build deps (flex, bison, libelf, etc.)
- Selective kernel module install (only modules.list + transitive deps)
- Install iptables-nft (xtables-nft-multi) + shared libs in rootfs
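The "transitive deps" part of the selective module install can be sketched as a fixed-point walk over modules.dep, whose lines have the form "path/mod.ko: dep1.ko dep2.ko". This is an illustrative sketch, not the actual build-kernel.sh logic, and the function name is an assumption.

```shell
#!/bin/sh
# Hypothetical sketch of resolving transitive module dependencies from
# modules.dep (the real build-kernel.sh may do this differently).
resolve_module_deps() {
    depfile="$1"; shift
    want="$*"
    prev=""
    # Iterate to a fixed point: keep adding direct deps until nothing new appears
    while [ "$want" != "$prev" ]; do
        prev="$want"
        for m in $want; do
            for d in $(grep "^$m:" "$depfile" | cut -d: -f2-); do
                case " $want " in
                    *" $d "*) ;;            # already listed
                    *) want="$want $d" ;;   # newly discovered dependency
                esac
            done
        done
    done
    for m in $want; do echo "$m"; done
}
```

Feeding modules.list through this and installing only the resulting set keeps the rootfs small while still satisfying modprobe at runtime.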

Tested: ISO boots in QEMU, node reaches Ready in ~35s, CoreDNS and
local-path-provisioner pods start and run successfully.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 23:13:31 -06:00


#!/bin/sh
# 50-network.sh — Configure networking
# Priority: cloud-init (stage 45) > saved config > DHCP fallback
# If cloud-init already configured networking, skip this stage
if [ "$CLOUDINIT_APPLIED" = "1" ]; then
    log "Network already configured by cloud-init — skipping"
    return 0
fi
# Check for saved network config (from previous boot or cloud-init)
if [ -f "$DATA_MOUNT/network/interfaces.sh" ]; then
    log "Applying saved network configuration"
    . "$DATA_MOUNT/network/interfaces.sh"
    return 0
fi
# Fallback: DHCP on first non-loopback interface
log "Configuring network via DHCP"
# Bring up loopback (use ifconfig for BusyBox compatibility)
ifconfig lo 127.0.0.1 netmask 255.0.0.0 up 2>/dev/null || \
    { ip link set lo up 2>/dev/null && ip addr add 127.0.0.1/8 dev lo 2>/dev/null; } || true
# Find first ethernet interface
ETH_DEV=""
for iface in /sys/class/net/*; do
    iface="$(basename "$iface")"
    case "$iface" in
        lo|docker*|veth*|br*|cni*|dummy*|tunl*|sit*) continue ;;
    esac
    ETH_DEV="$iface"
    break
done
if [ -z "$ETH_DEV" ]; then
    log_err "No network interface found"
    return 1
fi
log "Using interface: $ETH_DEV"
ifconfig "$ETH_DEV" up 2>/dev/null || ip link set "$ETH_DEV" up 2>/dev/null || true
# Run DHCP client (BusyBox udhcpc)
if command -v udhcpc >/dev/null 2>&1; then
    udhcpc -i "$ETH_DEV" -s /usr/share/udhcpc/default.script \
        -t 10 -T 3 -A 5 -b -q 2>/dev/null || {
        log_err "DHCP failed on $ETH_DEV"
        return 1
    }
elif command -v dhcpcd >/dev/null 2>&1; then
    dhcpcd "$ETH_DEV" || {
        log_err "DHCP failed on $ETH_DEV"
        return 1
    }
else
    log_err "No DHCP client available (need udhcpc or dhcpcd)"
    return 1
fi
log_ok "Network configured on $ETH_DEV"
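
The udhcpc invocation above depends on /usr/share/udhcpc/default.script. The actual script shipped in the image may differ; below is a minimal sketch assuming the standard BusyBox udhcpc contract ($1 is the event, lease details arrive in environment variables such as $interface, $ip, $subnet, $router, $dns). The RESOLV_CONF override and the handle_event wrapper are additions for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of a udhcpc default.script (the file in the image
# may differ). udhcpc invokes it with $1 = deconfig|bound|renew|nak.
handle_event() {
    case "$1" in
        deconfig)
            # Lost or releasing the lease: clear the address
            ifconfig "$interface" 0.0.0.0
            ;;
        bound|renew)
            ifconfig "$interface" "$ip" netmask "${subnet:-255.255.255.0}"
            # Install the DHCP-supplied default gateway, if any
            [ -n "$router" ] && route add default gw "$router" dev "$interface" 2>/dev/null
            # Rewrite resolv.conf from the offered DNS servers
            if [ -n "$dns" ]; then
                : > "${RESOLV_CONF:-/etc/resolv.conf}"
                for s in $dns; do
                    echo "nameserver $s" >> "${RESOLV_CONF:-/etc/resolv.conf}"
                done
            fi
            ;;
    esac
}
handle_event "${1:-}"
```

Keeping this hook small matters on a BusyBox rootfs: it runs on every lease event, before any of the Kubernetes components are reachable over the network.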