Over the past few weeks I have been busy rebuilding my Kubernetes (K8s) cluster. As a foreword:

I think this sums it up…to operate Kubernetes at home, you pretty much have to be part (web) developer, part network engineer, part infrastructure engineer and generally a geek…you must also be an expert in “Google-fu”…and it helps if you are nearly bald (it is harder to tear your hair out when it is only a few millimetres long).

But I have never been one to give up easily, so – admittedly with a lot of help from the community at k8s-at-home – I have now almost completed my new cluster. Starting from the template available there, I have created my own version as it stands today…this is very much a work in progress, so I will not go into details here, as my setup may change significantly over time.

Currently I have:

  • a heavily modified version of the original cluster template
  • the cluster is running on my own servers (currently four i3 mini PCs, but I plan to upgrade these to proper servers or more capable workstations)

  • I am using Flux2, which keeps the cluster in sync with the GitHub repository; it also notifies me of changes via Discord
  • renovatebot keeps the dependencies updated
  • external-dns keeps the ingress records in sync with my Cloudflare DNS (so I do not have to worry about creating them as I add services)
  • and a simple Dynamic DNS ensures my domain points to my external IP
  • all my important services have ingresses, and I am using forward auth with Google OAuth2 to secure them from the Internet; they are available – with the whitelisted credentials – via a minimalist Hajimari start page

  • I now have a complete media-management stack (the *arr apps and Plex) as well as a few useful apps (as seen in the start page’s screenshot); the applications’ configs live on Ceph block storage, while the media library and other data are stored on a NAS (admittedly, this is currently a single point of failure, as the NAS also provides the iSCSI disks for Ceph, but that is something I intend to revisit)
  • Currently I am using Lens to visualise the cluster, but I am working on deploying a proper metrics stack with Grafana, as well as a backup solution for disaster-recovery scenarios.
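
To give a flavour of how the Flux2 GitOps loop works, here is a minimal sketch of the two resources involved: a `GitRepository` that Flux polls, and a `Kustomization` that applies its manifests. The names, URL, path and intervals below are hypothetical placeholders, not my actual setup.

```yaml
# Sketch only: Flux watches a Git repository and continuously applies
# the manifests found under ./cluster, pruning anything removed from Git.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: home-cluster            # placeholder name
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/example/home-cluster   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: cluster
  namespace: flux-system
spec:
  interval: 10m
  path: ./cluster               # placeholder path within the repo
  prune: true                   # delete resources that disappear from Git
  sourceRef:
    kind: GitRepository
    name: home-cluster
```

With this in place, a merged pull request in the repository is all it takes to change the cluster – which is also exactly what makes the renovatebot dependency updates hands-off.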
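The forward-auth pattern for protecting an ingress can be sketched roughly as follows, assuming a Traefik ingress controller with a forward-auth service (such as traefik-forward-auth) already running – the hostnames, namespaces and service names here are all made up for illustration:

```yaml
# Sketch only: a Traefik Middleware delegates authentication to a
# forward-auth service, and an Ingress opts into it via annotation.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: forward-auth
  namespace: networking
spec:
  forwardAuth:
    # placeholder address of the forward-auth service
    address: http://traefik-forward-auth.networking.svc.cluster.local:4181
    authResponseHeaders:
      - X-Forwarded-User
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp                   # placeholder app
  namespace: default
  annotations:
    # format: <namespace>-<middleware-name>@kubernetescrd
    traefik.ingress.kubernetes.io/router.middlewares: networking-forward-auth@kubernetescrd
spec:
  rules:
    - host: myapp.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```

Any unauthenticated request to the host is redirected to the Google OAuth2 login, and only whitelisted accounts get through to the backing service.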

I might write some detailed guides on the creation of the cluster (I will probably omit the part where I nearly gave up after failing to get Authentik to work properly, or the part where Ceph drove me up the wall, thanks to my complete unfamiliarity with it). Until then, my notes will have to do!