
Homelab and IaC playground

A personal infrastructure lab that evolved into a reproducible production baseline: a secure VPS set up in under 15 minutes, then weekly maintenance on autopilot.

Project Signals

<15 min
secure VPS baseline setup
3 VPS + 2 NAS
currently operated in the lab
0
major security incidents so far

Overview

This homelab started as a personal playground, but it quickly became something more useful: a place to build a repeatable infrastructure baseline that can go from empty VPS to production-ready in under 15 minutes.

What matters in this portfolio is not the “homelab” label itself. What matters is the operating model behind it: automated provisioning, secure defaults, and maintenance that keeps running after day one.

The practices validated here have already been reused in real delivery contexts, including an Ansible + Docker Compose setup for a client VPS.

It also powers real environments I operate directly: the infrastructure hosting Bibliocards and its development environment.

The turning point

The project changed when I did a full infrastructure reset in 2025.

Before that, the setup was functional but still too manual, too personal, and too dependent on memory. It worked for experimentation, but not for repeatable execution.

The reset objective was simple: if I need to bring up a new server, I should be able to do it quickly, safely, and the same way every time.

That shift turned the lab from a collection of scripts into a reproducible system.

Ready for production in minutes

The current baseline is designed around a clear target: provision, secure, and connect a server in less than 15 minutes.

In that flow, automation handles:

  • creation of admin and automation users
  • SSH hardening and restrictive firewall defaults
  • Fail2Ban and baseline security settings
  • Docker runtime setup
  • automatic private-network join through the Tailscale mesh
  • DNS as code workflows and CI hooks
  • host onboarding into Dokploy
  • monitoring bootstrap with Beszel agent

Application-specific Docker Compose stacks are added afterward in Dokploy based on project needs, but the secure platform baseline is already in place.

Technical proof: bootstrap baseline

The initial hardening is codified as Ansible tasks, so every new host gets the same security baseline by default:

- name: Ensure Ansible automation user exists
  ansible.builtin.user:
    name: "{{ ansible_automation_user }}"
    groups: sudo
    append: true

- name: Allow passwordless sudo for automation user
  ansible.builtin.copy:
    dest: "/etc/sudoers.d/90-{{ ansible_automation_user }}"
    mode: '0440'
    content: "{{ ansible_automation_user }} ALL=(ALL) NOPASSWD:ALL\n"
    validate: /usr/sbin/visudo -cf %s
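The SSH-hardening step mentioned earlier follows the same pattern. A minimal sketch, assuming OpenSSH's stock config location; the option names are standard `sshd_config` directives, but the handler name is illustrative:

```yaml
- name: Disable root login and password authentication
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?{{ item.key }}'
    line: "{{ item.key }} {{ item.value }}"
    # refuse to write a config that sshd cannot parse
    validate: /usr/sbin/sshd -t -f %s
  loop:
    - { key: PermitRootLogin, value: "no" }
    - { key: PasswordAuthentication, value: "no" }
  notify: Restart sshd
```

The `validate` step matters here for the same reason as the `visudo` check above: a typo in either file can lock you out of the host.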

Not just provisioning: operating discipline

This lab is not only about first deployment speed. It is also about keeping systems healthy over time.

A weekly maintenance pipeline runs automatically against VPS targets:

  • dependency and package updates
  • system cleanup
  • conditional reboot when required
  • post-maintenance checks on critical services

That gives me a practical operating loop: bootstrap fast, then maintain continuously without manual drift.

Technical proof: weekly updates on autopilot

The maintenance loop is scheduled in CI rather than run manually whenever I happen to remember it.

on:
  schedule:
    # every Sunday at 03:00 UTC
    - cron: '0 3 * * 0'
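A minimal sketch of the job that this schedule triggers; the playbook path and inventory name below are illustrative, not the actual repository layout:

```yaml
jobs:
  maintenance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # connect the runner to the tailnet here (e.g. via Tailscale's
      # GitHub Action) so private hosts are reachable
      - name: Run maintenance playbook
        run: ansible-playbook -i inventory.ini maintenance.yml
```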

Each run applies dependency updates and validates service health before considering the cycle complete:

- name: Upgrade packages
  ansible.builtin.apt:
    upgrade: dist

# ansible_facts.services is only populated after gathering service facts
- name: Collect service state
  ansible.builtin.service_facts:

- name: Ensure critical services are running
  ansible.builtin.assert:
    that:
      - ansible_facts.services[item + '.service'] is defined
      - ansible_facts.services[item + '.service'].state == 'running'
  # critical_services is a host/group var, e.g. [docker, tailscaled]
  loop: "{{ critical_services }}"

This is a small detail with a big operational effect: security and stability stay maintained even when no human is actively touching the servers that week.

Technical proof: dynamic Tailscale inventory

Weekly maintenance does not depend on static IP files. CI builds the inventory dynamically from Tailscale peers tagged tag:vps:

tailscale status --json | jq -r '
  .Peer
  | to_entries
  | map(.value)
  | map(select((.Tags // []) | index("tag:vps")))
  | .[]
  | "\(.HostName) ansible_host=\((.TailscaleIPs // []) | first)"
'
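To make the filter's behavior concrete, here is the same selection logic sketched in Python against a trimmed, hypothetical status payload; the host names, node keys, and IPs are made up, and the payload keeps only the fields the filter relies on:

```python
import json

# Hypothetical, trimmed `tailscale status --json` output:
# Peer is an object keyed by node key, one entry per peer.
status = json.loads("""
{
  "Peer": {
    "nodekey:aaa": {"HostName": "vps-1", "Tags": ["tag:vps"], "TailscaleIPs": ["100.64.0.1"]},
    "nodekey:bbb": {"HostName": "nas-1", "Tags": ["tag:nas"], "TailscaleIPs": ["100.64.0.2"]},
    "nodekey:ccc": {"HostName": "vps-2", "TailscaleIPs": ["100.64.0.3"]}
  }
}
""")

# Same selection as the jq filter: keep only peers tagged tag:vps,
# emit one Ansible inventory line per matching host.
lines = [
    f"{peer['HostName']} ansible_host={(peer.get('TailscaleIPs') or [None])[0]}"
    for peer in status["Peer"].values()
    if "tag:vps" in (peer.get("Tags") or [])
]
print("\n".join(lines))  # vps-1 ansible_host=100.64.0.1
```

Note that `vps-2` is excluded even though its name suggests otherwise: only the `tag:vps` ACL tag decides membership, which is exactly why the inventory cannot drift.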

That removes manual inventory drift and keeps maintenance targeting aligned with the current mesh state.

Infrastructure model

Today the lab runs across 3 VPS and 2 NAS, with all server management aligned on the same automation backbone:

  • GitHub Actions for repeatable workflow execution
  • Ansible playbooks for provisioning and maintenance
  • Tailscale for private mesh access
  • Dokploy for deployment surface
  • OctoDNS for DNS configuration as code

The same baseline currently supports both production and non-production workloads, including Bibliocards and its dev environment.

The key point is not tool count. It is coherence: each server follows the same model, which makes operations safer and troubleshooting faster.

What I own and what this proves

This lab is useful portfolio evidence because it demonstrates applied ops, not isolated experimentation.

I own the full loop:

  • provisioning and hardening automation
  • CI workflows for bootstrap and weekly maintenance
  • deployment handoff through Dokploy
  • DNS as code and environment consistency

In practice, it shows that the setup can be reproduced quickly and maintained without much manual work.