This repository contains the setup to configure my private homelab. It builds on Ansible to configure the homelab nodes and provisions the services using Podman. The Ansible playbook configures an Alpine Linux machine for Charon and a Debian-based Pi OS Lite for Daisy. For local development, the Ansible playbook is run against an Alpine/Debian Linux VM, provisioned using Vagrant.
- `just` - Used as the task runner for this project. All common operations are available as `just` recipes.
- `direnv` (optional) - Can be used to automatically set up a project-local Python virtual environment and Ansible installation via the `.envrc` file. If not used, a system-wide Ansible installation works as well.
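Such a setup could be driven by an `.envrc` along these lines (a sketch only; the actual file in this repository may differ):

```
# .envrc — illustrative sketch; direnv evaluates this when entering the directory
layout python3               # direnv stdlib: create/activate a project-local virtualenv
pip install --quiet ansible  # in practice, pin versions via a requirements file
```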
The project structure is inspired by the golang-standards/project-layout convention:
| Directory | Description |
|---|---|
| `ansible/` | Ansible playbooks, roles, inventory and variables for provisioning the homelab nodes. |
| `configs/` | Configuration file templates used during manual setup steps (e.g. sshd configuration). |
| `deployments/` | Local development environment definitions (Vagrant VMs). |
| `scripts/` | Build, deploy and utility scripts invoked by `just` recipes. |
This installation guide describes how to set up the ACME Let's Encrypt certificate configuration, how to prepare the local and remote DNS entries, and how to install the OS and perform the manual setup steps.
- Install Alpine Linux on Charon.
- Download the latest basic Alpine Linux iso from the official website and verify its integrity and authenticity.
- Set up Alpine Linux according to the documentation in "System Disk Mode (sys)".
- Create a non-root user during this step (or later on). This user will be used by Ansible to configure the server; creating it avoids running Ansible as the root user.
- Choose "OpenSSH" when prompted for the ssh server.
- Choose "sys" when prompted for the Disk Mode.
- Reboot and login as root.
- Prepare data
  - Create a data USB drive with the documents found in the `configs/` folder.
  - Create a new ssh key for each device that needs to connect to the homelab machine via ssh, using `ssh-keygen`.
  - Add ssh configuration on the devices to more easily connect to the homelab machine.
  - Copy the public keys of the newly created ssh keys to the data USB drive.
  - Install mount on the Alpine Linux homelab node: `apk add mount`.
  - Mount the data USB drive using `mount /dev/sdXY /media/usb`.
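The key-creation and client-side configuration steps above could look like this (key path, host alias, address and user name are illustrative, not values from this repository):

```shell
# On each client device: create a dedicated key pair for the homelab node
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/homelab_charon" -C "laptop->charon"

# Client-side ssh configuration, so that `ssh charon` just works
cat >> "$HOME/.ssh/config" <<'EOF'
Host charon
    HostName 192.168.1.10
    User ansible
    IdentityFile ~/.ssh/homelab_charon
EOF
```

The matching public key (`homelab_charon.pub`) is the file that goes onto the data USB drive.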
- Set up the SSH daemon
  - Copy the sshd configuration files to the sshd config folder using `cp /media/usb/sshd_config/*.conf /etc/ssh/sshd_config.d/`.
  - Restart the sshd service: `rc-service sshd restart`.
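The drop-in files shipped in `configs/` are copied verbatim; as an illustration (not the actual repository content), such a hardening drop-in typically looks like:

```
# /etc/ssh/sshd_config.d/50-hardening.conf (illustrative)
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
```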
- Set up the Ansible user
  - Create a non-root user, with which the Ansible script will be run later on, using `setup-user`.
  - Navigate to the new user's home directory.
  - Create an `.ssh` folder in the home directory of this new user and set its owner and mode: `mkdir .ssh && chown <username> .ssh && chmod 755 .ssh`.
  - Navigate to this new `.ssh` folder.
  - Create an `authorized_keys` file in the `.ssh` directory and set its owner and mode: `touch authorized_keys && chown <username> authorized_keys && chmod 600 authorized_keys`.
  - Populate the `authorized_keys` file with the ssh public keys found on the mounted USB drive: `cat /media/usb/<public-key.pub> >> authorized_keys`.
- Elevate the Ansible user's privileges
  - Install doas (the Alpine Linux sudo equivalent): `apk add doas`.
  - Give the new user superuser privileges: `echo "permit persist keepenv <username>" >> /etc/doas.conf`.
- Prepare the system to be an Ansible target
  - Install Python on the system: `apk add python3`.
To access the services from anywhere in the world without exposing the node to the internet, a VPN solution such as Tailscale can be used.
- Install Tailscale using `apk add tailscale`.
- Add Tailscale to the autostart list: `doas rc-update add tailscale`.
- Start the Tailscale service: `doas rc-service tailscale start`. Check the status using `rc-service tailscale status`.
- Connect to the Tailnet: `doas tailscale up`.
- Disable key expiry for the node in the Tailscale admin panel, to avoid having to reauthenticate the node after its token expires. If this step is skipped, the services might become inaccessible at some point, until the node is reauthenticated using `doas tailscale up`.
The current setup uses Traefik's built-in ACME client, configured for Infomaniak's NameServer, to perform a DNS challenge
to obtain a valid Let's Encrypt certificate. To perform this challenge, Traefik, the reverse proxy used in this setup,
needs an appropriate access token with write privileges to the NameServer. If a different NameServer is to be used,
the environment variable passed to the Traefik container has to be modified. Furthermore, the
`certificatesResolvers.[resolver-name].acme.dnsChallenge.provider` value has to be updated to contain the
specific value required by the NameServer in use. In addition (not necessary, but it results in a cleaner
configuration), the name of the certresolver in the `service.yaml.j2` and `traefik.yaml.j2` files should be
updated.
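As a sketch, the resolver part of the Traefik configuration could look like this (resolver name, e-mail and storage path are illustrative; `infomaniak` is the provider identifier used by Traefik's ACME library, lego):

```yaml
# traefik.yaml fragment (illustrative values)
certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com
      storage: /data/acme.json
      dnsChallenge:
        provider: infomaniak   # change this when switching to a different NameServer
```

The access token itself is passed to the container via the environment variable the chosen provider expects; for Infomaniak, lego documents `INFOMANIAK_ACCESS_TOKEN`.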
- To improve network speed and avoid unnecessary routing, it is advised to set up a DNS entry in the local gateway. This can be done in the admin panel of the home router.
- To set up remote access over Tailscale, additional DNS entries have to be added manually to the NameServer of the domain to be used. An A and an AAAA record should be added with the node's Tailnet IPv4 and IPv6 addresses, respectively.
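The node's Tailnet addresses can be read off with `tailscale ip -4` and `tailscale ip -6`. The resulting zone entries might look like this (domain and addresses are placeholders):

```
; Illustrative zone entries pointing a hostname at the Tailnet node
charon.example.com.  300  IN  A     100.64.0.10
charon.example.com.  300  IN  AAAA  fd7a:115c:a1e0::10
```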
- Check that all secrets and variable values for all services are set correctly in the `ansible/vars/prod/` files. Take inspiration from the `ansible/vars/dev/` files to get a list of all necessary variables that have to be set. In addition to the ones found in the development variable files, the following secret variable has to be set:

  | Variable name | Description |
  |---|---|
  | `infomaniak_dns_api_token` | Token used to edit the DNS entries of the NameServer for the ACME challenge performed by Traefik. |

- Check that the `ansible/inventory/charon.ini` file is configured correctly.
- Execute `just deploy-prod charon`.
TODO
The following services are available on Charon:
| Service | Reserved Portrange | Exposed Service | Port of exposed Service | Directly accessible | Url Accessibility |
|---|---|---|---|---|---|
| Traefik | 10000 - 19999 | | | | |
| | | Traefik API/Dashboard | 8080 | ❌ | `traefik(.dev).*` |
| | | Traefik Web | 10080 | ✔️ | |
| | | Traefik Websecure | 10443 | ✔️ | |
| Immich | 20000 - 20099 | | | | |
| | | Immich Server | 20000 | ❌ | `immich(.dev).*` |
| Filebrowser | 20100 - 20199 | | | | |
| | | Filebrowser Server | 20100 | ❌ | `filebrowser(.dev).*` |
| PhotoPrism | 20200 - 20299 | | | | |
| | | PhotoPrism Server | 20200 | ❌ | `photoprism(.dev).*` |
| Gitea | 20300 - 20399 | | | | |
| | | Gitea Server | 20300 | ❌ | `gitea(.dev).*` |
| | | Gitea Server SSH | 20322 | ❌ | This is currently not working due to firewall and traefik configurations. Repositories can only be cloned via https. |
The following services are available on Daisy:
| Service | Reserved Portrange | Exposed Service | Port of exposed Service | Directly accessible | Url Accessibility |
|---|---|---|---|---|---|
| Traefik | 10000 - 19999 | | | | |
| | | Traefik API/Dashboard | 8080 | ❌ | `traefik(.dev).*` |
| | | Traefik Web | 10080 | ✔️ | |
| | | Traefik Websecure | 10443 | ✔️ | |
| Pihole | 20400 - 20499 | | | | |
| Homeassistant | 20500 - 20599 | | | | |
| | | Homeassistant Server | 20500 | ❌ | `homeassistant(.dev).*` |
| | | Zigbee2MQTT Frontend | 20501 | ❌ | `zigbee2mqtt(.dev).*` |
| Karakeep | 20600 - 20699 | | | | |
| | | Karakeep Server | 20600 | ❌ | `karakeep(.dev).*` |
| Miniflux | 20700 - 20799 | | | | |
| | | Miniflux Server | 20700 | ❌ | `miniflux(.dev).*` |
To introduce a new service, follow these steps:
- Create a new role in the `ansible/roles/service` directory with the name of the service.
  - Implement the new role in accordance with the other roles.
  - Every service has its own user, to increase security and to keep the independent services clearly separated.
- Add all relevant configuration for the new service to the `ansible/group_vars/all.yaml` file.
  - Secrets and other environment-dependent variables have to be added to the `ansible/vars/<environment>/<node>.yaml` file.
- Add the service role to the `ansible/<node>.yaml` playbook file.
- Add the service's port to the Vagrant configuration in `deployments/local/<node>/Vagrantfile`, so it is forwarded to the host machine and the service can be accessed locally without going through Traefik (nice for debugging).
- Update the router DNS entries to easily access the service in the home network.
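The Vagrant step above amounts to one line of the Vagrantfile's Ruby DSL, along these lines (the port number is a placeholder for the new service's reserved range):

```ruby
# deployments/local/<node>/Vagrantfile fragment (illustrative port)
config.vm.network "forwarded_port", guest: 20800, host: 20800
```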
- Set up the Immich admin account and user accounts using the web overview.
- Set up Filebrowser user accounts using the web overview.
- Set up the Gitea admin account and user accounts using the web overview. In addition, set the configuration flags during the first setup as follows:
  - Set the domain to `localhost`.
  - Check that the ports are set to the container's internal ports, which are exposed to the host, not the ports over which the service is accessible from the outside.
  - Disable `Enable Local Mode`.
  - Disable `Enable OpenID Sign-In`.
  - Enable `Disable Self-Registration`.
  - Enable `Require Sign-In to View Pages`.
  - Enable `Allow Creation of Organizations by Default`.
  - Enable `Enable Time Tracking by Default`.
  - Set the password hash algorithm to `argon2`, if memory is not a limiting factor. Otherwise, choose based on your system configuration.
  - Set correct administrator credentials.
- Set up Miniflux using the web overview.
  - Create a non-admin user account for daily use.
  - Create an API key under `Settings > API Keys` for third-party integrations.
  - Configure the ntfy integration under `Settings > Integrations` for push notifications.