How it started
Sometime last year I started collecting old hardware to build a lab where I could experiment with multiple tools without paying a premium to the cloud providers. I realised that the processors in our old laptops and PCs are quite capable these days, and rather than throwing them in the trash we can put them to many meaningful uses. So I set out collecting hardware that looked like trash to others and started setting it up in my bedroom (literally) to run containers.
(UPDATE: I eventually ended up choosing Kubernetes.) Before we go any further, I would like to clarify why I picked Docker Swarm for this. My initial hunch was to go with Kubernetes. I have an EKS cluster on AWS that I use for running production workloads, and I love it. Still, I believe Kubernetes is a hammer, and if you are too involved with a hammer, everything you come across starts to look like a nail.
- Docker Swarm takes only a few seconds to get up and running, whereas Kubernetes is a pain to set up.
- All I want is to run containers across multiple nodes with minimal effort, and that is exactly what Docker Swarm gives me.
- I want to build things that can be easily replicated by anyone. Kubernetes has a huge barrier to entry, and many people don't even need it.
What am I trying to do?
- Run websites that might not require 100% uptime
- Run GitHub self-hosted runners for use with GitHub Actions
- Ensure that I have a good logging and monitoring setup
- Run asynchronous applications on this cluster to reduce load on my AWS EC2 boxes
Where am I right now?
As of now, I have the following hardware in my lab:
- Dell Vostro PC -> i5 4th Gen, 120GB SSD, 8GB RAM
- PC 2 -> i5 4th Gen, 240GB SSD, 20GB RAM
- Laptop 1 -> i5, 120GB SSD, 8GB RAM
- Laptop 2 -> i3, 120GB SSD, 4GB RAM
Out of these four, I am currently using only two because I'm too lazy to set up the remaining two right now, but hopefully they'll be up and running soon.
Phase 1 Goals
- Set up Docker on all nodes (obviously)
- Create an EC2 instance on AWS, mainly to get a public IP address and an always-up machine in the cloud to bring public traffic into my lab (I don't have a public IP from my ISP)
- Set up OpenVPN on the EC2 instance and connect all my nodes to it, so that all nodes are on a single network
- Initialise the Docker Swarm cluster
- Set up Prometheus + Grafana for monitoring
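The monitoring step can be sketched as a Swarm stack file. This is a minimal sketch, not my actual config: the file name (monitoring.yml), the `prometheus.yml` scrape config it mounts, and the admin password are all placeholder assumptions. It would be deployed with `docker stack deploy -c monitoring.yml monitoring`:

```yaml
# monitoring.yml — a minimal sketch of the Prometheus + Grafana stack.
version: "3.8"

services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      # Assumes a prometheus.yml scrape config sits next to this file.
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    deploy:
      placement:
        constraints:
          - node.role == manager

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      # Placeholder — change this before exposing Grafana anywhere.
      - GF_SECURITY_ADMIN_PASSWORD=change-me
```

Pinning Prometheus to the manager keeps its data on a predictable node, since Swarm has no built-in persistent volume scheduling.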
Phase 2 Goals
- Set up public and private ingress using Traefik: a private ingress for applications I want reachable only from within my home network or the VPN, and a public ingress for applications that should receive external traffic from the internet
- Automate Let's Encrypt certificate generation for both internal and external applications
- Set up a private Docker registry to store images
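The public-ingress half of this phase could look something like the stack below. It is a rough sketch under stated assumptions: the email address is a placeholder, the TLS challenge assumes port 443 is forwarded from the EC2 box to Traefik, and the registry is left unauthenticated (it should sit behind the private ingress or basic auth in practice):

```yaml
# ingress.yml — a rough sketch of the public Traefik ingress plus registry.
version: "3.8"

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker.swarmmode=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=me@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      # Traefik reads container labels from the Docker socket.
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - letsencrypt:/letsencrypt
    deploy:
      placement:
        constraints:
          - node.role == manager

  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry

volumes:
  letsencrypt:
  registry-data:
```

Individual services then opt in to the ingress via `deploy.labels` (router rule, entrypoint, and `certresolver=le`), which is what makes the per-app Let's Encrypt automation work.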
Phase 3 Goals
- Set up self-hosted GitHub Actions runners
- Set up a logging pipeline using the EFK (Elasticsearch + Fluentd + Kibana) stack
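The logging half could wire Swarm services into Fluentd through Docker's fluentd log driver, with Fluentd then forwarding to Elasticsearch. A hedged sketch, assuming a hypothetical service and that Fluentd listens on the VPN address 10.8.0.1:24224 (both are placeholders, not my real setup):

```yaml
# app.yml — example of shipping one service's logs into the EFK pipeline.
version: "3.8"

services:
  web:
    image: nginx:alpine
    logging:
      driver: fluentd
      options:
        # Assumes Fluentd is reachable at this VPN address; adjust to taste.
        fluentd-address: "10.8.0.1:24224"
        # Tag used by Fluentd to route these records to Elasticsearch.
        tag: "swarm.web"
```

The same `logging:` stanza would be repeated (with a distinct tag) on each service whose logs should end up in Kibana.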
Phase 4 Goals
By this time I should have a basic foundation ready. I have tons of ideas for what to build on top of it, but nothing concrete that I can write about yet.
Looking forward to publishing a blog in this series every week!