by nerdyadventurer on 3/22/23, 12:43 PM with 69 comments
How do you
- deploy from source repo? Terraform?
- keep software up to date? ex: Postgres, OS
- do load balancing? built-in load balancer?
- handle scaling? Terraform?
- automate backups? ex: databases, storage. Do you use provided backups and snapshots?
- maintain security? built-in firewall and DDoS protection?
If there is any open source automation scripts please share.
by alex7734 on 3/22/23, 2:00 PM
- Deploy by stopping the server, rsyncing in the changes, and starting the server. The whole thing is automated by a script and takes 5 seconds, which is acceptable for us.
- Run apt upgrade manually biweekly or so.
- We use client-side load balancing (the client picks an app server at random) but most cloud providers will give you a load balancer IP that transparently does the same thing (not for free though).
- For scaling just manually rent more servers.
- For backups we use a cronjob that does the backup and then uploads to MEGA
- For security we set up a script that runs iptables-restore, but this isn't really necessary if you don't run anything that listens on the network (except your own server, obviously).
- DDoS is handled transparently by our provider.
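The five-second deploy described above can be sketched as a short script like this (service name, paths and host are placeholders, not the commenter's actual setup):

```shell
#!/bin/sh
# Stop-sync-start deploy, roughly as described above.
# "myapp", /srv/myapp and ./build/ are placeholder names.
set -eu

deploy() {
    host="$1"
    ssh "$host" 'systemctl stop myapp'                 # stop the server
    rsync -az --delete ./build/ "$host:/srv/myapp/"    # sync the new build in
    ssh "$host" 'systemctl start myapp'                # start it again
}

# usage: deploy app.example.com
```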
While this might change if you're super big and have thousands of servers, in my experience simple is best and "just shell scripts" is the simplest solution to most sysadmin problems.
by vlaaad on 3/22/23, 1:09 PM
To expand a little bit:
- It's a very small service
- I use sqlite db
- Preparation step before the restart ensures all the deps are downloaded for the new repo state. I.e. "a build step"
- I use simple nginx in front of the web server itself
- Backups are implemented as a cron job that sends my whole db as an email attachment to myself
- journalctl shows how it restarted so I see it's working
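The mail-the-db-to-myself backup above could be sketched as a cron-able script like this (assuming `sqlite3` and `mutt` are installed; paths, the address and the mailer are assumptions, not the commenter's actual setup):

```shell
#!/bin/sh
# Cron-able backup: snapshot the sqlite db and mail it to yourself.
# All names here (paths, address, mutt as the mailer) are placeholders.
set -eu

backup_db() {
    db="$1"
    out="/tmp/backup-$(date +%F).sqlite3"
    sqlite3 "$db" ".backup '$out'"   # consistent snapshot even while in use
    gzip -f "$out"
    echo "db backup $(date +%F)" | \
        mutt -s "db backup" -a "$out.gz" -- me@example.com
}

# crontab entry: 0 3 * * * /usr/local/bin/backup-db.sh /srv/app/app.db
```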
by 0xblinq on 3/22/23, 2:37 PM
> deploy from source repo? Terraform?
I use Dokku (https://dokku.com/); the workflow is then the same as if you were using Heroku
> keep software up to date? ex: Postgres, OS
Automatic Ubuntu updates + once a week I SSH in and run apt-get update, etc.
> do load balancing? built-in load balancer?
I just don't. I don't need it for the load of my projects.
> handle scaling? Terraform?
Just vertical scaling for now. A single powerful server can go a long way before you need to add more servers.
> automate backups? ex: databases, storage. Do you use provided backups and snapshots?
I just enable the "backup" feature on their admin panel. Adds 20% to the cost but works great and it's easy.
> maintain security? built-in firewall and DDoS protection?
I only expose the HTTP(S) and SSH ports, and I've also set up fail2ban against brute-force attacks.
> If there is any open source automation scripts please share.
Dokku.
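The only-expose-HTTP(S)-and-SSH setup above could be sketched with ufw like this (a sketch, assuming Ubuntu/Debian and root; not the commenter's actual script):

```shell
#!/bin/sh
# Expose only SSH and HTTP(S); drop everything else inbound.
# Assumes ufw and fail2ban are available via apt.
set -eu

harden() {
    ufw default deny incoming
    ufw default allow outgoing
    ufw allow 22/tcp    # SSH
    ufw allow 80/tcp    # HTTP
    ufw allow 443/tcp   # HTTPS
    ufw --force enable
    apt-get install -y fail2ban   # default jail covers sshd brute force
}
```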
by nemo136 on 3/22/23, 2:31 PM
- install machines with ansible (using hetzner scripts for OS install)
- machines communicate over vswitch/vlans, external interfaces disabled whenever possible. Pay attention to the custom mtu trick.
- harden machines, unattended-upgrades mandatory on each machine
- ssh open with IP whitelists from iptables on gateways
- machines organized as k8s clusters, took ~1 year to have everything working cleanly
- everything deployed as k8s resources (kustomize, fluxcd, gitops)
- use keepalived for external IPs with floating IPs for ingress on 3 machines per cluster
Machines are managed as cattle; it takes under an hour plus Hetzner provisioning time to add as many machines as we need.
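A minimal keepalived VRRP sketch for a floating ingress IP like the one described (interface, IPs and priority are placeholders; note that on Hetzner, actually moving a floating IP between machines also requires an API call, typically from a notify script):

```
# /etc/keepalived/keepalived.conf -- one of the 3 ingress machines
vrrp_instance ingress {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100            # highest-priority live node holds the IP
    advert_int 1
    virtual_ipaddress {
        203.0.113.10/32     # the floating/external IP
    }
}
```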
by mtmail on 3/22/23, 1:05 PM
by e12e on 3/22/23, 1:57 PM
> Introducing MRSK - 37signals way to deploy
by jasonvorhe on 3/22/23, 1:42 PM
* https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne... (Terraform, Kubernetes bootstrap)
* Flux for CI
* nginx-ingress + Hetzner Loadbalancer (thanks to https://github.com/hetznercloud/hcloud-cloud-controller-mana...)
* Hetzner storage volumes (thanks to https://github.com/hetznercloud/csi-driver)
Kube-Hetzner supports Hetzner Cloud loadbalancers and volumes out of the box, though it also supports other components.
by cstuder on 3/22/23, 1:42 PM
- Running dokku with Heroku Buildpacks to deploy both from source and to run Docker images behind an nginx reverse proxy.
- Autoupgrade apt's, manually updating the OS.
- No load balancing.
- No scaling.
- Automated backups with restic/rclone to OneDrive.
- Hetzner firewall, no DDoS protection.
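The restic/rclone-to-OneDrive backup above could be sketched like this (a sketch, assuming an rclone remote named `onedrive:` is already configured; repo name, paths and retention are placeholders):

```shell
#!/bin/sh
# Cron-able restic backup over an rclone remote, with pruned retention.
set -eu

run_backup() {
    export RESTIC_REPOSITORY="rclone:onedrive:backups"
    export RESTIC_PASSWORD_FILE="/root/.restic-pass"
    restic backup /srv /etc                              # what to back up
    restic forget --keep-daily 7 --keep-weekly 4 --prune # retention
}
```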
by Y_Y on 3/22/23, 1:06 PM
by mesmertech on 3/22/23, 1:29 PM
- don't remember the last time I updated lol
- traefik + worker nodes on docker swarm
- again docker swarm
- I have a cronjob that makes backup using postgres, then uploads it to a digitalocean spaces, you can just use S3 as well
- I'm using cloudflare in front of server, but I also use inbuilt firewall as I host a postgres server with hetzner(only allow traffic from the web server worker nodes)
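The pg_dump-to-Spaces cron job above might look roughly like this (a sketch; bucket, endpoint and connection string are placeholders — DigitalOcean Spaces is S3-compatible, so the aws CLI works with a custom `--endpoint-url`):

```shell
#!/bin/sh
# Cron-able Postgres dump uploaded to S3-compatible object storage.
set -eu

backup_pg() {
    f="/tmp/db-$(date +%F).sql.gz"
    pg_dump "postgres://backup@db.internal/app" | gzip > "$f"
    aws s3 cp "$f" "s3://my-backups/postgres/" \
        --endpoint-url "https://nyc3.digitaloceanspaces.com"
    rm -f "$f"
}
```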
by simon83 on 3/22/23, 1:39 PM
I have a Hetzner dedicated server (not the Cloud offering) and I set up OPNsense as an all-in-one routing and firewall solution in a separate VM. All incoming and outgoing traffic goes through this OPNsense VM, which acts as the default gateway for the host system and all other VMs/Docker containers. You either need to book a 2nd public IPv4 address (or just use IPv6 for free if that is good enough for your use case, since each server comes with an IPv6 /64 subnet), or, if you want just one IPv4 address, you can do some MAC spoofing on the main eth interface of the host OS and give the actual MAC address and public IP to OPNsense's WAN interface. This is necessary because Hetzner has MAC address filtering in place, meaning only the MAC address bound to the public IP is allowed to send traffic.
by sshine on 3/22/23, 1:45 PM
- Store and run Terraform setup in git
- Store and distribute SSH keys
- Store and run Ansible scripts for bootstrapping (e.g. Kubernetes clusters on dedicated, or more VPS'es)
- Host VPN and some low-intensity services (I'd delegate both of these if I had a bigger budget)
Specifically, this replaces the use of Terraform Cloud. I enjoyed using Terraform Cloud for a more cloudy setup with easy GitHub pull-request integration at a past employer.
But I'm specifically aiming for simplicity here. It doesn't scale as well to a team of 2+ without establishing conventions.
I haven't explored what self-hosted alternatives there are to Terraform Cloud.
by artellectual on 3/22/23, 1:57 PM
It does load balancing / automatic ssl issuing out of the box. It will also allow you to scale horizontally. I’m working towards making it public soon.
by notpushkin on 3/22/23, 1:58 PM
- Deploy from source repo: the Lunni docs guide you through setting up CI that builds your repo as a Docker image, and you can create a webhook that pulls it and redeploys.
- Scaling, load balancing: in theory you can just throw more servers in the swarm, tweak your configuration a bit and it should work. However, I've yet to run past what a single, moderately beefy server can handle :')
- Automate backups: definitely on my roadmap! Right now I'm configuring them manually on critical services, and doing them manually every now and then using the Vackup script.
- Maintain security: Docker's virtual networks act as a de-facto firewall here. In Lunni, you only expose the services you need to the reverse proxy (for HTTP), and if you absolutely must expose some ports directly (e.g. SSH for Git), you have to explicitly list them.
Some other similar alternatives to consider: Dokku, Coolify, Portainer with Traefik / Caddy / nginx-gen. I'll be glad if you choose Lunni though :-) Let me know if you have any questions!
by bluelu on 3/22/23, 2:17 PM
- deploy from source repo? Terraform?
* local build server, which rsyncs to application servers (e.g. files), or deploys through a docker registry
* scripts to start/stop/restart services
* centralised database of which services run on which servers, which serves as the base for where specific applications run
- keep software up to date? ex: Postgres, OS
* ansible for automated installs (through the Hetzner API)
* ansible scripts to execute commands on servers (e.g. update software, or adapt the firewall when new hosts are added)
- do load balancing? built-in load balancer?
* proxy to route requests to multiple backend servers (e.g. nginx)
* flexi IP (needs to be manually mapped to a new server in case of failure over the API, so you need to check yourself that the IP is reachable)
- handle scaling? Terraform?
* more servers
- automate backups? ex: databases, storage. Do you use provided backups and snapshots?
* Separate hdfs cluster, which allows production nodes to write once and read data, but not delete/overwrite any data.
* For less data, you could also use their backup servers.
* The "backups and snapshots" feature you mention is only available for vservers, not for dedicated servers.
- maintain security? built-in firewall and DDoS protection?
* Hetzner router firewall
* Software firewall (managed through ansible)
* Don't use their VLAN feature, as there often seem to be problems with connectivity (see their forum).
* Never had DDoS issues
- monitoring of failures:
* internal tool to monitor hardware and software issues (e.g. wrongly deployed software, etc.)
by Gordonjcp on 3/22/23, 1:25 PM
Every couple of months I remember to pay the bill, then start browsing the auction page, then think "hey that thing isn't much more than I'm paying now, maybe I should upgrade...", but mostly I just stick with things as they are.
by creshal on 3/22/23, 2:14 PM
Deploy from source: Gitlab CI builds and deploys containers
Keep software up to date: Deploy new containers / migrate all containers off a host to upgrade it with OS tools (Debian for us, so just apt dist-upgrade)
Load balancing: nginx container
Scaling: Hasn't really been an issue for us yet, but terraform/k8s work fine from what I've heard
Backups: Dedicated SX server pulls backups via rsnapshot, including DB dumps. All data is on minutely replicated ZFS pools, so we got short-term snapshots for free anyway.
Security: Still on IPTables and Fail2ban for on-system stuff. DDoS protection from Hetzner itself is okay-ish, but for really critical sites Akamai or Cloudflare are still the safer choices. Both work fine.
by RamblingCTO on 3/22/23, 1:14 PM
by throwaway81523 on 3/24/23, 3:39 AM
by leephillips on 3/22/23, 1:36 PM
- deploy from source repo? Terraform?
rsync
- keep software up to date? ex: Postgres, OS
apt-get
- automate backups? ex: databases, storage.
rsync, pg_dump
- maintain security?
systemd-nspawn
by bckygldstn on 3/22/23, 3:59 PM
- Apt auto-upgrades, other software updates are handled in docker. The only software on the machine is haproxy, git and docker for deploys, newrelic and vector for monitoring.
- Haproxy runs on the server to route requests to docker containers. Cloudflare loadbalancing routes to servers.
- Scaling is avoided through over-provisioning cheap Hetzner machines. Adding new machines is done so rarely that a bash script is fine.
- DB backups are done in docker.
- Ufw locks down everything except ports 22, 80, 443, and the DB ports. Because docker can interact with firewalls in surprising ways, I also replicate the rules in the Hetzner firewall.
by jmstfv on 3/23/23, 11:33 AM
* Deploying using Git and Capistrano: `git push && cap production deploy` (aliased to cpd)
* Using Hetzner backups + daily backups to Tarsnap using cron
* Updating software by SSH-ing into a server and updating apt packages; I update Ruby gems locally
* For security, built-in firewall + ufw, two-factor authentication, public key-only authentication (SSH key is protected with a password), SSH running on a non-standard port with a non-standard username.
* I use sqlite as a database and caddy as a web server
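The SSH part of that hardening translates to an /etc/ssh/sshd_config roughly like the following (the port and username are placeholders; sshd_config does not allow trailing comments, so notes go on their own lines):

```
# /etc/ssh/sshd_config (fragment)
# non-standard port
Port 2222
PermitRootLogin no
# public key only; the key itself is also passphrase-protected
PasswordAuthentication no
PubkeyAuthentication yes
# allow only one non-standard user (placeholder name)
AllowUsers notroot
```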
by saccharose on 3/22/23, 1:27 PM
[1] https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne...
by sergioisidoro on 3/22/23, 3:27 PM
It provides a super nice balance between going all-manual VPS and drinking all of the Kubernetes Kool-Aid
by KingOfCoders on 3/22/23, 1:59 PM
- deploy from source repo? Github copy Go binary
- keep software up to date? Using Hetzner Cloud + hosted Postgres
- do load balancing? Hetzner LB + DNSMadeEasy LB failover
- handle scaling? I don't need to scale fast
- automate backups? Snapshots + hosted Postgres
- maintain security? SSH on other port, Hetzner private networks, built-in firewall and DDoS protection
by nurettin on 3/23/23, 6:50 AM
traffic: no ddos protection, no load balancing
backups: daily automated backups provided by host. no incrementals.
update: unattended upgrades; software is tested so it doesn't break when databases and message queues restart due to unattended upgrades.
security: intact selinux, ufw, proper users and permissions.
by styren on 3/22/23, 1:25 PM
by sirodoht on 3/22/23, 6:29 PM
OS kept up-to-date manually.
No load balancing necessary, it's one server.
No scaling necessary, it's a few thousand users.
Backups: cron with script that s3-compatible copies over to off-site cloud every 6 hours.
Security: firewall yes, DDoS protection no.
by bestest on 3/22/23, 2:09 PM
DDoS protection could be off-loaded to Cloudflare; I don't need it personally.
I don't need to scale yet. But I believe caprover is somewhat scaleable.
Security? As others said, SSH keys.
by kjuulh on 3/22/23, 1:33 PM
Hetzner is just a bunch of VMs; they are all connected over wireguard for ease of use. UFW at the edge for locking down ports.
No DDoS protection, but I can turn it on in cloudflare which I use for DNS.
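A wireguard mesh like that boils down to a config per node along these lines (keys, IPs and endpoint are placeholders):

```
# /etc/wireguard/wg0.conf -- one node of the mesh
[Interface]
Address = 10.0.0.1/24
PrivateKey = <this-node-private-key>
ListenPort = 51820

[Peer]
PublicKey = <other-node-public-key>
AllowedIPs = 10.0.0.2/32
Endpoint = 203.0.113.7:51820
```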
by PaywallBuster on 3/22/23, 2:27 PM
Rundeck to automate/schedule jobs/deployments/upgrades or scale deployments (to fleet of servers)
by johne20 on 3/22/23, 2:09 PM
by t312227 on 3/22/23, 6:34 PM
* ansible for CM (first 4 points)
btw. I don't do any deploys from source repos; either build packages and use your favorite distribution's package management, or use containers.
* some shell/awk/perl/python-scripts for backups & security-related stuff :)
by js4ever on 3/22/23, 2:34 PM
Please check: https://elest.io
Disclaimer: I'm CTO & founder
by x86hacker1010 on 3/23/23, 4:18 AM
Right now I provision my nodes automatically with Terraform. I use cloud init scripts during machine initialization and an adhoc remote provisioner for some firewall stuff and config updates after complete.
This is for boot. For configuration management I’m working on getting my Saltstack complete and easy to use.
Saltstack can be used like chef/ansible but it’s much more intuitive to me and very flexible. This is for automating and managing package installation on my nodes, firewall rules, grouping nodes by config, etc.
What’s also cool with salt is you can have it make changes based on a web hook (salt reactor), ie merging commit into master.
My plan is to basically version control everything into salt so things like VPN setup, software, alerts are all automatically setup. I would love to extend this to also manage a NAS with automated backups.
TL;DR: I'm migrating my flow from automated deployments from GitHub using Ansible to automated provisioning and deployments using Salt, Terraform and GitHub/Gitea
by Udo on 3/22/23, 2:33 PM
- Proxmox as the base OS, stock install. Close every port except SSH, 80, 443 (alternatively you may want to go with Wireguard instead of SSH). There is an nginx instance running in front of the containers, it passes data along to them as per config. Otherwise, nothing is reachable from the outside.
- Servers are on Proxmox containers, mostly also Nginx, some Nodejs, some other, you know the drill. The containers are pretty low overhead, so you can implement basically any deployment strategy in that environment. They're also easy to back up and to replicate to other machines.
> keep software up to date? ex: Postgres, OS
I run a periodic "apt update && apt upgrade -y && apt autoremove -y" as a cron job on most containers. Some configurations tend to break occasionally, so I do those specific ones manually or with additional scripts. I have a repo of scripts and snippets that I use everywhere, just little hacks that accumulated over the years because they automate useful things.
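That periodic upgrade job is just a cron entry, e.g. (the schedule is arbitrary):

```
# /etc/cron.d/auto-upgrade -- nightly package upgrades inside a container
0 4 * * * root apt update && apt upgrade -y && apt autoremove -y
```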
> do load balancing? built-in load balancer?
That depends on where your loads are, and what the structural needs of your applications are. If this is about external web requests to a mostly read-heavy application, I highly suggest using a CDN such as Cloudflare rather than rolling your own. That being said, Nginx makes load balancing pretty painless.
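For the roll-your-own case, nginx load balancing is a short upstream block (names, IPs and ports here are placeholders, and the TLS directives are elided):

```nginx
# Round-robin by default; uncomment least_conn to route by load instead.
upstream app_servers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    # least_conn;
}

server {
    listen 443 ssl;
    server_name example.com;
    location / {
        proxy_pass http://app_servers;
    }
}
```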
> automate backups? ex: databases, storage. Do you use provided backups and snapshots?
Their storage offering is pretty okay, but I would consider restoring a whole-system backup a last resort. Proxmox has built-in support for container snapshots/backups, which gives you more granular control. These snapshots are also easy to rsync periodically to another host. If the physical machine dies, you just start the container on another host from a recent backup. There are HA options for this on Proxmox if you link more than one host into a cluster (which is overkill for most setups).
> maintain security? built-in firewall and DDoS protection?
Close down your ports. No complicated firewall rules, either. Just block anything that isn't directed at one of your 3 necessary ports. With DDoS protection: don't roll your own, use a CDN. Also, install only things you can audit or come from a reasonably safe source. For instance, I would highly discourage running npm installs/updates unsupervised. If you have a production app that needs to work and needs to be reasonably secure, don't automatically pull data from free-for-all package managers - deploy them with reviewed or known-good versions hard locked (or deploy them with dependencies already included).
As a final tip: Hetzner servers come with RAID setups (usually RAID1). Monitor the status of those drives! If one fails, tell them to replace it. They will usually do it within the hour on a running system.
by ancieque on 3/22/23, 5:13 PM
We have built some tooling around setting up and maintaining the swarm using ansible [0]. We also added some Hetzner flavour to that [1] which allows us to automatically spin up completely new clusters in a really short amount of time.
deploy from source repo:
- We use Azure DevOps pipelines that automate deployments based on environment configs living in an encrypted state in Git repos. We use [2] and [3] to make it easier to organize the deployments using `docker stack deploy` under the hood.
keep software up to date:
- We are currently looking into CVE scanners that export into prometheus to give us an idea of what we should update
load balancing:
- depending on the project, Hetzner LB or Cloudflare
handle scaling:
- manually, but i would love to build some autoscaler for swarm that interacts with our tooling [0] and [1]
automate backups:
- docker swarm cronjobs either via jobs with restart condition and a delay or [4]
maintain security:
- Hetzner LB is front facing. Communication is done via encrypted networks inside Hetzner private cloud networks
- [0] https://github.com/neuroforgede/swarmsible
- [1] https://github.com/neuroforgede/swarmsible-hetzner
- [2] https://github.com/neuroforgede/nothelm.py
- [3] https://github.com/neuroforgede/docker-stack-deploy
===================
EDIT - about storage:
We use cloud volumes.
For drivers:
We use https://github.com/costela/docker-volume-hetzner which is really stable.
CSI support for Swarm is in beta as well and already merged in the Hetzner CSI driver (https://github.com/hetznercloud/csi-driver/tree/main/deploy/...). There are some rough edges atm with Docker + CSI so I would stick with docker-volume-hetzner for now for prod usage.
Disclaimer: I contributed to both repos.
by KronisLV on 3/22/23, 1:37 PM
> deploy from source repo? Terraform?
Personally, I use Gitea for my repos and Drone CI for CI/CD.
Gitea: https://gitea.io/en-us/
Drone CI: https://www.drone.io/
Some might prefer Woodpecker due to licensing: https://woodpecker-ci.org/ but honestly most solutions out there are okay, even Jenkins.
Then I have some sort of a container cluster on the servers, so I can easily deploy things: I still like Docker Swarm (projects like CapRover might be nice to look at as well), though many might enjoy the likes of K3s or K0s more (lightweight Kubernetes clusters).
Docker Swarm: https://docs.docker.com/engine/swarm/ (uses the Compose spec for manifests)
K3s: https://k3s.io/
K0s: https://k0sproject.io/ though MicroK8s and others are also okay.
I also like having something like Portainer as a GUI to manage the clusters: https://www.portainer.io/ (for Kubernetes, Rancher might offer more features, but it has a higher footprint)
It even supports webhooks, so I can do a POST request at the end of a CI run and the cluster will automatically pull and launch the latest tagged version of my apps: https://docs.portainer.io/user/docker/services/webhooks
> keep software up to date? ex: Postgres, OS
I build my own base container images and rebuild them (with recent package versions) on a regular basis, which is automatically scheduled: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...
Drone CI makes it easy to have this happen in the background, as long as I don't update across major versions (or Maven decides to release a new version and remove the old .tar.gz archives from its downloads site for some reason, breaking my builds and making me update the URL): https://docs.drone.io/cron/
Some images like databases etc. I just proxy to my Nexus instance, version upgrades are relatively painless most of the time, at least as long as I've set up the persistent data directories correctly.
> do load balancing? built-in load balancer?
This is a bit more tricky. I use Apache2 with mod_md to get Let's Encrypt certificates and Docker Swarm networking for directing the incoming traffic across the services: https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-...
Some might prefer Caddy, which is another great web server with automatic HTTPS: https://caddyserver.com/ but the Apache modules do pretty much everything I need, and performance has never actually been a problem for my needs. Up until now the applications themselves have always been the bottleneck; I'm actually working on a blog post comparing some web servers in real-world circumstances.
However, making things a bit more failure-resilient might involve just paying Hetzner (in this case) for a load balancer: https://www.hetzner.com/cloud/load-balancer which will make everything less painful once you need to scale.
Why? Because doing round robin DNS with the ACME certificate directory accessible and synchronized across multiple servers is a nuisance, although servers like Caddy attempt to get this working: https://caddyserver.com/docs/automatic-https#storage You could also get DNS-01 challenges working, but that needs even more work and integration with setting up TXT records. Even if you have multiple servers for resiliency, not all clients would try all of the IP addresses if one of the servers is down, although browsers should: https://webmasters.stackexchange.com/a/12704
So if you care about HTTPS certificates and want to do it yourself with multiple servers having the same hostname, you'll either need to get DNS-01 working, do some messing around with shared directories (which may or may not actually work), or will just need to get a regular commercial cert that you'd manually propagate to all of the web servers.
From there on out it should be a regular reverse proxy setup, in my case Docker Swarm takes care of the service discovery (hostnames that I can access).
> handle scaling? Terraform?
None, I manually provision how many nodes I need, mostly because I'm too broke to hand over my wallet to automation.
They have an API that you or someone else could probably hook up: https://docs.hetzner.cloud/
> automate backups? ex: databases, storage. Do you use provided backups and snapshots?
I use bind mounts for all of my containers for persistent storage, so the data is accessible on the host directly.
Then I use something like BackupPC to connect to those servers (SSH/rsync) and pull data to my own backup node, which then compresses and deduplicates the data: https://backuppc.github.io/backuppc/
It was a pain to setup, but it works really well and has saved my hide dozens of times. Some might enjoy Bacula more: https://www.bacula.org/
> maintain security? built-in firewall and DDoS protection?
I personally use Apache2 with ModSecurity and the OWASP ruleset, to act as a lightweight WAF: https://owasp.org/www-project-modsecurity-core-rule-set/
You might want to just cave in and go with Cloudflare for the most part, though: https://www.cloudflare.com/waf/