from Hacker News

My deployment platform is a shell script

by j3s on 4/9/24, 10:58 PM with 138 comments

  • by anonzzzies on 4/10/24, 11:20 AM

    I use similar things for bigger (multi-server) deploys too. It's light, it just works, and it keeps working for decades without changes/updates. People say it's brittle; I have proof (n>0) that this is not the case compared to many other solutions, and this post makes the same point. Sh/bash/perl(8) have been around forever; they don't break after an update, etc.

    I sadly don't recommend it for my day job, simply because of liability. When something messes up with ansible, terraform, docker, cloudformation etc, no one gets any blame because 'complex systems', 'it happens', etc; with a simple script going wrong, they would hang me high, even though it probably saves a crapload in maintenance, compute, etc over tens of years. Same reason we use clusters and IaC while of course nothing we do needs it: if an AWS cluster goes down, no one but AWS gets blamed, while if the $2 postgres VPS at cheapafhosting (with a higher uptime than that AWS cluster, by the way; human error downed it a few times, briefly, but still) is down even for a ping, everyone is upset and pointing fingers.

  • by throwaway458864 on 4/10/24, 12:33 PM

    Shell scripts are a more evolved form of programming and nobody can change my mind on that. They require less work, they're easier to write, and they're flexible, compatible, composable, portable, small, interpreted, and simple. You can do more with fewer characters, and do complex things without the complexity of types, data structures, locks, scoping, etc. You don't write complex programs in it, but you use complex programs with it, in ways that would be overcomplicated, buggy, and time-consuming in a traditional language.

    That said, it's a tool. Like any tool, it depends how you use it. People who aren't trained on the tool, or don't read the instruction manual, might get injured. I'd like to see a version of it that is safer and retains its utility without getting more complicated, but it would end up less useful in many cases. Maybe that's fine; maybe it needs to be split into multiple tools.

  • by chasil on 4/10/24, 11:36 AM

    The use of ls in this way is not good form:

      cd /root
      for project in $(ls go-cicd);
    
    I think a better expression would be:

      for project in ./*
      do [ -d "$project" ] || continue
         ...
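    Fleshed out, the safer glob-based loop might look like this — a runnable sketch, using a temp directory to stand in for /root/go-cicd and an echo in place of the real build step. Unlike $(ls ...), the glob survives project names with spaces:

```shell
#!/bin/sh
# Sketch: iterate over directories with a glob instead of parsing ls output.
set -e
base=$(mktemp -d)
mkdir -p "$base/proj one" "$base/proj two"
touch "$base/not-a-dir"

count=0
for project in "$base"/*; do
    [ -d "$project" ] || continue      # skip plain files
    count=$((count + 1))
    echo "would build: $project"       # stand-in for the real build step
done
```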
  • by pevey on 4/10/24, 11:30 AM

    You can also use webhooks to deploy with each GitHub push. The advantage over GitHub actions is you don’t have to store any secrets on GitHub or with integrators like Vercel. Just send a payload to your own endpoint each time a commit is made, and that can trigger your shell script to rebuild and deploy. Using symbolic links helps make it more robust to errors. Trigger a pull of the repo, and build. Only if the build is successful, move the symbolic link of your production app to the new build. This also allows keeping some history of builds in case you ever need to troubleshoot.
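    The symlink trick described above can be sketched like this — simulated in a temp directory so it runs anywhere; in a real deploy, the echo would be the actual git pull and build, and mv -T (GNU coreutils) swaps the symlink atomically:

```shell
#!/bin/sh
# Build into a fresh timestamped directory; only on success swap the
# "current" symlink that the web server points at.
set -e
root=$(mktemp -d)
mkdir -p "$root/builds"

deploy() {
    stamp=$1
    mkdir -p "$root/builds/$stamp"
    echo "binary-$stamp" > "$root/builds/$stamp/app"   # stand-in for the build
    ln -s "$root/builds/$stamp" "$root/current.tmp"
    mv -T "$root/current.tmp" "$root/current"          # atomic swap
}

deploy 001
deploy 002    # the older build stays under builds/001 for troubleshooting
```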
  • by Ingon on 4/10/24, 1:23 PM

    I also started with a simple shell script. Upload sources, build on the target system (golang) and restart the systemd service(s).

    Then I needed to make another machine like this, so enter Ansible. This worked well for a long time, and I was relatively content with it. Along the way, I learned about nix (enough of it, anyway) to adopt a simple flake to pull in my tools (like golang, ansible, terraform). For a long time I used it like this (i.e. still ansible, but I started building locally).

    Finally, I learned enough nix to adopt NixOS. Now, I've converted my project to a nix package and a NixOS module, which allows me to totally describe the state of the machine I want. With this, remote builds and colmena (mostly for pushing secrets), I deploy a complete system, including my own software.

  • by 1vuio0pswjnm7 on 4/10/24, 4:53 AM

    "i like things that work for years with as little interaction from me as possible."

    Shell scripts written in NetBSD sh/Debian ash will work for as long as I live.

  • by z_zetetic_z on 4/10/24, 1:15 PM

    Or, you could use NixOS and just declare your systems in some text files, git commit; git push.

    Your build script becomes:

      while true; do
        git pull
        nixos-rebuild switch
        sleep x
      done
    
    That's it. You can even do it remotely and push the new desired state to remote machines (and still build on the target machine, no cross compile required).

    I've completely removed Ansible as a result: no more python version mismatches, no more hunting through endless task YAML syntax, no more "my god ansible is slow" deployments.

  • by bravetraveler on 4/10/24, 9:20 AM

    Not to deride this (too much), but the 'robustness' of deployments with shell scripts is tempting bait. Things are until they aren't, 'nobody rides for free' - decide what you're willing to pay.

    Example: this interprets the output of 'ls'. Reliability depends on careful quoting and on never introducing a project with spaces in its name.

    Ansible is a nice middle ground, personally. I write the state that differs, backed by a library of scripts.

  • by Gys on 4/10/24, 12:09 PM

    I assume this script runs on the server. I was building Go projects on the server as well, a VPS where I have several things running. At some point I noticed that larger builds severely affected the other websites. So now I build locally and push the binary to git. To not bloat the project repo with big binary blobs, I use a special deploy repo.
  • by pfitzsimmons on 4/10/24, 1:28 PM

    As a pythonista, I am a huge fan of the plumbum library as a replacement for bash. It makes it very straightforward to run a sequence of *nix commands, but you get all the simplicity and power of the python language in terms of loops and functions and so forth. These days, I do all my server management and deployment scripts with python/plumbum.

    And while simple is great, a necessary feature not included in OP's script is spinning up the new instance in parallel, verifying it is running correctly, and then switching nginx or the load balancer to point to the new server. You are less prone to breaking production, and you get zero-downtime deploys.
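    The spin-up/verify/switch pattern can be sketched in shell too. Everything external is simulated here so the sketch runs anywhere: a file stands in for the nginx upstream config, and the health check is faked; the real versions are noted in comments:

```shell
#!/bin/sh
# Start the new instance, poll its health endpoint, and only then repoint
# the proxy. Simulated: file = upstream config, touch = instance came up.
set -e
dir=$(mktemp -d)
echo "upstream app { server 127.0.0.1:8080; }" > "$dir/upstream.conf"

touch "$dir/healthy"                       # pretend the new instance started
health_check() { [ -f "$dir/healthy" ]; }  # real: curl -fs localhost:8081/health

tries=0
until health_check; do
    tries=$((tries + 1))
    [ "$tries" -lt 30 ] || { echo "new instance never became healthy" >&2; exit 1; }
    sleep 1
done

# atomic config swap; a real script would follow with: nginx -s reload
echo "upstream app { server 127.0.0.1:8081; }" > "$dir/upstream.conf.tmp"
mv "$dir/upstream.conf.tmp" "$dir/upstream.conf"
```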

  • by codegeek on 4/10/24, 12:54 PM

    Love stuff like this. For my personal blog, I have a simple Makefile that builds the Go Binary, generates a static HTML output and then deploys it to a DigitalOcean VPS using ssh, reloads Caddy and Supervisor and boom.
  • by mediumsmart on 4/10/24, 7:52 AM

    Mine too, but I am not a repoman.

    pushthis="rsync -avzh --del ~/path.local/ somedude@path.online:path.online"

  • by gregsadetsky on 4/10/24, 2:56 PM

    There's a lot of good in that script - it's just that it doesn't seem to cover functionalities that I'm used to after years of deploying side and "real" (business, etc.) projects to Heroku and Render.

    How do you manage domain names, who deals with the SSL certificates, how do you set environment variables (i.e. "secrets"), how can you run postgres, how do you run remote commands (e.g. dbmigrate.py), etc.?

    A friend and I have been working for a few months on a project to simplify this - we're not the first to do an open source IaC, but we're scratching our own itch on a lot of features that we've been missing. It's basically "deploy with git push to your own VPS and manage everything with a CLI".

    I'd love to ask - what do people feel is most lacking from OP's script? Which features seem most important when deploying/managing a remote server? How do you choose whether to use Ansible or K8S or a script, or a full-blown PaaS (e.g. Heroku)? Is it price / ownership (i.e. having full control over the machine) / ease of use / speed of deployment / something else? Thanks!

  • by gigatexal on 4/10/24, 11:22 AM

    Off topic: I love the writing style of the author and this blog. Gonna follow it.
  • by kragen on 4/10/24, 1:49 PM

    here's the deployment script i use most often

    http://canonical.org/~kragen/sw/dev3.git/hooks/post-update

        #!/bin/sh
        set -e
    
        echo -n 'updating... '
        git update-server-info
        echo 'done. going to dev3'
        cd /home/kragen/public_html/sw/dev3
        echo -n 'pulling... '
        env -u GIT_DIR git pull
        echo -n 'updating... '
        env -u GIT_DIR git update-server-info
        echo 'done.'
    
    dev3.git is the origin for dev3, so the `git pull` in there pulls from the bare repo that just got pushed to

    it doesn't have the 60-second lag and it doesn't load the server all the time. it also doesn't run `go build` or restart a server with openrc, but those would be easy things to add if i wanted them

  • by mnahkies on 4/10/24, 6:25 AM

    I have a very similar system[1] for my personal projects, only I use GitHub actions to push a docker image to ECR and a commit to a config repo bumping the tag. I then have a cronjob to pull the config repo and reconcile using docker compose.

    I wouldn't use it for serious stuff, but it's been working great for my random personal projects (the biggest gap is that if something crashes, it currently stays crashed until manual intervention).

    - [1] https://github.com/mnahkies/shoe-string-server/pull/2

  • by tacone on 4/10/24, 1:46 PM

    I use this deploy script for my hobby project: https://gist.github.com/tacone/230d5c305a9c5eff7f58ea2744f20...

    It will connect over ssh, pull the code, build the containers and restart them (scripts/live is just a wrapper around docker-compose).

    If the build fails, the services will keep running.

    The only problem I have is that hitting CTRL+C in the very moment the containers are being restarted will leave me with the services down.
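    One hedged fix for that CTRL+C window, assuming a POSIX shell: ignore SIGINT for just the critical restart section (a no-op stands in for the commenter's scripts/live wrapper):

```shell
#!/bin/sh
# Ignore CTRL+C only while the containers are being restarted, so an
# impatient keypress can't leave the services down.
set -e

restart_services() {
    trap '' INT          # CTRL+C is ignored from here...
    echo "restarting..."
    :                    # stand-in for: ./scripts/live restart
    echo "restarted"
    trap - INT           # ...and honored again from here
}

restart_services
```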

  • by zoidb on 4/10/24, 11:25 AM

    Not exactly the same configuration as the OP, but if you are developing software in Go, the combination of Caddy, a single Go binary, and systemd or some other supervisor is extremely flexible, and I think it is the way to run multiple services on a single VM.

    A shell script that deploys a couple config files and off you go. Use different accounts for each service for isolation and put all of your static files in your binary using embed.FS. No need for fancy configuration management or K8s.

  • by anonyfox on 4/10/24, 2:05 PM

    I have a similar deploy.sh script for my go projects, with a slight twist:

    I compile my Go projects to a binary on a github action, scp it to the server, ssh into it and restart - all done in my deploy.sh and the GHA itself only installs Go and deps (its cached) and then calls that deploy.sh script which sits right in the repo itself.

    Super happy with it. Speaking as a previous DevOps guy that got sick of AWS complexities.
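    That flow might look roughly like this — a local simulation, where a directory plays the remote host and cp stands in for scp/ssh; the real commands (with your own host names and paths) are noted in comments:

```shell
#!/bin/sh
# deploy.sh sketch: build, ship the binary, swap it in, restart.
set -e
work=$(mktemp -d)
mkdir -p "$work/remote"

printf 'fake-binary' > "$work/app"             # real: go build -o app ./cmd/app
cp "$work/app" "$work/remote/app.new"          # real: scp app deploy@host:/srv/app/app.new
mv "$work/remote/app.new" "$work/remote/app"   # real: ssh host 'mv ... && systemctl restart app'
echo "deployed"
```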

  • by rcarmo on 4/10/24, 10:21 AM

    (Shameless plug) Mine is a Python script: http://piku.github.io
  • by OddMerlin on 4/10/24, 11:54 AM

    Why not use Ansible for something like this?

    Don’t get me wrong, I love bash scripts like any other old hat, but Ansible scratches this exact itch.

    You’ve got playbooks that can execute shell, provide logging, better management, history of execution, and fleet management, and it’s lightweight. And there’s a robust community of shared modules, etc.

  • by lyxell on 4/10/24, 10:26 AM

    Related: https://github.com/containrrr/watchtower

    Polls a docker registry and automatically restarts the container, with the same flags it was started with, using the latest image.

  • by fforflo on 4/10/24, 2:54 PM

    Scripts get complicated when people start worrying about "portability," like we were in 1991 so we have to use "sh".

    It's 2024 and you're not developing the next vim or Postgres. Use bash.

  • by medv on 4/10/24, 11:35 AM

  • by strzibny on 4/10/24, 12:41 PM

    I agree that sometimes Bash is enough, which is what I show in Deployment from Scratch. However I am moving pretty much everything to Kamal now...
  • by dvfjsdhgfv on 4/10/24, 11:18 AM

    If the author is reading this: did you code the fish animation CSS manually, or did you use some wrapper for all these moz- and webkit- variants?
  • by philkrylov on 4/10/24, 10:53 AM

    The heredoc in the script is never terminated. Probably it's not the production version but one for publishing ;-)
  • by chapterjason on 4/10/24, 6:04 AM

    What is existentialcrisis.sh? :D
  • by yodaforever on 4/10/24, 8:33 AM

    Can someone ELI5 what he said in the blog post please? What does the script do?
  • by TheCapeGreek on 4/10/24, 12:23 PM

    I can't speak to the validity of the author's use case as I'm not a golang dev, but in spirit I do like the idea. I think this trend back to simplicity (monoliths, sqlite, bash scripts) makes it good timing to be posting and learning things like this.

    Especially as more and more new, easy/low config tools come out like Caddy, this gets simpler over time.

    I have a testbed boilerplate project for Laravel in which server provisioning & deployments are done by bash scripts over SSH. Excluding comments/spacing, the provisioning script is 35 lines of mostly installing dependencies and minor file template copies. For simpler projects not needing queue workers and/or not using more "exotic" tech like Laravel Octane, this could probably be cut down to 30.

    TL;DR do the simplest thing that works for you and move on with life - the value in your project, if you intend to deploy it, is for it to be used.

  • by makz on 4/10/24, 3:57 PM

    I always say: my pipeline is a shell script I call pipeline.sh
  • by danpalmer on 4/10/24, 1:22 PM

    > my script will never:

    > - go down

    I've had cron log files get too big and cause issues.

    > - require an upgrade

    I can't count the number of times unattended-upgrades has broken something.

    > - force me to migrate

    Let's hope the OS is still receiving security updates, because installing on a VPS like this always has a high migration cost.

    This sort of deployment is a fair starting point, but let's not pretend it's some perfect ideal.

    ....

    Look. For deploying a blog, sure, but no one is deploying their blog on k8s. There is a reason why big complex deployment and orchestration systems exist, because there are use-cases for them. This is not one of them, but there's no need to stick your head in the sand over requirements and pretend they don't exist.

  • by dartos on 4/10/24, 3:12 AM

    Limitations are a good thing
  • by logro on 4/10/24, 10:25 AM

      my script will never:
      - go down
      - require an upgrade
      - force me to migrate
      - surprise me
      - keep me up at night
    
    Oh my sweet summer child.
  • by RandomWorker on 4/10/24, 12:55 PM

    Yes more you don’t need