by MorganGallant on 6/14/20, 5:06 PM with 11 comments
A few ideas I had:
- https://equinox.io is a good option - $29/mo for automatic release channels, deployments, etc.
- Periodic clone from GitHub is another solution: every x minutes, clone, rebuild, and replace the binary if it changed. This works well, but can lead to some annoyances.
- I'd guess the simplest way is to write a small script which copies the binaries over to all the machines, then restarts the servers. This is fine I guess...?
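To make the second idea concrete, here's a minimal sketch of the poll-and-redeploy loop, meant to be run from cron. The repo path, build command, binary destination, and systemd unit name are all made-up placeholders, not anything from a real setup:

```python
#!/usr/bin/env python3
"""Poll the upstream branch; rebuild and restart only when it has moved."""
import subprocess

REPO = "/srv/app/checkout"      # assumption: a local clone lives here
SERVICE = "myapp.service"       # assumption: systemd unit name
DEST = "/usr/local/bin/myapp"   # assumption: where the binary is installed

def needs_update(local_sha: str, remote_sha: str) -> bool:
    """Redeploy only when the remote branch has moved past local HEAD."""
    return local_sha != remote_sha

def current_shas(repo: str) -> tuple[str, str]:
    """Return (local HEAD, upstream HEAD) after fetching."""
    subprocess.run(["git", "-C", repo, "fetch", "--quiet"], check=True)
    local = subprocess.check_output(
        ["git", "-C", repo, "rev-parse", "HEAD"], text=True).strip()
    remote = subprocess.check_output(
        ["git", "-C", repo, "rev-parse", "@{u}"], text=True).strip()
    return local, remote

def redeploy(repo: str, service: str) -> None:
    """Fast-forward, rebuild, and restart the service."""
    subprocess.run(["git", "-C", repo, "merge", "--ff-only", "@{u}"], check=True)
    # assumption: a Go binary; swap in whatever build command applies
    subprocess.run(["go", "build", "-o", DEST, "./..."], cwd=repo, check=True)
    subprocess.run(["systemctl", "restart", service], check=True)

def main() -> None:
    local, remote = current_shas(REPO)
    if needs_update(local, remote):
        redeploy(REPO, SERVICE)

# Call main() from a cron entry, e.g.: */5 * * * * /usr/local/bin/poll_deploy.py
```

One of the "annoyances": the polling interval puts a hard floor on deploy latency, and a broken build at HEAD will be picked up blindly unless you gate on CI status.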
Has anyone else worked on something similar? How did you / do you automatically update the binaries running on your on-prem servers?
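The "small script" approach from the last bullet could look something like this. Host names, paths, and the unit name are hypothetical; the one real trick is copying to a temp name and renaming, so a half-copied binary is never executed:

```python
#!/usr/bin/env python3
"""Push a freshly built binary to each host, then restart the service."""
import subprocess

HOSTS = ["app1.internal", "app2.internal"]  # assumption: your server list
BINARY = "./build/myapp"                    # assumption: local build artifact
DEST = "/usr/local/bin/myapp"               # assumption: install path
SERVICE = "myapp.service"                   # assumption: systemd unit

def commands_for(host: str) -> list[list[str]]:
    """Commands to push the binary to one host and restart its service."""
    return [
        # copy under a temporary name first
        ["scp", BINARY, f"{host}:{DEST}.new"],
        # rename(2) is atomic on the same filesystem, then restart
        ["ssh", host, f"mv {DEST}.new {DEST} && sudo systemctl restart {SERVICE}"],
    ]

def deploy() -> None:
    for host in HOSTS:
        for cmd in commands_for(host):
            subprocess.run(cmd, check=True)
```

Restarting hosts one at a time, as this loop does, also gives you a crude rolling deploy: if the new binary crashes on the first box, you can stop before touching the rest.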
by moondev on 6/14/20, 5:35 PM
vSphere is the fabric that connects everything: Packer builds the machine images, Terraform/govc deploys them.
To be honest, though, these days it's more for standing up Kubernetes clusters; workloads are managed via Kubernetes manifests rather than directly at the VM level.
by shoo on 6/15/20, 9:31 AM
I do roughly this. The custom application is rigged to run as a service on Debian, managed by systemd, i.e. the path of least resistance: the app is run in the idiomatic way by the operating system. I have an Ansible script that copies over a new version, ensures all dependencies are installed using apt, runs admin tasks to update anything that needs updating in the database, then restarts the service to pick up the new version of the application code. It's a bit sloppy and doesn't automate everything, but it does automate the common path: upgrading an existing deployment with an application code change.
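A playbook for the flow described above might look roughly like this. Everything here (group name, package, paths, the migrate subcommand) is a placeholder, and the "admin tasks" step will obviously vary per app:

```yaml
# Hypothetical playbook: install deps, copy the build, migrate, restart.
- hosts: app_servers
  become: true
  tasks:
    - name: Ensure runtime dependencies are installed
      apt:
        name: [libpq5]
        state: present

    - name: Copy the new application build
      copy:
        src: build/myapp
        dest: /usr/local/bin/myapp
        mode: "0755"
      notify: restart myapp

    - name: Run database admin tasks (e.g. migrations)
      command: /usr/local/bin/myapp migrate

  handlers:
    - name: restart myapp
      systemd:
        name: myapp
        state: restarted
```

The handler only fires when the `copy` task actually changed the file, so re-running the playbook against an up-to-date host is a cheap no-op.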
by user_agent on 6/14/20, 5:32 PM
You can always go with Bash / Python scripts, but hey, Ansible is really cool! I'd give it a try even for small things. I use it to manage my home Raspberry Pi cluster.
None of the above frees you from knowing how your operating systems work, BTW! That's still a must.
Have fun!
by asguy on 6/14/20, 5:11 PM