by lukasfischer on 1/29/23, 12:05 AM with 78 comments
by jpgvm on 1/29/23, 5:50 AM
Once you have that under your belt it's not hard to work out how Docker itself works and how you can use it to fulfill the sort of CI/CD objectives you have outlined. Docker itself isn't important, the semantics of containerization are.
Something that Docker (and Docker-like things) takes massive advantage of is overlay filesystems like AUFS and OverlayFS; you would do well to understand these (at least skin deep).
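A bare-bones overlay mount on a Linux host (run as root; directory names are illustrative) shows the trick Docker builds its image layers on:

```shell
mkdir -p lower upper work merged
# Files in upper/ shadow files in lower/; all writes land in upper/,
# which is exactly how a container layer sits on top of image layers.
mount -t overlay overlay \
  -o lowerdir=lower,upperdir=upper,workdir=work merged
```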
Finally, networking becomes really important when you start playing with network namespaces; you should be somewhat familiar with at least the Linux bridge infrastructure and how Linux routing works.
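A minimal sketch of what Docker's default bridge network does under the hood, using plain iproute2 (run as root; interface and namespace names are illustrative):

```shell
# Create a namespace and a veth pair, put one end inside the namespace
ip netns add demo
ip link add veth0 type veth peer name veth1
ip link set veth1 netns demo
# Attach the host end to a bridge, as Docker does with docker0
ip link add br0 type bridge
ip link set veth0 master br0
ip link set br0 up
ip link set veth0 up
ip netns exec demo ip link set veth1 up
```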
Good luck!
by Too on 1/29/23, 11:24 AM
For example: Docker has absolutely zero knowledge of a branch's lifetime, or even of branches at all. This is something you have to design using the existing capabilities of Docker together with features or existing integrations provided by GitHub or Bitbucket.
Of course, knowing Docker more deeply will help you understand these boundaries better and use them.
One secret is that there is actually not much to it, most things are just variations of docker run and various tricks within docker build, sprinkled with some volume and image management like tagging and pruning. Other orchestrators like GH Actions, Compose, Kubernetes etc can be seen as building around these basic blocks.
If you already know these basics, you are probably going to learn faster by getting your hands dirty, trying to solve the scenarios you need, rather than binge watching tutorial#187 on YouTube.
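Those basic blocks, concretely (the "myapp" image name is a placeholder):

```shell
# Build an image for the current commit and give it two names
SHA=$(git rev-parse --short HEAD)
docker build -t myapp:"$SHA" .
docker tag myapp:"$SHA" myapp:latest

# A "variation of docker run": publish a port, mount a named volume
docker run --rm -p 8080:8080 -v myapp-data:/data myapp:latest

# Image and volume management: clean up what old builds left behind
docker image prune -f
docker volume prune -f
```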
by arthurcolle on 1/29/23, 3:17 AM
1) your reverse proxy, Nginx/Caddy
2) your "app", or API, whatever. pick whatever you want, a Rails API, a Phoenix microservice, a Django monolithic app, whatever you want.
3) your database. Postgres, whatever
4) Redis - not just for caching. Can use it for anything that requires some level of persistence, or any message bus needs. They even have some plugins you can use (iirc, limited to enterprise plans... maybe?) like RedisGraph.
5) elasticsearch, if you need real-time indexing and search capabilities. Alternatively you can just spin up a dedicated API that leverages full text search for your database container from 3)
6) ??? (sky is the limit!)
I prefer docker compose to Kubernetes because I am not a megacorp. You just define your different services, let them run, expose the right ports, and then things should "just work"
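That "define your services" step can be sketched in a compose file like this (all service names, images, and credentials are illustrative):

```yaml
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"          # the only port exposed to the host
    depends_on:
      - app
  app:
    build: .             # your API from (2)
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/app
      REDIS_HOST: redis  # the service name doubles as the hostname
    depends_on:
      - db
      - redis
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  pgdata:
```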
Sometimes you need to specifically name your containers (like naming redis container `redis`, and then in your code you will have to use `redis` as the hostname instead of `localhost` for example).
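In application code that usually boils down to making the hostname configurable, so the same code runs both under compose and on your laptop. A sketch (REDIS_HOST is an assumed variable name, not a Docker convention):

```python
import os

# Under compose, set REDIS_HOST=redis (the service name, which resolves
# via Docker's internal DNS); locally, fall back to localhost.
redis_host = os.environ.get("REDIS_HOST", "localhost")
redis_url = f"redis://{redis_host}:6379/0"
print(redis_url)
```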
basically That's It (tm)
by q845712 on 1/29/23, 2:06 AM
that's just my guess though :) Happy hacking!
by nickjj on 1/29/23, 11:45 AM
https://github.com/nickjj/docker-node-example is an up-to-date Node example app[0] that's ready to go for development and production and sets up GitHub Actions. Its readme links to a DockerCon talk from about a year ago that covers most of the patterns used in that project; some of my more recent blog posts cover the rest.
None of my posts cover feature branch deployments, though. That's a pretty different set of topics, mostly unrelated to learning Docker. Implementing this also greatly depends on how you plan to deploy Docker: for example, are you using Docker Compose, Kubernetes, etc.?
[0]: You can replace "node" in the GitHub URL with flask, rails, django and phoenix for other example apps in other tech stacks.
by ddtaylor on 1/29/23, 6:46 AM
1. Rebuilding the entire container, which often involves stopping and starting it, etc.
2. Manually running commands that copy the files into the container. This is irritating because if I forget which files I changed or forget to run the copy command I end up with a "half updated" container.
3. SSHing into the container. This is irritating because I have to modify the port layout and permissions of the container and later remember to "restore" them when I'm "done" making the container.
Thanks!
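For reference, the manual copy in (2) is typically some variant of `docker cp` (container and path names are hypothetical):

```shell
# Copy changed sources into the running container by hand;
# easy to miss files, hence the "half updated" problem
docker cp src/app.py mycontainer:/app/src/app.py
```

A bind mount (`docker run -v "$PWD/src:/app/src" ...`) avoids all three of these, at the cost of the container seeing your working tree directly.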
by CodeAndCuffs on 1/29/23, 3:43 AM
by verdverm on 1/29/23, 1:01 PM
You want to learn more about your CI system and then try things out until you hit the harder cases and edge cases.
Some things to try or think about
- Push two commits quickly, so the second starts while the first is running.
- Rebuild a commit while the current build is executing. Which one writes the final image to the registry? How do you know?
- How do you tag your images? If by branch name, how do you know which build produced an image? If by commit, how do you know which branch?
- Do you want to run the entire system per commit, shutting it down at the end of a build? Do you want to run supporting systems for the life of a branch? How do you clean up resources and not blow up your cloud budget? Do you clean up old containers each build (from old commits on this branch)? How do you clean up containers after a branch is deleted?
- Build a CI process that triggers subjobs, because eventually you may want to split things up. If you push a commit before the last build's subjob triggers, does it get the original commit or the latest commit? CI systems have nuances: Jenkins, for example, always fetches the latest commit when a job starts for a branch, so you may not be testing the code you think you are.
- Do you use a polyrepo or monorepo setup? For poly, how do you gather the right version of components for your commit? For mono, how do you build only what is necessary while still running a full integration test?
- Should you be doing integration testing inside or outside of the build system?
One of the reasons content that addresses these questions is harder to find is that the answers are highly dependent on the situation and tools. My solutions to many of them are handled with a mix of CUE and Python. You'll be writing code in most solutions.
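One common answer to the tagging question above is to push both a commit tag and a branch tag (the registry name is illustrative):

```shell
SHA=$(git rev-parse --short HEAD)
BRANCH=$(git rev-parse --abbrev-ref HEAD | tr '/' '-')

docker build -t registry.example.com/app:"$SHA" .
# The SHA tag pins the exact build; the branch tag is a moving pointer.
# Together they answer both "which build?" and "which branch?"
docker tag registry.example.com/app:"$SHA" registry.example.com/app:"$BRANCH"
docker push registry.example.com/app:"$SHA"
docker push registry.example.com/app:"$BRANCH"
```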
by lamroger on 1/29/23, 2:51 AM
Start step by step.
Before building on Github Actions, build locally.
See if you can build and tag an image with the git SHA. Then run your automated test command against the image/container.
Then see if you can write a GitHub Action doing exactly what you did locally.
Random blog posts have been more helpful in my experience than YouTube videos.
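Once the local loop works, the GitHub Actions version can be a near-literal transcription. A minimal sketch ("myapp" and "./run-tests.sh" are placeholders for your own image name and test command):

```yaml
# .github/workflows/ci.yml
name: ci
on: push
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image tagged with the commit SHA
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run tests inside the image
        run: docker run --rm myapp:${{ github.sha }} ./run-tests.sh
```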
by anyfactor on 1/29/23, 9:06 AM
This is the reason I gave up on learning docker properly. I had 3 devices at my disposal - M1 mac, a windows 10 pc and a rpi. The random errors I was getting made me quite frustrated. Keep a code diary and document your mistakes and solutions.
Also get a VPS. Never ever try a serverless solution when trying to properly learn docker. Also, do not try to do anything that involves GPU processing.
by mattlondon on 1/29/23, 2:20 PM
Like do you just run e.g. nodejs or javac locally and then "deploy" to a container, or do you have a development container where you code "in it", or is a new container built on every file change and redeployed?
At my current place of work, all of this is totally abstracted away so no idea how real world people do it!
by illusiveman on 1/29/23, 9:21 AM
by BretFisher on 1/29/23, 8:54 PM
by zelphirkalt on 1/29/23, 1:26 PM
It is even hard to find undoubtedly, holistically good examples of Docker usage. Many people do things in different ways, some better, some not so good. One can often find good aspects of Docker usage in individual projects, though. Like "What kind of environment variables should one let the user pass in, to avoid having to hardcode them in the image and to keep things configurable?", or "How to use multi-stage builds?". It is up to the thoughtful observer to identify those and adapt one's own process of creating Docker images.
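Both of those aspects can be sketched in a single Dockerfile (images, paths, and the variable name are illustrative):

```dockerfile
# Stage 1: build environment, discarded from the final image
FROM node:20 AS build
WORKDIR /src
COPY . .
RUN npm ci && npm run build

# Stage 2: only the built artifacts ship
FROM nginx:alpine
COPY --from=build /src/dist /usr/share/nginx/html
# Overridable at `docker run -e` time rather than hardcoded in the image
ENV LISTEN_PORT=8080
```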
I don't see Docker as the kind of thing one sits down with for a few evenings and then fully knows. It's more like a thing one picks up over time. One runs into a problem, searches for answers on how to solve it in a Docker scenario, finds several answers and picks one that seems appropriate, then learns later on whether that choice was a good one. Until then, it works for as long as that solution works. It is not as if Docker were some kind of scientific thing, where there is one correct answer to every question. Many things in Docker are rather ad-hoc solutions to problems. Just look at the language that makes up a Dockerfile and you will see the ad-hoc-ness of it all. Then there are limitations that seem just as arbitrary: for example, the limited number of layers (stemming from fear of too much recursion not being supported by Go and of not "externalizing the stack"), or not being able to change most of a container's attributes (like labels) while the container is running.
As for questions of CI and so on: I think those are separate issues, which are solved by having a good workflow for the version control system of choice. One could, for example, configure the CI to do things for specific branches, like deploying only the master branch, or deploying a test branch to another machine/server. But this has nothing to do with Docker.
by AlexITC on 1/29/23, 3:09 PM
- https://softwaremill.com/preview-environment-for-code-review...
- https://softwaremill.com/preview-environment-for-code-review...
- https://softwaremill.com/preview-environment-for-code-review...
While the examples use GitLab, it shouldn't be very hard to port the same idea to Bitbucket.
by zyl1n on 1/29/23, 8:52 AM
It demystified a lot of docker features for me.
by nunez on 1/29/23, 2:07 PM
I'm in the process of making a follow-up to this that covers more advanced topics. Stay tuned.
I also have a course that shows you how to use Docker for the build-test-deploy loop, though some of it is a little stale. Check that out here: https://www.linkedin.com/learning/devops-foundations-your-fi...
by alpinelogic on 1/29/23, 2:49 PM
by zczc on 1/29/23, 8:42 AM
by bradwood on 1/29/23, 5:44 PM
We use this mechanism with AWS, the serverless framework and some terraform. It works well. With us, the only thing remotely container related is the runtime context for the CI/CD pipeline.
That being said, you could make this work against a k8s cluster, fargate, or just some build servers.
by killthebuddha on 1/29/23, 2:46 AM
by iteratorx on 1/29/23, 2:26 PM
An easy way to get ephemeral envs starting from your docker-compose definition is Bunnyshell.com. It uses Kubernetes behind the scenes, but it's all pretty much abstracted away from the user. There is a free tier so you can experiment.
Disclosure: I'm part of the Bunnyshell team.
by theusus on 1/29/23, 7:24 AM
by MindTooth on 1/29/23, 1:10 PM
I use it several times a week. Buildx, Dockerfile, etc.
by jeffybefffy519 on 1/29/23, 8:35 PM
Good luck!
by cinntaile on 1/29/23, 11:06 AM
by agumonkey on 1/29/23, 11:26 AM
by basic_banana on 1/29/23, 12:03 PM
by paulcarroty on 1/29/23, 10:13 AM
by ancieque on 1/29/23, 8:54 AM
by rlt on 1/29/23, 6:09 AM
Ask it something like “Explain how to get started with Docker” and it will give you a bunch of steps in a reasonable order. Then ask it for details for each step, like:
“How do I install Docker on macOS?”
“Write a commented Dockerfile for an application written in $WHATEVER”
“Now write a commented Docker Compose file for this application and a Postgres database”
etc
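The Dockerfile prompt above will typically produce something along these lines (this is a hand-written sketch for a hypothetical Python app, not actual ChatGPT output):

```dockerfile
# Small base image to keep the final image lean
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Document the port the app listens on
EXPOSE 8000
CMD ["python", "app.py"]
```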
by zhangruinan on 1/30/23, 3:58 AM
by sciencesama on 1/29/23, 12:56 AM
by wendyshu on 1/29/23, 2:24 AM
by bottlepalm on 1/29/23, 7:21 AM
by MonkeyMalarky on 1/29/23, 5:13 AM