by psviderski on 6/18/25, 11:17 PM with 164 comments
In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on any of your Docker-enabled hosts — Docker's own image storage.
So I built Unregistry [1] that exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.
docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done. I built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.
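For the curious, the manual equivalent of what `docker pussh` automates looks roughly like this (the image name, port, and flags here are illustrative placeholders, not the actual internals):
# on the remote host: run a temporary registry backed by its image store
ssh user@server docker run -d --rm --name unregistry -p 5000:5000 unregistry
# tunnel the registry port back to the local machine
ssh -f -N -L 5000:localhost:5000 user@server
# push through the tunnel; only the missing layers get transferred
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
# tear down (the container is removed thanks to --rm)
ssh user@server docker stop unregistry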
Would love to hear your thoughts and use cases!
by shykes on 6/19/25, 11:37 PM
1. No distinction between docker engine and docker registry. Just a single server that can store, transfer and run containers as needed. It would have been a much more robust building block, and would have avoided the regrettable drift between how the engine & registry store images.
2. push-to-cluster deployment. Every production cluster should have a distributed image store, and pushing images to this store should be what triggers a deployment. The current status quo - push image to registry; configure cluster; individual nodes of the cluster pull from registry - is brittle and inefficient. I advocated for a better design, but the inertia was already too great, and the early Kubernetes community was hostile to any idea coming from Docker.
by alisonatwork on 6/19/25, 2:29 AM
Does it integrate cleanly with OCI tooling like buildah etc., or do you need a full-blown Docker install on both ends? I haven't dug deeply into this yet because it's related to some upcoming work, but it seems like bootstrapping a mini registry on the remote server is the missing piece for skopeo to be able to work for this kind of setup.
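Concretely, what I'd hope to be able to do is something like this (a sketch; the host/port and tunnel are assumptions):
# copy from the local daemon straight to the remote endpoint over a tunnel
skopeo copy --dest-tls-verify=false \
  docker-daemon:myapp:latest \
  docker://localhost:5000/myapp:latest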
by metadat on 6/19/25, 1:07 AM
Docker registries have their place but are overall over-engineered and antithetical to the hacker mentality.
by amne on 6/19/25, 7:54 AM
I need this in my life.
by fellatio on 6/19/25, 2:41 AM
Edit: that thing exists; it's Uncloud. Just found out!
That said, it's a tradeoff. If you are small, have one Hetzner VM, and are happy with simplicity (and don't mind building images locally), it is great.
by matt_kantor on 6/19/25, 2:47 PM
@psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?
by revicon on 6/19/25, 3:29 PM
My workflow in my homelab is to create a remote docker context like this...
(from my local development machine)
> docker context create mylinuxserver --docker "host=ssh://revicon@192.168.50.70"
Then I can do...
> docker context use mylinuxserver
> docker compose build
> docker compose up -d
And all the images contained in my docker-compose.yml file are built, deployed and running in my remote linux server.
No fuss, no registry, no extra applications needed.
Way simpler than using docker swarm, Kubernetes or whatever. Maybe I'm missing something that @psviderski is doing that I don't get with my method.
by dboreham on 6/19/25, 2:37 PM
Being able to run a registry server over the local containerd image store is great.
The details of how some other machine's containerd gets images from that registry are, to me, a separate concern. docker pull will work just fine provided it is given a suitable registry URL and credentials. There are many ways to provide the necessary network connectivity and credential sharing, so I don't want that aspect to be baked in.
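For example, plain SSH port forwarding already covers the connectivity side without baking anything in (a sketch; host and port are assumptions):
# from the consuming host: forward a local port to the exposed registry
ssh -f -N -L 5000:localhost:5000 user@imagehost
docker pull localhost:5000/myapp:latest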
Very slick though.
by jokethrowaway on 6/19/25, 3:05 AM
Both approaches are inferior to yours because of the load on the server (one way or another).
Personally, I feel like we need to go one step further and just build locally, merge all layers, ship a tar of the entire (micro) distro + app and run it with lxc. Get rid of docker entirely.
My images are tiny; the extra complexity is unwarranted.
Then of course I'm not a 1000 people company with 1GB docker images.
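A minimal sketch of that flatten-and-ship flow, assuming docker is still used for the build (docker export collapses all layers into a single rootfs tarball; the lxc side depends on your setup, so it's omitted):
# flatten the image into one rootfs tarball
docker create --name tmp myapp:latest
docker export tmp | gzip > myapp-rootfs.tar.gz
docker rm tmp
# ship it; on the server, unpack into an lxc rootfs and start the container
scp myapp-rootfs.tar.gz user@server: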
by actinium226 on 6/19/25, 12:31 AM
FWIW I've been using docker save and then mscp to transfer the file. It basically opens multiple scp connections to speed things up, and it works great.
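I.e. roughly this (names and paths illustrative):
docker save myapp:latest | gzip > myapp.tar.gz
mscp myapp.tar.gz user@server:/tmp/
ssh user@server 'docker load -i /tmp/myapp.tar.gz'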
by bradly on 6/18/25, 11:59 PM
Currently, I need to use a Docker registry for my Kamal deployments. Are you familiar with it, and would this remove the third-party dependency?
by sushidev on 6/20/25, 8:46 AM
#!/bin/bash
set -euo pipefail
IMAGE_NAME="my-app"
IMAGE_TAG="latest"
# A temporary Docker registry that runs on your local machine during deployment.
LOCAL_REGISTRY="localhost:5000"
REMOTE_IMAGE_NAME="${LOCAL_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
REGISTRY_CONTAINER_NAME="temp-deploy-registry"
# SSH connection details.
# The jump host is an intermediary server. Remove `-J "${JUMP_HOST}"` if not needed.
JUMP_HOST="user@jump-host.example.com"
PROD_HOST="user@production-server.internal"
PROD_PORT="22" # Standard SSH port
# --- Script Logic ---
# Cleanup function to remove the temporary registry container on exit.
cleanup() {
  echo "Cleaning up temporary Docker registry container..."
  docker stop "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
  docker rm "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
  echo "Cleanup complete."
}
# Run cleanup on any script exit.
trap cleanup EXIT
# Start the temporary Docker registry.
echo "Starting temporary Docker registry..."
docker run -d -p 5000:5000 --name "${REGISTRY_CONTAINER_NAME}" registry:2
sleep 3 # Give the registry a moment to start.
# Step 1: Tag and push the image to the local registry.
echo "Tagging and pushing image to local registry..."
docker tag "${IMAGE_NAME}:${IMAGE_TAG}" "${REMOTE_IMAGE_NAME}"
docker push "${REMOTE_IMAGE_NAME}"
# Step 2: Connect to the production server and deploy.
# The `-R` flag creates a reverse SSH tunnel, allowing the remote host
# to connect back to `localhost:5000` on your machine.
echo "Executing deployment command on production server..."
ssh -J "${JUMP_HOST}" "${PROD_HOST}" -p "${PROD_PORT}" -R 5000:localhost:5000 \
  "docker pull ${REMOTE_IMAGE_NAME} && \
   docker tag ${REMOTE_IMAGE_NAME} ${IMAGE_NAME}:${IMAGE_TAG} && \
   systemctl restart ${IMAGE_NAME} && \
   docker system prune --force"
echo "Deployment finished successfully."
by dzonga on 6/18/25, 11:59 PM
the whole reason I didn't end up using Kamal was the 'need a docker registry' thing, when I can easily push a Dockerfile / compose file to my VPS, build the image there, and restart to deploy via a make command
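Something like this behind the make target gets most of the way there (server name and paths assumed):
# sync sources to the VPS, build there, restart
rsync -az --exclude .git . user@vps:~/myapp
ssh user@vps 'cd ~/myapp && docker compose build && docker compose up -d'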
by sebastos on 6/27/25, 3:31 PM
I have spent an absolutely bewildering 7 years trying to understand why this huge gap in the docker ecosystem tooling exists. Even if I never use your tool, it’s such a relief to find someone else who sees the problem in clear terms. Even in this very thread you have people who cannot imagine “why you don’t just docker save | docker load”.
It’s also cathartic to see Solomon regretting how fucky the arbitrary distinction between registries and local engines is. I wish it had been easier to see that point discussed out in the open some time in the past 8 years.
It always felt to me as though the shape of the entire docker ecosystem was frozen incredibly fast. I was aware of docker becoming popular in 2017ish. By the time I actually started to dive in, in 2018 or so, it felt like its design was already beyond question. If you were confused about holes in the story, you had to sift through cargo-cult people incapable of conceiving that docker could work any differently than it already did. This created a pervasive, gaslighty experience: maybe I was just Holding It Wrong? Why is everyone else so unperturbed by these holes, I wondered. But it turns out, no, damnit - I was right!
by alibarber on 6/19/25, 9:26 AM
I personally run a small instance with Hetzner that has K3s running. I'm quite familiar with K8s from my day job so it is nice when I want to do a personal project to be able to just use similar tools.
I have a MacBook and, for some reason, I really dislike the idea of running docker (or podman, etc.) on it. Now of course I could have GitHub Actions building the project and pushing it to a registry, then pull that to the server, but it's another step between code and server that I wanted to avoid.
Fortunately, it's trivial to sync the code to a pod over kubectl, and have podman build it there - but the registry (the step from pod to cluster) was the missing step, and it infuriated me that even with save/load, so much was going to be duplicated, on the same effective VM. I'll need to give this a try, and it's inspired me to create some dev automation and share it.
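The sync-and-build half looks roughly like this (pod name and paths are assumptions):
# copy the source tree into a builder pod and build with podman there
kubectl cp . builder-pod:/src
kubectl exec builder-pod -- podman build -t myapp:latest /src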
Of course, this is all overkill for hobby apps, but it's a hobby and I can do it the way I like, and it's nice to see others also coming up with interesting approaches.
by politelemon on 6/19/25, 6:20 AM
> Linux via Homebrew
Please don't encourage this on Linux. It happens to offer a Linux setup as an afterthought but behaves like a pigeon on a chessboard rather than a package manager.
by jdsleppy on 6/19/25, 11:22 AM
DOCKER_HOST="ssh://user@remotehost" docker-compose up -d
It works with plain docker, too. Another user is getting at the same idea when they mention docker contexts, which is just a different way to set the variable.
Did you know about this approach? In the snippet above, the image will be built on the remote machine and then run. The context (files) are sent over the wire as needed. Subsequent runs will use the remote machine's docker cache. It's slightly different than your approach of building locally, but much simpler.
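And the plain-docker version of the same trick (image name illustrative):
DOCKER_HOST=ssh://user@remotehost docker build -t myapp .
DOCKER_HOST=ssh://user@remotehost docker run -d myapp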
by jlhawn on 6/18/25, 11:52 PM
docker -H host1 image save IMAGE | docker -H host2 image load
note: this isn't efficient at all (no compression or layer caching)!
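Compression at least is easy to bolt on, since docker load accepts gzipped input (layer caching is still lost):
docker -H host1 image save IMAGE | gzip | docker -H host2 image load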