by doganugurlu on 3/7/24, 11:51 AM with 10 comments
- About 9 months ago we burned two separate days handling migrations on Supabase. A change in the Supabase CLI version and/or the local Docker version made it impossible for my teammate to bring up the db locally. Migrations couldn't be applied. A fresh pull didn't work either. I thought this was a fluke.
- Now I'm working with a new team/stack (Flask, Alembic, Postgres). Twice in the last month I had to change the `alembic_version` in the db just to get the migrations to apply. I think we got into this situation because we rewrote the history by having the new migration point to a previous revision (HEAD^) instead of the latest one (HEAD). Not sure what our motivation was.
- What am I/we doing wrong?
- What's the right/robust way to do this?
- How are teams of size >2 handling this?
- Isn't this scary?
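The HEAD^ mistake described above can be made concrete with a small sketch. An Alembic-style migration graph links each revision to its parent via a `down_revision`; if a new migration points at HEAD^ instead of HEAD, the history branches and the tool finds two heads. The revision ids and the `heads` helper below are made up for illustration; this is not Alembic's API.

```python
# Sketch of how pointing a new migration at HEAD^ instead of HEAD
# creates two heads in an Alembic-style revision graph.
# Revision ids are hypothetical.
revisions = {
    # rev_id: down_revision (parent)
    "a1": None,   # initial migration
    "b2": "a1",
    "c3": "b2",   # this was HEAD when the next migration was written
    "d4": "b2",   # mistake: new migration points at HEAD^ ("b2"), not HEAD ("c3")
}

def heads(revs):
    """A head is any revision that no other revision names as its parent."""
    parents = set(revs.values())
    return sorted(r for r in revs if r not in parents)

print(heads(revisions))  # -> ['c3', 'd4']: two heads, so a linear upgrade fails
```

With two heads, `alembic upgrade head` no longer has a single target, which is typically when people start hand-editing `alembic_version`; Alembic's own answer is to merge or re-parent the branched revision.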
by konha on 3/7/24, 1:46 PM
A directory with .sql files starting with a number. Each file contains a set of changes to the db. The db has a table storing the number that was applied last. To migrate your db, you check whether you have files with numbers higher than the one in the db, then apply those files to your db in order. That's it.
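The scheme above fits in a few lines. Here is a minimal sketch using sqlite3 for brevity; the `schema_version` table name, the file-naming convention (`001_create_users.sql`), and the `migrate` function are all assumptions, not a specific tool's API.

```python
import sqlite3
from pathlib import Path

def migrate(conn, migrations_dir):
    """Apply every numbered .sql file above the db's recorded version, in order."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    # Files are named like 001_create_users.sql, 002_add_email.sql
    for path in sorted(Path(migrations_dir).glob("*.sql")):
        number = int(path.name.split("_", 1)[0])
        if number > current:
            conn.executescript(path.read_text())
            conn.execute(
                "INSERT INTO schema_version (version) VALUES (?)", (number,)
            )
    conn.commit()
```

Running `migrate` twice is harmless: the second run finds no file numbered higher than the recorded version and applies nothing.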
Sounds like you are working in a way that is not intended by your tool / framework.
by ianpurton on 3/11/24, 7:04 AM
Firstly, I use a devcontainer for development so I know my versions line up. dbmate uses plain .sql files, which also makes things a lot easier.
You can see my setup here https://github.com/bionic-gpt/bionic-gpt
Have a look at the CONTRIBUTING.md to get an idea of the dev workflow.
by sethammons on 3/11/24, 8:53 AM
Don't rewrite shared history.
As for how migrations are run: at my last place, each team/service had its own data store. Numbered SQL files and forward-compatible changes only. Sometimes this meant application code would have to write to two locations, and a later update would change the read location.
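The "write to two locations, later change the read location" pattern described above is often called expand/contract. A minimal sketch, using sqlite3 and made-up column names (`fullname`, `display_name`) for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, fullname TEXT)")

# Step 1 (expand): add the new column. Old code keeps working unchanged.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Step 2: during the transition, the application writes to BOTH columns,
# so deploys running either version of the code see consistent data.
def save_user(conn, user_id, name):
    conn.execute(
        "INSERT INTO users (id, fullname, display_name) VALUES (?, ?, ?)",
        (user_id, name, name),
    )

# Step 3: a later deploy switches reads to the new column.
# Only after that is it safe to drop the old column (contract).
def get_name(conn, user_id):
    return conn.execute(
        "SELECT display_name FROM users WHERE id = ?", (user_id,)
    ).fetchone()[0]

save_user(conn, 1, "Ada")
print(get_name(conn, 1))  # -> Ada
```

The point is that each individual migration is forward compatible: no single deploy requires the schema and the code to change at the same instant.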
Current gig, everything is managed by the Django ORM. Great when you have a single db; it sucks when you scale out to sharded tables and sharded db instances, and it's a pain when migrating to new data stores.
by sergiotapia on 3/7/24, 1:40 PM
The only time I had problems with database migrations was in JavaScript with Prisma. But that was because we had a local dev dataset, and the CEO had gone into past migration files and edited them manually. Prisma exploded every time, and it was a pain to fix.
It sounds like your pains are also coming from people manually editing past migration files and committing them. Don't do that!
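Migration tools catch this kind of tampering by recording a checksum of each file when it is applied and comparing on every subsequent run. A rough sketch of the idea (not Prisma's actual implementation; the filenames and `verify` helper are made up):

```python
import hashlib

def checksum(sql_text):
    """Fingerprint a migration file's contents."""
    return hashlib.sha256(sql_text.encode()).hexdigest()

# Checksums recorded in the db at the time each migration was applied.
applied = {"001_init.sql": checksum("CREATE TABLE users (id INTEGER);")}

def verify(filename, current_text, applied):
    """Fail loudly if an already-applied migration file was edited afterwards."""
    recorded = applied.get(filename)
    if recorded is not None and recorded != checksum(current_text):
        raise RuntimeError(f"{filename} was edited after being applied")

verify("001_init.sql", "CREATE TABLE users (id INTEGER);", applied)  # ok
# verify("001_init.sql", "CREATE TABLE users (id TEXT);", applied)  # would raise
```

That's why editing a past migration "explodes": the file on disk no longer matches the fingerprint the db remembers applying.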
by ddorian43 on 3/7/24, 2:53 PM