by wolframhempel on 4/4/24, 11:54 AM with 160 comments
by RHSeeger on 4/6/24, 5:24 PM
Without any intent to insult what you've done (because the information is interesting and the writeup is well done)... how do the numbers work out when you account for actually implementing and maintaining the database?
- Developer(s) time to initially implement it
- PjM/PM time to organize initial build
- Developer(s) time for maintenance (fix bugs and enhancement requirements)
- PjM/PM time to organize maintenance
The cost of someone to maintain the actual "service" (independent of the development of it) is, I assume, either similar or lower, so there's probably a win there. I'm assuming you have someone on board who was in charge of making sure Aurora was configured / being used correctly; it would be just as easy, if not easier, to do the same for your custom database.
The cost of $120,000/year for Aurora seems like it would be less than the cost of development/organization time for the custom database.
Note: It's clear you have other reasons for needing your custom database. I get that. I was just curious about the costs.
by jrockway on 4/6/24, 5:57 PM
Atomic: not applicable, as there are no transactions. Consistent: no, as there is no protection against losing the tail end of writes (consider "no space left on device" halfway through a record). Isolated: not applicable, as there are no transactions. Durable: no, the data is buffered in memory before being written to the network (EBS is the network, not a disk).
So with all of this in mind, the engineering cost is not going to be higher than $10,000 a month. It's a print statement.
If it sounds like I'm being negative, I'm not. Log files are one of my favorite types of time series data storage. A for loop that reads every record is one of my favorite query plans. But this is not what things like Postgres or Aurora aim to do; they aim for things like "we need to edit past data several times per second and derive some of those edits from data that is also being edited". Now you have some complexity, and a big old binary log file and some for loops aren't really going to get you there. But if you don't need those things, then you don't need those things, and you don't need to pay for them.
The question you always have to ask, though, is have you reasoned about the business impacts of losing data through unhandled transactional conflicts? "read committed" or "non-durable writes" are often big customer service problems. "You deducted this bill payment twice, and now I can't pay the rent!" Does it matter to your end users? If not, you can save a lot of time and money. If it does, well, then the best-effort log file probably isn't going to be good for business.
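To make the "it's a print statement" point concrete, here is a minimal sketch of that style of storage: buffer roughly a second of updates in memory, append them to a plain file in one batched write, and use a for loop over every record as the query plan. This is my illustration only, not the article's actual engine; the JSON-lines format and field names are assumptions.

    import json
    import time

    class LocationLog:
        """Append-only log: buffer records in memory, flush to disk about once a second."""

        def __init__(self, path, flush_interval=1.0):
            self.file = open(path, "a", encoding="utf-8")
            self.buffer = []
            self.flush_interval = flush_interval
            self.last_flush = time.monotonic()

        def append(self, record):
            self.buffer.append(json.dumps(record))
            if time.monotonic() - self.last_flush >= self.flush_interval:
                self.flush()

        def flush(self):
            # One batched write per interval; anything still buffered is lost on a crash.
            self.file.write("\n".join(self.buffer) + "\n")
            self.file.flush()
            self.buffer.clear()
            self.last_flush = time.monotonic()

    def scan(path, predicate):
        # The "query plan": a for loop that reads every record.
        with open(path, encoding="utf-8") as f:
            for line in f:
                record = json.loads(line)
                if predicate(record):
                    yield record

    # Hypothetical usage
    log = LocationLog("/var/data/locations.log")
    log.append({"vehicle_id": "v1", "ts": 1712345678, "lat": 52.52, "lon": 13.405})
    recent = scan("/var/data/locations.log", lambda r: r["vehicle_id"] == "v1")

There is no ACID anywhere in there, which is exactly the tradeoff being described.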
by mdaniel on 4/5/24, 2:13 AM
It also seems that just about every open source "datadog / new relic replacement" is built on top of ClickHouse, and even they themselves allege multi-petabyte capabilities <https://news.ycombinator.com/item?id=39905443>
OT1H, I saw the "we did research" part of the post, and I for sure have no horse in your race of NIH, but "we write to EBS, what's the worst that can happen" strikes me as ... be sure you're comfortable with the tradeoffs you've made in order to get a catchy blog post title
by yau8edq12i on 4/6/24, 5:41 PM
by zX41ZdbW on 4/6/24, 7:02 PM
As a demo, I've recently implemented a tool to browse 50 billion airplane locations: https://adsb.exposed/
Disclaimer: I'm the author of ClickHouse.
by MuffinFlavored on 4/6/24, 5:21 PM
> This storage engine is part of our server binary, so the cost for running it hasn’t changed. What has changed though, is that we’ve replaced our $10k/month Aurora instances with a $200/month Elastic Block Storage (EBS) volume. We are using Provisioned IOPS SSD (io2) with 3000 IOPS and are batching updates to one write per second per node and realm.
I would be curious to hear what that "1 write per second" looks like in terms of throughput/size?
by time0ut on 4/6/24, 6:08 PM
> EBS has automated backups and recovery built in and high uptime guarantees, so we don’t feel that we’ve missed out on any of the reliability guarantees that Aurora offered.
It may not matter for their use case, but I don't believe this is accurate in a general sense. EBS volumes are local to an availability zone while Aurora's storage is replicated across a quorum of AZs [0]. If a region loses an AZ, the database instance can be failed over to a healthy one with little downtime. This has only happened to me a couple times over the past three years, but it was pretty seamless and things were back on track pretty fast.
I didn't see anything in the article about addressing availability if there is an AZ outage. It may simply not matter or maybe they have solved for it. Could be a good topic for a follow up article.
[0] https://aws.amazon.com/blogs/database/introducing-the-aurora...
by kumarm on 4/6/24, 5:52 PM
The project, I believe, still appears as a success story on the JGroups website after 20+ years. I am surprised people are writing their own databases for location storage in 2024 :). There was no need to invent new technology in 2002, and definitely not in 2024.
by afro88 on 4/6/24, 8:40 PM
> [We need to cater for] Delivery companies that want to be able to replay the exact seconds leading up to an accident.
> We are ok with losing some data. We buffer about 1 second worth of updates before we write to disk
Impressive engineering effort on its own though!
by xyst on 4/6/24, 6:04 PM
Even moderately sized Kafka clusters can handle the throughput requirement. Can even optimize for performance over durability.
Some limited query capability with components such as ksqldb.
Maybe offload historical data to blob storage.
Then again, Kafka is kind of complicated to run at these scales. Very easy to fuck up.
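As a rough sketch of what "optimize for performance over durability" can mean on the producer side, a kafka-python configuration along these lines trades replica acknowledgement and per-message sends for leader-only acks and large, lingering batches. The broker address, topic, and record shape are all made up for illustration.

    import json
    from kafka import KafkaProducer  # kafka-python

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",  # assumed local broker
        acks=1,                              # only the partition leader confirms the write
        linger_ms=1000,                      # batch up to ~1s of updates before sending
        batch_size=1_048_576,                # 1 MiB batches
        compression_type="lz4",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    producer.send("vehicle-locations", {"vehicle_id": "v1", "ts": 1712345678, "lat": 52.52, "lon": 13.405})
    producer.flush()

The operational complexity mentioned above lives in running the brokers, not in client code like this.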
by the_duke on 4/6/24, 5:46 PM
I especially like ClickHouse: it's generic but also a powerhouse that handles most things you throw at it, handles huge write volumes (with sufficient batching), supports horizontal scaling, and can offload long-term storage to S3 for much smaller disk requirements. The geo features in ClickHouse are pretty basic, but it does have some built-in geo datatypes and functions for e.g. calculating distances.
by kaladin_1 on 4/6/24, 5:33 PM
Sure, it won't cover the bazillion cases the DBs out there do, but that's not what you need. The source code is small enough for any team member to jump in and debug while pushing performance in any direction you want.
Kudos!
by CapeTheory on 4/6/24, 6:15 PM
by yunohn on 4/6/24, 5:36 PM
Not a negative though; not everything needs a general-purpose database. Clearly this satisfies their requirements, which is the most important thing.
by diziet on 4/6/24, 5:49 PM
by bawolff on 4/6/24, 5:28 PM
I think everything is cheaper than cloud if you do it yourself when you don't count staffing cost.
by Simon_ORourke on 4/6/24, 5:43 PM
by rstuart4133 on 4/12/24, 7:24 AM
The one thing they do say is "no ACID". That implies no b-trees, because an unexpected stop means a corrupted b-tree. Perhaps they use a hash instead, but it would have to be a damned clever hash tree implementation to avoid the same problem. Or perhaps they just rebuild the index after a crash.
Even an append-only log file has to be handled carefully without ACID. An uncontrolled shutdown on most file systems will leave blocks of nulls in the file, and half-written blocks if they cross disk block boundaries.
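For what it's worth, the usual workaround short of full ACID is to frame each record with a length and a checksum, and on recovery treat the first frame that doesn't verify as the end of the log. A minimal sketch of that idea (mine, not something the article describes):

    import struct
    import zlib

    MAX_RECORD = 1 << 20  # sanity bound; a block of nulls decodes as length 0

    def write_record(f, payload: bytes):
        # Frame layout: [length][crc32][payload]
        f.write(struct.pack("<II", len(payload), zlib.crc32(payload)) + payload)

    def read_records(f):
        while True:
            header = f.read(8)
            if len(header) < 8:
                return  # clean EOF or a torn header: ignore the tail
            length, crc = struct.unpack("<II", header)
            if length == 0 or length > MAX_RECORD:
                return  # nulls or garbage where a header should be
            payload = f.read(length)
            if len(payload) < length or zlib.crc32(payload) != crc:
                return  # half-written record: treat as end of log
            yield payload

Whether they do anything like this is exactly the detail the post leaves out.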
It's a tantalising headline, but after reading the 1,200 words I'm none the wiser on what they built or whether it meets their own specs. A bit of a disappointment.
by INTPenis on 4/6/24, 5:35 PM
You might as well say "we saved 100% of cloud costs by writing our own cloud".
by endisneigh on 4/6/24, 5:44 PM
I use managed databases, but is there really that much to do for maintaining a database? The host requires some level of maintenance - changing disks, updating the host operating system, failover during downtime for machine repair, etc. If you use a database built for failover, I imagine much of this doesn't actually affect operations that much, assuming you slightly over-provision.
For a database alone, I think the work needed to maintain it is greatly exaggerated. That being said, I still think it's more than using a managed database, which is why my company still does so.
In this case though, an append log seems pretty simple imo. Better to self host.
by fifilura on 4/6/24, 5:48 PM
Stream the events to s3 stored as Parquet or Avro files, maybe in Iceberg format.
And then use Trino/Athena to do the long term heavy lifting. Or for on-demand use cases.
Then only push what you actually need live to Aurora.
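A minimal sketch of the first hop of that pipeline, using pyarrow to write a Parquet file and boto3 to land it under a date-partitioned prefix that Athena/Trino can prune. The schema, bucket, and key are assumptions for illustration.

    import boto3
    import pyarrow as pa
    import pyarrow.parquet as pq

    # Assumed batch of location updates.
    batch = pa.table({
        "vehicle_id": ["v1", "v2"],
        "ts":         [1712345678, 1712345679],
        "lat":        [52.5200, 48.8566],
        "lon":        [13.4050, 2.3522],
    })

    pq.write_table(batch, "/tmp/locations-000001.parquet", compression="zstd")

    # Partitioned key layout (dt=...) lets the query engines skip most files.
    boto3.client("s3").upload_file(
        "/tmp/locations-000001.parquet",
        "my-location-archive",  # hypothetical bucket
        "locations/dt=2024-04-06/locations-000001.parquet",
    )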
by kroolik on 4/6/24, 7:59 PM
What they say is that the logic is embedded into their server binary and they write to a local EBS. But what happens when they have two servers? EBS can't be rw mounted in multiple places.
Won't adding a second (and further) servers cause trouble, like migrating data when a new server joins the cluster or a server leaves the cluster?
I understand Aurora was too expensive for them. But I think it is important to note their whole setup is not HA at all (which may be fine, but the headline could be misleading).
by rvba on 4/6/24, 11:30 PM
Maybe I'm cynical, but it's interesting that "the business" didn't start checking this to cut costs. I know that customers love this feature. Cynically, I can see it costing more, so some customers would drop off.
Also, it looks like they rewrote a log / timeseries "database" / key-value store? As others mention, it sounds like reinventing the wheel to get a cool blog post and boost a career by solving "problems".
by rad_gruchalski on 4/7/24, 12:06 AM
Reminds me of how I implemented MSSQL active-active log replication over Dropbox shares back in 2010 to synchronise two databases in the US and the UK. Worked perfectly fine, except for that one hurricane that took them out for longer than 14 days. This was more than the preconfigured log retention period.
by pheatherlite on 4/7/24, 6:54 AM
by remram on 4/7/24, 3:00 AM
> Databases are a nightmare to write, from Atomicity, Consistency, Isolation, and Durability (ACID) requirements to sharding to fault recovery to administration - everything is hard beyond belief.
Then they talk about their geospatial requirements, PostGIS etc., making it seem they need geospatial features ("PostGIS for geospatial data storage" -- wtf? You need PostGIS for geospatial queries, not merely storage...)
In reality, they did not require any of the features they mention throughout the article. What a weird write-up!
I guess the conclusion is "read the F*-ing specs". Don't grab a geospatial DBMS just because you heard the words "longitude" and "database" once.
by nikonyrh on 4/5/24, 4:19 PM
by trebecks on 4/7/24, 4:36 AM
the ebs slas look reasonable to a non-expert like me and you can take snapshots. it sounds like you need to be careful when snapshotting to avoid inconsistencies if stuff is only partially flushed to disk. so you'd need to pause io while it snapshots if those inconsistencies matter. that sounds bad and would encourage you to take less frequent snapshots...? you also pay for the snapshot storage but i guess you wouldn't need to keep many. i like that aws defines "SnapshotAPIUnits" to describe how you get charged for the api calls.
with aurora, it looks like you can synchronously replicate to a secondary (or multiple secondaries) across azs in a single region. it sounds nice to have a sync copy of stuff that people are using. op says they're ok with a few seconds of data loss so i'm wondering how painful losing a volume right before taking a snapshot would be.
i wonder if anything off the shelf does something similar. it sounds like people are suggesting clickhouse. i saw the buffer table in their docs and it sounds similar https://clickhouse.com/docs/en/engines/table-engines/special.... it looks like it has stuff to use s3 as cold storage too. i even see geo types and functions in the docs. i've never used clickhouse so i don't know if i'm understanding what i read, but it sounds like you could do something similar to what's described in the post with clickhouse if the existing geo types + functions work and you are too lazy to roll something yourself.
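as a rough sketch of the buffer-table idea plus one of the built-in geo functions, driven from python via clickhouse_driver (the table layout, flush thresholds, and names are all my assumptions, not anything taken from the post or the docs):

    from clickhouse_driver import Client  # assumes a reachable ClickHouse server

    client = Client(host="localhost")

    # Durable table, ordered for per-vehicle time-range scans.
    client.execute("""
        CREATE TABLE IF NOT EXISTS locations (
            vehicle_id String,
            ts DateTime64(3),
            lat Float64,
            lon Float64
        ) ENGINE = MergeTree ORDER BY (vehicle_id, ts)
    """)

    # Buffer table: absorbs small inserts in memory and flushes them to `locations`
    # in batches (here after at most 10 seconds or 1,000,000 rows per layer).
    client.execute("""
        CREATE TABLE IF NOT EXISTS locations_buffer AS locations
        ENGINE = Buffer(default, locations, 16, 1, 10, 10000, 1000000, 1000000, 10000000)
    """)

    # One of the built-in geo functions: distance in meters from a fixed point.
    rows = client.execute("""
        SELECT vehicle_id, ts,
               greatCircleDistance(lon, lat, 13.405, 52.52) AS meters_from_berlin
        FROM locations
        WHERE ts > now() - INTERVAL 1 HOUR
        ORDER BY ts
    """)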
by loftsy on 4/6/24, 5:27 PM
by exabrial on 4/7/24, 4:02 AM
And if you write your own db as they did here, it can 100% take advantage of your setup.
by zinodaur on 4/6/24, 5:52 PM
by mavili on 4/6/24, 5:58 PM
by selimnairb on 4/7/24, 11:14 AM
by awinter-py on 4/6/24, 5:24 PM
by halayli on 4/7/24, 4:26 AM
by tshanmu on 4/7/24, 6:54 AM
by icsa on 4/4/24, 12:50 PM
by bevekspldnw on 4/6/24, 9:04 PM
…that’s not something to brag about.
by brianhama on 4/7/24, 12:44 PM
by SmellTheGlove on 4/6/24, 6:10 PM
- This is core to their platform, makes sense to fit it closely to their use cases
- They didn't need most of what a full database offers - they're "just" logging
- They know the tradeoffs and designed appropriately to accept those to keep costs down
I'm a big believer in building on top of the solved problems in the world, but it's also completely okay to build shit. That used to be what this industry did, and now it seems to have shifted in the direction of like 5-10% of large players invent shit and open source it, and the other 90-95% are just stitching together things they didn't build in infrastructure that they don't own or operate, to produce the latest CRUD app. And hell, that's not bad either, it's pretty much my job. But it's also occasionally nice to see someone build to their spec and save a few dollars. It's a good reminder that costs matter, particularly when money isn't free and incinerating endless piles of it chasing a (successful) public exit is no longer the norm.
I get the arguments that developer time isn't free, but neither is running AWS managed services, despite the name. And they didn't really build a general purpose database, they built a much simpler logger for their use case to replace a database. I'd be surprised if they hired someone additional to build this, and if they did, I'd guess (knowing absolutely nothing) that the added dev spends 80% of their time doing other things. It's not like they launched a datacenter. They just built the software and run it on cheaper AWS services versus paying AWS extra for the more complex product.