by spektom on 4/23/17, 9:56 AM with 38 comments
by jnordwick on 4/23/17, 9:36 PM
Is it that people want to make the problem more complex than it needs to be? Is it that those who know the most about these issues don't share their secrets, so implementers on the outside often don't have a good understanding of how to do things properly? If you were to ask the guy behind Prometheus whether he's looked at the commercial offerings and what he's learned from them, would he even be able to speak about them intelligently?
There seems to be a huge skills gap on these things that I can't put my finger on. I'd love to be able to use a real TSDB, even at only half the speed and usefulness. It would be great for these smaller firms that can't or won't pay the license fees for a commercial offering until they get larger.
by iksaif on 4/23/17, 5:05 PM
by ah- on 4/23/17, 8:07 PM
As the raw storage seems pretty optimal now, I suspect next we'll see a comeback of indices for more precise queries to get another jump in performance.
by nicolaslem on 4/24/17, 8:42 AM
When you compare with the extreme efforts traditional databases take to ensure that unplugging a server will never ever result in data loss[0], silencing this problem makes me wonder.
Is it that at this ingest rate even trying to ensure durability is a vain effort?
by bogomipz on 4/23/17, 6:28 PM
>"Prometheus's storage layer has historically shown outstanding performance, where a single server is able to ingest up to one million samples per second as several million time series"
How does one million samples per second equate to several million time series? Is a single sample not equivalent to a single data point in a time series DB for a particular metric in Prometheus?
by bongonewhere on 4/23/17, 7:29 PM