by dm03514 on 1/31/25, 12:17 PM with 0 comments
Building on DuckDB makes it easy to leverage all the integrations DuckDB supports.
Using batching, it's trivial to insert 5,000 rows/second into a small Postgres instance running locally in Docker!
Would love your thoughts and feedback, thank you!
What does your data stack look like for this kind of non-differentiating data work, like connecting streams and sources together?