by davidgomes on 12/29/23, 11:51 PM with 101 comments
by twoodfin on 12/30/23, 4:26 AM
Here’s the Apache Iceberg table format specification:
https://iceberg.apache.org/spec/
As they like to say in patent law, anyone “skilled in the art” of database systems could use this to build and query Iceberg tables without too much difficulty.
This is nominally the Delta Lake equivalent:
https://github.com/delta-io/delta/blob/master/PROTOCOL.md
I defy anyone to even scope out what level of effort would be required to fully implement the current spec, let alone what would be involved in keeping up to date as this beast evolves.
Frankly, the Delta Lake spec reads like a reverse engineering of whatever implementation tradeoffs Databricks is making as they race to build out a lakehouse for every Fortune 1000 company burned by Hadoop (which is to say, most of them).
My point is that I’ve yet to be convinced that buying into Delta Lake is actually buying into an open ecosystem. Would appreciate any reassurance on this front!
Editing to append this GitHub history, which is unfortunately not reassuring:
https://github.com/delta-io/delta/commits/master/PROTOCOL.md
Random features and tweaks just popping up, PR’d by Databricks engineers and promptly approved by Databricks senior engineers…
by wenc on 12/30/23, 3:51 AM
Most people use the Hive partitioning convention (i.e. directory names like /key3=000/key2=002/), but Iceberg goes further than this by exposing even more structure to the query engine.
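For what it's worth, here's a rough sketch of how that directory layout gets produced (the column names and the 'events' path are made up), using pandas with the pyarrow engine:

```python
import pandas as pd

# Hypothetical columns; the point is the layout of the output directory.
df = pd.DataFrame({
    "key3": ["000", "000", "001"],
    "key2": ["002", "003", "002"],
    "value": [1.0, 2.0, 3.0],
})

# Produces ./events/key3=000/key2=002/<part>.parquet and so on: the
# Hive-style layout that query engines use to skip irrelevant partitions.
df.to_parquet("events", partition_cols=["key3", "key2"])
```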
In a traditional DBMS like Postgres, the schema, the query engine and the storage format come as a single package.
But with big data, we're building database components from scratch, and we can mix and match. We can use Iceberg as a metadata format, DuckDB as the query engine, Parquet as the storage format, and S3 as the storage medium.
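As a minimal sketch of that mix-and-match stack (the S3 path is made up, and this assumes DuckDB's httpfs and iceberg extensions plus whatever S3 credentials your setup needs):

```python
import duckdb

con = duckdb.connect()
con.install_extension("httpfs")   # S3 access
con.load_extension("httpfs")
con.install_extension("iceberg")  # reads Iceberg table metadata
con.load_extension("iceberg")

# S3 credentials/region configuration omitted here.
# DuckDB is the query engine, Iceberg supplies the table metadata,
# Parquet files hold the data, and S3 is the storage medium.
count = con.execute("""
    SELECT count(*)
    FROM iceberg_scan('s3://my-bucket/warehouse/events')
""").fetchone()
print(count)
```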
by benjaminwootton on 12/30/23, 6:27 AM
It means that the storage and much of the processing are being standardised, so that you can move between databases easily and almost all tools will eventually be able to work with the same set of files in a transactionally sound way.
For instance, Snowflake could be writing to a file, a data scientist could be querying the data live from a Jupyter notebook, and ClickHouse could be serving user facing analytics against the same data with consistency guarantees.
If the business then decides to switch from Snowflake to Databricks, it isn’t such a big deal.
Right now it isn’t quite as fast to query these formats on S3 as native ingestion would be, but every database vendor will be forced by the market to optimise until querying these open formats approaches the performance of natively ingested data.
It’s a great win for openness and open source and for businesses to have their data in open and portable formats.
Lakehouse has the same implications. Lots of companies have data lakes and data warehouses and end up copying data between the two. To query the same set of data and have just one system to manage is equally impactful.
It’s a very interesting time to be in the data engineering world.
by jamesblonde on 12/30/23, 5:58 AM
The best way to build your own non-JVM lakehouse is to use Iceberg for metadata, Parquet for the data, query with DuckDB using Arrow tables (reading Parquet directly into Arrow is very low cost), and then go Arrow -> Pandas or Polars (either directly or via a service with Arrow Flight).
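A minimal sketch of that DuckDB -> Arrow -> Polars hop (the file glob and column names are made up; assumes duckdb, pyarrow and polars are installed):

```python
import duckdb
import polars as pl

con = duckdb.connect()

# DuckDB scans the Parquet files and returns the result as an Arrow table;
# the Arrow -> Polars conversion is cheap (largely zero-copy).
arrow_table = con.sql(
    "SELECT user_id, amount FROM read_parquet('data/events/*.parquet')"
).arrow()

df = pl.from_arrow(arrow_table)
print(df.group_by("user_id").agg(pl.col("amount").sum()))
```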
If you put Feather in the mix, the whole Python lakehouse stack doesn't currently work.
by debo_ on 12/30/23, 4:48 AM
by alentred on 12/30/23, 8:30 AM
I haven't looked into Iceberg since then, but I plan to, and I am really looking forward to seeing this develop. We have the tools and the compute power today to deal with data without legacy tech, and not all data is big data either. Consequently, "data engineering" thankfully resembles regular back-end development more and more, with regular development practices being put in place.
So, here is to the hope of having a pure Python Iceberg lib some day very soon!
by throwitaway222 on 12/30/23, 7:31 PM
by lysecret on 12/30/23, 2:08 PM
Overall this kind of architecture is just awesome.
by Lyngbakr on 12/30/23, 2:27 PM
I really like this attitude and have started embracing it myself both on paper and in notes on my website.
by lmeyerov on 12/30/23, 7:40 AM
It seems inevitable, more of a when vs if. Being able to have our cake & eat it too will be very cool :)
by hawaiianSpork on 12/30/23, 1:15 PM
by berniedurfee on 1/1/24, 5:40 AM
Overall, I like the whole concept of the Lakehouse because it can be done cheaply.
Most datalakes turn into swamps pretty quickly, so cheaper is better.
Let it sit unused for a while in S3 and then quietly nuke it without burning money on a big compute environment.
by albert_e on 12/30/23, 4:11 AM
>> Understanding Parquet, Iceberg and Data Lakehouses at Broad
by alexott on 12/30/23, 5:45 AM
There was a paper at VLDB about Delta Lake: https://www.vldb.org/pvldb/vol13/p3411-armbrust.pdf - it describes why it was created, plus details of implementation.
by pitah1 on 12/30/23, 11:39 AM
by fancy_pantser on 12/30/23, 8:00 PM
by Nelkins on 12/30/23, 9:45 PM
by Boxxed on 12/30/23, 6:26 AM
by jbmsf on 12/30/23, 6:26 PM
by aejm on 12/30/23, 2:57 PM
Is this a typo: “Hive, Delta Lake and Iceberg all support support of schema registry or metastore.”?
by mulmen on 12/30/23, 7:14 AM
by plopz on 12/30/23, 2:48 PM
by wokwokwok on 12/30/23, 8:16 AM
You need to step back and look from a broader perspective to understand this domain.
Talking about arrow/parquet/iceberg is like talking about InnoDB vs MyISAM when you're talking about databases: yes, those are technically storage engines for mysql/mariadb, but no, you probably do not care about them until you need them, and you most certainly do not care about them when you want to understand what a relational DB vs. a no-SQL DB is.
They are technical details.
...
So, if you step back, what you need to read about is STAR SCHEMAS. Here are some links: [1], [2].
This is what people used to use before data lakes.
So the tldr: you have a big database which contains condensed and annotated versions of your data, which is easy to query and structured in a way that visualization tools such as PowerBI, Tableau, MicroStrategy (ugh, but people do use it), etc. can consume.
This means you can generate reports and insights from your data.
Great.
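To make that concrete, here's a toy sketch (table names and numbers invented) of a star schema and the kind of query BI tools generate against it, using DuckDB purely as a convenient local SQL engine:

```python
import duckdb

con = duckdb.connect()

# Small dimension tables describe the "who/what/when"...
con.execute("CREATE TABLE dim_product (product_id INT, category TEXT)")
con.execute("CREATE TABLE dim_date (date_id INT, year INT)")
# ...and one central fact table holds the measurable events.
con.execute("CREATE TABLE fact_sales (product_id INT, date_id INT, amount DOUBLE)")

con.execute("INSERT INTO dim_product VALUES (1, 'books'), (2, 'games')")
con.execute("INSERT INTO dim_date VALUES (10, 2023), (11, 2024)")
con.execute("INSERT INTO fact_sales VALUES (1, 10, 9.99), (2, 10, 59.99), (2, 11, 19.99)")

# Reports are simple joins from the fact table out to its dimensions.
print(con.sql("""
    SELECT d.year, p.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    JOIN dim_date d USING (date_id)
    GROUP BY d.year, p.category
    ORDER BY d.year, p.category
"""))
```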
...the problem is that generating this structured data from absolutely massive amounts of unstructured data involves a truly colossal amount of engineering work; and it's never realtime.
That's because the process of turning raw data into a star schema was traditionally done via ETL tools that were slow and terrible. 'Were'. These tools are still slow and terrible.
Basically, the output you get is very valuable, but getting it is very difficult, very expensive and both of those problems scale as the data size scales.
So...
Datalakes.
Datalakes are the solution to this problem; you don't transform the data. You just ingest it and store it, basically raw, and when you need the data for something, you process it on the fly.
The idea was something like a dependency graph; what if, instead of processing all your data every day/hour/whatever, you defined what data you needed, and then when you need it, you rebuild just that part of the database.
Certainly you don't get the nice star schema, but... you can handle a lot of data, and what you need in order to process it 'ad hoc' is mostly pretty trivial, so you don't need a huge engineering effort to support it; you just need some smart table formats, a lot of storage and on-demand compute.
...Great?
No. Totally rubbish.
Turns out this is a stupid idea, and what you get is a lot of data you can't get any insights from.
So along comes the 'nextgen' batch of BI companies like Databricks, and they invent this idea of a 'lake house' [3], [4].
What is it? Take a wild guess. I'll give you a hint: having no tables was a stupid idea.
Yes! Correct, they've invented a layer that sits on top of a data lake that presents a 'virtual database' with ACID transactions that you then build a star schema in/on.
Since the underlying implementation is (magic here, etc. etc. technical details) this approach supports output in the form we originally had (structured data suitable for analytics tools), but it has some nice features like streaming, etc. that make it capable of handling very large volumes of data; but it's not a 'real' database, so it does have some limitations which are difficult to resolve (like security and RBAC).
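For a concrete feel of that layer, here is a minimal sketch using the deltalake Python package (delta-rs); the table path and columns are invented, and the same calls work against an object-store URI once credentials are configured:

```python
import pandas as pd
from deltalake import DeltaTable, write_deltalake

# Each write is an atomic commit recorded in the table's transaction log,
# even though the storage underneath is just Parquet files in a directory
# (or an object store like S3).
write_deltalake("/tmp/lakehouse/orders",
                pd.DataFrame({"order_id": [1], "amount": [9.99]}))
write_deltalake("/tmp/lakehouse/orders",
                pd.DataFrame({"order_id": [2], "amount": [4.50]}),
                mode="append")

dt = DeltaTable("/tmp/lakehouse/orders")
print(dt.version())    # latest committed version of the table
print(dt.to_pandas())  # reads a consistent snapshot
```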
...
Of course, the promise, that you just pour all your data in and 'magic!' you have insights, is still just as much nonsense as it ever was.
If you use any of these tools now, you'll see that they require you to transform your data; usually as some kind of batch process.
If you closed your eyes and said "ETL?", you'd win a cookie.
All a 'lake house' is, is a traditional BI data warehouse built on a different type of database.
Almost without exception, everything else is marketing fluff.
* exception: Kafka and streaming are actually fundamentally different for real-time aggregated metrics, but they're also fabulously difficult to do well, so most people still don't, as far as I'm aware.
...and I'll go out on a limb here and say really, you probably do not care if your implementation uses delta tables or iceberg; that's an implementation detail.
I guarantee that correctly understanding your domain data and modelling a form of it suitable for reporting and insights is more important and more valuable than what storage engine you use.
[1] - https://learn.microsoft.com/en-us/power-bi/guidance/star-sch... [2] - https://www.kimballgroup.com/data-warehouse-business-intelli...
[3] - https://www.snowflake.com/guides/what-data-lakehouse [4] - https://www.databricks.com/glossary/data-lakehouse
by meehai on 12/30/23, 8:35 AM
We interface with BigQuery (via Airflow) mostly, and except for one very annoying situation it's a big improvement in terms of speed (parsing floats after querying the DB is NEVER a good option).
---
In case anyone's wondering, it's basically storing and loading native numpy arrays in BigQuery via the python client(s).
You have a bunch of options (assume you have one or more columns of float32 numpy arrays):
- dataframe -> to_parquet -> upload to GCS -> GCSToBigQueryOperator (https://airflow.apache.org/docs/apache-airflow-providers-goo...)
  - Instead of being stored as `FLOAT, REPEATED`, the column is stored as a STRUCT with a structure of `list > item` OR `list > element` (pyarrow==11 OR pyarrow==13). This requires manually parsing that 'json structure' back into np.array after querying the DB -> slow, and basically you are using CSVs again.
  - Read more: https://stackoverflow.com/questions/68303327/unnecessary-list-item-nesting-in-bigquery-schemas-from-pyarrow-upload-dataframe
  - Set the schema before uploading? Nope, all values will be uploaded as null in BQ.
- dataframe -> bigquery.Client -> upload the dataframe from Python
  - Very slow, and you need to batch your data (imagine 24h vs 5 minutes kind of slow as dataframe sizes increase, plus you either keep all the data in memory or batch it, with an extra save/load of each batch before uploading).
  - Arrays are stored properly, though.
- Solution: you must do 2 things, one on the pyarrow side and one on the BigQuery side (see the sketch below):
  - `df.to_parquet(..., use_compliant_nested_type=True)` (in pyarrow==14 it's True by default, but Airflow needs pyarrow==11, where it's False by default)
  - use `enable_list_inference=True` on the load job (link: https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-parquet#list_logical_type)
  - When both of these are set (i.e. the parquet files are saved [to GCS] with the first flag and loaded [from GCS to BQ] with the second), arrays are stored as `FLOAT, REPEATED` and can be queried as numpy arrays out of the box without any manual handling.
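A rough sketch of that working path (bucket, dataset, table and column names are invented, and the actual GCS upload step is elided), using pandas plus the BigQuery Python client directly rather than the Airflow operator:

```python
import numpy as np
import pandas as pd
from google.cloud import bigquery

df = pd.DataFrame({
    "embedding": [np.random.rand(8).astype(np.float32) for _ in range(100)],
})

# pyarrow side: write compliant nested types so the list column doesn't get
# wrapped in an extra STRUCT level (True by default only in newer pyarrow).
df.to_parquet("batch.parquet", use_compliant_nested_type=True)

# ...upload batch.parquet to gs://my-bucket/batch.parquet...

# BigQuery side: enable list inference when loading the Parquet file,
# so the column lands as FLOAT, REPEATED.
client = bigquery.Client()
parquet_options = bigquery.format_options.ParquetOptions()
parquet_options.enable_list_inference = True
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    parquet_options=parquet_options,
)
client.load_table_from_uri(
    "gs://my-bucket/batch.parquet",
    "my_project.my_dataset.my_table",
    job_config=job_config,
).result()
```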
This took me like 1 week of debugging and reading source code, obscure SO comments and GH issues etc.