by benhamner on 3/26/18, 5:53 AM with 36 comments
by benhamner on 3/26/18, 7:35 AM
As a data publisher, you have an easy way to publish data online, see how it's used, and interact with the users of the data. You can create the dataset via a simple web interface, and update it through the interface or an API. We automatically version these updates under the hood.
As a data consumer, you can browse the data online and download it (through the web or an API). You can see the code and insights others have generated on the data through Kaggle Kernels (hosted, versioned IPython notebooks that run in Docker containers). You can fork their code to get started on the data, or start coding from scratch on your own analysis. If you spot improvements that could be made to the metadata (dataset/file/column-level descriptions), you can make those edits directly.
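For anyone who wants to script this, here's a minimal sketch of pulling a dataset with the Kaggle API's Python client. It assumes you've installed the kaggle package and put an API token at ~/.kaggle/kaggle.json; "owner/some-dataset" is just a placeholder slug, not a real dataset.

    # Minimal sketch: download a dataset via the official Kaggle API client.
    # Assumes `pip install kaggle` and a token at ~/.kaggle/kaggle.json.
    # "owner/some-dataset" is a placeholder, not a real dataset slug.
    from kaggle.api.kaggle_api_extended import KaggleApi

    api = KaggleApi()
    api.authenticate()  # reads credentials from ~/.kaggle/kaggle.json
    api.dataset_download_files("owner/some-dataset", path="data/", unzip=True)

    # CLI equivalent:
    #   kaggle datasets download -d owner/some-dataset
    # Publishing a new version of your own dataset from a local folder:
    #   kaggle datasets version -p ./my-dataset -m "update notes"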
We're rapidly iterating on this product and expanding its functionality, and would love any feedback and suggestions.
by QasimK on 3/26/18, 6:44 AM
by antirez on 3/26/18, 9:22 AM
by Radim on 3/26/18, 5:06 PM
Not to assume bad faith on Kaggle's part, but we've been burned one too many times by private companies pushing their proprietary ("open") platforms to gobble up data. The familiar pattern of "it's free, just create an account", followed by data lock-in, followed by a gap once the project dies or gets monetized, leaves me a little cynical.
It's awesome that resources like these exist, but I'd be more comfortable paying attention if the raw data were hosted somewhere open (GitHub?), with a clear licensing and access model.
by neuromantik8086 on 3/27/18, 1:52 AM
by metakermit on 3/26/18, 2:00 PM
As an aside – I'm really curious to explore the datasets with "fake" in the title :)
https://www.kaggle.com/datasets?sortBy=relevance&group=publi...
by cosmic_ape on 3/26/18, 7:59 AM
by socksy on 3/26/18, 5:01 PM
by naushit on 3/26/18, 1:15 PM