by neiman on 1/31/24, 8:07 PM with 166 comments
by nonrandomstring on 1/31/24, 8:46 PM
Having tried fringe technologies over the years, spun up servers and run them for a few months, struggled through all the rough edges and loose threads, I often reach the point of feeling: this technology is good, but it's not ready yet. Time to let more determined people carry the torch.
The upside is:
- you tried, and so contributed to the ecosystem
- you told people what needs improving
Just quitting without writing about your experience seems a waste for everyone, so it's good to know why web hosting on IPFS is still rough.
by koito17 on 1/31/24, 9:01 PM
This has always been my major UX gripe with IPFS: `ipfs add` on the command line does little but generate a hash, and you need to actually pin things in order to "seed" them, so to speak. So "adding a file to IPFS", in the sense of "adding a file to the network", requires the user to know that (1) the "add" in `ipfs add` does not add the file to the network, and (2) you must manually pin everything you want replicated. I remember as recently as 2021 having to manually pin each file in a directory, since pinning the directory did not recursively pin its files. Doing this by hand for small folders is okay, but for large folders? Not so much.
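For the curious, the two-step dance looks roughly like this through the JS RPC client (kubo-rpc-client). A minimal sketch, assuming a local Kubo node on the default RPC port; the URL and content are placeholders:

```typescript
// Sketch: adding vs. pinning via a local Kubo node's RPC API.
// Assumes `npm install kubo-rpc-client` and a node at the default port.
import { create } from 'kubo-rpc-client'

const node = create({ url: 'http://127.0.0.1:5001/api/v0' })

// "add" imports the bytes into the local blockstore and returns a CID.
// Nothing is pushed out to the network; peers only get it if they ask.
const { cid } = await node.add('hello ipfs', { pin: false })
console.log('CID:', cid.toString())

// Pinning (recursive, so a directory DAG's children are covered too)
// is what keeps the data around on this node for others to fetch.
await node.pin.add(cid, { recursive: true })
```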
More importantly, the BitTorrent duplication problems that IPFS solved are also solved in BitTorrent v2, and IMO BitTorrent v2 solves them in a much better way (you can create "hybrid torrents", which allow a great deal of backwards compatibility with existing torrent software).
This isn't a UX issue, but another thing that makes it hard for me to recommend IPFS to friends is the increasing association with "Web3" and cryptocurrency. I don't have any strong opinions on "Web3", but to many people, it's an instant deal-breaker.
by p4bl0 on 1/31/24, 8:44 PM
Anyway, I came to the same conclusion as the author, but several years ago: in the end, nothing is actually decentralized, and maintaining this illusion of decentralization is costly, for no real purpose (other than the initial enjoyment of playing with a new tech, that is).
So I stopped maintaining it a few years ago. That decision was also due to the growing involvement of some of these projects with blockchain tech, which I never wanted to be a part of. This is also why I cancelled my article series in 2600 before the installments on IPFS and ZeroNet were published.
[1] See for example this archive of my HN profile page from 2016 with the link to it: https://web.archive.org/web/20161122210110/https://news.ycom...
by MenhirMike on 1/31/24, 8:59 PM
In other words: Once a few big websites are established, no small website will ever be able to gain traction again because the big websites are simply easier to reach and thus more attractive to use. And just like an unpopular old torrent, eventually you run out of seeders and your site is lost forever.
One can argue about the value of low-traffic websites, but I have to wonder: who in their right mind thinks "Yeah, I want to make a website and then have others decide whether it's allowed to live"? Then again, maybe that kind of "survival of the fittest" is appealing to some folks.
As far as I'm concerned, it sounds like a stupid idea. (The author goes into this in more detail, so it's a good write-up.)
by sharperguy on 2/1/24, 12:19 PM
And so naturally relays pop up, and the relays end up being more convenient than actually using the underlying protocol.
by alucart on 1/31/24, 10:00 PM
Wonder if there is actual use or need for such a thing.
by ianopolous on 2/1/24, 9:00 AM
My personal website, served from a Peergos gateway (anyone can run one), is https://ianopolous.peergos.me/
If you want to read more check out our book: https://book.peergos.org
by deephire on 1/31/24, 11:17 PM
Services like https://dappling.network, https://spheron.network, https://fleek.co, etc.?
I've seen some DeFi protocols use IPFS to add some resiliency to their frontends. If their centralized frontend on Vercel or whatever is down, they can direct users to their IPFS/ENS entrypoint.
by shp0ngle on 1/31/24, 9:44 PM
(Also, I've heard it's computationally costly, but I'm not sure if that's true; I can't imagine why it would be.)
As a result it's actually more centralised than the web: there are like three pinning services that everyone uses. At which point I don't get the extra hoops.
by yieldcrv on 1/31/24, 8:58 PM
it is simple enough and free even on hosted solutions, and it keeps my Netlify and Vercel usage free during spikes in traffic
but the availability issue is perplexing, just like OP encountered
some people just randomly won't be able to resolve some assets on your site, sometimes! the gateways go up and down, and their cache of your file comes and goes. browsers don't natively resolve ipfs:// URIs. it's very weird.
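The usual workaround (a mitigation, not a fix) is falling back across several gateways on the client. A rough sketch; the gateway list, order, and timeout are arbitrary examples:

```typescript
// Sketch: fall back across public HTTP gateways for a CID, since
// browsers don't resolve ipfs:// URIs natively.
const GATEWAYS = [
  'https://ipfs.io/ipfs/',
  'https://dweb.link/ipfs/',
  'https://cloudflare-ipfs.com/ipfs/',
]

async function fetchViaGateways(cid: string, timeoutMs = 10_000): Promise<Response> {
  for (const base of GATEWAYS) {
    try {
      const res = await fetch(base + cid, { signal: AbortSignal.timeout(timeoutMs) })
      if (res.ok) return res // first gateway that has (or fetches) the content wins
    } catch {
      // gateway down, rate-limited, or cache miss timed out -- try the next one
    }
  }
  throw new Error(`no gateway could serve ${cid}`)
}
```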
by schmichael on 1/31/24, 8:42 PM
> The blog you’re reading now is built with Jekyll and is hosted on my own 10$ server.
> don’t get me wrong, I’m still an IPFS fanboy.
...how could you still be a fanboy? When IPFS cannot fulfill even the most basic function of globally serving static content, why does it deserve anyone's interest? It's not even new or cutting-edge at this point. After 8 years of development, how can the most basic functionality still not work even for an expert?
by hirako2000 on 2/1/24, 8:27 AM
E.g. there is already Helia.
Just waiting for running a node in a browser tab to become insignificant, resource-wise.
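For scale: an in-tab node with Helia is already only a few lines (via the `helia` and `@helia/unixfs` packages); whether the tab can afford it, as noted above, is the open question. A minimal sketch:

```typescript
// Sketch: a Helia node running in the page itself.
// Assumes the `helia` and `@helia/unixfs` packages are bundled.
import { createHelia } from 'helia'
import { unixfs } from '@helia/unixfs'

const helia = await createHelia() // starts a libp2p node in the tab
const fs = unixfs(helia)

// Add some bytes locally and get their CID back.
const cid = await fs.addBytes(new TextEncoder().encode('hello from a browser tab'))
console.log(cid.toString())

// Stream the content back out (from the local blockstore or the network).
const decoder = new TextDecoder()
for await (const chunk of fs.cat(cid)) {
  console.log(decoder.decode(chunk, { stream: true }))
}
```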
by zubairq on 2/1/24, 7:04 AM
The current status is that I plan to bring IPFS back into my project in the future, but I will wait for the ecosystem to mature a bit more first, particularly with regard to libraries.
by geokon on 2/1/24, 8:09 AM
As far as I understand, this isn't a solved technical problem but mostly a cultural quirk, probably just down to how the early torrent clients were configured.
There is, for instance, a major Chinese torrent client (the name escapes me) that doesn't seed by default, so the whole thing could easily have not worked. If IPFS clients don't seed by default, then that kinda sounds like either a design mistake or a "culture problem".
I've always wondered if there was a way to check whether a client is reseeding (e.g. request and download a bit from a different IP) and then blacklist them (or throttle them or something) if they don't provide the data.
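A hypothetical sketch of that probe idea; every name in it is made up for illustration, and nothing like this exists in the real IPFS or BitTorrent APIs as far as I know:

```typescript
// Hypothetical: after serving a peer, try to fetch a block back from
// it via a separate identity/IP, and throttle peers that never serve.
interface Peer { id: string; addr: string }

const scores = new Map<string, number>()

// fetchAsStranger must connect from an unrelated identity and address,
// so the probed peer can't special-case the prober.
async function probe(
  peer: Peer,
  cid: string,
  fetchAsStranger: (addr: string, cid: string) => Promise<boolean>,
): Promise<void> {
  const served = await fetchAsStranger(peer.addr, cid)
  scores.set(peer.id, (scores.get(peer.id) ?? 0) + (served ? 1 : -1))
}

// Throttle (rather than hard-blacklist) below some arbitrary threshold,
// to tolerate NATs, restarts, and honest cache evictions.
function shouldThrottle(peer: Peer): boolean {
  return (scores.get(peer.id) ?? 0) < -3
}
```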