from Hacker News

Compiler Explorer and the promise of URLs that last forever

by anarazel on 5/28/25, 4:28 PM with 189 comments

  • by kccqzy on 5/28/25, 5:10 PM

    Before 2010 I had this unquestioned assumption that links are supposed to last forever. I used the bookmark feature of my browser extensively. Some time afterwards, I discovered that a large fraction of my bookmarks were essentially unusable due to linkrot. My modus operandi after that was to print the webpage as a PDF. A bit later, when reader views became popular and reliable, I just copy-pasted the content from the reader view into an RTF file.
  • by mananaysiempre on 5/28/25, 4:54 PM

    May be worth cooperating with ArchiveTeam’s project[1] on Goo.gl?

    > url shortening was a fucking awful idea[2]

    [1] https://wiki.archiveteam.org/index.php/Goo.gl

    [2] https://wiki.archiveteam.org/index.php/URLTeam

  • by s17n on 5/28/25, 5:58 PM

    URLs lasting forever was a beautiful dream but in reality, it seems that 99% of URLs don't in fact last forever. Rather than endlessly fighting a losing battle, maybe we should build the technology around the assumption that infrastructure isn't permanent?
  • by creatonez on 5/28/25, 6:28 PM

    There's something poetic about abusing a link shortener as a database and then later having to retrieve all your precious links from random corners of the internet because you've lost the original reference.
  • by amiga386 on 5/28/25, 4:48 PM

    https://killedbygoogle.com/

    > Google Go Links (2010–2021)

    > Killed about 4 years ago, (also known as Google Short Links) was a URL shortening service. It also supported custom domain for customers of Google Workspace (formerly G Suite (formerly Google Apps)). It was about 11 years old.

  • by layer8 on 5/28/25, 7:08 PM

    I find it somewhat surprising that it’s worth the effort for Google to shut down the read-only version. Unless they fear some legal risks of leaving redirects to private links online.
  • by olalonde on 5/28/25, 5:34 PM

    > This article was written by a human, but links were suggested by and grammar checked by an LLM.

    This is the second time today I've seen a disclaimer like this. Looks like we're witnessing the start of a new trend.

  • by wrs on 5/28/25, 6:54 PM

    I hate to say it, but unless there’s a really well-funded foundation involved, Compiler Explorer and godbolt.org won’t last forever either. (Maybe by then all the info will have been distilled into the 487 quadrillion parameter model of everything…)
  • by 2YwaZHXV on 5/28/25, 10:45 PM

    Presumably there's no way to get someone at Google to query their database and find all the shortened links that go to godbolt.org?
  • by swyx on 5/28/25, 5:45 PM

    idk man how can URLs last forever if it costs money to keep a domain name alive?

    i also wonder if url death could be a good thing. humanity makes special effort to keep around the good stuff. the rest goes into the garbage collection of history.

  • by sedatk on 5/28/25, 7:53 PM

    Surprisingly, purl.org URLs still work after a quarter century, thanks to Internet Archive.
  • by rurban on 5/29/25, 7:55 AM

    The blog post missed the archive.org crawl for those links. They have them stored now as well. https://github.com/compiler-explorer/compiler-explorer/discu...
  • by jimmyl02 on 5/28/25, 4:39 PM

    This is a great perspective on how assumptions play out over longer periods of time. I think this risk is much greater when free third-party services back critical infrastructure.

    Someone has to foot the bill somewhere and if there isn't a source of income then the project is bound to be unsupported eventually.

  • by sebstefan on 5/29/25, 9:03 AM

    >Over the last few days, I’ve been scraping everywhere I can think of, collating the links I can find out in the wild, and compiling my own database of links1 – and importantly, the URLs they redirect to. So far, I’ve found 12,000 links from scraping:

    >Google (using their web search API)

    >GitHub (using their API)

    >Our own (somewhat limited) web logs

    >The archive.org Stack Overflow data dumps

    >Archive.org’s own list of archived webpages

    You're an angel Matt

  • by shepmaster on 5/28/25, 4:56 PM

    As we all know, Cool URIs don't change [1]. I greatly appreciate the care taken to keep these Compiler Explorer links working as long as possible.

    The Rust playground uses GitHub Gists as the primary storage location for shared data. I'm dreading the day that I need to migrate everything away from there to something self-maintained.

    [1]: https://www.w3.org/Provider/Style/URI

  • by 3cats-in-a-coat on 5/29/25, 12:58 PM

    Nothing lasts forever.

    I've pondered that a lot in my system design which bears some resemblance to the principles of REST.

    I have split resources into two kinds: ephemeral (and mutable), versus immutable and reference-counted (or otherwise GC-ed), which persist while referred to but are collected when no one refers to them.

    In a distributed system the former is the default, the latter can exist in little islands of isolated context.

    You can't track references throughout the entire world. The only thing that works is timeouts. But those are not reliable. Nor can you exist forever, years after no one needs you. A system needs its parts to be useful, or it dies full of useless parts.
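The ephemeral/immutable split described above can be sketched in a few lines. This is a minimal illustration, not any real system's API; the `BlobStore` class and its method names are invented for the example:

```python
class BlobStore:
    """Toy store: immutable blobs persist while referenced, collected at zero refs."""

    def __init__(self):
        self._blobs = {}  # key -> immutable bytes
        self._refs = {}   # key -> reference count

    def put(self, key, data):
        # The creator holds the first reference.
        self._blobs[key] = data
        self._refs[key] = 1

    def acquire(self, key):
        # A reader takes an additional reference.
        self._refs[key] += 1
        return self._blobs[key]

    def release(self, key):
        self._refs[key] -= 1
        if self._refs[key] == 0:
            # No one refers to it any more: collect it.
            del self._blobs[key]
            del self._refs[key]

    def __contains__(self, key):
        return key in self._blobs


store = BlobStore()
store.put("v1", b"immutable bytes")
store.acquire("v1")     # a reader takes a reference
store.release("v1")     # reader done; the creator's reference still holds it
assert "v1" in store
store.release("v1")     # last reference gone: collected
assert "v1" not in store
```

As the comment notes, this only works inside an island of isolated context where all references can be tracked; across the open web there is no `release` call you can rely on.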

  • by 90s_dev on 5/28/25, 5:41 PM

    Some famous programmer once wrote about how links should last forever.

    He advocated for /foo/bar with no extension. He was right about not using /foo/bar.php because the implementation might change.

    But he was wrong: it should be /foo/bar.html, because the end result will always be HTML when it's served to a browser, whether it's generated by PHP, Node.js, or by hand.

    It's pointless to prepare for some hypothetical new browser that uses an alternate language other than HTML.

    Just use .html for your pages and stop worrying about how to correctly convert foo.md to foo/index.html and configure nginx accordingly.

  • by devrandoom on 5/28/25, 11:34 PM

    > despite Google solemnly promising ...

    I'm pretty sure the lore says that a solemn promise from Google carries the exact same value as a prostitute saying she likes you.

  • by nssnsjsjsjs on 5/29/25, 12:55 AM

    The corollary of URLs that last forever is that we need both forever storage (which costs money forever) and forever institutional care and memory.

    Where URLs may last longer is where they are not used for the "resource locator" bit, but more like a UUID for namespacing. E.g. in XML, Java, or Go.
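The XML case can be illustrated with Python's standard library. The namespace URL below (under example.com) is hypothetical and is never dereferenced by the parser; it serves only as a globally unique, opaque name scoping the element:

```python
import xml.etree.ElementTree as ET

doc = """<root xmlns:cfg="http://example.com/2003/config">
  <cfg:timeout>30</cfg:timeout>
</root>"""

root = ET.fromstring(doc)
# The parser never fetches http://example.com/2003/config; lookups use the
# URL purely as a namespace identifier (Clark notation: {uri}localname).
timeout = root.find("{http://example.com/2003/config}timeout")
print(timeout.text)  # → 30
```

Such a URL "lasts" as long as documents keep using it, regardless of whether anything is ever hosted at that address.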

  • by devnullbrain on 5/28/25, 5:59 PM

    >despite Google solemnly promising that “all existing links will continue to redirect to the intended destination,” it went read-only a few years back, and now they’re finally sunsetting it in August 2025

    It's become so trite to mention that I'm rolling my eyes at myself just for bringing it up again but... come on! How bad can it be before Google do something about the reputation this behaviour has created?

    Was Stadia not an expensive enough failure?

  • by diggan on 5/28/25, 5:19 PM

    URLs (uniform resource locator) cannot ever last forever, as it's a location and locations can't last forever :)

    URIs however, can be made to last forever! Also comes with the added benefit that if you somehow integrate content-addressing into the identifier, you'll also be able to safely fetch it from any computer, hostile or not.
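A minimal sketch of that content-addressing idea, assuming SHA-256 as the hash; the `content_address` and `verify` helpers are illustrative names, not a real API:

```python
import hashlib

def content_address(data: bytes) -> str:
    # The identifier is derived from the bytes themselves, so any copy of
    # the content, fetched from anywhere, can be checked against it.
    return "sha256-" + hashlib.sha256(data).hexdigest()

def verify(data: bytes, address: str) -> bool:
    # Recompute the hash; a hostile mirror cannot forge a match.
    return content_address(data) == address

snippet = b"int main() { return 0; }"
addr = content_address(snippet)
assert verify(snippet, addr)
assert not verify(b"tampered", addr)
```

Because the address commits to the content, it stops identifying a *location* at all: any host that serves matching bytes is as good as the original.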

  • by Ericson2314 on 5/29/25, 3:20 AM

    The only type of reference that lasts forever is a content address.

    We should be using more of them.

  • by mbac32768 on 5/29/25, 2:36 AM

    it seems a bit crazy to try to avoid storing a relatively small amount of data each time a link is shared, when storage and bandwidth costs are rapidly dropping over time

    but perhaps I don't appreciate how much traffic godbolt gets

  • by sdf4j on 5/28/25, 7:13 PM

    > One of my founding principles is that Compiler Explorer links should last forever.

    And yet… that was a very self-destructive decision.