from Hacker News

Should we design for iffy internet?

by surprisetalk on 6/16/25, 3:34 PM with 230 comments

  • by DannyPage on 6/17/25, 2:32 PM

    A big focus is (rightly) on rural areas, but mobile internet packet loss can also be a big issue in cities or places where there are a lot of users. It's very frustrating to be technically online, but effectively offline. An example: Using Spotify on a subway works terribly until you go into Airplane mode, and then it suddenly works correctly with your offline music.
  • by zeinhajjali on 6/17/25, 2:46 PM

    This reminds me of a project I worked on for a grad school data science course here in Canada. We tried to map this "digital divide" using public data.

    Turns out, it's really tough to do accurately. The main reason is that the public datasets are a mess. For example, the internet availability data is in neat hexagons, while the census demographic data is in weird, irregular shapes that don't line up. Trying to merge them is a nightmare and you lose a ton of detail.

    So our main takeaway, rather than just being a pretty map, was that our public data is too broken to even see the problem clearly.

    I wrote up our experience here if anyone's curious: https://zeinh.ca/projects/mapping-digital-divide/

  • by linsomniac on 6/18/25, 8:56 PM

    As mentioned, this article is more about slow internet than about "iffy" internet. People are commenting about the need for slimming down pages, removing bloat, but...

    There are lots of cases for sending MORE data on "iffy" internet connections.

    One of our websites is a real estate for-sale browsing site (think Zillow). It works great from home or the office, but if you are actively out seeing properties, it can be really frustrating when the internet comes and goes and any interaction with the page takes 10-60 seconds because of latency and packet loss.

    A few months ago I vibe-coded a prototype that cached everything locally, served the cached versions first, and updated the cache in the background. Using developer tools to simulate bad networking, it was a night-and-day experience, largely because I would prefetch the first photo of every property, as well as details for the first few hundred properties that matched your search criteria.
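
    The core of that cache-first pattern fits in a short service worker. A rough sketch (not the actual prototype; the cache name and fallback handling are placeholders):

        // sw.js -- cache-first, refresh-in-background (a sketch, not the real site's code)
        const CACHE = "listings-v1"; // hypothetical cache name

        self.addEventListener("fetch", (event) => {
          event.respondWith(
            caches.open(CACHE).then(async (cache) => {
              const cached = await cache.match(event.request);
              // Always kick off a network fetch; update the cache when (if) it lands.
              const fromNetwork = fetch(event.request)
                .then((response) => {
                  if (response.ok) cache.put(event.request, response.clone());
                  return response;
                })
                .catch(() => cached || Response.error()); // offline: fall back to the cached copy
              // Answer from the cache immediately when possible, otherwise wait for the network.
              return cached || fromNetwork;
            })
          );
        });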

    "Bloat" when used intelligently, isn't so bad. ;-)

  • by nine_k on 6/17/25, 2:52 PM

    At one of my previous jobs, we designed a whole API to be slightly more contrived but require only one round trip for all key data, to address the iffy internet connectivity most of our users had. The frontend also did a lot of background loading to hide the latency when scrolling.

    It's really eye-opening to set up something like toxiproxy, configure bandwidth limitations, latency variability, and packet loss in it, and run your app, or your site, or your API endpoints over it. You notice all kinds of UI freezing, lack of placeholders, gratuitously large images, lack of / inadequate configuration of retries, etc.
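
    For anyone who wants to try it, standing up a degraded proxy takes a couple of calls to toxiproxy's HTTP API (port 8474 by default). A sketch from memory, so verify the field names against the README; true packet loss is better simulated at the OS level (e.g. tc-netem):

        // Sketch: create a toxiproxy proxy in front of a local API server, then add
        // latency and bandwidth toxics. Ports and names here are arbitrary examples.
        const TOXIPROXY = "http://localhost:8474";

        await fetch(`${TOXIPROXY}/proxies`, {
          method: "POST",
          body: JSON.stringify({
            name: "api",
            listen: "127.0.0.1:18080",  // point your app at this
            upstream: "127.0.0.1:8080", // the real backend
          }),
        });

        // ~800ms of added latency with 400ms of jitter on responses
        await fetch(`${TOXIPROXY}/proxies/api/toxics`, {
          method: "POST",
          body: JSON.stringify({
            type: "latency",
            stream: "downstream",
            attributes: { latency: 800, jitter: 400 },
          }),
        });

        // Throttle responses to roughly 100 KB/s
        await fetch(`${TOXIPROXY}/proxies/api/toxics`, {
          method: "POST",
          body: JSON.stringify({
            type: "bandwidth",
            stream: "downstream",
            attributes: { rate: 100 },
          }),
        });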

  • by Sanzig on 6/17/25, 2:54 PM

    While many websites are bad when it comes to large, unoptimized payload sizes, they are even worse when it comes to latency sensitivity.

    You can easily see this when using WiFi aboard a flight, where latency is around 600 msec at minimum (most airlines use geostationary satellites; NGSO for airline use isn't quite there yet). There is so much stuff that happens serially in back-and-forth client-server communication in modern web apps. The developer sitting in SF with sub-10 ms latency to their development instance on AWS doesn't notice this, but it's sure as heck noticeable when the round trip is 60x that. Obviously, some exchanges have to be serial, but there is a lot of room for optimization and batching that just gets left on the floor.

    It's really useful to use some sort of network emulation tool like tc-netem as part of basic usability testing. Establish a few baseline cases (slow link, high packet loss, high latency, etc) and see how usable your service is. Fixing it so it's better in these cases will make it better for everyone else too.
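
    One way to make those baseline cases repeatable is to keep them in a tiny script. A sketch assuming a Linux test box where you can run tc as root; the interface name and numbers are just examples:

        // netem-profiles.ts -- apply a named network profile to an interface via tc-netem.
        import { execSync } from "node:child_process";

        const profiles: Record<string, string> = {
          // ~geostationary satellite: high latency, some jitter and loss
          plane_wifi: "delay 600ms 50ms loss 1%",
          // congested cell: modest latency, heavy loss, throttled rate
          bad_cell: "delay 150ms 100ms loss 5% rate 1mbit",
          // marginal DSL: low loss but very little bandwidth
          slow_dsl: "delay 40ms 10ms loss 0.5% rate 3mbit",
        };

        const [profile, iface = "eth0"] = process.argv.slice(2);
        if (!profiles[profile]) {
          console.error(`usage: netem-profiles <${Object.keys(profiles).join("|")}> [iface]`);
          process.exit(1);
        }

        execSync(`tc qdisc del dev ${iface} root || true`); // clear any previous qdisc
        execSync(`tc qdisc add dev ${iface} root netem ${profiles[profile]}`);
        console.log(`applied ${profile} to ${iface}; undo with: tc qdisc del dev ${iface} root`);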

  • by steelegbr on 6/17/25, 7:49 AM

    Just get on the road with a 3/4/5G connection on a mobile phone if you want to understand why we still need to design for "iffy" Internet. So many applications have a habit of hanging when the connection isn't formally closed but you're going through a spotty patch. Connections to cell towers with full bars and backhaul issues are surprisingly common. It's a real problem when you're dealing with streaming media (radio can be in the low kbps) or even WebSockets.
  • by smelendez on 6/18/25, 7:05 PM

    Physical businesses need to think about this too. Too many seem to assume every customer is tech-savvy and equipped with an iPhone.

    Should you assume all your customers have smartphones? Smartphones with internet connections? Working cameras? Zelle? Venmo? Facebook? WhatsApp? Uncracked screens (for displaying QR codes to be scanned)? The ability to install an app?

    I recently bought a snack from a pop-up food vendor who could only accept Venmo, which luckily I have, or exact cash, since he didn't have change. I'm pretty sure he only told me this after he handed me my food. I know lots of people who don't have Venmo—some don't want it because they see it as a security risk, some have had their accounts banned, some just never used it and probably don't want to set it up in a hurry while buying lunch.

    I also recently stayed at a rural motel that mentioned in the confirmation email that the front desk isn't staffed 24/7, so if you need to check in after hours, you have to call the on-call attendant. Since cell service is apparently spotty there (though mine worked fine), they included the Wi-Fi password so you could call via Wi-Fi. There were also no room phones, so if the Wi-Fi goes out after hours, guests are potentially incommunicado, which sounds like the basis of a mystery novel.

  • by o11c on 6/17/25, 4:47 PM

    This fails to address the main concern I run into in practice: can you recover if some resources timed out while downloading?

    This often fails in all sorts of ways:

    * The client treats timeout as end-of-file, and thinks the resource is complete even though it isn't. This can be very difficult for the user to fix, except as a side-effect of other breakages.

    * The client correctly detects the truncation, but either it or the server is incapable of range-based downloads and tries to download the whole thing from scratch, which is likely to eventually fail again unless you're really lucky. (A sketch of the resumable alternative follows the list.)

    * Various problems with automatic refreshing.

    * The client's only (working) option is "full page refresh", and that re-fetches all resources including those that should have been cached.

    * There's some kind of evil proxy returning completely bogus content. Thankfully less common on the client end in a modern HTTPS world, but there are several ways this can still happen in various contexts.
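
    For what it's worth, the client side of a resumable download isn't much code. A sketch using fetch, assuming the server honors Range requests and sends an accurate Content-Length (which is exactly the part that's often missing):

        // Sketch: keep whatever bytes arrive, detect truncation via Content-Length,
        // and resume with a Range request instead of starting over.
        async function downloadResumable(url: string, maxAttempts = 5): Promise<Uint8Array> {
          let got = new Uint8Array(0);
          for (let attempt = 0; attempt < maxAttempts; attempt++) {
            const headers: Record<string, string> = got.length ? { Range: `bytes=${got.length}-` } : {};
            const res = await fetch(url, { headers });
            if (got.length && res.status !== 206) got = new Uint8Array(0); // server ignored Range: restart
            const expected = Number(res.headers.get("Content-Length"));
            const part = await readUntilError(res); // never treat a timeout as end-of-file
            got = concat(got, part);
            if (Number.isFinite(expected) && part.length < expected) continue; // truncated: resume
            return got;
          }
          throw new Error(`gave up after ${maxAttempts} truncated transfers`);
        }

        // Read the body stream, keeping partial data if the connection dies mid-transfer.
        async function readUntilError(res: Response): Promise<Uint8Array> {
          const chunks: Uint8Array[] = [];
          const reader = res.body!.getReader();
          try {
            for (;;) {
              const { done, value } = await reader.read();
              if (done) break;
              chunks.push(value);
            }
          } catch {
            /* connection dropped; return what we have so far */
          }
          return chunks.reduce(concat, new Uint8Array(0));
        }

        const concat = (a: Uint8Array, b: Uint8Array) => {
          const out = new Uint8Array(a.length + b.length);
          out.set(a);
          out.set(b, a.length);
          return out;
        };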

  • by maxcomperatore on 6/18/25, 10:42 PM

    I've shipped stuff used in parts of Latin America and Southeast Asia and one of the biggest lessons was this: latency, not bandwidth, is what kills UX.

    A 10mb download over 3G is fine if you can actually start it. But when the page needs 15 round trips before first render, you're already losing the user.

    We started simulating 1500ms+ RTT and packet loss by default on staging. That changed everything. Suddenly we saw how spinners made things worse, how caching saved entire flows, and how doing SSR with stale-while-revalidate wasn’t just optimization anymore. It was the only way things worked.
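
    For what it's worth, the stale-while-revalidate half can be as small as one header on the SSR response. A sketch with an Express-style handler and a made-up render function; it only pays off if a CDN, proxy, or service worker in front actually honors the directive:

        import express from "express";

        const app = express();

        // Stand-in for the real server-side render step.
        async function renderListingPage(id: string): Promise<string> {
          return `<!doctype html><html><body><h1>Listing ${id}</h1></body></html>`;
        }

        app.get("/listing/:id", async (req, res) => {
          const html = await renderListingPage(req.params.id);
          // Caches may serve a stored copy instantly for up to 10 minutes past its
          // max-age while they refetch a fresh one in the background.
          res.set("Cache-Control", "public, max-age=60, stale-while-revalidate=600");
          res.type("html").send(html);
        });

        app.listen(3000);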

    If your app can work on a moving train in Bangladesh, then it's gonna feel instant in SF.

  • by pfych on 6/17/25, 4:18 AM

    I had a frequent hour-and-a-half train commute every couple of days to my previous role, and during that time, I learned how horrific our product was to use on a spotty connection.

    Made me think more about poor & unstable connections when building out new features or updating existing things in the product. Easily parsable loading states, better user-facing alerts about requests timing out, moving calculations/compute client-side where it made sense, etc.

  • by potatolicious on 6/17/25, 2:32 PM

    A good point. The author does briefly address the point of mobile internet but I think it deserves a lot more real estate in any analysis like this. A few more points worth adding:

    - Depending on your product or use case, somewhere between a majority and a vast majority of your users will be using your product from a mobile device. Throughput and latency can look great one moment, but both are highly variable over time. You might be able to squeeze 30Mbps and 200ms pings out of one request and then face 2Mbps and 4000ms pings seconds later.

    - WiFi generally sucks for most people. The fact that they have a 100Mbps/20Mbps terrestrial link doesn't mean squat if they're eking out 3Mbps with eye-watering packet loss because they're in their attic office. The vast majority of your users are using wireless links (WiFi or cell) and are not in any way hardlined to the internet.

  • by demosthanos on 6/17/25, 3:52 PM

    > This shows pretty much what I'd expect: coverage is fine in and around cities and less great in rural areas. (The Dakotas are an interesting exception; there's a co-op up there that connected a ton of folks with gigabit fiber. Pretty cool!)

    Just a warning about the screenshot he's referencing here: the slice of map that he shows is of the western half of the US, which includes a lot of BLM land and other federal property where literally no one lives [0], which makes the map look a lot sparser in rural areas than it is in practice for humans on the ground. If you look instead at the Midwest on this map you'll see pretty decent coverage even in most rural areas.

    The weakest coverage for actually-inhabited rural areas seems to be the South and Appalachia.

    [0] https://upload.wikimedia.org/wikipedia/commons/0/0f/US_feder...

  • by dbetteridge on 6/17/25, 2:52 AM

    Assume terrible internet, and then anyone with good internet is pleasantly surprised at how not-terrible your websites are.

    You're also opening yourself up to more potential customers in rural areas or areas with poor reception, where internet may exist but may not be consistent or low latency.

  • by Workaccount2 on 6/17/25, 2:41 PM

    This gets down to a fundamental problem that crops up everywhere: How much is x willing to exponentially sacrifice to satisfy the long tail of y?

    It's grounds for endless debate because it's inherently a fuzzy answer, and everyone has their own limits. However the outcome naturally becomes an amalgamation of everyone's response. So perhaps a post like this leads to a few more slim websites.

  • by cjs_ac on 6/17/25, 2:52 PM

    The first computer I ever used had a 56k modem, but thanks to the task-tracking software my employer uses, I can empathise with greybeard stories about watching text appear one character at a time over a 300 baud modem. I load it up in a browser tab in the morning and watch as the various tasks appear one at a time. It's an impediment to productivity.

    The rule I've come up with is one user action, one request, one response. By 'one response', I mean one HTTP response containing DOM data; if that response triggers further requests for CSS, images, fonts, or whatever, that's fine, but all the modifications to the DOM need to be in that first request.
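
    A toy version of that rule on the client side, assuming a hypothetical /tasks endpoint that returns the complete, ready-to-insert markup in one response:

        // One user action, one request, one response: the server sends back every DOM
        // change for this action at once, instead of a cascade of follow-up API calls.
        const filter = document.querySelector<HTMLSelectElement>("#task-filter")!;
        const list = document.querySelector<HTMLElement>("#task-list")!;

        filter.addEventListener("change", async () => {
          const res = await fetch(`/tasks?filter=${encodeURIComponent(filter.value)}`);
          list.innerHTML = await res.text(); // the whole updated task list, ready to render
        });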

  • by jazzyjackson on 6/17/25, 2:45 PM

    Article skips consideration for shared wifi such as cafes where, IME, a lot of students do their work. Consumer wifi routers might have a cap of ~24 clients, and kind of rotate which clients they're serving, so not only is your 100Mbit link carved up, but you periodically get kicked off and have to renew your connection. I cringe when I see people trying to use slack or office365 in this environment.

    Grateful for the blog w/ nice data tho TY

  • by pianom4n on 6/18/25, 8:28 PM

    Despite the title, the article doesn't talk about "iffy internet" at all. It's all about "slightly slow" internet which is a complete non-issue except for large downloads (e.g. modern games).

    Congested and/or weak wifi and cell service are what "iffy" is about. Will a page _eventually_ load if I wait long enough? Or are there 10 sequential requests, 100 KB each, that all have to succeed just to show me 2 sentences of text?

  • by jmajeremy on 6/17/25, 3:25 PM

    I'm a minimalist in this regard, and I really believe that a website should only be as complex as it needs to be. If your website requires fast Internet because it's providing some really amazing service that takes advantage of those speeds, then go for it. If it's just a site to provide basic information but it loads a bunch of high-res images and videos and lengthy javascript/css files, then you should consider trimming the fat and making it smaller. Personally I always test my website on a variety of devices, including an old PC running Windows XP, a Mac from 2011 running High Sierra, an Android phone from 2016, and a Linux machine using Lynx text browser, and I test loading the site on a connection throttled to 128kbps. It doesn't have to run perfectly on all these devices, but my criterion is that it's at least usable.
  • by hosh on 6/17/25, 5:23 PM

    There is a (perverse?) incentive to have an always-on network connected to services that can be metered and billed -- that is how we get monthly recurring revenue. Even hardware companies want in on that -- think HP printers and authorized toner cartridges.

    A different perspective on this shows up in a recent HN submission, "Start your own Internet Resiliency Club" (https://news.ycombinator.com/item?id=44287395). The author of the article talks about what it would take to have working internet in a warzone where internet communications are targeted.

    While we can frame this as whether we should design our digital products to accommodate people with iffy internet, I think seeing this as a resiliency problem that affects our whole civilization is a better perspective. It is no longer about accommodating people who are underserved, but rather: should we really be building for a future where the network is assumed to be always-connected? Do we really want our technologies to be concentrated in the hands of the few?

  • by mlhpdx on 6/17/25, 2:56 PM

    > you should not assume that it's better than around 25Mbps down and 3Mbps up

    This is spot on for me. I live in a low-density community that got telecom early, and the infrastructure has yet to be upgraded. So, despite being a relatively wealthy area, we suffer from poor service and have to choose between flaky high-latency, high-bandwidth (Starlink) and flaky low-latency, low-bandwidth (DSL). I’ve chosen the latter to this point. Point-to-point wireless isn’t an option because of the geography.

  • by bob1029 on 6/17/25, 6:37 PM

    If you really want to engineer web products for users at the edge of the abyss, the most robust experiences are going to be SSR pages that are delivered in a single response with all required assets inlined.

    Client-side rendering with piecemeal API calls is definitely not the solution if you are having trouble getting packets from A to B. The more you spread the information across different requests, the more likely you are to lose packets, force arbitrary retries, and otherwise jank up the UI.

    From the perspective of the server, you could install some request timing middleware to detect that a client is in a really bad situation and actually do something about it. Perhaps a compromise could be to have the happy path be a websocketed React experience that falls back to an ultralight, one-shot SSR experience if the session gets flagged as having a bad connection.
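
    The detection half of that compromise can be a few lines of middleware. A sketch in Express terms, assuming a session store (e.g. express-session) is already installed; the threshold is a guess you'd tune, and server-side response timing only roughly reflects the client's link:

        import type { NextFunction, Request, Response } from "express";

        const SLOW_RESPONSE_MS = 3000; // arbitrary; tune against real traffic

        // Time every response; if a client keeps seeing slow round trips, flag the
        // session so subsequent requests get the one-shot, inlined SSR page instead
        // of the websocket-heavy experience.
        export function flagBadConnections(req: Request, res: Response, next: NextFunction) {
          const started = Date.now();
          res.on("finish", () => {
            // "finish" fires once the last byte is handed to the socket, so long gaps
            // mostly indicate a slow or lossy link on larger responses.
            if (Date.now() - started > SLOW_RESPONSE_MS && (req as any).session) {
              (req as any).session.badConnection = true;
            }
          });
          next();
        }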

  • by simonw on 6/17/25, 3:20 PM

    Any time I'm on a road trip or traveling outside of major cities it becomes very obvious that a lot of developers don't consider slower network connections at all.

    The other issue that's under-considered is lower-spec devices. Way more people use cheap Android phones than fancy iPhones from the last five years. Are you testing on those more common devices?

  • by zzo38computer on 6/17/25, 10:35 PM

    I think you should not assume fast internet, or any internet, when it is not necessary to do so. Many programs could mostly work without needing an internet connection (e.g. an email program only needs to connect to the internet to send and receive; you can compose drafts and read messages that are already received without an internet connection), so they should be designed to work mostly without an internet connection where appropriate (this also includes avoiding spyware, etc. as well). When you do need an internet connection, you should avoid sending excessive data (for HTML files, this includes pictures, CSS, JavaScripts, etc.; for other protocols and file formats it includes other things), too.

    For such things as streaming audio/video, there is the codec and other things to be considered as well. If the data can be coded in real time or if multiple qualities are available already on the server then this can be used to offer a lower quality file to clients that request such a file. The client can download the file for later use and may be able to continue download later, if needed.

    There is also the question of whether you need a video call (or whatever else) at all. Sometimes you can do without it, or it can be an optional possibility.

    There is also the matter of avoiding the need for specific computers. That is not only about internet access, although that is a part of it, too. This does not mean that computers and the internet cannot be helpful. They can be helpful, but they should not be relied on too heavily.

    The Gemini protocol does not have anything like the Range request and Content-length header, and I thought this was not good enough so I made one that does have these things. (HTTP allows multiple ranges per request, but I thought that is more complicated than it needs to be, and it is simpler to only allow one range per request.)

  • by continuational on 6/17/25, 3:50 PM

    Here's a fun exercise: Put the front page of your favorite web framework through https://pagespeed.web.dev/

    (if you don't have a favorite, try react.dev)

    We're using this benchmark all the time on https://www.firefly-lang.org/ to try to keep it a perfect 100%.

  • by jabroni_salad on 6/18/25, 8:54 PM

    I mainly work in a rural area. We have some clients who access the internet through private fixed wireless backhaul mounted to grain silos, satellite such as hughesnet, and DSL. Modern web design on connections like that means that these guys can be unable to download websites the normal way but they can stream a desktop over VDI and browse the net that way pretty reliably.

    One of our biggest sticking points when new forms of multifactor came around is that it can sometimes take longer than a minute to deliver a push notification or text message even in areas that are solid red on Verizon's coverage map.

    > This is likely worse for B2C software than B2B.

    These are regional retail banks that all use the same LOB software. Despite the product being sold mainly to banks, which famously have branches, the developer never realized that there could be more than a millisecond between a client and a server. The reason they have VDI is so their desktop environment is in the same datacenter as their app server. It's a fucking CRUD app and the server has to deal with maybe a couple hundred transactions per hour.

    I think this is pretty typical for B2B. You don't buy software because it is good. You buy software because the managers holding the purse strings like the pretty reports it makes and they are willing to put up with A LOT of bullshyt to keep getting them.

  • by ryukoposting on 6/18/25, 2:40 AM

    The author glosses over this a bit, but designing for latency and unreliability is important for good UX too. It's not just about keeping things small, it's about making sure the UI is usable in the face of high latency connections, and tolerant of intermittent failure. I can't count the number of times I've had an app shit the bed as I'm leaving my apartment and my phone switches from wi-fi to cell.
  • by neepi on 6/17/25, 2:31 PM

    Yes. I select everything to work disconnected for long periods of time. I suspect we are in a temporary time of good connectivity. What we really have to look forward to is balkanisation, privacy threats from governments, geopolitical uncertainty and crazy people running our communications infra.

    Seems sensible to take a small convenience hit now to mitigate those risks.

  • by kulahan on 6/18/25, 1:33 AM

    This feels so much like a totally obvious question, I was surprised this was even the actual topic of discussion rather than a lead-in to another, actually questionable idea.

    This is a major part of why I cannot stand software devs (I am loath to call them “engineers”). Of COURSE YOU SHOULD design for an iffy internet. It’s never perfect. Thank the LORD code monkeys don’t build anything important like bridges or airplanes.

  • by martinald on 6/18/25, 8:45 PM

    "Remember that 4G is like 5/1Mbps, and 3G is even worse" this is just completely untrue. 4G can do 300meg+ real world no problem, and even back in the day with DC-HSPDA on 3G you could get 20meg real world.

    HOWEVER, the main problem (apart from just not having service) is congestion. There is a complete shortage of low-band spectrum that can penetrate walls well in (sub)urban areas, at ~600-900MHz. Most networks have managed to add enough capacity in the mid/upper bands to avoid horrendous congestion, but (e.g.) 3.5GHz does not penetrate buildings well at all.

    This means it is very common to walk into a building and go from 100meg++ speeds on 5G to dropping down to 5G on 700MHz which is so congested that you are looking at 500kbit/sec on a good day.

    Annoyingly, phone OSes haven't got with the times yet and just display signal quality for the bars, which will usually be excellent. They really need a congestion indicator as well (it could be based on how long your device waits for a transmission slot, for example).

  • by jebarker on 6/17/25, 2:33 PM

    Yes, for the same reason we should design for low end HW: it makes everyone’s experience better. I wish websites and apps treated phoning home as a last resort.
  • by wat10000 on 6/17/25, 4:59 PM

    So much software guidance can be subsumed by a simple rule:

    Use the software that you make, in the same conditions that your users will use it in.

    Most mobile apps are developed by people in offices with massive connections, or home offices with symmetric gigabit fiber or similar. The developers make sure the stuff works and then they're on to the next thing. The first time someone tries to use it on a spotty cellular connection is probably when the first user installs the update.

    You don't have to work on a connection like that all the time, but you need to experience your app on that sort of connection, on a regular basis, if you care about your users' experience.

    Of course, it's that last part that's the critical missing piece from most app development.

  • by gadders on 6/17/25, 3:08 PM

    A thousand times yes. I hate apps that need to spend 2 minutes or so deciding whether your internet is bad or not, even though they can function offline (Spotify, TomTom Go).
  • by ChrisMarshallNY on 6/18/25, 7:44 PM

    I write software for "budget-conscious" orgs.

    Translation: shitty servers.

    That means that the connection might be fine, but the backend is not.

    I need to have a lot of error management in my apps, and try to keep the server interactions to a minimum. This also means that I deal with bad connections fairly well.

  • by morleytj on 6/17/25, 2:58 PM

    This is a huge issue for me with a lot of sites. For whatever reason I've spent a lot of time in my life in areas with high latency or just spotty internet service in general, and a lot of these modern sites with massive payload sizes and chained-together dependencies (click this button to load this animation to display the next thing that you have to click to get the information you want) seriously struggle or outright break in those situations.

    The ol reliable plain HTML stuff usually works great though, even when you have to wait a bit for it to load.

  • by _-_-__-_-_- on 6/18/25, 6:53 PM

    Even a regular 4G/5G mobile connection can feel spotty, dropping for a few seconds at a time. I spend some time every summer in rural Haliburton Highlands/North Hastings (central Ontario), where cellular reception is hit and miss: one bar, maybe two. Voice calls, when successful, sound awful, and text messages frequently stay unsent (or send multiple times, inexplicably). Unless you can afford Starlink, or drive into the next town and hit the library wifi, you're out of luck. As you drive, cell service drops depending on elevation. A quick check of Facebook Messenger and maybe loading a webpage for information is about all you can manage. Forget a fancy app.
  • by reactordev on 6/17/25, 3:28 PM

    It’s not that we should design for iffy internet; it’s that we should design sites and apps that don’t make 1,000 XHR calls and load 50MB of JavaScript to load ads that also load JavaScript that refreshes the page on purpose to trigger new ad bids and inflate viewership. (rant)
  • by xp84 on 6/18/25, 7:01 PM

    Why are all your maps only of the Western US? Most people live in the eastern half.

    https://www.reddit.com/r/MapPorn/comments/vfwjsc/approximate...

    I know a lot of the West has terrible broadband, but a not-insignificant majority of that land area is uninhabited federal land -- wilderness such as high mountains, desert, etc. By focusing on the West, and on maps that don't acknowledge inhabitedness as an important factor, the post confuses the issue.

    I'd argue it's more of a travesty that actual fiber-optic internet is only available at maybe 15% of addresses nationwide than that there are white holes in Eastern Oregon or Northern Nevada. One major reason I believe this is that even at my house, where I have "gigabit" available via DOCSIS, my upload is barely 20Mbps and I have a bandwidth cap of 1.25TB a month, which means that if I saturate my bandwidth I can only use it for 2 hours 46 minutes per month.

    If you compare "things that would be possible if everyone had a 500Mbps upload without a bandwidth cap" vs "things I can do on this connection" it's a huge difference.

  • by inopinatus on 6/18/25, 3:50 AM

    I also aim to build services that continue to function when JS/CSS assets don’t, can’t, or won’t load correctly/at all.

    As with lossy, laggy, and slow connections, this scenario is also more common than the average tragically online product manager will grasp.

  • by donatj on 6/17/25, 2:26 PM

    My parents live just 40 miles outside Minneapolis and use a very unreliable T-Mobile hotspot because the DSL available to them still tops out at a couple megabit. Their internet drops constantly and for completely unknown reasons.

    I've been trying to convince them to try Starlink, but they're unwilling to pay for the $500+ equipment costs.

  • by nottorp on 6/18/25, 7:01 PM

    Well yes.

    I can think of at least two supermarkets where I have crap internet inside, in spite of the whole city having decent 5G coverage outside.

    One thing that never loads is the shopping app for our local equivalent of Amazon. I'm sure they've lost some orders because I was in said supermarkets and couldn't check out the competition's offers. It was minor, cheap-ish stuff, or I would have looked for better signal, but they were still lost orders.

  • by coppsilgold on 6/17/25, 11:04 PM

    There is also packet loss to consider. When making QUIC, Google fumbled with Forward Error Correction[1] and since then it has been stuck in draftland[2][3].

    [1] <https://http3-explained.haxx.se/en/quic-future>

    [2] <https://www.ietf.org/archive/id/draft-michel-quic-fec-01.htm...>

    [3] <https://www.ietf.org/id/draft-zheng-quic-fec-extension-00.ht...>

  • by ipdashc on 6/17/25, 2:47 PM

    It's an edge case, but I noticed that the first two sections focus on people's Internet access at home. But what about when on the move? Public Wi-Fi and hotspots both kinda suck. On those, there are some websites that work perfectly fine, and some that just... aren't usable at all.
  • by bobdvb on 6/18/25, 9:13 PM

    Every time you make a decision which increases the needed bandwidth, or device performance, you eliminate a portion of your target market for whatever you're doing.

    There comes a point at which attempting to address everyone means you start making sacrifices that impact your product/offering (lowest common denominator), which itself can eliminate some higher-end clients. Or you spend so much creating multiple separate experiences that it significantly increases the effort you have to put in, which in a business hits profitability, and otherwise can cause burnout.

    So, follow elegance, as well as efficiency, in the architecture and design to make it accessible to as wide an audience as is practical. You have to think about what is practical, and what your obligations are to your audience. Being thoughtful and intentional in design is no bad thing; it stops you from being lazy and loading a 50MB JPEG as the backdrop when something else will do.

  • by danpalmer on 6/18/25, 12:29 AM

    I have effectively unlimited 5G data on my phone, and a 1.5Gbps connection at home, and yet, yes you should absolutely design for iffy internet.

    I commute in tunnels where signal can drop out. I walk down busy city streets where I technically have 5G signal but often degrade to low bandwidth 4G because of capacity issues. I live in Australia so I'm 200ms from us-east-1 regardless of how much bandwidth I have.

    It's amazing how, on infrastructure that's pretty much as good as you can get, I still experience UX issues with apps and sites that are only designed for the absolutely perfect case of US-based hard-wired connections.

  • by klik99 on 6/18/25, 10:05 PM

    This article assumes people are primarily accessing these things at home. Maybe for most apps/websites that’s appropriate, but people access these things on phones in places where connections can be spotty, and it’s very frustrating to be driving through a dead zone and have apps freak out.

    I hope people making apps to unlock cars or other critical things that you might need at 1am on a road trip in the middle of nowhere don’t have this attitude of “everyone has reliable internet these days!”

    Concrete example: I made an app for Prospect Park in Brooklyn that had various features that were meant to be accessed while in the park which had (has?) very spotty cell service, so it was designed to sync and store locally using an eventually consistent DB, even with things that needed to be uploaded.

  • by AnotherGoodName on 6/17/25, 3:12 PM

    >you should not assume that it's better than around 25Mbps down and 3Mbps up

    It's hard to make a website that doesn't work reasonably well with that, though, even with all the messed-up JavaScript dependencies you might have.

    I feel for those on older, non-Starlink satellite links, e.g. islands in the Pacific that still rely on Inmarsat geostationary links: 492 kbit/s maximum (lucky if you get that!), 3-second latency, pricing by the kB of data. Their lifestyle just doesn't use the internet much at all by necessity, but at those speeds, even when they're willing to pay the exorbitant cost, sites will just time out.

    Starlink has been a revolution for these communities but it's still not everywhere yet.

  • by sn9 on 6/17/25, 5:03 PM

    Reminds me of this old Dan Luu blog post: "How web bloat impacts users with slow connections" [0].

    [0] https://danluu.com/web-bloat/

  • by esseph on 6/17/25, 4:38 PM

    Note:

    The NTIA or FCC just released an updated map a few days ago (part of the BEAD overhaul) that shows the locations currently covered by existing unlicensed fixed wireless.

    Quick Google search didn't find a link but I have it buried in one of my work slack channels. I'll come back with the map data if somebody else doesn't.

    The state of broadband is way, way worse than people think in the US.

    Indirect Link: https://medium.com/spin-vt/impact-of-unlicensed-fixed-wirele...

  • by ItCouldBeWorse on 6/18/25, 8:49 PM

    We should design for censor obfuscation, as in: the censors should not be able to grab and manipulate any system used. In that regard, AI has been a great success. Even though there are censorship systems in the prompt and in the filter, the final source of truth is not poisoned, can only be poisoned with great effort, and shows itself under local interrogation.

    Systems hardened against authoritarianism are a great thing. Even the Taliban have mobile coverage in Kabul, and thus every woman forced under the chador who is holding a phone has a connection to the world in her hand. Harden humanity against the worst of its sides in economic decline. I dream of a math proof coming out of some Kabul favela.

  • by londons_explore on 6/17/25, 8:22 AM

    Notably, two products that work really great on bad internet are WhatsApp and the OpenAI API (i.e. I can ask GPT-4 some question, then have the internet cut out for a couple of minutes, and when a few more packets get delivered my answer is there!)
  • by Zaylan on 6/18/25, 2:21 AM

    I’ve run into this a few times while working on projects. Even a few seconds of connection loss can cause serious issues. A lot of systems seem to assume the network is always available and always fast.

    We’ve gotten used to "try again later" or pull-to-refresh, but very few apps are built to handle offline states gracefully. Sometimes I wonder if developers ever test outside their office WiFi.

  • by almosthere on 6/17/25, 4:29 PM

    Design for:

        * blind
        * deaf
        * reading impaired
        * other languages/cultures
        * slow/bad hardware/iffy internet
    
    To me, at some point we need to get to an LCARS-like system, where we don't program bespoke UIs at all. Instead, the APIs are available and the UI consumes them, knows what to show (with LLMs), and a React interface is JITted on the spot.

    And the LLM will remember all the rules for blind/deaf/etc...

  • by cwillu on 6/17/25, 3:36 PM

    > Strangely, they don't let you zoom out enough to grab a screenshot of the whole country so I'm going to look at the west. That'll get both urban and rural coverage, as well as several famously internet-y locations (San Francisco Bay Area, Seattle.)

    Huh, worked fine for me: https://i.imgur.com/Y7lTOac.png

  • by chung8123 on 6/18/25, 8:37 PM

    What are good ways others are testing for iffy internet?

    Dropped packets? Throttling? Jitter?

    I am trying to figure out if there are good testing suites for this or if it is something I need to manually setup.
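
    Browser devtools throttling, toxiproxy, and tc-netem (all mentioned elsewhere in this thread) cover the manual side. For an automated suite, one option is driving Chrome's DevTools protocol from Playwright (Chromium only); a sketch with made-up numbers:

        import { chromium } from "playwright";

        async function main() {
          const browser = await chromium.launch();
          const page = await browser.newPage();

          // Throughput is in bytes/second, latency is ms of added round-trip delay.
          const cdp = await page.context().newCDPSession(page);
          await cdp.send("Network.emulateNetworkConditions", {
            offline: false,
            latency: 600,                      // roughly geostationary-satellite RTT
            downloadThroughput: 1_000_000 / 8, // ~1 Mbps down
            uploadThroughput: 256_000 / 8,     // ~256 kbps up
          });

          await page.goto("https://example.com");
          // ...run the usual assertions here and watch what times out...
          await browser.close();
        }

        main();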

  • by drawsome on 6/18/25, 11:56 PM

    Resiliency and adaptability are awesome.

    The phone is connected to the car on Bluetooth. The user shuts off the car. The call continues without delay.

    While there are so many more features that the phone could provide, staying connected during the call seems essential. However, I’m sure this wasn’t part of the first release of the app. But, once they get it to work, it is fucking magic.

  • by Taterr on 6/18/25, 5:02 AM

    I don't see many people mentioning what to me is the largest cause of slow internet: a weak WiFi or 5G signal caused by being behind a brick wall or far away from the router.

    This is the only reason I know of why some websites/apps perform poorly on a bad connection (Discord struggles to simply load the text content of messages).

    Another one is being at a huge event with thousands of people trying to use mobile data at the same time.

    If nothing else I think these two cases are enough to motivate caring a little bit about poor connections. Honestly I find them more motivating than developing for whatever % of users have a bad connection all the time.

  • by divbzero on 6/18/25, 6:46 PM

    I have fiber at home and at my office, but when I’m out I appreciate sites like HN that work well on unreliable or congested cellular/Wi-Fi.
  • by slater on 6/17/25, 5:06 PM

    Showing my age here, but I remember working hard in the late 90s to get every image ultra-optimized before go-live. Impromptu meetings all "OK go from 83% to 82% on that JPG quality, OK that saves 10KB and it doesn't look like ass, ship it"
  • by __MatrixMan__ on 6/17/25, 2:43 PM

    This is about high speed internet accessibility under normal circumstances. It seems like good analysis as far as it goes, but the bigger reason to design for iffy internet has to do with being able to rely on technology even after something bad happens to the internet.
  • by CM30 on 6/17/25, 4:52 PM

    It's also worth noting that poor quality internet connections can be depressingly common in countries other than the US too. For example, here in the UK, there are a surprising number of areas with no fibre internet available even in large cities. I remember seeing a fair few tech companies getting lumbered with mediocre broadband connections in central London for example.

    So if your market is a global one, there's a chance even a Fortune 500 company could struggle to load your product in their HQ because of a terrible internet connection. And I suspect it's probably even worse in some South American/African/Asian countries in the developing world...

  • by lukeschlather on 6/17/25, 9:59 PM

    > However, it's also worth keeping in mind that this is a map of commercial availability, not market penetration. Hypothetically, you could get the average speed of a US residential internet connection, but the FCC doesn't make such a statistic available.

    It's actually worse than this. Companies will claim they offer gigabit within a zip code if there's a single gigabit connection, but they will not actually offer gigabit lines at any other addresses in the zip code.

  • by dghlsakjg on 6/17/25, 2:56 PM

    For the love of god, yes, design as if all of your users are going to be on a 1mbps connection that drops out for 5s every minute, because at some point, a lot of them (most of them, I would wager) will be using that connection. Often it is when you are on those connections that it is most important that your software work.

    The article looks at broadband penetration in the US. Which is useful, but you need to plan for the worst-case scenario, not just the statistically likely cases.

    I have blazing fast internet at home, and that isn't helpful for the AAA app when I need to get roadside assistance.

    I want the nytimes app to sync data for offline reading, locally caching literally all of the text from this week should be happening.

  • by GuB-42 on 6/17/25, 6:34 PM

    The short answer is yes, and there are tools to help you. There are ways to simulate a poor network in the dev tools of major browsers, in the Android emulator, there is "Augmented Traffic Control" by Facebook, "Network Link Conditioner" by Apple and probably many others.

    It is telling that tech giants make tools to test their software in poor networking conditions. It may not look like they care, until you try software by those who really don't care.

  • by metalman on 6/17/25, 8:18 AM

    Iffy internet and iffy devices: what works with a good signal can go haywire with a few settings not quite right. I run exclusively on mobile internet, with two phones, one serving as the primary data connection for both, and I recently figured out that the way the wifi was configured was contributing to the signal dropping, but only in rural areas with a weak cell signal on busy towers; it took some experimenting to get things to work reliably. But there are still a lot of places where the signal comes and goes, so using a phone as a local-only device is normal, or as a degraded device with perhaps just voice and text... perhaps not.
  • by purplezooey on 6/17/25, 4:19 PM

    This was table stakes not long ago. There seems to be an increase in apps/UIs blaming the network for what is clearly poor performance on the backend, as well.
  • by paulmooreparks on 6/18/25, 6:33 AM

    What I find surprising is that this article really only discusses US Internet (to be fair, it states as much at the end of the article). If we're designing anything for the Internet, we should assume worldwide distribution except in special cases, no? That definitely means assuming iffy Internet.
  • by awkward on 6/17/25, 5:34 PM

    It's crazy to me that almost all new web projects start with two assumptions:

    - Mobile first design

    - Near unlimited high speed bandwidth

    There's never been a case where both are blanket true.

  • by MatthiasPortzel on 6/17/25, 2:22 AM

    Upvoting not for the surprising fact that North Dakota has great gigabit fiber.
  • by amelius on 6/17/25, 4:01 PM

    Internet providers: Maybe we should provide faster internet for our rural users.

    Programmers: Let's design for crappy internet

    Internet providers: Maybe it's not necessary

  • by sneak on 6/17/25, 2:29 PM

    > Terrestrial because—well, have you ever tried to use a satellite connection for anything real? Latency is awful, and the systems tend to go down in bad weather.

    This isn’t true anymore. Starlink changed the whole game. It’s fast and low latency now, and almost everyone on any service that isn’t Starlink has switched en masse to Starlink because previous satellite internet services were so bad.

  • by b0a04gl on 6/17/25, 3:30 PM

    Been quietly rolling out beacon-based navigation inside metro stations in Bengaluru. This post is about the pilot at Vidhana Soudha {https://www.linkedin.com/posts/shruthi-kshirasagar-622274121...}. I had a role in the early scoping and feedback loop. No flashy tech, just careful placement, calibration, and signal mapping. The real work is in making this reliable across peak hours, metal obstructions, and dead zones; location precision is tricky underground, and Bluetooth’s behavior shifts with crowd density. Glad to see this inching forward. BMRC seems serious about bringing commuter-first features to public infra.
  • by madeofpalk on 6/17/25, 2:45 PM

    > What if that person is on a slow link? If you've never had bad internet access, maybe think of this as plane wifi

    Loads of people are on "a slow link" or iffy internet who would otherwise have fast internet. Like... plane wifi! Or driving through less populated areas (or the UK outside of London) with spotty phone reception.

  • by 0xbadcafebee on 6/17/25, 10:48 PM

    I have a Samsung Galaxy S10e, and a ThinkPad T14s Gen4 (AMD). And stable internet. Every time I search Google on the phone, typing input into the search bar lags for ~20 seconds, so badly that the letters randomly jump around in the text box (away from where the cursor is). It happens on the laptop too (to a lesser extent) when I'm on battery with power saving.

    When I complain about this, I get downvoted by angry people. They blame me for using "old" or "buggy" devices (they're not that old or slow), and blame my internet connection (it's fast and stable). Is it the CPU? The bandwidth? Latency? Some weird platform-specific bug? Who knows. But if every other web page I visit does not have this problem, then it's not my device, it's the website's design.

    Whenever practical, you should design for efficiency. That means not using more resources than you have to, choosing a method that's fast rather than slow, trying to avoid unnecessary steps, etc. Of course people will downvote me for saying that too. Their first comment is going to be that this is "premature optimization". But it isn't "premature" to pick a design that isn't bloated and slow. If you know what you are doing, it's not hard to choose an efficient design.

    Every year software is more bloated, more buggy. New software is released constantly, but it isn't materially better than what we had decades ago. New devs I talk to seem to know less and less about how computers work at all. Perhaps the enshittification of technology isn't the tech itself getting shittier, as it can't actually make itself worse. In an industry that doesn't have minimum standards, perhaps it's the people that are enshittifying.

  • by nicbou on 6/18/25, 12:46 AM

    I live in Berlin and I travel a lot. Some frequent problems include:

    - Minutes-long dead spots on public transit, even above ground

    - Bad reception in buildings

    - Bad wi-fi at various accommodations

    - Google Maps eating through my data in a day or two

  • by scumola on 6/17/25, 2:27 PM

    mosh is awesome for ssh over iffy connections
  • by pluto_modadic on 6/18/25, 9:13 PM

    If you want to /design/ a website, every KB should count. Needlessly including clunky, flaky, extra-heavy scripts and huge images or fancy videos is... bleh!
  • by CommenterPerson on 6/18/25, 6:56 PM

    Yes, yes, yes we should. Bloated pages are a symptom of enshittification. When I try to buy something on Amazon there's so much crap (menus that drop down on hover) that I just close the tab and go away, or search Amazon via Duck. Part of the bloat everywhere is also the unconstrained tracking and surveillance. The design benchmark should be lightweight pages like HN and Craigslist!
  • by m3047 on 6/18/25, 6:13 PM

    Focuses on the contiguous US states. It gets more interesting when you're accessing resources not located in North America / Western Europe. It gets even more interesting when neither you nor the resources are located there.

    I don't quote the following to discount what the article is saying, I think it is what the article is saying:

    > This may or may not be OK for your market—"good internet" tends to be in population centers, and population centers tend to contain more businesses and consumers.

    and this:

    > That said, I'm deliberately not making any moral judgments here. If you think you're in a situation where you can ignore this data, I'm not going to come after you. But if you dismiss it out of hand, you're likely going to be putting your users (and business) in a tough spot.

  • by jagged-chisel on 6/17/25, 11:12 PM

    Yes. And no internet.
  • by grishka on 6/17/25, 9:00 PM

    What really grinds my gears is websites with news/articles that assume you have a stable, fast internet connection for the whole time you're reading the article, and so load images lazily to "save data".

    Except I sometimes read articles on the subway and not all subway tunnels in my city have cell service. Or sometimes I read articles when I eat in some place that's located deep inside an old building with thick brick walls. Public wifi is also not guaranteed to be stable — I stayed in hotels where my room was too far from the AP so the speed was utter shit. Once, I loaded some Medium articles on my phone before boarding a plane, only to discover, after takeoff, that these articles don't make sense without images that didn't load.

    Anyway. As a user, for these kinds of static pages, I expect the page to be fully loaded as soon as my browser hides the progress bar. Dear web developers, please do your best to meet this expectation.

  • by pier25 on 6/17/25, 2:52 PM

    How does the US infrastructure compare to the rest of the world?
  • by loog5566 on 6/18/25, 2:29 AM

    I am almost always on 5G or starlink.
  • by RugnirViking on 6/17/25, 2:43 PM

    At the very least, consider it. It makes things better for everyone and highlights reflow messes as things load in, etc.
  • by dfxm12 on 6/17/25, 2:37 PM

    Yes. Assume your users have a poor or metered connection. I don't want unnecessary things (like images) to load because it takes time, eats at my data quota and to be frank, I don't want people looking over my shoulder at media on my phone (especially when I have no idea what it is going to be). This is especially true for social media (and the reason I prefer HN over bluesky, reddit, etc.).
  • by jedberg on 6/17/25, 7:31 PM

    This doesn't show the whole picture. Yes, I have super reliable high-speed internet in my house. But I do about half of my interneting on my mobile phone. And despite living in Silicon Valley with 5G, it's totally unreliable.

    So yes, please assume that even your most adept power users will have crappy internet at least some of the time.

  • by LAC-Tech on 6/18/25, 9:50 PM

    I don't think most developers are capable of this. Call me elitist, but the concept of data from the network being qualitatively different from data in memory is just foreign to most people.
  • by rkagerer on 6/17/25, 4:09 AM

    Yes.

    Next que...<loading>

  • by 1970-01-01 on 6/17/25, 5:52 PM

    Yes, because Wi-Fi 7 and 5G still aren't anywhere near Ethernet in terms of packet loss.

    New headline: Betteridge's rule finally defeated. Or is it?

  • by RajT88 on 6/17/25, 3:55 PM

    Yes.
  • by mschuster91 on 6/18/25, 8:13 PM

    Yes. Go to Germany and travel from any large urban area to the next and you'll see why - it's a meme by now that coverage on trains and even Autobahns is piss poor.

    And no, geography is not an excuse. Neighboring Austria has the same geography as Bavaria, and yet it is immediately noticeable exactly when you have passed the border, by the cellphone signal indicator going up to a full five bars. And neither is money an excuse: Romania, one of the piss-poorest countries in Europe, has 5G built out enough to watch YouTube in 4K on a train moving 15 km/h with open doors to Sannicolao Mare.

    The issue is braindead politics ("we don't need 5G at any remote milk jug") and too much tolerance for plain and simple retarded people who think that all radiation is evil.

  • by lo_zamoyski on 6/17/25, 3:25 PM

    We can avoid the problem simply by employing better design and a clear understanding of the intended audience.

    There is no need or moral obligation for all of the internet to be accessible to everyone. If you're not a millionaire, you're not going to join a rich country club. If you don't have a background in physics, the latest research won't be accessible to you. If you don't have a decent video card, you won't be able to play many of the latest games. The idea that everything should be equally accessible to everyone is simply the wrong assumption. Inequality is not a bad thing per se.

    However, good design principles involve an element of parsimony. Not minimalism, mind you, but a purposeful use of technology. So if the content you wish to show is best served by something resource intensive that excludes some or even most people from using it, but those that can access it are the intended audience, then that's fine. But if you're just jamming a crapton of worthless gimmickry into your website that doesn't serve the purpose of the website, and on top of that, it prevents your target audience from using it, then that's just bad design.

    Begin with purpose and a clear view of your intended audience and most of this problem will go away. We already do that by making websites that work with both mobile and desktop browsers. You don't necessarily need to make resource heaviness a first-order concern. It's already entailed by audience and informed by the needs of the presentation.

  • by jekwoooooe on 6/17/25, 4:56 PM

    Something that is missing is… who cares? If you have bad internet why assume the product or page is for you?