from Hacker News

Ask HN: Was programming more interesting when memory usage was a concern?

by morph123 on 4/2/23, 7:08 PM with 46 comments

I got into programming professionally around 2014, seemingly well after memory and the like stopped being much of a concern for 99% of applications.

I always look back enviously at the days when you had to figure out bootstrappy ways to solve problems because of inherent hardware limitations, instead of just sitting there gluing frameworks together like a loser.

Am I wrong or was programming just way cooler back then?

  • by alkonaut on 4/2/23, 10:35 PM

    It's still a huge concern. Cache today is what memory was in the 80s and 90s. Memory today is what disk was. And your L1 cache today might be 64kb! That's basically what you can work with if you want to use your CPU at full speed.

    For anything that isn’t IO bound you are very often bound by memory access time. CPUs are incredibly fast and feeding them data to work on is very difficult, more so today than before!
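
    A minimal C sketch of that point (the array size is arbitrary): both loops below do identical arithmetic, but the second strides through memory, missing cache on nearly every access, and typically runs several times slower.

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      enum { N = 4096 };                      /* 4096*4096 doubles = 128 MB */

      int main(void)
      {
          double *a = malloc((size_t)N * N * sizeof *a);
          if (!a) return 1;
          for (size_t i = 0; i < (size_t)N * N; i++) a[i] = 1.0;

          clock_t t0 = clock();
          double rows = 0.0;                  /* sequential: cache-friendly */
          for (size_t i = 0; i < N; i++)
              for (size_t j = 0; j < N; j++)
                  rows += a[i * N + j];

          clock_t t1 = clock();
          double cols = 0.0;                  /* strided: a cache miss per access */
          for (size_t j = 0; j < N; j++)
              for (size_t i = 0; i < N; i++)
                  cols += a[i * N + j];

          clock_t t2 = clock();
          printf("rows %.2fs  cols %.2fs  (%g %g)\n",
                 (double)(t1 - t0) / CLOCKS_PER_SEC,
                 (double)(t2 - t1) / CLOCKS_PER_SEC, rows, cols);
          free(a);
          return 0;
      }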

    The big difference is that a category of programming jobs has appeared where this is almost never a concern because it’s about shoving text around between servers.

  • by lousken on 4/2/23, 11:31 PM

    I think certain developers don't care because they don't understand programming at a lower level, nor do they understand hardware. As a sysadmin, that really grinds my gears.

    These days, developers have no shame - even Microsoft showed that Teams was an utter piece of garbage in both memory and CPU usage: https://youtu.be/CT7nnXej2K4 . Are they even serious with 22-second load times on modern hardware and multiple gigabytes of memory usage for a stupid chat app?

    We still use laptops with 8GB of RAM in our company for positions like HR, marketing and others, and the fact that Teams and a couple of browser tabs can totally kill their PCs is insane. Laptop manufacturers haven't caught up to the news either: they still charge a premium for 32GB machines, while 8GB is no longer sufficient for anything and 16GB should be the baseline these days.

    And of course this is not just Teams but many other applications. Too bad most users don't understand the issue and think that something is wrong with their computers.

    Regular users are not great testers. Testing used to be a dedicated job...

    Anyway, I think there should be a lot more pressure on developers to use our current hardware more efficiently.

  • by GianFabien on 4/2/23, 10:15 PM

    Programming embedded systems on Atmel and similar low-cost SoCs is very much like working on computers in the 1980s and 1990s, but with PC-hosted dev tools it's far easier and cheaper.

    Since you mention an interest in this space, consider buying an Arduino kit and building some cool personal projects. Once you have some experience, look around your part of the world and identify any needs that you could solve with an embedded solution.
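
    For a taste of what that's like: a minimal register-level blink in C, assuming an ATmega328P-based Uno-style board and avr-gcc (no Arduino framework at all).

      /* Assumes an ATmega328P Uno-style board; PB5 drives the on-board
         LED. Build: avr-gcc -mmcu=atmega328p -Os blink.c */
      #define F_CPU 16000000UL                /* 16 MHz clock */
      #include <avr/io.h>
      #include <util/delay.h>

      int main(void)
      {
          DDRB |= _BV(DDB5);                  /* set PB5 as an output */
          for (;;) {
              PORTB ^= _BV(PB5);              /* toggle the LED */
              _delay_ms(500);
          }
      }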

  • by rerdavies on 4/2/23, 11:22 PM

    The first programming job I had was programming operating systems for Micom 2000 word processors, back in 1979, two years before the IBM PC was released.

    The machine had an 8-bit Z80 processor, 128k of bank-swappable memory (64k address space), and two 8" 240kb floppy disk drives.

    The 8080 assembler source for my module was spread over 14 floppy disks. Compiling involved meticulously swapping floppies in the correct order for about a day and a half. Inserting the wrong floppy would cause the build to fail, and you'd have to start over from scratch.

    Despite that, you had a breathless sense that you were working with a technology that was about to change everything. The implications of the coming technological age were unmistakable. And the only limit was imagination and ingenuity.

    Having a MILLION instructions a second to play with seemed limitless; but having to fit things into 16kb memory pages felt a bit constraining. For the most part, user data had to fit in a 16kb memory page because that was all that was left over after the operating system was loaded.

    In retrospect, there's only so much you can do with a 4Mhz processor when you only have 16kb of data to work with, so processing power wasn't the constraint.

    Probably exactly the same as what it's like to be working with AI technology today. Everything is going to change. And the future is going to be unrecognizable. And you're sitting in the middle of it.

    So yes, absolutely, it was way cooler back then. Unless you're currently working on AI projects.

  • by drpixie on 4/2/23, 11:02 PM

    In many ways it's still a problem. Sure you've now got xx-GB of RAM and zz-TB of storage, but you now live in a world where images and video are used for day-to-day comms, instead of text.

    Compared with (say) 20 years ago, you've got 20-50 times more memory, and your data is 20-50 times (or more) bigger.

    We're doing the same as always, but using more bits, because bits are cheaper.

  • by jitl on 4/3/23, 12:50 AM

    I still worry about memory usage every day, and I'm writing TypeScript for a single-page React app.

    Sure, there's more memory these days and many languages make memory correctness errors a thing of the past, but play your cards wrong and you'll exhaust your users' resources just the same. In many ways it's harder today because you can't easily look at an abstraction and understand its memory or cache cost in languages like Java, Python, or JavaScript. You just pray it'll get optimized or won't cost too much, and that big-O analysis is enough. I can't tell you how many times I've heard programmers working with GC languages worry about GC pressure or GC pauses.

    You can always work in Zig, C++, Rust, etc. to make reasoning about memory use easier. There are more positions working in those contexts today than in the past, even though they're a smaller overall percentage of the market.

  • by PaulHoule on 4/2/23, 7:11 PM

    Like not having enough or managing the memory you do have?

    I'd say that garbage collection was key to large-scale software reuse: without it, you'd need a lot of cooperation between libraries and applications (how does the application know the library doesn't need a piece of memory, or vice versa?).
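
    A minimal C sketch of that cooperation problem (the function is hypothetical): without GC, every library boundary needs an ownership contract, and nothing but documentation enforces it.

      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical library call. The ownership contract lives only in
         this comment: the buffer is malloc'd and the CALLER must free it.
         Nothing enforces that; the compiler is happy either way. */
      char *lib_get_name(void)
      {
          char *s = malloc(32);
          if (s) strcpy(s, "example");
          return s;
      }

      int main(void)
      {
          char *name = lib_get_name();
          /* ... use name ... */
          free(name);   /* skip this and you leak; do it twice and you crash */
          return 0;
      }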

  • by xboxnolifes on 4/2/23, 10:34 PM

    Constraints are what make puzzles interesting. Low memory/compute/throughput/etc availability makes software more like a good puzzle.

    The more constraints you remove, the more something transitions from being a puzzle to being a canvas. Each end appeals to different kinds of people.

  • by bell-cot on 4/2/23, 8:08 PM

    YES, it was for-sure cooler.

    Memory mattered. Ditto CPU cycles, disk, and network bandwidth. (Not that your code could count on there being a network, in a lot of cases.) Security wasn't a Garden of Eden...but now that can feel more like being a night-shift ICU nurse during a COVID surge that never ends. And the software stack under your code was orders of magnitude smaller, and slower-changing. These days - that can feel like you aren't a programmer, but instead a Wall Street attorney who's trying to pilot some giga-corporate merger through the shifting regulatory and political obstacles in 37 different countries.

  • by dave4420 on 4/2/23, 7:56 PM

    It was more interesting in the sense of “May you live in interesting times.”

    I don’t miss low level programming in a work context at all.

  • by japhyr on 4/3/23, 12:04 AM

    I first learned programming in the early 80s, but never took it too seriously until the mid 2000s. I first learned Python at a time when I thought I'd do most of my serious work in Java.

    That was a time when Python was rarely someone's first language. People would say, "Try Python, your programs will be about a third as long as they are now!" I was skeptical, but holy heck those people were right.

    I am academic enough to enjoy the intellectual challenge of being conscious of memory usage. But when you had to pay attention to it in every project, it was not always fun. It was really fun to use Python and forget about memory for a while. I built many projects in Python that I probably wouldn't have made time for in Java.

    I really enjoy programming these days, where you can forget about memory until you have a reason to optimize. Then when you do need to optimize, you can profile your code with fantastic tools, reason about memory enough to get things working efficiently again, and move on with the project. I really enjoy hopping back and forth between focus on a larger project, and focus on the inner workings of a project.

    I think we're seeing the same kind of fundamental change with GPT-assisted programming. There's a whole generation starting to use these tools right now who will ask a question on a 2040s version of HN, "Was programming more interesting when correct syntax was a concern?"

  • by mitchellpkt on 4/2/23, 9:33 PM

    There are still some performance-sensitive niches that you might enjoy. Large scale numeric simulation workloads can have a variety of bottlenecks, and the difference between the easiest solution and the fastest solution can be orders of magnitude. Smart contract development also requires tight memory management, otherwise the fees can become prohibitively high.

  • by Dwedit on 4/2/23, 10:25 PM

    You had to deal with dynamic loading of code or data when needed rather than preloading everything into RAM all at once.
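
    A crude C sketch of that pattern (paths and formats are whatever your program used): one resident slot, and a disk read whenever the chunk you need isn't the one that's loaded.

      #include <stdio.h>
      #include <stdlib.h>

      static char *slot = NULL;     /* one resident chunk at a time */

      /* Load a chunk from disk on demand, evicting whatever was resident.
         Returns NULL on failure. */
      static char *load_chunk(const char *path, long *size_out)
      {
          FILE *f = fopen(path, "rb");
          if (!f) return NULL;
          fseek(f, 0, SEEK_END);
          long size = ftell(f);
          rewind(f);

          free(slot);               /* evict the previous chunk */
          slot = malloc((size_t)size);
          if (slot && fread(slot, 1, (size_t)size, f) != (size_t)size) {
              free(slot);
              slot = NULL;
          }
          fclose(f);
          if (size_out) *size_out = size;
          return slot;
      }
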
  • by wruza on 4/3/23, 3:03 AM

    As a former Pascal, C, and assembly programmer I learned Perl, Python, Lua and finally Type/JavaScript (when it became decent) and never looked back. It was cool, but a not-worth-it-today kind of cool. When I think about cascade-freeing my pointers again, or [de]serializing a struct for storage or FFI, I shudder. We dug the same ditches again and again and again.
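
    For anyone who never had to: "serializing a struct" meant hand-writing something like this C sketch (the record layout is invented), spelling out widths and byte order, because dumping the struct raw would bake in compiler padding and endianness.

      #include <stdint.h>
      #include <stdio.h>

      struct record { uint32_t id; uint16_t flags; };

      /* Write each field explicitly: fixed widths, little-endian, no
         padding. The byte layout IS the file format. */
      static int record_write(FILE *f, const struct record *r)
      {
          unsigned char buf[6];
          buf[0] = (unsigned char)(r->id);
          buf[1] = (unsigned char)(r->id >> 8);
          buf[2] = (unsigned char)(r->id >> 16);
          buf[3] = (unsigned char)(r->id >> 24);
          buf[4] = (unsigned char)(r->flags);
          buf[5] = (unsigned char)(r->flags >> 8);
          return fwrite(buf, 1, sizeof buf, f) == sizeof buf ? 0 : -1;
      }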

    I was pretty late into it (ca. 2000) and can only imagine the pain of switching banks, mapping windows, dealing with segments and overlays, unloading things. When you hear about "cache invalidation and naming things", don't you ask yourself what the hell "cache invalidation" even is? Lucky you.

    There’s so much you can do today in just a few hours. But if you want that feeling, start with npm-remove-ing your bundler. That’s not the same, but the feeling will be the same.

  • by picklebarrel on 4/2/23, 11:31 PM

    I think it is much cooler now. So many more things are possible now. Instead of having 8 low res sprites to push around, we have high quality 3d rendering. Instead of making a robot with 2 wheels that bumps against a wall and turns, we have humanoid robots to work on. I find it all really exciting.

  • by eesmith on 4/2/23, 8:20 PM

    For the people who found that interesting, yes, it was more interesting back then. Most projects were tight on memory, so those skills were needed, and it was easy to find comrades in the struggle.

    However, people go into programming for many reasons. For those who were not interested in squeezing out every word of memory, it was not interesting. Probably some people realized their tasks couldn't be done on the limited hardware of the time, so didn't even try until memory became significantly cheaper.

    The past can seem like a golden era of mythic heroes.

    Some from the generation you envy are themselves envious of the previous generation, where programmers were expected to write their own operating system - the vendor didn't provide one - and pull out a soldering iron and oscilloscope for debugging.

    Meanwhile, many were writing yet another COBOL program for accounts processing.

  • by jrexilius on 4/3/23, 12:04 AM

    I don't think it's so much dealing with resource constraints that made it more exciting, but the fact that you were often originating things, rather than (haphazardly) stitching together existing, known works of others. It's always more exciting to create something from scratch than a "just add water and stir" sorta thing.

    [edit to add:] Although, another part of having to understand memory usage, for example, was that you really had to understand the fundamental building materials you were working with. Whereas now, no one has the man-hours to really understand the stack of tens of thousands of lines of node shit you have to glue together to do some basic task... you feel more excited when you are really mastering the materials, I think.

  • by fsociety on 4/2/23, 11:03 PM

    It is still cool. An easy path to success: find a medium to large company with a strong engineering culture, join, and start memory profiling things. You will likely end up with 1-2 line commits that reduce memory usage across the entire fleet of services by 20-50%.

  • by cc101 on 4/2/23, 11:28 PM

    It works both ways. Some folks were overly concerned about both memory footprint and total run-time even as their importance was actually declining. I made a career out of ignoring both and making leading-edge apps that pushed those limits. Few other programmers had yet realized that these things were becoming practical. Within my organization I had an unearned reputation as a leading-edge programmer. Not so; I was a mostly self-taught hacker trying to keep up with the others. Mind you, I was doing scientific programming, and there are plenty of other places where this approach won't work, but it worked for me. Look forward, do new things, and build the future.

  • by helph67 on 4/2/23, 10:20 PM

    Yes, knowing your hardware always helped. Frequently used subroutines were placed early in the code, since the interpreter searched for line numbers from the top; that reduced memory use and improved speed. Back in the early 1980s, "80 Microcomputing" magazine ran an annual competition for one-line BASIC programs. Some entries proved just how clever some coders could be. https://en.wikipedia.org/wiki/80_Micro

  • by quechimba on 4/2/23, 11:41 PM

    Memory usage is still a problem. Trying to figure out why some objects don't get garbage collected at the moment.

  • by kadoban on 4/2/23, 11:30 PM

    > Am I wrong or was programming just way cooler back then?

    You're wrong, to be blunt.

    Sure, there are some interesting challenges to solve around keeping memory usage low, but mostly it was just tedious and limiting.

    Programming has enough challenges and interesting optimizations without worrying about memory as much as one used to have to.

  • by DamonHD on 4/2/23, 7:13 PM

    It still is cool because of memory and CPU (power) and code constraints for embedded computing.

  • by markus_zhang on 4/2/23, 9:29 PM

    Definitely interesting. I'd love to do everything my way, however shitty it is. But it's MINE! Plus, if you ever scroll back to 1991 and take a look at the first version of any Linux command-line program, I bet it's not pretty.

  • by mistrial9 on 4/2/23, 10:37 PM

    A huge amount of effort went into managing "soft loading" of assets in RAM pre-1995 or so, and even later. Swapping allocated blocks in and out well was very much a thing. Games have gotten away with different tricks since forever, and desktop work apps with non-trivial parts did things like this too. Does it make you sharper, or the work more interesting, to have to jump through hoops on a regular basis? YMMV

  • by revelio on 4/2/23, 8:52 PM

    Memory stopped being a big limiter on the desktop way before 2014. You'd want to go back to the 90s for it to be a major change in how things were done. So, back when people had 8mb of RAM or maybe 16mb if the machine was high spec and in theory Windows could run in 4mb. Swap existed but disks were so slow that if you actually hit swap then your machine would drag to a halt.

    Anyway. No, it really wasn't cool.

    The thing to realize is that software wasn't really that much better optimized for RAM than today. Maybe a bit, but mostly it just created a lot of pain:

    - Every single API was specified to be able to fail due to OOM, and a lot of programmers tried to check for this and recover. However, there were no good ways to simulate or test it, and unit testing was in its infancy (no widely adopted testing frameworks), so in practice this just yielded a ton of codepaths and boilerplate that never really got tested and probably didn't work.

    - Because RAM was so tight, you pretty much had to use the operating system's APIs for everything, even if they sucked or were full of bugs. It wasn't just RAM of course, it was also disk space, CD space, nearly non-existent network bandwidth. Duplication of what Windows had wasn't feasible. It meant everything was pretty consistent, which had its upsides, but it also meant that everything was consistently quite ugly and the horrible Windows/macOS APIs were all you got.

    - Memory limits resulted in awkward APIs. No type safety, no enums (only bit flags), no reflection, lots of annoying (and by the late 90s obsolete) memory-locking protocols that were holdovers from Win16, and "rerendering" literally meant re-drawing small areas of the screen because pixels weren't cached. In particular, a lot of APIs required you to call them twice: once to figure out how much memory something would require, and then a second time to fill out that memory once it was allocated (see the sketch after this list).

    - Error messages? Logging? Hah no. You get 32 bit error codes because there isn't enough memory or disk space for all the strings proper logging and errors would require. So you got really good at decoding HRESULTs.

    - Garbage collection existed as a tech, but because swap was so slow and RAM so tight it was very easy for it to trigger swap storms, so in common practice automatic memory management (when it existed) was all based on refcounting. So you couldn't quite forget about memory, because you could still leak it pretty easily even in a higher-level language like VB.
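
    A sketch of that call-twice shape in C, with a made-up API (the names are hypothetical; the pattern is the point):

      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical API in the classic two-call style: with buf == NULL
         it reports the size required; otherwise it fills buf. */
      static size_t get_device_list(char *buf, size_t buflen)
      {
          static const char data[] = "COM1\0COM2\0";  /* stand-in payload */
          if (buf == NULL) return sizeof data;
          if (buflen < sizeof data) return 0;
          memcpy(buf, data, sizeof data);
          return sizeof data;
      }

      int main(void)
      {
          size_t needed = get_device_list(NULL, 0);   /* call 1: how much? */
          char *buf = malloc(needed);
          if (!buf) return 1;                         /* the ubiquitous OOM check */
          get_device_list(buf, needed);               /* call 2: actually fill it */
          free(buf);
          return 0;
      }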

    The transition towards relative memory abundance was fairly painful. For example, Java became popular with devs because it had all those nice things that used a lot of memory, but it led to slow/swappy apps that were painful for users. You still see the same complaints w.r.t. Electron apps, though today machines can take it, so it's more about people feeling it's wasteful rather than it literally killing the responsiveness of the machine like it used to.

    Of course in the above I say "was" and "used to be" but if you ever have to do any Win32 or low level POSIX programming you'll be right back in that world.

  • by butterisgood on 4/2/23, 11:30 PM

    Read “Racing the Beam” for a good time learning about how to do a lot with crazy constraints we don’t have today!

  • by eurticket on 4/3/23, 12:14 AM

    I wasn't around at that time, but I imagine limitations made for some creative problem solving.

  • by zxcvbnm on 4/2/23, 7:24 PM

    Yes. Also, back then, in the offline-ish times, people shipped working software. Nowadays stuff is huge, slow, craps out all the time, and the risk of stuff breaking with the next update is comparable to the security risk of turning off updates completely.

  • by kazinator on 4/2/23, 11:48 PM

    > when memory usage was a concern

    I checked this morning; it was still the case.

  • by nathants on 4/2/23, 8:40 PM

    These things still matter for the interesting/challenging sections of the problem space.

  • by boppo1 on 4/2/23, 11:58 PM

    Programmers like you are why I have to upgrade my computer every two years just to browse the web.

  • by czzr on 4/2/23, 10:30 PM

    No