by AlexDenisov on 5/4/19, 9:14 PM with 182 comments
by apo on 5/5/19, 4:04 AM
> I consider debuggers to be a drug -- an addiction. Programmers can get into the horrible habit of depending on the debugger instead of on their brain. IMHO a debugger is a tool of last resort. Once you have exhausted every other avenue of diagnosis, and have given very careful thought to just rewriting the offending code, then you may need a debugger.
https://www.artima.com/weblogs/viewpost.jsp?thread=23476
I'm still baffled by people like the author who "do not use a debugger." A print statement is a kind of debugger, but one with the distinct disadvantage of only reporting the state you assume to be important.
This part was a little surprising:
> For what I do, I feel that debuggers do not scale. There is only so much time in life. You either write code, or you do something else, like running line-by-line through your code.
I could easily add something else you might spend the valuable time of your life doing: playing guessing games with print statements rather than using a powerful debugger to systematically test all of your assumptions about how the code is running.
by Stratoscope on 5/5/19, 12:23 AM
I spend a lot of time working with APIs and libraries that are poorly documented and often that I haven't used before.
Instead of writing out a bunch of code based on my limited understanding of the docs, and likely with many bugs, what works for me is to just write a few lines of code, until I get to the first API call I'm not sure about or am just curious about. I add a dummy statement like "x = 1" on the next line and set a breakpoint there.
Then I start the debugger (which conveniently is also my code editor) and hopefully it hits the breakpoint. Now I get to see what that library call really did, with all the data in front of me. Then I'm ready to write the next few lines of code, with another dummy breakpoint statement after that.
Each step along the way, I get to verify if my assumptions are correct. I get to write code with actual data in front of me instead of hoping I understood it correctly.
If I'm writing Python code in one of the IntelliJ family of IDEs, I can also hit Alt+Shift+P to open a REPL in the context of my breakpoint.
Of course this won't work for every kind of code. If I were writing an OS kernel I might use different techniques. But when the work I'm doing lends itself to coding in the debugger, it saves me a lot of time and makes coding more fun.
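For what it's worth, here is a minimal Python sketch of that workflow (the `checkpoint` helper and `EXPLORE` flag are my invention, not something from the comment above): the dummy statement gives the debugger a line to stop on right after the unfamiliar call, so you write the next few lines with real data in front of you.

```python
import json
import pdb

EXPLORE = False  # flip to True to drop into the debugger at each checkpoint


def checkpoint():
    """Stand-in for the 'x = 1' dummy line: a named place to pause and inspect."""
    if EXPLORE:
        pdb.set_trace()


# First unfamiliar call: what shape does the parsed result actually have?
result = json.loads('{"users": [{"id": 1, "name": "ada"}]}')
checkpoint()  # with EXPLORE on, inspect `result` here with live data

# Only now, having seen real data, write the next few lines...
first_user = result["users"][0]["name"]
checkpoint()  # ...and verify the next assumption the same way
```

With `EXPLORE = False` the checkpoints are inert, so the same file runs normally outside an exploration session.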
by maximus1983 on 5/4/19, 11:23 PM
I've been a professional now for about 15 years and very, very rarely do I get to work on "my code". Almost all of the code I have to work with was written by someone else originally and I have to just modify the system for new requirements. Tests do not exist or if they do they are largely incomplete.
So the only thing to do is to step through, find the problem, fix the ticket, and move on.
Sure, if I get to design the system, I normally write it very simply and well structured, with appropriate levels of abstraction and enough tests to expose the bugs in my code. But very rarely do I get paid to work on my code, because my code doesn't need a lot of maintenance. I am normally asked to make changes to bad systems.
> Brian W. Kernighan and Rob Pike wrote that stepping through a program is less productive than thinking harder and adding output statements and self-checking code at critical places. Kernighan once wrote that the most effective debugging tool is still careful thought, coupled with judiciously placed print statements.
Thinking harder when I have a project with millions of lines of code (this is normal in large financial systems) won't help me. A debugger will.
I think a lot of these famous programmers have never had to work with something terrible and probably never will and that is why they make such blasé statements.
by roca on 5/5/19, 3:13 AM
Those record-and-replay debuggers also fix some of the other big problems with debugging, such as the need to stop a program in order to inspect its state.
The author is right that the traditional debugger feature set leaves a lot to be desired. Where they (and many other developers) go wrong is to assume that feature set (wrapped in a pretty Visual Studio interface) is the pinnacle of what debuggers can be. It's an understandable mistake, since progress in debugging has been so slow, but state-of-the-art record-and-replay debuggers prove them wrong. Furthermore, record-and-replay isn't the pinnacle either; much greater improvements are possible and are coming soon.
by greenyoda on 5/4/19, 10:01 PM
Also, when you're working with code that takes a long time to compile and link, using a debugger to check program state can be a lot quicker than recompiling the code with added print statements.
Different developers work with different types of code in different environments, so just because Linus Torvalds or Rob Pike does something one way doesn't mean that this is the most effective way for everyone else to do it.
by drewg123 on 5/4/19, 11:15 PM
In my job on the kernel team at Netflix's Open Connect CDN, I do post-mortem debugging of kernel core dumps almost daily. On a fleet the size of our CDN, there will invariably be a kernel panic which you'd not otherwise be able to reproduce. In fact, I often use 2 debuggers: kgdb for most things, and occasionally a port of the OpenSolaris mdb for easily scripting exploration of the core.
My hat is off to all the people who made my debugging so much easier. Eg, the llvm/clang folks who emit DWARF, gdb folks who make the FreeBSD kgdb debugger possible, and the Solaris folks who wrote the weird and wonderful mdb.
by BrissyCoder on 5/5/19, 12:06 AM
I could show any one of these authorities a situation where their bravado would fail: trying to figure out a particular bug by "staring at the code and thinking harder" would be impossible.
Besides, they all tend to concede that they will use print statements or whatever as a last resort. That is absurd: you have to add the statements, recompile the code, and eventually remove them again. Just use the goddamn debugger; that's what it's there for.
What starts as a completely reasonable "hey maybe sometimes you should try to read the code and not let the debugger become a crutch" turns into a sensationalist black/white statement "I NEVER USE A DEBUGGER". Grow up.
by ampersandy on 5/5/19, 7:10 AM
Once you have extensive knowledge of how a system works, you can more easily spot incorrect code and are less likely to write it yourself. You should always treat any debugging session as "learning more about the system". If you can't explain to another person _why_ a bug occurred, then you won't be able to convince them that the problem is truly fixed.
The approach you take to learn how the system works is not important, and will vary from person to person. Experienced developers can read code, think about it, and understand what it does. Some people understand control flow through print statements. Some people can fly through with a debugger. Saying someone else's method of study is "wrong" is just silly if in the end they understand how it works.
This article is especially wrong because it makes the assumption that you must step through a program line by line when you use a debugger, which you don't have to do. You can just put breakpoints at the same places where you would have put an equivalent print statement, except now you can inspect _everything_, instead of the one or two things you bothered to print out.
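As a rough sketch of that difference (the `reconcile` function and its data are made up for illustration): a print statement reports only the one or two names you chose in advance, while a breakpoint at the same line exposes every local; dumping `locals()` is a crude approximation of what the debugger gives you for free.

```python
DEBUG = False  # a breakpoint here would show everything; this only approximates it


def reconcile(orders, payments):
    """Match orders to payments by id, keeping only exact-amount matches."""
    matched = {}
    for order in orders:
        payment = payments.get(order["id"])
        # print(order["id"], payment)  # <- shows only what you assumed mattered
        if DEBUG:                      # <- a breakpoint here shows everything in
            print(dict(locals()))      #    scope; dumping locals() is the poor cousin
        if payment is not None and payment["amount"] == order["amount"]:
            matched[order["id"]] = payment
    return matched


orders = [{"id": 1, "amount": 10}, {"id": 2, "amount": 7}]
payments = {1: {"amount": 10}, 2: {"amount": 9}}
```

The mismatched payment for order 2 is exactly the kind of thing you'd only notice with the print version if you had happened to print the right fields.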
by heinrichhartman on 5/4/19, 11:32 PM
1 Formulate a Hypothesis
2 Determine what data you need for verification
3 Instrument code using gdb break commands to hook function calls, syscalls, signals, state changes, etc., and print debugging information (variables, stack traces, timings) to a log file.
4 Then run an automated or manual test of the failure you are debugging.
5 Then STOP data collection. Kill the process. Shut down the debugger.
6 Perform forensics on the log file.
7 Validate/Discard the hypothesis.
This allows you to reason reliably about the computational process:
- Why did I see this log line before this one?
- Why was this variable NULL here, but not there?
You don't need to do that while you are in a debugging session. You can take your time. You can seek back and forth in the file. It's like dissecting a dead animal, not chasing a fly.
In addition, you can share concise information with your colleagues:
- This is the instrumentation
- This is the action I performed
- This is the output
And ask concise questions that people might be able to answer:
> I expected XYZ in log line 25 to be ABC, but it was FGH. Why is this?
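Step 3 of that workflow might look something like the following gdb command file (the function and fields, `handle_request` and `req->id`/`req->state`, are invented for illustration; something like `gdb -batch -x trace.gdb ./prog` would run it non-interactively):

```gdb
# trace.gdb -- instrumentation only: log state at the breakpoint, never stop for input
set pagination off
set logging file debug.log
set logging on

# Hook a (hypothetical) function: print its arguments and a short stack, then continue
break handle_request
commands
  silent
  printf "handle_request: id=%d state=%d\n", req->id, req->state
  backtrace 3
  continue
end

run
quit
```

After the run, debug.log holds the trace for the offline forensics of step 6. (Recent gdb versions spell the logging switch `set logging enabled on`.)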
by alkonaut on 5/5/19, 10:11 AM
I'm sure I'd write more thought-through code if I took a screwdriver and removed my backspace key. But that doesn't mean it's a good idea.
How about: use a debugger AND reason about the code? My pet theory: the reason the listed developers (Kernighan, Torvalds, et al.) don't use debuggers is that the debuggers they have available just aren't good enough. It would be very interesting to have them describe their development environments, what debuggers they have actually used, and in which languages.
A debugger isn't much better than println if you are working in a weak type system and with a poorly integrated development environment. If you do C on linux/unix and command line gdb is your debugger then I understand why println is just as convenient.
println doesn't solve the problem of breaking in the correct location, or of trying a change without restarting (by moving the next-statement marker to the line before the changed line). It doesn't support complex watch expressions to filter data or visualize large data structures, and so on.
by mahidhar on 5/5/19, 3:26 AM
A few weeks back, we got some complaints from a client who said some of their employees weren't getting allotted an appointment slot, despite the fact that the slot is supposedly free. I dived into the codebase to try and figure out the problem. There were some minor bugs which I spotted first, and could fix without using a debugger. But the appointments were still getting dropped occasionally. So I started tracing the control flow more carefully.
That’s when I found one of the strangest pieces of code I had ever seen. To figure out what the next available appointment slot was, there was a strange function which got the last occupied slot from the database as a DateTime object, converted it to a string, manipulated it using only string operations, and finally wrote it back to the database after parsing it back to a DateTime object, before returning the response! This included some timezone conversions as well! Rails has some very good support for manipulating DateTimes and timezones. And yet, the function's author had written the operation in one of the most confounding ways possible.
Now, I could have sat there and understood the function without a debugger, as the article recommends. And then, having understood the function, I could have rewritten it using proper DateTime operations. But with a client and my managers desperately waiting for a fix, I used a debugger to step through the code, line by line, just understanding the issue locally, and fixed the bug which was buried in one of the string operations. That solved the problem temporarily, and everyone was happy.
A week later, when I had more time, I went back and again used the debugger to do some exploratory analysis, and create a state machine model of the function, observing all the ways it was manipulating that string. I added a bunch of extra tests, and finally rewrote the function in a cleaner way.
Instead of romanticising the process of developing software by advocating the use or disuse of certain tools, we should be using every tool available to simplify our work, and achieve our tasks more efficiently.
by gumby on 5/5/19, 12:01 AM
I started out as a user of "print" as a debugging technique, but early on Bill Gosper took pity on me and introduced me to ITS DDT and the Lispm debugger. They both had an important property: they were always running, so when your program failed you could immediately inspect the state of open files and network connections. No automatic core dumps except for daemons. The fact that you have to explicitly attach a debugger before starting the program is a regression in Unix, IMHO.
It doesn't surprise me that Linus doesn't use one as kernel debugging is its own can of worms.
by MaulingMonkey on 5/5/19, 1:06 AM
I use debuggers to track down and understand undefined behavior, where mutating the code with logging statements may cause the bug to disappear.
I use debuggers to understand the weird state third party libraries have left things in, many of which I don't have the source code to, or even headers for library-internal structures, but do have symbols for.
I use debuggers to understand and create better bug reports and workarounds for third party software crashing, when I don't have the time, the patience, or the ability (if closed source) to dive into the source code to fix it myself.
I use debuggers to verify I understand the exact cause of the crash, and to reassure myself that my "fixes" actually fixed the bug. This is especially important with once-in-a-blue-moon heisencrashes with no good repro steps. I want a stronger guarantee than "simplify and pray that fixed it".
Yes, if your buggy overcomplicated system is a constant stream of bugs, think hard, refactor, simplify, do whatever it takes to fix the system, stem the tide, and make it not a broken pile of junk.
But sometimes bugs happen to good code too, sneaking through all your unit tests and sanity checks anyway. And despite rumors such as:
> Linus Torvalds, the creator of Linux, does not use a debugger.
Linus absolutely uses debuggers:
>> I use gdb all the time, but I tend to use it not as a debugger, but as a disassembler on steroids that you can program.
He just pretends he's not using it as a debugger (as if "a disassembler on steroids that you can program" isn't half of what makes a debugger a debugger) and strongly encourages coding styles that don't require you to rely on them heavily.
by shin_lao on 5/5/19, 12:11 AM
I too, when I'm on Linux, don't use a debugger, because there's no good debugger and adding a print statement is faster and easier, and figuring out what is going on with gdb is just plain horrible and slow.
That's why as soon as I find a bug on Linux I try to have it on Windows to leverage VS's debugger. I can't count the number of times where I could instantly spot and understand a bug thanks to VS's debugger.
"Think harder about the code", sure, and what if you didn't write the code?
by tie_ on 5/4/19, 11:45 PM
Debuggers are the most amazing tools to explore and learn the code in your hands. They do have limitations, as pointed out, but presuming that your inner insight/gut feeling leads to more scalable and lasting results is ridiculous.
by michaelmrose on 5/5/19, 12:09 AM
"In the 1950s von Neumann was employed as a consultant to IBM to review proposed and ongoing advanced technology projects. One day a week, von Neumann "held court" at 590 Madison Avenue, New York. On one of these occasions in 1954 he was confronted with the Fortran concept; John Backus remembered von Neumann being unimpressed and that he asked, "Why would you want more than machine language?" Frank Beckman, who was also present, recalled that von Neumann dismissed the whole development as "but an application of the idea of Turing's 'short code.'" Donald Gillies, one of von Neumann's students at Princeton, and later a faculty member at the University of Illinois, recalled that the graduate students were being "used" to hand-assemble programs into binary for their early machine (probably the IAS machine). He took time out to build an assembler, but when von Neumann found out about it he was very angry, saying (paraphrased), "It is a waste of a valuable scientific computing instrument to use it to do clerical work."
by yonixw on 5/4/19, 11:26 PM
Anyway, imagine a detective trying to figure out a murder scene without going through it step by step. Of course, with experience comes speed. So, weird flex, but OK. I will still prefer C# over any language just because the debugger in Visual Studio is amazing!
After re-reading the article, I see it as another "just code better" piece, which hopefully makes it easy to pinpoint the bug from the problem itself. In complex systems with a lot of feedback loops, it's nearly impossible without debugging or using logs (which for me are just a serialized debugger).
by Ace17 on 5/5/19, 9:02 AM
It necessarily implies some loss of control over the code you (or somebody else) wrote, i.e. you're not sure anymore what the program does - otherwise, you wouldn't be debugging it, right?
If you get into this situation, then indeed, firing up a debugger might be the fastest route to recovery (there are exceptions: e.g. sometimes printf-debugging is more appropriate, because it flattens time and lets you visually navigate through history to determine when things started to go wrong).
But getting into this situation should be the exception rather than the norm. Because whatever your debugging tools are, debugging remains an unpredictable time sink (especially step-by-step debugging, cf. Peter Sommerlad "Interactive debugging is the greatest time waste" ( https://twitter.com/petersommerlad/status/107895802717532979... ) ). It's very hard to estimate how long finding and fixing a bug will take.
Using proper modularity and testing, it's indeed possible to greatly reduce the likelihood of this situation, and when it occurs, to greatly reduce the search area of a bug (e.g the crash occurs in a test that only covers 5% of the code).
I suspect, though, that most of us are dealing with legacy codebases and volatile/undocumented third-party frameworks. Which means we're in this "loss-of-control" situation from the start, and most of the time our work consists of striving to get some control/understanding back. To make matters worse, fully getting out of this situation might require an amount of work whose estimate would give any manager a heart attack.
by bch on 5/5/19, 12:35 AM
It tells us more about Linus. I hope no impressionable developers take this article too seriously.
by speedplane on 5/5/19, 9:54 AM
The speed of iterating changes to code has a huge influence on the tools that you use. In a language like C, compiling new changes to code can take many seconds or sometimes minutes. Thus, you'll want to have heavy duty tools that can carefully analyze how your code is operating.
In contrast, in a scripting language, changing a line and rerunning the code can take far less time (a few seconds for even large programs). Thus, you can iterate more often, and so you don't have to be as careful in each iteration.
The moral of the story is that debuggers can be extremely helpful for some languages, especially those that take a long time to compile. However, while still helpful, they are far less helpful for languages that you can run quickly (I'm thinking Python here).
by rspeele on 5/5/19, 3:50 AM
Sure, maybe that guy exists. Maybe you've seen "that guy" or been "that guy". Does that mean that it has no value to be able to stop a program and look at all the values?
by pbiggar on 5/5/19, 12:15 AM
Is this a debugger? Sure. It sorta lets you step through the program and inspect state at various points. But it's also a REPL, and it also strongly resembles inserting print statements everywhere. It's also like an exception tracker/crash reporter and a tracing framework.
IMO it's both simpler and more powerful than any of these. It's like if every statement had a println that you never have to add but is available whenever you want to inspect it. Or like a debugger where you never have to painfully step through the program to get to the state you want.
So overall, I think we need to think deeper about what a debugger is and how it can work. Most of the people quoted do not have a good debugger available to them, nor a good debugging workflow.
by breatheoften on 5/5/19, 1:07 AM
“Debuggable” code is written in a certain style — just like “testable” code.
I consider a codebase to be good when there are meaningful places to put breakpoints, sufficient for running learning experiments about the code. Just like a codebase with “tests” is often a better codebase as a result of being written in a way that supports testing, a codebase that supports debugging can also often be a better codebase. And these work well together (putting breakpoints in test cases is often a really great idea!).
I think one of the reasons the value of the debugger so often fails to be noticed by experienced developers is that so many systems are architected in a horrific way which really does not allow easy debugger sessions — or the debugger platform is so underpowered that debugging is unreliable. There’s nothing worse than not trusting the debugger interface — “I want to do an experiment where I run code from here up to here” needs to be easy to describe and reliable to execute, otherwise it is too much pain for the gain. In my opinion, failure to make this easy is not a fault of the concept of the debugger but a fault of the codebase or the tooling (which is often very inadequate).
by jchw on 5/5/19, 12:32 AM
- Because I forgot how to use it (or never knew how.) There are many debuggers and UIs and I still know how to use some of them to decent effect, but I simply don’t know how to be effective with most of them.
- Because I’m pretty confident I have a good understanding of what code is doing nowadays. My intuition has been honed over the years and I tend to quickly guess why my code isn’t working.
- Because my code is all unit tested now. This contributes to my ability to be more sure about what code is actually doing.
There are still some cases where I may try a debugger. I had one recently where I was unsure what path my code was taking and I wasn’t sure how to printf debug. That helped a lot.
Not using a debugger is not really a choice I made or something I do to try to look impressive, rather it’s most likely a result of the growing diversity of programming languages and environments I work in, combined with better testing habits. I just feel like I have enough confidence to fix the bugs quickly. When I lose that confidence is when I break out printf or the debugger.
by hirundo on 5/5/19, 12:06 AM
Of course the first tool is careful thought. But when forced to fall back to print or log statements it feels like a handicap.
If I have to use a print statement it means I'm not sure what's going on so I'm not sure what, exactly, to print. By breaking on that code I don't have to know exactly because I can execute arbitrary code in that scope. What takes multiple iterations with print is often just one with break.
I can compare directly, because on a production-only bug I'm forced to use log debug statements. And usually in that case I have the same code open locally in a debugger. The difference is night versus day.
Maybe it's like a chess master who really doesn't need the board. For the kid in Searching for Bobby Fischer, it may really be a distraction. But I notice that grandmasters use a chess board in serious competition. And as a programmer I'm no grandmaster, and I play this weird game better with the board in front of me.
by systemBuilder on 5/5/19, 2:49 AM
Sadly, gdb is running out of gas at Google - it takes 60s to load "hello world" plus 200MB of Google middleware, and it would often step into whitespace or just hang, forever. This was often because the gdb/emacs environment at Google wasn't being maintained by its smartest people.
by userbinator on 5/5/19, 5:25 AM
Thus I mostly agree with the author of this article --- blindly stepping through code with a debugger is not a very productive way of problem solving (I've seen it very often when I taught beginners; they'll step through code as if waiting for the debugger to say "here is the bug", completely missing the big picture and getting a sort of "tunnel-vision", tweaking code messily multiple times in order to get it to "work".) If you must use it, then make an educated guess first, mentally step through the code, and only then confirm/deny your hypothesis.
by zwaps on 5/4/19, 11:59 PM
To a point, this is due to me not planning through my programs. So I can see that for some areas a debugger may not be essential.
But in other cases, I am actually interested to trace what happens with data in my mechanisms. For this kind of work, a debugger is essential.
Most importantly, as someone who maybe does not use the absolute best practices of designing software, debugging allows me to write solid and successful programs without having completed a CS degree.
by namelosw on 5/5/19, 5:50 AM
The debugger is the best answer when you are working with bad code that you cannot reason about locally -- a random bug that only happens in the production environment, a mutable monolith, or a complex system integrating with 3rd-party services. Debugging means learning what actually happened first, then building a theory to explain everything. Sometimes, fixing the plane in mid-air requires knowing what happened to that specific plane, instead of building a theory of how a plane could develop the exact problem without looking at the plane itself. As in control theory, if there are too many possible states, it's not cost-efficient to infer the state from behavior -- whereas in software you can cheat and inspect the state directly.
However, I agree with the author that in the ideal scenario there's almost no need for a debugger. In SICP, the mental model of the first few chapters is the substitution model -- it's not how a computer really works, but it's easier to reason about, and you don't have to be at a specific step with a specific environment to reproduce a problem (which is where a debugger really helps). The more the code is written in a way that couples it to its environment (which reflects the environment model of the later chapters), the more one needs a debugger to work with it. And that's why the virtues of functional programming and referential transparency are praiseworthy.
by ATsch on 5/5/19, 10:49 AM
A tier up from that for me are higher-level debuggers, generally specific to some technology. For example, the browser dev tools, GTK Inspector, wireshark, RenderDoc etc. I'd also put tools like AddressSanitizer, Valgrind and Profilers into this category. Because they are more specialized, they can give you richer information and know what information actually matters to you. I usually find I use these regularly when developing.
The highest tier is tools specialized to your specific application. This could be a custom wireshark decoder, a mock server or client, DTrace/BPFTrace scripts and probes, metrics, or even an entirely custom toolbox. Interestingly, print statements end up in this same category for me, despite being the possibly simplest tool. Being specific to the problems you actually face allows you to focus on the very specific problems you have. This tier is interesting because these tend to become not just something you use when things go wrong, but become part of how you write or run your code.
Under this lens, I don't think it's that surprising that people don't really use general step debuggers much. They are a primitive tool that gives you many of the benefits of tier-3 debuggers without any of the effort involved in making custom tools. They are the maximum reward-to-effort ratio in terms of debugging.
by gizmo686 on 5/5/19, 12:29 AM
I am currently working on a codebase where all the developers are adamant debugger users. The code is practically impossible to debug without the use of a debugger, because no one has ever had to build up the debugging infrastructure.
Complex/difficult bugs still take about as long to fix, but simple bugs take far longer than they normally would, because every time you use a debugger you are starting from scratch.
by sytelus on 5/5/19, 8:35 AM
https://lwn.net/2000/0914/a/lt-debugger.php3
Many people say not using a debugger is the other swing of the pendulum, but perhaps it is not. You want to have assertions/prints in your program at critical junctions. Those should be able to explain the behavior of the program you are seeing. If they don't, then you have probably missed some critical junctions OR don't really understand your own code. There is a third possibility, where you will need a debugger: when the compiler/programming language/standard library itself has a bug. Instead of a more time-consuming binary search for where you first get unexpected output, a debugger might be the better option.
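A small sketch of "assertions at critical junctions" in Python (the `allocate_slot` function and its messages are illustrative, not from the comment): each check either holds or explains exactly which assumption about the program state broke, so the program narrates its own behavior before a debugger is ever needed.

```python
def allocate_slot(slots, employee_id):
    """Mark the first free slot as taken and return its index.

    The assertions are the 'critical junctions': if one fires, its message
    already tells you which assumption about the state was wrong.
    """
    assert employee_id >= 0, f"invalid employee id: {employee_id}"
    free = [i for i, taken in enumerate(slots) if not taken]
    assert free, "no free slot, although the caller believed one existed"
    chosen = free[0]
    slots[chosen] = True
    assert slots[chosen], "slot not recorded as taken after allocation"
    return chosen


slots = [True, False, False]  # slot 0 already booked
```

If the "no free slot" assertion ever fires, the junction has explained the symptom (an employee not getting a slot) without any stepping.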
by kazinator on 5/5/19, 6:35 AM
Oh wait; all of that is judiciously inserted print statements.
by UglyToad on 5/5/19, 12:12 PM
When you work mainly on enterprise code, where on a daily basis you're unlikely to encounter code you wrote rather than code written by another team member, you'll need a debugger or print statements.
But the reasonable point the article makes is that a debugger makes it very easy to solve the problem localised to a function or a couple of lines of code, rather than taking the time to improve the whole area and/or add test coverage. The other thing people who don't work on enterprise code won't necessarily understand is that you don't usually have the time to do that. So it's a good thing to keep in mind, but it feels a little too ivory-tower to be broadly applicable.
by RickJWagner on 5/5/19, 1:16 AM
But sometimes I do. And when I do, I'm glad they're there, because they are a great tool of last resort.
by gridlockd on 5/5/19, 4:50 AM
A lot of people also have never used a debugger that isn't terrible to use. Most debuggers fall into that category.
As for all of this "read the code first and get a better understanding" talk - this is obviously highfalutin' bullshit. You're human, you made a dumb mistake somewhere, and the debugger will help you find it faster than your brain going on an excursion.
by bartimus on 5/5/19, 6:23 AM
I don't have anything against debuggers. I do - however - have a concern with people who rely heavily on the find feature in their IDE to search for certain code they need to change. Oftentimes they don't look at the bigger picture and miss how certain things might better be implemented elsewhere. They don't run into the problem of finding code that's poorly structured. They don't have a need to restructure it.
by SAI_Peregrinus on 5/5/19, 1:28 AM
Of course embedded systems aren't the focus of the article, but they're all over and a very good place to have a debugger.
by ashelmire on 5/5/19, 12:31 AM
Do whatever works for you. But it's good to have more tools in our toolbox - use puts debugging if that's easier (often is in very complex environments), use a debugger if you need to.
by yingw787 on 5/5/19, 12:48 AM
Since the author quoted Guido van Rossum, I'll share a recent anecdote from my interaction with Guido at PyCon. I first met Guido this past Wednesday, I'm guessing after he attended the Python language summit. He honestly seemed bristly at the time, probably because he's heard many of the same arguments raised over thirty years (I can imagine something like "hey, why not get rid of the GIL" -> "Wow, why didn't I think of that?! Just get rid of the GIL!", although hopefully it was more high-level than that). One of the other language summit attendees was talking about a particular Python feature, which I don't remember, but the underlying notion was that even if you get testy with contributors they'll still be a part of the community. I remember thinking, "No. That's totally not how it works. If you get testy with contributors they'll just leave and you'll never hear from them again, or you'll turn off new contributors and behead the top of your adoption funnel, meaning your language dies when you do".
Python has such great adoption because it caters to the needs of its users first. Take the `tornado` web server framework. I haven't confirmed it myself, but apparently it has async in Python 2 (async is a Python 3 feature). How? By integrating exceptions into its control flow and having the user handle them. But it shipped, and it benefited its users. IMHO, `pandas` has a decent amount of feature richness in its method calls, to the point where sometimes I can't figure out what exactly calling all of them does. Why? Because C/Python interop is likely expensive for the numerical processing `pandas` does under the traditional CPython interpreter, so ideally you want to put together the request in Python once before flushing to C, and because people need different things and so need different default args. `pandas` also ships, and benefits a lot of people.
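The exception-as-control-flow trick described above can be sketched in a few lines. This is a minimal illustration, not tornado's actual API; the names (`Return`, `run`, `fetch`) are hypothetical stand-ins, and the "I/O" is faked so the example is self-contained:

```python
# A generator-based "coroutine" whose return value travels in an exception,
# roughly how pre-`yield from` frameworks delivered results in Python 2.
class Return(Exception):
    """Carries a coroutine's result out of the generator."""
    def __init__(self, value):
        self.value = value

def run(gen):
    """Drive a generator to completion; catch Return to get the result."""
    try:
        value = None
        while True:
            # Pretend each yielded value resolves instantly; a real event
            # loop would resume the generator when the I/O finished.
            value = gen.send(value)
    except Return as r:
        return r.value
    except StopIteration:
        return None

def fetch():
    data = yield "fake-io"      # would be an async operation in real code
    raise Return(data.upper())  # "return" via exception

print(run(fetch()))
```

The user-visible contract is exactly what the comment describes: the framework raises and catches exceptions as part of normal control flow, and callers have to play along.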
Shipping is so important in production, because it means you matter, and you get to put food on the table for you, your family, and the families of the people you employ. You can't just bemoan debugging as bad because somebody isn't a genius or isn't in an architect role. Debugging means you ship, and you can hire the person who isn't aiming for a Turing Award, who might have a crappier job otherwise, and he can feed his family better.
Don't cargo cult people you're not. Use a debugger.
by cozzyd on 5/5/19, 5:10 AM
Also watches are invaluable when you know something is getting a wrong value but you don't know where.
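When no debugger watchpoint is available, the same idea can be faked in plain code. A minimal Python sketch (all names hypothetical) that traps the first bad write at its call site, so the stack trace points straight at the culprit:

```python
class Watched:
    """Tiny stand-in for a debugger watchpoint: trap writes to one field."""
    def __init__(self, balance=0):
        # Bypass our own __setattr__ for the initial assignment.
        object.__setattr__(self, "balance", balance)

    def __setattr__(self, name, value):
        # Fire the moment the watched field is about to receive a bad
        # value, while the offending call site is still on the stack.
        if name == "balance" and value < 0:
            raise AssertionError("balance went negative: %r" % (value,))
        object.__setattr__(self, name, value)

acct = Watched(10)
acct.balance = 5        # fine
try:
    acct.balance = -3   # trips the "watchpoint"
except AssertionError as e:
    print("caught:", e)
```

A hardware or debugger watchpoint does the same thing without touching the code, which is why it's so valuable when you know *what* goes wrong but not *where*.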
by GuB-42 on 5/5/19, 12:37 AM
Turning a blind eye to one of your tools is disingenuous. Step-by-step debugging has its uses, and some work environments may favor it. And in fact, I am quite sure that guys like Linus are competent in using debuggers and will do so when needed. It is just that it is not their favorite tool and it is not well suited to their projects.
by Jorge1o1 on 5/5/19, 12:18 AM
by jayd16 on 5/5/19, 12:32 AM
I think I'll keep using them.
by astrobe_ on 5/5/19, 9:22 AM
I tried hard to use the hardware debugger because it was rather expensive (a case, I think, of the sunk cost fallacy). Problem is, our system is soft real time; stepping through the main program causes the other things connected to the system to notice that the main program does not respond, and act upon this.
The hardware debugger was quite capable so we had watchpoints and scripting to avoid this problem, but you had to invest considerable amounts of time to learn to program all that correctly. Amusingly, this was another occasion to make more bugs. Now you need a debugger to debug your debugger scripts...
Moreover, the "interesting" bugs were typically those that happened very rarely (that is, on a scale of days) - bugs typically caused by subtly broken interrupt handlers. To solve that kind of bug in a decent time frame you would need to run dozens of targets under debuggers to test various hypotheses, or to collect data about the bug faster. That's not even possible sometimes.
I also happen to have developed as a hobby various interpreters. The majority were bytecode interpreters. There again debuggers were not that useful, because a generic debugger cannot really decode your bytecode. Typically you do it by hand, or if you are having real troubles, you write a "disassembler" for your bytecode and whatever debugger-like feature you need. Fortunately, the interpreters I was building all had REPLs, which naturally helps a lot with debugging.
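A hand-rolled disassembler of the kind described is usually only a few lines. A minimal sketch for a made-up `(opcode, operand)` bytecode - all opcodes and the encoding are hypothetical:

```python
# Hypothetical bytecode: each instruction is an (opcode, operand) pair.
OPS = {0: "PUSH", 1: "ADD", 2: "PRINT"}

def disassemble(code):
    """Render raw (opcode, operand) pairs as readable mnemonics."""
    lines = []
    for pc, (op, arg) in enumerate(code):
        name = OPS.get(op, "?%d" % op)
        # Only PUSH carries a meaningful operand in this toy encoding.
        lines.append("%04d %-5s %s" % (pc, name, arg if name == "PUSH" else ""))
    return "\n".join(lines)

program = [(0, 2), (0, 3), (1, None), (2, None)]  # push 2, push 3, add, print
print(disassemble(program))
```

Once you have this, bolting on a single-step loop or a breakpoint-by-pc check inside the interpreter's dispatch loop gives you most of what a generic debugger would, in terms your bytecode actually speaks.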
So I'm kind of trained not to use debuggers. I learned to observe the system carefully instead, to apply logic to come up with possible causes, and to use print statements (or, when even that's not possible, just LEDs) to test hypotheses.
One should keep in mind that debuggers are the last line of defense, just as unit tests will never prove the absence of bugs. So you'd rather do whatever it takes not to have to use a debugger.
My current point of view is that the best "debugger" is a debugger built inside the program. It provides more accurate features than a generic debugger, and because it is built with functions of the program, it helps with testing it too. That's a bit more work, but when you build that functionality, debugging and testing support each other.
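As a sketch of that idea (all names hypothetical): a toy interpreter that carries its own inspection command, rendering state in the program's own terms rather than a generic debugger's view of raw memory:

```python
import json

class Interpreter:
    """Toy stack machine with a built-in inspection command, instead of
    relying on an external debugger to make sense of its internals."""
    def __init__(self):
        self.stack = []
        self.trace = []  # domain-level log of what happened

    def push(self, v):
        self.stack.append(v)
        self.trace.append(("push", v))

    def add(self):
        b, a = self.stack.pop(), self.stack.pop()
        self.stack.append(a + b)
        self.trace.append(("add", a + b))

    def debug_dump(self):
        # The "built-in debugger": state serialized in the program's
        # own vocabulary, usable from tests and from a REPL alike.
        return json.dumps({"stack": self.stack, "trace": self.trace})

vm = Interpreter()
vm.push(2); vm.push(3); vm.add()
print(vm.debug_dump())
```

Because `debug_dump` is an ordinary function of the program, tests can assert on its output directly, which is the "debugging and testing support each other" effect the comment describes.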
by Insanity on 5/4/19, 11:53 PM
But when there is a bug in code I wrote, I can reason about it and place prints where I need them. For side projects I hardly ever use a debugger.
by thrax on 5/5/19, 5:38 AM
by CalChris on 5/4/19, 11:53 PM
by sanderjd on 5/4/19, 11:39 PM
Well, I use debuggers. I think they're great tools. My feeling is that I'm very happy to use any tool that helps me create, understand, and improve software. When people tell me that a tool that is useful to me in that endeavor is not actually useful, all I can think to do is roll my eyes.
Having said that, something I am very interested in is learning new approaches to interrogate software complexity and solve problems. So, "here are some approaches to understanding and debugging code that have worked for me" from someone who doesn't use debuggers would be interesting to me. But I actually don't see any of that here.
by purplezooey on 5/5/19, 2:17 AM
by tanilama on 5/4/19, 11:31 PM
by EdSharkey on 5/4/19, 11:48 PM
We "control" our code when tests prove it does what we think it should.
Debuggers are rarely usable, inefficient tools for software development. I agree with OP; debuggers don't scale.
by kjar on 5/4/19, 11:36 PM
by dboreham on 5/5/19, 12:07 AM
> there are cases where a debugger is the right tool
by sys_64738 on 5/5/19, 1:13 AM
by azhenley on 5/5/19, 12:29 AM
by tahoemph999 on 5/5/19, 1:24 AM
Out of the 5 beliefs he quotes from celebrities, we only have reasons for 3. Of those 3, the common thread I see is that we should be able to reason about our code and debuggers act to derail that. Furthermore, it appears that the aspect of debugging most maligned here is stepping through code line by line. I'm fairly certain that is specific to a certain type of mindset. If I were writing a title for a talk in this area it might be more like "single stepping bad for students", and then I'd talk about how to build code that is easy to model and think about, and then use that to work through most problems. If you've got yourself past that student part (and yes, you'll dip back into this with new tech / languages), then being able to single step when it makes sense (you don't have docs, the processor isn't doing the right thing, etc.) makes you more powerful. Not less.
The focus on printing is a bit annoying. The writer seems to have never worked in embedded systems, distributed systems, or systems where reproducing the bug isn't an option. In the last case a debugger is your tool for grunging around in a core dump. In the embedded case some type of debugger or forcing a core dump (and thus using a debugger) might be your only choices.
I also question if he has ever worked on web systems. Reasoning "harder" about how some new CSS or JavaScript "feature" behaves across different browsers is useless. Writing little ad hoc test cases and carefully tracking how they behave in a debugger is powerful.
A lesson from my history is that of systems that take a long time to build and upload. The one I worked on early took 60 minutes to build and 30 minutes to upload to test hardware. You didn't fix bugs one by one. You fixed them by discovery, fixing on the platform (inserting nops, etc.) in assembly while replicating that in source (probably kicking off a build in case that was the last bug of this run), and then continuing to test / debug and get every little bit you could out of the session. And if you had to single step then that was worth it. Is this entirely a historical artifact? I haven't worked with anything that bad in decades, but I still work with embedded (and some web) systems where the time to build and upload can be a minute to minutes. Getting more out of the session is useful and debuggers are part of that.
Refactoring as a response to a bug seems like a mistake worse than line by line stepping to me. Not understanding a cause but making a change propagates incorrect thinking about the system.
But I think the real missing part of this article is a discussion of what the other useful tools are. The last comment in the article mentions "Types and tools and tests". It is easy to say tests are table stakes, but a similar article about testing would create a flamefest, so it is a bit hard to tell what kind of table (or is it stakes). So what are those tools beyond testing? I'd love to have DTrace everywhere I worked. It's the number one best tool I've ever seen for working with a live system. The ideas in Solaris mdb about being able to build little composable tools around data structures are awesome. Immutable methods of managing databases are wonderful. It would have been nice if this author talked about design and refactoring "tools" (could be methodologies) he liked or thinks should exist.
by zmmmmm on 5/5/19, 4:28 AM
I don't pull out the debugger very often, but knowing how and when to do it, and doing it well, is a significant tool in my arsenal. There are times when, I can guarantee you, I would have spent a massive number of hours on certain bugs, or maybe never properly resolved them, without it.
by simonsays2 on 5/5/19, 6:35 AM