by malloc47 on 1/20/13, 10:42 PM with 105 comments
by pkaler on 1/20/13, 11:24 PM
It doesn't matter. This is not how programming works in the real world. In the real world, you write the most correct program you can under time pressure. A new compiler, operating system, or platform arrives that exposes a bug. You fix it and you move on. It doesn't matter if the language is future proof or not. The process is similar for any complex program.
The blog's name is "Embedded in Academia" and this is a perfectly valid viewpoint for someone in academia to take. And people in academia should research towards building more robust tools and languages. But it really is not going to matter in the real world. Languages and platforms will never be future proof because computing is complex.
by haberman on 1/21/13, 1:10 AM
I can speak as someone who has been programming in C and C++ for over ten years, but only in the last few years became aware of this issue and started taking it seriously. Five years ago I would do things like cast function pointers to void-pointer and back, or calculate addresses that were outside the bounds of any allocated object and compare against them, all without really even realizing I was doing something wrong.
I don't think this will spell doom-and-gloom for C and C++ though. I think a few things will happen.
First of all, the compiler people are walking a fine line; yes, they are breaking code that relies on undefined behavior, but they often avoid breaking too much. For example, I've had it explained to me that at least for the time being, gcc's LTO avoids breaking any programs that would work when compiled with a traditional linker. In addition, they often provide switches that preserve traditional semantics for non-compliant code that needs it (like -fno-strict-aliasing and -fwrapv).
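For example (a minimal sketch of mine, not code from any particular project), this is the sort of legacy idiom that -fwrapv rescues:

    // A legacy-style overflow test that assumes signed wraparound. Per the
    // standard, signed overflow is undefined behavior, so an optimizer may
    // fold the comparison to false and delete it entirely; building with
    // -fwrapv restores the two's-complement wrapping the code was written
    // against.
    bool add_would_wrap(int a, int b) {
        return b > 0 && a + b < a;  // UB without -fwrapv; wraps with it
    }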
Secondly, I believe that tooling will get better, and rather than ignoring the warnings I believe that people's general awareness of this issue will rise, as well as knowledge of standard-compliant ways of working around common patterns of undefined behavior. For example, it's often easy to avoid aliasing problems by using memcpy(), and this can usually be optimized away.
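A minimal sketch of that memcpy() idiom (my illustration, assuming a 32-bit float):

    #include <cstdint>
    #include <cstring>

    // Reading float bits through *(uint32_t *)&f violates strict aliasing;
    // copying the bytes is well-defined, and compilers typically turn the
    // memcpy into a single register move.
    uint32_t float_bits(float f) {
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);
        return bits;
    }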
Thirdly, I expect that the standard may begin to define some of this behavior. For example, I think that non-two's-complement systems are exceedingly rare these days; I wouldn't be surprised if a future version of the standard defines unsigned->signed conversions accordingly.
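To be concrete (my sketch of the status quo, not anything from the standard's text), this is the conversion in question:

    #include <cstdint>

    // Converting an out-of-range unsigned value to a signed type is
    // implementation-defined today; on two's-complement hardware it just
    // reinterprets the bits, which is the behavior a future standard could
    // simply mandate.
    int32_t as_signed(uint32_t u) {
        return static_cast<int32_t>(u);  // implementation-defined if u > INT32_MAX
    }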
by pacaro on 1/20/13, 11:40 PM
For me, this is perhaps the biggest issue raised in this article: as static and dynamic analysis tools become more ubiquitous, we should be learning to fix the issues that they raise, not ignore them.
I remember a while ago (2004 or 5) interviewing a college-hire candidate. I had asked about working with others and we had gotten to talking about code review - the candidate was passionate about how code review had helped with a group project he worked on, but every single example he gave of a bug found by code review was something that -Wall would have found...
The same applies to static analysis - let the machines do the work that they can do, that leaves the humans to get on with the work that the machines can't do (yet!)
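To illustrate (a hypothetical example, not one the candidate gave), this is the classic kind of slip -Wall flags instantly:

    #include <cstdio>

    int main() {
        int x = 5;
        if (x = 0)                      // meant: if (x == 0)
            std::printf("never reached\n");
        return 0;
    }

g++ -Wall warns "suggest parentheses around assignment used as truth value" - no human reviewer required.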
by ge0rg on 1/21/13, 11:37 AM
Making a correct overflow check in C/C++ is not just non-trivial, it is overly complicated even for experienced developers [2]. This is IMHO unacceptable for something that is frequently required in security contexts.
Therefore, I hope that option 3 proposed by the author (changing the C/C++ standard to define the correct behavior for at least integer overflows) will be adopted. However, this probably will not happen for a long time, leaving us with security holes all over the net. (A sketch of a correct check follows the references below.)
[1] http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30475
[2] http://stackoverflow.com/questions/3944505/detecting-signed-...
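A minimal sketch of a correct check, in the spirit of the answers in [2] - test against the limits before adding, so that no undefined overflow is ever executed:

    #include <climits>

    // Returns true and stores a + b in *result if the sum is representable;
    // returns false otherwise. The addition only happens when it is safe.
    bool safe_add(int a, int b, int *result) {
        if ((b > 0 && a > INT_MAX - b) ||
            (b < 0 && a < INT_MIN - b))
            return false;               // the sum would overflow
        *result = a + b;
        return true;
    }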
by c3d on 1/21/13, 7:04 AM
C++11 added many changes intended for the "do-it-yourself" crowd, like auto, the new function syntax, and lambdas. It didn't add much for the "let the compiler do the work for me" crowd (one notable exception being variadic templates, something that was in my own XL programming language since 2000). In C++, you are still supposed to do the boring work yourself.
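A tiny sketch of that "let the compiler do the work" style (my illustration, not XL code):

    #include <iostream>

    // Base case: a single value sums to itself.
    template <typename T>
    T sum(T t) { return t; }

    // The compiler expands the parameter pack, generating the overloads a
    // pre-C++11 programmer would have written out by hand.
    template <typename T, typename... Rest>
    T sum(T first, Rest... rest) { return first + sum(rest...); }

    int main() {
        std::cout << sum(1, 2, 3, 4) << "\n";  // prints 10
    }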
For example, C++11 still lacks anything that would let you build solid reflection and introspection, or write a good garbage collector that doesn't need to scan tons of non-pointers.
If you want to extend C++, it's just too hard. C++11 managed to add complexity to the most inanely complex syntax of all modern programming languages. Building any useful extension on top of C++, like Qt's signals and slots, is exceedingly difficult. By contrast, Lisp has practically no syntactic constructs and is future proof. My own XL has exactly 8 syntactic elements in the parse tree.
So in my opinion, C and C++ are already left behind for a lot of application development these days because they lack a built-in way to evolve. If you are curious, this is a topic I explore more in depth under the "Concept programming" moniker, e.g. http://xlr.sourceforge.net/Concept%20Programming%20Presentat....
by bcoates on 1/21/13, 12:59 AM
These are not problems of the language per se, but the original sins of neo-vaxocentrism: confusing "I understand how this might work, at some random abstraction layer" with "I can depend on what happens when I do something stupid". Free your mind of these and the rest will follow.
These low-level bit banging errors are vastly less common than shared-memory concurrency issues, which as far as I can tell are endemic to all code that attempts shared-memory concurrency, in any language. If you want to have an axe to grind about languages that aren't future proof, look there.
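To make the shared-memory point concrete (a minimal sketch, not anyone's real code):

    #include <thread>

    int counter = 0;  // unsynchronized shared state: a data race, UB in C++11

    int main() {
        std::thread t1([] { for (int i = 0; i < 100000; ++i) ++counter; });
        std::thread t2([] { for (int i = 0; i < 100000; ++i) ++counter; });
        t1.join();
        t2.join();
        // counter is typically less than 200000: increments are lost. The
        // fix is std::atomic<int> or a mutex - and the same hazard exists
        // in every language that exposes shared-memory threads.
    }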
by dysoco on 1/20/13, 11:20 PM
Talking about C, well... it's unsafe by nature, let's face it.
by X4 on 1/21/13, 12:43 PM
C has been around since 1972. It is one of the most widely used programming languages of all time, and there are very few computer architectures for which a C compiler does not exist. Many later languages have borrowed directly or indirectly from C, including C#, D, Go, Java, JavaScript, Limbo, LPC, Perl, PHP, Python, and Unix's C shell.
by jbert on 1/21/13, 11:06 AM
Don't all languages have "don't do that" corners, even if they are just bugs in the current versions of the compilers/interpreters?
C and C++ at least tell you where some of these are, so actually the situation is better?
by malkia on 1/21/13, 5:39 AM
Laws are to be broken, and C/C++ is the wild west in this respect - cowboy programming is welcome.
And I love it :)
by nib952051 on 1/21/13, 8:44 AM
omfg:))
by cmccabe on 1/21/13, 12:50 AM
Why is this on HN?
Use the right tool for the job. Sometimes that's C or C++, sometimes it's not.
by nnq on 1/21/13, 4:12 AM