by geoffhill on 3/27/13, 6:41 PM with 168 comments
by mncolinlee on 3/27/13, 8:29 PM
Automatic parallelization is very possible. The problem is that it tends to be less efficient. A decent developer can often do a better job than the compiler by performing manual code restructuring. The compiler cannot always determine which changes are safe without pragmas to guide it. With that said, our top compiler devs did some amazing work adding automatic parallelization to some awful code.
We inevitably sold our supercomputers because we had application experts who would manually restructure the most mission-critical code to fit cache lines and fill the vectors. Most other problems would perform quite adequately with the automatically-generated code.
What this article lacks is a description of why Erlang is better suited to writing parallel code than all the other natively parallel languages like Unified Parallel C, Fortran 2008, Chapel, Golang, etc. There are so many choices, and many have been around for a long, long time.
by konstruktor on 3/27/13, 8:33 PM
The first part of this statement is plain wrong. Single-thread performance has improved a lot due to better CPU architecture. Look at http://www.cpubenchmark.net/singleThread.html and compare CPUs with the same clock rate, e.g. 2.5 GHz: an April 2012 Intel Core i7-3770T scores 1971 points while a July 2008 Intel Core2 Duo T9400 scores 1005 points. That is almost double the score in under four years. Of course, one factor is the larger cache that the quad core has, but that refutes Armstrong's point that the multicore age is bad for single-thread performance even more.
For exposure to a more balanced point of view, I would highly recommend Martin Thompson's blog mechanical-sympathy.blogspot.com. It is a good starting point on how far single-threaded programs can be pushed and where multi-threading can even be detrimental.
Also, I think that fault tolerance is where Erlang really shines. More than a decade after OTP, projects like Akka and Hystrix are finally venturing in the right direction.
by daleharvey on 3/27/13, 8:16 PM
Erlang solved a problem really well over 20 years ago; it's the sanest language by far that I have used when dealing with concurrent programming (I haven't tried Go or Dart yet), and I owe a lot of what I know to the very smart people building Erlang.
However, it has barely evolved in the last 10 years. Will 2013 be the year of the structs? (I doubt it.) Every new release comes with some nice-sounding benchmark about how much faster your programs will run in parallel, and there is never a mention of what's actually important to programmers: a vibrant ecosystem and community, language improvements that don't make it feel like you are programming in the '80s, better constructs for reusing and packaging code in a sane way.
It's fairly trivial in most languages to get the concurrency you need; I think Erlang is solving the wrong problem in 2013.
by dvt on 3/27/13, 8:45 PM
This kind of belligerent rhetoric (we're solving the right problems, everyone else is dumb) is the kind of drivel that gives momentum to language zealots that think language X is better than language Y.
I contributed to Google Go in its early phases, and I was naïve and really believed that Go was the "next big thing." But it turned out to be yet another general-purpose language with some things that were really interesting (goroutines, garbage collection, etc.) but some things that were just same-old same-old. Now I'm editing a book about Dart, and I've since lost my enthusiasm for new languages; I can already see that Dart solves some problems but often creates new ones.
And in a lot of ways Erlang sucks, too. The syntax is outdated and stupid (Prolog lol), it has weird type coercion, memory management isn't handled that well, and there's plenty more. Of course, since Facebook uses it, people think it's a magic bullet (Erlang is to Facebook as Python is to Google).
The article also forces readers to attack a straw man. Oftentimes, algorithms simply cannot be parallelized. The Fibonacci sequence is a popular example (unless you use something called a prefix sum -- but that's a special case). So in many ways, the rhetorical question posed by the article -- "but will your servers with 24 cores be 24 times faster?" -- is just silly.
by djvu9 on 3/27/13, 9:38 PM
It is understandable, though. Just think about how many resources have been put into the development of the Erlang VM and the runtime/libraries (OTP), and compare it with the JVM/JDK. There is just no magic in software development. When talking about high concurrency and performance, the essential things are data layout, cache locality, CPU scheduling, etc., for your business scenario, not the language.
by eridius on 3/27/13, 7:36 PM
by eksith on 3/27/13, 9:11 PM
Unlike other general-purpose languages (like, say, C++ or C#), it doesn't allow me to grasp what's happening after staring at it for 30 seconds. This is the same problem I have with Lisp.
Maybe I'm just dyslexic, but these rhetoric pieces for one language or another that say it's concurrent (which it is), fast (obviously), more C than C, will bring the dead to life, create unicorns and other wonderful, fantastic things that I'm sure are all true, just don't seem to be capable of passing into my grey matter.
You know another thing all these amazing super power languages haven't been able to do that even a crappy, broken, in many ways outright wrong, carcinogenic etc... etc... language like even PHP has allowed me to do? Ship in 48 hours.
Before I get flamed: I already tried that with Nitrogen (http://nitrogenproject.com). It didn't end well, but maybe it will work for someone already familiar with Erlang.
It's like you've written the Mahabharata; it's a masterpiece and one of the greatest epics of all time. Unfortunately, it's written in Sanskrit.
by acdha on 3/27/13, 11:31 PM
Ignoring that point, this seems like a poor point for comparison, as it's a trivially parallelized task: zlib operates on streams and shouldn't have any thread contention. There's very little information in the description, but unless there are key details missing, this doesn't sound like a problem where Erlang has much of interest to add. The most interesting aspect would be the relative measures for implementation complexity and debugging.
by Uchikoma on 3/28/13, 6:51 AM
1. Erlang has locks and semaphores [1]: receive is a lock, actors are semaphores. Erlang chose a one-semaphore/one-lock-per-process model.
2. Erlang scales better not because it is lock-free (see above), but because it makes async easy compared to other languages.
3. Async prevents deadlocks, not Erlang being lock-free (see above).
Some four-year-old reading: http://james-iry.blogspot.de/2009/04/erlang-style-actors-are...
by splicer on 3/28/13, 12:39 AM
> The problem that the rest of the world is solving is how to parallelise legacy code.
As a member of the rest of the world, I can assure you that I'm not trying to solve either of these problems. :p
by InclinedPlane on 3/28/13, 10:40 AM
It'll be the same over the next 20 years as well.
I predict that we'll see a lot of technological leaps which will serve as much to maintain the ability to run "old code" in new and interesting ways as to enable a brave new world of purpose-built languages.
In the next few decades we'll see advances in micro-chip fabrication and design as well as memory and storage technology (such as memristors) which will result in even handheld battery powered devices having vastly more processing power than high-end workstations do today.
Is that an environment in which one seeks to trade programmer effort and training in order to squeeze out the maximum possible efficiency from hugely abundant resources? Seems unlikely to me, to be honest.
Indeed, it seems like the trend of relying on even bloatier languages (like Java) will continue. Do you think anyone is going to seriously consider rewriting the code for a self-service point-of-sale terminal in Erlang in order to improve performance? That's not the long pole, it never has been, and it's becoming a shorter and shorter pole over time.
In the future we'll be drowning in processor cycles. The most important factor will very much not be figuring out how to use them most efficiently, it'll be figuring out how to maximize the value of programmer time and figuring out how to use any amount of cycles to provide value to customers effectively.
(I think that advancements in core, fundamental language design and efficiency will happen and take hold in the industry, but mostly via back-door means and blue sky research, rather than being forced into it through some impending limitation due to architecture.)
by meshko on 3/27/13, 8:20 PM
by kamaal on 3/28/13, 4:13 AM
The problem with these languages remains unchanged. The syntax is so strange and esoteric that learning to do even basic things with them will likely require months of learning and practice. This fact alone makes them impractical for 99% of all programmers in the world.
No serious company will ever use a language like Erlang or Lisp until it's absolutely unavoidable (and the situation gets completely unworkable without it). Because everyone knows the number of skilled people in the market who know Erlang is close to zero. And those who will work for you are going to be crazy expensive. Not to mention the nightmare of maintaining code in this kind of language for years. There is no friendly documentation or easy way an ordinary programmer can use to learn these languages. Nor is there anywhere near the level of reusable solutions available for these languages that there is for mainstream C-based languages.
In short using these languages attracts massive maintenance nightmares.
The concurrency/parallelisation problem today is very similar to what memory management was in the '80s and '90s. Programmers hate to do it themselves. These are the sort of things that the underlying technologies (compilers/VMs) are supposed to do for us.
I bet most of these super-power languages will watch other, pragmatic languages like Perl/Python/Ruby/PHP eat their lunch over the next decade or so, once those languages figure out more pragmatic means of achieving these goals.
by zzzeek on 3/27/13, 8:06 PM
vs three paragraphs later
> Alexander’s talk gave us a glimpse of the future. His company concurix is showing us where the future leads. They have tools to automate the detection of sequential bottlenecks in Erlang code.
Why is that not a contradiction? Because an Erlang program isn't "sequential" to start with?
by surferbayarea on 3/27/13, 9:27 PM
by dap on 3/28/13, 1:43 AM
by alexchamberlain on 3/27/13, 8:38 PM
So, there was an error in someone's code which you rewrote without the error, and it ran faster? Well done, detective...
We need more parallel programs, no doubt, but we need more, better programmers, who are willing to write in compiled languages with low-overhead.
by artsrc on 3/27/13, 11:53 PM
Erlang allows you to create concurrent programs, i.e.: programs where the result is schedule dependent.
One of the right problems is allowing people to write deterministic parallel programs. This gives you speed (from parallelism) with reliability (from determinism).
by spenrose on 3/27/13, 7:50 PM
- break program into function calls that match the steps that can happen in parallel
- wrap the function calls in messages passed over the network
+ i.e. process(thing) -> post(thing)/poll_for_things()
- split the sender and receiver into different processes
OF COURSE there are big advantages to using a language (Erlang) or a heavyweight framework (map/reduce) designed for concurrency. Rolling your own process-centric concurrency is a different set of tradeoffs, not a panacea. But it's worth considering for some problems.
by uwiger on 3/29/13, 8:30 PM
I've spent many years developing and reviewing products in the telecoms realm, and have found that failing to realize when something like Erlang brings life-saving concepts to your project may well make the difference between delivering on time and disappearing into a black hole of endless complexity. It's not for everyone, but when it fits, boy does it help!
by damian2000 on 3/28/13, 1:34 AM
by ternaryoperator on 3/29/13, 6:53 AM
Donald Knuth: "During the past 50 years, I’ve written well over a thousand programs, many of which have substantial size. I can’t think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading."
by fulafel on 3/27/13, 8:32 PM
by jlebrech on 3/28/13, 11:38 AM