by meirish on 2/22/13, 3:48 PM with 139 comments
by jbert on 2/22/13, 5:16 PM
I found the (good) result that I could spawn a new goroutine for each incoming connection with minimal (~4k) overhead. This is pretty much what you'd expect, since a goro just needs a page for its stack if it's doing no real work. I had something like 4 VMs each making ~30k conns (from one process) to the central go server, for something like 120k conns total.
I found one worrying oddity however. Resource usage would spike up on the server when I shut down my client connections (e.g. ctrl-C of a client proc with ~30k conns).
Reasoning about it a bit, I think this is due to the go runtime allocating an OS thread for each goro as it goes through the blocking socket close() call. I think it has to do this to keep the other goroutines running. So I end up with hundreds of OS threads (each lives only long enough to finish its close(), but a lot of them are in flight at once).
Can anyone comment:
- is this guess as to the problem likely to be correct?
- is this "thundering herd" a problem in practice?
- are there ways to avoid this? (Other than not using a goro-per-connection, which I think is the only idiomatic way to do it?)
My situation was artificial, but I could well imagine a case where losing, say, a reverse proxy causes a large number of connections to want to close() at once, and it would be a shame if that overwhelmed the server.
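One possible mitigation (a sketch, not something from this thread): keep the goroutine-per-connection model but bound how many goroutines may sit in a blocking close() at once, using a buffered channel as a semaphore. The limit of 64 and port 9000 below are arbitrary assumptions:

    package main

    import "net"

    // closeSem bounds how many goroutines may be blocked inside Close() at
    // once, so a mass disconnect fans out into at most 64 OS threads.
    var closeSem = make(chan struct{}, 64) // 64 is an arbitrary limit

    func closeConn(c net.Conn) error {
        closeSem <- struct{}{}        // acquire a slot; parks the goroutine, not an OS thread
        defer func() { <-closeSem }() // release the slot once Close returns
        return c.Close()
    }

    func handleConn(c net.Conn) {
        defer closeConn(c)
        buf := make([]byte, 4096)
        for {
            if _, err := c.Read(buf); err != nil {
                return // peer went away; the deferred bounded close runs here
            }
        }
    }

    func main() {
        ln, err := net.Listen("tcp", ":9000")
        if err != nil {
            panic(err)
        }
        for {
            c, err := ln.Accept()
            if err != nil {
                continue
            }
            go handleConn(c) // the usual goroutine-per-connection pattern
        }
    }

Goroutines waiting on the channel are simply parked by the scheduler, so only the (at most 64) goroutines actually inside the close() syscall should need dedicated OS threads; whether the spike matters in practice is a separate question.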
by jgrahamc on 2/22/13, 5:15 PM
This is very true. Go is a pleasure to write. In fact, it's such a pleasure that when you hit something that wasn't really well designed, it's horrid.
by btown on 2/22/13, 6:31 PM
Nonblocking I/O isn't just a "best practice" in the sense that consistent indentation is a "best practice"; it's a core tenet of the Node ecosystem. Sure, you could write a Haskell library by putting everything in mutable-state monad blocks, porting over your procedural code line for line. It's allowed by the language, just as blocking is allowed by Node. But the whole point of Haskell is to optimize the function-composition use case.
The Node community has the benefit of designing all its libraries from scratch with this tenet in mind, so in practice you never/rarely need to look for "stinkers" unless they're documented to be blocking. And unless they're using badly-written blocking native code, you can just grep for `Sync` to see any blocking calls.
by jacobmarble on 2/22/13, 4:55 PM
Go: No one knows this language, there's a small-but-growing community, there are enough libraries to get a lot done, and you get even better performance
Java: They are paying me (money!) to write in this language
by burke on 2/22/13, 5:02 PM
by stcredzero on 2/22/13, 6:04 PM
Major point for saving man hours right there.
by SeanDav on 2/22/13, 5:15 PM
Personally I hope that Go does just as well, if not a lot better. I am a bit of a fan of both.
by islon on 2/22/13, 6:24 PM
by jamwt on 2/22/13, 10:50 PM
https://gist.github.com/jamwt/5017172
Haskell was ghc 7.6.1, built with "ghc --make -O2".
Go was go1.0.2, built with "go build".
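The actual programs are in the gist above; for a rough idea of scale, the Go side of a benchmark like this is typically only a few lines of net/http. A purely hypothetical sketch (not the gist's code), built with plain "go build":

    // Hypothetical sketch of a minimal Go benchmark server; the code actually
    // measured is in the linked gist.
    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }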
by hrwl on 2/22/13, 5:27 PM
by mjijackson on 2/22/13, 6:26 PM
First, go:
$ ab -c 100 -n 10000 http://localhost:8000/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: localhost
Server Port: 8000
Document Path: /
Document Length: 1048576 bytes
Concurrency Level: 100
Time taken for tests: 10.085 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 10489017384 bytes
HTML transferred: 10487857152 bytes
Requests per second: 991.62 [#/sec] (mean)
Time per request: 100.846 [ms] (mean)
Time per request: 1.008 [ms] (mean, across all concurrent requests)
Transfer rate: 1015729.90 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 2 0.8 2 6
Processing: 21 99 5.6 98 137
Waiting: 1 3 2.7 2 41
Total: 25 101 5.6 101 139
Percentage of the requests served within a certain time (ms)
50% 101
66% 102
75% 103
80% 103
90% 105
95% 106
98% 108
99% 112
100% 139 (longest request)
Secondly, node.js:
$ ab -c 100 -n 10000 http://localhost:8000/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: localhost
Server Port: 8000
Document Path: /
Document Length: 1048576 bytes
Concurrency Level: 100
Time taken for tests: 15.765 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 10487558651 bytes
HTML transferred: 10486808576 bytes
Requests per second: 634.31 [#/sec] (mean)
Time per request: 157.653 [ms] (mean)
Time per request: 1.577 [ms] (mean, across all concurrent requests)
Transfer rate: 649639.92 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.7 1 11
Processing: 2 156 34.7 159 272
Waiting: 1 47 29.7 42 136
Total: 2 157 34.7 161 273
Percentage of the requests served within a certain time (ms)
50% 161
66% 174
75% 182
80% 187
90% 198
95% 209
98% 221
99% 227
100% 273 (longest request)
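The server code isn't shown here; as a purely hypothetical sketch, a Go handler producing the 1,048,576-byte Document Length reported above might look like the following (the fixed 1 MB buffer and port 8000 are assumptions matching the ab output, not the code that was actually run):

    // Purely hypothetical reconstruction: serve a fixed 1 MB payload on :8000,
    // matching the Document Length and port in the ab output above.
    package main

    import (
        "log"
        "net/http"
    )

    var payload = make([]byte, 1<<20) // 1048576 bytes

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write(payload)
        })
        log.Fatal(http.ListenAndServe(":8000", nil))
    }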
Not only does go serve the traffic more quickly, it also has a much lower standard deviation in response times. Impressive.
by dpweb on 2/22/13, 5:21 PM
by tferris on 2/23/13, 8:17 AM
But what I don't like: the negativity toward Node and the omission of some facts. In the replies to the original post, one person benchmarked Node twice; once it was significantly faster (v0.6) and once it was the same speed (v0.8). So why does mjijackson get such different results at the top of this thread? Maybe we should also test on real servers rather than on a MacBook Air, and this is a micro-benchmark that may not reflect reality well. Don't get me wrong, I appreciate any benchmarking between languages, but please do it properly and don't turn it into propaganda. Further, Go's package manager seems nice, but it does NOT support versioned dependencies. How are you supposed to use that in a serious production environment? Maybe versioning will come (though it's unclear how, without losing the current flexibility), but this is a serious gap, and not an alternative to an established server environment except for small services.
EDIT: Downvoting is silly; propaganda won't help the Go community gain credibility. Better to do some further benchmarks, otherwise this post/thread is full of misinformation and should be closed.
by WhaleFood on 2/22/13, 8:58 PM
by dgudkov on 2/24/13, 4:30 AM
Do any HNers use Go with websockets? What package do you use?
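For illustration only, not an answer given in the thread: one widely used option around that time was the go.net websocket package, and a minimal echo server with it looks roughly like this (the /echo path and port 8080 are arbitrary; the import path shown is the later golang.org/x/net one):

    // Illustrative websocket echo server using golang.org/x/net/websocket
    // (the 2013-era import path was code.google.com/p/go.net/websocket).
    package main

    import (
        "io"
        "log"
        "net/http"

        "golang.org/x/net/websocket"
    )

    func echo(ws *websocket.Conn) {
        io.Copy(ws, ws) // *websocket.Conn is an io.ReadWriter, so this echoes frames back
    }

    func main() {
        http.Handle("/echo", websocket.Handler(echo))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }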
by stesch on 2/22/13, 10:06 PM
by tferris on 2/22/13, 8:15 PM
by babuskov on 2/23/13, 1:10 AM
by logn on 2/23/13, 12:23 PM