from Hacker News

How we’ve made Raptor fast

by triskweline on 11/10/14, 4:56 PM with 71 comments

  • by gnufied on 11/11/14, 2:50 AM

    Just some glaring inconsistencies that I found:

    1.

    > It’s less work for the user. You don’t have to setup Nginx. If you’re not familiar with Nginx, then using Raptor means you’ll have one tool less to worry about.

    > For example, our builtin HTTP server doesn’t handle static file serving at all, nor gzip compression.

    Sounds like I would need nginx (or another frontend server) anyway?

    2. > By default, Raptor uses the multi-process blocking I/O model, just like Unicorn.

    > When we said that Raptor’s builtin HTTP server is evented, we were not telling the entire truth. It is actually hybrid multithreaded and evented.

    So, which is it? I assume the default is multi-process + evented, but a paid version offers multithreaded + evented? If so, isn't Unicorn's model of multi-process + blocking I/O pretty good as well, since the OS becomes the load balancer in that case (sketched after this comment)?

    Overall it seems they wrote a very fast web server. Kudos for that! But I don't think the web server was ever the problem for Rack/Ruby apps. Still on the fence with this one until more details emerge. :-)
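
    For reference, a minimal sketch of the multi-process blocking-I/O model described above (this is a guess at the general pattern, not Raptor's or Unicorn's actual code; the port, worker count, and canned response are arbitrary): the master opens one listening socket and forks workers that all block in accept() on it, so the kernel hands each incoming connection to exactly one idle worker.

        require "socket"

        WORKERS  = 4
        RESPONSE = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"

        # The master process opens the listening socket exactly once...
        listener = TCPServer.new("0.0.0.0", 8080)

        # ...then forks workers that all block in accept() on that shared socket,
        # so the kernel distributes connections across the worker processes.
        WORKERS.times do
          fork do
            loop do
              client = listener.accept
              client.readpartial(4096) rescue nil   # read (and ignore) the request
              client.write(RESPONSE)
              client.close
            end
          end
        end

        Process.waitall

    The trade-off is that each worker handles one request at a time, so a slow request ties up that whole process.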

  • by bratsche on 11/11/14, 4:26 AM

    Does it seem weird to anyone else that they're doing all this marketing for a project that's supposedly going to be open source? Why all the suspense, why not just release it? Or, if it's not ready yet, why not just finish it and then start promoting it?
  • by bithive123 on 11/11/14, 3:47 AM

    As someone working in an enterprise environment, I've sort of lost interest in this breed of Rack server now that I've gotten used to having SSO and LDAP authorization available via Apache modules, to name a few features. Apache allows me to accommodate all sorts of requirements like setting up vhosts that require authentication except on the LAN, or vhosts which allow members of certain groups to access an internal app via reverse proxying.

    I don't mean to be negative; other posters have that angle covered. But I would comment that this ongoing proliferation of prefork backends is hardly disruptive to organizations that have already made significant commitments to Ruby web apps. Our Apache/Passenger servers aren't going away anytime soon.

  • by rarepostinlurkr on 11/11/14, 5:35 AM

    This is Phusion Passenger +1. As was pointed out in a thread several months ago, the DNS resolves to the same place. The writing style is similar, and the feature set is far too mature for a 1.0 product.
  • by randall on 11/11/14, 3:06 AM

    How would one use this on Heroku? Allegedly it doesn't support serving static files... and per Heroku, they require your app server to serve them by default.

    https://github.com/heroku/rails_12factor#rails-4-serve-stati...

    Any ideas?

  • by fiatmoney on 11/11/14, 1:54 AM

    "You will need 5000 processes (1 client per process). A reasonably large Rails app can consume 250 MB per process, so you’ll need 1.2 TB of RAM."

    Quibble: most multi-process web servers use fork() for child processes, which means they can share identical memory pages via copy-on-write.
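
    A hedged illustration of that point (BIG_APP and the worker count are made-up stand-ins, not anything from the article): the parent loads the app once, and each fork()ed worker starts with a copy-on-write view of those pages, so identical pages count against physical RAM only once until a worker writes to them.

        # Made-up stand-in for a preloaded Rails app: roughly a million small objects.
        BIG_APP = Array.new(1_000_000) { |i| "record-#{i}" }

        WORKERS = 5

        WORKERS.times do
          fork do
            # Right after fork() this worker shares BIG_APP's pages with the parent
            # via copy-on-write; only pages it actually writes to get duplicated.
            sleep 30
          end
        end

        Process.waitall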

  • by hurrycane on 11/11/14, 1:50 AM

    I strongly believe that this is the next version of Phusion Passenger.
  • by triskweline on 11/10/14, 4:57 PM

    Spoiler: Insane amounts of low-level optimization.
  • by jonaphin on 11/11/14, 1:59 AM

    Congratulations on Raptor, I'll definitely give it a whirl. Regarding static asset serving, I'm fairly certain serving them through the application server is often not the way to go anyway.
  • by resca79 on 11/11/14, 7:04 AM

    Raptor seems pretty interesting, but personally I don't like its marketing approach, and I'm not the only one.

    On Twitter, some Ruby heroes say: "Raptor is 4x faster than existing Ruby web servers for hello world applications" :)

    The strong proclamations in favour of an open source project are a little bit strange when the code is not yet released.

    However, I hope that all the graphs on the home page are real, for the sake of Ruby programmers' happiness.

  • by covi on 11/11/14, 4:58 AM

    The section "Hybrid evented/multithreaded: one event loop per thread" suggests that the whole model is basically SEDA [1]. I'm surprised the article does not directly reference the project/paper. (A rough sketch of the per-thread-loop pattern follows below.)

    [1] http://www.eecs.harvard.edu/~mdw/papers/seda-sosp01.pdf
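
    A rough sketch of what "one event loop per thread" can look like in Ruby, assuming a plain IO.select loop (this is a guess at the general pattern, not Raptor's implementation; the port and thread count are arbitrary): each thread owns its own set of connections and multiplexes them alongside the shared listener.

        require "socket"

        RESPONSE = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
        listener = TCPServer.new("0.0.0.0", 8080)

        threads = 4.times.map do
          Thread.new do
            clients = []   # connections owned by this thread's event loop
            loop do
              ready, = IO.select([listener] + clients)
              ready.each do |io|
                if io.equal?(listener)
                  # Several loops may wake for the same connection; only one accept succeeds.
                  sock = listener.accept_nonblock(exception: false)
                  clients << sock unless sock == :wait_readable
                elsif io.eof?
                  clients.delete(io)
                  io.close
                else
                  io.readpartial(4096)   # read (and ignore) the request
                  io.write(RESPONSE)
                end
              end
            end
          end
        end

        threads.each(&:join)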

  • by jrk on 11/11/14, 7:28 AM

    The "hybrid" IO architecture is historically known as AMPED (asynchronous multi-process event driven): https://www.usenix.org/legacy/event/usenix99/full_papers/pai...
  • by simonmales on 11/11/14, 9:19 AM

    Puma says it runs best when using Rubinius or JRuby, but from my limited understanding, not everything can run on those implementations.

    Are there any giveaways in the blog that wouldn't allow Raptor to run on Rubinius or JRuby?

  • by jcampbell1 on 11/11/14, 5:19 AM

    Does anyone know if the 60.000 in the chart means 60 or 60,000?
  • by Ono-Sendai on 11/11/14, 1:18 AM

    Not bad work. Seems somewhat futile though, since the gains will probably be dwarfed by the actual Ruby application code, database accesses, etc.
  • by mrinterweb on 11/11/14, 6:12 AM

    I am all for a faster Ruby application server. If Raptor can stand behind its claims on November 25th, that will be the best birthday present I could get.
  • by corford on 11/11/14, 12:47 AM

    Slightly OT, but does uWSGI feature much in the Ruby world? /from a curious Python guy
  • by swrobel on 11/11/14, 3:43 AM

    Does it support SPDY?
  • by coned88 on 11/10/14, 7:25 PM

    What is the point of these instead of just using a web server like Apache or nginx?
  • by kondro on 11/11/14, 1:30 AM

    Does this mean that the majority of the performance improvements over Puma actually come from using the considerably less battle-tested PicoHTTPParser rather than the Mongrel parser?

    Of course, this may be mitigated by the fact that any reasonable production environment will have a web server layer in front of the app server(s) anyway, for load balancing, fail-over, and exploit detection/prevention.