by ateevchopra on 10/17/16, 6:28 PM with 48 comments
by btown on 10/17/16, 7:08 PM
> After peeling through the React codebase we discovered React's mountComponent function. This is where the HTML markup is generated for a component. We knew that if we could intercept React's instantiateReactComponent module using a require() hook, we could inject our optimization without needing to fork React. We keep a Least-Recently-Used (LRU) cache that stores the markup of rendered components (replacing the data-reactid appropriately).
> We also implemented an enhancement that templatizes the cached rendered markup to allow for more dynamic props. Dynamic props are replaced with template delimiters (i.e. ${ prop_name }) during the React component rendering cycle. The template is then compiled, cached, executed, and the markup is handed back to React. For subsequent requests the component's render(..) call is short-circuited with an execution of the cached compiled template.
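The quoted approach can be sketched roughly like this: an LRU cache holds each component's markup with dynamic props swapped for `${prop}` delimiters, and later "renders" just fill in the template. All names here (LRUCache, renderGreeting, the Greeting markup) are illustrative, not the actual code being described.

```javascript
// Tiny LRU cache built on Map (Map iterates in insertion order,
// so the first key is always the least recently used).
class LRUCache {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);     // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.limit) {
      // Evict the least-recently-used (first) entry.
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const templateCache = new LRUCache(100);

// A stand-in for a component render: build the delimited template
// once, cache it, and on every call substitute the real prop values.
function renderGreeting(props) {
  let template = templateCache.get('Greeting');
  if (!template) {
    // First render: markup with ${prop} delimiters in place of
    // dynamic values goes into the cache.
    template = '<div class="greeting">Hello, ${name}!</div>';
    templateCache.set('Greeting', template);
  }
  // "Execute" the cached template: fill in this request's props.
  return template.replace(/\$\{(\w+)\}/g, (_, key) => props[key]);
}

console.log(renderGreeting({ name: 'Alice' }));
console.log(renderGreeting({ name: 'Bob' })); // short-circuited via cache
```

The real implementation would key the cache per component type and re-run the templating inside React's render cycle, but the cache-then-substitute shape is the same.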
by misterbowfinger on 10/17/16, 7:05 PM
Also.... 20ms with caching the pages in Redis? That sounds really, really slow. There's definitely something else going on.
by prashnts on 10/17/16, 7:29 PM
> Whenever you deploy, new chunk hash for js and css files gets generated. This means if you’re storing the whole HTML string in the cache, it’ll become invalid with the deployment. Hence, whenever you deploy, the redis db needs to be flushed completely.
These arguments sound like over-engineered "solutions" by somebody who did not do their homework. To speed up your SSR, render components, not the _whole_ page. The vast majority of your page is going to remain constant: the head, navigation, footer. Caching these components as partials will shrink your Redis store by a very, _very_ large margin. Once they start caching hundreds of thousands of HTML pages, the Redis server will start swapping content from memory to disk. There, you've just defeated the whole point of using Redis.
Also, your users are _still_ going to see the 810ms latency the first time they access your service. How often do you think they'll reload right after the page loads? And once the cache is invalidated -- which I suppose would happen frequently -- the _visible_ latency is still high.
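The component-level caching the parent suggests could look roughly like this: render the static shell partials (head, nav, footer) once and reuse them, and only render the dynamic body per request. Everything here (names, markup, the `cachedPartial` helper) is a hypothetical sketch, not code from the article.

```javascript
// Cache per-component partials rather than whole pages. Static
// shell components are rendered once; only the dynamic body is
// rendered on every request.
const partialCache = new Map();

function cachedPartial(name, render) {
  if (!partialCache.has(name)) {
    partialCache.set(name, render()); // render once, keep forever
  }
  return partialCache.get(name);      // reuse on later requests
}

// Static partials: identical for every user and every request,
// so caching them avoids almost all of the per-page storage.
const renderHead = () => '<head><title>Shop</title></head>';
const renderNav = () => '<nav>Home | Deals | Cart</nav>';
const renderFooter = () => '<footer>2016</footer>';

// Per-request page assembly: only the body is rendered fresh.
function renderPage(user) {
  const head = cachedPartial('head', renderHead);
  const nav = cachedPartial('nav', renderNav);
  const footer = cachedPartial('footer', renderFooter);
  const body = `<main>Welcome back, ${user}</main>`; // never cached
  return `<html>${head}<body>${nav}${body}${footer}</body></html>`;
}
```

Because only a handful of shared partials are stored instead of one entry per page URL, the cache stays small and deploys only need to drop a few keys.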
by mgallowa on 10/17/16, 7:02 PM
by sciurus on 10/17/16, 7:06 PM
by wmf on 10/17/16, 7:07 PM
by yazaddaruvala on 10/17/16, 7:04 PM
You can also invest in a CDN. Now we have a React.js SSR with 0ms server response time! :)
by jeffnappi on 10/17/16, 7:57 PM
by schmrz on 10/17/16, 8:33 PM
by merb on 10/17/16, 7:15 PM
that's really, really bad for generating HTML (and even sending it to the user).
I can do the whole damn thing in 20ms or less.
> The average response time fell to 20 ms !
yep, in Java/Go/whatever you don't need the cache; with it your avg response would drop even further!
by petetnt on 10/17/16, 9:02 PM
[0]: https://github.com/aickin/react-dom-stream
[1]: https://www.youtube.com/watch?v=PnpfGy7q96U
[2]: https://github.com/facebook/react/issues/6420
by benguild on 10/18/16, 4:44 AM
As far as I know, as long as the content is embedded in JSON/JS when the page loads, it's fine to then "render" it with JavaScript. It's 2016 and Google started crawling JS websites a while ago.
However, if you fetch it with AJAX after the page loads, Google won't see it, because the crawler doesn't necessarily follow AJAX calls or wait around 810ms for them to return. It'll most likely only render the bundled content.
You can use the "fetch as Google" tool in Google Webmaster Tools to try this out for yourself.
by agnivade on 10/18/16, 9:38 AM
Umm.. nice work? I guess..
by petercue on 10/17/16, 8:57 PM
by amelius on 10/17/16, 8:22 PM
by angry-hacker on 10/17/16, 9:53 PM
by geggam on 10/17/16, 7:33 PM
My browser would thank you