by achalkley on 2/16/15, 10:49 PM with 94 comments
by mikecmpbll on 2/17/15, 9:26 AM
I like the desktop site, I like the mobile site, and I like and appreciate that you're giving us an insight into your tech stack and how content is published.
Cheers beeb.
by Twirrim on 2/17/15, 7:01 AM
Taking a quick spin through YSlow in the mobile browser suggests they've got a number of areas to improve to make time to screen significantly better on mobile devices (even on a fast connection here it took several seconds to even start showing me content, and several more before everything had finally loaded).
Given the worldwide reach of the BBC, expecting high-speed, low-latency networks seems like a bad idea. In the US, 3G and 4G typically see 90-100ms of latency per request. Mobile YSlow reports 21 JavaScript files on the page alone. IIRC the Android browser typically limits itself to 4 connections retrieving content, so that's (21/4 * 100ms) 525ms lost to latency just requesting the JavaScript, let alone actually downloading it and the overhead of parsing and executing it. It's also pulling in content from 21 different websites, so at a bare minimum that's 21 DNS lookups (with the same latency penalty!). A bunch of those are made just to load a single piece of content, which is a little crazy.
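To put numbers on that back-of-the-envelope estimate, here is the same arithmetic as a tiny Node.js sketch; the connection limit and round-trip time are assumptions for illustration, not measurements:

    // Rough estimate of latency lost to queued requests, using the
    // assumed figures from above (not measured values).
    var scripts = 21;       // JavaScript files reported by YSlow
    var connections = 4;    // assumed concurrent connections in the Android browser
    var domains = 21;       // distinct hosts the page pulls content from
    var rttMs = 100;        // typical US 3G/4G round-trip latency

    // 21 scripts over 4 connections means ~5.25 "rounds" of requests,
    // each paying one round trip before any bytes are downloaded.
    var scriptQueueingMs = (scripts / connections) * rttMs;  // 525ms

    // Worst case, each new domain also costs a DNS lookup at similar
    // latency (ignoring caching and parallel resolution).
    var dnsWorstCaseMs = domains * rttMs;  // 2100ms

    console.log(scriptQueueingMs + 'ms queued, up to ' + dnsWorstCaseMs + 'ms of DNS');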
Don't get me wrong, the site looks good. It's just that for a 'mobile-first' experience, they seem to be missing the all-important time to screen and giving the mobile user a lot of work to do.
A useful tool from Google for analysing the site for both mobile and desktop: https://developers.google.com/speed/pagespeed/insights/?url=...
and a good talk from last year's Google I/O conference on optimising the mobile experience: https://www.youtube.com/watch?v=WrA85a4ZIaM
by meesterdude on 2/17/15, 2:10 PM
> Rather than using PHP or Java (that was the requisite of the Forge platform), we have chosen a non-blocking framework, NodeJS with the Express framework. This allows us to serve more simultaneous requests, increasing the performance of the application.
I don't doubt this is true, but it's worth noting that you can get good performance out of a "blocking" framework too. Node.js does better than others in some situations, but here it's not a snowflake, and in some regards it's worse.
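For anyone unfamiliar with what "non-blocking" buys you here, a minimal sketch looks something like this (illustrative only, not the BBC's actual code; the route and upstream URL are made up). The response is sent from a callback, so the single Node.js event loop is free to accept other requests while the I/O is in flight:

    var express = require('express');
    var https = require('https');

    var app = express();

    app.get('/module/:id', function (req, res) {
      // Hypothetical upstream content API; any async I/O behaves the same way.
      https.get('https://example.org/content/' + req.params.id, function (upstream) {
        var body = '';
        upstream.on('data', function (chunk) { body += chunk; });
        upstream.on('end', function () { res.send(body); });
      }).on('error', function () {
        res.status(502).send('Upstream error');
      });
    });

    app.listen(3000);

A threaded "blocking" framework can get similar concurrency by giving each request its own worker, which is the point being made above.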
But I will criticize the priorities: I think this is too much fad and not enough practicality. The experience is notably worse than the old site, and it seems like they threw buzzwords at their problems instead of really crafting a solid solution.
Coming from where they were, maybe this makes sense: they needed to overshoot their previous platform. But I think they'll find it problematic in the long run, change some of their approaches, and release a new site, and that will be a good platform that serves their needs for a good while.
by alexcason on 2/17/15, 8:43 AM
class="distinct-component-group container-buzzard"
class="distinct-component-group container-pigeon"
class="distinct-component-group container-macaw"
class="robin sparrow-container"
class="sparrow-container sparrow-columns"
by Domenic_S on 2/17/15, 4:35 AM
by super-serial on 2/17/15, 5:29 AM
Their manager said "Your goal is to write a blog post mentioning each one of these technologies, and add links so it appears in dark-bold-blue text. If we don't have a project using a trendy tech-stack... you bust your ass and get something up and running."
The engineers balked... "but why?"
Then the boss said, "I'm tired of being ignored in 'Who's Hiring' on Hacker News. We make this article and they'll all come begging us for jobs. At the BBC we don't wait for news to come to us, we make the news. And THAT'S what we call journalism, kids."
by ndreckshage on 2/17/15, 12:46 PM
OP - can you shed any light on how this is actually impacting your performance? Or on things you had to do to get around the problem (e.g. details of the 'module level' cache with Redis, etc.)?
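For readers wondering what a "module level" cache with Redis might look like in a Node.js app, here's a minimal, hypothetical sketch; the node_redis client, key scheme, TTL and renderModule function are all assumptions for illustration, not details from the BBC post:

    // Hypothetical module-level cache with Redis (illustrative only).
    // Each page module is rendered once, stored with a short TTL, and
    // served straight from Redis on subsequent requests.
    var redis = require('redis');
    var client = redis.createClient();

    // renderModule(moduleId, cb) is an assumed async renderer for one page module.
    function getModule(moduleId, renderModule, callback) {
      var key = 'module:' + moduleId;
      client.get(key, function (err, cached) {
        if (!err && cached) {
          return callback(null, cached); // cache hit: skip rendering entirely
        }
        renderModule(moduleId, function (renderErr, html) {
          if (renderErr) return callback(renderErr);
          client.setex(key, 60, html); // cache miss: store for 60 seconds
          callback(null, html);
        });
      });
    }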
by Numberwang on 2/17/15, 7:00 AM
by nerdy on 2/17/15, 12:47 PM
The 13 tests shown in the screenshot take an average of 577ms each, a total of over 7.5 seconds. Thirteen simple tests with wild variance in execution time: checking a module's banner color takes 42ms, while checking a module's background color takes 552ms. That's roughly a 13x difference for checking a different color within the same module.
Those tests are going to rot because of the expense of executing them, and they'll be discarded. An average of over 500ms per test just isn't sustainable, particularly at the scale required for checking every conceivable kind of background color.
by rentamir on 2/17/15, 3:35 AM
by weavie on 2/17/15, 8:29 AM
by thesehands on 2/17/15, 10:49 AM
Well worth a read.
by ggitau on 2/17/15, 12:09 PM
by noso on 2/17/15, 1:39 PM
Rock on!
by collyw on 2/17/15, 2:44 PM
by kirkus on 2/17/15, 7:36 AM
by esalman on 2/17/15, 11:51 AM
by smegel on 2/17/15, 8:52 AM
by Fastidious on 2/17/15, 11:35 AM
I hope I can play their videos on the new website; they all used to require Flash.
by bruceboughton on 2/17/15, 4:04 PM
by bencollier49 on 2/17/15, 11:55 AM
by confiscate on 2/17/15, 8:41 AM
by rashthedude on 2/17/15, 9:34 AM
by _b8r0 on 2/17/15, 9:19 AM
by sklogic on 2/17/15, 8:21 AM