by lennysan on 5/27/14, 11:54 PM with 43 comments
by nostromo on 5/28/14, 12:58 AM
I ran an online marketplace at a previous gig. Our service providers always complained that they didn't know what to charge to maximize their business. They couldn't see the forest for the trees. Because we had the data for all providers, we started letting them know if they were under- or over-priced, and we saw more conversions and revenue.
Dynamic pricing (like Uber does on holidays) alone could be hugely valuable.
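A minimal sketch of that kind of price feedback, assuming you can query prices for comparable listings (the band thresholds and numbers below are illustrative, not the actual system):

    import numpy as np

    def price_feedback(my_price, comparable_prices, low_pct=10, high_pct=90):
        """Flag a listing as under- or over-priced relative to comparable listings."""
        low, high = np.percentile(comparable_prices, [low_pct, high_pct])
        if my_price < low:
            return "under-priced"
        if my_price > high:
            return "over-priced"
        return "in range"

    # e.g. nightly rates of comparable listings in the same neighborhood
    comps = [80, 95, 100, 110, 120, 125, 140, 150]
    print(price_feedback(70, comps))   # -> "under-priced"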
by wkonkel on 5/28/14, 12:30 AM
by thinkmoore on 5/28/14, 1:00 AM
Funnily enough, the page they reference for calculating the right sample size actually talks about sequential analysis, but AirBnB doesn't mention this in describing their solution...
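For context, the usual fixed-horizon sample size calculation looks roughly like this (a sketch using a two-sided z-test for proportions; the baseline rate and minimum detectable effect are illustrative, not necessarily the formula on the page they link):

    from scipy.stats import norm

    def sample_size_per_group(p_base, mde, alpha=0.05, power=0.8):
        """Approximate n per group to detect an absolute lift `mde`
        over baseline conversion rate `p_base` (two-sided z-test)."""
        p_alt = p_base + mde
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        p_bar = (p_base + p_alt) / 2
        num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
               + z_b * (p_base * (1 - p_base) + p_alt * (1 - p_alt)) ** 0.5) ** 2
        return int(num / mde ** 2) + 1

    # e.g. 10% baseline conversion, want to detect a 1-point absolute lift
    print(sample_size_per_group(0.10, 0.01))   # -> 14751 per group under these assumptions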
by sutterbomb on 5/28/14, 1:46 AM
http://elem.com/~btilly/ab-testing-multiple-looks/part2-limi...
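A rough simulation of the problem that post covers: peeking at an A/A test repeatedly pushes the false positive rate well past the nominal 5% (my sketch, not code from the post; all parameters are illustrative):

    import numpy as np
    from scipy.stats import norm

    def peeking_false_positive_rate(n_sims=2000, n_per_arm=10000, n_peeks=20,
                                    p=0.10, alpha=0.05, seed=0):
        """Simulate A/A tests with no real difference, peeking n_peeks times,
        and report how often *any* peek looks significant at level alpha."""
        rng = np.random.default_rng(seed)
        z_crit = norm.ppf(1 - alpha / 2)
        checkpoints = np.linspace(n_per_arm // n_peeks, n_per_arm, n_peeks, dtype=int)
        false_positives = 0
        for _ in range(n_sims):
            a = rng.random(n_per_arm) < p
            b = rng.random(n_per_arm) < p
            for n in checkpoints:
                pa, pb = a[:n].mean(), b[:n].mean()
                pooled = (a[:n].sum() + b[:n].sum()) / (2 * n)
                se = np.sqrt(2 * pooled * (1 - pooled) / n)
                if se > 0 and abs(pa - pb) / se > z_crit:
                    false_positives += 1
                    break
        return false_positives / n_sims

    print(peeking_false_positive_rate())   # typically well above 0.05 (often around 0.2)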
by bjlorenzen on 5/28/14, 1:45 AM
The rate at which you deploy experiments is a better focus: since your competitors are bound to copy your winners anyway, you have to rely on the few months' edge you've earned before they do, and constantly maintain that lead.
by coherentpony on 5/28/14, 6:17 AM
Try setting your significance threshold to your Type I error rate divided by the number of tests you perform. It will be much smaller, and this is a good thing: a significant result should reflect a real effect, not random chance.
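That's the Bonferroni correction; a minimal sketch (the p-values below are made up):

    def bonferroni_significant(p_values, alpha=0.05):
        """Reject only the hypotheses whose p-value is below alpha / m,
        keeping the family-wise Type I error rate at roughly alpha."""
        m = len(p_values)
        threshold = alpha / m
        return [p < threshold for p in p_values]

    # e.g. p-values from 5 simultaneous experiments
    print(bonferroni_significant([0.004, 0.03, 0.2, 0.01, 0.049]))
    # -> [True, False, False, False, False]; only 0.004 < 0.05 / 5 = 0.01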
by jessriedel on 5/28/14, 12:42 AM
by cbovis on 5/28/14, 9:44 AM
by RA_Fisher on 5/28/14, 1:32 AM
by 205guy on 5/28/14, 12:47 AM
When is AirBnB going to experiment with helping their hosts follow the law? I bet I can predict that graph. Why, look at all those illegal rentals in SF right there in the sample screenshots--oh the irony.
Remember, DON'T FUCK UP THE CULTURE! But it's OK to fuck up your host city for a buck or 2 billion.