by bmahmood on 8/14/19, 5:52 PM with 31 comments
by otterk10 on 8/14/19, 11:25 PM
We’re really excited to release this feature after months of R&D. Many of our customers want to understand the causal impact of their products, but are unable to iterate quickly enough when running A/B tests. Rather than taking the easy path and serving correlation-based insights, we took the harder approach of automating causal inference through what's known as an observational study, which can simulate A/B experiments on historical data and eliminate spurious effects. This involved a mix of linear regression, PCA, and large-scale custom Spark infra. Happy to share more about what we did behind the scenes!
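To give a rough flavor of the PCA-plus-regression idea, here is a toy sketch (nothing like our production Spark pipeline; the data, dimensions, and effect size below are all made up):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    # Synthetic stand-in: a few latent user traits drive both feature
    # adoption (the "treatment" d) and the outcome metric y.
    rng = np.random.default_rng(0)
    n, p = 5000, 100
    f = rng.normal(size=(n, 3))                       # latent confounders
    X = f @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
    d = (f[:, 0] + rng.normal(size=n) > 0).astype(float)
    y = 0.5 * d + 2.0 * f[:, 0] + rng.normal(size=n)  # true effect = 0.5

    # Compress the high-dimensional covariates with PCA, then regress the
    # outcome on treatment + components so the components absorb the
    # confounding that biases the naive difference in means.
    pcs = PCA(n_components=10).fit_transform(X)
    adjusted = LinearRegression().fit(np.column_stack([d, pcs]), y).coef_[0]
    naive = y[d == 1].mean() - y[d == 0].mean()
    print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")  # adjusted is about 0.5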
by cuchoi on 8/16/19, 2:52 AM
From the article, this seems like a normal regression to me. It would be interesting to know what makes it causal (or at least better) compared to plain OLS. PCA has long been used to select features for regression. Would it be accurate to say that the innovation is in how the regression is computed rather than in the statistical methodology?
Either way, it would be interesting to test this approach against an A/B test and check how much the observational estimates differ from the A/B estimates, and how sensitive the approach is to which features are included. It would also be interesting to compare it to other quasi-experimental methodologies, such as propensity score matching.
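Even a bare-bones propensity score matching baseline is only a few lines with sklearn, so the comparison should be cheap to run. A toy sketch on synthetic data, just to illustrate the shape of it:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    # Hypothetical observational data: X[:, 0] confounds treatment and outcome
    rng = np.random.default_rng(1)
    n, p = 2000, 10
    X = rng.normal(size=(n, p))
    d = (X[:, 0] + rng.normal(size=n) > 0).astype(int)
    y = 1.0 * d + 2.0 * X[:, 0] + rng.normal(size=n)  # true effect = 1.0

    # Estimate e(x) = P(d=1 | x), then match each treated unit to the
    # control with the nearest propensity score.
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    t, c = np.flatnonzero(d == 1), np.flatnonzero(d == 0)
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c, None])
    _, idx = nn.kneighbors(ps[t, None])
    att = (y[t] - y[c[idx.ravel()]]).mean()
    naive = y[d == 1].mean() - y[d == 0].mean()
    print(f"naive: {naive:.2f}, matched ATT: {att:.2f}")  # matched is far closer to 1.0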
Is there a more extended document explaining the approach?
Good luck!
by 6gvONxR4sf7o on 8/15/19, 11:18 PM
by mrbonner on 8/16/19, 12:34 AM
by whoisnnamdi on 8/16/19, 4:09 AM
Did you all consider using Double Selection [1] or Double Machine Learning [2]?
The reason I ask is that your approach is very reminiscent of a Lasso-style regression where you first run Lasso for feature selection and then re-run a normal OLS with only those controls included (Post-Lasso). This is somewhat problematic because Lasso has a tendency to drop too many controls if they are highly correlated with one another, introducing omitted variable bias. Compounding the issue, some of those variables may be correlated with the treatment variable, which increases the chance they will be dropped.
The solution proposed is to run two separate Lasso regressions, one with the original dependent variable and another with the treatment variable as the dependent variable, recovering two sets of potential controls, and then using the union of those sets as the final set of controls. This is explained in simple language at [3].
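In sklearn terms, the double selection procedure looks roughly like this (an illustrative sketch on synthetic data, not a drop-in implementation):

    import numpy as np
    from sklearn.linear_model import LassoCV, LinearRegression

    # Synthetic example: 50 candidate controls, one of which (X[:, 0])
    # confounds both the treatment d and the outcome y. True effect = 2.0.
    rng = np.random.default_rng(2)
    n, p = 1000, 50
    X = rng.normal(size=(n, p))
    d = (X[:, 0] + rng.normal(size=n) > 0).astype(float)
    y = 2.0 * d + 3.0 * X[:, 0] + rng.normal(size=n)

    # Lasso 1: which controls predict the outcome?
    sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
    # Lasso 2: which controls predict the treatment?
    sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)
    # Final OLS of y on the treatment plus the UNION of both selected
    # sets, so a control correlated with d cannot be silently dropped.
    keep = np.union1d(sel_y, sel_d)
    Z = np.column_stack([d, X[:, keep]])
    print("effect:", LinearRegression().fit(Z, y).coef_[0])  # about 2.0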
Now, you all are using PCA, not Lasso, so I don't know if these concerns apply or not. My sense is that you still may be omitting variables if the right variables are not included at the start, which is not a problem that any particular methodology can completely avoid. Would love to hear your thoughts.
Also, you don't show any examples or performance testing of your method. One demonstration would be a situation where you "know" the "true" causal effect (via an A/B test, perhaps) and can show that your method recovers a similar point estimate. As presented, how do we / you know that this is generating reasonable results?
[1] http://home.uchicago.edu/ourminsky/Variable_Selection.pdf
[2] https://arxiv.org/abs/1608.00060
[3] https://medium.com/teconomics-blog/using-ml-to-resolve-exper...
by kk58 on 8/16/19, 6:19 AM
Granger causality for estimating Granger causes
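E.g. with statsmodels (a toy sketch on synthetic series, just to show the test):

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    # Toy series where x leads y by one step
    rng = np.random.default_rng(3)
    n = 500
    x = rng.normal(size=n)
    y = np.zeros(n)
    y[1:] = 0.6 * x[:-1] + rng.normal(size=n - 1)

    # Tests whether column 2 (x) Granger-causes column 1 (y)
    res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
    pvals = {lag: r[0]["ssr_ftest"][1] for lag, r in res.items()}
    print(pvals)  # small p-values: x Granger-causes y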
by whirlofpearl on 8/15/19, 3:29 AM
Congratulations! Just remember to patent it :)
by move-on-by on 8/16/19, 1:16 AM
by Rainymood on 8/16/19, 9:05 AM