We're using integration tests to cover 90% of the user flows. Adding integration tests for every A/B test flow and all their permutations seems like overkill, but we keep hitting situations where things break only when the user is in the B variant of test X and the B variant of test Y. I'm wondering how other smaller companies deal with this?
When we do feature testing, it's normally an existing feature against a new one. That means we test the new code as if it were exposed to everyone; the old code path is already covered.
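As a minimal sketch of that approach: pin the user to the new variant in the test so the new path runs exactly as if it were fully rolled out. The `FakeFlags` class, the flag name, and the page function are illustrative assumptions, not any particular flag library.

```python
# Sketch only: FakeFlags, "new_checkout", and render_checkout are
# hypothetical stand-ins for your real flag client and page logic.
class FakeFlags:
    def __init__(self, assignments=None):
        self.assignments = assignments or {}

    def variant(self, test_name, default="a"):
        return self.assignments.get(test_name, default)

def render_checkout(flags):
    # Stand-in for the real page/flow under test.
    if flags.variant("new_checkout") == "b":
        return "new checkout"
    return "old checkout"

def test_new_checkout_as_if_fully_rolled_out():
    # Pin the user to the "b" variant so the test exercises the new
    # code path as if it were exposed to everyone.
    flags = FakeFlags({"new_checkout": "b"})
    assert render_checkout(flags) == "new checkout"
```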
If you're seeing breakage only in a particular combination of paths, then yes, I'd say that combination is worth a test.
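One way to do that without hand-writing every permutation is to parameterize a single flow test over the variant combinations. This is just a sketch under assumed names (`test_x`, `test_y`, `run_user_flow`); the real entry point would be your integration-test flow.

```python
import itertools

import pytest

# Hypothetical experiment registry; add experiments and variants here.
EXPERIMENTS = {"test_x": ["a", "b"], "test_y": ["a", "b"]}

# Full cartesian product: 4 cases here, but it grows quickly as tests are
# added, so in practice you might restrict it to pairs known to interact.
ALL_COMBINATIONS = [
    dict(zip(EXPERIMENTS, combo))
    for combo in itertools.product(*EXPERIMENTS.values())
]

def run_user_flow(assignments):
    # Stand-in for the real end-to-end flow; replace with your app's
    # integration-test entry point.
    return f"checkout completed with {assignments}"

@pytest.mark.parametrize("assignments", ALL_COMBINATIONS)
def test_flow_under_each_variant_combination(assignments):
    # Even just asserting the flow completes is enough to catch the
    # "B of test X plus B of test Y" interaction breakage described above.
    assert run_user_flow(assignments)
```

If the full product gets too large, pruning it to the experiments that touch the same surface (or using pairwise combinations) keeps the suite manageable while still covering the risky interactions.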