Would you explain what you mean by “A uniform prior probability is assumed.”?

Are you using an uninformative prior?

In the years since I first published this, Optimizely have done some great work on stats and their stats engine. They now take a hybrid approach – see https://blog.optimizely.com/2015/03/04/bayesian-vs-frequentist-statistics/. I’m not qualified to comment on the quality of the maths, but I can say that they’ve published it and had it reviewed, which is more than can be said for some others. I certainly wouldn’t caution against using their numbers.
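For readers wondering about the uniform prior in the question above: for a conversion rate it is the Beta(1, 1) distribution, which is flat on [0, 1], and updating it with observed data is a simple conjugate Beta update. A minimal sketch – the conversion and trial counts here are made up, not from the calculator:

```python
from scipy.stats import beta

conversions, trials = 12, 200  # hypothetical test data

# Under the uniform Beta(1, 1) prior, the posterior after s conversions
# in n trials is Beta(1 + s, 1 + n - s).
posterior = beta(1 + conversions, 1 + trials - conversions)

print(posterior.mean())          # posterior mean conversion rate
print(posterior.interval(0.95))  # central 95% credible interval
```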

Hopefully this approach gives another perspective which can be really useful in the case of low traffic or rare conversions.

In general, the approach would be to produce functions that give the distribution of costs minus rewards, with conversion rate as a parameter, and then see how this varies between the split test options given the evidence of the test. You arrive at a probability distribution of value for each branch of the split test, so the branches can be compared in terms of likely $. Fortunately, numerical approaches and tools make the maths of this approach feasible.
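A minimal sketch of that numerical approach, assuming a deliberately simple cost/reward structure – a fixed $ reward per conversion and a fixed $ cost per trial. All the numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical test evidence: (conversions, trials) per branch.
evidence = {"A": (30, 1000), "B": (45, 1000)}
reward_per_conversion = 50.0             # assumed $ reward per conversion
cost_per_trial = {"A": 0.80, "B": 1.20}  # assumed $ costs (B is dearer)

value = {}
for name, (conv, trials) in evidence.items():
    # Posterior draws of the conversion rate (uniform Beta(1, 1) prior).
    p = rng.beta(1 + conv, 1 + trials - conv, size=N)
    # $ value per trial for each draw: expected reward minus cost.
    value[name] = p * reward_per_conversion - cost_per_trial[name]

# Probability that B is the more valuable branch in $ terms.
prob_b_better = (value["B"] > value["A"]).mean()
print(prob_b_better)
```

With these made-up numbers, B converts better but also costs more per trial, and the comparison happens directly on the two value distributions rather than on conversion rates alone.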

However, unfortunately, there are so many options for how costs and rewards could be structured that (so far) I can’t figure out how to make a tool for it that would be generic enough to cover many situations and still be manageable to build and use.

We can do this case-by-case, but we need quite a lot of specific information about the business situation to build the model and help interpret the results.

I, like many others, use your tool for split testing marketing campaigns. While understanding from a small number of trials which campaign is more likely to have a higher conversion rate is incredibly useful, often the cost per trial differs between campaigns. It’s a no-brainer when they cost the same or the worse CTR is more expensive. But what to do when the cost is the other way around? Is there a way to factor in cost somehow in a Bayesian manner?

Zeth

Thanks very much for the comment. Interesting problem!

If I have understood properly, then I think the general Bayesian approach is to work out a loss function for each branch of the test – this combines the expected conversion, the uncertainty around that, and the expected cost with its uncertainty, into one equation, which gives you a distribution of expected utility for each option.
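As a sketch of the loss-function idea, under deliberately simple assumptions (fixed revenue per conversion, fixed cost per trial; every number below is hypothetical): the expected loss of picking an option is the average $ value you forgo if the other option turns out to be better.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical evidence and economics (none of this comes from the calculator):
# conversion-rate posterior draws (uniform prior) turned into $ utility per trial.
util_a = rng.beta(1 + 30, 1 + 970, N) * 50.0 - 1.00  # 30/1000, cost $1.00
util_b = rng.beta(1 + 42, 1 + 958, N) * 50.0 - 1.50  # 42/1000, cost $1.50

loss_a = np.maximum(util_b - util_a, 0).mean()  # expected $ regret of picking A
loss_b = np.maximum(util_a - util_b, 0).mean()  # expected $ regret of picking B
print(loss_a, loss_b)  # choose the option with the smaller expected loss
```

The point of using expected loss rather than just comparing means is that it accounts for the overlap between the two utility distributions, i.e. the uncertainty that remains after the test.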

Unfortunately, this is way beyond the scope of this calculator, and it’s going to depend on the specifics of your situation such as the distribution of profit (eg does the revenue per transaction vary, and in what way) as well as cost.

If you’re only seeing a small variance in conversion, and that is still going to make a substantial difference to your bottom line, then I’d suggest talking with a mathematician – email me if this is the case, and I can put you in touch with people with relevant expertise.

Otherwise, you may be able to reason through to a decision based on some simplifying assumptions. For example, have a look at the observed distributions of transaction amounts and see if there is much variation between the test branches. If not, and if the conversion rate probability distributions are well separated, then it may be reasonable to simply calculate the overall cost and benefit based on the expected conversion rates, and compare those. Email me if you’d like to discuss in more depth.
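That simplifying assumption, sketched with made-up numbers: if the conversion-rate posteriors are well separated and transaction amounts look similar across branches, a plain expected-value comparison may be enough.

```python
# All figures below are hypothetical, for illustration only.
conv_rate = {"A": 0.030, "B": 0.045}     # observed conversion rates (made up)
revenue_per_conversion = 50.0            # assumed average $ per transaction
cost_per_trial = {"A": 0.80, "B": 1.20}  # assumed $ cost per trial

# Expected $ value per trial: expected conversion rate x revenue, minus cost.
expected_value = {
    name: conv_rate[name] * revenue_per_conversion - cost_per_trial[name]
    for name in conv_rate
}
for name, ev in expected_value.items():
    print(name, round(ev, 2))  # A 0.7, B 1.05
```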

Justin

Firstly, thanks for building this awesome tool for A/B split-testing. It has helped a lot in my work to be able to get a good estimate of how I should optimize my campaigns.

Secondly, I am currently facing a problem: the cost per trial is not the same between each set, so the “conclusion” calculated would not be an accurate estimation. Is there a way to work around this? I’ve seen comments above from years ago talking about revenue/view. How could I use that metric, and how would I incorporate it into this calculator?

Looking forward to your kindest reply.

Best Regards.

Thanks for building this tool. This calculator is awesome and very flexible.

With limitations on sample size, more and more people are leaning towards the Bayesian approach these days. It would be great if there were an option to include multiple variants.

I’ve been thinking of doing a version that exposes more options (as well as updating the look a bit). I’ve been trying to think of a way of gathering useful priors that doesn’t involve major conceptual leaps by the end-user – I’ve found this tricky. I’ll be keen to hear how your setup goes.

I’m also keen to get back in touch once I’ve had a closer look at your calculator.