A/B Testing allows advertisers to segment the users they're reaching on Twitter so that they can understand how best to optimize for campaign performance and gather learnings to inform their marketing strategies.
These segments—referred to as user group splits—are randomized and mutually exclusive. Randomization distributes the factors that influence outcomes evenly across groups, so there are no inherent differences between the groups or their expected behaviors. Because of this, when a single variation is applied to one user group and not the others, any difference in campaign performance can be attributed to that variation.
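To make the split idea concrete, here is a minimal sketch of one common way to produce randomized, mutually exclusive groups: hash each user ID together with an experiment identifier and take the result modulo the number of groups. This is an illustrative technique, not a description of Twitter's internal implementation; the function and parameter names are hypothetical.

```python
import hashlib

def assign_group(user_id: str, experiment_id: str, num_groups: int = 2) -> int:
    """Deterministically assign a user to one of num_groups mutually
    exclusive splits.

    Hashing (experiment_id, user_id) makes the assignment stable for a
    given experiment while remaining effectively random across users,
    so influencing factors are spread evenly across groups.
    """
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_groups

# Each user lands in exactly one group, and repeated calls agree.
group = assign_group("user_123", "creative_test_q2")
```

Because the assignment is a pure function of the user and experiment IDs, a user can never appear in two groups of the same experiment, which is what makes the splits mutually exclusive.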
While it's possible to test many variations at once, we strongly recommend testing a single variation at a time. This isolates the causal factor for the observed difference in campaign performance.
Variations are set at the campaign level. For example, if the advertiser wants to test the efficacy of a new creative, they should create two identical campaigns where the only difference is the creative. In the future, we plan to support variations at the line item level.
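The "two identical campaigns, one difference" setup can be sketched as follows. This is an illustrative sketch only: the dictionary field names (`creative_id`, `daily_budget_amount_local_micro`, etc.) stand in for whatever campaign parameters the advertiser actually uses, and `build_ab_campaigns` is a hypothetical helper, not part of the Ads API.

```python
# Base settings shared by both campaigns in the test.
BASE_CAMPAIGN = {
    "name": "spring_sale",
    "objective": "WEBSITE_CLICKS",
    "daily_budget_amount_local_micro": 50_000_000,
}

def build_ab_campaigns(base: dict, variation_key: str, variant_a, variant_b):
    """Return two campaign configs that are identical except for one key.

    Keeping every other field the same is what lets a performance
    difference be attributed to the varied field alone.
    """
    camp_a = {**base, "name": base["name"] + "_A", variation_key: variant_a}
    camp_b = {**base, "name": base["name"] + "_B", variation_key: variant_b}
    return camp_a, camp_b

# Test a new creative: only creative_id differs between the two campaigns.
camp_a, camp_b = build_ab_campaigns(BASE_CAMPAIGN, "creative_id", "creative_1", "creative_2")
```

A quick sanity check before launch is to diff the two configs and confirm that only the name suffix and the varied field differ; any second difference would confound the test.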
A/B testing is most often used to support (1) optimization use cases for performance customers who want to understand what works best on Twitter in order to optimize their investment and (2) learning use cases for brand advertisers who want to use learnings to inform their marketing strategy.
The API will support A/B testing for any campaign variable, including: