by Kyle Grant
http://www.enquiro.com
It shouldn’t surprise anyone that not every paid search test they implement will be a success; if it does, I am sorry (P.S. there is no Santa Claus, either). The key to successful management of paid search is to determine acceptable margins of failure and to test within those margins. How much failure can be tolerated, and how much are you willing to risk?
Gord Hotchkiss has spoken and written on many occasions about internet speed and what it means for organizations trying to adapt to a rapidly changing competitive landscape. The usual question about internet speed is how fast you are willing to move to adapt to the changing online landscape, but another question has to be asked as well: how fast are you willing to fail?
To determine the most effective online marketing mix, at some point a failure must be encountered. With paid search campaigns, the question of optimization may come down to complacency: if everything is working well, ROAS is good, CTR is good, and Quality Scores are good, why mess with a good thing? Because the quickest way to be overtaken by your competition is to stand still, and yet when we optimize, we risk failure.
Not too fast, not too slow
Optimizing too conservatively can lead to long, drawn-out failures that do longer-term damage, yet going to the other extreme can lead to very large disasters. Optimization testing is about balancing speed against risk. When starting a test, define its scope and determine the acceptable rate of failure (i.e., negative impact to the bottom line) versus the time needed to gather statistically significant results. For example, when testing new ad copy or a new landing page, weigh the volume of traffic you will direct to the test against the potential business lost if the change underperforms. The margin of error associated with the test also comes down to the size of the change: the more significant the change, the faster you will see results, positive or negative.
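To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python for sizing the downside before a test starts. The function and every number in it are hypothetical placeholders of my own, not figures from any real campaign or platform:

    # A minimal sketch of sizing the downside of a test. All numbers are
    # hypothetical; plug in your own campaign figures.

    def revenue_at_risk(daily_visits, test_share, conv_rate, order_value,
                        worst_case_drop, days):
        """Estimate worst-case revenue lost by diverting a share of
        traffic to a test variant for a given number of days."""
        test_visits = daily_visits * test_share * days
        baseline_revenue = test_visits * conv_rate * order_value
        # If the variant converts worst_case_drop worse than the control,
        # this is the revenue exposed over the test window.
        return baseline_revenue * worst_case_drop

    # Example: 10,000 visits/day, 20% sent to the variant, 2% conversion
    # rate, $80 average order, assuming a worst case of a 30% drop in
    # conversions, over a 14-day test.
    loss = revenue_at_risk(10_000, 0.20, 0.02, 80.0, 0.30, 14)
    print(f"Worst-case revenue at risk: ${loss:,.0f}")

If that worst-case number is more than the business can stomach, shrink the test’s traffic share or its duration before launching.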
Test markets
As with traditional marketing, defining the test market helps to control the scope of the test as well as the volume of traffic exposed to it. Limiting the test to selected ad groups or specific campaigns, or leveraging geo-targeting to isolate the markets exposed to it, can help balance quick results against risk. The selection of test markets also provides a comparative benchmark for the test, although this is less of an issue with A/B testing.
Avoid the knee-jerk reaction. Seeing results quickly is great, but make sure those results are statistically significant. Before declaring a test a success, determine exactly how much data is required to detect a real effect. Remember Statistics 101: how large an effect do you need to see before you can be confident at the 90% or 95% level? Confidence intervals are directly affected by the amount of data; the more data you have, the narrower the interval, and the smaller the effect you can reliably detect. There are two ways to increase the amount of data: time or scope. Extending the duration of a test drives it toward significance, and widening its scope to more ad groups or campaigns increases the volume of data.
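To make the Statistics 101 point concrete, here is a minimal sketch of a two-proportion z-test for comparing the click-through rates of two ad variants. The click and impression counts are invented for illustration:

    # A minimal sketch of checking statistical significance for an A/B
    # ad-copy test on click-through rate, using a two-proportion z-test.
    # Standard-library only; illustrative numbers, not real campaign data.
    from math import sqrt

    def z_test_two_proportions(clicks_a, imps_a, clicks_b, imps_b):
        """Return the z-statistic comparing two click-through rates."""
        p_a = clicks_a / imps_a
        p_b = clicks_b / imps_b
        # Pooled CTR under the null hypothesis that both ads perform alike.
        p = (clicks_a + clicks_b) / (imps_a + imps_b)
        se = sqrt(p * (1 - p) * (1 / imps_a + 1 / imps_b))
        return (p_b - p_a) / se

    # Example: control ad vs. test ad.
    z = z_test_two_proportions(clicks_a=200, imps_a=10_000,
                               clicks_b=260, imps_b=10_000)
    # Two-sided critical values: 1.645 for 90% confidence, 1.96 for 95%.
    for label, crit in (("90%", 1.645), ("95%", 1.96)):
        verdict = "significant" if abs(z) > crit else "not significant"
        print(f"z = {z:.2f}: {verdict} at the {label} level")

Notice that more impressions shrink the standard error, which is exactly why extending a test’s duration or scope makes smaller effects detectable.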
Where do you want to go?
Testing in marketing is 70% science and 30% art. Even the most lackluster creative developer can design a highly successful campaign with enough iterations and testing. Before implementing a test, start with a hypothesis. I know, it’s high school science all over again, but without an idea of the expected outcome of your test, it’s like flying without a destination: you will end up somewhere, just maybe not where you would like.
Knowing what works is only valuable if you also know why something didn’t work, which makes failure the only real way to drive success. Failure is only a bad thing when nobody learns from it. That is why, when implementing a testing framework, recording the specifics of how each change affected the campaign’s performance is imperative. Each test should be followed by a detailed analysis of how it impacted the results: what was it about the test that caused this specific outcome?
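One lightweight way to enforce that record-keeping (a hypothetical structure of my own, not a prescribed tool) is to log every test against a fixed template, so the hypothesis, the change, and the explanation for the result are never lost:

    # A sketch of a per-test record: what changed, the hypothesis, the
    # metrics before and after, and why the result happened. All field
    # names and example values are hypothetical.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class TestRecord:
        name: str
        hypothesis: str          # expected outcome, stated up front
        change: str              # exactly what was altered
        scope: str               # ad groups / campaigns / geo markets
        started: date
        ended: date
        baseline_metrics: dict = field(default_factory=dict)
        test_metrics: dict = field(default_factory=dict)
        conclusion: str = ""     # why the result happened, not just what

    log = [
        TestRecord(
            name="Headline benefit vs. feature",
            hypothesis="Benefit-led headline lifts CTR by ~10%",
            change="Rewrote headline on ad variant B",
            scope="Brand campaign, US geo-target only",
            started=date(2010, 3, 1), ended=date(2010, 3, 14),
            baseline_metrics={"ctr": 0.020}, test_metrics={"ctr": 0.026},
            conclusion="Benefit framing outperformed; roll out and retest.",
        ),
    ]

However you store it, the point is the conclusion field: a test without a recorded explanation teaches you nothing the next time around.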
The key to a successful optimization and testing program is designing successive tests and constantly challenging the status quo. Just remember you’re going to fail once in a while, but you can learn a lot from failure and take what you learn to help you succeed that much more next time.
© 2010 Enquiro Search Solutions.