Tutorial 53: principles of testing
You don’t have to be a statistician to apply basic logic.
- Written by Jerry Huntsinger
- Added January 07, 2019
Indeed, many statisticians rely on their figures even when logic tells them they are wrong. I have a few basic statistical principles that I fall back on when it looks like I am going to drown in an ocean of facts.
First of all
I never pay attention to the number of returns unless there is a minimum of 20 responses from each segment of the test. This means that if you are testing a laser-personalised letter against a generic letter, you must have 20 responses from the generic letter and 20 responses from the personalised letter before you even consider that one may have won or lost.
And then, unless there is at least a 20 per cent differential between the two segments, I never declare one side a winner. So if the personalised letter brings in 23 responses and the generic letter brings in 20 responses, it is a tie.
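For those who want to see these two rules in working form, here is a minimal sketch in Python. The function name, and the choice to measure the differential against the smaller side (the text does not say which side to use), are my assumptions.

```python
def declare_winner(responses_a, responses_b, minimum=20, differential=0.20):
    """Apply the two rules above: each segment needs at least `minimum`
    responses, and the gap must reach `differential` (20 per cent,
    measured here against the smaller side) before either side wins."""
    if responses_a < minimum or responses_b < minimum:
        return "too early to call"
    low, high = sorted((responses_a, responses_b))
    if (high - low) / low < differential:
        return "tie"
    return "A wins" if responses_a > responses_b else "B wins"

# The example from the text: 23 versus 20 is only a 15 per cent gap.
print(declare_winner(23, 20))  # -> tie
```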
Second
Whenever I am comparing income, I always throw out any gift of $100 or more if that single gift represents 10 per cent or more of the total income. This means that if a total of $1,000 came in on one side of the test, but it included a single gift of $100, then that $100 represents 10 per cent of the total and I would throw it out (i.e. not count it as part of the test), because it is not representative of what actually happened. (But it could also mean that the test list has some major donor potential.)
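Here is the same rule as a short Python sketch, using the $1,000 example above; the function name and the smaller gift amounts filling out the example are mine.

```python
def screen_large_gifts(gifts, threshold=100.0, share=0.10):
    """Set aside any single gift of `threshold` or more that by itself
    accounts for `share` (10 per cent) or more of the segment's total
    income; everything else stays in the test."""
    total = sum(gifts)
    kept, set_aside = [], []
    for gift in gifts:
        if gift >= threshold and gift / total >= share:
            set_aside.append(gift)  # possible major-donor signal, not test data
        else:
            kept.append(gift)
    return kept, set_aside

# The example from the text: $1,000 in total, including one $100 gift.
gifts = [100.0] + [30.0] * 30           # 100 + 900 = 1,000
kept, set_aside = screen_large_gifts(gifts)
print(sum(kept), set_aside)             # -> 900.0 [100.0]
```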
Third
In setting up a testing format, I always make sure that each segment of the test has a chance of returning the minimum number – 20 – to allow it to be counted.
This means that if I expect a one per cent return, I can’t test 1,000 pieces, because that would be expected to give me only about 10 returns, half the minimum. I would need to mail at least 2,000 pieces per segment.
And my final personal rule is that any result based on the minimum response of 20 must be confirmed in quantities that double the initial test. So the next step would be to set up a testing procedure whereby I receive 40 responses from each side of the test.
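The arithmetic behind these sizing rules fits in one small Python function; the numbers in the calls below are just the one per cent example above.

```python
import math

def pieces_needed(expected_response_rate, target_responses=20):
    """How many pieces a segment must be mailed so the expected return
    reaches the target: 20 responses to count a result, 40 to confirm it."""
    return math.ceil(target_responses / expected_response_rate)

# At a one per cent expected response rate:
print(pieces_needed(0.01))        # -> 2000 pieces for the initial test
print(pieces_needed(0.01, 40))    # -> 4000 pieces for the confirmation round
```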
And now, for those of you who want to probe deeper into the mysteries of statistical procedures, let’s move on.
How to test without losing your shirt
We must start with the basic principle that a test is valid only when it has been properly designed, controlled and interpreted.
In designing a test, you must decide to test something that is significant – that is, something that will give you valid input for making your decision.
This principle eliminates the testing of trivia. Don’t bother to test whether a window should go on the right-hand, centre or left-hand portion of the envelope. Regardless of the test results, you will not have gained anything significant. Don’t test whether a postage stamp should be put on straight or slightly slanted to get attention.
There are two basic types of testing
One is based on the concept of incremental testing. This means that you are going to constantly test various aspects of the package, the offer, the timing, the list and so on until, little by little, you come up with a winning package and your response percentage improves.
The second type of testing is ‘breakthrough’ testing. This kind of testing is designed to jump you from one per cent response to two per cent on prospect mailings, or from 10 per cent to 20 per cent in house appeals.
Here, you are going to test something so radical, so different, so new, so explosive and unique that it achieves a breakthrough.
Therefore, as you design your test, you must first of all determine if you are going for incremental testing or breakthrough testing. If you are going for breakthrough testing, then don’t bother to test whether the reply envelope should be blue or yellow. That will not achieve a breakthrough.
Then, as you design your test, you must control the variables.
This means that if you are testing one letter against another letter, then all parts of the package must be identical or your test will not be valid.
And if you are testing a laser-personalised letter versus a generic letter, then the copy for each letter must be identical.
You must design a test that will allow you to compare both returns and net income. So if you are testing the offer of a book premium versus a calendar premium, the cost of the premium is a variable that must be factored in when you figure your net income.
The confidence level
In test analysis, professionals use what is called a ‘confidence level’. This is similar to what I mentioned above under the heading of confirmation.
Let’s say that you have a prospect package that you have been using for three years and you test another package against your old package – which we refer to as ‘the control’.
If you test the control against the new package and you receive 20 returns on the control and 25 returns from the new package, your confidence level is fairly low, because you are not going to gamble your future on the new package until you have more confirmation.
So how many returns do you need to come up with a 95 per cent confidence level? Here again, there is no hard-and-fast rule. But let’s say that if we receive 100 responses from both sides of the test and the new package still beats the control package by more than 20 per cent – then we can be fairly confident it will hold up in larger quantities.
And if the new package wins again, how do we become all but 100 per cent confident? We need 1,000 responses on both sides of the test, so we test 100,000 pieces for each package, for a total of 200,000.
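For readers who do want the formal version, the conventional statistic behind phrases like ‘95 per cent confidence’ is the two-proportion z-test, sketched below in Python. This is standard textbook machinery, not something from the article, and the 25-versus-20 example borrows the low-confidence case above, assuming 2,000 pieces were mailed on each side.

```python
import math

def two_proportion_z(resp_a, mailed_a, resp_b, mailed_b):
    """Textbook two-proportion z-test: a z of roughly 1.96 or more
    corresponds to the 95 per cent confidence level."""
    p_a, p_b = resp_a / mailed_a, resp_b / mailed_b
    pooled = (resp_a + resp_b) / (mailed_a + mailed_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / mailed_a + 1 / mailed_b))
    return (p_a - p_b) / se

# 25 versus 20 responses on 2,000 pieces a side gives z of about 0.75,
# well short of 1.96, which agrees with calling that a low-confidence result.
print(two_proportion_z(25, 2000, 20, 2000))
```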
Let’s say then that we have designed a test that will meet all of the above requirements, and the results are all in and, after testing 200,000 pieces, the new package clearly wins.
Wait
If you stop there, you have not properly designed your test, because the quality of donors enrolled is the most basic element of all. You must provide for an analysis of the percentage of donors who make a second gift, and of their average gift over a six- to 12-month period, and compare those figures with the donors you enrolled from your old control package.
This means that in a donor prospecting programme, you may have a new package running concurrently with the control package – and even though the new package may appear to be winning over the control package, you are systematically comparing the quality of the donors the new package brings in with the quality of the donors from the control package.
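A comparison like that could be summarised with something as small as the sketch below; the gift-history layout and the sample figures are hypothetical, invented only to show the two measures side by side.

```python
def donor_quality(gift_histories):
    """Summarise the two quality measures named above: the share of
    donors who make a second gift, and the average gift over the
    follow-up period. Each donor is one list of gift amounts."""
    second_gift_rate = sum(len(h) >= 2 for h in gift_histories) / len(gift_histories)
    all_gifts = [amount for history in gift_histories for amount in history]
    return second_gift_rate, sum(all_gifts) / len(all_gifts)

# Run the same summary on each side of the test, then compare.
print(donor_quality([[25, 25], [50], [25, 40, 25]]))   # new package donors
print(donor_quality([[20], [30, 30], [15]]))           # control package donors
```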
Unfortunately, life is often not this neat and orderly, and many times you are under pressure to create a new control package and drop the old one. When that kind of situation occurs, you simply have to fall back on your intuition.
But be careful about changing your control package if you have drastically changed the offer. Let’s say, for example, that you have been mailing a control package offering a membership card to enrol members of a zoological society and you change your offer to include a wildlife calendar – and the new offer beats the control by 30 per cent, so you change to the calendar offer.
But, as the year rolls around and you start your annual renewal programme, you may find that the calendar people renew at only half the rate of the membership card people – and so, unfortunately, you have lost.
This brings up an interesting paradox: if you figure the net profit and the net value of the donor, you may find that the package that receives the lower percentage of response but a higher average gift may actually be the long-term winner – because it enrols donors who give a larger average gift and stay on the mailing list for a longer period of time.
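The paradox is easy to demonstrate with rough numbers. In the Python sketch below, every figure (response rates, gift sizes, renewal rates, the three-year horizon) is hypothetical, chosen only to show how the lower-response package can come out ahead.

```python
def long_term_value(pieces_mailed, response_rate, average_gift,
                    renewal_rate, years=3):
    """Rough long-term value of a test segment: donors recruited, times
    the gifts they give for as long as they stay on the file."""
    donors = pieces_mailed * response_rate
    value = 0.0
    for _ in range(years):
        value += donors * average_gift
        donors *= renewal_rate          # attrition between years
    return value

# Package A: higher response, smaller gifts, weaker renewal.
print(long_term_value(100_000, 0.02, 15.0, 0.40))    # -> roughly 46,800
# Package B: lower response, larger gifts, stronger renewal - the winner.
print(long_term_value(100_000, 0.015, 30.0, 0.60))   # -> roughly 88,200
```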
Some fundraising executives may wish to dig deeper into the mysteries of statistical analysis and I would refer them to the various literature published by the Direct Marketing Association.
For myself, I don’t ever intend to learn what is meant by “multiple regression” or “independent variables” or “dependent variables” or “linear relationships” or “dichotomies.” But it makes interesting shop talk.
© SOFII Foundation 2010-2014.