When you want to find out if your control package could be beaten, you test a different communication against it.

So how do you test if your direct marketing program could be better?  Clearly, you test a different program against it.

For some this is a scary thought: it’s hard enough to deliver on one program effectively.  But frankly,  this is the only way to test the fundamental assumptions at the heart of your program:

Am I communicating too much? Roger highlighted a program-wide test by the Union of Concerned Scientists last year. The full post is worth reading, and the video below with Laurie Marden is well worth watching.

The TL;DR version is that UCS sent 12-15 appeals to one panel of 25,000 donors and four (yes, as in the number between three and five) to another panel of 25,000.

They brought in an additional $8,000 in net revenue with the four appeals, and this reduced cadence became their control.
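If you want to gut-check the math on a cadence test like this, it's simple arithmetic. Here's a minimal Python sketch; the panel sizes come from the UCS test, but every revenue and cost figure below is invented for illustration (tuned so the lift lands near the $8,000 they reported):

```python
# Back-of-the-envelope tally for a cadence split test like UCS's.
# Panel sizes are from the post; all revenue and cost figures are invented.

def panel_net(panel_size, appeals, gross_per_appeal, cost_per_piece):
    """Net revenue for a panel: gross raised across all appeals, minus
    the cost of mailing every appeal to every donor in the panel."""
    gross = appeals * gross_per_appeal
    cost = appeals * panel_size * cost_per_piece
    return gross - cost

control_net = panel_net(25_000, appeals=14, gross_per_appeal=20_000, cost_per_piece=0.55)
test_net = panel_net(25_000, appeals=4, gross_per_appeal=37_625, cost_per_piece=0.55)

print(f"Control (14 appeals): ${control_net:,.0f} net")
print(f"Test (4 appeals):     ${test_net:,.0f} net")
print(f"Lift from the reduced cadence: ${test_net - control_net:,.0f}")
```

Fewer appeals means less gross per panel, but the mail cost drops faster; that's the whole bet a reduced-cadence test is making.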

They aren’t the only ones. Catholic Relief Services tested a pilot program that cut 6-7 mailings and put a select pilot group on a reduced email diet. The results are here, and that program is now their control program for 2018.

Am I asking the wrong way? The US Olympic Committee saw that much of their donor file was premium-dependent. For a $20 donation, a donor could get all manner of Olympic swag in the mail. Not surprisingly, this attracted donors more interested in the merchandise than in making a philanthropic gift to support Team USA.

Then… USOC tested a bold step: changing the offer to focus on a philanthropic gift. Specifically, they asked donors to join the Sixth Ring, a group of donors who give $100-plus per year, whether in single or recurring gifts.

And they tested this in parallel with their existing premium program, finding a 27% increase in net per piece (you can read Roger’s full discussion of this here). This, too, is now their default offer.
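For those keeping score at home, "net per piece" is just response rate times average gift, minus cost per piece. A quick sketch of that math; every number below is hypothetical, and only the roughly 27% lift mirrors what USOC reported:

```python
# The one-piece math behind "net per piece":
#   net per piece = response rate x average gift - cost per piece
# All rates, gifts, and costs here are hypothetical.

def net_per_piece(response_rate, avg_gift, cost_per_piece):
    return response_rate * avg_gift - cost_per_piece

premium = net_per_piece(response_rate=0.06, avg_gift=22.0, cost_per_piece=0.85)
philanthropic = net_per_piece(response_rate=0.011, avg_gift=118.0, cost_per_piece=0.70)

print(f"Premium offer:       ${premium:.2f} net per piece")
print(f"Philanthropic offer: ${philanthropic:.2f} net per piece")
print(f"Lift: {philanthropic / premium - 1:.0%}")
```

Note how the philanthropic offer can win on net even with a fraction of the response rate: fewer gifts, but far bigger ones, and no swag to buy and ship.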

I’ve run a premium-dependent program before; the premiums become like crack: they give you a quick hit on results, so you feel you need to keep using them. But the results dwindle over time, and you drive away the people who aren’t interested in premiums/crack. (Or so I’m told. I don’t even know how to use baking soda in a cake.)

I found I couldn’t break out of premium dependency by testing just one piece at a time. As USOC showed, it takes testing an entire pilot program to see a world without premiums.

What if my communications to a group of people are off? Tom talked about the idea of the long-term test back in 2011. Specifically, he raised the idea of split-testing new donors to an organization, trying different versions of welcome and first-year communications to see what maximized retention of quality donors.

This is truly the only way to test the effectiveness of your intro communications during that critical first year (when most of your donors will leave even if you are doing well). We’ve preached soliciting feedback from your donors immediately, including commitment, satisfaction, identity, and preference information. That’s a tall order.

Should it be in one giant communication? (No)

Two communications over time? Three? Four? (Perhaps)

Fifteen different communications over the course of the year each asking for a separate piece of information?  (Definitely not)

The point is that your initial instinct may not be what best gets you the information you need to market effectively.
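Reading out a long-term test like this is straightforward, even if running one isn't. Here's a minimal sketch, assuming two panels of new donors on different welcome streams and a standard two-proportion z-test on twelve-month retention; all the counts below are hypothetical:

```python
# Comparing first-year retention between two welcome-stream panels,
# using a normal-approximation two-proportion z-test (standard library only).
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic and two-sided p-value for a difference in proportions
    (pooled standard error)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 5,000 new donors per panel, still giving at month 12.
z, p = two_proportion_z(success_a=1_450, n_a=5_000,  # new welcome stream: 29.0% retained
                        success_b=1_300, n_b=5_000)  # old stream: 26.0% retained
print(f"z = {z:.2f}, p = {p:.4f}")
```

The catch, of course, is that you don't get this answer for twelve months, which is exactly why it has to be set up as a deliberate pilot rather than a one-off mailing test.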

Each of these questions cuts straight to the assumptions that underlie your program. And because the outcome measures are things like retention rate and long-term value, they can’t be answered with a one-piece response-rate-times-average-gift analysis.
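To see why, consider how retention compounds into long-term value. A toy sketch, assuming $50 in annual giving per retained donor and a flat year-over-year retention rate (both numbers invented):

```python
# Retention compounds into long-term value in a way one mail piece can't show.
# Both input numbers below are invented for illustration.

def five_year_value(annual_gift, retention):
    """Expected 5-year revenue per acquired donor: full gift in year 1,
    then gift x retention^t in each later year."""
    return sum(annual_gift * retention ** t for t in range(5))

print(f"45% retention: ${five_year_value(50, 0.45):.2f} per donor")
print(f"55% retention: ${five_year_value(50, 0.55):.2f} per donor")
# A 10-point retention gain lifts 5-year value by roughly 18% here --
# invisible to a single-piece response-rate-times-average-gift readout.
```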

Creating a pilot program – one that tests a transformative variable across communications and media – is the only way to get at these deeper questions.

Have you done this in your program?  Can you share results?

Nick