A false god

Customer segmentation algorithms rarely work for qual recruitment – but is there a deeper problem?

My heart sank the other week when I was asked to use a customer segmentation algorithm to recruit respondents for this week’s groups.  I’ve been here before:  “Thou shalt worship at the altar of the bright shiny customer segmentation model that is now driving all of the company’s business analysis and strategy; for it is good, and thou shalt not cast aspersions thereon, or thou shalt be cast out into the night.”

Sure enough, the algorithm was rubbish.  At first glance, it didn’t feel right; on closer examination, it just didn’t work.  If you followed the template that was supposed to spit out a respondent in the right segment, you were going to wind up with someone who logic dictated simply didn’t exist or, if they did, was not going to be representative of the type of person supposedly characterising the segment.  You didn’t have to be a rocket scientist to work this out; you just had to think logically and apply a little common sense.
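To make that concrete, here is a minimal, purely hypothetical sketch of the kind of sanity check that catches a screener template demanding someone who can’t exist – say, a recruit whose required purchase history is longer than their adult life. The criteria and thresholds below are invented for illustration; they are not taken from any client’s algorithm.

```python
# A purely hypothetical sketch of a coherence check on a recruitment screener.
# None of these criteria come from a real segmentation algorithm; they simply
# illustrate a template whose combined requirements no real person can meet.

MIN_ADULT_AGE = 18  # assume customers must be adults


def screener_is_coherent(min_age: int, max_age: int, min_years_as_customer: int) -> bool:
    """Return False if no respondent could possibly satisfy the template."""
    if min_age > max_age:
        return False
    # Nobody can have been a customer for longer than they have been an adult.
    if max_age - MIN_ADULT_AGE < min_years_as_customer:
        return False
    return True


# A template asking for an 18-24 year old with 10+ years of purchase history:
print(screener_is_coherent(min_age=18, max_age=24, min_years_as_customer=10))  # False
```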

What’s more, there was simply a mistake in there, a straightforward coding error that would lead you in exactly the wrong direction.  This was the client’s established segmentation study and no-one had yet spotted the error.

On a previous occasion, for another major client, I was easily able to prove, using the figures in the client’s own algorithm guide, that the percentage of the universe the algorithm would recruit was many times smaller than the percentage of the population the original study said fell into the relevant segments.  If we could find respondents who did fulfil all the requisite criteria, they’d clearly be freaks!  My protestations fell on deaf ears.  What is it about quantitative data that makes people disengage their brains and blithely accept what a little clarity of thought will clearly demonstrate is tosh?  It took over a year and several projects before the Head of Insight finally and reluctantly accepted that the segmentation was completely unsuitable for use in qual recruitment.
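For anyone who likes to see the arithmetic, here is a rough sketch of what I mean. The numbers below are invented for illustration (the client’s actual figures are not mine to share), but the shape of the problem is the same: stack up enough screening criteria and the pool of people who pass all of them shrinks to a fraction far smaller than the segment the study claims to describe.

```python
# A minimal sketch (hypothetical numbers, not any client's actual figures) of why
# stacking recruitment criteria can shrink the eligible pool far below the
# segment size the original study claims.

# Suppose the segmentation study says this segment is 18% of the population,
# but the recruitment algorithm requires all of the criteria below to be met.
claimed_segment_share = 0.18

# Incidence of each screening criterion in the general population (illustrative).
criteria_incidence = {
    "agrees strongly with attitude statement A": 0.30,
    "shops in channel X at least weekly": 0.25,
    "falls in the specified age band": 0.40,
    "scores 8+ on the stated purchase-intent question": 0.20,
}

# Treating the criteria as independent (a generous simplification), the share of
# the population satisfying all of them is the product of the incidences.
recruitable_share = 1.0
for incidence in criteria_incidence.values():
    recruitable_share *= incidence

print(f"Segment size claimed by the study: {claimed_segment_share:.1%}")
print(f"Share of population passing every screener: {recruitable_share:.2%}")
print(f"The algorithm recruits from a pool roughly "
      f"{claimed_segment_share / recruitable_share:.0f}x smaller "
      f"than the segment it claims to represent.")
```

With these made-up incidences the recruitable pool works out at about 0.6% of the population, against a claimed segment of 18% – the same order-of-magnitude mismatch described above.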

Now, while it’s a pain having to explain to clients why their precious segmentation doesn’t work for qual, it’s clearly something we can get around by understanding the spirit of the segment and designing proxy criteria to ensure we find the right people.

However, what if it’s not just the segmentation algorithm that’s tosh?  What if this is just a symptom of a segmentation study that is itself deeply untrustworthy?  And since a lot of segmentation algorithms don’t work, does this mean there are a lot of questionable segmentation studies out there?  I can see several reasons why users may not realise or accept that their segmentation study is flawed: it has the gloss of credibility that numbers bestow, and few people have sufficient confidence in their numerical analysis skills to challenge the data; it costs too much to discard; and senior management has bought into it.  It does make you wonder…

I’ll leave the (nearly) last word to a line usually attributed to my favourite qual-minded quantie, Albert Einstein: “Not everything that counts can be counted, and not everything that can be counted counts”.  But, in the case of segmentation algorithms, you really don’t need to be Einstein to see the problem.
