Surveys should be mostly open text

Tom H. C. Anderson of OdinText has been writing about research he has run comparing Likert and open-text survey responses.

Tom used Google Surveys to ask two samples of 1500 people a question about perceptions of Donald Trump’s controversial executive order banning people from certain Muslim-majority countries. To one group he asked the question as a typical five-point Likert question.

[Image: the question asked as a five-point Likert scale]

To the second group he asked the exact same question, but instead let them respond in open text.

[Image: the same question asked as open text]

As you can see, the answers are remarkably similar – within 2% of each other. According to his experiment, an open question is as suitable for gauging the level of agreement as a traditional quantitative scale question.

Tom’s findings closely match what we see when using open questions as a replacement for traditional quantitative questions. We tend to use a combination of structured and unstructured questions in employee feedback requests, but by far the most important are the open-text questions. Open questions can achieve most of the aims of a scale question while providing some notable advantages.
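
As a toy illustration of how open answers can be turned into the same kind of agreement distribution a Likert question gives you, here is a minimal sketch that codes invented free-text answers with crude keyword rules. All answers and rules are made up; real tools such as OdinText use far richer text analytics:

```python
from collections import Counter

# Invented answers to an "agree or disagree?" style open question.
answers = [
    "Yes, I completely agree with it",
    "No way, this is wrong",
    "Not sure, it depends",
    "I strongly disagree",
    "Absolutely, about time",
    "I don't really have an opinion",
]

def code_agreement(text: str) -> str:
    """Crude keyword coding; real tools use far richer NLP."""
    t = text.lower()
    if "disagree" in t or "no way" in t or "wrong" in t:
        return "disagree"
    if "agree" in t or t.startswith("yes") or "absolutely" in t:
        return "agree"
    return "neutral / unsure"

# Tally the coded answers into a Likert-like percentage distribution.
counts = Counter(code_agreement(a) for a in answers)
for label, n in counts.most_common():
    print(f"{label}: {100 * n / len(answers):.0f}%")
```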

In his post, Tom later highlights the difference between the Likert and the open-text data, where the latter provided much richer information on the long tail of responses (he describes these as low-incidence insights). As he notes:

“While there is nothing more to be done with the Likert scale data, the unstructured question data analysis has just begun…”

Recently, a client asked us why we didn’t have a sorting question available in Workometry. Our answer was that for that type of question we’d prefer to use open text.

Their proposed sorting question had 8 different choices for the employee to sort. I could show an almost identical question, asked as open text by another client, where we had identified just under 30 different options. Whilst we hadn’t run a controlled test like Tom’s, given our experience we’d expect pretty much identical results. A sorting question would be right only if you want to limit the potential options to a subset of the person’s true opinions.

In a recent FT Tech Tonic podcast, Astro Teller, the head of Alphabet’s ‘X’ lab, notes (at about 14:45):

“If you are serious about learning you set up your test in a way you don’t know the answer to the question you’re asking and you’re indifferent to the answer. You have to do that or you’re not really learning you’re biasing either the test or how you read the data.”

However good your survey design is, if you’re using a structured, choice-based question you are, by design, limiting what the answer can be.

Open questions, on the other hand, give you the option of not knowing what you’re going to learn before you ask. In our experience, with a question like the ‘most important’ / sorting question above, it is common to find that 2 or 3 of the top 10 answers are ones you wouldn’t have included in a similar structured question.

The other thing that text provides is context. A structured question might identify the strength of feeling (though the example above shows that text can do this equally well) and who holds which feeling, but it can’t show why they feel it. This is why, when companies run large engagement surveys, the immediate follow-up is often a series of focus groups on particular topics to understand the context.

When would we recommend structured questions?

Even though we believe that in most instances the right option is to ask an open question, there are a limited number of occasions when a structured question might be better. We find ourselves using them:

  • when we want to report a simple metric over time, e.g. engagement or eNPS
  • when our objective is to build a model and we need a score on a target variable for each individual

In both of these instances it’s because you want a score on something simple and to purposely constrain the answer. We might use such a score to determine which topics those who are engaged (or disengaged) are more likely to be discussing. It’s important to note, however, that for any feedback request we might ask only 2 or 3 scale questions.
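
As a rough illustration of that second use, the sketch below uses an eNPS-style score purely as a label and then compares how often ‘engaged’ and ‘disengaged’ respondents raise each open-text topic. All data, column names, and topics are invented; this is not Workometry’s actual method, and in real data the topics would come from a text-analytics step rather than being hand-labelled:

```python
import pandas as pd

# Toy data: one row per respondent, an eNPS-style score from a scale
# question, and topics extracted from their open-text answer.
responses = pd.DataFrame({
    "enps": [9, 2, 10, 3, 8, 1, 9, 4],
    "topics": [["recognition"], ["workload"], ["recognition", "career"],
               ["workload", "management"], ["career"], ["workload"],
               ["recognition"], ["management"]],
})

# The structured score is used only to label each person engaged/disengaged...
responses["engaged"] = responses["enps"] >= 7

# ...then we compare how often each group raises each open-text topic.
exploded = responses.explode("topics")
rates = (exploded.groupby("topics")["engaged"]
         .agg(mentions="size", share_engaged="mean")
         .sort_values("share_engaged", ascending=False))
print(rates)
```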

Why are scale questions so popular?

If open questions hold so many advantages, why do most surveys consist mostly of scale questions?

A very big factor is that it’s easy to report and analyse scale questions. Before tools like OdinText for customer feedback or Workometry for employee feedback, analysing large volumes of unstructured data was hard.
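
Now, however, that is not the case. As a rough illustration of how low the barrier has become, here is a minimal sketch that groups a handful of invented answers using scikit-learn’s TF-IDF vectoriser and k-means clustering. It is a toy example, not how OdinText or Workometry actually work:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented open-text answers to an employee feedback question.
answers = [
    "My manager never gives any recognition",
    "More recognition from leadership would help",
    "Far too much work and unrealistic deadlines",
    "Too much work, not enough people on the team",
    "I would like a clearer career path and development",
    "No visible career path here, promotion criteria unclear",
]

# Turn each answer into a TF-IDF vector, then group similar answers.
vectors = TfidfVectorizer(stop_words="english").fit_transform(answers)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

# Print each answer with the cluster it was assigned to.
for label, answer in sorted(zip(kmeans.labels_, answers)):
    print(label, "|", answer)
```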

Given the rapid progress of text analytics, I suspect we’ll start to see the gradual death of the traditional survey. If you’re serious about learning, it can’t come too soon.