
Engagement surveys – Part 1, issues with the traditional approach


In these articles I use the term ‘survey’ to cover both a true survey, where a sample of employees is asked, and a census, where everyone is asked.

There is currently a shift from long, infrequent engagement surveys to shorter ‘pulse’ surveys, used either as a replacement for or a supplement to the longer survey. In this and the following post I want to discuss some of the reasons for this shift and the advantages we see in it. As always, I hope to give a data-led perspective.

Some background

Organizations, recently through their HR departments but before that via operations research groups, have been conducting employee surveys for around 100 years. In the 1970s the emphasis moved to organizational commitment and job satisfaction, and responsibility shifted from OR to HR. There had been earlier work by Katz (1964) on Organizational Citizenship Behaviour, which discusses organizational commitment.

Engagement was first described by William Kahn in 1990 but was popularised by Gallup’s ‘First, Break All the Rules’ in 1999. Since then most organizations have been conducting some form of engagement survey.

Over the same timeframe the technology for completing surveys has changed. In the 1990s and earlier most surveys were still done on paper. When web technology entered the employee survey space, we saw surveys that were fundamentally electronic replications of a paper survey. This made sense at first, as many organizations ran both in parallel for some time. We still see paper in some environments, such as with delivery drivers.

About the surveys

Engagement surveys tend to follow a common design: they ask a small set of questions, usually around five, to create an engagement index, and then a large number of further questions to identify items linked to that engagement. Most annual surveys run to between 60 and 150 questions. I would estimate it takes an employee about 20 – 30 seconds to answer each question, so a 100-question survey consumes roughly 30 to 50 minutes per employee.

Data is also used to describe the demographics of each participant. We see both self-reported demographics and demographics imported from other HR systems. The latter is a more effective way of getting good data, but in some firms it raises privacy concerns.

There is a potentially enormous number of factors that could be associated with engagement. As Kieron Shaw noted in “Employee engagement, how to build a high-performance workforce”:

“It’s arguably unfeasible to directly measure in the survey all the actions behind engagement,” due to the fact that, “there are potentially thousands of different individual actions, attitudes, and processes that affect engagement.”

Hence, however long the survey, the designer has had to select a subset of the potential factors.

Criticisms of traditional surveys

In a fascinating paper, “Measuring Employee Engagement: Off the Pedestal and into the Toolbox”, Andrew Graham of Queen’s University notes nine issues with the traditional survey:

  1. Not frequent enough
  2. Single scoring leads to issue distortion
  3. Aggregation reduces meaning
  4. Does not capture the specifics (context seldom captured)
  5. Lengthy or poor response planning
  6. Managers are busy & have no incentive to implement any actions
  7. Lot of resources & monitoring
  8. Surveys get old
  9. Causality not clear

A tenth issue that we find as analysts is an illusion of richness: many firms think that by asking 80 questions they are capturing 80 independent data points. This is clearly not the case.

Issues with survey data

One of the analyses we like to do with survey data is to build a correlation graph. This uses each question as a node and the correlation between each pair of questions as an edge. When you visualise survey data in this manner you typically get something like the following:

[Figure: correlation graph of survey questions, drawn as a network]

What we see is a hairball: each question tends to be highly correlated with the others. (In the graph above, questions 31 – 33 were added by the HR team and relate to a process which obviously has little link to engagement.)
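A minimal sketch of how such a graph can be built, assuming the responses sit in a pandas DataFrame with one column per question; the correlation method and edge threshold here are illustrative choices, not a fixed recipe:

```python
import pandas as pd
import networkx as nx

def correlation_graph(responses: pd.DataFrame, threshold: float = 0.4) -> nx.Graph:
    """One node per question; an edge wherever the absolute pairwise
    correlation exceeds `threshold`."""
    corr = responses.corr(method="spearman")   # rank correlation suits ordinal answers
    g = nx.Graph()
    g.add_nodes_from(corr.columns)
    for i, qi in enumerate(corr.columns):
        for qj in corr.columns[i + 1:]:
            r = corr.loc[qi, qj]
            if abs(r) >= threshold:
                g.add_edge(qi, qj, weight=r)
    return g

# responses: one row per respondent, one column per question (answers coded 1-5)
# nx.draw_spring(correlation_graph(responses), with_labels=True)
```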

We’ve run experiments with survey data where we ‘destroyed’ 80% of all answers at random and then used recommendation algorithms to back-fill what had been removed. In most instances we were able to reinstate the deleted answers accurately. People answer in patterns (hence the hairball): if you know some of a person’s answers you can infer the rest with high accuracy. This means you could probably ask each employee a different random subset of questions and dramatically shorten the survey with little loss of accuracy.
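A sketch of that masking experiment, using a simple iterative low-rank (SVD) reconstruction as a stand-in for whichever recommendation algorithm one prefers; the rank and iteration count are illustrative:

```python
import numpy as np

def mask_and_impute(X: np.ndarray, frac: float = 0.8, rank: int = 3,
                    iters: int = 50, seed: int = 0) -> float:
    """Hide `frac` of the answers at random, back-fill them with an
    iterative rank-`rank` SVD reconstruction, and return the mean
    absolute error on the hidden cells."""
    rng = np.random.default_rng(seed)
    hidden = rng.random(X.shape) < frac
    filled = np.where(hidden, X.mean(), X)      # start from the global mean
    for _ in range(iters):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
        filled = np.where(hidden, approx, X)    # keep the observed answers fixed
    return float(np.abs(filled[hidden] - X[hidden]).mean())

# X: respondents x questions matrix of answers coded 1-5. An error that is
# small relative to the 1-5 scale shows how redundant the answers are.
```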

Issues with User Interfaces

This is a bit more contentious. It relates to how questions are asked.

Most employee surveys use Likert-scale questions, typically five points between ‘strongly disagree’ and ‘strongly agree’. One reason for this is that on a paper survey it is easy for someone to code the data into a reporting system (a check in a box is easy to see). This paper-era process was then moved onto the web with little thought given to adapting the questions to take advantage of the opportunities presented by the new medium.

An employee’s true feeling actually sits on a continuum between the two end points. Asking them to answer on a 5- or 7-point scale forces them to ‘bin’ that feeling to the nearest compromise point, which adds burden for the survey taker and potentially adds inaccuracy to the data. Nor can the data be treated as linear; one should use statistical methods appropriate for ordinal data.
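A toy illustration of that binning effect; the latent-attitude distribution here is invented purely for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
true_feeling = rng.beta(2, 2, size=10_000)   # latent attitude on [0, 1]

# A 5-point Likert item snaps the attitude to the nearest of 5 evenly spaced points
likert = np.round(true_feeling * 4) / 4

# A visual analogue scale records the position on the line directly
print("mean |binning error|:", np.abs(likert - true_feeling).mean())
```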

In a 2012 paper in the journal Field Methods, “Why Semantic Differentials in Web-Based Research Should Be Made From Visual Analogue Scales and Not From 5-Point Scales”, Funke & Reips present experimental evidence that marking a line between two end points – a visual analogue scale – has numerous advantages over traditional 5-point scales, two of which are more accurate data and less burden on the survey taker.

Whether the answer is a visual analogue scale or a scale with a larger but still discrete number of points (the 0 – 10 scale used by NPS practitioners?) is harder to determine. However, I see little evidence that five points is the right approach.

Should we even be asking scale-based questions?

Finally, too often what drives executive action from survey data is the responses to a few open-text questions. As Graham notes in his fourth issue, survey data rarely provides context; the qualitative nature of open text does provide this opportunity.

Often the initial action from a survey is to commission more qualitative research focussing on several key topics. Such research is both time-consuming and expensive. (Arguably, acting without understanding the context can be more expensive still.)

There are instances where asking a scale question makes sense, most notably if you’re wanting to report a trend. However asking sufficiently broad, open questions will likely capture richer data. The challenge for many firms is how to do this at scale.

If we think about how we’d try to understand an issue in a conversation, we’d ask open questions and then probe with relevant follow-ups. I firmly believe this will be the future of employee feedback, though it will be a bot-based conversational approach that can be run in multiple languages and at scale.

As an industry we’re not there yet, but the future is coming quickly. In the next article I’ll describe the current state, share our findings from working with clients at the cutting edge, and highlight some approaches taken by other innovators in this market.

How text analytics is changing the employee survey


In my last post I discussed the process by which employee engagement increases and decreases, and therefore which interventions leaders can most effectively use to make a long-term difference. In this post I explore a related question – what do we need to change?

As People Analysts, OrganizationView have conducted a lot of surveys over the last six years. One thing we’ve come to appreciate is how important employees’ open-text comments are in driving action from management. The difficulty has usually been how to deal with tens of thousands of comments, provided in multiple languages. This was one of the driving forces behind our decision to develop Workometry, our employee feedback platform.

Before the advent of reliable text analytics, analysing these comments at scale was prohibitively slow and expensive, and much of the way employee surveys have developed reflects this constraint. However, just as statistics has been changed by cheap, plentiful computing power and falling data-acquisition costs, we predict that understanding employees will be changed by the availability of text analytics.

With text there are two main tasks: we want to categorise the text into one or more topics, and we may want to score it on some sort of scale, for example a level of sentiment.

When categorising we want to go further than just words. We look at two dimensions – the topic (e.g. ‘career development’) and the context (e.g. ‘shortage’). This has to be more than keyword matching, as we want to link together multiple synonyms – it’s the meaning, not the particular words chosen, that is important.

Doing this adds metadata to our text, to which we can then apply various statistical techniques. Typically we identify in the region of 40 – 60 different topics for any text question. We can think of this as equivalent to adding another 40 – 60 scale questions to a survey; we can therefore ask the short surveys needed to maintain response rates in frequent use whilst still capturing very rich data. We use an unsupervised learning approach, meaning that the topics are suggested by the data rather than precoded from generic results.
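As a rough sketch of that unsupervised step – NMF on TF-IDF features here stands in for Workometry’s actual models, which are not public:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

def suggest_topics(comments, n_topics=40, top_words=8):
    """Let the comments themselves suggest the topics rather than
    coding them against a generic, pre-built framework."""
    vec = TfidfVectorizer(stop_words="english", min_df=5)
    X = vec.fit_transform(comments)
    nmf = NMF(n_components=n_topics, random_state=0)
    loadings = nmf.fit_transform(X)            # comment x topic weights
    vocab = vec.get_feature_names_out()
    topics = [[vocab[i] for i in comp.argsort()[::-1][:top_words]]
              for comp in nmf.components_]
    return loadings, topics
```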

One of the reasons that we do analysis is to draw attention to the parts of the information that managers need to focus on. We’re currently doing that by combining two techniques.

One of the joys of working with employee data is that we often have extensive records about each employee – who they are, what role they’re doing, who they work with, how they’ve performed, and key events (job changes, pay rises, etc.). Linking this data to the perception data lets us provide much greater insight than analysing the much smaller survey data on its own.

With Workometry we use probabilistic models to identify which groups are most likely to discuss a certain topic. We can incorporate information from HR systems, business systems and even other survey answers provided at a different time – for example, perceptions of the onboarding process.
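A minimal sketch of one such model; `employees` is a hypothetical DataFrame combining HR-system fields with a 0/1 flag for whether each person mentioned a given topic, and the column names are invented:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical frame: one row per respondent, demographics from the HR
# system plus a flag for whether they mentioned e.g. career development.
X = pd.get_dummies(employees[["department", "tenure_band", "location"]])
model = LogisticRegression(max_iter=1000).fit(X, employees["mentions_career"])

# The largest coefficients point at the groups most likely to raise the topic
odds = pd.Series(model.coef_[0], index=X.columns).sort_values()
print(odds.tail(5))
```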

The results from these models can then be used within interactive visualisations to guide the user to the most interesting groups / results. The interactivity lets the user drill down into the data in new ways, guided by the models and ultimately lets them see the underlying text comments.

One very simple but, in our experience, very powerful way of looking at text data is to identify which topics are most likely to be discussed by engaged versus disengaged employees. We see a clear pattern: employees’ demands become far more transactional as they move from engaged to disengaged. This fits with what we learn from exit interviews, and suggests a two-stage process – first people become disengaged, then some leave. This supports the systems-dynamics model I discussed in the last post.
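One simple way to compute that comparison, assuming parallel 0/1 arrays marking topic mentions and engagement status:

```python
import numpy as np

def topic_odds_ratio(mentions: np.ndarray, engaged: np.ndarray) -> float:
    """Odds that disengaged employees raise a topic, relative to engaged ones.
    Both arguments are 0/1 arrays with one entry per employee."""
    p_dis = mentions[engaged == 0].mean()   # mention rate among disengaged
    p_eng = mentions[engaged == 1].mean()   # mention rate among engaged
    return (p_dis / (1 - p_dis)) / (p_eng / (1 - p_eng))
```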

Finally, what we’ve learnt from looking at vast quantities of text in this manner is that information in text comments seems to change much more quickly than answers to scale-type questions. For one client we saw a jump in comments about technology three months before we saw movement in a related scale question. This ability to surface issues whilst they’re still emerging should be seen as a key benefit of collecting and analysing text.

Cost ≠ Value. Issues with the total compensation approach


In several sectors, compensation professionals and the HR teams they serve talk about total compensation. When communicating compensation messages to employees and prospective employees, they sum the cost of each component and present the total as the value of the package.

Unfortunately, what matters to employees is not the fiscal cost of the package but its perceived value, and the two are rarely the same. Given the money instead, employees would often allocate it in a way that gives them more perceived value. Economists describe the shortfall as a deadweight loss: a waste of resources that could be averted without making anyone worse off. Put another way: if, instead of providing non-cash benefits, a company paid just cash, the benefits package would only be efficient if the employee would have chosen to spend the money in exactly the same way as the employer did.

In a famous 1993 paper, Joel Waldfogel attempted to calculate the deadweight loss that giving Christmas presents generates compared with giving the recipient the equivalent cash. He estimated the loss at between 10% and a third of the amount spent. It is highly likely that your total compensation figure is being similarly devalued by employees.

Why non-cash benefits can make sense

So economic theory suggests that the most efficient way of providing compensation – the one that creates the highest perceived value for employees – is simply to provide cash. However, this misses part of the picture: companies can often purchase goods and services at considerably lower prices than employees can. Taking Waldfogel’s 10 – 33% estimate, if the company can buy goods on the employee’s behalf at that sort of discount, the package can be worth more to the employee than a correspondingly bigger salary.
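A rough illustration of the arithmetic, with invented numbers inside Waldfogel’s range:

```python
retail_price = 1000    # what the benefit would cost the employee at retail
perceived_value = 800  # retail less a 20% deadweight loss (within the 10-33% range)
company_cost = 700     # what the employer pays at its negotiated discount

# The benefit beats handing over its cash cost only if the perceived
# value exceeds what the employer actually spends:
print(perceived_value > company_cost)   # True: 800 of value for 700 of cost
```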

How to measure what is a valued package

Techniques exist for measuring how the various components of a package are perceived, but employers rarely use them to measure the perceived value of benefits packages. They should.

The class of problem you need to understand is termed discrete choice: you have a fixed total resource to allocate, and having more of one thing means having less of another. The only way to measure preferences properly is to replicate this trade-off.

The problem of maximising the perceived value of a package of goods is one marketers face daily. The established technique for measuring it is conjoint analysis, and it can be applied very effectively to the compensation problem.
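A minimal sketch of a choice-based variant: respondents pick between pairs of packages, and a logit fitted on the attribute differences recovers relative part-worth utilities. The attributes and choice data here are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row codes one paired comparison as package A minus package B:
# (salary difference in GBP, holiday days, pension percentage points).
X_diff = np.array([
    [ 1000.0, -2.0,  1.0],
    [ -500.0,  3.0,  0.0],
    [    0.0, -1.0,  1.0],
    [ 1500.0,  0.0, -1.0],
])
chose_A = np.array([1, 0, 1, 1])   # 1 if the respondent picked package A

# No intercept: identical packages should be a coin flip
logit = LogisticRegression(fit_intercept=False).fit(X_diff, chose_A)
partworths = dict(zip(["salary", "holiday", "pension"], logit.coef_[0]))
print(partworths)   # relative utility weights implied by the choices
```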

The importance of choice

The most effective way of maximising the perceived value of a compensation package is to provide a wide menu of options and let each employee choose what they want. This menu should be extensive, and it should be possible to change the items on a reasonably regular basis, because as employees’ preferences change, their ideal basket changes too.

The issue with this approach is that it’s expensive. It arguably works best for companies with a large number of relatively similar employees who share a similar set of preferences; it’s used quite frequently in large professional services firms, for example.

If you don’t want to do this, the second-best approach is to provide a series of menu options that bundle benefits in ways that appeal to various segments of your workforce. The great thing about discrete choice experiments such as conjoint analysis is that the results they produce are differentiated by respondent – which means cluster analysis is possible on the data.

Communicating benefits to employees

The difference between cost and perceived value makes communicating benefit value difficult. If you communicate a cost to employees and it is not something they would have chosen, there is a real danger that they will view it as wasted money. One way of reducing this risk is to report not only your cost but also the market cost, so that the employee sees you have got them a bargain. Whichever way you communicate, it is always worth testing the materials to understand reactions to the wording. If you don’t want to do this directly with employees, it’s relatively straightforward to recruit test participants who resemble your workforce.

Negotiating a salary in a total comp world

If you are a prospective employee wanting to negotiate a good package, it’s always worth starting from the reported total compensation value of your current package. Owing to deadweight loss, its value to you is likely lower than your current employer reports. Starting at the reported level lets you negotiate a package that more closely matches your perceived value, capturing some of that deadweight loss for yourself.