The biggest change over the last five years in the field of employee engagement measurement is the frequency with which employers poll their employees. Whilst it used to be common to run a survey every 12 or even 24 months, firms now want to survey multiple times per year.
One question we’re frequently asked is what the ideal frequency is for asking employees engagement-related questions. I don’t think there is a perfect, generally-applicable recommendation; however, the following describes our thinking.
Why measure frequently?
If we think about personal health, even if we have the ability to take measurements at any time it doesn’t necessarily make sense to do so. I may measure my weight on a regular basis but I can’t remember the last time I measured my height.
The reason that I don’t measure my height on a regular basis is that it doesn’t change (much). I’ve been about 187.5cm for the last 20 years. However during this time my weight has had a range of over 20kg. I’m certainly heavier now than I was one month BC, otherwise known as ‘Before Christmas’.
So, logically the only reason we’d want to capture engagement-type data often is if we believe that it changes frequently. Does it?
Thinking about organization-level engagement
All measurement has uncertainty. Depending on your variable of interest and your measurement approach you probably have different amounts of confidence in any measurement.
Most people report Employee Engagement at an organization level as an aggregate – typically the mean – of the individual scores of employees in that organization. This makes some sense in that it’s easy to do, but it adds extra uncertainty: we can’t really disentangle how much of the uncertainty comes from measurement error at the individual level and how much depends on who has been included in any given period.
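A minimal sketch can illustrate the second source of uncertainty. Here we assume (hypothetically) that every employee has a fixed "true" score and that nothing changes between periods; even so, two survey rounds report different organization-level scores simply because different people respond:

```python
import random

random.seed(0)

# Hypothetical population: each employee has a fixed "true" engagement
# score on a 1-5 scale. Nothing about the organization changes between
# survey rounds.
employees = [random.gauss(3.5, 0.8) for _ in range(1000)]

def org_score(response_rate):
    """Organization-level score: the mean of a random subset of
    respondents, as most reporting does."""
    respondents = random.sample(employees, int(len(employees) * response_rate))
    return sum(respondents) / len(respondents)

# Two rounds with identical underlying engagement but different
# respondents generally report (slightly) different scores.
print(org_score(0.4))
print(org_score(0.4))
```

The gap between the two printed scores is pure sample-composition noise, not a change in engagement.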
Lots of uncertainty in how we measure engagement
The next thing to recognise is that in almost every instance we’re not measuring engagement itself (assuming we can even agree a common definition, but that’s another issue). What we are actually recording are answers to a number of questions.
Given a whole range of factors – from culture and language to user interface and the number and type of scale points – we get different responses.
It’s worth considering that, as individuals, we hold a feeling about any particular question somewhere on a continuum between two extremes. When presented with a categorical scale question – e.g. a Likert scale – we carry the burden of converting where we see ourselves on that continuum to what we think is the nearest value on the scale. Two people with identical feelings can make this conversion differently, and the same person facing the same question might change their interpretation over time. There is uncertainty in almost everything.
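The conversion step can be sketched in a few lines. This is an illustrative assumption (evenly spaced categories, nearest-point rounding), not a claim about how any real respondent decides; it shows how two near-identical feelings can still land on different scale points:

```python
def to_likert(feeling, points=5):
    """Map a continuous feeling in [0, 1] to the nearest point on a
    1..points Likert scale (illustrative: assumes even spacing)."""
    return round(feeling * (points - 1)) + 1

# Two people with near-identical feelings fall either side of a
# category boundary and report different answers.
print(to_likert(0.60))  # -> 3
print(to_likert(0.66))  # -> 4
```

The same latent feeling can also map to different values on a 5-point versus a 7-point scale, which is one reason changing the scale mid-programme breaks comparability.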
Measurement uncertainty and regular engagement measures
In the old world, where people ran engagement surveys infrequently – say once every year or two – there would often be endless conversations in organisations about the right timing. There was an implicit feeling that organizational events, such as restructure announcements, would make a big difference, so the measurement team would try to pick a favourable time. The team itself sensed the uncertainty.
We can probably think of the measured value as an underlying engagement value plus short-term noise. As managers, we want to understand the underlying value. If the noise were totally independent of any short-term issues then, with a large enough population, we could probably assume that the noise was approximately normally distributed and would average itself out.
However, the concern about picking the right time raises two issues:
- We probably can’t assume that the noise is independent
- Often we report down to small groups, where we won’t see this averaging out.
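A small simulation, under the assumed model of underlying value plus independent normal noise (both values here are made up for illustration), shows both points at once: across a large population the noise largely cancels, while a small reporting group can sit well off the true value in any single round:

```python
import random

random.seed(1)

TRUE_ENGAGEMENT = 3.5  # assumed underlying value on a 1-5 scale
NOISE_SD = 0.7         # assumed short-term, independent noise

def measured_mean(group_size):
    """Mean of one survey round: underlying value + independent
    normal noise per respondent."""
    scores = [TRUE_ENGAGEMENT + random.gauss(0, NOISE_SD)
              for _ in range(group_size)]
    return sum(scores) / len(scores)

# Large population: the round's mean lands very close to 3.5.
print(round(measured_mean(5000), 2))
# Small reporting group: a single round can be noticeably off.
print(round(measured_mean(8), 2))
```

If the noise is not independent (e.g. a restructure announcement shifts everyone at once), even the large-population mean is biased, which is exactly the timing worry above.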
When infrequent surveys make sense
If we think about what this logically implies, there are two conditions under which the historic, infrequent measurement cycle makes sense:
- That an employee’s perception doesn’t change frequently
- That the noise or measurement error is small and therefore one measure is good enough.
Do these apply?
We’ve got Workometry clients who run regular, even monthly, pulse surveys. When we look at individual-level longitudinal data we see that it is mostly stable most of the time. However, from time to time we see an individual who was stable suddenly start to change – usually becoming less positive.
This stability also implies that measurement error is probably relatively small. It’s more plausible that a stable reported value reflects genuinely infrequent change than that real change is repeatedly ‘hidden’ behind offsetting measurement error.
The downside of frequent measurement
There is another issue facing us as we think about how often to ask employees for feedback: being asked the same questions repeatedly introduces fatigue. Fatigue means fewer people respond, reducing data quality.
When we asked infrequently, most companies had a top-down action-planning process that took many months to complete a cycle. Given the infrequent nature of asking, it was still possible to make the changes and let the organization adapt before the next cycle.
With frequent feedback, even with a more agile approach to closing the loop, it can still be difficult for the organization to adapt if the cycle is too short. We’ve seen this in the feedback itself – employees asking why they were being asked the same questions when there were other topics that could be polled.
The ideal frequency
Our current view is that, unless significant organizational change is occurring (e.g. during a merger), quarterly is probably as often as you’d want to ask employees the same full set of questions. Any more frequently than this and we start to see employees telling us it’s too frequent.
But does this mean you shouldn’t ask for feedback more often?
We believe, from looking at our data, that whilst most of the time an individual’s perception is stable, there are times – probably after key events – when an employee’s views can change. (See here and here for more information.) Given there is likely a trigger that causes this, we feel it’s important to be as close to the event as possible. From this perspective, monthly is probably the ideal frequency.
Meet the hybrid feedback approach
So how do we keep close to the events yet not be repetitive? By building a feedback calendar.
An engagement survey usually has two components: a question or questions to understand or infer engagement, and a selection of questions whose objective is to find variables associated with higher or lower engagement.
Our view is that we need to add a third section – asking whether employees’ perceptions have changed since the last period and, if so, what triggered the change.
With most clients we’re using a version of the eNPS as a proxy to measure engagement. We always ask the question about change so we can be close to the event.
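For reference, the standard eNPS calculation (the text doesn’t specify which variant Workometry uses, so this shows the conventional one) takes responses to a 0–10 recommendation question and subtracts the percentage of detractors from the percentage of promoters:

```python
def enps(scores):
    """Conventional employee Net Promoter Score: % promoters (9-10)
    minus % detractors (0-6) on a 0-10 scale; passives (7-8) are
    counted in the denominator but neither add nor subtract."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Two promoters and two detractors out of six responses cancel out.
print(enps([10, 9, 8, 7, 6, 3]))  # -> 0
```

Note that, as a difference of two percentages, eNPS discards information (a workforce of all 8s and a workforce of all 7s both score 0), which is one reason to treat it as a proxy rather than the measure itself.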
However, we don’t have to ask the supplementary ‘factor’ questions every time; these tend to be relatively stable. Instead we take the opportunity to ask employees about other important topics.
Building a feedback calendar
How do we work with clients who want to take this approach?
- To start, we set the months where we’re going to ask for engagement-focused feedback. This will probably be quarterly, so we’ve filled four months.
- We then identify other events where it might be useful to get employee feedback – for example performance reviews, company results, promotions, or business cycles. These are events that can be identified a long time in advance; doing so might fill three or four more months.
- Finally, we work with executives to identify key business challenges on which they might want employees’ views. Sometimes this can be planned ahead; otherwise we might instigate a process to identify the topic closer to the time, with several dates held in the diary.
The final thing to note is that we might identify topics based on earlier results. If you need to go back to the full employee base you might run a follow-up month; often, however, you can add a question or two to the monthly questions for a defined segment.
Asking open questions
The key to all of this is that if you ask for feedback it has to be quick to complete and seen as relevant. Our view is that we get the best-quality data by asking open questions and scaling this qualitative approach.
Doing so means it typically takes less than five minutes to complete a feedback request. As we can share the company-wide feedback very shortly after a feedback period closes, we can work with our clients’ teams to communicate back to employees and to identify which groups and individuals need to act. The context provided by open questions means managers can easily identify what needs to change.