People Analytics

Using Employee Voice in Open Innovation


One of the key trends we’ve seen with Workometry clients over the last 18 months is the increasing use of Employee Voice to enable innovation across global organizations. Leading firms are actively involving their employees in contributing ideas and identifying issues that need addressing, effectively harnessing this huge cumulative insight as large-scale collaboration.

To understand this trend in more detail I interviewed Yves Zieba, a prominent advocate of open innovation here in Switzerland. I hope that you enjoy it.


Yves Zieba

Hi Yves, could you please give us an introduction to who you are?

I am a typical ‘slasher’. I share my time between co-animating an open innovation space and facilitating workshops on strategic agility, design thinking or the fablab experience. I am also coaching successful SMEs in the European Union’s Horizon 2020 programme. We apply leading-edge facilitation practices with Seethrough and the ‘business models for the open economy’ workshop from Without Model. It is our way of combining open innovation principles with facilitation skills on emerging HR and peopletech topics.

What is ‘open innovation’? How are firms using it?

It is primarily a mindset. It is often confused with open source; there are similarities, although it is not exactly the same thing. The idea is to co-create with the right customers, partners and providers. It is the exact opposite of what happens in closed and highly secretive R&D departments. It can take several formats. What I see most these days are idea boxes, hackathons, public-private partnerships, problem-solving contests, UX beta-testing communities, and corporate venturing (i.e. large groups working with small startups in different partnership formats).

What is the employees’ role in open innovation?

It is to adopt this open mind, this equal-to-equal relationship, and to develop their ability to co-create, to work in flexible teams and to co-develop modular solutions with external stakeholders in the same way as with internal stakeholders. Most of the time the open innovation transformation takes place at the same time as the digital transformation, so the employees’ role is to drive this change, or at least to understand it and contribute positively to it. To some extent, taking some risks is also part of the game. Employees can now take advantage of the emergence of new management techniques and the latest organisation designs, such as holacracy.

When we facilitate open innovation or ‘business models for the open economy’ workshops, I am surprised to realise how little most people know about these principles or about Creative Commons licences.

What have been the challenges to using employee voice in this manner?

In open innovation, as in design thinking, the prerequisite is to adopt an equal-to-equal relationship. I typically ask everyone to adopt a beginner’s mindset and to start to ‘think together’. In reality it takes time, as existing hierarchies continue to carry their weight and make use of their organisational stamina. So when we are using employee voice, typically in an ‘idea box’ context, it is crucial to have transparent selection criteria. We can also introduce elements of a collaborative, participative approach to project selection and investment decision-making. We plan ahead to be clear about the rules of the game (which ideas will be funded, how, what will happen to the team, who owns the IP…). Companies sometimes fail here; it typically occurs when they ask their employees to contribute and then fall short when it comes to implementing the great ideas.

Who would you say has done this well? What benefits did they realise?

There is a long list of organisations who have been using open innovation extremely well and successfully for years. To name a few of my favourite examples: Quirky for collaborative inventions; Samsung, Lego and GE with their ergonomic design challenges; Philips for crowdsourcing in the healthcare and wellbeing industry; Beiersdorf with the Nivea campaign; Procter & Gamble’s Connect & Develop campaign; and the Mozilla toolkit. The University of Geneva Quality of Life Lab and the Geneva Lab are together preparing a great programme with ENoLL, so there should be lots of best practice on display at OpenLivingLabDays2018 during the week of August 23rd.

These players have adopted what they like about open innovation to meet their needs.


Sometimes the best solutions aren’t the most sophisticated


Last week I was at Tucana’s People Analytics World 2018 in London. I have a special affinity for this conference as I co-chaired the first three years, from when a hundred or so people came together in a small space in Canary Wharf (and the Tucana team was tiny) to today, when several hundred of Europe’s top practitioners meet in the QEII Conference Centre in Westminster.

Over the five years that the conference has run we’ve arguably seen three phases in the analytics approaches presented:

  • The decision – whether to do People Analytics at all
  • The academic – an approach to People Analytics where the objective was to run a project like academic research
  • The value – where time and effort concentrated on identifying business value.

This year was the first where I had seen value being cited as the prime driver.

I’ve written about this before. In 2016 I wrote “The greatest mistake for many in people analytics” where I discuss the need for a loss function to convert the statistical model to business value.
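
To make the idea of a loss function concrete, here is a minimal, hypothetical sketch in Python (all figures and parameter names are invented for illustration, not taken from the original article) of converting a predicted attrition probability into an expected saving, so that interventions can be compared in money terms rather than in terms of model accuracy.

    # Illustrative only: invented figures for converting an attrition model's
    # output into expected business value via a simple loss function.
    REPLACEMENT_COST = 50_000   # assumed cost of losing and replacing one employee
    INTERVENTION_COST = 2_000   # assumed cost of a retention intervention
    INTERVENTION_EFFECT = 0.30  # assumed relative reduction in leaving probability

    def expected_saving(p_leave: float) -> float:
        """Expected net saving from intervening on one employee."""
        loss_without = p_leave * REPLACEMENT_COST
        loss_with = p_leave * (1 - INTERVENTION_EFFECT) * REPLACEMENT_COST + INTERVENTION_COST
        return loss_without - loss_with

    # Intervene only where the model's probability implies a positive expected saving.
    for p in (0.05, 0.15, 0.40):
        print(f"p(leave)={p:.2f}  expected saving={expected_saving(p):8.0f}")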

In the past too many presentations discussed finding statistically relevant results but didn’t extend the analysis to the business impact of the change. At People Analytics World 2018 two presentations in particular made the necessary leap from this rather academic approach to one centred on identifying value.

Michael Tocci of P&G even joked that his former academic peers would challenge him because of his ‘less-rigorous’ approach. However, his excellent presentation showed how P&G analysed the value created through its global mobility programme and refocused it to ensure it was being used effectively. The approach was highly mature and always value-led.

Pressed with a challenge, presented to investors, of pulling cost out of employment costs, P&G used analysis to understand how to do this in a way which minimised the long-term disruption to the organization. Make no mistake: this was not cost-cutting for the sake of cost-cutting, but an economically led approach that ensured mobility assignments were being used effectively, with the right people going on the right assignments to the right locations to maximise value.

The techniques that Michael described were not what we would call ‘advanced’, but the clarity of thinking and understanding of the business showed a maturity that we wouldn’t have seen even two years ago.

Similarly Swati Chawla of Syngenta described how they used People Analytics to analyse sales force effectiveness in APAC. Again, the analytic techniques weren’t advanced but the focus on identifying value as well as (the usual) cost enabled them to optimise on the correct variables for the business.

As with P&G, the techniques that Syngenta used weren’t ‘advanced’, but maturity was achieved via a carefully selected and balanced set of measures. She demonstrated a strong understanding of the business trade-offs between trying to minimise cost-per-employee and balancing this with productivity.

The final characteristic that both presenters demonstrated was a strong understanding of the need to build wide support for action through a robust change management approach. It might (possibly) be easier to build a convincing case with simpler analytic techniques due to the ease of comprehension. However the focus always has to be on action, not clever analysis on a PowerPoint presentation.

As I flew back to Switzerland on Thursday evening I reflected on the lessons any CHRO could learn from these presentations.

  • Focus People Analytics efforts on the net value of the topic that is being analysed.
  • Business understanding is the most important factor, both to ensure the right thing is being optimised and to ensure analytics teams are able to motivate leaders to change
  • Sophisticated analytics doesn’t necessarily mean sophisticated results. Pick the right tools and techniques for what you want to achieve.
  • These approaches can be used regardless of the size and complexity of the organization. Whereas some sophisticated approaches require a decent-sized population to find meaningful trends, clear thinking is applicable to all
  • People Analytics, at least in some firms, feels like it is becoming mature. I hope People Analytics World 2019 shows this to be true.

AI in HR – how to understand what is happening


There is a considerable buzz these days about so-called ‘AI in HR’. Most vendors are claiming to have some sort of machine-learning within their products and some are making claims that, from the perspective of someone who has been doing this for the last 15 years, seem unlikely.

Understanding what these new technologies can (and can’t) do is vital if HR is to be able to evaluate purchasing them, or to work with internal teams in designing, developing and deploying their own approaches. Analytical literacy is rapidly becoming a core skill of HR.

Algorithms are free

Many of the technology vendors will market their products as having some amazing, unique algorithms. In almost all instances such claims are unlikely to be true.

One of the remarkable trends we’ve seen over the last few years is that the big technology companies have acquired large teams of the best data scientists and have been publishing new algorithms in journals, often open-sourcing code at the same time.

Pretty much everyone is using these algorithms – and in many instances ones developed much earlier – as the basis of what they’re doing. They will almost certainly be combining them and changing settings, but at the heart we should assume the same freely available building blocks.

What is needed is great training data

In contrast to algorithms being free, data is not, and as such it is what differentiates decent from great analytics efforts. This matches the message we always tell clients: to improve early analytic results there is usually a need for better data, not better algorithms. It’s why we built Workometry – to make the collection of great-quality data as easy as possible.

In 2014 I wrote an article describing the five types of HR Analytics vendors. In it I described a category which I called the ‘data aggregator’: a firm which, by collecting vast amounts of cross-firm, individual-level data, is able to build valuable analytics offerings.

In 2018 pretty much every SaaS HR offering is trying this model. In many instances the data doesn’t really have enough value (there is a lot of it, but it’s not really that rich – most survey providers could be put in this category). However some vendors will find true value in this approach.

This data becomes a barrier to entry for new firms wanting to enter the industry – it’s hard and costly to acquire. It’s a good reason why many of the most innovative HR analytics start-ups are in recruitment. In recruitment far more data exists outside the firm in public data sources.

General AI is a long way off

When vendors talk about AI in their product to the lay-person they often conjure up images of technology that has near-human levels of reasoning. Most data scientists would tell you that this reality is a long way off.

One of the interesting aspects of machine learning techniques is that they can solve some tasks that humans find difficult (playing chess, for example) yet struggle with tasks that even a four-year-old could achieve easily. I suspect that we’re close to developing an autonomous van which can take a parcel from the depot to your house, but it might be harder for a robot to take the parcel from the van, enter the building, climb the stairs and find the correct letter box.

What today’s approaches can do is solve certain well-defined problems, usually with lots of available data, to extraordinary levels of accuracy. Often, the narrower the problem and the greater the data size to learn from, the more accurate the prediction. These narrow problems are often described as ‘Specific AI’.

Benefiting from Specific AI

Take the example of text analytics. Even within text analysis there are different firms in the HR space doing wonderful things. TextKernel has developed very good approaches to understanding CVs and job descriptions. We, through our Workometry technology, have probably the leading approach to understanding the answers to open questions (for example in employee suggestions or feedback). We even go so far as to build specific models at the organization / question level (arguably our key differentiator is how quickly we can build these models). With such specific models we can out-perform skilled humans at this task in a fraction of the time and cost.

We can therefore think of the implication of AI and robots on work not as automation taking away whole jobs – most jobs require a variety of tasks – but as AI automating specific tasks. These will be the ones with a lot of repetition or where large volumes of data need to be acquired and synthesised.

When thinking of how to apply AI it’s therefore important to break a job down into tasks – ideally the smallest, most specific tasks possible – and identify which are candidates for AI. At the same time we need to assess the value and cost of these tasks to identify which are worth developing solutions to automate.

When doing so we shouldn’t constrain ourselves to tasks that we’re currently doing. Many tasks are possible without AI, but prohibitively expensive. For many firms the sort of text coding Workometry does has been too expensive and time-consuming to perform. For many of our clients Workometry is 10x cheaper & 200x quicker than the alternative solutions and is of higher quality. What was difficult to justify therefore becomes attractive.

Benefits from AI

There are two key drivers of benefit from using so-called ‘AI’ in HR:

  • To improve a business driver (eg productivity, customer experience) and by doing so enable the business to achieve better results
  • To reduce the cost of delivering HR.

In many instances the first is likely to provide opportunities to realise a greater return to the business; however, it is also likely to require greater and more widespread buy-in to the results. Implementation costs and risks are likely to be higher, with a greater number of uncertainties influencing the end deliverable.

With this type of analysis it’s highly unlikely that the data needed will be residing in one system or database. Given this we can expect fewer instances where a single system provider will have enough data-coverage to be able to build a complete model. The best work in this area will remain the preserve of data-science teams within a firm who can identify, process and join the necessary data sources into a reliable model.

Cost reduction for HR will ultimately be the easier area in which to achieve predicted results. In many instances there will be a smaller number of decision-makers (the HR leader) and it’s likely that cost reduction will be a core part of their objectives. Data for this type of analysis will be easily available and more likely to be of high quality, to have less measurement error and to be more complete. It will also be more likely to reside in one system. In the medium term we can expect system providers to deliver such capability.

Some points on getting the most out of AI for your HR team

  • A little knowledge will go a long way. Think about up-skilling your team so that they have a good understanding of where AI can be deployed in its current state and what the likely benefits are. Several providers (including us) can help here
  • Don’t expect system providers to provide complete solutions where they don’t have access to all the data. There will be a need for the foreseeable future to build good People Analytics capability
  • People Analytics technology won’t solve all your problems, but it might remove routine tasks from the People Analytics team, thereby enabling them to focus on higher-value tasks. Think of these solutions as complements to building capability, not a replacement
  • Challenge your technology vendors (especially if you’re a key client) to develop solutions that can identify cost improvements. With all the transaction data they should be identifying efficiencies. This will soon be a hygiene factor for systems providers
  • Often simple models can be built quickly. In a drive for accuracy you hit decreasing marginal returns pretty quickly. How much more valuable is this solution than what your team could build in 10 days?
  • General models built on other firms’ data are unlikely to perform as well as specific models built on your own data.

How to start a People Analytics project


As one of the earliest People Analytics practices we have extensive experience of working with clients to help build great People Analytics organizations, either by helping them work through pilot projects or through our regular analytics training.

In most instances, to improve the quality of analysis it’s likely that you’ll need to acquire better-quality data, not use better algorithms. Our Workometry product was initially built to meet the need of providing high-quality perception data for use in predictive models. Our experience is that this data is often the most valuable source of insight and provides the most predictive variables when model-building.

There are a few simple things to consider when starting analytics projects in HR. The most important thing is to do this in a systematic manner, not just grab the easiest-to-get dataset and start modelling.

What is your business objective? Really??

Possibly the hardest challenge of any analytics project is to accurately define what you want to analyse.

This might come as a shock, but as with so many things in employee management, commonly used concepts are poorly defined. A good example is employee engagement: there is no common definition of engagement, and therefore statistical analysis is made more difficult.

One of the ways we recommend clarifying such topics is to add the words ‘as measured by X’ to the end. So if the project is defined as improving employee experience, then the definition could be ‘to improve employee experience as measured by the employee experience rating on this survey.’ Socialising such definitions is important to ensure that all the key stakeholders agree with your definition.

Another useful technique is a version of the ‘5 Whys’. Here the purpose is to challenge the initial problem description by repeatedly asking ‘why?’ until the actual causal issue is identified. To give a simple example:

Manager: We need to understand how to reduce employee attrition.

Analyst: Why do you want to do that?

Manager: Because it is causing us a lot of unnecessary cost.

Analyst: Why is it so costly?

Manager: Because it is disruptive to our business and we have to pay recruitment costs

Analyst: Why is it disruptive?

Manager: Because it takes so much management time, both in recruitment and in bringing people up to speed

Analyst: Why do you think we need to focus on reducing attrition rather than reducing the management time it takes for each hire?

Manager: You’re right. We need to investigate both and understand what possibilities would yield more value from the investment.

How are you going to realise value from this analysis?

It’s important at an early stage to consider what you’re going to do to implement the results of any analysis. Will the findings influence policy changes? Will you be creating a predictive model to give you an individual risk score for each employee?

The reason this is important is that it has implications for what data you can bring into your model. Because data needs to be captured for a predefined purpose, providing a personal ‘score’ has implications for what data can be used in your model. Working with anonymous data sets at an aggregate level may enable you to do far more with the data and give you far more flexibility in your model building.

How you will realise value will also drive which types of modelling you want to do. Does your model need to be easily interpretable (needed for policy, process or training changes) or could a black-box model be sufficient? If it’s a black-box model, how are you going to determine you’re not at risk under discrimination regulations? (Hint: simply not adding a gender variable to your model won’t prevent it from being discriminatory.)
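
As a hedged illustration of that hint (synthetic data and hypothetical column names, not a real client model), the sketch below shows how a score that never sees gender can still differ systematically between groups because another feature acts as a proxy – exactly the kind of check worth running before deploying a black-box model.

    # Illustrative sketch: gender is never a model input, yet another variable
    # ('part_time' here, correlated with gender in this synthetic data) acts as
    # a proxy, so model scores still differ between groups.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 1000
    gender = rng.choice(["F", "M"], size=n)
    part_time = np.where(gender == "F", rng.random(n) < 0.4, rng.random(n) < 0.1)
    tenure = rng.integers(0, 20, size=n)

    # A toy 'black-box' score built only from non-gender features.
    score = 0.5 * part_time + 0.02 * tenure + rng.normal(0, 0.1, n)

    df = pd.DataFrame({"gender": gender, "score": score})
    means = df.groupby("gender")["score"].mean()
    print(means)
    # A large gap between groups warrants investigation even though gender
    # was never used as a feature.
    print("ratio of group means:", round(means.min() / means.max(), 2))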

What could be driving this behaviour?

The next important action is to identify a series of explanations which could be causing or influencing what you’re studying. There are three main sources that we tend to use:

  1. Desk research: What has been identified by others as causes / correlations
  2. Brainstorm: Get together a group of key stakeholders to identify their view on what are the causes. This also helps socialise the problem
  3. Ask employees: Short open-question questionnaires (like Workometry) to as wide a population as you can will help you get an extensive list of possible causes. (We want to do an exploratory analysis at this stage). Our experience is that there will be a significant difference between this list and the stakeholder list.

What data do you need to test each possible explanation?

Now that you’ve identified the potential causes you need to identify the possible variables could you use in your model which would enable you to test each potential relationship. Again, this is another instance where being clear in what you are actually trying to measure it’s critical.

Some of the information that you need you will have in traditional employee systems, but it’s not likely to be enough. You may have data in other business systems but you might need to acquire new data.

Lots of data is available online from various credible data sources. Numerous governments and organizations like the UN publish great databases which can help you understand what is going on outside the organization with things such as the labour market or populations.

What new data will you need to capture?

It’s highly likely that you will need to capture new information to validate some of your ideas. In many instances you’ll have to ask people directly.

There are numerous data capture methods that you can use, however the process of how you solicit information is often at least as important as the questions you ask. You need to identify approaches which require low input from both the organization and the individuals concerned. If you will need to understand this on an ongoing basis you need to make sure it’s sustainable.

How will you measure the success of any changes?

Finally, before implementing any changes it’s important to identify how you are going to measure the impact of your changes.

It’s likely in most situations that this will have to be a hybrid approach – some measurements will need to be quantitative, others are likely to be perception-based.

What is unfortunate is that all changes within an organization are likely to have unintended consequences. Also, given the complexity of organizations it’s unlikely that your model will be stable over time so you need to identify when the model will need reviewing.

The use of exploratory, open-text questions on a regular basis will enable you to monitor when new reasons emerge.

When to ask open vs closed questions in employee surveys


The traditional employee survey has a selection of closed or scale-based questions. These might be asked as a statement which the respondent has to agree to – usually on a Strongly Disagree to Strongly Agree scale, or a numerical score such as the 0–10 scale of an employee Net Promoter Score question.
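
For readers less familiar with the scoring, here is a minimal sketch of the standard way a 0–10 eNPS question is turned into a single figure (the usual Net Promoter convention, not a vendor-specific method): the share of promoters (9–10) minus the share of detractors (0–6).

    # Standard eNPS convention: % promoters (9-10) minus % detractors (0-6).
    def enps(scores):
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    # Example: 4 promoters and 3 detractors among 10 respondents -> eNPS of +10.
    print(enps([10, 9, 8, 7, 6, 3, 9, 10, 5, 8]))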

We believe that in the vast majority of times such closed questions are used not because they are the most effective or appropriate way of gathering information, but because they historically have been the easiest to analyse.

With a scale-based question the respondent provides a response by selecting an option from a list. With most questions they are able to select a single answer, though it may be possible to select several items from a list of alternatives. Because of the way the surveys are administered it is easy to count the number of people who answer “Strongly agree”, “Agree” etc. and calculate a percentage.

Open questions, however, provide text data. This is typically described as ‘unstructured’ and requires effort to transform it into ‘structured’ data before it can be analysed. Until recently the only reasonable way of doing this was for a skilled researcher to do it manually. That is a long and slow process and is subject to a number of biases and issues with consistency (both between researchers and over time).

We have now reached a stage where computers are able to perform the coding task. The cutting-edge algorithms, such as the ones we use in Workometry, are able to out-perform humans at coding text responses to open questions. They’re also able to do it at a tiny fraction of the cost of doing it manually, meaning that cost and time become less of a deciding factor between using open questions and closed questions.

When you do this you get ‘count’ data plus the option of various other elements of metadata (data about data) such as whether the comment is offensive, is an outlier (some of the most important comments are often outliers) or even something simple like word count.
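
As an illustration of what that output can look like (hypothetical themes and fields, not Workometry’s actual schema): once each answer has been coded, counting theme occurrences gives structured ‘count’ data, with metadata such as word count or an outlier flag carried alongside.

    # Hypothetical coded output: each response carries its assigned themes plus
    # simple metadata; counting themes turns unstructured text into count data.
    import pandas as pd

    coded = pd.DataFrame([
        {"response_id": 1, "themes": ["career development"], "words": 42, "outlier": False},
        {"response_id": 2, "themes": ["workload", "management support"], "words": 18, "outlier": False},
        {"response_id": 3, "themes": ["workload"], "words": 230, "outlier": True},
    ])

    theme_counts = coded.explode("themes")["themes"].value_counts()
    print(theme_counts)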

The choice of which question type to use therefore depends much more on why you are asking the question and what you want to do with the answers.

Objective

The first question to ask yourself is why you are collecting the data.

Scale questions are best for providing a quantitative response from all respondents about a particular topic. They are therefore best for things such as KPIs which you need to track accurately on an ongoing basis.

The other way you might use a closed scale question is to test the strength of feeling on a particular topic, for example to provide as accurate a value as possible for use in a business case.

If you need to identify how strong a feeling is you should use a scale question. However if your aim is to understand what people are thinking, why they are thinking it or how it affects them (or your business) you probably want to use an open question which provides text.

Open questions provide information which enable you to understand how to act. Closed questions might indicate you might need to act but won’t tell you how to do that.

MECE

A key principle when collecting information via closed questions is ensuring that the topics or reasons are Mutually Exclusive and Completely Exhaustive. Of these I think the most important is that the categories are Completely Exhaustive. Mutual exclusivity is important, but having a hierarchical structure – i.e. one where a category can be a sub-category of another – can be useful.

In some instances having completely exhaustive categories is easy. I could ask people which is their favourite Shakespeare play, as the number of plays is finite and reasonably small. My list could quite easily be Completely Exhaustive.

An alternative way of thinking of categories is at the question level. With an engagement survey historically we’ve asked a set of questions that are used to create an engagement index and then a large number of questions that are used to understand which factors, or categories, correlate with engagement. You can think of all those questions – ‘is communication good?’, ‘does your manager support you?’ etc. – as covering all the categories. The reason these surveys are so long is that there are so many possible categories.

If I want to know why someone did something it is impossible to build a Completely Exhaustive list. Some closed questions on a survey might have an ‘other’ choice where the respondent then writes in their answer. Alternatively there might be a single open question at the end for the respondent to add anything that hasn’t been asked. Really this is saying ‘we don’t have a completely exhaustive list’. Unfortunately we see that these uses of ‘other’ provide different (lower-quality) responses than if you had just asked an open question.

Open questions are, by their nature, exploratory in nature. This means that when you ask them you’re open to the possibility that the answers are outside the group of categories you could initially identify. When we ask open questions in an engagement type survey we find that about 30% of categories that employees mention are ones that we’ve never seen on a commercial engagement survey. We see a difference between two companies, even in the same sector. The reasons are very personal and business specific.

Another way of thinking about closed vs open questions is that with closed questions you have to ensure you’re Completely Exhaustive before asking the questions; with open questions your answers are Completely Exhaustive automatically. This makes survey design much simpler and removes many issues with ‘validation’.

How much signal is enough?

Fortunately the topics and their frequencies identified during coding are not randomly distributed. With our clients, a typical open question will generate about 250,000 words, which result in the region of 100 different themes. The most common theme might appear 10% of the time, whereas the smaller themes might appear less than 1% of the time.

As the data size increases two things happen. First, the number of statements where we can identify a meaningful topic increases. The first or second time the algorithm spots something it could be an outlier, but after a few more instances we start to have enough signal to determine that it is a meaningful topic.

The second is that as you get more and more data, the confidence you can safely assign to any ‘answer’ increases. You can start to consider tracking usage of topics over time. You can start to see which parts of your employee population are far more or less likely to talk about a particular topic.
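
A small sketch of that second point (illustrative figures, standard normal-approximation interval): the interval you can place around a topic’s share narrows as the number of responses grows.

    # How the uncertainty around a topic's share shrinks as responses grow.
    import math

    def topic_share_interval(mentions, responses, z=1.96):
        p = mentions / responses
        half_width = z * math.sqrt(p * (1 - p) / responses)
        return p - half_width, p + half_width

    for n in (200, 1000, 5000):
        lo, hi = topic_share_interval(int(0.10 * n), n)
        print(f"n={n:5d}  a 10% topic is estimated between {lo:.3f} and {hi:.3f}")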

Sometimes the topics are tightly distributed. With one client we saw a few people raising issues about bad contracts. Whilst in many organizations this might be ‘noise’, in this organisation the comments were all from one group and about one contract. By highlighting this statistically the management team could investigate and de-risk the situation.

What open questions don’t do is provide a quantitative score against all potential categories – they don’t allow you to understand what each person thinks about each category. Instead they identify what is ‘top of mind’.

Respondent burden

As I’ve written about before, with data and analytics you need to think about what’s in it for the employee. Surveys and questionnaires are just the same.

There are three aspects for improving the respondent experience that I think you need to consciously try to improve:

  • The technology interface of the survey tool – how easy is it to provide the answers, especially across different devices including mobile
  • How long the respondent will have to devote to providing feedback
  • Whether the respondent will be able to tell you exactly what is on their mind.

On the first of these points we did worry about whether respondents would find it difficult to respond to open questions on a mobile device, or whether responses would be shorter. So far we’ve found little evidence of either (though the 500+ word answers are mostly written with a proper keyboard).

For the second, to collect rich data a questionnaire based on closed questions inevitably needs to ask at least one question for each topic. Hence we either have the traditional long surveys, or we are forced to abandon data quality to provide a shorter experience. With a four-question open-question survey we find the average time to complete is less than five minutes.

Finally, open questions are the only way of ensuring that all issues are captured. Ideally with closed questions you’d want to ask about each category ‘how do you rate this?’ and ‘is this important to you?’. For example, you might ask all employees whether they think the firm offers family-friendly policies or benefits, but if a respondent doesn’t have a family they might not care (yet could still rate it as true). Many surveys assume that each category is equally weighted, an assumption which is highly unlikely to hold.

As previously noted, when we’ve used open questions instead of scale questions in firms we’ve found that only about 70% of the important topics were typically part of a traditional employee survey.

Conclusion

Although we’re very strong believers in the importance of open questions combined with AI-based analysis, it’s clear that the best solution is a balance between open and closed questions, using both for their strengths.

In terms of information-gathering the vast majority of questions should be open questions, as the principal aim should be to identify issues that need fixing, or ideas that will be implemented. However, it’s important to have a quantitative measurement to capture your KPI on each survey. This data is very useful not only for tracking trends, but also for analysis.

The key point is that if your aim is to take effective action you should only use closed questions where absolutely essential. After all, if you want to really understand what people think you don’t ask a series of closed questions.

Focussing on the causes not the symptoms of employee engagement


Depending on which data you look at employee engagement has tended to be flat or declining over the last 15 years. There is obviously something wrong with the way that firms have been attempting to manage engagement within their organizations. Our view is that the core issue is that engagement typically suffers flawed analysis leading to ineffective changes.

A simple way of thinking about the way most traditional surveys are designed is by considering the questions in two groups: a set of questions that make up the engagement index and another set, usually larger, which is used to identify factors that are correlated to that engagement index.

If we consider how this works in more detail we can see that what we’re doing is looking at the differences in views between engaged and disengaged employees.

Most engagement survey analysis is static – i.e. no knowledge of each respondent’s previous states or answers is considered when conducting the analysis. Where history is included it is usually at the sub-organization (e.g. team, department) level, where there can be significant differences in the underlying respondents from survey to survey.

During the analysis stage of a survey, the variables most closely linked with engaged / disengaged employees are typically identified. Unhelpfully these are often called ‘drivers’, implying, at least to the lay-person, that they are causal. They are not; they’re simply correlations.

What these analyses do is identify symptoms of being engaged rather than what causes the employee to become engaged in the first place.

There are several reasons this matters. The first and most obvious is that any post-survey action is likely to be sub-optimal. I can take a flu-remedy which prevents my nose running and tempers my headache but I still have the flu. As soon as the medicine wears off the headache returns.

The second reason is that our sensitivity to certain issues changes as a result of our overall view. We probably all know people who cited reasons such as ‘they left the toothpaste lid off’ when explaining why they changed partners. When love disappears these things become more grating, but overall they aren’t the reasons why love is lost.

How to fix this

To understand the issues that are linked to employees moving from an engaged to a disengaged state (or vice versa) we need to include how any individual changes over time in our model.

There are two key ways of doing this. My view is that the ‘gold standard’ is to do both:

  • We can look at the transition between engaged and disengaged at an individual level and understand what happened to those employees. One way of doing this is to link employee background (education level, function, tenure etc.) and event data (e.g. pay rises, job changes, performance reviews) and build models which identify which combinations are linked to changing individual-level engagement (a minimal sketch of this approach follows the list). We can look at which responses change and explore whether there were underlying reasons why this might have occurred.
  • We can ask people if they feel more / less engaged (or even identify it from their history) and if they feel a change what events, experiences or factors have influenced their changes in feeling.
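
A minimal illustrative sketch of the first approach above (synthetic data and invented event names, not a client model): model whether an individual’s engagement fell between two survey waves as a function of events such as a missed promotion or a manager change, then inspect which events carry the most weight.

    # Illustrative sketch: modelling individual-level engagement transitions
    # (dropped vs. not) against event data. Synthetic data, invented features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000
    manager_change = rng.integers(0, 2, n)
    missed_promotion = rng.integers(0, 2, n)
    pay_rise = rng.integers(0, 2, n)

    # Synthetic ground truth: missed promotions and manager changes raise the
    # chance that engagement falls between two survey waves; a pay rise lowers it.
    logit = -1.0 + 1.2 * missed_promotion + 0.8 * manager_change - 0.6 * pay_rise
    dropped = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([manager_change, missed_promotion, pay_rise])
    model = LogisticRegression().fit(X, dropped)
    for name, coef in zip(["manager_change", "missed_promotion", "pay_rise"], model.coef_[0]):
        print(f"{name:17s} {coef:+.2f}")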

What we learn

When analysing data in this way we see significantly different issues that are cited as reasons for a change in engagement (or recommendation when using eNPS).

Our employee feedback tool, Workometry, uses open questions instead of traditional scale questions. It then uses highly sophisticated text analytics approaches to summarise the statements and group them into meaningful themes (with meaningful descriptions such as ‘More clarity on the future direction’). The summaries and groups the algorithms find will depend on the answers to the question asked (and will often be organization-specific).

Many of the reasons cited for changes in engagement would not be part of a typical engagement survey, meaning that there is no way they’d be able to identify them, and by implication that you wouldn’t be able to act upon them.

Reasons may seem at first glance far more personal but there are a few key themes that we typically find:

  • Environmental issues are often the largest group. These will include the team dynamics, toxic colleagues and ineffective management. I’d suggest that the single most effective way of improving a disengaged employee’s engagement level is changing their role
  • Barriers to getting the job done is a significant issue, especially amongst high performers
  • Measurement- and incentive-related issues are common. These could include targeting at a general level, or targets that are badly aligned with things that are important to employees’ values, such as delivering great customer experiences.
  • Various events that could all be categorised in a general ‘Employee Experience’ group can drive significant changes. For individuals where we see sharp drops in engagement between two close time periods there is usually an event which has caused the change.
  • New senior leadership and refocussed strategies can increase engagement for certain groups.

Changes to action planning

Given the reasons we see, most of the action that is likely to be effective will be bottom-up rather than the typical long, top-down approach. There will be some systemic issues that need to be addressed, but in general a principle of ‘decisions and action should be made at the lowest possible level’ is probably correct.

Organizations which really want to get this right will build teams to address the more structural issues. I’d imagine such teams would have experience in areas such as process and job design, technology enablement and incentives. Such changes would need to be formally costed, valued and prioritised.

As an extension of this, organizations need to have a serious debate about how to ‘close the loop’ on employee issues in a timely manner. Whilst customer feedback can trigger this easily, in organizations employee feedback is usually provided with a promise of confidentiality. Technology offers opportunities here.

‘Low-hanging’ issues need to be fixed quickly. We always see issues such as staff facilities that are broken, or policies designed for one situation which are inappropriate in another area. Fixing these things quickly will likely not address the fundamental reasons (they’re the toothpaste-lid sort of reasons mentioned above), but rapidly taking action sends a signal to employees that feedback is being listened to and that providing it is worthwhile.

Overall we believe that much of the failure to improve engagement in organizations is due to using static analysis to understand dynamic systems issues. Businesses that address the causes rather than the symptoms will realise significant benefits.

HR Analytics doesn’t mean abandoning intuition


There’s a belief, which I hear frequently in the HR analytics community, that HR Analytics means a move away from intuition. This isn’t true.

Analytics using your own data is just one tool needed to conduct empirical decision making. Doing analysis – however sophisticated – can only be part of what you need to make great decisions. ‘Numbers are just another voice at the table’ as the saying goes.

As Sam Hill mentioned in his post on this blog ‘People Analytics – It’s a mug’s game. Isn’t it?’:

‘The People Analyst will keep formal and informal channels of communication open with HR process owners, line managers, senior managers, HR Business Partners and potentially external stakeholders to measure the pulse of their organisation and to identify emerging workforce issues or opportunities.’

As HR professionals there is usually a good reason that we hold the beliefs that we do. Many of us have built up knowledge and experience over many years of seeing similar situations, reading case studies and books or speaking to peers.

Managers too have built valuable experience. Many tend to have a good knowledge about what is happening in their organisations. They will have seen similar situations or even studied organisation theory on a general business course.

Discounting this experience and knowledge would be like starting with your hands tied behind your back, but unfortunately it’s common with some HR analytics teams.

Let me illustrate this with an example.

Suppose we are asked by a friend whether a coin is fair (i.e. as likely to come up heads as tails). They then toss the coin 10 times and it comes up heads 7 times. The chance of this happening with a fair coin is in the region of 12%. This would be unusual but certainly not impossible. Do you tell the friend it’s fair? I suspect you might.

Now let’s change the situation slightly. Let’s say your friend then tells you he was given the coin by a magician at his daughter’s birthday party. My guess is that this bit of information would make you think that the coin probably isn’t fair. Maybe you’d think you were lucky to see 3 tails.

In both instances you’re updating your view based on the information that you had before. In the first instance you start with an expectation that the coin is fair as most coins have an equal chance of coming up heads or tails. You probably have some doubt but not enough for you to switch your view built upon a lot of previous experience.

In the second instance the knowledge the coin came from a magician changes everything. Given the source you’re now comfortable declaring the coin isn’t fair. Background information makes a big difference!

Inexperienced analysts rely too much on their data. They’d look at their data only, without any context, and say there isn’t enough evidence to say the coin isn’t fair. For them the data is everything. HR Analytics without incorporating experience or intuition is like this.

Good analysts, as Sam previously mentioned, start by collecting as much evidence as they can. They’ll ask managers and HR colleagues, they’ll probably do some desk research and see what others have found. Then they’ll take their data and update their view. The more data they have the more their recommendation will be based on the data. Conversely the less data they have the stronger will be their weighting towards intuition and experience.
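
The coin example can be made concrete with a standard Beta–Binomial update (a textbook calculation added here for illustration, not from the original post): the same 7-heads-in-10 data leads to different conclusions depending on the prior belief you bring to it, which is exactly the weighting between intuition and data described above.

    # Beta-Binomial updating for the coin example: same data, different priors.
    from scipy.stats import beta, binom

    heads, tosses = 7, 10
    print("P(exactly 7 heads | fair coin) =", round(binom.pmf(heads, tosses, 0.5), 3))  # ~0.117

    # Prior 1: strong belief the coin is fair (an ordinary coin).
    # Prior 2: weak belief (the coin came from a magician).
    for label, a, b in [("ordinary coin", 50, 50), ("magician's coin", 1, 1)]:
        posterior = beta(a + heads, b + tosses - heads)
        print(f"{label:15s} posterior mean = {posterior.mean():.2f}, "
              f"P(heads prob > 0.6) = {1 - posterior.cdf(0.6):.2f}")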

Analytics models are only as good as the information they have. Intuition and experience are valuable sources of information. It’s crazy to ignore them as we move to using analytics in HR.

Engagement is necessary but not sufficient


Andrew Marritt reflects on a People Analytics Podcast episode, a fascinating conversation between Laurie Bassi and Max Blumberg about the myths of Employee Engagement, and looks into the real impact of Employee Engagement on business performance and value.

A large part of economics is concerned with understanding and modelling individuals’ behaviours when faced with various incentives. Whilst some believe that economists are only interested in money as an incentive, in truth incentives can take many forms.

George Akerlof co-wrote a fascinating book – with Rachel Kranton – on how identity is an incentive (Identity Economics: How our identities shape our work, wages and well-being). There is even a sub-discipline of economics called ‘Personnel Economics’ which is dedicated to understanding how organisations work.

Ten years ago, when I was part of the HR Analytics team at UBS, I was the only economist in a team of psychologists. Looking at People Analytics from the perspective of an economist forms a significant part of this podcast between Laurie Bassi and Max Blumberg.

One of the things economists spend a huge amount of time learning at university is calculus. Calculus is useful for tackling problems involving optimisation. Given we are always dealing with limited resources, almost all problems in organisations are optimisation problems. I would argue that one of the key historical bases of modern people analytics in organizations comes from the discipline of operations research – like economics, an applied mathematics discipline concerned with optimisation.

The reason calculus is useful is that there is rarely a straight-line relationship between a variable and its desirability. For many firms it would be beneficial to reduce employee turnover, however I’ve worked with a couple of organisations over the last 18 months where more attrition would be beneficial. There is an optimal level; past this the benefits decrease. You can have too much of a good thing.
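
As a hedged illustration (all cost curves below are invented for the example): if very low attrition carries its own costs (stagnation, low renewal) while high attrition carries replacement costs, the total cost has an interior minimum that calculus, or a simple numerical search, will find – the optimum is neither zero nor the maximum.

    # Invented cost curves showing an interior optimum for employee attrition.
    import numpy as np

    attrition = np.linspace(0.01, 0.40, 400)       # annual attrition rate
    replacement_cost = 500 * attrition             # rises with turnover (hiring, ramp-up)
    stagnation_cost = 5 / attrition                # falls as turnover rises (renewal, wage drift)
    total_cost = replacement_cost + stagnation_cost

    best = attrition[np.argmin(total_cost)]
    print(f"cost-minimising attrition rate = {best:.1%}")   # ~10% with these made-up numbers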

Our organisations work as systems. There is rarely a clear, unique, optimal solution. Invest money in one area and – given limited resources – you can’t invest the same money elsewhere. Much of management is about prioritising resources to maximise returns.

This brings us to a key point in Max and Laurie’s conversation – how much resource should one allocate to improving engagement? As resources are limited, what should we reduce spend or effort on? New production investments? Training?

In a system, many inputs depend on each other. In a dynamic system they often depend on the levels of the previous state. Rarely is it true that you provide the optimal solution for the system by optimising each individual input. Furthermore, optimisation doesn’t mean maximisation.

Unfortunately this is often the belief of many in the HR profession when considering engagement. Grand action plans are built without considering the alternative uses of the needed time or money. As Laurie so eloquently mentioned ‘Engagement is a necessary but not sufficient condition’.

Surveys should be mostly open text


Tom H. C. Anderson of OdinText has been writing about some research that he has run comparing Likert and Open Text survey responses.

Tom used Google Surveys to ask two samples of 1,500 people a question about perceptions of Donald Trump’s controversial executive order banning people from certain Muslim countries. To one group he asked the question as a typical 5-point Likert question.


To the second group he asked the exact same question but instead let them respond in open text.


The answers were remarkably similar – within 2% of each other. According to his experiment, an open question would be as suitable for gauging the level of agreement as a traditional quantitative scale question.

Tom’s findings are remarkably similar to what we see when using open questions as a replacement to traditional quantitive questions. We tend to use a combination of structured and unstructured questions in employee feedback requests but by far the most important are the open text questions. Open questions can achieve most of the aims of a scale question but provide some notable advantages.

In his post, Tom later highlights the difference between the Likert and the open text where the latter provided much richer data on the long-tail of responses (he describes them as low-incidence insights). As he notes:

“While there is nothing more to be done with the Likert scale data, the unstructured question data analysis has just begun…”

Recently a client asked us why we didn’t have a sorting question available in Workometry. Our answer was that for that type of question we’d prefer to use open text.

Their proposed sorting question had 8 different choices for the employee to sort. I could show an almost identical question asked as open text by another client, where we had identified just under 30 different options. Whilst we hadn’t done a controlled test like Tom’s, given our experience we’d expect pretty much identical results. A sorting question would be right only if you want to limit the potential options to a subsection of the person’s true opinions.

In a recent FT Tech Tonic podcast with Astro Teller, the head of Alphabet’s ‘X’ lab, Astro notes (at about 14:45):

“If you are serious about learning you set up your test in a way you don’t know the answer to the question you’re asking and you’re indifferent to the answer. You have to do that or you’re not really learning you’re biasing either the test or how you read the data.”

However good your survey design is, if you’re using a structured, choice-based question you’re asking a question where you’re by design limiting what the answer could be.

Open questions on the other hand give you the option of not knowing what you’re going to learn, before you ask. In our experience, if we’re doing a question like the ‘most important’ / sorting question above it would be common to find 2 or 3 of the top 10 answers that you wouldn’t have included in a similar structured question.

The other aspect that text provides is context. A structured question might identify the strength of feeling (though the example above shows that text can do this equally well) and who holds which feeling but it can’t show why they feel it. It’s why when companies do large engagement surveys often the immediate action is to do a series of focus groups on particular topics to understand the context.

When would we recommend a structured question?

Even though we believe that in most instances the right option is to ask an open question there are a limited number of occasions when a structured question might be better. We find ourselves using them:

  • when we want to report a simple metric over time, e.g. engagement or eNPS
  • when our objective is to build a model and you need a score on a target variable for each individual

In both of these instances it’s because you want to have a score on something simple and purposely constrain the answer. We might be using such a score to determine what topics those who are engaged (or disengaged) are more likely to be discussing. It’s important to note, however, that for any feedback request we might ask 2 or 3 scale questions.

Why are scale questions so popular?

If open questions hold so many advantages why are most surveys mostly scale questions?

A very big factor is that it’s easy to report and analyse scale questions. Before tools like OdinText for customer feedback or Workometry for employee feedback, analysing large volumes of unstructured data was hard.

Now, however, that is not the case. Given the rapid progress of text analytics I suspect we’ll start to see the gradual death of the traditional survey. If you’re serious about learning, it can’t come too soon.

New ways to run engagement surveys


A version of this post originally appeared in EFMD’s ‘Global Focus’ Magazine.

“A new market has emerged: Employee feedback apps for the corporate marketplace. These tools are powerful and disruptive, and they have the potential to redefine how we manage our organizations.” Josh Bersin, Forbes August 26 2015

The measurement of employee engagement is changing. Businesses have been measuring engagement for about 15 years and the market is currently worth USD 1bn per annum, yet most reports suggest engagement is trending flat if not actually decreasing. Something is obviously not working.

There are many reasons businesses are growing frustrated with current methods. Slow, expensive and resource-intensive are some of the more common ones we hear. In many businesses the only things that are now measured on an annual cycle are engagement & performance management – both run by HR. Business leaders are demanding more real-time insight.

During the same period in which HR has been emphasising understanding employee engagement, the measurement of customer engagement and feedback has changed remarkably. Today many firms are capturing an always-on stream of customer data from a wide variety of channels, from short surveys to social media.

Often called Voice of the Customer, the emphasis has switched to continual listening, rapid resolution and bringing deep insight into customers’ needs into everything from product development to service provision.

Business leaders are asking why there is a disconnect. Why have customer teams adapted whilst HR has stood still? Many of the trends that we’re seeing in engagement measurement could be viewed as an application of a Voice of the Customer philosophy to employees.

At the same time as this shift on the demand side, we’re also seeing a shift in supply caused by technology.

The technology changes can be classified into four categories.

Automation

Technology-led automation is something that is happening across society and it should be no surprise that it’s surfacing in our area.

Firms, especially new entrants, are automating parts of the engagement measurement and analysis process that were typically done by analysts. Whilst it started with relatively simple report automation – the production of thousands of template-based PDF reports moments after a survey closed – we’re seeing the level of sophistication increasing.

Whilst this might, and should, have been utilised by the traditional firms for some time, their incentive was to increase margin. Most of the new entrants are using it to radically reduce prices and complexity. The new business models are disruptive.

Real-time reporting via dashboards is becoming the norm. Production of large numbers of PDFs is possible. We’re seeing a shift from the multi-page results presentation to one-page infographic-style reports. Ultimately there is a shift from seeing the provision of large numbers of individual reports as complexity to seeing it as a commodity.

The ability to automate, however, can blind the user to the question of whether they’re addressing the real issue or merely creating a faster, cheaper version of a broken process.



Rethinking the end-to-end feedback process

For Workometry we took the full end-to-end feedback process back to first principles. At the beginning and end of this process there are likely to be two time-consuming and expensive periods of qualitative research – designing a great survey at the beginning, and running workshops at the end to understand the context around the issues. Only by addressing these long, expensive activities can you make feedback truly agile whilst preserving the richness.

Mobile & User Interface changes

The second technology-led shift is to do with the way that employees are able to take surveys.

For several of our clients mobile has become the dominant channel for employees to take surveys. We see respondents taking surveys just before the working day, during lunchtime and even in the evening. They’re interacting on the edges of their working days and grabbing a mobile device to do it.

Consumer web technologies have changed the way we expect to interact with our devices & engagement surveys can't escape this trend. Many of the question types we used were the same as we used on paper, simply digitised. We used these methods often because they were easy to score. Digital-only surveys aren't bound by these constraints.

Research in user interfaces is reinforcing these methods. In a world where people expect to touch, slide and scroll through long-form sites surveys have needed to adapt.

Big data technology

The majority of the new entrants are focussing on the previous two technologies. Whilst this is right for medium-sized businesses, enterprise organizations typically have a set of needs that extends beyond these simple use cases.

One shift that has occurred during the last few years in a number of firms has been the building of a sophisticated People Analytics capability. Firms in this position increasingly want to combine and analyse employees' demographic, behaviour and perception data to answer key, strategic business questions.

Whereas employee survey data has historically been treated as an island – analysed only within the context of the perception data or a predefined, limited set of demographic information – survey data is now used to give critical insight into the reasons why.

To do this type of analysis requires that the survey data can be linked on an individual basis to both an extended set of demographic data, and to behaviour data, either from HR or business systems.

Furthermore it’s often useful not just to analyse the result of one survey with the extended data set, but to also include all other survey data belonging to an individual. Such requirements quickly dictate the sizes of data processing systems.

As well as the ability to handle large data sets, analysts are increasingly using non-table data structures to better answer questions. One alternative that offers great potential is the so-called 'graph database', where data is stored as a network. Such data structures allow us to ask very different questions.

With network data we can more easily answer questions not only about individual employees but also about the relationships between them. We see early promise in using a network perspective to look at the contagion of engagement – i.e. how changes in employee engagement can spread across an organization.

Network survey technology, such as that produced by Australian start-up Polinode, allows businesses to capture not only traditional survey questions but also questions about an individual's working relationships. Alternatively it is possible to understand communication patterns through the data trail left by emails, telephone calls or participation in internal social channels.
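
To make the idea concrete, here is a minimal sketch in R using the igraph package, with entirely made-up names and scores: it attaches an engagement score to each person in a small working-relationship network and computes the average engagement of each person's direct contacts – a simple starting point for exploring contagion.

```r
# Minimal, hypothetical sketch of a network view of engagement
library(igraph)

# Edge list from a network survey question ("Who do you work with most?")
edges <- data.frame(
  from = c("Ana", "Ana", "Ben", "Caro", "Dev"),
  to   = c("Ben", "Caro", "Dev", "Dev", "Ana")
)

# Individual engagement scores (e.g. 0-10 ratings)
scores <- c(Ana = 9, Ben = 4, Caro = 7, Dev = 6)

g <- graph_from_data_frame(edges, directed = FALSE)
V(g)$engagement <- scores[V(g)$name]

# Mean engagement of each person's direct working relationships
neighbour_engagement <- sapply(seq_along(V(g)), function(v) {
  idx <- as.numeric(neighbors(g, v))
  mean(V(g)$engagement[idx])
})

data.frame(employee   = V(g)$name,
           own        = V(g)$engagement,
           neighbours = round(neighbour_engagement, 1))
```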

Machine learning

The final technology trend which is starting to disrupt the survey world is the application of machine learning – the use of algorithms to search for and learn patterns in large quantities of data. Machine learning is also the basis of much so called ‘predictive analytics’.

With employee survey data we're seeing great success with three applications of machine learning: using text analytics to make sense of vast amounts of open text answers, using pattern-spotting techniques to make probabilistic assessments of which populations are most likely to raise certain topics and, finally, using survey data to answer business questions.

Historically it's often been acknowledged that open text is the most valuable part of a survey; however, it's been very difficult and resource-consuming to deal with at scale. Text analytics can solve this problem and therefore provides new opportunities to capture this richer form of information.

Our experience is that with these techniques we’re able to analyse open text responses in almost any language, categorise a comment against a continually evolving set of categories & score against things such as sentiment and to do so in near real time. With this capability it’s possible to radically rethink how and what data is captured.

The second use of machine learning is to identify groups most likely to be discussing certain topics. Whereas traditional surveys might show differences between one function's scores and their peers', with machine learning it's possible to segment the population in a much more granular manner. For example you might discover that those complaining about a shortage of career development opportunities are much more likely to be women, Gen 'Y'ers who are in the upper performance grades.
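
As an illustration of the idea (not our actual models), a simple logistic regression on hypothetical data can stand in for the pattern-spotting techniques described above: it estimates which demographic segments are most likely to have a comment coded to a given theme.

```r
# Hypothetical sketch: which groups are most likely to raise a given theme?
set.seed(42)
n <- 1000
respondents <- data.frame(
  gender     = sample(c("F", "M"), n, replace = TRUE),
  generation = sample(c("GenY", "GenX", "Boomer"), n, replace = TRUE),
  perf_grade = sample(1:5, n, replace = TRUE),
  # 1 = comment was coded to the 'career development' theme
  mentions_career = rbinom(n, 1, 0.25)
)

model <- glm(mentions_career ~ gender + generation + perf_grade,
             data = respondents, family = binomial)

# Odds ratios: which segments are over-represented on this theme?
round(exp(coef(model)), 2)
```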

Finally survey data is increasingly important to answer strategically important business questions that involve the workforce. For example you might link the survey data to sales data from a CRM system to try and optimise sales performance. In some cases it’s possible to use existing survey data. In others surveys need to be used to collect new data.

So, with all these opportunities, where do you get started? We typically advise our clients to do three things:

1) Make sure that the legal and other agreements are in place to use data in this new manner. Be open with your employees about how their data is being used and how the new approaches don't need to mean lower levels of confidentiality.

2) Pilot some approaches with new use cases or in discrete populations.

3) Consider those pilots as supplementing existing work. From our experience you might eventually replace old approaches, but there is often significant political capital invested in the established approaches.

How frequently should you survey employees?


The biggest change over the last 5 years in the field of employee engagement measurement is the frequency with which employers are looking to poll their employees. Whilst it used to be common to run a survey every 12 or even 24 months, firms now want to run surveys multiple times per year.

One question that we're frequently asked is what the ideal frequency is for asking employees engagement-related questions. I don't think there is a perfect, generally-applicable recommendation; however, the following describes our thinking about this.

Why measure frequently?

If we think about personal health, even if we have the ability to take measurements at any time it doesn’t necessarily make sense to do so. I may measure my weight on a regular basis but I can’t remember the last time I measured my height.

The reason that I don’t measure my height on a regular basis is that it doesn’t change (much). I’ve been about 187.5cm for the last 20 years. However during this time my weight has had a range of over 20kg. I’m certainly heavier now than I was one month BC, otherwise known as ‘Before Christmas’.

So, logically the only reason we’d want to capture engagement-type data often is if we believe that it changes frequently. Does it?

Thinking about organization-level engagement

All measurement has uncertainty. Depending on your variable of interest and your measurement  approach you probably have different amounts of confidence in any measurement.

Most people report Employee Engagement at an organization level as the sum of the individual scores of employees in that organization. This makes some sense in that it’s easy to do but it adds extra uncertainty – we can’t really disentangle how much of the uncertainty comes from the measurement error at an individual level and how much depends on who has been included at any period.

Lots of uncertainty in how we measure engagement

The next thing to recognise is that in almost every instance we're not measuring engagement itself (if we can even agree a common definition, but that's another issue). What we are actually recording are the answers to a number of questions.

Given a whole range of factors – culture, language, user interface, the number and type of scale points, etc. – we get different responses.

It's worth considering that as individuals we each have a feeling about any particular question somewhere on a continuum between two extremes. When presented with a scale question with categories – e.g. a Likert scale – we have to convert where we see ourselves on that continuum to what we think is the nearest value on the scale. Two people with identical feelings can interpret this in different ways. The same person facing the same question might change their interpretation. There is uncertainty in almost everything.

Measurement uncertainty and regular engagement measures

In the old world where people did engagement surveys infrequently – say once every year or two – there would often be endless conversations in organisations about the right timing. There was an implicit feeling that organizational events, such as restructuring announcements, would make a big difference, and therefore the measurement team would try to pick a favourable time. Implicitly, the team knew that there was uncertainty.

We can probably think of the measured value as an underlying engagement value plus short-term noise. As managers we want to understand the underlying value. If the noise were totally independent of any short-term issues then, with a large enough population, we could probably assume that its distribution was approximately normal and it would average itself out.

However, the concerns about picking the right time raise two issues:

  1. We probably can't assume that the noise is independent
  2. Often we report down to small groups and therefore we won't see it averaging out (the quick simulation below illustrates this).
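
A quick simulation, with invented numbers, illustrates the second point: if the measured score is an underlying value plus noise, the run-to-run variation of the group average stays large for small reporting groups.

```r
# Measured score = true engagement + short-term noise
set.seed(1)
true_engagement <- 7      # underlying value, on a 0-10 scale
noise_sd <- 1.5           # short-term measurement noise

group_average <- function(n) {
  mean(true_engagement + rnorm(n, 0, noise_sd))
}

# Spread of the group average across 1,000 simulated survey runs,
# for reporting groups of 10, 100 and 1,000 people
sapply(c(10, 100, 1000), function(n) {
  sd(replicate(1000, group_average(n)))
})
# Small teams show far more run-to-run variation than large ones
```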

When infrequent surveys make sense

If we think about what this logically implies there are two conditions for proposing the historic, infrequent measurement cycle:

  • That an employee's perception doesn't change frequently
  • That the noise or measurement error is small, and therefore one measurement is good enough.

Do these apply?

We've got Workometry clients who are doing regular, even monthly, pulse surveys. If we look at individual-level longitudinal data we see that it is mostly stable most of the time. However from time to time we see an individual who was stable suddenly starting to change – usually becoming less positive.

This stability also implies that the measurement error is probably relatively small. It's more plausible that a stable reported value reflects genuinely infrequent change than real change 'hidden' behind offsetting measurement error.

The downside of frequent

There is another issue facing us as we think about how often to ask employees for their feedback – being asked the same questions introduces fatigue. Fatigue means fewer people might respond, reducing data quality.

When surveys were infrequent, most companies had a top-down action-planning process. This process took many months to complete a cycle. However, given the infrequent nature of asking, it was still possible to make the changes and let the organization adapt before the next cycle.

With frequent feedback, even with a more agile approach to closing the loop, it might still be difficult for the organization to adapt if the cycle is too short. We've seen this in the feedback itself – employees asking why they were being asked the same questions when there were other topics that could be polled.

The ideal frequency

Our current view is that unless there is a significant amount of organizational change occurring (e.g. during a merger), quarterly is probably as often as you'd want to ask employees the same full set of questions. Any more frequent than this and we start to see employees telling us it's too frequent.

But does this mean you shouldn’t ask for feedback more often?

We believe, from looking at our data, that whilst most of the time an individual’s perception is stable, there are times – probably after key events – when an employee’s views can change. (See here and here for more information). Given there is likely to be a trigger that causes this we feel that it’s important to be as close to the event as possible. From this perspective monthly is probably the ideal time.

Meet the hybrid feedback approach

So how do we keep close to the events yet not be repetitive? By building a feedback calendar.

An engagement survey usually has two components: a question or questions to understand or infer engagement, and a selection of questions whose objective is to find variables associated with higher or lower engagement.

Our view is that we need to add a third section – asking employees whether their perception has changed since the last period and, if so, what triggered the change.

With most clients we’re using a version of the eNPS as a proxy to measure engagement. We always ask the question about change so we can be close to the event.
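
For readers unfamiliar with the metric, a minimal sketch of an eNPS calculation is below. It assumes the standard NPS convention (9–10 promoters, 0–6 detractors); the 'version of the eNPS' we use with clients may differ in detail.

```r
# eNPS = % promoters - % detractors, from 0-10 recommendation scores
enps <- function(scores) {
  promoters  <- mean(scores >= 9)
  detractors <- mean(scores <= 6)
  round(100 * (promoters - detractors))
}

enps(c(10, 9, 8, 7, 6, 9, 3, 10, 8, 5))  # returns 10 for this example
```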

However, we don't have to ask the supplementary 'factor' questions every time. These tend to be relatively stable. Instead we take the opportunity to ask employees about other important questions.

Building a feedback calendar

How do we work with clients who want to take this approach?

  1. To start we set the months where we're going to ask for engagement-focused feedback. This will probably be quarterly, so we've filled 4 months
  2. We then identify other events where it might be useful to get employee feedback. Examples could be around performance reviews, company results, promotions, business cycles etc. These are events that can be identified a long time in advance. Doing so might fill 3 or 4 more months
  3. Finally we work with executives to identify key business challenges on which they might want the views of employees. Sometimes this can be planned ahead; otherwise we might instigate a process to identify the topic closer to the time, but with several dates already in the diary.

The final thing to note is that we might identify topics based on earlier results. If you need to go back to the full employee base you might run a follow-up month. Often, however, you can add a question or two to the monthly questions for a defined segment.

Asking open questions

The key to all of this is that if you ask for feedback it has to be quick to complete and seen as relevant. Our view is that we get the best quality data by asking open questions and scaling this qualitative approach.

Doing so means that typically it takes less than 5 minutes to complete a feedback request. As we can reveal the company-wide feedback within a very short time of a feedback period closing we can work with our clients’ teams to communicate back to employees and to identify which groups and individuals need to act. The context provided by open questions means that managers can easily identify what needs to change.

Employee Experience and why it fits well with People Analytics


Employee experience is rapidly becoming one of the key topics on the CHRO agenda. Yet many of the conversations that I hear miss a critical factor: that creating valuable employee experiences is a systematic and data-driven process.

When I left a senior HR role in 2009 to build a business ‘to help make HR an empirically-driven function’ one of the key areas of information that we started with was experience data. In the diagram above, which I’ve taken from one of our earliest presentation decks, the components at the bottom right are all ways of measuring experience.

Our earliest proposition said OrganizationView focused on 3 things:

  • measurement & meaning – collecting data and making sense of it through analytics
  • employee-centric design – as we said, 'use a scientific approach to ensure technologies and services are closely aligned to users' needs and behaviours'
  • develop and deliver – moving analysis into production

Why experience?

Why such a focus on experience in 2009? Well, my background in the early noughties centred on understanding, in a deep and systematic way, the user experience. Much of this was in the area of candidate experience. You can see some of it in a 2004 article by David Bowen in the Financial Times – subscription needed – that came after a long conversation we had about candidate needs, based on my research at the time. It's about building career sites and recruitment systems that are designed around optimising the candidate experience.

As an aside, when I joined UBS in 2005 to launch their first global careers site, at the first meeting of the project team, while we were discussing governance, I added one rule: "if we can't decide what to do we'll test it with users in an experience lab." We tested lots (UBS had two user-research labs and we also ran tests in London) and the bank came (joint) top of the career section in the FT web ratings that year. We cut our marketing budget that year by over-investing in research.

Some of this philosophy came from working in a couple of firms where my close peers were working on projects with IDEO. We took this view and many of the techniques into recruitment, making it candidate-centric and based on experience and relationships. The key, though, was that the process was heavily research-centric. Experience design is highly aligned with empirical decision making. It is systematic and based on data. A central theme is to actively and constantly listen to and understand the experiences of your stakeholders.

IDEO, in their 51 method cards, separate their 'measurement' approaches into 4 categories – Learn, Look, Ask and Try. What they all are is ways of understanding how the user experiences a product or service, or the part of their life where the offering will fit. Some are very qualitative, some more quantitative. I believe all qualitative data can be quantitative if you capture enough examples. Also, the first thing you do with qualitative data is to add meta-data, which makes it quantitative. In the end, data is just information.

From Candidate Experience to Employee Experience

The roots of Employee Experience came from Candidate Experience. From 2002 I smashed my head against the proverbial wall for a long time trying to evangelise why it was critical. The Talent Board folks did a much more effective job.

One of the slides we used to show in the early days was the following graphic. In it we compared the importance of experience as a driver of satisfaction in banking and in work. We used internal bank research (not UBS) with some re-cut data from the CEB. It turns out that in each case components of the offer which could be classified as ‘experience’ account for about 70% of what drives satisfaction, and therefore engagement.

How Employee and Customer experiences drive satisfaction

The way an employee thinks about their organization is the sum of their experiences. At different stages in their journey from consideration, through selection to employee and alumni their perception will change. How that perception develops is the sum of their experiences. I discussed how this is linked with the EVP in early 2011.

Employee Experience and People Analytics

What we can establish is that experience design is both systematic and data-driven. Yes, it incorporates systems and user experience, but critically it includes experiences that have nothing to do with systems. Even with systems, you need to understand what people were doing before they came to the system and what they do after using it.

Our vision of People Analytics is that it should drive evidence-based decision making about the workforce in organisations. We have always felt that that evidence is a mixture of quantitative and qualitative data. We believe that experience measurement is a core element of the role of a People Analytics team.

In the graph above we show that about 70% of the drivers of satisfaction are experience-based. If we think of the current state of People Analytics, too many firms only use existing data from their HR systems to develop their models. None of this data is likely to describe experiences. They're building models that try to squeeze out meaning without any signal about the most important part.

The analyst's job is not to build accurate models, it's to answer critical questions with data. Given how important a driver experience is, many analyses need to include experience data in their models. The analyst needs a robust and automated way of capturing this data.

At the heart, this was the basis from which we decided to build Workometry. Capturing open, reliable experience data at critical touchpoints – what some call ‘moments that matter’ – and doing so in a way so that it can be integrated into sophisticated models is critical to understanding and managing the employee experience.

How to ask for and receive better employee feedback


Over the last 18 months that clients have been using Workometry we've collected hundreds of thousands of rows of employee feedback for numerous large organizations. Although we haven't been able to randomly assign different questionnaires to different employees to run a 'scientific' test, we are seeing some clear patterns in what makes a successful request for feedback or ideas.

Show that you’re listening

Responding to employees quickly, even if only with a short summary communication giving an overview of the results and how you're going to act, makes a big difference to the effort people will take to respond next time.

When you’re using open questions it’s useful to summarise the messages that you were provided. Many employees don’t expect management to read their comments – we even have seen employees start comments with ‘I don’t expect anyone to read this but…’

Show that you have the capability to understand what they've said, even if you've asked thousands of people. Avoid a simplistic approach like a word cloud; instead summarise the comments (think 'shortage of career opportunities', not 'career' or 'opportunities'). You should be able to understand the context. Use poignant quotes as examples where possible.

Try and communicate this within a day or two of the feedback period closing. Of course ensure that you’re giving an honest overview of the responses.

Ask as few questions as possible – you’ll get better data

Nobody likes filling in long surveys, and we certainly see an inverse relationship between the length of the questionnaire and the quality of responses (measured by word / sentence length and topics identified). The more qualitative questions you ask, the less information people provide in each open question.

We believe that when you really want to understand what someone thinks you should ask open questions. Furthermore, our algorithms learn the topics from the text provided, meaning you don't need to pre-determine what the possible answers will be. Hence we believe that text data is richer data and more accurately reflects the provider's views.

If we consider each topic identified as replacing a quantitative question, then one open question can 'replace' up to 50 quantitative questions. With this in mind, only ask a quantitative question where one is essential.

As an example, on a recent questionnaire we had a median answer length of 16 words with the maximum answer to a question being over 500 words. For a different questionnaire for the same client which was more ‘traditional’ we had a median answer length of about 7 words.

Lead with the open questions

Most of the time people design questionnaires with the open questions last. In the worst case the open question might even be a 'catch-all' such as 'Is there something we didn't ask which you think is important?'

Recently we had a client who turned this approach on its head. They asked 4 great open questions FIRST and then asked a few scale questions afterwards. The quality of the responses they got was superb. It sent a message that these were the most important questions.

Ask for the good and the bad

Text algorithms have progressed considerably over the last couple of years; however, you can always build more accurate models if you constrain the text. One way of constraining the text is to ask employees what they think is good and what could be improved as two separate questions.

We code each question individually, which gives Workometry its remarkable accuracy (but also means we need decent data sizes, hence working with large organisations). What we see is that if we ask for positive and negative separately, and code the answers as two different questions, we get a better model than if we combine the text and build a single model.

Consider breaking from the 5 point Likert scale

Employee surveys have almost always used Likert scale questions, usually with 5 points ranging from ’Strongly disagree’ to ‘Strongly agree’. Whilst there are benefits from using them – not least familiarity – we think you should question whether they’re always right.

Increasingly, as an analyst, I’m enjoying working with data from other question types, especially when we use Semantic Differential scales. These are questions where two polar-opposite adjectives are used and in our implementation the 11 points between the extremes are numbered.

The data we get from such questions is richer than from Likert questions. We treat Likert data as strictly ordinal (so no calculations like taking a mean), whereas because the alternative uses a numbered scale we believe you can treat the data as interval.

What we certainly see is that the data from most scale questions are not normally distributed.
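
A small sketch of the distinction, using made-up data: Likert responses treated as ordinal (distributions and medians only), an 11-point semantic differential item treated as interval (means are meaningful), plus a quick look at how far from normal such data typically is.

```r
# Likert item: ordinal - report the distribution, not a mean
likert <- factor(
  c("Agree", "Neutral", "Agree", "Strongly agree", "Disagree"),
  levels = c("Strongly disagree", "Disagree", "Neutral",
             "Agree", "Strongly agree"),
  ordered = TRUE
)
table(likert)
median(as.integer(likert))

# Semantic differential item: 0-10 between two polar adjectives,
# treated as interval, so the mean is meaningful
semantic <- c(8, 3, 9, 7, 10, 2, 8)
mean(semantic)
hist(semantic)   # typically far from normally distributed
```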

Make sure you have at least one target variable (which isn’t qualitative)

Asking open questions is great, but don't abandon all scale questions. The really powerful feedback comes when you can combine quantitative and qualitative data.

Each time you ask for feedback there are probably one or two things where you really want to understand how strongly respondents feel. For example, if you're asking about the quality of service you're providing you might ask:

  • ‘On a scale of 0-10, how well do you think we’re performing?’
  • ’During the time you’ve been dealing with HR has the service we’ve provided got better or worse?’

With these two variables you have the ability to understand the (maybe 100 or so) different themes from your open questions through measures such as the average or variance of the scores of people giving certain responses. Alternatively you can look at the top / bottom quartiles on each scale question and identify which themes these groups are more likely to mention.

Adding a small number of scale questions gives your analysis and interpretation of the results purpose.
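
As a hypothetical sketch of what that analysis can look like: with each comment coded against themes and a 0–10 target question, we can compare average scores by theme and check which themes are over-represented in the bottom quartile. All data below are invented.

```r
# Hypothetical coded feedback: one row per respondent
set.seed(7)
n <- 500
feedback <- data.frame(
  score          = sample(0:10, n, replace = TRUE),  # 0-10 target question
  theme_career   = rbinom(n, 1, 0.2),                # 1 = raised this theme
  theme_workload = rbinom(n, 1, 0.3)
)

# Mean score among people raising the 'career' theme vs. everyone else
aggregate(score ~ theme_career, data = feedback, FUN = mean)

# Is the 'career' theme more common in the bottom quartile of scores?
bottom <- feedback$score <= quantile(feedback$score, 0.25)
c(bottom_quartile = mean(feedback$theme_career[bottom]),
  everyone_else   = mean(feedback$theme_career[!bottom]))
```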

Give people results directly

One of our clients, after using Workometry on a monthly basis for a year, decided to ask their employees for feedback on the feedback approach.

One of the clearest, and most surprising, results was that employees wanted to be sent the relevant results directly. Whilst there was some evidence to suggest that this was particularly strong for teams where the manager didn't take time to discuss the results, the feedback was pretty consistent across all parts of the organization.

These employees wanted to have the results emailed to them directly.

Embrace mobile devices

Several of our clients work in environments where a large proportion of employees don't have access to computers. In these environments employers should embrace the opportunity to communicate with employees via their smartphones.

We’ve found that such employees, who are often at lower levels in the corporate hierarchy, not only have universal access to smart-phones but also are likely to acquire the newest phones the quickest. Maybe it’s because they aren’t in front of a computer all day that they invest in good mobile technology.

We're currently not finding much difference in the quality of answers people provide on mobile phones, something which had concerned us.

At the other hierarchical extreme, executives like giving feedback on iPads.

Think of your questionnaire as an interview, not a survey

Twenty years ago I used to head recruitment for a fast-growing consulting organization.  What I realised as I created and used structured interviews for the firm to use was just how much you can learn from a single, well thought-out open question. It’s rare that you need to ask more than a couple of follow-up questions.

We’re learning that you should think of your feedback questionnaire like you do an interview. Ask a small number of open questions. Give the respondent the opportunity to present their ideas in their own words and ensure that you have enough information and context so you feel confident acting on what you hear.

With feedback, often the most valuable moments are when you get information you'd never have expected. This has always been a problem with survey design – though if you don't ask open questions you might never realise it. With recruitment interviews, the best people I met were often those that I didn't expect to be exceptional.

Only by starting with an open mind and asking well-thought-out questions do you really get the best information.

Developing empirically-driven HR leaders


Some HR leaders are trying to build HR analytic capability by hiring a small team of data scientists or using a consultancy to do pilot projects. The hope is that by doing good, valuable work they will encourage a change to a more empirically-based HR function.

The State of Sarawak in Malaysia is taking another approach and I have high hopes that it might prove to be more effective. Instead of building a small team of highly capable analysts to provide a ‘supply’ of People Analytics they’ve focussed on building demand by making analytics-understanding a core skill in HR.

Re-skilling HR

Earlier this year I started talking to Dr. Sabariah Putit, now Deputy Secretary of State responsible for transforming the civil service in the state. She hatched an ambitious plan – she wanted all her senior HR team to be trained on People Analytics.

We’ve done this work before, working with senior HR leaders to help them understand how analytics can help them in their organizations. However as our conversations progressed it was clear that their ambition far exceeded anything we had previously seen.

Analytics should be a core skill of all HR

In September we completed the first wave of training in Sarawak. During this wave around 110 of the most senior HR people in the state have been through our 2 day in-house HR analytics course. This includes not only people in the central HR functions but also the most senior HR people in various departments and regions of the State.

It’s worth stating that I don’t believe that the majority of these people should or will be doing analytics, at least not at the level which we’re seeing firms build HR analytics teams. Hence the course we’ve developed over the last 3 years focusses on giving HR leaders an understanding of how to apply analytics in their organizations.

Being good consumers of Analytics

My take is that most people in HR need the confidence to use data (or information, as we call it) to make empirically based decisions.

I want HR managers to be comfortable turning to data or analytics to inform decision-making. It's important that they don't get intimidated when more analytically-driven functions are using data to make decisions or argue their case.

There is a mythology or hype about analytics at the moment that you need a team of PhDs to do this work. I want to show that many of the core techniques and approaches are at their heart simple. When a software company or consultancy tries to sell a shiny new product / project based on ‘predictive analytics’ I want the HR managers to understand what it is, how it could be useful and how to ask sensible questions to evaluate whether it’s likely to be effective.

Finally, as organizations do build-out analytics functions in HR I want to ensure that there is a common language between the HR specialists or managers and the analysts (who often will have a non-HR background). It’s vitally important that these groups can come to a common understanding of the real need as quickly as possible and that HR managers feel confident managing analytics projects or shaping them to the needs of their clients.

Ultimately our approach is to provide a start to building a sustainable culture of empirically-driven decision making in HR. Much of what we hear both written and at conferences is about building capability for the supply of analytics within HR. What we’re attempting to do is build the demand.

Focussing on the technical aspects of analytics can be a red flag

Whilst the course that we delivered in Sarawak was called ‘Talent Analytics’ it would probably be better described as ‘How to use information to deliver empirically-based HR’. Needless to say the latter isn’t so snappy.

Too many analysts, especially those without deep HR domain expertise, seem to focus on the predictive capability of their models. They discuss which algorithm they use, or how well it can score on various definitions of accuracy.

Our take is different. We think you need to be focussing on business objectives and achieving those objectives requires not only a prediction but also action. I’m absolutely convinced that often to get the model to perform well you need to focus on variables or features that realistically you can’t do much about. As I’ve previously noted, far too few analysts use a loss function which is absolutely critical when understanding the predicted benefits of taking action.

I also frequently see a desire to overcomplicate the situation. Without a good understanding of the potential ROI of analytics (something that I spend a lot of time discussing on the course using an information economics approach) it’s too easy to over-invest in doing analysis where bringing some much simpler information to the table could be almost as effective. Analytics becomes an expensive R&D function. Our goal isn’t better analysis, it’s better decisions. If you can realise 80% of the opportunity with 20% of the effort you should almost certainly do that first.

The course we’ve developed really has two broad components. The first is number-sense and what’s probably best described as decision-making under uncertainty. The second is an explanation and demonstration of some of the techniques and approaches that we think are the most exciting and useful. We’re not really trying to teach people how to do predictive modelling or social network analysis (though we show it’s not prohibitively hard to do and in Sarawak we conducted a social network analysis to look at collaboration in the HR team to bring it to life) but really to show that these techniques exist and what they can do. All analytic techniques require you to think about the workforce and HR in possibly different structured ways. Understanding them can even challenge your existing beliefs about how our organizations work.

HR analytics is not just for HR

Building a culture of using data to inform people decisions can’t be something left totally to HR. As part of my time in Malaysia I had the honour to present to about 50 of the State’s top civil servants. In some ways I enjoy talking to ‘the business’ about the work we do more than HR. The feedback and engagement we get is both challenging and enlightening.

One of my personal highlights was discussing with the head of the largest department, an engineer, about applying systems dynamics to understand how the workforce develops over time. The realisation that he could apply approaches that are familiar to him as an engineer to his organisation was wonderful. I really hope that he is able to apply it into his organisation, even if only at the level of a mental model.

In my view, a disproportionate amount of effort has been spent creating analytic capability in some HR departments without focussing on the cultural change needed to embed these new ways of working. We could argue that there has often been a 'build it and they will come' approach. I think we need to think of this as introducing a new 'product' to the HR offering, and it's absolutely critical to treat developing the market as at least as important, so that the results and efforts are sustainable.

Our favourite books for People Analytics


A question I frequently get is "What books should the aspiring People Analyst read?" This post is a personal view on 5 that made an impact on me. Reading them won't turn you into a great analyst, but I think all analysts will be better after reading them.

The case study: Work Rules! by Laszlo Bock

Google was seen as an early pioneer in using data to understand and manage the workforce, and employees’ experiences. The case studies mentioned in this book are mostly well known by now – Google seems to work hard at controlling the message of what it is doing and to this analyst it feels like the really valuable stuff they’re doing is probably tightly hidden behind closed doors. This is hardly surprising given the ability to use People Analytics to build competitive advantage for a firm.

The part that jumped out for me when I read this book was not the analysis that they were doing but their philosophy of running experiments to test changes. Unfortunately this is rare. Well conducted experiments remain the gold-standard for good reason.

The stats book: Computer Age Statistical Inference: Algorithms, Evidence, and Data Science by Bradley Efron & Trevor Hastie

A recent publication but an absolute gem of a book. This isn’t the book to turn to if you’ve never done statistics before (but arguably if you’re that new to the field you’re probably not an aspiring People Analyst). However if, like me, your education in statistics was pre-computing this is the book which will catapult you into the 21st century.

Part philosophy and part technique, Efron and Hastie quickly move through frequentist, Bayesian and Fisherian inference, and 300 or so pages in they're discussing deep learning and SVMs. The graphics are well done and in colour; the equations are there but well explained.

There are lots of good stats books, especially if you want topic-specific ones. For general statistics knowledge, Doing Bayesian Data Analysis by John Kruschke is a good start, or even Statistics in a Nutshell by Sarah Boslaugh if you want a good, easy read. The MOOCs do stats really well.

The application in business: Data Science for Business by Foster Provost & Tom Fawcett

If you want to understand how to integrate People Analytics into your work this is a very good place to start. Provost and Fawcett explain the key topics and themes of machine learning and analysis in a (relatively) simple manner, highlighting what to look out for. It's probably best suited to those who need to manage analysts rather than to teaching you how to do analysis, but of course an analyst will probably learn a lot from it, not least how to communicate what they do in a clear and concise manner.

There are numerous books on a similar theme. This one stands out to me as having a lot less hype and being more practical. It appeals as one written by folks who actually do analysis rather than just wanting to show how exciting / cool it can be.

Developing the mental models: Personnel Economics in Practice by Edward Lazear & Michael Gibbs

Having domain knowledge is critical to do high-quality analysis. Today we’re seeing lots of great analysts coming from a variety of non-traditional backgrounds into People Analytics and there is a need for these people to quickly get knowledge of how organizations & workforces work. This rich book does that with clarity.

Personnel Economics is the application of econometrics & economics into workforce issues. It tends to be a micro-economics based approach but also covers topics from game theory to information asymmetry. This book will help you to understand a broad range of HR related topics, but do so in a manner that makes analysis of them with your data simpler.

Data visualisation: Signal by Stephen Few

I don't agree with everything that Stephen writes, but that doesn't stop his books – especially Signal, Now You See It and Show Me the Numbers – being essential reading for analysts. They are beautifully published books, bristling with great, practical advice. Signal, the most recent, is a great addition – I particularly like the discussion on understanding uncertainty, and his discussions of funnel plots (p46 – 53) and parallel coordinates plots (p148 – 152).

I would treat his ideas – and they're passionately made – as starting points. We do a lot of usability testing of visualisations and some of the recommended approaches don't seem to work as well in practice as might be expected. The other aspect which isn't given much discussion in his work is the role of interaction in modern visualisation. Interaction adds other goals to a visualisation beyond clarity of understanding. The balance between these needs is a delicate matter.

Very few of these books will give you immediate tools or examples for doing analytics. However, I think they are more useful than those that do, because they provide insight into how to approach an analytic problem, how to structure it and how to decide how you're going to tackle it. All require thought about how to apply what they discuss, but in the end this will likely make you a better analyst than something which encourages you to follow a few recipes.

For more HR Analytics-specific books have a look at iNostix’s list of 29 HR analytics Books from 2015.

Manager-level reporting: how we automate the process


There was a call a few months ago for those in People Analytics to share more details on how they do their analysis and work. I hope this is a valuable contribution to that body of work.

We’ve been doing manager-level reporting since starting OrganizationView almost 7 years ago. Hand-crafting standardized presentations is a time-consuming process and one fraught with potential for errors. Hence we’ve always used technology to automate the process.

What started as the automation of slide packs has evolved into the automation of infographics. These days more and more of our clients are using our interactive reporting, but because we work with businesses heavily involved in retail or manufacturing, where most employees don't have access to company communications technology, printed reports are still needed. In fact we found out recently that one client distributes the reports by pinning the infographics to the toilet-room doors!

I’ll take the example of how we do printed reports from Workometry, our text-centric employee feedback tool. As we’re working for large, global firms we’re often having to produce thousands of individual reports very quickly after a feedback session is finished. Often this means producing reports in multiple languages.

Please note that this covers just the automated reporting to managers. Typically clients are getting interactive reporting that enables them to explore the rich, text data and we’ll usually get some of our data scientists to explore the data in more detail, often using sophisticated models which form the basis of executive presentations.

Getting the data and creating the assets

Workometry data are stored in a 'big-data' cloud datastore. We have structured this store to enable us to run very complex analysis over multiple time periods.

For the 'print' reports we access the datastore programmatically from R. Workometry adds metadata to each question explaining what type of question was asked (open, Likert, eNPS etc.), which enables us to define appropriate analysis and visualisation techniques depending on the question type.

Once in R we use a script for each report that cycles through all the possible reporting 'options' that a client wants. At each level we subset the data to include the responses of the relevant group, run the necessary analysis and produce the required numbers and graphics.

There are typically two ways we create sub-groups. The first is via filtering on demographic details. Workometry uses a schema-free way of storing demographics, which gives us great flexibility. Alternatively we use a graph-based data store defining relationships between individuals and then use graph traversal to identify the needed groups.

Our visualisations use the ggplot2 library in R. We've defined our own themes over time which enable us to create great-looking visualisations in the company (or client) style. R saves beautiful vector images which keep the print quality very high. We are typically producing thousands and thousands of images.

Over time we’ve built our own set of R functions which define the graph types and calculations that we need based on what we’ve found best for each question type. Therefore building a new reporting script is a very time-efficient process – it’s literally bolting together pre-built functions. Given we control how the data is captured we’ve been able to control all elements of the workflow.
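
A simplified, illustrative sketch of the kind of loop described above. The data, groups, chart type and theme are invented; our production functions are more elaborate, but the structure – subset per group, build a themed ggplot2 chart, save a vector PDF – is the same.

```r
library(ggplot2)

# Made-up response data for three reporting groups
responses <- data.frame(
  group = rep(c("Sales", "Operations", "Finance"), each = 50),
  score = sample(0:10, 150, replace = TRUE)
)

# An in-house theme would normally live in its own package of functions
report_theme <- theme_minimal(base_size = 11)

for (grp in unique(responses$group)) {
  subset_df <- responses[responses$group == grp, ]

  p <- ggplot(subset_df, aes(x = factor(score))) +
    geom_bar() +
    labs(title = paste("Score distribution -", grp),
         x = "Score", y = "Responses") +
    report_theme

  # Vector output keeps the print quality high
  ggsave(paste0("score_distribution_", grp, ".pdf"),
         plot = p, width = 120, height = 80, units = "mm")
}
```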

The output of our reporting script in R is twofold:

  • All of the individual visualisations, scaled and formatted as needed for the report saved as individual PDFs
  • data defining what numbers and graphics should be used in each individual report.

Creating beautiful reports

We use Adobe InDesign to create the reports. InDesign is a high-end publishing tool which gives us amazing control over every element of the look and feel of the reports.

In almost every instance now we work with graphic designers to create a template for each report. Some clients have a particular template which they use each time; others create a new template for each feedback run. It might seem a bit of an extravagance, but the use of graphic designers really takes the reports to the next level. Our experience is that people associate great-looking, professional reports with high-quality analysis.

Within the InDesign template we can define where the data and graphics should be inserted, and how they are sized and styled. InDesign then takes the data from R and inserts it into the relevant places in each report. As well as inserting the numbers it can bring in the visualisations. The output here is a set of PDFs, one for each team.

Back to R

We use R for the final step, mostly because after all these years and hours of writing R code it’s my ‘natural’ tool and we’re reusing code written over many years.

The final R script that we use has three main functions (sketched below):

  • It renames the PDF files to give them meaningful names
  • It adds metadata to define who should see each report
  • It uploads the reports into the manager reporting part of Workometry.
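
An illustrative sketch of this final step. The file names, teams and metadata are hypothetical, and the upload itself uses Workometry's own (non-public) interface, so it appears only as a placeholder comment.

```r
# Hypothetical mapping of generated PDFs to the people who should see them
reports <- data.frame(
  file    = c("report_0001.pdf", "report_0002.pdf"),
  team    = c("Sales DACH", "Operations UK"),
  manager = c("m.smith", "j.ng"),
  stringsAsFactors = FALSE
)

for (i in seq_len(nrow(reports))) {
  # 1. Give the PDF a meaningful name
  new_name <- paste0("Feedback - ", reports$team[i], ".pdf")
  file.rename(reports$file[i], new_name)

  # 2. Metadata defining who should see the report
  meta <- list(file = new_name, visible_to = reports$manager[i])

  # 3. Upload to the manager-reporting area (placeholder for the real call)
  # upload_to_workometry(meta)
}
```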

Managers can get their reports from the Workometry tool, which provides a secure way of delivering thousands of reports to the right people as needed. Hence we control and automate the distribution of documents. Managers still have the ability to explore the data interactively.

The whole of this process can be run in a few minutes, meaning that every manager can have 'their' reports as soon as the feedback period closes. Senior managers can have multiple reports – typically their own group and those of all the managers below them. We can also do sophisticated cuts of the data such as 'people who report to me' and 'people who have been on a project that I've led'.

Closing thoughts

The commentary about People Analytics is usually about smart models and analysis. However, as any analyst knows, moving analytics into production is at least as important. Automating key tasks lets you free up analyst teams to deliver more valuable, custom work. Hopefully this gives those who are interested enough knowledge to recreate a similar workflow in their own businesses.

The greatest mistake for many in People Analytics?


Douglas Hubbard, in his excellent book 'How to Measure Anything', cites 3 economically-valuable reasons for measuring (and thus analysing):

  • To make better decisions
  • To influence behaviours
  • To sell that information.

All three are applicable to HR departments, especially if you consider that participating in a salary survey is selling information (because the cost of acquiring the aggregated data is lower if you've contributed your own data). However the primary reason for most of the work we see in People Analytics today is the first – making better decisions.

Machine learning takes over HR

So called 'predictive modelling' has made a huge impact on HR, or at least on the ambitions of HR departments, over the last 24 months.

As a firm, we’re less-inclined to focus purely on the predictive nature of models, because, as I’ve previously mentioned, there are several other good uses of models.

Models can be used for:

  • understanding what is going on
  • communicating what is happening
  • predicting / forecasting a future state.

Many I speak to in HR who talk about and work in 'predictive analytics' are actually using machine learning approaches predominantly for the first two of these reasons (a good check on the primary purpose of a piece of work is to understand how it's intended to be moved into production).

Machine learning approaches tend to try and do two things:

  • They try and optimise a particular variable or feature
  • They search for patterns in data.

The loss function

If we look at a simple model of decision making under uncertainty (all decision making is under uncertainty as decisions are made about future events and all events in the future are uncertain) we need:

  • the probability an event is going to happen
  • The cost / value that we’ll face if that event occurs.

Furthermore we often have to account for the cost / value of the action we’ll take on the back of a decision if we think a particular event will happen. In general there is hopefully a benefit from making a good decision. However, there is also usually a loss from making a wrong decision.

In HR (as in goalkeeping in football / soccer) this loss / value is often asymmetric – i.e. the cost of making a wrong decision can be much larger than the benefit of making a right decision. In most instances it's good enough to build something which amounts to a grid summarising all the key options.

The function that defines the various costs / values is the loss function. Multiply the probability of the event happening by the loss and you get the expected value.
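
A minimal sketch of that calculation, with invented numbers and an asymmetric loss grid for a 'do we intervene with this employee?' decision:

```r
# Invented figures: intervening unnecessarily is cheap, losing a valued
# employee we failed to act on is expensive (negative values = costs)
p_leave <- 0.30   # modelled probability this employee leaves

value_if_stay_intervene  <- -2000    # retention action spent unnecessarily
value_if_leave_intervene <- -10000   # acted, but they left anyway
value_if_stay_no_action  <- 0
value_if_leave_no_action <- -50000   # replacement and lost productivity

expected_intervene <- (1 - p_leave) * value_if_stay_intervene +
                      p_leave       * value_if_leave_intervene
expected_no_action <- (1 - p_leave) * value_if_stay_no_action +
                      p_leave       * value_if_leave_no_action

c(intervene = expected_intervene, do_nothing = expected_no_action)
# -4400 vs -15000: intervening has the better expected value here
```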

(In this article I'll focus on the firm acting rationally. It's worth noting that individual managers might not share this incentive.)

Optimising on the wrong thing

This brings us to the key issue with many modelling projects we see in HR: they focus on optimising a particular workforce metric rather than optimising expected loss / value.

In most attrition models that we see the model tries to reduce the number of people leaving. This can cause significant issues.

In one recent project we found that improving resourcing issues would minimise the number of people that the firm lost. However, it did so by disproportionately reducing the attrition of low and medium performers. In general, if your sole objective is to reduce your turnover figures, do a better job at keeping your low performers!

In this example the best approach, accounting for the loss function, was to focus on another set of factors – those most influential in driving the attrition of high performers. Because our loss function valued high performers more highly than average employees, the optimal solution focussed on the issues that most increased the likelihood of this group leaving.

Defining a loss function can take time.

In their eagerness to start building a great model many analytics teams overlook the resource and effort needed to build a good, reliable loss function.

In some areas the loss function is relatively easy to build – if, for example, you have individual sales figures for each employee. However in most instances it is more difficult and requires a long process of investigation plus some realistic assumptions.

A suggested approach

We use CRISP-DM as a methodology to guide all our analytics projects. The iterative nature of this approach is valuable when creating a loss function as well as when building the model. It fits well into the Business Understanding – Data Understanding stages of CRISP-DM.

It’s worth socialising the loss function widely across the firm, not only in HR but in other functions who might have a stake. We like including finance in such a conversation as an agreement from finance that the approach is realistic tends to be very powerful when convincing sceptical HR managers that the approach is sensible.

Monte Carlo simulation can be powerful

At the end of the process we’re often exploring questions such as ‘if we change ‘x’ what will be the overall benefit / loss?’

In reality we now have two forms of uncertainty:

  • We have uncertainty for each prediction, on a row-by-row basis
  • We have uncertainty in the loss function (because we cannot predict with 100% certainty the real costs / values associated with each outcome).

Suddenly even the mathematics of understanding expected losses / gains becomes difficult. Fortunately simulations can help.

If we explore our probability curves using simulation we can start to understand not only the most likely outcome of any decision but also the certainty we should have in the predicted loss or gain.
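
A hypothetical Monte Carlo sketch of both sources of uncertainty: a distribution over the predicted probability of leaving and a distribution over the cost if the person does leave, combined into a distribution of expected loss rather than a single point estimate. All parameters are invented.

```r
set.seed(123)
n_sim <- 10000

# Uncertainty in the prediction (probability of leaving, centred near 0.3)
p_leave <- rbeta(n_sim, 30, 70)

# Uncertainty in the loss function (replacement cost, centred near 50k)
cost_of_leaving <- rnorm(n_sim, mean = 50000, sd = 10000)

# Simulated expected loss of doing nothing, run by run
expected_loss <- p_leave * cost_of_leaving

quantile(expected_loss, c(0.05, 0.5, 0.95))
# A range of plausible losses, not just a single point estimate
```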

Thoughts when building your People Analytics function.

Building loss functions is a skill that many analysts won’t have naturally. If you’re building a People Analytics function it’s worth thinking of how you are going to resource this need.

It’s likely that people with this skill will have come from a more business-focussed background. You might have them in the HR function. You might have them in other parts of the business, for example operations or finance. Alternatively some parts of management consultancy will have developed such skills.

One of the key underlying assumptions of People Analytics is that not everyone is equal, and therefore if we focus on valuable segments we can improve our organizations.

The only logical implication of this thinking is to include loss functions in all your analysis work.

People Analytics – what’s in it for the employees ?


Early in life I developed a love for microeconomics. All of my electives at uni were micro or mathematical economics based. At the time this was unfashionable but over the last 15 years it’s risen in stature.

Microeconomics is all about understanding how people react to incentives. When we’re doing People Analytics we’re not dealing with numbers generated by some machine, we’re dealing with information created by people dealing with the constraints they’re facing as they try and find their best solution. To be a good analyst you always have to understand incentives.

It's worth understanding that in organizations there will never be complete alignment between the organization's incentives and those of the individual. Assuming that what is right for the business will also be right for the individual is dangerous. Therefore it's right that we explicitly consider individual-level incentives and identify ways that these can be harnessed to increase the value to the organization.

What’s in it for me?

People often ask me what the most important thing is for doing great People Analytics. The answer is simple – trust. If your employees don't trust what you will do with their data they won't give it to you, or won't provide relevant, truthful data.

Too often I see HR rushing into analysis thinking only about (a) what they can do to improve the business and (b) how they can improve their own lives. Whilst the first is important, critical even, it's always important to think about what's in it for the employees.

People Analytics teams always need to be in a position where they are comfortable explaining what they’re doing to the employees. I personally think we can hold our heads high with the work we’re doing. In many instances we’re looking for information to make better, more objective decisions. We’re using data to reduce bias.

Give people a choice between acting in the best interest of their employer and acting in their own best interest and most of the time they’ll pick the latter. Of course sometimes the two are aligned but this isn’t always true.

With any decision there are likely to be winners and losers. Prospect theory suggests that those who are going to lose out will be more vocal than those who will benefit. Those whose careers have been built on politics will feel threatened by moving to an objective world.

However, this isn’t a reason not to be objective. Companies can afford to be more objective. It’s right to try to maximise the overall benefit for all stakeholders.

For any People Analytics work to be sustainable (and thus to maximise the benefit over time) it needs to benefit all stakeholders. You’ll need employees to help give you great quality data. To do this you have to ensure they can see why it’s in their own personal interest.

The vast majority of our analytics projects can be explained as being good for employees. Attrition modelling is all about reducing the economic cost of people leaving. People leave because they’re unhappy, so you can only reduce attrition by addressing that unhappiness.

We can think of other topics: sickness, health & safety (reducing accidents), improving customer experience, better alignment of teams, diversity, employee experience. What is good for the employer is almost always good for the overall employee population. However, the question will often need to be reframed to make this explicit.

People Analytics teams need to be proud of what they’re doing. They should communicate what they’re doing in as open a way as possible. They should engage with works councils where these exist.

If you don’t do this your work won’t be sustainable. You should assume that employees will find out what you’re doing with their data. It’s better for you to tell them in advance. It’s important to build trust.

How thinking about employees’ incentives means better data

Data acquisition is a fundamental part of any analysis work, and in many instances you will rely on employees providing that data. Again, if you want your work to be sustainable you need to think about what’s in it for them to support the initiative.

Again, we have a split between what is in the company’s interest, and what are the incentives for the employee. Unless you consider the latter, and build a convincing case for them personally benefiting, you won’t get great quality data.

For those who are entering the data as part of their job maybe the incentives are aligned. For many other instances there has to be a good reason why someone should provide data.

One of the models we use to describe how improving incentive alignment can improve analysis is what we call the Virtuous Circle of Data Quality. This describes how employees will contribute more information, or take better care when providing information, if they feel that the analysis provides benefits to them.

One area close to my heart is asking for feedback. Employees are clearly motivated to provide feedback if they feel that their feedback will make a difference. We see this from both engaged employees – who care deeply about an organization and want it to improve – and disengaged employees – who want to tell you what’s not working.

It’s worth thinking, however, how you can take this further by providing individual-level reporting. One early proponent of this is Towers Watson who offer individual engagement reports. Their implementation seems very similar to how psychometric test developers often provide reporting to test participants (an almost identical incentive problem). I think this is a great first step. I suspect that user-centred design combined with advanced analytics is the way to build upon this approach.

Help people meet their objectives

The explosion of workforce data has created great opportunities for helping individuals to manage their work in ways that help meet their own personal goals. As analysts and technology providers we need to now build tools that meet this need.

As people analytics has evolved we’ve matured from solving problems which were of most interest to HR teams to addressing issues that meet business objectives. The next phase will be about providing tools that enable individuals to meet their objectives, even if their objectives aren’t explicitly the same as the organisations’.

We could think about using recommender systems to provide advice, both in terms of identifying knowledge and proposing contacts and networks. We could be thinking about showing people how others like them have managed their careers, or helping them realise more value from the vast range of non-pay benefits that are often available but not well known within organizations.
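As a hedged sketch of the simplest version of such a recommender, the snippet below uses cosine similarity between employees’ (entirely invented) benefit-usage vectors to suggest under-used, non-pay benefits; a real system would of course work with actual usage data and a proper recommendation library.

```python
import numpy as np

# Rows: employees, columns: non-pay benefits (1 = uses it). Purely illustrative data.
benefits = ["gym", "training_budget", "mentoring", "bike_scheme"]
usage = np.array([
    [1, 1, 0, 0],   # employee 0
    [1, 1, 1, 0],   # employee 1
    [0, 1, 1, 1],   # employee 2
    [1, 0, 0, 1],   # employee 3
])

def recommend(employee: int, k: int = 2) -> list[str]:
    """Recommend benefits this employee doesn't use but similar employees do."""
    # Cosine similarity between this employee and everyone else.
    norms = np.linalg.norm(usage, axis=1)
    sims = usage @ usage[employee] / (norms * norms[employee] + 1e-9)
    sims[employee] = 0.0
    # Score each benefit by similarity-weighted usage, then drop ones already used.
    scores = sims @ usage
    scores[usage[employee] == 1] = -np.inf
    top = np.argsort(scores)[::-1][:k]
    return [benefits[i] for i in top if np.isfinite(scores[i])]

print(recommend(0))  # with this toy data, suggests mentoring before the bike scheme
```

The design choice worth noting is that the value here flows first to the employee (discovering a benefit they didn’t know about) and only indirectly to the organization, which is precisely the alignment this section argues for.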

What is common across almost all of these potential solutions is that (a) there needs to be a recognition that employees’ and organizations’ incentives don’t perfectly align and (b) they use technology, data and analysis to reduce friction including search costs.

Analysis becomes sustainable when it addresses the needs of multiple stakeholders. HR teams should ask themselves the question of ‘what’s in it for the employees?’ If they can answer this question effectively it greatly increases the chance of success.