People analytics is a means to an end, not the end

I’ve had a few interesting meetings this week with senior HR managers who are actively building their People Analytics functions. A common concern is understanding how such a group delivers value, and therefore how much to invest and which topics to tackle.

The goal is better decision making and policy changes

The biggest issue I see is that the outcome of an analytics project is too often taken to be the insight provided by a good analysis. Teams are therefore set up to deliver insight and results, and companies go out and hire data scientists, often without HR experience. The real way of creating value is by making better decisions and implementing the relevant changes.

I think a contributing factor to this issue is that so many people working in these functions come from a background in academia. In academic work the outcome is the research paper.

In the commercial world value isn’t realised until a change is made. I’ve seen far too many projects that end with the final PowerPoint deck. The analyst often blames their client for not managing the implementation; the client often responds that the analyst didn’t consider how the recommendations were to be implemented.

One way to increase how often analysis moves to action is for the team to engage early, and throughout the project, with the business teams who will implement any recommendations. It’s important to be looking for policy changes where there is a positive ROI. You need to consider the costs of these changes and the ease of realising the benefits.

The objective is not the most accurate model. It’s realising as much value as possible. Factors that influence this are the predicted benefits, the cost of making the change and the probability that any change will deliver the predicted outcomes.

Focus on things that you can change

Most models will highlight a decent number of potential changes. Some will be easier to make than others. At least in the early stages it’s better to focus on the things that are easy to change and have reasonable value.

With workforce models many of the most important variables are difficult to change. Things such as tenure or location are usually up there as some of the more important factors. Both are difficult to change in the short term. Others such as time in a role might be easier.

Other factors, such as communication quality, can be more difficult to change; the uncertainty around managers changing old habits is quite high. Process changes or resource availability could be easier. The only way to understand this is to have domain expertise.

Ideally, you will want to identify the costs and uncertainty of any changes. These feed into your model so that you optimise on expected value, not on a workforce variable such as attrition.
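
As a rough illustration of what that optimisation can look like, here’s a minimal sketch in Python. The intervention names, benefits, costs and success probabilities are all hypothetical, purely to show the ranking logic.

```python
# A minimal sketch of ranking candidate interventions by expected value
# rather than by model importance. All figures and intervention names are
# hypothetical, purely for illustration.

interventions = [
    # (name, predicted annual benefit if it works, cost to implement,
    #  probability the change actually delivers the predicted outcome)
    ("Reduce time-in-role stagnation", 400_000, 120_000, 0.60),
    ("Improve manager communication",  650_000, 150_000, 0.25),
    ("Fix onboarding process gaps",    250_000,  40_000, 0.70),
]

def expected_value(benefit, cost, p_success):
    """Expected net value: benefit weighted by delivery probability, minus cost."""
    return benefit * p_success - cost

ranked = sorted(
    interventions,
    key=lambda x: expected_value(x[1], x[2], x[3]),
    reverse=True,
)

for name, benefit, cost, p in ranked:
    print(f"{name}: expected value = {expected_value(benefit, cost, p):,.0f}")
```

Under these made-up numbers the onboarding fix outranks the communication programme, even though the latter has the largest raw benefit: cheap, likely-to-stick changes win on expected value.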

Look at subgroups for unintended consequences

Your workforce is not homogeneous, and whilst a model might produce a good overall effect it will usually do this by increasing some ‘value’ in one area and reducing it in another.

To compound the issue, the value of these subgroups is never homogeneous, and neither is the cost of implementing changes. In a commercial setting you’ll never be able to explore every question; you need to direct your analysis to chase value.
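
A small sketch of what that subgroup check might look like in practice; the subgroups, rates and per-leaver values below are invented for illustration.

```python
# A sketch of checking whether an overall improvement hides a regression in a
# subgroup. Column names and figures are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "subgroup": ["Sales", "Sales", "Engineering", "Engineering", "Support", "Support"],
    "period":   ["before", "after", "before", "after", "before", "after"],
    "attrition_rate": [0.18, 0.14, 0.10, 0.13, 0.22, 0.19],
    "annual_value_per_leaver": [90_000, 90_000, 140_000, 140_000, 60_000, 60_000],
})

# Change in attrition per subgroup (negative = improvement)
change = (
    df.pivot(index="subgroup", columns="period", values="attrition_rate")
      .assign(delta=lambda t: t["after"] - t["before"])
)
print(change)
# The simple average across subgroups improves here, yet Engineering got worse,
# and its per-leaver value is the highest - exactly the trade-off to surface.
```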

The HR team have to be aware not only of what outcome will be achieved but also of how it will be realised. Ultimately, whilst you can capture a great deal of detail in the loss function, it still requires human judgement to make decisions.

If you’re deploying ongoing predictions, understand how humans will react

Businesses can afford to make rational decisions; your managers often can’t. In most organizations the upside to an individual of making a good decision matters far less to them than the downside of making a wrong one. Humans protect themselves from loss.

How the predictions are presented, the training given to the individual who receives them, and the decisions they are likely to take are critically important. If the model suggests Jim has a 60% chance of leaving, does his manager see that risk as a prompt to reduce Jim’s risk of leaving, or as a prompt to reduce the cost of losing Jim, for example by no longer investing in Jim’s development?

We believe that, in most instances, it’s better to communicate the factors that increase risk, and how to address them, rather than individual-level predictions. Even with a good model achieving 80% accuracy, it’s worth remembering that 1 in 5 predictions will be wrong. Focus on subgroups rather than individuals.
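
The arithmetic is trivial, but worth making explicit; the team size below is hypothetical.

```python
# The quick arithmetic behind "1 in 5 predictions will be wrong", scaled to an
# illustrative team. Numbers are placeholders.
accuracy = 0.80
team_size = 25

expected_wrong = (1 - accuracy) * team_size
print(f"Across a team of {team_size}, expect roughly {expected_wrong:.0f} wrong calls.")
# Acting on every individual flag means some actions will target the wrong people;
# communicating risk factors at subgroup level avoids singling anyone out.
```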

Decide how you’re going to measure the changes

We believe that changes need to be iterative and tested. A good analytics team will work with those responsible for making the changes to design ways of capturing fresh data to monitor and validate the changes, and in turn use this data to refine their models.

In an ideal world you’d run pilots (a more organization-friendly way of describing experiments). Here it’s important to look for unintended consequences. It’s best to make several small changes rather than bundling them into a series of large, broad programmes. Small and targeted is the right approach.
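
As a sketch of the kind of comparison a pilot produces, assuming a hypothetical pilot group and a comparable control group:

```python
# A minimal sketch of comparing a pilot group against a control group on one
# outcome metric. Group sizes and figures are hypothetical.
pilot   = {"headcount": 120, "leavers_in_quarter": 9}
control = {"headcount": 115, "leavers_in_quarter": 14}

pilot_rate   = pilot["leavers_in_quarter"] / pilot["headcount"]
control_rate = control["leavers_in_quarter"] / control["headcount"]

print(f"Pilot attrition:   {pilot_rate:.1%}")
print(f"Control attrition: {control_rate:.1%}")
print(f"Observed difference: {control_rate - pilot_rate:+.1%}")
# With groups this small the difference may well be noise, which is another
# argument for several small, repeated pilots rather than one bundled programme.
```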

Measurement should be designed not only to capture reliable data but also to reduce the friction in capturing that information; that way it’s more likely to actually be measured.

Look for 4x+ projects

So which projects should you attempt, and how much are they worth?

Given all this uncertainty in implementation, we probably need to tackle projects where the expected net return (return minus the cost of implementation), using a conservative estimate of value, is several multiples of the cost of the work. Our crude guide is to look for areas where the potential value is at least 4x the cost of the analysis.

A good way of doing this is to use your financial model early on to estimate the outcome if you could improve the situation by 5%. For a topic which hasn’t previously been improved using empirical data, this is usually a conservative yet achievable estimate.
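
A back-of-envelope version of that sizing exercise might look like this; every figure below is a placeholder you’d replace with numbers from your own financial model.

```python
# A back-of-envelope version of the "improve by 5%" sizing exercise. All
# figures are hypothetical and would come from your own financial model.
headcount        = 2_000
annual_attrition = 0.15
cost_per_leaver  = 50_000   # assumed replacement + ramp-up cost
improvement      = 0.05     # assume a 5% (relative) reduction in attrition

leavers_today   = headcount * annual_attrition
leavers_avoided = leavers_today * improvement
potential_value = leavers_avoided * cost_per_leaver

cost_of_analysis = 150_000
print(f"Potential annual value: {potential_value:,.0f}")
print(f"Value / cost of analysis: {potential_value / cost_of_analysis:.1f}x")
# Under these assumptions the ratio is ~5x, which clears the crude 4x threshold.
```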

The next phase is to build your model in a series of stages. Use milestones at regular points to decide whether it’s worth refining the model further. Don’t be afraid to stop early; modelling hits diminishing marginal value quite quickly.
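
One sketch of that staged logic: at each milestone, compare the marginal value the next stage might unlock against its cost, and stop when it no longer pays. The stages and figures are hypothetical.

```python
# A sketch of staged modelling with early stopping: proceed only while the
# estimated marginal value of the next stage exceeds its cost. Figures are
# hypothetical.
stages = [
    # (stage name, estimated extra annual value unlocked, cost of that stage)
    ("Baseline model",        400_000, 60_000),
    ("Add engagement data",    90_000, 50_000),
    ("Further model tuning",   15_000, 40_000),
]

for name, marginal_value, stage_cost in stages:
    if marginal_value < stage_cost:
        print(f"Stop before '{name}': marginal value {marginal_value:,} "
              f"< stage cost {stage_cost:,}")
        break
    print(f"Proceed with '{name}': marginal value {marginal_value:,} "
          f">= stage cost {stage_cost:,}")
```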

Final thoughts

I suspect that 50% of a People Analytics function needs to be people who understand how to make the changes, not analysts. You’ll want this ‘consultant’ role to understand how to make organizational changes as well as they understand the models.

However, it’s important that they have a good knowledge of how data can support decision making. They need to understand how models work, what they can say and what they can’t. They need to understand where to apply analytics and where it’s probably best to rely on managers’ intuition (because the managers are already making good decisions in relatively low-value areas). They’re the organization’s coach on how to apply data to decision making.

The ultimate goal of any People Analytics function is not to build models – it’s to encourage empirically-driven decisions about business issues that depend on the workforce.