Interview: Nick Lynn on his new book 'Employee Experience Leadership'

Nick Lynn’s new book ‘Employee Experience Leadership: build trust through employee experience and engagement’ is a fantastic read. Nick’s background as a practitioner and consultant, working with senior leadership teams to implement employee insight and insight-driven change, is apparent, yet this is not a book designed to sell more consulting assignments but a comprehensive and information-dense guide to what is, for many, a new topic.

Why Employee Experience demands new forms of measurement


Over the last two years we’ve seen a big shift in HR conversations from a narrow focus on employee engagement to a wider one about employee experience. Whilst this change has been widely accepted, the implications for measurement haven’t been considered in the same detail.

More qualitative data is needed

Unlike employee engagement, employee experience isn’t grounded in such a formal, psychology-based concept. Understanding experience demands a much more exploratory approach, which in turn demands much more qualitative measurement.

When I first got involved in what would now be called ‘employee experience’ almost 15 years ago, as a quantitative analyst I was initially surprised not only at how my colleagues in customer experience used a much more qualitative research approach, but also at how effective this qualitative data was at driving action.

A simple example was the user tests we’d conduct on the careers sites and tools that we designed and implemented. I was advised by our researchers that large numbers of different individuals weren’t necessary to run an effective study. Of course we ensured we recruited well and tried to reduce bias, but I was surprised at how quickly the results started to converge. In terms of identifying key areas to change, we had probably found most before the 10th person.

Another thing that I rapidly realised was that the issues we identified were rarely the ones we had expected. We went into the testing with a set of ideas to test, often building multiple versions to compare, then found that something completely unexpected needed fixing more urgently.

The data that we captured was typically much richer than with a traditional quantitative approach. Quantitative analysis can be very good at identifying what, when and how, but it’s very poor at helping us understand why - i.e. the underlying causes. From the perspective of improving experience, the critical part is to understand ‘why’.

Measurement and service design

Within service design, which we believe employee experience aligns closely with, an approach of going in with an open mind and asking simple, open questions will typically outperform analysing more structured, quantitative data.

We’re trying to understand the events - often called ‘moments that matter’ - that create the positive or negative experiences so that we can systematically improve these touch-points (or reinforce the good ones). Understanding experiences is inherently exploratory. We need to start with an open mind to understand the experiences from the users’ perspectives.

Traditional surveys are ineffective

Traditional surveys used in HR (and old-school customer surveys) take a constrained view of what could have happened and then offer each possibility as an option.

We all know not only how ineffective these can be but also how frustrating. I expect everyone has seen a customer survey which asks question after question about an experience (eg a hotel stay) but fails to ask about the one event you want to comment on.

Not only do these designs fail to capture the relevant information, they also provide poor usability and make a poor experience feel worse.

Over the last 15 years these types of surveys have become less common as firms have transitioned to simpler feedback methods, usually with a small number of scale questions and the opportunity to provide open feedback. Think NPS-type surveys or online rating platforms such as Expedia or Glassdoor. What they have in common is that they use open text instead of closed questions to capture the majority of the information.

Quickly addressing issues

If we look at mature customer experience efforts we see a strong (often company-wide) desire to close the loop quickly on raised issues, regardless of the channel they come in through.

In the hotel industry a range of platforms have been developed to enable managers to capture feedback regardless of where it is left (guest surveys, social media or one of the many hotel-review sites) and ensure that the guest is acknowledged quickly and any issue addressed. Many hotels will hold daily meetings for management to review issues and identify solutions.

This approach has two key benefits, both of which apply to employee experience. First, the issue can be addressed in a timely and effective manner. Second, responding quickly minimises the chance that the issue becomes bigger than necessary via word-of-mouth.

Multiple levels of action

With employees, action often needs to come from various sources. Some interventions can be dealt with at the manager level. Others might require escalation to various internal teams, eg IT. Finally, there is a class of issues which are most effectively dealt with at a senior level.

If HR wants to manage employee experience effectively, I believe it needs to build an approach similar to the management meetings in hotels: cross-organization groups that ensure issues are addressed quickly and effectively.

When I managed the new employee experience program at UBS we created governance groups to ensure a multi-stakeholder approach. IT created a special team to manage global technology provision, and we co-designed processes to bridge the gap between HR and IT so that, even though managers were still responsible for ordering, IT could proactively contact a manager if an order hadn’t been received in sufficient time. We also created technology that let different members of our cross-business governance group be responsible for their own content whilst providing one simple, coordinated communication.

With employees it’s not always a good idea to devolve action to managers. Employees will often choose to remain silent rather than provide voice if they don’t feel safe. With the manager so important in creating, and destroying, employee experience there is an opportunity for an employee experience team to act as the filter, reviewing the feedback and allocating it to the correct individual, whilst preserving confidentiality where appropriate.

On a regular basis there is a need to take a top-down view of all feedback. Statistical analysis and ‘AI’ can provide badly needed objectivity in understanding how themes are distributed, where outliers lie and how feedback is linked. This objectivity is vital to ensure that decisions are effective. The technology ensures that such work can be done in a timely and cost-effective manner.

Ask for ideas and issues

Many in the customer experience space suggest asking a simple ‘why did you provide that answer’ type question as the open question. After studying millions of answers to employee questions we believe that there is a better way.

Some of your most engaged employees will be the ones who identify the most important things to change. Conversely it’s important to understand what those who are disengaged or unimpressed value.

Our experience is that, just as with good performance feedback, it’s best to ask everyone ‘what is good about x?’ and ‘what could we do to improve x in the future?’

This effectively doubles the amount of feedback you receive whilst reducing bias in the data. It makes analysis easier and adds little additional burden on the user.

The advantages of asking many people open questions

At the beginning I suggested that qualitative studies typically require fewer participants. The key reason is that the ideas tend to converge quite quickly.

The main reason for not asking lots of people is the cost of doing so. Manually theming text is time- and resource-intensive, so it made sense to do as little as was needed to get good results.

However, there are advantages to getting more data. Themes and ideas have a long-tail distribution and the more data you have, the further down that tail you can look.

At the same time, if you want to understand the distribution of, or relationships between, feedback and ideas, you can extract more insight with more comments.

Modern text algorithms dramatically reduce the time and resources needed to get human-level accuracy. They therefore offer more opportunities to collect and comprehend ideas at scale.

Wharton People Analytics Conference


OrganizationView was a runner-up in the Startup competition at the 2019 Wharton People Analytics Conference. This was our presentation:


Good Morning.

I’m Andrew Marritt, CEO and Founder of OrganizationView, a specialist People Analytics practice founded in 2010.

Since 2015 we’ve focussed almost exclusively on helping clients use workforce-sourced text data in their models and decision-making. As analysts we saw that the best quantitative models could identify who, when and what but failed to explain ‘why’ a behaviour occurred. We understood that to improve we didn’t need better algorithms but better data.

Understanding ‘why’ is critical to design policy and process changes that are at the heart of realising value from data-driven changes. We saw that rich, contextual text data was critical to driving executive action and therefore positive change.

OrganizationView, through our Workometry service, provides large, global clients with an effective and efficient way of generating insight from employee feedback quickly and flexibly.

Alternatively structured, not unstructured data


Text is often classified as unstructured data. We disagree with that view.

Employee text is structured by its grammar, by the context of the question that was asked, and by the unique language that builds up in any large organisation.

Any effective analysis of feedback has to be able to incorporate all these aspects. With Workometry, therefore, we code at the organisation and question level, using the grammar of the answer to inform our understanding of the text. We use an inductive approach which learns the themes from the clients’ own data.

Themes

Our themes are context-rich groupings where the label should be a legitimate answer to the question. So instead of ‘communication’ we might state that employees want more transparent leadership communication, better interdepartmental communication or less communication at weekends. Only when you summarise at this level can you close the distance from raw data to effective decisions.

Clients constantly tell us that our specific models comfortably out-perform the best generic models. We’ve yet to be beaten on a competitive assessment.

Humans + Algorithms beat Humans or Algorithms

A key differentiator of Workometry is that we use a human-in-the-loop approach.

The inspiration for this came from Garry Kasparov, who discusses how decent but not exceptional chess players plus a chess computer can beat Grandmasters or the best chess computers.

With Workometry we use domain-experts to support the algorithm when it’s unsure. We’ve built a pipeline that asks experts for interpretation when needed, but where the algorithm handles routine decisions. The experts’ coding constantly builds and refines the training data ensuring rapid model convergence.
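
As an illustration of the general pattern (not Workometry’s actual pipeline - the classifier, threshold and helper names below are invented for the sketch), a human-in-the-loop coding loop might look like this:

```python
# Sketch of a human-in-the-loop coding loop: the model takes routine,
# high-confidence decisions and routes uncertain comments to a domain
# expert, whose labels grow the training data for the next iteration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

CONFIDENCE_THRESHOLD = 0.8  # below this, ask a human

def ask_expert(comment):
    """Placeholder for routing a comment to a human coder."""
    return input(f"Theme for {comment!r}? ")

def code_comments(comments, labelled_texts, labelled_themes):
    """labelled_texts / labelled_themes are lists already containing some expert-coded examples."""
    results = {}
    for comment in comments:
        # Retrain on everything labelled so far (kept deliberately simple for the sketch)
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit(labelled_texts, labelled_themes)
        probs = model.predict_proba([comment])[0]
        if probs.max() >= CONFIDENCE_THRESHOLD:
            results[comment] = model.classes_[probs.argmax()]  # routine decision
        else:
            theme = ask_expert(comment)        # uncertain: ask the expert
            labelled_texts.append(comment)     # the expert's coding refines the training data
            labelled_themes.append(theme)
            results[comment] = theme
    return results
```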

Ask any question


Another aspect of our specific modelling approach is that we can interpret the answers to any question.

This has enabled our clients to increasingly use Workometry to inform key business challenges. 

Ask any executive of a major global firm and they’ll tell you that the organisational distance between them and those closest to their customers and key processes is large. Communication is typically done via Chinese Whispers. We give them the ability to go directly to where the knowledge and ideas originate, and the opportunity to use machine learning to synthesise massive volumes of multi-lingual feedback into the key insights.

Let me give you a couple of examples:

What is getting in the way of you doing your job?

Many firms find that technologies and processes can stop employees operating to the best of their abilities. High Performers can be especially frustrated by these inefficiencies.

One of our global clients wanted to identify these frustrations. The question they asked all their employees globally was ‘What processes and technologies are preventing you from working to your best ability?’ They received 7.5 million words from 50,000 employees. For this instance we built two models - one to identify the processes and technologies (over 300 key ones) and another to identify the issues and causes.

Optimising retailers

Several of our clients are large, global retailers. In this highly competitive, low-margin sector, capturing and responding quickly to what is happening on the ground is vital - these days it can even help ensure the survival of the business.

In some instances our retail clients ask business-specific questions such as ‘what can we do to improve customer experience in our stores?’ or ‘What do we need to change to ensure we have the best ever Christmas?’.

In other instances employees mention business-specific changes in engagement-type questions. The quotes shown came from one retailer who had changed the music in store to reduce broadcast licensing charges. We quickly identified that the new music was causing significant numbers of complaints from employees, and that these employees also linked it to customer problems. Those employees were typically the most engaged, yet felt that leadership was out of touch and didn’t listen.

By surfacing the issue, this, like many of the changes we identify, could easily be fixed. In fact, not only could the issue with the music be reversed, but leadership could also demonstrate that they listened and acted quickly.

Listening to employees

I’ve given you just a few of the many examples where our clients have used Workometry to make positive changes to the way they run their organizations. Whilst much of the time the data comes from surveys, we have helped clients understand feedback from performance management systems, help-desk systems… anywhere employees provide answers to questions in large volumes.

Our clients typically fall into two groups. Many are the People Analytics functions of some of the largest global firms. We find that the best data scientists become some of our biggest supporters. They typically understand the difficulty of achieving the quality level we do and are glad that they can use Workometry flexibly to incorporate text into their work.

The other big set of clients using Workometry is the large, global management consultancies and survey organisations. We let them white-label Workometry to extend their services with their clients. With this group we often build hybrid models - starting with their proprietary classifications and adding new themes for the comments outside their model. We can provide the service via an API.

As someone who has been applying machine learning to employee data for 15 years I’ve always passionately believed that the real value comes from combining business and workforce information to solve business problems. 

With Workometry we believe that we enable our clients to use the best possible data to solve these problems, not just the data that is easiest to analyse.



Mixing qualitative and quantitative approaches in People Analytics model building


The following is our approach to running People Analytics projects, especially focussing on the hardest part - ensuring that you have the right question and right data to make a decision. 

What really is the problem?

The most critical part of any project is correctly defining the problem in a way that can be informed by data.

Whilst this might seem an easy problem, in most instances it’s not. 

Let us take a simple example - can we reduce employee turnover?

In itself this might seem like a good question, however trying to simply minimise turnover implies some assumptions:

  • that all employees are equally valuable - i.e. you have no preference to which employees you ‘save’

  • that there is no cost associated to ‘saving’ employees, or at least the cost associated is uniform for each option

  • that there is no optimal level of attrition apart from zero.

For most firms none of these assumptions would hold. What we probably want to do is to minimise the cost of attrition, where the cost function is as complete as possible and includes the cost of whatever change you need to make to reduce the attrition.
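
To make the distinction concrete, here is a toy version of that cost-based objective; every figure and function name is an illustrative assumption, not a benchmark:

```python
# Toy expected-value calculation for a retention intervention.
# All figures are illustrative assumptions, not benchmarks.
def net_value_of_intervention(n_at_risk, p_leave, p_leave_after,
                              replacement_cost, cost_per_person):
    """Expected saving from reduced attrition minus the cost of the intervention."""
    expected_leavers_now = n_at_risk * p_leave
    expected_leavers_after = n_at_risk * p_leave_after
    saving = (expected_leavers_now - expected_leavers_after) * replacement_cost
    cost = n_at_risk * cost_per_person
    return saving - cost

# e.g. 200 at-risk engineers, attrition falling from 25% to 18%,
# a 40,000 replacement cost and a 1,500 per-person retention spend
print(net_value_of_intervention(200, 0.25, 0.18, 40_000, 1_500))  # 260,000
```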

What could be causing the problem?

For many analysts, when given a problem there is a tendency to want to jump in and start building a model. This is problematic.

From our experience it is always advantageous to conduct a qualitative review to ensure that you have identified as many possible ‘theories’ as possible. 

Conducting this research has two key advantages:

  • it ensures that you make a conscious decision about what data and features you need to bring into your modelling process

  • it helps you socialise and gain acceptance for your recommendations. You’ll reduce the risk of a key stakeholder challenging you with ‘have you thought of X?’.

Desk research

It’s highly unlikely that you’re the first person to consider your current problem. Desk research will go a long way to ensure that you build your analysis on the work of others.

One place worth starting is Google Scholar. Many articles will be available even without access to a university library. With time you’ll learn how to sift through the journals and papers to identify causes quickly and efficiently.

Asking stakeholders

It’s very rare that the causes of the issue will be a true surprise to people within the organization. However it’s quite possible that decision-makers won’t have a complete view of the issues on the ground.

Companies tend to develop myths of what is causing certain issues. Time after time we see a distorted view, especially at senior levels, of what is causing issues on the ground. Not only are leaders often several layers away from the issue in the organization but they are not a representative sample of the people in the organization as a whole. Expecting their ideas to be representative and complete is foolhardy.

Traditionally this work would be done with a series of interviews and workshops. However, especially where speed or access is a constraint (eg for problems with a geographic distribution), it’s worth considering using supporting technology.

Using technology to go broad

One method which we see many of the most advanced analytics teams using on an increasingly frequent basis is to ask a few questions of a large population of people to understand what they believe is causing the issue you’re addressing. They’ll do this using a very short topic-specific survey / questionnaire.

The most important questions in these surveys will be open-text, as you’re trying to identify a broad set of potential issues in an exploratory manner. It’s almost always worth asking these sort of questions in pairs:

  • What could be causing you / others to….?

  • What suggestions do you have that could help us address this?

In addition you might ask one or two scale based questions. Depending on your topic this might be something like:

  • How big of an issue do you perceive X to be?

  • Over the last 6 months do you think this has become better / worse?

It is important to use a survey tool - almost all do this - which enables you to track who provides the answers so that you can link the answer data with various demographic variables. In this way you can analyse the results by various sub-populations. For example if you’re looking at something like attrition it’s likely that you’ll identify different reasons depending on the geography or life-stages of the individuals.
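
Once the coded answers are linked to demographics, the sub-population breakdown is a straightforward group-by. A minimal sketch with hypothetical frames and column names:

```python
import pandas as pd

# Hypothetical frames: one row per (respondent, theme) and one row per respondent
themes = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5],
    "theme": ["pay", "career growth", "pay", "manager support", "career growth"],
})
demographics = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5],
    "country": ["IN", "IN", "UK", "UK", "IN"],
    "tenure_band": ["0-2y", "3-5y", "0-2y", "5y+", "0-2y"],
})

# Share of respondents in each country mentioning each theme
merged = themes.merge(demographics, on="employee_id")
counts = merged.groupby(["country", "theme"])["employee_id"].nunique()
totals = demographics.groupby("country")["employee_id"].nunique()
share = counts.div(totals, level="country").rename("share")
print(share)
```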

When you have the answers it’s important to accurately code the reasons that people provide in the open questions. We believe that the best way of doing this is to use an inductive approach (you learn the themes from the data, not a pre-defined model). When our clients use our Workometry service for this it’s typical that only about 70% of the themes that they find are those they expected. Using an automated inductive approach replicates what the best qualitative researchers would do but at a fraction of the time / cost.

Where is the data to test these ideas?

For each of the ‘causes’ that you’ve identified in the earlier stage it’s worth thinking about how you can get data to test whether the perceived relationship is supported by the data. This part can require some creative thought.

Some of this data will be available in your business and HR systems but some certainly won’t. 

All measurement and data-capture has measurement error. At this stage you’re trying to think of ways of acquiring data that balances the amount of uncertainty in the measurement with the cost of bringing it into the model. At this stage you’re not trying to build the most accurate data capture method but instead find a way that is good-enough. If the analysis suggests that there might be something worth investigating you can then invest more resources. Doing an early review makes it easier to build a case to create a more expensive / robust method if needed.

Thinking about Proxies

One of the things that you’ll have to do is to make some reasonable assumptions to identify data that could be a proxy for what you care about.

For example, a few years ago we were helping a client build an attrition model for a national workforce across India. One of our hypotheses was that the attrition rate in any branch was influenced by the vibrancy of the local job market.

At the time we didn’t have good regional data on local job markets. Faced with either a lack of data or expensive data acquisition cost we looked for a proxy.

One idea was that if a city has a buoyant job market then more people will move to that city and therefore the population will grow (and, conversely, shrink where the market is weak). Fortunately this data was available as open data from the Indian census. By creating a variable for population growth at a city level between the last two censuses we built a proxy for job market vibrancy. It proved an important predictor in our model and helped explain why the issue wasn’t uniformly distributed.
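
A sketch of how such a proxy can be assembled, assuming you have city-level populations for two census years (the figures and column names below are illustrative):

```python
import pandas as pd

# Illustrative city populations from two census years (not exact figures)
census = pd.DataFrame({
    "city": ["Pune", "Chennai", "Jaipur"],
    "pop_2001": [2_538_000, 4_343_000, 2_322_000],
    "pop_2011": [3_124_000, 4_646_000, 3_046_000],
})

# Decadal population growth as a proxy for local job-market vibrancy
census["pop_growth"] = census["pop_2011"] / census["pop_2001"] - 1

# Join the proxy onto a hypothetical branch-level modelling dataset
branches = pd.DataFrame({"branch_id": [101, 102, 103],
                         "city": ["Pune", "Jaipur", "Chennai"]})
model_data = branches.merge(census[["city", "pop_growth"]], on="city")
print(model_data)
```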

Creating new variables

There is often a big difference between the data that is captured in systems (usually to help run some form of process) and the data you need for modelling.

The process of transformation needed to create the variable of interest from the raw data is one of the most time-consuming parts of the analysis process, but this can be significantly guided by understanding likely issues.

For example you might be capturing events as dates but the important variable might be duration - for example time since last promotion. Alternatively we often found that the rate of change was more important than the absolute value - salary often falls into this category where the salary rise has more predictive power than the absolute salary (apart from at the extremes). 
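
A small sketch of that kind of feature creation with pandas; the frame and column names are hypothetical:

```python
import pandas as pd

# Hypothetical HR extract: raw dates and salary history
emp = pd.DataFrame({
    "employee_id": [1, 2],
    "last_promotion_date": pd.to_datetime(["2016-04-01", "2017-10-01"]),
    "salary_current": [62_000, 48_000],
    "salary_12m_ago": [60_000, 40_000],
})
as_of = pd.Timestamp("2018-06-30")

# Duration rather than date: months since last promotion
emp["months_since_promotion"] = (as_of - emp["last_promotion_date"]).dt.days / 30.4

# Rate of change rather than absolute value: salary growth over the last year
emp["salary_growth_12m"] = emp["salary_current"] / emp["salary_12m_ago"] - 1
print(emp)
```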

This type of feature creation can be a never-ending task, so understanding where to focus efforts is important. Your early qualitative approaches will often pay back by reducing effort at this stage.

New measurement

In many instances you’ll need to capture new data, either because a system has incomplete data or because no records are available.

Sometimes you will be able to ask people to provide data. In other instances you might want to create it manually from a sample before bothering employees. For example, a few years ago we had a client where the hypothesis was that the employee’s previous employer (eg did they come from a competitor) was a driver. In this instance we took a sample of employee names and spent a few hours looking at their LinkedIn profiles to capture the information. It turned out that it wasn’t likely to be an issue, so we avoided creating a burden unnecessarily.

Prioritisation of data acquisition

It’s highly unlikely that you’ll be able to include all potential datapoints in your first iteration of your model. Prioritising which to select is another big advantage of using a broad qualitative questionnaire.

For each potential variable we prioritise based on two characteristics:

  • The cost of acquisition (including the resource-time needed)

  • The likely importance.

If you’ve done a wide-reaching questionnaire you will likely have a good idea of the importance by how frequently it was mentioned. This data might also highlight the groups where it could be problematic, eg an issue might be restricted to a particular geographic area. In this case instead of capturing the data globally you might want to include it in potential ‘hotspots’. It’s always possible (and often advisable) to build an overall model which is an ensemble of local models.
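
One simple way of turning those two characteristics into a ranking; the variables, counts and costs are invented for the illustration:

```python
# Rank candidate variables by likely importance per unit of acquisition effort.
candidates = [
    # (variable, mentions in the questionnaire, acquisition cost in person-days)
    ("commute_time", 420, 15),
    ("manager_span_of_control", 310, 2),
    ("local_job_market_proxy", 95, 1),
    ("competitor_pay_benchmark", 180, 25),
]

ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
for name, mentions, cost in ranked:
    print(f"{name:26s} mentions={mentions:4d} cost={cost:3d} score={mentions / cost:6.1f}")
```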

Models can’t be better than the data they see

To non-analysts it might seem that the way to build good models is to extract as much data as possible from your systems or databases, apply some form of machine learning model and use the results. This is almost always the wrong way to do analysis.

Good model building is always a conscious set of choices by the analyst about what data to include and in what form. Spending more time up-front identifying potential issues, and therefore variables, is almost always a worthwhile investment of time and resources.

As noted, as analysts we should understand that these early qualitative efforts not only increase the chance we’ll build good models, but the process of doing them dramatically increases the chances that our recommendations and models will be accepted by our stakeholders. An analyst who thinks their results will speak for themselves is likely to be an inexperienced analyst.

The best analysts know when to use quantitative approaches and when to use qualitative, exploratory approaches. In almost all instances the best approach is to combine them.


Employee Voice: How are you going to implement it?


In July 2018 the Financial Reporting Council published its long-awaited update to the UK’s Corporate Governance Code. We’ve seen considerable coverage of this in the legal press but very little for HR. This seems strange given that those in HR are likely to have to implement it.

The new regulation applies to firms with over 250 employees in the UK.

What the regulation states and what it doesn’t

The new UK Corporate Governance Code states:

For engagement with the workforce, one or a combination of the following methods should be used:

  • a director appointed from the workforce;

  • a formal workforce advisory panel;

  • a designated non-executive director.

If the board has not chosen one or more of these methods, it should explain what alternative arrangements are in place and why it considers that they are effective.

The challenge with all of these methods is how to capture the views of the employees, to synthesise them and to provide them to whichever party has been designated to include them in the board decision process.

Of course the workforce is highly unlikely to have a uniform voice. Therefore it is essential to capture this diversity of ideas. The formal approach used to decide how to handle potentially conflicting ideas isn’t part of this article.

The challenge of upward communication

As an executive you’d like to hear about all the issues that need your attention. By ‘your attention’ I mean either things that you don’t want to be surprised by when presented from another source, or things that for whatever reason need you to take action (maybe because they are cross-organisational or need your level of authority).

The problem that you’re faced with is twofold:

  1. Only a proportion of the topics you need to hear about will reach you ‘undistorted’

  2. Due to communication windows it will take time for these topics to reach you.

Let’s take the example of a business with 8 layers between the executive team and the folks who deal with the customers.

  • If a high 80% of topics that you need to hear about get through each level then this implies you’ll only hear about 21% of topics

  • If a more realistic 50% get through each level you’ll hear about less than 1% of all issues (the arithmetic is sketched just after this list)

  • Of course you’ll probably hear about the wrong topics. Because many of the issues might seem ‘too petty’ they won’t be raised. However if 500 people across the organization have this issue it probably demands your attention. Small and widespread often doesn’t get through, whereas big and niche often does. Often the real value is in fixing small and widespread issues.
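
The figures above come from compounding the pass-through rate over the reporting layers; a quick check (assuming seven hand-offs between the front line and the executive team):

```python
# Fraction of topics surviving repeated hand-offs up the hierarchy
def fraction_reaching_top(pass_through_rate, hand_offs=7):
    return pass_through_rate ** hand_offs

print(f"{fraction_reaching_top(0.8):.0%}")  # ~21%
print(f"{fraction_reaching_top(0.5):.1%}")  # ~0.8%, i.e. less than 1%
```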

The other key issue - one that we frequently see and one that is well-researched in the academic journals - is that employees often don’t raise issues - so-called ‘Employee Silence’. It can be more difficult to raise issues up through the chain, where doing so might threaten their managers, than to raise them confidentially to a central point. The sort of issues here might be to do with cultural risks such as poor incentives or behaviours.

Unlocking innovation

Executives often think about this as needing to facilitate ‘whistle-blowing’ or the raising of issues. This is, of course, important, but these instances are rare and high value - the ‘big and niche’ that I mentioned earlier. In truth these cases need special channels and need to be treated differently from other forms of employee voice.

The real challenge is not how to find the rare event with a clear channel, it’s finding the diversity of opinions and ideas about something widespread.

Often the true value in integrating employee opinion into decision making is to understand the distribution of ideas and opinions from a large population. It’s about asking all of your employees about a decision and understanding the breadth and distribution pattern of the responses.

Of course some of these ideas will be truly innovative and potentially super-interesting but often you’re trying to get a good grasp of the top ideas.

The need for qualitative information

There is a time for getting quantitative information - mostly when you already know the problem and are looking to understand the scale of it. If you know all possible options, a closed question might be the best way of putting a number on it.

In most instances however you don’t know all aspects of the problem, or haven’t identified all the possible solutions. When you’re at this exploratory stage then it’s best to ask open questions.

Sufficient, reliable, relevant and useful

Adopting a concept that comes from internal audit:

  • Information is sufficient when there is enough of it to come to an unbiased and dependable view of the topic. If you’re asking all your employees about a subject and a meaningful number of employees raise an issue then it’s probably sufficient.

  • Information is reliable based on the source of the information. Customer perception data from people working day-in and day-out with customers is potentially more reliable than information that comes indirectly. Information that can be validated is more so.

  • Relevant information is information that relates directly to the question explored. When we ask open questions to employees relevant information is that which relates to the topic. We need to down-weight information that we’ll receive but doesn’t help us answer the question (some people have something that they will raise regardless of the question asked).

  • Useful information is information that helps an organization meet its goals. If the information was about an old reporting system which has since been phased out, it probably wouldn’t be useful (because the issue has already been dealt with.)

Employee feedback provided via free text can be all of these things, though it might gain extra reliability if linked to existing data. It’s important when reviewing the summarised feedback that managers assess it against these 4 lenses.

The need to synthesise

Once you’ve decided to ask your employees a series of open questions about a key topic or decision, what do you need to do?

For many of our clients if we ask a couple of open questions we’ll get on average 20 words per question per employee. This means for each question with 50,000 responses you’ll be faced with 1 million words. 

The problem is that you probably need to present less than 1,000 words to your executive team, ideally 1 side of A4 paper.

How do you do this?

Technology to the rescue

Historically, analysing a large amount of free text was a long, expensive process. The quality that you’d get out of this also was probably lower than you’d imagine. Getting 80% agreement between reviewers is pretty good. It’s really hard for reviewers to be consistent throughout a review. Identifying a new category means having to start again.

There are several different capabilities that you need to have, most importantly the need to understand themes - context rich descriptions - rather than topics. Historically this level of understanding has been hard yet with the progress of text analytics over the last couple of years the best machine learning approaches can match human performance in many tasks. 

What you will need your technology to do

When considering algorithmic support to enable you to collect and understand Employee Voice you need to ensure that your text analysis tool can deliver several key capabilities:

1) Ability to ask any question, and to analyse the responses

We do not believe that Employee Voice - the ability to let employees contribute to decision making - is possible without being able to ask and understand questions about the decision you need to make.

The first, and arguably the most important requirement for any Employee Voice technology is that you can ask any question, and the system can theme the answers with a decent level of accuracy. 

This might seem obvious but it’s not. The best text analytics approaches work on very narrow types of text. A system might be great at parsing a CV but couldn’t understand a shopping list for example. To get the level of accuracy that you need we think you probably need to have a model fine-tuned at a question level.

2) Themes not topics

As mentioned before it’s important to understand the themes - how people describe their views - not the topics. So ‘more transparent communication’ instead of simply ‘communication’.

The algorithms should provide summaries of the answers. Only if you can understand the underlying meaning just from reading the theme label is it likely to be good enough.

3) Identify notable and unusual answers

Another key aspect is the overall pattern of themes, both in terms of overall distribution and ‘hot-spots’ of feeling across the organization and employee population. Often you’ll need to identify the rare comments that might bring genuine innovative ideas (or tell you about a problem you really need to deal with).

4) Track asking and answering of questions

For compliance purposes you will want to be able to show who was asked, when they were asked, how the feedback was analysed and how the information was integrated in the decision making process. Technology is well-suited to this task.

A process for working with Employee Voice

We do not believe that technology alone will enable firms to meet the requirements of using Employee Voice in decision making. However whichever way (or ways) businesses decide to use to bring voice into business decisions it’s clear that technology can significantly improve the cost, responsiveness and quality of the process needed to maximise benefits and demonstrate compliance.

What we constantly hear from Workometry clients is that when executives experience the benefits of being able to consult their organizations quickly and effectively they want to use it more and more. We hope that this new regulation helps elevate employee voice into a standard business practice.


Taking it further

Earlier this year we published a Guide to Employee Voice. It includes a set of useful resources and documents for anyone trying to understand best practice in this area.

Try Workometry

If you do have thousands (or even hundreds of thousands) of free text answers from your employees let us show you what is possible.



How to understand open-question employee feedback

Large organizations have vast amounts of employee text feedback that so far they’ve done little with. Often this information is the most valuable part of a questionnaire or survey, yet understanding it at scale has historically been hard and time-consuming.

The process of understanding this feedback is known as ‘coding’. What a good qualitative researcher is trying to do is to identify a set of categories that both cover as many of the comments as possible yet not so many that they become hard to navigate and understand. 

The importance of the question and organization context

Feedback does not necessarily come in beautifully crafted sentences. The understanding of any answer to a question is highly dependent on the question asked. People write answers assuming that you know the question asked and view the feedback through the lens that this context provides.

Given this, it makes sense that developing a set of themes from the answers to a specific question has to be done in the context of the question asked. General models are hard to make accurate. It’s why great text analytics models typically have narrow domain use.

In organizations you also need to understand the context of the organization. Organizations develop their own language use. Many times this is internal jargon, project and technology names. Sometimes it’s re-purposing words to have a different meaning. The employees of one of our clients talk about ‘payroll’ when most people would talk about ‘headcount’ (because if they have more money for pay they can afford to take on more staff). Really good models need to learn this language use.

Themes not topics

A good set of categories should be understandable without needing to read the underlying comments (though of course you will likely want to let users drill down to these if they wish). The theme labels need to be sensible answers to the question asked.

If we look at the typical categories that survey tools are starting to provide in their text analytics, where the themes are simple, one-word nouns, users will almost certainly have to read the comments to understand what it is about that topic that respondents are discussing. Noun-only topics are good for navigation - guiding the user to read a subset of comments - but poor at providing insight.

For helping drive decision making what is needed is contextual understanding about what it is about that topic which is of interest. So instead of ‘Communication’ we need to describe that it is ‘more regular communication’, ‘less communication’, ‘more transparent communication’ etc.

It is quite common in our analyses that we’ll find conflicting themes within a particular topic. We’ll find that some employees want ‘more hours’, some will want ‘less hours’ and another group will talk about ‘more consistent hours’. If you’re just looking at the topics - ‘hours’ - all of these themes will be grouped together. To take action you need to understand the context.

Semantic understanding rather than keywords

Early text coding systems used keywords to identify topics. You can even do this with a simple spreadsheet application like Excel. Wordcloud applications typically use this approach.

Wordclouds provide very limited insight.

What is important when creating good themes is to use a semantic understanding of the comment, not just look for certain words. So an employee might ask for ‘more pay’ or ‘better wages’ or a ‘salary increase’. To a human reader these would all fall into the same theme. Our algorithms even recently allocated ‘ha ha, a raise’ to be about improving pay.
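
A minimal sketch of the difference, assuming the open-source sentence-transformers package and its all-MiniLM-L6-v2 model (this is illustrative, not the approach Workometry uses internally):

```python
# Keyword matching misses paraphrases; embedding similarity tends to group them.
from sentence_transformers import SentenceTransformer, util

comments = ["more pay", "better wages", "a salary increase", "shorter meetings"]
theme_label = "improve pay"

# Keyword approach: only exact word overlap with the label is caught
keyword_hits = [c for c in comments if "pay" in c.lower()]
print("keyword:", keyword_hits)        # only 'more pay'

# Semantic approach: cosine similarity in embedding space
model = SentenceTransformer("all-MiniLM-L6-v2")
sims = util.cos_sim(model.encode([theme_label]), model.encode(comments))[0].tolist()
semantic_hits = [c for c, s in zip(comments, sims) if s > 0.5]
print("semantic:", semantic_hits)      # typically the pay, wages and salary comments
```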

The bar for usefulness is quite high

When creating a good set of categories it’s hard to be useful until you get to a stage where you’re able to identify most of the useful themes and these themes are summarisations of the text they contain.

We’ve found from clients that the level of coding accuracy has to be high before it’s seen as valuable. In our case, where we’re trying to replace quantitative, closed scale questions with open questions, it’s important to have good-quality themes that accurately reflect the semantic and contextual meaning of the answers.

Whilst providing a list of topics might seem a good first-stage approach, the reality is that it doesn’t replace reading the comments and therefore doesn’t add much value. 

Deductive and Inductive coding

There are two general approaches to coding - deductive and inductive.

Deductive coding starts with a predefined set of codes. It’s what most employee engagement apps and even consultants typically use. You start with a model of what people will likely say and then you allocate their topics to these categories. 

There are a few advantages with deductive coding:

  • You know that you will identify comments in the themes in which you are interested
  • It’s easier to compare results across different organizations

However there are several key issues:

  • You might miss key themes. From our experience, about 30% of topics from a typical employee question might not fit into a general model
  • The coding model has bias as the themes are based on an analyst’s model, not the views of the employees
  • It takes more time / resources up-front to decide what themes to look out for
  • A model really only performs well with one specific question

The alternative to deductive coding is inductive coding. With inductive coding you start from scratch, using the data as the source of the themes. With inductive coding you can understand the answers to any question.

The typical process for creating an inductive code manually is:

  • you read some of your responses
  • you identify themes (codes) that cover this sample
  • you code all of your responses to these themes
  • you identify the comments you weren’t able to code and identify new themes
  • you then recode all your responses again against the new full set of codes
  • this whole process is repeated until it’s not possible to identify new, useful themes.

This is, at a high level, how we create the themes with Workometry, apart from instead of doing it all manually a set of algorithms helps with the coding.
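
In code, the manual loop described above looks roughly like the sketch below; the two helper functions are crude keyword stand-ins for what a human coder (or, in Workometry’s case, the supporting algorithms) would do:

```python
import random

def identify_themes(comments):
    """Stand-in for a human (or algorithm) reading comments and naming themes.
    Here: any word that appears at least twice becomes a candidate theme."""
    words = [w for c in comments for w in c.lower().split()]
    return list({w for w in words if words.count(w) >= 2})

def apply_codes(comments, codes):
    """Assign each comment to the themes it matches; return coded and uncoded."""
    coded, uncoded = {}, []
    for c in comments:
        matches = [t for t in codes if t in c.lower()]
        if matches:
            coded[c] = matches
        else:
            uncoded.append(c)
    return coded, uncoded

def inductive_coding(responses, sample_size=50, max_rounds=10):
    # Initial read-through of a sample to seed the code frame
    codes = identify_themes(random.sample(responses, min(sample_size, len(responses))))
    for _ in range(max_rounds):
        coded, uncoded = apply_codes(responses, codes)        # code everything
        new_codes = [t for t in identify_themes(uncoded) if t not in codes]
        if not new_codes:                                     # no new, useful themes: stop
            return coded, codes
        codes += new_codes                                    # extend the frame and recode
    return apply_codes(responses, codes)[0], codes
```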

Balancing accuracy and coverage

In any predictive model the analyst has a choice between false positives and false negatives. In the case of text classification, if you want to increase the accuracy of your coding (reduce false positives) you increase the risk that you’ll miss some comments in your themes (increasing false negatives).

Our experience of text coding is that readers are much more sensitive to a comment being misclassified than to being told that it couldn’t be coded. As humans we tend to be much more understanding if someone says ‘I’m sorry, I don’t understand’ than if they answer the wrong question. The inductive coding process is iterative to try and create the needed balance.
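
In classifier terms that preference maps to a higher decision threshold: below it, a comment is left as ‘uncoded’ rather than forced into a theme. A hedged sketch:

```python
import numpy as np

def assign_theme(theme_probabilities, labels, threshold=0.75):
    """Return the most likely theme only if the model is confident enough;
    otherwise admit 'uncoded' rather than risk a misclassification."""
    probs = np.asarray(theme_probabilities)
    return labels[probs.argmax()] if probs.max() >= threshold else "uncoded"

print(assign_theme([0.9, 0.05, 0.05], ["pay", "hours", "communication"]))   # 'pay'
print(assign_theme([0.4, 0.35, 0.25], ["pay", "hours", "communication"]))   # 'uncoded'
```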

Whilst coding might seem to be a task which anyone can perform, even humans find it difficult. Several studies have shown that when you get several skilled coders to theme the same piece of text you'll only get agreement about 80% of the time. Consistency is even problematic at the individual coder level, as their opinions will change as they review more feedback. AI-based solutions will be more consistent than a skilled human.
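
Agreement between coders is often quantified with Cohen’s kappa alongside raw agreement; a quick illustration with made-up labels from two coders:

```python
from sklearn.metrics import cohen_kappa_score

coder_a = ["pay", "pay", "hours", "communication", "pay", "hours"]
coder_b = ["pay", "hours", "hours", "communication", "pay", "pay"]

# Raw agreement: fraction of comments both coders labelled identically
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"raw agreement: {agreement:.0%}")
# Cohen's kappa corrects for agreement expected by chance
print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")
```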

With volume you get granularity

We often get asked ‘how many different topics will you find?’ The answer is almost always ‘it depends’.

However there are two key factors that it depends on: the question and how many answers that you receive.

With the question, one of the big determinants is whether you’re asking a ‘what is good’ or a ‘what could be improved’ type of question. Typically you’ll find fewer themes in the former than the latter.

To understand the likely number of themes it’s worth understanding how the volume of responses matching a theme tend to be distributed.

If you plot the frequency of mentions for the different themes in your question and order the themes largest to smallest you tend to have an exponential-type distribution.

How often a theme is mentioned will have an exponential shape when sorted most to least popular.

If instead of plotting the number of mentions for each category you plot the logarithm of the number of mentions the curve becomes very close to a straight line. The area under the curve represents the total number of mentions.
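
A small simulation shows the shape being described; the theme frequencies here are drawn from an assumed exponential decay rather than real data:

```python
import numpy as np
import matplotlib.pyplot as plt

ranks = np.arange(1, 41)                        # themes ordered most to least common
mentions = 1000 * np.exp(-0.15 * (ranks - 1))   # assumed exponential fall-off

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.bar(ranks, mentions)
ax1.set(title="Mentions per theme", xlabel="theme rank", ylabel="mentions")
ax2.plot(ranks, np.log(mentions))
ax2.set(title="Log of mentions is close to a straight line",
        xlabel="theme rank", ylabel="log(mentions)")
plt.tight_layout()
plt.show()
```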

The relationship between the number of comments and the number of themes identified

As long as the number of themes per answer stays the same as you increase the volume of answers, the effect is that the curve representing the relationship moves out as shown. The implication is that both the number of mentions of any theme and the number of themes identified increase.

Another way of thinking about this relationship is that as you get more comments you start being able to see further down the ‘long tail’ of theme groups.

Combining with other data

Understanding the themes within the data is important but where it starts to really come alive is when you start looking at where the use of certain themes is particularly high or low.

We have an automated detection approach which scans through groups looking for unusual incidents of particular themes.

One example of this would be to understand which themes were unusually likely to be used in various countries or functions. Often the theme might be expected (eg finance people talking about the finance system) but in other instances it might reveal areas for further exploration. We believe that managers should be managing exceptions.
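
One simple way to flag such hot-spots is to compare each group’s rate of mentioning a theme against the overall rate with a proportions test; the group names and counts below are invented:

```python
# Flag groups where a theme is mentioned unusually often (two-proportion z-test sketch).
from statsmodels.stats.proportion import proportions_ztest

overall_mentions, overall_respondents = 900, 12_000   # theme rate across the company
groups = {"Finance": (160, 800), "Retail ops": (310, 5_000), "IT": (45, 700)}
overall_rate = overall_mentions / overall_respondents

for name, (mentions, respondents) in groups.items():
    stat, p_value = proportions_ztest(count=[mentions, overall_mentions],
                                      nobs=[respondents, overall_respondents])
    rate = mentions / respondents
    flag = "HOT-SPOT" if p_value < 0.01 and rate > overall_rate else ""
    print(f"{name:12s} rate={rate:5.1%} p={p_value:.3g} {flag}")
```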

Another use is to identify which themes are closely related to each other - ie co-occurrence of themes. For example, you’d want to know if comments about the temperature in the stores were closely related to comments about customer complaints.

If you’ve been asking open-question feedback as part of a mixed question-type survey you might want to build a model which links the presence of your themes with the scores on various questions. This can help demonstrate what is linked to engagement, for example.

Finally when text is structured it can be included in any other predictive model. You might want to understand the themes that people talk about which later drives attrition for example. Our experience is that these data points are often some of the most predictive variables, and the easiest to action.
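
As a sketch of that last point: once each theme becomes a binary indicator per employee it can sit alongside any other feature in a predictive model. The data below is synthetic:

```python
# Themes as binary features in an attrition model (synthetic example).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000
X = pd.DataFrame({
    "mentions_pay": rng.integers(0, 2, n),
    "mentions_career_growth": rng.integers(0, 2, n),
    "tenure_years": rng.integers(0, 15, n),
})
# Synthetic outcome: leaving is more likely when pay is mentioned
leave_prob = 0.1 + 0.15 * X["mentions_pay"] - 0.005 * X["tenure_years"]
y = rng.random(n) < leave_prob

model = LogisticRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_[0].round(2))))
```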


How Workometry codes answers

Our Workometry solution uses an inductive approach to coding feedback themes meaning that it can be applied to the answers of any question. We learn the themes for a specific question in a specific organization and can understand multi-language feedback. Regular clients benefit from our ability to learn organization-specific language and embed earlier knowledge in themes.

As an augmented intelligence solution, Workometry scales very well and many clients are dealing with tens, or even hundreds of thousands of comments to any question. Our key differentiator is to be able to build highly accurate, inductive models in a very short time.


Using Employee Voice in Open Innovation


One of the key trends that we’ve seen with Workometry clients over the last 18 months is the increasing use of Employee Voice as a contributor to innovation across global organizations. Leading firms are increasingly and actively involving their employees in contributing ideas and identifying issues that need addressing, effectively harnessing this huge cumulative insight as large-scale collaboration.

To understand this trend in more detail I interviewed Yves Zieba, a prominent advocate of open innovation here in Switzerland. I hope that you enjoy it.

Yves Zieba

Hi Yves, could you please give us an introduction to who you are?

I am a typical ‘slasher’. I share my time between co-animating an open innovation space, facilitating workshops on strategic agility, design thinking and the fablab experience, and coaching successful SMEs in the European Union Horizon 2020 programme. We apply leading-edge practices in facilitation with Seethrough and with the ‘business models for the open economy’ workshop from Without Model. It is our way to combine open innovation principles and facilitation skills on emerging HR trends and peopletech topics.

What is ‘open innovation’? How are firms using it?

It is primarily a mindset. It is often confused with open source. There are similarities, although it is not exactly the same thing. The idea is to co-create with the right customers, partners and providers. It is the exact opposite of what is happening in closed and highly secretive R&D departments. It can take several formats. What I see most these days are idea boxes, hackathons, public-private partnerships, problem-solving contests, UX beta-testing communities, or corporate venturing (i.e. large groups working with small startups in different partnership formats).

What is the employees’ role in open innovation?

It is to adopt this open mind, this equal-to-equal relationship, and to develop their ability to co-create, to work in flexible teams and to co-develop modular solutions with external stakeholders in a similar way as with internal stakeholders. Most of the time the open innovation transformation is taking place at the same time as the digital transformation. So the employees’ role is to drive this change, or at least to understand and contribute positively to this change. To some extent, taking some risks is also part of the game. Employees can now take advantage of the emergence of new management techniques and the latest organisation designs, such as holacracy.

When we facilitate open innovation or ‘business models for the open economy’ workshops, I am surprised to realise how little most people know about these principles or about Creative Commons licences.

What have been the challenges to using employee voice in this manner?

In open innovation, as well as design thinking, the prerequisite is to adopt an equal-to-equal relationship. I typically ask everyone to adopt a beginner’s mindset and to start to ‘think together’. In reality, it takes time as existing hierarchies continue to carry their weight, and make use of their business stamina in many organisations. So when we are using employee voice, typically in an ‘idea box’ context, it is crucial to have transparent criteria for selection. We can also introduce elements of a collaborative and participative approach to project selection and investment decision making. We plan ahead to be clear about the rules of the game (which idea will be funded, how, what will happen to the team, who owns the IP…). Companies sometimes fail; it typically occurs when they ask their employees to contribute and then fall short when it comes to implementing great ideas.

Who would you say has done this well? What benefits did they realise?

There is a long list of organisations who have been using open innovation extremely well and successfully for years. To name a few, Quirky for collaborative inventions, Samsung, Lego, GE with their ergonomic design challenge, Philips on crowdsourcing in the healthcare and wellbeing industry, Beiersdorf with the Nivea campaign, the Procter & Gamble Connect & Develop campaign or the Mozilla toolkit are some of my favorite examples. The University of Geneva Quality of Life Lab and the Geneva Lab are together preparing a great programme with EnoLL; there should be lots of best practices on display at OpenLivingLabDays2018 during the week of August 23rd.

These players have adopted what they like about open innovation to meet their needs.

 

Sometimes the best solutions aren’t the most sophisticated


Last week I was at Tucana’s People Analytics World 2018 in London. I have a special affinity for this conference as I co-chaired the first three years, from when a hundred or so people came together in a small space in Canary Wharf (and when the Tucana team was tiny) to today, when several hundred of Europe’s top practitioners met in the QEII Conference Centre in Westminster.

Over the five years that the conference has run we’ve arguably seen three phases in the analytics approaches presented:

  • The decision – whether to do People Analytics at all
  • The academic – an approach to People Analytics where the objective was to run a project like academic research
  • The value – where the time / effort concentrated on identifying business value.

This year was the first where I had seen value being cited as the prime driver.

I’ve written about this before. In 2016 I wrote “The greatest mistake for many in people analytics” where I discuss the need for a loss function to convert the statistical model to business value.

In the past too many presentations discussed finding statistically significant results but didn’t extend this analysis to the business impact of the change. At People Analytics World 2018 two presentations especially made the necessary leap from this rather academic approach to one centred on identifying value.

Michael Tocci of P&G even joked that his previous academic peers would challenge him because of his ‘less-rigorous’ approach. However, through his excellent presentation he showed how P&G analysed the value created through the global mobility program and refocused it to ensure it was being used effectively. It was a highly mature, and always value-led, piece of work.

Pressed with a challenge, presented to investors, of pulling cost out of employment costs, P&G used analysis to understand how to do this in a way which minimised the long-term disruption to the organization. Make no mistake, this was not cost-cutting for the sake of cost-cutting, but an economically-led approach that ensured that mobility assignments were being used effectively, with the right people going on the right assignments to the right locations to maximise value.

The techniques that Michael described were not what we would describe as ‘advanced’ but the clarity of thinking, and understanding of the business showed a maturity that we wouldn’t have seen even 2 years ago.

Similarly Swati Chawla of Syngenta described how they used People Analytics to analyse sales force effectiveness in APAC. Again, the analytic techniques weren’t advanced but the focus on identifying value as well as (the usual) cost enabled them to optimise on the correct variables for the business.

As with P&G, the techniques that Syngenta used aren’t ‘advanced’ but the maturity was achieved via a carefully selected and balanced set of measures. She demonstrated a strong understanding of the business trade-offs between trying to minimise cost-per-employee and balancing this with productivity.

The final characteristic that both presenters demonstrated was a strong understanding of the need to build wide support for action through a robust change management approach. It might (possibly) be easier to build a convincing case with simpler analytic techniques due to the ease of comprehension. However the focus always has to be on action, not clever analysis on a PowerPoint presentation.

As I flew back to Switzerland on Thursday evening I reflected on the lessons any CHRO could learn from these presentations.

  • Focus People Analytics efforts on the net value of the topic that is being analysed.
  • Business understanding is the most important factor, both to ensure the right thing is being optimised and so that analytics teams are able to motivate leaders to change
  • Sophisticated analytics doesn’t necessarily mean sophisticated results. Pick the right tools and techniques for what you want to achieve.
  • These approaches can be used regardless of the size and complexity of the organization. Whereas some sophisticated approaches require a decent-sized population to find meaningful trends, clear thinking is applicable to all
  • People Analytics, at least in some firms, feels like it is becoming mature. I hope People Analytics World 2019 shows this to be true.

AI in HR – how to understand what is happening


There is a considerable buzz these days about so-called ‘AI in HR’. Most vendors are claiming to have some sort of machine-learning within their products and some are making claims that, from the perspective of someone who has been doing this for the last 15 years, seem unlikely.

Understanding what these new technologies can (and can’t) do is vital if HR is to be able to evaluate purchasing them, or to work with internal teams in designing, developing and deploying their own approaches. Analytical literacy is rapidly becoming a core skill of HR.

Algorithms are free

Many of the technology vendors market their products as having amazing, unique algorithms. In almost all instances this is unlikely to be true.

One of the remarkable trends of the last few years is that the big technology companies have acquired large teams of the best data scientists and have been publishing new algorithms in journals, often open-sourcing code at the same time.

Pretty much everyone is using these algorithms – and in many instances much earlier ones – as the basis of what they’re doing. They will almost certainly be combining them and tuning settings, but at heart we should assume they rely on the same freely available building blocks.
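
As a minimal sketch of what ‘freely available building blocks’ means in practice (the data here is a synthetic placeholder), an openly published algorithm can be assembled into a working model in a few lines:

```python
# Sketch: the 'building blocks' most vendors rely on are openly published.
# Synthetic placeholder data stands in for HR features (tenure, pay changes, etc.).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

model = GradientBoostingClassifier()          # freely available, widely used
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("Mean AUC:", scores.mean().round(3))
```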

What is needed is great training data

In contrast to algorithms being free, data is not, and as such it is what differentiates decent analytics efforts from great ones. This matches the message that we always give clients – to improve early analytic results there is usually a need for better data, not better algorithms. It’s why we built Workometry – to make the collection of great-quality data as easy as possible.

In 2014 I wrote an article describing the 5 types of HR Analytics vendors. In it I described a category which I called the ‘data aggregator’: a firm which, by collecting vast amounts of cross-firm, individual-level data, is able to build valuable analytics offerings.

In 2018 pretty much every SaaS HR offering is trying this model. In many instances the data doesn’t really have enough value (there is a lot of it, but it’s not really that rich – most survey providers could be put in this category). However some vendors will find true value in this approach.

This data becomes a barrier to entry for new firms wanting to enter the industry – it’s hard and costly to acquire. It’s a good reason why many of the most innovative HR analytics start-ups are in recruitment. In recruitment far more data exists outside the firm in public data sources.

General AI is a long way off

When vendors talk about AI in their product to the lay-person they often conjure up images of technology that has near-human levels of reasoning. Most data scientists would tell you that this reality is a long way off.

One of the interesting aspects of machine learning is that it can solve some tasks that we humans find difficult (playing chess, for example) yet struggle with tasks that even a 4-year-old could achieve easily. I suspect that we’re close to developing an autonomous van which can take a parcel from the depot to your house, but it might be harder for a robot to take the parcel from the van, up the stairs, into the building and to the correct letter box.

What today’s approaches can do is solve certain well-defined problems, usually ones with lots of available data, with extraordinary levels of accuracy. Often, the narrower the problem and the greater the volume of data to learn from, the more accurate the prediction. These narrow problems are often described as ‘Specific AI’.

Benefiting from Specific AI

Take the example of text analytics. Even within text analysis there are different firms in the HR space doing wonderful things. TextKernel has developed very good approaches to understand CVs and Job Descriptions. We, through our Workometry technology, have probably the leading approach to understanding the answers to open questions (for example in employee suggestions or feedback). We even go so far as building specific models on the organization / question level (arguably our key differentiator is how quickly we can build these models). With such specific models we can out-perform skilled humans at this task in a fraction of the time / cost.
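
As a purely generic illustration of the coding task (this is not the approach described above, and the example answers are invented), grouping open-text answers into themes with openly available tools might look like this:

```python
# Generic sketch of grouping open-question answers into themes.
# It illustrates the coding task only; the answers are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

answers = [
    "More clarity on the future direction of the company",
    "Unclear strategy and direction from leadership",
    "My manager never gives feedback",
    "Better feedback and support from my line manager",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(answers)
themes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for theme, answer in zip(themes, answers):
    print(theme, "-", answer)
```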

We can therefore think of the implication of AI / robots for work not as automation taking away whole jobs – most jobs require a variety of tasks – but as AI automating specific tasks. These will be the ones with a lot of repetition or where large volumes of data need to be acquired and synthesised.

When thinking about how to apply AI it’s therefore important to break a job down into tasks, ideally the smallest, most specific tasks possible, and identify which are candidates for AI. At the same time we need to identify the value / cost of these tasks to determine which are worth developing solutions to automate.
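
A trivial sketch of such a task inventory (tasks, hours, rates and scores are all invented) shows the idea: rank tasks by the expected value of automating them, not by how exciting the technology is.

```python
# Sketch: ranking tasks within a job as candidates for automation.
# Tasks, hours, rates and automatability scores are illustrative assumptions.
tasks = [
    # (task, hours per month, hourly cost, how automatable 0-1)
    ("Screen CVs against requirements", 60, 45, 0.8),
    ("Interview shortlisted candidates", 40, 45, 0.1),
    ("Answer routine policy questions", 30, 35, 0.7),
]

# Expected monthly value of automating each task
ranked = sorted(tasks, key=lambda t: t[1] * t[2] * t[3], reverse=True)
for task, hours, rate, auto in ranked:
    print(f"{task}: potential saving of about {hours * rate * auto:,.0f} per month")
```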

When doing so we shouldn’t constrain ourselves to tasks that we’re currently doing. Many tasks are possible without AI, but prohibitively expensive. For many firms the sort of text coding Workometry does has been too expensive and time-consuming to perform. For many of our clients Workometry is 10x cheaper & 200x quicker than the alternative solutions and is of higher quality. What was difficult to justify therefore becomes attractive.

Benefits from AI

There are 2 key drivers of benefits from using so called ‘AI’ in HR:

  • To improve a business driver (eg productivity, customer experience) and by doing so enable the business to achieve better results
  • To reduce the cost of delivering HR.

In many instances the first is likely to provide opportunities to realise a greater return to the business; however it is also likely to require greater and more widespread buy-in to the results. Implementation costs and risks are likely to be higher, with a greater number of uncertainties influencing the end deliverable.

With this type of analysis it’s highly unlikely that the data needed will be residing in one system or database. Given this we can expect fewer instances where a single system provider will have enough data-coverage to be able to build a complete model. The best work in this area will remain the preserve of data-science teams within a firm who can identify, process and join the necessary data sources into a reliable model.

Cost reduction for HR will ultimately be the easier area in which to achieve the predicted results. In many instances there will be a smaller number of decision-makers (the HR leader) and it’s likely that cost reduction will be a core part of their objectives. Data for this type of analysis will be easily available and more likely to be of high quality, have less measurement error and be more complete. It will also be more likely to reside in one system. In the medium term we can expect system providers to deliver such capability.

Some points for getting the most out of AI for your HR team

  • A little knowledge will go a long way. Think about up-skilling your team so that they have a good understanding of where AI can be deployed in its current state and what the likely benefits are. Several providers (including us) can help here
  • Don’t expect system providers to provide complete solutions where they don’t have access to all the data. There will be a need for the foreseeable future to build good People Analytics capability
  • People Analytics technology won’t solve all your problems, but it might remove routine tasks from the People Analytics team, thereby enabling them to focus on higher-value tasks. Think of these solutions as complements to building capability, not a replacement
  • Challenge your technology vendors (especially if you’re a key client) to develop solutions that can identify cost improvements. With all the transaction data they should be identifying efficiencies. This will soon be a hygiene factor for systems providers
  • Often simple models can be built quickly. In a drive for accuracy you hit decreasing marginal returns pretty quickly. How much more valuable is this solution than what your team could build in 10 days?
  • General models, built on other firms’ data, are unlikely to perform as well as specific models built on your own data.

How to start a People Analytics project

How to start a People Analytics project -01.png

As one of the earliest People Analytics practices we have extensive experience of working with clients to help build great People Analytics organizations, either by helping them work through pilot projects or through our regular analytics trainings.

In most instances, to improve the quality of analysis it’s likely that you’ll need to acquire better-quality data, not use better algorithms. Our Workometry product was initially built to meet the need of providing high-quality perception data for use in predictive models. Our experience is that this data is often the most valuable source of insight and provides the most predictive variables when model-building.

There are a few simple things to consider when starting analytics projects in HR. The most important thing is to do this in a systematic manner, not just grab the easiest-to-get dataset and start modelling.

What is your business objective? Really??

Possibly the hardest challenge of any analytics project is to accurately define what you want to analyse.

This might come as a shock, but as with so many things in employee management, commonly used concepts are poorly defined. A good example is employee engagement – there is no common definition of engagement, which makes statistical analysis more difficult.

One of the ways that we recommend clarifying such topics is to add the words ‘as measured by X’ to the end. So if the project is defined as improving employee experience then the definition could be ‘to improve employee experience as measured by the employee experience rating on this survey.’ Socialising such definitions is important to ensure that all the key stakeholders agree with your definition.

Another useful technique is a version of the ‘5 Whys’. Here the purpose is to challenge the initial problem description by repeatedly asking ‘why?’ until the actual causal issue is identified. To give a simple example:

Manager: We need to understand how to reduce employee attrition.

Analyst: Why do you want to do that?

Manager: Because it is causing us a lot of unnecessary cost.

Analyst: Why is it so costly?

Manager: Because it is disruptive to our business and we have to pay recruitment costs

Analyst: Why is it disruptive?

Manager: Because it takes so much management time, both in recruitment and in bringing people up to speed

Analyst: Why do you think we need to focus on reducing attrition rather than reducing the management time it takes for each hire?

Manager: You’re right. We need to investigate both and understand what possibilities would yield more value from the investment.

How are you going to realise value from this analysis?

It’s important at an early stage to consider how you’re going to implement the results of any analysis. Will the findings be used to influence policy change? Will you be creating a predictive model to give you an individual risk score for each employee?

The reason this is important is that it has implications for what data you can bring into your model. Because data needs to be captured for a predefined purpose, providing a personal ‘score’ constrains which data can be used. Working with anonymous data sets at an aggregate level may enable you to do far more with the data and give you far more flexibility in your model building.

How you will realise value will also drive which types of modelling you want to do. Does your model need to be easily interpretable (needed for policy, process or training changes) or could a black-box model be sufficient? If you use a black-box model, how are you going to determine that you’re not at risk under discrimination regulations? (Hint: simply not adding a gender variable to your model won’t prevent it being discriminatory.)
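
One simple, hedged illustration of that hint: even when a protected attribute is excluded from the features, you can audit whether the model’s scores differ systematically across groups. The data and column names below are invented:

```python
# Sketch: auditing a 'blind' model for proxy discrimination.
# The protected attribute is excluded from the features but retained for the audit.
# Scores and groups below are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
audit = pd.DataFrame({
    "risk_score": rng.uniform(0, 1, 500),       # output of some black-box model
    "gender": rng.choice(["F", "M"], 500),      # held out of training, kept for auditing
})

group_means = audit.groupby("gender")["risk_score"].mean()
print(group_means)

# A large gap in average scores (or in selection rates at your cut-off)
# suggests other variables are acting as proxies and needs investigating.
print("Score gap between groups:", round(group_means.max() - group_means.min(), 3))
```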

What could be driving this behaviour?

The next important action is to identify a series of explanations which could be causing or influencing what you’re studying. There are 3 main sources that we tend to use:

  1. Desk research: What has been identified by others as causes / correlations
  2. Brainstorm: Get together a group of key stakeholders to identify their view on what are the causes. This also helps socialise the problem
  3. Ask employees: Short open-question questionnaires (like Workometry), sent to as wide a population as you can, will help you get an extensive list of possible causes. (We want to do an exploratory analysis at this stage.) Our experience is that there will be a significant difference between this list and the stakeholder list.

What data do you need to test each possible explanation?

Now that you’ve identified the potential causes you need to identify the possible variables you could use in your model to test each potential relationship. Again, this is another instance where being clear about what you are actually trying to measure is critical.

Some of the information that you need you will have in traditional employee systems, but it’s not likely to be enough. You may have data in other business systems but you might need to acquire new data.

Lots of data is available online from various credible data sources. Numerous governments and organizations like the UN publish great databases which can help you understand what is going on outside the organization with things such as the labour market or populations.

What new data will you need to capture?

It’s highly likely that you will need to capture new information to validate some of your ideas. In many instances you’ll have to ask people directly.

There are numerous data capture methods that you can use, however the process of how you solicit information is often at least as important as the questions you ask. You need to identify approaches which require low input from both the organization and the individuals concerned. If you need to understand this on an ongoing basis, make sure the approach is sustainable.

How will you measure the success of any changes?

Finally, before implementing any changes it’s important to identify how you are going to measure the impact of your changes.

It’s likely in most situations that this will have to be a hybrid approach – some measurements will need to be quantitative, others are likely to be perception-based.

What is unfortunate is that all changes within an organization are likely to have unintended consequences. Also, given the complexity of organizations it’s unlikely that your model will be stable over time so you need to identify when the model will need reviewing.

The use of exploratory, open-text questions on a regular basis will enable you to monitor when new reasons emerge.

When to ask open vs closed questions in employee surveys

When to ask open vs closed questions in employee surveys-01.png

The traditional employee survey has a selection of closed or scale-based questions. These might be asked as a statement which the respondent has to agree to – usually on a Strongly Disagree to Strongly Agree scale, or a numerical score such as the 0–10 scale of an employee Net Promoter Score question.

We believe that in the vast majority of cases such closed questions are used not because they are the most effective or appropriate way of gathering information, but because they have historically been the easiest to analyse.

With a scale-based question the respondent provides a response by selecting an option from a list. With most questions they select a single answer, though it is possible to allow several items to be selected from a list of alternatives. Because of the way the surveys are administered it is easy to count the number of people who answer “Strongly agree”, “Agree” etc and calculate a percentage.

Open questions, however, provide text data. This is typically described as ‘unstructured’ and requires effort to transform it into ‘structured’ data before it can be analysed. Until recently the only reasonable way of doing this was for a skilled researcher to do it manually – a long, slow process subject to a number of biases and issues with consistency (both between researchers and over time).

We have now reached a stage where computers are able to perform the coding task. The cutting-edge algorithms, such as the ones we use in Workometry, are able to out-perform humans at coding text responses to open questions. They’re also able to do it at a tiny fraction of the cost of doing it manually, meaning that cost and time become less of a deciding factor between using open questions and closed questions.

When you do this you get ‘count’ data plus the option of various other elements of metadata (data about data) such as whether the comment is offensive, is an outlier (some of the most important comments are often outliers) or even something simple like word count.
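
As a concrete picture of what this structured output can look like (themes, counts and metadata are invented), a minimal sketch:

```python
# Sketch: once answers are coded, open-text becomes ordinary structured data.
# Themes, counts and metadata below are invented.
import pandas as pd

coded = pd.DataFrame({
    "response_id": [1, 2, 3, 4, 5],
    "theme": ["Career development", "Workload", "Workload", "Career development", "Workload"],
    "word_count": [42, 18, 55, 9, 23],            # simple metadata
    "outlier": [False, False, True, False, False],
})

# Count data per theme, just like a closed-question tally
print(coded["theme"].value_counts())
```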

The choice of which question type to use therefore depends much more on why you are asking the question and what you want to do with the answers.

Objective

The first question to ask yourself is why you are collecting the data.

Scale questions are best for providing a quantitative response from all respondents about a particular topic. They are therefore best for things such as KPIs which you need to track accurately on an ongoing basis.

The other way you might use a closed scale question is to test the strength of feeling on a particular topic, for example to provide as accurate a value as possible for use in a business case.

If you need to identify how strong a feeling is you should use a scale question. However if your aim is to understand what people are thinking, why they are thinking it or how it affects them (or your business) you probably want to use an open question which provides text.

Open questions provide information which enable you to understand how to act. Closed questions might indicate you might need to act but won’t tell you how to do that.

MECE

A key principle when collecting information via closed questions is ensuring that the topics or reasons are Mutually Exclusive and Completely Exhaustive. Of these I think the most important is that the categories are Completely Exhaustive. Mutually Exclusive matters too, but having a hierarchical structure – ie that a category can be a sub-category of another – can be useful.

In some instances having completely exhaustive categories is easy. I could ask people which is their favourite Shakespeare play, as the number of plays is finite and reasonably small. My list could quite easily be Completely Exhaustive.

An alternative way of thinking of categories is at the question level. With an engagement survey historically we’ve asked a set of questions that are used to create an engagement index and then a large number of questions that are used to understand which factors, or categories, correlate with engagement. You can think of all those questions – ‘is communication good?’, ‘does your manager support you?’ etc. – as covering all the categories. The reason these surveys are so long is that there are so many possible categories.

If I want to know why someone did something it is impossible to build a Completely Exhaustive list. Some closed questions on a survey might have an ‘other’ choice where the respondent then writes in their answer. Alternatively there might be a single open question at the end for the respondent to add anything that hasn’t been asked. Really this is saying ‘we don’t have a completely exhaustive list’. Unfortunately we see that these uses of ‘other’ provide different (lower quality) responses than simply asking an open question.

Open questions are, by their nature, exploratory. This means that when you ask them you’re open to the possibility that the answers fall outside the group of categories you could initially identify. When we ask open questions in an engagement-type survey we find that about 30% of the categories that employees mention are ones that we’ve never seen on a commercial engagement survey. We also see differences between two companies, even in the same sector. The reasons are very personal and business-specific.

Another way of thinking about closed vs open questions is that with closed questions you have to ensure you’re Completely Exhaustive before asking the questions; with open questions your answers are Completely Exhaustive automatically. This makes survey design much simpler and removes many issues with ‘validation’.

How much signal is enough?

Fortunately the topics identified during coding, and their frequencies, are not randomly distributed. With our clients, a typical open question will generate about 250,000 words, resulting in somewhere in the region of 100 different themes. The most common theme might appear 10% of the time whereas the smaller themes might appear less than 1% of the time.

As the data size increases two things happen. First, the number of statements where we can identify a meaningful topic increases. The first or second time the algorithm spots something it could be an outlier, but after a few more instances we start to have enough signal to determine that this is a meaningful topic.

The second is that as you get more and more data, the confidence that you can safely assign to any ‘answer’ increases. You can start to consider tracking usage of topics over time. You can start to see which parts of your employee population are far more or less likely to talk about a particular topic.
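
A minimal sketch of this effect, assuming a theme mentioned by roughly 10% of respondents, shows how the confidence interval around its share narrows as responses accumulate:

```python
# Sketch: uncertainty around a theme's frequency shrinks as responses grow.
# Assumes a theme mentioned by roughly 10% of respondents.
import math

def wilson_interval(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a theme mentioned k times in n responses."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

for n in (100, 1_000, 10_000):
    low, high = wilson_interval(int(0.1 * n), n)
    print(f"{n} responses: theme share between {low:.1%} and {high:.1%}")
```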

Sometimes the topics are tightly distributed. With one client we saw a few people raising issues about bad contracts. Whilst in many organizations this might be ‘noise’, in this organisation the comments all came from one group and concerned one contract. By highlighting this statistically the management team could investigate and de-risk the situation.

What open questions don’t do is provide a quantitative score against all potential categories – they don’t allow you to understand what each person thinks about each category. Instead they identify what is ‘top of mind’.

Respondent burden

As I’ve written about before, with data and analytics you need to think about what’s in it for the employee. Surveys and questionnaires are just the same.

There are three aspects of the respondent experience that I think you need to consciously try to improve:

  • The technology interface of the survey tool – how easy is it to provide the answers, especially across different devices including mobile
  • How long the respondent will have to devote to providing feedback
  • Whether the respondent will be able to tell you exactly what is on their mind.

On the first of these points we did worry about whether respondents would find it difficult to respond to open questions on a mobile device, or whether responses would be shorter. So far we’ve found little evidence of either (though the 500+ word answers are mostly written with a proper keyboard).

For the second, to collect rich data a questionnaire based on closed questions inevitably needs to ask at least one question for each topic. Hence we either have the traditional long surveys, or we are forced to sacrifice data quality to provide a shorter experience. With a four-question open-question survey we find the average time to complete is less than 5 minutes.

Finally, open questions are the only way of ensuring that all issues are captured. Ideally with closed questions you’d want to ask about each category both ‘how do you rate this?’ and ‘is this important to you?’. For example you might ask all employees whether they think the firm offers family-friendly policies or benefits, but if a respondent doesn’t have a family they might not care (yet could still rate it as true). Many surveys assume that each category is equally weighted, an assumption which is highly unlikely to hold.

As previously noted, when we’ve used open questions instead of scale questions in firms we’ve found that only about 70% of the important topics were typically part of a traditional employee survey.

Conclusion

Although we’re very strong believers in the importance of open questions combined with AI-driven analysis, it’s clear that the best solution is a balance between open and closed questions, using both for their strengths.

In terms of information-gathering the vast majority of questions should be open questions, as the principal aim should be to identify issues that need fixing, or ideas that will be implemented. However, it’s important to have a quantitative measurement to capture your KPI on each survey. This data is very useful not only for tracking trends, but also for analysis.

The key point is that if your aim is to take effective action you should only use closed questions where absolutely essential. After all, if you want to really understand what people think you don’t ask a series of closed questions.

Focussing on the causes not the symptoms of employee engagement

Focussing on the causes not the symptoms of employee engagement-01.png

Depending on which data you look at employee engagement has tended to be flat or declining over the last 15 years. There is obviously something wrong with the way that firms have been attempting to manage engagement within their organizations. Our view is that the core issue is that engagement typically suffers flawed analysis leading to ineffective changes.

A simple way of thinking about the way most traditional surveys are designed is by considering the questions in two groups: a set of questions that make up the engagement index and another set, usually larger, which is used to identify factors that are correlated to that engagement index.

If we consider how this works in more detail we can see that what we’re doing is looking at the differences in views between engaged and disengaged employees.

Most engagement survey analysis is static – i.e. no knowledge of each respondent’s previous states or answers is considered when conducting the analysis. Where history is included it is usually at the sub-organization (eg team, department) level, where there can be significant differences in the underlying respondents from survey to survey.

During the analysis stage of a survey, the variables most closely linked with engaged / disengaged employees are typically identified. Unhelpfully these are often called ‘drivers’, implying, at least to the lay-person, that they are causal. They are not; they’re simply correlations.
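
A minimal sketch of this kind of ‘driver’ analysis (invented data and item names) makes the limitation visible: it produces item-to-index correlations at a single point in time, nothing more.

```python
# Sketch of a traditional 'driver' analysis: item-to-index correlations.
# The survey items and responses are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
survey = pd.DataFrame({
    "engagement_index": rng.integers(1, 6, 200),
    "manager_support": rng.integers(1, 6, 200),
    "communication": rng.integers(1, 6, 200),
    "career_growth": rng.integers(1, 6, 200),
})

# Static correlations with the index at a single point in time - not causes.
drivers = survey.corr()["engagement_index"].drop("engagement_index")
print(drivers.sort_values(ascending=False))
```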

What these analyses do is identify symptoms of being engaged rather than what causes the employee to become engaged in the first place.

There are several reasons this matters. The first and most obvious is that any post-survey action is likely to be sub-optimal. I can take a flu-remedy which prevents my nose running and tempers my headache but I still have the flu. As soon as the medicine wears off the headache returns.

The second reason is that our sensitivity to certain issues changes as a result of our overall view. We probably all know people who cited reasons such as ‘they left the toothpaste lid off’ when explaining why they changed partners. When love disappears these things become more grating, but overall they aren’t the reasons why love is lost.

How to fix this

To understand the issues that are linked to employees moving from an engaged to a disengaged state (or vice versa) we need to include how any individual changes over time in our model.

There are two key ways of doing this. My view is that the ‘gold standard’ is to do both:

  • We can look at the transition between engaged & disengaged at an individual level and understand what happened to those employees. One way of doing this is to link employee background (education level, function, tenure etc.) and event data (eg pay rises, job changes, performance reviews) and build models which identify which combinations are linked to changing individual-level engagement. We can look at which responses change and explore whether there were underlying reasons why this might have occurred (see the sketch after this list).
  • We can ask people if they feel more / less engaged (or even identify it from their history) and if they feel a change what events, experiences or factors have influenced their changes in feeling.
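
As a minimal sketch of the first approach (column names, events and values are invented; a real model would need far more data and care), a simple logistic model relating events to individual-level engagement change might look like this:

```python
# Sketch: relating events to individual-level engagement transitions.
# Column names, events and values are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

panel = pd.DataFrame({
    "had_pay_rise":      [1, 0, 0, 1, 0, 1, 0, 0],
    "changed_manager":   [0, 1, 1, 0, 1, 0, 1, 1],
    "tenure_years":      [2, 5, 1, 7, 3, 4, 6, 2],
    "became_disengaged": [0, 1, 1, 0, 0, 0, 1, 1],   # change between two surveys
})

X = panel[["had_pay_rise", "changed_manager", "tenure_years"]]
y = panel["became_disengaged"]

model = LogisticRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_[0].round(2))))
```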

What we learn

When analysing data in this way we see significantly different issues that are cited as reasons for a change in engagement (or recommendation when using eNPS).

Our employee feedback tool, Workometry, uses open questions instead of traditional scale questions. It then uses highly sophisticated text analytics approaches to summarise the statements and group them into meaningful themes (with meaningful descriptions such as ‘More clarity on the future direction’). The summaries and groups the algorithms find will depend on the answers to the question asked (and will often be organization-specific).

Many of the reasons cited for changes in engagement would not be part of a typical engagement survey, meaning that such surveys would have no way of identifying them and, by implication, that you wouldn’t be able to act upon them.

Reasons may seem at first glance far more personal but there are a few key themes that we typically find:

  • Environmental issues are often the largest group. These include team dynamics, toxic colleagues and ineffective management. I’d suggest that the single most effective way of improving a disengaged employee’s engagement level is changing their role
  • Barriers to getting the job done are a significant issue, especially amongst high performers
  • Measurement and incentive related issues are common. These could include target-setting in general, or targets that are badly aligned to things employees value, such as delivering great customer experiences.
  • Various events that could all be categorised in a general ‘Employee Experience’ group can drive significant changes. For individuals where we see sharp drops in engagement over two close time periods there is usually an event which has caused the change.
  • New senior leadership and refocussed strategies can increase engagement for certain groups.

Changes to action planning

Given the reasons we see, most of the action that is likely to be effective will be bottom-up rather than the typical long, top-down approach. There will be some systemic issues that need to be addressed, but in general a principle of ‘decisions / action should be made at the lowest possible level’ is probably correct.

Organizations which really want to get this right will build teams to address the more structural issues. I’d imagine such teams would have experience in areas such as process and job design, technology enablement and incentives. Such changes would need to be formally costed / valued and prioritised.

As an extension of this, organizations need to have a serious debate about how to ‘close the loop’ on employee issues in a timely manner. Whilst customer feedback can trigger this easily, employee feedback is usually provided with a promise of confidentiality. Technology offers opportunities here.

‘Low hanging issues’ need to be fixed quickly. We always see issues such as staff facilities that are broken, or policies designed for one situation which are inappropriate in another. Fixing these things quickly will not address the fundamental causes (they’re the toothpaste-lid sort of reasons mentioned above), but rapid action sends a signal to employees that feedback is being listened to and that providing it is worthwhile.

Overall we believe that much of the failure to improve engagement in organizations is due to using static analysis to understand issues in dynamic systems. Businesses that address the causes rather than the symptoms will realise significant benefits.

HR Analytics doesn’t mean abandoning intuition


There’s a belief that I hear frequently in the HR analytics community that HR Analytics means a move away from intuition. This isn’t true.

Analytics using your own data is just one tool needed to conduct empirical decision making. Doing analysis – however sophisticated – can only be part of what you need to make great decisions. ‘Numbers are just another voice at the table’ as the saying goes.

As Sam Hill mentioned in his post on this blog ‘People Analytics – It’s a mug’s game. Isn’t it?’:

‘The People Analyst will keep formal and informal channels of communication open with HR process owners, line managers, senior managers, HR Business Partners and potentially external stakeholders to measure the pulse of their organisation and to identify emerging workforce issues or opportunities.’

As HR professionals there is usually a good reason that we hold the beliefs that we do. Many of us have built up knowledge and experience over many years of seeing similar situations, reading case studies and books or speaking to peers.

Managers too have built valuable experience. Many tend to have a good knowledge about what is happening in their organisations. They will have seen similar situations or even studied organisation theory on a general business course.

Discounting this experience and knowledge would be like starting with your hands tied behind your back, but unfortunately it’s common with some HR analytics teams.

Let me illustrate this with an example.

Suppose we are asked by a friend whether a coin is fair (ie as likely to come up heads as tails). They then toss the coin 10 times and it comes up heads 7 times. The chance that this will happen is in the region of 12%. This would be unusual but certainly not impossible. Do you tell the friend it’s fair? I suspect you might.

Now let’s change the situation slightly. Let’s say your friend then tells you he was given the coin by a magician at his daughter’s birthday party. My guess is that this bit of information would make you think that the coin probably isn’t fair. Maybe you’d think you were lucky to see 3 tails.

In both instances you’re updating your view based on the information that you had before. In the first instance you start with an expectation that the coin is fair as most coins have an equal chance of coming up heads or tails. You probably have some doubt but not enough for you to switch your view built upon a lot of previous experience.

In the second instance the knowledge the coin came from a magician changes everything. Given the source you’re now comfortable declaring the coin isn’t fair. Background information makes a big difference!

Inexperienced analysts rely too much on their data. They’d look at their data only, without any context, and say there isn’t enough evidence to say the coin isn’t fair. For them the data is everything. HR Analytics without incorporating experience or intuition is like this.

Good analysts, as Sam previously mentioned, start by collecting as much evidence as they can. They’ll ask managers and HR colleagues, they’ll probably do some desk research and see what others have found. Then they’ll take their data and update their view. The more data they have the more their recommendation will be based on the data. Conversely the less data they have the stronger will be their weighting towards intuition and experience.
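
The weighting between prior experience and data can be made explicit. A minimal sketch of the coin example using a Beta prior, with the prior strengths chosen purely for illustration: with only ten tosses, the prior (your background knowledge) dominates the conclusion.

```python
# Sketch: intuition as a prior, updated by data (Beta-Binomial).
# Prior strengths are illustrative only.
heads, tails = 7, 3

priors = {
    "neutral (most coins are fair)": (50, 50),   # strong prior centred on 0.5
    "suspicious (magician's coin)":  (8, 2),     # prior tilted towards heads
}

for label, (a, b) in priors.items():
    post_a, post_b = a + heads, b + tails
    print(f"{label}: posterior mean P(heads) = {post_a / (post_a + post_b):.2f}")
```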

Analytics models are only as good as the information they have. Intuition and experience are valuable sources of information. It’s crazy to ignore them as we move to using analytics in HR.

Engagement is necessary but not sufficient

Engagement is necessary but not sufficient-01.png

Andrew Marritt reflects on a People Analytics Podcast episode, a fascinating conversation between Laurie Bassi and Max Blumberg about the myths of Employee Engagement, and looks into the real impact of Employee Engagement on business performance and value.

A large part of economics is concerned with understanding and modelling individuals’ behaviours when faced with various incentives. Whilst some believe that economists are only interested in money as an incentive, in truth incentives can take many forms.

George Akerlof co-wrote a fascinating book – with Rachel Kranton – on how identity is an incentive (Identity Economics: How our identities shape our work, wages and well-being). There is even a sub-discipline of economics called ‘Personnel Economics’ which is dedicated to understanding how organisations work.

Ten years ago, when I was part of the HR Analytics team at UBS, I was the only economist in a team of psychologists. Looking at People Analytics from the perspective of an economist forms a significant part of this podcast between Laurie Bassi and Max Blumberg.

One of the things economists spend a huge amount of time learning at university is calculus. Calculus is useful for tackling problems involving optimisation. Given we are always dealing with limited resources, almost all problems in organisations are optimisation problems. I would argue that one of the key historical bases of modern people analytics comes from the discipline of operations research, which, like economics, is an applied mathematics discipline concerned with optimisation.

The reason calculus is useful is that there is rarely a straight-line relationship between a variable and its desirability. For many firms it would be beneficial to reduce employee turnover, however I’ve worked with a couple of organisations over the last 18 months where more attrition would be beneficial. There is an optimal level; past it the benefits decrease. You can have too much of a good thing.
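
A toy sketch of this point (both cost curves are entirely assumed) shows why the optimum is rarely at zero:

```python
# Toy sketch: there is an optimal attrition level, not a minimal one.
# Both cost curves are assumed purely for illustration.
import numpy as np

attrition = np.linspace(0.0, 0.3, 301)        # 0% to 30% annual attrition
replacement_cost = 500 * attrition            # rises with turnover
stagnation_cost = 20 / (attrition + 0.05)     # falls as fresh skills and ideas arrive

total_cost = replacement_cost + stagnation_cost
print(f"Cost-minimising attrition rate: {attrition[total_cost.argmin()]:.1%}")
```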

Our organisations work as systems. There is rarely a clear, unique, optimal solution. Invest money in one area and – given limited resources – you can’t invest the same money elsewhere. Much of management is about prioritising resources to maximise returns.

This brings us onto a key point in Max & Laurie’s conversation – how much resource should one allocate to improving engagement? As resources are limited, what should we reduce spend or effort on? New production investments? Training?

In a system, many inputs depend on each other. In a dynamic system they often depend on the levels of the previous state. Rarely is it true that you provide the optimal solution for the system by optimising each individual input. Furthermore, optimisation doesn’t mean maximisation.

Unfortunately many in the HR profession hold precisely this belief when considering engagement: that maximising engagement is the same as optimising it. Grand action plans are built without considering the alternative uses of the needed time or money. As Laurie so eloquently put it, ‘Engagement is a necessary but not sufficient condition’.