When pressed to nominate one book that the HR analyst should have on their shelves, I tend to suggest Douglas Hubbard's 'How to Measure Anything'. In it Hubbard gives a clear and systematic approach to quantifying the so-called intangibles that most HR managers wrestle with, and many avoid or dismiss as unmeasurable.
This week I spoke to Douglas about his work, measuring HR activities and how the internet is opening new forms of real-time information that can aid decision-making. We started by talking about his career journey:
I entered the workforce after my MBA with Coopers & Lybrand in 1988, in Management Consulting Services, and tended to get involved with projects with a quantitative angle – operational research, management science, decision science. I recall running into a variety of things which were described as immeasurable when I knew I had just measured them on a previous project. So I wondered whether, in circumstances where I didn't have a counter-example, it could also be measured.
I realized that I could identify only three reasons why somebody would say something was immeasurable, and they were all illusions. First, people misunderstood what was meant by the word 'measurement' – it was being used in a different manner than in the sciences. Second, they misunderstood the object of measurement: what they were trying to measure was ambiguous.
Finally, people were unfamiliar with how measurement is done, especially the use of samples. The scientific method was never about having data; it was about getting data. So when people say 'we don't have the data to measure', they're making multiple assumptions.
They're presuming they can't get any more. You can always get more data. Secondly, it's not as if they sat down and did the math and said 'this is how much uncertainty reduction we'll get from additional data.' You usually don't need as much data as your intuition tells you. When the math disagrees with our intuition, it's the intuition that's wrong.
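One way to check the claim that intuition overestimates how much data is needed is the 'Rule of Five' from Hubbard's book: with a random sample of just five items, there is a 93.75% chance the population median lies between the smallest and largest values in the sample, because the chance of all five landing on the same side of the median is 2 × 0.5⁵ = 1/16. The simulation below is my own sketch (the population and its shape are invented), not Hubbard's code:

```python
import random
import statistics

random.seed(1)

# Any population will do; the Rule of Five doesn't depend on its shape.
# Here: 10,001 hypothetical "hours per week spent in meetings" values.
population = [random.lognormvariate(1.5, 0.6) for _ in range(10_001)]
true_median = statistics.median(population)

TRIALS = 100_000
hits = 0
for _ in range(TRIALS):
    sample = random.sample(population, 5)
    # Does the population median fall inside the sample's range?
    if min(sample) < true_median < max(sample):
        hits += 1

print(f"Median inside sample range: {hits / TRIALS:.2%}")  # ~93.75%
```

Five observations will not pin the median down precisely, but they shrink the plausible range dramatically – which is all 'measurement' means here: a reduction in uncertainty.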
I call these three illusions ‘concept, object and method.’
People also underestimate what you can get from messy data. If you needed perfect data all the time, most of science wouldn't be possible.
Douglas talks about the cost of getting information and the benefit, through uncertainty reduction, that the information would bring. In this way the question is reframed from 'we can't measure this' to 'is this worth measuring?' If we say something is hard to measure, we're presuming the cost is more than the benefit.
Over time I realized that the high-information-value items were things they weren't measuring, and what they were measuring had low value. I call this the 'measurement inversion.' People focus on what they already understand how to measure, and give up on the things that matter most but that they don't know how to measure.
This is often seen in business cases. Managers tend to concentrate on cost estimation but fail to quantify other benefits, which are probably the most important factors. For example, they may say 'this software product will improve the quality of information' but never quantify that claim in the case. As Douglas noted:
When you do this you end up putting in the one value that you’re pretty certain isn’t true – zero. What you should be doing is putting a highly uncertain range on it, compute whether it’s worth refining and see if you should measure it further.
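Hubbard's suggestion can be sketched numerically. All figures below are invented for illustration: a project with a known cost, and an 'intangible' benefit given a 90% confidence interval instead of a zero. The expected opportunity loss that falls out of the simulation caps what further measurement could be worth – Hubbard's Expected Value of Perfect Information:

```python
import random
import statistics

random.seed(42)

# Hypothetical decision (all figures invented): invest $200k in a project
# whose "intangible" benefit is uncertain. Rather than scoring the
# benefit as zero, give it a 90% confidence interval of $100k-$400k.
COST = 200_000
LO, HI = 100_000, 400_000          # 90% CI on the benefit

# For a normal distribution a 90% CI spans about +/-1.645 std deviations.
mean = (LO + HI) / 2
sd = (HI - LO) / (2 * 1.645)

N = 100_000
benefits = [random.gauss(mean, sd) for _ in range(N)]

# Default decision: invest, since the mean benefit exceeds the cost.
# The expected opportunity loss is the average loss across the scenarios
# where investing turns out to be wrong; it caps what any further
# measurement could be worth (the Expected Value of Perfect Information).
p_short = sum(b < COST for b in benefits) / N
evpi = statistics.mean(max(0.0, COST - b) for b in benefits)

print(f"P(benefit < cost): {p_short:.1%}")
print(f"EVPI: ${evpi:,.0f}")
```

In this hypothetical run there is roughly a 29% chance the benefit falls short of the cost, and the expected loss is on the order of $17k – so any measurement that costs less than that and meaningfully narrows the range is worth considering.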
One point he made that I felt mapped very well to HR is that labelling something as intangible and immeasurable is often a defence mechanism used when someone isn't comfortable with quantitative analysis. Given that this group is significant in HR, is this why the profession is so keen on describing benefits as intangible?
This brought us on to performance management approaches, and how bad measurement can lead to unintended consequences and people trying to game the system.
We need to make a distinction between the measurement of performance and an incentive structure. There’s a lot of things you can measure that you don’t have to put in an incentive structure.
Much of this comes back to not measuring the important factors; if you don't, you're creating inefficiencies. Furthermore, if you understand that people will game the system, design the incentives so that the gaming itself encourages the behaviours you want.
If you're incentivizing project managers to come in under budget or ahead of schedule, and the project managers are responsible for the initial estimates, then guess what will happen – they're going to make those estimates high to make themselves look good.
Finally we discussed his latest book, 'Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities'.
I would call it one of the most important scientific instruments of a generation. It’s right up there with CERN and Hubble.
Large numbers of people are leaving breadcrumbs and footprints on the internet. We can see what they're tweeting, what a 1999 Dodge sedan goes for on eBay, what the adverts are for on Craigslist, the rank of books on Amazon and what people search for on Google. This information is broadly useful.
A lot of these things correlate with macro trends, enabling 'nowcasting' – in other words, tracking what is happening now rather than back-casting.
As an example, you can track not only unemployment but when people start worrying about being made unemployed – when they start searching for insurance, for example. This correlates fairly well with the subsequent unemployment rate.
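The mechanics of spotting such a leading indicator are simple: correlate the search series against the official statistic at a range of time lags and see where the correlation peaks. The sketch below uses entirely synthetic data (the series, lag and noise levels are invented, and the `pearson` helper is my own) just to show the technique:

```python
import math
import random

random.seed(0)

# Synthetic illustration (not real data): assume search interest in
# unemployment-related terms leads the official unemployment rate by
# three months.
MONTHS, LAG = 120, 3
searches = [50 + 30 * random.random() for _ in range(MONTHS)]
unemployment = [0.1 * searches[max(t - LAG, 0)] + random.gauss(0, 0.5)
                for t in range(MONTHS)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Correlate this month's searches with unemployment k months later;
# the lag with the strongest correlation is the lead time of the signal.
correlations = {k: pearson(searches[:MONTHS - k], unemployment[k:])
                for k in range(7)}
best_lag = max(correlations, key=correlations.get)
print(best_lag, round(correlations[best_lag], 2))
```

On real series the same scan over lags would show how far search behaviour runs ahead of the published statistic, and how noisy the relationship is.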
Never before have people had this level of information. I think this is going to revolutionize the social sciences. They have information on par with the particle physicists and astronomers.
In Pulse I talk about how early attempts at getting the big picture worked – massive surveys which took multiple years to complete.
Now we can see things in real time. A big driver has been the penetration of mobile phones, currently running at 70% of the world population. There has been no other phenomenon in history – I mean diseases, government systems, fads, technologies – that has penetrated such a proportion of the world's population in such a short time.
There is scepticism in some quarters about how relevant this information is, how representative the sample is, or how much noise there is in the system. As Hubbard mentions, the key point is that the correlations do occur. As we discussed earlier, measurement doesn't have to be perfect; it needs to be useful. Used correctly, this sort of information can be valuable.
In my conversations with Jacqui Taylor, who helps some of the largest FMCG companies use this data in their decisions, we discussed the view that the data wasn't substantial enough for HR analysis. Speaking to Douglas, I realized that what matters is how you frame the use of the information. The volume needed to monitor brand reputation the way an FMCG does might not be there, but using the data as, say, a proxy for consumer confidence – which we know is important to employee turnover forecasting – was likely to yield results.
For me, this is an area that needs monitoring. As Douglas notes:
Every time a new scientific instrument has come out a flurry of new science has followed.