Michael, VP of Talent Development, writes…
"Are there HR metrics that are better than the rest? Too often I see metrics that just seem like they are measuring the wrong things like quantity vs. quality.
For example, we can report out how many people went through training, but we (as an industry) don't talk about the more meaningful thing to measure like how many training graduates got a promotion.
That's my basic gripe…what are your thoughts and insights on how to get better as a profession?"
I could not agree with Michael’s premise more strongly. There is no doubt in my mind that most organizations are spending their valuable resources measuring things that do not add value to the business or serve a substantive purpose. They then cry poverty as the reason we do not measure “what matters.” Michael asks whether there are measures that are better than the rest. Responding to that question brings up mixed feelings for me. Of course there are measures I recommend and believe provide deeper organizational insight than others, but I do not want to give the false hope of a “magic” list or a measurement silver bullet. The reality is that high-quality workforce measurement is complex.
Why is it so <expletive deleted> hard?
In a word: “people.” People are not like dollars, molecules or baseball: those things can be counted, they act in predictable ways, and we have developed equations and rules to describe and predict outcomes. The haters of sabermetrics will be the first to cry “foul” and point out that the algorithms are not infallible. Of course there are always exceptions, but we have come to accept certain truths from these equations. People are the next great frontier, as Stephen Baker describes in The Numerati. In some arenas of human existence, the mathematicians and computer scientists have mastered (or come close to mastering) our shopping habits, voting patterns, and health outcomes.
One area that frustratingly remains out of the powerful computers’ grasp is the workforce. IBM, the creator of Watson, has sunk millions of dollars into challenging the best Ph.D.s, stochastic analysts, behavioral economists, computer scientists and even anthropologists to uncover how to maximize IBM’s world-class workforce. While they have uncovered insights and tools, they have not yet “cracked the code.”
I share this not to frustrate friends like Michael but to celebrate. While it is true most of us are constrained by rudimentary tools compared to the toys available at Big Blue, the insights we derive are just as valuable (if not a better value). While my business partner and I toil over spreadsheets and manually stage data, I take comfort that this is hard work, which makes those insights so satisfying when discovered.
Know Thy Challenges
So it’s hard. So what? Does that mean we all give up and wait for IBM, Google or other mathematical powerhouses to solve the equation? Do you really think they are going to freely share the secret formula they toiled over? Do you think you can replicate the answer without learning all the little lessons and discoveries that got them there? No. From my perspective, letting the Little Red Hen do all the work and hoping for crumbs is not a strategy for success.
Here are five challenges to workforce measurement; by being acutely aware of them, you can mitigate them and move from merely counting to understanding the impact of your organizational efforts:
- Effective workforce measurement necessitates a hypothesis
- Effective workforce measurement requires forethought and planning
- Measuring effectiveness/impact requires the long view
- There is a lack of formal measurement standards
- Because of the complexity of human beings, the time that elapses between the initial event and its outcomes, and the intervening events along the way, it is difficult to assign “contributory value.” Pursuing causation is a fool’s errand.
Organizations always want to know if their programs are adding value or are worth doing, but that is not a hypothesis. A hypothesis is the theory or guess about how the program will add value, usually something like increasing productivity, reducing risk or improving talent. Organizations can have multiple and competing hypotheses because they are merely speculation or assumptions.
Based on the hypothesis, the organization should develop a measurement strategy which includes:
- identification of key data elements
- purpose of measurement (examples: create accountability, change behavior, evaluate performance)
- measurement intervals (how often will you measure, when do you expect to see a change, and how many measurements will you take?)
- evaluation criteria
Depending on the hypothesis, the organization may need to pull baseline production data, identify a control group or develop an opinion survey to evaluate the results.
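To make this concrete, here is a minimal sketch in Python of how the elements of a measurement strategy might be captured and checked against outcome data. The program, the data elements and every number below are hypothetical, invented purely for illustration; they are not drawn from any real engagement.

```python
# A minimal sketch of a measurement plan for a hypothetical leadership
# training program. Every name and number here is illustrative only.
from statistics import mean

measurement_plan = {
    "hypothesis": "Graduates of the leadership course are promoted at a higher rate within 18 months.",
    "key_data_elements": ["course completion", "promotion date", "performance rating"],
    "purpose": "evaluate program impact",         # vs. accountability or behavior change
    "measurement_intervals_months": [6, 12, 18],  # when we expect to see a change
    "evaluation_criteria": "promotion rate of graduates vs. a comparison group",
}

# Hypothetical outcome data pulled at the 18-month interval:
# 1 = promoted, 0 = not promoted.
graduates = [1, 0, 1, 1, 0, 1, 0, 1]         # attended the course
comparison_group = [0, 0, 1, 0, 1, 0, 0, 0]  # similar employees who did not

graduate_rate = mean(graduates)
baseline_rate = mean(comparison_group)

print(f"Hypothesis under test: {measurement_plan['hypothesis']}")
print(f"Promotion rate, graduates:  {graduate_rate:.0%}")
print(f"Promotion rate, comparison: {baseline_rate:.0%}")
print(f"Observed difference:        {graduate_rate - baseline_rate:+.0%}")
```

Even a toy layout like this imposes the discipline the list above calls for: you cannot fill in the plan without stating a hypothesis, naming your data elements and deciding when you will measure.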
Rarely have I found close proximity in time between a program, process or decision and a measurable impact on the organization. Using Michael’s example of training classes, it is sometimes possible to measure a change in attitude or behavior immediately following a course, but the real question is the long-term impact. That requires measuring weeks, even years, later, yet few organizations take the long view. Prematurely declaring victory can undermine human capital analytics.
While many find the lack of human capital reporting standards a frustration, I say be careful what you wish for. Our brethren in Finance have the elusive standards we often crave in the human capital analytics space, but that does not mean those standards are free from interpretation and constant reconciliation. There is not just one set of financial standards but rather numerous frameworks, including the International Financial Reporting Standards (IFRS) and jurisdiction-specific Generally Accepted Accounting Principles (GAAP), such as US GAAP or Dutch GAAP. These standards are not set in stone but are evolving norms.
The lack of human capital reporting standards creates both an opportunity and a responsibility for us. The opportunity is to create custom measures that meet our specific business needs and situations. The responsibility is to be very clear about how and why we are reporting, which makes the development of a thorough measurement strategy critical to success.
The most challenging issue is establishing a link between the program, process or decision and the impact on the organization. Why? Unless a natural experiment presents itself, organizations are not in the business of creating control groups, nor are they willing to maintain the conditions of valid research. Proving causation is made nearly impossible by intervening events, small sample sizes and a rapidly changing business landscape.
I encourage organizations to strive to show correlation, i.e., that the events are related, while always stressing that correlation is not causation. When a relationship can be established, I recommend that HR analysts look to the key stakeholder to determine the strength of the correlation. Is that subjective? Of course, but American jurisprudence is based on juries making decisions from testimony and circumstantial evidence. Is it a perfect system? No, but it is a pretty good one. By the way, a “dirty” secret of the judicial community: cases with a “smoking gun” or other direct evidence nearly always settle and rarely see a jury.
Your stakeholder should assess the strength of the relationship, including how much of the business impact the program or initial event accounts for. A strong relationship might be 15-20%. On its face, that percentage might seem low, but when you think of all the influences on profitability or other business outcomes, 20% is pretty significant. Without stakeholder input, I use a conservative rule of thumb of 5% in calculations.
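To see how the stakeholder-assessed percentage (or the 5% fallback) plays out in a calculation, here is a small sketch; the correlation data, the dollar figure and the attribution percentages are all invented for illustration.

```python
# A sketch of moving from correlation to a conservative "contributory value"
# estimate. All figures below are invented for illustration.
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Hypothetical paired observations by business unit:
training_hours_per_employee = [4, 6, 8, 10, 12, 14, 16, 18]
quality_scores = [71, 70, 74, 78, 77, 82, 84, 85]

r = correlation(training_hours_per_employee, quality_scores)
print(f"Correlation between training hours and quality: r = {r:.2f}")
# Correlation only tells us the measures move together; it is not causation.

# Attribution: how much of the observed business improvement do we credit
# to the program? Use the stakeholder's judgment when you have it
# (a strong relationship might be 15-20%); fall back to a conservative 5%.
observed_improvement_dollars = 2_000_000  # hypothetical year-over-year gain
stakeholder_attribution = 0.15            # stakeholder says "strong"
conservative_default = 0.05               # rule of thumb without input

print(f"Value credited with stakeholder input: ${observed_improvement_dollars * stakeholder_attribution:,.0f}")
print(f"Value credited, conservative default:  ${observed_improvement_dollars * conservative_default:,.0f}")
```

The point of the sketch is the order of operations: establish that the measures move together first, then let the stakeholder (or the conservative default) decide how much of the business result to credit to the program.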
Final Advice
Dr. Ed Hubbard, the Diversity ROI measurement specialist, recommends, as does the ROI Institute, that every project dedicate 10% of its budget to measuring the return on investment of the initiative. While this figure makes most organizations’ jaws figuratively drop, it raises an important question: How much time and how many resources is your organization dedicating to human capital measurement?
A lack of commitment is the reason that, as a profession, we continue to measure what is easy rather than what matters to our businesses. Measuring and communicating the impact of our HR initiatives needs to become an expectation of all HR professionals, not the sole responsibility of the analytics team (or, in many organizations, the analytics “person”). For the profession to evolve, we need to demonstrate the value of our contributions to the business more clearly. Bottom line: it’s people who make us grow, so it should be people we measure.