'Smart' Questions to Ask

Here are some things to consider and questions to ask vendors of "smart" HR technologies that are designed to help HR leaders make better technology and business decisions.

By Steve Boese

As 2016 winds down, you've no doubt seen (or will soon see) pieces such as "HR Trends to Watch for 2017" or "Hot Technologies for HR Leaders in 2017." I thought (briefly) of making this last Inside HR Tech column of 2016 such a piece. But rather than contribute to the chorus of prognosticators predicting "mobile will be big in 2017," I decided to dig a little deeper into one trend or "hot" technology that most analysts and industry observers are pointing to for 2017: "smart" HR technologies. Whether it's called artificial intelligence, machine learning or predictive technology, the trend is toward more sophisticated HR technologies that can evaluate and mine large data sets, make recommendations based on past data, and "learn" or adapt over time to become even better at providing HR and business leaders with people-related decision support. That development presents HR leaders with both an opportunity and a challenge.

Just as some earlier HR-technology advancements failed to produce the desired business outcomes because they simply automated badly designed processes and made them easier to replicate (think about the first time you automated your dreaded annual performance reviews or placed your too-long, candidate-unfriendly application process online), the application of "smart" technology (or artificial intelligence) to HR also presents the very real danger of perpetuating many undesirable characteristics and outcomes of current processes. For example, if a "smart" tool meant to help HR leaders predict future high performers based on an assessment of the traits of current high performers has, at its core, a fundamental bias in how managers have rated those current high performers, then the "smart" technology may simply perpetuate that biased evaluation tendency.

What are some of the guidelines or principles that HR leaders (and solution providers) should consider when developing and deploying these technologies for HR and talent decisions? I've found what I think is a good starting point, developed by the Fairness, Accountability, and Transparency in Machine Learning organization, that I want to share. It offers five core areas to consider, and questions to ask about these "smart" solutions, that HR leaders can use to guide their research, inquiry and assessment of these tools. Briefly, the five elements, and some recommended questions to ask solution providers in each area, are as follows:

Responsibility

Above all, it is incumbent upon HR leaders to ensure that any artificial-intelligence decision-support technologies operate responsibly. Ask whether there are visible and accessible processes for people to question or address adverse individual or organizational effects of an algorithmic decision system or design. Who will have the power to decide on necessary changes to the algorithmic system during the design stage, pre-launch and post-launch? Finally, can your solution provider make the adjustments and changes the organization determines are necessary to address these kinds of concerns?

Explainability

One challenge HR leaders will likely encounter when recommending and deploying smart technologies is that their functionality can be difficult to understand. It will be critical for success, adoption and acceptance that any algorithmic decisions or recommendations, as well as the data driving those decisions, can be explained to end-users and other stakeholders in non-technical terms. HR leaders should press solution providers to share as much as possible about the processes, models and assumptions built into these technologies, so they can be communicated and explained to affected constituencies. No one wants to think that his or her job offer, promotion or bonus was subject to the whims of an algorithm that cannot be examined and explained.

Accuracy

An important checkpoint for these smart technologies is a simple one: Are they doing the job they were designed to do? HR leaders should insist that solution providers create the ability to identify, log and articulate sources of error and ambiguity throughout the process and across the data sources used to generate recommendations. Another important consideration in these systems is confidence -- i.e., how certain is the technology that a given decision or recommendation is the right one, or the one most likely to produce the desired business outcome? For example, when an algorithm stack-ranks a group of candidates for a given open position, how confident is it that the top-ranked candidate is actually a better fit than the second- or third-ranked candidate? Confidence in the accuracy of the rankings will likely influence the hiring manager's decision about whether to interview candidates two and three, or simply make an offer to candidate one.
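To make the confidence question concrete, here is a minimal sketch of how a ranking tool might report whether its top two candidates are actually distinguishable. All names, scores and the uncertainty measure are hypothetical illustrations, not any vendor's actual API:

```python
# Minimal sketch: surfacing confidence alongside a candidate ranking.
# Candidate names, scores and uncertainties are hypothetical.

def rank_with_confidence(scores):
    """Rank candidates by predicted fit and report whether the top two
    are separable.

    `scores` maps candidate -> (predicted_fit, uncertainty), where
    `uncertainty` stands in for whatever error estimate the model provides.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1][0], reverse=True)
    (top, (s1, u1)), (second, (s2, u2)) = ranked[0], ranked[1]
    gap = s1 - s2
    noise = u1 + u2
    # If the score gap is smaller than the combined uncertainty, the
    # ordering of the top two candidates is not trustworthy on its own.
    confident = gap > noise
    return ranked, confident

candidates = {"A": (0.81, 0.05), "B": (0.79, 0.06), "C": (0.62, 0.04)}
ranked, confident = rank_with_confidence(candidates)
print(confident)  # False: A's lead over B is within the noise
```

A tool that exposes something like this lets the hiring manager see that candidates one and two are effectively tied, rather than treating the ranking as settled.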

Auditability

These technologies should allow outside parties to question, understand and review the behavior of the algorithm through disclosure of information that enables monitoring, checking and assessment of the process and the outcomes the tools generate. HR leaders don't need to be reminded of the many federal, state and even local laws and regulations surrounding employee selection, compensation and wages, benefits, leave and more. Smart technologies that drive, or at least inform, workforce and talent decision-making might -- and perhaps should -- be subject to the same kinds of auditability standards that many current and traditional HR processes must be prepared for. The smart tech can't be a "black box" that only Ph.D.s from MIT can decipher.
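One hedged sketch of what auditability can mean in practice: recording, for every algorithmic recommendation, what the model saw, what it suggested and when, so an outside reviewer can reconstruct the decision later. The field names and values below are hypothetical, not drawn from any particular product:

```python
# Minimal sketch: an auditable record of each algorithmic recommendation.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str    # which version of the model made the call
    inputs: dict          # the features the model actually saw
    recommendation: str   # what the system suggested
    confidence: float     # the model's own certainty estimate
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log = []
audit_log.append(asdict(DecisionRecord(
    model_version="screening-v2",
    inputs={"years_experience": 6, "assessment_score": 88},
    recommendation="advance_to_interview",
    confidence=0.72,
)))
print(audit_log[0]["recommendation"])  # advance_to_interview
```

A log like this is the raw material an auditor -- or a regulator -- would need to check the process and outcomes the article describes.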

Fairness

This last element is possibly the most important one. HR leaders and solution providers must ensure that algorithmic decisions do not create discriminatory or unjust impacts across different demographic groups. Ask yourself and your providers whether there are any groups that may be advantaged or disadvantaged by the algorithm or smart-technology system you are building and deploying. A prime example is the creation of "desired candidate profiles" for open jobs by mining data from what may be a homogeneous employee population -- an approach that encourages the algorithm to recommend more of the same kinds of candidates who "look" like the existing population.
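One simple, well-established check here is the "four-fifths rule" from U.S. employee-selection guidelines: if one group's selection rate falls below 80 percent of the highest group's rate, the process warrants review. Below is a minimal sketch of that same check applied to an algorithm's recommendations; the group labels and counts are hypothetical:

```python
# Minimal sketch: the four-fifths (80%) adverse-impact check applied to
# an algorithm's recommendations. Group labels and counts are hypothetical.

def selection_rates(outcomes):
    """`outcomes` maps group -> (selected, total); returns group -> rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    Under the four-fifths rule of thumb, a ratio below 0.8 flags that
    the system may be disadvantaging a group and needs review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

recommended = {"group_x": (45, 100), "group_y": (27, 100)}
ratio = adverse_impact_ratio(recommended)
print(round(ratio, 2), ratio >= 0.8)  # 0.6 False -> flag for review
```

Running a check like this on an algorithm's output is the same discipline HR already applies to traditional selection processes, extended to the "smart" tool.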

In 2017 and beyond, it seems almost certain that HR technologies will continue to incorporate and emphasize "smart" features. Call them AI, machine learning, predictive analytics or some term that has not yet been invented; these powerful solutions and capabilities will fast become part of the HR toolset. HR tech that promises to identify the best candidates using big-data analysis, pinpoint high-performing employees who are about to resign before they submit their resignations, or devise and schedule the optimal mix of staff and managers expected to yield the highest revenue per shift is, and will continue to become, incredibly powerful and influential for workforce planning, talent management and recruiting.

But, to borrow a well-worn maxim, "with great power comes great responsibility." In this case, it is a responsibility shared by the solution providers that create these tools and the HR leaders who deploy them to ensure that these "smart" technologies are responsible, ethical, fair, accurate and accountable. Many of the same dimensions by which HR has traditionally devised and deployed assessments and performance-management systems for people are, perhaps ironically, the ones that will need to be applied to "smart" technologies and artificial intelligence in the workplace. Let's hope we can all work to ensure these technologies improve work, workplaces and HR in 2017 and beyond.

Steve Boese is a co-chair of HRE's HR Technology® Conference and a technology editor for LRP Publications. He also writes an HR blog and hosts the HR Happy Hour Show, a radio program and podcast. He can be emailed at sboese@lrp.com.

Dec 19, 2016
Copyright © 2017 LRP Publications