This article first appeared on Forbes.
For decades, traditional economic wisdom held that human behavior was rational, unemotional and even predictable. But with advances in behavioral economics and the insights they offer into human decisions, the world has gradually come to accept that the opposite is true: Irrationality is all around us, hiding in plain sight.
In the 1980s, Daniel Kahneman and Amos Tversky introduced biases like loss aversion, the tendency of decision makers to prefer avoiding losses to acquiring equivalent (and sometimes greater) gains, and status-quo bias, the misguided preference for the current state of affairs in the face of preferable alternatives. More recently, Richard Thaler won the 2017 Nobel Prize in economics for demonstrating the ubiquity of such cognitive shortcomings in business and finance.
As much as these findings can erode our faith in human decision makers, they also offer prescriptions for how we all can make better choices. In Nudge: Improving Decisions About Health, Wealth, and Happiness, Thaler and co-author Cass Sunstein show that access to the right information is frequently all it takes to be “nudged” toward better judgment.
In business, firms frequently rely on tried-and-true methods to troubleshoot problems, taking an “if it ain’t broke, don’t fix it” approach to hiring, marketing, production, maintenance and much else. However, the data often tells a different story: Just because a process isn’t “broke” doesn’t mean it’s working as well as it should be.
The good news is that our increasingly connected world is giving decision makers new tools to identify and root out bias. With access to ever-growing troves of data on key business processes and the capability to analyze this data at lightning speeds, machine learning and artificial intelligence (AI) can provide the nudge (and sometimes even the shove) toward better judgment and, ultimately, better performance.
Consider three kinds of scenarios where this is happening:
1. Demystifying Causes
When industrial assets fail, mechanics often resort to heuristics, or trial and error, to identify the root of the problem. Executives in the C-suite face a similar dilemma when trying to determine why one initiative failed while another succeeded. But our heuristics often fail us — we mistake correlation for causation, or we overlook the possibility of simultaneity, where two variables, X and Y, determine each other.
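The correlation-versus-causation trap can be made concrete with a few lines of simulation. In this invented illustration, a hidden common cause drives two sensor readings, so they correlate strongly even though neither causes the other — exactly the pattern a trial-and-error mechanic might misread:

```python
import numpy as np

# Hypothetical illustration: an unobserved confounder Z (say, overall machine
# load) drives both readings, so X and Y correlate with no direct causal link.
rng = np.random.default_rng(0)
z = rng.normal(size=10_000)          # unobserved common cause
x = 2 * z + rng.normal(size=10_000)  # e.g., a vibration reading
y = 3 * z + rng.normal(size=10_000)  # e.g., a temperature reading

r = np.corrcoef(x, y)[0, 1]
print(f"corr(X, Y) = {r:.2f}")       # strong correlation, zero direct causation
```

A heuristic that reads this correlation as “vibration causes overheating” would prescribe the wrong fix; controlling for the confounder makes the spurious link vanish.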
Machine learning algorithms can analyze vast quantities of sensor data and identify relationships inaccessible to human experts. In maintenance, for example, this yields a more accurate picture of when a part will break and can surface the leading causes of suboptimal behavior in sophisticated industrial assets.
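As a minimal sketch of that idea — the sensor names, data and effect sizes below are all hypothetical — ranking sensor channels by their association with recorded failures can point analysts toward the likeliest culprit:

```python
import numpy as np

# Hypothetical sketch: rank sensor channels by how strongly each one
# associates with failure events. Channels and effect sizes are invented.
rng = np.random.default_rng(1)
n = 5_000
sensors = {
    "bearing_vibration": rng.normal(size=n),
    "oil_temperature":   rng.normal(size=n),
    "inlet_pressure":    rng.normal(size=n),
}
# Assume failures are driven mostly by vibration, slightly by temperature.
risk = 2.0 * sensors["bearing_vibration"] + 0.5 * sensors["oil_temperature"]
failed = (risk + rng.normal(size=n) > 2.5).astype(float)

ranking = sorted(
    sensors,
    key=lambda name: abs(np.corrcoef(sensors[name], failed)[0, 1]),
    reverse=True,
)
print(ranking)  # vibration surfaces as the leading factor
```

Real systems use far richer models than a correlation ranking, but the principle is the same: let the data, not habit, nominate the root cause.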
One example of AI demystifying causes involves a problem that stumped scientists for centuries: how flatworm cells regenerate into new flatworms. A team from Tufts University fed all of the studies done on this topic into an AI system, which generated random simulations until the results matched those of the studies. After 42 hours of simulations, the system not only returned three known molecules that contributed to successful regeneration but also discovered two previously unknown proteins that played a role.
2. Handling Novelty
Decision making in novel situations is highly susceptible to bias; humans assign undue attention to (ostensibly) similar past situations, failing to take into account more relevant information. More generally, fear of the unknown leads us to act erratically or not at all.
Artificial intelligence handles novelty the same way it handles familiarity, applying the same statistical protocols to identify patterns and make predictions. This is a key reason why machine learning systems are increasingly beginning to outperform experts in diagnosing medical conditions. Whereas experts are susceptible to “framing effects” when interpreting new data, deep learning algorithms are not. Such systems are allowing businesses to approach new challenges with consistency and impartiality.
3. Creating Feedback Loops
A final bias, endemic to hierarchical organizations of all kinds, takes the form of championship bias: the tendency of leaders and executives to go unquestioned when making important decisions. Without feedback, leaders are highly prone to myopic and intransigent decision making.
But data will challenge our beliefs. We’ve seen this portrayed beautifully in the film Moneyball, where statistics upended much of the most cherished wisdom about player performance in the tradition-loving sport of baseball. AI and machine learning are taking feedback loops to the next level, not only challenging opinions with new insights but also identifying patterns of decision-making inertia.
AI helps us sidestep flawed heuristics, approach new situations without bias and get organic feedback on our decisions. This heightened attention to details and relationships we would normally miss helps us think past our typical inclinations.
Of course, AI is still a young technology and faces challenges of its own. Though AI can cut through bias, in some cases it can amplify problematic biases, as when Amazon’s recruiting software showed bias against women. AI must be developed in collaboration with human experts to avoid such ethical pitfalls, and it should include a retraining or appeals process to ensure its outputs remain relevant.
In my previous article, I argued that future AI systems will need to be able to produce accurate results from training on smaller data sets. This, combined with more open data availability, will enable us to deploy AI solutions to a wider variety of problems that are subject to bias today. Think a more informed supply chain, a holistic picture of oilfield performance and optimized staffing. With a nudge from AI, we can create a more capable and dynamic professional world.