Imagine trying to extract the whole of human intelligence out of the brains of subject matter experts (SMEs) across the globe and then feeding it all into primitive computers to generate artificial intelligence (AI). It’s a laughable notion now, in the age of super-fast computers with almost limitless storage and calculation abilities. Still, in the early decades of research led by psychologists, this “knowledge expert” model was the only available way to attempt to recreate the human brain’s capabilities.
We’ve thankfully moved on from those days, though as Dr. Bruce Porter, SparkCognition’s chief science officer, shared in a recent workshop on the state of AI, we’re still far away from true machine cognition that in any way captures what humans are capable of.
There are narrow slices of AI technology embedded in everyday tools and apps we use, from route planning engines to real-time language translation tools to recommendation systems that help us decide on a purchase or a movie on Netflix. But these narrow applications draw on only one or two of the broad functions (learning, language use, reasoning, knowing, seeing, and movement) that experts say machines and robots could someday perform in parallel.
Porter said the so-called “AI winter” from 1985 through 2005 came after researchers reached a plateau with the knowledge expert model, which was built by interviewing SMEs, feeding that information into computers, and using if/then reasoning frameworks to solve problems. The time-consuming process didn’t generate the benefits needed to justify the cost, and it wasn’t until computing power took several steps forward in the early 2000s that the “AI spring” enabled by machine learning capabilities became a reality.
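To give a sense of what that if/then approach looked like in practice, here is a minimal sketch of a rule-based diagnostic in Python. The rules and facts are hypothetical, invented purely to show the shape of the technique, not to reproduce any actual expert system from that era.

```python
# A tiny sketch of the if/then "knowledge expert" style: rules captured from
# SMEs and applied mechanically. The rules and facts here are hypothetical.
RULES = [
    # (condition over observed facts, conclusion)
    (lambda f: f["vibration"] == "high" and f["temperature"] == "high",
     "bearing failure likely"),
    (lambda f: f["vibration"] == "high", "inspect rotor balance"),
]

def diagnose(facts: dict) -> str:
    """Fire the first rule whose condition matches the observed facts."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "no rule matched"

print(diagnose({"vibration": "high", "temperature": "high"}))  # bearing failure likely
```

Every rule had to be elicited from a human expert and hand-coded, which is exactly why the approach scaled so poorly.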
Machine learning went steps beyond traditional computer programming by allowing the technology to discover patterns in data and train itself to perform tasks based on what it found. Just as humans learn to walk before they can run, machine learning helped AI technology take the first essential steps toward realizing the possibility of superhuman performance in specific tasks and skill sets.
Rather than the traditional data + program = outcome paradigm that had developed over decades, machine learning worked on a data + outcome = program approach, with the resulting model becoming more accurate as more data becomes available to study for patterns. Also important is selecting the correct algorithm, or set of rules, for the technology to use when building its models so it delivers the most valuable results. For example, one client may want as few false positives as possible in an industrial setting, while another may want to avoid any false negatives that could lead to an overlooked major failure.
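The sketch below illustrates both ideas under simple assumptions: labeled data plus known outcomes go in, a learned model (the “program”) comes out, and moving the decision threshold trades false positives against false negatives. It assumes scikit-learn, uses synthetic data, and the threshold values are purely illustrative.

```python
# Data + outcome = program: hand the algorithm labeled examples and it
# learns the "program" (a model). Assumes scikit-learn; values illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic "entries" (rows) described by a handful of "features" (columns).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # the learned "program"
scores = model.predict_proba(X_test)[:, 1]

# The same model can be tuned toward different business goals by moving the
# decision threshold: a higher threshold means fewer false positives, a
# lower threshold means fewer false negatives (missed failures).
for threshold in (0.3, 0.5, 0.7):
    tn, fp, fn, tp = confusion_matrix(y_test, scores >= threshold).ravel()
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```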
Since the beginning of major advances in AI technology nearly 20 years ago, there have been many exciting developments in what it can achieve, from image recognition and completion to real-time language translation, with AI using a variety of data types: structured/rectangular data in spreadsheet form, images, sequential data such as video and sensor readings, and language in the form of words and text.
Porter said there was once the belief that researchers could “throw the kitchen sink at the tech” and let it sort out its findings, but there are limits to how useful vast amounts of data can be. In general, Porter said AI needs 10 times as many entries (records or incidents) as features (specific attributes or characteristics that identify an entry) in the data being analyzed. Too many features relative to too few entries creates a problem: there are too many variables, and too little evidence, from which to find reliable patterns in the data set.
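That rule of thumb is easy to sanity-check before any modeling begins. The helper below is hypothetical, not an official tool, but it shows how simple the check is, assuming roughly ten entries (rows) per feature (column).

```python
# Rough check of the rule of thumb: about ten entries for every feature.
def enough_entries(n_entries: int, n_features: int, ratio: int = 10) -> bool:
    """Return True if the dataset has at least `ratio` entries per feature."""
    return n_entries >= ratio * n_features

# Example: 5,000 sensor readings described by 40 features is comfortable;
# 300 readings with the same 40 features is not.
print(enough_entries(5000, 40))  # True  (5000 >= 400)
print(enough_entries(300, 40))   # False (300 < 400)
```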
By performing tasks such as classification (identifying the type of animal or vehicle in an image) and completion (sharpening a blurry image or filling in missing values in a loan table), AI tools have become highly valuable in today’s business environment.
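For tabular data, the completion task can be as simple as imputing the blanks in a table. The sketch below assumes pandas and scikit-learn; the column names and figures are made up purely for illustration.

```python
# Completion for tabular data: fill in missing values in a loan-style table.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

loans = pd.DataFrame({
    "income":      [48000, 61000, np.nan, 52000],
    "loan_amount": [12000, np.nan, 9000, 15000],
})

# Replace each missing value with the column median learned from the data.
imputer = SimpleImputer(strategy="median")
completed = pd.DataFrame(imputer.fit_transform(loans), columns=loans.columns)
print(completed)
```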
Other burgeoning uses for AI include flying drones to detect maintenance issues in wind turbines and other renewable energy assets, and monitoring for proper safety procedures and other actions in manufacturing settings where health and safety are crucial concerns.
And even those beneficial, impressive uses for AI technology only scratch the surface, Porter said, of a field that promises to change the world in the coming years and decades.