(*) Original article appeared in Austin Business Journal
Elon Musk and several other Silicon Valley stalwarts have announced their intent to provide $1 billion in funding to a new nonprofit dedicated to safe artificial intelligence research. The organization, called OpenAI, brings together talented practitioners in the field and proposes to make their designs and code publicly available. The motivation behind the organization and its focus on safety can be found in recent statements by Musk, Bill Gates, and Stephen Hawking. In these musings, artificial general intelligence has been compared to “summoning the demon” and presented as a development that could “spell the end of the human race.”
As IBM’s Rob High discussed in a recent ABJ article, Austin’s leadership in artificial intelligence has been a reality for years. Most people are familiar with IBM’s focus on cognitive computing through Watson, the “Jeopardy”-winning natural language processor that has a major presence in Austin. In addition to my company, SparkCognition, organizations such as Umbel, Lucid, WayBlazer, and CognitiveScale are advancing machine learning in their respective verticals. Whether it is energy, security, health care, travel, or retail, the AI sector in Austin is leading the industry. And that’s all before we even get to the University of Texas Computer Science Department, which is the largest in the country and a top 10 program.
As someone who considers artificial intelligence to be among the worthiest goals humanity can pursue, and as someone who has committed himself to this pursuit, I am thrilled to hear of the incredibly generous financial commitment that will enable a group of amazing people to contribute to the field. I believe that intelligent systems hold tremendous promise for the human race and may in fact present the only practical avenue to rid us of many of the ills that have plagued us since the dawn of our own consciousness — poverty, hunger, disease and fear.
That last villain — fear — may seem somewhat out of place to many, but I include it here for good reason. Throughout our evolutionary history, our brains have been finely tuned to focus on risks and downsides, weighing them more heavily than the opportunities before us. Until now, the tight control exercised by our amygdala — that structure within the “reptilian” part of our brain that injects fear into our thought process — has arguably been beneficial to us. We have survived this long, after all, fleeing lions, tigers and bears just in time to procreate and spawn another generation. But I believe that we now have more to gain by overcoming this bias toward fear. Most would hopefully agree that future man should be guided more by the pursuit of opportunity than by the avoidance of loss and imagined downside. The past bears witness that it is precisely when we have given in to our fears that we have diminished our collective potential and shown history the ugly side of humanity.
I am not one to argue for the pursuit of any objective without regard to potential consequences. But it does worry me that so much of the discourse around the development of artificial intelligence hinges upon fear, job losses, machines taking over the world, and so on. Most AI researchers will confirm that artificial general intelligence is far from imminent. Too early on this journey have we conjured up too many demons. And these terrible visions we imagine — these dragons swooping down at us from the clouds — threaten to misguide us on our enduring quest to realize our potential.
The demonization of AI is no longer limited to thoughts and words but has recently taken practical form. After I spoke at a recent AI event at the popular SXSW conference in Austin, protesters opposed to the development of artificial intelligence picketed outside the venue. If we continue to march to the drumbeat of fear, we may actually jeopardize serious research and dissuade new entrants. This would not be in the interests of a humanity that faces many great challenges, ranging from climate change and ideological radicalization to energy shortages and the potential end of the antibiotic age. We may be avoiding an imagined frying pan only to jump into a very real fire.