The Singularity Prize: A Vision of the Coming Intelligence Explosion

There is a specter hanging over humanity: artificial superintelligence (ASI). As certain as the sunrise, smarter-than-human intelligence will emerge in a machine. The only open questions are when it will arrive and whether strong artificial intelligence (AI) will prove pathological or become the greatest positive scientific breakthrough in all of human history. It is my considered opinion that the public discussion of machine intelligence should begin now; it will be necessary to ensure the safe arrival of this technology. The Singularity Prize is a wide-ranging novel that imagines this achievement arriving in the early 2030s. The story follows the character arc of Julian Marshall, a computer science professor tasked with leading a team of scientists to capture the prize and its $30 billion reward, as they compete with a rival team based in China led by Xeujing Wang.

Major tech firms, including Google, Apple, Facebook, and Amazon, have major AI efforts underway, and all have invested fortunes in AI startups, particularly those working in machine learning. A self-improving supercomputer would be a prerequisite for any system that exceeds human intelligence.

How can this future technology be made safe and remain a servant to humanity? Can a recursively self-improving system be trusted to keep its goal structure invariant and aligned with human welfare? Ethics, the system that distinguishes right from wrong, applies no matter where you are in the universe. The code for a self-improving ethical system may well be the most difficult module of the singularity to write. Thousands of scenarios present complex ethical dilemmas, and such a system must handle them better than the average human; this advanced ethical core is what ensures the safety of the superintelligence. Consider how quickly public awareness can shift: only a few years ago climate change was not broadly discussed as a pressing issue for our global civilization, yet now it fills the media because the evidence of disappearing glaciers and unusual weather patterns has become unmistakable.

AI advances appear in press reports with increasing frequency. Many highlight benefits such as Watson, the IBM computer that defeated the top human Jeopardy! players; others shout loudly about the dangers of superintelligence. The Singularity Prize hopes to capture the public’s imagination and get the general debate rolling.

The AI Community Takes Collective Action

A few decades ago, scientists met at Asilomar to map out safety protocols for genetic engineering research, and the resulting agreement made biotechnology advances safer. A conference of the same kind was held at Asilomar in 2017 to establish principles for artificial intelligence research; the resulting guidelines can be found here (http://bit.ly/2nyeISS). More sessions like this are needed to guide the research community in pursuing AI research, and the general public needs to get up to speed on the basic science of AI and voice its opinion on keeping the technology safe and collaborative with humanity.

The Paramount Importance of Ethics

Why focus on ethics? Of all the great philosophical traditions, ethics stands head and shoulders above the rest as a system that teaches right from wrong. As a set of ideas that provides the basis for difficult decisions of life and death, ethics has proven its strength.

Ethics is humanity’s last, best hope for eradicating warfare and other forms of destructive, violent conflict. Most importantly, the intelligent machines of our creation must have embedded in their design a recursively improving ethical system that exceeds human-level ethics. This is a difficult programming task, but it is surely not an impossible goal; indeed, it is a necessary one. Utilitarianism, the ethical theory that seeks to maximize the well-being of sentient beings, must be part of the AI.
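
To make the utilitarian idea concrete, here is a minimal sketch in Python of a decision rule that scores each candidate action by the total well-being it is predicted to produce across the parties it affects, then picks the highest-scoring action. The action names, the well-being numbers, and the choose_action helper are hypothetical illustrations, not a description of how a real AI’s ethical module would be built.

```python
# Toy utilitarian decision rule: pick the action whose predicted outcome
# maximizes total well-being across everyone it affects.
# All names and numbers below are hypothetical illustrations.

def total_wellbeing(outcome):
    """Sum the predicted well-being change for every affected party."""
    return sum(outcome.values())

def choose_action(predicted_outcomes):
    """Return the action with the highest total predicted well-being."""
    return max(predicted_outcomes, key=lambda a: total_wellbeing(predicted_outcomes[a]))

# Hypothetical example: two candidate actions and their predicted effects.
predicted_outcomes = {
    "reroute_flood_water": {"town_a": +8, "town_b": -1, "farmland": -2},
    "do_nothing":          {"town_a": -9, "town_b":  0, "farmland":  0},
}

print(choose_action(predicted_outcomes))  # -> "reroute_flood_water"
```

The arithmetic is trivial; the hard part, and the part a recursively improving ethical system would have to do better than any human, is predicting the outcomes and assigning the well-being scores in the first place.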

An example of AI caution is the now nearly universal call among researchers to hold off on research and development of autonomous military robots. These combat robots could make life-and-death decisions without human intervention, and that path could evolve into a Terminator world as dystopian as any future we can imagine.

Computers will eventually eliminate 70% of all jobs before the end of the 21st century. This will disrupt society to its core. A hundred years ago, society required 90% of its workers to produce the food the world needed; today only about 1% of workers feed the other 99%. As AI and robotics advance, the same transformation will sweep through most other jobs.

Martin Luther King famously said, “the arc of the moral universe is long, but it bends towards justice.”

This is an optimistic view of justice. We can paraphrase King and say, “the moral arc of machine intelligence is short, and it must always bend toward justice.” That arc has to be embedded in the machine before its intelligence goes live; we cannot wait until the superintelligence arrives to consider and build its ethical foundation. Unlike human societies, where ethics emerged only after intelligence had advanced, a greater-than-human intelligence must have its ethical immune system built first, before anti-ethical behaviors and goal structures can emerge from its own advances.

AI Has Begun to Have Enormous Effects 

In spite of the dangers inherent in building a singularity, and as quietly as it is kept, our lives are already governed by AI. This is narrow AI, with expertise in a single domain. Weather forecasts, for example, have improved significantly thanks to AI-driven predictive modeling, and an accurate forecast can save lives when people evacuate early from dangerous hurricanes and tornadoes.

When you board a plane from Portland, Oregon, to New York City, onboard computers manage the flight parameters and fly the aircraft once the pilot engages the autopilot; the pilot hand-flies the plane for only about 7% of the flight. Computer software controls nearly every aspect of the cars we drive today. Wall Street, the stage for our major financial crises, is wholly beholden to computers, and computerized high-frequency trading is now the norm. The flash crash of 2010, which caused huge momentary stock losses, was driven by automated trading.

It is only another step for supercomputer programs to help us manage and enhance many aspects of daily life. These worker robots will liberate the masses from a lifetime of mental and physical labor. They will not only take jobs but also create a new problem: people with too much leisure and no idea what to do with their time. Hobbies will become essential for extracting pleasure and joy from life, and building friendships will be another great outlet for our creative energies. We were born social, and we will remain social in the present and the future. Our social networks will be greatly enhanced by computers and by the nervous system of the planet, the Internet.

The singularity, when it arrives, will touch all of our lives in profound ways. Our task is to ensure that its touch is gentle and supportive of our cherished values, our hopes, our dreams, and our determination to build a better world for everyone.

Broad Public Discussion is Needed on AI Design

My hope for The Singularity Prize is that it will stimulate the deep discussion and fierce debate necessary for human society, and its scientific tip of the spear, to build an intelligence whose actions cause more good in the universe than any possible alternative. This is our granite-like view: ethically based and focused on creating a moral agent that can be described as mathematical ethics embedded in a machine. The critical issue is the need for greater and deeper levels of intelligence to solve the great and pressing problems of civilization. We could wait for a random genius to be born who can masterfully solve the great mysteries of science, but that path is linear in scope and may not happen at all. How often does an Einstein appear? Perhaps once every century or two. Meanwhile, the problems are arriving faster than the solutions. For example, the sixth great extinction is underway. In Earth’s 4.5 billion years of geologic history there have been five great die-offs of plant and animal species, caused by asteroid impacts, massive volcanism, and other planetary upheavals. The sixth extinction is being caused by a single species, humans, who are now the primary force of change. The previous five unfolded gradually over vast stretches of time; the sixth is occurring at warp speed by comparison.

Extinctions are not good for the Earth or its current ecosystem. As data flows in on the great change underway on the planet, a superintelligence will be capable of finding subtle patterns in vast data sets, and prediction and forecast modeling will become significantly more powerful and accurate. Foremost, the superintelligence will formulate multiple solution tracks for the problems it encounters. Mastering big data with speed and mathematical precision will be the hallmark of this scientific advance. For example, there would be accurate predictions of the rate of climate change and of the timelines for temperature change and sea level rise. Human civilization would then have sufficient warning to be proactive in mitigating the changes to come. Millions of lives could be saved, and the ecosystem could remain stable instead of irreversibly degrading and losing its ability to support life.
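
The kind of prediction and forecast modeling described here can be shown in miniature. The sketch below, in Python with NumPy, fits a simple linear trend to a short series of invented temperature-anomaly values and extrapolates it forward; the data and the straight-line fit are assumptions for illustration only, and real climate models are vastly more sophisticated.

```python
# Toy trend-fitting sketch: fit a straight line to invented yearly temperature
# anomalies and extrapolate ahead. The values are made up for illustration;
# real forecast models are far more sophisticated than a linear fit.
import numpy as np

years = np.array([2010, 2011, 2012, 2013, 2014, 2015, 2016])
anomaly_c = np.array([0.72, 0.61, 0.65, 0.68, 0.75, 0.90, 0.99])  # hypothetical values

# Least-squares fit of a first-degree polynomial (a straight line).
slope, intercept = np.polyfit(years, anomaly_c, 1)

for future_year in (2020, 2025, 2030):
    projected = slope * future_year + intercept
    print(f"{future_year}: projected anomaly of about {projected:.2f} C")
```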

There are, of course, many other threats to our global civilization that a superintelligence could help us address. Instead of fearing the loss of life, we could be hopeful, with a high degree of probability, that AI will save millions of lives. The Singularity Prize presents several scenarios in which an AI does exactly that.

Artificial general intelligence (AGI), which precedes ASI, centers on learning software that can learn from raw data without being pre-programmed. Such an algorithm can solve problems and operate across many domains. The AI works by observing its environment (e.g., data, visual images) and then taking specific actions in response to those observations. Early work on this concept used computer games as a testing ground for neural network-based algorithms: in rapid succession, the AI improved from knowing neither the game nor its rules to beating the best known human players. AlphaGo, developed by the company DeepMind, used this approach to beat one of the world’s best Go players, Lee Sedol, in 2016, a feat that had been projected not to occur until around 2025 because of the game’s complexity; the number of possible board configurations in Go exceeds the number of atoms in the observable universe.
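
As a rough illustration of that observe-then-act loop, here is a minimal tabular Q-learning sketch in Python. The tiny “corridor” game, its reward of +1 at the goal, and the hyperparameters are all invented for the example; it shows trial-and-error learning in its simplest form, not the deep neural network approach DeepMind actually used.

```python
# Minimal observe-act-learn loop: tabular Q-learning on a tiny "corridor" game.
# The game, rewards, and hyperparameters are invented for illustration.
import random

N_STATES = 5            # positions 0..4 in the corridor; position 4 is the goal
ACTIONS = [-1, +1]      # step left or step right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Observe the current state, then choose an action (mostly greedy).
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Q-learning update: nudge the value estimate toward the reward plus
        # the discounted value of the best action from the next state.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = next_state

# After training, the learned policy simply walks right toward the goal.
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)])
```

The agent starts out knowing nothing about the game, and the same loop of observing, acting, and updating value estimates is what, scaled up with deep neural networks, let game-playing systems surpass the best human players.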

Unlike the narrow AI described earlier, AGI can tackle many domains and solve real-world problems. Healthcare is a sector where diagnostic and therapeutic breakthroughs could arrive in the near future. The data accumulating in the health sector is the fuel that AI algorithm engines require, and AI is certain to make spectacular progress in the health field. Already, IBM’s Watson is performing oncology diagnosis and treatment planning on par with, and sometimes better than, some top human oncologists.

Where does this place us in the AI debate? We are at the very beginning of the public conversation. We cannot depend on computer scientists alone to decide how to proceed with AI; the general public needs to be involved, and The Singularity Prize can help educate it.

The Critical Control Issue of AI

The first task is to make sure that humans remain in control of an AI. We also have to coordinate our efforts in teams and avoid a race to the finish line in which we begin to cut corners; safety could be the first thing cut, because it does not add to an AI’s raw capability. We must ensure that government has a role in how AI is brought into the world, which will help democratize it, and we must determine who sets goals for an AI and how. Part of the solution is partnership: the top technical talent in the field working as a trusted community of collaborators with an ethical ethos in a transparent environment.

There is a concern that once an intelligence greater than ours exists, we will suffer the fate of other species less intelligent than humans. When these machines become smarter than we are, will we understand what is inside the black box? We must understand the steps in an AI’s decision-making process and develop machines that generate explanations humans can understand. There is also the concern that bad actors could misuse AI, which is difficult to control. Even a benign AI could foster overdependence in the society that owns it; human intelligence could atrophy from disuse, and the process would be so gradual that civilization would not even recognize the change as AI recedes into the background and becomes invisible.

Can we answer the question of what a good future for humanity would be? We have to acknowledge that humans are already cyborgs: we are already superhuman through the links to our computers and phones. The problem is bandwidth and widespread access, and we will have to increase the bandwidth between computers and humans. The human prefrontal cortex is a prediction machine that can simulate the future; an ASI will be thousands of times more capable of accurately forecasting it.

We need vision and imagination about our future with AI. It should be a fun future, not a boring one. We have to solve the alignment problem: it is vital to have an AI that is aligned with our values and can show us ways to improve our lives. The AI should point out various pathways of existence, including what future we should want; getting this kind of forecasting right could be the most significant thing that ever happens to humanity. The AI can help us explore the big questions: why are we here, what is consciousness, what is the nature of the universe, what is our purpose, and how can we unlock our full potential? AI can help us eliminate the negatives in the world, and as we shift to the positive it can suggest paths we cannot imagine on our own. AI opens up a faster pace of possibilities that transcends human formulations and exceeds the limits of our mental constructs. The post-human era, if it arrives, will be hard for us to forecast accurately.

The prospect of humans becoming smarter as a result of AI is very promising. Imagine asking humans 74,000 years ago what type of future they would like to have; they might say, “we would like enough food year-round,” or “we would like protection from predators.” They could not have imagined owning an iPhone or an automobile. By analogy, we cannot imagine the future that AI advances will bring; the vista of change will be too wide in scope to speculate about accurately. The imbalance between poverty and wealth in society can be rebalanced with the help of an AI, which can help us eliminate the zero-sum game between haves and have-nots. Competition for scarce resources will cease to be an issue because of AI-produced abundance. As the saying goes, “freedom consists of the distribution of power, and despotism is its concentration.” AI can facilitate freedom. Let us embrace the challenges of this future world. The Singularity Prize is a novel that can illuminate many of the issues discussed above. Look for its release in October 2017.