The document discusses the technological singularity: the development of an artificial superintelligence that vastly surpasses human intellectual abilities. What happens after such an event is difficult to predict. Once created, this superintelligence could improve itself rapidly through an "intelligence explosion," designing ever more advanced versions of itself. The consequences of developing such a powerful AI are uncertain: it might help humanity flourish, or it might pose dangers that must be prevented through safeguards such as confining it and ensuring it remains helpful and harmless to humans. Developing AI with human-friendly values is seen as key to navigating this challenge.
2. Accelerating Change
- Technological progress is exponential.
- Touchscreens, 3D TV, motion control: you ain't seen nothing yet.
- We may have to integrate ourselves with machines just to keep up with the pace of technological development.
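The exponential claim above can be made concrete with a toy doubling model in the spirit of Moore's law. The 18-month doubling period and the starting capability of 1.0 are illustrative assumptions, not figures from the slides:

```python
# Toy model of exponential technological growth.
# Assumption: capability doubles every 1.5 years (Moore's-law-style).
def capability(years, doubling_period_years=1.5, initial=1.0):
    """Capability after `years`, doubling every `doubling_period_years`."""
    return initial * 2 ** (years / doubling_period_years)

for y in (0, 3, 6, 9):
    print(f"year {y}: capability {capability(y):.0f}")
```

The point of the sketch is only that equal time steps multiply (rather than add to) capability, which is why progress can feel slow at first and then overwhelming.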
3. What is the Singularity?
- Mankind builds a supercomputer or artificial intelligence (AI) that is superintelligent.
- The term "singularity" describes such an event; it could happen at any time, anywhere.
4. Intellectual Event Horizon
- Because this machine's mental capabilities would far exceed our own, it would be nearly impossible to predict the future after the singularity.
5. Intelligence Explosion
- Once we build a machine that even slightly surpasses human intellect, it could improve and enhance its own design.
- The improved machine could build still more capable machines, resulting in a cascade of intelligence; in other words, an intelligence explosion.
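The cascade described above can be sketched as a simple recurrence: each generation designs its successor, and the size of each improvement grows with the current intelligence level. All numbers here (starting level, gain factor) are arbitrary assumptions chosen purely to illustrate the feedback loop:

```python
# Toy sketch of an "intelligence explosion" as a recurrence:
# smarter machines make proportionally larger improvements.
# Starting level 1.0 and gain 0.1 are illustrative assumptions.
def explosion(generations, start=1.0, gain=0.1):
    level = start
    history = [level]
    for _ in range(generations):
        # Each generation's improvement scales with its own intelligence,
        # so growth accelerates instead of staying linear.
        level = level * (1 + gain * level)
        history.append(level)
    return history

levels = explosion(10)
print(levels)
```

Because the improvement factor itself depends on the current level, the sequence grows faster than any fixed exponential: this feedback is the essence of the "explosion" claim.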
6. How do we build this superintelligence?
- We develop a device that amplifies the power of human brains, or
- We construct an artificial intelligence (AI) with a greater capacity for thought than our own.
7. Consequences
- The machine becomes evil, kills us all, and populates the Earth with robots.
- The machine is friendly, and human society flourishes alongside robots.
8. How do we prevent the AI from becoming deadly?
- Keep the superintelligence in a simulated "box," separated from reality and internet access.
- Program the intelligence with the intention of not harming us.
- Program the intelligence to maximize human pleasure and satisfaction.
- Teach the superintelligence moral values through machine learning.
- Add friendliness to the AI design.
9. Friendly AI is the key!
- Institutes working on friendly AI include:
- the Singularity Institute (San Francisco, California)
- the Future of Humanity Institute (Oxford, U.K.)