Introduction


            A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. While the possibility of domain-specific “superintelligences” is also worth exploring, this paper focuses on issues arising from the prospect of general superintelligence.


            Several authors have argued that there is a substantial chance that superintelligence may be created within a few decades, perhaps as a result of growing hardware performance and increased ability to implement algorithms and architectures similar to those used by human brains.



Ideals


            A prerequisite for a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, or another tool that will add incrementally to human capabilities. Let’s take a look at some of the unusual aspects of the creation of superintelligence:


§   Superintelligence may be the last invention humans ever need to make.


§   Technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence.


§   Superintelligence will lead to more advanced superintelligence.


§   Artificial minds can be easily copied.


§   Emergence of superintelligence may be sudden.


§   Artificial intellects are potentially autonomous agents.


§   Artificial intellects need not have humanlike motives.


§   Artificial intellects may not have humanlike psyches.


To the extent that ethics is a cognitive pursuit, a superintelligence could do it better than human thinkers. This means that questions about ethics, insofar as they have correct answers that can be arrived at by reasoning and weighing up of evidence, could be more accurately answered by a superintelligence than by humans.



Obligations


            A superintelligence’s top goal should be friendliness. How exactly friendliness should be understood, how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures are matters that merit further consideration. If the benefits that the superintelligence could give are extremely vast, then it may be less important to negotiate over the detailed distribution pattern and more important to ensure that everybody gets at least some significant share, since on this supposition even a tiny share would be enough to guarantee a very long and very good life. One risk that must be guarded against is that those who develop the superintelligence would not make it generically philanthropic but would instead give it the more limited goal of serving only some small group, such as its own creators or those who commissioned it.



Consequences


            A superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and living closer to our ideals.


            However, the risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only a select group of humans, rather than humanity in general.


            Once in existence, a superintelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on Earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.




Credit: ivythesis.typepad.com

