I was recently reading about the technological singularity that many of the who's who in the field of AI/robotics (Ray Kurzweil, Hans Moravec, Vernor Vinge, etc.) are talking about. The June '08 issue of IEEE Spectrum runs a special feature on this called "Rapture of the Geeks". Reading through the articles (and also having read Ray Kurzweil's The Singularity Is Near), I have a few questions about some of the predictions that futurists are making. I am trying to get feedback on these issues from some well-known folks in the field and will post their responses as and when they become available.
- One popular view of the technological singularity predicts that machine intelligence will surpass human intelligence in the next few decades, and that we will have machines building ever more intelligent machines, presumably not under human control. This means that humans would have succeeded in building something that can replace us at the top of the intelligent-species list. If this is indeed true, then wouldn't it make human existence meaningless and eventually result in our extinction? Or worse, we may end up being pets to a superior intelligent species :) . My point is, if humans are smart, why would we let this happen?
- Ray Kurzweil predicts that the singularity is just three to four decades away. He builds his argument on the technological revolutions in genetics, nanotechnology, and robotics. Innovations in these areas may help us build machines smarter than ourselves, but they would all lack the consciousness that sets humans apart. Thus they could all be more efficient than us, but presumably not more "street smart" than us. Some scientists also predict that we will eventually be able to transfer our consciousness to these machines. But what would that help us achieve? Would it help us better our own lives, or hasten our extinction?
- Assuming the singularity does eventually happen, what makes us think we will be able to build a set of guiding principles under which our intelligent creations will operate? And why would those conscious, intelligent beings follow our guidelines instead of inventing their own, more efficient ones? Isn't this similar to humans having children, and children growing up and then deciding for themselves what's right and wrong? The only difference here is that these android offspring would be far more capable (and lethal) than human children.
- The final question is: if our technological progress is indeed pointing toward a singularity, should we take it as a sign of progress or as a warning about our future?