The Ethical Future Of Robotics
Stephen Fung | April 28, 2007

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These Three Laws of Robotics, formulated by Isaac Asimov in the early 1940s, have been invoked in many science-fiction films whenever a robot appeared, and were sometimes blatantly breached for the sake of a more gruesome script! Yet it is likely that such androids will become a reality some fifty years from now. So what ethics will these artificial creatures follow? That was the topic debated by experts and lay people alike at the Rights for Robots public conference held in London, UK, on April 24.

With the rapid technological advances in this field (remember Domo?), it seems essential to set up guidelines that will regulate the behavior of robots in the future. Otherwise, we'll end up having to create special Blade Runner police forces!

"Robot technology is accelerating with applications in the home, in the workplace and in the military. It is hard to keep up and we are at a point where the public need to make some informed decisions about our future."

Source: EurekAlert