Decades ago, science fiction great Isaac Asimov imagined a world in which robots were commonplace. This was long before even the most rudimentary artificial intelligence existed, so Asimov created a basic framework for robot behavior called the Three Laws of Robotics. These rules ensure that robots will serve humanity and not the other way around. Now the British Standards Institution (BSI) has issued its own version of the Three Laws. It’s much longer and not quite as snappy, though.
In Asimov’s version, the Three Laws are designed to ensure humans come before robots. For reference, Asimov’s laws, in abbreviated form, require robots to preserve human life, obey orders given by humans, and protect their own existence. There are, of course, times when those rules clash. When that happens, the first law always takes precedence.
The BSI document was presented at the recent Social Robotics and AI conference in Oxford as an approach to embedding ethical risk assessment in robots. As you can imagine, the document is more complicated than the laws Asimov wrote into his fictional positronic brains, but it works from a similar premise. “Robots should not be designed solely or primarily to kill or harm humans,” the document reads. It also stresses that humans are responsible for the actions of robots, and that in any instance where a robot has not acted ethically, it should be possible to find out which human was responsible.
According to the BSI, the best way to make sure people are accountable for what their robots do is to make AI design transparent. That may be much harder than it sounds: even if the code governing a robot is freely accessible, that doesn’t guarantee we can ever know why it does what it does.