Smart machines need ethics

Gartner said CIOs need to develop “ethical programming for smart machines”.

What does that mean? According to Frank Buytendijk, a research VP at Gartner: “People must trust smart machines if they are to accept and use them. The ability to earn trust must be part of any plan to implement artificial intelligence (AI) or smart machines, and will be an important selling point when marketing this technology.”

But not all programming needs to be ethical – vapourware, which by its very nature never ships, raises no ethical questions at all, said Gartner. “The first release of any software is seldom complete.”

But with so-called smart machines such as virtual personal assistants, responsibility for ethics is shared between the service provider and the designer. For example, a piece of smartphone software that tells you which bridge is best to hurl yourself off is clearly far from ethical. Instead, the software should point you in the direction of a psychoanalyst or support worker.

Where ethics really becomes important is when you put a smart machine – such as an autonomous car – in charge of life and limb.

Self-aware machines are still a long way off, but that is when we will really be in trouble. Gartner says self-aware machines will be responsible for their own behaviour.

If they behave unethically or perform illegal acts, perhaps they will need to be taken to a digital court and punished accordingly.

It’s all a bit Isaac Asimov, but self-aware machines are, right now, merely an idea.