Recently I have noticed that if I mention an attachment in the text of an e-mail but forget to actually attach it, Outlook prompts me before I send it: “Did you forget to attach something?” WTF.
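
A reminder like that doesn’t actually need much intelligence. Here is a deliberately naive sketch of the idea in Python (purely illustrative; this is not Outlook’s actual logic, and the phrase list is made up):

```python
# Illustrative sketch only: a naive "did you forget the attachment?" check.
# The real Outlook feature is more sophisticated; this just shows the idea.
ATTACHMENT_PHRASES = ("attached", "attachment", "enclosed", "see the file")

def should_prompt(body, attachments):
    """Prompt if the text mentions an attachment but none is present."""
    mentions_attachment = any(p in body.lower() for p in ATTACHMENT_PHRASES)
    return mentions_attachment and not attachments

if should_prompt("Please find the report attached.", attachments=[]):
    print("Did you forget to attach something?")
```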

These types of things seem to happen to me all too often now, and I’m sure to the rest of you too. I was recently talking to a couple of colleagues at work about how frightening it must be to wake up to Alexa cackling in the middle of the night, or to your smart TV turning on by itself, showing an old-school Western at full volume, guns blaring and cowboys cheering.

Not least of all that video of the Boston Dynamics Atlas robot (gigantic, Decepticon-esque) jumping onto boxes and doing backflips. Or their robot dog “SpotMini” opening doors. The Guardian calls it “unsettling”. It’s bloody downright terrifying. My response on Twitter: “we’re all gonna die”.

My colleague summed it up when he said “these advances in technology unnerve me”, especially when you have Wikipedia summing up Artificial Intelligence (AI) as any device that perceives its environment and takes action that maximises its chances of successfully achieving its goals. Even Stephen Hawking agreed. He kept warning that he thought AI could end the human race!

You might know that the scope of AI is disputed: “as machines become increasingly capable, tasks considered to require intelligence are often removed from the definition, a phenomenon known as the AI effect, leading to the quip that AI is whatever hasn’t been done yet”.
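
That definition (perceive the environment, then act to maximise the chance of achieving a goal) is really just the classic “intelligent agent” loop. A toy sketch in Python, with every name and number invented purely for illustration:

```python
# Toy "intelligent agent" in the Wikipedia sense: it perceives its environment
# and takes the action that maximises its chance of achieving its goal.
# Everything here (the room, the thermostat, the numbers) is made up for illustration.

class Room:
    def __init__(self, temperature):
        self.temperature = temperature

    def observe(self):
        return self.temperature

    def apply(self, action):
        self.temperature += action  # heat (+), cool (-), or do nothing (0)

def thermostat_agent(room, target=21.0, steps=5):
    actions = (-1.0, 0.0, 1.0)
    for _ in range(steps):
        temp = room.observe()                                       # perceive
        best = max(actions, key=lambda a: -abs(temp + a - target))  # maximise the goal
        room.apply(best)                                            # act

room = Room(temperature=17.0)
thermostat_agent(room)
print(room.temperature)  # drifts towards the 21-degree goal (prints 21.0 here)
```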

As a futurist, I’m obsessively excited about what our futures hold. Optimising logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and, consequently, richer. But at the same time I carry a foreboding feeling, as if something more sinister than I can grapple with is creeping up in my shadow. I AM the boiling frog.

But what are the ethical questions we have to deal with in all of this? Julia Bossmann, President of the Foresight Institute, wrote an excellent article entitled “Top 9 ethical issues in artificial intelligence”. She asks the difficult questions we all face (when we decide to think this deeply), including:

  • What happens when robots take over my job, and others’ jobs too? (Elon Musk’s Tesla)
  • When we cut down on human labour and revenues consequently go to fewer people, how do we structure a fair post-labour economy? (Silicon Valley)
  • What is the social impact of this? (First-person shooters and games like GTA… #justsaying)
  • Does it affect my and others’ behaviour and interaction? (Virtual reality)
  • Could machine systems be fooled? Can they get it wrong?
  • How do we eliminate bias or prejudice? (Software used to predict future criminals has shown bias against black people, and face recognition has shown similar bias; see the sketch after this list)
  • How do we stop baddies from using this maliciously? (Raoul Silva, Dr. Evil, Otto Octavius, Thomas Gabriel, Syndrome, Agent Smith, to name a few)
  • Even scarier, how do we stop AI turning against us?! (V.I.K.I., Skynet, and the original AI villain of 2001: A Space Odyssey, HAL 9000… EEEEK!)
  • How do we stay in control of a complex intelligent system?
  • How about defining humane treatment of AI? (Westworld, case in point)
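
On the bias question above, “showing bias” has a concrete, measurable meaning. A minimal sketch of one common check (comparing false positive rates between groups), with entirely made-up data:

```python
# Hypothetical sketch of one common bias check: do false positive rates differ
# between groups? The data below are invented purely for illustration.
from collections import defaultdict

# Each record: (group, model_flagged_as_risky, actually_reoffended)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, flagged, reoffended in records:
    if not reoffended:              # the person did not reoffend
        negatives[group] += 1
        if flagged:                 # but the model still flagged them as risky
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
# A large gap between groups is one sign the model treats them unequally.
```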

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. There is no doubt that AI is transforming our economy and society, but a number of ethical questions still require careful consideration.

I’ve just read that the European Commission will engage in a dialogue with stakeholders on the future of AI in Europe, opening up discussion of all aspects of AI development and its impact on the economy and society.

Carlos Moedas, Commissioner in charge of Research, Science and Innovation, stated: “Artificial intelligence has developed rapidly from a digital technology for insiders to a very dynamic key-enabling technology with market-creating potential. And yet, how do we back these technological changes with a firm, ethical position? It bears down to the question, what society we want to live in.”

A friend told me that after he teased his wife about being a medical doctor (she was diagnosing his medical woes, as any good wife would do), he started receiving advertisements for the latest medical equipment on his social media feeds.

Alexa was listening. 

What society indeed, Mr. Carlos Moedas?

Do you agree with Stephen Hawking? I think I do, to a certain extent, at this point in time. He maintained that while the primitive forms of AI had proven very useful, he feared the consequences of creating something that could match or surpass humans. I know without a doubt that AI has vast potential, beyond my, and your, wildest dreams. Its responsible implementation is up to us to determine.

For further information please contact Kirsten Naudé, Associate at Tigris Management