You are currently viewing archive for January 2015
Posted By Peter Bentley

Artificial Intelligence has never been more popular, with many recent movies and books having fun with the ideas. Research in AI and Machine Learning has never been stronger, with more people creating more advances than ever before. We're now seeing more and more mainstream applications: look at Siri on your iPhone for an everyday example of state-of-the-art AI. Your credit card company uses basic machine learning to alert you to potential fraud. Your car might even be able to apply the brakes and help you avoid a collision. There are new companies specialising in creating AI software; I am a consultant for one called Braintree Ltd, which has the real aim of creating Strong AI in the future.

But there has also been another recent trend: the rise of AI scaremongering. Professors, entrepreneurs, and other supposedly knowledgeable people are increasingly being reported as proclaiming that AI poses a real danger to the future of humanity. Whether or not they actually made these claims, they should know better than to let them stand in the press.

It's nonsense. Rubbish. Idiocy. It might even be downright damaging, in the same way that the negative publicity around GM food and stem cell research caused real damage to research funding and progress. It's also not new. AI research is as old as computers, and the field has been through this several times before: silly claims and predictions, leading to a loss of confidence, leading to "dark ages" of AI research in which no funding can be obtained.

The bottom line is that we try very very hard to make "intelligent" software. We get a few really neato results for very niche applications. We will continue to make really neato applications that process information better than we can, and soon we'll have lots of very helpful tools that make our lives easier and safer. But despite the science fiction visionaries and their silly predictions, we have little clue how to make real intelligence. We don't understand consciousness or emotions; we don't understand how and why brains are structured in the way they are; we don't understand so much that we simply cannot make an AI. Maybe with enough resources we could evolve one with a combination of genetic algorithms, developmental processes and neural networks. But we don't understand how to do this well enough yet, and we don't have the computational resources.
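To make the "evolve one" idea above concrete, here is a minimal, purely illustrative sketch (not Braintree's method, and nothing like the scale the paragraph is talking about) of the basic recipe of combining genetic algorithms with neural networks: a population of tiny networks has its weights mutated and selected until one learns the toy XOR function. All function names and parameter values here are hypothetical choices for the example.

```python
import math
import random

# Toy problem: learn XOR with a 2-2-1 feedforward network whose 9 weights
# are evolved by a simple genetic algorithm (truncation selection + mutation).
XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def forward(w, x):
    """Run the 2-2-1 network encoded by the flat weight vector w."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return 1 / (1 + math.exp(-(w[6] * h0 + w[7] * h1 + w[8])))

def error(w):
    """Mean squared error over the four XOR cases (lower = fitter)."""
    return sum((forward(w, x) - t) ** 2 for x, t in XOR_CASES) / len(XOR_CASES)

def evolve(pop_size=60, generations=300, sigma=0.4, seed=1):
    """Evolve network weights: keep the best quarter, mutate them to refill."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        survivors = pop[: pop_size // 4]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            parent = rng.choice(survivors)
            children.append([g + rng.gauss(0, sigma) for g in parent])  # mutation
        pop = survivors + children
    return min(pop, key=error)

best = evolve()
```

Even this toy version needs thousands of network evaluations to solve a four-row truth table, which hints at why evolving anything resembling real intelligence is far beyond current computational resources.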

So I suggest: if you're worried about technologies that pose a real danger to humans, forget AI. Worry about the automobile. Worry about trains or aircraft. Worry about water-processing facilities and power stations. Worry about the decay of societies caused by excessive TV or video game playing. Worry about people doing harm to themselves. Worrying about AI is no different from worrying about how teleportation or antigravity will destroy humanity. It's seriously not an issue, and won't be an issue for a long, long, long, long time.

These are some of the things I would say if given a bit more time. In the land of TV, however, I only get about 15 seconds, so here's what I did say on ITV News recently.