Artificial intelligence: there are problems we need to address right now, the rest is science fiction

by Frederik Zuiderveen Borgesius, Marvin van Bekkum, and Djoerd Hiemstra

Everywhere you read warnings of ‘existential risks’ from artificial intelligence (AI). Some even warn that AI could wipe out humanity. The tech company OpenAI predicts the emergence of artificial general intelligence and superintelligence: future AI systems that will be more intelligent than humans. Some policymakers also fear this kind of scenario.

But things are not moving that fast. ‘Artificial general intelligence’ means an AI system that, like humans, can perform a variety of different tasks. There is no such general AI at present, and even if it does come one day, creating it will take a very long time.

Many AI systems are useful. Search engines, for example, are indispensable to internet users, and are a good example of specific AI. A specific AI system can perform one task well, such as pointing people to the right website. Modern spam filters, translation software, and speech recognition software also work well thanks to specific AI.

But these are still examples of specific AI – far removed from general AI, let alone ‘superintelligence’. Humans can learn new things. AI systems cannot. What computer scientists are getting better and better at is creating general large language models that can be used for all kinds of specific AI. The same language model can be used for translation software, spam filters, and search engines. Does this mean that such a language model has general intelligence? Could it develop consciousness? Absolutely not! There is therefore no real risk of a science fiction scenario in which an AI system wipes out humanity.

This focus on existential risks distracts us from the real risks at hand, which require our attention right now. Little remains of our privacy, for example. AI systems are trained using data, lots of data. That is why AI developers, mostly big tech companies, are collecting massive amounts of data. For instance, OpenAI presumably gobbled up large sections of the web to develop ChatGPT, including personal data. Incidentally, OpenAI is quite secretive about what data it uses.

Secondly, the use of AI can lead to unfair discrimination. For example, many facial recognition systems do not work well for people with darker skin tones. In the US, the police have repeatedly arrested the wrong person because a facial recognition system wrongly identified a dark-skinned man as a criminal.

Thirdly, AI systems consume incredible amounts of electricity. Training and using language models like GPT require a lot of computing power from large data centres, which guzzle energy.

Finally, the power of big tech companies is only growing with the use of AI systems. Developing AI systems costs a lot of money, so as the use of AI increases, we become even more dependent on big tech companies.

These risks are already here. Let’s focus on them, and not let ourselves be distracted by the ghost of sentient AI.

Published by Radboud Recharge.