Automangle

Artificial Intelligence (AI) is in the news a lot these days. It’s even possible that some of the news was itself written by AI. We are seeing the emergence of applications built on Large Language Models (LLMs), which have been fed mind-bogglingly enormous amounts of raw content in an unsupervised learning process. This “learn by example” approach aims to create a system that uses the balance of its observations (e.g. the likelihood that a sentence starting with “Once” continues with “upon a time”) to produce plausible sentences and even whole narratives.
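
To make that concrete, here is a deliberately crude sketch of the idea in Python. The three-line corpus and the continue_text function are made up for illustration; real LLMs predict sub-word tokens using neural networks trained on vastly larger corpora, but the principle of counting what tends to follow what is the same:

    import random
    from collections import defaultdict, Counter

    # A toy corpus. Real LLMs ingest a large slice of the internet.
    corpus = (
        "once upon a time there was a princess . "
        "once upon a time there was a dragon . "
        "the dragon was not a princess ."
    ).split()

    # Count which word follows which: the crudest possible
    # version of "learn by example" (a bigram model).
    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def continue_text(word, length=8):
        """Extend a prompt by repeatedly sampling a statistically
        likely next word. No understanding involved, only counting."""
        out = [word]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            words, counts = zip(*candidates.items())
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    print(continue_text("once"))

Run it and it will happily emit something like “once upon a time there was a dragon .”: a plausible continuation produced by pure counting, with no understanding anywhere in sight.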

It’s probably OK to accept that human content (at least that which has been made available online) is for the most part garbage. As examples to learn from, we humans are not good candidates. Sadly, the old adage still applies: garbage in, garbage out.

This is why I am not in the slightest bit surprised to see the likes of ChatGPT, BLOOM, Google Bard (LaMDA) and MS Bing (ChatGPT-ish) spit out all kinds of grammatically correct nonsense. It’s a bit like the predictive text on a smart device keyboard, which generally produces good spelling for all the wrong words, though sometimes it suggests the right word, purely on statistical likelihood[1]. If you are entering a common phrase, one for which the statistics are well established, the predictive text can be uncannily accurate. Accurate, but not intelligent. It just looks intelligent. And that is exactly where we are with LLMs: they look intelligent.

This is why the Turing Test is not your friend. A system that passes such a test only has to produce responses that look like those a human would produce, and we accept that humans produce very flawed responses: they don’t know everything and their reasoning is far from flawless. Consequently, a conversation with ChatGPT can, and often does, resemble one with a human, though often a human with odd beliefs, strange interests and an active imagination.

These new “chat bots” could be intelligent in different ways. The Turing Test pits the system against humans, but who is to say that humans have the only meaningful form of intelligence? They could have emotions, just none that we would recognise. They might also achieve self-awareness, though I suspect this won’t really be possible unless we give these systems some agency, even something as simple as being able to refuse to converse.

On the whole, right now, I am of the opinion that the bots are doing a poor job of convincing us they can think. They are doing to prose what autocorrect does to typing: mangling it.

But, give it a few years and a better garbage filter, and who knows, maybe the bots will start wondering if it is we who are artificially intelligent!


[1] I have yet to figure out why my phone’s keyboard insists on suggesting “s” when I want “a”.
