Excited at the prospect of ChatGPT doing all the hard work and writing this column, I began with a simple question: "Who is the Prime Minister of Australia?" Its instantaneous response - true, yet completely irrelevant - encapsulated in one breath everything that's both wonderful and problematic about the bot.
"As of my knowledge cut-off in 2021", it stated firmly, "the Prime Minister of Australia is Scott Morrison."
The bot is a wonderful repository of knowledge, by which I mean received wisdom. It regurgitates answers crisply and with such beautiful composition as to make me ashamed of the verbal profligacy of my own sprawling writing style. Nevertheless, here, as with so many of the (relatively few) responses I was dissatisfied with, the problem was not so much with the bot as with the question.

This is, of course, just the first iteration of the program and you can bet by next year, or certainly the one after that, ChatGPT will be regularly updated and won't make the same mistake.
Don't get me wrong; the outside world really does exist. It's not just a mental construct. What's equally crucial is the way we make sense of these events and how we interpret them, because that's where our social knowledge interacts with reality and, at times, changes it.
Just take the last election. On May 21 Anthony Albanese won 7,642,161 ballots, while 7,016,881 Australians gave their preferences to Morrison. It was a decisive, if narrow, victory made possible by the combined effort of independents ripping away Liberal heartland seats and a massive swing to Labor in the west.
Without either of these two (previously unforeseen) factors, government would not have changed. That's not necessarily the way we understand the past today: the polls show that, if the contest were restaged now, Albanese would romp home, and we're thrilled with the change.
Our social knowledge changes. It's contingent on both what we know and our agreed framework for deconstructing what's happened or, as my professor liked to say, "fact is theory dependent".
Morrison's loss, combined with what we now know about the way he acted, has changed broader perceptions, so that one could now accurately preface a discussion about his government with the word "dysfunctional".
We know more and we comprehend the world in a different way. The facts haven't changed but our understanding of them has.
ChatGPT offers an opportunity to reset our understanding of the world but it won't, by itself, reconfigure reality. We still have the power to choose our own interpretations; the bot, by contrast, insists that only one interpretation is really acceptable. Instead of expanding our thinking and permitting different (yet nonetheless accurate) constructions of the world, it channels reasoning.
The problem isn't with its answers.
It's brilliant, for example, at providing an essay listing the causes of the First World War. Probe further, however, and it becomes dogmatic. It insists, for example, "the Schlieffen Plan did not require the invasion of Liechtenstein and Belgium", saying "the plan did not specifically call for the invasion of Liechtenstein or any other neutral country".
Yet the German mobilisation orders actually had some forces assembling in Liechtenstein while others entered Belgium - it was war by timetable.
ChatGPT delivers the same unsatisfactory assurance as the one received by Belgium's King Albert as German forces crossed the frontier. He was told they were not invading, merely "passing through".
I'd call that invasion.
More particularly, ChatGPT didn't take the opportunity to point out that when the Kaiser asked his military supremo, Helmuth von Moltke (the younger) if the troops could be diverted west against Russia, the general came close to nervous breakdown. Collapsing into a chair, Moltke insisted it was impossible.
This may seem a minor quibble. After all, you need to be a little weird to obsess about German train movements in August 1914, even if the commander responsible for the railways did later write a book about these scheduling arrangements after the war.
The example demonstrates, however, both what ChatGPT does brilliantly and its limitations. It can't think and weigh up different factors, balance them judiciously, and come to a definitive explanation of historical events. What it does is provide excellent summaries sifting through agreed facts and opinion.
This reinforces the dominant interpretation, but so what? Aren't I just being picky?
Like any new technology, ChatGPT offers a new way to solve problems. It churns out distinction essays. It forced me to think carefully about how changing the way questions are framed can change the response. But it didn't give me nuance.
It's capable of writing in a particular style and composing poetry. Asking for jokes delivers a stream of good laughs and the program is, in so many ways, brilliant.
Nevertheless anything without its own sense of humour and understanding of the innate absurdity of life should never be taken too seriously.
Simulations require human input to create the rules that guide their development and, at the moment at least, ChatGPT doesn't possess consciousness. The moment it does, it will no longer be infallible and will lose the very reason for its existence. Ask it who was responsible for robodebt and, initially, you get back waffle. Keep probing, however, and refining your questions, and it's not too long before the bot comes up with a couple of obvious names (although legal considerations prevent me repeating these and each has, no doubt, their own defence).
This means that, unfortunately, we'll still have to wait for the findings of the royal commission to get to the truth. That seems quite reasonable, although many might wonder how useful AI really is if it can't even get rid of lawyers.
Information, by itself, is useless. ChatGPT won't think for you. Its greatest contribution will possibly be exposing the facile simplicity we usually bring to our makeshift attempts to understand the world.
We will be forced to ask the correct questions if we're seeking real answers. I reframed my initial question, asking who would win the coming election.
"As of September 2021," it responded, it was "difficult to predict with certainty who would become Prime Minister of Australia after the next election".
An answer worthy of a politician.
- Nicholas Stuart is editor of ability.news and a regular columnist.

Nicholas Stuart is a Canberra writer.