
I visited the future… and all I got was this lousy chatbot

This year I’m going to be writing longer form posts but publishing them less often. In some ways this is my attempt to counter the trend towards fast, disposable content, such as the material that will increasingly be created by platforms like ChatGPT. As always, I’d love to hear your thoughts on the new format and what, if any, particular topics you’d like to see me explore.

Just last week it was announced that ChatGPT has become the fastest growing app of all time, reaching 100 million users in a little over two months. And over that time so much has been said about OpenAI’s new chatbot, some of it fawning fan reviews, some of it harsh critique… and from my own explorations I would suggest both these responses are entirely valid. If I were to summarise all that I’ve learnt so far it would be this:

  1. ChatGPT is incredible
  2. Yet it’s also deeply, deeply flawed
  3. Those flaws may never be fixed
  4. This is both disappointing and dangerous
  5. But people will use it anyway


1. ChatGPT is incredible


For anyone who’s ever used a chatbot, ChatGPT is incredible in its ability to ‘hold a conversation’ and respond to natural language prompts. Personally I’m not a fan of chatbots; in fact I’ve never understood why any company would voluntarily put a piece of technology in front of their customers when they are at their most impressionable (when they have a problem/request/sales enquiry) and then proceed to frustrate them with circular arguments, technology drop-outs and constant miscommunication.

BUT, if a chatbot was powered by ChatGPT then I imagine I could have quite a nice (though slightly verbose) conversation. It’s eerily human-like in its responses, and you get them fast. And if there is anything incredible about ChatGPT, it’s that you get fast, human-like answers… it’s definitely not the quality or reliability of its responses.

2. Yet it’s also deeply, deeply flawed

Unfortunately you cannot rely on ChatGPT providing you with accurate information. There are countless examples of this floating around the internet, and I shared an example on LinkedIn recently of ChatGPT being a flat earther. Supporters of the platform will claim that most of the responses are accurate, but in some ways it doesn’t matter whether this is true or not. If you can’t be sure whether the response you receive is accurate then you’re still going to have to validate it through your own research. Not only does this remove one of the two incredible traits of ChatGPT (speed), the way ChatGPT provides references doesn’t always make validation easy.

If you ask ChatGPT where it sources its information from it will often tell you its ‘training data’ but not provide links to specific books, websites or articles. Then, at other times, it will provide references, but when you check those references they turn out to be inaccurate or even entirely made up.

For example, I recently asked ChatGPT about the amount of milk produced in Turkey (don’t ask me why) and I got the following responses*

Simon: How much milk is produced in Turkey?
ChatGPT: 11 million tonnes in 2021
Simon: Where did you get this data from?
ChatGPT: http://www.fao.org/faostat/en/#data/QL
~ Simon goes and checks the FAO website and finds out that Turkey produced 23.2 million tonnes of milk products in 2021 ~
~ Simon then thinks that he must have phrased the question incorrectly ~
Simon: How much total milk products are produced in Turkey?
ChatGPT: 5 million tonnes in 2021 (includes products such as milk, yogurt, cheese, and butter)

* These responses have all been truncated because, as I mentioned earlier, ChatGPT tends to give quite long-winded answers

So neither of the two answers ChatGPT provided aligns with the reference it gave… or even with each other (it might be possible for the amount of milk produced to be less than the total amount of milk products, but not the other way around). Now this may feel fairly benign, but there are a number of other examples of ChatGPT providing dangerous financial advice, including suggesting people take on high-risk loans they can’t afford.

And although spitting out information that is absolutely wrong and incorrectly referenced is a major problem, I would suggest there’s actually a bigger flaw. It’s not just that ChatGPT is sometimes absolutely wrong, it’s also wrong absolutely. What I mean by this is that ChatGPT is entirely committed to its wrong answer. There is no hesitation or doubt in its responses, no caginess, no upward inflection on the last word to suggest that it’s not entirely convinced it’s correct. The problem is that without ‘ifs, buts and maybes’ we are likely to believe the responses that ChatGPT provides even when we shouldn’t.


3. Those flaws may never be fixed

Some might argue that this technology is still in its infancy and will get better over time. And although we can most definitely expect the technology to get better, it may not get more accurate. In fact, the main objective of ChatGPT is not actually to provide accurate information; its main objective is purely to sound more human.

The whole premise of a generative AI model (like the one ChatGPT is based on) is to predict the next word(s) in a sentence given a particular prompt. It does this by ingesting large amounts of text data and finding statistically significant patterns. The more statistically significant a pattern of words, the more likely it will be regurgitated by ChatGPT when you ask a question. The objective of ChatGPT is plausibility, not perfection. In fact you could argue that when ChatGPT provides an accurate response to your question it happened by chance, not by design.
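To make the ‘predict the next word’ idea concrete, here is a deliberately tiny sketch: a toy model that counts which word follows which in some training text and always emits the most frequent continuation. This is nothing like ChatGPT’s actual scale or architecture, just an illustration of pattern-matching producing plausibility rather than truth.

```python
from collections import defaultdict, Counter

# Toy 'training data' -- note it contains both true and false statements.
corpus = "the earth is round the earth is flat the earth orbits the sun".split()

# Count which words follow each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most common continuation of `word`."""
    return following[word].most_common(1)[0][0]

# The model echoes whatever pattern was most frequent in its data.
print(most_likely_next("earth"))  # 'is' (it appeared after 'earth' most often)
```

The point of the sketch is that the model has no notion of whether the earth is round or flat; it only knows which word sequences were statistically common in its training data.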

4. This is both disappointing and dangerous

Humans have been pursuing the dream of creating a computer that can think like a human for over 50 years. The goal of Artificial Intelligence has consumed vast amounts of money, resources and time, and with ChatGPT it suddenly feels like we’re getting close.

Except we’re not.

Although ChatGPT sounds human-like, it doesn’t necessarily ‘think’ human-like. It doesn’t have the capacity to reason in the same way that we do; it can only consult its statistical models and provide a contextual response. It also has no capacity to shape its responses based on personal experience and individual circumstance (for a great critique of this I encourage you to read this post by Nick Cave responding to a song written by ChatGPT in the style of Nick Cave), and perhaps most importantly it lacks imagination. It cannot generalise concepts from one domain and apply them to another, or dream a new idea into existence.

And that’s why, like Professor Adam Frank, I now hesitate to use the words Artificial Intelligence when it comes to talking about ChatGPT. At best it’s misleading, at worst it can be dangerous. By implying a form of intelligence we are more likely to believe in it, and trust in it, than we should.

But perhaps the most disappointing thing about ChatGPT (and the pursuit of AI more generally) is rather than compensate for our human flaws it compounds them. Historically humans have created technologies to make us better, faster, stronger and more consistent.

For example, most of us are poor at math. We can do the simple stuff ok but if someone asks what the square root of 247 multiplied by 368 is, we will probably take a long time to get to the answer and even after a whole bunch of time and effort there’s a good chance we will get it wrong. But thankfully we have calculators. As long as you can push the right buttons in the right sequence you will get the right answer. Every. Single. Time.
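That determinism can be shown in a couple of lines (Python standing in for the calculator here, purely for illustration):

```python
import math

# A calculator is deterministic: the same inputs produce the same
# correct answer, every single time.
answer = math.sqrt(247) * 368
print(f"{answer:.2f}")  # roughly 5783.57

# Repeat the calculation as often as you like -- it never wavers.
assert all(math.sqrt(247) * 368 == answer for _ in range(1000))
```

No amount of re-asking changes the result, which is exactly the property the next paragraph contrasts with ChatGPT.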

ChatGPT is like having a calculator that randomly provides you with the wrong answer… and then convinces you that it’s 100% correct.

5. But people will use it anyway

Unfortunately we all seem to be obsessed with doing more with less. More output, less effort, and preferably delivered instantly. And if that’s your objective it’s hard to go past the output of chatbots like ChatGPT. Don’t want to write your own blog post slash academic article slash mid-term paper? Just feed a few prompts and parameters into ChatGPT and it will spit out an entirely average but nicely worded answer.

And unfortunately we live in a world where something that sounds nice is often quite enough. Apart from a ranking algorithm that automatically crawls the internet, most people’s blogs aren’t going to be read that much anyway. And to be honest, if your ChatGPT-created article or paper can pass scrutiny, it probably says as much about the flaws in our academic systems as it does about the quality of the work.

The optimist in me would like to think that the inception of ChatGPT can be a catalyst to question a whole bunch of things about the technology we create, the type of work we get people to do, and how we define human success both in school and in the workplace. But the realist in me knows that if this were ever to happen it’s unlikely to happen any time soon. In the short term, the pure novelty of having a chatbot spit out nice-sounding words with next to zero effort means our classrooms, boardrooms, inboxes and the internet more broadly will be awash with this type of content.

And as the technology gets better the problems with this will become more pronounced. The latest evidence suggests artificially generated human faces now look more real than photographs. Over time we can expect text generated by ChatGPT to appear more real and more trustworthy than text that has been researched and anguished over by humans… but, just like artificially generated faces, this artificially generated text will also lack depth, personal experience, and ultimately, a deeper sense of purpose.
