Muse or Monster

Simon Shaw


In this month’s blog, one of Mojave’s founders, Simon Shaw, wonders whether we’ll find a new muse or a new monster in artificial intelligence applications like ChatGPT, and asks it some questions to find out. Pure poetry, utter drivel or somewhere in between?
What will we get, I wonder? Fouler language, bigger financial frauds, faker (and flakier) news, worse disinformation? Or better biomedicine, more accurate climate predictions, smarter education, safer work?
Artificial intelligence (AI) is the answer to both. Which one comes down the track – or how much of either – depends on how we use it.
“We are very nearly at a stage where we are using all the internet knowledge available in the world to make programs,” says the Director of the new Centre for Human-Inspired Artificial Intelligence (CHIA) at Cambridge.

“The question is, what do we do then?”

Anna Korhonen is clear about what that is: regulation, a responsible ecosystem, inclusivity. The ethical challenges. But these challenges are “more difficult in some ways” than the technical ones, says her CHIA colleague Fumiya Iida who leads research into bio-inspired robotics. “We don’t really know the legal implications, the societal implications, and those things we just need to try and find out together”.


There’s a whole lot of ‘we’ in all this – six in the blog so far.

So I asked one of ‘us’ how s/he would try to meet the ethical challenges of using AI.
I got a list of 10 bullet points:
  1. ethical frameworks and guidelines,
  2. responsible AI development,
  3. diversity and inclusion,
  4. data governance and privacy – blah, blah.

You might have realised by now that I actually asked this question to thousands of ‘us’, maybe tens of thousands, using the AI platform GPT-4, “a type of language model that uses deep learning to generate human-like, conversational text”.
The list I got back was ‘scraped’ from the internet, so it’s a composite, not one person’s view. Fair enough, but it’s horrible to read, like a policy statement put out by open-necks in corporate PR. Empty and meaningless. Human-like maybe, just, but not conversational, not at all.

So I asked another question. What are the ethical challenges of AI? Another list, just as empty and meaningless as the first.

Why? Something to do with syntax for a start. Passive voice = I’m not there in person, I don’t take personal responsibility. Repetitious paragraph structure: statement followed by implication = I don’t like varying my routine. Lots of auxiliary verbs like ‘can be’ or ‘could be’ = Don’t expect me to speak plainly.
Just like a robot in fact. Or a bot. For a so-called ‘natural language’ system, GPT-4 sounds like it’s at the unnatural end of language to my ears.

So I asked a few more questions, this time about how OpenAI, the maker of GPT-4, deals with the ethical challenge of disinformation. It took quite a bit longer for the system to answer those. Did they need more thought? Or is thought a category error in thinking about AI? Don’t know, but it was more horrible PR-speak anyway, so I’m sorry to say I got rude: ‘Why are these answers so boring?’, I asked.

“I apologize if my previous responses appeared boring,” it said. It felt a bit bad (or is feeling another category error?). Then I read on and decided to save my emotions anyway: “While I strive to be helpful and engaging, my responses are limited to the information and tone I have been trained on.”

OK, you’ve been trained. No wonder you sound like a posh parrot with a stick somewhere … but I wanted to give it another chance. ‘Can you phrase your answer to my question about disinformation in a more interesting and vivid way?’, I asked. I got this: “OpenAI, the superhero of the AI realm, is donning its cape to combat the nefarious villain known as disinformation!”
Donning? Nefarious? Capes? 
The UK’s poet laureate Simon Armitage was on a BBC programme recently where the presenter read out a poem written by GPT-4 in his style. He was upset. It was awful, he said. Songwriter Nick Cave was blunter, calling a song ‘written in the style of Nick Cave’ “bullshit” and “a grotesque mockery of what it is to be human”.
On these showings, it doesn’t look as if AI will help us much to answer the ethical challenge of AI. Or replace poets and musicians. At least, not in a human way.

“We are still far from a human-level AI,” says Korhonen.

Yep. It’s back to the real ‘us’, you and me, and thousands or millions of others, trying and finding things out together in a thousand or a million different contexts. Some working through the ethics, others working on a robot that ‘cares for’ people at a human level or a natural language processing system that ‘speaks to’ people like humans do.

That’s fine. I don’t hate GPT-4. In fact I quite like how it ‘responds’ when I get bad-tempered. And it’s teaching me to ask better questions. Here’s my string (with, for the curious, a scripted version after the list):

1. What are the ethical challenges of AI? – too general
2. What are the ethical challenges of using ChatGPT? – too defensive
3. What is OpenAI doing to filter out disinformation? – too corporate PR
4. Why are these answers so boring? – sad!
5. Could you rephrase your answer to my question 'What is OpenAI doing to filter out disinformation?' in a more interesting and vivid way? – too awful!
6. Who in OpenAI is working on preventing discrimination? – slightly helpful but still too general
7. How do OpenAI's data scientists identify and mitigate biases in training data? – getting there, a bit more specific
8. Which tools do they use? – better, some information I can use.
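
(For the technically minded: a string like this doesn’t have to be typed into a chat window one question at a time. Here’s a minimal sketch of how it could be scripted, assuming OpenAI’s published Python SDK and an API key; the model name and the questions, borrowed from my string, are illustrative rather than a recipe.)

# A sketch of running a question-string against GPT-4 with OpenAI's Python SDK.
# Assumes the OPENAI_API_KEY environment variable is set; "gpt-4" is illustrative.
from openai import OpenAI

client = OpenAI()

questions = [
    "Who in OpenAI is working on preventing discrimination?",
    "How do OpenAI's data scientists identify and mitigate biases in training data?",
    "Which tools do they use?",
]

messages = []  # keep the whole exchange so each question builds on the last
for q in questions:
    messages.append({"role": "user", "content": q})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {q}\nA: {answer}\n")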

‘We’ (I) got there at last, some of the way. By Question 8 I felt that GPT-4 was starting to give me something useful. Something I might expect to get from querying a knowledgeable human who’s not guarding the information or turning it into gobbledegook. That’s progress.

In me, it has to be said, not the AI. If I’d asked the last question first, I would probably have got the same information quicker. The AI is training me.

Yes, to get a useful answer. No, not to expect too much. The training it gets comes from another bunch of humans in … I don’t know where. When I asked, it said: “As an AI language model, I don't have access to real-time information or specific details about OpenAI's internal operations beyond my September 2021 knowledge cutoff.”

Fair enough. Even if it knew (‘had access to’ – ugh, please stop talking like a machine!), it might not have told me for other reasons, including ethical ones like disclosure boundaries.

Anyway, wherever they work, they are human, they’ve got an invention that appears to do awesome things, and they’re trying to find out together how to make it work.

To make what work, exactly? In this particular application of AI, one answer could be “to meet the demand for automated information-sharing” – words originally used to describe the internet itself.

Which makes me want to find out how to make it work too …
