Muse or Monster
Simon Shaw
In this month’s blog, one of Mojave’s founders, Simon Shaw, wonders if we’ll find a new muse or a new monster in artificial intelligence applications like ChatGPT, and asks it some questions to find out. Pure poetry, utter drivel or somewhere in between?
What will we get, I wonder? Fouler language, bigger financial frauds, faker (and flakier) news, worse disinformation? Or better biomedicine, more accurate climate predictions, smarter education, safer work?
Artificial intelligence (AI) is the answer to both. Which one comes down the track – or how much of either – depends on how we use it.
“We are very nearly at a stage where we are using all the internet knowledge available in the world to make programs,” says the Director of a new Centre for Human-Inspired Artificial Intelligence (CHIA) at Cambridge.
“The question is, what do we do then?”
Anna Korhonen is clear about what that is: regulation, a responsible ecosystem, inclusivity. The ethical challenges. But these challenges are “more difficult in some ways” than the technical ones, says her CHIA colleague Fumiya Iida who leads research into bio-inspired robotics. “We don’t really know the legal implications, the societal implications, and those things we just need to try and find out together”.
There’s a whole lot of ‘we’ in all this – six in the blog so far.
So I asked one of ‘us’ how s/he would try to meet the ethical challenges of using AI.
I got a list of 10 bullet points:
- ethical frameworks and guidelines,
- responsible AI development,
- diversity and inclusion,
- data governance and privacy – blah, blah.
You might have realised by now that I actually asked this question to thousands of ‘us’, maybe tens of thousands, using the AI platform GPT4, “a type of language model that uses deep learning to generate human-like, conversational text”.
The list I got back was ‘scraped’ from the internet, so it’s a composite, not one person’s view. Fair enough, but it’s horrible to read, like a policy statement put out by open-necks in corporate PR. Empty and meaningless. Human-like maybe, just, but not conversational, not at all.
So I asked another question. What are the ethical challenges of AI? Another list, just as empty and meaningless as the first.
Why? Something to do with syntax for a start. Passive voice = I’m not there in person, I don’t take personal responsibility. Repetitious paragraph structure: statement followed by implication = I don’t like varying my routine. Lots of auxiliary verbs like ‘can be’ or ‘could be’ = Don’t expect me to speak plainly.
Just like a robot in fact. Or a bot. For a so-called ‘natural language’ system GPT4 sounds like it’s at the unnatural end of language to my ears.
So I asked a few more questions, this time about how OpenAI, the makers of GPT4, deal with the ethical challenge of disinformation. It took quite a bit longer for the system to answer those. Did they need more thought? Or is thought a category error in thinking about AI? Don’t know, but it was more horrible PR-speak anyway, so I’m sorry to say I got rude: ‘Why are these answers so boring?’ I asked.
“I apologize if my previous responses appeared boring,” it said. It felt a bit bad (or is feeling another category error?). Then I read on and decided to save my emotions anyway: “While I strive to be helpful and engaging, my responses are limited to the information and tone I have been trained on.”
OK, you’ve been trained. No wonder you sound like a posh parrot with a stick somewhere … but I wanted to give it another chance. ‘Can you phrase your answer to my question about disinformation in a more interesting and vivid way?’, I asked. I got this: “OpenAI, the superhero of the AI realm, is donning its cape to combat the nefarious villain known as disinformation!”
Donning? Nefarious? Capes?
England’s poet laureate Simon Armitage was on a BBC programme recently where the presenter read out a poem written by GPT4 in his style. He was upset. It was awful, he said. Songwriter Nick Cave was blunter, calling a song ‘written in the style of Nick Cave’ “bullshit” and “a grotesque mockery of what it is to be human”.
On these showings it doesn’t look as if AI will help us much to answer the ethical challenge of AI. Or replace poets and musicians. At least, not in a human way.
“We are still far from a human level AI,” says Korhonen.
Yep. It’s back to the real ‘us’, you and me, and thousands or millions of others, trying and finding things out together in a thousand or a million different contexts. Some working through the ethics, others working on a robot that ‘cares for’ people at a human level or a natural language processing system that ‘speaks to’ people like humans do.
That’s fine. I don’t hate GPT4. In fact I quite like how it ‘responds’ when I get bad-tempered. And it’s teaching me to ask better questions. Here’s my string:
1. What are the ethical challenges of AI? – too general
2. What are the ethical challenges of using ChatGPT? – too defensive
3. What is OpenAI doing to filter out disinformation? – too corporate PR
4. Why are these answers so boring? – sad!
5. Could you rephrase your answer to my question 'What is OpenAI doing to filter out disinformation?' in a more interesting and vivid way? – too awful!
6. Who in OpenAI is working on preventing discrimination? – slightly helpful but still too general
7. How do OpenAI's data scientists identify and mitigate biases in training data? – getting there, a bit more specific
8. Which tools do they use? – better, some information I can use.
‘We’ (I) got there at last, some of the way. By Question 8 I felt that GPT4 was starting to give me something useful. Something I might expect to get from querying a knowledgeable human who’s not guarding the information or turning it into gobbledegook. That’s progress.
The progress is in me, it has to be said, not the AI. If I’d asked the last question first, I would probably have got the same information quicker. The AI is training me.
Yes, to get a useful answer. No, not to expect too much. The training it gets is by another bunch of humans in … I don’t know where. When I asked it said: “As an AI language model, I don't have access to real-time information or specific details about OpenAI's internal operations beyond my September 2021 knowledge cutoff.”
Fair enough. Even if it knew (‘had access to’ – ugh, please stop talking like a machine!), it might not have told me for other reasons, including ethical ones like disclosure boundaries.
Anyway, wherever they work they are human, they’ve got an invention that appears to do awesome things, and they’re trying to find out together how to make it work.
To make what work, exactly? In this particular application of AI, one answer could be “to meet the demand for automated information-sharing” – words originally used to describe the internet itself.
Which makes me want to find out how to make it work too …
Please contact me!
Please get in touch with more information about the Mojave Strategy Expedition Programme, and show me how my organisation can emerge stronger.
Karen Goldring FCIPD
Karen had an extensive and hugely successful career in strategic HR before becoming a leadership coach. From rapid-growth SMEs to large global corporates, she has gained immensely valuable insights and a thorough understanding of what it takes to make people and their teams high-performing. She has particular experience working within fast-paced tech companies.
A qualified Insights Practitioner – one of the leadership models we highly rate here at Mojave for its pragmatism and relevance to lived experience – Karen offers teams and their members the ability to really understand themselves and each other. From the informed starting point that results from the Insights process, Karen will work with you to identify the objectives that matter most to you and your organisation, and ensure you get there.
Click below to learn more about Karen and whether she’s the right kind of coach for you.
Piers Mummery
He has built, grown, sold, bought, funded and capitalised a whole range of businesses, and he’s learned loads in the process. One of the things he’s learned is that he loves helping others do the same.
As he says on his website, there are no magic answers, but applying a few basic fundamentals well, consistently and in the right way for you and your business is key. And it’s that acknowledgement that the right way will vary for each of us that means we are proud to work with Piers as a coach who is totally aligned with our own values and philosophy.
Click below to learn more about Piers and whether he’s the right kind of coach for you.
Rachel Smith
Formerly an architect, Rachel began to realise that she could process and articulate her own ideas far more effectively through visuals. What began as a way to be more effective in her own career soon evolved into a process that could help others excel in theirs.
As an accredited executive coach and visual thinking teacher, Rachel’s incredible skills enable her to draw the thoughts and ideas her clients articulate. Being able to ‘see what you say’ as you work through each session can be incredibly powerful in enabling you to identify new connections or blockers that can otherwise remain hidden.
Click below to learn more about Rachel and whether she’s the right kind of coach for you.
Notify me when the next series of Leading People online workshop dates are released.