ChatGPT’s Not Funny

Artificial intelligence expert Mhairi Aitken on whether bots can write comedy

Can comedy be automated? That’s the question I’ve been forced to confront in the past few weeks.

Urgh, not another piece about ChatGPT, I hear you moan. I’m afraid so, because despite all the column inches and radio airtime already dedicated to OpenAI’s conversational chatbot, there is still more to say, especially for the world of comedy.

This is where my main interests collide. I work in artificial intelligence (AI) as an ethics fellow at The Alan Turing Institute (the UK’s national institute for AI and data science), and I have been doing comedy about AI for a few years now. So it worries me that many comedians are now looking into the possibilities of using AI for comedy.

In case you have somehow managed to avoid hearing about ChatGPT, here’s the gist: ChatGPT is like an overconfident newbie stand-up act who has studiously watched hours of stand-up and learnt to mimic the mannerisms and language of their favourite comedian, but lacks any of the charisma or content.

That’s pretty much exactly what ChatGPT does: it has been trained on a huge dataset of human language so that it can recognise patterns and mimic human speech or writing, and it can do that in pretty much any style, from a sonnet to a convincing-looking legal document.
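
(For the technically minded, this is roughly what that looks like from a developer’s side: a short, hypothetical Python sketch, assuming the OpenAI client library is installed and an API key is set; the model name and prompt are illustrative only, not anything described in this piece.)

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Ask the model for output in a particular style; the prompt is the only creative input
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name, purely for illustration
    messages=[
        {"role": "user", "content": "Write a Shakespearean sonnet about a broken toaster."},
    ],
)

print(response.choices[0].message.content)  # prints text that imitates the requested style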

Since its release at the end of November, it has been used by a staggering 100 million unique users and tasked with everything from answering customer-service queries and providing mental health advice (that’s a big no-no!) to starting conversations on Tinder; it has even featured in a judge’s ruling on a court case. Seriously, it’s everywhere. And comedy is no exception.

In comedy, ChatGPT might be used as an improv partner to bounce ideas off, as a starting point for developing scripts, or maybe even to write whole stand-up routines. Harmless enough? Actually, not really.

Until recently, creative industries have mostly seemed to be safe from the risks of automation by AI. Creativity requires humans, right? Right?!

Well, yes. But there are plenty of people now trying to take shortcuts to creative output, using ChatGPT, or other generative AI models, to produce ‘creative’ or funny content. ChatGPT has been used to write poetry, scripts, song lyrics... It’s everywhere and it’s increasingly hard to avoid. There was even a never-ending, AI-generated online Seinfeld-esque show which, while its comedic value is contested, aimed to recreate the essence of Seinfeld without human performers, writers or directors. Ultimately it was halted when its mimicry led it to make thoughtless transphobic jokes, but overall the show was fairly flat, with a dystopian, end-of-days feel to the viewing experience.

Before you are tempted to try it out, there are a few things you really should know about ChatGPT. Firstly, this isn’t the final product. When you use ChatGPT, you are being used to test the system, providing it with more training data and feedback that can be used to develop future models, as well as future profitable products and services. Remember: if you’re not paying for the product, you are the product.

But that’s far from the most exploitative aspect of how ChatGPT was developed. While the human labour is often invisible, AI relies on people to label data and train the model to produce appropriate outputs. In the case of ChatGPT, this included Kenyan workers paid less than two US dollars an hour to identify and label extreme content (including graphic descriptions of bestiality and child sexual abuse) so that ChatGPT could learn what not to say. Protecting users from harmful content by exposing vulnerable, underpaid workers to the absolute worst of it is a pretty brutal way of teaching manners.

And then there’s the issue of what ChatGPT is, or isn’t, able to produce. Despite being so-called ‘artificial intelligence’, it’s not intelligent. It can’t think, and it doesn’t understand the words it produces. All it does is reproduce patterns from the human language it was trained on, and in many cases that also means reproducing biases and prejudices. So it isn’t going to push boundaries in comedy; all it can do is recreate and mimic familiar territory. It’s very much the bad, over-confident stand-up trying to be the next big thing by copying the last big thing without saying anything new or different.

What we have seen so far is just the tip of the iceberg. As I write this, Google has just announced the release of Bard, its competitor to ChatGPT, and there are many other similar systems on the way.

Perhaps it’s inevitable that we will see generative AI used more and more to create content for webpages, blogs and social media. It will increasingly be used by publishers and producers to create content or develop scripts quickly without paying for the time and ideas of human writers. Ultimately this will lead to more and more attempts to create comedy without a human mind, which will inevitably lead to comedy without any soul or heart.

That’s why I am making this plea to be cautious about using these tools as shortcuts in creative processes. Please join me in laughing about AI – not with AI.

• Mhairi Aitken is an Ethics Fellow at The Alan Turing Institute (the UK’s national institute for AI and data science) and was named in the 2023 list of 100 Brilliant Women in AI Ethics. She uses comedy as a means to spark conversations about the role of AI in society and is a regular performer at the Cabaret of Dangerous Ideas at the Edinburgh Festival Fringe.

Published: 10 Feb 2023
