[00:00:00] Hello and welcome to the Creative Shoofly Podcast. I'm Thomas Beutel. In this episode, I ask myself, what do I do with all these AI tools that are becoming available? How should I use these tools in my creative process? And how do I maintain integrity as I do?
These questions might be on your mind too, and I hope that I shed at least a little bit of light on the topic.
I am recording this in March of 2023, and there's been an explosion of AI announcements. All of a sudden, AI is everywhere. Everyone seems to be talking about it, and it feels almost like it's being jammed down our throats.
Yeah, we've been using Siri and Alexa and Hey Google for years now, and we know that recommendation algorithms have been limiting what we get to see on social platforms and in places like Netflix.[00:01:00] But while those services are a form of artificial intelligence, we were quickly jaded about how mediocre and dull they were.
But the latest AIs come in the form of image generators like Stable Diffusion and chatbots like OpenAI's ChatGPT. And they seem light-years ahead of those earlier tools. And scarier too.
The AI image generators in particular caused quite a stir in the art community when they were introduced in 2021. Not only were these AI tools creating an image in minutes that would take an artist hours or days to create, but the tools were trained on images found on the internet, including images that artists had posted themselves. That felt a lot like theft.
But these are clearly creative tools. And they have me thinking about creativity and the ethics involved. Can AI help me in any of my creative processes? And if they can, how should I be [00:02:00] using AI?
The recent hubbub really started when OpenAI launched ChatGPT in November of 2022. It was based on GPT-3.5, and people took notice of its impressive capabilities. ChatGPT gained a hundred million users in just a few months. To put that into perspective, Gmail took five years to get to a hundred million users.
In just the last few weeks, there have been a number of follow-on announcements. ChatGPT was upgraded to GPT-4. Google announced their chatbot, called Bard, and they're also integrating AI into their Workspace tools.
Microsoft released Bing Chat as an alternative way to find information with Bing. They also announced that they would be including an AI tool called Copilot in their office suite, meaning that you'll be able to have AI assist you when you're creating content in Word documents [00:03:00] and PowerPoint slides, and also when you're sending emails using Outlook.
Many other companies announced that they're integrating AI generative tools into their existing products. Canva announced that they're adding AI so that you can use text to describe a design, and it'll create a Facebook cover page, a YouTube profile picture, YouTube intro and outros, Instagram post and story and so forth, all from one text description.
GitHub announced their Copilot X, which helps coders code faster. Ubisoft announced a dialogue generator for game development, and MetaHuman showed facial motion capture using just an iPhone, which allows them to animate characters with very realistic facial expressions.
With all these announcements, it does feel like AI is taking over at this point. Artists are asking themselves a lot of questions.
How can I use these tools ethically and [00:04:00] morally?
How will it affect the marketplace for art and creative products?
Will it make me a dull person?
These are all important questions. So, let's first talk about what AI really is, and then we can see how we can adapt any of it into our creative process.
So, let's first get something out of the way. Artificial intelligence is not conscious intelligence. Some people describe it as an enhanced version of auto complete. I like that metaphor. All of these systems are trained on a large amount of data, and they create texts and images based on what they've been trained on. I like to think of AI as a statistics engine with some randomness thrown in.
ChatGPT has been trained on 300 billion words from around the internet. From that training, it is able to guess what word comes next for a given prompt. Bing Chat is based on the same engine, [00:05:00] and Google's Bard is based on a similar large language model called LaMDA.
Another example is the DeepL Translator. It translates from one language to another based on millions of translations it has been trained on.
Other AI tools are similarly trained. Image generators like Midjourney, DALL-E, and Stable Diffusion are trained on large numbers of images that have been captioned with text. They're able to generate new images based on text prompts.
Transcription tools like Descript and Otter.ai were trained on millions of voice samples, allowing them to transcribe audio to text. Descript, PlayHT, and others have text-to-speech capabilities that generate very realistic voices, again by being trained on millions of examples.
All of these tools have been under development for many years, but they have now gotten to the point where they're quite good. And technology companies like [00:06:00] Canva and Adobe are rushing to incorporate them into the creativity products we use every day.
So, these tools are now here and ready for us to use.
The thing that has me thinking about the impact of AI is what Microsoft and Google recently announced. They both are incorporating chat-based AI tools into Microsoft 365 and Google Workspace. These tools are targeted at the workplace, and that means that millions of people will soon be getting very comfortable prompting these AIs for answers about their businesses, and also to generate text for emails and presentations.
AI generated content will become ubiquitous in a very short amount of time, and since it is in a business context, it will most likely not be marked as being generated. All of this will be widely accepted because the use of AI will increase productivity. Some estimates say that world GDP will increase by 7% over the next 10 [00:07:00] years. That's a massive amount!
But what happens if I use AI to create an image that is based on someone else's intellectual property? How do I as an artist give credit or even payment if the AI tool doesn't even have the capability to tell me what the source was?
I recently read a great Guardian interview with Jaron Lanier. He's a futurist, a technologist, an artist, and he's considered the godfather of virtual reality. He's been in the industry a long time.
His take is that the danger isn't that AI will destroy us; it's that it'll drive us insane. In the interview, Lanier says that he doesn't even like the term artificial intelligence, objecting to the idea that the technology is actually intelligent.
Just because a chatbot can pull information from millions of sources and express ideas in a language we can understand, that doesn't [00:08:00] make it better than us.
Lanier's mission is to champion the human over the digital. In his book, Ten Arguments for Deleting Your Social Media Accounts Right Now, he argues that the internet is making us dull and uncreative. His worry is that we use technology as an agent of manipulation, and that we become mutually unintelligible because we are slaves to the algorithms that corral our attention into silos.
So, he says that we have a responsibility to act morally and humanely. In spite of his view on what the internet has become, he believes that AI tools like ChatGPT and Google's Bard could provide hope for the digital world. A good AI can open us to ideas and knowledge that we weren't seeking before. A well-designed chatbot would spark both curiosity and play.
It could also keep track of the sources of information that it was trained on, and if the chatbot relied on something you created, [00:09:00] you could get paid for it. In a system where there is a shared sense of responsibility and liability, everything works better. He calls it data dignity.
In his book, You Are Not A Gadget, he said that the point of technology was to make the world more creative, expressive, empathetic, and interesting. He reminds us to remind ourselves of our humanity.
What Jaron says is heartening to me. Like him, I do believe that the goal of technology is to help us be more expressive and more empathetic. I think we can use AI tools without compromising our humanity or our integrity.
I've already been using Descript for three years now, taking advantage of its AI-based transcription to make audio editing much easier for me. Descript allows me to edit the text, and it edits the audio waveform for me based on those text edits. It's so much faster than fiddling with the waveform [00:10:00] directly.
When I record my voice, I speak with a lot of ums and ahs. Descript automatically highlights all of those ums and ahs so that I can get rid of them with one click.
Descript even has a voice generation feature called Overdub. I've trained Descript with my own voice, and it allows me to replace words and phrases where I misspoke, or where I want to say something with more clarity and don't want to have to set up the microphone again to rerecord it. Descript generates the new audio in my voice.
When it comes to writing, however, like when I write a podcast script, I don't use AI to generate any of the words. These are all my own words.
That said, I often will ask ChatGPT for a word or phrase. I'll say, “Hey, what's a more precise word for this concept?” For example, I recently asked, “What is the Inc. Corp or Limited part of a company name called?” ChatGPT informed me that it was called the legal [00:11:00] designation.
It's the sort of thing I once used Google for, and then I would need to follow a link. But ChatGPT gives me the answer right away, and it helps my writing be clearer.
Another thing I do when I start a podcast script is to have ChatGPT ask me questions. I'll say, “I'm writing about this topic. What questions would you ask me if you were to interview me?” I find this a great way to jumpstart writing my first draft. I take those questions and either type out the answers or I just record the answers, and I use Descript's AI to transcribe them into text.
It's a real time saver. It gets me to my first draft much more quickly than by starting with a blank page.
Another way that I use ChatGPT is to critique my writing. I'll say, “Here's an idea that I'm trying to express. How well did I do?” ChatGPT at that point will usually [00:12:00] say that I did a great job. So that's not all that useful, but then I ask if I missed anything, and it will usually come up with a point or two that I hadn't thought of.
For me, that's a godsend. It's like having a friend look over your work and make suggestions. I feel this has improved my writing quite a lot.
Again, I'm not using AI to create any of the text, but it is making suggestions that I can then think about and write in my own words.
As far as AI image generation, I don't find myself very interested in it. I did once use an AI image generator to help me with a scene that I was drawing in a comic because I needed to have a reference to draw to.
So, I said, “Show me someone speaking at a podium with the audience in front of them,” and it created something good enough for me to then create my comic. The image I created was all mine, but the AI provided a reference.
I think this idea of [00:13:00] references is a good way to take advantage of AI tools in the creative process.
I do want to make a distinction about generative art that is created by writing an algorithm, where I create the algorithm myself to make the art. That's not AI, and the algorithm is not trained on other artists' work. It's me designing a program to make and shape the image, and I enjoy that immensely, because it's very challenging and creative to think of an algorithm and figure out how to make the computer produce the image that I'm imagining in my mind.
If you're interested in seeing examples of non-AI generative art, look for the hashtag #processing or hashtag #p5js. I'll put links to them in the show notes.
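To make that distinction concrete, here's a minimal sketch of what hand-written generative art looks like. This is not code from the episode, and it's in Python rather than Processing or p5.js; it's just an illustrative example of an algorithm an artist writes and tunes entirely by hand, with no training data involved. It shapes a grayscale image out of overlapping sine waves and writes it as a plain-text PGM file.

```python
import math

def render(width=320, height=240, filename="waves.pgm"):
    """Generate an interference pattern of sine waves as a grayscale image."""
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            # Three overlapping wave fields; tweaking these constants
            # "shapes" the image -- that tweaking is the creative part.
            v = (math.sin(x * 0.07)
                 + math.sin(y * 0.05)
                 + math.sin((x + y) * 0.03))
            shade = int((v + 3) / 6 * 255)  # map [-3, 3] to [0, 255]
            row.append(shade)
        rows.append(row)
    # Write a plain-text PGM (portable graymap): header, then pixel rows.
    with open(filename, "w") as f:
        f.write(f"P2\n{width} {height}\n255\n")
        for row in rows:
            f.write(" ".join(map(str, row)) + "\n")
    return rows
```

The output opens in most image viewers, and because the algorithm is deterministic, the same constants always produce the same image; changing a single frequency changes the whole composition, which is exactly the kind of exploratory control the episode describes.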
I think going forward it is important as an artist to have full disclosure. In the past, I've never disclosed what technologies I use to make a certain piece of art. In most cases, it was [00:14:00] obvious in context.
I mostly use Procreate on the iPad and Pixelmator on the Mac for digital art. I use Processing for generative art. I use Ableton Live for my music and so forth.
But going forward, I plan to fully disclose what AI tools I use in making my art, including when I don't use any AI at all. At the end of my podcast, I'll disclose what AI tools I used, if any, for that episode.
I think that from now on, so many people are going to be using AI in their daily lives that it's going to be hard to distinguish whether someone used AI in the work that they did.
To me, that represents a real opportunity. Because the world's going to be flooded with content that was created by AI. And people are going to be seeking out art that was not created with AI. I think it's actually going to make that sort of art, the art created by an actual human, [00:15:00] more valuable.
I think artists have an opportunity to use AI technology to become more creative, more expressive, and more empathetic. But artists need to push for control over how their works are being used, and they need to insist that any AI they use fully disclose its sources.
And I do think an artist needs to be very clear about whether an art piece was created by artificial intelligence, or whether they simply used AI tools to help them create that artwork. I want to know that an art piece was created by a person, and I think the art world is pretty smart and will make distinctions fairly quickly about whether a piece of art generated by AI has any value.
I'm using AI in my podcast production process. But ultimately, these are my words. These are my thoughts, and I'm using AI just simply to help me be a little bit more productive, a little bit [00:16:00] more creative, and a little bit more expressive, and that's where I find a comfortable balance in using AI in my creative process.
How will you be using AI in the stuff that you make and what steps are you going to take to make sure that you maintain integrity in the work that you create?
I myself think that this is a really fascinating topic, in particular because we are now at the moment in time when all of these AI tools are being released into the world and we are just starting to figure out how to use them and how to live with them.
So, in preparing this episode, I consulted a number of different articles and YouTube videos on this topic, and I've linked to them in the show notes. So please refer to the show [00:17:00] notes and take a look at these. I think you'll find them very interesting.
And so, onto my disclosures. All the words in this episode are mine, except for some paraphrasing that I did from the Guardian article about Jaron Lanier.
I did all of the post-production using the Descript tool, and that does use AI for transcribing my voice. And I also used ChatGPT to ask me some questions about this topic and that helped me to get to my first draft.
The cover art that I'm using for this episode was created by an algorithm that I wrote specifically for this episode, and I assembled the cover art using Pixelmator on the Mac.