In part one, we acknowledged the natural feelings of excitement and trepidation that creative communities have towards Generative AI. We considered whether those fears are well founded, our shared baselines for creativity and what our role might be in keeping the world’s content fresh and colourful. In part two, we look at GenAI ‘out in the wild’. That is, how it’s being naturalised into our familiar spaces and some potentially unexpected – and positive – outcomes of its deployment in creative industries.
Indeed, just as every technological revolution makes certain jobs superfluous, it creates new ones, and ‘Prompt Engineer’ seems to be the position du jour. It’s important to establish here that this role is a broad church: at the high end of the salary range (some positions paying $100,000+ a year), the work is more that of a tester and evaluator of the strengths and weaknesses of AI models, while the low end seems to be simply ‘create huge volumes of content to brief using ChatGPT’ (a quick scout around Upwork puts this end of the service spectrum at around $50 a day). With Microsoft, Adobe, Shutterstock and other big hitters welcoming Generative AI into their professional toolboxes, learning to prompt effectively will be an essential skill. Author and prompt wizard Guy Parsons gives a crash course on prompts for art and ‘faux-tography’ using DALL-E on the Microsoft website, advising, “it's best to imagine your image already exists in some kind of online gallery, and then write the kind of short caption you might imagine appearing with it.”
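For readers who like to see the idea made concrete, Parsons' caption-style advice can be sketched as a tiny helper that assembles a prompt the way a gallery caption reads: medium first, then subject, then descriptive details and style. This is purely illustrative — the function name and fields are our own invention, not any official API:

```python
def caption_prompt(subject, medium=None, details=None, style=None):
    """Build a caption-style image prompt: the short caption you might
    imagine appearing beside the image in an online gallery."""
    caption = subject
    if details:
        caption = f"{caption}, {details}"
    if medium:
        caption = f"{medium} of {caption}"
    if style:
        caption = f"{caption}, {style}"
    return caption

# Example: a 'faux-tography' prompt in the gallery-caption style
print(caption_prompt(
    "a lighthouse at dusk",
    medium="35mm photograph",
    details="storm clouds gathering",
    style="moody lighting, shallow depth of field",
))
# 35mm photograph of a lighthouse at dusk, storm clouds gathering, moody lighting, shallow depth of field
```

The point is less the code than the habit it encodes: describing the finished image as if it already existed, rather than issuing instructions to the model.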
Remember, when Photoshop was released in 1990, some speculated that it would destroy photography forever. This is laughable now, but the arrival of Generative AI in the creative space certainly feels similar. Today, Adobe’s Generative Fill (which is still in Beta and cannot be used commercially) brings the same tools found in Midjourney, DALL-E, Stable Diffusion and others into the Photoshop platform – used almost exclusively by millions of creatives around the world. In a simple text bar, you type descriptive prompts to automatically add, extend, or remove content, with the AI trained on Adobe’s database of stock images. And the weirdest thing? It’s early days, but people seem to really enjoy using it. Could this be because it offers the heady blend of a clearly defined data source and the potential to cleanly automate tiresome jobs without eliminating the truly creative ones? Has Adobe struck gold with digital creators?