Who made this?
When you use generative AI, one of the questions you might ask early on is "who made this?" It's a good question. Sometimes it feels like the AI is doing all the heavy lifting. But there are a few ways you can make sure you're the one in the lead, and that there isn't any confusion. Or at least not as much.
What makes the difference between actual creation and simple generation? You do.
If your work is awesome, it's because of you. If it sucks, that's also on you. Even if you hit some AI lottery where every single thing you get is pure gold and you don't need to touch it, you still had the idea and fed it into whatever AI tool you're using.
First off, it's your idea. Or at least, I really hope it is. Let's take this post, for example. I had an idea that I wanted to write something about how the line can blur sometimes, especially when you're first getting used to working with AI tools. So I went into Midjourney and started prompting for cyborgs, human and robot pairs, things like that. I fiddled with my prompt until I got what I wanted: images that fit the (new! yay!) look of the site and the tone of the article I have in mind. I didn't use AI to write this or even proofread it (which, looking back, may have been a mistake) because I didn't feel like it could add any value to my point of view, as this is more about a human perspective on design work.
There is a lot of discussion about using generative art, so here are my feelings: generating images is no different than looking for the perfect stock image. Oh wait, that's a lie. It's a ton easier. I have always HATED looking through stock. Generative AI makes my life so much simpler because I can take what's in my head and get something similar to what I was looking for almost instantly. As a bonus, a lot of the time there are amazingly happy accidents that inspire me further.
For client projects, I use AI-generated images as placeholders for shoots, the same way I've always used stock images in the past. When clients fall in love with those placeholders, it's a lot more fun to cast and shoot images that resemble your AI-generated mood boards than to have to basically reshoot a stock photo.
To finish up the images for this post, after I got some people I liked, I started playing with colours and overlays, and decided that I needed some texture. Years ago, I kept files of paint blobs and scratches that I'd collected. Some I made, but most came from EPS sets, fonts, or stock. For these images today, I went back to Midjourney and prompted for the splotches I wanted, mostly because I'm a bit lazy and it was the tool already in my hand.
So coming up with the concept for the images, feeding it into the AI, choosing images, cutting them out and resizing them, duplicating them in the layout (or not): that's me. Moving things around, making some other shapes, overlaying them. Picking typefaces. Even deciding what words go on the images. Again, that's me. I'm telling a story in these images (or at least I hope I am), and I'm carefully thinking about each pixel as I go along.
I also decided that I needed one more image, so I went and generated, then designed, the header image up there. Going back to Midjourney, I used the prompts for both people and paint that I'd had the best luck with today, hit regenerate, and pretty quickly got something I liked for both. Then I laid them out, and boom. Done.