Tutorial: How to create consistent characters in Midjourney

We will use portraits as an example, but this technique will work on cartoons, products, illustrations, you name it. Keep in mind this is just one way to do it.
Let’s start with what we know. Midjourney is an AI diffusion model: it creates images from noise, guided by human-written prompts (descriptions) of what you want the AI to produce. When it comes to fine-tuning or training your own model, Midjourney is not like Stable Diffusion. You cannot train a model on Midjourney and use it for specific needs. With Stable Diffusion, creating consistent characters is a lot simpler. But we’re here to see what can be done with Midjourney. In Midjourney we can use something called “image prompting”. This is a technique where we use images as inspiration in our prompts. It’s not training Midjourney per se; it works more like few-shot prompting in ChatGPT: you can hint at a subject or a style, and Midjourney uses that as inspiration when generating the output.
For image prompting to work, you need images that live on the internet, ideally on Discord’s CDN. To get images onto the CDN, just upload them in Discord, grab the URLs, and paste them into your prompt.
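As a rough sketch, an image prompt simply places one or more image URLs before the text part of the prompt. The URL below is a placeholder, not a real upload, and the descriptive text is my own example:

```
/imagine prompt: https://cdn.discordapp.com/attachments/.../portrait.png a photorealistic studio portrait of a woman, soft natural light --v 5
```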
In this tutorial, we’ll use a human portrait as an example. When it came to recreating photos of real humans in Midjourney, I had to take a step back and look at what was already out there. I came across how illustrators work when they come up with characters, and especially faces. They create sheets where multiple angles of the face become the blueprint for the character; that way they have reference art to look at when drawing out new scenarios. This is what I think Midjourney is doing behind the scenes with image prompts and reference keeping. Training Stable Diffusion models on faces is very similar: you want good-quality training data. So if you are training something on Stable Diffusion, this technique will probably improve your models too.
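You can ask for a character sheet directly in the text prompt. Here is a hedged sketch of the kind of wording that tends to work; the exact phrasing and parameters are illustrative, not from the original post:

```
/imagine prompt: character sheet of a young woman with short dark hair, multiple angles of the same face, front view, side profile, three-quarter view, neutral expression, plain background --v 5
```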
I remember writing something two years ago about Metahumans, and it all becomes clearer. I need to get Midjourney to create a portrait for me, but not just that: I need to take that portrait and get Midjourney to create multiple angles of the same portrait. This is the strategy I’m applying, so let’s dive into that.
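To make the strategy concrete, here is a sketch of the two steps. The CDN URL is a placeholder for wherever your first portrait ends up once it lives in Discord, and the subject description is made up:

```
Step 1: generate the base portrait
/imagine prompt: studio portrait photo of a man in his 30s, short beard, soft window light --v 5

Step 2: image-prompt that portrait back in and ask for more angles
/imagine prompt: https://cdn.discordapp.com/attachments/.../base-portrait.png character sheet, multiple angles of the same face, front view, side profile --v 5
```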
It’s entirely possible to use this technique to get extremely consistent characters, modify them, and place them in different environments. You can explore using or not using the --seed number. I feel this technique is similar to few-shot prompting with GPT-3, where the images you use for inspiration act as a precursor to what you want, and your text prompt or multi-prompt acts as the instructions. In few-shot prompting you give GPT-3 (or any LLM) a set of examples and then your instructions, a bit like pre-training or fine-tuning, but done inside the prompt. This technique is similar, just for Midjourney instead.
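For example, once you have a character sheet, a prompt like the sketch below reuses it as the “examples” while the text describes the new scene. The seed value and image weight are illustrative: --seed fixes the starting noise so reruns are more repeatable, and --iw raises how heavily the image prompt counts relative to the text.

```
/imagine prompt: https://cdn.discordapp.com/attachments/.../character-sheet.png the same woman reading in a cafe, cinematic lighting --seed 1234 --iw 1.5 --v 5
```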
If you want consistent outputs based on one model or yourself, you are probably better off, for now, using a custom model for Stable Diffusion. Everything is moving so fast, and new ways and techniques for making consistent characters or items are popping up almost daily. I’m expecting some sort of easier way to emerge in the future, maybe a lightweight pre-trained model or something else entirely.