AI Image Generation Using Stable Diffusion
I have improved the quality of the images that I am using for training the AI, but so far, I cannot produce an acceptable embedding. I will continue to work on this.
What I have been able to do, though, is use a combination of ControlNet and CharTurner to effectively “pregenerate” the images that I need without training the AI at all.
What I am doing is creating a single sheet containing all of the poses I will need for a given character, and then using it with ControlNet’s OpenPose model and CharTurner (plus a specific, highly detailed prompt) to generate the images I need for that character.
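
For reference, here is a minimal sketch of that kind of workflow using the Hugging Face diffusers library rather than a web UI. The ControlNet and base model ids are real, but the embedding file, pose-sheet path, trigger token, and prompt are placeholders you would swap for your own:

```python
# Sketch: pose sheet + OpenPose ControlNet + CharTurner embedding.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# OpenPose ControlNet steers generation to match the poses on the sheet.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# CharTurner is a textual inversion embedding; once loaded, its trigger
# token in the prompt encourages consistent turnaround views.
# File path and token are placeholders.
pipe.load_textual_inversion("charturner.pt", token="charturner")

# The pose sheet, assumed here to already be an OpenPose skeleton map;
# a raw reference image would first need a pose detector run over it
# (e.g. controlnet_aux's OpenposeDetector).
pose_sheet = load_image("pose_sheet.png")

prompt = (
    "charturner, character turnaround of a red-haired knight, "
    "full body, consistent outfit, white background, highly detailed"
)
image = pipe(prompt, image=pose_sheet, num_inference_steps=30).images[0]
image.save("character_sheet.png")
```

The key idea is that the pose sheet pins down the layout while the embedding and prompt pin down the character, so every pose comes out of one consistent generation.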
I can then upscale these and perform whatever post-processing they need.
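
The upscaling step can also be scripted. This is a hedged sketch using the Stable Diffusion x4 upscaler pipeline from diffusers (the model id is real; the file names continue the placeholders from the sketch above):

```python
# Sketch: 4x latent upscale of the generated character sheet.
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("character_sheet.png").convert("RGB")

# A short prompt helps the upscaler preserve the intended subject.
upscaled = upscaler(
    prompt="detailed character turnaround", image=low_res
).images[0]
upscaled.save("character_sheet_4x.png")
```

From there, post-processing is ordinary image work: slicing the sheet into individual sprites, cleaning backgrounds, and so on.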
This isn’t ideal, and of course it only works for a small number of specific images, but for a game that only requires a handful of poses per character, it could be enough.
