AI Image Generation Using Stable Diffusion
I have been spending more time on AI-generated images.
My current goal is to easily and reliably produce consistent images of the same character in different poses (possibly in different clothes, etc.).
I think that the correct way of doing this is to use an embedding, or “Textual Inversion”.
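For reference, here is a minimal sketch of how a trained embedding would be used at inference time with Hugging Face's diffusers library. The embedding file `sd-character.bin` and the `<my-character>` token are hypothetical placeholders for whatever a training run would actually produce:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Load a learned Textual Inversion embedding; the file and token
# names here are placeholders.
pipe.load_textual_inversion("./sd-character.bin", token="<my-character>")

# The new token can now be used like any other word in a prompt.
image = pipe(
    "a portrait of <my-character> sitting in a cafe, reading a book"
).images[0]
image.save("character.png")
```

The appeal of this approach is that once the embedding is trained, the character becomes a single reusable token that can be dropped into arbitrary prompts.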
The problem is generating the images for the training set. I want to do this entirely in Stable Diffusion (I could, for example, generate a 3D model first and then render out images from it, but I'd rather not), so generating enough images for a training set is difficult. However, if I can figure this out, I should have all that I need to use Stable Diffusion for visual novel-style games.
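One way I could attack the training-set problem: generate a large batch of candidates from a single detailed prompt while sweeping the seed, then hand-pick the images that actually look like the same character. A rough sketch, where the prompt and the candidate count are just placeholders:

```python
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A deliberately detailed prompt; the hope is that pinning down
# distinctive features keeps the faces similar across seeds.
prompt = (
    "portrait of a young woman with short silver hair and green eyes, "
    "detailed face, visual novel character art"
)

os.makedirs("candidates", exist_ok=True)

# Fixed seeds make every candidate reproducible, so any image that
# turns out well can be regenerated or refined later.
for seed in range(50):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"candidates/{seed:03d}.png")
```

The manual curation step is the weak point, but even a dozen consistent images might be enough to train an embedding from.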