Previously, it was necessary to equip all inventory items before they could be combined with the item being crafted, which was tedious, especially for complex crafted objects with many pieces.
I have now added the ability to craft items directly from the main inventory, without having to equip them first.
I also fixed some bugs with the inventory system while working on the crafting.
The equipment slots are still needed for “using” items, but not for crafting.
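A rough sketch of the new crafting check, in illustrative Python rather than the actual game code (the item and recipe names here are made up):

```python
# Illustrative sketch only, not the actual game code: crafting now pulls
# components from the whole inventory instead of requiring them to be equipped.
from collections import Counter

def can_craft(recipe: dict[str, int], inventory: list[str]) -> bool:
    # recipe maps component name -> quantity required
    have = Counter(inventory)
    return all(have[item] >= qty for item, qty in recipe.items())

def craft(recipe: dict[str, int], result: str, inventory: list[str]) -> list[str]:
    if not can_craft(recipe, inventory):
        return inventory  # nothing happens if components are missing
    remaining = Counter(inventory)
    remaining.subtract(recipe)
    return list(remaining.elements()) + [result]

# Example: a torch needs a stick and two pieces of cloth, none of which
# have to be equipped first.
print(craft({"stick": 1, "cloth": 2}, "torch", ["stick", "cloth", "cloth", "knife"]))
```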
I have also increased the time scale of the game, so that an entire day now takes 3 hours instead of 4. The four-hour days felt too long, so I am going to experiment with the shorter scale and see if it works better.
I also posted this guide to the subreddit /r/StableDiffusion under the username “spiritusastrum”.
Generating the same character in multiple poses is one of the most common questions for beginners in AI generated art.
Most solutions involve LoRAs, DreamBooth, and so on. These are the preferred approaches, but they are complex and require good-quality training sets.
It is possible to generate enough training images using just SD, but this is difficult.
After some research and trial and error, I have discovered a very simple way to create a unique character entirely within Stable Diffusion and then change that character's pose, while keeping most of their likeness (clothing, hair, face, etc.) intact.
I am using “DivineAnimeMix” to generate the images used in this guide.
The guide is aimed mainly at simple artwork, such as the kind you would see in visual novels. With complex art or close-ups of a character's face, this technique may not work as well.
First, using txt2img, create an extremely specific prompt and use it to produce a single image of a character.
This image should be the “main” character image, used as the character's default image in the visual novel, etc.
It is important to make sure that this image matches the prompt as much as possible.
For example, if the colour of the character's jacket is different from the prompt, fix it now; otherwise the jacket will be the wrong colour when the pose changes. This can be fixed later, but it is easier to fix it now.
Also, it can be very difficult (or almost impossible) to get things like tattoos and makeup to match properly when the pose changes, so it can be desirable to avoid creating characters with them. If a character does have tattoos, makeup, and so on, it is important to specify where they are and what they look like.
So, instead of just “With tattoos”, say “With a tattoo on their left arm”, and so on.
The final point with prompt generation is to add a pose or stance with a “weight” modifier, such as:
(Standing: 5.5).
This is used to change the pose, without changing the prompt.
This is the image that I generated at this stage of the process:
With the default image and prompt generated, it is now possible to change the pose.
This is done by just changing the weighted prompt, so:
(Standing: 5.5) could become (Riding a Motorcycle: 5.5), or anything else.
Of course, CFG scale, Restore Faces, and so on can all be used to improve the quality of the images; however, the prompt should not be changed, apart from the pose.
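To make that rule concrete, here is a small illustrative snippet (the prompt text is an example, not the exact prompt behind these images) showing the base prompt staying fixed while only the weighted pose term changes:

```python
# Illustrative only: the prompt text here is an example, not the exact prompt
# used for the images in this guide. The point is that everything stays fixed
# except the weighted pose term at the end.
BASE_PROMPT = (
    "1girl, short hair, red jacket with fur collar, gloves, "
    "confident expression, full body, simple background"
)

def with_pose(pose: str, weight: float = 5.5) -> str:
    return f"{BASE_PROMPT}, ({pose}: {weight})"

print(with_pose("Standing"))             # the "main" character image
print(with_pose("Riding a Motorcycle"))  # same prompt, different pose
```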
What will now happen is that many similar, but not identical, characters will be generated.
This is the image that I generated for this step:
Notice that the character is similar, but not identical, to the first image.
Once an image is produced that, firstly, matches the desired pose and, secondly, has relatively few differences from the “main” image, it is time for the next step.
Download the second image and open it in Photoshop or any other basic image editor (even MS Paint would work fine for this).
Now, with the main image as a guide, roughly mark any areas which the AI has gotten “wrong”.
Is the jacket the wrong colour? Use the eyedropper tool to pick the colour from the first image and paint over the jacket in the second image.
Is the character wearing long pants in the second image, and shorts in the first?
Again, use the eyedropper to paint a skin-coloured texture over the long pants.
Is the hair too long, or too short? Are there tattoos where there shouldn’t be?
Do the same thing.
If something is missing from the second image, simply select and copy it from the first image.
Tattoos, a belt, a style of glove, even an entire face can be very crudely copied, pasted, and scaled onto the second image.
This will result in something that looks awful, but that is perfectly fine. The goal is simply to add visual cues telling the AI which parts of the image to regenerate; the AI will do the rest.
This is the image that I created here:
Notice that I have painted over her right arm (her sleeves should be short) and her right hand (she should be wearing gloves). I have also copied and pasted the face, right sleeve, and fur collar from the first image.
When this is done, upload the modified image to Stable Diffusion, this time to img2img.
Use the same prompt that was used to generate the second image (not the main image!), and set the “denoising strength” as low as possible. The idea is to regenerate JUST the parts that you painted over in Photoshop, not the rest of the image.
You can use inpainting for this (painting over only the parts of the image that you want to regenerate and leaving the rest), but I found that img2img works as well or better (I seemed to end up with bad-quality faces more often with inpainting).
You may need to generate several images, but you should end up with a character that looks MUCH more like the main character that you initially generated, but with the pose of the second character.
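For anyone who would rather script this step than use a UI, a rough equivalent using the diffusers library might look like the sketch below; the model name, strength, and guidance values are placeholders rather than the exact settings used for these images:

```python
# Rough diffusers equivalent of the img2img step; the model name and values
# are placeholders, not the exact settings used for the images in this guide.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("edited_pose.png").convert("RGB")

result = pipe(
    prompt="<the same prompt used to generate the second image>",
    image=init_image,
    strength=0.3,          # keep this low: only regenerate the painted-over areas
    guidance_scale=7.0,
    num_images_per_prompt=4,
).images

for i, img in enumerate(result):
    img.save(f"fixed_pose_{i}.png")
```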
If there are any minor issues remaining, simply take the best result, open it in Photoshop, and go through the process again. You can repeat this as many times as necessary.
This is one of my final images:
Notice that the right arm is still wrong (she has veins instead of tattoos, and a bracelet instead of gloves), but these issues could be fixed in Photoshop. The major details match the original character, while the pose is different.
Here is another image:
Again, the likeness is not perfect, but it is close enough that I think most people would regard this character as the “same” as the first one.
Here is a side-by-side comparison between the initial character and the final image:
I think that, for a single pass, this is a very good result.
If I did another pass in Photoshop, I could fix the issues with the right glove and the red stitching on her pants, as well as modify the tattoos to help them match up more closely.
I have tried this with “realistic” and anime-style checkpoints, and it works very well with both. I suspect it would work even better with illustrated or manga-specific checkpoints, because there would generally be less detail involved, making differences between the images harder to detect.
This solution, of course, does NOT solve the problem of creating the “same character in a different pose”; you are generating a new character that just happens to look similar. However, this process seems to work well enough for my purposes at least, and it may work for others as well.
I have been working with Stable Diffusion to generate AI art for some time, but the main problem that I am having is with generating the same character in multiple poses.
This is, apparently, a common problem with AI generated art in general.
It is possible to generate similar characters using a strong enough prompt, but not identical characters.
I have been working on a simple workflow to address this problem, and the initial results are looking promising.
I have made several additions, improvements, and bug fixes to the transport system.
I am also working on improving the random generator for the NPCs. At the moment, there are a small number of preset NPC types that are chosen at random.
This system is limited, so my goal is to extend this to generate a large number of randomised characters.
I have some art assets that I can use, and I already have a system that can randomly assemble a character. I will need to spend some time improving it, though, to make sure that clothing types are compatible, etc.
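As a rough sketch of the direction (illustrative Python, not the actual game code, with made-up part names), the idea is to pick parts at random but only from sets that share a compatibility tag:

```python
import random

# Illustrative only: assemble a random NPC from parts, rejecting incompatible
# clothing combinations. Part names and tags are made up for the example.
TORSOS = [("uniform_jacket", {"military"}), ("t_shirt", {"civilian"}), ("lab_coat", {"civilian", "scientist"})]
LEGS   = [("uniform_trousers", {"military"}), ("jeans", {"civilian"}), ("shorts", {"civilian"})]
HEADS  = [("helmet", {"military"}), ("cap", {"civilian"}), ("none", {"military", "civilian", "scientist"})]

def random_npc():
    # Pick a torso first, then only allow legs/headgear that share a tag with it.
    torso, tags = random.choice(TORSOS)
    legs = random.choice([p for p, t in LEGS if t & tags])
    head = random.choice([p for p, t in HEADS if t & tags])
    return {"torso": torso, "legs": legs, "head": head}

print(random_npc())
```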
The enemy AI is relatively basic, but can handle detecting, chasing, and attacking the player, as well as getting into combat with other factions and native wildlife.
AI enemies can also surrender if they get too badly hurt, or run away.
AI can spawn in squads, and will continue to move and attack with their squad, in addition to spawning and fighting alone.
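In outline, the behaviour amounts to a small state machine driven by distance and health; the sketch below is a simplified Python illustration, not the real implementation, and the thresholds are made up:

```python
from enum import Enum, auto

# Simplified illustration of the enemy AI logic, not the real implementation;
# the state names and thresholds are made up for the example.
class AIState(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()
    FLEE = auto()
    SURRENDER = auto()

def decide_state(distance_to_target: float, health: float,
                 detect_range: float = 30.0, attack_range: float = 5.0) -> AIState:
    if health < 0.15:
        return AIState.SURRENDER   # too badly hurt to keep fighting
    if health < 0.30:
        return AIState.FLEE
    if distance_to_target <= attack_range:
        return AIState.ATTACK
    if distance_to_target <= detect_range:
        return AIState.CHASE
    return AIState.PATROL

print(decide_state(distance_to_target=12.0, health=0.8))  # -> AIState.CHASE
```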
The next goal is to look at adding addressable assets to my project. This would allow me to load the custom content (uniforms, clothing, weapons, etc) without storing a reference in memory.
This is extremely important for performance reasons, since storing a reference to all art objects in memory would be a huge waste of RAM.
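The underlying idea, sketched here in Python purely as an illustration (Unity's Addressables API itself is different), is to keep only lightweight string keys in memory and load the actual asset data on demand:

```python
# Concept sketch of "load by address": keep only string keys in memory and
# load the heavy asset data on demand, caching it once loaded.
# This is an illustration of the idea, not the Unity Addressables API.
_loaded: dict[str, str] = {}

def load_asset(address: str) -> str:
    if address not in _loaded:
        print(f"loading {address} from disk...")   # stands in for the real asset load
        _loaded[address] = f"<asset data for {address}>"
    return _loaded[address]

# NPCs only store the address (e.g. "uniforms/winter_coat"), so uniforms and
# weapons that never appear in a scene never take up RAM.
coat = load_asset("uniforms/winter_coat")
```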
The player intercept code is now basically working.
Players can join up regardless of distance and velocity, and the system stores the position information correctly on the server.
There are still some additions and tests that I want to run on this, but it basically works.
Orbital mechanics also works for all players; however, the position and velocity values are not properly synced. Since I am using Update() rather than FixedUpdate(), different framerates will effectively change the speeds that ships move at.
I will eventually switch to FixedUpdate(), but I am having a few issues with this at the moment.
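As a toy illustration of the problem (not the game code), stepping the same two seconds of simulated time per rendered frame gives different positions at different frame rates:

```python
# Toy illustration of why stepping physics in a variable-rate Update() makes the
# result depend on frame rate: explicit Euler integration of a body falling under
# gravity, run at two different frame rates for the same 2 seconds of game time.
def simulate(fps: int, seconds: float = 2.0, g: float = -9.81) -> float:
    dt = 1.0 / fps
    pos, vel = 0.0, 0.0
    for _ in range(int(seconds * fps)):
        vel += g * dt
        pos += vel * dt
    return pos

print(simulate(30))    # ~ -19.95
print(simulate(144))   # ~ -19.69  -> same elapsed game time, different position
```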
I have improved upon the orbital mechanics concept, and I now have a system that is quite functional.
Large ships can now be pulled under the force of gravity, and I have also added a GUI that allows players to enter various types of orbit, including circular and geostationary.
This system also works persistently, so the player's position is saved correctly to the server as they orbit.
I have also tested the orbit system on other planets (at long distances from the origin) and it works.
I still have some work to do with this system, but it is close to being working.
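For reference, the circular and geostationary orbits mentioned above boil down to two standard formulas; the quick sanity check below uses Earth-like placeholder values, not the game's own constants:

```python
# Back-of-the-envelope check of the two orbit types the GUI supports, using
# Earth-like placeholder values (the in-game constants and units will differ).
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # planet mass, kg (Earth here)
day = 86164.0          # planet's rotation period, s (sidereal day)

def circular_orbit_speed(r: float) -> float:
    """Speed needed for a circular orbit of radius r (from the planet's centre)."""
    return math.sqrt(G * M / r)

def geostationary_radius() -> float:
    """Radius at which the orbital period equals the planet's rotation period."""
    return (G * M * day**2 / (4 * math.pi**2)) ** (1 / 3)

r_geo = geostationary_radius()
print(r_geo / 1000)                  # ~42,164 km for Earth
print(circular_orbit_speed(r_geo))   # ~3,075 m/s
```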
I have done more debugging and testing, and so far, the project still seems to be going quite well.
I have uncovered some bugs which seem to be related to the server (particularly database loading of characters) and the AI, but they appear to be minor.
The next goal is to continue the testing and debugging, and to identify any key areas that need improvement.