Recently, when uploading videos to the Steam Community, I noticed that I was getting an error (on Chrome, although all browsers and devices seem to be affected) that stated:
“This page isn’t working,
steamcommunity.com is currently unable to handle this request,
HTTP ERROR 500”
Looking into this, I discovered this thread on the Steam forums:
I have decided to re-enable comments on PGD on a trial basis.
I was getting a lot of spam, which is why I disabled them.
I am using a moderation queue with fairly strict anti-spam settings, so I am confident that I will be able to allow interaction with posts while still keeping spam to a minimum.
The PGD forum (which will mainly be for technical issues, bug reports, etc.) is located here:
I have improved the quality of the images that I am using for training the AI, but so far, I cannot produce an acceptable embedding. I will continue to work on this.
What I have been able to do, though, is use a combination of ControlNet and CharTurner to effectively “pregenerate” the images that I need without necessarily training the AI.
What I am doing is creating an image with all of the poses that I will need for a given character on it, and then using that with ControlNet OpenPose and CharTurner (and a specific, highly detailed prompt) to generate the images that I will need for each character.
I can then upscale these, and perform post processing on them as needed.
This isn’t ideal, and of course it only works for a small number of specific images, but for a game where only a small number of different poses is required, this could be enough.
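As a rough illustration of this workflow, here is a minimal sketch using the Hugging Face diffusers library. It is not my actual setup: the model IDs, file names, and prompt wording are all assumptions, and CharTurner is assumed to be available as a textual-inversion embedding file.

```python
# Sketch of the pose-sheet "pregeneration" workflow: one input image holding
# all required poses is run through ControlNet OpenPose, with CharTurner and
# a fixed, detailed prompt keeping the character consistent across poses.

def build_prompt(character: str) -> str:
    """The same highly detailed prompt is reused for every generation, so
    each run describes the character identically."""
    return ("charturnerv2, character turnaround, multiple views of the "
            f"same character, {character}, highly detailed, full body")

def generate_pose_sheet(pose_sheet_path: str, character: str):
    """Generate one sheet of consistent character poses (GPU required)."""
    # Heavy imports are kept local so build_prompt() works without them.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet, torch_dtype=torch.float16).to("cuda")
    # CharTurner loaded as a textual-inversion embedding (file name assumed).
    pipe.load_textual_inversion("charturner_v2.pt", token="charturnerv2")

    pose_sheet = load_image(pose_sheet_path)  # image containing all poses
    return pipe(build_prompt(character), image=pose_sheet,
                num_inference_steps=30).images[0]

# Usage (requires the models to be downloaded):
#   sheet = generate_pose_sheet("poses.png", "red-haired knight, silver armour")
#   sheet.save("character_sheet.png")
```

The upscaling and post-processing mentioned above would then happen on the saved sheet as a separate step.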
I have used several techniques to try to train an AI to generate consistent characters.
I have achieved some success with this.
The most common technique seems to be to generate a number of images, using either a third-party program or Stable Diffusion itself, use those images as a training set to train an embedding, and then use that embedding in text prompts.
I was able to achieve some success with this; however, it wasn’t quite enough to do what I needed it to do. I suspect that I need better training images.
I can generate better training images using tools like CharTurner and ControlNet, and this, again, does seem to produce decent results, but there are still issues.
I intend to try a few more techniques, as the current results are still not quite good enough for my purposes.
My current goal is to produce consistent images of the same character, in different poses (possibly in different clothes, etc), easily and reliably.
I think that the correct way of doing this is to use an embedding, or “Textual Inversion”.
The problem is generating the images for the training set. I want to do this entirely in Stable Diffusion (rather than, for example, generating a 3D model first and rendering out images from it), and so generating sufficient images for a training set is difficult. However, if I can figure this out, I should have all that I need to use Stable Diffusion for visual-novel-style games.
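For reference, the Hugging Face diffusers project ships an example textual-inversion training script; an invocation along these lines (all paths, model names, and tokens here are placeholders, not taken from my setup) trains an embedding from a folder of character images:

```shell
accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --train_data_dir="./character_images" \
  --learnable_property="object" \
  --placeholder_token="<my-character>" \
  --initializer_token="person" \
  --max_train_steps=3000 \
  --output_dir="./embedding_out"
```

The resulting embedding is then referenced in prompts via the placeholder token (e.g. “a photo of <my-character> standing in a forest”).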
I have been experimenting with Stable Diffusion recently, and I have been using it to generate photo-realistic images of various characters, etc.
The software is incredibly powerful, and can easily create very highly detailed images.
It cannot be used for 3D models or texture sets, but for a visual-novel-style game it could be invaluable. Not only could it generate backgrounds and scenes, it could generate characters too.
The only problem is that it is quite difficult to generate the same character consistently across multiple images; each image is unique.
This is a problem that I am working on solving now; if I can figure this out, I could create some simple concept games using this software.
I recently discovered that it is actually possible to sponsor certain Unicode characters, and also that the “heart” of my logo is in fact a Unicode character. It is U+2727, the Four-Pointed Star:
I have sponsored this character on the official Unicode site HERE.
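As a quick aside, the codepoint and its official name can be confirmed from Python’s standard library:

```python
# U+2727 is the codepoint in question; unicodedata gives its official name.
import unicodedata

star = "\u2727"
print(star, unicodedata.name(star))  # ✧ WHITE FOUR POINTED STAR
```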
I also got an official tweet from the Unicode account:
While researching technologies for my virtual world project, I looked into realistic physics libraries. These would allow me to further explore dynamic content by letting players, for example, construct objects, buildings, or simple machines from discrete elements in the world, using only the physics model to “make them work”. I would not need to use prefabricated objects, object combinations, or scripted logic and mechanics. I don’t know if this concept will work out, but I am planning to research it in more detail and come up with a concept. There are a few options out there for physics libraries in games. The main ones I found were NVIDIA’s PhysX, Bullet Physics, Intel’s Havok, and ODE.
I am not going to spend time comparing the advantages and disadvantages of different physics engines, but suffice it to say, the two main options for T3D are PhysX and Bullet Physics. Bullet is open source, but PhysX is free for commercial use, so there really is no advantage in price either way. I am leaning towards PhysX, mainly because it seems like the most supported (by commercial projects) engine, and it seems like the most fully featured, although Bullet is catching up. PhysX is used in many triple-A games, and it is part of NVIDIA’s “GameWorks” technology package.
GameWorks also contains several other fascinating and powerful technologies, including one that I am particularly interested in: NVIDIA TurfEffects. I haven’t looked into the licensing of this yet, or the work required to integrate it into T3D (NVIDIA’s website mentions a dependency on DirectX 11; currently, T3D uses DirectX 10, but this is being worked on), however, this system’s capability to render grass is staggering. I have seen many games, even high-quality games, use what is obviously a randomly generated sprite system for grass. A 2D sprite is created, animated, and then replicated many times in the world. The effect looks decent, but unless the object count is very high, the grass just looks patchy and fake. In addition, the sprites will be billboarded, which means that they will rotate to face the player. This prevents the player from noticing that the grass is two-dimensional, but it can cause problems if the player views the grass from above, or from an extreme angle.
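As an aside, the billboarding just described comes down to a small rotation computation. Here is a minimal sketch (my own illustration, not from any particular engine) of the yaw-only, “cylindrical” billboarding typically used for grass; because the sprite only rotates about the vertical axis, the illusion fails when the camera looks down on it:

```python
import math

def billboard_yaw(sprite_pos, camera_pos):
    """Yaw (radians) rotating a sprite about the vertical axis so its face
    points toward the camera. Positions are (x, y, z) tuples with y up; the
    y components are ignored, which is what makes this "cylindrical"."""
    dx = camera_pos[0] - sprite_pos[0]
    dz = camera_pos[2] - sprite_pos[2]
    return math.atan2(dx, dz)

# A camera directly down the +z axis from the sprite needs no rotation:
print(billboard_yaw((0.0, 0.0, 0.0), (0.0, 0.0, 5.0)))  # 0.0
```

Full “spherical” billboarding would also pitch the sprite toward the camera, but that looks even worse for grass viewed from above, which is why solutions like TurfEffects model actual blades instead.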
However, with TurfEffects, the grass is not only completely seamless with no patches or dead spots, it is also capable of collision! Even large grass fields support deformation due to player footsteps, or objects striking the grass. This video, from Nvidia’s site, demonstrates this:
According to the video, TurfEffects allows one million blades of grass to be drawn in less than a millisecond using a GTX 680, which is incredible performance.
NVIDIA’s GameWorks also features HairWorks, which provides superbly detailed hair and fur simulations. This solves an irritating problem in the creation of characters for games. Hair is normally modelled as a mesh, similar to any other object, and then either left static or animated as a mesh, which looks stilted, wooden, and very poor. This is one of the things that I, as a developer and gamer, just accepted as the status quo in games, since real-time hair physics was just not really possible. However, with solutions like HairWorks, this may change. Watching the below video, it is surprising to see how much of a difference realistic hair actually makes to the depth and immersion of a game. This is a video, from NVIDIA, of HairWorks in action:
Again, I don’t know what type of licensing or pricing HairWorks uses (it’s not on NVIDIA’s site), and I am also not sure how easy, or even how possible, it would be to integrate this into T3D. This is an area where using a more popular engine, such as Unity or Unreal, would be an advantage, since there would be greater support for the integration of technologies like this.
I personally tend not to like using external libraries and resources; I am reluctant even to use art packs, since I prefer to create everything completely from scratch. I do use engines, such as T3D, and I do use some resources and libraries, but I try to create my own solutions and art when possible. However, the time required to “reinvent the wheel” like this would get prohibitive for a massive undertaking like a virtual world. I intend to seriously look into various libraries and tools to make the job easier. Some, like PhysX and MakeHuman, are readily available; however, other technologies, such as a spherical terrain implementation, will have to be developed (I was never able to find a library or resource which fit my needs).
Virtual worlds are my main area of interest, and if I am going to create the world that I am working towards, I am probably going to have to make use of any tools that are available. This is especially true with art and animation tools, since my skills in this area are much more limited.
Having recently moved to Windows 10, I had to solve a few fairly minor issues when compiling Torque using the new Visual Studio 2015. The current version of T3D (3.6.1) seems to compile fine; however, projects built with older versions don’t.
Since I’m sure many developers, like me, are still working with older versions of the engine for various reasons, I decided to create this brief guide to outline how I solved these compiling and linking issues.
Firstly, there will be a lot of errors relating to cstdlib, the standard library header. The problem here is that the new version of cstdlib declares a function named “strtof”, which conflicts with an identically named definition in Torque. To fix this, simply navigate to “engine/source/core/strings/stringFunctions.h”.
At line 38, change “#define strtof” to something else. This removes the conflict, and should fix most of the errors.
The second change to make is in “engine/lib/collada/src/dae/daeElement.cpp”. This file complains about a function “max” not being found. This is because the function, once part of “xutility”, has since been removed.
To solve this, simply add this function:
// max() replacement, based on the version removed from <xutility>.
// Using a plain < comparison avoids depending on the internal
// _DEBUG_LT macro, which the new headers may no longer define.
template<class _Ty> inline
const _Ty& (max)(const _Ty& _Left, const _Ty& _Right)
{	// return the larger of _Left and _Right
	return (_Left < _Right ? _Right : _Left);
}
to daeElement.cpp, somewhere above line 475, where the only call to “max” appears (in “getNecessaryColumnWidth”).
If you are lucky, this is all you will need to do.
However, if you are unlucky, like I was, and you have third-party libraries installed in T3D, you may get linker errors similar to: “module unsafe for SAFESEH image”. What this means is that one of these libraries has been compiled using an earlier version of the compiler.
To fix this, either get an up to date version of the library, or go into:
“Project properties>Configuration Properties>Linker>Advanced” and disable “Image has safe exception handlers”.
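For reference, this project setting corresponds to the MSVC linker switch, which can equally be added under “Linker>Command Line>Additional Options”:

```text
/SAFESEH:NO
```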
This should be all you need to do.
I generally adopt a simple sequential naming convention for my projects: P-165, P-166, etc. My current virtual world project (which I don’t have a name for yet) is P-152. However, I think that, given the sheer complexity of the spherical terrain implementation which I am currently working on, this element should have its own project number. This trend will likely continue throughout the virtual world project, with many parts of the main project being assigned unique project numbers and/or names due to their complexity.
That means that I need to adopt a new project number for this project. I intend for this virtual world to be my most important project ever, and I will likely never finish working on it; I intend for it to be a life’s work. As such, I want to choose a unique name that best fits its status; I don’t want to just assign the next free number to it like I would with any other project.
I have decided to use an ID of P-0 for my current project, for “Project Zero”. This should allow me to uniquely identify posts and content for the project, and also to define project numbers for the sub-elements separately.
My first major virtual world, Spiritus Astrum, wasn’t given a project ID because I didn’t use them at the time. Astral Realms was given an ID of P-101. I think that P-0 properly emphasises the importance of this project, since it will be an ongoing one. It may go through more iterations (which is what happened with my previous virtual worlds), but P-0 can still be used to define the project as it evolves.