In addition to my main project, I have been working on a simple concept for a system (using the Unity game engine) that I can use to create visual novels and interactive stories.
Initially, this will use simple 2D art, sprites, etc., which means I can create almost all of the artwork with AI image generation.
However, if it works out, I can add more complex 3D geometry in the future, and create point-and-click games and other projects that are a hybrid of 2D and conventional 3D art.
Using AI-generated art would allow me to develop smaller projects more quickly while still working on my main game, which would not be possible if I were working on multiple complex 3D projects.
Most of the basic logic for displaying, selecting, and interacting with sprites is already done for the interactive story system, as well as basic text display, etc.
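To give an idea of what that logic amounts to, here is a minimal sketch of the sprite-and-text display in Unity; the class and field names are placeholders, not the actual code in my system:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Placeholder sketch of the story system's display logic, not the real code.
public class StoryLine : MonoBehaviour
{
    [SerializeField] private Image characterImage; // 2D character sprite
    [SerializeField] private Text dialogueText;    // basic text display

    // Show one line of the story with the speaking character's sprite.
    public void Show(Sprite character, string line)
    {
        characterImage.sprite = character;
        dialogueText.text = line;
    }

    void Update()
    {
        // Let the player click sprites in the scene to interact with them.
        if (Input.GetMouseButtonDown(0))
        {
            RaycastHit2D hit = Physics2D.Raycast(
                Camera.main.ScreenToWorldPoint(Input.mousePosition), Vector2.zero);
            if (hit.collider != null)
                Debug.Log("Selected " + hit.collider.name);
        }
    }
}
```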
The main problem now is figuring out how to generate consistent characters with Stable Diffusion.
I have also spent some time testing the build, and fixing some minor issues that cropped up since the last build test.
In addition, I have been working on the “cosmetic” features of the game. I intend to add more dialogs, prompts, and GUIs to tell the user what is happening in the world, and to teach them how to play the game.
I have also added a loading screen that disables player movement and covers the screen while the game is loading. This prevents the user from seeing objects, terrain, etc., slowly paging in, which negatively affects immersion.
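A loading screen like this only needs a few pieces: a full-screen canvas, a way to freeze the player, and a wait for the load to finish. Here is a rough sketch with placeholder references (the actual implementation in my project differs in the details):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Placeholder sketch; the real loading screen has more going on.
public class LoadingScreen : MonoBehaviour
{
    [SerializeField] private Canvas loadingCanvas;          // full-screen cover
    [SerializeField] private MonoBehaviour playerMovement;  // movement script to freeze

    public IEnumerator LoadLevel(string sceneName)
    {
        loadingCanvas.enabled = true;    // hide the world while it pages in
        playerMovement.enabled = false;  // stop the player wandering blind

        AsyncOperation op = SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Additive);
        while (!op.isDone)
            yield return null;           // wait for the scene to finish loading

        yield return null;               // one extra frame so everything renders

        loadingCanvas.enabled = false;
        playerMovement.enabled = true;
    }
}
```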
I have mostly finished the artwork for the new city apartment building, as well as some new inventory items.
I spent some time trying to figure out the correct way to texture large objects.
In the end, I decided to use a combination of techniques.
Firstly, split the large object up into smaller objects when it makes sense to do so (so, separate objects for interiors and exteriors, for details and for the main texture, and so on), then use geometry, such as door and window frames, to cover up the seams between sections of the building.
Secondly, use repeating textures where possible (windows, simple wall sections, and so on); there is a small sketch of this below.
Thirdly, use high-resolution (4K) textures when all else fails, such as for large exterior areas that cannot realistically be broken down into smaller chunks.
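As an illustration of the repeating-texture approach, this small sketch tiles one wall texture across a large face at a constant physical scale; the component and field names are made up for the example:

```csharp
using UnityEngine;

// Illustrative only: tiles one small texture across a large wall so the
// pattern repeats at a constant physical scale, whatever the wall's size.
public class WallTiling : MonoBehaviour
{
    [SerializeField] private Vector2 tilesPerUnit = new Vector2(0.5f, 0.5f);

    void Start()
    {
        Renderer r = GetComponent<Renderer>();
        Vector3 size = r.bounds.size;
        // Scale the UV tiling with the wall's world size.
        r.material.mainTextureScale = new Vector2(size.x * tilesPerUnit.x,
                                                  size.y * tilesPerUnit.y);
    }
}
```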
The next step is to spend a few weeks improving the cities in the game.
In particular, I want to add new artwork, and add more procedural decorations to make the cities feel more “lived in”. I also want to add the ability for the player to earn credits by performing various tasks and jobs.
The first thing to do is to create some new items that will be useful in a city environment, such as new food and drink items, credit chips, etc.
I will also create a new apartment block that the player can rent or buy rooms in while staying in a particular city.
I have been continuing with the debugging and testing for the project.
I have made some further improvements to the AI system, and I have also improved the grass exclusion system, although more updates will be needed here later.
I have also added the ability to save and load the game while the player is driving a (ground) vehicle. Previously only ships and flying vehicles worked with the save system.
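Conceptually, supporting vehicles in the save system just means capturing a little more state. A hypothetical sketch of the idea (not my actual save code, and the names are invented):

```csharp
using UnityEngine;

// Invented names, purely illustrative of the idea.
[System.Serializable]
public class VehicleSaveData
{
    public string vehicleId;
    public Vector3 position;
    public Quaternion rotation;
    public Vector3 velocity;
    public bool playerIsDriving;
}

public static class VehicleSaver
{
    public static VehicleSaveData Capture(Rigidbody vehicle, bool driving)
    {
        return new VehicleSaveData
        {
            vehicleId = vehicle.name,
            position = vehicle.position,
            rotation = vehicle.rotation,
            velocity = vehicle.velocity, // restore this so the car doesn't stop dead
            playerIsDriving = driving
        };
    }

    public static void Restore(Rigidbody vehicle, VehicleSaveData data)
    {
        vehicle.position = data.position;
        vehicle.rotation = data.rotation;
        vehicle.velocity = data.velocity;
    }
}
```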
I have made some more improvements to the text-to-speech system in the project. I have added a new voice from CereProc, which can be selected as a possible voice for the characters in the project.
All character voice settings are now persistent throughout the game, so it is a simple matter to add new voices and then select which voice (if any) a character should use.
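As a rough illustration, "persistent voice settings" amounts to a stored mapping from character to voice name; using PlayerPrefs as the storage is just an assumption for the sketch, and the real project's storage differs:

```csharp
using UnityEngine;

// Illustrative only; PlayerPrefs as storage is an assumption for the sketch.
public static class VoiceSettings
{
    public static void SetVoice(string characterId, string voiceName)
    {
        PlayerPrefs.SetString("voice_" + characterId, voiceName);
        PlayerPrefs.Save();
    }

    // Returns null if the character has no voice assigned.
    public static string GetVoice(string characterId)
    {
        string v = PlayerPrefs.GetString("voice_" + characterId, "");
        return v.Length > 0 ? v : null;
    }
}
```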
The basic concept for the Procedural Structure Generator is essentially done.
It works in 3D, with interiors, and there are no visible cracks or gaps.
In the future, I will extend this system by adding textured objects and more detail. I also need to fix some bugs, and add doors and panels to properly seal off dead ends.
I have decided to try to create an immersive city demo for the Oculus Rift. I want to see if I can capture the sense of immersion that I often feel while playing other VR games in a game that I made myself.
This is a very difficult thing to achieve, since playing a game that you made is like watching a magician performing a trick when you already know how it works. It’s very difficult to be amazed by it in the same way.
However, with VR and the great immersion that it provides, I think it is possible to come close. I have created a simple scene with some high-rise buildings (from the White City Pack) and a building with an interior, where the player spawns.
I have noticed that even at this early stage, there is a definite sense of presence within the scene. It is actually surprising how easy this was to achieve: just an hour or so of placing buildings was enough to create a sense of presence in a virtual world, and I haven't even textured the walls of the apartment block yet.
I have just spent a little over a day working on the scripting side of Unity, and I have achieved something that I feared would take weeks to complete.
Not only did I successfully implement raycasting, but I was also able to create a polygon from code, texture it, and apply a collision mesh.
These three things can be very difficult; in Unity, the code to do them was just a few lines long.
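To show just how short it is, here is a sketch along the lines of what I did: build a quad from code, texture it, add a collision mesh, and raycast against it (the texture reference is a placeholder):

```csharp
using UnityEngine;

// Placeholder sketch of the steps: mesh from code, texture, collider, raycast.
public class CodeQuad : MonoBehaviour
{
    [SerializeField] private Texture2D wallTexture; // placeholder asset

    void Start()
    {
        // Build a 1x1 quad from code (clockwise winding, as Unity expects).
        Mesh mesh = new Mesh();
        mesh.vertices  = new[] { new Vector3(0, 0, 0), new Vector3(0, 1, 0),
                                 new Vector3(1, 1, 0), new Vector3(1, 0, 0) };
        mesh.uv        = new[] { new Vector2(0, 0), new Vector2(0, 1),
                                 new Vector2(1, 1), new Vector2(1, 0) };
        mesh.triangles = new[] { 0, 1, 2, 0, 2, 3 };
        mesh.RecalculateNormals();

        gameObject.AddComponent<MeshFilter>().mesh = mesh;
        MeshRenderer mr = gameObject.AddComponent<MeshRenderer>();
        mr.material = new Material(Shader.Find("Standard")) { mainTexture = wallTexture };

        // The same mesh doubles as the collision mesh.
        gameObject.AddComponent<MeshCollider>().sharedMesh = mesh;
    }

    void Update()
    {
        // Raycast from the camera through the mouse cursor.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 100f))
            Debug.Log("Hit " + hit.collider.name + " at " + hit.point);
    }
}
```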
The reason I chose this particular task is my Spherical Terrain project. The vast majority of my project *should* be engine independent, i.e., it is mostly logic, variable declarations, etc., that should just require like-for-like changes, such as changing an “F32” in T3D to a “float” in Unity, changing a vector to a list, and so on.
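For example (with invented names), a typical engine-independent routine ports almost mechanically:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Invented example of an engine-independent routine and its type changes:
// T3D's "F32 radius" becomes "float radius", "Vector<Point3F>" becomes
// "List<Vector3>", and the logic itself carries over unchanged.
public static class TerrainMathExample
{
    public static List<Vector3> ProjectToSphere(List<Vector3> verts, float radius)
    {
        var result = new List<Vector3>(verts.Count);
        foreach (Vector3 v in verts)
            result.Add(v.normalized * radius); // pure logic, no engine calls
        return result;
    }
}
```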
The parts that I am really worried about are the areas where my code integrates with the existing rendering and collision systems, and the material manager. So far, using these subsystems has been very, very easy in Unity, and I actually think that porting my Spherical Terrain code could be a viable option in the near future, if I choose to go down that route.
T3D and Unity also both seem to use a clockwise winding order, which is another advantage.
I find the code very elegant and neat, and powerful features can be implemented with very few lines of code.
The only issues that I am worried about are the performance of C# compared to C++ (it would be slower), and the fact that most scientific computing code is written in C++. I now work mostly in the field of “serious” computing (virtual worlds, some computational physics, mathematics, etc.), and using a “common” language would make it easier to integrate libraries and example code from the scientific computing field, which I do on occasion.
Overall, though, I am very impressed with the speed at which I am picking up Unity, and I am looking forward to spending more time with it; it truly is a great piece of software.