This week, most of my time was again spent on this project.
I concentrated mostly on the game rules, with some time also spent on some important graphical and GUI features.
I implemented a context-sensitive cursor which displays the chance of hitting whatever target is currently underneath the mouse. This GUI element takes into account the distance, fire mode, cover status, etc. of the AI firing the weapon and of the target being aimed at.
I created a complex cover system which I am quite happy with. Instead of just keeping track of whether a player or enemy is in cover or not, this system actually keeps track of the direction of the cover, meaning a soldier could be protected from the front, but vulnerable from the sides and from the rear. This allows for flanking and diversionary attacks to be used to great effect.
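To illustrate the idea, directional cover can be reduced to a dot product between the direction the cover faces and the direction the incoming fire arrives from. This is a simplified sketch, not the actual Star Commander code; the names and the 45° frontal arc are my own assumptions:

```cpp
#include <cassert>
#include <cmath>

// Simplified sketch of a directional cover check. The struct, function
// names, and the 45-degree frontal arc are illustrative assumptions.
struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

static Vec2 normalize(Vec2 v) {
    float len = std::sqrt(dot(v, v));
    return Vec2{ v.x / len, v.y / len };
}

// coverDir: the direction the soldier's cover faces (toward expected threats).
// fireDir: the direction the incoming shot is travelling.
// The soldier is protected only if the shot arrives from within a 45-degree
// cone around the cover direction, so flanking and rear attacks bypass cover.
bool protectedByCover(Vec2 coverDir, Vec2 fireDir) {
    // A frontal shot travels opposite to the cover direction, so compare
    // coverDir against the reversed fire direction.
    Vec2 threatDir = normalize(Vec2{ -fireDir.x, -fireDir.y });
    float cosAngle = dot(normalize(coverDir), threatDir);
    return cosAngle > std::cos(45.0f * 3.14159265f / 180.0f);
}
```

With a scheme like this, a soldier behind a wall facing north is safe from fire coming out of the north, but fire from the east, west, or south ignores the cover entirely, which is what makes flanking worthwhile.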
I spent a few days this week working on some modifications to the guiShapeName HUD used in the T3D engine. I have added the ability to display multiple lines of text, and to display different text on different clients. This means that I can display custom stats and attributes for the player’s own squad, visible only to the player controlling that squad.
I have had some further ideas regarding this project. Icarus turned out to be a very large rocket, much larger than I anticipated. It is also very complex, due to the fact that it uses liquid-fueled motors with cryogenic (LOX and LH2) propellant.
I want to know how small, and how simple, a rocket could be made to send a tiny payload (say, a USB memory stick) into space. I want to know if it is at all feasible to build a rocket powerful enough to reach escape speed with basic materials.
To simplify the Icarus project, in addition to reducing the payload capacity, I would use solid-fuel motors. This would eliminate the need for complex fuel and oxidiser feedlines, turbines, turbopumps, etc. Solid-fuel rockets are much simpler to build, and homemade rocket fuel can be made relatively easily. The simplest type of homemade rocket fuel is “Rocket Candy”: potassium nitrate and sugar (sucrose).
The problem here is that solid-fuel rockets in general, and solid-fuel rockets using homemade fuels in particular, are less powerful than liquid-fueled rockets. For this reason, medium to large rockets almost always use a liquid-fueled main engine, possibly with solid-fueled engines as boosters. Small rockets, however, have been known to use solid rocket engines exclusively.
The specific impulse of Rocket Candy is between 115 and 130 seconds, compared to 150–180 seconds for APCP (Ammonium Perchlorate Composite Propellant, used by the Space Shuttle Solid Rocket Boosters). This is surprisingly good, although possibly not good enough for a ground launch to escape speed.
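A quick back-of-the-envelope check with the ideal rocket equation shows what those specific impulse figures mean in practice. The mass ratio used below is an arbitrary assumption for illustration, not a real vehicle design:

```cpp
#include <cassert>
#include <cmath>

// Tsiolkovsky rocket equation: deltaV = Isp * g0 * ln(wet mass / dry mass).
// Returns the ideal velocity change in m/s, ignoring gravity and drag losses.
double deltaV(double ispSeconds, double wetMassKg, double dryMassKg) {
    const double g0 = 9.80665; // standard gravity, m/s^2
    return ispSeconds * g0 * std::log(wetMassKg / dryMassKg);
}
```

Even with a generous mass ratio of 10, a single Rocket Candy stage at 120 seconds gives roughly 2.7 km/s of ideal delta-v, before gravity and drag losses. Escape speed is about 11.2 km/s, which is why the low Isp hurts so much and why staging would be unavoidable.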
The next problem with building a simple rocket would be controlling it. Commercial space rockets use thrust vectoring, usually some kind of rotating gimbal mechanism. This kind of system would be much too difficult for an amateur to build. There are several other thrust vectoring options, such as Liquid Injection Thrust Vectoring, Hot Gas Injection, Jet Tabs, and Axial Plates. Axial Plates and Jet Tabs are probably the easiest to build, since they don’t require modification of the engine nozzle itself, or any kind of extra fuel or related equipment.
I have made huge progress in Star Commander over the past week.
In addition to creating animations for grenade throwing (which still need some work!), I added code to allow for “Client-Specific Rendering”. This is basically where an object shows up on one client, but not on another. Since Star Commander is a tactical wargame, I will be modelling real-world variables such as cover, concealment, etc. This means that some players will be able to detect an enemy before that enemy detects them, or vice versa. For this to work, I need to be able to control the visibility of each player object over the network. I have implemented this by simply hiding the mesh, as opposed to deleting it. The advantage of this is that the mesh is still in the game world, so it can still be hit by stray bullets or grenades, or detected by the footprints it leaves behind, which is a desirable feature. I could have achieved this effect on the server, of course, and hiding a mesh rather than deleting it does leave the potential for cheating if a player were to hack the client, so this is something I may look into and improve later.
I created a game lobby, and implemented most of its features. There is a fully colour-coded (Red Team, Blue Team, and Observer) player list and chat system, the ability to choose teams, choose spawn points, and enter the game, and the ability to send chat messages to everyone or only to your team. A different set of spawn points is loaded for each team, ensuring members of opposing teams don’t spawn beside each other, and a map will be shown behind the spawn select buttons, providing a visual indication of where each spawn point is.
I implemented the beginnings of the game’s rules. Star Commander will have a complex system of rules governing things like accuracy, detection ability, chance to be detected, and so on. These rules will be based on variables such as morale, stamina, health, whether the AI is in cover or not, and how many casualties have been taken by the player’s team or the enemy’s team. I have come up with an algorithm to convert all of this into a percentage chance of hitting a target, and an offset vector determining how much a player misses by if they do miss.
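As a sketch of what such an algorithm might look like, each factor can simply nudge a base percentage. The specific weights and variable names here are my own placeholder assumptions, not the actual Star Commander rules:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative hit-chance rule: the weights and names are assumptions,
// not the real Star Commander formula. Each factor adjusts a base chance,
// and the result is clamped to a sensible range.
int hitChance(double distance, bool targetInCover,
              int shooterMorale /* 0-100 */, int shooterStamina /* 0-100 */) {
    double chance = 90.0;
    chance -= distance * 0.5;               // harder to hit at long range
    if (targetInCover) chance -= 30.0;      // cover gives a large penalty
    chance -= (100 - shooterMorale) * 0.2;  // shaken troops shoot worse
    chance -= (100 - shooterStamina) * 0.1; // fatigue penalty
    return static_cast<int>(std::clamp(chance, 5.0, 95.0));
}
```

If the shot misses, the same inputs can scale a random offset vector, so that near misses land closer to the target when the hit chance was high.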
I spent quite a bit of time implementing a point-and-click object selection system. I had to make several changes to the C++ source for this to work, as well as do a lot of GUI work. Players can now click on the screen with the mouse, and the 2D screen coordinates will be translated into 3D coordinates and used to select items in the game world. This means I can combine an on-screen GUI with object selection; I don’t have to hide the GUI menu while the player is controlling the camera and then show it again, which is what I was doing before. Players use the left mouse button to select squad members, select orders, and issue orders. Holding the right mouse button allows the player to rotate the camera in full 3D. The videos below show this in action.
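The core of a point-and-click pick is turning the 2D mouse position into a ray in the 3D world. Here is a generic sketch of that unprojection step, under assumed camera conventions; T3D’s actual implementation differs:

```cpp
#include <cassert>
#include <cmath>

// Generic screen-to-ray unprojection sketch; T3D's real code differs.
// Assumes a camera at the origin looking down -Z with a symmetric
// vertical field of view.
struct Vec3 { double x, y, z; };

Vec3 screenPointToRay(double mouseX, double mouseY,
                      double screenW, double screenH, double fovYRadians) {
    // Pixel coordinates -> normalized device coordinates in [-1, 1].
    double ndcX = (2.0 * mouseX / screenW) - 1.0;
    double ndcY = 1.0 - (2.0 * mouseY / screenH); // screen Y grows downward
    double halfH = std::tan(fovYRadians / 2.0);
    double halfW = halfH * (screenW / screenH);   // account for aspect ratio
    Vec3 d{ ndcX * halfW, ndcY * halfH, -1.0 };   // direction in camera space
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return Vec3{ d.x / len, d.y / len, d.z / len };
}
```

The resulting ray is then cast into the scene, and the first object it intersects becomes the selection.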
Finally, I added some simple decals to indicate which player is currently selected, and I implemented frag grenades (both of these should be in the video below too), as well as creating and setting up a dedicated server for the game. For the initial release of Star Commander, I will have about four or five dedicated servers, running different missions. I will eventually add the ability for players to host their own missions.
I have tested the network play on a LAN, and everything seems fine. I can’t test the network play over the internet from my machine because, due to the way Torque’s networking works, I can’t connect over the internet to a game while I am on the same network as the game server. I intend to upload a simple playable test version of Star Commander soon, to be used for internet testing.
I have managed to get the trajectory for my smoke grenades working, thanks to some help from the GarageGames forums. I now just need to add the animations for the grenade throwing, and I am basically done. Adding animation support may be quite tricky, since I have to move the player from a normal pose, have them take the grenade from a belt or pouch, and then throw it, creating a projectile at just the right moment.
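For reference, the maths behind a grenade trajectory like this is straightforward: pick a flight time, then solve the constant-gravity kinematics for the launch velocity. This is the generic textbook approach, not necessarily the solution from the forums, and the axis conventions are an assumption:

```cpp
#include <cassert>
#include <cmath>

// Generic ballistic sketch: given a chosen flight time, compute the launch
// velocity that carries a projectile from start to target under gravity.
// Gravity acts along -Z here; axis conventions are an assumption.
struct Vec3 { double x, y, z; };

Vec3 launchVelocity(Vec3 start, Vec3 target, double flightTime) {
    const double g = 9.81; // m/s^2
    // position(t) = start + v*t - 0.5*g*t^2 (z only), so
    // v = (target - start)/t, plus a term to cancel the gravity drop.
    return Vec3{
        (target.x - start.x) / flightTime,
        (target.y - start.y) / flightTime,
        (target.z - start.z) / flightTime + 0.5 * g * flightTime,
    };
}
```

A shorter flight time gives a flatter, faster throw; a longer one gives a higher lob, which is handy for getting grenades over cover.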
Grenades will be a major part of combat in the early release of Star Commander, since the choice of weapons and equipment will be quite limited at this stage. I think that, without grenades, combat would become monotonous “trench warfare”, where players stay in cover and simply fire at each other until they score a lucky hit. With smoke grenades, however, players can deploy smoke to cover an advance or retreat, crossing ground that would normally be impassable. With frag grenades, they can clear enemies out of cover, or out of buildings, forcing them to change positions or retreat. This should make the game dynamic and fun.
It may not be possible to see the grenades being thrown in the video below, but they are in fact there.
I have made an excellent start on this project. As can be seen from the videos below, I have created character models and textures for two Teams (Red and Blue) and implemented a simple orders and movement system. I have also created a weapon model and some animations for it which I think look quite good, as well as a “Fire” order.
There is still some modelling and texturing work to do on the character models, and I have yet to add additional weapons and grenades. The user interface also needs a lot of work. Context-sensitive pop-up command menus, like the one shown in the videos, are only really effective for small numbers of commands. This game will have a large number of possible commands, meaning a conventional, permanently visible GUI menu will be required. The problem with this is the issue of focus in T3D. If the on-screen GUI has focus, i.e. if the mouse cursor can be used to interact with the GUI, then the mouse cannot also be used to control or influence the game world. With a context-sensitive menu, I can give the game world focus, switch focus to the on-screen GUI while it is visible, and then switch back.
There are resources available to extend and modify Torque’s mouse-select and GUI systems, which I will have to use, but I will leave that for slightly later in the project.
I hope to have a playable (alpha) demo of what I am working on by the end of the month. Such a quick development time should be possible due to the milestone-based approach I am using, with iterative releases gradually adding features to the project.
After coming to the end of my previous game project, I wanted to work on something slightly different. I had an idea for an extensive and complicated game, which would no doubt take a long time to reach any kind of releasable state. Instead of jumping straight into a huge project like this, I decided to start with a simple but feature-complete version of the game, and add elements over time in the form of “milestones”, with each milestone being a fully playable game in itself.
Star Commander will be a sci-fi themed squad-based tactical wargame. The play style will be somewhat similar to the old X-COM games, although the first iteration of the game will be real time. I have not decided yet if the final version will be real time or turn-based. I intend to simulate realistic factors such as psychology (soldiers may panic, become stressed, etc.), resource management (players may run out of ammo in the field, and require resupply), and advanced squad-based tactics and cooperation between players.
The final game will have single player, multiplayer, and multiplayer coop modes. I intend to allow for large-scale campaigns in both single player and multiplayer modes, where players will be able to slowly achieve strategic objectives through fighting many tactical missions. There will also be options for more casual play.
Milestone 1 will contain primarily the movement and combat systems, multiplayer play, and the general game concept. This version will be simple, but should also be fun to play. It will feature in-game chat, 3–5 online dedicated servers featuring different maps, a small number of different types of weapons, and a promotion and rewards system to encourage cooperation and tactics during play.
I used to play a free game called “Chain of Command”, which was also a simple squad-based online game. It seems to be gone now, but it was relatively successful at one time, so simple games can work, especially if they are added to and improved over time. It is a lot easier to do this than to embark on a huge project all at once with limited resources.
I expect to have a working prototype of this project done fairly quickly, and hopefully get something uploaded in a few months. MS1 will probably be mostly multiplayer, with a single-player tutorial, and possibly some support for AI players.
I will then build on this, adding more features in stages.
This resource adds support for “metablobs” to T3D. This can allow for very realistic fluid simulations, much more so than can be achieved with particle effects.
Included are a realistic Water material, a Lava material, a Tar material, and a Mud material. I have also included a ported version of the RenderMonkey glitter shader.
Tutorial:
First, download the resource from here (4.2 MB, .RAR archive):
http://www.phoenixgamedevelopment.com/downloads/FluidDynamics.rar
Copy the “Fluid Dynamics” folder to:
“game/art/shapes/FluidDynamics”
Inside the archive is a folder called “SHADERS”. All files in this folder should be copied to:
game/shaders/common/
Drop the files directly into that folder; do not include the “SHADERS” directory itself.
There should also be a folder called “CODE”. All files in this directory should be added to:
“engine/source/T3D/examples/metaBlobs”
In your compiler, you will then need to add these files to the build.
In order to add MetaBlob based objects to the world using the editor, add the following line to:
“game/tools/worldeditor/scripts/editors/creator.ed.cs”
Around line 100, under:
%this.registerMissionObject( "RenderShapeExample" );
add:
%this.registerMissionObject( "metaBlobExample" );
The resource folder also contains a sample mission file showing how to create a metablob object from code.
Finally, execute the file “fluiddynamics.cs” by adding the line:
exec("art/shapes/FluidDynamics/fluiddynamics.cs");
to:
“game/scripts/server/scriptExec.cs”
Any questions, comments, etc, can be directed to:
jackstone@phoenixgamedevelopment.com
I have reached the point where I can test my program on real signals that I have picked up from a radio transmitter I built with an Arduino.
I tuned the radio to 89.62 MHz, and recorded two signals: one with the antenna disconnected (which should pick up almost entirely background noise) and another with the antenna connected (and a radio station clearly audible over the speaker).
I created graphs of the original signal after the window function was applied, the real and imaginary components, the magnitude, and a waterfall display, for both the background signal and the tuned radio signal.
The waterfall graph isn’t full because I didn’t capture the signal for long enough, and there weren’t enough data points.
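The processing chain behind those graphs (window, transform, magnitude) can be sketched like this. I have used a plain O(N²) DFT for clarity, with an assumed Hann window; the real program would use a proper FFT library:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the analysis chain: apply a Hann window to reduce spectral
// leakage, take the discrete Fourier transform, and keep the magnitude
// of each frequency bin (only the first half is meaningful for real input).
std::vector<double> windowedMagnitudeSpectrum(std::vector<double> signal) {
    const double pi = 3.14159265358979323846;
    const std::size_t n = signal.size();
    for (std::size_t i = 0; i < n; i++)
        signal[i] *= 0.5 * (1.0 - std::cos(2.0 * pi * i / (n - 1)));
    std::vector<double> magnitude(n / 2);
    for (std::size_t k = 0; k < n / 2; k++) {
        double re = 0.0, im = 0.0;
        for (std::size_t t = 0; t < n; t++) {
            re += signal[t] * std::cos(2.0 * pi * k * t / n);
            im -= signal[t] * std::sin(2.0 * pi * k * t / n);
        }
        magnitude[k] = std::sqrt(re * re + im * im);
    }
    return magnitude;
}
```

Each call produces one column of the waterfall display; stacking the columns over time shows how the spectrum evolves.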
The following are the graphs of the background noise:
These graphs are of the radio signal:
There is a clear difference between the two signals. The second signal has a pattern that seems to contain human speech, which is visible in all of the FFT graphs. This, I feel, is a proof of concept of the system. The waterfall display also looks different, but it is hard to tell, since there is a lot of noise also producing traces. It is possible some of the signal was picked up by the radio receiver even with no antenna. Even so, I think this system performed quite well, and it shows promise.
I have prepared my Genetic Algorithm example application for upload. I used the Qt library to develop the application, but the executable should run on almost any Windows system, whether or not Qt is installed.
A code listing with comments follows; refer to my earlier post on this topic for more information. To build and execute this code, simply create a new Qt GUI application, and add the relevant files from the source directory.
gaentity.h:
#ifndef GAENTITY_H
#define GAENTITY_H
class GAEntity
{
public:
GAEntity(int maxnumber);
GAEntity();
int solution; //This is the first, and so far only, "chromosome" of the GA entity
int fitness;
};
#endif // GAENTITY_H
gaentity.cpp:
#include "gaentity.h"
#include <cstdlib> //for rand()
GAEntity::GAEntity(int maxnumber)
{
solution = rand() % maxnumber + 1; //All GA entities are initialised with a random value from 1 to the max number.
fitness = 999; //lower fitness values are better, so initialise entity with an impossibly high value
}
GAEntity::GAEntity()
{
solution = rand() % 1000 + 1; // default constructor, assumes max value is 1000
fitness = 999;
}
Main Logic Function:
void MainWindow::runbtnpushed(){
srand ( time(NULL) ); //init time for random number function
QString s = "";
//init variables from gui:
int targetnumber = ui->targetIN->value(); //The number that the AI is trying to guess
int maxgenerations = ui->maxgenerationsIN->value(); //The max number of generations that the algorithm will run for.
int populationsize = ui->popsizeIN->value(); //The number of entities to create in each generation, the more there are, the more chance they will solve the problem
int bestfitness = 999;
for(int i = 0; i < populationsize; i++){
GAEntity ent = GAEntity(ui->maxnumberIN->value()); //Create populationsize entities, each initialised to a random value
ent.fitness = abs(targetnumber - ent.solution); //determine fitness (simply subtract the solution from the target number, and ignore the sign)
population.push_back(ent); //add to population vector
}
int count = 0;
//pick the best two candidates, mate them, produce new population
GAEntity parent1;
GAEntity parent2;
//choose parents:
for(size_t i = 0; i < population.size(); i++){ //lesser fitness is better
GAEntity ent = population[i];
//fitness:
int fitness = abs(targetnumber - ent.solution);
//this code finds the two fittest entities (those with the lowest fitness values). This would be an excellent place for improvement!
if(fitness < parent1.fitness){
//better than the current best: demote the old best to second place
parent2.fitness = parent1.fitness;
parent2.solution = parent1.solution;
parent1.fitness = fitness;
parent1.solution = ent.solution;
} else if(fitness < parent2.fitness){
//better than the second-best only
parent2.fitness = fitness;
parent2.solution = ent.solution;
}
}
I have always been interested in Artificial Intelligence, and I spent several years working on AI-related projects, including neural networks, genetic algorithms, and logical inference programs, as well as game AI such as pathfinding techniques and decision making for games. I spent a lot of time creating “chatterbots”: AI programs designed to converse in a realistic manner with a human. These programs proved to be much more difficult to write than I had thought, but I did make some good attempts.
I think that when creating a true, learning AI entity (as opposed to a rule-based AI entity, such as those used in most computer games) there are two main approaches that can be taken. I call these “Bottom-Up” and “Top-Down”. I don’t believe these are industry-standard terms, or even industry-standard concepts, but it is how I learned to look at the field of AI, or at least the parts of it that I was involved in.
A Top-Down AI program would focus on high-level tasks, and be designed to emulate advanced behaviour, such as communicating with a human or playing chess. For example, a Top-Down AI program designed to play chess would have the rules of chess programmed in, and would then be programmed with a set of optimisation strategies for different moves, and possibly learning techniques for predicting the future moves of the opponent. In essence, this AI entity already “knows” how to play chess; the program just teaches it how to play more efficiently or more intelligently.
A Bottom-Up AI program is different. It starts with no knowledge of the problem area, and must learn how to solve the problem completely by itself. For example, an AI program designed to navigate from point A to point B as efficiently as possible would begin with no knowledge of the route, and would then slowly explore and learn different behaviours and patterns as it progresses.
To date, I have concentrated primarily on Top-Down programs, attempting to emulate high-level behaviour such as communication and language skills. I have been interested in developing some Bottom-Up programs for some time, especially after coming across John Conway’s “Game of Life”. This is not an Artificial Intelligence program, but a “cellular automaton” demonstrating emergent behaviour. Having read about this, I had an idea.
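For anyone who hasn’t seen it, the whole of the Game of Life is just one local update rule, which makes the emergent behaviour all the more striking. A minimal sketch, assuming cells outside the grid are dead:

```cpp
#include <cassert>
#include <vector>

// Minimal Game of Life step. Cells outside the grid are treated as dead.
using Grid = std::vector<std::vector<int>>;

Grid step(const Grid& g) {
    const int rows = static_cast<int>(g.size());
    const int cols = static_cast<int>(g[0].size());
    Grid next(rows, std::vector<int>(cols, 0));
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            int neighbours = 0;
            for (int dr = -1; dr <= 1; dr++)
                for (int dc = -1; dc <= 1; dc++) {
                    if (dr == 0 && dc == 0) continue;
                    int nr = r + dr, nc = c + dc;
                    if (nr >= 0 && nr < rows && nc >= 0 && nc < cols)
                        neighbours += g[nr][nc];
                }
            // A live cell survives with 2 or 3 live neighbours;
            // a dead cell is born with exactly 3.
            next[r][c] = (neighbours == 3 || (g[r][c] == 1 && neighbours == 2)) ? 1 : 0;
        }
    }
    return next;
}
```

From nothing but this rule, gliders, oscillators, and self-sustaining patterns emerge, which is exactly the kind of behaviour I would want biosphere entities to exhibit at a higher level.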
I intend to create a virtual biosphere populated by a group of AI entities. These entities will begin knowing nothing about the environment (Bottom-Up) and will be programmed to learn and evolve in the same way as a real-life species. I will include variables such as the availability of food and water, predators, weather patterns, temperatures, mating, etc. It could be an interesting study in not only Artificial Intelligence, but also evolution and biology.
I intend to use a Genetic Algorithm as the basis for the AI entities. I have used these before, and I think they are well suited to this type of problem. A Genetic Algorithm is basically an AI program which evolves over time, slowly becoming better at solving a given task, in a similar way to how evolution works in the real world.
To brush up on the subject and polish my skills, I have created a very simple concept test of a Genetic Algorithm. This only took a few hours, but it reminded me of the great potential that these concepts have for AI and problem-solving in general.
The program works by having the user first specify a “Target”: a number between 1 and a maximum value, also specified by the user. The program will then spawn a population of AI entities (Random Initialisation) and attempt to “guess” this number, with each entity first picking a number at random within the range specified. Then, every “generation”, the entities whose “guesses” are closest to the target number are chosen as the “Parents” (Selection). The average of these two parents is taken to produce a “child” (Crossover). Then a whole new generation of AI entities is created, with the “guess” of each entity in the new generation being based on the child, plus or minus a small random amount (Mutation). This program finds the correct number almost all of the time, which is remarkable considering how simple it is!
The main disadvantages are, first of all, the selection process. The program picks the best two entities from the previous generation; this is far too simple, and it would have been better to implement a system like tournament selection. Secondly, this program relies heavily on Mutation to work properly. Without it, the program would converge after just one generation, since there are only two parents. The child is used as a “seed” to create a new generation, but without mutation, the new generation would all be clones of the child. In most Genetic Algorithms (and in real life) the chances of a random mutation are much, much lower.
I intend to release the program and most, if not all, of the source code in the next day or so. It is probably the simplest GA you will find, so it would be a good starting point for anyone looking to begin programming with Genetic Algorithms.