I have revised the GP code posted previously so that entities can now plant trees. It’s not a lot better than before, but it is still a step in the right direction. A tree will drop some quantity of food after it is planted, but it takes a significant number of turns for it to do so. Furthermore, trees are expensive to plant in terms of energy cost. Thus, entities must balance not planting (which saves energy) against planting (which consumes energy, but leads to better health in the long term).
After a few minutes of milling about, the entities should reach a stable pattern of planting trees and then looping around to consume the food that the trees drop. If you look at the entity code, you’ll see that Command.2 (the plant tree command) tends to wind up inside of a conditional. This is more or less the behavior I was hoping for, so hooray.
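The economics behind this trade-off can be sketched in a few lines. Everything here is hypothetical — the class, the constants, and the threshold rule are illustrative stand-ins, not the actual simulation code — but it shows the kind of conditional the entities tend to evolve around the plant command.

```java
// Hypothetical sketch of the planting trade-off. PLANT_COST, FOOD_VALUE,
// and TURNS_TO_FRUIT are invented numbers, not values from the real system.
public class PlantingTradeoff {
    static final int PLANT_COST = 30;     // energy spent to plant a tree (assumed)
    static final int FOOD_VALUE = 50;     // energy in the food the tree drops (assumed)
    static final int TURNS_TO_FRUIT = 40; // turns before the tree drops food (assumed)

    // The behavior that tends to evolve: plant only when energy reserves are
    // high enough that the up-front cost is survivable until the payoff lands.
    static boolean shouldPlant(int energy, int upkeepPerTurn) {
        int energyAtPayoff = energy - PLANT_COST - upkeepPerTurn * TURNS_TO_FRUIT;
        return energyAtPayoff > 0; // plant only if we'd still be alive at payoff
    }

    public static void main(String[] args) {
        System.out.println(shouldPlant(100, 1)); // healthy reserve: plant
        System.out.println(shouldPlant(40, 1));  // too lean: save energy instead
    }
}
```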
Now it’s just a question of what they can do next.
So, I am using a severely modified (one might go so far as to say gutted) version of WordPress for this notebook system of mine. I daren’t call it a blog lest I get people expecting me to update more than once every couple of months (though I should more often if this is really to be a research notebook). WordPress likes to make things look very “nice”, which I am not really against. Just look at the proper quotation marks around “nice”. Aren’t they pretty?
So, anyway, WordPress likes to muck around with quotation marks, and as I discovered in the last post, that makes it really hard to write embedded JavaScript if I am interested in doing that. I didn’t want to put a full Java applet in a post, since many people’s browsers get all crazy if they see an applet; I’d much rather have a user push a button and then have the applet appear. Originally I wanted to do this fully embedded in the post, which caused all manner of problems, most of which involved curly quotes ending up around the JavaScript strings, or the strings that the JavaScript was printing out. Very persnickety.
Anyway, the script has been moved to an external file, and should be happy there. Please enjoy the finally working Java applet at a button-press.
Recently I have been developing an agent-controller system that uses Genetic Programming. The underlying logic is quite similar to that in Genetic Image, but I actually represent a lot of structures from real-life programming (such as statements, statement lists, and functions or methods), which is rather unusual. Most Koza-style GP appears to focus primarily on tree-based evaluation that starts at a single root node. Additionally, that style of GP is stateless: the programs don’t usually alter their state as they go along. Then again, I haven’t been exposed to that much “real” GP, so I may be rather off.
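The structural difference can be made concrete with a small sketch. The types below are hypothetical — the actual system’s classes aren’t shown anywhere in this post — but they illustrate the idea of “program = list of statements grouped into functions, acting on persistent state,” as opposed to a single stateless expression tree.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of a statement-list GP representation.
// Unlike stateless tree GP, statements read and write variables that
// persist between evaluations.
public class GpProgramSketch {
    // Mutable state carried by an entity across turns.
    static class EntityState {
        double[] vars = new double[4];
    }

    interface Statement { void execute(EntityState state); }

    // A trivial statement: write a constant into an internal variable.
    static class SetVar implements Statement {
        final int index;
        final double value;
        SetVar(int index, double value) { this.index = index; this.value = value; }
        public void execute(EntityState s) { s.vars[index] = value; }
    }

    // A function is just a named statement list; a program would be a list
    // of these, rather than one tree evaluated from a single node.
    static class Function {
        final List<Statement> body = new ArrayList<>();
        void run(EntityState s) { for (Statement st : body) st.execute(s); }
    }

    public static void main(String[] args) {
        EntityState state = new EntityState();
        Function prog = new Function();
        prog.body.add(new SetVar(0, 1.5));
        prog.body.add(new SetVar(1, -2.0));
        prog.run(state);
        System.out.println(state.vars[0]); // state persists after the run
    }
}
```

Mutation and crossover then operate on statement lists (inserting, deleting, swapping statements) rather than on subtrees, which is what makes state-carrying programs like these possible in the first place.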
Either way, the system described here uses a fairly sophisticated model, which is both very useful and very difficult to train. Attached below is a Java applet that runs my current program. The program is a very simple bitozoa-style simulation with a bunch of entities looking around for food.
I have run into a variety of challenges, though, and I have included some text from my notebook below:
Movement:
The current system of general movement is not working out tremendously well. Originally, the system of movement and direction planning was intended to be a baseline for agent behavior. If they could “get it”, then they could move up to other, more complicated situations. There are a couple of problems, though:
Information overload:
When they perceive too many entities, they do not use this information to set internal parameters, but call commands directly instead. It would be possible to cut the commands from the input functions; however, this would still require entities to set their internal variables to reflect sensory data. Furthermore, they would need to use that sensory data in their other functions to act intelligently. That is too many steps for the entities to be expected to discover via random mutation.
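The multi-step pipeline that mutation would have to discover looks something like this sketch. All names are invented for illustration; the point is that the two halves below are useless in isolation, so both would have to appear (and connect) by chance.

```java
// Hypothetical sketch of the sense -> internal variable -> behavior
// pipeline. Random mutation would need to discover BOTH steps: write the
// sensory value into an internal variable, AND read that same variable
// from a separate behavior function.
public class SensoryPipelineSketch {
    double[] internal = new double[4]; // the entity's internal parameters

    // Step 1: an input function that only records what it perceives,
    // instead of calling a command directly.
    void onPerceiveEntities(int count) {
        internal[0] = count;
    }

    // Step 2: a behavior function that independently learns to use it.
    String chooseAction() {
        return internal[0] > 3 ? "flee" : "forage"; // threshold is illustrative
    }

    public static void main(String[] args) {
        SensoryPipelineSketch e = new SensoryPipelineSketch();
        e.onPerceiveEntities(5);
        System.out.println(e.chooseAction());
    }
}
```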
Behavior choices:
The current system is very emergence-centric. While that is not a bad thing, entities are adapting to patterns, not actually making decisions. This is partly due to the world configuration: the world itself is not that complicated. But even when equipped with very direct inputs and outputs (input: rotation of the nearest food; outputs: movement speed and rotation), entities fail to map one to the other in a satisfactory manner. This may be solvable by using training, but that is undesirable. Ideally, the entities should be making direct and complex decisions.
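For reference, the mapping the entities fail to discover is simple to write by hand. This is a hypothetical hand-coded version, not evolved code, and the specific gains are made up:

```java
// Hypothetical hand-written version of the mapping the entities should
// find: steer toward the nearest food. Gains (0.5, 0.2) are illustrative.
public class SteeringSketch {
    // input: angle to the nearest food, in radians, relative to heading
    // outputs: { speed, rotation }
    static double[] steerTowardFood(double angleToFood) {
        double rotation = 0.5 * angleToFood;                    // turn toward the food
        double speed = Math.cos(angleToFood) > 0 ? 1.0 : 0.2;   // slow down when food is behind
        return new double[] { speed, rotation };
    }

    public static void main(String[] args) {
        double[] out = steerTowardFood(0.0); // food dead ahead
        System.out.println(out[0] + " " + out[1]); // full speed, no turn
    }
}
```

Two multiplications and a comparison — which is what makes the entities’ failure to evolve it so frustrating.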
Feedback problems:
I should not be trying to tackle a direct feedback problem here. Those can be solved via neural nets, and do not exercise the complex reasoning that can (supposedly) be achieved via GP.
Questions:
What information should the entities be able to perceive?
What choices should the entities be making?
What problems should be solvable?
What worlds should be used?
Possible solutions:
Entities should be able to perceive relevant information. Entities should *not* need to perform elaborate transformations to turn that information into something that makes sense to them. But this raises the question of what information actually is relevant.
Entities should be able to make high-level choices. Current choices are in the domain of “move to the food”, “move forward”, “loop about”. These choices are not tremendously complex; they have a great deal of nuance, but the entities could potentially be doing something more interesting.
I want complex behavior to happen, but “complex” and “interesting” are subjective adjectives and difficult to define. More explicitly, I want stable patterns to emerge, where entities are successful and stable in the sense of not undergoing radical behavior changes. Navigating towards food is a start, but there should be more types of objects in the world with which entities may interact, and more sophisticated use of those objects should result in higher entity success (whether measured as health, lifespan, or number of progeny).
The nature of these objects, and of the world itself, is the remaining issue. These entities exist in a purely abstract environment, so the world can be as simple or absurd as necessary. But given the movement problems described above, having objects in the environment that affect an entity’s health or reproductive faculties is not the best option. If navigational logic were improved, that may change, though.
Because I want to take this in the direction of giving entities different affordances or rules for interacting with the world, and then observing the types of stable systems that result, the entities should have more capability to affect the world itself, possibly by creating objects. Who knows; we’ll see.
I would like to kick off my research entries with a short description of what my research interests are, why I’ve chosen them, and how I want to connect them with existing domains. I also want this post category to serve as a home for reading summaries of the many and various journals, books, and papers that I’ll be reading to support my endeavor.
The main focus of my research is on AI-based characters, and the simulation of social behavior between them. The primary target for this research is games, because I think that games are the best setting for people to engage with AI-based characters. Ultimately, I would like the fruit of the research to be a small testbed where a player can interact with several characters, each of which has its own goals and interests, and those characters will interact with each other as well as with the player. Think of a combination of Facade and The Sims.

There are several reasons why I’m interested in social behavior. The first is that the AI for game characters is currently terrible, and it would do a lot to improve games if the characters had lives beyond their interaction with the player. The second is that simulating social behavior among AI-based characters can lead to formal models of social structure that may be useful outside of a pure game setting. This is interesting because the resulting behavior among the agents can be changed based on the social rules in place, and it will be interesting to experiment with different rule systems to see what changes ensue. This bears greatly on cultural theory, because the embedded patterns and affordances of the models (and the simulation software itself) will manifest in the resulting social worlds.
Because DM is a Digital Media program, much of what I’ll be doing will involve connecting with the work of existing new media theory and artifacts, and I will try to catalogue these to the best of my ability in this category. I’ll often wind up posting things that do not directly relate to the research project, but that bear on new media and the theory of games in general.