The Maja Machine Learning Framework (MMLF) is a general framework for problems in the domain of Reinforcement Learning (RL), written in Python. It provides a set of RL algorithms and a set of benchmark domains. Furthermore, it is easily extensible and supports automated benchmarking of different agents. Among the RL algorithms are TD(lambda), CMA-ES, Fitted R-Max, Monte-Carlo learning, DYNA-TD, and the actor-critic architecture. MMLF contains different variants of the maze-world and pole-balancing problem classes as well as the mountain-car testbed and the pinball maze domain.
A scenario under study is called a “world”. An example of such a scenario would be a robot that tries to find its way through a maze. In RL, the world is typically decomposed into the “agent(s)” and the “environment”. In the example, the robot would be the agent and the maze would be the environment. The MMLF adopts this view since it provides a natural modularization, which allows writing general learning agents and testing them in a multitude of environments. All learning (optimization of behavior) is usually done within an agent, while the simulation of physics and other kinds of dynamics is performed within an environment.
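The agent/environment decomposition can be sketched as two independent classes, one holding the behavior and one holding the dynamics. This is an illustrative sketch only; the class and method names here are hypothetical and do not reflect MMLF's actual API.

```python
import random

# Hypothetical sketch of the agent/environment decomposition.
# All names are illustrative, not MMLF's actual API.

class MazeEnvironment:
    """Environment: simulates the maze dynamics, knows nothing about learning."""
    def __init__(self, width=5):
        self.width = width
        self.position = 0          # agent starts at the left end
        self.goal = width - 1      # goal cell at the right end

    def get_state(self):
        return self.position

    def available_actions(self):
        return ["left", "right"]

    def perform_action(self, action):
        """Apply the agent's action; return (reward, episode_finished)."""
        if action == "right":
            self.position = min(self.position + 1, self.width - 1)
        else:
            self.position = max(self.position - 1, 0)
        finished = self.position == self.goal
        return (1.0 if finished else 0.0), finished

class RandomAgent:
    """Agent: all behavior optimization would live here; this one acts randomly."""
    def choose_action(self, state, actions):
        return random.choice(actions)

    def give_reward(self, reward):
        pass  # a learning agent would update its value estimates here
```

Because neither class imports or references the other, the same agent can be dropped into any environment that offers the same small set of methods.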
Agents and environments are only loosely coupled. To allow them to interact, the MMLF uses an “interaction server”, which controls how information is exchanged between these two types of entities. For example, an environment might inform an agent about its current state and the actions available to it, while the agent might inform the environment which action it actually performed. This communication via the interaction server is completely transparent to agents and environments; the only constraint they must fulfill is that they offer certain methods which are called by the interaction server.
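The mediation described above can be sketched as a small driver loop that is the only component referencing both sides. Again, the names here (`InteractionServer`, the required methods, and the stand-in agent and environment) are hypothetical assumptions for illustration, not MMLF's real interaction server.

```python
# Hypothetical sketch of interaction-server mediation; names are
# illustrative and may differ from MMLF's real implementation.

class InteractionServer:
    """Relays all information between one agent and one environment."""
    def __init__(self, agent, environment):
        self.agent = agent
        self.environment = environment

    def run_episode(self, max_steps=100):
        """Drive one episode; agent and environment never reference each other."""
        total_reward = 0.0
        for _ in range(max_steps):
            state = self.environment.get_state()
            actions = self.environment.available_actions()
            action = self.agent.choose_action(state, actions)
            reward, finished = self.environment.perform_action(action)
            self.agent.give_reward(reward)
            total_reward += reward
            if finished:
                break
        return total_reward

# Minimal stand-ins offering exactly the methods the server calls:
class CountdownEnvironment:
    def __init__(self, start=3):
        self.counter = start
    def get_state(self):
        return self.counter
    def available_actions(self):
        return ["decrement"]
    def perform_action(self, action):
        self.counter -= 1
        return 1.0, self.counter == 0

class GreedyAgent:
    def choose_action(self, state, actions):
        return actions[0]
    def give_reward(self, reward):
        pass
```

Any agent and environment pair that implements these methods can be combined without either side knowing about the other, which is what makes automated benchmarking across many agent/environment combinations straightforward.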
MMLF is available at http://sourceforge.net/projects/mmlf/
Contact the MMLF support mailing list.