Creating an AI Player

Basics

Every agent in the framework must extend the AbstractPlayer class and implement the getAction method. The RandomPlayer class can serve as a template for new agents.

    import java.util.List;
    import java.util.Random;

    public class RandomPlayer extends AbstractPlayer {
        private final Random rnd;

        public RandomPlayer(Random rnd) {
            this.rnd = rnd;
        }

        public RandomPlayer() {
            this(new Random());
        }

        @Override
        public AbstractAction getAction(List<AbstractAction> availableActions, AbstractGameState observation) {
            // Pick one of the available actions uniformly at random
            int randomAction = rnd.nextInt(availableActions.size());
            return availableActions.get(randomAction);
        }
    }

The observation argument provides the agent's view of the current game state, which it can use to make better decisions. The agent also receives a forward model from the game, which can be accessed by calling player.getForwardModel(). To advance a game state, use player.getForwardModel().next(gameState, action).
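
For example, a simple one-step-lookahead player could copy the observation, apply each available action with the forward model, and pick the action whose resulting state looks best. The sketch below is only illustrative: it assumes the observation can be copied with a copy() method and scored with a getHeuristicScore(playerId) method, and that the player's id is available via getPlayerID(); these names are assumptions and may differ from the actual API.

    import java.util.List;

    public class OneStepLookaheadPlayer extends AbstractPlayer {

        @Override
        public AbstractAction getAction(List<AbstractAction> availableActions, AbstractGameState observation) {
            AbstractAction best = availableActions.get(0);
            double bestScore = Double.NEGATIVE_INFINITY;
            for (AbstractAction action : availableActions) {
                // Copy the observation so the real game state is never modified (assumed copy() method)
                AbstractGameState stateCopy = observation.copy();
                // Advance the copied state using the forward model provided by the game
                getForwardModel().next(stateCopy, action);
                // Score the resulting state (assumed heuristic and player id accessor)
                double score = stateCopy.getHeuristicScore(getPlayerID());
                if (score > bestScore) {
                    bestScore = score;
                    best = action;
                }
            }
            return best;
        }
    }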

The AbstractPlayer class also provides optional hooks: initializePlayer and finalizePlayer, which can be used to load initial parameters (weights) and save them after training. They receive the initial (or final) GameState, and can be used to learn from a sequence of games when the same agent instance is reused. If a player maintains any internal state, it must implement initializePlayer() to clear that state at the start of a new game.
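
A minimal sketch of these hooks is shown below, assuming initializePlayer and finalizePlayer each take an AbstractGameState argument; the weight-saving helper is hypothetical and only indicates where such logic would go.

    import java.util.ArrayList;
    import java.util.List;

    public class StatefulPlayer extends AbstractPlayer {
        // Parameters kept across games (e.g. learned weights)
        private double[] weights = new double[]{0.5, 0.5};
        // Per-game state that must be reset between games
        private List<AbstractAction> actionsTaken = new ArrayList<>();

        @Override
        public void initializePlayer(AbstractGameState initialState) {
            // Clear per-game state so nothing leaks from a previous game
            actionsTaken = new ArrayList<>();
        }

        @Override
        public void finalizePlayer(AbstractGameState finalState) {
            // Hook for updating or saving weights from the finished game
            // saveWeights(weights);  // hypothetical helper
        }

        @Override
        public AbstractAction getAction(List<AbstractAction> availableActions, AbstractGameState observation) {
            AbstractAction chosen = availableActions.get(0);  // placeholder decision logic
            actionsTaken.add(chosen);
            return chosen;
        }
    }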

In some cases the player has only one action, or no actions, to choose from. In these cases the registerUpdatedObservation method is called instead of getAction. This method lets the player see the current gameState so it can update its beliefs accordingly.
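
An agent that tracks hidden information can override this method inside its AbstractPlayer subclass; the sketch below assumes registerUpdatedObservation takes a single AbstractGameState argument, and the belief-tracking helper is hypothetical.

    @Override
    public void registerUpdatedObservation(AbstractGameState gameState) {
        // Called on turns where this player is not asked for an action,
        // so beliefs (e.g. opponent models) can still be kept up to date.
        // beliefTracker.update(gameState);  // hypothetical helper
    }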