# OpenAI Gym for NES games + DQN with Keras to learn Mario Bros. from raw pixels
- An EXPERIMENTAL openai-gym wrapper for NES games.
- With a Double Deep Q Network that learns to play the 1983 Mario Bros. game.
- Python 3 only.
- Install openai-gym and Keras with the TensorFlow backend, together with `cv2` (the OpenCV Python module; on Debian/Ubuntu, `sudo pip install opencv-python`, see this SO question).
- Install the `fceux` NES emulator and make sure `fceux` is in your `$PATH`. On Debian/Ubuntu you can simply use `sudo apt install fceux`. Version 2 or later is required.
- Find a `.nes` ROM for the Mario Bros. game (any dump for the Nintendo NES will do). Save it to
- Copy state files from `~/.fceux/fcs/` (faster loading for the beginning of the game).
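A quick way to check from Python that the emulator is actually reachable (a hedged sketch: it only inspects `$PATH`, and the helper name is ours, not part of the project):

```python
import shutil

def fceux_available():
    """Return the path to the fceux binary if it is on $PATH, else None."""
    return shutil.which("fceux")

if fceux_available() is None:
    print("fceux not found -- install it, e.g. `sudo apt install fceux`")
```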
For instance, to load the Mario Bros. environment:

```python
import gym
import nesgym  # importing nesgym registers its environments with gym

env = gym.make('nesgym/MarioBros-v0')
obs = env.reset()
for step in range(10000):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    ...  # your awesome reinforcement learning algorithm is here
```
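DQN agents that learn from raw pixels usually grayscale and downsample each frame before it enters the network. A minimal NumPy sketch (the 2x downsampling factor here is illustrative, not necessarily what this repo uses):

```python
import numpy as np

def preprocess(frame):
    """Convert an RGB NES frame of shape (240, 256, 3) to a small
    grayscale array: average the color channels, then keep every
    2nd pixel in each dimension (illustrative factors)."""
    gray = frame.mean(axis=2)   # (240, 256)
    small = gray[::2, ::2]      # (120, 128)
    return small.astype(np.uint8)

frame = np.zeros((240, 256, 3), dtype=np.uint8)
obs = preprocess(frame)
print(obs.shape)  # (120, 128)
```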
## Examples for training DQN
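The Double DQN update behind such training scripts selects the next action with the online network but evaluates it with the target network. A NumPy sketch of the target computation, with made-up Q-values:

```python
import numpy as np

def double_dqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """Compute y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).

    rewards, dones: shape (batch,)
    q_online_next, q_target_next: shape (batch, n_actions)
    """
    best_actions = q_online_next.argmax(axis=1)                    # select with online net
    q_eval = q_target_next[np.arange(len(rewards)), best_actions]  # evaluate with target net
    return rewards + gamma * q_eval * (1.0 - dones)                # zero bootstrap on terminal

rewards = np.array([1.0, 0.0])
dones = np.array([0.0, 1.0])
q_online_next = np.array([[0.2, 0.8], [0.5, 0.1]])
q_target_next = np.array([[0.3, 0.6], [0.4, 0.2]])
print(double_dqn_targets(rewards, dones, q_online_next, q_target_next))
```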
## Integrating new NES games?
You need to write two files: a Lua file and a Python file. The Lua file reads the reward out of the emulator (typically by extracting it from a memory location), and the Python file defines the game-specific environment.
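As a rough illustration of the Python side, here is a gym-free skeleton of such a game-specific class; every name and the reward shaping below are hypothetical, not this repo's actual API:

```python
class MarioBrosEnvSketch:
    """Hypothetical skeleton of a game-specific environment (illustrative only)."""

    # Discrete action set the Lua script would map to joypad buttons
    actions = ['NOOP', 'Left', 'Right', 'A']

    def __init__(self):
        self.last_score = 0
        self.last_lives = 2

    def reward(self, score, lives):
        """Shape a step reward from values the Lua script read out of RAM:
        the score delta, minus a penalty per life lost (illustrative numbers)."""
        r = (score - self.last_score) - 100 * max(0, self.last_lives - lives)
        self.last_score, self.last_lives = score, lives
        return r

env = MarioBrosEnvSketch()
print(env.reward(200, 2))  # score went up by 200 -> 200
print(env.reward(200, 1))  # lost a life -> -100
```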
This website gives the RAM mapping for the most well-known NES games; it is very useful for easily extracting the score or lives directly from NES RAM and using them as the reward in the reinforcement learning loop. See for instance the page for Mario Bros.
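Many NES games store the score as one decimal digit per byte, so once the RAM map gives you the address, decoding it in the Python environment takes only a few lines (the address below is a placeholder, not Mario Bros.' real score location):

```python
def score_from_ram(ram, addr=0x0090, ndigits=6):
    """Decode a score stored as one decimal digit per byte, most
    significant digit first. 0x0090 is a placeholder address --
    look up the real one in the RAM map for your game."""
    value = 0
    for d in ram[addr:addr + ndigits]:
        value = value * 10 + d
    return value

ram = bytearray(0x800)                    # the NES has 2 KiB of work RAM
ram[0x0090:0x0096] = [0, 1, 2, 3, 4, 5]   # digits of the score 012345
print(score_from_ram(ram))  # 12345
```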
## Training Atari games

## Training NES games
### Mario Bros. game
Architecture of the DQN playing Mario:
Overview of the experiments with the 3 emulators: