Quickstart

After completing the Installation, you can start using the environment by importing it in your Python code and calling the gymnasium.make function. Importing Tetris registers the environment with Gymnasium, which is why the import is needed even though the class is not referenced directly.

import gymnasium as gym
from tetris_gymnasium.envs import Tetris

env = gym.make("tetris_gymnasium/Tetris")

With the environment created, you can interact with it via the typical Gymnasium reset and step methods. The reset method initializes the environment and returns the initial observation and an info dictionary. The step method takes an action as input and returns the next observation, the reward, a termination flag, a truncation flag, and an info dictionary.
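In code, a single reset-step cycle looks like this (continuing from the env created above and sampling a random action):

observation, info = env.reset(seed=42)
action = env.action_space.sample()  # sample a random action from the action space
observation, reward, terminated, truncated, info = env.step(action)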

Simple random agent

For example, a simple loop that interacts with the environment using random actions could look like this:

import gymnasium as gym

from tetris_gymnasium.envs.tetris import Tetris

if __name__ == "__main__":
    env = gym.make("tetris_gymnasium/Tetris", render_mode="ansi")
    env.reset(seed=42)

    terminated = False
    while not terminated:
        # Render the current state of the game as text
        print(env.render() + "\n")
        # Sample a random action and advance the game by one step
        action = env.action_space.sample()
        observation, reward, terminated, truncated, info = env.step(action)
    print("Game Over!")

Interactive environment

You can play around with the environment by using the interactive scripts in the examples directory.

For example, the play_interactive.py script allows you to play the Tetris environment using the keyboard: a/d move the active tetromino left/right, s moves it down, w/e rotate it counterclockwise/clockwise, space performs a hard drop, q swaps with the held piece, and r resets the environment.

import sys

import cv2
import gymnasium as gym

from tetris_gymnasium.envs import Tetris

if __name__ == "__main__":
    # Create an instance of Tetris
    env = gym.make("tetris_gymnasium/Tetris", render_mode="human")
    env.reset(seed=42)

    # Main game loop
    terminated = False
    while not terminated:
        # Render the current state of the game
        env.render()

        # Pick an action from user input mapped to the keyboard
        action = None
        while action is None:
            key = cv2.waitKey(1)

            if key == ord("a"):
                action = env.unwrapped.actions.move_left
            elif key == ord("d"):
                action = env.unwrapped.actions.move_right
            elif key == ord("s"):
                action = env.unwrapped.actions.move_down
            elif key == ord("w"):
                action = env.unwrapped.actions.rotate_counterclockwise
            elif key == ord("e"):
                action = env.unwrapped.actions.rotate_clockwise
            elif key == ord(" "):
                action = env.unwrapped.actions.hard_drop
            elif key == ord("q"):
                action = env.unwrapped.actions.swap
            elif key == ord("r"):
                env.reset(seed=42)
                break

            if (
                cv2.getWindowProperty(env.unwrapped.window_name, cv2.WND_PROP_VISIBLE)
                == 0
            ):
                sys.exit()

        # Perform the action (skipped if the inner loop was exited by a reset)
        if action is not None:
            observation, reward, terminated, truncated, info = env.step(action)

    # Game over
    print("Game Over!")

Training

To use the environment for reinforcement learning, you need to train an agent. The examples directory contains a script demonstrating how to train a DQN agent on the Tetris environment using a convolutional neural network (CNN) model.

To run the training, use the following command:

poetry run python examples/train_cnn.py

You can refer to the CleanRL documentation for more information on the training script.

Note: If you have tracking enabled, you will be prompted to log in to Weights & Biases during the first run. This behavior can be adjusted in the script or by passing the parameter --track False.
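For example, to run the training without Weights & Biases tracking:

poetry run python examples/train_cnn.py --track False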