I've been working on a formalization of combinatorial game theory, focusing on solving the game of Nim and the Sprague-Grundy theorem. Initially, I wanted to make this post a literate Coq document, but as I started writing, I realized there was not much to gain from cluttering the posts with various small lemmas and proofs. Instead, I will highlight the main definitions and some of the design decisions I faced. I will also mention some of the more technical challenges I had to deal with.

I'm planning on making multiple posts; this post will be mainly definitions. You can view the (mostly) completed project here.

The situation for prerequisite knowledge is a bit interesting.
No knowledge of game theory is required, although it would probably help you follow along if you've
seen these proofs before in some other context.
I *don't* think that reading formalized proofs is a particularly good introduction to game theory--
especially for the material I'm focusing on, the proofs are fairly intuitive, and the formalization
mostly gets in the way.
I'm assuming some basic familiarity with Coq, since I'm focusing on the main challenges associated with using Coq as a formalization tool.

## A Quick Introduction to Combinatorial Games

Combinatorial games are a way to model sequential, two-player games with perfect information.
In a combinatorial game, players make moves that change the state of the game, which in turn determines
which moves a player can make.
Whoever is not able to make a move on their turn is declared the loser, and the other player is the winner.
One of the main goals of combinatorial game theory is to describe the outcome of a combinatorial game--that is,
whether the first or second player will win if both players play optimally.
In this post, I will be focusing on *impartial* games, where both players are able to make
exactly the same moves.

One fundamental example of an impartial game is the game of Nim, whose state is represented by a pile of stones. On their turn, a player must remove any positive number of stones from the pile. Whoever is left with an empty pile on their turn is the loser.

Of course, this version of Nim is not very interesting, since it is clear that if the pile is empty to start with,
the first player loses, and otherwise, the first player can simply take all remaining stones and force the second player
to lose.
We can instead consider the *sum* of two Nim games, where on each turn, a player can choose one Nim pile and remove a
positive number of stones.
This can clearly be generalized to a sum of any number of Nim games, or in fact any number of any kind of impartial game.
Now, it is not so clear how to determine the outcome of a sum of Nim games.
For instance, the Nim games with $1$, $2$, and $3$ stones are all winning for the first player, but
their sum is losing.

It turns out that the outcome of a sum of Nim games can easily be determined by computing its associated *nimber* or
*Grundy value* (I will use the two terms interchangeably).
If a game has Grundy value 0, it is losing for the first player; otherwise, it is winning.
Interestingly, the operation that is used to add nimbers is *bitwise xor*; for example,
the sum of Nim games $1 \oplus 2 \oplus 3$ is losing exactly because $1 \oplus 2 \oplus 3 = 0$.
This is a seemingly mysterious connection I hope to justify.
In addition, *all* impartial games have a Grundy value, and the sum of any number of impartial games can be understood
by computing the xor-sum of the Grundy values of each game.
This is the content of the Sprague-Grundy theorem.
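To make the xor concrete, here is the computation for the example above, writing each pile size in binary:

$$1 \oplus 2 \oplus 3 = 01_2 \oplus 10_2 \oplus 11_2 = 00_2 = 0.$$

Each bit position contains an even number of $1$s, so every bit of the xor-sum is $0$, and the position is losing for the first player.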

## Defining Impartial Games

We define an impartial game as the following structure:

- A type of positions,
- A start position,
- A function mapping each position to the list of valid next positions,
- A proof that the induced valid-move relation on positions is well founded (corresponding to the fact that any game must eventually terminate).

This definition is adapted from a blog post on combinatorial game theory from the Poleiro blog.

```
Inductive impartial_game :=
  ImpartialGame {
    position : Type;
    start : position;
    moves : position -> list position;
    valid_move next current := In next (moves current);
    finite_game : well_founded valid_move;
  }.
```

We define the notion of *winning* and *losing* states.
We might try to define a winning state as follows:

```
Fail Inductive winning_state : S -> Prop :=
| trans_to_losing : forall (s s' : S),
    valid_move game s' s -> (~ winning_state s') -> winning_state s.
```

Recall that `~ winning_state s'` is equivalent to `winning_state s' -> False`.
Thus this constructor does not satisfy the strict positivity requirement that Coq imposes to ensure soundness.
To work around this, we can define two mutually inductive types, `winning_state` and `losing_state`.
A state is winning if there is at least one transition to a losing state,
and a state is losing if all transitions are to winning states.

```
Section Winning.
  Variable game : impartial_game.
  Let S := position game.

  Inductive winning_state : S -> Prop :=
  | trans_to_losing : forall (s s' : S),
      valid_move game s' s -> losing_state s' -> winning_state s
  with losing_state : S -> Prop :=
  | all_winning : forall (s : S),
      (forall (s' : S), valid_move game s' s -> winning_state s') -> losing_state s.
```

The advantage of defining winning and losing like this is that it is quite natural and aligns with how winning and losing are usually understood.
One downside is that it is not immediately clear that these two notions are mutually exclusive, or that a state must be either winning or losing.
We show this by constructing a recursive decision procedure, `get_outcome_b`, that takes a state and determines whether it is winning or losing.

I will omit the proofs here, but we can use boolean reflection to relate the
decision procedure `get_outcome_b` to the `winning_state` and `losing_state` predicates,
and then use this to show that winning is a decidable proposition.

```
  Lemma w_reflect :
    forall s, reflect (winning_state s) (get_outcome_b s).

  Lemma winning_decidable : forall s, winning_state s + ~ (winning_state s).

End Winning.
```
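To give a flavor of how the reflection lemma is used, decidability follows from `w_reflect` by a single case analysis. This is only a sketch against the statements above (I've named it `winning_decidable'` to avoid clashing with the project's actual lemma):

```
(* Destructing the reflect proof gives two cases: one carrying a proof of
   winning_state s, the other a proof of its negation. *)
Lemma winning_decidable' : forall s, winning_state s + ~ (winning_state s).
Proof.
  intro s; destruct (w_reflect s); [left | right]; assumption.
Qed.
```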

As an example of a game, we define the zero game, which has a single state with no moves and is losing for the first player.

```
Definition zero : impartial_game.
  refine {|
    position := unit;
    start := tt;
    moves s := [];
  |}.
  constructor; intros y H; inversion H.
Defined.
```
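Nim itself is the subject of the next post, but a single pile also fits this interface naturally. Here is a sketch (the project's actual definition may differ; `seq 0 k` is the standard library's list `[0; 1; ...; k - 1]`):

```
(* A pile of n stones: from a pile of size k, a player may move to any
   strictly smaller pile. *)
Definition nim (n : nat) : impartial_game.
  refine {|
    position := nat;
    start := n;
    moves k := seq 0 k;
  |}.
  (* Here, In s' (seq 0 s) is equivalent to s' < s, so well-foundedness
     follows from lt_wf. Proof obligation omitted. *)
Defined.
```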

## Sum of Games

Next we define the *sum* of two impartial games.
If we have two games $A$ and $B$, their sum
$A \oplus B$ can be thought of as playing $A$ and $B$
at the same time.
On their turn, a player will choose either game $A$ or $B$ and
make a move.
The loser is whoever is unable to make a move on either board.
Naturally, we can define the states of the game $A \oplus B$ as
the product of the states of $A$ and $B$,
and moves between states as either making a move in $A$ or $B$.
We must also show that this game terminates, but this follows relatively easily from the fact that
its valid move relation is a product of two well-founded relations.
This definition is also adapted from the Poleiro blog.

```
Definition sum_game (a b : impartial_game) : impartial_game.
  refine {|
    position := position a * position b;
    start := (start a, start b);
    moves pos := map (fun s => (s, snd pos)) (moves a (fst pos)) ++
                 map (fun s => (fst pos, s)) (moves b (snd pos));
  |}.
  (* Proof obligation omitted. *)
Defined.

Notation "a ~+~ b" := (sum_game a b) (at level 31, left associativity).
```

This definition is not particularly nice to work with when trying to show that a particular move is valid, or when reasoning about which moves are valid from a given state. Instead, I make heavy use of the following lemma to make both kinds of goals more manageable.

```
Lemma moves_in_game_sum : forall a b (s s' : position (a ~+~ b)),
  valid_move (a ~+~ b) s' s <->
  (valid_move a (fst s') (fst s) /\ snd s' = snd s) \/
  (valid_move b (snd s') (snd s) /\ fst s' = fst s).
```

We can prove various things about sums of games--for example, that the game $A \oplus A$ is always losing--
fairly straightforwardly using well-founded induction on the `valid_move` relation.
One proof I found a bit trickier was to show that if $A$ and $B$ are both in losing states, then
$A \oplus B$ is in a losing state.
This is stated as follows:

```
Lemma sum_losing_is_losing : forall a b x y,
  losing_state a x ->
  losing_state b y ->
  losing_state (a ~+~ b) (x, y).
```

The proof of this is not hard to understand.
Recall that by our definition of `losing_state`, we need to show that any move from $(x, y)$ must be to a
winning state, which means that for any valid move $(x, y) \to s$, there must exist a valid move $s \to s'$ to a losing state $s'$.
Let's induct on the game states of $A \oplus B$.
By our `moves_in_game_sum` lemma, we know that any valid move from $(x, y)$
must be a move in $A$ or in $B$.

If the move is $(x, y) \to (x', y)$, then we use the fact that $x$ is losing in $A$: the state $x'$ must be
winning, so there is a valid move from $x'$ to some losing state $x''$, and we would like to conclude by
our inductive hypothesis that $(x'', y)$ is a losing state.
(Similarly if the move is $(x, y) \to (x, y')$.)
The problem is that $(x'', y)$ is *not* a valid move from $(x, y)$, and so we can't directly use our inductive hypothesis,
which only applies to states that are directly valid moves from $(x, y)$.
Instead, we can define the *transitive closure* of our valid move relation and induct on that.

I wrote a short note about this, since it took me a while to understand it the first time.
(Conversely, if you're reading that note, hopefully this will provide some motivation about why defining the transitive closure of a relation can be useful.)
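For concreteness, the transitive closure can be defined as an inductive relation along the following lines. This is only a sketch: the project's actual names and argument order (e.g. for `rel_trans`) may differ.

```
(* trans_clos R relates x to z when there is a nonempty chain of R-steps
   from x to z: either a single step, or a step followed by a chain. *)
Inductive trans_clos {A : Type} (R : A -> A -> Prop) : A -> A -> Prop :=
| rel_base : forall x y, R x y -> trans_clos R x y
| rel_trans : forall x y z, R x y -> trans_clos R y z -> trans_clos R x z.
```

Inducting on the (still well-founded) transitive closure strengthens the inductive hypothesis to cover every state reachable in one *or more* moves, which is exactly what the $(x'', y)$ case needs.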
In the following proof, `wf_trans` is a lemma that produces a proof that the transitive closure of the
`valid_move` relation is well-founded.

```
Proof.
  intros a b.
  enough (forall s, losing_state a (fst s) ->
                    losing_state b (snd s) ->
                    losing_state (a ~+~ b) s) by auto.
  refine (well_founded_induction (wf_trans _ _ (finite_game (a ~+~ b))) _ _);
    intros [x y] IH ? ?; simpl in *.
  constructor; intros.
  apply moves_in_game_sum in H1 as [[? ?] | [? ?]]; destruct s'; simpl in *; subst.
  - inversion H; subst. specialize H2 with p.
    destruct H2; auto.
    apply trans_to_losing with (s', y).
    valid_move_sum_left.
    apply IH; auto.
    apply rel_trans with (s, y); valid_move_sum_left.
  - inversion H0; subst. specialize H2 with p0.
    destruct H2; auto.
    apply trans_to_losing with (x, s').
    valid_move_sum_right.
    apply IH; auto.
    apply rel_trans with (x, s); valid_move_sum_right.
Qed.
```

## Equivalence of Games

Next, we need to define what it means for two games to be "equal" or "equivalent". For example, the game $A \oplus 0$ should be "equal" to $A$, but what do we mean by that? One definition, which works in this case, is that two games are equal if we can define a bijection between their positions that maps start position to start position, such that whenever a move is valid in one game, the corresponding move is valid in the other. This can be defined as follows:

```
Inductive bijection : impartial_game -> impartial_game -> Prop :=
| ex_bijection : forall (a b : impartial_game)
                        (f : position a -> position b)
                        (g : position b -> position a),
    (forall x, f (g x) = x) ->
    (forall x, g (f x) = x) ->
    (forall s s', valid_move a s' s -> valid_move b (f s') (f s)) ->
    (forall s s', valid_move b s' s -> valid_move a (g s') (g s)) ->
    f (start a) = start b ->
    bijection a b.

Notation "a ~~ b" := (bijection a b) (at level 50).
```

We can show that this defines an equivalence relation, that `~+~` is commutative and associative
with respect to `~~`, and so on.
However, this bijective notion of equivalence turns out to be too restrictive.
For example, using the theorem about Nim games we will prove later, we will show that the sum of Nim games
$1 \oplus 2 \oplus 3$ is "equivalent" to $0$, not only in the sense that both games are losing,
but also in the sense that we can replace $1 \oplus 2 \oplus 3$
with $0$ in any sum of games without changing whether the game is winning or losing.
I chose to define this notion of equivalence as follows:

```
Definition equiv (a b : impartial_game) := losing_state (a ~+~ b) (start a, start b).
Notation "a == b" := (equiv a b) (at level 50).
```
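One pleasant consequence of this definition is that reflexivity of `==` is precisely the earlier fact that $A \oplus A$ is always losing. As a sketch (`sum_self_losing` is a placeholder name for that lemma):

```
(* Reflexivity of == unfolds to: a ~+~ a is losing from its start position,
   which is the A + A lemma mentioned earlier. *)
Lemma equiv_refl : forall a, a == a.
Proof.
  intro a; apply sum_self_losing.
Qed.
```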

We can show that `~~` implies `==` (so bijective equivalence is the stronger notion):

```
Lemma bijection_equiv : forall a b, a ~~ b -> a == b.
Proof.
  intros.
  destruct H.
  enough (forall s, losing_state (a ~+~ b) (s, f s)).
  { unfold equiv; rewrite <- H3; apply H4. }
  refine (well_founded_induction (finite_game a) _ _); intros x IH.
  constructor; intros.
  apply moves_in_game_sum in H4; destruct H4 as [[? ?] | [? ?]].
  - destruct s'; simpl in *; subst.
    apply trans_to_losing with (p, f p).
    valid_move_sum_right.
    apply IH; auto.
  - destruct s'; simpl in *; subst.
    apply trans_to_losing with (g p0, p0).
    valid_move_sum_left.
    replace x with (g (f x)); [| apply H0].
    apply H2; auto.
    replace (g p0, p0) with (g p0, f (g p0)).
    apply IH. replace x with (g (f x)). apply H2; auto.
    rewrite H0. reflexivity.
    rewrite H. reflexivity.
Qed.
```

In addition to showing that `==` is an equivalence relation, we must show that
it is proper with respect to sums.
In other words, if `a == b` and `c == d`, then `a ~+~ c == b ~+~ d`.
This allows us to use tactics like `rewrite` and `symmetry` as if we were working with propositional equality.
The term for this is "generalized rewriting", and to learn more I recommend this blog post
on rewriting in Coq.

```
#[export] Instance sum_proper : Proper (equiv ==> equiv ==> equiv) sum_game.
```
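With `sum_proper` registered (together with an `Equivalence` instance for `equiv`), setoid rewriting works under sums. A small sketch of what this buys us:

```
(* Rewriting with an == hypothesis under ~+~, then closing the goal by
   reflexivity of ==. *)
Goal forall a b c : impartial_game, a == b -> a ~+~ c == b ~+~ c.
Proof.
  intros a b c H.
  rewrite H.      (* justified by sum_proper *)
  reflexivity.    (* justified by the Equivalence instance *)
Qed.
```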

Another way to define the equivalence relation `equiv` is to use the language of contextual equivalence:
two games $A$ and $B$ are equivalent if, for any game $C$, $A \oplus C$ is losing if and only if $B \oplus C$ is losing.

```
Definition equiv' (a b : impartial_game) :=
  forall c, losing_state (a ~+~ c) (start a, start c) <->
            losing_state (b ~+~ c) (start b, start c).
```

These two definitions turn out to be equivalent, and it's a straightforward exercise to see why.

## Conclusion

Okay, these are the main definitions we need to start proving some theorems about impartial games. Next time, we will show how to determine if any sum of Nim games is losing. We will also prove that any sum of impartial games is equivalent to a single Nim game.