Try playing this game of tic-tac-toe. Here, the computer plays X and has already played its first move. Additionally, a draw counts as a loss for you.

Did you win yet?

Unless my code doesn’t work, you lost every time (either by X getting three in a row or by a draw). You may have also noticed that even a draw can only happen if your first move is in the center. Unsurprisingly, the code here is rigged against you: no matter what moves you play, X will always choose moves that guarantee it either a win or a draw. But why? A naïve answer would be to say “the AI is smarter than us and plays an optimal strategy, therefore it can always prevent us from winning”. However, this answer is neither satisfying nor correct.

Consider, say, what would happen if we pit two computers against each other, each playing “optimally”. In tic-tac-toe, you would (correctly) guess that the game will always end in a tie. But what about games like Hex, where ties are not possible? If you pit two ultra-smart supercomputers against each other in that game, one of them must lose.

So why, and how, can X always win in the above game? And what exactly do we mean by “optimal strategy”? For that, we need to take a look at the theory of games.

Games on Trees

A surprising number of turn-based games can be viewed using the following model:

Consider two people, Alice and Bob, driving down a road together in a car. The road is one way (from top to bottom) and does not loop back on itself, but does have stops along the way where it can split into multiple paths:

We additionally assume that every stopping point of the road has a single way to reach it from the starting point, and that no matter which paths are chosen, the road will eventually end. So, for instance, the following two will not be valid roads:

At the end of each path is a restaurant, either an Applebug’s (restaurant A) or a Burger-Monarch (restaurant B). Alice wants to go to Applebug’s and Bob wants to go to Burger-Monarch. To make things fair, they decide that at every stop one of them will decide which path to go down, starting with Alice and then alternating between the two.

This scenario can in fact model many games. Take tic-tac-toe as an example. We can model it by having each stop correspond to a state of the board. Each path will lead from that state to a new state obtained by placing either an X or O (depending on whose turn it is) in a valid way. The road ends either when there are three consecutive X-s or O-s, or when the board is full.
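As a rough sketch of this model in Python (the 9-character board encoding is my own choice for illustration, not something from the post), the paths out of each stop can be generated like so:

```python
# Sketch of the tic-tac-toe tree: a stop is a board state, and its paths
# lead to the states reachable by one legal move.
# A board is a 9-character string over 'X', 'O', '.' (an assumed encoding).
def children(board, player):
    """All states obtained by `player` placing a mark on an empty square."""
    return [board[:i] + player + board[i + 1:]
            for i, square in enumerate(board) if square == "."]

start = "........."
print(len(children(start, "X")))     # -> 9 possible opening moves
print(children("X...O....", "X"))    # X's possible replies after one exchange
```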

A slight nuance to note is that the state of the game is not determined just by the state of the board, but also by all previous moves. So for example the two states below will be considered different even though the states of the board are identical.

Mathematically, this model represents games on trees (more precisely, alternating-move finite-stage games with perfect information). The game consists of a finite hierarchical tree T, where each node (stop) has exactly one parent but potentially multiple (or no) children, and a winning set W of leaves (nodes with no children).

The game starts at the root, player 1 begins by picking a child of the root, and the game state moves on to that node. Then player 2 picks a child of that node to move to, and the process repeats alternating between the two players. The game ends when a leaf is reached and no further moves can be made. If the final vertex is in W, player 1 wins, otherwise player 2 wins.

A convenient way to look at the states of the game is to label every edge with a letter, which we will call a ‘play’. Then every state of the game can be described by the string of plays leading to that state (with the starting position corresponding to the empty string). When the string’s length is even, it’s player 1’s turn; when it’s odd, it’s player 2’s. So a simple game can be represented like so:

Now, how exactly do we define a strategy for a player? Let’s reconsider the analogy of Alice and Bob. Before going down the road, each of them (independently) considers every stop where it would be their turn to drive and decides ahead of time which path to drive on from there. Then they begin driving, each one of them driving down the path they chose beforehand. Such a choice of paths per stop for a player is called a **strategy**. Mathematically, a strategy for a player will simply be a mapping from each state where it’s their turn to a play in that state.
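To make this concrete, here is a minimal Python sketch (the toy tree, state names, and plays are all invented for illustration): a strategy is literally a lookup table from states to plays, and playing the game just means following the two tables.

```python
# An invented toy game tree: each state (a string of plays) maps to the
# plays available there. Even-length states are player 1's turn.
tree = {
    "":   ["a", "b"],          # player 1 moves first
    "a":  ["c", "d"],          # player 2's turn
    "b":  ["c"],
    "ac": [], "ad": [], "bc": [],   # leaves: the road has ended
}

# A strategy is just a mapping: for every state where it's your turn,
# which play you make there.
player1_strategy = {"": "a"}
player2_strategy = {"a": "d", "b": "c"}

def play_game(strat1, strat2):
    """Follow both players' pre-chosen plays until a leaf is reached."""
    state = ""
    while tree[state]:
        strat = strat1 if len(state) % 2 == 0 else strat2
        state += strat[state]
    return state

print(play_game(player1_strategy, player2_strategy))  # -> ad
```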

We can now clearly define what exactly an optimal *(or winning)* strategy is. A strategy for player 1 is winning if no matter how player 2 plays, the game will always end in a state in W. Similarly, a winning strategy for player 2 is a strategy where no matter what player 1 plays the game will always end outside of W.
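As a sketch of this definition (again on an invented toy tree), checking whether a player-1 strategy is winning amounts to requiring a win against *every* possible player-2 response:

```python
# A player-1 strategy is winning if every sequence of player-2 responses
# still ends in the winning set W. (Toy tree; names are made up.)
tree = {
    "":   ["a", "b"],
    "a":  ["c", "d"],
    "b":  ["e"],
    "ac": [], "ad": [], "be": [],   # leaves
}
W = {"ac", "ad"}   # leaves where player 1 wins

def is_winning_for_p1(strategy, state=""):
    children = tree[state]
    if not children:                 # leaf reached: is it in the winning set?
        return state in W
    if len(state) % 2 == 0:          # player 1's turn: follow the strategy
        return is_winning_for_p1(strategy, state + strategy[state])
    # player 2's turn: the strategy must win against every possible play
    return all(is_winning_for_p1(strategy, state + play) for play in children)

print(is_winning_for_p1({"": "a"}))  # -> True  (both "ac" and "ad" are in W)
print(is_winning_for_p1({"": "b"}))  # -> False (the only ending, "be", is not)
```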

This allows us to partially explain what happened in the tic-tac-toe example at the beginning. The computer knew the winning strategy ahead of time, a.k.a. what exactly to play in (almost) every board state. A nice visualization of this strategy can be seen in this very relevant XKCD.

One thing to note here is that a winning strategy does not win from every state of the game. Say in tic-tac-toe, we are in this state and it is O’s turn:

Here player 2 can guarantee their win simply by playing in the top right corner; it doesn’t matter whether player 1 plays their winning strategy from here. What is important, however, is that when playing from the start, player 1 will always win using their winning strategy. So although the strategy does not guarantee a win from every position, it does guarantee that a state where player 1 cannot win will never be reached. Formally, if a player has a winning strategy starting from some position of the game, we call that position a winning state for that player.

So we saw that in tic-tac-toe X has a winning strategy (when ties count as a win for them). What about other turn-based games like Connect Four or Go? Is there a winning strategy for a player in them too? And how can one find it?

We call games where one player has a winning strategy **determined**. A crucial theorem in game theory (and in other areas of math as a whole) is the following:

ZERMELO’S THEOREM (1913): Every alternating-turn game with complete information and finitely many players and possible states is determined.

So indeed games such as Connect-Four and Go have a winning player. Unfortunately, as we will soon see, Zermelo’s theorem doesn’t give us an easy way to determine who the winning player is nor what the winning strategy will be.

Proving Zermelo’s Theorem (for two players)

The proof of Zermelo’s Theorem is rather elegant and straightforward. Look at all final states of the game. Initially, color each of them blue if player A (Alice) is the winner and red if player B (Bob):

Let N be the largest possible number of turns for the game. Look at every position at turn N-1 of the game, and say it is Alice’s turn there. There are three possibilities:

- The position is already a final state for the game (and so is colored)
- Alice has a play that leads her to a blue position, a.k.a. a win for her. Clearly, when in that position she should pick (one of) these plays and guarantee herself a win
- All plays lead Alice to a red position. Then it doesn’t matter what Alice plays; she loses anyway

Note that since we are at turn N-1, any move Alice makes must lead her to a final state of the game. Otherwise, if she made a play to a position where no one has won yet, Bob could make a play that results in a game with N+1 turns. This is not possible, since we assumed N is the longest possible length of a game.

Recalling the definition from earlier, case 2 is a winning position for Alice (that is, she can guarantee herself a win from there). Since Alice might as well have already won in these states, we color them blue as well. For states in case 3, no matter what Alice does she has lost, and so Bob has won. In other words, these states are winning positions for Bob, and so we color them red.

Similarly, if Bob plays at turn N-1, we do the same process, coloring a state red if it is a winning position for him and blue otherwise. Doing this process on a sample game will look something like this:

We end up with a game where every state from turn N-1 onwards is colored, and so is a winning state for one player. Now look at every state at turn N-2, say Bob plays at that turn. Again there are three possibilities:

- The state is already colored and so is a winning position for one player
- Bob has a play that leads him to a red state
- Every play Bob makes leads him to a blue state

In case 1, we do nothing since the state is already colored. In case 2, Bob has a winning strategy as he can move to a winning position for him, so we color the state red. Finally, in case 3, any move Bob makes leads to a winning position for Alice, which means the current state is itself a winning position for Alice, so we color it blue.

Looking at our sample game, it will now look like so:

We now rinse and repeat the process. If we have colored every state after turn K (that is, after turn K every position is a winning position for some player), we color the states at turn K-1 according to whether the current player can move to a winning position for themselves. Doing this for our sample game results in this fully colored tree:

At the end of the process, every red state is a winning position for Bob and every blue state is a winning position for Alice. In particular, since we also colored the initial state, the game itself either has a winning strategy for Alice or for Bob.

How can we find that strategy? Say Alice has a winning strategy. Then at each step, she picks (any) play that leads her to a blue position. By doing so, no matter what Bob does, the state of the game will remain blue and Alice will always win. So in our sample game, the winning strategy for Alice will be like so:
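The whole coloring argument can be sketched in a few lines of Python (the toy tree and its leaf labels are invented for illustration):

```python
# Backward induction as in the proof: color every state blue (winning for
# Alice) or red (winning for Bob), then read off Alice's strategy.
tree = {
    "":   ["a", "b"],          # Alice (player 1) moves at even-length states
    "a":  ["c", "d"],
    "b":  ["e", "f"],
    "ac": [], "ad": [], "be": [], "bf": [],   # leaves
}
blue_leaves = {"ad", "be", "bf"}   # leaves where Alice wins

def color(state):
    """'blue' if the state is a winning position for Alice, else 'red'."""
    children = tree[state]
    if not children:
        return "blue" if state in blue_leaves else "red"
    child_colors = [color(state + play) for play in children]
    if len(state) % 2 == 0:        # Alice's turn: one blue child suffices
        return "blue" if "blue" in child_colors else "red"
    return "red" if "red" in child_colors else "blue"  # Bob wants a red child

def winning_play(state):
    """Alice's winning strategy: pick any play leading to a blue state."""
    return next(play for play in tree[state] if color(state + play) == "blue")

print(color(""))         # -> blue  (the root is a winning position for Alice)
print(winning_play(""))  # -> b
```

Note that from "a" Bob could answer with "c" and win, which is exactly why the coloring sends Alice down "b" instead.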

This concludes the proof of Zermelo’s theorem. We saw that the game is indeed determined, and we also found a way to determine who has the winning strategy and what it is. A similar proof also works for games with more than two players.

Unfortunately, the proof doesn’t really give an *efficient* way to find the winning strategy. For most games, the number of possible states is huge. For example, tic-tac-toe has 765 states, and Connect Four has a whopping 4531985219092 possible states. Mapping out all these states and doing the coloring as in the proof would take a long while, even with the aid of a computer.

But still, at least now we know that games like Connect Four, Go, tic-tac-toe, and so on have a player who could always *theoretically* win, even though in practice finding the winning strategy is rather hard.

One question, however, you may be asking right now is…

But what about chess?

The game of chess (at least without draw rules like the threefold repetition rule) is not like the games above, as it can technically last arbitrarily long, even forever. For example, if the two players are really dedicated (and have too much free time on their hands), the game can go on forever like so:

If you were to draw out the tree for chess as we did for, say, tic-tac-toe, you’d get an infinite, complex tree. Some paths along the tree end with a mate for one player and so have a clear, definitive winner, but there are also infinitely many paths where the game goes on forever. In those cases, the logical thing to do is to consider the game a draw.

Does a player in chess have a winning strategy? We cannot apply Zermelo’s theorem here since the game is infinite. Additionally, the proof of the theorem won’t work here, since there’s no “longest possible length” for a game in chess.

In general, turn-based games on infinite trees are called **Gale-Stewart** games. Such games are usually defined so that they always go on forever (i.e. every node on the tree has a child), but we can expand the above description of chess to fit this definition by giving each player a single “dummy” play once a mate or a finite draw is reached:

Since there are no final states in Gale-Stewart games, the winning set W (for player 1) is defined as a set of infinite paths along the tree. Alternatively, by labeling each play, we can define the winning set as a set of infinite strings of plays:

We saw the proof of Zermelo’s theorem won’t work for (variations of) chess, and in particular for general Gale-Stewart games. Is there an alternative to Zermelo’s theorem that does work? Or does there exist a game where neither player has a winning strategy?

The answer is that there is indeed an undetermined infinite game. However, it turns out that for a rather common class of Gale-Stewart games there **is** an extension of Zermelo’s theorem!

Open Sets and Borel Sets

First, some preliminary concepts. Consider a Gale-Stewart game on a tree *T*, and look at some position *p* of the game. We define a set *T*_{p} of paths (a.k.a. games) on the tree as the set of all paths that start with *p* and continue however we want:

Here it’s convenient to look at the paths on the tree as strings of plays. Then the set *T*_{p} will be the set of all infinite strings of plays that start with the string *p* and continue however they can. We call such a set *T*_{p} a **basic open set**. So for example with chess, take *p* to be the position right after the start of the game where white plays a pawn to a3. The set *T*_{p} here will be the set of all games that started with “white pawn to a3”.

A (general) **open set** will then be defined as a (potentially infinite) union of basic open sets *T*_{p}. In other words, an open set is the set of all games that start with a position *p* from some collection *Z* of positions. In the chess example, we may take *Z* to be the set of positions where black has been checkmated. Then the winning set *W* for white will be exactly the union of the basic open sets *T*_{p} where *p* lies in *Z*. In other words, the winning set for white in chess is an open set.

*(By the way, for those familiar with the concept, this notion of open sets does indeed create a topology on the tree)*
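As a small illustration in Python (with invented play names): membership in a basic open set *T*_{p} is just a prefix check, which is exactly why an open winning set means the win is decided after finitely many plays.

```python
from itertools import islice

def in_basic_open_set(p, game):
    """Is the (infinite) game in T_p, i.e. does it start with position p?

    `game` is an iterator of plays; only the first len(p) plays are ever
    inspected, so membership is decided by a finite prefix of the game."""
    return list(islice(game, len(p))) == list(p)

# A made-up infinite game where the same two plays repeat forever:
def forever(*plays):
    while True:
        yield from plays

print(in_basic_open_set("ab", forever("a", "b")))  # -> True
print(in_basic_open_set("ba", forever("a", "b")))  # -> False
```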

Now here is the kicker:

GALE-STEWART THEOREM (1953): Every Gale-Stewart game with an open winning set W is determined.

What the theorem means, in essence, is that if a game is won by, and only by, reaching a certain type of position (for example, a mate in chess), then the game is determined. So in particular, chess is determined (at least with the notions we outlined)! Unfortunately for us, the theorem does not tell us which player has the winning strategy. Even more unfortunately, the proof of the theorem is rather involved and requires somewhat more advanced tools, so it cannot fit in this post (a link will be provided at the end).

Now, what sort of games does the Gale-Stewart theorem **not** apply to? Open winning sets can model almost every game humans can play, as they represent winning by reaching some finite position. But what about games where you need to play forever in order to win? As it turns out, for these games the winning set might not be open.

For example, let’s define the following game: two players play tic-tac-toe infinitely many times in succession. Player 1 always plays as X and player 2 always plays as O. Player 1 wins if, over the infinite sequence of games, they win infinitely often and also lose infinitely often. Why is the winning set not open? If it were open, there would be some finite position *p* of the game from which player 1 always wins. However, player 1 can lose from any position. How? If player 1 loses every new tic-tac-toe game after *p*, then they lose the entire game, since they didn’t win infinitely often. It doesn’t matter what the position *p* is; player 1 can still lose afterward.

In general, most games that can be ‘easily’ described fall under the definition of a **Borel set**:

DEFINITION – BOREL SET

The family Ɓ of Borel sets is defined as the smallest family of sets that has the following three properties:

1. For any set A in Ɓ, the complement of A is in Ɓ
2. For every sequence of sets A_{1}, A_{2}, … in Ɓ, their union is also in Ɓ
3. Every open set is in Ɓ

The intuition behind Borel sets is that they are the sets that can be “built” using open sets: start with some countable collection of open sets, take complements of them (i.e. everything outside them), unions, and intersections, and repeat potentially infinitely many times. Borel sets are exactly the sets that can be obtained by this process.

For example, the winning set in the repeated tic-tac-toe example above is Borel (why? I’ll leave it as an exercise for you 🙂). Another example of a Borel set is a set containing just one element. Why is it Borel? Say the single element of the set is *x*. Look at every finite prefix *p* of *x*, and take the intersection of the basic open sets *T*_{p}. The result must be just *{x}*, as no other infinite string can have exactly the same prefixes as *x*.

Now here is the fun part. As it turns out, games whose winning set is Borel are in fact, too, determined!

MARTIN’S THEOREM (1975): Every Gale-Stewart game with a Borel winning set W is determined.

*(Note that the converse is not true: there are games that are determined but whose winning set is not Borel)*

In general, most sets we can construct and are interested in are Borel sets. In fact, most of the time it’s actually rather hard to find a set that is **not** Borel.

In particular, it is also not immediate to find a game that is not determined. The continuation post will detail how to construct such a game (since the construction requires heavy set-theory tools, I’ve decided to separate it into another post).

Continued in “the undetermined game”

Link to Martin’s original proof of the theorem (and in particular of the Gale-Stewart theorem)

Link to a shorter proof (also by Martin)

Tic-tac-toe applet created partially with the aid of this tutorial
