How hard is it to tell dynamical systems apart?

Article: Classification of one dimensional dynamical systems by countable structures

Authors: Henk Bruin & Benjamin Vejnar

Equivalence relations are everywhere, both in mathematics and in our day-to-day life: when we write “eggs” on our shopping list, we understand that it means any brand of eggs; in other words, we consider any two boxes of eggs equivalent, no matter their brand. In the same way, when we work with groups in mathematics we often consider them up to isomorphism: for example, the additive group of the real numbers is isomorphic to the multiplicative group of the positive real numbers via the exponential function. This means that for all intents and purposes these two groups are the same: any theorem about one can be readily translated into a theorem about the other. This allows us to save time and energy by only looking at one of them: equivalence relations provide mathematicians with two-for-one deals when it comes to proving theorems.

When are two dynamical systems the same?

Just like for groups, one can try to introduce a way of saying that two dynamical systems are the same. A dynamical system is a fancy way of talking about the behaviour of continuous maps of certain topological spaces: for simplicity, think of the unit interval $[0,1]$ together with some continuous function $f\colon [0,1] \to [0,1]$. Studying the dynamical system determined by $f$ on $[0,1]$ then means understanding what happens if we keep iterating $f$: for example, if two points on the interval are close, what happens after we apply $f$ to both of them? Are their images still close? And if we keep iterating $f$, do they stay close?
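To get a feel for this, here is a small sketch (the logistic map $f(x)=4x(1-x)$ used below is a standard chaotic example, chosen for illustration and not taken from the paper): two points that start extremely close can end up far apart after enough iterations.

```python
# Iterate the logistic map f(x) = 4x(1-x) on [0,1] from two nearby
# starting points and track how far apart their orbits drift.
def f(x):
    return 4 * x * (1 - x)

x, y = 0.2, 0.2 + 1e-9   # two points that start extremely close
max_sep = 0.0
for n in range(60):
    x, y = f(x), f(y)
    max_sep = max(max_sep, abs(x - y))

# Despite the tiny initial distance, the orbits eventually separate
# by a macroscopic amount (sensitive dependence on initial conditions).
print(max_sep)
```

The initial distance of $10^{-9}$ roughly doubles with each iteration, so within a few dozen steps the two orbits become effectively unrelated.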


It is not immediately obvious what the right notion of equivalence between two dynamical systems on the interval should be. One possibility, which has proven to be quite successful, is the equivalence defined by conjugacy: namely, two dynamical systems on $[0,1]$ determined by continuous functions $f$ and $g$ are conjugate if there exists a homeomorphism $h\colon [0,1]\to[0,1]$ such that $h^{-1}\circ f \circ h=g$. Think of this as saying that, up to a transformation of $[0,1]$ that preserves its shape (just like the famous transformation of a donut into a mug), $f$ and $g$ coincide. Conjugacy is a “good” notion of equivalence: conjugate dynamical systems share the same behaviour. Among the “famous” ones, the logistic map and the tent map are conjugate and share many dynamical properties.
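This classical example can be checked numerically. The sketch below uses the standard conjugating homeomorphism $h(x)=\sin^2(\pi x/2)$ relating the tent map to the logistic map $f(x)=4x(1-x)$; since $h^{-1}\circ f\circ h = g$ is equivalent to $f\circ h = h\circ g$, we verify the latter on a grid to avoid inverting $h$.

```python
import math

def tent(x):
    """Tent map g(x) = 2x for x <= 1/2, and 2(1-x) otherwise."""
    return 2 * x if x <= 0.5 else 2 * (1 - x)

def logistic(x):
    """Logistic map f(x) = 4x(1-x)."""
    return 4 * x * (1 - x)

def h(x):
    """The homeomorphism conjugating the two maps: h(x) = sin^2(pi*x/2)."""
    return math.sin(math.pi * x / 2) ** 2

# Conjugacy h^{-1} o f o h = g is equivalent to f o h = h o g;
# check the latter identity at many points of [0,1].
for i in range(1001):
    x = i / 1000
    assert abs(logistic(h(x)) - h(tent(x))) < 1e-12
print("f o h agrees with h o g on the whole grid")
```

Of course a numerical check on a grid is not a proof, but the identity $4\sin^2(\pi x/2)\cos^2(\pi x/2)=\sin^2(\pi x)$ shows it holds exactly.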


Proving that two dynamical systems are conjugate is useful because, just like for groups, it makes proving theorems easier: instead of considering all dynamical systems on $[0,1]$, one need only work with one representative from each conjugacy class.

When are equivalence relations hard?

The problem of understanding mathematical objects up to equivalence relations is often called classification. A classical result, for example, classifies closed orientable surfaces up to homeomorphism by their genus.


Since classification problems appear everywhere throughout mathematics, one would like to know in advance whether they are solvable or not. In other words, it would save everyone a lot of time if we could prove in advance that understanding dynamical systems up to conjugacy is too complicated. But how does one even define “too complicated” mathematically? Descriptive set theory (DST) comes to the rescue. Descriptive set theorists have an arsenal of definitions that capture when topological spaces are “complicated”: for example, they have identified the simplest subsets of the real line, the so-called Borel sets. Borel sets are obtained from open intervals by repeating, countably many times, the operations of countable union, countable intersection, and complement.


DST has built a whole hierarchy of topological complexity. How does this help with classification problems? We can use a clever trick: we can encode our classification problem into a topological space. Informally speaking, we consider the topological space $X$ whose points are dynamical systems. If we do this carefully enough, the topological space $X$ can be examined through the lens of descriptive set theory. More specifically, we can look at the subset $E\subseteq X^2$ given by $(x,y)\in E$ if and only if the dynamical systems $x$ and $y$ are conjugate. If $E$ is topologically complicated (in the sense of DST), then the classification problem (in other words, understanding when two dynamical systems are conjugate) will probably be quite complicated too.

How hard is conjugacy?

The authors of the paper don’t go through a topological space directly; instead, they use an idea from DST known as “reduction”: they show that conjugacy is at least as complicated as another equivalence relation, whose complexity is known. This equivalence relation is based on the so-called isomorphism of structures, a notion from mathematical logic that we have already encountered at the beginning: the isomorphism between the two groups $(\mathbb{R},+)$ and $(\mathbb{R}_{>0},\times)$ is one example. The notion of structure formalizes the idea of a “mathematical object” as a set together with some operations and relations: a group is a set with a certain binary operation, for example; an ordered set is a set with a binary relation. The authors construct a countable structure out of a dynamical system. For simplicity, imagine the new structure as a set $B$ together with a binary relation $\leq_B$ and a unary function $f_B$. What is an isomorphism between two such structures $(A,\leq_A,f_A)$ and $(B,\leq_B,f_B)$? It is a bijection $h\colon A\to B$ that “preserves the structure”: namely, $a \leq_A a' \iff h(a) \leq_B h(a')$ and $h(f_A(a))=f_B(h(a))$. If you think about it, this is a natural generalization of the definition of isomorphism that we give for groups, ordered sets, rings, and so on.
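For finite structures this definition can be checked mechanically. Here is a small sketch in Python; the structures and maps below are illustrative toy examples, not taken from the paper.

```python
def is_isomorphism(h, A, leq_A, f_A, B, leq_B, f_B):
    """Check that h : A -> B (given as a dict) is an isomorphism of
    structures (A, leq_A, f_A) -> (B, leq_B, f_B): a bijection that
    preserves the binary relation and commutes with the unary function."""
    if set(h.keys()) != set(A) or set(h.values()) != set(B):
        return False  # not a bijection from A onto B
    # a <= a'  iff  h(a) <= h(a')
    if any(leq_A(a, a2) != leq_B(h[a], h[a2]) for a in A for a2 in A):
        return False
    # h(f_A(a)) = f_B(h(a))
    return all(h[f_A(a)] == f_B(h[a]) for a in A)

# Toy example: two three-element structures, each carrying a linear
# order and a "successor" function that cycles through the elements.
A = [0, 1, 2]
B = ["x", "y", "z"]
rank = {"x": 0, "y": 1, "z": 2}
leq_A = lambda a, b: a <= b
leq_B = lambda a, b: rank[a] <= rank[b]
f_A = lambda a: (a + 1) % 3          # cycle 0 -> 1 -> 2 -> 0
succ = {"x": "y", "y": "z", "z": "x"}
f_B = lambda b: succ[b]              # the matching cycle on B

h_good = {0: "x", 1: "y", 2: "z"}    # preserves both order and cycle
h_bad = {0: "x", 1: "z", 2: "y"}     # a bijection that breaks both
print(is_isomorphism(h_good, A, leq_A, f_A, B, leq_B, f_B))  # True
print(is_isomorphism(h_bad, A, leq_A, f_A, B, leq_B, f_B))   # False
```

For countable infinite structures the same conditions define isomorphism, but there is no such mechanical check, which is precisely why the complexity question becomes interesting.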


Crucially, the new structure is countable, and the “difficulty” of classifying countable structures up to isomorphism is well known. While the isomorphism relation is not Borel, it is analytic, which one might think of as the “next best thing”: countable information is not enough to compute it, but it is not incredibly wild.