On a fundamental level, the study of stopping times originates in an experimental or algorithmic perspective: how long must I run an experiment, simulation, or algorithm until I can reasonably expect that some particular event has occurred? In the mathematical sciences, the study of stopping times has its roots in probability theory and computer science; within this, I am most familiar with stopping times for Markov chains and random walks on graphs. Natural examples of stopping times are:
- hitting time (how long does it take to reach a certain part of the state space),
- cover time (how long does it take to cover the state space),
- blanket time (how long does it take for the empirical distribution to resemble the natural (stationary) one, in some sense).
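The first two of these quantities are easy to estimate by Monte Carlo simulation. As a concrete toy illustration (the function name and setup below are my own, not from the literature cited here), here is a sketch for a simple random walk on the n-cycle, where exact formulas are known to compare against:

```python
import random

def walk_times(n, trials=2000, seed=0):
    """Estimate the expected hitting time of the antipodal vertex and the
    expected cover time for a simple random walk on the n-cycle started at 0.
    (The blanket time could be estimated similarly by tracking how long it
    takes the empirical visit frequencies to approach uniform.)"""
    rng = random.Random(seed)
    target = n // 2
    hit_sum = cover_sum = 0
    for _ in range(trials):
        pos, steps = 0, 0
        seen = {0}
        hit_step = None
        while len(seen) < n:           # walk until every vertex is visited
            pos = (pos + rng.choice((-1, 1))) % n
            steps += 1
            seen.add(pos)
            if hit_step is None and pos == target:
                hit_step = steps       # first visit to the antipodal vertex
        hit_sum += hit_step
        cover_sum += steps
    return hit_sum / trials, cover_sum / trials

# For the n-cycle the exact expectations are classical: hitting the
# antipodal vertex takes (n/2)*(n - n/2) = n^2/4 steps on average, and
# the expected cover time is n*(n-1)/2.
```

For n = 8 the estimates should settle near 16 and 28 respectively, matching the exact formulas.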
In the context of dynamical systems, hitting times have been studied extensively, largely due to their applications to inducing techniques for slowly chaotic systems (e.g. parabolic systems), extreme value theory and Diophantine approximation (e.g. shrinking target problems). Cover and blanket times, on the other hand, were introduced by my collaborators and me only recently.
I consider stopping times to be localised statistical properties: they describe the statistics of an individual realisation/trajectory, rather than average statistics, which I will henceforth refer to as global statistical properties. Global statistical properties include mixing rates, statistical limit theorems, etc.
What I find interesting about localised statistics is that, unlike global statistics, they are sensitive to geometry. For example, for simple random walks on graphs, stopping times are often highly sensitive to geometric features of the graph such as symmetries and bottlenecks, whereas global statistics capture a smoother, more averaged behaviour and are more robust to features such as rare long paths.
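To make this geometric sensitivity concrete, here is a small Monte Carlo sketch (the helper and graph constructions are my own choices) comparing the cover time of a path, whose long bottleneck forces cover times of order n^2, with that of the fully symmetric complete graph on the same number of vertices, which covers in about n log n steps:

```python
import random

def cover_time(neighbors, start=0, trials=500, seed=0):
    """Monte Carlo estimate of the expected cover time of a simple random
    walk on the graph given by the adjacency list `neighbors`."""
    rng = random.Random(seed)
    n = len(neighbors)
    total = 0
    for _ in range(trials):
        pos, steps, seen = start, 0, {start}
        while len(seen) < n:
            pos = rng.choice(neighbors[pos])
            steps += 1
            seen.add(pos)
        total += steps
    return total / trials

n = 16
# Path graph: a long bottleneck; cover time from an endpoint is (n-1)^2.
path = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
# Complete graph: fully symmetric; cover time ~ n log n (coupon collector).
complete = [[j for j in range(n) if j != i] for i in range(n)]
```

Already at n = 16 the path's estimated cover time (exactly 225 in expectation) is several times that of the complete graph (about 50), and the gap widens as n grows.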
In the dynamical systems context, the dynamics is deterministic, so the stochasticity comes from the underlying invariant measure (which we always assume to be ergodic). As such, it is not the geometry of the state space which is the determining feature (indeed, already in our toy model of interval maps we see diverse, complex behaviour). Instead, the main player is the complexity of the ergodic measure: its fractal and multifractal properties. For example, hitting times are determined by local dimensions, cover times are primarily determined by Minkowski/L^{-∞} dimensions, and blanket times are affected by the regularity of the measure. I am particularly interested in understanding the connection between these fractal/multifractal properties and stopping times.
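As a minimal sketch of the first of these relations (hitting times vs local dimensions), consider a toy example of my own choosing, not one of the systems from the papers below: the circle rotation by the golden mean, whose invariant measure is Lebesgue, with local dimension 1 at every point. The first hitting time τ(r) of a ball of radius r then grows like r^{-1}, so log τ(r) / log(1/r) should approach the local dimension 1 as r shrinks:

```python
import math

ALPHA = (math.sqrt(5) - 1) / 2  # golden-mean rotation number

def hit_time(x, y, r):
    """First n >= 1 such that x + n*ALPHA (mod 1) lies in the ball B(y, r),
    with distance measured on the circle [0, 1)."""
    n = 0
    while True:
        n += 1
        x = (x + ALPHA) % 1.0
        d = abs(x - y)
        if min(d, 1.0 - d) <= r:
            return n

# The scaling exponent log(tau)/log(1/r) should tend to the local
# dimension of Lebesgue measure, namely 1, as r -> 0.
for r in (0.1, 0.01, 0.001):
    tau = hit_time(0.0, 0.3, r)
    print(r, tau, math.log(tau) / math.log(1 / r))
```

For a badly approximable rotation number like the golden mean, hitting times of balls are comparable to 1/r uniformly, which makes this the cleanest case; for generic ergodic measures the analogous exponent is the local dimension at the target point.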
On the cover time in IFS and dynamical systems contexts:
- How long is the Chaos Game? (with I. Morris). To appear in Bulletin of the LMS.
- On the convergence rate of the chaos game (with B. Bárány and I. Kolossváry). To appear in IMRN.
- Cover times in dynamical systems (with M. Todd). To appear in Israel Journal of Mathematics.

Expository note considering cover and blanket times for a simple system:
- Quantitative covering and equidistribution. Submitted to the conference proceedings of On the Interface of Geometric Measure Theory and Fractal Geometry.

Other useful resources:
- Markov chains and mixing times, by Levin, Peres and Wilmer. A good book about Markov chains and, in particular, stopping times in the context of Markov chains and random walks on graphs.
- Random walks and electric networks, by Doyle and Snell. A book on the connection between random walks and electrical network theory.