Imprecise Markov chains: from basic theory to applications

This lecture provides an initiation into the theory of imprecise Markov chains. A Markov chain is a stochastic process that is, at any time, in one of a number of possible states. It is special in that it satisfies a conditional independence (or Markov) condition: given the present state, the future states are stochastically independent of the past states. This guarantees that the behaviour of such a stochastic process is completely determined by (i) a probability distribution over the initial states, and (ii) a ‘transition’ matrix describing the probabilities of moving from one state to another at a given time. In an imprecise Markov chain, this model is modified or weakened in three ways: (a) the initial and transition probabilities are allowed to be imprecisely specified, (b) the transition probabilities are allowed to depend on time, and (c) the stochastic independence in the Markov condition is weakened to a so-called epistemic irrelevance condition.

Part I

In the first part, we discuss the basics of imprecise Markov chains in discrete time. We show how to describe such a system mathematically, and how to efficiently perform inferences about its time evolution. We discuss the similarities and differences with (precise) Markov chains. We also study the so-called stationary, or long-term, behaviour of imprecise Markov chains, and its relation to the notion of ergodicity.

Part II

The second part will cover the basics of imprecise continuous-time Markov chains, thereby extending the concepts from Part I to a continuous-time setting. On the theoretical level, we will provide a definition of imprecise continuous-time Markov chains, discuss their main properties, and explain how to compute inferences for them.
On a more practical level, we will explain how these models can be used for three purposes: (1) to robustify the inferences of traditional continuous-time Markov chains, (2) to compute common performance bounds for several Markov chains at once, and (3) to tackle the scaling problem of traditional continuous-time Markov chains. We will illustrate all of this with an application in telecommunications.
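To make the discrete-time inference machinery of Part I a little more concrete, the following is a minimal sketch of how a lower expectation can be computed for an imprecise Markov chain by backward recursion, assuming the imprecise transition model is given as probability intervals (a lower and an upper bound on each transition probability). The function names, the two-state example, and the greedy interval solver are illustrative assumptions, not material from the lecture itself; the point is only that each inference step optimises over the set of admissible transition rows, so the overall cost stays linear in the time horizon.

```python
def min_expectation(lower, upper, values):
    """Minimise sum_j p[j] * values[j] over all probability mass
    functions p with lower[j] <= p[j] <= upper[j] and sum(p) == 1.
    Assumes the intervals are consistent: sum(lower) <= 1 <= sum(upper).
    Greedy: start from the lower bounds, then spend the remaining
    probability mass on the states with the smallest values first."""
    p = list(lower)
    mass = 1.0 - sum(lower)
    for j in sorted(range(len(values)), key=lambda j: values[j]):
        add = min(upper[j] - lower[j], mass)
        p[j] += add
        mass -= add
    return sum(pj * vj for pj, vj in zip(p, values))

def lower_expectation(lower_rows, upper_rows, f, horizon):
    """Apply the lower transition operator `horizon` times to the
    function f (backward recursion).  After the loop, v[i] is the
    lower expected value of f at time `horizon`, given that the
    chain starts in state i.  Cost is linear in the horizon."""
    v = list(f)
    for _ in range(horizon):
        v = [min_expectation(lower_rows[i], upper_rows[i], v)
             for i in range(len(v))]
    return v

# Hypothetical two-state example: interval bounds on each transition row.
low = [[0.4, 0.3], [0.2, 0.5]]   # lower transition probabilities
up = [[0.7, 0.6], [0.5, 0.8]]    # upper transition probabilities
print(lower_expectation(low, up, [1.0, 0.0], 1))  # lower prob. of being in state 0 after one step
```

The corresponding upper expectation can be obtained by the conjugacy relation: it is minus the lower expectation of minus the function, so no separate maximising solver is needed.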