Alexander Erreygers

Extending the domain of Markovian imprecise jump processes


Presented at the International Conference on Uncertainty Quantification & Optimisation (UQOP 2020), 17 Nov. 2020.

Recently, several authors have independently proposed imprecise versions (or generalisations) of Markovian jump processes, sometimes also called continuous-time Markov chains or Markov processes. The main motivation for these imprecise versions is that they provide an elegant framework to deal with parameter uncertainty. Whereas a continuous-time Markov chain is defined by precisely specifying its initial distribution and transition rate matrix, these imprecise generalisations instead work with a set of initial distributions and/or a set of transition rate matrices.
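To make the contrast concrete, here is a minimal numerical sketch, not the construction of any of the cited authors, of how a lower expectation at a single time point can be computed for such an imprecise model: the set of rate matrices is summarised by a lower transition rate operator, which is then applied in small Euler steps. All names and numbers below are illustrative, and the finite set of two candidate rate matrices stands in for a general convex set.

```python
import numpy as np

def lower_rate_op(g, rate_matrices):
    """Lower transition rate operator: the row-wise minimum of Q @ g
    over the candidate rate matrices.  Using a finite set here is a
    simplification; in general the minimisation runs over a convex
    set of rate matrices."""
    return np.min([Q @ g for Q in rate_matrices], axis=0)

def lower_expectation(f, t, rate_matrices, n_steps=10_000):
    """Approximate the lower expectation of f(X_t), conditional on the
    initial state, by the Euler-style recursion g <- g + dt * Lg,
    where L is the lower transition rate operator."""
    g = np.asarray(f, dtype=float)
    dt = t / n_steps
    for _ in range(n_steps):
        g = g + dt * lower_rate_op(g, rate_matrices)
    return g

# Two hypothetical candidate rate matrices for a two-state chain.
Q1 = np.array([[-1.0, 1.0], [2.0, -2.0]])
Q2 = np.array([[-1.5, 1.5], [1.0, -1.0]])

f = np.array([1.0, 0.0])  # indicator of state 0
print(lower_expectation(f, 1.0, [Q1, Q2]))
```

By construction, the result is a state-wise lower bound on the corresponding precise expectation for every rate matrix in the set.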

To the best of our knowledge, there are two frameworks that obtain similar, and to some extent equivalent, results. In essence, Krak et al. (2017) and Škulj (2015) adhere to Walley’s framework of imprecise probabilities, while Peng (2005) and, more recently, Nendel (2019) follow the framework of non-linear (or convex) expectations. That being said, both frameworks have crucial shortcomings. The framework of Krak et al. (2017) and Škulj (2015) only deals with lower and upper expectations of functions that depend on the state of the system at a finite sequence of time points. Similarly, that of Nendel (2019) only deals with bounded functions that are measurable with respect to the product sigma-algebra. For applications, this means that key inferences like (lower and upper) expected temporal averages or expected hitting times are undefined, or, more precisely, not included in the domain.

In this talk, we explain how to extend the framework of Krak et al. (2017) to include these more general inferences. In short, our approach consists of the following steps. First, we a priori restrict ourselves to càdlàg sample paths. This restriction allows us to extend the domain of the (lower and upper) expectation operator to a class of more general functions, including hitting probabilities, hitting times and temporal averages. Crucially, we do this by means of Daniell integration, which relies only on (monotone) limit arguments. Finally, we demonstrate how these limit arguments also allow us to compute the lower and upper expectations of these more general functions.
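As an illustration of how such monotone limit arguments can be turned into computations, the following toy sketch approximates a lower expected hitting time by a nondecreasing iteration: starting from zero, the iterates increase towards the fixed point where the lower rate operator evaluates to minus one off the target set. This is a hedged illustration of the general idea, not the exact procedure from the talk; all matrices and parameters are hypothetical.

```python
import numpy as np

def lower_rate_op(g, rate_matrices):
    # Row-wise minimum of Q @ g over a finite set of candidate rate matrices.
    return np.min([Q @ g for Q in rate_matrices], axis=0)

def lower_hitting_time(rate_matrices, target, dt=1e-3, n_steps=20_000):
    """Monotone iteration for a lower expected hitting time (sketch).
    Off the target set, h <- dt + h + dt * lower_rate_op(h); on the
    target set, h stays 0.  Starting from h = 0, the iterates are
    nondecreasing and converge to the fixed point of this map."""
    n = rate_matrices[0].shape[0]
    off_target = np.ones(n)
    off_target[list(target)] = 0.0
    h = np.zeros(n)
    for _ in range(n_steps):
        h = off_target * (dt + h + dt * lower_rate_op(h, rate_matrices))
    return h

# Hypothetical two-state example with two candidate rate matrices.
Q1 = np.array([[-1.0, 1.0], [2.0, -2.0]])
Q2 = np.array([[-1.5, 1.5], [1.0, -1.0]])

# Lower expected time to reach state 1, from each starting state.
h = lower_hitting_time([Q1, Q2], target={1})
print(h)
```

In this two-state example the precise expected hitting times of state 1 from state 0 are 1/1 and 1/1.5 for the two candidate matrices, and the iteration settles at the smaller value, as a lower expectation should.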