Doctoral dissertation, Ghent University, 16 Sep. 2021.
This dissertation covers several theoretical and practical aspects of Markovian imprecise jump processes. A Markovian imprecise jump process is a particular type of stochastic process, that is, a mathematical model for a dynamical system whose state evolves over time in an uncertain manner. More specifically, it models a system that evolves in continuous time and whose state takes values in a finite state space.
Because a Markovian imprecise jump process models uncertainty, we begin this dissertation with a brief overview of some of the mathematical tools that can be used to model uncertainty. We adhere to the coherence framework for modelling uncertainty as conceived by de Finetti (1970), Williams (1975), and Walley (1991), and not to the more conventional measure-theoretical framework advanced by Kolmogorov (1933). In particular, we use coherent conditional probabilities as elementary uncertainty models. Unlike the probability measures used in the measure-theoretical framework, these allow us to condition on events with probability zero without any issue or ambiguity.
Next, we present our take on the framework of (im)precise jump processes, which was originally put forward by Krak et al. (2017). We define a (precise) jump process as a coherent conditional probability on a specific domain: we consider finitary events – events that depend on the state of the system at a finite number of future time points – in combination with conditioning events that fix the state of the system at a finite number of (past) time points. Under the classical Markovianity and (time-)homogeneity assumptions – and a mild continuity assumption – a jump process is uniquely determined by two parameters: its initial probability mass function and its rate operator. Thus, given these two parameters, we can determine the (conditional) probability of any finitary event, and therefore the (conditional) expectation of any simple variable – a variable that depends on the state of the system at a finite number of time points. Note that (conditional) probabilities are a special case of (conditional) expectations, so we can focus on the latter.
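To make this parametrisation more concrete, the following standard identity for homogeneous Markov jump processes on a finite state space shows how the rate operator determines conditional expectations of functions of a single future state; the notation $Q$ for the rate operator, $X_t$ for the state at time $t$ and $f$ for a real-valued function on the states is ours and need not match the dissertation's:

\[
E\bigl(f(X_{s+t}) \,\big\vert\, X_s = x\bigr) = \bigl[e^{tQ} f\bigr](x),
\qquad
e^{tQ} = \lim_{n \to \infty} \Bigl(I + \tfrac{t}{n}\, Q\Bigr)^{n}.
\]

Expectations of general simple variables then follow by combining this identity with the law of iterated expectations, working backwards over the finitely many time points involved, with the initial probability mass function taking care of the state at time zero.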
Specifying precise values for the parameters of a (homogeneous) Markovian jump process can be infeasible if not impossible, especially if these are learned from data and/or elicited from an expert. This is where Markovian imprecise jump processes come in, as they generalise Markovian jump processes to allow for partial parameter specification. Whereas a Markovian jump process is fully determined by an initial probability mass function and a rate operator, a Markovian imprecise jump process is determined by a set of initial probability mass functions and a bounded set of rate operators. However, there is not one Markovian imprecise jump process; we consider three. All three are defined as sets of jump processes that are consistent with the set of initial probability mass functions and the set of rate operators, but they differ in the type of processes that are considered: the first contains all consistent homogeneous and Markovian jump processes, the second contains all consistent Markovian jump processes – so it includes the homogeneous ones – and the third simply contains all consistent jump processes – so it includes the Markovian ones.
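In other words, writing $\mathcal{M}$ for the set of initial probability mass functions and $\mathcal{Q}$ for the bounded set of rate operators (our notation, not necessarily the dissertation's), the three sets of consistent jump processes are nested:

\[
\mathbb{P}^{\mathrm{HM}}_{\mathcal{Q},\mathcal{M}}
\subseteq
\mathbb{P}^{\mathrm{M}}_{\mathcal{Q},\mathcal{M}}
\subseteq
\mathbb{P}_{\mathcal{Q},\mathcal{M}},
\]

where the superscripts HM and M indicate that only homogeneous Markovian, respectively Markovian, jump processes are retained.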
Because we work with sets of jump processes, there is not a single value for the (conditional) expectation of a simple variable, but a set of values; our aim is not to determine this set, but to determine tight lower and upper bounds on it, which we call lower and upper expectations. Whether or not we can compute these lower and upper bounds in a tractable manner depends on the structure of the set of rate operators. If this set is infinite, which is usually the case, then computing lower and upper expectations of simple variables is computationally intractable for the set of consistent homogeneous and Markovian jump processes. Quite remarkably, it turns out that if the set of rate operators has separately specified rows (and is convex), then we can nevertheless tractably compute tight lower and upper bounds for the other two sets of consistent jump processes. Krak et al. (2017) identify two such cases: (i) separately specified rows are sufficient for simple variables that depend on the state of the system at a single future time point; and (ii) separately specified rows and convexity are sufficient for general simple variables, although only for the set of all consistent jump processes. We identify a third case that sits somewhere between these two extremes: if the set of rate operators has separately specified rows, then we can tractably compute lower and upper expectations for simple variables that have a so-called sum-product representation.
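The following sketch illustrates case (i) for a set of rate operators with separately specified rows. It is an illustration under our own simplifying assumptions, not the dissertation's algorithm: the set of rate operators is represented by a finite list of candidate rate matrices whose rows may be mixed freely, and the lower transition operator is approximated by a simple Euler scheme.

    import numpy as np

    def lower_rate_apply(rate_matrices, f):
        # Lower rate operator for a set with separately specified rows:
        # the row of any candidate matrix may be combined with any row of
        # any other, so [Qlow f](x) is the row-wise minimum over candidates.
        return np.stack([Q @ f for Q in rate_matrices]).min(axis=0)

    def lower_expectation(rate_matrices, f, t, n_steps=10_000):
        # Euler-style approximation of [Tlow_t f](x), the lower expectation
        # of f(X_t) conditional on the initial state x, based on
        # Tlow_t = lim_n (I + (t/n) Qlow)^n; n_steps must be large enough
        # that (t/n_steps) times the largest diagonal rate stays below one.
        dt = t / n_steps
        g = np.asarray(f, dtype=float)
        for _ in range(n_steps):
            g = g + dt * lower_rate_apply(rate_matrices, g)
        return g

    # Toy example on a binary state space with two candidate rate matrices.
    Q1 = np.array([[-1.0, 1.0], [2.0, -2.0]])
    Q2 = np.array([[-3.0, 3.0], [1.0, -1.0]])
    f = np.array([0.0, 1.0])  # indicator of the second state
    print(lower_expectation([Q1, Q2], f, t=1.0))

The sketch only covers simple variables that depend on a single future time point; variables with a sum-product representation call for a more elaborate backwards recursion, which we do not attempt here.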
In many applications, the variables of interest depend on the state of the system at all time points in a (bounded) time period, and therefore on the state of the system at more than a finite number of time points. One of the more important contributions of this dissertation is that we extend the domain of Markovian imprecise jump processes to deal with such variables. Crucial to our extension is that we consider càdlàg sample paths from the start. Many important variables are then the point-wise limit of a sequence of simple variables, and we call these idealised variables; examples are temporal averages, hitting times and indicators of until events. We show that for any (bounded) set of rate operators, the (conditional) expectation corresponding to any consistent jump process satisfies monotone convergence, so we can extend the domain of this expectation through Daniell's (1918) method of integration. What is more, we show that the tight lower and upper bounds on these extended expectations satisfy (imprecise generalisations of) the Monotone Convergence Theorem and Lebesgue's Dominated Convergence Theorem. In general, this convergence might be to conservative bounds, but we show that it is tight for three important types of idealised variables – indicators of time-bounded until events, truncated hitting times and temporal averages. Moreover, these idealised variables are the point-wise limits of sequences of simple variables that have a sum-product representation, and we can therefore tractably compute their lower and upper expectations, at least for the set of all consistent (Markovian) jump processes, whenever the set of rate operators has separately specified rows.
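As an illustration of such a point-wise limit (with notation of our own choosing), consider the temporal average of a real-valued function $f$ of the state over a bounded period $[0, T]$: because every sample path is càdlàg, this average is the point-wise limit of Riemann sums, each of which is a simple variable,

\[
\frac{1}{T} \int_0^T f(X_t) \,\mathrm{d}t
= \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f\bigl(X_{kT/n}\bigr).
\]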
After this theoretical part, we turn to a – still rather theoretical – setting where parameter indeterminacy arises naturally. In many applications with homogeneous and Markovian jump process models, the state space is so large that computing expectations becomes intractable. Lumping the states – sometimes also called grouping or aggregating states – can then significantly reduce the number of states. Unfortunately, characterising the resulting lumped jump process exactly is not possible due to loss of information – at least not in general. We show that this lumped jump process is consistent with a set of initial probability mass functions on the lumped state space that follows naturally from the original initial probability mass functions, and with a set of rate operators on the lumped state space that is induced by the original rate operator. Consequently, we can use the corresponding Markovian imprecise jump process to tractably compute lower and upper bounds on expectations that we could not tractably compute otherwise.
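To give a flavour of how a set of rate operators on the lumped state space can be induced, here is a sketch of one natural construction, under assumptions of our own: for each pair of distinct lumps, the lumped transition rate is bracketed by the minimum and the maximum, over the states in the originating lump, of the total rate into the destination lump. The construction used in the dissertation may differ in its details.

    import numpy as np

    def lumped_rate_bounds(Q, lumps):
        # Interval bounds on the lumped transition rates induced by the
        # original rate matrix Q: the rate from lump A to lump B lies
        # between min and max, over x in A, of sum_{y in B} Q[x, y].
        m = len(lumps)
        lower = np.zeros((m, m))
        upper = np.zeros((m, m))
        for a, A in enumerate(lumps):
            for b, B in enumerate(lumps):
                if a == b:
                    continue  # diagonal entries follow from rows summing to zero
                totals = [Q[x, list(B)].sum() for x in A]
                lower[a, b] = min(totals)
                upper[a, b] = max(totals)
        return lower, upper

    # Toy example: four states lumped into two groups of two.
    Q = np.array([
        [-2.0,  1.0,  0.5,  0.5],
        [ 1.0, -3.0,  1.0,  1.0],
        [ 0.0,  2.0, -2.0,  0.0],
        [ 1.0,  0.0,  1.0, -2.0],
    ])
    print(lumped_rate_bounds(Q, lumps=[(0, 1), (2, 3)]))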
Finally, we show that all of the aforementioned theory serves a purpose. To this end, we consider the problem of spectrum fragmentation in a single optical link, which is an example where the state space of the exact homogeneous Markovian jump process model is too large. Kim et al. (2015) only consider the random allocation policy, and they reduce the number of states by lumping; they deal with the resulting parameter indeterminacy through an approximate homogeneous and Markovian model, and they use this model to approximate the blocking ratios. With our method, we obtain guaranteed lower and upper bounds on the blocking ratios instead of approximations, and we do so not only for the random allocation policy but also for two other policies. Moreover, we can determine lower and upper bounds on the blocking ratios that hold for any allocation policy.