Presented at the 28th European Conference on Operational Research (EURO2016), 6 Jul. 2016.
We consider the problem of controlling a discrete-time scalar linear system that is subject to stochastic input noise, using a state-feedback control policy whose performance is measured by a quadratic cost. If the stochastic model for the noise is specified exactly, a control policy is said to be optimal if it minimises the expected value of the cost. For independent noise, the optimal control policy is well known to be unique, and consists of a combination of state feedback and noise feedforward. Our first contribution is to show that this result remains true for dependent noise; in this generalised case, however, computing the optimal control policy is intractable. Our main contribution is to additionally drop the assumption that the stochastic model for the noise is specified exactly. More specifically, we impose local bounds on the expectation of the noise, and consider the set of all (possibly dependent) stochastic models that are compatible with these bounds. In this context, we call a control policy optimal if it minimises the expected value of the cost for at least one of these compatible noise models. We show that any such optimal control policy consists of the same state feedback term and a possibly different noise feedforward term, and we derive backwards recursive expressions that provide tight bounds on these noise feedforward terms. These bounds are easy to compute, and the recursive expressions turn out to be very familiar.
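To make the classical result concrete, the following is a minimal Python sketch of the backward recursion for the well-known case of independent noise with known means: for a scalar system x_{t+1} = a*x_t + b*u_t + w_t and cost sum_t (q*x_t^2 + r*u_t^2) + q_N*x_N^2, the optimal policy takes the form u_t = -K_t*x_t - f_t, a state feedback term plus a noise feedforward term. The function name scalar_lq_gains and the variable names (a, b, q, r, q_N, means) are our own illustrative choices, and the recursion for the imprecisely specified, possibly dependent noise models described in the abstract is not reproduced here.

# Backward recursion for the scalar finite-horizon LQ problem with
# independent noise of known means m_t (assumption of this sketch).
def scalar_lq_gains(a, b, q, r, q_N, means):
    """Return the feedback gains K_t and feedforward terms f_t for t = 0..N-1.

    means[t] is the expected value of the noise w_t; the horizon N is len(means).
    """
    N = len(means)
    s = q_N      # quadratic coefficient of the value function, s_N = q_N
    g = 0.0      # linear coefficient of the value function, g_N = 0
    K = [0.0] * N
    f = [0.0] * N
    for t in reversed(range(N)):
        denom = r + s * b**2
        K[t] = s * a * b / denom                 # state feedback gain
        f[t] = b * (s * means[t] + g) / denom    # noise feedforward term
        # Riccati-type updates of the value-function coefficients
        g = a * r * (s * means[t] + g) / denom
        s = q + a**2 * r * s / denom
    return K, f

if __name__ == "__main__":
    # Hypothetical numbers, purely for illustration.
    K, f = scalar_lq_gains(a=1.1, b=0.5, q=1.0, r=0.1, q_N=1.0,
                           means=[0.2, -0.1, 0.0, 0.3])
    print("feedback gains:", K)
    print("feedforward terms:", f)

Note that when all noise means are zero, the feedforward terms vanish and the recursion reduces to the standard Riccati recursion for the LQR feedback gains; only the feedforward terms depend on the noise model, which is consistent with the separation between state feedback and noise feedforward stated above.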