by P. Giselsson, A. Rantzer

We consider distributed model predictive control (DMPC) in which a sparse centralized optimization problem, formulated without a terminal cost or a terminal constraint set, is solved in a distributed fashion. Distribution of the optimization algorithm is enabled by dual decomposition. Gradient methods are commonly used to solve the dual problem resulting from dual decomposition. However, gradient methods are known for their slow convergence rate, especially for ill-conditioned problems. This is not desirable in DMPC, where the amount of communication should be kept as low as possible. In this chapter, we present a distributed optimization algorithm for solving the optimization problems arising in DMPC that has a significantly better convergence rate than the classical gradient method. The improved convergence rate is achieved by using accelerated gradient methods instead of standard gradient methods and by incorporating Hessian information into the gradient iterations in a well-defined manner. We also present a stopping condition for the distributed optimization algorithm that ensures feasibility, stability, and closed-loop performance of the DMPC scheme without using a stabilizing terminal cost or terminal constraint set.
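As a concrete illustration of the dual-decomposition setting described above, the sketch below applies a Nesterov-type accelerated gradient iteration to the dual of a small equality-constrained quadratic program. All problem data (H, g, A, b), the step size, and the momentum rule are hypothetical and chosen only to show the structure of an accelerated dual iteration; this is not the chapter's algorithm, and it omits the Hessian-weighted iterations and the distributed, per-subsystem implementation the chapter develops.

```python
import numpy as np

# Hypothetical problem data (illustration only): a strongly convex QP
#   minimize 0.5 x'Hx + g'x  subject to  Ax = b,
# whose concave dual is maximized by an accelerated gradient iteration.
np.random.seed(0)
n, m = 10, 4
M = np.random.randn(n, n)
H = M @ M.T + np.eye(n)            # H > 0 makes the dual gradient Lipschitz
g = np.random.randn(n)
A = np.random.randn(m, n)
b = np.random.randn(m)

Hinv = np.linalg.inv(H)
L = np.linalg.norm(A @ Hinv @ A.T, 2)   # Lipschitz constant of the dual gradient

lam = lam_prev = np.zeros(m)
for k in range(200):
    # Extrapolation (momentum) step characteristic of accelerated methods
    y = lam + (k - 1) / (k + 2) * (lam - lam_prev)
    # Inner minimization over x; in DMPC this step decomposes across subsystems
    x = -Hinv @ (g + A.T @ y)
    # Dual gradient ascent step: grad of the dual at y is Ax - b
    lam_prev, lam = lam, y + (A @ x - b) / L

print("primal residual ||Ax - b|| =", np.linalg.norm(A @ x - b))
```

In a distributed setting, the inner minimization splits into local problems coupled only through the dual variables, so each iteration corresponds to one round of communication between subsystems; this is why the number of iterations, and hence the convergence rate, directly determines the communication burden.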