Rate analysis of inexact dual fast gradient method for distributed MPC

by I. Necoara

In this chapter we propose a dual decomposition method based on inexact dual gradient information and constraint tightening for solving distributed model predictive control (MPC) problems for network systems with state-input constraints. The coupling constraints are tightened and moved into the cost using the Lagrange multipliers. The dual problem is solved by a fast gradient method based on approximate gradients, for which we prove a sublinear rate of convergence. We also provide estimates on the primal and dual suboptimality of the generated approximate primal and dual solutions, and we show that primal feasibility is ensured by our method. Our analysis relies on the Lipschitz property of the dual MPC function and on inexact dual gradients. We obtain a distributed control strategy with the following features: state and input constraints are satisfied, stability of the plant is guaranteed, while the number of iterations needed to compute the suboptimal solution can be precisely determined.
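
As a rough illustration of the kind of scheme the chapter analyzes, the following minimal Python sketch runs a fast (accelerated) gradient ascent on the dual of a toy constrained QP, where the dual gradient is only approximate because the inner Lagrangian minimization is terminated early. The problem data, step sizes, and inner accuracy are illustrative assumptions, and the chapter's constraint-tightening step is omitted.

```python
# Minimal sketch: inexact dual fast gradient method for the toy QP
#   min_u 0.5 u'Hu + q'u   s.t.  A u <= b,  lb <= u <= ub.
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)          # positive definite Hessian (invented data)
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1.0
lb, ub = -2.0 * np.ones(n), 2.0 * np.ones(n)

def inner_solve(lam, iters=20):
    """Approximately minimize the Lagrangian over the box constraints with a
    few projected gradient steps -> an *inexact* dual gradient A u - b."""
    u = np.zeros(n)
    step = 1.0 / np.linalg.norm(H, 2)
    for _ in range(iters):
        grad = H @ u + q + A.T @ lam
        u = np.clip(u - step * grad, lb, ub)
    return u

# Fast (Nesterov-accelerated) projected gradient ascent on the dual.
L_d = np.linalg.norm(A, 2) ** 2 / np.linalg.eigvalsh(H)[0]  # dual Lipschitz constant
lam = np.zeros(m)
lam_prev = lam.copy()
for k in range(200):
    # momentum extrapolation, gradient step, projection onto lam >= 0
    y = lam + (k / (k + 3)) * (lam - lam_prev)
    u = inner_solve(y)
    lam_prev = lam
    lam = np.maximum(y + (A @ u - b) / L_d, 0.0)

u = inner_solve(lam, iters=200)
print("coupling constraint violation:", np.maximum(A @ u - b, 0.0).max())
```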

Cooperative Dynamic MPC for NCSs

by Isabel Jurado, Daniel E. Quevedo, Karl H. Johansson and Anders Ahlen

This work studies cooperative MPC for Networked Control Systems with multiple wireless nodes, where communication between nodes is affected by random packet dropouts. An algorithm is presented that decides at each time instant which nodes will calculate the control input and which will only relay data. The nodes chosen to calculate the control values solve a cooperative MPC problem by communicating with their neighbors. This algorithm makes the control architecture flexible by adapting it to changes in network conditions.

Distributed MPC Using Reinforcement Learning Based Negotiation: Application to Large Scale Systems

by Bernardo Morcego, Valeria Javalera, Vicenç Puig and Raffaele Vito.

This chapter describes a methodology to deal with the interaction (negotiation) between MPC controllers in a distributed MPC architecture. The approach combines ideas from Distributed Artificial Intelligence (DAI) and Reinforcement Learning (RL) to provide controller interaction based on negotiation, cooperation and learning techniques. The aim of this methodology is to provide a general structure for performing optimal control in networked distributed environments, where multiple dependencies between subsystems are found. These dependencies or connections often correspond to control variables, in which case the distributed control has to be consistent across the subsystems. One of the main new concepts of this architecture is the negotiator agent. Negotiator agents interact with MPC agents to reach an agreement on the optimal value of the shared control variables. This value has to serve a common goal that may be incompatible with the specific goals of each partition sharing the variable. Two case studies are discussed: a small water distribution network and the Barcelona water network. The results suggest that this approach is a promising strategy when centralized control is not a reasonable choice.
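
As a toy illustration of the learning ingredient only (the chapter's negotiator and MPC agents form a richer architecture), a negotiator could learn the value of a shared control variable by tabular Q-learning against invented local costs:

```python
# Toy sketch: a negotiator agent learns the value of a shared variable with
# tabular Q-learning. The local cost functions and the discretization are
# invented for illustration, not taken from the chapter.
import numpy as np

candidates = np.linspace(0.0, 10.0, 21)      # discretized shared variable
def cost_agent1(v): return (v - 3.0) ** 2    # hypothetical local MPC costs
def cost_agent2(v): return (v - 7.0) ** 2

rng = np.random.default_rng(1)
Q = np.zeros(len(candidates))                # one-state Q-table over actions
alpha, eps = 0.1, 0.2
for episode in range(2000):
    # epsilon-greedy choice of the shared-variable value
    a = rng.integers(len(candidates)) if rng.random() < eps else int(Q.argmax())
    v = candidates[a]
    reward = -(cost_agent1(v) + cost_agent2(v))   # mutual benefit
    Q[a] += alpha * (reward - Q[a])               # bandit-style update

print("negotiated value:", candidates[Q.argmax()])  # ~5.0, the compromise
```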

Distributed Model Predictive Control via Dual Decomposition

by Benjamin Biegel, Jakob Stoustrup, and Palle Andersen.

This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller, while a centralized entity manages the subsystems via prices associated with the coupling constraints. This allows coordination of all the subsystems without the need to share local dynamics, objectives and constraints. To illustrate this, an example is included where dual decomposition is used to resolve power grid congestion in a distributed manner among a number of players coupled by distribution grid constraints.
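
As a minimal sketch of the price-coordination mechanism, loosely echoing the grid congestion example, assume two hypothetical players with invented quadratic costs sharing a single capacity constraint; the central entity adjusts the price by a subgradient step on the dual:

```python
# Minimal sketch of price-based coordination via dual decomposition.
# Costs, capacity, and step size are illustrative assumptions.
capacity = 4.0                       # coupling constraint: u1 + u2 <= capacity

def best_response(price, target):
    # local problem: min (u - target)^2 + price * u, with u >= 0
    return max(target - price / 2.0, 0.0)

price = 0.0
for _ in range(500):
    u1 = best_response(price, target=3.0)   # each player reacts to the price
    u2 = best_response(price, target=3.0)
    # central entity: raise the price if the shared capacity is congested
    price = max(price + 0.05 * (u1 + u2 - capacity), 0.0)

print(f"u1={u1:.2f}, u2={u2:.2f}, price={price:.2f}")  # u1 + u2 -> capacity
```

Note that the central entity never sees the players' costs or constraints, only their responses to the prices, which is the point of the scheme.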

Bargaining game based distributed model predictive control

by Felipe Valencia, José David López, Julián Alberto Patiño, Jairo José Espinosa.

Despite the efforts dedicated to designing methods for distributed model predictive control (DMPC), cooperation among subsystems remains an open research problem. To overcome this issue, game theory arises as an alternative for formulating and characterizing the DMPC problem. Game theory is a branch of applied mathematics used to capture the behavior of the players (agents or subsystems) involved in strategic situations, where the outcome for a player is a function not only of their own choices but also of the choices of the others. In this chapter a bargaining-game-based DMPC scheme is proposed; roughly speaking, a bargaining game is a situation in which several players jointly decide which strategy is best with respect to their mutual benefit. This makes it possible to deal with the cooperation issues of the DMPC problem. Additionally, the bargaining game framework allows solutions in which each subsystem solves no more than one optimization problem at each time step, which also reduces the computational burden of the local optimization problems.
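
As a toy illustration of the bargaining ingredient, assuming two invented quadratic benefit functions and a common disagreement point, the (symmetric) Nash bargaining solution picks the shared decision that maximizes the product of the players' gains:

```python
# Toy sketch of the symmetric Nash bargaining solution that underlies
# bargaining-game DMPC schemes. The benefit functions and the disagreement
# point are invented for illustration.
import numpy as np

v = np.linspace(0.0, 1.0, 1001)            # candidate shared decision
benefit1 = 1.0 - (v - 0.2) ** 2            # hypothetical utilities
benefit2 = 1.0 - (v - 0.8) ** 2
d1, d2 = 0.5, 0.5                          # disagreement (no-cooperation) payoffs

gain1 = np.maximum(benefit1 - d1, 0.0)     # only outcomes above disagreement count
gain2 = np.maximum(benefit2 - d2, 0.0)
nash = v[np.argmax(gain1 * gain2)]         # maximize the product of the gains
print("bargained decision:", nash)         # a compromise at 0.5
```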

Adaptive Quasi-Decentralized Model Predictive Control of Networked Process Systems

by Ye Hu and Nael H. El-Farra.

This work presents a framework for quasi-decentralized model predictive control (MPC) design with an adaptive communication strategy. In this framework, each unit of the networked process system is controlled by a local control system for which measurements of the local process state are available at each sampling instant. The aim is to minimize the cross communication, via the communication network, between each local control system and the sensors of the other units, while preserving stability and a certain level of control system performance. The quasi-decentralized MPC scheme is designed on the basis of distributed Lyapunov-based bounded control with sampled measurements, and the stability properties of each closed-loop subsystem are then characterized. Using this characterization, an adaptive communication strategy is proposed that forecasts the future evolution of the local process state within each local control system. Whenever the forecast shows signs of instability of the local process state, the measurements of the entire process state are transmitted to update the model within that particular control system to ensure stability; otherwise, the local control system continues to rely on the model within the local MPC controller. The implementation of this theoretical framework is demonstrated using a simulated networked chemical process.
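
A minimal sketch of the adaptive communication logic follows, with invented scalar dynamics, threshold, and horizon standing in for the chapter's Lyapunov-based characterization:

```python
# Minimal sketch: the local controller forecasts the state with its model
# and requests a network update only when the forecast threatens to leave
# a prescribed level set. Dynamics, threshold, and horizon are illustrative
# assumptions, not the chapter's process model.
a_plant = 0.8    # true closed-loop dynamics (stable, hypothetical)
a_model = 1.1    # conservative local model used between transmissions
threshold = 1.5  # level-set bound on |x| used as the instability test

def forecast_unstable(x, horizon=5):
    """Flag whether the local model predicts leaving the level set."""
    for _ in range(horizon):
        x = a_model * x
        if abs(x) > threshold:
            return True
    return False

x_plant, x_model = 1.0, 1.0
for t in range(20):
    if forecast_unstable(x_model):
        x_model = x_plant            # transmit full state, reset the model
        print(f"t={t}: model updated over the network")
    x_plant *= a_plant               # plant evolves under its controller
    x_model *= a_model               # model evolves open-loop otherwise
```

Running this shows intermittent transmissions: updates fire only when the forecast degrades, rather than at every sampling instant.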

Distributed MPC Via Dual Decomposition and Alternating Direction Method of Multipliers

by Farhad Farokhi, Iman Shames, and Karl H. Johansson.

A conventional way to handle model predictive control (MPC) problems in a distributed fashion is to solve them via dual decomposition and gradient ascent. However, at each time step it might not be feasible to wait for the dual algorithm to converge, so the algorithm may need to be terminated prematurely. One is then interested in whether the solution at the point of termination is close to the optimal solution, and in when one should terminate the algorithm if a certain distance to optimality is to be guaranteed. In this chapter, we look at this problem for distributed systems under general dynamical and performance couplings, and then make a statement on the validity of similar results when the problem is solved using the alternating direction method of multipliers (ADMM).
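
For the ADMM part, a minimal consensus sketch with early termination may help fix ideas; the quadratic local costs, penalty parameter, and tolerance are invented, and the chapter's stopping rules are tied to a guaranteed distance to optimality rather than this simple residual test:

```python
# Minimal consensus-ADMM sketch with early termination: two agents with
# quadratic local costs agree on a shared variable.
import numpy as np

# local costs f_i(x) = 0.5 * h_i * x^2 + q_i * x  (hypothetical)
h = np.array([2.0, 1.0])
q = np.array([-4.0, 1.0])
rho, tol = 1.0, 1e-4

x = np.zeros(2)      # local copies
z = 0.0              # shared (consensus) variable
lam = np.zeros(2)    # scaled dual variables

for k in range(200):
    # local updates: argmin_x f_i(x) + (rho/2)(x - z + lam_i)^2
    x = (rho * (z - lam) - q) / (h + rho)
    z_old = z
    z = np.mean(x + lam)             # consensus update
    lam = lam + x - z                # dual update
    primal_res = np.linalg.norm(x - z)
    dual_res = rho * abs(z - z_old)
    if max(primal_res, dual_res) < tol:
        print(f"terminated early at iteration {k}")
        break

print("consensus value:", z)   # minimizer of f1 + f2: z* = (4 - 1) / 3 = 1.0
```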

On the use of suboptimal solvers for efficient cooperative distributed linear MPC

by Gabriele Pannocchia, Stephen J. Wright, and James B. Rawlings.

We address the problem of efficient implementation of distributed Model Predictive Control (MPC) systems for large-scale plants. We explore two possibilities for using suboptimal solvers for the quadratic program associated with the local MPC problems. The first is based on an active set method with early termination. The second is based on Partial Enumeration (PE), an approach that computes the (sub)optimal solution by using a solution table storing the information of only a few of the most recently optimal active sets. The use of quick suboptimal solvers, especially PE, is shown to be beneficial because more cooperative iterations can be performed in the given decision time. By using the available computation time for cooperative iterations rather than local iterations, we can improve the overall optimality of the strategy. We also discuss how input constraints that involve different units (for example, a bound on the total consumption of a shared utility) can be handled appropriately. Our main ideas are illustrated with a simulated example comprising three units and a coupled input constraint.
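
A much simplified sketch of the PE idea, for a toy box-constrained QP with invented data: a small table keeps recently optimal active sets, each query first tries the table, and a miss falls back to an expensive full solve whose active set is then stored (a real PE implementation handles misses, suboptimal fallbacks, and table updates more carefully):

```python
# Simplified Partial Enumeration sketch for min_u 0.5 u'Hu + g'u, -1 <= u <= 1,
# where g varies with the current state. Data and table size are invented.
import numpy as np
from itertools import product

H = np.array([[2.0, 0.5], [0.5, 1.0]])

def solve_for_active_set(g, s):
    """Candidate solution for activity pattern s in {-1, 0, +1}^n."""
    u = s.astype(float)
    free = s == 0
    if free.any():
        u[free] = np.linalg.solve(H[np.ix_(free, free)],
                                  -(g[free] + H[np.ix_(free, ~free)] @ u[~free]))
    grad = H @ u + g
    ok = (np.all(np.abs(u) <= 1 + 1e-9)          # primal feasibility
          and np.all(grad[s == 1] <= 1e-9)       # multiplier signs at bounds
          and np.all(grad[s == -1] >= -1e-9))
    return u, ok

def full_solve(g):
    """Brute-force enumeration of all active sets (the expensive fallback)."""
    for s in product([-1, 0, 1], repeat=len(g)):
        u, ok = solve_for_active_set(g, np.array(s))
        if ok:
            return u, np.array(s)

table = []                                 # most-recently-optimal active sets
for g in [np.array([3.0, 0.0]), np.array([2.9, 0.1]), np.array([-0.2, 0.3])]:
    for s in table:
        u, ok = solve_for_active_set(g, s)
        if ok:
            break                          # table hit: cheap solve
    else:
        u, s = full_solve(g)               # miss: solve and store the active set
        table = [s] + table[:4]            # keep only a few recent sets
    print(g, "->", u)
```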

Lyapunov-Based DMPC Schemes: Sequential and Iterative Approaches

by Jinfeng Liu, David Muñoz de la Peña and Panagiotis D. Christofides.

In this chapter, we focus on two distributed MPC (DMPC) schemes for the control of large-scale nonlinear systems in which several distinct sets of manipulated inputs are used to regulate the system. In the first scheme, the distributed controllers use a one-directional communication strategy, are evaluated in sequence, and each controller is evaluated once at each sampling time; in the second scheme, the distributed controllers utilize a bi-directional communication strategy, are evaluated in parallel, and iterate to improve closed-loop performance. In the design of the distributed controllers, Lyapunov-based model predictive control techniques are used. To ensure the stability of the closed-loop system, each model predictive controller in both schemes incorporates a stability constraint based on a suitable Lyapunov-based controller. We review the properties of the two DMPC schemes from the stability, performance, and computational complexity points of view. Subsequently, we briefly discuss applications of the DMPC schemes to chemical processes and renewable energy generation systems.
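
The two communication patterns can be caricatured as follows; the scalar best responses below are invented stand-ins for the chapter's Lyapunov-based MPC problems:

```python
# Structural sketch of the two schemes. Each "controller" just computes a
# best response for an invented coupled scalar cost.
def best_response(i, u_other):
    """Hypothetical local problem: min_u (u - target_i)^2 + u * u_other."""
    target = [2.0, -1.0][i]
    return target - 0.5 * u_other   # closed-form minimizer

# Sequential scheme: one-directional communication, each controller
# evaluated once per sampling time, in a fixed order.
u = [0.0, 0.0]
for i in (0, 1):
    u[i] = best_response(i, u[1 - i])
print("sequential:", u)

# Iterative scheme: bi-directional communication, controllers evaluated
# in parallel and iterated to improve closed-loop performance.
u = [0.0, 0.0]
for _ in range(20):
    u = [best_response(0, u[1]), best_response(1, u[0])]
print("iterative: ", u)
```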

A distributed reference management scheme in the presence of non-convex constraints: an MPC based approach

by Francesco Tedesco, Davide Martino Raimondo, Alessandro Casavola.

This chapter deals with distributed coordination problems that include the fulfillment of non-convex constraints. A Distributed Command Governor (D-CG) strategy is proposed to coordinate a set of dynamically decoupled subsystems. The approach results in a receding horizon strategy that requires the solution of mixed-integer optimization programs at each time step.
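
As a rough illustration, the sketch below runs a single-agent command-governor step in which admissibility is checked by forward prediction against a disjoint (non-convex) output set, and the choice of the applied reference is made by plain enumeration over a grid, mimicking the integer part of the mixed-integer programs; the scalar dynamics and sets are illustrative assumptions:

```python
# Minimal command-governor sketch with a non-convex constraint: apply the
# admissible reference g closest to the desired reference r.
import numpy as np

a, b = 0.9, 0.1                       # scalar closed-loop dynamics x+ = a x + b g

def admissible(x, g, horizon=50):
    """Predicted output must avoid the forbidden band (1.0, 2.0)."""
    for _ in range(horizon):
        x = a * x + b * g
        if 1.0 < x < 2.0:             # non-convex: the feasible set is disjoint
            return False
    return True

def governor(x, r):
    candidates = np.linspace(-5.0, 5.0, 201)
    feasible = [g for g in candidates if admissible(x, g)]
    return min(feasible, key=lambda g: (g - r) ** 2)

x, r = 0.0, 3.0                       # the desired reference sits past the band
for t in range(5):
    g = governor(x, r)
    x = a * x + b * g
    print(f"t={t}: g={g:.2f}, x={x:.2f}")
```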