by Bernardo Morcego, Valeria Javalera, Vicenç Puig and Raffaele Vito.
This chapter describes a methodology to deal with the interaction (negotiation) between MPC controllers in a distributed MPC architecture. The approach combines ideas from Distributed Artificial Intelligence (DAI) and Reinforcement Learning (RL) to provide controller interaction based on negotiation, cooperation and learning techniques. The aim of this methodology is to provide a general structure for optimal control in networked distributed environments with multiple dependencies between subsystems. These dependencies, or connections, often correspond to control variables, in which case the distributed control has to be consistent across subsystems. One of the main new concepts of this architecture is the negotiator agent. Negotiator agents interact with MPC agents to reach an agreement on the optimal value of the shared control variables. This value has to accomplish a common goal that may be incompatible with the specific goals of each partition sharing the variable. Two case studies are discussed: a small water distribution network and the Barcelona water network. The results suggest that this approach is a promising strategy when centralized control is not a reasonable choice.
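As a rough illustration of the negotiation idea (not the chapter's actual algorithm), a negotiator agent can be sketched as a simple learner that searches for the value of a shared control variable minimizing the combined deviation from two MPC agents' locally preferred values. All names, the quadratic costs and the round-robin exploration are illustrative assumptions.

```python
import numpy as np

def negotiate(pref_a, pref_b, values, episodes=500, alpha=0.5):
    """Bandit-style RL sketch: the negotiator learns which value of a
    shared control variable best serves both MPC agents."""
    q = np.zeros(len(values))
    for ep in range(episodes):
        i = ep % len(values)                 # simple round-robin exploration
        v = values[i]
        # reward: negative combined cost reported by the two MPC agents
        reward = -((v - pref_a) ** 2 + (v - pref_b) ** 2)
        q[i] += alpha * (reward - q[i])      # incremental value update
    return values[int(np.argmax(q))]
```

With preferences 2.0 and 4.0 over a 0.5-spaced grid, the learner settles on the compromise value 3.0, the minimizer of the joint cost.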
by Benjamin Biegel, Jakob Stoustrup, and Palle Andersen.
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller, while a centralized entity manages the subsystems via prices associated with the coupling constraints. This allows coordination of all the subsystems without sharing local dynamics, objectives and constraints. To illustrate this, an example is included in which dual decomposition is used to resolve power grid congestion in a distributed manner among a number of players coupled by distribution grid constraints.
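The price mechanism can be sketched on a toy problem (assumed here, not taken from the chapter): two agents with quadratic costs share a capacity constraint, each solves its local problem for a given price, and the coordinator raises the price when the coupling constraint is violated.

```python
def local_solve(a, price):
    # Each agent minimizes (u - a)^2 + price * u  ->  u = a - price / 2
    return a - price / 2.0

def coordinate(preferences, capacity, alpha=0.5, iters=200):
    """Dual decomposition: subgradient ascent on the price of the
    coupling constraint sum(u) <= capacity."""
    price = 0.0
    for _ in range(iters):
        u = [local_solve(a, price) for a in preferences]
        # raise the price when demand exceeds capacity, never below zero
        price = max(0.0, price + alpha * (sum(u) - capacity))
    return u, price
```

With preferences [3, 2] and capacity 4, the price converges to 1 and the agents settle on u = [2.5, 1.5], which satisfies the constraint with equality; note that no agent ever reveals its cost function to the coordinator.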
by Felipe Valencia, José David López, Julián Alberto Patiño, Jairo José Espinosa.
Despite the efforts dedicated to designing methods for distributed model predictive control (DMPC), cooperation among subsystems remains an open research problem. To overcome this issue, game theory arises as an alternative for formulating and characterizing the DMPC problem. Game theory is a branch of applied mathematics used to capture the behavior of the players (agents or subsystems) involved in strategic situations, where the outcome of a player depends not only on their own choices but also on the choices of the others. In this chapter a bargaining game based DMPC scheme is proposed; roughly speaking, a bargaining game is a situation where several players jointly decide which strategy is best with respect to their mutual benefit. This makes it possible to deal with the cooperation issues of the DMPC problem. Additionally, the bargaining game framework allows the formulation of solutions where the subsystems do not have to solve more than one optimization at each time step, which also reduces the computational burden of the local optimization problems.
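A standard bargaining-theoretic notion that fits this setting is the Nash bargaining solution: pick the joint decision maximizing the product of the players' utility gains over their disagreement point. The sketch below (illustrative utilities, not the chapter's formulation) searches a candidate grid for that maximizer.

```python
import numpy as np

def nash_bargaining(utilities, disagreement, candidates):
    """Return the joint decision maximizing the product of utility
    gains over the disagreement point (Nash bargaining solution)."""
    best, best_val = None, -np.inf
    for u in candidates:
        gains = [f(u) - d for f, d in zip(utilities, disagreement)]
        if all(g > 0 for g in gains):        # only mutually beneficial outcomes
            val = np.prod(gains)
            if val > best_val:
                best, best_val = u, val
    return best
```

For two players with peaks at 1 and 3 and a common disagreement utility of 0, the bargaining solution is the symmetric compromise at 2, where neither player can gain without hurting the product of gains.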
by Ye Hu and Nael H. El-Farra.
This work presents a framework for quasi-decentralized model predictive control (MPC) design with an adaptive communication strategy. In this framework, each unit of the networked process system is controlled by a local control system for which measurements of the local process state are available at each sampling instant. The aim is to minimize the cross communication between each local control system and the sensors of the other units over the communication network, while preserving stability and a certain level of control system performance. The quasi-decentralized MPC scheme is designed on the basis of distributed Lyapunov-based bounded control with sampled measurements, and the stability properties of each closed-loop subsystem are characterized. Using this characterization, an adaptive communication strategy is proposed that forecasts the future evolution of the local process state within each local control system. Whenever the forecast shows signs of instability of the local process state, the measurements of the entire process state are transmitted to update the model within that particular control system to ensure stability; otherwise, the local control system continues to rely on the model within the local MPC controller. The implementation of this theoretical framework is demonstrated using a simulated networked chemical process.
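The communication logic can be caricatured with two coupled scalar units (an assumed toy model, not the chapter's chemical process): unit 1 measures its own state but only models its neighbor's state, and it polls the network for the true neighbor state only when its measured local state drifts past a threshold, a crude stand-in for the forecast-based trigger.

```python
def quasi_decentralized_sim(steps=30, threshold=0.5):
    """Unit 1 cancels the coupling c*x2 using a local model of x2;
    the model drifts because it ignores the neighbor's bias w2."""
    a1, c, a2, w2 = 0.5, 1.0, 0.5, 0.2       # illustrative parameters
    x1, x2, x2_model = 0.0, 0.0, 0.0
    updates, peak = 0, 0.0
    for _ in range(steps):
        u1 = -c * x2_model                   # cancel the coupling as modeled
        x1 = a1 * x1 + c * x2 + u1           # true unit-1 dynamics
        x2 = a2 * x2 + w2                    # neighbor evolves with bias w2
        x2_model = a2 * x2_model             # local model omits w2
        peak = max(peak, abs(x1))
        if abs(x1) > threshold:              # stand-in for the forecast test
            x2_model = x2                    # full-state update over the network
            updates += 1
    return peak, updates
```

In this setup the local state stays bounded while measurements of the neighbor are transmitted only a handful of times, rather than at every sampling instant.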
by Farhad Farokhi, Iman Shames, and Karl H. Johansson.
A conventional way to handle model predictive control (MPC) problems in a distributed fashion is to solve them via dual decomposition and gradient ascent. However, at each time step, it might not be feasible to wait for the dual algorithm to converge, so the algorithm may need to be terminated prematurely. One is then interested in whether the solution at the point of termination is close to the optimal solution, and in when one should terminate the algorithm if a certain distance to optimality is to be guaranteed. In this chapter, we study this problem for distributed systems under general dynamical and performance couplings; we then make a statement on the validity of similar results when the problem is solved using the alternating direction method of multipliers (ADMM).
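A minimal sketch of the premature-termination idea, on an assumed two-subsystem problem rather than the chapter's general setting: run dual gradient ascent on an equality coupling and stop as soon as the coupling residual, which here bounds how far the dual variable is from its optimum, falls below a tolerance.

```python
def dual_ascent_early_stop(alpha=0.2, tol=1e-6, max_iters=1000):
    """Dual gradient ascent for: min u1^2 + u2^2  s.t.  u1 + u2 = 2.
    Terminates early once the coupling residual is below `tol`."""
    lam = 0.0
    u1 = u2 = 0.0
    for k in range(max_iters):
        u1 = lam / 2.0            # argmin_u u^2 - lam*u, subsystem 1
        u2 = lam / 2.0            # same local problem for subsystem 2
        residual = 2.0 - (u1 + u2)
        if abs(residual) < tol:   # premature-termination test
            break
        lam += alpha * residual   # dual (price) update
    return u1, u2, lam, k
```

The iterates converge geometrically to the optimum u1 = u2 = 1, so the loop exits after a few dozen iterations instead of exhausting the iteration budget.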
by Gabriele Pannocchia, Stephen J. Wright, and James B. Rawlings.
We address the problem of efficient implementations of distributed Model Predictive Control (MPC) systems for large-scale plants. We explore two possibilities for using suboptimal solvers for the quadratic program associated with the local MPC problems. The first is based on an active set method with early termination. The second is based on Partial Enumeration (PE), an approach that computes the (sub)optimal solution by means of a solution table storing only a few of the most recently optimal active sets. The use of quick suboptimal solvers, especially PE, is shown to be beneficial because more cooperative iterations can be performed in the given decision time. By using the available computation time for cooperative iterations rather than local iterations, we can improve the overall optimality of the strategy. We also discuss how input constraints that involve different units (for example, on the summation of common utility consumption) can be handled appropriately. Our main ideas are illustrated with a simulated example comprising three units and a coupled input constraint.
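The Partial Enumeration idea can be sketched on the smallest possible QP, min (u - r)^2 with u in [lo, hi] (an assumed toy, far simpler than the chapter's MPC problems): keep a short table of recently optimal active sets, test each candidate's KKT conditions before doing any solve, and fall back to a full solve only on a table miss.

```python
from collections import deque

def kkt_ok(active, r, lo, hi):
    if active is None:   # no active bound: unconstrained optimum u = r feasible?
        return lo <= r <= hi
    if active == 'lo':   # lower bound active: multiplier 2*(lo - r) >= 0
        return r <= lo
    return r >= hi       # 'hi': multiplier 2*(r - hi) >= 0

def pe_solve(r, lo, hi, table):
    """Try recently optimal active sets first (Partial Enumeration)."""
    for active in list(table):
        if kkt_ok(active, r, lo, hi):
            table.remove(active)
            table.appendleft(active)          # keep most-recent-first order
            return {None: r, 'lo': lo, 'hi': hi}[active]
    # table miss: full solve (here just clipping), then record the active set
    u = min(max(r, lo), hi)
    active = None if lo < u < hi else ('lo' if u == lo else 'hi')
    table.appendleft(active)
    return u
```

A bounded deque plays the role of the fixed-size solution table; repeated queries hitting the same active set are answered by a constant-time KKT check instead of an optimization.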
by Jinfeng Liu, David Muñoz de la Peña and Panagiotis D. Christofides.
In this chapter, we focus on two distributed MPC (DMPC) schemes for the control of large-scale nonlinear systems in which several distinct sets of manipulated inputs are used to regulate the system. In the first scheme, the distributed controllers use a one-directional communication strategy, are evaluated in sequence, and each controller is evaluated once at each sampling time; in the second scheme, the distributed controllers utilize a bi-directional communication strategy, are evaluated in parallel and iterate to improve closed-loop performance. In the design of the distributed controllers, Lyapunov-based model predictive control techniques are used. To ensure the stability of the closed-loop system, each model predictive controller in both schemes incorporates a stability constraint which is based on a suitable Lyapunov-based controller. We review the properties of the two DMPC schemes from the stability, performance and computational complexity points of view. Subsequently, we briefly discuss the applications of the DMPC schemes to chemical processes and renewable energy generation systems.
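The one-directional, sequential scheme can be sketched on a scalar one-step problem (an assumed toy; the Lyapunov-based stability constraint is omitted): controller 1 decides first assuming the other input is zero, then controller 2 optimizes with controller 1's input already fixed.

```python
def sequential_dmpc_step(x, a=1.0, b1=1.0, b2=1.0):
    """Two controllers evaluated in sequence for x+ = a*x + b1*u1 + b2*u2,
    each minimizing (x_next)^2 + u_i^2 over a one-step horizon."""
    # Controller 1: min (a*x + b1*u1)^2 + u1^2, closed-form minimizer
    u1 = -a * b1 * x / (1.0 + b1 ** 2)
    # Controller 2 receives u1 and solves: min (a*x + b1*u1 + b2*u2)^2 + u2^2
    u2 = -b2 * (a * x + b1 * u1) / (1.0 + b2 ** 2)
    return u1, u2
```

From x = 2 this yields u1 = -1 and u2 = -0.5, driving the state to 0.5 in one step; the bi-directional scheme would instead repeat such exchanges iteratively within one sampling period.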
by Francesco Tedesco, Davide Martino Raimondo, Alessandro Casavola.
This chapter deals with distributed coordination problems that involve the fulfillment of non-convex constraints. A Distributed Command Governor (D-CG) strategy is proposed here to coordinate a set of dynamically decoupled subsystems. The approach results in a receding horizon strategy that requires the solution of mixed-integer optimization programs.
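Why mixed-integer programs appear can be seen on a one-dimensional caricature (assumed here, not from the chapter): a non-convex constraint such as "stay outside a forbidden gap" is the union of two convex pieces, and choosing the piece is exactly the binary decision a mixed-integer solver makes. Enumerating the pieces by hand gives a tiny stand-in for that solve.

```python
def dcg_project(r, gap_lo, gap_hi, u_min, u_max):
    """Project the reference r onto the non-convex set
    [u_min, gap_lo] U [gap_hi, u_max] by solving each convex piece."""
    candidates = []
    for lo, hi in ((u_min, gap_lo), (gap_hi, u_max)):
        u = min(max(r, lo), hi)              # projection onto one convex piece
        candidates.append((abs(u - r), u))   # (distance, candidate)
    return min(candidates)[1]                # keep the closest candidate
```

A reference of 0.6 inside the forbidden gap (0, 1) is pushed to the nearer boundary 1.0, while references already outside the gap are left untouched.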
by Antonio Ferramosca, Daniel Limon and Alejandro H. González.
In this chapter, a distributed MPC strategy suitable for changing setpoints is described. Based on a cooperative distributed control structure, an extended-cost MPC formulation is proposed, which integrates the problem of computing feasible steady-state targets – usually known as the Steady State Target Optimizer (SSTO) problem – and the dynamic control problem into a single optimization problem. The proposed controller is able to drive the system to any admissible setpoint in an admissible way, ensuring feasibility under any change of setpoint. It also provides a larger domain of attraction than standard MPC for regulation, due to the particular terminal constraint. Moreover, the controller ensures convergence to the centralized optimum, even in the case of coupled constraints. This is possible thanks to the design of the cost function, which integrates the SSTO, and to the warm-start algorithm used to initialize the optimization. A numerical simulation illustrates the benefits of the proposal.
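The single-optimization idea can be sketched on a scalar, one-step, unconstrained version (an assumed simplification of the extended-cost formulation): the input u and an artificial steady-state target xs are decided together in one least-squares problem mixing a tracking cost toward xs with an offset cost pulling xs toward the setpoint.

```python
import numpy as np

def extended_cost_mpc(x, x_sp, a=0.5, b=1.0, beta=100.0):
    """One-shot SSTO + regulation for x+ = a*x + b*u.
    Residuals over z = [u, xs]:
      (a*x + b*u) - xs          tracking of the artificial target
      u - ((1 - a)/b)*xs        deviation from the target's steady input
      sqrt(beta)*(xs - x_sp)    offset cost pulling xs to the setpoint"""
    M = np.array([[b, -1.0],
                  [1.0, -(1.0 - a) / b],
                  [0.0, np.sqrt(beta)]])
    rhs = np.array([-a * x, 0.0, np.sqrt(beta) * x_sp])
    u, xs = np.linalg.lstsq(M, rhs, rcond=None)[0]
    return u, xs
```

With a large offset weight beta the artificial target xs lands essentially on the setpoint; if the setpoint changed to an unreachable value, the same single problem would instead place xs at the closest admissible target, which is the feasibility mechanism the abstract refers to.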
by P. A. Trodden and A. G. Richards.
This chapter presents a robust form of distributed model predictive control for multiple, dynamically decoupled subsystems subject to bounded, persistent disturbances. Control agents make decisions locally and exchange plans; satisfaction of coupling constraints is ensured by permitting only non-coupled subsystems to update simultaneously. Robustness to disturbances is achieved by use of the tube MPC concept, in which a local control agent designs a tube, rather than a trajectory, for its subsystem to follow. Cooperation between agents is promoted by having each local agent, in its optimization, design hypothetical tubes for the other subsystems, trading local performance for global performance. Uniquely, robust feasibility and stability are maintained without the need for negotiation or bargaining between agents.
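The tube concept itself can be demonstrated on a scalar system (an assumed toy; the nominal MPC is replaced by plain state feedback): a nominal trajectory z is steered disturbance-free, while an ancillary feedback on the error x - z keeps the true state inside a bounded tube around z despite persistent disturbances.

```python
import numpy as np

def tube_sim(steps=50, k_nom=-0.5, k_anc=-0.5, w_max=0.1, seed=0):
    """Tube sketch for x+ = x + u + w with |w| <= w_max: apply
    u = v + k_anc*(x - z), where v = k_nom*z steers the nominal model."""
    rng = np.random.default_rng(seed)
    x, z = 1.0, 1.0
    max_err = 0.0
    for _ in range(steps):
        w = rng.uniform(-w_max, w_max)   # bounded, persistent disturbance
        v = k_nom * z                    # nominal input (stand-in for MPC)
        u = v + k_anc * (x - z)          # ancillary (tube) controller
        x = x + u + w                    # true dynamics
        z = z + v                        # disturbance-free nominal model
        max_err = max(max_err, abs(x - z))
    return max_err
```

The error obeys e+ = (1 + k_anc)*e + w = 0.5*e + w, so starting from e = 0 it never leaves [-0.2, 0.2]: that interval is the tube cross-section, and constraint tightening by that margin on the nominal problem is what buys robust feasibility.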