Meta-Reinforcement Learning for Adaptive Control of Second Order Systems

In Proceedings of the 7th International Symposium on Advanced Control of Industrial Processes (AdCONIP).



By Daniel G. McClement, Nathan P. Lawrence, Michael G. Forbes, Philip D. Loewen, Johan U. Backström, R. Bhushan Gopaluni.
Fig. 2. The structure of the RL agent. The control policy used online is shown in the grey box, while the critic used during offline training is shown in green.



Meta-learning is a branch of machine learning which aims to synthesize data from a distribution of related tasks to efficiently solve new ones. In process control, many systems have similar and well-understood dynamics, which suggests it is feasible to create a generalizable controller through meta-learning. In this work, we formulate a meta-reinforcement learning (meta-RL) control strategy that takes advantage of known, offline information for training, such as a model structure. The meta-RL agent is trained over a distribution of model parameters, rather than a single model, enabling the agent to automatically adapt to changes in the process dynamics while maintaining performance. A key design element is the ability to leverage model-based information offline during training, while maintaining a model-free policy structure for interacting with new environments. Our previous work has demonstrated how this approach can be applied to the industrially relevant problem of tuning proportional-integral controllers to control first order processes. In this work, we briefly reintroduce our methodology and demonstrate how it can be extended to proportional-integral-derivative controllers and second order systems.
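The core idea of training over a distribution of model parameters, rather than a single plant, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the parameter ranges, the Euler discretization of the second order plant, and the fixed PID tuning evaluated across tasks are all assumptions chosen for illustration. A meta-trained agent would replace the fixed tuning with a policy that adapts the gains from observed process behaviour.

```python
import random

def sample_task():
    # Hypothetical task distribution: second order process parameters
    # (gain K, time constant tau, damping ratio zeta). Ranges are assumptions.
    return {
        "K": random.uniform(0.5, 2.0),
        "tau": random.uniform(1.0, 5.0),
        "zeta": random.uniform(0.3, 1.5),
    }

def simulate_pid(task, kp, ki, kd, setpoint=1.0, dt=0.05, steps=400):
    """Roll out a PID loop on the second order plant
        tau^2 y'' + 2 zeta tau y' + y = K u,
    integrated with forward Euler steps. Returns the integrated
    absolute tracking error (lower is better)."""
    K, tau, zeta = task["K"], task["tau"], task["zeta"]
    y = dy = 0.0
    integral = prev_err = 0.0
    cost = 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv  # PID control law
        prev_err = err
        # Forward Euler integration of the second order ODE
        ddy = (K * u - 2.0 * zeta * tau * dy - y) / (tau ** 2)
        dy += ddy * dt
        y += dy * dt
        cost += abs(err) * dt
    return cost

random.seed(0)
tasks = [sample_task() for _ in range(10)]
# A meta-RL policy would map each task's observed behaviour to (kp, ki, kd);
# here we simply evaluate one fixed tuning across the whole distribution,
# showing why a single tuning performs unevenly over varying dynamics.
costs = [simulate_pid(t, kp=2.0, ki=0.2, kd=0.5) for t in tasks]
```

The spread between the best and worst cost across sampled tasks is what motivates adapting the controller gains to each plant rather than fixing them once.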

Read or Download: PDF

Can't find a paper? Create a GitHub issue to request a preprint.

DAIS Lab Publications
