CONFERENCE (2023)

A modular framework for stabilizing deep reinforcement learning control

Nathan P. Lawrence, Philip D. Loewen, Shuyuan Wang, Michael G. Forbes, R. Bhushan Gopaluni

In Proceedings of the 22nd IFAC World Congress, 2023

[PDF] [Slides] [Video] [arXiv]

Fig. 1: A stable nonlinear parameter Q interacts with its environment; collected input-output trajectories are used to construct a Hankel matrix. These ingredients yield an equivalent realization of the Youla-Kučera parameterization.

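The data-driven internal model in Fig. 1 rests on stacking a single input-output exploration trajectory into a block-Hankel matrix. Below is a minimal sketch of that standard construction; it is not the authors' code, and the window depth `L` and the random placeholder trajectories are assumptions for illustration.

```python
# Sketch: build a block-Hankel matrix from one input-output trajectory,
# as used in behavioral / data-driven control methods.
import numpy as np

def block_hankel(w: np.ndarray, L: int) -> np.ndarray:
    """Stack length-L windows of the signal w (shape T x m) into columns.

    Returns an (L*m) x (T-L+1) block-Hankel matrix whose columns are
    consecutive length-L slices of the trajectory.
    """
    T, m = w.shape
    cols = T - L + 1
    H = np.empty((L * m, cols))
    for i in range(cols):
        H[:, i] = w[i:i + L].reshape(-1)  # flatten one time window
    return H

# Hypothetical exploration data: random input u and measured output y.
rng = np.random.default_rng(0)
u = rng.standard_normal((200, 1))
y = rng.standard_normal((200, 1))          # placeholder for real measurements
H = block_hankel(np.hstack([u, y]), L=10)  # joint input-output Hankel matrix
print(H.shape)                             # (20, 191)
```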

Abstract

We propose a framework for the design of feedback controllers that combines the optimization-driven and model-free advantages of deep reinforcement learning with the stability guarantees provided by the Youla-Kučera parameterization, which defines the search domain. Recent advances in behavioral systems theory allow us to construct a data-driven internal model; this enables an alternative realization of the Youla-Kučera parameterization based entirely on input-output exploration data. Using a neural network to express a parameterized set of nonlinear stable operators enables seamless integration with standard deep learning libraries. We demonstrate the approach on a realistic simulation of a two-tank system.
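The stable nonlinear operator Q can be realized in several ways; one common choice, sketched below under our own assumptions rather than the paper's specific architecture, is a small recurrent network whose recurrent weight is rescaled to have spectral norm below 1, making the state update a contraction and hence the operator input-output stable. All layer sizes and the contraction rate are illustrative.

```python
# Sketch: a contractive recurrent network as a stable nonlinear operator Q.
# Not the paper's architecture; sizes and the rate gamma are assumptions.
import torch
import torch.nn as nn

class StableQ(nn.Module):
    def __init__(self, n_in=1, n_hidden=16, n_out=1, gamma=0.95):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_hidden, n_hidden) * 0.1)
        self.B = nn.Linear(n_in, n_hidden)
        self.C = nn.Linear(n_hidden, n_out)
        self.gamma = gamma  # contraction rate bound (< 1)

    def recurrent_weight(self):
        # Rescale W so its spectral norm is at most gamma < 1; combined
        # with the 1-Lipschitz tanh this makes the state map a contraction.
        s = torch.linalg.matrix_norm(self.W, ord=2)
        return self.W * (self.gamma / torch.clamp(s, min=self.gamma))

    def forward(self, u):  # u: (T, n_in) input sequence
        A = self.recurrent_weight()
        x = torch.zeros(A.shape[0])
        ys = []
        for t in range(u.shape[0]):
            x = torch.tanh(x @ A.T + self.B(u[t]))  # contractive state update
            ys.append(self.C(x))
        return torch.stack(ys)

Q = StableQ()
y = Q(torch.randn(50, 1))  # bounded inputs yield bounded responses
```

Because the stability constraint here is enforced by a differentiable rescaling, such an operator can be trained end-to-end with ordinary gradient-based deep RL pipelines, which is the integration point the abstract highlights.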

Read or Download: PDF


