Reinforcement-learning-based dual-control methodology for complex nonlinear discrete-time systems with application to spark engine EGR operation
- PMID: 18701368
- DOI: 10.1109/TNN.2008.2000452
Abstract
A novel reinforcement-learning-based, dual-control adaptive neural network (NN) controller is developed to deliver a desired tracking performance for a class of complex nonlinear discrete-time systems, consisting of a second-order nonlinear discrete-time system in nonstrict feedback form and an affine nonlinear discrete-time system, in the presence of bounded and unknown disturbances. For example, the exhaust gas recirculation (EGR) operation of a spark ignition (SI) engine is modeled by using such a complex nonlinear discrete-time system. A dual-controller approach is undertaken in which the primary adaptive critic NN controller is designed for the nonstrict feedback nonlinear discrete-time system and the secondary one for the affine nonlinear discrete-time system; together, the controllers deliver the desired performance. The primary adaptive critic NN controller includes an NN observer for estimating the states and output, an NN critic, and two action NNs for generating virtual and actual control inputs for the nonstrict feedback nonlinear discrete-time system, whereas an additional critic NN and an action NN are included for the affine nonlinear discrete-time system, assuming state availability. All NN weights adapt online toward minimization of a certain performance index, utilizing a gradient-descent-based rule. Using Lyapunov theory, the uniform ultimate boundedness (UUB) of the closed-loop tracking error, weight estimates, and observer estimates is shown. The adaptive critic NN controller performance is evaluated on an SI engine operating with high EGR levels, where the controller objective is to reduce cyclic dispersion in heat release while minimizing fuel intake. Simulation and experimental results indicate that engine-out emissions drop significantly at 20% EGR owing to the reduction in heat-release dispersion, thus verifying the dual-control approach.
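The abstract states only that all NN weights adapt online through a gradient-descent-based rule that minimizes a performance index; the exact update laws, layer structure, and cost terms appear in the paper itself. The sketch below (Python/NumPy) is therefore a hedged illustration of a generic critic/action weight update of that flavor, not the authors' design: the names `critic`, `action`, and `adapt`, the tanh basis, the discount factor, and all gains and dimensions are assumptions.

```python
import numpy as np

# Hypothetical sizes and gains: the abstract does not give the paper's NN
# dimensions, activation functions, learning rates, or cost definitions.
n_state, n_hidden = 2, 10
alpha_c, alpha_a = 0.05, 0.05   # assumed critic / action learning rates
gamma = 0.8                     # assumed discount factor in the performance index

rng = np.random.default_rng(0)
W_c = rng.normal(scale=0.5, size=(n_state, n_hidden))  # fixed random input layer (critic)
W_a = rng.normal(scale=0.5, size=(n_state, n_hidden))  # fixed random input layer (action NN)
V_c = np.zeros((n_hidden, 1))                           # tunable critic output weights
V_a = np.zeros((n_hidden, 1))                           # tunable action output weights

def critic(x):
    """Critic NN: estimate of the cost-to-go (strategic utility) at state x."""
    return np.tanh(x @ W_c) @ V_c

def action(x):
    """Action NN: control input generated at state x."""
    return np.tanh(x @ W_a) @ V_a

def adapt(x, x_next, e_track, r):
    """One online gradient-descent step on quadratic performance indices.

    x, x_next : row vectors (1 x n_state) of current/next state estimates
    e_track   : tracking error at the next instant
    r         : instantaneous cost signal (e.g., a binary utility of e_track)
    """
    global V_c, V_a
    # Critic update: shrink a temporal-difference-like residual.
    e_c = critic(x_next) - (gamma * critic(x) - r)
    V_c = V_c - alpha_c * np.tanh(x @ W_c).T @ e_c
    # Action update: shrink the tracking error plus the critic's cost estimate.
    e_a = e_track + critic(x_next)
    V_a = V_a - alpha_a * np.tanh(x @ W_a).T @ e_a
```

In a closed-loop simulation, `adapt` would be called once per sampling instant with the observer's state estimates and the measured tracking error, so that both output-layer weight vectors evolve online while the controller runs, as in the adaptive critic scheme described above.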
Similar articles
- Reinforcement-learning-based output-feedback control of nonstrict nonlinear discrete-time systems with application to engine emission control. IEEE Trans Syst Man Cybern B Cybern. 2009 Oct;39(5):1162-79. doi: 10.1109/TSMCB.2009.2013272. Epub 2009 Mar 24. PMID: 19336317
- Neural-network-based state feedback control of a nonlinear discrete-time system in nonstrict feedback form. IEEE Trans Neural Netw. 2008 Dec;19(12):2073-87. doi: 10.1109/TNN.2008.2003295. PMID: 19054732
- Reinforcement learning neural-network-based controller for nonlinear discrete-time systems with input constraints. IEEE Trans Syst Man Cybern B Cybern. 2007 Apr;37(2):425-36. doi: 10.1109/tsmcb.2006.883869. PMID: 17416169
- Control strategies for inverted pendulum: A comparative analysis of linear, nonlinear, and artificial intelligence approaches. PLoS One. 2024 Mar 7;19(3):e0298093. doi: 10.1371/journal.pone.0298093. PMID: 38452009. Free PMC article. Review.
- The use of artificial neural networks in biomedical technologies: an introduction. Biomed Instrum Technol. 1994 Jul-Aug;28(4):315-22. PMID: 7920848. Review.
Cited by
- Safe deep reinforcement learning in diesel engine emission control. Proc Inst Mech Eng Part I J Syst Control Eng. 2023 Sep;237(8):1440-1453. doi: 10.1177/09596518231153445. Epub 2023 Feb 17. PMID: 37692899. Free PMC article.