Optimal Control of an Electromechanical Energy Harvester

Dario Lucente et al. Entropy (Basel). 2025 Mar 5;27(3):268. doi: 10.3390/e27030268.

Abstract

Many techniques originally developed in the context of deterministic control theory have recently been applied to the quest for optimal protocols in stochastic processes. Given a system subject to environmental fluctuations, one may ask what is the best way to change its controllable parameters in time in order to maximize, on average, a certain reward function, while steering the system between two pre-assigned states. In this work, we study the problem of optimal control for a wide class of stochastic systems, inspired by a model of an energy harvester. The stochastic noise in this system is due to the mechanical vibrations, while the reward function is the average power extracted from them. We consider the case in which the electrical resistance of the harvester can be changed in time, and we exploit the tools of control theory to work out optimal solutions in a perturbative regime, close to the stationary state. Our results show that it is possible to design protocols that perform better than any possible solution with constant resistance.
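The comparison at the heart of the abstract, average extracted power under a constant load resistance versus a time-varying one, can be sketched numerically. The model below is a generic illustration, not the paper's equations: a noisy mechanical oscillator (position x, velocity v) coupled to a circuit current I through a coefficient theta, with extracted power R I^2. All parameters and the sinusoidal protocol are placeholder assumptions, integrated with a simple Euler–Maruyama scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (placeholders, not taken from the paper)
m, k, gamma, theta, L, D = 1.0, 1.0, 0.5, 0.5, 1.0, 1.0
dt, n_steps = 1e-3, 200_000

def average_power(R_of_t):
    """Euler-Maruyama simulation; returns time-averaged extracted power R*I^2."""
    x = v = I = 0.0
    power = 0.0
    for n in range(n_steps):
        R = R_of_t(n * dt)
        xi = rng.normal() * np.sqrt(2 * D * dt)   # mechanical noise increment
        x_new = x + v * dt
        v_new = v + (-k * x - gamma * v - theta * I) * dt / m + xi / m
        I_new = I + (theta * v - R * I) * dt / L  # circuit with load resistance R(t)
        x, v, I = x_new, v_new, I_new
        power += R * I * I
    return power / n_steps

P_const = average_power(lambda t: 1.0)                        # constant resistance
P_mod = average_power(lambda t: 1.0 + 0.2 * np.sin(5 * t))    # a time-varying protocol
print(P_const, P_mod)
```

The sinusoidal protocol here is arbitrary; the paper instead derives the optimal time dependence of the resistance via control theory, in a perturbative regime close to the stationary state.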

Keywords: energy harvesting; optimal control; stochastic processes.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 2
Characterization of the solutions of the PMP. Panels (a,b) show the intensity of the infinite discontinuities u_0 and u_f occurring at the beginning and at the end of the protocol in the two physically admissible solutions, A and B, of system (31). Different choices of the stationary control u_s, which fixes the boundary conditions (37), are considered. Panels (c,d) show the corresponding net power gain (or loss) with respect to the stationary strategy u = u*. The red squares refer to the value of the average power computed within the perturbative approach. Green circles are obtained by plugging the solution protocol u into the original (non-perturbative) dynamics and computing the average power of the process (see the caption of Figure 4 for details). Of course, in this case the final state σ(t_f) will not match the prescribed boundary condition exactly (see Figure 4). The distance between the two curves is an indicator of the quality of the perturbative approximation. Finally, the blue triangles represent, for reference, the power obtained with the stationary protocol u = u_s. Parameters: α = 0, β = 1, ζ = 2, and t_f = 0.25.
Figure 3
Bulk part u_b of the protocol, for the two solutions A (a) and B (b), as a function of time. Different boundary conditions are considered. Parameters as in Figure 2.
Figure 4
Dynamics of the system within solution B, for boundary conditions fixed by u_s = 1.02 u*. Panels (a–c) show the evolution of the elements of the vector σ (the covariances v^2, vI and I^2), as computed in the perturbative approach (red solid curves). By inserting the solution protocol u, shown in panel (d), back into the original dynamics (33), it is possible to compute the true behavior of the system under the prescribed protocol (the computation was carried out with an explicit fourth-order Runge–Kutta integration scheme): this evolution is represented by the blue dashed curves in panels (a–c). While the two sets of curves are not expected to overlap, the fact that they stay close is a consistency check on our perturbative approximation. Parameters as in Figure 2.
Figure 1
Scheme of the optimal control strategy leading to the differential system Equation (13).
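The consistency check described in the Figure 4 caption, re-inserting the solution protocol into the original dynamics and integrating with a fourth-order Runge–Kutta scheme, can be sketched generically. The right-hand side below is a hypothetical linear moment system dσ/dt = A(u(t)) σ + b standing in for the paper's Equation (33); the matrix A, vector b, and protocol u(t) are placeholders, not the actual equations.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def sigma_dot(t, sigma):
    # Placeholder dynamics: NOT the paper's Equation (33)
    u = 1.0 + 0.1 * np.sin(t)          # hypothetical control protocol u(t)
    A = np.array([[-u, 1.0, 0.0],
                  [0.5, -u, 0.5],
                  [0.0, 1.0, -u]])
    b = np.array([1.0, 0.0, 0.0])
    return A @ sigma + b

t, dt = 0.0, 1e-3
sigma = np.zeros(3)                     # stands in for (v^2, vI, I^2)
while t < 0.25:                         # t_f = 0.25, as in Figure 2
    sigma = rk4_step(sigma_dot, t, sigma, dt)
    t += dt
print(sigma)
```

Comparing σ(t) obtained this way against the perturbative prediction, as in panels (a–c) of Figure 4, quantifies how well the perturbative approximation tracks the true dynamics.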

