[extract] An optimal control problem entails identifying a feasible scheme, policy, program, strategy, or campaign that achieves the best possible outcome for a system. More formally, an optimal control problem consists of endogenously steering a control variable in a mathematical model so as to produce an optimal output, using some optimization technique. The problem comprises an objective (or cost) functional, which depends on the state and control variables, and a set of constraints. The goal is to optimize the objective functional subject to the constraints imposed by the model describing the evolution of the underlying system. The two most widely used solution techniques for an optimal control problem are Pontryagin's maximum principle and the Hamilton-Jacobi-Bellman equation.
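The structure described above can be sketched in a canonical continuous-time formulation (the notation here is illustrative, not taken from the source):

```latex
\[
\max_{u(\cdot)} \; J[u] \;=\; \int_{0}^{T} f\bigl(t, x(t), u(t)\bigr)\,dt
\qquad \text{subject to} \qquad
\dot{x}(t) = g\bigl(t, x(t), u(t)\bigr), \quad x(0) = x_0 ,
\]
where $x(t)$ is the state, $u(t)$ the control, $J[u]$ the objective
functional, and $\dot{x} = g$ the dynamic constraint imposed by the model.
Defining the Hamiltonian
\[
H(t, x, u, \lambda) \;=\; f(t, x, u) + \lambda\, g(t, x, u),
\]
Pontryagin's maximum principle requires the optimal control $u^{*}$ to
maximize $H$ pointwise in $t$, with the costate obeying
$\dot{\lambda} = -\partial H / \partial x$ and the transversality
condition $\lambda(T) = 0$ when $x(T)$ is free.
```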
La Torre, D., Kunze, H., Ruiz-Galan, M., Malik, T., & Marsiglio, S. (2015). Optimal control: Theory and application to science, engineering, and social sciences. Abstract and Applied Analysis, 2015, 90527-1–90527-2.