Optimal control has been widely applied to modern control systems design and has drawn great attention for decades. In optimal control theory, the control problem is formulated as an optimization problem, and the control law is calculated by solving the resulting optimal control problem (OCP). Compared with traditional control methods such as PID control, optimal control provides an optimal control law in a systematic way. However, an analytic expression of the optimal control law can be obtained only for relatively simple cases, for instance, unconstrained linear systems. Specifically, the optimal feedback control law for unconstrained linear systems with a quadratic cost function takes a simple linear form, and the optimal control gain is obtained by solving a Riccati equation. In practice, however, most physical plants are essentially nonlinear systems subject to physical constraints. For these cases, it is difficult to obtain an analytical solution to the optimal control problem. In order to find the optimum of such an intractable optimization problem, approximate solutions have to be taken into account.
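The linear-quadratic case mentioned above can be illustrated concretely. The sketch below, using a hypothetical discrete-time double-integrator system chosen only for illustration, iterates the discrete Riccati equation to convergence and recovers the linear feedback gain K, so that the optimal control is simply u = -Kx; it is a minimal example, not the method of any particular paper.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR: iterate the Riccati recursion
    P <- Q + A^T P (A - B K), with K = (R + B^T P B)^{-1} B^T P A,
    and return the converged gain K (u = -K x) and cost matrix P."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Hypothetical example: double integrator discretized with step dt
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # input weighting

K, P = lqr_gain(A, B, Q, R)
# The closed-loop matrix A - B K is stable: spectral radius below 1
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The key point is that the entire feedback law is captured by the constant matrix K obtained offline; no online optimization is needed, which is exactly the simplicity that is lost once constraints or nonlinearities enter.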