Actor-critic continuous state reinforcement learning for wind-turbine control robust optimization
Larrucea Uriarte, Xabier
Information Sciences 591 : 365-380 (2022)
The control system of a Variable-Speed Wind-Turbine (VSWT), which extracts electrical power from the wind's kinetic energy, is composed of subsystems that must be controlled jointly, namely the blade-pitch and generator-torque controllers. Previous state-of-the-art approaches decompose the joint control problem into independent subproblems, each with its own control subgoal, and design and tune a parameterized controller for each subproblem separately. Such approaches neglect interactions among the subsystems, which can have significant effects. This paper applies Actor-Critic Reinforcement Learning (ACRL) to the joint control problem as a whole, optimizing the control parameters of both subsystems simultaneously without neglecting their interactions, aiming for a globally optimal control of the whole system. The proposed control architecture uses an augmented input space so that the parameters can be fine-tuned for each working condition. Validation on simulation experiments with the state-of-the-art OpenFAST simulator shows a significant efficiency improvement over the best state-of-the-art controllers used as benchmarks: up to a 22% reduction in average power error after ACRL training.
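To illustrate the idea of actor-critic tuning of controller parameters conditioned on the working condition, the sketch below uses a toy surrogate plant (not the OpenFAST model from the paper): the actor is a linear Gaussian policy mapping a normalized wind speed (the augmented input) to the two controller gains, the critic is a linear baseline, and both are updated with an advantage-weighted policy gradient. All names, the reward shape, and the wind-dependent optima are illustrative assumptions, not the authors' method.

```python
import random

# Hypothetical surrogate plant: given a normalized wind speed wn in [0, 1]
# and the two controller parameters (pitch gain, torque gain), return the
# negative squared distance to wind-dependent "optimal" gains. The optima
# drift with wind speed, which is why conditioning the policy on the
# working condition (the augmented input) matters.
def plant_reward(wn, pitch_gain, torque_gain):
    best_pitch = 0.5 + wn          # assumed linear dependence on wind
    best_torque = 1.75 - 0.5 * wn
    return -((pitch_gain - best_pitch) ** 2 + (torque_gain - best_torque) ** 2)

actor_w = [[0.0, 0.0], [0.0, 0.0]]   # per parameter: weights on [bias, wn]
critic_w = [0.0, 0.0]                 # baseline weights on [bias, wn]
sigma = 0.3                           # exploration noise of the actor
alpha_actor, alpha_critic = 1e-3, 1e-2

random.seed(0)
for step in range(20000):
    wn = random.random()              # sample a working condition
    x = [1.0, wn]
    means = [sum(w * xi for w, xi in zip(row, x)) for row in actor_w]
    params = [random.gauss(m, sigma) for m in means]   # sampled gains
    r = plant_reward(wn, params[0], params[1])
    v = sum(w * xi for w, xi in zip(critic_w, x))      # critic baseline
    adv = r - v                                        # advantage estimate
    for i in range(2):                                 # critic update
        critic_w[i] += alpha_critic * adv * x[i]
    for k in range(2):                                 # actor update
        g = (params[k] - means[k]) / sigma ** 2        # Gaussian score
        for i in range(2):
            actor_w[k][i] += alpha_actor * adv * g * x[i]

# At wn = 0.5 the learned gains should approach the surrogate optima
# (roughly 1.0 for pitch and 1.5 for torque under these assumptions).
x = [1.0, 0.5]
learned = [sum(w * xi for w, xi in zip(row, x)) for row in actor_w]
print(learned)
```

In the paper's setting the surrogate reward would be replaced by the power-tracking error returned by the OpenFAST simulation, and the augmented input would carry the full working-condition description rather than a single scalar.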