Concerns of Wind Energy Assessment

When wind project developers and investors turn to the wind energy assessment industry to predict the energy output of proposed projects, they need to be confident that the numbers reported to them are as accurate as possible.

The key words in this statement are “confident” and “accurate,” and ironically, these two goals can be directly at odds with one another. Maximum confidence is achieved through long-accumulated familiarity with an established and unchanging methodology. Maximum accuracy, on the other hand, is achieved by using the best science and latest technology available in the fields of climate science, meteorology, fluid dynamics, and statistical analysis. These apparently conflicting goals can be expressed as a trade-off between “stability” of the method and “innovation” in the method. However, as we’ll discuss, there is a way to achieve stability and innovation simultaneously through intelligent use of validation.

Stability vs. Innovation
Regarding stability, project stakeholders, investors in particular, need to have a high level of comfort with due diligence wind energy assessment reports. They want end-to-end familiarity with the assessment method, not just in terms of what’s under the hood, but also in terms of how it performs over a large number of cases and under a wide array of conditions. This allows them to develop an experiential calibration of the method that gives them confidence in a project’s future energy production. All of this tends to favor a wind energy assessment method that remains stable and well-tested, with minimal innovation.

And yet, we know instinctively that we must innovate if we want to improve technology. The value of innovation within wind energy assessment is best understood in terms of the errors that these methods exhibit when compared against actual energy production. If the average error of a large number of assessments in diverse conditions is not zero, the methodology has a bias. Ideally, a bias can be investigated, its cause identified, and the methodology corrected, to achieve calibration with respect to bias. Even if its root cause cannot be identified, the bias can be accounted for with offsets or correction factors. The remaining part of the error, however, the random error from project to project, is unpredictable and constitutes the uncertainty of the methodology.
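This decomposition of assessment error into a bias and a random (uncertainty) component can be illustrated with a short sketch in Python; the per-project error values here are hypothetical, chosen only to show the arithmetic:

```python
# Decompose assessment errors into bias (mean error) and
# uncertainty (standard deviation of the random component).
# Error values are hypothetical, for illustration only.

# Error of each assessed project: (actual - predicted) energy,
# expressed as a percent of the predicted (P50) value.
errors = [-4.2, 7.1, -1.5, 3.8, -6.0, 2.9, 0.4, -2.5]

n = len(errors)
bias = sum(errors) / n  # systematic part: correctable via calibration
residuals = [e - bias for e in errors]
uncertainty = (sum(r * r for r in residuals) / (n - 1)) ** 0.5  # random part

print(f"bias = {bias:+.2f}%")
print(f"uncertainty (std dev) = {uncertainty:.2f}%")
```

The bias can in principle be removed with a correction factor; the standard deviation of the residuals is what only step-by-step methodological innovation can shrink.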

This more challenging error arises due to the combination of many small deficiencies occurring throughout the methodology, as well as unpredicted (or unpredictable) behaviors in the real-world operation of wind projects. As such, the only way to reduce the uncertainty in the methodology is through innovation: using newer, smarter, more efficient, or more science-based methods that reduce these many small deficiencies in the overall process step-by-step.

Innovation can take place at any step throughout the wind energy assessment process, from the wind measurement technology or methods, to wind measurement data quality control, wind flow modeling, long-term climate assessment, turbine performance modeling, wake modeling and other loss calculations, and finally to the uncertainty modeling itself. Some examples might include:

• The use of remote sensing in measurement campaigns to capture the wind profile across the turbine rotor plane
• Use of multiple long-term reference datasets in an ensemble approach, both measured and synthetic, including multiple global reanalysis datasets, downscaled with numerical weather prediction
• The use of sophisticated modeling approaches (numerical weather prediction, computational fluid dynamics, large eddy simulations), especially in combination with each other, for high-resolution spatial mapping of wind resource
• More advanced statistical and machine-learning methods for combining the information from observations, spatial modeling, and long-term reference
• Better wake models that more accurately represent the details of wake structure, the interaction of wakes with each other and the surrounding atmosphere, and the atmospheric parameters that control wake behavior and intensity.

It is tempting to assume that all innovations are beneficial, especially when they represent the latest developments in the world of technology and scientific research. However, any innovation introduced into a wind energy assessment methodology, while intended to have a positive impact, can affect the performance of the methodology in unanticipated ways.

For example, the wind resource assessment process comprises many components, and a new method may reduce the bias of one component, a bias that was previously hidden because it canceled against an opposite bias in a different component. With the innovation introduced, the net bias is now thrown out of calibration. Synergies between components may also have an amplifying effect, such that an innovation actually increases rather than decreases the random errors. In summary, both the bias and the random errors can change in unintended ways, and these changes can lead to sudden shifts in the assessed value of the same wind project, depending on whether it was evaluated before or after the innovation was implemented.
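A toy numerical example makes the cancellation effect concrete; all component biases below are hypothetical:

```python
# Illustration (hypothetical numbers): per-component biases in an
# assessment chain, in percent of energy. For small biases, the net
# bias is approximately the sum of the component biases.

component_bias = {
    "wind flow model": +2.0,   # overpredicts the resource
    "wake model":      -2.0,   # overpredicts wake losses (cancels the above)
    "other losses":     0.0,
}

net_before = sum(component_bias.values())  # 0.0: appears calibrated

# An innovation corrects the wind flow model's bias in isolation...
component_bias["wind flow model"] = 0.0
net_after = sum(component_bias.values())   # -2.0: hidden bias now exposed

print(net_before, net_after)
```

The chain looked calibrated only because two errors offset each other; fixing one component alone moves the net bias away from zero, which is exactly the kind of shift a validation test suite needs to catch.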

The Role of Validation
A decade ago, the wind industry was in a very different place than it is today, in terms of what it expected from wind energy assessment consultants and what it was getting. Energy assessment was focused primarily on a deterministic view (the P50 energy estimate) rather than a probabilistic view (uncertainty, and the higher probability-of-exceedance energy estimates used today). Evidence was beginning to mount that industry-wide project performance was falling short of pre-construction energy assessments by 10% or more. This prompted a series of important validation studies [1-3] that confirmed the industry-wide shortfall and spurred key investigations into methodologies, as well as methodological corrections. These corrections have incrementally led to the current state, with more recent validation studies [4-8] indicating near calibration with respect to industry-wide bias error.

Vaisala’s recent validation study of its due diligence wind energy assessment methodology, based on a total of 127 years of energy production from 30 wind farms, presents its results in histogram form. Each count in the histogram is the actual energy produced by a wind farm in a particular year of that wind farm’s operation, minus the long-term average annual energy production (the “P50 estimate”) predicted for that wind farm by Vaisala’s pre-construction assessment study. Values are expressed as a percent of the P50 estimate.

From this study, it can be seen that the mean value of the distribution (+0.1%) is near zero, meaning that the system is close to calibration with regard to mean bias error. The width of the distribution represents the uncertainty of the methodology (the random errors described above), and these have a standard deviation of 8.8%, meaning that around two thirds of the wind farm year errors fall within the range of ±9%. This is slightly lower than the average pre-construction uncertainty estimate across all of the wind farms (11%), meaning that Vaisala’s uncertainty model is likely somewhat conservative.
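Two of the quoted figures can be checked with a few lines of Python, assuming the errors are approximately normally distributed: roughly two thirds of values fall within one standard deviation of the mean, and the ratio of realized to predicted spread indicates how conservative the uncertainty model is:

```python
import math

# Figures quoted from the study: mean error +0.1%, standard deviation
# of errors 8.8%, average pre-construction uncertainty estimate 11%.
realized_std = 8.8
predicted_uncertainty = 11.0

# Fraction of a normal distribution within one standard deviation
# of its mean: Phi(1) - Phi(-1) = erf(1 / sqrt(2)) ~= 68.3%.
frac_within_1sigma = math.erf(1 / math.sqrt(2))

# Realized / predicted spread: values below 1 suggest the
# pre-construction uncertainty model is conservative.
ratio = realized_std / predicted_uncertainty

print(f"expected within one sigma: {frac_within_1sigma:.1%}")
print(f"realized / predicted spread: {ratio:.2f}")
```

With a ratio of 0.80, the realized errors are noticeably tighter than the pre-construction uncertainty model predicted, which supports the "somewhat conservative" reading above.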

So how can validation help us today as a bridge between the need for both innovation and stability? It can do so by providing a continuous view into how we are doing, so that we can innovate, but at the same time, monitor the effects and benefits of that innovation, and guard against sudden large shifts in the value of studied wind projects. However, the key is to have a structure in place to validate continuously, not just with occasionally published studies, and to do so in a way that is closely coordinated with innovation.

Each time a candidate innovation (or set of innovations) is introduced, a standard test suite is set up, designed to completely recreate the energy estimates used in the original validation study. The test suite is executed, and the results are evaluated against the same energy production data used in the original study. Acceptance criteria are defined that essentially measure how the change in the methodology alters the shape of the validation histogram, as well as other relationships such as error versus uncertainty.
For example, the criteria should raise a red flag if the center of the distribution moves more than a threshold amount to the left or right, or if the width of the distribution expands unexpectedly. If these types of sudden changes in value are indicated by the test, the result loops back to the methodology change: either investigate and address the negative issue that arose, or do not implement it. On the other hand, if the innovations result in decreased uncertainty with no change in mean bias error, the result is a genuine improvement and is implemented. By following this approach and regularly engaging stakeholders in the progress and results of this process, confidence in the methodology can be fostered and maintained while, at the same time, incrementally realizing the benefits of innovation.
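The red-flag logic described above might be sketched as follows; the threshold values, function name, and error sets are hypothetical, not Vaisala’s actual acceptance criteria:

```python
# Sketch of acceptance criteria for a candidate methodology change.
# Thresholds are hypothetical placeholders, in percent of P50.

def evaluate_change(baseline_errors, candidate_errors,
                    max_bias_shift=1.0, max_spread_growth=0.5):
    """Compare two validation error sets; return (accept, reason)."""
    def stats(errs):
        n = len(errs)
        mean = sum(errs) / n
        std = (sum((e - mean) ** 2 for e in errs) / (n - 1)) ** 0.5
        return mean, std

    base_mean, base_std = stats(baseline_errors)
    cand_mean, cand_std = stats(candidate_errors)

    if abs(cand_mean - base_mean) > max_bias_shift:
        return False, "red flag: distribution center shifted"
    if cand_std - base_std > max_spread_growth:
        return False, "red flag: distribution widened"
    return True, "accepted"

# Hypothetical usage: the candidate run keeps the center fixed and
# narrows the spread, so it passes.
accept, reason = evaluate_change(
    baseline_errors=[-5.0, 3.0, 1.0, -2.0, 4.0, -1.0],
    candidate_errors=[-4.0, 2.5, 0.5, -1.5, 3.0, -0.5],
)
print(accept, reason)
```

A rejected change loops back for investigation rather than being silently deployed, which is the loop the paragraph above describes.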

It is also critically important to analyze and improve the uncertainty model along with the actual energy assessment methodology. If we are making innovations in our methodologies that reduce uncertainty, but our uncertainty model does not reflect the improvements, those improvements deliver no practical benefit. For critical development and investment decisions, stakeholders rely on the pre-construction estimates of uncertainty from these models, not on eventual production errors, which are not yet known. If those errors turn out to be small, that’s great, but major project investment decisions will already have been made. Therefore, a key part of this continual validation process is to examine the methods and assumptions of our uncertainty models to ensure that they are in line, and keeping pace, with actual errors in our energy estimation methods. Only then can the full benefits of innovation be realized.
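One standard diagnostic for such a check, not described in the text but commonly used in calibration assessment, is to standardize each project’s realized error by its own predicted uncertainty; a well-calibrated uncertainty model yields standardized errors with a standard deviation near 1 (all numbers below are hypothetical):

```python
# Calibration check for an uncertainty model (hypothetical data):
# if predicted uncertainties sigma_i are well calibrated, the
# standardized errors e_i / sigma_i should have std dev near 1.

errors = [5.0, -12.0, 8.5, -3.0, 14.0, -7.5]   # realized, percent of P50
sigmas = [10.0, 11.0, 9.5, 12.0, 10.5, 11.5]   # predicted, percent of P50

z = [e / s for e, s in zip(errors, sigmas)]
n = len(z)
mean_z = sum(z) / n
std_z = (sum((v - mean_z) ** 2 for v in z) / (n - 1)) ** 0.5

print(f"standardized std dev = {std_z:.2f}")
# well below 1: uncertainty model conservative; well above 1: overconfident
```

Tracking this statistic as part of continuous validation shows whether the uncertainty model is keeping pace with improvements in the energy estimation methods.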

Reducing uncertainty and improving confidence in wind energy estimates is a crucial step for the industry. This process requires a careful balance between stability and innovation to build the trust of the financial community, ultimately resulting in better investment decisions and more favorable financing terms for wind projects.

[1] Jones, S., 2008: Project Underperformance: DNV Global Energy Concepts 2008 Update. Presented at AWEA WindPOWER 2008, Houston, TX, June 4, 2008.
[2] Johnson, C., et al., 2008: Validation of Energy Predictions by Comparison to Actual Performance, Garrad Hassan America. Presented at AWEA WindPOWER 2008, Houston, TX, June 4, 2008.
[3] White, E., 2009: Closing the gap on underperformance: A review and calibration of AWS TruePower’s Energy Estimation Methods. White paper available from AWS TruePower.
[4] AWS TruePower, 2012: Closing the gap on underperformance: A review and calibration of AWS TruePower’s Energy Estimation Methods (Update). White paper available from AWS TruePower.
[5] DNV/GL, 2014: Wind project performance white paper, actual versus predicted: 2014 update. White paper available from DNV/GL.
[6] Istchenko, R., 2014: WSP Wind resource assessment uncertainty validation. Presentation given at AWEA Wind Resource and Project Energy Assessment Seminar, Orlando, FL, December 3, 2014.
[7] Stoelinga, M., and M. Hendrickson, 2015: A validation study of Vaisala’s wind energy assessment methods. White paper available from Vaisala.
[8] Natural Power, 2016: Natural Power demonstrates accuracy in North American wind energy yield predictions. Press release available from Natural Power.

Author Bios:
Dr. Mark Stoelinga develops and tests new techniques for improving resource assessment and forecasting of wind and solar energy. Prior to entering the renewable energy industry in 2009, he worked in research at the University of Washington, where he received his Ph.D. in Atmospheric Sciences.

Matt Hendrickson leads Vaisala’s wind and solar assessment business and has personally managed energy assessments for more than 4,500 MW of operating wind farms, and more than 30,000 MW of pre-construction projects. Prior to Vaisala, Hendrickson led EDPR’s North American Assessment group as Director of Energy Assessment from 2003 to 2011. He has a B.S. in Electrical Engineering from the University of Houston.