There has been an ongoing debate about the use cases and technical differences between two kinds of energy models: asset models, designed to produce a rating or label, generally for code compliance or to let consumers compare the energy use of two buildings, and operational models, used to predict energy usage and savings for an individual building, either to inform an investment decision or to qualify a project for participation in an incentive program.

Recently, during an interesting conversation on the complexities of estimating R-value for assemblies in the BPI RESNET LinkedIn group, David Butler made a point that I found exceedingly interesting and important: how one builds a conservative energy model for an asset rating is exactly the opposite of how one would do it in an operational setting.

When building a model for a rating or an asset score, a conservative assumption is a low value. If you pick a lower R-value, you get a lower and therefore more conservative score. In an operational model, where the purpose is typically to estimate the delta between the base case (where the house started) and an improved scenario (post retrofit), a conservative assumption is a high value. If you assume a higher R-value in the base case, you get a more conservative estimate of savings. Conversely, if you use a low value (as you would for a conservative asset score), you will actually produce a much more aggressive prediction of savings.

Asset rating software is designed around low default values, which tends to drive lower scores for the homes that can use the most improvement, and raters are typically trained to default to low values when they are uncertain. That makes reasonable sense in the context of a label or code enforcement, where lower scores are more likely to encourage people to take action and improve. However, in an operational system, where the goal is to give consumers accurate predictions of savings and to project the savings that will drive ratepayer incentives, this tendency leads to an underestimation of building performance, which in turn results in often drastic overestimations of energy use, and therefore of potential savings.

This issue confirms the notion that asset and operational scoring tools should not be one and the same. It also explains some of the realization rate problems we see when attempting to use rating software to predict retrofit savings.
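To make the asymmetry concrete, here is a minimal sketch (not from the original post) using a simple UA times heating-degree-day conduction model. All of the numbers, the wall area, degree days, and R-values, are purely illustrative assumptions.

```python
# Illustrative sketch: why a low R-value assumption is "conservative" for an
# asset score but aggressive for a savings estimate.
# Simple UA * HDD conduction model; all inputs are made-up example values.

WALL_AREA_SQFT = 1200        # hypothetical wall area
HDD65 = 5000                 # hypothetical annual heating degree days (base 65 F)
BTU_PER_THERM = 100_000

def annual_wall_loss_therms(r_value):
    """Annual conductive loss through the walls, in therms."""
    ua = WALL_AREA_SQFT / r_value               # Btu/hr-F
    return ua * HDD65 * 24 / BTU_PER_THERM      # therms per year

R_RETROFIT = 19  # assumed post-retrofit assembly R-value in both scenarios

for label, r_existing in [("low, asset-conservative", 4),
                          ("high, operational-conservative", 8)]:
    base = annual_wall_loss_therms(r_existing)
    improved = annual_wall_loss_therms(R_RETROFIT)
    print(f"Existing R-{r_existing} ({label}): "
          f"base {base:.0f} therms, predicted savings {base - improved:.0f} therms")
```

With these made-up numbers, assuming an existing R-4 wall predicts roughly 284 therms of savings, while assuming R-8 predicts roughly 104 therms, nearly a threefold difference from the same retrofit. The lower assumption produces the lower (more conservative) asset score, but by far the more aggressive savings claim.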
Tom Strumolo
10/11/2013 11:23:58 am
Yes, that about says it, Matt. I would only add that we have enough data now to know what savings will be for most measures, in most conditions, in most buildings. Modeling for most residences and other small buildings consuming a few hundred million Btu annually is usually inaccurate, too expensive, and unnecessary. We just have to get all that data together into an accessible database and steamroll the energy audit barrier. We still have as many as 75 million houses to retrofit in this country, and I am getting old...