Energy efficiency may be the fifth fuel but, unlike carbon fuels, nuclear, wind, and solar, it cannot be measured or metered. By definition, energy efficiency is the absence of energy use, which can only be calculated by comparing what might have been with what is. It is the delta between these values that constitutes energy efficiency, which makes it a calculated value.
This complexity leaves energy efficiency open to interpretation. One can always debate what might have been if energy efficiency actions had not been taken. Many complicated and interactive factors can play into the resulting energy efficiency, including weather, building usage, building changes, changes to resource costs, and occupant behavior.
One can compare past bills against future results, but the savings are often obscured by the significant noise common in building energy bills, and coming up with a bankable number that everyone can agree on is often not straightforward.
Fortunately, there are many examples of markets investing in calculated metrics. In fact, the global derivatives market, estimated at $1.2 quadrillion, dwarfs world GDP. While energy efficiency is complicated, it is possible for markets to agree on a standard set of "weights and measures" - a foundational step that is already underway. One project in particular to watch is the EDF Investor Confidence Project, a methodology that could be applied equally to residential and commercial energy efficiency (full disclosure, I run this project for EDF).
Energy efficiency is expressed in every transaction as a prediction or estimate of savings to come. This estimate is part of a calculation made by investors (including building owners, financing firms, utilities, energy services companies, and insurance providers), and its validity and transparency are critical to creating investor confidence and ensuring real results are delivered.
Given the complexity, I wanted to go over some of the key terms necessary to have a real conversation about energy efficiency. There are, in fact, a number of ways to express energy efficiency, and there is not a single right answer.

Realization Rate:
Realization rate is a comparison between predicted and actual energy savings, generally corrected for weather, and sometimes for societal norms using a control group. A 100% realization rate means that on average savings were delivered as expected. A 60% realization rate means that, on average, for every 100 kWh of predicted savings, only 60 kWh were delivered.
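As a back-of-the-envelope illustration (the project numbers below are invented, not from any real program), realization rate is simply total weather-normalized actual savings divided by total predicted savings:

```python
# Hypothetical portfolio of four projects (all values are illustrative).
predicted_kwh = [1200, 800, 1500, 1000]  # predicted savings per project
actual_kwh = [900, 500, 1100, 480]       # weather-normalized actual savings

# Realization rate: total delivered savings over total predicted savings.
realization_rate = sum(actual_kwh) / sum(predicted_kwh)
print(f"Portfolio realization rate: {realization_rate:.0%}")
```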
Average is highlighted above because there can be considerable differences between energy efficiency portfolios that have the exact same realization rate. We will discuss variance more in the next section, but it is possible for two portfolios with the same realization rate to have substantially different numbers of winners and losers.
It is also important to recognize that realization rate always comes with a confidence interval. Sufficient pre- and post-retrofit data is required to calculate savings vs. the baseline. A single project, or a small number of projects, may not be indicative of overall results, and enough heating and cooling degree days are needed to calculate realization rate for different end uses and fuel types.

Variance:
Variance expresses not our average performance, but the spread of outcomes - often expressed as standard deviation. Variance is critical in combination with realization rate to understand the actual performance of a portfolio. It is possible to get a 100% realization rate while having a lot of winners and losers: if 25% of projects came in over by, say, 50%, and another 25% under by 50%, it is still possible to have a 100% realization rate.
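To see why variance matters alongside realization rate, here is a sketch with two invented portfolios (hypothetical numbers) that share a 100% realization rate but have very different spreads:

```python
import statistics

predicted = [100, 100, 100, 100]  # predicted kWh savings per project

# Portfolio A: every project delivers exactly as predicted.
portfolio_a = [100, 100, 100, 100]
# Portfolio B: one project over by 50%, one under by 50%, two on target.
portfolio_b = [150, 50, 100, 100]

for name, actual in [("A", portfolio_a), ("B", portfolio_b)]:
    rate = sum(actual) / sum(predicted)  # both portfolios: 100%
    # Standard deviation of per-project actual/predicted ratios.
    spread = statistics.stdev(a / p for a, p in zip(actual, predicted))
    print(f"Portfolio {name}: realization {rate:.0%}, "
          f"std dev of project ratios {spread:.2f}")
```

Both portfolios look identical on average; only the spread statistic reveals that Portfolio B has winners and losers.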
In the context of program design and markets, both realization rate and variance are important, though potentially for different stakeholders. Utilities and public programs, for example, may be most interested in average performance, as that is what avoids building power plants; however, they also have an interest in protecting ratepayers, so variance becomes important. Individual consumers may care about realization rate in a general sense, but when it comes to what they should expect from their personal investment, the level of variance may be just as important as realization rate in their decision making.
As anyone who has worked with real live clients knows, even if one is perfectly correct on average, if there is a lot of variance (winners and losers) your phone will still ring on the weekends. If a client saved $25 a month when they expected $50, they are no happier to learn their neighbor saved $75.

What Kind of Savings Are We Talking About?
It turns out that in addition to how we measure savings, there are a number of ways we talk about the savings themselves:

Gross Savings:
Gross Savings is how utilities and regulators think about savings: the volume of savings in terms of units of energy saved. While this sounds simple, to consumers, who generally barely understand the difference between a therm of gas and a kWh of electricity, gross savings may be confusing.

Percentage Savings:
Many programs (including federal legislation) look at savings in terms of percentage reduction for a specific building. This approach can be applied using site, source, or cost (more to come on that).
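As a toy illustration (both homes and project costs are hypothetical), the same percentage reduction can represent very different amounts of gross energy and very different cost-effectiveness:

```python
# Two hypothetical homes, each achieving the same 20% reduction.
homes = [
    {"name": "small home", "annual_kwh": 5_000, "project_cost": 4_000},
    {"name": "large home", "annual_kwh": 25_000, "project_cost": 12_000},
]
PCT_REDUCTION = 0.20  # both projects earn the same percentage-based reward

for h in homes:
    gross_kwh = h["annual_kwh"] * PCT_REDUCTION
    print(f'{h["name"]}: {PCT_REDUCTION:.0%} reduction = {gross_kwh:,.0f} kWh '
          f'gross, ${h["project_cost"] / gross_kwh:.2f} per kWh saved')
```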
While this approach sounds really simple, it can have some interesting unintended consequences, such as rewarding smaller projects on homes with lower bills at the same level as projects for larger consumers that may cost much more and save much more gross energy. This can result in a selection bias that favors projects that save less energy.

Site Savings:
Site Savings looks at savings in terms of reduced BTUs and kWh at a specific building. While on the surface this is very simple, in reality, because different fuel types have different costs, a percentage reduction in site energy may not have much to do with the percentage reduction in bills. This measure can also encourage fuel switching when it may not be in society's, or even the customer's, best interest to do so.

Source Savings:
Source Savings looks at energy savings based on reductions in generation, not end-use consumption. In many cases and locations, there may be as many as three kWh generated for every one kWh ultimately consumed in a building (the other two-thirds being lost to grid inefficiencies). While source savings makes a lot of sense to utilities and policy makers, it is often confusing to consumers, and hard to apply accurately due to varying fuel mixes around the country, and even within a single region or time of day.
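A rough sketch of the site vs. source difference (the 3:1 electricity factor comes from the paragraph above; the gas factor and project numbers are illustrative assumptions):

```python
# Hypothetical project: saves 1,000 kWh of electricity and 50 therms of gas.
KWH_TO_BTU = 3412          # site energy content of a kWh
THERM_TO_BTU = 100_000
ELEC_SOURCE_FACTOR = 3.0   # ~3 kWh generated per kWh delivered
GAS_SOURCE_FACTOR = 1.05   # assumed small delivery losses for gas

elec_kwh_saved = 1000
gas_therms_saved = 50

# Site savings: energy no longer consumed at the meter.
site_btu = elec_kwh_saved * KWH_TO_BTU + gas_therms_saved * THERM_TO_BTU
# Source savings: energy no longer generated, including grid losses.
source_btu = (elec_kwh_saved * KWH_TO_BTU * ELEC_SOURCE_FACTOR
              + gas_therms_saved * THERM_TO_BTU * GAS_SOURCE_FACTOR)

print(f"Site savings:   {site_btu:,.0f} BTU")
print(f"Source savings: {source_btu:,.0f} BTU")
```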
Cost Savings:

Money saved is what customers care about most and understand best. It also tends to split the difference between site and source (as cost is also a reflection of energy production). This approach is not widely used, but is built into new federal legislation.

Making Energy Efficiency a Resource:
This energy efficiency stuff is a lot harder than it seems on the surface. While I fully support the basic notion of more transparency, registries, and public accounting, energy savings do not just pop out of the numbers. We need to define our terms and agree on the key metrics we are applying, and recognize that with every choice there are a host of often unintended consequences we must address.
Once we have agreed on a common metric and standard approach to calculating and expressing energy efficiency, and the accuracy of predictions, then we can start using this data to drive behavior (consumers, contractors, and regulators) and most importantly create alignment that encourages private markets to emerge.
There is a raging and healthy (most of the time) debate about the future of home performance and energy efficiency more generally.
However, I have found that some of the key terms in energy efficiency and home performance are misused or have many different definitions, resulting in confusion and impeding our ability to move forward.
Here are some key terms and how I have come to understand them. Please feel free to debate and discuss - that is the point of this blog!
"Home performance" has come to mean a specific method of conducting whole-house energy efficiency retrofits on existing buildings - generally including BPI / RESNET certifications, energy audits and modeling, combustion safety, etc.
However, I believe home performance is not about specific tactics, but is instead an umbrella term that refers to a systems based approach, based on building science, that measures success in terms of results not by individual measures. Home performance is not how you do it, but instead is defined by outcomes including energy, health, and comfort.
We often confuse performance testing of a building and the overall performance of a project. Performance testing can provide more accurate and site specific inputs into a prediction of results or as a design tool, however real performance of the overall system can only be calculated at the meter (or in comfort or air quality), not in a blower door test or whole house assessment.
The current understanding of "prescriptive programs" tends to focus on programs that specify improvements and values, generally using population averages (deemed savings, product rebates, etc.) rather than whole-house, site-based analysis. However, I believe the real definition should be broader: prescriptive approaches to energy efficiency, and to the prediction of savings, include all methods and models that are not calibrated to results at the meter.
Employing a whole-house methodology, or a simulation model to estimate overall savings, does not make a performance project. In fact, simulation models and "home performance" as we know it are entirely prescriptive until such time as there is accountability and transparency around results.
"Performance Based" approaches to energy efficiency are based on past performance, calculated from measured performance data (meter data). However, incentives are still driven by upfront predictions, and performance risk (whether savings materialize) is still shouldered by ratepayers and homeowners and paid in advance of actual results.
Past performance may be used to calibrate incentive levels and provide feedback to contractors, vendors, and ultimately consumers to drive improvement in the system; however, fundamentally we are still betting with other people's money, which triggers regulation and oversight - contractors and industry are not sharing the actual performance risk.
"Performance Contracting" is a system with direct accountability to actual savings calculated at the meter. Rather than ratepayers fronting incentives, and consumers taking the performance risk that their investment in EE pays off, private parties front incentive values to customers and put their money where their mouth is - taking full performance risk.
Utilities will procure energy efficiency through demand side capacity contracts, and buy real and documented savings from aggregators or contractors directly. This will massively reduce program costs, as managing the risk becomes a private sector activity, and will result in dramatically increased value of the negawatt.
By paying for delivered savings, the program's and utilities' roles change from defining business models and micromanaging programs in an attempt to regulate good results into simple consumer protection and ensuring that the savings being procured are real and calculated correctly. Delivery models, software, measures, training, and quality assurance - all traditional program roles - will become functions of the private sector instead.
Moving Toward Performance:
Many of the internal home performance industry debates center on us all using the same terms in different ways. In particular, there is a loud contingent focused on getting quickly to "performance." However, they tend to want to avoid regulation in the process, when what they are proposing is in fact a "performance based" model, where there is scorekeeping on predictions but risk still flows to ratepayers and homeowners.
Personally, I believe in moving to Performance Contracting and an Energy Efficiency as a Resource model for home performance (lowercase home performance - performance not a specific approach) and that there is a strategy and series of steps that will allow us to move from our current prescriptive approaches to home performance, to a reasonable middle ground where incentives are calibrated to past performance and results are reported transparently on an ongoing basis. I believe that the dataset that emerges from this Performance Based approach will be critical in defining the actual energy efficiency yields for whatever models work for industry and customers, and will provide the critical actuarial data that will facilitate a move to Performance Contracting business models. We must convert Energy Efficiency from uncertainty to manageable risk before private markets can engage.
However, tracking performance is considerably more complicated than people tend to give it credit for, and it will take some time to make this transition. Energy efficiency is not something you can meter or measure, and it is easy to confuse with a range of external changes like weather, building use, and even resource price changes. Energy efficiency is in fact a calculated, derivative value that requires relatively significant amounts of data to be statistically valid, and is not nearly as easy to "see" through the noise as many make it out to be.
Given all of this, I think there is actually a lot more alignment than one might believe reading some of the industry blogs and forums. I would like to challenge everyone to move from high level theory by translating ideas into actionable steps and processes. Nothing happens overnight, and we have to make sure we have a solid foundation in place to support the transition from prescriptive programs to performance based markets.
There has been an ongoing debate about the use cases and technical differences between two kinds of energy models: asset models, designed to produce a rating or label, generally for use in code compliance or as a way for consumers to compare the energy use of two buildings; and operational models, used to predict energy usage and savings for an individual building, either to inform an investment decision or to qualify a project for participation in an incentive program.
Recently, during an interesting conversation on the complexities of estimating r-value for assemblies in the BPI RESNET LinkedIn group, David Butler made a point that I found exceedingly interesting and important: how one builds a conservative energy model for an asset rating is exactly the opposite of how one would do it in an operational setting.
When building a model for a rating or an asset score, a conservative assumption is a low value: if you pick a lower r-value, you get a lower, and therefore more conservative, score.
In an operational model, where the purpose is typically to estimate the delta between the base case (where the house started) and an improved scenario (post-retrofit), a conservative assumption is a high value. If you estimate a higher r-value, you get a more conservative estimate of savings. Conversely, if you use a low value (as you would for a conservative asset score), you will actually come up with a much more aggressive prediction of savings.
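A toy steady-state model makes the asymmetry concrete (all values here are illustrative; real rating tools are far more sophisticated):

```python
# Toy steady-state heat-loss model: annual load is proportional to
# area * degree-days / r-value. All numbers are illustrative only.
AREA_SQFT = 1000
HDD = 5000  # heating degree days

def annual_load_btu(r_value):
    # UA (BTU/hr-F) * degree-days * 24 hours/day
    return (AREA_SQFT / r_value) * HDD * 24

low_r, high_r = 7, 15  # low vs. high guesses for an uncertain wall
improved_r = 19        # post-retrofit assembly

# Asset rating: a LOW r-value guess predicts MORE energy use, which
# yields a lower - and therefore "conservative" - score.
assert annual_load_btu(low_r) > annual_load_btu(high_r)

# Operational model: predicted savings = baseline load - improved load.
# The same LOW guess inflates the baseline and the promised savings,
# making it the AGGRESSIVE choice; the HIGH guess is conservative.
savings_from_low_guess = annual_load_btu(low_r) - annual_load_btu(improved_r)
savings_from_high_guess = annual_load_btu(high_r) - annual_load_btu(improved_r)
assert savings_from_low_guess > savings_from_high_guess

print(f"Savings predicted with low r-value guess:  {savings_from_low_guess:,.0f} BTU")
print(f"Savings predicted with high r-value guess: {savings_from_high_guess:,.0f} BTU")
```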
Software for asset ratings is designed with low default values, which tends to drive lower scores for the homes that can use the most improvement, and raters are typically trained to default to low values when uncertain. This makes reasonable sense in the context of a label or code enforcement, where a lower score is more likely to encourage people to improve.
However, in an operational system where the goal is to give accurate predictions of savings to consumers, as well as to project savings that will drive ratepayer incentives, this tendency in an asset model leads to an underestimation of building performance, which in turn results in often drastic overestimations of energy use, and therefore of potential savings.
This issue confirms the notion that asset and operational scoring tools should not be one and the same. It also explains some of the realization rate problems we see when attempting to use rating software to predict retrofit savings.
R-Value Quiz Questions
The accuracy of energy savings estimates is an important aspect of energy efficiency public policy. Billions of dollars of ratepayer and taxpayer funds are spent on the basis of energy savings that bring benefits to homeowners and our energy infrastructure as a whole. Nearly all of these investments are made in advance, in the form of a rebate or other incentive, based on a prediction of savings to come.
There are many factors that impact the accuracy of energy savings estimates; this post is focused primarily on software in general, and input accuracy in particular. A software tool’s "accuracy" breaks down to a combination of the validity of the software’s approach and algorithms, combined with the quality of data inputs. For a software product to deliver consistent and accurate results, there needs be both a valid predictive model and reliable data.
If the values assigned to a building’s components are incorrect, then the predictions of savings for any given measure or combination of measure for that house are likely to be off. This is true even if the model has been calibrated using actual energy usage data.
For example, if a home energy auditor under-estimates the r-value (a measure of thermal resistance) of an existing insulated attic, then an improvement made to that attic will show a disproportionate level of savings. This input error leads to incorrect expectations being set with customers, as well as fewer public benefits than promised via utility efficiency portfolios that are approved by regulators and funded by ratepayers. For a program like Energy Upgrade California, where rebates are based on percentage reduction (similar to the Homes Act and 25E in the US Congress), this issue is particularly pronounced and may result in incorrect rebate payments being made to homeowners.
Recent analysis of the Energy Upgrade California program has returned some surprising results related to contractor performance. While there was a high level of overall variance and an over-estimation of savings on average for the program, when the data was broken down by contractor, realization rates (billing analysis vs. predicted savings) were tightly clustered, with almost all contractor values overlapping once confidence intervals were taken into account. There was little apparent difference between contractors in the program.
The analysis tells us that the model being used in CA has a built-in propensity to over-predict baseline, and therefore savings; however, it also tells us that across all contractors, the way the model is used is consistent. Even though much has been made of how hard the current CA software tool is to use, it appears from analyzing over 1,000 homes, including modeling data and actual bills, that two contractors using the same tool on the same set of houses are likely to get similar results, because assumptions such as the r-values of assemblies are consistent.
Like most software tools, the software used in CA (EnergyPro 5) uses look-up tables that translate the assembly attributes an auditor sees in a home (e.g., 2x4 wall, no insulation, stucco exterior) into an r-value from a library of consistent values. The auditor or contractor inputs what they are looking at - which in some cases also includes attributes like quality - and an r-value is pulled. If those assumptions are wrong, then the overall model may have errors, but it would appear that the CA system is managing one of the major sources of error: a lack of consistency in input values.
The question is: how important is it to standardize input values such as the r-value of a wall assembly, and can contractors or auditors reliably and accurately estimate these values without the aid of look-up tables?
In an effort to get some data to answer these two questions, a simple poll was created (SEE POLL) that shows participants a picture of an assembly (wall, attic, floor, IR vault) and a simple description of the construction and insulation details of that assembly. Poll takers were then asked to estimate the r-value of each assembly, much as they might have to do in the field.
The goal of this poll is to find out if contractors or auditors looking at the same wall, floor, attic, and vault come up with:
- An accurate effective assembly r-value
- A consistent set of values
The poll was announced on the BPI / RESNET group on LinkedIn, as well as sent to a list-serve of about 190 home performance contractors in CA. This group is primarily composed of fairly experienced auditors and contractors, and while this poll is not scientific, the results speak for themselves and should give us pause if we are relying on field estimates of r-values as the basis for a prediction. Answers were received over a 3-day period from 40 participants.

Question: If you give 40 different auditors the same four common building assemblies (photos and descriptions), and ask each for what they think is the most correct r-value, will the results be consistent?
Figure 1: Predicted R-Value by Assembly
We can see from the range of these responses that there is very significant variance in the results of this exercise.
To make it a bit easier to read, the graph below is reorganized and grouped by predicted r-value. Now one can see that there are some values with more agreement than others, but the standard deviation is broad - not a lot of clear agreement, and a lot of outlying values.
Figure 2: Predictions Sorted By R-Value
This spread is likely not a function of inexperience or lack of knowledge in the participants of this poll. It is more likely a case where participants are being asked to do something that is very hard - essentially memorize tables of values and make complex judgement calls to generate a value that has many variables.
Based on this initial data, it is clear that there is substantial variance in auditors' estimates of r-value. While a more detailed analysis would be welcome, these results are extremely pronounced and argue for software tools to provide r-values for assemblies rather than leave an empty box for the contractor or auditor to fill in.
Until there is a model for home performance where actual performance is tracked and used to calibrate software results, programs that rely on software predictions to drive consumer decision-making or ratepayer funds should require that input values are consistent, regardless of auditor, and based on embedded assumptions. It may be fine to allow contractors to override values, but this should be the exception and should trigger QA, or should happen through established methods like a quality rating that uniformly adjusts the underlying assumption.
Additionally, if only an r-value is recorded, without some description of the assembly construction, it may be difficult or even impossible to QA these inputs after the fact, as there may be no way to verify the attributes of the assembly being estimated.
Vetting a software vendor's model that does not include standardized assumptions through an engineering review conducted with reasonable or pre-established r-values is simply not sufficient, given how much variance we can now expect in the values input into those fields. This analysis strongly suggests that, if not constrained, the quality of inputs in the field will vary substantially from those used in an upfront review, rendering invalid any conclusions reached without factoring in the quality of field inputs.
For tools that currently require an auditor to directly input r-values, vendors should be required to develop a set of look-up tables that generate a value based on a description of the assembly. There are multiple places a vendor could go for established assumption values (NREL, ACCA Manual J, etc.), and the fix could be as easy as adding a drop-down menu to the software - which should have little impact on an auditor in the field, and may in fact reduce the time it takes to complete the model's inputs.
Recommendation: All software used in programs to predict savings should use standard assumptions based on assembly characteristics in order to improve the consistency and quality of input values.
On that note, I wanted to leave everyone with something to discuss. Given the fact that in this exercise, there really is no "right" answer, here is a cut of how we did on average for each assembly type.
Question: How good is our collective intelligence, based on these average predicted values?
Average Estimated R-Value by Assembly
POLL RESULTS UPDATE - October 12, 2013
Here are some updated charts based on the 75 people who had filled out the poll as of Oct 12, 2013. I think it reinforces the conclusions in the blog.
On September 18th the California Public Utilities Commission released its proposed Decision Implementing 2013 - 2014 Energy Efficiency Finance Pilot Programs. This blog is an attempt to distill the 156 pages down to the key information you need to prepare for the coming California energy efficiency finance pilot programs.

The Basics
The California Public Utility Commission (CPUC) is allocating $65.9 million to launch implementation of selected pilot programs designed to test market incentives for attracting private capital through investment of limited ratepayer funds.
A core feature of the authorized pilots is the leverage of limited ratepayer Energy Efficiency funds for Credit Enhancements (CE), such as loan loss reserves, to provide incentives for lenders to extend or improve credit terms for energy efficiency projects. A key objective is to test whether transitional ratepayer support for credit enhancements can lead to self-supporting energy efficiency finance programs in the future.
The Decision establishes an administrative hub, identified as the California Hub for Energy Efficiency Financing (CHEEF), created to increase the flow of private capital to energy efficiency projects. The California Alternative Energy & Alternative Transportation Financing Authority (CAEATFA) will assume the CHEEF functions and direct the IOUs and Commission staff to assist CAEATFA with implementation.
Implementation of both the CHEEF and the financing pilots will be phased in beginning in the fourth quarter of 2013, and all pilots should be online by mid-2014. Due to the potential for delays, the pilot period has been extended to include 2015.
Three residential energy efficiency financing pilot programs are approved, all of which have a component to reach low-to-moderate income households currently overlooked by the capital markets. None would permit shut off of electric service as a result of non-payment of energy efficiency financing obligations. One program supports lending to the single family market sector, complemented by another program which allows the loan payment to appear as an itemized charge on the electric bill. A third pilot program targets master-metered multifamily buildings that house primarily low and moderate income households.
The Decision also authorizes three non-residential energy efficiency financing pilot programs, two of them for small businesses, and expands on-bill utility collection of monthly finance payments. The On-Bill Repayment (OBR) feature will test whether payment on the utility bill increases debt service performance across market sectors. No "credit enhancements" (i.e., ratepayer funds) are authorized to support OBR financing for medium and large businesses. The Decision requires the utilities to develop uniform OBR tariff language that includes transferability of the obligation through written consent (and other mechanisms), and service disconnection for default on the debt obligation.
A cornerstone of the recommended pilot programs is a “credit enhancement” strategy (e.g., loan loss reserve) for residential and non-residential markets in which ratepayer funds are leveraged to achieve more deal flow, primarily through reduced interest rates, during the pilot period. A second critical element is the introduction of a repayment feature on a customer’s utility bill for non-utility Energy Efficiency financing. Significantly, no residential service disconnection is authorized for non-payment of Energy Efficiency loans. A third feature is a database that includes project performance and loan repayment history to inform what hopefully will become new underwriting criteria for the financial industry.
The Commission specifically authorizes two types of Credit Enhancements: Loan Loss Reserve (LLR) and Debt Service Reserve Fund (DSRF).
Highlights of the Implementation Plan, as modified to reflect comments, include the following approximate milestones:
- CAEATFA is fully operational to act as the CHEEF in December 2013
- Two pilots are operational in an early "pre-development" phase by December 2013 (EFLIC and MMMF)
- On-Bill Repayment tariff filed by January 2014
- Trust Accounts are established in February 2014
- Credit Enhancement functionality is ready in February 2014
- Two pilots (Single Family and off-bill Non-residential Lease) are operational by March 2014
- Master Servicer begins operations in April 2014
- OBR is launched in July 2014
Eligible Energy Efficiency Measures
There is significant disagreement about whether and how to limit Energy Efficiency financing pilot programs to funding in support of qualified Energy Efficiency projects, identified here as Eligible Energy Efficiency Measures (EEEM). EEEMs are measures that have been approved by the Commission for a Utility’s Energy Efficiency rebate and incentive program, although the customer need not get an incentive or rebate to qualify for the loan. Each utility is directed to make a list of EEEMs publicly available, including on the utility’s website.
The Decision requires that pilot program financing qualifying for CEs apply a minimum of 70% of the funding to Eligible Energy Efficiency Measures (EEEMs). Therefore, financing eligible for CEs may include funds for non-EEEMs totaling up to 30% of the loan total.
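The 70/30 rule can be sketched as a simple check (a hypothetical helper for illustration, not part of any CPUC tooling):

```python
# Sketch of the Decision's 70/30 rule for CE-eligible financing:
# at least 70% of loan funds must go to Eligible Energy Efficiency
# Measures (EEEMs); up to 30% may fund non-EEEM work.
def ce_eligible(eeem_dollars, non_eeem_dollars):
    total = eeem_dollars + non_eeem_dollars
    return total > 0 and eeem_dollars / total >= 0.70

print(ce_eligible(7_000, 3_000))  # exactly 70% EEEM -> eligible
print(ce_eligible(6_000, 4_000))  # only 60% EEEM -> not eligible
```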
The 70% / 30% ratio of Energy Efficiency measures to non-Energy Efficiency measures also applies to financing that does not rely on ratepayer-funded CEs (e.g., OBR for medium and large businesses). However, a wider range of eligible projects (e.g., demand response, distributed generation) may be included in the 70% eligible Energy Efficiency measures for those pilots. Based on the Decision, it appears that projects receiving no credit enhancements, such as through OBR, will be able to finance projects consisting only of distributed generation or demand response.

Residential Pilot Programs
The primary goals of the Single Family pilot programs are:
- Increase the volume of Energy Efficiency financing to attract capital providers and new market entrants;
- Provide a reliable, one-stop mechanism which provides attractive rates and terms for consumers; and
- Have a quick turn-around for payments to contractors.
The Decision allocates $26.0 million to programs targeting single family Energy Efficiency improvements. However, the Decision does not adopt two of the pilots proposed by HBC (i.e., the Warehouse for Energy Efficiency Loans or "WHEEL" and a pilot targeted at middle-income residents). CPUC finance consultants proposed a direct loan pilot open to all ratepayers occupying single family residences. In response to parties' comments, the CPUC is modifying this program to also allow indirect loans.
The fact that the CPUC is not authorizing WHEEL represents a serious disconnect. California has goals that, by the state's own estimates, will cost at least $68 billion to achieve. Of all the proposed financing options, only the WHEEL program, which provides securitization and access to senior capital markets for energy efficiency, has the potential to deliver the volume of capital California requires to hit even a fraction of the CPUC’s goals. By not authorizing this pilot, California is building on a foundation that, from the beginning, cannot drive sufficient capital. Learn more about WHEEL
It is also a little surprising that in this pilot period, which is meant to lay a foundation for goals that may require as much as $4 billion per year in available financing (based on a CPUC estimate), the CPUC chose to pilot Loan Loss Reserve approaches that closely mirror financing mechanisms that already exist in California for single family retrofits - in some cases already administered by CAEATFA. These existing products have not seen significant demand to date, and it seems short-sighted to put all of our eggs in this basket. This pilot period was an opportunity to try new ideas and experiment. Doing more of the same and expecting different results seems like a long shot (Einstein might call it something else).
Energy Financing Line-Item Charge (EFLIC) Pilot
The Decision creates a Pilot for “Line Item Billing” whereby collection of principal and interest payments on customer loans occurs through utility bills. The primary purpose of this sub-pilot is to test the attractiveness of on-bill repayment and its impact on residential loan performance. In this decision, the sub-pilot is identified as the Energy Financing Line-Item Charge (EFLIC).
The Energy Financing Line-Item Charge differs from non-residential OBR in significant ways. The primary differences are that it does not result in utility disconnection for failure to pay the debt charges, nor does it involve an allocation of partial customer payments between utility energy bills and energy improvement finance charges. The loan obligation does not transfer to subsequent owners or occupants.
Also authorized is a pilot targeting master-metered multifamily housing, which offers owners repayment on the master utility bill without the risk of service disconnection. Bill neutrality will not be required for this pilot. (This leaves the owner free to size the project and loan to meet their own objectives and cash flow.)
Non-Residential Pilot Programs
The primary goal of the Non-Residential pilot programs is to build the deal flow necessary to test the value of On-Bill Repayment (OBR) as a bridge to overcome traditional lending barriers in these markets.
On-Bill Repayment
The primary goal of the On-Bill Repayment (OBR) pilots is to test whether the combined single bill payment can overcome lending barriers in the non‐residential sector, and attract large pools of accessible private capital to Energy Efficiency markets.
The OBR system is what is often called an open market approach: project developers, building owners, and investors can agree to terms on private sources of financing (which can include loans, leases, energy service agreements (ESAs), or other structures). By attaching the payment to the meter as a rate tariff, the premise is that OBR creates a significant credit enhancement without the need for public investment.
OBR, as authorized here, will have two applications: with CEs for small business Energy Efficiency financing and leases, and without CEs for businesses of all sizes, primarily medium and large non-residential customers.
Three non-residential OBR pilot programs are authorized in this decision. Two apply credit enhancements and target small businesses: one for financing to support Energy Efficiency improvements and one to support Energy Efficiency equipment leasing. The third pilot expands the use of OBR, without any CEs, to Energy Efficiency financing incurred by businesses of any size using CAEATFA-administered financing products.
The authorized OBR pilot feature discussed herein will be offered only to non-residential customers, and no prohibition exists against disconnection of a non-residential utility customer for nonpayment of a third-party charge. The Decision requires that the IOUs use the same shut-off protocols in place for the OBF program for the non-residential OBR program.
The Decision concludes that written consent should be part of the OBR tariff in order to achieve transferability. Specifically, property owners and landlords that initially commit to the Energy Efficiency financing and OBR program (“current landlord”) and all of the current landlord’s tenants responsible for repayment under the OBR program (“current tenants”) should be required to give their written consent to abide by the terms and obligations of the OBR program.
The fact that OBR will not survive foreclosure is likely a major problem for lenders, as this survivability is one of the key credit enhancements that makes OBR so potentially game-changing. There is a real concern that this issue may be a serious drag on the program. EDF recently laid out these concerns in the blog: On-Bill Repayment in California: A Step Forward and a Missed Opportunity
The 70% / 30% ratio for EEEMs / non-EEEMs applies to all OBR pilots, with one exception: for OBR without CEs, the 70% eligible Energy Efficiency measures may include distributed generation and demand response, since no ratepayer funds are involved in the loans. CAEATFA has reasonable flexibility, through its rulemaking, to develop basic minimum standards for financing terms and underwriting criteria, consistent with this decision.
California Hub for Energy Efficiency Financing (CHEEF)
The CHEEF is designed to act as a facilitator, allowing for the easy flow of cash, information, and data among Investor Owned Utilities, financial institutions, the Commission, and others. The CHEEF will be run by the California Alternative Energy and Advanced Transportation Financing Authority (CAEATFA).
Master Servicer
CAEATFA is encouraged to contract with a Master Servicer (MS), as its agent, to provide CE fund flow management, oversight, instructions, and reporting. The Decision finds it reasonable for CAEATFA, as CHEEF, to hire an MS through a competitive solicitation. According to the Implementation Plan, CAEATFA expects to complete the RFP process and award the MS contract by January 2014.
If CAEATFA cannot perform the CHEEF role by January 15, 2014, the record in the consolidated proceedings should be reopened to determine another entity to effectively assume the CHEEF role.
The Energy Finance Database
The Energy Efficiency Finance database has the objective of providing sufficient accessible data to see whether Energy Efficiency financing outperforms non-energy debt obligations. The database should be housed and managed by the CHEEF for the benefit of ratepayers.
CAEATFA would need to develop and manage an RFP process for the Energy Finance Database, competitively select a Data Manager, and obtain final approval of the Data Manager contract by February 2014.
No later than November 30, 2013, each Investor Owned Utility shall provide the Commission with a breakdown of utility bill payment history segregated by minimum customer classes of Residential, Commercial and Industrial, for a period of seven to ten years (from December 31, 2012) as identified by the IOU above. The data should be broken down monthly, if available.
The data shall include, to the extent available through reasonable efforts, what percentage of customers within a customer class received, monthly or annually, late notices, shutoff notices, and service disconnection. Finally, annual write-offs per customer class should be expressed as a percent of customer class revenue, no later than January 31, 2014.
Where exactly is all the money going?
Cost effectiveness testing is such a complicated and often boring topic that it gets overlooked as one of the key barriers to achieving our energy efficiency goals.
The fact is, the days of the Recovery Act dropping piles of money on energy efficiency are over (thankfully). For the foreseeable future, the funding stream for energy efficiency will primarily come from ratepayer funds (utilities and PUCs): spending on energy efficiency programs funded by electric and natural gas utility customers is projected to double by 2025, to about $9.5 billion per year, according to projections published by Lawrence Berkeley National Lab. This move towards ratepayer funding will further cement requirements for program cost effectiveness.
While this seems like a simple and reasonable requirement - that investments of ratepayer funds in energy efficiency pay for themselves - the reality is that there is wide disagreement about what it means to be cost effective, and the approach most often taken by utilities is simply not appropriate for energy efficiency, particularly energy efficiency for homes.
The most common approach to cost effectiveness is the Total Resource Cost (TRC) test. This test measures the total cost of an energy efficiency resource (incentives, homeowner contribution, program overhead, etc.) against, essentially, the cost of a natural gas power plant. The idea is that we should not invest in energy efficiency if it is cheaper to generate power.
If you want to learn all about these tests, read the California Standard Practice Manual. Though many would claim that we are actually applying all of these tests incorrectly: the original intent was to apply ALL the tests so that Public Utility Commissions could make measured choices based on the results and their goals. Instead, this testing regime has been transformed into something binary - pass / fail. We have lost the ability to think and make reasoned decisions.
TRC sounds logical, until you consider more fully the many benefits of energy efficiency, such as comfort, health, and durability, and, most importantly, that in most cases ratepayers are paying for only a small fraction of the total project cost. While the great energy efficiency theory proclaims that there is a huge amount of low-hanging, super cost effective energy efficiency just waiting to be picked, reality is something very different. Due to a range of transaction and opportunity cost issues, retrofits done simply for energy efficiency's sake rarely deliver the return we have been led to expect. We do have one great advantage, however: while the energy efficiency return is not the no-brainer value proposition put forward by so many white papers, study after study has shown that consumers who invest in their homes do so for a range of reasons. Energy efficiency is often on the list, but it is by no means at the top of the list of reasons an upgrade is being made.
Homeowners are investing to modernize equipment (1 in 12 HVAC units goes out every year) or to solve a range of other problems, where the primary benefit is often comfort or better indoor air quality. The fact that a consumer may save a few dollars a month, or get a rebate, is great and can drive deeper energy efficiency projects, but the rebate and the resulting energy savings are not the primary reason funds are being spent.
So, on one side of the equation we are looking ONLY at the value of energy savings from a project that is producing many valuable outcomes for the party making the buying decisions (that’s the homeowner), and we are comparing that against the TOTAL cost of the project.
Unlike a power plant, where the ONLY benefit is the energy it produces, in a residential retrofit the primary benefits are not even being counted. Of course this math does not come out looking good for energy efficiency. But that is not because energy efficiency is not cost effective; we are simply applying the wrong metrics to understand its value.
There is one school of thought that, to fix this imbalanced equation, we should put a value on the non-energy benefits (NEBs) associated with energy efficiency and deal them into the equation. While this makes sense at one level, it is a giant mess at another.
Comfort, health, and durability are all very complicated to reduce to a simple payback number in a way that is at all believable or testable in the real world. So while attempting to count the value of NEBs will further the growth and full employment of the M&V industry, in the long run we will be left in a similar place: a lack of confidence in energy efficiency, and numbers too far from the truth of the matter to pass muster in a world where energy efficiency is treated as an actual resource, not just a matter of public policy. This is a short-term solution at best - and maybe not even that.
Non-energy benefits are already being accounted for in every single energy efficiency transaction. Building owners, who are putting up the vast majority of the investment, are weighing that investment against these benefits, and frankly speaking, trying to account for these benefits is just not the business of utility commissions or utilities.
Instead, we should focus on a simpler, more transparent approach to cost effectiveness that is aligned with how markets view energy efficiency as a resource and a public good, and leave non-energy benefits up to the people who are making the investment, rather than making them a function of policy.
A Simple Solution
There is another approach to measuring cost effectiveness that is both considerably simpler and aligned with how markets will someday value energy efficiency, referred to as either the Utility Cost or Program Administrator Cost test. This test looks not at the total cost, but instead focuses on the public or ratepayer investment versus the energy efficiency savings that emerge.
Rather than attempting to quantify non-energy benefits, this approach says that what a building owner spends on benefits that are really private is not the concern of the utility or program, and instead treats energy efficiency like a commodity or resource. If we were buying oranges, we would not be interested in the types of tractors used, or whether a farm was profitable; we would simply be buying a product at a price.
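To make the contrast concrete, here is a hypothetical back-of-the-envelope comparison of the two lenses. The dollar figures are invented, and real TRC and Program Administrator Cost tests discount multi-year savings streams against avoided-cost forecasts rather than using a single lifetime number:

```python
# Illustrative comparison of two cost-effectiveness lenses; all figures are
# hypothetical and the single-number ratios are a deliberate simplification.

project_cost_total = 12_000.0      # full retrofit cost, paid mostly by the homeowner
ratepayer_incentive = 2_000.0      # the public/ratepayer share of that cost
lifetime_energy_savings = 4_500.0  # value of energy saved at the meter

# Total Resource Cost view: total project cost vs. energy savings alone.
trc_ratio = lifetime_energy_savings / project_cost_total

# Program/Utility Cost view: ratepayer dollars vs. the same energy savings.
pac_ratio = lifetime_energy_savings / ratepayer_incentive

print(f"TRC ratio: {trc_ratio:.2f}")  # 0.38 -- fails a binary pass/fail screen
print(f"PAC ratio: {pac_ratio:.2f}")  # 2.25 -- comfortably cost effective
```

The same project fails one test and passes the other, which is the crux of the argument: the TRC counts every private dollar spent while crediting only the energy benefit.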
It is time that we take the M&V magic out of energy efficiency and keep it simple (stupid). The public is investing in energy efficiency so we can build fewer power plants; let's keep it to that. We need to stop trying to decide for consumers what is in their best interest, and stop telling the market and industry that we should only be paying attention to a single benefit.
Fundamentally, for energy efficiency to scale, we need business models that are profitable and can scale; products that a range of consumers actually want to invest in; and public funds tied to the value of the energy efficiency we actually produce.
Currently, programs and utilities are doing a terrible job at the one part of this Venn diagram they should actually be responsible for: measuring and holding actors accountable for the savings produced. The last thing we should do is set a bunch of bureaucrats and highly paid consultants on a new, complex, very expensive, but ultimately impossible mission to put a value on warm feet, happy marriages, fewer sniffles, homeowners’ egos, keeping up with the Joneses, and any number of other potential reasons for investing in energy efficiency.
The simplest answer is usually the right one. We need to start measuring the results of our work in a way that is transparent and creates real accountability, and we need to simply measure the public investment necessary to harvest the resource. It is time to start comparing apples to apples - public or ratepayer dollars against savings at the meter.
I'll leave you with one parting thought - click the quote below to learn the truth:
The premise of financing seems so simple: energy efficiency saves money over time, so with easy access to low-cost financing options that align payments with savings, consumers will start to invest in this seemingly no-brainer proposition.
While on one hand that makes complete sense, on the other we have seen rather universally that access to financing can certainly help close business, and can shift the conversation from upfront cost to monthly payment, but generally speaking, it is not a great driver of demand. Simply put, people don't replace a furnace because there is cheap financing - the financing just makes it easier and potentially enables investments in more efficient systems that cost a bit more upfront. But the demand has to be there first.
However, there is a simple but potentially detrimental kink in this plan. Regardless of what fancy low-cost financing comes to market - loan loss reserves, on-bill repayment, or PACE financing that could put creative, low-cost financing solutions within homeowners' reach - there is a simple yet critical issue that might mean very little uptake in the market: the vast majority of loans pay a contractor only upon completion of a job. In Home Performance, and proactive energy efficiency in general, we are talking about longer and larger projects that can take weeks or even more than a month to complete - from when parts are ordered and work is conducted to final sign-off on the project. For many of the small businesses in this market, the requirement to float the financing can be detrimental. These companies are typically not well capitalized and have little access to credit. Since 2008, credit lines have become much harder to come by, and costs have increased to the point where, for many businesses, rates look more like a credit card than a business loan.
In talking with some of the leading firms in the Energy Upgrade California program, I have learned that this float means that in order to keep cash flow where it needs to be - to keep the doors open and meet monthly obligations - they work hard to make sure they are only floating one loan at a time. Any more than that puts their business at risk. Regardless of what low rates or fancy financing structures get developed, these firms make a concerted effort not to finance more than one or two projects at any given time.
With hundreds of millions of dollars flowing to programs, program implementation contractors, marketing firms, loan loss reserves, and the likes, very little of those funds are ultimately reaching the contracting community in a way that helps the business of energy efficiency contracting reach a level of profit and access to capital necessary to scale.
If we want financing to work, we need to make sure it works for the businesses that ultimately sell it to consumers, or we may be surprised how little things like interest rates or credit requirements matter!
We need to take some of these funds and use them to provide low-cost capital to contractors, based on a percentage of accounts receivable, so that they can manage cash flow and meet their obligations to their employees and customers. With all our massive investments in energy efficiency, the Achilles heel is that while consultants and implementers have gotten rich, the contractors who constitute the industry, drive all the savings, and employ the workforce have not shared in that bounty. This is the core failure of our approach, and until energy efficiency and home performance are good business models that make money, no amount of program infrastructure is going to scale this industry up.
Even with the best of intentions, and two commissions both charged with similar objectives to promote energy efficiency, we have developed a situation in California (and in many places across the country) where two public systems, both designed to encourage the exact same outcomes, are in fact in direct conflict.
The California Public Utilities Commission (CPUC) has strict rules governing the use of ratepayer funds. To avoid the trap of paying incentives to "free riders" or people who would have made an upgrade anyway, they have put in place a rule that says, if a consumer would have done something anyway, they are not eligible for a rebate. Sounds reasonable...
However, in the real world, this rule means that any upgrade that is, say, a replacement of a broken furnace or water heater can only earn an incentive on the increment by which the new equipment's efficiency exceeds current code (rather than on the improvement from the equipment previously in place to the new equipment the consumer is buying).
"Pre-existing equipment baselines are only used in cases where there is clear evidence the program has induced the replacement rather than merely caused an increase in efficiency in a replacement that would have occurred in the absence of the program." (CPUC Decision 11-07-030, July 14, 2011)
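A hypothetical furnace change-out shows what the code baseline does to the incentive math. The AFUE efficiencies below are illustrative, not drawn from any actual program:

```python
# Hypothetical baseline comparison; AFUE figures are illustrative only.

existing_afue = 0.70   # old furnace being replaced
code_afue = 0.80       # assumed current code minimum for a replacement
new_afue = 0.95        # high-efficiency unit the homeowner buys

# Code baseline: the incentive credits only the gain beyond code.
savings_code_baseline = new_afue - code_afue             # 0.15

# Pre-existing baseline: credits the full gain over what was installed.
savings_preexisting_baseline = new_afue - existing_afue  # 0.25

share_credited = savings_code_baseline / savings_preexisting_baseline
print(f"{share_credited:.0%} of the real-world gain is credited")  # 60%
```

And as code rises toward the efficiency of the best available equipment, that credited share shrinks toward zero, which is exactly the squeeze described below.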
The rub is that it is basically universally understood that if we hope to hit our energy efficiency goals - which in California (brace yourself) means a 40% reduction in energy consumption in existing residential buildings by 2020 - the only path to get even close is to harness the billions spent every year on equipment replacement in the reactive markets. Standing up a proactive industry is great, but at this point we have basically proven that it is going to be a long, slow slog to get there.
CA Long Term EE Strategic Plan Residential Goals
So here's the crux of the issue.
First, we are setting up a system that by design offers less incentive for replacement systems and drastically limits what can be deemed cost effective and can access utility ratepayer funds. We are asking the HVAC and broader energy efficiency industry to change the way they do business, but we are massively reducing the payoff for doing so. Not a great recipe if we need massive change, and fast.
To make matters worse, the California Energy Commission (CEC), also attempting to fulfill the very same CA Energy Efficiency Strategic Plan
goal of a 40% reduction, has been steadily ratcheting up HVAC and other code requirements. While this is potentially great for what new construction we still have, it does not help us reach the 13.8 million existing CA homes.
So, every time the CEC pushes up code requirements, it is simultaneously cutting the amount of ratepayer funds we can use to solve the problem. This is a real issue: the CEC essentially has regulatory power but no money, while the CPUC has the money but can't spend it, because the delta from code baseline to improved buildings keeps getting smaller.
Energy code was originally designed as a minimum standard. In 1978, when Title 24 was implemented in California, it left plenty of room for ratepayer incentives to encourage consumers to adopt greater efficiency measures. Today, however, code has reached a level where it has changed from being a base for the market into a stick that attempts to push the market towards ever deeper savings goals, leaving little room for incentives and carrots. Of course, when code vastly exceeds the market, rather than moving us forward, we instead see very low levels of adoption. California has what must be the most stringent HVAC change-out codes in the country; yet instead of seeing very efficient HVAC systems being installed, we see compliance rates that are generally understood to be less than 10% of installed systems.
Current CPUC programs actually specify that "The IOU programs seek to enroll customers through a targeted marketing approach that focuses on high energy use homes built pre-1978 (the year that Title 24, California’s building code on energy efficiency, was instituted)." (View PDF
) Clearly there is a problem here: we are targeting buildings that are farthest from code and represent the most savings potential, which is very logical, yet those same homes, when the code baseline is applied, will not be able to achieve current code and therefore won't receive incentives even with deep energy efficiency retrofits.
The balance of push and pull has broken down, and we are in danger of turning energy efficiency into a tax on building owners, where many of the key public and ratepayer benefits accrue to the utilities and the public good without compensation to the owners making the investment. While setting a code-baseline floor for ratepayer incentives might have made sense in a pre-1978 world without any minimum standards, it simply no longer allows us to reach our goals.
If we can afford to purchase energy in the absence of energy efficiency, we can afford to use equivalent funds to incentivize energy savings that are 100% clean, and represent a less expensive resource than generation.
This issue represents yet another clear case of the energy efficiency community building barriers and undermining our own efforts, and a reason we need to start unwinding this web of regulation and let data and performance rule the day.
So, in a nutshell: the more we ratchet up code with a goal of getting to net zero, the fewer ratepayer dollars will be available to pay for the investment required to get there. In the current model, we cannot simultaneously use code to achieve zero energy while tying consumer incentives to the ever-shrinking delta between code requirements and the realities of existing buildings. We must move away from this regulatory paradigm towards energy efficiency as a resource, where we acknowledge that every BTU and kWh we save is one we don't have to generate, and therefore has a value we should reward, regardless of code baseline.
The undeniable power of Solar Leasing and PPAs is now the driving force in all major PV markets. These energy service contracts (by another name) allow homeowners to go solar without upfront investment and without long-term performance or equipment risk. Clearly it is working.
The Investor Confidence Project believes we can capture this same momentum in the energy efficiency market. Clearly EE is more complicated, however there is no technical reason that we can't produce a very similar consumer value proposition in both the residential and commercial sectors.
It was not long ago that solar was in the same boat. Less than 10 years ago, just about the only financing mechanism in residential solar was a home equity loan or a bank account with $25K in it... and a healthy personal tax appetite. With a little data and some right-thinking policy, energy efficiency can begin to offer a similar proposition and join our friends in the solar industry, selling customers what they want: a service.
With solar costs ever declining, there appears to be a disturbing trend for third-party leasing and PPA firms to overstate the cost of solar systems. They are doing this for the simple reason that the 25D Solar Investment Tax Credit (ITC)
is cost based, which means that these companies essentially get a tax credit for 30% of the cost of a system. This creates an incentive to report high prices... maybe higher prices than the actual reality in the market.
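The incentive to inflate is simple arithmetic, sketched below with hypothetical numbers; only the 30% credit rate comes from the discussion above:

```python
# Hypothetical illustration of a cost-based incentive; the system cost and
# appraised value are invented, only the 30% ITC rate is from the post.

ITC_RATE = 0.30

actual_install_cost = 25_000.0  # what the system really cost to install
appraised_value = 32_000.0      # an inflated "fair market value" reported instead

honest_credit = ITC_RATE * actual_install_cost  # $7,500
inflated_credit = ITC_RATE * appraised_value    # $9,600

print(f"Extra tax credit from inflating the basis: "
      f"${inflated_credit - honest_credit:,.0f}")
# Extra tax credit from inflating the basis: $2,100
```

A performance-based incentive, paid per kWh produced, would leave nothing to gain from inflating the reported cost.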
This issue is essentially common knowledge in the industry. There are reports of smaller firms marking up the cost of systems to levels that have not been seen in the retail market for 10 years. However, this issue does not seem to be exclusive to a few small rogue companies; it appears likely that the practice is also being conducted by some of the largest firms in the market.
A recent report on the cost of solar, released by LBNL with input from SEIA and other industry groups, references how they had to strip out overpriced third-party-owned data so as not to skew results:
"Third party owned systems were screened out of the data sample in cases where reported installed prices were deemed likely to represent appraised values; the median installed price reported for these systems was significantly higher than for host customer owned systems (e.g., $8.0/W vs. $6.2/W, among ≤10 kW systems completed in 2011)
. In contrast, installed prices reported for other third party owned systems that were retained in the sample were similar to those reported for host customer owned systems."
This issue was also highlighted as a risk in the recent SolarCity S-1 Filing
prior to their IPO, where they stated:
"The Office of the Inspector General of the U.S. Department of Treasury has issued subpoenas to a number of significant participants in the rooftop solar energy installation industry
, including us. The subpoena we received requires us to deliver certain documents in our possession relating to our participation in the U.S. Treasury grant program. These documents will be delivered to the Office of the Inspector General of the U.S. Department of Treasury, which is investigating the administration and implementation of the U.S. Treasury grant program. In July 2012, we and other companies with significant market share, and other companies related to the solar industry, received subpoenas from the U.S. Department of Treasury’s Office of the Inspector General to deliver certain documents in our respective possession. In particular, our subpoena requested, among other things, documents dated, created, revised or referred to since January 1, 2007 that relate to our applications for U.S. Treasury grants or communications"
The question remains as to what extent the solar leasing and PPA industries, which represent as much as 80% of the California solar market, have legal exposure on this issue. It also raises the question of how much this price inflation has helped achieve the economics that have led to the current boom.
In general, this is more validation that cost-based incentives are inferior to performance-based incentives. This is particularly evident when you compare the cost of PV in Germany, where there is a feed-in tariff, with costs in the US, where incentives are driven as a percentage of the cost of the system, not its output.