# Limitations of Discounted Cash Flow Valuation Models

While the discounted cash flow (DCF) methodology is the most rigorous and financially sound approach to business valuation, it has two significant limitations:

- Extreme sensitivity to certain input assumptions.
- Uncertainty in calculating the terminal value of the company.

## Sensitivity to Assumptions

Two variables overwhelmingly influence the output of a DCF model:

**1.A. Future growth projections**

**1.B. Discount rate – the Weighted Average Cost of Capital (WACC)**

If DCF models are so sensitive to these input assumptions, we should ask, “Can we accurately forecast growth 5 years out, and can we really know the WACC with sufficient accuracy to have confidence in our model?” The answer is “No” to both. Here’s why…

### Projecting Growth

Few companies, especially mid-market companies, can accurately project their financial results 5 years into the future. Even two years can be opaque. I have built numerous financial models over the years and have reviewed even more. In all these models, I have never seen financial projections that included a broad market downturn due to a global economic crisis, a squeeze on margins because a large client started hammering the company on price and demanding more service than expected, or declining revenue due to competitive pressures. These events often can’t be known in advance, so they aren’t modeled, but they happen in the real life of a business. As such, we should be skeptical of financials projected beyond year 2. And yet, most DCF models project 5 years.

If your model is particularly sensitive to an inaccurate input (in this case the growth assumptions), the model has a degree of uncertainty built into it.
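To see how quickly a growth-rate error compounds over a 5-year projection, here is a minimal sketch. The revenue base and growth rates are illustrative assumptions, not figures from any actual model:

```python
def project_revenue(base: float, growth: float, years: int) -> float:
    """Compound a base revenue at a constant annual growth rate."""
    return base * (1 + growth) ** years

base = 10_000_000                              # hypothetical year-0 revenue
optimistic = project_revenue(base, 0.10, 5)    # modeled 10% annual growth
realized = project_revenue(base, 0.05, 5)      # actual 5% annual growth

# A 5-point error in the growth assumption compounds to a ~26% revenue
# miss by year 5 -- and everything downstream of revenue inherits it.
gap = optimistic / realized - 1
print(f"Year-5 revenue gap: {gap:.1%}")
```

Margins, cash flows, and ultimately the valuation all sit downstream of this compounding error.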

### Defining WACC

The Weighted Average Cost of Capital (WACC) is the combined cost of equity and cost of debt, weighted by the relative amounts of each, given the capital structure of the company being valued.

Because the projected future cash flows are discounted to present value by the WACC, small changes in WACC have a profound effect on the resulting valuation. A higher WACC gives a lower current valuation. That is, if the expected return (WACC) is high, the company must be valued less now to realize the expected return over time.
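The discounting effect is easy to demonstrate. The sketch below uses made-up flat cash flows and two hypothetical WACC values to show how a 2-point change in the discount rate moves the present value:

```python
def present_value(cash_flows, wacc):
    """Discount a list of future annual cash flows to present value."""
    return sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [1_000_000] * 5  # hypothetical flat cash flows, years 1 through 5

pv_low = present_value(flows, 0.10)   # WACC of 10%
pv_high = present_value(flows, 0.12)  # WACC of 12%

# The higher WACC produces the lower present value, and the gap widens
# further once a terminal value (discounted from year 5) is included.
print(f"PV @ 10%: {pv_low:,.0f}   PV @ 12%: {pv_high:,.0f}")
```

Note that the effect compounds with the projection horizon: the later the cash flow, the more a WACC change moves its present value.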

### Determining WACC

To determine WACC, you simply need to know the cost of debt and the cost of equity… and the relative weight of each.

The “relative weight of each” is the easiest to determine. It’s just whatever capital structure you decide on, which usually depends on the amount of available debt financing. The rest is equity.

The cost of debt is also easy. It’s simply the company’s interest rate on its debt.
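Putting the pieces together, the weighting itself is simple arithmetic. The capital structure, rates, and tax rate below are hypothetical; note also that practitioners typically take the cost of debt after tax (interest is deductible), a detail this sketch includes as an assumption:

```python
def wacc(equity_weight, cost_of_equity, debt_weight, cost_of_debt, tax_rate=0.0):
    """Weighted average cost of capital. The cost of debt is taken
    after tax because interest payments are tax-deductible."""
    return equity_weight * cost_of_equity + debt_weight * cost_of_debt * (1 - tax_rate)

# Hypothetical inputs: 60% equity at a 30% cost, 40% debt at 8%, 25% tax rate
rate = wacc(0.60, 0.30, 0.40, 0.08, tax_rate=0.25)
print(f"WACC: {rate:.1%}")
```

The arithmetic is trivial; as the rest of this section argues, the difficulty is entirely in knowing the cost of equity that goes into it.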

### Cost of Equity

The cost of equity, however, is much more elusive. For public companies, the cost of equity may be “computed”. I use the term computed loosely because the computation involves a roundabout exercise based on some questionable assumptions and results in a mathematical conundrum – circular logic.

The theory, called the Capital Asset Pricing Model (CAPM), sets out to determine the cost of equity in the following way:

**re = rf + β(rm – rf)**

[cost of equity] = [risk-free rate] + [the overall market risk premium] x [the specific company’s β]

where β is a measure of the specific company’s volatility compared to the overall market.
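As a direct translation of the CAPM formula into code (the risk-free rate, market return, and β below are placeholder assumptions, not recommendations):

```python
def capm_cost_of_equity(risk_free, market_return, beta):
    """CAPM: re = rf + beta * (rm - rf)."""
    return risk_free + beta * (market_return - risk_free)

# Hypothetical inputs: 4% risk-free rate, 9% expected market return, beta of 1.3
re = capm_cost_of_equity(0.04, 0.09, 1.3)
print(f"Cost of equity: {re:.1%}")
```

The formula itself is one line; the uncertainty lives entirely in its three inputs, as discussed next.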

So where are the hidden assumptions in this formula that might lead to uncertainty in the DCF model?

### Questionable WACC & CAPM Assumptions (for publicly traded companies)

The CAPM assumes that…

- The cost of equity (re) is in fact accurately defined by the CAPM formula. Perhaps not.
- The risk-free rate (rf) is quantifiable. Most people use U.S. Treasury notes as the risk-free rate. While stable thus far, U.S. Treasuries are not completely risk-free. (This is a minor point, not because my argument lacks validity, but because adjustments to the risk-free rate would be quite small and would therefore have little influence on the final valuation.)
- The market risk premium (rm – rf) is quantifiable. Even assuming rf is correct, calculating the return of the overall market (rm) poses its own set of challenges. What is used as “the overall market” and over what time period is it calculated?
- β may be computed in a reasoned way. It isn’t. The purpose of the DCF approach is to value a company. However, to calculate the cost of equity (re), we must determine the value for β empirically. But to do this, we must know the value of the company over time. So, the circular logic is that we are using the value of the company as part of our calculation to determine the value of the company. If the known valuation is incorrect, the calculated valuation, which depends on it, may also be off.
- β is stable through time. It isn’t.

Technically, β is the covariance of a stock’s returns with the overall market’s returns, divided by the variance of the market’s returns. That is, it uses historical data to compare how an individual stock price moves relative to the market.

β = 1 means that the stock moves perfectly in sync with the overall market.

β = 2 means that the stock moves in the same direction as the market, but with double the volatility.

Negative values for β imply there is a correlation with the overall market, but in the opposite direction.

Measuring β’s with historical data is problematic because β’s are not stable over time. Empirical studies have shown that CAPM tends to be a valid predictor of expected return for β = 1, but tends to overstate returns for higher β’s and understate returns for lower β’s. Intuitively, I would not expect mid-market companies to have a β close to 1.

Further, thinly-traded stocks are sometimes more volatile simply because of their size and float, not because of their fundamentals. This introduces a spurious volatility into the equation.

Lastly, covariance is a statistical measure calculated using the averages of the inputs. For non-Gaussian data (and financial markets definitely produce non-Gaussian data), averages can be swayed significantly by extreme events. In general, we should be suspicious of any financial theory that implicitly assumes financial data fits a normal distribution. It does not. Extreme events in the tails of the distribution happen far too frequently for financial markets to be considered Gaussian.
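The sensitivity of β to a single extreme observation follows directly from its definition, β = Cov(r_stock, r_market) / Var(r_market). The return series below are fabricated purely to illustrate the effect:

```python
def beta(stock_returns, market_returns):
    """Beta as covariance of stock returns with market returns,
    divided by the variance of market returns."""
    n = len(market_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / n
    var = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var

# Fabricated monthly returns for a stock and "the market"
market = [0.01, -0.02, 0.015, 0.005, -0.01]
stock = [0.02, -0.03, 0.020, 0.010, -0.02]

print(f"Beta, calm period: {beta(stock, market):.2f}")

# Append a single crash month and beta shifts dramatically,
# even though the company's fundamentals are unchanged.
print(f"Beta, with one crash: {beta(stock + [-0.30], market + [-0.10]):.2f}")
```

One crash month nearly doubles the measured β in this toy series, which would flow straight through re and WACC into the valuation.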

If there is an abnormal, extreme event buried in the calculation of β (or missing from your calculation of β), your value for β is off… your re is off… your WACC is off… your valuation is off… and the whole valuation exercise is defeated. And, you may not even realize it because you plucked the value for β from a data provider that published it in a table (it’s generally calculated for you).

### WACC Assumptions (for privately-held companies)

Even if we can get intellectually comfortable with the WACC calculation for publicly-traded companies, how do we apply this to private companies?

The WACC for privately held companies is imputed from the WACC of similar publicly-traded companies while also considering that public companies provide investors with liquidity through the public markets.

Because privately held companies are often illiquid investments, often with long time horizons, the WACC should arguably be higher. But by how much? No one really knows. Academics have studied historical data to help quantify this, or at least to put some boundaries on it. These studies look at companies that have moved from privately held to publicly traded (often by being bought by a public company) and then evaluate the effects on the cost of equity for these companies. They suggest the WACC for privately held companies should be some X% higher than that of their publicly traded peers, where X is often cited around 20-30% as the liquidity premium. This somewhat arbitrary premium is then applied to the calculated WACC to arrive at the final WACC for privately held companies.

The calculation of the WACC for private companies is a byproduct of the less-than-exact-science of calculating the WACC for public companies. Because the entire DCF model depends on an accurate value for WACC, caveat emptor.

### WACC in Practice

Few valuation experts use CAPM (in the sense that they do not calculate the cost of equity for comparable publicly-traded companies and do all the math to determine the cost of equity for the private company they are valuing). Why not? Because it’s a lot of work with a lot of uncertainty built on a shaky theory. Instead, valuation experts typically use something around 30% for the cost of equity in their valuation models. I have seen anywhere between 25% – 40%. Any number in this range is fully justifiable.

So, what is the cost of equity really? It’s the expected return on investment that a buyer (or current owner) expects to receive on his or her investment in the company. In reality, it can be quite different for each person, depending on their circumstances, perceived risk and many other factors. Apparently, it can be any value you want in the range of 25% – 40%… and it often becomes the variable people tweak and later rationalize to arrive at the desired end valuation. I don’t mean that to sound cynical, it’s just mostly what I have seen in practice.

There’s more…

## Terminal Value Uncertainty

There are generally two accepted methods to determine the terminal value of a company (the value of the cash flow produced beyond the initial 5-year projection):

**2.A. Perpetual Growth Method**

**2.B. Terminal Exit Method**

### Perpetual Growth Method

In the perpetual growth method, you assume the company continues to grow at some constant rate into perpetuity. Obvious concerns:

- It would be quite difficult to know that perpetual growth rate.
- Because the perpetual growth rate compounds to infinity, a rate even slightly above inflation over that much time means the model will show the company essentially taking over the world.

In practice, I have never seen someone use the perpetual growth method to value the cash flow in the outer years (except when I tried it once, before realizing that this one assumption’s influence eclipses the entire model). Although the perpetual growth method may be theoretically sound, it’s just not practical.
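For reference, the perpetual growth (Gordon growth) terminal value is TV = CF₅ × (1 + g) / (WACC − g). The sketch below uses made-up numbers to show how the value explodes as the assumed growth rate approaches the WACC:

```python
def gordon_terminal_value(final_cash_flow, wacc, growth):
    """Terminal value of a cash flow growing at a constant rate forever.
    Only defined for growth < wacc; the value blows up as they converge."""
    if growth >= wacc:
        raise ValueError("growth rate must be below WACC")
    return final_cash_flow * (1 + growth) / (wacc - growth)

cf5 = 1_000_000  # hypothetical year-5 cash flow, with a 10% WACC
for g in (0.02, 0.05, 0.09):
    tv = gordon_terminal_value(cf5, 0.10, g)
    print(f"g = {g:.0%}: terminal value = {tv:,.0f}")
```

Moving the growth assumption from 2% to 9% (against a 10% WACC) multiplies the terminal value by roughly 8.5x, which is the "takes over the world" problem in miniature.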

### Terminal Exit Method

With the terminal exit method, you assume the company is sold at the end of year 5 at some value. But what value? In practice, this is often reduced to a multiple of year-5 EBITDA (an uncertain number itself). But what multiple should be used? Most people would argue that the exit EBITDA multiple in year 5 should be somewhat close to the current EBITDA multiple, which is circular logic. You need A to determine B, to calculate A.

Because the majority of the current valuation produced by the DCF model is attributed to the terminal value (especially for 5-year projections, as opposed to 10-year projections), it behooves us to at least understand the great uncertainty in the terminal value itself.
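To see how much of a DCF valuation the terminal value carries, here is a short sketch. The flat cash flows, 10% WACC, year-5 EBITDA, and 6x exit multiple are all assumed for illustration:

```python
def dcf_with_exit(cash_flows, wacc, exit_ebitda, exit_multiple):
    """Present value of projected cash flows plus a terminal exit value,
    where the exit value is discounted from the final projection year."""
    years = len(cash_flows)
    pv_flows = sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows, 1))
    pv_terminal = exit_ebitda * exit_multiple / (1 + wacc) ** years
    return pv_flows, pv_terminal

# Hypothetical: $1M annual cash flow, 10% WACC, $1.5M year-5 EBITDA, 6x exit
pv_flows, pv_terminal = dcf_with_exit([1_000_000] * 5, 0.10, 1_500_000, 6)
share = pv_terminal / (pv_flows + pv_terminal)

# Even with modest assumptions, the terminal value is the majority
# of the total valuation for a 5-year projection.
print(f"Terminal value share of total valuation: {share:.0%}")
```

In this toy model the terminal value is roughly 60% of the total, so the most uncertain component dominates the answer.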

## Summary

At the risk of bashing a methodology without offering an alternative solution… I will say that, short of better alternatives (and I offer none here), the discounted cash flow approach is currently the best available method to value a company. The overall premise, that valuation is directly tied to the present value of expected future cash flows, is correct. The kink is knowing how to accurately project future cash flows and knowing how much to discount those projected future cash flows to arrive at a present value.

So, use your DCF models, but thoroughly understand the inherent limitations and weaknesses.

- Understand the limitations in visibility of 5-year projections.
- Understand how sensitive your model is to your forecasted growth assumptions and the WACC you choose.
- Understand that the determination of WACC is not an exact science.
- Understand that the methods for calculating the terminal value are either questionable or circular.

## Aside – Implied Accuracy & Significant Digits

Because of the DCF model’s sensitivities and uncertainties, do not show your projections or your valuation to the nearest dollar. Just because Excel can calculate to that many digits does not mean the model is that accurate. It’s not.

There’s no way you can be within 1% accuracy with a DCF model. So, here’s a good rule of thumb. At a minimum, round to the nearest 1% of the end valuation.

Example: If your valuation is in the 10’s of millions, round to the nearest 100,000. If Excel displays the valuation as $20,864,867, report $20.9 million. Reporting more accuracy than this shows you do not understand the limitations of your model nor the proper use of significant digits in your analysis.
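The rounding rule above is a one-liner. This sketch applies the nearest-$100,000 convention from the example (appropriate for a valuation in the tens of millions):

```python
def round_valuation(value: float, nearest: int = 100_000) -> int:
    """Round a valuation to a sensible significant-digit level,
    rather than reporting Excel's full precision."""
    return round(value / nearest) * nearest

# The Excel figure from the example above
print(round_valuation(20_864_867))  # reported as $20.9 million
```

For larger or smaller valuations, scale `nearest` so it stays around 1% of the total.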

Visit Private Equity Info to learn more about our product and service.

## Coaching, Mentoring & Consulting

Want to learn more about financial modeling, how to build DCF models or how to create value in your company? Reach out via email (ajones@PrivateEquityInfo.com) to discuss coaching, mentoring or consulting options.
