By Jack Forehand (@practicalquant) —
This is the first interview in our new Five Questions series. Each month, we will take an in-depth look at a specific issue in the market through a conversation with a leading expert in the area. The goal of this series is to move beyond the high-level discussion typically available in the media and to learn from someone who has studied the issue in depth. In our first interview, we talk with Wes Gray of Alpha Architect about value investing.
The topic of value investing is at the forefront of many investors’ minds these days. Over the long term, it has been very successful, but in recent years it has struggled to varying degrees depending on how you measure it. As is typically the case when investment strategies struggle, this has led many to question whether things have changed and value no longer works.
I can think of no better person to put what is going on with value in the proper historical perspective than Wes Gray. Wes is the founder of Alpha Architect and the author of Quantitative Value, one of the best books out there on systematic value investing. Wes is also a former Marine and studied under Eugene Fama at the University of Chicago.
Jack: Value stocks have underperformed for over a decade now. Since the beginning of 2007, the Russell 1000 Growth index has more than tripled the return of the Russell 1000 Value index, and value has also underperformed growth significantly in the small-cap space.
There seem to be two schools of thought on what is going on and what it means for the future. One thought process is that this underperformance has created a significant opportunity for long-term value investors since other periods like this historically have typically been followed by a strong reversion. The other is that this underperformance has been justified because the businesses of growth companies have just been doing much better. As someone who has studied value investing extensively, I was wondering what your thoughts are on how this period fits into a historical context and what if anything it means for the likely long-term returns of value stocks going forward?
Wes: This result has a lot to do with the construction of the Russell indexes. For example, you can take the academic small-value and small-growth portfolios since 1927 and plot them relative to the S&P 500. This data is from Ken French’s site and is available here. There are numerous periods where these strategies will drag on the overall index. But if you directly compare the performance of small value and small growth, there are very few periods where small value doesn’t beat small growth on a compounded annual return basis over 5-year rolling periods – to include the most recent 5-year period, where small value eked out a win over small growth using the data from Ken French’s site. So when arguments are framed around “growth beat value over the past X years,” one should always consider the evidence that supports the statement in the first place.
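The rolling-window comparison Wes describes can be sketched in a few lines: compute the compound annual return over each rolling 5-year window for both series and count how often one beats the other. The annual returns below are made up purely for illustration – they are not Ken French’s actual data.

```python
def rolling_cagr(annual_returns, window=5):
    """Compound annual growth rate over each rolling `window`-year span.

    `annual_returns` is a list of yearly returns (0.10 means +10%).
    Returns one CAGR per rolling window, in order.
    """
    out = []
    for i in range(len(annual_returns) - window + 1):
        growth = 1.0
        for r in annual_returns[i:i + window]:
            growth *= 1.0 + r
        out.append(growth ** (1.0 / window) - 1.0)
    return out

# Hypothetical annual returns for illustration only -- not real index data.
small_value = [0.12, -0.05, 0.20, 0.08, 0.15, 0.03, 0.10]
small_growth = [0.10, -0.02, 0.12, 0.06, 0.09, 0.11, 0.07]

value_cagr = rolling_cagr(small_value)
growth_cagr = rolling_cagr(small_growth)

# Count the rolling 5-year windows in which small value beat small growth.
wins = sum(v > g for v, g in zip(value_cagr, growth_cagr))
print(f"value won {wins} of {len(value_cagr)} rolling windows")
```

The point of framing it this way is that a head-to-head rolling comparison can tell a very different story than a cumulative-since-2007 chart.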
Regardless, “value is dead” articles are fine by me – I actually love them. The more pain – perceived and actually experienced – the better for long-duration capital compounders, in expectation.
Anyway, we can probably all agree that value hasn’t “killed it” the past decade, regardless of how one measures the performance. Should we be worried about the future efficacy of value? Of course we should be. But let’s try and understand why it might work in the first place.
We always try and reiterate to investors that “open secret” factors, like value or momentum, are robust for a reason – their excess returns are associated with predictions from “rational” economic and “behavioral” economic theories. But “robust” doesn’t mean that these strategies grow money on trees. “Robust” means they generate excess returns, on average, and over the LONG HAUL. These excess returns are often compensation for various forms of risk. And risky bets don’t always pay off – otherwise, risky bets wouldn’t have the word “risky” in them.
Our overarching framework for “active” strategies (which includes “value”), is what we call the sustainable active framework. The basic idea is simple: you get paid to do things that are painful.
The source of the pain can be driven by several components: fundamental risk or mispricing that is costly to exploit (e.g., career risks or trading costs).
There is an enormous academic literature claiming that the value premium is driven by fundamental risk exposure, and another claiming that it is driven by mispricing that is difficult to arbitrage. Eugene Fama and Ken French are the fathers of the “risk-based hypothesis” and Lakonishok, Shleifer, and Vishny are the original gangsters on the “mispricing hypothesis” (good piece here on the subject).
So with that basic background, if we step back and look at the recent performance of value, which has arguably sucked (at least as measured via cheap book-to-market), we can look at 1) why that has happened, and 2) what do we think will happen on a go-forward basis.
As far as “why” value sucked, it can be boiled down to the two return drivers of the value premium – risk and mispricing. With respect to “risk”, a lot of firms in the cheap book-to-market category are old economy types with dying business models at risk of elimination. Part of the relative performance spread is likely due to the fact that the risk associated with these firms has been realized to the downside in many areas of the economy, and prices now reflect this reality – in short, the risk didn’t pay off. The other potential element is on the mispricing side. Historically value firms have had more positive earnings surprises and these surprises get a large return bounce when the market realizes the surprise and there is a revaluation (i.e., mispricing). This mispricing effect hasn’t been as strong the past 10 years and reactions to surprises have been more muted.
Let’s summarize: risk didn’t pay and mispricing didn’t pay —> value didn’t pay.
Now, what about on a go-forward basis? What would one expect from value stocks? Again, I don’t think there are any surprises here. Value stocks, which in our context means a portfolio of the cheapest stocks in the universe on some price-to-fundamentals measure, are still going to be fundamentally riskier (otherwise they wouldn’t be so cheap!), and we still believe human investors, on average, will overreact to the short-run situations of “crappy firms,” which are affected by “sentiment” shifts. In the end, we’re confident that the economic principles of “why value works” have not changed. Investors who buy dirtball-cheap stocks will continue to earn an expected excess premium because they are taking on more fundamental risk and probably more career risk – premiums that will probably pay off, on average, over the long haul.
Jack: Another major source of debate in the world of factor investing is whether factor timing can be successfully used to enhance long-term returns. Most everyone agrees that trying to time factors in the short-term is impossible, but some argue that slowly adding exposure to a factor when it is out of favor can enhance long-term returns. Do you believe there is any way investors can successfully add exposure to a factor like value during a down period like this to increase long-term returns?
Wes: I am not a fan of factor-timing. Factor investments should probably be strategic long-haul allocations that are best served diversified. And our favorite dish is value paired with momentum. That is the safest way to earn risk premium – be exposed to it.
If you made me do a factor-timing model I’d probably do something related to relative strength. See here for a piece we have on the subject.
Of course, embedded in stock investing is a beta factor bet. With respect to beta factor-timing we like timing it via trend-following methodologies – see here for a discussion on that.
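As a rough illustration of the kind of trend-following rule Wes alludes to for timing beta, here is a minimal moving-average filter. This is a generic sketch with a hypothetical lookback and made-up price series, not Alpha Architect’s actual model.

```python
def trend_signal(prices, lookback=10):
    """Simple moving-average trend filter.

    Risk-on when the latest price is above its `lookback`-period average,
    risk-off (e.g., move to cash) otherwise. A common generic rule, used
    here only to illustrate the concept.
    """
    if len(prices) < lookback:
        raise ValueError("not enough price history for the lookback")
    avg = sum(prices[-lookback:]) / lookback
    return "risk-on" if prices[-1] > avg else "risk-off"

# Hypothetical monthly index levels: an uptrend, so the filter stays invested.
print(trend_signal([100, 102, 101, 105, 107, 110, 108, 112, 115, 118]))
```

The appeal of a rule like this is behavioral as much as statistical: it forces a mechanical response instead of a discretionary one during drawdowns.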
Jack: More so than any other value metric, the price/book ratio has really struggled in the past decade. The strategies we follow that utilize price/book have been the worst performers during this downturn, trailing pretty much every other valuation ratio. Underperformance is obviously not enough reason to say something no longer works, but with price/book, there is much more than just that going on since the increase of intangible assets relative to tangible assets has resulted in the ratio offering less insight than it once did. So on one side of the debate you have significant academic research supporting price/book, but on the other, you have the reality that the world has changed and it may no longer make sense as a way to value companies. I know you don’t use price/book in your models, but I was wondering what your thoughts are on the ratio and whether it offers any value for investors today.
Wes: We wrote a fairly extensive piece called, “Alternative Facts about Formulaic Value Investing,” which speaks to many of the arguments against price-to-book investing and systematic value investing as a whole. Your readers might want to check that out for some additional context.
But to your specific question. Listen, all these “value” metrics seem to work over the long-haul and Jack Vogel and I have papers and books on the subject that make a simple, well-established claim: cheap stocks – regardless of the measure to identify cheapness – have historically earned higher returns.
On a relative performance basis, we think price/book is sub-par among value metrics. And many of the reasons for this are associated with the fact that the applicability of the ratio is fairly limited and there are some quirky accounting issues with the metric (OSAM has a good article on this). All that said, price/book is still a reasonable way to capture excess expected returns based on the discussion in the prior question. If you buy cheap companies based on price/book, you are probably buying a lot of risk and/or potentially some mispricing. So you will likely get paid for taking on this pain, out of sample. We see no reason to believe that cheap stock investing based on price/book is broken, we simply think it isn’t as compelling as other measures of “cheapness.”
Jack: The divergence in returns among value metrics has led many investors to conclude that using a composite of all of them may be a better approach than using a specific one. You prefer a single-metric approach and utilize EV/EBITDA. I was wondering why you prefer that approach, and what you think the pros and cons are of a single-metric approach vs. a value composite?
Wes: We try and honor economic and behavioral foundations as much as possible when conducting research, and systematically avoid complexity on the statistical/mathematical front, unless there is a crystal-clear value-add to such an approach. Our experience suggests that 1) focusing on first principles and 2) keeping things relatively easy to understand, helps investor outcomes more than other approaches that drift into the “black box” category. But that’s our situation. Other situations may dictate different approaches. We acknowledge this up front and don’t claim that our answer is the correct answer for everyone.
Long story short, we’ll take a strategy with strong economic foundations and solid evidence versus a strategy with weaker economic foundations and better evidence. To us, value investing is best when it is most businesslike (stealing a phrase from Ben Graham). And when we started this journey many years ago, the most “businesslike” and simplest way to assess the value of an enterprise was to figure out what the firm generates in operating income (~revenue minus cost of goods sold and SG&A), and then figure out how much it would cost to buy all the assets that generate this operating income. This is how private equity buyers look at firms because it helps untangle the complexity associated with different capital structures. We also like measures that appeal to private equity investors (i.e., “business buyers”) because we think the private/public market arbitrage is one of the reasons cheap public stocks earn excess returns – you can “free ride” on the private equity buyers’ arbitrage activity. Some of our internal research suggests that this benefit is highest for cheap EBIT/TEV firms versus other valuation metrics. Also, simple valuation metrics like EBIT/TEV or EBITDA/TEV capture the value concept succinctly and are widely applicable across various operating companies (the same cannot be said for B/M). We also like gross profits / TEV for similar reasons. There are obviously potential issues with all measures – nothing is perfect – but, at the margin, we like enterprise multiples over things like book/market.
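A minimal sketch of the enterprise-multiple idea described above: total enterprise value is roughly what a business buyer would pay for the whole operation, and operating income is expressed as a yield on it. The function name and the example figures are hypothetical.

```python
def ebit_tev_yield(ebit, market_cap, total_debt, cash,
                   preferred=0.0, minority_interest=0.0):
    """EBIT divided by total enterprise value (TEV).

    TEV = market cap + debt + preferred + minority interest - cash:
    roughly the price to buy the whole business, regardless of how
    its capital structure is arranged. Higher yields = cheaper firms.
    """
    tev = market_cap + total_debt + preferred + minority_interest - cash
    return ebit / tev

# Hypothetical firm (figures in $M): $150 of operating income on a
# $1,200 enterprise value (1000 equity + 400 debt - 200 cash).
y = ebit_tev_yield(ebit=150.0, market_cap=1000.0, total_debt=400.0, cash=200.0)
print(f"EBIT/TEV yield: {y:.1%}")  # 150 / 1200 = 12.5%
```

Note how the debt and cash adjustments do the “untangling” of capital structure that Wes mentions: two firms with identical market caps but different leverage get very different TEVs.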
On empirical grounds, we think the evidence supports our preference for enterprise multiples over other value metrics, which has been documented by other academic researchers. (e.g., here and here).
Of course, there is the concept of “combination” value metrics. We have a whole chapter dedicated to this in our Quantitative Value book. The way this works is you take a bunch of valuation metrics – that you already know work – and pool them together. Almost mechanically you’ll generate better risk-adjusted performance because the metrics aren’t perfectly correlated, so your standard deviation will go down. That said, your expected returns aren’t necessarily better, and are often worse depending on how the study is set up. This is certainly an interesting approach, but it comes with a potential backtesting bias because you include a handful of metrics you already know work and these metrics aren’t perfectly correlated. By construction you’ll lower your standard deviation, but it is unclear that an investor will experience this lower standard deviation out of sample. Novy-Marx has a paper on this subject and it really opened my eyes to backtests that proclaim victory on “pooled strategies” based on “Sharpe ratio” type success measures. So in our mind, if we can identify a valuation metric that is intuitive, broadly applicable, simple, and backed by evidence, we would prefer this approach versus “pooling” valuation metrics – for example, we could 50/50 EBIT/TEV and book-to-market. But why? If we aren’t huge fans of book-to-market on theoretical grounds, why would we dilute a valuation screening metric we believe in just because the backtests look better?
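For concreteness, a “pooled” composite of the kind described above typically averages each stock’s rank across several cheapness metrics. This is a generic sketch with made-up tickers and yields, not any specific published methodology.

```python
def composite_rank(metric_scores):
    """Average each stock's rank across several 'cheapness' metrics.

    `metric_scores` maps metric name -> {ticker: value}, where higher
    values mean cheaper (e.g., EBIT/TEV or book-to-market yields).
    Returns tickers ordered from cheapest composite to most expensive.
    """
    tickers = next(iter(metric_scores.values())).keys()
    totals = {t: 0.0 for t in tickers}
    for scores in metric_scores.values():
        # Rank within this metric: 0 = cheapest (highest yield).
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, t in enumerate(ordered):
            totals[t] += rank
    # Lowest total rank = cheapest on the composite.
    return sorted(totals, key=totals.get)

# Hypothetical yields for three stocks on two metrics.
metrics = {
    "ebit_tev": {"AAA": 0.15, "BBB": 0.08, "CCC": 0.11},
    "book_to_market": {"AAA": 0.6, "BBB": 0.9, "CCC": 0.4},
}
print(composite_rank(metrics))
```

The mechanical diversification benefit Wes describes falls out of exactly this averaging step: imperfectly correlated metric rankings smooth the composite, whether or not the combination adds anything out of sample.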
Another thing to consider is that we don’t do “pure value” screens. We incorporate a negative screening component at the top of our algo to kick out extreme trash. We also have a heavy quality component at the bottom of the funnel to sort the cheap stuff into cheap quality and cheap junk. When one considers these components of our approach, composite metrics add little value – even on a pure datamining basis – so why increase complexity if it isn’t necessary?
Anyway, in the end we can debate this forever. But none of it really matters. The trick with the value premium is not the intelligence of your value algo, but the discipline of the capital deployed into the value system and how the portfolio is constructed (see here). The smartest value algo in the world paired with short-horizon, performance-chasing capital will experience a terrible long-term investment record relative to the worst value algo in the world matched with long-horizon, disciplined capital. Hence the reason why our firm mission is to “empower investors through education” and not “build the smartest value algo in the world.”
Jack: The rise of big data has led many to question whether simple, fundamentals-based approaches to value will be as effective going forward as they have been historically. The fact that professional investors have access to things like credit card transaction data and satellite imagery could potentially give them a significant advantage in identifying changes in a company’s business before it is ever reflected in fundamentals. Some argue this increased access to information will lead to companies being priced more appropriately (i.e., more cheap companies will be cheap for a reason) and could reduce the outperformance of more traditional approaches to value investing going forward. What are your thoughts on big data and how it will impact value investing going forward?
Wes: Again, going back to first principles on why value works in the first place, you have fundamental risk premium and you have tough-to-arbitrage mispricing, often caused by delegated asset management incentives (see Shleifer and Vishny 1997 for an explanation on that component). More data and more timely data could certainly affect these two expected return drivers, at the margin, but I’d like to point out a fascinating study we’ve conducted. Ben Graham recommended a brain-dead value strategy in 1972 that he backtested back to the 1930s. The basic idea was to buy a basket of low P/E stocks that aren’t ridden with debt. Roger that, simple enough. He claimed 15%+ returns and a ton of volatility. We backtested the same strategy from 1973 to the present day – guess what? Same outcome: 15%+ returns with insane volatility. A lot changed from 1930 to today. Information is way better. Access is way better. Machine learning algorithms are now plausible with today’s insane computing power. But did it cause any change in the profile of brain-dead cheap stock investing? Nope, not really. So, if I had a gun to my head, I would say that the general mantra of “no pain, no gain” will ring true forever, regardless of the information environment or investment technology available in the marketplace. The value premium is arguably an economic equilibrium outcome determined by investor taste/preference for risk/reward, not a traditional information-based or flows-based arbitrage opportunity.
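The Graham-style screen Wes references boils down to two filters: low P/E and low leverage. A minimal sketch, with illustrative thresholds and made-up fundamentals (Graham’s own 1972 rules differed in detail):

```python
def graham_screen(stocks, max_pe=10.0, max_debt_to_equity=0.5):
    """Ben Graham-style 'brain-dead' value screen.

    Keep stocks that are cheap on earnings (low P/E) and aren't loaded
    with debt. Thresholds here are illustrative, not Graham's exact rules.
    """
    return [
        ticker for ticker, s in stocks.items()
        if s["pe"] <= max_pe and s["debt_to_equity"] <= max_debt_to_equity
    ]

# Hypothetical fundamentals for illustration.
universe = {
    "AAA": {"pe": 8.0, "debt_to_equity": 0.3},   # cheap, low debt -> passes
    "BBB": {"pe": 25.0, "debt_to_equity": 0.2},  # too expensive -> fails
    "CCC": {"pe": 7.0, "debt_to_equity": 1.5},   # cheap but debt-ridden -> fails
}
print(graham_screen(universe))  # -> ['AAA']
```

The point of the exercise is that something this crude has survived five decades of out-of-sample data, which is Wes’s argument for why better information alone hasn’t killed the premium.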
Does that mean we’ll stop doing research because it is a waste of time? Not at all. For one, we’d get bored. Second, one can theoretically improve their algorithm, at the margin, via intense research and development. Also, does that mean that unique data and unique data processing techniques are a waste of time? Not at all. There are probably ways to improve an algorithm, at the margin.
What would I focus on if I was worried about an eroding value premium? I’d try and assess if the risk appetite of investors has changed. Maybe the risk premium for holding fundamentally riskier companies is lower? That’s possible. Or maybe the cost of career risk has come way down and we identify long-duration capital in large size being dedicated to exploiting focused value premiums? i.e., people no longer make fun of value investors when they aren’t buying tech stocks in the late 90’s and are getting smoked on a relative performance basis by the QQQs, but instead they dump money into value hand over fist. That would certainly whittle away at the value anomaly on the “mispricing” component.
Jack: Thank you again for taking the time to speak with us. If investors want to find out more about you and Alpha Architect, what is the best place for them to go?
Wes: Our website is probably the best place to start: https://alphaarchitect.com/
For value investing specific articles, we recommend our research archive: https://alphaarchitect.com/research-category-list/
And you can always follow our team on Twitter:
 I put rational and behavioral economics in quotes because many argue (to include Dick Thaler) that behavior IS economics, not some sub-set of economics. Anyway, this is how the research seems to segment the two schools of thought.
 i.e., the past 200 years, with multiple bouts of 10yr+ relative performance droughts.
 By “active” we mean anything deviating from the global market-cap weighted portfolio or all risk assets (or “stocks” to keep it simple). (explainer here).
 In the context of value, the “tough to arbitrage mispricing” is generally considered to be an overreaction to short-run bad news.
 We have a paper on this subject and think price/book is mostly risk and less mispricing.
 Heck, we may even change our minds at some point down the road!