Open almost any portfolio construction software and the first input it asks for is an expected return. What do you think this asset will deliver next year? Next decade? The number feels like the foundation. It is treated as the starting point from which everything else — risk, allocation, optimization — is derived.
This is backwards. Expected return is the input the math trusts the least, because it is the input that is hardest to estimate well. We do not center our portfolio construction on return forecasts. The reason is not philosophical caution. It is the structure of estimation error.
The Asymmetry the Textbook Glosses Over
To estimate the expected return of an asset, you need to estimate its mean. To estimate the risk, you need to estimate its variance and its covariance with other assets. These sound like symmetric problems. They are not. The standard error of a mean estimated from historical data depends on the total time span of the sample, not on how often you observe it — sampling daily instead of monthly does not help. The standard error of a variance, by contrast, shrinks with the number of observations, so higher-frequency data over the same span makes the estimate far more precise.
Put concretely: thirty years of monthly returns is barely enough data to estimate the mean of US equities with any confidence — the 95% confidence interval on the long-run expected return is several percentage points wide. The same thirty years gives you a very precise estimate of the variance and covariance structure, especially if you use higher-frequency data.
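The asymmetry is easy to check with the standard i.i.d. formulas. A minimal sketch, using illustrative parameters (8% annual mean, 16% annual volatility — assumptions, not estimates) and the normal approximations for the standard errors of a sample mean and a sample variance:

```python
import numpy as np

# Illustrative parameters, roughly in the ballpark of US equities.
# These are assumptions for the demonstration, not fitted values.
mu, sigma, years = 0.08, 0.16, 30
n_months = years * 12

# Under i.i.d. normal returns:
#   SE of the annualized mean:     sigma / sqrt(years)      -- total span matters
#   SE of the annualized variance: sigma^2 * sqrt(2 / n)    -- observation count matters
se_mean = sigma / np.sqrt(years)
se_var = sigma**2 * np.sqrt(2.0 / n_months)

print(f"95% CI half-width, annual mean:     {1.96 * se_mean:.2%}")
print(f"95% CI half-width, annual variance: {1.96 * se_var:.4f}")
```

With these numbers the mean's 95% confidence interval is roughly ±5.7 percentage points — wider than most plausible risk premia — while the variance is pinned down to a few parts in ten thousand. Switching to daily data would shrink the variance error by another order of magnitude and leave the mean error unchanged.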
Why This Matters for Optimization
Mean-variance optimizers are exquisitely sensitive to expected return inputs. A 1% shift in your assumed return for one asset can produce double-digit shifts in the optimal weight assigned to it. This is the well-known "error maximization" property of unconstrained mean-variance optimization, documented by Michaud and others as far back as the late 1980s.
The optimizer is doing exactly what you asked it to do. You told it that asset A is expected to return 8% and asset B is expected to return 6%. It allocates accordingly. The problem is that your confidence in those numbers does not match the confidence the optimizer is imputing to them. The output looks precise. The input was noise.
An optimizer is not a forecasting tool. It is a magnifier. Feed it precise inputs and it produces useful portfolios. Feed it noisy inputs and it produces precise-looking noise.
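The error-maximization effect is easy to reproduce. The sketch below uses an illustrative three-asset covariance matrix and the closed-form unconstrained mean-variance solution (weights proportional to inv(Sigma) @ mu); all numbers are made up for the demonstration:

```python
import numpy as np

# Illustrative covariance matrix: assets A and B are highly correlated.
sigma = np.array([
    [0.040, 0.030, 0.010],
    [0.030, 0.035, 0.012],
    [0.010, 0.012, 0.020],
])
mu = np.array([0.08, 0.07, 0.04])  # assumed expected returns

def mv_weights(mu, sigma):
    """Unconstrained mean-variance weights, normalized to sum to 1."""
    raw = np.linalg.solve(sigma, mu)
    return raw / raw.sum()

w_base = mv_weights(mu, sigma)

# Nudge asset B's expected return up by a single percentage point.
w_bumped = mv_weights(mu + np.array([0.0, 0.01, 0.0]), sigma)

print("base weights:    ", np.round(w_base, 3))
print("bumped weights:  ", np.round(w_bumped, 3))
print("max weight shift:", np.round(np.abs(w_bumped - w_base).max(), 3))
```

In this example the one-percentage-point bump moves the weight on asset B by roughly thirty percentage points, because the optimizer exploits the high correlation between A and B to trade one against the other. The input moved by 0.01; the portfolio moved by an order of magnitude more.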
What to Build Around Instead
The inputs we can estimate well are variance and covariance. The inputs we cannot estimate well are expected returns. A portfolio construction process that respects this asymmetry will lean on the things it can measure and minimize its dependence on the things it cannot.
This is the logic behind several approaches we use:
- Risk-based weighting — minimum variance, risk parity, hierarchical risk parity — uses only the covariance matrix. It does not require any return forecast. It produces portfolios that are well-behaved in the risk dimension, where our data is strong.
- Black-Litterman starts from the market-implied returns and incorporates investor views only when those views are explicitly held and parameterized with a confidence level. The default behavior, in the absence of views, is the market portfolio — not a forecast.
- Regime-conditioned allocation adjusts exposures based on the current market environment without requiring a point estimate of future returns. The signal is the regime, not the forecast.
None of these approaches claims to predict next year's returns. They are constructed precisely to be robust to the fact that no one can.
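The first family — risk-based weighting — requires nothing beyond the covariance matrix. A minimal sketch of two such schemes, unconstrained minimum variance and naive inverse-volatility risk parity, using an illustrative covariance matrix:

```python
import numpy as np

# Illustrative covariance matrix for three assets (not estimated from data).
sigma = np.array([
    [0.0400, 0.0040, 0.0010],
    [0.0040, 0.0100, 0.0015],
    [0.0010, 0.0015, 0.0025],
])

# Unconstrained minimum variance: w proportional to inv(Sigma) @ 1.
# (In general these weights can be negative; a production version
# would add a long-only constraint.)
ones = np.ones(sigma.shape[0])
w_minvar = np.linalg.solve(sigma, ones)
w_minvar /= w_minvar.sum()

# Naive risk parity: weights inversely proportional to volatility.
inv_vol = 1.0 / np.sqrt(np.diag(sigma))
w_rp = inv_vol / inv_vol.sum()

print("min variance:", np.round(w_minvar, 3))
print("inverse vol: ", np.round(w_rp, 3))
```

Note that no expected return appears anywhere. Both schemes lean entirely on the covariance structure — the part of the data where, as argued above, estimation error is small.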
The Forecast We Do Trust
There is one return-related quantity we do treat as reasonably estimable: the unconditional risk premium of a broad asset class over a long horizon. Over thirty-plus-year windows, equities have produced a positive premium over cash. Long-duration government bonds have produced a positive premium over short-duration bills during many — but not all — regimes. These are weak forecasts, in the sense that the confidence intervals are wide, but they are not zero-information.
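"Weak but not zero-information" can be made concrete. A sketch, using assumed figures (a 5% annual premium over cash and 17% annual volatility — illustrative, not historical estimates), of how the confidence interval on a long-run premium narrows with the horizon:

```python
import numpy as np

# Assumed parameters for illustration only.
premium, vol = 0.05, 0.17

for years in (10, 30, 50, 100):
    se = vol / np.sqrt(years)  # standard error of the mean annual premium
    lo, hi = premium - 1.96 * se, premium + 1.96 * se
    print(f"{years:>3d} years: 95% CI = [{lo:+.1%}, {hi:+.1%}]")
```

Under these assumptions, even thirty years of data leaves the interval straddling zero; only at horizons around fifty years and beyond does the lower bound turn positive. The sign of the premium is credible at long horizons. A point forecast for next year is not.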
The right question is not "what will this asset return next year?" The right question is "how should I size my exposures given how confident I actually am about each input?"
That reframing changes the whole portfolio construction problem. You stop asking the data to tell you something it cannot tell you. You start asking it to tell you something it can — how things have moved together, how that movement has changed across regimes, and where you can build durable diversification given those facts. The portfolio that results is less dependent on prediction. It is more dependent on structure. That is the trade we are happy to make.
This piece discusses portfolio construction methodology for educational purposes. Referenced research findings describe historical outcomes and do not guarantee future results. This does not constitute investment advice. This does not create an advisory relationship.