ANALYSTS' FORECASTS ON EARNINGS PER SHARE (ENCYCLOPEDIA)
Entry type: Encyclopedia entries
Categories: Entrepreneurial finance
Analysts’ forecasts of firms’ earnings, and the related forecast errors, are widely discussed in the economic literature. Analysts’ forecasts are regarded as a proxy for rational expectations (RE) and are therefore expected to be considerably more useful than traditional time-series forecasts.
Timeliness and accuracy present an interesting trade-off for those who issue forecasts: they must choose between releasing a forecast promptly in response to new information and waiting, so as to incorporate additional information and produce a more accurate forecast later.
Information about earnings per share can be gathered from different sources, such as proxy statements, quarterly and annual reports, conference calls, and other management communications.
The information produced by analysts is used, among others, by investors in their trading decisions, which in turn affect market prices. If capital markets and the analyst forecasting process are efficient, market prices and analysts’ forecasts fully and immediately reflect the processed information. A forecast produced in this way is denoted as follows:

$$F_{i,j,T,t} = E\left[A_{i,T} \mid \Omega_{t}\right]$$

where $\Omega_{t}$ represents the information available at a horizon $t$ prior to the realization and $E\left[\,\cdot \mid \Omega_{t}\right]$ is the conditional expectation operator. Nevertheless, in the span between the forecast and the realization date, new information may arrive on the market, producing inefficiencies that lead to forecast errors. Therefore, the (relative) forecast error is the difference between actual and forecasted earnings per share, scaled by the stock price, and it is defined as:

$$FE_{i,j,T,t} = \frac{A_{i,T} - F_{i,j,T,t}}{P_{i,T-1}}$$

where $FE_{i,j,T,t}$ is the relative forecast error for company $i$ for year $T$, made $t$ months before the release date by analyst $j$; $A_{i,T}$ is the actual earnings per share of company $i$ in year $T$; $F_{i,j,T,t}$ is the earnings per share of company $i$ for year $T$ forecasted by analyst $j$, the forecast being made $t$ months before the release date; and $P_{i,T-1}$ is the stock price of company $i$ at the end of the previous year, $T-1$. Technically, there is a scale problem in measuring analysts’ forecasts and forecast errors when data measured in levels are used. This problem can persist across firms and over time.
For example, a firm with the same total earnings as another, but half as many shares outstanding, will have earnings per share that are twice as large. To adjust for differences in the magnitude of earnings (per share) and forecast errors across firms, the forecast error is divided by the stock price. This scaling assumes that errors in forecasting earnings per share, relative to the stock price, are reasonably homogeneous across firms.
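As a rough numerical illustration of this scaling, the following Python sketch uses hypothetical figures (chosen only for illustration, not drawn from actual data) for two firms with the same total earnings but different numbers of shares outstanding:

# A minimal sketch of the relative forecast error FE = (A - F) / P,
# using hypothetical figures only.

def forecast_error(actual_eps, forecast_eps, prior_price):
    """Price-scaled forecast error: (actual EPS - forecasted EPS) / prior year-end price."""
    return (actual_eps - forecast_eps) / prior_price

# Firm A: 100m total earnings, 100m shares -> EPS 1.00; forecast 1.10; year-end price 20.
# Firm B: same total earnings, half as many shares -> EPS 2.00; forecast 2.20; price 40.
fe_a = forecast_error(1.00, 1.10, 20.0)
fe_b = forecast_error(2.00, 2.20, 40.0)

# In EPS levels firm B's error (-0.20) is twice firm A's (-0.10);
# scaled by the stock price, both errors equal -0.005 and are directly comparable.
print(fe_a, fe_b)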
So far the literature has focused on the deviations between earnings and forecasts, which makes it easy to lose sight of how informative the forecasts are about actual earnings. In fact, analysts’ earnings forecasts are quite informative about actual earnings.
A large number of positive/negative forecast errors over time may reflect forecasts that are too low/high, but it can also arise for other reasons. For instance, firms whose actual earnings turn out to be smaller than forecasted may provide analysts with information before the announcement, and the forecasts can then be revised accordingly.
Forecast errors across firms and analysts are likely to differ for a variety of reasons, one being that earnings are more predictable in some industries than in others. It is plausible that earnings forecast errors are smaller in less volatile industries. For example, energy prices are subject to large, unpredictable swings, which obviously affect earnings. Although health care prices have risen substantially in recent years, the increases have been relatively persistent and therefore predictable. Health care can be virtually unaffected by recessions, while the demand for energy falls in recessions. Other industries, such as materials and consumer discretionary goods, also show low earnings around recessions. If recessions are not predicted, there is little reason to think that these decreases in earnings are predictable either.
More information becomes available as time goes on, and this information is substantial: eleven-twelfths of the year have passed by the time the one-month-ahead forecast is made. Firms announce earnings quarterly, so when the one-month-ahead forecast is made, earnings for the first three quarters of the year have already been announced and are known. As time passes and more information becomes available, the magnitude of forecast errors can therefore be expected to decrease. At the same time, the evidence indicates that analysts’ earnings forecasts made far from the release date are, on average, higher than actual earnings. In other words, whatever earnings an analyst forecasts for a firm, a better prediction is one showing a somewhat lower level of earnings. Such a predictable difference means the forecast is biased.
At first glance, it seems obvious that unbiased forecasts are the best forecasts. Yet there are many circumstances in which a biased forecast is the best one. A common criterion for evaluating forecast errors is the mean squared error. If a forecaster wants to minimize the expected mean squared forecast error, then an unbiased forecast is the best one. The squared error applies an increasing penalty to forecast errors farther from zero: an error twice as far from zero is four times as bad.
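A minimal Python sketch, using an arbitrarily chosen two-point distribution purely for illustration, shows that the expected squared loss is smallest at the mean forecast:

import numpy as np

# Illustrative distribution: earnings of 1 with probability 0.8 and 10 with probability 0.2.
outcomes = np.array([1.0, 10.0])
probs = np.array([0.8, 0.2])
mean = float(np.sum(probs * outcomes))   # 2.8

def expected_squared_loss(forecast):
    # Squared penalty: an error twice as large costs four times as much.
    return float(np.sum(probs * (outcomes - forecast) ** 2))

for f in [1.0, 2.0, mean, 4.0, 10.0]:
    print(f, expected_squared_loss(f))   # the loss is minimized at the mean, 2.8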
The unbiased forecast - the mean forecast - is not necessarily the best forecast in all circumstances. Suppose that an expert forecaster wants to forecast the value shown when a fair die is thrown. The mean forecast is the average of 1, 2, 3, 4, 5, and 6, which is 3.5. If the forecaster’s payoff depends on how close the forecast is to the actual value, the best forecast is indeed 3.5. On the other hand, if the forecaster gets paid only when the value shown by the die is the same as the value forecasted, this unbiased forecast guarantees that the forecaster will never be paid.
As a matter of fact, the die will never show the value 3.5. If the forecaster is paid only when the forecast equals the value thrown, and values from 1 to 6 are equally likely, any integer from 1 to 6 is equally good and 3.5 should never be predicted. While this is a simple example, the point is more general: the forecasted value depends on the analyst’s incentives and on the distribution of the data, and an unbiased forecast may not be the "best" forecast. There are also objectives similar to minimizing the expected squared error that lead to forecasts being "biased." If a forecaster wants to minimize the expected absolute deviation of the forecast error, then the median is the best forecast.
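The die example can be made concrete with a short Python sketch, under the assumed payoff rule that the forecaster earns 1 only on an exact match:

from fractions import Fraction

faces = range(1, 7)   # a fair six-sided die

def expected_payoff(forecast):
    # Payoff of 1 only when the die shows exactly the forecasted value.
    return sum(Fraction(1, 6) for face in faces if face == forecast)

print(expected_payoff(3.5))  # 0: the die never shows the unbiased forecast 3.5
print(expected_payoff(4))    # 1/6, the same for any integer from 1 to 6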
The absolute forecast error applies an increasing penalty to forecast errors farther from zero - a forecast error twice as far from zero is twice as bad. The costs of forecast errors increase linearly with the size of the error.
The forecast that minimizes the expected absolute forecast error is the median, not the mean (or, more precisely, not the arithmetic average). If the mean and the median are the same, this distinction does not matter. On the other hand, if the distribution is not symmetric, as the distribution of earnings is not, the median is a better forecast than the mean whenever the cost of a forecast error increases linearly with its size.
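Continuing the illustrative two-point distribution used in the sketch above, the following lines show that the median beats the mean under the absolute-error (linear) penalty:

import numpy as np

# Same illustrative skewed distribution: 1 with probability 0.8, 10 with probability 0.2.
outcomes = np.array([1.0, 10.0])
probs = np.array([0.8, 0.2])
mean, median = 2.8, 1.0   # more than half of the probability mass lies at 1

def expected_abs_loss(forecast):
    # Linear penalty: an error twice as large costs twice as much.
    return float(np.sum(probs * np.abs(outcomes - forecast)))

print(expected_abs_loss(median))  # 1.80
print(expected_abs_loss(mean))    # 2.88: the mean does worse under the linear penalty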
The median is the middle observation: it divides the forecast errors into two parts, with half of the observations above it and half below it. If the median forecast error is noticeably closer to zero than the average forecast error, the typical negative/positive forecast error is larger in magnitude than the typical positive/negative forecast error; in other words, the distribution of forecast errors is not symmetric. Consistently negative/positive values of skewness therefore indicate that negative/positive forecast errors are larger in magnitude than positive/negative ones, and the index of skewness measures how strongly the errors are skewed towards negative/positive values. Finally, kurtosis measures how concentrated a distribution is around the mean relative to the number of observations in the tails of the distribution. A positive (excess) kurtosis value indicates that the tails of the distribution contain more observations than those of the normal distribution.
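The summary statistics just described can be computed with standard tools; the following Python sketch uses simulated, left-skewed "forecast errors" rather than actual data:

import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
errors = 1.0 - rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # left-skewed sample

print("mean:    ", errors.mean())        # pulled down by a few large negative errors
print("median:  ", np.median(errors))    # noticeably closer to zero than the mean
print("skewness:", skew(errors))         # negative: negative errors dominate in magnitude
print("kurtosis:", kurtosis(errors))     # excess kurtosis > 0: fatter tails than the normal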
Empirical evidence on the distribution of analysts’ earnings forecasts and their relative errors, using data on US firms from 1990 to 2004, indicates a substantial asymmetry of earnings, earnings forecasts, and forecast errors. There is also strong empirical evidence that, a year before the earnings announcement, both the mean and the median forecasted earnings were higher than actual earnings. Such differences between earnings and forecasted earnings also persist across time periods and industries. In the month before the earnings announcement, the difference between the mean and the median is small. One question can therefore be raised: are there predictable differences between analysts’ earnings forecasts and actual earnings?
Moreover, this empirical evidence suggests that analysts’ forecasts made close to the earnings announcement decrease to a level below the actual earnings. The rationale for this reverse bias is that earnings greater than recent forecasts are interpreted as a positive earnings surprise, and consequently the stock price increases.
Almost the entire economic literature on analysts’ forecasts examines whether the forecasts are biased, finding that analysts overestimate earnings. This overestimation decreases as the earnings announcement approaches. Moreover, some research suggests that analysts switch from overestimating to underestimating just before the earnings announcement. Such near-term forecasts are intended to be helpful to a firm’s management, because the announcement of higher-than-forecasted earnings generates favourable publicity and a higher stock price after the announcement. Asking for forecasts that are neither too high nor too low on average seems like a sensible request, especially compared with asking for an accurate forecast. Even so, it is possible that analysts process the information available to them in the best possible way, but that some or all analysts have no incentive to produce forecasts that are correct, even on average. The average forecast error declines as the announcement of earnings for the year approaches.
The theme of analysts’ incentives has been analyzed in depth in the literature, which highlights in particular the fact that analysts do not make forecasts in isolation. Other analysts make forecasts as well, and the existence of other forecasts can affect an analyst’s forecast in many ways. Furthermore, an analyst’s ability may change over time: by making forecasts, analysts gain experience, and this affects the evolution of their forecast errors.
Rather than a set of isolated forecasts, the analysts’ activity may be viewed as a forecasting game in which the smallest forecast error wins (and receives a prize), while everyone else receives nothing. Such a forecasting game illustrates that an unbiased forecast may not be an analyst’s best forecast, and that the incentive may be merely to get "the closest possible" rather than to be accurate. If you are not the closest, it does not matter at all whether your forecast error is almost as good as the best or far away from it. More generally, any analyst’s forecast will depend on what he or she thinks other people will forecast, or on what others have already forecasted. A simple example is one in which two people guess someone else’s pick of a number between 0 and 10. The unbiased forecast is 5. Suppose that the first person picks 5. If the second person also picks 5, then he or she cannot win, only tie. A pick of either 4 or 6 increases the expected winnings of the second person if there is no payoff from tying. Neither 4 nor 6 is unbiased, but that does not matter: either number maximizes the expected winnings, and that is what matters in this game. This suggests that, even if analysts’ forecasts are biased, it is important to consider analysts’ incentives before denouncing them as "irrational" or as "ignoring information readily available to them". Many factors can explain a predictable nonzero forecast error; for instance, an analyst who performs poorly and is at risk of being fired is more likely to make a "bold" forecast that is unlikely to be correct but that will save the analyst’s job if it turns out to be right.
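A short Python simulation of the guessing game, under the assumed payoff rule that only the strictly closest guess wins and ties pay nothing, confirms the reasoning above:

from fractions import Fraction

targets = range(11)    # the other person's pick: an integer from 0 to 10, equally likely
first_pick = 5         # the first player chooses the unbiased forecast

def expected_win(second_pick):
    # Payoff of 1 only when the second pick is strictly closer to the target.
    wins = sum(1 for t in targets if abs(t - second_pick) < abs(t - first_pick))
    return Fraction(wins, 11)

print(expected_win(5))  # 0: matching the first pick can only tie, which pays nothing
print(expected_win(4))  # 5/11: wins whenever the target is 0, 1, 2, 3 or 4
print(expected_win(6))  # 5/11: wins whenever the target is 6, 7, 8, 9 or 10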
Bibliography
CICIRETTI R., BAGELLA M. and BECCHETTI L. (2007), "The Earning Forecast Error in the US and European Stock Markets", in The European Journal of Finance, Vol. 13(2), pp. 105-122, February 2007
CICIRETTI R., DWYER Jr. G. P. and HASAN I. (2009), "Investment Analysts’ Earnings Forecasts and Errors: A Summary of the Data", in Federal Reserve Bank of St. Louis Review, Vol. 91(5, Part 2), pp. 545-67, September/October 2009
CLARKE J. and SUBRAMANIAN A. (2006), "Dynamic Forecasting Behavior by Analysts: Theory and Evidence", in Journal of Financial Economics, Vol. 80, pp. 81-113, April 2006
GU Z. and WU J. S. (2003), "Earnings Skewness and Analyst Forecast Bias", in Journal of Accounting and Economics, Vol. 35(1), pp. 5-29, April 2003
HONG H. and KUBIK J. D. (2003), "Analyzing the Analysts: Career Concerns and Biased Earnings Forecasts", in Journal of Finance, Vol. 58, pp. 313-51, February 2003
KEANE M. P. and RUNKLE D. E. (1990), "Testing the Rationality of Price Forecasts: New Evidence from Panel Data", in American Economic Review, Vol. 80(4), pp. 714-35, September 1990
Editor: Rocco CICIRETTI
© 2010 ASSONEBB