Equities and the Economy: Another Intertemporal Anomaly
By John E. Golob
RWP 95-16, December 1995

Intertemporal optimization models of the macroeconomy are consistent with several features of the business cycle, and these models have become familiar tools for analyzing economic cycles and the propagation of economic shocks. Critics of this dynamic equilibrium approach have pointed out, however, that the models often fail to replicate important features of both labor and financial markets. This paper identifies another financial market anomaly in intertemporal optimization models, the equity-economy puzzle: the models generate a negative correlation between equity prices and future economic growth, whereas the correlation found empirically is positive.

The equity-economy puzzle tends to emerge in intertemporal models with high risk aversion. Because the equity premium puzzle has led researchers to consider models with high risk aversion, it is important to recognize that this modeling strategy can introduce another anomaly. The paper explains why high risk aversion generates the equity-economy puzzle. It also shows that an intertemporal optimization model with nonexpected utility preferences can be consistent with the positive correlation between equity markets and future economic growth.

Business Cycle Turning Points: Two Empirical Business Cycle Model Approaches
By Andrew J. Filardo and Stephen F. Gordon
RWP 95-15, December 1995

This paper compares a set of non-nested empirical business cycle models. The alternative linear models include a VAR and Stock and Watson's (1991) unobserved components model. The alternative nonlinear models include the time-varying transition probability Markov switching model (Filardo 1993) and an integration of the Markov switching model with the Stock and Watson model as proposed by Diebold and Rudebusch (1994) and Chauvet (1994). Generally, this paper finds that no one model dominates in a predictive sense at all times. The nonlinear models, however, tend to outperform the linear models around business cycle turning points. Econometrically, this paper applies the general model comparison methodology of Geweke (1994).

Exchange Rates in the Long Run
By Sean Becketti, Craig S. Hakkio, and Douglas H. Joines
RWP 95-14, December 1995

If Purchasing Power Parity holds in the long run, then real exchange rates are mean stationary. To test this hypothesis, monthly data on bilateral real exchange rates between the United States and five countries extending back to the 1920s are calculated. The null hypothesis of mean stationarity is tested against a variety of nonstationary alternatives. Our results strongly favor mean stationarity over models that permit long-run trends in real exchange rates. The data also favor stationarity over a unit root process with no drift. We show that the realized path of the real exchange rate lies predominantly within the prediction interval for a stationary AR(1) model, a result that is more consistent with stationarity than with a unit root. We develop simple statistics that make this intuitive reasoning more precise. Finally, the data contain no reliable evidence of discrete shifts in the mean of the real exchange rate. Thus, PPP appears to provide a reasonable characterization of the long-run behavior of national price levels and exchange rates.
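The prediction-interval reasoning can be illustrated with a small sketch. The Python example below uses made-up AR(1) parameters, not the authors' estimates: it simulates a stationary (log) real exchange rate and computes the fraction of the realized path that falls inside the 95 percent prediction band implied by the AR(1) model conditional on the initial observation. Under stationarity, that fraction should be high; under a unit root, the path would eventually wander outside a band of this width.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical AR(1) for the log real exchange rate:
#   q_t = mu + rho * (q_{t-1} - mu) + eps_t     (illustrative parameters only)
mu, rho, sigma = 0.0, 0.97, 0.03
T = 600  # roughly 50 years of monthly data

q = np.empty(T)
q[0] = mu
for t in range(1, T):
    q[t] = mu + rho * (q[t - 1] - mu) + sigma * rng.standard_normal()

# h-step-ahead conditional mean and variance given q_0 (standard AR(1) formulas)
h = np.arange(1, T)
cond_mean = mu + rho ** h * (q[0] - mu)
cond_var = sigma ** 2 * (1 - rho ** (2 * h)) / (1 - rho ** 2)

# Pointwise 95% prediction band; a stationary path should mostly stay inside it
lower = cond_mean - 1.96 * np.sqrt(cond_var)
upper = cond_mean + 1.96 * np.sqrt(cond_var)
inside = np.mean((q[1:] >= lower) & (q[1:] <= upper))
print(f"fraction of path inside 95% band: {inside:.2f}")
```

The paper's statistics formalize this comparison; the sketch only conveys the intuition that the band converges to a finite width when the process is mean stationary.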

Forecasting an Aggregate of Cointegrated Disaggregates
By Todd E. Clark
RWP 95-13, December 1995

This study examines the problem of forecasting an aggregate of cointegrated disaggregates. It first establishes conditions under which forecasts of an aggregate variable obtained from a disaggregate VECM will be equal to those from an aggregate, univariate time series model, and develops a simple procedure for testing those conditions. The paper then uses Monte Carlo simulations and an empirical example to examine how analysis of forecasting an aggregate might be affected by a failure to correct for cointegration. The Monte Carlo and empirical analyses indicate the effects of ignoring cointegration vary sharply with model parameterization. When the aggregate of the error correction coefficients is small, ignoring cointegration will not have large effects.

JEL Classification: C32, C22, C53

Bank Derivative Activity in the 1990s
By Ken Heinecke and Pu Shen
RWP 95-12, December 1995

This paper examines banks' motivations for entering derivative markets. The question matters for the following reason: if banks' main motivation for using derivatives is speculation, derivatives are likely to increase the risk to banks' capital and thus increase the cost of deposit insurance.

The first major finding of the paper is that currently available data are not very informative about banks' use of derivatives. We find no evidence that derivatives are used mainly for speculative purposes. There is some indication that users of derivatives are interested in expanding into non-traditional banking activities for the purpose of revenue enhancement. On the other hand, the data also indicate that these users tend to be more avid commercial lenders. A possible explanation for these relationships is that banks are using derivatives as hedging instruments. We search for evidence of such hedging activity as well as for measures of derivative users' risk attitudes. Based on our assumptions, the results of this paper give little support to the hedging hypothesis. Furthermore, judging from the credit risk they undertake, derivative users tend to be less risk averse than nonusers. We note that banks that are newcomers to the derivative industry tend to be more growth oriented than banks with a longer history of derivative use. They also seem to be less interested in their traditional business. These newcomer banks bear watching.

To summarize, the current Call Reports provide little information on how and why derivatives are being used in the banking industry. We see no obvious warning signs in the data, but we also find little supporting evidence from the data that derivatives have contributed to the safety and soundness of the banking industry. We conclude that better data are needed in this area.

Some Intranational Evidence On Output-Inflation Tradeoffs
By Gregory D. Hess and Kwanho Shin
RWP 95-11, November 1995

In a seminal paper, Lucas (1973) provided the theoretical relationship between aggregate demand and real output based on relative price confusion at the individual market level. Ball, Mankiw, and Romer (BMR, 1988) derive the same relation using a New Keynesian framework. Even though both theories predict a positive relationship between nominal shocks and cyclical movements in real output, they are distinguished by two notable differences. First, according to New Keynesian theory, nominal shocks have a smaller effect on real output for high inflation countries since prices are adjusted more frequently. Lucas' model has no implication for the level of inflation. Second, according to New Keynesian theory, a higher variance of relative prices, and hence an increase in uncertainty, will lead to a smaller effect of nominal shocks on real output since prices are set for shorter periods and adjusted more frequently. Lucas' model, however, makes the exact opposite prediction since a high variance of relative prices leads to more confusion in the market level equilibrium. By emphasizing the first implication of the New Keynesian theory, BMR obtain strong evidence supporting their model using international data.

In this paper we concentrate on the second difference between the New Keynesian theory and Lucas' model, which, we believe, distinguishes the two more clearly. We derive the individual market level equilibrium relationship, as well as the aggregate level one, for the Lucas model. We demonstrate that, as in BMR, the Lucas model and the New Keynesian model make similar predictions for the relationship between nominal and real variables, even at the disaggregate level.

We estimate, using cross-sectional data for the U.S., the crucial parameters of the relationship between aggregate nominal demand shocks and real output. The data used to estimate the market level model are nominal output, real output, and inflation for the 50 states plus the District of Columbia at the annual frequency over the period 1977-1991. The regression results suggest that the model fits the data well at the state level. Moreover, we find strong support for New Keynesian theory in that an increase in the variance of relative prices across states leads to a smaller effect of demand shocks on real output. We conclude that the Lucas model omits New Keynesian features of intranational data.

JEL Classification: E12, E23, E31, E32

Keywords: New Keynesian Theory, Lucas's Island Model

Measuring Business Cycle Features
By Gregory D. Hess and Shigeru Iwata
RWP 95-10, October 1995

Since the extensive work by Burns and Mitchell (1946), many economists have interpreted economic fluctuations in terms of business cycle phases. Given this, we argue that, in addition to the usual model selection criteria currently used in the profession, the adequacy of a univariate macroeconomic time series model should be judged by its ability to replicate the two most important business cycle features of the U.S. data--duration and amplitude. We propose a number of checks for whether univariate statistical models generate the business cycle features observed in U.S. GDP and find that many popular non-linear models for the log of real GDP are no better at replicating the duration and amplitude features of the data than a simple ARIMA(1,1,0).
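As a rough illustration of the kind of check proposed here, the sketch below simulates an ARIMA(1,1,0) for log real GDP with illustrative parameters (not the authors' estimates) and measures expansion durations and amplitudes with a crude positive-growth rule. The paper's procedures are considerably richer; this only shows what "duration" and "amplitude" mean as features of a simulated series.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ARIMA(1,1,0) for log real GDP: dy_t = c + phi * dy_{t-1} + eps_t
c, phi, sigma = 0.005, 0.3, 0.009  # illustrative quarterly values
T = 200
dy = np.empty(T)
dy[0] = c / (1 - phi)  # start at the unconditional mean growth rate
for t in range(1, T):
    dy[t] = c + phi * dy[t - 1] + sigma * rng.standard_normal()

# Crude turning-point rule: an expansion is a run of positive-growth quarters;
# its duration is the run length and its amplitude is cumulative log growth.
durations, amplitudes = [], []
run_len, run_sum = 0, 0.0
for g in dy:
    if g > 0:
        run_len += 1
        run_sum += g
    elif run_len > 0:
        durations.append(run_len)
        amplitudes.append(run_sum)
        run_len, run_sum = 0, 0.0
if run_len > 0:
    durations.append(run_len)
    amplitudes.append(run_sum)

print(f"mean expansion duration: {np.mean(durations):.1f} quarters")
print(f"mean expansion amplitude: {100 * np.mean(amplitudes):.1f} percent")
```

Repeating this simulation many times and comparing the simulated feature distributions with those of the actual data is the spirit of the checks the paper proposes.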

Keywords: business cycles, random walk with drift

How Wide Is the Border?
By Charles Engel and John H. Rogers
RWP 95-09, October 1995

New Estimates of the U.S. Economy's Potential Growth Rate
By George A. Kahn
RWP 95-08, October 1995

Using an Okun's law framework, this paper estimates potential growth for the 1990s as measured by both fixed- and chain-weighted indexes of GDP. Estimated potential growth rates are then decomposed into growth in labor productivity and growth in labor input using a regression analysis to separate secular from cyclical changes. Estimates of potential output and trend productivity growth for the 1990s are compared with estimates from earlier periods using both fixed and chain weights.

The first section of the paper compares the behavior of output, productivity, and employment during the current recovery with past recoveries, noting the unusually large contribution of productivity growth to output growth early in the current recovery. The second section uses a version of Okun's law to estimate the economy's potential growth rate. The third section uses an output identity to determine the relative contribution of productivity and employment growth to potential output growth.

The paper concludes that eliminating the substitution bias associated with fixed-weight measures of real GDP raises estimated potential GDP growth in the 1980s but lowers estimated potential GDP growth in the 1990s. As a result, potential growth is estimated to have slipped from roughly 2.5 percent per year in the 1980s to roughly 2.0 percent in the 1990s. Decomposing potential growth into productivity growth and growth in labor input shows that this slowdown has occurred despite a modest increase in estimated trend productivity growth. Based on chain-weighted data, trend productivity growth is shown to have increased from 0.9 percent per year in the 1980s to 1.2 percent in the 1990s--perhaps boosted modestly (but statistically insignificantly) by business downsizing and investment in new plant and equipment. Finally, the increase in productivity growth has not translated into an increase in potential output growth because of a secular decline in the growth rate of aggregate hours worked.
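The Okun's law step can be illustrated schematically. Using simulated data with hypothetical coefficients (not the paper's estimates), the sketch below regresses the change in unemployment on output growth and backs out potential growth as the growth rate consistent with an unchanged unemployment rate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data under a hypothetical Okun's-law relation:
#   du_t = a + b * g_t + e_t,  with a = 1.25, b = -0.5,
# so potential growth is g* = -a/b = 2.5 percent by construction.
a_true, b_true = 1.25, -0.5
g = 2.5 + rng.standard_normal(80)             # annual real GDP growth, percent
du = a_true + b_true * g + 0.2 * rng.standard_normal(80)  # change in unemployment

# OLS of du on a constant and g; potential growth is where fitted du equals zero
X = np.column_stack([np.ones_like(g), g])
a_hat, b_hat = np.linalg.lstsq(X, du, rcond=None)[0]
g_star = -a_hat / b_hat
print(f"estimated potential growth: {g_star:.2f} percent")
```

The paper's specification is richer (and is run on both fixed- and chain-weighted GDP), but the logic of recovering potential growth from the zero of the fitted Okun relation is the same.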

JEL Classification: O47

Intranational Business Cycles in the United States
By Gregory D. Hess and Kwanho Shin
RWP 95-07, September 1995

We employ intranational data for the United States from 1978-1991 to re-explore two discrepancies between international real business cycle models and the data (so-called 'anomalies') that have been highlighted by Backus, Kehoe and Kydland (1993). The benefit of our approach is that the analysis of business cycles within one country is a natural experiment for understanding the 'anomalies' found in international business cycles since, as in the model, there are no tariffs or trade barriers between states in the U.S. and there is only one currency.

Similar to the evidence for international business cycles, but contrary to the theory, we find that consumption is less contemporaneously correlated across states than output. This observed deficiency of intratemporal (contemporaneous) risk sharing is referred to as the 'quantity anomaly'. Unlike the international data, however, we find that the 'price anomaly' does not hold for intranational data; namely, the terms of trade for states are not more volatile than output or productivity shocks. Furthermore, we present additional evidence, based on the relationships between labor earnings, non-labor earnings, and government transfers, which supports the view that the observed amount of intratemporal risk sharing is quite limited compared with the observed amount of intertemporal risk sharing.

Keywords: open economy RBC models, risk sharing, and price and quantity anomalies

Why Is the Forward Exchange Rate Forecast Biased? A Survey of Recent Evidence
By Charles Engel
RWP 95-06, September 1995

Forward exchange rate unbiasedness is rejected in tests from the current floating exchange rate era. This paper surveys advances in this area since the publication of Hodrick's (1987) survey. It documents that the change in the future exchange rate is generally negatively related to the forward premium. Properties of the expected forward forecast error are reviewed. Issues such as the relation of uncovered interest parity to real interest parity, and the implications of uncovered interest parity for cointegration of various quantities, are discussed. The modeling of and testing for risk premiums are surveyed. Included in this area are tests of the consumption CAPM, tests of the latent variable model, and portfolio-balance models of risk premiums. General equilibrium models of the risk premium are examined and their empirical implications explored. The survey does not cover the important areas of learning and peso problems, tests of rational expectations based on survey data, or the models of irrational expectations and speculative bubbles.
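The regression behind the surveyed evidence relates the future change in the exchange rate to the forward premium; unbiasedness implies a slope of one, while the literature typically finds a negative slope. The sketch below simulates data constructed to have a negative slope (the numbers are purely illustrative, not estimates from any dataset) and estimates the regression by OLS.

```python
import numpy as np

rng = np.random.default_rng(3)

# The standard unbiasedness regression: ds_{t+1} = alpha + beta * (f_t - s_t) + e_t.
# Under unbiasedness beta = 1; the surveyed evidence typically finds beta < 0.
# Simulated data are built with a true slope of -1 (illustrative only).
T = 300
fp = 0.002 + 0.001 * rng.standard_normal(T)              # forward premium f_t - s_t
ds = 0.001 - 1.0 * fp + 0.005 * rng.standard_normal(T)   # future depreciation

X = np.column_stack([np.ones(T), fp])
alpha_hat, beta_hat = np.linalg.lstsq(X, ds, rcond=None)[0]
print(f"estimated beta: {beta_hat:.2f}")
```

The survey's subject is precisely why estimates like this one come out negative rather than equal to one: risk premiums, expectational errors, or both.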

Money Is What Money Predicts: The M* Model of the Price Level
By Gregory D. Hess and Charles S. Morris
RWP 95-05, June 1995

Over the past twenty years, the monetary aggregates used by the Federal Reserve as indicators of economic activity and inflation have changed several times. Each of the changes in the measures of money was sparked by a breakdown in the fit of empirical money demand functions. The Federal Reserve's strategy following these breakdowns has been to redefine money by simply adding new assets to the old definitions. The criterion in each case was whether adding the new assets produced an empirically stable money demand function. Unfortunately, while a stable demand for money is a worthwhile ultimate goal, history has demonstrated that it is also an elusive one.

In this paper, we propose an alternative objective for identifying a useful monetary aggregate--the price level. Our monetary aggregate is a weighted-sum aggregate where the weights on the component assets vary across assets and over time such that the aggregate is the best predictor of the price level. The only assumption made in choosing the weights is that the Quantity Theory of Money holds in the long run. We find that the new monetary aggregate, M*, has a stable velocity in the long run and that it predicts the long-run price level and rate of inflation better than M2.
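The weight-selection idea can be sketched with least squares standing in for the paper's long-run quantity-theory restriction. Everything in the example below is hypothetical: three made-up component assets and a price level generated so that one particular weighted sum tracks it.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical component assets (say, currency, deposits, money funds) and a
# log price level generated to move one-for-one with a weighted sum of them.
T = 120
components = rng.standard_normal((T, 3)).cumsum(axis=0)  # asset levels over time
w_true = np.array([0.6, 0.3, 0.1])
p = components @ w_true + 0.1 * rng.standard_normal(T)   # log price level

# Choose weights so the aggregate best predicts the price level; least squares
# here stands in for the paper's long-run quantity-theory construction.
w_hat = np.linalg.lstsq(components, p, rcond=None)[0]
m_star = components @ w_hat
print("estimated weights:", np.round(w_hat, 2))
```

In the paper the weights also vary over time and the fit is disciplined by the Quantity Theory holding only in the long run; the sketch conveys just the core idea of letting price-level prediction, rather than money demand stability, pick the aggregate.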

Central Bank Intervention and the Volatility of Foreign Exchange Rates: Evidence from the Options Market
By Catherine Bonser-Neal and Glenn Tanner
RWP 95-04, April 1995

This paper tests the effects of central bank intervention on the ex ante volatility of $/DM and $/Yen exchange rates. In contrast to previous research which employed GARCH estimates of conditional volatility, we estimate ex ante volatility using the implied volatilities of currency options prices. We also control for the effects of other macroeconomic announcements. We find little support for the hypothesis that central bank intervention decreased expected exchange rate volatility between 1985 and 1991. Federal Reserve intervention was generally associated with a positive change in ex ante $/DM and $/Yen volatility, or with no change. Perceived Bundesbank intervention did not alter $/DM ex ante volatility in any of the periods, while perceived Bank of Japan intervention was associated with positive changes in ex ante $/Yen volatility during the 1985-91 period as a whole and during the February 1987 to December 1989 post-Louvre Accord subperiod.
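Extracting ex ante volatility from option prices amounts to inverting an option pricing formula. The sketch below uses the standard Black-Scholes call formula (the paper's currency options would call for a Garman-Kohlhagen variant that also discounts by the foreign interest rate) and recovers the implied volatility by bisection; all the inputs are illustrative.

```python
import math

def bs_call(S, K, r, T, sigma):
    """Black-Scholes European call price (a stand-in for the currency-option
    formula; Garman-Kohlhagen would also discount by the foreign rate)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, r, T, lo=1e-4, hi=2.0, tol=1e-8):
    """Back out the volatility that reproduces an observed option price.
    Bisection works because the call price is increasing in sigma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: price an option at 12% volatility, then recover that
# volatility from the price alone.
price = bs_call(S=1.50, K=1.50, r=0.05, T=0.25, sigma=0.12)
print(f"recovered implied vol: {implied_vol(price, 1.50, 1.50, 0.05, 0.25):.4f}")
```

Applied to market prices rather than a round trip, the recovered volatility is the market's ex ante volatility over the option's life, which is what the paper tracks around intervention episodes.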

Direct Tests of Index Arbitrage Models
By Robert Neal
RWP 95-03, March 1995

Previous tests of stock index arbitrage models have rejected the no-arbitrage constraint imposed by these models. This paper provides a detailed analysis of actual S&P 500 arbitrage trades and directly relates these trades to the predictions of index arbitrage models. An analysis of arbitrage trades suggests that (i) short sale rules are unlikely to restrict arbitrage, (ii) the opportunity cost of arbitrage funds exceeds the Treasury Bill rate, and (iii) the average price discrepancy captured by arbitrage trades is small. Tests of the models provide some support for a version of the arbitrage model that incorporates an early liquidation option. The ability of these models to explain arbitrage trades, however, is relatively low.

How Reliable Are Adverse Selection Models of the Bid-Ask Spread?
By Robert Neal and Simon Wheatley
RWP 95-02, March 1995

Theoretical models of the adverse selection component of bid-ask spreads predict that the component arises from asymmetric information about a firm's fundamental value. We test this prediction using two well known models [Glosten and Harris (1988) and George, Kaul, and Nimalendran (1991)] to estimate the adverse selection component for closed-end funds. Closed-end funds hold diversified portfolios and report their net asset values on a weekly basis. Thus, there should be little uncertainty about their fundamental values and their adverse selection components should be minimal. Estimates of the component from the two models, however, average 19 and 52 percent of the spread. These estimates, while smaller than corresponding estimates from common stocks, are large enough to raise doubts about the reliability of these models.

Small Sample Properties of Estimators of Nonlinear Models of Covariance Structure
By Todd E. Clark
RWP 95-01, March 1995

This study examines the small sample properties of GMM and ML estimators of non-linear models of covariance structure. The study focuses on the properties of parameter estimates and the Hansen (1982) and Newey (1985) model specification test. It uses Monte Carlo simulations to consider the properties of estimates for some simple factor models, the Hall and Mishkin (1982) model of consumption and income changes, and a simple Bernanke (1986) decomposition model. This analysis establishes and seeks to explain a number of results. Most importantly, optimally weighted GMM estimation yields some biased parameter estimates, and GMM estimation yields a model specification test with size substantially greater than the asymptotic size.
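A minimal version of the estimation problem can be sketched with an exactly identified one-factor model, where the loadings follow from the off-diagonal covariances by the method of moments. This is far simpler than the paper's optimally weighted GMM, but it shows the same object of study in miniature: the small-sample distribution of an estimator built from sample covariances. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# One-factor model: x_i = lam_i * f + e_i, i = 1..3, with Var(f) = 1.
# The off-diagonal covariances identify the loadings: sig_ij = lam_i * lam_j,
# so lam_1 = sqrt(sig_12 * sig_13 / sig_23), etc. (method of moments).
lam = np.array([1.0, 0.8, 0.6])
T, reps = 100, 500
est = np.empty(reps)
for r in range(reps):
    f = rng.standard_normal(T)
    x = np.outer(f, lam) + 0.5 * rng.standard_normal((T, 3))
    S = np.cov(x, rowvar=False)                 # sample covariance matrix
    est[r] = np.sqrt(S[0, 1] * S[0, 2] / S[1, 2])

print(f"mean estimate of lam_1 over {reps} samples: {est.mean():.3f} (true 1.0)")
```

Tabulating the Monte Carlo distribution of such estimates, and of the associated specification test, against their asymptotic counterparts is the kind of exercise the paper carries out for its richer, overidentified models.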

Keywords: GMM, ML, covariance structure, Monte Carlo