Using Lagged Outcomes to Evaluate Bias in Value-Added Models / Raj Chetty, John N. Friedman, Jonah Rockoff.
- Format:
- Book
- Author/Creator:
- Chetty, Raj.
- Series:
- Working Paper Series (National Bureau of Economic Research) no. w21961.
- NBER working paper series no. w21961
- Language:
- English
- Physical Description:
- 1 online resource : illustrations (black and white)
- Place of Publication:
- Cambridge, Mass. : National Bureau of Economic Research, 2016.
- Summary:
- Value-added (VA) models measure the productivity of agents such as teachers or doctors based on the outcomes they produce. The utility of VA models for performance evaluation depends on the extent to which VA estimates are biased by selection, for instance by differences in the abilities of students assigned to teachers. One widely used approach for evaluating bias in VA is to test for balance in lagged values of the outcome, based on the intuition that today's inputs cannot influence yesterday's outcomes. We use Monte Carlo simulations to show that, unlike in conventional treatment effect analyses, tests for balance using lagged outcomes do not provide robust information about the degree of bias in value-added models for two reasons. First, the treatment itself (value-added) is estimated, rather than exogenously observed. As a result, correlated shocks to outcomes can induce correlations between current VA estimates and lagged outcomes that are sensitive to model specification. Second, in most VA applications, estimation error does not vanish asymptotically because sample sizes per teacher (or principal, manager, etc.) remain small, making balance tests sensitive to the specification of the error structure even in large datasets. We conclude that bias in VA models is better evaluated using techniques that are less sensitive to model specification, such as randomized experiments, rather than using lagged outcomes.
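
The summary's first point — that correlated shocks can induce a spurious relationship between estimated VA and lagged outcomes even under random assignment — can be illustrated with a minimal Monte Carlo sketch. This is not the paper's actual simulation design; the model and all parameter values (`J`, `n`, the variance components) are illustrative assumptions, and the VA estimator is a naive class mean:

```python
import numpy as np

rng = np.random.default_rng(0)
J, n = 500, 20                       # teachers, students per class (small n is typical)
sigma_mu, sigma_s, sigma_e = 0.1, 0.1, 0.5

mu = rng.normal(0, sigma_mu, J)      # true teacher value-added
shock = rng.normal(0, sigma_s, J)    # common (e.g. school-year) shock

# Lagged outcomes: students are randomly assigned (no sorting on ability),
# but their prior scores are hit by the same common shock.
y_lag = shock[:, None] + rng.normal(0, sigma_e, (J, n))
# Current outcomes: true teacher effect + the same shock + student noise.
y_cur = mu[:, None] + shock[:, None] + rng.normal(0, sigma_e, (J, n))

# Naive VA estimate: class mean of current outcomes (estimation error does
# not vanish, since n per teacher stays small even as J grows).
va_hat = y_cur.mean(axis=1)

# Balance test: regress students' lagged scores on their teacher's VA estimate.
x, y = np.repeat(va_hat, n), y_lag.ravel()
beta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(f"balance coefficient: {beta:.3f}")  # clearly nonzero despite random assignment
```

With zero sorting bias by construction, a test that treated this coefficient as a bias measure would falsely flag the VA model; its magnitude depends on the assumed shock and error variances, which is the specification sensitivity the summary describes.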
- Notes:
- Print version record
- February 2016.