Audrey Spalding
I had no idea that there were so many witches in Romania. Or that European politicians (including French President Nicolas Sarkozy) often go to witches to seek advice.

This is exactly why I listen to the Freakonomics podcast, which highlights the ways that economics can provide insight into seemingly inexplicable situations. Recently, Freakonomics discussed efforts in Romania to fine witches if their predictions fail to come true. A proposed punishment for repeated false predictions could mean six months to three years in jail.

I suppose that if you acted on a false prediction, you would want to punish the person who led you astray. But think of all of the people and organizations who make predictions that affect the way our economy runs. We don't penalize, say, politicians, economic development officials, or coalition groups when the promises they make fail to materialize.

As Stephen Dubner, host of the Freakonomics podcast, put it, "I don’t care if you’re anti-witch or pro-witch or witch-agnostic. Why should witches be the only people held accountable for bad predictions?"

In Missouri, it isn't very hard to find evidence of bad economic development predictions. The recent Mamtek scandal is one example. The 2006 prediction that the Ballpark Village development in downtown St. Louis would result in more than $700 million in economic impact looks unlikely, to put it kindly. And, for a recent example, we have the ever-changing job estimates associated with a proposal to dedicate $300 million in state tax credits to construct warehouses and facilities.

Consider also a state audit report that found, among many other problems, that Missouri's Low Income Housing Tax Credit is much more costly than initially predicted. How about the overly rosy economic growth assumptions used to sell Tax Increment Financing (TIF) projects? An East-West Gateway Council of Governments study found that "broad measures of regional economic outcomes strongly suggest that massive tax expenditures to promote development have not resulted in real growth" (emphasis mine).

Of course, I'm not advocating that we throw politicians and economic development officials in jail for making the wrong promises. But I would suggest, for the health of Missouri's economy, that we start holding these people responsible for their predictions.

As Freakonomics co-host Steve Levitt points out in the podcast, people have every incentive to make absurd predictions:
So, most predictions we remember are ones which were fabulously, wildly, unexpected and then came true. Now, the person who makes that prediction has a strong incentive to remind everyone that they made that crazy prediction which came true. ...But if you're wrong, there's no person on the other side of the transaction who draws any real benefit from embarrassing you by bringing up the bad prediction over and over.

Levitt's point reminds me of the St. Louis Regional Chamber and Growth Association's outlandish predictions. The RCGA frequently issues press releases touting incredible job and investment numbers. Sometimes, the message of one RCGA study (say, that the region needs to build millions more in warehouse space) conflicts with another RCGA press release (that the region has an abundance of cheap warehouse space). The agency clearly isn't worried about making an unlikely prediction, either.

I also wonder about the Missouri Department of Economic Development, and the state legislature's propensity to create tax credit programs in the hopes of attracting jobs to the state. Audit reports have shown that these tax credits are more expensive than anticipated, and that the state gets little in return. And yet, in the face of bad earlier predictions (and even blatant overstatements), state legislators continue to fail to pass substantive tax credit reform.

A solution that Freakonomics proposes is a little unexpected but elegant. We are all familiar with baseball players' batting averages. Let's apply those to people who make economic development predictions.

Consulting organizations should report their track record of success (and failure). What if every estimate of job and investment creation the RCGA publishes had to be accompanied by a percentage showing the accuracy of the agency's previous estimates? What if, when contemplating creating new tax credit programs, we considered whether existing programs delivered on the promises used to create them?
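The batting-average idea is simple enough to sketch in a few lines of code. The figures and the 25 percent tolerance below are purely hypothetical, chosen for illustration; an actual scorecard would need agreed-upon definitions of a "hit."

```python
# A minimal sketch of a prediction "batting average": the share of an
# organization's past projections that came reasonably close to reality.
# All numbers here are hypothetical, for illustration only.

def batting_average(predictions, tolerance=0.25):
    """Fraction of predictions whose actual outcome landed within
    `tolerance` (here, 25 percent) of the projected value."""
    hits = sum(
        1 for projected, actual in predictions
        if projected > 0 and abs(actual - projected) / projected <= tolerance
    )
    return hits / len(predictions)

# Hypothetical track record: (projected jobs, actual jobs)
track_record = [
    (1000, 950),   # within 25% of the projection: a hit
    (5000, 1200),  # wildly off
    (700, 600),    # within 25%: a hit
    (2000, 400),   # wildly off
]

print(f"Batting average: {batting_average(track_record):.0%}")  # → 50%
```

Publishing a number like this alongside each new projection would let readers weight the claim by the claimant's history, which is the whole point of the proposal.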

If we are considering whether hundreds of millions of taxpayer dollars should be allocated to a particular project, it is not enough to take proponents' claims as fact, especially if those organizations have a track record of poor prediction. We need to know how frequently those predictions actually become reality.

We wouldn't throw anyone in jail. We might find that some organizations are really good at making predictions. And, like Romanians burned by a bad prediction from a witch, we could stop relying on organizations and individuals that provide wildly unreliable predictions.
