Monday, April 29, 2013

042913 - Knock me over with a feather

Public Debt Overhangs: Advanced-Economy Episodes Since 1900
Carmen M. Reinhart, Vincent R. Reinhart, and Kenneth S. Rogoff

Journal of Economic Perspectives - Volume 26, Number 3 - Summer 2012 - Pages 69-86

When I read this article a few months ago I was not floored by the results.  Actually, I was rather disappointed that the conclusions were not more dramatic.  It seems intuitive that high public debt would impede core economic growth, just as it seems intuitive that high debt could be "evaporated" with high inflation.  Either way, high debt has never been considered a healthy contribution to economic growth, especially when the debt has been non-productive.  After I read the article I took time to pause and think about its importance.  Glad I did: the contents of the article are not as impressive as the life lesson now in play.

This article was, honestly, long, detailed, and boring.  What else would you expect from such noted and accomplished authors?  Nearly a quarter of the first page of the article is dedicated to the accomplishments of these distinguished individuals.  The article is also flawed, according to some folks who claim there was a math error in the spreadsheet used to calculate the results.  Heresy?  Perhaps, especially given the aggressiveness of the accusations and responses.  I am not one to judge.  Certainly, I keep my HP-12c's close at hand and have been the victim of a flawed calculation once or twice (I know to double- and triple-check results thanks to my old boss Ed Z.).  No excuses, especially when the accuracy of the data can be tested with the infallible 12c.

I found the article too exhaustive in the data it contained.  The results are expected: a small percentage change can have a huge effect, even if that percentage is only 5.0%.  If memory serves me correctly, or my spreadsheet is on target, it was about 5.0% of mortgages going into default that caused the mortgage crisis.  This precipitated a greater crisis, and here we are today.  Sometimes the common result belies its magnitude or, as we have seen so often, the outlier of the statistical analysis.

Yet we believe flawed data we see daily without argument.  Those "nut jobs" who should be wearing tin-foil hats, claiming that the government data is wrong!  (Secret: there is a lot of merit to their arguments, and they may be right.)  Is this the stuff of conspiracy theorists, or could there be merit in their argument?  Perhaps the bigger question is: does this even matter?

How many angels can fit on the head of a pin?  An often repeated question from my graduate professor of statistics, meaning: does it really matter?  Certainly, getting the correct answer is important, especially when you are a distinguished professor or a highly paid representative making money off your analysis and publication.  However, a consistent flaw in the data may also serve to support the data.  The theory is this: if the method has a flaw, the flaw is consistent, and the overall result is still indicative of what the correctly calculated result would be, why care?  For example, suppose the number of unemployed is always wrong, but consistently wrong throughout the historic data series.  Then the unemployment rate would be consistently wrong as well.  Say the error is 5.0% across the board.  Population: 300,000,000 +/- 5.0%, or +/- 15,000,000.  Number eligible to work: 35-40% of the population, or 105-120,000,000 people.  Number of unemployed at a 5.0%-5.5% rate: in millions, either 5.3, 5.8, 6.0, or 6.6.  The difference from 5.3 million to 6.6 million people is 1.3 million, roughly a 0.4%-0.5% difference against the population.  Think about the revisions to previous numbers made with the release of each new data point.  Few people pay attention to the revisions, because that is old news.  I cannot remember a data set that was not revised, re-based, or changed that caused an uproar.
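The consistent-error argument above can be put in a few lines of code.  This is a toy sketch, not official data: the labor-force and unemployment figures are the ballpark numbers from the paragraph, and the 5% bias is an assumed, uniform overcount.

```python
# Toy sketch of the consistent-error argument. All numbers are
# hypothetical, matching the ballpark figures in the text.
labor_force = [105.0, 110.0, 115.0, 120.0]   # millions eligible to work
unemployed  = [5.25, 5.8, 6.0, 6.6]          # millions, "true" counts
bias = 1.05                                   # assumed consistent 5% overcount

true_rates   = [u / lf for u, lf in zip(unemployed, labor_force)]
biased_rates = [(u * bias) / lf for u, lf in zip(unemployed, labor_force)]

# Every point in the biased series is off by the SAME factor, so the
# shape of the series (rises, falls, turning points) is preserved.
for t, b in zip(true_rates, biased_rates):
    assert abs(b / t - bias) < 1e-12
```

The assertion is the whole point: a uniform multiplicative error shifts the level of the rate, but every relative change in the series survives intact.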

OK, so I tortured this with math.  The bottom line is that if the calculation error is consistent across the data set, the error is lost in the magnitude of the numbers.  Either way it is an estimate, and the error should not detract from the overall message.

Bring out one of my favorite quotes, attributed to the 19th-century British Prime Minister Benjamin Disraeli (1804–1881): "Lies, damned lies, and statistics."  Think about that most often cited statistical measure, the correlation coefficient.  People get really excited about this number when they hear it.  I happen to like to see it in a graph.  The truth is, something that tracks or doesn't track another thing may be truly meaningless.  Something with a .95 correlation coefficient over a period x can have a .50 correlation coefficient over a period x+y.  The statistical result is correct; the implication of its significance may be deceptive.
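The window-dependence of correlation is easy to demonstrate with made-up numbers.  The two series below are purely illustrative: they track each other closely over the first window and then decouple, so the same correlation coefficient tells two very different stories depending on the sample.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical series: move together early, then decouple.
a = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2]
b = [1.1, 2.0, 3.2, 3.9, 5.1, 1.0, 2.0, 5.0, 1.5, 4.0]

early = pearson(a[:5], b[:5])   # very high over the first window
full  = pearson(a, b)           # much weaker over the full sample
assert early > 0.95
assert abs(full) < 0.5
```

Both numbers are correct statistics; only the choice of window differs.  That is exactly the trap described above.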


As a quick aside, when I was trading on a desk, I developed computer-based models to value various securities.  Australian options come to mind as one instrument whose day count differed from the average traded option.  We used the model for analysis, to check pricing, and to verify the quoted price at the time of trade.  A young and eager trader was thrilled to report having done a trade in this option.  I asked if they had used the model, and they declared they had used the one on the common industry data provider.  Needless to say, the big-name data provider used the wrong day count in its calculation.  However, since the seller and the buyer used the same flawed calculation, the difference was negligible.  Blind squirrels finding acorns, angels on the head of a pin, or dumb luck: the error existed, and the result was negligible.
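The same-flaw-on-both-sides point can be sketched with a simple-interest accrual.  Everything here is hypothetical: the notional, rate, and the 360- versus 365-day bases are illustrative stand-ins, not the actual contract terms from the anecdote.

```python
# Hedged sketch of the day-count anecdote; figures are illustrative.
def accrued_interest(notional, annual_rate, days, basis):
    """Simple-interest accrual under a given day-count basis."""
    return notional * annual_rate * days / basis

notional, rate, days = 1_000_000, 0.06, 91

correct = accrued_interest(notional, rate, days, 365)  # assumed right basis
wrong   = accrued_interest(notional, rate, days, 360)  # assumed wrong basis

# Both sides are off versus the correct basis by a real dollar amount...
assert wrong != correct

# ...but buyer and seller used the SAME wrong basis, so the difference
# between their two valuations is exactly zero.
buyer, seller = wrong, wrong
assert buyer - seller == 0.0
```

The error is real against the true convention, but invisible between counterparties who share it, which is why the trade settled without incident.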

Not that I support erroneous research, and I have a bone to pick because I wanted to write about their research, but the overall result looks to be unimportant.  Let the titans of economics fight this one out.  I will be sitting aside, puzzling over how to make returns, avoid duration risk, and make money as defined by my mandate in this whirlwind market period.