Take care with those units…

Recent furore over Rogoff & Reinhart’s discredited research on the effects of high national debt:GDP ratios led me to consider some differences between economics and the natural sciences… If you’re familiar with the arguments, my perspective – as a scientist – is nearer the bottom.

Metres are not feet. Seconds are not years. Take care with your calculations, and always compare them against what you expect. Data are noisy, so repeat significant results and check, check, check your working. It’s the first lesson every scientist learns, reinforced through years of training. Because natural science (literally ‘knowledge’, eh kids?) is essentially an attempt to discover and convey truths about how the universe works, scientists have an almost pathological fear of mistakes. Results are checked and re-checked. Journal articles are scrutinised by expert reviewers and editors, and errata swiftly published. And scientists tread with particular care where their results impinge on controversial phenomena with an impact on human society, such as genetics or climate science.

Loch Lomond: Hard to mistake for a strictly deterministic phenomenon. Picture: Wikimedia

There are exceptions. Few professional scientists (myself included) would confidently claim their published work is entirely error-free. Nonetheless, just as aero engineers worry more about ‘unmodelled’ faults (those they have not anticipated) than modelled ones (those they expect and can mitigate), the general attitude amongst scientists is a fairly healthy level of conscious vigilance combined with a realistic acceptance that mistakes happen. Maybe we are too cautious when it comes to engaging with the policy implications of our research; maybe not.

Does economics – that wide-ranging and fascinating discipline, practised by a range of actors from academics, to politicians and pundits, to bankers and businesses – take a similar approach?

I’m not so sure.

A recent episode illustrates a couple of worrying issues. First, the background: in 2010 Carmen Reinhart and Kenneth Rogoff, two Harvard economists, published an analysis of the relationship between national (government) debt and GDP (national income) across a dataset of 20 advanced economies. Their finding: a debt:GDP ratio greater than 90% (roughly speaking, owing more than 90p for every £1 earnt) was correlated with a decline in GDP growth from a few percent per year (1–3% being typical for healthy, robust postindustrial democracies) to -0.1%. A very rough reading of this is: Western countries that owe more than 90% of GDP will experience economic contraction. At the time the US (well, everyone) was gripped by financial panic and, since they were respected academics, their work attracted substantial attention (all of this is much better documented by Heidi Moore, among others).
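For intuition, here is a minimal sketch in Python of the kind of threshold comparison their headline claim rests on. The figures are entirely made up for illustration; this is not their dataset or their method (which involved country-by-country weighting).

```python
# Illustrative only: synthetic (debt:GDP ratio, GDP growth %) observations,
# not Reinhart & Rogoff's actual data.
observations = [
    (0.45, 3.1), (0.60, 2.4), (0.75, 2.0),
    (0.85, 1.8), (0.95, -0.3), (1.10, 0.1),
]

def mean_growth(obs, predicate):
    """Average growth over observations whose debt ratio satisfies predicate."""
    growth = [g for ratio, g in obs if predicate(ratio)]
    return sum(growth) / len(growth)

below = mean_growth(observations, lambda r: r < 0.9)
above = mean_growth(observations, lambda r: r >= 0.9)
print(f"mean growth below 90% debt:GDP:    {below:.2f}%")
print(f"mean growth at/above 90% debt:GDP: {above:.2f}%")
```

Note that even this toy comparison says nothing about causation: it simply bins observations either side of an arbitrary threshold, which is part of why the 90% ‘cliff’ interpretation drew criticism.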

These raw data are numbers – but still as messy as trees, stars, or chemical reactions. Picture: Wikimedia.

Now, every academic today is conscious of the need, where possible, to publish high-profile research, and a few headlines never hurt. The pursuit of coverage often results in a bit of ‘sexing-up’ when your study is reported by the university press office, and further dumbing-down by the time the story makes it to the national news media (how sexing-up becomes dumbing-down is a great mystery). Rogoff & Reinhart went much further than this: in addition to seeking media appearances, they called publicly for the conclusion of their analysis (‘debt is correlated with economic stagnation’) to be turned into policy (‘governments must reduce public debt, or they will cause further recession’). Their advice gave supporters of fiscal austerity some powerful rhetorical ammunition. Which they unloaded promptly.

This very public stance from a two-person research team drawing a startling conclusion would be unusual even in the charged arenas of public health, pharmaceutical or climate science, where the extreme policy implications mean applied research attracts significant attention and conclusions are regularly distorted. However, the professional researchers involved, by and large, are aware of this – and, as mentioned above, take pains to ensure the robustness of their results accordingly.

Rogoff & Reinhart were unlucky. A reanalysis of their data by peers (Herndon, Ash and Pollin at the University of Massachusetts Amherst) revealed significant problems with their work, including inappropriate methodology and even (incredibly) simple arithmetical errors (apparently they actually used Excel to implement their models, which is staggering when professional tools like R or Matlab are available). Understandably, given the impact their work had the first time round, a storm of recriminations ensued.
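The most notorious of those errors was reportedly a spreadsheet formula whose cell range silently omitted several countries from an average. A toy Python version of that failure mode, with invented figures, shows how easily a hard-coded range shifts a result without raising any error:

```python
# Toy illustration of the spreadsheet failure mode, not the real dataset:
# a formula fixed to a cell range silently ignores rows outside it.
growth = [2.2, 1.9, -0.4, 0.8, 3.0, 1.5, 0.6]  # made-up growth figures

full_mean = sum(growth) / len(growth)

# The Excel-style equivalent of =AVERAGE(B2:B6) when the data
# actually run down to row 8 -- only the first five rows are counted:
truncated_mean = sum(growth[:5]) / 5

print(f"all rows:     {full_mean:.2f}")
print(f"first 5 only: {truncated_mean:.2f}")
```

A programmatic analysis that iterates over the whole dataset cannot make this particular mistake, which is one reason scripted, reviewable pipelines are preferred for published results.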

I’m a computational biologist, not an economist, so I can’t add to the existing critique of the original study – its data, methods, analyses or conclusions. What I can offer is my perspective as a professional natural scientist. First, Rogoff & Reinhart’s desire to disseminate their apparently noteworthy result was entirely understandable. All academics share this excitement at uncovering a new relationship, getting a new model or technique to work, or spotting a unifying theme in previously unrelated pieces of information. Without this, science would be totally, utterly, drearily mundane. The novelty of true discovery is what separates school science (boring) from real science (fascinating), trust me.

The worrying thing for me isn’t their desire to publicise their research, or even their apparent failure to check their results (though this would surely have been detected in a thorough review), or even – and I can’t really believe this either – that they apparently fitted these complicated models in Excel (which you really, really shouldn’t ever do, especially when so many alternatives are available – if you don’t believe me, consider Excel’s notorious numerical sloppiness).
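To be fair, floating-point pitfalls are not unique to Excel; the underlying point is that numerical care is needed in any tool. A quick Python illustration of naive versus compensated summation:

```python
import math

# Repeatedly adding 0.1 accumulates binary rounding error.
values = [0.1] * 10

naive = sum(values)            # plain left-to-right accumulation
accurate = math.fsum(values)   # compensated summation, exact here

print(naive == 1.0)     # False: naive sum is 0.9999999999999999
print(accurate == 1.0)  # True
```

The difference is tiny here, but in long chains of spreadsheet formulae such errors can compound invisibly, which is why statistical tools expose numerically careful routines like `math.fsum`.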

Instead, I’m concerned about two effects that amplified the repercussions of a poor piece of research. Firstly, unlike in science, where there is a clear division between theoretical, applied and industrial research – with established conduits to policymakers – individual economists often seem to operate across a much wider set of contexts. In some cases, there seems to be a carousel of posts in academia, business and government. This may well be one of the attractions of economics (if I could test my theories on community phylogenetics by getting cosy with God and persuading Him to nudge the genetic mutation rate in some entire ecosystems up a bit, I probably would – just out of curiosity). But at best, it blurs the distinction between rigorous research, popular economics and policy-making. At worst, it creates a severe moral hazard: a researcher with a strong point of view, picked (and remunerated) as a government advisor, has less incentive to moderate their view or publish contradictory results. Politicians tend to treat economists (and their ideas) like pets – when in fact the consequences of economic policy are so wide-ranging that a statutory panel of experts might be a better idea. The IPCC might be a useful model (I’m not even going to consider the UK’s politicised and opaque OBR).

Secondly, policy-makers and the public in general do not have a clear idea of the philosophical limitations of economics. The oft-used phrase ‘an inexact science’ appears to build in some caution; but the taking of penalty kicks in football is described in similar terms, so that caveat is probably inadequate. There’s also the natural scientists’ commonest objection to counting economics among the sciences: few other phenomena respond to (and interact with) the state of human knowledge about them. Gravity works identically whether or not you are aware of relativity. Markets, by contrast, aware that ‘market confidence’ can affect business, may react to new research showing markets overestimate market confidence by losing their confidence – an effect which may now need to be modelled in studies of market confidence… So although parts of economics are quantitative and scientific, the discipline as a whole very possibly isn’t, since the phenomenon of human economies includes humans’ study of economics. Gödel would have a field day.

Finally – and this is our fault, not economists’ – I believe that one under-appreciated source of bias in the public’s understanding of economics (though possibly not economists’ own) has to do with numerosity, the collection of cognitive biases we exhibit when dealing with numbers. We’re only just starting to understand how these affect our thought processes, but already it seems clear that we think about numbers very differently to how we think we think about them. I reckon there’s a particular issue with economics – call it a ‘countability bias’: many of the phenomena under investigation (currency, capital, rates of exchange and interest) are numbers, and quantitative models use numbers; therefore (our brains might subconsciously assume) largely descriptive and qualitative models predicting numbers are actually quantitative, even deterministic. Clearly, practising economists are aware of this distinction. But are the public?
