Indicators of what? – an article from the Jerusalem Post

Originally published on November 21, 2014.

The Economist magazine occasionally devotes the last of its several weekly leading articles to a tongue-in-cheek — sometimes even deliberately fantastic — look at a specific topic. Last week’s issue contained such a leader, discussing ‘performance indicators’ and headlined “How to lie with indices”.

Performance indicators are complex statistical constructions that purport to measure how something — typically a national or regional economy — is ‘performing’. They have become extremely popular: the Economist provides a chart showing how their number has exploded from a small handful in the 90s, to 100-odd by the early years of this century and to almost 200 today — of which perhaps a quarter have (mercifully) been “discontinued”, the statistician’s equivalent of the physician “disconnecting” a patient and thereby precipitating his demise.

It is important to understand the difference between these performance indices — especially the sub-set known as ‘sentiment indices’ — and formula-based indices, such as those used in equity markets. The latter begin with established facts, in the form of share prices, and combine these into an index covering a defined group of shares (or bonds, or whatever). Even then, you need to understand a good deal about weightings and other statistical techniques to be able to assess whether the index is as objective and balanced as it usually proclaims itself to be.
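The arithmetic behind a formula-based index is simple; the judgment lies in the weightings. As a minimal sketch, assuming a capitalisation-weighted construction with made-up prices and share counts (not any real market's data):

```python
# Sketch of a capitalisation-weighted share index.
# Prices and share counts are invented for illustration only.
prices = {"A": 50.0, "B": 20.0, "C": 10.0}              # price per share
shares_outstanding = {"A": 1_000, "B": 5_000, "C": 20_000}

def index_level(prices, shares, divisor=1_000.0):
    """Total market capitalisation, scaled by a fixed divisor."""
    market_cap = sum(prices[t] * shares[t] for t in prices)
    return market_cap / divisor

level = index_level(prices, shares_outstanding)
print(level)  # 350.0
```

Note that the cheapest share, "C", dominates the index here because of its large share count: the weighting scheme matters as much as the prices themselves, which is exactly the sort of detail one must check before taking an index's claim to balance at face value.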

With sentiment indices there is no hope of objectivity. They are based on surveys that pose questions to a selected (that’s the term that leads to a huge statistical minefield) group of people. The answers they give are usually qualitative rather than quantitative: asked how their firm’s output has changed over the last period, they answer “a lot” or “a little”, in one direction or the other — but do not provide a precise figure, or even a range (say, 5-10%). At least with regard to the past, the response should be based on a rough factual assessment, but when the question is about the future, it becomes highly speculative and strongly subjective.

A wonderful example of how misleading, and hence effectively useless, survey-based performance indicators can be was provided yesterday by the Philadelphia Federal Index, defined by Investopedia as “a regional federal-reserve-bank index measuring changes in business growth. The index is constructed from a survey of participants who voluntarily answer questions regarding the direction of change in their overall business activities…” The index generated from this month’s survey was expected to drop slightly, to 18.5. In fact it soared to 40 — the highest reading since 1993.

This outcome — technically a 10-sigma event (don’t ask) — may have delighted statisticians, but it disgusted economists, who are supposed to use this indicator. Maybe economic conditions in the Philadelphia area improved a lot (until the snow came…), but certainly not by anything like the implication of the ‘indicator’.
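For those who do ask: an “N-sigma event” is a result that misses expectations by N standard deviations of the typical forecast error. Only the 18.5 forecast and the 40 reading come from the article; the sigma below is hypothetical, purely to show the arithmetic:

```python
# What "10-sigma" means: the surprise measured in standard deviations.
expected = 18.5   # consensus forecast (from the article)
actual = 40.0     # published reading (from the article)
sigma = 2.15      # HYPOTHETICAL std. dev. of past forecast misses

z = (actual - expected) / sigma   # surprise in sigma units
print(round(z, 1))  # 10.0
```

Under a normal distribution a genuine 10-sigma surprise should essentially never happen in a human lifetime, which is precisely why such a reading says more about the flimsiness of the survey than about Philadelphia's economy.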

Another indicator-type tool that has become ubiquitous is the PMI (purchasing managers’ index, so called because it started as a survey of, would you believe, purchasing managers). The object of having these indices is to provide more timely indicators than the official data, which appear one, two or even three months later. But nowadays people can’t wait until the first of each month for the latest PMI, so there are ‘flash’ PMIs, published on or around the 20th of the month, which are based on fewer responses to fewer questions from fewer people. But the markets still move in response, because the headline-reading algorithms which dominate trading read and respond to the ‘data’, despite their being tendentious twaddle.
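A PMI headline number is a diffusion index: each respondent's qualitative answer is scored rather than measured. This sketch assumes the common convention (used in ISM-style indices) of scoring “higher” as 1, “unchanged” as 0.5 and “lower” as 0, so that 50 marks the line between expansion and contraction; the responses are invented:

```python
# Sketch of a PMI-style diffusion index from qualitative survey answers.
# Assumed convention: "higher" = 1, "same" = 0.5, "lower" = 0;
# the index is the weighted share of responses times 100.
def diffusion_index(responses):
    weights = {"higher": 1.0, "same": 0.5, "lower": 0.0}
    return 100.0 * sum(weights[r] for r in responses) / len(responses)

# Hypothetical answers from ten purchasing managers:
answers = ["higher"] * 5 + ["same"] * 3 + ["lower"] * 2
print(diffusion_index(answers))  # (5*1 + 3*0.5) / 10 * 100 = 65.0
```

A ‘flash’ PMI is the same calculation run over a smaller, earlier batch of responses, which is why a single changed answer can move the headline number noticeably.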

The Economist, however, was not aiming its barbs at the PMIs, but rather at the seemingly sophisticated performance indices published by ‘prestigious’ organisations. An excellent example is the World Economic Forum’s annual Global Competitiveness Report, which ranks every country according to a very long list of parameters, and, after combining these sub-components into components and the components into an overall index, publishes rankings which people like Binyamin Netanyahu take very seriously.

However, most of the sub-component rankings are based on the responses from senior executives in each country, covering a very broad range of topics in their country. Yet these people’s ‘opinions’ are largely subjective and have no more value — and often less — than those of taxi drivers. Granted, in topics such as labour relations or taxation they may know what they are talking about, but in many others they don’t.

Their opinions, and hence their answers, and hence the rankings derived from them, are based on what they see, hear and read in the media. That — and only that — can explain why, with regard to “the extent of market dominance” — whether a few firms dominate the local market — they gave Israel such low marks that it ranked second from bottom out of 144 countries (above Angola — but below Myanmar). They have no real knowledge of the Israeli situation, but they have been so brainwashed by Ha’aretz/The Marker’s relentless hammering at this issue that they assume it must be really terrible.

Conversely, when asked to assess to what degree Israeli firms (not their firm, but firms in general) adopt new technology, they gave the country a mark that propelled it to number 5 in the world on this topic. Do they know what goes on outside their firm or, at most, their sector? Never mind. The “Start-Up Nation” hype is now so ingrained that we must be very hot in adopting new technology. Baseless opinions create baseless facts that generate baseless rankings — good, bad or otherwise — and lots of headlines and hot air.
