(First of Two Parts)
Journalists and pundits writing about Japan seem to love league tables (lists of rankings) that give Japan a low rank. Those that rank Japan high are generally ignored.
This pattern is so strong that I have joked with friends about the latest poll that ranks Japan 197th out of 196 countries.
Beyond the jokes, however, none of the league tables commonly cited in articles about Japan have real analytical merit. That is especially true of the World Happiness Report.
Misnaming Invites Confusion
Problems with the World Happiness Report begin with the title. It does not report on happiness. It reports on individual perception of well-being.
Happiness is a transitory psychological state. For example, “I was happy I was not hit by that car that ran the red light.”
What the World Happiness Report tabulates are answers to a single question from the Gallup World Poll described in the report as follows:
The rankings are based on answers to the main life evaluation question asked in the poll. This is called the Cantril ladder: it asks respondents to think of a ladder, with the best possible life for them being a 10, and the worst possible life being a 0. They are then asked to rate their own current lives on that 0 to 10 scale.
The misnaming of the report leads to questions in internet venues, such as Quora, asking, “Why are the Japanese generally unhappy?” and headlines about Japan being the least happy country in the G7, and ranking only 58th in the world in terms of happiness.
Questionable Use of the Gallup Data
The Gallup organization is much more circumscribed in its use of the Cantril ladder than the United Nations Sustainable Development Solutions Network, which produces the World Happiness Report. Instead of making a 1-to-156 ranking based on essentially bogus N.NNN numbers, Gallup uses a simple threefold categorization:
Thriving (7+) – well-being that is strong, consistent, and progressing
Struggling – well-being that is moderate or inconsistent
Suffering (4 and below) – well-being that is at high risk.
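Under the thresholds quoted above, Gallup's categorization amounts to a simple lookup. The sketch below is illustrative only: the report states the Thriving and Suffering cutoffs, and the code assumes everything in between counts as Struggling.

```python
def gallup_category(score: float) -> str:
    """Map a Cantril ladder score (0-10) to Gallup's threefold
    categorization. The 'Struggling' band is assumed to cover
    everything between the two stated thresholds."""
    if not 0 <= score <= 10:
        raise ValueError("Cantril ladder scores run from 0 to 10")
    if score >= 7:
        return "Thriving"
    if score <= 4:
        return "Suffering"
    return "Struggling"

print(gallup_category(8.2))  # Thriving
print(gallup_category(5.5))  # Struggling
print(gallup_category(3.0))  # Suffering
```

Note how coarse this is by design: a 5.358 and a 5.891 land in the same bucket, which is all the precision the underlying question can honestly support.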
Given the highly subjective nature of the Cantril ladder, I see no real basis for reporting results in a ranking such as that used by the report.
Giving pseudo-precision to highly subjective evaluations is a case of what is called “physics envy.” The term comes from a well-justified criticism of academics and researchers in “soft” fields, such as the social sciences, business studies, and the humanities, who try to make their work appear more rigorous and their claims less subjective by introducing mathematical models and complex jargon.
Both Greece and Tajikistan have exactly the same rating in the latest report: 5.358. Are Greece and Tajikistan really identical in terms of whatever it is the WHR claims to measure? Mauritius and Jamaica differ only in the third decimal place: 5.891 vs. 5.890. Is Mauritius really superior to Jamaica? I think not.
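The arithmetic behind this objection is straightforward. Assuming, purely for illustration, a sample of 1,000 respondents per country and a standard deviation of about 2 points on the 0-10 ladder (a plausible figure, not one taken from the report), the sampling error alone dwarfs a third-decimal gap:

```python
import math

# Illustrative assumptions, not figures from the WHR:
n = 1000   # respondents per country (per the WHR FAQ)
sd = 2.0   # assumed standard deviation on the 0-10 ladder

standard_error = sd / math.sqrt(n)   # ~0.063
margin_95 = 1.96 * standard_error    # ~0.124

gap = 5.891 - 5.890                  # Mauritius vs. Jamaica
print(f"95% margin of error: +/-{margin_95:.3f}")
print(f"Reported gap: {gap:.3f}")
# The 0.001 gap is two orders of magnitude smaller than the
# sampling margin of error, so the ordering is meaningless.
```

Under these assumptions, any two countries within roughly a tenth of a point of each other are statistically indistinguishable, which covers long stretches of the published ranking.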
Misleading with Dated Information
As with the Reporters Without Borders (RWB) press freedom index, the cover date of the WHR is the year of publication, not the year of the data analyzed. Moreover, the year analyzed is not even the preceding year, but a composite of several previous years.
For example, the 2019 report states, “We combine data from the years 2016-2018 to make the sample size large enough to reduce the random sampling errors.”
Thus, any media report that says “Japan’s ranking fell in the 2019 [report]” shows the reporter has not actually read the report. Even saying that “Japan’s ranking fell in 2018” would be inaccurate. That’s not the way the index works.
None of the English language stories I found on this report noted this peculiarity, although some Japanese language articles did.
Faux Social Research
The WHR index is one of several rankings that are frequently cited in articles about Japan. The Reporters Without Borders World Press Freedom Index and the World Economic Forum Global Gender Gap Index are even more widely cited than the WHR.
As has been explained in previous JAPAN Forward articles, the RWB index is marked by incongruous and volatile rankings that the head of the organization cannot explain. The WEF gender gap index is marked by similar issues.
These league tables are PR efforts, first and foremost, for the agency producing them.
For journalists and editors, particularly in the case of Japan, they are a ready source of articles to which clickbait headlines can be attached. Contemporary news media are, to an extraordinary degree, driven by click counts, as described in a recent New Yorker article by one of its editors. Media, such as the Japan Times, that publish such articles are being irresponsible and contradicting their own editorial line.
The Japan Times has carried a number of articles and opinion pieces critical of Japanese news media and Japanese journalists for recycling press releases and failing to do investigative journalism. Yet, when it comes to league tables, particularly those that show Japan in a negative light, the Japan Times practices cut-and-paste “journalism” with a clickbait headline to boot. No effort is made to cross-reference commentary on the indexes, let alone seek the opinion of experts in survey methodology and the subject at hand.
Publishing stories based on these indices without noting that all have been subject to serious academic criticism is irresponsible journalism. The “Happiness Index” in particular has attracted a body of scholarly criticism too voluminous to cite here.
While the FAQ for the WHR claims that a sample size of 1,000 per country (combined for three years) is sufficient, I have grave doubts about this.
One thousand might be sufficient for a relatively homogenous country such as Iceland with a total population of 338,349 in 2017, but is it sufficient for a country such as India with a population of 1.339 billion with scores of ethnic and religious groups? I doubt that any pollster would try to predict the behavior of India’s more than 830 million eligible voters on the basis of a sample of only 1,000 in a given election year.
The small sample size means that the rankings are not only questionable but also useless in analytical terms. Neither the WHR nor the Gallup survey can be used to answer questions such as: “In which prefectures are elderly people most satisfied with their lives? What life circumstances correlate with this satisfaction?”
With a 1,000 size sample for a country the size of Japan, subcategories have such small samples as to be meaningless.
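To make that concrete, suppose (hypothetically) that 1,000 Japanese respondents were spread in proportion to population across the 47 prefectures, and that roughly 28 percent of them were elderly. Both proportions are round assumptions for illustration, not Gallup's actual sampling frame:

```python
# Hypothetical illustration of how a 1,000-person national sample
# thins out under subdivision; the shares are round assumptions,
# not Gallup's actual sampling design.
n = 1000
prefectures = 47                   # Japan's prefectures
per_prefecture = n / prefectures   # ~21 respondents each

elderly_share = 0.28               # assumed 65+ share of the sample
elderly_per_prefecture = per_prefecture * elderly_share

print(f"Respondents per prefecture: ~{per_prefecture:.0f}")
print(f"Elderly respondents per prefecture: ~{elderly_per_prefecture:.0f}")
# Roughly 21 respondents per prefecture, about 6 of them elderly:
# far too few to say anything about life satisfaction among the
# elderly in any given prefecture.
```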
Even setting aside the WHR's uselessness for national policymaking and regional analysis, its national rankings look quite questionable.
France ranks 23rd in terms of “happiness,” but the country has been gripped by months of demonstrations and violent protests by the Yellow Vest protestors. Clearly, a large segment of the French population is very unhappy about something.
The United States ranks 18th, but is in the grip of an opioid epidemic that is deadlier than the worst year of the Vietnam War. Medium-size U.S. cities have more homeless than all of Japan, and the U.S. has more people in prison, absolutely and per capita, than any other country. (The Gallup methodology does not appear to cover the homeless or the incarcerated.)
The top countries in terms of “happiness” generally have falling fertility rates, suggesting that possibly the current population is not all that sanguine about the future.
In contrast, relatively “unhappy” Japan has had a slowly rising fertility rate since 2005.
‘Happy’ Countries and Self-Destructive Behavior
Some commentators have rightly noted that a number of the countries that rank high in terms of “happiness” are high-suicide-rate countries, in both contemporary and historical terms.
The U.S. in particular has seen an increase in suicide, even without counting self-killing by drug overdose as suicide. Youth suicide in particular has seen a notable increase. Further, because of the highly devolved nature of the U.S., the reported suicide count is generally believed to be understated.
New Zealand is even more of an anomaly. It ranks 8th in terms of “happiness,” but has the highest youth suicide rate of any developed country. In contrast, relatively “unhappy” Japan has seen overall suicide decline steadily for more than a decade. While foreign journalists like to proclaim that “suicide is the leading cause of death for young people in Japan,” that is only because relatively few young people there die in traffic accidents, homicides, or drug overdoses.
The fact that the U.S. is both a relatively high-suicide country and a country with a very high rate of self-killing by overdose led Jeffrey Sachs, one of the founders of the WHR, to write a chapter in the 2019 report on this very issue. While his chapter effectively documents what he calls a “mass addiction society,” he does nothing to explain why such an apparently happy country has so many people turning to opioids, tranquilizers, and other mood-altering substances.
Others have noted high levels of cannabis use in a number of the “happy” countries. So too for antidepressant drugs. The U.S. leads, but other “happy” countries, including Iceland, Australia, Canada, Denmark, Sweden, Finland, Belgium, etc., have high reported usage of antidepressants.
There is no indication in the Gallup methodology that it asks respondents whether they are high, stoned, or drugged when they respond to its surveys. Perhaps it should.
Gender Gap, Poverty, and the ‘Happiness’ Index
Similar anomalies appear when the WHR ranking of a country is compared with its ranking on the WEF gender gap index.
Nicaragua is 5th on the WEF index but 41st in happiness, and it is gripped by ongoing political violence. The Philippines ranks 8th on the gender gap index but 71st in terms of “happiness.” Rwanda ranks 6th on the gender gap index but 151st in happiness in the respective 2018 reports.
The WHR does not provide data on whether, for example, long working hours or lack of daycare are factors in Japanese “unhappiness.” Some of the high-rated countries have relatively low working hours and well-developed national daycare systems. Some do not. There may be a shortage of nationally mandated daycare places in Japan, but the U.S. and the U.K. — which have no such system — rate higher in happiness than Japan.
Similarly, the U.S. has a relative poverty rate higher than that of Japan, but it ranks much higher in the index.
Use of the index becomes merely an example of confirmation bias: the user uncritically accepts a data set because it appears to support something he or she wants to say.
(To be continued)
For more “Mythbusters” articles, go here.
Author: Dr. Earl H. Kinmonth