Stephen Gordon: Guess who’s got more credibility—professors, think tanks or…the CBC

Interesting piece by Gordon:

Think-tanks are an ever-present, yet somehow under-examined feature of the public policy landscape. Think-tanks get a lot of press, at least partly because they are adept at issuing press releases advertising their work to the media, complete with pullquotes and readily available experts for radio and TV hits. Academic studies — the sort of work written by professors for professors — pass almost unnoticed, mainly because most of it is not relevant to current policy debates, and because peer-reviewed publications are not so readily accessible. But is visibility the same thing as credibility?

It would seem not. Carey Doberstein, a political scientist at the University of British Columbia, recently published a study in Canadian Public Policy on the credibility gap — he calls it a “credibility chasm” — between academic research and research published by think-tanks and advocacy organizations. Interestingly, his study is not carried out among the general population, but among policy analysts in the provincial governments of British Columbia, Saskatchewan, Ontario and Newfoundland and Labrador.

Participants in the study were asked to read and evaluate the credibility of different studies in two areas of provincial competence — minimum wages and income-splitting. The analysts were asked to evaluate a set of five or six studies produced by academics, think-tanks and advocacy groups. Doberstein very sensibly does not draw inferences about credibility from these evaluations: one study is hardly enough to evaluate the credibility of one group, or even of one researcher. He focuses instead on how the source of a study affects policy analysts’ perceptions of its credibility.

Instead of sending the studies out to the analysts under their proper affiliations, Doberstein randomly altered them. For example, a study on the effects of an increase in the minimum wage written by researchers at the University of Toronto and published in a peer-reviewed journal was sent out with the correct affiliation to one group of analysts, under the name of the Canadian Centre for Policy Alternatives (CCPA) to another group, and under the name of the Fraser Institute to yet another group. Similarly, in addition to being sent out under its own name to one group, a CCPA study would be sent out as a University of Toronto study to a different group, and represented as a Fraser Institute study to yet another set of analysts, and so on. Two advocacy groups, the Wellesley Institute and the Canadian Federation of Independent Business, rounded out the minimum wage exercise, and a similar mix of academic, think-tank and advocacy groups was used for the income-splitting case.
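
To make the design concrete, here is a minimal sketch of that kind of randomization in Python; the analyst count, the three-way split and the labels are illustrative assumptions, not Doberstein’s actual materials:

```python
import random

# A minimal sketch of the affiliation randomization described above.
# The analyst count, group split and labels are illustrative
# assumptions, not Doberstein's actual materials.
random.seed(0)  # reproducible assignment

analysts = [f"analyst_{i:02d}" for i in range(30)]
affiliations = ["University of Toronto", "CCPA", "Fraser Institute"]

random.shuffle(analysts)
groups = {
    affiliation: analysts[i::len(affiliations)]
    for i, affiliation in enumerate(affiliations)
}

# Every group reads the identical minimum-wage study; only the source
# label differs, so differences in average credibility ratings across
# groups estimate the reputation effect of the affiliation itself.
for affiliation, group in groups.items():
    print(f"{affiliation}: {len(group)} analysts, e.g. {group[:2]}")
```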

This randomisation strategy allows Doberstein to identify the reputation effects of the various sets of researchers: How is a study’s credibility affected by its affiliation? The answer is: pretty much in the way you’d expect. Adding a university affiliation to a think-tank or advocacy group study increases analysts’ perceptions of its credibility, while adding a think-tank or advocacy group’s name to an academic study makes it less credible. Generally, credibility among policy analysts declines as you move from university affiliations to think-tanks to advocacy groups.

These results aren’t hard to explain. Policy analysts know full well that advocacy groups cannot be expected to publish anything that does not fit their stated agendas, so a study showing (once again!) that the data supports their previously held position is not a particularly strong signal. Doberstein finds a similar effect among think-tanks: think-tanks with a more stridently ideological focus (CCPA, the Fraser Institute) are viewed as being less credible than the relatively neutral C.D. Howe Institute.

Is this good news or bad? On the positive side, it shows that policy analysts are well aware of the incentives facing various sets of researchers, and know enough to put their work in context. On the downside, one might have hoped that analysts could set all that aside and evaluate the research on its own merits. Of course, that’s an ideal that almost no one can match: this is why so many academic journals use double-blind peer review, in which neither authors nor reviewers are identified to each other.

Perhaps the more interesting question is why advocacy groups and ideologically-driven think-tanks even bother to produce reports that are discounted so heavily by policy analysts. One answer might simply be that their reports aren’t written for the benefit of analysts; they’re written for the benefit of their donors. People like to have their beliefs confirmed, and they’re willing to pay to have someone tell them that they were (once again!) right.

This discussion also provides some insight into the challenges facing the media, particularly as it concerns the markets for news and opinion. Asking people to pay someone to tell them what they want to hear is a viable business model, and many digital outlets — from The Rebel through Canadaland to Rabble — are in the process of filling out that landscape. (It also raises the question of why the CBC would want to cut into this action with its CBC Opinion site. There’s no obvious market failure here that needs a public-sector fix.)

News, on the other hand, has the elements of a pure public good: everyone benefits from knowing the basic facts of what is going on, and technology has made it almost impossible to control access to news once it’s been published. Profits from advertising revenues can no longer finance news gathering to the same extent that they used to, but academic researchers can still fall back on teaching to cross-subsidize their research work. If you really want to make an academic researcher sweat, ask her to imagine trying to make a living from her research alone.

Source: Stephen Gordon: Guess who’s got more credibility—professors, think tanks or…the CBC

Stephen Gordon: Canada doesn’t have a Harvard, and that’s a good thing

Stephen Gordon on the weakness of the U.S. elite college system as a force for social mobility:

It’s hard to tell which theory is correct: human capital models and signalling models both make the same basic prediction about the salaries of university graduates. Researchers are obliged to leverage information from natural experiments to distinguish between the two theories, and it’s usually the case that evidence that seems to support one side can be re-interpreted as supporting the other as well. A reasonable conclusion is that both stories have support in the data, and that each may play stronger roles in different contexts.

This brings us back to Harvard. The lengths to which people will go in order to obtain a Harvard degree are easier to understand if you think of a Harvard degree as a signal, and not a measure of human capital. To be sure, Harvard’s faculty deserves its reputation, but to the extent that teaching assistants and contract lecturers are responsible for much of the teaching at the undergraduate level (as is the case at so many other universities), the amount of human capital on offer at Harvard is unlikely to justify the prestige a Harvard degree conveys.

A more plausible story is that a Harvard degree conveys a signal: it shows that you have what it takes to get into Harvard in the first place. And indeed, the signalling story would also explain the trend to grade inflation at Harvard and other Ivy League universities. The grade most frequently awarded at Harvard is an A, and the median grade is A-. If students (and their parents) are paying for a signal, elite universities are going to be expected to provide it.

Signalling — and the wasted effort that goes with it — is much less pervasive in the Canadian university system. While some universities and some programs may have relatively higher entrance standards, getting into a “top” Canadian university is nowhere near as difficult as entering an elite U.S. college: the entire undergraduate population of the Ivy League is roughly equivalent to that of the University of Toronto. Moreover, the consequences of not getting into a top Canadian school are relatively minor: those who graduate from a Canadian undergraduate program are on a much more equal footing than they are in the U.S.

The U.S. has a rigid hierarchy of universities: the existence of a certain number of high-prestige schools has to be set against the fact that access to them is extremely limited, and that those who don’t make it into the top are at a permanent disadvantage. And since children from high-income families have greater access (elite universities typically offer “legacy” admissions to children of alumni), post-secondary education in the U.S. is at best a weak force for social mobility.

If — as available evidence suggests — Canadian social mobility is significantly greater than it is in the U.S., then much of the credit goes to the fact that there is no Canadian university that plays the prestige-signalling game that Harvard does. A “Harvard of Canada” is the last thing we need.

Source: Stephen Gordon: Canada doesn’t have a Harvard, and that’s a good thing | National Post

Stephen Gordon: The damage the Tories did with the census won’t be easily undone

Stephen Gordon on the possible long-term damage to the Census:

The census is only useful if (approximately) everyone co-operates. The same goes for lots of other things: carpool lanes, anti-littering bylaws and jury duty, to name three. The nature of collective action problems is that it’s never in one’s individual rational interest to take part in the solution; it’s better to simply free ride off the efforts of others. This is why one of the core tasks of government is to enforce participation — and this means imposing penalties for not co-operating.

This is where social capital comes in — or social trust, or social cohesion, or whatever you want to call it. It’s not feasible for governments to micromanage their citizens and enforce co-operation in their daily activities, even if they wanted to. To a very great extent, the smooth functioning of society relies not on government enforcement, but on people’s willingness to go along with the rules, so long as they believe that everyone else is obeying them as well. Everything depends on a willingness to trust strangers, and to reward their trust in you.

It’s worth dwelling on this point, because one of the most debilitating consequences of the Conservatives’ time in office has been the creation of a constituency for whom the census is now a highly-politicized symbol, instead of being a neutral instrument for good governance. While the government can force co-operation, this isn’t the same as restoring mutual trust.

You can’t expect people to take your concerns seriously if you won’t do the same for them. To the extent that their concerns are about privacy, the most promising way of restoring that lost trust is to demonstrate the extent to which concerns about privacy are taken seriously, and to show some flexibility on the details. For example, questions about religion have been dropped from this year’s census questionnaire.

Social capital is difficult to build, and easy to destroy. The former Conservative government demolished a big chunk of our social capital when it blew up the census, and it will take time and effort to restore it. Posting selfies with census forms can’t hurt, and just might help.

Source: Stephen Gordon: The damage the Tories did with the census won’t be easily undone | National Post

The case of the disappearing Statistics Canada data

Good piece on Statistics Canada and the impact of changes that have terminated or shortened long-standing data series:

Last year, Stephen Gordon railed against StatsCan’s attention deficit disorder, and its habit of arbitrarily terminating long-standing series and replacing them with new data that are not easily comparable.

For what appears to be no reason whatsoever, StatsCan has taken a data table that went back to 1991 and split it up into two tables that span 1991-2001 and 2001-present. Even worse, the older data have been tossed into the vast and rapidly expanding swamp of terminated data tables that threatens to swallow the entire CANSIM site. A few months ago, someone looking for SEPH wage data would get the whole series. Now, you’ll get data going back only to 2001, and StatsCan won’t tell you that there are older data hidden behind the “Beware of the Leopard” sign; you have to already know.…

Statistics Canada must be the only statistical agency in the world where the average length of a data series gets shorter with the passage of time. Its habit of killing off time series, replacing them with new, “improved” definitions and not revising the old numbers is a continual source of frustration to Canadian macroeconomists.
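
For anyone stuck doing this in practice, here is a minimal pandas sketch of the workaround, assuming the two split tables have been downloaded as CSVs (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical workaround for the table split described above: stitch
# the terminated 1991-2001 table back onto its 2001-present successor.
# The file and column names are illustrative assumptions; real CANSIM
# extracts differ, and the two series may not be strictly comparable
# if definitions changed at the break.
old = pd.read_csv("seph_wages_1991_2001.csv", parse_dates=["REF_DATE"])
new = pd.read_csv("seph_wages_2001_present.csv", parse_dates=["REF_DATE"])

# Where the tables overlap (2001), keep the newer table's values,
# since terminated series are not revised after a redefinition.
combined = (
    pd.concat([old, new])
    .drop_duplicates(subset=["REF_DATE"], keep="last")
    .sort_values("REF_DATE")
    .reset_index(drop=True)
)
print(combined["REF_DATE"].min(), "to", combined["REF_DATE"].max())
```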

Others are keeping tabs on the vanishing data. The Canadian Social Research Newsletter for March 2 referred to the cuts as the “CANSIM Crash Diet” and tallied some of the terminations (a short sketch after the list converts these tallies into termination rates):

  • For the category “Aboriginal peoples” : 4 tables terminated out of a total of 7
  • For the category “Children and youth” : 89 tables terminated out of a total of 130
  • For the category “Families, households and housing” : 67 tables terminated out of a total of 112
  • For the category “Government” : 62 tables terminated out of a total of 141
  • For the category “Income, pensions, spending and wealth” : 41 tables terminated out of a total of 167
  • For the category “Seniors” : 13 tables terminated out of a total of 30
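
To put those tallies in proportion, a trivial sketch that converts them into termination rates, which run from roughly 25 per cent to 68 per cent depending on the category:

```python
# Termination counts from the Canadian Social Research Newsletter
# tally above: (terminated, total) tables per CANSIM category.
tallies = {
    "Aboriginal peoples": (4, 7),
    "Children and youth": (89, 130),
    "Families, households and housing": (67, 112),
    "Government": (62, 141),
    "Income, pensions, spending and wealth": (41, 167),
    "Seniors": (13, 30),
}

for category, (terminated, total) in tallies.items():
    print(f"{category}: {terminated}/{total} = {terminated / total:.0%}")
```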

As far as Statistics Canada’s troubles go, this will never get the same level of attention as the mystery of the 200 jobs. But, as it relates to the long-term reliability of Canadian data, it’s just as serious.

Given my work using NHS data (particularly ethnic origin, visible minority and religion variables linked to social and economic outcomes), I am still exploring which data and linkages are available, and which are not.

Source: The case of the disappearing Statistics Canada data