Thursday, September 3, 2015

Inequality and Productivity

If you regularly read online about economics, you probably have seen this graph from Jared Bernstein and Larry Mishel so many times that you have lost count:

What it seems to say is that compensation once kept pace with productivity but no longer does -- in essence, that workers are getting a raw deal. And Mishel and Josh Bivens are back with a big, new update to the analysis. But there are some issues with the graph, despite the latest defenses.

Let's review the debate. To an economist, the first red flag should be that this divergence looks nothing like the change in the labor share of income over the same period. The labor share was stable until about ten years ago, and since then has declined by about 10 percent, or about 6 percentage points. That translates to productivity having outgrown compensation by about 25 percent, not 140 percent as in the above graph.

So something is clearly strange here. That something, as James Sherk of the Heritage Foundation has explained in a research memo, is a difference in how productivity and compensation are adjusted for inflation. And that's an important question -- why is the average price of consumption rising faster than the average price of output? -- but it's not at all an issue of workers not being compensated for rising productivity. It's a change in the terms of trade. Since workers buy a different basket of goods than businesses produce, the prices of those baskets can diverge.

Another key issue here is who counts as a worker: The above graph covers only the compensation of production and non-supervisory workers. But supervisory workers are obviously part of the production process. While the graph certainly demonstrates inequality, in some vague sense, comparing the productivity of all workers against the compensation of a subset destroys the graph's ability to test the actual claim, which is that workers are not being compensated for their productivity.

What Mishel and others at the Economic Policy Institute have most recently argued, then, is that we should be focused on inequality within labor income. Even if, in an apples-to-apples comparison, mean labor compensation of all workers has kept up with productivity, median labor compensation hasn't. So what, they ask, explains the difference between mean and median labor compensation?

Mishel and Bivens's answer, in the latest study, is a "portfolio of intentional policy decisions" that have hamstrung labor. Capital deepening and broad gains in labor quality should lead us to believe that productivity has risen broadly, they say, and yet less than 10 percent of workers have seen their compensation keep up with productivity gains.

Inequality within labor compensation is an interesting place for the debate to have ended up. For one thing, it's where the academic conversation is: see, for example, Matt Rognlie's recent Brookings paper. Yet, on the other hand, it seems to make the key question -- are workers being compensated for rising productivity? -- intractable.

Why? Because it's hard to assess the productivity of individual workers. The whole debate over CEO pay, for example, has foundered on this issue. You can see that in Bivens and Mishel's recent paper on CEO pay, in which they concede that the evidence that CEO pay is untethered to productivity must be suggestive and circumstantial. In their latest, they again say it is hard to draw a link, and they're right.

It's easy to be pessimistic, then, about economists' ability to answer this productivity-compensation question. Mishel and Scott Winship at the Manhattan Institute have gone blue in the face, without any apparent resolution, arguing about how the productivity of the median worker has changed since the 1970s.

There's another approach. It is important to concede up-front that there is, at the moment, no way to measure productivity at the individual level. We have to aggregate upwards, at least to the firm level, where there is a meaningful measure of output to be had. In this post, I'll use detailed industry-level data from the Bureau of Labor Statistics, because firm-level data requires special clearance from the US government that I don't have.

The implication of this aggregation, however, is that I can't say anything about divergences between productivity and compensation within sectors. Which could be important. Critically, my results have no bearing on the debate about the very top of the income distribution; using sectors, I can only look at the body of the distribution. My industry breakdown, though, is reasonably granular: 246 industry categories.

With those caveats in mind, here's the big takeaway: Between 1987 and 2013, changes in sector-level labor productivity explain almost all of the changes in sector-level hourly labor compensation. And almost all of those productivity increases were paid as compensation to labor.

If you really want to know, 74 percent of the variance in the change between 1987 and 2013 in sector-level log hourly labor compensation is explained by changes in log labor productivity over the same period. A one-percentage-point increase in productivity generated a 0.81-percentage-point increase in compensation. I've used 1987 and 2013 because this data series begins in 1987 and much of the data is still missing for 2014. As always, you can find my cleaned dataset here for your own analysis. (Weighting by 2013 employment gets you to the same result.)
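The calculation behind those two numbers is a simple cross-industry regression: change in log compensation on change in log productivity. Here's a minimal sketch of it in Python, run on synthetic stand-in data rather than the real BLS dataset (the true slope in the simulation is set to 0.8 by construction, just for illustration):

```python
import numpy as np

def ols_slope_r2(x, y):
    """Fit y = a + b*x by OLS and return (slope, R^2)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    b, a = np.polyfit(x, y, 1)            # returns [slope, intercept]
    resid = y - (a + b * x)
    r2 = 1.0 - resid.var() / y.var()
    return b, r2

# Synthetic stand-in for the 246-industry BLS data (not the real dataset):
rng = np.random.default_rng(0)
d_prod = rng.normal(0.5, 0.3, 246)                  # change in log productivity
d_comp = 0.8 * d_prod + rng.normal(0.0, 0.1, 246)   # change in log compensation
slope, r2 = ols_slope_r2(d_prod, d_comp)
```

With the real data, the same two lines of arithmetic produce the 0.81 slope and 74 percent R-squared reported above.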

We should ask if this result makes sense from a theoretical perspective. Much of labor economics has for decades circled around a basic question: How true is the intuition that workers are paid their marginal product?

You might think it should be true. If workers aren't compensated for their productivity, it seems, they'll switch firms or industries. Yet there's a countervailing argument, associated with the economist William Baumol, which leads to the opposite result: If productivity rises more slowly in one industry than in others, its workers will demand wages in line with their opportunities in other industries -- and so, at the industry level, we shouldn't expect a link between productivity and compensation.

Whether the classical viewpoint or Baumol's is correct turns upon the strength of workers' "outside option" to exit industries where productivity growth is lagging and enter industries with faster productivity growth. If this outside option is weak, then workers' wages are determined by industry-level productivity; if this outside option is strong, then workers' wages are determined by the productivity of their best-alternative industry.

Mishel and Bivens, in their latest study, have argued that the industry-level approach doesn't make sense. Yet I don't think their critique really goes anywhere. (Read it yourself and be the judge.) Yes, measures of labor productivity reflect the average, not marginal, product of labor. Yes, workers in low-productivity industries could move to higher-productivity industries, so we can't say that low-paid workers are inherently and always unproductive.

Neither of these points, however, seems to have any bearing on the industry-level comparison. If labor productivity predicts labor compensation, then it seems fair to say that, at the industry level, workers have been compensated for their productivity gains.

My reading of the evidence, then, is distinctly different than that of Mishel and Bivens. We'd agree that, since 2000, the decline in the labor share of income is concerning. And we'd agree that some of the apparent divergence between compensation and productivity is attributable to changes in relative prices of consumption versus output, a phenomenon which isn't readily linked to inequality.

Where we differ is the extent to which changes in productivity explain changes in compensation. At least for the body of the income distribution, this evidence should lead us to explanations centered on productivity rather than on labor-market institutions.

Note: I revised this post after I published it to reflect the just-released EPI report.

Monday, August 31, 2015

A Fact about China's Crash

Imagine I took all the stocks in the Shanghai index on June 11, 2015, the height of the bubble in Chinese equities, and created 50 different investment portfolios.

Portfolio #1 would invest only in those stocks that had cumulatively performed worst since March, roughly when the boom began. Portfolio #2 would invest in stocks that had performed slightly better, and so on until I had broken the whole index into 50 portfolios, with the very last portfolio invested in the stocks that ran hottest in the boom.
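The sorting procedure is easy to sketch in Python: rank stocks by their boom-period cumulative return, split them into 50 equal-size bins, and compute each bin's mean subsequent return. The returns below are made up for illustration, with mean reversion built in; the real inputs came from the scraped Yahoo Finance data described at the end of this post:

```python
import numpy as np

def rank_portfolios(boom_returns, bust_returns, n_bins=50):
    """Sort stocks into n_bins portfolios by boom-period return
    (worst performers in portfolio 1), then return each
    portfolio's mean bust-period return."""
    boom = np.asarray(boom_returns, float)
    bust = np.asarray(bust_returns, float)
    order = np.argsort(boom)          # indices from worst to best boom return
    bins = np.array_split(order, n_bins)
    return np.array([bust[idx].mean() for idx in bins])

# Made-up returns with "what goes up must come down" built in:
rng = np.random.default_rng(1)
boom = rng.normal(0.6, 0.4, 1000)
bust = -0.5 * boom + rng.normal(0.0, 0.1, 1000)
port = rank_portfolios(boom, bust)
```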

The performance of those portfolios since June shows a clear pattern, and a very familiar one to students of America's housing bubble and bust. The bubbliest portfolios, and the bubbliest stocks, have performed the worst amid the crash -- just like Las Vegas in 2005 versus in 2009.

What went up, in essence, is now coming down. This graph shows that fact.

Indeed, this "what goes up must come down" result is so strong that it explains a third of the variation in cumulative returns of individual stocks, and virtually all of the variation in the cumulative returns of the portfolios, over this period. (This statistical performance compares favorably to most tests of the CAPM or the Fama-French 3-factor model on US equities.) About half of all the bubble-related gains, from this perspective, have been unwound.

One shouldn't infer directly from this that the crash is "good." But, if you thought that the boom was nuts, you might be relieved to know that the crash is quite focused on undoing the boom and isn't just dragging everything lower.

This post would not have been possible without Python help from my friend Evan Chow, who helped me scrape Yahoo Finance for individual .csv files from the Shanghai index. I wrote a Stata program to merge the files into a single panel dataset and completed the analysis.

How Are Economists Connected?

The National Bureau of Economic Research, an organization of top economists that serves as a sort of clearinghouse for new research papers, counts nearly 1,400 members. Their interests vary widely, but upon joining the NBER, they sign up for research programs that represent their favored topics.

The NBER has 20 such programs, and economists usually sign up for one or two, although some sign up for more. (Twelve economists are signed up for five programs. Andrei Shleifer is the only one signed up for six.) As a result, we have over 700 connections between topics.

With so many members signing up for different combinations of programs, the NBER's member interest list gives a picture of the field. Not only can it tell us which fields are popular and unpopular, but it also shows us which combinations are comparatively more or less common -- a window, perhaps, into the connections economists draw within their own field.

So I scraped the NBER's member list and got started. (As always, my data set is available here.)

The first metric I looked at was the correlation of registrations, as you can see in the matrix below. (Click it to enlarge.) You should interpret a significantly positive cell as "economists often put these topics together," a cell with a value near zero as "there are no strong connections between these two topics," and a significantly negative cell as "economists tend not to put these two topics together."
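Mechanically, the matrix is just the pairwise correlation of a members-by-programs 0/1 registration matrix. A minimal sketch, on a made-up toy matrix of four members and three programs (the real matrix is roughly 1,400 by 20):

```python
import numpy as np

def program_correlations(membership):
    """membership: (n_members, n_programs) 0/1 matrix of program
    registrations. Returns the n_programs x n_programs correlation matrix."""
    return np.corrcoef(np.asarray(membership, float), rowvar=False)

# Toy example: members 1 and 2 pair programs 0 and 1; program 2 draws others.
m = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
    [1, 0, 1],
])
corr = program_correlations(m)
```

In this toy matrix, programs 0 and 1 come out positively correlated (members tend to register for both) and programs 0 and 2 negatively correlated, which is exactly the reading I suggest for the real matrix's cells.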

Some things immediately popped out at me. NBER members who are interested in monetary economics, for instance, also tend to be interested in economic fluctuations and growth. Those who are interested in the economics of education also tend to do work on labor economics. Both of those connections make a great deal of sense!

The areas where economists seem to pick and choose are also fascinating. Labor economists seem to dislike asset pricing. Those interested in economic fluctuations and growth stay away from education. And so on.

Friday, August 28, 2015

What Ails the American Startup?

For all the hoopla about Silicon Valley, the data are clear: These are rough times to be a young business in America. In the early 1980s, about 12 percent of all firms were less than a year old. In 2012, however, only 8 percent were.

This raises a good question: What's going on? Why are new firms struggling to gain a foothold? Data from the Business Dynamics Statistics of the US Census offer an interesting answer: The problem isn't with the startups. It's with the economy in which they are starting up.

To reach that conclusion, though, we first need to learn a little bit about entrepreneurship in America. You've probably heard the factoid that 9 out of 10 restaurants fail in their first year -- it's false, but never mind -- and in fact, only about a quarter of all new firms go bust in their first year. Five years later, 45 percent of firms have survived. It's a pattern, technically called a "survival function," that has repeated itself since at least 1977, when the Census began collecting this data, as the next graph shows.

Let's take that survival function for granted, then, and focus on two specific phenomena. The first is a year-level effect: something that hits all firms in a given year the same amount, no matter when they were founded. The second is a cohort-level effect: something that hits firms founded in a given year the same amount and sticks permanently with that cohort of firms. (Economists: Scroll to the end of the post for the modeling details.)

You might think of the first as a cyclical or structural shock to the economy and the second as whether it was just a big or small "class" of new firms that year. Using the Census data, we can track the number of firms in each cohort for their first five years of existence, allowing us to disentangle the cohort and year effects. We can answer the question: Are the startups getting worse? Or is survival getting harder?

I find that about half of the decline in new firms from 1977 to 2012 can be ascribed to the year-level effect, and that there has been no average change in the cohort-level effect over the same period. The startups aren't that much worse, essentially, but the economy is much harsher towards them. With the same cohort strength but the prior economy, we would have about 200,000 more startups per year -- and about 700,000 more firms less than five years old. Since the US has about 5 million firms, that's a substantial change.

We can compare the actual decline to a counterfactual without the year-level effects:

Here are a few more graphs to make sense of this. The first shows the cohort-level effect, and you should notice the lack of a downward trend, but also the strong cyclicality, which shows the "smothered in the cradle" effect of recessions on new firm formation. High cohort effects can be thought of as years in which lots of startups launched successfully, whereas low cohort effects are bad years, with few successful launches.

The second shows the year-level effect, and you should notice the persistent downward trend, indicating that, for any given firm, survival is becoming harder.

I've also taken the change in the year-level effect, so that we can see more clearly when survival has become harder. What we see, clearly, are two bloodbaths -- the 1980 and 2008 recessions -- and then a slow decline between them, without any obvious cyclicality.

There's a big takeaway here: The decline in new firms seems to be driven by changes that are making new firm survival more difficult in general, not just a decline in the cohort size itself.

*   *   *

Technical explanation

Let n_{f,t} be the log number of firms founded in year f and alive in year t. I specify the model:

n_{f,t} = b_f + b_t + b_{t-f} + e_{f,t},

where the b terms are OLS coefficients and e_{f,t} is an error term. Then b_f can be thought of as a cohort-level effect, b_t as a year-level effect, and b_{t-f} as a survival function. Note that this isn't actually a survival model but rather more of a quick-and-dirty test with panel-data techniques, and if b_t increases year-over-year, the model doesn't make any sense. (Fortunately, this isn't a problem for our data set.)
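The model is an ordinary dummy-variable regression. Below is a rough sketch in Python (not the code behind the post's results), fit to a synthetic, exactly additive panel. One caveat: because age equals year minus cohort, the three dummy sets are collinear -- the classic age-period-cohort problem -- so the minimum-norm least-squares solution illustrates the mechanics rather than uniquely identified coefficients:

```python
import numpy as np

def fit_cohort_year_age(cohorts, years, log_n):
    """Regress log firm counts on cohort, year, and age (year - cohort)
    dummies: n_{f,t} = b_f + b_t + b_{t-f} + e_{f,t}. The dummy sets are
    collinear, so lstsq returns the minimum-norm solution."""
    f = np.asarray(cohorts)
    t = np.asarray(years)
    a = t - f
    def dummies(v):
        levels = np.unique(v)
        return (v[:, None] == levels[None, :]).astype(float)
    X = np.hstack([dummies(f), dummies(t), dummies(a)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(log_n, float), rcond=None)
    return X @ beta                        # fitted values

# Synthetic panel with additive cohort, year, and survival effects:
b_f = {1977: 0.2, 1978: 0.1, 1979: 0.3, 1980: 0.0}       # cohort effects
b_t = {t: -0.05 * (t - 1977) for t in range(1977, 1985)}  # year effects
b_a = {a: -0.1 * a for a in range(5)}                     # survival function
cohorts, years, log_n = [], [], []
for f in b_f:
    for a in b_a:
        cohorts.append(f)
        years.append(f + a)
        log_n.append(10.0 + b_f[f] + b_t[f + a] + b_a[a])
fitted = fit_cohort_year_age(cohorts, years, log_n)
```

On this synthetic panel the regression fits exactly, because the data were built to be additive; the real Census panel, of course, leaves a residual.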

My cleaned dataset is available here.

Tuesday, June 2, 2015

Who Is On the RUC?

For the last year, I have been working to reconstruct the membership of the RUC, which is probably the most important policy entity in healthcare you've never heard of. The short of it is that RUC is a private organization with a critical public function: it advises the Centers for Medicare and Medicaid Services on how to set the relative prices for physician reimbursement within Medicare.

For example, it's the RUC's job to decide that, say, one treatment of a heart attack is equivalent in value to two treatments of pneumonia. It has come under extensive criticism -- see here, here, and here -- for basically being an unaccountable shadow government that acts in the interest of the American Medical Association and specialist doctors, rather than the medical community as a whole, patients, or the taxpayer. To be clear, I am repeating, not endorsing, that phrasing of the critique of RUC.

Initially, it was my intention, working with Judd Cramer, a friend and grad student at Princeton interested in labor economics, to try to link changes in the composition of the RUC to changes in Medicare's relative prices, known in health-policy circles as RVUs. But we never finished the project, mostly because I was overwhelmed with work this year -- I took a more-than-full load of classes and also wrote this research paper as independent work on the side.

Then the plan was to publish the list in an article with extensive commentary and discussion. In particular, I was very interested in potential conflicts of interest among RUC members, as prior work by Roy Poses has shown this to be a real problem. Yet, to do that, I really needed a complete and fully accurate membership list. That, as I have learned over the last few months, is basically impossible. RUC has been overseen by the AMA since 1991. It now has 32 seats, though it has expanded over the years. This means there are 736 person-years to account for. I could get all but 23 of them.

Over the last year, however, various health-policy researchers have found out that I have been working on this project -- and so I have an increasingly long list of people whom I've been telling to wait.

Yet I've decided that it's in the public interest for me just to publish the list already. (It's the document at the top of this post.) I do so with two honest caveats. First, it's incomplete. I'm missing a handful of years for certain seats, as my efforts to track down some person-years failed. Second, there are probably some inaccuracies. I do not think it is ridden with errors, but I would frankly be surprised if I got everything right. That's just the nature of trying to research a body that has made an extraordinary effort to remain cloaked in secrecy. (The type of error that I think is most likely is that I got some of the years wrong. I think all the names are correct; I am pretty sure anyone I claim was on RUC was in fact on RUC, for approximately the period I say they were. My guess is that I will be off by a year, say, for 10 percent of the people.)

Here is how I put this list together: dozens of hours of archival research. First, I managed to track down old AMA Board of Trustees reports. Those sometimes contained RUC appointments. Second, the medical-specialty newspapers and journals often mention who is currently serving on the RUC on the specialty's behalf. Third, the résumés and websites of ex-RUC doctors often list their full years of service; sometimes you can also find these in articles in the medical-specialty publications when they retire. Fourth, the AMA recently began publishing the current membership as part of an (admirable, but highly incomplete) effort towards transparency. Fifth, I relied on other efforts that Roy Poses and Brian Klepper, among others, have made to identify RUC members.

I will also try to release some of the related research that I have done on RUC in the coming days. It was past time for me, however, to share this document. Thank you to the many who helped or cheered along this project.

SNAP and Food Security

"SNAP and Food Security: Evidence from Terminations" is the title of my first-ever working paper, which I wrote for my junior-year independent work at Princeton. What I do in the paper is try to measure very carefully the effect of participating in SNAP on households' food security, and the basic idea of how I do that is pretty simple:
[C]onsider two similar groups of households. The first group receives SNAP benefits in both November and December of a given year. The second group receives SNAP benefits in November but not in December. The difference in December food security between the two groups provides an intuitive estimate of the effect of SNAP benefits on food security in December.
 With that kind of comparison in mind, here's what I find:
SNAP participation increases the probability of food security by 10 percentage points (22 percent), with gains concentrated in reducing the probability of extreme food insecurity by 8 percentage points (36 percent), an effect that is broadly comparable to that of a change in household income from $10,000 to $20,000.
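The comparison in the first block quote amounts, at its core, to a difference in means. A toy sketch with made-up food-security indicators -- a stand-in for the paper's actual estimation, which is far more careful about making the two groups comparable:

```python
import numpy as np

def termination_estimate(fs_continuing, fs_terminated):
    """Difference in December food-security rates between households that
    kept SNAP benefits and similar households whose benefits ended."""
    return np.mean(fs_continuing) - np.mean(fs_terminated)

# Made-up indicators: 1 = food secure in December, 0 = not.
continuing = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # 80 percent food secure
terminated = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # 60 percent food secure
effect = termination_estimate(continuing, terminated)   # 0.20
```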
Naturally, there's a whole lot more in the paper itself.