Showing posts with label assumptions. Show all posts

Friday, March 17, 2017

A Lawyer and a Physicist Walk Into a Bar

by Levi Russell

A lawyer and a physicist walk into a bar... 

I don't have a good joke for that intro, but I do have a punchline: physicist Mark Buchanan's recent Bloomberg View column entitled "The Misunderstanding at the Core of Economics." What is this misunderstanding, you ask? Well, it's the (mistaken) belief that markets are perfect. This belief, Buchanan alleges, is widely held among professional economists. Buchanan argues that this widespread belief has had tragic consequences:
Economists routinely use the framework to form their views on everything from taxation to global trade -- portraying it as a value-free, scientific approach, when in fact it carries a hidden ideology that casts completely free markets as the ideal. Thus, when markets break down, the solution inevitably entails removing barriers to their proper functioning: privatize healthcare, education or social security, keep working to free up trade, or make labor markets more “flexible.”

Those prescriptions have all too often failed, as the 2008 financial crisis eloquently demonstrated. ...
The trouble with all of this is that none of it is true. If political party affiliation is any indication, pro-market utopianism isn't widespread: academic economists are overwhelmingly Democrats. Another survey indicates that a mere 8% of academic economists can be considered supporters of free-market principles and only 3% strong supporters. As for economists, Buchanan's only reference is to the late Kenneth Arrow; he provides no evidence that a massive swath of the profession is made up of free-market ideologues incapable of nuance. Buchanan also cites one newly popular economic commentator, the historian and lawyer James Kwak. Kwak has been roundly criticized by economists for his simplistic analysis of economic phenomena, notably here, here, here, and here, and many other times over the years.

So-called "free market" economists are far more nuanced in their views of market and government solutions to the problems in our imperfect world inhabited by imperfect human beings. A short but accurate summary would be something like: "In the real world, markets are, for the most part, better at dealing with externalities and other economic problems than actually-existing governments staffed by actually-existing politicians and bureaucrats." That is, no institutional arrangement is perfect, but the problems associated with voluntarily and spontaneously generated institutions are usually relatively minor when compared with those associated with institutions designed by a central authority. Examples of this nuanced position can be found in previous Farmer Hayek posts here, here, here, here, here, and here, as well as in the writings of Jim Buchanan, Gordon Tullock, Deirdre McCloskey, Pete Boettke, etc. A closer reading of these and other "free market" economists might change Buchanan's mind about the types and level of analysis that leads to "free market" conclusions.

Saturday, December 31, 2016

Testing Market Failure Theories

by Levi Russell

I recently picked up a copy of Tyler Cowen and Eric Crampton's 2002 edited volume Market Failure or Success: The New Debate (now only in print with the Independent Institute, though it was originally published by Edward Elgar) and have really enjoyed what I've read so far. The book is a collection of essays by prominent IO scholars organized into four sections: a fantastic introduction by the editors, four essays that form the foundation of the "new" market failure theories based on information problems, four theoretical critiques of said theories, and eight essays providing empirical and experimental evidence for the editors' thesis: that information-based market failure theory is often merely a theoretical possibility not borne out in real life and that economic analysis of knowledge often provides us with the reasons why.

Two pieces by Stiglitz are featured in the first theoretical section: one on information asymmetries and wage and price rigidities and the other on the incompleteness of markets. Akerlof's famous "lemons" paper and Paul David's paper on path dependence are also included. I was happy to see that Demsetz's "Information and Efficiency: Another Viewpoint" was the first essay in the theoretical critique section as it sets the stage for the other chapters in that section. The empirical and experimental section features Liebowitz and Margolis' response to Paul David on path dependence in technology, Eric Bond's direct test of Akerlof's "lemons" model, and an essay I've never read by Gordon Tullock entitled "Non-Prisoner's Dilemma."

The introduction provides a short summary of the arguments presented in the following three sections and includes a great discussion of the editors' views of the core problems with information-based market failures. Here's the conclusion of the intro chapter:
Our world is a highly imperfect one, and these imperfections include the workings of markets. Nonetheless, while being vigilant about what we will learn in the future, we conclude that the 'new theories' of market failure overstate their case and exaggerate the relative imperfections of the market economy. In some cases, the theoretical foundations of the market failure arguments are weak. In other cases, the evidence does not support what the abstract models suggest. Rarely is analysis done in a comparative institutional framework.
The term 'market failure' is prejudicial - we cannot know whether markets fail before we actually examine them, yet most of market failure theory is just theory. Alexander Tabarrok (2002) suggests that 'market challenge theory' might be a better term. Market challenge theory alerts us to areas where markets might fail and encourages us to seek out evidence. In testing these theories, we may find market failure or we may find that markets are more robust than we had previously believed. Indeed, the lasting contribution of the new market failure theorists may be in encouraging empirical research that broadens and deepens our understanding of markets.
We believe that the market failure or success debate will become more fruitful as it turns more to Hayekian themes and empirical and experimental methods. Above, we noted that extant models were long on 'information' - which can be encapsulated into unambiguous, articulable bits - and short on the broader category of 'knowledge,' as we find in Hayek [Hayek's 1945 article The Use of Knowledge in Society can be read here for free. A short explanation of the main theme of the article can be found here. - LR]. Yet most of the critical economic problems involve at least as much knowledge as information. Employers, for instance, have knowledge of how to overcome shirking problems, even when they do not have explicit information about how hard their employees are working. Many market failures are avoided to the extent we mobilize dispersed knowledge successfully. 
It is no accident that the new market failure theorists have focused on information to the exclusion of knowledge. Information is easier to model, whereas knowledge is not, and the economics profession has been oriented towards models. Explicitly modeling knowledge may remain impossible for the immediate future, which suggests a greater role for history, case studies, cognitive science, and the methods of experimental economics. 
We think in particular of the experimental revolution in economics as a way of understanding and addressing Hayek's insights on the markets and knowledge; Vernon Smith, arguably the father of modern experimental economics, frequently makes this connection explicit. Experimental economics forces the practitioner to deal with the kinds of knowledge and behavior patterns that individuals possess in the real world, rather than what the theorist writes into an abstract model. The experiment then tells us how the original 'endowments' might translate into real world outcomes. Since we are using real world agents, these endowments can include Hayekian knowledge and not just narrower categories of information.
Experimental results also tend to suggest Hayekian conclusions. When institutions and 'rules of the game' are set up correctly, decentralized knowledge has enormous power. Prices and incentives are extremely potent. The collective result of a market process contains a wisdom that the theorist could not have replicated with pencil and paper alone.

Monday, December 19, 2016

More on Contestability and the Baysanto Merger

by Levi Russell

In a previous post, I discussed monopoly concerns with Bayer's acquisition of Monsanto. The deal was recently approved by Monsanto shareholders but will likely face significant scrutiny from anti-trust regulators.

In the previous post, I went through a paper by several Texas A&M economists that examined the likely consequences of the acquisition for several row crop seed prices. In this post, I'll make some other comments on contestability.

The A&M paper sticks to standard IO theory:
Concentrated markets do not necessarily imply the presence of market power. Key requirements for market contestability are: (a) Potential entrants must not be at a cost disadvantage to existing firms, and (b) entry and exit must be costless.
In contrast to standard IO theory, VRIO analysis suggests costs are always lower for incumbent firms. Managers of incumbent firms have experience with the specific marketing, managerial, and financial aspects of the industry that new entrants simply don't have, or must obtain at additional cost.

Does this imply that no industry is "contestable" in an abstract sense? No. As I pointed out previously, prices are falling in many industries, even in those in which entry would entail 1) significant advantages for incumbents and 2) significant sunk costs. It does imply that the conditions for "contestability" are broader than the standard definition. The resource-based view of the firm provides an alternative view of contestability: The advantages for incumbents and potential sunk costs must simply be small enough that they are outweighed by an entrepreneur's expectation of economic profit associated with entering the industry.
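The resource-based entry condition just stated can be written as a one-line decision rule. The sketch below is a stylized illustration under the post's broader definition of contestability; the function name and all numbers are my own illustrative assumptions, not anything from the literature cited.

```python
# Stylized entry decision under the resource-based view of contestability:
# an entrepreneur enters when expected economic profit outweighs sunk costs
# plus the incumbent's cost advantage. All values are illustrative.

def should_enter(expected_profit, sunk_costs, incumbent_advantage):
    # Entry is worthwhile only if profit clears both hurdles combined.
    return expected_profit > sunk_costs + incumbent_advantage

# An industry can be "contestable" in this broader sense even with positive
# sunk costs and incumbent advantages, provided expected profits are large enough:
enters = should_enter(expected_profit=10.0, sunk_costs=4.0, incumbent_advantage=3.0)
```

On this view, the strict textbook requirement of "costless" entry and exit becomes a special case in which both hurdle terms are zero.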

So, when we see apparent divergences between price and marginal cost, as I see it there are three possibilities:

1) there are costs we as third-party observers don't see
2) the economic profit is associated with short-term returns to innovation (e.g. monopolistic competition)
3) there is a legal barrier to entry that is extraneous to the market itself.

This dynamic perspective (which I argue is easily teachable to undergrads) is much more powerful in advancing our understanding of real-world market behavior. Yes, the more unrealistic assumptions made in standard theory allow for more elegant mathematical modeling, but if our goal is to understand causal factors associated with firm behavior, the resource-based view of the firm, VRIO analysis, and other dynamic theories are more useful.

Sunday, October 23, 2016

Monopoly Concerns with Baysanto

by Levi Russell

The recent merger of DuPont/Pioneer with Dow and the acquisition of Monsanto by Bayer have sparked a lot of discussion of market concentration, monopoly, and prices. A recent working paper published by the Agriculture and Food Policy Center (AFPC) at Texas A&M University written by Henry Bryant, Aleksandre Maisashvili, Joe Outlaw, and James Richardson estimates that, due to the merger, corn, soybean, and cotton seed prices will rise by 2.3%, 1.9%, and 18.2%, respectively. They also find that "changes in market concentration that would result from the proposed mergers meet criteria such that the Department of Justice and Federal Trade Commission would consider them “likely to enhance market power” in the seed markets for corn and cotton." (pg 1) The paper is certainly an interesting read and I have no quibble with the analysis as written. However, some might draw conclusions from the analysis that, in light of other important work in industrial organization, are not well-founded.

The first thing I want to point out is that mergers and acquisitions can, at least potentially, result in innovations that would justify increases in the prices of the merged firm's products. To the extent that VRIO analysis is descriptive of firms' behavior with respect to innovation, we would expect that better entrepreneurs would be able to price above marginal cost. Harold Demsetz made this point in his 1973 paper Industry Structure, Market Rivalry and Public Policy. The authors of the AFPC study point this out as well, but the problem is that, even though we have estimates of potential price increases due to the mergers, it is very difficult to determine whether any change in price in the future is actually attributable to market power or simply due to innovation in the seed technology.

Secondly, the standard models of monopoly assert that pricing above marginal cost is at least potentially a sign of a firm exercising market power. Here, articles by Ronald Coase and Armen Alchian are relevant. I provided a discussion of the relevant portions in a previous post so I'll just briefly summarize here: pricing above marginal cost is an important signal that the current market demand is potentially not being met by the firms in the industry. It's a signal to other potential investors that entering the industry might be worth it. Further, there is an issue of measurement. Outside observers may calculate fixed cost, variable cost, and price and determine that a firm is pricing above marginal cost. However, there may be costs of which said observers are unaware. For example, there may be significant uncertainty (which is not the same as risk) about the future prospects of the industry. This is certainly possible in the biotechnology industry since the government heavily regulates firms in this sector. This is not to say that such regulation is bad or should be removed, simply that it presents costs that are difficult for outsiders to calculate.

Finally, I want to examine one part of the analysis in the AFPC paper. On pages 10 and 11, the authors write (citations deleted):
A market is contestable if there is freedom of entry and exit into the market, and there are little to no sunk costs. Because of the threat of new entrants, existing companies in a contestable market must behave in a reasonably competitive manner, even if they are few in number.
Concentrated markets do not necessarily imply the presence of market power. Key requirements for market contestability are: (a) Potential entrants must not be at a cost disadvantage to existing firms, and (b) entry and exit must be costless. For entry and exit to be costless or near costless, there must be no sunk costs. If there were low sunk costs, then new firms would use a hit and run strategy. In other words, they would enter industry [sic], undercut the price and exit before the existing firms have time to retaliate. However, if there are  high sunk costs, firms would not be able to exit without losing significant [sic] portion of their investment. Therefore, if there are high sunk costs, hit-and-run strategies are less profitable, firms keep prices above average costs, and markets are not contestable. 
I submit that under this definition, scarcely any industry on the planet is contestable, yet we see prices fall in many industries over time, even in those we would expect to have significant sunk costs and in which we would expect incumbents to have significant cost advantages over new entrants.

It's true that we sometimes must make simplifying assumptions that are at odds with reality to forecast future market conditions. However, some might infer from the AFPC paper (though I stress that the authors do not) that something must be done by anti-trust authorities to unwind the mergers and acquisitions under discussion. To infer this would be to commit the Nirvana Fallacy. To expect anything in the real world (whether in markets or in the policymaking arena) to be "costless" is an impossible standard.

It will be interesting to see what becomes of these mergers and whether seed prices move sharply upward in coming years. What is certain is that there is tremendous causal density in any complex system, such as the market for bio-engineered seed. Thus, policymakers should be humble and cautious about applying the results of theoretical and statistical analysis in their attempts to better our world.

Thursday, October 20, 2016

Klingian Philosophy of Economic Science

by Levi Russell

One of my favorite things to do in this blog is to talk about unconventional perspectives on economic theory. A great source for such unconventional views is Arnold Kling's blog. The recent Nobel Prize awarded to Oliver Hart and Bengt Holmstrom prompted Kling to write a series of posts detailing his views on economic theory, specifically about the epistemology of economics. Kling's own brand of unconventionality is especially interesting given that he received his PhD from MIT. Below I reproduce a post from last week:

A commenter writes,
So in your opinion intuition is sufficient. As long as we can tell an intuitive story about something, that is as good as proving it?
I think that “proof” is too high a standard to use in economics. If our knowledge is limited to what we can prove, then we do not know anything. I think that we have frameworks of interpretation which give us insights. This is knowledge, even if it is not as definitive or reliable as knowledge in physics or chemistry.

As an example, take factor-price equalization. The insight is that the easier it is to trade across countries, the more that factor prices will tend to converge. I think that this is an important insight. It is one of what I call the Four Forces driving social and economic trends in recent decades. (The other three are assortative mating, the shift away from manufacturing toward health care and education, and the Internet.)

Paul Samuelson proved a “factor-price equalization theorem” for a special case of two factors, two goods and two countries. However, it is very difficult, if not impossible, to extend that theorem to make it realistic, including the fact that not all industries are subject to diminishing returns. In my view, Samuelson’s theorem per se offers no insight, because it is so narrow in scope. The unprovable broader insight is what is useful.

Incidentally, I also think that factor-price equalization is hard to prove statistically. Too many other things are happening at once to be able to say definitively that factor-price equalization is having an effect, say, on unskilled workers’ wages in the U.S. and China. I believe that it is having an effect, and there are studies that support my view, but it is not provable.

In order to prove something mathematically, you have to make narrow assumptions. In physics or engineering, this often works out well. When you roll a ball down an inclined plane, ignoring friction causes only a small error in the calculation.

In economics, the factors that you leave out in order to build a mathematical model tend to be more important. As a result, the requirement to express ideas in the form of mathematical models is harmful in two ways. We waste time proving false theorems and we miss out on useful insights.
The narrow assumptions lead you to prove something which is false in the real world. For example, the central insight of the “market for lemons” proof is that a used car market cannot work. However, once we expand the assumptions to allow for warranties, dealer reputations, mechanics’ inspections, and so on, the original theorem does not hold.

Meanwhile, there are insights that are missed because they cannot be represented in an elegant mathematical way. A lot of the insights that I offer in Specialization and Trade fall in that category.
Our goal should be to acquire knowledge. The demand for proof hurts rather than helps with that process.
Bonus: I really enjoyed this piece from the Sloan Management Review published back in 2011.

Teaser: I'll be giving my thoughts on the Baysanto merger later this week or weekend.

Thursday, September 15, 2016

Coase and Hog Cycles

by David Williamson

If you read this blog, then you're probably familiar with Ronald Coase's work on the importance of transaction costs. But did you know that Coase devoted a substantial portion of his early career to criticizing the Cobweb Model? He actually wrote 4 separate articles on the subject between 1935 and 1940, but not one makes Dylan Matthews's list of Coase's top-five papers. This work is actually really fascinating in the context of economic intellectual history, so here is a quick summary!

The 1932 UK Reorganization Commission for Pigs and Pig Products Report

It all started when the UK Reorganization Commission for Pigs and Pig Products claimed in a 1932 report that government intervention was needed to stabilize prices in the hog industry. The Commission found that hog prices followed a 4-year cycle: two years rising and two years falling. The Commission explained this cyclical behavior using the Cobweb Model. In this model, products take time to produce. So, to know how much to produce, firms have to guess what the price will be when their product is ready to bring to the market. If producers are systematically mistaken about what prices will be, this could lead to predictable cycles in product spot prices.

The Cobweb Model

How forecasting errors can lead to cycles in product prices is illustrated in the figure below. Suppose we begin time at period 1 and hog producers bring Q1 to the market to sell. Supply is essentially fixed this period because producers can't produce more hogs on the spot, so the price that prevails on the market will be P1. Since this price exceeds the marginal cost of production (represented by S), the individual producers wish they had produced more. Now, when the producers go back home to produce more hogs, they have to guess what the price will be when their hogs are ready to sell. Suppose it will take 2 years to produce more hogs. The UK Reorganization Commission argued that hog producers will assume the price of hogs next period will be the same as it was this period (in other words, that producers had "static" expectations about price). That means, in this context, hog producers think the price of hogs in 2 years will still be P1. So each producer will individually increase production accordingly. However, when the producers return to the market in 2 years, they will find that everyone else increased production too and that quantity supplied is now Q2. As a result, the price plummets to P2 and the producers actually lose money. Not learning their lesson, the hog producers will again go home, assume that the price next period will be P2, and collectively cut back their production to Q3. Hopefully you see where this is going, even if the hog producers don't. The price will go up again in 2 years and then down again in 2 more. Thus, we have a 4-year cycle in hog prices. How long will this cycle continue? That depends on the elasticities of supply and demand. If demand is less elastic than supply, as was believed to be the case in the hog market, then the price swings will continue forever and only get bigger as time goes on.

[Figure: Cobweb model with divergent price oscillations. Source: Wikipedia]
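The dynamics described above can be sketched in a few lines of code. This is a minimal simulation of the cobweb model under static expectations; the linear demand and supply parameters are illustrative assumptions, not estimates from the hog market.

```python
# Cobweb model with static ("naive") price expectations.
# Inverse demand: P_t = a - b * Q_t
# Supply:         Q_t = c + d * P_{t-1}  (producers expect last period's price)
# Parameters are illustrative; oscillations diverge when b * d > 1,
# i.e., when demand is less elastic than supply.

def cobweb(a=100.0, b=1.0, c=0.0, d=1.2, p0=40.0, periods=8):
    prices = [p0]
    for _ in range(periods):
        q = c + d * prices[-1]   # quantity producers bring to market
        p = a - b * q            # price that clears the market
        prices.append(p)
    return prices

prices = cobweb()
equilibrium = 100.0 / 2.2  # where demand crosses supply: a / (1 + b*d), with c = 0
```

With b * d = 1.2, each period's deviation from equilibrium is 20% larger than the last, matching the Commission's claim that the swings "only get bigger as time goes on"; setting d below 1/b would instead produce a converging spiral.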

Coase Takes the Model to the Data

The Cobweb Model is really clever, but does it actually capture the reality of the hog market? Coase and his co-author Ronald Fowler tried to answer that question by evaluating the model's assumptions. First, are hog producer expectations truly static? Expectations cannot be observed directly, but Coase and Fowler (1935) used market prices to try to infer whether producer expectations were static. It didn't seem like they were. Second, does it really take 2 years for hog producers to respond to higher prices? Coase and Fowler (1935) spent a lot of time discussing how hogs are actually produced. They found that the average age of a hog at slaughter is eight months and that the period of gestation is four months. So a producer could respond to unexpectedly higher hog prices in 12 months (possibly even sooner since there were short-run changes producers could also make to increase production). So why does it take 24 months for prices to complete their descent? Even if we assumed producers have static expectations, shouldn't we expect the hog cycle to be 2 years instead of 4?

This evidence is hard to square with the Cobweb Model employed by the Reorganization Commission, but Coase's critics were not convinced. After all, if it wasn't forecasting errors that were driving the Hog Cycle, then what was? "They have, in effect, tried to overthrow the existing explanation without putting anything in its place" wrote Cohen and Barker (1935). Coase and Fowler (1937) attempted to provide an explanation, but this question would continue to be debated for decades.

The Next Chapter

Ultimately, John Muth (1961) proposed a model that assumed producers did not have systematically biased expectations about future prices (in other words, that they had "rational" expectations). Muth argued this model yielded implications that were more consistent with the empirical results found by Coase and others. For example, rational expectations models generated cycles that lasted longer than models that assumed static or adaptive expectations. So a 4-year hog cycle no longer seemed as much of a mystery. I'm not sure what happened to rational expectations after that. I hear they use it in Macro a bit. Anyways, if you are interested in a more detailed summary of Coase's work on the Hog Cycle, then check out Evans and Guesnerie (2016). I found this article on Google while I was preparing this post and it looks very good.

References

Evans, George W., and Roger Guesnerie. "Revisiting Coase on anticipations and the cobweb model." The Elgar Companion to Ronald H. Coase (2016): 51.

Coase, Ronald H., and Ronald F. Fowler. "Bacon production and the pig-cycle in Great Britain." Economica 2, no. 6 (1935): 142-167.

Coase, Ronald H., and Ronald F. Fowler. "The pig-cycle in Great Britain: an explanation." Economica 4, no. 13 (1937): 55-82.

Cohen, Ruth, and J. D. Barker. "The pig cycle: a reply." Economica 2, no. 8 (1935): 408-422.

Muth, John F. "Rational expectations and the theory of price movements." Econometrica: Journal of the Econometric Society (1961): 315-335.

Monday, July 11, 2016

Some Nuance on the $15 Minimum Wage

by Levi Russell

Adam Millsap at the Mercatus Center has a great short piece on the effect the $15 minimum wage would have on labor markets. Though Millsap criticizes the $15 minimum wage, he does it in a very different way than any I've seen.

He takes as a starting point Arindrajit Dube's conjecture that the minimum wage should be set at 50% of the median wage. It's important to note that Dube is actually a proponent of the $15 minimum wage but believes that it could create problems, especially if the ratio is above 80%.

Millsap uses data from Washington D.C. and Minneapolis, MN to calculate the (projected) ratio of the $15 minimum wage to the median wage in each of these cities. Millsap shows that in Minneapolis, the $15 minimum wage is projected to be 86% of the median wage for people 16 years of age and older. In D.C., the ratio is only 53%.
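The ratios above are simple divisions, sketched below. The median-wage figures here are hypothetical values back-solved to reproduce the cited percentages; they are not Millsap's underlying data.

```python
# Ratio of a proposed minimum wage to the local median wage.
# Median wages below are hypothetical, chosen only to match the
# ratios cited in the post (86% for Minneapolis, 53% for D.C.).

def min_to_median(minimum_wage, median_wage):
    return minimum_wage / median_wage

minneapolis = min_to_median(15.0, 17.44)  # ~0.86, above Dube's 80% warning line
dc = min_to_median(15.0, 28.30)           # ~0.53, close to Dube's 50% benchmark
```

The same $15 floor lands very differently depending on the local wage distribution, which is the core of Millsap's point.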

So, given Dube's preference for a minimum wage set at 50% of the median wage and warning about a minimum wage over 80% of the median, the $15 minimum is potentially very problematic for cities like Minneapolis. I imagine that it would be far worse for smaller rural communities.

Thursday, June 23, 2016

Zoning Laws Past and Present

In the past I've linked to material on community and regional economics that I found interesting. I'm not a community/regional economist, but regulation (a subject I'm very interested in) is often discussed in regional economics analyses.

This article from the NY Times is one such example. The article discusses the various zoning regulations currently in place that would effectively prohibit the construction of 40% of the existing buildings in Manhattan, NY today. The article does a great job of explaining the various regulations and mapping out where the violations of each type exist in the city. Working on the (perhaps incorrect) assumption that agglomeration is a "good thing," it makes me wonder just how much economic growth New Yorkers are missing out on due to current regulations. We'll never know.

Sunday, June 19, 2016

Specialization and Trade - A Reintroduction to Economics

That's the title of Arnold Kling's newest book. It's published by the Cato Institute and is available in e-book format on Amazon for a mere $3.19. You can also download a PDF copy here free. Arnold Kling is an MIT trained economist who spent the bulk of his professional economic career at the Federal Reserve and Freddie Mac. Kling's blog, one of the best on the web in my opinion, is always thought-provoking. As the title of his blog suggests, he makes every effort to understand and fairly state the positions of those with whom he disagrees.

I read a couple of blurbs about the book last week and have only just finished the first chapter. So, rather than write a review, I'll reproduce a section of the Introduction that gives a short description of each chapter. Kling certainly has a unique perspective and I suspect I'll learn a lot from this relatively short book.
“Filling in Frameworks” wrestles with the misconception that economics is a science. This section looks at the difficulties that economists face in trying to adopt scientific methods. I suggest that economics differs from the natural sciences in that we have to rely much less on verifiable hypotheses and much more on hard-to-verify interpretative frameworks. Economic analysis is a challenge, because judging interpretive frameworks is actually harder than verifying scientific hypotheses. 
“Machine as Metaphor” attacks the misconception held by many economists and embodied in many textbooks that the economy can be analyzed like a machine. This section looks at a widely used but misguided approach to economic analysis, treating it as if it were engineering. The economic engineers are stuck in a mindset that grew out of the Second World War, a conflict that was dominated by airplanes, tanks, and other machines. Their approach fails to take account of the many nonmechanistic aspects of the economy. 
“Instructions and Incentives” deals with the misconception that economic activity is directed by planners. This section explains that although people within a firm are guided to tasks through instruction from managers, the economy as a whole is not coordinated that way. Instead, the price system functions as the coordination mechanism. 
“Choices and Commands” is concerned with the misconceptions held by socialists and others who disparage the market system. This section explains why a decentralized price system can work better than a centralized command system. Central planning faces an information problem, an incentive problem, and an innovation problem. 
“Specialization and Sustainability” exposes the misconception that we must undertake extraordinary efforts in order to conserve specific resources. This section explains how the price system guides the economy toward sustainable use of resources. In contrast, individuals who attempt to override the price system through their individual choices or by imposing government regulations can easily miscalculate the costs of their actions. 
“Trade and Trust” addresses the misconception among some libertarians that the institutional infrastructure needed to support specialization and trade is minimal. Instead, this section suggests that for specialization to thrive, societies must reward and punish people according to whether they play by rules that facilitate specialization and trade. A variety of cultural norms, civic organizations, and government institutions serve this purpose, but each of those institutions has its drawbacks. 
“Finance and Fluctuations” deals with the misconceptions about finance that are common among economists, who often fail to appreciate the process of financial intermediation. This section looks at the special role played by financial intermediaries in enabling specialization. Intermediation is particularly dependent on trust, and as that trust ebbs and flows, the financial sector can amplify fluctuations in the economy’s ability to create patterns of sustainable specialization and trade. 
“Policy in Practice” corrects the misconception that diagnosis and treatment of “market failure” is straightforward. This section looks at challenges facing economists and policymakers trying to use the theory of market failure. The example I use is housing finance policy during the run-up to the financial crisis of 2008. The policy process was overwhelmed by the complexity of the specialization that emerged in housing finance. Moreover, the basic thrust of policy was determined by interest-group influence. The lesson is that a very large gap exists between the economic theory of public goods and the practical execution of policy. 
“Macroeconomics and Misgivings” argues that it is a misconception, albeit one that is well entrenched in the minds of both professional economists and the general public, to think of the economy as an engine with spending as its gas pedal. This section presents an alternative to the mainstream Keynesian and monetarist traditions. I argue that fluctuations in employment arise from changes in the patterns of specialization and trade. Discovering new patterns of sustainable specialization and trade is more complex and subtle and less mechanical than what is assumed by the Keynesian and monetarist traditions.

Friday, June 10, 2016

We're All Utilitarians Now?

by Levi Russell

As an avid EconTalk listener, I often hear Russ Roberts, the host, talk about his skepticism of many aspects of modern economics. I'm usually at least a little sympathetic with Russ's point of view, but a recent Wall Street Journal interview featuring Roberts threw me off:
Economics fancies itself a science, and Mr. Roberts used to believe, as many of his peers do, that practitioners could draw dispassionate conclusions. But he has in recent years undergone something of a crisis of economic faith. "The problem is, you can't look at the data objectively most of the time," he says. "You have prior beliefs that are methodological or ideological about the impact of things, and that inevitably color the assumptions you make." 
A recent survey of 131 economists by Anthony Randazzo and Jonathan Haidt found that their answers to moral questions predicted their answers to empirical ones. An economist who defines "fairness" as equality of outcome might be more likely to say that austerity hurts growth, or that single-payer health care would bend the cost curve. The paper's authors quote Milton Friedman's brief for "value-free economics" and reply that such a thing "is no more likely to exist than is the frictionless world of high school physics problems."
I certainly think our interests and ideology can steer us into asking certain questions, but I'm not sure I agree that it affects the results of our analyses as much as Roberts seems to think. The deeper issue just might be the following: our cost/benefit analyses implicitly assume a utilitarian worldview. Thus, when asked about our policy views, we are more likely to narrow our own morality to fit within the confines of utilitarianism. If a cost/benefit analysis comes out in favor of Policy X, are we not expected to favor Policy X even if our analysis didn't include other moral goods such as freedom or justice? Are we, as economists, all utilitarians?

The other day I happened to run across an article by philosopher Rutger Claassen in the Journal of Institutional Economics entitled "Externalities as a Basis for Regulation: A Philosophical View" that addresses this deeper issue. Here's an excerpt from the introduction:
Thus, the main question of the paper simply is: when should an externality be reason for state intervention? Which externalities deserve internalization? The aim of the paper is to show that the utilitarian criterion for answering this question which is embedded in economic analyses is implausible. Instead, I will argue that we need to follow those philosophers who have argued in the line of John Stuart Mill, in favor of the harm principle. Externalities are structurally analogous to harms in political philosophy. Work on the harm principle, however, points to the need for a theory of basic human interests to operationalize the concept of harm/externalities. In the end, therefore, we need to fill in judgments about externalities with judgments about basic human interests. If my analysis is convincing, then one overarching point of importance for the whole tradition of market failure theories emerges. This is that the customary attitude to the issue, to juxtapose economic theories and philosophical grounds for regulation, is highly problematic. It is telling that most handbooks on regulation start with an overview of market failures, and then add to these efficiency-based rationales some philosophical reasons for regulating: usually social justice (equity) reasons and moralistic/paternalist reasons. Instead we need to integrate both frameworks, by showing how philosophical presuppositions are at work within economic categories of market failure.
The author begins by discussing Pigovian and Coasean perspectives on externalities and how to deal with them. Claassen does a good job explaining both perspectives and mentions that transaction costs are a problem for both market participants and for government regulators.

The bulk of the article is dedicated to Claassen's criticism of the utilitarian perspective taken in most economic policy analysis and to his discussion of the harm principle as a better basis for normative analysis in economics. Specifically, he discusses 1) moral externalities, which arise "from preferences about other people's behavior," 2) pecuniary externalities, which are losses/gains due to changes in consumer preferences, technological innovation, or competition, and 3) positional externalities, which "arise where consumers lose welfare because they compare themselves to others."

He concludes the section:
These cases point to different problems with a purely utilitarian calculus: it ignores issues of individual freedom (moralistic externalities) and justice (pecuniary externalities); and the calculus itself is highly indeterminate (positional externalities). However, philosophers thus far have been stronger at criticizing economic externality analyses than at providing an alternative. Can we find a more solid ground for a normative analysis of externalities?
The rest of the article develops his theory of "basic interests" and applies the theory to the Supreme Court's June 2012 decision on the "individual mandate" found in the Affordable Care Act (Obamacare). I leave these to the interested reader.

Here's Claassen's conclusion:
This paper has aimed to establish three conclusions. First, economic externalities analyses are problematic because they ignore important normative considerations about individual freedom and justice, largely due to their utilitarian grounding (section 3). Second, some philosophers have proposed to exploit the analogy with the harm principle in liberal political philosophy. However, if we follow up on this suggestion and explore representative theories of harm (such as those by Joel Feinberg or Joseph Raz), this points to the need for a theory of basic human interests that does the real normative work in diagnosing harms. Such a theory is needed to evaluate which externalities call for state regulation (section 4). Third, what these basic interests are, in the end, is a matter of political dispute. Economists who have complained about the politicization of externality analyses have simply failed to accept the inherently political nature of questions about the organization of social and economic life. [emphasis mine]
Claassen's paper raises some important issues with the current moral underpinnings of economic analysis and challenges us to think more deeply about the assumptions we make about morality in normative analysis. As policymakers rely more and more on economic analysis, it's good to see these issues being addressed.

Thursday, May 5, 2016

Intentions, Faith, and the Nirvana Fallacy

I've addressed the Nirvana Fallacy several times on this blog, and I keep finding new examples of it, especially in the popular press. Many economists seem to be unaware of this fallacy, and Mark Thoma is no exception. I've critiqued him previously on this issue, but his most recent commission of the fallacy is especially interesting. Below I share key parts of his recent CBS News column (in block quotes) with some of my commentary.

The Nirvana Fallacy, as put forth by UCLA economist Harold Demsetz, is the comparison of real-world phenomena to unrealistic ideals. The mere fact that economic models can specify a perfect policy solution to a problem doesn't imply that real-world political and legal institutions can successfully implement that policy. More importantly, though, market imperfections that result from informational inefficiencies can't readily be solved by governments, because governments themselves lack the necessary information.

In addition to being quite confident about the ability of economic models to generate policies that "break up monopoly" and "force firms to pay the full cost of pollution they cause," Thoma seems to put a lot of stock in the intentions of regulators and politicians.
When government steps in to try to correct these market failures -- breaking up a monopoly, regulating financial markets, forcing firms to pay the full cost of the pollution they cause, ensuring that product information is accurate and so on -- it's not an attempt to interfere with markets or to serve political interests. It's an attempt to make these markets conform as closely as possible to the conditions required for competitive markets to flourish. 
The goal is to make these markets work better, to support the market system rather than undermine it.
It may very well be that all legislators and regulators have the purest of intentions. Even so, that doesn't imply that their policies will actually achieve the results they desire. Good intentions are a necessary but not sufficient condition for efficient and effective government solutions. Decades of work in public choice economics and more recent work in behavioral public choice show that the implementation of government policies is fraught with its own government failures. Why doesn't Thoma mention these?

Perhaps the clearest example of the Nirvana Fallacy in Thoma's column comes a few paragraphs down:
In other cases, it's less well understood that failure is the reason for the government to regulate a market, or even provide the goods and services itself. Social security and health care come to mind. But once again, the private sector's failure to deliver these goods at the lowest possible price, or to deliver them at all, is at the heart of the government's involvement in these markets. (emphases mine)
Here we have Thoma's standard for real-world markets: they must deliver certain goods and services at the lowest possible price. What does he mean by "possible"? Possible in the abstract world of economic theory? Why is this a relevant comparison? Does Thoma also propose we hold the actual activities of politicians and regulators to such an ideal?

Further, I'm not sure what he means by "deliver them at all." We have accidental death and dismemberment insurance, life insurance, and health insurance in private markets and have had them for a long time. We've had health care for much longer than the government has been as heavily involved as it is now. In fact, the evidence suggests that political favoritism killed a very useful alternative health care system for the poor and blue-collar folks back in the 1930s. On the insurance side of things, it's at least plausible that increases in payroll taxes decades ago helped bring about employer-provided insurance and exacerbate the problem of preexisting conditions.

Finally, let's unpack the last two paragraphs in Thoma's column. He writes:
Conservatives tend to have more faith in the ability of markets to self-correct when problems exist, and less faith in government's ability to step in and fix market failures without creating even more problems. Honest differences on this point are likely, but there are certainly cases where most people would agree that some sort of action is needed to overcome significant market failures.
Where to start? From his use of the word "conservative" as the only descriptor of his intellectual opponents, it's clear that Thoma is thinking about this as a purely political issue, not as a technical economic issue. He also seems to think that mere faith is the only reason someone might disagree with his view. Conservatives, he says, have more faith in markets and less faith in governments. Again, the public choice literature documents quite well the problems actual politicians and regulators have with implementing the idealized policies derived from economic models. He goes on to say that honest differences are "likely," not "possibly justified" or "important to consider." It seems Thoma can't conceive of a reason for his opponents to doubt the ability of the government to fix the problems he sees with the world outside of pure ideology.

Thoma's final paragraph really demonstrates the problems with the static model through which he views the world:
However, when ideological or political goals (such as lower taxes for the wealthy or reduced regulation so that businesses can exploit market imperfections) lead to attacks on those who call for government to make markets work better -- often in the guise of getting government out of the way of the market system -- it undermines government's ability to promote the competitive market system the opponents claim to support.
Government regulations essentially amount to fixed costs that prevent new firms from entering markets and existing smaller firms from competing with larger firms. Maybe these regulations are still justified, but it's not plainly obvious using the static model Thoma seems to prefer. From their inception, anti-trust suits were and still are brought mostly by competitors, not consumers. A look at the data from the late 19th and early 20th centuries doesn't tell the same "Robber Baron" story we hear in 9th grade history texts. Output was expanding and prices falling in the industries accused of being dominated by monopolies.

Richard Langlois' recent testimony to the British Parliament on dynamic competition provides some important critiques of static models. Here are some excerpts:

On monopoly and barriers to entry:
There are only two ways that a platform can maintain prices above marginal costs. One is to be more efficient than one’s competitors – to have lower costs, for example. Such a situation would not be “policy relevant,” in the sense that taking regulatory or antitrust action against the more-efficient competitor would make society worse off. The other way to maintain price durably above marginal cost is to have a barrier to entry.
The static and dynamic views are in agreement that competition requires free entry. Taking a static view often leads to intellectual confusions about the nature of barriers to entry (that they can arise from the shape of cost curves, for example); but in the dynamic view it is clear that barriers to entry are always property rights – legal rights to exclude others.(1) For example, one can have a monopoly on newly-mined diamonds if one owns all the known underground reserves of diamonds. More typically, especially in the case of platforms, the property rights involved are government-created rights of exclusion, either in the form of intellectual property or regulatory barriers.
On the abuse of market power:
What if it is customers who complain about the “abuse” of market power? To an economist, the problem with market power is the (static) inefficiency it creates. There is no such thing as the “abuse” of market power. Economists have understood for some time that a firm possessing market power cannot by its own actions increase that market power. The only way a firm can get market power (apart from being more efficient) is to possess a barrier to entry. What many see as “abuses” are usually what modern-day economists have come to call non-standard contracts: contractual practices beyond the simple calling out of prices in a market, practices that seem “restrictive.” These practices are often solutions to a much more complicated problem of production and sales than is contemplated in the simplified models of market power. They are very frequently an effort to overcome problems created by high transaction costs.(2)
The quality of discussions of the benefits of government intervention would be greatly improved if some notion of the costs of such intervention were mentioned. This would include discussions of dynamic models of competition and the explicit admission that politicians and regulators are subject to the same cognitive biases and information problems that cause real-world markets to deviate from the perfection of static economic models.

Saturday, April 30, 2016

Behavioral Public Choice - A Literature Review

Bryan Caplan recently posted about a fantastic West Virginia Law Review article that provides a lengthy discussion of the intersection of public choice (the application of economics to politics) and behavioral economics (the application of psychology to economics). 

Here's a segment of the introduction sans footnotes:
Behavioral public choice is both an extension of and a reaction to behavioral economics and its counterpart in legal scholarship, behavioral law and economics. Psychologists and behavioral economists have documented imperfections in human reasoning, including mental limitations and cognitive and emotional biases. Their research challenges the rational actor model of conventional economics, especially the idea that individuals acting in a free market can make optimal decisions without the government's assistance. Behavioral economists and legal scholars in the behavioral law and economics movement have used this research to justify paternalistic government interventions, including cigarette taxes and consumer protection laws, that are intended to save people from their own irrational choices. 
Because of their focus on market participants and paternalism, most behavioral economists and behavioral law and economics scholars ignore the possibility that irrationality also increases the risk of government failure. Behavioral public choice addresses that oversight by extending the findings of behavioral economics to the political realm.
A key insight of behavioral public choice is that people have less incentive to behave rationally in their capacity as political actors than in their capacity as market actors. 
Another law and economics article entitled "Nudging in an Evolving Marketplace: How Markets Improve Their Own Choice Architecture" tackles a similar topic. Here's the abstract:
Behavioral economics claims to have identified certain systematic biases in human decision-making with the implied assumption — sometimes leading to an explicit policy proposal — that these biases can only be corrected through centralized planning. While the appropriateness of policy corrections to perceived biases remains an open debate, far less attention has focused on the role markets already play in “nudging” consumers toward more mutually beneficial outcomes. We describe a process by which markets evolve over time to satisfy consumer preferences — or risk failure and removal from the marketplace. By organizing our understanding of markets in this dynamic, evolutionary sense, we expose a basic logic that dominates market transactions as they occur in practice; that is, the mechanisms that ultimately survive market competition tend to compensate for, limit, or otherwise reduce the incidence of bias. We explore empirical evidence for this argument in the market for consumer financial products.
This brings to mind a few previous posts of mine on market dynamics and monopoly. You can read them here, here, and here.

Friday, March 11, 2016

Relatively Good Regulation - GMO Edition

In previous posts on food labeling I've discussed food labels and the information they provide as well as possible reasons why a private GMO label hasn't already appeared. In this post, I'll discuss the reasons commodity groups are in favor of federal GMO labeling legislation.

Senator Pat Roberts (R-Kansas) recently introduced legislation that would establish federal guidelines for GMO labeling. The law would preempt state mandates for GMO food labels and start an educational campaign for the public on the safety of GMO foods.

The question isn't whether farmers, food companies, and retailers believe the guidelines are good for them financially but whether these guidelines are better than the relevant alternative. I'd wager that food companies would, in an ideal world, prefer to label their food in a manner that maximizes their profit.

Since that world doesn't exist, and there's a credible threat that interest groups in some states will successfully pass legislation mandating GMO labels, federal preemption of such laws is preferable. For producer groups, federal preemption makes it less likely that potential discounts on conventionally-produced food will be passed on to them. Additionally, the cost of educating consumers will not be borne by food companies, retailers, and farmers but by taxpayers.

As I argued in a previous post, the tremendous cost of educating the public on the safety of GMO foods is one possible reason why we haven't seen widespread efforts by food companies or third parties to create a GMO labeling scheme.  Another possible reason is the presence of substitute labels. Many consumers who are concerned about the safety of GMO food might be content buying food labeled "organic."

The more I think about it, though, the more I'm convinced that the main reason we haven't seen a third-party, private effort to create a GMO label is that the public generally trusts only the federal government to ensure food safety. It's true we have all sorts of private labels informing consumers of the characteristics of the food they buy, but safety is a separate issue in most people's minds.

The new legislation introduced is likely to be a net benefit to farmers, food companies, and retailers. They'll be shielded from the risk of more onerous regulation at the state level and won't have to bear the cost of educating the public about GMO safety. This makes the bill, from their points of view, relatively good regulation.

Monday, March 7, 2016

Don Boudreaux's Review of Phishing for Phools

I'm a regular reader of Don Boudreaux's (George Mason U) blog Cafe Hayek so I was glad to see his review of George Akerlof's (Georgetown U) and Robert Shiller's (Yale U) recent book "Phishing for Phools." Boudreaux's review is very good and I recommend you read the whole thing. If you're not a Barron's subscriber, you should be able to bypass the gate by searching "Boudreaux ivory tower economics" in Google News.

The review begins with a short synopsis of the book. Akerlof and Shiller argue that most people, as consumers and investors, suffer from weaknesses and informational problems that lead them to buying things they don't really want or need. Entrepreneurs who take advantage of these problems are "phishermen" and those who fall prey are "phools."

As you might expect, food is a popular topic in the book. The folks at Cinnabon are said to create a product so tempting, despite its empty calories, that passersby are helpless to resist. Another example is "Sunkist" oranges. According to Akerlof and Shiller, this advertising "trick" causes consumers to buy too many oranges.

It's certainly true that these products are tempting, and likely deliberately so. The questions are 1) whether these temptations amount to a moral problem and 2) what is to be done. Assuming that there is a moral problem, it's difficult to know what should be done. As I've discussed previously here on the FH blog, it's not enough to criticize real-world market results relative to perfect theoretical policies drawn on a blackboard. We have to compare those market results with policies that operate in the real world.

Here's Boudreaux's analysis of Akerlof and Shiller's solution:
Suppose it’s true, however, that modern markets are chock-full of devious phishermen preying successfully upon helpless phools who buy too many oranges in the belief that each has been “kist” by the sun. What’s to be done? The authors offer no specific proposals. Yet they clearly imply that more government regulation is a key part of the solution. At one point, for example, they advocate “more generous funding” for the Securities and Exchange Commission; at another, they speak approvingly of greater regulation of slot machines.

Solutions via government are based on a glaring fallacy: that people deficient in choosing for themselves in the marketplace will automatically shed those deficiencies once the government authorizes them to choose for others. Ironically, while citing slot machines, the authors make no mention of a related scam: government-run lotteries. The lotteries are perhaps the most obvious example of how those who are supposed to protect us from phishing scams themselves eagerly phish for phools.

Nothing, indeed, could be more phoolish than for ordinary men and women to submit to elites who are as confident as professors Akerlof and Shiller that they know best how other people should behave. Such elitism poses a far worse danger to society than entrepreneurs offering aromatic pastries for sale.

Even if people are terrible at making choices in their own best interests, a fundamental truth is that they own their lives. Self-respecting people want to be free to consult those with greater knowledge. But they would much prefer to risk undermining their own well-being through their own choices than to be saddled, bridled, and steered by self-appointed experts.

Friday, February 26, 2016

More on Rural Health Care and Certificate of Need Laws

After my last post on certificate of need laws (CON laws) went up earlier this week, a colleague rightly noted that the paper only addressed the quantity of facilities available in the rural areas of a state, not the quality of the care provided.

I think this recent post is at least a step in that direction. The authors examine the availability of imaging services in states with and without CON laws and find that:
These CON requirements effectively protect established hospitals from nonhospital competitors that provide medical imaging services, such as independently practicing physicians, group practices, and other ambulatory settings. In the process of protecting hospitals from these nonhospital providers, CON laws limit the imaging services available to patients.
The existence of a CON law decreases MRI scans by 34 percent, CT scans by 44 percent, and PET scans by 65 percent, all relative to states without CON laws.
As Hayek argued in the conclusion to his Nobel speech:
If man is not to do more harm than good in his efforts to improve the social order, he will have to learn that in this, as in all other fields where essential complexity of an organized kind prevails, he cannot acquire the full knowledge which would make mastery of the events possible. He will therefore have to use what knowledge he can achieve, not to shape the results as the craftsman shapes his handiwork, but rather to cultivate a growth by providing the appropriate environment, in the manner in which the gardener does this for his plants.
CON laws have nothing to do with licensing practicing physicians; rather, they put state committees in charge of planning exactly how and in what manner new health care services will be established. The problem is that centrally planning this type of activity (at the state level) presumes that regulators possess knowledge they very well may not have. It may well be that hospital administrators and investors have a better grasp of the needs of their communities than state planning boards do.

While CON laws may have important benefits, Stratmann and Baker find equally important drawbacks.

Tuesday, February 23, 2016

Masonomics and Externalities

During a conversation a couple of weeks ago on externalities and market failure, a colleague of mine noted that my perspective on these topics seems to be in line with that of the economists at George Mason University. This is slightly misleading, since the origins of GMU Econ's perspective are quite diverse (and thus not uniquely its own), but GMU is certainly a hotbed of research and education in the path-breaking economic work of Alchian, Buchanan, Coase, Demsetz, Hayek, Mises, Ostrom, Williamson, and others.

Given GMU Econ's unique perspective, it's no surprise that a term like "Masonomics" was coined. In an article back in 2007, economist Arnold Kling laid out several characteristics that make Masonomics unique. Below I reproduce the section of Kling's essay on the "cure for market failure."
At the University of Chicago, economists lean to the right of the economics profession. They are known for saying, in effect, "Markets work well. Use the market."

At MIT and other bastions of mainstream economics, most economists are to the left of center but to the right of the academic community as a whole. These economists are known for saying, in effect, "Markets fail. Use government."
Masonomics says, "Markets fail. Use markets." 
Somewhere along the way, mainstream economics became hung up on the concept of a perfect market and an optimal allocation of resources. The conditions necessary for a perfect market are absurdly demanding. Everything in the economy must be transparent. Managers must have perfect information about worker productivity and consumers must have perfect information about product quality. There can be nothing that gives an advantage to a firm with a large market share. There cannot be any benefits or costs of any market activity that spill over beyond that market.

The argument between Chicago and MIT seems to be over whether perfect markets are a "good approximation" or a "bad approximation" to reality. Masonomics goes along with the MIT view that perfect markets are a bad approximation to reality. But we do not look to government as a "solution" to imperfect markets.

Masonomics sees market failure as a motivation for entrepreneurship. As an example of market failure, let us use a classic case described by a Nobel Laureate, which is that the seller of a used car knows more about the condition of the car than the buyer. Masonomics predicts that entrepreneurs will try to address this problem. In fact, there are a number of entrepreneurial solutions. Buyers can obtain vehicle history reports. Sellers can offer warranties. Firms such as Carmax undertake professional inspections and stake their reputation on the quality of the cars that they sell.

Masonomics worries much more about government failure than market failure. Governments do not face competitive pressure. They are immune from the "creative destruction" of entrepreneurial innovation. In the market, ineffective firms go out of business. In government, ineffective programs develop powerful constituent groups with a stake in their perpetuation.
This is a (well-written) summary of my own view on the topic. Thanks to my exposure to this perspective in graduate school, I continue to develop interests in, and to work on, the political economy of agriculture. What do you think? What problems can you identify with the Masonomics view of externalities?

Russ Roberts also has a nice blog post giving his take on Masonomics here. Several Farmer Hayek posts have addressed externalities and market failure over the past year and can be read here, here, here, here, here, and here.

Friday, February 19, 2016

Pigou's Persistence

I recently ran across an interesting working paper by James McClure and Tyler Watts on some lesser-known or lesser-applied critiques of the standard Pigouvian perspective on externalities. The authors note that Pigou's perspective is still the standard in today's undergraduate texts, teaching students that externalities cause all manner of market failures which governments can fix with the appropriate political will.

The authors quote Pigou's 1920 book "The Economics of Welfare":
No "invisible hand" can be relied on to produce a good arrangement of the whole from a combination of separate treatments of the parts. It is therefore necessary that an authority of wider reach should intervene to tackle the collective problems of beauty, of air, and light, as those other collective problems of gas and water have been tackled.
Critiques of this perspective can be found throughout the economics literature, but much of that work is ignored in today's policy discussions. The authors identify five critiques and extensions of externality theory missing from current treatments of the subject: 1) the distinction between pecuniary and technological externalities, 2) the "invisible hand" as a generator of positive externalities, 3) the over-emphasis on negative externalities, 4) ignoring Coase's critique of Pigouvian taxes as a solution to negative externalities, and 5) ignoring the potential for negative consequences of policy solutions to negative externalities. I'll discuss 2, 4, and 5 here and leave 1 and 3 to the interested reader.
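For reference, the textbook Pigouvian prescription that these critiques target can be sketched with toy numbers (all assumed by me, not drawn from McClure and Watts): tax each unit of the activity by its marginal external cost, and the producer's private optimum shifts to the social optimum.

```python
# Toy Pigouvian tax sketch (all numbers assumed).
# A firm's marginal benefit of the q-th unit declines; each unit also
# imposes a constant external cost on bystanders.

def marginal_benefit(q):
    return 12 - 2 * q          # firm's marginal benefit of unit q

marginal_ext_cost = 4          # external cost per unit

def private_q(tax):
    # The firm keeps producing while the next unit's benefit exceeds the tax.
    q = 0
    while marginal_benefit(q + 1) > tax:
        q += 1
    return q

print(private_q(tax=0))                    # 5: untaxed firm over-produces
print(private_q(tax=marginal_ext_cost))    # 3: taxed firm stops where
                                           # marginal benefit = external cost
```

With the tax set equal to the marginal external cost, output falls to the level a social planner would choose. The Coasean objection taken up below is precisely that computing this tax requires information no regulator actually has.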

Adam Smith's concept of the "invisible hand" is well known, if perhaps not well understood by most economists. (Pete Boettke recently wrote a great post on this subject.) McClure and Watts provide some helpful discussion on the subject:
The idea that "the market" generates positive external effects has been clearly articulated among a long line of economists, even though the term "externality" is often absent in their discussions. Since Adam Smith, economists have maintained that the use of scarce resources in ways that foster prosperity throughout society generally is the natural, but unintended, byproduct of economic interactions between individuals each pursuing his or her own self-interest.
I agree with the authors that Smith's concept of the invisible hand can be thought of as a positive externality. Boettke's post above also notes that Smith's concept is fundamentally about institutions, not about perfect information and other elements of individual rationality.

McClure and Watts provide some interesting discussion on Coase's critique of Pigou, focusing on Coase's concept of "reciprocal harm:"
To expose the weaknesses in Pigou's approach, Coase considered the reciprocal harm inherent in two conceptual experiments; in each the production of one economic good interferes with the production of other goods
Since both production processes in question produce economic goods, there is a trade-off associated with taxing or subsidizing either process. In a previous post I discussed a column by Deirdre McCloskey that draws out a more important insight from Coase regarding externalities. In her characteristic style, McCloskey puts it this way:
Coase is forever saying that this or that proposal for a public policy entails knowing things that no economist can in fact know. He claims, with considerable empirical evidence, that in many cases laissez faire will be in practice better than what we will get from actual governments - though neither is perfect (we live in a second-best world, that is, a world of transaction costs). The methodological point is that Coase does not claim to have proven laissez faire on a blackboard. He says in effect, "If you look at the FCC or the lighthouses or the law of liability you see that governmental attempts to guide things minute-by-minute - as you say, Tom, 'getting the prices right'- don't work very well. Maybe it's better to just deal the cards and play. But in this veil of tears there are no guarantees. It may not work like some curves you have drawn. Life is hard. Knowledge is scarce. Grow up and admit that you can't extract policy from a couple of lines on a blackboard."
Finally, McClure and Watts discuss inframarginal or "irrelevant" externalities that can nonetheless be relevant to policy decisions. The idea is that a policy designed to correct some problem with market allocations or prices may, on net, harm people if a positive externality is "hidden" inside a negative one.

For instance, when prices of basic necessities skyrocket during a natural disaster, policymakers might feel the need to outlaw "price gouging." However, such a prohibition on higher prices would reduce the incentive to bring more of those necessities in from areas unaffected by the disaster, and aid might arrive much more slowly than it otherwise would.

The authors state the issue more generally:
Any policy that attempts correction of a negative externality while ignoring positive externalities in the form of inframarginal benefits, risks the possibility that corrective policy may impose welfare losses that could, if of sufficient magnitude, end up making matters worse than had the negative externality been ignored. 
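A toy welfare calculation makes the authors' point concrete. All figures below are my own assumptions: early (inframarginal) units of the activity generate positive spillovers, and only the last units impose net harm, so a blanket "correction" that suppresses the activity destroys more value than the marginal harm it removes.

```python
# Toy inframarginal-externality sketch (all numbers assumed).
# Each unit of an activity yields a private net benefit plus a spillover
# that is positive for early units and negative only at the margin.

private_net_benefit = [10, 8, 6, 4, 2]    # per unit, units 1..5
spillover           = [ 8, 5, 0, -5, -6]  # external effect per unit

def total_welfare(units):
    return sum(private_net_benefit[:units]) + sum(spillover[:units])

laissez_faire = total_welfare(5)   # private actors produce all 5 units
ban           = total_welfare(0)   # policy outlaws the activity outright
optimum       = max(total_welfare(q) for q in range(6))

print(laissez_faire)  # 32: some marginal harm, but large inframarginal gains
print(ban)            # 0: the "correction" throws those gains away
print(optimum)        # 37: trimming only the marginal units does best
```

Under these assumed numbers, leaving the "failure" entirely alone (welfare of 32) beats the heavy-handed fix (welfare of 0), which is exactly the risk the quoted passage describes.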
The article is an interesting read so, as usual, I recommend reading the whole thing. There are of course many applications of these theoretical insights in agriculture. As the (vocal subset of the) public continues to emphasize the negative externalities associated with production agriculture, it will become more important to bring the insights of Coase, Demsetz, Buchanan, and others to bear.

Sunday, January 31, 2016

Fixed Costs, Marginal Cost, and Ronald Coase

On January 18, Dean Baker published a blog post on the subject of Medicare. In it he discusses the cost of medicine and the standard economic theory of sunk costs and marginal cost pricing. He writes:
In the vast majority of cases, the drugs in question are not actually expensive to manufacture. The way the drug industry justifies high prices is that they must recover their research costs. While the industry does in fact spend a considerable amount of money on research (although they likely exaggerate this figure), at the point the drug is being administered this is a sunk cost. In other words, the resources devoted to this research have already been used; the economy doesn’t somehow get back the researchers’ time and the capital expended if fewer people take a drug that is developed from their work.

Ordinarily economists treat it as an absolute article of faith that we want all goods and services to sell at their marginal cost without interference from the government, like a trade tariff or quota. However in the case of prescription drugs, economists seem content to ignore the patent monopolies granted to the industry, which allow it to charge prices that are often ten or even a hundred times the free market price. (The hepatitis C drug Sovaldi has a list price in the United States of $84,000. High quality generic versions are available in India for a few hundred dollars per treatment.) In this case, we are effectively looking at a tariff that is not the 10-20 percent that we might see in trade policy, but rather 1,000 percent or even 10,000 percent.
The high fixed costs associated with research and development (R&D) in medicine are "sunk" in the sense that they can't be recovered after they are incurred. Leaving aside the effect of patents on the price of medicine, is it really the case that the price of a pill of Medicine X should equal the incremental cost of producing the marginal pill? In other words, should these firms follow marginal cost pricing?
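A back-of-the-envelope calculation shows the tension. The figures below are illustrative assumptions of mine, not numbers from Baker's post: if every pill sells at marginal cost, production costs are covered but the sunk R&D outlay is never recouped, no matter how many pills are sold.

```python
# Marginal-cost pricing vs. sunk R&D (all figures assumed).

rd_cost = 1_000_000_000      # sunk R&D spending, dollars
marginal_cost = 0.50         # incremental cost of producing one more pill
pills_sold = 500_000_000

def profit(price):
    return (price - marginal_cost) * pills_sold - rd_cost

print(profit(0.50))   # -1000000000.0: marginal-cost pricing never repays R&D
print(profit(2.50))   # 0.0: a $2.00 markup per pill just breaks even
```

This is why the question matters: pricing at marginal cost is efficient pill-by-pill once the drug exists, but a firm expecting that price ex ante has no way to cover the fixed cost of developing the drug in the first place.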

Saturday, January 23, 2016

GMO Labeling and Market Failure

Three Oklahoma State University ag economists, Eric DeVuyst, Jayson Lusk, and Cheryl DeVuyst, recently published a short fact sheet on GMOs. The whole thing is interesting, but I especially liked #9.
9. Should food companies be required to label foods with GMOs?
     There are several existing voluntary labeling programs, such as the USDA organic certification, which provide consumers choices on this matter in the marketplace. Thus, the question isn't whether GMOs should be labeled, but rather whether labels should be mandatory.
Though they aren't the same thing, any product that is USDA certified organic is also GMO-free. The only reason I can imagine this wouldn't be a satisfactory solution to the problem of GMO labeling is that a significant number of people want GMO-free products but don't care whether they're certified organic.

Whether or not that group exists in large enough numbers determines the need for a mandatory label. If most people who want GMO-free products are content buying certified organic products, there's not much of a profit incentive to create another label or certification scheme specifically for GMOs. In that case, the mandatory label isn't necessary either.

If there are a substantial number of people who want a specific label for GMOs and that label doesn't exist, I would conclude that either 1) there's some "hidden" cost out there preventing the creation of that label (see this podcast or read about the "people could be different" fallacy) or 2) this is a textbook case of market failure.

I'm no expert on labeling (I dabble) but it seems to me that most people who don't want to consume GMOs also prefer organic. What do you think? Is this a textbook market failure? Are there costs we aren't counting? Have I missed something?