Saturday, December 31, 2016

Testing Market Failure Theories

by Levi Russell

I recently picked up a copy of Tyler Cowen and Eric Crampton's 2002 edited volume Market Failure or Success: The New Debate (now only in print with the Independent Institute, though it was originally published by Edward Elgar) and have really enjoyed what I've read so far. The book is a collection of essays by prominent IO scholars organized into four sections: a fantastic introduction by the editors, four essays that form the foundation of the "new" market failure theories based on information problems, four theoretical critiques of said theories, and eight essays providing empirical and experimental evidence for the editors' thesis that information-based market failure theory is often merely a theoretical possibility not borne out in real life, and that economic analysis of knowledge often tells us why.

Two pieces by Stiglitz are featured in the first theoretical section: one on information asymmetries and wage and price rigidities and the other on the incompleteness of markets. Akerlof's famous "lemons" paper and Paul David's paper on path dependence are also included. I was happy to see that Demsetz's "Information and Efficiency: Another Viewpoint" was the first essay in the theoretical critique section, as it sets the stage for the other chapters in that section. The empirical and experimental section features Liebowitz and Margolis' response to Paul David on path dependence in technology, Eric Bond's direct test of Akerlof's "lemons" model, and an essay I've never read by Gordon Tullock entitled "Non-Prisoner's Dilemma."

The introduction provides a short summary of the arguments presented in the following three sections and includes a great discussion of the editors' views of the core problems with information-based market failures. Here's the conclusion of the intro chapter:
Our world is a highly imperfect one, and these imperfections include the workings of markets. Nonetheless, while being vigilant about what we will learn in the future, we conclude that the 'new theories' of market failure overstate their case and exaggerate the relative imperfections of the market economy. In some cases, the theoretical foundations of the market failure arguments are weak. In other cases, the evidence does not support what the abstract models suggest. Rarely is analysis done in a comparative institutional framework.
The term 'market failure' is prejudicial - we cannot know whether markets fail before we actually examine them, yet most of market failure theory is just theory. Alexander Tabarrok (2002) suggests that 'market challenge theory' might be a better term. Market challenge theory alerts us to areas where markets might fail and encourages us to seek out evidence. In testing these theories, we may find market failure or we may find that markets are more robust than we had previously believed. Indeed, the lasting contribution of the new market failure theorists may be in encouraging empirical research that broadens and deepens our understanding of markets.
We believe that the market failure or success debate will become more fruitful as it turns more to Hayekian themes and empirical and experimental methods. Above, we noted that extant models were long on 'information' - which can be encapsulated into unambiguous, articulable bits - and short on the broader category of 'knowledge,' as we find in Hayek [Hayek's 1945 article The Use of Knowledge in Society can be read here for free. A short explanation of the main theme of the article can be found here. - LR]. Yet most of the critical economic problems involve at least as much knowledge as information. Employers, for instance, have knowledge of how to overcome shirking problems, even when they do not have explicit information about how hard their employees are working. Many market failures are avoided to the extent we mobilize dispersed knowledge successfully. 
It is no accident that the new market failure theorists have focused on information to the exclusion of knowledge. Information is easier to model, whereas knowledge is not, and the economics profession has been oriented towards models. Explicitly modeling knowledge may remain impossible for the immediate future, which suggests a greater role for history, case studies, cognitive science, and the methods of experimental economics. 
We think in particular of the experimental revolution in economics as a way of understanding and addressing Hayek's insights on markets and knowledge; Vernon Smith, arguably the father of modern experimental economics, frequently makes this connection explicit. Experimental economics forces the practitioner to deal with the kinds of knowledge and behavior patterns that individuals possess in the real world, rather than what the theorist writes into an abstract model. The experiment then tells us how the original 'endowments' might translate into real world outcomes. Since we are using real world agents, these endowments can include Hayekian knowledge and not just narrower categories of information.
Experimental results also tend to suggest Hayekian conclusions. When institutions and 'rules of the game' are set up correctly, decentralized knowledge has enormous power. Prices and incentives are extremely potent. The collective result of a market process contains a wisdom that the theorist could not have replicated with pencil and paper alone.

Tuesday, December 27, 2016

On Regulatory Cost-Benefit Analysis

by Levi Russell

I recently ran across a fantastic article in Regulation magazine written by George Washington University regulation expert Susan Dudley. The article, entitled "OMB's Reported Benefits of Regulation: Too Good to Be True?" tackles an issue not often raised in policy discussions: What are the assumptions underlying cost-benefit analysis of regulation? Dudley explains in detail the way in which benefits are counted and how the scope of the analysis differs for benefits and costs. A single benefit category, reductions in fine particulate matter (PM 2.5), is responsible for the bulk of benefits calculated by OMB.

Given this focus on fine particulate matter, it would make sense that the science on the harm caused by PM 2.5 would inspire a lot of confidence. On the contrary, Dudley writes:
The OMB identifies six key assumptions that contribute to this uncertainty in PM2.5 benefits estimates. One assumption is that “inhalation of fine particles is causally associated with premature death at concentrations near those experienced by most Americans on a daily basis.” The EPA bases this assumption on epidemiological evidence of an association between particulate matter concentrations and mortality; however, as all students are taught, correlation does not imply causation (cum hoc non propter hoc), and the agency cannot identify a biological mechanism that explains the observed correlation. Risk expert Louis Anthony Cox raises questions as to whether the correlation the EPA claims is real. His statistical analysis (published in the journal Risk Analysis) concludes with a greater than 95 percent probability that no association exists and that, instead, the EPA’s results are a product of its choice of models and selected data rather than a real, measured correlation.

Another key assumption on which the EPA’s (and therefore the OMB’s) benefit estimates hinge is that “the impact function for fine particles is approximately linear within the range of ambient concentrations under consideration, which includes concentrations below the National Ambient Air Quality Standard” (NAAQS). Both theory and data suggest that thresholds exist below which further reductions in exposure to PM 2.5 do not yield changes in mortality response and that one should expect diminishing returns as exposures are reduced to lower and lower levels. However, the EPA assumes a linear concentration-response impact function that extends to concentrations below background levels. The OMB observes, “indeed, a significant portion of the benefits associated with more recent rules are from potential health benefits in regions that are in attainment with the fine particle standard.”

Based on its assumptions of a causal, linear, no-threshold relationship between PM 2.5 exposure and premature mortality, the EPA quantifies a number of “statistical lives” that will be “saved” when concentrations of PM 2.5 decline as a result of regulation. If any of those assumptions are false (in other words, if no association exists, if the relationship is not causal, or if the concentration-response relationship is not linear at low doses), the benefits of reducing PM 2.5 would be less than estimated and perhaps even zero.

Further, as the OMB notes, “the value of mortality risk reduction is taken largely from studies of the willingness to accept risk in the labor market [where the relevant population is healthy and has a long remaining life expectancy] and might not necessarily apply to people in different stages of life or health status.” This caveat is particularly important in the case of PM2.5 because, as the EPA’s 2011 analysis reports, the median age of the beneficiaries of these regulations is around 80 years old, and the average extension in life expectancy attributable to lower PM 2.5 levels is less than six months.
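To see concretely how much rides on the linear no-threshold assumption, here is a small, purely illustrative sketch of my own (every number below is hypothetical, not an EPA or OMB figure) comparing the "statistical lives saved" counted under a linear no-threshold concentration-response function with those counted under a function that has a threshold below which further reductions do nothing:

```python
# Purely illustrative: all numbers are hypothetical, not EPA or OMB estimates.
# Compares "statistical lives saved" from the same PM2.5 reduction under
# (a) a linear no-threshold concentration-response function and
# (b) a function with a threshold below which reductions yield no benefit.

def lives_saved_linear(baseline, new_level, slope, population):
    """Linear no-threshold: every unit of reduction counts, even below background."""
    return max(baseline - new_level, 0.0) * slope * population

def lives_saved_threshold(baseline, new_level, slope, population, threshold):
    """Threshold model: only the portion of the reduction above the threshold counts."""
    effective_reduction = max(baseline, threshold) - max(new_level, threshold)
    return max(effective_reduction, 0.0) * slope * population

# Hypothetical inputs: concentrations in micrograms per cubic meter,
# slope in deaths avoided per person per unit of concentration.
baseline, new_level = 10.0, 7.0
slope, population = 1e-5, 1_000_000
threshold = 8.0  # assumed level below which no mortality response occurs

print(lives_saved_linear(baseline, new_level, slope, population))               # ~30 lives
print(lives_saved_threshold(baseline, new_level, slope, population, threshold)) # ~20 lives
```

If ambient concentrations are already below the assumed threshold, the threshold model counts no benefit at all while the linear model keeps counting every increment of reduction, which is why so much of the headline benefit total hinges on this single modeling choice.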
It's clear that there are some serious, objective problems with the way some benefits of regulation are calculated. Dudley concludes:
The OMB’s role is to serve as a check against agencies’ natural motivation to paint a rosy picture of their proposed actions. While it cannot ensure that agencies consider all the possible consequences of an action in their analyses, it should try to ensure that the boundaries of those analyses are set with some regard to objective science. When a few categories of benefits that have questionable legitimacy puff up benefits by a five-fold margin or more, that does not appear to be the case.
Beyond the objective, scientific questions concerning the benefits of regulation, analysis of the costs is important as well. In my recent piece on the costs of environmental regulation of agriculture, published in Perspective, a magazine of the Oklahoma Council of Public Affairs, I point to the fundamental uncertainty facing regulators. This uncertainty is not accounted for in the cost calculations of the regulations they enforce:
The uncertainty and compliance costs associated with these regulations represent serious concerns for producers. Recent surveys of row crop producers, cattle producers, and feedlot operators indicate that future environmental regulation is a top concern for their businesses over the long term.
...
This is not to say that regulators are ill-intentioned. They face a highly complex and difficult problem: implementing the will of Congress for the betterment of the American people. The knowledge and information required to regulate even one industry is immense. Not only is it costly to obtain the information necessary to pass effective regulations, regulators can’t be sure that unforeseen unintended consequences won’t diminish the effectiveness of their rules or cause more harm than good. Proposed measures to ensure effective regulation that is not overly burdensome, such as sunset provisions that would require regulations to lapse on a periodic basis, have been put forth but have not been implemented widely. Other propositions include less federal and more local and state control over environmental policy and greater use of common law courts to deal with environmental problems. Both of these proposals acknowledge the information problems inherent in the regulation of agriculture.
There are significant political hurdles to overcome if we are to inject more scientific and objective analysis into regulatory cost-benefit calculation. Knowing how that calculation is done is a crucial first step; Susan Dudley's article is a great way to inform the public so we can get the reform ball rolling!

Monday, December 19, 2016

More on Contestability and the Baysanto Merger

by Levi Russell

In a previous post, I discussed monopoly concerns with Bayer's acquisition of Monsanto. The deal was recently approved by Monsanto shareholders but will likely face significant scrutiny from antitrust regulators.

In the previous post, I went through a paper by several Texas A&M economists that examined the likely consequences of the acquisition for several row crop seed prices. In this post, I'll make some other comments on contestability.

The A&M paper sticks to standard IO theory:
Concentrated markets do not necessarily imply the presence of market power. Key requirements for market contestability are: (a) Potential entrants must not be at a cost disadvantage to existing firms, and (b) entry and exit must be costless.
In contrast to standard IO theory, VRIO analysis suggests costs are always lower for incumbent firms. Managers of incumbent firms have experience with the specific marketing, managerial, and financial aspects of the industry that new entrants simply lack or must acquire at additional cost.

Does this imply that no industry is "contestable" in an abstract sense? No. As I pointed out previously, prices are falling in many industries, even in those in which entry would entail 1) significant advantages for incumbents and 2) significant sunk costs. It does imply that the conditions for "contestability" are broader than the standard definition. The resource-based view of the firm provides an alternative view of contestability: The advantages for incumbents and potential sunk costs must simply be small enough that they are outweighed by an entrepreneur's expectation of economic profit associated with entering the industry.
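Put in back-of-the-envelope terms (this is my own illustration, not anything from the A&M paper), the entry condition under this broader view looks something like the following:

```python
# Back-of-the-envelope sketch of the entry condition implied by the
# resource-based view: entry is attractive only if expected economic profit
# outweighs the incumbent's cost advantage plus the sunk costs of entry.
# All figures below are hypothetical.

def entry_attractive(expected_profit, incumbent_advantage, sunk_costs):
    """Return True if the expected profit from entering clears both hurdles."""
    return expected_profit > incumbent_advantage + sunk_costs

# Hypothetical figures, in millions of dollars
print(entry_attractive(expected_profit=120, incumbent_advantage=40, sunk_costs=60))  # True
print(entry_attractive(expected_profit=80, incumbent_advantage=40, sunk_costs=60))   # False
```

The standard contestability conditions quoted above are just the special case in which the incumbent advantage and sunk costs are zero; the resource-based view says markets can still discipline incumbents when those terms are positive but small relative to the profits an entrant expects.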

So, when we see apparent divergences between price and marginal cost, as I see it there are three possibilities:

1) there are costs we as third-party observers don't see
2) the economic profit is associated with short-term returns to innovation (e.g. monopolistic competition)
3) there is a legal barrier to entry that is extraneous to the market itself.

This dynamic perspective (which I argue is easily teachable to undergrads) is much more powerful in advancing our understanding of real-world market behavior. Yes, the more unrealistic assumptions made in standard theory allow for more elegant mathematical modeling, but if our goal is to understand causal factors associated with firm behavior, the resource-based view of the firm, VRIO analysis, and other dynamic theories are more useful.