Jeff’s Intermediate Micro Course
I teach undergraduate intermediate microeconomics, a 10-week course that is the second in a two-part sequence at Northwestern University. I have developed a unique approach to intermediate micro based originally on a course designed by my former colleague Kim-Sau Chung. The goal is to study the main themes of microeconomics from an institution-free, and in particular market-free, approach. To illustrate what I mean: when I cover public goods, I do not start by showing the inefficiency of market-provided public goods. Instead I ask what the possibilities and limitations of any institution for providing public goods are. By doing this I illustrate the basic difficulty without confounding it with the additional problems that come from market provision. I do similar things with externalities, informational asymmetries, and monopoly.
All of this is done using the tools of dominant-strategy mechanism design. This enables me to talk about basic economic problems in their purest form. Once we see the problems posed by the environments mentioned above, we investigate efficiency in the problem of allocating private goods with no externalities. A cornerstone of the course is a dominant-strategy version of the Myerson-Satterthwaite theorem which shows the basic friction that any institution must overcome. We then investigate mechanisms for efficient allocation in large economies and we see that the institutions that achieve this begin to resemble markets.
Only at this stage do markets become the primary lens through which to study microeconomics. We look at a simple model of competition among profit-maximizing auctioneers and a sketch of convergence to competitive equilibrium. Then we finish with a brief look at general equilibrium in pure exchange economies and the welfare theorems.
There is a minimal amount of game theory, mostly just developing the tools necessary to use mechanism design in dominant strategies, but also a side trip into Nash equilibrium and mixed strategies.
I begin with welfare economics because I think it is important to address at the very beginning what standard we should be using to evaluate economic institutions. And students learn a lot from just being confronted with the formal question of what is a sensible welfare standard. Naturally these lectures build to Arrow’s theorem, first discussing the axioms and motivating them and then stating the impossibility result. In previous years I would present a proof of Arrow’s theorem but recently I have stopped doing that because it is time consuming and bogs the course down at an early stage. This is one of the casualties of the quarter system.
Olympic Venue Voting
After the showstopper that is Arrow’s Theorem, we could just throw in the towel. The motivation for studying social welfare functions was to find a coherent standard by which to judge institutions and to propose policies. Now we see that there is no coherent standard. Well, sorry, students, we are not getting away so easily here in the second week of the class. We will accept that we must violate one of the axioms. Which one do we choose?
A lot of normative economic theory is implicitly built upon one of two welfare criteria, either Pareto efficiency or utilitarianism. While it is standard to formally define Pareto efficiency in an undergraduate micro class, utilitarianism is often invoked without explicit mention. For example, we are implicitly using some form of utilitarianism when we talk about consumer and producer surplus. And to argue that a monopoly is inefficient in a partial equilibrium framework is a utilitarian judgment (absent compensating transfers.)
So I make it explicit. And I take the time to formally define utilitarianism, explain where it applies and what justifies it and I point out its limitations. In terms of Arrow’s theorem I tell the students that we are dropping the axiom of universal domain (UD.) That is, we are not requiring our social welfare function to apply in all situations, only in those situations in which there is a valid measure of welfare that can be transferred and/or compared inter-personally. In this class, that measure of welfare is willingness to pay, and it applies when there are monetary transfers available and all agents value money in equal terms, i.e. quasi-linear utility.
These lectures contain one important formal result. In the quasi-linear world with monetary transfers utilitarianism coincides with Pareto efficiency. So these two common welfare standards are the same. (Any utilitarian improvement can be made into a Pareto improvement with judiciously chosen transfers and any Pareto improvement is a utilitarian improvement.)
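To make this equivalence concrete, here is a minimal sketch in Python (the function name and the equal-split compensation scheme are my own illustration, not from the lecture notes): it takes a utilitarian improvement and constructs budget-balanced transfers that turn it into a strict Pareto improvement.

```python
def pareto_improve_with_transfers(u_old, u_new):
    """In a quasi-linear world, any utilitarian improvement can be turned
    into a Pareto improvement: tax part of the winners' gains to compensate
    the losers (one simple compensation scheme among many)."""
    gain = sum(u_new) - sum(u_old)
    assert gain > 0, "need a utilitarian improvement"
    # Transfer to each agent: restore their old utility plus an equal
    # slice of the aggregate gain.
    transfers = [old - new + gain / len(u_old) for old, new in zip(u_old, u_new)]
    final = [new + t for new, t in zip(u_new, transfers)]
    assert abs(sum(transfers)) < 1e-9                 # budget-balanced
    assert all(f > o for f, o in zip(final, u_old))   # everyone strictly gains
    return final

# A utilitarian improvement that hurts agent 2 becomes a Pareto improvement:
print(pareto_improve_with_transfers([10, 10], [25, 1]))  # → [13.0, 13.0]
```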
Over the years I have learned that Pareto efficiency is a deceptively difficult concept to teach. The definition sounds simple and most students believe they understand it when you tell it to them but when they try to repeat the definition or apply it you see that they haven’t really understood it. I have found that defining it in two steps rather than one helps a lot. First define Pareto dominance. Then define Pareto efficiency as the absence of a Pareto dominating alternative.
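The two-step definition translates almost word for word into code. A small sketch (names are mine), representing allocations as tuples of utilities:

```python
def pareto_dominates(a, b):
    """Allocation a Pareto dominates b: everyone is weakly better off
    under a, and someone is strictly better off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_efficient(a, alternatives):
    """Step two: a is Pareto efficient if no alternative dominates it."""
    return not any(pareto_dominates(b, a) for b in alternatives)

allocations = [(3, 1), (2, 2), (1, 3), (2, 1)]
# (2, 1) is dominated by (3, 1); the other three are efficient.
print([a for a in allocations if pareto_efficient(a, allocations)])
# → [(3, 1), (2, 2), (1, 3)]
```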
Incentives and Game Theory
Now we have set the stage. We are considering social choice problems with transferable utility. We want to achieve Pareto efficient outcomes, which in this context is equivalent to utilitarianism.
Now we face the next problem. How do we know what the efficient policy is? It of course depends on the preferences of individuals and any institution must implicitly involve providing a medium through which preferences are communicated and mediated. In this lecture I introduce this idea in the context of a simple example.
Two roommates are considering purchasing an espresso machine. The machine costs $50. Each has a maximum willingness to pay, but each knows only his own willingness to pay and not the other’s. It is efficient to buy the machine if and only if the sum of their willingnesses to pay exceeds $50. They have to decide two things: whether or not to buy the machine and how to share the cost. I ask the class what they would do in this situation.
A natural proposal is to share the cost equally. I show that this is inefficient because it may be that one roommate has a high willingness to pay, say $40, and the other has a low willingness to pay, say $20. The sum exceeds $50 but one roommate will reject splitting the cost. This leads to a discussion of how to improve the mechanism. Students propose clever mechanisms and we work out how each of them can be manipulated, and we discover the conflict between efficiency and incentive-compatibility. There is scope for some very engaging class discussions here that create a good mindset for the coming more careful treatment.
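The equal-split failure is easy to make concrete. A tiny sketch (my own naming) using the $40/$20 values from the discussion:

```python
def equal_split_outcome(v1, v2, cost=50):
    """Each roommate agrees only if their value covers half the cost;
    the machine is bought only under unanimous agreement."""
    share = cost / 2
    bought = (v1 >= share) and (v2 >= share)
    efficient = (v1 + v2 >= cost)
    return bought, efficient

# Values from the example: efficient to buy (40 + 20 > 50),
# but the low-value roommate rejects paying 25.
print(equal_split_outcome(40, 20))  # → (False, True): an efficient purchase is blocked
```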
At this stage I tell the students that these mechanisms create something like a game played by the roommates, and if we are going to get a good handle on how institutions perform we need to start by developing a theory of how people play games like this. So we will take a quick detour into game theory.
For most of this class, very little game theory is necessary. So I begin by giving the basic notation and defining dominated and dominant strategies. I introduce all of these concepts through a hilarious video: The Golden Balls Split or Steal Game (which I have blogged here before.) I play the beginning of the video to set up the situation, then pause it and show how the game described in the video can be formally captured in our notation. Next I play the middle of the video where the two players engage in “pre-play communication.” I pause the video and have a discussion about what the players should do and whether they think that communication should matter. I poll the class on what they would do and what they predict the two players will do. Then I show them the dominant strategies.
Finally I play the conclusion of the video. It’s a pretty fun moment.
We will take a first glimpse at applying game theory to confront the incentive problem and understand the design of efficient mechanisms. The simplest starting point is the efficient allocation of a single object. In this lecture we look at efficient auctions. I start with a straw man: the first-price sealed-bid auction. This is intended to provoke discussion and get the class to think about the strategic issues bidders face in an auction. The discussion reaches the conclusion that there is no dominant strategy in a first-price auction and it is hard to predict bidders’ behavior. For this reason it is easy to imagine a bidder with a high value being outbid by a bidder with a low value, and this is inefficient.
The key problem with the first-price auction is that bidders have an incentive to bid less than their value to minimize their payment, but this creates a tricky trade-off as lower bids also mean an increased chance of losing altogether. With this observation we turn to the second-price auction, which clearly removes this trade-off altogether. On the other hand it seems crazy on its face: if bidders don’t have to put their money where their mouths are, won’t they now want to go in the other direction and raise their bids above their values?
We prove that it is a dominant strategy to bid your value in a second-price auction and that the auction is therefore an efficient mechanism in this setting.
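The dominance argument can also be checked by brute force. A sketch (my own setup, with ties awarded to the opponent so the check is conservative) that verifies truthful bidding weakly dominates every alternative bid on a grid of values:

```python
def second_price_payoff(my_bid, my_value, other_bid):
    """Payoff in a two-bidder second-price auction: win and pay the
    other's bid if yours is higher, otherwise get nothing.
    (Ties go to the other bidder, which only makes the check stricter.)"""
    return my_value - other_bid if my_bid > other_bid else 0

# Truthful bidding weakly dominates every alternative bid,
# whatever the opponent bids.
grid = range(0, 101, 5)
for value in grid:
    for other in grid:
        truthful = second_price_payoff(value, value, other)
        for alt in grid:
            assert truthful >= second_price_payoff(alt, value, other)
print("truthful bidding is weakly dominant on the grid")
```

Overbidding really can hurt: `second_price_payoff(60, 40, 50)` wins the object but nets a payoff of -10.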
Next we explore some of the limitations of this result. We look at externalities: it matters not just whether I get the good, but also who else gets it in the event that I don’t. We see that a second-price auction is not efficient anymore. And we look at a setting with common values: information about the object’s value is dispersed among the bidders.
For the common-value setting I do a classroom experiment where I auction an unknown amount of cash. The amount up for sale is equal to the average of the numbers on 10 cards that I have handed out to 10 volunteers. Each volunteer sees only his own card and then bids. If the experiment works (it doesn’t always work) then we should see the winner’s curse in action: the winner will typically be the person holding the highest number, and bidding something close to that number will lose money as the average is certainly lower.
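For anyone who wants to see the winner’s curse without relying on the classroom experiment working, here is a small simulation (the 10% bid shading is an arbitrary assumption of mine, not a claim about how students actually bid):

```python
import random

def winners_curse_trial(n_cards=10, rng=random):
    """Each of 10 volunteers holds a card with a number and bids a bit
    below it; the prize equals the average of all the cards."""
    cards = [rng.uniform(0, 100) for _ in range(n_cards)]
    prize = sum(cards) / n_cards
    # Naive bidding: shade your own card by 10%.
    bids = [0.9 * c for c in cards]
    winner = max(range(n_cards), key=lambda i: bids[i])
    return prize - bids[winner]  # winner's profit (usually negative)

random.seed(0)
profits = [winners_curse_trial() for _ in range(10_000)]
losses = sum(p < 0 for p in profits) / len(profits)
print(f"winner loses money in {losses:.0%} of trials")
```

The winner is the holder of the highest card, and the average of ten cards is well below the highest, so naive bids lose money almost every time.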
(I got the idea for the winner’s curse experiment from Ben Polak, who auctions a jar of coins in his game theory class at Yale. Here is a video.
Here is the full set of Ben Polak’s game theory lectures on video. They are really outstanding. Northwestern should have a program like this. All Universities should.)
After showing how the Vickrey auction efficiently allocates a private good we revisit some of the other social choice problems discussed at the beginning and speculate how to extend the Vickrey logic to those problems. We look at the auction with externalities and see how the rules of the Vickrey auction can be modified to achieve efficiency. At first the modification seems strange, but then we see a theme emerge. Agents should pay for the negative externalities they impose on the rest of society (and receive payment in compensation for the positive externalities.)
We distill this idea into a general formula which measures these externalities and define a transfer function according to that formula. The resulting efficient mechanism is called the Vickrey-Clarke-Groves mechanism. We show that the VCG mechanism is dominant-strategy incentive compatible and we show how it works in a few examples.
We conclude by returning to the roommate/espresso machine example. Here we explicitly calculate the contributions each roommate should make when the espresso machine is purchased. We remind ourselves of the constraint that the total contributions should cover the cost of the machine and we see that the VCG mechanism falls short. Next we show that in fact the VCG mechanism is the only dominant-strategy efficient mechanism for this problem and arrive at this lecture’s punch line.
There is no efficient, budget-balanced, dominant-strategy mechanism.
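One way to make the punch line concrete in code. This sketch (my own naming; the payment rule is the pivotal/externality payment consistent with the Vickrey logic above, though the exact formula in the notes may differ) computes each roommate’s payment and shows the contributions miss the $50 cost:

```python
def vcg_espresso(values, cost=50):
    """VCG-style mechanism for the shared espresso machine: buy iff total
    value covers the cost; each roommate pays the shortfall the others
    could not cover on their own (their pivotal externality).
    Note the payment doesn't depend on the payer's own report."""
    buy = sum(values) >= cost
    if not buy:
        return False, [0] * len(values)
    payments = [max(0, cost - (sum(values) - v)) for v in values]
    return buy, payments

buy, payments = vcg_espresso([40, 20])
print(buy, payments, sum(payments))  # → True [30, 10] 40: contributions miss the $50 cost
```

Whenever the purchase generates strictly positive surplus, these incentive-compatible payments sum to less than the cost: the deficit is the price of dominant-strategy efficiency.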
One of the simplest and yet most central insights of information economics is that, independent of the classical technological constraints, transaction costs, trading frictions, etc., there is an informational constraint standing in the way of the efficient employment of resources. How do you find out what the efficient allocation is and implement it when the answer depends on the preferences of individuals? Any institution, whether or not it is a market, is implicitly a channel for individuals to communicate their preferences and a rule which determines an allocation based on those preferences. And given this connection, individuals cannot be expected to faithfully communicate their true preferences unless the rule gives them adequate incentive to do so.
As we saw last time, there typically does not exist any rule which does this and at the same time produces an efficient allocation. This result is deeper than “market failure” because it has nothing to do with markets. It applies to markets as well as to any other idealized institution we could dream up.
So how are we to judge the efficiency of markets when we know that they didn’t have any chance of being efficient in the first place? That is the topic of this lecture.
Let’s refer to the efficient allocation rule as the first best. In the language of mechanism design, the first best is typically not feasible because it is not incentive compatible. Given this, we can ask what is the closest we can get to the first best using a mechanism that is incentive compatible (and budget-balanced.) That is a well-posed constrained optimization problem, and the solution to that problem we call the second best.
Information economics tells us we should measure existing institutions relative to the second best. In this lecture I demonstrate how to use the properties of incentive-compatibility and budget balance to characterize the second-best mechanism in the public goods problem we have been looking at. (Previously the espresso machine problem.)
I am particularly proud of these notes because as you will see this is a complete characterization of second-best mechanisms (remember: dominant strategies) for public goods entirely based on a graphical argument. And the characterization is especially nice: any second-best mechanism reduces to a simple rule where the contributors are assigned a share of the cost and asked whether they are willing to contribute their share. Production of the public good requires unanimity.
For example, the very simple mechanism we started with, in which two roommates share the cost of an espresso machine equally, is the unique second-best mechanism. We argued at the beginning that this mechanism is inefficient and now we see that the inefficiency is inevitable and there is no way to improve upon it.
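The cost-share-plus-unanimity rule is essentially a one-liner. A sketch (my own naming) applied to the espresso machine numbers:

```python
def cost_share_mechanism(values, shares, cost=50):
    """Second-best form described above: fix cost shares in advance and
    produce the public good only if every agent agrees to pay their share."""
    assert abs(sum(shares) - cost) < 1e-9  # shares must exactly cover the cost
    return all(v >= s for v, s in zip(values, shares))

# Equal split for the espresso machine: an efficient purchase can fail...
print(cost_share_mechanism([40, 20], [25, 25]))  # → False, though 40 + 20 > 50
# ...but when every share is covered, the machine is bought.
print(cost_share_mechanism([40, 30], [25, 25]))  # → True
```

Announcing “yes” iff your value covers your fixed share is a dominant strategy, which is exactly why this form survives the characterization.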
In the last lecture we demonstrated that there was no way to efficiently provide public goods, whether via a market or any other institution. Now we turn to private goods.
We start with a very simple example: bilateral trade. A seller holds an object that is valued by a potential buyer. We want to know how to bring about efficient trade: the seller sells the object to the buyer if, and only if, the buyer’s willingness to pay exceeds the seller’s.
We first analyze the problem using the Vickrey-Clarke-Groves Mechanism. We see that the VCG mechanism, while efficient, is not feasible because it would require a payment scheme which results in a deficit: the buyer pays less than the seller should receive.
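The deficit is easy to exhibit. A sketch of the VCG payments for bilateral trade (my own naming; each side faces the other’s value as their price, which is the second-price logic that makes truth-telling dominant):

```python
def vcg_bilateral_trade(buyer_value, seller_value):
    """VCG for bilateral trade: trade iff the buyer values the object more.
    Each side's price is the other's value, so neither can gain by lying --
    but then the buyer pays less than the seller receives."""
    if buyer_value <= seller_value:
        return False, 0, 0
    buyer_pays = seller_value   # buyer pays the externality imposed on the seller
    seller_gets = buyer_value   # seller must be compensated with the buyer's value
    return True, buyer_pays, seller_gets

trade, pays, gets = vcg_bilateral_trade(buyer_value=70, seller_value=30)
print(trade, pays, gets, gets - pays)  # → True 30 70 40: a deficit of 40
```

The deficit equals the gains from trade, so it is strictly positive in exactly the cases where trade should happen.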
Then, following the lines of the public goods problem from the previous lecture, we show that in fact there is no mechanism for efficient trade. This is the dominant-strategy version of the Myerson-Satterthwaite theorem.
In fact, we show that the best mechanism among all dominant-strategy incentive compatible and budget balanced mechanisms (i.e. the second-best mechanism) takes a very simple form. There is a price fixed in advance and the buyer and seller simply announce whether they are willing to trade at that price.
We see the first emergence of something like a market as the solution to the optimal design of a trading institution. We also see that markets are not automatically efficient even when there are no externalities, and goods are private. There is a basic friction due to information and incentives that constrains the market.
Next we consider the effects of competition. Our instincts tell us that if there are more buyers and more sellers, the inefficiency will be reduced. By a series of arguments I show the first sense in which this is true. There exists a mechanism which effectively makes sellers compete with one another to sell and buyers compete with one another to buy. And this mechanism improves upon the fixed-price mechanism because it enables the traders themselves to determine the most efficient price. I call this the price discovery mechanism (it is really just a double auction.)
Finally, in one of the best moments of the class, what were previously some random plots of values and costs on the screen coalesce into supply and demand curves, and we see how this price discovery mechanism is just another way of seeing a competitive market. This is the second look at how markets emerge from an analytical framework that did not presuppose the existence of markets at the beginning.
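The price-discovery story can be sketched in a few lines (my own naming; the tie-breaking and the particular clearing price chosen are simplifications): sort values descending and costs ascending, trade while value exceeds cost, and pick a price between the marginal traded value and cost.

```python
def double_auction_price(buyer_values, seller_costs):
    """Sketch of the price-discovery (double auction) mechanism: the sorted
    values trace out a demand curve, the sorted costs a supply curve, and
    trades happen where demand lies above supply."""
    demand = sorted(buyer_values, reverse=True)
    supply = sorted(seller_costs)
    q = 0
    while q < min(len(demand), len(supply)) and demand[q] >= supply[q]:
        q += 1
    if q == 0:
        return 0, None
    # One simple choice of price: midway between the last traded value and cost.
    price = (demand[q - 1] + supply[q - 1]) / 2
    return q, price

print(double_auction_price([90, 70, 50, 30], [20, 40, 60, 80]))  # → (2, 55.0)
```

Plot `demand` and `supply` against quantity and this is exactly the supply-and-demand picture that appears on the screen.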
Large Public Goods
In previous lectures we looked at the design of mechanisms to allocate public and private goods in “small markets.” In both cases we saw that incentive compatibility is a basic friction preventing efficiency. But in the case of private goods we saw how that friction vanishes in larger markets. In this lecture we show that the opposite occurs for public goods. The inefficiency only gets worse as the size of the population served by a public good grows larger. We are capturing the foundations of the free-rider problem. This is another set of notes that I am particularly proud of because here is a completely elementary and graphical proof of a dominant-strategy version of the Mailath-Postlewaite theorem.
The conclusion we draw from this lecture is that the idea of “competition” that restored efficiency in markets for private goods cannot be harnessed for public goods and therefore some non-voluntary institution is necessary to provide these. This gives an opportunity to have an informal discussion of the kinds of public goods that are provided by governments and the way in which government provision circumvents the constraints in the mechanism design problem (coercive taxation.) The possibility of providing public goods by such means comes at the expense of losing the ability to aggregate information about the efficient level of the public good.
Auctions and Profit Maximization
We have spent most of the course using the tools of dominant-strategy mechanism design to understand efficient institutions and second-best tradeoffs. These topics have a normative flavor: they describe the limits of what could be achieved if institutions were designed with efficiency as the goal.
But most economic activity is regulated not by efficiency-motivated planners but by self-interested agents. This adds an additional friction which potentially moves us even further from the first best. Self-interested mechanism designers will probably introduce new distortions into their mechanisms as they try to tilt the distribution of surplus their way.
In this lecture we use the model of an auction to see the simplest version of this. We consider the problem of designing an auction for two bidders with the goal of maximizing revenue rather than efficiency. We do not have the tools necessary to do the full-blown optimal auction problem but we can get intuition by studying a narrower problem: find an optimal reserve price in an English auction.
With a diagram we can see the tradeoffs arising from adjusting the reserve price above the efficient level. The seller loses because sometimes the good will go unsold but in return he gains from receiving a higher price when the good is sold. The size and shape of the regions where these gains and losses occur suggest that it should be profitable to raise the reserve price above cost.
Without solving explicitly for the optimal reserve price we can give a pretty compelling, albeit not 100% formal, argument that this is indeed the case. At the efficient reserve price (equal to the cost of selling) total surplus is maximized. A graph of total expected surplus as a function of the reserve price should be locally flat at the efficient point. (We are implicitly assuming differentiability of total expected surplus, which holds if the distribution of bidder values is nice.) Buyers’ utility is unambiguously declining as the reserve price increases. Since total surplus is by definition the sum of buyers’ utility and seller profit, it follows that seller profit is locally increasing as the reserve price is raised above the efficient level.
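The graphical argument can be backed up with a quick Monte Carlo check (my own setup: two bidders with values uniform on [0, 1] and seller cost zero, so the efficient reserve is 0 and the optimal reserve is 1/2):

```python
import random

def expected_profit(reserve, cost=0.0, n_draws=200_000, seed=1):
    """Monte Carlo expected profit of an English (second-price) auction
    with a reserve: two bidders, values drawn uniformly on [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        v1, v2 = rng.random(), rng.random()
        hi, lo = max(v1, v2), min(v1, v2)
        if hi >= reserve:
            # Price is the larger of the runner-up bid and the reserve.
            total += max(lo, reserve) - cost
    return total / n_draws

# Profit rises as the reserve moves above the efficient level (cost = 0)...
assert expected_profit(0.3) > expected_profit(0.0)
print(round(expected_profit(0.0), 3), round(expected_profit(0.5), 3))
```

For this distribution the exact values are 1/3 at the efficient reserve and 5/12 at the optimal reserve of 1/2, which the simulation approximates.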
Thus, while we know that in principle this allocation problem can be solved efficiently, when the allocation is controlled by a profit maximizer, there is a new source of inefficiency. The natural next question is whether competition among profit-maximizing sellers will mitigate this.
This lecture brings together everything built up to this point. We are going to develop an intuition for why competitive markets are efficient using a model of profit-maximizing sellers who compete in an auction market by setting reserve prices. In the previous lecture we saw how the profit maximization motive leads a seller with market power to choose an inefficient selling mechanism. This came in the form of a reserve price above cost. Here we begin by getting some intuition for why competition should reduce the incentive to distort price in this way.
(This is probably the weak link in the whole class. I do not have a good idea of how to teach this and in fact I am not sure I understand it so well myself. This is the first place to work on improving the class next time. Any suggestions would be appreciated.)
Finally, we jump to a model with a large number of buyers and sellers all competing in a simultaneous ascending double auction. With so much competition, if sellers set reserve prices above their costs there will be no sellers who are doing better than if they just set the reserve price equal to cost, and a positive mass of sellers who would do strictly better by reducing their reserve price to equal their cost.
In that sense it is a dominant strategy for all sellers to set reserve price equal to their cost. This equates the “supply” curve with the cost curve and produces the utilitarian allocation.