Changing capital aggregation

The Rad Geek People’s Daily has an interesting comment on my post Capitalists vs. Entrepreneurs:
The only thing that I would want to add here is that it’s not just a matter of projects being able to expand or sustain themselves with little capital (although that is a factor). It’s also a matter of the way in which both emerging distributed technologies in general, and peer production projects in particular, facilitate the aggregation of dispersed capital — without it having to pass through a single capitalist chokepoint, like a commercial bank or a venture capital fund. Because of the way that peer production projects distribute and amortize their costs of operation, entrepreneurs can afford to bypass existing financial operators and go directly to people with $20 or $50 to give away and take the money in in small donations, because they no longer need to get multimillion dollar cash infusions all at once just to keep themselves running: the peer production model allows greater flexibility by dispersing fixed costs among many peers (and allowing new entrepreneurs to easily step in and take over the project, if one has to bow out due to the pressures imposed by fixed costs), rather than by concentrating them into the bottom line of a single, precarious legal entity. Meanwhile, because of the way that peer production projects distribute their labor, peer-production entrepreneurs can also take advantage of “spare cycles” on existing, widely-distributed capital goods — tools like computers, facilities like offices and houses, software, etc. which contributors own, which they still would have owned personally or professionally whether or not they were contributing to the peer production project, and which can be put to use as a direct contribution of a small amount of fractional shares of capital goods directly to the peer production project. So it’s not just a matter of cutting total aggregate costs for capital goods (although that’s an important element); it’s also, importantly, a matter of new models of aggregating the capital goods to meet whatever costs you may have, so that small bits of available capital can be rounded up without the intervention of money-men and other intermediaries.
I like the point about the improved aggregation of capital; it is quite possibly as important as the reduction in capital requirements.

This gets me thinking more about the importance of reduced coordination costs (aggregation of capital being a special case). Clearly computers and networks contribute enormously to improving the productivity of coordination. However I don’t think we have good models for the costs of coordination, or the effects of improved coordination, so its importance tends to be underestimated.

I guess cheaper / easier / faster coordination is a special case of (what economists call) “technology” — changes with big economic impact that sit outside the economic model. From another point of view, coordination costs and delays are a special case of (what economists call) “frictions,” which are also hard for them to model well. So trends in coordination may well affect the economy in major ways, but are nonetheless mostly invisible in economic models.

And in fact we’d expect coordination costs to fall, and speed and capacity to rise, on an exponential trend, riding Moore’s law. Given that changes in coordination have significant economic impact (how could they not?), there’s a huge long term economic trend that’s formally invisible to economists.

One aspect of this that’s perhaps subtle, but often important, and that shows up strongly in capital aggregation, is how “technology” changes the risk of cooperation. One of the big sources of risk is that someone you’re cooperating with will “defect” (in the prisoner’s dilemma sense) — that the cooperative situation will give them ways to benefit by hurting you. In fund raising this risk is obvious, but working with people, letting others use your “spare cycles”, etc. have risks too. Ways of coordinating have been evolving to counteract or at least mitigate these risks — clearer norms for responding to bad actors, online reputations, large scale community reactions to serious bad behavior, and so forth. Wikipedia’s mechanisms for quick reversion of vandalism are an example. Even spam filtering is a case in point, reducing our costs and the benefits to the bad actors. It is too early to know for sure, but right now there are enough success stories that I’d guess the space of defensible and sustainable cooperation is pretty big — maybe even big enough to “embrace and extend” important parts of the current economy.

Turking! The idea and some implications

I recently read an edited collection of five stories, Metatropolis; the stories are set in a common world the authors developed together. This is a near future in which nation state authority has eroded and in which new social processes have grown up and have a big role in making things work. In some sense the theme of the book was naming and exploring those new processes.

One of those processes was turking. The term is based on Amazon’s Mechanical Turk. Google shows the term in use as far back as 2005 but I hadn’t really wrapped my mind around the implications; Metatropolis broadens the idea way beyond Amazon’s implementation or any of the other discussions I’ve read.

Turking: Getting bigger jobs done by semi-automatically splitting them up into large numbers of micro-jobs (often five minutes long or less), and then automatically aggregating and cross-checking the results. The turkers (people doing the micro-jobs) typically don’t have or need any long term or contractual relationship with the turking organizers. In many cases, possibly a majority, the turkers aren’t paid in cash, and often they aren’t paid at all, but do the tasks as volunteers or because they are intrinsically rewarding (as in games).

One key element that is distinctive to turking is some sort of entirely or largely automated process for checking the results — usually by giving multiple turkers the same task and comparing their results. Turkers who screw up too many tasks aren’t given more of those tasks. Contrast this with industrial employment where the “employer” filters job candidates, contracts with some to become “employees”, and then enforces their contracts. The relationship in turking is very different: the “employer” lets anybody become an “employee” and do some tasks, doesn’t (and can’t) control whether or how the “employee” does the work, but measures each “employee’s” results and decides whether and how to continue with the relationship.

This is an example of a very consistent pattern in the transition from industrial to networked relationships: a movement from gatekeeping and control to post hoc filtering. Another example is academic publishing. The (still dominant) industrial model of publishing works through gatekeeping — articles and books don’t get published until they are approved through peer review. The networked model works through post hoc processes: papers go up on web sites, get read, commented and reviewed, often are revised, and over time get positioned on a spectrum from valid/valuable to invalid/worthless. The networked model is inexorably taking over, because it is immensely faster, often fairer (getting a few bad anonymous reviews can’t kill a good paper), results in a wider range of better feedback to authors, etc.

It seems quite possible — even likely — that post hoc filtering for work will produce substantially better results than industrial style gatekeeping and control in most cases. In addition to having lower transaction costs, it could produce better quality, a better fit between worker and task, and less wasted effort. It also, of course, will change how much the results cost and how much people get paid — more on that below.
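To make the post hoc filtering concrete, here’s a minimal sketch of the loop an organizer might run: each micro-job goes to several turkers, answers are accepted by majority agreement, and workers whose error rate climbs too high simply stop getting tasks. The names, thresholds, and sample data are all invented for illustration; real platforms do something in this spirit, but more elaborate.

```python
from collections import Counter, defaultdict

AGREEMENT_QUORUM = 3      # copies of each micro-job handed out
MAX_ERROR_RATE = 0.3      # workers above this stop receiving tasks

def accept_answer(answers):
    """Majority vote over the redundant answers for one micro-job."""
    best, votes = Counter(answers.values()).most_common(1)[0]
    return best if votes >= (AGREEMENT_QUORUM // 2 + 1) else None

def update_reputations(answers, accepted, stats):
    """Credit workers who matched the accepted answer, debit the rest."""
    for worker, answer in answers.items():
        stats[worker]["done"] += 1
        if accepted is not None and answer != accepted:
            stats[worker]["errors"] += 1

def still_trusted(worker, stats):
    done = stats[worker]["done"]
    return done == 0 or stats[worker]["errors"] / done <= MAX_ERROR_RATE

# Example: three workers answer the same micro-job (hypothetical data).
stats = defaultdict(lambda: {"done": 0, "errors": 0})
answers = {"w1": "4 empty spaces", "w2": "4 empty spaces", "w3": "7 empty spaces"}
accepted = accept_answer(answers)
update_reputations(answers, accepted, stats)
print(accepted, [w for w in answers if still_trusted(w, stats)])
```

Note that no contract or up-front screening appears anywhere: anyone may answer, and the only “management” is the running accuracy check.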

Amazon’s Mechanical Turk just involves information processing — web input, web output — and this is typical of most turking today. However there are examples which involve real world activities. In an extreme case turking could be used to carry out terrorist acts, maybe without any individual turker doing anything criminal — Bruce Sterling has some stories that explore this possibility. But there are lots of ordinary examples, like counting the empty parking spaces on a given block, or taking a package and shipping it.

Examples

  • Refugees in camps are turking for money. The tasks are typical turking tasks, but the structure seems to be a more standard employment relationship. If there were enough computers, I bet a high percentage of the camp residents would participate, after some short period in which everyone learned from each other how to do the work. Then the organizers would have to shift to turking methods, because the overhead of managing hundreds of thousands of participants using contracting and control would be prohibitive.
  • A game called FoldIt is using turking to improve machine solutions to protein folding. It turns out humans greatly improve on the automatic results but need the machine to do the more routine work. The turkers have a wide range of skill and a variety of complementary strategies, so the project benefits from letting many people try and then keeping the ones who succeed. (This is an example where the quality is probably higher than an industrial style academic model could generate.) The rewards are the intrinsic pleasure of playing the game, and also maybe higher game rankings.
  • There’s a startup named CrowdFlower that aims to make a business out of turking. CrowdFlower has relationships with online games that include the turking in their game play. So the gamers get virtual rewards (status, loot). I can easily imagine that the right turking tasks would actually enhance game play. CrowdFlower is also doing more or less traditional social science studies of turking motivations etc. Of course the surveys that generate data for the research are also a form of turking.
  • Distributed proofreading. OCR’d texts are distributed to volunteers and the volunteers check and correct the OCR. (They get both the image and the text.) The front page goes out of its way to note that “there is no commitment expected on this site beyond the understanding that you do your best.” This is an early turking technology, and works in fairly large chunks, a page at a time. It may be replaced by a much finer grained technology that works a word at a time — see below.
  • Peer production (open source and open content). An important component of peer production is small increments of bug reporting, testing, code review, documentation editing, etc. Wikipedia also depends on a lot of small content updates, edits, typo fixes, etc. These processes have the same type of structure as turking, although they typically haven’t been called turking. The main difference from the other examples is that there’s no clear-cut infrastructure for checking the validity of changes. This is at least partly historical: these processes arose before the current form of turking was worked out. The incentive — beyond altruism and the itch to correct errors — is that one can get credit in the community and maybe even in the product.
  • I just recently came across another good example that deserves a longer explanation: ReCaptcha. It is cool because it takes two bad things, and converts them into two good things, using work people were doing anyway.

    The first bad thing is that OCR generates lots of errors, especially on poorly printed or scanned material — which is why the distributed proofreading process above is required. These can often be identified because the results are misspelled and/or the OCR algorithm reports low confidence. From the OCR failures, you can generate little images that OCR has trouble recognizing correctly.

    The second bad thing is that online services are often exploited by bad actors who use robots to post spam, abusively download data, etc. Often this is prevented by captchas, images that humans can convert into text, but that are hard for machines to recognize. Since OCR failures are known to be hard for machines to recognize correctly, they make good captchas.

ReCaptcha turns the user effort applied to solving captchas, which would otherwise be wasted, into turking to complete the OCR — essentially very fine grained distributed proofreading. ReCaptcha figures out who’s giving the correct answers by having each user recognize both a known word and an unknown word, in addition to comparing answers by different users (there’s a small sketch of this checking logic at the end of this list). Users are rewarded by getting the access they wanted.

Note that if spammers turk out captcha reading (which they are doing, but which increases their costs significantly) then they are indirectly paying for useful work as well. Potentially ReCaptcha could be generalized to any kind of simple pattern recognition that’s relatively easy for humans and hard for machines, which could generate a lot of value from human cognitive capacities.
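Here is the promised sketch of the known-word / unknown-word checking. Everything in it (function names, thresholds, the sample words) is invented for illustration; it shows the pattern, not ReCaptcha’s actual code: the control word gates access, and answers for the unknown word are pooled across users until they agree.

```python
from collections import Counter, defaultdict

REQUIRED_AGREEMENT = 2   # independent matching answers before a word is trusted

unknown_word_votes = defaultdict(Counter)   # image id -> answer counts
resolved_words = {}                         # image id -> accepted transcription

def submit_captcha(control_answer, control_truth, unknown_id, unknown_answer):
    """Grant access if the control word is right; pool the unknown answer."""
    if control_answer.strip().lower() != control_truth.lower():
        return False  # failed the known word: no access, answer discarded
    votes = unknown_word_votes[unknown_id]
    votes[unknown_answer.strip().lower()] += 1
    best, count = votes.most_common(1)[0]
    if count >= REQUIRED_AGREEMENT:
        resolved_words[unknown_id] = best  # OCR gap filled by agreeing humans
    return True

# Two users solve the same scanned word (all data hypothetical).
submit_captcha("upon", "upon", "scan-17", "morocco")
submit_captcha("river", "river", "scan-17", "morocco")
print(resolved_words)   # {'scan-17': 'morocco'}
```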

    Some implications

    It seems that over time a huge variety and quantity of work could be turked. The turking model has the capacity to productively employ a lot of what Clay Shirky calls our “cognitive surplus”, and also whatever time surplus we have. Many unemployed people, refugee populations and I’m sure lots of other groups have a lot of surplus. As Shirky points out, even employed people have a discretionary surplus that they spend watching TV, reading magazines, playing computer games, etc. However right now there’s no way to bring this surplus to market.

    Switching from industrial relationships (heavyweight, gatekeeping and control) to networked relationships (lightweight, post hoc filtering) reduces per task transaction costs to a tiny fraction of their current level, and makes it feasible to bring much of this surplus to market.

The flip side of that of course is that the more this surplus is available for production, the less anyone will get paid for the work it can do. Already in a lot of existing turking, the participants aren’t getting paid — and in many cases the organizers aren’t getting paid either. Also, more or less by definition, the surplus that would be applied to turking currently isn’t being used for any other paid activity, so potential workers aren’t giving up other pay to turk. Therefore, I expect the average payment for a turked task to approach zero, for both turkers and organizers. Usually there will still be rewards, but they will tend to be locally generated within the specific context of the tasks (online community, game, captcha, whatever). Often the entity that generates the rewards won’t get any specific benefit from the turking — for example, in the case of ReCaptcha, the sites that use it don’t particularly benefit from whatever proofreading gets done.

    Mostly turking rewards won’t be measurable in classical monetary terms — in some cases rewards may involve “in game” currency but this doesn’t yet count in the larger economy. In classical monetary terms, the marginal cost of getting a job turked will probably approach the cost of building, maintaining and running the turking infrastructure — and that cost is exponentially declining and will continue to do so for decades.

    This trend suggests that we need to find some metric complementary to money to aggregate preferences and allocate large scale social effort. But I’m not going to pursue that question further here.

    Obviously it will be important to understand what types of work can be turked and what can’t. For example, could the construction of new houses be turked? That may seem like a stretch, but Habitat for Humanity and other volunteer groups do construct houses with a process very much like turking — and of course this has a long history in the US, with institutions like barn raising. Furthermore the use of day labor isn’t that different from turking. I’d guess that within ten years we’ll be turking much of the construction of quite complex buildings. It is interesting to try to imagine what this implies for construction employment.

    Realistically, at this point we just don’t know the limits of turking. My guess is that the range of things that can be done via turking will turn out to be extremely broad, but that it will take a lot of specific innovations to grow into that range. Also of course there will be institutional resistance to turking many activities.

    When a swarm of turkers washes over any given activity and devours most of it, there will typically be a bunch of nuggets left over that can’t be turked reliably. These will probably be things that require substantial specialized training and/or experience, relatively deep knowledge of the particular circumstances, and maybe certification and accountability. Right now those nuggets are embedded in turkable work and so it is hard or impossible to figure out their distribution, relative size, etc. For a while (maybe twenty years or so) we’ll keep being surprised — we’ll find some type of nuggets we think can’t be turked, and then someone will invent a way to make most of them turkable. Only if and when turking converges on a stable institution will we be able to state more analytically and confidently the characteristics that make a task un-turkable.

Another issue is security / confidentiality. Right now, corporations are willing to use turking for lots of tasks, but I bet they wouldn’t turk tasks involving key market data, strategic planning, or other sensitive material. On the other hand, peer production projects are willing to turk almost anything, because they don’t have concerns about maintaining a competitive advantage by keeping secrets. (They do of course have to keep some customer data private if they collect it at all, but usually they just avoid recording personal details.) I’d guess that over time this will give entities that keep fewer secrets a competitive advantage. I think this is already the case for a lot of related reasons; broadly speaking, trying to keep secrets imposes huge transaction costs. Eventually keeping big secrets may come to be seen as an absurdly expensive and dubious proposition, and the ability to keep big pointless secrets will become an assertion of wealth and power. (Every entity will need to keep a few small secrets, such as the root password to their servers. But we know how to safely give everyone read-only access to almost everything, and still limit changes to those who have those small secrets.)

    There’s lots more to say, but that’s enough for now.

Untrustworthy by design? Or incompetence? Just untrustworthy

    James Kwak has an interesting post Design or Incompetence? in which he discusses the ways banks are delaying and obstructing customer efforts to get benefits the banks have offered. In addition the banks are misrepresenting their own actions and obligations. As the title suggests he wonders whether this behavior is due to design or incompetence, and ultimately concludes (after some discussion of the internal institutional issues) that it doesn’t matter because it is a systemic consequence of the banks’ incentives.

As in my previous post on “Lying or Stupid?”, I’d say this analysis is interesting and sometimes useful, but we should start by saying the institutions involved are untrustworthy, which is true either way; often we don’t need to look into the finer distinctions. Debating the details usually just gives these bad actors ways to muddy the waters.

    More generally, we need to develop social sanctions — applied by governments, broad condemnation, boycott, and/or whatever else will work — to “adjust the incentives” of these bad actors so they either become trustworthy or are replaced by organizations that are trustworthy. These social sanctions worked with apartheid and to some extent with third world sweatshops, we can at least imagine them working with respect to untrustworthiness.

    Right now unfortunately there are many who argue that not only should corporations ignore this sort of issue, but even further that it would be immoral for them to take such considerations into account. Furthermore the general perception is that we can’t expect corporations to care about morality, as Roger Lowenstein discusses. I’ve been chewing on this issue in the comments to a couple of interesting posts by Timothy Lee and plan to summarize my resulting thoughts here soon. The good news from that discussion is that even some committed free market folks such as Timothy agree that we need to have corporations put moral obligations such as trustworthiness above profits. Now we need rough consensus on that…

    Analyzing crazy beliefs

Recently there’s been a renewed attempt in the liberal / scientific blogosphere to figure out what’s up with all the crazy social / political claims that keep erupting — about creationism, Obama, health care, global warming, etc. A new and I think potentially major step forward in this analysis has just been posted by Mike the Mad Biologist, building on two excellent posts by Slacktivist (False witnesses, False witnesses 2). This analysis is the first one I’ve seen that makes me feel like I understand most of the craziness we are seeing today (and have seen in some form for many decades), and have at least a hope of figuring out how it will evolve, how we should respond, etc.

    The basic point is that the crazy stories (death panels, global warming conspiracies, Obama’s birth, etc. etc.) aren’t really “believed” as we understand that term, at least not by their most vigorous proponents. We use “belief” to mean ideas that are part of an overall picture that we intend to be coherent, to help guide our actions in the world (including in the lab if we are scientists), etc.

    Instead, these crazy “beliefs” are really a way of recruiting emotional and social support, declaring membership in a group, etc. So “believers” can’t be persuaded that the “beliefs” are “wrong” just because they are incoherent, lead to obviously wrong conclusions that the “believers” won’t adopt, etc. A strenuous attempt to persuade believers on pragmatic grounds just confirms you are not one of their crowd, can’t be recruited, and are probably one of the enemy. The post “False witnesses” referenced above has a very good discussion of this in some detail. It is worth reading because it is hard to imagine this state of mind (at least I find it hard) until you see it laid out in very specific terms.

I don’t want to say the crazy stories aren’t “really” beliefs — though I’m not sure saying they are crazy beliefs is any nicer. Instead, let’s call the first kind of belief (aiming at coherence and effectiveness) “pragmatic”. We can call the second kind (aiming at recruiting or maintaining support) “participatory” beliefs. (I’m sure there are harmless and even charming participatory beliefs, as well as these crazy ones.) Realistically we all have both kinds; the question is which kind is dominant in any given area, how we react when they are challenged, etc.

    Properties of pragmatic vs. participatory beliefs
    Slacktivist usefully summarizes his expectations and how he found these extreme participatory beliefs actually work:
    I was operating under a set of false assumptions [viewing these as pragmatic beliefs]. Among these:
    1. I assumed that the people who claimed to believe [a particular crazy story] really did believe such a thing.
    2. I assumed that they were passing on this rumor in good faith — that they were misinforming others only because they had, themselves, been misinformed.
    3. I assumed that they would respect, or care about, or at least be willing to consider, the actual facts of the matter.
    4. Because the people spreading this rumor claimed to be horrified/angry about its allegations, I assumed that they would be happy/relieved to learn that these allegations were, indisputably, not true.

    All of those assumptions proved to be false. All of them. This was at first bewildering, then disappointing, and then, the more I thought about it, appalling — so appalling that I was reluctant to accept that it could really be the case.

    But it is the case. Let’s go through that list again. The following are all true of the people spreading the [crazy story]:

1. They didn’t really believe it themselves [using the "pragmatic" definition of belief].
    2. They were passing it along with the intent of misinforming others. Deliberately.
    3. They did not respect, or care about, the actual facts of the matter, except to the extent that they viewed such facts with hostility.
    4. Being told that the Bad Thing they were purportedly upset about wasn’t real only made them more upset. Proof that the [crazy story wasn't true] made them defensive and very, very angry.

    Rather than saying the people he was talking to “didn’t really believe it themselves” and intended to misinform others, I’d say that they didn’t care about the pragmatic dimension at all, and so didn’t consider their recruiting to be misinformation. Quite possibly they didn’t expect those they were trying to recruit to interpret the rumor as a pragmatic fact.

    This analysis has a lot going for it, much of it discussed rather well in these posts. Obviously participatory disagreements will be more like turf wars than practical discussions. As Mike says in the first post below, “part of the reason [for global warming denialism] is the ever-present desire to punch a hippie in the face” but he thinks that is a different issue. No, it is the same issue — hippies are cultural icons who stand for a different set of participatory beliefs incompatible with the main crazy participatory beliefs. (Obviously for this analysis it doesn’t matter if hippies really do have those beliefs or if hippies even exist.)

    The members of the tribes that tell these crazy stories fear they can’t recruit hippies (and in fact fear that hippies are dangerously capable of seducing their own weakly committed members). Punching them in the face is their sincerest form of acknowledgement.

I think this analysis is a good guide to anticipating likely future scenarios, and to judging the effectiveness of possible actions. The worst scenarios are very bad, and while not highly likely I think they are credible. The 20th century leaves us with many examples of participatory “cults” that generated massive death, suffering, and social destruction (military cultures in Europe and Japan, Nazis, Soviet Communists, Maoists, the Khmer Rouge, and so forth).

    What’s the role of religion?
None of these posts focus on religion per se (though the crazy beliefs they talk about are especially relevant to evangelicals). And certainly some major participatory cults have been very hostile to religion (e.g. the Khmer Rouge, Maoism, etc.) — I suppose viewing it as competition.

However I think just about all organized religion depends on participatory beliefs (some forms of Buddhism may be exceptions). Even if a believer is otherwise rational, their religion says it is OK for them to have beliefs that are basically incoherent (or carefully not evaluated for consistency), that aren’t effective in guiding action (or aren’t evaluated in terms of effectiveness), etc. Evangelical religions, furthermore, are defined by recruiting others to their participatory beliefs — that’s what evangelism is.

One of our constraints is that liberals have a participatory belief (or meta-belief) in pragmatic belief. We want to debate at the level of pragmatic beliefs (what is coherent, what will work) and avoid tribalism. Thus liberals can seem weak when they are attacked in social turf wars at the level of participatory belief. I guess this liberal participatory belief is partly historical, in that the liberal coalition (meta-tribe) was largely founded on the rejection of religious wars and the valorization of pragmatic choices relative to participatory beliefs, and partly structural, in that the liberal coalition still depends largely on uniting groups with partially incompatible participatory beliefs (liberal Protestants, liberal Catholics, liberal Jews, the liberal non-religious, liberal Muslims, etc.).

In any case, we don’t want to respond to tribal attacks by organizing tribal counter-attacks — that just tends to pull everything down to the tribal level and would make our problems a lot worse. So as an initial response, the rejection of participatory, tribal responses by the liberal coalition is correct. However we can’t just respond with pragmatic arguments, because those don’t work against participatory attacks. We have to actually take on the participatory attacks and defeat them — we just have to find ways of doing it that are better than fighting them on their own participatory terms.

    Of moose and men

    In a post several weeks ago, One Man’s Moose, Timothy Burke discussed the social tradeoffs between regulation and respect for individual desires and needs. That post came out of a larger web discussion on game management in Vermont, and resulting conflicts with people who keep moose as pets. Timothy summarized a key point:

    If you start cutting separate deals with everyone who pleads that their circumstances are special, that a legitimate attempt to safeguard the public shouldn’t apply to them, you’ll end up with a public policy that applies to no one.
    I reacted strongly to that general point; I’m posting a reworked version of my comments here.

    Of course Timothy’s summary is a more detailed statement of the classic bureaucratic argument, “If we let you do it, we’d have to let everyone do it”; living in a complex society we encounter this constraint on our liberty implicitly or explicitly many times a day.

    The basic point is tough to dispute. But the way it typically plays out, for example in Timothy’s quote, relies on an implicit assumption about the “cognitive” limitations of bureaucracy. We assume that the bureaucrat can only use fairly simple rules based on local information. As a practical matter this has been true of bureaucrats for the last several thousand years, so this assumption has gotten deeply embedded. But maybe it isn’t true anymore.

Let’s suspend that assumption for a moment and instead use the typical (crazy) assumptions of micro-economic models. Suppose all the bureaucrats enforcing a given policy (game wardens, medical referral reviewers, etc.) knew everything relevant to their decisions, including all the issues being considered by similar bureaucrats, and could see the implications of every choice.

    In this case, every bureaucrat could cut deals tailored to the individual circumstances of each moose owner, land owner, hospital, sick person, etc. while still preserving the effects of the global policy. Some otherwise unhappy citizens could be bought off with voluntary transfers (of money, services, alternative services, etc.) from others. Quite likely (but not necessarily) some people would remain dissatisfied, but surely far fewer. In addition, everyone could see that the decisions were closely tailored to circumstances, and if they viewed a range of decisions, they could probably see that it would be hard (by hypothesis, actually impossible) to improve on the local tradeoffs. So they’d be more inclined to accept their deals as “the best we all could do”.

    We know these assumptions are crazy. But they are crazy in exactly the same way as standard micro-economic models. To prove that markets “work”, the bodega operator on the corner is presumed to know everything relevant to his business and to fully calculate the effects of all his choices; we are just extending this generous assumption to bureaucrats. We now have the power to enrich our decision making with almost unlimited amounts of information, real time social networking, computation, etc. So maybe the micro-economic assumptions aren’t as crazy as they used to be.

    Thinking in these terms helps us see why a bureaucratic process, as Timothy says, “seems impoverished and cold compared to the vivid individuality of real people and real circumstances.” The problem isn’t mainly policy goals or the attempt to impose rational constraints on the situation. The problem is our (circumstantial) limits in matching up local variation with the demands of our global goal. This is not an in-principle problem of public vs. private, large vs. small etc. Instead it is basically a problem of data and computation, and perhaps techniques to prevent gaming.

    Intractability

    Conceivably we might run into in-principle problems of computational intractability, but that would need to be demonstrated and would be an interesting result. There are such intractability results for general micro-economic models, but it isn’t clear they apply to the much more limited cases of trying to manage moose, etc. Even if an exactly optimal outcome is intractable, likely we could find a tractable close approximation. If anyone cares I can dig up references; ask in comments or email me.

    Let’s compare this with a “free market” story, say about game management. In that story everyone could own their own moose, but we’d make sure they internalized all their externalities; if moose were giving each other diseases, we’d allocate the costs of diseases to the original sick moose, and so forth. This market story depends on just as many unrealistic assumptions about people, knowledge, calculation, etc. as “optimal bureaucracy” applied to game management — in fact it is arguably even less realistic, because we have to price the externalities correctly, impose the prices, and moose owners have to anticipate and respond to the possible costs correctly.

    So why do we like markets? To some extent they solve the information and calculation problem by aggregating choices into prices.

    When do markets work?

    The proof that free markets are optimal actually cheats by assuming every market participant knows everything and can calculate everything anyway! In many cases prices do usefully aggregate information and simplify calculation, but I don’t know of a strong analysis of where (and how) they actually work and where they don’t.

    More generally, though, markets create incentives for participants to locally optimize using abundant, cheap local data, and they aggregate those local optimizations (through prices) in ways that approximate a global optimum. (Of course often they totally screw things up in new ways, typically by incenting participants to pursue socially dysfunctional goals, some of which also systematically distort the social process to even more strongly favor dysfunctional ends. See ponzi schemes, patent medicines, marketing new drugs that are less effective than the old ones, lobbying and regulatory capture.)
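One way to see “aggregating choices into prices” is a toy tâtonnement loop: each side reports only how much it would trade at the posted price, and the price moves toward whatever balances them. The linear schedules and step size below are made-up toy values — a sketch of the idea, not a claim about how real markets clear.

```python
def demand(price):
    """Total quantity buyers want at this price (toy linear schedule)."""
    return max(0.0, 100.0 - 2.0 * price)

def supply(price):
    """Total quantity sellers offer at this price (toy linear schedule)."""
    return 3.0 * price

def tatonnement(price=1.0, step=0.05, rounds=200):
    # Adjust the price in proportion to excess demand; no participant needs
    # global knowledge, only their own response to the current price.
    for _ in range(rounds):
        excess = demand(price) - supply(price)
        price += step * excess
    return price

p = tatonnement()
print(round(p, 2), round(demand(p), 1), round(supply(p), 1))  # ~20.0, 60.0, 60.0
```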

    Happily we’re coming to understand how to do this sort of local optimization and aggregation without ownership and exchange. We all locally optimize and aggregate our ideo-dialects of our language, clothing styles, music choices, etc. The open source community has figured out how to locally optimize and aggregate software design and construction, and so forth. The web makes all of this easier and faster.

    Economic theory has focused on the exchange case, but markets are obviously derivative from the more general case. After all, markets arise from stable social arrangements, not the other way around, and these arrangements are stable because they have found local optima. In many ways exchange creates problems; for example, it creates opportunities to use bribes in one form or another.

    Given this analysis, how might we improve matters?

    How to get better at bureaucracy

Historically we’ve found that large scale organizations, and setting and enforcing public policy, get us into these bureaucratic quandaries; but scale and public policy are unavoidable and we tend to figure we can’t do any better. If we realize the problem has been process limitations, and that now we can do better, we should devote more effort to process engineering. A better process would pull in more information and cognitive resources from the affected citizens and would organize their activities with constraints and incentives so they approximate the intended policy. We don’t (yet) have a good engineering approach to building and managing processes like this, but surely we can improve current processes if we put our minds to it. One demonstration of the potential for improvement is the enormous difference in effectiveness among complex organizations like hospitals — organizations which deliberately evolve their processes, monitoring and incorporating experience over time, can improve by orders of magnitude relative to those that don’t.

    Comment from T. Burke

    At this point I was very happy to get a comment from Timothy indicating we were in sync:

    I am really finding this a useful and thought-provoking way to circle back around the problem and come at it from some new angles. Thinking about open source as a generalized strategy or at least an insight to possible escapes from the public/private national/local is very stimulating. There’s something here about abandoning the kind of mastery and universalism that liberalism seems too attached to, while not abandoning a way of aggregating knowledge towards shared best practices (which include ethical/moral/social dispensations, not just technical ones).

Maybe here it would help to think about why we keep getting stuck in this cul-de-sac. Bureaucracy is a highly evolved set of practices that maybe started in Fertile Crescent farm products management around 3,000 BCE.

Correction by G. Weaire

    Thanks to G. Weaire whose comment, in addition to raising fascinating issues, very gently corrected my overstatement of this period by 2,000 years.

    We’ve had plenty of time to figure out how to do things better but I can’t think of any historical societies that really got out of this bind. Even if some did, we have to grapple with why bureaucracy in basically all cultures today generates similar problems — of course with variations in corruption, efficiency, etc.

    The model for effective bureaucracy should perhaps be our other successful distributed negotiations. As I mentioned, we’re very good at “negotiating” changes in our language, social and cultural conventions, background assumptions, etc. etc. We’re so good at this that most of our negotiation is implicit and even unconscious.

    Is there theory?

    Elinor Ostrom analyzes the stable results of this sort of negotiation (as do Coase and others). But do we have any good models of the negotiation process itself? G. Weaire in his comment suggested “the sociolinguistics of politeness, esp. the still (I think) leading Brown-Levinson model. This tradition of inquiry is more-or-less entirely about trying to formalize an understanding of this sort of process at the level of conversational interaction.” He also mentioned “Michael Gagarin, Writing Greek Law… with its focus on highly formal public processes that aren’t bureaucratic but aren’t quite the village consensus either.” Luc Steels has simulated the negotiation of simple vocabularies during language formation…
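Steels’ experiments can be caricatured in a few lines: a speaker names an object (inventing a word if it has none), a hearer adopts or confirms the name, and successful pairs prune their competing synonyms; with no central coordinator the population drifts to a shared vocabulary. The sketch below is my own toy version of that “naming game” dynamic, not Steels’ code, and the parameters are arbitrary.

```python
import random

class Agent:
    def __init__(self):
        self.words = []          # candidate names this agent knows for the object

    def speak(self):
        if not self.words:
            self.words.append("w%04d" % random.randrange(10000))  # invent a word
        return random.choice(self.words)

    def hear(self, word):
        if word in self.words:
            self.words = [word]  # success: drop competing synonyms
            return True
        self.words.append(word)  # failure: remember the word for next time
        return False

def naming_game(n_agents=50, rounds=20000, seed=0):
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        word = speaker.speak()
        if hearer.hear(word):
            speaker.words = [word]  # speaker also prunes on success
    return {w for a in agents for w in a.words}

print(naming_game())   # typically collapses to a single shared word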

    I believe these distributed negotiations are responsible for generating, shaping and maintaining essentially all of our institutions — replicated patterns of interaction — and thus our apparently stable social environment.

    So if we’re so good at this, why can’t we negotiate the enforcement of policy in the same way? I guess the main reason is that our negotiations operate in “consensus time” but bureaucratic processes have to operate in “transaction time”, and also need to maintain more detailed, reliable information than social memory typically does. When a farmer in Ur put grain into storage he needed a receipt right then, not when the village discussion could get around to it, and he needed a detailed stable record not whatever the members of the village could remember a few weeks or months later. So we got clerks making marks on a tablet, the rest is history.

    Could it really scale?

    G. Weaire commented that “the modern state has so much greater a bureaucratic capacity than any predecessor that it’s a difference of degree that adds up to a difference of kind, and that speaking of [5,000] years of bureaucracy maybe isn’t a helpful frame of reference.”

He is right that a difference in scale of maybe six decimal orders of magnitude (from maybe 100 clerks in a city of several tens of thousands, to a hundred million or maybe even a billion bureaucrats of various flavors) is certainly a difference in kind.

However I think some important characteristics can persist even across such a great change. My own analogy here would be Turing’s original abstract machine compared with the one I’m using to write this. I’m sure the difference in performance, storage capacity, etc. is at least as great. And Turing couldn’t anticipate huge differences in kind, such as the web (and its social consequences), open source, the conceptual problems of large scale software, etc. However even today everyone who works with computers, to a considerable extent, must learn to think the way Turing did.

    Similarly, the work of the clerk depended on social formations of fungibility of goods, identity of persons, standards of quantity and quality, etc. which are still the foundations of bureaucratic policy.

    So while it would be wrong to ignore this difference of kind, at the same time, I think there are important constraints that have stayed immutable from Ur until recently.

    I believe the limits on implementing a complex, widely distributed negotiation at transaction speed are mostly cognitive — humans just can’t learn quickly enough, keep enough in mind, make complex enough judgments, etc. As long as the process has to go through human minds at each step, and still has to run at transaction speed, bureaucracy (public or “private” — think of your favorite negotiation with a corporate behemoth) is the best we can do (sigh); we’re pretty much stuck with the tradeoff Timothy is talking about, and thus the perennial struggles.

On the other hand, open peer production — open source, Wikipedia, etc. — seems to have partially gotten out of this bind by keeping the state of the negotiation mostly in the web, rather than in the participants’ heads.

    For example, on the web people negotiate largely through versioned (and often branching) repositories. These repositories can simultaneously contain all the alternatives in contention and make them easy to mutate, merge and experiment with. This option isn’t directly available to us for moose management

    check ‘em in!

    (though I enjoy the thought of checking all the moose, their owners, and the game management bureaucracy into git, and then instantiating modified versions of the whole lot on multiple branches)

    but examples like this suggest what may be possible going forward.

    The web also helps to make rapid distributed negotiation work through extreme transparency. Generally all the consequential interactions are on the public record as soon as they occur (in repositories or email archives). All the history is archived in public essentially forever, so is always available as a resource for analysis or bolstering or attacking a position. This has good effects on incentives, and also on the evolution of discourse norms.

    Evils of opacity

    The current financial system is pretty far from this, and is working hard to stay far away, by keeping transactions off exchanges, creating opaque securities, etc. As investigation proceeds, it seems more and more likely that the financial crisis would not have occurred if most transactions had been visible to other participants.

We are in the process of generating transparency for a lot of existing bureaucratic processes, and it probably can and should be made a universal norm for all of them (including game management). Note that simply having public records is not nearly enough — the records need to be online, accessible without fees, and in a format consistent enough to be searchable. Then open content processes will tend to generate transparency for the process as a whole. There’s still a lot of contention around electronically accessible records — existing interests have thrown up all kinds of obstacles, including trade secrets (e.g. testing voting machines), copyright (e.g. building codes and legal records), refusal to convert to electronic form (e.g. legislative calendars), fees for access, etc. But these excuses usually seem pretty absurd when made explicit, and they are gradually being ground down. Electronic transparency isn’t yet a social norm, but we seem to be slouching in that direction.

    My guess is that if we simply make any given bureaucratic process visible to all the participants through the web, it would evolve fairly quickly toward a much more flexible distributed negotiation. This would be fairly easy to try, technically; just put all the current cases, including correspondence etc. on a MediaWiki site, and keep them there as history when they are decided. The politics, privacy issues, etc. will be a lot more thorny. But it seems like an experiment worth trying.

    Open peer production also works because the payoffs for manipulating the system are generally very low. No one owns the content, and there’s no way for contributors to appropriate a significant share of the social benefits. There have been a few semi-successful cases where commercial enterprises manipulated open processes, such as the Rambus patent scam (essentially Rambus successfully promoted inclusion of ideas in standards, and only afterward revealed it had applicable patents). But these cases are rare and so far the relevant community has always been able to amend its practices fairly quickly and easily to prevent similar problems in the future.

    I’m much less clear how we can reduce the payoffs for manipulating social processes. In many cases (such as game management) payoffs are probably already pretty low. But in many important areas like finance and health care they are huge. My guess is that there are ways to restructure our institutions of ownership and control to improve matters but this will be a multi-decade struggle.

    Response to Vassar on thoughtless equilibrium

Michael Vassar writes in response to my post on rationality as an optimization for equilibriums that emerge from thoughtless wandering:

    So, how should we compare the equilibriums in question if not rationally?

    I can’t tell whether the statement “that all rationality is an optimization, which lets us get much faster to the same place we’d end up after sufficiently extended thoughtless wandering of the right sort.” is trivial or trivially wrong, which is probably a bad sign. The statement invokes clear opinions about math, evolution, and computer science, but verbalizing them seems neither easy nor necessary. At any event, one of the major classes of findings that interest me in financial economics are those which refute the idea that a mix of rational and irrational agents necessarily produce an efficient equilibrium, such as
    this paper.

    The standard neo-classical proofs that a market will produce optimal equilibria require assumptions of unbounded, costless computational power, omniscience, etc. Arrow and Debreu got their Nobel prize for those proofs. Presumably they would have preferred to use weaker, more realistic assumptions but didn’t see how. So proving that we can approximate those equilibria with much weaker thoughtless wandering seems far from trivial.

    I’m not sure why Michael thinks this idea could be trivially wrong — the papers I reference seem pretty conclusive. Perhaps the phrase “thoughtless wandering” is too informal, but the papers show that these equilibria can be approximated by populations of entities that have no ability to anticipate consequences or plan, so they’re pretty thoughtless.

    Certainly we can always come up with thoughtless wandering of the wrong sort, which will lead to equilibria that don’t optimize the function we want, or perhaps to systems that don’t converge to any equilibrium at all. But this is actually one of the big advantages of viewing rationality as an optimization of thoughtless wandering. It lets us ask specifically what sorts of thoughtless wandering do and don’t approximate the equilibria that we find valuable, or conversely, what interesting failure modes arise in a given class of thoughtless wandering.
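For a concrete example of “thoughtless wandering of the right sort,” consider zero-intelligence traders in the spirit of Gode and Sunder: buyers and sellers bid at random, constrained only by their private values and costs, and transaction prices still land near the competitive equilibrium on average. The sketch below is my own toy version of that flavor of result, not the model from any of the papers discussed here.

```python
import random

def zero_intelligence_market(n=100, rounds=5000, seed=1):
    """Random bidders constrained only by budget; no foresight, no strategy."""
    random.seed(seed)
    # Buyers' private values and sellers' costs, uniform on [0, 100].
    values = [random.uniform(0, 100) for _ in range(n)]
    costs = [random.uniform(0, 100) for _ in range(n)]
    prices = []
    for _ in range(rounds):
        v = random.choice(values)
        c = random.choice(costs)
        bid = random.uniform(0, v)        # never bid above your value
        ask = random.uniform(c, 100)      # never ask below your cost
        if bid >= ask:                    # a deal happens at the midpoint
            prices.append((bid + ask) / 2)
    return sum(prices) / len(prices)

# With symmetric values and costs the competitive price is about 50,
# and the thoughtless traders average out close to it.
print(round(zero_intelligence_market(), 1))
```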

    The paper by DeLong et al on noise traders that Michael references is a good example of the kind of insights we can gain by stepping back from rationality. Analyzing a simple stochastic regime, the authors show that even in competition with rational traders, noise traders can capture a significant share of wealth in a market, at the cost of most of them going bankrupt. In effect, the small fraction of lucky survivors have been so lucky that they got very, very rich.

    However the assumptions the authors have to make indicate the difficulties of this enterprise. Specifically, to make their analysis tractable, the authors assume that these very wealthy noise traders have no effect on prices, even though they dominate the market (!). So we don’t know what noise traders would actually do to the equilibrium. Unfortunately, analyzing this kind of stochastic system is hard — but it is very worthwhile.

    Finally, Michael’s question of how we should compare equilibria isn’t answered by any concept of optimality — rational, stochastic, evolutionary, or otherwise. To optimize we always have to specify an objective function, and the objective function is exogenous — it comes from somewhere outside the optimization process itself. Typically in economics the objective function is the (weighted) vector of utilities of all the consumers, for example. Economics doesn’t have any intrinsic way to say that consumers have “irrational” utilities.

Objective functions may be subject to critiques based on internal inconsistencies, observations that other “nearby” objective functions lead to much higher optima on some dimensions, etc. I conjecture that generally these critiques can be understood in the “thoughtless wandering” perspective in terms of the dynamics of the system — it may fail to converge at all if an objective function is inconsistent, for example. Also, while “rationality intensive” neoclassical economic equilibria are very fragile — they don’t hold up under perturbation — the “thoughtless wandering” approximations are much more robust since they are stochastic to begin with, so they are less likely to produce bad results due to small problems with initial conditions.

    Bubbles of humanity in a post-human world

    Austin Henderson had some further points in his comment on Dancing toward the singularity that I wanted to discuss. He was replying to my remarks on a social phase-change toward the end of the post. I’ll quote the relevant bits of my post, substituting my later term “netminds” for the term I was using then, “hybrid systems”:

    If we put a pot of water on the stove and turn on the heat, for a while all the water heats up, but not uniformly–we get all sorts of inhomogeneity and interesting dynamics. At some point, local phase transitions occur–little bubbles of water vapor start forming and then collapsing. As the water continues to heat up, the bubbles become more persistent, until we’ve reached a rolling boil. After a while, all the water has turned into vapor, and there’s no more liquid in the pot.

    We’re now at the point where bubbles of netminds (such as “gelled” development teams) can form, but they aren’t all that stable or powerful yet, and so they aren’t dramatically different from their social environment. Their phase boundary isn’t very sharp.

    As we go forward and these bubbles get easier to form, more powerful and more stable, the overall social environment will be increasingly roiled up by their activities. As the bubbles merge to form a large network of netminds, the contrast between people who are part of netminds and normal people will become starker.

    Unlike the pot that boils dry, I’d expect the two phases–normal people and netminds–to come to an approximate equilibrium, in which parts of the population choose to stay normal indefinitely. The Amish today are a good example of how a group can make that choice. Note that members of both populations will cross the phase boundary, just as water molecules are constantly in flux across phase boundaries. Amish children are expected to go out and explore the larger culture, and decide whether to return. I presume that in some cases, members of the outside culture also decide to join the Amish, perhaps through marriage.

After I wrote this I encountered happiness studies that show the Amish are much happier and dramatically less frequently depressed than mainstream US citizens. I think it’s very likely that the people who reject netminds and stick with GOFH (good old fashioned humanity) may similarly be much happier than people who become part of netminds (on the average).

    It isn’t too hard to imagine why this might be. The Amish very deliberately tailor their culture to work for them, selectively adopting modern innovations and tying them into their social practices in specific ways designed to maintain their quality of life. Similarly, GOFH will have the opportunity to tailor its culture and technical environment in the same way, perhaps with the assistance of friendly netminds that can see deeper implications than the members of GOFH.

    I’m inclined to believe that I too would be happier in a “tailored” culture. Nonetheless, I’m not planning to become Amish, and I probably will merge into a netmind if a good opportunity arises. I guess my own happiness just isn’t my primary value.

    [A]s the singularity approaches, the “veil” between us and the future will become more opaque for normal people, and at the same time will shift from a “time-like” to a “space-like” boundary. In other words, the singularity currently falls between our present and our future, but will increasingly fall between normal humans and netminds living at the same time. Netminds will be able to “see into” normal human communities–in fact they’ll be able to understand them far more accurately than we can now understand ourselves–but normal humans will find hybrid communities opaque. Of course polite netminds will present a quasi-normal surface to normal humans except in times of great stress.

By analogy with other kinds of phase changes, the distance we can see into the future will shrink as we go through the transition, but once we start to move toward a new equilibrium, our horizons will expand again, and we (that is, netminds) may even be able to see much further ahead than we can today. Even normal people may be able to see further ahead (within their bubbles), as long as the equilibrium is stable. The Amish can see further ahead in their own world than we can in ours, because they have decided that their way of life will change slowly.

    Austin raises a number of issues with my description of this phase change. His first question is why we should regard the population of netminds as (more or less) homogeneous:

    All water boils the same way, so that when bubbles coalesce they are coherent. Will bubbles of [netmind] attempt to merge, maybe that will take more work than their hybrid excess capability provides, so they will expend all their advantage trying to coalesce so that they can make use of that advantage. Maybe it will be self-limiting: the “coherence factor” — you have to prevent it from riding off at high speed in all directions.

    Our current experience with networked systems indicates there’s a messy dynamic balance. Network effects generate a lot of force toward convergence or subsumption, since the bigger nexus tends to outperform the smaller one even if it is not technically as good. (Here I’m talking about nexi of interoperability, so they are conceptual or conventional, not physical — e.g. standards.)

    Certainly the complexity of any given standard can get overwhelming. Standards that try to include everything break down or just get too complex to implement. Thus there’s a tendency for standards to fission and modularize. This is a good evolutionary argument for why we see compositionality in any general purpose communication medium, such as human language.

    When a standard breaks into pieces, or when competing standards emerge, or when standards originally developed in different areas start interacting, if the pieces don’t work together, that causes a lot of distress and gets fixed one way or another. So the network effects still dominate, by making the pieces interact gracefully. Multiple interacting standards ultimately get adjusted so that they are modular parts of a bigger system, if they all continue to be viable.

    As for riding off in all directions, I just came across an interesting map of science. In a discussion of the map, a commenter makes just the point I made in another blog post: that real scientific work is all connected, while pseudo-science goes off into little encapsulated belief systems.

    I think that science stays connected because each piece progresses much faster when it trades across its boundaries. If a piece can’t or won’t connect for some reason it falls behind. The same phenomenon occurs in international trade and cultural exchange. So probably some netminds will encapsulate themselves, and others will ride off in some direction far enough so they can’t easily maintain communication with the mainstream. But those moves will tend to be self-limiting, as the relatively isolated netminds fall behind the mainstream and become too backward to have any power or influence.

    None of this actually implies that netminds will be homogeneous, any more than current scientific disciplines are homogeneous. They will have different internal languages, different norms, different cultures, they will think different things are funny or disturbing, etc. But they’ll all be able to communicate effectively and “trade” questions and ideas with each other.

    Austin’s next question is closely related to this first one:

    Why is there only one phase change? Why wouldn’t the first set of [netminds] be quickly passed by the next, etc. Just like the generation gap…? Maybe, as it appears to me in evolution in language (read McWhorter, “The Word on the Street” for the facts), the speed of drift is just matched by our length of life, and the bridging capability of intervening generations; same thing in space, bridging capability across intervening African dialects in a string of tribes matches the ability to travel. Again, maybe mechanisms of drift will limit the capacity for change.

    Here I want to think of phase changes as occurring along a spectrum of different scales. For example, in liquid water, structured patterns of water molecules form around polar parts of protein molecules. These patterns have boundaries and change the chemical properties of the water inside them. So perhaps we should regard these patterns as “micro-phases”, much smaller and less robust than the “macro-phases” of solid, liquid and gas.

    Given this spectrum, I’m definitely talking about a “macro-phase” transition, one that is so massive that it is extremely rare in history. I’d compare the change we’re going through to the evolution of the genetic mechanisms that support multi-cellular differentiation, and to the evolution of general purpose language supporting culture that could accumulate across generations. The exponential increases in the power of digital systems will have as big an impact as these did. So, yes, there will be more phase changes, but even if they are coming exponentially closer the next one of this magnitude is still quite some time away:

    • Cambrian explosion, 500 million years ago
    • General language, 500 thousand years ago
    • Human / digital hybrids (netminds), now
    • Next phase change, 500 years from now?
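
    Reading the dates in this list as a rough geometric sequence (my own gloss on the list above, nothing more rigorous), the gap between successive transitions shrinks by about a factor of a thousand each time:

    \[ \Delta t_k \approx 5 \times 10^{\,8 - 3k} \ \text{years}, \qquad k = 0, 1, 2, \ldots \]

    so roughly 500 million years from the Cambrian explosion to general language, 500 thousand years from language to netminds, and about 500 years from netminds to the next transition of comparable magnitude.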

    Change vs. coherence is an interesting issue. We need to distinguish between drift (which is fairly continuous) and phase changes (which are quite discontinuous).

    We have a hard time understanding Medieval English, as much because of cultural drift as because of linguistic drift. The result of drift isn’t that we get multiple phases co-existing (with rare exceptions), but that we get opaque history. In our context this means that after a few decades, netminds will have a hard time understanding the records left by earlier netminds. This is already happening as our ability to read old digital media deteriorates, due to loss of physical and format compatibility.

    I imagine it would (almost) always be possible to go back and recover an understanding of historical records, if some netmind is motivated to put enough effort into the task — just as we can generally read old computer tapes, if we want to work hard enough. But it would be harder for them than for us, because of the sheer volume of data and computation that holds everything together at any given time. Our coherence is very, very thin by comparison.

    For example, the “thickness” of long-term cultural transmission in western civilization can be measured, in some sense, by the manuscripts that survived from Rome and Greece and Israel at the invention of printing. I’m pretty sure that all of those manuscripts would fit on one (or at most a few) DVDs as high-resolution images. To be sure, these manuscripts are a much more distilled vehicle of cultural transmission than (say) the latest Tom Cruise DVD, but at some point the sheer magnitude of cultural production overwhelms this issue.
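
    As a purely parametric back-of-envelope, here is a tiny Python sketch of the DVD arithmetic. Every number in it is an assumption chosen for illustration (disc capacity, megabytes per page image); whether the surviving corpus actually fits on one disc or a few depends entirely on what you assume about its size.

        # Back-of-envelope helper: how many page images fit on one DVD?
        # All numbers are illustrative assumptions, not facts about the surviving corpus.
        DVD_CAPACITY_MB = 4.7 * 1024   # single-layer DVD, roughly 4.7 GB

        def pages_per_dvd(mb_per_page_image: float) -> int:
            """Pages that fit on one DVD at an assumed megabytes-per-page-image."""
            return int(DVD_CAPACITY_MB / mb_per_page_image)

        for mb in (0.1, 0.5, 1.0, 5.0):
            print(f"{mb:>4} MB/page  ->  ~{pages_per_dvd(mb):,} pages per DVD")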

    Netminds will up the ante at an exponential rate, as we’re already seeing with digital production technology, blogging, etc. etc. Our increasing powers of communication pretty quickly exceed my ability to understand or imagine the consequences.

    A good example of post-capitalist production

    This analysis of the Firedoglake coverage of the Libby trial hits essentially all the issues we’ve been discussing.

    • Money was required, but it was generated by small contributions from stakeholders (the audience), targeted to this specific project.
    • A relatively small amount of money was sufficient because the organization was very lightweight and the contributors were doing it for more than just money.
    • The quality was higher than that of the conventional organizations (news media), because the FDL group was larger and more dedicated. They had a long prior engagement with this story.
    • FDL could afford to put more feet on the ground than the (much better funded) news media, because they were so cost-effective.
    • The group (both the FDL reporters and their contributors) self-organized around this topic so their structure was very well suited to the task.
    • Entrepreneurship was a big factor — both long-term organization of the site, and short-term organization of the coverage.
    • FDL, with no prior journalistic experience and no professional credentials, beat the professional media at covering a high-profile, hard-core news event.

    This example suggests that we don’t yet know the inherent limits of this post-capitalist approach to production of (at least) information goods. Most discussions of blogs vs. (traditional) news media have assumed that the costs inherent in “real reporting” meant blogs couldn’t do it effectively. The FDL example shows, among other things, that the majority of those costs (at least in this case) are due to institutional overhead that can simply be left out of the new model.

    We’re also discovering that money can easily be raised to cover specific needs, if an audience is very engaged and/or large. Note that even when raising money, the relationship remains voluntary rather than transactional — people contribute dollars without imposing any explicit obligations on their recipient. No one incurs the burden of defining and enforcing terms. In case of fraud or just disappointing performance, the “customers” will quickly withdraw from the relationship, so problems will be self-limiting.

    It is interesting to speculate about how far this approach could go. To pick an extreme example, most of the current cost of new drugs is not manufacturing (which will remain capital intensive for the foreseeable future), but rather the information goods — research, design, testing, education of providers, etc. — needed to bring drugs to market. At this point it seems impossible that these processes could be carried out in a post-capitalist way. But perhaps this is a failure of imagination.

    Rationality is only an optimization

    I’m reading a lovely little book by H. Peyton Young, Individual Strategy and Social Structure, very dense and tasty. I checked out what he had done recently, and found “Individual Learning and Social Rationality” in which, as he says, “[w]e show how high-rationality solutions can emerge in low-rationality environments provided the evolutionary process has sufficient time to unfold.”

    This reminded me of work by Duncan Foley on (what might be called) low-rationality economics, beginning with “Maximum Entropy Exchange Equilibrium” and moving to a more general treatment in “Classical thermodynamics and economic general equilibrium theory”. Foley shows that the equilibria of neoclassical economics, typically derived assuming unbounded rationality, can in fact be approximated by repeated interactions between thoughtless agents with simple constraints. These results don’t even depend on agents changing due to experience.
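
    To get a concrete feel for how far “thoughtless agents with simple constraints” can go, here is a minimal toy simulation in Python. It is my own sketch in the spirit of zero-intelligence-trader experiments, not Foley’s or Young’s actual models, and every number in it is an illustrative assumption: buyers and sellers quote random prices, constrained only so that no one trades at a loss, and the average transaction price still lands near the competitive equilibrium implied by the underlying valuations and costs.

        # Toy illustration (not Foley's or Young's model): agents bid and ask at
        # random, constrained only so that they never trade at a loss, yet the
        # transaction prices end up near the competitive equilibrium.
        import random

        random.seed(0)

        # Valuations and costs drawn uniformly on [0, 100]; by symmetry the
        # competitive equilibrium price is roughly 50.
        buyer_values = [random.uniform(0, 100) for _ in range(50)]
        seller_costs = [random.uniform(0, 100) for _ in range(50)]

        def run_market(values, costs, rounds=5000):
            """Each round, a random buyer and seller quote random prices respecting
            their own constraint (bid <= value, ask >= cost). If the bid crosses
            the ask they trade at the midpoint and leave the market."""
            values, costs = list(values), list(costs)
            prices = []
            for _ in range(rounds):
                if not values or not costs:
                    break
                b = random.randrange(len(values))
                s = random.randrange(len(costs))
                bid = random.uniform(0, values[b])    # never bid above own valuation
                ask = random.uniform(costs[s], 100)   # never ask below own cost
                if bid >= ask:
                    prices.append((bid + ask) / 2)
                    values.pop(b)
                    costs.pop(s)
            return prices

        prices = run_market(buyer_values, seller_costs)
        print(f"{len(prices)} trades, mean price {sum(prices) / len(prices):.1f}")

    No agent in this sketch maximizes anything or learns anything; the “rational” price emerges from the constraints plus repetition, which is exactly the flavor of result these papers make precise.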

    So from the careful, well grounded results by these two scholars, I’d like to take an alarmingly speculative leap: I conjecture that all rationality is an optimization, which lets us get much faster to the same place we’d end up after sufficiently extended thoughtless wandering of the right sort. This hardly makes rationality unimportant, but it does tie it to something less magical sounding.

    I like this way of thinking about rationality, because it suggests useful questions like “What thoughtless equilibrium does this rational rule summarize?” and “How much rationality do we need to get close to optimal results here?” In solving problems a little rationality is often enough; trying to add more may just produce gratuitous formality and obscurity.

    At least in economics and philosophy, rationality is often treated as a high value, sometimes even an ultimate value. If it is indeed an optimization of the path to thoughtless equilibria, it is certainly useful, but probably not worthy of such high praise. Value is found more in comparing the quality of the equilibria and understanding the conditions that produce them than in getting to them faster.

    Capital is just another factor

    Wow! Lots of people came to see Capitalists vs. Entrepreneurs, via great responses by Tim Lee, Jesse Walker, Tech and Science News Updates, and Logan Ferree (scroll down) and maybe others I didn’t see. Thanks! Reading over those posts and comments, I think perhaps the issue is simpler than I realized, although the implications certainly aren’t.

    Really we are talking about a very basic idea: Capital is just another factor in production, like labor or material resources.

    Since capital is just a factor, its importance in production will change over time. Specifically, the importance of capital is now falling. As we get richer and industry gets more productive, any given capital item gets cheaper. Things like a fast computer, a slice of network bandwidth, etc. are so cheap that any professional in a developed economy can do their own production of information goods with no outside capital.

    It seems that we’ve confused free markets with “capitalism”. This only makes sense as long as the key issue in markets is the availability of capital. From a long term perspective, naming our economic system after one factor of production is just silly.

    On the other hand, free markets depend essentially on individual judgment, choice, creativity, and on people’s ability to sustain a network of social relationships. These make free markets possible, and taken together they constitute entrepreneurship.

    So unlike capital, entrepreneurship is central to any possible free market system.

    The inevitability of peer production

    In this context, rather than being strange or subversive, or even needing to be explained, peer production is viable when:

    1. capital costs (needed for production) fall far enough and
    2. coordination costs fall far enough.

    Cheap computing and communication reduce both of these exponentially, so peer production becomes inevitable.
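
    To make the two conditions slightly more concrete, here is a minimal toy check in Python. The function name, parameters and numbers are all hypothetical, invented purely for illustration: peer production is viable once each contributor’s share of the capital cost, plus their coordination burden, falls below what they are willing to put in voluntarily.

        # Toy viability check for peer production (illustrative assumptions only).
        def peer_production_viable(capital_cost: float,
                                   coordination_cost_per_person: float,
                                   contributors: int,
                                   willingness_per_person: float) -> bool:
            """Viable if each contributor's share of capital plus their coordination
            burden is below what they will contribute voluntarily (time, money,
            spare cycles), with no capitalist aggregator needed."""
            burden = capital_cost / contributors + coordination_cost_per_person
            return burden <= willingness_per_person

        # Heavy capital and expensive coordination: not viable.
        print(peer_production_viable(5_000_000, 2_000, 500, 100))   # False

        # Cheap computing and communication shrink both costs: viable.
        print(peer_production_viable(50_000, 10, 500, 150))         # True

    The exponential trends push both cost terms toward zero, so for any fixed level of voluntary willingness the check eventually passes.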

    This was not apparent until recently, and even now is hard for many people to believe. People are still looking for an “economic justification” for peer production. “How does it help people make money?” they ask. But this confuses the means with the end. Money is a means of resource allocation and coordination. If we have other means that cost less or work better, economics dictates that we will use them instead of money.

    A digression on coordination

    Economists typically talk about “transaction costs” but I’m deliberately using the term “coordination costs”. Transactions (a la Coase) typically involve money, and certainly require at least contractual obligations. Coordination, by contrast, depends only on voluntary cooperation. Transaction costs will always be higher than coordination costs, because transactions require the ability to enforce the terms of the transaction. This imposes additional costs — often enormously larger costs.

    As I point out in “The cost of money”, introducing money into a relationship creates a floor for costs. I didn’t say it there, but it is equally true that contractual obligations introduce the same kind of cost floor. Only when a relationship is freely maintained by the parties involved, with no requirement to monitor and enforce obligations, can these costs be entirely avoided.
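
    Written down as a loose, purely illustrative inequality: a transaction carries everything a voluntary coordination relationship does, plus enforcement and the monetary floor, so

    \[ C_{\text{transaction}} \;=\; C_{\text{coordination}} + C_{\text{enforcement}} + F_{\text{money}} \;>\; C_{\text{coordination}}, \]

    where \(F_{\text{money}}\) is the floor discussed in “The cost of money” and \(C_{\text{enforcement}}\) is strictly positive whenever obligations have to be monitored and enforced.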

    Not surprisingly, peer production succeeds in domains where people can coordinate without any requirement to enforce prior obligations. Even the most limited enforcement costs typically kill it. Clay Shirky develops this argument in the specific case of Citizendium (a replacement for Wikipedia that attempts to validate the credentials of its contributors).

    A shift of perspective

    I’m only beginning to see the implications of this way of thinking about capital, but it has already brought to mind one entertaining analogy.

    In the late middle ages, feudalism was being undermined by (among other things) the rise of trade. Merchants, previously beneath notice, began to get rich enough so that they could buy clothes, furniture and houses that were comparable to those of the nobility.

    One response of the “establishment” was to institute sumptuary laws, strictly limiting the kinds of clothes, furniture, houses, etc. merchants could own. There was a period where rich merchants found ways to “hack” the laws with very expensive plain black cloth and so forth, and then the outraged nobility would try to extend the laws to prohibit the hack. Of course this attempt to hold back the tide failed.

    I think that in the current bizarre and often self-damaging excesses of copyright and patent owners, we’re seeing something very like these sumptuary laws. Once again, the organization of economic activity is changing, and those who’ve benefited from the old regime aren’t happy about that at all. They are frantically throwing up any legal barriers they can to keep out the upstarts. But once again, attempts to hold back the tide will fail.
