Of moose and men

In a post several weeks ago, One Man’s Moose, Timothy Burke discussed the social tradeoffs between regulation and respect for individual desires and needs. That post came out of a larger web discussion on game management in Vermont, and resulting conflicts with people who keep moose as pets. Timothy summarized a key point:

If you start cutting separate deals with everyone who pleads that their circumstances are special, that a legitimate attempt to safeguard the public shouldn’t apply to them, you’ll end up with a public policy that applies to no one.
I reacted strongly to that general point; I’m posting a reworked version of my comments here.

Of course Timothy’s summary is a more detailed statement of the classic bureaucratic argument, “If we let you do it, we’d have to let everyone do it”; living in a complex society we encounter this constraint on our liberty implicitly or explicitly many times a day.

The basic point is tough to dispute. But the way it typically plays out, for example in Timothy’s quote, relies on an implicit assumption about the “cognitive” limitations of bureaucracy. We assume that the bureaucrat can only use fairly simple rules based on local information. As a practical matter this has been true of bureaucrats for the last several thousand years, so this assumption has gotten deeply embedded. But maybe it isn’t true anymore.

Let’s suspend that assumption for a moment and instead use the typical (crazy) assumptions of micro-economic models. Suppose all the bureaucrats enforcing a given policy (game wardens, medical referral reviewers, etc.) knew everything relevant to their decisions, including all the issues being considered by similar bureaucrats, and could see the implications of every choice.

In this case, every bureaucrat could cut deals tailored to the individual circumstances of each moose owner, land owner, hospital, sick person, etc. while still preserving the effects of the global policy. Some otherwise unhappy citizens could be bought off with voluntary transfers (of money, services, alternative services, etc.) from others. Quite likely (but not necessarily) some people would remain dissatisfied, but surely far fewer. In addition, everyone could see that the decisions were closely tailored to circumstances, and if they viewed a range of decisions, they could probably see that it would be hard (by hypothesis, actually impossible) to improve on the local tradeoffs. So they’d be more inclined to accept their deals as “the best we all could do”.
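To make the thought experiment slightly more concrete, here is a deliberately crude sketch in Python. Everything in it is invented (the names, the valuations, the cap, and the transfer rule); it only illustrates how individually tailored deals plus voluntary transfers could coexist with a fixed global constraint, given the omniscient-bureaucrat assumptions above.

```python
# A toy sketch, not a proposal: hypothetical owners, made-up valuations, and a
# crude "global cap plus compensating transfers" rule.

# assumed valuations: how much each owner values keeping their moose
valuations = {"Alva": 900, "Bea": 400, "Cyrus": 250, "Dag": 700, "Eli": 100}
global_cap = 2  # the policy allows only two kept moose in this district

ranked = sorted(valuations, key=valuations.get, reverse=True)
winners, losers = ranked[:global_cap], ranked[global_cap:]

# each permit holder pays a fee equal to the highest refused valuation;
# the pool of fees is split among the refused owners as compensation
fee = valuations[losers[0]]
pool = fee * len(winners)
compensation = pool / len(losers)

print("permits granted to:", winners)
print(f"each permit holder pays {fee}; each refused owner receives {compensation:.0f}")
```

Even in this cartoon some dissatisfaction remains (Bea values her moose above what she receives), which is the point: tailored deals shrink the unhappiness rather than abolish it.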

We know these assumptions are crazy. But they are crazy in exactly the same way as standard micro-economic models. To prove that markets “work”, the bodega operator on the corner is presumed to know everything relevant to his business and to fully calculate the effects of all his choices; we are just extending this generous assumption to bureaucrats. We now have the power to enrich our decision making with almost unlimited amounts of information, real time social networking, computation, etc. So maybe the micro-economic assumptions aren’t as crazy as they used to be.

Thinking in these terms helps us see why a bureaucratic process, as Timothy says, “seems impoverished and cold compared to the vivid individuality of real people and real circumstances.” The problem isn’t mainly policy goals or the attempt to impose rational constraints on the situation. The problem is our (circumstantial) limits in matching up local variation with the demands of our global goal. This is not an in-principle problem of public vs. private, large vs. small etc. Instead it is basically a problem of data and computation, and perhaps techniques to prevent gaming.

Intractability

Conceivably we might run into in-principle problems of computational intractability, but that would need to be demonstrated and would be an interesting result. There are such intractability results for general micro-economic models, but it isn’t clear they apply to the much more limited cases of trying to manage moose, etc. Even if an exactly optimal outcome is intractable, likely we could find a tractable close approximation. If anyone cares I can dig up references; ask in comments or email me.

Let’s compare this with a “free market” story, say about game management. In that story everyone could own their own moose, but we’d make sure they internalized all their externalities; if moose were giving each other diseases, we’d allocate the costs of diseases to the original sick moose, and so forth. This market story depends on just as many unrealistic assumptions about people, knowledge, calculation, etc. as “optimal bureaucracy” applied to game management — in fact it is arguably even less realistic, because we have to price the externalities correctly, impose the prices, and moose owners have to anticipate and respond to the possible costs correctly.

So why do we like markets? To some extent they solve the information and calculation problem by aggregating choices into prices.

When do markets work?

The proof that free markets are optimal actually cheats by assuming every market participant knows everything and can calculate everything anyway! In many cases prices do usefully aggregate information and simplify calculation, but I don’t know of a strong analysis of where (and how) they actually work and where they don’t.

More generally, though, markets create incentives for participants to locally optimize using abundant, cheap local data, and they aggregate those local optimizations (through prices) in ways that approximate a global optimum. (Of course often they totally screw things up in new ways, typically by incenting participants to pursue socially dysfunctional goals, some of which also systematically distort the social process to even more strongly favor dysfunctional ends. See ponzi schemes, patent medicines, marketing new drugs that are less effective than the old ones, lobbying and regulatory capture.)

Happily we’re coming to understand how to do this sort of local optimization and aggregation without ownership and exchange. We all locally optimize and aggregate the idiolects of our language, our clothing styles, music choices, etc. The open source community has figured out how to locally optimize and aggregate software design and construction, and so forth. The web makes all of this easier and faster.

Economic theory has focused on the exchange case, but markets are obviously derivative from the more general case. After all, markets arise from stable social arrangements, not the other way around, and these arrangements are stable because they have found local optima. In many ways exchange creates problems; for example, it creates opportunities to use bribes in one form or another.

Given this analysis, how might we improve matters?

How to get better at bureaucracy

Historically we’ve found that large-scale organizations, and the setting and enforcement of public policy, get us into these bureaucratic quandaries; but scale and public policy are unavoidable, and we tend to figure we can’t do any better. If we realize that the problem has been process limitations, and that now we can do better, we should devote more effort to process engineering. A better process would pull in more information and cognitive resources from the affected citizens and would organize their activities with constraints and incentives so they approximate the intended policy. We don’t (yet) have a good engineering approach to building and managing processes like this, but surely we can improve current processes if we put our minds to it. One demonstration of the potential for improvement is the enormous differences in the effectiveness of complex organizations like hospitals — organizations which deliberately evolve their processes, monitoring and incorporating experience over time, can improve by orders of magnitude relative to those that don’t.

Comment from T. Burke

At this point I was very happy to get a comment from Timothy indicating we were in sync:

I am really finding this a useful and thought-provoking way to circle back around the problem and come at it from some new angles. Thinking about open source as a generalized strategy or at least an insight to possible escapes from the public/private national/local is very stimulating. There’s something here about abandoning the kind of mastery and universalism that liberalism seems too attached to, while not abandoning a way of aggregating knowledge towards shared best practices (which include ethical/moral/social dispensations, not just technical ones).

Maybe here it would help to think about why we keep getting stuck in this cul-de-sac. Bureaucracy is a highly evolved set of practices that maybe started in Fertile Crescent farm-products management around 3,000 BCE.

Correction by G. Weaire

Thanks to G. Weaire whose comment, in addition to raising fascinating issues, very gently corrected my overstatement of this period by 2,000 years.

We’ve had plenty of time to figure out how to do things better but I can’t think of any historical societies that really got out of this bind. Even if some did, we have to grapple with why bureaucracy in basically all cultures today generates similar problems — of course with variations in corruption, efficiency, etc.

The model for effective bureaucracy should perhaps be our other successful distributed negotiations. As I mentioned, we’re very good at “negotiating” changes in our language, social and cultural conventions, background assumptions, etc. etc. We’re so good at this that most of our negotiation is implicit and even unconscious.

Is there theory?

Elinor Ostrom analyzes the stable results of this sort of negotiation (as do Coase and others). But do we have any good models of the negotiation process itself? G. Weaire in his comment suggested “the sociolinguistics of politeness, esp. the still (I think) leading Brown-Levinson model. This tradition of inquiry is more-or-less entirely about trying to formalize an understanding of this sort of process at the level of conversational interaction.” He also mentioned “Michael Gagarin, Writing Greek Law… with its focus on highly formal public processes that aren’t bureaucratic but aren’t quite the village consensus either.” Luc Steels has simulated the negotiation of simple vocabularies during language formation…

I believe these distributed negotiations are responsible for generating, shaping and maintaining essentially all of our institutions — replicated patterns of interaction — and thus our apparently stable social environment.

So if we’re so good at this, why can’t we negotiate the enforcement of policy in the same way? I guess the main reason is that our negotiations operate in “consensus time” but bureaucratic processes have to operate in “transaction time”, and also need to maintain more detailed, reliable information than social memory typically does. When a farmer in Ur put grain into storage he needed a receipt right then, not when the village discussion could get around to it, and he needed a detailed, stable record, not whatever the members of the village could remember a few weeks or months later. So we got clerks making marks on a tablet; the rest is history.

Could it really scale?

G. Weaire commented that “the modern state has so much greater a bureaucratic capacity than any predecessor that it’s a difference of degree that adds up to a difference of kind, and that speaking of [5,000] years of bureaucracy maybe isn’t a helpful frame of reference.”

He is right: a difference in scale of maybe six decimal orders of magnitude (from perhaps 100 clerks in a city of several tens of thousands to a hundred million or even a billion bureaucrats of various flavors) is certainly a difference in kind.

However I think some important characteristics can persist even across such a great change. My own analogy here would be Turing’s original abstract machine compared with the one I’m using to write this. I’m sure the differences in performance, storage capacity, etc. are at least as great. And Turing couldn’t anticipate huge differences in kind, such as the web (and its social consequences), open source, the conceptual problems of large-scale software, etc. However, even today everyone who works with computers must, to a considerable extent, learn to think the way Turing did.

Similarly, the work of the clerk depended on social formations of fungibility of goods, identity of persons, standards of quantity and quality, etc. which are still the foundations of bureaucratic policy.

So while it would be wrong to ignore this difference of kind, at the same time, I think there are important constraints that have stayed immutable from Ur until recently.

I believe the limits on implementing a complex, widely distributed negotiation at transaction speed are mostly cognitive — humans just can’t learn quickly enough, keep enough in mind, make complex enough judgments, etc. As long as the process has to go through human minds at each step, and still has to run at transaction speed, bureaucracy (public or “private” — think of your favorite negotiation with a corporate behemoth) is the best we can do (sigh); we’re pretty much stuck with the tradeoff Timothy is talking about, and thus the perennial struggles.

On the other hand, open peer production — open source, Wikipedia, etc. — seems to have partially gotten out of this bind by keeping the state of the negotiation mostly in the web, rather than in the participants’ heads.

For example, on the web people negotiate largely through versioned (and often branching) repositories. These repositories can simultaneously contain all the alternatives in contention and make them easy to mutate, merge and experiment with. This option isn’t directly available to us for moose management

check ‘em in!

(though I enjoy the thought of checking all the moose, their owners, and the game management bureaucracy into git, and then instantiating modified versions of the whole lot on multiple branches)

but examples like this suggest what may be possible going forward.
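For readers who haven’t lived inside such repositories, here is a toy sketch (in Python, with invented field names and a deliberately naive merge rule) of the basic moves: keep every case as a versioned document, let alternative proposals live on branches, and merge whichever one is adopted, while the rejected alternatives stay on the record.

```python
import copy

class CaseRepo:
    """A toy branching store: each branch holds a full history of versions."""

    def __init__(self, initial):
        self.branches = {"main": [copy.deepcopy(initial)]}

    def branch(self, name, from_branch="main"):
        # start a new line of proposals from the current state of from_branch
        self.branches[name] = list(self.branches[from_branch])

    def commit(self, branch, changes):
        new = copy.deepcopy(self.branches[branch][-1])
        new.update(changes)
        self.branches[branch].append(new)

    def merge(self, src, dst="main"):
        # naive "source wins" merge; real systems reconcile conflicts explicitly
        merged = copy.deepcopy(self.branches[dst][-1])
        merged.update(self.branches[src][-1])
        self.branches[dst].append(merged)

repo = CaseRepo({"owner": "A. Smith", "moose": 1, "permit": "pending"})
repo.branch("warden-proposal")
repo.commit("warden-proposal", {"permit": "denied"})
repo.branch("owner-proposal")
repo.commit("owner-proposal", {"permit": "granted", "conditions": "annual vet check"})
repo.merge("owner-proposal")

print(repo.branches["main"][-1])                 # the deal that was adopted
print(repo.branches["warden-proposal"][-1])      # the rejected alternative is still on record
```

The point of the sketch is only that all the alternatives, and the full history, remain simultaneously available for inspection, mutation and experiment.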

The web also helps to make rapid distributed negotiation work through extreme transparency. Generally all the consequential interactions are on the public record as soon as they occur (in repositories or email archives). All the history is archived in public essentially forever, so is always available as a resource for analysis or bolstering or attacking a position. This has good effects on incentives, and also on the evolution of discourse norms.

Evils of opacity

The current financial system is pretty far from this, and is working hard to stay far away, by keeping transactions off exchanges, creating opaque securities, etc. As investigation proceeds, it seems more and more likely that the financial crisis would not have occurred if most transactions had been visible to other participants.

We are in the process of generating transparency for a lot of existing bureaucratic processes and it probably can and should be made a universal norm for all of them (including game management). Note that simply having public records is not nearly enough — the records need to be on line, accessible without fees, and in a format consistent enough to be searchable. Then open content processes will tend to generate transparency for the process as a whole. There’s still a lot of contention around electronically accessible records — existing interests have thrown up all kinds of obstacles, including trade secrets (e.g. testing voting machines), copyright (e.g. building codes and legal records), refusal to convert to electronic form (e.g. legislative calendars), fees for access, etc. etc. But these excuses usually seem pretty absurd when made explicit, and they are gradually being ground down. Electronic transparency isn’t yet a social norm, but we seem to be slouching in that direction.

My guess is that if we simply made any given bureaucratic process visible to all the participants through the web, it would evolve fairly quickly toward a much more flexible distributed negotiation. This would be fairly easy to try, technically: just put all the current cases, including correspondence etc., on a MediaWiki site, and keep them there as history when they are decided. The politics, privacy issues, etc. would be a lot more thorny. But it seems like an experiment worth trying.

Open peer production also works because the payoffs for manipulating the system are generally very low. No one owns the content, and there’s no way for contributors to appropriate a significant share of the social benefits. There have been a few semi-successful cases where commercial enterprises manipulated open processes, such as the Rambus patent scam (essentially Rambus successfully promoted inclusion of ideas in standards, and only afterward revealed it had applicable patents). But these cases are rare and so far the relevant community has always been able to amend its practices fairly quickly and easily to prevent similar problems in the future.

I’m much less clear how we can reduce the payoffs for manipulating social processes. In many cases (such as game management) payoffs are probably already pretty low. But in many important areas like finance and health care they are huge. My guess is that there are ways to restructure our institutions of ownership and control to improve matters but this will be a multi-decade struggle.

Economic theory and policy failure

Recently we have seen repeated, massive failures of major policies focused on economic issues: for example, the development policies advocated for countries in South America and Africa, the policies promoted by Western advisors during the conversion of the Soviet economies from communist to market-oriented, and the regulatory and risk-management policies responsible for the still-unfolding world-wide credit imbroglio.

These policy failures have imposed immense suffering, excess death, and social and economic costs on their target populations. Furthermore, all of these policies were promoted as dictated by economic theory—and promoted not only by political snake oil salesmen, but by cadres of distinguished economic experts. Economic theory probably contributed significantly to the adoption of these policies by supporting passionate intensity, convincing rhetoric and intimidating mathematics. Complete lack of a theory for analyzing these policies, while far from ideal, would probably have saved many lives and much treasure.

What was wrong? Economic theory in these cases actively suppressed consideration of institutional dynamics.

These were not special cases. Classical economic theory is pervasively hostile to institutional dynamics:

  • It assumes a constant (and adequate) institutional backdrop. It assumes that transactions settle, contracts are enforced, theft is more costly than honest purchase, etc. It assumes this backdrop is not subject to manipulation by the market participants. Thus institutional dynamics are simply impossible in classical economic models.
  • It assumes transactions are evaluated in terms of one or a few scalar values — money, “utility”, etc. Basing all decisions on a scalar metric eliminates any “fine structure”, and participants in real transactions depend on this “fine structure” to negotiate their expectations. As a result, theoretical economic transactions lack any means to engage with institutional dynamics, although real economic transactions, of course, play a major role in institutional dynamics.

These moves are justified as “idealizations” that make economic models tractable. However as with metaphors, idealizations must fit the task at hand or they are worse than useless. The policy failures mentioned above, as well as a wide range of other examples, indicate that the idealizations of classical economic theory are a very bad fit to many of the social policy tasks where economic models are typically invoked as justifications.

Specifically, the effects of most public policy decisions depend critically on institutional issues — the institutions that will implement them, the institutions they will strengthen or weaken, and usually the institutional changes they will cause, intentionally or unintentionally.

Economics aspires to judge the implications of a wide range of public policy proposals. But because it clings to a theoretical framework that idealizes away institutional dynamics, it actually cannot address critical aspects of any policy that is substantially affected by institutional issues—and that is most policies. Worse, economic models tend to distract from or entirely suppress deep analysis of institutional dynamics, so they actively damage policy discussions.

One possible option for those who wish to preserve classical economic theory would be to restrict its application to cases where its idealizations do fit well enough to be useful. Certainly there are such cases. To exclude such domains as our modern financial systems from the purview of economic theory seems perverse, yet this would be an inevitable consequence of limiting current theory to domains where it works “well enough”.

In practice institutional issues can’t be ignored in the vast majority of policy decisions. As a result, policy discussions tend to be defined in terms of economic “stories” about policy effects that are not theoretically valid, but that gain credence by invoking the impressive theoretical framework of classical economics. Actual arguments for or against a policy then resort to ad hoc models of institutional dynamics, retrofitted rhetorically and pragmatically to these economic “stories”. The dominance of the economic “story” obscures the ad hoc nature of the argument, avoids deep engagement with the models of institutional dynamics, and keeps the institutional analysis shallow and weak.

Rhetorically effective but obviously mendacious examples of this are the policy arguments for “supply side economics”, school vouchers, increased “personal responsibility” in medical care, and unrestricted free trade. The arguments that led to disasters in recent development policy and financial market policy are more sophisticated but equally hollow.

As mentioned above, the absence of a dominant theory would be less harmful in these cases than the current use of economic theory. Unfortunately even the worst theories can only be displaced by other theories, so simply expelling classical economics from policy debates is not realistic. On the other hand, if we can find a theory that allows us to integrate and give proper weight to pragmatic arguments about institutional dynamics, and helps us develop stronger models as we go forward, then we have a reasonable chance to improve policy discussion.

Happily, finding a theory that allows for institutional dynamics and still can match classical economics in its areas of strength is a realistic goal. We now have theoretical resources that allow us to subsume the results of classical economics models, and that can also be gracefully extended to model institutional dynamics. This is a strong claim, but it is justified by examples like Duncan Foley’s work in economics, and H. Peyton Young’s work in institutional dynamics.

Long time no see

I hope to be around more often, at least for a while.

Response to Vassar on thoughtless equilibrium

Michael Vassar writes in response to my post on rationality as an optimization for equilibriums that emerge from thoughtless wandering:

So, how should we compare the equilibriums in question if not rationally?

I can’t tell whether the statement “that all rationality is an optimization, which lets us get much faster to the same place we’d end up after sufficiently extended thoughtless wandering of the right sort.” is trivial or trivially wrong, which is probably a bad sign. The statement invokes clear opinions about math, evolution, and computer science, but verbalizing them seems neither easy nor necessary. At any event, one of the major classes of findings that interest me in financial economics are those which refute the idea that a mix of rational and irrational agents necessarily produce an efficient equilibrium, such as
this paper.

The standard neo-classical proofs that a market will produce optimal equilibria require assumptions of unbounded, costless computational power, omniscience, etc. Arrow and Debreu got their Nobel prizes largely for those proofs. Presumably they would have preferred to use weaker, more realistic assumptions but didn’t see how. So proving that we can approximate those equilibria with much weaker thoughtless wandering seems far from trivial.

I’m not sure why Michael thinks this idea could be trivially wrong — the papers I reference seem pretty conclusive. Perhaps the phrase “thoughtless wandering” is too informal, but the papers show that these equilibria can be approximated by populations of entities that have no ability to anticipate consequences or plan, so they’re pretty thoughtless.

Certainly we can always come up with thoughtless wandering of the wrong sort, which will lead to equilibria that don’t optimize the function we want, or perhaps to systems that don’t converge to any equilibrium at all. But this is actually one of the big advantages of viewing rationality as an optimization of thoughtless wandering. It lets us ask specifically what sorts of thoughtless wandering do and don’t approximate the equilibria that we find valuable, or conversely, what interesting failure modes arise in a given class of thoughtless wandering.

The paper by DeLong et al. on noise traders that Michael references is a good example of the kind of insights we can gain by stepping back from rationality. Analyzing a simple stochastic regime, the authors show that even in competition with rational traders, noise traders can capture a significant share of wealth in a market, at the cost of most of them going bankrupt. In effect, the small fraction of lucky survivors have been so lucky that they got very, very rich.
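A minimal Monte Carlo sketch (this is not the DeLong et al. model; the bet sizes, odds and bankruptcy threshold are all invented) reproduces the lucky-survivor effect directly: traders making large random bets mostly go broke, while a handful compound into extreme wealth.

```python
import random

random.seed(0)
N_TRADERS, N_PERIODS = 10_000, 50

wealths = []
for _ in range(N_TRADERS):
    w = 1.0
    for _ in range(N_PERIODS):
        # a noisy bet: gain 80% or lose 60% of current wealth with equal odds
        w *= 1.8 if random.random() < 0.5 else 0.4
        if w < 0.01:          # treat this as bankruptcy; the trader drops out
            w = 0.0
            break
    wealths.append(w)

survivors = [w for w in wealths if w > 0]
print(f"bankrupt: {N_TRADERS - len(survivors)} of {N_TRADERS}")
print(f"richest survivor: {max(wealths):,.0f}x starting wealth")
print(f"mean wealth across all traders: {sum(wealths) / N_TRADERS:.2f}x")
```

In this sketch the average return per bet is positive (1.1x) while the expected log return per bet is negative, so the typical trader shrinks toward bankruptcy even as the aggregate wealth concentrates in a few lucky survivors.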

However the assumptions the authors have to make indicate the difficulties of this enterprise. Specifically, to make their analysis tractable, the authors assume that these very wealthy noise traders have no effect on prices, even though they dominate the market (!). So we don’t know what noise traders would actually do to the equilibrium. Unfortunately, analyzing this kind of stochastic system is hard — but it is very worthwhile.

Finally, Michael’s question of how we should compare equilibria isn’t answered by any concept of optimality — rational, stochastic, evolutionary, or otherwise. To optimize we always have to specify an objective function, and the objective function is exogenous — it comes from somewhere outside the optimization process itself. Typically in economics the objective function is the (weighted) vector of utilities of all the consumers, for example. Economics doesn’t have any intrinsic way to say that consumers have “irrational” utilities.

Objective functions may be subject to critiques based on internal inconsistencies, observations that other “nearby” objective functions lead to much higher optima on some dimensions, etc. I conjecture that generally these critiques can be understood in the “thoughtless wandering” perspective in terms of the dynamics of the system — it may fail to converge at all if an objective function is inconsistent, for example. Also, while “rationality intensive” neoclassical economic equilibria are very fragile — they don’t hold up under perturbation — the “thoughtless wandering” approximations are much more robust since they are stochastic to begin with, so they are less likely to produce bad results due to small problems with initial conditions.

Bubbles of humanity in a post-human world

Austin Henderson had some further points in his comment on Dancing toward the singularity that I wanted to discuss. He was replying to my remarks on a social phase-change toward the end of the post. I’ll quote the relevant bits of my post, substituting my later term “netminds” for the term I was using then, “hybrid systems”:

If we put a pot of water on the stove and turn on the heat, for a while all the water heats up, but not uniformly–we get all sorts of inhomogeneity and interesting dynamics. At some point, local phase transitions occur–little bubbles of water vapor start forming and then collapsing. As the water continues to heat up, the bubbles become more persistent, until we’ve reached a rolling boil. After a while, all the water has turned into vapor, and there’s no more liquid in the pot.

We’re now at the point where bubbles of netminds (such as “gelled” development teams) can form, but they aren’t all that stable or powerful yet, and so they aren’t dramatically different from their social environment. Their phase boundary isn’t very sharp.

As we go forward and these bubbles get easier to form, more powerful and more stable, the overall social environment will be increasingly roiled up by their activities. As the bubbles merge to form a large network of netminds, the contrast between people who are part of netminds and normal people will become starker.

Unlike the pot that boils dry, I’d expect the two phases–normal people and netminds–to come to an approximate equilibrium, in which parts of the population choose to stay normal indefinitely. The Amish today are a good example of how a group can make that choice. Note that members of both populations will cross the phase boundary, just as water molecules are constantly in flux across phase boundaries. Amish children are expected to go out and explore the larger culture, and decide whether to return. I presume that in some cases, members of the outside culture also decide to join the Amish, perhaps through marriage.

After I wrote this I encountered happiness studies that show the Amish are much happier and dramatically less frequently depressed than mainstream US citizens. I think it’s very likely that the people who reject netminds and stick with GOFH (good old fashioned humanity) may similarly be much happier than people who become part of netminds (on the average).

It isn’t too hard to imagine why this might be. The Amish very deliberately tailor their culture to work for them, selectively adopting modern innovations and tying them into their social practices in specific ways designed to maintain their quality of life. Similarly, GOFH will have the opportunity to tailor its culture and technical environment in the same way, perhaps with the assistance of friendly netminds that can see deeper implications than the members of GOFH.

I’m inclined to believe that I too would be happier in a “tailored” culture. Nonetheless, I’m not planning to become Amish, and I probably will merge into a netmind if a good opportunity arises. I guess my own happiness just isn’t my primary value.

[A]s the singularity approaches, the “veil” between us and the future will become more opaque for normal people, and at the same time will shift from a “time-like” to a “space-like” boundary. In other words, the singularity currently falls between our present and our future, but will increasingly fall between normal humans and netminds living at the same time. Netminds will be able to “see into” normal human communities–in fact they’ll be able to understand them far more accurately than we can now understand ourselves–but normal humans will find hybrid communities opaque. Of course polite netminds will present a quasi-normal surface to normal humans except in times of great stress.

By analogy with other kinds of phase changes, the distance we can see into the future will shrink as we go through the transition, but once we start to move toward a new equilibrium, our horizons will expand again, and we (that is netminds) may even be able to see much further ahead than we can today. Even normal people may be able to see further ahead (within their bubbles), as long as the equilibrium is stable. The Amish can see further ahead in their own world than we can in ours, because they have decided that their way of life will change slowly.

Austin raises a number of issues with my description of this phase change. His first question is why we should regard the population of netminds as (more or less) homogeneous:

All water boils the same way, so that when bubbles coalesce they are coherent. Will bubbles of [netmind] attempt to merge, maybe that will take more work than their hybrid excess capability provides, so they will expend all their advantage trying to coalesce so that they can make use of that advantage. Maybe it will be self-limiting: the “coherence factor” — you have to prevent it from riding off at high speed in all directions.

Our current experience with networked systems indicates there’s a messy dynamic balance. Network effects generate a lot of force toward convergence or subsumption, since the bigger nexus tends to outperform the smaller one even if it is not technically as good. (Here I’m talking about nexi of interoperability, so they are conceptual or conventional, not physical — e.g. standards.)

Certainly the complexity of any given standard can get overwhelming. Standards that try to include everything break down or just get too complex to implement. Thus there’s a tendency for standards to fission and modularize. This is a good evolutionary argument for why we see compositionality in any general purpose communication medium, such as human language.

When a standard breaks into pieces, or when competing standards emerge, or when standards originally developed in different areas start interacting, if the pieces don’t work together, that causes a lot of distress and gets fixed one way or another. So the network effects still dominate, through making pieces interact gracefully. Multiple interacting standards ultimately get adjusted so that they are modular parts of a bigger system, if they all continue to be viable.

As for riding off in all directions, I just came across an interesting map of science. In a discussion of the map, a commenter makes just the point I made in another blog post: real scientific work is all connected, while pseudo-science goes off into little encapsulated belief systems.

I think that science stays connected because each piece progresses much faster when it trades across its boundaries. If a piece can’t or won’t connect for some reason it falls behind. The same phenomenon occurs in international trade and cultural exchange. So probably some netminds will encapsulate themselves, and others will ride off in some direction far enough so they can’t easily maintain communication with the mainstream. But those moves will tend to be self-limiting, as the relatively isolated netminds fall behind the mainstream and become too backward to have any power or influence.

None of this actually implies that netminds will be homogeneous, any more than current scientific disciplines are homogeneous. They will have different internal languages, different norms, different cultures, they will think different things are funny or disturbing, etc. But they’ll all be able to communicate effectively and “trade” questions and ideas with each other.

Austin’s next question is closely related to this first one:

Why is there only one phase change? Why wouldn’t the first set of [netminds] be quickly passed by the next, etc. Just like the generation gap…? Maybe, as it appears to me in evolution in language (read McWhorter, “The Word on the Street” for the facts), the speed of drift is just matched by our length of life, and the bridging capability of intervening generations; same thing in space, bridging capability across intervening African dialects in a string of tribes matches the ability to travel. Again, maybe mechanisms of drift will limit the capacity for change.

Here I want to think of phase changes as occurring along a spectrum of different scales. For example, in liquid water, structured patterns of water molecules form around polar parts of protein molecules. These patterns have boundaries and change the chemical properties of the water inside them. So perhaps we should regard these patterns as “micro-phases”, much smaller and less robust than the “macro-phases” of solid, liquid and gas.

Given this spectrum, I’m definitely talking about a “macro-phase” transition, one that is so massive that it is extremely rare in history. I’d compare the change we’re going through to the evolution of the genetic mechanisms that support multi-cellular differentiation, and to the evolution of general purpose language supporting culture that could accumulate across generations. The exponential increases in the power of digital systems will have as big an impact as these did. So, yes, there will be more phase changes, but even if they are coming exponentially closer, the next one of this magnitude is still quite some time away:

  • Cambrian explosion, 500 million years ago
  • General language, 500 thousand years ago
  • Human / digital hybrids (netminds), now
  • Next phase change, 500 years from now?

Change vs. coherence is an interesting issue. We need to distinguish between drift (which is fairly continuous) and phase changes (which are quite discontinuous).

We have a hard time understanding Medieval English, as much because of cultural drift as because of linguistic drift. The result of drift isn’t that we get multiple phases co-existing (with rare exceptions), but that we get opaque history. In our context this means that after a few decades, netminds will have a hard time understanding the records left by earlier netminds. This is already happening as our ability to read old digital media deteriorates, due to loss of physical and format compatibility.

I imagine it would (almost) always be possible to go back and recover an understanding of historical records, if some netmind is motivated to put enough effort into the task — just as we can generally read old computer tapes, if we want to work hard enough. But it would be harder for them than for us, because of the sheer volume of data and computation that holds everything together at any given time. Our coherence is very very thin by comparison.

For example the “thickness” of long term cultural transmission in western civilization can be measured in some sense by the manuscripts that survived from Rome and Greece and Israel at the invention of printing. I’m pretty sure that all of those manuscripts would fit on one (or at most a few) DVDs as high resolution images. To be sure these manuscripts are a much more distilled vehicle of cultural transmission than (say) the latest Tom Cruise DVD, but at some point the sheer magnitude of cultural production overwhelms this issue.

Netminds will up the ante at an exponential rate, as we’re already seeing with digital production technology, blogging, etc. etc. Our increasing powers of communication pretty quickly exceed my ability to understand or imagine the consequences.

Who’s in charge here?

In a very useful post, Jonah Lehrer wonders:

…if banal terms like “executive control” or “top-down processing” or “attentional modulation” hide the strangeness of the data. Some entity inside our brain, some network of neurons buried behind our forehead, acts like a little petit tyrant, and is able to manipulate the activity of our sensory neurons. By doing so, this cellular network decides, in part, what we see. But who controls the network?

I posted a comment on Jonah’s blog but it took so long to get approved that probably no one will see it. So I’m posting an enhanced version here.

Jonah’s final sentence, “But who controls the network?” illustrates to me the main obstacle to a sensible view of human thought, identity, and self-regulation.

We don’t ask the same question about the network that controls our heart rate. It is a fairly well defined, important function, tied to many other aspects of our mental state, but it is an obviously self-regulating network. It has evolved to manage its own fairly complex functions in ways that support the survival of the organism.

So why ask the question “Who controls it?” about attentional modulation? We know this network can be self-controlling. There are subjectively strange but fairly common pathologies of attentional modulation (such as hemi-neglect where we even understand some of the network behavior) that are directly traceable to brain damage, and that reveal aspects of the network’s self-management. We can measure the way attention degrades when overloaded through various cognitive tasks. Etc. etc. There’s nothing fundamentally mysterious or challenging to our current theoretical frameworks or research techniques.

Yet many people seem to have a cognitive glitch here, akin to the feeling people had on first hearing that the earth was round, “But then we’ll fall off!” Our intuitive self-awareness doesn’t stretch naturally to cover our scientific discoveries. As Jerry Fodor says “there had… better be somebody who is in charge; and, by God, it had better be me.”

I’ve written some posts (1, 2) specifically on why this glitch occurs but I think it will take a long time for our intuitive sense of our selves to catch up with what we already know.

And I guess I ought to write the post I promised back last April. I’ll call it “Revisiting ego and enforcement costs”. Happily it seems even more interesting now than it did then, and it ties together the philosophy of mind themes with some of my thinking on economics.

Social fixed points

Austin Henderson in his comment on Dancing toward the singularity starts by remarking on an issue that often troubles people when dealing with reflexive (or reflective) systems:

On UI, when the machine starts modeling us then we have to incorporate that modeling into our usage of it. Which leads to “I think that you think that ….”. Which is broken by popping reflective and talking about the talk. Maybe concurrently with continuing to work. In fact that may [be] the usual case: reflecting *while* you are working.

We need the UIs to support this complexity. You talk about the ability to “support rapid evolution of conventions between the machine and the human,”. …. As for the “largely without conscious human choice” caveat, I think that addresses the other way out of the thinking about thinking infinite regress: practice, practice, practice.

I think our systems need to be reflexive. Certainly our social systems need to be reflective. But then what about the infinite regress that concerns Austin?

There are many specific tricks, but really they all boil down to the same trick: take the fixed point. Fixed points make recursive formal systems, such as lambda calculus, work. They let us find stable structure in dynamic systems. They are great.

Fixed points are easy to describe, but sometimes hard to understand. The basic idea is that you know a system is at a fixed point when you apply a transformation f to the system, and nothing happens. If the state of the system is x, then at the fixed point, f(x) = x — nothing changes. If the system isn’t at a fixed point, then f(x) = x’ — when you apply f to x, you “move” the system to x’.

A given system may have a unique fixed point — for example, well behaved expressions in the lambda calculus have a unique least fixed point. Or a system may have many fixed points, in which case it will get stuck at the one it gets to first. Or it may have no fixed points, in which case it just keeps changing each time you apply f.
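As a minimal illustration, here is the “apply f until nothing changes” loop in Python; the particular f is just a familiar contraction (Newton’s iteration for the square root of 2), chosen because it converges quickly.

```python
def find_fixed_point(f, x, tol=1e-9, max_iter=1000):
    """Apply f repeatedly; stop when f(x) is (numerically) equal to x."""
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:   # f(x) == x, up to tolerance: a fixed point
            return x_next
        x = x_next
    raise RuntimeError("no fixed point reached within max_iter steps")

f = lambda x: 0.5 * (x + 2 / x)     # f(x) = x exactly when x * x = 2
print(find_fixed_point(f, 1.0))     # converges to ~1.41421356 (the square root of 2)
```

A system with no fixed point would simply exhaust max_iter here, and a system with several would return whichever one the starting x happens to converge to.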

Now suppose we have a reflective system. Let’s say we’re modeling a computer system we’re using (as we must to understand it). Let’s also say that at the same time, the system is modeling us, with the goal of (for example) showing us what we want to see at each point. We’d like our behavior and the system’s behavior to converge to a fixed point, where our models don’t change any more — which is to say, we understand each other. If we never reached a fixed point, we’d find it very inconvenient — the system’s behavior would keep changing, and we’d have to keep “chasing” it. This sort of inconvenience does arise, for example, in lists that try to keep your recent choices near the top.

Actually, of course, we probably won’t reach a truly fixed point, just a “quiescent” point that changes much more slowly than it did in the initial learning phase. As we learn new aspects of the system, as our needs change, and perhaps even as the system accumulates a lot more information about us, our respective models will adjust relatively slowly. I don’t know if there is a correct formal name for this sort of slowly changing point.

People model each other in interactions, and we can see people finding fixed points of comfortable interaction, that drift and occasionally change suddenly when they discover some commonality or difference. People can also get locked into very unpleasant fixed points with each other. This might be a good way to think about the sort of pathologies that Ronald Laing called “knots”.

Fixed points are needed within modeling systems, as well as between them. The statistical modeling folks have found fairly recently (say in the last ten years) that many models containing loops, which they previously thought were intractable, are perfectly well behaved with the right analysis — they provably converge to the (right) fixed points. This sort of reliably convergent feedback is essential in lots of reasoning paradigms, including the iterative decoding algorithms for the error-correcting codes that come closest to the Shannon bound on channel capacity.

Unfortunately we typically aren’t taught to analyze systems in terms of this sort of dynamics, and we don’t have good techniques for designing reflexive systems — for example, UIs that model the user and converge on stable, but not excessively stable fixed points. If I’m right that we’re entering an era where our systems will model everything they interact with, including us, we’d better get used to reflexive systems and start working on those design ideas.

Information technology and social change

I’ve recently written up some strategic thoughts for a university (which shall remain nameless) and will post them here, since they develop some themes that I’ve discussed in other posts.

Information technology driving social change

Our information environment is rapidly being transformed by digital systems. Today’s students will work most of their lives in a world transformed by digital information. Their success will depend to a large extent on how well they cope with, understand and anticipate the social and institutional consequences of these technology trends.

The technical trend is that the cost of storing, transmitting and processing digital information has been declining exponentially for decades, and will continue to decline at more or less the same rate for decades. This creates immense pressure for economic and social changes unprecedented in history.
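The arithmetic behind “declining exponentially” is worth spelling out. Assuming, purely for illustration, that these costs halve every two years:

```python
halving_period_years = 2.0   # assumed rate, chosen only for illustration

for years in (10, 20, 30):
    factor = 2 ** (years / halving_period_years)
    print(f"after {years} years, the same storage/transmission/processing "
          f"costs roughly 1/{factor:,.0f} of what it does today")
```

A sustained factor of tens of thousands over a few decades is what turns ordinary costs into costs too small to account for individually.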

The economic trend is radical factor substitution. Any activity that can take advantage of the declining cost of digital information gets “sucked into” the digital domain. In many cases, the costs become so low that they are effectively zero, like the cost of napkins or a glass of water in a restaurant — there is indeed a cost, but it is below the threshold of individual accounting or control.

While these are simple trends, their social implications are far from simple, because we have no easy way to anticipate what changes are possible or likely. These factor substitutions are radical because they typically involve reinvention of a business, and such drastic changes can only be discovered through innovation and testing in the real world. We have been repeatedly surprised by personal computers, the internet, the world wide web, search engines, Wikipedia, YouTube, etc.

So the overall effect is that large social changes will be driven by simple, easily stated technical trends for at least several more decades. Even though we know the cause, we will be continually surprised by these changes because they arise from technical, business and social innovation that takes advantage of exponentially falling costs.

The ground rules of information goods

Information goods are very different from material goods. Scientific and scholarly communities have always operated largely by the ground rules of information goods, but since material goods were dominant in most areas of society, information goods haven’t gotten major attention from scholars until recently.

Until the early 1990s, essentially all information goods were embedded in material goods (books, vinyl records, digital tapes, etc.). High-speed digital communication finally split material and information goods completely, and enabled new modes of production. We are finally understanding how the differences in ground rules between material and information goods arise from very different transaction costs, coordination costs, and different levels of asymmetric information on the part of producers and consumers.

One key ground rule is becoming clear: Voluntary contribution and review are essential and often dominant aspects of information good production.

This ground rule has always been important in scholarship. Scholars have always done research, written articles and performed peer review primarily because producing information goods was intrinsic to their vocation. Now, due to the exponential shifts in the cost of information technology, this ground rule is applying to a much wider swath of society.

The successful businesses of the internet era, such as Amazon, Google and eBay, depend almost entirely on external content voluntarily contributed and reviewed by their stakeholders — buyers, sellers, creators of indexed web sites, people who create and post video (in the case of YouTube), etc.

The same pattern applies to major new social enterprises enabled by information technology. For example Wikipedia, Linux and Apache all produce information goods (software and content) that are dominant in their very large and important markets, and they produce them through voluntary contributions and review by their stakeholders.

Meta: Patterns in my posting (and my audience)

I’ve been posting long enough, and have enough reaction from others (mainly in the form of visits, links and posts on other blogs) that I can observe some patterns in how this all plays out.

My posts cluster roughly around three main themes (in retrospect, not by design):

  • Economic thinking, informed more by stochastic game theory than Arrow-Debreu style models
  • The social impact of exponential increases in computer power, especially coupled with statistical modeling
  • Philosophical analysis of emergence, supervenience, downward causation, population thinking, etc.

These seem to be interesting to readers roughly in that order, in (as best I can tell) a power-law-like pattern — that is, I get several times as many visitors looking at my economics posts as at my singularity / statistical modeling posts, and almost no one looking for my philosophical analysis (though the early Turner post has gotten some continuing attention).

I find the economics posts the easiest — I just “write what I see”. The statistical modeling stuff is somewhat more work, since I typically have to investigate technical issues in more depth than I would otherwise. Philosophical analysis is much harder to write, and I’m typically less satisfied with it when I’m done.

The mildly frustrating thing about this is that I think the philosophical analysis is where I get most of my ability to provide value. My thinking about economics, for example, is mainly guided by my philosophical thinking, and I wouldn’t be able to see what I see without an arduously worked out set of conceptual habits and frameworks. I’d enjoy getting, for the philosophical posts, the kind of encouragement and useful additional perspectives that I get when people react to the other topics.

Reflecting on this a bit, I think mostly what I’m doing with the philosophical work is gradually prying loose a set of deeply rooted cognitive illusions — illusions that I’m pretty sure arise from the way consciousness works in the human brain. Early on, I wrote a couple of posts that touch on this theme — and in keeping with the pattern described above, they were hard to write, didn’t seem to get a lot of interested readers, and I found them useful conceptual steps forward.

“Prying loose illusions” is actually not a good way to describe what needs to be done. We wouldn’t want to describe Copernicus’ work as “prying loose the geocentric illusion”. If he just tried to do that it wouldn’t have worked. Instead, I’m building up ways of thinking that I can substitute for these cognitive illusions (partially, with setbacks). This is largely a job of cognitive engineering — finding ways of thinking that stick as habits, that become natural, that I can use to generate descriptions of stuff in the world (such as economic behavior) which others find useful, etc.

In my (ever so humble) opinion this is actually the most useful task philosophers could be doing, although unfortunately as far as I can tell they mostly don’t see it as an important goal, and I suspect in many cases would say it is “not really philosophy”. To see if I’m being grossly unfair to philosophers, I just googled “goals of x” for various disciplines (philosophy, physics, sociology, economics, …). The results are interesting and I think indicate I’m right (or at least not unfair), but I think I’ll save further thoughts for a post about this issue. If you’re curious, feel free to try this at home.

A good example of post-capitalist production

This analysis of the Firedoglake coverage of the Libby trial hits essentially all the issues we’ve been discussing.

  • Money was required, but it was generated by small contributions from stakeholders (the audience), targeted to this specific project.
  • A relatively small amount of money was sufficient because the organization was very lightweight and the contributors were doing it for more than just money.
  • The quality was higher than the work done by the conventional organizations (news media) because the FDL group was larger and more dedicated. They had a long prior engagement with this story.
  • FDL could afford to put more feet on the ground than the (much better funded) news media, because they were so cost-effective.
  • The group (both the FDL reporters and their contributors) self-organized around this topic so their structure was very well suited to the task.
  • Entrepreneurship was a big factor — both long-term organization of the site, and short-term organization of the coverage.
  • FDL, with no prior journalistic learning curve, and no professional credentials, beat the professional media on their coverage of a high-profile hard-core news event.

This example suggests that we don’t yet know the inherent limits of this post-capitalist approach to production of (at least) information goods. Most discussions of blogs vs. (traditional) news media have assumed that the costs inherent in “real reporting” meant blogs couldn’t do it effectively. The FDL example shows, among other things, that the majority of those costs (at least in this case) are due to institutional overhead that can simply be left out of the new model.

We’re also discovering that money can easily be raised to cover specific needs, if an audience is very engaged and/or large. Note that even when raising money, the relationship remains voluntary rather than transactional — people contribute dollars without imposing any explicit obligations on their recipient. No one incurs the burden of defining and enforcing terms. In case of fraud or just disappointing performance, the “customers” will quickly withdraw from the relationship, so problems will be self-limiting.

It is interesting to speculate about how far this approach could go. To pick an extreme example, most of the current cost of new drugs is not manufacturing (which will remain capital intensive for the foreseeable future), but rather the information goods — research, design, testing, education of providers, etc. — needed to bring drugs to market. At this point it seems impossible that these processes could be carried out in a post-capitalist way. But perhaps this is a failure of imagination.
