Turking! The idea and some implications

I recently read an edited collection of five stories, Metatropolis; the stories are set in a common world the authors developed together. This is a near future in which nation state authority has eroded and in which new social processes have grown up and have a big role in making things work. In some sense the theme of the book was naming and exploring those new processes.

One of those processes was turking. The term is based on Amazon’s Mechanical Turk. Google shows the term in use as far back as 2005 but I hadn’t really wrapped my mind around the implications; Metatropolis broadens the idea way beyond Amazon’s implementation or any of the other discussions I’ve read.

Turking: Getting bigger jobs done by semi-automatically splitting them up into large numbers of micro-jobs (often five minutes long or less), and then automatically aggregating and cross-checking the results. The turkers (people doing the micro-jobs) typically don’t have or need any long term or contractual relationship with the turking organizers. In many cases, possibly a majority, the turkers aren’t paid in cash, and often they aren’t paid at all, but do the tasks as volunteers or because they are intrinsically rewarding (as in games).

One key element that is distinctive to turking is some sort of entirely or largely automated process for checking the results — usually by giving multiple turkers the same task and comparing their results. Turkers who screw up too many tasks aren’t given more of those tasks. Contrast this with industrial employment where the “employer” filters job candidates, contracts with some to become “employees”, and then enforces their contracts. The relationship in turking is very different: the “employer” lets anybody become an “employee” and do some tasks, doesn’t (and can’t) control whether or how the “employee” does the work, but measures each “employee’s” results and decides whether and how to continue with the relationship.
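
To make this concrete, here is a minimal sketch in Python of how an organizer might do that post hoc filtering: aggregate redundant answers by majority vote and score each turker against the consensus. The function name, tuple format and thresholds are my own assumptions, not any particular platform's API.

```python
from collections import Counter, defaultdict

def aggregate(assignments, min_agreement=2):
    """Accept an answer for a task once enough independent turkers agree,
    and score each turker against the consensus.

    assignments: list of (task_id, worker_id, answer) tuples.
    Returns (accepted answers by task, per-worker accuracy).
    """
    by_task = defaultdict(list)
    for task, worker, answer in assignments:
        by_task[task].append((worker, answer))

    accepted = {}
    correct, total = Counter(), Counter()
    for task, votes in by_task.items():
        counts = Counter(ans for _, ans in votes)
        answer, n = counts.most_common(1)[0]
        if n >= min_agreement:
            accepted[task] = answer
            for worker, ans in votes:
                total[worker] += 1
                correct[worker] += (ans == answer)

    accuracy = {w: correct[w] / total[w] for w in total}
    return accepted, accuracy
```

An organizer would then simply stop routing tasks to workers whose accuracy falls below some threshold, filtering after the fact instead of gatekeeping up front.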

This is an example of a very consistent pattern in the transition from industrial to networked relationships: a movement from gatekeeping and control to post hoc filtering. Another example is academic publishing. The (still dominant) industrial model of publishing works through gatekeeping — articles and books don’t get published until they are approved through peer review. The networked model works through post hoc processes: papers go up on web sites, get read, commented and reviewed, often are revised, and over time get positioned on a spectrum from valid/valuable to invalid/worthless. The networked model is inexorably taking over, because it is immensely faster, often fairer (getting a few bad anonymous reviews can’t kill a good paper), results in a wider range of better feedback to authors, etc.

It seems quite possible — even likely — that post hoc filtering for work will produce substantially better results than industrial style gatekeeping and control in most cases. In addition to having lower transaction costs, it could produce better quality, a better fit between worker and task, and less wasted effort. It also, of course, will change how much the results cost and how much people get paid — more on that below.

Amazon’s Mechanical Turk just involves information processing — web input, web output — and this is typical of most turking today. However there are examples which involve real world activities. In an extreme case turking could be used to carry out terrorist acts, maybe without even doing anything criminal — Bruce Sterling has some stories that explore this possibility. But there are lots of ordinary examples, like counting the empty parking spaces on a given block, or taking a package and shipping it.


  • Refugees in camps are turking for money. The tasks are typical turking tasks, but the structure seems to be a more standard employment relationship. If there were enough computers, I bet a high percentage of the camp residents would participate, after some short period in which everyone learned from each other how to do the work. Then the organizers would have to shift to turking methods, because the overhead of managing hundreds of thousands of participants using contracting and control would be prohibitive.
  • A game called FoldIt is using turking to improve machine solutions to protein folding. Turns out humans greatly improve on the automatic results but need the machine to do the more routine work. The turkers have a wide range of skill and a variety of complementary strategies, so the project benefits from letting many people try and then keeping the ones who succeed. (This is an example where the quality is probably higher than an industrial style academic model could generate.) The rewards are the intrinsic pleasure of playing the game, and also maybe higher game rankings.
  • There’s a startup named CrowdFlower that aims to make a business out of turking. CrowdFlower has relationships with online games that include the turking in their game play. So the gamers get virtual rewards (status, loot). I can easily imagine that the right turking tasks would actually enhance game play. CrowdFlower is also doing more or less traditional social science studies of turking motivations etc. Of course the surveys that generate data for the research are also a form of turking.
  • Distributed proofreading. OCR’d texts are distributed to volunteers and the volunteers check and correct the OCR. (They get both the image and the text.) The front page goes out of its way to note that “there is no commitment expected on this site beyond the understanding that you do your best.” This is an early turking technology, and works in fairly large chunks, a page at a time. It may be replaced by a much finer grained technology that works a word at a time — see below.
  • Peer production (open source and open content). An important component of peer production is small increments of bug reporting, testing, code review, documentation editing, etc. Wikipedia also depends on a lot of small content updates, edits, typo fixes, etc. These processes have the same type of structure as turking, although they typically haven’t been called turking. The main difference from the other examples is that there’s no clear-cut infrastructure for checking the validity of changes. This is at least partly historical: these processes arose before the current form of turking was worked out. The incentive — beyond altruism and the itch to correct errors — is that one can get credit in the community and maybe even in the product.
  • I just recently came across another good example that deserves a longer explanation: ReCaptcha. It is cool because it takes two bad things, and converts them into two good things, using work people were doing anyway.

    The first bad thing is that OCR generates lots of errors, especially on poorly printed or scanned material — which is why the distributed proofreading process above is required. These can often be identified because the results are misspelled and/or the OCR algorithm reports low confidence. From the OCR failures, you can generate little images that OCR has trouble recognizing correctly.

    The second bad thing is that online services are often exploited by bad actors who use robots to post spam, abusively download data, etc. Often this is prevented by captchas, images that humans can convert into text, but that are hard for machines to recognize. Since OCR failures are known to be hard for machines to recognize correctly, they make good captchas.

    ReCaptcha turns the user effort applied to solving captchas, which would otherwise be wasted, into turking to complete the OCR — essentially very fine grained distributed proofreading. ReCaptcha figures out who’s giving the correct answers by having each user recognize both a known word and an unknown word, in addition to comparing answers by different users. Users are rewarded by getting the access they wanted.

    Note that if spammers turk out captcha reading (which they are doing, but which increases their costs significantly) then they are indirectly paying for useful work as well. Potentially ReCaptcha could be generalized to any kind of simple pattern recognition that’s relatively easy for humans and hard for machines, which could generate a lot of value from human cognitive capacities.
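
The validation scheme is simple enough to sketch. In the toy Python below (my own illustration with invented names, not the actual ReCaptcha code), each challenge pairs a word OCR already recognized with one it failed on; only users who get the known word right are granted access and contribute a vote on the unknown word, which is accepted once enough independent votes agree.

```python
from collections import Counter, defaultdict

class Recaptcha:
    """Toy model of captcha-driven transcription (illustrative only)."""

    def __init__(self, required_votes=3):
        self.required_votes = required_votes
        self.votes = defaultdict(Counter)   # unknown image -> answer counts
        self.transcribed = {}               # unknown image -> accepted text

    def submit(self, control_word, control_answer, unknown_image, unknown_answer):
        """Grant access iff the control word is answered correctly; only
        then does the answer for the unknown word count as a vote."""
        if control_answer.strip().lower() != control_word.lower():
            return False  # failed the known word: deny access, ignore vote
        counts = self.votes[unknown_image]
        counts[unknown_answer.strip().lower()] += 1
        answer, n = counts.most_common(1)[0]
        if n >= self.required_votes:
            self.transcribed[unknown_image] = answer
        return True
```

Spammers who answer the control word correctly still get access, but in doing so they also cast a checkable vote on the unknown word, which is why even abusive solving ends up doing useful work.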

Some implications

It seems that over time a huge variety and quantity of work could be turked. The turking model has the capacity to productively employ a lot of what Clay Shirky calls our “cognitive surplus”, and also whatever time surplus we have. Many unemployed people, refugee populations and I’m sure lots of other groups have a lot of surplus. As Shirky points out, even employed people have a discretionary surplus that they spend watching TV, reading magazines, playing computer games, etc. However right now there’s no way to bring this surplus to market.

Switching from industrial relationships (heavyweight, gatekeeping and control) to networked relationships (lightweight, post hoc filtering) reduces per task transaction costs to a tiny fraction of their current level, and makes it feasible to bring much of this surplus to market.

The flip side of that of course is that the more this surplus is available for production, the less anyone will get paid for the work it can do. Already in a lot of existing turking, the participants aren’t getting paid — and in many cases the organizers aren’t getting paid either. Also, more or less by definition, the surplus that would be applied to turking currently isn’t being used for any other paid activity, so potential workers aren’t giving up other pay to turk. Therefore, I expect the average payment for a turked task to approach zero, for both turkers and organizers. Usually there will still be rewards, but they will tend to be locally generated within the specific context of the tasks (online community, game, captcha, whatever). Often the entity that generates the rewards won’t get any specific benefit from the turking — for example, in the case of ReCaptcha, the sites that use it don’t particularly benefit from whatever proofreading gets done.

Mostly turking rewards won’t be measurable in classical monetary terms — in some cases rewards may involve “in game” currency but this doesn’t yet count in the larger economy. In classical monetary terms, the marginal cost of getting a job turked will probably approach the cost of building, maintaining and running the turking infrastructure — and that cost is exponentially declining and will continue to do so for decades.

This trend suggests that we need to find some metric complementary to money to aggregate preferences and allocate large scale social effort. But I’m not going to pursue that question further here.

Obviously it will be important to understand what types of work can be turked and what can’t. For example, could the construction of new houses be turked? That may seem like a stretch, but Habitat for Humanity and other volunteer groups do construct houses with a process very much like turking — and of course this has a long history in the US, with institutions like barn raising. Furthermore the use of day labor isn’t that different from turking. I’d guess that within ten years we’ll be turking much of the construction of quite complex buildings. It is interesting to try to imagine what this implies for construction employment.

Realistically, at this point we just don’t know the limits of turking. My guess is that the range of things that can be done via turking will turn out to be extremely broad, but that it will take a lot of specific innovations to grow into that range. Also of course there will be institutional resistance to turking many activities.

When a swarm of turkers washes over any given activity and devours most of it, there will typically be a bunch of nuggets left over that can’t be turked reliably. These will probably be things that require substantial specialized training and/or experience, relatively deep knowledge of the particular circumstances, and maybe certification and accountability. Right now those nuggets are embedded in turkable work and so it is hard or impossible to figure out their distribution, relative size, etc. For a while (maybe twenty years or so) we’ll keep being surprised — we’ll find some type of nuggets we think can’t be turked, and then someone will invent a way to make most of them turkable. Only if and when turking converges on a stable institution will we be able to state more analytically and confidently the characteristics that make a task un-turkable.

Another issue is security / confidentiality. Right now, corporations are willing to use turking for lots of tasks, but I bet they wouldn’t turk tasks involving key market data, strategic planning, or other sensitive material. On the other hand, peer production projects are willing to turk almost anything, because they don’t have concerns about maintaining a competitive advantage by keeping secrets. (They do of course have to keep some customer data private if they collect it at all, but usually they just avoid recording personal details.) I’d guess that over time this will give entities that keep fewer secrets a competitive advantage. I think this is already the case for a lot of related reasons: broadly speaking, trying to keep secrets imposes huge transaction costs. Eventually keeping big secrets may come to be seen as an absurdly expensive and dubious proposition, and the ability to keep big pointless secrets will become an assertion of wealth and power. (Every entity will need to keep a few small secrets, such as the root password to their servers. But we know how to safely give everyone read only access to almost everything, and still limit changes to those who have those small secrets.)

There’s lots more to say, but that’s enough for now.

The two faces of Avatar

Avatar just keeps demanding a bit more analysis.

To recap, I agree the story is an embarrassingly naive retread of the “white man goes native and saves the natives” plus gooey nature worship. But…

I also believe the world Pandora, as shown to us in the movie, can’t be confined within that story. It keeps escaping and cutting across or contradicting the premises of the narrative, as discussed in many nerd posts including mine.

So Avatar has two very different faces, and different personalities to go with them. And I think this goes back to the basic character of the social processes used to create Avatar. No, seriously, stay with me for a minute and I’ll explain.

For our purposes we can say there are three modes of production in films and a lot of other activities: craft, industrial, and networked. Of course any real film is produced through a mix of these.

A film made by a small team working on their own (with or without a presiding genius) is an example of craft production, just like similar teams producing ceramic tiles or houses.

A film made in a “factory” environment along with many others (like The Wizard of Oz or Casablanca) is an example of industrial production.

And a film made by multiple loosely coordinated groups with different expertise is an example of network production.

Network production is now nearly universal in large films, but before Avatar I can’t think of any examples of network production driving the film content. Generally the network mostly fleshes out content dictated by a small team that is using craft production. (If you can think of good previous examples, please comment or email, I’d really like to know.)

In Avatar, Cameron wanted a lot of depth in his world, and had the money and skills to pull together a network to produce it. Pandora was created by a huge collaboration between ecologists, biologists, linguists, artists, rendering experts and so forth. The collaboration also necessarily included software and hardware experts who built the computer networks, and project managers who shaped the social network, and these people were no doubt also very engaged with the ideas about Pandora and contributed to its character in significant ways. Cameron was of course involved, but the depth and complexity of the world (and the network) meant that most of the decisions had to be internal to the network.

So Avatar inevitably has two faces. The plot arc, the characters and the dialog were crafted by Cameron. Much of the commercial success of the film no doubt is due to his judgements about what would work in that domain. But Pandora, and probably much of the human tech in the film, was created by a social network that was focused on scientific (as well as artistic) verisimilitude, conceptual integrity across a wide range of disciplines and scales, and our best current skills for designing and managing big networks of people and machines. And a significant amount of the success of the film is due to the richness and coherence of the vision generated by the network.

In some sense Cameron was responsible for both faces. In one case he was directly shaping the content. In the other, he was shaping and directing the social network that produced the content. But the two forms of production generate very different kinds of results, and those generate the divergent critical reactions that tend to focus either on the story or on the world.

This analysis brings into focus a question on which I have no information, but which I think is important to our deeper understanding of Avatar and our thinking about the successors it will inevitably inspire. Who defined the parts of the world that bridge between the network and the story? For example, in Pandora, animals, Na’vi and trees can couple their nervous systems to each other. This coupling plays a role in the story, but it could have been avoided in some cases, and made less explicit and more “magical” in others. On the other hand this coupling mechanism is constitutive of key parts of Pandora such as the “world brain”, and it drastically affects our understanding of the nature of Pandora and its possible history.

Did the network come up with this coupling as a way to make mind transfer — a part of the story that would otherwise have been magic — into science? Or was it somehow integral to Cameron’s vision of Pandora? Or — more likely — some combination of those.

Bubbles of humanity in a post-human world

Austin Henderson had some further points in his comment on Dancing toward the singularity that I wanted to discuss. He was replying to my remarks on a social phase-change toward the end of the post. I’ll quote the relevant bits of my post, substituting my later term “netminds” for the term I was using then, “hybrid systems”:

    If we put a pot of water on the stove and turn on the heat, for a while all the water heats up, but not uniformly–we get all sorts of inhomogeneity and interesting dynamics. At some point, local phase transitions occur–little bubbles of water vapor start forming and then collapsing. As the water continues to heat up, the bubbles become more persistent, until we’ve reached a rolling boil. After a while, all the water has turned into vapor, and there’s no more liquid in the pot.

    We’re now at the point where bubbles of netminds (such as “gelled” development teams) can form, but they aren’t all that stable or powerful yet, and so they aren’t dramatically different from their social environment. Their phase boundary isn’t very sharp.

    As we go forward and these bubbles get easier to form, more powerful and more stable, the overall social environment will be increasingly roiled up by their activities. As the bubbles merge to form a large network of netminds, the contrast between people who are part of netminds and normal people will become starker.

    Unlike the pot that boils dry, I’d expect the two phases–normal people and netminds–to come to an approximate equilibrium, in which parts of the population choose to stay normal indefinitely. The Amish today are a good example of how a group can make that choice. Note that members of both populations will cross the phase boundary, just as water molecules are constantly in flux across phase boundaries. Amish children are expected to go out and explore the larger culture, and decide whether to return. I presume that in some cases, members of the outside culture also decide to join the Amish, perhaps through marriage.

After I wrote this I encountered happiness studies that show the Amish are much happier and dramatically less frequently depressed than mainstream US citizens. I think it’s very likely that the people who reject netminds and stick with GOFH (good old fashioned humanity) will similarly be much happier than people who become part of netminds (on the average).

It isn’t too hard to imagine why this might be. The Amish very deliberately tailor their culture to work for them, selectively adopting modern innovations and tying them into their social practices in specific ways designed to maintain their quality of life. Similarly, GOFH will have the opportunity to tailor its culture and technical environment in the same way, perhaps with the assistance of friendly netminds that can see deeper implications than the members of GOFH.

I’m inclined to believe that I too would be happier in a “tailored” culture. Nonetheless, I’m not planning to become Amish, and I probably will merge into a netmind if a good opportunity arises. I guess my own happiness just isn’t my primary value.

    [A]s the singularity approaches, the “veil” between us and the future will become more opaque for normal people, and at the same time will shift from a “time-like” to a “space-like” boundary. In other words, the singularity currently falls between our present and our future, but will increasingly fall between normal humans and netminds living at the same time. Netminds will be able to “see into” normal human communities–in fact they’ll be able to understand them far more accurately than we can now understand ourselves–but normal humans will find hybrid communities opaque. Of course polite netminds will present a quasi-normal surface to normal humans except in times of great stress.

    By analogy with other kinds of phase changes, the distance we can see into the future will shrink as we go through the transition, but once we start to move toward a new equilibrium, our horizons will expand again, and we (that is netminds) may even be able to see much further ahead than we can today. Even normal people may be able to see further ahead (within their bubbles), as long as the equilibrium is stable. The Amish can see further ahead in their own world than we can in ours, because they have decided that their way of life will change slowly.

Austin raises a number of issues with my description of this phase change. His first question is why we should regard the population of netminds as (more or less) homogeneous:

    All water boils the same way, so that when bubbles coalesce they are coherent. Will bubbles of [netmind] attempt to merge, maybe that will take more work than their hybrid excess capability provides, so they will expend all their advantage trying to coalesce so that they can make use of that advantage. Maybe it will be self-limiting: the “coherence factor” — you have to prevent it from riding off at high speed in all directions.

Our current experience with networked systems indicates there’s a messy dynamic balance. Network effects generate a lot of force toward convergence or subsumption, since the bigger nexus tends to outperform the smaller one even if it is not technically as good. (Here I’m talking about nexi of interoperability, so they are conceptual or conventional, not physical — e.g. standards.)

Certainly the complexity of any given standard can get overwhelming. Standards that try to include everything break down or just get too complex to implement. Thus there’s a tendency for standards to fission and modularize. This is a good evolutionary argument for why we see compositionality in any general purpose communication medium, such as human language.

When a standard breaks into pieces, or when competing standards emerge, or when standards originally developed in different areas start interacting, if the pieces don’t work together, that causes a lot of distress and gets fixed one way or another. So the network effects still dominate, through making pieces interact gracefully. Multiple interacting standards ultimately get adjusted so that they are modular parts of a bigger system, if they all continue to be viable.

As for riding off in all directions, I just came across an interesting map of science. In a discussion of the map, a commenter makes just the point I made in another blog post, that real scientific work is all connected, while pseudo-science goes off into little encapsulated belief systems.

I think that science stays connected because each piece progresses much faster when it trades across its boundaries. If a piece can’t or won’t connect for some reason it falls behind. The same phenomenon occurs in international trade and cultural exchange. So probably some netminds will encapsulate themselves, and others will ride off in some direction far enough so they can’t easily maintain communication with the mainstream. But those moves will tend to be self-limiting, as the relatively isolated netminds fall behind the mainstream and become too backward to have any power or influence.

None of this actually implies that netminds will be homogeneous, any more than current scientific disciplines are homogeneous. They will have different internal languages, different norms, different cultures, they will think different things are funny or disturbing, etc. But they’ll all be able to communicate effectively and “trade” questions and ideas with each other.

Austin’s next question is closely related to this first one:

    Why is there only one phase change? Why wouldn’t the first set of [netminds] be quickly passed by the next, etc. Just like the generation gap…? Maybe, as it appears to me in evolution in language (read McWhorter, “The Word on the Street” for the facts), the speed of drift is just matched by our length of life, and the bridging capability of intervening generations; same thing in space, bridging capability across intervening African dialects in a string of tribes matches the ability to travel. Again, maybe mechanisms of drift will limit the capacity for change.

Here I want to think of phase changes as occurring along a spectrum of different scales. For example, in liquid water, structured patterns of water molecules form around polar parts of protein molecules. These patterns have boundaries and change the chemical properties of the water inside them. So perhaps we should regard these patterns as “micro-phases”, much smaller and less robust than the “macro-phases” of solid, liquid and gas.

Given this spectrum, I’m definitely talking about a “macro-phase” transition, one that is so massive that it is extremely rare in history. I’d compare the change we’re going through to the evolution of the genetic mechanisms that support multi-cellular differentiation, and to the evolution of general purpose language supporting culture that could accumulate across generations. The exponential increases in the power of digital systems will have as big an impact as these did. So, yes, there will be more phase changes, but even if they are coming exponentially closer the next one of this magnitude is still quite some time away:

  • Cambrian explosion, 500 million years ago
  • General language, 500 thousand years ago
  • Human / digital hybrids (netminds), now
  • Next phase change, 500 years from now?

Change vs. coherence is an interesting issue. We need to distinguish between drift (which is fairly continuous) and phase changes (which are quite discontinuous).

We have a hard time understanding Medieval English, as much because of cultural drift as because of linguistic drift. The result of drift isn’t that we get multiple phases co-existing (with rare exceptions), but that we get opaque history. In our context this means that after a few decades, netminds will have a hard time understanding the records left by earlier netminds. This is already happening as our ability to read old digital media deteriorates, due to loss of physical and format compatibility.

I imagine it would (almost) always be possible to go back and recover an understanding of historical records, if some netmind is motivated to put enough effort into the task — just as we can generally read old computer tapes, if we want to work hard enough. But it would be harder for them than for us, because of the sheer volume of data and computation that holds everything together at any given time. Our coherence is very very thin by comparison.

For example the “thickness” of long term cultural transmission in western civilization can be measured in some sense by the manuscripts that survived from Rome and Greece and Israel at the invention of printing. I’m pretty sure that all of those manuscripts would fit on one (or at most a few) DVDs as high resolution images. To be sure these manuscripts are a much more distilled vehicle of cultural transmission than (say) the latest Tom Cruise DVD, but at some point the sheer magnitude of cultural production overwhelms this issue.

Netminds will up the ante at an exponential rate, as we’re already seeing with digital production technology, blogging, etc. etc. Our increasing powers of communication pretty quickly exceed my ability to understand or imagine the consequences.

Rationality is only an optimization

I’m reading a lovely little book by H. Peyton Young, Individual Strategy and Social Structure, very dense and tasty. I checked out what he had done recently, and found “Individual Learning and Social Rationality” in which, as he says, “[w]e show how high-rationality solutions can emerge in low-rationality environments provided the evolutionary process has sufficient time to unfold.”

This reminded me of work by Duncan Foley on (what might be called) low-rationality economics, beginning with “Maximum Entropy Exchange Equilibrium” and moving to a more general treatment in “Classical thermodynamics and economic general equilibrium theory”. Foley shows that the equilibria of neoclassical economics, typically derived assuming unbounded rationality, can in fact be approximated by repeated interactions between thoughtless agents with simple constraints. These results don’t even depend on agents changing due to experience.

So from the careful, well grounded results by these two scholars, I’d like to take an alarmingly speculative leap: I conjecture that all rationality is an optimization, which lets us get much faster to the same place we’d end up after sufficiently extended thoughtless wandering of the right sort. This hardly makes rationality unimportant, but it does tie it to something less magical sounding.
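
That conjecture is easy to illustrate with a toy simulation, loosely in the spirit of Young's adaptive play (the game, payoffs and parameters below are invented for illustration). Agents best-respond to a small random sample of recent plays and occasionally make mistakes; despite this minimal rationality, the population wanders into a conventional equilibrium and stays there.

```python
import random

# A stag-hunt style 2x2 game: "A" pays off well only if the opponent
# also plays it; "B" is the safe, risk-dominant choice.
PAYOFF = {("A", "A"): 4, ("A", "B"): 0, ("B", "A"): 3, ("B", "B"): 3}

def best_reply(sample, epsilon=0.05):
    """Myopic best reply to a small sample of past plays, with a small
    mistake probability epsilon: deliberately low rationality."""
    if random.random() < epsilon:
        return random.choice(["A", "B"])
    ev = {s: sum(PAYOFF[(s, o)] for o in sample) for s in ("A", "B")}
    return max(ev, key=ev.get)

def run(rounds=5000, memory=10, sample_size=5, seed=0):
    """Evolve a shared memory of recent plays; return the final memory."""
    random.seed(seed)
    history = [random.choice(["A", "B"]) for _ in range(memory)]
    for _ in range(rounds):
        sample = random.sample(history, sample_size)
        history.append(best_reply(sample))
        history.pop(0)
    return history
```

After a few thousand rounds the recent history is almost entirely one action (typically the risk-dominant "B"): the thoughtless process finds an equilibrium on its own, and extra rationality would mostly just get there faster.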

I like this way of thinking about rationality, because it suggests useful questions like “What thoughtless equilibrium does this rational rule summarize?” and “How much rationality do we need to get close to optimal results here?” In solving problems a little rationality is often enough; trying to add more may just produce gratuitous formality and obscurity.

At least in economics and philosophy, rationality is often treated as a high value, sometimes even an ultimate value. If it is indeed an optimization of the path to thoughtless equilibria, it is certainly useful but probably not worthy of such high praise. Value is more to be found through comparing the quality of the equilibria and understanding the conditions that produce them, than by getting to them faster.

    Leaving knowledge on the table

    Yesterday I had a very interesting conversation with an epidemiologist while I was buying a cup of coffee (it’s great to live in a university town).

    She confirmed a dark suspicion I’ve had for some time — large population studies do a terrible job of extracting knowledge from their data. They use basic statistical methods, constrained by the traditions of the discipline, and by peer review that has an extremely narrow and wasteful view of what counts as valid statistical tools. She also said that even if they had the freedom to use other methods, they don’t know how to find people who understand better tools and can still talk their language.

    The sophisticated modeling methods that have been developed in fields like statistical learning aren’t being applied (as far as either of us know) to the very large, rich, expensive and extremely important datasets collected by these large population studies. As a result, we both suspect a lot of important knowledge remains locked up in the data.

    For example, her datasets include information about family relationships between subjects, so the right kind of analysis could potentially show how specific aspects of diet interact with different genotypes. But the tools they are using can’t do that.
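To make the point concrete, here is a toy simulation (all data invented) of the kind of signal a main-effects analysis misses: a dietary factor whose effect on an outcome reverses sign depending on genotype. The pooled regression slope is near zero, while stratifying by genotype reveals two strong, opposite effects.

```python
import random

random.seed(1)

def outcome(diet, genotype):
    # Invented ground truth: diet helps genotype A, harms genotype B.
    effect = 1.0 if genotype == "A" else -1.0
    return effect * diet + random.gauss(0, 0.5)

subjects = [(random.random(), random.choice("AB")) for _ in range(4000)]
data = [(d, g, outcome(d, g)) for d, g in subjects]

def slope(points):
    # Ordinary least-squares slope of y on x.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    return sxy / sxx

pooled = slope([(d, y) for d, g, y in data])
by_genotype = {g: slope([(d, y) for d, gg, y in data if gg == g]) for g in "AB"}

print(round(pooled, 2))                               # near zero
print({g: round(s, 2) for g, s in by_genotype.items()})  # near +1 and -1
```

Of course real gene-diet interactions are vastly messier than this two-genotype caricature, but the structural problem is the same: a model that can’t represent the interaction reports “no effect” on data that contain a strong one.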

    We’d all be a lot better off if some combinations of funding agencies and researchers could bridge this gap.

    Dancing toward the singularity

    Vernor Vinge gave a talk in the Long Now Foundation seminar series last week (which is great, by the way, you should go if you can). Stewart Brand sent out an email summary but it isn’t on the web site yet.

    As Brand says, “Vinge began by declaring that he still believes that a Singularity event in the next few decades is the most likely outcome — meaning that self-accelerating technologies will speed up to the point of so profound a transformation that the other side of it is unknowable. And this transformation will be driven by Artificial Intelligences (AIs) that, once they become self-educating and self-empowering, soar beyond human capacity with shocking suddenness.”

    At Stewart’s request, Vinge’s talk was about knowable futures – which by definition means that the singularity doesn’t happen. But the follow-up questions and discussion after the talk were mostly about the singularity.

    All of this has crystallized my view of the singularity. The path isn’t all that strange, but I now have a much better sense of the details, and see aspects that haven’t been covered in any essays or science fiction stories I know.

    One point that came out at Vinge’s talk is important for context. Vinge (and I) aren’t imagining a singularity as an absolute point in time, and in that sense the term “singularity” is quite misleading. The sense of singularity arises because looking further up the curve from a given point, we see “so profound a transformation that the other side of it is unknowable.” However as we’re moving along the curve we won’t perceive a discontinuity; the future will remain comprehensible to some extent, though our horizon may get closer. However, we also won’t be the same people we are now. Among other things we’ll only be able to understand what’s going on because we are gradually integrated into hybrid human / machine systems. I’ll discuss how that will happen as we go along.

    What’s missing from current discussions of the singularity?

    Above I claim that this path has aspects that “haven’t been covered in any essays or science fiction stories I know”. So I’ll elaborate on that first.

    A major enabler of progress for systems (computer or human / computer hybrid) is their ability to quickly and accurately learn effective models of parts of their environment.

    Examples of current model learning technology:

    • model road conditions by tracking roads (Thrun);
    • model safe driving by watching a driver (Thrun);
    • model the immediate visual indicators of a “drivable road” by watching the road right in front of the vehicle (Thrun);
    • model the syntactic and semantic mapping between two languages by analyzing parallel texts (many);
    • model the motion style of dancers (Brand);
    • model the painting style of various painters (Lyu);
    • model users’ choices of topics, tags etc. for text (lots of people);
    • etc. etc. etc.

    A few observations about this list:

    • Often model learning delivers performance comparable to humans. Thrun’s models performed at near human levels in adverse real-world conditions (130 miles of driving unpaved, unmarked roads with lots of obstacles). The best parallel-text translation systems approach the ability of a mediocre human translator. Model learning performs far better for spam filtering than hand-built filters. Statistical modeling of painting style can identify artists as well as human experts. Etc.
    • Model learning is quickly getting easier. Thrun created three quite robust modeling capabilities in less than a year. A basic parallel text translation system can be created with off the shelf components. Fairly good text classification can be done with simple open source software. Etc.
    • None of these models is part of a reflexive loop, except in the extremely weak sense that e.g. spam filtering contributes to the productivity of developers who work on spam filtering.
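As a sense of how simple “simple” can be, here is a complete naive Bayes text classifier — the kind of technique behind early statistical spam filters. The training data and labels are invented for illustration.

```python
import math
from collections import Counter

def train(labeled_docs):
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of documents
    vocab = set()
    for words, label in labeled_docs:
        counts.setdefault(label, Counter()).update(words)
        totals[label] += 1
        vocab.update(words)
    return counts, totals, vocab

def classify(words, model):
    counts, totals, vocab = model
    n_docs = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, wc in counts.items():
        # log prior + log likelihood with add-one smoothing
        score = math.log(totals[label] / n_docs)
        denom = sum(wc.values()) + len(vocab)
        for w in words:
            score += math.log((wc[w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    ("win cash prize now".split(), "spam"),
    ("cheap prize click now".split(), "spam"),
    ("meeting agenda attached".split(), "ham"),
    ("lunch meeting tomorrow".split(), "ham"),
]
model = train(docs)
print(classify("cash prize click".split(), model))    # -> spam
print(classify("agenda for tomorrow".split(), model)) # -> ham
```

With a real corpus this handful of lines already performs respectably, which is exactly the point of the observations above: the entry cost of model learning has collapsed.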

    Generally, discussions of and stories about AI, the singularity etc. don’t emphasize this sort of modeling. For example, when have we seen a story in which robots could quickly and accurately imitate the body language, intonations, verbal style etc. of the people they are dealing with? But this is a basic human ability, and it probably is essential to human level interaction.

    The larger story

    The modeling I discuss above is just one of a set of trends that taken together will lead to the singularity (unless something stops us, like nuclear war, a pandemic virus, or some as yet undiscovered metaphysical impossibility). If all the following trends continue, I think we’ll get to the singularity within, as Vinge says, “a few decades”.

    Increasing processing power, storage, and bandwidth per $

    It seems like Moore’s law and its various relatives will continue to smile upon us for a while yet, so I don’t think we need to worry about this.

    Also my intuition (for which I have very little evidence) is that we can do the vast majority of what we need with the processing power, storage and bandwidth we already have. Of course, moving up the curve will make things easier, and perhaps most importantly, economically accessible to many more people.

    Huge amounts of real world data easily accessible

    This is the sine qua non of effective model learning; any area where we don’t have a lot of raw data can’t be modeled effectively. Conversely, if we do have lots of data, even simple approaches are likely to produce fairly good models.

    This is taking care of itself very nicely, with immense amounts of online text, click streams, video chat, surveillance cameras, etc. etc. As the model acquisition ability is built, the data will be available for it.

    General purpose, high performance population-based computing

    All model learning is population-based – that is, it works with distributions, not crisp values. Continued progress in modeling will lead to more and more general grasp of population-based computing. Conversely, general population-based computing will make writing new modeling tools much easier and faster. Also, population-based computing makes it immensely easier to scale up using massively parallel, distributed systems.

    Right now we have piecemeal approaches to population-based computing, but we don’t have a family of general mechanisms, equivalent to the “instruction stream + RAM” model of computing that we know so well. I think a better conceptual synthesis is required here. The synthesis may or may not be a breakthrough in the sense of requiring big conceptual changes, and/or providing big new insights.
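To give a feel for what a population-based programming model might be like, here is a minimal sketch (my own toy construction, not a proposal for the synthesis): every value is a population of samples rather than a crisp number, and ordinary operators are lifted to work sample-wise, so uncertainty propagates through ordinary-looking arithmetic.

```python
import random

random.seed(2)

N = 10000  # population size per value

class Pop:
    """A value represented as a population of samples."""
    def __init__(self, samples):
        self.samples = list(samples)
    @classmethod
    def normal(cls, mu, sigma):
        return cls(random.gauss(mu, sigma) for _ in range(N))
    def __add__(self, other):
        # Lift + to sample-wise addition (assumes independence).
        return Pop(a + b for a, b in zip(self.samples, other.samples))
    def mean(self):
        return sum(self.samples) / len(self.samples)
    def prob(self, pred):
        # Fraction of the population satisfying a predicate.
        return sum(map(pred, self.samples)) / len(self.samples)

# Two independent uncertain quantities, combined with ordinary syntax.
trip_a = Pop.normal(30, 5)      # minutes
trip_b = Pop.normal(20, 4)
total = trip_a + trip_b

print(round(total.mean()))                     # ~50
print(round(total.prob(lambda t: t > 60), 2))  # chance the trip runs long
```

This is just the Monte Carlo idiom dressed up as a datatype — a piecemeal approach in exactly the sense above. What’s missing, and what I think the synthesis requires, is a general account of such populations that scales, composes, and handles dependence between values.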

    Tightly coupled hybrid human / machine systems

    To make hybrid systems more powerful, we need high bandwidth interaction mechanisms, and UI support to maximize coupling. Multi-touch displays, for example, allow much higher bandwidth human input. Good speech understanding would also help. Etc. In the opposite direction, visual and auditory output needs to make good use of human ecological perception (e.g. our ability to notice potential predators or prey during a jungle walk without consciously looking for them, our ability to unconsciously read subtle shades of feeling on people’s faces, our ability to unconsciously draw on context to interpret what people say, etc.).

    Lots of people are working on the underlying technology for this. I don’t know of any project that is explicitly working on “ecological” machine – human communication, but with better modeling of humans by machines, it will probably come about fairly naturally. I don’t see a need for any big synthesis or breakthrough to make this all work, just a lot of incremental improvement.

    As an aside, I bet that the best ideas on full engagement of people’s ecological abilities for perception and action will come from gaming. The Wii is already a good example. Imagine a gaming system that could use all joint angles in a player’s arms and hands as input without requiring physical contact. This would certainly require modeling the player at several levels (physical, motor control, gesture interpretation). Such a UI will only be highly usable and natural if it supports rapid evolution of conventions between the machine and the human, largely without conscious human choice.

    General, powerful model learning mechanisms

    This is a huge topic and is getting lots of attention from lots of different disciplines: statistics, computer science, computational neuroscience, biological neuroscience, differential topology, etc.

    Again here I doubt that we need any big synthesis or breakthrough (beyond the underlying general model of population computing above). However this is such a huge area that I have trouble guessing how it will evolve. There is a significant chance that along the way, we’ll get insights into model learning that will cause a lot of current complexity to collapse down into a much simpler and more powerful approach. This isn’t essential, but obviously it would accelerate progress.

    Reflexive improvement of hybrid systems

    This is where the big speedup comes. Existing hybrid systems, such as development teams using intensive computer support, are already reflexively improving themselves by writing better tools, integrating their social processes more deeply into their networks, etc. Once the machines in a hybrid system can model their users, this reflexive improvement should accelerate.

    I don’t see any serious obstacles to user modeling in development systems, but I also don’t see any significant examples of it, which is somewhat puzzling. Right now, as far as I’m aware, there are no systems that apply user modeling to accelerating their own development.

    Such systems are technically possible today; in fact, I can imagine feasible examples fairly easily. Here’s one: A machine could “watch” a developer identifying interesting events in trouble logs or performance logs, learn a model that can predict the person’s choices, and use the model to filter a much larger collection of log data. Even if the filter wasn’t all that good, it would probably help the human to see patterns they’d otherwise miss, or would take much longer to find. The human and the machine could continue to refine and extend the filter together. This kind of model learning isn’t terribly challenging any more.
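A hypothetical miniature of that log-triage loop: the developer labels a few lines as interesting or not, a trivial perceptron learns token weights from those judgments, and the learned model then flags lines in a larger (here, invented) log.

```python
from collections import defaultdict

def tokens(line):
    return line.lower().split()

def train(labeled, epochs=10):
    # Classic perceptron over bag-of-words features.
    w = defaultdict(float)
    for _ in range(epochs):
        for line, interesting in labeled:
            score = sum(w[t] for t in tokens(line))
            if (score > 0) != interesting:       # mistake-driven update
                delta = 1.0 if interesting else -1.0
                for t in tokens(line):
                    w[t] += delta
    return w

# The developer's judgments on a handful of log lines (invented data).
labeled = [
    ("ERROR timeout contacting payment service", True),
    ("WARN retry 3 of 5 for payment service", True),
    ("INFO request served in 12 ms", False),
    ("INFO cache hit for session 991", False),
]
weights = train(labeled)

# The model filters a larger stream the developer never looked at.
unlabeled = [
    "INFO request served in 9 ms",
    "ERROR timeout contacting auth service",
    "INFO cache hit for session 407",
]
flagged = [l for l in unlabeled if sum(weights[t] for t in tokens(l)) > 0]
print(flagged)   # -> ['ERROR timeout contacting auth service']
```

Even a filter this crude generalizes a little (it flags the auth-service timeout it never saw labeled), and the human and machine could keep refining it together — which is all the reflexive loop needs to get started.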

    We may be stuck in a conceptual blind spot here, as we were with hypertext prior to Berners-Lee. If so, the Christensen disruptive innovation scenario will probably apply: A new “adaptive” development environment will be created by someone outside the mainstream. It will be much less functional than existing development environments, and will be denounced as a total pile of crap by existing development gurus. However it will be adopted rapidly because it fills a huge unmet need for much easier, more tightly coupled human / machine development.

    At the risk of dangerous hubris, I think these ingredients are all we need to produce the singularity. Specifically, I believe that no conceptual breakthroughs are required beyond those listed above. (Of course unexpected breakthroughs might occur and accelerate things.) None of the ingredients seems particularly risky, although most of us (probably including me) would have been astounded by current model learning had it been demonstrated five years ago. (Bernie Widrow and a few other people knew we could do this even thirty years ago.)

    How long will it be before things get really weird?

    Systems will be hybrid human / machine up to the singularity, and maybe far beyond (allowing for some equivocation about what counts as “human”). I don’t expect the banner of progress to be carried by systems with no humans in them any time soon.

    The rate of improvement of reflexive hybrid human / machine systems determines the rate of progress toward the singularity. These are systems that can monitor their own performance and needs, and adapt their own structure, using the same kinds of processes they use for anything else. This kind of hybrid system exists today – online software development communities are examples, since they can build their own tools, reconfigure their own processes, etc.

    The power of these systems is a function of the depth of integration of the humans and machines, and the breadth of the tasks they can work on (flexibility). Since they can apply their power to improving themselves, increasing their power will generally accelerate their improvement.

    We already learn models of our machines. Good UIs and programming environments are good because they help us build effective models. To really accelerate progress, our machines will have to model us, and use those models to improve the coupling between us and them. The depth of integration, and thus the power, and thus the rate of improvement of human / machine hybrids will depend on how accurately, deeply, and quickly machines can model humans.

    The big unknowns in determining how fast we’ll move are:

    • how quickly model learning will evolve,
    • how effectively we’ll apply model learning to enhancing reflexive hybrid human / machine development environments and
    • how quickly we’ll transition to population-based computing.

    Probably I can do a worst case / best case / expected case trend projection for these three unknowns, but this will take some further research and thought.

    Some scenarios

    Vinge talked a bit about different trajectories approaching the singularity – in particular, he distinguished between “hard takeoffs” in which the necessary technology is developed in one place and then spreads explosively, and “soft takeoffs” in which the technology development is more open and a large fraction of the population participates in the takeoff.

    My analysis has some strong implications for those scenarios, and introduces a new, intermediate one:

    A hard takeoff

    For example due to a large secret military project. This is possible, but unlikely. Such a takeoff would have to create human / machine hybrids far more powerful than any generally available, and at the same time, prevent the techniques involved from diffusing out into the world.

    Over and over, we’ve seen that most closed research projects fail, because they lack the variety and external correctives that come from being embedded in a larger network of inquiry. Also, the very institutional mechanisms that can create a big project and keep it secret tend to prevent creative exploration of the marginal ideas that are probably needed for success.

    Note: This point is about research projects, not engineering projects that operate within good existing theories. Thus e.g. the Manhattan project is not a counter-example.

    A soft takeoff

    This is very likely as long as progress depends on combining innovations across a wide range of groups, and I think this is likely to be essential throughout the process. As things speed up, the dynamics are increasingly sensitive to the ratio between the rate of local change and the rate of diffusion of innovations. Aside from institutional barriers, software innovations in a networked world are likely to diffuse extremely quickly, so the network is likely to approach takeoff as a whole.

    Local disruptions

    As we get closer to the singularity, a third possibility emerges. Powerful reflexive systems may enable local, disruptive innovation, for example by relatively small groups that keep their activities secret. These groups would not be able to “outrun” the network as a whole, so they would not have a sustainable lead, but they could temporarily be very disruptive.

    In Rainbows End Vinge describes a group like this, who are the villains, as it happens. Another example would be a horribly lethal disease created in a future rogue high school biology lab.

    These localized disruptive innovations could make the world a very uncomfortable place during the transition. However, a much larger population of networked hybrid systems with local computing power and high bandwidth connections will be able to move faster and farther than such a small group. Furthermore, as we’ve increasingly been seeing, in a highly networked world, it gets very hard to hide anything substantial, and our emergency response keeps getting faster.

    How will normal humans experience the singularity?

    Humans who are not part of a hybrid system (and probably there will be lots of them) will gradually lose their ability to understand what’s going on. It will be as though their horizon shrinks, or perhaps as though their conceptual blind spots expand to cover more of their world. They will find larger and larger parts of the world just incomprehensible. This could produce a lot of social stress. Maybe it already does – for example, this could be a contributing factor to the anxiety that gets exploited by “stop the world, I want to get off” movements.

    We don’t experience our blind spot as a hole in our visual field because our brain fills it in. I think this is a good analogy for how we deal with conceptual blind spots – we fill them in with myths, and we can’t tell (unless we do careful analysis) where our understanding ends and the myths begin. So non-hybridized humans mostly won’t be aware that a lot of the world is disappearing into their blind spots; they will spontaneously generate myths to “wrap” the incomprehensible parts of their world. Some will defend the validity of those myths very aggressively, others will accept them as “just stories”. Again this is pretty consistent with the rise of retrograde movements like intelligent design and modern geocentrism.

    This phenomenon is closely related to Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” “Sufficiently advanced” just means “beyond my understanding”.

    A phase transition

    Putting this all together, I now think that we actually could end up with a fairly sharp discontinuity that plays the role of the singularity, but is spread out in time and space.

    This could happen if the transition through the singularity is effectively a phase transition. A metaphor may help. If we put a pot of water on the stove and turn on the heat, for a while all the water heats up, but not uniformly – we get all sorts of inhomogeneity and interesting dynamics. At some point, local phase transitions occur – little bubbles of water vapor start forming and then collapsing. As the water continues to heat up, the bubbles become more persistent, until we’ve reached a rolling boil. After a while, all the water has turned into vapor, and there’s no more liquid in the pot.

    We’re now at the point where bubbles of hybrid systems (such as “gelled” development teams) can form, but they aren’t all that stable or powerful yet, and so they aren’t dramatically different from their social environment. Their phase boundary isn’t very sharp.

    As we go forward and these bubbles get easier to form, more powerful and more stable, the overall social environment will be increasingly roiled up by their activities. As the bubbles merge to form a large hybrid network, the contrast between people who are part of hybrid systems and normal people will become starker.

    Unlike the pot that boils dry, I’d expect the two phases—normal people and hybrid systems—to come to an approximate equilibrium, in which parts of the population choose to stay normal indefinitely. The Amish today are a good example of how a group can make that choice. Note that members of both populations will cross the phase boundary, just as water molecules are constantly in flux across phase boundaries. Amish children are expected to go out and explore the larger culture, and decide whether to return. I presume that in some cases, members of the outside culture also decide to join the Amish, perhaps through marriage.

    If this is correct, as the singularity approaches, the “veil” between us and the future will become more opaque for normal people, and at the same time will shift from a “time-like” to a “space-like” boundary. In other words, the singularity currently falls between our present and our future, but will increasingly fall between normal humans and hybrid systems living at the same time. Hybrid systems will be able to “see into” normal human communities – in fact they’ll be able to understand them far more accurately than we can now understand ourselves – but normal humans will find hybrid communities opaque. Of course polite hybrids will present a quasi-normal surface to normal humans except in times of great stress.

    By analogy with other kinds of phase changes, the distance we can see into the future will shrink as we go through the transition, but once we start to move toward a new equilibrium, our horizons will expand again, and we (that is, hybrid systems) may even be able to see much further ahead than we can today. Even normal people may be able to see further ahead (within their bubbles), as long as the equilibrium is stable. The Amish can see further ahead in their own world than we can in ours, because they have decided that their way of life will change slowly.

    I’d love to read stories set in this kind of future world. I think the view of a normal human, watching their child go off into the hybrid world, and wondering if they will return, would be a great topic. The few stories that touch on this kind of transition are very poignant: for example Tunç Blumenthal’s brief story in Vinge’s Across Realtime, and the story of Glenn Tropile’s merger into the Snowflake, and then severance from it, in Pohl and Kornbluth’s Wolfbane.

    Ego, enforcement costs, and the unitary executive

    Preparing the ground

    This is the first of a few posts on this topic. Here I ask the motivating question, and review evidence on human (and animal) consciousness that will help to answer it.

    Jerry Fodor, with his usual brilliant phrasing of bad ideas, sums up the issue:

    If… there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me.
    In Critical Condition

    More generally, human beings seem to feel that any time there is a population working together, somebody has to be in charge — and it had better be “one of us”.

    But this is at best a debatable proposition, and it sometimes has very bad consequences. Furthermore, as I will discuss in subsequent posts, it acts as a barrier to population thinking. Why do we hold to it so strongly, and often (like Fodor) treat it as an axiom in philosophical, social, or political reasoning? And more practically, how can we adopt a more sensible stance?

    I believe we can start to answer the first question by looking at the function of consciousness.

    Bernard Baars, in his A Cognitive Theory of Consciousness, proposes a functional account of consciousness: an animal needs to be able to make coherent responses to potential threats or opportunities, and more generally to act as a coherent whole. Uncoordinated behavior is likely to be ineffective or even damaging. Consciousness provides a common frame that multiple sensory-motor systems can use to integrate their responses; most importantly, it helps the animal to generate appropriate coordinated behavior in novel situations.

    Libet’s experiments throw interesting additional light on the role of consciousness. Summarizing somewhat brutally, Libet found that our conscious awareness lags our perceptual stimulation, response, and even voluntary decisions by 200-500 milliseconds (1/5 to 1/2 second). Sometimes responses triggered by conscious awareness can intervene in an ongoing process and stop it, but in other cases the conscious awareness comes too late. We have probably all had the experience of seeing a glass tip over or fall off a table and being unable to get our body to move quickly enough to avert the mess, even though it seemed like there was enough time. The problem, of course, is that “we” (i.e. our conscious awareness) didn’t “get the news” until it was too late to do anything — and our sensory-motor subsystems weren’t ready to respond pre-consciously to that class of event because we weren’t in the right (conscious) “frame of mind” to prepare them.

    All of this is consistent with Baars’s hypothesis. If consciousness mainly provides a “frame of mind” that helps (fairly autonomous) subsystems coordinate their activities, then it can still work fine even if it tracks events with some delay. Routine activities can proceed with minimal conscious awareness. Drastic interruptions or significant violations of expectations will generate an orienting response that pushes conscious awareness to the forefront.

    This is a good story and may well be true — the empirical jury is still deliberating, though research continues to generate supporting evidence, making a favorable verdict increasingly likely.

    But this story doesn’t account for the rhetorical verve and intensity of Fodor’s comment, or more generally our passionate attachment to feelings of comprehensive self-awareness and self-control. Why do we typically, like Fodor, feel uncomfortable with the possibility of being a community of subsystems, often loosely coordinated, with our conscious self acting as an intermittent framing mechanism for their activities? Surely this is a better description of our ordinary experience than vivid, comprehensive internal perception of a luminous self?

    I’ll take up these questions in the next post of this series.

    Language and populations

    After posting my ignorant non-argument about philosophical theories of reference, I was happy to see a post on The Use Theory of Meaning at The Leiter Reports that seemed highly relevant, with abundant comments. Alas, on reading it I found that it was unhelpful in two ways. First, generally the arguments seemed to assume as background that the meaning of linguistic expressions was universal (much more on this below). Second, the discussion was obviously confused – by which I mean that participants disagreed about the meaning of the terms they used, how others paraphrased their positions, etc.

    If the first problem is correct, then the whole discussion is fairly pointless. Furthermore I think the first problem creates the conditions for the second, because an assumption of universal meanings for expressions is so far from any actual situation in language that an attempt to base theories on it is likely to lead to chaos.

    Here is a clear example of this assumption of “universal meaning”: William Lycan in Some objections to a simple ‘Use’ theory of meaning says “[A rule for the meaning of a name must be] a rule that every competent speaker of your local dialect actually obeys without exception, because it is supposed to constitute the public linguistic meaning of the name.” “A rule that every competent speaker… obeys” is universal in just the sense I mean.

    Now, this simply isn’t an accurate way to look at how people actually use language. I hope any readers can see this if they think about some examples of creating and understanding expressions, but I’m not going to argue for it now – maybe in another post. I can imagine all sorts of responses: Chomsky’s competence stance, claims that we have to talk that way to have a meaningful (!) or useful (!) theory, statements that it is some sort of harmless idealization (of a different sort from competence), etc. However given the messes in the philosophy of language now which are (in my opinion) largely due to this background assumption, and the concrete results in linguistics and machine learning that show we can get along just fine without it, I reject any such claim. Again, I’m not going to try to substantiate these bald claims right now – but I’m confident I can, and the Steels paper in the earlier post is a good example.

    As my earlier post says, what we actually have is a population. To take the story further, each member has dispositions (rules if you will) about how to use a term, how to compose terms to create more complex meanings, or decompose expressions to recover their meanings, etc. But the dispositions of each member of the population will in general be different in all sorts of ways from those of other members. There is no requirement that these dispositions be completely describable, any more than your disposition to shape your hand as you reach for a cup is completely describable – though they might be remarkably consistent in some ways. As a result, no matter how narrowly we define the circumstances, two members of the population will quite likely differ in some details of their use of expressions in those circumstances.

    Even with no total agreement in any particular, language works because (again as mentioned in the earlier post) people can resort to context and can create more context through interaction while trying to understand or make themselves understood. This resort prompts us to adjust our usage dispositions over time to bring them closer together, when we find such adjustment helpful and not too difficult. However it also implies the meaning of any given expression may depend in an unbounded way on its context.
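The Steels work mentioned earlier boils down to surprisingly little machinery. In this stripped-down naming game (a standard minimal variant, not Steels's exact model), agents start with no shared vocabulary and entirely different "dispositions", yet purely local interactions — with no global coordination and no universal rule — drive the population to a shared convention.

```python
import random

random.seed(3)

N_AGENTS = 30
agents = [[] for _ in range(N_AGENTS)]   # each agent's word list for one object
next_word = 0

def play_round():
    global next_word
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not agents[speaker]:
        agents[speaker].append(f"w{next_word}")   # invent a new word
        next_word += 1
    word = random.choice(agents[speaker])
    if word in agents[hearer]:
        agents[speaker] = [word]                  # success: both align on it
        agents[hearer] = [word]
    else:
        agents[hearer].append(word)               # failure: hearer learns it

for _ in range(5000):
    play_round()

distinct = {w for a in agents for w in a}
print(len(distinct))   # typically 1: the population has converged
```

Note that nothing in the dynamics requires any agent’s dispositions to match any other’s exactly at any point before convergence — agreement is an emergent, statistical property of the population, which is just the picture of meaning argued for above.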

    I’ll end this with comments on two related issues. First, even apparently consonant ideas, such as Wittgenstein’s “family resemblances”, typically embed the background “universal meaning” assumption. In Wittgenstein’s metaphor the word “game” refers to a particular family, held together only by those resemblances – but the family is treated as a universally accepted meaning for the term, albeit not conveniently delimited by necessary and sufficient conditions. My use of overlapping (and largely consonant) dispositions is not equivalent to this, as I hope is obvious, perhaps with a little thought. However of course overlapping dispositions can easily give rise to meanings that fit Wittgenstein’s “family resemblances”, and the relationship between two different speakers’ usage dispositions for a given term should perhaps be seen as a family resemblance.

    Second, such things as Gettier problems and difficulties with vagueness seem to me to arise quite directly from this assumption of universal meaning. Given the context dependence of meaning in my proposed (very fundamental) sense, it is not surprising that unusual contexts induce incoherence in our intuitions about meaning. The interpretation of our claims that we’ve seen a barn will depend on whether the listener knows there are lots of fake barns about (and knows that we know or don’t know). A population with varying dispositions about the boundaries of Everest will produce something very like supervaluation, and our actual use of language will take that into account. And so forth.

    What’s wrong with Stephen Turner’s Social Theory of Practices

    In The Social Theory of Practices: Tradition, Tacit Knowledge, and Presuppositions (1994), Stephen Turner mounts a sophisticated attack on the idea of “social practices” as some kind of supra-individual entities. (I will call these “coarse grained entities” below, to avoid the value-laden implications of “higher level” or similar locutions.) This attack is part of a broad effort to support methodological individualism and to attack any theory or evidence that contradicts it.

    This problem is important in its own right, but it gains additional significance in the context of population thinking. If “only the population is real”, should we then regard claims about coarse grained entities as fictions? Of course my answer is “no”, but that answer requires careful analysis. We want to develop a firm distinction that allows us to adopt a realist ontology for these coarse grained entities, while rejecting any treatment of them as abstract entities that somehow exist independent of the population.

    Turner’s book is worth a response because it is a relatively clear and thoughtful statement of the argument against supra-individual entities. Analyzing Turner can help us figure out why he, and methodological individualists in general, are wrong; it can also bring into clearer focus the nature, dynamics, and importance of coarse grained entities.

    More below the fold

    Population thinking and institutions – a quick sketch

    Here are a few ideas about how population thinking applies to various institutions, to be elaborated in some future posting:

    Yochai Benkler’s peer-production arguments are based on population thinking. He explores the implications of recruiting project members from large populations, where the potential members have substantial variation in “fit” to the project.

    Eric Raymond’s reasoning in “The Cathedral and the Bazaar” is based on population thinking in ways similar to Benkler’s (not surprisingly). “Given enough eyeballs, all bugs are shallow” only makes sense in the context of a population with the appropriate variation in debugging skills. The whole “bazaar” idea is basically a population-based approach, while “cathedrals” (in Raymond’s usage) very much attempt to approximate ideal types. (Actual cathedral construction was probably much more population-based.)
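    The population reading of Raymond’s slogan can be sketched numerically: if reviewers vary in their individual chance of spotting a given bug, the probability that at least one of many reviewers finds it climbs toward 1 even when every individual probability is small. The specific probabilities below are invented; only the structure of the argument matters.

```python
import random

def p_bug_found(per_reviewer_p, n_reviewers):
    """Probability that at least one of the first n independent
    reviewers spots the bug, given each reviewer's individual chance."""
    miss = 1.0
    for p in per_reviewer_p[:n_reviewers]:
        miss *= (1.0 - p)
    return 1.0 - miss

# A population whose members vary widely in debugging skill for this
# particular bug (all probabilities invented for illustration).
random.seed(0)
population = [random.uniform(0.01, 0.3) for _ in range(1000)]

for n in (1, 10, 100, 1000):
    print(n, round(p_bug_found(population, n), 4))
```

    The point is that “shallow” is a property of the population, not of any reviewer: no single member need be likely to find the bug for the bug to be almost certain to be found.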

    Large open source projects are decisively population-oriented. Considering these projects brings to the surface a central problem in applying population thinking to institutions. At any given time much of the structure and activity is defined by a small number of participants, and it is hard to regard them as a “population” – their individual characteristics and their relationships have a big influence on the project. Over a somewhat longer time scale, however, the populations (of users, contributors, partners, etc.) in which they are embedded have a major and often decisive influence on the evolution of the project. Often the core group is replaced over time (usually voluntarily), and the replacement process depends on the larger populations as much as on the prior structure of the core group. (I’m talking here as though there were a clear boundary around the core group, but of course that is importantly incorrect. The core group merges into the population, and individuals can migrate smoothly in and out without necessarily crossing any clear-cut boundaries.)

    Large companies should probably be understood (and managed) largely in terms of population thinking. Legal frameworks impose a significant amount of typological structure on companies (corporate forms, tax rules, accounting rules, etc.). However, when a company of more than a few people succeeds, it is because of population dynamics – the dynamics of the internal population structure, and the dynamics of the external populations of potential partner companies and/or individual customers.

    The only workable view of science that I know of is population based. (This is often called “evolutionary epistemology”, though a lot of work remains to be done to figure out why science generates knowledge so much better than other institutions, even given the evolutionary framework.) Previous views, and most current views, attempt to find “rules” for scientific activity that guarantee (an approximation to) “truth”. This is obviously typological thinking, and more importantly it is clearly wrong: the “rule governed” approach can’t be made to work either retrospectively (accounting for the real history of science) or prospectively (giving scientists recipes for how to do their job).

    Operations like eBay depend critically on population thinking. eBay is a place for populations to coordinate. (This is less true of Amazon, though its book reviews and similar features are population oriented.)

    Duncan Foley has shown that population models of economic activity (using thermodynamic methods) can produce economic equilibria that fit real economic data better than the standard neo-classical mechanisms (which are based on deterministic auction processes). Furthermore, the equilibria produced by these population models are much more robust under random perturbation than neo-classical equilibria. The neo-classical account of convergence to equilibrium also depends on absurd assumptions about the rationality of the participants and the information processing capacity of the market mechanisms; the assumptions required for a population approach are entirely reasonable.
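    Foley’s actual models are beyond a blog sketch, but the flavor of a thermodynamic, population-level equilibrium can be seen in a standard toy model from the “statistical mechanics of money” literature (related in spirit, though not due to Foley): agents meet in random pairs and re-split their combined money at a random fraction. No agent optimizes anything, yet the population settles into a stable, highly unequal, roughly exponential distribution. All parameters below are arbitrary.

```python
import random

random.seed(42)

# Toy statistical-equilibrium model: random pairwise exchanges that
# conserve total money.  No rationality assumptions at all - each
# meeting just re-splits the pair's combined holdings at a uniform
# random fraction.
N, STEPS = 1000, 200_000
money = [100.0] * N  # everyone starts equal

for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    pot = money[i] + money[j]
    share = random.random()
    money[i], money[j] = share * pot, (1 - share) * pot

# Total money is conserved, but the distribution is now far from equal:
# many poor agents, a few rich ones (approximately exponential).
money.sort()
print(round(sum(money), 1))    # total is still 100 * N
print(round(money[N // 2], 1)) # median falls well below the mean of 100
```

    The equilibrium here is a property of the population distribution, not of any individual trajectory – which is exactly the contrast with neo-classical convergence stories drawn in the paragraph above.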
